Science.gov

Sample records for algorithm originally proposed

  1. Cliftonite in meteorites: A proposed origin

    USGS Publications Warehouse

    Brett, R.; Higgins, G.T.

    1967-01-01

    Cliftonite, a polycrystalline aggregate of graphite with cubic morphology, is known in ten meteorites. Some workers have considered it to be a pseudomorph after diamond, and have used the proposed diamond ancestry as evidence of a meteoritic parent body of at least lunar dimensions. We have synthesized cliftonite in Fe-Ni-C alloys in vacuum, as a product of decomposition of cohenite [(Fe,Ni)3C]. We therefore suggest that a high pressure origin is unnecessary for meteorites which contain cliftonite, and that these meteorites were formed at low pressures. This conclusion is in agreement with other recent evidence.

  2. Cohenite: its occurrence and a proposed origin

    USGS Publications Warehouse

    Brett, R.

    1967-01-01

    Cohenite is found almost exclusively in meteorites containing from 6 to 8 wt.% Ni. On the basis of phase diagrams and kinetic data it is proposed that cohenite cannot form in meteorites having more than 8 wt.% Ni and that any cohenite which formed in meteorites having Ni content lower than 6 wt.% decomposed during cooling. A series of isothermal sections for the system Fe-Ni-C has been constructed between 750 and 600 °C from published information on the three constituent binary systems. The diagrams indicate that the presence of a few tenths of a per cent carbon in a Ni-Fe alloy may reduce the temperature at which kamacite separates from taenite by more than 50 °C. Hence C in iron meteorites may be partly responsible for the postulated supercooled nucleation of kamacite in meteorites proposed by recent authors. Cohenite found in meteorites probably formed over the temperature range 650-610 °C. For compositions approximating those of metallic meteorites, the greater the C or Ni content of the alloy, the lower the temperature of formation of cohenite. The presence of cohenite in meteorites indicates neither high nor low pressures of formation. However, the absence of cohenite in meteorites containing the assemblage metal + graphite requires low pressures during cooling. Such meteorites therefore cooled in parent bodies of asteroidal size, or near the surface of large bodies. © 1967.

  3. [Algorithm for percutaneous origin of irreversible icterus].

    PubMed

    Marković, Z; Milićević, M; Masulović, D; Saranović, Dj; Stojanović, V; Marković, B; Kovacević, S

    2007-01-01

    This is a retrospective analysis of all percutaneous biliary drainage types used in 600 patients with obstructive icterus over the last 10 years. The procedure technique is analysed. A positive therapeutic result was obtained in about 75% of cases. The most frequent complications are presented. The most appropriate percutaneous derivation algorithm is discussed. As the initial method, the use of external-internal derivation is suggested which, depending on the procedure, is continued by internal derivation with a catheter endoprosthesis or metallic stent. The use of covered metallic stents is suggested as the method of choice when a metallic endoprosthesis is applied.

  4. Multi-spectral image enhancement algorithm based on keeping original gray level

    NASA Astrophysics Data System (ADS)

    Wang, Tian; Xu, Linli; Yang, Weiping

    2016-11-01

    The characteristics of a multi-spectral imaging system and image enhancement algorithms are introduced. Because histogram equalization and some other enhancement methods change the original gray level, a new image enhancement algorithm is proposed that maintains the gray level. For this paper, we chose 6 narrow-band multi-spectral images for comparison; the experimental results show that the proposed method performs better than histogram equalization and other algorithms on multi-spectral images. It also ensures that the histogram information contained in the original features is preserved and guarantees that data class information is used. Moreover, combining subjective and objective sharpness evaluation, details of the images are enhanced and noise is weakened.
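
    The abstract does not give the exact formulation, so the following is only a minimal sketch of one way an enhancement step can boost local detail while restoring the original gray-level range and mean; it assumes NumPy and SciPy, and the function name and parameters are illustrative, not the authors' method.

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def enhance_keep_gray(img, amount=1.5, sigma=3.0):
          """Unsharp-mask style detail boost whose output is mapped back to the
          original gray-level range and mean (illustrative sketch only)."""
          img = img.astype(float)
          detail = img - gaussian_filter(img, sigma)   # high-frequency content
          out = img + amount * detail                  # enhance the detail
          out = np.clip(out, img.min(), img.max())     # keep the original range
          out += img.mean() - out.mean()               # keep the global gray level
          return np.clip(out, img.min(), img.max())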

  5. Proposal of an Algorithm to Synthesize Music Suitable for Dance

    NASA Astrophysics Data System (ADS)

    Morioka, Hirofumi; Nakatani, Mie; Nishida, Shogo

    This paper proposes an algorithm for synthesizing music suitable for the emotions in moving pictures. Our goal is to support multi-media content creation: web page design, animation films and so on. Here we adopt a human dance as the moving picture to examine the applicability of our method, because we think a dance image has a high affinity with music. The algorithm is composed of three modules. The first is the module for computing emotions from an input dance image, the second is for computing emotions from music in the database, and the last is for selecting music suitable for the input dance via an interface of emotion.

  6. Computed Tomography Image Origin Identification based on Original Sensor Pattern Noise and 3D Image Reconstruction Algorithm Footprints.

    PubMed

    Duan, Yuping; Bouslimi, Dalel; Yang, Guanyu; Shu, Huazhong; Coatrieux, Gouenou

    2016-06-08

    In this paper, we focus on the "blind" identification of the Computed Tomography (CT) scanner that has produced a CT image. To do so, we propose a set of noise features derived from the image chain acquisition and which can be used as CT-Scanner footprint. Basically, we propose two approaches. The first one aims at identifying a CT-Scanner based on an Original Sensor Pattern Noise (OSPN) that is intrinsic to the X-ray detectors. The second one identifies an acquisition system based on the way this noise is modified by its 3D image reconstruction algorithm. As these reconstruction algorithms are manufacturer dependent and kept secret, our features are used as input to train an SVM based classifier so as to discriminate acquisition systems. Experiments conducted on images issued from 15 different CT-Scanner models of 4 distinct manufacturers demonstrate that our system identifies the origin of one CT image with a detection rate of at least 94% and that it achieves better performance than Sensor Pattern Noise (SPN) based strategy proposed for general public camera devices.
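
    The paper's exact noise features are not reproduced here; the sketch below only shows the general shape of the approach (a denoising residual as a stand-in for sensor/reconstruction noise, summarized into a feature vector and fed to an SVM), assuming NumPy, SciPy and scikit-learn; residual_stats and its statistics are a simplification, not the published feature set.

      import numpy as np
      from scipy.ndimage import median_filter
      from sklearn.svm import SVC

      def residual_stats(ct_slice):
          """Summary statistics of the residual left after a denoising filter;
          such residuals retain detector- and reconstruction-specific noise."""
          x = ct_slice.astype(float)
          r = x - median_filter(x, size=3)
          return [r.mean(), r.std(), np.abs(r).mean(), (r ** 2).mean()]

      def train_scanner_classifier(images, labels):
          """Fit an SVM that maps noise features to a scanner-model label."""
          X = np.array([residual_stats(im) for im in images])
          clf = SVC(kernel="rbf")
          clf.fit(X, labels)
          return clf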

  7. Modified multiscale sample entropy computation of laser speckle contrast images and comparison with the original multiscale entropy algorithm

    NASA Astrophysics Data System (ADS)

    Humeau-Heurtier, Anne; Mahé, Guillaume; Abraham, Pierre

    2015-12-01

    Laser speckle contrast imaging (LSCI) enables a noninvasive monitoring of microvascular perfusion. Some studies have proposed to extract information from LSCI data through their multiscale entropy (MSE). However, for reaching a large range of scales, the original MSE algorithm may require long recordings for reliability. Recently, a novel approach to compute MSE with shorter data sets has been proposed: the short-time MSE (sMSE). Our goal is to apply, for the first time, the sMSE algorithm in LSCI data and to compare results with those given by the original MSE. Moreover, we apply the original MSE algorithm on data of different lengths and compare results with those given by longer recordings. For this purpose, synthetic signals and 192 LSCI regions of interest (ROIs) of different sizes are processed. Our results show that the sMSE algorithm is valid to compute the MSE of LSCI data. Moreover, with time series shorter than those initially proposed, the sMSE and original MSE algorithms give results with no statistical difference from those of the original MSE algorithm with longer data sets. The minimal acceptable length depends on the ROI size. Comparisons of MSE from healthy and pathological subjects can be performed with shorter data sets than those proposed until now.
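
    For readers unfamiliar with MSE, the sketch below shows the standard coarse-graining plus sample-entropy computation that both the original MSE and the sMSE variant build on; it assumes NumPy, uses the common tolerance r = 0.15 times the standard deviation, and is a didactic implementation, not the authors' optimized code.

      import numpy as np

      def coarse_grain(x, scale):
          """Non-overlapping averages of length `scale` (MSE coarse-graining)."""
          n = len(x) // scale
          return x[:n * scale].reshape(n, scale).mean(axis=1)

      def sample_entropy(x, m=2, r=0.15):
          """SampEn(m, r): -ln(A/B) with Chebyshev-distance template matching."""
          x = np.asarray(x, dtype=float)
          tol = r * x.std()
          def matches(mm):
              t = np.array([x[i:i + mm] for i in range(len(x) - mm)])
              d = np.abs(t[:, None, :] - t[None, :, :]).max(axis=2)
              return ((d <= tol).sum() - len(t)) / 2   # exclude self-matches
          B, A = matches(m), matches(m + 1)
          return -np.log(A / B) if A > 0 and B > 0 else np.inf

      def multiscale_entropy(x, max_scale=10):
          return [sample_entropy(coarse_grain(np.asarray(x, float), s))
                  for s in range(1, max_scale + 1)]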

  8. A new proposal concerning the botanical origin of Baltic amber

    PubMed Central

    Wolfe, Alexander P.; Tappert, Ralf; Muehlenbachs, Karlis; Boudreau, Marc; McKellar, Ryan C.; Basinger, James F.; Garrett, Amber

    2009-01-01

    Baltic amber constitutes the largest known deposit of fossil plant resin and the richest repository of fossil insects of any age. Despite a remarkable legacy of archaeological, geochemical and palaeobiological investigation, the botanical origin of this exceptional resource remains controversial. Here, we use taxonomically explicit applications of solid-state Fourier-transform infrared (FTIR) microspectroscopy, coupled with multivariate clustering and palaeobotanical observations, to propose that conifers of the family Sciadopityaceae, closely allied to the sole extant representative, Sciadopitys verticillata, were involved in the genesis of Baltic amber. The fidelity of FTIR-based chemotaxonomic inferences is upheld by modern–fossil comparisons of resins from additional conifer families and genera (Cupressaceae: Metasequoia; Pinaceae: Pinus and Pseudolarix). Our conclusions challenge hypotheses advocating members of either of the families Araucariaceae or Pinaceae as the primary amber-producing trees and correlate favourably with the progressive demise of subtropical forest biomes from northern Europe as palaeotemperatures cooled following the Eocene climate optimum. PMID:19570786

  9. A research proposal on the origin of life.

    PubMed

    de Duve, Christian

    2003-12-01

    This paper describes some experiments the author would have liked to carry out if he had started earlier in the origin-of-life field. The proposal is preceded by a hypothetical outline of the main events in the origin of life. According to this outline, the emergence of life amounts to the transition between two kinds of chemistry: 1) cosmic chemistry, which is beginning to be understood and most likely provided the building blocks with which life was first constructed; and 2) biochemistry, the well-known set of enzyme-catalyzed metabolic reactions that support all living organisms today and must have supported the universal common ancestor, or LUCA, from which all known forms of life are derived. The pathway leading from one to the other of those two chemistries may be divided into three stages, defined as the pre-RNA, RNA, and protein-DNA stages. A brief summary of the events that may have occurred in these three stages and of the possible underlying mechanisms is given. It is emphasized that these events were chemical in nature and, especially, that they must have prefigured present-day biochemical processes. Protometabolism and metabolism, it is argued, must have been congruent. With congruence as the underlying working hypothesis, three problems open to experimental investigation are considered: 1) the involvement of peptides and other multimers as catalysts of early biogenic chemistry; 2) the participation of thioesters in primitive energy transactions; and 3) the influence of amino acids on the molecular selection of RNA molecules.

  10. Vertigo in childhood: proposal for a diagnostic algorithm based upon clinical experience.

    PubMed

    Casani, A P; Dallan, I; Navari, E; Sellari Franceschini, S; Cerchiai, N

    2015-06-01

    The aim of this paper is to analyse, after clinical experience with a series of patients with established diagnoses and review of the literature, all relevant anamnestic features in order to build a simple diagnostic algorithm for vertigo in childhood. This study is a retrospective chart review. A series of 37 children underwent complete clinical and instrumental vestibular examination. Only neurological disorders or genetic diseases represented exclusion criteria. All diagnoses were reviewed after applying the most recent diagnostic guidelines. In our experience, the most common aetiology for dizziness is vestibular migraine (38%), followed by acute labyrinthitis/neuritis (16%) and somatoform vertigo (16%). Benign paroxysmal vertigo was diagnosed in 4 patients (11%) and paroxysmal torticollis was diagnosed in a 1-year-old child. In 8% (3 patients) of cases, the dizziness had a post-traumatic origin: 1 case of canalolithiasis of the posterior semicircular canal and 2 labyrinthine concussions. Menière's disease was diagnosed in 2 cases. A bilateral vestibular failure of unknown origin caused chronic dizziness in 1 patient. In conclusion, this algorithm could represent a good tool for guiding clinical suspicion to correct diagnostic assessment in dizzy children where no neurological findings are detectable. The algorithm has just a few simple steps, based mainly on two aspects to be investigated early: temporal features of vertigo and presence of hearing impairment. A different algorithm has been proposed for cases in which a traumatic origin is suspected.
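
    As a reading aid only, the two discriminating axes named above (temporal features and hearing impairment) can be phrased as the first branch of a decision sketch; the categories and the 5-minute cut-off below are illustrative, not the authors' validated criteria.

      def vertigo_first_branch(episodic, duration_minutes, hearing_impairment):
          """Illustrative first step of a paediatric vertigo work-up of the kind
          the paper proposes (not the published algorithm itself)."""
          if not episodic:
              return "chronic-dizziness work-up (e.g. bilateral vestibular failure)"
          if hearing_impairment:
              return "audio-vestibular pathway (e.g. labyrinthitis, Meniere's disease)"
          if duration_minutes < 5:
              return "consider benign paroxysmal vertigo; positional testing"
          return "consider vestibular migraine or somatoform vertigo"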

  11. Replication and Comparison of the Newly Proposed ADOS-2, Module 4 Algorithm in ASD without ID: A Multi-Site Study

    ERIC Educational Resources Information Center

    Pugliese, Cara E.; Kenworthy, Lauren; Bal, Vanessa Hus; Wallace, Gregory L.; Yerys, Benjamin E.; Maddox, Brenna B.; White, Susan W.; Popal, Haroon; Armour, Anna Chelsea; Miller, Judith; Herrington, John D.; Schultz, Robert T.; Martin, Alex; Anthony, Laura Gutermuth

    2015-01-01

    Recent updates have been proposed to the Autism Diagnostic Observation Schedule-2 Module 4 diagnostic algorithm. This new algorithm, however, has not yet been validated in an independent sample without intellectual disability (ID). This multi-site study compared the original and revised algorithms in individuals with ASD without ID. The revised…

  12. Cliftonite: A proposed origin, and its bearing on the origin of diamonds in meteorites

    USGS Publications Warehouse

    Brett, R.; Higgins, G.T.

    1969-01-01

    Cliftonite, a polycrystalline aggregate of graphite with spherulitic structure and cubic morphology, is known in 14 meteorites. Some workers have considered it to be a pseudomorph after diamond, and have used the proposed diamond ancestry as evidence of a meteoritic parent body of at least lunar dimensions. Careful examination of meteoritic samples indicates that cliftonite forms by precipitation within kamacite. We have also demonstrated that graphite with cubic morphology may be synthesized in a Fe-Ni-C alloy annealed in a vacuum. We therefore suggest that a high pressure origin is unnecessary for meteorites which contain cliftonite, and that these meteorites were formed at low pressures. This conclusion is in agreement with other recent evidence. We also suggest that recently discovered cubes and cubo-octahedra of lonsdaleite in the Canyon Diablo meteorite are pseudomorphs after cliftonite, not diamond, as has previously been suggested. © 1969.

  13. A proposed Fast algorithm to construct the system matrices for a reduced-order groundwater model

    NASA Astrophysics Data System (ADS)

    Ushijima, Timothy T.; Yeh, William W.-G.

    2017-04-01

    Past research has demonstrated that a reduced-order model (ROM) can be two-to-three orders of magnitude smaller than the original model and run considerably faster with acceptable error. A standard method to construct the system matrices for a ROM is Proper Orthogonal Decomposition (POD), which projects the system matrices from the full model space onto a subspace whose range spans the full model space but has a much smaller dimension than the full model space. This projection can be prohibitively expensive to compute if it must be done repeatedly, as with a Monte Carlo simulation. We propose a Fast Algorithm to reduce the computational burden of constructing the system matrices for a parameterized, reduced-order groundwater model (i.e. one whose parameters are represented by zones or interpolation functions). The proposed algorithm decomposes the expensive system matrix projection into a set of simple scalar-matrix multiplications. This allows the algorithm to efficiently construct the system matrices of a POD reduced-order model at a significantly reduced computational cost compared with the standard projection-based method. The developed algorithm is applied to three test cases for demonstration purposes. The first test case is a small, two-dimensional, zoned-parameter, finite-difference model; the second test case is a small, two-dimensional, interpolated-parameter, finite-difference model; and the third test case is a realistically-scaled, two-dimensional, zoned-parameter, finite-element model. In each case, the algorithm is able to accurately and efficiently construct the system matrices of the reduced-order model.
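
    The "scalar-matrix multiplication" decomposition is easy to state concretely: if the full system matrix depends affinely on the zoned parameters, the expensive POD projections can be done once per zone, and each Monte Carlo realization then costs only scalar multiples of small matrices. A minimal sketch assuming NumPy; the affine form A(theta) = sum_k theta[k] * A_k and the function names are assumptions of this illustration.

      import numpy as np

      def precompute_reduced_blocks(A_blocks, Phi):
          """Expensive projections Phi.T @ A_k @ Phi, computed once per zone."""
          return [Phi.T @ A_k @ Phi for A_k in A_blocks]

      def reduced_system_matrix(theta, reduced_blocks):
          """Cheap per-realization assembly:
          Phi.T @ A(theta) @ Phi = sum_k theta[k] * (Phi.T @ A_k @ Phi)."""
          return sum(t * R for t, R in zip(theta, reduced_blocks))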

  14. A proposal for the origin of the anomalous magnetic moment

    NASA Astrophysics Data System (ADS)

    Novello, M.; Bittencourt, E.

    2014-05-01

    We investigate a new form of contribution for the anomalous magnetic moment of all particles. This common origin is displayed in the framework of a recent treatment of electrodynamics that is based on the introduction of an electromagnetic metric which has no gravitational character. This effective metric constitutes a universal pure electromagnetic process perceived by all bodies, charged or not charged. As a consequence, it yields a complementary explanation for the existence of anomalous magnetic moment for charged particles and even for noncharged ones like neutrinos.

  15. Phlegethon flow: A proposed origin for spicules and coronal heating

    NASA Technical Reports Server (NTRS)

    Schatten, Kenneth H.; Mayr, Hans G.

    1986-01-01

    A model was developed for the mass, energy, and magnetic field transport into the corona. The focus is on the flow below the photosphere which allows the energy to pass into, and be dissipated within, the solar atmosphere. The high flow velocities observed in spicules are explained. A treatment following the work of Bailyn et al. (1985) is examined. It was concluded that within the framework of the model, energy may dissipate at a temperature comparable to the temperature where the waves originated, allowing for an equipartition solution of atmospheric flow, departing the sun at velocities approaching the maximum Alfven speed.

  16. Event-by-event PET image reconstruction using list-mode origin ensembles algorithm

    NASA Astrophysics Data System (ADS)

    Andreyev, Andriy

    2016-03-01

    There is a great demand for real-time or event-by-event (EBE) image reconstruction in emission tomography. Ideally, as soon as an event has been detected by the acquisition electronics, it should be used in the image reconstruction software. This would greatly speed up image reconstruction, since most of the data would be processed and reconstructed while the patient is still undergoing the scan. Unfortunately, the current industry standard is that reconstruction of the image does not start until all the data for the current image frame have been acquired. Implementing EBE reconstruction for the MLEM family of algorithms is possible, but not straightforward, as multiple (computationally expensive) updates to the image estimate are required. In this work an alternative Origin Ensembles (OE) image reconstruction algorithm for PET imaging is converted to EBE mode and it is investigated whether it is a viable alternative for real-time image reconstruction. In the OE algorithm all acquired events are seen as points located somewhere along the corresponding lines of response (LORs), together forming a point cloud. Iteratively, through a multitude of quasi-random shifts guided by the likelihood function, the point cloud converges to a reflection of the actual radiotracer distribution with a degree of accuracy similar to MLEM. New data can be naturally added into the point cloud. Preliminary results with simulated data show little difference between regular reconstruction and EBE mode, proving the feasibility of the proposed approach.
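
    A minimal sketch of one OE sweep in the event-by-event spirit described above, assuming NumPy; the acceptance rule is a simplified Metropolis-style occupancy ratio, not the exact rule of the paper, and lor_samples (candidate origin voxels along each event's LOR) is an assumed input.

      import numpy as np
      rng = np.random.default_rng(0)

      def oe_sweep(event_voxel, lor_samples, counts):
          """Each event proposes a quasi-random shift along its own LOR and the
          move is accepted with a likelihood-ratio-style probability; newly
          detected events can simply be appended to the point cloud."""
          for e, candidates in enumerate(lor_samples):
              old = event_voxel[e]
              new = candidates[rng.integers(len(candidates))]
              if new == old:
                  continue
              accept = min(1.0, (counts[new] + 1) / counts[old])  # occupancy ratio
              if rng.random() < accept:
                  counts[old] -= 1
                  counts[new] += 1
                  event_voxel[e] = new
          return event_voxel, counts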

  17. Validating retinal fundus image analysis algorithms: issues and a proposal.

    PubMed

    Trucco, Emanuele; Ruggeri, Alfredo; Karnowski, Thomas; Giancardo, Luca; Chaum, Edward; Hubschman, Jean Pierre; Al-Diri, Bashir; Cheung, Carol Y; Wong, Damon; Abràmoff, Michael; Lim, Gilbert; Kumar, Dinesh; Burlina, Philippe; Bressler, Neil M; Jelinek, Herbert F; Meriaudeau, Fabrice; Quellec, Gwénolé; Macgillivray, Tom; Dhillon, Bal

    2013-05-01

    This paper concerns the validation of automatic retinal image analysis (ARIA) algorithms. For reasons of space and consistency, we concentrate on the validation of algorithms processing color fundus camera images, currently the largest section of the ARIA literature. We sketch the context (imaging instruments and target tasks) of ARIA validation, summarizing the main image analysis and validation techniques. We then present a list of recommendations focusing on the creation of large repositories of test data created by international consortia, easily accessible via moderated Web sites, including multicenter annotations by multiple experts, specific to clinical tasks, and capable of running submitted software automatically on the data stored, with clear and widely agreed-on performance criteria, to provide a fair comparison.

  18. Evaluation of microscopic hematuria: a critical review and proposed algorithm.

    PubMed

    Niemi, Matthew A; Cohen, Robert A

    2015-07-01

    Microscopic hematuria (MH), often discovered incidentally, has many causes, including benign processes, kidney disease, and genitourinary malignancy. The clinician, therefore, must decide how intensively to investigate the source of MH and select which tests to order and referrals to make, aiming not to overlook serious conditions while simultaneously avoiding unnecessary tests. Existing professional guidelines for the evaluation of MH are largely based on expert opinion and have weak evidence bases. Existing data demonstrate associations between isolated MH and various diseases in certain populations, and these associations serve as the basis for our proposed approach to the evaluation of MH. Various areas of ongoing uncertainty regarding the appropriate evaluation should be the basis for ongoing research.

  19. On the origin of synthetic life: attribution of output to a particular algorithm

    NASA Astrophysics Data System (ADS)

    Yampolskiy, Roman V.

    2017-01-01

    With unprecedented advances in genetic engineering we are starting to see progressively more original examples of synthetic life. As such organisms become more common it is desirable to gain an ability to distinguish between natural and artificial life forms. In this paper, we address this challenge as a generalized version of Darwin’s original problem, which he so brilliantly described in On the Origin of Species. After formalizing the problem of determining the samples’ origin, we demonstrate that the problem is in fact unsolvable. In the general case, if the computational resources of the considered originator algorithms have not been limited and the priors for such algorithms are known to be equal, both explanations are equally likely. Our results should attract the attention of astrobiologists and scientists interested in developing a more complete theory of life, as well as of AI-Safety researchers.
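
    The equal-likelihood conclusion can be read directly off Bayes' rule; the short derivation below is a sketch under the abstract's two stated assumptions (unbounded originator resources, equal priors), with N and A denoting the natural and artificial origin hypotheses for a sample s.

      % With unbounded resources both originators reproduce s equally well,
      % so P(s|N) = P(s|A); with equal priors P(N) = P(A), Bayes' rule gives
      \[
        \frac{P(N \mid s)}{P(A \mid s)}
          = \frac{P(s \mid N)\, P(N)}{P(s \mid A)\, P(A)} = 1 ,
      \]
      % i.e. neither origin hypothesis can be favoured from the sample alone.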

  20. A Two-Stage Algorithm for Origin-Destination Matrices Estimation Considering Dynamic Dispersion Parameter for Route Choice

    PubMed Central

    Wang, Yong; Ma, Xiaolei; Liu, Yong; Gong, Ke; Henrickson, Kristian C.; Xu, Maozeng; Wang, Yinhai

    2016-01-01

    This paper proposes a two-stage algorithm to simultaneously estimate origin-destination (OD) matrix, link choice proportion, and dispersion parameter using partial traffic counts in a congested network. A non-linear optimization model is developed which incorporates a dynamic dispersion parameter, followed by a two-stage algorithm in which Generalized Least Squares (GLS) estimation and a Stochastic User Equilibrium (SUE) assignment model are iteratively applied until convergence is reached. To evaluate the performance of the algorithm, the proposed approach is implemented in a hypothetical network using input data with high error, and tested under a range of variation coefficients. The root mean squared error (RMSE) of the estimated OD demand and link flows are used to evaluate the model estimation results. The results indicate that the estimated dispersion parameter theta is insensitive to the choice of variation coefficients. The proposed approach is shown to outperform two established OD estimation methods and produce parameter estimates that are close to the ground truth. In addition, the proposed approach is applied to an empirical network in Seattle, WA to validate the robustness and practicality of this methodology. In summary, this study proposes and evaluates an innovative computational approach to accurately estimate OD matrices using link-level traffic flow data, and provides useful insight for optimal parameter selection in modeling travelers’ route choice behavior. PMID:26761209
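
    The alternation the abstract describes (SUE assignment yielding link choice proportions and the dispersion parameter, then a GLS fit of the OD matrix to the counts) has a simple skeleton; the sketch below assumes NumPy, and assign_sue / gls_update are placeholders for the paper's models, not published functions.

      import numpy as np

      def two_stage_od(counts, od0, assign_sue, gls_update, tol=1e-4, max_iter=50):
          """Iterate stage 1 (SUE -> proportions P and dispersion theta) and
          stage 2 (GLS update of the OD matrix) until convergence."""
          od, theta = od0.copy(), 1.0
          for _ in range(max_iter):
              P, theta = assign_sue(od, theta)   # stage 1
              od_new = gls_update(counts, P)     # stage 2
              if np.linalg.norm(od_new - od) <= tol * (np.linalg.norm(od) + 1e-12):
                  break
              od = od_new
          return od_new, theta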

  1. A Two-Stage Algorithm for Origin-Destination Matrices Estimation Considering Dynamic Dispersion Parameter for Route Choice.

    PubMed

    Wang, Yong; Ma, Xiaolei; Liu, Yong; Gong, Ke; Henrickson, Kristian C; Xu, Maozeng; Wang, Yinhai

    2016-01-01

    This paper proposes a two-stage algorithm to simultaneously estimate origin-destination (OD) matrix, link choice proportion, and dispersion parameter using partial traffic counts in a congested network. A non-linear optimization model is developed which incorporates a dynamic dispersion parameter, followed by a two-stage algorithm in which Generalized Least Squares (GLS) estimation and a Stochastic User Equilibrium (SUE) assignment model are iteratively applied until convergence is reached. To evaluate the performance of the algorithm, the proposed approach is implemented in a hypothetical network using input data with high error, and tested under a range of variation coefficients. The root mean squared error (RMSE) of the estimated OD demand and link flows are used to evaluate the model estimation results. The results indicate that the estimated dispersion parameter theta is insensitive to the choice of variation coefficients. The proposed approach is shown to outperform two established OD estimation methods and produce parameter estimates that are close to the ground truth. In addition, the proposed approach is applied to an empirical network in Seattle, WA to validate the robustness and practicality of this methodology. In summary, this study proposes and evaluates an innovative computational approach to accurately estimate OD matrices using link-level traffic flow data, and provides useful insight for optimal parameter selection in modeling travelers' route choice behavior.

  2. Lung Cancer Classification Employing Proposed Real Coded Genetic Algorithm Based Radial Basis Function Neural Network Classifier

    PubMed Central

    Deepa, S. N.

    2016-01-01

    A proposed real coded genetic algorithm based radial basis function neural network classifier is employed to perform effective classification of healthy and cancer-affected lung images. The Real Coded Genetic Algorithm (RCGA) is proposed to overcome the Hamming Cliff problem encountered with the Binary Coded Genetic Algorithm (BCGA). A Radial Basis Function Neural Network (RBFNN) classifier is chosen as the classifier model because of its Gaussian kernel function and its effective learning process, which avoids local and global minima problems and enables faster convergence. This paper specifically focuses on tuning the weights and bias of the RBFNN classifier employing the proposed RCGA. The operators used in RCGA enable the algorithm to compute weights and bias values so that a minimum Mean Square Error (MSE) is obtained. With both healthy and cancerous lung images from the Lung Image Database Consortium (LIDC) database and a real-time database, it is noted that the proposed RCGA-based RBFNN classifier has performed effective classification of healthy lung tissues and cancer-affected lung nodules. The classification accuracy computed using the proposed approach is noted to be higher in comparison with that of classifiers proposed earlier in the literature. PMID:28050198
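
    A compact sketch of the core idea (real-valued genomes encoding the RBFNN output weights and bias, evolved by arithmetic crossover and Gaussian mutation to minimize MSE) is given below, assuming NumPy; centers and widths are held fixed for brevity, and all names and operator choices are illustrative rather than the paper's exact RCGA.

      import numpy as np
      rng = np.random.default_rng(1)

      def rbf_forward(X, centers, widths, w, b):
          """Gaussian-kernel RBFNN output for inputs X."""
          d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
          return np.exp(-d2 / (2.0 * widths ** 2)) @ w + b

      def rcga_tune(X, y, centers, widths, pop=30, gens=100, sigma=0.1):
          """Real-coded GA over genomes [w..., b]; real coding sidesteps the
          Hamming-cliff problem of binary-coded GAs."""
          n = centers.shape[0]
          genomes = rng.normal(size=(pop, n + 1))
          mse = lambda g: ((rbf_forward(X, centers, widths, g[:-1], g[-1]) - y) ** 2).mean()
          for _ in range(gens):
              order = np.argsort([mse(g) for g in genomes])
              parents = genomes[order[: pop // 2]]          # truncation selection
              pairs = rng.integers(len(parents), size=(pop, 2))
              alpha = rng.random((pop, 1))                  # arithmetic crossover
              children = alpha * parents[pairs[:, 0]] + (1 - alpha) * parents[pairs[:, 1]]
              genomes = children + rng.normal(scale=sigma, size=children.shape)
              genomes[0] = parents[0]                       # elitism
          return min(genomes, key=mse)                      # best [w..., b]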

  3. Lung Cancer Classification Employing Proposed Real Coded Genetic Algorithm Based Radial Basis Function Neural Network Classifier.

    PubMed

    Selvakumari Jeya, I Jasmine; Deepa, S N

    2016-01-01

    A proposed real coded genetic algorithm based radial basis function neural network classifier is employed to perform effective classification of healthy and cancer-affected lung images. The Real Coded Genetic Algorithm (RCGA) is proposed to overcome the Hamming Cliff problem encountered with the Binary Coded Genetic Algorithm (BCGA). A Radial Basis Function Neural Network (RBFNN) classifier is chosen as the classifier model because of its Gaussian kernel function and its effective learning process, which avoids local and global minima problems and enables faster convergence. This paper specifically focuses on tuning the weights and bias of the RBFNN classifier employing the proposed RCGA. The operators used in RCGA enable the algorithm to compute weights and bias values so that a minimum Mean Square Error (MSE) is obtained. With both healthy and cancerous lung images from the Lung Image Database Consortium (LIDC) database and a real-time database, it is noted that the proposed RCGA-based RBFNN classifier has performed effective classification of healthy lung tissues and cancer-affected lung nodules. The classification accuracy computed using the proposed approach is noted to be higher in comparison with that of classifiers proposed earlier in the literature.

  4. Construction Method of Display Proposal for Commodities in Sales Promotion by Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Yumoto, Masaki

    In a sales promotion task, a wholesaler prepares and presents a display proposal for commodities in order to negotiate with retailers' buyers about which commodities they should sell. To automate sales promotion tasks, the proposal has to be constructed according to the target retailer's buyer. However, it is difficult to construct a proposal suitable for the target retail store because of the huge number of commodity combinations. This paper proposes a construction method based on a Genetic Algorithm (GA). The proposed method represents initial display proposals for commodities as genes, improves them by GA according to an evaluation value, and rearranges the proposal with the highest evaluation value according to the commodity classification. Through a practical experiment, we confirm that the display proposal produced by the proposed method is similar to one constructed by a wholesaler.

  5. Evaluation of Origin Ensemble algorithm for image reconstruction for pixelated solid-state detectors with large number of channels

    NASA Astrophysics Data System (ADS)

    Kolstein, M.; De Lorenzo, G.; Mikhaylova, E.; Chmeissani, M.; Ariño, G.; Calderón, Y.; Ozsahin, I.; Uzun, D.

    2013-04-01

    The Voxel Imaging PET (VIP) Pathfinder project intends to show the advantages of using pixelated solid-state technology for nuclear medicine applications. It proposes designs for Positron Emission Tomography (PET), Positron Emission Mammography (PEM) and Compton gamma camera detectors with a large number of signal channels (of the order of 10^6). For PET scanners, conventional algorithms like Filtered Back-Projection (FBP) and Ordered Subset Expectation Maximization (OSEM) are straightforward to use and give good results. However, FBP presents difficulties for detectors with limited angular coverage like PEM and Compton gamma cameras, whereas OSEM has an impractically large time and memory consumption for a Compton gamma camera with a large number of channels. In this article, the Origin Ensemble (OE) algorithm is evaluated as an alternative algorithm for image reconstruction. Monte Carlo simulations of the PET design are used to compare the performance of OE, FBP and OSEM in terms of the bias, variance and average mean squared error (MSE) image quality metrics. For the PEM and Compton camera designs, results obtained with OE are presented.

  6. Evaluation of Origin Ensemble algorithm for image reconstruction for pixelated solid-state detectors with large number of channels

    PubMed Central

    Kolstein, M.; De Lorenzo, G.; Mikhaylova, E.; Chmeissani, M.; Ariño, G.; Calderón, Y.; Ozsahin, I.; Uzun, D.

    2013-01-01

    The Voxel Imaging PET (VIP) Pathfinder project intends to show the advantages of using pixelated solid-state technology for nuclear medicine applications. It proposes designs for Positron Emission Tomography (PET), Positron Emission Mammography (PEM) and Compton gamma camera detectors with a large number of signal channels (of the order of 10^6). For PET scanners, conventional algorithms like Filtered Back-Projection (FBP) and Ordered Subset Expectation Maximization (OSEM) are straightforward to use and give good results. However, FBP presents difficulties for detectors with limited angular coverage like PEM and Compton gamma cameras, whereas OSEM has an impractically large time and memory consumption for a Compton gamma camera with a large number of channels. In this article, the Origin Ensemble (OE) algorithm is evaluated as an alternative algorithm for image reconstruction. Monte Carlo simulations of the PET design are used to compare the performance of OE, FBP and OSEM in terms of the bias, variance and average mean squared error (MSE) image quality metrics. For the PEM and Compton camera designs, results obtained with OE are presented. PMID:23814604

  7. Evaluation of Origin Ensemble algorithm for image reconstruction for pixelated solid-state detectors with large number of channels.

    PubMed

    Kolstein, M; De Lorenzo, G; Mikhaylova, E; Chmeissani, M; Ariño, G; Calderón, Y; Ozsahin, I; Uzun, D

    2013-04-29

    The Voxel Imaging PET (VIP) Pathfinder project intends to show the advantages of using pixelated solid-state technology for nuclear medicine applications. It proposes designs for Positron Emission Tomography (PET), Positron Emission Mammography (PEM) and Compton gamma camera detectors with a large number of signal channels (of the order of 10^6). For PET scanners, conventional algorithms like Filtered Back-Projection (FBP) and Ordered Subset Expectation Maximization (OSEM) are straightforward to use and give good results. However, FBP presents difficulties for detectors with limited angular coverage like PEM and Compton gamma cameras, whereas OSEM has an impractically large time and memory consumption for a Compton gamma camera with a large number of channels. In this article, the Origin Ensemble (OE) algorithm is evaluated as an alternative algorithm for image reconstruction. Monte Carlo simulations of the PET design are used to compare the performance of OE, FBP and OSEM in terms of the bias, variance and average mean squared error (MSE) image quality metrics. For the PEM and Compton camera designs, results obtained with OE are presented.

  8. Assembled sequence contigs by SOAPdenovo and Velvet algorithms from metagenomic short reads of a new bacterial isolate of gut origin

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Assembled sequence contigs by SOAPdenovo and Velvet algorithms from metagenomic short reads of a new bacterial isolate of gut origin. This study included 2 submissions with a total of 9.8 million bp of assembled contigs....

  9. Proposed diagnostic algorithm for patients with suspected mastocytosis: a proposal of the European Competence Network on Mastocytosis.

    PubMed

    Valent, P; Escribano, L; Broesby-Olsen, S; Hartmann, K; Grattan, C; Brockow, K; Niedoszytko, M; Nedoszytko, B; Oude Elberink, J N G; Kristensen, T; Butterfield, J H; Triggiani, M; Alvarez-Twose, I; Reiter, A; Sperr, W R; Sotlar, K; Yavuz, S; Kluin-Nelemans, H C; Hermine, O; Radia, D; van Doormaal, J J; Gotlib, J; Orfao, A; Siebenhaar, F; Schwartz, L B; Castells, M; Maurer, M; Horny, H-P; Akin, C; Metcalfe, D D; Arock, M

    2014-10-01

    Mastocytosis is an emerging differential diagnosis in patients with more or less specific mediator-related symptoms. In some of these patients, typical skin lesions are found and the diagnosis of mastocytosis can be established. In other cases, however, skin lesions are absent, which represents a diagnostic challenge. In the light of this unmet need, we developed a diagnostic algorithm for patients with suspected mastocytosis. In adult patients with typical lesions of mastocytosis in the skin, a bone marrow (BM) biopsy should be considered, regardless of the basal serum tryptase concentration. In adults without skin lesions who suffer from mediator-related or other typical symptoms, the basal tryptase level is an important parameter. In those with a slightly increased tryptase level, additional investigations, including a sensitive KIT mutation analysis of blood leucocytes or measurement of urinary histamine metabolites, may be helpful. In adult patients in whom (i) KIT D816V is detected and/or (ii) the basal serum tryptase level is clearly increased (>25-30 ng/ml) and/or (iii) other clinical or laboratory features suggest the presence of 'occult' mastocytosis or another haematologic neoplasm, a BM investigation is recommended. In the absence of KIT D816V and other signs or symptoms of mastocytosis or another haematopoietic disease, no BM investigation is required, but the clinical course and tryptase levels are monitored in the follow-up. In paediatric patients, a BM investigation is usually not required, even if the tryptase level is increased. Although validation is required, it can be expected that the algorithm proposed herein will facilitate the management of patients with suspected mastocytosis and help avoid unnecessary referrals and investigations.
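
    The adult branch of this algorithm is concrete enough to restate as a short decision sketch; the cut-off below uses the lower end of the 25-30 ng/ml range quoted above, and the function (a plain Python illustration) is not the network's published tool and is no substitute for it.

      def suspected_mastocytosis_next_step(adult, skin_lesions,
                                           tryptase_ng_ml, kit_d816v,
                                           other_suggestive_findings):
          """Condensed restatement of the branching described in the abstract."""
          if not adult:
              return "BM investigation usually not required, even if tryptase is raised"
          if skin_lesions:
              return "consider BM biopsy regardless of basal tryptase"
          if kit_d816v or tryptase_ng_ml > 25 or other_suggestive_findings:
              return "BM investigation recommended"
          return "no BM investigation; monitor clinical course and tryptase"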

  10. Replication and Comparison of the Newly Proposed ADOS-2, Module 4 Algorithm in ASD without ID: A Multi-site Study

    PubMed Central

    Pugliese, Cara E.; Kenworthy, Lauren; Bal, Vanessa Hus; Wallace, Gregory L; Yerys, Benjamin E; Maddox, Brenna B.; White, Susan W.; Popal, Haroon; Armour, Anna Chelsea; Miller, Judith; Herrington, John D.; Schultz, Robert T.; Martin, Alex; Anthony, Laura Gutermuth

    2015-01-01

    Recent updates have been proposed to the Autism Diagnostic Observation Schedule-2 Module 4 diagnostic algorithm. This new algorithm, however, has not yet been validated in an independent sample without intellectual disability (ID). This multi-site study compared the original and revised algorithms in individuals with ASD without ID. The revised algorithm demonstrated increased sensitivity, but lower specificity in the overall sample. Estimates were highest for females, individuals with a verbal IQ below 85 or above 115, and ages 16 and older. Best practice diagnostic procedures should include the Module 4 in conjunction with other assessment tools. Balancing needs for sensitivity and specificity depending on the purpose of assessment (e.g., clinical vs. research) and demographic characteristics mentioned above will enhance its utility. PMID:26385796

  11. A kinetic model-based algorithm to classify NGS short reads by their allele origin.

    PubMed

    Marinoni, Andrea; Rizzo, Ettore; Limongelli, Ivan; Gamba, Paolo; Bellazzi, Riccardo

    2015-02-01

    Genotyping Next Generation Sequencing (NGS) data of a diploid genome aims to assign the zygosity of identified variants through comparison with a reference genome. Current methods typically employ probabilistic models that rely on the pileup of bases at each locus and on a priori knowledge. We present a new algorithm, called Kimimila (KInetic Modeling based on InforMation theory to Infer Labels of Alleles), which is able to assign reads to alleles by using a distance geometry approach and to infer the variant genotypes accurately, without any kind of assumption. The performance of the model has been assessed on simulated and real data of the 1000 Genomes Project and the results have been compared with several commonly used genotyping methods, i.e., GATK, Samtools, VarScan, FreeBayes and Atlas2. Although our algorithm does not make use of a priori knowledge, the percentage of correctly genotyped variants is comparable to that of these algorithms. Furthermore, our method allows the user to split the reads pool depending on the inferred allele origin.

  12. Locally linear manifold model for gap-filling algorithms of hyperspectral imagery: Proposed algorithms and a comparative study

    NASA Astrophysics Data System (ADS)

    Suliman, Suha Ibrahim

    The Landsat 7 Enhanced Thematic Mapper Plus (ETM+) Scan Line Corrector (SLC) device, which corrects for the satellite motion, has been failed since May 2003, resulting in a loss of about 22% of the data. To improve the reconstruction of Landsat 7 SLC-off images, a Locally Linear Manifold (LLM) model is proposed for filling gaps in hyperspectral imagery. In this approach, each spectral band is modeled as a non-linear locally affine manifold that can be learned from the matching bands at different time instances. Moreover, each band is divided into small overlapping spatial patches. In particular, each patch is considered to be a linear combination (approximately on an affine space) of a set of corresponding patches from the same location that are adjacent in time or from the same season of the year. Fill patches are selected from Landsat 5 Thematic Mapper (TM) products of the years 1984 through 2011, which have spatial and radiometric resolution similar to that of Landsat 7 products. Using this approach, the gap-filling process involves finding a feasible point on the learned manifold to approximate the missing pixels. The proposed LLM framework is compared to some existing single-source (Average and Inverse Distance Weight (IDW)) and multi-source (Local Linear Histogram Matching (LLHM) and Adaptive Window Linear Histogram Matching (AWLHM)) gap-filling methodologies. We analyze the effectiveness of the proposed LLM approach through simulation examples with known ground-truth. It is shown that the LLM-model driven approach outperforms all existing recovery methods considered in this study. The superiority of LLM is illustrated by providing better reconstructed images with higher accuracy even over heterogeneous landscape. Moreover, it is relatively simple to realize algorithmically, and it needs much less computing time when compared to the state-of-the-art AWLHM approach.
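
    The patch model above (a target patch approximated by an affine combination of corresponding patches, i.e. weights summing to one) reduces to a small equality-constrained least-squares problem; a minimal sketch assuming NumPy, in which the matrix/vector names and helper functions are illustrative.

      import numpy as np

      def affine_fill_weights(P, q):
          """Solve min ||P w - q||^2 subject to sum(w) = 1 via the KKT system.
          P: (n_known_pixels, n_patches) known pixels of the candidate patches;
          q: (n_known_pixels,) known pixels of the gappy target patch."""
          n = P.shape[1]
          K = np.zeros((n + 1, n + 1))
          K[:n, :n] = P.T @ P
          K[:n, n] = K[n, :n] = 1.0        # the affine (sum-to-one) constraint
          rhs = np.append(P.T @ q, 1.0)
          return np.linalg.solve(K, rhs)[:n]

      def fill_gap(candidate_pixels_at_gap, w):
          """Missing pixels = the same affine combination of the candidates'
          pixels at the gap locations."""
          return candidate_pixels_at_gap @ w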

  13. Flap reconstruction of the knee: A review of current concepts and a proposed algorithm

    PubMed Central

    Gravvanis, Andreas; Kyriakopoulos, Antonios; Kateros, Konstantinos; Tsoutsos, Dimosthenis

    2014-01-01

    A literature search focusing on flap knee reconstruction revealed much controversy regarding the optimal management of around the knee defects. Muscle flaps are the preferred option, mainly in infected wounds. Perforator flaps have recently been introduced in knee coverage with significant advantages due to low donor morbidity and long pedicles with wide arc of rotation. In the case of free flap the choice of recipient vessels is the key point to the reconstruction. Taking the published experience into account, a reconstructive algorithm is proposed according to the size and location of the wound, the presence of infection and/or 3-dimensional defect. PMID:25405089

  14. Macular pigmentation of uncertain aetiology revisited: two case reports and a proposed algorithm for clinical classification.

    PubMed

    Chandran, Veena; Kumarasinghe, Sujith Prasad

    2017-02-01

    Ashy dermatosis, erythema dyschromicum perstans, lichen planus pigmentosus and idiopathic eruptive macular pigmentation are various types of acquired macular hyperpigmentation disorders of the skin described in literature. However, a global consensus on the definitions of these entities is lacking. We report two cases of acquired macular (hyper)pigmentation of uncertain aetiology diagnosed as ashy dermatosis and attempt to clarify the various confusing nosologies based on existing literature. We infer that acquired small and large macular pigmentation of uncertain aetiology should be considered separate from that associated with lichen planus. We also propose a diagnostic algorithm for patients with acquired macular hyperpigmentation.

  15. Bilateral Simultaneous Tubal Ectopic Pregnancy: A Case Report, Review of Literature and a Proposed Management Algorithm

    PubMed Central

    Jena, Saubhagya Kumar; Nayak, Monalisha; Das, Leena; Senapati, Swagatika

    2016-01-01

    Bilateral simultaneous Tubal Ectopic Pregnancy (BTP) is the rarest form of ectopic pregnancy. The incidence is higher in women undergoing assisted reproductive techniques or ovulation induction. The clinical presentation is unpredictable and there are no unique features to distinguish it from unilateral ectopic pregnancy. BTP continues to be a clinician’s dilemma as pre-operative diagnosis is difficult and is commonly made during surgery. Treatment options are varied depending on site of ectopic pregnancy, extent of tubal damage and requirement of future fertility. We report a case of BTP which was diagnosed during surgery and propose an algorithm for management of such patients. PMID:27134950

  16. Carbonado: Physical and chemical properties, a critical evaluation of proposed origins, and a revised genetic model

    NASA Astrophysics Data System (ADS)

    Haggerty, Stephen E.

    2014-03-01

    Carbonado-diamond is the most controversial of all diamond types and is found only in Brazil and the Central African Republic (Bangui). Neither an affinity to Earth's mantle nor an origin in the crust can be unequivocally established. Carbonado-diamond is at least 3.8 Ga old, an age about 0.5 Ga older than the oldest diamonds yet reported in kimberlites and lamproites on Earth. Derived from Neo- to Mid-Proterozoic meta-conglomerates, the primary magmatic host rock has not been identified. Discovered in 1841, the material is polycrystalline, robust and coke-like, and is best described as a strongly bonded micro-diamond ceramic. It is characteristically porous, which precludes an origin at high pressures and high temperatures in Earth's deep interior, yet it is also typically patinated, with a glass-like surface that resembles melting. With exotic inclusions of highly reduced metals, carbides, and nitrides, the origin of carbonado-diamond is made even more challenging. But the challenge is important because a new diamondiferous host rock may be involved, and the development of a new physical process for generating diamond is possibly assured. The combination of micro-crystals and random crystal orientation leads to extreme mechanical toughness, and a predictable super-hardness. The physical and chemical properties of carbonado are described with a view to the development of a mimetic strategy to synthesize carbonado and to duplicate its extreme toughness and super-hardness. Textural variations are described with an emphasis on melt-like surface features, not previously discussed in the literature, but having a very clear bearing on the history and genesis of carbonado. Selected physical properties are presented and the proposed origins, diverse in character and imaginatively novel, are critically reviewed. From our present knowledge of the dynamic Earth, all indications are that carbonado is unlikely to be of terrestrial origin. A revised model for the origin of

  17. Pandora - Discovering the origin of the moons of Mars (a proposed Discovery mission)

    NASA Astrophysics Data System (ADS)

    Raymond, C. A.; Diniega, S.; Prettyman, T. H.

    2015-12-01

    After decades of intensive exploration of Mars, fundamental questions about the origin and evolution of the martian moons, Phobos and Deimos, remain unanswered. Their spectral characteristics are similar to C- or D-class asteroids, suggesting that they may have originated in the asteroid belt or outer solar system. Perhaps these ancient objects were captured separately, or maybe they are the fragments of a captured asteroid disrupted by impact. Various lines of evidence hint at other possibilities: one alternative is co-formation with Mars, in which case the moons contain primitive martian materials. Another is that they are re-accreted ejecta from a giant impact and contain material from the early martian crust. The Pandora mission, proposed in response to the 2014 NASA Discovery Announcement of Opportunity, will acquire new information needed to determine the provenance of the moons of Mars. Pandora will travel to and successively orbit Phobos and Deimos to map their chemical and mineral composition and further refine their shape and gravity. Geochemical data, acquired by nuclear- and infrared-spectroscopy, can distinguish between key origin hypotheses. High resolution imaging data will enable detailed geologic mapping and crater counting to determine the timing of major events and stratigraphy. Data acquired will be used to determine the nature of and relationship between "red" and "blue" units on Phobos, and determine how Phobos and Deimos are related. After identifying material representative of each moons' bulk composition, analysis of the mineralogical and elemental composition of this material will allow discrimination between the formation hypotheses for each moon. The information acquired by Pandora can then be compared with similar data sets for other solar system bodies and from meteorite studies. Understanding the formation of the martian moons within this larger context will yield a better understanding of processes acting in the early solar system.

  18. Surgery in extensive vertebral hemangioma: case report, literature review and a new algorithm proposal.

    PubMed

    Tarantino, Roberto; Donnarumma, Pasquale; Nigro, Lorenzo; Delfini, Roberto

    2015-07-01

    Hemangiomas are benign dysplasias or vascular tumors consisting of vascular spaces lined with endothelium. Nowadays, radiotherapy for vertebral hemangiomas (VHs) is widely accepted as the primary treatment for painful lesions. Nevertheless, the role of surgery is still unclear. The purpose of this study is to propose a novel treatment algorithm for VHs. This is a case report of an extensive VH and a review of the literature. A case is reported of a vertebral fracture occurring during radiotherapy for an extensive VH, at a total dose of 30 Gy given in 10 fractions (treatment time 2 weeks) using a linear accelerator at 15 MV high-energy photons. Using the PubMed database, a review of the literature was performed. The authors have no study funding sources. The authors have no conflicting financial interests. In the literature, good results in terms of pain and neurological deficits are reported. No cases of vertebral fractures are described. However, there is no consensus regarding the treatment of VHs. Radiotherapy is widely utilized for VHs causing pain. Surgery for VHs causing neurological deficit is also widely accepted. However, regarding the extent of the lesion, no indications are given. We consider it important to evaluate the risk of pathologic vertebral fracture before initiating treatment, since in radiotherapy there is no convention regarding the structural changes induced in VHs. We propose a new treatment algorithm. We recommend radiotherapy only for small lesions in which vertebral stability is not compromised. Kyphoplasty can be proposed for asymptomatic patients with small VHs and for patients with painful small VHs without spinal canal invasion. For patients with pain in whom the VH is wide or presents spinal canal invasion, and for patients with neurological deficits, we propose surgery.

  19. Preliminary field trial of a putative research algorithm for diagnosing ICD-11 personality disorders in psychiatric patients: 2. Proposed trait domains.

    PubMed

    Kim, Youl-Ri; Tyrer, Peter; Lee, Hong-Seock; Kim, Sung-Gon; Hwang, Soon-Taek; Lee, Gi Young; Mulder, Roger

    2015-11-01

    This field trial examines the discriminant validity of five trait domains of the originally proposed research algorithm for diagnosing International Classification of Diseases (ICD)-11 personality disorders. The trial was carried out in South Korea, where a total of 124 patients with personality disorder participated in the study. Participants were assessed using the originally proposed monothetic trait domains of the asocial-schizoid, antisocial-dissocial, anxious-dependent, emotionally unstable and anankastic-obsessional groups of the research algorithm in ICD-11. Their assessments were compared with those from the Personality Assessment Schedule interview and the five-factor model (FFM). A total of 48.4% of patients were found to have pathology in two or more domains. In the discriminant analysis, 64.2% of the grouped cases of the originally proposed ICD-11 domains were correctly classified by the five domain categories using the Personality Assessment Schedule, with the highest accuracy in the anankastic-obsessional domain and the lowest accuracy in the emotionally unstable domain. In comparison, the asocial-schizoid, anxious-dependent and emotionally unstable domains were moderately correlated with the FFM, whereas the anankastic-obsessional and antisocial-dissocial domains were not significantly correlated with the FFM. In this field trial, we demonstrated the limited discriminant and convergent validity of the originally proposed trait domains of the research algorithm for diagnosing ICD-11 personality disorder. The results suggest that the anankastic, asocial and dissocial domains show good discrimination, whereas the anxious-dependent and emotionally unstable ones overlap too much and have been subsequently revised.

  20. Proposal for An Algorithm for Screening for Undernutrition in Hospitalized Children.

    PubMed

    Huysentruyt, Koen; De Schepper, Jean; Bontems, Patrick; Alliet, Philippe; Peeters, Ellen; Roelants, Mathieu; Van Biervliet, Stephanie; Hauser, Bruno; Vandenplas, Yvan

    2016-11-01

    The prevalence of disease-related undernutrition in hospitalized children has not decreased significantly in the last decades in Europe. A recent large multicentric European study reported a percentage of underweight children ranging across countries from 4.0% to 9.3%. Nutritional screening has been put forward as a strategy to detect and prevent undernutrition in hospitalized children. It allows timely implementation of adequate nutritional support and prevents further nutritional deterioration of hospitalized children. In this article, a hands-on practical guideline for the implementation of a nutritional care program in hospitalized children is provided. The difference between nutritional status (anthropometry with or without additional technical investigations) at admission and nutritional risk (the risk of the need for a nutritional intervention or the risk for nutritional deterioration during hospital stay) is the focus of this article. Based on the quality control circle principle of Deming, a nutritional care algorithm, with detailed instructions specific for the pediatric population was developed and implementation in daily practice is proposed. Further research is required to prove the applicability and the merit of this algorithm. It can, however, serve as a basis to provide European or even wider guidelines.

  1. Proposal for An Algorithm for Screening for Under-Nutrition in Hospitalized Children.

    PubMed

    Huysentruyt, Koen; De Schepper, Jean; Bontems, Patrick; Alliet, Philippe; Peeters, Ellen; Roelants, Mathieu; Van Biervliet, Stephanie; Hauser, Bruno; Vandenplas, Yvan

    2016-06-14

    The prevalence of disease-related under-nutrition in hospitalized children has not decreased significantly in the last decades in Europe. A recent large multi-centric European study reported a percentage of underweight children ranging across countries from 4.0% to 9.3%. Nutritional screening has been put forward as a strategy to detect and prevent under-nutrition in hospitalized children. It allows timely implementation of adequate nutritional support and prevents further nutritional deterioration of hospitalized children. In this paper, a hands-on practical guideline for the implementation of a nutritional care program in hospitalized children is provided. The difference between nutritional status (anthropometry with or without additional technical investigations) at admission and nutritional risk (the risk of the need for a nutritional intervention or the risk for nutritional deterioration during hospital stay) is the focus of this article. Based on the quality control circle principle of Deming, a nutritional care algorithm, with detailed instructions specific for the pediatric population, was developed and implementation in daily practice is proposed. Further research is required to prove the applicability and the merit of this algorithm. It can, however, serve as a basis to provide European or even wider guidelines.

  2. Proposal of a brand-new gyrokinetic algorithm for global MHD simulation

    NASA Astrophysics Data System (ADS)

    Naitou, Hiroshi; Kobayashi, Kenichi; Hashimoto, Hiroki; Andachi, Takehisa; Lee, Wei-Li; Tokuda, Shinji; Yagi, Masatoshi

    2009-11-01

    A new algorithm for the gyrokinetic PIC code is proposed. The basic equations are energy conserving and composed of (1) the gyrokinetic Vlasov (GKV) equation, (2) the vortex equation, and (3) the generalized Ohm's law along the magnetic field. Equation (2) is used to advance the electrostatic potential in time. Equation (3) is used to advance the longitudinal component of the vector potential in time, as well as to estimate the longitudinal induced electric field that accelerates charged particles. The particle information is used to estimate the pressure terms in equation (3). The idea emerged in the process of reviewing the split-weight-scheme formalism. This algorithm was incorporated in the Gpic-MHD code. Preliminary results for the m=1/n=1 internal kink mode simulation in cylindrical geometry indicate good energy conservation, very low noise due to particle discreteness, and applicability to larger spatial scales and higher-beta regimes. The advantage of the new Gpic-MHD is that the lower-order moments of the GKV equation are estimated by the moment equations, while the particle information is used to evaluate the second-order moment.

  3. [Chikungunya, an emerging viral disease. Proposal of an algorithm for its clinical management].

    PubMed

    Palacios-Martínez, D; Díaz-Alonso, R A; Arce-Segura, L J; Díaz-Vera, E

    2015-01-01

    Chikungunya fever (CHIK) is an emerging viral disease caused by the Chikungunya virus, an alphavirus of the Togaviridae family. It is transmitted to humans by the bite of infected mosquitoes, mainly Aedes aegypti and Aedes albopictus, vectors that are also involved in the transmission of dengue and other arboviruses. CHIK is now endemic in many regions of Africa and Southeast Asia. Cases of CHIK have been reported in America, the Caribbean, and Europe (France, Italy and Spain). There are reservoirs of these mosquitoes in some regions of Spain (Catalonia, Alicante, Murcia and the Balearic Islands). CHIK is characterized by a sudden, high and debilitating fever and by severe or disabling symmetrical arthralgia, which tend to improve within days or weeks. There are severe and chronic forms of CHIK. There is no specific treatment or prophylaxis for CHIK. An algorithm is proposed for the clinical management of CHIK based on the latest guidelines.

  4. Surgical Management of Symptomatic Extensor Digitorum Brevis Manus: A Proposed Algorithm for Treatment.

    PubMed

    Waterman, Brian R; Dunn, John C; Kusnezov, Nicholas; Romano, David; Pirela-Cruz, Miguel A

    2015-10-01

    First described in 1734, the extensor digitorum brevis manus (EDBM) is an anomalous extensor muscle found in the dorsum of the wrist and hand. Extensor muscle variants of the hand are not uncommon, and the EDBM has a reported incidence of approximately 2%. Although few extensor muscle variants become clinically significant, there is a paucity of literature discussing these anatomic variants, with most reports arising from cadaveric studies or isolated case series. Similarly, there are few established indications for surgical treatment of the EDBM. In this case report, we describe the successful treatment by surgical excision of a young patient with a persistently symptomatic anomalous extensor tendon, and propose an algorithm for management after failure of conservative measures.

  5. Hyperammonemic Encephalopathy Associated With Fibrolamellar Hepatocellular Carcinoma: Case Report, Literature Review, and Proposed Treatment Algorithm

    PubMed Central

    Chapuy, Claudia I.; Sahai, Inderneel; Sharma, Rohit; Zhu, Andrew X.

    2016-01-01

    We report a case of a 31-year-old man with metastatic fibrolamellar hepatocellular carcinoma (FLHCC) treated with gemcitabine and oxaliplatin, complicated by hyperammonemic encephalopathy biochemically consistent with acquired ornithine transcarbamylase deficiency. Awareness of FLHCC-associated hyperammonemic encephalopathy and a pathophysiology-based management approach can optimize patient outcome and prevent serious complications. A discussion of the management of this rare metabolic complication, a literature review, and a proposed treatment algorithm are presented. Implications for Practice: Pathophysiology-guided management of cancer-associated hyperammonemic encephalopathy can improve patient outcome and prevent life-threatening complications. Community and academic oncologists should be aware of this serious metabolic complication of cancer and be familiar with its management. PMID:26975868

  6. Using the erroneous data clustering to improve the feature extraction weights of original image algorithms

    NASA Astrophysics Data System (ADS)

    Wu, Tin-Yu; Chang, Tse; Chu, Teng-Hao

    2017-02-01

    Many data mining applications adopt Artificial Neural Networks (ANNs) to solve problems, but training an ANN involves many issues, such as the number of labeled samples, the time and performance of training, the number of hidden layers, and the transfer function. Moreover, if the compared results are not as expected, it cannot be known clearly which dimension causes the deviation, mainly because an ANN fits the desired results by modifying weights rather than by improving the original feature extraction algorithm applied to the image; it tends to obtain the correct value by weighting the result. To address these problems, this paper puts forward a method to assist ANN-based image data analysis. Normally, a parameter is set as the value used to extract the feature vector when processing an image, and we regard this value as a weight. The experiment uses values extracted from Speeded Up Robust Features (SURF) feature points as the basis for training; since SURF extracts different feature points according to the extraction values, we first perform initial semi-supervised clustering on these values and then use Modified Fuzzy K-Nearest Neighbors (MFKNN) for training and classification. The matching of unknown images is not a one-to-one complete comparison but a comparison against group centroids only, mainly to save effort and speed up retrieval; the retrieved results are then observed and analyzed. In essence, the method clusters and classifies using the nature of image feature points, assigns new values to groups with high error rates to produce new feature points, and feeds these into the input layer of the ANN for training; finally, a comparative analysis is made with a Back-Propagation Neural Network (BPN) of a Genetic Algorithm-Artificial Neural Network
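
    As a concrete illustration of the grouping-and-centroid idea described above, the sketch below clusters local feature descriptors from a training set and then matches an unknown image against group centroids only, rather than descriptor-by-descriptor. This is a minimal sketch, not the authors' code: SURF needs opencv-contrib (ORB is used as a fallback when it is unavailable), scikit-learn's KMeans stands in for the semi-supervised clustering step, and the cluster count k is an arbitrary assumption.

```python
import cv2
import numpy as np
from sklearn.cluster import KMeans

def image_descriptors(path, hessian=400):
    """Extract SURF descriptors from one image; fall back to ORB if SURF is missing."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    try:
        det = cv2.xfeatures2d.SURF_create(hessianThreshold=hessian)
    except AttributeError:                     # SURF not built in; use ORB instead
        det = cv2.ORB_create()
    _, desc = det.detectAndCompute(img, None)
    return np.float32(desc)

def build_centroids(train_paths, k=16):
    """Cluster all training descriptors into k groups (the 'grouping' step)."""
    stacked = np.vstack([image_descriptors(p) for p in train_paths])
    return KMeans(n_clusters=k, n_init=10).fit(stacked).cluster_centers_

def signature(path, centroids):
    """Match an unknown image against group centroids only, not one-to-one."""
    desc = image_descriptors(path)
    dist = np.linalg.norm(desc[:, None, :] - centroids[None, :, :], axis=2)
    return np.bincount(dist.argmin(axis=1), minlength=len(centroids))
```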

  7. A proposed origin for fossilized Pennsylvanian plant cuticles by pyrite oxidation (Sydney Coalfield, Nova Scotia, Canada)

    USGS Publications Warehouse

    Zodrow, E.L.; Mastalerz, Maria

    2009-01-01

    Fossilized cuticles, though rare in the roof rocks of coal seams in the younger part of the Pennsylvanian Sydney Coalfield, Nova Scotia, represent nearly all of the major plant groups. Investigated here, by Fourier transform infrared spectroscopy (FTIR) and elemental analysis, are fossilized cuticles (FCs) and cuticles extracted from compressions by Schulze's process (CCs) of Alethopteris ambigua. These investigations are supplemented by FTIR analysis of FCs and CCs of Cordaites principalis and of a cuticle-fossilized medullosalean(?) axis. The purpose of this study is threefold: (1) to seek biochemical discriminators between FCs and CCs of the same species using semi-quantitative FTIR techniques; (2) to assess the effects of chemical treatments, particularly Schulze's process, on functional groups; and, most importantly, (3) to study the primary origin of FCs. Results are equivocal with respect to (1); regarding (2), after Schulze's treatment aliphatic moieties tend to be reduced relative to oxygenated groups, and some aliphatic chains may be shortened; and for (3) a primary chemical model is proposed. The model is based on a variety of geological observations, including stratal distribution, the clay and pyrite mineralogies associated with FCs and compressions, and regional geological structure. The model presupposes compression-cuticle fossilization under anoxic conditions with late authigenic deposition of sub-micron-sized pyrite on the compressions. Rock joints subsequently provided conduits for oxygen-enriched groundwater circulation, initiating in situ pyrite oxidation that produced sulfuric acid which macerated the compressions, with resultant loss of vitrinite but preservation of the cuticles as FCs. The timing of the process remains undetermined, though it is assumed to be late to post-diagenetic. Although FCs represent a pathway of organic matter transformation (pomd) distinct from other plant-fossilization processes, global applicability of the

  8. A model-based parallel origin and orientation refinement algorithm for cryoTEM and its application to the study of virus structures

    PubMed Central

    Ji, Yongchang; Marinescu, Dan C.; Zhang, Wei; Zhang, Xing; Yan, Xiaodong; Baker, Timothy S.

    2014-01-01

    We present a model-based parallel algorithm for origin and orientation refinement for 3D reconstruction in cryoTEM. The algorithm is based upon the Projection Theorem of the Fourier Transform. Rather than projecting the current 3D model and searching for the best match between an experimental view and the calculated projections, the algorithm computes the Discrete Fourier Transform (DFT) of each projection and searches for the central section (“cut”) of the 3D DFT that best matches the DFT of the projection. Factors that affect the efficiency of a parallel program are first reviewed and then the performance and limitations of the proposed algorithm are discussed. The parallel program that implements this algorithm, called PO2R, has been used for the refinement of several virus structures, including those of the 500 Å diameter dengue virus (to 9.5 Å resolution), the 850 Å mammalian reovirus (to better than 7 Å), and the 1800 Å paramecium bursaria chlorella virus (to 15 Å). PMID:16459100
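
    The Fourier-space search rests on the projection theorem cited above: the 2D DFT of a projection equals a central section of the 3D DFT of the volume. The toy numpy check below verifies that identity for the axis-aligned case; the actual PO2R code searches over all orientations and interpolates oblique central sections, which this sketch does not attempt.

```python
import numpy as np

rng = np.random.default_rng(0)
vol = rng.random((32, 32, 32))      # toy 3D model

proj = vol.sum(axis=0)              # project along the first axis
F2 = np.fft.fftn(proj)              # 2D DFT of the projection
F3 = np.fft.fftn(vol)               # 3D DFT of the volume
central = F3[0, :, :]               # zero-frequency central section

print(np.allclose(F2, central))     # True: projection theorem holds
```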

  9. Surgical Management of Early Endometrial Cancer: An Update and Proposal of a Therapeutic Algorithm

    PubMed Central

    Falcone, Francesca; Balbi, Giancarlo; Di Martino, Luca; Grauso, Flavio; Salzillo, Maria Elena; Messalli, Enrico Michelino

    2014-01-01

    In the last few years technical improvements have produced a dramatic shift from traditional open surgery towards a minimally invasive approach for the management of early endometrial cancer. Advancement in minimally invasive surgical approaches has allowed extensive staging procedures to be performed with significantly reduced patient morbidity. Debate is ongoing regarding the choice of the minimally invasive approach that offers the most effective benefit for the patients, the surgeon, and the healthcare system as a whole. Surgical treatment of women with presumed early endometrial cancer should take into account the features of the endometrial disease and the general surgical risk of the patient. Women with endometrial cancer are often elderly and obese, with cardiovascular and metabolic comorbidities that increase the risk of peri-operative complications, so it is important to tailor the extent and radicality of surgery in order to decrease the morbidity and mortality potentially deriving from unnecessary procedures. In this regard, women with negative nodes derive no benefit from unnecessary lymphadenectomy, but may develop short- and long-term morbidity related to this procedure. Preoperative and intraoperative techniques could be critical tools for tailoring the extent and radicality of surgery in the management of women with presumed early endometrial cancer. In this review we discuss updates in the surgical management of early endometrial cancer and the role of preoperative and intraoperative evaluation of lymph node status in influencing surgical options, with the aim of proposing a management algorithm based on the literature and our experience. PMID:25063051

  10. Surgical management of early endometrial cancer: an update and proposal of a therapeutic algorithm.

    PubMed

    Falcone, Francesca; Balbi, Giancarlo; Di Martino, Luca; Grauso, Flavio; Salzillo, Maria Elena; Messalli, Enrico Michelino

    2014-07-26

    In the last few years technical improvements have produced a dramatic shift from traditional open surgery towards a minimally invasive approach for the management of early endometrial cancer. Advancement in minimally invasive surgical approaches has allowed extensive staging procedures to be performed with significantly reduced patient morbidity. Debate is ongoing regarding the choice of the minimally invasive approach that offers the most effective benefit for the patients, the surgeon, and the healthcare system as a whole. Surgical treatment of women with presumed early endometrial cancer should take into account the features of the endometrial disease and the general surgical risk of the patient. Women with endometrial cancer are often elderly and obese, with cardiovascular and metabolic comorbidities that increase the risk of peri-operative complications, so it is important to tailor the extent and radicality of surgery in order to decrease the morbidity and mortality potentially deriving from unnecessary procedures. In this regard, women with negative nodes derive no benefit from unnecessary lymphadenectomy, but may develop short- and long-term morbidity related to this procedure. Preoperative and intraoperative techniques could be critical tools for tailoring the extent and radicality of surgery in the management of women with presumed early endometrial cancer. In this review we discuss updates in the surgical management of early endometrial cancer and the role of preoperative and intraoperative evaluation of lymph node status in influencing surgical options, with the aim of proposing a management algorithm based on the literature and our experience.

  11. Haemophagocytic lymphohistiocytosis: proposal of a diagnostic algorithm based on perforin expression.

    PubMed

    Aricò, Maurizio; Allen, Michaela; Brusa, Simona; Clementi, Rita; Pende, Daniela; Maccario, Rita; Moretta, Lorenzo; Danesino, Cesare

    2002-10-01

    Haemophagocytic lymphohistiocytosis (HLH) is a rare, fatal disorder of early infancy. Mutations of the PRF1 gene have been identified in a subset of patients. However, the distinction between the different genetically determined and environmental subtypes of the disease remains a major issue to be solved, and may result in delayed or inappropriate application of bone marrow transplantation (BMT). We propose an algorithm that uses a combination of three rapid laboratory tests, i.e. perforin expression by peripheral lymphocytes, assessment of the behaviour of the 2B4 lymphocyte receptor, and natural killer (NK) cell activity, to identify the different subgroups of HLH. In 19 patients diagnosed according to current criteria, we tested perforin expression, 2B4 receptor function and NK cell activity. PRF1 mutations were found in all seven patients showing absent perforin expression. In one male with abnormal behaviour of the 2B4 receptor, an SH2D1A mutation confirmed the diagnosis of X-linked lymphoproliferative disease. Four patients with normal NK cell activity had evidence of associated infections. Of the seven with impaired NK cell activity, two had a probable genetically determined subtype of HLH and five appeared to be sporadic, infection-associated cases. Improving the diagnostic approach may restrict the use of BMT, the only recognized curative treatment, to HLH patients with a documented poor prognosis, while patients with milder disorders may be treated less intensively. Our flow chart could also lead to better selection of patients for specific gene analysis.
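
    The proposed three-test triage can be written down as a short decision function. The sketch below paraphrases the flow described in the abstract; the labels and logic are illustrative only, not clinical guidance.

```python
def hlh_subgroup(perforin_absent: bool, abnormal_2b4: bool,
                 nk_impaired: bool) -> str:
    """Triage mirroring the proposed flow chart; illustrative only."""
    if perforin_absent:
        return "suspect PRF1-mutated familial HLH"
    if abnormal_2b4:
        return "suspect X-linked lymphoproliferative disease (test SH2D1A)"
    if not nk_impaired:
        return "normal NK activity: likely infection-associated HLH"
    return "impaired NK activity: genetic vs sporadic, further work-up needed"
```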

  12. Automated Analysis of 1p/19q Status by FISH in Oligodendroglial Tumors: Rationale and Proposal of an Algorithm

    PubMed Central

    Duval, Céline; de Tayrac, Marie; Michaud, Karine; Cabillic, Florian; Paquet, Claudie; Gould, Peter Vincent; Saikali, Stéphan

    2015-01-01

    Objective To propose a new algorithm facilitating automated analysis of 1p and 19q status by FISH technique in oligodendroglial tumors with software packages available in the majority of institutions using this technique. Methods We documented all green/red (G/R) probe signal combinations in a retrospective series of 53 oligodendroglial tumors according to literature guidelines (Algorithm 1) and selected only the most significant combinations for a new algorithm (Algorithm 2). This second algorithm was then validated on a prospective internal series of 45 oligodendroglial tumors and on an external series of 36 gliomas. Results Algorithm 2 utilizes 24 G/R combinations which represent less than 40% of combinations observed with Algorithm 1. The new algorithm excludes some common G/R combinations (1/1, 3/2) and redefines the place of others (defining 1/2 as compatible with normal and 3/3, 4/4 and 5/5 as compatible with imbalanced chromosomal status). The new algorithm uses the combination + ratio method of signal probe analysis to give the best concordance between manual and automated analysis on samples of 100 tumor cells (91% concordance for 1p and 89% concordance for 19q) and full concordance on samples of 200 tumor cells. This highlights the value of automated analysis as a means to identify cases in which a larger number of tumor cells should be studied by manual analysis. Validation of this algorithm on a second series from another institution showed a satisfactory concordance (89%, κ = 0.8). Conclusion Our algorithm can be easily implemented on all existing FISH analysis software platforms and should facilitate multicentric evaluation and standardization of 1p/19q assessment in gliomas with reduction of the professional and technical time required. PMID:26135922
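
    A flavour of the combination + ratio method can be given in a few lines. Only the G/R combinations quoted in the abstract are encoded below; the full 24-combination table is in the paper, so the mapping, the assumed probe colours (green = test probe, red = control) and the 0.8 deletion cut-off are illustrative stand-ins.

```python
from collections import Counter

EXCLUDED   = {(1, 1), (3, 2)}           # common combinations the new algorithm drops
NORMAL     = {(1, 2), (2, 2)}           # 1/2 redefined as compatible with normal
IMBALANCED = {(3, 3), (4, 4), (5, 5)}   # compatible with imbalanced chromosomal status

def classify_sample(cells, deletion_ratio=0.8):
    """cells: (green, red) signal counts per nucleus."""
    kept = [c for c in cells if c not in EXCLUDED]
    if not kept:
        return "uninformative"
    votes = Counter()
    for g, r in kept:
        if (g, r) in NORMAL:
            votes["normal"] += 1
        elif (g, r) in IMBALANCED:
            votes["imbalanced"] += 1
        else:
            votes["deleted" if g < r else "other"] += 1
    ratio = sum(g for g, _ in kept) / sum(r for _, r in kept)   # ratio component
    return "deleted" if ratio < deletion_ratio else votes.most_common(1)[0][0]
```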

  13. To Propose a Reviewer Dispatching Algorithm for Networked Peer Assessment System.

    ERIC Educational Resources Information Center

    Liu, Eric Zhi-Feng

    2005-01-01

    Despite their increasing availability on the Internet, networked peer assessment systems (1-5) lack a feasible algorithm for automatically dispatching students' assignments, which ultimately inhibits the effectiveness of peer assessment. Therefore, this study presents a reviewer dispatching algorithm capable of supporting networked peer assessment systems in…

  14. CDRD and PNPR satellite passive microwave precipitation retrieval algorithms: EuroTRMM/EURAINSAT origins and H-SAF operations

    NASA Astrophysics Data System (ADS)

    Mugnai, A.; Smith, E. A.; Tripoli, G. J.; Bizzarri, B.; Casella, D.; Dietrich, S.; Di Paola, F.; Panegrossi, G.; Sanò, P.

    2013-04-01

    including a few examples of their performance. This aspect of the development of the two algorithms is placed in the context of what we refer to as the TRMM era, denoting the active and ongoing period of the Tropical Rainfall Measuring Mission (TRMM) that helped inspire their original development. In 2015, the ISAC-Rome precipitation algorithms will undergo a transformation beginning with the upcoming Global Precipitation Measurement (GPM) mission, particularly the GPM Core Satellite technologies. A few years afterward, the first pair of imaging and sounding Meteosat Third Generation (MTG) satellites will be launched, providing additional technological advances. Various opportunities presented by the GPM Core and MTG satellites for improving the current CDRD and PNPR precipitation retrieval algorithms, as well as for extending their product capability, are discussed.

  15. To Propose an Algorithm for Team Forming: Simulated Annealing K Team-Forming Algorithm for Heterogeneous Grouping.

    ERIC Educational Resources Information Center

    Zhi-Feng Liu, Eric

    2005-01-01

    In recent studies, some researchers have sought the answer to how to form a perfect dream team. Various grouping methods have been proposed, e.g. random assignment, homogeneous grouping by personality or achievement, and heterogeneous grouping by personality or achievement. Some instructors could put some students in a team better…

  16. Physical theory, origin of flight, and a synthesis proposed for birds.

    PubMed

    Long, Charles A; Zhang, G P; George, Thomas F; Long, Claudine F

    2003-09-07

    Neither flapping and running to take-off nor gliding from heights can be disproved as the assured evolutionary origin of self-powered flight observed in modern vertebrates. Gliding with set wings would utilize available potential energy from gravity but gain little from flapping. Bipedal running, important in avian phylogeny, possibly facilitated the evolution of flight. Based on physical principles, gliding is a better process for the origin of powered flight than the "ground-up" process, which physically is not feasible in space or time (considering air resistance, metabolic energy costs, and mechanical resistance to bipedal running). Proto-avian ancestors of Archaeopteryx and Microraptor probably flapped their sparsely feathered limbs synchronously while descending from leaps or heights, with such "flutter-gliding" presented as a synthesis of the two earlier theories of flight origin (making use of the available potential energy from gravity, involving wing thrusts and flapping, coping with air resistance that slows air speed, but effecting positive fitness value in providing lift and slowing dangerous falls).

  17. A proposal concerning the origin of life on the planet earth

    NASA Technical Reports Server (NTRS)

    Woese, C. R.

    1979-01-01

    It is proposed that, contrary to the widely accepted Oparin thesis, life on earth arose not in the oceans but in the earth's atmosphere. Difficulties of the Oparin thesis relating to the nonbiological nature of prebiotic evolution are discussed, and autotrophic, photosynthetic cells are proposed as the first living organisms to emerge, thus avoiding these difficulties. Recent developments in the geology of the earth at the time of the emergence of life are interpreted as requiring the absence of liquid surface water, with water partitioned between a molten crust and a dense, CO2-rich atmosphere, similar to the present state of Venus. Biochemistry in such an atmosphere would be primarily membrane chemistry on the interfaces of atmospheric salt water droplets, proceeding at normal temperatures without the absorption of electrical discharges or UV light. Areas not sufficiently accounted for by this scenario include the development of genetic organization and the breaking of the runaway greenhouse condition assumed.

  18. [Septic arthritis of undetermined origin in children: a proposal for the assessment and its therapy].

    PubMed

    Biondolillo, G; Caprasse, P; Battisti, O

    2014-01-01

    Septic arthritis is an infrequent but classical pathology in children. It can have a severe outcome in case of delayed and/or inadequate treatment. Drainage of the infected joint combined with prompt and adapted antibiotic therapy are the cornerstones of treatment. The isolation and identification of the causative microorganism is also of the highest importance. Up to now, unfortunately, a large proportion of septic arthritis cases are treated with antibiotics although all cultures remain negative. This paper has two objectives: first, to present the different steps to optimize the assessment and diagnosis; second, to increase the sensitivity of pathogen identification. Finally, we present our proposal for empirical antibiotic therapy.

  19. Proposal of a new plane shape of an opera house-optimized by genetic algorithms

    NASA Astrophysics Data System (ADS)

    Hotehama, Takuya; Ando, Yoichi; Tani, Akinori; Kawamura, Hiroshi

    2004-05-01

    The horseshoe-shaped theater has been the dominant plan for historical reasons. From an acoustical point of view, however, the rationality of this peculiar plane shape has never been verified beyond its historical refinement. In this study, in order to make the theater shape more acoustically excellent, the positions of the side walls were optimized for the temporal and spatial factors of the theory of subjective preference using genetic algorithms (GAs). The results reconfirm that the optimized plane shape is a leaf shape, which has previously been shown to be acoustically rational for a concert hall. Further possible shapes are also offered.
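
    A toy version of the optimization loop is easy to sketch: a GA evolves a vector of side-wall positions against a fitness score. In the study the score combines the temporal and spatial factors of the theory of subjective preference; the score() placeholder below is purely an assumption standing in for that acoustic model.

```python
import random

N_WALLS, POP, GENS = 8, 40, 60

def score(walls):
    # placeholder fitness: the real one evaluates subjective-preference factors
    return -sum((walls[i] - walls[i + 1]) ** 2 for i in range(N_WALLS - 1))

def mutate(walls, sigma=0.3):
    return [w + random.gauss(0, sigma) for w in walls]

def crossover(a, b):
    cut = random.randrange(1, N_WALLS)
    return a[:cut] + b[cut:]

pop = [[random.uniform(5.0, 15.0) for _ in range(N_WALLS)] for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=score, reverse=True)
    elite = pop[: POP // 4]                       # keep the best quarter
    pop = elite + [mutate(crossover(random.choice(elite), random.choice(elite)))
                   for _ in range(POP - len(elite))]
best_walls = max(pop, key=score)
```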

  20. [Adequacy of clinical interventions in patients with advanced and complex disease. Proposal of a decision making algorithm].

    PubMed

    Ameneiros-Lago, E; Carballada-Rico, C; Garrido-Sanjuán, J A; García Martínez, A

    2015-01-01

    Decision making for the patient with advanced and complex chronic disease is especially difficult. Health professionals are obliged to prevent avoidable suffering and not to add further harm to that caused by the disease itself. The adequacy of clinical interventions consists of offering only those diagnostic and therapeutic procedures appropriate to the clinical situation of the patient, and of performing only those allowed by the patient or their representative. In this article, an algorithm is proposed to help health professionals in this decision-making process.

  1. A miRNA-tRNA mix-up: tRNA origin of proposed miRNA.

    PubMed

    Schopman, Nick C T; Heynen, Stephan; Haasnoot, Joost; Berkhout, Ben

    2010-01-01

    The rapid release of new data from DNA genome sequencing projects has led to a variety of misannotations in public databases, and our results suggest that next-generation sequencing approaches are particularly prone to such misannotations. Two related miRNA candidates recently entered the miRBase database, miR-1274b and miR-1274a, but they share identical 18-nucleotide stretches with tRNA-Lys3 and tRNA-Lys5, respectively. The possibility that the small RNA fragments that led to the description of these two miRNAs originated from the two tRNAs was examined. The ratio of the miR-1274b:miR-1274a fragments closely resembles the known tRNA-Lys3:tRNA-Lys5 ratio in the cell. Furthermore, the proposed miRNA hairpins have a very low prediction score, and the proposed miRNA genes are in fact endogenous retroviral elements. We searched for other miRNA mimics in the human genome and found more examples of tRNA-miRNA mimicry. We propose that the corresponding miRNAs be validated in more detail, as the small RNA fragments that led to their description are likely derived from tRNA processing.
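
    The core check behind such a mix-up is mechanical: look for identical k-mers (here 18 nt) shared between a proposed miRNA and a tRNA. A minimal sketch follows; the sequences are placeholders, not the real miR-1274 or tRNA-Lys sequences.

```python
def shared_kmers(a: str, b: str, k: int = 18):
    """Return all k-mers present in both sequences."""
    kmers_a = {a[i:i + k] for i in range(len(a) - k + 1)}
    kmers_b = {b[j:j + k] for j in range(len(b) - k + 1)}
    return sorted(kmers_a & kmers_b)

mirna = "GTCCCTGTTCGGGCGCCA"        # hypothetical 18-nt candidate miRNA
trna = "AAA" + mirna + "GGG"        # hypothetical tRNA carrying the same stretch
print(shared_kmers(mirna, trna))    # -> ['GTCCCTGTTCGGGCGCCA']
```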

  2. A direct phasing method based on the origin-free modulus sum function and the FFT algorithm. XII.

    PubMed

    Rius, Jordi; Crespi, Anna; Torrelles, Xavier

    2007-03-01

    An alternative way of refining phases with the origin-free modulus sum function S is shown that, instead of applying the tangent formula in sequential mode [Rius (1993). Acta Cryst. A49, 406-409], applies it in parallel mode with the help of the fast Fourier transform (FFT) algorithm. The test calculations performed on intensity data of small crystal structures at atomic resolution prove the convergence and hence the viability of the procedure. This new procedure called S-FFT is valid for all space groups and especially competitive for low-symmetry ones. It works well when the charge-density peaks in the crystal structure have the same sign, i.e. either positive or negative.

  3. The prevention of adverse reactions to transfusions in patients with haemoglobinopathies: a proposed algorithm

    PubMed Central

    Bennardello, Francesco; Fidone, Carmelo; Spadola, Vincenzo; Cabibbo, Sergio; Travali, Simone; Garozzo, Giovanni; Antolino, Agostino; Tavolino, Giuseppe; Falla, Cadigia; Bonomo, Pietro

    2013-01-01

    Background Transfusion therapy remains the main treatment for patients with severe haemoglobinopathies, but can cause adverse reactions which may be classified as immediate or delayed. The use of targeted prevention with drugs and treatments of blood components in selected patients can contribute to reducing the development of some reactions. The aim of our study was to develop an algorithm capable of guiding the behaviours to adopt in order to reduce the incidence of immediate transfusion reactions. Materials and methods Immediate transfusion reactions occurring over a 7-year period in 81 patients with transfusion-dependent haemoglobinopathies were recorded. The patients received transfusions with red cell concentrates that had been filtered prestorage. Various measures were undertaken to prevent transfusion reactions: leucoreduction, washing of the red blood cells, and prophylactic administration of an antihistamine (loratadine, 10 mg tablet) or an antipyretic (paracetamol, 500 mg tablet). Results Over the study period 20,668 red cell concentrates were transfused and 64 adverse transfusion reactions were recorded in 36 patients. The mean incidence of reactions over the 7 years of observation was 3.1‰, and over the years the incidence gradually decreased from 6.8‰ in 2004 to 0.9‰ in 2010. Discussion Preventive measures are not required for patients who have an occasional reaction, because the probability that such a reaction recurs is very low. In contrast, the targeted use of drugs such as loratadine or paracetamol, sometimes combined with washing and/or double filtration of the red blood cells, can reduce the rate of recurrent (allergic) reactions to about 0.9‰. The system for detecting adverse reactions and the training of staff involved in transfusion therapy are critical points for reliable data collection, and standardisation of the detection system is recommended for those wanting to monitor the incidence of all adverse reactions, including minor ones. PMID:23736930

  4. The use of infrared images to detect ticks in cattle and proposal of an algorithm for quantifying the infestation.

    PubMed

    Barbedo, Jayme Garcia Arnal; Gomes, Claudia Cristina Gulias; Cardoso, Fernando Flores; Domingues, Robert; Ramos, Jeferson Vidart; McManus, Concepta Margaret

    2017-02-15

    This paper presents a study on the use of low-resolution infrared images to detect ticks in cattle. Emphasis is given to the main factors that influence the quality of the captured images, as well as to the actions that can increase the amount of information conveyed by these images. In addition, a new automatic method for analyzing the images and counting the ticks is introduced. The proposed algorithm relies only on color transformations and simple mathematical morphology operations, and is thus easy to implement and computationally light. Tests were carried out using a large database containing images of the neck and hind end of the animals. The proposed algorithm proved very effective in detecting ticks visible in the images, even when their contrast with the background is low. On the other hand, due to both intrinsic and extrinsic factors, the thermographic images used in this study did not always provide enough contrast between ticks and the cattle's hair coat. Although these problems can be mitigated by following some directives, currently only rough tick-count estimates can be achieved using infrared images with low spatial resolution.
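
    A hedged OpenCV sketch of the kind of pipeline the abstract outlines (a colour transformation, a threshold, morphological cleaning, and a component count) is given below. The channel choice, threshold, and minimum blob area are illustrative assumptions, not the published parameters.

```python
import cv2
import numpy as np

def count_ticks(path, thresh=200, min_area=4):
    img = cv2.imread(path)
    # assume warm spots (ticks) stand out in one channel of the false-colour
    # thermogram; here the red channel is taken as that channel
    chan = img[:, :, 2]
    _, mask = cv2.threshold(chan, thresh, 255, cv2.THRESH_BINARY)
    kernel = np.ones((3, 3), np.uint8)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)   # remove speckle
    n, _, stats, _ = cv2.connectedComponentsWithStats(mask)
    blobs = [s for s in stats[1:] if s[cv2.CC_STAT_AREA] >= min_area]
    return len(blobs)
```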

  5. Genetics of type III Bartter syndrome in Spain, proposed diagnostic algorithm.

    PubMed

    García Castaño, Alejandro; Pérez de Nanclares, Gustavo; Madariaga, Leire; Aguirre, Mireia; Madrid, Alvaro; Nadal, Inmaculada; Navarro, Mercedes; Lucas, Elena; Fijo, Julia; Espino, Mar; Espitaletta, Zilac; Castaño, Luis; Ariceta, Gema

    2013-01-01

    The p.Ala204Thr mutation (exon 7) of the CLCNKB gene is a founder mutation that causes most cases of type III Bartter syndrome in Spain. We performed genetic analysis of the CLCNKB gene, which encodes the chloride channel protein ClC-Kb, in a cohort of 26 affected patients from 23 families. The diagnostic algorithm was: first, detection of the p.Ala204Thr mutation; second, detection of large deletions or duplications by Multiplex Ligation-dependent Probe Amplification and Quantitative Multiplex PCR of Short Fluorescent Fragments; and third, sequencing of the coding and flanking regions of the whole CLCNKB gene. In our genetic diagnosis, 20 families presented the p.Ala204Thr mutation. Of those, 15 patients (15 families) were homozygous (57.7% of all patients). Another 8 patients (5 families) were compound heterozygous for the founder mutation together with a second one: 3 patients (2 siblings) presented the c.-19-?_2053+?del deletion (comprising the entire gene); one patient carried the p.Val170Met mutation (exon 6); and 4 patients (3 siblings) presented the novel p.Glu442Gly mutation (exon 14). On the other hand, another two patients carried two novel mutations in compound heterozygosis: one presented the p.Ile398_Thr401del mutation (exon 12) associated with the c.-19-?_2053+?del deletion, and the other carried the c.1756+1G>A splice-site mutation (exon 16) as well as the already described p.Ala210Val change (exon 7). One case turned out to be negative in our genetic screening. In addition, 51 relatives were found to be heterozygous carriers of the described CLCNKB mutations. In conclusion, different mutations cause type III Bartter syndrome in Spain. The high prevalence of p.Ala204Thr in Spanish families justifies an initial screen for this mutation; should it not be detected, however, further investigation of the CLCNKB gene is warranted in clinically diagnosed families.
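
    The three-tier order of the diagnostic algorithm can be written down directly. In the sketch below the assay results are assumed to arrive as a dict with hypothetical keys, since the wet-lab steps themselves obviously cannot be shown.

```python
def bartter_iii_screen(assays: dict) -> str:
    """assays: results of the three tiers, cheapest test first (keys hypothetical)."""
    if assays.get("p_ala204thr"):          # tier 1: founder mutation screen
        return "p.Ala204Thr founder mutation"
    if assays.get("mlpa_qmpsf"):           # tier 2: large deletions/duplications
        return f"large rearrangement: {assays['mlpa_qmpsf']}"
    if assays.get("sequencing"):           # tier 3: full CLCNKB sequencing
        return f"sequence variant(s): {assays['sequencing']}"
    return "negative screen"

print(bartter_iii_screen({"mlpa_qmpsf": "c.-19-?_2053+?del (whole gene)"}))
```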

  6. Treatment Algorithm for Chronic Achilles Tendon Lesions: Review of the Literature and Proposal of a New Classification.

    PubMed

    Buda, Roberto; Castagnini, Francesco; Pagliazzi, Gherardo; Giannini, Sandro

    2017-03-01

    Chronic Achilles tendon lesions (CATLs) ensue from a neglected acute rupture or a degenerated tendon. Surgical treatment is usually required. The current English literature (PubMed) on CATLs was reviewed, with particular emphasis on articles describing CATL classifications. The available treatment algorithms are based on defect size; we propose the inclusion of other parameters, such as tendon degeneration, etiology, and time from injury to surgery. Partial lesions affecting less than half of the tendon (stage I) or more than half (stage II) should be treated conservatively in healthy tendons within 12 weeks of injury; complex stage II cases require an end-to-end anastomosis. Complete lesions smaller than 2 cm should be addressed by an end-to-end anastomosis, with a tendon transfer in the case of tendon degeneration. Lesions measuring 2 to 5 cm require a turndown flap and a V-Y tendinous flap in the case of a good-quality tendon; degenerated tendons may require a tendon transfer. Lesions larger than 5 cm should be treated using two tendon transfers and V-Y tendinous flaps. A proper algorithm should be introduced to calibrate the surgical procedures: in addition to defect size, tendon degeneration, the etiology of the lesion, and the time from injury to surgery are crucial factors to consider in surgical planning.
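
    The proposed classification lends itself to a direct encoding. The function below paraphrases the size and tendon-quality rules from the abstract and is illustrative only, not clinical guidance.

```python
def catl_treatment(complete: bool, defect_cm: float, degenerated: bool,
                   weeks_from_injury: int) -> str:
    """Paraphrase of the proposed treatment algorithm; illustrative only."""
    if not complete:
        # partial (stage I/II) lesions: conservative in healthy tendons seen
        # within 12 weeks; complex stage II cases need an end-to-end repair
        if not degenerated and weeks_from_injury <= 12:
            return "conservative treatment"
        return "end-to-end anastomosis"
    if defect_cm < 2:
        return "end-to-end anastomosis" + (" + tendon transfer" if degenerated else "")
    if defect_cm <= 5:
        return "tendon transfer" if degenerated else "turndown flap + V-Y tendinous flap"
    return "two tendon transfers + V-Y tendinous flaps"
```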

  7. Chlorophyll pigment concentration using spectral curvature algorithms - An evaluation of present and proposed satellite ocean color sensor bands

    NASA Technical Reports Server (NTRS)

    Hoge, Frank E.; Swift, Robert N.

    1986-01-01

    During the past several years, the symmetric three-band (460-, 490-, 520-nm) spectral curvature algorithm (SCA) has demonstrated rather accurate determination of chlorophyll pigment concentration from low-altitude airborne ocean color data. It is shown here that the in-water asymmetric SCA, when applied to certain recently proposed OCI (NOAA-K and SPOT-3) and OCM (ERS-1) satellite ocean color bands, can adequately recover chlorophyll-like pigments. These airborne findings suggest that the proposed new ocean color sensor bands are in general satisfactorily, though not necessarily optimally, positioned to allow space evaluation of the SCA using high-precision, atmospherically corrected satellite radiances. Pigment recovery is not as good when existing Coastal Zone Color Scanner bands are used in the SCA. The in-water asymmetric SCA chlorophyll pigment recovery evaluations were performed using (1) airborne laser-induced chlorophyll fluorescence and (2) concurrent passive upwelled radiances. Data from a separate ocean color sensor aboard the aircraft were further used to validate the findings.
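
    The three-band curvature at the heart of the SCA is commonly written as the centre-band radiance squared divided by the product of the two flanking-band radiances, with pigment estimated through a power-law calibration. The sketch below uses that form; the calibration constants a and b are hypothetical, since the sensor-specific values are not given in the abstract.

```python
def spectral_curvature(L460, L490, L520):
    """Centre-band radiance squared over the product of the flanking bands."""
    return L490 ** 2 / (L460 * L520)

def pigment_estimate(L460, L490, L520, a=1.0, b=-5.0):
    """Power-law calibration against in-situ pigment (a, b are placeholders)."""
    return a * spectral_curvature(L460, L490, L520) ** b

print(pigment_estimate(1.2, 1.0, 0.9))
```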

  8. Notes on quantitative structure-property relationships (QSPR), part 3: density functions origin shift as a source of quantum QSPR algorithms in molecular spaces.

    PubMed

    Carbó-Dorca, Ramon

    2013-04-05

    A general algorithm implementing a useful variant of quantum quantitative structure-property relationships (QQSPR) theory is described. Based on the quantum similarity framework and previous theoretical developments on the subject, the present QQSPR procedure relies on the possibility of performing geometrical origin shifts over molecular density function sets. In this way, molecular collections with known properties can easily be used to estimate the unknown property values of other quantum mechanically well-described molecular structures. The proposed procedure takes the quantum mechanical expectation value as the provider of a causal relation background and overcomes the dimensionality paradox that haunts classical descriptor-space QSPR. Also, contrary to classical procedures, which are attached to heavy statistical machinery, the present QQSPR approach may use a purely geometrical assessment, a simple statistical outline, or both. From an applied point of view, several easily reachable computational levels can be set up. A Fortran 95 program, QQSPR-n, is described in two versions, which may be downloaded from a dedicated web site. Various practical examples are provided, yielding excellent results. Finally, it is also shown that an equivalent molecular-space classical QSPR formalism can easily be developed.

  9. Loss of Faith in the Origins of Information Literacy in E-Environments: Proposal of a Holistic Approach

    ERIC Educational Resources Information Center

    Nazari, Maryam; Webber, Sheila

    2012-01-01

    The original concept of information literacy (IL) identifies it as an enabler for lifelong learning and learning-to-learn, adaptable and transferable in any learning environment and context. However, practices of IL in electronic information and learning environments (e-environments) tend to question the origins, and workability, of IL on the…

  10. Guidelines and diagnostic algorithm for patients with suspected systemic mastocytosis: a proposal of the Austrian competence network (AUCNM)

    PubMed Central

    Valent, Peter; Aberer, Elisabeth; Beham-Schmid, Christine; Fellinger, Christina; Fuchs, Wolfgang; Gleixner, Karoline V; Greul, Rosemarie; Hadzijusufovic, Emir; Hoermann, Gregor; Sperr, Wolfgang R; Wimazal, Friedrich; Wöhrl, Stefan; Zahel, Brigitte; Pehamberger, Hubert

    2013-01-01

    Systemic mastocytosis (SM) is a hematopoietic neoplasm characterized by pathologic expansion of tissue mast cells in one or more extracutaneous organs. In most children and most adult patients, skin involvement is found. Childhood patients frequently suffer from cutaneous mastocytosis without systemic involvement, whereas most adult patients are diagnosed as suffering from SM. In a smaller subset of patients, SM develops without skin lesions, which is a diagnostic challenge. In the current article, a diagnostic algorithm for patients with suspected SM is proposed. In adult patients with skin lesions and histologically confirmed mastocytosis in the skin (MIS), a bone marrow biopsy is recommended regardless of the serum tryptase level. In adult patients without skin lesions who are suffering from typical mediator-related symptoms, the basal serum tryptase level is an important diagnostic parameter. In those with slightly elevated tryptase (15-30 ng/ml), additional non-invasive investigations, including KIT mutation analysis of peripheral blood cells and sonographic analysis, are performed. In adult patients in whom i) KIT D816V is detected, or/and ii) the basal serum tryptase level is clearly elevated (> 30 ng/ml), or/and iii) other clinical or laboratory features suggest the presence of occult mastocytosis, a bone marrow biopsy should be performed. In the absence of KIT D816V and other indications of mastocytosis, no bone marrow investigation is required, but the patient's course and serum tryptase levels are examined in the follow-up. PMID:23675567
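
    The proposed work-up reduces to a short decision function. The tryptase thresholds (15-30 and >30 ng/ml) are those quoted above; everything else is an illustrative paraphrase, not clinical advice.

```python
def suspected_sm_workup(skin_lesions: bool, tryptase_ng_ml: float,
                        kit_d816v: bool, other_features: bool) -> str:
    """Paraphrase of the proposed diagnostic algorithm; illustrative only."""
    if skin_lesions:
        return "bone marrow biopsy (regardless of serum tryptase level)"
    if kit_d816v or tryptase_ng_ml > 30 or other_features:
        return "bone marrow biopsy"
    if 15 <= tryptase_ng_ml <= 30:
        return "non-invasive work-up: KIT mutation analysis + sonography, then follow-up"
    return "no bone marrow investigation; follow course and serum tryptase"
```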

  11. Comparison between PCR and larvae visualization methods for diagnosis of Strongyloides stercoralis out of endemic area: A proposed algorithm.

    PubMed

    Repetto, Silvia A; Ruybal, Paula; Solana, María Elisa; López, Carlota; Berini, Carolina A; Alba Soto, Catalina D; Cappa, Stella M González

    2016-05-01

    Underdiagnosis of chronic infection with the nematode Strongyloides stercoralis may lead to severe disease in the immunosuppressed. We therefore set up a specific and highly sensitive molecular diagnostic assay for stool samples. Here, we compared the accuracy of our polymerase chain reaction (PCR)-based method with that of conventional diagnostic methods for chronic infection. We also analyzed clinical and epidemiological predictors of infection to propose an algorithm for the diagnosis of strongyloidiasis useful to the clinician. Molecular and gold-standard methods were performed to evaluate a cohort of 237 individuals recruited in Buenos Aires, Argentina. Subjects were assigned according to their immunological status, eosinophilia and/or history of residence in endemic areas. Strongyloidiasis was diagnosed by PCR on the first stool sample in 71/237 (29.9%) individuals, whereas only 35/237 (27.4%) were positive by conventional methods, which required up to four serial stool samples at weekly intervals. Eosinophilia and a history of residence in endemic areas emerged as independent factors that increase the likelihood of detecting the parasite in our study population. Our results underscore the usefulness of robust molecular tools for diagnosing chronic S. stercoralis infection. The evidence also highlights the need to survey patients with eosinophilia even when a history of residence in an endemic area is absent.

  12. Animal-inflicted open wounds in rural Turkey: lessons learned and a proposed treatment algorithm for uncertain scenarios.

    PubMed

    Sezgin, Billur; Ljohiy, Mbaraka; Akgol Gur, Sultan Tuna

    2016-12-01

    Uncertainty in the management of animal-inflicted injuries, especially in rural settings, usually results in a general approach of leaving all wounds to heal by secondary intention, which can lead to unsightly scarring and functional loss. This study focuses on the different circumstances dealt with by plastic surgeons in a rural setting in Turkey and aims to establish what the general approach should be through an analysis of a wide spectrum of patients. Between June 2013 and December 2014, 205 patients who presented to the emergency department with animal-inflicted injuries were retrospectively analysed. Patients who received a plastic surgery consultation were included in the analysis to determine which wounds require further attention. Patients with past animal-inflicted injuries who presented to the outpatient plastic surgery clinic with concerns such as non-healing open wounds or cosmetic or functional impairment were also evaluated. Statistical analysis demonstrated a significantly lower rate of infection in animal-inflicted open wounds (AIOWs) of patients who received a plastic surgery consultation from the emergency department than in those who presented to the outpatient clinic (P < 0.05). The main concern in the management of animal-inflicted wounds is their potential for infection, but this does not mean that every wound will become infected. The most important factor is being able to distinguish wounds with a higher potential for infection and to select the type of wound management accordingly. An algorithm is proposed as guidance for the management of AIOWs, covering the approach towards both domestic and stray animal-inflicted injuries.

  13. Applying the wisdom of stepping down inhaled corticosteroids in patients with COPD: a proposed algorithm for clinical practice

    PubMed Central

    Kaplan, Alan G

    2015-01-01

    Based on the aforementioned, this perspective article proposes an algorithm for the stepwise withdrawal of ICS in real-life clinical practice. PMID:26648711

  14. A Proposed Extension to the Soil Moisture and Ocean Salinity Level 2 Algorithm for Mixed Forest and Moderate Vegetation Pixels

    NASA Technical Reports Server (NTRS)

    Panciera, Rocco; Walker, Jeffrey P.; Kalma, Jetse; Kim, Edward

    2011-01-01

    The Soil Moisture and Ocean Salinity (SMOS) mission, launched in November 2009, provides global maps of soil moisture and ocean salinity by measuring the L-band (1.4 GHz) emission of the Earth's surface with a spatial resolution of 40-50 km. Uncertainty in the retrieval of soil moisture over large heterogeneous areas such as SMOS pixels is expected, due to the non-linearity of the relationship between soil moisture and the microwave emission. The current baseline soil moisture retrieval algorithm adopted by SMOS and implemented in the SMOS Level 2 (SMOS L2) processor partially accounts for the sub-pixel heterogeneity of the land surface by modelling the individual contributions of different pixel fractions to the overall pixel emission. This retrieval approach is tested in this study using airborne L-band data over an area the size of a SMOS pixel characterised by a mix of Eucalypt forest and moderate vegetation types (grassland and crops), with the objective of assessing its ability to correct for the soil moisture retrieval error induced by land surface heterogeneity. A preliminary analysis using a traditional uniform-pixel retrieval approach shows that the sub-pixel heterogeneity of land cover type causes significant errors in soil moisture retrieval (7.7% v/v RMSE, 2% v/v bias) in pixels characterised by a significant amount of forest (40-60%). Although the retrieval approach adopted by SMOS partially reduces this error, it remains affected by errors beyond the SMOS target accuracy, presenting in particular a strong dry bias when a fraction of the pixel is occupied by forest (4.1% v/v RMSE, -3.1% v/v bias). An extension to the SMOS approach is proposed that accounts for the heterogeneity of vegetation optical depth within the SMOS pixel. The proposed approach is shown to significantly reduce the error in retrieved soil moisture (2.8% v/v RMSE, -0.3% v/v bias) in pixels characterised by a critical amount of forest (40-60%), at the limited cost of only a crude estimate of the

  15. Novel non-invasive algorithm to identify the origins of re-entry and ectopic foci in the atria from 64-lead ECGs: A computational study

    PubMed Central

    Langley, Philip

    2017-01-01

    Atrial tachyarrhythmias, such as atrial fibrillation (AF), are characterised by irregular electrical activity in the atria, generally associated with erratic excitation underlain by re-entrant scroll waves, fibrillatory conduction of multiple wavelets or rapid focal activity. Epidemiological studies have shown an increase in AF prevalence in the developed world associated with an ageing society, highlighting the need for effective treatment options. Catheter ablation therapy, commonly used in the treatment of AF, requires spatial information on atrial electrical excitation. The standard 12-lead electrocardiogram (ECG) provides a method for non-invasive identification of the presence of arrhythmia, due to irregularity in the ECG signal associated with atrial activation compared to sinus rhythm, but has limitations in providing specific spatial information. There is therefore a pressing need to develop novel methods to identify and locate the origin of arrhythmic excitation. Invasive methods provide direct information on atrial activity, but may induce clinical complications. Non-invasive methods avoid such complications, but their development presents a greater challenge due to the non-direct nature of monitoring. Algorithms based on the ECG signals in multiple leads (e.g. a 64-lead vest) may provide a viable approach. In this study, we used a biophysically detailed model of the human atria and torso to investigate the correlation between the morphology of the ECG signals from a 64-lead vest and the location of the origin of rapid atrial excitation arising from rapid focal activity and/or re-entrant scroll waves. A focus-location algorithm was then constructed from this correlation. The algorithm had success rates of 93% and 76% for correctly identifying the origin of focal and re-entrant excitation, respectively, with a spatial resolution of 40 mm. The general approach allows its application to any multi-lead ECG system. This represents a significant extension to
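
    One way to realise such a focus-location algorithm is template matching: correlate a recorded multi-lead body-surface-potential pattern against a library of model-simulated patterns, one per candidate focus site, and report the best-matching site. The sketch below assumes the simulated library is given; it is a generic illustration, not the authors' published algorithm.

```python
import numpy as np

def locate_focus(bsp, library):
    """bsp: (n_leads, n_samples) recording; library: {site_name: same-shaped map}."""
    def corr(a, b):
        a = a.ravel() - a.mean()
        b = b.ravel() - b.mean()
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
    return max(library, key=lambda site: corr(bsp, library[site]))
```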

  16. Case 3018. Cervus gouazoubira Fischer, 1814 (currently Mazama gouazoubira; Mammalia, Artiodactyla): proposed conservation as the correct original spelling

    USGS Publications Warehouse

    Gardner, A.L.

    1999-01-01

    The purpose of this application is to conserve the spelling of the specific name of Cervus gouazoubira Fischer, 1814, for the brown brocket deer of South America (family Cervidae). This spelling, rather than the original gouazoupira, has been in virtually universal usage for almost 50 years.

  17. A new algorithm to diagnose atrial ectopic origin from multi lead ECG systems--insights from 3D virtual human atria and torso.

    PubMed

    Alday, Erick A Perez; Colman, Michael A; Langley, Philip; Butters, Timothy D; Higham, Jonathan; Workman, Antony J; Hancox, Jules C; Zhang, Henggui

    2015-01-01

    Rapid atrial arrhythmias such as atrial fibrillation (AF) predispose to ventricular arrhythmias, sudden cardiac death and stroke. Identifying the origin of atrial ectopic activity from the electrocardiogram (ECG) can help to diagnose the early onset of AF in a cost-effective manner. The complex and rapid atrial electrical activity during AF makes it difficult to obtain detailed information on atrial activation using the standard 12-lead ECG alone. Compared to conventional 12-lead ECG, more detailed ECG lead configurations may provide further information about spatio-temporal dynamics of the body surface potential (BSP) during atrial excitation. We apply a recently developed 3D human atrial model to simulate electrical activity during normal sinus rhythm and ectopic pacing. The atrial model is placed into a newly developed torso model which considers the presence of the lungs, liver and spinal cord. A boundary element method is used to compute the BSP resulting from atrial excitation. Elements of the torso mesh corresponding to the locations of the placement of the electrodes in the standard 12-lead and a more detailed 64-lead ECG configuration were selected. The ectopic focal activity was simulated at various origins across all the different regions of the atria. Simulated BSP maps during normal atrial excitation (i.e. sinoatrial node excitation) were compared to those observed experimentally (obtained from the 64-lead ECG system), showing a strong agreement between the evolution in time of the simulated and experimental data in the P-wave morphology of the ECG and dipole evolution. An algorithm to obtain the location of the stimulus from a 64-lead ECG system was developed. The algorithm presented had a success rate of 93%, meaning that it correctly identified the origin of atrial focus in 75/80 simulations, and involved a general approach relevant to any multi-lead ECG system. This represents a significant improvement over previously developed algorithms.

  18. A New Algorithm to Diagnose Atrial Ectopic Origin from Multi Lead ECG Systems - Insights from 3D Virtual Human Atria and Torso

    PubMed Central

    Alday, Erick A. Perez; Colman, Michael A.; Langley, Philip; Butters, Timothy D.; Higham, Jonathan; Workman, Antony J.; Hancox, Jules C.; Zhang, Henggui

    2015-01-01

    Rapid atrial arrhythmias such as atrial fibrillation (AF) predispose to ventricular arrhythmias, sudden cardiac death and stroke. Identifying the origin of atrial ectopic activity from the electrocardiogram (ECG) can help to diagnose the early onset of AF in a cost-effective manner. The complex and rapid atrial electrical activity during AF makes it difficult to obtain detailed information on atrial activation using the standard 12-lead ECG alone. Compared to conventional 12-lead ECG, more detailed ECG lead configurations may provide further information about spatio-temporal dynamics of the body surface potential (BSP) during atrial excitation. We apply a recently developed 3D human atrial model to simulate electrical activity during normal sinus rhythm and ectopic pacing. The atrial model is placed into a newly developed torso model which considers the presence of the lungs, liver and spinal cord. A boundary element method is used to compute the BSP resulting from atrial excitation. Elements of the torso mesh corresponding to the locations of the placement of the electrodes in the standard 12-lead and a more detailed 64-lead ECG configuration were selected. The ectopic focal activity was simulated at various origins across all the different regions of the atria. Simulated BSP maps during normal atrial excitation (i.e. sinoatrial node excitation) were compared to those observed experimentally (obtained from the 64-lead ECG system), showing a strong agreement between the evolution in time of the simulated and experimental data in the P-wave morphology of the ECG and dipole evolution. An algorithm to obtain the location of the stimulus from a 64-lead ECG system was developed. The algorithm presented had a success rate of 93%, meaning that it correctly identified the origin of atrial focus in 75/80 simulations, and involved a general approach relevant to any multi-lead ECG system. This represents a significant improvement over previously developed algorithms.

  19. Determination of origin and sugars of citrus fruits using genetic algorithm, correspondence analysis and partial least square combined with fiber optic NIR spectroscopy

    NASA Astrophysics Data System (ADS)

    Tewari, Jagdish C.; Dixit, Vivechana; Cho, Byoung-Kwan; Malik, Kamal A.

    2008-12-01

    The ability to confirm the variety or origin of citrus fruits and to estimate their sucrose, glucose and fructose content is of major interest to the citrus juice industry. A rapid classification and quantification technique was developed and validated for simultaneously and nondestructively quantifying the sugar constituents' concentrations and identifying the origin of citrus fruits, using Fourier Transform Near-Infrared (FT-NIR) spectroscopy in conjunction with an Artificial Neural Network (ANN) using a genetic algorithm, chemometrics, and Correspondence Analysis (CA). To achieve good classification accuracy and to present a wide range of concentrations of sucrose, glucose and fructose, we collected 22 different varieties of citrus fruits from the market during the entire citrus season. FT-NIR spectra were recorded in the NIR region from 1100 to 2500 nm using a fiber optic probe, and three types of data analysis were performed. Chemometric analysis using Partial Least Squares (PLS) was performed in order to determine the concentrations of the individual sugars. Artificial Neural Network analysis was performed for classification, i.e. origin or variety identification of the citrus fruits, using a genetic algorithm. Correspondence analysis was performed in order to visualize the relationships between the citrus fruits. To compute a PLS model based upon the reference values and to validate the developed method, high performance liquid chromatography (HPLC) was performed. The spectral range and the number of PLS factors were optimized for the lowest standard error of calibration (SEC) and prediction (SEP) and the highest correlation coefficient (R2). The calibration model developed was able to assess the sucrose, glucose and fructose contents in unknown citrus fruit up to an R2 value of 0.996-0.998. Factors F1 to F10 were optimized for correspondence analysis to visualize the relationships between citrus fruits based on the output values of the genetic algorithm. ANN and CA analysis showed excellent classification
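
    The PLS part of this workflow maps directly onto scikit-learn: cross-validate the number of latent factors against the HPLC reference values, then fit the final model. The arrays below are random placeholders standing in for the real spectra and reference data.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

X = np.random.rand(120, 700)   # placeholder spectra on a 1100-2500 nm grid
Y = np.random.rand(120, 3)     # placeholder HPLC sucrose/glucose/fructose values

best_rmse, best_n = float("inf"), 1
for n in range(1, 11):         # optimise the number of PLS factors
    pred = cross_val_predict(PLSRegression(n_components=n), X, Y, cv=10)
    rmse = float(np.sqrt(((pred - Y) ** 2).mean()))
    if rmse < best_rmse:
        best_rmse, best_n = rmse, n

model = PLSRegression(n_components=best_n).fit(X, Y)   # final calibration model
```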

  20. Diamond of Possibly Metallurgical and Seismic Origin: PART 3: Additional Specimens and a Proposal Calling for adjusted Methodologies for Diamondism

    NASA Astrophysics Data System (ADS)

    Giamn, M.

    2007-05-01

    , noniconic, nonstereotyping specimen population of primarily fine grains is needed. My theory accommodates (1) broad compositional ranges; (2) present or historical specimens; and (3) validity on a grain-by-grain scale as well as a regional scale. A great number of metallic elements are broadly similar to iron in crystal structure, phase equilibria, range of stoichiometry of solid solutions, and properties. Under favorable conditions, they could be as likely as iron to generate carbon. This expands the set of potential source metals for diamond. Further multiplying this number by alloying and centering (of lattice points) variations, the number of potential sources could be vast. The above-mentioned exercise is extendable to, for instance, Cr, Ni or other metals. This could provide a missing link between diamond in stable cratons and other diamonds. 1 Giamn, M., Diamond of possibly metallurgical and seismic origin in an alloy from the debris after earthquake Taiwan PART I, 2004 Eos AGU Spring. 2 Giamn, M., submitted to GCA. 3 Giamn, M., PART II (Thermal) past is present.

  1. Extraneous agents testing for substrates of avian origin and viral vaccines for poultry: current provisions and proposals for future approaches.

    PubMed

    Jungbäck, Carmen; Motitschke, Andreas

    2010-05-01

    This review gives an analysis of the current provisions of the Ph. Eur. and makes some proposals on how the requirements concerning the testing for extraneous agents could be modified to take into consideration the increase in quality that has been achieved over the past few decades.

  2. A proposed Kalman filter algorithm for estimation of unmeasured output variables for an F100 turbofan engine

    NASA Technical Reports Server (NTRS)

    Alag, Gurbux S.; Gilyard, Glenn B.

    1990-01-01

    To develop advanced control systems for optimizing aircraft engine performance, unmeasurable output variables must be estimated. The estimation has to be done in an uncertain environment and must be adaptable to varying degrees of modeling error and other variations in engine behavior over its operational life cycle. This paper presents an approach to estimating unmeasured output variables by explicitly modeling the effects of off-nominal engine behavior as biases on the measurable output variables. A state variable model accommodating off-nominal behavior is developed for the engine, and Kalman filter concepts are used to estimate the required variables. Results are presented from nonlinear engine simulation studies as well as from the application of the estimation algorithm to actual flight data. The formulation presented has a wide range of application since it is not restricted or tailored to the particular application described.
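
    A minimal numerical sketch of the bias-augmentation idea follows: the state vector is extended with output biases that model off-nominal behavior, and a standard Kalman filter estimates both. The dynamics, dimensions and noise levels are invented for illustration and are not the F100 model.

        import numpy as np

        n, m = 2, 2                                   # assumed state/output sizes
        A = np.array([[0.95, 0.10], [0.0, 0.90]])     # assumed engine dynamics
        H = np.eye(m, n)                              # measured outputs

        # Augmented model x_aug = [x; b]: biases follow a slow random walk and
        # enter the measurement as z = H x + b + v.
        F = np.block([[A, np.zeros((n, m))], [np.zeros((m, n)), np.eye(m)]])
        Haug = np.hstack([H, np.eye(m)])
        Q = np.diag([1e-3, 1e-3, 1e-5, 1e-5])         # process noise, slow bias drift
        R = 1e-2 * np.eye(m)                          # measurement noise

        def kf_step(x, P, z):
            x, P = F @ x, F @ P @ F.T + Q             # predict
            S = Haug @ P @ Haug.T + R
            K = P @ Haug.T @ np.linalg.inv(S)         # Kalman gain
            x = x + K @ (z - Haug @ x)                # update with measurement z
            P = (np.eye(n + m) - K @ Haug) @ P
            return x, P

        x, P = np.zeros(n + m), np.eye(n + m)
        x, P = kf_step(x, P, np.array([1.0, 0.5]))    # one filter step
        print(x)                                      # estimated states and biases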

  3. Authentication of the botanical origin of unifloral honey by infrared spectroscopy coupled with support vector machine algorithm

    NASA Astrophysics Data System (ADS)

    Lenhardt, L.; Zeković, I.; Dramićanin, T.; Tešić, Ž.; Milojković-Opsenica, D.; Dramićanin, M. D.

    2014-09-01

    In recent years, the potential of Fourier-transform infrared spectroscopy coupled with different chemometric tools in food analysis has been established. This technique is rapid, low cost, and reliable and requires little sample preparation. In this work, 130 Serbian unifloral honey samples (linden, acacia, and sunflower types) were analyzed using attenuated total reflectance infrared spectroscopy (ATR-IR). For each spectrum, 64 scans were recorded over the wavenumber range 4000-500 cm-1 at a spectral resolution of 4 cm-1. These spectra were analyzed using principal component analysis (PCA), and the calculated principal components were then used for support vector machine (SVM) training. In this way, a pattern-recognition tool is obtained for building a classification model to determine the botanical origin of honey. The PCA was used to analyze the results and to check whether separation exists between groups of different honey types. Using the SVM, the classification model was built and classification errors were estimated. This technique proved adequate for determining the botanical origin of honey, with a success rate of 98.6%. Based on these results, it can be concluded that this technique offers many possibilities for future rapid qualitative analysis of honey.
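
    A minimal sketch of the PCA-then-SVM pipeline, assuming a matrix X with one ATR-IR spectrum per row and integer labels y for the three honey types; the data here are random placeholders and the component count and kernel settings are assumptions.

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.svm import SVC
        from sklearn.pipeline import make_pipeline
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        X = rng.normal(size=(130, 875))    # 130 samples, one value per wavenumber
        y = rng.integers(0, 3, size=130)   # stand-in labels: linden/acacia/sunflower

        # Principal-component scores feed the SVM, mirroring the PCA -> SVM
        # training described in the abstract.
        model = make_pipeline(PCA(n_components=10), SVC(kernel="rbf", C=1.0))
        print(cross_val_score(model, X, y, cv=5).mean())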

  4. The Biochemical Origin of Pain – Proposing a new law of Pain: The origin of all Pain is Inflammation and the Inflammatory Response PART 1 of 3 – A unifying law of pain

    PubMed Central

    2009-01-01

    We are proposing a unifying theory or law of pain, which states: The origin of all pain is inflammation and the inflammatory response. The biochemical mediators of inflammation include cytokines, neuropeptides, growth factors and neurotransmitters. Irrespective of the type of pain, whether it is acute or chronic pain, peripheral or central pain, nociceptive or neuropathic pain, the underlying origin is inflammation and the inflammatory response. Activation of pain receptors, transmission and modulation of pain signals, neuroplasticity and central sensitization are all one continuum of inflammation and the inflammatory response. Irrespective of the characteristic of the pain, whether it is sharp, dull, aching, burning, stabbing, numbing or tingling, all pain arises from inflammation and the inflammatory response. We are proposing a re-classification and treatment of pain syndromes based upon their inflammatory profile. Treatment of pain syndromes should be based on these principles: (1) determination of the inflammatory profile of the pain syndrome; (2) inhibition or suppression of production of the appropriate inflammatory mediators, e.g. with inflammatory mediator blockers or surgical intervention where appropriate; (3) inhibition or suppression of neuronal afferent and efferent (motor) transmission, e.g. with anti-seizure drugs or local anesthetic blocks; (4) modulation of neuronal transmission, e.g. with opioid medication. At the L.A. Pain Clinic, we have successfully treated a variety of pain syndromes by utilizing these principles. This theory of the biochemical origin of pain is compatible with, inclusive of, and unifies existing theories and knowledge of the mechanism of pain including the gate control theory, and theories of pre-emptive analgesia, windup and central sensitization. PMID:17240081

  5. The biochemical origin of pain--proposing a new law of pain: the origin of all pain is inflammation and the inflammatory response. Part 1 of 3--a unifying law of pain.

    PubMed

    Omoigui, Sota

    2007-01-01

    We are proposing a unifying theory or law of pain, which states: the origin of all pain is inflammation and the inflammatory response. The biochemical mediators of inflammation include cytokines, neuropeptides, growth factors and neurotransmitters. Irrespective of the type of pain, whether it is acute or chronic pain, peripheral or central pain, nociceptive or neuropathic pain, the underlying origin is inflammation and the inflammatory response. Activation of pain receptors, transmission and modulation of pain signals, neuroplasticity and central sensitization are all one continuum of inflammation and the inflammatory response. Irrespective of the characteristic of the pain, whether it is sharp, dull, aching, burning, stabbing, numbing or tingling, all pain arises from inflammation and the inflammatory response. We are proposing a re-classification and treatment of pain syndromes based upon their inflammatory profile. Treatment of pain syndromes should be based on these principles: 1. Determination of the inflammatory profile of the pain syndrome; 2. Inhibition or suppression of production of the appropriate inflammatory mediators, e.g. with inflammatory mediator blockers or surgical intervention where appropriate; 3. Inhibition or suppression of neuronal afferent and efferent (motor) transmission, e.g. with anti-seizure drugs or local anesthetic blocks; 4. Modulation of neuronal transmission, e.g. with opioid medication. At the L.A. Pain Clinic, we have successfully treated a variety of pain syndromes by utilizing these principles. This theory of the biochemical origin of pain is compatible with, inclusive of, and unifies existing theories and knowledge of the mechanism of pain including the gate control theory, and theories of pre-emptive analgesia, windup and central sensitization.

  6. A proposed algorithm for multimodal liver trauma management from a surgical trauma audit in a western European trauma center.

    PubMed

    Di Saverio, S; Sibilio, A; Coniglio, C; Bianchi, E; Biscardi, A; Villani, S; Gordini, G; Tugnoli, G

    2014-11-01

    Management of liver trauma is challenging and may vary widely given the heterogeneity of liver injuries' anatomical configuration, the hemodynamic status, and the settings and resources available. Non-operative management (NOM) may have potential drawbacks, and the role of damage control surgery (DCS) and angioembolization represents a major evolving concept.1 The most severe liver trauma in polytrauma patients accounts for significant morbidity and mortality. Major liver trauma with extensive parenchymal injury and uncontrollable bleeding is therefore a challenge for the trauma team. However, safe and effective surgical hemostasis and a carefully planned multidisciplinary approach can improve the outcome of severe liver trauma. The technique of perihepatic packing, according to the DCS approach, is often required to achieve fast, early and effective control of hemorrhage in the highest grades of liver trauma and in unstable patients. A systematic and standardized technique of perihepatic packing may contribute to improved hemostatic efficacy and overall outcomes when wisely combined in a stepwise "sandwich" multimodal approach. The DCS philosophy evolved alongside damage control resuscitation (DCR) in the management of trauma patients, requiring close interaction between surgery and resuscitation. Therefore, as a result of a combined surgical and critical care clinical audit activity in our western European trauma center, a practical algorithm for multimodal sequential management of liver trauma has been developed, based on a historical cohort of 253 liver trauma patients and subsequently validated on a prospective cohort of 135 patients in the period 2010-2013.

  7. Proposed standardized definitions for vertical resolution and uncertainty in the NDACC lidar ozone and temperature algorithms - Part 1: Vertical resolution

    NASA Astrophysics Data System (ADS)

    Leblanc, Thierry; Sica, Robert J.; van Gijsel, Joanna A. E.; Godin-Beekmann, Sophie; Haefele, Alexander; Trickl, Thomas; Payen, Guillaume; Gabarrot, Frank

    2016-08-01

    A standardized approach for the definition and reporting of vertical resolution of the ozone and temperature lidar profiles contributing to the Network for the Detection of Atmospheric Composition Change (NDACC) database is proposed. Two standardized definitions homogeneously and unequivocally describing the impact of vertical filtering are recommended. The first proposed definition is based on the width of the response to a finite-impulse-type perturbation. The response is computed by convolving the filter coefficients with an impulse function, namely, a Kronecker delta function for smoothing filters, and a Heaviside step function for derivative filters. Once the response has been computed, the proposed standardized definition of vertical resolution is given by Δz = δz × HFWHM, where δz is the lidar's sampling resolution and HFWHM is the full width at half maximum (FWHM) of the response, measured in sampling intervals. The second proposed definition relates to digital filtering theory. After applying a Laplace transform to a set of filter coefficients, the filter's gain characterizing the effect of the filter on the signal in the frequency domain is computed, from which the cut-off frequency fC, defined as the frequency at which the gain equals 0.5, is derived. Vertical resolution is then defined by Δz = δz/(2fC). Unlike common practice in the field of spectral analysis, a factor 2fC instead of fC is used here to yield vertical resolution values nearly equal to the values obtained with the impulse response definition using the same filter coefficients. When using either of the proposed definitions, unsmoothed signals yield the best possible vertical resolution Δz = δz (one sampling bin). Numerical tools were developed to support the implementation of these definitions across all NDACC lidar groups. The tools consist of ready-to-use "plug-in" routines written in several programming languages that can be inserted into any lidar data processing software and
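
    The two definitions can be made concrete with a few lines of code. The sketch below evaluates both for an assumed 5-point boxcar smoothing filter and a 75 m sampling resolution; the crude integer-bin FWHM is for illustration only.

        import numpy as np

        c = np.ones(5) / 5.0                 # assumed smoothing-filter coefficients
        dz = 75.0                            # assumed sampling resolution (m)

        # Definition 1: FWHM of the response to a Kronecker delta (for a
        # smoothing filter the impulse response is the coefficients themselves).
        resp = c / c.max()
        above = np.where(resp >= 0.5)[0]
        hfwhm = above[-1] - above[0] + 1     # FWHM in sampling intervals
        print("impulse-response definition:", dz * hfwhm, "m")

        # Definition 2: cut-off frequency fC where the filter gain drops to 0.5,
        # then delta_z = dz / (2 fC).
        f = np.linspace(1e-4, 0.5, 10000)    # frequency in cycles per sample
        gain = np.abs([(c * np.exp(-2j * np.pi * fk * np.arange(c.size))).sum()
                       for fk in f])
        fc = f[np.argmax(gain <= 0.5)]       # first frequency with gain <= 0.5
        print("cut-off-frequency definition:", dz / (2 * fc), "m")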

  8. Gacs quantum algorithmic entropy in infinite dimensional Hilbert spaces

    SciTech Connect

    Benatti, Fabio; Oskouei, Samad Khabbazi; Deh Abad, Ahmad Shafiei

    2014-08-15

    We extend the notion of Gacs quantum algorithmic entropy, originally formulated for finitely many qubits, to infinite dimensional quantum spin chains and investigate the relation of this extension with two quantum dynamical entropies that have been proposed in recent years.

  9. Proposal of a new diagnostic algorithm for hepatocellular carcinoma based on the Japanese guidelines but adapted to the Western world for patients under surveillance for chronic liver disease.

    PubMed

    Renzulli, Matteo; Golfieri, Rita

    2016-01-01

    To date, despite much scientific evidence, the guidelines of the principal hepatological societies, such as the American Association for the Study of Liver Diseases, the European Association for the Study of the Liver, and the Asian Pacific Association for the Study of the Liver, do not recognize the diagnostic superiority of magnetic resonance imaging (MRI) over computed tomography in the diagnosis of hepatocellular carcinoma (HCC) and, for the most part, do not contemplate the use of hepatospecific contrast media, such as gadolinium-ethoxybenzyl-diethylenetriamine pentaacetic acid (EOB). The aim of this paper was to analyze the recent results of EOB-MRI in the study of chronic liver disease and the differences between the American Association for the Study of Liver Diseases and the Japan Society of Hepatology guidelines, of which the latter represent the most consolidated experience with EOB-MRI use for HCC diagnosis. Finally, a new diagnostic algorithm for HCC in patients under surveillance for chronic liver disease was formulated, which contemplates the use of EOB. This new diagnostic algorithm is based on the Japan Society of Hepatology algorithm but goes beyond it by adapting it to the Western world, taking into account both the differences between the two settings and the latest results concerning the diagnosis of HCC. This new diagnostic algorithm for HCC is proposed in order to provide useful diagnostic tools to all those Western countries where the use of EOB (more expensive than extracellular contrast media) is widespread but where common strategies to manage the nodules that this new contrast agent allows to be identified have not been available to date.

  10. Real-time application of critical dimension measurement of TFT-LCD pattern using a newly proposed 2D image-processing algorithm

    NASA Astrophysics Data System (ADS)

    Lee, Jeong-Ho; Kim, You-Sik; Kim, Sung-Ryoung; Lee, Il-Hwan; Pahk, Heui-Jae

    2008-07-01

    A critical dimension measurement system for TFT-LCD patterns has been implemented in this study. To improve the measurement accuracy, an image-based auto-focus algorithm, a fast pattern-matching algorithm, and a precise edge detection algorithm with subpixel accuracy have been developed and implemented in the system. The optimum focusing position can be calculated using the image focus estimator. A two-step auto-focusing technique has been newly proposed for various LCD patterns, and various focus estimators have been compared in order to select a stable and accurate one. Fast pattern matching and subpixel edge detection have been developed for measurement. The new pattern-matching approach, called NEMC, is based on edge detection for the selection of influential points; only points having a strong edge magnitude are used in the matching procedure. To accelerate pattern matching, point correlation and an image pyramid structure are combined. Edge detection is the most important technique in a vision inspection system. A two-stage edge detection algorithm has been introduced. In the first stage, a first-order derivative operator such as the Sobel operator is used to locate the edge points and to find the edge directions using a least-squares estimation method with pixel accuracy. In the second stage, an eight-connected neighborhood of the estimated edge points is convolved with the LoG (Laplacian of Gaussian) operator, and the LoG-filtered image can be modeled as a continuous function using the facet model. The measurement results for the various patterns are finally presented. The developed system has been successfully used in the TFT-LCD manufacturing industry, and a repeatability of less than 30 nm (3σ) can be obtained with a very fast inspection time.
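
    In the same spirit as the two-stage detector described above, the sketch below uses a Sobel stage for pixel-level candidates and a Laplacian-of-Gaussian stage for refinement; the threshold and sigma are assumptions, not the cited system's parameters, and the facet-model subpixel fit is omitted.

        import numpy as np
        from scipy import ndimage

        def detect_edges(image, grad_thresh=50.0, sigma=1.0):
            gx = ndimage.sobel(image, axis=1, output=float)
            gy = ndimage.sobel(image, axis=0, output=float)
            candidates = np.hypot(gx, gy) > grad_thresh    # stage 1: Sobel magnitude

            # Stage 2: zero crossings of the LoG response refine the location.
            log = ndimage.gaussian_laplace(image.astype(float), sigma=sigma)
            zero_cross = np.sign(log) != np.sign(np.roll(log, 1, axis=1))
            return candidates & zero_cross

        img = np.zeros((64, 64)); img[:, 32:] = 255.0      # synthetic step edge
        print(np.argwhere(detect_edges(img))[:3])          # edge pixels near column 32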

  11. Segmentation of MRI Brain Images with an Improved Harmony Searching Algorithm

    PubMed Central

    Yang, Zhang; Li, Guo; Weifeng, Ding

    2016-01-01

    The harmony searching (HS) algorithm is an optimization search algorithm currently applied to many practical problems. The HS algorithm iteratively revises the variables in the harmony memory and the probability of selecting different values, so that the iteration converges on an optimal solution. Accordingly, this study proposed a modified algorithm to improve the efficiency of the original. First, a rough set algorithm was employed to improve the convergence and accuracy of the HS algorithm. Then, the optimal value was obtained using the improved HS algorithm. This optimal value of convergence was employed as the initial value of the fuzzy clustering algorithm for segmenting magnetic resonance imaging (MRI) brain images. Experimental results showed that the improved HS algorithm attained better convergence and more accurate results than the original HS algorithm. In our study, the MRI image segmentation effect of the improved algorithm was superior to that of the original fuzzy clustering method. PMID:27403428

  12. Segmentation of MRI Brain Images with an Improved Harmony Searching Algorithm.

    PubMed

    Yang, Zhang; Shufan, Ye; Li, Guo; Weifeng, Ding

    2016-01-01

    The harmony searching (HS) algorithm is an optimization search algorithm currently applied to many practical problems. The HS algorithm iteratively revises the variables in the harmony memory and the probability of selecting different values, so that the iteration converges on an optimal solution. Accordingly, this study proposed a modified algorithm to improve the efficiency of the original. First, a rough set algorithm was employed to improve the convergence and accuracy of the HS algorithm. Then, the optimal value was obtained using the improved HS algorithm. This optimal value of convergence was employed as the initial value of the fuzzy clustering algorithm for segmenting magnetic resonance imaging (MRI) brain images. Experimental results showed that the improved HS algorithm attained better convergence and more accurate results than the original HS algorithm. In our study, the MRI image segmentation effect of the improved algorithm was superior to that of the original fuzzy clustering method.
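
    For readers unfamiliar with the method, a bare-bones harmony search loop is sketched below for a generic objective; the parameter names (HMS, HMCR, PAR) follow the standard formulation, and all values are illustrative assumptions rather than those of the study.

        import numpy as np

        def harmony_search(f, lo, hi, dim, hms=20, hmcr=0.9, par=0.3,
                           bw=0.05, iters=2000, seed=0):
            rng = np.random.default_rng(seed)
            hm = rng.uniform(lo, hi, size=(hms, dim))      # harmony memory
            cost = np.apply_along_axis(f, 1, hm)
            for _ in range(iters):
                new = np.empty(dim)
                for j in range(dim):
                    if rng.random() < hmcr:                # memory consideration
                        new[j] = hm[rng.integers(hms), j]
                        if rng.random() < par:             # pitch adjustment
                            new[j] += bw * (hi - lo) * rng.uniform(-1, 1)
                    else:                                  # random selection
                        new[j] = rng.uniform(lo, hi)
                new = np.clip(new, lo, hi)
                worst = np.argmax(cost)
                if f(new) < cost[worst]:                   # replace worst harmony
                    hm[worst], cost[worst] = new, f(new)
            best = np.argmin(cost)
            return hm[best], cost[best]

        # Example: minimize the sphere function in 5 dimensions.
        print(harmony_search(lambda x: np.sum(x**2), -5.0, 5.0, 5))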

  13. Cryptococcal antigen screening and preemptive therapy in patients initiating antiretroviral therapy in resource-limited settings: a proposed algorithm for clinical implementation.

    PubMed

    Jarvis, Joseph N; Govender, Nelesh; Chiller, Tom; Park, Benjamin J; Longley, Nicky; Meintjes, Graeme; Bekker, Linda-Gail; Wood, Robin; Lawn, Stephen D; Harrison, Thomas S

    2012-01-01

    HIV-associated cryptococcal meningitis (CM) is estimated to cause over half a million deaths annually in Africa. Many of these deaths are preventable. Screening patients for subclinical cryptococcal infection at the time of entry into antiretroviral therapy programs using cryptococcal antigen (CRAG) immunoassays is highly effective in identifying patients at risk of developing CM, allowing these patients to then be targeted with "preemptive" therapy to prevent the development of severe disease. Such CRAG screening programs are currently being implemented in a number of countries; however, a strong evidence base and clear guidance on how to manage patients with subclinical cryptococcal infection identified by screening are lacking. We review the available evidence and propose a treatment algorithm for the management of patients with asymptomatic cryptococcal antigenemia.

  14. Comparison of Proposed Modified and Original Sequential Organ Failure Assessment Scores in Predicting ICU Mortality: A Prospective, Observational, Follow-Up Study

    PubMed Central

    Gholipour Baradari, Afshin; Daneshiyan, Maryam; Aarabi, Mohsen; Talebiyan Kiakolaye, Yaser; Nouraei, Seyed Mahmood; Zamani Kiasari, Alieh; Habibi, Mohammad Reza; Emami Zeydi, Amir; Sadeghi, Faegheh

    2016-01-01

    Background. The sequential organ failure assessment (SOFA) score has been recommended for triaging critically ill patients in the intensive care unit (ICU). This study aimed to compare the performance of our proposed MSOFA and the original SOFA scores in predicting ICU mortality. Methods. This prospective observational study was conducted on 250 patients admitted to the ICU. Both scores were calculated at ICU admission and at 24 and 48 hours after admission. Diagnostic odds ratios and receiver operating characteristic (ROC) curves were used to compare the two scores. Results. MSOFA and SOFA predicted mortality similarly, with areas under the ROC curve of 0.837, 0.992, and 0.977 for MSOFA 1, MSOFA 2, and MSOFA 3, respectively, and 0.857, 0.988, and 0.988 for SOFA 1, SOFA 2, and SOFA 3, respectively. The sensitivity and specificity of MSOFA 1 at a cut-off point of 8 were 82.9% and 68.4%, respectively; of MSOFA 2 at a cut-off point of 9.5, 94.7% and 97.1%, respectively; and of MSOFA 3 at a cut-off point of 9.3, 97.4% and 93.1%, respectively. There was a significant positive correlation between the MSOFA and SOFA scores at admission (r: 0.942), 24 hours (r: 0.972), and 48 hours (r: 0.960). Conclusion. The proposed MSOFA and the SOFA scores had high diagnostic accuracy, sensitivity, and specificity for predicting mortality. PMID:28116220

  15. Genomic and phylogenetic analyses of an adenovirus isolated from a corn snake (Elaphe guttata) imply a common origin with members of the proposed new genus Atadenovirus.

    PubMed

    Farkas, Szilvia L; Benko, Mária; Elo, Péter; Ursu, Krisztina; Dán, Adám; Ahne, Winfried; Harrach, Balázs

    2002-10-01

    Approximately 60% of the genome of an adenovirus isolated from a corn snake (Elaphe guttata) was cloned and sequenced. The results of homology searches showed that the genes of the corn snake adenovirus (SnAdV-1) were closest to their counterparts in members of the recently proposed new genus Atadenovirus. In phylogenetic analyses of the complete hexon and protease genes, SnAdV-1 indeed clustered together with the atadenoviruses. The characteristic features in the genome organization of SnAdV-1 included the presence of a gene homologous to that for protein p32K, the lack of structural proteins V and IX and the absence of homologues of the E1A and E3 regions. These characteristics are in accordance with the genus-defining markers of atadenoviruses. Comparison of the cleavage sites of the viral protease in core protein pVII also confirmed SnAdV-1 as a candidate member of the genus Atadenovirus. Thus, the hypothesis on the possible reptilian origin of atadenoviruses (Harrach, Acta Veterinaria Hungarica 48, 484-490, 2000) seems to be supported. However, the base composition of DNA sequence (>18 kb) determined from the SnAdV-1 genome showed an equilibrated GC content of 51%, which is unusual for an atadenovirus.

  16. Classification of neuropathic pain in cancer patients: A Delphi expert survey report and EAPC/IASP proposal of an algorithm for diagnostic criteria.

    PubMed

    Brunelli, Cinzia; Bennett, Michael I; Kaasa, Stein; Fainsinger, Robin; Sjøgren, Per; Mercadante, Sebastiano; Løhre, Erik T; Caraceni, Augusto

    2014-12-01

    Neuropathic pain (NP) in cancer patients lacks standards for diagnosis. This study is aimed at reaching consensus on the application of the International Association for the Study of Pain (IASP) special interest group for neuropathic pain (NeuPSIG) criteria to the diagnosis of NP in cancer patients and on the relevance of patient-reported outcome (PRO) descriptors for the screening of NP in this population. An international group of 42 experts was invited to participate in a consensus process through a modified 2-round Internet-based Delphi survey. Relevant topics investigated were: peculiarities of NP in patients with cancer, IASP NeuPSIG diagnostic criteria adaptation and assessment, and standardized PRO assessment for NP screening. Median consensus scores (MED) and interquartile ranges (IQR) were calculated to measure expert consensus after both rounds. Twenty-nine experts answered, and good agreement was found on the statement "the pathophysiology of NP due to cancer can be different from non-cancer NP" (MED=9, IQR=2). Satisfactory consensus was reached for the first 3 NeuPSIG criteria (pain distribution, history, and sensory findings; MEDs⩾8, IQRs⩽3), but not for the fourth one (diagnostic test/imaging; MED=6, IQR=3). Agreement was also reached on clinical examination by soft brush or pin stimulation (MEDs⩾7 and IQRs⩽3) and on the use of PRO descriptors for NP screening (MED=8, IQR=3). Based on the study results, a clinical algorithm for NP diagnostic criteria in cancer patients with pain was proposed. Clinical research on PRO in the screening phase and on the application of the algorithm will be needed to examine their effectiveness in classifying NP in cancer patients.

  17. An improved NAS-RIF algorithm for image restoration

    NASA Astrophysics Data System (ADS)

    Gao, Weizhe; Zou, Jianhua; Xu, Rong; Liu, Changhai; Li, Hengnian

    2016-10-01

    Space optical images are inevitably degraded by atmospheric turbulence, errors of the optical system, and motion. In order to recover the true image, a novel nonnegativity and support constraints recursive inverse filtering (NAS-RIF) algorithm is proposed to restore the degraded image. First, the image noise is weakened by a Contourlet denoising algorithm. Second, reliable estimation of the object support region is used to accelerate the algorithm's convergence; we introduce optimal threshold segmentation to improve the object support region. Finally, an object construction limit and a logarithm function are added to enhance the algorithm's stability. Experimental results demonstrate that the proposed algorithm increases the PSNR and improves the quality of the restored images. The convergence speed of the proposed algorithm is faster than that of the original NAS-RIF algorithm.
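
    A toy one-dimensional sketch of the underlying NAS-RIF idea: learn an inverse FIR filter u so that the restored signal u * g is nonnegative inside a known support region, small outside it, and has unit DC gain. The signal, blur and support are invented, the gradient is numerical for brevity, and the Contourlet and segmentation refinements above are not reproduced.

        import numpy as np

        truth = np.zeros(64); truth[20:40] = 1.0             # assumed original signal
        psf = np.array([0.25, 0.5, 0.25])                    # assumed blur kernel
        g = np.convolve(truth, psf, mode="same")             # observed blurred signal
        support = np.zeros(64, bool); support[18:42] = True  # assumed support region

        def cost(u):
            f_hat = np.convolve(g, u, mode="same")
            neg_inside = np.minimum(f_hat[support], 0.0)     # negativity penalty
            leak_outside = f_hat[~support]                   # out-of-support penalty
            return (np.sum(neg_inside ** 2) + np.sum(leak_outside ** 2)
                    + (u.sum() - 1.0) ** 2)                  # unit-DC-gain constraint

        u = np.zeros(9); u[4] = 0.5                          # deliberately poor start
        for _ in range(300):                                 # plain gradient descent
            grad = np.array([(cost(u + 1e-5 * e) - cost(u)) / 1e-5
                             for e in np.eye(u.size)])
            u -= 1e-3 * grad
        print(cost(u))                                       # cost decreases toward 0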

  18. A novel extended kernel recursive least squares algorithm.

    PubMed

    Zhu, Pingping; Chen, Badong; Príncipe, José C

    2012-08-01

    In this paper, a novel extended kernel recursive least squares algorithm is proposed, combining the kernel recursive least squares algorithm and the Kalman filter or its extensions to estimate or predict signals. Unlike the extended kernel recursive least squares (Ex-KRLS) algorithm proposed by Liu, the state model of our algorithm is still constructed in the original state space and the hidden state is estimated using the Kalman filter. The measurement model used in hidden state estimation is learned by the kernel recursive least squares algorithm (KRLS) in a reproducing kernel Hilbert space (RKHS). The novel algorithm has more flexible state and noise models. We apply this algorithm to vehicle tracking and nonlinear Rayleigh fading channel tracking, and compare the tracking performance with that of other existing algorithms.
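
    A compact, non-sparsified KRLS sketch is given below: recursive kernel ridge regression grown one sample at a time via block matrix inversion. The Gaussian kernel width and regularization are assumptions, and neither the Kalman coupling of the proposed method nor Liu's Ex-KRLS state model is reproduced here.

        import numpy as np

        def gauss(a, b, s=1.0):
            return np.exp(-np.sum((a - b) ** 2) / (2 * s * s))

        class KRLS:
            def __init__(self, lam=1e-2):
                self.lam, self.X, self.y, self.Q = lam, [], [], None

            def update(self, x, y):
                if self.Q is None:
                    self.Q = np.array([[1.0 / (gauss(x, x) + self.lam)]])
                else:
                    k = np.array([gauss(xi, x) for xi in self.X])
                    b = self.Q @ k                       # A^{-1} k
                    gamma = gauss(x, x) + self.lam - k @ b
                    n = len(self.X)
                    Q = np.empty((n + 1, n + 1))         # block matrix inverse
                    Q[:n, :n] = self.Q + np.outer(b, b) / gamma
                    Q[:n, n] = Q[n, :n] = -b / gamma
                    Q[n, n] = 1.0 / gamma
                    self.Q = Q
                self.X.append(np.atleast_1d(x)); self.y.append(y)

            def predict(self, x):
                k = np.array([gauss(xi, x) for xi in self.X])
                return k @ (self.Q @ np.array(self.y))

        model = KRLS()
        for t in np.linspace(0.0, 6.0, 60):              # learn y = sin(t) online
            model.update(np.array([t]), np.sin(t))
        print(model.predict(np.array([1.0])), np.sin(1.0))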

  19. A fuzzy clustering algorithm to detect planar and quadric shapes

    NASA Technical Reports Server (NTRS)

    Krishnapuram, Raghu; Frigui, Hichem; Nasraoui, Olfa

    1992-01-01

    In this paper, we introduce a new fuzzy clustering algorithm to detect an unknown number of planar and quadric shapes in noisy data. The proposed algorithm is computationally and implementationally simple, and it overcomes many of the drawbacks of the existing algorithms that have been proposed for similar tasks. Since the clustering is performed in the original image space, and since no features need to be computed, this approach is particularly suited for sparse data. The algorithm may also be used in pattern recognition applications.

  20. Improved algorithm for hyperspectral data dimension determination

    NASA Astrophysics Data System (ADS)

    Chen, Jie; Du, Lei; Li, Jing; Han, Yachao; Gao, Zihong

    2017-02-01

    The correlation between adjacent bands of hyperspectral image data is relatively strong, but signal coexists with noise. The HySime (hyperspectral signal identification by minimum error) algorithm, based on the principle of least squares, is designed to estimate the noise and the signal correlation matrix. The algorithm is effective when the noise estimate is accurate, but ineffective when the noise estimate is obtained from a spectral dimension reduction and de-correlation process. This paper proposes an improved HySime algorithm based on a noise-whitening process. It first applies noise whitening to the original data, instead of removing noise pixel by pixel, obtains an accurate estimate of the noise covariance matrix, and then uses the HySime algorithm to calculate the signal correlation matrix, improving the precision of the results. Experiments with both simulated and real data show that: first, the improved HySime algorithm is more accurate and stable than the original HySime algorithm; second, the improved HySime algorithm's results are more consistent under different conditions than those of the classic noise subspace projection algorithm (NSP); finally, the improved HySime algorithm improves adaptability to non-white image noise through the noise-whitening process.
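
    The whitening step itself is short; the sketch below colors the noise of a synthetic data set, whitens it with the (here, exactly known) noise covariance, and counts signal eigenvalues. In practice the covariance would come from the noise estimation stage, and all sizes here are arbitrary.

        import numpy as np

        rng = np.random.default_rng(0)
        bands, pixels = 30, 500
        signal = rng.normal(size=(pixels, 5)) @ rng.normal(size=(5, bands))
        Cn = np.diag(rng.uniform(0.01, 0.2, bands))        # colored-noise covariance
        Y = signal + rng.multivariate_normal(np.zeros(bands), Cn, pixels)

        L = np.linalg.cholesky(Cn)                         # Cn = L L^T
        Yw = Y @ np.linalg.inv(L).T                        # whitened: noise cov ~ I

        # With unit noise variance, data-correlation eigenvalues clearly above 1
        # indicate signal directions (a crude stand-in for the HySime criterion).
        eig = np.linalg.eigvalsh(Yw.T @ Yw / pixels)
        print(int(np.sum(eig > 1.5)))                      # estimated subspace size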

  1. Algorithms and Algorithmic Languages.

    ERIC Educational Resources Information Center

    Veselov, V. M.; Koprov, V. M.

    This paper is intended as an introduction to a number of problems connected with the description of algorithms and algorithmic languages, particularly the syntaxes and semantics of algorithmic languages. The terms "letter, word, alphabet" are defined and described. The concept of the algorithm is defined and the relation between the algorithm and…

  2. Variable neighbourhood simulated annealing algorithm for capacitated vehicle routing problems

    NASA Astrophysics Data System (ADS)

    Xiao, Yiyong; Zhao, Qiuhong; Kaku, Ikou; Mladenovic, Nenad

    2014-04-01

    This article presents the variable neighbourhood simulated annealing (VNSA) algorithm, a variant of variable neighbourhood search (VNS) combined with simulated annealing (SA), for efficiently solving capacitated vehicle routing problems (CVRPs). In the new algorithm, the deterministic 'move or not' criterion of the original VNS algorithm regarding incumbent replacement is replaced by an SA probability, and the neighbourhood shifting of the original VNS (from near to far by k ← k+1) is replaced by a neighbourhood shaking procedure following a specified rule. A geographical neighbourhood structure is introduced for constructing the neighbourhood structures of the string-model CVRP. The proposed algorithm is tested against 39 well-known benchmark CVRP instances of different scales (small/middle, large, very large). The results show that the VNSA algorithm outperforms most existing algorithms in terms of computational effectiveness and efficiency, showing good performance in solving large and very large CVRPs.
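
    The SA-style 'move or not' rule at the heart of the hybrid can be stated in a few lines; the costs, bounds and cooling rate below are illustrative assumptions, and no actual CVRP neighbourhood is implemented.

        import math, random

        def accept(cost_new, cost_cur, temperature):
            # Accept improvements always; accept worse moves with SA probability.
            delta = cost_new - cost_cur
            return delta <= 0 or random.random() < math.exp(-delta / temperature)

        cost, T = 100.0, 10.0
        for step in range(50):
            candidate = cost + random.uniform(-5.0, 5.0)   # stand-in shaken route cost
            if accept(candidate, cost, T):
                cost = candidate
            T *= 0.95                                      # geometric cooling
        print(round(cost, 2))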

  3. Proposed standardized definitions for vertical resolution and uncertainty in the NDACC lidar ozone and temperature algorithms - Part 3: Temperature uncertainty budget

    NASA Astrophysics Data System (ADS)

    Leblanc, Thierry; Sica, Robert J.; van Gijsel, Joanna A. E.; Haefele, Alexander; Payen, Guillaume; Liberti, Gianluigi

    2016-08-01

    A standardized approach for the definition, propagation, and reporting of uncertainty in the temperature lidar data products contributing to the Network for the Detection of Atmospheric Composition Change (NDACC) database is proposed. One important aspect of the proposed approach is the ability to propagate all independent uncertainty components in parallel through the data processing chain. The individual uncertainty components are then combined together at the very last stage of processing to form the temperature combined standard uncertainty. The identified uncertainty sources comprise major components such as signal detection, saturation correction, background noise extraction, temperature tie-on at the top of the profile, and absorption by ozone if working in the visible spectrum, as well as other components such as molecular extinction, the acceleration of gravity, and the molecular mass of air, whose magnitudes depend on the instrument, data processing algorithm, and altitude range of interest. The expression of the individual uncertainty components and their step-by-step propagation through the temperature data processing chain are thoroughly estimated, taking into account the effect of vertical filtering and the merging of multiple channels. All sources of uncertainty except detection noise imply correlated terms in the vertical dimension, which means that covariance terms must be taken into account when vertical filtering is applied and when temperature is integrated from the top of the profile. Quantitatively, the uncertainty budget is presented in a generic form (i.e., as a function of instrument performance and wavelength), so that any NDACC temperature lidar investigator can easily estimate the expected impact of individual uncertainty components in the case of their own instrument. Using this standardized approach, an example of uncertainty budget is provided for the Jet Propulsion Laboratory (JPL) lidar at Mauna Loa Observatory, Hawai'i, which is
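
    The parallel-propagation idea can be illustrated numerically: an uncorrelated component is filtered through squared coefficients (variances add), a fully correlated component through the raw coefficients (covariance terms dominate), and the two are combined in quadrature only at the end. All profiles and magnitudes below are invented.

        import numpy as np

        c = np.ones(5) / 5.0                        # vertical smoothing filter (assumed)
        u_det = 0.5 * np.ones(100)                  # detection-noise component (K)
        u_tie = 2.0 * np.exp(-np.arange(100) / 30)  # correlated tie-on component (K)

        # Uncorrelated: filter the variances with squared coefficients.
        u_det_f = np.sqrt(np.convolve(u_det ** 2, c ** 2, mode="same"))
        # Fully correlated: covariance terms make the filter act on u directly.
        u_tie_f = np.convolve(u_tie, c, mode="same")

        u_total = np.hypot(u_det_f, u_tie_f)        # combined standard uncertainty
        print(u_total[:3].round(3))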

  4. An improved NAS-RIF algorithm for blind image restoration

    NASA Astrophysics Data System (ADS)

    Liu, Ning; Jiang, Yanbin; Lou, Shuntian

    2007-01-01

    Image restoration is widely applied in many areas, but when operating on images with different scales for the representation of pixel intensity levels, or on images with low SNR, the traditional restoration algorithm loses validity, inducing noise amplification, ringing artifacts and poor convergence. In this paper, an improved NAS-RIF algorithm is proposed to overcome the shortcomings of the traditional algorithm. The improved algorithm introduces a new cost function which adds a space-adaptive regularization term and a non-unity gain of the adaptive filter. In determining the support region, a pre-segmentation is used to fit it closely to the object in the image. Compared with the traditional algorithm, simulations show that the improved algorithm achieves better convergence and noise resistance and provides a better estimate of the original image.

  5. [Accomplishments in the Last Year Against the Objectives Laid Out in the Original Proposal; the Current Status of the Research; the Work to Go in the Next Year; and Publications

    NASA Technical Reports Server (NTRS)

    Elliot, James

    2005-01-01

    Below is the annual progress report (through 2005-01-31) on NASA Grant NNG04GF25G. It is organized according to: (I) Accomplishments in the last year against the objectives laid out in the original proposal; (II) The current status of the research; (III) The work to go in the next year; (IV) Publications. Since this program is a continuation of the occultation work supported in a predecessor grant, the "Accomplishments" section lists all the tasks written into the proposal (in June 2003) through the end of the first year of the new grant.

  6. A Cuckoo Search Algorithm for Multimodal Optimization

    PubMed Central

    2014-01-01

    Interest in multimodal optimization is expanding rapidly, since many practical engineering problems demand the localization of multiple optima within a search space. On the other hand, the cuckoo search (CS) algorithm is a simple and effective global optimization algorithm which cannot be directly applied to solve multimodal optimization problems. This paper proposes a new multimodal optimization algorithm called the multimodal cuckoo search (MCS). Under MCS, the original CS is enhanced with multimodal capacities by means of (1) the incorporation of a memory mechanism to efficiently register potential local optima according to their fitness value and the distance to other potential solutions, (2) the modification of the original CS individual selection strategy to accelerate the detection process of new local minima, and (3) the inclusion of a depuration procedure to cyclically eliminate duplicated memory elements. The performance of the proposed approach is compared to several state-of-the-art multimodal optimization algorithms considering a benchmark suite of fourteen multimodal problems. Experimental results indicate that the proposed strategy is capable of providing better and even more consistent performance over existing well-known multimodal algorithms for the majority of test problems yet avoiding any serious computational deterioration. PMID:25147850

  7. A cuckoo search algorithm for multimodal optimization.

    PubMed

    Cuevas, Erik; Reyna-Orta, Adolfo

    2014-01-01

    Interest in multimodal optimization is expanding rapidly, since many practical engineering problems demand the localization of multiple optima within a search space. On the other hand, the cuckoo search (CS) algorithm is a simple and effective global optimization algorithm which cannot be directly applied to solve multimodal optimization problems. This paper proposes a new multimodal optimization algorithm called the multimodal cuckoo search (MCS). Under MCS, the original CS is enhanced with multimodal capacities by means of (1) the incorporation of a memory mechanism to efficiently register potential local optima according to their fitness value and the distance to other potential solutions, (2) the modification of the original CS individual selection strategy to accelerate the detection process of new local minima, and (3) the inclusion of a depuration procedure to cyclically eliminate duplicated memory elements. The performance of the proposed approach is compared to several state-of-the-art multimodal optimization algorithms considering a benchmark suite of fourteen multimodal problems. Experimental results indicate that the proposed strategy is capable of providing better and even more consistent performance over existing well-known multimodal algorithms for the majority of test problems yet avoiding any serious computational deterioration.
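
    To make the baseline concrete, a bare-bones cuckoo search (global Levy-flight phase plus nest abandonment) is sketched below; all parameter values are illustrative assumptions and none of the MCS memory, selection or depuration mechanisms is included.

        import math
        import numpy as np

        def levy(dim, rng, beta=1.5):
            # Mantegna's algorithm for Levy-stable step lengths.
            num = math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
            den = math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
            sigma = (num / den) ** (1 / beta)
            return rng.normal(0, sigma, dim) / np.abs(rng.normal(0, 1, dim)) ** (1 / beta)

        def cuckoo_search(f, lo, hi, dim, n=15, pa=0.25, iters=1000, seed=0):
            rng = np.random.default_rng(seed)
            nests = rng.uniform(lo, hi, (n, dim))
            fit = np.apply_along_axis(f, 1, nests)
            for _ in range(iters):
                best = nests[np.argmin(fit)]
                for i in range(n):                      # Levy flight around each nest
                    cand = np.clip(nests[i] + 0.01 * levy(dim, rng) * (nests[i] - best),
                                   lo, hi)
                    if f(cand) < fit[i]:
                        nests[i], fit[i] = cand, f(cand)
                worst = np.argsort(fit)[-int(pa * n):]  # abandon the worst nests
                nests[worst] = rng.uniform(lo, hi, (worst.size, dim))
                fit[worst] = np.apply_along_axis(f, 1, nests[worst])
            return nests[np.argmin(fit)], fit.min()

        print(cuckoo_search(lambda x: np.sum(x ** 2), -5.0, 5.0, 3))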

  8. [The origins of the regulation of prostitution in contemporary Spain from Cabarrús's proposal (1792) to the Madrid Regulations (1847)].

    PubMed

    Guereña, J L

    1995-01-01

    The publication in 1847 of the Reglamento para la represión de los excesos de la prostitución en Madrid (Regulations for the repression of the excesses of prostitution in Madrid) inaugurated an era of regulated prostitution in Spain, which followed upon a period of abolitionism decreed by Philip IV. In view of the spread of prostitution and venereal diseases, police measures, and especially medical measures, were both considered in the development of these regulations, which had first been proposed by the Count of Cabarrús in 1792. Although completely confidential, the new system of regulations, drawn up in 1847, set the stage for the wide-reaching regulation of prostitution that came into effect in several cities in Spain during and after the mid-nineteenth century, and which included city residence and periodic health surveillance for prostitutes.

  9. A new algorithmic approach for fingers detection and identification

    NASA Astrophysics Data System (ADS)

    Mubashar Khan, Arslan; Umar, Waqas; Choudhary, Taimoor; Hussain, Fawad; Haroon Yousaf, Muhammad

    2013-03-01

    Gesture recognition is concerned with the goal of interpreting human gestures through mathematical algorithms. Gestures can originate from any bodily motion or state but commonly originate from the face or hand. Hand gesture detection in a real-time environment, where time and memory are important issues, is a critical operation. Hand gesture recognition largely depends on the accurate detection of the fingers. This paper presents a new algorithmic approach to detect and identify the fingers of the human hand. The proposed algorithm does not depend upon prior knowledge of the scene. It detects the active fingers and the metacarpophalangeal (MCP) joints of the inactive fingers from an already detected hand. A dynamic thresholding technique and a connected-component labeling scheme are employed for background elimination and hand detection, respectively. The algorithm proposes a new approach for finger identification in a real-time environment, keeping the memory and time requirements as low as possible.
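
    The background-elimination and hand-detection steps map naturally onto a threshold plus connected-component labelling; the sketch below uses a crude mean-plus-std threshold as a stand-in for the paper's dynamic thresholding and keeps the largest component as the hand.

        import numpy as np
        from scipy import ndimage

        def largest_component(gray):
            mask = gray > gray.mean() + gray.std()     # stand-in dynamic threshold
            labels, n = ndimage.label(mask)            # connected-component labelling
            if n == 0:
                return mask
            sizes = ndimage.sum(mask, labels, range(1, n + 1))
            return labels == (1 + int(np.argmax(sizes)))

        frame = np.zeros((120, 160)); frame[30:90, 40:80] = 200.0  # synthetic "hand"
        print(largest_component(frame).sum())          # pixel count of detected hand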

  10. Originator dynamics

    PubMed Central

    Manapat, Michael; Ohtsuki, Hisashi; Bürger, Reinhard; Nowak, Martin A.

    2009-01-01

    We study the origin of evolution. Evolution is based on replication, mutation, and selection. But how does evolution begin? When do chemical kinetics turn into evolutionary dynamics? We propose “prelife” and “prevolution” as the logical precursors of life and evolution. Prelife generates sequences of variable length. Prelife is a generative chemistry that proliferates information and produces diversity without replication. The resulting “prevolutionary dynamics” have mutation and selection. We propose an equation that allows us to investigate the origin of evolution. In one limit, this “originator equation” gives the classical selection equation. In the other limit, we obtain “prelife.” There is competition between life and prelife and there can be selection for or against replication. Simple prelife equations with uniform rate constants have the property that longer sequences are exponentially less frequent than shorter ones. But replication can reverse such an ordering. As the replication rate increases, some longer sequences can become more frequent than shorter ones. Thus, replication can lead to “reversals” in the equilibrium portraits. We study these reversals, which mark the transition from prelife to life in our model. If the replication potential exceeds a critical value, then life replicates into existence. PMID:18996397

  11. An original approach to fill the gap in the earthquake disaster experience - a proposal for 'the archive of the quake experience' -

    NASA Astrophysics Data System (ADS)

    Tanaka, Y.; Hirayama, Y.; Kuroda, S.; Yoshida, M.

    2015-12-01

    People without severe disaster experience inevitably forget even an extraordinary one like 3.11 as time passes. Therefore, to build a resilient society, an ingenious effort to keep people's memory of disaster from fading away is necessary. Since 2011, we have been carrying out earthquake disaster drills for residents of high-rise apartments, for schoolchildren, for citizens of coastal areas, etc. Using a portable earthquake simulator (1), the drill consists of three parts: first, a short lecture explaining the characteristic quakes Japanese people can expect in the future; second, a reliving experience of major earthquakes that have hit Japan since 1995; and third, a short lecture on preparations that can be made at home and/or in an office. For the quake experience, although the motion is two-dimensional, real earthquake observation records are used to control the simulator so that people can relive different kinds of earthquakes, including the long-period motion of skyscrapers. Feedback on the drill is always positive, because participants understand that reliving the quake experience with proper lectures is one of the best methods to communicate past disasters to their families and to pass them on to the next generation. There are several kinds of disaster archives serving as inheritance, such as pictures, movies, documents, interviews, and so on. In addition to them, here we propose to construct 'the archive of the quake experience', which compiles observed data ready to be relived with the simulator. We would like to show some movies of our quake drill in the presentation. Reference: (1) Kuroda, S. et al. (2012), "Development of portable earthquake simulator for enlightenment of disaster preparedness", 15th World Conference on Earthquake Engineering 2012, Vol. 12, 9412-9420.

  12. Algorithm for shortest path search in Geographic Information Systems by using reduced graphs.

    PubMed

    Rodríguez-Puente, Rafael; Lazo-Cortés, Manuel S

    2013-01-01

    The use of Geographic Information Systems has increased considerably since the eighties and nineties. Among their most demanding applications is shortest-path search. Several studies of shortest-path search show the feasibility of using graphs for this purpose. Dijkstra's algorithm is one of the classic shortest-path search algorithms, but it is not well suited to shortest-path search in large graphs. This is the reason why various modifications to Dijkstra's algorithm have been proposed by several authors, using heuristics to reduce the run time of shortest-path search. One of the most used heuristic algorithms is the A* algorithm, whose main goal is to reduce the run time by reducing the search space. This article proposes a modification of Dijkstra's shortest-path search algorithm for reduced graphs. It shows that the cost of the path found in this work is equal to the cost of the path found using Dijkstra's algorithm in the original graph. The results of finding the shortest path by applying the proposed algorithm, Dijkstra's algorithm and the A* algorithm are compared. This comparison shows that, by applying the approach proposed, it is possible to obtain the optimal path in a similar or even shorter time than when using heuristic algorithms.
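
    For reference, the textbook heap-based Dijkstra that the modification starts from is sketched below; the reduced-graph construction itself is not reproduced.

        import heapq

        def dijkstra(graph, source):
            # graph: {node: [(neighbour, weight), ...]} with nonnegative weights.
            dist = {source: 0.0}
            heap = [(0.0, source)]
            while heap:
                d, u = heapq.heappop(heap)
                if d > dist.get(u, float("inf")):
                    continue                           # stale heap entry
                for v, w in graph.get(u, []):
                    nd = d + w
                    if nd < dist.get(v, float("inf")):
                        dist[v] = nd
                        heapq.heappush(heap, (nd, v))
            return dist

        g = {"a": [("b", 2), ("c", 5)], "b": [("c", 1)], "c": []}
        print(dijkstra(g, "a"))                        # {'a': 0.0, 'b': 2.0, 'c': 3.0}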

  13. Generalized Pattern Search Algorithm for Peptide Structure Prediction

    PubMed Central

    Nicosia, Giuseppe; Stracquadanio, Giovanni

    2008-01-01

    Finding the near-native structure of a protein is one of the most important open problems in structural biology and biological physics. The problem becomes dramatically more difficult when a given protein has no regular secondary structure or does not show a fold similar to structures already known. This situation occurs frequently when we need to predict the tertiary structure of small molecules, called peptides. In this research work, we propose a new ab initio algorithm, the generalized pattern search algorithm, based on the well-known class of Search-and-Poll algorithms. We performed an extensive set of simulations over a well-known set of 44 peptides to investigate the robustness and reliability of the proposed algorithm, and we compared the predicted peptide conformations with those of a state-of-the-art algorithm for peptide structure prediction known as PEPstr. In particular, we tested the algorithm on the instances proposed by the originators of PEPstr, to validate the proposed algorithm; the experimental results confirm that the generalized pattern search algorithm outperforms PEPstr by 21.17% in terms of average root mean-square deviation, RMSD Cα. PMID:18487293

  14. The original SPF10 LiPA25 algorithm is more sensitive and suitable for epidemiologic HPV research than the SPF10 INNO-LiPA Extra.

    PubMed

    Geraets, Daan T; Struijk, Linda; Kleter, Bernhard; Molijn, Anco; van Doorn, Leen-Jan; Quint, Wim G V; Colau, Brigitte

    2015-04-01

    Two commercial HPV tests target the same 65 bp fragment of the human papillomavirus genome (designated SPF10): the original HPV SPF10 PCR-DEIA-LiPA25 system, version 1, (LiPA25) and the INNO-LiPA HPV Genotyping Extra (INNO-LiPA). The original SPF10 LiPA25 system was designed to have high analytical sensitivity and has been applied in HPV vaccine and epidemiology studies worldwide. However, due to apparent similarities, this test can easily be confused with INNO-LiPA, a more recent assay whose intended use, i.e., epidemiological or clinical, is currently unclear. The aim was to compare the analytical sensitivity of SPF10 LiPA25 to that of INNO-LiPA at the level of general HPV detection and genotyping. HPV testing by both assays was performed on the same DNA isolated from cervical swab (n = 365) and biopsy (n = 42) specimens. In cervical swabs, SPF10 LiPA25 and INNO-LiPA identified 35.3% and 29.3% multiple infections, 52.6% and 51.5% single infections, and no HPV type in 12.1% and 19.2%, respectively. Genotyping results were 64.7% identical, 26.0% compatible and 9.3% discordant between both methods. SPF10 LiPA25 detected significantly more genotypes (p < 0.001). The higher analytical sensitivity of SPF10 LiPA25 was confirmed by the MPTS123 genotyping assay. HPV positivity by the general probes in SPF10 DEIA was significantly higher (87.9%) than by those on INNO-LiPA (77.0%) (kappa = 0.592, p < 0.001). In cervical biopsies, SPF10 LiPA25 and INNO-LiPA identified 21.4% and 9.5% multiple types, 76.2% and 81.0% single types, and no type in 2.4% and 9.5%, respectively. Between both tests, the identification of genotypes was 76.3% identical, 14.3% compatible and 9.5% discordant. Overall, significantly more genotypes were detected by SPF10 LiPA25 (kappa = 0.853, p = 0.022). HPV positivity was higher by the SPF10 DEIA (97.6%) than by the INNO-LiPA strip (92.9%). These results demonstrate that SPF10 LiPA25 is more suitable for HPV genotyping in epidemiologic and vaccine

  15. A hybrid artificial bee colony algorithm for numerical function optimization

    NASA Astrophysics Data System (ADS)

    Alqattan, Zakaria N.; Abdullah, Rosni

    2015-02-01

    The Artificial Bee Colony (ABC) algorithm is one of the swarm intelligence algorithms; it was introduced by Karaboga in 2005. It is a meta-heuristic optimization search algorithm inspired by the intelligent foraging behavior of honey bees in nature. Its unique search process makes it competitive with other search algorithms in the area of optimization, such as the Genetic Algorithm (GA) and Particle Swarm Optimization (PSO). However, the performance of ABC's local search process and its bee movement (solution improvement) equation still has some weaknesses. ABC is good at avoiding entrapment in local optima, but it spends its time searching around unpromising, randomly selected solutions. Inspired by PSO, we propose a hybrid particle-movement ABC algorithm called HPABC, which adapts the particle movement process to improve the exploration of the original ABC algorithm. Numerical benchmark functions were used to experimentally test the HPABC algorithm. The results illustrate that the HPABC algorithm can outperform the ABC algorithm in most of the experiments (75% better in accuracy and over 3 times faster).

  16. Optimal Golomb Ruler Sequences Generation for Optical WDM Systems: A Novel Parallel Hybrid Multi-objective Bat Algorithm

    NASA Astrophysics Data System (ADS)

    Bansal, Shonak; Singh, Arun Kumar; Gupta, Neena

    2017-02-01

    In real life, multi-objective engineering design problems are very tough and time-consuming optimization problems due to their high degree of nonlinearity, complexity and inhomogeneity. Nature-inspired multi-objective optimization algorithms are now becoming popular for solving multi-objective engineering design problems. This paper proposes an original multi-objective Bat algorithm (MOBA) and its extended form, namely a novel parallel hybrid multi-objective Bat algorithm (PHMOBA), to generate shortest-length Golomb rulers, called optimal Golomb ruler (OGR) sequences, at a reasonable computation time. OGRs find application in optical wavelength division multiplexing (WDM) systems as a channel-allocation algorithm to reduce four-wave mixing (FWM) crosstalk. The performance of both proposed algorithms in generating OGRs for optical WDM channel allocation is compared with that of other existing classical computing and nature-inspired algorithms, including extended quadratic congruence (EQC), search algorithm (SA), genetic algorithms (GAs), biogeography based optimization (BBO) and big bang-big crunch (BB-BC) optimization algorithms. Simulations conclude that the proposed parallel hybrid multi-objective Bat algorithm works more efficiently than the original multi-objective Bat algorithm and other existing algorithms in generating OGRs for optical WDM systems. The PHMOBA algorithm has a higher convergence and success rate than the original MOBA. The efficiency improvement of the proposed PHMOBA in generating OGRs up to 20 marks, in terms of ruler length and total optical channel bandwidth (TBW), is 100%, whereas for the original MOBA it is 85%. Finally, the implications for further research are also discussed.

  17. Optimal Golomb Ruler Sequences Generation for Optical WDM Systems: A Novel Parallel Hybrid Multi-objective Bat Algorithm

    NASA Astrophysics Data System (ADS)

    Bansal, Shonak; Singh, Arun Kumar; Gupta, Neena

    2016-07-01

    In real life, multi-objective engineering design problems are very tough and time-consuming optimization problems due to their high degree of nonlinearity, complexity and inhomogeneity. Nature-inspired multi-objective optimization algorithms are now becoming popular for solving multi-objective engineering design problems. This paper proposes an original multi-objective Bat algorithm (MOBA) and its extended form, namely a novel parallel hybrid multi-objective Bat algorithm (PHMOBA), to generate shortest-length Golomb rulers, called optimal Golomb ruler (OGR) sequences, at a reasonable computation time. OGRs find application in optical wavelength division multiplexing (WDM) systems as a channel-allocation algorithm to reduce four-wave mixing (FWM) crosstalk. The performance of both proposed algorithms in generating OGRs for optical WDM channel allocation is compared with that of other existing classical computing and nature-inspired algorithms, including extended quadratic congruence (EQC), search algorithm (SA), genetic algorithms (GAs), biogeography based optimization (BBO) and big bang-big crunch (BB-BC) optimization algorithms. Simulations conclude that the proposed parallel hybrid multi-objective Bat algorithm works more efficiently than the original multi-objective Bat algorithm and other existing algorithms in generating OGRs for optical WDM systems. The PHMOBA algorithm has a higher convergence and success rate than the original MOBA. The efficiency improvement of the proposed PHMOBA in generating OGRs up to 20 marks, in terms of ruler length and total optical channel bandwidth (TBW), is 100%, whereas for the original MOBA it is 85%. Finally, the implications for further research are also discussed.
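
    Independent of the metaheuristic used, any OGR search must verify the defining property that all pairwise mark differences on the ruler are distinct; a small checker makes that property explicit.

        from itertools import combinations

        def is_golomb(marks):
            diffs = [b - a for a, b in combinations(sorted(marks), 2)]
            return len(diffs) == len(set(diffs))

        print(is_golomb([0, 1, 4, 9, 11]))   # True: an optimal 5-mark ruler, length 11
        print(is_golomb([0, 1, 2, 4]))       # False: differences 1 and 2 repeat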

  18. An efficient cuckoo search algorithm for numerical function optimization

    NASA Astrophysics Data System (ADS)

    Ong, Pauline; Zainuddin, Zarita

    2013-04-01

    The cuckoo search algorithm, which reproduces the breeding strategy of the best-known brood-parasitic bird, the cuckoo, has demonstrated its superiority in obtaining global solutions to numerical optimization problems. However, the fixed-step approach involved in its exploration and exploitation behavior might slow down the search process considerably. In this regard, an improved cuckoo search algorithm with adaptive step-size adjustment is introduced and its feasibility is validated on a variety of benchmarks. The obtained results show that the proposed scheme outperforms the standard cuckoo search algorithm in terms of convergence characteristics while preserving the fascinating features of the original method.

  19. Molecular diagnosis of distal renal tubular acidosis in Tunisian patients: proposed algorithm for Northern Africa populations for the ATP6V1B1, ATP6V0A4 and SLC4A1 genes

    PubMed Central

    2013-01-01

    Background Primary distal renal tubular acidosis (dRTA) caused by mutations in the genes that code for the H+-ATPase pump subunits is a heterogeneous disease with a poor phenotype-genotype correlation. Up to now, large cohorts of Tunisian dRTA patients have not been analyzed, and molecular defects may differ from those described in other ethnicities. We aim to identify the molecular defects present in the ATP6V1B1, ATP6V0A4 and SLC4A1 genes in a Tunisian cohort, according to the following algorithm: first, ATP6V1B1 gene analysis in dRTA patients with sensorineural hearing loss (SNHL) or unknown hearing status; afterwards, ATP6V0A4 gene study in dRTA patients with normal hearing, and in those without any structural mutation in the ATP6V1B1 gene despite presenting SNHL; finally, analysis of the SLC4A1 gene in those patients with a negative result for the previous studies. Methods 25 children (19 boys) with dRTA from 20 families of Tunisian origin were studied. DNAs were extracted by the standard phenol/chloroform method. Molecular analysis was performed by PCR amplification and direct sequencing. Results In the index cases, ATP6V1B1 gene screening resulted in a mutation detection rate of 81.25%, which increased up to 95% after ATP6V0A4 gene analysis. Three ATP6V1B1 mutations were observed: one frameshift mutation (c.1155dupC; p.Ile386fs) in exon 12; a G-to-C single-nucleotide substitution at the acceptor splicing site (c.175-1G>C; p.?) in intron 2; and one novel missense mutation (c.1102G>A; p.Glu368Lys) in exon 11. We also report four mutations in the ATP6V0A4 gene: one single-nucleotide deletion in exon 13 (c.1221delG; p.Met408Cysfs*10); the nonsense c.16C>T; p.Arg6*, in exon 3; and the missense changes c.1739T>C; p.Met580Thr, in exon 17 and c.2035G>T; p.Asp679Tyr, in exon 19. Conclusion Molecular diagnosis of the ATP6V1B1 and ATP6V0A4 genes was performed in a large Tunisian cohort with dRTA. We identified three different ATP6V1

  20. Memory-hazard-aware k-buffer algorithm for order-independent transparency rendering.

    PubMed

    Zhang, Nan

    2014-02-01

    The k-buffer algorithm is an efficient GPU-based fragment level sorting algorithm for rendering transparent surfaces. Because of the inherent massive parallelism of GPU stream processors, this algorithm suffers serious read-after-write memory hazards now. In this paper, we introduce an improved k-buffer algorithm with error correction coding to combat memory hazards. Our algorithm results in significantly reduced artifacts. While preserving all the merits of the original algorithm, it requires merely OpenGL 3.x support from the GPU, instead of the atomic operations appearing only in the latest OpenGL 4.2 standard. Our algorithm is simple to implement and efficient in performance. Future GPU support for improving this algorithm is also proposed.

  1. Memory-Hazard-Aware K-Buffer Algorithm for Order-Independent Transparency Rendering.

    PubMed

    Zhang, Nan

    2013-04-04

    The k-buffer algorithm is an efficient GPU based fragment level sorting algorithm for rendering transparent surfaces. Because of the inherent massive parallelism of GPU stream processors, this algorithm suffers serious read-after-write memory hazards now. In this paper, we introduce an improved k-buffer algorithm with error correction coding to combat memory hazards. Our algorithm results in significantly reduced artifacts. While preserving all the merits of the original algorithm, it requires merely OpenGL 3.x support from the GPU, instead of the atomic operations appearing only in the latest OpenGL 4.2 standard. Our algorithm is simple to implement and efficient in performance. Future GPU support for improving this algorithm is also proposed.

  2. Evolving a Nelder-Mead Algorithm for Optimization with Genetic Programming.

    PubMed

    Fajfar, Iztok; Puhan, Janez; Bűrmen, Árpád

    2016-01-25

    We used genetic programming to evolve a direct search optimization algorithm, similar to that of the standard downhill simplex optimization method proposed by Nelder and Mead (1965). In the training process, we used several ten-dimensional quadratic functions with randomly displaced parameters and different randomly generated starting simplices. The genetically obtained optimization algorithm showed overall better performance than the original Nelder-Mead method on a standard set of test functions. We observed that many parts of the genetically produced algorithm were seldom or never executed, which allowed us to greatly simplify the algorithm by removing the redundant parts. The resulting algorithm turns out to be considerably simpler than the original Nelder-Mead method while still performing better than the original method.
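
    For reference, the core downhill-simplex step that the genetic programming run evolves variants of is sketched below, with the standard 1965 reflection, expansion, and contraction coefficients. This is the textbook step, not the evolved algorithm from the paper.

        import numpy as np

        def nelder_mead_step(simplex, f, alpha=1.0, gamma=2.0, rho=0.5):
            simplex = sorted(simplex, key=f)              # best vertex first
            centroid = np.mean(simplex[:-1], axis=0)      # centroid excluding the worst
            worst = simplex[-1]
            reflected = centroid + alpha * (centroid - worst)
            if f(reflected) < f(simplex[0]):              # better than best: try expanding
                expanded = centroid + gamma * (reflected - centroid)
                simplex[-1] = expanded if f(expanded) < f(reflected) else reflected
            elif f(reflected) < f(simplex[-2]):           # accept the reflection
                simplex[-1] = reflected
            else:                                         # contract toward the centroid
                simplex[-1] = centroid + rho * (worst - centroid)
            return simplex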

  3. Proposed standardized definitions for vertical resolution and uncertainty in the NDACC lidar ozone and temperature algorithms - Part 2: Ozone DIAL uncertainty budget

    NASA Astrophysics Data System (ADS)

    Leblanc, Thierry; Sica, Robert J.; van Gijsel, Joanna A. E.; Godin-Beekmann, Sophie; Haefele, Alexander; Trickl, Thomas; Payen, Guillaume; Liberti, Gianluigi

    2016-08-01

    A standardized approach for the definition, propagation, and reporting of uncertainty in the ozone differential absorption lidar data products contributing to the Network for the Detection for Atmospheric Composition Change (NDACC) database is proposed. One essential aspect of the proposed approach is the propagation in parallel of all independent uncertainty components through the data processing chain before they are combined together to form the ozone combined standard uncertainty. The independent uncertainty components contributing to the overall budget include random noise associated with signal detection, uncertainty due to saturation correction, background noise extraction, the absorption cross sections of O3, NO2, SO2, and O2, the molecular extinction cross sections, and the number densities of the air, NO2, and SO2. The expression of the individual uncertainty components and their step-by-step propagation through the ozone differential absorption lidar (DIAL) processing chain are thoroughly estimated. All sources of uncertainty except detection noise imply correlated terms in the vertical dimension, which requires knowledge of the covariance matrix when the lidar signal is vertically filtered. In addition, the covariance terms must be taken into account if the same detection hardware is shared by the lidar receiver channels at the absorbed and non-absorbed wavelengths. The ozone uncertainty budget is presented as much as possible in a generic form (i.e., as a function of instrument performance and wavelength) so that all NDACC ozone DIAL investigators across the network can estimate, for their own instrument and in a straightforward manner, the expected impact of each reviewed uncertainty component. In addition, two actual examples of full uncertainty budget are provided, using nighttime measurements from the tropospheric ozone DIAL located at the Jet Propulsion Laboratory (JPL) Table Mountain Facility, California, and nighttime measurements from the JPL
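
    The final combination step described above, independent components propagated in parallel and then merged, amounts to summing the per-altitude variances, since standard uncertainties of independent sources combine in quadrature. A minimal sketch; the function and array names are ours, and it ignores the covariance terms the abstract notes are needed when the signal is vertically filtered.

        import numpy as np

        def combined_standard_uncertainty(components):
            # components: list of per-altitude 1-sigma arrays for independent
            # sources (detection noise, saturation correction, background,
            # absorption cross sections, ...). Assumes no cross-correlations.
            return np.sqrt(np.sum(np.square(np.asarray(components)), axis=0))

        u_total = combined_standard_uncertainty([
            np.array([1.0, 2.0]),    # e.g. detection noise
            np.array([0.5, 0.5]),    # e.g. O3 cross-section term
        ])                           # -> [1.118, 2.062]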

  4. An infrared maritime target detection algorithm applicable to heavy sea fog

    NASA Astrophysics Data System (ADS)

    Wang, Bin; Dong, Lili; Zhao, Ming; Wu, Houde; Ji, Yuanyuan; Xu, Wenhai

    2015-07-01

    Infrared maritime images taken in heavy sea fog (HSF) are usually nonuniform in brightness distribution, and targets in different regions differ significantly in local contrast, which makes it difficult for conventional target detection algorithms to remove background clutter and extract targets. To address this problem, this paper proposes a new target detection algorithm based on image region division and wavelet inter-subband correlation. The algorithm first divides the original image into different regions using the adaptive thresholding method OTSU. Then, wavelet threshold denoising is adopted to suppress noise in the subbands. Finally, the real target is extracted according to its inter-subband correlation and its local singularity in the original image. Experimental results show that this algorithm overcomes the brightness nonuniformity and background clutter to extract all targets accurately. Moreover, the targets' areas are well retained. The proposed algorithm therefore has high practical value in maritime target search with infrared imaging systems.

  5. A Novel Hybrid Firefly Algorithm for Global Optimization

    PubMed Central

    Zhang, Lina; Liu, Liqiang; Yang, Xin-She; Dai, Yuntao

    2016-01-01

    Global optimization is challenging to solve due to its nonlinearity and multimodality. Traditional algorithms such as the gradient-based methods often struggle to deal with such problems and one of the current trends is to use metaheuristic algorithms. In this paper, a novel hybrid population-based global optimization algorithm, called hybrid firefly algorithm (HFA), is proposed by combining the advantages of both the firefly algorithm (FA) and differential evolution (DE). FA and DE are executed in parallel to promote information sharing among the population and thus enhance searching efficiency. In order to evaluate the performance and efficiency of the proposed algorithm, a diverse set of selected benchmark functions are employed and these functions fall into two groups: unimodal and multimodal. The experimental results show better performance of the proposed algorithm compared to the original version of the firefly algorithm (FA), differential evolution (DE) and particle swarm optimization (PSO) in the sense of avoiding local minima and increasing the convergence rate. PMID:27685869
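
    The FA half of the hybrid relies on the standard attraction move, in which each firefly drifts toward every brighter one with distance-damped attractiveness. A sketch of that move under common parameter conventions (beta0, gamma, alpha are the usual names; the values are illustrative, not those tuned in the paper):

        import numpy as np

        def firefly_move(pop, brightness, beta0=1.0, gamma=1.0, alpha=0.2, rng=None):
            # pop: (n, d) positions; brightness: higher is better.
            rng = rng or np.random.default_rng()
            new = pop.copy()
            n, d = pop.shape
            for i in range(n):
                for j in range(n):
                    if brightness[j] > brightness[i]:      # j is brighter: attract i
                        r2 = np.sum((pop[j] - pop[i]) ** 2)
                        beta = beta0 * np.exp(-gamma * r2)
                        new[i] += beta * (pop[j] - pop[i]) \
                                  + alpha * rng.uniform(-0.5, 0.5, d)
            return new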

  6. A fast non-local image denoising algorithm

    NASA Astrophysics Data System (ADS)

    Dauwe, A.; Goossens, B.; Luong, H. Q.; Philips, W.

    2008-02-01

    In this paper we propose several improvements to the original non-local means algorithm introduced by Buades et al., which obtains state-of-the-art denoising results. The strength of this algorithm is to exploit the repetitive character of the image in order to denoise it, unlike conventional denoising algorithms, which typically operate in a local neighbourhood. Due to the enormous number of weight computations, the original algorithm has a high computational cost. One improvement in image quality over the original algorithm is to ignore the contributions from dissimilar windows. Even though their weights are very small at first sight, the estimated pixel value can be severely biased by the many small contributions. This bad influence of dissimilar windows can be eliminated by setting their corresponding weights to zero. Using a preclassification based on the first three statistical moments, only contributions from similar neighborhoods are computed. To decide whether a window is similar or dissimilar, we derive thresholds for images corrupted with additive white Gaussian noise. Our accelerated approach is further optimized by taking advantage of the symmetry in the weights, which roughly halves the computation time, and by using a lookup table to speed up the weight computations. Compared to the original algorithm, our proposed method produces images with increased PSNR and better visual quality in less computation time. Our proposed method even outperforms state-of-the-art wavelet denoising techniques in both visual quality and PSNR for images containing many repetitive structures such as textures: the denoised images are much sharper and contain fewer artifacts. The proposed optimizations can also be applied in other image processing tasks which employ the concept of repetitive structures, such as intra-frame super-resolution or detection of digital image forgery.
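
    The moment-based preclassification can be sketched as follows: before paying for the full weight computation, compare the first three moments of the two windows and skip (i.e. assign zero weight to) any candidate that differs too much. The thresholds below are illustrative placeholders, not the ones derived in the paper, and border pixels are ignored for brevity.

        import numpy as np

        def moments(patch):
            m = patch.mean()
            return m, patch.var(), ((patch - m) ** 3).mean()

        def nlm_pixel(img, y, x, half=3, search=7, h=10.0, tol=(5.0, 50.0, 500.0)):
            # Denoise one pixel far enough from the border; dissimilar windows
            # (moments differing by more than tol) get weight zero and are skipped.
            ref = img[y - half:y + half + 1, x - half:x + half + 1]
            ref_m = moments(ref)
            num = den = 0.0
            for j in range(y - search, y + search + 1):
                for i in range(x - search, x + search + 1):
                    cand = img[j - half:j + half + 1, i - half:i + half + 1]
                    if any(abs(a - b) > t for a, b, t in zip(ref_m, moments(cand), tol)):
                        continue
                    w = np.exp(-np.sum((ref - cand) ** 2) / (h * h))
                    num += w * img[j, i]
                    den += w
            return num / den if den > 0 else img[y, x]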

  7. Fast double-phase retrieval in Fresnel domain using modified Gerchberg-Saxton algorithm for lensless optical security systems.

    PubMed

    Hwang, Hone-Ene; Chang, Hsuan T; Lie, Wen-Nung

    2009-08-03

    A novel fast double-phase retrieval algorithm for lensless optical security systems based on the Fresnel domain is presented in this paper. Two phase-only masks are efficiently determined by using a modified Gerchberg-Saxton algorithm, in which two cascaded Fresnel transforms are replaced by one Fourier transform with compensations to reduce the consumed computations. Simulation results show that the proposed algorithm substantially speeds up the iterative process, while keeping the reconstructed image highly correlated with the original one.
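
    The underlying Gerchberg-Saxton loop alternates between two planes, imposing the known amplitude in each while keeping the computed phase. The plain single-FFT version below only illustrates that iterative loop; the paper's contribution, replacing two cascaded Fresnel transforms with one compensated Fourier transform, is not reproduced here.

        import numpy as np

        def gerchberg_saxton(target_amp, n_iter=100, rng=None):
            rng = rng or np.random.default_rng()
            field = np.exp(1j * rng.uniform(0, 2 * np.pi, target_amp.shape))
            for _ in range(n_iter):
                far = np.fft.fft2(field)
                far = target_amp * np.exp(1j * np.angle(far))   # impose target amplitude
                field = np.fft.ifft2(far)
                field = np.exp(1j * np.angle(field))            # impose unit input amplitude
            return np.angle(field)                              # recovered phase mask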

  8. A Nonhomogeneous Cuckoo Search Algorithm Based on Quantum Mechanism for Real Parameter Optimization.

    PubMed

    Cheung, Ngaam J; Ding, Xue-Ming; Shen, Hong-Bin

    2017-02-01

    Cuckoo search (CS) is a nature-inspired search algorithm in which all individuals have identical search behaviors. However, this simple homogeneous search behavior is not always optimal for finding the potential solution to a particular problem, and it may trap individuals in local regions, leading to premature convergence. To overcome this drawback, this paper presents a new variant of the CS algorithm with nonhomogeneous search strategies based on a quantum mechanism to enhance the search ability of the classical CS algorithm. The featured contributions of this paper are: 1) a quantum-based strategy is developed for nonhomogeneous update laws and 2) we present, for the first time, a set of theoretical analyses of the CS algorithm and of the proposed algorithm, and derive a set of parameter boundaries guaranteeing the convergence of both. On 24 benchmark functions, we compare our method with five existing CS-based methods and ten other state-of-the-art algorithms. The numerical results demonstrate that the proposed algorithm is significantly better than the original CS algorithm and the rest of the compared methods according to two nonparametric tests.

  9. Adaptive Load-Balancing Algorithms using Symmetric Broadcast Networks

    NASA Technical Reports Server (NTRS)

    Das, Sajal K.; Harvey, Daniel J.; Biswas, Rupak; Biegel, Bryan A. (Technical Monitor)

    2002-01-01

    In a distributed computing environment, it is important to ensure that the processor workloads are adequately balanced. Among numerous load-balancing algorithms, a unique approach due to Das and Prasad defines a symmetric broadcast network (SBN) that provides a robust communication pattern among the processors in a topology-independent manner. In this paper, we propose and analyze three efficient SBN-based dynamic load-balancing algorithms, and implement them on an SGI Origin2000. A thorough experimental study with Poisson-distributed synthetic loads demonstrates that our algorithms are effective in balancing system load. By optimizing completion time and idle time, the proposed algorithms are shown to compare favorably with several existing approaches.

  10. A hybrid ECT image reconstruction based on Tikhonov regularization theory and SIRT algorithm

    NASA Astrophysics Data System (ADS)

    Lei, Wang; Xiaotong, Du; Xiaoyin, Shao

    2007-07-01

    Electrical Capacitance Tomography (ECT) image reconstruction is a key problem that is not well solved due to the influence of the soft-field effect in ECT systems. In this paper, a new hybrid ECT image reconstruction algorithm is proposed by combining Tikhonov regularization theory and the Simultaneous Iterative Reconstruction Technique (SIRT). Tikhonov regularization is used to solve the ill-posed image reconstruction problem and obtain a stable initial reconstructed image within the region of the optimized solution set. SIRT iterations are then used to improve the quality of the final reconstructed image. In order to satisfy the industrial requirement of real-time computation, the proposed algorithm is further modified to improve its calculation speed. Test results show that the quality of the reconstructed image is better than that of the well-known filtered linear back projection (FLBP) algorithm, and that the time consumption of the new algorithm is less than 0.1 s, which satisfies online requirements.
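
    A minimal sketch of the two stages, assuming a linearized forward model c = S g with sensitivity matrix S, capacitance vector c, and normalized permittivity image g: Tikhonov regularization supplies a stable starting image, and a SIRT-style simultaneous update (shown here in its simplest Landweber-like form, without the row/column normalization a full SIRT uses) then refines it.

        import numpy as np

        def tikhonov_init(S, c, mu=1e-2):
            # Stable initial image: minimize ||S g - c||^2 + mu ||g||^2.
            return np.linalg.solve(S.T @ S + mu * np.eye(S.shape[1]), S.T @ c)

        def sirt_refine(S, c, g, n_iter=50, alpha=0.5):
            for _ in range(n_iter):
                g = g + alpha * S.T @ (c - S @ g)   # simultaneous residual correction
                g = np.clip(g, 0.0, 1.0)            # keep normalized permittivity in [0, 1]
            return g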

  11. An efficient iterative CBCT reconstruction approach using gradient projection sparse reconstruction algorithm

    PubMed Central

    Lee, Heui Chang; Song, Bongyong; Kim, Jin Sung; Jung, James J.; Li, H. Harold; Mutic, Sasa; Park, Justin C.

    2016-01-01

    The purpose of this study is to develop a fast, convergence-proven CBCT reconstruction framework based on compressed sensing theory, which not only lowers the imaging dose but is also computationally practicable in a busy clinic. We simplified the original mathematical formulation of gradient projection for sparse reconstruction (GPSR) to minimize the number of forward and backward projections for the line search process at each iteration. GPSR-based algorithms generally showed improved image quality over the FDK algorithm, especially when only a small number of projections were available. When there were only 40 projections from a 360-degree fan beam geometry, the quality of the GPSR-based algorithms surpassed the FDK algorithm within 10 iterations in terms of the mean squared relative error. Our proposed GPSR algorithm converged as fast as the conventional GPSR with reasonably low computational complexity. The outcomes demonstrate that the proposed GPSR algorithm is attractive for real-time applications such as on-line IGRT. PMID:27894103

  12. Improvement of phase unwrapping algorithm based on image segmentation and merging

    NASA Astrophysics Data System (ADS)

    Wang, Huaying; Liu, Feifei; Zhu, Qiaofen

    2013-11-01

    A modified algorithm based on image segmentation and merging is proposed and demonstrated to improve the accuracy of phase unwrapping. Three aspects are improved. Firstly, unequal region segmentation is adopted, which allows the regional information to be reproduced completely and accurately. Secondly, different phase unwrapping algorithms are used for regions affected by noise and by undersampling, respectively. Lastly, to improve the accuracy of the phase unwrapping results, a weighted stacking method is applied to the overlapping regions that originate from block merging. The proposed algorithm has been verified by simulations and experiments. The results not only validate the accuracy and speed of the improved algorithm in recovering the phase information of the measured object, but also illustrate its importance in Traditional Chinese Medicine Decoction Pieces cell identification.

  13. Belief network algorithms: A study of performance

    SciTech Connect

    Jitnah, N.

    1996-12-31

    This abstract gives an overview of the work. We present a survey of Belief Network algorithms and propose a domain characterization system to be used as a basis for algorithm comparison and for predicting algorithm performance.

  14. [An algorithm for highlighting structure in multispectral remote sensing].

    PubMed

    Wang, Qin-Jun; Lin, Qi-Zhong; Li, Ming-Xiao; Wei, Yong-Ming; Wang, Li-Ming

    2009-07-01

    Based on the principles of mineral generation, structures provide not only passageways for ore-forming fluids but also space for them to aggregate. It is therefore very important to study the features of structures in the study area before mineral exploration. In order to highlight structures using multispectral remote sensing data, an algorithm integrating principal component analysis (PCA), maximum noise fraction transformation (MNF) and the original image data is proposed here. In the algorithm, the original image is first transformed by PCA and MNF; then all bands are normalized to reduce errors caused by different band dimensions, and three bands containing detailed structure information are selected to form a false color image in which structures in the study area are highlighted. Results of the transformation on enhanced thematic mapper (ETM) data acquired on June 27, 2000 in the Hatu area, Xinjiang province, China showed that (1) the transformed image was more colorful and more gradational than the original data; (2) the color difference among objects was enhanced by the algorithm; and (3) structures were highlighted by the algorithm. Therefore, the algorithm's effect of highlighting structures in the study area is noticeable.

  15. Imaging of the spleen: a proposed algorithm

    SciTech Connect

    Shirkhoda, A.; McCartney, W.H.; Staab, E.V.; Mittelstaedt, C.A.

    1980-07-01

    The 99mTc sulfur colloid scan is an effective initial method for evaluating splenic size, position, and focal or diffusely altered radionuclide uptake. Sonography is a useful next step in determining whether focal lesions are solid or cystic and the relation of the spleen to adjacent organs. In our opinion, computed tomography (CT) may be reserved for the few instances in which diagnostic questions remain unanswered after radionuclide scanning and sonography. Angiography is used primarily in splenic trauma. We evaluated 900 patients suspected of having liver-spleen abnormality. This experience, which led to a logically sequenced noninvasive imaging approach for evaluating suspected splenic pathology, is summarized and illustrated by several cases.

  16. License plate detection algorithm

    NASA Astrophysics Data System (ADS)

    Broitman, Michael; Klopovsky, Yuri; Silinskis, Normunds

    2013-12-01

    A novel algorithm for vehicle license plate localization is proposed. The algorithm is based on pixel intensity transition gradient analysis. Nearly 2500 natural-scene gray-level vehicle images with different backgrounds and ambient illumination were tested. The best set of algorithm parameters produces a detection rate of up to 0.94. Taking into account the abnormal camera location during our tests, and therefore geometrical distortion and interference from trees, this result can be considered acceptable. Correlations between source data, such as license plate dimensions and texture, camera location and others, and the parameters of the algorithm were also defined.

  17. Digital watermarking algorithm research of color images based on quaternion Fourier transform

    NASA Astrophysics Data System (ADS)

    An, Mali; Wang, Weijiang; Zhao, Zhen

    2013-10-01

    A watermarking algorithm for color images based on the quaternion Fourier transform (QFFT) and an improved quantization index modulation (QIM) algorithm is proposed in this paper. The original image is transformed by the QFFT, the watermark image is processed by compression and quantization coding, and the processed watermark is then embedded into components of the transformed original image. The scheme achieves both embedding and blind extraction of the watermark image. The experimental results show that the watermarking algorithm based on the improved QIM algorithm with distortion compensation achieves a good tradeoff between invisibility and robustness, and better robustness against Gaussian noise, salt-and-pepper noise, JPEG compression, cropping, filtering and image enhancement attacks than the traditional QIM algorithm.
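
    The baseline QIM embedding rule that the paper improves quantizes each selected coefficient onto one of two interleaved lattices chosen by the watermark bit; the decoder picks the bit whose lattice lies closest. A scalar sketch (the step size delta is an illustrative choice, and the distortion-compensation refinement is omitted):

        import numpy as np

        def qim_embed(coeff, bit, delta=8.0):
            offset = delta / 4 if bit else -delta / 4
            return delta * np.round((coeff - offset) / delta) + offset

        def qim_decode(coeff, delta=8.0):
            d0 = abs(coeff - qim_embed(coeff, 0, delta))
            d1 = abs(coeff - qim_embed(coeff, 1, delta))
            return int(d1 < d0)

        marked = qim_embed(37.3, 1)    # -> 34.0, nearest point on the "1" lattice
        assert qim_decode(marked) == 1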

  18. A curvature filter and PDE based non-uniformity correction algorithm

    NASA Astrophysics Data System (ADS)

    Cheng, Kuanhong; Zhou, Huixin; Qin, Hanlin; Zhao, Dong; Qian, Kun; Rong, Shenghui; Yin, Shimin

    2016-10-01

    In this paper, a curvature filter and PDE based non-uniformity correction algorithm is proposed; the key point of the algorithm is the way the fixed-pattern noise (FPN) is estimated. We use anisotropic diffusion to smooth noise and a Gaussian curvature filter to extract the details of the original image. These two parts are then combined by a guided image filter, and the result is subtracted from the original image to obtain a crude approximation of the FPN. After that, a temporal low-pass filter (TLPF) is utilized to filter out random noise and obtain an accurate FPN estimate. Finally, the FPN is subtracted from the original image to achieve non-uniformity correction. The performance of this algorithm is tested on two infrared image sequences, and the experimental results show that the proposed method achieves better non-uniformity correction performance.

  19. Reconstruction-plane-dependent weighted FDK algorithm for cone beam volumetric CT

    NASA Astrophysics Data System (ADS)

    Tang, Xiangyang; Hsieh, Jiang

    2005-04-01

    The original FDK algorithm has been extensively employed in medical and industrial imaging applications. With an increased cone angle, cone beam (CB) artifacts in images reconstructed by the original FDK algorithm deteriorate, since the circular trajectory does not satisfy the so-called data sufficiency condition (DSC). A few "circular plus" trajectories have been proposed in the past to reduce CB artifacts by meeting the DSC. However, the circular trajectory has distinct advantages over other scanning trajectories in practical CT imaging, such as cardiac, vascular and perfusion applications. In addition to the DSC, another insight into the CB artifacts of the original FDK algorithm comes from the inconsistency between conjugate rays that are 180° apart in view angle. The inconsistency between conjugate rays is pixel dependent, i.e., it varies dramatically over pixels within the image plane to be reconstructed. However, the original FDK algorithm treats all conjugate rays equally, resulting in CB artifacts that could be avoided if an appropriate view weighting strategy were exercised. In this paper, a modified FDK algorithm is proposed, along with an experimental evaluation and verification, in which the helical body phantom and a humanoid head phantom scanned by a volumetric CT (64 x 0.625 mm) are utilized. Without extra trajectories supplemental to the circular trajectory, the modified FDK algorithm applies reconstruction-plane-dependent view weighting on projection data before 3D backprojection, which reduces the inconsistency between conjugate rays by suppressing the contribution of the conjugate ray with the larger cone angle. Both computer-simulated and real phantom studies show that, up to a moderate cone angle, the CB artifacts can be substantially suppressed by the modified FDK algorithm, while advantages of the original FDK algorithm, such as the filtered backprojection algorithm structure, 1D ramp filtering, and data manipulation efficiency, can be

  20. A distributed Canny edge detector: algorithm and FPGA implementation.

    PubMed

    Xu, Qian; Varadarajan, Srenivas; Chakrabarti, Chaitali; Karam, Lina J

    2014-07-01

    The Canny edge detector is one of the most widely used edge detection algorithms due to its superior performance. Unfortunately, not only is it computationally more intensive as compared with other edge detection algorithms, but it also has a higher latency because it is based on frame-level statistics. In this paper, we propose a mechanism to implement the Canny algorithm at the block level without any loss in edge detection performance compared with the original frame-level Canny algorithm. Directly applying the original Canny algorithm at the block-level leads to excessive edges in smooth regions and to loss of significant edges in high-detailed regions since the original Canny computes the high and low thresholds based on the frame-level statistics. To solve this problem, we present a distributed Canny edge detection algorithm that adaptively computes the edge detection thresholds based on the block type and the local distribution of the gradients in the image block. In addition, the new algorithm uses a nonuniform gradient magnitude histogram to compute block-based hysteresis thresholds. The resulting block-based algorithm has a significantly reduced latency and can be easily integrated with other block-based image codecs. It is capable of supporting fast edge detection of images and videos with high resolutions, including full-HD since the latency is now a function of the block size instead of the frame size. In addition, quantitative conformance evaluations and subjective tests show that the edge detection performance of the proposed algorithm is better than the original frame-based algorithm, especially when noise is present in the images. Finally, this algorithm is implemented using a 32 computing engine architecture and is synthesized on the Xilinx Virtex-5 FPGA. The synthesized architecture takes only 0.721 ms (including the SRAM READ/WRITE time and the computation time) to detect edges of 512 × 512 images in the USC SIPI database when clocked at 100

  1. Genetic algorithms

    NASA Technical Reports Server (NTRS)

    Wang, Lui; Bayer, Steven E.

    1991-01-01

    Genetic algorithms are mathematical, highly parallel, adaptive search procedures (i.e., problem solving methods) based loosely on the processes of natural genetics and Darwinian survival of the fittest. Basic genetic algorithms concepts are introduced, genetic algorithm applications are introduced, and results are presented from a project to develop a software tool that will enable the widespread use of genetic algorithm technology.
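
    As a concrete illustration of the concepts surveyed here, a minimal generational GA with fitness-proportional selection, one-point crossover, and bit-flip mutation is sketched below on the toy one-max objective (maximize the number of ones); all parameter values are illustrative.

        import random

        def genetic_algorithm(n_bits=20, pop_size=30, generations=50,
                              p_cross=0.9, p_mut=0.02):
            fitness = lambda ind: sum(ind)                      # toy objective: one-max
            pop = [[random.randint(0, 1) for _ in range(n_bits)]
                   for _ in range(pop_size)]
            for _ in range(generations):
                weights = [fitness(ind) + 1e-9 for ind in pop]  # proportional selection
                parents = random.choices(pop, weights=weights, k=pop_size)
                nxt = []
                for a, b in zip(parents[::2], parents[1::2]):
                    if random.random() < p_cross:               # one-point crossover
                        cut = random.randrange(1, n_bits)
                        a, b = a[:cut] + b[cut:], b[:cut] + a[cut:]
                    nxt += [[bit ^ (random.random() < p_mut)    # bit-flip mutation
                             for bit in ind] for ind in (a, b)]
                pop = nxt
            return max(pop, key=fitness)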

  2. Gradient maintenance: A new algorithm for fast online replanning

    SciTech Connect

    Ahunbay, Ergun E. Li, X. Allen

    2015-06-15

    Purpose: Clinical use of online adaptive replanning has been hampered by the impractically long time required to delineate volumes based on the image of the day. The authors propose a new replanning algorithm, named gradient maintenance (GM), which does not require the delineation of organs at risk (OARs), and can enhance automation, drastically reducing planning time and improving consistency and throughput of online replanning. Methods: The proposed GM algorithm is based on the hypothesis that if the dose gradient toward each OAR in daily anatomy can be maintained the same as that in the original plan, the intended plan quality of the original plan would be preserved in the adaptive plan. The algorithm requires a series of partial concentric rings (PCRs) to be automatically generated around the target toward each OAR on the planning and the daily images. The PCRs are used in the daily optimization objective function. The PCR dose constraints are generated with dose–volume data extracted from the original plan. To demonstrate this idea, GM plans generated using daily images acquired using an in-room CT were compared to regular optimization and image guided radiation therapy repositioning plans for representative prostate and pancreatic cancer cases. Results: The adaptive replanning using the GM algorithm, requiring only the target contour from the CT of the day, can be completed within 5 min without using high-power hardware. The obtained adaptive plans were almost as good as the regular optimization plans and were better than the repositioning plans for the cases studied. Conclusions: The newly proposed GM replanning algorithm, requiring only target delineation, not full delineation of OARs, substantially increased planning speed for online adaptive replanning. The preliminary results indicate that the GM algorithm may be a solution to improve the ability for automation and may be especially suitable for sites with small-to-medium size targets surrounded by

  3. Error Estimation for the Linearized Auto-Localization Algorithm

    PubMed Central

    Guevara, Jorge; Jiménez, Antonio R.; Prieto, Jose Carlos; Seco, Fernando

    2012-01-01

    The Linearized Auto-Localization (LAL) algorithm estimates the position of beacon nodes in Local Positioning Systems (LPSs), using only the distance measurements to a mobile node whose position is also unknown. The LAL algorithm calculates the inter-beacon distances, used for the estimation of the beacons’ positions, from the linearized trilateration equations. In this paper we propose a method to estimate the propagation of the errors of the inter-beacon distances obtained with the LAL algorithm, based on a first order Taylor approximation of the equations. Since the method depends on such approximation, a confidence parameter τ is defined to measure the reliability of the estimated error. Field evaluations showed that by applying this information to an improved weighted-based auto-localization algorithm (WLAL), the standard deviation of the inter-beacon distances can be improved by more than 30% on average with respect to the original LAL method. PMID:22736965

  4. A biconjugate gradient type algorithm on massively parallel architectures

    NASA Technical Reports Server (NTRS)

    Freund, Roland W.; Hochbruck, Marlis

    1991-01-01

    The biconjugate gradient (BCG) method is the natural generalization of the classical conjugate gradient algorithm for Hermitian positive definite matrices to general non-Hermitian linear systems. Unfortunately, the original BCG algorithm is susceptible to possible breakdowns and numerical instabilities. Recently, Freund and Nachtigal have proposed a novel BCG type approach, the quasi-minimal residual method (QMR), which overcomes the problems of BCG. Here, an implementation is presented of QMR based on an s-step version of the nonsymmetric look-ahead Lanczos algorithm. The main feature of the s-step Lanczos algorithm is that, in general, all inner products, except for one, can be computed in parallel at the end of each block; this is unlike the other standard Lanczos process where inner products are generated sequentially. The resulting implementation of QMR is particularly attractive on massively parallel SIMD architectures, such as the Connection Machine.

  5. Automatic design of decision-tree algorithms with evolutionary algorithms.

    PubMed

    Barros, Rodrigo C; Basgalupp, Márcio P; de Carvalho, André C P L F; Freitas, Alex A

    2013-01-01

    This study reports the empirical analysis of a hyper-heuristic evolutionary algorithm that is capable of automatically designing top-down decision-tree induction algorithms. Top-down decision-tree algorithms are of great importance, considering their ability to provide an intuitive and accurate knowledge representation for classification problems. The automatic design of these algorithms seems timely, given the large literature accumulated over more than 40 years of research in the manual design of decision-tree induction algorithms. The proposed hyper-heuristic evolutionary algorithm, HEAD-DT, is extensively tested using 20 public UCI datasets and 10 microarray gene expression datasets. The algorithms automatically designed by HEAD-DT are compared with traditional decision-tree induction algorithms, such as C4.5 and CART. Experimental results show that HEAD-DT is capable of generating algorithms which are significantly more accurate than C4.5 and CART.

  6. Efficiency Improvements in Meta-Heuristic Algorithms to Solve the Optimal Power Flow Problem

    NASA Astrophysics Data System (ADS)

    Reddy, S. Surender; Bijwe, P. R.

    2016-12-01

    This paper proposes efficient approaches for solving the Optimal Power Flow (OPF) problem using meta-heuristic algorithms. Mathematically, OPF is formulated as a non-linear equality- and inequality-constrained optimization problem. The main drawback of meta-heuristic algorithm based OPF is the excessive execution time required due to the large number of power flows needed in the solution process. The proposed efficient approaches use lower and upper bounds on the objective function values. By using this approach, the number of power flows to be performed is reduced substantially, resulting in a solution speed-up. Efficiently generated objective function bounds can result in faster solutions of meta-heuristic algorithms. The original advantages of meta-heuristic algorithms, such as the ability to handle complex non-linearities, discontinuities in the objective function, discrete variables, and multi-objective optimization, are still available in the proposed efficient approaches. The proposed OPF formulation includes the active and reactive power generation limits, and the Valve Point Loading (VPL) and Prohibited Operating Zones (POZs) effects of generating units. The effectiveness of the proposed approach is examined on the IEEE 30, 118 and 300 bus test systems, and the simulation results confirm the efficiency and superiority of the proposed approaches over other meta-heuristic algorithms. The proposed efficient approach is generic enough to be used with any type of meta-heuristic algorithm based OPF.

  7. Fast Outlier Detection Using a Grid-Based Algorithm.

    PubMed

    Lee, Jihwan; Cho, Nam-Wook

    2016-01-01

    As a data mining technique, outlier detection aims to discover outlying observations that deviate substantially from the remainder of the data. Recently, the Local Outlier Factor (LOF) algorithm has been successfully applied to outlier detection. However, due to the computational complexity of the LOF algorithm, its application to large, high-dimensional data has been limited. The aim of this paper is to propose a grid-based algorithm that reduces the computation time required by the LOF algorithm to determine the k-nearest neighbors. The algorithm divides the data space into a smaller number of regions, called "grids", and calculates the LOF value of each grid. To examine the effectiveness of the proposed method, several experiments incorporating different parameters were conducted. The proposed method demonstrated a significant computation-time reduction with predictable and acceptable trade-off errors. The proposed methodology was then successfully applied to real database transaction logs of the Korea Atomic Energy Research Institute. As a result, we show that for a very large dataset, grid-LOF can be considered an acceptable approximation of the original LOF. Moreover, it can also be effectively used for real-time outlier detection.
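
    The speed-up comes from restricting the k-nearest-neighbour search, the costly step of LOF, to a point's own grid cell and the adjacent cells. A two-dimensional sketch of that bucketing step (cell side h and k are illustrative choices; the LOF formulas themselves are then applied to the returned neighbour lists as usual):

        import numpy as np
        from collections import defaultdict

        def grid_knn(points, k=5, h=1.0):
            # points: (n, 2) array. Bucket into cells of side h, then search
            # only the 3x3 block of cells around each point.
            cells = defaultdict(list)
            for idx, p in enumerate(points):
                cells[tuple((p // h).astype(int))].append(idx)
            neighbours = []
            for idx, p in enumerate(points):
                cx, cy = (p // h).astype(int)
                cand = [j for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                        for j in cells[(cx + dx, cy + dy)] if j != idx]
                cand.sort(key=lambda j: np.linalg.norm(points[j] - p))
                neighbours.append(cand[:k])
            return neighbours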

  8. Original Misunderstanding

    ERIC Educational Resources Information Center

    Holtzman, Alexander

    2009-01-01

    Humorist Josh Billings quipped, "About the most originality that any writer can hope to achieve honestly is to steal with good judgment." Billings was harsh in his view of originality, but his critique reveals a tension faced by students every time they write a history paper. Research is the essence of any history paper. Especially in high school,…

  9. Stochastic Leader Gravitational Search Algorithm for Enhanced Adaptive Beamforming Technique

    PubMed Central

    Darzi, Soodabeh; Islam, Mohammad Tariqul; Tiong, Sieh Kiong; Kibria, Salehin; Singh, Mandeep

    2015-01-01

    In this paper, a stochastic leader gravitational search algorithm (SL-GSA) based on randomized k is proposed. Standard GSA (SGSA) utilizes the best agents without any randomization and is thus more prone to converging to suboptimal results. Initially, the new approach randomly chooses k agents from the set of all agents to improve the global search ability. Gradually, the set of agents is reduced by eliminating the agents with the poorest performance to allow rapid convergence. The performance of the SL-GSA was analyzed on six well-known benchmark functions, and the results are compared with SGSA and some of its variants. Furthermore, the SL-GSA is applied to the minimum variance distortionless response (MVDR) beamforming technique to ensure compatibility with real-world optimization problems. The proposed algorithm demonstrates a superior convergence rate and quality of solution for both real-world problems and benchmark functions compared to the original algorithm and other recent variants of SGSA. PMID:26552032

  10. A local and iterative neural reconstruction algorithm for cone-beam data

    NASA Astrophysics Data System (ADS)

    Gallo, Ignazio

    2010-04-01

    This work presents a new neural algorithm designed for the reconstruction of tomographic images from cone beam data. The main objective is the search for a new reconstruction method that is able to work locally and is more robust in the presence of noisy data and in situations with a small number of projections. This study should be regarded as a first step in evaluating the potential of the proposed algorithm. The algorithm is iterative and based on a set of neural networks that work locally and sequentially. All the x-rays passing through a cell of the volume to be reconstructed give rise to a neural network, which is a single-layer perceptron. The network does not need a training set but uses the line integral of a single x-ray as the ground truth for each output neuron. The neural network uses a gradient descent algorithm to minimize a local cost function by varying the values of the cells to be reconstructed. The proposed strategy was first evaluated under conditions where the quality and quantity of the input data vary widely, using the Shepp-Logan phantom. The algorithm was also compared with the iterative ART algorithm and the well-known filtered backprojection method. The results show that the proposed algorithm is much more accurate, even in the presence of noise and under conditions of scarce data. In situations with little noise the reconstruction, after a few iterations, is almost identical to the original.

  11. A Fast Algorithm for Denoising Magnitude Diffusion-Weighted Images with Rank and Edge Constraints

    PubMed Central

    Lam, Fan; Liu, Ding; Song, Zhuang; Schuff, Norbert; Liang, Zhi-Pei

    2015-01-01

    Purpose To accelerate denoising of magnitude diffusion-weighted images subject to joint rank and edge constraints. Methods We extend a previously proposed majorize-minimize (MM) method for statistical estimation that involves noncentral χ distributions and joint rank and edge constraints. A new algorithm is derived which decomposes the constrained noncentral χ denoising problem into a series of constrained Gaussian denoising problems each of which is then solved using an efficient alternating minimization scheme. Results The performance of the proposed algorithm has been evaluated using both simulated and experimental data. Results from simulations based on ex vivo data show that the new algorithm achieves about a factor of 10 speed up over the original Quasi-Newton based algorithm. This improvement in computational efficiency enabled denoising of large data sets containing many diffusion-encoding directions. The denoising performance of the new efficient algorithm is found to be comparable to or even better than that of the original slow algorithm. For an in vivo high-resolution Q-ball acquisition, comparison of fiber tracking results around hippocampus region before and after denoising will also be shown to demonstrate the denoising effects of the new algorithm. Conclusion The optimization problem associated with denoising noncentral χ distributed diffusion-weighted images subject to joint rank and edge constraints can be solved efficiently using an MM-based algorithm. PMID:25733066

  12. Spectral methods and sum acceleration algorithms. Final report

    SciTech Connect

    Boyd, J.

    1995-03-01

    The principal investigator pursued his investigation of numerical algorithms during the period of the grant. The attached list of publications is so lengthy that it is impossible to describe them in detail. However, the author calls attention to the four articles on sequence acceleration and fourteen more on spectral methods, which fulfill the goals of the original proposal. He also continued his research on nonlinear waves, and wrote a dozen papers on this, too.

  13. Implementation details of the coupled QMR algorithm

    NASA Technical Reports Server (NTRS)

    Freund, Roland W.; Nachtigal, Noel M.

    1992-01-01

    The original quasi-minimal residual method (QMR) relies on the three-term look-ahead Lanczos process, to generate basis vectors for the underlying Krylov subspaces. However, empirical observations indicate that, in finite precision arithmetic, three-term vector recurrences are less robust than mathematically equivalent coupled two-term recurrences. Therefore, we recently proposed a new implementation of the QMR method based on a coupled two-term look-ahead Lanczos procedure. In this paper, we describe implementation details of this coupled QMR algorithm, and we present results of numerical experiments.

  14. Inclusive Flavour Tagging Algorithm

    NASA Astrophysics Data System (ADS)

    Likhomanenko, Tatiana; Derkach, Denis; Rogozhnikov, Alex

    2016-10-01

    Identifying the production flavour of neutral B mesons is one of the most important components needed in the study of time-dependent CP violation. The harsh environment of the Large Hadron Collider makes it particularly hard to succeed in this task. We present an inclusive flavour-tagging algorithm as an upgrade of the algorithms currently used by the LHCb experiment. Specifically, a probabilistic model which efficiently combines information from reconstructed vertices and tracks using machine learning is proposed. The algorithm does not use information about the underlying physics process. It reduces the dependence on the performance of lower-level identification capacities and thus increases the overall performance. The proposed inclusive flavour-tagging algorithm is applicable to tagging the flavour of B mesons in any proton-proton experiment.

  15. Quantum Algorithms

    NASA Technical Reports Server (NTRS)

    Abrams, D.; Williams, C.

    1999-01-01

    This thesis describes several new quantum algorithms. These include a polynomial-time algorithm that uses a quantum fast Fourier transform to find eigenvalues and eigenvectors of a Hamiltonian operator, and that can be applied in cases for which all known classical algorithms require exponential time.

  16. The Algorithm Selection Problem

    NASA Technical Reports Server (NTRS)

    Minton, Steve; Allen, John; Deiss, Ron (Technical Monitor)

    1994-01-01

    Work on NP-hard problems has shown that many instances of these theoretically computationally difficult problems are quite easy. The field has also shown that choosing the right algorithm for the problem can have a profound effect on the time needed to find a solution. However, to date there has been little work showing how to select the right algorithm for solving any particular problem. The paper refers to this as the algorithm selection problem. It describes some of the aspects that make this problem difficult and proposes a technique for addressing it.

  17. Fast autodidactic adaptive equalization algorithms

    NASA Astrophysics Data System (ADS)

    Hilal, Katia

    Autodidactic (blind) equalization by adaptive filtering is addressed in a mobile radio communication context. A general method, using an adaptive stochastic-gradient Bussgang-type algorithm, is given to derive two low-computation-cost algorithms: one equivalent to the initial algorithm and the other having improved convergence properties thanks to a block criterion minimization. Two starting algorithms are reworked: the Godard algorithm and the decision-directed algorithm. Using a normalization procedure, and block normalization, their performance is improved and their common points are evaluated. These common points are used to propose an algorithm that retains the advantages of the two initial algorithms: it inherits the robustness of the Godard algorithm and the precision and phase correction of the decision-directed algorithm. The work is completed by a study of the stable states of Bussgang-type algorithms and of the stability of the Godard algorithms, initial and normalized. Simulations of these algorithms, carried out in a mobile radio communication context under severe propagation-channel conditions, gave a 75% reduction in the number of samples required for processing relative to the initial algorithms. The improvement in residual error was much smaller. These performances come close to making autodidactic equalization usable in mobile radio systems.
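
    The Godard (constant-modulus) update at the heart of this family, with the normalization the thesis emphasizes, can be sketched as a one-line stochastic-gradient rule; R2 is the dispersion constant of the transmitted constellation, and the step-size values are illustrative.

        import numpy as np

        def cma_update(w, x, mu=1e-3, R2=1.0):
            # One blind update of the equalizer taps w given received vector x.
            y = np.vdot(w, x)                          # equalizer output, w^H x
            e = y * (np.abs(y) ** 2 - R2)              # Godard (p = 2) error term
            mu_n = mu / (np.vdot(x, x).real + 1e-12)   # normalized step size
            return w - mu_n * e.conjugate() * x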

  18. Improved dynamic-programming-based algorithms for segmentation of masses in mammograms

    SciTech Connect

    Dominguez, Alfonso Rojas; Nandi, Asoke K.

    2007-11-15

    In this paper, two new boundary tracing algorithms for segmentation of breast masses are presented. These new algorithms are based on the dynamic-programming-based boundary tracing (DPBT) algorithm proposed by Timp and Karssemeijer [S. Timp and N. Karssemeijer, Med. Phys. 31, 958-971 (2004)]. The DPBT algorithm contains two main steps: (1) construction of a local cost function, and (2) application of dynamic programming to the selection of the optimal boundary based on the local cost function. The validity of some assumptions used in the design of the DPBT algorithm is tested in this paper using a set of 349 mammographic images. Based on the results of the tests, modifications to the computation of the local cost function have been designed, resulting in the Improved-DPBT (IDPBT) algorithm. A procedure for the dynamic selection of the strength of the components of the local cost function is presented that makes these parameters independent of the image dataset. Incorporation of this dynamic selection procedure has produced another new algorithm, which we have called ID²PBT. Methods for the determination of some other parameters of the DPBT algorithm that were not covered in the original paper are presented as well. The merits of the new IDPBT and ID²PBT algorithms are demonstrated experimentally by comparison against the DPBT algorithm. The segmentation results are evaluated based on the area overlap measure and other segmentation metrics. Both of the new algorithms outperform the original DPBT; the improvements in performance are more noticeable around the values of the segmentation metrics corresponding to the highest segmentation accuracy, i.e., the new algorithms produce more optimally segmented regions, rather than a pronounced increase in the average quality of all the segmented regions.

  19. Localized Ambient Solidity Separation Algorithm Based Computer User Segmentation.

    PubMed

    Sun, Xiao; Zhang, Tongda; Chai, Yueting; Liu, Yi

    2015-01-01

    Most popular clustering methods make some strong assumptions about the dataset. For example, k-means implicitly assumes that all clusters come from spherical Gaussian distributions which have different means but the same covariance. However, when dealing with datasets that have diverse distribution shapes or high dimensionality, these assumptions might not be valid anymore. In order to overcome this weakness, we propose a new clustering algorithm, the localized ambient solidity separation (LASS) algorithm, which uses a new isolation criterion called centroid distance. Compared with other density-based isolation criteria, our proposed centroid distance isolation criterion addresses the problems caused by high dimensionality and varying density. The experiment on a designed two-dimensional benchmark dataset shows that our proposed LASS algorithm not only inherits the advantage of the original dissimilarity increments clustering method in separating naturally isolated clusters but can also identify clusters which are adjacent, overlapping, and under background noise. Finally, we compared our LASS algorithm with the dissimilarity increments clustering method on a massive computer user dataset with over two million records that contains demographic and behavior information. The results show that the LASS algorithm works extremely well on this computer user dataset and can gain more knowledge from it.

  20. Localized Ambient Solidity Separation Algorithm Based Computer User Segmentation

    PubMed Central

    Sun, Xiao; Zhang, Tongda; Chai, Yueting; Liu, Yi

    2015-01-01

    Most popular clustering methods make some strong assumptions about the dataset. For example, k-means implicitly assumes that all clusters come from spherical Gaussian distributions which have different means but the same covariance. However, when dealing with datasets that have diverse distribution shapes or high dimensionality, these assumptions might not be valid anymore. In order to overcome this weakness, we propose a new clustering algorithm, the localized ambient solidity separation (LASS) algorithm, which uses a new isolation criterion called centroid distance. Compared with other density-based isolation criteria, our proposed centroid distance isolation criterion addresses the problems caused by high dimensionality and varying density. The experiment on a designed two-dimensional benchmark dataset shows that our proposed LASS algorithm not only inherits the advantage of the original dissimilarity increments clustering method in separating naturally isolated clusters but can also identify clusters which are adjacent, overlapping, and under background noise. Finally, we compared our LASS algorithm with the dissimilarity increments clustering method on a massive computer user dataset with over two million records that contains demographic and behavior information. The results show that the LASS algorithm works extremely well on this computer user dataset and can gain more knowledge from it. PMID:26221133

  1. A novel harmony search-K means hybrid algorithm for clustering gene expression data.

    PubMed

    Nazeer, Ka Abdul; Sebastian, Mp; Kumar, Sd Madhu

    2013-01-01

    Recent progress in bioinformatics research has led to the accumulation of huge quantities of biological data at various data sources. DNA microarray technology makes it possible to simultaneously analyze a large number of genes across different samples. Clustering of microarray data can reveal hidden gene expression patterns from large quantities of expression data, which in turn offers tremendous possibilities in functional genomics, comparative genomics, disease diagnosis and drug development. The k-means clustering algorithm is widely used for many practical applications, but the original k-means algorithm has several drawbacks: it is computationally expensive, and it generates locally optimal solutions based on the random choice of the initial centroids. Several methods have been proposed in the literature for improving the performance of the k-means algorithm. Harmony search, a meta-heuristic optimization algorithm, helps find near-globally optimal solutions by searching the entire solution space. The low clustering accuracy of existing algorithms limits their use in many crucial applications of the life sciences. In this paper we propose a novel Harmony Search-K means Hybrid (HSKH) algorithm for clustering gene expression data. Experimental results show that the proposed algorithm produces clusters with better accuracy in comparison with the existing algorithms.
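
    One harmony-search step over candidate centroid sets, the ingredient that lets the hybrid escape the locally optimal solutions of plain k-means, can be sketched as below. HMCR, PAR, and bw follow standard harmony-search terminology; the exact hybridization with k-means refinement in the paper is not reproduced here.

        import numpy as np

        def harmony_step(memory, errors, clustering_error,
                         hmcr=0.9, par=0.3, bw=0.05, rng=None):
            # memory: (HMS, dim) array of flattened centroid sets;
            # errors: clustering error of each row; lower is better.
            rng = rng or np.random.default_rng()
            dim = memory.shape[1]
            new = np.empty(dim)
            for d in range(dim):
                if rng.random() < hmcr:                 # draw from harmony memory
                    new[d] = memory[rng.integers(len(memory)), d]
                    if rng.random() < par:              # pitch adjustment
                        new[d] += bw * rng.uniform(-1, 1)
                else:                                   # random consideration
                    new[d] = rng.uniform(memory[:, d].min(), memory[:, d].max())
            worst = int(np.argmax(errors))
            new_err = clustering_error(new)
            if new_err < errors[worst]:                 # replace the worst harmony
                memory[worst], errors[worst] = new, new_err
            return memory, errors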

  2. Simplified calculation of distance measure in DP algorithm

    NASA Astrophysics Data System (ADS)

    Hu, Tao; Ren, Xian-yi; Lu, Yu-ming

    2014-01-01

    The point-to-segment distance measure is one of the factors that determine the efficiency of the DP (Douglas-Peucker) polyline simplification algorithm. A zone-divided distance measure, instead of the perpendicular distance alone, was proposed by Dan Sunday [1] to remedy a deficiency of the original DP algorithm. A new, efficient zone-divided distance measure method is proposed in this paper. Firstly, a rotated coordinate system is established based on the two endpoints of the curve. Secondly, the new coordinate value in the rotated system is computed for each point. Finally, the new coordinate values are used to divide points into three zones and to calculate distance: Manhattan distance is adopted in zones I and III, and perpendicular distance in zone II. Compared with Dan Sunday's method, the proposed method can take full advantage of the computation results of the previous point. The amount of calculation stays essentially the same for points in zones I and III, and is reduced significantly for points in zone II, which account for the highest proportion. Experimental results show that the proposed distance measure method improves the efficiency of the original DP algorithm.
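
    In the rotated frame the three zones and their distance measures reduce to a few comparisons, as in the sketch below: after rotating so segment ab lies along the positive x-axis, points beyond either endpoint (zones I and III) use Manhattan distance, and points alongside the segment (zone II) use the perpendicular distance.

        import math

        def zone_distance(p, a, b):
            ax, ay = a; bx, by = b
            L = math.hypot(bx - ax, by - ay)
            ct, st = (bx - ax) / L, (by - ay) / L         # rotate by the segment angle
            xr = (p[0] - ax) * ct + (p[1] - ay) * st      # coordinates in rotated frame
            yr = -(p[0] - ax) * st + (p[1] - ay) * ct
            if xr < 0:                                    # zone I: beyond endpoint a
                return abs(xr) + abs(yr)                  # Manhattan distance
            if xr > L:                                    # zone III: beyond endpoint b
                return (xr - L) + abs(yr)
            return abs(yr)                                # zone II: perpendicular distance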

  3. Deconvolution of interferometric data using interior point iterative algorithms

    NASA Astrophysics Data System (ADS)

    Theys, C.; Lantéri, H.; Aime, C.

    2016-09-01

    We address the problem of deconvolution of astronomical images that could be obtained with future large interferometers in space. The presentation is made in two complementary parts. The first part gives an introduction to image deconvolution with linear and nonlinear algorithms. The emphasis is on nonlinear iterative algorithms that satisfy the constraints of non-negativity and constant flux. The Richardson-Lucy algorithm appears there as a special case for photon counting conditions. More generally, the algorithm published recently by Lanteri et al. (2015) is based on scale-invariant divergences without any assumption on the statistical model of the data. The two proposed algorithms are interior-point algorithms, the latter being more efficient in terms of computation speed. These algorithms are applied to the deconvolution of simulated images corresponding to an interferometric system of 16 diluted telescopes in space. Two non-redundant configurations, one disposed around a circle and the other on a hexagonal lattice, are compared for their effectiveness on a simple astronomical object. The comparison is made in the direct and Fourier spaces. Raw "dirty" images have many artifacts due to replicas of the original object. Linear methods cannot remove these replicas, while iterative methods clearly show their efficacy in these examples.
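
    The Richardson-Lucy special case mentioned above is a multiplicative fixed-point iteration that preserves both non-negativity and total flux. A minimal sketch, assuming a non-negative, normalized PSF; scipy's FFT convolution is used for brevity.

        import numpy as np
        from scipy.signal import fftconvolve

        def richardson_lucy(image, psf, n_iter=50, eps=1e-12):
            image = np.asarray(image, dtype=float)
            psf = psf / psf.sum()                         # flux-preserving PSF
            estimate = np.full_like(image, image.mean())  # flat, flux-matched start
            psf_flipped = psf[::-1, ::-1]                 # adjoint of the blur operator
            for _ in range(n_iter):
                blurred = fftconvolve(estimate, psf, mode="same")
                estimate *= fftconvolve(image / (blurred + eps),
                                        psf_flipped, mode="same")
            return estimate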

  4. Dual key speech encryption algorithm based underdetermined BSS.

    PubMed

    Zhao, Huan; He, Shaofang; Chen, Zuo; Zhang, Xixiang

    2014-01-01

    When the number of mixed signals is less than the number of source signals, underdetermined blind source separation (BSS) is a significantly difficult problem. Given the large volume of data in speech communications and the demands of real-time communication, we exploit the intractability of the underdetermined BSS problem to present a dual-key speech encryption method. The original speech is mixed with dual key signals, which consist of random key signals (a one-time pad) generated from a secret seed and chaotic signals generated from a chaotic system. In the decryption process, approximate calculation is used to recover the original speech signals. The proposed speech encryption algorithm can resist traditional attacks against the encryption system, and owing to the approximate calculation, decryption becomes faster and more accurate. It is demonstrated that the proposed method has a high level of security and can recover the original signals quickly and efficiently while maintaining excellent audio quality.

  5. A Modified MinMax k-Means Algorithm Based on PSO.

    PubMed

    Wang, Xiaoyan; Bai, Yanping

    The MinMax k-means algorithm is widely used to tackle the effect of bad initialization by minimizing the maximum intra-cluster error. Two parameters, the exponent parameter and the memory parameter, are involved in the executive process. Since different parameters lead to different clustering errors, it is crucial to choose appropriate parameters. In the original algorithm, a practical framework is given that extends MinMax k-means to adapt the exponent parameter to the data set automatically. It has been believed that once the maximum exponent parameter has been set, the program can reach the lowest intra-cluster error. However, our experiments show that this is not always the case. In this paper, we modify the MinMax k-means algorithm with PSO to determine the parameter values that allow the algorithm to attain the lowest clustering error. The proposed clustering method is tested on several popular data sets under different initial conditions and is compared to the k-means algorithm and the original MinMax k-means algorithm. The experimental results indicate that our proposed algorithm reaches the lowest clustering error automatically.

  7. Development of a new metal artifact reduction algorithm by using an edge preserving method for CBCT imaging

    NASA Astrophysics Data System (ADS)

    Kim, Juhye; Nam, Haewon; Lee, Rena

    2015-07-01

    In CT (computed tomography) images, metal objects such as dental fillings or surgical clips can cause metal artifacts and degrade image quality. In severe cases, this may lead to misdiagnosis. In this research, we developed a new MAR (metal artifact reduction) algorithm using an edge-preserving filter and the MATLAB program (Mathworks, version R2012a). The proposed algorithm consists of 6 steps: image reconstruction from projection data, metal segmentation, forward projection, interpolation, application of an edge-preserving smoothing filter, and new image reconstruction. For an evaluation of the proposed algorithm, we obtained both numerical simulation data and data for a Rando phantom. In the numerical simulation data, four metal regions were added into the Shepp-Logan phantom to produce metal artifacts. The projection data of the metal-inserted Rando phantom were obtained by using a prototype CBCT scanner manufactured by the medical engineering and medical physics (MEMP) laboratory research group in medical science at Ewha Womans University. After the proposed algorithm had been applied, the results were compared with the original image (with metal artifacts, without correction) and with a corrected image based on linear interpolation. Both visual and quantitative evaluations were performed. Compared with the original image with metal artifacts and with the image corrected by linear interpolation, both the numerical and the experimental phantom data demonstrated that the proposed algorithm reduced the metal artifacts. In conclusion, the evaluation in this research showed that the proposed algorithm outperformed the interpolation-based MAR algorithm. If an optimization and a stability evaluation of the proposed algorithm can be performed, the developed algorithm is expected to be an effective tool for eliminating metal artifacts even in commercial CT systems.
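
    A rough sketch of the linear-interpolation MAR baseline the authors compare against (the 6-step edge-preserving variant is not reproduced here). It assumes scikit-image's radon/iradon for the projection steps and a simple intensity threshold for metal segmentation; all names are ours:

        import numpy as np
        from skimage.transform import radon, iradon

        def mar_linear_interp(image, theta, metal_thresh):
            metal = image > metal_thresh                        # segment metal in the image domain
            sino = radon(image, theta=theta)                    # forward projection
            shadow = radon(metal.astype(float), theta=theta) > 1e-6   # metal trace in the sinogram
            corrected = sino.copy()
            for j in range(sino.shape[1]):                      # per view angle
                bad = shadow[:, j]
                if bad.any() and not bad.all():
                    idx = np.arange(sino.shape[0])
                    corrected[bad, j] = np.interp(idx[bad], idx[~bad], sino[~bad, j])
            return iradon(corrected, theta=theta)               # reconstruct the corrected sinogram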

  8. Novel Spectrum Sensing Algorithms for OFDM Cognitive Radio Networks

    PubMed Central

    Shi, Zhenguo; Wu, Zhilu; Yin, Zhendong; Cheng, Qingqing

    2015-01-01

    Spectrum sensing technology plays an increasingly important role in cognitive radio networks, and consequently several spectrum sensing algorithms have been proposed in the literature. In this paper, we present a new spectrum sensing algorithm, "Differential Characteristics-Based OFDM (DC-OFDM)", for detecting an OFDM signal based on its differential characteristics. We test whether the channel gain θ is around zero to detect the presence of a primary user. Furthermore, using the same differential operation, we improve two traditional OFDM sensing algorithms (cyclic-prefix and pilot-tone detection), and propose a "Differential Characteristics-Based Cyclic Prefix (DC-CP)" detector and a "Differential Characteristics-Based Pilot Tones (DC-PT)" detector, respectively. The DC-CP detector senses the spectrum based on the auto-correlation vector, while the DC-PT detector takes the frequency-domain cross-correlation of the pilot tones as the test statistic to detect the primary user. Moreover, the distributions of the test statistics of the three proposed methods have been derived. Simulation results illustrate that all three proposed methods achieve good performance at low signal-to-noise ratio (SNR) in the presence of timing delay. Specifically, the DC-OFDM detector performs best among the presented detectors, and both the DC-CP and DC-PT detectors achieve significant improvements over their corresponding original detectors. PMID:26083226
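
    For orientation, a minimal sketch of the classical cyclic-prefix correlation detector that DC-CP builds on (the paper's differential pre-processing is not shown; variable names are ours):

        import numpy as np

        def cp_statistic(r, n_fft, n_cp, n_sym):
            """Correlate each cyclic prefix with the samples one FFT length later;
            large values indicate an OFDM primary user, small values noise only."""
            num, den = 0.0, 0.0
            for s in range(n_sym):
                k = s * (n_fft + n_cp)
                a = r[k:k + n_cp]
                b = r[k + n_fft:k + n_fft + n_cp]
                num += np.abs(np.vdot(a, b))
                den += 0.5 * (np.vdot(a, a).real + np.vdot(b, b).real)
            return num / den    # compare with a threshold chosen for the target false-alarm rate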

  10. A "Tuned" Mask Learnt Approach Based on Gravitational Search Algorithm.

    PubMed

    Wan, Youchuan; Wang, Mingwei; Ye, Zhiwei; Lai, Xudong

    2016-01-01

    Texture image classification is an important topic in many applications of machine vision and image analysis. Extracting texture features from the original texture image with a "Tuned" mask is one of the simplest and most effective methods. However, hill-climbing-based training methods cannot reliably acquire a satisfactory mask in one pass; on the other hand, commonly used evolutionary algorithms such as the genetic algorithm (GA) and particle swarm optimization (PSO) easily fall into local optima. A novel approach for texture image classification, exemplified by the recognition of residential areas, is detailed in this paper. In the proposed approach, the design of the "Tuned" mask is viewed as a constrained optimization problem, and the optimal "Tuned" mask is acquired by maximizing the texture energy via a newly proposed gravitational search algorithm (GSA). The optimal "Tuned" mask is achieved through the convergence of GSA. The proposed approach has been tested on public texture images and remote sensing images, respectively. The results are then compared with those of GA, PSO, honey-bee mating optimization (HBMO), and the artificial immune algorithm (AIA). Moreover, features extracted by Gabor wavelets are also utilized for a further comparison. Experimental results show that the proposed method is robust and adaptive and exhibits better performance than the other methods involved in the paper in terms of fitness value and classification accuracy.

  11. A Modified ART 1 Algorithm more Suitable for VLSI Implementations.

    PubMed

    Linares-Barranco, Bernabe; Serrano-Gotarredona, Teresa

    1996-08-01

    This paper presents a modification to the original ART 1 algorithm ([Carpenter and Grossberg, 1987a], A massively parallel architecture for a self-organizing neural pattern recognition machine, Computer Vision, Graphics, and Image Processing, 37, 54-115) that is conceptually similar, can be implemented in hardware with less sophisticated building blocks, and maintains the computational capabilities of the originally proposed algorithm. This modified ART 1 algorithm (which we will call here ART 1(m)) is the result of hardware motivated simplifications investigated during the design of an actual ART 1 chip [Serrano-Gotarredona et al., 1994, Proc. 1994 IEEE Int. Conf. Neural Networks (Vol. 3, pp. 1912-1916); [Serrano-Gotarredona and Linares-Barranco, 1996], IEEE Trans. VLSI Systems, (in press)]. The purpose of this paper is simply to justify theoretically that the modified algorithm preserves the computational properties of the original one and to study the difference in behavior between the two approaches. Copyright 1996 Elsevier Science Ltd.

  12. Performance analysis of LVQ algorithms: a statistical physics approach.

    PubMed

    Ghosh, Anarta; Biehl, Michael; Hammer, Barbara

    2006-01-01

    Learning vector quantization (LVQ) constitutes a powerful and intuitive method for adaptive nearest prototype classification. However, original LVQ has been introduced based on heuristics and numerous modifications exist to achieve better convergence and stability. Recently, a mathematical foundation by means of a cost function has been proposed which, as a limiting case, yields a learning rule similar to classical LVQ2.1. It also motivates a modification which shows better stability. However, the exact dynamics as well as the generalization ability of many LVQ algorithms have not been thoroughly investigated so far. Using concepts from statistical physics and the theory of on-line learning, we present a mathematical framework to analyse the performance of different LVQ algorithms in a typical scenario in terms of their dynamics, sensitivity to initial conditions, and generalization ability. Significant differences in the algorithmic stability and generalization ability can be found already for slightly different variants of LVQ. We study five LVQ algorithms in detail: Kohonen's original LVQ1, unsupervised vector quantization (VQ), a mixture of VQ and LVQ, LVQ2.1, and a variant of LVQ which is based on a cost function. Surprisingly, basic LVQ1 shows very good performance in terms of stability, asymptotic generalization ability, and robustness to initializations and model parameters which, in many cases, is superior to recent alternative proposals.

  13. The Langley Parameterized Shortwave Algorithm (LPSA) for Surface Radiation Budget Studies. 1.0

    NASA Technical Reports Server (NTRS)

    Gupta, Shashi K.; Kratz, David P.; Stackhouse, Paul W., Jr.; Wilber, Anne C.

    2001-01-01

    An efficient algorithm was developed during the late 1980's and early 1990's by W. F. Staylor at NASA/LaRC for the purpose of deriving shortwave surface radiation budget parameters on a global scale. While the algorithm produced results in good agreement with observations, the lack of proper documentation resulted in a weak acceptance by the science community. The primary purpose of this report is to develop detailed documentation of the algorithm. In the process, the algorithm was modified whenever discrepancies were found between the algorithm and its referenced literature sources. In some instances, assumptions made in the algorithm could not be justified and were replaced with those that were justifiable. The algorithm uses satellite and operational meteorological data for inputs. Most of the original data sources have been replaced by more recent, higher quality data sources, and fluxes are now computed on a higher spatial resolution. Many more changes to the basic radiation scheme and meteorological inputs have been proposed to improve the algorithm and make the product more useful for new research projects. Because of the many changes already in place and more planned for the future, the algorithm has been renamed the Langley Parameterized Shortwave Algorithm (LPSA).

  14. Research on super-resolution image reconstruction based on an improved POCS algorithm

    NASA Astrophysics Data System (ADS)

    Xu, Haiming; Miao, Hong; Yang, Chong; Xiong, Cheng

    2015-07-01

    Super-resolution image reconstruction (SRIR) can improve the resolution of a blurred image, addressing insufficient spatial resolution, excessive noise, and other low-quality problems. First, we introduce the image degradation model to show that, mathematically, the super-resolution reconstruction process is an ill-posed inverse problem. Second, we analyze the causes of blurring in the optical imaging process: light diffraction and small-angle scattering are the main sources of blur. We propose a point spread function estimation method and an improved projection onto convex sets (POCS) algorithm, whose effectiveness is indicated by analyzing the changes between the time domain and the frequency domain during reconstruction; the improved POCS algorithm based on prior knowledge restores and approaches the high-frequency content of the original scene. Finally, we apply the algorithm to reconstruct synchrotron radiation computed tomography (SRCT) images and then use these images to reconstruct three-dimensional slice images. Comparing the original method with the super-resolution algorithm shows that the improved POCS algorithm can suppress noise and enhance image resolution, indicating that the algorithm is effective. This study of super-resolution image reconstruction by the improved POCS algorithm thus demonstrates an effective method with broad application prospects, for example in medical CT image processing and in SRCT analysis of microstructure evolution during ceramic sintering.
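
    To make the projection idea concrete, here is a generic two-constraint POCS iteration (a sketch of the principle only, not the authors' SR pipeline): alternate between enforcing consistency with the measured Fourier samples and enforcing non-negativity in the image domain.

        import numpy as np

        def pocs(measured, known, n_iter=200):
            """measured: 2-D FFT samples, meaningful only where known is True."""
            x = np.real(np.fft.ifft2(measured))
            for _ in range(n_iter):
                X = np.fft.fft2(x)
                X[known] = measured[known]      # project onto the data-consistency set
                x = np.real(np.fft.ifft2(X))
                x = np.clip(x, 0.0, None)       # project onto the non-negativity set
            return x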

  15. Optimal band selection for high dimensional remote sensing data using genetic algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, Xianfeng; Sun, Quan; Li, Jonathan

    2009-06-01

    A 'fused' method may not be suitable for reducing the dimensionality of data, and a band/feature selection method is needed to select an optimal subset of the original data bands. This study examined the efficiency of a genetic algorithm (GA) in band selection for remote sensing classification. A GA-based band selection algorithm was designed in which a Bhattacharyya distance index, indicating the separability between the classes of interest, is used as the fitness function. A binary string chromosome is designed in which each gene has the value 1 if the corresponding band is included in the subset and 0 otherwise. The algorithm was implemented in the MATLAB programming environment, and a band selection task for lithologic classification in the Chocolate Mountain area (California) was used to test the proposed algorithm. The proposed feature selection algorithm can be useful in multi-source remote sensing data preprocessing, especially in hyperspectral dimensionality reduction.
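
    A compact sketch of this kind of GA (our own simplification, in Python rather than MATLAB, and restricted to two classes for brevity): chromosomes are bit vectors over the bands, and the fitness is the Bhattacharyya distance computed on the selected bands only.

        import numpy as np

        def bhattacharyya(X1, X2, bits):
            """Two-class Bhattacharyya distance on the bands where bits == 1."""
            sel = np.flatnonzero(bits)
            if sel.size == 0:
                return 0.0
            a, b = X1[:, sel], X2[:, sel]
            m = a.mean(0) - b.mean(0)
            C1 = np.atleast_2d(np.cov(a, rowvar=False)) + 1e-6 * np.eye(sel.size)
            C2 = np.atleast_2d(np.cov(b, rowvar=False)) + 1e-6 * np.eye(sel.size)
            C = 0.5 * (C1 + C2)
            logdet = lambda M: np.linalg.slogdet(M)[1]
            return (0.125 * m @ np.linalg.solve(C, m)
                    + 0.5 * (logdet(C) - 0.5 * (logdet(C1) + logdet(C2))))

        def ga_band_select(X1, X2, n_bands, pop=40, gens=60, pmut=0.02, seed=0):
            rng = np.random.default_rng(seed)
            P = (rng.random((pop, n_bands)) < 0.3).astype(int)
            for _ in range(gens):
                fit = np.array([bhattacharyya(X1, X2, c) for c in P])
                best = P[int(np.argmax(fit))].copy()          # elitism
                i, j = rng.integers(0, pop, (2, pop))         # binary tournament selection
                parents = P[np.where(fit[i] >= fit[j], i, j)]
                mates = parents[rng.permutation(pop)]
                cross = rng.random((pop, n_bands)) < 0.5      # uniform crossover
                P = np.where(cross, parents, mates)
                P ^= (rng.random((pop, n_bands)) < pmut).astype(int)   # bit-flip mutation
                P[0] = best
            fit = np.array([bhattacharyya(X1, X2, c) for c in P])
            return P[int(np.argmax(fit))]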

  16. A numerical comparison of discrete Kalman filtering algorithms - An orbit determination case study

    NASA Technical Reports Server (NTRS)

    Thornton, C. L.; Bierman, G. J.

    1976-01-01

    An improved Kalman filter algorithm based on a modified Givens matrix triangularization technique is proposed for solving a nonstationary discrete-time linear filtering problem. The proposed U-D covariance factorization filter uses orthogonal transformation technique; measurement and time updating of the U-D factors involve separate application of Gentleman's fast square-root-free Givens rotations. Numerical stability and accuracy of the algorithm are compared with those of the conventional and stabilized Kalman filters and the Potter-Schmidt square-root filter, by applying these techniques to a realistic planetary navigation problem (orbit determination for the Saturn approach phase of the Mariner Jupiter-Saturn Mission, 1977). The new algorithm is shown to combine the numerical precision of square root filtering with the efficiency of the original Kalman algorithm.

  17. A Novel Adaptive Frequency Estimation Algorithm Based on Interpolation FFT and Improved Adaptive Notch Filter

    NASA Astrophysics Data System (ADS)

    Shen, Ting-ao; Li, Hua-nan; Zhang, Qi-xin; Li, Ming

    2017-02-01

    The convergence rate and the continuous tracking precision are two main problems of the existing adaptive notch filter (ANF) for frequency tracking. To solve these problems, the frequency is first detected by interpolation FFT, which overcomes the slow convergence of the ANF. Then, borrowing the idea of negative feedback, an evaluation factor is designed to monitor the ANF parameters and maintain continuously high frequency tracking accuracy. On this basis, a novel adaptive frequency estimation algorithm based on interpolation FFT and an improved ANF is put forward. Its basic idea, specific measures and implementation steps are described in detail. The proposed algorithm achieves fast estimation of the signal frequency with higher accuracy and better generality. Simulation results verified the superiority and validity of the proposed algorithm compared with the original algorithms.
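
    A minimal sketch of the interpolation-FFT stage (the classical quadratic-interpolation variant; the paper's exact interpolation formula and the ANF feedback loop are not reproduced):

        import numpy as np

        def interp_fft_freq(x, fs):
            """Coarse peak search plus quadratic interpolation of the log-magnitude
            spectrum; refines the estimate to a fraction of one FFT bin."""
            w = np.hanning(len(x))
            X = np.abs(np.fft.rfft(x * w))
            k = int(np.argmax(X[1:-1])) + 1             # ignore DC and Nyquist bins
            a, b, c = np.log(X[k - 1:k + 2] + 1e-20)
            delta = 0.5 * (a - c) / (a - 2 * b + c)     # peak offset in bins, |delta| <= 0.5
            return (k + delta) * fs / len(x)

        fs = 1000.0
        t = np.arange(1024) / fs
        print(interp_fft_freq(np.sin(2 * np.pi * 123.4 * t), fs))   # ~123.4, far finer than one bin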

  18. A Novel Histogram Region Merging Based Multithreshold Segmentation Algorithm for MR Brain Images

    PubMed Central

    Shen, Xuanjing; Feng, Yuncong

    2017-01-01

    Multithreshold segmentation algorithms are time-consuming, and their time complexity increases exponentially with the number of thresholds. In order to reduce the time complexity, a novel multithreshold segmentation algorithm is proposed in this paper. First, all gray levels are used as thresholds, so the histogram of the original image is divided into 256 small regions, each corresponding to one gray level. Then, two adjacent regions are merged in each iteration by a newly designed scheme, and one threshold is removed each time. To improve the accuracy of the merge operation, variance and probability are used as the energy. Regardless of the number of thresholds, the time complexity of the algorithm remains O(L). Finally, experiments were conducted on many MR brain images to verify the performance of the proposed algorithm. The results show that our method reduces the running time effectively and obtains segmentation results with high accuracy.

  19. Algorithm development for hyperspectral anomaly detection

    NASA Astrophysics Data System (ADS)

    Rosario, Dalton S.

    2008-10-01

    This dissertation proposes and evaluates a novel anomaly detection algorithm suite for ground-to-ground, or air-to-ground, applications requiring automatic target detection using hyperspectral (HS) data. Targets are manmade objects in natural background clutter under unknown illumination and atmospheric conditions. The use of statistical models herein is purely for motivation of particular formulas for calculating anomaly output surfaces. In particular, formulas from semiparametrics are utilized to obtain novel forms for output surfaces, and alternative scoring algorithms are proposed to calculate output surfaces that are comparable to those of semiparametrics. Evaluation uses both simulated data and real HS data from a joint data collection effort between the Army Research Laboratory and the Army Armament Research Development & Engineering Center. A data transformation method is presented for use by the two-sample data structure univariate semiparametric and nonparametric scoring algorithms, such that, the two-sample data are mapped from their original multivariate space to an univariate domain, where the statistical power of the univariate scoring algorithms is shown to be improved relative to existing multivariate scoring algorithms testing the same two-sample data. An exhaustive simulation experimental study is conducted to assess the performance of different HS anomaly detection techniques, where the null and alternative hypotheses are completely specified, including all parameters, using multivariate normal and mixtures of multivariate normal distributions. Finally, for ground-to-ground anomaly detection applications, where the unknown scales of targets add to the problem complexity, a novel global anomaly detection algorithm suite is introduced, featuring autonomous partial random sampling (PRS) of the data cube. The PRS method is proposed to automatically sample the unknown background clutter in the test HS imagery, and by repeating multiple times this

  20. GPU accelerated Foreign Object Debris Detection on Airfield Pavement with visual saliency algorithm

    NASA Astrophysics Data System (ADS)

    Qi, Jun; Gong, Guoping; Cao, Xiaoguang

    2017-01-01

    We present a GPU-based implementation of a visual saliency algorithm that detects foreign object debris (FOD) on airfield pavement effectively and efficiently. A visual saliency algorithm is introduced for FOD detection for the first time. We improve the image signature algorithm to target FOD detection against the complex background of pavement. First, we add pooling operations when computing the saliency map to improve the recall rate. Then, connected component analysis is applied to filter candidate regions in the saliency map and obtain the final targets in the original image. In addition, we map the algorithm to GPU-based kernels and data structures; the parallel version of the algorithm obtains the results with a 23.5x speedup. Experimental results show that the proposed method is effective for real-time FOD detection.
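
    A minimal CPU sketch of the underlying image-signature saliency map and the connected-component step (our illustration of the published technique by Hou, Harel and Koch; the paper's pooling modification and GPU kernels are not shown):

        import numpy as np
        from scipy.fft import dctn, idctn
        from scipy.ndimage import gaussian_filter, label

        def image_signature_saliency(gray, sigma=3.0):
            """Keep only the sign of the DCT, invert, then square and blur."""
            recon = idctn(np.sign(dctn(gray, norm="ortho")), norm="ortho")
            sal = gaussian_filter(recon * recon, sigma)
            return sal / (sal.max() + 1e-12)

        # FOD candidates: threshold the map, then connected-component analysis
        sal = image_signature_saliency(np.random.rand(128, 128))   # stand-in for a pavement image
        regions, n = label(sal > 0.5)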

  1. Heuristic-based tabu search algorithm for folding two-dimensional AB off-lattice model proteins.

    PubMed

    Liu, Jingfa; Sun, Yuanyuan; Li, Gang; Song, Beibei; Huang, Weibo

    2013-12-01

    The protein structure prediction problem is a classical NP hard problem in bioinformatics. The lack of an effective global optimization method is the key obstacle in solving this problem. As one of the global optimization algorithms, tabu search (TS) algorithm has been successfully applied in many optimization problems. We define the new neighborhood conformation, tabu object and acceptance criteria of current conformation based on the original TS algorithm and put forward an improved TS algorithm. By integrating the heuristic initialization mechanism, the heuristic conformation updating mechanism, and the gradient method into the improved TS algorithm, a heuristic-based tabu search (HTS) algorithm is presented for predicting the two-dimensional (2D) protein folding structure in AB off-lattice model which consists of hydrophobic (A) and hydrophilic (B) monomers. The tabu search minimization leads to the basins of local minima, near which a local search mechanism is then proposed to further search for lower-energy conformations. To test the performance of the proposed algorithm, experiments are performed on four Fibonacci sequences and two real protein sequences. The experimental results show that the proposed algorithm has found the lowest-energy conformations so far for three shorter Fibonacci sequences and renewed the results for the longest one, as well as two real protein sequences, demonstrating that the HTS algorithm is quite promising in finding the ground states for AB off-lattice model proteins.

  2. A Data Preprocessing Algorithm for Classification Model Based On Rough Sets

    NASA Astrophysics Data System (ADS)

    Xiang-wei, Li; Yian-fang, Qi

    To address the limitation that overabundant data places on the construction of classification models in data mining, this paper proposes a novel and effective preprocessing algorithm based on rough sets. First, we construct the relational information system from the original data sets. Second, we use the attribute reduction theory of rough sets to produce the core of the information system; the core is the most important and necessary information, which cannot be reduced from the original information system, so it supports the same data analysis as the original data sets and a classification model can be constructed from it. Third, we construct the indiscernibility matrix from the reduced information system and finally obtain the classification of the original data sets. Compared with existing techniques, the developed algorithm enjoys the following advantages: (1) it avoids carrying abundant data into the follow-up processing, (2) it avoids a large amount of computation in the whole data mining process, and (3) the results become more effective because of the introduction of the attribute reduction theory of rough sets.

  3. Quantum gate decomposition algorithms.

    SciTech Connect

    Slepoy, Alexander

    2006-07-01

    Quantum computing algorithms can be conveniently expressed in the format of quantum logical circuits. Such circuits consist of sequential coupled operations, termed ''quantum gates'', acting on quantum analogs of bits called qubits. We review a recently proposed method [1] for constructing general ''quantum gates'' operating on n qubits, as composed of a sequence of generic elementary ''gates''.

  4. An Improved Passive Phase Conjugation Array Communication Algorithm

    NASA Astrophysics Data System (ADS)

    Jia, Ning; Guo, Zhongyuan; Huang, Jianchun; Chen, Geng

    2010-09-01

    The time-varying, dispersive, multipath underwater acoustic channel is a challenging environment for reliable coherent communications. A method proposed recently to cope with intersymbol interference (ISI) is passive phase conjugation (PPC) cascaded with decision-feedback equalization (DFE). Based on the theory of signal propagation in a waveguide, PPC can mitigate channel fading and improve the signal-to-noise ratio (SNR) by using a receiver array, while the residual ISI is removed by the DFE. However, this method can diverge explosively when the channel changes by a large amount, because PPC then estimates the channel inaccurately. An improved algorithm is introduced in this paper that estimates the channel throughout the communication process; as a result, changes in the channel are detected in time and the PPC can use a more accurate channel estimate. Using simulated and at-sea data, we demonstrate that this algorithm improves the stability of the original algorithm in changing channels.

  5. One improved LSB steganography algorithm

    NASA Astrophysics Data System (ADS)

    Song, Bing; Zhang, Zhi-hong

    2013-03-01

    Information hidden in digital images with the plain LSB algorithm is easily detected, with high accuracy, by χ² and RS steganalysis. Starting from the selection of the embedding locations and the modification of the embedding method, and combining a sub-affine transformation with matrix coding, we improved the LSB algorithm and propose a new LSB algorithm. Experimental results show that the improved algorithm can resist χ² and RS steganalysis effectively.
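
    For context, a minimal sketch of the plain LSB baseline the record improves on (the paper's sub-affine location selection and matrix coding are not shown):

        import numpy as np

        def lsb_embed(cover, bits):
            """Overwrite the least significant bit of the first len(bits) pixels."""
            flat = cover.flatten()                    # flatten() copies, cover is untouched
            flat[:len(bits)] = (flat[:len(bits)] & 0xFE) | bits
            return flat.reshape(cover.shape)

        def lsb_extract(stego, n):
            return stego.flatten()[:n] & 1

        cover = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
        msg = np.random.randint(0, 2, 100, dtype=np.uint8)
        assert np.array_equal(lsb_extract(lsb_embed(cover, msg), 100), msg)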

  6. Parallel algorithms for unconstrained optimizations by multisplitting

    SciTech Connect

    He, Qing

    1994-12-31

    In this paper a new parallel iterative algorithm for unconstrained optimization using the idea of multisplitting is proposed. This algorithm uses the existing sequential algorithms without any parallelization. Some convergence and numerical results for this algorithm are presented. The experiments are performed on an Intel iPSC/860 Hyper Cube with 64 nodes. It is interesting that the sequential implementation on one node shows that if the problem is split properly, the algorithm converges much faster than one without splitting.

  7. A hyper-chaos-based image encryption algorithm using pixel-level permutation and bit-level permutation

    NASA Astrophysics Data System (ADS)

    Li, Yueping; Wang, Chunhua; Chen, Hua

    2017-03-01

    Recently, a number of chaos-based image encryption algorithms using low-dimensional chaotic maps and the permutation-diffusion architecture have been proposed. However, a low-dimensional chaotic map is less safe than a high-dimensional chaotic system, and in these schemes the permutation process is independent of the plaintext and of the diffusion process. Therefore, they cannot efficiently resist chosen-plaintext and chosen-ciphertext attacks. In this paper, we propose a hyper-chaos-based image encryption algorithm. The algorithm adopts a 5-D multi-wing hyper-chaotic system, and the key stream generated by the hyper-chaotic system is related to the original image. Then, pixel-level permutation and bit-level permutation are employed to strengthen the security of the cryptosystem. Finally, a diffusion operation is employed to change the pixels. Theoretical analysis and numerical simulations demonstrate that the proposed algorithm is secure and reliable for image encryption.
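
    To illustrate the permutation-diffusion architecture itself, here is a toy sketch using a 1-D logistic map (precisely the kind of low-dimensional, plaintext-independent keystream the paper argues is weak; its 5-D plaintext-dependent system is not reproduced here). Decryption inverts the XOR and then applies the inverse permutation np.argsort(order).

        import numpy as np

        def logistic_seq(x0, n, mu=3.99, burn=1000):
            """Chaotic sequence from x <- mu*x*(1-x), with 0 < x0 < 1."""
            x, out = x0, np.empty(n)
            for i in range(n + burn):
                x = mu * x * (1.0 - x)
                if i >= burn:
                    out[i - burn] = x
            return out

        def encrypt(img, key):                          # key: a float in (0, 1)
            n = img.size
            order = np.argsort(logistic_seq(key, n))    # pixel-level permutation stage
            ks = (logistic_seq(1.0 - key, n) * 256).astype(np.uint8)   # diffusion keystream
            return np.bitwise_xor(img.flatten()[order], ks).reshape(img.shape)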

  8. A novel image fusion algorithm based on human vision system

    NASA Astrophysics Data System (ADS)

    Miao, Qiguang; Wang, Baoshu

    2006-04-01

    The proposed new fusion algorithm is based on an improved pulse coupled neural network (PCNN) model, the fundamental characteristics of images, and the properties of the human vision system. In the traditional algorithm the linking strength of each neuron is the same, and its value is chosen through experimentation; in contrast, this algorithm uses the contrast of each pixel as its linking strength, so that the linking strength of each pixel can be chosen adaptively. After processing by the PCNN with adaptive linking strength, new fire-mapping images are obtained for each image taking part in the fusion. The clear objects of each original image are selected by a compare-selection operator applied to the fire-mapping images pixel by pixel, and all of them are then merged into a new clear image. Furthermore, with this algorithm other parameters, for example Δ, the threshold adjusting constant, have only a slight effect on the fused image, which overcomes the difficulty of adjusting parameters in PCNN. Experiments show that the proposed algorithm preserves edge and texture information better than the wavelet transform and Laplacian pyramid image fusion methods.

  9. A novel fingerprint recognition algorithm based on VK-LLE

    NASA Astrophysics Data System (ADS)

    Luo, Jing; Lin, Shu-zhong; Ni, Jian-yun; Song, Li-mei

    2009-07-01

    Overcoming shift, rotation and nonlinearity in fingerprint images is a challenging problem. After analyzing the shortcomings of current fingerprint recognition algorithms on shifted or rotated images, a manifold learning algorithm is introduced, and a fingerprint recognition algorithm based on locally linear embedding with variable neighborhood k (VK-LLE) is proposed. First, the approximate geodesic distance between any two points is computed by ISOMAP (isometric feature mapping), and the neighborhood of each point is then determined from the relationship between its local estimated geodesic distance matrix and its local Euclidean distance matrix. Second, the dimensionality of the fingerprint image is reduced by this nonlinear dimension-reduction method, yielding the best projected features of the original high-dimensional fingerprint data. The neighborhood size and embedding dimension are determined by analyzing how the recognition accuracy changes with both. Finally, recognition is accomplished by a Euclidean distance classifier. Experimental results on standard fingerprint datasets verify that the proposed algorithm is more robust to shifted, rotated or nonlinearly distorted fingerprint images than the algorithm using standard LLE, and thus has practical value.

  10. Object tracking algorithm based on contextual visual saliency

    NASA Astrophysics Data System (ADS)

    Fu, Bao; Peng, XianRong

    2016-09-01

    In object tracking, the local context surrounding the target can provide much effective information for building a robust tracker. The spatial-temporal context (STC) learning algorithm proposed recently considers the information of the dense context around the target and has achieved good performance. However, STC uses only image intensity as the object appearance model, which is not enough to deal with complicated tracking scenarios. In this paper, we propose a novel object appearance model learning algorithm. Our approach formulates the spatial-temporal relationships between the object of interest and its local context in a Bayesian framework, which models the statistical correlation between high-level features (Circular Multi-Block Local Binary Patterns) of the target and its surrounding regions. The tracking problem is posed as computing a visual saliency map and obtaining the best target location by maximizing an object location likelihood function. Extensive experimental results on public benchmark databases show that our algorithm outperforms the original STC algorithm and other state-of-the-art tracking algorithms.

  11. A three-dimensional weighted cone beam filtered backprojection (CB-FBP) algorithm for image reconstruction in volumetric CT under a circular source trajectory

    NASA Astrophysics Data System (ADS)

    Tang, Xiangyang; Hsieh, Jiang; Hagiwara, Akira; Nilsen, Roy A.; Thibault, Jean-Baptiste; Drapkin, Evgeny

    2005-08-01

    The original FDK algorithm proposed for cone beam (CB) image reconstruction under a circular source trajectory has been extensively employed in medical and industrial imaging applications. With increasing cone angle, CB artefacts in images reconstructed by the original FDK algorithm deteriorate, since the circular trajectory does not satisfy the so-called data sufficiency condition (DSC). A few 'circular plus' trajectories have been proposed in the past to help the original FDK algorithm to reduce CB artefacts by meeting the DSC. However, the circular trajectory has distinct advantages over other scanning trajectories in practical CT imaging, such as head imaging, breast imaging, cardiac, vascular and perfusion applications. In addition to looking into the DSC, another insight into the CB artefacts existing in the original FDK algorithm is the inconsistency between conjugate rays that are 180° apart in view angle (namely conjugate ray inconsistency). The conjugate ray inconsistency is pixel dependent, varying dramatically over pixels within the image plane to be reconstructed. However, the original FDK algorithm treats all conjugate rays equally, resulting in CB artefacts that can be avoided if appropriate weighting strategies are exercised. Along with an experimental evaluation and verification, a three-dimensional (3D) weighted axial cone beam filtered backprojection (CB-FBP) algorithm is proposed in this paper for image reconstruction in volumetric CT under a circular source trajectory. Without extra trajectories supplemental to the circular trajectory, the proposed algorithm applies 3D weighting on projection data before 3D backprojection to reduce conjugate ray inconsistency by suppressing the contribution from one of the conjugate rays with a larger cone angle. Furthermore, the 3D weighting is dependent on the distance between the reconstruction plane and the central plane determined by the circular trajectory. The proposed 3D weighted axial CB-FBP algorithm

  12. Paideia: Origins.

    ERIC Educational Resources Information Center

    Burns, John W.

    The ideas in Mortimer Adler's educational manifesto, "The Paideia Proposal," are compared to the Greek concept of paideia (meaning upbringing of a child) and discredited. Committed to universal education, Adler wants schooling based on a set of uniformly applied objectives achieved by packaging pre-organized knowledge in established…

  13. Non-parametric Algorithm to Isolate Chunks in Response Sequences

    PubMed Central

    Alamia, Andrea; Solopchuk, Oleg; Olivier, Etienne; Zenon, Alexandre

    2016-01-01

    Chunking consists in grouping items of a sequence into small clusters, named chunks, with the assumed goal of lessening working memory load. Despite extensive research, the current methods used to detect chunks, and to identify different chunking strategies, remain discordant and difficult to implement. Here, we propose a simple and reliable method to identify chunks in a sequence and to determine their stability across blocks. This algorithm is based on a ranking method and its major novelty is that it provides concomitantly both the features of individual chunk in a given sequence, and an overall index that quantifies the chunking pattern consistency across sequences. The analysis of simulated data confirmed the validity of our method in different conditions of noise, chunk lengths and chunk numbers; moreover, we found that this algorithm was particularly efficient in the noise range observed in real data, provided that at least 4 sequence repetitions were included in each experimental block. Furthermore, we applied this algorithm to actual reaction time series gathered from 3 published experiments and were able to confirm the findings obtained in the original reports. In conclusion, this novel algorithm is easy to implement, is robust to outliers and provides concurrent and reliable estimation of chunk position and chunking dynamics, making it useful to study both sequence-specific and general chunking effects. The algorithm is available at: https://github.com/artipago/Non-parametric-algorithm-to-isolate-chunks-in-response-sequences. PMID:27708565

  14. Astronomical image denoising by means of improved adaptive backtracking-based matching pursuit algorithm.

    PubMed

    Liu, Qianshun; Bai, Jian; Yu, Feihong

    2014-11-10

    In an effort to improve compressive sensing and sparse signal reconstruction by way of the backtracking-based adaptive orthogonal matching pursuit (BAOMP), a new sparse coding algorithm called improved adaptive backtracking-based OMP (IABOMP) is proposed in this study. Many aspects have been improved compared with the original BAOMP method, including replacing the fixed threshold with an adaptive one and adding residual feedback and support-set verification, among others. Because of these improvements, the proposed algorithm can choose the atoms more precisely, and by adding an adaptive step-size mechanism it requires far fewer iterations and thus executes more efficiently. Additionally, a simple but effective contrast enhancement method is adopted to further improve the denoising results and visual effect. By combining the IABOMP algorithm with the state-of-the-art dictionary learning algorithm K-SVD, the proposed algorithm achieves better denoising effects for astronomical images. Numerous experimental results show that the proposed algorithm performs successfully and effectively on Gaussian and Poisson noise removal.
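
    For context, a minimal sketch of the plain OMP baseline that BAOMP/IABOMP extend with backtracking (pruning unreliable atoms) and adaptive thresholds (our illustration, not the paper's code):

        import numpy as np

        def omp(D, y, k, tol=1e-6):
            """Greedy sparse coding: pick the atom most correlated with the residual,
            then re-fit all selected coefficients by least squares."""
            residual, support = y.astype(float).copy(), []
            coef = np.zeros(0)
            for _ in range(k):
                j = int(np.argmax(np.abs(D.T @ residual)))
                if j not in support:
                    support.append(j)
                coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
                residual = y - D[:, support] @ coef
                if np.linalg.norm(residual) < tol:
                    break
            x = np.zeros(D.shape[1])
            x[support] = coef
            return x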

  15. Scheduling algorithms

    NASA Astrophysics Data System (ADS)

    Wolfe, William J.; Wood, David; Sorensen, Stephen E.

    1996-12-01

    This paper discusses automated scheduling as it applies to complex domains such as factories, transportation, and communications systems. The window-constrained-packing problem is introduced as an idealized model of the scheduling trade-offs. Specific algorithms are compared in terms of simplicity, speed, and accuracy. In particular, dispatch, look-ahead, and genetic algorithms are statistically compared on randomly generated job sets. The conclusion is that dispatch methods are fast and fairly accurate, while modern algorithms, such as genetic algorithms and simulated annealing, have excessive run times and are too complex to be practical.

  16. Haplotyping algorithms

    SciTech Connect

    Sobel, E.; Lange, K.; O`Connell, J.R.

    1996-12-31

    Haplotyping is the logical process of inferring gene flow in a pedigree based on phenotyping results at a small number of genetic loci. This paper formalizes the haplotyping problem and suggests four algorithms for haplotype reconstruction. These algorithms range from exhaustive enumeration of all haplotype vectors to combinatorial optimization by simulated annealing. Application of the algorithms to published genetic analyses shows that manual haplotyping is often erroneous. Haplotyping is employed in screening pedigrees for phenotyping errors and in positional cloning of disease genes from conserved haplotypes in population isolates. 26 refs., 6 figs., 3 tabs.

  17. An Efficient Globally Optimal Algorithm for Asymmetric Point Matching.

    PubMed

    Lian, Wei; Zhang, Lei; Yang, Ming-Hsuan

    2016-08-29

    Although the robust point matching algorithm has been demonstrated to be effective for non-rigid registration, there are several issues with the adopted deterministic annealing optimization technique. First, it is not globally optimal and regularization on the spatial transformation is needed for good matching results. Second, it tends to align the mass centers of two point sets. To address these issues, we propose a globally optimal algorithm for the robust point matching problem where each model point has a counterpart in the scene set. By eliminating the transformation variables, we show that the original matching problem is reduced to a concave quadratic assignment problem where the objective function has a low-rank Hessian matrix. This facilitates the use of large-scale global optimization techniques. We propose a branch-and-bound algorithm based on rectangular subdivision where, in each iteration, multiple rectangles are used to increase the chances of subdividing the one containing the global optimal solution. In addition, we present an efficient lower bounding scheme which has a linear assignment formulation and can be efficiently solved. Extensive experiments on synthetic and real datasets demonstrate that the proposed algorithm performs favorably against the state-of-the-art methods in terms of robustness to outliers, matching accuracy, and run-time.

  18. Localized density matrix minimization and linear-scaling algorithms

    NASA Astrophysics Data System (ADS)

    Lai, Rongjie; Lu, Jianfeng

    2016-06-01

    We propose a convex variational approach to compute localized density matrices for both zero temperature and finite temperature cases, by adding an entry-wise ℓ1 regularization to the free energy of the quantum system. Based on the fact that the density matrix decays exponentially away from the diagonal for insulating systems or systems at finite temperature, the proposed ℓ1 regularized variational method provides an effective way to approximate the original quantum system. We provide theoretical analysis of the approximation behavior and also design convergence guaranteed numerical algorithms based on Bregman iteration. More importantly, the ℓ1 regularized system naturally leads to localized density matrices with banded structure, which enables us to develop approximating algorithms to find the localized density matrices with computation cost linearly dependent on the problem size.

  19. Web multimedia information retrieval using improved Bayesian algorithm.

    PubMed

    Yu, Yi-Jun; Chen, Chun; Yu, Yi-Min; Lin, Huai-Zhong

    2003-01-01

    The main thrust of this paper is the application of a novel data mining approach to logs of user feedback to improve web multimedia information retrieval performance. A user space model was constructed based on data mining and then integrated into the original information space model to improve its accuracy. It can remove clutter and irrelevant text information and helps eliminate the mismatch between the page author's expression and the user's understanding and expectation. The user space model was also utilized to discover the relationship between high-level and low-level features for assigning weights. The authors propose an improved Bayesian algorithm for the data mining. Experiments showed that the proposed algorithm is efficient.

  20. Program Proposal

    ERIC Educational Resources Information Center

    Baskas, Richard S.

    2012-01-01

    A study was conducted to determine if a deficiency, or learning gap, existed in a particular working environment. To determine if an assessment was to be conducted, a program proposal would need to be developed to explore this situation. In order for a particular environment to react and grow with other environments, it must be able to take on…

  1. Multi-pattern string matching algorithms comparison for intrusion detection system

    NASA Astrophysics Data System (ADS)

    Hasan, Awsan A.; Rashid, Nur'Aini Abdul; Abdulrazzaq, Atheer A.

    2014-12-01

    Computer networks are developing exponentially and running at high speeds. With the increasing number of Internet users, computers have become the preferred target for complex attacks that require complex analyses to be detected. The intrusion detection system (IDS) was created and has become an important part of any modern network to protect the network from attacks. An IDS relies on string matching algorithms to identify network attacks, but these algorithms consume a considerable amount of IDS processing time, thereby slowing down the IDS performance. A new algorithm that can overcome this weakness of the IDS needs to be developed; improving the multi-pattern matching algorithm ensures that an IDS can work properly and that this limitation can be overcome. In this paper, we compare our three multi-pattern matching algorithms, MP-KR, MPH-QS and MPH-BMH, with their corresponding original algorithms KR, QS and BMH, respectively. The experiments show that MPH-QS performs best among the proposed algorithms, followed by MPH-BMH, while MP-KR is the slowest. MPH-QS detects a large number of signature patterns in a short time compared with the other two algorithms. These findings show that multi-pattern matching algorithms are more efficient in high-speed networks.
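
    For context, a minimal sketch of the single-pattern Boyer-Moore-Horspool (BMH) search that MPH-BMH generalizes to pattern sets (our illustration, not the paper's code):

        def horspool(text, pattern):
            """Scan right-to-left inside a window; on mismatch, shift the window by
            the precomputed skip of the window's last character."""
            m, n = len(pattern), len(text)
            if m == 0 or n < m:
                return -1
            skip = {c: m - 1 - i for i, c in enumerate(pattern[:-1])}
            i = m - 1
            while i < n:
                j = 0
                while j < m and text[i - j] == pattern[m - 1 - j]:
                    j += 1
                if j == m:
                    return i - m + 1              # match starts here
                i += skip.get(text[i], m)         # unseen characters allow a full-length shift
            return -1

        assert horspool("attack signature in traffic", "signature") == 7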

  2. About one algorithm of the broken line approximation and a modeling of tool path for CNC plate cutting machines

    NASA Astrophysics Data System (ADS)

    Kurennov, D. V.; Petunin, A. A.; Repnitskii, V. B.; Shipacheva, E. N.

    2016-12-01

    The problem of approximating a two-dimensional broken line with a composite curve consisting of arc and line segments is considered. The nodes of the resulting curve have to coincide with the nodes of the source broken line. This problem arises in the development of control programs for CNC (computer numerical control) cutting machines that permit circular interpolation. An original algorithm is proposed that minimizes the number of nodes in the resulting composite curve. The algorithm is implemented in the environment of the Russian CAD system T-Flex CAD using its API (Application Program Interface). The optimality of the algorithm is investigated. The result of a test calculation is given along with its geometrical visualization.

  3. Algorithm That Synthesizes Other Algorithms for Hashing

    NASA Technical Reports Server (NTRS)

    James, Mark

    2010-01-01

    An algorithm that includes a collection of several subalgorithms has been devised as a means of synthesizing still other algorithms (which could include computer code) that utilize hashing to determine whether an element (typically, a number or other datum) is a member of a set (typically, a list of numbers). Each subalgorithm synthesizes an algorithm (e.g., a block of code) that maps a static set of key hashes to a somewhat linear monotonically increasing sequence of integers. The goal in formulating this mapping is to cause the length of the sequence thus generated to be as close as practicable to the original length of the set and thus to minimize gaps between the elements. The advantage of the approach embodied in this algorithm is that it completely avoids the traditional approach of hash-key look-ups that involve either secondary hash generation and look-up or further searching of a hash table for a desired key in the event of collisions. This algorithm guarantees that it will never be necessary to perform a search or to generate a secondary key in order to determine whether an element is a member of a set. This algorithm further guarantees that any algorithm that it synthesizes can be executed in constant time. To enforce these guarantees, the subalgorithms are formulated to employ a set of techniques, each of which works very effectively covering a certain class of hash-key values. These subalgorithms are of two types, summarized as follows: Given a list of numbers, try to find one or more solutions in which, if each number is shifted to the right by a constant number of bits and then masked with a rotating mask that isolates a set of bits, a unique number is thereby generated. In a variant of the foregoing procedure, omit the masking. Try various combinations of shifting, masking, and/or offsets until the solutions are found. From the set of solutions, select the one that provides the greatest compression for the representation and is executable in the
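
    As a toy illustration of the shift-and-mask family of subalgorithms described above (entirely our sketch, not the NASA code), one can brute-force a (shift, mask) pair that maps a static set of distinct non-negative integer keys to distinct table slots, giving constant-time membership tests with no collision handling:

        def synthesize_hash(keys, max_shift=32):
            """Brute-force a (shift, mask) pair so (key >> shift) & mask is unique
            for every key; start from the densest table and widen until found."""
            width = max(len(keys).bit_length(), 1)
            while True:
                mask = (1 << width) - 1
                for shift in range(max_shift + 1):
                    if len({(k >> shift) & mask for k in keys}) == len(keys):
                        return shift, mask
                width += 1          # widen the table and retry

        keys = [34, 77, 515, 1025, 4096]
        shift, mask = synthesize_hash(keys)
        table = [None] * (mask + 1)
        for k in keys:
            table[(k >> shift) & mask] = k
        # constant-time membership with no collisions: table[(x >> shift) & mask] == x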

  4. A hybrid algorithm with GA and DAEM

    NASA Astrophysics Data System (ADS)

    Wan, HongJie; Deng, HaoJiang; Wang, XueWei

    2013-03-01

    Although the expectation-maximization (EM) algorithm has been widely used for finding maximum likelihood estimates of parameters in probabilistic models, it has the problem of becoming trapped in local maxima. To overcome this problem, the deterministic annealing EM (DAEM) algorithm was proposed and achieved better performance than the EM algorithm, but it is still not very effective at avoiding local maxima. In this paper, a solution is proposed that integrates GA and DAEM into one procedure to further improve the solution quality. The population-based search of the genetic algorithm produces different solutions and thus increases the search space of DAEM. Therefore, the proposed algorithm reaches better solutions than DAEM alone. The algorithm retains the properties of DAEM and obtains better solutions through the genetic operations. Experimental results on Gaussian mixture model parameter estimation demonstrate that the proposed algorithm achieves better performance.

  5. Array signal recovery algorithm for a single-RF-channel DBF array

    NASA Astrophysics Data System (ADS)

    Zhang, Duo; Wu, Wen; Fang, Da Gang

    2016-12-01

    An array signal recovery algorithm based on sparse signal reconstruction theory is proposed for a single-RF-channel digital beamforming (DBF) array. A single-RF-channel antenna array is a low-cost antenna array in which signals are obtained from all antenna elements by only one microwave digital receiver. The spatially parallel array signals are converted into time-sequence signals, which are then sampled by the system. The proposed algorithm uses these time-sequence samples to recover the original parallel array signals by exploiting the second-order sparse structure of the array signals. Additionally, an optimization method based on the artificial bee colony (ABC) algorithm is proposed to improve the reconstruction performance. Using the proposed algorithm, the motion compensation problem for the single-RF-channel DBF array can be solved effectively, and the angle and Doppler information for the target can be simultaneously estimated. The effectiveness of the proposed algorithms is demonstrated by the results of numerical simulations.

  6. Origin of life

    NASA Astrophysics Data System (ADS)

    Ehrenfreund, P.; Cleaves, H. J.

    2003-10-01

    Deciphering the origin of life requires some knowledge of the early planetary environment. Unfortunately, we lack definitive evidence of the atmospheric composition, surface temperature, oceanic pH, and other environmental conditions that may have been important for the appearance of the first living systems on Earth. The rock remnants of the early Archean are extremely scarce and most of the record has been lost. The first indications of life from carbon inclusions in rocks and the oldest fossil record are currently under debate but there is a consensus that life started during the first billion years after the Earth formed. Life as we know it is a chemical phenomenon. The chemistry that could have produced self-organizing systems is the central problem in the origin of life. There are several competing theories for how this chemistry may have arisen. In spite of their diversity, proposals for a prebiotic "soup", for the role of submarine hydrothermal vents, or for the extraterrestrial origin of organic compounds have as a common background assumption the idea that abiotic organic compounds were necessary for the emergence of life. It is possible that a combination of these sources - exogenous and endogenous - contributed building blocks for the origin of life on Earth. In this paper we provide a review of the main ideas on the origin of life from the astrobiological perspective and discuss the probability of life on extrasolar planets.

  7. DNABIT Compress - Genome compression algorithm.

    PubMed

    Rajarajeswari, Pothuraju; Apparao, Allam

    2011-01-22

    Data compression is concerned with how information is organized in data. Efficient storage means removal of redundancy from the data being stored in the DNA molecule. Data compression algorithms remove redundancy and are used to understand biologically important molecules. We present a compression algorithm, "DNABIT Compress", for DNA sequences, based on a novel scheme of assigning binary bits to smaller segments of DNA bases to compress both repetitive and non-repetitive DNA sequences. Our proposed algorithm achieves the best compression ratios for DNA sequences, particularly for larger genomes. Significantly better compression results show that "DNABIT Compress" is the best among the existing compression algorithms, and while achieving the best compression ratios for DNA sequences (genomes), it also significantly improves on the running time of all previous DNA compression programs. Assigning binary bits (a unique bit code) to fragments of a DNA sequence (exact repeats and reverse repeats) is also a unique concept introduced in this algorithm for the first time in DNA compression. The proposed algorithm achieves a compression ratio as low as 1.58 bits/base, whereas the best existing methods could not achieve a ratio below 1.72 bits/base.
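
    For scale, the naive baseline packs each base into 2 bits (exactly 2.0 bits/base versus 8 for ASCII); DNABIT's variable-length codes for repeated fragments are what push below that. A minimal sketch of the baseline:

        CODE = {"A": 0, "C": 1, "G": 2, "T": 3}
        BASES = "ACGT"

        def pack(seq):
            """Four bases per byte: 2 bits per base."""
            out = bytearray()
            for i in range(0, len(seq), 4):
                chunk, byte = seq[i:i + 4], 0
                for b in chunk:
                    byte = (byte << 2) | CODE[b]
                out.append((byte << 2 * (4 - len(chunk))) & 0xFF)   # left-align a short tail
            return bytes(out)

        def unpack(data, n):
            bases = [BASES[(byte >> k) & 3] for byte in data for k in (6, 4, 2, 0)]
            return "".join(bases[:n])

        s = "ACGTACGTGGA"
        assert unpack(pack(s), len(s)) == s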

  8. Correction of Faulty Sensors in Phased Array Radars Using Symmetrical Sensor Failure Technique and Cultural Algorithm with Differential Evolution

    PubMed Central

    Khan, S. U.; Qureshi, I. M.; Zaman, F.; Shoaib, B.; Naveed, A.; Basit, A.

    2014-01-01

    Three issues regarding sensor failure at any position in the antenna array are discussed. We assume that the sensor position is known. The issues include a rise in sidelobe levels, displacement of nulls from their original positions, and a reduction in null depth. The required null depth is achieved by making the weight of the symmetrical complementary sensor passive. A hybrid method based on a memetic computing algorithm is proposed. The hybrid method combines the cultural algorithm with differential evolution (CADE), which is used for the reduction of sidelobe levels and the placement of nulls at their original positions. A fitness function is used to minimize the error between the desired and estimated beam patterns along with null constraints. Simulation results for various scenarios are given to exhibit the validity and performance of the proposed algorithm. PMID:24688440

  9. Cropping and noise resilient steganography algorithm using secret image sharing

    NASA Astrophysics Data System (ADS)

    Juarez-Sandoval, Oswaldo; Fierro-Radilla, Atoany; Espejel-Trujillo, Angelina; Nakano-Miyatake, Mariko; Perez-Meana, Hector

    2015-03-01

    This paper proposes an image steganography scheme in which a secret image is hidden into a cover image using a secret image sharing (SIS) scheme. Taking advantage of the fault-tolerant property of the (k,n)-threshold SIS, where using any k of n shares (k≤n) the secret data can be recovered without any ambiguity, the proposed steganography algorithm becomes resilient to cropping and impulsive noise contamination. Among the many SIS schemes proposed until now, Lin and Chan's scheme is selected as the SIS, due to its lossless recovery capability for a large amount of secret data. The proposed scheme is evaluated from several points of view, such as imperceptibility of the stego-image with respect to its original cover image, and robustness of hidden data to cropping operations and impulsive noise contamination. The evaluation results show a high quality of the extracted secret image from the stego-image even when it has suffered more than 20% cropping or high-density noise contamination.
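
    The fault tolerance comes from the threshold property of the underlying SIS. Lin and Chan's scheme is not reproduced in the abstract; as an illustration of the (k,n)-threshold idea itself, here is a minimal Shamir-style sharing of a single value over a small prime field (parameters illustrative):

      import random

      P = 257  # prime field, big enough for one byte of secret per share

      def make_shares(secret, k, n):
          # Random degree-(k-1) polynomial with the secret as constant term.
          coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
          f = lambda x: sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
          return [(x, f(x)) for x in range(1, n + 1)]

      def recover(shares):
          # Lagrange interpolation at x = 0 recovers the constant term.
          secret = 0
          for i, (xi, yi) in enumerate(shares):
              num = den = 1
              for j, (xj, _) in enumerate(shares):
                  if i != j:
                      num = num * (-xj) % P
                      den = den * (xi - xj) % P
              secret = (secret + yi * num * pow(den, P - 2, P)) % P
          return secret

      shares = make_shares(123, k=3, n=5)
      print(recover(shares[:3]))   # any 3 of the 5 shares reproduce 123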

  10. Denoising infrared maritime imagery using tailored dictionaries via modified K-SVD algorithm.

    PubMed

    Smith, L N; Olson, C C; Judd, K P; Nichols, J M

    2012-06-10

    Recent work has shown that tailored overcomplete dictionaries can provide a better image model than standard basis functions for a variety of image processing tasks. Here we propose a modified K-SVD dictionary learning algorithm designed to maintain the advantages of the original approach but with a focus on improved convergence. We then use the learned model to denoise infrared maritime imagery and compare the performance to the original K-SVD algorithm, several overcomplete "fixed" dictionaries, and a standard wavelet denoising algorithm. Results indicate the superiority of overcomplete representations and show that our tailored approach provides similar peak signal-to-noise ratios as the traditional K-SVD at roughly half the computational cost.
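
    The abstract does not spell out the modification, but the classical K-SVD loop it builds on alternates greedy sparse coding with rank-1 atom updates. A bare-bones sketch of that original loop (illustrative; the omp helper and all parameter choices are ours, not the authors'):

      import numpy as np

      def omp(D, y, sparsity):
          # Greedy sparse coding: pick the atom most correlated with the
          # residual, then least-squares refit over all selected atoms.
          r, idx = y.astype(float).copy(), []
          x = np.zeros(D.shape[1])
          for _ in range(sparsity):
              idx.append(int(np.argmax(np.abs(D.T @ r))))
              coef, *_ = np.linalg.lstsq(D[:, idx], y, rcond=None)
              r = y - D[:, idx] @ coef
          x[idx] = coef
          return x

      def ksvd(Y, n_atoms, sparsity, iters=10, seed=0):
          # Bare-bones K-SVD: alternate sparse coding and rank-1 atom updates.
          rng = np.random.default_rng(seed)
          D = rng.standard_normal((Y.shape[0], n_atoms))
          D /= np.linalg.norm(D, axis=0)
          for _ in range(iters):
              X = np.stack([omp(D, y, sparsity) for y in Y.T], axis=1)
              for k in range(n_atoms):
                  used = np.flatnonzero(X[k])
                  if used.size == 0:
                      continue
                  X[k, used] = 0
                  E = Y[:, used] - D @ X[:, used]   # error with atom k removed
                  U, s, Vt = np.linalg.svd(E, full_matrices=False)
                  D[:, k] = U[:, 0]                 # best rank-1 refit of atom k
                  X[k, used] = s[0] * Vt[0]
          return D, X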

  11. Multichannel algorithms for seismic reflectivity inversion

    NASA Astrophysics Data System (ADS)

    Wang, Ruo; Wang, Yanghua

    2017-02-01

    Seismic reflectivity inversion is a deconvolution process for quantitatively extracting the reflectivity series and depicting the layered subsurface structure. The conventional method is a single-channel inversion and cannot clearly characterise stratified structures, especially from seismic data with a low signal-to-noise ratio. Because it is implemented on a trace-by-trace basis, the continuity along reflections in the original seismic data deteriorates in the inversion results. We propose here multichannel inversion algorithms that apply the information of adjacent traces during seismic reflectivity inversion. Explicitly, we incorporate a spatial prediction filter into the conventional Cauchy-constrained inversion method. We verify the validity and feasibility of the method using field data experiments and find improved lateral continuity and clearer structures achieved by the multichannel algorithms. Finally, we compare the performance of three multichannel algorithms and assess their effectiveness based on the lateral coherency and structure characterisation of the inverted reflectivity profiles, together with the residual energy of the seismic data.

  12. Generating object proposals for improved object detection in aerial images

    NASA Astrophysics Data System (ADS)

    Sommer, Lars W.; Schuchert, Tobias; Beyerer, Jürgen

    2016-10-01

    Screening of aerial images covering large areas is important for many applications such as surveillance, tracing or rescue tasks. To reduce the workload of image analysts, an automatic detection of candidate objects is required. In general, object detection is performed by applying classifiers or a cascade of classifiers within a sliding window algorithm. However, the huge number of windows to classify, especially in case of multiple object scales, makes these approaches computationally expensive. To overcome this challenge, we reduce the number of candidate windows by generating so-called object proposals. Object proposals are a set of candidate regions in an image that are likely to contain an object. We apply the Selective Search approach that has been broadly used as the proposals method for detectors like R-CNN or Fast R-CNN. Therefore, a set of small regions is generated by initial segmentation followed by hierarchical grouping of the initial regions to generate proposals at different scales. To reduce the computational costs of the original approach, which consists of 80 combinations of segmentation settings and grouping strategies, we only apply the most appropriate combination. Therefore, we analyze the impact of varying segmentation settings, different merging strategies, and various colour spaces by calculating the recall with regard to the number of object proposals and the intersection over union between generated proposals and ground truth annotations. As aerial images differ considerably from datasets that are typically used for exploring object proposals methods, in particular in object size and the image fraction occupied by an object, we further adapt the Selective Search algorithm to aerial images by replacing the random order of generated proposals by a weighted order based on the object proposal size and integrate a termination criterion for the merging strategies. Finally, the adapted approach is compared to the original Selective Search algorithm.
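
    The evaluation above hinges on the recall of the proposal set at a given intersection-over-union (IoU) threshold. A minimal version of that metric, with boxes as (x1, y1, x2, y2) tuples (names illustrative):

      def iou(a, b):
          # Intersection-over-union of two boxes given as (x1, y1, x2, y2).
          ix = max(0, min(a[2], b[2]) - max(a[0], b[0]))
          iy = max(0, min(a[3], b[3]) - max(a[1], b[1]))
          inter = ix * iy
          area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
          union = area(a) + area(b) - inter
          return inter / union if union else 0.0

      def recall(proposals, ground_truth, thresh=0.5):
          # Fraction of annotated objects covered by at least one proposal.
          hits = sum(any(iou(g, p) >= thresh for p in proposals)
                     for g in ground_truth)
          return hits / len(ground_truth)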

  13. Learning Intelligent Genetic Algorithms Using Japanese Nonograms

    ERIC Educational Resources Information Center

    Tsai, Jinn-Tsong; Chou, Ping-Yi; Fang, Jia-Cen

    2012-01-01

    An intelligent genetic algorithm (IGA) is proposed to solve Japanese nonograms and is used as a method in a university course to learn evolutionary algorithms. The IGA combines the global exploration capabilities of a canonical genetic algorithm (CGA) with effective condensed encoding, improved fitness function, and modified crossover and…

  14. Highly Scalable Matching Pursuit Signal Decomposition Algorithm

    NASA Technical Reports Server (NTRS)

    Christensen, Daniel; Das, Santanu; Srivastava, Ashok N.

    2009-01-01

    Matching Pursuit Decomposition (MPD) is a powerful iterative algorithm for signal decomposition and feature extraction. MPD decomposes any signal into linear combinations of its dictionary elements or atoms. A best-fit atom from an arbitrarily defined dictionary is determined through cross-correlation. The selected atom is subtracted from the signal and this procedure is repeated on the residual in the subsequent iterations until a stopping criterion is met. The reconstructed signal reveals the waveform structure of the original signal. However, a sufficiently large dictionary is required for an accurate reconstruction; this in turn increases the computational burden of the algorithm, thus limiting its applicability and level of adoption. The purpose of this research is to improve the scalability and performance of the classical MPD algorithm. Correlation thresholds were defined to prune insignificant atoms from the dictionary. The Coarse-Fine Grids and Multiple Atom Extraction techniques were proposed to decrease the computational burden of the algorithm. The Coarse-Fine Grids method enabled the approximation and refinement of the parameters for the best fit atom. The ability to extract multiple atoms within a single iteration enhanced the effectiveness and efficiency of each iteration. These improvements were implemented to produce an improved Matching Pursuit Decomposition algorithm entitled MPD++. Disparate signal decomposition applications may require a particular emphasis of accuracy or computational efficiency. The prominence of the key signal features required for the proper signal classification dictates the level of accuracy necessary in the decomposition. The MPD++ algorithm may be easily adapted to accommodate the imposed requirements. Certain feature extraction applications may require rapid signal decomposition. The full potential of MPD++ may be utilized to produce incredible performance gains while extracting only slightly less energy than the
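
    The core MPD loop described above is compact enough to sketch directly; the MPD++ enhancements (correlation thresholds, coarse-fine grids, multiple atom extraction) are only named in the abstract, so this shows the classical loop they accelerate (atoms assumed unit-norm; names ours):

      import numpy as np

      def matching_pursuit(signal, D, n_iter=50, tol=1e-6):
          # Classical MPD loop: cross-correlate, pick the best-fit atom,
          # subtract its contribution, repeat on the residual.
          # D holds unit-norm atoms as columns.
          residual = signal.astype(float).copy()
          coeffs = np.zeros(D.shape[1])
          for _ in range(n_iter):
              corr = D.T @ residual
              k = int(np.argmax(np.abs(corr)))
              coeffs[k] += corr[k]
              residual -= corr[k] * D[:, k]
              if np.linalg.norm(residual) < tol:   # stopping criterion
                  break
          return coeffs, residual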

  15. Clever eye algorithm for target detection of remote sensing imagery

    NASA Astrophysics Data System (ADS)

    Geng, Xiurui; Ji, Luyan; Sun, Kang

    2016-04-01

    Target detection algorithms for hyperspectral remote sensing imagery, such as the two most commonly used remote sensing detection algorithms, the constrained energy minimization (CEM) and the matched filter (MF), can usually be reduced to an inner product between a weight filter (or detector) and a pixel vector. CEM and MF have the same expression except that MF requires data centralization first. However, this difference leads to a difference in the target detection results. That is to say, the selection of the data origin can directly affect the performance of the detector. Does there, then, exist another data origin, other than the zero and mean-vector points, that gives better target detection performance? This is a very meaningful issue in the field of target detection, but it has not yet received enough attention. In this study, we propose a novel objective function by introducing the data origin as another variable; the solution of the function corresponds to the data origin with the minimal output energy. The process of finding the optimal solution can be vividly regarded as a clever eye automatically searching for the best observing position and direction in the feature space, corresponding to the largest separation between the target and the background. Therefore, this new algorithm is referred to as the clever eye algorithm (CE). Based on the Sherman-Morrison formula and the gradient ascent method, CE derives the optimal target detection result in terms of energy. Experiments with both synthetic and real hyperspectral data have verified the effectiveness of our method.
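
    The two baseline detectors that CE generalizes have simple closed forms: CEM uses w = R^-1 d / (d^T R^-1 d) with R the sample correlation matrix, and MF is the same filter after moving the data origin to the mean. A minimal sketch (array shapes and names are ours):

      import numpy as np

      def cem(X, d):
          # CEM detector: w = R^-1 d / (d^T R^-1 d), R the sample
          # correlation matrix. X holds pixels as rows, d is the target.
          R = X.T @ X / X.shape[0]
          Rinv_d = np.linalg.solve(R, d)
          w = Rinv_d / (d @ Rinv_d)
          return X @ w                 # detection score per pixel

      def mf(X, d):
          # Matched filter = CEM after moving the data origin to the mean.
          mu = X.mean(axis=0)
          return cem(X - mu, d - mu)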

  16. Rule Extraction Based on Extreme Learning Machine and an Improved Ant-Miner Algorithm for Transient Stability Assessment.

    PubMed

    Li, Yang; Li, Guoqing; Wang, Zhenhao

    2015-01-01

    To overcome the poor interpretability of pattern recognition-based transient stability assessment (PRTSA) methods, a new rule extraction method based on the extreme learning machine (ELM) and an improved Ant-miner (IAM) algorithm is presented in this paper. First, the basic principles of the ELM and the Ant-miner algorithm are introduced. Then, based on the selected optimal feature subset, an example sample set is generated by the trained ELM-based PRTSA model. Finally, a set of classification rules is obtained by the IAM algorithm to replace the original ELM network. The novelty of this proposal is that transient stability rules are extracted, using the IAM algorithm, from an example sample set generated by the trained ELM-based transient stability assessment model. The effectiveness of the proposed method is shown by the application results on the New England 39-bus power system and a practical power system--the southern power system of Hebei province.

  17. Spaceborne SAR Imaging Algorithm for Coherence Optimized

    PubMed Central

    Qiu, Zhiwei; Yue, Jianping; Wang, Xueqin; Yue, Shun

    2016-01-01

    This paper proposes a SAR imaging algorithm that maximizes coherence, built on existing SAR imaging algorithms. The basic idea of conventional SAR imaging is that the output signal attains the maximum signal-to-noise ratio (SNR) when the optimal imaging parameters are used. A traditional imaging algorithm can acquire the best focusing effect, but it introduces decoherence in the subsequent interferometric processing. The algorithm proposed in this paper instead applies consistent imaging parameters to the SAR echoes during focusing. Although the SNR of the output signal is reduced slightly, the coherence is largely preserved, and finally an interferogram of high quality is obtained. In this paper, two scenes of Envisat ASAR data over Zhangbei are employed to test the algorithm. Compared with the interferogram from the traditional algorithm, the results show that this algorithm is more suitable for SAR interferometry (InSAR) research and applications. PMID:26871446

  18. [Curvelet denoising algorithm for medical ultrasound image based on adaptive threshold].

    PubMed

    Zhuang, Zhemin; Yao, Weike; Yang, Jinyao; Li, FenLan; Yuan, Ye

    2014-11-01

    Traditional denoising algorithms for ultrasound images lose many details and much weak edge information when suppressing speckle noise. A new adaptive-threshold denoising algorithm based on the curvelet transform is proposed in this paper. The algorithm utilizes the differences in the coefficients' local variance between textured and smooth regions in each layer of the ultrasound image to define fuzzy regions and membership functions. Finally, the adaptive threshold determined by the membership function is used to denoise the ultrasound image. The experimental results show that the algorithm can reduce speckle noise effectively while retaining the detail information of the original image, and thus can greatly enhance the performance of B-mode ultrasound instruments.

  19. On the convergence of the Fitness-Complexity algorithm

    NASA Astrophysics Data System (ADS)

    Pugliese, Emanuele; Zaccaria, Andrea; Pietronero, Luciano

    2016-10-01

    We investigate the convergence properties of an algorithm which has been recently proposed to measure the competitiveness of countries and the quality of their exported products. These quantities are called, respectively, Fitness F and Complexity Q. The algorithm was originally based on the adjacency matrix M of the bipartite network connecting countries with the products they export, but can be applied to any bipartite network. The structure of the adjacency matrix turns out to be essential in determining which countries and products converge to nonzero values of F and Q. The speed of convergence to zero also depends on the matrix structure. A major role is played by the shape of the ordered matrix and, in particular, only those matrices whose diagonal does not cross the empty part are guaranteed to have nonzero values as outputs when the algorithm reaches the fixed point. We prove this result analytically for simplified structures of the matrix, and numerically for real cases. Finally, we propose some practical indications for taking our results into account when the algorithm is applied.
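
    The iteration under study is the standard Fitness-Complexity map from the literature: F is the sum of the complexities of a country's products, Q penalizes products exported by low-fitness countries, and both are renormalized each step. A compact sketch (assumes no all-zero row or column in M; entries of F drifting toward zero is exactly the convergence behaviour analysed above):

      import numpy as np

      def fitness_complexity(M, iters=200):
          # Fitness-Complexity map on a binary country-by-product matrix M:
          #   F_c <- sum_p M_cp Q_p
          #   Q_p <- 1 / sum_c M_cp (1 / F_c)
          # with a renormalization each step to fix the overall scale.
          F = np.ones(M.shape[0])
          Q = np.ones(M.shape[1])
          for _ in range(iters):
              F_new = M @ Q
              Q_new = 1.0 / (M.T @ (1.0 / F))
              F = F_new / F_new.mean()
              Q = Q_new / Q_new.mean()
          return F, Q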

  20. Multiplatform GPGPU implementation of the active contours without edges algorithm

    NASA Astrophysics Data System (ADS)

    Zavala-Romero, Olmo; Meyer-Baese, Anke; Meyer-Baese, Uwe

    2012-05-01

    An OpenCL implementation of the Active Contours Without Edges algorithm is presented. The proposed algorithm uses General Purpose Computing on Graphics Processing Units (GPGPU) to accelerate the original model by parallelizing the two main steps of the segmentation process, the computation of the Signed Distance Function (SDF) and the evolution of the segmented curve. The proposed scheme for the computation of the SDF is based on the iterative construction of partial Voronoi diagrams of a reduced dimension and obtains the exact Euclidean distance in a time of order O(N/p), where N is the number of pixels and p the number of processors. With high resolution images the segmentation algorithm runs 10 times faster than its equivalent sequential implementation. This work is being done as open source software that, being programmed in OpenCL, can be used on different platforms, allowing a broad number of final users, and can be applied in different areas of computer vision, like medical imaging, tracking, robotics, etc. This work uses OpenGL to visualize the algorithm results in real time.

  1. Algorithm development

    NASA Technical Reports Server (NTRS)

    Barth, Timothy J.; Lomax, Harvard

    1987-01-01

    The past decade has seen considerable activity in algorithm development for the Navier-Stokes equations. This has resulted in a wide variety of useful new techniques. Some examples for the numerical solution of the Navier-Stokes equations are presented, divided into two parts. One is devoted to the incompressible Navier-Stokes equations, and the other to the compressible form.

  2. Approximation algorithms

    PubMed Central

    Schulz, Andreas S.; Shmoys, David B.; Williamson, David P.

    1997-01-01

    Increasing global competition, rapidly changing markets, and greater consumer awareness have altered the way in which corporations do business. To become more efficient, many industries have sought to model some operational aspects by gigantic optimization problems. It is not atypical to encounter models that capture 10^6 separate “yes” or “no” decisions to be made. Although one could, in principle, try all 2^(10^6) possible solutions to find the optimal one, such a method would be impractically slow. Unfortunately, for most of these models, no algorithms are known that find optimal solutions with reasonable computation times. Typically, industry must rely on solutions of unguaranteed quality that are constructed in an ad hoc manner. Fortunately, for some of these models there are good approximation algorithms: algorithms that produce solutions quickly that are provably close to optimal. Over the past 6 years, there has been a sequence of major breakthroughs in our understanding of the design of approximation algorithms and of limits to obtaining such performance guarantees; this area has been one of the most flourishing areas of discrete mathematics and theoretical computer science. PMID:9370525

  3. A novel algorithm with differential evolution and coral reef optimization for extreme learning machine training.

    PubMed

    Yang, Zhiyong; Zhang, Taohong; Zhang, Dezheng

    2016-02-01

    Extreme learning machine (ELM) is a novel and fast learning method for training single-layer feed-forward networks. However, due to the demand for a larger number of hidden neurons, the prediction speed of ELM is not fast enough. An evolutionary ELM with differential evolution (DE) has been proposed to reduce the prediction time of the original ELM, but it may still get stuck at local optima. In this paper, a novel algorithm hybridizing DE and the metaheuristic coral reef optimization (CRO), called differential evolution coral reef optimization (DECRO), is proposed to balance explorative and exploitative power for better performance. The ideas behind the DECRO algorithm and its implementation are discussed in detail in this article. DE, CRO and DECRO are each applied to ELM training. Experimental results show that DECRO-ELM can reduce the prediction time of the original ELM and obtains better performance for training ELM than both DE and CRO.

  4. Analysis of the contact graph routing algorithm: Bounding interplanetary paths

    NASA Astrophysics Data System (ADS)

    Birrane, Edward; Burleigh, Scott; Kasch, Niels

    2012-06-01

    Interplanetary communication networks comprise orbiters, deep-space relays, and stations on planetary surfaces. These networks must overcome node mobility, constrained resources, and significant propagation delays. Opportunities for wireless contact rely on calculating transmit and receive opportunities, but the Euclidean-distance diameter of these networks (measured in light-seconds and light-minutes) precludes node discovery and contact negotiation. Propagation delay may be larger than the line-of-sight contact between nodes. For example, Mars and Earth orbiters may be separated by up to 20.8 min of signal propagation time. Such spacecraft may never share line-of-sight, but may uni-directionally communicate if one orbiter knows the other's future position. The Contact Graph Routing (CGR) approach is a family of algorithms presented to solve the messaging problem of interplanetary communications. These algorithms exploit networks where nodes exhibit deterministic mobility. For CGR, mobility and bandwidth information is pre-configured throughout the network allowing nodes to construct transmit opportunities. Once constructed, routing algorithms operate on this contact graph to build an efficient path through the network. The interpretation of the contact graph, and the construction of a bounded approximate path, is critically important for adoption in operational systems. Brute force approaches, while effective in small networks, are computationally expensive and will not scale. Methods of inferring cycles or other librations within the graph are difficult to detect and will guide the practical implementation of any routing algorithm. This paper presents a mathematical analysis of a multi-destination contact graph algorithm (MD-CGR), demonstrates that it is NP-complete, and proposes realistic constraints that make the problem solvable in polynomial time, as is the case with the originally proposed CGR algorithm. An analysis of path construction to complement hop

  5. Improved LMS algorithm for adaptive beamforming

    NASA Technical Reports Server (NTRS)

    Godara, Lal C.

    1990-01-01

    Two adaptive algorithms which make use of all the available samples to estimate the required gradient are proposed and studied. The first algorithm is referred to as the recursive LMS (least mean squares) and is applicable to a general array. The second algorithm is referred to as the improved LMS algorithm and exploits the Toeplitz structure of the ACM (array correlation matrix); it can be used only for an equispaced linear array.
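
    Both proposed variants build on the standard LMS recursion, which adapts the weights one snapshot at a time from an instantaneous gradient estimate; the recursive and improved versions differ in using all available samples for the gradient and in exploiting the Toeplitz ACM structure. The baseline recursion, as a sketch (complex snapshots as rows; names ours):

      import numpy as np

      def lms_beamformer(X, d, mu=0.01):
          # Baseline LMS: one stochastic-gradient step per snapshot.
          # X: complex snapshots as rows (N x elements); d: desired response.
          w = np.zeros(X.shape[1], dtype=complex)
          for x, dk in zip(X, d):
              e = dk - np.conj(w) @ x      # a-priori error, y = w^H x
              w += mu * np.conj(e) * x     # gradient-descent update
          return w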

  6. An item-oriented recommendation algorithm on cold-start problem

    NASA Astrophysics Data System (ADS)

    Qiu, Tian; Chen, Guang; Zhang, Zi-Ke; Zhou, Tao

    2011-09-01

    Based on a hybrid algorithm incorporating the heat conduction and probability spreading processes (Proc. Natl. Acad. Sci. U.S.A., 107 (2010) 4511), in this letter we propose an improved method that introduces an item-oriented function, focusing on resolving the dilemma in recommendation accuracy between cold and popular items. Unlike previous works, the present algorithm does not require any additional information (e.g., tags). Further experimental results obtained on three real datasets, RYM, Netflix and MovieLens, show that, compared with the original hybrid method, the proposed algorithm significantly enhances the recommendation accuracy for cold items, while keeping the recommendation accuracy for the overall and the popular items. This work might shed some light on both understanding and designing effective methods for long-tailed online applications of recommender systems.

  7. Double color image encryption using iterative phase retrieval algorithm in quaternion gyrator domain.

    PubMed

    Shao, Zhuhong; Shu, Huazhong; Wu, Jiasong; Dong, Zhifang; Coatrieux, Gouenou; Coatrieux, Jean Louis

    2014-03-10

    This paper describes a novel algorithm to encrypt double color images into a single undistinguishable image in the quaternion gyrator domain. By using an iterative phase retrieval algorithm, the phase masks used for encryption are obtained. Subsequently, the encrypted image is generated via cascaded quaternion gyrator transforms with different rotation angles. The parameters of the quaternion gyrator transforms and the phases serve as encryption keys. By knowing these keys, the original color images can be fully recovered. Numerical simulations have demonstrated the validity of the proposed encryption system as well as its robustness against loss of data and additive Gaussian noise.

  8. Deterministic implementations of single-photon multi-qubit Deutsch-Jozsa algorithms with linear optics

    NASA Astrophysics Data System (ADS)

    Wei, Hai-Rui; Liu, Ji-Zhen

    2017-02-01

    It is very important to seek an efficient and robust quantum algorithm demanding fewer quantum resources. We propose one-photon three-qubit original and refined Deutsch-Jozsa algorithms with polarization and two linear-momentum degrees of freedom (DOFs). Our schemes are constructed solely using linear optics. Compared to the traditional ones with one DOF, our schemes are more economical and robust because the number of necessary photons is reduced from three to one. Our linear-optic schemes work in a deterministic way, and they are feasible with current experimental technology.

  9. Effective algorithm for random mask generation used in secured optical data encryption and communication

    NASA Astrophysics Data System (ADS)

    Liu, Yuexin; Metzner, John J.; Guo, Ruyan; Yu, Francis T. S.

    2005-09-01

    An efficient and secure algorithm for random phase mask generation used in optical data encryption and transmission systems is proposed, based on Diffie-Hellman public key distribution. The random mask generated this way has higher security because it is never exposed on the vulnerable transmission channels. The effectiveness of retrieving the original image and the robustness against blind manipulation have been demonstrated by our numerical results. In addition, this algorithm can easily be extended to multicast networking systems, and refreshing this shared random key is also very simple to implement.
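
    The key point is that only public values cross the channel; both ends then derive the same mask locally. A toy sketch of that flow (the group parameters and the PRNG-seeded mask are illustrative; a real system would use a standardized large group and a proper key-derivation function):

      import secrets
      import numpy as np

      # Toy Diffie-Hellman parameters -- illustrative only.
      P = 4294967291   # largest prime below 2**32
      G = 5

      a = secrets.randbelow(P - 2) + 1      # Alice's private exponent
      b = secrets.randbelow(P - 2) + 1      # Bob's private exponent
      A, B = pow(G, a, P), pow(G, b, P)     # public values, exchanged openly

      shared = pow(B, a, P)                 # both ends derive the same value
      assert shared == pow(A, b, P)

      # Seed a PRNG with the shared secret so both ends synthesize the same
      # random phase mask without the mask ever crossing the channel.
      mask = np.random.default_rng(shared).uniform(0, 2 * np.pi, (64, 64))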

  10. A genetic-algorithm-based method to find unitary transformations for any desired quantum computation and application to a one-bit oracle decision problem

    NASA Astrophysics Data System (ADS)

    Bang, Jeongho; Yoo, Seokwon

    2014-12-01

    We propose a genetic-algorithm-based method to find the unitary transformations for any desired quantum computation. We formulate a simple genetic algorithm by introducing the "genetic parameter vector" of the unitary transformations to be found. In the genetic algorithm process, all components of the genetic parameter vectors are supposed to evolve to the solution parameters of the unitary transformations. We apply our method to find the optimal unitary transformations and to generalize the corresponding quantum algorithms for a realistic problem, the one-bit oracle decision problem, often called the Deutsch problem. By numerical simulations, we can faithfully find the appropriate unitary transformations to solve the problem using our method. We analyze the quantum algorithms identified by the found unitary transformations and generalize the variant models of the original Deutsch's algorithm.

  11. Brief Report: exploratory analysis of the ADOS revised algorithm: specificity and predictive value with Hispanic children referred for autism spectrum disorders.

    PubMed

    Overton, Terry; Fielding, Cheryl; de Alba, Roman Garcia

    2008-07-01

    This study compared Autism Diagnostic Observation Schedule (ADOS) algorithm scores of a sample of 26 children who were administered modules 1-3 of the ADOS with the scores obtained by applying the revised ADOS algorithm proposed by Gotham et al. (2007). Results of this application were inconsistent, yielding slightly more accurate results for module 1. New algorithm scores on modules 2 and 3 remained consistent with the original algorithm scores. The Mann-Whitney U test was applied to compare revised algorithm and clinical levels of social impairment to determine whether significant differences were evident. Results of the Mann-Whitney U analyses were inconsistent and demonstrated less specificity for children with milder levels of social impairment. The revised algorithm demonstrated accuracy for the more severely autistic group.

  12. Rotational Invariant Dimensionality Reduction Algorithms.

    PubMed

    Lai, Zhihui; Xu, Yong; Yang, Jian; Shen, Linlin; Zhang, David

    2016-06-30

    A common intrinsic limitation of traditional subspace learning methods is their sensitivity to outliers and to image variations of the object, since they use the L₂ norm as the metric. In this paper, a series of methods based on the L₂,₁-norm is proposed for linear dimensionality reduction. Since the L₂,₁-norm-based objective function is robust to image variations, the proposed algorithms can perform robust image feature extraction for classification. We use different ideas to design different algorithms and obtain a unified rotational invariant (RI) dimensionality reduction framework, which extends the well-known graph embedding algorithm framework to a more generalized form. We provide comprehensive analyses to show the essential properties of the proposed algorithm framework. This paper shows that the optimization problems have globally optimal solutions when all the orthogonal projections of the data space are computed and used. Experimental results on popular image datasets indicate that the proposed RI dimensionality reduction algorithms can obtain competitive performance compared with the previous L₂-norm-based subspace learning algorithms.

  13. [Near-infrared spectra combining with CARS and SPA algorithms to screen the variables and samples for quantitatively determining the soluble solids content in strawberry].

    PubMed

    Li, Jiang-bo; Guo, Zhi-ming; Huang, Wen-qian; Zhang, Bao-hua; Zhao, Chun-jiang

    2015-02-01

    In using spectroscopy to quantitatively or qualitatively analyze the quality of fruit, obtaining a simple and effective correction model is critical for the application and maintenance of the developed model. With strawberry as the research object, this study focused on selecting the key variables and characteristic samples for quantitatively determining the soluble solids content. The competitive adaptive reweighted sampling (CARS) algorithm was first applied to select the spectral variables. Then, the samples of the correction set were selected by the successive projections algorithm (SPA), yielding 98 characteristic samples. Next, based on the selected variables and characteristic samples, a second variable selection was performed using the SPA method, yielding 25 key variables. To verify the performance of the proposed CARS algorithm, variable selection algorithms including Monte Carlo uninformative variable elimination (MC-UVE) and SPA were used as comparison algorithms. Results showed that the CARS algorithm could eliminate uninformative variables and remove collinear information at the same time. Similarly, to assess the performance of the proposed SPA algorithm for selecting the characteristic samples, the SPA algorithm was compared with the classical Kennard-Stone algorithm. Results showed that the SPA algorithm could be used for the selection of the characteristic samples in the calibration set. Finally, PLS and MLR models for quantitatively predicting the soluble solids content (SSC) in strawberry were built on the selected variables/samples subset (25/98). Results show that models built using only 0.59% of the original variables and 65.33% of the original samples could obtain better performance than models using all of the original variables and samples. The MLR model was the best, with R^2(pre) = 0.9097, RMSEP = 0.3484 and RPD = 3.3278.

  14. Dynamic Shortest Path Algorithms for Hypergraphs

    DTIC Science & Technology

    2012-01-01

    geometric hypergraphs and the Enron email data set. The latter illustrates the application of the proposed algorithms in social networks for identifying ... We analyze the time complexity of the proposed algorithms and perform simulation experiments for both random geometric hypergraphs and the Enron email data set. ... Using a random geometric hypergraph model and a real data set of a social network (the Enron email data set), we study the average performance of these two algorithms.

  15. A high resolution spectrum reconstruction algorithm using compressive sensing theory

    NASA Astrophysics Data System (ADS)

    Zheng, Zhaoyu; Liang, Dakai; Liu, Shulin; Feng, Shuqing

    2015-07-01

    This paper proposes a quick spectrum scanning and reconstruction method using compressive sensing for composite structures. The strain field of a corrugated structure is simulated by finite element analysis. Then the reflection spectrum is calculated using an improved transfer matrix algorithm. A K-means singular value decomposition (K-SVD) sparse dictionary is trained. In the test, a spectrum with a limited number of sample points is obtained, and the high-resolution spectrum is reconstructed by solving the sparse representation equation. Compared with other conventional bases, the effect of this method is better. The match rate between the recovered spectrum and the original spectrum is over 95%.

  16. An Algorithm for Suffix Stripping

    ERIC Educational Resources Information Center

    Porter, M. F.

    2006-01-01

    Purpose: The automatic removal of suffixes from words in English is of particular interest in the field of information retrieval. This work was originally published in Program in 1980 and is republished as part of a series of articles commemorating the 40th anniversary of the journal. Design/methodology/approach: An algorithm for suffix stripping…

  17. Research on Routing Selection Algorithm Based on Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Gao, Guohong; Zhang, Baojian; Li, Xueyong; Lv, Jinna

    The genetic algorithm is a random search and optimization method based on natural selection and genetic mechanisms in living beings. In recent years, because of its potential for solving complicated problems and its successful applications in industrial engineering, the genetic algorithm has attracted wide attention from domestic and international scholars. Routing selection communication has been defined as a standard communication model of IP version 6. This paper proposes a service model for routing selection communication, and designs and implements a new routing selection algorithm based on a genetic algorithm. The experimental simulation results show that this algorithm can obtain more solutions in less time and a more balanced network load, which enhances the search ratio and the availability of network resources, and improves the quality of service.

  18. Parallel algorithms for unconstrained optimization by multisplitting with inexact subspace search - the abstract

    SciTech Connect

    Renaut, R.; He, Q.

    1994-12-31

    A new parallel iterative algorithm for unconstrained optimization by multisplitting is proposed. In this algorithm the original problem is split into a set of small optimization subproblems which are solved using well known sequential algorithms. These algorithms are iterative in nature, e.g. the DFP variable metric method. Here the authors use sequential algorithms based on an inexact subspace search, which is an extension of the usual idea of an inexact line search. Essentially the idea of the inexact line search for nonlinear minimization is that at each iteration the authors only find an approximate minimum in the line search direction. Hence by inexact subspace search, they mean that, instead of finding the minimum of the subproblem at each iteration, they do an incomplete downhill search to give an approximate minimum. Some convergence and numerical results for this algorithm will be presented. Further, the original theory will be generalized to the situation with a singular Hessian. Applications for nonlinear least squares problems will be presented. Experimental results will be presented for implementations on an Intel iPSC/860 Hypercube with 64 nodes as well as on the Intel Paragon.

  19. An enhanced version of the heat exchange algorithm with excellent energy conservation properties.

    PubMed

    Wirnsberger, P; Frenkel, D; Dellago, C

    2015-09-28

    We propose a new algorithm for non-equilibrium molecular dynamics simulations of thermal gradients. The algorithm is an extension of the heat exchange algorithm developed by Hafskjold et al. [Mol. Phys. 80, 1389 (1993); 81, 251 (1994)], in which a certain amount of heat is added to one region and removed from another by rescaling velocities appropriately. Since the amount of added and removed heat is the same and the dynamics between velocity rescaling steps is Hamiltonian, the heat exchange algorithm is expected to conserve the energy. However, it has been reported previously that the original version of the heat exchange algorithm exhibits a pronounced drift in the total energy, the exact cause of which remained hitherto unclear. Here, we show that the energy drift is due to the truncation error arising from the operator splitting and suggest an additional coordinate integration step as a remedy. The new algorithm retains all the advantages of the original one whilst exhibiting excellent energy conservation as illustrated for a Lennard-Jones liquid and SPC/E water.
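
    The rescaling step at the heart of the method is short: heat is injected into (or withdrawn from) a region by scaling the particles' thermal velocities about the region's centre of mass, which conserves momentum by construction. A sketch of that single step (the remedy proposed above, an additional coordinate integration in the operator splitting, is not shown; names ours):

      import numpy as np

      def hex_rescale(v, masses, dQ):
          # One HEX step for the particles of a thermostatted region:
          # inject heat dQ (negative to withdraw) by rescaling velocities
          # about the region's centre of mass, so momentum is conserved.
          # Requires dQ > -K, with K the thermal kinetic energy.
          m = masses[:, None]
          v_com = (m * v).sum(axis=0) / masses.sum()
          dv = v - v_com
          K = 0.5 * (m * dv ** 2).sum()
          xi = np.sqrt(1.0 + dQ / K)
          return v_com + xi * dv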

  20. A Bat Algorithm with Mutation for UCAV Path Planning

    PubMed Central

    Wang, Gaige; Guo, Lihong; Duan, Hong; Liu, Luo; Wang, Heqi

    2012-01-01

    Path planning for uninhabited combat air vehicles (UCAVs) is a complicated high-dimensional optimization problem, which mainly centers on optimizing the flight route under different kinds of constraints in complicated battlefield environments. The original bat algorithm (BA) is used to solve the UCAV path planning problem. Furthermore, a new bat algorithm with mutation (BAM) is proposed to solve the UCAV path planning problem, in which a mutation operation is applied between bats during the updating of new solutions. The UCAV can then find a safe path by connecting the chosen coordinate nodes while avoiding the threat areas and minimizing fuel cost. This new approach accelerates the global convergence speed while preserving the strong robustness of the basic BA. The realization procedure for the original BA and the improved metaheuristic BAM is also presented. To prove the performance of the proposed metaheuristic method, BAM is compared with BA and other population-based optimization methods, such as ACO, BBO, DE, ES, GA, PBIL, PSO, and SGA. The experiments show that the proposed approach is more effective and feasible in UCAV path planning than the other models. PMID:23365518

  1. A Synchronous-Asynchronous Particle Swarm Optimisation Algorithm

    PubMed Central

    Ab Aziz, Nor Azlina; Mubin, Marizan; Mohamad, Mohd Saberi; Ab Aziz, Kamarulzaman

    2014-01-01

    In the original particle swarm optimisation (PSO) algorithm, the particles' velocities and positions are updated after the whole swarm performance is evaluated. This algorithm is also known as synchronous PSO (S-PSO). The strength of this update method is in the exploitation of the information. Asynchronous update PSO (A-PSO) has been proposed as an alternative to S-PSO. A particle in A-PSO updates its velocity and position as soon as its own performance has been evaluated. Hence, particles are updated using partial information, leading to stronger exploration. In this paper, we attempt to improve PSO by merging both update methods to utilise the strengths of both methods. The proposed synchronous-asynchronous PSO (SA-PSO) algorithm divides the particles into smaller groups. The best member of a group and the swarm's best are chosen to lead the search. Members within a group are updated synchronously, while the groups themselves are asynchronously updated. Five well-known unimodal functions, four multimodal functions, and a real world optimisation problem are used to study the performance of SA-PSO, which is compared with the performances of S-PSO and A-PSO. The results are statistically analysed and show that the proposed SA-PSO has performed consistently well. PMID:25121109
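
    All three variants share the canonical velocity and position update; they differ only in when pbest and gbest are refreshed. A sketch of the synchronous case (parameter values are common defaults, not the paper's); moving the two marked steps into a per-particle loop gives the asynchronous variant:

      import numpy as np

      def spso(f, bounds, n=30, iters=100, w=0.72, c1=1.49, c2=1.49, seed=0):
          # Synchronous PSO: evaluate the whole swarm, then refresh the bests.
          rng = np.random.default_rng(seed)
          lo, hi = np.asarray(bounds, dtype=float).T
          x = rng.uniform(lo, hi, (n, lo.size))
          v = np.zeros_like(x)
          pbest, pval = x.copy(), np.array([f(p) for p in x])
          g = pbest[pval.argmin()]
          for _ in range(iters):
              r1, r2 = rng.random(x.shape), rng.random(x.shape)
              v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
              x = np.clip(x + v, lo, hi)
              val = np.array([f(p) for p in x])    # whole-swarm evaluation...
              better = val < pval
              pbest[better], pval[better] = x[better], val[better]
              g = pbest[pval.argmin()]             # ...then the bests update
          return g, pval.min()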

  2. WDM Multicast Tree Construction Algorithms and Their Comparative Evaluations

    NASA Astrophysics Data System (ADS)

    Makabe, Tsutomu; Mikoshi, Taiju; Takenaka, Toyofumi

    We propose novel tree construction algorithms for multicast communication in photonic networks. Since multicast communications consume many more link resources than unicast communications, effective algorithms for route selection and wavelength assignment are required. We propose a novel tree construction algorithm, called the Weighted Steiner Tree (WST) algorithm and a variation of the WST algorithm, called the Composite Weighted Steiner Tree (CWST) algorithm. Because these algorithms are based on the Steiner Tree algorithm, link resources among source and destination pairs tend to be commonly used and link utilization ratios are improved. Because of this, these algorithms can accept many more multicast requests than other multicast tree construction algorithms based on the Dijkstra algorithm. However, under certain delay constraints, the blocking characteristics of the proposed Weighted Steiner Tree algorithm deteriorate since some light paths between source and destinations use many hops and cannot satisfy the delay constraint. In order to adapt the approach to the delay-sensitive environments, we have devised the Composite Weighted Steiner Tree algorithm comprising the Weighted Steiner Tree algorithm and the Dijkstra algorithm for use in a delay constrained environment such as an IPTV application. In this paper, we also give the results of simulation experiments which demonstrate the superiority of the proposed Composite Weighted Steiner Tree algorithm compared with the Distributed Minimum Hop Tree (DMHT) algorithm, from the viewpoint of the light-tree request blocking.

  3. Comprehensive eye evaluation algorithm

    NASA Astrophysics Data System (ADS)

    Agurto, C.; Nemeth, S.; Zamora, G.; Vahtel, M.; Soliz, P.; Barriga, S.

    2016-03-01

    In recent years, several research groups have developed automatic algorithms to detect diabetic retinopathy (DR) in individuals with diabetes (DM), using digital retinal images. Studies have indicated that diabetics have 1.5 times the annual risk of developing primary open angle glaucoma (POAG) as do people without DM. Moreover, DM patients have 1.8 times the risk for age-related macular degeneration (AMD). Although numerous investigators are developing automatic DR detection algorithms, there have been few successful efforts to create an automatic algorithm that can detect other ocular diseases, such as POAG and AMD. Consequently, our aim in the current study was to develop a comprehensive eye evaluation algorithm that not only detects DR in retinal images, but also automatically identifies glaucoma suspects and AMD by integrating other personal medical information with the retinal features. The proposed system is fully automatic and provides the likelihood of each of the three eye diseases. The system was evaluated on two datasets of 104 and 88 diabetic cases. For each eye, we used two non-mydriatic digital color fundus photographs (macula and optic disc centered) and, when available, information about age, duration of diabetes, cataracts, hypertension, gender, and laboratory data. Our results show that the combination of multimodal features can increase the AUC by up to 5%, 7%, and 8% in the detection of AMD, DR, and glaucoma respectively. Marked improvement was achieved when laboratory results were combined with retinal image features.

  4. A generalized vector-valued total variation algorithm

    SciTech Connect

    Wohlberg, Brendt; Rodriguez, Paul

    2009-01-01

    We propose a simple but flexible method for solving the generalized vector-valued TV (VTV) functional, which includes both the ℓ²-VTV and ℓ¹-VTV regularizations as special cases, to address the problems of deconvolution and denoising of vector-valued (e.g. color) images with Gaussian or salt-and-pepper noise. This algorithm is the vectorial extension of the Iteratively Reweighted Norm (IRN) algorithm [1] originally developed for scalar (grayscale) images. This method offers competitive computational performance for denoising and deconvolving vector-valued images corrupted with Gaussian (ℓ²-VTV case) and salt-and-pepper noise (ℓ¹-VTV case).

  5. Algorithms versus architectures for computational chemistry

    NASA Technical Reports Server (NTRS)

    Partridge, H.; Bauschlicher, C. W., Jr.

    1986-01-01

    The algorithms employed are computationally intensive and, as a result, increased performance (both algorithmic and architectural) is required to improve accuracy and to treat larger molecular systems. Several benchmark quantum chemistry codes are examined on a variety of architectures. While these codes are only a small portion of a typical quantum chemistry library, they illustrate many of the computationally intensive kernels and data manipulation requirements of some applications. Furthermore, understanding the performance of the existing algorithms on present and proposed supercomputers serves as a guide for future programs and algorithm development. The algorithms investigated are: (1) a sparse symmetric matrix vector product; (2) a four index integral transformation; and (3) the calculation of diatomic two electron Slater integrals. The vectorization strategies are examined for these algorithms for both the Cyber 205 and Cray XMP. In addition, multiprocessor implementations of the algorithms are examined on the Cray XMP and on the MIT static data flow machine proposed by Dennis.

  6. Dual signal subspace projection (DSSP): a novel algorithm for removing large interference in biomagnetic measurements

    NASA Astrophysics Data System (ADS)

    Sekihara, Kensuke; Kawabata, Yuya; Ushio, Shuta; Sumiya, Satoshi; Kawabata, Shigenori; Adachi, Yoshiaki; Nagarajan, Srikantan S.

    2016-06-01

    Objective. In functional electrophysiological imaging, signals are often contaminated by interference that can be of considerable magnitude compared to the signals of interest. This paper proposes a novel algorithm for removing such interferences that does not require separate noise measurements. Approach. The algorithm is based on a dual definition of the signal subspace in the spatial- and time-domains. Since the algorithm makes use of this duality, it is named the dual signal subspace projection (DSSP). The DSSP algorithm first projects the columns of the measured data matrix onto the inside and outside of the spatial-domain signal subspace, creating a set of two preprocessed data matrices. The intersection of the row spans of these two matrices is estimated as the time-domain interference subspace. The original data matrix is projected onto the subspace that is orthogonal to this interference subspace. Main results. The DSSP algorithm is validated by using the computer simulation, and using two sets of real biomagnetic data: spinal cord evoked field data measured from a healthy volunteer and magnetoencephalography data from a patient with a vagus nerve stimulator. Significance. The proposed DSSP algorithm is effective for removing overlapped interference in a wide variety of biomagnetic measurements.

  7. Tilted cone beam VCT reconstruction algorithm

    NASA Astrophysics Data System (ADS)

    Hsieh, Jiang; Tang, Xiangyang

    2005-04-01

    Reconstruction algorithms for volumetric CT have been the focus of many studies. Several exact and approximate reconstruction algorithms have been proposed for step-and-shoot and helical scanning trajectories to combat cone beam related artifacts. In this paper, we present a closed form cone beam reconstruction formula for tilted gantry data acquisition. Although several algorithms were proposed to compensate for errors induced by the gantry tilt, none of the algorithms addresses the case in which the cone beam geometry is first rebinned to a set of parallel beams prior to the filtered backprojection. Because of the rebinning process, the amount of iso-center adjustment depends not only on the projection angle and tilt angle, but also on the reconstructed pixel location. The proposed algorithm has been tested extensively on both 16 and 64 slice VCT with phantoms and clinical data. The efficacy of the algorithm is clearly demonstrated by the experiments.

  8. Adaptive link selection algorithms for distributed estimation

    NASA Astrophysics Data System (ADS)

    Xu, Songcen; de Lamare, Rodrigo C.; Poor, H. Vincent

    2015-12-01

    This paper presents adaptive link selection algorithms for distributed estimation and considers their application to wireless sensor networks and smart grids. In particular, exhaustive search-based least mean squares (LMS) / recursive least squares (RLS) link selection algorithms and sparsity-inspired LMS / RLS link selection algorithms that can exploit the topology of networks with poor-quality links are considered. The proposed link selection algorithms are then analyzed in terms of their stability, steady-state, and tracking performance and computational complexity. In comparison with the existing centralized or distributed estimation strategies, the key features of the proposed algorithms are as follows: (1) more accurate estimates and faster convergence speed can be obtained and (2) the network is equipped with the ability of link selection that can circumvent link failures and improve the estimation performance. The performance of the proposed algorithms for distributed estimation is illustrated via simulations in applications of wireless sensor networks and smart grids.

  9. The global Minmax k-means algorithm.

    PubMed

    Wang, Xiaoyan; Bai, Yanping

    2016-01-01

    The global k-means algorithm is an incremental approach to clustering that dynamically adds one cluster center at a time through a deterministic global search procedure from suitable initial positions, and employs k-means to minimize the sum of the intra-cluster variances. However, the global k-means algorithm sometimes produces singleton clusters, and its initial positions are sometimes poor; after a bad initialization, the k-means algorithm easily settles into a poor local optimum. In this paper, we first modify the global k-means algorithm to eliminate the singleton clusters, and then apply the MinMax k-means clustering error method to the global k-means algorithm to overcome the effect of bad initialization, yielding the proposed global MinMax k-means algorithm. The proposed clustering method is tested on some popular data sets and compared to the k-means algorithm, the global k-means algorithm and the MinMax k-means algorithm. The experimental results show that our proposed algorithm outperforms the other algorithms mentioned in the paper.

  10. Linear Bregman algorithm implemented in parallel GPU

    NASA Astrophysics Data System (ADS)

    Li, Pengyan; Ke, Jue; Sui, Dong; Wei, Ping

    2015-08-01

    At present, most compressed sensing (CS) reconstruction algorithms converge slowly and are therefore difficult to run on a PC. To deal with this issue, we use a parallel GPU to implement a broadly used compressed sensing algorithm, the linear Bregman algorithm. The linear iterative Bregman algorithm is a reconstruction algorithm proposed by Osher and Cai. Compared with other CS reconstruction algorithms, the linear Bregman algorithm involves only vector and matrix multiplication and a thresholding operation, and is simpler and more efficient to program. We use C as the development language and adopt CUDA (Compute Unified Device Architecture) as the parallel computing architecture. In this paper, we compare the parallel Bregman algorithm with the traditional CPU-implemented Bregman algorithm. In addition, we also compare the parallel Bregman algorithm with other CS reconstruction algorithms, such as the OMP and TwIST algorithms. Compared with these two algorithms, the results show that the parallel Bregman algorithm needs less time and thus is more convenient for real-time object reconstruction, which is important given the fast-growing demands of information technology.
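
    The per-iteration work the abstract refers to really is just two matrix-vector products and one soft-threshold, which is what makes the method easy to map onto a GPU. A CPU-side sketch of the linearized Bregman iteration for min ||u||_1 s.t. Au = b (the step-size rule and iteration count are our choices):

      import numpy as np

      def shrink(v, mu):
          # Soft thresholding -- the only nonlinearity in the iteration.
          return np.sign(v) * np.maximum(np.abs(v) - mu, 0.0)

      def linearized_bregman(A, b, mu=5.0, iters=2000):
          # Each step: two matrix-vector products plus one threshold.
          delta = 1.0 / np.linalg.norm(A, 2) ** 2   # step size <= 1/||A||^2
          v = np.zeros(A.shape[1])
          u = np.zeros(A.shape[1])
          for _ in range(iters):
              v += A.T @ (b - A @ u)
              u = delta * shrink(v, mu)
          return u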

  11. A new frame-based registration algorithm

    NASA Technical Reports Server (NTRS)

    Yan, C. H.; Whalen, R. T.; Beaupre, G. S.; Sumanaweera, T. S.; Yen, S. Y.; Napel, S.

    1998-01-01

    This paper presents a new algorithm for frame registration. Our algorithm requires only that the frame be comprised of straight rods, as opposed to the N structures or an accurate frame model required by existing algorithms. The algorithm utilizes the full 3D information in the frame as well as a least squares weighting scheme to achieve highly accurate registration. We use simulated CT data to assess the accuracy of our algorithm. We compare the performance of the proposed algorithm to two commonly used algorithms. Simulation results show that the proposed algorithm is comparable to the best existing techniques with knowledge of the exact mathematical frame model. For CT data corrupted with an unknown in-plane rotation or translation, the proposed technique is also comparable to the best existing techniques. However, in situations where there is a discrepancy of more than 2 mm (0.7% of the frame dimension) between the frame and the mathematical model, the proposed technique is significantly better (p < or = 0.05) than the existing techniques. The proposed algorithm can be applied to any existing frame without modification. It provides better registration accuracy and is robust against model mis-match. It allows greater flexibility on the frame structure. Lastly, it reduces the frame construction cost as adherence to a concise model is not required.

  12. Clustering algorithm for determining community structure in large networks

    NASA Astrophysics Data System (ADS)

    Pujol, Josep M.; Béjar, Javier; Delgado, Jordi

    2006-07-01

    We propose an algorithm to find the community structure in complex networks based on the combination of spectral analysis and modularity optimization. The clustering produced by our algorithm is as accurate as the best algorithms on the literature of modularity optimization; however, the main asset of the algorithm is its efficiency. The best match for our algorithm is Newman’s fast algorithm, which is the reference algorithm for clustering in large networks due to its efficiency. When both algorithms are compared, our algorithm outperforms the fast algorithm both in efficiency and accuracy of the clustering, in terms of modularity. Thus, the results suggest that the proposed algorithm is a good choice to analyze the community structure of medium and large networks in the range of tens and hundreds of thousand vertices.

  13. Algorithms, games, and evolution

    PubMed Central

    Chastain, Erick; Livnat, Adi; Papadimitriou, Christos; Vazirani, Umesh

    2014-01-01

    Even the most seasoned students of evolution, starting with Darwin himself, have occasionally expressed amazement that the mechanism of natural selection has produced the whole of Life as we see it around us. There is a computational way to articulate the same amazement: “What algorithm could possibly achieve all this in a mere three and a half billion years?” In this paper we propose an answer: We demonstrate that in the regime of weak selection, the standard equations of population genetics describing natural selection in the presence of sex become identical to those of a repeated game between genes played according to multiplicative weight updates (MWUA), an algorithm known in computer science to be surprisingly powerful and versatile. MWUA maximizes a tradeoff between cumulative performance and entropy, which suggests a new view on the maintenance of diversity in evolution. PMID:24979793
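
    For reference, the multiplicative weights update the paper identifies is simple to state: each round, every strategy's weight is multiplied by a factor that grows with its payoff, then the weights are renormalized. A minimal sketch (the payoff scaling and the choice of ε are illustrative):

      import numpy as np

      def mwua(payoffs, eps=0.1):
          # Multiplicative weights over a (rounds x actions) payoff matrix,
          # payoffs scaled to [-1, 1]: each round, every action's weight is
          # multiplied by (1 + eps * payoff), then renormalized.
          w = np.ones(payoffs.shape[1])
          for round_payoff in payoffs:
              w *= 1.0 + eps * round_payoff
              w /= w.sum()
          return w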

  14. Feature extraction and classification algorithms for high dimensional data

    NASA Technical Reports Server (NTRS)

    Lee, Chulhee; Landgrebe, David

    1993-01-01

    Feature extraction and classification algorithms for high dimensional data are investigated. Developments with regard to sensors for Earth observation are moving in the direction of providing much higher dimensional multispectral imagery than is now possible. In analyzing such high dimensional data, processing time becomes an important factor. With large increases in dimensionality and the number of classes, processing time will increase significantly. To address this problem, a multistage classification scheme is proposed which reduces the processing time substantially by eliminating unlikely classes from further consideration at each stage. Several truncation criteria are developed and the relationship between thresholds and the error caused by the truncation is investigated. Next, an approach to feature extraction for classification is proposed based directly on the decision boundaries. It is shown that all the features needed for classification can be extracted from decision boundaries. A characteristic of the proposed method arises by noting that only a portion of the decision boundary is effective in discriminating between classes, and the concept of the effective decision boundary is introduced. The proposed feature extraction algorithm has several desirable properties: it predicts the minimum number of features necessary to achieve the same classification accuracy as in the original space for a given pattern recognition problem, and it finds the necessary feature vectors. The proposed algorithm does not deteriorate under the circumstances of equal means or equal covariances as some previous algorithms do. In addition, the decision boundary feature extraction algorithm can be used both for parametric and non-parametric classifiers. Finally, some problems encountered in analyzing high dimensional data are studied and possible solutions are proposed. First, the increased importance of the second-order statistics in analyzing high dimensional data is recognized.

  15. Segmentation of pomegranate MR images using spatial fuzzy c-means (SFCM) algorithm

    NASA Astrophysics Data System (ADS)

    Moradi, Ghobad; Shamsi, Mousa; Sedaaghi, M. H.; Alsharif, M. R.

    2011-10-01

    Segmentation is one of the fundamental issues of image processing and machine vision. It plays a prominent role in a variety of image processing applications. In this paper, MRI segmentation of pomegranate, an important application of image processing, is explored. Pomegranate is a fruit with pharmacological properties such as being anti-viral and anti-cancer. Having a high-quality product in hand is a critical factor in its marketing, and the internal quality of the product is of central importance in the sorting process. The determination of qualitative features cannot be made manually. Therefore, the segmentation of the internal structures of the fruit needs to be performed as accurately as possible in the presence of noise. The fuzzy c-means (FCM) algorithm is noise-sensitive, and noisy pixels are misclassified. As a solution, the spatial FCM (SFCM) algorithm is proposed in this paper for the segmentation of pomegranate MR images. The algorithm incorporates spatial neighborhood information into FCM by modifying the fuzzy membership function of each class. The segmentation results on the original pomegranate MR images, and on versions corrupted by Gaussian, salt-and-pepper, and speckle noise, show that the SFCM algorithm performs significantly better than the FCM algorithm. Also, after diverse steps of qualitative and quantitative analysis, we have concluded that the SFCM algorithm with a 5×5 window outperforms the other window sizes.
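
    The spatial modification described above can be sketched concisely: compute standard FCM memberships, then re-weight each pixel's membership by the memberships in its neighborhood. A rough sketch following Chuang et al.'s spatial FCM formulation (the exponents p and q and the default window size are assumptions; the paper's preferred window is 5×5):

        import numpy as np
        from scipy.ndimage import uniform_filter

        def sfcm_step(img, centers, m=2.0, p=1, q=1, win=5):
            """One spatial-FCM update on a 2-D grayscale image.

            The spatial function h averages each class's memberships over a
            win x win neighborhood, down-weighting isolated noisy pixels.
            """
            d = np.abs(img[..., None] - centers) + 1e-12      # pixel-to-centre distances
            u = 1.0 / d ** (2.0 / (m - 1.0))
            u /= u.sum(axis=-1, keepdims=True)                # standard FCM memberships
            h = np.stack([uniform_filter(u[..., c], size=win)
                          for c in range(len(centers))], axis=-1)
            u2 = u ** p * h ** q
            u2 /= u2.sum(axis=-1, keepdims=True)              # spatially regularized memberships
            centers = (u2 ** m * img[..., None]).sum((0, 1)) / (u2 ** m).sum((0, 1))
            return u2, centers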

  16. [Baseline Correction Algorithm for Raman Spectroscopy Based on Non-Uniform B-Spline].

    PubMed

    Fan, Xian-guang; Wang, Hai-tao; Wang, Xin; Xu, Ying-jie; Wang, Xiu-fen; Que, Jing

    2016-03-01

    As a necessary step in the data processing of Raman spectroscopy, baseline correction is commonly used to eliminate the interference of fluorescence spectra. The traditional baseline correction algorithm based on polynomial fitting is simple and easy to implement, but its flexibility is poor due to the uncertain fitting order. In this paper, instead of polynomial fitting, a non-uniform B-spline is proposed to overcome the shortcomings of the traditional method. Building on the advantages of the traditional algorithm, the knot vector of the non-uniform B-spline is set adaptively using the peak positions of the original Raman spectrum, and the baseline is then fitted with a fixed order. To verify this algorithm, the Raman spectra of parathion-methyl and colza oil are detected, their baselines are corrected using this algorithm, and the results are compared with two other baseline correction algorithms. The experimental results show that this algorithm improves baseline correction with a fixed fitting order and fewer parameters, and no over- or under-fitting occurs. Therefore, the non-uniform B-spline is proved to be an effective baseline correction algorithm for Raman spectroscopy.
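
    As a rough illustration of the idea (not the authors' exact procedure), SciPy's least-squares spline fitting can serve as the non-uniform B-spline, with interior knots kept clear of detected peaks; the knot count and prominence threshold below are assumptions:

        import numpy as np
        from scipy.interpolate import LSQUnivariateSpline
        from scipy.signal import find_peaks

        def bspline_baseline(x, y, n_knots=8, prominence=0.1):
            """Fit a cubic B-spline baseline with knots away from peaks.

            Sketch only: practical implementations iterate, re-weighting
            points that sit above the current baseline estimate.
            """
            peaks, _ = find_peaks(y, prominence=prominence * np.ptp(y))
            knots = np.linspace(x[1], x[-2], n_knots)
            if peaks.size:                        # drop knots too close to a peak
                spacing = (x[-1] - x[0]) / (4 * n_knots)
                keep = np.all(np.abs(knots[:, None] - x[peaks]) > spacing, axis=1)
                knots = knots[keep]
            return LSQUnivariateSpline(x, y, t=knots, k=3)(x)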

  17. CenLP: A centrality-based label propagation algorithm for community detection in networks

    NASA Astrophysics Data System (ADS)

    Sun, Heli; Liu, Jiao; Huang, Jianbin; Wang, Guangtao; Yang, Zhou; Song, Qinbao; Jia, Xiaolin

    2015-10-01

    Community detection is an important task for discovering the structure and features of complex networks. Many existing methods are sensitive to critical user-dependent parameters or are time-consuming in practice. In this paper, we propose a novel label propagation algorithm, called CenLP (Centrality-based Label Propagation). The algorithm introduces a new function to measure the centrality of nodes quantitatively, without any user interaction, by calculating the local density and the similarity with higher-density neighbors for each node. Based on the centrality of nodes, we present a new label propagation algorithm with a specific update order and node preference to uncover communities in large-scale networks automatically, without imposing any prior restriction. Experiments on both real-world and synthetic networks show that our algorithm retains the simplicity, effectiveness, and scalability of the original label propagation algorithm while becoming more robust and accurate. Extensive experiments demonstrate the superior performance of our algorithm over the baseline methods. Moreover, our detailed experimental evaluation on real-world networks indicates that our algorithm can effectively measure the centrality of nodes in social networks.

  18. Adaptive-feedback control algorithm.

    PubMed

    Huang, Debin

    2006-06-01

    This paper is motivated by giving detailed proofs of, and some interesting remarks on, the results the author obtained in a series of papers [Phys. Rev. Lett. 93, 214101 (2004); Phys. Rev. E 71, 037203 (2005); 69, 067201 (2004)], where an adaptive-feedback algorithm was proposed to effectively stabilize and synchronize chaotic systems. This note proves the rigor of the algorithm in mathematical detail and gives some interesting remarks on its potential applications to chaos control and synchronization. In addition, a significant comment on synchronization-based parameter estimation is given, which shows that some techniques proposed in the literature are less rigorous and ineffective in some cases.

  19. One cutting plane algorithm using auxiliary functions

    NASA Astrophysics Data System (ADS)

    Zabotin, I. Ya; Kazaeva, K. E.

    2016-11-01

    We propose an algorithm from the class of cutting methods for solving a convex programming problem. The algorithm is characterized by the construction of approximations using auxiliary functions instead of the objective function. Each auxiliary function is based on an exterior penalty function. In the proposed algorithm, the admissible set and the epigraph of each auxiliary function are embedded into polyhedral sets. Consequently, the iteration points are found by solving linear programming problems. We discuss the implementation of the algorithm and prove its convergence.

  20. Optic Disc Boundary and Vessel Origin Segmentation of Fundus Images.

    PubMed

    Roychowdhury, Sohini; Koozekanani, Dara D; Kuchinka, Sam N; Parhi, Keshab K

    2016-11-01

    This paper presents a novel classification-based optic disc (OD) segmentation algorithm that detects the OD boundary and the location of the vessel origin (VO) pixel. First, the green plane of each fundus image is resized and morphologically reconstructed using a circular structuring element. Bright regions that lie in the close vicinity of the major blood vessels are then extracted from the morphologically reconstructed image. Next, the bright regions are classified as bright probable OD regions and non-OD regions using six region-based features and a Gaussian mixture model classifier. The classified bright probable OD region with maximum Vessel-Sum and Solidity is detected as the best candidate region for the OD. Other bright probable OD regions within 1 disc diameter from the centroid of the best candidate OD region are then detected as remaining candidate regions for the OD. A convex hull containing all the candidate OD regions is then estimated, and a best-fit ellipse across the convex hull becomes the segmented OD boundary. Finally, the centroid of the major blood vessels within the segmented OD boundary is detected as the VO pixel location. The proposed algorithm has low computation time complexity and is robust to variations in image illumination, imaging angles, and retinal abnormalities. It achieves 98.8%-100% OD segmentation success and an OD segmentation overlap score in the range of 72%-84% on images from the six public datasets DRIVE, DIARETDB1, DIARETDB0, CHASE_DB1, MESSIDOR, and STARE, in less than 2.14 s per image. Thus, the proposed algorithm can be used for automated detection of retinal pathologies such as glaucoma, diabetic retinopathy, and maculopathy.
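
    The first two preprocessing steps (morphological reconstruction of the green plane and bright-region extraction) can be approximated with scikit-image. This is a sketch only, with an arbitrary structuring-element radius and threshold; the paper's subsequent vessel-proximity filtering and Gaussian-mixture classification are not shown:

        import numpy as np
        from skimage.morphology import disk, erosion, reconstruction

        def bright_regions(green, radius=15, thresh=0.9):
            """Opening-by-reconstruction of the green plane, then a bright mask."""
            g = green.astype(float) / green.max()
            seed = erosion(g, disk(radius))       # suppress bright compact structures
            opened = reconstruction(seed, g)      # rebuild everything except them
            residue = g - opened                  # bright, compact leftovers (OD candidates)
            return residue > thresh * residue.max()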

  1. Algorithm for Compressing Time-Series Data

    NASA Technical Reports Server (NTRS)

    Hawkins, S. Edward, III; Darlington, Edward Hugo

    2012-01-01

    An algorithm based on Chebyshev polynomials effects lossy compression of time-series data or other one-dimensional data streams (e.g., spectral data) that are arranged in blocks for sequential transmission. The algorithm was developed for use in transmitting data from spacecraft scientific instruments to Earth stations. In spite of its lossy nature, the algorithm preserves the information needed for scientific analysis. The algorithm is computationally simple, yet compresses data streams by factors much greater than two. The algorithm is not restricted to spacecraft or scientific uses: it is applicable to time-series data in general. The algorithm can also be applied to general multidimensional data that have been converted to time-series data, a typical example being image data acquired by raster scanning. However, unlike most prior image-data-compression algorithms, this algorithm neither depends on nor exploits the two-dimensional spatial correlations that are generally present in images. In order to understand the essence of this compression algorithm, it is necessary to understand that the net effect of this algorithm and the associated decompression algorithm is to approximate the original stream of data as a sequence of finite series of Chebyshev polynomials. For the purpose of this algorithm, a block of data or interval of time for which a Chebyshev polynomial series is fitted to the original data is denoted a fitting interval. Chebyshev approximation has two properties that make it particularly effective for compressing serial data streams with minimal loss of scientific information: The errors associated with a Chebyshev approximation are nearly uniformly distributed over the fitting interval (this is known in the art as the "equal error property"); and the maximum deviations of the fitted Chebyshev polynomial from the original data have the smallest possible values (this is known in the art as the "min-max property").
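
    The fit-and-reconstruct step maps directly onto NumPy's Chebyshev utilities. A minimal sketch of per-block compression (block length and polynomial degree are arbitrary choices here, and a least-squares fit only approximates the min-max property described above):

        import numpy as np
        from numpy.polynomial import chebyshev as C

        def compress_block(block, degree=10):
            """Fit a Chebyshev series to one block: keep degree+1
            coefficients instead of len(block) samples."""
            x = np.linspace(-1.0, 1.0, len(block))    # map the block onto [-1, 1]
            return C.chebfit(x, block, degree)

        def decompress_block(coeffs, n):
            """Evaluate the stored series back to n samples."""
            return C.chebval(np.linspace(-1.0, 1.0, n), coeffs)

        block = np.sin(12 * np.linspace(0, 1, 256))   # toy time-series block
        err = decompress_block(compress_block(block), 256) - block
        print(np.max(np.abs(err)))                    # near-uniform residual error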

  2. Improved autonomous star identification algorithm

    NASA Astrophysics Data System (ADS)

    Luo, Li-Yan; Xu, Lu-Ping; Zhang, Hua; Sun, Jing-Rong

    2015-06-01

    The log-polar transform (LPT) is introduced into star identification because of its rotation invariance. An improved autonomous star identification algorithm is proposed in this paper to avoid the circular shift of the feature vector and to reduce the time consumed by star identification algorithms using the LPT. In the proposed algorithm, the star pattern of the same navigation star remains unchanged when the stellar image is rotated, which reduces the star identification time. The logarithmic values of the plane distances between the navigation star and its neighbor stars are adopted to structure the feature vector of the navigation star, which enhances the robustness of star identification. In addition, the algorithm is designed to find the identification result with fewer comparisons, instead of searching the whole feature database. The simulation results demonstrate that the proposed algorithm can effectively accelerate star identification. Moreover, the recognition rate and robustness of the proposed algorithm are better than those of the LPT algorithm and the modified grid algorithm. Project supported by the National Natural Science Foundation of China (Grant Nos. 61172138 and 61401340), the Open Research Fund of the Academy of Satellite Application, China (Grant No. 2014_CXJJ-DH_12), the Fundamental Research Funds for the Central Universities, China (Grant Nos. JB141303 and 201413B), the Natural Science Basic Research Plan in Shaanxi Province, China (Grant No. 2013JQ8040), the Research Fund for the Doctoral Program of Higher Education of China (Grant No. 20130203120004), and the Xi’an Science and Technology Plan, China (Grant No. CXY1350(4)).

  3. A “Tuned” Mask Learnt Approach Based on Gravitational Search Algorithm

    PubMed Central

    Wan, Youchuan; Ye, Zhiwei

    2016-01-01

    Texture image classification is an important topic in many applications in machine vision and image analysis. Extracting texture features from the original texture image using a “Tuned” mask is one of the simplest and most effective methods. However, hill-climbing-based training methods cannot reliably acquire a satisfactory mask in a single run; on the other hand, some commonly used evolutionary algorithms like the genetic algorithm (GA) and particle swarm optimization (PSO) easily fall into local optima. A novel approach for texture image classification, exemplified by the recognition of residential areas, is detailed in the paper. In the proposed approach, the design of the “Tuned” mask is viewed as a constrained optimization problem, and the optimal “Tuned” mask is acquired by maximizing the texture energy via a newly proposed gravitational search algorithm (GSA). The optimal “Tuned” mask is achieved through the convergence of GSA. The proposed approach has been tested on public texture and remote sensing images, and the results are compared with those of GA, PSO, honey-bee mating optimization (HBMO), and artificial immune algorithm (AIA). Moreover, features extracted by Gabor wavelets are also utilized for a further comparison. Experimental results show that the proposed method is robust and adaptive and exhibits better performance than the other methods considered in the paper in terms of fitness value and classification accuracy. PMID:28090204

  4. Reflective Practice: Origins and Interpretations

    ERIC Educational Resources Information Center

    Reynolds, Michael

    2011-01-01

    The idea of reflection is central to the theory and practice of learning--especially learning which is grounded in past or current experience. This paper proposes a working definition of reflection and reviews its origins and recent developments. The author also provides an account of "critical reflection", including its rationale and…

  5. Validation of an improved 'diffeomorphic demons' algorithm for deformable image registration in image-guided radiation therapy.

    PubMed

    Zhou, Lu; Zhou, Linghong; Zhang, Shuxu; Zhen, Xin; Yu, Hui; Zhang, Guoqian; Wang, Ruihao

    2014-01-01

    Deformable image registration (DIR) is widely used in radiation therapy, for tasks such as automatic contour generation, dose accumulation, and tumor growth or regression analysis. To achieve higher registration accuracy and faster convergence, an improved 'diffeomorphic demons' registration algorithm is proposed and validated. Based on Brox et al.'s gradient constancy assumption and Malis's efficient second-order minimization (ESM) algorithm, a grey-value gradient similarity term and a transformation error term are added to the demons energy function, and a formula is derived to calculate the update of the transformation field. The limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) algorithm is used to optimize the energy function so that the iteration number can be determined automatically. The proposed algorithm was validated using mathematically deformed images and physically deformed phantom images. Compared with the original 'diffeomorphic demons' algorithm, the proposed registration method achieves higher precision and faster convergence. Because of the influence of different scanning conditions in fractionated radiotherapy, the density ranges of the treatment image and the planning image may differ; in such cases, the improved demons algorithm can still achieve fast and accurate registration.

  6. Multimodal Estimation of Distribution Algorithms.

    PubMed

    Yang, Qiang; Chen, Wei-Neng; Li, Yun; Chen, C L Philip; Xu, Xiang-Min; Zhang, Jun

    2016-02-15

    Taking advantage of the strength of estimation of distribution algorithms (EDAs) in preserving high diversity, this paper proposes a multimodal EDA. Integrated with clustering strategies for crowding and speciation, two versions of this algorithm are developed, which operate at the niche level. These two algorithms are then equipped with three distinctive techniques: 1) a dynamic cluster sizing strategy; 2) an alternating use of Gaussian and Cauchy distributions to generate offspring; and 3) an adaptive local search. The dynamic cluster sizing affords a potential balance between exploration and exploitation and reduces the sensitivity to the cluster size in the niching methods. By alternating between Gaussian and Cauchy distributions when generating offspring at the niche level, the algorithms can likewise balance exploration and exploitation. Further, solution accuracy is enhanced through a new local search scheme conducted probabilistically around the seeds of niches, with probabilities determined self-adaptively according to the fitness values of these seeds. Extensive experiments conducted on 20 benchmark multimodal problems confirm that both algorithms achieve competitive performance compared with several state-of-the-art multimodal algorithms, which is supported by nonparametric tests. In particular, the proposed algorithms are very promising for complex problems with many local optima.

  7. Algorithmic advances in stochastic programming

    SciTech Connect

    Morton, D.P.

    1993-07-01

    Practical planning problems with deterministic forecasts of inherently uncertain parameters often yield unsatisfactory solutions. Stochastic programming formulations allow uncertain parameters to be modeled as random variables with known distributions, but the size of the resulting mathematical programs can be formidable. Decomposition-based algorithms take advantage of special structure and provide an attractive approach to such problems. We consider two classes of decomposition-based stochastic programming algorithms. The first type of algorithm addresses problems with a "manageable" number of scenarios. The second class incorporates Monte Carlo sampling within a decomposition algorithm. We develop and empirically study an enhanced Benders decomposition algorithm for solving multistage stochastic linear programs within a prespecified tolerance. The enhancements include warm start basis selection, preliminary cut generation, the multicut procedure, and decision tree traversing strategies. Computational results are presented for a collection of "real-world" multistage stochastic hydroelectric scheduling problems. Recently, there has been an increased focus on decomposition-based algorithms that use sampling within the optimization framework. These approaches hold much promise for solving stochastic programs with many scenarios. A critical component of such algorithms is a stopping criterion to ensure the quality of the solution. With this as motivation, we develop a stopping rule theory for algorithms in which bounds on the optimal objective function value are estimated by sampling. Rules are provided for selecting sample sizes and terminating the algorithm under which asymptotic validity of confidence interval statements for the quality of the proposed solution can be verified. Issues associated with the application of this theory to two sampling-based algorithms are considered, and preliminary empirical coverage results are presented.

  8. Understanding Air Transportation Market Dynamics Using a Search Algorithm for Calibrating Travel Demand and Price

    NASA Technical Reports Server (NTRS)

    Kumar, Vivek; Horio, Brant M.; DeCicco, Anthony H.; Hasan, Shahab; Stouffer, Virginia L.; Smith, Jeremy C.; Guerreiro, Nelson M.

    2015-01-01

    This paper presents a search-algorithm-based framework to calibrate origin-destination (O-D) market-specific airline ticket demands and prices for the Air Transportation System (ATS). This framework is used for calibrating an agent-based model of the air ticket buy-sell process - Airline Evolutionary Simulation (Airline EVOS) - that has fidelity of detail accounting for airline and consumer behaviors and the interdependencies they share between themselves and the NAS. More specifically, this algorithm simultaneously calibrates demand and airfares for each O-D market to within a specified threshold of a pre-specified target value. The proposed algorithm is illustrated with market data targets provided by the Transportation System Analysis Model (TSAM) and the Airline Origin and Destination Survey (DB1B). Although we specify these models and data sources for this calibration exercise, the methods described in this paper are applicable to calibrating any low-level model of the ATS to other demand-forecast-model-based data. We argue that using a calibration algorithm such as the one presented here to synchronize ATS models with specialized demand forecast models is a powerful tool for establishing credible baseline conditions in experiments analyzing the effects of proposed policy changes to the ATS.

  9. A New Modified Artificial Bee Colony Algorithm with Exponential Function Adaptive Steps.

    PubMed

    Mao, Wei; Lan, Heng-You; Li, Hao-Ru

    2016-01-01

    As one of the most popular recent swarm intelligence techniques, the artificial bee colony algorithm is poor at exploitation and has defects such as slow search speed, poor population diversity, stagnation during the working process, and being trapped in local optima. The purpose of this paper is to develop a new modified artificial bee colony algorithm addressing the initial population structure, subpopulation groups, step updating, and population elimination. Further, based on opposition-based learning theory and the new modified algorithm, an improved S-type grouping method is proposed, and the original roulette-wheel selection is replaced with a sensitivity-pheromone scheme. Then, an adaptive step with exponential functions is designed to replace the original random step. Finally, based on the CEC13 test suite, six benchmark functions with dimensions D = 20 and D = 40 are chosen and used in experiments analyzing and comparing the iteration speed and accuracy of the new modified algorithms. The experimental results show that the new modified algorithm searches faster and more stably and can quickly improve poor population diversity and find the global optimal solutions.

  10. A New Modified Artificial Bee Colony Algorithm with Exponential Function Adaptive Steps

    PubMed Central

    Mao, Wei; Li, Hao-ru

    2016-01-01

    As one of the most popular recent swarm intelligence techniques, the artificial bee colony algorithm is poor at exploitation and has defects such as slow search speed, poor population diversity, stagnation during the working process, and being trapped in local optima. The purpose of this paper is to develop a new modified artificial bee colony algorithm addressing the initial population structure, subpopulation groups, step updating, and population elimination. Further, based on opposition-based learning theory and the new modified algorithm, an improved S-type grouping method is proposed, and the original roulette-wheel selection is replaced with a sensitivity-pheromone scheme. Then, an adaptive step with exponential functions is designed to replace the original random step. Finally, based on the CEC13 test suite, six benchmark functions with dimensions D = 20 and D = 40 are chosen and used in experiments analyzing and comparing the iteration speed and accuracy of the new modified algorithms. The experimental results show that the new modified algorithm searches faster and more stably and can quickly improve poor population diversity and find the global optimal solutions. PMID:27293426

  11. Generalized total least squares prediction algorithm for universal 3D similarity transformation

    NASA Astrophysics Data System (ADS)

    Wang, Bin; Li, Jiancheng; Liu, Chao; Yu, Jie

    2017-02-01

    Three-dimensional (3D) similarity datum transformation is extensively applied to transform coordinates from a GNSS-based datum to a local coordinate system. Recently, some total least squares (TLS) algorithms have been successfully developed to solve the universal 3D similarity transformation problem (possibly with large rotation angles and an arbitrary scale ratio). However, their procedures for parameter estimation and new point (non-common point) transformation were implemented separately, and the statistical correlation that often exists between the common and new points in the original coordinate system was not considered. In this contribution, a generalized total least squares prediction (GTLSP) algorithm, which implements the parameter estimation and new point transformation jointly, is proposed. All of the random errors in the original and target coordinates, and their variance-covariance information, are considered. The 3D transformation model in this case is abstracted as a kind of generalized errors-in-variables (EIV) model, and the equation for new point transformation is incorporated into the functional model as well. The iterative solution is then derived based on the Gauss-Newton approach of nonlinear least squares. The performance of the GTLSP algorithm is verified through a simulated experiment, and the results show that the GTLSP algorithm improves the statistical accuracy of the transformed coordinates compared with the existing TLS algorithms for 3D similarity transformation.

  12. A very fast iterative algorithm for TV-regularized image reconstruction with applications to low-dose and few-view CT

    NASA Astrophysics Data System (ADS)

    Kudo, Hiroyuki; Yamazaki, Fukashi; Nemoto, Takuya; Takaki, Keita

    2016-10-01

    This paper concerns iterative reconstruction for low-dose and few-view CT by minimizing a data-fidelity term regularized with the total variation (TV) penalty. We propose a very fast iterative algorithm to solve this problem. The algorithm derivation is outlined as follows. First, the original minimization problem is reformulated into a saddle point (primal-dual) problem by using Lagrangian duality, to which we apply first-order primal-dual iterative methods. Second, we precondition the iteration formula using the ramp filter of the filtered backprojection (FBP) reconstruction algorithm, in such a way that the problem solution is not altered. The resulting algorithm resembles the structure of the so-called iterative FBP algorithm, and it converges to the exact minimizer of the cost function very quickly.

  13. The Origin of Mercury

    NASA Astrophysics Data System (ADS)

    Benz, W.; Anic, A.; Horner, J.; Whitby, J. A.

    Mercury's unusually high mean density has always been attributed to special circumstances that occurred during the formation of the planet or shortly thereafter, owing to the planet's close proximity to the Sun. The nature of these special circumstances is still being debated and several scenarios, all proposed more than 20 years ago, have been suggested. In all scenarios, the high mean density is the result of severe fractionation occurring between silicates and iron. It is the origin of this fractionation that is at the centre of the debate: is it due to differences in condensation temperature and/or in material characteristics (e.g. density, strength)? Is it because of mantle evaporation due to the close proximity to the Sun? Or is it due to the blasting off of the mantle during a giant impact?

  14. An Algorithmic Framework for Multiobjective Optimization

    PubMed Central

    Ganesan, T.; Elamvazuthi, I.; Shaari, Ku Zilati Ku; Vasant, P.

    2013-01-01

    Multiobjective (MO) optimization is an emerging area that is increasingly encountered in many fields globally. Various metaheuristic techniques such as differential evolution (DE), the genetic algorithm (GA), the gravitational search algorithm (GSA), and particle swarm optimization (PSO) have been used in conjunction with scalarization techniques such as the weighted sum approach and the normal-boundary intersection (NBI) method to solve MO problems. Nevertheless, many challenges still arise, especially when dealing with problems with many objectives (more than two). In addition, extensive computational overhead emerges when dealing with hybrid algorithms. This paper addresses these issues by proposing an alternative framework that utilizes algorithmic concepts related to the problem structure for generating efficient and effective algorithms, with minimal computational overhead for MO optimization. PMID:24470795
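
    The weighted sum scalarization mentioned above is the simplest building block of such frameworks: it collapses the objective vector into one objective, and sweeping the weights traces out part of the Pareto front. A toy sketch (the quadratic objectives and the SciPy optimizer are illustrative choices):

        import numpy as np
        from scipy.optimize import minimize_scalar

        def weighted_sum(objectives, weights, optimizer):
            """Scalarize a multiobjective problem and hand it to any
            single-objective minimizer."""
            w = np.asarray(weights, dtype=float)
            w /= w.sum()
            return optimizer(lambda x: sum(wi * f(x) for wi, f in zip(w, objectives)))

        f1 = lambda x: (x - 1.0) ** 2          # two competing one-variable objectives
        f2 = lambda x: (x + 1.0) ** 2
        res = weighted_sum([f1, f2], [0.3, 0.7],
                           lambda f: minimize_scalar(f, bounds=(-2, 2), method="bounded"))
        print(res.x)                           # a compromise between the two minima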

  15. Some nonlinear space decomposition algorithms

    SciTech Connect

    Tai, Xue-Cheng; Espedal, M.

    1996-12-31

    Convergence of a space decomposition method is proved for a general convex programming problem. The space decomposition refers to methods that decompose a space into sums of subspaces, which could be a domain decomposition or a multigrid method for partial differential equations. Two algorithms are proposed. Both can be used for linear as well as nonlinear elliptic problems and they reduce to the standard additive and multiplicative Schwarz methods for linear elliptic problems. Two "hybrid" algorithms are also presented. They converge faster than the additive one and have better parallelism than the multiplicative method. Numerical tests with a two level domain decomposition for linear, nonlinear and interface elliptic problems are presented for the proposed algorithms.
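
    For a linear elliptic problem, the additive variant the authors reduce to is essentially overlapping block relaxation. A toy sketch on a 1-D Poisson system (the subdomain layout, damping factor, and iteration count are arbitrary choices):

        import numpy as np

        def additive_schwarz(A, b, blocks, iters=200):
            """Additive Schwarz: solve on each overlapping index block
            independently, then sum the damped corrections."""
            x = np.zeros_like(b)
            for _ in range(iters):
                r = b - A @ x
                dx = np.zeros_like(b)
                for idx in blocks:                # local subdomain solves
                    dx[idx] += np.linalg.solve(A[np.ix_(idx, idx)], r[idx])
                x += 0.5 * dx                     # damping stabilizes the overlap
            return x

        n = 40
        A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # 1-D Poisson matrix
        blocks = [np.arange(0, 24), np.arange(16, 40)]         # two overlapping blocks
        x = additive_schwarz(A, np.ones(n), blocks)
        print(np.linalg.norm(A @ x - np.ones(n)))              # residual shrinks with iters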

  16. Optimized Swinging Door Algorithm for Wind Power Ramp Event Detection: Preprint

    SciTech Connect

    Cui, Mingjian; Zhang, Jie; Florita, Anthony R.; Hodge, Bri-Mathias; Ke, Deping; Sun, Yuanzhang

    2015-08-06

    Significant wind power ramp events (WPREs) are those that influence the integration of wind power, and they are a concern to the continued reliable operation of the power grid. As wind power penetration has increased in recent years, so has the importance of wind power ramps. In this paper, an optimized swinging door algorithm (SDA) is developed to improve ramp detection performance. Wind power time series data are segmented by the original SDA, and then all significant ramps are detected and merged through a dynamic programming algorithm. An application of the optimized SDA is provided to ascertain the optimal parameter of the original SDA. Measured wind power data from the Electric Reliability Council of Texas (ERCOT) are used to evaluate the proposed optimized SDA.
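
    For readers unfamiliar with the underlying swinging door algorithm, a compact sketch of the segmentation step whose threshold the paper optimizes (the deviation parameter dev is the tunable quantity):

        def swinging_door(times, values, dev):
            """Swinging door compression: keep only the points needed so a
            piecewise-linear reconstruction stays within +/- dev of the data.
            Returns the indices of retained points (segment endpoints)."""
            kept, anchor = [0], 0
            up, low = float("inf"), float("-inf")      # slopes of the two "doors"
            for i in range(1, len(values)):
                dt = times[i] - times[anchor]
                up = min(up, (values[i] + dev - values[anchor]) / dt)
                low = max(low, (values[i] - dev - values[anchor]) / dt)
                if low > up:                           # doors crossed: close the segment
                    kept.append(i - 1)
                    anchor = i - 1
                    dt = times[i] - times[anchor]
                    up = (values[i] + dev - values[anchor]) / dt
                    low = (values[i] - dev - values[anchor]) / dt
            kept.append(len(values) - 1)
            return kept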

  17. A novel blind watermarking of ECG signals on medical images using EZW algorithm.

    PubMed

    Nambakhsh, Mohammad S; Ahmadian, Alireza; Ghavami, Mohammad; Dilmaghani, Reza S; Karimi-Fard, S

    2006-01-01

    In this paper, we present a novel blind watermarking method with a secret key, embedding ECG signals in medical images. The embedding is done when the original image is compressed using the embedded zero-tree wavelet (EZW) algorithm. The extraction process is performed at decompression time of the watermarked image. Our algorithm has been tested on several CT and MRI images, and the peak signal-to-noise ratio (PSNR) between the original and watermarked images is greater than 35 dB for watermarking of 512 to 8192 bytes of the mark signal. The proposed method is able to utilize about 15% of the host image to embed the mark signal. This marking capacity improves on previous works while preserving the image details.

  18. The origin of Neandertals

    PubMed Central

    Hublin, J. J.

    2009-01-01

    Western Eurasia yielded a rich Middle (MP) and Late Pleistocene (LP) fossil record documenting the evolution of the Neandertals that can be analyzed in light of recently acquired paleogenetic data, an abundance of archeological evidence, and a well-known environmental context. Their origin likely relates to an episode of recolonization of Western Eurasia by hominins of African origin carrying the Acheulean technology into Europe around 600 ka. An enhancement of both glacial and interglacial phases may have played a crucial role in this event, as well as in the subsequent evolutionary history of the Western Eurasian populations. In addition to climatic adaptations and an increase in encephalization, genetic drift seems to have played a major role in their evolution. To date, a clear speciation event is not documented, and the most likely scenario for the fixation of Neandertal characteristics seems to be an accretion of features along the second half of the MP. Although a separation time for the African and Eurasian populations is difficult to determine, it certainly predates OIS 11, as phenotypic Neandertal features are documented as far back as, and possibly before, this time. It is proposed to use the term “Homo rhodesiensis” to designate the large-brained hominins ancestral to H. sapiens in Africa and at the root of the Neandertals in Europe, and to use the term “Homo neanderthalensis” to designate all of the specimens carrying derived metrical or non-metrical features used in the definition of the LP Neandertals. PMID:19805257

  19. Two-stage hybrid feature selection algorithms for diagnosing erythemato-squamous diseases.

    PubMed

    Xie, Juanying; Lei, Jinhu; Xie, Weixin; Shi, Yong; Liu, Xiaohui

    2013-01-01

    This paper proposes two-stage hybrid feature selection algorithms to build stable and efficient diagnostic models, where a new accuracy measure is introduced to assess the models. The two-stage hybrid algorithms adopt support vector machines (SVM) as the classification tool; the extended sequential forward search (SFS), sequential forward floating search (SFFS), and sequential backward floating search (SBFS), respectively, as search strategies; and the generalized F-score (GF) to evaluate the importance of each feature. The new accuracy measure is used as the criterion to evaluate the performance of a temporary SVM and to direct the feature selection algorithms. These hybrid methods combine the advantages of filters and wrappers to select the optimal feature subset from the original feature set to build stable and efficient classifiers. To obtain stable, statistically sound, and optimal classifiers, we conduct 10-fold cross-validation experiments in the first stage; then, for each algorithm, we merge the 10 feature subsets selected in the cross-validation experiments into a new full feature set on which to perform feature selection in the second stage. We repeat each hybrid feature selection algorithm in the second stage on the fold that obtained the best result in the first stage. Experimental results show that our proposed two-stage hybrid feature selection algorithms construct efficient diagnostic models that achieve better accuracy than those built by the corresponding hybrid feature selection algorithms without the second-stage feature selection procedure. Furthermore, our methods achieve better classification accuracy when compared with the available algorithms for diagnosing erythemato-squamous diseases.
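
    The filter half of these hybrids ranks features before the SVM wrapper examines them. A sketch of the binary-class F-score underlying the generalized F-score (the multi-class generalization used in the paper is not shown):

        import numpy as np

        def f_score(X, y):
            """Per-feature F-score for binary labels y in {0, 1}.
            A higher score means the feature separates the classes better."""
            pos, neg = X[y == 1], X[y == 0]
            m, mp, mn = X.mean(0), pos.mean(0), neg.mean(0)
            between = (mp - m) ** 2 + (mn - m) ** 2           # class-mean separation
            within = pos.var(0, ddof=1) + neg.var(0, ddof=1)  # within-class spread
            return between / (within + 1e-12)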

  20. Threshold extended ID3 algorithm

    NASA Astrophysics Data System (ADS)

    Kumar, A. B. Rajesh; Ramesh, C. Phani; Madhusudhan, E.; Padmavathamma, M.

    2012-04-01

    Information exchange over insecure networks needs to provide authentication and confidentiality for the database, a significant problem in data mining. In this paper we propose a novel authenticated multiparty ID3 algorithm used to construct a multiparty secret-sharing decision tree for implementation in medical transactions.

  1. Quantum Color Image Encryption Algorithm Based on A Hyper-Chaotic System and Quantum Fourier Transform

    NASA Astrophysics Data System (ADS)

    Tan, Ru-Chao; Lei, Tong; Zhao, Qing-Min; Gong, Li-Hua; Zhou, Zhi-Hong

    2016-12-01

    To improve on the slow processing speed of classical image encryption algorithms and to enhance the security of private color images, a new quantum color image encryption algorithm based on a hyper-chaotic system is proposed, in which the sequences generated by Chen's hyper-chaotic system are used to scramble and diffuse the three components of the original color image. Subsequently, the quantum Fourier transform is exploited to fulfill the encryption. Numerical simulations show that the presented quantum color image encryption algorithm possesses a large key space to resist illegal attacks, sensitive dependence on initial keys, a uniform distribution of gray values in the encrypted image, and weak correlation between adjacent pixels in the cipher-image.

  2. Neural network algorithm for image reconstruction using the "grid-friendly" projections.

    PubMed

    Cierniak, Robert

    2011-09-01

    This paper describes the development of an original approach to the reconstruction problem using a recurrent neural network. In particular, the "grid-friendly" angles of the performed projections are selected according to the discrete Radon transform (DRT) concept to decrease the number of projections required. The methodology of our approach is consistent with analytical reconstruction algorithms. In our approach, the reconstruction problem is reformulated as an optimization problem, which is solved using a method based on the maximum likelihood methodology. The proposed reconstruction algorithm is then adapted for the more practical case of discrete fan-beam projections. Computer simulation results show that a neural network reconstruction algorithm designed in this way outperforms conventional methods in reconstructed image quality.

  3. An implementation of the look-ahead Lanczos algorithm for non-Hermitian matrices

    NASA Technical Reports Server (NTRS)

    Freund, Roland W.; Gutknecht, Martin H.; Nachtigal, Noel M.

    1991-01-01

    The nonsymmetric Lanczos method can be used to compute eigenvalues of large sparse non-Hermitian matrices or to solve large sparse non-Hermitian linear systems. However, the original Lanczos algorithm is susceptible to possible breakdowns and potential instabilities. An implementation is presented of a look-ahead version of the Lanczos algorithm that, except for the very special situation of an incurable breakdown, overcomes these problems by skipping over those steps in which a breakdown or near-breakdown would occur in the standard process. The proposed algorithm can handle look-ahead steps of any length and requires the same number of matrix-vector products and inner products as the standard Lanczos process without look-ahead.

  4. cOSPREY: A Cloud-Based Distributed Algorithm for Large-Scale Computational Protein Design.

    PubMed

    Pan, Yuchao; Dong, Yuxi; Zhou, Jingtian; Hallen, Mark; Donald, Bruce R; Zeng, Jianyang; Xu, Wei

    2016-09-01

    Finding the global minimum energy conformation (GMEC) of a huge combinatorial search space is the key challenge in computational protein design (CPD) problems. Traditional algorithms lack a scalable and efficient distributed design scheme, preventing researchers from taking full advantage of current cloud infrastructures. We design cloud OSPREY (cOSPREY), an extension to the widely used protein design software OSPREY, to allow the original design framework to scale to commercial cloud infrastructures. We propose several novel designs to integrate both algorithm and system optimizations, such as GMEC-specific pruning, state search partitioning, asynchronous algorithm state sharing, and fault tolerance. We evaluate cOSPREY on three different cloud platforms using different technologies and show that it can solve a number of large-scale protein design problems that have not been possible with previous approaches.

  5. New algorithm for efficient pattern recall using a static threshold with the Steinbuch Lernmatrix

    NASA Astrophysics Data System (ADS)

    Juan Carbajal Hernández, José; Sánchez Fernández, Luis P.

    2011-03-01

    An associative memory is a binary relationship between inputs and outputs, stored in a matrix M. The fundamental purpose of an associative memory is to recover correct output patterns from input patterns that can be altered by additive, subtractive, or combined noise. The Steinbuch Lernmatrix, developed in 1961, was the first associative memory and is used as a pattern recognition classifier. However, a misclassification problem arises when crossbar saturation occurs. A new algorithm that corrects this misclassification in the Lernmatrix is proposed in this work. The results for crossbar saturation with fundamental patterns demonstrate better pattern-recall performance with the new algorithm. Experiments with real data show a more efficient classifier when the algorithm is introduced into the original Lernmatrix. The thresholded Lernmatrix memory therefore emerges as a suitable alternative classifier for the developing pattern processing field.
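
    A minimal sketch of the Lernmatrix storage rule together with a static-threshold recall step, the kind of modification the paper studies (the penalty eps and threshold theta are illustrative parameters, not the paper's exact values):

        import numpy as np

        def train_lernmatrix(X, Y, eps=1.0):
            """Steinbuch Lernmatrix learning: reinforce M[j, i] when output j
            and input i fire together, penalize active inputs of silent
            outputs. X and Y are binary pattern matrices (rows = patterns)."""
            M = np.zeros((Y.shape[1], X.shape[1]))
            for x, y in zip(X, Y):
                M += np.outer(y, x) - eps * np.outer(1 - y, x)
            return M

        def recall(M, x, theta):
            """Recall with a static threshold theta instead of the classical
            winner-take-all maximum rule."""
            return (M @ x >= theta).astype(int)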

  6. Accelerating Families of Fuzzy K-Means Algorithms for Vector Quantization Codebook Design

    PubMed Central

    Mata, Edson; Bandeira, Silvio; de Mattos Neto, Paulo; Lopes, Waslon; Madeiro, Francisco

    2016-01-01

    The performance of signal processing systems based on vector quantization depends on codebook design. In the image compression scenario, the quality of the reconstructed images depends on the codebooks used. In this paper, alternatives are proposed for accelerating families of fuzzy K-means algorithms for codebook design. The acceleration is obtained by reducing the number of iterations of the algorithms and applying efficient nearest neighbor search techniques. Simulation results concerning image vector quantization have shown that the acceleration obtained so far does not decrease the quality of the reconstructed images. Codebook design time savings up to about 40% are obtained by the accelerated versions with respect to the original versions of the algorithms. PMID:27886061

  7. An implementation of the look-ahead Lanczos algorithm for non-Hermitian matrices, part 1

    NASA Technical Reports Server (NTRS)

    Freund, Roland W.; Gutknecht, Martin H.; Nachtigal, Noel M.

    1990-01-01

    The nonsymmetric Lanczos method can be used to compute eigenvalues of large sparse non-Hermitian matrices or to solve large sparse non-Hermitian linear systems. However, the original Lanczos algorithm is susceptible to possible breakdowns and potential instabilities. We present an implementation of a look-ahead version of the Lanczos algorithm which overcomes these problems by skipping over those steps in which a breakdown or near-breakdown would occur in the standard process. The proposed algorithm can handle look-ahead steps of any length and is not restricted to steps of length 2, as earlier implementations are. Also, our implementation has the feature that it requires roughly the same number of inner products as the standard Lanczos process without look-ahead.

  8. A genetic algorithm for solving supply chain network design model

    NASA Astrophysics Data System (ADS)

    Firoozi, Z.; Ismail, N.; Ariafar, S. H.; Tang, S. H.; Ariffin, M. K. M. A.

    2013-09-01

    Network design is by nature costly, and optimization models play a significant role in reducing the unnecessary cost components of a distribution network. This study proposes a genetic algorithm to solve a distribution network design model. The structure of the chromosome in the proposed algorithm is defined in a novel way that, in addition to producing feasible solutions, also reduces the computational complexity of the algorithm. Computational results are presented to show the algorithm's performance.

  9. The origins of originality: the neural bases of creative thinking and originality.

    PubMed

    Shamay-Tsoory, S G; Adler, N; Aharon-Peretz, J; Perry, D; Mayseless, N

    2011-01-01

    Although creativity has been related to prefrontal activity, recent neurological case studies postulate that patients who have left frontal and temporal degeneration involving deterioration of language abilities may actually develop de novo artistic abilities. In this study, we propose a neural and cognitive model according to which a balance between the two hemispheres affects a major aspect of creative cognition, namely, originality. In order to examine the neural basis of originality, that is, the ability to produce statistically infrequent ideas, patients with localized lesions in the medial prefrontal cortex (mPFC), inferior frontal gyrus (IFG), and posterior parietal and temporal cortex (PC), were assessed by two tasks involving divergent thinking and originality. Results indicate that lesions in the mPFC involved the most profound impairment in originality. Furthermore, precise anatomical mapping of lesions indicated that while the extent of lesion in the right mPFC was associated with impaired originality, lesions in the left PC were associated with somewhat elevated levels of originality. A positive correlation between creativity scores and left PC lesions indicated that the larger the lesion is in this area the greater the originality. On the other hand, a negative correlation was observed between originality scores and lesions in the right mPFC. It is concluded that the right mPFC is part of a right fronto-parietal network which is responsible for producing original ideas. It is possible that more linear cognitive processing such as language, mediated by left hemisphere structures interferes with creative cognition. Therefore, lesions in the left hemisphere may be associated with elevated levels of originality.

  10. Evolving evolutionary algorithms using linear genetic programming.

    PubMed

    Oltean, Mihai

    2005-01-01

    A new model for evolving Evolutionary Algorithms is proposed in this paper. The model is based on the Linear Genetic Programming (LGP) technique. Every LGP chromosome encodes an EA which is used for solving a particular problem. Several Evolutionary Algorithms for function optimization, the Traveling Salesman Problem and the Quadratic Assignment Problem are evolved by using the considered model. Numerical experiments show that the evolved Evolutionary Algorithms perform similarly and sometimes even better than standard approaches for several well-known benchmarking problems.

  11. A Traffic Motion Object Extraction Algorithm

    NASA Astrophysics Data System (ADS)

    Wu, Shaofei

    2015-12-01

    A motion object extraction algorithm based on the active contour model is proposed. First, moving areas involving shadows are segmented with the classical background difference algorithm. Second, shadow detection and coarse removal are performed, and a grid method is then used to extract initial contours. Finally, the active contour model approach is adopted to compute the contour of the real object by iteratively tuning the parameter of the model. Experiments show the algorithm can remove the shadow and keep the integrity of a moving object.
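
    The first step, classical background difference, is nearly a one-liner in NumPy; a sketch with an arbitrary gray-level threshold (the shadow handling and active-contour refinement described above would operate on this initial mask):

        import numpy as np

        def motion_mask(frame, background, thresh=25):
            """Mark pixels whose gray value deviates from the background model."""
            diff = np.abs(frame.astype(int) - background.astype(int))
            return (diff > thresh).astype(np.uint8)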

  12. Genetic Algorithms Viewed as Anticipatory Systems

    NASA Astrophysics Data System (ADS)

    Mocanu, Irina; Kalisz, Eugenia; Negreanu, Lorina

    2010-11-01

    This paper proposes a new version of genetic algorithms: the anticipatory genetic algorithm (AGA). The performance evaluation included in the paper shows that AGA is superior to the traditional genetic algorithm in both speed and accuracy. The paper also presents how this algorithm can be applied to solve a complex problem: image annotation, intended to be used in content-based image retrieval systems.

  13. Algorithmic chemistry

    SciTech Connect

    Fontana, W.

    1990-12-13

    In this paper complex adaptive systems are defined by a self-referential loop in which objects encode functions that act back on these objects. A model for this loop is presented. It uses a simple recursive formal language, derived from the lambda-calculus, to provide a semantics that maps character strings into functions that manipulate symbols on strings. The interaction between two functions, or algorithms, is defined naturally within the language through function composition, and results in the production of a new function. An iterated map acting on sets of functions and a corresponding graph representation are defined. Their properties are useful to discuss the behavior of a fixed-size ensemble of randomly interacting functions. This "function gas", or "Turing gas", is studied under various conditions, and evolves cooperative interaction patterns of considerable intricacy. These patterns adapt under the influence of perturbations consisting of the addition of new random functions to the system. Different organizations emerge depending on the availability of self-replicators.

  14. A MEDLINE categorization algorithm

    PubMed Central

    Darmoni, Stefan J; Névéol, Aurelie; Renard, Jean-Marie; Gehanno, Jean-Francois; Soualmia, Lina F; Dahamna, Badisse; Thirion, Benoit

    2006-01-01

    Background Categorization is designed to enhance resource description by organizing content description so as to enable the reader to grasp quickly and easily what the main topics discussed in it are. The objective of this work is to propose a categorization algorithm to classify a set of scientific articles indexed with the MeSH thesaurus, and in particular those of the MEDLINE bibliographic database. In a large bibliographic database such as MEDLINE, finding materials of particular interest to a specialty group, or relevant to a particular audience, can be difficult. The categorization refines the retrieval of indexed material. In the CISMeF terminology, metaterms can be considered as super-concepts. They were primarily conceived to improve recall in the CISMeF quality-controlled health gateway. Methods The MEDLINE categorization algorithm (MCA) is based on semantic links existing between MeSH terms and metaterms on the one hand, and between MeSH subheadings and metaterms on the other hand. These links are used to automatically infer a list of metaterms from any MeSH term/subheading indexing. Medical librarians manually select the semantic links. Results The MEDLINE categorization algorithm lists the medical specialties relevant to a MEDLINE file in decreasing order of their importance. The MEDLINE categorization algorithm is available on a Web site and can run on any MEDLINE file in batch mode. As an example, the top three medical specialties for the set of 60 articles published in BioMed Central Medical Informatics & Decision Making, which are currently indexed in MEDLINE, are: information science, organization and administration, and medical informatics. Conclusion We have presented a MEDLINE categorization algorithm to classify the medical specialties addressed in any MEDLINE file in the form of a ranked list of relevant specialties. The categorization method introduced in this paper is based on the manual indexing of resources with MeSH (terms and subheadings).

  15. An algorithm on distributed mining association rules

    NASA Astrophysics Data System (ADS)

    Xu, Fan

    2005-12-01

    With the rapid development of the Internet/Intranet, distributed databases have become a broadly used environment in various areas. It is a critical task to mine association rules in distributed databases. The algorithms for distributed mining of association rules can be divided into two classes: DD algorithms and CD algorithms. A DD algorithm focuses on data partition optimization so as to enhance efficiency. A CD algorithm, on the other hand, considers a setting where the data is arbitrarily partitioned horizontally among the parties to begin with, and focuses on parallelizing the communication. A DD algorithm is not always applicable, however: at the time the data is generated, it is often already partitioned, and in many cases it cannot be gathered and repartitioned for reasons of security and secrecy, transmission cost, or sheer efficiency. A CD algorithm may be a more appealing solution for systems that are naturally distributed over large expanses, such as stock exchange and credit card systems. The FDM algorithm provides an enhancement to the CD algorithm. However, CD and FDM algorithms are both based on a net structure and execute on non-shareable resources. In practical applications, however, distributed databases are often star-structured. This paper proposes an algorithm based on star-structured networks, which are more practical in application, have lower maintenance costs, and are easier to construct. In addition, the algorithm provides high efficiency in communication and good extensibility in parallel computation.

  16. An improved conscan algorithm based on a Kalman filter

    NASA Technical Reports Server (NTRS)

    Eldred, D. B.

    1994-01-01

    Conscan is commonly used by DSN antennas to allow adaptive tracking of a target whose position is not precisely known. This article describes an algorithm that is based on a Kalman filter and is proposed to replace the existing fast Fourier transform based (FFT-based) algorithm for conscan. Advantages of this algorithm include better pointing accuracy, continuous update information, and accommodation of missing data. Additionally, a strategy for adaptive selection of the conscan radius is proposed. The performance of the algorithm is illustrated through computer simulations and compared to the FFT algorithm. The results show that the Kalman filter algorithm is consistently superior.
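
    The advantages listed above (continuous updates, tolerance of missing data) follow from the structure of the filter itself. A deliberately simplified scalar sketch tracking a drifting pointing offset from noisy measurements (the real conscan estimator works on the 2-D scan geometry; q and r are illustrative noise variances):

        import numpy as np

        def kalman_pointing(z, q=1e-4, r=1e-2):
            """Scalar random-walk Kalman filter over measurements z
            (NaN entries represent missing data and skip the update)."""
            x, p, estimates = 0.0, 1.0, []
            for zi in z:
                p += q                      # predict: process noise inflates variance
                if not np.isnan(zi):        # missing data: prediction only
                    k = p / (p + r)         # Kalman gain
                    x += k * (zi - x)
                    p *= 1.0 - k
                estimates.append(x)
            return np.array(estimates)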

  17. A optimized context-based adaptive binary arithmetic coding algorithm in progressive H.264 encoder

    NASA Astrophysics Data System (ADS)

    Xiao, Guang; Shi, Xu-li; An, Ping; Zhang, Zhao-yang; Gao, Ge; Teng, Guo-wei

    2006-05-01

    Context-based adaptive binary arithmetic coding (CABAC) is a new entropy coding method presented in H.264/AVC that is highly efficient in video coding. In this method, the probability of the current symbol is estimated using a carefully designed context model, which is adaptive and can approach the statistical characteristics of the data. An arithmetic coding mechanism then largely reduces the inter-symbol redundancy. Compared with the UVLC method in the prior standard, CABAC is complicated but efficiently reduces the bit rate. Based on a thorough analysis of the coding and decoding methods of CABAC, this paper proposes two methods, a sub-table method and a stream-reuse method, to improve the encoding efficiency as implemented in the H.264 JM code. In JM, the CABAC function produces the bits of every syntactic element one by one, and the repeated multiplications in the CABAC function make it inefficient; the proposed algorithm creates tables beforehand and then produces all bits of a syntactic element at once. Also in JM, the intra-prediction and inter-prediction mode selection algorithm, with its different criteria, is based on a rate-distortion optimization (RDO) model, and one of the parameters of the RDO model is the bit rate produced by the CABAC operator. After intra-prediction or inter-prediction mode selection, the CABAC stream is discarded and recalculated for the output stream. The proposed stream-reuse algorithm stores the stream created during mode selection in memory and reuses it in the encoding function. Experimental results show that our proposed algorithm can on average achieve 17 to 78 MSEL higher speed for QCIF and CIF sequences, respectively, compared with the original JM algorithm, at the cost of only a little memory space. The CABAC was realized in our progressive H.264 encoder.

  18. Correlation between P-wave morphology and origin of atrial focal tachycardia--insights from realistic models of the human atria and torso.

    PubMed

    Colman, Michael A; Aslanidi, Oleg V; Stott, Jonathan; Holden, Arun V; Zhang, Henggui

    2011-10-01

    Atrial arrhythmias resulting from abnormally rapid focal activity in the atria may be reflected in an altered P-wave morphology (PWM) in the ECG. Although clinically important, detailed relationships between PWM and origins of atrial focal excitations have not been established. To study such relationships, we developed computational models of the human atria and torso. The model simulation results were used to evaluate an extant clinical algorithm for locating the origin of atrial focal points from the ECG. The simulations showed that the algorithm was practical and could predict the atrial focal locations with 85% accuracy. We proposed a further refinement of the algorithm to distinguish between focal locations within the large atrial bundles.

  19. Cascaded Fresnel holographic image encryption scheme based on a constrained optimization algorithm and Henon map

    NASA Astrophysics Data System (ADS)

    Su, Yonggang; Tang, Chen; Chen, Xia; Li, Biyuan; Xu, Wenjun; Lei, Zhenkun

    2017-01-01

    We propose an image encryption scheme using chaotic phase masks and cascaded Fresnel transform holography based on a constrained optimization algorithm. In the proposed encryption scheme, the chaotic phase masks are generated by the Henon map, and the initial conditions and parameters of the Henon map serve as the main secret keys during the encryption and decryption process. With the help of multiple chaotic phase masks, the original image can be encrypted into the form of a hologram. The constrained optimization algorithm makes it possible to retrieve the original image from only a single-frame hologram. The use of chaotic phase masks makes key management and transmission very convenient. In addition, the geometric parameters of the optical system serve as additional keys, which improves the security level of the proposed scheme. Comprehensive security analysis performed on the proposed encryption scheme demonstrates that the scheme has high resistance against various potential attacks. Moreover, the proposed encryption scheme can be used to encrypt video information; simulations performed on a video in AVI format have also verified the feasibility of the scheme for video encryption.
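
    As a small illustration of how a Henon map can drive a chaotic phase mask, the sketch below iterates the classic Henon recurrence and maps the orbit to phases in [0, 2π); the initial conditions and parameters play the role of keys. The exact mask construction and key schedule of the paper may differ.

```python
import numpy as np

def henon_phase_mask(shape, x0=0.1, y0=0.3, a=1.4, b=0.3, burn_in=1000):
    """Build a unit-modulus phase mask exp(i*phi) from a Henon-map orbit.

    The initial conditions (x0, y0) and parameters (a, b) act as secret keys.
    """
    n = shape[0] * shape[1]
    x, y = x0, y0
    vals = np.empty(n)
    for k in range(burn_in + n):
        x, y = 1.0 - a * x * x + y, b * x     # classic Henon recurrence
        if k >= burn_in:
            vals[k - burn_in] = x
    # Normalize the chaotic sequence onto phases in [0, 2*pi).
    phi = 2 * np.pi * (vals - vals.min()) / (vals.max() - vals.min() + 1e-12)
    return np.exp(1j * phi.reshape(shape))

mask = henon_phase_mask((64, 64))
print(mask.shape, float(np.abs(mask).max()))   # phase-only: modulus is 1
```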

  20. QPSO-based adaptive DNA computing algorithm.

    PubMed

    Karakose, Mehmet; Cigdem, Ugur

    2013-01-01

    DNA (deoxyribonucleic acid) computing, a new computation model that uses DNA molecules for information storage, has been increasingly used for optimization and data analysis in recent years. However, the DNA computing algorithm has some limitations in terms of convergence speed, adaptability, and effectiveness. In this paper, a new approach for improving DNA computing is proposed. It aims to run the DNA computing algorithm with adaptive parameters towards the desired goal using quantum-behaved particle swarm optimization (QPSO). The contributions of the proposed QPSO-based adaptive DNA computing algorithm are as follows: (1) the population size, crossover rate, maximum number of operations, enzyme and virus mutation rates, and fitness function of the DNA computing algorithm are tuned simultaneously for the adaptive process; (2) the adaptive algorithm is performed using the QPSO algorithm for goal-driven progress, faster operation, and flexibility in data; and (3) a numerical realization of the DNA computing algorithm with the proposed approach is implemented for system identification. Two experiments with different systems were carried out to evaluate the performance of the proposed approach, with comparative results. Experimental results obtained with Matlab and FPGA demonstrate its ability to provide effective optimization, considerable convergence speed, and high accuracy relative to the standard DNA computing algorithm.

  1. DHCP Origin Traceback

    NASA Astrophysics Data System (ADS)

    Majumdar, Saugat; Kulkarni, Dhananjay; Ravishankar, Chinya V.

    Imagine that the DHCP server is under attack from malicious hosts in your network. How would you know where these DHCP packets are coming from, or which path they took in the network? This paper investigates the problem of determining the origin of a DHCP packet in a network. We propose a practical method for adding a new option field that does not violate any RFCs, which we believe should be a crucial requirement for any related solution. The new DHCP option will contain the ingress port and the switch MAC address. We recommend that this new option be added at the edge so that we can use the recorded value for performing traceback. The computational overhead of our solution is low, and the related network management overhead is low as well. We also address issues related to securing the field in order to maintain the privacy of switch MAC addresses, fragmentation of packets, and possible attack scenarios. Our study shows that the traceback scheme is effective and practical to use in most network environments.
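
    The abstract does not give the exact layout, so the sketch below is a hypothetical TLV-style encoding of the proposed field: a placeholder option code from the DHCP site-specific range, followed by a 2-byte ingress port and a 6-byte switch MAC. It only shows how such an option would be packed and parsed.

```python
import struct

# Hypothetical layout: 1-byte code, 1-byte length, 2-byte port, 6-byte MAC.
TRACEBACK_OPTION_CODE = 224   # placeholder from the DHCP site-specific range

def pack_traceback_option(ingress_port: int, switch_mac: str) -> bytes:
    mac_bytes = bytes(int(part, 16) for part in switch_mac.split(":"))
    payload = struct.pack("!H", ingress_port) + mac_bytes
    return struct.pack("!BB", TRACEBACK_OPTION_CODE, len(payload)) + payload

def unpack_traceback_option(option: bytes):
    code, _length = struct.unpack("!BB", option[:2])
    (port,) = struct.unpack("!H", option[2:4])
    mac = ":".join(f"{b:02x}" for b in option[4:10])
    return code, port, mac

opt = pack_traceback_option(17, "00:1a:2b:3c:4d:5e")
print(unpack_traceback_option(opt))   # (224, 17, '00:1a:2b:3c:4d:5e')
```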

  2. Fetal origins of cardiovascular disease.

    PubMed

    Barker, D J

    1999-04-01

    Low birthweight, thinness and short body length at birth are now known to be associated with increased rates of cardiovascular disease and non-insulin dependent diabetes in adult life. The fetal origins hypothesis proposes that these diseases originate through adaptations which the fetus makes when it is undernourished. These adaptations may be cardiovascular, metabolic or endocrine. They permanently change the structure and function of the body. Prevention of the diseases may depend on prevention of imbalances in fetal growth or imbalances between prenatal and postnatal growth, or imbalances in nutrient supply to the fetus.

  3. Cooperative Scheduling of Imaging Observation Tasks for High-Altitude Airships Based on Propagation Algorithm

    PubMed Central

    Chuan, He; Dishan, Qiu; Jin, Liu

    2012-01-01

    The cooperative scheduling problem on high-altitude airships for imaging observation tasks is discussed. A constraint programming model is established by analyzing the main constraints, taking the maximum task benefit and the minimum cruising distance as two optimization objectives. The cooperative scheduling problem of high-altitude airships is converted into a main problem and a subproblem by adopting a hierarchical architecture. The solution to the main problem constructs a preliminary matching between tasks and observation resources in order to reduce the search space of the original problem, and the solution to the subproblem detects the key nodes that each airship needs to fly through in sequence, so as to obtain the cruising path. Firstly, the task set is divided by using the k-core neighborhood growth cluster algorithm (K-NGCA). Then, a novel swarm intelligence algorithm named the propagation algorithm (PA) is combined with the key node search algorithm (KNSA) to optimize the cruising path of each airship and determine the execution time interval of each task. Meanwhile, this paper also provides the realization approach of the above algorithms and gives a detailed introduction to the encoding rules, search models, and propagation mechanism of the PA. Finally, the application results and comparison analysis show the proposed models and algorithms are effective and feasible. PMID:23365522

  4. Use of Algorithm of Changes for Optimal Design of Heat Exchanger

    NASA Astrophysics Data System (ADS)

    Tam, S. C.; Tam, H. K.; Chio, C. H.; Tam, L. M.

    2010-05-01

    For economic reasons, the optimal design of heat exchangers is required. The design of a heat exchanger is usually based on an iterative process involving the design conditions, equipment geometries, and the heat transfer and friction factor correlations. Using the traditional iterative method, many trials are needed to satisfy the compromise between heat exchange performance and cost, so the process is cumbersome and the optimal design often depends on the design engineer's experience. Therefore, in recent studies, many researchers, reviewed in [1], have applied the genetic algorithm (GA) [2] to heat exchanger design, with results that outperform the traditional method. In this study, an alternative approach, the algorithm of changes, is proposed for the optimal design of a shell-tube heat exchanger [3]. This new method, based on the I Ching, was developed originally by the author. In the algorithm, the hexagram operations of the I Ching are generalized to the binary-string case, and an iterative procedure imitating I Ching inference is defined. Following [3], the shell inside diameter, tube outside diameter, and baffle spacing were treated as the design (or optimized) variables, and the cost of the heat exchanger was taken as the objective function. The case study shows that the algorithm of changes is comparable to the GA method: both methods can find the optimal solution in a short time. However, without interchanging information between binary strings, the algorithm of changes has an advantage over the GA in parallel computation.

  5. Cooperative scheduling of imaging observation tasks for high-altitude airships based on propagation algorithm.

    PubMed

    Chuan, He; Dishan, Qiu; Jin, Liu

    2012-01-01

    The cooperative scheduling problem on high-altitude airships for imaging observation tasks is discussed. A constraint programming model is established by analyzing the main constraints, taking the maximum task benefit and the minimum cruising distance as two optimization objectives. The cooperative scheduling problem of high-altitude airships is converted into a main problem and a subproblem by adopting a hierarchical architecture. The solution to the main problem constructs a preliminary matching between tasks and observation resources in order to reduce the search space of the original problem, and the solution to the subproblem detects the key nodes that each airship needs to fly through in sequence, so as to obtain the cruising path. Firstly, the task set is divided by using the k-core neighborhood growth cluster algorithm (K-NGCA). Then, a novel swarm intelligence algorithm named the propagation algorithm (PA) is combined with the key node search algorithm (KNSA) to optimize the cruising path of each airship and determine the execution time interval of each task. Meanwhile, this paper also provides the realization approach of the above algorithms and gives a detailed introduction to the encoding rules, search models, and propagation mechanism of the PA. Finally, the application results and comparison analysis show the proposed models and algorithms are effective and feasible.

  6. A Modified Decision Tree Algorithm Based on Genetic Algorithm for Mobile User Classification Problem

    PubMed Central

    Liu, Dong-sheng; Fan, Shu-jiang

    2014-01-01

    In order to offer mobile customers better service, we should first classify mobile users. To address the limitations of previous classification methods, this paper puts forward a modified decision tree algorithm for mobile user classification, which introduces a genetic algorithm to optimize the results of the decision tree algorithm. We also take context information as a classification attribute for the mobile user, classifying the context into public and private context classes. We then analyze the processes and operators of the algorithm. Finally, we run an experiment on mobile user data: the algorithm classifies mobile users into Basic service, E-service, Plus service, and Total service classes, and it also yields some rules about the mobile users. Compared with the C4.5 decision tree and SVM algorithms, the proposed algorithm has higher accuracy and greater simplicity. PMID:24688389

  7. A modified decision tree algorithm based on genetic algorithm for mobile user classification problem.

    PubMed

    Liu, Dong-sheng; Fan, Shu-jiang

    2014-01-01

    In order to offer mobile customers better service, we should first classify mobile users. To address the limitations of previous classification methods, this paper puts forward a modified decision tree algorithm for mobile user classification, which introduces a genetic algorithm to optimize the results of the decision tree algorithm. We also take context information as a classification attribute for the mobile user, classifying the context into public and private context classes. We then analyze the processes and operators of the algorithm. Finally, we run an experiment on mobile user data: the algorithm classifies mobile users into Basic service, E-service, Plus service, and Total service classes, and it also yields some rules about the mobile users. Compared with the C4.5 decision tree and SVM algorithms, the proposed algorithm has higher accuracy and greater simplicity.

  8. Modifications to Axially Symmetric Simulations Using New DSMC (2007) Algorithms

    NASA Technical Reports Server (NTRS)

    Liechty, Derek S.

    2008-01-01

    Several modifications aimed at improving physical accuracy are proposed for solving axially symmetric problems building on the DSMC (2007) algorithms introduced by Bird. Originally developed to solve nonequilibrium, rarefied flows, the DSMC method is now regularly used to solve complex problems over a wide range of Knudsen numbers. These new algorithms include features such as nearest neighbor collisions excluding the previous collision partners, separate collision and sampling cells, automatically adaptive variable time steps, a modified no-time counter procedure for collisions, and discontinuous and event-driven physical processes. Axially symmetric solutions require radial weighting for the simulated molecules since the molecules near the axis represent fewer real molecules than those farther away from the axis due to the difference in volume of the cells. In the present methodology, these radial weighting factors are continuous, linear functions that vary with the radial position of each simulated molecule. It is shown that how one defines the number of tentative collisions greatly influences the mean collision time near the axis. The method by which the grid is treated for axially symmetric problems also plays an important role near the axis, especially for scalar pressure. A new method to treat how the molecules are traced through the grid is proposed to alleviate the decrease in scalar pressure at the axis near the surface. Also, a modification to the duplication buffer is proposed to vary the duplicated molecular velocities while retaining the molecular kinetic energy and axially symmetric nature of the problem.

  9. Analysis of the geophysical data using a posteriori algorithms

    NASA Astrophysics Data System (ADS)

    Voskoboynikova, Gyulnara; Khairetdinov, Marat

    2016-04-01

    The problems of monitoring, predicting, and preventing extraordinary natural and technogenic events are among the priorities of modern science. These events include earthquakes, volcanic eruptions, the lunar-solar tides, landslides, falling celestial bodies, explosions of utilized ammunition stockpiles, and the numerous quarry explosions in open coal mines that provoke technogenic earthquakes. Monitoring is based on a number of successive stages, which include remote registration of the event responses and measurement of the main parameters, such as the arrival times of seismic waves or the original waveforms. At the final stage, the inverse problems associated with determining the geographic location and time of the registered event are solved. Improving the accuracy of the parameter estimates from the original records under high noise is therefore an important problem. As is known, the main measurement errors arise from the influence of external noise, the difference between the real and model structures of the medium, imprecision in defining the time at the event epicenter, and instrumental errors. Therefore, a posteriori algorithms that are more accurate than the known algorithms are proposed and investigated. They are based on a combination of a discrete optimization method and a fractal approach for joint detection and estimation of arrival times in quasi-periodic waveform sequences in geophysical monitoring problems, with improved accuracy. Existing alternative approaches to these problems do not provide the required accuracy. The proposed algorithms are considered for the tasks of vibration sounding of the Earth during lunar and solar tides, and for the problem of monitoring the location of a borehole seismic source in industrial drilling.

  10. Regularizing common spatial patterns to improve BCI designs: unified theory and new algorithms.

    PubMed

    Lotte, Fabien; Guan, Cuntai

    2011-02-01

    One of the most popular feature extraction algorithms for brain-computer interfaces (BCI) is common spatial patterns (CSPs). Despite its known efficiency and widespread use, CSP is also known to be very sensitive to noise and prone to overfitting. To address this issue, it has been recently proposed to regularize CSP. In this paper, we present a simple and unifying theoretical framework to design such a regularized CSP (RCSP). We then present a review of existing RCSP algorithms and describe how to cast them in this framework. We also propose four new RCSP algorithms. Finally, we compare the performances of 11 different RCSP (including the four new ones and the original CSP), on electroencephalography data from 17 subjects, from BCI competition datasets. Results showed that the best RCSP methods can outperform CSP by nearly 10% in median classification accuracy and lead to more neurophysiologically relevant spatial filters. They also enable us to perform efficient subject-to-subject transfer. Overall, the best RCSP algorithms were CSP with Tikhonov regularization and weighted Tikhonov regularization, both proposed in this paper.
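
    For readers who want the flavor of one of the winning variants, a Tikhonov-regularized CSP can be sketched as the usual two-class generalized eigenproblem with a λI penalty added to the composite covariance; the data shapes and λ below are illustrative, and the paper's unified framework is not reproduced here.

```python
import numpy as np
from scipy.linalg import eigh

def rcsp_tikhonov(X1, X2, lam=0.1, n_filters=3):
    """Sketch of Tikhonov-regularized CSP.

    X1, X2: (trials, channels, samples) arrays for the two classes.
    The penalty lam discourages large filter norms (less overfitting).
    """
    def avg_cov(X):
        return np.mean([np.cov(trial) for trial in X], axis=0)

    C1, C2 = avg_cov(X1), avg_cov(X2)
    n = C1.shape[0]
    # Generalized eigenproblem: C1 w = mu (C1 + C2 + lam * I) w.
    mu, W = eigh(C1, C1 + C2 + lam * np.eye(n))
    order = np.argsort(mu)[::-1]            # leading filters favor class 1
    return W[:, order[:n_filters]]

rng = np.random.default_rng(1)
X1 = rng.normal(size=(20, 8, 256))          # toy EEG-like trials, class 1
X2 = 2.0 * rng.normal(size=(20, 8, 256))    # class 2 with larger variance
print(rcsp_tikhonov(X1, X2).shape)          # (channels, n_filters)
```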

  11. A genetic algorithm for the arrival probability in the stochastic networks.

    PubMed

    Shirdel, Gholam H; Abdolhosseinzadeh, Mohsen

    2016-01-01

    A genetic algorithm is presented to find the arrival probability in a directed acyclic network with stochastic parameters, which gives more reliable transmission flow in delay-sensitive networks. Some sub-networks are extracted from the original network, and a connection is established between the original source node and the original destination node by randomly selecting some local source and local destination nodes. The connections are sorted according to their arrival probabilities, and the best established connection is determined as the one with the maximum arrival probability. A discrete-time Markov chain is established on the network, and the arrival probability to a given destination node from a given source node is defined as the multi-step transition probability of absorption into the final state of the established Markov chain. The proposed method is applicable to large stochastic networks, where previous methods were not. The effectiveness of the proposed method is illustrated by numerical results with perfect fitness values of the proposed genetic algorithm.
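
    On a small chain, the arrival probability defined here, i.e. absorption of the walk in the destination state, can be computed exactly with the standard fundamental-matrix formula B = (I - Q)^-1 R; the toy transition matrix below is illustrative, not taken from the paper.

```python
import numpy as np

# Toy absorbing DTMC: states 0-2 are transient; state 3 (destination) and
# state 4 (loss) are absorbing. Canonical block form P = [[Q, R], [0, I]].
Q = np.array([[0.0, 0.6, 0.2],
              [0.0, 0.0, 0.5],
              [0.0, 0.0, 0.0]])
R = np.array([[0.1, 0.1],
              [0.4, 0.1],
              [0.7, 0.3]])

# Fundamental matrix N = (I - Q)^-1; absorption probabilities B = N @ R.
N = np.linalg.inv(np.eye(3) - Q)
B = N @ R
print("arrival probability from source 0 to the destination:", B[0, 0])
```

    The genetic algorithm in the paper searches over sub-network connections precisely because this direct computation becomes impractical on large networks.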

  12. Hybrid Simulated Annealing and Genetic Algorithms for Industrial Production Management Problems

    NASA Astrophysics Data System (ADS)

    Vasant, Pandian; Barsoum, Nader

    2009-08-01

    This paper describes the origin and significant contribution of the Hybrid Simulated Annealing and Genetic Algorithms (HSAGA) approach to finding global optima. HSAGA provides an insightful approach to solving complex optimization problems. The method combines the meta-heuristic approaches of simulated annealing and novel genetic algorithms to solve a non-linear objective function with uncertain technical coefficients in industrial production management problems. The proposed hybrid method is designed to search for the global optimum of the non-linear objective function and for the best feasible solutions of the decision variables. Simulated experiments were carried out rigorously to reflect the advantages of the proposed method. A description of the method and a computational experiment with the MATLAB technical tool are presented, and an industrial production management optimization problem is solved using the HSAGA technique. The results are very promising.
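
    The abstract does not spell out the HSAGA operators, so the sketch below shows one generic way to hybridize the two metaheuristics: a GA generation loop in which a simulated-annealing Metropolis test decides whether a mutated offspring replaces its parent, with the temperature cooling each generation. The objective, operators, and schedule are illustrative assumptions.

```python
import math
import random

random.seed(42)

def fitness(x):
    # Illustrative non-linear objective (Rastrigin, minimization).
    return sum(xi * xi - 10 * math.cos(2 * math.pi * xi) + 10 for xi in x)

def crossover(a, b):
    return [random.choice(pair) for pair in zip(a, b)]

def mutate(x, scale=0.3):
    return [xi + random.gauss(0, scale) for xi in x]

pop = [[random.uniform(-5, 5) for _ in range(4)] for _ in range(30)]
T = 5.0
for _ in range(300):
    next_pop = []
    for parent in pop:
        child = mutate(crossover(parent, random.choice(pop)))
        delta = fitness(child) - fitness(parent)
        # SA-style Metropolis acceptance inside the GA generation loop.
        if delta < 0 or random.random() < math.exp(-delta / T):
            next_pop.append(child)
        else:
            next_pop.append(parent)
    pop = next_pop
    T *= 0.99                     # geometric cooling schedule

print(round(fitness(min(pop, key=fitness)), 3))
```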

  13. Optical double image security using random phase fractional Fourier domain encoding and phase-retrieval algorithm

    NASA Astrophysics Data System (ADS)

    Rajput, Sudheesh K.; Nishchal, Naveen K.

    2017-04-01

    We propose a novel security scheme based on the double random phase fractional domain encoding (DRPE) and modified Gerchberg-Saxton (G-S) phase retrieval algorithm for securing two images simultaneously. Any one of the images to be encrypted is converted into a phase-only image using modified G-S algorithm and this function is used as a key for encrypting another image. The original images are retrieved employing the concept of known-plaintext attack and following the DRPE decryption steps with all correct keys. The proposed scheme is also used for encryption of two color images with the help of convolution theorem and phase-truncated fractional Fourier transform. With some modification, the scheme is extended for simultaneous encryption of gray-scale and color images. As a proof-of-concept, simulation results have been presented for securing two gray-scale images, two color images, and simultaneous gray-scale and color images.

  14. A novel chaotic image encryption algorithm using block scrambling and dynamic index based diffusion

    NASA Astrophysics Data System (ADS)

    Xu, Lu; Gou, Xu; Li, Zhi; Li, Jian

    2017-04-01

    In this paper, we propose a novel chaotic image encryption algorithm which involves a block image scrambling scheme and a new dynamic index based diffusion scheme. Firstly, the original image is divided into two equal blocks in the vertical or horizontal direction. Then, we use the chaos matrix to construct the X coordinate, Y coordinate, and swapping control tables. By searching the X coordinate and Y coordinate tables, the swapping position of the pixel being processed is located, and the swapping control table determines whether the pixel is swapped within the current block or with the other block. Finally, the dynamic index scheme is applied to the diffusion of the scrambled image. The simulation results and performance analysis show that the proposed algorithm achieves excellent security performance with only one round.

  15. An integrated optimal control algorithm for discrete-time nonlinear stochastic system

    NASA Astrophysics Data System (ADS)

    Kek, Sie Long; Lay Teo, Kok; Mohd Ismail, A. A.

    2010-12-01

    Consider a discrete-time nonlinear system with random disturbances appearing in the real plant and the output channel where the randomly perturbed output is measurable. An iterative procedure based on the linear quadratic Gaussian optimal control model is developed for solving the optimal control of this stochastic system. The optimal state estimate provided by Kalman filtering theory and the optimal control law obtained from the linear quadratic regulator problem are then integrated into the dynamic integrated system optimisation and parameter estimation algorithm. The iterative solutions of the optimal control problem for the model obtained converge to the solution of the original optimal control problem of the discrete-time nonlinear system, despite model-reality differences, when the convergence is achieved. An illustrative example is solved using the method proposed. The results obtained show the effectiveness of the algorithm proposed.

  16. A Winner Determination Algorithm for Combinatorial Auctions Based on Hybrid Artificial Fish Swarm Algorithm

    NASA Astrophysics Data System (ADS)

    Zheng, Genrang; Lin, ZhengChun

    The problem of winner determination in combinatorial auctions is a hotspot in electronic business and an NP-hard problem. A Hybrid Artificial Fish Swarm Algorithm (HAFSA), which combines the First Suite Heuristic Algorithm (FSHA) and the Artificial Fish Swarm Algorithm (AFSA), is proposed to solve the problem after analyzing it on the basis of AFSA theory. Experimental results show that HAFSA is a rapid and efficient algorithm for the winner determination problem. Compared with the ant colony optimization algorithm, it shows good performance and broad application prospects.

  17. Algorithm Animation with Galant.

    PubMed

    Stallmann, Matthias F

    2017-01-01

    Although surveys suggest positive student attitudes toward the use of algorithm animations, it is not clear that they improve learning outcomes. The Graph Algorithm Animation Tool, or Galant, challenges and motivates students to engage more deeply with algorithm concepts, without distracting them with programming language details or GUIs. Even though Galant is specifically designed for graph algorithms, it has also been used to animate other algorithms, most notably sorting algorithms.

  18. An algorithm for haplotype analysis

    SciTech Connect

    Lin, Shili; Speed, T.P.

    1997-12-01

    This paper proposes an algorithm for haplotype analysis based on a Monte Carlo method. Haplotype configurations are generated according to the distribution of joint haplotypes of individuals in a pedigree given their phenotype data, via a Markov chain Monte Carlo algorithm. The haplotype configuration which maximizes this conditional probability distribution can thus be estimated. In addition, the set of haplotype configurations with relatively high probabilities can also be estimated as possible alternatives to the most probable one. This flexibility enables geneticists to choose the haplotype configurations which are most reasonable to them, allowing them to include their knowledge of the data under analysis. 18 refs., 2 figs., 1 tab.

  19. Improved algorithm for calculating the Chandrasekhar function

    NASA Astrophysics Data System (ADS)

    Jablonski, A.

    2013-02-01

    number of abscissas N. (4) For Romberg quadrature, to optimize the performance, the mixed algorithm C was proposed, in which algorithm A is used for arguments x smaller than or equal to x0 = 0.4, while algorithm B is used for x larger than 0.4 [1]. For Gauss-Legendre quadrature, the limit x0 was found to depend on the number of abscissas N. For each value of N considered, the time of calculation of the H function was determined for pairs of arguments uniformly distributed in the ranges 0 <= x <= 0.05, 0 <= omega <= 1 and in the ranges 0.05 <= x <= 1, 0 <= omega <= 1. The comparison of running times for N = 64 (Fig. 2, where open circles mark argument pairs for which algorithm B is faster and full circles those for which algorithm A is faster) shows that algorithm A is faster for x smaller than or equal to 0.0225; thus the value x0 = 0.0225 is proposed for the mixed algorithm C when Gauss-Legendre quadrature with N = 64 is used. Similar computer experiments performed for other values of N are summarized below (the flag L is one of the input parameters of the subroutine GAUSS):

        L    N    x0
        1    16   0.25
        2    20   0.15
        3    24   0.10
        4    32   0.050
        5    40   0.030
        6    48   0.045
        7    64   0.0225 (recommended)
        8    80   0.0125
        9    96   0.020

    In the programs implementing algorithms A, B, and C (CHANDRA, CHANDRB, and CHANDRC), Gauss-Legendre quadrature with N = 64 is currently set; as follows from Fig. 1, algorithm B (and consequently algorithm C) is the fastest in that case. It is still possible to change the number of abscissas: the flag L then has to be modified in lines 165, 169, 185, 189, and 304 of program CHANDRAS_v2, and the value of x0 in line 111 has to be adjusted according to the table above. (5) The above modifications of the code did not affect the accuracy of the calculated Chandrasekhar function, as compared to the original code [1]. For the pairs of arguments shown in Fig. 2, the accuracy of the H function, calculated from algorithms A and B, reached at

  20. An Adaptive Unified Differential Evolution Algorithm for Global Optimization

    SciTech Connect

    Qiang, Ji; Mitchell, Chad

    2014-11-03

    In this paper, we propose a new adaptive unified differential evolution algorithm for single-objective global optimization. Instead of the multiple mutation strategies proposed in conventional differential evolution algorithms, this algorithm employs a single equation unifying multiple strategies into one expression. It has the virtue of mathematical simplicity and also provides users the flexibility for broader exploration of the space of mutation operators. By making all control parameters in the proposed algorithm self-adaptively evolve during the process of optimization, it frees the application users from the burden of choosing appropriate control parameters and also improves the performance of the algorithm. In numerical tests using thirteen basic unimodal and multimodal functions, the proposed adaptive unified algorithm shows promising performance in comparison to several conventional differential evolution algorithms.

  1. Bedside Bleeding Control, Review Paper and Proposed Algorithm

    PubMed Central

    Simman, Richard; Reynolds, David; Saad, Sharon

    2013-01-01

    Bleeding is a common occurrence in practice, but occasionally it may be a challenging issue to overcome. It can come from numerous sources, such as trauma, intra- or post-surgical intervention, disorders of platelets and coagulation factors, increased fibrinolysis, wounds, and cancers. This paper was inspired by our experience with a patient admitted to a local long-term acute care facility with a large fungating, cancerous right breast wound. During her hospital stay, spontaneous bleeding from the cancerous breast mass was encountered and became more frequent and significant over the period of her stay. Different hemostatic technologies were used to control her bleeding. We felt that it was important to share our experience with our colleagues to help with the similar situations that they may face. PMID:24527382

  2. A modified WTC algorithm for the Painlevé test of nonlinear variable-coefficient PDEs

    NASA Astrophysics Data System (ADS)

    Zhao, Yin-Long; Liu, Yin-Ping; Li, Zhi-Bin

    2009-11-01

    A modified WTC algorithm for the Painlevé test of nonlinear PDEs with variable coefficients is proposed. Compared to Kruskal's simplification algorithm, the modified algorithm further simplifies, to some extent, the computation in the third step of the Painlevé test for variable-coefficient PDEs. Two examples illustrate the proposed modified algorithm.

  3. Monte Carlo algorithm for free energy calculation.

    PubMed

    Bi, Sheng; Tong, Ning-Hua

    2015-07-01

    We propose a Monte Carlo algorithm for the free energy calculation based on configuration space sampling. An upward or downward temperature scan can be used to produce F(T). We implement this algorithm for the Ising model on a square lattice and triangular lattice. Comparison with the exact free energy shows an excellent agreement. We analyze the properties of this algorithm and compare it with the Wang-Landau algorithm, which samples in energy space. This method is applicable to general classical statistical models. The possibility of extending it to quantum systems is discussed.

  4. Approximate learning algorithm in Boltzmann machines.

    PubMed

    Yasuda, Muneki; Tanaka, Kazuyuki

    2009-11-01

    Boltzmann machines can be regarded as Markov random fields. For binary cases, they are equivalent to the Ising spin model in statistical mechanics. Learning in Boltzmann machines is one of the NP-hard problems, so in general we have to use approximate methods to construct practical learning algorithms in this context. In this letter, we propose new and practical learning algorithms for Boltzmann machines using the belief propagation algorithm and the linear response approximation, which are often referred to as advanced mean field methods. Finally, we show the validity of our algorithm using numerical experiments.

  5. Agricultural origins: centers and noncenters.

    PubMed

    Harlan, J R

    1971-10-29

    I propose the theory that agriculture originated independently in three different areas and that, in each case, there was a system composed of a center of origin and a noncenter, in which activities of domestication were dispersed over a span of 5,000 to 10,000 kilometers. One system includes a definable Near East center and a noncenter in Africa; another system includes a North Chinese center and a noncenter in Southeast Asia and the South Pacific; the third system includes a Mesoamerican center and a South American noncenter. There are suggestions that, in each case, the center and noncenter interact with each other. Crops did not necessarily originate in centers (in any conventional concept of the term), nor did agriculture necessarily develop in a geographical "center."

  6. 76 FR 2892 - Proposed Subsequent Arrangement

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-01-18

    ... was originally obtained by Nippon Nuclear Fuel Development Co., Ltd from Martin Marietta Energy... From the Federal Register Online via the Government Publishing Office DEPARTMENT OF ENERGY... of Energy. ACTION: Proposed subsequent arrangement. SUMMARY: This notice is being issued under...

  7. Improved algorithm of ray tracing in ICF cryogenic targets

    NASA Astrophysics Data System (ADS)

    Zhang, Rui; Yang, Yongying; Ling, Tong; Jiang, Jiabin

    2016-10-01

    High-precision ray tracing inside inertial confinement fusion (ICF) cryogenic targets plays an important role in the reconstruction of the three-dimensional density distribution by the algebraic reconstruction technique (ART) algorithm. The traditional Runge-Kutta method, restricted by the precision of the grid division and the step size of the ray tracing, cannot compute accurately where the refractive index jumps. In this paper, we propose an improved ray-tracing algorithm based on the Runge-Kutta method and Snell's law of refraction to achieve high tracing precision. At refractive-index boundaries, we apply Snell's law of refraction and a contact-point search algorithm to ensure the accuracy of the simulation; inside the cryogenic target, a combination of the Runge-Kutta method and a self-adaptive step algorithm is employed for the computation. The original refractive-index data used to mesh the target can be obtained by experimental measurement or from an a priori refractive-index distribution function. A finite difference method is used to calculate the refractive-index gradient at the mesh nodes, and distance-weighted average interpolation is used to obtain the refractive index and its gradient at each point in space. In the simulation, we take an ideal ICF target, a Luneberg lens, and a graded-index rod as simulation models to calculate the spot diagram and the wavefront map. Comparison of the simulation results with Zemax shows that the improved ray-tracing algorithm based on the fourth-order Runge-Kutta method and Snell's law of refraction exhibits high accuracy: the relative error of the spot diagram is 0.2%, and the peak-to-valley (PV) and root-mean-square (RMS) errors of the wavefront map are less than λ/35 and λ/100, respectively.
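
    The paper's contact-point search and adaptive stepping are not reproduced here, but the core Runge-Kutta ray step through a smooth graded-index region can be illustrated compactly. The sketch integrates the ray equation d/ds(n dr/ds) = ∇n for a Luneburg-type profile n² = 2 − |r|², one of the test models mentioned above; the step size and entry ray are arbitrary choices.

```python
import numpy as np

def n_of(r):
    """Luneburg-type graded index: n^2 = 2 - |r|^2 inside the unit lens."""
    return np.sqrt(2.0 - r @ r)

def grad_n(r):
    return -r / n_of(r)               # gradient of sqrt(2 - |r|^2)

def deriv(state):
    r, T = state[:2], state[2:]       # T = n * dr/ds (optical ray momentum)
    return np.concatenate([T / n_of(r), grad_n(r)])

def rk4_step(state, h):
    k1 = deriv(state)
    k2 = deriv(state + 0.5 * h * k1)
    k3 = deriv(state + 0.5 * h * k2)
    k4 = deriv(state + h * k3)
    return state + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

# Ray entering just inside the unit lens, parallel to the x-axis at height 0.5.
r0 = np.array([-0.86, 0.5])
T0 = n_of(r0) * np.array([1.0, 0.0])
state = np.concatenate([r0, T0])
for _ in range(6000):
    state = rk4_step(state, 5e-4)
    if state[:2] @ state[:2] >= 1.0:  # ray has reached the lens rim
        break
print("exit point:", state[:2])       # Luneburg lens bends the ray toward (1, 0)
```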

  8. A Proposal for the Upgrade of the Muon Drift Tubes Trigger for the CMS Experiment at the HL-LHC

    NASA Astrophysics Data System (ADS)

    Pozzobon, Nicola; Zotto, Pierluigi; Montecassiano, Fabio

    2016-11-01

    A major upgrade of the readout and trigger electronics of the CMS Drift Tubes muon detector is foreseen in order to allow its efficient operation at the High Luminosity LHC. A proposal for a new L1 Trigger Primitives Generator for this detector is presented, featuring an algorithm operating on the time of charge collection measurements provided by the asynchronous readout of the new TDC system being developed. The algorithm is being designed around the implementation in state-of-the-art FPGA devices of the original development of a Compact Hough Transform (CHT) algorithm combined with a Majority Mean-Timer, to identify both the parent bunch crossing and the muon track parameters. The current state of the design is presented along with the performance requirements, focusing on the future developments.

  9. An enhanced algorithm for multiple sequence alignment of protein sequences using genetic algorithm

    PubMed Central

    Kumar, Manish

    2015-01-01

    One of the most fundamental operations in biological sequence analysis is multiple sequence alignment (MSA). The basic multiple sequence alignment problem is to determine the most biologically plausible alignment of protein or DNA sequences. In this paper, an alignment method using a genetic algorithm for multiple sequence alignment is proposed. Two genetic operators, crossover and mutation, were defined and implemented with the proposed method in order to study the population evolution and the quality of the aligned sequences. The proposed method is assessed on a protein benchmark dataset, BALIBASE, by comparing the obtained results to those obtained with other alignment algorithms, e.g., SAGA, RBT-GA, PRRP, HMMT, SB-PIMA, CLUSTALX, CLUSTAL W, DIALIGN and PILEUP8. Experiments on a wide range of data have shown that the proposed algorithm is much better (in terms of score) than previously proposed algorithms in its ability to achieve high alignment quality. PMID:27065770

  10. Expectation-maximization algorithms for learning a finite mixture of univariate survival time distributions from partially specified class values

    SciTech Connect

    Lee, Youngrok

    2013-05-15

    Heterogeneity exists in a data set when samples from different classes are merged into it. Finite mixture models can represent a survival time distribution on a heterogeneous patient group by the proportions of each class and by the survival time distribution within each class. The heterogeneous data set cannot be explicitly decomposed into homogeneous subgroups unless all the samples are precisely labeled by their origin classes; this impossibility of decomposition is a barrier to overcome in estimating finite mixture models. The expectation-maximization (EM) algorithm has been used to obtain maximum likelihood estimates of finite mixture models by soft decomposition of heterogeneous samples without labels for a subset or the entire set of data. In medical surveillance databases we can find partially labeled data; that is, while not completely unlabeled, there is only imprecise information about class values. In this study we propose new EM algorithms that take advantage of such partial labels and thus incorporate more information than traditional EM algorithms. We propose four variants of the EM algorithm, named EM-OCML, EM-PCML, EM-HCML and EM-CPCML, each of which assumes a specific mechanism of missing class values. We conducted a simulation study on exponential survival trees with five classes and showed that the advantages of incorporating a substantial amount of partially labeled data can be highly significant. We also showed that model selection based on AIC values works fairly well in selecting the best proposed algorithm for each specific data set. A case study on a real-world gastric cancer data set provided by the Surveillance, Epidemiology and End Results (SEER) program showed the superiority of EM-CPCML not only to the other proposed EM algorithms but also to conventional supervised, unsupervised and semi-supervised learning algorithms.
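
    The partially-labeled variants are the thesis's contribution and are not reproduced here, but the fully-unlabeled baseline they extend — EM on a two-component exponential mixture of survival times, without censoring — fits in a few lines:

```python
import numpy as np

rng = np.random.default_rng(7)
# Synthetic heterogeneous survival times from two latent classes.
t = np.concatenate([rng.exponential(1.0, 300), rng.exponential(5.0, 200)])

pi = np.array([0.5, 0.5])   # class proportions (initial guess)
mu = np.array([0.5, 2.0])   # exponential means (initial guess)

for _ in range(200):
    # E-step: responsibilities from densities f(t) = exp(-t/mu) / mu.
    dens = np.exp(-t[:, None] / mu) / mu            # shape (n, 2)
    resp = pi * dens
    resp /= resp.sum(axis=1, keepdims=True)
    # M-step: re-estimate proportions and class means from soft labels.
    nk = resp.sum(axis=0)
    pi = nk / len(t)
    mu = (resp * t[:, None]).sum(axis=0) / nk

print("proportions:", pi.round(3), "means:", mu.round(3))
```

    The proposed EM-OCML, EM-PCML, EM-HCML, and EM-CPCML variants would modify the E-step so that the available partial labels constrain the responsibilities instead of leaving them fully free.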

  11. Origins of GEMS Grains

    NASA Technical Reports Server (NTRS)

    Messenger, S.; Walker, R. M.

    2012-01-01

    Interplanetary dust particles (IDPs) collected in the Earth's stratosphere contain high abundances of submicrometer amorphous silicates known as GEMS grains. From their birth as condensates in the outflows of oxygen-rich evolved stars, through processing in interstellar space, to incorporation into disks around new stars, amorphous silicates predominate in most astrophysical environments. Amorphous silicates were a major building block of our Solar System and are prominent in infrared spectra of comets. Anhydrous IDPs thought to derive from comets contain abundant amorphous silicates known as GEMS (glass with embedded metal and sulfides) grains. GEMS grains have been proposed to be isotopically and chemically homogenized interstellar amorphous silicate dust. We evaluated this hypothesis through coordinated chemical and isotopic analyses of GEMS grains in a suite of IDPs to constrain their origins. GEMS grains show order-of-magnitude variations in Mg, Fe, Ca, and S abundances. GEMS grains do not match the average element abundances inferred for ISM dust, containing, on average, too little Mg, Fe, and Ca, and too much S. GEMS grains have compositions complementary to the crystalline components in IDPs, suggesting that they formed from the same reservoir. We did not observe any unequivocal microstructural or chemical evidence that GEMS grains experienced prolonged exposure to radiation. We identified four GEMS grains having O isotopic compositions that point to origins in red giant branch or asymptotic giant branch stars and supernovae. Based on their O isotopic compositions, we estimate that 1-6% of GEMS grains are surviving circumstellar grains. The remaining 94-99% of GEMS grains have O isotopic compositions that are indistinguishable from terrestrial materials and carbonaceous chondrites. These isotopically solar GEMS grains either formed in the Solar System or were completely homogenized in the interstellar medium (ISM). However, the

  12. A Learning Algorithm for Multimodal Grammar Inference.

    PubMed

    D'Ulizia, A; Ferri, F; Grifoni, P

    2011-12-01

    The high costs of development and maintenance of multimodal grammars in integrating and understanding input in multimodal interfaces lead to the investigation of novel algorithmic solutions in automating grammar generation and in updating processes. Many algorithms for context-free grammar inference have been developed in the natural language processing literature. An extension of these algorithms toward the inference of multimodal grammars is necessary for multimodal input processing. In this paper, we propose a novel grammar inference mechanism that allows us to learn a multimodal grammar from its positive samples of multimodal sentences. The algorithm first generates the multimodal grammar that is able to parse the positive samples of sentences and, afterward, makes use of two learning operators and the minimum description length metrics in improving the grammar description and in avoiding the over-generalization problem. The experimental results highlight the acceptable performances of the algorithm proposed in this paper since it has a very high probability of parsing valid sentences.

  13. Fast algorithm of byte-to-byte wavelet transform for image compression applications

    NASA Astrophysics Data System (ADS)

    Pogrebnyak, Oleksiy B.; Sossa Azuela, Juan H.; Ramirez, Pablo M.

    2002-11-01

    A new fast algorithm for the 2D DWT is presented. The algorithm operates on byte-represented images and performs the image transformation with the second-order Cohen-Daubechies-Feauveau wavelet, using the lifting scheme for the calculations. The proposed algorithm is based on the "checkerboard" computation scheme for the non-separable 2D wavelet. The problem of data extension near the image borders is resolved by computing a 1D Haar wavelet in the vicinity of the borders. With the checkerboard splitting, only one detail image is produced at each level of decomposition, which simplifies the further analysis for data compression. The calculations are rather simple, without any floating-point operations, allowing the implementation of the designed algorithm on fixed-point DSP processors for fast, near-real-time processing. The proposed algorithm does not possess perfect restoration of the processed data because of the rounding introduced at each level of decomposition/restoration to perform operations on byte-represented data. The designed algorithm was tested on different images. The criterion used to estimate the quality of the restored images quantitatively was the well-known PSNR; for visual quality estimation, error maps between the original and restored images were calculated. The obtained simulation results show that the visual and quantitative quality of the restored images degrades as the number of decomposition levels increases but is sufficiently high even after six levels. The introduced distortions are concentrated in the vicinity of high-spatial-activity details and are absent in homogeneous regions. The designed algorithm can be used for lossy image compression and in noise suppression applications.
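
    The non-separable checkerboard scheme with byte rounding described above is more involved, but the underlying predict/update pattern is easy to show in one dimension. Below is an integer lifting pass for the 5/3 (second-order Cohen-Daubechies-Feauveau) wavelet; unlike the byte-rounded scheme in the paper, pure integer lifting is exactly reversible.

```python
def lift_53_forward(x):
    """One level of integer 5/3 lifting on a 1-D, even-length signal."""
    even, odd = x[0::2], x[1::2]
    n = len(odd)
    # Predict: detail = odd sample minus the average of neighboring evens.
    d = [odd[i] - (even[i] + even[min(i + 1, len(even) - 1)]) // 2
         for i in range(n)]
    # Update: approximation = even sample plus a quarter of neighboring details.
    s = [even[i] + (d[max(i - 1, 0)] + d[min(i, n - 1)] + 2) // 4
         for i in range(len(even))]
    return s, d

def lift_53_inverse(s, d):
    n = len(d)
    even = [s[i] - (d[max(i - 1, 0)] + d[min(i, n - 1)] + 2) // 4
            for i in range(len(s))]
    odd = [d[i] + (even[i] + even[min(i + 1, len(even) - 1)]) // 2
           for i in range(n)]
    out = []
    for e, o in zip(even, odd):
        out.extend([e, o])
    return out

x = [3, 7, 1, 8, 2, 9, 4, 6]
s, d = lift_53_forward(x)
assert lift_53_inverse(s, d) == x   # integer lifting: perfect reconstruction
print(s, d)
```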

  14. Novel and efficient tag SNPs selection algorithms.

    PubMed

    Chen, Wen-Pei; Hung, Che-Lun; Tsai, Suh-Jen Jane; Lin, Yaw-Ling

    2014-01-01

    SNPs are the most abundant form of genetic variation amongst species, and association studies between complex diseases and SNPs or haplotypes have received great attention. However, these studies are restricted by the cost of genotyping all SNPs; thus, it is necessary to find smaller subsets, or tag SNPs, representing the rest of the SNPs. In fact, the existing tag SNP selection algorithms are notoriously time-consuming. Here, an efficient algorithm for tag SNP selection is presented and applied to analyze the HapMap YRI data. The experimental results show that the proposed algorithm achieves better performance than the existing tag SNP selection algorithms; in most cases, it is at least ten times faster than the existing methods, and in many cases, when the redundant ratio of the block is high, it can even be thousands of times faster. Tools and web services for haplotype block analysis, integrated via the Hadoop MapReduce framework, are also developed using the proposed algorithm as the computation kernel.

  15. Improved wavefront reconstruction algorithm from slope measurements

    NASA Astrophysics Data System (ADS)

    Phuc, Phan Huy; Manh, Nguyen The; Rhee, Hyug-Gyo; Ghim, Young-Sik; Yang, Ho-Soon; Lee, Yun-Woo

    2017-03-01

    In this paper, we propose a wavefront reconstruction algorithm from slope measurements based on a zonal method. In this algorithm, the slope measurement sampling geometry used is the Southwell geometry, in which the phase values and the slope data are measured at the same nodes. The proposed algorithm estimates the phase value at a node point using the slope measurements of eight points around the node, as doing so is believed to result in better accuracy with regard to the wavefront. For optimization of the processing time, a successive over-relaxation method is applied to iteration loops. We use a trial-and-error method to determine the best relaxation factor for each type of wavefront in order to optimize the iteration time and, thus, the processing time of the algorithm. Specifically, for a circularly symmetric wavefront, the convergence rate of the algorithm can be improved by using the result of a Fourier Transform as an initial value for the iteration. Various simulations are presented to demonstrate the improvements realized when using the proposed algorithm. Several experimental measurements of deflectometry are also processed by using the proposed algorithm.
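
    A minimal zonal reconstruction on the Southwell grid can be written as an SOR iteration in which every node is pulled toward the phase implied by each neighbor's value and the averaged slope between them. The paper's eight-point estimate is simplified here to the classic four-neighbor update, and the relaxation factor is an arbitrary choice rather than the tuned one described above.

```python
import numpy as np

def southwell_sor(sx, sy, h=1.0, omega=1.7, iters=2000):
    """Zonal wavefront reconstruction from x/y slopes (Southwell geometry)."""
    ny, nx = sx.shape
    phi = np.zeros((ny, nx))
    for _ in range(iters):
        for i in range(ny):
            for j in range(nx):
                est, cnt = 0.0, 0
                if j > 0:        # estimate implied by the left neighbor
                    est += phi[i, j-1] + h * (sx[i, j-1] + sx[i, j]) / 2; cnt += 1
                if j < nx - 1:   # right neighbor
                    est += phi[i, j+1] - h * (sx[i, j+1] + sx[i, j]) / 2; cnt += 1
                if i > 0:        # lower neighbor
                    est += phi[i-1, j] + h * (sy[i-1, j] + sy[i, j]) / 2; cnt += 1
                if i < ny - 1:   # upper neighbor
                    est += phi[i+1, j] - h * (sy[i+1, j] + sy[i, j]) / 2; cnt += 1
                phi[i, j] += omega * (est / cnt - phi[i, j])
        phi -= phi.mean()        # piston is unobservable from slopes
    return phi

# Check on a quadratic wavefront phi = x^2 + y^2 with analytic slopes.
y, x = np.mgrid[0:16, 0:16].astype(float)
phi_true = x**2 + y**2
rec = southwell_sor(2 * x, 2 * y)
print(np.abs(rec - (phi_true - phi_true.mean())).max())   # ~0 after convergence
```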

  16. A joint image encryption and watermarking algorithm based on compressive sensing and chaotic map

    NASA Astrophysics Data System (ADS)

    Xiao, Di; Cai, Hong-Kun; Zheng, Hong-Ying

    2015-06-01

    In this paper, a compressive sensing (CS) and chaotic map-based joint image encryption and watermarking algorithm is proposed. The transform domain coefficients of the original image are scrambled by Arnold map firstly. Then the watermark is adhered to the scrambled data. By compressive sensing, a set of watermarked measurements is obtained as the watermarked cipher image. In this algorithm, watermark embedding and data compression can be performed without knowing the original image; similarly, watermark extraction will not interfere with decryption. Due to the characteristics of CS, this algorithm features compressible cipher image size, flexible watermark capacity, and lossless watermark extraction from the compressed cipher image as well as robustness against packet loss. Simulation results and analyses show that the algorithm achieves good performance in the sense of security, watermark capacity, extraction accuracy, reconstruction, robustness, etc. Project supported by the Open Research Fund of Chongqing Key Laboratory of Emergency Communications, China (Grant No. CQKLEC, 20140504), the National Natural Science Foundation of China (Grant Nos. 61173178, 61302161, and 61472464), and the Fundamental Research Funds for the Central Universities, China (Grant Nos. 106112013CDJZR180005 and 106112014CDJZR185501).

  17. The Geometric Cluster Algorithm: Rejection-Free Monte Carlo Simulation of Complex Fluids

    NASA Astrophysics Data System (ADS)

    Luijten, Erik

    2005-03-01

    The study of complex fluids is an area of intense research activity, in which exciting and counter-intuitive behaviors continue to be uncovered. Ironically, one of the very factors responsible for such interesting properties, namely the presence of multiple relevant time and length scales, often greatly complicates the accurate theoretical calculations and computer simulations that could explain the observations. We have recently developed a new Monte Carlo simulation method [J. Liu and E. Luijten, Phys. Rev. Lett. 92, 035504 (2004); see also Physics Today, March 2004, pp. 25-27] that overcomes this problem for several classes of complex fluids. Our approach can accelerate simulations by orders of magnitude by introducing nonlocal, collective moves of the constituents. Strikingly, these cluster Monte Carlo moves are proposed in such a manner that the algorithm is rejection-free. The identification of the clusters is based upon geometric symmetries and can be considered the off-lattice generalization of the widely used Swendsen-Wang and Wolff algorithms for lattice spin models. While originally phrased for complex fluids governed by the Boltzmann distribution, the geometric cluster algorithm can be used to efficiently sample configurations from an arbitrary underlying distribution function and may thus be applied in a variety of other areas. In addition, I will briefly discuss various extensions of the original algorithm, including methods to influence the size of the clusters that are generated and ways to introduce density fluctuations.

  18. Visualizing output for a data learning algorithm

    NASA Astrophysics Data System (ADS)

    Carson, Daniel; Graham, James; Ternovskiy, Igor

    2016-05-01

    This paper details the process we went through to visualize the output of our data learning algorithm. We have been developing a hierarchical self-structuring learning algorithm based on the general principles of the LaRue model. One proposed application of this algorithm is traffic analysis, chosen because it is conceptually easy to follow and because a significant amount of existing data and related research material is available to work with. While we chose the tracking of vehicles for our initial approach, it is by no means the only target of our algorithm; flexibility is the end goal. However, we still needed somewhere to start. To that end, this paper details our creation of the visualization GUI for our algorithm, the features we included, and the initial results we obtained from running our algorithm on a few of the traffic-based scenarios we designed.

  19. Runtime support for parallelizing data mining algorithms

    NASA Astrophysics Data System (ADS)

    Jin, Ruoming; Agrawal, Gagan

    2002-03-01

    With recent technological advances, shared memory parallel machines have become more scalable and offer large main memories and high bus bandwidths. They are emerging as good platforms for data warehousing and data mining. In this paper, we focus on shared memory parallelization of data mining algorithms. We have developed a series of techniques for the parallelization of data mining algorithms, including full replication, full locking, fixed locking, optimized full locking, and cache-sensitive locking. Unlike previous work on shared memory parallelization of specific data mining algorithms, all of our techniques apply to a large number of common data mining algorithms. In addition, we propose a reduction-object-based interface for specifying a data mining algorithm, and we show how our runtime system can apply any of the techniques we have developed starting from a common specification of the algorithm.

  20. Research on palmprint identification method based on quantum algorithms.

    PubMed

    Li, Hui; Zhang, Zhanzhan

    2014-01-01

    Quantum image recognition is a technology that uses quantum algorithms to process image information; it can obtain better results than classical algorithms. In this paper, four different quantum algorithms are used in the three stages of palmprint recognition. First, a quantum adaptive median filtering algorithm is presented for palmprint filtering; comparison shows that it achieves a better filtering result than the classical algorithm. Next, the quantum Fourier transform (QFT) is used to extract pattern features in only one operation, thanks to quantum parallelism. The proposed algorithm exhibits an exponential speed-up compared with the discrete Fourier transform in the feature extraction. Finally, quantum set operations and Grover's algorithm are used in palmprint matching. According to the experimental results, the quantum algorithm needs only on the order of the square root of N operations to find the target palmprint, whereas the traditional method needs N calculations. At the same time, the matching accuracy of the quantum algorithm is almost 100%.

  1. Research on Palmprint Identification Method Based on Quantum Algorithms

    PubMed Central

    Zhang, Zhanzhan

    2014-01-01

    Quantum image recognition is a technology that uses quantum algorithms to process image information; it can obtain better results than classical algorithms. In this paper, four different quantum algorithms are used in the three stages of palmprint recognition. First, a quantum adaptive median filtering algorithm is presented for palmprint filtering; comparison shows that it achieves a better filtering result than the classical algorithm. Next, the quantum Fourier transform (QFT) is used to extract pattern features in only one operation, thanks to quantum parallelism. The proposed algorithm exhibits an exponential speed-up compared with the discrete Fourier transform in the feature extraction. Finally, quantum set operations and Grover's algorithm are used in palmprint matching. According to the experimental results, the quantum algorithm needs only on the order of the square root of N operations to find the target palmprint, whereas the traditional method needs N calculations. At the same time, the matching accuracy of the quantum algorithm is almost 100%. PMID:25105165

  2. Performance Comparison Of Evolutionary Algorithms For Image Clustering

    NASA Astrophysics Data System (ADS)

    Civicioglu, P.; Atasever, U. H.; Ozkan, C.; Besdok, E.; Karkinli, A. E.; Kesikoglu, A.

    2014-09-01

    Evolutionary computation tools are able to process real-valued numerical sets in order to extract suboptimal solutions to a designed problem. Data clustering algorithms have been used intensively for image segmentation in remote sensing applications. Despite the wide usage of evolutionary algorithms for data clustering, their clustering performance has been scarcely studied using clustering validation indexes. In this paper, recently proposed evolutionary algorithms (the Artificial Bee Colony Algorithm (ABC), Gravitational Search Algorithm (GSA), Cuckoo Search Algorithm (CS), Adaptive Differential Evolution Algorithm (JADE), Differential Search Algorithm (DSA) and Backtracking Search Optimization Algorithm (BSA)) and some classical image clustering techniques (k-means, FCM, SOM networks) are used to cluster images, and their performances are compared using four clustering validation indexes. Experimental results show that the evolutionary algorithms give more reliable cluster centers than the classical clustering techniques, but their convergence time is quite long.

  3. Enhanced beam-steering-based diagonal beamforming algorithm for binaural hearing support devices.

    PubMed

    Lee, Jun Chang; Nam, Kyoung Won; Cho, Kyeongwon; Lee, Sangmin; Kim, Dongwook; Hong, Sung Hwa; Jang, Dong Pyo; Kim, In Young

    2014-07-01

    In order to improve speech intelligibility for hearing-impaired people in various listening situations, it is necessary to diversify the possible focusing directions of a beamformer. In a previous report, the concept of binaural beam-steering that can focus a beamformer in diagonal directions was applied to a binaural hearing aid; however, in the previously proposed protocol, the effective frequency range for consistent diagonal beam-steering was limited to 200-750 Hz, which is far narrower than that of normal speech signals (200-4000 Hz). In this study, we proposed a modified binaural diagonal beam-steering technique that reduces the focusing-direction deviations at high input frequencies up to 4000 Hz by introducing into the original protocol a new correction factor that reduces the differences in gradient between the signal and the noise components. In simulation tests, the focusing effect of the proposed algorithm was more consistent than that of conventional algorithms. The deviations between the target and the focusing directions were reduced by 27% in the left device and 6% in the right device with 45° steering at a 4000 Hz input signal, and by 3% in the left device and 25% in the right device with 135° steering. On the basis of the experimental results, we believe that the proposed algorithm has the potential to help hearing-impaired people in various listening situations.

  4. Memetic algorithm for community detection in networks.

    PubMed

    Gong, Maoguo; Fu, Bao; Jiao, Licheng; Du, Haifeng

    2011-11-01

    Community structure is one of the most important properties in networks, and community detection has received an enormous amount of attention in recent years. Modularity is by far the most used and best known quality function for measuring the quality of a partition of a network, and many community detection algorithms are developed to optimize it. However, there is a resolution limit problem in modularity optimization methods. In this study, a memetic algorithm, named Meme-Net, is proposed to optimize another quality function, modularity density, which includes a tunable parameter that allows one to explore the network at different resolutions. Our proposed algorithm is a synergy of a genetic algorithm with a hill-climbing strategy as the local search procedure. Experiments on computer-generated and real-world networks show the effectiveness and the multiresolution ability of the proposed method.
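
    For illustration, the sketch below is a generic memetic skeleton in Python: a genetic algorithm whose offspring are refined by a one-bit-flip hill climb, run on a toy OneMax objective. It mirrors the GA-plus-local-search synergy described above, but it is not Meme-Net itself, which optimizes modularity density on networks.

        import random

        def memetic_optimize(fitness, n_bits=20, pop_size=30, generations=50):
            # Generic memetic skeleton: GA with bit-flip hill climbing as local search.
            def hill_climb(ind):
                best, best_f = ind[:], fitness(ind)
                for i in range(len(ind)):
                    ind[i] ^= 1                      # flip one bit
                    f = fitness(ind)
                    if f > best_f:
                        best, best_f = ind[:], f
                    ind[i] ^= 1                      # undo the flip
                return best

            pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
            for _ in range(generations):
                pop.sort(key=fitness, reverse=True)
                parents = pop[:pop_size // 2]
                children = []
                while len(children) < pop_size - len(parents):
                    a, b = random.sample(parents, 2)
                    cut = random.randrange(1, n_bits)   # one-point crossover
                    child = a[:cut] + b[cut:]
                    if random.random() < 0.1:           # mutation
                        child[random.randrange(n_bits)] ^= 1
                    children.append(hill_climb(child))  # Lamarckian local refinement
                pop = parents + children
            return max(pop, key=fitness)

        best = memetic_optimize(lambda ind: sum(ind))   # toy OneMax objective
        print(sum(best))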

  5. A Novel Algorithm Combining Finite State Method and Genetic Algorithm for Solving Crude Oil Scheduling Problem

    PubMed Central

    Duan, Qian-Qian; Yang, Gen-Ke; Pan, Chang-Chun

    2014-01-01

    A hybrid optimization algorithm combining the finite state method (FSM) and a genetic algorithm (GA) is proposed to solve the crude oil scheduling problem. The FSM and GA are combined to take advantage of each method and to compensate for the deficiencies of the other. In the proposed algorithm, the finite state method makes up for the GA's weak local search ability, and the heuristic returned by the FSM can guide the GA toward good solutions. The idea behind this is that promising substructures or partial solutions can be generated by the FSM. Furthermore, the FSM can guarantee that the entire solution space is covered uniformly. Therefore, the combination of the two algorithms has better global performance than either the GA or the FSM operated individually. Finally, a real-life crude oil scheduling problem from the literature is used for simulation. The experimental results validate that the proposed method outperforms the state-of-the-art GA method. PMID:24772031

  6. A novel algorithm combining finite state method and genetic algorithm for solving crude oil scheduling problem.

    PubMed

    Duan, Qian-Qian; Yang, Gen-Ke; Pan, Chang-Chun

    2014-01-01

    A hybrid optimization algorithm combining the finite state method (FSM) and a genetic algorithm (GA) is proposed to solve the crude oil scheduling problem. The FSM and GA are combined to take advantage of each method and to compensate for the deficiencies of the other. In the proposed algorithm, the finite state method makes up for the GA's weak local search ability, and the heuristic returned by the FSM can guide the GA toward good solutions. The idea behind this is that promising substructures or partial solutions can be generated by the FSM. Furthermore, the FSM can guarantee that the entire solution space is covered uniformly. Therefore, the combination of the two algorithms has better global performance than either the GA or the FSM operated individually. Finally, a real-life crude oil scheduling problem from the literature is used for simulation. The experimental results validate that the proposed method outperforms the state-of-the-art GA method.

  7. The BR eigenvalue algorithm

    SciTech Connect

    Geist, G.A.; Howell, G.W.; Watkins, D.S.

    1997-11-01

    The BR algorithm, a new method for calculating the eigenvalues of an upper Hessenberg matrix, is introduced. It is a bulge-chasing algorithm like the QR algorithm, but, unlike the QR algorithm, it is well adapted to computing the eigenvalues of the narrowband, nearly tridiagonal matrices generated by the look-ahead Lanczos process. This paper describes the BR algorithm and gives numerical evidence that it works well in conjunction with the Lanczos process. On the biggest problems run so far, the BR algorithm beats the QR algorithm by a factor of 30--60 in computing time and a factor of over 100 in matrix storage space.

  8. Origins of magnetospheric physics

    SciTech Connect

    Van Allen, J.A.

    1983-01-01

    The history of the scientific investigation of the Earth's magnetosphere during the period 1946-1960 is reviewed, with a focus on satellite missions leading to the discovery of the inner and outer radiation belts. Chapters are devoted to ground-based studies of the Earth's magnetic field through the 1930s, the first U.S. rocket flights carrying scientific instruments, the rockoon flights from the polar regions (1952-1957), U.S. planning for the scientific use of artificial satellites (1956), the launch of Sputnik I (1957), the discovery of the inner belt by Explorers I and III (1958), the Argus high-altitude atomic-explosion tests (1958), the confirmation of the inner belt and discovery of the outer belt by Explorer IV and Pioneers I-V, related studies by Sputniks II and III and Luniks I-III, and the observational and theoretical advances of 1959-1961. Photographs, drawings, diagrams, graphs, and copies of original notes and research proposals are provided. 227 references.

  9. Origin of Neutron Stars

    NASA Astrophysics Data System (ADS)

    Brecher, K.

    1999-12-01

    The origin of the concept of neutron stars can be traced to two brief, incredibly insightful publications. Work on the earlier paper by Lev Landau (Phys. Z. Sowjetunion, 1, 285, 1932) actually predated the discovery of neutrons. Nonetheless, Landau arrived at the notion of a collapsed star with the density of a nucleus (really a "nucleus star") and demonstrated (at about the same time as, and independent of, Chandrasekhar) that there is an upper mass limit for dense stellar objects of about 1.5 solar masses. Perhaps even more remarkable is the abstract of a talk presented at the December 1933 meeting of the American Physical Society, published by Walter Baade and Fritz Zwicky in 1934 (Phys. Rev. 45, 138). It followed the discovery of the neutron by just over a year. Their report, which was about the same length as the present abstract: (1) invented the concept and the word supernova; (2) suggested that cosmic rays are produced by supernovae; and (3) in the authors' own words, proposed "with all reserve ... the view that supernovae represent the transitions from ordinary stars to neutron stars, which in their final stages consist of extremely closely packed neutrons." The abstract by Baade and Zwicky probably contains the highest density of new, important (and correct) ideas in high-energy astrophysics ever published in a single paper. In this talk, we will discuss some of the facts and myths surrounding these two publications.

  10. User-scheduling algorithm for a MU-MIMO system

    NASA Astrophysics Data System (ADS)

    Yu, Haiyang; Choi, Jaeho

    2015-12-01

    A user-scheduling algorithm for MU-MIMO systems is presented in this paper. The algorithm is a codebook-based precoding method suitable for the IEEE 802.16m mobile broadband standard. The proposed algorithm can effectively improve the sum capacity and the fairness among users.

  11. Newton Algorithms for Analytic Rotation: An Implicit Function Approach

    ERIC Educational Resources Information Center

    Boik, Robert J.

    2008-01-01

    In this paper implicit function-based parameterizations for orthogonal and oblique rotation matrices are proposed. The parameterizations are used to construct Newton algorithms for minimizing differentiable rotation criteria applied to "m" factors and "p" variables. The speed of the new algorithms is compared to that of existing algorithms and to…

  12. Region labeling algorithm based on boundary tracking for binary image

    NASA Astrophysics Data System (ADS)

    Chen, Li; Yang, Yang; Cen, Zhaofeng; Li, Xiaotong

    2010-11-01

    Region labeling for binary images is an important part of image processing. For the particular case of labeling small and multiple objects, a new region labeling algorithm based on boundary tracking is proposed in this paper. Experiments show that our algorithm is feasible and efficient, and even faster than some other algorithms.

  13. Applications of Spectral Gradient Algorithm for Solving Matrix ℓ2,1-Norm Minimization Problems in Machine Learning

    PubMed Central

    Xiao, Yunhai; Wang, Qiuyu; Liu, Lihong

    2016-01-01

    The main purpose of this study is to propose, analyze, and test a spectral gradient algorithm for solving a convex minimization problem. The considered problem covers the matrix ℓ2,1-norm regularized least squares problem, which is widely used in multi-task learning for capturing the joint features among tasks. To solve the problem, we first minimize a quadratic approximation of the objective function to derive a search direction at the current iteration. We show that this direction descends automatically and reduces to the original spectral gradient direction if the regularized term is removed. Second, we incorporate a nonmonotone line search along this direction to improve the algorithm's numerical performance. Furthermore, we show that the proposed algorithm converges to a critical point under some mild conditions. The attractive feature of the proposed algorithm is that it is easy to implement and only requires the gradient of the smooth function and the objective function's values at each step. Finally, we perform experiments on synthetic data, which verify that the proposed algorithm works quite well and performs better than the compared ones. PMID:27861526
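
    The heart of any spectral gradient method is the Barzilai-Borwein (BB) step length. The sketch below shows a plain BB iteration on a smooth quadratic; it omits the paper's nonmonotone line search and the handling of the nonsmooth ℓ2,1 term, so it is only a minimal illustration of the step-length rule.

        import numpy as np

        def spectral_gradient(grad, x0, iters=100, eps=1e-8):
            # Plain Barzilai-Borwein iteration: alpha_k = (s's)/(s'y), a "spectral"
            # step length built from the last displacement s and gradient change y.
            x = x0.astype(float)
            g = grad(x)
            alpha = 1.0
            for _ in range(iters):
                x_new = x - alpha * g
                g_new = grad(x_new)
                s, y = x_new - x, g_new - g
                denom = s @ y
                alpha = (s @ s) / denom if abs(denom) > eps else 1.0  # BB1 step
                x, g = x_new, g_new
                if np.linalg.norm(g) < eps:
                    break
            return x

        # Quadratic test: minimize 0.5*x'Ax - b'x, whose gradient is Ax - b
        A = np.array([[3.0, 1.0], [1.0, 2.0]])
        b = np.array([1.0, 1.0])
        x = spectral_gradient(lambda x: A @ x - b, np.zeros(2))
        print(x, np.linalg.solve(A, b))   # both approx [0.2, 0.4]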

  14. Partially constrained ant colony optimization algorithm for the solution of constrained optimization problems: Application to storm water network design

    NASA Astrophysics Data System (ADS)

    Afshar, M. H.

    2007-04-01

    This paper exploits the unique feature of the Ant Colony Optimization Algorithm (ACOA), namely its incremental solution-building mechanism, to develop partially constrained ACO algorithms for the solution of optimization problems with explicit constraints. The method is based on providing a tabu list for each ant at each decision point of the problem so that some constraints of the problem are satisfied. The application of the method to the problem of storm water network design is formulated and presented. The network nodes are considered as the decision points, and the nodal elevations of the network are used as the decision variables of the optimization problem. Two partially constrained ACO algorithms are formulated and applied to a benchmark example of storm water network design, and the results are compared with those of the original unconstrained algorithm and existing methods. In the first algorithm the positive slope constraints are satisfied explicitly and the rest are satisfied by using the penalty method, while in the second the constraints on the maximum ratio of flow depth to diameter are also satisfied explicitly via the tabu list. The method is shown to be very effective and efficient in locating optimal solutions and in the convergence characteristics of the resulting ACO algorithms. The proposed algorithms are also shown to be relatively insensitive to the initial colony used, compared to the original algorithm. Furthermore, the method proves itself capable of finding an optimal or near-optimal solution, independent of the discretisation level and the size of the colony used.

  15. Modern human origins: progress and prospects.

    PubMed Central

    Stringer, Chris

    2002-01-01

    The question of the mode of origin of modern humans (Homo sapiens) has dominated palaeoanthropological debate over the last decade. This review discusses the main models proposed to explain modern human origins, and examines relevant fossil evidence from Eurasia, Africa and Australasia. Archaeological and genetic data are also discussed, as well as problems with the concept of 'modernity' itself. It is concluded that a recent African origin can be supported for H. sapiens, morphologically, behaviourally and genetically, but that more evidence will be needed, both from Africa and elsewhere, before an absolute African origin for our species and its behavioural characteristics can be established and explained. PMID:12028792

  16. Robust digital image-in-image watermarking algorithm using the fast Hadamard transform

    NASA Astrophysics Data System (ADS)

    Ho, Anthony T. S.; Shen, Jun; Tan, Soon H.

    2003-01-01

    In this paper, we propose a robust image-in-image watermarking algorithm based on the fast Hadamard transform (FHT) for the copyright protection of digital images. Most current research makes use of a normally distributed random vector as a watermark, where the watermark can only be detected by cross-correlating the received coefficients with the watermark generated by a secret key and then comparing against an experimentally determined threshold value. In contrast, the FHT image-in-image method involves a "blind" watermarking process that retrieves the watermark without requiring the original image. In the proposed approach, a number of pseudorandomly selected 8×8 sub-blocks of the original image and a watermark image are decomposed into Hadamard coefficients. To increase the invisibility of the watermark, a visual model based on original image characteristics, such as edges and textures, is incorporated to determine the watermarking strength factor. All the AC Hadamard coefficients of the watermark image are scaled by the watermarking strength factor and inserted into several middle- and high-frequency AC components of the Hadamard coefficients from the sub-blocks of the original image. To further increase the reliability of the watermarking against common geometric distortions, such as rotation and scaling, a post-processing technique is proposed: identifying the type of distortion provides a means to apply a reversal of the attack on the watermarked image, enabling restoration of the synchronization of the embedding positions. The performance of the proposed algorithm is evaluated using Stirmark. The experiment uses a container image of size 512×512×8 bits and a watermark image of size 64×64×8 bits; the scheme survives about 60% of all Stirmark attacks. The simplicity of the Hadamard transform offers a significant advantage in shorter processing time and ease of hardware implementation compared with the commonly used DCT and DWT techniques.
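
    The fast Hadamard transform at the core of the scheme is a standard butterfly recursion; a minimal Python version is sketched below (the transform only, not the watermark embedding). The unnormalized transform applied twice returns the input scaled by its length.

        import numpy as np

        def fwht(a):
            # Fast Walsh-Hadamard transform via butterflies; length must be a power of two.
            a = np.asarray(a, dtype=float).copy()
            n, h = len(a), 1
            while h < n:
                for i in range(0, n, 2 * h):
                    for j in range(i, i + h):
                        a[j], a[j + h] = a[j] + a[j + h], a[j] - a[j + h]
                h *= 2
            return a

        x = np.array([1.0, 0.0, 1.0, 0.0, 0.0, 1.0, 1.0, 0.0])
        X = fwht(x)
        print(np.allclose(fwht(X) / len(x), x))   # unnormalized FWHT is self-inverse up to 1/n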

  17. Study of sequential optimal control algorithm smart isolation structure based on Simulink-S function

    NASA Astrophysics Data System (ADS)

    Liu, Xiaohuan; Liu, Yanhui

    2017-01-01

    This paper focuses on a smart isolation structure. A method for realizing structural vibration control in Simulink simulation is proposed based on the proposed sequential optimal control algorithm. In the Simulink simulation environment, a smart isolation structure is used to compare the control effects of three algorithms, i.e., the classical optimal control algorithm, the linear quadratic Gaussian control algorithm, and the sequential optimal control algorithm, under the condition of a sensor contaminated with noise. Simulation results show that this method can be applied to the simulation of the sequential optimal control algorithm and that the proposed sequential optimal control algorithm has good noise resistance and better control efficiency.

  18. Birkhoffian symplectic algorithms derived from Hamiltonian symplectic algorithms

    NASA Astrophysics Data System (ADS)

    Xin-Lei, Kong; Hui-Bin, Wu; Feng-Xiang, Mei

    2016-01-01

    In this paper, we focus on the construction of structure-preserving algorithms for Birkhoffian systems, based on existing symplectic schemes for the Hamiltonian equations. The key of the method is to seek an invertible transformation which reduces the Birkhoffian equations to the Hamiltonian equations. When such a transformation exists, applying the corresponding inverse map to a symplectic discretization of the Hamiltonian equations yields difference schemes that are verified to be Birkhoffian symplectic for the original Birkhoffian equations. To illustrate the operation of the method, we construct several desirable algorithms for the linear damped oscillator and for the single pendulum with linear dissipation, respectively. All of them exhibit excellent numerical behavior, especially in preserving conserved quantities. Project supported by the National Natural Science Foundation of China (Grant No. 11272050), the Excellent Young Teachers Program of North China University of Technology (Grant No. XN132), and the Construction Plan for Innovative Research Team of North China University of Technology (Grant No. XN129).

  19. Fourier Lucas-Kanade algorithm.

    PubMed

    Lucey, Simon; Navarathna, Rajitha; Ashraf, Ahmed Bilal; Sridharan, Sridha

    2013-06-01

    In this paper, we propose a framework for both gradient descent image and object alignment in the Fourier domain. Our method centers upon the classical Lucas & Kanade (LK) algorithm where we represent the source and template/model in the complex 2D Fourier domain rather than in the spatial 2D domain. We refer to our approach as the Fourier LK (FLK) algorithm. The FLK formulation is advantageous when one preprocesses the source image and template/model with a bank of filters (e.g., oriented edges, Gabor, etc.) as 1) it can handle substantial illumination variations, 2) the inefficient preprocessing filter bank step can be subsumed within the FLK algorithm as a sparse diagonal weighting matrix, 3) unlike traditional LK, the computational cost is invariant to the number of filters and as a result is far more efficient, and 4) this approach can be extended to the Inverse Compositional (IC) form of the LK algorithm where nearly all steps (including Fourier transform and filter bank preprocessing) can be precomputed, leading to an extremely efficient and robust approach to gradient descent image matching. Further, these computational savings translate to nonrigid object alignment tasks that are considered extensions of the LK algorithm, such as those found in Active Appearance Models (AAMs).

  20. A Unified Differential Evolution Algorithm for Global Optimization

    SciTech Connect

    Qiang, Ji; Mitchell, Chad

    2014-06-24

    In this paper, we propose a new unified differential evolution (uDE) algorithm for single-objective global optimization. Instead of selecting among multiple mutation strategies as in the conventional differential evolution algorithm, this algorithm employs a single equation as the mutation strategy. It has the virtue of mathematical simplicity and also provides users the flexibility for broader exploration of different mutation strategies. Numerical tests using twelve basic unimodal and multimodal functions show promising performance of the proposed algorithm in comparison to conventional differential evolution algorithms.
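
    The abstract does not reproduce the single unified mutation equation, so the sketch below shows the conventional DE/rand/1/bin baseline that uDE generalizes; the mutation line pop[r1] + F * (pop[r2] - pop[r3]) is the strategy-specific part that uDE replaces with one unified expression.

        import numpy as np

        def de_rand_1_bin(f, bounds, pop_size=20, F=0.8, CR=0.9, gens=200, seed=0):
            # Conventional DE/rand/1/bin: mutation, binomial crossover, greedy selection.
            rng = np.random.default_rng(seed)
            lo, hi = bounds
            dim = len(lo)
            pop = rng.uniform(lo, hi, size=(pop_size, dim))
            fit = np.array([f(x) for x in pop])
            for _ in range(gens):
                for i in range(pop_size):
                    r1, r2, r3 = rng.choice([j for j in range(pop_size) if j != i],
                                            3, replace=False)
                    mutant = pop[r1] + F * (pop[r2] - pop[r3])   # the mutation strategy
                    cross = rng.random(dim) < CR
                    cross[rng.integers(dim)] = True              # at least one gene crosses
                    trial = np.clip(np.where(cross, mutant, pop[i]), lo, hi)
                    ft = f(trial)
                    if ft <= fit[i]:                             # greedy selection
                        pop[i], fit[i] = trial, ft
            return pop[np.argmin(fit)], fit.min()

        sphere = lambda x: float(np.sum(x ** 2))                 # unimodal test function
        best, val = de_rand_1_bin(sphere, (np.full(5, -5.0), np.full(5, 5.0)))
        print(val)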

  1. Two New PRP Conjugate Gradient Algorithms for Minimization Optimization Models.

    PubMed

    Yuan, Gonglin; Duan, Xiabin; Liu, Wenjie; Wang, Xiaoliang; Cui, Zengru; Sheng, Zhou

    2015-01-01

    Two new PRP conjugate gradient algorithms are proposed in this paper based on two modified PRP conjugate gradient methods: the first algorithm is proposed for solving unconstrained optimization problems, and the second for solving nonlinear equations. The first method uses two kinds of information: function values and gradient values. Both methods possess the following good properties: (1) βk ≥ 0; (2) the search direction has the trust-region property without the use of any line search method; (3) the search direction has the sufficient descent property without the use of any line search method. Under some suitable conditions, we establish the global convergence of the two algorithms. We conduct numerical experiments to evaluate our algorithms. The numerical results indicate that the first algorithm is effective and competitive for solving unconstrained optimization problems and that the second algorithm is effective for solving large-scale nonlinear equations.

  2. Two New PRP Conjugate Gradient Algorithms for Minimization Optimization Models

    PubMed Central

    Yuan, Gonglin; Duan, Xiabin; Liu, Wenjie; Wang, Xiaoliang; Cui, Zengru; Sheng, Zhou

    2015-01-01

    Two new PRP conjugate gradient algorithms are proposed in this paper based on two modified PRP conjugate gradient methods: the first algorithm is proposed for solving unconstrained optimization problems, and the second for solving nonlinear equations. The first method uses two kinds of information: function values and gradient values. Both methods possess the following good properties: (1) βk ≥ 0; (2) the search direction has the trust-region property without the use of any line search method; (3) the search direction has the sufficient descent property without the use of any line search method. Under some suitable conditions, we establish the global convergence of the two algorithms. We conduct numerical experiments to evaluate our algorithms. The numerical results indicate that the first algorithm is effective and competitive for solving unconstrained optimization problems and that the second algorithm is effective for solving large-scale nonlinear equations. PMID:26502409
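
    A minimal sketch of a PRP-type iteration appears below, with the βk ≥ 0 property enforced by clipping (the PRP+ convention) and a fixed step length standing in for the paper's line-search-free rules, which are not given in the abstract.

        import numpy as np

        def prp_cg(grad, x0, iters=200, tol=1e-8, step=0.1):
            # PRP conjugate gradient with beta clipped at zero, so beta_k >= 0 always.
            # A fixed step is used here purely for simplicity of the sketch.
            x = x0.astype(float)
            g = grad(x)
            d = -g
            for _ in range(iters):
                x_new = x + step * d
                g_new = grad(x_new)
                beta = max(0.0, g_new @ (g_new - g) / (g @ g))   # PRP+ formula
                d = -g_new + beta * d
                x, g = x_new, g_new
                if np.linalg.norm(g) < tol:
                    break
            return x

        # Quadratic test: gradient of 0.5*x'Ax - b'x is Ax - b; minimizer is [1.0, 0.5]
        A = np.diag([1.0, 4.0])
        b = np.array([1.0, 2.0])
        print(prp_cg(lambda x: A @ x - b, np.zeros(2)))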

  3. A robust fuzzy local information C-Means clustering algorithm.

    PubMed

    Krinidis, Stelios; Chatzis, Vassilios

    2010-05-01

    This paper presents a variation of the fuzzy c-means (FCM) algorithm that provides image clustering. The proposed algorithm incorporates local spatial information and gray level information in a novel fuzzy way. The new algorithm is called fuzzy local information c-means (FLICM). FLICM can overcome the disadvantages of the known fuzzy c-means algorithms and at the same time enhance the clustering performance. The major characteristic of FLICM is the use of a fuzzy local (both spatial and gray level) similarity measure, aiming to guarantee noise insensitivity and image detail preservation. Furthermore, the proposed algorithm is fully free of the empirically adjusted parameters (a, λg, λs, etc.) incorporated into all other fuzzy c-means algorithms proposed in the literature. Experiments performed on synthetic and real-world images show that the FLICM algorithm is effective and efficient, providing robustness to noisy images.
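
    As a point of reference, the sketch below implements the standard FCM membership and center updates on 1-D data; FLICM modifies the distance term with a local spatial/gray-level fuzzy factor, which is omitted here.

        import numpy as np

        def fcm(X, c=2, m=2.0, iters=100, seed=0):
            # Standard fuzzy c-means: alternate center and membership updates.
            rng = np.random.default_rng(seed)
            U = rng.random((c, len(X)))
            U /= U.sum(axis=0)                          # memberships sum to 1 per point
            for _ in range(iters):
                W = U ** m
                V = (W @ X) / W.sum(axis=1)             # weighted cluster centers
                D = np.abs(X[None, :] - V[:, None]) + 1e-12
                U = D ** (-2 / (m - 1)) / np.sum(D ** (-2 / (m - 1)), axis=0)
            return U, V

        X = np.concatenate([np.random.normal(0, 0.1, 50), np.random.normal(1, 0.1, 50)])
        U, V = fcm(X)
        print(np.sort(V))                               # centers near 0 and 1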

  4. An innovative thinking-based intelligent information fusion algorithm.

    PubMed

    Lu, Huimin; Hu, Liang; Liu, Gang; Zhou, Jin

    2013-01-01

    This study proposes an intelligent algorithm that can realize information fusion with reference to the research achievements in brain cognitive theory and innovative computation. This algorithm treats knowledge as the core and information fusion as a knowledge-based innovative thinking process. Furthermore, the five key parts of this algorithm, including information sensing and perception, memory storage, divergent thinking, convergent thinking, and an evaluation system, are simulated and modeled. This algorithm fully develops the innovative thinking skills of knowledge in information fusion and is an attempt to convert the abstract concepts of brain cognitive science into specific and operable research routes and strategies. Furthermore, the influences of each parameter of this algorithm on its performance are analyzed and compared with those of classical intelligent algorithms through tests. Test results suggest that the algorithm proposed in this study can obtain the optimal problem solution with fewer target evaluations, improve optimization effectiveness, and achieve the effective fusion of information.

  5. Multidisciplinary Multiobjective Optimal Design for Turbomachinery Using Evolutionary Algorithm

    NASA Technical Reports Server (NTRS)

    2005-01-01

    This report summarizes Dr. Lian's efforts toward developing a robust and efficient tool for multidisciplinary and multi-objective optimal design for turbomachinery using evolutionary algorithms. This work consisted of two stages. In the first stage (from July 2003 to June 2004), Dr. Lian focused on building the essential capabilities required for the project. More specifically, Dr. Lian worked on two subjects: an enhanced genetic algorithm (GA) and an integrated optimization system with a GA and a surrogate model. In the second stage (from July 2004 to February 2005), Dr. Lian formulated aerodynamic optimization and structural optimization as a multi-objective optimization problem and performed multidisciplinary and multi-objective optimizations on a transonic compressor blade based on the proposed model. Dr. Lian's numerical results showed that the proposed approach can effectively reduce the blade weight and increase the stage pressure ratio in an efficient manner. In addition, the new design was structurally safer than the original design. Five conference papers and three journal papers were published on this topic by Dr. Lian.

  6. A new algorithm for the detection of seismic quiescence: introduction of the RTM algorithm, a modified RTL algorithm

    NASA Astrophysics Data System (ADS)

    Nagao, Toshiyasu; Takeuchi, Akihiro; Nakamura, Kenji

    2011-03-01

    There are a number of reports on seismic quiescence phenomena before large earthquakes. The RTL algorithm is a weighted-coefficient statistical method that takes into account the magnitude, occurrence time, and place of earthquakes when investigating seismicity pattern changes before large earthquakes. However, we consider the original RTL algorithm to be overweighted on distance. In this paper, we introduce a modified RTL algorithm, called the RTM algorithm, and apply it to three large earthquakes in Japan, namely, the Hyogo-ken Nanbu earthquake in 1995 (MJMA 7.3), the Noto Hanto earthquake in 2007 (MJMA 6.9), and the Iwate-Miyagi Nairiku earthquake in 2008 (MJMA 7.2), as test cases. Because this algorithm uses several parameters to characterize the weighted coefficients, multiple parameter sets have to be prepared for the tests. The results show that the RTM algorithm is more sensitive than the RTL algorithm to seismic quiescence phenomena. This paper represents the first step in a series of future analyses of seismic quiescence phenomena using the RTM algorithm. At this moment, all surveyed parameters are selected empirically for use in the method. We have to consider the physical meaning of the "best fit" parameters, such as the relation to ΔCFS, among others, in future analyses.

  7. Metabolic symbiosis at the origin of eukaryotes.

    PubMed

    López-Garćia, P; Moreira, D

    1999-03-01

    Thirty years after Margulis revived the endosymbiosis theory for the origin of mitochondria and chloroplasts, two novel symbiosis hypotheses for the origin of eukaryotes have been put forward. Both propose that eukaryotes arose through metabolic symbiosis (syntrophy) between eubacteria and methanogenic Archaea. They also propose that this was mediated by interspecies hydrogen transfer and that, initially, mitochondria were anaerobic. These hypotheses explain the mosaic character of eukaryotes (i.e. an archaeal-like genetic machinery and a eubacterial-like metabolism), as well as distinct eukaryotic characteristics (which are proposed to be products of symbiosis). Combined data from comparative genomics, microbial ecology and the fossil record should help to test their validity.

  8. An improved SIFT algorithm in the application of close-range Stereo image matching

    NASA Astrophysics Data System (ADS)

    Zhang, Xuehua; Wang, Xiaoqing; Yuan, Xiaoxiang; Wang, Shumin

    2016-11-01

    As unmanned aerial vehicle (UAV) remote sensing is applied in small-area aerial photogrammetric surveying, disaster monitoring and emergency command, 3D urban construction and other fields, UAV image processing has become a hot topic in current research. Precise matching of UAV images is a key problem that directly affects the precision of subsequent processing, such as 3D reconstruction and automatic aerial triangulation. At present, the SIFT (Scale-Invariant Feature Transform) algorithm proposed by David G. Lowe is the main method and is widely used in image matching because of its strong stability to image rotation, shift, scaling, and changes in illumination conditions. It has been successfully applied in target recognition, SFM (Structure from Motion), and many other fields. The SIFT algorithm requires colour images to be converted into grayscale images, detects extremum points at different scales and uses neighbourhood pixels to generate descriptors. Since UAV images carry rich colour information, the SIFT algorithm is improved in this paper by combining it with the image colour information, and experiments compare matching efficiency and accuracy against the original SIFT algorithm. The results show that the method proposed in this paper is less efficient but more precise, and it provides a basis for choosing a matching method.

  9. Applications and accuracy of the parallel diagonal dominant algorithm

    NASA Technical Reports Server (NTRS)

    Sun, Xian-He

    1993-01-01

    The Parallel Diagonal Dominant (PDD) algorithm is a highly efficient, ideally scalable tridiagonal solver. In this paper, a detailed study of the PDD algorithm is given. First the PDD algorithm is introduced. Then the algorithm is extended to solve periodic tridiagonal systems. A variant, the reduced PDD algorithm, is also proposed. Accuracy analysis is provided for a class of tridiagonal systems, the symmetric, and anti-symmetric Toeplitz tridiagonal systems. Implementation results show that the analysis gives a good bound on the relative error, and the algorithm is a good candidate for the emerging massively parallel machines.
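
    For context, the sequential baseline that parallel tridiagonal solvers such as PDD are measured against is the Thomas algorithm (forward elimination plus back substitution); a compact sketch, not the PDD algorithm itself, follows.

        import numpy as np

        def thomas(a, b, c, d):
            # Sequential Thomas algorithm for a tridiagonal system with
            # subdiagonal a, diagonal b, superdiagonal c, right-hand side d.
            n = len(d)
            cp, dp = np.empty(n), np.empty(n)
            cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
            for i in range(1, n):                      # forward elimination
                denom = b[i] - a[i] * cp[i - 1]
                cp[i] = c[i] / denom if i < n - 1 else 0.0
                dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
            x = np.empty(n)
            x[-1] = dp[-1]
            for i in range(n - 2, -1, -1):             # back substitution
                x[i] = dp[i] - cp[i] * x[i + 1]
            return x

        n = 6
        a = np.full(n, -1.0); b = np.full(n, 2.0); c = np.full(n, -1.0)
        d = np.ones(n)
        x = thomas(a, b, c, d)
        A = np.diag(b) + np.diag(a[1:], -1) + np.diag(c[:n - 1], 1)
        print(np.allclose(A @ x, d))                   # True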

  10. Basis for spectral curvature algorithms in remote sensing of chlorophyll

    NASA Technical Reports Server (NTRS)

    Campbell, J. W.; Esaias, W. E.

    1983-01-01

    A simple, empirically derived algorithm for estimating oceanic chlorophyll concentrations from spectral radiances measured by a low-flying spectroradiometer has proved highly successful in field experiments in 1980-82. The sensor used was the Multichannel Ocean Color Sensor, and the originator of the algorithm was Grew (1981). This paper presents an explanation for the algorithm based on the optical properties of waters containing chlorophyll and other phytoplankton pigments and the radiative transfer equations governing the remotely sensed signal. The effects of varying solar zenith, atmospheric transmittance, and interfering substances in the water on the chlorophyll algorithm are characterized, and applicability of the algorithm is discussed.

  11. A VLSI architecture for simplified arithmetic Fourier transform algorithm

    NASA Technical Reports Server (NTRS)

    Reed, Irving S.; Shih, Ming-Tang; Truong, T. K.; Hendon, E.; Tufts, D. W.

    1992-01-01

    The arithmetic Fourier transform (AFT) is a number-theoretic approach to Fourier analysis which has been shown to perform competitively with the classical FFT in terms of accuracy, complexity, and speed. Theorems developed in a previous paper for the AFT algorithm are used here to derive the original AFT algorithm which Bruns found in 1903. This is shown to yield an algorithm of less complexity and of improved performance over certain recent AFT algorithms. A VLSI architecture is suggested for this simplified AFT algorithm. This architecture uses a butterfly structure which reduces the number of additions by 25 percent of that used in the direct method.

  12. Analytical modeling of materialized view maintenance algorithms

    SciTech Connect

    Srivastava, J.; Rotem, D.

    1987-10-01

    In the recent past there has been increasing interest in the idea of maintaining materialized copies of views and using them to process view queries (ADIB 80, LIND 86, BLAK 86, ROSS 86, HANS 87). Various algorithms have been proposed, and their performance analyzed. However, there does not exist a comprehensive analytical framework under which the problem can be systematically studied. We present a queueing model which both facilitates a systematic study of the problem and provides a means to compare the various proposed algorithms. Specifically, we propose a parametrized approach in which the user and system viewpoints are integrated, and the setting of the parameter decides the relative importance of each table.

  13. Improvement of wavelet threshold filtered back-projection image reconstruction algorithm

    NASA Astrophysics Data System (ADS)

    Ren, Zhong; Liu, Guodong; Huang, Zhen

    2014-11-01

    Image reconstruction techniques have been applied in many fields, including medical imaging such as X-ray computed tomography (X-CT), positron emission tomography (PET) and nuclear magnetic resonance imaging (MRI), but the reconstruction results are still not satisfactory because the original projection data are inevitably polluted by noise during image reconstruction. Although some traditional filters, e.g., the Shepp-Logan (SL) and Ram-Lak (RL) filters, can remove some noise, the Gibbs oscillation phenomenon appears, and the artifacts introduced by back-projection are not greatly improved. Wavelet threshold denoising can overcome the interference of noise with image reconstruction. Since the traditional soft and hard threshold functions have some inherent defects, an improved wavelet threshold function combined with the filtered back-projection (FBP) algorithm is proposed in this paper. Four different reconstruction algorithms were compared in simulated experiments. Experimental results demonstrate that the improved algorithm greatly eliminates the discontinuity and large distortion of the traditional threshold functions as well as the Gibbs oscillation. Finally, the advantage of the improved algorithm was verified by comparing two evaluation criteria, i.e., mean square error (MSE) and peak signal-to-noise ratio (PSNR), among the four algorithms, and the optimal dual threshold values of the improved wavelet threshold function were obtained.
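
    The classical soft and hard threshold rules are easy to state; the sketch below includes them plus one common hard/soft compromise. The paper's own improved dual-threshold function is not specified in the abstract, so the third function is purely illustrative.

        import numpy as np

        def hard_threshold(w, t):
            # Keep coefficients above the threshold, zero the rest (discontinuous at t).
            return np.where(np.abs(w) > t, w, 0.0)

        def soft_threshold(w, t):
            # Shrink all coefficients toward zero by t (continuous but biased).
            return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)

        def blended_threshold(w, t, alpha=0.5):
            # Illustrative compromise between the hard and soft rules; NOT the
            # paper's improved function, which the abstract does not specify.
            return alpha * hard_threshold(w, t) + (1 - alpha) * soft_threshold(w, t)

        w = np.linspace(-3, 3, 7)
        print(hard_threshold(w, 1.0))
        print(soft_threshold(w, 1.0))
        print(blended_threshold(w, 1.0))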

  14. A novel image encryption algorithm based on chaos maps with Markov properties

    NASA Astrophysics Data System (ADS)

    Liu, Quan; Li, Pei-yue; Zhang, Ming-chao; Sui, Yong-xin; Yang, Huai-jiang

    2015-02-01

    In order to construct a high-complexity, secure and low-cost image encryption algorithm, a class of chaos with Markov properties was researched and such an algorithm is proposed. This kind of chaos has higher complexity than the Logistic map and the Tent map, while keeping uniformity and low autocorrelation. An improved coupled map lattice based on the chaos with Markov properties is also employed to cover the phase space of the chaos and enlarge the key space; it has better performance than the original one. A novel image encryption algorithm is constructed on the new coupled map lattice, which is used as a key stream generator. A true random number is used to disturb the key, which can dynamically change the permutation matrix and the key stream. Experiments show that the key stream can pass the SP800-22 test. The novel image encryption can resist CPA, CCA and differential attacks. The algorithm is sensitive to the initial key and can change the distribution of the pixel values of the image. The correlation of adjacent pixels can also be eliminated. When compared with an algorithm based on the Logistic map, it has higher complexity and better uniformity, and is nearer to a true random number. It is also efficient to implement, which shows its value for common use.
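
    As a baseline illustration, the sketch below builds an XOR keystream from the Logistic map, the simple chaos the paper compares against; the proposed scheme instead uses a Markov-property chaos with an improved coupled map lattice, which is not reproduced here.

        import numpy as np

        def logistic_keystream(x0, n, r=3.99, burn_in=1000):
            # Logistic-map keystream (the baseline the paper compares against).
            x = x0
            for _ in range(burn_in):                 # discard the transient
                x = r * x * (1 - x)
            out = np.empty(n, dtype=np.uint8)
            for i in range(n):
                x = r * x * (1 - x)
                out[i] = int(x * 256) & 0xFF         # map (0,1) to a byte
            return out

        def xor_encrypt(pixels, key=0.612345):      # 'key' is a hypothetical secret seed
            ks = logistic_keystream(key, pixels.size)
            return pixels ^ ks.reshape(pixels.shape)

        img = np.arange(16, dtype=np.uint8).reshape(4, 4)
        enc = xor_encrypt(img)
        print(np.array_equal(xor_encrypt(enc), img))  # XOR keystream is its own inverse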

  15. Comparison of Statistical Algorithms for the Detection of Infectious Disease Outbreaks in Large Multiple Surveillance Systems

    PubMed Central

    Farrington, C. Paddy; Noufaily, Angela; Andrews, Nick J.; Charlett, Andre

    2016-01-01

    A large-scale multiple surveillance system for infectious disease outbreaks has been in operation in England and Wales since the early 1990s. Changes to the statistical algorithm at the heart of the system were proposed and the purpose of this paper is to compare two new algorithms with the original algorithm. Test data to evaluate performance are created from weekly counts of the number of cases of each of more than 2000 diseases over a twenty-year period. The time series of each disease is separated into one series giving the baseline (background) disease incidence and a second series giving disease outbreaks. One series is shifted forward by twelve months and the two are then recombined, giving a realistic series in which it is known where outbreaks have been added. The metrics used to evaluate performance include a scoring rule that appropriately balances sensitivity against specificity and is sensitive to variation in probabilities near 1. In the context of disease surveillance, a scoring rule can be adapted to reflect the size of outbreaks and this was done. Results indicate that the two new algorithms are comparable to each other and better than the algorithm they were designed to replace. PMID:27513749

  16. Fast algorithm for scaling analysis with higher-order detrending moving average method

    NASA Astrophysics Data System (ADS)

    Tsujimoto, Yutaka; Miki, Yuki; Shimatani, Satoshi; Kiyono, Ken

    2016-05-01

    Among scaling analysis methods based on the root-mean-square deviation from the estimated trend, it has been demonstrated that centered detrending moving average (DMA) analysis with a simple moving average has good performance when characterizing long-range correlation or fractal scaling behavior. Furthermore, higher-order DMA has also been proposed; it has better detrending capability, removing higher-order polynomial trends than the original DMA can. However, a straightforward implementation of higher-order DMA requires a very high computational cost, which would prevent practical use of the method. To solve this issue, in this study, we introduce a fast algorithm for higher-order DMA, which consists of two techniques: (1) parallel translation of the moving-averaging windows by a fixed interval; (2) recurrence formulas for the calculation of summations. Our algorithm can significantly reduce the computational cost. Monte Carlo experiments show that the computational time of our algorithm is approximately proportional to the data length, whereas that of the conventional algorithm is proportional to the square of the data length. The efficiency of our algorithm is also shown by a systematic study of the performance of higher-order DMA, such as the range of detectable scaling exponents and the detrending capability for removing polynomial trends. In addition, through the analysis of heart-rate variability time series, we discuss possible applications of higher-order DMA.
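
    The second technique, recurrence formulas for summations, is the same idea as computing a moving average from prefix sums so that each window costs O(1) instead of O(window). A minimal sketch for a centered simple moving average (the zeroth-order DMA trend) follows; higher-order DMA applies analogous recurrences to polynomial fits.

        import numpy as np

        def centered_moving_average(x, window):
            # O(n) centered moving average via prefix sums: each window sum is
            # a difference of two cumulative sums (a recurrence, not a rescan).
            assert window % 2 == 1, "use an odd window so the average is centered"
            s = np.concatenate(([0.0], np.cumsum(x)))
            return (s[window:] - s[:-window]) / window   # interior (valid) part only

        x = np.random.randn(1000).cumsum()               # a random-walk test series
        print(centered_moving_average(x, 11).shape)      # (990,) = 1000 - 11 + 1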

  17. A dynamic material discrimination algorithm for dual MV energy X-ray digital radiography.

    PubMed

    Li, Liang; Li, Ruizhe; Zhang, Siyuan; Zhao, Tiao; Chen, Zhiqiang

    2016-08-01

    Dual-energy X-ray radiography has become a well-established technique in medical, industrial, and security applications, because of its material or tissue discrimination capability. The main difficulty of this technique is dealing with the materials overlapping problem. When there are two or more materials along the X-ray beam path, its material discrimination performance will be affected. In order to solve this problem, a new dynamic material discrimination algorithm is proposed for dual-energy X-ray digital radiography, which can also be extended to multi-energy X-ray situations. The algorithm has three steps: α-curve-based pre-classification, decomposition of overlapped materials, and the final material recognition. The key of the algorithm is to establish a dual-energy radiograph database of both pure basis materials and pair combinations of them. After the pre-classification results, original dual-energy projections of overlapped materials can be dynamically decomposed into two sets of dual-energy radiographs of each pure material by the algorithm. Thus, more accurate discrimination results can be provided even with the existence of the overlapping problem. Both numerical and experimental results that prove the validity and effectiveness of the algorithm are presented.

  18. Random generation of periodic hard ellipsoids based on molecular dynamics: A computationally-efficient algorithm

    NASA Astrophysics Data System (ADS)

    Ghossein, Elias; Lévesque, Martin

    2013-11-01

    This paper presents a computationally-efficient algorithm for generating random periodic packings of hard ellipsoids. The algorithm is based on molecular dynamics where the ellipsoids are set in translational and rotational motion and their volumes gradually increase. Binary collision times are computed by simply finding the roots of a non-linear function. In addition, an original and efficient method to compute the collision time between an ellipsoid and a cube face is proposed. The algorithm can generate all types of ellipsoids (prolate, oblate and scalene) with very high aspect ratios (i.e., >10). It is the first time that such packings are reported in the literature. Orientations tensors were computed for the generated packings and it has been shown that ellipsoids had a uniform distribution of orientations. Moreover, it seems that for low aspect ratios (i.e., ⩽10), the volume fraction is the most influential parameter on the algorithm CPU time. For higher aspect ratios, the influence of the latter becomes as important as the volume fraction. All necessary pseudo-codes are given so that the reader can easily implement the algorithm.

  19. Uses of clinical algorithms.

    PubMed

    Margolis, C Z

    1983-02-04

    The clinical algorithm (flow chart) is a text format that is specially suited for representing a sequence of clinical decisions, for teaching clinical decision making, and for guiding patient care. A representative clinical algorithm is described in detail; five steps for writing an algorithm and seven steps for writing a set of algorithms are outlined. Five clinical education and patient care uses of algorithms are then discussed, including a map for teaching clinical decision making and protocol charts for guiding step-by-step care of specific problems. Clinical algorithms are compared with decision analysis as to their clinical usefulness. Three objections to clinical algorithms are answered, including the claim that they restrict thinking. It is concluded that methods should be sought for writing clinical algorithms that represent expert consensus. A clinical algorithm could then be written for any area of medical decision making that can be standardized. Medical practice could then be taught more effectively, monitored accurately, and understood better.

  20. Complex algorithm of optical flow determination by weighted full search

    NASA Astrophysics Data System (ADS)

    Panin, S. V.; Chemezov, V. O.; Lyubutin, P. S.

    2016-11-01

    An optical flow determination algorithm is proposed, developed and tested in this article. The algorithm aims to improve the accuracy of displacement determination at the boundaries of scene elements (objects). The results show that the proposed algorithm is quite promising for stereo vision applications. Varying the calculation parameters allowed us to determine their rational values and to reduce the average absolute error of end-point displacement determination (AEE). A peculiarity of the proposed algorithm is that calculations are performed within local regions, which makes it possible to carry them out simultaneously (i.e., in parallel).
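
    A minimal, unweighted full-search block-matching sketch is given below for orientation: for one block it scans every displacement in a window and keeps the sum-of-absolute-differences (SAD) minimizer. The proposed algorithm adds weighting and confines the computation to local regions so the searches can run in parallel.

        import numpy as np

        def full_search(ref, cur, bx, by, bs=8, radius=4):
            # Exhaustive block matching: test every displacement in a square
            # window and return the (dx, dy) minimizing the SAD.
            block = cur[by:by + bs, bx:bx + bs].astype(int)
            best, best_sad = (0, 0), np.inf
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    y, x = by + dy, bx + dx
                    if y < 0 or x < 0 or y + bs > ref.shape[0] or x + bs > ref.shape[1]:
                        continue                      # skip out-of-frame candidates
                    sad = np.abs(ref[y:y + bs, x:x + bs].astype(int) - block).sum()
                    if sad < best_sad:
                        best_sad, best = sad, (dx, dy)
            return best

        ref = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
        cur = np.roll(ref, (2, 3), axis=(0, 1))       # shift the whole frame
        print(full_search(ref, cur, 16, 16))          # expect (-3, -2)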

  1. Recurrent neural networks training with stable bounding ellipsoid algorithm.

    PubMed

    Yu, Wen; de Jesús Rubio, José

    2009-06-01

    Bounding ellipsoid (BE) algorithms offer an attractive alternative to traditional training algorithms for neural networks, such as backpropagation and least squares methods. The benefits include high computational efficiency and fast convergence speed. In this paper, we propose an ellipsoid propagation algorithm to train the weights of recurrent neural networks for nonlinear system identification. Both hidden layers and output layers can be updated. The stability of the BE algorithm is proven.

  2. MU-MIMO Pairing Algorithm Using Received Power

    NASA Astrophysics Data System (ADS)

    Kim, Young-Joon; Lee, Jung-Seung; Baik, Doo-Kwon

    In this letter, a new received-power pairing scheduling (PPS) algorithm is proposed for multi-user multiple-input multiple-output (MU-MIMO) systems. In contrast to existing algorithms that manage complex orthogonality factors, the PPS algorithm simply utilizes the CINR to determine a MU-MIMO pair. Simulation results show that the PPS algorithm achieves up to 77% of the MU-MIMO gain of determinant pairing scheduling (DPS) with low complexity.

  3. Historical development of origins research.

    PubMed

    Lazcano, Antonio

    2010-11-01

    Following the publication of the Origin of Species in 1859, many naturalists adopted the idea that living organisms were the historical outcome of gradual transformation of lifeless matter. These views soon merged with the developments of biochemistry and cell biology and led to proposals in which the origin of protoplasm was equated with the origin of life. The heterotrophic origin of life proposed by Oparin and Haldane in the 1920s was part of this tradition, which Oparin enriched by transforming the discussion of the emergence of the first cells into a workable multidisciplinary research program. On the other hand, the scientific trend toward understanding biological phenomena at the molecular level led authors like Troland, Muller, and others to propose that single molecules or viruses represented primordial living systems. The contrast between these opposing views on the origin of life represents not only contrasting views of the nature of life itself, but also major ideological discussions that reached a surprising intensity in the years following Stanley Miller's seminal result which showed the ease with which organic compounds of biochemical significance could be synthesized under putative primitive conditions. In fact, during the years following the Miller experiment, attempts to understand the origin of life were strongly influenced by research on DNA replication and protein biosynthesis, and, in socio-political terms, by the atmosphere created by Cold War tensions. The catalytic versatility of RNA molecules clearly merits a critical reappraisal of Muller's viewpoint. However, the discovery of ribozymes does not imply that autocatalytic nucleic acid molecules ready to be used as primordial genes were floating in the primitive oceans, or that the RNA world emerged completely assembled from simple precursors present in the prebiotic soup. The evidence supporting the presence of a wide range of organic molecules on the primitive Earth, including membrane

  4. Historical Development of Origins Research

    PubMed Central

    Lazcano, Antonio

    2010-01-01

    Following the publication of the Origin of Species in 1859, many naturalists adopted the idea that living organisms were the historical outcome of gradual transformation of lifeless matter. These views soon merged with the developments of biochemistry and cell biology and led to proposals in which the origin of protoplasm was equated with the origin of life. The heterotrophic origin of life proposed by Oparin and Haldane in the 1920s was part of this tradition, which Oparin enriched by transforming the discussion of the emergence of the first cells into a workable multidisciplinary research program. On the other hand, the scientific trend toward understanding biological phenomena at the molecular level led authors like Troland, Muller, and others to propose that single molecules or viruses represented primordial living systems. The contrast between these opposing views on the origin of life represents not only contrasting views of the nature of life itself, but also major ideological discussions that reached a surprising intensity in the years following Stanley Miller’s seminal result which showed the ease with which organic compounds of biochemical significance could be synthesized under putative primitive conditions. In fact, during the years following the Miller experiment, attempts to understand the origin of life were strongly influenced by research on DNA replication and protein biosynthesis, and, in socio-political terms, by the atmosphere created by Cold War tensions. The catalytic versatility of RNA molecules clearly merits a critical reappraisal of Muller’s viewpoint. However, the discovery of ribozymes does not imply that autocatalytic nucleic acid molecules ready to be used as primordial genes were floating in the primitive oceans, or that the RNA world emerged completely assembled from simple precursors present in the prebiotic soup. The evidence supporting the presence of a wide range of organic molecules on the primitive Earth, including membrane

  5. Comparative analysis of instance selection algorithms for instance-based classifiers in the context of medical decision support

    NASA Astrophysics Data System (ADS)

    Mazurowski, Maciej A.; Malof, Jordan M.; Tourassi, Georgia D.

    2011-01-01

    When constructing a pattern classifier, it is important to make best use of the instances (a.k.a. cases, examples, patterns or prototypes) available for its development. In this paper we present an extensive comparative analysis of algorithms that, given a pool of previously acquired instances, attempt to select those that will be the most effective to construct an instance-based classifier in terms of classification performance, time efficiency and storage requirements. We evaluate seven previously proposed instance selection algorithms and compare their performance to simple random selection of instances. We perform the evaluation using k-nearest neighbor classifier and three classification problems: one with simulated Gaussian data and two based on clinical databases for breast cancer detection and diagnosis, respectively. Finally, we evaluate the impact of the number of instances available for selection on the performance of the selection algorithms and conduct initial analysis of the selected instances. The experiments show that for all investigated classification problems, it was possible to reduce the size of the original development dataset to less than 3% of its initial size while maintaining or improving the classification performance. Random mutation hill climbing emerges as the superior selection algorithm. Furthermore, we show that some previously proposed algorithms perform worse than random selection. Regarding the impact of the number of instances available for the classifier development on the performance of the selection algorithms, we confirm that the selection algorithms are generally more effective as the pool of available instances increases. In conclusion, instance selection is generally beneficial for instance-based classifiers as it can improve their performance, reduce their storage requirements and improve their response time. However, choosing the right selection algorithm is crucial.

  6. LiveWire interactive boundary extraction algorithm based on Haar wavelet transform and control point set direction search

    NASA Astrophysics Data System (ADS)

    Cheng, Jun; Zhang, Jun; Tian, Jinwen

    2015-12-01

    Based on a deep analysis of the LiveWire interactive boundary extraction algorithm, a new algorithm focusing on improving the speed of the LiveWire algorithm is proposed in this paper. First, the Haar wavelet transform is carried out on the input image, and the boundary is extracted on the low-resolution image obtained by the wavelet transform. Second, the LiveWire shortest path is calculated based on a direction search over the control point set, utilizing the spatial relationship between the two control points the user provides in real time. Third, the search order of the points adjacent to the starting node is set in advance, and an ordinary queue instead of a priority queue is used as the storage pool of the points when optimizing their shortest path values, thus reducing the complexity of the algorithm from O(n²) to O(n). Finally, a region iterative backward projection method based on neighborhood pixel polling is used to convert the dual-pixel boundary of the reconstructed image to a single-pixel boundary after the inverse Haar wavelet transform. The algorithm proposed in this paper combines the advantages of the Haar wavelet transform and of the optimal path search based on the control point set direction search: the former offers fast image decomposition and reconstruction and is more consistent with the texture features of the image, while the latter reduces the time complexity of the original algorithm. Thus the algorithm improves the speed of interactive boundary extraction while reflecting the boundary information of the image more comprehensively. All the methods mentioned above play a large role in improving the execution efficiency and robustness of the algorithm.

  7. Accelerated fast iterative shrinkage thresholding algorithms for sparsity-regularized cone-beam CT image reconstruction

    PubMed Central

    Xu, Qiaofeng; Yang, Deshan; Tan, Jun; Sawatzky, Alex; Anastasio, Mark A.

    2016-01-01

    Purpose: The development of iterative image reconstruction algorithms for cone-beam computed tomography (CBCT) remains an active and important research area. Even with hardware acceleration, the overwhelming majority of the available 3D iterative algorithms that implement nonsmooth regularizers remain computationally burdensome and have not been translated for routine use in time-sensitive applications such as image-guided radiation therapy (IGRT). In this work, two variants of the fast iterative shrinkage thresholding algorithm (FISTA) are proposed and investigated for accelerated iterative image reconstruction in CBCT. Methods: Algorithm acceleration was achieved by replacing the original gradient-descent step in the FISTAs by a subproblem that is solved by use of the ordered subset simultaneous algebraic reconstruction technique (OS-SART). Due to the preconditioning matrix adopted in the OS-SART method, two new weighted proximal problems were introduced and corresponding fast gradient projection-type algorithms were developed for solving them. We also provided efficient numerical implementations of the proposed algorithms that exploit the massive data parallelism of multiple graphics processing units. Results: The improved rates of convergence of the proposed algorithms were quantified in computer-simulation studies and by use of clinical projection data corresponding to an IGRT study. The accelerated FISTAs were shown to possess dramatically improved convergence properties as compared to the standard FISTAs. For example, the number of iterations to achieve a specified reconstruction error could be reduced by an order of magnitude. Volumetric images reconstructed from clinical data were produced in under 4 min. Conclusions: The FISTA achieves a quadratic convergence rate and can therefore potentially reduce the number of iterations required to produce an image of a specified image quality as compared to first-order methods. We have proposed and investigated
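
    For reference, the standard FISTA iteration for an ℓ1-regularized least-squares (LASSO) problem is sketched below: a gradient step, a soft-threshold proximal step, and the momentum extrapolation driven by the t-sequence. The paper's variants replace the plain gradient step with an OS-SART subproblem and GPU kernels, which are not shown.

        import numpy as np

        def fista_lasso(A, b, lam, iters=300):
            # Standard FISTA for min 0.5*||Ax - b||^2 + lam*||x||_1.
            L = np.linalg.norm(A, 2) ** 2                  # Lipschitz constant of the gradient
            x = y = np.zeros(A.shape[1])
            t = 1.0
            for _ in range(iters):
                g = A.T @ (A @ y - b)                      # gradient step at the extrapolated point
                z = y - g / L
                x_new = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold prox
                t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
                y = x_new + (t - 1) / t_new * (x_new - x)  # momentum extrapolation
                x, t = x_new, t_new
            return x

        rng = np.random.default_rng(0)
        A = rng.standard_normal((40, 100))
        x_true = np.zeros(100); x_true[:5] = 3.0           # 5-sparse ground truth
        b = A @ x_true
        print(np.round(fista_lasso(A, b, lam=0.1)[:8], 2)) # leading entries approach x_true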

  8. A Fusion Algorithm for GFP Image and Phase Contrast Image of Arabidopsis Cell Based on SFL-Contourlet Transform

    PubMed Central

    Feng, Peng; Wang, Jing; Wei, Biao; Mi, Deling

    2013-01-01

    A hybrid multiscale and multilevel image fusion algorithm for the green fluorescent protein (GFP) image and phase contrast image of Arabidopsis cells is proposed in this paper. Combining the intensity-hue-saturation (IHS) transform and the sharp frequency localization Contourlet transform (SFL-CT), the algorithm uses different fusion strategies for the different detail subbands, including a neighborhood consistency measurement (NCM) that can adaptively balance color background against gray structure. Two kinds of neighborhood classes based on an empirical model are also taken into consideration. Visual information fidelity (VIF) is introduced as an objective criterion to evaluate the fused image. Experimental results on 117 groups of Arabidopsis cell images from the John Innes Center show that the new algorithm not only preserves the details of the original images well but also improves the visibility of the fused image, demonstrating the superiority of the novel method over traditional ones. PMID:23476716

  9. Optimal PID Controller Design Using Adaptive VURPSO Algorithm

    NASA Astrophysics Data System (ADS)

    Zirkohi, Majid Moradi

    2015-04-01

    The purpose of this paper is to improve the Velocity Update Relaxation Particle Swarm Optimization (VURPSO) algorithm. The improved algorithm is called the Adaptive VURPSO (AVURPSO) algorithm. An optimal design of a proportional-integral-derivative (PID) controller is then obtained using AVURPSO. An adaptive momentum factor regulates the trade-off between the global and the local exploration abilities of the proposed algorithm, which helps the system reach the optimal solution quickly and saves computation time. Comparisons on the optimal PID controller design confirm the superiority of AVURPSO over the other optimization algorithms considered in this paper, namely the VURPSO algorithm, the Ant Colony algorithm, and the conventional approach. Comparisons on the speed of convergence confirm that the proposed algorithm converges faster, in less computation time, to a global optimum. AVURPSO can be applied to diverse optimization problems such as industrial planning, resource allocation, scheduling, decision making, pattern recognition, and machine learning, and it is used here to efficiently design an optimal PID controller.
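
    The adaptive momentum factor plays the same role as the inertia weight in standard PSO: large values favor global exploration, small values favor local refinement. A generic sketch of inertia-weighted PSO follows; the linear decay schedule, coefficients, and function name are illustrative assumptions, not the AVURPSO rule itself. For PID tuning, `f` would map a gain vector (Kp, Ki, Kd) to a closed-loop cost such as the integral of absolute error.

```python
import numpy as np

def pso(f, dim, n=30, iters=200, lo=-5.0, hi=5.0):
    """Inertia-weighted PSO with a linearly decaying momentum factor (sketch)."""
    rng = np.random.default_rng(0)
    x = rng.uniform(lo, hi, (n, dim))
    v = np.zeros((n, dim))
    pbest, pcost = x.copy(), np.apply_along_axis(f, 1, x)
    gbest = pbest[pcost.argmin()].copy()
    for k in range(iters):
        w = 0.9 - 0.5 * k / iters              # momentum: global -> local search
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        v = w * v + 2.0 * r1 * (pbest - x) + 2.0 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        cost = np.apply_along_axis(f, 1, x)
        better = cost < pcost
        pbest[better], pcost[better] = x[better], cost[better]
        gbest = pbest[pcost.argmin()].copy()
    return gbest
```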

  10. An efficient algorithm for some highly nonlinear fractional PDEs in mathematical physics.

    PubMed

    Ahmad, Jamshad; Mohyud-Din, Syed Tauseef

    2014-01-01

    In this paper, a fractional complex transform (FCT) is used to convert the given fractional partial differential equations (FPDEs) into corresponding partial differential equations (PDEs), and the Reduced Differential Transform Method (RDTM) is subsequently applied to the transformed system of linear and nonlinear time-fractional PDEs. The results so obtained are re-stated by means of the inverse transformation, which expresses them in terms of the original variables. The proposed algorithm is observed to be highly efficient and appropriate for fractional PDEs, and hence can be extended to other complex problems of diversified nonlinear nature.

  11. An Efficient Algorithm for Some Highly Nonlinear Fractional PDEs in Mathematical Physics

    PubMed Central

    Ahmad, Jamshad; Mohyud-Din, Syed Tauseef

    2014-01-01

    In this paper, a fractional complex transform (FCT) is used to convert the given fractional partial differential equations (FPDEs) into corresponding partial differential equations (PDEs), and the Reduced Differential Transform Method (RDTM) is subsequently applied to the transformed system of linear and nonlinear time-fractional PDEs. The results so obtained are re-stated by means of the inverse transformation, which expresses them in terms of the original variables. The proposed algorithm is observed to be highly efficient and appropriate for fractional PDEs, and hence can be extended to other complex problems of diversified nonlinear nature. PMID:25525804

  12. Implementation of a new iterative learning control algorithm on real data

    NASA Astrophysics Data System (ADS)

    Zamanian, Hamed; Koohi, Ardavan

    2016-02-01

    In this paper, a new approach is proposed for closed-loop automatic tuning of a proportional-integral-derivative (PID) controller based on an iterative learning control (ILC) algorithm. A modified ILC scheme iteratively adjusts the control signal. Once satisfactory performance is achieved, a linear compensator is identified from the ILC behavior using the causal relationship between the closed-loop signals. This compensator is approximated by a PD controller, which is then used to tune the original PID controller. Results of implementing this approach on experimental data from the Damavand tokamak are presented and are consistent with simulation outcomes.

  13. ADART: an adaptive algebraic reconstruction algorithm for discrete tomography.

    PubMed

    Maestre-Deusto, F Javier; Scavello, Giovanni; Pizarro, Joaquín; Galindo, Pedro L

    2011-08-01

    In this paper we propose an algorithm based on the Discrete Algebraic Reconstruction Technique (DART) that is capable of computing high-quality reconstructions from substantially fewer projections than required for conventional continuous tomography. Adaptive DART (ADART) goes a step further than DART in reducing the number of unknowns of the associated linear system, achieving a significant reduction in the pixel error rate of reconstructed objects. The proposed methodology automatically adapts the border definition criterion at each iteration, reducing the number of pixels belonging to the border, and consequently the number of unknowns in the general algebraic reconstruction linear system to be solved; this reduction is especially important at the final stage of the iterative process. Experimental results show that reconstruction errors are considerably reduced using ADART compared to the original DART, in both clean and noisy environments.

  14. Iterative phase retrieval algorithms. I: optimization.

    PubMed

    Guo, Changliang; Liu, Shi; Sheridan, John T

    2015-05-20

    Two modified Gerchberg-Saxton (GS) iterative phase retrieval algorithms are proposed. The first we refer to as the spatial phase perturbation GS algorithm (SPP GSA). The second is a combined GS hybrid input-output algorithm (GS/HIOA). In this paper (Part I), it is demonstrated that the SPP GS and GS/HIO algorithms are both much better at avoiding stagnation during phase retrieval, allowing them to successfully locate superior solutions compared with either the GS or the HIO algorithms. The performances of the SPP GS and GS/HIO algorithms are also compared. Then, the error reduction (ER) algorithm is combined with the HIO algorithm (ER/HIOA) to retrieve the input object image and the phase, given only some knowledge of its extent and the amplitude in the Fourier domain. In Part II, the algorithms developed here are applied to carry out known plaintext and ciphertext attacks on amplitude encoding and phase encoding double random phase encryption systems. Significantly, ER/HIOA is then used to carry out a ciphertext-only attack on AE DRPE systems.
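
    The baseline GS iteration that both variants modify alternates between the object and Fourier domains, enforcing the known amplitude in each. A minimal sketch, assuming square arrays and known amplitudes in both domains; the SPP perturbation and HIO feedback steps of the paper are not shown.

```python
import numpy as np

def gerchberg_saxton(source_amp, target_amp, n_iter=200):
    """Classic GS: alternate domains, enforcing the known amplitude in each."""
    rng = np.random.default_rng(0)
    field = source_amp * np.exp(1j * rng.uniform(0, 2 * np.pi, source_amp.shape))
    for _ in range(n_iter):
        F = np.fft.fft2(field)
        F = target_amp * np.exp(1j * np.angle(F))           # Fourier-domain constraint
        field = np.fft.ifft2(F)
        field = source_amp * np.exp(1j * np.angle(field))   # object-domain constraint
    return np.angle(field)                                  # retrieved phase
```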

  15. A parallel algorithm for random searches

    NASA Astrophysics Data System (ADS)

    Wosniack, M. E.; Raposo, E. P.; Viswanathan, G. M.; da Luz, M. G. E.

    2015-11-01

    We discuss a parallelization procedure for a two-dimensional random search by a single individual, a typical sequential process. To ensure that the parallel version retains the features of the sequential random search, we analyze the spatial patterns of the encountered targets for different search strategies and densities of homogeneously distributed targets. We identify a lognormal tendency in the distribution of distances between consecutively detected targets. Then, by assigning the distinct mean and standard deviation of this distribution to each corresponding configuration in the parallel simulations (constituted by parallel random walkers), we are able to recover important statistical properties, e.g., the target detection efficiency, of the original problem. The proposed parallel approach presents a speedup of nearly one order of magnitude compared with the sequential implementation. The algorithm can be easily adapted to different instances, such as searches in three dimensions. Its possible range of applicability covers problems in areas as diverse as automated computer searches in high-capacity databases and animal foraging.

  16. New segmentation algorithm for detecting tiny objects

    NASA Astrophysics Data System (ADS)

    Sun, Han; Yang, Jingyu; Ren, Mingwu; Gao, Jian-zhen

    2001-09-01

    Road cracks in the highway surface are very dangerous to traffic and should be found and repaired as early as possible, so we designed a system for automatically detecting cracks in the highway surface. The system involves several key steps. For instance, the first step, image recording, must use a high-quality photographic device because of the high vehicle speed. In addition, the volume of raw data is very large, so huge storage media and effective compression are needed. Because the illumination is strongly affected by the environment, preprocessing such as image reconstruction and enhancement is essential. The cracks are too tiny to detect easily, so segmentation is rather difficult. This paper proposes a new segmentation method to detect such tiny cracks, even those only 2 mm wide. The algorithm first performs edge detection to obtain seeds for subsequent line growing, then deletes the false seeds and extracts the crack information. It is sufficiently accurate and fast.

  17. Origin of Fibrosing Cells in Systemic Sclerosis

    PubMed Central

    Ebmeier, Sarah; Horsley, Valerie

    2015-01-01

    Purpose of review: Systemic sclerosis (SSc), an autoimmune disease of unknown origin, is characterized by progressive fibrosis that can affect all organs of the body. To date, there are no effective therapies for the disease. This paucity of treatment options is primarily due to a limited understanding of the processes that initiate and promote fibrosis in general and a lack of animal models that specifically emulate the chronic nature of systemic sclerosis; most models recapitulate acute injury-induced fibrosis in specific organs. Regardless of the model, however, a major outstanding question in the field is the cellular origin of fibrosing cells. Recent findings: A multitude of origins have been proposed in a variety of tissues, including resident tissue stroma, fibrocytes, pericytes, adipocytes, epithelial cells, and endothelial cells. Developmentally derived fibroblast lineages with fibrosing potential have recently been elucidated in injury models. Increasing data support the pericyte as a fibrosing cell origin in diverse fibrosis models, and adipocytes have recently been proposed. Fibrocytes, epithelial cells, and endothelial cells have been examined, though the data do not as strongly support these possible origins. Summary: In this review, we discuss recent evidence arguing in favor of and against proposed origins of fibrosing cells in diverse models of fibrosis. We highlight outstanding controversies and propose how future research may elucidate how fibrosing cells arise and what processes can be targeted in order to treat systemic sclerosis. PMID:26352735

  18. Node Self-Deployment Algorithm Based on an Uneven Cluster with Radius Adjusting for Underwater Sensor Networks

    PubMed Central

    Jiang, Peng; Xu, Yiming; Wu, Feng

    2016-01-01

    Existing move-restricted node self-deployment algorithms are based on a fixed node communication radius, evaluate performance by network coverage or connectivity rate, and do not consider the number of nodes near the sink node or the energy consumption distribution of the network topology, thereby degrading network reliability and the energy consumption balance. Therefore, we propose a distributed underwater node self-deployment algorithm. First, each node begins uneven clustering based on its distance on the water surface, and each cluster head node selects its next-hop node to synchronously construct a connected path to the sink node. Second, each cluster head node adjusts its depth while maintaining the layout formed by the uneven clustering and then adjusts the positions of the in-cluster nodes. The algorithm is novel in considering network reliability and the energy consumption balance during node deployment, and in considering the coverage redundancy rate of all positions that a node may reach during position adjustment. Simulation results show that, compared to the connected dominating set (CDS) based depth computation algorithm, the proposed algorithm can increase the number of nodes near the sink node and improve network reliability while guaranteeing the network connectivity rate. Moreover, it can balance energy consumption during network operation, further improve the network coverage rate, and reduce energy consumption. PMID:26784193

  19. Dynamic displacement measurement of large-scale structures based on the Lucas-Kanade template tracking algorithm

    NASA Astrophysics Data System (ADS)

    Guo, Jie; Zhu, Chang'an

    2016-01-01

    The development of optics and computer technologies enables the application of vision-based techniques, using digital cameras, to the displacement measurement of large-scale structures. Compared with traditional contact measurements, the vision-based technique allows remote measurement, is non-intrusive, and introduces no additional mass. In this study, a high-speed camera system is developed to perform displacement measurement in real time. The system consists of a high-speed camera and a notebook computer; the camera can capture images at a rate of hundreds of frames per second. To process the captured images on the computer, the Lucas-Kanade template tracking algorithm from the field of computer vision is introduced. Additionally, a modified inverse compositional algorithm is proposed to reduce the computing time of the original algorithm and further improve its efficiency. The modified algorithm can accomplish one displacement extraction within 1 ms without installing any pre-designed target panel on the structure in advance. The accuracy and efficiency of the system in the remote measurement of dynamic displacement are demonstrated in experiments on a motion platform and on a sound barrier of a suspension viaduct. Experimental results show that the proposed algorithm extracts accurate displacement signals and accomplishes vibration measurement of large-scale structures.
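
    The paper's modified inverse compositional algorithm is not packaged in common libraries, but the standard pyramidal Lucas-Kanade tracker it accelerates is available in OpenCV and illustrates the measurement loop. The frame file names, initial point, and calibration comment below are illustrative assumptions.

```python
import cv2
import numpy as np

# Hypothetical frame files and target point, for illustration only.
prev = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)
curr = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)
pts0 = np.array([[[320.0, 240.0]]], dtype=np.float32)      # point to track
pts1, status, err = cv2.calcOpticalFlowPyrLK(
    prev, curr, pts0, None, winSize=(21, 21), maxLevel=3)
if status[0, 0] == 1:
    dx, dy = (pts1 - pts0)[0, 0]
    # Pixel displacement; a calibrated scale factor (mm/pixel) converts it
    # to the physical displacement of the structure.
    print(f"displacement: dx={dx:.3f} px, dy={dy:.3f} px")
```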

  20. Character-embedded watermarking algorithm using the fast Hadamard transform for satellite images

    NASA Astrophysics Data System (ADS)

    Ho, Anthony T. S.; Shen, Jun; Tan, Soon H.

    2003-01-01

    In this paper, a character-embedded watermarking algorithm is proposed for copyright protection of satellite images based on the fast Hadamard transform (FHT). By using a private-key watermarking scheme, the watermark can be retrieved without the original image. To increase the invisibility of the watermark, a visual model based on original image characteristics, such as edges and textures, is incorporated to determine the watermarking strength factor. This factor sets the strength of the embedded watermark bits according to the regional complexity of the image: detailed or coarse areas are assigned more strength and smooth areas less. Error correction coding is also used to increase the reliability of the information bits. A post-processing technique based on log-polar mapping is incorporated to enhance robustness against geometric distortion attacks. Experiments showed that the proposed watermarking scheme survived more than 70% of the attacks from the common benchmarking tool Stirmark, and about 90% of the Checkmark non-geometric attacks. These attacks were performed on a number of SPOT images of size 512×512×8 bit embedded with 32 characters. The proposed FHT algorithm also has the advantage of easy software and hardware implementation, as well as speed, compared to other orthogonal transforms such as the cosine, Fourier, and wavelet transforms.
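
    The speed and implementation advantages of the FHT come from a butterfly structure that uses only additions and subtractions. A minimal sketch of the unnormalized fast Walsh-Hadamard transform (the function name and normalization choice are ours):

```python
import numpy as np

def fwht(a):
    """Unnormalized fast Walsh-Hadamard transform; len(a) must be a power of 2."""
    a = np.asarray(a, dtype=float).copy()
    h, n = 1, len(a)
    while h < n:
        for i in range(0, n, h * 2):
            for j in range(i, i + h):
                x, y = a[j], a[j + h]
                a[j], a[j + h] = x + y, x - y   # add/subtract butterfly only
        h *= 2
    return a                                    # divide by n (or sqrt(n)) to normalize
```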

  1. [Origin of biosphere].

    PubMed

    Levchenko, V F; Starobogatov, Ia I

    2010-01-01

    Concepts of the origin of life on the planet are briefly considered. The problem of the origin of the biosphere is discussed, with the suggestion that the origin of living organisms and of the biosphere are two aspects of the same process. A hypothesis of the "embryosphere" is put forward: the primary medium in which pre-organisms could appear. The ecosystemic approach to the origin of life raises the question of the sources of matter and energy used by primary life, as well as the causes of the biochemical unity observed in all Earth organisms.

  2. [A new algorithm for NIR modeling based on manifold learning].

    PubMed

    Hong, Ming-Jian; Wen, Zhi-Yu; Zhang, Xiao-Hong; Wen, Quan

    2009-07-01

    Manifold learning is a new kind of algorithm originating from the field of machine learning to find the intrinsic dimensionality of numerous and complex data and to extract most important information from the raw data to develop a regression or classification model. The basic assumption of the manifold learning is that the high-dimensional data measured from the same object using some devices must reside on a manifold with much lower dimensions determined by a few properties of the object. While NIR spectra are characterized by their high dimensions and complicated band assignment, the authors may assume that the NIR spectra of the same kind of substances with different chemical concentrations should reside on a manifold with much lower dimensions determined by the concentrations, according to the above assumption. As one of the best known algorithms of manifold learning, locally linear embedding (LLE) further assumes that the underlying manifold is locally linear. So, every data point in the manifold should be a linear combination of its neighbors. Based on the above assumptions, the present paper proposes a new algorithm named least square locally weighted regression (LS-LWR), which is a kind of LWR with weights determined by the least squares instead of a predefined function. Then, the NIR spectra of glucose solutions with various concentrations are measured using a NIR spectrometer and LS-LWR is verified by predicting the concentrations of glucose solutions quantitatively. Compared with the existing algorithms such as principal component regression (PCR) and partial least squares regression (PLSR), the LS-LWR has better predictability measured by the standard error of prediction (SEP) and generates an elegant model with good stability and efficiency.
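
    The paper's LS-LWR determines the local weights themselves by least squares; as a point of reference, classical locally weighted regression with a fixed kernel can be sketched as below. The kernel choice, neighborhood size, and function name are illustrative assumptions, not the LS-LWR procedure.

```python
import numpy as np

def lwr_predict(X, y, x_query, k=15):
    """Locally weighted linear regression at one query point (sketch)."""
    d = np.linalg.norm(X - x_query, axis=1)
    idx = np.argsort(d)[:k]                       # k nearest calibration spectra
    tau = d[idx].max() + 1e-12
    w = np.exp(-(d[idx] / tau) ** 2)              # kernel weights (assumption)
    Xk = np.hstack([np.ones((k, 1)), X[idx]])     # design matrix with intercept
    sw = np.sqrt(w)
    beta, *_ = np.linalg.lstsq(Xk * sw[:, None], y[idx] * sw, rcond=None)
    return np.concatenate(([1.0], x_query)) @ beta
```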

  3. The origin of risk aversion

    PubMed Central

    Zhang, Ruixun; Brennan, Thomas J.; Lo, Andrew W.

    2014-01-01

    Risk aversion is one of the most basic assumptions of economic behavior, but few studies have addressed the question of where risk preferences come from and why they differ from one individual to the next. Here, we propose an evolutionary explanation for the origin of risk aversion. In the context of a simple binary-choice model, we show that risk aversion emerges by natural selection if reproductive risk is systematic (i.e., correlated across individuals in a given generation). In contrast, risk neutrality emerges if reproductive risk is idiosyncratic (i.e., uncorrelated across each given generation). More generally, our framework implies that the degree of risk aversion is determined by the stochastic nature of reproductive rates, and we show that different statistical properties lead to different utility functions. The simplicity and generality of our model suggest that these implications are primitive and cut across species, physiology, and genetic origins. PMID:25453072

  4. Detection algorithm for glass bottle mouth defect by continuous wavelet transform based on machine vision

    NASA Astrophysics Data System (ADS)

    Qian, Jinfang; Zhang, Changjiang

    2014-11-01

    An efficient algorithm based on the continuous wavelet transform combined with prior knowledge, which can be used to detect defects of the glass bottle mouth, is proposed. First, under a ball integral light source, a perfect glass bottle mouth image is obtained by a Japanese Computar camera through an IEEE-1394b interface. A single-threshold method based on the gray level histogram is used to obtain the binary image of the glass bottle mouth. To efficiently suppress noise, a moving average filter is employed to smooth the histogram of the original bottle mouth image, and then the continuous wavelet transform is applied to accurately determine the segmentation threshold. Mathematical morphology operations are used to obtain the normal binary bottle mouth mask. A glass bottle to be inspected is moved into the detection zone by a conveyor belt, and both its mouth image and binary image are obtained by the above method. The binary image is multiplied with the normal bottle mask to obtain a region of interest, from which four parameters (number of connected regions, centroid coordinates, inner circle diameter, and annular region area) are computed. Glass bottle mouth detection rules are designed from these four parameters so as to accurately detect and identify defect conditions. Finally, glass bottles from the Coca-Cola Company are used to verify the proposed algorithm. The experimental results show that the proposed algorithm accurately detects the defect conditions of the glass bottles, with 98% detection accuracy.

  5. Localization Algorithm Based on a Spring Model (LASM) for Large Scale Wireless Sensor Networks.

    PubMed

    Chen, Wanming; Mei, Tao; Meng, Max Q-H; Liang, Huawei; Liu, Yumei; Li, Yangming; Li, Shuai

    2008-03-15

    A navigation method for a lunar rover based on large scale wireless sensor networks is proposed. To obtain high navigation accuracy and a large exploration area, high node localization accuracy and a large network scale are required. However, the computational and communication complexity and time consumption increase greatly with the network scale. A localization algorithm based on a spring model (LASM) is proposed to reduce the computational complexity while maintaining the localization accuracy in large scale sensor networks. The algorithm simulates the dynamics of a physical spring system to estimate the positions of nodes. The sensor nodes are set as particles with masses and connected with neighbor nodes by virtual springs. The virtual springs force the particles to move from randomly set initial positions toward the original positions, i.e., the correct node positions. Therefore, a blind node position can be determined by the LASM algorithm by calculating the related forces with the neighbor nodes. The computational and communication complexity are O(1) for each node, since the number of neighbor nodes does not increase proportionally with the network scale. Three patches are proposed to avoid local optima, kick out bad nodes, and deal with node variation. Simulation results show that the computational and communication complexity remain almost constant despite the increase of the network scale. The time consumption has also been shown to remain almost constant, since the calculation steps are almost unrelated to the network scale.
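
    One relaxation step of the spring idea can be sketched as follows: each measured link acts as a virtual spring whose force is proportional to the difference between the estimated and measured link lengths. The data layout, spring constant, and function name are our illustrative assumptions.

```python
import numpy as np

def lasm_step(pos, anchors, links, measured, k=0.1):
    """One relaxation step of a spring-model localization (sketch).

    pos: (n, 2) current estimates; anchors: {index: fixed position};
    links: iterable of (i, j) pairs; measured: {(i, j): measured distance}.
    """
    force = np.zeros_like(pos)
    for i, j in links:
        d = pos[j] - pos[i]
        dist = np.linalg.norm(d) + 1e-12
        f = k * (dist - measured[(i, j)]) * d / dist   # Hooke-like pull/push
        force[i] += f
        force[j] -= f
    pos = pos + force
    for i, p in anchors.items():                       # anchor nodes stay fixed
        pos[i] = p
    return pos
```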

  6. Insect-Inspired Navigation Algorithm for an Aerial Agent Using Satellite Imagery

    PubMed Central

    Gaffin, Douglas D.; Dewar, Alexander; Graham, Paul; Philippides, Andrew

    2015-01-01

    Humans have long marveled at the ability of animals to navigate swiftly, accurately, and across long distances. Many mechanisms have been proposed for how animals acquire, store, and retrace learned routes, yet many of these hypotheses appear incongruent with behavioral observations and the animals’ neural constraints. The “Navigation by Scene Familiarity Hypothesis” proposed originally for insect navigation offers an elegantly simple solution for retracing previously experienced routes without the need for complex neural architectures and memory retrieval mechanisms. This hypothesis proposes that an animal can return to a target location by simply moving toward the most familiar scene at any given point. Proof of concept simulations have used computer-generated ant’s-eye views of the world, but here we test the ability of scene familiarity algorithms to navigate training routes across satellite images extracted from Google Maps. We find that Google satellite images are so rich in visual information that familiarity algorithms can be used to retrace even tortuous routes with low-resolution sensors. We discuss the implications of these findings not only for animal navigation but also for the potential development of visual augmentation systems and robot guidance algorithms. PMID:25874764
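
    The core familiarity rule is simple enough to sketch: at each step, compare the views available from candidate headings against the stack of views stored along the training route, and move toward the best match. The pixel-wise sum-of-squared-differences familiarity measure below is one common choice; the function names and view interface are our assumptions.

```python
import numpy as np

def most_familiar_heading(view_fn, memory, headings):
    """Return the candidate heading whose view best matches stored route views."""
    best_h, best_err = None, np.inf
    for h in headings:
        v = view_fn(h).ravel()                   # view along candidate heading
        err = min(np.sum((m.ravel() - v) ** 2) for m in memory)
        if err < best_err:                       # smaller difference = more familiar
            best_h, best_err = h, err
    return best_h
```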

  7. Improved hybrid optimization algorithm for 3D protein structure prediction.

    PubMed

    Zhou, Changjun; Hou, Caixia; Wei, Xiaopeng; Zhang, Qiang

    2014-07-01

    A new improved hybrid optimization algorithm, the PGATS algorithm, based on the toy off-lattice model, is presented for three-dimensional protein structure prediction problems. The algorithm combines particle swarm optimization (PSO), the genetic algorithm (GA), and tabu search (TS), together with several improved strategies: a stochastic disturbance factor is introduced into the particle swarm optimization to improve its search ability; the crossover and mutation operations of the genetic algorithm are replaced by a random linear method; and the tabu search algorithm is improved by appending a mutation operator. Through this combination of strategies and algorithms, protein structure prediction (PSP) in a 3D off-lattice model is achieved. The PSP problem is NP-hard, but it can be cast as a global optimization problem with many extrema and many parameters; this is the theoretical basis of the hybrid optimization algorithm proposed in this paper. The algorithm combines local search and global search, overcoming the shortcomings of any single algorithm and giving full play to the advantages of each. It is validated on the widely used standard Fibonacci sequences and on real protein sequences. Experiments show that the proposed method outperforms the single algorithms in the accuracy of the computed protein sequence energy value, proving it an effective way to predict the structure of proteins.

  8. Software For Genetic Algorithms

    NASA Technical Reports Server (NTRS)

    Wang, Lui; Bayer, Steve E.

    1992-01-01

    SPLICER computer program is genetic-algorithm software tool used to solve search and optimization problems. Provides underlying framework and structure for building genetic-algorithm application program. Written in Think C.

  9. Algorithm-development activities

    NASA Technical Reports Server (NTRS)

    Carder, Kendall L.

    1994-01-01

    The task of algorithm-development activities at USF continues. The algorithm for determining chlorophyll alpha concentration, (Chl alpha) and gelbstoff absorption coefficient for SeaWiFS and MODIS-N radiance data is our current priority.

  10. Testing block subdivision algorithms on block designs

    NASA Astrophysics Data System (ADS)

    Wiseman, Natalie; Patterson, Zachary

    2016-01-01

    Integrated land use-transportation models predict future transportation demand taking into account how households and firms arrange themselves partly as a function of the transportation system. Recent integrated models require parcels as inputs and produce household and employment predictions at the parcel scale. Block subdivision algorithms automatically generate parcel patterns within blocks. Evaluating block subdivision algorithms is done by way of generating parcels and comparing them to those in a parcel database. Three block subdivision algorithms are evaluated on how closely they reproduce parcels of different block types found in a parcel database from Montreal, Canada. While the authors who developed each of the algorithms have evaluated them, they have used their own metrics and block types to evaluate their own algorithms. This makes it difficult to compare their strengths and weaknesses. The contribution of this paper is in resolving this difficulty with the aim of finding a better algorithm suited to subdividing each block type. The proposed hypothesis is that given the different approaches that block subdivision algorithms take, it's likely that different algorithms are better adapted to subdividing different block types. To test this, a standardized block type classification is used that consists of mutually exclusive and comprehensive categories. A statistical method is used for finding a better algorithm and the probability it will perform well for a given block type. Results suggest the oriented bounding box algorithm performs better for warped non-uniform sites, as well as gridiron and fragmented uniform sites. It also produces more similar parcel areas and widths. The Generalized Parcel Divider 1 algorithm performs better for gridiron non-uniform sites. The Straight Skeleton algorithm performs better for loop and lollipop networks as well as fragmented non-uniform and warped uniform sites. It also produces more similar parcel shapes and patterns.

  11. Hierarchical fractional-step approximations and parallel kinetic Monte Carlo algorithms

    SciTech Connect

    Arampatzis, Giorgos; Katsoulakis, Markos A.; Plechac, Petr; Taufer, Michela; Xu, Lifan

    2012-10-01

    We present a mathematical framework for constructing and analyzing parallel algorithms for lattice kinetic Monte Carlo (KMC) simulations. The resulting algorithms have the capacity to simulate a wide range of spatio-temporal scales in spatially distributed, non-equilibrium physiochemical processes with complex chemistry and transport micro-mechanisms. Rather than focusing on constructing exactly the stochastic trajectories, our approach relies on approximating the evolution of observables, such as density, coverage, correlations and so on. More specifically, we develop a spatial domain decomposition of the Markov operator (generator) that describes the evolution of all observables according to the kinetic Monte Carlo algorithm. This domain decomposition corresponds to a decomposition of the Markov generator into a hierarchy of operators and can be tailored to specific hierarchical parallel architectures such as multi-core processors or clusters of Graphical Processing Units (GPUs). Based on this operator decomposition, we formulate parallel Fractional step kinetic Monte Carlo algorithms by employing the Trotter Theorem and its randomized variants; these schemes, (a) are partially asynchronous on each fractional step time-window, and (b) are characterized by their communication schedule between processors. The proposed mathematical framework allows us to rigorously justify the numerical and statistical consistency of the proposed algorithms, showing the convergence of our approximating schemes to the original serial KMC. The approach also provides a systematic evaluation of different processor communicating schedules. We carry out a detailed benchmarking of the parallel KMC schemes using available exact solutions, for example, in Ising-type systems and we demonstrate the capabilities of the method to simulate complex spatially distributed reactions at very large scales on GPUs. Finally, we discuss work load balancing between processors and propose a re

  12. Planning Readings: A Comparative Exploration of Basic Algorithms

    ERIC Educational Resources Information Center

    Piater, Justus H.

    2009-01-01

    Conventional introduction to computer science presents individual algorithmic paradigms in the context of specific, prototypical problems. To complement this algorithm-centric instruction, this study additionally advocates problem-centric instruction. I present an original problem drawn from students' life that is simply stated but provides rich…

  13. Why different passive microwave algorithms give different soil moisture retrievals

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Several algorithms have been used to retrieve surface soil moisture from brightness temperature observations provided by low frequency microwave satellite sensors such as the Advanced Microwave Scanning Radiometer on NASA EOS satellite Aqua (AMSR-E). Most of these algorithms have originated from the...

  14. A region labeling algorithm based on block

    NASA Astrophysics Data System (ADS)

    Wang, Jing

    2009-10-01

    The time performance of region labeling algorithms is important for image processing; however, common region labeling algorithms cannot meet the requirements of real-time image processing. In this paper, a technique that uses blocks to record connected areas is proposed. With this technique, connectivity and target-related information can be computed during a single image scan. It records the coordinates of edge pixels, for both outer and inner edges, together with the label, and from these it calculates each connected area's centroid, area, and gray level. Compared with others, this block-based region labeling algorithm is more efficient and can well meet the time requirements of real-time processing. Experimental results validate the correctness and efficiency of the algorithm and show that it can detect any connected areas in binary images containing various complex patterns. The block labeling algorithm is now used in a real-time image processing program.
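
    As a point of comparison for the block-based method, the per-region quantities the paper extracts (centroid, area, gray level) can be obtained with scipy's standard connected-component labeling; the random stand-in images below are placeholders, not the paper's data.

```python
import numpy as np
from scipy import ndimage

binary = np.random.rand(64, 64) > 0.7            # stand-in binary image
gray = np.random.rand(64, 64)                    # stand-in gray-level image
labels, n = ndimage.label(binary)                # reference two-pass labeling
index = range(1, n + 1)
centroids = ndimage.center_of_mass(binary, labels, index)
areas = ndimage.sum(binary, labels, index)       # pixel counts per region
mean_gray = ndimage.mean(gray, labels, index)
```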

  15. Automatic ionospheric layers detection: Algorithms analysis

    NASA Astrophysics Data System (ADS)

    Molina, María G.; Zuccheretti, Enrico; Cabrera, Miguel A.; Bianchi, Cesidio; Sciacca, Umberto; Baskaradas, James

    2016-03-01

    Vertical sounding is a widely used technique to obtain ionosphere measurements, such as an estimate of virtual height versus scanning frequency. It is performed by a high frequency radar for geophysical applications called an "ionospheric sounder" (or "ionosonde"). Radar detection depends mainly on target characteristics. While the behavior of several kinds of targets and the corresponding echo detection algorithms have been studied, a survey to identify a suitable algorithm for the ionospheric sounder had yet to be carried out. This paper focuses on automatic echo detection algorithms implemented specifically for an ionospheric sounder, and the target-specific characteristics were studied as well. Adaptive threshold detection algorithms are proposed, compared to the currently implemented algorithm, and tested using actual data obtained from the Advanced Ionospheric Sounder (AIS-INGV) at the Rome Ionospheric Observatory. Different cases of study were selected according to typical ionospheric and detection conditions.

  16. Optimal Battery Sizing in Photovoltaic Based Distributed Generation Using Enhanced Opposition-Based Firefly Algorithm for Voltage Rise Mitigation

    PubMed Central

    Wong, Ling Ai; Shareef, Hussain; Mohamed, Azah; Ibrahim, Ahmad Asrul

    2014-01-01

    This paper presents the application of enhanced opposition-based firefly algorithm in obtaining the optimal battery energy storage systems (BESS) sizing in photovoltaic generation integrated radial distribution network in order to mitigate the voltage rise problem. Initially, the performance of the original firefly algorithm is enhanced by utilizing the opposition-based learning and introducing inertia weight. After evaluating the performance of the enhanced opposition-based firefly algorithm (EOFA) with fifteen benchmark functions, it is then adopted to determine the optimal size for BESS. Two optimization processes are conducted where the first optimization aims to obtain the optimal battery output power on hourly basis and the second optimization aims to obtain the optimal BESS capacity by considering the state of charge constraint of BESS. The effectiveness of the proposed method is validated by applying the algorithm to the 69-bus distribution system and by comparing the performance of EOFA with conventional firefly algorithm and gravitational search algorithm. Results show that EOFA has the best performance comparatively in terms of mitigating the voltage rise problem. PMID:25054184
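
    The two enhancements are generic and easy to sketch: opposition-based learning doubles the initial candidate pool with "opposite" points lo + hi - x, and an inertia weight damps the position term of the firefly update. The sketch below shows both ingredients with illustrative constants; it is not the paper's exact EOFA update, and the BESS sizing objective is abstracted as a generic function f.

```python
import numpy as np

def firefly(f, dim, n=25, iters=100, lo=-5.0, hi=5.0):
    """Firefly search with opposition-based init and inertia weight (sketch)."""
    rng = np.random.default_rng(1)
    x = rng.uniform(lo, hi, (n, dim))
    pool = np.vstack([x, lo + hi - x])            # add opposite points
    cost = np.apply_along_axis(f, 1, pool)
    x = pool[np.argsort(cost)[:n]]                # keep the better half
    cost = np.apply_along_axis(f, 1, x)
    beta0, gamma, alpha = 1.0, 1.0, 0.3
    for k in range(iters):
        w = 0.9 - 0.5 * k / iters                 # inertia weight (assumption)
        for i in range(n):
            for j in range(n):
                if cost[j] < cost[i]:             # move i toward brighter j
                    beta = beta0 * np.exp(-gamma * np.sum((x[i] - x[j]) ** 2))
                    x[i] = np.clip(w * x[i] + beta * (x[j] - x[i])
                                   + alpha * (rng.random(dim) - 0.5), lo, hi)
                    cost[i] = f(x[i])
    return x[cost.argmin()]
```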

  17. Optimal battery sizing in photovoltaic based distributed generation using enhanced opposition-based firefly algorithm for voltage rise mitigation.

    PubMed

    Wong, Ling Ai; Shareef, Hussain; Mohamed, Azah; Ibrahim, Ahmad Asrul

    2014-01-01

    This paper presents the application of enhanced opposition-based firefly algorithm in obtaining the optimal battery energy storage systems (BESS) sizing in photovoltaic generation integrated radial distribution network in order to mitigate the voltage rise problem. Initially, the performance of the original firefly algorithm is enhanced by utilizing the opposition-based learning and introducing inertia weight. After evaluating the performance of the enhanced opposition-based firefly algorithm (EOFA) with fifteen benchmark functions, it is then adopted to determine the optimal size for BESS. Two optimization processes are conducted where the first optimization aims to obtain the optimal battery output power on hourly basis and the second optimization aims to obtain the optimal BESS capacity by considering the state of charge constraint of BESS. The effectiveness of the proposed method is validated by applying the algorithm to the 69-bus distribution system and by comparing the performance of EOFA with conventional firefly algorithm and gravitational search algorithm. Results show that EOFA has the best performance comparatively in terms of mitigating the voltage rise problem.

  18. Accurate Finite Difference Algorithms

    NASA Technical Reports Server (NTRS)

    Goodrich, John W.

    1996-01-01

    Two families of finite difference algorithms for computational aeroacoustics are presented and compared. All of the algorithms are single-step explicit methods, they have the same order of accuracy in both space and time, with examples up to eleventh order, and they have multidimensional extensions. One of the algorithm families has spectral-like high resolution. Propagation with high order and high resolution algorithms can produce accurate results after O(10^6) periods of propagation with eight grid points per wavelength.

  19. Quantum algorithms: an overview

    NASA Astrophysics Data System (ADS)

    Montanaro, Ashley

    2016-01-01

    Quantum computers are designed to outperform standard computers by running quantum algorithms. Areas in which quantum algorithms can be applied include cryptography, search and optimisation, simulation of quantum systems and solving large systems of linear equations. Here we briefly survey some known quantum algorithms, with an emphasis on a broad overview of their applications rather than their technical details. We include a discussion of recent developments and near-term applications of quantum algorithms.

  20. INSENS classification algorithm report

    SciTech Connect

    Hernandez, J.E.; Frerking, C.J.; Myers, D.W.

    1993-07-28

    This report describes a new algorithm developed for the Immigration and Naturalization Service (INS) in support of the INSENS project for classifying vehicles and pedestrians using seismic data. This algorithm is less sensitive to nuisance alarms due to environmental events than the previous algorithm. Furthermore, the algorithm is simple enough that it can be implemented in the 8-bit microprocessor used in the INSENS system.

  1. Comparison of algorithms for automatic border detection of melanoma in dermoscopy images

    NASA Astrophysics Data System (ADS)

    Srinivasa Raghavan, Sowmya; Kaur, Ravneet; LeAnder, Robert

    2016-09-01

    Melanoma is one of the most rapidly accelerating cancers in the world [1]. Early diagnosis is critical to an effective cure. We propose a new algorithm for more accurately detecting melanoma borders in dermoscopy images. Proper border detection requires eliminating occlusions like hair and bubbles by processing the original image. The preprocessing step involves transforming the RGB image to the CIE L*u*v* color space, in order to decouple brightness from color information, then increasing contrast using contrast-limited adaptive histogram equalization (CLAHE), followed by artifact removal using a Gaussian filter. After preprocessing, the Chan-Vese technique segments the preprocessed image to create a lesion mask, which undergoes a morphological closing operation. Next, the largest central blob in the lesion is detected, after which the blob is dilated to generate the output mask. Finally, the automatically generated mask is compared to the manual mask by calculating the XOR error [3]. Our border detection algorithm was developed using training and test sets of 30 and 20 images, respectively. This detection method was compared to the SRM method [4] by calculating the average XOR error for each of the two algorithms. The average error for the test images was 0.10 using the new algorithm and 0.99 using the SRM method. Comparing the average error values produced by the two algorithms, the average XOR error for our technique is lower than that of the SRM method, implying that the new algorithm detects melanoma borders more accurately than the SRM algorithm.
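
    The XOR error used for evaluation is the count of pixels where the automatic and manual masks disagree, normalized by the manual lesion area. A minimal sketch, assuming that normalization convention:

```python
import numpy as np

def xor_error(auto_mask, manual_mask):
    """Fraction of disagreeing pixels, normalized by the manual lesion area."""
    auto = auto_mask.astype(bool)
    manual = manual_mask.astype(bool)
    return np.logical_xor(auto, manual).sum() / manual.sum()
```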

  2. Musical emotions: Functions, origins, evolution

    NASA Astrophysics Data System (ADS)

    Perlovsky, Leonid

    2010-03-01

    Theories of music origins and the role of musical emotions in the mind are reviewed. Most existing theories contradict each other, and cannot explain mechanisms or roles of musical emotions in workings of the mind, nor evolutionary reasons for music origins. Music seems to be an enigma. Nevertheless, a synthesis of cognitive science and mathematical models of the mind has been proposed describing a fundamental role of music in the functioning and evolution of the mind, consciousness, and cultures. The review considers ancient theories of music as well as contemporary theories advanced by leading authors in this field. It addresses one hypothesis that promises to unify the field and proposes a theory of musical origin based on a fundamental role of music in cognition and evolution of consciousness and culture. We consider a split in the vocalizations of proto-humans into two types: one less emotional and more concretely-semantic, evolving into language, and the other preserving emotional connections along with semantic ambiguity, evolving into music. The proposed hypothesis departs from other theories in considering specific mechanisms of the mind-brain, which required the evolution of music parallel with the evolution of cultures and languages. Arguments are reviewed that the evolution of language toward becoming the semantically powerful tool of today required emancipation from emotional encumbrances. The opposite, no less powerful mechanisms required a compensatory evolution of music toward more differentiated and refined emotionality. The need for refined music in the process of cultural evolution is grounded in fundamental mechanisms of the mind. This is why today's human mind and cultures cannot exist without today's music. The reviewed hypothesis gives a basis for future analysis of why different evolutionary paths of languages were paralleled by different evolutionary paths of music. Approaches toward experimental verification of this hypothesis in

  3. Musical emotions: functions, origins, evolution.

    PubMed

    Perlovsky, Leonid

    2010-03-01

    Theories of music origins and the role of musical emotions in the mind are reviewed. Most existing theories contradict each other, and cannot explain mechanisms or roles of musical emotions in workings of the mind, nor evolutionary reasons for music origins. Music seems to be an enigma. Nevertheless, a synthesis of cognitive science and mathematical models of the mind has been proposed describing a fundamental role of music in the functioning and evolution of the mind, consciousness, and cultures. The review considers ancient theories of music as well as contemporary theories advanced by leading authors in this field. It addresses one hypothesis that promises to unify the field and proposes a theory of musical origin based on a fundamental role of music in cognition and evolution of consciousness and culture. We consider a split in the vocalizations of proto-humans into two types: one less emotional and more concretely-semantic, evolving into language, and the other preserving emotional connections along with semantic ambiguity, evolving into music. The proposed hypothesis departs from other theories in considering specific mechanisms of the mind-brain, which required the evolution of music parallel with the evolution of cultures and languages. Arguments are reviewed that the evolution of language toward becoming the semantically powerful tool of today required emancipation from emotional encumbrances. The opposite, no less powerful mechanisms required a compensatory evolution of music toward more differentiated and refined emotionality. The need for refined music in the process of cultural evolution is grounded in fundamental mechanisms of the mind. This is why today's human mind and cultures cannot exist without today's music. The reviewed hypothesis gives a basis for future analysis of why different evolutionary paths of languages were paralleled by different evolutionary paths of music. Approaches toward experimental verification of this hypothesis in

  4. Clustering algorithm studies

    NASA Astrophysics Data System (ADS)

    Graf, Norman A.

    2001-07-01

    An object-oriented framework for undertaking clustering algorithm studies has been developed. We present here the definitions for the abstract Cells and Clusters as well as the interface for the algorithm. We intend to use this framework to investigate the interplay between various clustering algorithms and the resulting jet reconstruction efficiency and energy resolutions to assist in the design of the calorimeter detector.

  5. An Algorithm of Making Switching Operation Sequence for Fault Testing using Tree Structured Data

    NASA Astrophysics Data System (ADS)

    Shiota, Masatoshi; Komai, Kenji; Yamanishi, Asao

    This paper describes an algorithm for generating switching operation sequences for fault testing using tree-structured data. When the faulty section cannot be isolated exactly, each candidate section is energized to test whether it contains the fault. The proposed algorithm can determine an appropriate order of components for fault testing and a valid switching operation sequence for each test. An example shows the effectiveness of the proposed algorithm, which is used at actual control centers.

  6. Algorithm for Increasing Traffic Capacity of Level-Crossing Using Scheduling Theory and Intelligent Embedded Devices

    NASA Astrophysics Data System (ADS)

    Alps, Ivars; Gorobetz, Mikhail; Levchenkov, Anatoly

    2011-01-01

    In this paper the authors present a heuristic algorithm for increasing level-crossing traffic capacity. A genetic algorithm is proposed for this task. To control train speed and operate the level-crossing barriers, a control centre is created and intelligent embedded devices are installed on railway vehicles. The algorithm is tested in computer experiments, whose results show great promise for railway schedule fulfilment and increased level-crossing traffic capacity using the proposed algorithm.

  7. An enhanced mode shape identification algorithm

    NASA Technical Reports Server (NTRS)

    Roemer, Michael J.; Mook, D. Joseph

    1989-01-01

    A mode shape identification algorithm is developed which is characterized by a low sensitivity to measurement noise and a high accuracy of mode identification. The algorithm proposed here is also capable of identifying the mode shapes of structures with significant damping. The combined results indicate that mode shape identification is much more dependent on measurement noise than identification of natural frequencies. Accurate detection of modal parameters and mode shapes is demonstrated for modes with damping ratios exceeding 15 percent.

  8. Adaptive Cuckoo Search Algorithm for Unconstrained Optimization

    PubMed Central

    2014-01-01

    Modification of the intensification and diversification approaches in the recently developed cuckoo search algorithm (CSA) is performed. The alteration involves the implementation of adaptive step size adjustment strategy, and thus enabling faster convergence to the global optimal solutions. The feasibility of the proposed algorithm is validated against benchmark optimization functions, where the obtained results demonstrate a marked improvement over the standard CSA, in all the cases. PMID:25298971
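
    A sketch of the adaptive-step idea on top of a standard cuckoo search: Lévy-flight moves are scaled by a factor that decays over iterations, so early steps explore widely and late steps refine. The decay schedule, Lévy exponent, and abandonment fraction below are illustrative assumptions, not the paper's exact strategy.

```python
import numpy as np
from math import gamma, sin, pi

def levy_step(rng, dim, beta=1.5):
    """Mantegna's algorithm for Levy-distributed step directions."""
    sigma = (gamma(1 + beta) * sin(pi * beta / 2)
             / (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma, dim)
    v = rng.normal(0.0, 1.0, dim)
    return u / np.abs(v) ** (1 / beta)

def cuckoo_search(f, dim, n=20, iters=200, lo=-5.0, hi=5.0):
    """Cuckoo search with a step-size factor that decays over iterations."""
    rng = np.random.default_rng(2)
    nests = rng.uniform(lo, hi, (n, dim))
    cost = np.apply_along_axis(f, 1, nests)
    for k in range(iters):
        alpha = 0.01 ** (k / iters)               # adaptive step: 1.0 -> 0.01
        best = nests[cost.argmin()]
        for i in range(n):
            trial = np.clip(nests[i] + alpha * levy_step(rng, dim)
                            * (nests[i] - best), lo, hi)
            c = f(trial)
            if c < cost[i]:
                nests[i], cost[i] = trial, c
        worst = np.argsort(cost)[-n // 4:]        # abandon a fraction of nests
        nests[worst] = rng.uniform(lo, hi, (len(worst), dim))
        cost[worst] = np.apply_along_axis(f, 1, nests[worst])
    return nests[cost.argmin()]
```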

  9. Self-organization and clustering algorithms

    NASA Technical Reports Server (NTRS)

    Bezdek, James C.

    1991-01-01

    Kohonen's feature maps approach to clustering is often likened to the k or c-means clustering algorithms. Here, the author identifies some similarities and differences between the hard and fuzzy c-Means (HCM/FCM) or ISODATA algorithms and Kohonen's self-organizing approach. The author concludes that some differences are significant, but at the same time there may be some important unknown relationships between the two methodologies. Several avenues of research are proposed.

  10. Internal labelling problem: an algorithmic procedure

    NASA Astrophysics Data System (ADS)

    Campoamor-Stursberg, Rutwig

    2011-01-01

    Combining the decomposition of Casimir operators induced by the embedding of a subalgebra into a semisimple Lie algebra with the properties of commutators of subgroup scalars, an analytical algorithm for the computation of missing label operators with the commutativity requirement is proposed. Two new criteria for subgroups scalars to commute are given. The algorithm is completed with a recursive method to construct orthonormal bases of states. As examples to illustrate the procedure, four labelling problems are explicitly studied.

  11. Adaptive cuckoo search algorithm for unconstrained optimization.

    PubMed

    Ong, Pauline

    2014-01-01

    Modification of the intensification and diversification approaches in the recently developed cuckoo search algorithm (CSA) is performed. The alteration involves the implementation of adaptive step size adjustment strategy, and thus enabling faster convergence to the global optimal solutions. The feasibility of the proposed algorithm is validated against benchmark optimization functions, where the obtained results demonstrate a marked improvement over the standard CSA, in all the cases.

  12. Jamming cancellation algorithm for wideband imaging radar

    NASA Astrophysics Data System (ADS)

    Zheng, Yibin; Yu, Kai-Bor

    1998-10-01

    We describe a jamming cancellation algorithm for wide-band imaging radar. After reviewing high range resolution imaging principle, several key factors affecting jamming cancellation performances, such as the 'instantaneous narrow-band' assumption, bandwidth, de-chirped interference, are formulated and analyzed. Some numerical simulation results, using a hypothetical phased array radar and synthetic point targets, are presented. The results demonstrated the effectiveness of the proposed algorithm.

  13. A Floor-Map-Aided WiFi/Pseudo-Odometry Integration Algorithm for an Indoor Positioning System

    PubMed Central

    Wang, Jian; Hu, Andong; Liu, Chunyan; Li, Xin

    2015-01-01

    This paper proposes a scheme for indoor positioning by fusing floor map, WiFi and smartphone sensor data to provide meter-level positioning without additional infrastructure. A topology-constrained K nearest neighbor (KNN) algorithm based on a floor map layout provides the coordinates required to integrate WiFi data with pseudo-odometry (P-O) measurements simulated using a pedestrian dead reckoning (PDR) approach. One method of further improving the positioning accuracy is to use a more effective multi-threshold step detection algorithm, as proposed by the authors. The “go and back” phenomenon caused by incorrect matching of the reference points (RPs) of a WiFi algorithm is eliminated using an adaptive fading-factor-based extended Kalman filter (EKF), taking WiFi positioning coordinates, P-O measurements and fused heading angles as observations. The “cross-wall” problem is solved based on the development of a floor-map-aided particle filter algorithm by weighting the particles, thereby also eliminating the gross-error effects originating from WiFi or P-O measurements. The performance observed in a field experiment performed on the fourth floor of the School of Environmental Science and Spatial Informatics (SESSI) building on the China University of Mining and Technology (CUMT) campus confirms that the proposed scheme can reliably achieve meter-level positioning. PMID:25811224
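
    The multi-threshold step detector can be sketched as a peak picker on the accelerometer magnitude that accepts a peak only if it falls within a plausible amplitude band and respects a minimum inter-step interval. The threshold values below are illustrative assumptions, not the authors' calibrated ones.

```python
import numpy as np

def detect_steps(acc_mag, fs, min_peak=10.5, max_peak=13.0, min_gap=0.25):
    """Multi-threshold step detection on accelerometer magnitude (sketch)."""
    steps, last = [], -np.inf
    for i in range(1, len(acc_mag) - 1):
        is_peak = acc_mag[i] > acc_mag[i - 1] and acc_mag[i] > acc_mag[i + 1]
        t = i / fs
        if (is_peak and min_peak < acc_mag[i] < max_peak
                and t - last >= min_gap):        # amplitude band + timing gate
            steps.append(i)
            last = t
    return steps
```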

  14. A MARKOV CHAIN MONTE CARLO ALGORITHM FOR ANALYSIS OF LOW SIGNAL-TO-NOISE COSMIC MICROWAVE BACKGROUND DATA

    SciTech Connect

    Jewell, J. B.; O'Dwyer, I. J.; Huey, Greg; Gorski, K. M.; Eriksen, H. K.; Wandelt, B. D. E-mail: h.k.k.eriksen@astro.uio.no

    2009-05-20

    We present a new Markov Chain Monte Carlo (MCMC) algorithm for cosmic microwave background (CMB) analysis in the low signal-to-noise regime. This method builds on and complements the previously described CMB Gibbs sampler, and effectively solves the low signal-to-noise inefficiency problem of the direct Gibbs sampler. The new algorithm is a simple Metropolis-Hastings sampler with a general proposal rule for the power spectrum, C_l, followed by a particular deterministic rescaling operation of the sky signal, s. The acceptance probability for this joint move depends on the sky map only through the difference of χ² between the original and proposed sky sample, which is close to unity in the low signal-to-noise regime. The algorithm is completed by alternating this move with a standard Gibbs move. Together, these two proposals constitute a computationally efficient algorithm for mapping out the full joint CMB posterior, both in the high and low signal-to-noise regimes.

  15. A floor-map-aided WiFi/pseudo-odometry integration algorithm for an indoor positioning system.

    PubMed

    Wang, Jian; Hu, Andong; Liu, Chunyan; Li, Xin

    2015-03-24

    This paper proposes a scheme for indoor positioning by fusing floor map, WiFi and smartphone sensor data to provide meter-level positioning without additional infrastructure. A topology-constrained K nearest neighbor (KNN) algorithm based on a floor map layout provides the coordinates required to integrate WiFi data with pseudo-odometry (P-O) measurements simulated using a pedestrian dead reckoning (PDR) approach. One method of further improving the positioning accuracy is to use a more effective multi-threshold step detection algorithm, as proposed by the authors. The "go and back" phenomenon caused by incorrect matching of the reference points (RPs) of a WiFi algorithm is eliminated using an adaptive fading-factor-based extended Kalman filter (EKF), taking WiFi positioning coordinates, P-O measurements and fused heading angles as observations. The "cross-wall" problem is solved based on the development of a floor-map-aided particle filter algorithm by weighting the particles, thereby also eliminating the gross-error effects originating from WiFi or P-O measurements. The performance observed in a field experiment performed on the fourth floor of the School of Environmental Science and Spatial Informatics (SESSI) building on the China University of Mining and Technology (CUMT) campus confirms that the proposed scheme can reliably achieve meter-level positioning.

  16. Chemical Origins of Life

    ERIC Educational Resources Information Center

    Fox, J. Lawrence

    1972-01-01

    Reviews ideas and evidence bearing on the origin of life. Shows that evidence to support modifications of Oparin's theories of the origin of biological constituents from inorganic materials is accumulating, and that the necessary components are readily obtained from the simple gases found in the universe. (AL)

  17. The Moon's Origin.

    ERIC Educational Resources Information Center

    Cadogan, Peter

    1983-01-01

    Presents findings and conclusions about the origin of the moon, favoring the capture hypothesis of lunar origin. Advantage of the hypothesis is that it allows the moon to have been formed elsewhere, specifically in a hotter part of the solar nebula, accounting for chemical differences between earth and moon. (JN)

  18. Originalism in the Classroom

    ERIC Educational Resources Information Center

    Forte, David F.

    2011-01-01

    In this article, the author provides a detailed legal history of originalism and investigates whether, and to what extent, originalism is a part of law school teaching on the Constitution. He shares the results of an examination of the leading constitutional law textbooks used in the top fifty law schools and a selection of responses gathered from…

  19. A Minimal Periods Algorithm with Applications

    NASA Astrophysics Data System (ADS)

    Xu, Zhi

    Kosaraju in "Computation of squares in a string" briefly described a linear-time algorithm for computing the minimal squares starting at each position in a word. Using the same construction of suffix trees, we generalize his result and describe in detail how to compute the minimal α-power, with a period of length longer than s, starting at each position in a word w for an arbitrary exponent α > 1 and integer s ≥ 0. The algorithm runs in O(α|w|) time for s = 0 and in O(|w|²) time otherwise. We provide a complete proof of the correctness and computational complexity of the algorithm. The algorithm can be used to detect certain types of pseudo-patterns in words, which was our original goal in studying this generalization.
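
    To fix ideas, the α = 2, s = 0 case (the minimal square starting at each position) can be computed by the naive check below; this quadratic-per-position sketch is only for illustration and does not reflect the paper's suffix-tree construction.

        def minimal_square_lengths(w):
            # For each position i of w, return the length of the shortest
            # square (a factor xx with x non-empty) starting at i, or None.
            n = len(w)
            out = []
            for i in range(n):
                best = None
                for p in range(1, (n - i) // 2 + 1):   # candidate period length
                    if w[i:i + p] == w[i + p:i + 2 * p]:
                        best = 2 * p                   # first hit is minimal
                        break
                out.append(best)
            return out

        print(minimal_square_lengths("abaabaa"))  # [6, 6, 2, None, None, 2, None]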

  20. A generating set direct search augmented Lagrangian algorithm for optimization with a combination of general and linear constraints.

    SciTech Connect

    Lewis, Robert Michael (College of William and Mary, Williamsburg, VA); Torczon, Virginia Joanne (College of William and Mary, Williamsburg, VA); Kolda, Tamara Gibson

    2006-08-01

    We consider the solution of nonlinear programs in the case where derivatives of the objective function and nonlinear constraints are unavailable. To solve such problems, we propose an adaptation of a method due to Conn, Gould, Sartenaer, and Toint that proceeds by approximately minimizing a succession of linearly constrained augmented Lagrangians. Our modification is to use a derivative-free generating set direct search algorithm to solve the linearly constrained subproblems. The stopping criterion proposed by Conn, Gould, Sartenaer and Toint for the approximate solution of the subproblems requires explicit knowledge of derivatives. Such information is presumed absent in the generating set search method we employ. Instead, we show that stationarity results for linearly constrained generating set search methods provide a derivative-free stopping criterion, based on a step-length control parameter, that is sufficient to preserve the convergence properties of the original augmented Lagrangian algorithm.

  1. Novel image compression-encryption hybrid algorithm based on key-controlled measurement matrix in compressive sensing

    NASA Astrophysics Data System (ADS)

    Zhou, Nanrun; Zhang, Aidi; Zheng, Fen; Gong, Lihua

    2014-10-01

    The existing ways to encrypt images based on compressive sensing usually treat the whole measurement matrix as the key, which renders the key too large to distribute and memorize or store. To solve this problem, a new image compression-encryption hybrid algorithm is proposed to realize compression and encryption simultaneously, where the key is easily distributed, stored or memorized. The input image is divided into 4 blocks to compress and encrypt, then the pixels of the two adjacent blocks are exchanged randomly by random matrices. The measurement matrices in compressive sensing are constructed by utilizing the circulant matrices and controlling the original row vectors of the circulant matrices with logistic map. And the random matrices used in random pixel exchanging are bound with the measurement matrices. Simulation results verify the effectiveness, security of the proposed algorithm and the acceptable compression performance.
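
    A minimal sketch of the key-controlled construction, assuming a logistic map keyed by (x0, μ) generates the first row of a circulant matrix; the burn-in, the mapping to [-1, 1] and the normalization are illustrative choices rather than the paper's exact recipe.

        import numpy as np

        def logistic_sequence(x0, mu, n, burn=1000):
            # Iterate the logistic map x <- mu*x*(1-x); (x0, mu) act as the key.
            x = x0
            for _ in range(burn):               # discard the transient
                x = mu * x * (1.0 - x)
            seq = np.empty(n)
            for i in range(n):
                x = mu * x * (1.0 - x)
                seq[i] = x
            return seq

        def keyed_circulant_matrix(m, n, x0=0.37, mu=3.99):
            # First row from the keyed chaotic sequence, mapped to [-1, 1],
            # then circulant shifts; only (x0, mu) must be shared as the key.
            row = 2.0 * logistic_sequence(x0, mu, n) - 1.0
            C = np.empty((n, n))
            for i in range(n):
                C[i] = np.roll(row, i)
            return C[:m] / np.sqrt(m)           # keep m rows, normalize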

  2. Algorithmic Perspectives on Problem Formulations in MDO

    NASA Technical Reports Server (NTRS)

    Alexandrov, Natalia M.; Lewis, Robert Michael

    2000-01-01

    This work is concerned with an approach to formulating the multidisciplinary optimization (MDO) problem that reflects an algorithmic perspective on MDO problem solution. The algorithmic perspective focuses on formulating the problem in light of the abilities and inabilities of optimization algorithms, so that the resulting nonlinear programming problem can be solved reliably and efficiently by conventional optimization techniques. We propose a modular approach to formulating MDO problems that takes advantage of the problem structure, maximizes the autonomy of implementation, and allows for multiple easily interchangeable problem statements to be used depending on the available resources and the characteristics of the application problem.

  3. An exact accelerated stochastic simulation algorithm.

    PubMed

    Mjolsness, Eric; Orendorff, David; Chatelain, Philippe; Koumoutsakos, Petros

    2009-04-14

    An exact method for stochastic simulation of chemical reaction networks, which accelerates the stochastic simulation algorithm (SSA), is proposed. The present "ER-leap" algorithm is derived from analytic upper and lower bounds on the multireaction probabilities sampled by SSA, together with rejection sampling and an adaptive multiplicity for reactions. The algorithm is tested on a number of well-quantified reaction networks and is found experimentally to be very accurate on test problems including a chaotic reaction network. At the same time ER-leap offers a substantial speedup over SSA, with a simulation time proportional to the 2/3 power of the number of reaction events in a Galton-Watson process.
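
    For reference, the baseline sampler that ER-leap accelerates is the classic Gillespie direct method, sketched below for mass-action networks; the interface (stoichiometry array plus a propensity function) is an illustrative assumption.

        import numpy as np

        def ssa(x0, stoich, propensities, t_end, seed=0):
            # Gillespie direct method: exponential waiting times, reactions
            # chosen with probability proportional to their propensities.
            rng = np.random.default_rng(seed)
            t, x = 0.0, np.array(x0, dtype=float)
            while t < t_end:
                a = propensities(x)
                a0 = a.sum()
                if a0 <= 0:
                    break                        # no reaction can fire
                t += rng.exponential(1.0 / a0)   # time to the next event
                j = rng.choice(len(a), p=a / a0)
                x += stoich[j]
            return x

        # Example: A -> B with propensity 0.5*[A]
        print(ssa([100, 0], np.array([[-1, 1]]),
                  lambda x: np.array([0.5 * x[0]]), t_end=5.0))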

  4. Complexity of the Quantum Adiabatic Algorithm

    NASA Technical Reports Server (NTRS)

    Hen, Itay

    2013-01-01

    The Quantum Adiabatic Algorithm (QAA) has been proposed as a mechanism for efficiently solving optimization problems on a quantum computer. Since adiabatic computation is analog in nature and does not require the design and use of quantum gates, it can be thought of as a simpler and perhaps more profound method for performing quantum computations that might also be easier to implement experimentally. While these features have generated substantial research in QAA, to date there is still a lack of solid evidence that the algorithm can outperform classical optimization algorithms.

  5. Multilevel algorithms for nonlinear optimization

    NASA Technical Reports Server (NTRS)

    Alexandrov, Natalia; Dennis, J. E., Jr.

    1994-01-01

    Multidisciplinary design optimization (MDO) gives rise to nonlinear optimization problems characterized by a large number of constraints that naturally occur in blocks. We propose a class of multilevel optimization methods motivated by the structure and number of constraints and by the expense of the derivative computations for MDO. The algorithms are an extension to the nonlinear programming problem of the successful class of local Brown-Brent algorithms for nonlinear equations. Our extensions allow the user to partition constraints into arbitrary blocks to fit the application, and they separately process each block and the objective function, restricted to certain subspaces. The methods use trust regions as a globalization strategy, and they have been shown to be globally convergent under reasonable assumptions. The multilevel algorithms can be applied to all classes of MDO formulations. Multilevel algorithms for solving nonlinear systems of equations are a special case of the multilevel optimization methods. In this case, they can be viewed as a trust-region globalization of the Brown-Brent class.

  6. Least significant qubit algorithm for quantum images

    NASA Astrophysics Data System (ADS)

    Sang, Jianzhi; Wang, Shen; Li, Qiong

    2016-11-01

    To study the feasibility of the classical image least significant bit (LSB) information hiding algorithm on a quantum computer, a least significant qubit (LSQb) information hiding algorithm for quantum images is proposed. In this paper, we focus on a novel quantum representation for color digital images (NCQI). Firstly, by designing the three-qubit comparator and unitary operators, the reasonability and feasibility of LSQb based on NCQI are presented. Then, the concrete LSQb information hiding algorithm is proposed, which can embed the secret qubits into the least significant qubits of the RGB channels of the quantum cover image. The quantum circuit of the LSQb information hiding algorithm is also illustrated. Furthermore, the secret extraction algorithm and circuit are illustrated through utilizing controlled-swap gates. The two merits of our algorithm are: (1) it is absolutely blind and (2) when extracting secret binary qubits, it does not need any quantum measurement operation or any other help from a classical computer. Finally, simulation and comparative analysis show the performance of our algorithm.

  7. An algorithmic approach to crustal deformation analysis

    NASA Technical Reports Server (NTRS)

    Iz, Huseyin Baki

    1987-01-01

    In recent years the analysis of crustal deformation measurements has become important as a result of current improvements in geodetic methods and an increasing amount of theoretical and observational data provided by several earth sciences. A first-generation data analysis algorithm which combines a priori information with current geodetic measurements was proposed. Relevant methods which can be used in the algorithm were discussed. Prior information is the unifying feature of this algorithm. Some of the problems which may arise through the use of a priori information in the analysis were indicated and preventive measures were demonstrated. The first step in the algorithm is the optimal design of deformation networks. The second step in the algorithm identifies the descriptive model of the deformation field. The final step in the algorithm is the improved estimation of deformation parameters. Although deformation parameters are estimated in the process of model discrimination, they can further be improved by the use of a priori information about them. According to the proposed algorithm this information must first be tested against the estimates calculated using the sample data only. Null-hypothesis testing procedures were developed for this purpose. Six different estimators which employ a priori information were examined. Emphasis was put on the case when the prior information is wrong and analytical expressions for possible improvements under incompatible prior information were derived.

  8. A methodology for finding the optimal iteration number of the SIRT algorithm for quantitative Electron Tomography.

    PubMed

    Okariz, Ana; Guraya, Teresa; Iturrondobeitia, Maider; Ibarretxe, Julen

    2017-02-01

    The SIRT (Simultaneous Iterative Reconstruction Technique) algorithm is commonly used in Electron Tomography to calculate the original volume of the sample from noisy images, but the results provided by this iterative procedure are strongly dependent on the specific implementation of the algorithm, as well as on the number of iterations employed for the reconstruction. In this work, a methodology for selecting the iteration number of the SIRT reconstruction that provides the most accurate segmentation is proposed. The methodology is based on the statistical analysis of the intensity profiles at the edge of the objects in the reconstructed volume. A phantom which resembles a carbon black aggregate has been created to validate the methodology, and the SIRT implementations of two free software packages (TOMOJ and TOMO3D) have been used.
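
    For orientation, one common form of the SIRT iteration whose stopping point the methodology selects is sketched below, assuming a dense nonnegative projection matrix A; packages such as TOMOJ and TOMO3D differ precisely in such implementation details, which is the paper's point.

        import numpy as np

        def sirt(A, b, n_iter):
            # SIRT update x <- x + C * A^T (R * (b - A x)), with R and C the
            # inverse row and column sums of A (one common weighting choice).
            R = 1.0 / np.maximum(A.sum(axis=1), 1e-12)
            C = 1.0 / np.maximum(A.sum(axis=0), 1e-12)
            x = np.zeros(A.shape[1])
            for _ in range(n_iter):
                x += C * (A.T @ (R * (b - A @ x)))
            return x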

  9. Improved evolutionary algorithm for the global optimization of clusters with competing attractive and repulsive interactions

    NASA Astrophysics Data System (ADS)

    Cruz, S. M. A.; Marques, J. M. C.; Pereira, F. B.

    2016-10-01

    We propose improvements to our evolutionary algorithm (EA) [J. M. C. Marques and F. B. Pereira, J. Mol. Liq. 210, 51 (2015)] in order to avoid dissociative solutions in the global optimization of clusters with competing attractive and repulsive interactions. The improved EA outperforms the original version of the method for charged colloidal clusters in the size range 3 ≤ N ≤ 25, which is a very stringent test for global optimization algorithms. While the Bernal spiral is the global minimum for clusters in the interval 13 ≤ N ≤ 18, the lowest-energy structure is a peculiar, so-called beaded-necklace motif for 19 ≤ N ≤ 25. We have also applied the method to larger sizes, and unusual quasi-linear and branched clusters arise as low-energy structures.

  10. Fast parallel molecular algorithms for DNA-based computation: factoring integers.

    PubMed

    Chang, Weng-Long; Guo, Minyi; Ho, Michael Shan-Hui

    2005-06-01

    The RSA public-key cryptosystem is an algorithm that converts input data to an unrecognizable encryption and converts the unrecognizable data back into its original decryption form. The security of the RSA public-key cryptosystem is based on the difficulty of factoring the product of two large prime numbers. This paper demonstrates how to factor the product of two large prime numbers, and is a breakthrough in basic biological operations using a molecular computer. In order to achieve this, we propose three DNA-based algorithms for a parallel subtractor, a parallel comparator, and parallel modular arithmetic that formally verify our designed molecular solutions for factoring the product of two large prime numbers. Furthermore, this work indicates that public-key cryptosystems are perhaps insecure and also presents clear evidence of the ability of molecular computing to perform complicated mathematical operations.

  11. Rolling ball algorithm as a multitask filter for terrain conductivity measurements

    NASA Astrophysics Data System (ADS)

    Rashed, Mohamed

    2016-09-01

    Portable frequency domain electromagnetic devices, commonly known as terrain conductivity meters, have become increasingly popular in recent years, especially in locating underground utilities. Data collected using these devices, however, usually suffer from major problems such as complexity and interference of apparent conductivity anomalies, near edge local spikes, and fading of conductivity contrast between a utility and the surrounding soil. This study presents the experience of adopting the rolling ball algorithm, originally designed to remove background from medical images, to treat these major problems in terrain conductivity measurements. Applying the proposed procedure to data collected using different terrain conductivity meters at different locations and conditions proves the capability of the rolling ball algorithm to treat these data both efficiently and quickly.
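
    As an illustration only (not the author's processing chain), scikit-image ships a rolling ball implementation that can be applied to gridded conductivity readings in the same background-removal spirit; the function name and radius below are assumptions to be tuned per survey.

        from skimage import restoration

        def detrend_conductivity(grid, radius=30):
            # Estimate the slowly varying background with the rolling ball
            # algorithm and subtract it, leaving local (utility-like) anomalies.
            background = restoration.rolling_ball(grid, radius=radius)
            return grid - background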

  12. Reptation quantum Monte Carlo algorithm for lattice Hamiltonians with a directed-update scheme.

    PubMed

    Carleo, Giuseppe; Becca, Federico; Moroni, Saverio; Baroni, Stefano

    2010-10-01

    We provide an extension to lattice systems of the reptation quantum Monte Carlo algorithm, originally devised for continuous Hamiltonians. For systems affected by the sign problem, a method to systematically improve upon the so-called fixed-node approximation is also proposed. The generality of the method, which also takes advantage of a canonical worm algorithm scheme to measure off-diagonal observables, makes it applicable to a vast variety of quantum systems and eases the study of their ground-state and excited-state properties. As a case study, we investigate the quantum dynamics of the one-dimensional Heisenberg model and we provide accurate estimates of the ground-state energy of the two-dimensional fermionic Hubbard model.

  13. A Globally Convergent Augmented Lagrangian Pattern Search Algorithm for Optimization with General Constraints and Simple Bounds

    NASA Technical Reports Server (NTRS)

    Lewis, Robert Michael; Torczon, Virginia

    1998-01-01

    We give a pattern search adaptation of an augmented Lagrangian method due to Conn, Gould, and Toint. The algorithm proceeds by successive bound constrained minimization of an augmented Lagrangian. In the pattern search adaptation we solve this subproblem approximately using a bound constrained pattern search method. The stopping criterion proposed by Conn, Gould, and Toint for the solution of this subproblem requires explicit knowledge of derivatives. Such information is presumed absent in pattern search methods; however, we show how we can replace this with a stopping criterion based on the pattern size in a way that preserves the convergence properties of the original algorithm. In this way we proceed by successive, inexact, bound constrained minimization without knowing exactly how inexact the minimization is. So far as we know, this is the first provably convergent direct search method for general nonlinear programming.
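
    The outer loop that the pattern search adaptation inherits can be sketched as below; the derivative-free inner solve (here an off-the-shelf Nelder-Mead call) is only a stand-in for the bound constrained pattern search with its pattern-size stopping rule, and all names are illustrative.

        import numpy as np
        from scipy.optimize import minimize

        def augmented_lagrangian(f, c, x0, n_outer=15, rho=10.0):
            # Approximately minimize L(x) = f(x) + lam.c(x) + (rho/2)||c(x)||^2,
            # then apply the first-order multiplier update lam <- lam + rho*c(x).
            x = np.asarray(x0, dtype=float)
            lam = np.zeros(len(c(x)))
            for _ in range(n_outer):
                L = lambda y: f(y) + lam @ c(y) + 0.5 * rho * np.sum(c(y) ** 2)
                x = minimize(L, x, method="Nelder-Mead").x   # derivative-free
                lam = lam + rho * c(x)
            return x, lam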

  14. An effective detection algorithm for region duplication forgery in digital images

    NASA Astrophysics Data System (ADS)

    Yavuz, Fatih; Bal, Abdullah; Cukur, Huseyin

    2016-04-01

    Powerful image editing tools are very common and easy to use these days. This situation may enable forgeries in which information is added to or removed from digital images. In order to detect these types of forgeries, such as region duplication, we present an effective algorithm based on fixed-size block computation and the discrete wavelet transform (DWT). In this approach, the original image is divided into fixed-size blocks, and then the wavelet transform is applied for dimension reduction. Each block is processed by the Fourier Transform and represented by circle regions. Four features are extracted from each block. Finally, the feature vectors are lexicographically sorted, and duplicated image blocks are detected according to the comparison metric results. The experimental results show that the proposed algorithm is computationally efficient due to its fixed-size circle block architecture.
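
    A condensed sketch of that pipeline is given below: fixed-size blocks, a small transform-domain feature per block, lexicographic sorting, and matching of adjacent entries. A coarse 4x4 block-mean feature stands in for the paper's DWT/Fourier circle-region features, and all names are illustrative.

        import numpy as np

        def find_duplicated_blocks(img, bs=16, step=4, thresh=1e-3):
            # img: 2-D grayscale array; returns pairs of block corners whose
            # features (nearly) coincide, i.e. candidate copy-move regions.
            h, w = img.shape
            feats, locs = [], []
            for i in range(0, h - bs + 1, step):
                for j in range(0, w - bs + 1, step):
                    block = img[i:i + bs, j:j + bs].astype(float)
                    f = block.reshape(4, bs // 4, 4, bs // 4).mean(axis=(1, 3))
                    feats.append(f.ravel()); locs.append((i, j))
            feats = np.array(feats)
            order = np.lexsort(feats.T[::-1])        # lexicographic row sort
            pairs = []
            for a, b in zip(order[:-1], order[1:]):  # compare sorted neighbours
                if np.sum((feats[a] - feats[b]) ** 2) < thresh:
                    pairs.append((locs[a], locs[b]))
            return pairs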

  15. Conformal interpolating algorithm based on B-spline for aspheric ultra-precision machining

    NASA Astrophysics Data System (ADS)

    Li, Chenggui; Sun, Dan; Wang, Min

    2006-02-01

    Numeric control machining and on-line compensation for aspheric surfaces are key techniques in ultra-precision machining. In this paper, a conformal cubic B-spline interpolating curve is first applied to fit the characteristic curve of an aspheric surface. Its algorithm and process are also proposed and simulated with Matlab 7.0 software. To evaluate the performance of the conformal B-spline interpolation, a comparison was made with linear and circular interpolation. The result verifies that this method can ensure the smoothness of the interpolating spline curve and preserve the original shape characteristics. The surface quality obtained by B-spline interpolation is higher than that obtained by linear or circular-arc interpolation. The algorithm helps increase the surface form precision of the workpiece during ultra-precision machining.

  16. Fairness algorithm of the resilient packet ring

    NASA Astrophysics Data System (ADS)

    Tu, Lai; Huang, Benxiong; Zhang, Fan; Wang, Xiaoling

    2004-04-01

    Resilient Packet Ring (RPR) is a newly developed Layer 2 access technology for ring-topology-based high-speed networks. The Fairness Algorithm (FA), one of its key technologies, is responsible for regulating each station's access to the ring. Since different methods emphasize different aspects, the RPR Working Group has tabled several proposals. This paper discusses two of them and proposes an improved algorithm, which can be seen as a generalization of the two schemes proposed in [1] and [2]. The new algorithm is a distributed algorithm that uses a multi-level feedback mechanism. Each station calculates its own fair rate to regulate its access to the ring, and sends a fairness control message (FCM) with its bandwidth demand information to the whole ring. All stations keep a bandwidth demand image, which is updated periodically based on the information in received FCMs. The image can be used for local fair-rate calculation to achieve fair access. In the properties study section of this paper, we compare our algorithm with the two existing ones, both analytically and in scenario simulations. Our algorithm successfully resolves the lack of awareness of multiple congestion points in [1] and the weak fault tolerance of [2].

  17. Analysis of Multivariate Experimental Data Using A Simplified Regression Model Search Algorithm

    NASA Technical Reports Server (NTRS)

    Ulbrich, Norbert M.

    2013-01-01

    A new regression model search algorithm was developed that may be applied to both general multivariate experimental data sets and wind tunnel strain-gage balance calibration data. The algorithm is a simplified version of a more complex algorithm that was originally developed for the NASA Ames Balance Calibration Laboratory. The new algorithm performs regression model term reduction to prevent overfitting of data. It has the advantage that it needs only about one tenth of the original algorithm's CPU time for the completion of a regression model search. In addition, extensive testing showed that the prediction accuracy of math models obtained from the simplified algorithm is similar to the prediction accuracy of math models obtained from the original algorithm. The simplified algorithm, however, cannot guarantee that search constraints related to a set of statistical quality requirements are always satisfied in the optimized regression model. Therefore, the simplified algorithm is not intended to replace the original algorithm. Instead, it may be used to generate an alternate optimized regression model of experimental data whenever the application of the original search algorithm fails or requires too much CPU time. Data from a machine calibration of NASA's MK40 force balance is used to illustrate the application of the new search algorithm.

  18. Analysis of Multivariate Experimental Data Using A Simplified Regression Model Search Algorithm

    NASA Technical Reports Server (NTRS)

    Ulbrich, Norbert Manfred

    2013-01-01

    A new regression model search algorithm was developed in 2011 that may be used to analyze both general multivariate experimental data sets and wind tunnel strain-gage balance calibration data. The new algorithm is a simplified version of a more complex search algorithm that was originally developed at the NASA Ames Balance Calibration Laboratory. The new algorithm has the advantage that it needs only about one tenth of the original algorithm's CPU time for the completion of a search. In addition, extensive testing showed that the prediction accuracy of math models obtained from the simplified algorithm is similar to the prediction accuracy of math models obtained from the original algorithm. The simplified algorithm, however, cannot guarantee that search constraints related to a set of statistical quality requirements are always satisfied in the optimized regression models. Therefore, the simplified search algorithm is not intended to replace the original search algorithm. Instead, it may be used to generate an alternate optimized regression model of experimental data whenever the application of the original search algorithm either fails or requires too much CPU time. Data from a machine calibration of NASA's MK40 force balance is used to illustrate the application of the new regression model search algorithm.

  19. An efficient algorithm for estimating noise covariances in distributed systems

    NASA Technical Reports Server (NTRS)

    Dee, D. P.; Cohn, S. E.; Ghil, M.; Dalcher, A.

    1985-01-01

    An efficient computational algorithm for estimating the noise covariance matrices of large linear discrete stochastic-dynamic systems is presented. Such systems arise typically by discretizing distributed-parameter systems, and their size renders computational efficiency a major consideration. The proposed adaptive filtering algorithm is based on the ideas of Belanger, and is algebraically equivalent to his algorithm. The earlier algorithm, however, has computational complexity proportional to p^6, where p is the number of observations of the system state, while the new algorithm has complexity proportional to only p^3. Further, the formulation of noise covariance estimation as a secondary filter, analogous to state estimation as a primary filter, suggests several generalizations of the earlier algorithm. The performance of the proposed algorithm is demonstrated for a distributed system arising in numerical weather prediction.

  20. Underwater Sensor Network Redeployment Algorithm Based on Wolf Search

    PubMed Central

    Jiang, Peng; Feng, Yang; Wu, Feng

    2016-01-01

    This study addresses the optimization of node redeployment coverage in underwater wireless sensor networks. Given that nodes can easily become invalid in a poor environment and that underwater wireless sensor networks are large in scale, an underwater sensor network redeployment algorithm was developed based on wolf search. This study applies the wolf search algorithm combined with crowded-degree control to the deployment of underwater wireless sensor networks. The proposed algorithm uses the nodes to ensure coverage of the events and avoids premature convergence of the nodes. The algorithm has good coverage effects. In addition, considering that obstacles exist in the underwater environment, nodes are prevented from becoming invalid by imitating the mechanism of avoiding predators. Thus, the energy consumption of the network is reduced. Comparative analysis shows that the algorithm is simple and effective in wireless sensor network deployment. Compared with the optimized artificial fish swarm algorithm, the proposed algorithm exhibits advantages in network coverage, energy conservation, and obstacle avoidance. PMID:27775659

  1. Efficient Algorithms for Handling Nondeterministic Automata

    NASA Astrophysics Data System (ADS)

    Vojnar, Tomáš

    Finite (word, tree, or omega) automata play an important role in different areas of computer science, including, for instance, formal verification. Often, deterministic automata are used, for which traditional algorithms for important operations such as minimisation and inclusion checking are available. However, the use of deterministic automata implies a need to determinise nondeterministic automata that often arise during various computations, even when the computations start with deterministic automata. Unfortunately, determinisation is a very expensive step since deterministic automata may be exponentially bigger than the original nondeterministic automata. That is why it appears advantageous to avoid determinisation and work directly with nondeterministic automata. This, however, brings a need to be able to implement operations traditionally done on deterministic automata on nondeterministic automata instead. In particular, this is the case for inclusion checking and minimisation (or rather reduction of the size of automata). In the talk, we review several recently proposed techniques for inclusion checking on nondeterministic finite word and tree automata as well as Büchi automata. These techniques are based on the use of so-called antichains, possibly combined with suitable simulation relations (and, in the case of Büchi automata, the so-called Ramsey-based or rank-based approaches). Further, we discuss techniques for reducing the size of nondeterministic word and tree automata using quotienting based on the recently proposed notion of mediated equivalences. The talk is based on several common works with Parosh Aziz Abdulla, Ahmed Bouajjani, Yu-Fang Chen, Peter Habermehl, Lisa Kaati, Richard Mayr, Tayssir Touili, Lorenzo Clemente, Lukáš Holík, and Chih-Duo Hong.

  2. An Improved Back Propagation Neural Network Algorithm on Classification Problems

    NASA Astrophysics Data System (ADS)

    Nawi, Nazri Mohd; Ransing, R. S.; Salleh, Mohd Najib Mohd; Ghazali, Rozaida; Hamid, Norhamreeza Abdul

    The back propagation algorithm is one of the most popular algorithms for training feed forward neural networks. However, the convergence of this algorithm is slow, mainly because of the underlying gradient descent. Previous research demonstrated that in a feed forward network, the slope of the activation function is directly influenced by a parameter referred to as the 'gain'. This research proposes an algorithm for improving the performance of the back propagation algorithm by introducing an adaptive gain of the activation function. The gain values change adaptively for each node. The influence of the adaptive gain on the learning ability of a neural network is analysed, and multi-layer feed forward neural networks have been assessed. A physical interpretation of the relationship between the gain value and the learning rate and weight values is given. The efficiency of the proposed algorithm is compared with the conventional gradient descent method and verified by means of simulation on four classification problems. In learning the patterns, the simulation results demonstrate that the proposed method converged faster on the Wisconsin breast cancer data set with an improvement ratio of nearly 2.8, 1.76 on the diabetes problem, 65% better on the thyroid data sets and 97% faster on the IRIS classification problem. The results clearly show that the proposed algorithm significantly improves the learning speed of the conventional back-propagation algorithm.
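
    The role of the gain can be made concrete with a one-layer sketch: for an activation σ(gain·z) under squared error, both the weights and each node's gain receive gradient updates, and the updates follow from direct differentiation. This is an illustration of the idea, not the paper's exact algorithm; all names and learning rates are assumptions.

        import numpy as np

        def sigmoid(z, gain):
            return 1.0 / (1.0 + np.exp(-gain * z))

        def train_step(W, gain, x, t, lr=0.1, lr_gain=0.01):
            # One gradient step for a single sigmoid layer with per-node gain.
            z = W @ x
            y = sigmoid(z, gain)
            err = y - t                           # dE/dy for squared error
            delta = err * gain * y * (1 - y)      # dE/dz: slope scales with gain
            W = W - lr * np.outer(delta, x)
            gain = gain - lr_gain * err * y * (1 - y) * z   # dE/d(gain) per node
            return W, gain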

  3. Electricity Load Forecasting Using Support Vector Regression with Memetic Algorithms

    PubMed Central

    Hu, Zhongyi; Xiong, Tao

    2013-01-01

    Electricity load forecasting is an important issue that is widely explored and examined in the power systems operation literature and in the commercial transactions in electricity markets literature as well. Among the existing forecasting models, support vector regression (SVR) has gained much attention. Considering that the performance of SVR highly depends on its parameters, this study proposed a firefly algorithm (FA) based memetic algorithm (FA-MA) to appropriately determine the parameters of the SVR forecasting model. In the proposed FA-MA algorithm, the FA algorithm is applied to explore the solution space, and pattern search is used to conduct individual learning and thus enhance the exploitation of FA. Experimental results confirm that the proposed FA-MA based SVR model can not only yield more accurate forecasting results than the other four evolutionary algorithm based SVR models and three well-known forecasting models but also outperform the hybrid algorithms in the related existing literature. PMID:24459425

  4. Robust face recognition algorithm for identification of disaster victims

    NASA Astrophysics Data System (ADS)

    Gevaert, Wouter J. R.; de With, Peter H. N.

    2013-02-01

    We present a robust face recognition algorithm for the identification of occluded, injured and mutilated faces with a limited training set per person. In such cases, conventional face recognition methods fall short due to specific aspects of the classification. The proposed algorithm involves recursive Principal Component Analysis for reconstruction of affected facial parts, followed by a feature extractor based on Gabor wavelets and uniform multi-scale Local Binary Patterns. As a classifier, a Radial Basis Neural Network is employed. In terms of robustness to facial abnormalities, tests show that the proposed algorithm outperforms conventional face recognition algorithms like the Eigenfaces approach, Local Binary Patterns and the Gabor magnitude method. To mimic the real-life conditions in which the algorithm would have to operate, specific databases have been constructed, merged with partially existing databases, and jointly compiled. Experiments on these particular databases show that the proposed algorithm achieves recognition rates beyond 95%.

  5. Modified Landweber algorithm for robust particle sizing by using Fraunhofer diffraction.

    PubMed

    Xu, Lijun; Wei, Tianxiao; Zhou, Jiayi; Cao, Zhang

    2014-09-20

    In this paper, a robust modified Landweber algorithm was proposed to retrieve the particle size distributions from Fraunhofer diffraction. Three typical particle size distributions, i.e., Rosin-Rammler, lognormal, and bimodal normal distributions for particles ranging from 4.8 to 96 μm, were employed to verify the performance of the algorithm. To show its merits, the proposed algorithm was compared with the Tikhonov regularization algorithm and the ℓ1-norm-based algorithm. Simulation results showed that, for noise-free data, both the modified Landweber algorithm and the ℓ1-norm-based algorithm were better than the Tikhonov regularization algorithm in terms of accuracy. When the data was noise-contaminated, the modified Landweber algorithm was superior to the other two algorithms in both accuracy and speed. An experimental setup was also established and the results validated the feasibility and effectiveness of the proposed method.
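
    For context, the classical Landweber iteration that such modifications start from is sketched below, with a nonnegativity projection added as one common robustness device for size distributions; this baseline is an assumption-laden sketch, not the authors' exact modification.

        import numpy as np

        def projected_landweber(A, b, n_iter=200):
            # x <- P+[ x + tau * A^T (b - A x) ]; tau <= 1/||A||^2 ensures
            # convergence of the plain iteration.
            tau = 1.0 / np.linalg.norm(A, 2) ** 2
            x = np.zeros(A.shape[1])
            for _ in range(n_iter):
                x = x + tau * (A.T @ (b - A @ x))
                x = np.maximum(x, 0.0)      # size distributions are nonnegative
            return x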

  6. A modified genetic algorithm with fuzzy roulette wheel selection for job-shop scheduling problems

    NASA Astrophysics Data System (ADS)

    Thammano, Arit; Teekeng, Wannaporn

    2015-05-01

    The job-shop scheduling problem is one of the most difficult production planning problems. Since it is in the NP-hard class, a recent trend in solving the job-shop scheduling problem is shifting towards the use of heuristic and metaheuristic algorithms. This paper proposes a novel metaheuristic algorithm, which is a modification of the genetic algorithm. This proposed algorithm introduces two new concepts to the standard genetic algorithm: (1) fuzzy roulette wheel selection and (2) the mutation operation with tabu list. The proposed algorithm has been evaluated and compared with several state-of-the-art algorithms in the literature. The experimental results on 53 JSSPs show that the proposed algorithm is very effective in solving the combinatorial optimization problems. It outperforms all state-of-the-art algorithms on all benchmark problems in terms of the ability to achieve the optimal solution and the computational time.
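
    For context, the crisp roulette wheel selection that the paper fuzzifies can be sketched as follows (positive fitness values assumed); the fuzzification itself is not reproduced here.

        import numpy as np

        def roulette_select(population, fitness, seed=None):
            # Draw one individual with probability proportional to its fitness.
            rng = np.random.default_rng(seed)
            f = np.asarray(fitness, dtype=float)
            idx = rng.choice(len(population), p=f / f.sum())
            return population[idx]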

  7. The algorithm stitching for medical imaging

    NASA Astrophysics Data System (ADS)

    Semenishchev, E.; Marchuk, V.; Voronin, V.; Pismenskova, M.; Tolstova, I.; Svirin, I.

    2016-05-01

    In this paper we propose an algorithm for stitching medical images into one. The algorithm is designed to stitch medical X-ray images, microscopic images of biological particles, medical microscopic images and others. Such images can improve the diagnosis accuracy and quality for minimally invasive studies (e.g., laparoscopy, ophthalmology and others). The proposed algorithm is based on the following steps: searching for and selecting areas with overlapping boundaries; keypoint and feature detection; preliminary stitching of the images and transformation to reduce the visible distortion; searching for a single unified border in the overlap area; brightness, contrast and white balance conversion; and superimposition into one image. Experimental results demonstrate the effectiveness of the proposed method in the task of image stitching.

  8. A correlation-based algorithm for recognition and tracking of partially occluded objects

    NASA Astrophysics Data System (ADS)

    Ruchay, Alexey; Kober, Vitaly

    2016-09-01

    In this work, a correlation-based algorithm consisting of a set of adaptive filters for the recognition of occluded objects in still and dynamic scenes in the presence of additive noise is proposed. The designed algorithm is adaptive to the input scene, which may contain different fragments of the target, false objects, and background to be rejected. The algorithm output is high correlation peaks corresponding to pieces of the target in scenes. The proposed algorithm uses a bank of composite optimum filters. The performance of the proposed algorithm for recognition of partially occluded objects is compared with that of common algorithms in terms of objective metrics.

  9. Modification of the random forest algorithm to avoid statistical dependence problems when classifying remote sensing imagery

    NASA Astrophysics Data System (ADS)

    Cánovas-García, Fulgencio; Alonso-Sarría, Francisco; Gomariz-Castillo, Francisco; Oñate-Valdivieso, Fernando

    2017-06-01

    Random forest is a classification technique widely used in remote sensing. One of its advantages is that it produces an estimation of classification accuracy based on the so-called out-of-bag cross-validation method. It is usually assumed that such an estimation is not biased and may be used instead of validation based on an external data set or a cross-validation external to the algorithm. In this paper we show that this is not necessarily the case when classifying remote sensing imagery using training areas with several pixels or objects. According to our results, out-of-bag cross-validation clearly overestimates accuracy, both overall and per class. The reason is that, in a training patch, pixels or objects are not independent (from a statistical point of view) of each other; however, they are split by bootstrapping into in-bag and out-of-bag as if they were really independent. We believe that putting the whole patch, rather than its pixels/objects, in one or the other set would produce a less biased out-of-bag cross-validation. To deal with the problem, we propose a modification of the random forest algorithm to split training patches instead of the pixels (or objects) that compose them. This modified algorithm does not overestimate accuracy and has no lower predictive capability than the original. When its results are validated with an external data set, the accuracy is not different from that obtained with the original algorithm. We analysed three remote sensing images with different classification approaches (pixel and object based); in the three cases reported, the modification we propose produces a less biased accuracy estimation.
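
    The proposed split can be illustrated by bootstrapping at the patch level, so that all pixels (or objects) of a patch fall in-bag or out-of-bag together; the helper below is a hypothetical sketch of that idea, not the authors' code.

        import numpy as np

        def patch_bootstrap(patch_ids, seed=0):
            # patch_ids[i] = patch of sample i. Sample patches with replacement;
            # a sample is in-bag iff its whole patch was drawn.
            rng = np.random.default_rng(seed)
            patches = np.unique(patch_ids)
            chosen = rng.choice(patches, size=len(patches), replace=True)
            in_bag = np.isin(patch_ids, chosen)
            return np.where(in_bag)[0], np.where(~in_bag)[0]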

  10. Algorithmic Mechanism Design of Evolutionary Computation

    PubMed Central

    Pei, Yan

    2015-01-01

    We consider algorithmic design, enhancement, and improvement of evolutionary computation as a mechanism design problem. All individuals or several groups of individuals can be considered as self-interested agents. The individuals in evolutionary computation can manipulate parameter settings and operations by satisfying their own preferences, which are defined by an evolutionary computation algorithm designer, rather than by following a fixed algorithm rule. Evolutionary computation algorithm designers or self-adaptive methods should construct proper rules and mechanisms for all agents (individuals) to conduct their evolutionary behaviour correctly in order to achieve the desired and preset objective(s). As a case study, we propose a formal framework for parameter setting, strategy selection, and algorithmic design of evolutionary computation by considering the Nash strategy equilibrium of a mechanism design in the search process. The evaluation results demonstrate the efficiency of the framework. This primary principle can be implemented in any evolutionary computation algorithm that needs to consider strategy selection issues in its optimization process. The final objective of our work is to solve evolutionary computation design as an algorithmic mechanism design problem and establish its fundamental aspect by taking this perspective. This paper is the first step towards achieving this objective by implementing a strategy equilibrium solution (such as Nash equilibrium) in an evolutionary computation algorithm. PMID:26257777

  11. The Origin of Mercury

    NASA Astrophysics Data System (ADS)

    Benz, W.; Anic, A.; Horner, J.; Whitby, J. A.

    2007-10-01

    Mercury’s unusually high mean density has always been attributed to special circumstances that occurred during the formation of the planet or shortly thereafter, and due to the planet’s close proximity to the Sun. The nature of these special circumstances is still being debated and several scenarios, all proposed more than 20 years ago, have been suggested. In all scenarios, the high mean density is the result of severe fractionation occurring between silicates and iron. It is the origin of this fractionation that is at the centre of the debate: is it due to differences in condensation temperature and/or in material characteristics (e.g. density, strength)? Is it because of mantle evaporation due to the close proximity to the Sun? Or is it due to the blasting off of the mantle during a giant impact? In this paper we investigate, in some detail, the fractionation induced by a giant impact on a proto-Mercury having roughly chondritic elemental abundances. We have extended the previous work on this hypothesis in two significant directions. First, we have considerably increased the resolution of the simulation of the collision itself. Second, we have addressed the fate of the ejecta following the impact by computing the expected reaccretion timescale and comparing it to the removal timescale from gravitational interactions with other planets (essentially Venus) and the Poynting Robertson effect. To compute the latter, we have determined the expected size distribution of the condensates formed during the cooling of the expanding vapor cloud generated by the impact. We find that, even though some ejected material will be reaccreted, the removal of the mantle of proto-Mercury following a giant impact can indeed lead to the required long-term fractionation between silicates and iron and therefore account for the anomalously high mean density of the planet. Detailed coupled dynamical chemical modeling of this formation mechanism should be carried out in such a way as to

  12. General properties of the Foldy-Wouthuysen transformation and applicability of the corrected original Foldy-Wouthuysen method

    NASA Astrophysics Data System (ADS)

    Silenko, Alexander J.

    2016-02-01

    General properties of the Foldy-Wouthuysen transformation, which is widely used in quantum mechanics and quantum chemistry, are considered. Merits and demerits of the original Foldy-Wouthuysen transformation method are analyzed. While this method does not satisfy the Eriksen condition of the Foldy-Wouthuysen transformation, it can be corrected with the use of the Baker-Campbell-Hausdorff formula. We show the possibility of such a correction and propose an appropriate calculation algorithm. The applicability of the corrected Foldy-Wouthuysen method is restricted by the condition of convergence of the series of relativistic corrections.

  13. Distributed k-Means Algorithm and Fuzzy c-Means Algorithm for Sensor Networks Based on Multiagent Consensus Theory.

    PubMed

    Qin, Jiahu; Fu, Weiming; Gao, Huijun; Zheng, Wei Xing

    2016-03-03

    This paper is concerned with developing a distributed k-means algorithm and a distributed fuzzy c-means algorithm for wireless sensor networks (WSNs) where each node is equipped with sensors. The underlying topology of the WSN is assumed to be strongly connected. The consensus algorithm in multiagent consensus theory is utilized to exchange the measurement information of the sensors in the WSN. To obtain a faster convergence speed as well as a higher possibility of reaching the global optimum, a distributed k-means++ algorithm is first proposed to find the initial centroids before executing the distributed k-means algorithm and the distributed fuzzy c-means algorithm. The proposed distributed k-means algorithm is capable of partitioning the data observed by the nodes into measure-dependent groups which have small in-group and large out-group distances, while the proposed distributed fuzzy c-means algorithm is capable of partitioning the data observed by the nodes into different measure-dependent groups with degrees of membership values ranging from 0 to 1. Simulation results show that the proposed distributed algorithms can achieve almost the same results as those given by the centralized clustering algorithms.
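
    The consensus primitive underlying such algorithms can be sketched as plain discrete-time average consensus, by which every node agrees on network-wide means needed for the centroid updates without a fusion center; the graph, step size and names below are illustrative.

        import numpy as np

        def average_consensus(values, neighbors, n_rounds=50, eps=0.1):
            # x_i <- x_i + eps * sum_{j in N(i)} (x_j - x_i); on a connected
            # undirected graph with eps < 1/max_degree, all x_i -> mean(values).
            x = np.array(values, dtype=float)
            for _ in range(n_rounds):
                x_new = x.copy()
                for i, nbrs in neighbors.items():
                    x_new[i] += eps * sum(x[j] - x[i] for j in nbrs)
                x = x_new
            return x

        # 4-node ring: every entry converges towards mean([1, 5, 3, 7]) = 4
        print(average_consensus([1, 5, 3, 7],
                                {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}))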

  14. Speed-up hyperspheres homotopic path tracking algorithm for PWL circuits simulations.

    PubMed

    Ramirez-Pinero, A; Vazquez-Leal, H; Jimenez-Fernandez, V M; Sedighi, H M; Rashidi, M M; Filobello-Nino, U; Castaneda-Sheissa, R; Huerta-Chua, J; Sarmiento-Reyes, L A; Laguna-Camacho, J R; Castro-Gonzalez, F

    2016-01-01

    In the present work, we introduce an improved version of the hyperspheres path tracking method adapted for piecewise linear (PWL) circuits. This enhanced version takes advantage of the PWL characteristics of the homotopic curve, achieving faster path tracking and improving the performance of the homotopy continuation method (HCM). Faster computing time allows the study of circuits of higher complexity; the proposed method also significantly decreases the probability of divergence when using the Newton-Raphson method, because it is applied just twice per linear region on the homotopic path. Equilibrium equations of the studied circuits are obtained by applying modified nodal analysis; this method allows us to propose an algorithm for nonlinear circuit analysis. Besides, a starting-point criterion is proposed to obtain better performance of the HCM, and a technique for avoiding the reversion phenomenon is also proposed. To prove the efficiency of the path tracking method, several case studies with bipolar (BJT) and CMOS transistors are provided. Simulation results show that the proposed approach can be up to twelve times faster than the original path tracking method and also helps to avoid several reversion cases that appear when the original hyperspheres path tracking scheme is employed.

  15. Genomic-enabled prediction with classification algorithms

    PubMed Central

    Ornella, L; Pérez, P; Tapia, E; González-Camacho, J M; Burgueño, J; Zhang, X; Singh, S; Vicente, F S; Bonnett, D; Dreisigacker, S; Singh, R; Long, N; Crossa, J

    2014-01-01

    Pearson's correlation coefficient (ρ) is the most commonly reported metric of the success of prediction in genomic selection (GS). However, in real breeding ρ may not be very useful for assessing the quality of the regression in the tails of the distribution, where individuals are chosen for selection. This research used 14 maize and 16 wheat data sets with different trait–environment combinations. Six different models were evaluated by means of a cross-validation scheme (50 random partitions each, with 90% of the individuals in the training set and 10% in the testing set). The predictive accuracy of these algorithms for selecting individuals belonging to the best α=10, 15, 20, 25, 30, 35, 40% of the distribution was estimated using Cohen's kappa coefficient (κ) and an ad hoc measure, which we call relative efficiency (RE), which indicates the expected genetic gain due to selection when individuals are selected based on GS exclusively. We put special emphasis on the analysis for α=15%, because it is a percentile commonly used in plant breeding programmes (for example, at CIMMYT). We also used ρ as a criterion for overall success. The algorithms used were: Bayesian LASSO (BL), Ridge Regression (RR), Reproducing Kernel Hilbert Spaces (RKHS), Random Forest Regression (RFR), and Support Vector Regression (SVR) with linear (lin) and Gaussian kernels (rbf). The performance of regression methods for selecting the best individuals was compared with that of three supervised classification algorithms: Random Forest Classification (RFC) and Support Vector Classification (SVC) with linear (lin) and Gaussian (rbf) kernels. Classification methods were evaluated using the same cross-validation scheme but with the response vector of the original training sets dichotomised using a given threshold. For α=15%, SVC-lin presented the highest κ coefficients in 13 of the 14 maize data sets, with best values ranging from 0.131 to 0.722 (statistically significant in 9 data sets

  16. Genomic-enabled prediction with classification algorithms.

    PubMed

    Ornella, L; Pérez, P; Tapia, E; González-Camacho, J M; Burgueño, J; Zhang, X; Singh, S; Vicente, F S; Bonnett, D; Dreisigacker, S; Singh, R; Long, N; Crossa, J

    2014-06-01

    Pearson's correlation coefficient (ρ) is the most commonly reported metric of the success of prediction in genomic selection (GS). However, in real breeding ρ may not be very useful for assessing the quality of the regression in the tails of the distribution, where individuals are chosen for selection. This research used 14 maize and 16 wheat data sets with different trait-environment combinations. Six different models were evaluated by means of a cross-validation scheme (50 random partitions each, with 90% of the individuals in the training set and 10% in the testing set). The predictive accuracy of these algorithms for selecting individuals belonging to the best α=10, 15, 20, 25, 30, 35, 40% of the distribution was estimated using Cohen's kappa coefficient (κ) and an ad hoc measure, which we call relative efficiency (RE), which indicates the expected genetic gain due to selection when individuals are selected based on GS exclusively. We put special emphasis on the analysis for α=15%, because it is a percentile commonly used in plant breeding programmes (for example, at CIMMYT). We also used ρ as a criterion for overall success. The algorithms used were: Bayesian LASSO (BL), Ridge Regression (RR), Reproducing Kernel Hilbert Spaces (RKHS), Random Forest Regression (RFR), and Support Vector Regression (SVR) with linear (lin) and Gaussian kernels (rbf). The performance of regression methods for selecting the best individuals was compared with that of three supervised classification algorithms: Random Forest Classification (RFC) and Support Vector Classification (SVC) with linear (lin) and Gaussian (rbf) kernels. Classification methods were evaluated using the same cross-validation scheme but with the response vector of the original training sets dichotomised using a given threshold. For α=15%, SVC-lin presented the highest κ coefficients in 13 of the 14 maize data sets, with best values ranging from 0.131 to 0.722 (statistically significant in 9 data sets

  17. MUSIC algorithms for rebar detection

    NASA Astrophysics Data System (ADS)

    Solimene, Raffaele; Leone, Giovanni; Dell'Aversano, Angela

    2013-12-01

    The MUSIC (MUltiple SIgnal Classification) algorithm is employed to detect and localize an unknown number of scattering objects which are small in size as compared to the wavelength. The ensemble of objects to be detected consists of both strong and weak scatterers. This represents a scattering environment challenging for detection purposes as strong scatterers tend to mask the weak ones. Consequently, the detection of more weakly scattering objects is not always guaranteed and can be completely impaired when the noise corrupting data is of a relatively high level. To overcome this drawback, here a new technique is proposed, starting from the idea of applying a two-stage MUSIC algorithm. In the first stage strong scatterers are detected. Then, information concerning their number and location is employed in the second stage focusing only on the weak scatterers. The role of an adequate scattering model is emphasized to improve drastically detection performance in realistic scenarios.
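
    A minimal sketch of the single-stage MUSIC pseudospectrum on which the two-stage scheme builds; the steering-matrix interface is an illustrative assumption.

        import numpy as np

        def music_spectrum(R, steering, n_sources):
            # R: data covariance; steering: (n_sensors, n_candidates) trial
            # responses. Peaks of the returned spectrum mark scatterers.
            eigval, eigvec = np.linalg.eigh(R)          # ascending eigenvalues
            En = eigvec[:, : R.shape[0] - n_sources]    # noise-subspace basis
            proj = En.conj().T @ steering
            return 1.0 / np.sum(np.abs(proj) ** 2, axis=0)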

  18. Immunohistochemical algorithm alone is not enough for predicting the outcome of patients with diffuse large B-cell lymphoma treated with R-CHOP.

    PubMed

    Lu, Ting-Xun; Gong, Qi-Xing; Wang, Li; Fan, Lei; Zhang, Xiao-Yan; Chen, Yao-Yu; Wang, Zhen; Xu, Wei; Zhang, Zhi-Hong; Li, Jian-Yong

    2015-01-01

    Gene expression profiling (GEP), which can divide DLBCL into three groups, is impractical to perform routinely. Although algorithms based on immunohistochemistry (IHC) have been proposed as a surrogate for GEP analysis, their power has diminished since rituximab was added to chemotherapy. We assessed the prognostic value of four conventional algorithms, and of the genes within and outside each algorithm, by IHC and fluorescence in situ hybridization in DLBCL patients receiving immunochemotherapy. The results showed that neither the single proteins within the algorithms nor the IHC algorithms themselves had strong prognostic power. Using MYC aberrations (MA) either at the genetic or the protein level, we established a new algorithm called MA that could divide patients into distinct prognostic groups. Patients of MA had much shorter overall survival (OS) and progression-free survival (PFS) than non-MA (2-year OS: 56.9% vs. 98.7%; 2-year PFS: 26.8% vs. 86.9%; P < 0.0001 for both). In conclusion, using additional prognostic markers not associated with cell of origin may accurately predict outcomes of DLBCL. Studies with larger samples should be performed to confirm our algorithm and optimize the prognostic system of DLBCL.

  19. Genetic Algorithms for Digital Quantum Simulations.

    PubMed

    Las Heras, U; Alvarez-Rodriguez, U; Solano, E; Sanz, M

    2016-06-10

    We propose genetic algorithms, which are robust optimization techniques inspired by natural selection, to enhance the versatility of digital quantum simulations. In this sense, we show that genetic algorithms can be employed to increase the fidelity and optimize the resource requirements of digital quantum simulation protocols while adapting naturally to the experimental constraints. Furthermore, this method allows us to reduce not only digital errors but also experimental errors in quantum gates. Indeed, by adding ancillary qubits, we design a modular gate made out of imperfect gates, whose fidelity is larger than the fidelity of any of the constituent gates. Finally, we prove that the proposed modular gates are resilient against different gate errors.

  20. Kidney-inspired algorithm for optimization problems

    NASA Astrophysics Data System (ADS)

    Jaddi, Najmeh Sadat; Alvankarian, Jafar; Abdullah, Salwani

    2017-01-01

    In this paper, a population-based algorithm inspired by the kidney process in the human body is proposed. In this algorithm the solutions are filtered at a rate that is calculated based on the mean of the objective functions of all solutions in the current population of each iteration. The filtered solutions, as the better solutions, are moved to the filtered blood, and the rest are transferred to the waste, representing the worse solutions. This is a simulation of the glomerular filtration process in the kidney. The waste solutions are reconsidered in later iterations if, after applying a defined movement operator, they satisfy the filtration rate; otherwise they are expelled from the waste solutions, simulating the reabsorption and excretion functions of the kidney. In addition, a solution assigned as a better solution is secreted if it is not better than the worst solutions, simulating the secretion process of blood in the kidney. After placement of all the solutions in the population, the best of them is ranked, the waste and filtered blood are merged to become a new population, and the filtration rate is updated. Filtration provides the required exploitation while generating a new solution, and reabsorption gives the necessary exploration for the algorithm. The algorithm is assessed by applying it to eight well-known benchmark test functions and comparing the results with other algorithms in the literature. The performance of the proposed algorithm is better on seven out of eight test functions when compared with the most recent research in the literature. The proposed kidney-inspired algorithm is able to find the global optimum with fewer function evaluations on six out of eight test functions. A statistical analysis further confirms the ability of this algorithm to produce good-quality results.

  1. Algorithm-Based Fault Tolerance Integrated with Replication

    NASA Technical Reports Server (NTRS)

    Some, Raphael; Rennels, David

    2008-01-01

    In a proposed approach to programming and utilization of commercial off-the-shelf computing equipment, a combination of algorithm-based fault tolerance (ABFT) and replication would be utilized to obtain high degrees of fault tolerance without incurring excessive costs. The basic idea of the proposed approach is to integrate ABFT with replication such that the algorithmic portions of computations would be protected by ABFT, and the logical portions by replication. ABFT is an extremely efficient, inexpensive, high-coverage technique for detecting and mitigating faults in computer systems used for algorithmic computations, but does not protect against errors in logical operations surrounding algorithms.
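    The classic instance of ABFT is checksum-protected matrix multiplication (Huang and Abraham); the sketch below shows that idea on its own, without the replication layer the report integrates it with.

```python
# Checksum-based ABFT for matrix multiplication (classic Huang-Abraham
# scheme, shown for illustration; the report's replication layer is
# not included).
import numpy as np

def abft_matmul(A, B):
    Ac = np.vstack([A, A.sum(axis=0)])                 # column-checksum row
    Br = np.hstack([B, B.sum(axis=1, keepdims=True)])  # row-checksum column
    C = Ac @ Br                                        # full checksum product
    data = C[:-1, :-1]
    # The checksum row/column of C must equal the sums of the data block;
    # a mismatch flags a fault in the computation.
    ok = (np.allclose(C[-1, :-1], data.sum(axis=0)) and
          np.allclose(C[:-1, -1], data.sum(axis=1)))
    return data, ok

A = np.random.rand(4, 3)
B = np.random.rand(3, 5)
C, ok = abft_matmul(A, B)
print(ok, np.allclose(C, A @ B))
```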

  2. Region counting algorithm based on region labeling automaton

    NASA Astrophysics Data System (ADS)

    Yang, Sudi; Gu, Guoqing

    2007-12-01

    Region counting is a concept in computer graphics and image analysis, and it has recently found many applications in the medical area. Existing region-counting algorithms are mostly based on filling methods; although filling algorithms have been improved considerably, their speed when used to count regions is not satisfactory. A region-counting algorithm based on a region labeling automaton is proposed in this paper. By tracing the boundaries of the regions, the number of regions can be obtained quickly. The proposed method was found to be the fastest and to require less memory.
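    The boundary-tracing automaton itself is not specified in enough detail here to reproduce; the snippet below only shows the standard labeling-based way of counting regions that such methods are measured against.

```python
# Baseline region counting via connected-component labeling
# (4-connectivity), the conventional alternative to boundary tracing.
import numpy as np
from scipy import ndimage

img = np.array([[0, 1, 1, 0, 0],
                [0, 1, 0, 0, 1],
                [0, 0, 0, 1, 1],
                [1, 0, 0, 0, 0]])

labels, num_regions = ndimage.label(img)  # cross-shaped structure by default
print(num_regions)                        # -> 3
```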

  3. An affine projection algorithm using grouping selection of input vectors

    NASA Astrophysics Data System (ADS)

    Shin, JaeWook; Kong, NamWoong; Park, PooGyeon

    2011-10-01

    This paper presents an affine projection algorithm (APA) using grouping selection of input vectors. To improve the performance of the conventional APA, the proposed algorithm adjusts the number of input vectors using two procedures: a grouping procedure and a selection procedure. In the grouping procedure, input vectors that carry overlapping information for the update are grouped using the normalized inner product. Then, in the selection procedure, the few input vectors that carry enough information for the coefficient update are selected using the steady-state mean square error (MSE). Finally, the filter coefficients are updated using the selected input vectors. The experimental results show that the proposed algorithm has smaller steady-state estimation errors than the existing algorithms.
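    For context, the sketch below is the standard APA update that the paper builds on; the grouping and selection procedures would prune the columns of the input matrix U before this step, and are not shown.

```python
# Standard affine projection update (without the paper's grouping and
# selection procedures). Step size and regularization are typical values.
import numpy as np

def apa_update(w, U, d, mu=0.5, delta=1e-4):
    """One APA step. U: (N, K) matrix whose columns are the K most recent
    input vectors; d: length-K desired outputs; w: current coefficients."""
    e = d - U.T @ w                               # a-priori error vector
    G = U.T @ U + delta * np.eye(U.shape[1])      # regularized Gram matrix
    return w + mu * U @ np.linalg.solve(G, e)

# Identification of a random system on noiseless data:
rng = np.random.default_rng(0)
N, K = 16, 4
w_true = rng.normal(size=N)
w = np.zeros(N)
for _ in range(500):
    U = rng.normal(size=(N, K))
    d = U.T @ w_true
    w = apa_update(w, U, d)
print(np.linalg.norm(w - w_true))
```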

  4. Heuristic algorithm for off-lattice protein folding problem*

    PubMed Central

    Chen, Mao; Huang, Wen-qi

    2006-01-01

    Enlightened by the law of interactions among objects in the physical world, we propose a heuristic algorithm for solving the three-dimensional (3D) off-lattice protein folding problem. Based on a physical model, the problem is converted from a nonlinear constraint-satisfaction problem to an unconstrained optimization problem that can be solved by the well-known gradient method. To improve the efficiency of our algorithm, a strategy was introduced to generate the initial configuration. Computational results showed that this algorithm could find states with lower energy than the previously proposed ground states obtained by the nPERM algorithm for all chains with lengths ranging from 13 to 55. PMID:16365919

  5. Nonlinear Smoothing and the EM Algorithm for Positive Integral Equations of the First Kind

    SciTech Connect

    Eggermont, P. P. B.

    1999-01-15

    We study a modification of the EMS algorithm in which each step of the EMS algorithm is preceded by a nonlinear smoothing step of the form Nf = exp(S log f), where S is the smoothing operator of the EMS algorithm. In the context of positive integral equations (a la positron emission tomography) the resulting algorithm is related to a convex minimization problem which always admits a unique smooth solution, in contrast to the unmodified maximum likelihood setup. The new algorithm has slightly stronger monotonicity properties than the original EM algorithm. This suggests that the modified EMS algorithm is actually an EM algorithm for the modified problem. The existence of a smooth solution to the modified maximum likelihood problem and the monotonicity together imply the strong convergence of the new algorithm. We also present some simulation results for the integral equation of stereology, which suggest that the new algorithm behaves roughly like the EMS algorithm.
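    On a discretized problem g = Kf, the iteration can be sketched as below: each multiplicative EM step is preceded by the nonlinear smoothing Nf = exp(S log f). The kernel, the smoothing matrix, and the test signal are toy choices, not from the paper.

```python
# Modified EMS sketch for a discretized positive integral equation
# g = K f, with the smoothing step Nf = exp(S log f) before each EM step.
import numpy as np

n = 50
x = np.linspace(0, 1, n)
K = np.exp(-100 * (x[:, None] - x[None, :]) ** 2)  # toy positive kernel
K /= K.sum(axis=0)                                 # columns sum to one
S = (np.eye(n) * 0.5 + np.eye(n, k=1) * 0.25 + np.eye(n, k=-1) * 0.25)
S /= S.sum(axis=1, keepdims=True)                  # row-stochastic smoother

f_true = np.exp(-50 * (x - 0.4) ** 2) + 0.5
g = K @ f_true                                     # noiseless data

f = np.ones(n)                                     # positive start
for _ in range(100):
    f = np.exp(S @ np.log(f))                      # nonlinear smoothing N
    f = f * (K.T @ (g / (K @ f)))                  # EM multiplicative step
print(np.max(np.abs(f - f_true)))
```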

  6. Origins of organic geochemistry

    USGS Publications Warehouse

    Kvenvolden, K.A.

    2008-01-01

    When organic geochemistry actually began as a recognized geoscience is a matter of definition and perspective. Constraints on its beginning are placed by the historical development of its parent disciplines, geology and organic chemistry. These disciplines originated independently and developed in parallel, starting in the latter half of the 18th century and flourishing thereafter into the 21st century. Organic geochemistry began sometime between 1860 and 1983; I argue that 1930 is the best year to mark its origin.

  7. Threshold-Based OSIC Detection Algorithm for Per-Antenna-Coded TIMO-OFDM Systems

    NASA Astrophysics Data System (ADS)

    Wang, Xinzheng; Chen, Ming; Zhu, Pengcheng

    A threshold-based ordered successive interference cancellation (OSIC) detection algorithm is proposed for per-antenna-coded (PAC) two-input multiple-output (TIMO) orthogonal frequency division multiplexing (OFDM) systems. Successive interference cancellation (SIC) is performed selectively according to channel conditions. Compared with the conventional OSIC algorithm, the proposed algorithm reduces the complexity significantly with only a slight performance degradation.

  8. Study on Underwater Image Denoising Algorithm Based on Wavelet Transform

    NASA Astrophysics Data System (ADS)

    Jian, Sun; Wen, Wang

    2017-02-01

    This paper analyzes the application of MATLAB in underwater image processing. The transmission characteristics of underwater laser light signals and the kinds of underwater noise are described, and common noise-suppression algorithms (the Wiener filter, the median filter, and the average filter) are reviewed. The advantages and disadvantages of each algorithm in terms of image sharpness and edge preservation are then compared. A hybrid filter algorithm based on the wavelet transform is proposed, which can be used for color image denoising. Finally, the PSNR and NMSE of each algorithm are reported to compare their denoising ability.
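    A minimal wavelet-thresholding denoiser of the kind compared in the paper is sketched below (in Python with PyWavelets rather than MATLAB); the wavelet, level, and threshold rule are common textbook choices, not the paper's.

```python
# Wavelet soft-thresholding denoiser sketch for one image channel,
# using the universal threshold with a median-based noise estimate.
import numpy as np
import pywt

def wavelet_denoise(channel, wavelet="db4", level=2):
    coeffs = pywt.wavedec2(channel, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745  # noise estimate
    thr = sigma * np.sqrt(2 * np.log(channel.size))     # universal threshold
    new_coeffs = [coeffs[0]] + [
        tuple(pywt.threshold(c, thr, mode="soft") for c in detail)
        for detail in coeffs[1:]
    ]
    return pywt.waverec2(new_coeffs, wavelet)

noisy = np.random.rand(128, 128) + 0.1 * np.random.randn(128, 128)
clean = wavelet_denoise(noisy)
```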

  9. Generalized Jaynes-Cummings model as a quantum search algorithm

    SciTech Connect

    Romanelli, A.

    2009-07-15

    We propose a continuous time quantum search algorithm using a generalization of the Jaynes-Cummings model. In this model the states of the atom are the elements among which the algorithm realizes the search, exciting resonances between the initial and the searched states. This algorithm behaves like Grover's algorithm; the optimal search time is proportional to the square root of the size of the search set and the probability to find the searched state oscillates periodically in time. In this frame, it is possible to reinterpret the usual Jaynes-Cummings model as a trivial case of the quantum search algorithm.
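    The Grover-like behavior described here can be made concrete with the standard continuous-time (Farhi-Gutmann) search result; the formulas below are that textbook result, shown for orientation, not reproduced from this paper.

```latex
% Standard continuous-time quantum search scaling (Farhi--Gutmann),
% illustrating the Grover-like behavior of the abstract.
% For a search set of size $N$ and coupling energy $E$, the success
% probability oscillates periodically in time,
\[
  P(t) = \sin^2\!\left(\frac{E\,t}{\hbar\sqrt{N}}\right),
  \qquad
  t_{\mathrm{opt}} = \frac{\pi\hbar\sqrt{N}}{2E},
\]
% so the optimal search time grows like $\sqrt{N}$.
```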

  10. A hybrid monkey search algorithm for clustering analysis.

    PubMed

    Chen, Xin; Zhou, Yongquan; Luo, Qifang

    2014-01-01

    Clustering is a popular data analysis and data mining technique. The k-means clustering algorithm is one of the most commonly used methods; however, it depends heavily on the initial solution and easily falls into a local optimum. In view of these disadvantages of k-means, this paper proposes a hybrid monkey algorithm based on the search operator of the artificial bee colony algorithm for clustering analysis. Experiments on synthetic and real-life datasets show that the algorithm performs better than the basic monkey algorithm for clustering analysis.
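    The hybrid monkey/ABC operators are not specified in this abstract, so the snippet below shows only the clustering objective that such metaheuristics optimize in place of k-means' alternating updates: the sum of squared distances of points to their nearest candidate centroid. The dataset and centroid count are arbitrary.

```python
# Clustering cost evaluated for a candidate set of centroids; any
# population-based optimizer can score solutions with this function.
import numpy as np

def clustering_cost(centroids, data):
    # centroids: (k, d), data: (n, d); sum of squared distances of each
    # point to its nearest centroid.
    d2 = ((data[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    return d2.min(axis=1).sum()

data = np.random.rand(100, 2)
candidate = np.random.rand(3, 2)   # one candidate solution (3 centroids)
print(clustering_cost(candidate, data))
```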

  11. Novel biomedical tetrahedral mesh methods: algorithms and applications

    NASA Astrophysics Data System (ADS)

    Yu, Xiao; Jin, Yanfeng; Chen, Weitao; Huang, Pengfei; Gu, Lixu

    2007-12-01

    Tetrahedral mesh generation, as a prerequisite of many soft-tissue simulation methods, is very important in virtual surgery programs because of the real-time requirement. Aiming to speed up the computation in the simulation, we propose a revised Delaunay algorithm which strikes a good balance among quality of tetrahedra, boundary preservation, and time complexity, with many improved methods. Another mesh algorithm, named Space-Disassembling, is also presented in this paper, and a comparison of Space-Disassembling, the traditional Delaunay algorithm, and the revised Delaunay algorithm is carried out on clinical soft-tissue simulation projects, including craniofacial plastic surgery and breast reconstruction plastic surgery.

  12. Petri nets SM-cover-based on heuristic coloring algorithm

    NASA Astrophysics Data System (ADS)

    Tkacz, Jacek; Doligalski, Michał

    2015-09-01

    In the paper, a heuristic coloring algorithm for interpreted Petri nets is presented. Coloring is used to determine the State Machine (SM) subnets. The algorithm reduces the Petri net in order to reduce the computational complexity and finds one of its possible State Machine covers. The proposed algorithm uses elements of the interpretation of Petri nets. The obtained result may not be the best, but it is sufficient for use in rapid prototyping of logic controllers. The SM-cover found will also be used in the development of algorithms for decomposition, and in the modular synthesis and implementation of parallel logic controllers. The correctness of the developed heuristic algorithm was verified using the Gentzen formal reasoning system.

  13. Back to the Origin

    PubMed Central

    Evertts, Adam G.

    2012-01-01

    In bacteria, replication is a carefully orchestrated event that unfolds the same way for each bacterium and each cell division. The process of DNA replication in bacteria optimizes cell growth and coordinates high levels of simultaneous replication and transcription. In metazoans, the organization of replication is more enigmatic. The lack of a specific sequence that defines origins of replication has, until recently, severely limited our ability to define the organizing principles of DNA replication. This question is of particular importance as emerging data suggest that replication stress is an important contributor to inherited genetic damage and the genomic instability in tumors. We consider here the replication program in several different organisms including recent genome-wide analyses of replication origins in humans. We review recent studies on the role of cytosine methylation in replication origins, the role of transcriptional looping and gene gating in DNA replication, and the role of chromatin’s 3-dimensional structure in DNA replication. We use these new findings to consider several questions surrounding DNA replication in metazoans: How are origins selected? What is the relationship between replication and transcription? How do checkpoints inhibit origin firing? Why are there early and late firing origins? We then discuss whether oncogenes promote cancer through a role in DNA replication and whether errors in DNA replication are important contributors to the genomic alterations and gene fusion events observed in cancer. We conclude with some important areas for future experimentation. PMID:23634256

  14. A bio-inspired cooperative algorithm for distributed source localization with mobile nodes.

    PubMed

    Khalili, Azam; Rastegarnia, Amir; Islam, Md Kafiul; Yang, Zhi

    2013-01-01

    In this paper we propose an algorithm for distributed optimization in mobile nodes. Compared with many published works, an important consideration here is that the nodes do not know the cost function beforehand. Instead of decision-making based on linear combination of the neighbor estimates, the proposed algorithm relies on information-rich nodes that are iteratively identified. To quickly find these nodes, the algorithm adopts a larger step size during the initial iterations. The proposed algorithm can be used in many different applications, such as distributed odor source localization and mobile robots. Comparative simulation results are presented to support the proposed algorithm.
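    For orientation, the sketch below shows a baseline diffusion-type scheme with the adaptive step-size schedule the abstract mentions (a larger step during initial iterations). Note the paper's information-rich-node mechanism would replace the plain neighbor averaging used here; topology, costs, and schedules are invented for illustration.

```python
# Baseline distributed estimation sketch: each node combines neighbor
# estimates and takes a local gradient step, with a larger initial step
# size. All parameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
source = np.array([2.0, -1.0])               # unknown source location
nodes = rng.normal(size=(10, 2))             # initial estimates
neighbors = [((i - 1) % 10, (i + 1) % 10) for i in range(10)]  # ring

for k in range(200):
    mu = 0.5 if k < 20 else 0.05             # larger initial step size
    grads = nodes - source                   # grad of 0.5*||x - source||^2
    new = np.empty_like(nodes)
    for i, (a, b) in enumerate(neighbors):
        combined = (nodes[i] + nodes[a] + nodes[b]) / 3.0  # combine step
        new[i] = combined - mu * grads[i]                  # adapt step
    nodes = new
print(np.abs(nodes - source).max())          # estimates near the source
```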

  15. A new optimization approach for shell and tube heat exchangers by using electromagnetism-like algorithm (EM)

    NASA Astrophysics Data System (ADS)

    Abed, Azher M.; Abed, Issa Ahmed; Majdi, Hasan Sh.; Al-Shamani, Ali Najah; Sopian, K.

    2016-12-01

    This study proposes a new procedure for the optimal design of shell and tube heat exchangers. The electromagnetism-like algorithm is applied to save on heat exchanger capital cost and to design a compact, high-performance heat exchanger with effective use of the allowable pressure drop (the cost of the pump). An optimization algorithm is then utilized to determine the optimal values of both the geometric design parameters and the maximum allowable pressure drop by pursuing the minimization of a total cost function. A computer code is developed for the optimal design of shell and tube heat exchangers. Different test cases are solved to demonstrate the effectiveness and ability of the proposed algorithm, and the results are compared with those obtained by other approaches available in the literature. The comparisons indicate that the proposed design procedure can be successfully applied to the optimal design of shell and tube heat exchangers. In particular, in the examined cases reductions of total cost of up to 30, 29, and 56.15% compared with the original design, and of up to 18, 5.5, and 7.4% compared with other approaches, are observed for case studies 1, 2, and 3, respectively. The economic optimization resulting from the proposed design procedure is especially relevant when size/volume is critical and a compact, high-performance unit of moderate volume and cost is needed.
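    The attraction-repulsion mechanism of a generic electromagnetism-like optimizer (after Birbil and Fang) is sketched below on a toy objective; the paper's heat exchanger cost function and design constraints are not reproduced, and the move rule is simplified.

```python
# Electromagnetism-like optimizer sketch: charges from objective values,
# attraction toward better points, repulsion from worse ones. The
# objective is a toy stand-in for the heat exchanger cost function.
import numpy as np

rng = np.random.default_rng(1)

def f(x):                                 # toy objective (minimize)
    return np.sum(x ** 2)

X = rng.uniform(-5, 5, size=(15, 3))      # population of candidate designs
for _ in range(100):
    vals = np.array([f(x) for x in X])
    best = vals.min()
    denom = np.sum(vals - best) + 1e-12
    q = np.exp(-X.shape[1] * (vals - best) / denom)    # "charges"
    F = np.zeros_like(X)
    for i in range(len(X)):
        for j in range(len(X)):
            if i == j:
                continue
            d = X[j] - X[i]
            r2 = np.dot(d, d) + 1e-12
            sign = 1.0 if vals[j] < vals[i] else -1.0  # attract to better
            F[i] += sign * q[i] * q[j] * d / r2
    step = rng.uniform(0, 1, size=(len(X), 1))         # random step length
    X = X + step * F / (np.linalg.norm(F, axis=1, keepdims=True) + 1e-12)
    X = np.clip(X, -5, 5)                              # stay in bounds
print(min(f(x) for x in X))
```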

  16. Algorithmic Coordination in Robotic Networks

    DTIC Science & Technology

    2010-11-29

    Final report (motion.me.ucsb.edu, November 29, 2010) summarizing the technical accomplishments fully or partly supported by this award. The results are organized in four main thrusts, the first being dynamic vehicle routing and target assignment; the journal publications, listed in Section 3 of the report, are organized in the same four thrusts.

  17. Efficient parallel algorithms for elastic plastic finite element analysis

    NASA Astrophysics Data System (ADS)

    Ding, K. Z.; Qin, Q.-H.; Cardew-Hall, M.; Kalyanasundaram, S.

    2008-03-01

    This paper presents our new development of parallel finite element algorithms for elastic-plastic problems. The proposed method is based on dividing the original structure under consideration into a number of substructures which are treated as isolated finite element models via the interface conditions. Throughout the analysis, each processor stores only the information relevant to its substructure and generates the local stiffness matrix. A parallel substructure-oriented preconditioned conjugate gradient method, combined with MR smoothing and a diagonal storage scheme, is employed to solve the linear systems of equations. After the displacements of the problem under consideration have been obtained, a substepping scheme is used to integrate the elastic-plastic stress-strain relations. The procedure outlined controls the error of the computed stress by choosing each substep size automatically according to a prescribed tolerance. The combination of these algorithms shows good speedup as the number of processors increases and makes possible the effective solution of 3D elastic-plastic problems whose size is much too large for a single workstation.
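    A minimal preconditioned conjugate gradient solver of the kind applied per substructure is sketched below (serial, with a Jacobi preconditioner as a simple stand-in for the paper's scheme; the paper distributes the matrix-vector products across substructures).

```python
# Jacobi-preconditioned conjugate gradient sketch for a symmetric
# positive-definite system A x = b.
import numpy as np

def pcg(A, b, tol=1e-8, max_iter=500):
    M_inv = 1.0 / np.diag(A)          # Jacobi (diagonal) preconditioner
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv * r
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

n = 100  # toy SPD system standing in for a substructure stiffness matrix
A = (np.diag(np.arange(1.0, n + 1))
     + 0.1 * np.eye(n, k=1) + 0.1 * np.eye(n, k=-1))
b = np.ones(n)
x = pcg(A, b)
print(np.linalg.norm(A @ x - b))
```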

  18. Binarization algorithm for document image with complex background

    NASA Astrophysics Data System (ADS)

    Miao, Shaojun; Lu, Tongwei; Min, Feng

    2015-12-01

    The most important step in image preprocessing for Optical Character Recognition (OCR) is binarization. Due to the complex background or varying light in a text image, binarization is a very difficult problem. This paper presents an improved binarization algorithm, which can be divided into several steps. First, the background approximation is obtained by polynomial fitting, and the text is sharpened using a bilateral filter. Second, image contrast compensation is performed to reduce the impact of light and improve the contrast of the original image. Third, the first derivative of the pixels in the compensated image is calculated to get the average threshold value, and edge detection is performed. Fourth, the stroke width of the text is estimated by measuring the distance between edge pixels; the final stroke width is the most frequent distance in the histogram. Fifth, the window size is calculated from the final stroke width, and a local threshold estimation approach binarizes the image. Finally, small noise is removed using morphological operators. The experimental results show that the proposed method can effectively remove the noise caused by complex backgrounds and varying light.
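    Two of these steps can be sketched compactly: polynomial background fitting and a windowed local threshold. The polynomial degree, window size, and the Niblack-style threshold rule below are stand-ins for the paper's stroke-width-derived estimates.

```python
# Sketch of background removal by 2nd-order polynomial fitting followed
# by a Niblack-style local threshold; parameters are assumptions.
import numpy as np
from scipy.ndimage import uniform_filter

def binarize(gray, win=25, k=-0.2):
    h, w = gray.shape
    yy, xx = np.mgrid[0:h, 0:w]
    # Least-squares fit of a 2nd-order polynomial background surface.
    Apoly = np.column_stack([np.ones(h * w), xx.ravel(), yy.ravel(),
                             xx.ravel() ** 2, yy.ravel() ** 2,
                             (xx * yy).ravel()])
    coef, *_ = np.linalg.lstsq(Apoly, gray.ravel(), rcond=None)
    flat = gray - (Apoly @ coef).reshape(h, w)   # contrast compensation
    # Local threshold: mean + k * std over each win x win window.
    mean = uniform_filter(flat, win)
    sq_mean = uniform_filter(flat ** 2, win)
    std = np.sqrt(np.maximum(sq_mean - mean ** 2, 0))
    return (flat > mean + k * std).astype(np.uint8)

img = np.random.rand(64, 64)    # stand-in for a grayscale document image
binary = binarize(img)
```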

  19. Modified Discrete Grey Wolf Optimizer Algorithm for Multilevel Image Thresholding

    PubMed Central

    Sun, Lijuan; Guo, Jian; Xu, Bin; Li, Shujing

    2017-01-01

    The computation of image segmentation has become more complicated with the increasing number of thresholds, and the selection and application of thresholds in image thresholding has become an NP problem. The paper puts forward the modified discrete grey wolf optimizer algorithm (MDGWO), which improves the optimal-solution updating mechanism of the search agent by means of weights. Taking Kapur's entropy as the optimized function and based on the discreteness of thresholds in image segmentation, the paper first discretizes the grey wolf optimizer (GWO) and then proposes a new attack strategy, using a weight coefficient to replace the search formula for the optimal solution used in the original algorithm. The experimental results show that MDGWO can search out the optimal thresholds efficiently and precisely, and that they are very close to the results of exhaustive searches. In comparison with electromagnetism optimization (EMO), differential evolution (DE), the Artificial Bee Colony (ABC), and the classical GWO, it is concluded that MDGWO has advantages over the latter four in terms of image segmentation quality, objective function values, and their stability. PMID:28127305
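    The objective MDGWO maximizes, Kapur's entropy for a candidate threshold vector, can be computed as below; the wolf-update details of MDGWO are omitted, so any optimizer could score candidate thresholds with this function.

```python
# Kapur's entropy for multilevel thresholding: the sum of the entropies
# of the gray-level classes induced by the thresholds.
import numpy as np

def kapur_entropy(hist, thresholds):
    """hist: normalized 256-bin histogram; thresholds: sorted ints in (0, 256)."""
    edges = [0] + list(thresholds) + [256]
    total = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        p = hist[lo:hi]
        w = p.sum()
        if w <= 0:
            continue                          # empty class contributes nothing
        q = p[p > 0] / w
        total += -(q * np.log(q)).sum()       # entropy of this class
    return total

img = np.random.randint(0, 256, size=(64, 64))   # stand-in image
hist = np.bincount(img.ravel(), minlength=256) / img.size
print(kapur_entropy(hist, [85, 170]))            # score a candidate solution
```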

  20. Formal Verification of a Conflict Resolution and Recovery Algorithm

    NASA Technical Reports Server (NTRS)

    Maddalon, Jeffrey; Butler, Ricky; Geser, Alfons; Munoz, Cesar

    2004-01-01

    New air traffic management concepts distribute the duty of traffic separation among system participants. As a consequence, these concepts have a greater dependency and rely heavily on on-board software and hardware systems. One example of a new on-board capability in a distributed air traffic management system is air traffic conflict detection and resolution (CD&R). Traditional methods for safety assessment such as human-in-the-loop simulations, testing, and flight experiments may not be sufficient for this highly distributed system as the set of possible scenarios is too large to have a reasonable coverage. This paper proposes a new method for the safety assessment of avionics systems that makes use of formal methods to drive the development of critical systems. As a case study of this approach, the mechanical verification of an algorithm for air traffic conflict resolution and recovery called RR3D is presented. The RR3D algorithm uses a geometric optimization technique to provide a choice of resolution and recovery maneuvers. If the aircraft adheres to these maneuvers, they will bring the aircraft out of conflict and the aircraft will follow a conflict-free path to its original destination. Verification of RR3D is carried out using the Prototype Verification System (PVS).