Science.gov

Sample records for algorithm originally proposed

  1. The Origins of Counting Algorithms

    PubMed Central

    Cantlon, Jessica F.; Piantadosi, Steven T.; Ferrigno, Stephen; Hughes, Kelly D.; Barnard, Allison M.

    2015-01-01

    Humans’ ability to ‘count’ by verbally labeling discrete quantities is unique in animal cognition. The evolutionary origins of counting algorithms are not understood. We report that non-human primates exhibit a cognitive ability that is algorithmically and logically similar to human counting. Monkeys were given the task of choosing between two food caches. First, they saw one cache baited with some number of food items, one item at a time. Then, a second cache was baited with food items, one at a time. At the point when the second set approximately outnumbered the first set, monkeys spontaneously moved to choose the second set even before it was completely baited. Using a novel Bayesian analysis, we show that monkeys used an approximate counting algorithm to increment and compare quantities in sequence. This algorithm is structurally similar to formal counting in humans and thus may have been an important evolutionary precursor to human counting. PMID:25953949

  2. The origins of counting algorithms.

    PubMed

    Cantlon, Jessica F; Piantadosi, Steven T; Ferrigno, Stephen; Hughes, Kelly D; Barnard, Allison M

    2015-06-01

    Humans' ability to count by verbally labeling discrete quantities is unique in animal cognition. The evolutionary origins of counting algorithms are not understood. We report that nonhuman primates exhibit a cognitive ability that is algorithmically and logically similar to human counting. Monkeys were given the task of choosing between two food caches. First, they saw one cache baited with some number of food items, one item at a time. Then, a second cache was baited with food items, one at a time. At the point when the second set was approximately equal to the first set, the monkeys spontaneously moved to choose the second set even before that cache was completely baited. Using a novel Bayesian analysis, we show that the monkeys used an approximate counting algorithm for comparing quantities in sequence that is incremental, iterative, and condition controlled. This proto-counting algorithm is structurally similar to formal counting in humans and thus may have been an important evolutionary precursor to human counting. PMID:25953949
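
    The "incremental, iterative, and condition-controlled" loop described above is easy to caricature in code. The following Python sketch is a toy noisy-accumulator model, not the authors' Bayesian analysis; the Gaussian increment noise (a Weber-fraction-style parameter) and all numbers are assumptions for illustration.

    ```python
    import random

    def noisy_count(n_items, weber=0.2):
        """Accumulate an estimate one item at a time; each increment is
        blurred by Gaussian noise (scalar variability)."""
        estimate = 0.0
        for _ in range(n_items):
            estimate += random.gauss(1.0, weber)
        return estimate

    def choose_cache(first_n, second_max, weber=0.2):
        """Watch the second cache being baited item by item and switch as
        soon as its running estimate exceeds the remembered first estimate
        (the condition-controlled exit of the loop)."""
        first_est = noisy_count(first_n, weber)
        second_est = 0.0
        for item in range(1, second_max + 1):
            second_est += random.gauss(1.0, weber)
            if second_est > first_est:
                return item          # switched before baiting finished
        return None                  # never judged the second cache larger

    print(choose_cache(first_n=8, second_max=12))
    ```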

  3. The algorithmic origins of life

    PubMed Central

    Walker, Sara Imari; Davies, Paul C. W.

    2013-01-01

    Although it has been notoriously difficult to pin down precisely what it is that makes life so distinctive and remarkable, there is general agreement that its informational aspect is one key property, perhaps the key property. The unique informational narrative of living systems suggests that life may be characterized by context-dependent causal influences, and, in particular, that top-down (or downward) causation—where higher levels influence and constrain the dynamics of lower levels in organizational hierarchies—may be a major contributor to the hierarchical structure of living systems. Here, we propose that the emergence of life may correspond to a physical transition associated with a shift in the causal structure, where information gains direct and context-dependent causal efficacy over the matter in which it is instantiated. Such a transition may be akin to more traditional physical transitions (e.g. thermodynamic phase transitions), with the crucial distinction that determining which phase (non-life or life) a given system is in requires dynamical information and therefore can only be inferred by identifying causal architecture. We discuss some novel research directions based on this hypothesis, including potential measures of such a transition that may be amenable to laboratory study, and how the proposed mechanism corresponds to the onset of the unique mode of (algorithmic) information processing characteristic of living systems. PMID:23235265

  4. Cliftonite in meteorites: A proposed origin

    USGS Publications Warehouse

    Brett, R.; Higgins, G.T.

    1967-01-01

    Cliftonite, a polycrystalline aggregate of graphite with cubic morphology, is known in ten meteorites. Some workers have considered it to be a pseudomorph after diamond, and have used the proposed diamond ancestry as evidence of a meteoritic parent body of at least lunar dimensions. We have synthesized cliftonite in Fe-Ni-C alloys in vacuum, as a product of decomposition of cohenite [(Fe,Ni)₃C]. We therefore suggest that a high pressure origin is unnecessary for meteorites which contain cliftonite, and that these meteorites were formed at low pressures. This conclusion is in agreement with other recent evidence.

  5. Cohenite: its occurrence and a proposed origin

    USGS Publications Warehouse

    Brett, R.

    1967-01-01

    Cohenite is found almost exclusively in meteorites containing from 6 to 8 wt.% Ni. On the basis of phase diagrams and kinetic data it is proposed that cohenite cannot form in meteorites having more than 8 wt.% Ni and that any cohenite which formed in meteorites having Ni content lower than 6 wt.% decomposed during cooling. A series of isothermal sections for the system Fe–Ni–C has been constructed between 750 and 600°C from published information on the three constituent binary systems. The diagrams indicate that the presence of a few tenths of a per cent carbon in a Ni–Fe alloy may reduce the temperature at which kamacite separates from taenite by more than 50°C. Hence C in iron meteorites may be partly responsible for the postulated supercooled nucleation of kamacite in meteorites proposed by recent authors. Cohenite found in meteorites probably formed over the temperature range 650–610°C. For compositions approximating those of metallic meteorites, the greater the C or Ni content of the alloy, the lower the temperature of formation of cohenite. The presence of cohenite in meteorites indicates neither high nor low pressures of formation. However, the absence of cohenite in meteorites containing the assemblage metal + graphite requires low pressures during cooling. Such meteorites therefore cooled in parent bodies of asteroidal size, or near the surface of large bodies.

  6. Proposal of an Algorithm to Synthesize Music Suitable for Dance

    NASA Astrophysics Data System (ADS)

    Morioka, Hirofumi; Nakatani, Mie; Nishida, Shogo

    This paper proposes an algorithm for synthesizing music suited to the emotions conveyed by moving pictures. Our goal is to support multimedia content creation: web page design, animation films, and so on. Here we adopt human dance as the moving picture to examine the applicability of our method, because dance images have a high affinity with music. The algorithm is composed of three modules: the first computes emotions from an input dance image, the second computes emotions from music in the database, and the last selects music suitable for the input dance via an emotion interface.
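
    The selection module can be pictured as a nearest-neighbour lookup in an emotion space. Below is a minimal Python sketch of that idea; the valence/arousal coordinates and track names are hypothetical, and the paper's actual emotion-computation modules for dance video and music are not reproduced.

    ```python
    import math

    # Hypothetical emotion coordinates (valence, arousal); the paper's own
    # feature extraction from dance video and music is not reproduced here.
    music_db = {
        "track_a": (0.8, 0.9),   # bright, energetic
        "track_b": (-0.5, 0.2),  # dark, calm
        "track_c": (0.3, -0.4),  # warm, relaxed
    }

    def select_music(dance_emotion, db):
        """Pick the track whose emotion vector is closest to the dance's."""
        return min(db, key=lambda t: math.dist(db[t], dance_emotion))

    print(select_music((0.7, 0.8), music_db))  # -> track_a
    ```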

  7. Modified multiscale sample entropy computation of laser speckle contrast images and comparison with the original multiscale entropy algorithm

    NASA Astrophysics Data System (ADS)

    Humeau-Heurtier, Anne; Mahé, Guillaume; Abraham, Pierre

    2015-12-01

    Laser speckle contrast imaging (LSCI) enables noninvasive monitoring of microvascular perfusion. Some studies have proposed to extract information from LSCI data through their multiscale entropy (MSE). However, to reach a large range of scales, the original MSE algorithm may require long recordings for reliability. Recently, a novel approach to compute MSE with shorter data sets has been proposed: the short-time MSE (sMSE). Our goal is to apply, for the first time, the sMSE algorithm to LSCI data and to compare results with those given by the original MSE. Moreover, we apply the original MSE algorithm on data of different lengths and compare results with those given by longer recordings. For this purpose, synthetic signals and 192 LSCI regions of interest (ROIs) of different sizes are processed. Our results show that the sMSE algorithm is valid for computing the MSE of LSCI data. Moreover, with time series shorter than those initially proposed, the sMSE and original MSE algorithms give results with no statistical difference from those of the original MSE algorithm with longer data sets. The minimal acceptable length depends on the ROI size. Comparisons of MSE from healthy and pathological subjects can be performed with shorter data sets than those proposed until now.
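
    For reference, the original MSE procedure referred to above coarse-grains the series at each scale and applies sample entropy to the result. A minimal Python sketch using the standard definitions (not the authors' code; the tolerance handling is simplified as noted in the comments):

    ```python
    import numpy as np

    def sample_entropy(x, m=2, r_frac=0.15):
        """SampEn(m, r) = -ln(A/B); r is taken relative to this series' std
        (a simplification: classic MSE fixes r from the scale-1 series)."""
        x = np.asarray(x, float)
        r = r_frac * x.std()
        def pairs(mm):
            t = np.array([x[i:i + mm] for i in range(len(x) - mm)])
            d = np.max(np.abs(t[:, None] - t[None, :]), axis=2)  # Chebyshev
            return (np.sum(d <= r) - len(t)) / 2  # unordered, no self-matches
        B, A = pairs(m), pairs(m + 1)
        return -np.log(A / B) if A > 0 and B > 0 else np.inf

    def multiscale_entropy(x, max_scale=5):
        """Original MSE: coarse-grain by non-overlapping means, then SampEn."""
        x = np.asarray(x, float)
        mse = []
        for tau in range(1, max_scale + 1):
            n = len(x) // tau
            coarse = x[:n * tau].reshape(n, tau).mean(axis=1)
            mse.append(sample_entropy(coarse))
        return mse

    print(multiscale_entropy(np.random.randn(1000)))
    ```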

  8. Burst abdomen in pregnancy: A proposed management algorithm.

    PubMed

    Okpala, Amalachukwu M; Debrah, Samuel A; Mouhajer, Mohammed

    2016-06-01

    Management of the burst abdomen is complex due to the co-morbidities associated with it. When coupled with intraabdominal sepsis and pregnancy, it becomes even more difficult due to the ethical issues that have to be considered when managing both mother and child. Due to the paucity of literature on this subject, a management algorithm has been proposed which aims at tackling this delicate issue. However, the major consideration in the management of these cases is that decisions are to be made based on optimization of the condition of the mother. PMID:27635100

  9. Vertigo in childhood: proposal for a diagnostic algorithm based upon clinical experience.

    PubMed

    Casani, A P; Dallan, I; Navari, E; Sellari Franceschini, S; Cerchiai, N

    2015-06-01

    The aim of this paper is to analyse, after clinical experience with a series of patients with established diagnoses and review of the literature, all relevant anamnestic features in order to build a simple diagnostic algorithm for vertigo in childhood. This study is a retrospective chart review. A series of 37 children underwent complete clinical and instrumental vestibular examination. Only neurological disorders or genetic diseases represented exclusion criteria. All diagnoses were reviewed after applying the most recent diagnostic guidelines. In our experience, the most common aetiology for dizziness is vestibular migraine (38%), followed by acute labyrinthitis/neuritis (16%) and somatoform vertigo (16%). Benign paroxysmal vertigo was diagnosed in 4 patients (11%) and paroxysmal torticollis was diagnosed in a 1-year-old child. In 8% (3 patients) of cases, the dizziness had a post-traumatic origin: 1 canalolithiasis of the posterior semicircular canal and 2 labyrinthine concussions. Ménière's disease was diagnosed in 2 cases. A bilateral vestibular failure of unknown origin caused chronic dizziness in 1 patient. In conclusion, this algorithm could represent a good tool for guiding clinical suspicion to the correct diagnostic assessment in dizzy children where no neurological findings are detectable. The algorithm has just a few simple steps, based mainly on two aspects to be investigated early: the temporal features of the vertigo and the presence of hearing impairment. A different algorithm has been proposed for cases in which a traumatic origin is suspected.
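
    The two early discriminators named above (temporal pattern, hearing involvement) translate into a first branching along these lines; this is an illustrative sketch with hypothetical branch labels, not the authors' published flowchart:

    ```python
    def first_branch(episodic, hearing_loss):
        """First two questions of a pediatric vertigo work-up (illustrative
        labels only; the published algorithm has further steps)."""
        if episodic and hearing_loss:
            return "consider a Meniere-like picture / audiological work-up"
        if episodic:
            return "consider vestibular migraine or benign paroxysmal vertigo"
        if hearing_loss:
            return "consider labyrinthitis; audiological and otologic work-up"
        return "consider somatoform vertigo or chronic vestibulopathy"

    print(first_branch(episodic=True, hearing_loss=False))
    ```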

  10. A new proposal concerning the botanical origin of Baltic amber

    PubMed Central

    Wolfe, Alexander P.; Tappert, Ralf; Muehlenbachs, Karlis; Boudreau, Marc; McKellar, Ryan C.; Basinger, James F.; Garrett, Amber

    2009-01-01

    Baltic amber constitutes the largest known deposit of fossil plant resin and the richest repository of fossil insects of any age. Despite a remarkable legacy of archaeological, geochemical and palaeobiological investigation, the botanical origin of this exceptional resource remains controversial. Here, we use taxonomically explicit applications of solid-state Fourier-transform infrared (FTIR) microspectroscopy, coupled with multivariate clustering and palaeobotanical observations, to propose that conifers of the family Sciadopityaceae, closely allied to the sole extant representative, Sciadopitys verticillata, were involved in the genesis of Baltic amber. The fidelity of FTIR-based chemotaxonomic inferences is upheld by modern–fossil comparisons of resins from additional conifer families and genera (Cupressaceae: Metasequoia; Pinaceae: Pinus and Pseudolarix). Our conclusions challenge hypotheses advocating members of either of the families Araucariaceae or Pinaceae as the primary amber-producing trees and correlate favourably with the progressive demise of subtropical forest biomes from northern Europe as palaeotemperatures cooled following the Eocene climate optimum. PMID:19570786

  11. Replication and Comparison of the Newly Proposed ADOS-2, Module 4 Algorithm in ASD without ID: A Multi-Site Study

    ERIC Educational Resources Information Center

    Pugliese, Cara E.; Kenworthy, Lauren; Bal, Vanessa Hus; Wallace, Gregory L.; Yerys, Benjamin E.; Maddox, Brenna B.; White, Susan W.; Popal, Haroon; Armour, Anna Chelsea; Miller, Judith; Herrington, John D.; Schultz, Robert T.; Martin, Alex; Anthony, Laura Gutermuth

    2015-01-01

    Recent updates have been proposed to the Autism Diagnostic Observation Schedule-2 Module 4 diagnostic algorithm. This new algorithm, however, has not yet been validated in an independent sample without intellectual disability (ID). This multi-site study compared the original and revised algorithms in individuals with ASD without ID. The revised…

  12. Cliftonite: A proposed origin, and its bearing on the origin of diamonds in meteorites

    USGS Publications Warehouse

    Brett, R.; Higgins, G.T.

    1969-01-01

    Cliftonite, a polycrystalline aggregate of graphite with spherulitic structure and cubic morphology, is known in 14 meteorites. Some workers have considered it to be a pseudomorph after diamond, and have used the proposed diamond ancestry as evidence of a meteoritic parent body of at least lunar dimensions. Careful examination of meteoritic samples indicates that cliftonite forms by precipitation within kamacite. We have also demonstrated that graphite with cubic morphology may be synthesized in an Fe-Ni-C alloy annealed in a vacuum. We therefore suggest that a high pressure origin is unnecessary for meteorites which contain cliftonite, and that these meteorites were formed at low pressures. This conclusion is in agreement with other recent evidence. We also suggest that recently discovered cubes and cubo-octahedra of lonsdaleite in the Canyon Diablo meteorite are pseudomorphs after cliftonite, not diamond, as has previously been suggested.

  13. Phlegethon flow: A proposed origin for spicules and coronal heating

    NASA Technical Reports Server (NTRS)

    Schatten, Kenneth H.; Mayr, Hans G.

    1986-01-01

    A model was developed for the transport of mass, energy, and magnetic field into the corona. The focus is on the flow below the photosphere which allows energy to pass into, and be dissipated within, the solar atmosphere. The high flow velocities observed in spicules are explained. A treatment following the work of Bailyn et al. (1985) is examined. It was concluded that, within the framework of the model, energy may dissipate at a temperature comparable to the temperature where the waves originated, allowing for an equipartition solution of atmospheric flow, departing the Sun at velocities approaching the maximum Alfvén speed.

  14. Event-by-event PET image reconstruction using list-mode origin ensembles algorithm

    NASA Astrophysics Data System (ADS)

    Andreyev, Andriy

    2016-03-01

    There is a great demand for real-time or event-by-event (EBE) image reconstruction in emission tomography. Ideally, as soon as an event has been detected by the acquisition electronics, it should be used in the image reconstruction software. This would greatly speed up image reconstruction, since most of the data would be processed and reconstructed while the patient is still undergoing the scan. Unfortunately, the current industry standard is that reconstruction of the image does not start until all the data for the current image frame have been acquired. Implementing an EBE reconstruction for the MLEM family of algorithms is possible, but not straightforward, as multiple (computationally expensive) updates to the image estimate are required. In this work an alternative Origin Ensembles (OE) image reconstruction algorithm for PET imaging is converted to EBE mode and investigated as to whether it is a viable alternative for real-time image reconstruction. In the OE algorithm all acquired events are seen as points that are located somewhere along the corresponding lines-of-response (LORs), together forming a point cloud. Iteratively, through a multitude of quasi-random shifts following the likelihood function, the point cloud converges to a reflection of the actual radiotracer distribution with a degree of accuracy similar to MLEM. New data can be naturally added into the point cloud. Preliminary results with simulated data show little difference between regular reconstruction and EBE mode, proving the feasibility of the proposed approach.
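
    A heavily simplified 1-D sketch of the OE point-cloud update is given below: each event holds a current origin voxel on its LOR, and a Metropolis step proposes a shift along that LOR. The ensemble weight used here, w(counts) = prod_v n_v^n_v, is a stand-in assumption for the OE likelihood, not Sitek's exact form, and all geometry is toy.

    ```python
    import math
    import random

    N_VOX = 20
    # Each event's LOR is a contiguous voxel interval [lo, hi].
    events = [(random.randint(0, 10), random.randint(10, 19)) for _ in range(500)]
    origin = [random.randint(lo, hi) for lo, hi in events]   # current point cloud
    counts = [0] * N_VOX
    for v in origin:
        counts[v] += 1

    def log_term(n):
        return n * math.log(n) if n > 0 else 0.0

    for _ in range(20000):                        # quasi-random shifts
        e = random.randrange(len(events))
        lo, hi = events[e]
        a, b = origin[e], random.randint(lo, hi)  # propose new spot on the LOR
        if a == b:
            continue
        # log acceptance ratio from the local change in w(counts)
        dlog = (log_term(counts[a] - 1) - log_term(counts[a])
                + log_term(counts[b] + 1) - log_term(counts[b]))
        if dlog >= 0 or random.random() < math.exp(dlog):
            counts[a] -= 1; counts[b] += 1; origin[e] = b

    print(counts)   # converged point-cloud histogram ~ activity estimate
    ```

    Event-by-event operation then amounts to appending a new point anywhere on a freshly detected LOR and letting subsequent shifts absorb it.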

  15. 76 FR 69328 - Proposed Collection; Comment Request; Race and National Origin Identification

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-11-08

    ... Proposed Collection; Comment Request; Race and National Origin Identification AGENCY: Department of The.... Type of Review: Revision of a currently approved collection. Title: Race and National Origin... and national origin information electronically from an applicant. The data will be used to...

  16. Evaluation of microscopic hematuria: a critical review and proposed algorithm.

    PubMed

    Niemi, Matthew A; Cohen, Robert A

    2015-07-01

    Microscopic hematuria (MH), often discovered incidentally, has many causes, including benign processes, kidney disease, and genitourinary malignancy. The clinician, therefore, must decide how intensively to investigate the source of MH and select which tests to order and referrals to make, aiming not to overlook serious conditions while simultaneously avoiding unnecessary tests. Existing professional guidelines for the evaluation of MH are largely based on expert opinion and have weak evidence bases. Existing data demonstrate associations between isolated MH and various diseases in certain populations, and these associations serve as the basis for our proposed approach to the evaluation of MH. Various areas of ongoing uncertainty regarding the appropriate evaluation should be the basis for ongoing research. PMID:26088073

  17. A Two-Stage Algorithm for Origin-Destination Matrices Estimation Considering Dynamic Dispersion Parameter for Route Choice

    PubMed Central

    Wang, Yong; Ma, Xiaolei; Liu, Yong; Gong, Ke; Henrickson, Kristian C.; Xu, Maozeng; Wang, Yinhai

    2016-01-01

    This paper proposes a two-stage algorithm to simultaneously estimate origin-destination (OD) matrix, link choice proportion, and dispersion parameter using partial traffic counts in a congested network. A non-linear optimization model is developed which incorporates a dynamic dispersion parameter, followed by a two-stage algorithm in which Generalized Least Squares (GLS) estimation and a Stochastic User Equilibrium (SUE) assignment model are iteratively applied until the convergence is reached. To evaluate the performance of the algorithm, the proposed approach is implemented in a hypothetical network using input data with high error, and tested under a range of variation coefficients. The root mean squared error (RMSE) of the estimated OD demand and link flows are used to evaluate the model estimation results. The results indicate that the estimated dispersion parameter theta is insensitive to the choice of variation coefficients. The proposed approach is shown to outperform two established OD estimation methods and produce parameter estimates that are close to the ground truth. In addition, the proposed approach is applied to an empirical network in Seattle, WA to validate the robustness and practicality of this methodology. In summary, this study proposes and evaluates an innovative computational approach to accurately estimate OD matrices using link-level traffic flow data, and provides useful insight for optimal parameter selection in modeling travelers’ route choice behavior. PMID:26761209

  18. A Two-Stage Algorithm for Origin-Destination Matrices Estimation Considering Dynamic Dispersion Parameter for Route Choice.

    PubMed

    Wang, Yong; Ma, Xiaolei; Liu, Yong; Gong, Ke; Henrickson, Kristian C; Xu, Maozeng; Wang, Yinhai

    2016-01-01

    This paper proposes a two-stage algorithm to simultaneously estimate origin-destination (OD) matrix, link choice proportion, and dispersion parameter using partial traffic counts in a congested network. A non-linear optimization model is developed which incorporates a dynamic dispersion parameter, followed by a two-stage algorithm in which Generalized Least Squares (GLS) estimation and a Stochastic User Equilibrium (SUE) assignment model are iteratively applied until the convergence is reached. To evaluate the performance of the algorithm, the proposed approach is implemented in a hypothetical network using input data with high error, and tested under a range of variation coefficients. The root mean squared error (RMSE) of the estimated OD demand and link flows are used to evaluate the model estimation results. The results indicate that the estimated dispersion parameter theta is insensitive to the choice of variation coefficients. The proposed approach is shown to outperform two established OD estimation methods and produce parameter estimates that are close to the ground truth. In addition, the proposed approach is applied to an empirical network in Seattle, WA to validate the robustness and practicality of this methodology. In summary, this study proposes and evaluates an innovative computational approach to accurately estimate OD matrices using link-level traffic flow data, and provides useful insight for optimal parameter selection in modeling travelers' route choice behavior. PMID:26761209
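
    The two-stage structure is a fixed-point iteration: given link-choice proportions, GLS estimates the OD vector; given the OD vector, a logit-based assignment with dispersion theta regenerates the proportions. The sketch below uses a toy two-link network and a stub assignment in place of a real SUE solver, and holds theta fixed (the paper estimates it dynamically); all numbers are illustrative assumptions.

    ```python
    import numpy as np

    def gls_estimate(P, c, W):
        """Stage 1: OD demand q minimising (c - P q)^T W (c - P q)."""
        A = P.T @ W @ P
        return np.linalg.solve(A, P.T @ W @ c)

    def sue_proportions(flows, theta, base_costs):
        """Stage 2 stub: congestion-inflated route costs, then a logit split
        (dispersion theta) per OD pair; a real implementation would solve a
        full Stochastic User Equilibrium assignment instead."""
        costs = base_costs * (1.0 + 0.0001 * flows)[:, None]
        u = np.exp(-theta * costs)
        return u / u.sum(axis=0)

    # Toy instance: 2 OD pairs, 2 links; route r of OD pair j uses link r only,
    # so the link-choice proportion matrix P is the route-choice matrix.
    c = np.array([775.0, 725.0])            # observed link counts
    W = np.eye(2)                           # GLS weight matrix
    base_costs = np.array([[1.0, 1.3],      # cost of route r for OD pair j
                           [1.2, 1.0]])
    theta = 0.5
    P = np.array([[0.6, 0.4],
                  [0.4, 0.6]])              # initial choice proportions

    for _ in range(100):                    # alternate the two stages
        q = gls_estimate(P, c, W)
        P_new = sue_proportions(P @ q, theta, base_costs)
        if np.max(np.abs(P_new - P)) < 1e-10:
            break
        P = P_new

    print(q.round(1), P.round(3))
    ```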

  19. Construction Method of Display Proposal for Commodities in Sales Promotion by Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Yumoto, Masaki

    In a sales promotion task, a wholesaler prepares and presents a display proposal for commodities in order to negotiate with retailers' buyers about which commodities they should sell. To automate sales promotion tasks, the proposal has to be constructed according to the target retailer's buyer. However, it is difficult to construct a proposal suitable for the target retail store because of the vast number of possible commodity combinations. This paper proposes a construction method based on a genetic algorithm (GA). The proposed method represents initial display proposals as genes, improves them through the GA's evaluation value, and rearranges the proposal with the highest evaluation value according to commodity classification. Through practical experiments, we confirm that the display proposal produced by the proposed method is similar to one constructed by a wholesaler.
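
    A minimal GA skeleton in the spirit described (gene = a sequence of commodity IDs, fitness = a hypothetical buyer-match score; the paper's actual encoding, evaluation value, and rearrangement step are not reproduced):

    ```python
    import random

    COMMODITIES = list(range(50))        # hypothetical commodity IDs
    SLOTS = 8                            # display positions per proposal

    def fitness(proposal, target_profile):
        """Hypothetical score: overlap with the target buyer's preferences."""
        return len(set(proposal) & target_profile)

    def crossover(a, b):
        cut = random.randrange(1, SLOTS)
        return a[:cut] + b[cut:]

    def mutate(p, rate=0.1):
        return [random.choice(COMMODITIES) if random.random() < rate else g
                for g in p]

    def evolve(target_profile, pop_size=40, gens=100):
        pop = [random.sample(COMMODITIES, SLOTS) for _ in range(pop_size)]
        for _ in range(gens):
            pop.sort(key=lambda p: fitness(p, target_profile), reverse=True)
            elite = pop[: pop_size // 2]
            pop = elite + [mutate(crossover(*random.sample(elite, 2)))
                           for _ in range(pop_size - len(elite))]
        return pop[0]

    buyer = set(random.sample(COMMODITIES, 10))   # hypothetical buyer profile
    print(evolve(buyer))
    ```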

  20. Proposal for Creation of Various Sign Sounds Using Interactive Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Ogawa, Shintaro; Fukumoto, Makoto

    Sign sounds are used in various daily situations. However, they are not designed to reflect each user's own preferences. This study proposes an Interactive Genetic Algorithm (IGA) method for creating various sign sounds suited to each user's preference and objective. The proposed method creates sign sounds using all musical notes, whereas a previous IGA method created sign sounds under musical rules that restricted the notes available. To investigate the efficacy of the proposed method, two listening experiments were performed with a concrete system based on it. Results of the listening experiments, in which bright and warm sign sounds were created, showed significant increases in fitness value.
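
    What distinguishes an IGA from an ordinary GA is that the fitness function is the user's own rating. The sketch below shows that loop with the sound synthesis and the human rating stubbed out; population sizes and note ranges are arbitrary assumptions.

    ```python
    import random

    NOTES = list(range(60, 73))               # MIDI pitches, one octave

    def random_sound():
        return [random.choice(NOTES) for _ in range(4)]   # 4-note sign sound

    def ask_user(sound):
        """In a real IGA session the user listens and rates 1-5;
        here the rating is stubbed with a random value."""
        return random.randint(1, 5)

    population = [random_sound() for _ in range(8)]
    for generation in range(3):
        rated = sorted(population, key=ask_user, reverse=True)
        parents = rated[:4]                    # keep the user's favourites
        population = parents + [
            [random.choice(pair) for pair in zip(*random.sample(parents, 2))]
            for _ in range(4)                  # uniform-crossover children
        ]
    print(population[0])
    ```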

  1. Differentiating origins of outflow tract ventricular arrhythmias: a comparison of three different electrocardiographic algorithms

    PubMed Central

    Jiao, Z.Y.; Li, Y.B.; Mao, J.; Liu, X.Y.; Yang, X.C.; Tan, C.; Chu, J.M.; Liu, X.P.

    2016-01-01

    Our objective is to evaluate the accuracy of three algorithms in differentiating the origins of outflow tract ventricular arrhythmias (OTVAs). This study involved 110 consecutive patients with OTVAs for whom a standard 12-lead surface electrocardiogram (ECG) showed typical left bundle branch block morphology with an inferior axis. All the ECG tracings were retrospectively analyzed using the following three recently published ECG algorithms: 1) the transitional zone (TZ) index, 2) the V2 transition ratio, and 3) V2 R wave duration and R/S wave amplitude indices. Considering all patients, the V2 transition ratio had the highest sensitivity (92.3%), while the R wave duration and R/S wave amplitude indices in V2 had the highest specificity (93.9%). The latter finding had a maximal area under the ROC curve of 0.925. In patients with left ventricular (LV) rotation, the V2 transition ratio had the highest sensitivity (94.1%), while the R wave duration and R/S wave amplitude indices in V2 had the highest specificity (87.5%). The former finding had a maximal area under the ROC curve of 0.892. All three published ECG algorithms are effective in differentiating the origin of OTVAs, while the V2 transition ratio, and the V2 R wave duration and R/S wave amplitude indices are the most sensitive and specific algorithms, respectively. Amongst all of the patients, the V2 R wave duration and R/S wave amplitude algorithm had the maximal area under the ROC curve, but in patients with LV rotation the V2 transition ratio algorithm had the maximum area under the ROC curve. PMID:27143173
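
    As commonly defined in the ECG literature (Betensky et al.), the V2 transition ratio compares the percentage R wave, R/(R+S), in V2 during the arrhythmia with the same measurement in sinus rhythm, with values of 0.6 or more suggesting a left-sided origin. The abstract does not restate the formula, so the following sketch and its amplitudes are illustrative:

    ```python
    def percent_r(r_amp, s_amp):
        """Percentage R: R / (R + S), amplitudes taken as absolute values."""
        return abs(r_amp) / (abs(r_amp) + abs(s_amp))

    def v2_transition_ratio(vt_r, vt_s, sinus_r, sinus_s):
        return percent_r(vt_r, vt_s) / percent_r(sinus_r, sinus_s)

    # hypothetical V2 amplitudes (mV) during arrhythmia and in sinus rhythm
    ratio = v2_transition_ratio(vt_r=0.4, vt_s=0.9, sinus_r=0.2, sinus_s=1.1)
    print(f"{ratio:.2f} ->",
          "LVOT-suggestive" if ratio >= 0.6 else "RVOT-suggestive")
    ```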

  2. A Proposed India-Specific Algorithm for Management of Type 2 Diabetes.

    PubMed

    2016-06-01

    Several algorithms and guidelines have been proposed by countries and international professional bodies; however, no recent updated management algorithm is available for Asian Indians. Specifically, algorithms developed and validated in developed nations may not be relevant or applicable to patients in India because of several factors: early age of onset of diabetes, occurrence of diabetes in nonobese and sometimes lean people, differences in the relative contributions of insulin resistance and β-cell dysfunction, marked postprandial glycemia, frequent infections including tuberculosis, low access to healthcare and medications in people of low socioeconomic stratum, ethnic dietary practices (e.g., ingestion of high-carbohydrate diets), and inadequate education regarding hypoglycemia. All these factors should be considered to choose an appropriate therapeutic option in this population. The proposed algorithm is simple, suggests less expensive drugs, and tries to provide an effective and comprehensive framework for delivery of diabetes therapy in primary care in India. The proposed guidelines agree with international recommendations in favoring individualization of therapeutic targets as well as modalities of treatment in a flexible manner suitable to the Indian population. PMID:26909751

  3. Proposed algorithm for determining the delta intercept of a thermocouple psychrometer curve

    SciTech Connect

    Kurzmack, M.A.

    1993-07-01

    The USGS Hydrologic Investigations Program is currently developing instrumentation to study the unsaturated zone at Yucca Mountain in Nevada. Surface-based boreholes up to 2,500 feet in depth will be drilled, and then instrumented in order to define the water potential field within the unsaturated zone. Thermocouple psychrometers will be used to monitor the in-situ water potential. An algorithm is proposed for simply and efficiently reducing a six-wire thermocouple psychrometer voltage output curve to a single value, the delta intercept. The algorithm identifies a plateau region in the psychrometer curve and extrapolates a linear regression back to the initial start of relaxation. When properly conditioned for the measurements being made, the algorithm yields reasonable results even with incomplete or noisy psychrometer curves over a 1 to 60 bar range.
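
    The reduction described (find the plateau, fit a line, extrapolate back to the start of relaxation) can be sketched in a few lines. Two details are assumptions here: the plateau is taken as the window of minimum absolute slope, and the start of relaxation is taken as t = 0.

    ```python
    import numpy as np

    def delta_intercept(t, v, window=10):
        """Fit a line to the flattest window of the psychrometer curve and
        extrapolate it back to t = 0 (assumed start of relaxation)."""
        slopes = []
        for i in range(len(t) - window):
            m, b = np.polyfit(t[i:i + window], v[i:i + window], 1)
            slopes.append((abs(m), m, b))
        _, m, b = min(slopes)            # flattest segment = plateau
        return b                         # intercept at t = 0

    # synthetic curve: exponential relaxation onto a gently drifting plateau
    t = np.linspace(0, 60, 200)
    v = 25.0 * np.exp(-t / 3.0) + 10.0 - 0.01 * t
    print(delta_intercept(t, v))         # ~10 (plateau value at t = 0)
    ```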

  4. Superior thyroid artery origin in Caucasian Greeks: A new classification proposal and review of the literature.

    PubMed

    Natsis, Konstantinos; Raikos, Athanasios; Foundos, Ioannis; Noussios, George; Lazaridis, Nikolaos; Njau, Samouel N

    2011-09-01

    Studies on the origin of the superior thyroid artery indicate that it may originate either from the external carotid artery (at the level of the common carotid bifurcation) or from the common carotid artery. Classical anatomical teaching, however, holds that the superior thyroid artery is a branch of the external carotid artery. Variability in the anatomy of the superior thyroid artery was studied in 100 carotids. Moreover, a review of the origin of the superior thyroid artery across recent and previous cadaveric, autopsy, and angiographic studies, on adults and fetuses, was carried out. The superior thyroid artery originated from the external carotid artery in 39% of cases, and at the level of the carotid bifurcation or from the common carotid artery in 61% of cases. The anterior branches of the external carotid artery were separate in 76% of cases, while common trunks between the arteries were found in 24% of the specimens. A new classification proposal for the origin of the superior thyroid artery is also suggested. In this study, the origin of the superior thyroid artery is considered to lie at the level of the carotid bifurcation and not on the external carotid artery as stated in many classical anatomy textbooks. This has a great impact on the terminology when referring to the anterior branches of the external carotid artery, which could be termed anterior branches of the cervical carotid artery. Head and neck surgeons must be familiar with anatomical variations of the superior thyroid artery in order to achieve a better surgical outcome.

  5. Evaluation of Origin Ensemble algorithm for image reconstruction for pixelated solid-state detectors with large number of channels

    NASA Astrophysics Data System (ADS)

    Kolstein, M.; De Lorenzo, G.; Mikhaylova, E.; Chmeissani, M.; Ariño, G.; Calderón, Y.; Ozsahin, I.; Uzun, D.

    2013-04-01

    The Voxel Imaging PET (VIP) Pathfinder project intends to show the advantages of using pixelated solid-state technology for nuclear medicine applications. It proposes designs for Positron Emission Tomography (PET), Positron Emission Mammography (PEM) and Compton gamma camera detectors with a large number of signal channels (of the order of 10⁶). For PET scanners, conventional algorithms like Filtered Back-Projection (FBP) and Ordered Subset Expectation Maximization (OSEM) are straightforward to use and give good results. However, FBP presents difficulties for detectors with limited angular coverage like PEM and Compton gamma cameras, whereas OSEM has an impractically large time and memory consumption for a Compton gamma camera with a large number of channels. In this article, the Origin Ensemble (OE) algorithm is evaluated as an alternative algorithm for image reconstruction. Monte Carlo simulations of the PET design are used to compare the performance of OE, FBP and OSEM in terms of the bias, variance and average mean squared error (MSE) image quality metrics. For the PEM and Compton camera designs, results obtained with OE are presented.
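
    The bias, variance and MSE metrics used for the comparison have standard per-voxel definitions over repeated noisy reconstructions of a known phantom, and combine through the bias-variance decomposition; a minimal sketch:

    ```python
    import numpy as np

    def image_quality(recons, truth):
        """recons: (n_realisations, n_voxels) stack of reconstructions of the
        same phantom; returns mean bias^2, variance and MSE over voxels."""
        mean_img = recons.mean(axis=0)
        bias2 = (mean_img - truth) ** 2
        var = recons.var(axis=0)
        mse = bias2 + var                 # bias-variance decomposition
        return bias2.mean(), var.mean(), mse.mean()

    truth = np.ones(64)
    recons = truth + 0.1 + 0.2 * np.random.randn(200, 64)   # biased, noisy
    print(image_quality(recons, truth))   # ~ (0.01, 0.04, 0.05)
    ```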

  6. Assembled sequence contigs by SOAPdenovo and Velvet algorithms from metagenomic short reads of a new bacterial isolate of gut origin

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Assembled sequence contigs by SOAPdenovo and Velvet algorithms from metagenomic short reads of a new bacterial isolate of gut origin. This study included 2 submissions with a total of 9.8 million bp of assembled contigs....

  7. Locally linear manifold model for gap-filling algorithms of hyperspectral imagery: Proposed algorithms and a comparative study

    NASA Astrophysics Data System (ADS)

    Suliman, Suha Ibrahim

    The Landsat 7 Enhanced Thematic Mapper Plus (ETM+) Scan Line Corrector (SLC) device, which corrects for the satellite motion, failed in May 2003, resulting in a loss of about 22% of the data. To improve the reconstruction of Landsat 7 SLC-off images, a Locally Linear Manifold (LLM) model is proposed for filling gaps in hyperspectral imagery. In this approach, each spectral band is modeled as a non-linear locally affine manifold that can be learned from the matching bands at different time instances. Moreover, each band is divided into small overlapping spatial patches. In particular, each patch is considered to be a linear combination (approximately on an affine space) of a set of corresponding patches from the same location that are adjacent in time or from the same season of the year. Fill patches are selected from Landsat 5 Thematic Mapper (TM) products from 1984 through 2011, which have spatial and radiometric resolution similar to Landsat 7 products. Using this approach, the gap-filling process involves finding a feasible point on the learned manifold to approximate the missing pixels. The proposed LLM framework is compared to some existing single-source (Average and Inverse Distance Weight (IDW)) and multi-source (Local Linear Histogram Matching (LLHM) and Adaptive Window Linear Histogram Matching (AWLHM)) gap-filling methodologies. We analyze the effectiveness of the proposed LLM approach through simulation examples with known ground-truth. It is shown that the LLM-model-driven approach outperforms all existing recovery methods considered in this study. The superiority of LLM is illustrated by its providing better reconstructed images with higher accuracy even over heterogeneous landscapes. Moreover, it is relatively simple to realize algorithmically, and it needs much less computing time when compared to the state-of-the-art AWLHM approach.
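
    The per-patch model reduces to an affine least-squares fit: express the target patch, on its valid pixels, as a weights-sum-to-one combination of co-located reference patches, then apply those weights to predict the gap pixels. A sketch under those assumptions (the soft constraint weight and toy data are illustrative):

    ```python
    import numpy as np

    def fill_patch(target, refs, mask):
        """target: (p,) patch with NaNs in the SLC-off gap; refs: (k, p) stack
        of co-located patches from other dates; mask: True where target is
        valid. Solve for affine weights (sum to 1) on the valid pixels."""
        A = refs[:, mask].T                        # (n_valid, k)
        y = target[mask]
        k = refs.shape[0]
        # enforce sum(w) = 1 softly via an augmented, heavily weighted row
        A_aug = np.vstack([A, 1e3 * np.ones((1, k))])
        y_aug = np.append(y, 1e3)
        w, *_ = np.linalg.lstsq(A_aug, y_aug, rcond=None)
        filled = target.copy()
        filled[~mask] = refs[:, ~mask].T @ w       # predict the gap pixels
        return filled

    rng = np.random.default_rng(0)
    refs = rng.normal(100, 10, (4, 25))            # 4 reference dates, 5x5 patch
    target = 0.5 * refs[0] + 0.5 * refs[1]         # truth: affine mix
    mask = np.ones(25, bool); mask[10:15] = False  # simulated scan gap
    corrupted = target.copy(); corrupted[~mask] = np.nan
    print(np.allclose(fill_patch(corrupted, refs, mask)[~mask], target[10:15]))
    ```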

  8. Bilateral Simultaneous Tubal Ectopic Pregnancy: A Case Report, Review of Literature and a Proposed Management Algorithm

    PubMed Central

    Jena, Saubhagya Kumar; Nayak, Monalisha; Das, Leena; Senapati, Swagatika

    2016-01-01

    Bilateral simultaneous Tubal Ectopic Pregnancy (BTP) is the rarest form of ectopic pregnancy. The incidence is higher in women undergoing assisted reproductive techniques or ovulation induction. The clinical presentation is unpredictable and there are no unique features to distinguish it from unilateral ectopic pregnancy. BTP continues to be a clinician’s dilemma as pre-operative diagnosis is difficult and is commonly made during surgery. Treatment options are varied depending on site of ectopic pregnancy, extent of tubal damage and requirement of future fertility. We report a case of BTP which was diagnosed during surgery and propose an algorithm for management of such patients. PMID:27134950

  9. Flap reconstruction of the knee: A review of current concepts and a proposed algorithm

    PubMed Central

    Gravvanis, Andreas; Kyriakopoulos, Antonios; Kateros, Konstantinos; Tsoutsos, Dimosthenis

    2014-01-01

    A literature search focusing on flap knee reconstruction revealed much controversy regarding the optimal management of around-the-knee defects. Muscle flaps are the preferred option, mainly in infected wounds. Perforator flaps have recently been introduced in knee coverage with significant advantages due to low donor morbidity and long pedicles with a wide arc of rotation. In the case of a free flap, the choice of recipient vessels is the key point of the reconstruction. Taking the published experience into account, a reconstructive algorithm is proposed according to the size and location of the wound, and the presence of infection and/or a 3-dimensional defect. PMID:25405089

  10. A proposed origin of the Olympus Mons escarpment. [Martian volcanic feature

    NASA Technical Reports Server (NTRS)

    King, J. S.; Riehle, J. R.

    1974-01-01

    Olympus Mons (Nix Olympica) on Mars is delimited by a unique steep, nearly circular scarp. A pyroclastic model is proposed for the construct's origin. It is postulated that the Olympus Mons plateau is constructed predominantly of numerous ash-flow tuffs which were erupted from central sources over an extended period of time. Lava flows may be intercalated with the tuffs. A schematic radial profile incorporating the inferred compaction zones for an ash sheet is proposed. Following emplacement, eolian (and possibly fluvial) erosion and abrasion during dust storms would act on the ash sheets. Interior portions of the sheets would spall and slump following eolian erosion, generating steep, relatively smooth boundary scarps. The scarp would be circular due to symmetrical distribution of compaction zones. The model implies further that the Olympus Mons plateau rests on a more resistant rock substrate.

  11. A Proposed Implementation of Tarjan's Algorithm for Scheduling the Solution Sequence of Systems of Federated Models

    SciTech Connect

    McNunn, Gabriel S; Bryden, Kenneth M

    2013-01-01

    Tarjan's algorithm schedules the solution of systems of equations by noting the coupling and grouping between the equations. Simulating complex systems, e.g., advanced power plants, aerodynamic systems, or the multi-scale design of components, requires the linkage of large groups of coupled models. Currently, this is handled manually in systems modeling packages. That is, the analyst explicitly defines both the method and solution sequence necessary to couple the models. In small systems of models and equations this works well. However, as additional detail is needed across systems and across scales, the number of models grows rapidly. This precludes the manual assembly of large systems of federated models, particularly in systems composed of high fidelity models. This paper examines extending Tarjan's algorithm from sets of equations to sets of models. The proposed implementation of the algorithm is demonstrated using a small one-dimensional system of federated models representing the heat transfer and thermal stress in a gas turbine blade with thermal barrier coating. Enabling the rapid assembly and substitution of different models permits the rapid turnaround needed to support the “what-if” kinds of questions that arise in engineering design.
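
    The idea is sketched below: run Tarjan's strongly-connected-components algorithm over the model-dependency graph. Each SCC is a block of mutually coupled models that must be solved together, and Tarjan emits the SCCs with dependencies ahead of dependents, which is exactly a valid solution schedule. The model names are hypothetical.

    ```python
    from collections import defaultdict

    def tarjan_scc(graph):
        """Return SCCs of a directed graph; with edges pointing from a model
        to its dependencies, Tarjan emits SCCs dependencies-first, i.e. in
        a valid solve order."""
        index, low, on_stack, stack = {}, {}, set(), []
        sccs, counter = [], [0]

        def visit(v):
            index[v] = low[v] = counter[0]; counter[0] += 1
            stack.append(v); on_stack.add(v)
            for w in graph[v]:
                if w not in index:
                    visit(w); low[v] = min(low[v], low[w])
                elif w in on_stack:
                    low[v] = min(low[v], index[w])
            if low[v] == index[v]:               # v is the root of an SCC
                scc = []
                while True:
                    w = stack.pop(); on_stack.discard(w); scc.append(w)
                    if w == v:
                        break
                sccs.append(scc)

        for v in list(graph):
            if v not in index:
                visit(v)
        return sccs

    # hypothetical federated models: edges point from a model to the models
    # whose outputs it consumes
    models = defaultdict(list, {
        "stress": ["heat_transfer"],
        "heat_transfer": ["coating", "stress"],  # coupled pair -> one block
        "coating": [],
    })
    print(tarjan_scc(models))  # [['coating'], ['heat_transfer', 'stress']]
    ```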

  12. Surgery in extensive vertebral hemangioma: case report, literature review and a new algorithm proposal.

    PubMed

    Tarantino, Roberto; Donnarumma, Pasquale; Nigro, Lorenzo; Delfini, Roberto

    2015-07-01

    Hemangiomas are benign dysplasias or vascular tumors consisting of vascular spaces lined with endothelium. Nowadays, radiotherapy for vertebral hemangiomas (VHs) is widely accepted as the primary treatment for painful lesions. Nevertheless, the role of surgery is still unclear. The purpose of this study is to propose a novel treatment algorithm for VHs. This is a case report of an extensive VH and a review of the literature. A case of vertebral fracture during radiotherapy at a total dose of 30 Gy given in 10 fractions (treatment time 2 weeks) using a linear accelerator at 15 MV high-energy photons for extensive VH is reported. A review of the literature was performed using the PubMed database. The authors have no study funding sources. The authors have no conflicting financial interests. In the literature, good results in terms of pain and neurological deficits are reported. No cases of vertebral fractures are described. However, there is no consensus regarding the treatment for VHs. Radiotherapy is widely utilized for VHs causing pain. Surgery for VHs causing neurological deficits is also widely accepted. However, no indications are given regarding the extent of the lesion. We consider it important to evaluate the risk of pathologic vertebral fracture before initiating treatment, since there is no convention regarding the structural changes that radiotherapy induces in VHs. We propose a new algorithm of treatment. We recommend radiotherapy only for small lesions in which vertebral stability is not compromised. Kyphoplasty can be proposed for asymptomatic patients with small VHs, and for patients with painful small VHs without spinal canal invasion. For patients with pain in whom the VH is extensive or invades the spinal canal, and for patients with neurological deficits, we propose surgery. PMID:25720346

  13. Preliminary field trial of a putative research algorithm for diagnosing ICD-11 personality disorders in psychiatric patients: 2. Proposed trait domains.

    PubMed

    Kim, Youl-Ri; Tyrer, Peter; Lee, Hong-Seock; Kim, Sung-Gon; Hwang, Soon-Taek; Lee, Gi Young; Mulder, Roger

    2015-11-01

    This field trial examines the discriminant validity of five trait domains of the originally proposed research algorithm for diagnosing International Classification of Diseases (ICD)-11 personality disorders. The trial was carried out in South Korea, where a total of 124 patients with personality disorder participated in the study. Participants were assessed using the originally proposed monothetic trait domains of the asocial-schizoid, antisocial-dissocial, anxious-dependent, emotionally unstable and anankastic-obsessional groups of the research algorithm in ICD-11. Their assessments were compared to those from the Personality Assessment Schedule interview and the five-factor model (FFM). A total of 48.4% of patients were found to have pathology in two or more domains. In the discriminant analysis, 64.2% of the grouped cases of the originally proposed ICD-11 domains were correctly classified by the five domain categories using the Personality Assessment Schedule, with the highest accuracy in the anankastic-obsessional domain and the lowest accuracy in the emotionally unstable domain. In comparison, the asocial-schizoid, anxious-dependent and emotionally unstable domains were moderately correlated with the FFM, whereas the anankastic-obsessional and antisocial-dissocial domains were not significantly correlated with the FFM. In this field trial, we demonstrated the limited discriminant and convergent validity of the originally proposed trait domains of the research algorithm for diagnosing ICD-11 personality disorder. The results suggest that the anankastic, asocial and dissocial domains show good discrimination, whereas the anxious-dependent and emotionally unstable ones overlap too much and have been subsequently revised.

  14. Carbonado: Physical and chemical properties, a critical evaluation of proposed origins, and a revised genetic model

    NASA Astrophysics Data System (ADS)

    Haggerty, Stephen E.

    2014-03-01

    Carbonado-diamond is the most controversial of all diamond types and is found only in Brazil and the Central African Republic (Bangui). Neither an affinity to Earth's mantle nor an origin in the crust can be unequivocally established. Carbonado-diamond is at least 3.8 Ga old, an age about 0.5 Ga older than the oldest diamonds yet reported in kimberlites and lamproites on Earth. Derived from Neo- to Mid-Proterozoic meta-conglomerates, the primary magmatic host rock has not been identified. Discovered in 1841, the material is polycrystalline, robust and coke-like, and is best described as a strongly bonded micro-diamond ceramic. It is characteristically porous, which precludes an origin at high pressures and high temperatures in Earth's deep interior, yet it is also typically patinated, with a glass-like surface that resembles melting. With exotic inclusions of highly reduced metals, carbides, and nitrides, the origin of carbonado-diamond is made even more challenging. But the challenge is important because a new diamondiferous host rock may be involved, and the development of a new physical process for generating diamond is possibly assured. The combination of micro-crystals and random crystal orientation leads to extreme mechanical toughness and a predictable super-hardness. The physical and chemical properties of carbonado are described with a view to the development of a mimetic strategy to synthesize carbonado and to duplicate its extreme toughness and super-hardness. Textural variations are described with an emphasis on melt-like surface features, not previously discussed in the literature, but having a very clear bearing on the history and genesis of carbonado. Selected physical properties are presented and the proposed origins, diverse in character and imaginatively novel, are critically reviewed. From our present knowledge of the dynamic Earth, all indications are that carbonado is unlikely to be of terrestrial origin. A revised model for the origin of

  15. Pandora - Discovering the origin of the moons of Mars (a proposed Discovery mission)

    NASA Astrophysics Data System (ADS)

    Raymond, C. A.; Diniega, S.; Prettyman, T. H.

    2015-12-01

    After decades of intensive exploration of Mars, fundamental questions about the origin and evolution of the martian moons, Phobos and Deimos, remain unanswered. Their spectral characteristics are similar to C- or D-class asteroids, suggesting that they may have originated in the asteroid belt or outer solar system. Perhaps these ancient objects were captured separately, or maybe they are the fragments of a captured asteroid disrupted by impact. Various lines of evidence hint at other possibilities: one alternative is co-formation with Mars, in which case the moons contain primitive martian materials. Another is that they are re-accreted ejecta from a giant impact and contain material from the early martian crust. The Pandora mission, proposed in response to the 2014 NASA Discovery Announcement of Opportunity, will acquire new information needed to determine the provenance of the moons of Mars. Pandora will travel to and successively orbit Phobos and Deimos to map their chemical and mineral composition and further refine their shape and gravity. Geochemical data, acquired by nuclear- and infrared-spectroscopy, can distinguish between key origin hypotheses. High resolution imaging data will enable detailed geologic mapping and crater counting to determine the timing of major events and stratigraphy. Data acquired will be used to determine the nature of and relationship between "red" and "blue" units on Phobos, and determine how Phobos and Deimos are related. After identifying material representative of each moons' bulk composition, analysis of the mineralogical and elemental composition of this material will allow discrimination between the formation hypotheses for each moon. The information acquired by Pandora can then be compared with similar data sets for other solar system bodies and from meteorite studies. Understanding the formation of the martian moons within this larger context will yield a better understanding of processes acting in the early solar system.

  16. Statistical model and error analysis of a proposed audio fingerprinting algorithm

    NASA Astrophysics Data System (ADS)

    McCarthy, E. P.; Balado, F.; Silvestre, G. C. M.; Hurley, N. J.

    2006-01-01

    In this paper we present a statistical analysis of a particular audio fingerprinting method proposed by Haitsma et al. [1]. Due to the excellent robustness and synchronisation properties of this particular fingerprinting method, we would like to examine its performance for varying values of the parameters involved in the computation and ascertain its capabilities. For this reason, we pursue a statistical model of the fingerprint (also known as a hash, message digest or label). Initially we follow the work of a previous attempt made by Doets and Lagendijk [2-4] to obtain such a statistical model. By reformulating the representation of the fingerprint as a quadratic form, we present a model in which the parameters derived by Doets and Lagendijk may be obtained more easily. Furthermore, our model allows further insight into certain aspects of the behaviour of the fingerprinting algorithm not previously examined. Using our model, we then analyse the probability of error (P_e) of the hash. We identify two particular error scenarios and obtain an expression for the probability of error in each case. We present three methods of varying accuracy to approximate P_e following Gaussian noise addition to the signal of interest. We then analyse the probability of error following desynchronisation of the signal at the input of the hashing system and provide an approximation to P_e for different parameters of the algorithm under varying degrees of desynchronisation.
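
    For context, the fingerprint analysed here derives 32 bits per audio frame from the signs of time- and frequency-differences of band energies, and two excerpts are declared matching when the bit error rate between their sub-fingerprint streams falls below a threshold (0.35 in the original paper). A condensed sketch of the bit extraction on synthetic band energies:

    ```python
    import numpy as np

    def sub_fingerprints(band_energies):
        """band_energies: (n_frames, 33) energies in 33 log-spaced bands.
        Bit (n, m) is the sign of the time-and-frequency energy difference:
        E(n,m) - E(n,m+1) - (E(n-1,m) - E(n-1,m+1)) > 0."""
        freq_diff = band_energies[:, :-1] - band_energies[:, 1:]  # 32 columns
        bits = (freq_diff[1:] - freq_diff[:-1]) > 0
        return bits.astype(np.uint8)                              # (n-1, 32)

    def bit_error_rate(fp_a, fp_b):
        return np.mean(fp_a != fp_b)

    rng = np.random.default_rng(1)
    energies = rng.random((257, 33))                 # synthetic band energies
    fp = sub_fingerprints(energies)
    noisy = sub_fingerprints(energies + 0.05 * rng.random((257, 33)))
    print(bit_error_rate(fp, noisy), "< 0.35 -> declared a match")
    ```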

  17. Proposal of a Clinical Decision Tree Algorithm Using Factors Associated with Severe Dengue Infection

    PubMed Central

    Hussin, Narwani; Cheah, Wee Kooi; Ng, Kee Sing; Muninathan, Prema

    2016-01-01

    Background: WHO’s new classification in 2009 (dengue with or without warning signs, and severe dengue) has necessitated large numbers of hospital admissions of dengue patients, which in turn imposes a huge economic and physical burden on many hospitals around the globe, particularly in South East Asia and Malaysia, where the disease has seen a rapid surge in numbers in recent years. The lack of a simple tool to differentiate mild from life-threatening infection has led to unnecessary hospitalization of dengue patients. Methods: We conducted a single-centre, retrospective study involving serologically confirmed dengue fever patients admitted to a single ward in Hospital Kuala Lumpur, Malaysia. Data were collected for 4 months, from February to May 2014. Socio-demography, co-morbidity, days of illness before admission, symptoms, warning signs, vital signs and laboratory results were all recorded. Descriptive statistics were tabulated, and simple and multiple logistic regression analyses were done to determine significant risk factors associated with severe dengue. Results: 657 patients with confirmed dengue were analysed, of whom 59 (9.0%) had severe dengue. Overall, the commonest warning signs were vomiting (36.1%) and abdominal pain (32.1%). Previous co-morbidity, vomiting, diarrhoea, pleural effusion, low systolic blood pressure, high haematocrit, low albumin and high urea were found to be significant risk factors for severe dengue using simple logistic regression. However, the significant risk factors for severe dengue with multiple logistic regression were only vomiting, pleural effusion, and low systolic blood pressure. Using those 3 risk factors, we plotted an algorithm for predicting severe dengue. When compared to the classification of severe dengue based on the WHO criteria, the decision tree algorithm had a sensitivity of 0.81, specificity of 0.54, positive predictive value of 0.16 and negative predictive value of 0.96. Conclusion: The decision tree algorithm proposed
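
    A rule this small is easy to state as code. The sketch below flags possible severe dengue when any of the three identified factors is present and computes sensitivity/specificity against ground truth; the any-factor combination, the 90 mmHg cut-off for "low systolic blood pressure" and the toy cohort are all assumptions, since the abstract does not spell out the tree:

    ```python
    def predicts_severe(vomiting, pleural_effusion, systolic_bp, low_bp=90):
        """Flag possible severe dengue if any of the three identified risk
        factors is present (low-BP cut-off of 90 mmHg is an assumption)."""
        return vomiting or pleural_effusion or systolic_bp < low_bp

    def sens_spec(cases):
        """cases: list of (features_dict, truly_severe) pairs."""
        tp = fp = tn = fn = 0
        for feats, severe in cases:
            pred = predicts_severe(**feats)
            if pred and severe: tp += 1
            elif pred: fp += 1
            elif severe: fn += 1
            else: tn += 1
        return tp / (tp + fn), tn / (tn + fp)

    cohort = [  # toy cohort for illustration only
        ({"vomiting": True, "pleural_effusion": False, "systolic_bp": 110}, True),
        ({"vomiting": False, "pleural_effusion": False, "systolic_bp": 120}, False),
        ({"vomiting": False, "pleural_effusion": True, "systolic_bp": 85}, True),
        ({"vomiting": True, "pleural_effusion": False, "systolic_bp": 118}, False),
    ]
    print(sens_spec(cohort))   # (sensitivity, specificity) on the toy cohort
    ```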

  18. A proposed origin for fossilized Pennsylvanian plant cuticles by pyrite oxidation (Sydney Coalfield, Nova Scotia, Canada)

    USGS Publications Warehouse

    Zodrow, E.L.; Mastalerz, Maria

    2009-01-01

    Fossilized cuticles, though rare in the roof rocks of coal seams in the younger part of the Pennsylvanian Sydney Coalfield, Nova Scotia, represent nearly all of the major plant groups. Selected for investigation, by methods of Fourier transform infrared spectroscopy (FTIR) and elemental analysis, are fossilized cuticles (FCs) and cuticles extracted from compressions by Schulze's process (CCs) of Alethopteris ambigua. These investigations are supplemented by FTIR analysis of FCs and CCs of Cordaites principalis, and of a cuticle-fossilized medullosalean(?) axis. The purpose of this study is threefold: (1) to try to determine biochemical discriminators between FCs and CCs of the same species using semi-quantitative FTIR techniques; (2) to assess the effects chemical treatments, particularly Schulze's process, have on functional groups; and most importantly (3) to study the primary origin of FCs. Results are equivocal in respect of (1); (2) after Schulze's treatment, aliphatic moieties tend to be reduced relative to oxygenated groups, and some aliphatic chains may be shortened; and (3) a primary chemical model is proposed. The model is based on a variety of geological observations, including stratal distribution, clay and pyrite mineralogies associated with FCs and compressions, and regional geological structure. The model presupposes compression-cuticle fossilization under anoxic conditions for late authigenic deposition of sub-micron-sized pyrite on the compressions. Rock joints subsequently provided conduits for oxygen-enriched ground-water circulation to initiate in situ pyritic oxidation that produced sulfuric acid for macerating compressions, with resultant loss of vitrinite but with preservation of cuticles as FCs. The timing of the process remains undetermined, though it is assumed to be late to post-diagenetic. Although FCs represent a pathway of organic matter transformation (pomd) distinct from other plant-fossilization processes, global applicability of the

  19. Surgical Management of Early Endometrial Cancer: An Update and Proposal of a Therapeutic Algorithm

    PubMed Central

    Falcone, Francesca; Balbi, Giancarlo; Di Martino, Luca; Grauso, Flavio; Salzillo, Maria Elena; Messalli, Enrico Michelino

    2014-01-01

    In the last few years technical improvements have produced a dramatic shift from traditional open surgery towards a minimally invasive approach for the management of early endometrial cancer. Advancement in minimally invasive surgical approaches has allowed extensive staging procedures to be performed with significantly reduced patient morbidity. Debate is ongoing regarding the choice of a minimally invasive approach that has the most effective benefit for the patients, the surgeon, and the healthcare system as a whole. Surgical treatment of women with presumed early endometrial cancer should take into account the features of endometrial disease and the general surgical risk of the patient. Women with endometrial cancer are often aged, obese, and with cardiovascular and metabolic comorbidities that increase the risk of peri-operative complications, so it is important to tailor the extent and the radicalness of surgery in order to decrease morbidity and mortality potentially derivable from unnecessary procedures. In this regard women with negative nodes derive no benefit from unnecessary lymphadenectomy, but may develop short- and long-term morbidity related to this procedure. Preoperative and intraoperative techniques could be critical tools for tailoring the extent and the radicalness of surgery in the management of women with presumed early endometrial cancer. In this review we will discuss updates in surgical management of early endometrial cancer and also the role of preoperative and intraoperative evaluation of lymph node status in influencing surgical options, with the aim of proposing a management algorithm based on the literature and our experience. PMID:25063051

  1. Surgical management of early endometrial cancer: an update and proposal of a therapeutic algorithm.

    PubMed

    Falcone, Francesca; Balbi, Giancarlo; Di Martino, Luca; Grauso, Flavio; Salzillo, Maria Elena; Messalli, Enrico Michelino

    2014-07-26

    In the last few years technical improvements have produced a dramatic shift from traditional open surgery towards a minimally invasive approach for the management of early endometrial cancer. Advancement in minimally invasive surgical approaches has allowed extensive staging procedures to be performed with significantly reduced patient morbidity. Debate is ongoing regarding the choice of the minimally invasive approach that offers the greatest benefit to the patient, the surgeon, and the healthcare system as a whole. Surgical treatment of women with presumed early endometrial cancer should take into account the features of the endometrial disease and the general surgical risk of the patient. Women with endometrial cancer are often elderly and obese, with cardiovascular and metabolic comorbidities that increase the risk of peri-operative complications, so it is important to tailor the extent and the radicalness of surgery in order to decrease the morbidity and mortality potentially arising from unnecessary procedures. In this regard, women with negative nodes derive no benefit from unnecessary lymphadenectomy, but may develop short- and long-term morbidity related to this procedure. Preoperative and intraoperative techniques could be critical tools for tailoring the extent and the radicalness of surgery in the management of women with presumed early endometrial cancer. In this review we discuss updates in the surgical management of early endometrial cancer and the role of preoperative and intraoperative evaluation of lymph node status in influencing surgical options, with the aim of proposing a management algorithm based on the literature and our experience.

  2. When and why should mentally ill prisoners be transferred to secure hospitals: a proposed algorithm.

    PubMed

    Vogel, Tobias; Lanquillon, Stefan; Graf, Marc

    2013-01-01

    For reasons well known and researched in detail, worldwide prevalence rates of mental disorders are much higher in prison populations than in the general population, not only for sentenced prisoners but also for prisoners on remand, asylum seekers on warrant for deportation, and others. Moreover, the proportion of imprisoned individuals is rising in most countries. Therefore forensic psychiatry must deal not only with the typically young criminal population, vulnerable to mental illness due to social stress and at an age when rates of schizophrenia, suicide, drug abuse and most personality disorders are highest, but also with an increasingly older population with age-related diseases such as dementia. While treatment standards for these mental disorders are largely published and accepted, and scientific evidence on screening prisoners for mental illness is growing, where to treat them depends on considerations of public safety and on local conditions such as national legislation, special regulations and the availability of treatment facilities (e.g., in prisons, in special medical wards within prisons or in secure hospitals). While from a medical point of view a mentally ill prisoner should be treated in a hospital, the ultimate decision must weigh these different issues. In this article the authors propose an algorithm comprising screening procedures for mental health and a treatment chain for mentally ill prisoners based on treatment facilities in prison, medical safety, human rights, ethics, and the availability of services at this interface between prison and medicine.

  3. Automated Analysis of 1p/19q Status by FISH in Oligodendroglial Tumors: Rationale and Proposal of an Algorithm

    PubMed Central

    Duval, Céline; de Tayrac, Marie; Michaud, Karine; Cabillic, Florian; Paquet, Claudie; Gould, Peter Vincent; Saikali, Stéphan

    2015-01-01

    Objective To propose a new algorithm facilitating automated analysis of 1p and 19q status by FISH technique in oligodendroglial tumors with software packages available in the majority of institutions using this technique. Methods We documented all green/red (G/R) probe signal combinations in a retrospective series of 53 oligodendroglial tumors according to literature guidelines (Algorithm 1) and selected only the most significant combinations for a new algorithm (Algorithm 2). This second algorithm was then validated on a prospective internal series of 45 oligodendroglial tumors and on an external series of 36 gliomas. Results Algorithm 2 utilizes 24 G/R combinations which represent less than 40% of combinations observed with Algorithm 1. The new algorithm excludes some common G/R combinations (1/1, 3/2) and redefines the place of others (defining 1/2 as compatible with normal and 3/3, 4/4 and 5/5 as compatible with imbalanced chromosomal status). The new algorithm uses the combination + ratio method of signal probe analysis to give the best concordance between manual and automated analysis on samples of 100 tumor cells (91% concordance for 1p and 89% concordance for 19q) and full concordance on samples of 200 tumor cells. This highlights the value of automated analysis as a means to identify cases in which a larger number of tumor cells should be studied by manual analysis. Validation of this algorithm on a second series from another institution showed a satisfactory concordance (89%, κ = 0.8). Conclusion Our algorithm can be easily implemented on all existing FISH analysis software platforms and should facilitate multicentric evaluation and standardization of 1p/19q assessment in gliomas with reduction of the professional and technical time required. PMID:26135922
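
    A compact way to see the combination + ratio method is as a per-nucleus combination lookup followed by a pooled G/R ratio. The rules in the sketch below follow the combinations quoted in the abstract (1/2 compatible with normal; 3/3, 4/4, 5/5 imbalanced), while the ratio cut-off and the simulated counts are assumptions for illustration, not the validated Algorithm 2.

        def classify_combo(green, red):
            # Label one tumor nucleus by its green/red (G/R) probe signal combination,
            # following the abstract: 1/2 treated as compatible with normal; equal
            # counts of 3 or more treated as an imbalanced chromosomal status.
            if (green, red) in {(1, 2), (2, 2)}:
                return "normal-compatible"
            if green == red and green >= 3:
                return "imbalanced"
            return "deleted" if green < red else "gained"

        def classify_sample(counts, deletion_cutoff=0.8):
            # Combination + ratio method over a sample of nuclei.
            # counts: list of (G, R) tuples; deletion_cutoff is an assumed value.
            labels = [classify_combo(g, r) for g, r in counts]
            ratio = sum(g for g, _ in counts) / sum(r for _, r in counts)
            call = "deletion" if ratio < deletion_cutoff else "no deletion"
            return {lbl: labels.count(lbl) for lbl in set(labels)}, round(ratio, 2), call

        # 100 simulated nuclei: 60 with a deleted pattern (1G/3R), 40 normal (2G/2R).
        print(classify_sample([(1, 3)] * 60 + [(2, 2)] * 40))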

  4. To Propose a Reviewer Dispatching Algorithm for Networked Peer Assessment System.

    ERIC Educational Resources Information Center

    Liu, Eric Zhi-Feng

    2005-01-01

    Despite their increasing availability on the Internet, networked peer assessment systems (1-5) lack a feasible algorithm for automatically dispatching students' assignments to reviewers, which ultimately limits the effectiveness of peer assessment. Therefore, this study presents a reviewer dispatching algorithm capable of supporting networked peer assessment systems in…
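
    The dispatching rule itself is lost to the truncation above, but the underlying constraint problem (each student reviews k peers, no self-review, balanced load) has a classic minimal solution by rotation, sketched below; the function name and the rotation choice are ours, not the paper's.

        def dispatch_reviews(students, k=3):
            # Assign each student k peers to review: a rotation scheme guarantees
            # no self-review and an equal reviewing load for everyone.
            n = len(students)
            assert 0 < k < n, "need at least k+1 students"
            return {students[i]: [students[(i + shift) % n] for shift in range(1, k + 1)]
                    for i in range(n)}

        print(dispatch_reviews(["ann", "bob", "cho", "dee", "eli"], k=2))
        # ann -> ['bob', 'cho'], bob -> ['cho', 'dee'], ... each reviews and is reviewed twice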

  5. CDRD and PNPR satellite passive microwave precipitation retrieval algorithms: EuroTRMM/EURAINSAT origins and H-SAF operations

    NASA Astrophysics Data System (ADS)

    Mugnai, A.; Smith, E. A.; Tripoli, G. J.; Bizzarri, B.; Casella, D.; Dietrich, S.; Di Paola, F.; Panegrossi, G.; Sanò, P.

    2013-04-01

    including a few examples of their performance. This aspect of the development of the two algorithms is placed in the context of what we refer to as the TRMM era, the era denoting the active and ongoing period of the Tropical Rainfall Measuring Mission (TRMM) that helped inspire their original development. In 2015, the ISAC-Rome precipitation algorithms will undergo a transformation beginning with the upcoming Global Precipitation Measurement (GPM) mission, particularly the GPM Core Satellite technologies. A few years afterward, the first pair of imaging and sounding Meteosat Third Generation (MTG) satellites will be launched, providing additional technological advances. Several of the opportunities presented by the GPM Core and MTG satellites for improving the current CDRD and PNPR precipitation retrieval algorithms, as well as for extending their product capability, are discussed.

  6. To Propose an Algorithm for Team Forming: Simulated Annealing K Team-Forming Algorithm for Heterogeneous Grouping.

    ERIC Educational Resources Information Center

    Zhi-Feng Liu, Eric

    2005-01-01

    In recent studies, researchers have sought the answer to how to form the perfect dream team. Various grouping methods have been proposed, e.g., random assignment, homogeneous grouping by personality or achievement, and heterogeneous grouping by personality or achievement. Some instructors could put some students in a team better…
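
    A minimal version of simulated-annealing team forming can be sketched as follows: maximize the within-team spread of an achievement score using swap moves and a cooling schedule. The single-score objective, the move set, and the schedule parameters are our assumptions, since the abstract is truncated before the details.

        import math, random

        def heterogeneity(teams, score):
            # Objective: total within-team variance of scores (higher = more heterogeneous).
            total = 0.0
            for team in teams:
                vals = [score[s] for s in team]
                mean = sum(vals) / len(vals)
                total += sum((v - mean) ** 2 for v in vals)
            return total

        def sa_team_forming(students, score, k, steps=20000, t0=1.0, cooling=0.9995):
            # Start from a random balanced partition into k teams.
            random.shuffle(students)
            teams = [students[i::k] for i in range(k)]
            cur, t = heterogeneity(teams, score), t0
            for _ in range(steps):
                a, b = random.sample(range(k), 2)                    # pick two teams
                i, j = random.randrange(len(teams[a])), random.randrange(len(teams[b]))
                teams[a][i], teams[b][j] = teams[b][j], teams[a][i]  # swap two members
                new = heterogeneity(teams, score)
                # Accept improvements always, worsenings with Boltzmann probability.
                if new >= cur or random.random() < math.exp((new - cur) / t):
                    cur = new
                else:
                    teams[a][i], teams[b][j] = teams[b][j], teams[a][i]  # undo swap
                t *= cooling
            return teams, cur

        scores = dict(zip("ABCDEFGH", [90, 85, 70, 65, 60, 55, 50, 40]))
        teams, h = sa_team_forming(list(scores), scores, k=2)
        print(teams, round(h, 1))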

  7. A proposed adaptive step size perturbation and observation maximum power point tracking algorithm based on photovoltaic system modeling

    NASA Astrophysics Data System (ADS)

    Huang, Yu

    Solar energy has become one of the major renewable energy options because of its abundance and accessibility. Owing to its intermittent nature, Maximum Power Point Tracking (MPPT) techniques are in high demand when a Photovoltaic (PV) system is used to extract energy from sunlight. This thesis proposes an advanced Perturbation and Observation (P&O) algorithm aimed at realistic operating conditions. First, a practical PV system model is studied, with the series and shunt resistances, which are neglected in some research, explicitly determined. In the proposed algorithm, the duty ratio of a boost DC-DC converter is the perturbed variable, exploiting input impedance conversion to adjust the operating voltage. Based on this control strategy, an adaptive duty-ratio step-size P&O algorithm is proposed, with major modifications for sharp insolation changes as well as low-insolation scenarios. Matlab/Simulink simulations of the PV model, the boost converter control strategy, and the various MPPT processes are conducted step by step. The proposed adaptive P&O algorithm is validated by the simulation results and by detailed analysis of sharp insolation changes, low-insolation conditions, and continuous insolation variation.
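
    The adaptive-step P&O loop described above can be sketched compactly. The gains, step bounds, and the boost-converter sign convention below are assumptions for illustration, and the sensor/actuator calls in the commented control loop are hypothetical names.

        def adaptive_po_step(v, i, prev_v, prev_p, duty, d_min=0.05, d_max=0.95,
                             k=0.02, step_min=1e-4, step_max=0.02):
            # One iteration of adaptive-step P&O acting on the converter duty ratio.
            # The step size is scaled by |dP/dV| so the tracker moves fast far from
            # the maximum power point and settles near it; gains are assumed values.
            p = v * i
            dp, dv = p - prev_p, v - prev_v
            step = min(step_max, max(step_min, k * abs(dp / dv))) if dv else step_min
            # Classic P&O logic: keep perturbing in the same direction while power rises.
            # For a boost converter, raising the duty ratio lowers the PV voltage.
            if dp * dv > 0:
                duty -= step      # power rose with voltage: move voltage up
            else:
                duty += step      # otherwise move voltage down
            return min(d_max, max(d_min, duty)), v, p

        # Control loop skeleton (read_pv_voltage / read_pv_current / set_converter_duty
        # are hypothetical hardware hooks):
        # duty, prev_v, prev_p = 0.5, 0.0, 0.0
        # while True:
        #     v, i = read_pv_voltage(), read_pv_current()
        #     duty, prev_v, prev_p = adaptive_po_step(v, i, prev_v, prev_p, duty)
        #     set_converter_duty(duty)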

  8. 75 FR 2482 - Proposed Information Collection; Comment Request; Fisheries Certificate of Origin

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-01-15

    ... Certificate of Origin AGENCY: National Oceanic and Atmospheric Administration (NOAA). ACTION: Notice. SUMMARY... Constitution Avenue, NW., Washington, DC 20230 (or via the Internet at dHynek@doc.gov ). FOR...

  9. Origins.

    ERIC Educational Resources Information Center

    Online-Offline, 1999

    1999-01-01

    Provides an annotated list of resources dealing with the theme of origins of life, the universe, and traditions. Includes Web sites, videos, books, audio materials, and magazines with appropriate grade levels and/or subject disciplines indicated; professional resources; and learning activities. (LRW)

  10. Origins.

    PubMed

    Weinberg, S

    1985-10-01

    The farthest of the galaxies that can be seen through the large ground-based telescopes of modern astronomy, such as those on La Palma in the Canary Islands, are so far away that they appear as they did close to the time of the origin of the universe, perhaps some 10 billion years ago. Much has been learned, and much has still to be learned, about the young universe from optical and radio telescopes, but these instruments cannot be used to look directly at the universe in its first few hundred thousand years. Instead, they are used to search the relatively recent past for relics of much earlier times. Together with experiments planned for the next generation of elementary particle accelerators, astronomical observations should continue to extend what is known about the universe backward in time to the Big Bang and may eventually help to reveal the origins of the physical laws that govern the universe.

  11. Review of techniques for the removal of trapped rings on fingers with a proposed new algorithm.

    PubMed

    Kalkan, Asim; Kose, Ozkan; Tas, Mahmut; Meric, Gokhan

    2013-11-01

    Various removal techniques for rings trapped on the finger have been described in the current literature. However, despite this being a frequently encountered situation in emergency departments, there is no comprehensive algorithm in the current literature for managing and following these patients. The purposes of this study were to describe the most commonly used ring removal techniques and to establish an algorithm for the removal of rings trapped on fingers. We performed a comprehensive literature search in several databases to identify all articles, case reports, letters, and book chapters in English that focus on ring removal techniques from 1960 to the present. There are two classes of removal methods: (1) noncutting techniques, in which the ring is removed without breaking its integrity, and (2) techniques using various ring-cutting equipment and tools. All these techniques are classified into distinct groups and described in detail with illustrations. Furthermore, an algorithm for handling such patients is established according to case-based patient care. Following an algorithm for the removal of trapped rings on the finger will be useful for patients and emergency physicians. It will also prevent possible complications and will save time.
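
    As a rough illustration of the decision logic implied by the two classes of methods, the sketch below triages to noncutting techniques first unless the finger is compromised; every branch condition and name is an illustrative assumption, not the authors' validated algorithm.

        def ring_removal_plan(circulation_compromised, severe_edema_or_fracture, ring_material):
            # Illustrative triage; all conditions are assumptions, not the paper's rules.
            if circulation_compromised or severe_edema_or_fracture:
                # Injured or ischaemic finger: go straight to cutting.
                cutter = ("manual ring cutter" if ring_material in ("gold", "silver", "copper")
                          else "high-speed or diamond-tipped cutter for hardened materials")
                return ["cutting technique: " + cutter]
            return ["noncutting first: lubrication, then string/elastic wrap technique",
                    "escalate to cutting if noncutting attempts fail"]

        print(ring_removal_plan(False, False, "gold"))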

  12. The origin of the medial circumflex femoral artery: a meta-analysis and proposal of a new classification system.

    PubMed

    Tomaszewski, Krzysztof A; Henry, Brandon M; Vikse, Jens; Roy, Joyeeta; Pękala, Przemysław A; Svensen, Maren; Guay, Daniel L; Saganiak, Karolina; Walocha, Jerzy A

    2016-01-01

    Background and Objectives. The medial circumflex femoral artery (MCFA) is a common branch of the deep femoral artery (DFA) responsible for supplying the femoral head and the greater trochanteric fossa. The prevalence rates of MCFA origin, its branching patterns and its distance to the mid-inguinal point (MIP) vary significantly throughout the literature. The aim of this study was to determine the true prevalence of these characteristics and to study their associated anatomical and clinical relevance. Methods. A search of the major electronic databases Pubmed, EMBASE, Scopus, ScienceDirect, Web of Science, SciELO, BIOSIS, and CNKI was performed to identify all articles reporting data on the origin of the MCFA, its branching patterns and its distance to the MIP. No data or language restriction was set. Additionally, an extensive search of the references of all relevant articles was performed. All data on origin, branching and distance to MIP was extracted and pooled into a meta-analysis using MetaXL v2.0. Results. A total of 38 (36 cadaveric and 2 imaging) studies (n = 4,351 lower limbs) were included into the meta-analysis. The pooled prevalence of the MCFA originating from the DFA was 64.6% (95% CI [58.0-71.5]), while the pooled prevalence of the MCFA originating from the CFA was 32.2% (95% CI [25.9-39.1]). The CFA-derived MCFA was found to originate as a single branch in 81.1% (95% CI [70.1-91.7]) of cases with a mean pooled distance of 50.14 mm (95% CI [42.50-57.78]) from the MIP. Conclusion. The MCFA's variability must be taken into account by surgeons, especially during orthopedic interventions in the region of the hip to prevent iatrogenic injury to the circulation of the femoral head. Based on our analysis, we present a new proposed classification system for origin of the MCFA.
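
    For readers who want to reproduce pooling of this kind, the sketch below shows a minimal fixed-effect inverse-variance pooling of proportions with Wald confidence intervals; MetaXL implements more sophisticated (random- and quality-effects) models, and the per-study counts here are hypothetical, not the paper's data.

        import math

        def pooled_prevalence(studies):
            # Fixed-effect inverse-variance pooling of proportions.
            # studies: list of (events, sample_size) tuples.
            weights, estimates = [], []
            for events, n in studies:
                p = events / n
                var = p * (1 - p) / n            # Wald variance of a proportion
                weights.append(1 / var)
                estimates.append(p)
            w_sum = sum(weights)
            p_pool = sum(w * p for w, p in zip(weights, estimates)) / w_sum
            se = math.sqrt(1 / w_sum)
            return p_pool, (p_pool - 1.96 * se, p_pool + 1.96 * se)

        # Hypothetical per-study counts of DFA-origin MCFA (not the paper's data):
        p, ci = pooled_prevalence([(70, 110), (41, 60), (88, 140), (52, 80)])
        print(f"pooled prevalence {p:.1%}, 95% CI ({ci[0]:.1%}, {ci[1]:.1%})")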

  14. A proposed origin for palimpsests and anomalous pit craters on Ganymede and Callisto

    NASA Technical Reports Server (NTRS)

    Croft, S. K.

    1983-01-01

    The hypothesis that palimpsests and anomalous pit craters are essentially pristine crater forms produced by high-velocity impacts and/or impacts into an ice crust with preimpact temperatures near melting is explored. The observational data are briefly reviewed, and an impact model is proposed for the direct formation of a palimpsest from an impact when the modification flow which produces the final crater is dominated by 'wet' fluid flow, as opposed to the 'dry' granular flow which produces normal craters. Conditions of 'wet' modification occur when the impact melt remaining in the transient crater attains a volume comparable to that of the transient crater itself. The normal crater-palimpsest transition is found to occur for sufficiently large impacts or sufficiently fast impactors. The range of crater diameters and morphological characteristics inferred from the impact model is consistent with the observed characteristics of palimpsests and anomalous pit craters.

  15. Methodology to automatically detect abnormal values of vital parameters in anesthesia time-series: Proposal for an adaptable algorithm.

    PubMed

    Lamer, Antoine; Jeanne, Mathieu; Marcilly, Romaric; Kipnis, Eric; Schiro, Jessica; Logier, Régis; Tavernier, Benoît

    2016-06-01

    Abnormal values of vital parameters such as hypotension or tachycardia may occur during anesthesia and may be detected by analyzing time-series data collected during the procedure by the Anesthesia Information Management System. When crossed with other data from the Hospital Information System, abnormal values of vital parameters have been linked with postoperative morbidity and mortality. However, methods for the automatic detection of these events are poorly documented in the literature and differ between studies, making it difficult to reproduce results. In this paper, we propose a methodology for the automatic detection of abnormal values of vital parameters. This methodology uses an algorithm allowing the configuration of threshold values for any vital parameters as well as the management of missing data. Four examples illustrate the application of the algorithm, after which it is applied to three vital signs (heart rate, SpO2, and mean arterial pressure) to all 2014 anesthetic records at our institution. PMID:26817405
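
    In the spirit of the methodology described (configurable thresholds plus explicit handling of missing data), a minimal detector might look like the following; the parameter names, the gap-tolerance rule, and the example thresholds are our assumptions, not the paper's implementation.

        import math

        def detect_abnormal_episodes(times, values, threshold, min_duration,
                                     max_gap, below=True):
            # Flag episodes where a vital parameter stays beyond a threshold.
            # times/values: sampled series (missing samples encoded as NaN);
            # an episode must last >= min_duration seconds, and gaps of missing
            # data shorter than max_gap seconds do not break an ongoing episode.
            episodes, start, last_ok = [], None, None
            for t, v in zip(times, values):
                abnormal = (not math.isnan(v)) and (v < threshold if below else v > threshold)
                if abnormal:
                    if start is None:
                        start = t
                    last_ok = t
                elif math.isnan(v) and start is not None and t - last_ok <= max_gap:
                    continue                      # tolerate a short gap of missing data
                else:
                    if start is not None and last_ok - start >= min_duration:
                        episodes.append((start, last_ok))
                    start = None
            if start is not None and last_ok - start >= min_duration:
                episodes.append((start, last_ok))
            return episodes

        # Mean arterial pressure sampled every 5 s; hypotension = MAP < 65 mmHg for >= 60 s.
        t = list(range(0, 300, 5))
        v = [75.0] * 10 + [60.0] * 6 + [float("nan")] * 2 + [58.0] * 10 + [80.0] * 32
        print(detect_abnormal_episodes(t, v, threshold=65, min_duration=60, max_gap=15))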

  16. Analysis of trans-Neptunian objects and a proposed theory to explain their origin

    NASA Astrophysics Data System (ADS)

    Brown, Robert B.; Firth, Jordan A.

    2016-02-01

    Current theories cannot explain how trans-Neptunian objects (TNOs) either formed in situ or how ultrawide trans-Neptunian binaries (TNBs) exist if they were formed closer to the Sun and were later dispersed during Neptune's migration. Furthermore, no theory can adequately explain the documented clustering of ω near 0° for TNOs with a > 150 au. Here, we show that not only is ω clustered for the nine long-period TNOs (LPTNOs) with a > 200 au, but Ω is also grouped almost as closely. Neither of these orbital elements is randomly distributed for any collection of TNOs investigated, including those that are not in resonance with Neptune, those with q > 30 au, q > 44 au, and LPTNOs. Every frequency distribution of ω and Ω indicates that many TNOs were recently affected by Neptune. Based on this study, we propose that TNOs were inside Neptune's orbit in the last few Myr. The TNOs then migrated outwards in a relatively short time period. Ultrawide TNBs never came close to Neptune during this migration, allowing these fragile pairs to remain intact. However, many other TNOs were perturbed as they passed Neptune, resulting in the distribution of orbital elements we see today for all TNOs, including those in the Kuiper belt and the LPTNOs.
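
    The abstract does not specify the statistical test behind the clustering claims; a standard choice for testing non-uniformity of angular orbital elements such as ω and Ω is the Rayleigh test, sketched below with hypothetical angles.

        import math

        def rayleigh_test(angles_deg):
            # Rayleigh test for non-uniformity of circular data: returns the mean
            # resultant length R and the first-order p-value approximation
            # p ~ exp(-n * R^2); small p rejects a uniform angular distribution.
            n = len(angles_deg)
            c = sum(math.cos(math.radians(a)) for a in angles_deg) / n
            s = sum(math.sin(math.radians(a)) for a in angles_deg) / n
            r = math.hypot(c, s)
            return r, math.exp(-n * r * r)

        # Hypothetical arguments of perihelion (deg) clustered near 0, not the paper's data:
        omegas = [350, 5, 12, 355, 8, 2, 348, 15, 358]
        r, p = rayleigh_test(omegas)
        print(f"R = {r:.2f}, p ~ {p:.4f}")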

  17. Fat-constrained 18F-FDG PET reconstruction using Dixon MR imaging and the origin ensemble algorithm

    NASA Astrophysics Data System (ADS)

    Wülker, Christian; Heinzer, Susanne; Börnert, Peter; Renisch, Steffen; Prevrhal, Sven

    2015-03-01

    Combined PET/MR imaging allows the high-resolution anatomical information delivered by MRI to be incorporated into the PET reconstruction algorithm, improving PET accuracy beyond standard corrections. We used the working hypothesis that glucose uptake in adipose tissue is low. Thus, our aim was to shift 18F-FDG PET signal into image regions with a low fat content. Dixon MR imaging can be used to generate fat-only images via the water/fat chemical shift difference. The Origin Ensemble (OE) algorithm, a novel Markov chain Monte Carlo method, in turn allows PET data to be reconstructed without forward- and back-projection operations. By adequate modifications to the Markov chain transition kernel, it is possible to include anatomical a priori knowledge in the OE algorithm. In this work, we used the OE algorithm to reconstruct PET data of a modified IEC/NEMA Body Phantom simulating body water/fat composition. Reconstruction was performed 1) natively, 2) informed with the Dixon MR fat image to down-weight 18F-FDG signal in fatty tissue compartments in favor of adjacent regions, and 3) informed with the fat image to up-weight 18F-FDG signal in fatty tissue compartments, for control purposes. Image intensity profiles confirmed the visibly improved contrast and reduced partial volume effect at water/fat interfaces. We observed a 17+/-2% increase in the SNR of hot lesions surrounded by fat, while image quality was almost completely retained in fat-free image regions. An additional in vivo experiment proved the applicability of the presented technique in practice, and again verified the beneficial impact of fat-constrained OE reconstruction on PET image quality.
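
    The published OE transition kernel is more subtle than an abstract can convey; purely as a schematic of the general idea — stochastically reassigning each detected event to an origin voxel along its line of response, with an anatomical weight that down-weights fatty voxels — consider the sketch below. The resampling rule and the fat-derived weights are simplifications of ours, not the paper's kernel.

        import random

        def reassign_origins(events, density, fat_weight, iterations=100):
            # Schematic origin-reassignment loop in the spirit of OE reconstruction.
            # events: list of candidate-voxel lists (voxels crossed by each event's LOR).
            # density: dict voxel -> current event count (the evolving image estimate).
            # fat_weight: dict voxel -> prior in (0, 1]; low values down-weight fat.
            assignment = [random.choice(cands) for cands in events]
            for v in assignment:
                density[v] = density.get(v, 0) + 1
            for _ in range(iterations):
                for e, cands in enumerate(events):
                    density[assignment[e]] -= 1        # remove event before resampling
                    weights = [(density.get(v, 0) + 1) * fat_weight.get(v, 1.0)
                               for v in cands]
                    x, new = random.random() * sum(weights), cands[-1]
                    for v, w in zip(cands, weights):   # weighted draw of a new origin
                        if x < w:
                            new = v
                            break
                        x -= w
                    assignment[e] = new
                    density[new] = density.get(new, 0) + 1
            return density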

  18. Syncope in adults: systematic review and proposal of a diagnostic and therapeutic algorithm.

    PubMed

    Rosanio, Salvatore; Schwarz, Ernst R; Ware, David L; Vitarelli, Antonio

    2013-01-20

    This review aims to provide a practical and up-to-date description of the relevance and classification of syncope in adults, as well as guidance on the optimal evaluation, management, and treatment of this very common clinical and socioeconomic medical problem. We have summarized recent active research and emphasized the value for physicians of adhering to current guidelines. Modern management of syncope should take into account 1) the use of risk stratification algorithms and the implementation of syncope management units to increase the diagnostic yield and reduce costs; 2) early, rather than late, use of implantable loop recorders in the evaluation of unexplained syncope; and 3) isometric physical counter-pressure maneuvers as first-line treatment for patients with neurally mediated reflex syncope and prodromal symptoms.

  19. Lung ultrasound in the diagnosis of pneumonia in children: proposal for a new diagnostic algorithm

    PubMed Central

    Capasso, Maria; De Luca, Giuseppe; Prisco, Salvatore; Mancusi, Carlo; Laganà, Bruno; Comune, Vincenzo

    2015-01-01

    Background. Despite guideline recommendations, chest radiography (CR) for the diagnosis of community-acquired pneumonia (CAP) in children is commonly used even in mild and/or uncomplicated cases. The aim of this study is to assess the reliability of lung ultrasonography (LUS) as an alternative test in these cases and to suggest a new diagnostic algorithm. Methods. We reviewed the medical records of all patients admitted to the pediatric ward from February 1, 2013 to December 31, 2014 with respiratory signs and symptoms. We selected only cases with a mild/uncomplicated clinical course in which CR and LUS were performed within 24 h of each other. The LUS was not part of the required exams recorded in the medical records but was performed independently. The discharge diagnosis, made only on the basis of history and physical examination, laboratory and instrumental tests, including CR (without LUS), was used as the reference test against which CR and LUS findings were compared. Results. Of 52 selected medical records, CAP diagnosis was confirmed in 29 (55.7%). CR was positive in 25 cases, whereas LUS detected pneumonia in 28 cases. Four patients with negative CR had positive ultrasound findings; conversely, one patient with negative LUS had positive radiographic findings. LUS had a sensitivity of 96.5% (95% CI [82.2%–99.9%]), specificity of 95.6% (95% CI [78.0%–99.9%]), positive likelihood ratio of 22.2 (95% CI [3.2–151.2]), and negative likelihood ratio of 0.04 (95% CI [0.01–0.25]) for diagnosing pneumonia. Conclusion. LUS can be considered a valid alternative diagnostic tool for CAP in children and its use should be promoted as a first approach in accordance with our new diagnostic algorithm. PMID:26587343
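
    The reported accuracy indices can be reproduced from the 2x2 counts they imply (28 true positives, 1 false negative, 1 false positive, 22 true negatives out of 52), as the short computation below shows.

        def diagnostic_accuracy(tp, fn, fp, tn):
            # Standard 2x2 diagnostic indices against the reference diagnosis.
            sens = tp / (tp + fn)
            spec = tn / (tn + fp)
            return {"sensitivity": sens, "specificity": spec,
                    "LR+": sens / (1 - spec), "LR-": (1 - sens) / spec}

        # Counts implied by the reported indices (28 TP, 1 FN, 1 FP, 22 TN; n = 52):
        for name, value in diagnostic_accuracy(28, 1, 1, 22).items():
            print(name, round(value, 3))   # sens 0.966, spec 0.957, LR+ ~22.2, LR- ~0.036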

  20. Microcodium: An extensive review and a proposed non-rhizogenic biologically induced origin for its formation

    NASA Astrophysics Data System (ADS)

    Kabanov, Pavel; Anadón, Pere; Krumbein, Wolfgang E.

    2008-04-01

    Microcodium has been previously described as a mainly Cenozoic calcification pattern ascribed to various organisms. A review of the available literature and our data reveal two peaks in Microcodium abundance: the Moscovian-early Permian and the latest Cretaceous-Paleogene. A detailed analysis of late Paleozoic and Cenozoic examples leads to the following new conclusions. Typical Microcodium-forming unilayered 'corn-cob' aggregates of elongated grains and thick multilayered (palisade) replacing structures cannot be linked to smaller-grained intracellular root calcifications, as became widely accepted after the work of Klappa [Klappa, C.F., 1979. Calcified filaments in Quaternary calcretes: organo-mineral interactions in the subaerial vadose environment. J. Sediment. Petrol. 49, 955-968]. Typical Microcodium is recognized from the early Carboniferous (with doubtful Devonian reports) to the Quaternary as a biologically induced mineralization formed via dissolution/precipitation processes in various aerobic Ca-rich soil and subsoil terrestrial environments. The morphology and δ13C signatures of Microcodium suggest that neither plants, algae, nor roots and root-associated mycorrhiza regulate the formation of these fossil structures. Non-recrystallized Microcodium grains basically consist of slender (1.5-4 μm) curved radiating monocrystalline prisms with occasionally preserved hyphae-like morphology. Thin (0.5-3 μm) hypha-like canals can also be observed. These supposed hyphae may belong to actinobacteria, though thin fungal mycelia cannot be excluded. We propose a model of Microcodium formation involving a mycelial saprotrophic organism responsible for substrate corrosion and associated bacteria capable of consuming acidic metabolites and reprecipitating CaCO3 into the Microcodium structures.

  1. [Adequacy of clinical interventions in patients with advanced and complex disease. Proposal of a decision making algorithm].

    PubMed

    Ameneiros-Lago, E; Carballada-Rico, C; Garrido-Sanjuán, J A; García Martínez, A

    2015-01-01

    Decision making in patients with advanced chronic disease is especially complex. Health professionals are obliged to prevent avoidable suffering and not to add further harm to that caused by the disease itself. The adequacy of clinical interventions consists of offering only those diagnostic and therapeutic procedures appropriate to the clinical situation of the patient, and of performing only those permitted by the patient or their representative. In this article, the use of an algorithm is proposed to help health professionals in this decision-making process.

  3. The prevention of adverse reactions to transfusions in patients with haemoglobinopathies: a proposed algorithm

    PubMed Central

    Bennardello, Francesco; Fidone, Carmelo; Spadola, Vincenzo; Cabibbo, Sergio; Travali, Simone; Garozzo, Giovanni; Antolino, Agostino; Tavolino, Giuseppe; Falla, Cadigia; Bonomo, Pietro

    2013-01-01

    Background Transfusion therapy remains the main treatment for patients with severe haemoglobinopathies, but can cause adverse reactions which may be classified as immediate or delayed. The use of targeted prevention with drugs and treatments of blood components in selected patients can contribute to reducing the development of some reactions. The aim of our study was to develop an algorithm capable of guiding behaviours to adopt in order to reduce the incidence of immediate transfusion reactions. Materials and methods Immediate transfusion reactions occurring over a 7-year period in 81 patients with transfusion-dependent haemoglobinopathies were recorded. The patients received transfusions with red cell concentrates that had been filtered prestorage. Various measures were undertaken to prevent transfusion reactions: leucoreduction, washing the red blood cells, prophylactic administration of an antihistamine (loratadine 10 mg tablet) or an antipyretic (paracetamol 500 mg tablet). Results Over the study period 20,668 red cell concentrates were transfused and 64 adverse transfusion reactions were recorded in 36 patients. The mean incidence of reactions in the 7 years of observation was 3.1‰. Over the years the incidence gradually decreased from 6.8‰ in 2004 to 0.9‰ in 2010. Discussion Preventive measures are not required for patients who have an occasional reaction, because the probability that such a type of reaction recurs is very low. In contrast, the targeted use of drugs such as loratadine or paracetamol, sometimes combined with washing and/or double filtration of red blood cells, can reduce the rate of recurrent (allergic) reactions to about 0.9‰. The system for detecting adverse reactions and the training of staff involved in transfusion therapy are critical points for reliable data collection; standardisation of the detection system is recommended for those wanting to monitor the incidence of all adverse reactions, including minor ones. PMID:23736930

  4. Non-invasive diagnosis in a case of bronchopulmonary sequestration and proposal of diagnostic algorithm.

    PubMed

    Caradonna, P; Bellia, M; Cannizzaro, F; Regio, S; Midiri, M; Bellia, V

    2008-09-01

    The case of a 43-year-old woman with intralobar pulmonary sequestration, Pryce type one, is presented. The medical history was characterised by recurrent bronchopneumonia, productive cough with purulent sputum and hemoptysis over the last three years. Diagnosis was made by CT angiography: multiplanar, maximum intensity projection and volume rendering reconstructions were visualised. A volume reduction of the middle and lower lobes with multiple cyst-like bronchiectases was detected, and no evident relationship with the tracheobronchial tree was found. Reconstructions aimed at evaluating bronchial structures demonstrated no patency of the middle and lower lobar bronchi. The study carried out after contrast medium infusion in the arterial phase showed a vascular disorder characterised by an accessory arterial branch arising from the upper portion of the thoracic aorta which, after moving caudally to the pulmonary hilus with a tortuous course, supplied the atelectatic parenchyma. No anomalous venous drainage was detected. The patient underwent surgery with resection of two pulmonary lobes. CT compares favourably with alternative imaging techniques for pulmonary sequestration, as multiplanar reconstructions allow not only the detection of the supplying vessel but also the accurate description of the heterogeneous characteristics of the mass and adjacent structures. Finally, an imaging-based diagnostic algorithm is proposed. PMID:19065849

  5. A Method of Solving Scheduling Problems Using Improved Guided Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Ou, Gyouhi; Tamura, Hiroki; Tanno, Koichi; Tang, Zheng

    In this paper, an improved guided genetic algorithm is proposed for the job-shop scheduling problem. The proposed method improves on the genetic algorithm by using multipliers that can be adjusted during the search process. Simulation results on several benchmark problems show that the proposed method can find better solutions than the genetic algorithm and the original guided genetic algorithm.
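
    The abstract leaves the encoding unspecified, so the sketch below only illustrates the core idea — a genetic algorithm whose fitness carries penalty multipliers that are adjusted during the search — on a toy single-machine scheduling problem with due dates. The operators, the job data, and the multiplier update rule are all assumptions.

        import random

        JOBS = [(4, 9), (2, 5), (6, 20), (3, 10), (5, 14)]   # (duration, due date), toy data

        def evaluate(perm, lam):
            # Fitness = makespan + adjustable multipliers applied to tardiness.
            t, penalty = 0, 0.0
            for j in perm:
                dur, due = JOBS[j]
                t += dur
                penalty += lam[j] * max(0, t - due)          # lateness of job j
            return t + penalty

        def guided_ga(pop_size=30, gens=200):
            lam = [1.0] * len(JOBS)                          # penalty multipliers
            pop = [random.sample(range(len(JOBS)), len(JOBS)) for _ in range(pop_size)]
            for g in range(gens):
                pop.sort(key=lambda p: evaluate(p, lam))
                survivors = pop[: pop_size // 2]
                children = []
                for _ in range(pop_size - len(survivors)):   # swap mutation of a survivor
                    p = random.choice(survivors)[:]
                    i, j = sorted(random.sample(range(len(p)), 2))
                    p[i], p[j] = p[j], p[i]
                    children.append(p)
                pop = survivors + children
                if g % 20 == 0:                              # adjust multipliers mid-search:
                    t = 0
                    for j in pop[0]:
                        t += JOBS[j][0]
                        if t > JOBS[j][1]:
                            lam[j] *= 1.5                    # press harder on violated jobs
            return pop[0], evaluate(pop[0], lam)

        print(guided_ga())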

  6. A miRNA-tRNA mix-up: tRNA origin of proposed miRNA.

    PubMed

    Schopman, Nick C T; Heynen, Stephan; Haasnoot, Joost; Berkhout, Ben

    2010-01-01

    The rapid release of new data from DNA genome sequencing projects has led to a variety of misannotations in public databases. Our results suggest that next-generation sequencing approaches are particularly prone to such misannotations. Two related miRNA candidates, miR-1274b and miR-1274a, recently entered the miRBase database, but they share identical 18-nucleotide stretches with tRNA(Lys3) and tRNA(Lys5), respectively. The possibility that the small RNA fragments that led to the description of these two miRNAs originated from the two tRNAs was examined. The ratio of the miR-1274b:miR-1274a fragments closely resembles the known tRNA Lys3:Lys5 ratio in the cell. Furthermore, the proposed miRNA hairpins have a very low prediction score and the proposed miRNA genes are in fact endogenous retroviral elements. We searched for other miRNA mimics in the human genome and found more examples of tRNA-miRNA mimicry. We propose that the corresponding miRNAs should be validated in more detail, as the small RNA fragments that led to their description are likely derived from tRNA processing. PMID:20818168
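
    Detecting this kind of mimicry reduces to finding long exact substrings shared between a proposed miRNA and a tRNA, which a minimal k-mer intersection handles. The sequences below are toy stand-ins, not the real miR-1274 or tRNA-Lys sequences.

        def shared_kmers(mirna, trna, k=18):
            # Return all k-nt stretches common to a proposed miRNA and a tRNA.
            trna_kmers = {trna[i:i + k] for i in range(len(trna) - k + 1)}
            return {mirna[i:i + k] for i in range(len(mirna) - k + 1)} & trna_kmers

        # Toy sequences sharing a 22-nt stretch (hypothetical, for illustration only):
        mir = "GACUCGGACUGAGCACCAUCCCGGG"
        trn = "UUUGACUCGGACUGAGCACCAUCCCAAGG"
        print(shared_kmers(mir, trn))   # prints the shared 18-mers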

  7. Chlorophyll pigment concentration using spectral curvature algorithms - An evaluation of present and proposed satellite ocean color sensor bands

    NASA Technical Reports Server (NTRS)

    Hoge, Frank E.; Swift, Robert N.

    1986-01-01

    During the past several years, the symmetric three-band (460-, 490-, 520-nm) spectral curvature algorithm (SCA) has demonstrated rather accurate determination of chlorophyll pigment concentration using low-altitude airborne ocean color data. It is shown herein that the in-water asymmetric SCA, when applied to certain recently proposed OCI (NOAA-K and SPOT-3) and OCM (ERS-1) satellite ocean color bands, can adequately recover chlorophyll-like pigments. These airborne findings suggest that the proposed new ocean color sensor bands are in general satisfactorily, though not necessarily optimally, positioned to allow space evaluation of the SCA using high-precision atmospherically corrected satellite radiances. The pigment concentration recovery is not as good when existing Coastal Zone Color Scanner bands are used in the SCA. The in-water asymmetric SCA chlorophyll pigment recovery evaluations were performed using (1) airborne laser-induced chlorophyll fluorescence and (2) concurrent passive upwelled radiances. Data from a separate ocean color sensor aboard the aircraft were further used to validate the findings.
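
    A common form of the three-band spectral curvature is the center-band radiance squared over the product of the flanking bands, with pigment concentration estimated from a power law. The exact exponents and coefficients used by the authors are not given in the abstract, so the values below are placeholders.

        def spectral_curvature(l460, l490, l520):
            # Three-band spectral curvature: upwelled radiance at the centre band
            # squared over the product of the flanking bands; deviations from 1
            # measure the concavity of the radiance spectrum around 490 nm.
            return l490 ** 2 / (l460 * l520)

        def pigment_estimate(g, a=1.0, b=-3.0):
            # Power-law mapping from curvature to chlorophyll pigment (mg m^-3).
            # Coefficients a and b are placeholders; a real algorithm fits them to data.
            return a * g ** b

        g = spectral_curvature(1.10, 0.95, 0.90)   # hypothetical radiances
        print(round(g, 3), round(pigment_estimate(g), 2))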

  8. Notes on quantitative structure-property relationships (QSPR), part 3: density functions origin shift as a source of quantum QSPR algorithms in molecular spaces.

    PubMed

    Carbó-Dorca, Ramon

    2013-04-01

    A general algorithm implementing a useful variant of quantum quantitative structure-property relationships (QQSPR) theory is described. Based on the quantum similarity framework and previous theoretical developments on the subject, the present QQSPR procedure relies on the possibility of performing geometrical origin shifts over molecular density function sets. In this way, molecular collections attached to known properties can be easily used, over other quantum mechanically well-described molecular structures, for the estimation of their unknown property values. The proposed procedure takes the quantum mechanical expectation value as the provider of a causal relation background and overcomes the dimensionality paradox which haunts classical descriptor-space QSPR. Also, contrary to classical procedures, which are attached to heavy statistical machinery, the present QQSPR approach may use a purely geometrical assessment, a simple statistical outline, or both. From an applied point of view, several easily reachable computational levels can be set up. A Fortran 95 program, QQSPR-n, is described in two versions, which may be downloaded from a dedicated web site. Various practical examples are provided, yielding excellent results. Finally, it is also shown that an equivalent molecular-space classical QSPR formalism can be easily developed. PMID:23238931

  9. Comparison between PCR and larvae visualization methods for diagnosis of Strongyloides stercoralis out of endemic area: A proposed algorithm.

    PubMed

    Repetto, Silvia A; Ruybal, Paula; Solana, María Elisa; López, Carlota; Berini, Carolina A; Alba Soto, Catalina D; Cappa, Stella M González

    2016-05-01

    Underdiagnosis of chronic infection with the nematode Strongyloides stercoralis may lead to severe disease in the immunosuppressed. We therefore set up a specific and highly sensitive molecular diagnosis in stool samples. Here, we compared the accuracy of our polymerase chain reaction (PCR)-based method with that of conventional diagnostic methods for chronic infection. We also analyzed clinical and epidemiological predictors of infection in order to propose an algorithm for the diagnosis of strongyloidiasis useful to the clinician. Molecular and gold-standard methods were used to evaluate a cohort of 237 individuals recruited in Buenos Aires, Argentina. Subjects were assigned according to their immunological status, eosinophilia, and/or history of residence in endemic areas. Diagnosis of strongyloidiasis by PCR on the first stool sample was achieved in 71/237 (29.9%) individuals, whereas only 35/237 (27.4%) were positive by conventional methods, which required up to four serial stool samples at weekly intervals. Eosinophilia and a history of residence in endemic areas emerged as independent factors that increase the likelihood of detecting the parasite in our study population. Our results underscore the usefulness of robust molecular tools for diagnosing chronic S. stercoralis infection. The evidence also highlights the need to survey patients with eosinophilia even when a history of residence in an endemic area is absent.
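
    The proposed algorithm itself is not reproduced in the abstract; the sketch below merely encodes the predictors it highlights (eosinophilia, endemic residence) plus immunosuppression as triggers for first-line stool PCR, with serial conventional sampling as a fallback. All branch logic is an illustrative assumption.

        def strongyloides_workup(eosinophilia, endemic_residence, immunosuppressed):
            # Illustrative triage based on the predictors highlighted in the study.
            if eosinophilia or endemic_residence or immunosuppressed:
                plan = ["stool PCR on first sample"]
                if immunosuppressed:
                    plan.append("confirm/treat urgently before immunosuppressive therapy")
                plan.append("if PCR unavailable: up to 4 serial stool samples, weekly")
                return plan
            return ["routine care; survey again if eosinophilia develops"]

        print(strongyloides_workup(eosinophilia=True, endemic_residence=False,
                                   immunosuppressed=False))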

  10. Loss of Faith in the Origins of Information Literacy in E-Environments: Proposal of a Holistic Approach

    ERIC Educational Resources Information Center

    Nazari, Maryam; Webber, Sheila

    2012-01-01

    The original concept of information literacy (IL) identifies it as an enabler for lifelong learning and learning-to-learn, adaptable and transferable in any learning environment and context. However, practices of IL in electronic information and learning environments (e-environments) tend to question the origins, and workability, of IL on the…

  11. Applying the wisdom of stepping down inhaled corticosteroids in patients with COPD: a proposed algorithm for clinical practice.

    PubMed

    Kaplan, Alan G

    2015-01-01

    the aforementioned, this perspective article proposes an algorithm for the stepwise withdrawal of ICS in real-life clinical practice. PMID:26648711

  13. A Proposed Extension to the Soil Moisture and Ocean Salinity Level 2 Algorithm for Mixed Forest and Moderate Vegetation Pixels

    NASA Technical Reports Server (NTRS)

    Panciera, Rocco; Walker, Jeffrey P.; Kalma, Jetse; Kim, Edward

    2011-01-01

    The Soil Moisture and Ocean Salinity (SMOS) mission, launched in November 2009, provides global maps of soil moisture and ocean salinity by measuring the L-band (1.4 GHz) emission of the Earth's surface with a spatial resolution of 40-50 km. Uncertainty in the retrieval of soil moisture over large heterogeneous areas such as SMOS pixels is expected, due to the non-linearity of the relationship between soil moisture and the microwave emission. The current baseline soil moisture retrieval algorithm adopted by SMOS and implemented in the SMOS Level 2 (SMOS L2) processor partially accounts for the sub-pixel heterogeneity of the land surface by modelling the individual contributions of different pixel fractions to the overall pixel emission. This retrieval approach is tested in this study using airborne L-band data over an area the size of a SMOS pixel characterised by a mix of Eucalypt forest and moderate vegetation types (grassland and crops), with the objective of assessing its ability to correct for the soil moisture retrieval error induced by land surface heterogeneity. A preliminary analysis using a traditional uniform-pixel retrieval approach shows that sub-pixel heterogeneity of land cover type causes significant errors in soil moisture retrieval (7.7% v/v RMSE, 2% v/v bias) in pixels characterised by a significant amount of forest (40-60%). Although the retrieval approach adopted by SMOS partially reduces this error, it is affected by errors beyond the SMOS target accuracy, presenting in particular a strong dry bias when a fraction of the pixel is occupied by forest (4.1% v/v RMSE, -3.1% v/v bias). An extension to the SMOS approach is proposed that accounts for the heterogeneity of vegetation optical depth within the SMOS pixel. The proposed approach is shown to significantly reduce the error in retrieved soil moisture (2.8% v/v RMSE, -0.3% v/v bias) in pixels characterised by a critical amount of forest (40-60%), at the limited cost of only a crude estimate of the
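
    The heterogeneity idea can be sketched as a fraction-weighted forward model inverted for soil moisture; the toy emissivity model and all coefficients below are assumptions standing in for a real tau-omega formulation, and giving each fraction its own optical depth mirrors the proposed extension.

        import math
        from scipy.optimize import minimize_scalar

        def tb_fraction(sm, tau, ts=290.0):
            # Toy L-band brightness temperature of one land-cover fraction: linear
            # soil emissivity in moisture plus exponential vegetation attenuation
            # (all coefficients are assumptions).
            emis_soil = 0.95 - 0.5 * sm
            gamma = math.exp(-tau)              # vegetation transmissivity
            return ts * (emis_soil * gamma + (1.0 - gamma))

        def tb_pixel(sm, fractions, taus):
            # Pixel emission = fraction-weighted sum of per-cover contributions.
            return sum(f * tb_fraction(sm, tau) for f, tau in zip(fractions, taus))

        def retrieve_sm(tb_obs, fractions, taus):
            # Invert the forward model for soil moisture by least squares.
            cost = lambda sm: (tb_pixel(sm, fractions, taus) - tb_obs) ** 2
            return minimize_scalar(cost, bounds=(0.0, 0.5), method="bounded").x

        # 50% forest (high optical depth) + 50% grassland, each with its own tau:
        tb = tb_pixel(0.20, fractions=[0.5, 0.5], taus=[0.8, 0.15])
        print(round(retrieve_sm(tb, [0.5, 0.5], [0.8, 0.15]), 3))   # ~0.2 recovered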

  14. New stereo matching algorithm

    NASA Astrophysics Data System (ADS)

    Ahmed, Yasser A.; Afifi, Hossam; Rubino, Gerardo

    1999-05-01

    This paper presents a new algorithm for stereo matching. The main idea is to decompose the original problem into independent, hierarchical, and more elementary problems that can be solved faster without any complicated mathematics, using BBD. To achieve that, we use a new image feature called the 'continuity feature' instead of classical noise. This feature can be extracted from any kind of image by a simple process and without using a searching technique. A new matching technique is proposed to match the continuity feature. The new algorithm resolves the main disadvantages of feature-based stereo matching algorithms.

  15. Reliability-based design optimization of reinforced concrete structures including soil-structure interaction using a discrete gravitational search algorithm and a proposed metamodel

    NASA Astrophysics Data System (ADS)

    Khatibinia, M.; Salajegheh, E.; Salajegheh, J.; Fadaee, M. J.

    2013-10-01

    A new discrete gravitational search algorithm (DGSA) and a metamodelling framework are introduced for reliability-based design optimization (RBDO) of reinforced concrete structures. The RBDO of structures with soil-structure interaction (SSI) effects is investigated in accordance with performance-based design. The proposed DGSA is based on the standard gravitational search algorithm (GSA) and optimizes the structural cost under deterministic and probabilistic constraints. The Monte-Carlo simulation (MCS) method is considered the most reliable method for estimating reliability probabilities. In order to reduce the computational time of MCS, the proposed metamodelling framework is employed to predict the responses of the SSI system in the RBDO procedure. The metamodel consists of a weighted least squares support vector machine (WLS-SVM) with a wavelet kernel function, which is called WWLS-SVM. Numerical results demonstrate the efficiency and computational advantages of DGSA and the proposed metamodel for RBDO of reinforced concrete structures.
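
    The discrete variant and the WWLS-SVM metamodel are beyond a short sketch, but the standard (continuous) GSA core that DGSA builds on is compact: masses derived from fitness, a decaying gravitational constant, and velocity/position updates. The sketch below omits the usual Kbest elitism and uses assumed parameter values.

        import numpy as np

        def gsa_minimize(f, bounds, pop=20, iters=200, g0=100.0, alpha=20.0, eps=1e-12):
            # Standard gravitational search algorithm (continuous form, simplified).
            rng = np.random.default_rng(0)
            lo, hi = np.asarray(bounds, dtype=float).T
            x = rng.uniform(lo, hi, size=(pop, len(lo)))
            v = np.zeros_like(x)
            for t in range(iters):
                fit = np.array([f(xi) for xi in x])
                worst, best = fit.max(), fit.min()
                m = (worst - fit + eps) / (worst - best + eps)   # minimization masses
                m /= m.sum()
                g = g0 * np.exp(-alpha * t / iters)              # decaying gravity constant
                acc = np.zeros_like(x)
                for i in range(pop):
                    diff = x - x[i]
                    dist = np.linalg.norm(diff, axis=1) + eps
                    acc[i] = (g * rng.random(pop) * m / dist) @ diff  # sum of pairwise pulls
                v = rng.random(x.shape) * v + acc
                x = np.clip(x + v, lo, hi)
            best_i = int(np.argmin([f(xi) for xi in x]))
            return x[best_i], f(x[best_i])

        # Toy usage: minimize the sphere function in 3-D.
        sol, val = gsa_minimize(lambda z: float((z ** 2).sum()), bounds=[(-5, 5)] * 3)
        print(sol.round(3), round(val, 6))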

  16. Reflections on "Multiplication as Original Sin": The Implications of Using a Case to Help Preservice Teachers Understand Invented Algorithms

    ERIC Educational Resources Information Center

    Harkness, Shelly Sheats; Thomas, Jonathan

    2008-01-01

    This article describes the use of a case report, Multiplication as original sin (Corwin, R. B. (1989). "Multiplication as original sin." "Journal of Mathematical Behavior, 8", 223-225), as an assignment in a mathematics course for preservice elementary teachers. In this case study, Corwin described her experience as a 6th grader when she revealed…

  17. Case 3018. Cervus gouazoubira Fischer, 1814 (currently Mazama gouazoubira; Mammalia, Artiodactyla): proposed conservation as the correct original spelling

    USGS Publications Warehouse

    Gardner, A.L.

    1999-01-01

    The purpose of this application is to conserve the spelling of the specific name of Cervus gouazoubira Fischer, 1814 for the brown brocket deer of South America (family Cervidae). This spelling, rather than the original gouazoubira, has been in virtually universal usage for almost 50 years.

  18. A new algorithm to diagnose atrial ectopic origin from multi lead ECG systems--insights from 3D virtual human atria and torso.

    PubMed

    Alday, Erick A Perez; Colman, Michael A; Langley, Philip; Butters, Timothy D; Higham, Jonathan; Workman, Antony J; Hancox, Jules C; Zhang, Henggui

    2015-01-01

    Rapid atrial arrhythmias such as atrial fibrillation (AF) predispose to ventricular arrhythmias, sudden cardiac death and stroke. Identifying the origin of atrial ectopic activity from the electrocardiogram (ECG) can help to diagnose the early onset of AF in a cost-effective manner. The complex and rapid atrial electrical activity during AF makes it difficult to obtain detailed information on atrial activation using the standard 12-lead ECG alone. Compared to conventional 12-lead ECG, more detailed ECG lead configurations may provide further information about spatio-temporal dynamics of the body surface potential (BSP) during atrial excitation. We apply a recently developed 3D human atrial model to simulate electrical activity during normal sinus rhythm and ectopic pacing. The atrial model is placed into a newly developed torso model which considers the presence of the lungs, liver and spinal cord. A boundary element method is used to compute the BSP resulting from atrial excitation. Elements of the torso mesh corresponding to the locations of the placement of the electrodes in the standard 12-lead and a more detailed 64-lead ECG configuration were selected. The ectopic focal activity was simulated at various origins across all the different regions of the atria. Simulated BSP maps during normal atrial excitation (i.e. sinoatrial node excitation) were compared to those observed experimentally (obtained from the 64-lead ECG system), showing a strong agreement between the evolution in time of the simulated and experimental data in the P-wave morphology of the ECG and dipole evolution. An algorithm to obtain the location of the stimulus from a 64-lead ECG system was developed. The algorithm presented had a success rate of 93%, meaning that it correctly identified the origin of atrial focus in 75/80 simulations, and involved a general approach relevant to any multi-lead ECG system. This represents a significant improvement over previously developed algorithms. PMID
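
    The matching criterion of the published algorithm is not given in the abstract; a generic version of the approach it describes — comparing a measured multi-lead map sequence against simulated templates for each candidate origin — can be sketched as nearest-template classification by normalized correlation, here with synthetic data.

        import numpy as np

        def locate_focus(measured, templates):
            # Pick the atrial origin whose simulated body-surface-potential map
            # sequence correlates best with the measured multi-lead ECG data.
            # measured: array (leads, samples); templates: dict origin -> same shape.
            m = (measured - measured.mean()) / measured.std()
            scores = {}
            for origin, tpl in templates.items():
                t = (tpl - tpl.mean()) / tpl.std()
                scores[origin] = float((m * t).mean())   # normalized cross-correlation
            return max(scores, key=scores.get), scores

        # Toy example with 64 leads x 200 samples of synthetic template data:
        rng = np.random.default_rng(1)
        templates = {site: rng.standard_normal((64, 200)) for site in
                     ["right_atrial_appendage", "left_superior_PV", "coronary_sinus"]}
        measured = templates["left_superior_PV"] + 0.3 * rng.standard_normal((64, 200))
        print(locate_focus(measured, templates)[0])   # -> left_superior_PV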

  1. Experimental testing of integral truncation algorithms for the calculation of beam widths by proposed ISO standard methods

    NASA Astrophysics Data System (ADS)

    Apte, Paul; Gower, Malcolm C.; Ward, Brooke A.

    1995-04-01

    The experimental testing of baseline clipping algorithms was carried out on a purpose-built test bench. Three different lasers were used for the tests, including a HeNe laser and a collimated laser diode. The beam intensity profile was measured using a CCD camera at various distances from a reference lens. Results were analyzed on a 486 PC running custom software written in Turbo Pascal, which allows very fast evaluation of the algorithms, at rates of several times per second depending upon computational load. Tables of beam width data were created and then analyzed using Mathematica to check whether the data conformed to the ABCD propagation laws. Values for the beam waist location, waist size, and propagation constant were calculated.
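
    A concrete instance of the baseline-clipping-plus-second-moment procedure under test: subtract a border-estimated offset, truncate below an assumed fraction of the peak, then form the D4-sigma widths. The clip level is an assumption, and truncation biases the width slightly low, as the Gaussian example shows.

        import numpy as np

        def d4sigma_widths(img, clip_fraction=0.05):
            # Second-moment (D4-sigma) beam widths with baseline truncation:
            # subtract a background offset estimated from the image border, then
            # clip everything below clip_fraction of the peak before the moment
            # integrals. The clip level here is an assumed value.
            img = img.astype(float)
            border = np.concatenate([img[0], img[-1], img[:, 0], img[:, -1]])
            img = img - border.mean()                       # baseline offset removal
            img[img < clip_fraction * img.max()] = 0.0      # truncation of the wings
            y, x = np.indices(img.shape)
            total = img.sum()
            cx, cy = (img * x).sum() / total, (img * y).sum() / total
            sx2 = (img * (x - cx) ** 2).sum() / total       # second central moments
            sy2 = (img * (y - cy) ** 2).sum() / total
            return 4 * np.sqrt(sx2), 4 * np.sqrt(sy2)       # widths in pixels

        # Synthetic Gaussian beam (sigma = 12 px) on an offset background:
        # ideal D4-sigma is 48 px; the 5% clip biases the result a few px low.
        yy, xx = np.indices((256, 256))
        frame = np.exp(-((xx - 128) ** 2 + (yy - 128) ** 2) / (2 * 12.0 ** 2)) + 0.01
        print(d4sigma_widths(frame))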

  2. Improvements of HITS Algorithms for Spam Links

    NASA Astrophysics Data System (ADS)

    Asano, Yasuhito; Tezuka, Yu; Nishizeki, Takao

    The HITS algorithm proposed by Kleinberg is one of the representative methods of scoring Web pages by using hyperlinks. In the days when the algorithm was proposed, most of the pages given high score by the algorithm were really related to a given topic, and hence the algorithm could be used to find related pages. However, the algorithm and the variants including Bharat's improved HITS, abbreviated to BHITS, proposed by Bharat and Henzinger cannot be used to find related pages any more on today's Web, due to an increase of spam links. In this paper, we first propose three methods to find “linkfarms,” that is, sets of spam links forming a densely connected subgraph of a Web graph. We then present an algorithm, called a trust-score algorithm, to give high scores to pages which are not spam pages with a high probability. Combining the three methods and the trust-score algorithm with BHITS, we obtain several variants of the HITS algorithm. We ascertain by experiments that one of them, named TaN+BHITS using the trust-score algorithm and the method of finding linkfarms by employing name servers, is most suitable for finding related pages on today's Web. Our algorithms take time and memory no more than those required by the original HITS algorithm, and can be executed on a PC with a small amount of main memory.
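
    For reference, the original HITS core that these variants build on is a short power iteration over the link matrix; the trust-score and linkfarm-detection steps described above would prune or re-weight the graph before this loop runs. A minimal sketch:

        import numpy as np

        def hits(adjacency, iters=50):
            # Original HITS power iteration: hub and authority scores of a Web graph.
            # adjacency[i][j] = 1 if page i links to page j.
            a = np.asarray(adjacency, dtype=float)
            hubs = np.ones(a.shape[0])
            for _ in range(iters):
                auths = a.T @ hubs                 # good authorities are linked by good hubs
                auths /= np.linalg.norm(auths)
                hubs = a @ auths                   # good hubs link to good authorities
                hubs /= np.linalg.norm(hubs)
            return hubs, auths

        # 4-page toy graph: pages 0 and 1 both link to pages 2 and 3.
        adj = [[0, 0, 1, 1],
               [0, 0, 1, 1],
               [0, 0, 0, 0],
               [0, 0, 0, 0]]
        hubs, auths = hits(adj)
        print(hubs.round(3), auths.round(3))   # 0,1 emerge as hubs; 2,3 as authorities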

  3. A proposed Kalman filter algorithm for estimation of unmeasured output variables for an F100 turbofan engine

    NASA Technical Reports Server (NTRS)

    Alag, Gurbux S.; Gilyard, Glenn B.

    1990-01-01

    To develop advanced control systems for optimizing aircraft engine performance, unmeasurable output variables must be estimated. The estimation has to be done in an uncertain environment and be adaptable to varying degrees of modeling error and other variations in engine behavior over its operational life cycle. This paper presents an approach for estimating unmeasured output variables by explicitly modeling the effects of off-nominal engine behavior as biases on the measurable output variables. A state variable model accommodating off-nominal behavior is developed for the engine, and Kalman filter concepts are used to estimate the required variables. Results are presented from nonlinear engine simulation studies as well as the application of the estimation algorithm to actual flight data. The formulation presented has a wide range of application since it is not restricted or tailored to the particular application described.
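    A minimal sketch of the general idea, assuming a linear model: the state is augmented with a random-walk bias on the measured outputs, and a standard Kalman filter then estimates states and biases together. All matrices and numbers below are toy placeholders, not the F100 engine model.

```python
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    # Standard Kalman predict/update for x' = F x + w, z = H x + v.
    x = F @ x
    P = F @ P @ F.T + Q
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

# Bias augmentation: xa = [x; b], with b a random walk added to the measurements.
n, m = 2, 2                                   # toy state and measurement sizes
F = np.array([[1.0, 0.1], [0.0, 1.0]])
H = np.eye(m)
Fa = np.block([[F, np.zeros((n, m))], [np.zeros((m, n)), np.eye(m)]])
Ha = np.hstack([H, np.eye(m)])                # measurement sees state plus bias
Qa = np.diag([1e-4, 1e-4, 1e-6, 1e-6])
R = 1e-2 * np.eye(m)

x, P = np.zeros(n + m), np.eye(n + m)
z = np.array([0.3, -0.1])                     # one fabricated measurement
x, P = kalman_step(x, P, z, Fa, Ha, Qa, R)    # x[n:] is the estimated bias
```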

  4. Authentication of the botanical origin of unifloral honey by infrared spectroscopy coupled with support vector machine algorithm

    NASA Astrophysics Data System (ADS)

    Lenhardt, L.; Zeković, I.; Dramićanin, T.; Tešić, Ž.; Milojković-Opsenica, D.; Dramićanin, M. D.

    2014-09-01

    In recent years, the potential of Fourier-transform infrared spectroscopy coupled with different chemometric tools in food analysis has been established. This technique is rapid, low cost, and reliable and requires little sample preparation. In this work, 130 Serbian unifloral honey samples (linden, acacia, and sunflower types) were analyzed using attenuated total reflectance infrared spectroscopy (ATR-IR). For each spectrum, 64 scans were recorded at wavenumbers between 4000 and 500 cm-1 and at a spectral resolution of 4 cm-1. These spectra were analyzed using principal component analysis (PCA), and the calculated principal components were then used for support vector machine (SVM) training. In this way, a pattern-recognition tool is obtained for building a classification model that determines the botanical origin of honey. PCA was used to analyze the results and to see whether separation exists between the groups of different honey types. Using the SVM, the classification model was built and the classification errors were obtained. It has been observed that this technique is adequate for determining the botanical origin of honey, with a success rate of 98.6%. Based on these results, it can be concluded that this technique offers many possibilities for future rapid qualitative analysis of honey.
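    In outline, the chemometric pipeline is PCA for dimensionality reduction followed by an SVM classifier. A minimal scikit-learn sketch with placeholder data (random arrays standing in for the honey spectra; the component count and kernel are assumptions):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(130, 1816))      # placeholder "spectra" (samples x wavenumbers)
y = rng.integers(0, 3, size=130)      # placeholder labels: linden/acacia/sunflower

model = make_pipeline(StandardScaler(), PCA(n_components=10), SVC(kernel="rbf"))
print(cross_val_score(model, X, y, cv=5).mean())
```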

  5. Diamond of Possibly Metallurgical and Seismic Origin: PART 3: Additional Specimens and a Proposal Calling for adjusted Methodologies for Diamondism

    NASA Astrophysics Data System (ADS)

    Giamn, M.

    2007-05-01

    A noniconic, nonstereotyping specimen population of primarily fine grains is needed. My theory accommodates (1) broad compositional ranges; (2) present or historical specimens; and (3) validity on a grain-by-grain scale as well as a regional scale. A great number of metallic elements are broadly similar to iron in crystal structure, phase equilibria, range of stoichiometry of solid solutions, and properties. Under favorable conditions, they could be as likely as iron to generate carbon. This expands the number of potential source metals for diamond. Further multiplying this number by alloying and centering (of lattice points) variations, the number of potential sources could be vast. The above exercise is extendable to, for instance, Cr, Ni or other metals. This could provide a missing link between diamonds in stable cratons and other diamonds. (1) Giamn, M., Diamond of possibly metallurgical and seismic origin in an alloy from the debris after the Taiwan earthquake, PART I, 2004 Eos AGU Spring. (2) Giamn, M., submitted to GCA. (3) Giamn, M., PART II (Thermal) past is present.

  6. Gastric tubes and airway management in patients at risk of aspiration: history, current concepts, and proposal of an algorithm.

    PubMed

    Salem, M Ramez; Khorasani, Arjang; Saatee, Siavosh; Crystal, George J; El-Orbany, Mohammad

    2014-03-01

    Rapid sequence induction and intubation (RSII) and awake tracheal intubation are commonly used anesthetic techniques in patients at risk of pulmonary aspiration of gastric or esophageal contents. Some of these patients may have a gastric tube (GT) placed preoperatively. Currently, there are no guidelines regarding which patient should have a GT placed before anesthetic induction. Furthermore, clinicians are not in agreement as to whether to keep a GT in situ, or to partially or completely withdraw it before anesthetic induction. In this review we provide a historical perspective of the use of GTs during anesthetic induction in patients at risk of pulmonary aspiration. Before the introduction of cricoid pressure (CP) in 1961, various techniques were used including RSII combined with a head-up tilt. Sellick initially recommended the withdrawal of the GT before anesthetic induction. He hypothesized that a GT increases the risk of regurgitation and interferes with the compression of the upper esophagus during CP. He later modified his view and emphasized the safety of CP in the presence of a GT. Despite subsequent studies supporting the effectiveness of CP in occluding the esophagus around a GT, Sellick's early view has been perpetuated by investigators who recommend partial or complete withdrawal of the GT. On the basis of available information, we have formulated an algorithm for airway management in patients at risk of aspiration of gastric or esophageal contents. The approach in an individual patient depends on: the procedure; type and severity of the underlying pathology; state of consciousness; likelihood of difficult airway; whether or not the GT is in place; contraindications to the use of RSII or CP. The algorithm calls for the preanesthetic use of a large-bore GT to remove undigested food particles and awake intubation in patients with achalasia, and emptying the pouch by external pressure and avoidance of a GT in patients with Zenker diverticulum. It also

  7. Extraneous agents testing for substrates of avian origin and viral vaccines for poultry: current provisions and proposals for future approaches.

    PubMed

    Jungbäck, Carmen; Motitschke, Andreas

    2010-05-01

    This review gives an analysis of the current provisions of the Ph. Eur. and makes some proposals on how the requirements concerning the testing of extraneous agents could be modified to take into consideration the increase in quality that has been achieved over the past few decades.

  8. Gacs quantum algorithmic entropy in infinite dimensional Hilbert spaces

    SciTech Connect

    Benatti, Fabio; Oskouei, Samad Khabbazi Deh Abad, Ahmad Shafiei

    2014-08-15

    We extend the notion of Gacs quantum algorithmic entropy, originally formulated for finitely many qubits, to infinite dimensional quantum spin chains and investigate the relation of this extension with two quantum dynamical entropies that have been proposed in recent years.

  9. Proposed standardized definitions for vertical resolution and uncertainty in the NDACC lidar ozone and temperature algorithms - Part 1: Vertical resolution

    NASA Astrophysics Data System (ADS)

    Leblanc, Thierry; Sica, Robert J.; van Gijsel, Joanna A. E.; Godin-Beekmann, Sophie; Haefele, Alexander; Trickl, Thomas; Payen, Guillaume; Gabarrot, Frank

    2016-08-01

    A standardized approach for the definition and reporting of vertical resolution of the ozone and temperature lidar profiles contributing to the Network for the Detection for Atmospheric Composition Change (NDACC) database is proposed. Two standardized definitions homogeneously and unequivocally describing the impact of vertical filtering are recommended. The first proposed definition is based on the width of the response to a finite-impulse-type perturbation. The response is computed by convolving the filter coefficients with an impulse function, namely, a Kronecker delta function for smoothing filters, and a Heaviside step function for derivative filters. Once the response has been computed, the proposed standardized definition of vertical resolution is given by Δz = δz × HFWHM, where δz is the lidar's sampling resolution and HFWHM is the full width at half maximum (FWHM) of the response, measured in sampling intervals. The second proposed definition relates to digital filtering theory. After applying a Laplace transform to a set of filter coefficients, the filter's gain characterizing the effect of the filter on the signal in the frequency domain is computed, from which the cut-off frequency fC, defined as the frequency at which the gain equals 0.5, is computed. Vertical resolution is then defined by Δz = δz/(2fC). Unlike common practice in the field of spectral analysis, a factor 2fC instead of fC is used here to yield vertical resolution values nearly equal to the values obtained with the impulse response definition using the same filter coefficients. When using either of the proposed definitions, unsmoothed signals yield the best possible vertical resolution Δz = δz (one sampling bin). Numerical tools were developed to support the implementation of these definitions across all NDACC lidar groups. The tools consist of ready-to-use "plug-in" routines written in several programming languages that can be inserted into any lidar data processing software and
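    A minimal sketch of the first definition (Delta_z = delta_z x HFWHM), using a crude whole-sample FWHM count rather than the sub-bin interpolation a production routine would presumably use; the boxcar example is illustrative.

```python
import numpy as np

def vertical_resolution(coeffs, dz):
    """Delta_z = dz * FWHM of the filter's response to a Kronecker delta."""
    n = len(coeffs)
    impulse = np.zeros(2 * n + 1)
    impulse[n] = 1.0                                  # Kronecker delta
    response = np.convolve(impulse, coeffs, mode="same")
    fwhm = np.sum(response >= response.max() / 2.0)   # crude, whole-sample FWHM
    return dz * fwhm

# A 5-point boxcar smoother at 75 m sampling: Delta_z = 5 * 75 m = 375 m.
print(vertical_resolution(np.ones(5) / 5.0, 75.0))
```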

  10. Segmentation of MRI Brain Images with an Improved Harmony Searching Algorithm

    PubMed Central

    Yang, Zhang; Li, Guo; Weifeng, Ding

    2016-01-01

    The harmony searching (HS) algorithm is an optimization search algorithm currently applied to many practical problems. The HS algorithm iteratively revises the variables in the harmony memory and the probabilities with which different values are chosen, driving the iteration toward convergence on an optimal result. Accordingly, this study proposed a modified algorithm to improve its efficiency. First, a rough set algorithm was employed to improve the convergence and accuracy of the HS algorithm. Then, the optimal value was obtained using the improved HS algorithm. This optimal value was employed as the initial value of the fuzzy clustering algorithm for segmenting magnetic resonance imaging (MRI) brain images. Experimental results showed that the improved HS algorithm attained better convergence and more accurate results than the original HS algorithm. In our study, the MRI image segmentation effect of the improved algorithm was superior to that of the original fuzzy clustering method. PMID:27403428
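    For orientation, a minimal sketch of the basic HS loop that the paper modifies; the rough-set initialization and fuzzy-clustering stage are not included, and all parameter values below are illustrative.

```python
import numpy as np

def harmony_search(f, lo, hi, hms=20, hmcr=0.9, par=0.3, bw=0.05, iters=2000, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    dim = lo.size
    hm = rng.uniform(lo, hi, (hms, dim))              # harmony memory
    fit = np.array([f(x) for x in hm])
    for _ in range(iters):
        new = np.empty(dim)
        for d in range(dim):
            if rng.random() < hmcr:                   # memory consideration
                new[d] = hm[rng.integers(hms), d]
                if rng.random() < par:                # pitch adjustment
                    new[d] += bw * (hi[d] - lo[d]) * rng.uniform(-1, 1)
            else:                                     # random selection
                new[d] = rng.uniform(lo[d], hi[d])
        new = np.clip(new, lo, hi)
        worst = np.argmax(fit)
        if f(new) < fit[worst]:                       # replace the worst harmony
            hm[worst], fit[worst] = new, f(new)
    best = np.argmin(fit)
    return hm[best], fit[best]

# Example: minimize the sphere function in 5-D.
print(harmony_search(lambda v: float(np.sum(v**2)), [-5] * 5, [5] * 5))
```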

  11. Segmentation of MRI Brain Images with an Improved Harmony Searching Algorithm.

    PubMed

    Yang, Zhang; Shufan, Ye; Li, Guo; Weifeng, Ding

    2016-01-01

    The harmony searching (HS) algorithm is an optimization search algorithm currently applied to many practical problems. The HS algorithm iteratively revises the variables in the harmony memory and the probabilities with which different values are chosen, driving the iteration toward convergence on an optimal result. Accordingly, this study proposed a modified algorithm to improve its efficiency. First, a rough set algorithm was employed to improve the convergence and accuracy of the HS algorithm. Then, the optimal value was obtained using the improved HS algorithm. This optimal value was employed as the initial value of the fuzzy clustering algorithm for segmenting magnetic resonance imaging (MRI) brain images. Experimental results showed that the improved HS algorithm attained better convergence and more accurate results than the original HS algorithm. In our study, the MRI image segmentation effect of the improved algorithm was superior to that of the original fuzzy clustering method. PMID:27403428

  12. The Biochemical Origin of Pain – Proposing a new law of Pain: The origin of all Pain is Inflammation and the Inflammatory Response PART 1 of 3 – A unifying law of pain

    PubMed Central

    2009-01-01

    We are proposing a unifying theory or law of pain, which states: the origin of all pain is inflammation and the inflammatory response. The biochemical mediators of inflammation include cytokines, neuropeptides, growth factors and neurotransmitters. Irrespective of the type of pain, whether it is acute or chronic, peripheral or central, nociceptive or neuropathic, the underlying origin is inflammation and the inflammatory response. Activation of pain receptors, transmission and modulation of pain signals, neuroplasticity and central sensitization are all one continuum of inflammation and the inflammatory response. Irrespective of the characteristic of the pain, whether it is sharp, dull, aching, burning, stabbing, numbing or tingling, all pain arises from inflammation and the inflammatory response. We are proposing a re-classification and treatment of pain syndromes based upon their inflammatory profile. Treatment of pain syndromes should be based on these principles: (1) determination of the inflammatory profile of the pain syndrome; (2) inhibition or suppression of production of the appropriate inflammatory mediators, e.g. with inflammatory mediator blockers or surgical intervention where appropriate; (3) inhibition or suppression of neuronal afferent and efferent (motor) transmission, e.g. with anti-seizure drugs or local anesthetic blocks; (4) modulation of neuronal transmission, e.g. with opioid medication. At the L.A. Pain Clinic, we have successfully treated a variety of pain syndromes by utilizing these principles. This theory of the biochemical origin of pain is compatible with, inclusive of, and unifies existing theories and knowledge of the mechanism of pain, including the gate control theory and theories of pre-emptive analgesia, windup and central sensitization. PMID:17240081

  13. Modified projection algorithms for solving the split equality problems.

    PubMed

    Dong, Qiao-Li; He, Songnian

    2014-01-01

    The split equality problem (SEP) has extraordinary utility and broad applicability in many areas of applied mathematics. Recently, Byrne and Moudafi (2013) proposed a CQ algorithm for solving it. In this paper, we propose a modification for the CQ algorithm, which computes the stepsize adaptively and performs an additional projection step onto two half-spaces in each iteration. We further propose a relaxation scheme for the self-adaptive projection algorithm by using projections onto half-spaces instead of those onto the original convex sets, which is much more practical. Weak convergence results for both algorithms are analyzed.
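    As a rough illustration (not the authors' adaptive-stepsize scheme, and without the half-space relaxation), the sketch below runs a fixed-step projected-gradient iteration on the coupling residual of the split equality problem: find x in C and y in Q with Ax = By. The projections are supplied as functions; box constraints make them simple clipping.

```python
import numpy as np

def split_equality(A, B, proj_C, proj_Q, x, y, gamma=0.1, iters=1000):
    # Gradient-projection iteration on the coupling residual r = A x - B y.
    # The fixed step gamma must be small enough for convergence; the paper's
    # contribution is precisely to choose the stepsize adaptively instead.
    for _ in range(iters):
        r = A @ x - B @ y
        x = proj_C(x - gamma * A.T @ r)
        y = proj_Q(y + gamma * B.T @ r)
    return x, y

# Example with box constraints C = Q = [0, 1]^3, so projection is clipping.
A, B = np.eye(3), np.eye(3)
box = lambda v: np.clip(v, 0.0, 1.0)
x, y = split_equality(A, B, box, box, np.ones(3), np.zeros(3))
print(x, y)   # x and y converge toward a common point with A x = B y
```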

  14. Numerical algorithm for solving mathematical programming problems with a smooth surface as a constraint

    NASA Astrophysics Data System (ADS)

    Chernyaev, Yu. A.

    2016-03-01

    A numerical algorithm for minimizing a convex function on a smooth surface is proposed. The algorithm is based on reducing the original problem to a sequence of convex programming problems. Necessary extremum conditions are examined, and the convergence of the algorithm is analyzed.

  15. Comparison of cone beam artifacts reduction: two pass algorithm vs TV-based CS algorithm

    NASA Astrophysics Data System (ADS)

    Choi, Shinkook; Baek, Jongduk

    2015-03-01

    In cone beam computed tomography (CBCT), the severity of the cone beam artifacts increases as the cone angle increases. To reduce these artifacts, several modified FDK algorithms and compressed-sensing-based iterative algorithms have been proposed. In this paper, we used the two pass algorithm and the Gradient-Projection-Barzilai-Borwein (GPBB) algorithm to reduce the cone beam artifacts, and compared their performance using the structural similarity (SSIM) index. The two pass algorithm assumes that the cone beam artifacts are mainly caused by extreme-density (ED) objects; it therefore reproduces the cone beam artifacts (i.e., the error image) produced by the ED objects and subtracts them from the original image. The GPBB algorithm is a compressed-sensing-based iterative algorithm which minimizes an energy function, calculating the gradient projection with the step size determined by the Barzilai-Borwein formulation, and can therefore estimate missing data caused by the cone beam artifacts. To evaluate the performance of the two algorithms, we used test objects consisting of 7 ellipsoids separated along the z direction, and cone beam artifacts were generated using a 30-degree cone angle. Even though the FDK algorithm produced severe cone beam artifacts with a large cone angle, the two pass algorithm reduced the cone beam artifacts with small residual errors caused by inaccuracy of the ED objects. In contrast, the GPBB algorithm completely removed the cone beam artifacts and restored the original shape of the objects.

  16. Genomic and phylogenetic analyses of an adenovirus isolated from a corn snake (Elaphe guttata) imply a common origin with members of the proposed new genus Atadenovirus.

    PubMed

    Farkas, Szilvia L; Benko, Mária; Elo, Péter; Ursu, Krisztina; Dán, Adám; Ahne, Winfried; Harrach, Balázs

    2002-10-01

    Approximately 60% of the genome of an adenovirus isolated from a corn snake (Elaphe guttata) was cloned and sequenced. The results of homology searches showed that the genes of the corn snake adenovirus (SnAdV-1) were closest to their counterparts in members of the recently proposed new genus Atadenovirus. In phylogenetic analyses of the complete hexon and protease genes, SnAdV-1 indeed clustered together with the atadenoviruses. The characteristic features in the genome organization of SnAdV-1 included the presence of a gene homologous to that for protein p32K, the lack of structural proteins V and IX and the absence of homologues of the E1A and E3 regions. These characteristics are in accordance with the genus-defining markers of atadenoviruses. Comparison of the cleavage sites of the viral protease in core protein pVII also confirmed SnAdV-1 as a candidate member of the genus Atadenovirus. Thus, the hypothesis on the possible reptilian origin of atadenoviruses (Harrach, Acta Veterinaria Hungarica 48, 484-490, 2000) seems to be supported. However, the base composition of the DNA sequence (>18 kb) determined from the SnAdV-1 genome showed an equilibrated GC content of 51%, which is unusual for an atadenovirus.

  17. A proposal to incorporate trial data into a hybrid ACC/AHA algorithm for the allocation of statin therapy in primary prevention.

    PubMed

    Ridker, Paul M; Rose, Lynda; Cook, Nancy R

    2015-03-10

    Current algorithms for statin allocation in primary prevention use epidemiologic estimates of absolute risk. However, a global risk prediction score has not been used as an enrollment criterion in any randomized trial of statin therapy. Moreover, completed statin trials show greater relative risk reductions for those patients at lower levels of absolute risk. Thus, risk calculators that rely solely on epidemiologic modeling do not ensure that those who will benefit are selected for treatment. We propose a hybrid approach to statin prescription for apparently healthy men and women that strongly endorses pharmacologic treatment for those who have estimated 10-year risks ≥7.5% and for whom trial-based evidence supports statin efficacy in primary prevention. Although individuals could still be treated on the basis of absolute risk alone, the hybrid approach is evidence-based, is easily applied in clinical practice, and may increase the transparency of physician-patient interactions concerning prescription of statin therapy in primary prevention.

  18. Algorithms and Algorithmic Languages.

    ERIC Educational Resources Information Center

    Veselov, V. M.; Koprov, V. M.

    This paper is intended as an introduction to a number of problems connected with the description of algorithms and algorithmic languages, particularly the syntaxes and semantics of algorithmic languages. The terms "letter, word, alphabet" are defined and described. The concept of the algorithm is defined and the relation between the algorithm and…

  19. Bayesian Smoothing Algorithms in Partially Observed Markov Chains

    NASA Astrophysics Data System (ADS)

    Ait-el-Fquih, Boujemaa; Desbouvries, François

    2006-11-01

    Let x = {x_n}, n ∈ N, be a hidden process, y = {y_n} an observed process and r = {r_n} some auxiliary process. We assume that t = {t_n} with t_n = (x_n, r_n, y_{n-1}) is a (Triplet) Markov Chain (TMC). TMCs are more general than Hidden Markov Chains (HMCs) and yet enable the development of efficient restoration and parameter estimation algorithms. This paper is devoted to Bayesian smoothing algorithms for TMCs. We first propose twelve algorithms for general TMCs. In the Gaussian case, these smoothers reduce to a set of algorithms which include, among other solutions, extensions to TMCs of classical Kalman-like smoothing algorithms (originally designed for HMCs) such as the RTS algorithms, the two-filter algorithms or the Bryson-Frazier algorithm.

  20. Proposed standardized definitions for vertical resolution and uncertainty in the NDACC lidar ozone and temperature algorithms - Part 3: Temperature uncertainty budget

    NASA Astrophysics Data System (ADS)

    Leblanc, Thierry; Sica, Robert J.; van Gijsel, Joanna A. E.; Haefele, Alexander; Payen, Guillaume; Liberti, Gianluigi

    2016-08-01

    A standardized approach for the definition, propagation, and reporting of uncertainty in the temperature lidar data products contributing to the Network for the Detection for Atmospheric Composition Change (NDACC) database is proposed. One important aspect of the proposed approach is the ability to propagate all independent uncertainty components in parallel through the data processing chain. The individual uncertainty components are then combined together at the very last stage of processing to form the temperature combined standard uncertainty. The identified uncertainty sources comprise major components such as signal detection, saturation correction, background noise extraction, temperature tie-on at the top of the profile, and absorption by ozone if working in the visible spectrum, as well as other components such as molecular extinction, the acceleration of gravity, and the molecular mass of air, whose magnitudes depend on the instrument, data processing algorithm, and altitude range of interest. The expression of the individual uncertainty components and their step-by-step propagation through the temperature data processing chain are thoroughly estimated, taking into account the effect of vertical filtering and the merging of multiple channels. All sources of uncertainty except detection noise imply correlated terms in the vertical dimension, which means that covariance terms must be taken into account when vertical filtering is applied and when temperature is integrated from the top of the profile. Quantitatively, the uncertainty budget is presented in a generic form (i.e., as a function of instrument performance and wavelength), so that any NDACC temperature lidar investigator can easily estimate the expected impact of individual uncertainty components in the case of their own instrument. Using this standardized approach, an example of uncertainty budget is provided for the Jet Propulsion Laboratory (JPL) lidar at Mauna Loa Observatory, Hawai'i, which is
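    The final combination step, for components that really are independent, is addition in quadrature; a minimal sketch follows. The covariance terms introduced by vertical filtering and by the tie-on integration, which the paper treats explicitly, are omitted here.

```python
import numpy as np

def combined_uncertainty(components):
    """Combine independent standard-uncertainty profiles in quadrature.

    components -- list of 1-D arrays (detection noise, saturation correction,
                  background extraction, tie-on, ozone absorption, ...),
                  each the same length as the temperature profile.
    """
    U = np.vstack(components)
    return np.sqrt((U**2).sum(axis=0))
```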

  1. GPU Accelerated Event Detection Algorithm

    2011-05-25

    Smart grids require new algorithmic approaches as well as parallel formulations. One of the critical components is the prediction of changes and detection of anomalies within the power grid. The state-of-the-art algorithms are not suited to handle the demands of streaming data analysis: (i) event detection algorithms are needed that can scale with the size of the data; (ii) algorithms are needed that can not only handle the multi-dimensional nature of the data but also model both spatial and temporal dependencies in the data, which, for the most part, are highly nonlinear; (iii) algorithms are needed that can operate in an online fashion with streaming data. The GAEDA code is a new online anomaly detection technique that takes into account the spatial, temporal, and multi-dimensional aspects of the data set. The basic idea behind the proposed approach is (a) to convert a multi-dimensional sequence into a univariate time series that captures the changes between successive windows extracted from the original sequence using singular value decomposition (SVD), and then (b) to apply known anomaly detection techniques for univariate time series. A key challenge for the proposed approach is to make the algorithm scalable to huge datasets by adopting techniques from perturbation theory and incremental SVD analysis. We used recent advances in tensor decomposition techniques, which reduce computational complexity, to monitor the change between successive windows and detect anomalies in the same manner as described above. We therefore propose to develop parallel solutions on many-core systems such as GPUs, because these algorithms involve a lot of numerical operations and are highly data-parallelizable.
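    A minimal sketch of the windowed-SVD idea described above: each window of the multivariate stream is summarized by its leading right singular vector (the dominant direction in sensor space), and the rotation between successive windows yields a univariate change score to which an ordinary threshold detector can be applied. This is an illustration of the stated approach, not the GAEDA code.

```python
import numpy as np

def svd_change_series(X, win):
    """X: (T, d) multivariate stream -> univariate change scores per window."""
    scores, prev = [], None
    for start in range(0, X.shape[0] - win + 1, win):
        W = X[start:start + win]
        # Leading right singular vector summarizes the window's dominant direction.
        _, _, vt = np.linalg.svd(W - W.mean(axis=0), full_matrices=False)
        v = vt[0]
        if prev is not None:
            scores.append(1.0 - abs(prev @ v))    # subspace rotation between windows
        prev = v
    return np.array(scores)

# Synthetic stream with one dominant mode; flip one sensor mid-stream.
rng = np.random.default_rng(0)
t = rng.normal(size=(600, 1))
X = t @ np.ones((1, 8)) + 0.1 * rng.normal(size=(600, 8))
X[300:, 0] *= -1.0
scores = svd_change_series(X, win=50)
print(scores > scores.mean() + 3 * scores.std())  # spike at the injected change
```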

  2. GPU Accelerated Event Detection Algorithm

    SciTech Connect

    2011-05-25

    Smart grid external require new algorithmic approaches as well as parallel formulations. One of the critical components is the prediction of changes and detection of anomalies within the power grid. The state-of-the-art algorithms are not suited to handle the demands of streaming data analysis. (i) need for events detection algorithms that can scale with the size of data, (ii) need for algorithms that can not only handle multi dimensional nature of the data, but also model both spatial and temporal dependencies in the data, which, for the most part, are highly nonlinear, (iii) need for algorithms that can operate in an online fashion with streaming data. The GAEDA code is a new online anomaly detection techniques that take into account spatial, temporal, multi-dimensional aspects of the data set. The basic idea behind the proposed approach is to (a) to convert a multi-dimensional sequence into a univariate time series that captures the changes between successive windows extracted from the original sequence using singular value decomposition (SVD), and then (b) to apply known anomaly detection techniques for univariate time series. A key challenge for the proposed approach is to make the algorithm scalable to huge datasets by adopting techniques from perturbation theory, incremental SVD analysis. We used recent advances in tensor decomposition techniques which reduce computational complexity to monitor the change between successive windows and detect anomalies in the same manner as described above. Therefore we propose to develop the parallel solutions on many core systems such as GPUs, because these algorithms involve lot of numerical operations and are highly data-parallelizable.

  3. A Proposal to Unify the Classification of Breast and Prostate Cancers Based on the Anatomic Site of Cancer Origin and on Long-term Patient Outcome.

    PubMed

    Tabár, László; Dean, Peter B; Yen, Amy M-F; Tarján, Miklós; Chiu, Sherry Y-H; Chen, Sam L-S; Fann, Jean C-Y; Chen, Tony H-H

    2014-01-01

    The similarity between the structure and function of the breast and prostate has been known for a long time, but there are serious discrepancies in the terminology describing breast and prostate cancers. The use of the large, thick-section (3D) histology technique for both organs exposes the irrationality of the breast cancer terminology. Pathologists with expertise in diagnosing prostate cancer take the anatomic site of cancer origin into account when using the terms AAP (acinar adenocarcinoma of the prostate) and DAP (ductal adenocarcinoma of the prostate) to distinguish between the prostate cancers originating primarily from the fluid-producing acinar portion of the organ (AAP) and the tumors originating either purely from the larger ducts (DAP) or from both the acini and the main ducts combined (DAP and AAP). Long-term patient outcome is closely correlated with the terminology, because patients with DAP have a significantly poorer prognosis than patients with AAP. The current breast cancer terminology could be improved by modeling it after the method of classifying prostate cancer to reflect the anatomic site of breast cancer origin and the patient outcome. The long-term survival curves of our consecutive breast cancer cases collected since 1977 clearly show that the non-palpable, screen-detected breast cancers originating from the milk-producing acini have excellent prognosis, irrespective of their histologic malignancy grade or biomarkers. Correspondingly, the breast cancer subtypes of truly ductal origin have a significantly poorer outcome, despite recent improvements in diagnosis and therapy. The mammographic appearance of breast cancers reflects the underlying tissue structure. Addition of these "mammographic tumor features" to the currently used histologic phenotypes makes it possible to distinguish the breast cancer cases of ductal origin with a poor outcome, termed DAB (ductal adenocarcinoma of the breast), from the more easily managed breast cancers of

  4. A fast portable implementation of the Secure Hash Algorithm, III.

    SciTech Connect

    McCurley, Kevin S.

    1992-10-01

    In 1992, NIST announced a proposed standard for a collision-free hash function. The algorithm for producing the hash value is known as the Secure Hash Algorithm (SHA), and the standard using the algorithm is known as the Secure Hash Standard (SHS). Later, an announcement was made that a scientist at NSA had discovered a weakness in the original algorithm. A revision to this standard was then announced as FIPS 180-1, and includes a slight change to the algorithm that eliminates the weakness. This new algorithm is called SHA-1. In this report we describe a portable and efficient implementation of SHA-1 in the C language. Performance information is given, as well as tips for porting the code to other architectures. We conclude with some observations on the efficiency of the algorithm, and a discussion of how the efficiency of SHA might be improved.
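    Today one would rarely hand-roll SHA-1; for reference, the FIPS 180-1 test vector can be checked from Python's hashlib, which wraps a C implementation much like the one described.

```python
import hashlib

# FIPS 180-1 test vector: SHA-1("abc")
assert hashlib.sha1(b"abc").hexdigest() == "a9993e364706816aba3e25717850c26c9cd0d89d"
```

    Note that SHA-1 has since been shown vulnerable to practical collision attacks and is no longer recommended for security-critical use.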

  5. A cuckoo search algorithm for multimodal optimization.

    PubMed

    Cuevas, Erik; Reyna-Orta, Adolfo

    2014-01-01

    Interest in multimodal optimization is expanding rapidly, since many practical engineering problems demand the localization of multiple optima within a search space. On the other hand, the cuckoo search (CS) algorithm is a simple and effective global optimization algorithm which can not be directly applied to solve multimodal optimization problems. This paper proposes a new multimodal optimization algorithm called the multimodal cuckoo search (MCS). Under MCS, the original CS is enhanced with multimodal capacities by means of (1) the incorporation of a memory mechanism to efficiently register potential local optima according to their fitness value and the distance to other potential solutions, (2) the modification of the original CS individual selection strategy to accelerate the detection process of new local minima, and (3) the inclusion of a depuration procedure to cyclically eliminate duplicated memory elements. The performance of the proposed approach is compared to several state-of-the-art multimodal optimization algorithms considering a benchmark suite of fourteen multimodal problems. Experimental results indicate that the proposed strategy is capable of providing better and even a more consistent performance over existing well-known multimodal algorithms for the majority of test problems yet avoiding any serious computational deterioration. PMID:25147850
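    For context, a minimal sketch of the standard single-objective cuckoo search that MCS builds on, with Lévy-flight steps and abandonment of a fraction pa of the worst nests; the memory, selection, and depuration mechanisms listed above are not reproduced, and all parameter values are illustrative.

```python
import numpy as np
from math import gamma, pi, sin

def levy_step(rng, dim, beta=1.5):
    # Mantegna's algorithm for Levy-distributed step lengths.
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma, dim)
    v = rng.normal(0.0, 1.0, dim)
    return u / np.abs(v) ** (1 / beta)

def cuckoo_search(f, lo, hi, n=15, pa=0.25, iters=500, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    dim = lo.size
    nests = rng.uniform(lo, hi, (n, dim))
    fit = np.array([f(x) for x in nests])
    best = nests[np.argmin(fit)].copy()
    for _ in range(iters):
        for i in range(n):
            # Levy flight around the current nest, biased toward the best one.
            cand = np.clip(nests[i] + 0.01 * levy_step(rng, dim) * (nests[i] - best),
                           lo, hi)
            j = rng.integers(n)                  # compare with a random nest
            fc = f(cand)
            if fc < fit[j]:
                nests[j], fit[j] = cand, fc
        k = max(1, int(pa * n))                  # abandon the worst nests
        worst = np.argsort(fit)[-k:]
        nests[worst] = rng.uniform(lo, hi, (k, dim))
        fit[worst] = [f(x) for x in nests[worst]]
        best = nests[np.argmin(fit)].copy()
    return best, float(fit.min())

# Example: minimize the 2-D sphere function.
print(cuckoo_search(lambda v: float(np.sum(v**2)), [-5, -5], [5, 5]))
```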

  6. A cuckoo search algorithm for multimodal optimization.

    PubMed

    Cuevas, Erik; Reyna-Orta, Adolfo

    2014-01-01

    Interest in multimodal optimization is expanding rapidly, since many practical engineering problems demand the localization of multiple optima within a search space. On the other hand, the cuckoo search (CS) algorithm is a simple and effective global optimization algorithm which can not be directly applied to solve multimodal optimization problems. This paper proposes a new multimodal optimization algorithm called the multimodal cuckoo search (MCS). Under MCS, the original CS is enhanced with multimodal capacities by means of (1) the incorporation of a memory mechanism to efficiently register potential local optima according to their fitness value and the distance to other potential solutions, (2) the modification of the original CS individual selection strategy to accelerate the detection process of new local minima, and (3) the inclusion of a depuration procedure to cyclically eliminate duplicated memory elements. The performance of the proposed approach is compared to several state-of-the-art multimodal optimization algorithms considering a benchmark suite of fourteen multimodal problems. Experimental results indicate that the proposed strategy is capable of providing better and even a more consistent performance over existing well-known multimodal algorithms for the majority of test problems yet avoiding any serious computational deterioration.

  7. An Adaptive Digital Image Watermarking Algorithm Based on Morphological Haar Wavelet Transform

    NASA Astrophysics Data System (ADS)

    Huang, Xiaosheng; Zhao, Sujuan

    At present, most wavelet-based digital watermarking algorithms are based on linear wavelet transforms, and fewer on non-linear wavelet transforms. In this paper, we propose an adaptive digital image watermarking algorithm based on a non-linear wavelet transform, the morphological Haar wavelet transform. In the algorithm, the original image and the watermark image are each decomposed with a multi-scale morphological wavelet transform. The watermark information is then adaptively embedded into the original image at different resolutions, exploiting the features of the human visual system (HVS). The experimental results show that our method is more robust and effective than ordinary wavelet transform algorithms.

  8. [Accomplishments in the Last Year Against the Objectives Laid Out in the Original Proposal; the Current Status of the Research; the Work to Go in the Next Year; and Publications]

    NASA Technical Reports Server (NTRS)

    Elliot, James

    2005-01-01

    Below is the annual progress report (through 2005-01-31) on NASA Grant NNG04GF25G. It is organized according to: (I) Accomplishments in the last year against the objectives laid out in the original proposal; (II) The current status of the research; (III) The work to go in the next year; (IV) Publications. Since this program is a continuation of the occultation work supported in a predecessor grant, the "Accomplishments" section lists all the tasks written into the proposal (in June 2003) through the end of the first year of the new grant.

  9. Analysis of a wavelet-based robust hash algorithm

    NASA Astrophysics Data System (ADS)

    Meixner, Albert; Uhl, Andreas

    2004-06-01

    This paper is a quantitative evaluation of a wavelet-based, robust authentication hashing algorithm. Based on the results of a series of robustness and tampering sensitivity tests, we describe possible shortcomings and propose various modifications to the algorithm to improve its performance. The second part of the paper describes an attack against the scheme. It allows an attacker to modify a tampered image such that its hash value closely matches the hash value of the original.

  10. Algorithm for shortest path search in Geographic Information Systems by using reduced graphs.

    PubMed

    Rodríguez-Puente, Rafael; Lazo-Cortés, Manuel S

    2013-01-01

    The use of Geographic Information Systems has increased considerably since the eighties and nineties. Shortest path search is one of their most demanding applications. Several studies about shortest path search show the feasibility of using graphs for this purpose. Dijkstra's algorithm is one of the classic shortest path search algorithms, but it is not well suited for shortest path search in large graphs. This is the reason why various modifications to Dijkstra's algorithm have been proposed by several authors, using heuristics to reduce the run time of shortest path search. One of the most used heuristic algorithms is the A* algorithm, whose main goal is to reduce the run time by reducing the search space. This article proposes a modification of Dijkstra's shortest path search algorithm in reduced graphs. It shows that the cost of the path found in this work is equal to the cost of the path found using Dijkstra's algorithm in the original graph. The results of finding the shortest path by applying the proposed algorithm, Dijkstra's algorithm and the A* algorithm are compared. This comparison shows that, by applying the proposed approach, it is possible to obtain the optimal path in a similar or even shorter time than when using heuristic algorithms.
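    For reference, the baseline being modified, Dijkstra's algorithm with a binary heap, in a short Python sketch; the paper's reduced-graph preprocessing is not reproduced.

```python
import heapq

def dijkstra(graph, source):
    """graph: {node: [(neighbor, weight), ...]} with non-negative weights."""
    dist = {source: 0}
    pq = [(0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue                      # stale queue entry
        for v, w in graph[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

g = {"a": [("b", 1), ("c", 4)], "b": [("c", 2)], "c": []}
print(dijkstra(g, "a"))   # {'a': 0, 'b': 1, 'c': 3}
```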

  11. [The origins of the regulation of prostitution in contemporary Spain from Cabarrús's proposal (1792) to the Madrid Regulations (1847)].

    PubMed

    Guereña, J L

    1995-01-01

    The publication in 1847 of the Reglamento para la represión de los excesos de la prostitución en Madrid (Regulations for the repression of the excesses of prostitution in Madrid) inaugurated an era of regulated prostitution in Spain, which followed upon a period of abolitionism decreed by Philip IV. In view of the spread of prostitution and venereal diseases, police measures, and especially medical measures, were both considered in the development of these regulations, which had first been proposed by the Count of Cabarrús in 1792. Although completely confidential, the new system of regulations, drawn up in 1847, set the stage for the wide-reaching regulation of prostitution that came into effect in several cities in Spain during and after the mid-nineteenth century, and which included city residence and periodic health surveillance for prostitutes. PMID:11624755

  12. [The origins of the regulation of prostitution in contemporary Spain from Cabarrús's proposal (1792) to the Madrid Regulations (1847)].

    PubMed

    Guereña, J L

    1995-01-01

    The publication in 1847 of the Reglamento para la represión de los excesos de la prostitución en Madrid (Regulations for the repression of the excesses of prostitution in Madrid) inaugurated an era of regulated prostitution in Spain, which followed upon a period of abolitionism decreed by Philip IV. In view of the spread of prostitution and venereal diseases, police measures, and especially medical measures, were both considered in the development of these regulations, which had first been proposed by the Count of Cabarrús in 1792. Although completely confidential, the new system of regulations, drawn up in 1847, set the stage for the wide-reaching regulation of prostitution that came into effect in several cities in Spain during and after the mid-nineteenth century, and which included city residence and periodic health surveillance for prostitutes.

  13. An original approach to fill the gap in the earthquake disaster experience - a proposal for 'the archive of the quake experience' -

    NASA Astrophysics Data System (ADS)

    Tanaka, Y.; Hirayama, Y.; Kuroda, S.; Yoshida, M.

    2015-12-01

    People without severe disaster experience inevitably forget even an extraordinary one like 3.11 as time passes. Therefore, to build a resilient society, an ingenious effort to keep people's memory of disasters from fading away is necessary. Since 2011, we have been carrying out earthquake disaster drills for residents of high-rise apartments, for schoolchildren, for citizens of coastal areas, etc. Using a portable earthquake simulator (1), the drill consists of three parts: first, a short lecture explaining the characteristic quakes Japanese people can expect in the future; second, a reliving experience of major earthquakes that have hit Japan since 1995; and third, a short lecture on preparations that can be made at home and/or in an office. For the quake experience, although the motion is only two-dimensional, real earthquake observation records are used to control the simulator, allowing people to relive different kinds of earthquakes, including the long-period motion of skyscrapers. Feedback on the drill is always positive, because participants understand that reliving the quake experience with proper lectures is one of the best methods to communicate past disasters to their families and pass them on to the next generation. There are several kinds of disaster archives serving as inheritance, such as pictures, movies, documents, interviews, and so on. In addition to them, here we propose to construct 'the archive of the quake experience', which compiles observed data ready to be relived with the simulator. We would like to show some movies of our quake drill in the presentation. Reference: (1) Kuroda, S. et al. (2012), "Development of portable earthquake simulator for enlightenment of disaster preparedness", 15th World Conference on Earthquake Engineering 2012, Vol. 12, 9412-9420.

  14. A hybrid artificial bee colony algorithm for numerical function optimization

    NASA Astrophysics Data System (ADS)

    Alqattan, Zakaria N.; Abdullah, Rosni

    2015-02-01

    The Artificial Bee Colony (ABC) algorithm is one of the swarm intelligence algorithms; it was introduced by Karaboga in 2005. It is a meta-heuristic optimization search algorithm inspired by the intelligent foraging behavior of honey bees in nature. Its unique search process has made it competitive with other search algorithms in the area of optimization, such as the Genetic Algorithm (GA) and Particle Swarm Optimization (PSO). However, the performance of the ABC's local search process and its bee movement (solution improvement) equation still has some weaknesses. The ABC is good at avoiding being trapped at local optima, but it spends much of its time searching around unpromising, randomly selected solutions. Inspired by PSO, we propose a hybrid particle-movement ABC algorithm called HPABC, which adapts the particle movement process to improve the exploration of the original ABC algorithm. Numerical benchmark functions were used to experimentally test the HPABC algorithm. The results illustrate that the HPABC algorithm can outperform the ABC algorithm in most of the experiments (75% better in accuracy and over 3 times faster).

  15. An efficient cuckoo search algorithm for numerical function optimization

    NASA Astrophysics Data System (ADS)

    Ong, Pauline; Zainuddin, Zarita

    2013-04-01

    The cuckoo search algorithm, which reproduces the breeding strategy of the best-known brood-parasitic bird, the cuckoo, has demonstrated its superiority in obtaining global solutions for numerical optimization problems. However, the fixed-step approach involved in its exploration and exploitation behavior might slow down the search process considerably. In this regard, an improved cuckoo search algorithm with adaptive step size adjustment is introduced and its feasibility is validated on a variety of benchmarks. The obtained results show that the proposed scheme outperforms the standard cuckoo search algorithm in terms of convergence characteristics while preserving the fascinating features of the original method.

  16. Effective FCM noise clustering algorithms in medical images.

    PubMed

    Kannan, S R; Devi, R; Ramathilagam, S; Takezawa, K

    2013-02-01

    The main motivation of this paper is to introduce a class of robust non-Euclidean distance measures for the original data space, from which new objective functions are derived for clustering non-Euclidean structures in data, enhancing the robustness of the original clustering algorithms against noise and outliers. The new objective functions of the proposed algorithms are realized by incorporating the noise clustering concept into the entropy-based fuzzy C-means algorithm, with a suitable noise distance that captures information about noisy data in the clustering process. This paper presents initial cluster prototypes using a prototype initialization method, so that the final result is obtained with a smaller number of iterations. To evaluate the performance of the proposed methods in reducing the noise level, experimental work has been carried out with a synthetic image corrupted by Gaussian noise. The superiority of the proposed methods has been examined through an experimental study on medical images. The experimental results show that the proposed algorithms perform significantly better than the standard existing algorithms. The accurate classification percentage of the proposed fuzzy C-means segmentation method is obtained using the silhouette validity index.
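    For orientation, the standard FCM updates that the proposed noise-clustering objectives extend; the entropy and noise-distance terms described above are not included in this sketch.

```python
import numpy as np

def fcm(X, c, m=2.0, iters=100, seed=0):
    """Plain fuzzy C-means: alternate center and membership updates."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    U = rng.random((c, n))
    U /= U.sum(axis=0)                              # memberships sum to 1 per point
    for _ in range(iters):
        Um = U ** m
        centers = (Um @ X) / Um.sum(axis=1, keepdims=True)
        dist = np.linalg.norm(X[None, :, :] - centers[:, None, :], axis=2) + 1e-12
        # u_ik = 1 / sum_j (d_ik / d_jk)^(2/(m-1))
        U = 1.0 / ((dist[:, None, :] / dist[None, :, :]) ** (2.0 / (m - 1.0))).sum(axis=1)
    return centers, U

# Example: two well-separated blobs in 2-D.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.1, (50, 2)), rng.normal(1.0, 0.1, (50, 2))])
centers, U = fcm(X, c=2)
print(centers)
```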

  17. Optimal Golomb Ruler Sequences Generation for Optical WDM Systems: A Novel Parallel Hybrid Multi-objective Bat Algorithm

    NASA Astrophysics Data System (ADS)

    Bansal, Shonak; Singh, Arun Kumar; Gupta, Neena

    2016-07-01

    In real life, multi-objective engineering design problems are very tough and time-consuming optimization problems due to their high degree of nonlinearity, complexity and inhomogeneity. Nature-inspired multi-objective optimization algorithms are now becoming popular for solving such problems. This paper proposes an original multi-objective Bat algorithm (MOBA) and its extended form, a novel parallel hybrid multi-objective Bat algorithm (PHMOBA), to generate shortest-length Golomb rulers, called optimal Golomb rulers (OGRs), in a reasonable computation time. OGRs find application in optical wavelength division multiplexing (WDM) systems as a channel-allocation algorithm to reduce four-wave mixing (FWM) crosstalk. The performance of both proposed algorithms in generating OGRs for optical WDM channel allocation is compared with existing classical computing and nature-inspired algorithms, including extended quadratic congruence (EQC), the search algorithm (SA), genetic algorithms (GAs), biogeography based optimization (BBO) and big bang-big crunch (BB-BC) optimization. Simulations conclude that the proposed parallel hybrid multi-objective Bat algorithm works more efficiently than the original multi-objective Bat algorithm and the other existing algorithms in generating OGRs for optical WDM systems. The PHMOBA has higher convergence and success rates than the original MOBA. The efficiency improvement of the proposed PHMOBA in generating OGRs up to 20 marks, in terms of ruler length and total optical channel bandwidth (TBW), is 100%, whereas for the original MOBA it is 85%. Finally, the implications for further research are also discussed.
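    Whatever search algorithm generates candidate rulers, the feasibility check is simple: a ruler is Golomb if and only if all pairwise mark differences are distinct. A small sketch:

```python
from itertools import combinations

def is_golomb_ruler(marks):
    """A Golomb ruler has all pairwise mark differences distinct."""
    diffs = [b - a for a, b in combinations(sorted(marks), 2)]
    return len(diffs) == len(set(diffs))

print(is_golomb_ruler([0, 1, 4, 9, 11]))   # True: a known optimal 5-mark ruler
print(is_golomb_ruler([0, 1, 2, 4]))       # False: differences 1 and 2 repeat
```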

  18. A Novel Hybrid Firefly Algorithm for Global Optimization

    PubMed Central

    Zhang, Lina; Liu, Liqiang; Yang, Xin-She; Dai, Yuntao

    2016-01-01

    Global optimization is challenging to solve due to its nonlinearity and multimodality. Traditional algorithms such as the gradient-based methods often struggle to deal with such problems and one of the current trends is to use metaheuristic algorithms. In this paper, a novel hybrid population-based global optimization algorithm, called hybrid firefly algorithm (HFA), is proposed by combining the advantages of both the firefly algorithm (FA) and differential evolution (DE). FA and DE are executed in parallel to promote information sharing among the population and thus enhance searching efficiency. In order to evaluate the performance and efficiency of the proposed algorithm, a diverse set of selected benchmark functions are employed and these functions fall into two groups: unimodal and multimodal. The experimental results show better performance of the proposed algorithm compared to the original version of the firefly algorithm (FA), differential evolution (DE) and particle swarm optimization (PSO) in the sense of avoiding local minima and increasing the convergence rate. PMID:27685869

  19. Molecular diagnosis of distal renal tubular acidosis in Tunisian patients: proposed algorithm for Northern Africa populations for the ATP6V1B1, ATP6V0A4 and SCL4A1 genes

    PubMed Central

    2013-01-01

    Background. Primary distal renal tubular acidosis (dRTA) caused by mutations in the genes that codify for the H+-ATPase pump subunits is a heterogeneous disease with a poor phenotype-genotype correlation. Up to now, large cohorts of dRTA Tunisian patients have not been analyzed, and molecular defects may differ from those described in other ethnicities. We aim to identify molecular defects present in the ATP6V1B1, ATP6V0A4 and SLC4A1 genes in a Tunisian cohort, according to the following algorithm: first, ATP6V1B1 gene analysis in dRTA patients with sensorineural hearing loss (SNHL) or unknown hearing status. Afterwards, ATP6V0A4 gene study in dRTA patients with normal hearing, and in those without any structural mutation in the ATP6V1B1 gene despite presenting SNHL. Finally, analysis of the SLC4A1 gene in those patients with a negative result for the previous studies. Methods. 25 children (19 boys) with dRTA from 20 families of Tunisian origin were studied. DNAs were extracted by the standard phenol/chloroform method. Molecular analysis was performed by PCR amplification and direct sequencing. Results. In the index cases, ATP6V1B1 gene screening resulted in a mutation detection rate of 81.25%, which increased up to 95% after ATP6V0A4 gene analysis. Three ATP6V1B1 mutations were observed: one frameshift mutation (c.1155dupC; p.Ile386fs) in exon 12; a G to C single nucleotide substitution at the acceptor splicing site (c.175-1G>C; p.?) in intron 2; and one novel missense mutation (c.1102G>A; p.Glu368Lys) in exon 11. We also report four mutations in the ATP6V0A4 gene: one single nucleotide deletion in exon 13 (c.1221delG; p.Met408Cysfs*10); the nonsense c.16C>T; p.Arg6*, in exon 3; and the missense changes c.1739T>C; p.Met580Thr, in exon 17 and c.2035G>T; p.Asp679Tyr, in exon 19. Conclusion. Molecular diagnosis of ATP6V1B1 and ATP6V0A4 genes was performed in a large Tunisian cohort with dRTA. We identified three different ATP6V1

  20. An adaptive algorithm for low contrast infrared image enhancement

    NASA Astrophysics Data System (ADS)

    Liu, Sheng-dong; Peng, Cheng-yuan; Wang, Ming-jia; Wu, Zhi-guo; Liu, Jia-qi

    2013-08-01

    An adaptive infrared image enhancement algorithm for low contrast is proposed in this paper, to deal with the problem that conventional image enhancement algorithms cannot effectively identify the regions of interest when the dynamic range of the image is large. The algorithm starts from the characteristics of human visual perception and combines global adaptive image enhancement with local feature boosting, so that not only is the image contrast raised but the texture also becomes more distinct. First, the global dynamic range is adjusted: a correspondence is established between the dynamic range of the original image and the display grayscale, raising the gray level of bright objects while reducing that of dark targets, to improve the overall image contrast. Second, a filtering operation over the current point and its neighborhood pixels extracts image texture information and adjusts the brightness of the current point, enhancing the local contrast of the image. The algorithm overcomes the defect of traditional edge-detection algorithms, in which outlines are easily blurred, and ensures the distinctness of texture detail during enhancement. Lastly, we normalize the globally adjusted image and the locally adjusted image to ensure a smooth transition of image details. Many experiments were performed to compare the algorithm proposed in this paper with conventional image enhancement algorithms, using two groups of blurred IR images. Experiments show that histogram equalization boosts the contrast but leaves details unclear, whereas the Retinex algorithm makes details distinguishable. The image processed by the self-adaptive enhancement algorithm proposed in this paper has clear details, and the image contrast is markedly improved compared with Retinex
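    A toy sketch of the two-stage idea, assuming scipy is available: a global remap of the dynamic range followed by adding back local high-pass detail. The specific mapping and filters of the paper are not reproduced; the box filter and gain are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def enhance_ir(img, alpha=0.5, size=5):
    # Stage 1: global remap of the dynamic range to [0, 1].
    g = (img - img.min()) / (np.ptp(img) + 1e-12)
    # Stage 2: boost local contrast by adding back high-pass detail.
    detail = g - uniform_filter(g, size=size)
    return np.clip(g + alpha * detail, 0.0, 1.0)
```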

  1. Proposed standardized definitions for vertical resolution and uncertainty in the NDACC lidar ozone and temperature algorithms - Part 2: Ozone DIAL uncertainty budget

    NASA Astrophysics Data System (ADS)

    Leblanc, Thierry; Sica, Robert J.; van Gijsel, Joanna A. E.; Godin-Beekmann, Sophie; Haefele, Alexander; Trickl, Thomas; Payen, Guillaume; Liberti, Gianluigi

    2016-08-01

    A standardized approach for the definition, propagation, and reporting of uncertainty in the ozone differential absorption lidar data products contributing to the Network for the Detection for Atmospheric Composition Change (NDACC) database is proposed. One essential aspect of the proposed approach is the propagation in parallel of all independent uncertainty components through the data processing chain before they are combined together to form the ozone combined standard uncertainty. The independent uncertainty components contributing to the overall budget include random noise associated with signal detection, uncertainty due to saturation correction, background noise extraction, the absorption cross sections of O3, NO2, SO2, and O2, the molecular extinction cross sections, and the number densities of the air, NO2, and SO2. The expression of the individual uncertainty components and their step-by-step propagation through the ozone differential absorption lidar (DIAL) processing chain are thoroughly estimated. All sources of uncertainty except detection noise imply correlated terms in the vertical dimension, which requires knowledge of the covariance matrix when the lidar signal is vertically filtered. In addition, the covariance terms must be taken into account if the same detection hardware is shared by the lidar receiver channels at the absorbed and non-absorbed wavelengths. The ozone uncertainty budget is presented as much as possible in a generic form (i.e., as a function of instrument performance and wavelength) so that all NDACC ozone DIAL investigators across the network can estimate, for their own instrument and in a straightforward manner, the expected impact of each reviewed uncertainty component. In addition, two actual examples of full uncertainty budget are provided, using nighttime measurements from the tropospheric ozone DIAL located at the Jet Propulsion Laboratory (JPL) Table Mountain Facility, California, and nighttime measurements from the JPL

  2. Cohenite in meteorites: A proposed origin

    USGS Publications Warehouse

    Brett, R.

    1966-01-01

    Cohenite [(Fe, Ni)3C] is found almost exclusively in meteorites containing from 6 to 8 percent nickel (by weight). On the basis of iron-nickel-carbon phase diagrams at 1 atmosphere and of kinetic data, the occurrence of cohenite within this narrow composition range as a low-pressure metastable phase and the nonoccurrence of cohenite in meteorites outside the range 6 to 8 percent nickel can be explained. Cohenite formed in meteorites containing less than 6 to 8 percent nickel decomposed to metal and graphite during cooling; it cannot form in meteorites containing more than about 8 percent. The presence of cohenite in meteorites cannot be used as an indicator of pressure of formation. However, the absence of cohenite in meteorites containing the assemblage, metal plus graphite, requires low pressures during cooling.

  3. Adaptive Load-Balancing Algorithms using Symmetric Broadcast Networks

    NASA Technical Reports Server (NTRS)

    Das, Sajal K.; Harvey, Daniel J.; Biswas, Rupak; Biegel, Bryan A. (Technical Monitor)

    2002-01-01

    In a distributed computing environment, it is important to ensure that the processor workloads are adequately balanced. Among numerous load-balancing algorithms, a unique approach due to Das and Prasad defines a symmetric broadcast network (SBN) that provides a robust communication pattern among the processors in a topology-independent manner. In this paper, we propose and analyze three efficient SBN-based dynamic load-balancing algorithms, and implement them on an SGI Origin2000. A thorough experimental study with Poisson-distributed synthetic loads demonstrates that our algorithms are effective in balancing system load. By optimizing completion time and idle time, the proposed algorithms are shown to compare favorably with several existing approaches.

  4. Improvement of phase unwrapping algorithm based on image segmentation and merging

    NASA Astrophysics Data System (ADS)

    Wang, Huaying; Liu, Feifei; Zhu, Qiaofen

    2013-11-01

    A modified algorithm based on image segmentation and merging is proposed and demonstrated to improve the accuracy of phase unwrapping. The improvements are threefold. First, unequal region segmentation is adopted, which allows the regional information to be reproduced completely and accurately. Second, different phase unwrapping algorithms are applied to regions affected by noise and by undersampling, respectively. Last, to improve the accuracy of the unwrapping results, a weighted-stack method is applied to the overlapping regions that originate from merging the blocks. The proposed algorithm has been verified by simulations and experiments. The results not only validate the accuracy and speed of the improved algorithm in recovering the phase information of the measured object, but also illustrate its importance in Traditional Chinese Medicine Decoction Pieces cell identification.
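
    As a simplified illustration of the segment-unwrap-and-merge idea (a 1-D analogue only; the paper works on 2-D interferograms with region-dependent unwrapping), the following Python sketch unwraps overlapping blocks independently, aligns each new block by a multiple of 2*pi, and blends the overlap with a weighted stack:

        import numpy as np

        def unwrap_blocks(phase, block=256, overlap=64):
            # Unwrap a 1-D wrapped-phase signal block by block, then merge
            # the blocks with a weighted stack over their overlap regions.
            result = np.unwrap(phase[:block])
            start = block - overlap
            while start < len(phase):
                seg = np.unwrap(phase[start:start + block])
                n = min(len(result) - start, len(seg))   # overlap length
                if n > 0:
                    # Align the new segment by the nearest multiple of 2*pi.
                    offset = np.mean(result[start:start + n] - seg[:n])
                    seg = seg + 2 * np.pi * np.round(offset / (2 * np.pi))
                    # Weighted stack: linear blend across the overlap.
                    w = np.linspace(1.0, 0.0, n)
                    result[start:start + n] = (w * result[start:start + n]
                                               + (1 - w) * seg[:n])
                result = np.concatenate([result, seg[n:]])
                start += block - overlap
            return result

        true = np.linspace(0.0, 60.0, 2000)
        wrapped = np.angle(np.exp(1j * true))
        est = unwrap_blocks(wrapped)
        print(np.max(np.abs((est - est[0]) - (true - true[0]))))  # ~0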

  5. Belief network algorithms: A study of performance

    SciTech Connect

    Jitnah, N.

    1996-12-31

    This abstract gives an overview of the work. We present a survey of Belief Network algorithms and propose a domain characterization system to be used as a basis for algorithm comparison and for predicting algorithm performance.

  6. Robust watermarking on copyright protection of digital originals

    NASA Astrophysics Data System (ADS)

    Gu, C.; Hu, X. Y.

    2010-06-01

    The differences between digital vector originals and raster originals are discussed. A new algorithm based on displacing vertices to embed and extract digital watermarks in vector data is then proposed. The results show that the watermark produced by this method is resistant to translation, scaling, rotation, and additive random noise, and, to some extent, to cropping. This paper also modifies the DCT raster-image watermarking algorithm, embedding a bitmap image into target images as the watermark instead of meaningless serial numbers or simple symbols. The embedding and extraction parts of both watermarking systems were implemented in software. Experiments prove that both algorithms are not only imperceptible but also strongly resistant to common attacks, which allows the copyright to be proved more effectively.

  7. Application of Modified Differential Evolution Algorithm to Magnetotelluric and Vertical Electrical Sounding Data

    NASA Astrophysics Data System (ADS)

    Mingolo, Nusharin; Sarakorn, Weerachai

    2016-04-01

    In this research, a Modified Differential Evolution (DE) algorithm is proposed and applied to Magnetotelluric (MT) and Vertical Electrical Sounding (VES) data to reveal a reasonable resistivity structure. The common steps of the DE algorithm, including initialization, mutation, and crossover, are modified by introducing new control parameters and constraints to obtain a well-fitting, physically reasonable resistivity model. The validity and efficiency of the modified DE algorithm are tested on both synthetic and real observed data, and the algorithm is also compared with the well-known OCCAM algorithm on a real MT data set. For the synthetic case, the modified DE algorithm with appropriate control parameters recovers models that fit the original synthetic models well. For the real data case, the resistivity structures revealed by our algorithm are close to those obtained by OCCAM inversion, but our structures delineate the layers more clearly.
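
    For readers unfamiliar with the baseline being modified, the classic DE/rand/1/bin loop is compact enough to sketch in Python (a generic implementation; the paper's additional control parameters and resistivity-model constraints are not reproduced here):

        import numpy as np

        rng = np.random.default_rng(0)

        def differential_evolution(f, bounds, pop=20, F=0.8, CR=0.9, iters=200):
            # Classic DE/rand/1/bin: mutate with a scaled difference vector,
            # recombine with binomial crossover, keep the better candidate.
            lo, hi = np.asarray(bounds, dtype=float).T
            d = len(lo)
            X = lo + rng.random((pop, d)) * (hi - lo)
            fX = np.array([f(x) for x in X])
            for _ in range(iters):
                for i in range(pop):
                    idx = rng.choice([j for j in range(pop) if j != i],
                                     3, replace=False)
                    a, b, c = X[idx]
                    mutant = np.clip(a + F * (b - c), lo, hi)
                    cross = rng.random(d) < CR
                    cross[rng.integers(d)] = True     # keep at least one gene
                    trial = np.where(cross, mutant, X[i])
                    ft = f(trial)
                    if ft <= fX[i]:                   # greedy selection
                        X[i], fX[i] = trial, ft
            return X[np.argmin(fX)], fX.min()

        # Example: minimize the sphere function in 3-D.
        best_x, best_f = differential_evolution(
            lambda x: float(np.sum(x ** 2)), bounds=[(-5, 5)] * 3)
        print(best_x, best_f)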

  8. Blind Alley Aware ACO Routing Algorithm

    NASA Astrophysics Data System (ADS)

    Yoshikawa, Masaya; Otani, Kazuo

    2010-10-01

    The routing problem arises in various engineering fields and has been studied by many researchers. In this paper, we propose a new routing algorithm based on Ant Colony Optimization. The proposed algorithm introduces a tabu-search mechanism to escape blind alleys, and it can therefore find the shortest route even when the map data contain blind alleys. Experiments using map data demonstrate its effectiveness in comparison with the Dijkstra algorithm, the most popular conventional routing algorithm.
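
    A minimal Python sketch of the blind-alley idea (a toy graph and simplified pheromone rules of our own; the paper's heuristic terms and exact update schedule are omitted): each ant treats visited nodes as tabu and backtracks out of dead ends, so a route is still found when the map contains blind alleys.

        import random

        def ant_walk(graph, start, goal, pheromone, rng):
            # One ant builds a path; visited nodes are tabu, and the ant
            # backtracks when it walks into a blind alley (dead end).
            path, visited = [start], {start}
            while path and path[-1] != goal:
                here = path[-1]
                options = [n for n in graph[here] if n not in visited]
                if not options:              # blind alley: back out one step
                    path.pop()
                    continue
                weights = [pheromone[(here, n)] for n in options]
                nxt = rng.choices(options, weights)[0]
                visited.add(nxt)             # tabu: never revisit this node
                path.append(nxt)
            return path or None

        def aco(graph, start, goal, ants=20, iters=50, rho=0.1, seed=1):
            rng = random.Random(seed)
            pheromone = {(u, v): 1.0 for u in graph for v in graph[u]}
            best = None
            for _ in range(iters):
                for _ in range(ants):
                    path = ant_walk(graph, start, goal, pheromone, rng)
                    if path:
                        if best is None or len(path) < len(best):
                            best = path
                        for u, v in zip(path, path[1:]):
                            pheromone[(u, v)] += 1.0 / len(path)
                for e in pheromone:          # evaporation
                    pheromone[e] *= (1.0 - rho)
            return best

        # Toy map with a blind alley at node 'X'.
        g = {'A': ['B', 'X'], 'B': ['A', 'C'], 'C': ['B', 'D'],
             'D': ['C'], 'X': ['A']}
        print(aco(g, 'A', 'D'))              # ['A', 'B', 'C', 'D']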

  9. Texture orientation-based algorithm for detecting infrared maritime targets.

    PubMed

    Wang, Bin; Dong, Lili; Zhao, Ming; Wu, Houde; Xu, Wenhai

    2015-05-20

    Infrared maritime target detection is a key technology for maritime target searching systems. However, in infrared maritime images (IMIs) taken under complicated sea conditions, background clutter such as ocean waves, clouds, or sea fog usually has high intensity that can easily overwhelm the brightness of real targets, which is difficult for traditional target detection algorithms to deal with. To mitigate this problem, this paper proposes a novel target detection algorithm based on texture orientation. The algorithm first extracts suspected targets by analyzing the inter-subband correlation between the horizontal and vertical wavelet subbands of the original IMI on the first scale. Then self-adaptive wavelet threshold denoising and local singularity analysis of the original IMI are combined to further remove false alarms. Experiments show that, compared with traditional algorithms, this algorithm suppresses background clutter much better and achieves better single-frame detection of infrared maritime targets. In addition, to further guarantee accurate target extraction, a pipeline-filtering algorithm is adopted to eliminate residual false alarms. The high practical value and applicability of the proposed strategy are strongly supported by experimental data acquired under different environmental conditions.

  10. Genetic algorithms

    NASA Technical Reports Server (NTRS)

    Wang, Lui; Bayer, Steven E.

    1991-01-01

    Genetic algorithms are mathematical, highly parallel, adaptive search procedures (i.e., problem-solving methods) based loosely on the processes of natural genetics and Darwinian survival of the fittest. Basic genetic algorithm concepts and applications are introduced, and results are presented from a project to develop a software tool that will enable the widespread use of genetic algorithm technology.
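
    Since the abstract introduces only the basic concepts, a textbook GA is easy to sketch (a generic Python illustration, not the software tool described above):

        import random

        rng = random.Random(42)

        def genetic_algorithm(fitness, n_bits=32, pop_size=50,
                              p_mut=0.01, generations=100):
            # Textbook GA: fitness-proportional selection, one-point
            # crossover, and bit-flip mutation on fixed-length bitstrings.
            pop = [[rng.randint(0, 1) for _ in range(n_bits)]
                   for _ in range(pop_size)]
            for _ in range(generations):
                scores = [fitness(ind) for ind in pop]
                nxt = []
                while len(nxt) < pop_size:
                    mom, dad = rng.choices(pop, weights=scores, k=2)
                    cut = rng.randrange(1, n_bits)     # one-point crossover
                    child = mom[:cut] + dad[cut:]
                    child = [b ^ 1 if rng.random() < p_mut else b
                             for b in child]           # bit-flip mutation
                    nxt.append(child)
                pop = nxt
            return max(pop, key=fitness)

        # Example: maximize the number of 1-bits ("one-max").
        best = genetic_algorithm(sum)
        print(sum(best), "of 32 bits set")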

  11. The hierarchical algorithms--theory and applications

    NASA Astrophysics Data System (ADS)

    Su, Zheng-Yao

    Monte Carlo simulations are one of the most important numerical techniques for investigating statistical physical systems. Among these systems, spin models are a typical example, and they also play an essential role in constructing abstract mechanisms for various complex systems. Unfortunately, traditional Monte Carlo algorithms are afflicted with "critical slowing down" near continuous phase transitions, and the efficiency of the Monte Carlo simulation goes to zero as the size of the lattice is increased. To combat critical slowing down, a very different type of collective-mode algorithm, in contrast to the traditional single-spin-flip mode, was proposed by Swendsen and Wang in 1987 for Potts spin models. Since then, there has been an explosion of work attempting to understand, improve, or generalize it. In these so-called "cluster" algorithms, clusters of spins are regarded as one template and are updated at each step of the Monte Carlo procedure. In implementing these algorithms, the cluster labeling is a major time-consuming bottleneck; it is also isomorphic to the problem of computing the connected components of an undirected graph, which appears in other application areas such as pattern recognition. A number of cluster labeling algorithms for sequential computers have long existed. However, the dynamic, irregular nature of clusters complicates the task of finding good parallel algorithms, and this is particularly true on SIMD (single-instruction-multiple-data) machines. Our design of the Hierarchical Cluster Labeling Algorithm aims at alleviating this problem by building a hierarchical structure on the problem domain and by incorporating local and nonlocal communication schemes. We present an estimate of the computational complexity of cluster labeling and prove the key features of this algorithm (such as lower computational complexity, data locality, and easy implementation) compared with previously known methods. In particular, this algorithm can be viewed as a generalized
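
    The cluster-labeling bottleneck described above is exactly the connected-components problem; a sequential union-find sketch in Python (Hoshen-Kopelman style, shown for contrast with the hierarchical parallel algorithm, which is not reproduced here) makes the task concrete:

        import numpy as np

        def label_clusters(spins):
            # Label connected clusters of equal spins on a 2-D lattice
            # using union-find with path compression.
            rows, cols = spins.shape
            parent = list(range(rows * cols))

            def find(i):
                while parent[i] != i:
                    parent[i] = parent[parent[i]]    # path compression
                    i = parent[i]
                return i

            def union(i, j):
                parent[find(i)] = find(j)

            for r in range(rows):
                for c in range(cols):
                    if r + 1 < rows and spins[r, c] == spins[r + 1, c]:
                        union(r * cols + c, (r + 1) * cols + c)
                    if c + 1 < cols and spins[r, c] == spins[r, c + 1]:
                        union(r * cols + c, r * cols + c + 1)
            return np.array([find(i) for i in range(rows * cols)]
                            ).reshape(rows, cols)

        spins = np.random.default_rng(0).integers(0, 2, size=(6, 6))
        print(label_clusters(spins))    # one root label per spin cluster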

  12. Study of improved ray tracing parallel algorithm for CGH of 3D objects on GPU

    NASA Astrophysics Data System (ADS)

    Cong, Bin; Jiang, Xiaoyu; Yao, Jun; Zhao, Kai

    2014-11-01

    An improved parallel algorithm for computing holograms of three-dimensional objects is presented. Based on the physical characteristics and mathematical properties of the original ray tracing algorithm for computer-generated holograms (CGH), and using transform approximation and numerical analysis methods, we extract the parts of the ray tracing algorithm that are amenable to parallelization and implement them on a graphics processing unit (GPU). Meanwhile, through proper design of the parallel numerical procedure, the two-dimensional slices of the three-dimensional object are processed in parallel with CUDA. Based on the experiments, an effective method for dealing with the occlusion problem in ray tracing is proposed, as well as for generating holograms of 3D objects using their additive property. Our results indicate that the improved algorithm effectively shortens the computing time. Owing to the different sizes of the spatial object points and hologram pixels, the speed increases by a factor of 20 to 70 compared with the original ray tracing algorithm.

  13. Digital watermarking algorithm research of color images based on quaternion Fourier transform

    NASA Astrophysics Data System (ADS)

    An, Mali; Wang, Weijiang; Zhao, Zhen

    2013-10-01

    A watermarking algorithm for color images based on the quaternion Fourier transform (QFFT) and an improved quantization index modulation (QIM) algorithm is proposed in this paper. The original image is transformed by the QFFT, the watermark image is compressed and quantization-coded, and the processed watermark is then embedded into components of the transformed original image. The scheme achieves embedding and blind extraction of the watermark image. The experimental results show that the watermarking algorithm based on the improved QIM algorithm with distortion compensation achieves a good tradeoff between invisibility and robustness, and better robustness against Gaussian noise, salt-and-pepper noise, JPEG compression, cropping, filtering, and image enhancement attacks than the traditional QIM algorithm.
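
    A minimal Python sketch of plain QIM embedding and extraction (illustrative parameter values; the paper's improved variant adds distortion compensation, and its quaternion transform step is omitted):

        import numpy as np

        def qim_embed(coeffs, bits, delta=8.0):
            # Quantize each coefficient onto the lattice shifted by
            # +delta/4 for bit 1 and -delta/4 for bit 0.
            d = np.where(np.asarray(bits) == 1, delta / 4.0, -delta / 4.0)
            return delta * np.round((coeffs - d) / delta) + d

        def qim_extract(coeffs, delta=8.0):
            # Pick the bit whose shifted lattice is nearer to each value.
            e1 = np.abs(coeffs - qim_embed(coeffs, np.ones(coeffs.size), delta))
            e0 = np.abs(coeffs - qim_embed(coeffs, np.zeros(coeffs.size), delta))
            return (e1 < e0).astype(int)

        rng = np.random.default_rng(0)
        host = rng.normal(0.0, 50.0, 16)            # host transform coefficients
        bits = rng.integers(0, 2, 16)
        marked = qim_embed(host, bits)
        noisy = marked + rng.normal(0.0, 0.5, 16)   # mild attack
        print(np.array_equal(qim_extract(noisy), bits))  # True: noise << delta/4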

  14. Semioptimal practicable algorithmic cooling

    NASA Astrophysics Data System (ADS)

    Elias, Yuval; Mor, Tal; Weinstein, Yossi

    2011-04-01

    Algorithmic cooling (AC) of spins applies entropy manipulation algorithms in open spin systems in order to cool spins far beyond Shannon’s entropy bound. Algorithmic cooling of nuclear spins was demonstrated experimentally and may contribute to nuclear magnetic resonance spectroscopy. Several cooling algorithms were suggested in recent years, including practicable algorithmic cooling (PAC) and exhaustive AC. Practicable algorithms have simple implementations, yet their level of cooling is far from optimal; exhaustive algorithms, on the other hand, cool much better, and some even reach (asymptotically) an optimal level of cooling, but they are not practicable. We introduce here semioptimal practicable AC (SOPAC), wherein a few cycles (typically two to six) are performed at each recursive level. Two classes of SOPAC algorithms are proposed and analyzed. Both attain cooling levels significantly better than PAC and are much more efficient than the exhaustive algorithms. These algorithms are shown to bridge the gap between PAC and exhaustive AC. In addition, we calculated the number of spins required by SOPAC in order to purify qubits for quantum computation. As few as 12 and 7 spins are required (in an ideal scenario) to yield a mildly pure spin (60% polarized) from initial polarizations of 1% and 10%, respectively. In the latter case, about five more spins are sufficient to produce a highly pure spin (99.99% polarized), which could be relevant for fault-tolerant quantum computing.

  15. Enhancement of the ill-conditioned original recordings using novel ICA technique

    NASA Astrophysics Data System (ADS)

    Naik, Ganesh R.

    2012-07-01

    The independent component analysis (ICA) method proposed in this study uses the FastICA algorithm to improve the quality of original recordings and can serve as a valuable pre-processing step in signal processing pipelines. First, the ill-conditioned original audio recordings are separated using ICA, and they are then reconstructed using a modified un-mixing matrix. The simulation results show a substantial improvement of the original signal after reconstruction. The new method performs well, achieving higher accuracy than existing methods in terms of the variance of the gain matrix. The proposed method has potential applications in audio and biosignal processing.
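
    A one-unit FastICA iteration is short enough to sketch in Python (a generic textbook version operating on whitened data, not the authors' modified un-mixing-matrix reconstruction):

        import numpy as np

        def fastica_one_unit(X, iters=200, tol=1e-8):
            # One-unit FastICA with the tanh nonlinearity on whitened data X
            # (rows = channels): w <- E[x g(w.x)] - E[g'(w.x)] w, renormalized.
            rng = np.random.default_rng(1)
            w = rng.normal(size=X.shape[0])
            w /= np.linalg.norm(w)
            for _ in range(iters):
                wx = w @ X
                g = np.tanh(wx)
                g_prime = 1.0 - g ** 2
                w_new = (X * g).mean(axis=1) - g_prime.mean() * w
                w_new /= np.linalg.norm(w_new)
                if abs(abs(w_new @ w) - 1.0) < tol:   # converged (up to sign)
                    return w_new
                w = w_new
            return w

        # Demo: un-mix one component from a 2-channel mixture.
        t = np.linspace(0.0, 8.0, 4000)
        S = np.vstack([np.sin(7.0 * t), np.sign(np.sin(3.0 * t))])
        X = np.array([[1.0, 0.6], [0.4, 1.0]]) @ S    # mixed "recordings"
        X = X - X.mean(axis=1, keepdims=True)
        d, E = np.linalg.eigh(np.cov(X))
        Xw = np.diag(d ** -0.5) @ E.T @ X             # whitening
        print(fastica_one_unit(Xw))                   # one un-mixing row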

  16. A distributed Canny edge detector: algorithm and FPGA implementation.

    PubMed

    Xu, Qian; Varadarajan, Srenivas; Chakrabarti, Chaitali; Karam, Lina J

    2014-07-01

    The Canny edge detector is one of the most widely used edge detection algorithms due to its superior performance. Unfortunately, not only is it computationally more intensive as compared with other edge detection algorithms, but it also has a higher latency because it is based on frame-level statistics. In this paper, we propose a mechanism to implement the Canny algorithm at the block level without any loss in edge detection performance compared with the original frame-level Canny algorithm. Directly applying the original Canny algorithm at the block level leads to excessive edges in smooth regions and to loss of significant edges in high-detail regions, since the original Canny computes the high and low thresholds based on frame-level statistics. To solve this problem, we present a distributed Canny edge detection algorithm that adaptively computes the edge detection thresholds based on the block type and the local distribution of the gradients in the image block. In addition, the new algorithm uses a nonuniform gradient magnitude histogram to compute block-based hysteresis thresholds. The resulting block-based algorithm has a significantly reduced latency and can be easily integrated with other block-based image codecs. It is capable of supporting fast edge detection of images and videos with high resolutions, including full-HD, since the latency is now a function of the block size instead of the frame size. In addition, quantitative conformance evaluations and subjective tests show that the edge detection performance of the proposed algorithm is better than that of the original frame-based algorithm, especially when noise is present in the images. Finally, this algorithm is implemented using a 32 computing engine architecture and is synthesized on the Xilinx Virtex-5 FPGA. The synthesized architecture takes only 0.721 ms (including the SRAM READ/WRITE time and the computation time) to detect edges of 512 × 512 images in the USC SIPI database when clocked at 100

  17. Reconstruction-plane-dependent weighted FDK algorithm for cone beam volumetric CT

    NASA Astrophysics Data System (ADS)

    Tang, Xiangyang; Hsieh, Jiang

    2005-04-01

    The original FDK algorithm has been extensively employed in medical and industrial imaging applications. With an increased cone angle, cone beam (CB) artifacts in images reconstructed by the original FDK algorithm deteriorate, since the circular trajectory does not satisfy the so-called data sufficiency condition (DSC). A few "circular plus" trajectories have been proposed in the past to reduce CB artifacts by meeting the DSC. However, the circular trajectory has distinct advantages over other scanning trajectories in practical CT imaging, such as cardiac, vascular and perfusion applications. In addition to the DSC, another insight into the CB artifacts of the original FDK algorithm is the inconsistency between conjugate rays that are 180° apart in view angle. The inconsistency between conjugate rays is pixel dependent, i.e., it varies dramatically over the pixels within the image plane to be reconstructed. However, the original FDK algorithm treats all conjugate rays equally, resulting in CB artifacts that could be avoided if an appropriate view weighting strategy were exercised. In this paper, a modified FDK algorithm is proposed, along with an experimental evaluation and verification in which a helical body phantom and a humanoid head phantom scanned by a volumetric CT (64 x 0.625 mm) are utilized. Without trajectories supplemental to the circular trajectory, the modified FDK algorithm applies reconstruction-plane-dependent view weighting to the projection data before 3D backprojection, which reduces the inconsistency between conjugate rays by suppressing the contribution of the conjugate ray with the larger cone angle. Both computer-simulated and real phantom studies show that, up to a moderate cone angle, the CB artifacts can be substantially suppressed by the modified FDK algorithm, while advantages of the original FDK algorithm, such as the filtered backprojection algorithm structure, 1D ramp filtering, and data manipulation efficiency, can be

  18. Automatic design of decision-tree algorithms with evolutionary algorithms.

    PubMed

    Barros, Rodrigo C; Basgalupp, Márcio P; de Carvalho, André C P L F; Freitas, Alex A

    2013-01-01

    This study reports the empirical analysis of a hyper-heuristic evolutionary algorithm that is capable of automatically designing top-down decision-tree induction algorithms. Top-down decision-tree algorithms are of great importance, considering their ability to provide an intuitive and accurate knowledge representation for classification problems. The automatic design of these algorithms seems timely, given the large literature accumulated over more than 40 years of research in the manual design of decision-tree induction algorithms. The proposed hyper-heuristic evolutionary algorithm, HEAD-DT, is extensively tested using 20 public UCI datasets and 10 microarray gene expression datasets. The algorithms automatically designed by HEAD-DT are compared with traditional decision-tree induction algorithms, such as C4.5 and CART. Experimental results show that HEAD-DT is capable of generating algorithms which are significantly more accurate than C4.5 and CART.

  19. Gradient maintenance: A new algorithm for fast online replanning

    SciTech Connect

    Ahunbay, Ergun E. Li, X. Allen

    2015-06-15

    Purpose: Clinical use of online adaptive replanning has been hampered by the impractically long time required to delineate volumes based on the image of the day. The authors propose a new replanning algorithm, named gradient maintenance (GM), which does not require the delineation of organs at risk (OARs), and can enhance automation, drastically reducing planning time and improving the consistency and throughput of online replanning. Methods: The proposed GM algorithm is based on the hypothesis that if the dose gradient toward each OAR in the daily anatomy is maintained the same as that in the original plan, the intended quality of the original plan will be preserved in the adaptive plan. The algorithm requires a series of partial concentric rings (PCRs) to be automatically generated around the target toward each OAR on the planning and daily images. The PCRs are used in the daily optimization objective function. The PCR dose constraints are generated with dose–volume data extracted from the original plan. To demonstrate this idea, GM plans generated using daily images acquired using an in-room CT were compared to regular optimization and image guided radiation therapy repositioning plans for representative prostate and pancreatic cancer cases. Results: Adaptive replanning using the GM algorithm, requiring only the target contour from the CT of the day, can be completed within 5 min without using high-power hardware. The obtained adaptive plans were almost as good as the regular optimization plans and were better than the repositioning plans for the cases studied. Conclusions: The newly proposed GM replanning algorithm, requiring only target delineation, not full delineation of OARs, substantially increased planning speed for online adaptive replanning. The preliminary results indicate that the GM algorithm may be a solution to improve the ability for automation and may be especially suitable for sites with small-to-medium size targets surrounded by

  1. Error estimation for the linearized auto-localization algorithm.

    PubMed

    Guevara, Jorge; Jiménez, Antonio R; Prieto, Jose Carlos; Seco, Fernando

    2012-01-01

    The Linearized Auto-Localization (LAL) algorithm estimates the position of beacon nodes in Local Positioning Systems (LPSs), using only the distance measurements to a mobile node whose position is also unknown. The LAL algorithm calculates the inter-beacon distances, used for the estimation of the beacons' positions, from the linearized trilateration equations. In this paper we propose a method to estimate the propagation of the errors of the inter-beacon distances obtained with the LAL algorithm, based on a first order Taylor approximation of the equations. Since the method depends on such approximation, a confidence parameter τ is defined to measure the reliability of the estimated error. Field evaluations showed that by applying this information to an improved weighted-based auto-localization algorithm (WLAL), the standard deviation of the inter-beacon distances can be improved by more than 30% on average with respect to the original LAL method.
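
    The first-order propagation idea can be sketched generically in Python (a numerical-Jacobian illustration on a toy distance function, not the LAL trilateration equations themselves): build the Jacobian J of the output with respect to the inputs, then propagate Sigma_y = J Sigma_x J^T.

        import numpy as np

        def propagate_covariance(f, x, cov_x, eps=1e-6):
            # First-order (Taylor) uncertainty propagation: finite-difference
            # Jacobian of f, then Sigma_y = J Sigma_x J^T.
            x = np.asarray(x, dtype=float)
            y0 = np.atleast_1d(f(x))
            J = np.zeros((y0.size, x.size))
            for j in range(x.size):
                dx = np.zeros_like(x)
                dx[j] = eps
                J[:, j] = (np.atleast_1d(f(x + dx)) - y0) / eps
            return J @ cov_x @ J.T

        # Toy example: distance between two beacons from noisy coordinates.
        def dist(p):                         # p = (x1, y1, x2, y2)
            return np.hypot(p[2] - p[0], p[3] - p[1])

        cov = np.diag([0.01] * 4)            # independent 0.1 m std devs
        cov_d = propagate_covariance(dist, [0.0, 0.0, 3.0, 4.0], cov)
        print(np.sqrt(cov_d[0, 0]))          # ~0.14 m standard deviation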

  2. A biconjugate gradient type algorithm on massively parallel architectures

    NASA Technical Reports Server (NTRS)

    Freund, Roland W.; Hochbruck, Marlis

    1991-01-01

    The biconjugate gradient (BCG) method is the natural generalization of the classical conjugate gradient algorithm for Hermitian positive definite matrices to general non-Hermitian linear systems. Unfortunately, the original BCG algorithm is susceptible to possible breakdowns and numerical instabilities. Recently, Freund and Nachtigal have proposed a novel BCG type approach, the quasi-minimal residual method (QMR), which overcomes the problems of BCG. Here, an implementation is presented of QMR based on an s-step version of the nonsymmetric look-ahead Lanczos algorithm. The main feature of the s-step Lanczos algorithm is that, in general, all inner products, except for one, can be computed in parallel at the end of each block; this is unlike the other standard Lanczos process where inner products are generated sequentially. The resulting implementation of QMR is particularly attractive on massively parallel SIMD architectures, such as the Connection Machine.

  3. Stochastic Leader Gravitational Search Algorithm for Enhanced Adaptive Beamforming Technique

    PubMed Central

    Darzi, Soodabeh; Islam, Mohammad Tariqul; Tiong, Sieh Kiong; Kibria, Salehin; Singh, Mandeep

    2015-01-01

    In this paper, a stochastic leader gravitational search algorithm (SL-GSA) based on randomized k is proposed. Standard GSA (SGSA) utilizes the best agents without any randomization, so it is more prone to converging on suboptimal results. Initially, the new approach randomly chooses k agents from the set of all agents to improve the global search ability. Gradually, the set of agents is reduced by eliminating the agents with the poorest performance to allow rapid convergence. The performance of the SL-GSA was analyzed on six well-known benchmark functions, and the results are compared with SGSA and some of its variants. Furthermore, the SL-GSA is applied to the minimum variance distortionless response (MVDR) beamforming technique to ensure compatibility with real-world optimization problems. The proposed algorithm demonstrates a superior convergence rate and quality of solution for both real-world problems and benchmark functions compared to the original algorithm and other recent variants of SGSA. PMID:26552032

  4. Kernel simplex growing algorithm for hyperspectral endmember extraction

    NASA Astrophysics Data System (ADS)

    Zhao, Liaoying; Zheng, Junpeng; Li, Xiaorun; Wang, Lijiao

    2014-01-01

    In order to effectively extract endmembers from hyperspectral imagery, where the linear mixing model may not be appropriate due to multiple scattering effects, this paper extends the simplex growing algorithm (SGA) to a kernel version. A new simplex volume formula without dimension reduction is used in the SGA to form a new simplex growing algorithm (NSGA). The original data are nonlinearly mapped into a high-dimensional space where the scattering effects can be ignored. To avoid determining the complex nonlinear mapping explicitly, a kernel function is used to extend the NSGA to the kernel NSGA (KNSGA). Experimental results on simulated and real data prove that the proposed KNSGA approach outperforms the SGA and NSGA.

  5. Efficient algorithms for robust recovery of images from compressed data.

    PubMed

    Pham, Duc-Son; Venkatesh, Svetha

    2013-12-01

    Compressed sensing (CS) is an important theory for sub-Nyquist sampling and recovery of compressible data. Recently, it has been extended to cope with the case where corruption to the CS data is modeled as impulsive noise. The new formulation, termed robust CS, combines robust statistics and CS into a single framework to suppress outliers in the CS recovery. To solve the newly formulated robust CS problem, a scheme that iteratively solves a number of CS problems--the solutions from which provably converge to the true robust CS solution--has been suggested. This scheme is, however, rather inefficient, as it has to use existing CS solvers as a proxy. To overcome the limitations of the original robust CS algorithm, we propose in this paper more computationally efficient algorithms that follow the latest advances in large-scale convex optimization for nonsmooth regularization. Furthermore, we also extend the robust CS formulation to various settings, including additional affine constraints, an l1-norm loss function, mixed-norm regularization, and multitasking, so as to further improve robust CS, and we derive simple but effective algorithms to solve these extensions. We demonstrate that the new algorithms provide a much better computational advantage over the original robust CS method and effectively solve more sophisticated extensions that the original methods simply cannot handle.

  6. Spectral methods and sum acceleration algorithms. Final report

    SciTech Connect

    Boyd, J.

    1995-03-01

    The principal investigator pursued his investigation of numerical algorithms during the period of the grant. The attached list of publications is too lengthy to describe in detail. However, the author calls attention to the four articles on sequence acceleration and the fourteen on spectral methods, which fulfill the goals of the original proposal. He also continued his research on nonlinear waves and wrote a dozen papers on this topic as well.

  7. Random sampler M-estimator algorithm with sequential probability ratio test for robust function approximation via feed-forward neural networks.

    PubMed

    El-Melegy, Moumen T

    2013-07-01

    This paper addresses the problem of fitting a functional model to data corrupted with outliers using a multilayered feed-forward neural network. Although it is of high importance in practical applications, this problem has not received careful attention from the neural network research community. One recent approach to solving this problem is to use a neural network training algorithm based on the random sample consensus (RANSAC) framework. This paper proposes a new algorithm that offers two enhancements over the original RANSAC algorithm. The first one improves the algorithm accuracy and robustness by employing an M-estimator cost function to decide on the best estimated model from the randomly selected samples. The other one improves the time performance of the algorithm by utilizing a statistical pretest based on Wald's sequential probability ratio test. The proposed algorithm is successfully evaluated on synthetic and real data, contaminated with varying degrees of outliers, and compared with existing neural network training algorithms.
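
    The flavor of the first enhancement can be sketched on a toy line-fitting problem (generic Python, not the neural-network training algorithm itself): random minimal samples are scored by a Huber M-estimator cost instead of a plain inlier count.

        import numpy as np

        rng = np.random.default_rng(0)

        def huber(r, k=1.0):
            # Huber M-estimator cost: quadratic near zero, linear in the
            # tails, so outliers have bounded influence on the score.
            a = np.abs(r)
            return np.where(a <= k, 0.5 * r ** 2, k * (a - 0.5 * k))

        def ransac_m_estimator(x, y, trials=200):
            # RANSAC-style line fitting where each random 2-point model is
            # scored by its total Huber cost over all data points.
            best_cost, best = np.inf, None
            for _ in range(trials):
                i, j = rng.choice(len(x), 2, replace=False)
                if x[i] == x[j]:
                    continue
                a = (y[j] - y[i]) / (x[j] - x[i])
                b = y[i] - a * x[i]
                cost = huber(y - (a * x + b)).sum()
                if cost < best_cost:
                    best_cost, best = cost, (a, b)
            return best

        x = np.linspace(0.0, 10.0, 100)
        y = 2.0 * x + 1.0 + rng.normal(0.0, 0.1, 100)
        y[::10] += 20.0                       # gross outliers
        print(ransac_m_estimator(x, y))       # close to (2.0, 1.0)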

  8. An efficient algorithm for function optimization: modified stem cells algorithm

    NASA Astrophysics Data System (ADS)

    Taherdangkoo, Mohammad; Paziresh, Mahsa; Yazdi, Mehran; Bagheri, Mohammad

    2013-03-01

    In this paper, we propose an optimization algorithm based on the intelligent behavior of stem cell swarms in reproduction and self-organization. Optimization algorithms such as the Genetic Algorithm (GA), Particle Swarm Optimization (PSO) algorithm, Ant Colony Optimization (ACO) algorithm and Artificial Bee Colony (ABC) algorithm can give near-optimal solutions to linear and non-linear problems in many applications; however, in some cases, they can become trapped in local optima. The Stem Cells Algorithm (SCA) is an optimization algorithm inspired by the natural behavior of stem cells in evolving themselves into new and improved cells. The SCA successfully avoids the local optima problem. In this paper, we have made small changes in the implementation of this algorithm to obtain improved performance over previous versions. Using a series of benchmark functions, we assess the performance of the proposed algorithm and compare it with that of the other aforementioned optimization algorithms. The obtained results prove the superiority of the Modified Stem Cells Algorithm (MSCA).

  9. Simple Common Plane contact algorithm for explicit FE/FD methods

    SciTech Connect

    Vorobiev, O

    2006-12-18

    The common-plane (CP) algorithm is widely used in the Discrete Element Method (DEM) to model contact forces between interacting particles or blocks. A new simple contact algorithm, similar to the CP algorithm, is proposed to model contacts in FE/FD methods. The CP is defined as a plane separating interacting faces of the FE/FD mesh, instead of the blocks or particles used in the original CP method. The new method does not require iterations, even for very stiff contacts. It is very robust and easy to implement in both 2D and 3D parallel codes.

  10. Simple Common Plane contact detection algorithm for FE/FD methods

    SciTech Connect

    Vorobiev, O

    2006-07-19

    The common-plane (CP) algorithm is widely used in the Discrete Element Method (DEM) to model contact forces between interacting particles or blocks. A new simple contact detection algorithm, similar to the CP algorithm, is proposed to model contacts in FE/FD methods. The CP is defined as a plane separating interacting faces of the FE/FD mesh, instead of the blocks or particles of the original CP method. The method does not require iterations. It is very robust and easy to implement in both the 2D and 3D cases.

  11. A Hybrid Shortest Path Algorithm for Navigation System

    NASA Astrophysics Data System (ADS)

    Cho, Hsun-Jung; Lan, Chien-Lun

    2007-12-01

    Combined with Geographic Information Systems (GIS) and the Global Positioning System (GPS), vehicle navigation systems have become quite popular products in daily life. A key component of a navigation system is its shortest path algorithm. Navigation in the real world must deal with networks consisting of tens of thousands of nodes and links, or even more. Given the limited computational capability of vehicle navigation equipment, it is difficult to satisfy the real-time response requirement that users expect. Hence, this study focuses on shortest path algorithms that enhance computation speed with less memory. Several well-known algorithms, such as Dijkstra, A*, and hierarchical concepts, were integrated to build hybrid algorithms that reduce the search space and improve the search speed. Numerical examples were conducted on the Taiwan highway network, which consists of more than four hundred thousand links and nearly three hundred thousand nodes. This real network was divided into two connected sub-networks (layers): the upper layer is constructed from freeways and expressways, the lower layer from local roads. Test origin-destination pairs were chosen randomly and divided into three distance categories: short, medium, and long. Outcomes are evaluated by actual length and travel time. The numerical examples reveal that the hybrid algorithm proposed by this research can be tens of thousands of times faster than the traditional Dijkstra algorithm, and its memory requirement is also much smaller. These results show that the proposed algorithm would be advantageous in vehicle navigation systems.
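
    One ingredient of such hybrids, A* search (Dijkstra plus an admissible straight-line heuristic), can be sketched in Python (toy graph and coordinates of our own):

        import heapq
        import math

        def a_star(graph, coords, start, goal):
            # A* search: Dijkstra's algorithm guided by a straight-line
            # distance heuristic, which prunes the search space.
            def h(n):
                return math.dist(coords[n], coords[goal])
            dist, prev = {start: 0.0}, {}
            frontier = [(h(start), start)]
            visited = set()
            while frontier:
                _, u = heapq.heappop(frontier)
                if u == goal:
                    path = [u]
                    while u in prev:
                        u = prev[u]
                        path.append(u)
                    return dist[goal], path[::-1]
                if u in visited:
                    continue
                visited.add(u)
                for v, w in graph[u]:
                    nd = dist[u] + w
                    if nd < dist.get(v, math.inf):
                        dist[v], prev[v] = nd, u
                        heapq.heappush(frontier, (nd + h(v), v))
            return math.inf, None

        coords = {'A': (0, 0), 'B': (1, 0), 'C': (1, 1), 'D': (2, 1)}
        graph = {'A': [('B', 1.0)], 'B': [('C', 1.0), ('D', 1.5)],
                 'C': [('D', 1.0)], 'D': []}
        print(a_star(graph, coords, 'A', 'D'))   # (2.5, ['A', 'B', 'D'])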

  12. Localized Ambient Solidity Separation Algorithm Based Computer User Segmentation

    PubMed Central

    Sun, Xiao; Zhang, Tongda; Chai, Yueting; Liu, Yi

    2015-01-01

    Most popular clustering methods make strong assumptions about the dataset. For example, k-means implicitly assumes that all clusters come from spherical Gaussian distributions with different means but the same covariance. However, when dealing with datasets that have diverse distribution shapes or high dimensionality, these assumptions may no longer be valid. To overcome this weakness, we propose a new clustering algorithm named the localized ambient solidity separation (LASS) algorithm, which uses a new isolation criterion called centroid distance. Compared with other density-based isolation criteria, the proposed centroid distance isolation criterion addresses the problems caused by high dimensionality and varying density. An experiment on a designed two-dimensional benchmark dataset shows that the proposed LASS algorithm not only inherits the advantage of the original dissimilarity-increments clustering method in separating naturally isolated clusters but also can identify clusters that are adjacent, overlapping, or embedded in background noise. Finally, we compare the LASS algorithm with the dissimilarity-increments clustering method on a massive computer-user dataset of over two million records containing demographic and behavioral information. The results show that the LASS algorithm works extremely well on this dataset and can extract more knowledge from it. PMID:26221133

  13. Original Misunderstanding

    ERIC Educational Resources Information Center

    Holtzman, Alexander

    2009-01-01

    Humorist Josh Billings quipped, "About the most originality that any writer can hope to achieve honestly is to steal with good judgment." Billings was harsh in his view of originality, but his critique reveals a tension faced by students every time they write a history paper. Research is the essence of any history paper. Especially in high school,…

  14. Deconvolution of interferometric data using interior point iterative algorithms

    NASA Astrophysics Data System (ADS)

    Theys, C.; Lantéri, H.; Aime, C.

    2016-09-01

    We address the problem of deconvolution of astronomical images that could be obtained with future large interferometers in space. The presentation is made in two complementary parts. The first part gives an introduction to image deconvolution with linear and nonlinear algorithms. The emphasis is on nonlinear iterative algorithms that satisfy the constraints of non-negativity and constant flux. The Richardson-Lucy algorithm appears there as a special case for photon counting conditions. More generally, the algorithm published recently by Lanteri et al. (2015) is based on scale-invariant divergences without any assumption on the statistical model of the data. The two proposed algorithms are interior-point algorithms, the latter being more efficient in terms of computation speed. These algorithms are applied to the deconvolution of simulated images corresponding to an interferometric system of 16 diluted telescopes in space. Two non-redundant configurations, one disposed around a circle and the other on a hexagonal lattice, are compared for their effectiveness on a simple astronomical object. The comparison is made in the direct and Fourier spaces. Raw "dirty" images have many artifacts due to replicas of the original object. Linear methods cannot remove these replicas, while iterative methods clearly show their efficacy in these examples.
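
    The Richardson-Lucy special case mentioned above is compact enough to sketch in Python (a minimal version for a known, normalized PSF; assumes scipy is available): the multiplicative update keeps the estimate non-negative and approximately preserves the total flux.

        import numpy as np
        from scipy.signal import fftconvolve

        def richardson_lucy(image, psf, iters=50):
            # Richardson-Lucy deconvolution: a multiplicative, interior-point
            # style iteration that stays non-negative by construction.
            estimate = np.full_like(image, image.mean())
            psf_flip = psf[::-1, ::-1]            # adjoint of the convolution
            for _ in range(iters):
                blurred = fftconvolve(estimate, psf, mode='same')
                ratio = image / np.maximum(blurred, 1e-12)
                estimate *= fftconvolve(ratio, psf_flip, mode='same')
            return estimate

        # Toy run: blur a point source and deconvolve it back.
        truth = np.zeros((64, 64)); truth[32, 32] = 100.0
        yy, xx = np.mgrid[-7:8, -7:8]
        psf = np.exp(-(xx ** 2 + yy ** 2) / 8.0); psf /= psf.sum()
        data = fftconvolve(truth, psf, mode='same')
        restored = richardson_lucy(data, psf)
        print(truth.sum(), round(restored.sum(), 1))  # flux ~ preserved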

  15. Comparison and improvement of algorithms for computing minimal cut sets

    PubMed Central

    2013-01-01

    Background Constrained minimal cut sets (cMCSs) have recently been introduced as a framework to enumerate minimal genetic intervention strategies for targeted optimization of metabolic networks. Two different algorithmic schemes (adapted Berge algorithm and binary integer programming) have been proposed to compute cMCSs from elementary modes. However, in their original formulation both algorithms are not fully comparable. Results Here we show that by a small extension to the integer program both methods become equivalent. Furthermore, based on well-known preprocessing procedures for integer programming we present efficient preprocessing steps which can be used for both algorithms. We then benchmark the numerical performance of the algorithms in several realistic medium-scale metabolic models. The benchmark calculations reveal (i) that these preprocessing steps can lead to an enormous speed-up under both algorithms, and (ii) that the adapted Berge algorithm outperforms the binary integer approach. Conclusions Generally, both of our new implementations are by at least one order of magnitude faster than other currently available implementations. PMID:24191903

  16. A Modified MinMax k-Means Algorithm Based on PSO

    PubMed Central

    2016-01-01

    The MinMax k-means algorithm is widely used to mitigate the effect of bad initialization by minimizing the maximum intra-cluster error. Two parameters, the exponent parameter and the memory parameter, are involved in its execution. Since different parameter values yield different clustering errors, choosing appropriate parameters is crucial. The original algorithm provides a practical framework that extends MinMax k-means to adapt the exponent parameter to the data set automatically. It has been believed that once the maximum exponent parameter is set, the program reaches the lowest intra-cluster errors. However, our experiments show that this is not always correct. In this paper, we modify the MinMax k-means algorithm with PSO to determine the parameter values that allow the algorithm to attain the lowest clustering errors. The proposed clustering method is tested on several popular data sets in different initial situations and is compared to the k-means algorithm and the original MinMax k-means algorithm. The experimental results indicate that our proposed algorithm reaches the lowest clustering errors automatically. PMID:27656201

  1. An analysis dictionary learning algorithm under a noisy data model with orthogonality constraint.

    PubMed

    Zhang, Ye; Yu, Tenglong; Wang, Wenwu

    2014-01-01

    Two common problems are often encountered in analysis dictionary learning (ADL) algorithms. The first one is that the original clean signals for learning the dictionary are assumed to be known, which otherwise need to be estimated from noisy measurements. This, however, renders a computationally slow optimization process and potentially unreliable estimation (if the noise level is high), as represented by the Analysis K-SVD (AK-SVD) algorithm. The other problem is the trivial solution to the dictionary, for example, the null dictionary matrix that may be given by a dictionary learning algorithm, as discussed in the learning overcomplete sparsifying transform (LOST) algorithm. Here we propose a novel optimization model and an iterative algorithm to learn the analysis dictionary, where we directly employ the observed data to compute the approximate analysis sparse representation of the original signals (leading to a fast optimization procedure) and enforce an orthogonality constraint on the optimization criterion to avoid the trivial solutions. Experiments demonstrate the competitive performance of the proposed algorithm as compared with three baselines, namely, the AK-SVD, LOST, and NAAOLA algorithms.

  2. Dual key speech encryption algorithm based underdetermined BSS.

    PubMed

    Zhao, Huan; He, Shaofang; Chen, Zuo; Zhang, Xixiang

    2014-01-01

    When the number of mixed signals is less than the number of source signals, underdetermined blind source separation (BSS) is a significantly difficult problem. Given the large volume of data in speech communications and the requirement for real-time processing, we exploit the intractability of the underdetermined BSS problem to present a dual-key speech encryption method. The original speech is mixed with dual key signals, which consist of random key signals (a one-time pad) generated from a secret seed and chaotic signals generated by a chaotic system. In the decryption process, approximate calculation is used to recover the original speech signals. The proposed speech encryption algorithm can resist traditional attacks against the encryption system, and owing to the approximate calculation, decryption becomes faster and more accurate. It is demonstrated that the proposed method has a high level of security and can recover the original signals quickly and efficiently while maintaining excellent audio quality. PMID:24955430

  3. Algorithmic height compression of unordered trees.

    PubMed

    Ben-Naoum, Farah; Godin, Christophe

    2016-01-21

    By nature, tree structures frequently present similarities between their sub-parts. Making use of this redundancy, different types of tree compression techniques have been designed in the literature to reduce the complexity of tree structures. A popular and efficient way to compress a tree consists of merging its isomorphic subtrees, which produces a directed acyclic graph (DAG) equivalent to the original tree. An important property of this method is that the compressed structure (i.e. the DAG) has the same height as the original tree, thus partially limiting the possibility of compression. In this paper we address the problem of further compressing this DAG in height. The difficulty is that compression must be carried out on substructures that are not exactly isomorphic, as they are strictly nested within each other. We thus introduce a notion of quasi-isomorphism between subtrees that makes it possible to define similar patterns along any given path in a tree. We then propose an algorithm to detect these patterns and merge them, leading to compressed structures corresponding to DAGs augmented with return edges. In this way, redundant information is removed from the original tree in both width and height, thus achieving minimal structural compression. The complete compression algorithm is then illustrated on the compression of various plant-like structures.
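
    The classical width-compression step that this work builds on, merging isomorphic subtrees into a DAG, can be sketched in Python by canonical hashing of unordered subtrees (the height compression via quasi-isomorphism is the paper's contribution and is not reproduced here):

        def compress_to_dag(tree):
            # Merge isomorphic subtrees of an unordered tree into a DAG:
            # each distinct canonical signature gets exactly one DAG node.
            table = {}       # canonical signature -> node id
            children = {}    # node id -> sorted tuple of child ids

            def visit(node):
                key = tuple(sorted(visit(c) for c in node))
                if key not in table:
                    table[key] = len(table)
                    children[table[key]] = key
                return table[key]

            root = visit(tree)
            return root, children

        # An unordered tree as nested tuples, with two identical subtrees.
        t = (((), ()), ((), ()))
        root, dag = compress_to_dag(t)
        print(root, dag)   # 7 tree nodes collapse to 3 DAG nodes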

  4. Cloud model bat algorithm.

    PubMed

    Zhou, Yongquan; Xie, Jian; Li, Liangliang; Ma, Mingzhi

    2014-01-01

    The bat algorithm (BA) is a novel stochastic global optimization algorithm, and the cloud model is an effective tool for transforming between qualitative concepts and their quantitative representations. Based on the bat echolocation mechanism and the excellent characteristics of the cloud model for representing uncertain knowledge, a new cloud model bat algorithm (CBA) is proposed. This paper focuses on remodeling the echolocation model based on the living and preying characteristics of bats, utilizing the transformation theory of the cloud model to depict the qualitative concept "bats approach their prey." Furthermore, the Lévy flight mode and a population information communication mechanism are introduced to balance the trade-off between exploration and exploitation. The simulation results show that the cloud model bat algorithm performs well on function optimization. PMID:24967425
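
    For reference, the standard BA skeleton that the CBA remodels (frequency, velocity, and position updates; a generic Python sketch omitting loudness, pulse rate, the cloud model, and Lévy flights) looks like this:

        import numpy as np

        rng = np.random.default_rng(0)

        def bat_algorithm(f, dim=2, n_bats=20, iters=200,
                          f_min=0.0, f_max=2.0, lo=-5.0, hi=5.0):
            # Standard BA updates: each bat picks a random frequency, moves
            # toward the current global best, and keeps improving positions.
            x = rng.uniform(lo, hi, (n_bats, dim))
            v = np.zeros((n_bats, dim))
            fit = np.array([f(b) for b in x])
            best = x[np.argmin(fit)].copy()
            for _ in range(iters):
                freq = f_min + (f_max - f_min) * rng.random(n_bats)
                v += (x - best) * freq[:, None]
                cand = np.clip(x + v, lo, hi)
                for i in range(n_bats):
                    fc = f(cand[i])
                    if fc <= fit[i]:          # accept improving moves
                        x[i], fit[i] = cand[i], fc
                best = x[np.argmin(fit)].copy()
            return best, fit.min()

        print(bat_algorithm(lambda b: float(np.sum(b ** 2))))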

  5. Novel Spectrum Sensing Algorithms for OFDM Cognitive Radio Networks

    PubMed Central

    Shi, Zhenguo; Wu, Zhilu; Yin, Zhendong; Cheng, Qingqing

    2015-01-01

    Spectrum sensing technology plays an increasingly important role in cognitive radio networks. Consequently, several spectrum sensing algorithms have been proposed in the literature. In this paper, we present a new spectrum sensing algorithm, “Differential Characteristics-Based OFDM (DC-OFDM)”, for detecting OFDM signals on the basis of their differential characteristics. We focus on the behavior of the channel gain θ around zero to detect the presence of the primary user. Furthermore, utilizing the same differential operation, we improve two traditional OFDM sensing algorithms (cyclic prefix and pilot tone detection), and propose a “Differential Characteristics-Based Cyclic Prefix (DC-CP)” detector and a “Differential Characteristics-Based Pilot Tones (DC-PT)” detector, respectively. The DC-CP detector is based on the auto-correlation vector to sense the spectrum, while the DC-PT detector takes the frequency-domain cross-correlation of the pilot tones as the test statistic to detect the primary user. Moreover, the distributions of the test statistics of the three proposed methods have been derived. Simulation results illustrate that all three proposed methods achieve good performance under low signal-to-noise ratio (SNR) in the presence of timing delay. Specifically, the DC-OFDM detector achieves the best performance among the presented detectors. Moreover, both the DC-CP and DC-PT detectors achieve significant improvements over their corresponding original detectors. PMID:26083226
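
    The cyclic-prefix correlation underlying the CP-based detector can be sketched in Python (plain CP detection on a toy OFDM signal of our own; the paper's differential step is not reproduced):

        import numpy as np

        def cp_statistic(r, n_fft):
            # Correlate the received samples with themselves delayed by the
            # FFT length; the cyclic prefix makes this large for OFDM.
            lag = r[:-n_fft] * np.conj(r[n_fft:])
            return np.abs(lag.sum()) / np.sum(np.abs(r) ** 2)

        rng = np.random.default_rng(0)
        n_fft, n_cp, n_sym = 64, 16, 50
        syms = []
        for _ in range(n_sym):
            body = np.fft.ifft(rng.choice([-1, 1], n_fft)
                               + 1j * rng.choice([-1, 1], n_fft))
            body *= np.sqrt(n_fft / 2)                 # unit average power
            syms.append(np.concatenate([body[-n_cp:], body]))  # prepend CP
        signal = np.concatenate(syms)
        noise = (rng.normal(size=signal.size)
                 + 1j * rng.normal(size=signal.size)) / np.sqrt(2)
        print(cp_statistic(signal + 0.5 * noise, n_fft))   # large (~0.2)
        print(cp_statistic(noise, n_fft))                  # near zero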

  8. Development of a new metal artifact reduction algorithm by using an edge preserving method for CBCT imaging

    NASA Astrophysics Data System (ADS)

    Kim, Juhye; Nam, Haewon; Lee, Rena

    2015-07-01

    In CT (computed tomography) images, metal materials such as tooth supplements or surgical clips can cause metal artifacts and degrade image quality; in severe cases, this may lead to misdiagnosis. In this research, we developed a new MAR (metal artifact reduction) algorithm using an edge-preserving filter and the MATLAB program (Mathworks, version R2012a). The proposed algorithm consists of six steps: image reconstruction from projection data, metal segmentation, forward projection, interpolation, application of an edge-preserving smoothing filter, and final image reconstruction. For an evaluation of the proposed algorithm, we obtained both numerical simulation data and data for a Rando phantom. In the numerical simulation data, four metal regions were added to the Shepp-Logan phantom to generate metal artifacts. The projection data of the metal-inserted Rando phantom were obtained using a prototype CBCT scanner manufactured by the Medical Engineering and Medical Physics (MEMP) laboratory research group in medical science at Ewha Womans University. The proposed algorithm was then applied to these data, and the results were compared with the original image (with metal artifacts, without correction) and with a corrected image based on linear interpolation; both visual and quantitative evaluations were performed. For both the numerical and the experimental phantom data, the proposed algorithm reduced the metal artifacts. In conclusion, the evaluation in this research showed that the proposed algorithm outperformed the interpolation-based MAR algorithm. If an optimization and a stability evaluation of the proposed algorithm can be performed, the developed algorithm is expected to be an effective tool for eliminating metal artifacts even in commercial CT systems.
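    The six steps above map naturally onto a sinogram-domain pipeline. The following sketch (Python with scikit-image) illustrates one plausible reading of that pipeline; the threshold value, the 1-D linear interpolation, and the bilateral filter are our assumed stand-ins for the authors' exact choices, and the forward projector is assumed to reproduce the measured sinogram's shape:

      import numpy as np
      from skimage.transform import radon, iradon
      from skimage.restoration import denoise_bilateral

      def mar_edge_preserving(sinogram, theta, metal_thresh):
          recon = iradon(sinogram, theta=theta, circle=False)   # 1. initial reconstruction
          metal = recon > metal_thresh                          # 2. metal segmentation
          trace = radon(metal.astype(float), theta=theta,
                        circle=False) > 0                       # 3. forward projection
          corrected = sinogram.copy()
          idx = np.arange(corrected.shape[0])
          for j in range(corrected.shape[1]):                   # 4. interpolate the metal trace
              bad = trace[:, j]
              if bad.any() and not bad.all():
                  corrected[bad, j] = np.interp(idx[bad], idx[~bad],
                                                corrected[~bad, j])
          corrected = denoise_bilateral(corrected)              # 5. edge-preserving smoothing
          out = iradon(corrected, theta=theta, circle=False)    # 6. final reconstruction
          out[metal] = recon[metal]                             # paste the metal back in
          return out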

  9. The Langley Parameterized Shortwave Algorithm (LPSA) for Surface Radiation Budget Studies. 1.0

    NASA Technical Reports Server (NTRS)

    Gupta, Shashi K.; Kratz, David P.; Stackhouse, Paul W., Jr.; Wilber, Anne C.

    2001-01-01

    An efficient algorithm was developed during the late 1980s and early 1990s by W. F. Staylor at NASA/LaRC for the purpose of deriving shortwave surface radiation budget parameters on a global scale. While the algorithm produced results in good agreement with observations, the lack of proper documentation resulted in a weak acceptance by the science community. The primary purpose of this report is to develop detailed documentation of the algorithm. In the process, the algorithm was modified whenever discrepancies were found between the algorithm and its referenced literature sources. In some instances, assumptions made in the algorithm could not be justified and were replaced with those that were justifiable. The algorithm uses satellite and operational meteorological data for inputs. Most of the original data sources have been replaced by more recent, higher quality data sources, and fluxes are now computed on a higher spatial resolution. Many more changes to the basic radiation scheme and meteorological inputs have been proposed to improve the algorithm and make the product more useful for new research projects. Because of the many changes already in place and more planned for the future, the algorithm has been renamed the Langley Parameterized Shortwave Algorithm (LPSA).

  10. Development and Validation of a Polar Cloud Algorithm for CERES

    NASA Technical Reports Server (NTRS)

    1999-01-01

    The objectives of this project, as described in the original proposal, were to develop an algorithm for diagnosing cloud properties over snow- and ice-covered surfaces, particularly at night, using satellite radiances from the Advanced Very High Resolution Radiometer (AVHRR) and High-resolution Infrared Radiation Sounder (HIRS) sensors. Products from this algorithm include a cloud mask and additional cloud properties such as cloud phase, amount, and height. The SIVIS software package, developed as a part of the CERES project, was originally the primary tool used to develop the algorithm, but as it is no longer supported we have had to pursue a new tool to enable the combination and analysis of collocated radiances from AVHRR and HIRS. This turned out to be a much larger endeavor than we expected, but we now have the data sets collocated (with many thanks to B. Baum for the fundamental code) and we have developed a nighttime cloud detection algorithm. Using this algorithm we have also computed realistic-looking cloud fractions from AVHRR brightness temperatures. A method to identify cloud phase has also been implemented. Atmospheric information from the TIROS Operational Vertical Sounder (TOVS) Polar Pathfinder Data Set, which includes temperature and moisture profiles as well as surface information, provides information required for determining cloud-top height.

  11. Research on super-resolution image reconstruction based on an improved POCS algorithm

    NASA Astrophysics Data System (ADS)

    Xu, Haiming; Miao, Hong; Yang, Chong; Xiong, Cheng

    2015-07-01

    Super-resolution image reconstruction (SRIR) can improve the resolution of blurred images and address the problems of insufficient spatial resolution, excessive noise, and low image quality. Firstly, we introduce the image degradation model to show that, in essence, the super-resolution reconstruction process is an ill-posed inverse problem in mathematics. Secondly, we analyze the causes of blurring in the optical imaging process, with light diffraction and small-angle scattering being the main sources of blur, and propose an image point spread function estimation method and an improved projection onto convex sets (POCS) algorithm. By analyzing the time-domain and frequency-domain behavior of the algorithm during reconstruction, we show that the improved POCS algorithm, which is based on prior knowledge, can restore and approach the high-frequency content of the original image scene. Finally, we apply the algorithm to reconstruct synchrotron radiation computed tomography (SRCT) images, and then use these images to reconstruct three-dimensional slice images. Comparing the original method with the super-resolution algorithm shows that the improved POCS algorithm can suppress noise and enhance image resolution, indicating that the algorithm is effective. This study and exploration of super-resolution image reconstruction by the improved POCS algorithm thus yields an effective method, with important significance and broad application prospects, for example in CT medical image processing and SRCT analysis of microstructure evolution during ceramic sintering.

  12. Unsupervised and stable LBG algorithm for data classification: application to aerial multicomponent images

    NASA Astrophysics Data System (ADS)

    Taher, A.; Chehdi, K.; Cariou, C.

    2015-10-01

    In this paper a stable and unsupervised Linde-Buzo-Gray (LBG) algorithm named LBGO is presented. The originality of the proposed algorithm relies on: i) the use of an adaptive incremental technique to initialize the class centres, which re-examines the intermediate initializations; this technique makes the algorithm stable and deterministic, so the classification results do not vary from one run to another; and ii) an unsupervised evaluation criterion applied to the intermediate classification results to estimate the optimal number of classes, which makes the algorithm unsupervised. The efficiency of this optimized version of LBG is shown through experimental results on synthetic and real aerial hyperspectral data. More precisely, we have tested our proposed classification approach with regard to three aspects: firstly its stability, secondly its correct classification rate, and thirdly the correct estimation of the number of classes.

  13. A fuzzy record-to-record travel algorithm for solving rough set attribute reduction

    NASA Astrophysics Data System (ADS)

    Mafarja, Majdi; Abdullah, Salwani

    2015-02-01

    Attribute reduction can be defined as the process of determining a minimal subset of attributes from an original set of attributes. This paper proposes a new attribute reduction method that is based on a record-to-record travel algorithm for solving rough set attribute reduction problems. This algorithm has a single parameter, called the DEVIATION, which plays a pivotal role in controlling the acceptance of worse solutions once it has been pre-tuned. In this paper, we focus on a fuzzy-based record-to-record travel algorithm for attribute reduction (FuzzyRRTAR). This algorithm employs an intelligent fuzzy logic controller to set the value of DEVIATION, which is dynamically changed throughout the search process. The proposed method was tested on standard benchmark data sets. The results show that FuzzyRRTAR is efficient in solving attribute reduction problems when compared with other meta-heuristic approaches.

  14. Optimal band selection for high dimensional remote sensing data using genetic algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, Xianfeng; Sun, Quan; Li, Jonathan

    2009-06-01

    A 'fused' method may not be suitable for reducing the dimensionality of data, and a band/feature selection method needs to be used to select an optimal subset of the original data bands. This study examined the efficiency of the genetic algorithm (GA) in band selection for remote sensing classification. A GA-based band selection algorithm was designed in which a Bhattacharyya distance index that indicates the separability between classes of interest is used as the fitness function. A binary string chromosome is designed in which each gene location has a value of 1, representing that a band is included, or 0, representing that a band is not included. The algorithm was implemented in the MATLAB programming environment, and a band selection task for lithologic classification in the Chocolate Mountain area (California) was used to test the proposed algorithm. The proposed feature selection algorithm can be useful in multi-source remote sensing data preprocessing, especially in hyperspectral dimensionality reduction.
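    The chromosome encoding and fitness design described above are enough to reconstruct the GA skeleton. A compact sketch (Python; `fitness` stands in for the paper's Bhattacharyya separability index, and the operator rates are our placeholder values):

      import numpy as np

      def ga_band_selection(fitness, n_bands, pop_size=40, n_gen=100,
                            p_cross=0.8, p_mut=0.01, rng=None):
          rng = rng or np.random.default_rng()
          pop = rng.integers(0, 2, size=(pop_size, n_bands))   # 1 = band kept
          for _ in range(n_gen):
              scores = np.array([fitness(ind) for ind in pop])
              # tournament selection: each slot keeps the fitter of two picks
              i, j = rng.integers(0, pop_size, (2, pop_size))
              parents = pop[np.where(scores[i] > scores[j], i, j)]
              children = parents.copy()
              for k in range(0, pop_size - 1, 2):              # one-point crossover
                  if rng.random() < p_cross:
                      cut = rng.integers(1, n_bands)
                      children[k, cut:] = parents[k + 1, cut:]
                      children[k + 1, cut:] = parents[k, cut:]
              flip = rng.random(children.shape) < p_mut        # bit-flip mutation
              pop = np.where(flip, 1 - children, children)
          scores = np.array([fitness(ind) for ind in pop])
          return pop[scores.argmax()]                          # best band mask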

  15. Efficient algorithm for training interpolation RBF networks with equally spaced nodes.

    PubMed

    Huan, Hoang Xuan; Hien, Dang Thi Thu; Tue, Huynh Huu

    2011-06-01

    This brief paper proposes a new algorithm to train interpolation Gaussian radial basis function (RBF) networks in order to solve the problem of interpolating multivariate functions with equally spaced nodes. Based on an efficient two-phase algorithm recently proposed by the authors, the Euclidean norm associated with the Gaussian RBF is replaced by a conveniently chosen Mahalanobis norm, which allows the width parameters of the Gaussian radial basis functions to be computed directly. The weighting parameters are then determined by a simple iterative method, so the original two-phase algorithm becomes a one-phase one. Simulation results show that the generalization of networks trained by this new algorithm is appreciably improved and the running time significantly reduced, especially when the number of nodes is large.
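    The abstract's structure (widths fixed directly, weights by simple iteration) can be illustrated in one dimension. In this sketch (Python), the width rule sigma = q*h and the fixed-point iteration are our assumptions standing in for the paper's Mahalanobis-norm construction:

      import numpy as np

      def train_rbf_equispaced(x_nodes, y_vals, h, q=0.7, tol=1e-9, max_iter=500):
          # Phase 1 analogue: widths come directly from the node spacing h,
          # chosen small enough that the interpolation matrix is strongly
          # diagonally dominant (its diagonal is exactly 1).
          sigma = q * h
          d2 = (x_nodes[:, None] - x_nodes[None, :]) ** 2
          phi = np.exp(-d2 / (2.0 * sigma ** 2))
          # Phase 2 analogue: solve phi @ w = y with the fixed point
          # w <- y - (phi - I) w, which converges because the off-diagonal
          # mass of phi stays below 1 for this sigma.
          w = np.asarray(y_vals, dtype=float).copy()
          eye = np.eye(len(x_nodes))
          for _ in range(max_iter):
              w_new = y_vals - (phi - eye) @ w
              if np.max(np.abs(w_new - w)) < tol:
                  break
              w = w_new
          return w, sigma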

  16. Quantum gate decomposition algorithms.

    SciTech Connect

    Slepoy, Alexander

    2006-07-01

    Quantum computing algorithms can be conveniently expressed in the format of quantum logic circuits. Such circuits consist of sequential coupled operations, termed "quantum gates", acting on quantum analogs of bits called qubits. We review a recently proposed method [1] for constructing general quantum gates operating on n qubits, composed of a sequence of generic elementary gates.

  17. Heuristic-based tabu search algorithm for folding two-dimensional AB off-lattice model proteins.

    PubMed

    Liu, Jingfa; Sun, Yuanyuan; Li, Gang; Song, Beibei; Huang, Weibo

    2013-12-01

    The protein structure prediction problem is a classical NP-hard problem in bioinformatics. The lack of an effective global optimization method is the key obstacle to solving this problem. As one of the global optimization algorithms, the tabu search (TS) algorithm has been successfully applied to many optimization problems. We define a new neighborhood conformation, tabu object, and acceptance criterion for the current conformation based on the original TS algorithm and put forward an improved TS algorithm. By integrating a heuristic initialization mechanism, a heuristic conformation updating mechanism, and the gradient method into the improved TS algorithm, a heuristic-based tabu search (HTS) algorithm is presented for predicting the two-dimensional (2D) protein folding structure in the AB off-lattice model, which consists of hydrophobic (A) and hydrophilic (B) monomers. The tabu search minimization leads to the basins of local minima, near which a local search mechanism is then proposed to further search for lower-energy conformations. To test the performance of the proposed algorithm, experiments are performed on four Fibonacci sequences and two real protein sequences. The experimental results show that the proposed algorithm has found the lowest-energy conformations so far for three shorter Fibonacci sequences and renewed the results for the longest one, as well as for two real protein sequences, demonstrating that the HTS algorithm is quite promising for finding the ground states of AB off-lattice model proteins. PMID:24077543

  18. Parallel algorithms for unconstrained optimizations by multisplitting

    SciTech Connect

    He, Qing

    1994-12-31

    In this paper a new parallel iterative algorithm for unconstrained optimization using the idea of multisplitting is proposed. This algorithm uses the existing sequential algorithms without any parallelization. Some convergence and numerical results for this algorithm are presented. The experiments are performed on an Intel iPSC/860 Hyper Cube with 64 nodes. It is interesting that the sequential implementation on one node shows that if the problem is split properly, the algorithm converges much faster than one without splitting.

  19. A rapid reconstruction algorithm for three-dimensional scanning images

    NASA Astrophysics Data System (ADS)

    Xiang, Jiying; Wu, Zhen; Zhang, Ping; Huang, Dexiu

    1998-04-01

    A 'simulated fluorescence' three-dimensional reconstruction algorithm, which is especially suitable for confocal images of partially transparent biological samples, is proposed in this paper. To make the retinal projection of the object reappear and to avoid excessive memory consumption, the original image is rotated and compressed before processing. A left image and a right image are mixed in different colors to increase the sense of stereo. Details originally hidden in deep layers are well exhibited with the aid of an 'auxiliary directional source'. In addition, the time consumption is greatly reduced compared with conventional methods such as 'ray tracing'. The realization of the algorithm is illustrated with a group of reconstructed images.

  20. Image compression using a novel edge-based coding algorithm

    NASA Astrophysics Data System (ADS)

    Keissarian, Farhad; Daemi, Mohammad F.

    2001-08-01

    In this paper, we present a novel edge-based coding algorithm for image compression. The proposed coding scheme is the predictive version of the original algorithm, which we presented earlier in the literature. In the original version, an image is block coded according to the level of visual activity of individual blocks, following a novel edge-oriented classification stage. Each block is then represented by a set of parameters associated with the pattern appearing inside the block. The use of these parameters at the receiver reduces the cost of reconstruction significantly. In the present study, we extend and improve the performance of the existing technique by exploiting the expected spatial redundancy across neighboring blocks. Satisfactory coded images at bit rates competitive with other block-based coding techniques have been obtained.

  1. Audio Watermarking Algorithm Based on Centroid and Statistical Features

    NASA Astrophysics Data System (ADS)

    Zhang, Xiaoming; Yin, Xiong

    Experimental testing shows that the relative relation in the number of samples among neighboring bins and the audio frequency centroid are two features robust to Time Scale Modification (TSM) attacks. Accordingly, an audio watermarking algorithm based on the frequency centroid and the histogram is proposed, embedding the watermark by modifying the frequency coefficients. The audio histogram with equal-sized bins is extracted from a selected frequency coefficient range referenced to the audio centroid. The watermarked audio signal is perceptually similar to the original one. The experimental results show that the algorithm is very robust to resampling, TSM, and a variety of common attacks. Subjective quality evaluation of the algorithm shows that the embedded watermark introduces low, inaudible distortion of the host audio signal.

  2. A filtered backprojection algorithm for cone beam reconstruction using rotational filtering under helical source trajectory

    SciTech Connect

    Tang Xiangyang; Hsieh Jiang

    2004-11-01

    With the evolution from multi-detector-row CT to cone beam (CB) volumetric CT, maintaining reconstruction accuracy becomes more challenging. To combat the severe artifacts caused by a large cone angle in CB volumetric CT, three-dimensional reconstruction algorithms have to be utilized. In practice, filtered backprojection (FBP) reconstruction algorithms are more desirable due to their computational structure and image generation efficiency. One of the CB-FBP reconstruction algorithms is the well-known FDK algorithm, which was originally derived for a circular x-ray source trajectory by heuristically extending its two-dimensional (2-D) counterpart. Later on, a general CB-FBP reconstruction algorithm was derived for noncircular, such as helical, source trajectories. It has been recognized that a filtering operation on the projection data along the tangential direction of a helical x-ray source trajectory can significantly improve the reconstruction accuracy of helical CB volumetric CT. However, the tangential filtering encounters latitudinal data truncation, resulting in degraded noise characteristics or data manipulation inefficiency. A CB-FBP reconstruction algorithm using one-dimensional rotational filtering across detector rows (namely CB-RFBP) is proposed in this paper. Although the proposed CB-RFBP reconstruction algorithm is approximate, it approaches the reconstruction accuracy that can be achieved by exact helical CB-FBP reconstruction algorithms for moderate cone angles. Unlike most exact CB-FBP reconstruction algorithms, in which the redundant data are usually discarded, the proposed CB-RFBP reconstruction algorithm makes use of all available projection data, resulting in significantly improved noise characteristics and dose efficiency. Moreover, the rotational filtering across detector rows not only survives the so-called long object problem, but also avoids the latitudinal data truncation existing in other helical CB-FBP reconstruction algorithms in which a

  3. A filtered backprojection algorithm for cone beam reconstruction using rotational filtering under helical source trajectory.

    PubMed

    Tang, Xiangyang; Hsieh, Jiang

    2004-11-01

    With the evolution from multi-detector-row CT to cone beam (CB) volumetric CT, maintaining reconstruction accuracy becomes more challenging. To combat the severe artifacts caused by a large cone angle in CB volumetric CT, three-dimensional reconstruction algorithms have to be utilized. In practice, filtered backprojection (FBP) reconstruction algorithms are more desirable due to their computational structure and image generation efficiency. One of the CB-FBP reconstruction algorithms is the well-known FDK algorithm, which was originally derived for a circular x-ray source trajectory by heuristically extending its two-dimensional (2-D) counterpart. Later on, a general CB-FBP reconstruction algorithm was derived for noncircular, such as helical, source trajectories. It has been recognized that a filtering operation on the projection data along the tangential direction of a helical x-ray source trajectory can significantly improve the reconstruction accuracy of helical CB volumetric CT. However, the tangential filtering encounters latitudinal data truncation, resulting in degraded noise characteristics or data manipulation inefficiency. A CB-FBP reconstruction algorithm using one-dimensional rotational filtering across detector rows (namely CB-RFBP) is proposed in this paper. Although the proposed CB-RFBP reconstruction algorithm is approximate, it approaches the reconstruction accuracy that can be achieved by exact helical CB-FBP reconstruction algorithms for moderate cone angles. Unlike most exact CB-FBP reconstruction algorithms, in which the redundant data are usually discarded, the proposed CB-RFBP reconstruction algorithm makes use of all available projection data, resulting in significantly improved noise characteristics and dose efficiency. Moreover, the rotational filtering across detector rows not only survives the so-called long object problem, but also avoids the latitudinal data truncation existing in other helical CB-FBP reconstruction algorithms in which a

  4. Eukaryotic origins.

    PubMed

    Lake, James A

    2015-09-26

    The origin of the eukaryotes is a fundamental scientific question that for over 30 years has generated a spirited debate between the competing Archaea (or three domains) tree and the eocyte tree. As eukaryotes ourselves, humans have a personal interest in our origins. Eukaryotes contain their defining organelle, the nucleus, after which they are named. They have a complex evolutionary history, over time acquiring multiple organelles, including mitochondria, chloroplasts, smooth and rough endoplasmic reticula, and other organelles all of which may hint at their origins. It is the evolutionary history of the nucleus and their other organelles that have intrigued molecular evolutionists, myself included, for the past 30 years and which continues to hold our interest as increasingly compelling evidence favours the eocyte tree. As with any orthodoxy, it takes time to embrace new concepts and techniques.

  5. Eukaryotic origins

    PubMed Central

    Lake, James A.

    2015-01-01

    The origin of the eukaryotes is a fundamental scientific question that for over 30 years has generated a spirited debate between the competing Archaea (or three domains) tree and the eocyte tree. As eukaryotes ourselves, humans have a personal interest in our origins. Eukaryotes contain their defining organelle, the nucleus, after which they are named. They have a complex evolutionary history, over time acquiring multiple organelles, including mitochondria, chloroplasts, smooth and rough endoplasmic reticula, and other organelles all of which may hint at their origins. It is the evolutionary history of the nucleus and their other organelles that have intrigued molecular evolutionists, myself included, for the past 30 years and which continues to hold our interest as increasingly compelling evidence favours the eocyte tree. As with any orthodoxy, it takes time to embrace new concepts and techniques. PMID:26323753

  6. Non-parametric Algorithm to Isolate Chunks in Response Sequences

    PubMed Central

    Alamia, Andrea; Solopchuk, Oleg; Olivier, Etienne; Zenon, Alexandre

    2016-01-01

    Chunking consists in grouping items of a sequence into small clusters, named chunks, with the assumed goal of lessening working memory load. Despite extensive research, the current methods used to detect chunks, and to identify different chunking strategies, remain discordant and difficult to implement. Here, we propose a simple and reliable method to identify chunks in a sequence and to determine their stability across blocks. This algorithm is based on a ranking method, and its major novelty is that it concomitantly provides both the features of individual chunks in a given sequence and an overall index that quantifies chunking-pattern consistency across sequences. The analysis of simulated data confirmed the validity of our method under different conditions of noise, chunk lengths, and chunk numbers; moreover, we found that this algorithm was particularly efficient in the noise range observed in real data, provided that at least 4 sequence repetitions were included in each experimental block. Furthermore, we applied this algorithm to actual reaction time series gathered from 3 published experiments and were able to confirm the findings obtained in the original reports. In conclusion, this novel algorithm is easy to implement, is robust to outliers, and provides concurrent and reliable estimation of chunk position and chunking dynamics, making it useful for studying both sequence-specific and general chunking effects. The algorithm is available at: https://github.com/artipago/Non-parametric-algorithm-to-isolate-chunks-in-response-sequences. PMID:27708565

  7. Interior search algorithm (ISA): a novel approach for global optimization.

    PubMed

    Gandomi, Amir H

    2014-07-01

    This paper presents the interior search algorithm (ISA) as a novel method for solving optimization tasks. The proposed ISA is inspired by interior design and decoration. The algorithm is different from other metaheuristic algorithms and provides new insight for global optimization. The proposed method is verified using some benchmark mathematical and engineering problems commonly used in the area of optimization. ISA results are further compared with well-known optimization algorithms. The results show that the ISA is efficiently capable of solving optimization problems. The proposed algorithm can outperform the other well-known algorithms. Further, the proposed algorithm is very simple and it only has one parameter to tune.

  8. An Efficient Pattern Matching Algorithm

    NASA Astrophysics Data System (ADS)

    Sleit, Azzam; Almobaideen, Wesam; Baarah, Aladdin H.; Abusitta, Adel H.

    In this study, we present an efficient algorithm for pattern matching based on the combination of hashing and search trees. The proposed solution is classified as an offline algorithm. Although this study demonstrates the merits of the technique for text matching, it can be utilized for various forms of digital data, including images, audio, and video. The performance superiority of the proposed solution is validated analytically and experimentally.

  9. Original Version

    Cancer.gov

    The EPEC-O (Education in Palliative and End-of-Life Care for Oncology) Self-Study Original Version is a free, comprehensive multimedia curriculum for health professionals caring for persons with cancer and their families. The curriculum is available as an online Self-Study Section and as a CD-ROM you can order.

  10. Parameters identification for photovoltaic module based on an improved artificial fish swarm algorithm.

    PubMed

    Han, Wei; Wang, Hong-Hua; Chen, Ling

    2014-01-01

    A precise mathematical model plays a pivotal role in the simulation, evaluation, and optimization of photovoltaic (PV) power systems. Unlike the traditional linear model, the model of a PV module is nonlinear and has multiple parameters. Since conventional methods are incapable of identifying the parameters of the PV module, an excellent optimization algorithm is required. The artificial fish swarm algorithm (AFSA), originally inspired by the simulation of the collective behavior of real fish swarms, is proposed to quickly and accurately extract the parameters of the PV module. In addition to the regular operations, a mutation operator (MO) is designed to enhance the searching performance of the algorithm. The feasibility of the proposed method is demonstrated with various parameters of the PV module under different environmental conditions, and the testing results are compared with other studied methods in terms of final solutions and computational time. The simulation results show that the proposed method is capable of obtaining higher parameter identification precision. PMID:25243233

  11. Parameters identification for photovoltaic module based on an improved artificial fish swarm algorithm.

    PubMed

    Han, Wei; Wang, Hong-Hua; Chen, Ling

    2014-01-01

    A precise mathematical model plays a pivotal role in the simulation, evaluation, and optimization of photovoltaic (PV) power systems. Unlike the traditional linear model, the model of a PV module is nonlinear and has multiple parameters. Since conventional methods are incapable of identifying the parameters of the PV module, an excellent optimization algorithm is required. The artificial fish swarm algorithm (AFSA), originally inspired by the simulation of the collective behavior of real fish swarms, is proposed to quickly and accurately extract the parameters of the PV module. In addition to the regular operations, a mutation operator (MO) is designed to enhance the searching performance of the algorithm. The feasibility of the proposed method is demonstrated with various parameters of the PV module under different environmental conditions, and the testing results are compared with other studied methods in terms of final solutions and computational time. The simulation results show that the proposed method is capable of obtaining higher parameter identification precision.

  12. Parameters Identification for Photovoltaic Module Based on an Improved Artificial Fish Swarm Algorithm

    PubMed Central

    Wang, Hong-Hua

    2014-01-01

    A precise mathematical model plays a pivotal role in the simulation, evaluation, and optimization of photovoltaic (PV) power systems. Unlike the traditional linear model, the model of a PV module is nonlinear and has multiple parameters. Since conventional methods are incapable of identifying the parameters of the PV module, an excellent optimization algorithm is required. The artificial fish swarm algorithm (AFSA), originally inspired by the simulation of the collective behavior of real fish swarms, is proposed to quickly and accurately extract the parameters of the PV module. In addition to the regular operations, a mutation operator (MO) is designed to enhance the searching performance of the algorithm. The feasibility of the proposed method is demonstrated with various parameters of the PV module under different environmental conditions, and the testing results are compared with other studied methods in terms of final solutions and computational time. The simulation results show that the proposed method is capable of obtaining higher parameter identification precision. PMID:25243233

  13. Algorithm That Synthesizes Other Algorithms for Hashing

    NASA Technical Reports Server (NTRS)

    James, Mark

    2010-01-01

    An algorithm that includes a collection of several subalgorithms has been devised as a means of synthesizing still other algorithms (which could include computer code) that utilize hashing to determine whether an element (typically, a number or other datum) is a member of a set (typically, a list of numbers). Each subalgorithm synthesizes an algorithm (e.g., a block of code) that maps a static set of key hashes to a somewhat linear monotonically increasing sequence of integers. The goal in formulating this mapping is to cause the length of the sequence thus generated to be as close as practicable to the original length of the set and thus to minimize gaps between the elements. The advantage of the approach embodied in this algorithm is that it completely avoids the traditional approach of hash-key look-ups that involve either secondary hash generation and look-up or further searching of a hash table for a desired key in the event of collisions. This algorithm guarantees that it will never be necessary to perform a search or to generate a secondary key in order to determine whether an element is a member of a set. This algorithm further guarantees that any algorithm that it synthesizes can be executed in constant time. To enforce these guarantees, the subalgorithms are formulated to employ a set of techniques, each of which works very effectively covering a certain class of hash-key values. These subalgorithms are of two types, summarized as follows: Given a list of numbers, try to find one or more solutions in which, if each number is shifted to the right by a constant number of bits and then masked with a rotating mask that isolates a set of bits, a unique number is thereby generated. In a variant of the foregoing procedure, omit the masking. Try various combinations of shifting, masking, and/or offsets until the solutions are found. From the set of solutions, select the one that provides the greatest compression for the representation and is executable in the
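    The shift-and-mask search described in the two subalgorithm types can be sketched directly. In this Python illustration, the function names, the simple low-bits mask standing in for the "rotating mask", and the search bounds are ours; it finds a (shift, mask) pair mapping every key to a distinct value, which yields the guaranteed constant-time membership test:

      def find_shift_mask(keys, max_shift=32, max_bits=16):
          # Try shift/mask combinations until every key maps to a unique
          # value -- no secondary hashing or collision search ever needed.
          for shift in range(max_shift):
              for bits in range(1, max_bits + 1):
                  mask = (1 << bits) - 1
                  if len({(k >> shift) & mask for k in keys}) == len(keys):
                      return shift, mask
          return None

      keys = [17, 42, 99, 1024, 65537]
      shift, mask = find_shift_mask(keys)
      table = {(k >> shift) & mask: k for k in keys}

      def member(x):
          # Constant time: one shift, one mask, one lookup, one compare.
          return table.get((x >> shift) & mask) == x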

  14. A three-dimensional weighted cone beam filtered backprojection (CB-FBP) algorithm for image reconstruction in volumetric CT under a circular source trajectory.

    PubMed

    Tang, Xiangyang; Hsieh, Jiang; Hagiwara, Akira; Nilsen, Roy A; Thibault, Jean-Baptiste; Drapkin, Evgeny

    2005-08-21

    The original FDK algorithm proposed for cone beam (CB) image reconstruction under a circular source trajectory has been extensively employed in medical and industrial imaging applications. With increasing cone angle, CB artefacts in images reconstructed by the original FDK algorithm deteriorate, since the circular trajectory does not satisfy the so-called data sufficiency condition (DSC). A few 'circular plus' trajectories have been proposed in the past to help the original FDK algorithm to reduce CB artefacts by meeting the DSC. However, the circular trajectory has distinct advantages over other scanning trajectories in practical CT imaging, such as head imaging, breast imaging, cardiac, vascular and perfusion applications. In addition to looking into the DSC, another insight into the CB artefacts existing in the original FDK algorithm is the inconsistency between conjugate rays that are 180 degrees apart in view angle (namely conjugate ray inconsistency). The conjugate ray inconsistency is pixel dependent, varying dramatically over pixels within the image plane to be reconstructed. However, the original FDK algorithm treats all conjugate rays equally, resulting in CB artefacts that can be avoided if appropriate weighting strategies are exercised. Along with an experimental evaluation and verification, a three-dimensional (3D) weighted axial cone beam filtered backprojection (CB-FBP) algorithm is proposed in this paper for image reconstruction in volumetric CT under a circular source trajectory. Without extra trajectories supplemental to the circular trajectory, the proposed algorithm applies 3D weighting on projection data before 3D backprojection to reduce conjugate ray inconsistency by suppressing the contribution from one of the conjugate rays with a larger cone angle. Furthermore, the 3D weighting is dependent on the distance between the reconstruction plane and the central plane determined by the circular trajectory. The proposed 3D weighted axial CB

  15. A New-Fangled FES-k-Means Clustering Algorithm for Disease Discovery and Visual Analytics.

    PubMed

    Oyana, Tonny J

    2010-01-01

    The central purpose of this study is to further evaluate the quality of the performance of a new algorithm. The study provides additional evidence on this algorithm, which was designed to increase the overall efficiency of the original k-means clustering technique: the Fast, Efficient, and Scalable k-means algorithm (FES-k-means). The FES-k-means algorithm uses a hybrid approach that comprises the k-d tree data structure, which enhances the nearest neighbor query, the original k-means algorithm, and an adaptation rate proposed by Mashor. This algorithm was tested using two real datasets and one synthetic dataset. It was employed twice on all three datasets: once on data trained by the innovative MIL-SOM method and then on the actual untrained data in order to evaluate its competence. This two-step approach of data training prior to clustering provides a solid foundation for knowledge discovery and data mining, otherwise unclaimed by clustering methods alone. The benefits of this method are that it produces clusters similar to the original k-means method at a much faster rate, as shown by runtime comparison data, and it provides efficient analysis of large geospatial data with implications for disease mechanism discovery. From a disease mechanism discovery perspective, it is hypothesized that the linear-like pattern of elevated blood lead levels discovered in the city of Chicago may be spatially linked to the city's water service lines.
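    The key speed idea, answering the nearest-centroid query with a k-d tree, is easy to isolate. A minimal sketch (Python with SciPy; Mashor's adaptation rate and the MIL-SOM pre-training are omitted, so this is plain Lloyd iteration plus the tree):

      import numpy as np
      from scipy.spatial import cKDTree

      def kdtree_kmeans(X, k, n_iter=50, rng=None):
          rng = rng or np.random.default_rng()
          centroids = X[rng.choice(len(X), k, replace=False)].astype(float)
          for _ in range(n_iter):
              # nearest-centroid assignment answered by one k-d tree query
              _, labels = cKDTree(centroids).query(X)
              for j in range(k):
                  pts = X[labels == j]
                  if len(pts):
                      centroids[j] = pts.mean(axis=0)   # Lloyd update
          return centroids, labels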

  16. Classifying scaled and rotated textures using a region-matched algorithm

    NASA Astrophysics Data System (ADS)

    Yao, Chih-Chia; Chen, Yu-Tin

    2012-07-01

    A novel method to correct texture variations resulting from scale magnification, narrowing caused by cropping into the original size, or spatial rotation is discussed. The variations usually occur in images captured by a camera using different focal lengths. A representative region-matched algorithm is developed to improve texture classification after magnification, narrowing, and spatial rotation. By using a minimum ellipse, a representative region-matched algorithm encloses a specific region extracted by the J-image segmentation algorithm. After translating the coordinates, the equation of an ellipse in the rotated texture can be formulated as that of an ellipse in the original texture. The rotated invariant property of ellipse provides an efficient method to identify the rotated texture. Additionally, the scale-variant representative region can be classified by adopting scale-invariant parameters. Moreover, a hybrid texture filter is developed. In the hybrid texture filter, the scheme of texture feature extraction includes the Gabor wavelet and the representative region-matched algorithm. Support vector machines are introduced as the classifier. The proposed hybrid texture filter performs excellently with respect to classifying both the stochastic and structural textures. Furthermore, experimental results demonstrate that the proposed algorithm outperforms conventional design algorithms.

  17. Localized density matrix minimization and linear-scaling algorithms

    NASA Astrophysics Data System (ADS)

    Lai, Rongjie; Lu, Jianfeng

    2016-06-01

    We propose a convex variational approach to compute localized density matrices for both zero temperature and finite temperature cases, by adding an entry-wise ℓ1 regularization to the free energy of the quantum system. Based on the fact that the density matrix decays exponentially away from the diagonal for insulating systems or systems at finite temperature, the proposed ℓ1 regularized variational method provides an effective way to approximate the original quantum system. We provide theoretical analysis of the approximation behavior and also design convergence guaranteed numerical algorithms based on Bregman iteration. More importantly, the ℓ1 regularized system naturally leads to localized density matrices with banded structure, which enables us to develop approximating algorithms to find the localized density matrices with computation cost linearly dependent on the problem size.
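    In the zero-temperature case, the convex program sketched in the abstract can be written explicitly (notation ours: H is the discretized Hamiltonian, N the number of electrons, \mu > 0 the regularization weight):

      \min_{P \in \mathbb{R}^{n \times n}} \; \operatorname{Tr}(HP) + \mu \|P\|_{1}
      \quad \text{subject to} \quad P = P^{\top}, \quad \operatorname{Tr}(P) = N, \quad 0 \preceq P \preceq I,

    where \|P\|_{1} = \sum_{ij} |P_{ij}| is the entry-wise ℓ1 norm. The penalty drives small far-off-diagonal entries to exactly zero, which is what produces the banded, localized density matrices mentioned above.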

  18. Multi-pattern string matching algorithms comparison for intrusion detection system

    NASA Astrophysics Data System (ADS)

    Hasan, Awsan A.; Rashid, Nur'Aini Abdul; Abdulrazzaq, Atheer A.

    2014-12-01

    Computer networks are developing exponentially and running at high speeds. With the increasing number of Internet users, computers have become the preferred target for complex attacks that require complex analyses to be detected. The Intrusion detection system (IDS) is created and turned into an important part of any modern network to protect the network from attacks. The IDS relies on string matching algorithms to identify network attacks, but these string matching algorithms consume a considerable amount of IDS processing time, thereby slows down the IDS performance. A new algorithm that can overcome the weakness of the IDS needs to be developed. Improving the multi-pattern matching algorithm ensure that an IDS can work properly and the limitations can be overcome. In this paper, we perform a comparison between our three multi-pattern matching algorithms; MP-KR, MPHQS and MPH-BMH with their corresponding original algorithms Kr, QS and BMH respectively. The experiments show that MPH-QS performs best among the proposed algorithms, followed by MPH-BMH, and MP-KR is the slowest. MPH-QS detects a large number of signature patterns in short time compared to other two algorithms. This finding can prove that the multi-pattern matching algorithms are more efficient in high-speed networks.

  19. DNABIT Compress - Genome compression algorithm.

    PubMed

    Rajarajeswari, Pothuraju; Apparao, Allam

    2011-01-01

    Data compression is concerned with how information is organized in data. Efficient storage means removal of redundancy from the data being stored in the DNA molecule. Data compression algorithms remove redundancy and are used to understand biologically important molecules. We present a compression algorithm, "DNABIT Compress", for DNA sequences based on a novel scheme of assigning binary bits to smaller segments of DNA bases, compressing both repetitive and non-repetitive DNA sequences. Our proposed algorithm achieves the best compression ratio for DNA sequences of larger genomes. Significantly better compression results show that "DNABIT Compress" outperforms the other compression algorithms. While achieving the best compression ratios for DNA sequences (genomes), our new DNABIT Compress algorithm significantly improves on the running time of all previous DNA compression programs. Assigning unique bit codes to fragments of the DNA sequence (exact repeats, reverse repeats) is also a concept introduced in this algorithm for the first time in DNA compression. The proposed algorithm achieves a compression ratio as low as 1.58 bits/base, where the existing best methods could not achieve a ratio below 1.72 bits/base.
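    The paper's unique bit codes for exact and reverse repeats are its own contribution, but the 2 bits/base floor it competes against is simple to demonstrate. A baseline sketch (Python; the naming is ours):

      CODE = {'A': 0b00, 'C': 0b01, 'G': 0b10, 'T': 0b11}
      BASE = 'ACGT'

      def pack(seq):
          # Plain 2-bit-per-base packing: DNABIT Compress beats this 2.0
          # bits/base baseline (down to ~1.58) by coding repeat fragments.
          out = bytearray()
          for i in range(0, len(seq), 4):
              chunk, byte = seq[i:i + 4], 0
              for b in chunk:
                  byte = (byte << 2) | CODE[b]
              out.append(byte << (2 * (4 - len(chunk))))  # left-align short tail
          return bytes(out), len(seq)

      def unpack(packed, n):
          seq = [BASE[(byte >> s) & 0b11] for byte in packed for s in (6, 4, 2, 0)]
          return ''.join(seq[:n])

      assert unpack(*pack('ACGTTGCA')) == 'ACGTTGCA'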

  20. Analyzing the applicability of the least risk path algorithm in indoor space

    NASA Astrophysics Data System (ADS)

    Vanclooster, A.; Viaene, P.; Van de Weghe, N.; Fack, V.; De Maeyer, Ph.

    2013-11-01

    Over the last couple of years, applications that support navigation and wayfinding in indoor environments have become one of the booming industries. However, the algorithmic support for indoor navigation has so far been left mostly untouched, as most applications mainly rely on adapting Dijkstra's shortest path algorithm to an indoor network. In outdoor space, several alternative algorithms have been proposed adding a more cognitive notion to the calculated paths and as such adhering to the natural wayfinding behavior (e.g. simplest paths, least risk paths). The need for indoor cognitive algorithms is highlighted by a more challenged navigation and orientation due to the specific indoor structure (e.g. fragmentation, less visibility, confined areas…). Therefore, the aim of this research is to extend those richer cognitive algorithms to three-dimensional indoor environments. More specifically for this paper, we will focus on the application of the least risk path algorithm of Grum (2005) to an indoor space. The algorithm as proposed by Grum (2005) is duplicated and tested in a complex multi-story building. Several analyses compare shortest and least risk paths in indoor and in outdoor space. The results of these analyses indicate that the current outdoor least risk path algorithm does not calculate less risky paths compared to its shortest paths. In some cases, worse routes have been suggested. Adjustments to the original algorithm are proposed to be more aligned to the specific structure of indoor environments. In a later stage, other cognitive algorithms will be implemented and tested in both an indoor and combined indoor-outdoor setting, in an effort to improve the overall user experience during navigation in indoor environments.

  1. Algorithm design of liquid lens inspection system

    NASA Astrophysics Data System (ADS)

    Hsieh, Lu-Lin; Wang, Chun-Chieh

    2008-08-01

    In the mobile lens domain, glass lenses are often applied where high resolution is required, but a glass zoom lens must be collocated with movable machinery and a voice-coil motor, which imposes space limits on miniaturized designs. With the development of high-level molding component technology, the liquid lens has become a focus of mobile phone and digital camera companies. A liquid lens set with a solid optical lens and a driving circuit has replaced the original components; as a result, the volume requirement is decreased to merely 50% of the original design. Besides, with its high focus-adjustment speed, low energy requirements, high durability, and low-cost manufacturing process, the liquid lens shows advantages in a competitive market. In the past, the authors only needed to inspect scrape defects caused by external force on glass lenses. For the liquid lens, the authors need to inspect the state of four different structural layers owing to its different design and structure. In this paper, the authors apply machine vision and digital image processing technology to perform inspections on a particular layer according to the needs of users. According to our experimental results, the proposed algorithm can automatically remove the out-of-focus background, extract the region of interest, and find and analyze defects efficiently in the particular layer. In the future, the authors will combine this algorithm with automatic-focus technology to implement inside inspection based on product inspection demands.

  2. A joint watermarking/encryption algorithm for verifying medical image integrity and authenticity in both encrypted and spatial domains.

    PubMed

    Bouslimi, D; Coatrieux, G; Roux, Ch

    2011-01-01

    In this paper, we propose a new joint watermarking/encryption algorithm for the purpose of verifying the reliability of medical images in both encrypted and spatial domains. It combines a substitutive watermarking algorithm, quantization index modulation (QIM), with a block cipher algorithm, the Advanced Encryption Standard (AES), in CBC mode of operation. The proposed solution gives access to the outcomes of the image integrity and origin verification even though the image is stored encrypted. Experimental results achieved on 8-bit encoded ultrasound images illustrate the overall performance of the proposed scheme. By making use of the AES block cipher in CBC mode, the proposed solution is compliant with or transparent to the DICOM standard.

  3. A joint watermarking/encryption algorithm for verifying medical image integrity and authenticity in both encrypted and spatial domains.

    PubMed

    Bouslimi, D; Coatrieux, G; Roux, Ch

    2011-01-01

    In this paper, we propose a new joint watermarking/encryption algorithm for the purpose of verifying the reliability of medical images in both encrypted and spatial domains. It combines a substitutive watermarking algorithm, quantization index modulation (QIM), with a block cipher algorithm, the Advanced Encryption Standard (AES), in CBC mode of operation. The proposed solution gives access to the outcomes of the image integrity and origin verification even though the image is stored encrypted. Experimental results achieved on 8-bit encoded ultrasound images illustrate the overall performance of the proposed scheme. By making use of the AES block cipher in CBC mode, the proposed solution is compliant with or transparent to the DICOM standard. PMID:22256213

  4. Photocopy of original drawings (original located at the National Archives, ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    Photocopy of original drawings (original located at the National Archives, San Bruno, California, Navy # 104-A-4), showing organ recess. Dept. yards & docks, U.S. Navy, Mare Island, Cal., "Plan & Sections, proposed addition, St. Peter's Chapel, December 1904" - Mare Island Naval Shipyard, St. Peter's Chapel, Walnut Street & Cedar Parkway, Vallejo, Solano County, CA

  5. 12. Photocopy of original construction drawing, undated. (Original print in ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    12. Photocopy of original construction drawing, undated. (Original print in the possession of U.S. Army Corps of Engineers, Portland District, Portland, OR.) PROPOSED EXTENSION TO ADMINISTRATION BUILDING. - Bonneville Project, Administration Building, South side of main entrance, Bonneville Project, Bonneville, Multnomah County, OR

  6. Cropping and noise resilient steganography algorithm using secret image sharing

    NASA Astrophysics Data System (ADS)

    Juarez-Sandoval, Oswaldo; Fierro-Radilla, Atoany; Espejel-Trujillo, Angelina; Nakano-Miyatake, Mariko; Perez-Meana, Hector

    2015-03-01

    This paper proposes an image steganography scheme in which a secret image is hidden in a cover image using a secret image sharing (SIS) scheme. Taking advantage of the fault-tolerant property of the (k,n)-threshold SIS, where using any k of n shares (k≤n) the secret data can be recovered without ambiguity, the proposed steganography algorithm becomes resilient to cropping and impulsive noise contamination. Among the many SIS schemes proposed to date, Lin and Chan's scheme is selected as the SIS due to its lossless recovery capability for a large amount of secret data. The proposed scheme is evaluated from several points of view, such as the imperceptibility of the stego-image with respect to its original cover image and the robustness of the hidden data to cropping and impulsive noise contamination. The evaluation results show a high quality of the extracted secret image from the stego-image even when it has suffered more than 20% cropping or high-density noise contamination.
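    Lin and Chan's scheme itself is more elaborate, but the (k,n)-threshold behavior the robustness rests on can be illustrated with classic Shamir-style sharing over a prime field. A sketch (Python; a simplification: with the prime 257 a share value can be 256 and thus not fit in a byte, a detail real schemes handle):

      import random

      P = 257  # prime field for byte-valued secrets

      def make_shares(secret_byte, k, n):
          # Random degree-(k-1) polynomial with the secret as constant term,
          # evaluated at x = 1..n; any k shares determine the polynomial.
          coeffs = [secret_byte] + [random.randrange(P) for _ in range(k - 1)]
          return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
                  for x in range(1, n + 1)]

      def recover(shares):
          # Lagrange interpolation at x = 0 over the field of size P.
          secret = 0
          for j, (xj, yj) in enumerate(shares):
              num = den = 1
              for m, (xm, _) in enumerate(shares):
                  if m != j:
                      num = num * -xm % P
                      den = den * (xj - xm) % P
              secret = (secret + yj * num * pow(den, P - 2, P)) % P
          return secret

      shares = make_shares(173, k=3, n=5)
      assert recover(shares[:3]) == 173  # cropping may destroy shares; any 3 suffice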

  7. Learning Intelligent Genetic Algorithms Using Japanese Nonograms

    ERIC Educational Resources Information Center

    Tsai, Jinn-Tsong; Chou, Ping-Yi; Fang, Jia-Cen

    2012-01-01

    An intelligent genetic algorithm (IGA) is proposed to solve Japanese nonograms and is used as a method in a university course to learn evolutionary algorithms. The IGA combines the global exploration capabilities of a canonical genetic algorithm (CGA) with effective condensed encoding, improved fitness function, and modified crossover and…

  8. Correction of Faulty Sensors in Phased Array Radars Using Symmetrical Sensor Failure Technique and Cultural Algorithm with Differential Evolution

    PubMed Central

    Khan, S. U.; Qureshi, I. M.; Zaman, F.; Shoaib, B.; Naveed, A.; Basit, A.

    2014-01-01

    Three issues regarding sensor failure at any position in the antenna array are discussed; we assume that the sensor position is known. The issues are a rise in sidelobe levels, displacement of nulls from their original positions, and reduction of null depth. The required null depth is achieved by making the weight of the symmetrical complement sensor passive. A hybrid method based on a memetic computing algorithm is proposed. The hybrid method combines the cultural algorithm with differential evolution (CADE), and it is used for the reduction of sidelobe levels and the placement of nulls at their original positions. A fitness function is used to minimize the error between the desired and estimated beam patterns along with null constraints. Simulation results for various scenarios are given to exhibit the validity and performance of the proposed algorithm. PMID:24688440

  9. Correction of faulty sensors in phased array radars using symmetrical sensor failure technique and cultural algorithm with differential evolution.

    PubMed

    Khan, S U; Qureshi, I M; Zaman, F; Shoaib, B; Naveed, A; Basit, A

    2014-01-01

    Three issues regarding sensor failure at any position in the antenna array are discussed; we assume that the sensor position is known. The issues are a rise in sidelobe levels, displacement of nulls from their original positions, and reduction of null depth. The required null depth is achieved by making the weight of the symmetrical complement sensor passive. A hybrid method based on a memetic computing algorithm is proposed. The hybrid method combines the cultural algorithm with differential evolution (CADE), and it is used for the reduction of sidelobe levels and the placement of nulls at their original positions. A fitness function is used to minimize the error between the desired and estimated beam patterns along with null constraints. Simulation results for various scenarios are given to exhibit the validity and performance of the proposed algorithm.
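    The cultural belief-space component of CADE is not detailed in either abstract, but the differential evolution core it builds on is standard. A sketch of one DE/rand/1/bin generation (Python; `fitness` would be the beam-pattern error with null constraints):

      import numpy as np

      def de_step(pop, fitness, F=0.5, CR=0.9, rng=None):
          rng = rng or np.random.default_rng()
          n, d = pop.shape
          out = pop.copy()
          for i in range(n):
              others = [j for j in range(n) if j != i]
              a, b, c = rng.choice(others, 3, replace=False)
              mutant = pop[a] + F * (pop[b] - pop[c])   # differential mutation
              cross = rng.random(d) < CR                # binomial crossover
              cross[rng.integers(d)] = True             # force one mutant gene
              trial = np.where(cross, mutant, pop[i])
              if fitness(trial) < fitness(pop[i]):      # greedy selection
                  out[i] = trial
          return out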

  10. An incremental algorithm based on rough set for concept hierarchy tree

    NASA Astrophysics Data System (ADS)

    Yuan, Junpeng; Su, Jie

    2013-03-01

    In an open, dynamic concept hierarchy tree of technological terms, the universe keeps changing, which leads to changes in the system's structure and size. This study presents an efficient incremental algorithm based on rough sets for maintaining the concept hierarchy tree over dynamic datasets. Taking into account the relationship between new terms and the original concept hierarchy tree, the paper focuses on how to add a new term into the original concept hierarchy tree when the condition attributes are known and the decision attributes are unknown. The paper proposes a novel algorithm for updating the concept hierarchy tree dynamically and proves its rationality theoretically. The paper further demonstrates its efficiency and reliability with an empirical study of the Micro-Electro-Mechanical System (MEMS) domain.

  11. Word Origins: Building Communication Connections.

    ERIC Educational Resources Information Center

    Rubenstein, Rheta N.

    2000-01-01

    Proposes examining word origins as a teaching strategy for helping middle school students speak the language of mathematics as well as promote students' general vocabulary development. Includes roots, meanings, related words, and notes for middle school mathematics vocabulary. (KHR)

  12. Highly Scalable Matching Pursuit Signal Decomposition Algorithm

    NASA Technical Reports Server (NTRS)

    Christensen, Daniel; Das, Santanu; Srivastava, Ashok N.

    2009-01-01

    Matching Pursuit Decomposition (MPD) is a powerful iterative algorithm for signal decomposition and feature extraction. MPD decomposes any signal into linear combinations of its dictionary elements, or atoms. A best-fit atom from an arbitrarily defined dictionary is determined through cross-correlation. The selected atom is subtracted from the signal, and this procedure is repeated on the residual in subsequent iterations until a stopping criterion is met. The reconstructed signal reveals the waveform structure of the original signal. However, a sufficiently large dictionary is required for an accurate reconstruction, which in turn increases the computational burden of the algorithm, thus limiting its applicability and level of adoption. The purpose of this research is to improve the scalability and performance of the classical MPD algorithm. Correlation thresholds were defined to prune insignificant atoms from the dictionary. The Coarse-Fine Grids and Multiple Atom Extraction techniques were proposed to decrease the computational burden of the algorithm. The Coarse-Fine Grids method enabled the approximation and refinement of the parameters for the best-fit atom. The ability to extract multiple atoms within a single iteration enhanced the effectiveness and efficiency of each iteration. These improvements were implemented to produce an improved Matching Pursuit Decomposition algorithm entitled MPD++. Disparate signal decomposition applications may require a particular emphasis on accuracy or computational efficiency. The prominence of the key signal features required for proper signal classification dictates the level of accuracy necessary in the decomposition. The MPD++ algorithm may be easily adapted to accommodate the imposed requirements. Certain feature extraction applications may require rapid signal decomposition. The full potential of MPD++ may be utilized to produce incredible performance gains while extracting only slightly less energy than the
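    The classical loop that MPD++ accelerates is compact enough to state. A sketch (Python; assumes a dictionary of unit-norm atoms stored as columns):

      import numpy as np

      def matching_pursuit(signal, dictionary, max_atoms=10, tol=1e-6):
          residual = np.asarray(signal, dtype=float).copy()
          coeffs = np.zeros(dictionary.shape[1])
          for _ in range(max_atoms):
              corr = dictionary.T @ residual      # cross-correlate every atom
              best = np.argmax(np.abs(corr))      # best-fit atom
              coeffs[best] += corr[best]
              residual -= corr[best] * dictionary[:, best]
              if np.linalg.norm(residual) < tol:  # stopping criterion
                  break
          return coeffs, residual                 # dictionary @ coeffs ~ signal

    The MPD++ refinements above, pruning atoms by correlation threshold, coarse-to-fine parameter grids, and extracting several atoms per iteration, all modify this loop rather than its mathematics.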

  13. Paideia: Origins.

    ERIC Educational Resources Information Center

    Burns, John W.

    The ideas in Mortimer Adler's educational manifesto, "The Paideia Proposal," are compared to the Greek concept of paideia (meaning upbringing of a child) and discredited. Committed to universal education, Adler wants schooling based on a set of uniformly applied objectives achieved by packaging pre-organized knowledge in established areas of…

  14. Algorithm development

    NASA Technical Reports Server (NTRS)

    Barth, Timothy J.; Lomax, Harvard

    1987-01-01

    The past decade has seen considerable activity in algorithm development for the Navier-Stokes equations. This has resulted in a wide variety of useful new techniques. Some examples for the numerical solution of the Navier-Stokes equations are presented, divided into two parts. One is devoted to the incompressible Navier-Stokes equations, and the other to the compressible form.

  15. Controversy on chloroplast origins.

    PubMed

    Lockhart, P J; Penny, D; Hendy, M D; Howe, C J; Beanland, T J; Larkum, A W

    1992-04-20

    Controversy exists over the origins of photosynthetic organelles in that contradictory trees arise from different sequence, biochemical and ultrastructural data sets. We propose a testable hypothesis which explains this inconsistency as a result of the differing GC contents of sequences. We report that current methods of tree reconstruction tend to group sequences with similar GC contents irrespective of whether the similar GC content is due to common ancestry or is independently acquired. Nuclear encoded sequences (high GC) give different trees from chloroplast encoded sequences (low GC). We find that current data is consistent with the hypothesis of multiple origins for photosynthetic organelles and single origins for each type of light harvesting complex. PMID:1568469

  16. Spaceborne SAR Imaging Algorithm for Coherence Optimized.

    PubMed

    Qiu, Zhiwei; Yue, Jianping; Wang, Xueqin; Yue, Shun

    2016-01-01

    This paper proposes a SAR imaging algorithm with the largest coherence, built on existing SAR imaging algorithms. The basic idea of SAR imaging in image processing is that the output signal attains the maximum signal-to-noise ratio (SNR) when optimal imaging parameters are used. A traditional imaging algorithm can acquire the best focusing effect, but it introduces decoherence in the subsequent interferometric processing. The algorithm proposed in this paper instead focuses the SAR echoes with consistent imaging parameters. Although the SNR of the output signal is reduced slightly, coherence is largely preserved, and a high-quality interferogram is finally obtained. Two scenes of Envisat ASAR data over Zhangbei are employed to test the algorithm. Compared with the interferogram from the traditional algorithm, the results show that this algorithm is more suitable for SAR interferometry (InSAR) research and application. PMID:26871446

  17. Program Proposal

    ERIC Educational Resources Information Center

    Baskas, Richard S.

    2012-01-01

    A study was conducted to determine if a deficiency, or learning gap, existed in a particular working environment. To determine if an assessment was to be conducted, a program proposal would need to be developed to explore this situation. In order for a particular environment to react and grow with other environments, it must be able to take on…

  18. Clever eye algorithm for target detection of remote sensing imagery

    NASA Astrophysics Data System (ADS)

    Geng, Xiurui; Ji, Luyan; Sun, Kang

    2016-04-01

    Target detection algorithms for hyperspectral remote sensing imagery, such as the two most commonly used detection algorithms, constrained energy minimization (CEM) and the matched filter (MF), can usually be expressed as an inner product between a weight filter (or detector) and a pixel vector. CEM and MF have the same expression except that MF requires data centralization first. However, this difference leads to a difference in the target detection results; that is to say, the selection of the data origin directly affects the performance of the detector. Does there, then, exist a data origin other than the zero point and the mean vector that yields better target detection performance? This is a very meaningful issue in the field of target detection, but it has not yet received enough attention. In this study, we propose a novel objective function by introducing the data origin as another variable; the solution of the function corresponds to the data origin with the minimal output energy. The process of finding the optimal solution can be vividly regarded as a clever eye automatically searching for the best observing position and direction in the feature space, corresponding to the largest separation between target and background. This new algorithm is therefore referred to as the clever eye (CE) algorithm. Based on the Sherman-Morrison formula and the gradient ascent method, CE derives the optimal target detection result in terms of energy. Experiments with both synthetic and real hyperspectral data have verified the effectiveness of our method.
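
    The shared inner-product form of CEM and MF, and the role the data origin plays in it, can be made concrete with a small sketch: a zero origin reproduces CEM, the data mean reproduces MF, and any other origin is the extra degree of freedom that CE optimizes. The sketch below only evaluates a detector for a given origin; the Sherman-Morrison/gradient-ascent search for the optimal origin is not reproduced.

        import numpy as np

        def origin_shifted_detector(X, t, origin=None):
            """Detection scores w^T (x - origin) for pixels X (n_pixels x bands)
            and target spectrum t. origin=None gives CEM; origin=X.mean(0) gives MF."""
            origin = np.zeros(X.shape[1]) if origin is None else origin
            Xs, ts = X - origin, t - origin
            R = Xs.T @ Xs / len(Xs)     # correlation matrix about the chosen origin
            w = np.linalg.solve(R, ts)
            w /= ts @ w                 # normalize so the target response is 1
            return Xs @ w               # one score per pixel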

  1. Darwin's Originality.

    PubMed

    Bowler, Peter J

    2009-01-01

    Charles Darwin's theory of natural selection has been hailed as one of the most innovative contributions to modern science. When first proposed in 1859, however, it was widely rejected by his contemporaries, even by those who accepted the general idea of evolution. This article identifies those aspects of Darwin's work that led him to develop this revolutionary theory, including his studies of biogeography and animal breeding, and his recognition of the role played by the struggle for existence.

  2. Speech Enhancement based on Compressive Sensing Algorithm

    NASA Astrophysics Data System (ADS)

    Sulong, Amart; Gunawan, Teddy S.; Khalifa, Othman O.; Chebil, Jalel

    2013-12-01

    Various methods for speech enhancement have been proposed over the years; accurate speech enhancement design focuses mainly on quality and intelligibility. This paper proposes a novel speech enhancement method based on compressive sensing (CS), a new paradigm for acquiring signals that is fundamentally different from uniform-rate digitization followed by compression, often used for transmission or storage. CS can reduce the number of degrees of freedom of a sparse or compressible signal by permitting only certain configurations of large and zero/small coefficients, and by structured sparsity models. CS therefore provides a way of reconstructing a compressed version of the speech in the original signal by taking only a small number of linear, non-adaptive measurements. The performance of the overall algorithm is evaluated in terms of speech quality using informal listening tests and the Perceptual Evaluation of Speech Quality (PESQ) measure. Experimental results show that the CS algorithm performs very well across a wide range of speech tests and gives good noise suppression compared with conventional approaches, without obvious degradation of speech quality.

  3. Alternative learning algorithms for feedforward neural networks

    SciTech Connect

    Vitela, J.E.

    1996-03-01

    The efficiency of the back propagation algorithm for training feed-forward multilayer neural networks has given rise to the erroneous belief among many neural network users that this is the only possible way to obtain the gradient of the error in this type of network. The purpose of this paper is to show how alternative algorithms can be obtained within the framework of ordered partial derivatives. Two alternative forward-propagating algorithms are derived in this work which are mathematically equivalent to the BP algorithm. This systematic way of obtaining learning algorithms, illustrated here with this particular type of neural network, can also be used with other types, such as recurrent neural networks.

  4. The high performing backtracking algorithm and heuristic for the sequence-dependent setup times flowshop problem with total weighted tardiness

    NASA Astrophysics Data System (ADS)

    Zheng, Jun-Xi; Zhang, Ping; Li, Fang; Du, Guang-Long

    2016-09-01

    Although the sequence-dependent setup times flowshop problem with the total weighted tardiness minimization objective exists widely in industry, work on the problem has been scant in the existing literature. To the authors' best knowledge, the NEH-EWDD heuristic and the Iterated Greedy (IG) algorithm with descent local search have been regarded as the high-performing heuristic and the state-of-the-art algorithm for the problem; both are based on insertion search. In this article, firstly, an efficient backtracking algorithm and a novel heuristic (HPIS) are presented for insertion search. Accordingly, two heuristics are introduced: one is NEH-EWDD with HPIS for insertion search, and the other combines NEH-EWDD with both of the two methods. Furthermore, the authors improve the IG algorithm with the proposed methods. Finally, experimental results show that both the proposed heuristics and the improved IG (IG*) significantly outperform the original ones.

  5. Development of a memetic clustering algorithm for optimal spectral histology: application to FTIR images of normal human colon.

    PubMed

    Farah, Ihsen; Nguyen, Thi Nguyet Que; Groh, Audrey; Guenot, Dominique; Jeannesson, Pierre; Gobinet, Cyril

    2016-05-23

    The coupling between Fourier-transform infrared (FTIR) imaging and unsupervised classification is effective in revealing the different structures of human tissues based on their specific biomolecular IR signatures; thus the spectral histology of the studied samples is achieved. However, the most widely applied clustering methods in spectral histology are local search algorithms, which converge to a local optimum depending on initialization, so multiple runs of such techniques yield multiple different solutions. Here, we propose a memetic algorithm, based on a genetic algorithm and a k-means clustering refinement, to perform optimal clustering. This approach was applied to acquired FTIR images of normal human colon tissues originating from five patients. The results show the efficiency of the proposed memetic algorithm in achieving the optimal spectral histology of these samples, contrary to k-means. PMID:27110605
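
    The division of labour described above, a genetic search over centroid sets with k-means as the local refinement step, can be sketched as follows; the population size and the crossover/mutation operators here are illustrative stand-ins, not the authors' exact operators.

        import numpy as np
        from sklearn.cluster import KMeans

        def memetic_kmeans(X, k, pop_size=10, generations=20, seed=None):
            """Each individual is a set of k centroids; k-means refines it, and its
            inertia (within-cluster sum of squares) serves as the fitness."""
            rng = np.random.default_rng(seed)
            def refine(centroids):
                km = KMeans(n_clusters=k, init=centroids, n_init=1).fit(X)
                return km.cluster_centers_, km.inertia_
            pop = [refine(X[rng.choice(len(X), k, replace=False)])
                   for _ in range(pop_size)]
            for _ in range(generations):
                pop.sort(key=lambda cf: cf[1])          # best (lowest inertia) first
                a, b = pop[0][0], pop[1][0]
                # Crossover: mix centroids of the two fittest; mutation: small jitter.
                child = np.where(rng.random((k, 1)) < 0.5, a, b)
                child = child + rng.normal(scale=0.01 * X.std(), size=child.shape)
                pop[-1] = refine(child)                 # replace the worst individual
            return min(pop, key=lambda cf: cf[1])[0]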

  6. A hybrid heuristic algorithm to improve known-plaintext attack on Fourier plane encryption.

    PubMed

    Liu, Wensi; Yang, Guanglin; Xie, Haiyan

    2009-08-01

    A hybrid heuristic attack scheme that combines the hill climbing algorithm and the simulated annealing algorithm is proposed to speed up the search procedure and to obtain a more accurate solution to the original key in the Fourier plane encryption algorithm. A unit cycle is adopted to analyze the value space of the random phase. The experimental results show that our scheme obtains a more accurate solution to the key, achieving better decryption results both for the selected encrypted image and for another unseen ciphertext image. The search time is significantly reduced, without any exceptional cases in the search procedure. For an image of 64x64 pixels, our algorithm takes a comparatively short computing time, about 1 minute, to retrieve the approximate key with a normalized root mean squared error of 0.1. Our scheme therefore makes the known-plaintext attack on Fourier plane image encryption more practical, stable, and effective.
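
    A generic sketch of such a hill-climbing/simulated-annealing hybrid over a phase key follows; the loss function (e.g. the NRMSE between the decrypted image and the known plaintext) and all step-size and cooling parameters are placeholders rather than the authors' exact scheme.

        import math, random

        def hybrid_attack(loss, initial_key, steps=10000, t0=1.0, t_min=1e-4):
            """Hill climbing with an annealing escape: improvements are always kept;
            worse keys are accepted with probability exp(-delta / T)."""
            key = list(initial_key)          # phase values on the unit cycle [0, 1)
            cur = best = loss(key)
            best_key = list(key)
            temperature = t0
            for _ in range(steps):
                i = random.randrange(len(key))
                old = key[i]
                key[i] = (old + random.gauss(0.0, 0.05)) % 1.0
                new = loss(key)
                if new < cur or random.random() < math.exp((cur - new) / temperature):
                    cur = new
                    if new < best:
                        best, best_key = new, list(key)
                else:
                    key[i] = old             # reject the move
                temperature = max(temperature * 0.999, t_min)
            return best_key, best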

  7. Performance of Thorup's Shortest Path Algorithm for Large-Scale Network Simulation

    NASA Astrophysics Data System (ADS)

    Sakumoto, Yusuke; Ohsaki, Hiroyuki; Imase, Makoto

    In this paper, we investigate the performance of Thorup's algorithm by comparing it to Dijkstra's algorithm for large-scale network simulations. One of the challenges toward the realization of large-scale network simulations is the efficient execution of finding shortest paths in a graph with N vertices and M edges. The time complexity for solving a single-source shortest path (SSSP) problem with Dijkstra's algorithm with a binary heap (DIJKSTRA-BH) is O((M+N)log N). A sophisticated algorithm called Thorup's algorithm has been proposed. The original version of Thorup's algorithm (THORUP-FR) has a time complexity of O(M+N). A simplified version of Thorup's algorithm (THORUP-KL) has a time complexity of O(Mα(N)+N), where α(N) is the functional inverse of the Ackermann function. In this paper, we compare the performances (i.e., execution time and memory consumption) of THORUP-KL and DIJKSTRA-BH, since it is known that THORUP-FR is at least ten times slower than Dijkstra's algorithm with a Fibonacci heap. We find that (1) THORUP-KL is almost always faster than DIJKSTRA-BH for large-scale network simulations, and (2) the performances of THORUP-KL and DIJKSTRA-BH deviate from their time complexities due to the presence of the memory cache in the microprocessor.
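
    For reference, DIJKSTRA-BH, the binary-heap baseline in the comparison, fits in a few lines of Python; Thorup's algorithm itself relies on a much more elaborate component hierarchy and is not reproduced here.

        import heapq

        def dijkstra_bh(adj, source):
            """Single-source shortest paths in O((M+N) log N).
            adj: {u: [(v, weight), ...]} with non-negative weights."""
            dist = {source: 0}
            heap = [(0, source)]
            while heap:
                d, u = heapq.heappop(heap)
                if d > dist.get(u, float("inf")):
                    continue                  # stale heap entry, already settled
                for v, w in adj.get(u, ()):
                    nd = d + w
                    if nd < dist.get(v, float("inf")):
                        dist[v] = nd
                        heapq.heappush(heap, (nd, v))
            return dist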

  8. A novel algorithm with differential evolution and coral reef optimization for extreme learning machine training.

    PubMed

    Yang, Zhiyong; Zhang, Taohong; Zhang, Dezheng

    2016-02-01

    Extreme learning machine (ELM) is a novel and fast learning method for training single-hidden-layer feed-forward networks. However, due to the demand for a larger number of hidden neurons, the prediction speed of ELM is not fast enough. An evolutionary ELM with differential evolution (DE) has been proposed to reduce the prediction time of the original ELM, but it may still get stuck at local optima. In this paper, a novel algorithm hybridizing DE and the metaheuristic coral reef optimization (CRO), called differential evolution coral reef optimization (DECRO), is proposed to balance explorative and exploitative power and reach better performance. The design and implementation of the DECRO algorithm are discussed in detail. DE, CRO, and DECRO are applied to ELM training respectively. Experimental results show that DECRO-ELM can reduce the prediction time of the original ELM and obtains better performance for training ELM than both DE and CRO.
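
    The DE half of such a hybrid can be sketched with the classic DE/rand/1/bin loop below (the paper does not necessarily use this exact variant); in the ELM setting, the fitness would be, for instance, a validation error evaluated over candidate hidden-layer parameters. The CRO layer and the actual DECRO coupling are not shown.

        import numpy as np

        def differential_evolution(fitness, low, high, pop_size=30, f=0.8, cr=0.9, gens=100):
            """Minimize `fitness` over the box [low, high] with DE/rand/1/bin.
            (For brevity the three donors may include the target index.)"""
            rng = np.random.default_rng()
            low, high = np.asarray(low, float), np.asarray(high, float)
            pop = rng.uniform(low, high, size=(pop_size, len(low)))
            fit = np.array([fitness(x) for x in pop])
            for _ in range(gens):
                for i in range(pop_size):
                    a, b, c = pop[rng.choice(pop_size, 3, replace=False)]
                    mutant = np.clip(a + f * (b - c), low, high)  # differential mutation
                    cross = rng.random(len(low)) < cr             # binomial crossover
                    trial = np.where(cross, mutant, pop[i])
                    ft = fitness(trial)
                    if ft < fit[i]:                               # greedy selection
                        pop[i], fit[i] = trial, ft
            return pop[np.argmin(fit)], fit.min()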

  9. An Algorithm for Suffix Stripping

    ERIC Educational Resources Information Center

    Porter, M. F.

    2006-01-01

    Purpose: The automatic removal of suffixes from words in English is of particular interest in the field of information retrieval. This work was originally published in Program in 1980 and is republished as part of a series of articles commemorating the 40th anniversary of the journal. Design/methodology/approach: An algorithm for suffix stripping…
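
    As a flavour of what rule-based suffix stripping looks like, here is a toy sketch with a handful of illustrative rules; the actual Porter algorithm applies five ordered rule steps guarded by "measure" conditions on the stem, all omitted here.

        def toy_stem(word):
            """Strip one suffix by the first matching rule (illustrative only)."""
            rules = [("ational", "ate"), ("tional", "tion"), ("sses", "ss"),
                     ("ies", "i"), ("ing", ""), ("ed", "")]
            for suffix, replacement in rules:
                if word.endswith(suffix) and len(word) - len(suffix) >= 2:
                    return word[: len(word) - len(suffix)] + replacement
            return word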

  10. [An adaptive scaling hybrid algorithm for reduction of CT artifacts caused by metal objects].

    PubMed

    Chen, Yu; Luo, Hai; Zhou, He-qin

    2009-03-01

    A new adaptive hybrid filtering algorithm is proposed to reduce the artifacts caused by metal in CT images. First, the projection data in the metal region are preprocessed and the image is reconstructed by the filtered back projection (FBP) method. Then the expectation maximization (EM) algorithm is performed iteratively on the original metal projection data. Finally, a compensating procedure is applied to the reconstructed metal region. Simulation results demonstrate that the proposed algorithm can remove metal artifacts while effectively preserving the structural information of the metal object, ensuring that the tissues around the metal are not distorted. The method is also computationally efficient and effective for CT images containing several metal objects.

  11. An item-oriented recommendation algorithm on cold-start problem

    NASA Astrophysics Data System (ADS)

    Qiu, Tian; Chen, Guang; Zhang, Zi-Ke; Zhou, Tao

    2011-09-01

    Based on a hybrid algorithm incorporating the heat conduction and probability spreading processes (Proc. Natl. Acad. Sci. U.S.A., 107 (2010) 4511), in this letter we propose an improved method that introduces an item-oriented function, focusing on resolving the dilemma in recommendation accuracy between cold and popular items. Unlike previous works, the present algorithm does not require any additional information (e.g., tags). Experimental results on three real datasets, RYM, Netflix and MovieLens, show that, compared with the original hybrid method, the proposed algorithm significantly enhances the recommendation accuracy for cold items while maintaining the accuracy for the overall and popular items. This work might shed some light on both understanding and designing effective methods for long-tailed online applications of recommender systems.
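
    For context, the baseline hybrid diffusion from the cited work can be sketched as follows; the item-oriented function that this letter actually proposes is not reproduced here.

        import numpy as np

        def hybrid_scores(A, lam=0.5):
            """ProbS/HeatS hybrid on a user-item matrix A (users x items), after
            Zhou et al. (2010): lam=1 is pure probabilistic spreading (ProbS),
            lam=0 is pure heat conduction (HeatS)."""
            A = np.asarray(A, dtype=float)
            k_user = np.maximum(A.sum(axis=1), 1)   # user degrees
            k_item = np.maximum(A.sum(axis=0), 1)   # item degrees
            M = A.T @ (A / k_user[:, None])         # M[i,j] = sum_u a_ui a_uj / k_u
            W = M / (k_item[:, None] ** (1 - lam) * k_item[None, :] ** lam)
            return A @ W.T                          # score of every item for every user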

  12. Double color image encryption using iterative phase retrieval algorithm in quaternion gyrator domain.

    PubMed

    Shao, Zhuhong; Shu, Huazhong; Wu, Jiasong; Dong, Zhifang; Coatrieux, Gouenou; Coatrieux, Jean Louis

    2014-03-10

    This paper describes a novel algorithm to encrypt double color images into a single undistinguishable image in the quaternion gyrator domain. By using an iterative phase retrieval algorithm, the phase masks used for encryption are obtained. Subsequently, the encrypted image is generated via cascaded quaternion gyrator transforms with different rotation angles. The parameters in the quaternion gyrator transforms and the phases serve as encryption keys. By knowing these keys, the original color images can be fully recovered. Numerical simulations have demonstrated the validity of the proposed encryption system as well as its robustness against loss of data and additive Gaussian noise. PMID:24663832

  13. [Near-infrared spectra combining with CARS and SPA algorithms to screen the variables and samples for quantitatively determining the soluble solids content in strawberry].

    PubMed

    Li, Jiang-bo; Guo, Zhi-ming; Huang, Wen-qian; Zhang, Bao-hua; Zhao, Chun-jiang

    2015-02-01

    In using spectroscopy to quantitatively or qualitatively analyze fruit quality, obtaining a simple and effective calibration model is critical for the application and maintenance of the developed model. Taking strawberry as the research object, this study focused on selecting the key variables and characteristic samples for quantitatively determining the soluble solids content (SSC). The competitive adaptive reweighted sampling (CARS) algorithm was first applied to select the spectral variables. Then, samples for the calibration set were selected by the successive projections algorithm (SPA), yielding 98 characteristic samples. Next, based on the selected variables and characteristic samples, a second variable selection was performed using the SPA method, giving 25 key variables. To verify the performance of the CARS algorithm, variable selection algorithms including Monte Carlo uninformative variable elimination (MC-UVE) and SPA were used for comparison. Results showed that the CARS algorithm could eliminate uninformative variables and remove collinear information at the same time. Similarly, to assess the performance of SPA for selecting characteristic samples, SPA was compared with the classical Kennard-Stone algorithm. Results showed that SPA could be used to select the characteristic samples in the calibration set. Finally, PLS and MLR models for quantitatively predicting the SSC in strawberry were built on the selected variable/sample subset (25/98). Results show that models built using only 0.59% of the original variables and 65.33% of the original samples performed better than those built using all of the original variables and samples. The MLR model was the best, with R²(pre) = 0.9097, RMSEP = 0.3484 and RPD = 3.3278.

  14. Multikernel least mean square algorithm.

    PubMed

    Tobar, Felipe A; Kung, Sun-Yuan; Mandic, Danilo P

    2014-02-01

    The multikernel least-mean-square algorithm is introduced for adaptive estimation of vector-valued nonlinear and nonstationary signals. This is achieved by mapping the multivariate input data to a Hilbert space of time-varying vector-valued functions, whose inner products (kernels) are combined in an online fashion. The proposed algorithm is equipped with novel adaptive sparsification criteria ensuring a finite dictionary, and is computationally efficient and suitable for nonstationary environments. We also show the ability of the proposed vector-valued reproducing kernel Hilbert space to serve as a feature space for the class of multikernel least-squares algorithms. The benefits of adaptive multikernel (MK) estimation algorithms are illuminated in the nonlinear multivariate adaptive prediction setting. Simulations on nonlinear inertial body sensor signals and nonstationary real-world wind signals of low, medium, and high dynamic regimes support the approach. PMID:24807027

  15. Robust matching algorithm for image mosaic

    NASA Astrophysics Data System (ADS)

    Zeng, Luan; Tan, Jiu-bin

    2010-08-01

    In order to improve the matching accuracy and the level of automation of image mosaicking, a matching algorithm based on SIFT (Scale Invariant Feature Transform) features is proposed, as detailed below. Firstly, according to the result of a cursory comparison with a given basal matching threshold, a collection of corresponding SIFT features, which still contains mismatches, is obtained. Secondly, after calculating for each correspondence the ratio of the Euclidean distance to the closest neighbor over the distance to the second-closest, the image coordinates of the corresponding SIFT features with the eight smallest ratios are selected to solve for the initial parameters of a pin-hole camera model, and the maximum error σ between the transformed coordinates and the original image coordinates of these eight correspondences is calculated. Thirdly, the ratio of the largest original image coordinate of the eight correspondences to the entire image size is computed, and this ratio is taken as the control parameter k of the matching error threshold. Finally, the difference between the transformed coordinates and the original image coordinates of all the features in the collection is computed, and the correspondences with differences larger than 3kσ are deleted. We then obtain the exact collection of matching features with which to solve the parameters of the pin-hole camera model. Experimental results indicate that the proposed method is stable and reliable when images exhibit some variation in viewpoint, illumination, rotation and scale. The new method achieves excellent matching accuracy on the experimental images. Moreover, it can select the matching threshold for different images automatically, without any manual intervention.
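
    The ratio computed in the second step is essentially Lowe's distinctiveness test; a minimal sketch, assuming two arrays of SIFT descriptors, is given below. The eight correspondences with the smallest ratios are then the natural input to the camera-model fit.

        import numpy as np

        def ratio_matches(desc_a, desc_b):
            """Return (ratio, i, j) triples sorted so the most distinctive
            correspondences (smallest nearest/second-nearest ratio) come first."""
            matches = []
            for i, d in enumerate(desc_a):
                dists = np.linalg.norm(desc_b - d, axis=1)
                j, k = np.argsort(dists)[:2]
                matches.append((dists[j] / dists[k], i, int(j)))
            return sorted(matches)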

  16. Parallel algorithms for unconstrained optimization by multisplitting with inexact subspace search - the abstract

    SciTech Connect

    Renaut, R.; He, Q.

    1994-12-31

    A new parallel iterative algorithm for unconstrained optimization by multisplitting is proposed. In this algorithm the original problem is split into a set of small optimization subproblems which are solved using well-known sequential algorithms. These algorithms are iterative in nature, e.g. the DFP variable metric method. Here the authors use sequential algorithms based on an inexact subspace search, which is an extension of the usual idea of an inexact line search. Essentially, the idea of the inexact line search for nonlinear minimization is that at each iteration the authors only find an approximate minimum in the line search direction. Hence by inexact subspace search, they mean that, instead of finding the minimum of the subproblem at each iteration, they do an incomplete downhill search to give an approximate minimum. Some convergence and numerical results for this algorithm will be presented. Further, the original theory will be generalized to the situation with a singular Hessian. Applications to nonlinear least squares problems will be presented. Experimental results will be presented for implementations on an Intel iPSC/860 Hypercube with 64 nodes as well as on the Intel Paragon.

  17. An enhanced version of the heat exchange algorithm with excellent energy conservation properties

    NASA Astrophysics Data System (ADS)

    Wirnsberger, P.; Frenkel, D.; Dellago, C.

    2015-09-01

    We propose a new algorithm for non-equilibrium molecular dynamics simulations of thermal gradients. The algorithm is an extension of the heat exchange algorithm developed by Hafskjold et al. [Mol. Phys. 80, 1389 (1993); 81, 251 (1994)], in which a certain amount of heat is added to one region and removed from another by rescaling velocities appropriately. Since the amount of added and removed heat is the same and the dynamics between velocity rescaling steps is Hamiltonian, the heat exchange algorithm is expected to conserve the energy. However, it has been reported previously that the original version of the heat exchange algorithm exhibits a pronounced drift in the total energy, the exact cause of which remained hitherto unclear. Here, we show that the energy drift is due to the truncation error arising from the operator splitting and suggest an additional coordinate integration step as a remedy. The new algorithm retains all the advantages of the original one whilst exhibiting excellent energy conservation as illustrated for a Lennard-Jones liquid and SPC/E water.
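
    The rescaling move at the heart of the scheme is easy to sketch: in each thermostatted region, peculiar velocities are scaled so that a fixed amount of kinetic energy is added (hot region) or removed (cold region) while the regional momentum is untouched. This shows only the basic HEX step, not the corrected eHEX integrator the paper derives.

        import numpy as np

        def hex_step(vel, mass, hot_idx, cold_idx, delta_q):
            """Add delta_q of kinetic energy to the hot region and remove the same
            amount from the cold one by rescaling peculiar velocities."""
            for idx, dq in ((hot_idx, +delta_q), (cold_idx, -delta_q)):
                m = mass[idx][:, None]
                v_com = (m * vel[idx]).sum(axis=0) / m.sum()  # centre-of-mass velocity
                pec = vel[idx] - v_com                        # peculiar velocities
                ke = 0.5 * (m * pec ** 2).sum()               # thermal kinetic energy
                scale = np.sqrt(1.0 + dq / ke)                # requires dq > -ke
                vel[idx] = v_com + scale * pec
            return vel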

  18. A Bat Algorithm with Mutation for UCAV Path Planning

    PubMed Central

    Wang, Gaige; Guo, Lihong; Duan, Hong; Liu, Luo; Wang, Heqi

    2012-01-01

    Path planning for an uninhabited combat air vehicle (UCAV) is a complicated high-dimensional optimization problem, which mainly centers on optimizing the flight route considering different kinds of constraints under complicated battlefield environments. The original bat algorithm (BA) is used to solve the UCAV path planning problem. Furthermore, a new bat algorithm with mutation (BAM) is proposed to solve the UCAV path planning problem, in which a modification is applied to mutate between bats during the process of updating new solutions. The UCAV can then find a safe path by connecting the chosen coordinate nodes while avoiding the threat areas and at minimum fuel cost. This new approach accelerates the global convergence speed while preserving the strong robustness of the basic BA. The realization procedure for the original BA and this improved metaheuristic approach BAM is also presented. To prove the performance of the proposed metaheuristic method, BAM is compared with BA and other population-based optimization methods, such as ACO, BBO, DE, ES, GA, PBIL, PSO, and SGA. The experiments show that the proposed approach is more effective and feasible in UCAV path planning than the other models. PMID:23365518

  19. WDM Multicast Tree Construction Algorithms and Their Comparative Evaluations

    NASA Astrophysics Data System (ADS)

    Makabe, Tsutomu; Mikoshi, Taiju; Takenaka, Toyofumi

    We propose novel tree construction algorithms for multicast communication in photonic networks. Since multicast communications consume many more link resources than unicast communications, effective algorithms for route selection and wavelength assignment are required. We propose a novel tree construction algorithm, called the Weighted Steiner Tree (WST) algorithm and a variation of the WST algorithm, called the Composite Weighted Steiner Tree (CWST) algorithm. Because these algorithms are based on the Steiner Tree algorithm, link resources among source and destination pairs tend to be commonly used and link utilization ratios are improved. Because of this, these algorithms can accept many more multicast requests than other multicast tree construction algorithms based on the Dijkstra algorithm. However, under certain delay constraints, the blocking characteristics of the proposed Weighted Steiner Tree algorithm deteriorate since some light paths between source and destinations use many hops and cannot satisfy the delay constraint. In order to adapt the approach to the delay-sensitive environments, we have devised the Composite Weighted Steiner Tree algorithm comprising the Weighted Steiner Tree algorithm and the Dijkstra algorithm for use in a delay constrained environment such as an IPTV application. In this paper, we also give the results of simulation experiments which demonstrate the superiority of the proposed Composite Weighted Steiner Tree algorithm compared with the Distributed Minimum Hop Tree (DMHT) algorithm, from the viewpoint of the light-tree request blocking.

  20. Comprehensive eye evaluation algorithm

    NASA Astrophysics Data System (ADS)

    Agurto, C.; Nemeth, S.; Zamora, G.; Vahtel, M.; Soliz, P.; Barriga, S.

    2016-03-01

    In recent years, several research groups have developed automatic algorithms to detect diabetic retinopathy (DR) in individuals with diabetes (DM), using digital retinal images. Studies have indicated that diabetics have 1.5 times the annual risk of developing primary open angle glaucoma (POAG) compared with people without DM. Moreover, DM patients have 1.8 times the risk for age-related macular degeneration (AMD). Although numerous investigators are developing automatic DR detection algorithms, there have been few successful efforts to create an automatic algorithm that can detect other ocular diseases, such as POAG and AMD. Consequently, our aim in the current study was to develop a comprehensive eye evaluation algorithm that not only detects DR in retinal images, but also automatically identifies glaucoma suspects and AMD by integrating other personal medical information with the retinal features. The proposed system is fully automatic and provides the likelihood of each of the three eye diseases. The system was evaluated on two datasets of 104 and 88 diabetic cases. For each eye, we used two non-mydriatic digital color fundus photographs (macula and optic disc centered) and, when available, information about age, duration of diabetes, cataracts, hypertension, gender, and laboratory data. Our results show that the combination of multimodal features can increase the AUC by up to 5%, 7%, and 8% in the detection of AMD, DR, and glaucoma, respectively. Marked improvement was achieved when laboratory results were combined with retinal image features.

  1. 15 CFR 930.66 - Supplemental coordination for proposed activities.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... proposed activity will affect any coastal use or resource substantially different than originally described... supporting a finding of substantially different coastal effects than originally described and the...

  2. 15 CFR 930.66 - Supplemental coordination for proposed activities.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... proposed activity will affect any coastal use or resource substantially different than originally described... supporting a finding of substantially different coastal effects than originally described and the...

  3. 15 CFR 930.66 - Supplemental coordination for proposed activities.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... proposed activity will affect any coastal use or resource substantially different than originally described... supporting a finding of substantially different coastal effects than originally described and the...

  4. 15 CFR 930.66 - Supplemental coordination for proposed activities.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... proposed activity will affect any coastal use or resource substantially different than originally described... supporting a finding of substantially different coastal effects than originally described and the...

  5. 15 CFR 930.66 - Supplemental coordination for proposed activities.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... proposed activity will affect any coastal use or resource substantially different than originally described... supporting a finding of substantially different coastal effects than originally described and the...

  6. Encoded expansion: an efficient algorithm to discover identical string motifs.

    PubMed

    Azmi, Aqil M; Al-Ssulami, Abdulrakeeb

    2014-01-01

    A major task in computational biology is the discovery of short recurring string patterns known as motifs. Most of the schemes to discover motifs are either stochastic or combinatorial in nature. Stochastic approaches do not guarantee finding the correct motifs, while the combinatorial schemes tend to have an exponential time complexity with respect to motif length. To alleviate the cost, the combinatorial approach exploits dynamic data structures such as trees or graphs. Recently, Karci ((2009) Efficient automatic exact motif discovery algorithms for biological sequences, Expert Systems with Applications 36:7952-7963) devised a deterministic algorithm that finds all the identical copies of string motifs of all sizes [Formula: see text] in a theoretical time complexity of [Formula: see text] and a space complexity of [Formula: see text], where [Formula: see text] is the length of the input sequence and [Formula: see text] is the length of the longest possible string motif. In this paper, we present a significant improvement on Karci's original algorithm. The algorithm that we propose reports all identical string motifs of sizes [Formula: see text] that occur at least [Formula: see text] times. Our algorithm starts with string motifs of size 2, and at each iteration it expands the candidate string motifs by one symbol, throwing out those that occur less than [Formula: see text] times in the entire input sequence. We use a simple array and data encoding to achieve a theoretical worst-case time complexity of [Formula: see text] and a space complexity of [Formula: see text]. Encoding of the substrings can speed up the process of comparison between string motifs. Experimental results on random and real biological sequences confirm that our algorithm has indeed a linear time complexity and it is more scalable in terms of sequence length than the existing algorithms. PMID:24871320
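
    The expand-and-prune idea is easy to sketch; the toy version below works directly on substrings rather than on the encoded arrays the paper uses to obtain its linear-time behaviour.

        from collections import defaultdict

        def identical_motifs(sequence, min_count=2):
            """All substrings of length >= 2 occurring at least `min_count` times,
            found by one-symbol expansion with pruning at every size."""
            current = defaultdict(list)
            for i in range(len(sequence) - 1):
                current[sequence[i:i + 2]].append(i)   # seed: all size-2 motifs
            current = {m: p for m, p in current.items() if len(p) >= min_count}
            found, size = dict(current), 2
            while current:
                size += 1
                expanded = defaultdict(list)
                for positions in current.values():
                    for i in positions:                # extend each survivor right
                        if i + size <= len(sequence):
                            expanded[sequence[i:i + size]].append(i)
                current = {m: p for m, p in expanded.items() if len(p) >= min_count}
                found.update(current)
            return found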

  7. Adaptive Inverse Hyperbolic Tangent Algorithm for Dynamic Contrast Adjustment in Displaying Scenes

    NASA Astrophysics Data System (ADS)

    Yu, Cheng-Yi; Ouyang, Yen-Chieh; Wang, Chuin-Mu; Chang, Chein-I.

    2010-12-01

    Contrast has a great influence on the quality of an image in human visual perception. A poorly illuminated environment can significantly affect the contrast ratio, producing an unexpected image. This paper proposes an Adaptive Inverse Hyperbolic Tangent (AIHT) algorithm to improve the display quality and contrast of a scene. Because digital cameras must keep the shadows in a middle range of luminance that includes a main object such as a face, a gamma function is generally used for this purpose. However, this function has a severe weakness in that it decreases highlight contrast. To mitigate this problem, contrast enhancement algorithms have been designed to adjust contrast to tune human visual perception. The proposed AIHT determines the contrast levels of an original image as well as the parameter space for different contrast types, so that not only are the original histogram shape features preserved, but the contrast is also enhanced effectively. Experimental results show that the proposed algorithm is capable of enhancing the global contrast of the original image adaptively while bringing out the details of objects simultaneously.
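
    An illustrative inverse-hyperbolic-tangent tone curve is shown below; alpha and beta are stand-ins for the parameters that the AIHT algorithm estimates adaptively from the image, so this is the shape of the mapping rather than the paper's full method.

        import numpy as np

        def iht_tone_curve(img, alpha=0.5, beta=1.0):
            """Map a [0, 1] image through arctanh and renormalize to [0, 1]."""
            x = np.clip(2.0 * img - 1.0, -0.999, 0.999)  # to (-1, 1), avoiding poles
            y = beta * np.arctanh(alpha * x)             # inverse hyperbolic tangent
            return (y - y.min()) / (y.max() - y.min() + 1e-12)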

  8. Proposal for DICOM multiframe medical image integrity and authenticity.

    PubMed

    Kobayashi, Luiz O M; Furuie, Sergio S

    2009-03-01

    This paper presents a novel algorithm to achieve viable integrity and authenticity addition and verification of n-frame DICOM medical images using cryptographic mechanisms. The aim of this work is the enhancement of DICOM security measures, especially for multiframe images. Current approaches have limitations that should be properly addressed for improved security. The algorithm proposed in this work uses data encryption to provide integrity and authenticity, along with a digital signature. Relevant header data and the digital signature are used as inputs to cipher the image, so the original data can be retrieved if and only if the images and the inputs are correct. The encryption process itself is a cascading scheme, where a frame is ciphered with data related to the previous frames, generating additional data on image integrity and authenticity. Decryption is similar to encryption, and also features the standard security verification of the image. The implementation was done in Java, and a performance evaluation was carried out comparing the speed of the algorithm with other existing approaches. The evaluation showed good performance of the algorithm, which is an encouraging result for its use in a real environment.

  9. Algorithms versus architectures for computational chemistry

    NASA Technical Reports Server (NTRS)

    Partridge, H.; Bauschlicher, C. W., Jr.

    1986-01-01

    The algorithms employed are computationally intensive and, as a result, increased performance (both algorithmic and architectural) is required to improve accuracy and to treat larger molecular systems. Several benchmark quantum chemistry codes are examined on a variety of architectures. While these codes are only a small portion of a typical quantum chemistry library, they illustrate many of the computationally intensive kernels and data manipulation requirements of some applications. Furthermore, understanding the performance of the existing algorithms on present and proposed supercomputers serves as a guide for future program and algorithm development. The algorithms investigated are: (1) a sparse symmetric matrix-vector product; (2) a four-index integral transformation; and (3) the calculation of diatomic two-electron Slater integrals. Vectorization strategies for these algorithms are examined for both the Cyber 205 and the Cray XMP. In addition, multiprocessor implementations of the algorithms are examined on the Cray XMP and on the MIT static dataflow machine proposed by Dennis.

  10. A synthesized heuristic task scheduling algorithm.

    PubMed

    Dai, Yanyan; Zhang, Xiangli

    2014-01-01

    Aiming at static task scheduling problems in heterogeneous environments, a heuristic task scheduling algorithm named HCPPEFT is proposed. In the task prioritizing phase, there are three levels of priority in the algorithm for choosing tasks. First, critical tasks have the highest priority; secondly, tasks with a longer path to the exit task are selected; and then the algorithm chooses tasks with fewer predecessors to schedule. In the resource selection phase, the algorithm uses task duplication to reduce the inter-resource communication cost; in addition, forecasting the impact of an assignment on all children of the current task permits better decisions to be made in selecting resources. The proposed algorithm is compared with the STDH, PEFT, and HEFT algorithms through randomly generated graphs and sets of task graphs. The experimental results show that the new algorithm can achieve better scheduling performance.

  11. Linear Bregman algorithm implemented in parallel GPU

    NASA Astrophysics Data System (ADS)

    Li, Pengyan; Ke, Jue; Sui, Dong; Wei, Ping

    2015-08-01

    At present, most compressed sensing (CS) algorithms have poor convergence speed and are thus difficult to run on a PC. To deal with this issue, we use a parallel GPU to implement a broadly used compressed sensing algorithm, the linear Bregman algorithm. The linear iterative Bregman algorithm is a reconstruction algorithm proposed by Osher and Cai. Compared with other CS reconstruction algorithms, the linear Bregman algorithm only involves vector and matrix multiplications and a thresholding operation, and is simpler and more efficient to program. We use C as the development language and adopt CUDA (Compute Unified Device Architecture) as the parallel computing architecture. In this paper, we compare the parallel Bregman algorithm with the traditional CPU implementation of the Bregman algorithm, as well as with other CS reconstruction algorithms such as the OMP and TwIST algorithms. Compared with these two algorithms, the results show that the parallel Bregman algorithm needs a shorter time and is thus more convenient for real-time object reconstruction, which is important given the fast-growing demands of information technology.
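
    The iteration itself is short: one pair of matrix-vector products plus a soft threshold per step, which is what makes it attractive for GPU parallelisation. A CPU sketch of the linearized Bregman iteration for min ||u||_1 subject to Au = b follows, with illustrative parameters (convergence requires the step size to be small relative to ||A||^2).

        import numpy as np

        def soft_threshold(x, mu):
            return np.sign(x) * np.maximum(np.abs(x) - mu, 0.0)

        def linearized_bregman(A, b, mu=1.0, delta=1.0, iters=500):
            """Sparse recovery: u tracks delta * shrink(v, mu) while v accumulates
            residual correlations A^T (b - A u)."""
            u = np.zeros(A.shape[1])
            v = np.zeros(A.shape[1])
            for _ in range(iters):
                v += A.T @ (b - A @ u)   # Bregman/residual update
                u = delta * soft_threshold(v, mu)
            return u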

  12. Adaptive link selection algorithms for distributed estimation

    NASA Astrophysics Data System (ADS)

    Xu, Songcen; de Lamare, Rodrigo C.; Poor, H. Vincent

    2015-12-01

    This paper presents adaptive link selection algorithms for distributed estimation and considers their application to wireless sensor networks and smart grids. In particular, exhaustive search-based least mean squares (LMS) / recursive least squares (RLS) link selection algorithms and sparsity-inspired LMS / RLS link selection algorithms that can exploit the topology of networks with poor-quality links are considered. The proposed link selection algorithms are then analyzed in terms of their stability, steady-state, and tracking performance and computational complexity. In comparison with the existing centralized or distributed estimation strategies, the key features of the proposed algorithms are as follows: (1) more accurate estimates and faster convergence speed can be obtained and (2) the network is equipped with the ability of link selection that can circumvent link failures and improve the estimation performance. The performance of the proposed algorithms for distributed estimation is illustrated via simulations in applications of wireless sensor networks and smart grids.

  13. A generalized vector-valued total variation algorithm

    SciTech Connect

    Wohlberg, Brendt; Rodriguez, Paul

    2009-01-01

    We propose a simple but flexible method for solving the generalized vector-valued TV (VTV) functional, which includes both the ℓ²-VTV and ℓ¹-VTV regularizations as special cases, to address the problems of deconvolution and denoising of vector-valued (e.g. color) images with Gaussian or salt-and-pepper noise. This algorithm is the vectorial extension of the Iteratively Reweighted Norm (IRN) algorithm [1] originally developed for scalar (grayscale) images. This method offers competitive computational performance for denoising and deconvolving vector-valued images corrupted with Gaussian (ℓ²-VTV case) and salt-and-pepper noise (ℓ¹-VTV case).

  14. Algorithm for remote sensing of land surface temperature

    NASA Astrophysics Data System (ADS)

    AlSultan, Sultan; Lim, H. S.; MatJafri, M. Z.; Abdullah, K.

    2008-10-01

    This study employs a newly developed algorithm for retrieving land surface temperature (LST) from Landsat TM over Saudi Arabia. The algorithm is a mono-window algorithm because Landsat TM has only one thermal band, between wavelengths of 10.44 and 12.42 μm. The proposed algorithm includes three parameters in its regression analysis: brightness temperature, surface emissivity and incoming solar radiation. The LST values estimated by the proposed algorithm were compared with the LST values produced using ATCORT2_T in the PCI Geomatica 9.1 image processing software. The mono-window algorithm produced high-accuracy LST values from the Landsat TM data.

  15. Algorithms, games, and evolution.

    PubMed

    Chastain, Erick; Livnat, Adi; Papadimitriou, Christos; Vazirani, Umesh

    2014-07-22

    Even the most seasoned students of evolution, starting with Darwin himself, have occasionally expressed amazement that the mechanism of natural selection has produced the whole of Life as we see it around us. There is a computational way to articulate the same amazement: "What algorithm could possibly achieve all this in a mere three and a half billion years?" In this paper we propose an answer: We demonstrate that in the regime of weak selection, the standard equations of population genetics describing natural selection in the presence of sex become identical to those of a repeated game between genes played according to multiplicative weight updates (MWUA), an algorithm known in computer science to be surprisingly powerful and versatile. MWUA maximizes a tradeoff between cumulative performance and entropy, which suggests a new view on the maintenance of diversity in evolution.
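
    For readers unfamiliar with it, MWUA itself is only a few lines; in the correspondence drawn by the paper, the "actions" are alleles and the payoffs are (weak-selection) marginal fitnesses. The sketch below is generic, with the payoff function left as a user-supplied stand-in.

        import numpy as np

        def mwua(payoffs, n_actions, rounds=1000, eta=0.1):
            """Multiplicative weights: play the mixed strategy w / sum(w), then scale
            each weight by (1 + eta * payoff). `payoffs(p, t)` should return a vector
            in [-1, 1] so that every weight stays positive."""
            w = np.ones(n_actions)
            for t in range(rounds):
                p = w / w.sum()              # current mixed strategy
                w *= 1.0 + eta * payoffs(p, t)
            return w / w.sum()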

  16. A combined NLP-differential evolution algorithm approach for the optimization of looped water distribution systems

    NASA Astrophysics Data System (ADS)

    Zheng, Feifei; Simpson, Angus R.; Zecchin, Aaron C.

    2011-08-01

    This paper proposes a novel optimization approach for the least cost design of looped water distribution systems (WDSs). Three distinct steps are involved in the proposed optimization approach. In the first step, the shortest-distance tree within the looped network is identified using the Dijkstra graph theory algorithm, for which an extension is proposed to find the shortest-distance tree for multisource WDSs. In the second step, a nonlinear programming (NLP) solver is employed to optimize the pipe diameters for the shortest-distance tree (chords of the shortest-distance tree are allocated the minimum allowable pipe sizes). Finally, in the third step, the original looped water network is optimized using a differential evolution (DE) algorithm seeded with diameters in the proximity of the continuous pipe sizes obtained in step two. As such, the proposed optimization approach combines the traditional deterministic optimization technique of NLP with the emerging evolutionary algorithm DE via the proposed network decomposition. The proposed methodology has been tested on four looped WDSs with the number of decision variables ranging from 21 to 454. Results obtained show the proposed approach is able to find optimal solutions with significantly less computational effort than other optimization techniques.

  17. Dual signal subspace projection (DSSP): a novel algorithm for removing large interference in biomagnetic measurements

    NASA Astrophysics Data System (ADS)

    Sekihara, Kensuke; Kawabata, Yuya; Ushio, Shuta; Sumiya, Satoshi; Kawabata, Shigenori; Adachi, Yoshiaki; Nagarajan, Srikantan S.

    2016-06-01

    Objective. In functional electrophysiological imaging, signals are often contaminated by interference that can be of considerable magnitude compared to the signals of interest. This paper proposes a novel algorithm for removing such interferences that does not require separate noise measurements. Approach. The algorithm is based on a dual definition of the signal subspace in the spatial- and time-domains. Since the algorithm makes use of this duality, it is named the dual signal subspace projection (DSSP). The DSSP algorithm first projects the columns of the measured data matrix onto the inside and outside of the spatial-domain signal subspace, creating a set of two preprocessed data matrices. The intersection of the row spans of these two matrices is estimated as the time-domain interference subspace. The original data matrix is projected onto the subspace that is orthogonal to this interference subspace. Main results. The DSSP algorithm is validated by using the computer simulation, and using two sets of real biomagnetic data: spinal cord evoked field data measured from a healthy volunteer and magnetoencephalography data from a patient with a vagus nerve stimulator. Significance. The proposed DSSP algorithm is effective for removing overlapped interference in a wide variety of biomagnetic measurements.

  18. Adaptive computation algorithm for RBF neural network.

    PubMed

    Han, Hong-Gui; Qiao, Jun-Fei

    2012-02-01

    A novel learning algorithm is proposed for nonlinear modelling and identification using radial basis function neural networks. The proposed method simplifies neural network training through the use of an adaptive computation algorithm (ACA). In addition, the convergence of the ACA is analyzed by the Lyapunov criterion. The proposed algorithm offers two important advantages. First, the model performance can be significantly improved through the ACA, and the modelling error is uniformly ultimately bounded. Secondly, the proposed ACA can reduce computational cost and accelerate the training speed. The proposed method is then employed to model a classical nonlinear system with a limit cycle and to identify a nonlinear dynamic system; computational complexity analysis and simulation results demonstrate its effectiveness.

  1. A New Differential Evolution Algorithm and Its Application to Real Life Problems

    NASA Astrophysics Data System (ADS)

    Pant, Millie; Ali, Musrrat; Singh, V. P.

    2009-07-01

    Most of the real-life problems occurring in various disciplines of science and engineering can be modeled as optimization problems, and most of these problems are nonlinear in nature, requiring a suitable and efficient optimization algorithm to reach an optimum value. In the past few years various algorithms have been proposed to deal with nonlinear optimization problems. Differential Evolution (DE) is a stochastic, population-based search technique, which can be classified as an Evolutionary Algorithm (EA) using the concepts of selection, crossover and reproduction to guide the search. It has emerged as a powerful tool for solving optimization problems in the past few years. However, the convergence rate of DE still does not meet all requirements, and attempts to speed up differential evolution are considered necessary. In order to improve the performance of DE, we propose a modified DE algorithm called DEPCX, which uses a parent-centric approach to manipulate the solution vectors. The performance of DEPCX is validated on a test bed of five benchmark functions and five real-life engineering design problems. Numerical results are compared with original differential evolution (DE) and with TDE, another recently modified version of DE. Empirical analysis of the results clearly indicates the competence and efficiency of the proposed DEPCX algorithm for solving benchmark as well as real-life problems with a good convergence rate.

  2. Feature extraction and classification algorithms for high dimensional data

    NASA Technical Reports Server (NTRS)

    Lee, Chulhee; Landgrebe, David

    1993-01-01

    Feature extraction and classification algorithms for high dimensional data are investigated. Developments with regard to sensors for Earth observation are moving in the direction of providing much higher dimensional multispectral imagery than is now possible. In analyzing such high dimensional data, processing time becomes an important factor. With large increases in dimensionality and the number of classes, processing time will increase significantly. To address this problem, a multistage classification scheme is proposed which reduces the processing time substantially by eliminating unlikely classes from further consideration at each stage. Several truncation criteria are developed and the relationship between thresholds and the error caused by the truncation is investigated. Next, an approach to feature extraction for classification is proposed based directly on the decision boundaries. It is shown that all the features needed for classification can be extracted from decision boundaries. A characteristic of the proposed method arises by noting that only a portion of the decision boundary is effective in discriminating between classes, and the concept of the effective decision boundary is introduced. The proposed feature extraction algorithm has several desirable properties: it predicts the minimum number of features necessary to achieve the same classification accuracy as in the original space for a given pattern recognition problem, and it finds the necessary feature vectors. The proposed algorithm does not deteriorate under the circumstances of equal means or equal covariances as some previous algorithms do. In addition, the decision boundary feature extraction algorithm can be used both for parametric and non-parametric classifiers. Finally, some problems encountered in analyzing high dimensional data are studied and possible solutions are proposed. First, the increased importance of the second order statistics in analyzing high dimensional data is recognized.

  3. Improved autonomous star identification algorithm

    NASA Astrophysics Data System (ADS)

    Luo, Li-Yan; Xu, Lu-Ping; Zhang, Hua; Sun, Jing-Rong

    2015-06-01

    The log-polar transform (LPT) is introduced into star identification because of its rotation invariance. An improved autonomous star identification algorithm is proposed in this paper to avoid the circular shift of the feature vector and to reduce the time consumed by star identification algorithms that use the LPT. In the proposed algorithm, the star pattern of the same navigation star remains unchanged when the stellar image is rotated, which reduces the star identification time. The logarithmic values of the plane distances between the navigation star and its neighbor stars are adopted to structure the feature vector of the navigation star, which enhances the robustness of star identification. In addition, the algorithm is designed to find the identification result with fewer comparisons, instead of searching the whole feature database. The simulation results demonstrate that the proposed algorithm can effectively accelerate star identification. Moreover, the recognition rate and robustness of the proposed algorithm are better than those of the LPT algorithm and the modified grid algorithm. Project supported by the National Natural Science Foundation of China (Grant Nos. 61172138 and 61401340), the Open Research Fund of the Academy of Satellite Application, China (Grant No. 2014_CXJJ-DH_12), the Fundamental Research Funds for the Central Universities, China (Grant Nos. JB141303 and 201413B), the Natural Science Basic Research Plan in Shaanxi Province, China (Grant No. 2013JQ8040), the Research Fund for the Doctoral Program of Higher Education of China (Grant No. 20130203120004), and the Xi’an Science and Technology Plan, China (Grant No. CXY1350(4)).

  4. Parallelized dilate algorithm for remote sensing image.

    PubMed

    Zhang, Suli; Hu, Haoran; Pan, Xin

    2014-01-01

    As an important morphological operation, the dilate algorithm can give a more connected view of a remote sensing image containing broken lines or objects. However, with the technological progress of satellite sensors, the resolution of remote sensing images has been increasing and data quantities have become very large. This slows the algorithm down, or a result cannot be obtained within limited memory or time. To solve this problem, our research proposes a parallelized dilate algorithm for remote sensing images based on MPI and MP. Experiments show that our method runs faster than the traditional single-process algorithm.
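
    A minimal sketch of the idea using Python's multiprocessing in place of MPI: the image is split into row bands with a halo as wide as the structuring-element radius, each band is dilated independently, and the halos are cropped off when stitching. All names and sizes are illustrative.

```python
import numpy as np
from multiprocessing import Pool

def dilate_block(args):
    block, r = args
    # Square dilation of radius r by shifting the padded block and OR-ing.
    out = block.copy()
    padded = np.pad(block, r)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            out |= padded[r + dy : r + dy + block.shape[0],
                          r + dx : r + dx + block.shape[1]]
    return out

def parallel_dilate(img, r=1, workers=4):
    # Row bands with an r-pixel halo so band borders dilate correctly.
    bands = np.array_split(np.arange(img.shape[0]), workers)
    jobs = []
    for idx in bands:
        lo = max(idx[0] - r, 0)
        hi = min(idx[-1] + r + 1, img.shape[0])
        jobs.append((img[lo:hi], r))
    with Pool(workers) as pool:
        dilated = pool.map(dilate_block, jobs)
    # Crop the halos back off and stitch the bands together.
    out = []
    for idx, d in zip(bands, dilated):
        top = idx[0] - max(idx[0] - r, 0)
        out.append(d[top : top + len(idx)])
    return np.vstack(out)

if __name__ == "__main__":
    img = (np.random.default_rng(0).random((1000, 1000)) > 0.999).astype(np.uint8)
    result = parallel_dilate(img, r=1)
    print(result.sum(), ">=", img.sum())
```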

  5. Non-uniformity correction for infrared focal plane array with image based on neural network algorithm

    NASA Astrophysics Data System (ADS)

    Wang, Tingting; Yu, Junsheng; Zhou, Yun; Xing, Yanmin; Jiang, Yadong

    2010-10-01

    The non-uniform response of detectors in an infrared focal plane array (IRFPA) results in fixed pattern noise (FPN) caused by non-uniformity of the detector materials and the fabrication technology. Once fixed pattern noise is added to the infrared image, image quality is seriously degraded, so non-uniformity correction (NUC) is a key technology in IRFPA applications. This paper briefly introduces the traditional neural network algorithm and puts forward an improved neural network algorithm for NUC of infrared focal plane arrays. The main improvement concerns the estimation method for the desired image. The algorithm analyzes the image array and corrects the data both in space and in time; the corrected image is estimated from three frames of the infrared data sequence. It was found that the image estimated by the new algorithm is closer to the real image than that produced by the previous algorithm. Moreover, we simulated the proposed algorithm in Matlab. The results showed that the images produced by spatial and temporal co-correction are more realistic than the original images.
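
    A minimal sketch of a Scribner-style neural-network NUC loop of the kind this paper improves on: each pixel adapts a gain and an offset by an LMS step toward a spatially smoothed 'desired' image. The paper's improved desired-image estimate from three frames is not reproduced; the demo data and constants are illustrative.

```python
import numpy as np

def nn_nuc(frames, lr=0.05):
    """LMS-style non-uniformity correction: per-pixel gain g and offset o
    are adapted so that g*x + o tracks a local spatial mean (the
    'desired' image), which suppresses fixed pattern noise."""
    H, W = frames[0].shape
    g = np.ones((H, W))
    o = np.zeros((H, W))
    for x in frames:
        y = g * x + o
        # Desired image: 3x3 neighbourhood mean of the corrected frame.
        p = np.pad(y, 1, mode="edge")
        desired = sum(p[1+dy:1+dy+H, 1+dx:1+dx+W]
                      for dy in (-1, 0, 1) for dx in (-1, 0, 1)) / 9.0
        e = y - desired          # FPN error estimate
        g -= lr * e * x          # gradient step on 0.5 * e**2 w.r.t. g
        o -= lr * e              # ... and w.r.t. o
    return g, o

# Synthetic demo: spatially flat scenes plus fixed pattern noise.
rng = np.random.default_rng(0)
gain_true = 1 + 0.1 * rng.standard_normal((64, 64))
offs_true = 0.1 * rng.standard_normal((64, 64))
frames = [gain_true * (0.5 + 0.1 * rng.random()) + offs_true for _ in range(200)]
g, o = nn_nuc(frames)
print("residual FPN std:", (g * frames[-1] + o).std())
```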

  6. Segmentation of pomegranate MR images using spatial fuzzy c-means (SFCM) algorithm

    NASA Astrophysics Data System (ADS)

    Moradi, Ghobad; Shamsi, Mousa; Sedaaghi, M. H.; Alsharif, M. R.

    2011-10-01

    Segmentation is one of the fundamental issues of image processing and machine vision. It plays a prominent role in a variety of image processing applications. In this paper, one of the most important applications of image processing, MRI segmentation of pomegranate, is explored. Pomegranate is a fruit with pharmacological properties such as being anti-viral and anti-cancer. Having a high-quality product in hand is a critical factor in its marketing, and the internal quality of the product is comprehensively important in the sorting process. The determination of qualitative features cannot be made manually. Therefore, the segmentation of the internal structures of the fruit needs to be performed as accurately as possible in the presence of noise. The fuzzy c-means (FCM) algorithm is noise-sensitive, and noisy pixels are classified incorrectly. As a solution, this paper proposes the spatial FCM (SFCM) algorithm for segmenting pomegranate MR images. The algorithm incorporates spatial neighborhood information into FCM and modifies the fuzzy membership function of each class. Segmentation results on original pomegranate MR images and on images corrupted by Gaussian, salt & pepper, and speckle noise show that the SFCM algorithm performs considerably better than the FCM algorithm. Moreover, after several steps of qualitative and quantitative analysis, we conclude that the SFCM algorithm with a 5×5 window outperforms the other window sizes.
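
    A minimal sketch of spatial FCM in the spirit described above, following a common formulation in which a neighbourhood-summed membership function h re-weights the FCM memberships; the exponents p and q, the window size, and the demo image are illustrative assumptions.

```python
import numpy as np

def sfcm(img, c=3, m=2.0, p=1, q=1, win=5, iters=30, seed=0):
    """Fuzzy c-means with a spatial term: each pixel's membership is
    re-weighted by the summed memberships of its neighbourhood, which
    makes the labelling robust to isolated noisy pixels."""
    rng = np.random.default_rng(seed)
    H, W = img.shape
    x = img.ravel().astype(float)
    centers = rng.choice(x, c, replace=False)
    r = win // 2
    for _ in range(iters):
        d2 = (x[None, :] - centers[:, None]) ** 2 + 1e-12
        u = d2 ** (-1.0 / (m - 1))
        u /= u.sum(0)                              # standard FCM memberships
        # Spatial function h: sum of memberships over a win x win window.
        U = u.reshape(c, H, W)
        Up = np.pad(U, ((0, 0), (r, r), (r, r)), mode="edge")
        h = sum(Up[:, r+dy:r+dy+H, r+dx:r+dx+W]
                for dy in range(-r, r+1) for dx in range(-r, r+1))
        u = (U ** p) * (h ** q)
        u = (u / u.sum(0)).reshape(c, -1)          # spatially modified memberships
        centers = (u ** m @ x) / (u ** m).sum(1)   # centroid update
    return u.argmax(0).reshape(H, W), centers

# Demo on a noisy two-region image.
rng = np.random.default_rng(1)
img = np.where(np.arange(4096).reshape(64, 64) % 64 < 32, 0.2, 0.8)
img = img + 0.15 * rng.standard_normal(img.shape)
labels, centers = sfcm(img, c=2)
print("centers:", np.sort(centers))
```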

  7. Algorithm for Compressing Time-Series Data

    NASA Technical Reports Server (NTRS)

    Hawkins, S. Edward, III; Darlington, Edward Hugo

    2012-01-01

    An algorithm based on Chebyshev polynomials effects lossy compression of time-series data or other one-dimensional data streams (e.g., spectral data) that are arranged in blocks for sequential transmission. The algorithm was developed for use in transmitting data from spacecraft scientific instruments to Earth stations. In spite of its lossy nature, the algorithm preserves the information needed for scientific analysis. The algorithm is computationally simple, yet compresses data streams by factors much greater than two. The algorithm is not restricted to spacecraft or scientific uses: it is applicable to time-series data in general. The algorithm can also be applied to general multidimensional data that have been converted to time-series data, a typical example being image data acquired by raster scanning. However, unlike most prior image-data-compression algorithms, this algorithm neither depends on nor exploits the two-dimensional spatial correlations that are generally present in images. In order to understand the essence of this compression algorithm, it is necessary to understand that the net effect of this algorithm and the associated decompression algorithm is to approximate the original stream of data as a sequence of finite series of Chebyshev polynomials. For the purpose of this algorithm, a block of data or interval of time for which a Chebyshev polynomial series is fitted to the original data is denoted a fitting interval. Chebyshev approximation has two properties that make it particularly effective for compressing serial data streams with minimal loss of scientific information: The errors associated with a Chebyshev approximation are nearly uniformly distributed over the fitting interval (this is known in the art as the "equal error property"); and the maximum deviations of the fitted Chebyshev polynomial from the original data have the smallest possible values (this is known in the art as the "min-max property").
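
    A minimal sketch of the block-wise Chebyshev fit/reconstruct cycle, using numpy's Chebyshev routines; the block length and polynomial degree are illustrative, and the sketch assumes the series length is a multiple of the block size.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def compress(series, block=64, degree=7):
    """Fit a Chebyshev series to each block and keep only the
    coefficients: block samples -> degree+1 numbers per block."""
    coeffs = []
    for i in range(0, len(series), block):
        y = series[i:i + block]
        t = np.linspace(-1.0, 1.0, len(y))   # map the fitting interval onto [-1, 1]
        coeffs.append(C.chebfit(t, y, degree))
    return coeffs

def decompress(coeffs, block=64):
    # Evaluate each stored series back on the fitting interval.
    t = np.linspace(-1.0, 1.0, block)
    return np.concatenate([C.chebval(t, c) for c in coeffs])

# 512 samples -> 8 blocks x 8 coefficients: an 8x compression.
x = np.linspace(0, 1, 512)
signal = np.sin(2 * np.pi * 5 * x) + 0.3 * x
rec = decompress(compress(signal))
print("max reconstruction error:", np.abs(signal - rec).max())
```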

  8. Algorithmic advances in stochastic programming

    SciTech Connect

    Morton, D.P.

    1993-07-01

    Practical planning problems with deterministic forecasts of inherently uncertain parameters often yield unsatisfactory solutions. Stochastic programming formulations allow uncertain parameters to be modeled as random variables with known distributions, but the size of the resulting mathematical programs can be formidable. Decomposition-based algorithms take advantage of special structure and provide an attractive approach to such problems. We consider two classes of decomposition-based stochastic programming algorithms. The first type of algorithm addresses problems with a "manageable" number of scenarios. The second class incorporates Monte Carlo sampling within a decomposition algorithm. We develop and empirically study an enhanced Benders decomposition algorithm for solving multistage stochastic linear programs within a prespecified tolerance. The enhancements include warm start basis selection, preliminary cut generation, the multicut procedure, and decision tree traversing strategies. Computational results are presented for a collection of "real-world" multistage stochastic hydroelectric scheduling problems. Recently, there has been an increased focus on decomposition-based algorithms that use sampling within the optimization framework. These approaches hold much promise for solving stochastic programs with many scenarios. A critical component of such algorithms is a stopping criterion to ensure the quality of the solution. With this as motivation, we develop a stopping rule theory for algorithms in which bounds on the optimal objective function value are estimated by sampling. Rules are provided for selecting sample sizes and terminating the algorithm under which asymptotic validity of confidence interval statements for the quality of the proposed solution can be verified. Issues associated with the application of this theory to two sampling-based algorithms are considered, and preliminary empirical coverage results are presented.

  9. Bouc-Wen hysteresis model identification using Modified Firefly Algorithm

    NASA Astrophysics Data System (ADS)

    Zaman, Mohammad Asif; Sikder, Urmita

    2015-12-01

    The parameters of the Bouc-Wen hysteresis model are identified using a Modified Firefly Algorithm. The proposed algorithm uses dynamic process control parameters to improve its performance. The algorithm is used to find the model parameter values that result in the least error between a set of given data points and points obtained from the Bouc-Wen model. The performance of the algorithm is compared with that of the conventional Firefly Algorithm, the Genetic Algorithm and the Differential Evolution algorithm in terms of convergence rate and accuracy. Compared to the other three optimization algorithms, the proposed algorithm is found to have a good convergence rate with a high degree of accuracy in identifying Bouc-Wen model parameters. Finally, the proposed method is used to find the Bouc-Wen model parameters from experimental data. The obtained model is found to be in good agreement with measured data.
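
    For reference, a minimal sketch of the standard firefly update that the modified algorithm builds on; the dynamic process control of the paper is only crudely approximated here by a geometric decay of the randomization weight alpha, and all constants are illustrative.

```python
import numpy as np

def firefly(f, bounds, n=25, iters=300, beta0=1.0, gamma=1.0, alpha=0.3, seed=0):
    """Standard firefly algorithm for minimisation: each firefly moves
    towards every brighter (lower-objective) firefly, with attraction
    decaying in the squared distance, plus a random walk term."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    X = rng.uniform(lo, hi, (n, len(lo)))
    I = np.apply_along_axis(f, 1, X)
    for _ in range(iters):
        for i in range(n):
            for j in range(n):
                if I[j] < I[i]:                # move i towards brighter j
                    r2 = ((X[i] - X[j]) ** 2).sum()
                    beta = beta0 * np.exp(-gamma * r2)
                    X[i] += beta * (X[j] - X[i]) + alpha * (rng.random(len(lo)) - 0.5)
                    X[i] = np.clip(X[i], lo, hi)
                    I[i] = f(X[i])
        alpha *= 0.98                          # crude stand-in for dynamic control
    return X[I.argmin()], I.min()

rosen = lambda x: float(100 * (x[1] - x[0] ** 2) ** 2 + (1 - x[0]) ** 2)
best, val = firefly(rosen, (np.full(2, -2.0), np.full(2, 2.0)))
print(best, val)
```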

  10. New hybrid genetic particle swarm optimization algorithm to design multi-zone binary filter.

    PubMed

    Lin, Jie; Zhao, Hongyang; Ma, Yuan; Tan, Jiubin; Jin, Peng

    2016-05-16

    The binary phase filters have been used to achieve an optical needle with small lateral size. Designing a binary phase filter is still a scientific challenge in such fields. In this paper, a hybrid genetic particle swarm optimization (HGPSO) algorithm is proposed to design the binary phase filter. The HGPSO algorithm includes self-adaptive parameters, recombination and mutation operations that originated from the genetic algorithm. Based on the benchmark test, the HGPSO algorithm has achieved global optimization and fast convergence. In an easy-to-perform optimizing procedure, the iteration number of HGPSO is decreased to about a quarter of the original particle swarm optimization process. A multi-zone binary phase filter is designed by using the HGPSO. The long depth of focus and high resolution are achieved simultaneously, where the depth of focus and focal spot transverse size are 6.05λ and 0.41λ, respectively. Therefore, the proposed HGPSO can be applied to the optimization of filter with multiple parameters. PMID:27409895

  11. Understanding Air Transportation Market Dynamics Using a Search Algorithm for Calibrating Travel Demand and Price

    NASA Technical Reports Server (NTRS)

    Kumar, Vivek; Horio, Brant M.; DeCicco, Anthony H.; Hasan, Shahab; Stouffer, Virginia L.; Smith, Jeremy C.; Guerreiro, Nelson M.

    2015-01-01

    This paper presents a search-algorithm-based framework to calibrate origin-destination (O-D) market-specific airline ticket demands and prices for the Air Transportation System (ATS). This framework is used for calibrating an agent-based model of the air ticket buy-sell process - the Airline Evolutionary Simulation (Airline EVOS) - that has fidelity of detail accounting for airline and consumer behaviors and the interdependencies they share with each other and the NAS. More specifically, the algorithm simultaneously calibrates demand and airfares for each O-D market to within a specified threshold of a pre-specified target value. The proposed algorithm is illustrated with market data targets provided by the Transportation System Analysis Model (TSAM) and the Airline Origin and Destination Survey (DB1B). Although we specify these models and data sources for this calibration exercise, the methods described in this paper are applicable to calibrating any low-level model of the ATS to other demand-forecast-model-based data. We argue that using a calibration algorithm such as the one presented here to synchronize ATS models with specialized demand forecast models is a powerful tool for establishing credible baseline conditions in experiments analyzing the effects of proposed policy changes to the ATS.

  12. Algorithmic chemistry

    SciTech Connect

    Fontana, W.

    1990-12-13

    In this paper complex adaptive systems are defined by a self-referential loop in which objects encode functions that act back on these objects. A model for this loop is presented. It uses a simple recursive formal language, derived from the lambda-calculus, to provide a semantics that maps character strings into functions that manipulate symbols on strings. The interaction between two functions, or algorithms, is defined naturally within the language through function composition, and results in the production of a new function. An iterated map acting on sets of functions and a corresponding graph representation are defined. Their properties are useful to discuss the behavior of a fixed size ensemble of randomly interacting functions. This "function gas", or "Turing gas", is studied under various conditions, and evolves cooperative interaction patterns of considerable intricacy. These patterns adapt under the influence of perturbations consisting of the addition of new random functions to the system. Different organizations emerge depending on the availability of self-replicators.

  13. Some nonlinear space decomposition algorithms

    SciTech Connect

    Tai, Xue-Cheng; Espedal, M.

    1996-12-31

    Convergence of a space decomposition method is proved for a general convex programming problem. The space decomposition refers to methods that decompose a space into sums of subspaces, which could be a domain decomposition or a multigrid method for partial differential equations. Two algorithms are proposed. Both can be used for linear as well as nonlinear elliptic problems and they reduce to the standard additive and multiplicative Schwarz methods for linear elliptic problems. Two "hybrid" algorithms are also presented. They converge faster than the additive one and have better parallelism than the multiplicative method. Numerical tests with a two level domain decomposition for linear, nonlinear and interface elliptic problems are presented for the proposed algorithms.

  14. A New Modified Artificial Bee Colony Algorithm with Exponential Function Adaptive Steps

    PubMed Central

    Mao, Wei; Li, Hao-ru

    2016-01-01

    As one of the most recent popular swarm intelligence techniques, artificial bee colony algorithm is poor at exploitation and has some defects such as slow search speed, poor population diversity, the stagnation in the working process, and being trapped into the local optimal solution. The purpose of this paper is to develop a new modified artificial bee colony algorithm in view of the initial population structure, subpopulation groups, step updating, and population elimination. Further, depending on opposition-based learning theory and the new modified algorithms, an improved S-type grouping method is proposed and the original way of roulette wheel selection is substituted through sensitivity-pheromone way. Then, an adaptive step with exponential functions is designed for replacing the original random step. Finally, based on the new test function versions CEC13, six benchmark functions with the dimensions D = 20 and D = 40 are chosen and applied in the experiments for analyzing and comparing the iteration speed and accuracy of the new modified algorithms. The experimental results show that the new modified algorithm has faster and more stable searching and can quickly increase poor population diversity and bring out the global optimal solutions. PMID:27293426

  15. A New Modified Artificial Bee Colony Algorithm with Exponential Function Adaptive Steps.

    PubMed

    Mao, Wei; Lan, Heng-You; Li, Hao-Ru

    2016-01-01

    As one of the most recent popular swarm intelligence techniques, artificial bee colony algorithm is poor at exploitation and has some defects such as slow search speed, poor population diversity, the stagnation in the working process, and being trapped into the local optimal solution. The purpose of this paper is to develop a new modified artificial bee colony algorithm in view of the initial population structure, subpopulation groups, step updating, and population elimination. Further, depending on opposition-based learning theory and the new modified algorithms, an improved S-type grouping method is proposed and the original way of roulette wheel selection is substituted through sensitivity-pheromone way. Then, an adaptive step with exponential functions is designed for replacing the original random step. Finally, based on the new test function versions CEC13, six benchmark functions with the dimensions D = 20 and D = 40 are chosen and applied in the experiments for analyzing and comparing the iteration speed and accuracy of the new modified algorithms. The experimental results show that the new modified algorithm has faster and more stable searching and can quickly increase poor population diversity and bring out the global optimal solutions. PMID:27293426

  16. An Algorithmic Framework for Multiobjective Optimization

    PubMed Central

    Ganesan, T.; Elamvazuthi, I.; Shaari, Ku Zilati Ku; Vasant, P.

    2013-01-01

    Multiobjective (MO) optimization is an emerging field which is increasingly being encountered in many fields globally. Various metaheuristic techniques such as differential evolution (DE), genetic algorithm (GA), gravitational search algorithm (GSA), and particle swarm optimization (PSO) have been used in conjunction with scalarization techniques such as weighted sum approach and the normal-boundary intersection (NBI) method to solve MO problems. Nevertheless, many challenges still arise especially when dealing with problems with multiple objectives (especially in cases more than two). In addition, problems with extensive computational overhead emerge when dealing with hybrid algorithms. This paper discusses these issues by proposing an alternative framework that utilizes algorithmic concepts related to the problem structure for generating efficient and effective algorithms. This paper proposes a framework to generate new high-performance algorithms with minimal computational overhead for MO optimization. PMID:24470795

  17. Orbital objects detection algorithm using faint streaks

    NASA Astrophysics Data System (ADS)

    Tagawa, Makoto; Yanagisawa, Toshifumi; Kurosaki, Hirohisa; Oda, Hiroshi; Hanada, Toshiya

    2016-02-01

    This study proposes an algorithm to detect orbital objects that are small or moving at high apparent velocities in optical images by utilizing their faint streaks. In the conventional object-detection algorithm, a high signal-to-noise ratio (e.g., 3 or more) is required, whereas in our proposed algorithm the signals are summed along the streak direction to improve object-detection sensitivity. Lower signal-to-noise-ratio objects were detected by applying the algorithm to a time series of images. The algorithm comprises the following steps: (1) image skewing, (2) image compression along the vertical axis, (3) detection and determination of streak position, (4) searching for object candidates using the time-series streak-position data, and (5) selecting the candidate with the best linearity and reliability. Our algorithm's ability to detect streaks with signals weaker than the background noise was confirmed using images from the Australia Remote Observatory.
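
    A minimal numpy sketch of steps (1)-(3): skew the image so a streak of an assumed slope becomes vertical, compress along the vertical axis, and look for a column peak. The search over candidate slopes and the time-series candidate matching of steps (4)-(5) are omitted; all numbers are illustrative.

```python
import numpy as np

def streak_response(img, slope):
    """Shift each row by slope*row (image 'skewing') so a streak of that
    slope becomes vertical, then sum along columns; a streak shows up as
    a peak even when the per-pixel SNR is below the usual threshold."""
    H, W = img.shape
    skewed = np.empty_like(img)
    for r in range(H):
        skewed[r] = np.roll(img[r], -int(round(slope * r)))
    return skewed.sum(axis=0)          # compression along the vertical axis

rng = np.random.default_rng(0)
img = rng.standard_normal((128, 256))          # background noise, sigma = 1
for r in range(128):                           # faint streak, per-pixel SNR ~ 0.5
    img[r, (40 + int(round(0.5 * r))) % 256] += 0.5

profile = streak_response(img, 0.5)
col = profile.argmax()
# Summing 128 rows boosts the streak SNR by sqrt(128) ~ 11x.
print("detected column:", col,
      "peak z-score:", (profile[col] - profile.mean()) / profile.std())
```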

  18. Threshold extended ID3 algorithm

    NASA Astrophysics Data System (ADS)

    Kumar, A. B. Rajesh; Ramesh, C. Phani; Madhusudhan, E.; Padmavathamma, M.

    2012-04-01

    Information exchange over insecure networks needs to provide authentication and confidentiality to the database, which is a significant problem in data mining. In this paper we propose a novel authenticated multiparty ID3 algorithm used to construct a multiparty secret-sharing decision tree for implementation in medical transactions.

  19. Some Practical Payments Clearance Algorithms

    NASA Astrophysics Data System (ADS)

    Kumlander, Deniss

    The globalisation of corporations' operations has produced a huge volume of inter-company invoices. Optimisation of those, known as payment clearance, can produce a significant saving in the costs associated with those transfers and their handling. The paper reviews some common practical approaches to the payment clearance problem and proposes some novel algorithms based on graph theory and heuristic distribution of totals.
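
    One common practical approach in this area is multilateral netting; here is a minimal sketch of that idea (not the paper's graph-based algorithms): compute each company's net position and settle debtors against creditors greedily, so the number of transfers is far smaller than the number of invoices.

```python
from collections import defaultdict

def multilateral_netting(invoices):
    """invoices: list of (payer, payee, amount).  Returns a short list of
    settlement transfers realising every company's net position instead
    of clearing each invoice individually."""
    net = defaultdict(float)
    for payer, payee, amount in invoices:
        net[payer] -= amount
        net[payee] += amount
    # Ascending sort, so .pop() yields the smallest debt / largest credit.
    debtors = sorted((v, k) for k, v in net.items() if v < -1e-9)
    creditors = sorted((v, k) for k, v in net.items() if v > 1e-9)
    transfers = []
    while debtors and creditors:
        debt, d = debtors.pop()
        cred, c = creditors.pop()
        amt = min(-debt, cred)
        transfers.append((d, c, round(amt, 2)))
        if -debt - amt > 1e-9:                 # debtor still owes something
            debtors.append((debt + amt, d)); debtors.sort()
        if cred - amt > 1e-9:                  # creditor still has credit left
            creditors.append((cred - amt, c)); creditors.sort()
    return transfers

invoices = [("A", "B", 100), ("B", "C", 80), ("C", "A", 70), ("A", "C", 30)]
print(multilateral_netting(invoices))   # two transfers instead of four invoices
```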

  20. Systolic array architecture for convolutional decoding algorithms: Viterbi algorithm and stack algorithm

    SciTech Connect

    Chang, C.Y.

    1986-01-01

    New results on efficient forms of decoding convolutional codes based on Viterbi and stack algorithms using systolic array architecture are presented. Some theoretical aspects of systolic arrays are also investigated. First, systolic array implementation of Viterbi algorithm is considered, and various properties of convolutional codes are derived. A technique called strongly connected trellis decoding is introduced to increase the efficient utilization of all the systolic array processors. The issues dealing with the composite branch metric generation, survivor updating, overall system architecture, throughput rate, and computations overhead ratio are also investigated. Second, the existing stack algorithm is modified and restated in a more concise version so that it can be efficiently implemented by a special type of systolic array called systolic priority queue. Three general schemes of systolic priority queue based on random access memory, shift register, and ripple register are proposed. Finally, a systematic approach is presented to design systolic arrays for certain general classes of recursively formulated algorithms.
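
    For reference, a minimal log-domain sketch of the Viterbi recursion that such a systolic array parallelizes: the max over predecessor states, which the array evaluates for all states concurrently, is computed here sequentially. The toy model values are illustrative.

```python
import numpy as np

def viterbi(logA, logB, logpi, obs):
    """Log-domain Viterbi decoding.  A systolic architecture evaluates
    the max over predecessor states for all states in parallel; here
    the same recursion runs sequentially."""
    T, S = len(obs), logA.shape[0]
    delta = logpi + logB[:, obs[0]]
    psi = np.zeros((T, S), dtype=int)
    for t in range(1, T):
        cand = delta[:, None] + logA            # cand[i, j]: best path into j via i
        psi[t] = cand.argmax(0)                 # survivor bookkeeping
        delta = cand.max(0) + logB[:, obs[t]]   # branch metric update
    path = [int(delta.argmax())]
    for t in range(T - 1, 0, -1):               # survivor traceback
        path.append(int(psi[t][path[-1]]))
    return path[::-1], float(delta.max())

# Toy 2-state example.
A = np.log(np.array([[0.7, 0.3], [0.4, 0.6]]))
B = np.log(np.array([[0.9, 0.1], [0.2, 0.8]]))
pi = np.log(np.array([0.5, 0.5]))
states, score = viterbi(A, B, pi, [0, 0, 1, 1, 1])
print(states, score)
```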

  1. Basic firefly algorithm for document clustering

    NASA Astrophysics Data System (ADS)

    Mohammed, Athraa Jasim; Yusof, Yuhanis; Husni, Husniza

    2015-12-01

    Document clustering plays a significant role in Information Retrieval (IR), where it organizes documents prior to the retrieval process. To date, various clustering algorithms have been proposed, including K-means and Particle Swarm Optimization. Even though these algorithms have been widely applied in many disciplines due to their simplicity, such approaches tend to be trapped in a local minimum during the search for an optimal solution. To address this shortcoming, this paper proposes a Basic Firefly (Basic FA) algorithm to cluster text documents. The algorithm employs the Average Distance to Document Centroid (ADDC) as the objective function of the search. Experiments utilizing the proposed algorithm were conducted on the 20Newsgroups benchmark dataset. Results demonstrate that the Basic FA generates more robust and compact clusters than those produced by K-means and Particle Swarm Optimization (PSO).

  2. Efficient algorithms for the laboratory discovery of optimal quantum controls

    NASA Astrophysics Data System (ADS)

    Turinici, Gabriel; Le Bris, Claude; Rabitz, Herschel

    2004-07-01

    The laboratory closed-loop optimal control of quantum phenomena, expressed as minimizing a suitable cost functional, is currently implemented through an optimization algorithm coupled to the experimental apparatus. In practice, the most commonly used search algorithms are variants of genetic algorithms. As an alternative choice, a direct search deterministic algorithm is proposed in this paper. For the simple simulations studied here, it outperforms the existing approaches. An additional algorithm is introduced in order to reveal some properties of the cost functional landscape.

  3. Optimized Swinging Door Algorithm for Wind Power Ramp Event Detection: Preprint

    SciTech Connect

    Cui, Mingjian; Zhang, Jie; Florita, Anthony R.; Hodge, Bri-Mathias; Ke, Deping; Sun, Yuanzhang

    2015-08-06

    Significant wind power ramp events (WPREs) are those that influence the integration of wind power, and they are a concern to the continued reliable operation of the power grid. As wind power penetration has increased in recent years, so has the importance of wind power ramps. In this paper, an optimized swinging door algorithm (SDA) is developed to improve ramp detection performance. Wind power time series data are segmented by the original SDA, and then all significant ramps are detected and merged through a dynamic programming algorithm. An application of the optimized SDA is provided to ascertain the optimal parameter of the original SDA. Measured wind power data from the Electric Reliability Council of Texas (ERCOT) are used to evaluate the proposed optimized SDA.
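
    A minimal sketch of the original swinging door idea that the optimized SDA tunes: a sample is archived only when no single segment from the last archived point can pass within a tolerance eps of all intermediate samples (the 'doors' closing). The tolerance eps is exactly the kind of parameter the optimization stage selects; the demo series is illustrative.

```python
import math

def swinging_door(points, eps):
    """Sketch of original swinging door compression.  points is a list
    of (t, y) with strictly increasing t; each point constrains the
    feasible segment slope to an interval, and a point is archived when
    the running intersection of those intervals becomes empty."""
    kept = [points[0]]
    (t0, y0), prev = points[0], points[0]
    slope_lo, slope_hi = float("-inf"), float("inf")
    for t, y in points[1:]:
        slope_lo = max(slope_lo, (y - eps - y0) / (t - t0))
        slope_hi = min(slope_hi, (y + eps - y0) / (t - t0))
        if slope_lo > slope_hi:          # the doors have closed
            kept.append(prev)            # archive the last fitting point
            t0, y0 = prev                # and open new doors from it
            slope_lo = (y - eps - y0) / (t - t0)
            slope_hi = (y + eps - y0) / (t - t0)
        prev = (t, y)
    kept.append(points[-1])
    return kept

# A wind-power-like curve: ramps survive, slow drift is compressed away.
series = [(t, 50 + 30 * math.sin(t / 20.0)) for t in range(200)]
print(len(series), "->", len(swinging_door(series, eps=2.0)))
```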

  4. New algorithm for efficient pattern recall using a static threshold with the Steinbuch Lernmatrix

    NASA Astrophysics Data System (ADS)

    Juan Carbajal Hernández, José; Sánchez Fernández, Luis P.

    2011-03-01

    An associative memory is a binary relationship between inputs and outputs, which is stored in a matrix M. The fundamental purpose of an associative memory is to recover correct output patterns from input patterns that can be altered by additive, subtractive or combined noise. The Steinbuch Lernmatrix, developed in 1961, was the first associative memory, and it is used as a pattern recognition classifier. However, a misclassification problem arises when crossbar saturation occurs. A new algorithm that corrects this misclassification in the Lernmatrix is proposed in this work. Results on crossbar saturation with fundamental patterns demonstrate better pattern recall with the new algorithm, and experiments with real data show a more efficient classifier when the algorithm is introduced into the original Lernmatrix. The thresholded Lernmatrix memory therefore emerges as a suitable alternative classifier for the pattern processing field.
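
    A minimal sketch of one common formulation of the Lernmatrix with arg-max recall; the paper's static-threshold recall, which corrects misclassification under crossbar saturation, would replace the arg-max step below. The learning-rule constants and demo patterns are illustrative.

```python
import numpy as np

def train_lernmatrix(X, labels, n_classes, eps=1.0):
    """One common Lernmatrix learning rule: for each training pair,
    reinforce weights of the true class where the input bit is 1 and
    weaken them where it is 0."""
    M = np.zeros((n_classes, X.shape[1]))
    for x, y in zip(X, labels):
        M[y] += np.where(x == 1, eps, -eps)
    return M

def recall(M, x):
    """Recall: the winning class maximises the activation M @ x; a
    thresholded variant replaces this arg-max with a static threshold."""
    a = M @ x
    return int(a.argmax()), a

X = np.array([[1, 0, 1, 0], [0, 1, 0, 1], [1, 1, 0, 0]])
M = train_lernmatrix(X, [0, 1, 2], n_classes=3)
noisy = np.array([1, 0, 1, 1])          # pattern 0 with one additive-noise bit
print(recall(M, noisy)[0])              # -> 0
```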

  5. An implementation of the look-ahead Lanczos algorithm for non-Hermitian matrices, part 1

    NASA Technical Reports Server (NTRS)

    Freund, Roland W.; Gutknecht, Martin H.; Nachtigal, Noel M.

    1990-01-01

    The nonsymmetric Lanczos method can be used to compute eigenvalues of large sparse non-Hermitian matrices or to solve large sparse non-Hermitian linear systems. However, the original Lanczos algorithm is susceptible to possible breakdowns and potential instabilities. We present an implementation of a look-ahead version of the Lanczos algorithm which overcomes these problems by skipping over those steps in which a breakdown or near-breakdown would occur in the standard process. The proposed algorithm can handle look-ahead steps of any length and is not restricted to steps of length 2, as earlier implementations are. Also, our implementation has the feature that it requires roughly the same number of inner products as the standard Lanczos process without look-ahead.

  6. An implementation of the look-ahead Lanczos algorithm for non-Hermitian matrices

    NASA Technical Reports Server (NTRS)

    Freund, Roland W.; Gutknecht, Martin H.; Nachtigal, Noel M.

    1991-01-01

    The nonsymmetric Lanczos method can be used to compute eigenvalues of large sparse non-Hermitian matrices or to solve large sparse non-Hermitian linear systems. However, the original Lanczos algorithm is susceptible to possible breakdowns and potential instabilities. An implementation is presented of a look-ahead version of the Lanczos algorithm that, except for the very special situation of an incurable breakdown, overcomes these problems by skipping over those steps in which a breakdown or near-breakdown would occur in the standard process. The proposed algorithm can handle look-ahead steps of any length and requires the same number of matrix-vector products and inner products as the standard Lanczos process without look-ahead.

  7. cOSPREY: A Cloud-Based Distributed Algorithm for Large-Scale Computational Protein Design.

    PubMed

    Pan, Yuchao; Dong, Yuxi; Zhou, Jingtian; Hallen, Mark; Donald, Bruce R; Zeng, Jianyang; Xu, Wei

    2016-09-01

    Finding the global minimum energy conformation (GMEC) of a huge combinatorial search space is the key challenge in computational protein design (CPD) problems. Traditional algorithms lack a scalable and efficient distributed design scheme, preventing researchers from taking full advantage of current cloud infrastructures. We design cloud OSPREY (cOSPREY), an extension to a widely used protein design software OSPREY, to allow the original design framework to scale to the commercial cloud infrastructures. We propose several novel designs to integrate both algorithm and system optimizations, such as GMEC-specific pruning, state search partitioning, asynchronous algorithm state sharing, and fault tolerance. We evaluate cOSPREY on three different cloud platforms using different technologies and show that it can solve a number of large-scale protein design problems that have not been possible with previous approaches.

  8. An optimal algorithm based on extended kalman filter and the data fusion for infrared touch overlay

    NASA Astrophysics Data System (ADS)

    Zhou, AiGuo; Cheng, ShuYi; Pan, Qiang Biao; Sun, Dong Yu

    2016-01-01

    Current infrared touch overlays have problems with touch point recognition that introduce burrs into the touch trajectory. This paper uses a target tracking algorithm to improve the recognition accuracy and smoothness of the infrared touch overlay. To deal with the nonlinear state estimation problem in touch point tracking, we use the extended Kalman filter in the target tracking algorithm, and we use a data fusion algorithm to match the estimated values with the original target trajectory. Experimental results on the infrared touch overlay demonstrate that the proposed target tracking approach improves touch point recognition and achieves a much smoother tracking trajectory than the existing approach.
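
    A simplified sketch of the smoothing step: a linear constant-velocity Kalman filter on 2D touch positions (the paper uses an extended Kalman filter for its nonlinear state model, plus a data fusion stage, neither reproduced here). Noise covariances and the demo trajectory are illustrative assumptions.

```python
import numpy as np

dt = 0.01                                   # assumed scan period of the overlay
F = np.eye(4); F[0, 2] = F[1, 3] = dt       # constant-velocity state model
H = np.zeros((2, 4)); H[0, 0] = H[1, 1] = 1 # we observe position only
Q = 1e-3 * np.eye(4)                        # process noise
R = 4.0 * np.eye(2)                         # measurement noise (sensor burrs)

def kalman_smooth(zs):
    x = np.array([zs[0][0], zs[0][1], 0.0, 0.0])
    P = np.eye(4)
    out = []
    for z in zs:
        x = F @ x                            # predict
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + R                  # update
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (np.asarray(z) - H @ x)
        P = (np.eye(4) - K @ H) @ P
        out.append(x[:2].copy())
    return np.array(out)

rng = np.random.default_rng(0)
t = np.arange(100) * dt
true = np.c_[100 * t, 50 * t]                # a straight swipe
meas = true + 2.0 * rng.standard_normal(true.shape)
smoothed = kalman_smooth(meas)
print("raw RMS error:", np.sqrt(((meas - true) ** 2).mean()),
      "smoothed:", np.sqrt(((smoothed - true) ** 2).mean()))
```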

  9. Quantum Color Image Encryption Algorithm Based on A Hyper-Chaotic System and Quantum Fourier Transform

    NASA Astrophysics Data System (ADS)

    Tan, Ru-Chao; Lei, Tong; Zhao, Qing-Min; Gong, Li-Hua; Zhou, Zhi-Hong

    2016-09-01

    To improve the slow processing speed of the classical image encryption algorithms and enhance the security of the private color images, a new quantum color image encryption algorithm based on a hyper-chaotic system is proposed, in which the sequences generated by the Chen's hyper-chaotic system are scrambled and diffused with three components of the original color image. Sequentially, the quantum Fourier transform is exploited to fulfill the encryption. Numerical simulations show that the presented quantum color image encryption algorithm possesses large key space to resist illegal attacks, sensitive dependence on initial keys, uniform distribution of gray values for the encrypted image and weak correlation between two adjacent pixels in the cipher-image.

  10. cOSPREY: A Cloud-Based Distributed Algorithm for Large-Scale Computational Protein Design.

    PubMed

    Pan, Yuchao; Dong, Yuxi; Zhou, Jingtian; Hallen, Mark; Donald, Bruce R; Zeng, Jianyang; Xu, Wei

    2016-09-01

    Finding the global minimum energy conformation (GMEC) of a huge combinatorial search space is the key challenge in computational protein design (CPD) problems. Traditional algorithms lack a scalable and efficient distributed design scheme, preventing researchers from taking full advantage of current cloud infrastructures. We design cloud OSPREY (cOSPREY), an extension to a widely used protein design software OSPREY, to allow the original design framework to scale to the commercial cloud infrastructures. We propose several novel designs to integrate both algorithm and system optimizations, such as GMEC-specific pruning, state search partitioning, asynchronous algorithm state sharing, and fault tolerance. We evaluate cOSPREY on three different cloud platforms using different technologies and show that it can solve a number of large-scale protein design problems that have not been possible with previous approaches. PMID:27154509

  11. Implementing EM and Viterbi algorithms for Hidden Markov Model in linear memory

    PubMed Central

    Churbanov, Alexander; Winters-Hilt, Stephen

    2008-01-01

    Background The Baum-Welch learning procedure for Hidden Markov Models (HMMs) provides a powerful tool for tailoring HMM topologies to data for use in knowledge discovery and clustering. A linear memory procedure recently proposed by Miklós, I. and Meyer, I.M. describes a memory sparse version of the Baum-Welch algorithm with modifications to the original probabilistic table topologies to make memory use independent of sequence length (and linearly dependent on state number). The original description of the technique has some errors that we amend. We then compare the corrected implementation on a variety of data sets with conventional and checkpointing implementations. Results We provide a correct recurrence relation for the emission parameter estimate and extend it to parameter estimates of the Normal distribution. To accelerate estimation of the prior state probabilities, and decrease memory use, we reverse the originally proposed forward sweep. We describe different scaling strategies necessary in all real implementations of the algorithm to prevent underflow. In this paper we also describe our approach to a linear memory implementation of the Viterbi decoding algorithm (with linearity in the sequence length, while memory use is approximately independent of state number). We demonstrate the use of the linear memory implementation on an extended Duration Hidden Markov Model (DHMM) and on an HMM with a spike detection topology. Comparing the various implementations of the Baum-Welch procedure we find that the checkpointing algorithm produces the best overall tradeoff between memory use and speed. In cases where sequence length is very large (for Baum-Welch), or state number is very large (for Viterbi), the linear memory methods outlined may offer some utility. Conclusion Our performance-optimized Java implementations of Baum-Welch algorithm are available at . The described method and implementations will aid sequence alignment, gene structure prediction, HMM
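
    As an illustration of the scaling strategy mentioned above, here is a minimal forward pass in which each alpha vector is renormalised and the log scale factors are accumulated, so the log-likelihood of arbitrarily long sequences is computed without underflow; the model values are illustrative.

```python
import numpy as np

def forward_scaled(A, B, pi, obs):
    """Forward algorithm with per-step scaling: each alpha vector is
    normalised to sum to 1 and the logs of the scaling factors are
    accumulated, avoiding the underflow any real implementation of
    Baum-Welch must prevent."""
    alpha = pi * B[:, obs[0]]
    loglik = np.log(alpha.sum())
    alpha = alpha / alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]   # one recursion step
        c = alpha.sum()                 # scaling factor
        loglik += np.log(c)
        alpha = alpha / c
    return loglik

A = np.array([[0.7, 0.3], [0.4, 0.6]])
B = np.array([[0.9, 0.1], [0.2, 0.8]])
pi = np.array([0.5, 0.5])
obs = np.random.default_rng(0).integers(0, 2, 10000)   # 10k steps, no underflow
print(forward_scaled(A, B, pi, obs))
```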

  12. QPSO-based adaptive DNA computing algorithm.

    PubMed

    Karakose, Mehmet; Cigdem, Ugur

    2013-01-01

    DNA (deoxyribonucleic acid) computing, a new computation model that uses DNA molecules for information storage, has been increasingly used for optimization and data analysis in recent years. However, the DNA computing algorithm has some limitations in terms of convergence speed, adaptability, and effectiveness. In this paper, a new approach for the improvement of DNA computing is proposed. This approach aims to run the DNA computing algorithm with adaptive parameters towards the desired goal using quantum-behaved particle swarm optimization (QPSO). The contributions of the proposed QPSO-based adaptive DNA computing algorithm are as follows: (1) the population size, crossover rate, maximum number of operations, enzyme and virus mutation rates, and fitness function of the DNA computing algorithm are tuned simultaneously for the adaptive process; (2) the adaptive algorithm is performed using the QPSO algorithm for goal-driven progress, faster operation, and flexibility in data; and (3) a numerical realization of the DNA computing algorithm with the proposed approach is implemented for system identification. Two experiments with different systems were carried out to evaluate the performance of the proposed approach, with comparative results. Experimental results obtained with Matlab and FPGA demonstrate the ability to provide effective optimization, considerable convergence speed, and high accuracy according to the DNA computing algorithm.

  13. Algorithmic causets

    NASA Astrophysics Data System (ADS)

    Bolognesi, Tommaso

    2011-07-01

    In the context of quantum gravity theories, several researchers have proposed causal sets as appropriate discrete models of spacetime. We investigate families of causal sets obtained from two simple models of computation - 2D Turing machines and network mobile automata - that operate on 'high-dimensional' supports, namely 2D arrays of cells and planar graphs, respectively. We study a number of quantitative and qualitative emergent properties of these causal sets, including dimension, curvature and localized structures, or 'particles'. We show how the possibility to detect and separate particles from background space depends on the choice between a global or local view at the causal set. Finally, we spot very rare cases of pseudo-randomness, or deterministic chaos; these exhibit a spontaneous phenomenon of 'causal compartmentation' that appears as a prerequisite for the occurrence of anything of physical interest in the evolution of spacetime.

  14. A Modified Decision Tree Algorithm Based on Genetic Algorithm for Mobile User Classification Problem

    PubMed Central

    Liu, Dong-sheng; Fan, Shu-jiang

    2014-01-01

    In order to offer mobile customers better service, we should first classify mobile users. Aimed at the limitations of previous classification methods, this paper puts forward a modified decision tree algorithm for mobile user classification, which introduces a genetic algorithm to optimize the results of the decision tree algorithm. We also take context information as classification attributes for the mobile user, and we classify the context into public context and private context classes. We then analyze the processes and operators of the algorithm. Finally, we conduct an experiment on mobile user data: the algorithm classifies mobile users into Basic service, E-service, Plus service, and Total service classes, and it also yields rules about the mobile users. Compared to the C4.5 decision tree algorithm and the SVM algorithm, the algorithm proposed in this paper has higher accuracy and greater simplicity. PMID:24688389

  15. A modified decision tree algorithm based on genetic algorithm for mobile user classification problem.

    PubMed

    Liu, Dong-sheng; Fan, Shu-jiang

    2014-01-01

    In order to offer mobile customers better service, we should first classify mobile users. Aimed at the limitations of previous classification methods, this paper puts forward a modified decision tree algorithm for mobile user classification, which introduces a genetic algorithm to optimize the results of the decision tree algorithm. We also take context information as classification attributes for the mobile user, and we classify the context into public context and private context classes. We then analyze the processes and operators of the algorithm. Finally, we conduct an experiment on mobile user data: the algorithm classifies mobile users into Basic service, E-service, Plus service, and Total service classes, and it also yields rules about the mobile users. Compared to the C4.5 decision tree algorithm and the SVM algorithm, the algorithm proposed in this paper has higher accuracy and greater simplicity. PMID:24688389

  16. Cognitive radio resource allocation based on coupled chaotic genetic algorithm

    NASA Astrophysics Data System (ADS)

    Zu, Yun-Xiao; Zhou, Jie; Zeng, Chang-Chang

    2010-11-01

    A coupled chaotic genetic algorithm for cognitive radio resource allocation, based on a genetic algorithm and a coupled logistic map, is proposed. A fitness function for cognitive radio resource allocation is provided. Simulations of cognitive radio resource allocation are conducted using the coupled chaotic genetic algorithm, a simple genetic algorithm, and a dynamic allocation algorithm. The simulation results show that, compared with the simple genetic and dynamic allocation algorithms, the coupled chaotic genetic algorithm reduces the total transmission power and the bit error rate in the cognitive radio system, and it converges faster.
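
    A minimal sketch of one plausible form of the coupled logistic map driving the chaotic operators (symmetric coupling of two logistic maps); the coupling strength and seeds are illustrative assumptions, not values from the paper.

```python
import numpy as np

def coupled_logistic(n, mu=4.0, eps=0.1, x0=0.23, y0=0.67):
    """Two symmetrically coupled logistic maps: each state is updated
    from a mix of its own logistic image and the other map's image,
    producing a pair of correlated chaotic sequences in (0, 1)."""
    f = lambda z: mu * z * (1 - z)
    xs, ys = np.empty(n), np.empty(n)
    x, y = x0, y0
    for i in range(n):
        x, y = (1 - eps) * f(x) + eps * f(y), (1 - eps) * f(y) + eps * f(x)
        xs[i], ys[i] = x, y
    return xs, ys

# Such chaotic streams can seed crossover points / mutation probabilities.
xs, ys = coupled_logistic(5)
print(np.round(xs, 3), np.round(ys, 3))
```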

  17. The Origin of Roman Numerals

    ERIC Educational Resources Information Center

    Dapre, P. A.

    1977-01-01

    A theory on the origin of Roman numerals proposes that the principal numbers can be stylized in terms of a square. It is speculated that the abacus or its equivalents, such as the counter or chequer-board, was used to count before the alphabet became common. (SW)

  18. A novel image-domain-based cone-beam computed tomography enhancement algorithm

    NASA Astrophysics Data System (ADS)

    Li, Xiang; Li, Tianfang; Yang, Yong; Heron, Dwight E.; Saiful Huq, M.

    2011-05-01

    Kilo-voltage (kV) cone-beam computed tomography (CBCT) plays an important role in image-guided radiotherapy. However, due to a large cone-beam angle, scatter effects significantly degrade the CBCT image quality and limit its clinical application. The goal of this study is to develop an image enhancement algorithm to reduce the low-frequency CBCT image artifacts, which are also called the bias field. The proposed algorithm is based on the hypothesis that image intensities of different types of materials in CBCT images are approximately globally uniform (in other words, a piecewise property). A maximum a posteriori probability framework was developed to estimate the bias field contribution from a given CBCT image. The performance of the proposed CBCT image enhancement method was tested using phantoms and clinical CBCT images. Compared to the original CBCT images, the corrected images using the proposed method achieved a more uniform intensity distribution within each tissue type and significantly reduced cupping and shading artifacts. In a head and a pelvic case, the proposed method reduced the Hounsfield unit (HU) errors within the region of interest from 300 HU to less than 60 HU. In a chest case, the HU errors were reduced from 460 HU to less than 110 HU. The proposed CBCT image enhancement algorithm demonstrated a promising result by the reduction of the scatter-induced low-frequency image artifacts commonly encountered in kV CBCT imaging.

  19. A finite size pencil beam algorithm for IMRT dose optimization: density corrections.

    PubMed

    Jeleń, U; Alber, M

    2007-02-01

    For beamlet-based IMRT optimization, fast but less accurate dose computation algorithms are frequently used, while more accurate algorithms are needed to recompute the final dose for verification. In order to speed up the optimization process and ensure close agreement between the dose in optimization and verification, proper consideration of dose gradients and tissue inhomogeneity effects should be ensured at every stage of the optimization. Due to their speed, pencil beam algorithms are often used for precalculation of beamlet dose distributions in IMRT treatment planning systems. However, accounting for tissue heterogeneities with these models requires the use of approximate rescaling methods. Recently, a finite size pencil beam (fsPB) algorithm, based on a simple and small set of data, was proposed which was specifically designed for the purpose of dose pre-computation in beamlet-based IMRT. The present work describes the incorporation of 3D density corrections, based on Monte Carlo simulations in heterogeneous phantoms, into this method, improving the algorithm's accuracy in inhomogeneous geometries while keeping its original speed and simplicity of commissioning. The algorithm affords the full accuracy of 3D density corrections at every stage of the optimization, hence providing the means for density-related fluence modulation such as penumbra shaping at field edges. PMID:17228109

  1. Cooperative scheduling of imaging observation tasks for high-altitude airships based on propagation algorithm.

    PubMed

    Chuan, He; Dishan, Qiu; Jin, Liu

    2012-01-01

    The cooperative scheduling problem for imaging observation tasks on high-altitude airships is discussed. A constraint programming model is established by analyzing the main constraints, taking the maximum task benefit and the minimum cruising distance as two optimization objectives. The cooperative scheduling problem of high-altitude airships is converted into a main problem and a subproblem by adopting a hierarchical architecture. The solution to the main problem constructs the preliminary matching between tasks and observation resources in order to reduce the search space of the original problem, while the solution to the subproblem detects the key nodes that each airship needs to fly through in sequence, so as to obtain the cruising path. First, the task set is divided using the k-core neighborhood growth cluster algorithm (K-NGCA). Then, a novel swarm intelligence algorithm named the propagation algorithm (PA) is combined with the key node search algorithm (KNSA) to optimize the cruising path of each airship and determine the execution time interval of each task. This paper also provides the realization approach of the above algorithms and introduces in detail the encoding rules, search models, and propagation mechanism of the PA. Finally, the application results and comparative analysis show that the proposed models and algorithms are effective and feasible. PMID:23365522

  2. Cooperative Scheduling of Imaging Observation Tasks for High-Altitude Airships Based on Propagation Algorithm

    PubMed Central

    Chuan, He; Dishan, Qiu; Jin, Liu

    2012-01-01

    The cooperative scheduling problem for imaging observation tasks on high-altitude airships is discussed. A constraint programming model is established by analyzing the main constraints, taking the maximum task benefit and the minimum cruising distance as two optimization objectives. The cooperative scheduling problem of high-altitude airships is converted into a main problem and a subproblem by adopting a hierarchical architecture. The solution to the main problem constructs the preliminary matching between tasks and observation resources in order to reduce the search space of the original problem, while the solution to the subproblem detects the key nodes that each airship needs to fly through in sequence, so as to obtain the cruising path. First, the task set is divided using the k-core neighborhood growth cluster algorithm (K-NGCA). Then, a novel swarm intelligence algorithm named the propagation algorithm (PA) is combined with the key node search algorithm (KNSA) to optimize the cruising path of each airship and determine the execution time interval of each task. This paper also provides the realization approach of the above algorithms and introduces in detail the encoding rules, search models, and propagation mechanism of the PA. Finally, the application results and comparative analysis show that the proposed models and algorithms are effective and feasible. PMID:23365522

  3. A Winner Determination Algorithm for Combinatorial Auctions Based on Hybrid Artificial Fish Swarm Algorithm

    NASA Astrophysics Data System (ADS)

    Zheng, Genrang; Lin, ZhengChun

    The problem of winner determination in combinatorial auctions is a hot topic in electronic business and an NP-hard problem. A Hybrid Artificial Fish Swarm Algorithm (HAFSA), which combines a First Suite Heuristic Algorithm (FSHA) with the Artificial Fish Swarm Algorithm (AFSA), is proposed to solve the problem, based on the theory of the AFSA. Experimental results show that the HAFSA is a rapid and efficient algorithm for the winner determination problem. Compared with the Ant Colony Optimization algorithm, it performs well and has broad application prospects.

  4. Modifications to Axially Symmetric Simulations Using New DSMC (2007) Algorithms

    NASA Technical Reports Server (NTRS)

    Liechty, Derek S.

    2008-01-01

    Several modifications aimed at improving physical accuracy are proposed for solving axially symmetric problems building on the DSMC (2007) algorithms introduced by Bird. Originally developed to solve nonequilibrium, rarefied flows, the DSMC method is now regularly used to solve complex problems over a wide range of Knudsen numbers. These new algorithms include features such as nearest neighbor collisions excluding the previous collision partners, separate collision and sampling cells, automatically adaptive variable time steps, a modified no-time counter procedure for collisions, and discontinuous and event-driven physical processes. Axially symmetric solutions require radial weighting for the simulated molecules since the molecules near the axis represent fewer real molecules than those farther away from the axis due to the difference in volume of the cells. In the present methodology, these radial weighting factors are continuous, linear functions that vary with the radial position of each simulated molecule. It is shown that how one defines the number of tentative collisions greatly influences the mean collision time near the axis. The method by which the grid is treated for axially symmetric problems also plays an important role near the axis, especially for scalar pressure. A new method to treat how the molecules are traced through the grid is proposed to alleviate the decrease in scalar pressure at the axis near the surface. Also, a modification to the duplication buffer is proposed to vary the duplicated molecular velocities while retaining the molecular kinetic energy and axially symmetric nature of the problem.

  5. Analysis of the geophysical data using a posteriori algorithms

    NASA Astrophysics Data System (ADS)

    Voskoboynikova, Gyulnara; Khairetdinov, Marat

    2016-04-01

    The monitoring, prediction and prevention of extraordinary natural and technogenic events are among the priority problems of our time. These events include earthquakes, volcanic eruptions, lunar-solar tides, landslides, falling celestial bodies, explosions of stockpiled ammunition, and numerous quarry explosions in open coal mines that provoke technogenic earthquakes. Monitoring is based on a number of successive stages, which include remote registration of event responses and measurement of the main parameters, such as the arrival times of seismic waves or the original waveforms. At the final stage, the inverse problems associated with determining the geographic location and time of the registered event are solved. Therefore, improving the accuracy of parameter estimation from the original records under high noise is an important problem. As is known, the main measurement errors arise from the influence of external noise, the difference between the real and model structures of the medium, imprecision in determining the time at the event epicenter, and instrumental errors. Therefore, a posteriori algorithms that are more accurate than known algorithms are proposed and investigated. They are based on a combination of a discrete optimization method and a fractal approach for joint detection and estimation of arrival times in quasi-periodic waveform sequences in geophysical monitoring problems, with improved accuracy. Existing alternative approaches to these problems do not provide the required accuracy. The proposed algorithms are considered for the tasks of vibration sounding of the Earth during lunar and solar tides and for the problem of monitoring borehole seismic source locations in trade drilling.

  6. A swaying object detection algorithm

    NASA Astrophysics Data System (ADS)

    Wang, Shidong; Rong, Jianzhong; Zhou, Dechuang; Wang, Jian

    2013-07-01

    Moving object detection is one of the most important preliminary steps in video analysis. Some moving objects, such as venting steam, fire, and smoke, have a distinctive motion signature: their lower part stays essentially fixed while their upper part sways back and forth. Based on this motion feature, a swaying object detection algorithm is presented in this paper. First, a fuzzy integral is adopted to fuse color features for extracting moving objects from video frames. Second, a swaying identification algorithm based on centroid calculation is used to distinguish swaying objects from other moving objects. Experiments show that the proposed method is effective at detecting swaying objects.
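
    The check below illustrates this motion signature, assuming binary masks of one tracked object over a window of frames: the centroid of the lower half should stay put while the centroid of the upper half oscillates. The thresholds and the half-and-half split are assumptions for illustration, not the paper's centroid procedure.

    ```python
    import numpy as np

    def is_swaying(masks, still_tol=2.0, sway_min=5.0):
        """masks: list of binary (H, W) arrays for one tracked moving object."""
        lower_x, upper_x = [], []
        for m in masks:
            ys, xs = np.nonzero(m)
            mid = (ys.min() + ys.max()) / 2.0
            lower_x.append(xs[ys > mid].mean())   # image y grows downward
            upper_x.append(xs[ys <= mid].mean())
        # lower centroid nearly fixed, upper centroid oscillating sideways
        return np.std(lower_x) < still_tol and np.std(upper_x) > sway_min
    ```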

  7. Atmospheric channel for bistatic optical communication: simulation algorithms

    NASA Astrophysics Data System (ADS)

    Belov, V. V.; Tarasenkov, M. V.

    2015-11-01

    Three algorithms for statistical simulation of the impulse response (IR) of an atmospheric optical communication channel are considered: a local estimate algorithm, a double local estimate algorithm, and an algorithm proposed by us. For the example of a homogeneous molecular atmosphere, it is demonstrated that the double local estimate algorithm and the proposed algorithm are more efficient than the local estimate algorithm. For small optical path lengths the proposed algorithm is more efficient, while for large optical path lengths the double local estimate algorithm is more efficient. Using the proposed algorithm, the communication quality is estimated for a particular atmospheric channel under conditions of intermediate turbidity. Communication quality is characterized by the maximum IR, the time of the maximum IR, the integral IR, and the bandwidth of the communication channel. Calculations of these criteria demonstrate that communication is most efficient when the point of intersection of the directions toward the source and the receiver lies closest to the source.

  8. A constraint consensus memetic algorithm for solving constrained optimization problems

    NASA Astrophysics Data System (ADS)

    Hamza, Noha M.; Sarker, Ruhul A.; Essam, Daryl L.; Deb, Kalyanmoy; Elsayed, Saber M.

    2014-11-01

    Constraint handling is an important aspect of evolutionary constrained optimization. Currently, the mechanisms used for constraint handling with evolutionary algorithms mainly assist the selection process, not the actual search process. In this article, a genetic algorithm is first combined with a class of search methods, known as constraint consensus methods, that help infeasible individuals move toward the feasible region. This approach is also integrated with a memetic algorithm. The proposed algorithm is tested and analysed by solving two sets of standard benchmark problems, and the results are compared with other state-of-the-art algorithms. The comparisons show that the proposed algorithm outperforms other similar algorithms. The algorithm has also been applied to a practical economic load dispatch problem, where it likewise shows superior performance over other algorithms.
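
    A minimal sketch of one constraint consensus move in the style of Chinneck's method follows, assuming inequality constraints g_i(x) <= 0 with gradients supplied by the caller; the paper's specific variant and its GA/memetic integration are not reproduced here.

    ```python
    import numpy as np

    def consensus_step(x, constraints):
        """One consensus move. constraints: list of (g, grad_g) callables."""
        vectors = []
        for g, grad_g in constraints:
            v = g(x)
            if v > 0:                        # violated constraint
                grad = grad_g(x)
                # feasibility vector: the step that zeros g to first order
                vectors.append(-v * grad / np.dot(grad, grad))
        if not vectors:
            return x                         # already feasible
        return x + np.mean(vectors, axis=0)  # consensus of the vectors

    # Example: pull a point toward the unit disk {x : ||x||^2 - 1 <= 0}
    g = lambda x: np.dot(x, x) - 1.0
    grad_g = lambda x: 2.0 * x
    print(consensus_step(np.array([2.0, 0.0]), [(g, grad_g)]))  # -> [1.25, 0]
    ```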

  9. Improved algorithm for calculating the Chandrasekhar function

    NASA Astrophysics Data System (ADS)

    Jablonski, A.

    2013-02-01

    …number of abscissas N. (4) For Romberg quadrature, to optimize performance, the mixed algorithm C was proposed, in which algorithm A is used for arguments x <= x0 = 0.4 while algorithm B is used for x > 0.4 [1]. For Gauss-Legendre quadrature, the limit x0 was found to depend on the number of abscissas N. For each value of N considered, the time to calculate the H function was determined for pairs of arguments uniformly distributed in the ranges 0 <= x <= 0.05, 0 <= omega <= 1 and in the ranges 0.05 <= x <= 1, 0 <= omega <= 1. As shown in Fig. 2 (comparison of the running times of algorithms A and B; open circles mark where algorithm B is faster, full circles where algorithm A is faster), for N = 64 algorithm A is faster than algorithm B for x <= 0.0225. Thus the value x0 = 0.0225 is proposed for the mixed algorithm C when Gauss-Legendre quadrature with N = 64 is used. Similar computer experiments performed for other values of N are summarized below (the flag L is one of the input parameters of the subroutine GAUSS):

    L   N    x0
    1   16   0.25
    2   20   0.15
    3   24   0.10
    4   32   0.050
    5   40   0.030
    6   48   0.045
    7   64   0.0225 (recommended)
    8   80   0.0125
    9   96   0.020

    In the programs implementing algorithms A, B, and C (CHANDRA, CHANDRB, and CHANDRC), Gauss-Legendre quadrature with N = 64 is currently set. As follows from Fig. 1, algorithm B (and consequently algorithm C) is the fastest in that case. It is still possible to change the number of abscissas; the flag L then has to be modified in lines 165, 169, 185, 189, and 304 of program CHANDRAS_v2, and the value of x0 in line 111 has to be adjusted according to the table above. (5) The above modifications of the code did not affect the accuracy of the calculated Chandrasekhar function compared to the original code [1]. For the pairs of arguments shown in Fig. 2, the accuracy of the H function calculated from algorithms A and B reached at…
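
    The role of the abscissa count N can be illustrated with a short sketch that evaluates the H function by Gauss-Legendre quadrature using the classical stable fixed-point identity 1/H(x) = sqrt(1 - omega) + (omega/2) int_0^1 t H(t)/(x + t) dt. This is an independent illustration under those standard assumptions, not the CHANDRA/CHANDRB/CHANDRC code itself.

    ```python
    import numpy as np

    def chandrasekhar_h(x, omega, n=64, tol=1e-12, max_iter=2000):
        """Evaluate H(x) for single-scattering albedo omega (0 <= omega < 1)."""
        nodes, weights = np.polynomial.legendre.leggauss(n)
        t = 0.5 * (nodes + 1.0)     # Gauss-Legendre nodes mapped to [0, 1]
        w = 0.5 * weights
        h = np.ones_like(t)         # H evaluated at the quadrature nodes
        for _ in range(max_iter):
            # integral[i] ~= int_0^1 t' H(t') / (t_i + t') dt'
            integral = ((w * t * h)[:, None] / (t[:, None] + t[None, :])).sum(axis=0)
            h_new = 1.0 / (np.sqrt(1.0 - omega) + 0.5 * omega * integral)
            if np.max(np.abs(h_new - h)) < tol:
                h = h_new
                break
            h = h_new
        integral_x = (w * t * h / (x + t)).sum()
        return 1.0 / (np.sqrt(1.0 - omega) + 0.5 * omega * integral_x)

    print(chandrasekhar_h(0.5, 0.9))   # H at x = 0.5 for omega = 0.9
    ```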

  10. Monte Carlo algorithm for free energy calculation.

    PubMed

    Bi, Sheng; Tong, Ning-Hua

    2015-07-01

    We propose a Monte Carlo algorithm for free energy calculation based on configuration space sampling. An upward or downward temperature scan can be used to produce F(T). We implement this algorithm for the Ising model on square and triangular lattices. Comparison with the exact free energy shows excellent agreement. We analyze the properties of this algorithm and compare it with the Wang-Landau algorithm, which samples in energy space. The method is applicable to general classical statistical models, and the possibility of extending it to quantum systems is discussed.
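
    For contrast, here is a baseline sketch of free-energy estimation for a small 2D Ising lattice by Metropolis sampling plus thermodynamic integration over an upward inverse-temperature scan, using beta*F(beta) = -N*ln(2) + int_0^beta <E> dbeta'. It is a standard reference method under these assumptions, not the configuration-space-sampling algorithm of the paper.

    ```python
    import numpy as np

    def mean_energy(L, beta, sweeps=2000, rng=None):
        """Average energy of an L x L ferromagnetic Ising lattice (J = 1)."""
        rng = np.random.default_rng(0) if rng is None else rng
        s = rng.choice([-1, 1], size=(L, L))
        samples = []
        for sweep in range(sweeps):
            for _ in range(L * L):           # one Metropolis sweep
                i, j = rng.integers(L, size=2)
                nb = (s[(i+1) % L, j] + s[(i-1) % L, j]
                      + s[i, (j+1) % L] + s[i, (j-1) % L])
                dE = 2.0 * s[i, j] * nb      # cost of flipping s[i, j]
                if dE <= 0 or rng.random() < np.exp(-beta * dE):
                    s[i, j] = -s[i, j]
            if sweep >= sweeps // 2:         # keep samples after burn-in
                E = -sum(s[i, j] * (s[(i+1) % L, j] + s[i, (j+1) % L])
                         for i in range(L) for j in range(L))
                samples.append(E)
        return np.mean(samples)

    def free_energy(L, beta_max, n_beta=16):
        # start near beta = 0, where <E> ~ 0, so the truncated piece is tiny
        betas = np.linspace(1e-3, beta_max, n_beta)
        e_avg = np.array([mean_energy(L, b) for b in betas])
        beta_f = -L * L * np.log(2.0) + np.trapz(e_avg, betas)
        return beta_f / beta_max             # F at beta_max

    print(free_energy(L=8, beta_max=0.4))
    ```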

  11. An Adaptive Unified Differential Evolution Algorithm for Global Optimization

    SciTech Connect

    Qiang, Ji; Mitchell, Chad

    2014-11-03

    In this paper, we propose a new adaptive unified differential evolution algorithm for single-objective global optimization. Instead of the multiple mutation strategies proposed in conventional differential evolution algorithms, this algorithm employs a single equation unifying multiple strategies into one expression. It has the virtue of mathematical simplicity and also provides users the flexibility for broader exploration of the space of mutation operators. By making all control parameters in the proposed algorithm self-adaptively evolve during the process of optimization, it frees the application users from the burden of choosing appropriate control parameters and also improves the performance of the algorithm. In numerical tests using thirteen basic unimodal and multimodal functions, the proposed adaptive unified algorithm shows promising performance in comparison to several conventional differential evolution algorithms.
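
    A minimal sketch of what a single-equation, "unified" mutation can look like follows, assuming a weighted blend of best-, random- and difference-based terms; the exact unified expression and the self-adaptation of the weights in the paper are not reproduced, so F1..F3 here are illustrative placeholders.

    ```python
    import numpy as np

    def unified_mutation(pop, fitness, i, F1, F2, F3, rng):
        """One mutant vector for member i from a single blended expression."""
        n = len(pop)
        best = pop[np.argmin(fitness)]
        r1, r2, r3 = rng.choice([k for k in range(n) if k != i], 3, replace=False)
        x = pop[i]
        return (x + F1 * (best - x)            # exploitation toward the best
                  + F2 * (pop[r1] - x)         # attraction to a random member
                  + F3 * (pop[r2] - pop[r3]))  # classic random difference
    ```

    Setting one weight to zero recovers familiar special cases (e.g., F1 = 0 leaves a purely random-based strategy), which is the flexibility the single-equation formulation is meant to expose.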

  12. Origin of the Moon

    NASA Technical Reports Server (NTRS)

    Stevenson, David

    2006-01-01

    Many ideas have been proposed for the origin of the Moon, but only one has stood the test of time: during the formation of Earth, about 4.5 billion years ago, our planet was hit by a projectile the size of Mars, creating a close-in disk of molten material in Earth orbit. From this material, our Moon formed in about a thousand years. I will explain how the properties of the Moon can be explained by this model and why the alternative ideas are either incorrect or highly improbable. I will also talk about some new developments in this area that come from considerations of chemistry and isotopic measurements. Finally, I will talk about what we don't know and why the Moon is still an interesting place for further exploration.

  13. A Learning Algorithm for Multimodal Grammar Inference.

    PubMed

    D'Ulizia, A; Ferri, F; Grifoni, P

    2011-12-01

    The high costs of developing and maintaining multimodal grammars for integrating and understanding input in multimodal interfaces have led to the investigation of novel algorithmic solutions for automating grammar generation and updating. Many algorithms for context-free grammar inference have been developed in the natural language processing literature. An extension of these algorithms toward the inference of multimodal grammars is necessary for multimodal input processing. In this paper, we propose a novel grammar inference mechanism that allows us to learn a multimodal grammar from positive samples of multimodal sentences. The algorithm first generates the multimodal grammar that is able to parse the positive sample sentences and afterward makes use of two learning operators and the minimum description length metric to improve the grammar description and to avoid the over-generalization problem. The experimental results highlight the acceptable performance of the proposed algorithm, which has a very high probability of parsing valid sentences.

  14. An enhanced algorithm for multiple sequence alignment of protein sequences using genetic algorithm

    PubMed Central

    Kumar, Manish

    2015-01-01

    One of the most fundamental operations in biological sequence analysis is multiple sequence alignment (MSA). The basic goal of the multiple sequence alignment problem is to determine the most biologically plausible alignment of protein or DNA sequences. In this paper, an alignment method using a genetic algorithm for multiple sequence alignment is proposed. Two genetic operators, crossover and mutation, were defined and implemented with the proposed method in order to study the population evolution and the quality of the aligned sequences. The proposed method is assessed on a protein benchmark dataset, BAliBASE, by comparing the obtained results to those obtained with other alignment algorithms, e.g., SAGA, RBT-GA, PRRP, HMMT, SB-PIMA, CLUSTALX, CLUSTAL W, DIALIGN and PILEUP8. Experiments on a wide range of data have shown that the proposed algorithm is much better (in terms of score) than previously proposed algorithms in its ability to achieve high alignment quality.

  15. The origins of originality: the neural bases of creative thinking and originality.

    PubMed

    Shamay-Tsoory, S G; Adler, N; Aharon-Peretz, J; Perry, D; Mayseless, N

    2011-01-01

    Although creativity has been related to prefrontal activity, recent neurological case studies postulate that patients with left frontal and temporal degeneration involving deterioration of language abilities may actually develop de novo artistic abilities. In this study, we propose a neural and cognitive model according to which a balance between the two hemispheres affects a major aspect of creative cognition, namely, originality. In order to examine the neural basis of originality, that is, the ability to produce statistically infrequent ideas, patients with localized lesions in the medial prefrontal cortex (mPFC), inferior frontal gyrus (IFG), and posterior parietal and temporal cortex (PC) were assessed with two tasks involving divergent thinking and originality. Results indicate that lesions in the mPFC involved the most profound impairment in originality. Furthermore, precise anatomical mapping of the lesions indicated that while the extent of lesions in the right mPFC was associated with impaired originality, lesions in the left PC were associated with somewhat elevated levels of originality. A positive correlation between creativity scores and left PC lesions indicated that the larger the lesion in this area, the greater the originality; a negative correlation was observed between originality scores and lesions in the right mPFC. It is concluded that the right mPFC is part of a right fronto-parietal network responsible for producing original ideas. It is possible that more linear cognitive processing, such as language mediated by left-hemisphere structures, interferes with creative cognition; lesions in the left hemisphere may therefore be associated with elevated levels of originality.

  16. Novel and efficient tag SNPs selection algorithms.

    PubMed

    Chen, Wen-Pei; Hung, Che-Lun; Tsai, Suh-Jen Jane; Lin, Yaw-Ling

    2014-01-01

    SNPs are the most abundant form of genetic variation among species, and association studies between complex diseases and SNPs or haplotypes have received great attention. However, these studies are restricted by the cost of genotyping all SNPs; thus, it is necessary to find smaller subsets, or tag SNPs, representing the rest of the SNPs. In fact, the existing tag SNP selection algorithms are notoriously time-consuming. An efficient algorithm for tag SNP selection is presented and applied to analyze the HapMap YRI data. The experimental results show that the proposed algorithm achieves better performance than the existing tag SNP selection algorithms; in most cases it is at least ten times faster than existing methods, and in many cases, when the redundancy ratio of the block is high, it can even be thousands of times faster. Tools and web services for haplotype block analysis, integrated with the Hadoop MapReduce framework, have also been developed using the proposed algorithm as their computation kernel.

  17. Library of Continuation Algorithms

    2005-03-01

    LOCA (Library of Continuation Algorithms) is scientific software written in C++ that provides advanced analysis tools for nonlinear systems. In particular, it provides parameter continuation algorithms, bifurcation tracking algorithms, and drivers for linear stability analysis. The algorithms are aimed at large-scale applications that use Newton's method for their nonlinear solves.

  18. Visualizing output for a data learning algorithm

    NASA Astrophysics Data System (ADS)

    Carson, Daniel; Graham, James; Ternovskiy, Igor

    2016-05-01

    This paper details the process we went through to visualize the output of our data learning algorithm. We have been developing a hierarchical self-structuring learning algorithm based on the general principles of the LaRue model. One proposed application of this algorithm is traffic analysis, chosen because it is conceptually easy to follow and there is a significant amount of existing data and related research material with which to work. While we chose vehicle tracking for our initial approach, it is by no means the only target of our algorithm; flexibility is the end goal, but we still needed somewhere to start. To that end, this paper details our creation of the visualization GUI for our algorithm, the features we included, and the initial results we obtained from running the algorithm on a few of the traffic-based scenarios we designed.

  19. A novel chaos danger model immune algorithm

    NASA Astrophysics Data System (ADS)

    Xu, Qingyang; Wang, Song; Zhang, Li; Liang, Ying

    2013-11-01

    Making use of the ergodicity and randomness of chaos, a novel chaos danger model immune algorithm (CDMIA) is presented by combining the benefits of chaos with the danger model immune algorithm (DMIA). To maintain the diversity of antibodies and ensure the performance of the algorithm, two chaotic operators are proposed: chaotic disturbance is applied to the danger antibody to exploit the local solution space, and chaotic regeneration is applied to the safe antibody to explore the entire solution space. In addition, the performance of the algorithm is examined on several benchmark problems. The experimental results indicate that the diversity of the population is improved noticeably and that the CDMIA exhibits higher efficiency than the danger model immune algorithm and other optimization algorithms.

  20. Modified OMP Algorithm for Exponentially Decaying Signals

    PubMed Central

    Kazimierczuk, Krzysztof; Kasprzak, Paweł

    2015-01-01

    A group of signal reconstruction methods, referred to as compressed sensing (CS), has recently found a variety of applications in numerous branches of science and technology. However, the condition for the applicability of standard CS algorithms (e.g., orthogonal matching pursuit, OMP), namely the existence of a strictly sparse representation of the signal, is rarely met. Thus, dedicated algorithms for solving particular problems have to be developed. In this paper, we introduce a modification of OMP motivated by the nuclear magnetic resonance (NMR) application of CS. The algorithm is based on the fact that an NMR spectrum consists of Lorentzian peaks, and it matches a single Lorentzian peak in each of its iterations. Thus, we propose the name Lorentzian peak matching pursuit (LPMP). We also consider a modification of the algorithm that restricts the allowed positions of the Lorentzian peaks' centers. Our results show that the LPMP algorithm outperforms other CS algorithms when applied to exponentially decaying signals.
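
    A rough sketch of this matching-pursuit idea follows, assuming a parametric dictionary of damped complex exponentials (Lorentzian peaks in the spectrum) and an OMP-style least-squares refit of all selected atoms each iteration. Grid ranges and names are illustrative, and duplicate selections are not guarded against.

    ```python
    import numpy as np

    def lpmp_like(y, t, n_peaks, freqs=None, dampings=None):
        """Greedy pursuit over damped exponentials exp((2*pi*i*f - d) * t)."""
        freqs = np.linspace(0.0, 0.5, 256) if freqs is None else freqs
        dampings = np.linspace(0.5, 5.0, 8) if dampings is None else dampings
        params = [(f, d) for f in freqs for d in dampings]
        D = np.array([np.exp((2j * np.pi * f - d) * t) for f, d in params])
        D /= np.linalg.norm(D, axis=1, keepdims=True)   # unit-norm atoms
        residual, selected = y.astype(complex), []
        for _ in range(n_peaks):
            k = np.argmax(np.abs(D @ residual.conj())) # best-matching peak
            selected.append(k)
            A = D[selected].T                           # atoms as columns
            coeff, *_ = np.linalg.lstsq(A, y, rcond=None)
            residual = y - A @ coeff                    # OMP-style refit
        return [params[k] for k in selected], coeff

    # usage: t = np.arange(256) / 256.0; peaks, amps = lpmp_like(fid, t, 3)
    ```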

  1. Realization of a scalable Shor algorithm.

    PubMed

    Monz, Thomas; Nigg, Daniel; Martinez, Esteban A; Brandl, Matthias F; Schindler, Philipp; Rines, Richard; Wang, Shannon X; Chuang, Isaac L; Blatt, Rainer

    2016-03-01

    Certain algorithms for quantum computers are able to outperform their classical counterparts. In 1994, Peter Shor came up with a quantum algorithm that calculates the prime factors of a large number vastly more efficiently than a classical computer. For general scalability of such algorithms, hardware, quantum error correction, and the algorithmic realization itself need to be extensible. Here we present the realization of a scalable Shor algorithm, as proposed by Kitaev. We factor the number 15 by effectively employing and controlling seven qubits and four "cache qubits" and by implementing generalized arithmetic operations, known as modular multipliers. This algorithm has been realized scalably within an ion-trap quantum computer and returns the correct factors with a confidence level exceeding 99%. PMID:26941315

  2. Evaluation of proposed degradation algorithms for multiburst environments

    SciTech Connect

    Olness, D.U.; Warshawsky, A.S.

    1993-03-04

    This work is part of an ongoing effort of the Defense Nuclear Agency's Intermediate Dose Program to investigate the effects of intermediate radiation doses on combat unit performance. The objective of this study is to develop an improved technique for applying performance degradation factors to combat crews in simulated battles following multiple radiation doses on the tactical battlefield. A further objective of the study is to quantify differences in Janus results when crew performance factors, following multiple radiation doses, are obtained from the improved technique instead of from the technique used previously. In this paper, the authors describe and evaluate three methods previously identified for determining performance degradation from multiple exposures. They also present the observed quantitative differences in outcomes of conventional battles begun a few hours after multiple radiation exposures when alternate techniques for calculating combat crew performance degradation factors are included in the Janus combat simulation.

  4. Expectation-maximization algorithms for learning a finite mixture of univariate survival time distributions from partially specified class values

    SciTech Connect

    Lee, Youngrok

    2013-05-15

    Heterogeneity exists in a data set when samples from different classes are merged into it. Finite mixture models can represent the survival time distribution of a heterogeneous patient group by the proportions of each class and by the survival time distribution within each class. The heterogeneous data set cannot be explicitly decomposed into homogeneous subgroups unless all samples are precisely labeled by their origin classes; this impossibility of decomposition is the barrier to overcome in estimating finite mixture models. The expectation-maximization (EM) algorithm has been used to obtain maximum likelihood estimates of finite mixture models by soft decomposition of heterogeneous samples without labels for a subset or the entire set of data. In medical surveillance databases we can find partially labeled data: while not completely unlabeled, there is only imprecise information about class values. In this study we propose new EM algorithms that take advantage of such partial labels and thus incorporate more information than traditional EM algorithms. In particular, we propose four variants of the EM algorithm, named EM-OCML, EM-PCML, EM-HCML and EM-CPCML, each of which assumes a specific mechanism of missing class values. We conducted a simulation study on exponential survival trees with five classes and showed that the advantages of incorporating a substantial amount of partially labeled data can be highly significant. We also showed that model selection based on AIC values works fairly well for selecting the best proposed algorithm on each specific data set. A case study on a real-world gastric cancer data set provided by the Surveillance, Epidemiology and End Results (SEER) program showed the superiority of EM-CPCML not only to the other proposed EM algorithms but also to conventional supervised, unsupervised and semi-supervised learning algorithms.
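
    The following sketch shows the core idea of exploiting partial labels in EM for a two-component exponential mixture: labeled samples get hard responsibilities, unlabeled ones the usual soft posterior. It is a simplified stand-in for the EM-OCML/EM-PCML/EM-HCML/EM-CPCML variants, whose specific missingness mechanisms are not modeled here.

    ```python
    import numpy as np

    def em_partial_labels(x, labels, n_iter=200):
        """x: positive survival times; labels: 0, 1, or -1 when unknown."""
        w = np.array([0.5, 0.5])                    # mixing proportions
        lam = np.array([1.0, 0.1])                  # exponential rates
        for _ in range(n_iter):
            # E-step: soft responsibilities from the current parameters
            dens = lam * np.exp(-np.outer(x, lam))  # component densities (n, 2)
            resp = w * dens
            resp /= resp.sum(axis=1, keepdims=True)
            for c in (0, 1):                        # hard labels override E-step
                resp[labels == c] = np.eye(2)[c]
            # M-step: weighted MLE of weights and exponential rates
            nk = resp.sum(axis=0)
            w = nk / len(x)
            lam = nk / (resp * x[:, None]).sum(axis=0)
        return w, lam
    ```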

  5. Performance Comparison Of Evolutionary Algorithms For Image Clustering

    NASA Astrophysics Data System (ADS)

    Civicioglu, P.; Atasever, U. H.; Ozkan, C.; Besdok, E.; Karkinli, A. E.; Kesikoglu, A.

    2014-09-01

    Evolutionary computation tools are able to process real-valued numerical sets in order to extract suboptimal solutions of a designed problem. Data clustering algorithms have been used intensively for image segmentation in remote sensing applications. Despite the wide usage of evolutionary algorithms for data clustering, their clustering performance has been scarcely studied using clustering validation indexes. In this paper, recently proposed evolutionary algorithms (the Artificial Bee Colony Algorithm (ABC), Gravitational Search Algorithm (GSA), Cuckoo Search Algorithm (CS), Adaptive Differential Evolution Algorithm (JADE), Differential Search Algorithm (DSA) and Backtracking Search Optimization Algorithm (BSA)) and some classical image clustering techniques (k-means, FCM, SOM networks) have been used to cluster images, and their performances have been compared using four clustering validation indexes. Experimental results show that evolutionary algorithms give more reliable cluster centers than classical clustering techniques, but their convergence time is quite long.

  6. Research on Palmprint Identification Method Based on Quantum Algorithms

    PubMed Central

    Zhang, Zhanzhan

    2014-01-01

    Quantum image recognition is a technology that uses quantum algorithms to process image information; it can obtain better results than classical algorithms. In this paper, four different quantum algorithms are used in the three stages of palmprint recognition. First, a quantum adaptive median filtering algorithm is presented for palmprint filtering; comparison shows that it gives a better filtering result than the classical algorithm. Next, the quantum Fourier transform (QFT) is used to extract pattern features in a single operation thanks to quantum parallelism; the proposed algorithm exhibits an exponential speed-up compared with the discrete Fourier transform in feature extraction. Finally, quantum set operations and Grover's algorithm are used in palmprint matching. According to the experimental results, the quantum algorithm needs only on the order of the square root of N operations to find the target palmprint, whereas the traditional method needs N calculations. At the same time, the matching accuracy of the quantum algorithm is almost 100%.

  7. Resolving transmitter-of-opportunity origin uncertainty in passive coherent location systems

    NASA Astrophysics Data System (ADS)

    Tharmarasa, R.; Nandakumaran, N.; McDonald, Mike; Kirubarajan, T.

    2009-08-01

    Passive Coherent Location (PCL) systems use existing commercial signals (e.g., FM broadcast, digital TV) as illuminators of opportunity in air defence systems. PCL systems have many advantages over conventional radar systems, such as low cost, covert operation and low vulnerability to electronic countermeasures. Their main disadvantage is that the transmitter locations and the transmitted signals cannot be controlled. Thus, multiple transmitters may transmit the same signal/frequency inside the coverage region of the receiver, and multiple measurements that originated from different transmitters but were reflected by the same target will be received. Even though using multiple transmitters facilitates better estimates of the target states due to spatial diversity, one cannot use these measurements without resolving the transmitter and measurement origin uncertainties. This adds another level of complexity to the standard data association problem, where the uncertainty is only in measurement origins: two uncertainties must be resolved in order to track multiple targets, the measurement-to-target association and the measurement-to-transmitter association. In this work, a tracking algorithm is proposed to track multiple targets using PCL systems with the above data association uncertainties. The efficiency of the proposed algorithm is demonstrated on realistically simulated data.

  8. A new wavelet-based reconstruction algorithm for twin image removal in digital in-line holography

    NASA Astrophysics Data System (ADS)

    Hattay, Jamel; Belaid, Samir; Aguili, Taoufik; Lebrun, Denis

    2016-07-01

    Two original methods are proposed here for digital in-line hologram processing. First, we propose an entropy-based method to retrieve the focus plane, which is very useful for digital hologram reconstruction. Second, we introduce a new approach to remove the so-called twin image that appears in hologram reconstruction. This is achieved by means of the Blind Source Separation (BSS) technique. The proposed method consists of two steps: an Adaptive Quincunx Lifting Scheme (AQLS) and a statistical unmixing algorithm. The AQLS tool is based on the wavelet packet transform and maximizes the sparseness of the input holograms. The unmixing algorithm uses Independent Component Analysis (ICA). Experimental results confirm the ability of convolutive blind source separation to discard the unwanted twin image from in-line digital holograms.

  9. Efficient Hardware Implementation of the Lightweight Block Encryption Algorithm LEA

    PubMed Central

    Lee, Donggeon; Kim, Dong-Chan; Kwon, Daesung; Kim, Howon

    2014-01-01

    Recently, due to the advent of resource-constrained devices such as smartphones and other smart devices, the computing environment is changing. Because our daily life is deeply intertwined with ubiquitous networks, the importance of security is growing. A lightweight encryption algorithm is essential for secure communication between these kinds of resource-constrained devices, and many researchers have been investigating this field. Recently, a lightweight block cipher called LEA was proposed. LEA was originally targeted at efficient implementation on microprocessors: it is fast when implemented in software and has a small memory footprint. To suit recent technology, all required calculations use 32-bit wide operations. In addition, the algorithm is composed not of complex S-box-like structures but of simple Addition, Rotation, and XOR (ARX) operations. To the best of our knowledge, this paper is the first report on a comprehensive hardware implementation of LEA. We present various hardware structures and their implementation results according to key sizes. Even though LEA was originally targeted at software efficiency, it also shows high efficiency when implemented in hardware.
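
    To give a flavor of the ARX style, the toy round below combines XORed round keys, modular 32-bit addition, fixed rotations, and a word permutation. The rotation amounts and key handling are illustrative placeholders only; this is not the LEA specification (in particular, the key schedule is omitted entirely).

    ```python
    MASK = 0xFFFFFFFF

    def rol(x, r):
        """Rotate a 32-bit word left by r bits."""
        return ((x << r) | (x >> (32 - r))) & MASK

    def arx_round(state, rk):
        """One toy ARX round over four 32-bit words with round keys rk[0..5]."""
        x0, x1, x2, x3 = state
        y0 = rol(((x0 ^ rk[0]) + (x1 ^ rk[1])) & MASK, 9)
        y1 = rol(((x1 ^ rk[2]) + (x2 ^ rk[3])) & MASK, 27)  # = right-rotate by 5
        y2 = rol(((x2 ^ rk[4]) + (x3 ^ rk[5])) & MASK, 29)  # = right-rotate by 3
        return [y0, y1, y2, x0]                             # word permutation
    ```

    The appeal for both software and hardware is visible even in the toy version: every operation is a plain 32-bit add, XOR, or wire-level rotation, with no S-box lookup tables.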

  10. Dual-Layer Video Encryption using RSA Algorithm

    NASA Astrophysics Data System (ADS)

    Chadha, Aman; Mallik, Sushmit; Chadha, Ankit; Johar, Ravdeep; Mani Roja, M.

    2015-04-01

    This paper proposes a video encryption algorithm using RSA and Pseudo Noise (PN) sequences, aimed at applications requiring sensitive video information transfers. The system is primarily designed to work with files encoded using the Audio Video Interleaved (AVI) codec, although it can easily be ported for use with Moving Picture Experts Group (MPEG) encoded files. The audio and video components of the source separately undergo two layers of encryption to ensure a reasonable level of security. Encryption of the video component involves applying the RSA algorithm followed by PN-based encryption. Similarly, the audio component is first encrypted using PN and further subjected to encryption using the Discrete Cosine Transform. Combining these techniques yields an efficient system, resistant to security breaches and attacks, with favorable values of parameters such as encryption/decryption speed, encryption/decryption ratio and visual degradation. For applications requiring encryption of sensitive data, where stringent security requirements are of prime concern, the system yields negligible similarity in visual perception between the original and the encrypted video sequence. For applications where visual similarity is not a major concern, we limit the encryption task to a single level of encryption, accomplished using RSA, thereby quickening the encryption process. Although some similarity between the original and encrypted video is observed in this case, it is not enough to comprehend the happenings in the video.

  11. Novel quality-effective zooming algorithm for color filter array

    NASA Astrophysics Data System (ADS)

    Chung, Kuo-Liang; Yang, Wei-Jen; Yu, Jun-Hong; Yan, Wen-Ming; Fuh, Chiou-Shann

    2010-01-01

    Mosaic images are captured by a single charge-coupled device/complementary metal-oxide-semiconductor (CCD/CMOS) sensor with the Bayer color filter array. We present a new quality-effective zooming algorithm for mosaic images. First, based on adaptive heterogeneity projection masks and Sobel- and luminance-estimation-based masks, more accurate gradient information is extracted from the mosaic image directly. According to the extracted gradient information, the mosaic green (G) channel is first zoomed. To reduce color artifacts, instead of directly moving the original red (R) value to its right position and the blue (B) value to its lower position, the color difference interpolation is utilized to expand the G-R and G-B color difference values. Finally, the zoomed mosaic R and B channels can be constructed using the zoomed G channel and the two expanded color difference values; afterward, the zoomed mosaic image is obtained. Based on 24 popular test mosaic images, experimental results demonstrate that the proposed zooming algorithm has more than 1.79 dB quality improvement when compared with two previous zooming algorithms, one by Battiato et al. (2002) and the other by Lukac et al. (2005).

  12. Multipurpose image watermarking algorithm based on multistage vector quantization.

    PubMed

    Lu, Zhe-Ming; Xu, Dian-Guo; Sun, Sheng-He

    2005-06-01

    The rapid growth of digital multimedia and Internet technologies has made copyright protection, copy protection, and integrity verification three important issues in the digital world. To solve these problems, the digital watermarking technique has been presented and widely researched. Traditional watermarking algorithms are mostly based on discrete transform domains, such as the discrete cosine transform, discrete Fourier transform (DFT), and discrete wavelet transform (DWT). Most of these algorithms are good for only one purpose. Recently, some multipurpose digital watermarking methods have been presented, which can achieve the goal of content authentication and copyright protection simultaneously. However, they are based on DWT or DFT. Lately, several robust watermarking schemes based on vector quantization (VQ) have been presented, but they can only be used for copyright protection. In this paper, we present a novel multipurpose digital image watermarking method based on the multistage vector quantizer structure, which can be applied to image authentication and copyright protection. In the proposed method, the semi-fragile watermark and the robust watermark are embedded in different VQ stages using different techniques, and both of them can be extracted without the original image. Simulation results demonstrate the effectiveness of our algorithm in terms of robustness and fragility. PMID:15971780

  13. A joint image encryption and watermarking algorithm based on compressive sensing and chaotic map

    NASA Astrophysics Data System (ADS)

    Xiao, Di; Cai, Hong-Kun; Zheng, Hong-Ying

    2015-06-01

    In this paper, a compressive sensing (CS) and chaotic map-based joint image encryption and watermarking algorithm is proposed. The transform domain coefficients of the original image are scrambled by Arnold map firstly. Then the watermark is adhered to the scrambled data. By compressive sensing, a set of watermarked measurements is obtained as the watermarked cipher image. In this algorithm, watermark embedding and data compression can be performed without knowing the original image; similarly, watermark extraction will not interfere with decryption. Due to the characteristics of CS, this algorithm features compressible cipher image size, flexible watermark capacity, and lossless watermark extraction from the compressed cipher image as well as robustness against packet loss. Simulation results and analyses show that the algorithm achieves good performance in the sense of security, watermark capacity, extraction accuracy, reconstruction, robustness, etc. Project supported by the Open Research Fund of Chongqing Key Laboratory of Emergency Communications, China (Grant No. CQKLEC, 20140504), the National Natural Science Foundation of China (Grant Nos. 61173178, 61302161, and 61472464), and the Fundamental Research Funds for the Central Universities, China (Grant Nos. 106112013CDJZR180005 and 106112014CDJZR185501).

  14. Newton Algorithms for Analytic Rotation: An Implicit Function Approach

    ERIC Educational Resources Information Center

    Boik, Robert J.

    2008-01-01

    In this paper implicit function-based parameterizations for orthogonal and oblique rotation matrices are proposed. The parameterizations are used to construct Newton algorithms for minimizing differentiable rotation criteria applied to "m" factors and "p" variables. The speed of the new algorithms is compared to that of existing algorithms and to…

  15. [Decision on the rational algorithm in treatment of kidney cysts].

    PubMed

    Antonov, A V; Ishutin, E Iu; Guliev, R N

    2012-01-01

    The article presents an algorithm for the diagnosis and treatment of renal cysts and other liquid neoplasms of the retroperitoneal space, based on an analysis of 270 case histories. The algorithm takes into account the achievements of modern medical technologies developed in recent years. Application of the proposed algorithm should raise the efficiency of diagnosis and the quality of treatment of patients with renal cysts.

  16. Calculation of shock-wave parameters far from origination by combined numerical-analytical methods

    NASA Astrophysics Data System (ADS)

    Potapkin, A. V.; Moskvichev, D. Yu.

    2011-03-01

    An algorithm is proposed for calculating the parameters of weak shock waves at large distances from their origin. In chosen meridional planes, the parameters of the near field of the three-dimensional flow are used to determine the streamwise coordinates of "phantom bodies" by linear relations. When the initial body is replaced by a system of phantom bodies for which discrete values of the Whitham function are found, the far-field parameters are calculated by Whitham theory, independently in each meridional plane. Results calculated for a body with axial symmetry and for bodies with spatial symmetry are presented.

  17. A Unified Differential Evolution Algorithm for Global Optimization

    SciTech Connect

    Qiang, Ji; Mitchell, Chad

    2014-06-24

    In this paper, we propose a new unified differential evolution (uDE) algorithm for single-objective global optimization. Instead of selecting among multiple mutation strategies as in the conventional differential evolution algorithm, this algorithm employs a single equation as the mutation strategy. It has the virtue of mathematical simplicity and also provides users the flexibility for broader exploration of different mutation strategies. Numerical tests using twelve basic unimodal and multimodal functions show promising performance of the proposed algorithm in comparison to conventional differential evolution algorithms.

  18. Three penalized EM-type algorithms for PET image reconstruction.

    PubMed

    Teng, Yueyang; Zhang, Tie

    2012-06-01

    Based on Bayes theory, Green introduced the maximum a posteriori (MAP) algorithm to obtain smoothed reconstructions for positron emission tomography. This algorithm is flexible and convenient for most penalties, but its convergence is hard to guarantee. With the same goal, Fessler penalized a weighted least squares (WLS) estimator with a quadratic penalty and solved it with the successive over-relaxation (SOR) algorithm; however, that algorithm is time-consuming and difficult to parallelize. Anderson proposed another WLS estimator for faster convergence, for which few regularization methods have been studied. For the three regularized estimators above, we develop three new expectation-maximization (EM) type algorithms. Unlike MAP and SOR, the proposed algorithms derive their update rules by minimizing auxiliary functions constructed at the previous iterations, which ensures that the cost functions decrease monotonically. Experimental results demonstrate the robustness and effectiveness of the proposed algorithms.
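
    For orientation, here is a sketch of the classical unpenalized MLEM multiplicative update, x <- x * A^T(y / (A x)) / A^T 1, the kind of EM-type update such PET algorithms build on; the penalty terms that distinguish the three regularized estimators above are omitted from this sketch.

    ```python
    import numpy as np

    def mlem(A, y, n_iter=50, eps=1e-12):
        """A: (n_bins, n_pixels) system matrix; y: measured counts."""
        x = np.ones(A.shape[1])            # flat nonnegative start image
        sens = A.sum(axis=0)               # A^T 1, the sensitivity image
        for _ in range(n_iter):
            proj = A @ x                   # forward projection
            ratio = y / np.maximum(proj, eps)
            x *= (A.T @ ratio) / np.maximum(sens, eps)  # multiplicative step
        return x
    ```

    The multiplicative form keeps the image nonnegative automatically, which is one reason EM-type updates are attractive compared with SOR-style additive schemes.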

  19. Two New PRP Conjugate Gradient Algorithms for Minimization Optimization Models

    PubMed Central

    Yuan, Gonglin; Duan, Xiabin; Liu, Wenjie; Wang, Xiaoliang; Cui, Zengru; Sheng, Zhou

    2015-01-01

    Two new PRP conjugate gradient algorithms are proposed in this paper based on two modified PRP conjugate gradient methods: the first algorithm is proposed for solving unconstrained optimization problems, and the second for solving nonlinear equations. The first method uses two kinds of information: function values and gradient values. Both methods possess the following good properties: (1) beta_k >= 0; (2) the search direction has the trust-region property without the use of any line search method; (3) the search direction has the sufficient descent property without the use of any line search method. Under suitable conditions, we establish the global convergence of the two algorithms. We conduct numerical experiments to evaluate our algorithms; the results indicate that the first algorithm is effective and competitive for solving unconstrained optimization problems and that the second is effective for solving large-scale nonlinear equations.
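
    A sketch of a PRP-type iteration with the beta_k >= 0 safeguard of property (1) (the classical PRP+ truncation) follows; the fixed step size is a placeholder standing in for the line-search-free machinery of the paper's methods.

    ```python
    import numpy as np

    def prp_plus_cg(grad, x0, alpha=0.1, n_iter=2000, tol=1e-8):
        """Minimize a smooth function given its gradient, PRP+ directions."""
        x = x0.copy()
        g = grad(x)
        d = -g                                   # first direction: steepest descent
        for _ in range(n_iter):
            x = x + alpha * d
            g_new = grad(x)
            if np.linalg.norm(g_new) < tol:
                break
            # PRP formula with nonnegativity truncation: beta_k >= 0
            beta = max(0.0, g_new @ (g_new - g) / (g @ g))
            d = -g_new + beta * d
            g = g_new
        return x

    # Example: minimize f(x) = 0.5 * ||x||^2, whose gradient is x
    print(prp_plus_cg(lambda x: x, np.array([3.0, -4.0])))
    ```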

  20. Segmentation of heterogeneous or small FDG PET positive tissue based on a 3D-locally adaptive random walk algorithm.

    PubMed

    Onoma, D P; Ruan, S; Thureau, S; Nkhali, L; Modzelewski, R; Monnehan, G A; Vera, P; Gardin, I

    2014-12-01

    A segmentation algorithm based on the random walk (RW) method, called 3D-LARW, has been developed to delineate small tumors or tumors with a heterogeneous distribution of FDG on PET images. Based on the original RW algorithm [1], we propose an improved approach using new parameters that depend on the Euclidean distance between two adjacent voxels instead of a fixed value, and that integrate probability densities of labels into the system of linear equations used in the RW. These improvements were evaluated and compared with the original RW method, fixed-value thresholding (40% of the maximum in the lesion), an adaptive thresholding algorithm, and the FLAB method, on uniform spheres filled with FDG, on simulated heterogeneous spheres, and on clinical data (14 patients). On these three different data sets, 3D-LARW showed better segmentation results than the original RW algorithm and the three other methods. As expected, the improvements are more pronounced for the segmentation of small tumors or tumors with heterogeneous FDG uptake.

  1. LCD motion blur: modeling, analysis, and algorithm.

    PubMed

    Chan, Stanley H; Nguyen, Truong Q

    2011-08-01

    Liquid crystal display (LCD) devices are well known for their slow responses due to the physical limitations of liquid crystals. Therefore, fast moving objects in a scene are often perceived as blurred. This effect is known as the LCD motion blur. In order to reduce LCD motion blur, an accurate LCD model and an efficient deblurring algorithm are needed. However, existing LCD motion blur models are insufficient to reflect the limitation of human-eye-tracking system. Also, the spatiotemporal equivalence in LCD motion blur models has not been proven directly in the discrete 2-D spatial domain, although it is widely used. There are three main contributions of this paper: modeling, analysis, and algorithm. First, a comprehensive LCD motion blur model is presented, in which human-eye-tracking limits are taken into consideration. Second, a complete analysis of spatiotemporal equivalence is provided and verified using real video sequences. Third, an LCD motion blur reduction algorithm is proposed. The proposed algorithm solves an l1-norm regularized least-squares minimization problem using a subgradient projection method. Numerical results show that the proposed algorithm gives higher peak SNR, lower temporal error, and lower spatial error than motion-compensated inverse filtering and Lucy-Richardson deconvolution algorithm, which are two state-of-the-art LCD deblurring algorithms. PMID:21292596
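
    A minimal sketch of a projected subgradient step for the l1-regularized least-squares model named above follows, assuming a generic blur matrix A and projection onto the valid pixel range; the paper's actual blur operator and step-size rule are not reproduced.

    ```python
    import numpy as np

    def subgradient_projection(A, b, lam=0.01, n_iter=300):
        """Approximately solve min_x 0.5*||A x - b||^2 + lam*||x||_1, x in [0,1]."""
        x = np.zeros(A.shape[1])
        for k in range(1, n_iter + 1):
            # a subgradient of the objective (sign(0) = 0 is a valid choice)
            sub = A.T @ (A @ x - b) + lam * np.sign(x)
            x = x - (1.0 / k) * sub            # diminishing step size
            x = np.clip(x, 0.0, 1.0)           # project onto the pixel box
        return x
    ```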

  2. Updated treatment algorithm of pulmonary arterial hypertension.

    PubMed

    Galiè, Nazzareno; Corris, Paul A; Frost, Adaani; Girgis, Reda E; Granton, John; Jing, Zhi Cheng; Klepetko, Walter; McGoon, Michael D; McLaughlin, Vallerie V; Preston, Ioana R; Rubin, Lewis J; Sandoval, Julio; Seeger, Werner; Keogh, Anne

    2013-12-24

    The demands on a pulmonary arterial hypertension (PAH) treatment algorithm are multiple and in some ways conflicting. The treatment algorithm usually includes different types of recommendations with varying degrees of scientific evidence. In addition, the algorithm is required to be comprehensive but not too complex, informative yet simple and straightforward. The types of information in the treatment algorithm are heterogeneous, including clinical, hemodynamic, medical, interventional, pharmacological and regulatory recommendations. Stakeholders (or users), including physicians from various specialties with variable expertise in PAH, nurses, patients and patients' associations, healthcare providers, regulatory agencies and industry, are often interested in the PAH treatment algorithm for different reasons. These are the considerable challenges faced when proposing appropriate updates to the current evidence-based treatment algorithm. The current treatment algorithm may be divided into three main areas: 1) general measures, supportive therapy, referral strategy, acute vasoreactivity testing and chronic treatment with calcium channel blockers; 2) initial therapy with approved PAH drugs; and 3) clinical response to the initial therapy, combination therapy, balloon atrial septostomy, and lung transplantation. All three sections will be revisited, highlighting information newly available in the past 5 years and proposing updates where appropriate. The European Society of Cardiology grades of recommendation and levels of evidence will be adopted to rank the proposed treatments. PMID:24355643

  3. Fast proximity algorithm for MAP ECT reconstruction

    NASA Astrophysics Data System (ADS)

    Li, Si; Krol, Andrzej; Shen, Lixin; Xu, Yuesheng

    2012-03-01

    We arrive at a fixed-point formulation of the total variation maximum a posteriori (MAP) regularized emission computed tomography (ECT) reconstruction problem and propose an iterative alternating scheme to numerically calculate the fixed point. We prove theoretically that our algorithm converges to the unique solution. Because the obtained algorithm exhibits slow convergence, we further develop the proximity algorithm in a transformed image space, i.e., the preconditioned proximity algorithm. We use the bias-noise curve method to select optimal regularization hyperparameters for both our algorithm and expectation maximization with total variation regularization (EM-TV). We show in the numerical experiments that our proposed algorithms, with an appropriately selected preconditioner, outperform the conventional EM-TV algorithm in many critical aspects, such as comparatively very low noise and bias for the Shepp-Logan phantom. This has major ramifications for nuclear medicine, because clinical implementation of our preconditioned fixed-point algorithms might result in very significant radiation dose reduction in the medical applications of emission tomography.

  4. Performance analysis of cone detection algorithms.

    PubMed

    Mariotti, Letizia; Devaney, Nicholas

    2015-04-01

    Many algorithms have been proposed to help clinicians evaluate cone density and spacing, as these may be related to the onset of retinal diseases. However, there has been no rigorous comparison of the performance of these algorithms. In addition, the performance of such algorithms is typically determined by comparison with human observers. Here we propose a technique to simulate realistic images of the cone mosaic. We use the simulated images to test the performance of three popular cone detection algorithms, and we introduce an algorithm which is used by astronomers to detect stars in astronomical images. We use Free Response Operating Characteristic (FROC) curves to evaluate and compare the performance of the four algorithms. This allows us to optimize the performance of each algorithm. We observe that performance is significantly enhanced by up-sampling the images. We investigate the effect of noise and image quality on cone mosaic parameters estimated using the different algorithms, finding that the estimated regularity is the most sensitive parameter. PMID:26366758

  5. Applications and accuracy of the parallel diagonal dominant algorithm

    NASA Technical Reports Server (NTRS)

    Sun, Xian-He

    1993-01-01

    The Parallel Diagonal Dominant (PDD) algorithm is a highly efficient, ideally scalable tridiagonal solver. In this paper, a detailed study of the PDD algorithm is given. First the PDD algorithm is introduced. Then the algorithm is extended to solve periodic tridiagonal systems. A variant, the reduced PDD algorithm, is also proposed. Accuracy analysis is provided for a class of tridiagonal systems, the symmetric, and anti-symmetric Toeplitz tridiagonal systems. Implementation results show that the analysis gives a good bound on the relative error, and the algorithm is a good candidate for the emerging massively parallel machines.

  6. New formulations of monotonically convergent quantum control algorithms

    NASA Astrophysics Data System (ADS)

    Maday, Yvon; Turinici, Gabriel

    2003-05-01

    Most numerical simulations in quantum (bilinear) control have used one of the monotonically convergent algorithms of Krotov (introduced by Tannor et al.) or of Zhu and Rabitz. However, until now no explicit relationship had been revealed between the two algorithms that would explain their common properties. Within this framework, we propose in this paper a unified formulation that comprises both algorithms and extends to a new class of monotonically convergent algorithms. Numerical results show that the newly derived algorithms behave as well as (and sometimes better than) the well-known algorithms cited above.

  7. Diabetic retinopathy: a quadtree based blood vessel detection algorithm using RGB components in fundus images.

    PubMed

    Reza, Ahmed Wasif; Eswaran, C; Hati, Subhas

    2008-04-01

    Blood vessel detection in retinal images is a fundamental step for feature extraction and interpretation of image content. This paper proposes a novel computational paradigm for the detection of blood vessels in fundus images based on RGB components and quadtree decomposition. The proposed algorithm employs median filtering, quadtree decomposition, post-filtration of detected edges, and morphological reconstruction on retinal images. The preprocessing step enhances the image to make it better suited for subsequent analysis and is a vital phase before decomposing the image. Quadtree decomposition provides information on the different types of blocks and the intensities of the pixels within the blocks. The post-filtration and morphological reconstruction assist in filling the edges of the blood vessels and removing false alarms and unwanted objects from the background, while restoring the original shape of the connected vessels. The proposed method, which makes use of the three color components (RGB), is tested on various images from a publicly available database. The results are compared with those obtained by other known methods as well as with the results obtained using the proposed method with the green color component only. It is shown that the proposed method can yield true positive fraction values as high as 0.77, which are comparable to or somewhat higher than the results obtained by other known methods. It is also shown that the effect of noise can be reduced if the proposed method is implemented using only the green color component.

  8. Multidisciplinary Multiobjective Optimal Design for Turbomachinery Using Evolutionary Algorithm

    NASA Technical Reports Server (NTRS)

    2005-01-01

    This report summarizes Dr. Lian's efforts toward developing a robust and efficient tool for multidisciplinary and multi-objective optimal design for turbomachinery using evolutionary algorithms. The work consisted of two stages. In the first stage (July 2003 to June 2004), Dr. Lian focused on building the essential capabilities required for the project, working on two subjects: an enhanced genetic algorithm (GA) and an integrated optimization system with a GA and a surrogate model. In the second stage (July 2004 to February 2005), Dr. Lian formulated aerodynamic optimization and structural optimization as a multi-objective optimization problem and performed multidisciplinary and multi-objective optimizations on a transonic compressor blade based on the proposed model. Dr. Lian's numerical results showed that the proposed approach can effectively reduce the blade weight and increase the stage pressure ratio in an efficient manner. In addition, the new design was structurally safer than the original design. Five conference papers and three journal papers were published on this topic by Dr. Lian.

  9. A VLSI architecture for simplified arithmetic Fourier transform algorithm

    NASA Technical Reports Server (NTRS)

    Reed, Irving S.; Shih, Ming-Tang; Truong, T. K.; Hendon, E.; Tufts, D. W.

    1992-01-01

    The arithmetic Fourier transform (AFT) is a number-theoretic approach to Fourier analysis which has been shown to perform competitively with the classical FFT in terms of accuracy, complexity, and speed. Theorems developed in a previous paper for the AFT algorithm are used here to derive the original AFT algorithm which Bruns found in 1903. This is shown to yield an algorithm of less complexity and of improved performance over certain recent AFT algorithms. A VLSI architecture is suggested for this simplified AFT algorithm. This architecture uses a butterfly structure which reduces the number of additions by 25 percent of that used in the direct method.

  10. Basis for spectral curvature algorithms in remote sensing of chlorophyll

    NASA Technical Reports Server (NTRS)

    Campbell, J. W.; Esaias, W. E.

    1983-01-01

    A simple, empirically derived algorithm for estimating oceanic chlorophyll concentrations from spectral radiances measured by a low-flying spectroradiometer has proved highly successful in field experiments in 1980-82. The sensor used was the Multichannel Ocean Color Sensor, and the originator of the algorithm was Grew (1981). This paper presents an explanation for the algorithm based on the optical properties of waters containing chlorophyll and other phytoplankton pigments and the radiative transfer equations governing the remotely sensed signal. The effects of varying solar zenith, atmospheric transmittance, and interfering substances in the water on the chlorophyll algorithm are characterized, and applicability of the algorithm is discussed.

  11. Algorithm and program for information processing with the filin apparatus

    NASA Technical Reports Server (NTRS)

    Gurin, L. S.; Morkrov, V. S.; Moskalenko, Y. I.; Tsoy, K. A.

    1979-01-01

    The reduction of spectral radiation data from space sources is described. The algorithm and program for identifying segments of information obtained from the Filin telescope-spectrometer on Salyut-4 are presented. The information segments represent suspected X-ray sources. The proposed algorithm is an algorithm of the lowest level; following evaluation, information free of uninformative segments is subjected to further processing with algorithms of a higher level. The language used is FORTRAN 4.

  12. An incremental clustering algorithm based on Mahalanobis distance

    NASA Astrophysics Data System (ADS)

    Aik, Lim Eng; Choon, Tan Wee

    2014-12-01

    The classical fuzzy c-means clustering algorithm is insufficient for clustering non-spherical or elliptically distributed datasets. This paper replaces the Euclidean distance in classical fuzzy c-means clustering with the Mahalanobis distance and applies the Mahalanobis distance to incremental learning for its merits. A Mahalanobis-distance-based fuzzy incremental clustering learning algorithm is proposed. Experimental results show that the algorithm not only remedies the defect of the fuzzy c-means algorithm but also increases training accuracy.
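
    A sketch of the distance substitution at the heart of the method follows: fuzzy c-means memberships computed from squared Mahalanobis distances, assuming a single shared covariance matrix for simplicity (per-cluster covariances are the natural extension; the incremental-learning machinery is not shown).

    ```python
    import numpy as np

    def mahalanobis_memberships(X, centers, cov, m=2.0):
        """Return the (n, c) FCM membership matrix using Mahalanobis distance."""
        VI = np.linalg.inv(cov)
        diff = X[:, None, :] - centers[None, :, :]        # (n, c, dim)
        d2 = np.einsum('ncd,de,nce->nc', diff, VI, diff)  # squared distances
        d2 = np.maximum(d2, 1e-12)                        # avoid divide-by-zero
        # u_ij = 1 / sum_k (d_ij / d_ik)^(2 / (m - 1))
        ratio = d2[:, :, None] / d2[:, None, :]
        return 1.0 / (ratio ** (1.0 / (m - 1.0))).sum(axis=2)
    ```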

  13. Perturbation resilience and superiorization of iterative algorithms

    NASA Astrophysics Data System (ADS)

    Censor, Y.; Davidi, R.; Herman, G. T.

    2010-06-01

    Iterative algorithms aimed at solving some problems are discussed. For certain problems, such as finding a common point in the intersection of a finite number of convex sets, there often exist iterative algorithms that impose very little demand on computer resources. For other problems, such as finding that point in the intersection at which the value of a given function is optimal, algorithms tend to need more computer memory and longer execution time. A methodology is presented whose aim is to produce automatically for an iterative algorithm of the first kind a 'superiorized version' of it that retains its computational efficiency but nevertheless goes a long way toward solving an optimization problem. This is possible to do if the original algorithm is 'perturbation resilient', which is shown to be the case for various projection algorithms for solving the consistent convex feasibility problem. The superiorized versions of such algorithms use perturbations that steer the process in the direction of a superior feasible point, which is not necessarily optimal, with respect to the given function. After presenting these intuitive ideas in a precise mathematical form, they are illustrated in image reconstruction from projections for two different projection algorithms superiorized for the function whose value is the total variation of the image.

  14. Improvement of wavelet threshold filtered back-projection image reconstruction algorithm

    NASA Astrophysics Data System (ADS)

    Ren, Zhong; Liu, Guodong; Huang, Zhen

    2014-11-01

    Image reconstruction techniques have been applied in many fields, including medical imaging modalities such as X-ray computed tomography (X-CT), positron emission tomography (PET) and magnetic resonance imaging (MRI), but the reconstructed results are still unsatisfactory because the original projection data are inevitably polluted by noise during image reconstruction. Although traditional filters, e.g., the Shepp-Logan (SL) and Ram-Lak (RL) filters, can suppress some noise, they generate the Gibbs oscillation phenomenon, and the artifacts introduced by back-projection are not greatly improved. Wavelet threshold denoising can overcome the interference of noise with image reconstruction. Since the traditional soft and hard threshold functions have inherent defects, an improved wavelet threshold function combined with the filtered back-projection (FBP) algorithm is proposed in this paper. Four different reconstruction algorithms were compared in simulated experiments. Experimental results demonstrate that the improved algorithm largely eliminates the discontinuity and large distortion of the traditional threshold functions as well as the Gibbs oscillation. Finally, the effectiveness of the improved algorithm is verified by comparing two evaluation criteria, the mean square error (MSE) and the peak signal-to-noise ratio (PSNR), among the four algorithms, and the optimal dual threshold values of the improved wavelet threshold function are obtained.
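
    For context, these are the two classical threshold functions whose drawbacks (the discontinuity of hard thresholding, the constant shrinkage bias of soft thresholding) motivate improved functions such as the one above; a generic sketch, not the paper's improved function.

```python
import numpy as np

def hard_threshold(w, t):
    """Keep coefficients above t, zero the rest (discontinuous at +-t)."""
    return np.where(np.abs(w) > t, w, 0.0)

def soft_threshold(w, t):
    """Shrink all coefficients toward zero by t (continuous but biased)."""
    return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)

w = np.linspace(-3, 3, 7)
t = 1.0
print(hard_threshold(w, t))  # jumps from 0 straight to values beyond +-t
print(soft_threshold(w, t))  # continuous, but large coefficients lose t
```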

  15. Comparison of Statistical Algorithms for the Detection of Infectious Disease Outbreaks in Large Multiple Surveillance Systems

    PubMed Central

    Farrington, C. Paddy; Noufaily, Angela; Andrews, Nick J.; Charlett, Andre

    2016-01-01

    A large-scale multiple surveillance system for infectious disease outbreaks has been in operation in England and Wales since the early 1990s. Changes to the statistical algorithm at the heart of the system were proposed and the purpose of this paper is to compare two new algorithms with the original algorithm. Test data to evaluate performance are created from weekly counts of the number of cases of each of more than 2000 diseases over a twenty-year period. The time series of each disease is separated into one series giving the baseline (background) disease incidence and a second series giving disease outbreaks. One series is shifted forward by twelve months and the two are then recombined, giving a realistic series in which it is known where outbreaks have been added. The metrics used to evaluate performance include a scoring rule that appropriately balances sensitivity against specificity and is sensitive to variation in probabilities near 1. In the context of disease surveillance, a scoring rule can be adapted to reflect the size of outbreaks and this was done. Results indicate that the two new algorithms are comparable to each other and better than the algorithm they were designed to replace. PMID:27513749

  16. A novel image encryption algorithm based on chaos maps with Markov properties

    NASA Astrophysics Data System (ADS)

    Liu, Quan; Li, Pei-yue; Zhang, Ming-chao; Sui, Yong-xin; Yang, Huai-jiang

    2015-02-01

    In order to construct a high-complexity, secure and low-cost image encryption algorithm, a class of chaotic maps with Markov properties was researched and such an algorithm is proposed. This kind of chaos has higher complexity than the Logistic and Tent maps while keeping uniformity and low autocorrelation. An improved coupled map lattice based on the chaos with Markov properties is also employed to cover the phase space of the chaos and enlarge the key space; it performs better than the original lattice. A novel image encryption algorithm is constructed on the new coupled map lattice, which is used as a key stream generator. A true random number is used to disturb the key, which dynamically changes the permutation matrix and the key stream. Experiments show that the key stream passes the SP800-22 test. The novel image encryption algorithm resists CPA, CCA and differential attacks, is sensitive to the initial key, changes the distribution of the pixel values of the image, and eliminates the correlation of adjacent pixels. Compared with an algorithm based on the Logistic map, it has higher complexity and better uniformity, closer to a true random number, and it is also efficient to implement, which shows its value for common use.
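
    The paper's Markov-property chaos is not specified here, so the sketch below uses the Logistic map (the baseline the abstract compares against) to show the generic keystream-XOR structure of such ciphers; the seed, parameter and byte quantization are illustrative assumptions, not the paper's design.

```python
import numpy as np

def logistic_keystream(x0, r, n, burn_in=100):
    """Byte key stream from the logistic map x <- r*x*(1-x)."""
    x = x0
    for _ in range(burn_in):          # discard the transient
        x = r * x * (1.0 - x)
    out = np.empty(n, dtype=np.uint8)
    for i in range(n):
        x = r * x * (1.0 - x)
        out[i] = int(x * 256) & 0xFF  # quantize the state to one byte
    return out

img = np.arange(16, dtype=np.uint8).reshape(4, 4)    # stand-in "image"
ks = logistic_keystream(0.3456789, 3.99, img.size)   # key = (x0, r)
cipher = img.flatten() ^ ks                          # XOR diffusion
plain = cipher ^ ks                                  # decryption is symmetric
assert np.array_equal(plain, img.flatten())
```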

  17. Fast algorithm for scaling analysis with higher-order detrending moving average method

    NASA Astrophysics Data System (ADS)

    Tsujimoto, Yutaka; Miki, Yuki; Shimatani, Satoshi; Kiyono, Ken

    2016-05-01

    Among scaling analysis methods based on the root-mean-square deviation from the estimated trend, centered detrending moving average (DMA) analysis with a simple moving average has been demonstrated to perform well when characterizing long-range correlation or fractal scaling behavior. Furthermore, higher-order DMA has also been proposed; it is shown to have better detrending capability than the original DMA, removing higher-order polynomial trends. However, a straightforward implementation of higher-order DMA requires a very high computational cost, which would prevent practical use of the method. To solve this issue, in this study, we introduce a fast algorithm for higher-order DMA, which consists of two techniques: (1) parallel translation of the moving-average windows by a fixed interval; (2) recurrence formulas for the calculation of summations. Our algorithm can significantly reduce the computational cost: Monte Carlo experiments show that the computational time of our algorithm is approximately proportional to the data length, whereas that of the conventional algorithm is proportional to the square of the data length. The efficiency of our algorithm is also shown by a systematic study of the performance of higher-order DMA, such as the range of detectable scaling exponents and the detrending capability for removing polynomial trends. In addition, through the analysis of heart-rate variability time series, we discuss possible applications of higher-order DMA.
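
    The second technique, recurrence formulas for summations, can be illustrated with a first-order moving sum: updating the previous window sum costs O(1) per position instead of O(w). This is a simplified analogue of the idea, not the paper's higher-order recurrences.

```python
import numpy as np

def moving_sum_naive(x, w):
    """O(N*w): recompute the window sum at every position."""
    return np.array([x[i:i + w].sum() for i in range(len(x) - w + 1)])

def moving_sum_recurrence(x, w):
    """O(N): slide the window with S_{i+1} = S_i + x[i+w] - x[i]."""
    s = np.empty(len(x) - w + 1)
    s[0] = x[:w].sum()
    for i in range(1, len(s)):
        s[i] = s[i - 1] + x[i + w - 1] - x[i - 1]
    return s

x = np.random.default_rng(1).standard_normal(10_000)
assert np.allclose(moving_sum_naive(x, 64), moving_sum_recurrence(x, 64))
```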

  18. A dynamic material discrimination algorithm for dual MV energy X-ray digital radiography.

    PubMed

    Li, Liang; Li, Ruizhe; Zhang, Siyuan; Zhao, Tiao; Chen, Zhiqiang

    2016-08-01

    Dual-energy X-ray radiography has become a well-established technique in medical, industrial, and security applications because of its material or tissue discrimination capability. The main difficulty of the technique is the material-overlapping problem: when two or more materials lie along the X-ray beam path, discrimination performance is degraded. In order to solve this problem, a new dynamic material discrimination algorithm is proposed for dual-energy X-ray digital radiography, which can also be extended to multi-energy X-ray situations. The algorithm has three steps: α-curve-based pre-classification, decomposition of overlapped materials, and final material recognition. The key to the algorithm is establishing a dual-energy radiograph database of both pure basis materials and their pairwise combinations. Guided by the pre-classification results, the original dual-energy projections of overlapped materials can be dynamically decomposed by the algorithm into two sets of dual-energy radiographs of each pure material. Thus, more accurate discrimination results can be provided even in the presence of the overlapping problem. Both numerical and experimental results that prove the validity and effectiveness of the algorithm are presented. PMID:27239987
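
    A common discriminator behind α-curve methods is the ratio of high- to low-energy log attenuations, which is (ideally) independent of material thickness; the definition below is the textbook form, assumed for illustration rather than taken from the paper, and the attenuation coefficients are toy values.

```python
import numpy as np

def alpha_value(i_high, i_low, i0_high=1.0, i0_low=1.0):
    """Ratio of high- to low-energy log attenuation along one ray."""
    mu_h = np.log(i0_high / i_high)   # line integral of mu at high energy
    mu_l = np.log(i0_low / i_low)
    return mu_h / mu_l

# Thicker material changes both attenuations but (ideally) not their
# ratio, which is why the ratio indexes material class, not thickness.
for thickness in (1.0, 2.0, 4.0):
    i_h = np.exp(-0.04 * thickness)   # toy attenuation at high energy
    i_l = np.exp(-0.10 * thickness)   # toy attenuation at low energy
    print(round(alpha_value(i_h, i_l), 3))  # constant 0.4 across thickness
```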

  19. A dynamic material discrimination algorithm for dual MV energy X-ray digital radiography.

    PubMed

    Li, Liang; Li, Ruizhe; Zhang, Siyuan; Zhao, Tiao; Chen, Zhiqiang

    2016-08-01

    Dual-energy X-ray radiography has become a well-established technique in medical, industrial, and security applications because of its material or tissue discrimination capability. The main difficulty of the technique is the material-overlapping problem: when two or more materials lie along the X-ray beam path, discrimination performance is degraded. In order to solve this problem, a new dynamic material discrimination algorithm is proposed for dual-energy X-ray digital radiography, which can also be extended to multi-energy X-ray situations. The algorithm has three steps: α-curve-based pre-classification, decomposition of overlapped materials, and final material recognition. The key to the algorithm is establishing a dual-energy radiograph database of both pure basis materials and their pairwise combinations. Guided by the pre-classification results, the original dual-energy projections of overlapped materials can be dynamically decomposed by the algorithm into two sets of dual-energy radiographs of each pure material. Thus, more accurate discrimination results can be provided even in the presence of the overlapping problem. Both numerical and experimental results that prove the validity and effectiveness of the algorithm are presented.

  20. Comparison of Statistical Algorithms for the Detection of Infectious Disease Outbreaks in Large Multiple Surveillance Systems.

    PubMed

    Enki, Doyo G; Garthwaite, Paul H; Farrington, C Paddy; Noufaily, Angela; Andrews, Nick J; Charlett, Andre

    2016-01-01

    A large-scale multiple surveillance system for infectious disease outbreaks has been in operation in England and Wales since the early 1990s. Changes to the statistical algorithm at the heart of the system were proposed and the purpose of this paper is to compare two new algorithms with the original algorithm. Test data to evaluate performance are created from weekly counts of the number of cases of each of more than 2000 diseases over a twenty-year period. The time series of each disease is separated into one series giving the baseline (background) disease incidence and a second series giving disease outbreaks. One series is shifted forward by twelve months and the two are then recombined, giving a realistic series in which it is known where outbreaks have been added. The metrics used to evaluate performance include a scoring rule that appropriately balances sensitivity against specificity and is sensitive to variation in probabilities near 1. In the context of disease surveillance, a scoring rule can be adapted to reflect the size of outbreaks and this was done. Results indicate that the two new algorithms are comparable to each other and better than the algorithm they were designed to replace. PMID:27513749

  1. Origins of GEMS Grains

    NASA Technical Reports Server (NTRS)

    Messenger, S.; Walker, R. M.

    2012-01-01

    Interplanetary dust particles (IDPs) collected in the Earth's stratosphere contain high abundances of submicrometer amorphous silicates known as GEMS grains. From their birth as condensates in the outflows of oxygen-rich evolved stars, through processing in interstellar space, to incorporation into disks around new stars, amorphous silicates predominate in most astrophysical environments. Amorphous silicates were a major building block of our Solar System and are prominent in infrared spectra of comets. Anhydrous IDPs thought to derive from comets contain abundant amorphous silicates known as GEMS (glass with embedded metal and sulfides) grains. GEMS grains have been proposed to be isotopically and chemically homogenized interstellar amorphous silicate dust. We evaluated this hypothesis through coordinated chemical and isotopic analyses of GEMS grains in a suite of IDPs to constrain their origins. GEMS grains show order-of-magnitude variations in Mg, Fe, Ca, and S abundances. They do not match the average element abundances inferred for ISM dust, containing on average too little Mg, Fe, and Ca and too much S. GEMS grains have compositions complementary to the crystalline components in IDPs, suggesting that they formed from the same reservoir. We did not observe any unequivocal microstructural or chemical evidence that GEMS grains experienced prolonged exposure to radiation. We identified four GEMS grains having O isotopic compositions that point to origins in red giant branch or asymptotic giant branch stars and supernovae. Based on their O isotopic compositions, we estimate that 1-6% of GEMS grains are surviving circumstellar grains. The remaining 94-99% of GEMS grains have O isotopic compositions that are indistinguishable from terrestrial materials and carbonaceous chondrites. These isotopically solar GEMS grains either formed in the Solar System or were completely homogenized in the interstellar medium (ISM). However, the

  2. A Genetic Algorithm for Solving Job-shop Scheduling Problems using the Parameter-free Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Matsui, Shouichi; Watanabe, Isamu; Tokoro, Ken-Ichi

    A new genetic algorithm is proposed for solving job-shop scheduling problems where the total number of search points is limited. The objective of the problem is to minimize the makespan. A solution is represented by an operation sequence, i.e., a permutation of operations. The proposed algorithm is based on the framework of the parameter-free genetic algorithm. It encodes a permutation into a chromosome using random keys, derives a schedule from the permutation using hybrid scheduling (HS), and encodes the parameter of HS in the chromosome as well. Experiments using benchmark problems show that, for large-scale problems under the constraint of a limited number of search points, the proposed algorithm outperforms the previously proposed algorithms: the genetic algorithm of Shi et al. and the improved local search of Nakano et al.
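
    The random-key encoding itself is compact: each operation gets a real-valued gene, and sorting the genes yields the permutation, so any crossover of keys still decodes to a valid sequence. A minimal sketch of that decoding step only (the HS decoder is not reproduced here):

```python
import numpy as np

def decode_random_keys(keys):
    """Random-key decoding: sort positions by key value to get a permutation."""
    return np.argsort(keys)

rng = np.random.default_rng(42)
chromosome = rng.random(6)             # one random key per operation
print(chromosome.round(2))
print(decode_random_keys(chromosome))  # operation sequence implied by the keys
```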

  3. A Palmprint Recognition Algorithm Using Phase-Only Correlation

    NASA Astrophysics Data System (ADS)

    Ito, Koichi; Aoki, Takafumi; Nakajima, Hiroshi; Kobayashi, Koji; Higuchi, Tatsuo

    This paper presents a palmprint recognition algorithm using Phase-Only Correlation (POC). The use of phase components in 2D (two-dimensional) discrete Fourier transforms of palmprint images makes it possible to achieve highly robust image registration and matching. In the proposed algorithm, POC is used to align scaling, rotation and translation between two palmprint images, and evaluate similarity between them. Experimental evaluation using a palmprint image database clearly demonstrates efficient matching performance of the proposed algorithm.
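
    The core of POC is easy to state: take the inverse FFT of the magnitude-normalized cross-power spectrum of the two images; a translation then shows up as a sharp correlation peak at the shift. A minimal sketch of that core, without the scaling/rotation handling used in the paper:

```python
import numpy as np

def phase_only_correlation(f, g):
    """POC surface: inverse FFT of the normalized cross-power spectrum."""
    F = np.fft.fft2(f)
    G = np.fft.fft2(g)
    cross = F * np.conj(G)
    r = cross / (np.abs(cross) + 1e-12)   # keep phase, discard magnitude
    return np.fft.ifft2(r).real

rng = np.random.default_rng(0)
img = rng.random((64, 64))
shifted = np.roll(img, (5, 9), axis=(0, 1))   # simulate a translation
poc = phase_only_correlation(shifted, img)
peak = np.unravel_index(np.argmax(poc), poc.shape)
print(peak)  # (5, 9): the POC peak location recovers the shift
```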

  4. Reasoning about systolic algorithms

    SciTech Connect

    Purushothaman, S.

    1986-01-01

    Systolic algorithms are a class of parallel algorithms, with small-grain concurrency, well suited for implementation in VLSI. They are intended to be implemented as high-performance, computation-bound back-end processors and are characterized by a tessellating interconnection of identical processing elements. This dissertation investigates the problem of proving the correctness of systolic algorithms. The following are reported: (1) a methodology for verifying the correctness of systolic algorithms based on representing an algorithm as a system of recurrence equations, demonstrated by proving the correctness of a systolic architecture for optimal parenthesization; (2) the implementation of mechanical proofs of correctness of two systolic algorithms, a convolution algorithm and an optimal parenthesization algorithm, using the Boyer-Moore theorem prover; (3) an induction principle for proving the correctness of systolic arrays that are modular. Two attendant inference rules, weak equivalence and shift transformation, which capture equivalent behavior of systolic arrays, are also presented.

  5. Algorithm-development activities

    NASA Technical Reports Server (NTRS)

    Carder, Kendall L.

    1994-01-01

    Algorithm-development activities at USF continue. The algorithm for determining chlorophyll alpha concentration (Chl alpha) and the gelbstoff absorption coefficient from SeaWiFS and MODIS-N radiance data is our current priority.

  6. A Fusion Algorithm for GFP Image and Phase Contrast Image of Arabidopsis Cell Based on SFL-Contourlet Transform

    PubMed Central

    Feng, Peng; Wang, Jing; Wei, Biao; Mi, Deling

    2013-01-01

    A hybrid multiscale and multilevel image fusion algorithm for the green fluorescent protein (GFP) image and phase contrast image of Arabidopsis cells is proposed in this paper. Combining the intensity-hue-saturation (IHS) transform and the sharp frequency localization Contourlet transform (SFL-CT), the algorithm uses different fusion strategies for the different detailed subbands, including a neighborhood consistency measurement (NCM) that can adaptively find a balance between color background and gray structure. Two kinds of neighborhood classes based on an empirical model are also taken into consideration. Visual information fidelity (VIF) is introduced as an objective criterion to evaluate the fused image. Experimental results on 117 groups of Arabidopsis cell images from the John Innes Center show that the new algorithm can not only preserve the details of the original images well but also improve the visibility of the fused image, which shows the superiority of the novel method over traditional ones. PMID:23476716

  7. A parallel algorithm for random searches

    NASA Astrophysics Data System (ADS)

    Wosniack, M. E.; Raposo, E. P.; Viswanathan, G. M.; da Luz, M. G. E.

    2015-11-01

    We discuss a parallelization procedure for a two-dimensional random search by a single individual, a typically sequential process. To ensure that the parallel version preserves the features of the sequential random search, we analyze the spatial patterns of the encountered targets for different search strategies and densities of homogeneously distributed targets. We identify a lognormal tendency for the distribution of distances between consecutively detected targets. Then, by assigning the distinct mean and standard deviation of this distribution to each corresponding configuration in the parallel simulations (constituted by parallel random walkers), we are able to recover important statistical properties, e.g., the target detection efficiency, of the original problem. The proposed parallel approach presents a speedup of nearly one order of magnitude compared with the sequential implementation. The algorithm can easily be adapted to different instances, such as searches in three dimensions. Its possible range of applicability covers problems in areas as diverse as automated computer searches in high-capacity databases and animal foraging.
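
    The parallelization idea can be sketched in a few lines: instead of simulating each step of the walk, every parallel walker draws its between-target distances from the lognormal fit of the sequential search. The fit parameters below are invented placeholders, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(7)
mu, sigma = 1.2, 0.8          # hypothetical lognormal fit from sequential runs
n_walkers, n_targets = 8, 1000

# Each row: distances one walker travels between consecutive detections
distances = rng.lognormal(mu, sigma, size=(n_walkers, n_targets))
efficiency = n_targets / distances.sum(axis=1)   # targets found per unit path
print(efficiency.mean())   # statistic recovered without the sequential walk
```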

  8. INSENS classification algorithm report

    SciTech Connect

    Hernandez, J.E.; Frerking, C.J.; Myers, D.W.

    1993-07-28

    This report describes a new algorithm developed for the Immigration and Naturalization Service (INS) in support of the INSENS project for classifying vehicles and pedestrians using seismic data. This algorithm is less sensitive to nuisance alarms due to environmental events than the previous algorithm. Furthermore, the algorithm is simple enough that it can be implemented in the 8-bit microprocessor used in the INSENS system.

  9. Accurate Finite Difference Algorithms

    NASA Technical Reports Server (NTRS)

    Goodrich, John W.

    1996-01-01

    Two families of finite difference algorithms for computational aeroacoustics are presented and compared. All of the algorithms are single-step explicit methods with the same order of accuracy in both space and time, with examples up to eleventh order, and they have multidimensional extensions. One of the algorithm families has spectral-like high resolution. Propagation with high-order, high-resolution algorithms can produce accurate results after O(10^6) periods of propagation with eight grid points per wavelength.

  10. Testing block subdivision algorithms on block designs

    NASA Astrophysics Data System (ADS)

    Wiseman, Natalie; Patterson, Zachary

    2016-01-01

    Integrated land use-transportation models predict future transportation demand taking into account how households and firms arrange themselves partly as a function of the transportation system. Recent integrated models require parcels as inputs and produce household and employment predictions at the parcel scale. Block subdivision algorithms automatically generate parcel patterns within blocks. Evaluating block subdivision algorithms is done by way of generating parcels and comparing them to those in a parcel database. Three block subdivision algorithms are evaluated on how closely they reproduce parcels of different block types found in a parcel database from Montreal, Canada. While the authors who developed each of the algorithms have evaluated them, they have used their own metrics and block types to evaluate their own algorithms. This makes it difficult to compare their strengths and weaknesses. The contribution of this paper is in resolving this difficulty with the aim of finding a better algorithm suited to subdividing each block type. The proposed hypothesis is that given the different approaches that block subdivision algorithms take, it's likely that different algorithms are better adapted to subdividing different block types. To test this, a standardized block type classification is used that consists of mutually exclusive and comprehensive categories. A statistical method is used for finding a better algorithm and the probability it will perform well for a given block type. Results suggest the oriented bounding box algorithm performs better for warped non-uniform sites, as well as gridiron and fragmented uniform sites. It also produces more similar parcel areas and widths. The Generalized Parcel Divider 1 algorithm performs better for gridiron non-uniform sites. The Straight Skeleton algorithm performs better for loop and lollipop networks as well as fragmented non-uniform and warped uniform sites. It also produces more similar parcel shapes and patterns.

  11. Improved hybrid optimization algorithm for 3D protein structure prediction.

    PubMed

    Zhou, Changjun; Hou, Caixia; Wei, Xiaopeng; Zhang, Qiang

    2014-07-01

    A new improved hybrid optimization algorithm, the PGATS algorithm, based on the toy off-lattice model, is presented for three-dimensional protein structure prediction problems. The algorithm combines particle swarm optimization (PSO), the genetic algorithm (GA), and tabu search (TS), and adopts several improvement strategies: a stochastic disturbance factor is introduced into particle swarm optimization to improve its search ability; the crossover and mutation operations of the genetic algorithm are changed to a random linear method; and tabu search is improved by appending a mutation operator. Through this combination of strategies and algorithms, protein structure prediction (PSP) in a 3D off-lattice model is achieved. The PSP problem is NP-hard, but it can be treated as a global optimization problem with multiple extrema and multiple parameters; this is the theoretical basis of the hybrid optimization algorithm proposed in this paper. The algorithm combines local search and global search, overcoming the shortcomings of a single algorithm and giving full play to the advantages of each. The method is validated on the widely used standard Fibonacci sequences and on real protein sequences. Experiments show that the proposed method outperforms single algorithms in the accuracy of the computed protein sequence energy value, which proves it an effective way to predict the structure of proteins. PMID:25069136

  12. Implementation of a new iterative learning control algorithm on real data.

    PubMed

    Zamanian, Hamed; Koohi, Ardavan

    2016-02-01

    In this paper, a new approach is proposed for closed-loop automatic tuning of a proportional-integral-derivative (PID) controller based on an iterative learning control (ILC) algorithm. A modified ILC scheme iteratively adjusts the control signal. Once satisfactory performance is achieved, a linear compensator is identified from the ILC behavior using the causal relationship between the closed-loop signals. This compensator is approximated by a PD controller, which is then used to tune the original PID controller. Results of implementing this approach on experimental data from the Damavand tokamak are presented and are consistent with simulation outcomes. PMID:26931852
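
    The generic ILC idea behind such schemes is trial-to-trial refinement: re-run the same task and add a gain times the previous trial's error to the control signal. Below is a minimal P-type ILC sketch on a toy plant; the plant, gain and horizon are our assumptions, not the paper's modified scheme or the Damavand setup.

```python
import numpy as np

def plant(u):
    """Toy first-order discrete plant: y[k] = 0.2*y[k-1] + 0.5*u[k]."""
    y = np.zeros_like(u)
    for k in range(1, len(u)):
        y[k] = 0.2 * y[k - 1] + 0.5 * u[k]
    return y

ref = np.ones(50)          # trajectory to track on every repetition
u = np.zeros(50)           # initial control signal
for trial in range(30):
    e = ref - plant(u)     # tracking error from the last repetition
    u = u + 1.0 * e        # P-type ILC update: u_{j+1} = u_j + L * e_j
print(np.abs(e[1:]).max())  # error shrinks to near zero across trials
```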

  13. An Efficient Algorithm for Some Highly Nonlinear Fractional PDEs in Mathematical Physics

    PubMed Central

    Ahmad, Jamshad; Mohyud-Din, Syed Tauseef

    2014-01-01

    In this paper, a fractional complex transform (FCT) is used to convert the given fractional partial differential equations (FPDEs) into corresponding partial differential equations (PDEs), and subsequently the Reduced Differential Transform Method (RDTM) is applied to the transformed system of linear and nonlinear time-fractional PDEs. The results so obtained are restated by means of the inverse transformation, which yields them in terms of the original variables. It is observed that the proposed algorithm is highly efficient and appropriate for fractional PDEs and hence can be extended to other complex problems of diversified nonlinear nature. PMID:25525804
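
    One common form of the fractional complex transform (assumed here for illustration; the paper's exact variant may differ) replaces the fractional time variable so that fractional derivatives become integer-order ones:

```latex
% With Jumarie's modified Riemann-Liouville derivative D_t^alpha, the
% substitution below turns a time-fractional PDE into an integer-order PDE.
\[
  T=\frac{t^{\alpha}}{\Gamma(1+\alpha)}, \qquad 0<\alpha\le 1,
  \qquad\text{so that}\qquad
  D_t^{\alpha}\,u\bigl(x,T(t)\bigr)=\frac{\partial u}{\partial T}.
\]
```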

  14. Implementation of a new iterative learning control algorithm on real data.

    PubMed

    Zamanian, Hamed; Koohi, Ardavan

    2016-02-01

    In this paper, a new approach is proposed for closed-loop automatic tuning of a proportional-integral-derivative (PID) controller based on an iterative learning control (ILC) algorithm. A modified ILC scheme iteratively adjusts the control signal. Once satisfactory performance is achieved, a linear compensator is identified from the ILC behavior using the causal relationship between the closed-loop signals. This compensator is approximated by a PD controller, which is then used to tune the original PID controller. Results of implementing this approach on experimental data from the Damavand tokamak are presented and are consistent with simulation outcomes.

  15. Automatic ionospheric layers detection: Algorithms analysis

    NASA Astrophysics Data System (ADS)

    Molina, María G.; Zuccheretti, Enrico; Cabrera, Miguel A.; Bianchi, Cesidio; Sciacca, Umberto; Baskaradas, James

    2016-03-01

    Vertical sounding is a widely used technique for obtaining ionosphere measurements, such as an estimate of virtual height as a function of scanning frequency. It is performed by a high-frequency radar for geophysical applications called an "ionospheric sounder" (or "ionosonde"). Radar detection depends mainly on target characteristics. While the behavior of several target types and the corresponding echo-detection algorithms have been studied, a survey to identify a suitable algorithm for the ionospheric sounder has yet to be carried out. This paper focuses on automatic echo-detection algorithms implemented specifically for an ionospheric sounder, and the specific characteristics of the target were studied as well. Adaptive threshold detection algorithms are proposed, compared with the currently implemented algorithm, and tested using actual data obtained from the Advanced Ionospheric Sounder (AIS-INGV) at the Rome Ionospheric Observatory. Different case studies were selected according to typical ionospheric and detection conditions.
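
    One classical adaptive threshold of the kind compared in such studies is cell-averaging CFAR, where the threshold at each range cell is a multiple of the mean noise level in neighbouring cells. The sketch below is illustrative only; the paper's proposed algorithms may differ, and all parameters are assumptions.

```python
import numpy as np

def ca_cfar(power, guard=2, train=8, scale=8.0):
    """Cell-averaging CFAR: flag cells exceeding scale * local noise mean."""
    n = len(power)
    detections = np.zeros(n, dtype=bool)
    for i in range(train + guard, n - train - guard):
        left = power[i - guard - train:i - guard]       # training cells
        right = power[i + guard + 1:i + guard + train + 1]
        noise = np.mean(np.concatenate([left, right]))  # local noise estimate
        detections[i] = power[i] > scale * noise
    return detections

rng = np.random.default_rng(3)
power = rng.exponential(1.0, 500)   # noise-like background
power[200] += 30.0                  # synthetic echo
print(np.flatnonzero(ca_cfar(power)))  # the injected echo at 200 is flagged
```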

  16. A three-dimensional-weighted cone beam filtered backprojection (CB-FBP) algorithm for image reconstruction in volumetric CT-helical scanning.

    PubMed

    Tang, Xiangyang; Hsieh, Jiang; Nilsen, Roy A; Dutta, Sandeep; Samsonov, Dmitry; Hagiwara, Akira

    2006-02-21

    Based on the structure of the original helical FDK algorithm, a three-dimensional (3D)-weighted cone beam filtered backprojection (CB-FBP) algorithm is proposed for image reconstruction in volumetric CT under helical source trajectory. In addition to its dependence on view and fan angles, the 3D weighting utilizes the cone angle dependency of a ray to improve reconstruction accuracy. The 3D weighting is ray-dependent and the underlying mechanism is to give a favourable weight to the ray with the smaller cone angle out of a pair of conjugate rays but an unfavourable weight to the ray with the larger cone angle out of the conjugate ray pair. The proposed 3D-weighted helical CB-FBP reconstruction algorithm is implemented in the cone-parallel geometry that can improve noise uniformity and image generation speed significantly. Under the cone-parallel geometry, the filtering is naturally carried out along the tangential direction of the helical source trajectory. By exploring the 3D weighting's dependence on cone angle, the proposed helical 3D-weighted CB-FBP reconstruction algorithm can provide significantly improved reconstruction accuracy at moderate cone angle and high helical pitches. The 3D-weighted CB-FBP algorithm is experimentally evaluated by computer-simulated phantoms and phantoms scanned by a diagnostic volumetric CT system with a detector dimension of 64 x 0.625 mm over various helical pitches. The computer simulation study shows that the 3D weighting enables the proposed algorithm to reach reconstruction accuracy comparable to that of exact CB reconstruction algorithms, such as the Katsevich algorithm, under a moderate cone angle (4 degrees) and various helical pitches. Meanwhile, the experimental evaluation using the phantoms scanned by a volumetric CT system shows that the spatial resolution along the z-direction and noise characteristics of the proposed 3D-weighted helical CB-FBP reconstruction algorithm are maintained very well in comparison to the FDK

  17. Dynamic displacement measurement of large-scale structures based on the Lucas-Kanade template tracking algorithm

    NASA Astrophysics Data System (ADS)

    Guo, Jie; Zhu, Chang'an

    2016-01-01

    The development of optics and computer technologies enables the application of vision-based techniques, using digital cameras, to the displacement measurement of large-scale structures. Compared with traditional contact measurements, the vision-based technique allows for remote measurement, is non-intrusive, and introduces no additional mass. In this study, a high-speed camera system consisting of a high-speed camera and a notebook computer is developed to perform displacement measurement in real time. The high-speed camera can capture images at a rate of hundreds of frames per second. To process the captured images, the Lucas-Kanade template tracking algorithm from the field of computer vision is introduced, and a modified inverse compositional algorithm is proposed to reduce the computing time of the original algorithm and further improve its efficiency. The modified algorithm can accomplish one displacement extraction within 1 ms without any pre-designed target panel having to be installed on the structure in advance. The accuracy and efficiency of the system in the remote measurement of dynamic displacement are demonstrated in experiments on a motion platform and on a sound barrier on a suspension viaduct. Experimental results show that the proposed algorithm extracts accurate displacement signals and accomplishes vibration measurement of large-scale structures.
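
    The appeal of the inverse compositional formulation is that the template's gradients and Hessian are computed once, outside the iteration loop. The translation-only sketch below shows that structure under simplifying assumptions; it is not the authors' modified algorithm.

```python
import numpy as np
from scipy.ndimage import shift as nd_shift

def ic_lucas_kanade(template, image, iters=50, tol=1e-4):
    """Estimate the (dy, dx) translation aligning image to template."""
    g0, g1 = np.gradient(template)            # template gradients, fixed
    H = np.array([[np.sum(g0 * g0), np.sum(g0 * g1)],
                  [np.sum(g0 * g1), np.sum(g1 * g1)]])
    H_inv = np.linalg.inv(H)                  # Hessian inverted only once
    p = np.zeros(2)
    for _ in range(iters):
        warped = nd_shift(image, -p, order=1)  # image sampled at x + p
        err = warped - template
        dp = H_inv @ np.array([np.sum(g0 * err), np.sum(g1 * err)])
        p = p - dp                             # inverse compositional update
        if np.linalg.norm(dp) < tol:
            break
    return p

y, x = np.mgrid[0:64, 0:64]
template = np.exp(-((x - 32.0)**2 + (y - 32.0)**2) / 60.0)  # smooth blob
image = nd_shift(template, (3.0, -2.0), order=1)            # known motion
print(ic_lucas_kanade(template, image).round(2))            # ~[ 3. -2.]
```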

  18. Node Self-Deployment Algorithm Based on an Uneven Cluster with Radius Adjusting for Underwater Sensor Networks.

    PubMed

    Jiang, Peng; Xu, Yiming; Wu, Feng

    2016-01-14

    Existing move-restricted node self-deployment algorithms are based on a fixed node communication radius and evaluate performance by network coverage or connectivity rate alone; they do not consider the number of nodes near the sink node or the energy-consumption distribution of the network topology, thereby degrading network reliability and the energy-consumption balance. Therefore, we propose a distributed underwater node self-deployment algorithm. First, each node begins uneven clustering based on its distance on the water surface, and each cluster head node selects its next-hop node to synchronously construct a connected path to the sink node. Second, each cluster head node adjusts its depth while maintaining the layout formed by the uneven clustering and then adjusts the positions of its in-cluster nodes. The algorithm is novel in considering network reliability and the energy-consumption balance during node deployment, and it accounts for the coverage redundancy rate of all positions that a node may reach during position adjustment. Simulation results show that, compared with the connected dominating set (CDS) based depth computation algorithm, the proposed algorithm increases the number of nodes near the sink node and improves network reliability while guaranteeing the network connectivity rate. Moreover, it balances energy consumption during network operation, further improves the network coverage rate and reduces energy consumption.

  19. Node Self-Deployment Algorithm Based on an Uneven Cluster with Radius Adjusting for Underwater Sensor Networks.

    PubMed

    Jiang, Peng; Xu, Yiming; Wu, Feng

    2016-01-01

    Existing move-restricted node self-deployment algorithms are based on a fixed node communication radius and evaluate performance by network coverage or connectivity rate alone; they do not consider the number of nodes near the sink node or the energy-consumption distribution of the network topology, thereby degrading network reliability and the energy-consumption balance. Therefore, we propose a distributed underwater node self-deployment algorithm. First, each node begins uneven clustering based on its distance on the water surface, and each cluster head node selects its next-hop node to synchronously construct a connected path to the sink node. Second, each cluster head node adjusts its depth while maintaining the layout formed by the uneven clustering and then adjusts the positions of its in-cluster nodes. The algorithm is novel in considering network reliability and the energy-consumption balance during node deployment, and it accounts for the coverage redundancy rate of all positions that a node may reach during position adjustment. Simulation results show that, compared with the connected dominating set (CDS) based depth computation algorithm, the proposed algorithm increases the number of nodes near the sink node and improves network reliability while guaranteeing the network connectivity rate. Moreover, it balances energy consumption during network operation, further improves the network coverage rate and reduces energy consumption. PMID:26784193

  20. Node Self-Deployment Algorithm Based on an Uneven Cluster with Radius Adjusting for Underwater Sensor Networks

    PubMed Central

    Jiang, Peng; Xu, Yiming; Wu, Feng

    2016-01-01

    Existing move-restricted node self-deployment algorithms are based on a fixed node communication radius and evaluate performance by network coverage or connectivity rate alone; they do not consider the number of nodes near the sink node or the energy-consumption distribution of the network topology, thereby degrading network reliability and the energy-consumption balance. Therefore, we propose a distributed underwater node self-deployment algorithm. First, each node begins uneven clustering based on its distance on the water surface, and each cluster head node selects its next-hop node to synchronously construct a connected path to the sink node. Second, each cluster head node adjusts its depth while maintaining the layout formed by the uneven clustering and then adjusts the positions of its in-cluster nodes. The algorithm is novel in considering network reliability and the energy-consumption balance during node deployment, and it accounts for the coverage redundancy rate of all positions that a node may reach during position adjustment. Simulation results show that, compared with the connected dominating set (CDS) based depth computation algorithm, the proposed algorithm increases the number of nodes near the sink node and improves network reliability while guaranteeing the network connectivity rate. Moreover, it balances energy consumption during network operation, further improves the network coverage rate and reduces energy consumption. PMID:26784193

  1. An improved localization algorithm based on genetic algorithm in wireless sensor networks.

    PubMed

    Peng, Bo; Li, Lei

    2015-04-01

    Wireless sensor networks (WSNs) are widely used in many applications. A WSN is a decentralized wireless network comprised of nodes that autonomously set up a network. Node localization, i.e., determining the position of a node in the network, is an essential part of many sensor network operations and applications. Existing localization algorithms can be classified into two categories: range-based and range-free. Range-based localization algorithms have hardware requirements and are thus expensive to implement in practice; range-free localization algorithms reduce the hardware cost. Because of the hardware limitations of WSN devices, range-free solutions are pursued as a cost-effective alternative to more expensive range-based approaches. However, these techniques usually have higher localization error than range-based algorithms. DV-Hop is a typical range-free localization algorithm utilizing hop-distance estimation. In this paper, we propose an improved DV-Hop algorithm based on a genetic algorithm. Simulation results show that our proposed algorithm improves localization accuracy compared with previous algorithms.
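
    The hop-distance estimation at the core of DV-Hop is simple to sketch: an anchor computes an average hop size from its known distances and hop counts to other anchors, and an unknown node estimates its range as hop size times hop count. This is the assumed textbook form, not the paper's GA-improved variant, and all numbers are hypothetical.

```python
import numpy as np

anchors = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0]])
hops_between_anchors = np.array([[0, 4, 4],
                                 [4, 0, 6],
                                 [4, 6, 0]])   # from flooding, hypothetical

# Average hop size at anchor 0: sum of distances / sum of hops to the others
d01 = np.linalg.norm(anchors[0] - anchors[1])
d02 = np.linalg.norm(anchors[0] - anchors[2])
hop_size = (d01 + d02) / (hops_between_anchors[0, 1] + hops_between_anchors[0, 2])

hops_to_unknown = 3                 # hop count flooded to the unknown node
print(hop_size * hops_to_unknown)   # estimated distance, = 75.0 here
```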

  2. Eusociality: origin and consequences.

    PubMed

    Wilson, Edward O; Hölldobler, Bert

    2005-09-20

    In this new assessment of the empirical evidence, an alternative to the standard model is proposed: group selection is the strong binding force in eusocial evolution; individual selection, the strong dissolutive force; and kin selection (narrowly defined), either a weak binding or weak dissolutive force, according to circumstance. Close kinship may be more a consequence of eusociality than a factor promoting its origin. A point of no return to the solitary state exists, as a rule when workers become anatomically differentiated. Eusociality has been rare in evolution, evidently due to the scarcity of environmental pressures adequate to tip the balance among countervailing forces in favor of group selection. Eusociality in ants and termites in the irreversible stage is the key to their ecological dominance and has (at least in ants) shaped some features of internal phylogeny. Their colonies are consistently superior to solitary and preeusocial competitors, due to the altruistic behavior among nestmates and their ability to organize coordinated action by pheromonal communication. PMID:16157878

  3. Origin of Neutron Stars

    NASA Astrophysics Data System (ADS)

    Brecher, K.

    1999-12-01

    The origin of the concept of neutron stars can be traced to two brief, incredibly insightful publications. Work on the earlier paper by Lev Landau (Phys. Z. Sowjetunion, 1, 285, 1932) actually predated the discovery of neutrons. Nonetheless, Landau arrived at the notion of a collapsed star with the density of a nucleus (really a "nucleus star") and demonstrated (at about the same time as, and independent of, Chandrasekhar) that there is an upper mass limit for dense stellar objects of about 1.5 solar masses. Perhaps even more remarkable is the abstract of a talk presented at the December 1933 meeting of the American Physical Society published by Walter Baade and Fritz Zwicky in 1934 (Phys. Rev. 45, 138). It followed the discovery of the neutron by just over a year. Their report, which was about the same length as the present abstract: (1) invented the concept and word supernova; (2) suggested that cosmic rays are produced by supernovae; and (3) in the authors own words, proposed "with all reserve ... the view that supernovae represent the transitions from ordinary stars to neutron stars, which in their final stages consist of extremely closely packed neutrons." The abstract by Baade and Zwicky probably contains the highest density of new, important (and correct) ideas in high energy astrophysics ever published in a single paper. In this talk, we will discuss some of the facts and myths surrounding these two publications.

  4. Origins of magnetospheric physics

    SciTech Connect

    Van Allen, J.A.

    1983-01-01

    The history of the scientific investigation of the Earth's magnetosphere during the period 1946-1960 is reviewed, with a focus on satellite missions leading to the discovery of the inner and outer radiation belts. Chapters are devoted to ground-based studies of the Earth's magnetic field through the 1930s, the first U.S. rocket flights carrying scientific instruments, the rockoon flights from the polar regions (1952-1957), U.S. planning for scientific use of artificial satellites (1956), the launch of Sputnik I (1957), the discovery of the inner belt by Explorers I and III (1958), the Argus high-altitude atomic-explosion tests (1958), the confirmation of the inner belt and discovery of the outer belt by Explorer IV and Pioneers I-V, related studies by Sputniks II and III and Luniks I-III, and the observational and theoretical advances of 1959-1961. Photographs, drawings, diagrams, graphs, and copies of original notes and research proposals are provided. 227 references.

  5. Robust and low complexity localization algorithm based on head-related impulse responses and interaural time difference.

    PubMed

    Wan, Xinwang; Liang, Juan

    2013-01-01

    This article introduces a biologically inspired localization algorithm for a mobile robot using two microphones. The proposed algorithm has two steps. First, the coarse azimuth angle of the sound source is estimated by a cross-correlation algorithm based on the interaural time difference. Then, the accurate azimuth angle is obtained by a cross-channel algorithm based on head-related impulse responses. The proposed algorithm has lower computational complexity than the cross-channel algorithm. Experimental results illustrate that the localization performance of the proposed algorithm is better than those of the cross-correlation and cross-channel algorithms. PMID:23298016
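
    The first stage can be sketched directly: find the inter-channel lag maximizing the cross-correlation, then convert that interaural time difference to an azimuth with the far-field geometry azimuth = arcsin(c * ITD / d). Sample rate, spacing and delay below are illustrative assumptions.

```python
import numpy as np

fs = 44_100            # sample rate (Hz)
mic_distance = 0.15    # microphone spacing (m)
c = 343.0              # speed of sound (m/s)

rng = np.random.default_rng(5)
src = rng.standard_normal(4096)
delay = 12                                  # true inter-channel delay (samples)
left = src
right = np.concatenate([np.zeros(delay), src[:-delay]])

lags = np.arange(-63, 64)
xc = [np.dot(left[64:-64], right[64 + k:len(right) - 64 + k]) for k in lags]
itd = lags[int(np.argmax(xc))] / fs         # lag maximizing cross-correlation
azimuth = np.degrees(np.arcsin(np.clip(c * itd / mic_distance, -1, 1)))
print(int(round(itd * fs)), round(azimuth, 1))  # 12 samples, ~38.5 degrees
```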

  6. Planning Readings: A Comparative Exploration of Basic Algorithms

    ERIC Educational Resources Information Center

    Piater, Justus H.

    2009-01-01

    Conventional introduction to computer science presents individual algorithmic paradigms in the context of specific, prototypical problems. To complement this algorithm-centric instruction, this study additionally advocates problem-centric instruction. I present an original problem drawn from students' life that is simply stated but provides rich…

  7. Adaptive cuckoo search algorithm for unconstrained optimization.

    PubMed

    Ong, Pauline

    2014-01-01

    Modification of the intensification and diversification approaches in the recently developed cuckoo search algorithm (CSA) is performed. The alteration involves implementing an adaptive step-size adjustment strategy, thus enabling faster convergence to the global optimal solutions. The feasibility of the proposed algorithm is validated against benchmark optimization functions, where the obtained results demonstrate a marked improvement over the standard CSA in all cases. PMID:25298971
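
    In standard CSA, new candidate solutions are generated by Lévy flights scaled by a step size; an adaptive variant shrinks that scale as the search proceeds. The sketch below pairs Mantegna's Lévy-step generator with an assumed decaying schedule as a stand-in for the paper's strategy, which is not reproduced here.

```python
import numpy as np
from math import gamma, sin, pi

rng = np.random.default_rng(2)

def levy_step(dim, beta=1.5):
    """Levy-distributed step via Mantegna's algorithm."""
    sigma = (gamma(1 + beta) * sin(pi * beta / 2)
             / (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma, dim)
    v = rng.normal(0.0, 1.0, dim)
    return u / np.abs(v) ** (1 / beta)

best = np.zeros(2)                  # current best nest (toy placeholder)
for t in range(1, 6):
    alpha = 0.1 / t                 # assumed decaying step-size schedule
    candidate = best + alpha * levy_step(2)
    print(t, np.round(candidate, 4))  # proposal moves shrink over iterations
```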

  8. Facial Composite System Using Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Zahradníková, Barbora; Duchovičová, Soňa; Schreiber, Peter

    2014-12-01

    The article deals with genetic algorithms and their application in face identification. The purpose of the research is to develop a free and open-source facial composite system using evolutionary algorithms, primarily the processes of selection and breeding. Initial testing showed higher quality of the final composites and a massive reduction in composite processing time. System requirements were specified, and future research directions were proposed in order to improve the results.

  9. Adaptive cuckoo search algorithm for unconstrained optimization.

    PubMed

    Ong, Pauline

    2014-01-01

    Modification of the intensification and diversification approaches in the recently developed cuckoo search algorithm (CSA) is performed. The alteration involves implementing an adaptive step-size adjustment strategy, thus enabling faster convergence to the global optimal solutions. The feasibility of the proposed algorithm is validated against benchmark optimization functions, where the obtained results demonstrate a marked improvement over the standard CSA in all cases.

  10. Grover's algorithm and the secant varieties

    NASA Astrophysics Data System (ADS)

    Holweck, Frédéric; Jaffali, Hamza; Nounouh, Ismaël

    2016-09-01

    In this paper we investigate the entanglement nature of quantum states generated by Grover's search algorithm by means of algebraic geometry. More precisely, we establish a link between entanglement of states generated by the algorithm and auxiliary algebraic varieties built from the set of separable states. This new perspective enables us to propose qualitative interpretations of earlier numerical results obtained by M. Rossi et al. We also illustrate our purpose with a couple of examples investigated in detail.

  11. Self-organization and clustering algorithms

    NASA Technical Reports Server (NTRS)

    Bezdek, James C.

    1991-01-01

    Kohonen's feature maps approach to clustering is often likened to the k or c-means clustering algorithms. Here, the author identifies some similarities and differences between the hard and fuzzy c-Means (HCM/FCM) or ISODATA algorithms and Kohonen's self-organizing approach. The author concludes that some differences are significant, but at the same time there may be some important unknown relationships between the two methodologies. Several avenues of research are proposed.

  12. An efficient method of key-frame extraction based on a cluster algorithm.

    PubMed

    Zhang, Qiang; Yu, Shao-Pei; Zhou, Dong-Sheng; Wei, Xiao-Peng

    2013-12-18

    This paper proposes a novel method of key-frame extraction for use with motion capture data. This method is based on an unsupervised cluster algorithm. First, the motion sequence is clustered into two classes by the similarity distance of the adjacent frames so that the thresholds needed in the next step can be determined adaptively. Second, a dynamic cluster algorithm called ISODATA is used to cluster all the frames and the frames nearest to the center of each class are automatically extracted as key-frames of the sequence. Unlike many other clustering techniques, the present improved cluster algorithm can automatically address different motion types without any need for specified parameters from users. The proposed method is capable of summarizing motion capture data reliably and efficiently. The present work also provides a meaningful comparison between the results of the proposed key-frame extraction technique and other previous methods. These results are evaluated in terms of metrics that measure reconstructed motion and the mean absolute error value, which are derived from the reconstructed data and the original data.

  13. Detection algorithm for glass bottle mouth defect by continuous wavelet transform based on machine vision

    NASA Astrophysics Data System (ADS)

    Qian, Jinfang; Zhang, Changjiang

    2014-11-01

    An efficient algorithm based on the continuous wavelet transform combined with prior knowledge, which can be used to detect defects of the glass bottle mouth, is proposed. First, under a ball-integral light source, an image of a defect-free glass bottle mouth is obtained by a Japanese Computar camera through an IEEE-1394b interface. A single-threshold method based on the gray-level histogram is used to obtain the binary image of the bottle mouth. To suppress noise efficiently, a moving-average filter is employed to smooth the histogram of the original image, and the continuous wavelet transform is then applied to determine the segmentation threshold accurately. Mathematical morphology operations are used to obtain the normal binary bottle-mouth mask. A glass bottle to be inspected is moved into the detection zone by a conveyor belt, and both its mouth image and binary image are obtained by the above method. The binary image is multiplied with the normal bottle mask to produce a region of interest, from which four parameters (number of connected regions, centroid position, inner-circle diameter, and annular-region area) are computed. Detection rules based on these four parameters are designed to accurately detect and identify the defect conditions of the glass bottle mouth. Finally, glass bottles from the Coca-Cola Company are used to verify the proposed algorithm. The experimental results show that the proposed algorithm accurately detects the defect conditions of the glass bottles, with 98% detection accuracy.

  14. Insect-inspired navigation algorithm for an aerial agent using satellite imagery.

    PubMed

    Gaffin, Douglas D; Dewar, Alexander; Graham, Paul; Philippides, Andrew

    2015-01-01

    Humans have long marveled at the ability of animals to navigate swiftly, accurately, and across long distances. Many mechanisms have been proposed for how animals acquire, store, and retrace learned routes, yet many of these hypotheses appear incongruent with behavioral observations and the animals' neural constraints. The "Navigation by Scene Familiarity Hypothesis" proposed originally for insect navigation offers an elegantly simple solution for retracing previously experienced routes without the need for complex neural architectures and memory retrieval mechanisms. This hypothesis proposes that an animal can return to a target location by simply moving toward the most familiar scene at any given point. Proof of concept simulations have used computer-generated ant's-eye views of the world, but here we test the ability of scene familiarity algorithms to navigate training routes across satellite images extracted from Google Maps. We find that Google satellite images are so rich in visual information that familiarity algorithms can be used to retrace even tortuous routes with low-resolution sensors. We discuss the implications of these findings not only for animal navigation but also for the potential development of visual augmentation systems and robot guidance algorithms.
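
    The scene-familiarity rule itself reduces to a few lines: store views along a training route, then at each step move toward the heading whose current view best matches (lowest sum of squared differences against) any stored view. The sketch below is a toy version with random vectors standing in for image views; it is not the authors' simulation.

```python
import numpy as np

rng = np.random.default_rng(9)
route_views = rng.random((20, 64))        # views memorized along the route

def familiarity(view):
    """Lower is more familiar: best SSD against all stored route views."""
    return np.min(np.sum((route_views - view) ** 2, axis=1))

def choose_heading(candidate_views):
    """Pick the candidate heading whose view is most familiar."""
    scores = [familiarity(v) for v in candidate_views]
    return int(np.argmin(scores))

# Candidate 1 is a noisy version of a memorized view; it should win.
candidates = [rng.random(64),
              route_views[7] + 0.05 * rng.standard_normal(64),
              rng.random(64)]
print(choose_heading(candidates))  # -> 1
```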

  15. Insect-Inspired Navigation Algorithm for an Aerial Agent Using Satellite Imagery

    PubMed Central

    Gaffin, Douglas D.; Dewar, Alexander; Graham, Paul; Philippides, Andrew

    2015-01-01

    Humans have long marveled at the ability of animals to navigate swiftly, accurately, and across long distances. Many mechanisms have been proposed for how animals acquire, store, and retrace learned routes, yet many of these hypotheses appear incongruent with behavioral observations and the animals’ neural constraints. The “Navigation by Scene Familiarity Hypothesis” proposed originally for insect navigation offers an elegantly simple solution for retracing previously experienced routes without the need for complex neural architectures and memory retrieval mechanisms. This hypothesis proposes that an animal can return to a target location by simply moving toward the most familiar scene at any given point. Proof of concept simulations have used computer-generated ant’s-eye views of the world, but here we test the ability of scene familiarity algorithms to navigate training routes across satellite images extracted from Google Maps. We find that Google satellite images are so rich in visual information that familiarity algorithms can be used to retrace even tortuous routes with low-resolution sensors. We discuss the implications of these findings not only for animal navigation but also for the potential development of visual augmentation systems and robot guidance algorithms. PMID:25874764

  16. A new algorithm for importance analysis of the inputs with distribution parameter uncertainty

    NASA Astrophysics Data System (ADS)

    Li, Luyi; Lu, Zhenzhou

    2016-10-01

    Importance analysis is aimed at finding the contributions by the inputs to the uncertainty in a model output. For structural systems involving inputs with distribution parameter uncertainty, the contributions by the inputs to the output uncertainty are governed by both the variability and the parameter uncertainty in their probability distributions. A natural and consistent way to arrive at importance analysis results in such cases would be a three-loop nested Monte Carlo (MC) sampling strategy, in which the parameters are sampled in the outer loop and the inputs are sampled in the inner nested double loop. However, the computational effort of this procedure is often prohibitive for engineering problems. This paper therefore proposes a new efficient algorithm for importance analysis of the inputs in the presence of parameter uncertainty. By introducing a 'surrogate sampling probability density function (SS-PDF)' and incorporating single-loop MC theory into the computation, the proposed algorithm reduces the original three-loop nested MC computation to a single loop in terms of model evaluations, which requires substantially less computational effort. Methods for choosing a proper SS-PDF are also discussed in the paper. The efficiency and robustness of the proposed algorithm are demonstrated by the results of several examples.
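
    The single-loop idea can be illustrated with importance reweighting (assumed mechanics, not the authors' full algorithm): draw inputs once from a surrogate density q, evaluate the model once per sample, then reuse those evaluations for any distribution parameter value by weighting each sample with p(x | theta) / q(x).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)
q = stats.norm(0.0, 2.0)                 # surrogate sampling PDF (deliberately wide)
x = q.rvs(size=50_000, random_state=rng)
y = x ** 2 + 1.0                         # "expensive" model, evaluated only once

def mean_under(theta_mu, theta_sigma):
    """E[y] when x ~ N(theta_mu, theta_sigma), without re-running the model."""
    w = stats.norm(theta_mu, theta_sigma).pdf(x) / q.pdf(x)
    return np.sum(w * y) / np.sum(w)     # self-normalized importance estimate

print(round(mean_under(0.0, 1.0), 2))    # ~2.0  (E[x^2] + 1 with sigma = 1)
print(round(mean_under(0.5, 1.0), 2))    # ~2.25 (mu^2 + sigma^2 + 1)
```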

  17. Modern human origins: progress and prospects.

    PubMed Central

    Stringer, Chris

    2002-01-01

    The question of the mode of origin of modern humans (Homo sapiens) has dominated palaeoanthropological debate over the last decade. This review discusses the main models proposed to explain modern human origins, and examines relevant fossil evidence from Eurasia, Africa and Australasia. Archaeological and genetic data are also discussed, as well as problems with the concept of 'modernity' itself. It is concluded that a recent African origin can be supported for H. sapiens, morphologically, behaviourally and genetically, but that more evidence will be needed, both from Africa and elsewhere, before an absolute African origin for our species and its behavioural characteristics can be established and explained. PMID:12028792

  18. Multilevel algorithms for nonlinear optimization

    NASA Technical Reports Server (NTRS)

    Alexandrov, Natalia; Dennis, J. E., Jr.

    1994-01-01

    Multidisciplinary design optimization (MDO) gives rise to nonlinear optimization problems characterized by a large number of constraints that naturally occur in blocks. We propose a class of multilevel optimization methods motivated by the structure and number of constraints and by the expense of the derivative computations for MDO. The algorithms are an extension to the nonlinear programming problem of the successful class of local Brown-Brent algorithms for nonlinear equations. Our extensions allow the user to partition constraints into arbitrary blocks to fit the application, and they separately process each block and the objective function, restricted to certain subspaces. The methods use trust regions as a globalization strategy, and they have been shown to be globally convergent under reasonable assumptions. The multilevel algorithms can be applied to all classes of MDO formulations. Multilevel algorithms for solving nonlinear systems of equations are a special case of the multilevel optimization methods. In this case, they can be viewed as a trust-region globalization of the Brown-Brent class.

  19. Optimal Battery Sizing in Photovoltaic Based Distributed Generation Using Enhanced Opposition-Based Firefly Algorithm for Voltage Rise Mitigation

    PubMed Central

    Wong, Ling Ai; Shareef, Hussain; Mohamed, Azah; Ibrahim, Ahmad Asrul

    2014-01-01

    This paper presents the application of an enhanced opposition-based firefly algorithm to obtaining the optimal battery energy storage system (BESS) size in a photovoltaic-generation-integrated radial distribution network, in order to mitigate the voltage rise problem. Initially, the performance of the original firefly algorithm is enhanced by utilizing opposition-based learning and introducing an inertia weight. After its performance is evaluated on fifteen benchmark functions, the enhanced opposition-based firefly algorithm (EOFA) is adopted to determine the optimal BESS size. Two optimization processes are conducted: the first obtains the optimal battery output power on an hourly basis, and the second obtains the optimal BESS capacity while considering the state-of-charge constraint of the BESS. The effectiveness of the proposed method is validated by applying the algorithm to the 69-bus distribution system and by comparing the performance of EOFA with the conventional firefly algorithm and the gravitational search algorithm. Results show that EOFA performs best among the compared algorithms in mitigating the voltage rise problem. PMID:25054184

  20. Optimal battery sizing in photovoltaic based distributed generation using enhanced opposition-based firefly algorithm for voltage rise mitigation.

    PubMed

    Wong, Ling Ai; Shareef, Hussain; Mohamed, Azah; Ibrahim, Ahmad Asrul

    2014-01-01

    This paper presents the application of an enhanced opposition-based firefly algorithm for obtaining the optimal battery energy storage system (BESS) size in a photovoltaic-generation-integrated radial distribution network, in order to mitigate the voltage rise problem. Initially, the performance of the original firefly algorithm is enhanced by utilizing opposition-based learning and introducing an inertia weight. After its performance is evaluated on fifteen benchmark functions, the enhanced opposition-based firefly algorithm (EOFA) is adopted to determine the optimal size for the BESS. Two optimization processes are conducted: the first obtains the optimal battery output power on an hourly basis, and the second obtains the optimal BESS capacity while respecting the state-of-charge constraint of the BESS. The effectiveness of the proposed method is validated by applying the algorithm to the 69-bus distribution system and by comparing the performance of EOFA with the conventional firefly algorithm and the gravitational search algorithm. Results show that EOFA performs best in mitigating the voltage rise problem.

  1. Algorithmic Perspectives on Problem Formulations in MDO

    NASA Technical Reports Server (NTRS)

    Alexandrov, Natalia M.; Lewis, Robert Michael

    2000-01-01

    This work is concerned with an approach to formulating the multidisciplinary optimization (MDO) problem that reflects an algorithmic perspective on MDO problem solution. The algorithmic perspective focuses on formulating the problem in light of the abilities and inabilities of optimization algorithms, so that the resulting nonlinear programming problem can be solved reliably and efficiently by conventional optimization techniques. We propose a modular approach to formulating MDO problems that takes advantage of the problem structure, maximizes the autonomy of implementation, and allows for multiple easily interchangeable problem statements to be used depending on the available resources and the characteristics of the application problem.

  2. A novel resistance iterative algorithm for CCOS

    NASA Astrophysics Data System (ADS)

    Zheng, Ligong; Zhang, Xuejun

    2006-08-01

    CCOS (Computer Controlled Optical Surfacing) technology is widely used for making aspheric mirrors. Most manufacturers employ a dwell-time algorithm to determine the route and dwell time of the small tool so that surface errors converge. In this article, a novel resistance (damped) iterative algorithm is proposed: we choose the revolutions of the small tool, rather than its dwell time, to determine the fabrication strategy, and we solve for these revolutions using the resistance iterative algorithm. Several mirrors have been manufactured by this method; all of them fulfilled the designers' demands, and a 1 m aspheric mirror was finished within 3 months.

  3. Complexity of the Quantum Adiabatic Algorithm

    NASA Technical Reports Server (NTRS)

    Hen, Itay

    2013-01-01

    The Quantum Adiabatic Algorithm (QAA) has been proposed as a mechanism for efficiently solving optimization problems on a quantum computer. Since adiabatic computation is analog in nature and does not require the design and use of quantum gates, it can be thought of as a simpler, and perhaps more profound, method of performing quantum computations that might also be easier to implement experimentally. While these features have generated substantial research on the QAA, to date there is still a lack of solid evidence that the algorithm can outperform classical optimization algorithms.

  4. [Multispectral image compression algorithms for color reproduction].

    PubMed

    Liang, Wei; Zeng, Ping; Luo, Xue-mei; Wang, Yi-feng; Xie, Kun

    2015-01-01

    In order to improve the compression efficiency of multispectral images and further facilitate their storage and transmission in applications such as color reproduction, where high color accuracy is desired, the WF serial methods are proposed and the APWS_RA algorithm is designed. The WF_APWS_RA algorithm, which has the advantages of low complexity, good illuminant stability, and support for consistent color reproduction across devices, is then presented. The conventional MSE-based wavelet embedded coding principle is first studied, and a color-perception distortion criterion and a visual characteristic matrix W are proposed. Meanwhile, the APWS_RA algorithm is formed by optimizing the rate allocation strategy of APWS. Finally, combining the above technologies, a new coding method named WF_APWS_RA is designed. A colorimetric error criterion is used in the algorithm, and APWS_RA is applied to the visually weighted multispectral image. In WF_APWS_RA, affinity propagation clustering is utilized to exploit the spectral correlation of the weighted image, and a two-dimensional wavelet transform then removes the spatial redundancy. Subsequently, an error compensation mechanism and rate pre-allocation are combined to accomplish the embedded wavelet coding. Experimental results show that, at the same bit rate, the WF serial algorithms retain color better than classical coding algorithms: APWS_RA preserves the least spectral error, and the WF_APWS_RA algorithm shows a clear advantage in color accuracy.

  5. Image segmentation using an improved differential algorithm

    NASA Astrophysics Data System (ADS)

    Gao, Hao; Shi, Yujiao; Wu, Dongmei

    2014-10-01

    Among all existing segmentation techniques, thresholding is one of the most popular due to its simplicity, robustness, and accuracy (e.g., the maximum entropy method, Otsu's method, and K-means clustering). However, the computation time of these algorithms grows exponentially with the number of thresholds because of their exhaustive search strategy. As a population-based optimization method, the differential evolution (DE) algorithm maintains a population of potential solutions and has shown considerable success in solving complex optimization problems within a reasonable time limit; applying it to segmentation is therefore a natural choice given its fast computation. In this paper, we first propose a new DE algorithm with a balance strategy, which seeks a balance between the exploration of new regions and the exploitation of already sampled regions. Then we apply the new DE to the traditional Otsu method to shorten its computation time. Experimental results on a variety of images show that, compared with EA-based thresholding methods, the proposed DE algorithm yields more effective and efficient results, and it shortens the computation time of the traditional Otsu method.
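
    The combination described, a DE-style population search driving Otsu's between-class-variance criterion, can be sketched compactly. The following is a simplified illustration under assumed parameters; it uses canonical DE/rand/1/bin moves and omits the paper's balance strategy.

```python
import numpy as np

def between_class_variance(thresholds, hist):
    # Otsu criterion for multilevel thresholding: the weighted between-class
    # variance of the classes induced by the thresholds (to be maximized).
    t = [0] + sorted(int(x) for x in thresholds) + [len(hist)]
    p = hist / hist.sum()
    levels = np.arange(len(hist))
    mu_total = (p * levels).sum()
    var = 0.0
    for lo, hi in zip(t[:-1], t[1:]):
        w = p[lo:hi].sum()
        if w > 0:
            mu = (p[lo:hi] * levels[lo:hi]).sum() / w
            var += w * (mu - mu_total) ** 2
    return var

def de_otsu(hist, k=3, pop=30, iters=100, F=0.5, CR=0.9, seed=0):
    # Canonical DE/rand/1/bin; for brevity the base vectors are not forced
    # to differ from the target index, a simplification of standard DE.
    rng = np.random.default_rng(seed)
    X = rng.uniform(1, 255, (pop, k))
    fit = np.array([between_class_variance(x, hist) for x in X])
    for _ in range(iters):
        for i in range(pop):
            a, b, c = X[rng.choice(pop, 3, replace=False)]
            mutant = np.clip(a + F * (b - c), 1, 255)
            trial = np.where(rng.random(k) < CR, mutant, X[i])
            f_trial = between_class_variance(trial, hist)
            if f_trial > fit[i]:                     # maximize the variance
                X[i], fit[i] = trial, f_trial
    return sorted(int(t) for t in X[np.argmax(fit)])

img = np.random.default_rng(1).integers(0, 256, (64, 64))
hist, _ = np.histogram(img, bins=256, range=(0, 256))
print(de_otsu(hist.astype(float)))
```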

  6. Least significant qubit algorithm for quantum images

    NASA Astrophysics Data System (ADS)

    Sang, Jianzhi; Wang, Shen; Li, Qiong

    2016-08-01

    To study the feasibility of the classical image least significant bit (LSB) information hiding algorithm on a quantum computer, a least significant qubit (LSQb) information hiding algorithm for quantum images is proposed. In this paper, we focus on a novel quantum representation for color digital images (NCQI). First, by designing a three-qubit comparator and the necessary unitary operators, the reasonableness and feasibility of LSQb based on NCQI are presented. Then, the concrete LSQb information hiding algorithm is proposed, which embeds the secret qubits into the least significant qubits of the RGB channels of the quantum cover image; the quantum circuit of the algorithm is also illustrated. Furthermore, the secret extraction algorithm and its circuit are illustrated using controlled-swap gates. Our algorithm has two merits: (1) it is completely blind, and (2) when extracting the secret binary qubits, it needs neither quantum measurement operations nor any other help from a classical computer. Finally, simulation and comparative analysis show the performance of our algorithm.
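
    The quantum circuit itself cannot be reproduced here, but the classical LSB scheme that LSQb generalizes is easy to state. The sketch below is the classical analog only, offered for intuition rather than as the quantum algorithm: secret bits are written into the least significant bit of each channel value.

```python
import numpy as np

def lsb_embed(cover, bits):
    # Write one secret bit into the least significant bit of each value.
    flat = cover.flatten()                       # flatten() returns a copy
    if len(bits) > flat.size:
        raise ValueError("secret too large for cover")
    flat[:len(bits)] = (flat[:len(bits)] & 0xFE) | np.asarray(bits, dtype=flat.dtype)
    return flat.reshape(cover.shape)

def lsb_extract(stego, n_bits):
    # Recover the secret by reading the least significant bits back out.
    return (stego.flatten()[:n_bits] & 1).tolist()

cover = np.random.default_rng(1).integers(0, 256, (4, 4, 3), dtype=np.uint8)
secret = [1, 0, 1, 1, 0, 0, 1, 0]
stego = lsb_embed(cover, secret)
assert lsb_extract(stego, len(secret)) == secret   # blind: only stego needed
```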

  7. An algorithmic approach to crustal deformation analysis

    NASA Technical Reports Server (NTRS)

    Iz, Huseyin Baki

    1987-01-01

    In recent years the analysis of crustal deformation measurements has become important as a result of current improvements in geodetic methods and an increasing amount of theoretical and observational data provided by several earth sciences. A first-generation data analysis algorithm which combines a priori information with current geodetic measurements was proposed. Relevant methods which can be used in the algorithm were discussed. Prior information is the unifying feature of this algorithm. Some of the problems which may arise through the use of a priori information in the analysis were indicated and preventive measures were demonstrated. The first step in the algorithm is the optimal design of deformation networks. The second step in the algorithm identifies the descriptive model of the deformation field. The final step in the algorithm is the improved estimation of deformation parameters. Although deformation parameters are estimated in the process of model discrimination, they can further be improved by the use of a priori information about them. According to the proposed algorithm this information must first be tested against the estimates calculated using the sample data only. Null-hypothesis testing procedures were developed for this purpose. Six different estimators which employ a priori information were examined. Emphasis was put on the case when the prior information is wrong and analytical expressions for possible improvements under incompatible prior information were derived.

  8. Algorithm for Public Electric Transport Schedule Control for Intelligent Embedded Devices

    NASA Astrophysics Data System (ADS)

    Alps, Ivars; Potapov, Andrey; Gorobetz, Mikhail; Levchenkov, Anatoly

    2010-01-01

    In this paper the authors present a heuristic algorithm for precise schedule fulfilment under city traffic conditions, taking traffic lights into account. The algorithm is proposed for a programmable logic controller (PLC), which is to be installed in an electric vehicle to control its motion speed in coordination with traffic-light signals. The algorithm was tested using a real controller connected to virtual devices and functional models of real tram equipment. The experimental results show that public transport schedules are fulfilled with high precision using the proposed algorithm.

  9. The Origin(s) of Whales

    NASA Astrophysics Data System (ADS)

    Uhen, Mark D.

    2010-05-01

    Whales are first found in the fossil record approximately 52.5 million years ago (Mya) during the early Eocene in Indo-Pakistan. Our knowledge of early and middle Eocene whales has increased dramatically during the past three decades to the point where hypotheses of whale origins can be supported with a great deal of evidence from paleontology, anatomy, stratigraphy, and molecular biology. Fossils also provide preserved evidence of behavior and habitats, allowing the reconstruction of the modes of life of these semiaquatic animals during their transition from land to sea. Modern whales originated from ancient whales at or near the Eocene/Oligocene boundary, approximately 33.7 Mya. During the Oligocene, ancient whales coexisted with early baleen whales and early toothed whales. By the end of the Miocene, most modern families had originated, and most archaic forms had gone extinct. Whale diversity peaked in the late middle Miocene and fell thereafter toward the Recent, yielding our depauperate modern whale fauna.

  10. A Floor-Map-Aided WiFi/Pseudo-Odometry Integration Algorithm for an Indoor Positioning System

    PubMed Central

    Wang, Jian; Hu, Andong; Liu, Chunyan; Li, Xin

    2015-01-01

    This paper proposes a scheme for indoor positioning by fusing floor map, WiFi and smartphone sensor data to provide meter-level positioning without additional infrastructure. A topology-constrained K nearest neighbor (KNN) algorithm based on a floor map layout provides the coordinates required to integrate WiFi data with pseudo-odometry (P-O) measurements simulated using a pedestrian dead reckoning (PDR) approach. One method of further improving the positioning accuracy is to use a more effective multi-threshold step detection algorithm, as proposed by the authors. The “go and back” phenomenon caused by incorrect matching of the reference points (RPs) of a WiFi algorithm is eliminated using an adaptive fading-factor-based extended Kalman filter (EKF), taking WiFi positioning coordinates, P-O measurements and fused heading angles as observations. The “cross-wall” problem is solved based on the development of a floor-map-aided particle filter algorithm by weighting the particles, thereby also eliminating the gross-error effects originating from WiFi or P-O measurements. The performance observed in a field experiment performed on the fourth floor of the School of Environmental Science and Spatial Informatics (SESSI) building on the China University of Mining and Technology (CUMT) campus confirms that the proposed scheme can reliably achieve meter-level positioning. PMID:25811224

  11. A MARKOV CHAIN MONTE CARLO ALGORITHM FOR ANALYSIS OF LOW SIGNAL-TO-NOISE COSMIC MICROWAVE BACKGROUND DATA

    SciTech Connect

    Jewell, J. B.; O'Dwyer, I. J.; Huey, Greg; Gorski, K. M.; Eriksen, H. K.; Wandelt, B. D.

    2009-05-20

    We present a new Markov Chain Monte Carlo (MCMC) algorithm for cosmic microwave background (CMB) analysis in the low signal-to-noise regime. This method builds on and complements the previously described CMB Gibbs sampler, and effectively solves the low signal-to-noise inefficiency problem of the direct Gibbs sampler. The new algorithm is a simple Metropolis-Hastings sampler with a general proposal rule for the power spectrum, C_l, followed by a particular deterministic rescaling operation of the sky signal, s. The acceptance probability for this joint move depends on the sky map only through the difference of χ² between the original and proposed sky sample, which is close to unity in the low signal-to-noise regime. The algorithm is completed by alternating this move with a standard Gibbs move. Together, these two proposals constitute a computationally efficient algorithm for mapping out the full joint CMB posterior, both in the high and low signal-to-noise regimes.
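
    The acceptance rule described, a move accepted with probability governed by the change in χ², is the standard Metropolis-Hastings recipe. A generic random-walk sketch on a toy one-dimensional target (not the CMB Gibbs machinery, and with an assumed step size) illustrates why moves that barely change χ² are accepted with probability close to unity.

```python
import numpy as np

def metropolis_hastings(chi2, x0, step=0.5, n_samples=10000, seed=0):
    # Random-walk Metropolis: accept a proposal with probability
    # min(1, exp(-(chi2(x') - chi2(x)) / 2)). When a move barely changes
    # chi^2 -- the low signal-to-noise situation in the abstract -- the
    # acceptance probability is close to unity.
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    c = chi2(x)
    chain = [x.copy()]
    for _ in range(n_samples):
        prop = x + step * rng.standard_normal(x.shape)
        c_prop = chi2(prop)
        if np.log(rng.random()) < -(c_prop - c) / 2.0:
            x, c = prop, c_prop
        chain.append(x.copy())
    return np.array(chain)

# Toy target: chi^2 of a unit Gaussian, so samples should be ~ N(0, 1).
samples = metropolis_hastings(lambda x: float(np.sum(x ** 2)), x0=[0.0])
print(samples.mean(), samples.std())
```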

  12. A floor-map-aided WiFi/pseudo-odometry integration algorithm for an indoor positioning system.

    PubMed

    Wang, Jian; Hu, Andong; Liu, Chunyan; Li, Xin

    2015-01-01

    This paper proposes a scheme for indoor positioning by fusing floor map, WiFi and smartphone sensor data to provide meter-level positioning without additional infrastructure. A topology-constrained K nearest neighbor (KNN) algorithm based on a floor map layout provides the coordinates required to integrate WiFi data with pseudo-odometry (P-O) measurements simulated using a pedestrian dead reckoning (PDR) approach. One method of further improving the positioning accuracy is to use a more effective multi-threshold step detection algorithm, as proposed by the authors. The "go and back" phenomenon caused by incorrect matching of the reference points (RPs) of a WiFi algorithm is eliminated using an adaptive fading-factor-based extended Kalman filter (EKF), taking WiFi positioning coordinates, P-O measurements and fused heading angles as observations. The "cross-wall" problem is solved based on the development of a floor-map-aided particle filter algorithm by weighting the particles, thereby also eliminating the gross-error effects originating from WiFi or P-O measurements. The performance observed in a field experiment performed on the fourth floor of the School of Environmental Science and Spatial Informatics (SESSI) building on the China University of Mining and Technology (CUMT) campus confirms that the proposed scheme can reliably achieve meter-level positioning. PMID:25811224

  13. A floor-map-aided WiFi/pseudo-odometry integration algorithm for an indoor positioning system.

    PubMed

    Wang, Jian; Hu, Andong; Liu, Chunyan; Li, Xin

    2015-03-24

    This paper proposes a scheme for indoor positioning by fusing floor map, WiFi and smartphone sensor data to provide meter-level positioning without additional infrastructure. A topology-constrained K nearest neighbor (KNN) algorithm based on a floor map layout provides the coordinates required to integrate WiFi data with pseudo-odometry (P-O) measurements simulated using a pedestrian dead reckoning (PDR) approach. One method of further improving the positioning accuracy is to use a more effective multi-threshold step detection algorithm, as proposed by the authors. The "go and back" phenomenon caused by incorrect matching of the reference points (RPs) of a WiFi algorithm is eliminated using an adaptive fading-factor-based extended Kalman filter (EKF), taking WiFi positioning coordinates, P-O measurements and fused heading angles as observations. The "cross-wall" problem is solved based on the development of a floor-map-aided particle filter algorithm by weighting the particles, thereby also eliminating the gross-error effects originating from WiFi or P-O measurements. The performance observed in a field experiment performed on the fourth floor of the School of Environmental Science and Spatial Informatics (SESSI) building on the China University of Mining and Technology (CUMT) campus confirms that the proposed scheme can reliably achieve meter-level positioning.

  14. Origin, development, and evolution of butterfly eyespots.

    PubMed

    Monteiro, Antónia

    2015-01-01

    This article reviews the latest developments in our understanding of the origin, development, and evolution of nymphalid butterfly eyespots. Recent contributions to this field include insights into the evolutionary and developmental origin of eyespots and their ancestral deployment on the wing, the evolution of eyespot number and eyespot sexual dimorphism, and the identification of genes affecting eyespot development and black pigmentation. I also compare features of old and more recently proposed models of eyespot development and propose a schematic for the genetic regulatory architecture of eyespots. Using this schematic I propose two hypotheses for why we observe limits to morphological diversity across these serially homologous traits.

  15. Origin, development, and evolution of butterfly eyespots.

    PubMed

    Monteiro, Antónia

    2015-01-01

    This article reviews the latest developments in our understanding of the origin, development, and evolution of nymphalid butterfly eyespots. Recent contributions to this field include insights into the evolutionary and developmental origin of eyespots and their ancestral deployment on the wing, the evolution of eyespot number and eyespot sexual dimorphism, and the identification of genes affecting eyespot development and black pigmentation. I also compare features of old and more recently proposed models of eyespot development and propose a schematic for the genetic regulatory architecture of eyespots. Using this schematic I propose two hypotheses for why we observe limits to morphological diversity across these serially homologous traits. PMID:25341098

  16. An efficient algorithm for estimating noise covariances in distributed systems

    NASA Technical Reports Server (NTRS)

    Dee, D. P.; Cohn, S. E.; Ghil, M.; Dalcher, A.

    1985-01-01

    An efficient computational algorithm for estimating the noise covariance matrices of large linear discrete stochastic-dynamic systems is presented. Such systems arise typically by discretizing distributed-parameter systems, and their size renders computational efficiency a major consideration. The proposed adaptive filtering algorithm is based on the ideas of Belanger, and is algebraically equivalent to his algorithm. The earlier algorithm, however, has computational complexity proportional to p⁶, where p is the number of observations of the system state, while the new algorithm has complexity proportional to only p³. Further, the formulation of noise covariance estimation as a secondary filter, analogous to state estimation as a primary filter, suggests several generalizations of the earlier algorithm. The performance of the proposed algorithm is demonstrated for a distributed system arising in numerical weather prediction.

  17. Variable depth recursion algorithm for leaf sequencing

    SciTech Connect

    Siochi, R. Alfredo C.

    2007-02-15

    The processes of extraction and sweep are basic segmentation steps that are used in leaf sequencing algorithms. A modified version of a commercial leaf sequencer changed the way that the extracts are selected and expanded the search space, but the modification maintained the basic search paradigm of evaluating multiple solutions, each one consisting of up to 12 extracts and a sweep sequence. While it generated the best solutions compared to other published algorithms, it used more computation time. A new, faster algorithm selects one extract at a time but calls itself as an evaluation function a user-specified number of times, after which it uses the bidirectional sweeping window algorithm as the final evaluation function. To achieve a performance comparable to that of the modified commercial leaf sequencer, 2-3 calls were needed, and in all test cases there were only slight improvements beyond two calls. For the 13 clinical test maps, computation speeds improved by a factor between 12 and 43, depending on the constraints, namely the ability to interdigitate and the avoidance of tongue-and-groove underdose. The new algorithm was compared to the original and modified versions of the commercial leaf sequencer. It was also compared to other published algorithms on 1400 random 15×15 test maps with 3-16 intensity levels. In every single case the new algorithm provided the best solution.

  18. The POP learning algorithms: reducing work in identifying fuzzy rules.

    PubMed

    Quek, C; Zhou, R W

    2001-12-01

    A novel fuzzy neural network, the Pseudo Outer-Product based Fuzzy Neural Network (POPFNN), and its two fuzzy-rule-identification algorithms are proposed in this paper: the Pseudo Outer-Product (POP) learning algorithm and the Lazy Pseudo Outer-Product (LazyPOP) learning algorithm. These two learning algorithms are used in POPFNN to identify relevant fuzzy rules. In contrast with other rule-learning algorithms, the proposed algorithms are fast, reliable, efficient, and easy to understand. POP learning is a simple one-pass learning algorithm that essentially performs rule selection; hence, it suffers from the shortcoming of having to consider all possible rules. The second algorithm, LazyPOP learning, truly identifies the relevant fuzzy rules rather than using a rule-selection method in which irrelevant fuzzy rules are eliminated from an initial rule set. In addition, it is able to adjust the structure of the fuzzy neural network: the proposed LazyPOP learning algorithm can delete invalid feature inputs according to the fuzzy rules that have been identified. Extensive experimental results and discussions are presented for a detailed analysis of the proposed algorithms.

  19. An Improved Back Propagation Neural Network Algorithm on Classification Problems

    NASA Astrophysics Data System (ADS)

    Nawi, Nazri Mohd; Ransing, R. S.; Salleh, Mohd Najib Mohd; Ghazali, Rozaida; Hamid, Norhamreeza Abdul

    The back propagation algorithm is one of the most popular algorithms for training feed-forward neural networks. However, its convergence is slow, mainly because it relies on gradient descent. Previous research demonstrated that in feed-forward networks the slope of the activation function is directly influenced by a parameter referred to as the 'gain'. This research proposes an algorithm for improving the performance of back propagation by introducing an adaptive gain of the activation function, where the gain value changes adaptively for each node. The influence of the adaptive gain on the learning ability of a neural network is analysed, multi-layer feed-forward neural networks are assessed, and a physical interpretation of the relationship between the gain value, the learning rate, and the weight values is given. The efficiency of the proposed algorithm is compared with the conventional gradient descent method and verified by means of simulation on four classification problems. The simulation results demonstrate that the proposed method converged faster: with an improvement ratio of nearly 2.8 on the Wisconsin breast cancer data, 1.76 on the diabetes problem, 65% better on the thyroid data sets, and 97% faster on the IRIS classification problem. The results clearly show that the proposed algorithm significantly improves the learning speed of the conventional back-propagation algorithm.
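
    The core idea, an activation slope ('gain') with its own gradient update, fits in a few lines. The sketch below trains a single sigmoid unit with squared error on a toy AND problem; the gradient expressions follow from o = sigmoid(g·a), and the learning rate and epoch count are illustrative assumptions rather than the paper's settings.

```python
import numpy as np

def sigmoid(z, gain):
    # Activation with an explicit gain (slope) parameter.
    return 1.0 / (1.0 + np.exp(-gain * z))

def train_adaptive_gain(X, y, lr=1.0, epochs=5000, seed=0):
    rng = np.random.default_rng(seed)
    w = rng.normal(scale=0.1, size=X.shape[1])
    b = 0.0
    g = 1.0                                     # gain, adapted like a weight
    for _ in range(epochs):
        a = X @ w + b                           # pre-activation
        o = sigmoid(a, g)
        delta = (o - y) * o * (1 - o)           # since o = sigmoid(g * a)
        w -= lr * (X.T @ (delta * g)) / len(y)  # dE/dw carries a factor g
        b -= lr * np.mean(delta * g)
        g -= lr * np.mean(delta * a)            # dE/dg: the gain's own gradient
    return w, b, g

X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([0., 0., 0., 1.])                  # AND function
w, b, g = train_adaptive_gain(X, y)
print(np.round(sigmoid(X @ w + b, g)))          # approx [0, 0, 0, 1]
```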

  20. RNA-RNA interaction prediction using genetic algorithm

    PubMed Central

    2014-01-01

    Background: RNA-RNA interaction plays an important role in the regulation of gene expression and cell development. In this process, one RNA molecule prohibits the translation of another RNA molecule by establishing stable interactions with it. In the RNA-RNA interaction prediction problem, two RNA sequences are given as inputs, and the goal is to find the optimal secondary structure of the two RNAs and the interactions between them. Several algorithms have been proposed to predict RNA-RNA interaction structure, but most of them suffer from high computational cost. Results: In this paper, we introduce a novel genetic algorithm called GRNAs to predict RNA-RNA interactions. The proposed algorithm achieves appropriate accuracy and lower time complexity on standard datasets in comparison with other state-of-the-art algorithms. In the proposed algorithm, each individual is a secondary structure of the two interacting RNAs, and the minimum free energy is used as the fitness function. In each generation, the algorithm converges toward the optimal (minimum free energy) secondary structure of the two interacting RNAs by means of crossover and mutation operations. Conclusions: The algorithm is properly employed for joint secondary structure prediction. The results achieved on a set of known interacting RNA pairs are compared with related algorithms, demonstrating the effectiveness and validity of the proposed algorithm; its time complexity per iteration is as efficient as that of the other approaches. PMID:25114714
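
    Stripped of the RNA-specific encoding, the loop the abstract describes is a standard genetic algorithm: a population of candidate structures, free energy as the fitness to minimize, and crossover plus mutation per generation. A generic bitstring skeleton under assumed rates (not the GRNAs implementation) looks like this:

```python
import random

def genetic_algorithm(fitness, n_bits, pop_size=50, generations=200,
                      cx_rate=0.8, mut_rate=0.01, seed=0):
    # Individuals encode candidate structures; fitness plays the role of
    # free energy (lower is better); crossover and mutation drive the search.
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        parents = sorted(pop, key=fitness)[:pop_size // 2]   # truncation selection
        children = []
        while len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            if rng.random() < cx_rate:                       # one-point crossover
                cut = rng.randrange(1, n_bits)
                child = a[:cut] + b[cut:]
            else:
                child = a[:]
            child = [bit ^ (rng.random() < mut_rate) for bit in child]  # mutation
            children.append(child)
        pop = children
    return min(pop, key=fitness)

# Toy "energy": mismatches against a hidden target pairing pattern.
target = [1, 0] * 16
best = genetic_algorithm(lambda s: sum(x != t for x, t in zip(s, target)), 32)
print(sum(x != t for x, t in zip(best, target)))             # close to 0
```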

  1. The algorithm stitching for medical imaging

    NASA Astrophysics Data System (ADS)

    Semenishchev, E.; Marchuk, V.; Voronin, V.; Pismenskova, M.; Tolstova, I.; Svirin, I.

    2016-05-01

    In this paper we propose an algorithm for stitching medical images into a single image. The algorithm is designed to stitch medical X-ray images, microscopic images of biological particles, medical microscopic images, and others; such stitched images can improve diagnostic accuracy and quality for minimally invasive studies (e.g., laparoscopy and ophthalmology). The proposed algorithm is based on the following steps: searching for and selecting areas with overlapping boundaries; keypoint and feature detection; preliminary stitching of the images and a transformation to reduce visible distortion; searching for a single unified border in the overlap area; brightness, contrast, and white-balance conversion; and superimposition into one image. Experimental results demonstrate the effectiveness of the proposed method for the image stitching task.
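
    A generic version of the keypoint-detection and preliminary-stitching steps can be sketched with OpenCV. The pipeline below (ORB features, brute-force matching, RANSAC homography, perspective warp) is a common baseline, not the authors' exact method; their unified-border search and brightness/white-balance conversion are omitted.

```python
import cv2
import numpy as np

def stitch_pair(img_a, img_b):
    # Keypoint and feature detection in both images.
    orb = cv2.ORB_create(2000)
    kp_a, des_a = orb.detectAndCompute(img_a, None)
    kp_b, des_b = orb.detectAndCompute(img_b, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_a, des_b), key=lambda m: m.distance)[:100]
    src = np.float32([kp_b[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_a[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    # Robust homography; RANSAC discards mismatched keypoints.
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    # Warp the second image into the first image's frame and paste.
    h, w = img_a.shape[:2]
    canvas = cv2.warpPerspective(img_b, H, (w * 2, h))
    canvas[0:h, 0:w] = img_a
    return canvas
```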

  2. Electricity load forecasting using support vector regression with memetic algorithms.

    PubMed

    Hu, Zhongyi; Bao, Yukun; Xiong, Tao

    2013-01-01

    Electricity load forecasting is an important issue that is widely explored in both the power systems operation literature and the electricity markets literature. Among existing forecasting models, support vector regression (SVR) has gained much attention. Because the performance of SVR depends strongly on its parameters, this study proposes a firefly algorithm (FA) based memetic algorithm (FA-MA) to appropriately determine the parameters of the SVR forecasting model. In the proposed FA-MA algorithm, the FA explores the solution space, while pattern search conducts individual learning and thus enhances the exploitation of the FA. Experimental results confirm that the proposed FA-MA based SVR model not only yields more accurate forecasts than SVR models based on four other evolutionary algorithms and three well-known forecasting models, but also outperforms the hybrid algorithms in the related existing literature.

  3. Asymmetric intimacy and algorithm for detecting communities in bipartite networks

    NASA Astrophysics Data System (ADS)

    Wang, Xingyuan; Qin, Xiaomeng

    2016-11-01

    In this paper, an algorithm for choosing a good partition in bipartite networks is proposed. Bipartite networks have considerable theoretical significance and broad application prospects. In view of the distinctive structure of bipartite networks, our method defines two parameters that capture the relationships between nodes of the same type and between heterogeneous nodes, respectively. Moreover, our algorithm employs a new method of finding and expanding core communities in bipartite networks: the two kinds of nodes are handled separately and then merged to obtain sub-communities, after which the target communities are found according to the merging rule. The proposed algorithm has been simulated on real-world and artificial networks, and the results verify the accuracy and reliability of the intimacy parameters. Comparisons with similar algorithms show that the proposed algorithm has better performance.

  4. A novel fitness evaluation method for evolutionary algorithms

    NASA Astrophysics Data System (ADS)

    Wang, Ji-feng; Tang, Ke-zong

    2013-03-01

    Fitness evaluation is a crucial task in evolutionary algorithms because it affects both the convergence speed and the quality of the final solution, yet these algorithms may require huge computational power when solving nonlinear programming problems. This paper proposes a novel fitness evaluation approach that embeds similarity-based learning in a classical differential evolution (SDE) to evaluate all new individuals. Each individual consists of three elements: a parameter vector (v), a fitness value (f), and a reliability value (r). The fitness f is estimated by the proposed approach, and only when r falls below a threshold is f computed with the true fitness function. Moreover, applying an error compensation system to the proposed algorithm further enhances its performance by keeping the estimate much closer to the true fitness value for each new child. Simulation results over a comprehensive set of benchmark functions show that the convergence rate of the proposed algorithm is much faster than that of the compared algorithms.

  5. A generating set direct search augmented Lagrangian algorithm for optimization with a combination of general and linear constraints.

    SciTech Connect

    Lewis, Robert Michael (College of William and Mary, Williamsburg, VA); Torczon, Virginia Joanne (College of William and Mary, Williamsburg, VA); Kolda, Tamara Gibson

    2006-08-01

    We consider the solution of nonlinear programs in the case where derivatives of the objective function and nonlinear constraints are unavailable. To solve such problems, we propose an adaptation of a method due to Conn, Gould, Sartenaer, and Toint that proceeds by approximately minimizing a succession of linearly constrained augmented Lagrangians. Our modification is to use a derivative-free generating set direct search algorithm to solve the linearly constrained subproblems. The stopping criterion proposed by Conn, Gould, Sartenaer and Toint for the approximate solution of the subproblems requires explicit knowledge of derivatives. Such information is presumed absent in the generating set search method we employ. Instead, we show that stationarity results for linearly constrained generating set search methods provide a derivative-free stopping criterion, based on a step-length control parameter, that is sufficient to preserve the convergence properties of the original augmented Lagrangian algorithm.

  6. Novel image compression-encryption hybrid algorithm based on key-controlled measurement matrix in compressive sensing

    NASA Astrophysics Data System (ADS)

    Zhou, Nanrun; Zhang, Aidi; Zheng, Fen; Gong, Lihua

    2014-10-01

    The existing ways to encrypt images based on compressive sensing usually treat the whole measurement matrix as the key, which renders the key too large to distribute, memorize, or store. To solve this problem, a new image compression-encryption hybrid algorithm is proposed that realizes compression and encryption simultaneously while keeping the key easy to distribute, store, or memorize. The input image is divided into four blocks to be compressed and encrypted; the pixels of adjacent blocks are then exchanged randomly by random matrices. The measurement matrices in compressive sensing are constructed by utilizing circulant matrices and controlling their original row vectors with a logistic map, and the random matrices used in the random pixel exchange are bound to the measurement matrices. Simulation results verify the effectiveness and security of the proposed algorithm, as well as its acceptable compression performance.
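
    The key-compression idea, regenerating the measurement matrix from a short key instead of storing the whole matrix, can be sketched directly: a logistic-map sequence seeded by the key forms the first row of a circulant matrix. The key values, scaling, and dimensions below are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import circulant

def logistic_sequence(x0, mu, n, discard=1000):
    # Iterate x <- mu * x * (1 - x); the short key (x0, mu) regenerates
    # the whole sequence, so the full matrix never needs to be stored.
    x = x0
    for _ in range(discard):                 # discard transients
        x = mu * x * (1.0 - x)
    seq = np.empty(n)
    for i in range(n):
        x = mu * x * (1.0 - x)
        seq[i] = x
    return seq

def measurement_matrix(key, m, n):
    x0, mu = key
    row = 2.0 * logistic_sequence(x0, mu, n) - 1.0   # map to [-1, 1]
    return circulant(row)[:m, :] / np.sqrt(m)        # keep m rows, normalize

key = (0.37, 3.99)                 # secret key; mu near 4 keeps the map chaotic
Phi = measurement_matrix(key, m=64, n=256)
x = np.zeros(256)
x[[10, 80, 200]] = [1.0, -0.5, 0.8]                  # sparse test signal
y = Phi @ x                                          # compressed measurements
```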

  7. Analysis of Multivariate Experimental Data Using A Simplified Regression Model Search Algorithm

    NASA Technical Reports Server (NTRS)

    Ulbrich, Norbert M.

    2013-01-01

    A new regression model search algorithm was developed that may be applied to both general multivariate experimental data sets and wind tunnel strain-gage balance calibration data. The algorithm is a simplified version of a more complex algorithm that was originally developed for the NASA Ames Balance Calibration Laboratory. The new algorithm performs regression model term reduction to prevent overfitting of data. It has the advantage that it needs only about one tenth of the original algorithm's CPU time for the completion of a regression model search. In addition, extensive testing showed that the prediction accuracy of math models obtained from the simplified algorithm is similar to the prediction accuracy of math models obtained from the original algorithm. The simplified algorithm, however, cannot guarantee that search constraints related to a set of statistical quality requirements are always satisfied in the optimized regression model. Therefore, the simplified algorithm is not intended to replace the original algorithm. Instead, it may be used to generate an alternate optimized regression model of experimental data whenever the application of the original search algorithm fails or requires too much CPU time. Data from a machine calibration of NASA's MK40 force balance is used to illustrate the application of the new search algorithm.

  8. Analysis of Multivariate Experimental Data Using A Simplified Regression Model Search Algorithm

    NASA Technical Reports Server (NTRS)

    Ulbrich, Norbert Manfred

    2013-01-01

    A new regression model search algorithm was developed in 2011 that may be used to analyze both general multivariate experimental data sets and wind tunnel strain-gage balance calibration data. The new algorithm is a simplified version of a more complex search algorithm that was originally developed at the NASA Ames Balance Calibration Laboratory. The new algorithm has the advantage that it needs only about one tenth of the original algorithm's CPU time for the completion of a search. In addition, extensive testing showed that the prediction accuracy of math models obtained from the simplified algorithm is similar to the prediction accuracy of math models obtained from the original algorithm. The simplified algorithm, however, cannot guarantee that search constraints related to a set of statistical quality requirements are always satisfied in the optimized regression models. Therefore, the simplified search algorithm is not intended to replace the original search algorithm. Instead, it may be used to generate an alternate optimized regression model of experimental data whenever the application of the original search algorithm either fails or requires too much CPU time. Data from a machine calibration of NASA's MK40 force balance is used to illustrate the application of the new regression model search algorithm.

  9. Rolling ball algorithm as a multitask filter for terrain conductivity measurements

    NASA Astrophysics Data System (ADS)

    Rashed, Mohamed

    2016-09-01

    Portable frequency domain electromagnetic devices, commonly known as terrain conductivity meters, have become increasingly popular in recent years, especially for locating underground utilities. Data collected using these devices, however, usually suffer from major problems such as complex, interfering apparent-conductivity anomalies, local spikes near edges, and fading of the conductivity contrast between a utility and the surrounding soil. This study presents the experience of adopting the rolling ball algorithm, originally designed to remove background from medical images, to treat these problems in terrain conductivity measurements. Applying the proposed procedure to data collected using different terrain conductivity meters at different locations and under different conditions demonstrates the capability of the rolling ball algorithm to treat these data both effectively and quickly.
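
    One standard way to realize the rolling ball idea on a one-dimensional conductivity profile is grey-scale morphological opening with a ball-shaped structuring element, as sketched below on a synthetic profile. The radius and the synthetic anomaly are assumptions for illustration; this is the generic background-removal operation, not necessarily the author's exact adaptation.

```python
import numpy as np
from scipy import ndimage

def rolling_ball_background(profile, radius):
    # Grey opening (erosion then dilation) with a ball-shaped structuring
    # element: the "ball" rolls beneath the signal and traces the background.
    x = np.arange(-radius, radius + 1)
    ball = np.sqrt(np.maximum(radius ** 2 - x ** 2, 0.0)) - radius
    eroded = ndimage.grey_erosion(profile, structure=ball)
    return ndimage.grey_dilation(eroded, structure=ball)

# Synthetic conductivity profile: smooth drift plus a narrow utility anomaly.
t = np.linspace(0, 1, 500)
drift = 20 + 5 * t + 3 * np.sin(2 * np.pi * t)
anomaly = 8 * np.exp(-((t - 0.5) / 0.01) ** 2)
signal = drift + anomaly
flattened = signal - rolling_ball_background(signal, radius=40)
```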

  10. An effective detection algorithm for region duplication forgery in digital images

    NASA Astrophysics Data System (ADS)

    Yavuz, Fatih; Bal, Abdullah; Cukur, Huseyin

    2016-04-01

    Powerful image editing tools are widespread and easy to use these days, which makes it easy to forge digital images by adding or removing information. In order to detect forgeries of this type, such as region duplication, we present an effective algorithm based on fixed-size block computation and the discrete wavelet transform (DWT). In this approach, the original image is divided into fixed-size blocks, and the wavelet transform is applied for dimension reduction. Each block is processed by the Fourier transform and represented by circle regions, and four features are extracted from each block. Finally, the feature vectors are sorted lexicographically, and duplicated image blocks are detected according to the comparison metric results. The experimental results show that the proposed algorithm is computationally efficient due to its fixed-size circle block architecture.

  11. Fast parallel molecular algorithms for DNA-based computation: factoring integers.

    PubMed

    Chang, Weng-Long; Guo, Minyi; Ho, Michael Shan-Hui

    2005-06-01

    The RSA public-key cryptosystem is an algorithm that converts input data into an unrecognizable encryption and converts the unrecognizable data back into its original decrypted form. The security of the RSA public-key cryptosystem is based on the difficulty of factoring the product of two large prime numbers. This paper demonstrates how to factor the product of two large prime numbers, a breakthrough in basic biological operations using a molecular computer. To achieve this, we propose three DNA-based algorithms, a parallel subtractor, a parallel comparator, and parallel modular arithmetic, and formally verify our designed molecular solutions for factoring the product of two large primes. Furthermore, this work indicates that public-key cryptosystems may be insecure and presents clear evidence of the ability of molecular computing to perform complicated mathematical operations.

  12. A Globally Convergent Augmented Lagrangian Pattern Search Algorithm for Optimization with General Constraints and Simple Bounds

    NASA Technical Reports Server (NTRS)

    Lewis, Robert Michael; Torczon, Virginia

    1998-01-01

    We give a pattern search adaptation of an augmented Lagrangian method due to Conn, Gould, and Toint. The algorithm proceeds by successive bound constrained minimization of an augmented Lagrangian. In the pattern search adaptation we solve this subproblem approximately using a bound constrained pattern search method. The stopping criterion proposed by Conn, Gould, and Toint for the solution of this subproblem requires explicit knowledge of derivatives. Such information is presumed absent in pattern search methods; however, we show how we can replace this with a stopping criterion based on the pattern size in a way that preserves the convergence properties of the original algorithm. In this way we proceed by successive, inexact, bound constrained minimization without knowing exactly how inexact the minimization is. So far as we know, this is the first provably convergent direct search method for general nonlinear programming.

  13. Model reduction algorithms for optimal control and importance sampling of diffusions

    NASA Astrophysics Data System (ADS)

    Hartmann, Carsten; Schütte, Christof; Zhang, Wei

    2016-08-01

    We propose numerical algorithms for solving optimal control and importance sampling problems based on simplified models. The algorithms combine model reduction techniques for multiscale diffusions and stochastic optimization tools, with the aim of reducing the original, possibly high-dimensional problem to a lower dimensional representation of the dynamics, in which only a few relevant degrees of freedom are controlled or biased. Specifically, we study situations in which either a reaction coordinate onto which the dynamics can be projected is known, or situations in which the dynamics shows strongly localized behavior in the small noise regime. No explicit assumptions about small parameters or scale separation have to be made. We illustrate the approach with simple, but paradigmatic numerical examples.

  14. Modified Landweber algorithm for robust particle sizing by using Fraunhofer diffraction.

    PubMed

    Xu, Lijun; Wei, Tianxiao; Zhou, Jiayi; Cao, Zhang

    2014-09-20

    In this paper, a robust modified Landweber algorithm was proposed to retrieve the particle size distributions from Fraunhofer diffraction. Three typical particle size distributions, i.e., Rosin-Rammler, lognormal, and bimodal normal distributions for particles ranging from 4.8 to 96 μm, were employed to verify the performance of the algorithm. To show its merits, the proposed algorithm was compared with the Tikhonov regularization algorithm and the ℓ1-norm-based algorithm. Simulation results showed that, for noise-free data, both the modified Landweber algorithm and the ℓ1-norm-based algorithm were better than the Tikhonov regularization algorithm in terms of accuracy. When the data was noise-contaminated, the modified Landweber algorithm was superior to the other two algorithms in both accuracy and speed. An experimental setup was also established and the results validated the feasibility and effectiveness of the proposed method.
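
    The baseline the paper modifies is the classical Landweber iteration, x ← x + λ Aᵀ(b − A x), often combined with a non-negativity projection for physical size distributions. A minimal sketch on a toy forward model follows; the matrix below merely stands in for the Fraunhofer diffraction kernel, and the iteration count and relaxation choice are assumptions.

```python
import numpy as np

def landweber(A, b, iters=200, relax=None, nonneg=True):
    # Landweber iteration x <- x + relax * A^T (b - A x). Projecting onto
    # x >= 0 after each step is a common modification for size distributions.
    if relax is None:
        relax = 1.0 / np.linalg.norm(A, 2) ** 2   # inside the convergence range
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = x + relax * (A.T @ (b - A @ x))
        if nonneg:
            x = np.maximum(x, 0.0)                # physical constraint
    return x

# Toy forward model standing in for the Fraunhofer diffraction kernel.
rng = np.random.default_rng(0)
A = np.abs(rng.normal(size=(60, 40)))
x_true = np.zeros(40)
x_true[15:25] = np.hanning(10)                    # smooth size distribution
b = A @ x_true + 0.01 * rng.normal(size=60)
x_est = landweber(A, b)
```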

  15. Algorithmic Mechanism Design of Evolutionary Computation

    PubMed Central

    Pei, Yan

    2015-01-01

    We consider the algorithmic design, enhancement, and improvement of evolutionary computation as a mechanism design problem. All individuals, or several groups of individuals, can be treated as self-interested agents: rather than following a fixed algorithmic rule, the individuals in evolutionary computation can manipulate parameter settings and operations to satisfy their own preferences, which are defined by the algorithm designer. Evolutionary computation algorithm designers, or self-adaptive methods, should therefore construct proper rules and mechanisms so that all agents (individuals) conduct their evolutionary behaviour in a way that achieves the desired, preset objective(s). As a case study, we propose a formal framework for parameter setting, strategy selection, and algorithmic design of evolutionary computation that considers the Nash strategy equilibrium of a mechanism design in the search process. The evaluation results demonstrate the efficiency of the framework. This principle can be implemented in any evolutionary computation algorithm that must consider strategy selection issues in its optimization process. The final objective of our work is to treat evolutionary computation design as an algorithmic mechanism design problem and to establish its foundations from this perspective; this paper takes a first step towards that objective by implementing a strategy equilibrium solution (such as a Nash equilibrium) in an evolutionary computation algorithm. PMID:26257777

  16. A modified genetic algorithm with fuzzy roulette wheel selection for job-shop scheduling problems

    NASA Astrophysics Data System (ADS)

    Thammano, Arit; Teekeng, Wannaporn

    2015-05-01

    The job-shop scheduling problem is one of the most difficult production planning problems. Since it is in the NP-hard class, a recent trend in solving the job-shop scheduling problem is shifting towards the use of heuristic and metaheuristic algorithms. This paper proposes a novel metaheuristic algorithm, which is a modification of the genetic algorithm. This proposed algorithm introduces two new concepts to the standard genetic algorithm: (1) fuzzy roulette wheel selection and (2) the mutation operation with tabu list. The proposed algorithm has been evaluated and compared with several state-of-the-art algorithms in the literature. The experimental results on 53 JSSPs show that the proposed algorithm is very effective in solving the combinatorial optimization problems. It outperforms all state-of-the-art algorithms on all benchmark problems in terms of the ability to achieve the optimal solution and the computational time.
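
    The wheel mechanics underlying the paper's fuzzy selection operator are those of standard roulette wheel selection, sketched below. The fuzzy variant reshapes the selection probabilities with fuzzy membership values before spinning, a step omitted here; the candidate names and fitness values are illustrative.

```python
import random

def roulette_wheel_select(population, fitnesses, rng=random):
    # Selection probability proportional to fitness: spin a wheel whose
    # slice sizes are the fitness values.
    total = sum(fitnesses)
    pick = rng.uniform(0, total)
    acc = 0.0
    for individual, fit in zip(population, fitnesses):
        acc += fit
        if acc >= pick:
            return individual
    return population[-1]                     # guard against rounding

schedules = ["s1", "s2", "s3", "s4"]          # candidate JSSP schedules
fitness = [1.0, 3.0, 0.5, 2.5]                # higher -> larger wheel slice
parents = [roulette_wheel_select(schedules, fitness) for _ in range(10)]
```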

  17. An efficient algorithm for calculating the exact Hausdorff distance.

    PubMed

    Taha, Abdel Aziz; Hanbury, Allan

    2015-11-01

    The Hausdorff distance (HD) between two point sets is a commonly used dissimilarity measure for comparing point sets and image segmentations. Especially when very large point sets are compared using the HD, for example when evaluating magnetic resonance volume segmentations, or when the underlying applications are based on time critical tasks, like motion detection, then the computational complexity of HD algorithms becomes an important issue. In this paper we propose a novel efficient algorithm for computing the exact Hausdorff distance. In a runtime analysis, the proposed algorithm is demonstrated to have nearly-linear complexity. Furthermore, it has efficient performance for large point set sizes as well as for large grid size; performs equally for sparse and dense point sets; and finally it is general without restrictions on the characteristics of the point set. The proposed algorithm is tested against the HD algorithm of the widely used national library of medicine insight segmentation and registration toolkit (ITK) using magnetic resonance volumes with extremely large size. The proposed algorithm outperforms the ITK HD algorithm both in speed and memory required. In an experiment using trajectories from a road network, the proposed algorithm significantly outperforms an HD algorithm based on R-Trees. PMID:26440258
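
    A decisive optimization in fast exact HD computation is an early-break test: once some point of B is closer to the current a than the running maximum, a cannot raise that maximum, so the inner scan stops. The sketch below shows the naive loop plus this early break; it is illustrative only, the paper's algorithm adds further machinery, and SciPy's scipy.spatial.distance.directed_hausdorff offers a library implementation.

```python
import numpy as np

def directed_hausdorff_early_break(A, B):
    # Naive directed HD is O(|A||B|); once some b is closer to a than the
    # running maximum cmax, a cannot raise cmax, so the inner scan stops.
    cmax = 0.0
    for a in A:
        cmin = np.inf
        for b in B:
            d = np.linalg.norm(a - b)
            if d < cmax:              # early break
                cmin = d
                break
            if d < cmin:
                cmin = d
        if cmin > cmax:
            cmax = cmin
    return cmax

def hausdorff(A, B):
    return max(directed_hausdorff_early_break(A, B),
               directed_hausdorff_early_break(B, A))

rng = np.random.default_rng(0)
A = rng.random((200, 2))
B = rng.random((200, 2)) + 0.1
print(hausdorff(A, B))
```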

  18. Historical development of origins research.

    PubMed

    Lazcano, Antonio

    2010-11-01

    Following the publication of the Origin of Species in 1859, many naturalists adopted the idea that living organisms were the historical outcome of gradual transformation of lifeless matter. These views soon merged with the developments of biochemistry and cell biology and led to proposals in which the origin of protoplasm was equated with the origin of life. The heterotrophic origin of life proposed by Oparin and Haldane in the 1920s was part of this tradition, which Oparin enriched by transforming the discussion of the emergence of the first cells into a workable multidisciplinary research program. On the other hand, the scientific trend toward understanding biological phenomena at the molecular level led authors like Troland, Muller, and others to propose that single molecules or viruses represented primordial living systems. The contrast between these opposing views on the origin of life represents not only contrasting views of the nature of life itself, but also major ideological discussions that reached a surprising intensity in the years following Stanley Miller's seminal result which showed the ease with which organic compounds of biochemical significance could be synthesized under putative primitive conditions. In fact, during the years following the Miller experiment, attempts to understand the origin of life were strongly influenced by research on DNA replication and protein biosynthesis, and, in socio-political terms, by the atmosphere created by Cold War tensions. The catalytic versatility of RNA molecules clearly merits a critical reappraisal of Muller's viewpoint. However, the discovery of ribozymes does not imply that autocatalytic nucleic acid molecules ready to be used as primordial genes were floating in the primitive oceans, or that the RNA world emerged completely assembled from simple precursors present in the prebiotic soup. The evidence supporting the presence of a wide range of organic molecules on the primitive Earth, including membrane

  19. Historical Development of Origins Research

    PubMed Central

    Lazcano, Antonio

    2010-01-01

    Following the publication of the Origin of Species in 1859, many naturalists adopted the idea that living organisms were the historical outcome of gradual transformation of lifeless matter. These views soon merged with the developments of biochemistry and cell biology and led to proposals in which the origin of protoplasm was equated with the origin of life. The heterotrophic origin of life proposed by Oparin and Haldane in the 1920s was part of this tradition, which Oparin enriched by transforming the discussion of the emergence of the first cells into a workable multidisciplinary research program. On the other hand, the scientific trend toward understanding biological phenomena at the molecular level led authors like Troland, Muller, and others to propose that single molecules or viruses represented primordial living systems. The contrast between these opposing views on the origin of life represents not only contrasting views of the nature of life itself, but also major ideological discussions that reached a surprising intensity in the years following Stanley Miller’s seminal result which showed the ease with which organic compounds of biochemical significance could be synthesized under putative primitive conditions. In fact, during the years following the Miller experiment, attempts to understand the origin of life were strongly influenced by research on DNA replication and protein biosynthesis, and, in socio-political terms, by the atmosphere created by Cold War tensions. The catalytic versatility of RNA molecules clearly merits a critical reappraisal of Muller’s viewpoint. However, the discovery of ribozymes does not imply that autocatalytic nucleic acid molecules ready to be used as primordial genes were floating in the primitive oceans, or that the RNA world emerged completely assembled from simple precursors present in the prebiotic soup. The evidence supporting the presence of a wide range of organic molecules on the primitive Earth, including membrane

  20. Historical development of origins research.

    PubMed

    Lazcano, Antonio

    2010-11-01

    Following the publication of the Origin of Species in 1859, many naturalists adopted the idea that living organisms were the historical outcome of gradual transformation of lifeless matter. These views soon merged with the developments of biochemistry and cell biology and led to proposals in which the origin of protoplasm was equated with the origin of life. The heterotrophic origin of life proposed by Oparin and Haldane in the 1920s was part of this tradition, which Oparin enriched by transforming the discussion of the emergence of the first cells into a workable multidisciplinary research program. On the other hand, the scientific trend toward understanding biological phenomena at the molecular level led authors like Troland, Muller, and others to propose that single molecules or viruses represented primordial living systems. The contrast between these opposing views on the origin of life represents not only contrasting views of the nature of life itself, but also major ideological discussions that reached a surprising intensity in the years following Stanley Miller's seminal result which showed the ease with which organic compounds of biochemical significance could be synthesized under putative primitive conditions. In fact, during the years following the Miller experiment, attempts to understand the origin of life were strongly influenced by research on DNA replication and protein biosynthesis, and, in socio-political terms, by the atmosphere created by Cold War tensions. The catalytic versatility of RNA molecules clearly merits a critical reappraisal of Muller's viewpoint. However, the discovery of ribozymes does not imply that autocatalytic nucleic acid molecules ready to be used as primordial genes were floating in the primitive oceans, or that the RNA world emerged completely assembled from simple precursors present in the prebiotic soup. The evidence supporting the presence of a wide range of organic molecules on the primitive Earth, including membrane

  1. MUSIC algorithms for rebar detection

    NASA Astrophysics Data System (ADS)

    Solimene, Raffaele; Leone, Giovanni; Dell'Aversano, Angela

    2013-12-01

    The MUSIC (MUltiple SIgnal Classification) algorithm is employed to detect and localize an unknown number of scattering objects which are small in size as compared to the wavelength. The ensemble of objects to be detected consists of both strong and weak scatterers. This represents a scattering environment challenging for detection purposes as strong scatterers tend to mask the weak ones. Consequently, the detection of more weakly scattering objects is not always guaranteed and can be completely impaired when the noise corrupting data is of a relatively high level. To overcome this drawback, here a new technique is proposed, starting from the idea of applying a two-stage MUSIC algorithm. In the first stage strong scatterers are detected. Then, information concerning their number and location is employed in the second stage focusing only on the weak scatterers. The role of an adequate scattering model is emphasized to improve drastically detection performance in realistic scenarios.
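
    The subspace machinery behind MUSIC is compact: eigendecompose the data covariance, take the eigenvectors of the smallest eigenvalues as the noise subspace, and scan for steering vectors nearly orthogonal to it. The sketch below is the textbook narrowband direction-finding version on a simulated uniform linear array, not the two-stage rebar-imaging variant proposed in the paper; the array geometry and noise level are assumptions.

```python
import numpy as np

def music_spectrum(R, n_sources, steering):
    # Noise subspace = eigenvectors of the smallest eigenvalues of the
    # covariance; peaks of 1 / ||E_n^H a(theta)||^2 mark source locations.
    _, eigvecs = np.linalg.eigh(R)            # eigenvalues ascending
    En = eigvecs[:, :-n_sources]              # noise-subspace basis
    spectrum = []
    for a in steering:
        a = a / np.linalg.norm(a)
        spectrum.append(1.0 / np.real(a.conj() @ En @ En.conj().T @ a))
    return np.array(spectrum)

# Simulated 8-element uniform linear array, two sources at -20 and 30 deg.
M, snap = 8, 500
rng = np.random.default_rng(0)
angles = np.deg2rad([-20.0, 30.0])
A = np.exp(1j * np.pi * np.outer(np.arange(M), np.sin(angles)))
S = rng.normal(size=(2, snap)) + 1j * rng.normal(size=(2, snap))
N = 0.1 * (rng.normal(size=(M, snap)) + 1j * rng.normal(size=(M, snap)))
X = A @ S + N
R = X @ X.conj().T / snap
grid = np.deg2rad(np.linspace(-90, 90, 361))
steering = [np.exp(1j * np.pi * np.arange(M) * np.sin(th)) for th in grid]
P = music_spectrum(R, n_sources=2, steering=steering)   # peaks near -20, 30
```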

  2. The Application Research of MD5 Encryption Algorithm in DCT Digital Watermarking

    NASA Astrophysics Data System (ADS)

    Xijin, Wang; Linxiu, Fan

    This article presents a preliminary study of applying the MD5 algorithm to digital watermarking. It proposes that copyright information be encoded with the MD5 algorithm and formed into a binary image watermark, which is then embedded into the carrier image by a DCT-domain algorithm. The extraction algorithm can recover the watermark and restore the MD5 code.

  3. Improved Bat Algorithm Applied to Multilevel Image Thresholding

    PubMed Central

    2014-01-01

    Multilevel image thresholding is a very important image processing technique that is used as a basis for image segmentation and further higher-level processing. However, the required computational time for exhaustive search grows exponentially with the number of desired thresholds. Swarm intelligence metaheuristics are well known as successful and efficient optimization methods for intractable problems. In this paper, we adjusted one of the latest swarm intelligence algorithms, the bat algorithm, for the multilevel image thresholding problem. The results of testing on standard benchmark images show that the bat algorithm is comparable with other state-of-the-art algorithms. We then improved the standard bat algorithm with modifications that add elements from differential evolution and from the artificial bee colony algorithm. Our proposed improved bat algorithm proved to be better than five other state-of-the-art algorithms, improving the quality of results in all cases and significantly improving convergence speed. PMID:25165733

  4. Genetic Algorithms for Digital Quantum Simulations

    NASA Astrophysics Data System (ADS)

    Las Heras, U.; Alvarez-Rodriguez, U.; Solano, E.; Sanz, M.

    2016-06-01

    We propose genetic algorithms, which are robust optimization techniques inspired by natural selection, to enhance the versatility of digital quantum simulations. In this sense, we show that genetic algorithms can be employed to increase the fidelity and optimize the resource requirements of digital quantum simulation protocols while adapting naturally to the experimental constraints. Furthermore, this method allows us to reduce not only digital errors but also experimental errors in quantum gates. Indeed, by adding ancillary qubits, we design a modular gate made out of imperfect gates, whose fidelity is larger than the fidelity of any of the constituent gates. Finally, we prove that the proposed modular gates are resilient against different gate errors.

  5. Robust facial expression recognition algorithm based on local metric learning

    NASA Astrophysics Data System (ADS)

    Jiang, Bin; Jia, Kebin

    2016-01-01

    In facial expression recognition tasks, different facial expressions are often confused with each other. Motivated by the fact that a learned metric can significantly improve the accuracy of classification, a facial expression recognition algorithm based on local metric learning is proposed. First, k-nearest neighbors of the given testing sample are determined from the total training data. Second, chunklets are selected from the k-nearest neighbors. Finally, the optimal transformation matrix is computed by maximizing the total variance between different chunklets and minimizing the total variance of instances in the same chunklet. The proposed algorithm can find the suitable distance metric for every testing sample and improve the performance on facial expression recognition. Furthermore, the proposed algorithm can be used for vector-based and matrix-based facial expression recognition. Experimental results demonstrate that the proposed algorithm could achieve higher recognition rates and be more robust than baseline algorithms on the JAFFE, CK, and RaFD databases.
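
    The chunklet objective can be made concrete with a small sketch: accumulate within-chunklet and between-chunklet scatter matrices, then take the top generalized eigenvectors as the learned transform. The chunklet encoding as index arrays and the regularization term are assumptions for illustration; the per-test-sample kNN and chunklet selection steps are left out.

    ```python
    import numpy as np
    from scipy.linalg import eigh

    def chunklet_transform(X, chunklets, n_dims):
        """Find a linear map maximizing variance between chunklet means while
        minimizing variance within each chunklet. `chunklets` is a list of
        index arrays into the rows of X (assumed layout)."""
        d = X.shape[1]
        Sw = np.zeros((d, d))                 # within-chunklet scatter
        means = []
        for idx in chunklets:
            C = X[idx]
            mu = C.mean(axis=0)
            means.append(mu)
            Sw += (C - mu).T @ (C - mu)
        M = np.vstack(means)
        g = M.mean(axis=0)
        Sb = (M - g).T @ (M - g)              # between-chunklet scatter
        # Generalized eigenproblem Sb w = lambda Sw w; top eigenvectors give
        # the transformation matrix (small ridge added for invertibility).
        w, V = eigh(Sb, Sw + 1e-6 * np.eye(d))
        return V[:, ::-1][:, :n_dims]         # columns by decreasing eigenvalue
    ```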

  6. Artificial bee colony algorithm for solving optimal power flow problem.

    PubMed

    Le Dinh, Luong; Vo Ngoc, Dieu; Vasant, Pandian

    2013-01-01

    This paper proposes an artificial bee colony (ABC) algorithm for solving the optimal power flow (OPF) problem. The objective of the OPF problem is to minimize the total cost of thermal units while satisfying unit and system constraints such as generator capacity limits, power balance, line flow limits, bus voltage limits, and transformer tap setting limits. The ABC algorithm is an optimization method inspired by the foraging behavior of honey bees. The proposed algorithm has been tested on the IEEE 30-bus, 57-bus, and 118-bus systems. The numerical results indicate that the proposed algorithm can quickly find high-quality solutions to the problem, as shown by comparisons with other methods in the literature. Therefore, the proposed ABC algorithm can be a favorable method for solving the OPF problem.

  7. Artificial Bee Colony Algorithm for Solving Optimal Power Flow Problem

    PubMed Central

    Le Dinh, Luong; Vo Ngoc, Dieu

    2013-01-01

    This paper proposes an artificial bee colony (ABC) algorithm for solving the optimal power flow (OPF) problem. The objective of the OPF problem is to minimize the total cost of thermal units while satisfying unit and system constraints such as generator capacity limits, power balance, line flow limits, bus voltage limits, and transformer tap setting limits. The ABC algorithm is an optimization method inspired by the foraging behavior of honey bees. The proposed algorithm has been tested on the IEEE 30-bus, 57-bus, and 118-bus systems. The numerical results indicate that the proposed algorithm can quickly find high-quality solutions to the problem, as shown by comparisons with other methods in the literature. Therefore, the proposed ABC algorithm can be a favorable method for solving the OPF problem. PMID:24470790

  8. An affine projection algorithm using grouping selection of input vectors

    NASA Astrophysics Data System (ADS)

    Shin, JaeWook; Kong, NamWoong; Park, PooGyeon

    2011-10-01

    This paper presents an affine projection algorithm (APA) using grouping-based selection of input vectors. To improve the performance of the conventional APA, the proposed algorithm adjusts the number of input vectors using two procedures: a grouping procedure and a selection procedure. In the grouping procedure, input vectors that carry overlapping information for the update are grouped using the normalized inner product. Then, in the selection procedure, the few input vectors that carry enough information for the coefficient update are selected using the steady-state mean square error (MSE). Finally, the filter coefficients are updated using the selected input vectors. Experimental results show that the proposed algorithm achieves smaller steady-state estimation errors than existing algorithms.
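
    In outline, the two procedures wrap a standard regularized APA update, as in the sketch below. The inner-product threshold rho and the grouping rule are assumptions for illustration; the paper's steady-state-MSE-based selection stage is not reproduced.

    ```python
    import numpy as np

    def group_select(X, rho=0.95):
        """Grouping sketch: drop input vectors whose normalized inner product
        with an already-kept vector exceeds rho (threshold is an assumption)."""
        Xn = X / np.linalg.norm(X, axis=0, keepdims=True)
        kept = [0]
        for j in range(1, X.shape[1]):
            if all(abs(float(Xn[:, j] @ Xn[:, k])) < rho for k in kept):
                kept.append(j)
        return X[:, kept], kept

    def apa_update(w, X, d, mu=0.5, delta=1e-4):
        """One regularized affine projection update; X holds the selected input
        vectors as columns, d the corresponding desired samples."""
        e = d - X.T @ w                            # a-priori errors
        K = X.T @ X + delta * np.eye(X.shape[1])   # regularized Gram matrix
        return w + mu * X @ np.linalg.solve(K, e)
    ```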

  9. Dual-Byte-Marker Algorithm for Detecting JFIF Header

    NASA Astrophysics Data System (ADS)

    Mohamad, Kamaruddin Malik; Herawan, Tutut; Deris, Mustafa Mat

    The use of an efficient algorithm to detect JPEG files is vital to reduce the time taken for analyzing the ever-increasing data in hard drives or physical memory. In a previous paper, a single-byte-marker algorithm was proposed for header detection. In this paper, another novel header detection algorithm, called dual-byte-marker, is proposed. Based on experiments done on images from hard disks, physical memory, and the data set from the DFRWS 2006 Challenge, the results showed that the dual-byte-marker algorithm gives better performance, with better execution time for header detection, than the single-byte-marker.
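
    The published JFIF layout (SOI marker FF D8, APP0 marker FF E0, then the 'JFIF' identifier) is enough to sketch a header scan. The scan below keys on the two consecutive markers, which is our reading of the dual-byte-marker idea and is an assumption rather than the paper's exact procedure.

    ```python
    def find_jfif_headers(buf: bytes):
        """Scan a raw buffer (disk or memory image) for JFIF headers by
        searching for the SOI marker (FF D8) immediately followed by the
        APP0 marker (FF E0) and the 'JFIF' identifier."""
        hits = []
        pos = buf.find(b"\xff\xd8\xff\xe0")
        while pos != -1:
            # APP0 payload: 2-byte length field, then the identifier 'JFIF\x00'.
            if buf[pos + 6:pos + 11] == b"JFIF\x00":
                hits.append(pos)
            pos = buf.find(b"\xff\xd8\xff\xe0", pos + 1)
        return hits
    ```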

  10. On algorithmic rate-coded AER generation.

    PubMed

    Linares-Barranco, Alejandro; Jimenez-Moreno, Gabriel; Linares-Barranco, Bernabé; Civit-Balcells, Antón

    2006-05-01

    This paper addresses the problem of converting a conventional video stream based on sequences of frames into the spike event-based representation known as the address-event representation (AER). In this paper we concentrate on rate-coded AER. The problem is addressed as an algorithmic problem, in which different methods are proposed, implemented, and tested through software algorithms. The proposed algorithms are comparatively evaluated according to different criteria. Emphasis is put on the potential of such algorithms for (a) performing the frame-based to event-based conversion in real time, and (b) producing event streams that resemble as much as possible those generated naturally by rate-coded address-event VLSI chips, such as silicon AER retinae. It is found that simple and straightforward algorithms tend to have high potential for real time but produce event distributions that differ considerably from those obtained in AER VLSI chips. On the other hand, sophisticated algorithms that yield better event distributions are not efficient for real-time operation. The methods based on linear-feedback-shift-register (LFSR) pseudorandom number generation are a good compromise: they are feasible for real time and yield reasonably well-distributed events in time. Our software experiments, on a 1.6-GHz Pentium IV, show that at 50% AER bus load the proposed algorithms require between 0.011 and 1.14 ms per 8-bit pixel per frame. One of the proposed LFSR methods is implemented in real-time hardware using a prototyping board that includes a VirtexE 300 FPGA. The demonstration hardware is capable of transforming frames of 64 x 64 pixels of 8-bit depth at a frame rate of 25 frames per second, producing spike events at a peak rate of 10^7 events per second. PMID:16722179
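
    A toy version of the LFSR-based approach might look as follows: a 16-bit Galois LFSR supplies both pseudorandom pixel addresses and thresholds, so each pixel fires events at a rate roughly proportional to its 8-bit intensity. The tap mask, addressing scheme, and per-frame event budget are illustrative assumptions, not the paper's exact method.

    ```python
    import numpy as np

    def lfsr16(state):
        """One step of a 16-bit Galois LFSR (taps 16,14,13,11 -> mask 0xB400)."""
        lsb = state & 1
        state >>= 1
        if lsb:
            state ^= 0xB400
        return state

    def frame_to_events(frame, events_per_frame=4096, state=0xACE1):
        """Rate-coded conversion sketch: visit pixels in pseudorandom order and
        emit an address-event with probability ~ intensity/256."""
        h, w = frame.shape
        events = []
        for _ in range(events_per_frame):
            state = lfsr16(state)
            flat = state % (h * w)                 # pseudorandom pixel address
            y, x = flat // w, flat % w
            state = lfsr16(state)
            if (state & 0xFF) < frame[y, x]:       # fire with prob ~ intensity
                events.append((x, y))              # one address-event
        return events, state
    ```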

  11. Petri nets SM-cover based on heuristic coloring algorithm

    NASA Astrophysics Data System (ADS)

    Tkacz, Jacek; Doligalski, Michał

    2015-09-01

    In this paper, a heuristic coloring algorithm for interpreted Petri nets is presented. Coloring is used to determine the State Machine (SM) subnets. The present algorithm reduces the Petri net in order to reduce the computational complexity and finds one of its possible State Machine covers. The proposed algorithm uses elements of the interpretation of Petri nets. The obtained result may not be the best, but it is sufficient for use in rapid prototyping of logic controllers. The found SM-cover will also be used in the development of algorithms for decomposition, and for the modular synthesis and implementation of parallel logic controllers. The correctness of the developed heuristic algorithm was verified using the Gentzen formal reasoning system.
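
    As background, the kind of greedy coloring heuristic such algorithms build on can be sketched generically. This is only a plain graph-coloring sketch under our assumptions: the net-reduction step, the interpretation-aware rules, and the exact mapping of color classes to SM subnets are not reproduced.

    ```python
    def greedy_coloring(adj):
        """Greedy coloring heuristic (adjacency as a dict of sets): visit
        vertices by decreasing degree and assign each the smallest color not
        used by its neighbors. In an SM-cover setting, a color class would
        correspond to a candidate state-machine subnet (assumption)."""
        color = {}
        for v in sorted(adj, key=lambda u: -len(adj[u])):
            used = {color[u] for u in adj[v] if u in color}
            c = 0
            while c in used:
                c += 1
            color[v] = c
        return color
    ```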

  12. Grooming of arbitrary traffic using improved genetic algorithms

    NASA Astrophysics Data System (ADS)

    Jiao, Yueguang; Xu, Zhengchun; Zhang, Hanyi

    2004-04-01

    A genetic algorithm with permutation-based chromosome representation and roulette wheel selection is proposed to solve traffic grooming problems in WDM ring networks. The parameters of the algorithm are evaluated by computing a large number of traffic patterns under different conditions. Four methods were developed to improve the algorithm, which can be used in combination with each other. Their effects on the algorithm are studied via computer simulations. The results show that they all make the algorithm more powerful at reducing the number of add-drop multiplexers or wavelengths required in a network.
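
    Roulette wheel selection, one of the two components named in the abstract, is easy to pin down; the sketch below draws parents with probability proportional to fitness. The permutation chromosomes shown are toy data, and fitness is assumed non-negative with higher values better.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def roulette_wheel_select(population, fitness, n_parents):
        """Draw parents with probability proportional to fitness."""
        p = np.asarray(fitness, dtype=float)
        p = p / p.sum()
        idx = rng.choice(len(population), size=n_parents, p=p)
        return [population[i] for i in idx]

    # Example: three permutation chromosomes ordering five traffic demands.
    pop = [[0, 1, 2, 3, 4], [2, 0, 4, 1, 3], [4, 3, 2, 1, 0]]
    parents = roulette_wheel_select(pop, fitness=[3.0, 5.0, 2.0], n_parents=2)
    ```

    An order-preserving crossover would then be needed so that offspring remain valid permutations.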

  13. Generalized Jaynes-Cummings model as a quantum search algorithm

    SciTech Connect

    Romanelli, A.

    2009-07-15

    We propose a continuous-time quantum search algorithm using a generalization of the Jaynes-Cummings model. In this model the states of the atom are the elements among which the algorithm realizes the search, exciting resonances between the initial and the searched states. This algorithm behaves like Grover's algorithm; the optimal search time is proportional to the square root of the size of the search set, and the probability of finding the searched state oscillates periodically in time. In this framework, it is possible to reinterpret the usual Jaynes-Cummings model as a trivial case of the quantum search algorithm.

  14. Novel biomedical tetrahedral mesh methods: algorithms and applications

    NASA Astrophysics Data System (ADS)

    Yu, Xiao; Jin, Yanfeng; Chen, Weitao; Huang, Pengfei; Gu, Lixu

    2007-12-01

    Tetrahedral mesh generation, as a prerequisite of many soft-tissue simulation methods, has become very important in virtual surgery programs because of their real-time requirements. Aiming to speed up the computation in the simulation, we propose a revised Delaunay algorithm which strikes a good balance between quality of tetrahedra, boundary preservation, and time complexity, with many improved methods. Another mesh algorithm named Space-Disassembling is also presented in this paper, and a comparison of Space-Disassembling, the traditional Delaunay algorithm, and the revised Delaunay algorithm is performed on clinical soft-tissue simulation projects, including craniofacial plastic surgery and breast reconstruction plastic surgery.

  15. Solving SAT Problem Based on Hybrid Differential Evolution Algorithm

    NASA Astrophysics Data System (ADS)

    Liu, Kunqi; Zhang, Jingmin; Liu, Gang; Kang, Lishan

    The satisfiability (SAT) problem is an NP-complete problem. Based on an analysis of the problem, SAT is translated equivalently into an optimization problem of minimizing an objective function. A hybrid differential evolution algorithm is proposed to solve the satisfiability problem. It makes full use of the strong local search capacity of the hill-climbing algorithm and the strong global search capability of the differential evolution algorithm, which compensates for their respective disadvantages, improves the efficiency of the algorithm, and avoids the stagnation phenomenon. The experimental results show that the hybrid algorithm is efficient in solving the SAT problem.
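
    To make the differential evolution side concrete, here is a minimal DE/rand/1/bin sketch on a continuous relaxation of SAT, where each gene in [0,1] is rounded to a truth value and the objective counts unsatisfied clauses. The encoding and every parameter value are assumptions for illustration; the paper's hill-climbing hybrid step is not included.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def unsatisfied(assignment, clauses):
        """Objective: number of unsatisfied clauses. A clause is a tuple of
        signed 1-based literals, e.g. (1, -3, 4)."""
        return sum(
            not any(assignment[abs(l) - 1] == (l > 0) for l in clause)
            for clause in clauses
        )

    def de_for_sat(clauses, n_vars, pop=30, F=0.5, CR=0.9, iters=500):
        """DE/rand/1/bin on a continuous relaxation; genes > 0.5 map to True."""
        X = rng.random((pop, n_vars))
        cost = np.array([unsatisfied(x > 0.5, clauses) for x in X])
        for _ in range(iters):
            for i in range(pop):
                a, b, c = rng.choice([j for j in range(pop) if j != i],
                                     3, replace=False)
                v = np.clip(X[a] + F * (X[b] - X[c]), 0, 1)   # mutation
                mask = rng.random(n_vars) < CR
                mask[rng.integers(n_vars)] = True             # binomial crossover
                u = np.where(mask, v, X[i])
                cu = unsatisfied(u > 0.5, clauses)
                if cu <= cost[i]:                             # greedy selection
                    X[i], cost[i] = u, cu
            if cost.min() == 0:
                break
        return X[cost.argmin()] > 0.5, int(cost.min())
    ```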

  16. A hybrid monkey search algorithm for clustering analysis.

    PubMed

    Chen, Xin; Zhou, Yongquan; Luo, Qifang

    2014-01-01

    Clustering is a popular data analysis and data mining technique. The k-means clustering algorithm is one of the most commonly used methods. However, it depends highly on the initial solution and easily falls into a local optimum. In view of the disadvantages of the k-means method, this paper proposes a hybrid monkey algorithm based on the search operator of the artificial bee colony algorithm for clustering analysis. Experiments on synthetic and real-life datasets show that the algorithm performs better than the basic monkey algorithm for clustering analysis.

  17. Efficient algorithms for the laboratory discovery of optimal quantum controls.

    PubMed

    Turinici, Gabriel; Le Bris, Claude; Rabitz, Herschel

    2004-01-01

    The laboratory closed-loop optimal control of quantum phenomena, expressed as minimizing a suitable cost functional, is currently implemented through an optimization algorithm coupled to the experimental apparatus. In practice, the most commonly used search algorithms are variants of genetic algorithms. As an alternative choice, a direct search deterministic algorithm is proposed in this paper. For the simple simulations studied here, it outperforms the existing approaches. An additional algorithm is introduced in order to reveal some properties of the cost functional landscape. PMID:15324201

  18. Multiple origins of life

    NASA Technical Reports Server (NTRS)

    Raup, D. M.; Valentine, J. W.

    1983-01-01

    There is some indication that life may have originated readily under primitive earth conditions. If there were multiple origins of life, the result could have been a polyphyletic biota today. Using simple stochastic models for diversification and extinction, we conclude: (1) the probability of survival of life is low unless there are multiple origins, and (2) given survival of life and given as many as 10 independent origins of life, the odds are that all but one would have gone extinct, yielding the monophyletic biota we have now. The fact of the survival of our particular form of life does not imply that it was unique or superior.

  19. Speed-up hyperspheres homotopic path tracking algorithm for PWL circuits simulations.

    PubMed

    Ramirez-Pinero, A; Vazquez-Leal, H; Jimenez-Fernandez, V M; Sedighi, H M; Rashidi, M M; Filobello-Nino, U; Castaneda-Sheissa, R; Huerta-Chua, J; Sarmiento-Reyes, L A; Laguna-Camacho, J R; Castro-Gonzalez, F

    2016-01-01

    In the present work, we introduce an improved version of the hyperspheres path tracking method adapted for piecewise linear (PWL) circuits. This enhanced version takes advantage of the PWL characteristics of the homotopic curve, achieving faster path tracking and improving the performance of the homotopy continuation method (HCM). Faster computing time allows the study of more complex circuits; the proposed method also significantly decreases the probability of a divergence problem when using the Newton-Raphson method, because it is applied just twice per linear region on the homotopic path. Equilibrium equations of the studied circuits are obtained by applying modified nodal analysis; this method allows us to propose an algorithm for nonlinear circuit analysis. Besides, a starting-point criterion is proposed to obtain better performance of the HCM, and a technique for avoiding the reversion phenomenon is also proposed. To prove the efficiency of the path tracking method, several case studies with bipolar (BJT) and CMOS transistors are provided. Simulation results show that the proposed approach can be up to twelve times faster than the original path tracking method and also helps to avoid several reversion cases that appeared when the original hyperspheres path tracking scheme was employed.
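
    The underlying continuation idea can be sketched generically: trace a convex homotopy from a trivial problem at t = 0 to the target equations at t = 1, applying a couple of Newton corrections per step, echoing the "twice per linear region" economy mentioned above. This is not the hyperspheres tracker itself; the function names and step counts are illustrative assumptions.

    ```python
    import numpy as np

    def homotopy_solve(f, J, x0, steps=50, newton_iters=2):
        """Trace the convex homotopy H(x,t) = t*f(x) + (1-t)*(x - x0) from the
        trivial solution x0 at t=0 to a root of f at t=1, correcting with a
        fixed small number of Newton iterations per step."""
        x0 = np.array(x0, dtype=float)
        x = x0.copy()
        n = x.size
        for t in np.linspace(0.0, 1.0, steps)[1:]:
            for _ in range(newton_iters):
                H = t * f(x) + (1.0 - t) * (x - x0)
                dH = t * J(x) + (1.0 - t) * np.eye(n)  # Jacobian of the homotopy
                x = x - np.linalg.solve(dH, H)
        return x

    # Toy usage: solve x^3 = 2 starting from x0 = 1.
    root = homotopy_solve(lambda x: x**3 - 2,
                          lambda x: np.diag(3 * x**2),
                          np.array([1.0]))
    ```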

  20. Speed-up hyperspheres homotopic path tracking algorithm for PWL circuits simulations.

    PubMed

    Ramirez-Pinero, A; Vazquez-Leal, H; Jimenez-Fernandez, V M; Sedighi, H M; Rashidi, M M; Filobello-Nino, U; Castaneda-Sheissa, R; Huerta-Chua, J; Sarmiento-Reyes, L A; Laguna-Camacho, J R; Castro-Gonzalez, F

    2016-01-01

    In the present work, we introduce an improved version of the hyperspheres path tracking method adapted for piecewise linear (PWL) circuits. This enhanced version takes advantage of the PWL characteristics of the homotopic curve, achieving faster path tracking and improving the performance of the homotopy continuation method (HCM). Faster computing time allows the study of more complex circuits; the proposed method also significantly decreases the probability of a divergence problem when using the Newton-Raphson method, because it is applied just twice per linear region on the homotopic path. Equilibrium equations of the studied circuits are obtained by applying modified nodal analysis; this method allows us to propose an algorithm for nonlinear circuit analysis. Besides, a starting-point criterion is proposed to obtain better performance of the HCM, and a technique for avoiding the reversion phenomenon is also proposed. To prove the efficiency of the path tracking method, several case studies with bipolar (BJT) and CMOS transistors are provided. Simulation results show that the proposed approach can be up to twelve times faster than the original path tracking method and also helps to avoid several reversion cases that appeared when the original hyperspheres path tracking scheme was employed. PMID:27386338

  1. The origin of risk aversion.

    PubMed

    Zhang, Ruixun; Brennan, Thomas J; Lo, Andrew W

    2014-12-16

    Risk aversion is one of the most basic assumptions of economic behavior, but few studies have addressed the question of where risk preferences come from and why they differ from one individual to the next. Here, we propose an evolutionary explanation for the origin of risk aversion. In the context of a simple binary-choice model, we show that risk aversion emerges by natural selection if reproductive risk is systematic (i.e., correlated across individuals in a given generation). In contrast, risk neutrality emerges if reproductive risk is idiosyncratic (i.e., uncorrelated across each given generation). More generally, our framework implies that the degree of risk aversion is determined by the stochastic nature of reproductive rates, and we show that different statistical properties lead to different utility functions. The simplicity and generality of our model suggest that these implications are primitive and cut across species, physiology, and genetic origins. PMID:25453072

  2. The origin of the moon.

    NASA Technical Reports Server (NTRS)

    Anderson, D. L.

    1972-01-01

    Recent studies of the dynamics and thermodynamics of the early solar nebula have provided a basis for a theory of the origin of the moon which is consistent with presently developed views of the origin of the solar system. The hypothesis of inhomogeneous planetary accretion is extended. It is suggested that the anomalous properties of the moon, such as its enrichment in Ca, Al, Ti, and other refractories and its depletion in iron and volatiles can be explained if the bulk of the moon represents a high temperature condensate. It is proposed that the moon is composed chiefly of compounds that condense before iron and that the volatile content of the moon was brought in as a thin veneer after the solar nebula dissipated.

  3. The origin of risk aversion

    PubMed Central

    Zhang, Ruixun; Brennan, Thomas J.; Lo, Andrew W.

    2014-01-01

    Risk aversion is one of the most basic assumptions of economic behavior, but few studies have addressed the question of where risk preferences come from and why they differ from one individual to the next. Here, we propose an evolutionary explanation for the origin of risk aversion. In the context of a simple binary-choice model, we show that risk aversion emerges by natural selection if reproductive risk is systematic (i.e., correlated across individuals in a given generation). In contrast, risk neutrality emerges if reproductive risk is idiosyncratic (i.e., uncorrelated across each given generation). More generally, our framework implies that the degree of risk aversion is determined by the stochastic nature of reproductive rates, and we show that different statistical properties lead to different utility functions. The simplicity and generality of our model suggest that these implications are primitive and cut across species, physiology, and genetic origins. PMID:25453072

  4. Origin of neurotoxins from defensins.

    PubMed

    Zhu, Li-Mei; Gao, Bin; Zhu, Shun-Yi

    2015-06-25

    There are at least three conserved protein folds shared by ion channel-targeted neurotoxins and antimicrobial defensins: the cysteine-stabilized α-helix and β-sheet fold (CSαβ), the inhibitor cystine knot fold (ICK), and the β-defensin fold (BDF). Based on combined data from sequences, structures, and functions, it has been proposed that these neurotoxins could originate from related ancient antimicrobial defensins by neofunctionalization. This provides an ideal system to study how a novel function emerged from a conserved structural scaffold during evolution. The elucidation of functional novelty in proteins not only has great significance in evolutionary biology but will also be helpful in guiding rational molecular design. This review describes recent progress on the origin of neurotoxins, focusing on the three conserved protein scaffolds.

  5. A statistical model-based algorithm for 'black-box' multi-objective optimisation

    NASA Astrophysics Data System (ADS)

    Žilinskas, Antanas

    2014-01-01

    The problem of multi-objective optimisation with 'expensive' 'black-box' objective functions is considered. An algorithm is proposed that generalises the single-objective P-algorithm, constructed using the statistical model of multimodal functions and concepts from the theory of rational decisions under uncertainty. Computational examples are included, demonstrating that the proposed algorithm possesses several expected properties.

  6. Reasoning about systolic algorithms

    SciTech Connect

    Purushothaman, S.; Subrahmanyam, P.A.

    1988-12-01

    The authors present a methodology for verifying the correctness of systolic algorithms. The methodology is based on solving a set of Uniform Recurrence Equations obtained from a description of systolic algorithms as a set of recursive equations. They present an approach to mechanically verify the correctness of systolic algorithms using the Boyer-Moore theorem prover. A mechanical correctness proof of an example from the literature is also presented.

  7. Competing Sudakov veto algorithms

    NASA Astrophysics Data System (ADS)

    Kleiss, Ronald; Verheyen, Rob

    2016-07-01

    We present a formalism to analyze the distribution produced by a Monte Carlo algorithm. We perform these analyses on several versions of the Sudakov veto algorithm, adding a cutoff, a second variable and competition between emission channels. The formal analysis allows us to prove that multiple, seemingly different competition algorithms, including those that are currently implemented in most parton showers, lead to the same result. Finally, we test their performance in a semi-realistic setting and show that there are significantly faster alternatives to the commonly used algorithms.
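
    For reference, the basic Sudakov veto algorithm that these variants extend can be sketched with a constant overestimate g >= f: evolve downward in t, propose the next trial scale from the overestimate's Sudakov factor, and accept with probability f/g. The cutoff handling and the competition between emission channels analyzed in the paper are not reproduced here.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def veto_algorithm(f, g_const, t_start, t_cut):
        """Return the scale of the first accepted emission, distributed as
        f(t) * exp(-integral_t^t_start f), or None if evolution falls below
        the cutoff. Assumes f(t) <= g_const on the whole range."""
        t = t_start
        while True:
            # Next trial scale from the constant overestimate's Sudakov factor:
            # exp(-g*(t - t')) = r  =>  t' = t + ln(r)/g  (a downward step).
            t = t + np.log(rng.random()) / g_const
            if t <= t_cut:
                return None
            if rng.random() < f(t) / g_const:   # accept; otherwise veto, continue
                return t
    ```

    Continuing the downward evolution from a vetoed scale, rather than restarting, is what reproduces the correct Sudakov form; it is around this bookkeeping that the competing variants differ.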

  8. Robust growing neural gas algorithm with application in cluster analysis.

    PubMed

    Qin, A K; Suganthan, P N

    2004-01-01

    We propose a novel robust clustering algorithm within the Growing Neural Gas (GNG) framework, called the Robust Growing Neural Gas (RGNG) network. By incorporating several robust strategies, such as an outlier-resistant scheme, adaptive modulation of learning rates, and a cluster repulsion method, into the traditional GNG framework, the proposed RGNG network possesses better robustness properties. The RGNG is insensitive to initialization, input sequence ordering, and the presence of outliers. Furthermore, the RGNG network can automatically determine the optimal number of clusters by seeking the extreme value of the Minimum Description Length (MDL) measure during the network growing process. The resulting center positions of the optimal number of clusters, represented by prototype vectors, are close to the actual ones irrespective of the existence of outliers. Topology relationships among these prototypes can also be established. Experimental results have shown the superior performance of our proposed method over the original GNG incorporating the MDL method, called GNG-M, in static data clustering tasks on both artificial and UCI data sets. PMID:15555857

  9. Hybrid Algorithms for Fuzzy Reverse Supply Chain Network Design

    PubMed Central

    Che, Z. H.; Chiang, Tzu-An; Kuo, Y. C.

    2014-01-01

    In consideration of capacity constraints, fuzzy defect ratio, and fuzzy transport loss ratio, this paper attempts to establish an optimized decision model for the production planning and distribution of a multiphase, multiproduct reverse supply chain, which addresses defects returned to original manufacturers, and in addition develops hybrid algorithms such as Particle Swarm Optimization-Genetic Algorithm (PSO-GA), Genetic Algorithm-Simulated Annealing (GA-SA), and Particle Swarm Optimization-Simulated Annealing (PSO-SA) for solving the optimized model. Through a case study of a multiphase, multiproduct reverse supply chain network, this paper demonstrates the suitability of the optimized decision model and the applicability of the algorithms. Finally, the hybrid algorithms showed excellent solving capability when compared with the original GA and PSO methods. PMID:24892057

  10. Hybrid algorithms for fuzzy reverse supply chain network design.

    PubMed

    Che, Z H; Chiang, Tzu-An; Kuo, Y C; Cui, Zhihua

    2014-01-01

    In consideration of capacity constraints, fuzzy defect ratio, and fuzzy transport loss ratio, this paper attempts to establish an optimized decision model for the production planning and distribution of a multiphase, multiproduct reverse supply chain, which addresses defects returned to original manufacturers, and in addition develops hybrid algorithms such as Particle Swarm Optimization-Genetic Algorithm (PSO-GA), Genetic Algorithm-Simulated Annealing (GA-SA), and Particle Swarm Optimization-Simulated Annealing (PSO-SA) for solving the optimized model. Through a case study of a multiphase, multiproduct reverse supply chain network, this paper demonstrates the suitability of the optimized decision model and the applicability of the algorithms. Finally, the hybrid algorithms showed excellent solving capability when compared with the original GA and PSO methods. PMID:24892057

  11. Hybrid algorithms for fuzzy reverse supply chain network design.

    PubMed

    Che, Z H; Chiang, Tzu-An; Kuo, Y C; Cui, Zhihua

    2014-01-01

    In consideration of capacity constraints, fuzzy defect ratio, and fuzzy transport loss ratio, this paper attempts to establish an optimized decision model for the production planning and distribution of a multiphase, multiproduct reverse supply chain, which addresses defects returned to original manufacturers, and in addition develops hybrid algorithms such as Particle Swarm Optimization-Genetic Algorithm (PSO-GA), Genetic Algorithm-Simulated Annealing (GA-SA), and Particle Swarm Optimization-Simulated Annealing (PSO-SA) for solving the optimized model. Through a case study of a multiphase, multiproduct reverse supply chain network, this paper demonstrates the suitability of the optimized decision model and the applicability of the algorithms. Finally, the hybrid algorithms showed excellent solving capability when compared with the original GA and PSO methods.

  12. Current state of the art brachytherapy treatment planning dosimetry algorithms

    PubMed Central

    Pantelis, E; Karaiskos, P

    2014-01-01

    Following literature contributions delineating the deficiencies introduced by the approximations of conventional brachytherapy dosimetry, different model-based dosimetry algorithms have been incorporated into commercial systems for 192Ir brachytherapy treatment planning. The calculation settings of these algorithms are pre-configured according to criteria established by their developers for optimizing computation speed vs accuracy. Their clinical use is hence straightforward. A basic understanding of these algorithms and their limitations is essential, however, for commissioning; detecting differences from conventional algorithms; explaining their origin; assessing their impact; and maintaining global uniformity of clinical practice. PMID:25027247

  13. Annealed Importance Sampling Reversible Jump MCMC algorithms

    SciTech Connect

    Karagiannis, Georgios; Andrieu, Christophe

    2013-03-20

    It will soon be 20 years since reversible jump Markov chain Monte Carlo (RJ-MCMC) algorithms have been proposed. They have significantly extended the scope of Markov chain Monte Carlo simulation methods, offering the promise to be able to routinely tackle transdimensional sampling problems, as encountered in Bayesian model selection problems for example, in a principled and flexible fashion. Their practical efficient implementation, however, still remains a challenge. A particular difficulty encountered in practice is in the choice of the dimension matching variables (both their nature and their distribution) and the reversible transformations which allow one to define the one-to-one mappings underpinning the design of these algorithms. Indeed, even seemingly sensible choices can lead to algorithms with very poor performance. The focus of this paper is the development and performance evaluation of a method, annealed importance sampling RJ-MCMC (aisRJ), which addresses this problem by mitigating the sensitivity of RJ-MCMC algorithms to the aforementioned poor design. As we shall see the algorithm can be understood as being an “exact approximation” of an idealized MCMC algorithm that would sample from the model probabilities directly in a model selection set-up. Such an idealized algorithm may have good theoretical convergence properties, but typically cannot be implemented, and our algorithms can approximate the performance of such idealized algorithms to an arbitrary degree while not introducing any bias for any degree of approximation. Our approach combines the dimension matching ideas of RJ-MCMC with annealed importance sampling and its Markov chain Monte Carlo implementation. We illustrate the performance of the algorithm with numerical simulations which indicate that, although the approach may at first appear computationally involved, it is in fact competitive.
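
    The flavor of the annealed importance sampling component can be conveyed by a minimal fixed-dimension sketch: weights accumulate across a ladder of tempered densities, with one Metropolis move per rung. The geometric path, the single-move transitions, and the vectorized callables log_p0, log_p1, and sample0 are all assumptions for illustration; the reversible jump layer that aisRJ adds on top is not reproduced.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def ais_weights(log_p0, log_p1, sample0, n_temps=50, n_chains=100, step=0.5):
        """AIS between an easy density p0 and a target p1 via intermediate
        densities p_b ∝ p0^(1-b) * p1^b, returning samples and log-weights."""
        betas = np.linspace(0.0, 1.0, n_temps)
        x = np.array([sample0() for _ in range(n_chains)])
        logw = np.zeros(n_chains)
        for b_prev, b in zip(betas[:-1], betas[1:]):
            # Weight update: ratio of successive annealed densities at current x.
            logw += (b - b_prev) * (log_p1(x) - log_p0(x))
            # One Metropolis transition leaving p_b invariant.
            prop = x + step * rng.standard_normal(n_chains)
            log_acc = ((1 - b) * (log_p0(prop) - log_p0(x))
                       + b * (log_p1(prop) - log_p1(x)))
            accept = np.log(rng.random(n_chains)) < log_acc
            x = np.where(accept, prop, x)
        return x, logw   # E_p1[h] ≈ sum(w * h(x)) / sum(w), with w = exp(logw)
    ```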

  14. A new optimization approach for shell and tube heat exchangers by using electromagnetism-like algorithm (EM)

    NASA Astrophysics Data System (ADS)

    Abed, Azher M.; Abed, Issa Ahmed; Majdi, Hasan Sh.; Al-Shamani, Ali Najah; Sopian, K.

    2016-02-01

    This study proposes a new procedure for the optimal design of shell and tube heat exchangers. The electromagnetism-like algorithm is applied to save on heat exchanger capital cost and to design a compact, high-performance heat exchanger with effective use of the allowable pressure drop (cost of the pump). An optimization algorithm is then utilized to determine the optimal values of both the geometric design parameters and the maximum allowable pressure drop by pursuing the minimization of a total cost function. A computer code is developed for optimal shell and tube heat exchangers. Different test cases are solved to demonstrate the effectiveness and ability of the proposed algorithm. Results are also compared with those obtained by other approaches available in the literature. The comparisons indicate that the proposed design procedure can be successfully applied to the optimal design of shell and tube heat exchangers. In particular, in the examined cases, reductions of total costs of up to 30, 29, and 56.15% compared with the original design, and up to 18, 5.5, and 7.4% compared with other approaches, are observed for case studies 1, 2, and 3, respectively. The economic optimization resulting from the proposed design procedure is especially relevant when size or volume is critical and a high-performance, compact unit of moderate volume and cost is needed.

  15. Musical emotions: Functions, origins, evolution

    NASA Astrophysics Data System (ADS)

    Perlovsky, Leonid

    2010-03-01

    Theories of music origins and the role of musical emotions in the mind are reviewed. Most existing theories contradict each other, and cannot explain mechanisms or roles of musical emotions in workings of the mind, nor evolutionary reasons for music origins. Music seems to be an enigma. Nevertheless, a synthesis of cognitive science and mathematical models of the mind has been proposed describing a fundamental role of music in the functioning and evolution of the mind, consciousness, and cultures. The review considers ancient theories of music as well as contemporary theories advanced by leading authors in this field. It addresses one hypothesis that promises to unify the field and proposes a theory of musical origin based on a fundamental role of music in cognition and evolution of consciousness and culture. We consider a split in the vocalizations of proto-humans into two types: one less emotional and more concretely-semantic, evolving into language, and the other preserving emotional connections along with semantic ambiguity, evolving into music. The proposed hypothesis departs from other theories in considering specific mechanisms of the mind-brain, which required the evolution of music parallel with the evolution of cultures and languages. Arguments are reviewed that the evolution of language toward becoming the semantically powerful tool of today required emancipation from emotional encumbrances. The opposite, no less powerful mechanisms required a compensatory evolution of music toward more differentiated and refined emotionality. The need for refined music in the process of cultural evolution is grounded in fundamental mechanisms of the mind. This is why today's human mind and cultures cannot exist without today's music. The reviewed hypothesis gives a basis for future analysis of why different evolutionary paths of languages were paralleled by different evolutionary paths of music. Approaches toward experimental verification of this hypothesis in

  16. Binarization algorithm for document image with complex background

    NASA Astrophysics Data System (ADS)

    Miao, Shaojun; Lu, Tongwei; Min, Feng

    2015-12-01

    The most important step in image preprocessing for Optical Character Recognition (OCR) is binarization. Due to complex backgrounds or varying light in text images, binarization is a very difficult problem. This paper presents an improved binarization algorithm. The algorithm can be divided into several steps. First, the background approximation is obtained by polynomial fitting, and the text is sharpened by using a bilateral filter. Second, image contrast compensation is performed to reduce the impact of light and improve the contrast of the original image. Third, the first derivative of the pixels in the compensated image is calculated to get the average value of the threshold, and edge detection is performed. Fourth, the stroke width of the text is estimated by measuring the distance between edge pixels; the final stroke width is determined by choosing the most frequent distance in the histogram. Fifth, according to the value of the final stroke width, the window size is calculated, and a local threshold estimation approach binarizes the image. Finally, small noise is removed using morphological operators. Experimental results show that the proposed method can effectively remove the noise caused by complex backgrounds and varying light.
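
    As an illustration of the fifth step, the sketch below computes a local threshold over a window derived from the estimated stroke width. It uses Sauvola's classic formula T = m * (1 + k*(s/R - 1)) as a stand-in; the paper's own threshold rule may differ, and the parameters k and R are conventional defaults, not values from the paper.

    ```python
    import numpy as np
    from scipy.ndimage import uniform_filter

    def local_binarize(img, window, k=0.2, R=128.0):
        """Local thresholding sketch: `window` would be derived from the
        estimated stroke width (e.g., a small multiple of it)."""
        img = img.astype(float)
        m = uniform_filter(img, window)                  # local mean
        s = np.sqrt(np.maximum(uniform_filter(img**2, window) - m**2, 0.0))
        T = m * (1.0 + k * (s / R - 1.0))                # Sauvola-style threshold
        return (img > T).astype(np.uint8) * 255          # foreground/background split
    ```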

  17. Formal Verification of a Conflict Resolution and Recovery Algorithm

    NASA Technical Reports Server (NTRS)

    Maddalon, Jeffrey; Butler, Ricky; Geser, Alfons; Munoz, Cesar

    2004-01-01

    New air traffic management concepts distribute the duty of traffic separation among system participants. As a consequence, these concepts have a greater dependency and rely heavily on on-board software and hardware systems. One example of a new on-board capability in a distributed air traffic management system is air traffic conflict detection and resolution (CD&R). Traditional methods for safety assessment such as human-in-the-loop simulations, testing, and flight experiments may not be sufficient for this highly distributed system as the set of possible scenarios is too large to have a reasonable coverage. This paper proposes a new method for the safety assessment of avionics systems that makes use of formal methods to drive the development of critical systems. As a case study of this approach, the mechanical verification of an algorithm for air traffic conflict resolution and recovery called RR3D is presented. The RR3D algorithm uses a geometric optimization technique to provide a choice of resolution and recovery maneuvers. If the aircraft adheres to these maneuvers, they will bring the aircraft out of conflict and the aircraft will follow a conflict-free path to its original destination. Verification of RR3D is carried out using the Prototype Verification System (PVS).

  18. Genetic algorithms for geophysical parameter inversion from altimeter data

    NASA Astrophysics Data System (ADS)

    Ramillien, Guillaume

    2001-11-01

    A new approach for inverting several geophysical parameters at the same time from altimeter and marine data by implementing genetic algorithms (GAs) is presented. These original techniques of optimization based on non-deterministic rules simulate the evolution of a population of candidate solutions for a given objective function to minimize. They offer a robust and efficient alternative to gradient techniques for non-linear parameter inversion. Here genetic algorithms are used for solving a discrete gravity problem of data associated with an undersea relief, to retrieve seven parameters at the same time: the elastic thickness, the mean ocean depth, the seamount location (longitude/latitude), its amplitude, radius and density from its observed gravity/geoid signature. This approach was also successfully used to adjust lithosphere parameters in the real case of the Rarotonga seamount [21.2°S 159.8°W] in the Southern Cook Islands region, where GA simulations provided robust estimates of these seven parameters. The GA found very realistic values for the mean ocean depth and the seamount amplitude and the precise geographical location of Rarotonga Island. Moreover, the values of elastic thickness (~14-15 km) and seamount density (~2850-2870 kg m-3) estimated by the GA are consistent with the ones proposed in earlier studies.

  19. Development of a novel constellation based landmark detection algorithm

    NASA Astrophysics Data System (ADS)

    Ghayoor, Ali; Vaidya, Jatin G.; Johnson, Hans J.

    2013-03-01

    Anatomical landmarks such as the anterior commissure (AC) and posterior commissure (PC) are commonly used by researchers for co-registration of images. In this paper, we present a novel, automated approach for landmark detection that combines morphometric constraining and statistical shape models to provide accurate estimation of landmark points. This method is made robust to large rotations in initial head orientation by extracting extra information of the eye centers using a radial Hough transform and exploiting the centroid of head mass (CM) using a novel estimation approach. To evaluate the effectiveness of this method, the algorithm is trained on a set of 20 images with manually selected landmarks, and a test dataset is used to compare the automatically detected against the manually detected landmark locations of the AC, PC, midbrain-pons junction (MPJ), and fourth ventricle notch (VN4). The results show that the proposed method is accurate as the average error between the automatically and manually labeled landmark points is less than 1 mm. Also, the algorithm is highly robust as it was successfully run on a large dataset that included different kinds of images with various orientation, spacing, and origin.

  20. Scalable Nearest Neighbor Algorithms for High Dimensional Data.

    PubMed

    Muja, Marius; Lowe, David G

    2014-11-01

    For many computer vision and machine learning problems, large training sets are key for good performance. However, the most computationally expensive part of many computer vision and machine learning algorithms consists of finding nearest neighbor matches to high dimensional vectors that represent the training data. We propose new algorithms for approximate nearest neighbor matching and evaluate and compare them with previous algorithms. For matching high dimensional features, we find two algorithms to be the most efficient: the randomized k-d forest and a new algorithm proposed in this paper, the priority search k-means tree. We also propose a new algorithm for matching binary features by searching multiple hierarchical clustering trees and show it outperforms methods typically used in the literature. We show that the optimal nearest neighbor algorithm and its parameters depend on the data set characteristics and describe an automated configuration procedure for finding the best algorithm to search a particular data set. In order to scale to very large data sets that would otherwise not fit in the memory of a single machine, we propose a distributed nearest neighbor matching framework that can be used with any of the algorithms described in the paper. All this research has been released as an open source library called fast library for approximate nearest neighbors (FLANN), which has been incorporated into OpenCV and is now one of the most popular libraries for nearest neighbor matching. PMID:26353063
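
    Since FLANN ships inside OpenCV, the randomized k-d forest described above can be exercised in a few lines. This is a generic usage sketch, not code from the paper; img1 and img2 are placeholder grayscale images, and the parameter values (5 trees, 50 checks, 0.7 ratio) are common defaults rather than the authors' recommendations.

    ```python
    import cv2

    FLANN_INDEX_KDTREE = 1   # selects the randomized k-d forest index

    def match_features(img1, img2):
        """Match SIFT descriptors between two grayscale images with FLANN."""
        sift = cv2.SIFT_create()
        kp1, des1 = sift.detectAndCompute(img1, None)
        kp2, des2 = sift.detectAndCompute(img2, None)
        index_params = dict(algorithm=FLANN_INDEX_KDTREE, trees=5)
        search_params = dict(checks=50)   # leaves visited per query (speed/recall)
        flann = cv2.FlannBasedMatcher(index_params, search_params)
        matches = flann.knnMatch(des1, des2, k=2)
        # Lowe's ratio test keeps only distinctive nearest-neighbor matches.
        return [m for m, n in matches if m.distance < 0.7 * n.distance]
    ```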

  1. Scalable Nearest Neighbor Algorithms for High Dimensional Data.

    PubMed

    Muja, Marius; Lowe, David G

    2014-11-01

    For many computer vision and machine learning problems, large training sets are key for good performance. However, the most computationally expensive part of many computer vision and machine learning algorithms consists of finding nearest neighbor matches to high dimensional vectors that represent the training data. We propose new algorithms for approximate nearest neighbor matching and evaluate and compare them with previous algorithms. For matching high dimensional features, we find two algorithms to be the most efficient: the randomized k-d forest and a new algorithm proposed in this paper, the priority search k-means tree. We also propose a new algorithm for matching binary features by searching multiple hierarchical clustering trees and show it outperforms methods typically used in the literature. We show that the optimal nearest neighbor algorithm and its parameters depend on the data set characteristics and describe an automated configuration procedure for finding the best algorithm to search a particular data set. In order to scale to very large data sets that would otherwise not fit in the memory of a single machine, we propose a distributed nearest neighbor matching framework that can be used with any of the algorithms described in the paper. All this research has been released as an open source library called fast library for approximate nearest neighbors (FLANN), which has been incorporated into OpenCV and is now one of the most popular libraries for nearest neighbor matching.

  2. DNA replication origins.

    PubMed

    Leonard, Alan C; Méchali, Marcel

    2013-10-01

    The onset of genomic DNA synthesis requires precise interactions of specialized initiator proteins with DNA at sites where the replication machinery can be loaded. These sites, defined as replication origins, are found at a few unique locations in all of the prokaryotic chromosomes examined so far. However, replication origins are dispersed among tens of thousands of loci in metazoan chromosomes, thereby raising questions regarding the role of specific nucleotide sequences and chromatin environment in origin selection and the mechanisms used by initiators to recognize replication origins. Close examination of bacterial and archaeal replication origins reveals an array of DNA sequence motifs that position individual initiator protein molecules and promote initiator oligomerization on origin DNA. Conversely, the need for specific recognition sequences in eukaryotic replication origins is relaxed. In fact, the primary rule for origin selection appears to be flexibility, a feature that is modulated either by structural elements or by epigenetic mechanisms at least partly linked to the organization of the genome for gene expression.

  3. Religion: Origins and Evolution.

    ERIC Educational Resources Information Center

    Meyer, John K.

    2004-01-01

    We present the purpose of the study of the origins and development of affect-relevant and religion-relevant hypotheses, and the conjectured prediction of proto-religious sequences in pre-human anthropoids and primitive human cultures. We anticipate a more comprehensive study of the modern cultural outcomes of these origins and developments.

  4. Chemical Origins of Life

    ERIC Educational Resources Information Center

    Fox, J. Lawrence

    1972-01-01

    Reviews ideas and evidence bearing on the origin of life. Shows that evidence to support modifications of Oparin's theories of the origin of biological constituents from inorganic materials is accumulating, and that the necessary components are readily obtained from the simple gases found in the universe. (AL)

  5. The Moon's Origin.

    ERIC Educational Resources Information Center

    Cadogan, Peter

    1983-01-01

    Presents findings and conclusions about the origin of the moon, favoring the capture hypothesis of lunar origin. Advantage of the hypothesis is that it allows the moon to have been formed elsewhere, specifically in a hotter part of the solar nebula, accounting for chemical differences between earth and moon. (JN)

  6. Originalism in the Classroom

    ERIC Educational Resources Information Center

    Forte, David F.

    2011-01-01

    In this article, the author provides a detailed legal history of originalism and investigates whether, and to what extent, originalism is a part of law school teaching on the Constitution. He shares the results of an examination of the leading constitutional law textbooks used in the top fifty law schools and a selection of responses gathered from…

  7. The Growth of Originalism

    ERIC Educational Resources Information Center

    Bork, Robert H.

    2011-01-01

    The latest episode in the long-running struggle for control of the Constitution, and the political power that goes with it, is playing out in the federal courts in California. The contending philosophies are originalism, which holds that the Constitution should be read as it was originally understood by the framers and ratifiers, and the congeries…

  8. The origin of membrane bioenergetics.

    PubMed

    Lane, Nick; Martin, William F

    2012-12-21

    Harnessing energy as ion gradients across membranes is as universal as the genetic code. We leverage new insights into anaerobe metabolism to propose geochemical origins that account for the ubiquity of chemiosmotic coupling, and Na(+)/H(+) transporters in particular. Natural proton gradients acting across thin FeS walls within alkaline hydrothermal vents could drive carbon assimilation, leading to the emergence of protocells within vent pores. Protocell membranes that were initially leaky would eventually become less permeable, forcing cells dependent on natural H(+) gradients to pump Na(+) ions. Our hypothesis accounts for the Na(+)/H(+) promiscuity of bioenergetic proteins, as well as the deep divergence between bacteria and archaea. PMID:23260134

  9. The conception and implementation of a local HDR fusion algorithm depending on contrast and luminosity parameters

    NASA Astrophysics Data System (ADS)

    Besrour, Amine; Abdelkefi, Fatma; Siala, Mohamed; Snoussi, Hichem

    2015-09-01

    Nowadays, high dynamic range (HDR) imaging is the subject of much research. The major problem lies in implementing the best algorithm to acquire the best video quality. In fact, the major constraint is to conceive an optimal fusion that can keep up with the rapid movement of video frames; previously implemented merging algorithms were not quick enough to reconstitute HDR video. In this paper, we detail each of the previous existing works before detailing our algorithm and presenting results from the acquired HDR images, tone mapped with various techniques. Our proposed algorithm guarantees a more enhanced and faster solution compared to the existing ones. In fact, it has the ability to calculate a saturation matrix related to the saturation rate of the neighboring pixels, and the computed coefficients are assigned to each of the tested pictures. This analysis provides faster and more efficient results in terms of quality and brightness. The originality of our work lies in its processing method, which takes into account the pixel saturation across the totality of the captured pictures and combines them in order to obtain the best picture showing all possible details. These parameters are computed for each zone depending on the contrast and the luminosity of the current pixel and its neighborhood. The final HDR image's coefficients are calculated dynamically, ensuring the best image quality by balancing the brightness and contrast values to produce the best final image.
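
    As a rough illustration of per-pixel weighted fusion of this kind, the sketch below blends an exposure stack with weights built from local contrast, saturation, and well-exposedness. It loosely follows Mertens-style exposure fusion rather than the authors' exact saturation-matrix formulation; shapes and parameter values are assumptions.

    ```python
    import numpy as np

    def fuse_exposures(stack, sigma=0.2):
        """Blend an exposure stack (n x H x W x 3, values in [0,1]) with
        per-pixel quality weights, then normalize and sum."""
        weights = []
        for img in stack:
            gray = img.mean(axis=2)
            # Contrast: magnitude of a simple Laplacian approximation.
            lap = np.abs(np.gradient(np.gradient(gray, axis=0), axis=0)
                         + np.gradient(np.gradient(gray, axis=1), axis=1))
            sat = img.std(axis=2)                          # color saturation
            well = np.exp(-((img - 0.5) ** 2).sum(axis=2) / (2 * sigma**2))
            weights.append(lap * sat * well + 1e-12)
        W = np.stack(weights)
        W /= W.sum(axis=0, keepdims=True)                  # normalize per pixel
        return (W[..., None] * np.asarray(stack)).sum(axis=0)
    ```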

  10. Sensitivity Analysis of the Scattering-Based SARBM3D Despeckling Algorithm.

    PubMed

    Di Simone, Alessio

    2016-01-01

    Synthetic Aperture Radar (SAR) imagery greatly suffers from multiplicative speckle noise, typical of coherent image acquisition sensors, such as SAR systems. Therefore, a proper and accurate despeckling preprocessing step is almost mandatory to aid the interpretation and processing of SAR data by human users and computer algorithms, respectively. Very recently, a scattering-oriented version of the popular SAR Block-Matching 3D (SARBM3D) despeckling filter, named Scattering-Based (SB)-SARBM3D, was proposed. The new filter is based on the a priori knowledge of the local topography of the scene. In this paper, an experimental sensitivity analysis of the above-mentioned despeckling algorithm is carried out, and the main results are shown and discussed. In particular, the role of both electromagnetic and geometrical parameters of the surface and the impact of its scattering behavior are investigated. Furthermore, a comprehensive sensitivity analysis of the SB-SARBM3D filter against the Digital Elevation Model (DEM) resolution and the SAR image-DEM coregistration step is also provided. The sensitivity analysis shows a significant robustness of the algorithm against most of the surface parameters, while the DEM resolution plays a key role in the despeckling process. Furthermore, the SB-SARBM3D algorithm outperforms the original SARBM3D in the presence of the most realistic scattering behaviors of the surface. An actual scenario is also presented to assess the DEM role in real-life conditions. PMID:27347971

  11. Design of multiple sequence alignment algorithms on parallel, distributed memory supercomputers.

    PubMed

    Church, Philip C; Goscinski, Andrzej; Holt, Kathryn; Inouye, Michael; Ghoting, Amol; Makarychev, Konstantin; Reumann, Matthias

    2011-01-01

    The challenge of comparing two or more genomes that have undergone recombination and substantial amounts of segmental loss and gain has recently been addressed for small numbers of genomes. However, datasets of hundreds of genomes are now common and their sizes will only increase in the future. Multiple sequence alignment of hundreds of genomes remains an intractable problem due to quadratic increases in compute time and memory footprint. To date, most alignment algorithms are designed for commodity clusters without parallelism. Hence, we propose the design of a multiple sequence alignment algorithm on massively parallel, distributed memory supercomputers to enable research into comparative genomics on large data sets. Following the methodology of the sequential progressiveMauve algorithm, we design data structures including sequences and sorted k-mer lists on the IBM Blue Gene/P supercomputer (BG/P). Preliminary results show that we can reduce the memory footprint so that we can potentially align over 250 bacterial genomes on a single BG/P compute node. We verify our results on a dataset of E.coli, Shigella and S.pneumoniae genomes. Our implementation returns results matching those of the original algorithm but in 1/2 the time and with 1/4 the memory footprint for scaffold building. In this study, we have laid the basis for multiple sequence alignment of large-scale datasets on a massively parallel, distributed memory supercomputer, thus enabling comparison of hundreds instead of a few genome sequences within reasonable time. PMID:22254462
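
    The sorted k-mer lists at the heart of the data layout can be illustrated with a toy serial version. The choice of k and the in-memory dictionary are assumptions; the BG/P implementation distributes the sequences and the resulting lists across compute nodes, which this sketch does not attempt.

    ```python
    from collections import defaultdict

    def sorted_kmer_list(sequences, k=8):
        """Map each k-mer to the (sequence, offset) positions where it occurs,
        returned in lexicographic k-mer order; shared k-mers between genomes
        seed alignment anchors."""
        index = defaultdict(list)
        for sid, seq in enumerate(sequences):
            for i in range(len(seq) - k + 1):
                index[seq[i:i + k]].append((sid, i))
        return sorted(index.items())
    ```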

  12. Sensitivity Analysis of the Scattering-Based SARBM3D Despeckling Algorithm

    PubMed Central

    Di Simone, Alessio

    2016-01-01

    Synthetic Aperture Radar (SAR) imagery greatly suffers from multiplicative speckle noise, typical of coherent image acquisition sensors, such as SAR systems. Therefore, a proper and accurate despeckling preprocessing step is almost mandatory to aid the interpretation and processing of SAR data by human users and computer algorithms, respectively. Very recently, a scattering-oriented version of the popular SAR Block-Matching 3D (SARBM3D) despeckling filter, named Scattering-Based (SB)-SARBM3D, was proposed. The new filter is based on the a priori knowledge of the local topography of the scene. In this paper, an experimental sensitivity analysis of the above-mentioned despeckling algorithm is carried out, and the main results are shown and discussed. In particular, the role of both electromagnetic and geometrical parameters of the surface and the impact of its scattering behavior are investigated. Furthermore, a comprehensive sensitivity analysis of the SB-SARBM3D filter against the Digital Elevation Model (DEM) resolution and the SAR image-DEM coregistration step is also provided. The sensitivity analysis shows a significant robustness of the algorithm against most of the surface parameters, while the DEM resolution plays a key role in the despeckling process. Furthermore, the SB-SARBM3D algorithm outperforms the original SARBM3D in the presence of the most realistic scattering behaviors of the surface. An actual scenario is also presented to assess the DEM role in real-life conditions. PMID:27347971

  13. Design of multiple sequence alignment algorithms on parallel, distributed memory supercomputers.

    PubMed

    Church, Philip C; Goscinski, Andrzej; Holt, Kathryn; Inouye, Michael; Ghoting, Amol; Makarychev, Konstantin; Reumann, Matthias

    2011-01-01

    The challenge of comparing two or more genomes that have undergone recombination and substantial amounts of segmental loss and gain has recently been addressed for small numbers of genomes. However, datasets of hundreds of genomes are now common and their sizes will only increase in the future. Multiple sequence alignment of hundreds of genomes remains an intractable problem due to quadratic increases in compute time and memory footprint. To date, most alignment algorithms are designed for commodity clusters without parallelism. Hence, we propose the design of a multiple sequence alignment algorithm on massively parallel, distributed memory supercomputers to enable research into comparative genomics on large data sets. Following the methodology of the sequential progressiveMauve algorithm, we design data structures including sequences and sorted k-mer lists on the IBM Blue Gene/P supercomputer (BG/P). Preliminary results show that we can reduce the memory footprint so that we can potentially align over 250 bacterial genomes on a single BG/P compute node. We verify our results on a dataset of E.coli, Shigella and S.pneumoniae genomes. Our implementation returns results matching those of the original algorithm but in 1/2 the time and with 1/4 the memory footprint for scaffold building. In this study, we have laid the basis for multiple sequence alignment of large-scale datasets on a massively parallel, distributed memory supercomputer, thus enabling comparison of hundreds instead of a few genome sequences within reasonable time.

  14. A Novel Quantum-Behaved Bat Algorithm with Mean Best Position Directed for Numerical Optimization.

    PubMed

    Zhu, Binglian; Zhu, Wenyong; Liu, Zijuan; Duan, Qingyan; Cao, Long

    2016-01-01

    This paper proposes a novel quantum-behaved bat algorithm directed by the mean best position (QMBA). In QMBA, the position of each bat is updated mainly by the current optimal solution in the early stage of the search; in the later stage it also depends on the mean best position, which enhances the convergence speed of the algorithm. During the search, quantum behavior is introduced, which helps the bats jump out of local optima and adapt to complex environments. Meanwhile, QMBA exploits statistical information about the best positions the bats have experienced to generate higher-quality solutions. This approach not only inherits the quick convergence, simplicity, and easy implementation of the original bat algorithm, but also increases population diversity and improves solution accuracy. Twenty-four benchmark test functions were tested and compared against other bat algorithm variants for numerical optimization; the simulation results show that the approach is simple and efficient and achieves more accurate solutions. PMID:27293424
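
    The following is a schematic sketch of the position update described above: bats are pulled toward the current global best early in the search and toward the mean best position later, with a quantum-behaved (logarithmic) jump that keeps a nonzero chance of escaping local optima. The blend schedule, step-size constant, and log-based jump follow the generic quantum-behaved PSO pattern and are assumptions, not the authors' exact update equations.

        # One position update for a population of bats. `personal_best` holds
        # each bat's best-so-far position; the attractor blends the global
        # best (early in the run) with the mean best position (late).
        import numpy as np

        def qmba_step(positions, personal_best, global_best, t, t_max, rng,
                      beta=0.75):
            mean_best = personal_best.mean(axis=0)
            w = t / t_max                     # 0 early in the run -> 1 late
            attractor = (1.0 - w) * global_best + w * mean_best
            # Quantum-behaved jump: heavy-tailed logarithmic step, as in QPSO.
            u = rng.uniform(1e-12, 1.0, size=positions.shape)
            step = beta * np.abs(mean_best - positions) * np.log(1.0 / u)
            sign = np.where(rng.random(positions.shape) < 0.5, 1.0, -1.0)
            return attractor + sign * step

        rng = np.random.default_rng(0)
        pop = rng.uniform(-5, 5, size=(30, 2))     # 30 bats in 2-D
        new_pop = qmba_step(pop, pop.copy(), pop[0], t=10, t_max=100, rng=rng)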

  15. A Novel Quantum-Behaved Bat Algorithm with Mean Best Position Directed for Numerical Optimization

    PubMed Central

    Zhu, Wenyong; Liu, Zijuan; Duan, Qingyan; Cao, Long

    2016-01-01

    This paper proposes a novel quantum-behaved bat algorithm directed by the mean best position (QMBA). In QMBA, the position of each bat is updated mainly by the current optimal solution in the early stage of the search; in the later stage it also depends on the mean best position, which enhances the convergence speed of the algorithm. During the search, quantum behavior is introduced, which helps the bats jump out of local optima and adapt to complex environments. Meanwhile, QMBA exploits statistical information about the best positions the bats have experienced to generate higher-quality solutions. This approach not only inherits the quick convergence, simplicity, and easy implementation of the original bat algorithm, but also increases population diversity and improves solution accuracy. Twenty-four benchmark test functions were tested and compared against other bat algorithm variants for numerical optimization; the simulation results show that the approach is simple and efficient and achieves more accurate solutions. PMID:27293424

  16. Sensitivity Analysis of the Scattering-Based SARBM3D Despeckling Algorithm.

    PubMed

    Di Simone, Alessio

    2016-01-01

    Synthetic Aperture Radar (SAR) imagery greatly suffers from multiplicative speckle noise, typical of coherent image acquisition sensors such as SAR systems. Therefore, a proper and accurate despeckling preprocessing step is almost mandatory to aid interpretation of SAR data by human users and its processing by computer algorithms. Very recently, a scattering-oriented version of the popular SAR Block-Matching 3D (SARBM3D) despeckling filter, named Scattering-Based (SB)-SARBM3D, was proposed. The new filter is based on a priori knowledge of the local topography of the scene. In this paper, an experimental sensitivity analysis of the above-mentioned despeckling algorithm is carried out, and the main results are shown and discussed. In particular, the role of both electromagnetic and geometrical parameters of the surface and the impact of its scattering behavior are investigated. Furthermore, a comprehensive sensitivity analysis of the SB-SARBM3D filter against the Digital Elevation Model (DEM) resolution and the SAR image-DEM coregistration step is also provided. The sensitivity analysis shows a significant robustness of the algorithm against most of the surface parameters, while the DEM resolution plays a key role in the despeckling process. Furthermore, the SB-SARBM3D algorithm outperforms the original SARBM3D in the presence of the most realistic scattering behaviors of the surface. An actual scenario is also presented to assess the role of the DEM in real-life conditions.

  17. Quantum algorithm for an additive approximation of Ising partition functions

    NASA Astrophysics Data System (ADS)

    Matsuo, Akira; Fujii, Keisuke; Imoto, Nobuyuki

    2014-08-01

    We investigate the quantum-computational complexity of calculating partition functions of Ising models. We construct a quantum algorithm for an additive approximation of Ising partition functions on square lattices. To this end, we utilize the overlap mapping developed by M. Van den Nest, W. Dür, and H. J. Briegel [Phys. Rev. Lett. 98, 117207 (2007), 10.1103/PhysRevLett.98.117207] and its interpretation through measurement-based quantum computation (MBQC). We specify an algorithmic domain, on which the proposed algorithm works, and an approximation scale, which determines the accuracy of the approximation. We show that the proposed algorithm performs a nontrivial task, which would be intractable on any classical computer, by showing that the problem solvable by the proposed quantum algorithm is BQP-complete. In the construction of the BQP-complete problem, coupling strengths and magnetic fields take complex values. However, the Ising models that are of central interest in statistical physics and computer science consist of real coupling strengths and magnetic fields. Thus we extend the algorithmic domain of the proposed algorithm to this physically relevant parameter region and calculate the approximation scale explicitly. We find that the overlap mapping and its MBQC interpretation improve the approximation scale exponentially compared to a straightforward constant-depth quantum algorithm. On the other hand, the proposed quantum algorithm also provides partial evidence that there exists no efficient classical algorithm for a multiplicative approximation of Ising partition functions, even on the square lattice. This result supports the observation that the proposed quantum algorithm also performs a nontrivial task in the physical parameter region.
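
    For orientation, the quantity being approximated is the Ising partition function written below; the notation is generic rather than taken from the paper, and the additive guarantee is stated schematically.

        \[
          Z(J, h) \;=\; \sum_{\sigma \in \{-1,+1\}^{N}}
            \exp\!\Bigg( \sum_{\langle i,j \rangle} J_{ij}\,\sigma_i \sigma_j
                       \;+\; \sum_{i=1}^{N} h_i\,\sigma_i \Bigg)
        \]
        % An additive approximation returns an estimate \(\hat{Z}\) with
        % \( \lvert \hat{Z} - Z \rvert \le \Delta \) with high probability;
        % the approximation scale \(\Delta\) is what the paper computes
        % explicitly, for complex couplings in the BQP-complete construction
        % and for real \(J_{ij}\) and \(h_i\) in the physical regime.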

  18. A fast optimization transfer algorithm for image inpainting in wavelet domains.

    PubMed

    Chan, Raymond H; Wen, You-Wei; Yip, Andy M

    2009-07-01

    A wavelet inpainting problem refers to the problem of filling in missing wavelet coefficients in an image. A variational approach was used by Chan et al., who minimized the resulting functional by the gradient descent method. In this paper, we use an optimization transfer technique that replaces their univariate functional with a bivariate one obtained by adding an auxiliary variable. Our bivariate functional can be minimized easily by alternating minimization: for the auxiliary variable, the minimum has a closed-form solution, and for the original variable, the minimization problem can be formulated as a classical total variation (TV) denoising problem and, hence, can be solved efficiently using a dual formulation. We show that our bivariate functional is equivalent to the original univariate functional. We also show that our alternating minimization is convergent. Numerical results show that the proposed algorithm is very efficient and outperforms that of Chan et al.
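
    The alternating structure described above can be sketched as follows. For brevity, this sketch collapses the closed-form auxiliary-variable step into a hard reimposition of the known wavelet coefficients and delegates the image step to an off-the-shelf dual (Chambolle) TV solver; the wavelet choice, weights, and function names are assumptions, not the authors' exact formulation.

        # Alternate between (1) a coefficient step that reimposes the known
        # wavelet coefficients and (2) a classical TV denoising step on the
        # image. Assumes a dyadic-sized grayscale image; `known_mask` is a
        # boolean array matching pywt's stacked coefficient-array layout.
        import numpy as np
        import pywt
        from skimage.restoration import denoise_tv_chambolle

        def tv_wavelet_inpaint(img0, known_mask, wavelet="db2", level=2,
                               tv_weight=0.1, iters=30):
            # Known coefficients of the damaged image, in flat array form.
            c0, slices = pywt.coeffs_to_array(
                pywt.wavedec2(img0, wavelet, level=level))
            u = img0.copy()
            for _ in range(iters):
                c, _ = pywt.coeffs_to_array(
                    pywt.wavedec2(u, wavelet, level=level))
                c[known_mask] = c0[known_mask]          # data-fidelity step
                u = pywt.waverec2(
                    pywt.array_to_coeffs(c, slices,
                                         output_format="wavedec2"),
                    wavelet)
                u = denoise_tv_chambolle(u, weight=tv_weight)  # TV step
            return u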

  19. Differential Search Algorithm Based Edge Detection

    NASA Astrophysics Data System (ADS)

    Gunen, M. A.; Civicioglu, P.; Beşdok, E.

    2016-06-01

    In this paper, a new method is presented for extracting edge information using the Differential Search optimization algorithm. The proposed method is based on a new heuristic image thresholding method for edge detection. The success of the proposed method has been examined on the fusion of two remotely sensed images. The applicability of the proposed method to edge detection and image fusion problems has been analysed in detail, and the empirical results show that the proposed method is useful for solving both problems.
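
    The pattern the abstract describes, an edge threshold chosen by a stochastic optimizer over the gradient-magnitude image, can be sketched as below. A plain random search stands in for the Differential Search update rule, and the between-class-variance fitness is an illustrative choice, not the paper's objective function.

        # Pick the gradient-magnitude threshold that maximizes a fitness
        # function, then report the boolean edge map. Random search is a
        # stand-in for the Differential Search Algorithm.
        import numpy as np
        from scipy import ndimage

        def gradient_magnitude(img):
            gx = ndimage.sobel(img, axis=1, output=float)
            gy = ndimage.sobel(img, axis=0, output=float)
            return np.hypot(gx, gy)

        def between_class_variance(g, t):
            lo, hi = g[g <= t], g[g > t]
            if lo.size == 0 or hi.size == 0:
                return 0.0
            w0, w1 = lo.size / g.size, hi.size / g.size
            return w0 * w1 * (lo.mean() - hi.mean()) ** 2

        def detect_edges(img, n_candidates=200, rng=None):
            rng = rng or np.random.default_rng(0)
            g = gradient_magnitude(img)
            cands = rng.uniform(g.min(), g.max(), size=n_candidates)
            best_t = max(cands, key=lambda t: between_class_variance(g, t))
            return g > best_t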

  20. Primitive fitting based on the efficient multiBaySAC algorithm.

    PubMed

    Kang, Zhizhong; Li, Zhen

    2015-01-01

    Although RANSAC is proven to be robust, the original RANSAC algorithm selects hypothesis sets at random, generating numerous iterations and high computational costs because many hypothesis sets are contaminated with outliers. This paper presents a conditional sampling method, multiBaySAC (Bayes SAmple Consensus), that fuses the BaySAC algorithm with statistical testing of candidate model parameters for unorganized 3D point clouds to fit multiple primitives. This paper first presents a statistical testing algorithm for a candidate model parameter histogram to detect potential primitives. Because the detected initial primitives were optimized using a parallel strategy rather than a sequential one, every data point in the multiBaySAC algorithm was assigned multiple prior inlier probabilities for the initial primitives. Each prior inlier probability determined the probability that a point belongs to the corresponding primitive. We then implemented in parallel a conditional sampling method: BaySAC. With each iteration of the hypothesis testing process, the hypothesis sets with the highest inlier probabilities were selected and verified for the existence of multiple primitives, revealing the fitting for multiple primitives. Moreover, the update of the initial probability was implemented based on a memorable form of Bayes' Theorem, which describes the relationship between the prior and posterior probabilities of a data point by determining whether the hypothesis set to which the data point belongs is correct. The proposed approach was tested using real and synthetic point clouds. The results show that the proposed multiBaySAC algorithm can achieve high computational efficiency (averaging 34% higher than that of the sequential RANSAC method) and fitting accuracy (exhibiting good performance in the intersection of two primitives), whereas the sequential RANSAC framework clearly suffers from over- and under-segmentation problems. Future work will aim at further