Science.gov

Sample records for algorithm originally proposed

  1. The algorithmic origins of life

    PubMed Central

    Walker, Sara Imari; Davies, Paul C. W.

    2013-01-01

    Although it has been notoriously difficult to pin down precisely what it is that makes life so distinctive and remarkable, there is general agreement that its informational aspect is one key property, perhaps the key property. The unique informational narrative of living systems suggests that life may be characterized by context-dependent causal influences, and, in particular, that top-down (or downward) causation—where higher levels influence and constrain the dynamics of lower levels in organizational hierarchies—may be a major contributor to the hierarchical structure of living systems. Here, we propose that the emergence of life may correspond to a physical transition associated with a shift in the causal structure, where information gains direct and context-dependent causal efficacy over the matter in which it is instantiated. Such a transition may be akin to more traditional physical transitions (e.g. thermodynamic phase transitions), with the crucial distinction that determining which phase (non-life or life) a given system is in requires dynamical information and therefore can only be inferred by identifying causal architecture. We discuss some novel research directions based on this hypothesis, including potential measures of such a transition that may be amenable to laboratory study, and how the proposed mechanism corresponds to the onset of the unique mode of (algorithmic) information processing characteristic of living systems. PMID:23235265

  2. Exactness of the original Grover search algorithm

    SciTech Connect

    Diao Zijian

    2010-10-15

    It is well-known that when searching one out of four, the original Grover's search algorithm is exact; that is, it succeeds with certainty. It is natural to ask the inverse question: If we are not searching one out of four, is Grover's algorithm definitely not exact? In this article we give a complete answer to this question through some rationality results of trigonometric functions.
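
    The one-out-of-four case can be verified with the standard amplitude analysis (the textbook calculation, not the rationality argument of the article). After k Grover iterations the success probability is

        P_k = \sin^2\big((2k+1)\theta\big), \qquad \sin\theta = \sqrt{M/N},

    where N is the database size and M the number of marked items. For N = 4 and M = 1, \sin\theta = 1/2, so \theta = \pi/6 and a single iteration gives P_1 = \sin^2(\pi/2) = 1: success with certainty. For other (N, M), exactness asks whether (2k+1)\theta can equal \pi/2 exactly, which is where the rationality properties of trigonometric functions enter.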

  3. Cliftonite in meteorites: A proposed origin

    USGS Publications Warehouse

    Brett, R.; Higgins, G.T.

    1967-01-01

    Cliftonite, a polycrystalline aggregate of graphite with cubic morphology, is known in ten meteorites. Some workers have considered it to be a pseudomorph after diamond, and have used the proposed diamond ancestry as evidence of a meteoritic parent body of at least lunar dimensions. We have synthesized cliftonite in Fe-Ni-C alloys in vacuum, as a product of decomposition of cohenite [(Fe,Ni)3C]. We therefore suggest that a high pressure origin is unnecessary for meteorites which contain cliftonite, and that these meteorites were formed at low pressures. This conclusion is in agreement with other recent evidence.

  4. Cohenite: its occurrence and a proposed origin

    USGS Publications Warehouse

    Brett, R.

    1967-01-01

    Cohenite is found almost exclusively in meteorites containing from 6 to 8 wt.% Ni. On the basis of phase diagrams and kinetic data it is proposed that cohenite cannot form in meteorites having more than 8 wt.% Ni and that any cohenite which formed in meteorites having Ni content lower than 6 wt.% decomposed during cooling. A series of isothermal sections for the system Fe-Ni-C has been constructed between 750 and 600°C from published information on the three constituent binary systems. The diagrams indicate that the presence of a few tenths of a per cent carbon in a Ni-Fe alloy may reduce the temperature at which kamacite separates from taenite by more than 50°C. Hence C in iron meteorites may be partly responsible for the postulated supercooled nucleation of kamacite in meteorites proposed by recent authors. Cohenite found in meteorites probably formed over the temperature range 650-610°C. For compositions approximating those of metallic meteorites, the greater the C or Ni content of the alloy, the lower the temperature of formation of cohenite. The presence of cohenite in meteorites indicates neither high nor low pressures of formation. However, the absence of cohenite in meteorites containing the assemblage metal + graphite requires low pressures during cooling. Such meteorites therefore cooled in parent bodies of asteroidal size, or near the surface of large bodies. © 1967.

  5. Proposal of an Algorithm to Synthesize Music Suitable for Dance

    NASA Astrophysics Data System (ADS)

    Morioka, Hirofumi; Nakatani, Mie; Nishida, Shogo

    This paper proposes an algorithm for synthesizing music suited to the emotions expressed in moving pictures. Our goal is to support multimedia content creation such as web page design and animated films. Here we adopt a human dance as the moving picture to examine the applicability of our method, because dance images have a high affinity with music. The algorithm is composed of three modules: the first computes emotions from an input dance image, the second computes emotions from music in the database, and the last selects music suitable for the input dance via an emotion-based interface.

  6. Modified multiscale sample entropy computation of laser speckle contrast images and comparison with the original multiscale entropy algorithm.

    PubMed

    Humeau-Heurtier, Anne; Mahé, Guillaume; Abraham, Pierre

    2015-12-01

    Laser speckle contrast imaging (LSCI) enables a noninvasive monitoring of microvascular perfusion. Some studies have proposed to extract information from LSCI data through their multiscale entropy (MSE). However, for reaching a large range of scales, the original MSE algorithm may require long recordings for reliability. Recently, a novel approach to compute MSE with shorter data sets has been proposed: the short-time MSE (sMSE). Our goal is to apply, for the first time, the sMSE algorithm in LSCI data and to compare results with those given by the original MSE. Moreover, we apply the original MSE algorithm on data of different lengths and compare results with those given by longer recordings. For this purpose, synthetic signals and 192 LSCI regions of interest (ROIs) of different sizes are processed. Our results show that the sMSE algorithm is valid to compute the MSE of LSCI data. Moreover, with time series shorter than those initially proposed, the sMSE and original MSE algorithms give results with no statistical difference from those of the original MSE algorithm with longer data sets. The minimal acceptable length depends on the ROI size. Comparisons of MSE from healthy and pathological subjects can be performed with shorter data sets than those proposed until now. PMID:26220209
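
    For reference, a minimal sketch of the original MSE procedure mentioned above (coarse-graining followed by sample entropy); function names and parameter defaults are illustrative, not the authors' sMSE implementation:

        import numpy as np

        def sample_entropy(x, m=2, r=0.15):
            """Sample entropy SampEn(m, r); r is a fraction of the series std."""
            x = np.asarray(x, dtype=float)
            tol = r * x.std()
            def count_matches(mm):
                # all overlapping templates of length mm
                t = np.array([x[i:i + mm] for i in range(len(x) - mm + 1)])
                count = 0
                for i in range(len(t) - 1):
                    # Chebyshev distance from template i to all later templates
                    d = np.max(np.abs(t[i + 1:] - t[i]), axis=1)
                    count += np.sum(d <= tol)
                return count
            b = count_matches(m)       # template matches of length m
            a = count_matches(m + 1)   # template matches of length m + 1
            return -np.log(a / b) if a > 0 and b > 0 else np.inf

        def multiscale_entropy(x, max_scale=10, m=2, r=0.15):
            """Original MSE: coarse-grain by non-overlapping averaging, then SampEn."""
            x = np.asarray(x, dtype=float)
            out = []
            for tau in range(1, max_scale + 1):
                n = len(x) // tau
                coarse = x[:n * tau].reshape(n, tau).mean(axis=1)
                out.append(sample_entropy(coarse, m, r))
            return out

    As the abstract notes, the number of coarse-grained samples shrinks by a factor of tau at each scale, which is why large scales demand long recordings and why a short-time variant is attractive.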

  7. Modified multiscale sample entropy computation of laser speckle contrast images and comparison with the original multiscale entropy algorithm

    NASA Astrophysics Data System (ADS)

    Humeau-Heurtier, Anne; Mahé, Guillaume; Abraham, Pierre

    2015-12-01

    Laser speckle contrast imaging (LSCI) enables a noninvasive monitoring of microvascular perfusion. Some studies have proposed to extract information from LSCI data through their multiscale entropy (MSE). However, for reaching a large range of scales, the original MSE algorithm may require long recordings for reliability. Recently, a novel approach to compute MSE with shorter data sets has been proposed: the short-time MSE (sMSE). Our goal is to apply, for the first time, the sMSE algorithm in LSCI data and to compare results with those given by the original MSE. Moreover, we apply the original MSE algorithm on data of different lengths and compare results with those given by longer recordings. For this purpose, synthetic signals and 192 LSCI regions of interest (ROIs) of different sizes are processed. Our results show that the sMSE algorithm is valid to compute the MSE of LSCI data. Moreover, with time series shorter than those initially proposed, the sMSE and original MSE algorithms give results with no statistical difference from those of the original MSE algorithm with longer data sets. The minimal acceptable length depends on the ROI size. Comparisons of MSE from healthy and pathological subjects can be performed with shorter data sets than those proposed until now.

  8. A new proposal concerning the botanical origin of Baltic amber

    PubMed Central

    Wolfe, Alexander P.; Tappert, Ralf; Muehlenbachs, Karlis; Boudreau, Marc; McKellar, Ryan C.; Basinger, James F.; Garrett, Amber

    2009-01-01

    Baltic amber constitutes the largest known deposit of fossil plant resin and the richest repository of fossil insects of any age. Despite a remarkable legacy of archaeological, geochemical and palaeobiological investigation, the botanical origin of this exceptional resource remains controversial. Here, we use taxonomically explicit applications of solid-state Fourier-transform infrared (FTIR) microspectroscopy, coupled with multivariate clustering and palaeobotanical observations, to propose that conifers of the family Sciadopityaceae, closely allied to the sole extant representative, Sciadopitys verticillata, were involved in the genesis of Baltic amber. The fidelity of FTIR-based chemotaxonomic inferences is upheld by modern–fossil comparisons of resins from additional conifer families and genera (Cupressaceae: Metasequoia; Pinaceae: Pinus and Pseudolarix). Our conclusions challenge hypotheses advocating members of either of the families Araucariaceae or Pinaceae as the primary amber-producing trees and correlate favourably with the progressive demise of subtropical forest biomes from northern Europe as palaeotemperatures cooled following the Eocene climate optimum. PMID:19570786

  9. Replication and Comparison of the Newly Proposed ADOS-2, Module 4 Algorithm in ASD without ID: A Multi-Site Study

    ERIC Educational Resources Information Center

    Pugliese, Cara E.; Kenworthy, Lauren; Bal, Vanessa Hus; Wallace, Gregory L.; Yerys, Benjamin E.; Maddox, Brenna B.; White, Susan W.; Popal, Haroon; Armour, Anna Chelsea; Miller, Judith; Herrington, John D.; Schultz, Robert T.; Martin, Alex; Anthony, Laura Gutermuth

    2015-01-01

    Recent updates have been proposed to the Autism Diagnostic Observation Schedule-2 Module 4 diagnostic algorithm. This new algorithm, however, has not yet been validated in an independent sample without intellectual disability (ID). This multi-site study compared the original and revised algorithms in individuals with ASD without ID. The revised…

  10. Cliftonite: A proposed origin, and its bearing on the origin of diamonds in meteorites

    USGS Publications Warehouse

    Brett, R.; Higgins, G.T.

    1969-01-01

    Cliftonite, a polycrystalline aggregate of graphite with spherulitic structure and cubic morphology, is known in 14 meteorites. Some workers have considered it to be a pseudomorph after diamond, and have used the proposed diamond ancestry as evidence of a meteoritic parent body of at least lunar dimensions. Careful examination of meteoritic samples indicates that cliftonite forms by precipitation within kamacite. We have also demonstrated that graphite with cubic morphology may be synthesized in a Fe-Ni-C alloy annealed in a vacuum. We therefore suggest that a high pressure origin is unnecessary for meteorites which contain cliftonite, and that these meteorites were formed at low pressures. This conclusion is in agreement with other recent evidence. We also suggest that recently discovered cubes and cubo-octahedra of lonsdaleite in the Canyon Diablo meteorite are pseudomorphs after cliftonite, not diamond, as has previously been suggested. © 1969.

  11. Event-by-event PET image reconstruction using list-mode origin ensembles algorithm

    NASA Astrophysics Data System (ADS)

    Andreyev, Andriy

    2016-03-01

    There is a great demand for real-time or event-by-event (EBE) image reconstruction in emission tomography. Ideally, as soon as an event has been detected by the acquisition electronics, it should be used in the image reconstruction software. This would greatly speed up image reconstruction, since most of the data would be processed and reconstructed while the patient is still undergoing the scan. Unfortunately, the current industry standard is that reconstruction of the image does not start until all the data for the current image frame have been acquired. Implementing an EBE reconstruction for the MLEM family of algorithms is possible, but not straightforward, as multiple (computationally expensive) updates to the image estimate are required. In this work an alternative Origin Ensembles (OE) image reconstruction algorithm for PET imaging is converted to EBE mode, and it is investigated whether it is a viable alternative for real-time image reconstruction. In the OE algorithm all acquired events are seen as points located somewhere along the corresponding lines of response (LORs), together forming a point cloud. Iteratively, through a multitude of quasi-random shifts following the likelihood function, the point cloud converges to a reflection of the actual radiotracer distribution with a degree of accuracy similar to MLEM. New data can be naturally added into the point cloud. Preliminary results with simulated data show little difference between regular reconstruction and EBE mode, proving the feasibility of the proposed approach.
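
    A toy one-dimensional sketch of the origin-ensembles idea, assuming uniform detector sensitivity and a Metropolis acceptance of the form used in the OE literature; all names are illustrative, and this is not the author's implementation:

        import numpy as np

        rng = np.random.default_rng(0)

        def xlogx(n):
            return 0.0 if n <= 0 else n * np.log(n)

        def oe_reconstruct(lors, n_bins, n_sweeps=200):
            """Each event is a point placed somewhere on its LOR (here a
            half-open bin interval [lo, hi)); the cloud is shifted randomly."""
            pos = np.array([rng.integers(lo, hi) for lo, hi in lors])
            counts = np.bincount(pos, minlength=n_bins)
            for _ in range(n_sweeps):
                for e, (lo, hi) in enumerate(lors):
                    a, b = pos[e], rng.integers(lo, hi)  # propose a new origin
                    if b == a:
                        continue
                    na, nb = counts[a], counts[b]
                    # acceptance ratio for moving one event from bin a to bin b
                    log_r = xlogx(na - 1) + xlogx(nb + 1) - xlogx(na) - xlogx(nb)
                    if np.log(rng.random()) < log_r:
                        counts[a] -= 1
                        counts[b] += 1
                        pos[e] = b
            return counts  # the settled point cloud approximates the activity

    In event-by-event mode a newly detected event is simply appended to the point cloud (pos and counts) and the random shifts continue while data keep arriving, which is what makes OE attractive for real-time use.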

  12. Phlegethon flow: A proposed origin for spicules and coronal heating

    NASA Technical Reports Server (NTRS)

    Schatten, Kenneth H.; Mayr, Hans G.

    1986-01-01

    A model was developed for the mass, energy, and magnetic field transport into the corona. The focus is on the flow below the photosphere which allows the energy to pass into, and be dissipated within, the solar atmosphere. The high flow velocities observed in spicules are explained. A treatment following the work of Bailyn et al. (1985) is examined. It was concluded that within the framework of the model, energy may dissipate at a temperature comparable to the temperature where the waves originated, allowing for an equipartition solution of atmospheric flow, departing the sun at velocities approaching the maximum Alfven speed.

  13. Validating retinal fundus image analysis algorithms: issues and a proposal.

    PubMed

    Trucco, Emanuele; Ruggeri, Alfredo; Karnowski, Thomas; Giancardo, Luca; Chaum, Edward; Hubschman, Jean Pierre; Al-Diri, Bashir; Cheung, Carol Y; Wong, Damon; Abràmoff, Michael; Lim, Gilbert; Kumar, Dinesh; Burlina, Philippe; Bressler, Neil M; Jelinek, Herbert F; Meriaudeau, Fabrice; Quellec, Gwénolé; Macgillivray, Tom; Dhillon, Bal

    2013-05-01

    This paper concerns the validation of automatic retinal image analysis (ARIA) algorithms. For reasons of space and consistency, we concentrate on the validation of algorithms processing color fundus camera images, currently the largest section of the ARIA literature. We sketch the context (imaging instruments and target tasks) of ARIA validation, summarizing the main image analysis and validation techniques. We then present a list of recommendations focusing on the creation of large repositories of test data created by international consortia, easily accessible via moderated Web sites, including multicenter annotations by multiple experts, specific to clinical tasks, and capable of running submitted software automatically on the data stored, with clear and widely agreed-on performance criteria, to provide a fair comparison. PMID:23794433

  14. Validating Retinal Fundus Image Analysis Algorithms: Issues and a Proposal

    PubMed Central

    Trucco, Emanuele; Ruggeri, Alfredo; Karnowski, Thomas; Giancardo, Luca; Chaum, Edward; Hubschman, Jean Pierre; al-Diri, Bashir; Cheung, Carol Y.; Wong, Damon; Abràmoff, Michael; Lim, Gilbert; Kumar, Dinesh; Burlina, Philippe; Bressler, Neil M.; Jelinek, Herbert F.; Meriaudeau, Fabrice; Quellec, Gwénolé; MacGillivray, Tom; Dhillon, Bal

    2013-01-01

    This paper concerns the validation of automatic retinal image analysis (ARIA) algorithms. For reasons of space and consistency, we concentrate on the validation of algorithms processing color fundus camera images, currently the largest section of the ARIA literature. We sketch the context (imaging instruments and target tasks) of ARIA validation, summarizing the main image analysis and validation techniques. We then present a list of recommendations focusing on the creation of large repositories of test data created by international consortia, easily accessible via moderated Web sites, including multicenter annotations by multiple experts, specific to clinical tasks, and capable of running submitted software automatically on the data stored, with clear and widely agreed-on performance criteria, to provide a fair comparison. PMID:23794433

  15. Comparison of switching control algorithms effective in restricting the switching in the neighborhood of the origin

    NASA Astrophysics Data System (ADS)

    Joung, JinWook; Smyth, Andrew W.; Chung, Lan

    2010-06-01

    The active interaction control (AIC) system, consisting of a primary structure, an auxiliary structure and an interaction element, was proposed to protect the primary structure against earthquakes and winds. The objective of the AIC system in reducing the responses of the primary structure is fulfilled by activating or deactivating the switching between the engagement and the disengagement of the primary and auxiliary structures through the interaction element. The status of the interaction element is controlled by switching control algorithms. The previously developed switching control algorithms require an excessive amount of switching, which is inefficient. In this paper, the excessive amount of switching is restricted by imposing an appropriately designed switching boundary region, where switching is prohibited, on pre-designed engagement-disengagement conditions. Two different approaches are used in designing the newly proposed AID-off and AID-off2 algorithms. The AID-off2 algorithm is designed to affect deactivated switching regions explicitly, unlike the AID-off algorithm, which follows the same procedure for designing the engagement-disengagement conditions as the previously developed algorithms, by using the current status of the AIC system. Both algorithms are shown to be effective in reducing the number of switching events triggered by the previously developed AID algorithm under an appropriately selected control sampling period for different earthquakes, but the AID-off2 algorithm outperforms the AID-off algorithm in reducing the number of switching times.
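
    A schematic illustration of a no-switch boundary region; the engagement condition shown is a generic sign-based rule chosen for illustration, not the AID-off or AID-off2 design:

        def switch_status(x, v, engaged, eps):
            """x, v: displacement and velocity of the primary structure;
            engaged: current interaction-element status; eps: region size.
            Inside the boundary region switching is prohibited, which
            suppresses rapid on/off chatter near the origin."""
            if abs(x) < eps and abs(v) < eps:
                return engaged          # boundary region: hold current status
            return x * v > 0            # illustrative engagement condition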

  16. A Two-Stage Algorithm for Origin-Destination Matrices Estimation Considering Dynamic Dispersion Parameter for Route Choice.

    PubMed

    Wang, Yong; Ma, Xiaolei; Liu, Yong; Gong, Ke; Henrickson, Kristian C; Xu, Maozeng; Wang, Yinhai

    2016-01-01

    This paper proposes a two-stage algorithm to simultaneously estimate origin-destination (OD) matrix, link choice proportion, and dispersion parameter using partial traffic counts in a congested network. A non-linear optimization model is developed which incorporates a dynamic dispersion parameter, followed by a two-stage algorithm in which Generalized Least Squares (GLS) estimation and a Stochastic User Equilibrium (SUE) assignment model are iteratively applied until the convergence is reached. To evaluate the performance of the algorithm, the proposed approach is implemented in a hypothetical network using input data with high error, and tested under a range of variation coefficients. The root mean squared error (RMSE) of the estimated OD demand and link flows are used to evaluate the model estimation results. The results indicate that the estimated dispersion parameter theta is insensitive to the choice of variation coefficients. The proposed approach is shown to outperform two established OD estimation methods and produce parameter estimates that are close to the ground truth. In addition, the proposed approach is applied to an empirical network in Seattle, WA to validate the robustness and practicality of this methodology. In summary, this study proposes and evaluates an innovative computational approach to accurately estimate OD matrices using link-level traffic flow data, and provides useful insight for optimal parameter selection in modeling travelers' route choice behavior. PMID:26761209
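
    A structural sketch of the two-stage loop described above; gls_fit and sue_assign are hypothetical placeholders for the GLS estimator and the SUE assignment model:

        import numpy as np

        def two_stage_od(counts, P0, theta0, gls_fit, sue_assign,
                         tol=1e-4, max_iter=100):
            """counts: observed link counts; P0: initial link-choice proportions;
            gls_fit(P, counts) -> OD estimate;
            sue_assign(od, theta) -> updated (P, theta)."""
            P, theta = P0, theta0
            od_prev = None
            for _ in range(max_iter):
                od = gls_fit(P, counts)           # stage 1: GLS OD estimation
                P, theta = sue_assign(od, theta)  # stage 2: SUE updates P, theta
                if od_prev is not None and \
                        np.linalg.norm(od - od_prev) < tol * np.linalg.norm(od_prev):
                    break                         # converged
                od_prev = od
            return od, P, theta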

  17. A Two-Stage Algorithm for Origin-Destination Matrices Estimation Considering Dynamic Dispersion Parameter for Route Choice

    PubMed Central

    Wang, Yong; Ma, Xiaolei; Liu, Yong; Gong, Ke; Henrickson, Kristian C.; Xu, Maozeng; Wang, Yinhai

    2016-01-01

    This paper proposes a two-stage algorithm to simultaneously estimate origin-destination (OD) matrix, link choice proportion, and dispersion parameter using partial traffic counts in a congested network. A non-linear optimization model is developed which incorporates a dynamic dispersion parameter, followed by a two-stage algorithm in which Generalized Least Squares (GLS) estimation and a Stochastic User Equilibrium (SUE) assignment model are iteratively applied until the convergence is reached. To evaluate the performance of the algorithm, the proposed approach is implemented in a hypothetical network using input data with high error, and tested under a range of variation coefficients. The root mean squared error (RMSE) of the estimated OD demand and link flows are used to evaluate the model estimation results. The results indicate that the estimated dispersion parameter theta is insensitive to the choice of variation coefficients. The proposed approach is shown to outperform two established OD estimation methods and produce parameter estimates that are close to the ground truth. In addition, the proposed approach is applied to an empirical network in Seattle, WA to validate the robustness and practicality of this methodology. In summary, this study proposes and evaluates an innovative computational approach to accurately estimate OD matrices using link-level traffic flow data, and provides useful insight for optimal parameter selection in modeling travelers’ route choice behavior. PMID:26761209

  18. 75 FR 4043 - Correction: Proposed Information Collection; Comment Request; Fisheries Certificate of Origin

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-01-26

    ... National Oceanic and Atmospheric Administration Correction: Proposed Information Collection; Comment Request; Fisheries Certificate of Origin AGENCY: National Oceanic and Atmospheric Administration (NOAA). ACTION: Correction. SUMMARY: On January 15, 2010, a notice was published in the Federal Register (75...

  19. 76 FR 69328 - Proposed Collection; Comment Request; Race and National Origin Identification

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-11-08

    ... Proposed Collection; Comment Request; Race and National Origin Identification AGENCY: Department of The.... Type of Review: Revision of a currently approved collection. Title: Race and National Origin Identification. Abstract: The Department's automated recruitment system, CareerConnector, is used to capture...

  20. Construction Method of Display Proposal for Commodities in Sales Promotion by Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Yumoto, Masaki

    In a sales promotion task, a wholesaler prepares and presents a display proposal for commodities in order to negotiate with a retailer's buyers over which commodities they should sell. To automate sales promotion tasks, the proposal has to be constructed according to the target retailer's buyer. However, it is difficult to construct a proposal suitable for the target retail store because of the enormous number of commodity combinations. This paper proposes a construction method based on a Genetic Algorithm (GA). The proposed method represents initial display proposals as genes, improves them by GA according to an evaluation value, and rearranges the proposal with the highest evaluation value according to the classification of commodities. Through a practical experiment, we confirm that a display proposal produced by the proposed method is similar to one constructed by a wholesaler.
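
    A minimal GA sketch of the construction method outlined above; the gene representation (a subset of commodities) and the operators are illustrative assumptions, and evaluate stands in for the paper's evaluation value:

        import random

        def construct_proposal(commodities, evaluate, pop_size=30,
                               n_gen=50, p_mut=0.1):
            def random_proposal():
                k = random.randint(1, len(commodities))
                return random.sample(commodities, k)
            pop = [random_proposal() for _ in range(pop_size)]
            for _ in range(n_gen):
                pop.sort(key=evaluate, reverse=True)
                survivors = pop[:pop_size // 2]          # selection
                children = []
                while len(survivors) + len(children) < pop_size:
                    a, b = random.sample(survivors, 2)   # crossover of two parents
                    child = list(dict.fromkeys(a[:len(a) // 2] + b[len(b) // 2:]))
                    if random.random() < p_mut:          # mutation
                        child[random.randrange(len(child))] = \
                            random.choice(commodities)
                    children.append(child)
                pop = survivors + children
            # the best proposal is then rearranged by commodity classification
            return max(pop, key=evaluate)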

  1. Differentiating origins of outflow tract ventricular arrhythmias: a comparison of three different electrocardiographic algorithms

    PubMed Central

    Jiao, Z.Y.; Li, Y.B.; Mao, J.; Liu, X.Y.; Yang, X.C.; Tan, C.; Chu, J.M.; Liu, X.P.

    2016-01-01

    Our objective is to evaluate the accuracy of three algorithms in differentiating the origins of outflow tract ventricular arrhythmias (OTVAs). This study involved 110 consecutive patients with OTVAs for whom a standard 12-lead surface electrocardiogram (ECG) showed typical left bundle branch block morphology with an inferior axis. All the ECG tracings were retrospectively analyzed using the following three recently published ECG algorithms: 1) the transitional zone (TZ) index, 2) the V2 transition ratio, and 3) V2 R wave duration and R/S wave amplitude indices. Considering all patients, the V2 transition ratio had the highest sensitivity (92.3%), while the R wave duration and R/S wave amplitude indices in V2 had the highest specificity (93.9%). The latter finding had a maximal area under the ROC curve of 0.925. In patients with left ventricular (LV) rotation, the V2 transition ratio had the highest sensitivity (94.1%), while the R wave duration and R/S wave amplitude indices in V2 had the highest specificity (87.5%). The former finding had a maximal area under the ROC curve of 0.892. All three published ECG algorithms are effective in differentiating the origin of OTVAs, while the V2 transition ratio and the V2 R wave duration and R/S wave amplitude indices are the most sensitive and specific algorithms, respectively. Amongst all of the patients, the V2 R wave duration and R/S wave amplitude algorithm had the maximal area under the ROC curve, but in patients with LV rotation the V2 transition ratio algorithm had the maximum area under the ROC curve. PMID:27143173

  2. A Proposed India-Specific Algorithm for Management of Type 2 Diabetes.

    PubMed

    2016-06-01

    Several algorithms and guidelines have been proposed by countries and international professional bodies; however, no recent updated management algorithm is available for Asian Indians. Specifically, algorithms developed and validated in developed nations may not be relevant or applicable to patients in India because of several factors: early age of onset of diabetes, occurrence of diabetes in nonobese and sometimes lean people, differences in the relative contributions of insulin resistance and β-cell dysfunction, marked postprandial glycemia, frequent infections including tuberculosis, low access to healthcare and medications in people of low socioeconomic stratum, ethnic dietary practices (e.g., ingestion of high-carbohydrate diets), and inadequate education regarding hypoglycemia. All these factors should be considered to choose an appropriate therapeutic option in this population. The proposed algorithm is simple, suggests less expensive drugs, and tries to provide an effective and comprehensive framework for delivery of diabetes therapy in primary care in India. The proposed guidelines agree with international recommendations in favoring individualization of therapeutic targets as well as modalities of treatment in a flexible manner suitable to the Indian population. PMID:26909751

  3. Proposed algorithm for determining the delta intercept of a thermocouple psychrometer curve

    SciTech Connect

    Kurzmack, M.A.

    1993-07-01

    The USGS Hydrologic Investigations Program is currently developing instrumentation to study the unsaturated zone at Yucca Mountain in Nevada. Surface-based boreholes up to 2,500 feet in depth will be drilled, and then instrumented in order to define the water potential field within the unsaturated zone. Thermocouple psychrometers will be used to monitor the in-situ water potential. An algorithm is proposed for simply and efficiently reducing a six-wire thermocouple psychrometer voltage output curve to a single value, the delta intercept. The algorithm identifies a plateau region in the psychrometer curve and extrapolates a linear regression back to the initial start of relaxation. When properly conditioned for the measurements being made, the algorithm produces reasonable results even with incomplete or noisy psychrometer curves over a 1 to 60 bar range.
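
    A sketch of the proposed reduction under stated assumptions (the plateau-detection threshold and the choice of relaxation start are illustrative):

        import numpy as np

        def delta_intercept(t, v, slope_tol=1e-3):
            """t: time (s); v: psychrometer output (microvolts).
            Find the plateau, fit a line to it, and extrapolate back
            to the start of relaxation to obtain the delta intercept."""
            t, v = np.asarray(t, float), np.asarray(v, float)
            slope = np.gradient(v, t)
            plateau = np.abs(slope) < slope_tol      # near-zero-slope region
            if not plateau.any():
                raise ValueError("no plateau found; adjust slope_tol")
            a, b = np.polyfit(t[plateau], v[plateau], 1)
            t0 = t[0]                                # assumed relaxation start
            return a * t0 + b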

  4. A review of satellite altimeter measurement of sea surface wind speed - With a proposed new algorithm

    NASA Technical Reports Server (NTRS)

    Chelton, D. B.; Mccabe, P. J.

    1985-01-01

    The scheduled February 1985 launch of a radar altimeter aboard the U.S. Navy satellite Geosat has motivated an in-depth investigation of wind speed retrieval from satellite altimeters. The accuracy of sea surface wind speed estimated by the Seasat altimeter is examined by comparison with wind speed estimated by the Seasat scatterometer. The intercomparison is based on globally distributed spatial and temporal averages of the estimated wind speed. It is shown that there are systematic differences between altimeter and scatterometer wind speed estimates. These differences are traced to errors in the Seasat altimeter geophysical data record wind speed algorithm. A new algorithm is proposed which yields consistent estimates from the two satellite sensors. Using this new algorithm, the rms difference between spatial and temporal averages of the two wind speed estimates is less than 1 m/s, and their correlation is greater than 0.9.

  5. Assembled sequence contigs by SOAPdenovo and Velvet algorithms from metagenomic short reads of a new bacterial isolate of gut origin

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Assembled sequence contigs by SOAPdenovo and Velvet algorithms from metagenomic short reads of a new bacterial isolate of gut origin. This study included 2 submissions with a total of 9.8 million bp of assembled contigs....

  6. Replication and Comparison of the Newly Proposed ADOS-2, Module 4 Algorithm in ASD Without ID: A Multi-site Study.

    PubMed

    Pugliese, Cara E; Kenworthy, Lauren; Bal, Vanessa Hus; Wallace, Gregory L; Yerys, Benjamin E; Maddox, Brenna B; White, Susan W; Popal, Haroon; Armour, Anna Chelsea; Miller, Judith; Herrington, John D; Schultz, Robert T; Martin, Alex; Anthony, Laura Gutermuth

    2015-12-01

    Recent updates have been proposed to the Autism Diagnostic Observation Schedule-2 Module 4 diagnostic algorithm. This new algorithm, however, has not yet been validated in an independent sample without intellectual disability (ID). This multi-site study compared the original and revised algorithms in individuals with ASD without ID. The revised algorithm demonstrated increased sensitivity, but lower specificity in the overall sample. Estimates were highest for females, individuals with a verbal IQ below 85 or above 115, and ages 16 and older. Best practice diagnostic procedures should include the Module 4 in conjunction with other assessment tools. Balancing needs for sensitivity and specificity depending on the purpose of assessment (e.g., clinical vs. research) and demographic characteristics mentioned above will enhance its utility. PMID:26385796

  7. Proposed diagnostic algorithm for patients with suspected mastocytosis: a proposal of the European Competence Network on Mastocytosis.

    PubMed

    Valent, P; Escribano, L; Broesby-Olsen, S; Hartmann, K; Grattan, C; Brockow, K; Niedoszytko, M; Nedoszytko, B; Oude Elberink, J N G; Kristensen, T; Butterfield, J H; Triggiani, M; Alvarez-Twose, I; Reiter, A; Sperr, W R; Sotlar, K; Yavuz, S; Kluin-Nelemans, H C; Hermine, O; Radia, D; van Doormaal, J J; Gotlib, J; Orfao, A; Siebenhaar, F; Schwartz, L B; Castells, M; Maurer, M; Horny, H-P; Akin, C; Metcalfe, D D; Arock, M

    2014-10-01

    Mastocytosis is an emerging differential diagnosis in patients with more or less specific mediator-related symptoms. In some of these patients, typical skin lesions are found and the diagnosis of mastocytosis can be established. In other cases, however, skin lesions are absent, which represents a diagnostic challenge. In the light of this unmet need, we developed a diagnostic algorithm for patients with suspected mastocytosis. In adult patients with typical lesions of mastocytosis in the skin, a bone marrow (BM) biopsy should be considered, regardless of the basal serum tryptase concentration. In adults without skin lesions who suffer from mediator-related or other typical symptoms, the basal tryptase level is an important parameter. In those with a slightly increased tryptase level, additional investigations, including a sensitive KIT mutation analysis of blood leucocytes or measurement of urinary histamine metabolites, may be helpful. In adult patients in whom (i) KIT D816V is detected and/or (ii) the basal serum tryptase level is clearly increased (>25-30 ng/ml) and/or (iii) other clinical or laboratory features suggest the presence of 'occult' mastocytosis or another haematologic neoplasm, a BM investigation is recommended. In the absence of KIT D816V and other signs or symptoms of mastocytosis or another haematopoietic disease, no BM investigation is required, but the clinical course and tryptase levels are monitored in the follow-up. In paediatric patients, a BM investigation is usually not required, even if the tryptase level is increased. Although validation is required, it can be expected that the algorithm proposed herein will facilitate the management of patients with suspected mastocytosis and help avoid unnecessary referrals and investigations. PMID:24836395
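
    The decision flow above can be condensed into a simplified sketch (thresholds follow the abstract; this is an illustration, not a clinical tool):

        def bone_marrow_investigation(adult, skin_lesions, tryptase_ng_ml,
                                      kit_d816v, other_suspicious_findings):
            """Returns True when a bone-marrow investigation is recommended."""
            if not adult:
                return False        # paediatric: BM usually not required
            if skin_lesions:
                return True         # typical lesions: consider a BM biopsy
            if kit_d816v or tryptase_ng_ml > 30 or other_suspicious_findings:
                return True         # suspected 'occult' mastocytosis
            return False            # monitor course and tryptase in follow-up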

  8. Locally linear manifold model for gap-filling algorithms of hyperspectral imagery: Proposed algorithms and a comparative study

    NASA Astrophysics Data System (ADS)

    Suliman, Suha Ibrahim

    The Landsat 7 Enhanced Thematic Mapper Plus (ETM+) Scan Line Corrector (SLC) device, which corrects for the satellite motion, failed in May 2003, resulting in a loss of about 22% of the data. To improve the reconstruction of Landsat 7 SLC-off images, a Locally Linear Manifold (LLM) model is proposed for filling gaps in hyperspectral imagery. In this approach, each spectral band is modeled as a non-linear, locally affine manifold that can be learned from the matching bands at different time instances. Moreover, each band is divided into small overlapping spatial patches. In particular, each patch is considered to be a linear combination (approximately on an affine space) of a set of corresponding patches from the same location that are adjacent in time or from the same season of the year. Fill patches are selected from Landsat 5 Thematic Mapper (TM) products from the years 1984 through 2011, which have similar spatial and radiometric resolution to Landsat 7 products. Using this approach, the gap-filling process involves finding a feasible point on the learned manifold to approximate the missing pixels. The proposed LLM framework is compared to some existing single-source (Average and Inverse Distance Weight (IDW)) and multi-source (Local Linear Histogram Matching (LLHM) and Adaptive Window Linear Histogram Matching (AWLHM)) gap-filling methodologies. We analyze the effectiveness of the proposed LLM approach through simulation examples with known ground truth. It is shown that the LLM-model-driven approach outperforms all existing recovery methods considered in this study. The superiority of LLM is illustrated by better reconstructed images with higher accuracy, even over heterogeneous landscapes. Moreover, it is relatively simple to realize algorithmically, and it needs much less computing time than the state-of-the-art AWLHM approach.
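
    A minimal sketch of the affine patch model described above: fit weights that sum to one on the valid pixels of a patch, then predict the gap pixels. Array names and the soft sum-to-one constraint are assumptions:

        import numpy as np

        def fill_patch(refs, obs, mask, penalty=1e3):
            """refs: (k, p) reference patches from other dates;
            obs: (p,) SLC-off patch; mask: (p,) True where pixels are valid."""
            A = refs[:, mask].T                      # valid pixels x k design
            y = obs[mask]
            # enforce sum(w) = 1 softly with a heavily weighted extra row
            A_aug = np.vstack([A, penalty * np.ones(refs.shape[0])])
            y_aug = np.append(y, penalty)
            w, *_ = np.linalg.lstsq(A_aug, y_aug, rcond=None)
            filled = obs.copy()
            filled[~mask] = w @ refs[:, ~mask]       # predict the gap pixels
            return filled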

  9. Flap reconstruction of the knee: A review of current concepts and a proposed algorithm

    PubMed Central

    Gravvanis, Andreas; Kyriakopoulos, Antonios; Kateros, Konstantinos; Tsoutsos, Dimosthenis

    2014-01-01

    A literature search focusing on flap knee reconstruction revealed much controversy regarding the optimal management of around the knee defects. Muscle flaps are the preferred option, mainly in infected wounds. Perforator flaps have recently been introduced in knee coverage with significant advantages due to low donor morbidity and long pedicles with wide arc of rotation. In the case of free flap the choice of recipient vessels is the key point to the reconstruction. Taking the published experience into account, a reconstructive algorithm is proposed according to the size and location of the wound, the presence of infection and/or 3-dimensional defect. PMID:25405089

  10. Bilateral Simultaneous Tubal Ectopic Pregnancy: A Case Report, Review of Literature and a Proposed Management Algorithm

    PubMed Central

    Jena, Saubhagya Kumar; Nayak, Monalisha; Das, Leena; Senapati, Swagatika

    2016-01-01

    Bilateral simultaneous Tubal Ectopic Pregnancy (BTP) is the rarest form of ectopic pregnancy. The incidence is higher in women undergoing assisted reproductive techniques or ovulation induction. The clinical presentation is unpredictable and there are no unique features to distinguish it from unilateral ectopic pregnancy. BTP continues to be a clinician’s dilemma as pre-operative diagnosis is difficult and is commonly made during surgery. Treatment options are varied depending on site of ectopic pregnancy, extent of tubal damage and requirement of future fertility. We report a case of BTP which was diagnosed during surgery and propose an algorithm for management of such patients. PMID:27134950

  11. A proposed origin of the Olympus Mons escarpment. [Martian volcanic feature

    NASA Technical Reports Server (NTRS)

    King, J. S.; Riehle, J. R.

    1974-01-01

    Olympus Mons (Nix Olympica) on Mars is delimited by a unique steep, nearly circular scarp. A pyroclastic model is proposed for the construct's origin. It is postulated that the Olympus Mons plateau is constructed predominantly of numerous ash-flow tuffs which were erupted from central sources over an extended period of time. Lava flows may be intercalated with the tuffs. A schematic radial profile incorporating the inferred compaction zones for an ash sheet is proposed. Following emplacement, eolian (and possibly fluvial) erosion and abrasion during dust storms would act on the ash sheets. Interior portions of the sheets would spall and slump following eolian erosion, generating steep, relatively smooth boundary scarps. The scarp would be circular due to symmetrical distribution of compaction zones. The model implies further that the Olympus Mons plateau rests on a more resistant rock substrate.

  12. 75 FR 39613 - Request for Proposals To Accelerate Tariff Elimination and Modify the Rules of Origin Under the...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-07-09

    ... Origin Under the United States-Chile Free Trade Agreement AGENCY: Office of the United States Trade... changes to the rules of origin under the United States-Chile Free Trade Agreement (``the Agreement'' or... consider for liberalizing the USCFTA's rules of origin. DATES: Proposals must be submitted to USTR no...

  13. A Proposed Implementation of Tarjan's Algorithm for Scheduling the Solution Sequence of Systems of Federated Models

    SciTech Connect

    McNunn, Gabriel S; Bryden, Kenneth M

    2013-01-01

    Tarjan's algorithm schedules the solution of systems of equations by noting the coupling and grouping between the equations. Simulating complex systems, e.g., advanced power plants, aerodynamic systems, or the multi-scale design of components, requires the linkage of large groups of coupled models. Currently, this is handled manually in systems modeling packages. That is, the analyst explicitly defines both the method and solution sequence necessary to couple the models. In small systems of models and equations this works well. However, as additional detail is needed across systems and across scales, the number of models grows rapidly. This precludes the manual assembly of large systems of federated models, particularly in systems composed of high fidelity models. This paper examines extending Tarjan's algorithm from sets of equations to sets of models. The proposed implementation of the algorithm is demonstrated using a small one-dimensional system of federated models representing the heat transfer and thermal stress in a gas turbine blade with thermal barrier coating. Enabling the rapid assembly and substitution of different models permits the rapid turnaround needed to support the “what-if” kinds of questions that arise in engineering design.
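
    A compact version of Tarjan's strongly-connected-components algorithm applied to a model-dependency graph, in the spirit of the paper's proposal; each SCC is a group of mutually coupled models that must be iterated together, and the emission order is a valid solution sequence:

        def tarjan_schedule(deps):
            """deps: {model: [models it depends on]}.  Returns SCCs with
            dependencies first, so the groups can be solved in order."""
            index, low, on_stack = {}, {}, set()
            stack, order, counter = [], [], [0]

            def strongconnect(v):
                index[v] = low[v] = counter[0]; counter[0] += 1
                stack.append(v); on_stack.add(v)
                for w in deps.get(v, []):
                    if w not in index:
                        strongconnect(w)
                        low[v] = min(low[v], low[w])
                    elif w in on_stack:
                        low[v] = min(low[v], index[w])
                if low[v] == index[v]:       # v roots an SCC: pop it off
                    scc = []
                    while True:
                        w = stack.pop(); on_stack.discard(w); scc.append(w)
                        if w == v:
                            break
                    order.append(scc)

            for v in list(deps):
                if v not in index:
                    strongconnect(v)
            return order

    With a hypothetical dependency set for the blade example, deps = {'stress': ['heat'], 'heat': ['stress'], 'coating': ['heat']} yields [['heat', 'stress'], ['coating']]: the mutually coupled heat-transfer and thermal-stress models form one group to be iterated together, after which the coating model can be solved.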

  14. Surgery in extensive vertebral hemangioma: case report, literature review and a new algorithm proposal.

    PubMed

    Tarantino, Roberto; Donnarumma, Pasquale; Nigro, Lorenzo; Delfini, Roberto

    2015-07-01

    Hemangiomas are benign dysplasias or vascular tumors consisting of vascular spaces lined with endothelium. Nowadays, radiotherapy for vertebral hemangiomas (VHs) is widely accepted as the primary treatment for painful lesions. Nevertheless, the role of surgery is still unclear. The purpose of this study is to propose a novel treatment algorithm for VHs. This is a case report of an extensive VH and a review of the literature. We report a case of vertebral fracture occurring during radiotherapy for an extensive VH at a total dose of 30 Gy given in 10 fractions (treatment time 2 weeks) using a linear accelerator at 15 MV high-energy photons. A review of the literature was done using the PubMed database. The authors have no study funding sources. The authors have no conflicting financial interests. In the literature, good results in terms of pain and neurological deficits are reported, and no cases of vertebral fracture are described. However, there is no consensus regarding the treatment for VHs. Radiotherapy is widely utilized for VHs causing pain. Surgery for VHs causing neurological deficit is also widely accepted. Regarding the extent of the lesion, however, no indications are given. We consider it important to evaluate the risk of pathologic vertebral fracture before initiating treatment, since there is no consensus in radiotherapy regarding the structural changes induced in VHs. We propose a new treatment algorithm. We recommend radiotherapy only for small lesions in which vertebral stability is not compromised. Kyphoplasty can be proposed for asymptomatic patients with small VHs, and for patients with small VHs causing pain without spinal canal invasion. For patients with pain whose VH is wide or invades the spinal canal, and for patients with neurological deficits, we propose surgery. PMID:25720346

  15. Preliminary field trial of a putative research algorithm for diagnosing ICD-11 personality disorders in psychiatric patients: 2. Proposed trait domains.

    PubMed

    Kim, Youl-Ri; Tyrer, Peter; Lee, Hong-Seock; Kim, Sung-Gon; Hwang, Soon-Taek; Lee, Gi Young; Mulder, Roger

    2015-11-01

    This field trial examines the discriminant validity of five trait domains of the originally proposed research algorithm for diagnosing International Classification of Diseases (ICD)-11 personality disorders. The trial was carried out in South Korea, where a total of 124 patients with personality disorder participated in the study. Participants were assessed using the originally proposed monothetic trait domains of the asocial-schizoid, antisocial-dissocial, anxious-dependent, emotionally unstable and anankastic-obsessional groups of the research algorithm in ICD-11. Their assessments were compared to those from the Personality Assessment Schedule interview and the five-factor model (FFM). A total of 48.4% of patients were found to have pathology in two or more domains. In the discriminant analysis, 64.2% of the grouped cases of the originally proposed ICD-11 domains were correctly classified by the five domain categories using the Personality Assessment Schedule, with the highest accuracy in the anankastic-obsessional domain and the lowest accuracy in the emotionally unstable domain. In comparison, the asocial-schizoid, anxious-dependent and emotionally unstable domains were moderately correlated with the FFM, whereas the anankastic-obsessional and antisocial-dissocial domains were not significantly correlated with the FFM. In this field trial, we demonstrated the limited discriminant and convergent validity of the originally proposed trait domains of the research algorithm for diagnosing ICD-11 personality disorder. The results suggest that the anankastic, asocial and dissocial domains show good discrimination, whereas the anxious-dependent and emotionally unstable ones overlap too much and have been subsequently revised. PMID:26472077

  16. Carbonado: Physical and chemical properties, a critical evaluation of proposed origins, and a revised genetic model

    NASA Astrophysics Data System (ADS)

    Haggerty, Stephen E.

    2014-03-01

    Carbonado-diamond is the most controversial of all diamond types and is found only in Brazil and the Central African Republic (Bangui). Neither an affinity to Earth's mantle, nor an origin in the crust, can be unequivocally established. Carbonado-diamond is at least 3.8 Ga old, an age about 0.5 Ga older than the oldest diamonds yet reported in kimberlites and lamproites on Earth. Derived from Neo- to Mid-Proterozoic meta-conglomerates, the primary magmatic host rock has not been identified. Discovered in 1841, the material is polycrystalline, robust and coke-like, and is best described as a strongly bonded micro-diamond ceramic. It is characteristically porous, which precludes an origin at high pressures and high temperatures in Earth's deep interior, yet it is also typically patinated, with a glass-like surface that resembles melting. With exotic inclusions of highly reduced metals, carbides, and nitrides, the origin of carbonado-diamond is made even more challenging. But the challenge is important because a new diamondiferous host rock may be involved, and the development of a new physical process for generating diamond is possibly assured. The combination of micro-crystals and random crystal orientation leads to extreme mechanical toughness and a predictable super-hardness. The physical and chemical properties of carbonado are described with a view to the development of a mimetic strategy to synthesize carbonado and to duplicate its extreme toughness and super-hardness. Textural variations are described with an emphasis on melt-like surface features, not previously discussed in the literature, but having a very clear bearing on the history and genesis of carbonado. Selected physical properties are presented and the proposed origins, diverse in character and imaginatively novel, are critically reviewed. From our present knowledge of the dynamic Earth, all indications are that carbonado is unlikely to be of terrestrial origin. A revised model for the origin of carbonado is proposed.

  17. Pandora - Discovering the origin of the moons of Mars (a proposed Discovery mission)

    NASA Astrophysics Data System (ADS)

    Raymond, C. A.; Diniega, S.; Prettyman, T. H.

    2015-12-01

    After decades of intensive exploration of Mars, fundamental questions about the origin and evolution of the martian moons, Phobos and Deimos, remain unanswered. Their spectral characteristics are similar to C- or D-class asteroids, suggesting that they may have originated in the asteroid belt or outer solar system. Perhaps these ancient objects were captured separately, or maybe they are the fragments of a captured asteroid disrupted by impact. Various lines of evidence hint at other possibilities: one alternative is co-formation with Mars, in which case the moons contain primitive martian materials. Another is that they are re-accreted ejecta from a giant impact and contain material from the early martian crust. The Pandora mission, proposed in response to the 2014 NASA Discovery Announcement of Opportunity, will acquire new information needed to determine the provenance of the moons of Mars. Pandora will travel to and successively orbit Phobos and Deimos to map their chemical and mineral composition and further refine their shape and gravity. Geochemical data, acquired by nuclear and infrared spectroscopy, can distinguish between key origin hypotheses. High resolution imaging data will enable detailed geologic mapping and crater counting to determine the timing of major events and stratigraphy. Data acquired will be used to determine the nature of and relationship between "red" and "blue" units on Phobos, and determine how Phobos and Deimos are related. After identifying material representative of each moon's bulk composition, analysis of the mineralogical and elemental composition of this material will allow discrimination between the formation hypotheses for each moon. The information acquired by Pandora can then be compared with similar data sets for other solar system bodies and from meteorite studies. Understanding the formation of the martian moons within this larger context will yield a better understanding of processes acting in the early solar system.

  18. [Proposal for standardized authors' name citing in original plant Latin name listed in the Chinese Pharmacopoeia].

    PubMed

    Qin, Min-Jian; Tian, Mei

    2014-05-01

    In 2010, the Chinese Pharmacopoeia Commission officially enacted the Chinese Pharmacopoeia (2010 edition). Volume 1 of the pharmacopoeia comprises medicinal materials and decoction pieces, essential oils and extracts of medicinal plants, prescription preparations and single preparations, etc., and provides not only the Latin names of Chinese medicinal materials but also the Latin names of the original medicinal plants, in order to effectively control the quality of Chinese medicinal materials. To raise awareness of correct citation and to maintain the authority and standardization of the Chinese Pharmacopoeia, this paper briefly describes the abbreviation rules for authors' names in plant scientific names according to the International Code of Botanical Nomenclature (ICBN). By comparison with the rules of the ICBN, the Flora of China (Chinese and English editions), and authoritative international plant catalogue databases, the authors analyzed the instances of non-standard citation of authors' names in the original plant scientific names recorded in the Chinese Pharmacopoeia (2010 edition), and revision suggestions are proposed. PMID:25095396

  19. Proposal of a Clinical Decision Tree Algorithm Using Factors Associated with Severe Dengue Infection

    PubMed Central

    Hussin, Narwani; Cheah, Wee Kooi; Ng, Kee Sing; Muninathan, Prema

    2016-01-01

    Background: WHO's new classification in 2009 (dengue with or without warning signs, and severe dengue) has necessitated large numbers of hospital admissions of dengue patients, which in turn has imposed a huge economic and physical burden on many hospitals around the globe, particularly in South East Asia and Malaysia, where the disease has seen a rapid surge in numbers in recent years. The lack of a simple tool to differentiate mild from life-threatening infection has led to unnecessary hospitalization of dengue patients. Methods: We conducted a single-centre, retrospective study involving serologically confirmed dengue fever patients admitted to a single ward in Hospital Kuala Lumpur, Malaysia. Data were collected for 4 months from February to May 2014. Socio-demography, co-morbidity, days of illness before admission, symptoms, warning signs, vital signs and laboratory results were all recorded. Descriptive statistics were tabulated, and simple and multiple logistic regression analyses were done to determine significant risk factors associated with severe dengue. Results: 657 patients with confirmed dengue were analysed, of whom 59 (9.0%) had severe dengue. Overall, the commonest warning signs were vomiting (36.1%) and abdominal pain (32.1%). Previous co-morbidity, vomiting, diarrhoea, pleural effusion, low systolic blood pressure, high haematocrit, low albumin and high urea were found to be significant risk factors for severe dengue using simple logistic regression. However, the significant risk factors for severe dengue with multiple logistic regression were only vomiting, pleural effusion, and low systolic blood pressure. Using those 3 risk factors, we plotted an algorithm for predicting severe dengue. When compared to the classification of severe dengue based on the WHO criteria, the decision tree algorithm had a sensitivity of 0.81, specificity of 0.54, positive predictive value of 0.16 and negative predictive value of 0.96. Conclusion: The decision tree algorithm proposed here may serve as a simple screening tool to identify patients at risk of severe dengue.
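
    Given only the three retained risk factors, one hedged reading of the tree is an any-factor rule; the published tree's structure and thresholds are not reproduced here, and this choice merely mirrors the reported high sensitivity (0.81) and modest specificity (0.54):

        def predict_severe_dengue(vomiting, pleural_effusion, low_systolic_bp):
            """True flags a patient as at risk of severe dengue."""
            return vomiting or pleural_effusion or low_systolic_bp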

  20. A proposed origin for fossilized Pennsylvanian plant cuticles by pyrite oxidation (Sydney Coalfield, Nova Scotia, Canada)

    USGS Publications Warehouse

    Zodrow, E.L.; Mastalerz, Maria

    2009-01-01

    Fossilized cuticles, though rare in the roof rocks of coal seams in the younger part of the Pennsylvanian Sydney Coalfield, Nova Scotia, represent nearly all of the major plant groups. Selected for investigation, by methods of Fourier transform infrared spectroscopy (FTIR) and elemental analysis, are fossilized cuticles (FCs) and cuticles extracted from compressions by Schulze's process (CCs) of Alethopteris ambigua. These investigations are supplemented by FTIR analysis of FCs and CCs of Cordaites principalis, and a cuticle-fossilized medullosalean(?) axis. The purpose of this study is threefold: (1) to try to determine biochemical discriminators between FCs and CCs of the same species using semi-quantitative FTIR techniques; (2) to assess the effects chemical treatments, particularly Schulze's process, have on functional groups; and most importantly (3) to study the primary origin of FCs. Results are equivocal in respect to (1); (2) after Schulze's treatment aliphatic moieties tend to be reduced relative to oxygenated groups, and some aliphatic chains may be shortened; and (3) a primary chemical model is proposed. The model is based on a variety of geological observations, including stratal distribution, clay and pyrite mineralogies associated with FCs and compressions, and regional geological structure. The model presupposes compression-cuticle fossilization under anoxic conditions for late authigenic deposition of sub-micron-sized pyrite on the compressions. Rock joints subsequently provided conduits for oxygen-enriched ground-water circulation to initiate in situ pyritic oxidation that produced sulfuric acid for macerating compressions, with resultant loss of vitrinite but preservation of cuticles as FCs. The timing of the process remains undetermined, though it is assumed to be late to post-diagenetic. Although FCs represent a pathway of organic matter transformation (pomd) distinct from other plant-fossilization processes, the global applicability of the proposed model remains to be established.

  1. Proposal of Functional-Specialization Multi-Objective Real-Coded Genetic Algorithm: FS-MOGA

    NASA Astrophysics Data System (ADS)

    Hamada, Naoki; Tanaka, Masaharu; Sakuma, Jun; Kobayashi, Shigenobu; Ono, Isao

    This paper presents a Genetic Algorithm (GA) for multi-objective function optimization. To find a precise and widely-distributed set of solutions in difficult multi-objective function optimization problems which have multimodality and a curved Pareto-optimal set, a GA is required to exhibit conflicting behaviors in the early stage and the last stage of search. That is, in the early stage of search, the GA should perform local-Pareto-optima-overcoming search, which aims to overcome local Pareto-optima and converge the population to promising areas in the decision variable space. On the other hand, in the last stage of search, the GA should perform Pareto-frontier-covering search, which aims to spread the population along the Pareto-optimal set. NSGA-II and SPEA2, the most widely used conventional methods, have problems in both local-Pareto-optima-overcoming and Pareto-frontier-covering search. In local-Pareto-optima-overcoming search, their selection pressure is too high to maintain the diversity needed for overcoming local Pareto-optima. In Pareto-frontier-covering search, their abilities of extrapolation-directed sampling are not sufficient to spread the population, and they cannot sample along the Pareto-optimal set properly. To resolve the above problems, the proposed method adaptively switches between two search strategies, each specialized for local-Pareto-optima-overcoming and Pareto-frontier-covering search, respectively. We examine the effectiveness of the proposed method using two benchmark problems. The experimental results show that our approach outperforms the conventional methods in terms of both local-Pareto-optima-overcoming and Pareto-frontier-covering search.

  2. When and why should mentally ill prisoners be transferred to secure hospitals: a proposed algorithm.

    PubMed

    Vogel, Tobias; Lanquillon, Stefan; Graf, Marc

    2013-01-01

    For reasons well known and researched in detail, worldwide prevalence rates for mental disorders are much higher in prison populations than in general, not only for sentenced prisoners but also for prisoners on remand, asylum seekers on warrant for deportation and others. Moreover, the proportion of imprisoned individuals is rising in most countries. Therefore forensic psychiatry must deal not only with the typically young criminal population, vulnerable to mental illness due to social stress and at an age when rates of schizophrenia, suicide, drug abuse and most personality disorders are highest, but also with an increasingly older population with age-related diseases such as dementia. While treatment standards for these mental disorders are largely published and accepted, and scientific evidence as to screening prisoners for mental illness is growing, where to treat them is dependent on considerations for public safety and local conditions such as national legislation, special regulations and the availability of treatment facilities (e.g., in prisons, in special medical wards within prisons or in secure hospitals). While from a medical point of view a mentally ill prisoner should be treated in a hospital, the ultimate decision must consider these different issues. In this article the authors propose an algorithm comprising screening procedures for mental health and a treatment chain for mentally ill prisoners based on treatment facilities in prison, medical safety, human rights, ethics, and the availability of services at this interface between prison and medicine. PMID:23706656

  3. Surgical Management of Early Endometrial Cancer: An Update and Proposal of a Therapeutic Algorithm

    PubMed Central

    Falcone, Francesca; Balbi, Giancarlo; Di Martino, Luca; Grauso, Flavio; Salzillo, Maria Elena; Messalli, Enrico Michelino

    2014-01-01

    In the last few years technical improvements have produced a dramatic shift from traditional open surgery towards a minimally invasive approach for the management of early endometrial cancer. Advancement in minimally invasive surgical approaches has allowed extensive staging procedures to be performed with significantly reduced patient morbidity. Debate is ongoing regarding the choice of the minimally invasive approach that offers the most effective benefit for the patients, the surgeon, and the healthcare system as a whole. Surgical treatment of women with presumed early endometrial cancer should take into account the features of the endometrial disease and the general surgical risk of the patient. Women with endometrial cancer are often elderly and obese, with cardiovascular and metabolic comorbidities that increase the risk of peri-operative complications, so it is important to tailor the extent and radicality of surgery in order to decrease the morbidity and mortality potentially deriving from unnecessary procedures. In this regard, women with negative nodes derive no benefit from unnecessary lymphadenectomy, but may develop short- and long-term morbidity related to this procedure. Preoperative and intraoperative techniques could be critical tools for tailoring the extent and radicality of surgery in the management of women with presumed early endometrial cancer. In this review we discuss updates in the surgical management of early endometrial cancer and the role of preoperative and intraoperative evaluation of lymph node status in influencing surgical options, with the aim of proposing a management algorithm based on the literature and our experience. PMID:25063051

  4. CDRD and PNPR satellite passive microwave precipitation retrieval algorithms: EuroTRMM/EURAINSAT origins and H-SAF operations

    NASA Astrophysics Data System (ADS)

    Mugnai, A.; Smith, E. A.; Tripoli, G. J.; Bizzarri, B.; Casella, D.; Dietrich, S.; Di Paola, F.; Panegrossi, G.; Sanò, P.

    2013-04-01

    including a few examples of their performance. This aspect of the development of the two algorithms is placed in the context of what we refer to as the TRMM era, which is the era denoting the active and ongoing period of the Tropical Rainfall Measuring Mission (TRMM) that helped inspire their original development. In 2015, the ISAC-Rome precipitation algorithms will undergo a transformation beginning with the upcoming Global Precipitation Measurement (GPM) mission, particularly the GPM Core Satellite technologies. A few years afterward, the first pair of imaging and sounding Meteosat Third Generation (MTG) satellites will be launched, providing additional technological advances. Several of the opportunities presented by the GPM Core and MTG satellites for improving the current CDRD and PNPR precipitation retrieval algorithms, as well as for extending their product capability, are discussed.

  5. To Propose a Reviewer Dispatching Algorithm for Networked Peer Assessment System.

    ERIC Educational Resources Information Center

    Liu, Eric Zhi-Feng

    2005-01-01

    Despite their increasing availability on the Internet, networked peer assessment systems (1-5) lack a feasible algorithm for automatically dispatching students' assignments, which ultimately inhibits the effectiveness of peer assessment. Therefore, this study presents a reviewer dispatching algorithm capable of supporting networked peer assessment systems in…

  6. Development of an Evidence-Based Clinical Algorithm for Practice in Hypotonia Assessment: A Proposal

    PubMed Central

    2014-01-01

    Background Assessing muscle tone in children is an essential part of the neurological assessment and is often critical to a more accurate diagnosis and appropriate management. While there have been advances in child neurology, much contention remains around the subjectivity of the clinical assessment of hypotonia, which is often the first step in the diagnostic process. Objective In response to this challenge, the objective of the study is to develop and validate a prototype of a decision-making process, in the form of a clinical algorithm, that will guide clinicians during this assessment. Methods Design research within a pragmatic stance will be employed in this study, proceeding through multiple phases of assessment, prototyping and evaluation. These will include a systematic review, processes of reflection and action, and validation methods. Given the mixed-methods nature of this study, NVIVO or ATLAS-ti will be used in the analysis of qualitative data and SPSS for quantitative data. Results Initial results from the systematic review revealed a paucity of scientific literature documenting the objective assessment of hypotonia in children. The review identified the need for more studies with greater methodological rigor in order to determine best practice with respect to the methods used in the assessment of low muscle tone in the paediatric population. Conclusions It is envisaged that this proposal will contribute to a more accurate clinical diagnosis of children with low muscle tone in the absence of a gold standard. We anticipate that the use of this tool will ultimately assist clinicians in moving towards evidence-based practice whilst upholding best practice in the care of children with hypotonia. PMID:25485571

  7. A proposed adaptive step size perturbation and observation maximum power point tracking algorithm based on photovoltaic system modeling

    NASA Astrophysics Data System (ADS)

    Huang, Yu

    Solar energy has become one of the major renewable energy options owing to its abundance and accessibility. Because of the intermittent nature of sunlight, Maximum Power Point Tracking (MPPT) techniques are in high demand when a Photovoltaic (PV) system is used to extract energy from it. This thesis proposes an advanced Perturbation and Observation (P&O) algorithm aimed at realistic operating circumstances. Firstly, a practical PV system model is studied, determining the series and shunt resistances that are neglected in some research. Moreover, in the proposed algorithm, the duty ratio of a boost DC-DC converter is the perturbed variable, exploiting input-impedance conversion to adjust the operating voltage. Based on this control strategy, an adaptive duty-ratio step size P&O algorithm is proposed, with major modifications for sharp insolation changes as well as low-insolation scenarios. Matlab/Simulink simulations of the PV model, the boost converter control strategy and the various MPPT processes are conducted step by step. The proposed adaptive P&O algorithm is validated by the simulation results and a detailed analysis of sharp insolation changes, low-insolation conditions and continuous insolation variation.
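
    A minimal sketch of duty-ratio-based P&O hill climbing with an adaptive step size follows. The power-proportional step scaling is an assumption chosen for illustration; the thesis's exact adaptation rules (e.g. for sharp insolation changes) are not reproduced:

```python
def adaptive_po(duty, p, prev_p, last_dir,
                base_step=0.01, gain=0.5, d_min=0.05, d_max=0.95):
    """One P&O update of a boost converter duty ratio.

    p, prev_p: PV power at the current and previous sample.
    last_dir:  +1 or -1, direction of the previous perturbation.
    Step size shrinks near the MPP because |dP| shrinks (assumed rule).
    """
    dp = p - prev_p
    direction = last_dir if dp >= 0 else -last_dir   # keep or reverse course
    step = min(base_step, gain * abs(dp) / max(p, 1e-9))
    duty = min(d_max, max(d_min, duty + direction * step))
    return duty, direction
```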

  8. Fossil evidence for the origin of spider spinnerets, and a proposed arachnid order

    PubMed Central

    Selden, Paul A.; Shear, William A.; Sutton, Mark D.

    2008-01-01

    Silk production from opisthosomal glands is a defining characteristic of spiders (Araneae). Silk emerges from spigots (modified setae) borne on spinnerets (modified appendages). Spigots from Attercopus fimbriunguis, from Middle Devonian (386 Ma) strata of Gilboa, New York, were described in 1989 as evidence for the oldest spider and the first use of silk by animals. Slightly younger (374 Ma) material from South Mountain, New York, conspecific with A. fimbriunguis, includes spigots and other evidence that elucidate the evolution of early Araneae and the origin of spider silk. No known Attercopus spigots, including the original specimen, occur on true spinnerets; instead they are arranged along the edges of plates. Spinnerets originated from biramous appendages of opisthosomal somites 4 and 5; although present in Limulus, no other arachnids have opisthosomal appendage homologues on these segments. The spigot arrangement in Attercopus shows a primitive state before the reexpression of the dormant genetic mechanism that gave rise to spinnerets in later spiders. Enigmatic flagellar structures, originally described as Arachnida incertae sedis, are shown to be Attercopus anal flagella, as found in Permarachne, also originally described as a spider. An arachnid order, Uraraneida, is erected for a plesion including these two genera, based on this combination of characters. The inability of Uraraneida to precisely control silk weaving suggests its original use as a wrapping, lining, or homing material. PMID:19104044

  9. Origins.

    ERIC Educational Resources Information Center

    Online-Offline, 1999

    1999-01-01

    Provides an annotated list of resources dealing with the theme of origins of life, the universe, and traditions. Includes Web sites, videos, books, audio materials, and magazines with appropriate grade levels and/or subject disciplines indicated; professional resources; and learning activities. (LRW)

  10. The scientific origin of life. Considerations on the evolution of information, leading to an alternative proposal for explaining the origin of the cell, a semantically closed system.

    PubMed

    Vaneechoutte, M

    2000-01-01

    We hypothesize that the origin of life, that is, the origin of the first cell, cannot be explained by natural selection among self-replicating molecules, as is done by the RNA-world hypothesis. To circumvent the chicken-and-egg problem associated with semantic closure of the cell--no replication of information molecules (nucleotide strands) without functional enzymes, no functional enzymes without encoding in information molecules--a prebiotic evolutionary process is proposed that, from the informational point of view, must somehow have resembled the current scientific process. The cell was the outcome of interactions of a complex premetabolic community with information molecules that were devoid of self-replicative properties. In a comparable manner, scientific progress is possible essentially because of interaction between a complex cultural society and permanent information carriers like printed matter. This may eventually lead to self-replicating technology in which semantic closure occurs anew. Explaining the origin of life as a scientific process might provide a unifying theory for the evolution of information, whereby symbolization/encoding of interactions into permanent information occurred at two moments: once for chemical interactions and once for animal behavioral interactions. In one event this encoding led to autonomously duplicating chemistry (the cell), an event that may also be one of the outcomes of current scientific progress. PMID:10818565

  11. A proposed dosing algorithm for the individualized dosing of human immunoglobulin in chronic inflammatory neuropathies.

    PubMed

    Lunn, Michael P; Ellis, Lauren; Hadden, Robert D; Rajabally, Yusuf A; Winer, John B; Reilly, Mary M

    2016-03-01

    Dosing guidelines for immunoglobulin (Ig) treatment in neurological disorders do not consider variations in Ig half-life or between patients. Individualization of therapy could optimize clinical outcomes and help control costs. We developed an algorithm to optimize Ig dose based on patient's response and present this here as an example of how dosing might be individualized in a pharmacokinetically rational way and how this achieves potential dose and cost savings. Patients are "normalized" with no more than two initial doses of 2 g/kg, identifying responders. A third dose is not administered until the patient's condition deteriorates, allowing a "dose interval" to be set. The dose is then reduced until relapse allowing dose optimization. Using this algorithm, we have individualized Ig doses for 71 chronic inflammatory neuropathy patients. The majority of patients had chronic inflammatory demyelinating polyradiculoneuropathy (n = 39) or multifocal motor neuropathy (n = 24). The mean (standard deviation) dose of Ig administered was 1.4 (0.6) g/kg, with a mean dosing interval of 4.3 weeks (median 4 weeks, range 0.5-10). Use of our standardized algorithm has allowed us to quickly optimize Ig dosing. PMID:26757367
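
    The stepwise logic of the abstract can be written down compactly. The sketch below is illustrative only, not a clinical tool: 'deteriorated' and 'relapsed' are clinician judgements, and the 20% taper per cycle is an assumed value rather than part of the published algorithm:

```python
def next_ig_dose(doses_given, last_dose, deteriorated, relapsed, taper=0.8):
    """Return the next Ig dose in g/kg (0.0 means withhold and observe)."""
    if doses_given < 2:
        return 2.0                # "normalize" with up to two 2 g/kg doses
    if not deteriorated:
        return 0.0                # withhold the third dose; the time to
                                  # deterioration fixes the dosing interval
    if relapsed:
        return last_dose / taper  # step back up to the last effective dose
    return last_dose * taper      # keep reducing until relapse (optimization)
```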

  12. A proposal concerning the origin of life on the planet earth

    NASA Technical Reports Server (NTRS)

    Woese, C. R.

    1979-01-01

    It is proposed that, contrary to the widely accepted Oparin thesis, life on earth arose not in the oceans but in the earth's atmosphere. Difficulties of the Oparin thesis relating to the nonbiological nature of prebiotic evolution are discussed, and autotrophic, photosynthetic cells are proposed as the first living organisms to emerge, thus avoiding these difficulties. Recent developments in the geology of the earth at the time of the emergence of life are interpreted as requiring the absence of liquid surface water, with water partitioned between a molten crust and a dense, CO2-rich atmosphere, similar to the present state of Venus. Biochemistry in such an atmosphere would be primarily membrane chemistry on the interfaces of atmospheric salt water droplets, proceeding at normal temperatures without the absorption of electrical discharges or UV light. Areas not sufficiently accounted for by this scenario include the development of genetic organization and the breaking of the runaway greenhouse condition assumed.

  13. Adrenocortical Stem and Progenitor Cells: Unifying Model of Two Proposed Origins

    PubMed Central

    Wood, Michelle A.; Hammer, Gary D.

    2010-01-01

    The origins of our understanding of the cellular and molecular mechanisms by which signaling pathways and downstream transcription factors coordinate the specification of adrenocortical cells within the adrenal gland have arisen from studies on the role of Sf1 in steroidogenesis and adrenal development initiated 20 years ago in the laboratory of Dr. Keith Parker. Adrenocortical stem/progenitor cells have been predicted to be undifferentiated and quiescent cells that remain at the periphery of the cortex until needed to replenish the organ, at which time they undergo proliferation and terminal differentiation. Identification of these stem/progenitor cells has only recently been explored. Recent efforts have examined signaling molecules, including Wnt, Shh, and Dax1, which may coordinate intricate lineage and signaling relationships between the adrenal capsule (stem cell niche) and underlying cortex (progenitor cell pool) to maintain organ homeostasis in the adrenal gland. PMID:21094677

  14. The origin of the medial circumflex femoral artery: a meta-analysis and proposal of a new classification system.

    PubMed

    Tomaszewski, Krzysztof A; Henry, Brandon M; Vikse, Jens; Roy, Joyeeta; Pękala, Przemysław A; Svensen, Maren; Guay, Daniel L; Saganiak, Karolina; Walocha, Jerzy A

    2016-01-01

    Background and Objectives. The medial circumflex femoral artery (MCFA) is a common branch of the deep femoral artery (DFA) responsible for supplying the femoral head and the greater trochanteric fossa. The prevalence rates of MCFA origin, its branching patterns and its distance to the mid-inguinal point (MIP) vary significantly throughout the literature. The aim of this study was to determine the true prevalence of these characteristics and to study their associated anatomical and clinical relevance. Methods. A search of the major electronic databases Pubmed, EMBASE, Scopus, ScienceDirect, Web of Science, SciELO, BIOSIS, and CNKI was performed to identify all articles reporting data on the origin of the MCFA, its branching patterns and its distance to the MIP. No data or language restriction was set. Additionally, an extensive search of the references of all relevant articles was performed. All data on origin, branching and distance to MIP was extracted and pooled into a meta-analysis using MetaXL v2.0. Results. A total of 38 (36 cadaveric and 2 imaging) studies (n = 4,351 lower limbs) were included into the meta-analysis. The pooled prevalence of the MCFA originating from the DFA was 64.6% (95% CI [58.0-71.5]), while the pooled prevalence of the MCFA originating from the CFA was 32.2% (95% CI [25.9-39.1]). The CFA-derived MCFA was found to originate as a single branch in 81.1% (95% CI [70.1-91.7]) of cases with a mean pooled distance of 50.14 mm (95% CI [42.50-57.78]) from the MIP. Conclusion. The MCFA's variability must be taken into account by surgeons, especially during orthopedic interventions in the region of the hip to prevent iatrogenic injury to the circulation of the femoral head. Based on our analysis, we present a new proposed classification system for origin of the MCFA. PMID:26966661

  15. Fat-constrained 18F-FDG PET reconstruction using Dixon MR imaging and the origin ensemble algorithm

    NASA Astrophysics Data System (ADS)

    Wülker, Christian; Heinzer, Susanne; Börnert, Peter; Renisch, Steffen; Prevrhal, Sven

    2015-03-01

    Combined PET/MR imaging makes it possible to incorporate the high-resolution anatomical information delivered by MRI into the PET reconstruction algorithm to improve PET accuracy beyond standard corrections. We used the working hypothesis that glucose uptake in adipose tissue is low. Thus, our aim was to shift 18F-FDG PET signal into image regions with a low fat content. Dixon MR imaging can be used to generate fat-only images via the water/fat chemical shift difference. On the other hand, the Origin Ensemble (OE) algorithm, a novel Markov chain Monte Carlo method, makes it possible to reconstruct PET data without forward- and back-projection operations. By adequate modifications to the Markov chain transition kernel, it is possible to include anatomical a priori knowledge in the OE algorithm. In this work, we used the OE algorithm to reconstruct PET data of a modified IEC/NEMA Body Phantom simulating body water/fat composition. Reconstruction was performed 1) natively, 2) informed with the Dixon MR fat image to down-weight 18F-FDG signal in fatty tissue compartments in favor of adjacent regions, and 3) informed with the fat image to up-weight 18F-FDG signal in fatty tissue compartments, for control purposes. Image intensity profiles confirmed the visibly improved contrast and reduced partial volume effect at water/fat interfaces. We observed a 17 ± 2% increase in the SNR of hot lesions surrounded by fat, while image quality was almost completely retained in fat-free image regions. An additional in vivo experiment proved the applicability of the presented technique in practice, and again verified the beneficial impact of fat-constrained OE reconstruction on PET image quality.
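
    For intuition, here is a toy origin-ensemble-style sampler with an anatomical prior. It conveys only the flavour of the approach: real OE constrains each event to voxels along its line of response and uses a properly derived transition kernel, whereas the candidate sets and the prior coupling below are simplifying assumptions:

```python
import numpy as np

def origin_ensemble_toy(event_voxels, candidates, prior, sweeps=100, seed=1):
    """Toy OE sampler.

    event_voxels: int array, current origin voxel of each detected event.
    candidates:   list of candidate voxel indices per event (a stand-in
                  for the voxels along the event's line of response).
    prior:        strictly positive weight per voxel, e.g. low in voxels
                  with a high Dixon fat fraction (assumption).
    """
    rng = np.random.default_rng(seed)
    counts = np.bincount(event_voxels, minlength=len(prior)).astype(float)
    for _ in range(sweeps):
        for e in range(len(event_voxels)):
            i = event_voxels[e]
            j = rng.choice(candidates[e])
            if j == i:
                continue
            # favour moves into populated and prior-preferred voxels
            ratio = (counts[j] + 1.0) * prior[j] / (counts[i] * prior[i])
            if rng.random() < min(1.0, ratio):
                counts[i] -= 1.0
                counts[j] += 1.0
                event_voxels[e] = j
    return counts  # voxel event counts approximate the activity image
```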

  16. A proposed origin for palimpsests and anomalous pit craters on Ganymede and Callisto

    NASA Technical Reports Server (NTRS)

    Croft, S. K.

    1983-01-01

    The hypothesis that palimpsests and anomalous pit craters are essentially pristine crater forms derived from high-velocity impacts and/or impacts into an ice crust with preimpact temperatures near melting is explored. The observational data are briefly reviewed, and an impact model is proposed for the direct formation of a palimpsest from an impact when the modification flow which produces the final crater is dominated by 'wet' fluid flow, as opposed to the 'dry' granular flow which produces normal craters. Conditions of 'wet' modification occur when the volume of impact melt remaining in the transient crater becomes comparable to that of the transient crater itself. The normal crater-palimpsest transition is found to occur for sufficiently large impacts or sufficiently fast impactors. The range of crater diameters and morphological characteristics inferred from the impact model is consistent with the observed characteristics of palimpsests and anomalous pit craters.

  17. Comparison of SMOS vegetation optical thickness data with the proposed SMAP algorithm

    NASA Astrophysics Data System (ADS)

    Patton, Jason Carl

    Soil moisture is important to agriculture, weather, and climate. Current soil moisture networks measure at single points, while large spatial averages are needed for some crop, weather, and climate models. Large spatial average soil moisture can be measured by microwave satellites. Two missions, the European Space Agency's Soil Moisture Ocean Salinity mission (SMOS) and NASA's Soil Moisture Active Passive mission (SMAP), can or will measure L-band microwave radiation, which can see through denser vegetation and deeper into the soil than previous missions that used X-band or C-band measurements. Both SMOS and SMAP require knowledge of vegetation optical thickness (tau) to retrieve soil moisture. SMOS is able to measure tau directly through multi-angular measurements. SMAP, which will measure at a single incidence angle, requires an outside source of tau data. The current SMAP baseline algorithm will use a climatology of optical vegetation measurements, the normalized difference vegetation index (NDVI), to estimate tau. SMAP will convert the NDVI climatology to vegetation water content (VWC), then convert VWC to tau through the b parameter. This dissertation first aimed to validate SMOS tau using county crop yield estimates in Iowa. SMOS tau was found to be noisy while still having a clear response to vegetation. Counties with higher yields had larger increases in tau over growing seasons, so it appears that SMOS tau is valid during the growing season. However, SMOS tau showed odd behavior outside of growing seasons, which can be attributed to soil tillage and residue management. Next, this dissertation attempted to estimate values of the b parameter at the satellite scale using SMOS tau data, county crop yields, and allometric relationships, such as harvest index. A new allometric relationship was defined, θgv,max, the ratio of maximum VWC to maximum dry biomass. While uncertainty in the estimated values of b was large, the values were close in magnitude to
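
    The NDVI-to-tau chain described above is a pair of simple conversions. The coefficients below are placeholders for illustration, not SMAP's operational values:

```python
def vwc_from_ndvi(ndvi):
    # illustrative empirical mapping; SMAP uses land-cover-specific relations
    return 1.9134 * ndvi**2 - 0.3215 * ndvi   # kg/m^2 (placeholder form)

def tau_from_vwc(vwc, b=0.12):
    # the b parameter couples vegetation water content to optical depth;
    # 0.12 is a typical L-band crop-like value, used here as an assumption
    return b * vwc

tau = tau_from_vwc(vwc_from_ndvi(0.7))
```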

  18. Methodology to automatically detect abnormal values of vital parameters in anesthesia time-series: Proposal for an adaptable algorithm.

    PubMed

    Lamer, Antoine; Jeanne, Mathieu; Marcilly, Romaric; Kipnis, Eric; Schiro, Jessica; Logier, Régis; Tavernier, Benoît

    2016-06-01

    Abnormal values of vital parameters such as hypotension or tachycardia may occur during anesthesia and may be detected by analyzing time-series data collected during the procedure by the Anesthesia Information Management System. When crossed with other data from the Hospital Information System, abnormal values of vital parameters have been linked with postoperative morbidity and mortality. However, methods for the automatic detection of these events are poorly documented in the literature and differ between studies, making it difficult to reproduce results. In this paper, we propose a methodology for the automatic detection of abnormal values of vital parameters. This methodology uses an algorithm allowing the configuration of threshold values for any vital parameters as well as the management of missing data. Four examples illustrate the application of the algorithm, after which it is applied to three vital signs (heart rate, SpO2, and mean arterial pressure) to all 2014 anesthetic records at our institution. PMID:26817405
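
    A minimal sketch of such threshold-based detection over an anesthesia time series, with per-parameter thresholds and simple handling of missing samples, is given below. The resampling period, gap bridging and duration rules are assumptions, not the published configuration:

```python
import pandas as pd

def detect_abnormal(series, low=None, high=None,
                    period="5s", min_duration="60s", max_gap="30s"):
    """Flag episodes where a vital parameter stays outside [low, high].

    series: pd.Series of measurements with a DatetimeIndex.
    Short runs of missing data inside an episode are bridged (max_gap);
    only episodes lasting at least min_duration are returned.
    """
    s = series.resample(period).mean()                 # regular time grid
    flag = pd.Series(False, index=s.index)
    if low is not None:
        flag |= s < low
    if high is not None:
        flag |= s > high
    # treat missing samples as unknown, then bridge short gaps
    gap_limit = max(1, int(pd.Timedelta(max_gap) / pd.Timedelta(period)))
    flag = (flag.where(s.notna()).ffill(limit=gap_limit)
                .fillna(False).astype(bool))
    # group consecutive abnormal samples into episodes
    group = (flag != flag.shift()).cumsum()
    episodes = []
    for _, chunk in s[flag].groupby(group[flag]):
        start, end = chunk.index[0], chunk.index[-1]
        if end - start >= pd.Timedelta(min_duration):
            episodes.append((start, end))
    return episodes

# e.g. hypotension: detect_abnormal(map_series, low=65)
#      tachycardia: detect_abnormal(hr_series, high=100)
```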

  19. Origin of the human L1 elements: proposed progenitor genes deduced from a consensus DNA sequence.

    PubMed

    Scott, A F; Schmeckpeper, B J; Abdelrazik, M; Comey, C T; O'Hara, B; Rossiter, J P; Cooley, T; Heath, P; Smith, K D; Margolet, L

    1987-10-01

    A consensus sequence for the human long interspersed repeated DNA element, L1Hs (LINE or KpnI sequence), is presented. The sequence contains two open reading frames (ORFs) which are homologous to ORFs in corresponding regions of L1 elements in other species. The L1Hs ORFs are separated by a small evolutionarily nonconserved region. The 5' end of the consensus contains frequent terminators in all three reading frames and has a relatively high GC content with numerous stretches of weak homology with AluI repeats. The 5' ORF extends for a minimum of 723 bp (241 codons). The 3' ORF is 3843 bp (1281 codons) and predicts a protein of 149 kD which has regions of weak homology to the polymerase domain of various reverse transcriptases. The 3' end of the consensus has a 208-bp nonconserved region followed by an adenine-rich end. The organization of the L1Hs consensus sequence resembles the structure of eukaryotic mRNAs except for the noncoding region between ORFs. However, due to base substitutions or truncation most elements appear incapable of producing mRNA that can be translated. Our observation that individual elements cluster into subfamilies on the basis of the presence or absence of blocks of sequence, or by the linkage of alternative bases at multiple positions, suggests that most L1 sequences were derived from a small number of structural genes. An estimate of the mammalian L1 substitution rate was derived and used to predict the age of individual human elements. From this it follows that the majority of human L1 sequences have been generated within the last 30 million years. The human elements studied here differ from each other, yet overall the L1Hs sequences demonstrate a pattern of species-specificity when compared to the L1 families of other mammals. Possible mechanisms that may account for the origin and evolution of the L1 family are discussed. These include pseudogene formation (retroposition), transposition, gene conversion, and RNA recombination. PMID

  20. Analysis of trans-Neptunian objects and a proposed theory to explain their origin

    NASA Astrophysics Data System (ADS)

    Brown, Robert B.; Firth, Jordan A.

    2016-02-01

    Current theories cannot explain how trans-Neptunian objects (TNOs) either formed in situ or how ultrawide trans-Neptunian binaries (TNBs) exist if they were formed closer to the Sun and were later dispersed during Neptune's migration. Furthermore, no theory can adequately explain the documented clustering of ω near 0° for TNOs with a > 150 au. Here, we show that not only is ω clustered for the nine long-period TNOs (LPTNOs) with a > 200 au, but Ω is also grouped almost as closely. Neither of these orbital elements is randomly distributed for any collection of TNOs investigated, including those that are not in resonance with Neptune, those with q > 30 au, q > 44 au, and LPTNOs. Every frequency distribution of ω and Ω indicates that many TNOs were recently affected by Neptune. Based on this study, we propose that TNOs were inside Neptune's orbit in the last few Myr. The TNOs then migrated outwards in a relatively short time period. Ultrawide TNBs never came close to Neptune during this migration, allowing these fragile pairs to remain intact. However, many other TNOs were perturbed as they passed Neptune, resulting in the distribution of orbital elements we see today for all TNOs, including those in the Kuiper belt and the LPTNOs.

  1. Lung ultrasound in the diagnosis of pneumonia in children: proposal for a new diagnostic algorithm

    PubMed Central

    Capasso, Maria; De Luca, Giuseppe; Prisco, Salvatore; Mancusi, Carlo; Laganà, Bruno; Comune, Vincenzo

    2015-01-01

    Background. Despite guideline recommendations, chest radiography (CR) for the diagnosis of community-acquired pneumonia (CAP) in children is commonly used even in mild and/or uncomplicated cases. The aim of this study is to assess the reliability of lung ultrasonography (LUS) as an alternative test in these cases and to suggest a new diagnostic algorithm. Methods. We reviewed the medical records of all patients admitted to the pediatric ward from February 1, 2013 to December 31, 2014 with respiratory signs and symptoms. We selected only cases with a mild/uncomplicated clinical course in which CR and LUS were performed within 24 h of each other. The LUS was not part of the required exams recorded in the medical records but was performed independently. The discharge diagnosis, made only on the basis of history and physical examination and laboratory and instrumental tests, including CR (without LUS), was used as the reference test against which to compare CR and LUS findings. Results. Of the 52 selected medical records, the CAP diagnosis was confirmed in 29 (55.7%). CR was positive in 25 cases, whereas LUS detected pneumonia in 28 cases. Four patients with negative CR had positive ultrasound findings; conversely, one patient with negative LUS had positive radiographic findings. LUS had a sensitivity of 96.5% (95% CI [82.2%–99.9%]), a specificity of 95.6% (95% CI [78.0%–99.9%]), a positive likelihood ratio of 22.2 (95% CI [3.2–151.2]), and a negative likelihood ratio of 0.04 (95% CI [0.01–0.25]) for diagnosing pneumonia. Conclusion. LUS can be considered a valid alternative diagnostic tool for CAP in children, and its use should be promoted as a first approach in accordance with our new diagnostic algorithm. PMID:26587343
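
    The reported test statistics follow directly from the 2×2 counts implied by the reported sensitivity and specificity (29 confirmed CAP cases and 23 without CAP among the 52 records); the arithmetic below reproduces them:

```python
tp, fn = 28, 1               # LUS result among the 29 confirmed CAP cases
tn, fp = 22, 1               # LUS result among the 23 cases without CAP
sens = tp / (tp + fn)        # 0.966 -> reported 96.5%
spec = tn / (tn + fp)        # 0.957 -> reported 95.6%
lr_pos = sens / (1 - spec)   # ~22.2, the reported positive likelihood ratio
lr_neg = (1 - sens) / spec   # ~0.04, the reported negative likelihood ratio
```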

  2. Microcodium: An extensive review and a proposed non-rhizogenic biologically induced origin for its formation

    NASA Astrophysics Data System (ADS)

    Kabanov, Pavel; Anadón, Pere; Krumbein, Wolfgang E.

    2008-04-01

    Microcodium has been previously described as a mainly Cenozoic calcification pattern ascribed to various organisms. A review of the available literature and our data reveal two peaks in Microcodium abundance: the Moscovian-early Permian and the latest Cretaceous-Paleogene. A detailed analysis of late Paleozoic and Cenozoic examples leads to the following new conclusions. Typical Microcodium-forming unilayered 'corn-cob' aggregates of elongated grains and thick multilayered (palisade) replacing structures cannot be linked to smaller-grained intracellular root calcifications, as became widely accepted after the work of Klappa [Klappa, C.F., 1979. Calcified filaments in Quaternary calcretes: organo-mineral interactions in the subaerial vadose environment. J. Sediment. Petrol. 49, 955-968.] Typical Microcodium is recognized from the early Carboniferous (with doubtful Devonian reports) to Quaternary as a biologically induced mineralization formed via dissolution/precipitation processes in various aerobic Ca-rich soil and subsoil terrestrial environments. Morphology and δ13C signatures of Microcodium suggest that neither plants, algae, nor roots and root-associated mycorrhiza regulate the formation of these fossil structures. Non-recrystallized Microcodium grains basically consist of slender (1.5-4 μm) curved radiating monocrystalline prisms with occasionally preserved hyphae-like morphology. Thin (0.5-3 μm) hypha-like canals can also be observed. These supposed hyphae may belong to actinobacteria. However, thin fungal mycelia cannot be excluded. We propose a model of Microcodium formation involving a mycelial saprotrophic organism responsible for substrate corrosion and associated bacteria capable of consuming acidic metabolites and reprecipitating CaCO3 into the Microcodium structures.

  3. [Adequacy of clinical interventions in patients with advanced and complex disease. Proposal of a decision making algorithm].

    PubMed

    Ameneiros-Lago, E; Carballada-Rico, C; Garrido-Sanjuán, J A; García Martínez, A

    2015-01-01

    Decision making in the patient with chronic advanced disease is especially complex. Health professionals are obliged to prevent avoidable suffering and not to add any more damage to that of the disease itself. The adequacy of the clinical interventions consists of only offering those diagnostic and therapeutic procedures appropriate to the clinical situation of the patient and to perform only those allowed by the patient or representative. In this article, the use of an algorithm is proposed that should serve to help health professionals in this decision making process. PMID:25666087

  4. The origin of the medial circumflex femoral artery: a meta-analysis and proposal of a new classification system

    PubMed Central

    Henry, Brandon M.; Vikse, Jens; Roy, Joyeeta; Pękala, Przemysław A.; Svensen, Maren; Guay, Daniel L.; Saganiak, Karolina; Walocha, Jerzy A.

    2016-01-01

    Background and Objectives. The medial circumflex femoral artery (MCFA) is a common branch of the deep femoral artery (DFA) responsible for supplying the femoral head and the greater trochanteric fossa. The prevalence rates of MCFA origin, its branching patterns and its distance to the mid-inguinal point (MIP) vary significantly throughout the literature. The aim of this study was to determine the true prevalence of these characteristics and to study their associated anatomical and clinical relevance. Methods. A search of the major electronic databases Pubmed, EMBASE, Scopus, ScienceDirect, Web of Science, SciELO, BIOSIS, and CNKI was performed to identify all articles reporting data on the origin of the MCFA, its branching patterns and its distance to the MIP. No data or language restriction was set. Additionally, an extensive search of the references of all relevant articles was performed. All data on origin, branching and distance to MIP was extracted and pooled into a meta-analysis using MetaXL v2.0. Results. A total of 38 (36 cadaveric and 2 imaging) studies (n = 4,351 lower limbs) were included into the meta-analysis. The pooled prevalence of the MCFA originating from the DFA was 64.6% (95% CI [58.0–71.5]), while the pooled prevalence of the MCFA originating from the CFA was 32.2% (95% CI [25.9–39.1]). The CFA-derived MCFA was found to originate as a single branch in 81.1% (95% CI [70.1–91.7]) of cases with a mean pooled distance of 50.14 mm (95% CI [42.50–57.78]) from the MIP. Conclusion. The MCFA’s variability must be taken into account by surgeons, especially during orthopedic interventions in the region of the hip to prevent iatrogenic injury to the circulation of the femoral head. Based on our analysis, we present a new proposed classification system for origin of the MCFA. PMID:26966661

  5. Non-invasive diagnosis in a case of bronchopulmonary sequestration and proposal of diagnostic algorithm.

    PubMed

    Caradonna, P; Bellia, M; Cannizzaro, F; Regio, S; Midiri, M; Bellia, V

    2008-09-01

    The case of a 43-year-old woman with intralobar pulmonary sequestration, Pryce type one, is presented. The medical history was characterised by recurrent bronchopneumonia, productive cough with purulent sputum and hemoptysis over the last three years. Diagnosis was made by CT angiography: multiplanar, maximum intensity projection and volume rendering reconstructions were visualised. A volume reduction of the middle and lower lobes with multiple cyst-like bronchiectases was detected, and no evident relationship with the tracheobronchial tree was identified. Reconstructions aimed at evaluating bronchial structures demonstrated no patency of the middle and lower lobar bronchi. The study carried out after contrast medium infusion in the arterial phase showed a vascular anomaly characterised by an accessory arterial branch arising from the upper portion of the thoracic aorta which, after descending caudally to the pulmonary hilus with a tortuous course, supplied the atelectatic parenchyma. No anomalous venous drainage was detected. The patient underwent surgery with resection of two pulmonary lobes. CT compares favourably with alternative imaging techniques for pulmonary sequestration, as multiplanar reconstructions allow not only the detection of the supplying vessel, but also the accurate description of the heterogeneous characteristics of the mass and adjacent structures. Finally, an imaging-based diagnostic algorithm is proposed. PMID:19065849

  6. Notes on quantitative structure-property relationships (QSPR), part 3: density functions origin shift as a source of quantum QSPR algorithms in molecular spaces.

    PubMed

    Carbó-Dorca, Ramon

    2013-04-01

    A general algorithm implementing a useful variant of quantum quantitative structure-property relationships (QQSPR) theory is described. Based on the quantum similarity framework and previous theoretical developments on the subject, the present QQSPR procedure relies on the possibility of performing geometrical origin shifts over molecular density function sets. In this way, molecular collections attached to known properties can be easily used over other quantum mechanically well-described molecular structures for the estimation of their unknown property values. The proposed procedure takes the quantum mechanical expectation value as the provider of a causal relation background and overcomes the dimensionality paradox, which haunts classical descriptor-space QSPR. Also, contrary to classical procedures, which are attached to heavy statistical machinery, the present QQSPR approach might use a geometrical assessment only, just a simple statistical outline, or both. From an applied point of view, several easily reachable computational levels can be set up. A Fortran 95 program, QQSPR-n, is described in two versions, which may be downloaded from a dedicated web site. Various practical examples are provided, yielding excellent results. Finally, it is also shown that an equivalent molecular-space classical QSPR formalism can be easily developed. PMID:23238931

  7. Chlorophyll pigment concentration using spectral curvature algorithms - An evaluation of present and proposed satellite ocean color sensor bands

    NASA Technical Reports Server (NTRS)

    Hoge, Frank E.; Swift, Robert N.

    1986-01-01

    During the past several years, the symmetric three-band (460-, 490-, 520-nm) spectral curvature algorithm (SCA) has demonstrated rather accurate determination of chlorophyll pigment concentration from low-altitude airborne ocean color data. It is shown herein that the in-water asymmetric SCA, when applied to certain recently proposed OCI (NOAA-K and SPOT-3) and OCM (ERS-1) satellite ocean color bands, can adequately recover chlorophyll-like pigments. These airborne findings suggest that the proposed new ocean color sensor bands are in general satisfactorily, but not necessarily optimally, positioned to allow space evaluation of the SCA using high-precision atmospherically corrected satellite radiances. The pigment concentration recovery is not as good when existing Coastal Zone Color Scanner bands are used in the SCA. The in-water asymmetric SCA chlorophyll pigment recovery evaluations were performed using (1) airborne laser-induced chlorophyll fluorescence and (2) concurrent passive upwelled radiances. Data from a separate ocean color sensor aboard the aircraft were further used to validate the findings.
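
    For readers unfamiliar with curvature algorithms, the three-band form is the ratio of the middle-band radiance squared to the product of its neighbours, with pigment concentration then regressed against the curvature. The calibration coefficients below are placeholders, not values from the paper:

```python
import numpy as np

def spectral_curvature(L460, L490, L520):
    """Three-band spectral curvature G = L490**2 / (L460 * L520)."""
    return L490**2 / (L460 * L520)

# hypothetical log-log calibration: log10(C) = a + b * log10(G)
a, b = 0.0, -5.0   # placeholder regression coefficients

def pigment_concentration(L460, L490, L520):
    return 10.0 ** (a + b * np.log10(spectral_curvature(L460, L490, L520)))
```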

  8. Detecting nocturnal hypertension in Parkinson's disease and multiple system atrophy: proposal of a decision-support algorithm.

    PubMed

    Fanciulli, Alessandra; Strano, Stefano; Ndayisaba, Jean Pierre; Goebel, Georg; Gioffrè, Laura; Rizzo, Massimiliano; Colosimo, Carlo; Caltagirone, Carlo; Poewe, Werner; Wenning, Gregor K; Pontieri, Francesco E

    2014-07-01

    A pathological nocturnal blood pressure (BP) profile, either non-dipping or reverse dipping, occurs in more than 50% of subjects diagnosed with multiple system atrophy (MSA) or Parkinson's disease (PD). This may play a negative prognostic role in α-synucleinopathies but, being mostly asymptomatic, remains largely underdiagnosed. In this proof-of-concept study, we aimed at developing a decision-support algorithm to predict pathological nocturnal BP profiles during a standard tilt-table examination in PD and MSA. Sixteen MSA and 16 PD patients underwent standard tilt-table examination and 24-h ambulatory BP monitoring (24-h ABPM). Clinical and tilt-test differences between patients with a normal and a pathological nocturnal BP profile at 24-h ABPM were assessed, and a decision-support algorithm was developed accordingly. 75% of MSA and 31% of PD patients showed a pathological nocturnal BP profile. This was associated with a more pronounced orthostatic BP drop (p = 0.03), the joint occurrence of orthostatic hypotension and supine hypertension (p = 0.046), and a lack of BP overshoot in the late phase II (II_L, p = 0.002) and in phase IV (p = 0.007) of the Valsalva manoeuvre. A combined ΔBP of ≤0.5 mmHg in phase II_L and ≤-7 mmHg in phase IV of the Valsalva manoeuvre correctly predicted a pathological nocturnal BP profile with 87.5% sensitivity and 85.7% specificity. Pathological nocturnal BP profiles are associated with evidence of cardiovascular noradrenergic failure in PD and MSA. The Valsalva manoeuvre is routinely performed during standard tilt-table examinations. We propose the naked-eye evaluation of Valsalva phase II_L and phase IV BP behaviour as a time-sparing screening tool for pathological nocturnal BP profiles in PD and MSA. PMID:24737171
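
    The published cut-offs translate into a one-line screening rule (the 87.5% sensitivity and 85.7% specificity quoted above apply to the study cohort, not to this snippet in general):

```python
def predicts_pathological_nocturnal_bp(delta_bp_phase2_late, delta_bp_phase4):
    """Decision rule from the abstract (BP overshoots in mmHg):
    <= 0.5 in late phase II AND <= -7 in phase IV of the Valsalva manoeuvre."""
    return delta_bp_phase2_late <= 0.5 and delta_bp_phase4 <= -7.0
```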

  9. [An attempt to objectify the variability of coronary vessel course on standard arteriograms using an original image-processing algorithm].

    PubMed

    Syrycki, Marek; Stachurska, Aneta; Mysiak, Andrzej; Kacała, Ryszard

    2014-01-01

    The aim of this paper was to analyze standard angiograms of the left coronary artery in order to produce a uniform mathematical description of the course of the coronary branches (both proximal and distal). The changes the coronary branches undergo depending on the phase of the cardiac cycle (diastole, isovolumic systole and tonic systole) were examined as well. The examined material consists of sequences of standard angiograms of the left coronary artery (LCA) obtained from 10 patients (5 male and 5 female) undergoing the standard diagnostic procedure for suspected unstable cardiac ischemia. The coronarograms were acquired with an INNOVA 2000 GE digital angiography system. The average age of the patients was 51 years. The method was based on an original image-processing algorithm allowing automatic, real-time vessel edge detection and mathematical description of the vessel course. The public-domain software ImageJ, from the US National Institutes of Health, was used for image analysis, and Statistica for Windows version 5.5 for statistical analysis. The obtained dependences and the polynomial equations describing them mathematically are presented in the diagrams. Among the examined parameters, the Feret diameter, area and perimeter of the vessel outlines (for both proximal and distal branches) were the most reliable; their changes in relation to the phase of the cardiac cycle were very close to the level of statistical significance. In conclusion, the performed analysis makes it possible to objectify the description of coronary vessel course and variability. It also makes it possible to identify abnormal vessel outlines that could be suspected of structural disorders even in the absence of significant coronary stenosis. PMID:25782214

  10. Comparison between PCR and larvae visualization methods for diagnosis of Strongyloides stercoralis out of endemic area: A proposed algorithm.

    PubMed

    Repetto, Silvia A; Ruybal, Paula; Solana, María Elisa; López, Carlota; Berini, Carolina A; Alba Soto, Catalina D; Cappa, Stella M González

    2016-05-01

    Underdiagnosis of chronic infection with the nematode Strongyloides stercoralis may lead to severe disease in the immunosuppressed. Thus, we have set up a specific and highly sensitive molecular diagnostic method for stool samples. Here, we compared the accuracy of our polymerase chain reaction (PCR)-based method with that of conventional diagnostic methods for chronic infection. We also analyzed clinical and epidemiological predictors of infection to propose an algorithm for the diagnosis of strongyloidiasis useful to the clinician. Molecular and gold-standard methods were performed to evaluate a cohort of 237 individuals recruited in Buenos Aires, Argentina. Subjects were assigned according to their immunological status, eosinophilia and/or history of residence in endemic areas. Diagnosis of strongyloidiasis by PCR on the first stool sample was achieved in 71/237 (29.9%) individuals, whereas only 35/237 (27.4%) were positive by conventional methods, which required up to four serial stool samples at weekly intervals. Eosinophilia and a history of residence in endemic areas were revealed to be independent factors that increase the likelihood of detecting the parasite in our study population. Our results underscore the usefulness of robust molecular tools for diagnosing chronic S. stercoralis infection. The evidence also highlights the need to survey patients with eosinophilia even when a history of residence in an endemic area is absent. PMID:26868702

  11. Loss of Faith in the Origins of Information Literacy in E-Environments: Proposal of a Holistic Approach

    ERIC Educational Resources Information Center

    Nazari, Maryam; Webber, Sheila

    2012-01-01

    The original concept of information literacy (IL) identifies it as an enabler for lifelong learning and learning-to-learn, adaptable and transferable in any learning environment and context. However, practices of IL in electronic information and learning environments (e-environments) tend to question the origins, and workability, of IL on the…

  12. Applying the wisdom of stepping down inhaled corticosteroids in patients with COPD: a proposed algorithm for clinical practice

    PubMed Central

    Kaplan, Alan G

    2015-01-01

    the aforementioned, this perspective article proposes an algorithm for the stepwise withdrawal of ICS in real-life clinical practice. PMID:26648711

  13. A Proposed Extension to the Soil Moisture and Ocean Salinity Level 2 Algorithm for Mixed Forest and Moderate Vegetation Pixels

    NASA Technical Reports Server (NTRS)

    Panciera, Rocco; Walker, Jeffrey P.; Kalma, Jetse; Kim, Edward

    2011-01-01

    The Soil Moisture and Ocean Salinity (SMOS) mission, launched in November 2009, provides global maps of soil moisture and ocean salinity by measuring the L-band (1.4 GHz) emission of the Earth's surface with a spatial resolution of 40-50 km. Uncertainty in the retrieval of soil moisture over large heterogeneous areas such as SMOS pixels is expected, due to the non-linearity of the relationship between soil moisture and the microwave emission. The current baseline soil moisture retrieval algorithm adopted by SMOS and implemented in the SMOS Level 2 (SMOS L2) processor partially accounts for the sub-pixel heterogeneity of the land surface by modelling the individual contributions of different pixel fractions to the overall pixel emission. This retrieval approach is tested in this study using airborne L-band data over an area the size of a SMOS pixel characterised by a mix of Eucalypt forest and moderate vegetation types (grassland and crops), with the objective of assessing its ability to correct for the soil moisture retrieval error induced by the land surface heterogeneity. A preliminary analysis using a traditional uniform pixel retrieval approach shows that the sub-pixel heterogeneity of land cover type causes significant errors in soil moisture retrieval (7.7% v/v RMSE, 2% v/v bias) in pixels characterised by a significant amount of forest (40-60%). Although the retrieval approach adopted by SMOS partially reduces this error, it is affected by errors beyond the SMOS target accuracy, presenting in particular a strong dry bias when a fraction of the pixel is occupied by forest (4.1% v/v RMSE, -3.1% v/v bias). An extension to the SMOS approach is proposed that accounts for the heterogeneity of vegetation optical depth within the SMOS pixel. The proposed approach is shown to significantly reduce the error in retrieved soil moisture (2.8% v/v RMSE, -0.3% v/v bias) in pixels characterised by a critical amount of forest (40-60%), at the limited cost of only a crude estimate of the

  14. Case 3018. Cervus gouazoubira Fischer, 1814 (currently Mazama gouazoubira; Mammalia, Artiodactyla): proposed conservation as the correct original spelling

    USGS Publications Warehouse

    Gardner, A.L.

    1999-01-01

    The purpose of this application is to conserve the spelling of the specific name of Cervus gouazoubira Fischer, 1814 for the brown brocket deer of South America (family Cervidae). This spelling, rather than the original gouazoupira, has been in virtually universal usage for almost 50 years.

  15. A New Algorithm to Diagnose Atrial Ectopic Origin from Multi Lead ECG Systems - Insights from 3D Virtual Human Atria and Torso

    PubMed Central

    Alday, Erick A. Perez; Colman, Michael A.; Langley, Philip; Butters, Timothy D.; Higham, Jonathan; Workman, Antony J.; Hancox, Jules C.; Zhang, Henggui

    2015-01-01

    Rapid atrial arrhythmias such as atrial fibrillation (AF) predispose to ventricular arrhythmias, sudden cardiac death and stroke. Identifying the origin of atrial ectopic activity from the electrocardiogram (ECG) can help to diagnose the early onset of AF in a cost-effective manner. The complex and rapid atrial electrical activity during AF makes it difficult to obtain detailed information on atrial activation using the standard 12-lead ECG alone. Compared to conventional 12-lead ECG, more detailed ECG lead configurations may provide further information about spatio-temporal dynamics of the body surface potential (BSP) during atrial excitation. We apply a recently developed 3D human atrial model to simulate electrical activity during normal sinus rhythm and ectopic pacing. The atrial model is placed into a newly developed torso model which considers the presence of the lungs, liver and spinal cord. A boundary element method is used to compute the BSP resulting from atrial excitation. Elements of the torso mesh corresponding to the locations of the placement of the electrodes in the standard 12-lead and a more detailed 64-lead ECG configuration were selected. The ectopic focal activity was simulated at various origins across all the different regions of the atria. Simulated BSP maps during normal atrial excitation (i.e. sinoatrial node excitation) were compared to those observed experimentally (obtained from the 64-lead ECG system), showing a strong agreement between the evolution in time of the simulated and experimental data in the P-wave morphology of the ECG and dipole evolution. An algorithm to obtain the location of the stimulus from a 64-lead ECG system was developed. The algorithm presented had a success rate of 93%, meaning that it correctly identified the origin of atrial focus in 75/80 simulations, and involved a general approach relevant to any multi-lead ECG system. This represents a significant improvement over previously developed algorithms. PMID

  16. A relative reward-strength algorithm for the hierarchical structure learning automata operating in the general nonstationary multiteacher environment.

    PubMed

    Baba, Norio; Mogami, Yoshio

    2006-08-01

    A new learning algorithm for the hierarchical structure learning automata (HSLA) operating in the nonstationary multiteacher environment (NME) is proposed. The proposed algorithm is derived by extending the original relative reward-strength algorithm to the HSLA operating in the general NME. It is shown that the proposed algorithm ensures convergence with probability 1 to the optimal path under a certain type of NME. Several computer-simulation results, carried out to compare the performance of the proposed algorithm in some NMEs against that of two of the fastest algorithms available today, confirm the effectiveness of the proposed algorithm. PMID:16903364

  17. Determination of origin and sugars of citrus fruits using genetic algorithm, correspondence analysis and partial least square combined with fiber optic NIR spectroscopy.

    PubMed

    Tewari, Jagdish C; Dixit, Vivechana; Cho, Byoung-Kwan; Malik, Kamal A

    2008-12-01

    The capacity to confirm the variety or origin of citrus fruits and to estimate their sucrose, glucose and fructose content is of major interest to the citrus juice industry. A rapid classification and quantification technique was developed and validated for simultaneously and nondestructively determining the sugar constituents' concentrations and the origin of citrus fruits using Fourier Transform Near-Infrared (FT-NIR) spectroscopy in conjunction with an Artificial Neural Network (ANN) using a genetic algorithm, chemometrics and Correspondence Analysis (CA). To acquire good classification accuracy and to cover a wide range of concentrations of sucrose, glucose and fructose, we collected 22 different varieties of citrus fruits from the market during the entire citrus season. FT-NIR spectra were recorded in the NIR region from 1,100 to 2,500 nm using the fiber optic probe, and three types of data analysis were performed. Chemometric analysis using Partial Least Squares (PLS) was performed in order to determine the concentration of individual sugars. Artificial Neural Network analysis was performed for classification, i.e., origin or variety identification of citrus fruits, using a genetic algorithm. Correspondence analysis was performed in order to visualize the relationship between the citrus fruits. To compute a PLS model based upon the reference values and to validate the developed method, high performance liquid chromatography (HPLC) was performed. The spectral range and the number of PLS factors were optimized for the lowest standard error of calibration (SEC) and prediction (SEP) and the highest correlation coefficient (R(2)). The calibration model developed was able to assess the sucrose, glucose and fructose contents in unknown citrus fruit up to an R(2) value of 0.996-0.998. Numbers of factors from F1 to F10 were optimized for correspondence analysis for relationship visualization of citrus fruits based on the output values of the genetic algorithm. ANN and CA analysis showed excellent

  18. Determination of origin and sugars of citrus fruits using genetic algorithm, correspondence analysis and partial least square combined with fiber optic NIR spectroscopy

    NASA Astrophysics Data System (ADS)

    Tewari, Jagdish C.; Dixit, Vivechana; Cho, Byoung-Kwan; Malik, Kamal A.

    2008-12-01

    The capacity to confirm the variety or origin of citrus fruits and to estimate their sucrose, glucose and fructose content is of major interest to the citrus juice industry. A rapid classification and quantification technique was developed and validated for simultaneously and nondestructively determining the sugar constituents' concentrations and the origin of citrus fruits using Fourier Transform Near-Infrared (FT-NIR) spectroscopy in conjunction with an Artificial Neural Network (ANN) using a genetic algorithm, chemometrics and Correspondence Analysis (CA). To acquire good classification accuracy and to cover a wide range of concentrations of sucrose, glucose and fructose, we collected 22 different varieties of citrus fruits from the market during the entire citrus season. FT-NIR spectra were recorded in the NIR region from 1100 to 2500 nm using the fiber optic probe, and three types of data analysis were performed. Chemometric analysis using Partial Least Squares (PLS) was performed in order to determine the concentration of individual sugars. Artificial Neural Network analysis was performed for classification, i.e., origin or variety identification of citrus fruits, using a genetic algorithm. Correspondence analysis was performed in order to visualize the relationship between the citrus fruits. To compute a PLS model based upon the reference values and to validate the developed method, high performance liquid chromatography (HPLC) was performed. The spectral range and the number of PLS factors were optimized for the lowest standard error of calibration (SEC) and prediction (SEP) and the highest correlation coefficient (R2). The calibration model developed was able to assess the sucrose, glucose and fructose contents in unknown citrus fruit up to an R2 value of 0.996-0.998. Numbers of factors from F1 to F10 were optimized for correspondence analysis for relationship visualization of citrus fruits based on the output values of the genetic algorithm. ANN and CA analysis showed excellent classification

  19. Evaluation of kinetic models for industrial acetic fermentation: proposal of a new model optimized by genetic algorithms.

    PubMed

    González-Sáiz, José M; Pizarro, Consuelo; Garrido-Vidal, Diego

    2003-01-01

    The most important kinetic models developed for acetic fermentation were evaluated to study their ability to explain the behavior of the industrial acetification process. Each model was introduced into a simulation environment capable of replicating the conditions of the industrial plant. In this paper, by comparing the simulation results with an average sequence calculated from the industrial data, it is shown that these models are not suitable for predicting the evolution of the industrial fermentation. Therefore, a new kinetic model for industrial acetic fermentation was developed. The kinetic parameters of the model were optimized by a specifically designed genetic algorithm. Only the representative sequence of industrial concentrations of acetic acid was required. The main novelty of the algorithm is the four-composed desirability function that works properly as the response to maximize. The new model developed is capable of explaining the behavior of the industrial process. The predictive ability of the model has been compared with that of the other models studied. PMID:12675605

  20. Authentication of the botanical origin of unifloral honey by infrared spectroscopy coupled with support vector machine algorithm

    NASA Astrophysics Data System (ADS)

    Lenhardt, L.; Zeković, I.; Dramićanin, T.; Tešić, Ž.; Milojković-Opsenica, D.; Dramićanin, M. D.

    2014-09-01

    In recent years, the potential of Fourier-transform infrared spectroscopy coupled with different chemometric tools in food analysis has been established. This technique is rapid, low cost, and reliable and requires little sample preparation. In this work, 130 Serbian unifloral honey samples (linden, acacia, and sunflower types) were analyzed using attenuated total reflectance infrared spectroscopy (ATR-IR). For each spectrum, 64 scans were recorded in wavenumbers between 4000 and 500 cm-1 and at a spectral resolution of 4 cm-1. These spectra were analyzed using principal component analysis (PCA), and calculated principal components were then used for support vector machine (SVM) training. In this way, the pattern-recognition tool is obtained for building a classification model for determining the botanical origin of honey. The PCA was used to analyze results and to see if the separation between groups of different types of honeys exists. Using the SVM, the classification model was built and classification errors were acquired. It has been observed that this technique is adequate for determining the botanical origin of honey with a success rate of 98.6%. Based on these results, it can be concluded that this technique offers many possibilities for future rapid qualitative analysis of honey.

  1. Improvements of HITS Algorithms for Spam Links

    NASA Astrophysics Data System (ADS)

    Asano, Yasuhito; Tezuka, Yu; Nishizeki, Takao

    The HITS algorithm proposed by Kleinberg is one of the representative methods of scoring Web pages by using hyperlinks. In the days when the algorithm was proposed, most of the pages given a high score by the algorithm were genuinely related to a given topic, and hence the algorithm could be used to find related pages. However, the algorithm and its variants, including Bharat's improved HITS (abbreviated to BHITS) proposed by Bharat and Henzinger, can no longer be used to find related pages on today's Web, due to an increase in spam links. In this paper, we first propose three methods to find “linkfarms,” that is, sets of spam links forming a densely connected subgraph of a Web graph. We then present an algorithm, called a trust-score algorithm, to give high scores to pages which are, with high probability, not spam pages. Combining the three methods and the trust-score algorithm with BHITS, we obtain several variants of the HITS algorithm. We ascertain by experiments that one of them, named TaN+BHITS, using the trust-score algorithm and the method of finding linkfarms by employing name servers, is the most suitable for finding related pages on today's Web. Our algorithms require no more time and memory than the original HITS algorithm, and can be executed on a PC with a small amount of main memory.
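
    For orientation, the plain HITS iteration that the paper's variants build on fits in a few lines of Python; the toy link graph below is hypothetical, and the trust-score and linkfarm-detection extensions are not shown.

      def hits(links, iters=50):
          """links: dict mapping each page to the list of pages it links to."""
          pages = set(links) | {q for qs in links.values() for q in qs}
          hub = {p: 1.0 for p in pages}
          auth = {p: 1.0 for p in pages}
          for _ in range(iters):
              # Authority score: sum of hub scores of pages linking to the page.
              auth = {p: sum(hub[q] for q in links if p in links.get(q, ()))
                      for p in pages}
              norm = sum(v * v for v in auth.values()) ** 0.5 or 1.0
              auth = {p: v / norm for p, v in auth.items()}
              # Hub score: sum of authority scores of pages the page links to.
              hub = {p: sum(auth[q] for q in links.get(p, ())) for p in pages}
              norm = sum(v * v for v in hub.values()) ** 0.5 or 1.0
              hub = {p: v / norm for p, v in hub.items()}
          return hub, auth

      hub, auth = hits({"a": ["b", "c"], "b": ["c"], "c": ["a"]})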

  2. Diamond of Possibly Metallurgical and Seismic Origin: PART 3: Additional Specimens and a Proposal Calling for adjusted Methodologies for Diamondism

    NASA Astrophysics Data System (ADS)

    Giamn, M.

    2007-05-01

    , noniconic, nonstereotyping specimen population of primarily fine grains is needed. My theory accommodates (1) broad compositional ranges; (2) present or historical specimens; and (3) validity on a grain-by-grain scale as well as a regional scale. A great number of metallic elements are broadly similar to iron in crystal structure, phase equilibria, range of stoichiometry of solid solutions, and properties. Under favorable conditions, they could be as likely as iron to generate carbon. This expands the range of potential source metals for diamond. Further multiplying this number by alloying and centering (of lattice points) variations, the number of potential sources could be vast. The above-mentioned exercise is extendable to, for instance, Cr, Ni or other metals. This could provide a missing link between diamond in stable cratons and other diamonds. 1 Giamn, M., Diamond of possibly metallurgical and seismic origin in an alloy from the debris after earthquake Taiwan PART I, 2004 Eos AGU Spring. 2 Giamn, M., submitted to GCA. 3 Giamn, M., PART II (Thermal) past is present.

  3. Gacs quantum algorithmic entropy in infinite dimensional Hilbert spaces

    SciTech Connect

    Benatti, Fabio; Oskouei, Samad Khabbazi Deh Abad, Ahmad Shafiei

    2014-08-15

    We extend the notion of Gacs quantum algorithmic entropy, originally formulated for finitely many qubits, to infinite dimensional quantum spin chains and investigate the relation of this extension with two quantum dynamical entropies that have been proposed in recent years.

  4. The Biochemical Origin of Pain – Proposing a new law of Pain: The origin of all Pain is Inflammation and the Inflammatory Response PART 1 of 3 – A unifying law of pain

    PubMed Central

    2009-01-01

    We are proposing a unifying theory or law of pain, which states: the origin of all pain is inflammation and the inflammatory response. The biochemical mediators of inflammation include cytokines, neuropeptides, growth factors and neurotransmitters. Irrespective of the type of pain, whether it is acute or chronic, peripheral or central, nociceptive or neuropathic, the underlying origin is inflammation and the inflammatory response. Activation of pain receptors, transmission and modulation of pain signals, neuroplasticity and central sensitization are all one continuum of inflammation and the inflammatory response. Irrespective of the characteristic of the pain, whether it is sharp, dull, aching, burning, stabbing, numbing or tingling, all pain arises from inflammation and the inflammatory response. We are proposing a re-classification and treatment of pain syndromes based upon their inflammatory profile. Treatment of pain syndromes should be based on these principles:
    - Determination of the inflammatory profile of the pain syndrome
    - Inhibition or suppression of production of the appropriate inflammatory mediators, e.g. with inflammatory mediator blockers or surgical intervention where appropriate
    - Inhibition or suppression of neuronal afferent and efferent (motor) transmission, e.g. with anti-seizure drugs or local anesthetic blocks
    - Modulation of neuronal transmission, e.g. with opioid medication
    At the L.A. Pain Clinic, we have successfully treated a variety of pain syndromes by utilizing these principles. This theory of the biochemical origin of pain is compatible with, inclusive of, and unifies existing theories and knowledge of the mechanism of pain including the gate control theory, and theories of pre-emptive analgesia, windup and central sensitization. PMID:17240081

  5. Segmentation of MRI Brain Images with an Improved Harmony Searching Algorithm.

    PubMed

    Yang, Zhang; Shufan, Ye; Li, Guo; Weifeng, Ding

    2016-01-01

    The harmony search (HS) algorithm is an optimization search algorithm currently applied to many practical problems. The HS algorithm constantly revises the variables in the harmony database and the probabilities of the different values they can take, iterating to convergence to achieve the optimal result. Accordingly, this study proposed a modified algorithm to improve its efficiency. First, a rough set algorithm was employed to improve the convergence and accuracy of the HS algorithm. Then, the optimal value was obtained using the improved HS algorithm. The optimal value of convergence was employed as the initial value of the fuzzy clustering algorithm for segmenting magnetic resonance imaging (MRI) brain images. Experimental results showed that the improved HS algorithm attained better convergence and more accurate results than the original HS algorithm. In our study, the MRI image segmentation performance of the improved algorithm was superior to that of the original fuzzy clustering method. PMID:27403428

  6. Segmentation of MRI Brain Images with an Improved Harmony Searching Algorithm

    PubMed Central

    Yang, Zhang; Li, Guo; Weifeng, Ding

    2016-01-01

    The harmony search (HS) algorithm is an optimization search algorithm currently applied to many practical problems. The HS algorithm constantly revises the variables in the harmony database and the probabilities of the different values they can take, iterating to convergence to achieve the optimal result. Accordingly, this study proposed a modified algorithm to improve its efficiency. First, a rough set algorithm was employed to improve the convergence and accuracy of the HS algorithm. Then, the optimal value was obtained using the improved HS algorithm. The optimal value of convergence was employed as the initial value of the fuzzy clustering algorithm for segmenting magnetic resonance imaging (MRI) brain images. Experimental results showed that the improved HS algorithm attained better convergence and more accurate results than the original HS algorithm. In our study, the MRI image segmentation performance of the improved algorithm was superior to that of the original fuzzy clustering method. PMID:27403428
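
    For context, the standard harmony search loop that both records above describe (memory consideration, pitch adjustment, replacement of the worst memory member) can be sketched as follows; the parameter values and the sphere objective are hypothetical, and the rough-set and fuzzy-clustering refinements of the paper are not included.

      import random

      def harmony_search(f, dim, lo, hi, hms=10, hmcr=0.9, par=0.3, bw=0.05, iters=2000):
          memory = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(hms)]
          for _ in range(iters):
              new = []
              for d in range(dim):
                  if random.random() < hmcr:        # memory consideration
                      x = random.choice(memory)[d]
                      if random.random() < par:     # pitch adjustment
                          x += random.uniform(-bw, bw)
                  else:                             # random consideration
                      x = random.uniform(lo, hi)
                  new.append(min(max(x, lo), hi))
              worst = max(range(hms), key=lambda i: f(memory[i]))
              if f(new) < f(memory[worst]):         # replace the worst harmony
                  memory[worst] = new
          return min(memory, key=f)

      best = harmony_search(lambda v: sum(x * x for x in v), dim=5, lo=-5, hi=5)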

  7. Cryptococcal antigen screening and preemptive therapy in patients initiating antiretroviral therapy in resource-limited settings: a proposed algorithm for clinical implementation.

    PubMed

    Jarvis, Joseph N; Govender, Nelesh; Chiller, Tom; Park, Benjamin J; Longley, Nicky; Meintjes, Graeme; Bekker, Linda-Gail; Wood, Robin; Lawn, Stephen D; Harrison, Thomas S

    2012-01-01

    HIV-associated cryptococcal meningitis (CM) is estimated to cause over half a million deaths annually in Africa. Many of these deaths are preventable. Screening patients for subclinical cryptococcal infection at the time of entry into antiretroviral therapy programs using cryptococcal antigen (CRAG) immunoassays is highly effective in identifying patients at risk of developing CM, allowing these patients to then be targeted with "preemptive" therapy to prevent the development of severe disease. Such CRAG screening programs are currently being implemented in a number of countries; however, a strong evidence base and clear guidance on how to manage patients with subclinical cryptococcal infection identified by screening are lacking. We review the available evidence and propose a treatment algorithm for the management of patients with asymptomatic cryptococcal antigenemia. PMID:23015379

  8. Comparison of cone beam artifacts reduction: two pass algorithm vs TV-based CS algorithm

    NASA Astrophysics Data System (ADS)

    Choi, Shinkook; Baek, Jongduk

    2015-03-01

    In cone beam computed tomography (CBCT), the severity of the cone beam artifacts increases as the cone angle increases. To reduce these artifacts, several modified FDK algorithms and compressed-sensing-based iterative algorithms have been proposed. In this paper, we used the two-pass algorithm and the Gradient-Projection-Barzilai-Borwein (GPBB) algorithm to reduce the cone beam artifacts, and compared their performance using the structural similarity (SSIM) index. In the two-pass algorithm, it is assumed that the cone beam artifacts are mainly caused by extreme-density (ED) objects; the algorithm therefore reproduces the cone beam artifacts (i.e., the error image) produced by the ED objects and then subtracts them from the original image. The GPBB algorithm is a compressed-sensing-based iterative algorithm which minimizes an energy function by calculating the gradient projection with the step size determined by the Barzilai-Borwein formulation, and therefore it can estimate missing data caused by the cone beam artifacts. To evaluate the performance of the two algorithms, we used test objects consisting of 7 ellipsoids separated along the z direction, and cone beam artifacts were generated using a 30-degree cone angle. Even though the FDK algorithm produced severe cone beam artifacts with a large cone angle, the two-pass algorithm reduced the cone beam artifacts with small residual errors caused by inaccuracy of the ED objects. In contrast, the GPBB algorithm completely removed the cone beam artifacts and restored the original shape of the objects.
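
    The GPBB step mentioned above combines gradient projection with the Barzilai-Borwein step size. A minimal sketch on a toy nonnegative least-squares problem (standing in for the actual reconstruction energy function) might look as follows; the matrix sizes are hypothetical.

      import numpy as np

      def gpbb(A, b, iters=100):
          x = np.zeros(A.shape[1])
          x_prev = g_prev = None
          alpha = 1e-4                           # conservative first step
          for _ in range(iters):
              g = A.T @ (A @ x - b)              # gradient of 0.5 * ||Ax - b||^2
              if g_prev is not None:
                  s, y = x - x_prev, g - g_prev
                  denom = s @ y
                  alpha = (s @ s) / denom if denom > 0 else 1e-4  # BB1 step size
              x_prev, g_prev = x, g
              x = np.maximum(x - alpha * g, 0.0) # project onto the feasible set
          return x

      rng = np.random.default_rng(1)
      A = rng.normal(size=(60, 40))
      x_true = np.maximum(rng.normal(size=40), 0.0)
      x_hat = gpbb(A, A @ x_true)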

  9. Classification of neuropathic pain in cancer patients: A Delphi expert survey report and EAPC/IASP proposal of an algorithm for diagnostic criteria.

    PubMed

    Brunelli, Cinzia; Bennett, Michael I; Kaasa, Stein; Fainsinger, Robin; Sjøgren, Per; Mercadante, Sebastiano; Løhre, Erik T; Caraceni, Augusto

    2014-12-01

    Neuropathic pain (NP) in cancer patients lacks standards for diagnosis. This study is aimed at reaching consensus on the application of the International Association for the Study of Pain (IASP) special interest group for neuropathic pain (NeuPSIG) criteria to the diagnosis of NP in cancer patients and on the relevance of patient-reported outcome (PRO) descriptors for the screening of NP in this population. An international group of 42 experts was invited to participate in a consensus process through a modified 2-round Internet-based Delphi survey. Relevant topics investigated were: peculiarities of NP in patients with cancer, IASP NeuPSIG diagnostic criteria adaptation and assessment, and standardized PRO assessment for NP screening. Median consensus scores (MED) and interquartile ranges (IQR) were calculated to measure expert consensus after both rounds. Twenty-nine experts answered, and good agreement was found on the statement "the pathophysiology of NP due to cancer can be different from non-cancer NP" (MED=9, IQR=2). Satisfactory consensus was reached for the first 3 NeuPSIG criteria (pain distribution, history, and sensory findings; MEDs⩾8, IQRs⩽3), but not for the fourth one (diagnostic test/imaging; MED=6, IQR=3). Agreement was also reached on clinical examination by soft brush or pin stimulation (MEDs⩾7 and IQRs⩽3) and on the use of PRO descriptors for NP screening (MED=8, IQR=3). Based on the study results, a clinical algorithm for NP diagnostic criteria in cancer patients with pain was proposed. Clinical research on PRO in the screening phase and on the application of the algorithm will be needed to examine their effectiveness in classifying NP in cancer patients. PMID:25284070

  10. Study on stabilization and quench protection of coils wound of HTS coated conductors considering quench origins - Proposal of criteria for stabilization and quench protection

    NASA Astrophysics Data System (ADS)

    Tsukamoto, Osami; Fujimoto, Yasutaka; Takao, Tomoaki

    2014-09-01

    It has been considered that HTS coils are hard to quench because of the high quench energy resulting from the high critical temperature and high specific heat of HTS wires. Therefore, little attention has been paid to quench protection. However, HTS coils can still be quenched during operation, mainly from the following two origins: (a) the presence of non-recoverable local defects in the conductors, and (b) temperature rise of a long part of the conductor. Indeed, severe quench accidents, such as burned coils, have occurred in various places as the scale of HTS applications has increased. The purposes of this paper are to study the behavior of the normal zone and the hot-spot temperature of wires during the quench-detection/energy-dump sequence and to find criteria for stability and quench protection. In the paper, criteria are proposed for the stability and quench protection of HTS coils. The criterion for stability is that a coil can be operated stably without a quench despite defects in the coil windings; that for quench protection is that a coil can be safely protected from damage caused by a quench due to temperature rise of a long part of the coil wires. The criteria can be used as design rules for HTS coils.

  11. The generalized frequency-domain adaptive filtering algorithm as an approximation of the block recursive least-squares algorithm

    NASA Astrophysics Data System (ADS)

    Schneider, Martin; Kellermann, Walter

    2016-01-01

    Acoustic echo cancellation (AEC) is a well-known application of adaptive filters in communication acoustics. To implement AEC for multichannel reproduction systems, powerful adaptation algorithms like the generalized frequency-domain adaptive filtering (GFDAF) algorithm are required for satisfactory convergence behavior. In this paper, the GFDAF algorithm is rigorously derived as an approximation of the block recursive least-squares (RLS) algorithm. Thereby, the original formulation of the GFDAF algorithm is generalized while avoiding an error that was present in the original derivation. The presented algorithm formulation is applied to pruned transform-domain loudspeaker-enclosure-microphone models in a mathematically consistent manner. Such pruned models have recently been proposed to cope with the tremendous computational demands of massive multichannel AEC. Beyond its generalization, a regularization of the GFDAF is shown to have a close relation to the well-known block least-mean-squares algorithm.
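
    Since the GFDAF algorithm is derived as a DFT-domain approximation of the block recursive least-squares algorithm, the sample-by-sample RLS recursion is the natural reference point. The sketch below is that textbook recursion, not the GFDAF itself; the filter order, forgetting factor and regularization values are hypothetical.

      import numpy as np

      def rls(x, d, order=8, lam=0.99, delta=1e2):
          """Identify a filter mapping input x to desired signal d."""
          w = np.zeros(order)
          P = delta * np.eye(order)              # inverse correlation estimate
          y = np.zeros(len(x))
          for n in range(order, len(x)):
              u = x[n - order:n][::-1]           # regression vector
              k = P @ u / (lam + u @ P @ u)      # gain vector
              y[n] = w @ u
              e = d[n] - y[n]                    # a-priori error
              w = w + k * e
              P = (P - np.outer(k, u @ P)) / lam
          return w, y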

  12. Development of a Compound Optimization Approach Based on Imperialist Competitive Algorithm

    NASA Astrophysics Data System (ADS)

    Wang, Qimei; Yang, Zhihong; Wang, Yong

    In this paper, an improved novel approach is developed for the imperialist competitive algorithm to achieve greater performance. The Nelder-Mead simplex method is applied to execute alternately with the original procedures of the algorithm. The approach is tested on twelve widely used benchmark functions and is also compared with other related studies. It is shown that the proposed approach has a faster convergence rate, better search ability, and higher stability than the original algorithm and other related methods.

  13. An improved Camshift algorithm for target recognition

    NASA Astrophysics Data System (ADS)

    Fu, Min; Cai, Chao; Mao, Yusu

    2015-12-01

    The Camshift algorithm and the three-frame-difference algorithm are popular target recognition and tracking methods. The Camshift algorithm requires manual initialization of the search window, which introduces subjective error and inconsistency, and it calculates a color histogram only at initialization, so the color probability model cannot be updated continuously. On the other hand, the three-frame-difference method does not require manual initialization of a search window and can make full use of the motion information of the target to determine its range of motion. But it is unable to determine the contours of the object and cannot make use of the color information of the target object. Therefore, an improved Camshift algorithm is proposed to overcome the disadvantages of the original algorithm: the three-frame-difference operation is combined with the object's motion information and color information to identify the target object. The improved Camshift algorithm was implemented and shows better performance in the recognition and tracking of the target.

  14. Algorithms and Algorithmic Languages.

    ERIC Educational Resources Information Center

    Veselov, V. M.; Koprov, V. M.

    This paper is intended as an introduction to a number of problems connected with the description of algorithms and algorithmic languages, particularly the syntaxes and semantics of algorithmic languages. The terms "letter, word, alphabet" are defined and described. The concept of the algorithm is defined and the relation between the algorithm and…

  15. Constant Modulus Algorithm with Reduced Complexity Employing DFT Domain Fast Filtering

    NASA Astrophysics Data System (ADS)

    Yang, Yoon Gi; Lee, Chang Su; Yang, Soo Mi

    In this paper, a novel CMA (constant modulus algorithm) employing fast convolution in the DFT (discrete Fourier transform) domain is proposed. We propose a non-linear adaptation algorithm that minimizes the CMA cost function in the DFT domain. The proposed algorithm is completely new compared to the recently introduced similar DFT-domain CMA algorithm in that the original CMA cost function has not been changed to develop the DFT-domain algorithm, resulting in improved convergence properties. Using the proposed approach, we can reduce the number of multiplications to O(N log2 N), whereas the conventional CMA has a computation order of O(N^2). Simulation results show that the proposed algorithm provides performance comparable to the conventional CMA.
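
    For reference, the classical time-domain CMA update whose cost function the paper keeps unchanged can be sketched as follows; the DFT-domain fast-convolution machinery that is the paper's actual contribution is omitted, and the tap count, step size and modulus are hypothetical.

      import numpy as np

      def cma_equalize(x, taps=11, mu=1e-3, R2=1.0):
          """x: complex received sequence; returns equalizer taps and output."""
          w = np.zeros(taps, dtype=complex)
          w[taps // 2] = 1.0                        # center-spike initialization
          y = np.zeros(len(x), dtype=complex)
          for n in range(taps, len(x)):
              u = x[n - taps:n][::-1]
              y[n] = w @ u
              e = y[n] * (np.abs(y[n]) ** 2 - R2)   # stochastic gradient of CM cost
              w = w - mu * e * np.conj(u)
          return w, y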

  16. Ordered subsets algorithms for transmission tomography.

    PubMed

    Erdogan, H; Fessler, J A

    1999-11-01

    The ordered subsets EM (OSEM) algorithm has enjoyed considerable interest for emission image reconstruction due to its acceleration of the original EM algorithm and ease of programming. The transmission EM reconstruction algorithm converges very slowly and is not used in practice. In this paper, we introduce a simultaneous update algorithm called separable paraboloidal surrogates (SPS) that converges much faster than the transmission EM algorithm. Furthermore, unlike the 'convex algorithm' for transmission tomography, the proposed algorithm is monotonic even with nonzero background counts. We demonstrate that the ordered subsets principle can also be applied to the new SPS algorithm for transmission tomography to accelerate 'convergence', albeit with similar sacrifice of global convergence properties as for OSEM. We implemented and evaluated this ordered subsets transmission (OSTR) algorithm. The results indicate that the OSTR algorithm speeds up the increase in the objective function by roughly the number of subsets in the early iterates when compared to the ordinary SPS algorithm. We compute mean square errors and segmentation errors for different methods and show that OSTR is superior to OSEM applied to the logarithm of the transmission data. However, penalized-likelihood reconstructions yield the best quality images among all other methods tested. PMID:10588288
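
    The ordered-subsets principle itself is independent of the SPS surrogate: each sub-iteration uses the gradient of only one subset of the measurements, scaled by the number of subsets, which accelerates early iterations. A generic sketch on a toy least-squares model (not the transmission log-likelihood or the SPS algorithm) follows; the step size and subset count are hypothetical.

      import numpy as np

      def os_updates(A, b, n_subsets=4, iters=10, step=1e-3):
          x = np.zeros(A.shape[1])
          subsets = np.array_split(np.arange(A.shape[0]), n_subsets)
          for _ in range(iters):
              for s in subsets:                  # one pass = n_subsets sub-iterations
                  g = A[s].T @ (A[s] @ x - b[s])
                  x -= step * n_subsets * g      # subset gradient scaled up
          return x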

  17. Bayesian Smoothing Algorithms in Partially Observed Markov Chains

    NASA Astrophysics Data System (ADS)

    Ait-el-Fquih, Boujemaa; Desbouvries, François

    2006-11-01

    Let x = {x_n}_{n∈N} be a hidden process, y = {y_n}_{n∈N} an observed process and r = {r_n}_{n∈N} some auxiliary process. We assume that t = {t_n}_{n∈N}, with t_n = (x_n, r_n, y_{n-1}), is a (Triplet) Markov Chain (TMC). TMC are more general than Hidden Markov Chains (HMC) and yet enable the development of efficient restoration and parameter estimation algorithms. This paper is devoted to Bayesian smoothing algorithms for TMC. We first propose twelve algorithms for general TMC. In the Gaussian case, these smoothers reduce to a set of algorithms which include, among other solutions, extensions to TMC of classical Kalman-like smoothing algorithms (originally designed for HMC) such as the RTS algorithms, the Two-Filter algorithms or the Bryson and Frazier algorithm.
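
    As a concrete reference point for the Kalman-like smoothers mentioned above, here is the classical Kalman filter followed by the RTS backward pass for a linear-Gaussian model, the special case that the TMC smoothers generalize; the model matrices in the usage lines are toy values.

      import numpy as np

      def rts_smooth(y, F, H, Q, R, x0, P0):
          n, dim = len(y), len(x0)
          xf = np.zeros((n, dim)); Pf = np.zeros((n, dim, dim))   # filtered
          xp = np.zeros((n, dim)); Pp = np.zeros((n, dim, dim))   # predicted
          x, P = x0, P0
          for k in range(n):                       # forward Kalman pass
              xp[k] = F @ x
              Pp[k] = F @ P @ F.T + Q
              S = H @ Pp[k] @ H.T + R
              K = Pp[k] @ H.T @ np.linalg.inv(S)
              x = xp[k] + K @ (y[k] - H @ xp[k])
              P = Pp[k] - K @ H @ Pp[k]
              xf[k], Pf[k] = x, P
          xs, Ps = xf.copy(), Pf.copy()
          for k in range(n - 2, -1, -1):           # backward RTS pass
              G = Pf[k] @ F.T @ np.linalg.inv(Pp[k + 1])
              xs[k] = xf[k] + G @ (xs[k + 1] - xp[k + 1])
              Ps[k] = Pf[k] + G @ (Ps[k + 1] - Pp[k + 1]) @ G.T
          return xs, Ps

      y = np.random.default_rng(0).normal(size=(50, 1))
      xs, Ps = rts_smooth(y, np.eye(1), np.eye(1), 0.1 * np.eye(1),
                          0.1 * np.eye(1), np.zeros(1), np.eye(1))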

  18. GPU Accelerated Event Detection Algorithm

    2011-05-25

    Smart grids require new algorithmic approaches as well as parallel formulations. One of the critical components is the prediction of changes and detection of anomalies within the power grid. The state-of-the-art algorithms are not suited to handle the demands of streaming data analysis: (i) the need for event-detection algorithms that can scale with the size of the data; (ii) the need for algorithms that can not only handle the multi-dimensional nature of the data, but also model both spatial and temporal dependencies in the data, which, for the most part, are highly nonlinear; (iii) the need for algorithms that can operate in an online fashion with streaming data. The GAEDA code is a new online anomaly-detection technique that takes into account the spatial, temporal and multi-dimensional aspects of the data set. The basic idea behind the proposed approach is (a) to convert a multi-dimensional sequence into a univariate time series that captures the changes between successive windows extracted from the original sequence using singular value decomposition (SVD), and then (b) to apply known anomaly-detection techniques for univariate time series. A key challenge for the proposed approach is to make the algorithm scalable to huge datasets by adopting techniques from perturbation theory and incremental SVD analysis. We used recent advances in tensor decomposition techniques, which reduce computational complexity, to monitor the change between successive windows and detect anomalies in the same manner as described above. Therefore we propose to develop parallel solutions on many-core systems such as GPUs, because these algorithms involve many numerical operations and are highly data-parallelizable.
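
    Step (a) of the approach, reducing a multi-dimensional stream to a univariate change series via window-wise SVD, can be sketched as follows; the window length, the use of the dominant singular direction and the z-score output are illustrative assumptions, not the GAEDA implementation.

      import numpy as np

      def change_scores(X, win=20):
          """X: (time, features) array; returns a z-scored change series."""
          scores, prev = [], None
          for t in range(0, X.shape[0] - win + 1, win):
              _, _, Vt = np.linalg.svd(X[t:t + win], full_matrices=False)
              v = Vt[0]                               # dominant right singular vector
              if prev is not None:
                  scores.append(1.0 - abs(prev @ v))  # subspace change between windows
              prev = v
          s = np.array(scores)
          return (s - s.mean()) / (s.std() or 1.0)    # threshold these for anomalies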

  19. An improved NAS-RIF algorithm for blind image restoration

    NASA Astrophysics Data System (ADS)

    Liu, Ning; Jiang, Yanbin; Lou, Shuntian

    2007-01-01

    Image restoration is widely applied in many areas, but when operating on images with different scales for the representation of pixel intensity levels, or with low SNR, the traditional restoration algorithm lacks validity and induces noise amplification, ringing artifacts and poor convergence. In this paper, an improved NAS-RIF algorithm is proposed to overcome the shortcomings of the traditional algorithm. The improved algorithm introduces a new cost function which adds a space-adaptive regularization term and a disunity gain of the adaptive filter. In determining the support region, a pre-segmentation is used to form it close to the object in the image. Compared with the traditional algorithm, simulations show that the improved algorithm achieves better convergence and noise resistance and provides a better estimate of the original image.

  20. A fast portable implementation of the Secure Hash Algorithm, III.

    SciTech Connect

    McCurley, Kevin S.

    1992-10-01

    In 1992, NIST announced a proposed standard for a collision-free hash function. The algorithm for producing the hash value is known as the Secure Hash Algorithm (SHA), and the standard using the algorithm is known as the Secure Hash Standard (SHS). Later, an announcement was made that a scientist at NSA had discovered a weakness in the original algorithm. A revision to this standard was then announced as FIPS 180-1, and includes a slight change to the algorithm that eliminates the weakness. This new algorithm is called SHA-1. In this report we describe a portable and efficient implementation of SHA-1 in the C language. Performance information is given, as well as tips for porting the code to other architectures. We conclude with some observations on the efficiency of the algorithm, and a discussion of how the efficiency of SHA might be improved.
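
    For comparison with the C implementation described, SHA-1 is available directly from Python's standard library; the snippet below reproduces the well-known digest of the three-byte message "abc". (Note that SHA-1 has since been broken for collision resistance and should not be used in new designs.)

      import hashlib

      digest = hashlib.sha1(b"abc").hexdigest()
      print(digest)  # a9993e364706816aba3e25717850c26c9cd0d89d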

  1. A Cuckoo Search Algorithm for Multimodal Optimization

    PubMed Central

    2014-01-01

    Interest in multimodal optimization is expanding rapidly, since many practical engineering problems demand the localization of multiple optima within a search space. On the other hand, the cuckoo search (CS) algorithm is a simple and effective global optimization algorithm which can not be directly applied to solve multimodal optimization problems. This paper proposes a new multimodal optimization algorithm called the multimodal cuckoo search (MCS). Under MCS, the original CS is enhanced with multimodal capacities by means of (1) the incorporation of a memory mechanism to efficiently register potential local optima according to their fitness value and the distance to other potential solutions, (2) the modification of the original CS individual selection strategy to accelerate the detection process of new local minima, and (3) the inclusion of a depuration procedure to cyclically eliminate duplicated memory elements. The performance of the proposed approach is compared to several state-of-the-art multimodal optimization algorithms considering a benchmark suite of fourteen multimodal problems. Experimental results indicate that the proposed strategy is capable of providing better and even a more consistent performance over existing well-known multimodal algorithms for the majority of test problems yet avoiding any serious computational deterioration. PMID:25147850

  2. A cuckoo search algorithm for multimodal optimization.

    PubMed

    Cuevas, Erik; Reyna-Orta, Adolfo

    2014-01-01

    Interest in multimodal optimization is expanding rapidly, since many practical engineering problems demand the localization of multiple optima within a search space. On the other hand, the cuckoo search (CS) algorithm is a simple and effective global optimization algorithm which can not be directly applied to solve multimodal optimization problems. This paper proposes a new multimodal optimization algorithm called the multimodal cuckoo search (MCS). Under MCS, the original CS is enhanced with multimodal capacities by means of (1) the incorporation of a memory mechanism to efficiently register potential local optima according to their fitness value and the distance to other potential solutions, (2) the modification of the original CS individual selection strategy to accelerate the detection process of new local minima, and (3) the inclusion of a depuration procedure to cyclically eliminate duplicated memory elements. The performance of the proposed approach is compared to several state-of-the-art multimodal optimization algorithms considering a benchmark suite of fourteen multimodal problems. Experimental results indicate that the proposed strategy is capable of providing better and even a more consistent performance over existing well-known multimodal algorithms for the majority of test problems yet avoiding any serious computational deterioration. PMID:25147850
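
    As background for the MCS modifications, a minimal standard cuckoo search with Mantegna-style Levy flights is sketched below; the population size, abandonment fraction and step scaling are hypothetical, and the memory, selection and depuration mechanisms that define MCS are not implemented.

      import numpy as np
      from math import gamma, sin, pi

      def levy_step(dim, rng, beta=1.5):
          sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
                   (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
          return rng.normal(0, sigma, dim) / np.abs(rng.normal(0, 1, dim)) ** (1 / beta)

      def cuckoo_search(f, dim, lo, hi, n=15, pa=0.25, iters=500):
          rng = np.random.default_rng(0)
          nests = rng.uniform(lo, hi, (n, dim))
          fit = np.apply_along_axis(f, 1, nests)
          best = nests[fit.argmin()].copy()
          for _ in range(iters):
              for i in range(n):                 # Levy flight around each nest
                  trial = np.clip(nests[i] + 0.01 * levy_step(dim, rng) *
                                  (nests[i] - best), lo, hi)
                  j = rng.integers(n)            # compare against a random nest
                  ft = f(trial)
                  if ft < fit[j]:
                      nests[j], fit[j] = trial, ft
              k = max(1, int(pa * n))            # abandon a fraction of worst nests
              worst = fit.argsort()[-k:]
              nests[worst] = rng.uniform(lo, hi, (k, dim))
              fit[worst] = np.apply_along_axis(f, 1, nests[worst])
              best = nests[fit.argmin()].copy()
          return best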

  3. An Adaptive Digital Image Watermarking Algorithm Based on Morphological Haar Wavelet Transform

    NASA Astrophysics Data System (ADS)

    Huang, Xiaosheng; Zhao, Sujuan

    At present, most wavelet-based digital watermarking algorithms are based on linear wavelet transforms, and fewer on non-linear wavelet transforms. In this paper, we propose an adaptive digital image watermarking algorithm based on a non-linear wavelet transform, the Morphological Haar Wavelet Transform. In the algorithm, the original image and the watermark image are each decomposed with a multi-scale morphological wavelet transform. Then the watermark information is adaptively embedded into the original image at different resolutions, exploiting features of the Human Visual System (HVS). The experimental results show that our method is more robust and effective than ordinary wavelet transform algorithms.

  4. A new algorithmic approach for fingers detection and identification

    NASA Astrophysics Data System (ADS)

    Mubashar Khan, Arslan; Umar, Waqas; Choudhary, Taimoor; Hussain, Fawad; Haroon Yousaf, Muhammad

    2013-03-01

    Gesture recognition is concerned with the goal of interpreting human gestures through mathematical algorithms. Gestures can originate from any bodily motion or state but commonly originate from the face or hand. Hand gesture detection in a real-time environment, where time and memory are important issues, is a critical operation. Hand gesture recognition largely depends on the accurate detection of the fingers. This paper presents a new algorithmic approach to detect and identify the fingers of the human hand. The proposed algorithm does not depend upon prior knowledge of the scene. It detects the active fingers and the Metacarpophalangeal (MCP) joints of the inactive fingers from an already detected hand. A dynamic thresholding technique and a connected-component labeling scheme are employed for background elimination and hand detection, respectively. The algorithm offers a new approach to finger identification in a real-time environment while keeping the memory and time requirements as low as possible.

  5. [Accomplistments in the Last Year Against the Objectives Laid Out in the Original Proposal; the Current Status of the Research; the Work to go in the Next Year; and Publications

    NASA Technical Reports Server (NTRS)

    Elliot, James

    2005-01-01

    Below is the annual progress report (through 2005-01-31) on NASA Grant NNG04GF25G. It is organized according to: (I) Accomplishments in the last year against the objectives laid out in the original proposal; (II) The current status of the research; (III) The work to go in the next year; (IV) Publications. Since this program is a continuation of the occultation work supported in a predecessor grant, the "Accomplishments" section lists all the tasks written into the proposal (in June 2003) through the end of the first year of the new grant.

  6. Drag Measurements on Equivalent Bodies of Revolution of Six Configurations of the Convair MX-1964 (Originally MX-1626) Proposed Supersonic Bomber

    NASA Technical Reports Server (NTRS)

    Hall, James Rudyard

    1953-01-01

    Tests on equivalent bodies of revolution of six configurations of the Consolidated Vultee Aircraft Corporation proposed supersonic bomber (Convair MX-1964) have indicated that it is possible to reduce the drag of the configuration by designing it to have a favorable area distribution. The method of NACA RM L53I22c to predict the peak pressure drag of a configuration on the basis of its area distribution gave generally good agreement with the subject models.

  7. Algorithm for shortest path search in Geographic Information Systems by using reduced graphs.

    PubMed

    Rodríguez-Puente, Rafael; Lazo-Cortés, Manuel S

    2013-01-01

    The use of Geographic Information Systems has increased considerably since the eighties and nineties. Shortest path search is one of their most demanding applications. Several studies about shortest path search show the feasibility of using graphs for this purpose. Dijkstra's algorithm is one of the classic shortest path search algorithms, but it is not well suited to shortest path search in large graphs. This is the reason why various modifications to Dijkstra's algorithm have been proposed by several authors, using heuristics to reduce the run time of shortest path search. One of the most used heuristic algorithms is the A* algorithm, whose main goal is to reduce the run time by reducing the search space. This article proposes a modification of Dijkstra's shortest path search algorithm in reduced graphs. It shows that the cost of the path found in this work is equal to the cost of the path found using Dijkstra's algorithm in the original graph. The results of finding the shortest path by applying the proposed algorithm, Dijkstra's algorithm and the A* algorithm are compared. This comparison shows that, by applying the proposed approach, it is possible to obtain the optimal path in similar or even less time than when using heuristic algorithms. PMID:24010024
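
    The baseline being accelerated here is textbook Dijkstra; a compact heap-based version is sketched below on a hypothetical toy graph, so the reduced-graph idea can be read as shrinking the graph this routine runs on.

      import heapq

      def dijkstra(graph, src):
          """graph: dict mapping node -> list of (neighbor, weight) pairs."""
          dist = {src: 0}
          heap = [(0, src)]
          while heap:
              d, u = heapq.heappop(heap)
              if d > dist.get(u, float("inf")):
                  continue                       # stale heap entry
              for v, w in graph.get(u, []):
                  nd = d + w
                  if nd < dist.get(v, float("inf")):
                      dist[v] = nd
                      heapq.heappush(heap, (nd, v))
          return dist

      print(dijkstra({"a": [("b", 2), ("c", 5)], "b": [("c", 1)]}, "a"))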

  8. Generalized Pattern Search Algorithm for Peptide Structure Prediction

    PubMed Central

    Nicosia, Giuseppe; Stracquadanio, Giovanni

    2008-01-01

    Finding the near-native structure of a protein is one of the most important open problems in structural biology and biological physics. The problem becomes dramatically more difficult when a given protein has no regular secondary structure or it does not show a fold similar to structures already known. This situation occurs frequently when we need to predict the tertiary structure of small molecules, called peptides. In this research work, we propose a new ab initio algorithm, the generalized pattern search algorithm, based on the well-known class of Search-and-Poll algorithms. We performed an extensive set of simulations over a well-known set of 44 peptides to investigate the robustness and reliability of the proposed algorithm, and we compared the peptide conformation with a state-of-the-art algorithm for peptide structure prediction known as PEPstr. In particular, we tested the algorithm on the instances proposed by the originators of PEPstr, to validate the proposed algorithm; the experimental results confirm that the generalized pattern search algorithm outperforms PEPstr by 21.17% in terms of average root mean-square deviation, RMSD Cα. PMID:18487293

  9. An original approach to fill the gap in the earthquake disaster experience - a proposal for 'the archive of the quake experience' -

    NASA Astrophysics Data System (ADS)

    Tanaka, Y.; Hirayama, Y.; Kuroda, S.; Yoshida, M.

    2015-12-01

    People without severe disaster experience inevitably forget even an extraordinary one like 3.11 as time advances. Therefore, to improve societal resilience, an ingenious attempt to keep people's memory of disaster from fading away is necessary. Since 2011, we have been carrying out earthquake disaster drills for residents of high-rise apartments, for schoolchildren, for citizens of the coastal area, etc. Using a portable earthquake simulator (1), the drill consists of three parts; the first: a short lecture explaining the characteristic quakes expected for Japanese people in the future; the second: a reliving experience of major earthquakes that have hit Japan since 1995; and the third: a short lecture on preparations that can be made at home and/or in an office. For the quake experience, although it is two-dimensional movement, real earthquake observation records are used to control the simulator, allowing people to relive the experience of different kinds of earthquakes, including the long-period motion of skyscrapers. Feedback on the drill is always positive, because participants understand that reliving the quake experience with proper lectures is one of the best methods to communicate past disasters to their family and to pass them on to the next generation. There are several kinds of archives for disaster inheritance, such as pictures, movies, documents, interviews, and so on. In addition to them, here we propose to construct 'the archive of the quake experience', which compiles observed data ready to be relived with the simulator. We would like to show some movies of our quake drill in the presentation. Reference: (1) Kuroda, S. et al. (2012), "Development of portable earthquake simulator for enlightenment of disaster preparedness", 15th World Conference on Earthquake Engineering 2012, Vol. 12, 9412-9420.

  10. A hybrid artificial bee colony algorithm for numerical function optimization

    NASA Astrophysics Data System (ADS)

    Alqattan, Zakaria N.; Abdullah, Rosni

    2015-02-01

    The Artificial Bee Colony (ABC) algorithm is one of the swarm intelligence algorithms; it was introduced by Karaboga in 2005. It is a meta-heuristic optimization search algorithm inspired by the intelligent foraging behavior of honey bees in nature. Its unique search process has made it one of the most competitive algorithms compared with other search algorithms in the area of optimization, such as the Genetic Algorithm (GA) and Particle Swarm Optimization (PSO). However, the performance of ABC's local search process and its bee-movement (solution improvement) equation still has some weaknesses. ABC is good at avoiding entrapment at local optima, but it spends its time searching around unpromising, randomly selected solutions. Inspired by PSO, we propose a hybrid particle-movement ABC algorithm, called HPABC, which adapts the particle movement process to improve the exploration of the original ABC algorithm. Numerical benchmark functions were used to experimentally test the HPABC algorithm. The results illustrate that the HPABC algorithm can outperform the ABC algorithm in most of the experiments (75% better in accuracy and over 3 times faster).
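
    The hybrid idea, replacing ABC's single-dimension perturbation with a PSO-style move toward a random neighbor and the global best, might be sketched for the employed-bee phase as below; this is an assumed illustration of the particle-movement adaptation, not the authors' exact update rule, and the coefficients are hypothetical.

      import numpy as np

      rng = np.random.default_rng(0)

      def hybrid_employed_phase(foods, fits, best, f, c1=1.5, c2=1.5):
          """foods: (n, dim) food sources; fits: their objective values."""
          for i in range(len(foods)):
              r1 = rng.random(foods.shape[1])
              r2 = rng.random(foods.shape[1])
              k = rng.integers(len(foods))       # random neighbor, as in ABC
              trial = (foods[i] + c1 * r1 * (foods[k] - foods[i])
                       + c2 * r2 * (best - foods[i]))
              ft = f(trial)
              if ft < fits[i]:                   # greedy selection, as in ABC
                  foods[i], fits[i] = trial, ft
          return foods, fits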

  11. An Artificial Immune Univariate Marginal Distribution Algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, Qingbin; Kang, Shuo; Gao, Junxiang; Wu, Song; Tian, Yanping

    Hybridization is an extremely effective way of improving the performance of the Univariate Marginal Distribution Algorithm (UMDA). Owing to its diversity and memory mechanisms, the artificial immune algorithm has been widely used to construct hybrid algorithms with other optimization algorithms. This paper proposes a hybrid algorithm which combines the UMDA with the principles of the general artificial immune algorithm. Experimental results on the deceptive function of order 3 show that the proposed hybrid algorithm can find more building blocks (BBs) than the UMDA.

  12. An efficient cuckoo search algorithm for numerical function optimization

    NASA Astrophysics Data System (ADS)

    Ong, Pauline; Zainuddin, Zarita

    2013-04-01

    The cuckoo search algorithm, which reproduces the breeding strategy of the best-known brood-parasitic bird, the cuckoo, has demonstrated its superiority in obtaining global solutions for numerical optimization problems. However, the fixed-step approach involved in its exploration and exploitation behavior might slow down the search process considerably. In this regard, an improved cuckoo search algorithm with adaptive step-size adjustment is introduced and its feasibility is validated on a variety of benchmarks. The obtained results show that the proposed scheme outperforms the standard cuckoo search algorithm in terms of convergence characteristics while preserving the fascinating features of the original method.

  13. Optimal Golomb Ruler Sequences Generation for Optical WDM Systems: A Novel Parallel Hybrid Multi-objective Bat Algorithm

    NASA Astrophysics Data System (ADS)

    Bansal, Shonak; Singh, Arun Kumar; Gupta, Neena

    2016-07-01

    In real life, multi-objective engineering design problems are very tough and time-consuming optimization problems due to their high degrees of nonlinearity, complexity and inhomogeneity. Nature-inspired multi-objective optimization algorithms are becoming popular for solving such problems. This paper proposes an original multi-objective Bat algorithm (MOBA) and its extended form, namely, a novel parallel hybrid multi-objective Bat algorithm (PHMOBA), to generate shortest-length Golomb rulers, called optimal Golomb ruler (OGR) sequences, at a reasonable computation time. OGRs find application in optical wavelength division multiplexing (WDM) systems as a channel-allocation algorithm to reduce four-wave mixing (FWM) crosstalk. The performance of both proposed algorithms in generating OGRs for optical WDM channel allocation is compared with that of existing classical computing and nature-inspired algorithms, including extended quadratic congruence (EQC), search algorithm (SA), genetic algorithms (GAs), biogeography based optimization (BBO) and big bang-big crunch (BB-BC) optimization. Simulations conclude that the proposed parallel hybrid multi-objective Bat algorithm works efficiently compared to the original multi-objective Bat algorithm and the other existing algorithms in generating OGRs for optical WDM systems. The PHMOBA algorithm has higher convergence and success rates than the original MOBA. The efficiency improvement of the proposed PHMOBA in generating OGRs up to 20 marks, in terms of ruler length and total optical channel bandwidth (TBW), is 100%, whereas for the original MOBA it is 85%. Finally, the implications for further research are also discussed.
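
    Independent of the metaheuristic employed, the feasibility test at the core of any OGR search is the defining Golomb property: all pairwise differences between marks must be distinct. A minimal check follows.

      from itertools import combinations

      def is_golomb(marks):
          diffs = [b - a for a, b in combinations(sorted(marks), 2)]
          return len(diffs) == len(set(diffs))

      print(is_golomb([0, 1, 4, 9, 11]))  # True: a known optimal 5-mark ruler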

  14. Molecular diagnosis of distal renal tubular acidosis in Tunisian patients: proposed algorithm for Northern Africa populations for the ATP6V1B1, ATP6V0A4 and SLC4A1 genes

    PubMed Central

    2013-01-01

    Background: Primary distal renal tubular acidosis (dRTA) caused by mutations in the genes that codify for the H+-ATPase pump subunits is a heterogeneous disease with a poor phenotype-genotype correlation. Up to now, large cohorts of dRTA Tunisian patients have not been analyzed, and molecular defects may differ from those described in other ethnicities. We aim to identify the molecular defects present in the ATP6V1B1, ATP6V0A4 and SLC4A1 genes in a Tunisian cohort, according to the following algorithm: first, ATP6V1B1 gene analysis in dRTA patients with sensorineural hearing loss (SNHL) or unknown hearing status; afterwards, ATP6V0A4 gene study in dRTA patients with normal hearing, and in those without any structural mutation in the ATP6V1B1 gene despite presenting SNHL; finally, analysis of the SLC4A1 gene in those patients with a negative result in the previous studies. Methods: 25 children (19 boys) with dRTA from 20 families of Tunisian origin were studied. DNAs were extracted by the standard phenol/chloroform method. Molecular analysis was performed by PCR amplification and direct sequencing. Results: In the index cases, ATP6V1B1 gene screening resulted in a mutation detection rate of 81.25%, which increased up to 95% after ATP6V0A4 gene analysis. Three ATP6V1B1 mutations were observed: one frameshift mutation (c.1155dupC; p.Ile386fs) in exon 12; a G to C single nucleotide substitution on the acceptor splicing site (c.175-1G > C; p.?) in intron 2; and one novel missense mutation (c.1102G > A; p.Glu368Lys) in exon 11. We also report four mutations in the ATP6V0A4 gene: one single nucleotide deletion in exon 13 (c.1221delG; p.Met408Cysfs*10); the nonsense c.16C > T; p.Arg6*, in exon 3; and the missense changes c.1739 T > C; p.Met580Thr, in exon 17 and c.2035G > T; p.Asp679Tyr, in exon 19. Conclusion: Molecular diagnosis of the ATP6V1B1 and ATP6V0A4 genes was performed in a large Tunisian cohort with dRTA. We identified three different ATP6V1

  15. An adaptive algorithm for low contrast infrared image enhancement

    NASA Astrophysics Data System (ADS)

    Liu, Sheng-dong; Peng, Cheng-yuan; Wang, Ming-jia; Wu, Zhi-guo; Liu, Jia-qi

    2013-08-01

    An adaptive infrared image enhancement algorithm for low contrast is proposed in this paper, to deal with the problem that conventional image enhancement algorithms cannot effectively identify the region of interest when the dynamic range of the image is large. The algorithm starts from human visual perception characteristics and takes account of both global adaptive image enhancement and local feature boosting: not only is the contrast of the image raised, but the texture of the picture is also made more distinct. Firstly, the global image dynamic range is adjusted overall: the dynamic range of the original image and the display grayscale are put into a corresponding relationship, and the gray level of bright objects is raised while the gray level of dark targets is reduced at the same time, to improve the overall image contrast. Secondly, a corresponding filtering algorithm is applied to the current point and its neighborhood pixels to extract image texture information and to adjust the brightness of the current point, in order to enhance the local contrast of the image. The algorithm overcomes the defect that outlines easily become vague in traditional edge detection algorithms, and it ensures the distinctness of texture detail in the enhanced image. Lastly, we normalize the globally luminance-adjusted image and the locally brightness-adjusted image to ensure a smooth transition of image details. Many experiments were made to compare the algorithm proposed in this paper with other conventional image enhancement algorithms, using two groups of blurred IR images. The experiments show that the contrast of the picture is boosted after being processed by the histogram equalization algorithm, but the detail of the picture is not clear, whereas the detail of the picture can be distinguished after being processed by the Retinex algorithm. The image processed by the self-adaptive enhancement algorithm proposed in this paper becomes clear in its details, and the image contrast is markedly improved in comparison with the Retinex algorithm.

  16. 'Stylo-mandibular complex' fracture from a maxillofacial surgeon's perspective--review of the literature and proposal of a management algorithm.

    PubMed

    Gayathri, G; Elavenil, P; Sasikala, B; Pathumai, M; Krishnakumar Raja, V B

    2016-03-01

    The incidence of fractures of the styloid process, either in isolation or in association with mandibular fractures, is rare, and such fractures are frequently overlooked. When present, they pose a clinical dilemma in diagnosis and management. Proper management of styloid fractures is essential, not just to alleviate the patient's symptoms, but also to prevent potential complications like post-traumatic styloid syndrome and injury to adjacent vital structures. This article features a review of the literature on 'styloid fracture concomitant with mandibular fracture' along with a case report. The article explores the biomechanics resulting in styloid fracture, especially when it co-exists with mandibular fractures. The article also enumerates the clinical features of this unusual clinical phenomenon and aims at rationalizing the need for its medical or surgical management. A simple protocol for the management of 'stylo-mandibular complex' fracture has been proposed. PMID:26701324

  17. Probabilistic Route Selection Algorithm for IP Traceback

    NASA Astrophysics Data System (ADS)

    Yim, Hong-Bin; Jung, Jae-Il

    DoS (Denial of Service) and DDoS (Distributed DoS) attacks are a major threat and among the most difficult attacks to counter. Moreover, it is very difficult to find the real origin of attackers because DoS/DDoS attackers use spoofed IP addresses. To solve this problem, we propose a probabilistic route selection traceback algorithm, namely PRST, to trace the attacker's real origin. This algorithm uses two types of packets: an agent packet and a reply agent packet. The agent packet is used to find the attacker's real origin, and the reply agent packet is used to notify the victim that the agent packet has reached the edge router of the attacker. After an attack occurs, the victim generates an agent packet and sends it to the victim's edge router. The attacker's edge router, upon receiving the agent packet, generates the reply agent packet and sends it to the victim. The agent packet and the reply agent packet are forwarded by routers according to a probabilistic packet forwarding table (PPFT). The PRST algorithm runs on the distributed routers, and the PPFT is stored and managed by the routers. We validate the PRST algorithm by using a mathematical approach based on the Poisson distribution.

  18. A fast non-local image denoising algorithm

    NASA Astrophysics Data System (ADS)

    Dauwe, A.; Goossens, B.; Luong, H. Q.; Philips, W.

    2008-02-01

    In this paper we propose several improvements to the original non-local means algorithm introduced by Buades et al., which obtains state-of-the-art denoising results. The strength of this algorithm is to exploit the repetitive character of the image in order to denoise it, unlike conventional denoising algorithms, which typically operate in a local neighbourhood. Due to the enormous number of weight computations, the original algorithm has a high computational cost. One improvement of image quality over the original algorithm is to ignore the contributions from dissimilar windows. Even though their weights are very small at first sight, the new estimated pixel value can be severely biased by the many small contributions. This bad influence of dissimilar windows can be eliminated by setting their corresponding weights to zero. Using a preclassification based on the first three statistical moments, only contributions from similar neighborhoods are computed. To decide whether a window is similar or dissimilar, we derive thresholds for images corrupted with additive white Gaussian noise. Our accelerated approach is further optimized by taking advantage of the symmetry in the weights, which roughly halves the computation time, and by using a lookup table to speed up the weight computations. Compared to the original algorithm, our proposed method produces images with increased PSNR and better visual performance in less computation time. Our proposed method even outperforms state-of-the-art wavelet denoising techniques in both visual quality and PSNR values for images containing a lot of repetitive structures such as textures: the denoised images are much sharper and contain fewer artifacts. The proposed optimizations can also be applied to other image processing tasks which employ the concept of repetitive structures, such as intra-frame super-resolution or detection of digital image forgery.
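
    The preclassification idea, forcing the weights of dissimilar windows to zero before the exponential weighting, can be sketched per pixel as follows; for brevity only the first moment (the mean) is compared, whereas the paper uses the first three moments, and the threshold is hypothetical.

      import numpy as np

      def nlm_pixel(img, i, j, half=3, search=8, h=10.0, tol=0.2):
          """img: float 2-D array; (i, j): an interior pixel to denoise."""
          p = img[i - half:i + half + 1, j - half:j + half + 1]
          num = den = 0.0
          for a in range(i - search, i + search + 1):
              for b in range(j - search, j + search + 1):
                  if (a < half or b < half or
                          a + half >= img.shape[0] or b + half >= img.shape[1]):
                      continue                   # window would leave the image
                  q = img[a - half:a + half + 1, b - half:b + half + 1]
                  if abs(p.mean() - q.mean()) > tol * 255:
                      continue                   # dissimilar window: weight set to zero
                  w = np.exp(-np.sum((p - q) ** 2) / (h * h))
                  num += w * img[a, b]
                  den += w
          return num / den if den else img[i, j]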

  19. An improved image matching algorithm based on SURF and Delaunay TIN

    NASA Astrophysics Data System (ADS)

    Cheng, Yuan-ming; Cheng, Peng-gen; Chen, Xiao-yong; Zheng, Shou-zhu

    2015-12-01

    Image matching is one of the key technologies in image processing. In order to increase its efficiency and precision, a new method for image matching based on improved SURF and a Delaunay TIN is proposed in this paper. Building on the original SURF algorithm, three constraint conditions (a color invariant model, a Delaunay TIN with a triangle similarity function, and a photography invariant) are added to the original SURF model. With the proposed algorithm, the image color information is effectively retained and the erroneous matching rate of features is largely reduced. The experimental results show that the proposed method has higher matching speed, a more uniform distribution of the feature points to be matched, and a higher correct matching rate than the original algorithm.

  20. Cohenite in meteorites: A proposed origin

    USGS Publications Warehouse

    Brett, R.

    1966-01-01

    Cohenite [(Fe, Ni)3C] is found almost exclusively in meteorites containing from 6 to 8 percent nickel (by weight). On the basis of iron-nickel-carbon phase diagrams at 1 atmosphere and of kinetic data, the occurrence of cohenite within this narrow composition range as a low-pressure metastable phase and the nonoccurrence of cohenite in meteorites outside the range 6 to 8 percent nickel can be explained. Cohenite formed in meteorites containing less than 6 to 8 percent nickel decomposed to metal and graphite during cooling; it cannot form in meteorites containing more than about 8 percent. The presence of cohenite in meteorites cannot be used as an indicator of pressure of formation. However, the absence of cohenite in meteorites containing the assemblage, metal plus graphite, requires low pressures during cooling.

  1. Adaptive Load-Balancing Algorithms using Symmetric Broadcast Networks

    NASA Technical Reports Server (NTRS)

    Das, Sajal K.; Harvey, Daniel J.; Biswas, Rupak; Biegel, Bryan A. (Technical Monitor)

    2002-01-01

    In a distributed computing environment, it is important to ensure that the processor workloads are adequately balanced. Among numerous load-balancing algorithms, a unique approach due to Das and Prasad defines a symmetric broadcast network (SBN) that provides a robust communication pattern among the processors in a topology-independent manner. In this paper, we propose and analyze three efficient SBN-based dynamic load-balancing algorithms, and implement them on an SGI Origin2000. A thorough experimental study with Poisson-distributed synthetic loads demonstrates that our algorithms are effective in balancing system load. By optimizing completion time and idle time, the proposed algorithms are shown to compare favorably with several existing approaches.

  2. Developmental Algorithms Have Meaning!

    ERIC Educational Resources Information Center

    Green, John

    1997-01-01

    Adapts Stanic and McKillip's ideas for the use of developmental algorithms to propose that the present emphasis on symbolic manipulation should be tempered with an emphasis on the conceptual understanding of the mathematics underlying the algorithm. Uses examples from the areas of numeric computation, algebraic manipulation, and equation solving…

  3. Yeast Identification Algorithm Based on Use of the Vitek MS System Selectively Supplemented with Ribosomal DNA Sequencing: Proposal of a Reference Assay for Invasive Fungal Surveillance Programs in China

    PubMed Central

    Zhang, Li; Xiao, Meng; Wang, He; Gao, Ran; Fan, Xin; Brown, Mitchell; Gray, Timothy J.; Kong, Fanrong

    2014-01-01

    Sequence analysis of the internal transcribed spacer (ITS) region was employed as the gold standard method for yeast identification in the China Hospital Invasive Fungal Surveillance Net (CHIF-NET). It has subsequently been found that matrix-assisted laser desorption ionization–time of flight mass spectrometry (MALDI-TOF MS) is potentially a more practical approach for this purpose. In the present study, the performance of the Vitek MS v2.0 system for the identification of yeast isolates collected from patients with invasive fungal infections in the 2011 CHIF-NET was evaluated. A total of 1,243 isolates representing 31 yeast species were analyzed, and the identification results by the Vitek MS v2.0 system were compared to those obtained by ITS sequence analysis. By the Vitek MS v2.0 system, 96.7% (n = 1,202) of the isolates were correctly assigned to the species level and 0.2% (n = 2) of the isolates were identified to the genus level, while 2.4% (n = 30) and 0.7% (n = 9) of the isolates were unidentified and misidentified, respectively. After retesting of the unidentified and misidentified strains, 97.3% (n = 1,209) of the isolates were correctly identified to the species level. Based on these results, a testing algorithm that combines the use of the Vitek MS system with selected supplementary ribosomal DNA (rDNA) sequencing was developed and validated for yeast identification purposes. By employing this algorithm, 99.7% (1,240/1,243) of the study isolates were accurately identified with the exception of two isolates of Candida fermentati and one isolate of Cryptococcus gattii. In conclusion, the proposed identification algorithm could be practically implemented in strategic programs of fungal infection surveillance. PMID:24478490

  4. Cascade Error Projection: A New Learning Algorithm

    NASA Technical Reports Server (NTRS)

    Duong, T. A.; Stubberud, A. R.; Daud, T.; Thakoor, A. P.

    1995-01-01

    A new neural network architecture and a hardware-implementable learning algorithm are proposed. The algorithm, called cascade error projection (CEP), handles lack of precision and circuit noise better than existing algorithms.

  5. Belief network algorithms: A study of performance

    SciTech Connect

    Jitnah, N.

    1996-12-31

    This abstract gives an overview of the work. We present a survey of Belief Network algorithms and propose a domain characterization system to be used as a basis for algorithm comparison and for predicting algorithm performance.

  6. Robust watermarking on copyright protection of digital originals

    NASA Astrophysics Data System (ADS)

    Gu, C.; Hu, X. Y.

    2010-06-01

    The differences between digital vector originals and raster originals are discussed. A new algorithm based on vertex displacement is then proposed for embedding and extracting digital watermarks in vector data. The results showed that the watermark produced by this method is resistant against translation, scaling, rotation, and additive random noise; it is also resistant, to some extent, against cropping. This paper also modifies the DCT raster-image watermarking algorithm, embedding a bitmap image into target images as the watermark instead of meaningless serial numbers or simple symbols. The embedding and extraction stages of both watermarking systems were implemented in software. Experiments proved that both algorithms are not only imperceptible, but also have strong resistance against common attacks, so copyright can be proved more effectively.

  7. Application of Modified Differential Evolution Algorithm to Magnetotelluric and Vertical Electrical Sounding Data

    NASA Astrophysics Data System (ADS)

    Mingolo, Nusharin; Sarakorn, Weerachai

    2016-04-01

    In this research, a Modified Differential Evolution (DE) algorithm is proposed and applied to Magnetotelluric (MT) and Vertical Electrical Sounding (VES) data to reveal a reasonable resistivity structure. The common processes of the DE algorithm, including initialization, mutation and crossover, are modified by introducing both new control parameters and some constraints to obtain a reasonably fitting resistivity model. The validity and efficiency of the developed modified DE algorithm are tested on both synthetic and real observed data. Our developed DE algorithm is also compared to the well-known OCCAM algorithm for a real MT data case. For the synthetic case, our modified DE algorithm with appropriate control parameters reveals models that fit the original synthetic models well. For the real data case, the resistivity structures revealed by our algorithm are close to those obtained by OCCAM inversion, but our structures resolve the layers more distinctly.
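
    A minimal sketch of the standard DE/rand/1/bin scheme, to make the initialization, mutation and crossover steps that the paper modifies concrete. F, CR, the bounds and the toy objective are illustrative, not the paper's modified control parameters.

        import numpy as np

        def differential_evolution(f, bounds, pop_size=30, F=0.8, CR=0.9, iters=200):
            dim = len(bounds)
            lo, hi = np.array(bounds).T
            rng = np.random.default_rng(0)
            pop = lo + rng.random((pop_size, dim)) * (hi - lo)   # initialization
            fit = np.array([f(x) for x in pop])
            for _ in range(iters):
                for i in range(pop_size):
                    others = [k for k in range(pop_size) if k != i]
                    a, b, c = pop[rng.choice(others, 3, replace=False)]
                    mutant = np.clip(a + F * (b - c), lo, hi)    # mutation
                    cross = rng.random(dim) < CR                 # binomial crossover
                    cross[rng.integers(dim)] = True
                    trial = np.where(cross, mutant, pop[i])
                    ft = f(trial)
                    if ft < fit[i]:                              # greedy selection
                        pop[i], fit[i] = trial, ft
            best = fit.argmin()
            return pop[best], fit[best]

        # Usage: minimize a toy misfit; a real MT/VES inversion would replace
        # this with the data misfit of a layered resistivity model.
        x, fx = differential_evolution(lambda v: np.sum(v ** 2), [(-5, 5)] * 4)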

  8. Blind Alley Aware ACO Routing Algorithm

    NASA Astrophysics Data System (ADS)

    Yoshikawa, Masaya; Otani, Kazuo

    2010-10-01

    The routing problem is applied in various engineering fields, and many researchers have studied it. In this paper, we propose a new routing algorithm based on Ant Colony Optimization. The proposed algorithm introduces a tabu search mechanism to escape blind alleys, and thus can find the shortest route even if the map data contain blind alleys. Experiments using map data prove its effectiveness in comparison with the Dijkstra algorithm, the most popular conventional routing algorithm.

  9. Texture orientation-based algorithm for detecting infrared maritime targets.

    PubMed

    Wang, Bin; Dong, Lili; Zhao, Ming; Wu, Houde; Xu, Wenhai

    2015-05-20

    Infrared maritime target detection is a key technology for maritime target searching systems. However, in infrared maritime images (IMIs) taken under complicated sea conditions, background clutter such as ocean waves, clouds or sea fog usually has high intensity that can easily overwhelm the brightness of real targets, which is difficult for traditional target detection algorithms to deal with. To mitigate this problem, this paper proposes a novel target detection algorithm based on texture orientation. The algorithm first extracts suspected targets by analyzing the intersubband correlation between the horizontal and vertical wavelet subbands of the original IMI on the first scale. Then self-adaptive wavelet threshold denoising and local singularity analysis of the original IMI are combined to further remove false alarms. Experiments show that, compared with traditional algorithms, this algorithm suppresses background clutter much better and achieves better single-frame detection of infrared maritime targets. In addition, to further guarantee accurate target extraction, a pipeline-filtering algorithm is adopted to eliminate residual false alarms. The high practical value and applicability of the proposed strategy are strongly supported by experimental data acquired under different environmental conditions. PMID:26192503

  10. Genetic algorithms

    NASA Technical Reports Server (NTRS)

    Wang, Lui; Bayer, Steven E.

    1991-01-01

    Genetic algorithms are mathematical, highly parallel, adaptive search procedures (i.e., problem-solving methods) based loosely on the processes of natural genetics and Darwinian survival of the fittest. Basic genetic algorithm concepts and applications are introduced, and results are presented from a project to develop a software tool that will enable the widespread use of genetic algorithm technology.
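
    To make the basic concepts concrete, here is a minimal sketch of a textbook genetic algorithm with bit-string encoding, tournament selection, one-point crossover and bit-flip mutation. All parameters and the one-max objective are illustrative, not tied to the software tool described above.

        import random

        def genetic_algorithm(fitness, n_bits=20, pop_size=50, gens=100,
                              p_cross=0.9, p_mut=0.01):
            pop = [[random.randint(0, 1) for _ in range(n_bits)]
                   for _ in range(pop_size)]
            for _ in range(gens):
                scored = [(fitness(ind), ind) for ind in pop]
                # Tournament selection: the fitter of two random individuals wins.
                def select():
                    a, b = random.sample(scored, 2)
                    return (a if a[0] >= b[0] else b)[1]
                nxt = []
                while len(nxt) < pop_size:
                    p1, p2 = select(), select()
                    if random.random() < p_cross:          # one-point crossover
                        cut = random.randrange(1, n_bits)
                        p1, p2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
                    nxt += [[b ^ (random.random() < p_mut) for b in child]
                            for child in (p1, p2)]          # bit-flip mutation
                pop = nxt[:pop_size]
            return max(pop, key=fitness)

        # Usage: maximize the number of ones (the "one-max" toy problem).
        best = genetic_algorithm(sum)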

  11. Semioptimal practicable algorithmic cooling

    SciTech Connect

    Elias, Yuval; Mor, Tal; Weinstein, Yossi

    2011-04-15

    Algorithmic cooling (AC) of spins applies entropy manipulation algorithms in open spin systems in order to cool spins far beyond Shannon's entropy bound. Algorithmic cooling of nuclear spins was demonstrated experimentally and may contribute to nuclear magnetic resonance spectroscopy. Several cooling algorithms were suggested in recent years, including practicable algorithmic cooling (PAC) and exhaustive AC. Practicable algorithms have simple implementations, yet their level of cooling is far from optimal; exhaustive algorithms, on the other hand, cool much better, and some even reach (asymptotically) an optimal level of cooling, but they are not practicable. We introduce here semioptimal practicable AC (SOPAC), wherein a few cycles (typically two to six) are performed at each recursive level. Two classes of SOPAC algorithms are proposed and analyzed. Both attain cooling levels significantly better than PAC and are much more efficient than the exhaustive algorithms. These algorithms are shown to bridge the gap between PAC and exhaustive AC. In addition, we calculated the number of spins required by SOPAC in order to purify qubits for quantum computation. As few as 12 and 7 spins are required (in an ideal scenario) to yield a mildly pure spin (60% polarized) from initial polarizations of 1% and 10%, respectively. In the latter case, about five more spins are sufficient to produce a highly pure spin (99.99% polarized), which could be relevant for fault-tolerant quantum computing.

  12. Digital watermarking algorithm research of color images based on quaternion Fourier transform

    NASA Astrophysics Data System (ADS)

    An, Mali; Wang, Weijiang; Zhao, Zhen

    2013-10-01

    A watermarking algorithm for color images based on the quaternion Fourier transform (QFFT) and an improved quantization index modulation (QIM) algorithm is proposed in this paper. The original image is transformed by the QFFT, the watermark image is processed by compression and quantization coding, and the processed watermark is then embedded into the components of the transformed original image. The scheme achieves embedding and blind extraction of the watermark image. The experimental results show that the watermarking algorithm based on the improved QIM algorithm with distortion compensation achieves a good tradeoff between invisibility and robustness, and better robustness than the traditional QIM algorithm against Gaussian noise, salt-and-pepper noise, JPEG compression, cropping, filtering and image enhancement.
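
    A minimal sketch of plain quantization index modulation, the baseline that the improved algorithm builds on: each transform coefficient is quantized to one of two interleaved lattices according to the watermark bit. The step size delta is illustrative, and the paper's distortion compensation is omitted.

        import numpy as np

        def qim_embed(coeffs, bits, delta=8.0):
            # Shift the quantization grid by delta/2 when the bit is 1.
            offsets = np.asarray(bits) * (delta / 2.0)
            return np.round((coeffs - offsets) / delta) * delta + offsets

        def qim_extract(coeffs, delta=8.0):
            # Decode by choosing the nearer of the two lattices.
            d0 = np.abs(coeffs - np.round(coeffs / delta) * delta)
            shifted = coeffs - delta / 2.0
            d1 = np.abs(shifted - np.round(shifted / delta) * delta)
            return (d1 < d0).astype(int)

        # Usage on dummy transform coefficients.
        c = np.random.randn(8) * 20
        bits = np.array([1, 0, 1, 1, 0, 0, 1, 0])
        assert np.array_equal(qim_extract(qim_embed(c, bits)), bits)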

  13. Enhancement of the ill-conditioned original recordings using novel ICA technique

    NASA Astrophysics Data System (ADS)

    Naik, Ganesh R.

    2012-07-01

    The independent component analysis (ICA) method proposed in this study uses the FastICA algorithm to improve the quality of the original recordings, and can serve as a valuable pre-processing technique in signal processing methods. Initially, the ill-conditioned original audio recordings are separated using ICA methods; later, they are reconstructed using a modified un-mixing matrix. The simulation results showed a substantial improvement in the original signal after reconstruction. The new method achieves higher accuracy than alternatives in terms of the variance of the gain matrix. The proposed method has potential applications in audio and biosignal processing techniques.
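
    A minimal sketch of the separate-then-reconstruct idea using scikit-learn's FastICA. The toy sources, the mixing matrix, and the zeroing of one mixing column (a stand-in for the paper's modified un-mixing matrix) are all illustrative assumptions.

        import numpy as np
        from sklearn.decomposition import FastICA

        t = np.linspace(0, 1, 2000)
        sources = np.c_[np.sin(2 * np.pi * 5 * t),           # two toy sources
                        np.sign(np.sin(2 * np.pi * 3 * t))]
        A = np.array([[1.0, 0.6], [0.4, 1.0]])               # mixing matrix
        recordings = sources @ A.T                           # "ill-conditioned" mix

        ica = FastICA(n_components=2, random_state=0)
        estimated = ica.fit_transform(recordings)            # separated components

        # Reconstruct while suppressing the second component, standing in for
        # the reconstruction from a modified un-mixing matrix.
        mixing = ica.mixing_.copy()
        mixing[:, 1] = 0.0
        reconstructed = estimated @ mixing.T + ica.mean_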

  14. A distributed Canny edge detector: algorithm and FPGA implementation.

    PubMed

    Xu, Qian; Varadarajan, Srenivas; Chakrabarti, Chaitali; Karam, Lina J

    2014-07-01

    The Canny edge detector is one of the most widely used edge detection algorithms due to its superior performance. Unfortunately, not only is it computationally more intensive as compared with other edge detection algorithms, but it also has a higher latency because it is based on frame-level statistics. In this paper, we propose a mechanism to implement the Canny algorithm at the block level without any loss in edge detection performance compared with the original frame-level Canny algorithm. Directly applying the original Canny algorithm at the block-level leads to excessive edges in smooth regions and to loss of significant edges in high-detailed regions since the original Canny computes the high and low thresholds based on the frame-level statistics. To solve this problem, we present a distributed Canny edge detection algorithm that adaptively computes the edge detection thresholds based on the block type and the local distribution of the gradients in the image block. In addition, the new algorithm uses a nonuniform gradient magnitude histogram to compute block-based hysteresis thresholds. The resulting block-based algorithm has a significantly reduced latency and can be easily integrated with other block-based image codecs. It is capable of supporting fast edge detection of images and videos with high resolutions, including full-HD since the latency is now a function of the block size instead of the frame size. In addition, quantitative conformance evaluations and subjective tests show that the edge detection performance of the proposed algorithm is better than the original frame-based algorithm, especially when noise is present in the images. Finally, this algorithm is implemented using a 32 computing engine architecture and is synthesized on the Xilinx Virtex-5 FPGA. The synthesized architecture takes only 0.721 ms (including the SRAM READ/WRITE time and the computation time) to detect edges of 512 × 512 images in the USC SIPI database when clocked at 100
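
    A minimal sketch of the core idea of block-adaptive thresholds: hysteresis thresholds are derived per block from the local gradient-magnitude distribution rather than from frame-level statistics. The percentile rule and the smooth-block test are simplified stand-ins for the paper's block-type classification and nonuniform histogram.

        import numpy as np
        from scipy import ndimage

        def block_canny_thresholds(img, block=64, high_pct=80, low_ratio=0.4):
            gx = ndimage.sobel(img.astype(float), axis=1)
            gy = ndimage.sobel(img.astype(float), axis=0)
            mag = np.hypot(gx, gy)                      # gradient magnitude
            thresholds = {}
            H, W = img.shape
            for i in range(0, H, block):
                for j in range(0, W, block):
                    m = mag[i:i + block, j:j + block]
                    if m.std() < 1e-3:                  # smooth block: suppress edges
                        high = np.inf
                    else:                               # textured block: local percentile
                        high = np.percentile(m, high_pct)
                    thresholds[(i, j)] = (low_ratio * high, high)
            return thresholds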

  15. Improved restoration algorithm for weakly blurred and strongly noisy image

    NASA Astrophysics Data System (ADS)

    Liu, Qianshun; Xia, Guo; Zhou, Haiyang; Bai, Jian; Yu, Feihong

    2015-10-01

    In real applications, such as consumer digital imaging, it is very common to record weakly blurred and strongly noisy images. Recently, a state-of-the-art algorithm named geometric locally adaptive sharpening (GLAS) has been proposed. By capturing local image structure, it can effectively combine denoising and sharpening. However, two problems remain in practice. On one hand, two hard thresholds have to be constantly adjusted for different images so as not to produce over-sharpening artifacts. On the other hand, the smoothing parameter must be set precisely by hand; otherwise, it will seriously magnify the noise. These parameters have to be set in advance and purely empirically, which is difficult to achieve in a practical application, so the method is neither easy to use nor sufficiently automatic. In an effort to improve restoration in this situation, an improved GLAS (IGLAS) algorithm that introduces the local phase coherence sharpening index (LPCSI) metric is proposed in this paper. With the help of the LPCSI metric, the two hard thresholds can be fixed at constant values for all images; compared to the original method, the thresholds in our new algorithm no longer need to change with different images. Based on the proposed IGLAS, an automatic version is also developed to compensate for the disadvantages of manual intervention. Simulated and real experimental results show that the proposed algorithm not only obtains better performance than the original method, but is also very easy to apply.

  16. Reconstruction-plane-dependent weighted FDK algorithm for cone beam volumetric CT

    NASA Astrophysics Data System (ADS)

    Tang, Xiangyang; Hsieh, Jiang

    2005-04-01

    The original FDK algorithm has been extensively employed in medical and industrial imaging applications. With an increased cone angle, cone beam (CB) artifacts in images reconstructed by the original FDK algorithm deteriorate, since the circular trajectory does not satisfy the so-called data sufficiency condition (DSC). A few "circular plus" trajectories have been proposed in the past to reduce CB artifacts by meeting the DSC. However, the circular trajectory has distinct advantages over other scanning trajectories in practical CT imaging, such as cardiac, vascular and perfusion applications. In addition to the DSC, another insight into the CB artifacts of the original FDK algorithm is the inconsistency between conjugate rays that are 180° apart in view angle. This inconsistency is pixel dependent, i.e., it varies dramatically over pixels within the image plane to be reconstructed. However, the original FDK algorithm treats all conjugate rays equally, resulting in CB artifacts that could be avoided if an appropriate view weighting strategy were exercised. In this paper, a modified FDK algorithm is proposed, along with an experimental evaluation and verification, in which a helical body phantom and a humanoid head phantom scanned by a volumetric CT (64 x 0.625 mm) are utilized. Without extra trajectories supplementing the circular trajectory, the modified FDK algorithm applies reconstruction-plane-dependent view weighting to the projection data before 3D backprojection, which reduces the inconsistency between conjugate rays by suppressing the contribution of the conjugate ray with the larger cone angle. Both computer-simulated and real phantom studies show that, up to a moderate cone angle, the CB artifacts can be substantially suppressed by the modified FDK algorithm, while advantages of the original FDK algorithm, such as the filtered backprojection algorithm structure, 1D ramp filtering, and data manipulation efficiency, are preserved.

  17. Gradient maintenance: A new algorithm for fast online replanning

    SciTech Connect

    Ahunbay, Ergun E. Li, X. Allen

    2015-06-15

    Purpose: Clinical use of online adaptive replanning has been hampered by the impractically long time required to delineate volumes based on the image of the day. The authors propose a new replanning algorithm, named gradient maintenance (GM), which does not require the delineation of organs at risk (OARs), and can enhance automation, drastically reducing planning time and improving the consistency and throughput of online replanning. Methods: The proposed GM algorithm is based on the hypothesis that if the dose gradient toward each OAR in the daily anatomy can be maintained the same as that in the original plan, the intended quality of the original plan will be preserved in the adaptive plan. The algorithm requires a series of partial concentric rings (PCRs) to be automatically generated around the target toward each OAR on the planning and the daily images. The PCRs are used in the daily optimization objective function. The PCR dose constraints are generated with dose-volume data extracted from the original plan. To demonstrate this idea, GM plans generated using daily images acquired with an in-room CT were compared to regular optimization and image guided radiation therapy repositioning plans for representative prostate and pancreatic cancer cases. Results: Adaptive replanning using the GM algorithm, requiring only the target contour from the CT of the day, can be completed within 5 min without using high-power hardware. The obtained adaptive plans were almost as good as the regular optimization plans and were better than the repositioning plans for the cases studied. Conclusions: The newly proposed GM replanning algorithm, requiring only target delineation, not full delineation of OARs, substantially increases planning speed for online adaptive replanning. The preliminary results indicate that the GM algorithm may be a solution to improve the ability for automation and may be especially suitable for sites with small-to-medium size targets surrounded by OARs.

  18. A biconjugate gradient type algorithm on massively parallel architectures

    NASA Technical Reports Server (NTRS)

    Freund, Roland W.; Hochbruck, Marlis

    1991-01-01

    The biconjugate gradient (BCG) method is the natural generalization of the classical conjugate gradient algorithm for Hermitian positive definite matrices to general non-Hermitian linear systems. Unfortunately, the original BCG algorithm is susceptible to possible breakdowns and numerical instabilities. Recently, Freund and Nachtigal proposed a novel BCG-type approach, the quasi-minimal residual method (QMR), which overcomes the problems of BCG. Here, an implementation of QMR is presented based on an s-step version of the nonsymmetric look-ahead Lanczos algorithm. The main feature of the s-step Lanczos algorithm is that, in general, all inner products, except for one, can be computed in parallel at the end of each block, unlike the standard Lanczos process, where inner products are generated sequentially. The resulting implementation of QMR is particularly attractive on massively parallel SIMD architectures, such as the Connection Machine.

  19. Error Estimation for the Linearized Auto-Localization Algorithm

    PubMed Central

    Guevara, Jorge; Jiménez, Antonio R.; Prieto, Jose Carlos; Seco, Fernando

    2012-01-01

    The Linearized Auto-Localization (LAL) algorithm estimates the position of beacon nodes in Local Positioning Systems (LPSs), using only the distance measurements to a mobile node whose position is also unknown. The LAL algorithm calculates the inter-beacon distances, used for the estimation of the beacons’ positions, from the linearized trilateration equations. In this paper we propose a method to estimate the propagation of the errors of the inter-beacon distances obtained with the LAL algorithm, based on a first order Taylor approximation of the equations. Since the method depends on such approximation, a confidence parameter τ is defined to measure the reliability of the estimated error. Field evaluations showed that by applying this information to an improved weighted-based auto-localization algorithm (WLAL), the standard deviation of the inter-beacon distances can be improved by more than 30% on average with respect to the original LAL method. PMID:22736965
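
    A minimal sketch of the generic first-order Taylor error propagation technique behind the LAL error estimate: the covariance of f(x) for noisy inputs x is approximated by J Sigma J^T, with J the Jacobian of f. The numerical Jacobian, the toy distance function and the noise levels are illustrative; the confidence parameter τ is not modeled here.

        import numpy as np

        def propagate_error(f, x, sigma, eps=1e-6):
            """Approximate the std of f(x) given independent noise std per input."""
            x = np.asarray(x, dtype=float)
            fx = np.atleast_1d(f(x))
            J = np.zeros((fx.size, x.size))
            for k in range(x.size):                  # numerical Jacobian, column by column
                dx = np.zeros_like(x); dx[k] = eps
                J[:, k] = (np.atleast_1d(f(x + dx)) - fx) / eps
            cov = J @ np.diag(np.asarray(sigma) ** 2) @ J.T
            return np.sqrt(np.diag(cov))

        # Usage: error of a 2D inter-point distance computed from noisy coordinates.
        dist = lambda p: np.hypot(p[0] - p[2], p[1] - p[3])
        print(propagate_error(dist, [0, 0, 3, 4], sigma=[0.1] * 4))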

  20. Error estimation for the linearized auto-localization algorithm.

    PubMed

    Guevara, Jorge; Jiménez, Antonio R; Prieto, Jose Carlos; Seco, Fernando

    2012-01-01

    The Linearized Auto-Localization (LAL) algorithm estimates the position of beacon nodes in Local Positioning Systems (LPSs), using only the distance measurements to a mobile node whose position is also unknown. The LAL algorithm calculates the inter-beacon distances, used for the estimation of the beacons' positions, from the linearized trilateration equations. In this paper we propose a method to estimate the propagation of the errors of the inter-beacon distances obtained with the LAL algorithm, based on a first order Taylor approximation of the equations. Since the method depends on such approximation, a confidence parameter τ is defined to measure the reliability of the estimated error. Field evaluations showed that by applying this information to an improved weighted-based auto-localization algorithm (WLAL), the standard deviation of the inter-beacon distances can be improved by more than 30% on average with respect to the original LAL method. PMID:22736965

  1. Stochastic Leader Gravitational Search Algorithm for Enhanced Adaptive Beamforming Technique

    PubMed Central

    Darzi, Soodabeh; Islam, Mohammad Tariqul; Tiong, Sieh Kiong; Kibria, Salehin; Singh, Mandeep

    2015-01-01

    In this paper, a stochastic leader gravitational search algorithm (SL-GSA) based on randomized k is proposed. Standard GSA (SGSA) utilizes the best agents without any randomization, so it is more prone to converging to suboptimal results. Initially, the new approach randomly chooses k agents from the set of all agents to improve the global search ability. Gradually, the set of agents is reduced by eliminating the agents with the poorest performance to allow rapid convergence. The performance of the SL-GSA was analyzed for six well-known benchmark functions, and the results are compared with SGSA and some of its variants. Furthermore, the SL-GSA is applied to the minimum variance distortionless response (MVDR) beamforming technique to ensure compatibility with real-world optimization problems. The proposed algorithm demonstrates a superior convergence rate and quality of solution for both real-world problems and benchmark functions compared to the original algorithm and other recent variants of SGSA. PMID:26552032

  2. Stochastic Leader Gravitational Search Algorithm for Enhanced Adaptive Beamforming Technique.

    PubMed

    Darzi, Soodabeh; Islam, Mohammad Tariqul; Tiong, Sieh Kiong; Kibria, Salehin; Singh, Mandeep

    2015-01-01

    In this paper, a stochastic leader gravitational search algorithm (SL-GSA) based on randomized k is proposed. Standard GSA (SGSA) utilizes the best agents without any randomization, so it is more prone to converging to suboptimal results. Initially, the new approach randomly chooses k agents from the set of all agents to improve the global search ability. Gradually, the set of agents is reduced by eliminating the agents with the poorest performance to allow rapid convergence. The performance of the SL-GSA was analyzed for six well-known benchmark functions, and the results are compared with SGSA and some of its variants. Furthermore, the SL-GSA is applied to the minimum variance distortionless response (MVDR) beamforming technique to ensure compatibility with real-world optimization problems. The proposed algorithm demonstrates a superior convergence rate and quality of solution for both real-world problems and benchmark functions compared to the original algorithm and other recent variants of SGSA. PMID:26552032

  3. Kernel simplex growing algorithm for hyperspectral endmember extraction

    NASA Astrophysics Data System (ADS)

    Zhao, Liaoying; Zheng, Junpeng; Li, Xiaorun; Wang, Lijiao

    2014-01-01

    In order to effectively extract endmembers from hyperspectral imagery where the linear mixing model may not be appropriate due to multiple scattering effects, this paper extends the simplex growing algorithm (SGA) to its kernel version. A new simplex volume formula without dimension reduction is used in SGA to form a new simplex growing algorithm (NSGA). The original data are nonlinearly mapped into a high-dimensional space where the scattering effects can be ignored. To avoid determining the complex nonlinear mapping explicitly, a kernel function is used to extend the NSGA to the kernel NSGA (KNSGA). Experimental results on simulated and real data prove that the proposed KNSGA approach outperforms SGA and NSGA.

  4. A PARALIND Decomposition-Based Coherent Two-Dimensional Direction of Arrival Estimation Algorithm for Acoustic Vector-Sensor Arrays

    PubMed Central

    Zhang, Xiaofei; Zhou, Min; Li, Jianfeng

    2013-01-01

    In this paper, we combine the acoustic vector-sensor array parameter estimation problem with the parallel profiles with linear dependencies (PARALIND) model, which was originally applied to biology and chemistry. Exploiting the PARALIND decomposition approach, we propose a blind coherent two-dimensional direction of arrival (2D-DOA) estimation algorithm for arbitrarily spaced acoustic vector-sensor arrays subject to unknown locations. The proposed algorithm works well to achieve automatically paired azimuth and elevation angles for coherent and incoherent angle estimation of acoustic vector-sensor arrays, as well as the paired correlated matrix of the sources. Our algorithm, in contrast with conventional coherent angle estimation algorithms such as the forward backward spatial smoothing (FBSS) estimation of signal parameters via rotational invariance technique (ESPRIT) algorithm, not only has much better angle estimation performance, even for closely-spaced sources, but is also available for arbitrary arrays. Simulation results verify the effectiveness of our algorithm. PMID:23604030

  5. The Algorithm Selection Problem

    NASA Technical Reports Server (NTRS)

    Minton, Steve; Allen, John; Deiss, Ron (Technical Monitor)

    1994-01-01

    Work on NP-hard problems has shown that many instances of these theoretically computationally difficult problems are quite easy. The field has also shown that choosing the right algorithm for the problem can have a profound effect on the time needed to find a solution. However, to date there has been little work showing how to select the right algorithm for solving any particular problem. The paper refers to this as the algorithm selection problem. It describes some of the aspects that make this problem difficult and proposes a technique for addressing it.

  6. Optimized mean shift algorithm for color segmentation in image sequences

    NASA Astrophysics Data System (ADS)

    Bailer, Werner; Schallauer, Peter; Haraldsson, Harald B.; Rehatschek, Herwig

    2005-03-01

    The application of the mean shift algorithm to color image segmentation was proposed in 1997 by Comaniciu and Meer. We apply mean shift color segmentation to image sequences, as the first step of a moving-object segmentation algorithm. Previous work has shown that it is well suited for this task, because it provides better temporal stability of the segmentation result than other approaches. The drawback is higher computational cost. To speed up processing on image sequences, we exploit the fact that subsequent frames are similar and use the cluster centers of previous frames as initial estimates, which also enhances spatial segmentation continuity. In contrast to other implementations, we use the originally proposed CIE LUV color space to ensure high-quality segmentation results. We show that moderate quantization of the input data before conversion to CIE LUV has little influence on the segmentation quality but results in a significant speed up. We also propose changes in the post-processing step to increase the temporal stability of border pixels. We perform an objective evaluation of the segmentation results to compare the original algorithm with our modified version. We show that our optimized algorithm reduces processing time and increases the temporal stability of the segmentation.
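
    A minimal sketch of the warm-start scheme described above, using scikit-learn's MeanShift: the cluster centers found in one frame seed the search in the next. The frame format (CIE LUV pixel arrays) and the bandwidth are illustrative assumptions.

        import numpy as np
        from sklearn.cluster import MeanShift

        def segment_sequence(frames_luv, bandwidth=8.0):
            seeds = None
            labels_per_frame = []
            for frame in frames_luv:                  # frame: (H, W, 3) array in LUV
                X = frame.reshape(-1, 3).astype(float)
                ms = MeanShift(bandwidth=bandwidth, seeds=seeds,
                               bin_seeding=seeds is None)
                labels = ms.fit_predict(X)
                seeds = ms.cluster_centers_           # warm-start the next frame
                labels_per_frame.append(labels.reshape(frame.shape[:2]))
            return labels_per_frame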

  7. An efficient algorithm for function optimization: modified stem cells algorithm

    NASA Astrophysics Data System (ADS)

    Taherdangkoo, Mohammad; Paziresh, Mahsa; Yazdi, Mehran; Bagheri, Mohammad

    2013-03-01

    In this paper, we propose an optimization algorithm based on the intelligent behavior of stem cell swarms in reproduction and self-organization. Optimization algorithms such as the Genetic Algorithm (GA), Particle Swarm Optimization (PSO), Ant Colony Optimization (ACO) and the Artificial Bee Colony (ABC) algorithm can give near-optimal solutions to linear and non-linear problems in many applications; however, in some cases, they can suffer from becoming trapped in local optima. The Stem Cells Algorithm (SCA) is an optimization algorithm inspired by the natural behavior of stem cells in evolving themselves into new and improved cells. The SCA avoids the local optima problem successfully. In this paper, we have made small changes in the implementation of this algorithm to obtain improved performance over previous versions. Using a series of benchmark functions, we assess the performance of the proposed algorithm and compare it with that of the other aforementioned optimization algorithms. The obtained results prove the superiority of the Modified Stem Cells Algorithm (MSCA).

  8. The genetic algorithms for trajectory optimization

    NASA Astrophysics Data System (ADS)

    Janin, G.; Gomez-Tierno, M. A.

    1985-10-01

    Possible difficulties encountered when solving space flight trajectory optimization problems are recalled. The need of a global optimization scheme is realized. Nondeterministic methods, called here stochastic methods, seem to be good candidates for solving these types of problems. A particular class of such methods, modelled upon search strategies employed in natural adaptation, is proposed here: the genetic algorithms. Two models, the mutation-selection and the crossover-selection, are discussed and remarks resulting from applications to test problems and space flight problems are made. It is concluded that a considerable effort is still needed for developing efficient schemes using genetic algorithms. However, they appear to offer an entirely original way for solving a large class of global optimization problems and they are particularly well-suited for parallel processing to be used in the fifth generation computers.

  9. Quantum algorithms for quantum field theories

    NASA Astrophysics Data System (ADS)

    Jordan, Stephen

    2015-03-01

    Ever since Feynman's original proposal for quantum computers, one of the primary applications envisioned has been efficient simulation of other quantum systems. In fact, it has been conjectured that quantum computers would be universal simulators, which can simulate all physical systems using computational resources that scale polynomially with the system's number of degrees of freedom. Quantum field theories have posed a challenge in that the set of degrees of freedom is formally infinite. We show how quantum computers, if built, could nevertheless efficiently simulate certain quantum field theories at bounded energy scales. Our algorithm includes a new state preparation technique which we believe may find additional applications in quantum algorithms. Joint work with Keith Lee and John Preskill.

  10. Original Misunderstanding

    ERIC Educational Resources Information Center

    Holtzman, Alexander

    2009-01-01

    Humorist Josh Billings quipped, "About the most originality that any writer can hope to achieve honestly is to steal with good judgment." Billings was harsh in his view of originality, but his critique reveals a tension faced by students every time they write a history paper. Research is the essence of any history paper. Especially in high school,…

  11. Simple Common Plane contact algorithm for explicit FE/FD methods

    SciTech Connect

    Vorobiev, O

    2006-12-18

    The common-plane (CP) algorithm is widely used in the Discrete Element Method (DEM) to model contact forces between interacting particles or blocks. A new, simple contact algorithm, similar to the CP algorithm, is proposed to model contacts in FE/FD methods. The CP is defined as a plane separating interacting faces of the FE/FD mesh, instead of the blocks or particles used in the original CP method. The new method does not require iterations, even for very stiff contacts. It is very robust and easy to implement in both 2D and 3D parallel codes.

  12. Localized Ambient Solidity Separation Algorithm Based Computer User Segmentation

    PubMed Central

    Sun, Xiao; Zhang, Tongda; Chai, Yueting; Liu, Yi

    2015-01-01

    Most popular clustering methods make strong assumptions about the dataset. For example, k-means implicitly assumes that all clusters come from spherical Gaussian distributions which have different means but the same covariance. However, when dealing with datasets that have diverse distribution shapes or high dimensionality, these assumptions might not be valid anymore. In order to overcome this weakness, we propose a new clustering algorithm named the localized ambient solidity separation (LASS) algorithm, which uses a new isolation criterion called centroid distance. Compared with other density-based isolation criteria, the proposed centroid distance isolation criterion addresses the problems caused by high dimensionality and varying density. An experiment on a designed two-dimensional benchmark dataset shows that the proposed LASS algorithm not only inherits the advantage of the original dissimilarity increments clustering method in separating naturally isolated clusters but can also identify clusters which are adjacent, overlapping, and under background noise. Finally, we compared the LASS algorithm with the dissimilarity increments clustering method on a massive computer-user dataset with over two million records containing demographic and behavioral information. The results show that the LASS algorithm works extremely well on this dataset and can gain more knowledge from it. PMID:26221133

  13. Localized Ambient Solidity Separation Algorithm Based Computer User Segmentation.

    PubMed

    Sun, Xiao; Zhang, Tongda; Chai, Yueting; Liu, Yi

    2015-01-01

    Most popular clustering methods make strong assumptions about the dataset. For example, k-means implicitly assumes that all clusters come from spherical Gaussian distributions which have different means but the same covariance. However, when dealing with datasets that have diverse distribution shapes or high dimensionality, these assumptions might not be valid anymore. In order to overcome this weakness, we propose a new clustering algorithm named the localized ambient solidity separation (LASS) algorithm, which uses a new isolation criterion called centroid distance. Compared with other density-based isolation criteria, the proposed centroid distance isolation criterion addresses the problems caused by high dimensionality and varying density. An experiment on a designed two-dimensional benchmark dataset shows that the proposed LASS algorithm not only inherits the advantage of the original dissimilarity increments clustering method in separating naturally isolated clusters but can also identify clusters which are adjacent, overlapping, and under background noise. Finally, we compared the LASS algorithm with the dissimilarity increments clustering method on a massive computer-user dataset with over two million records containing demographic and behavioral information. The results show that the LASS algorithm works extremely well on this dataset and can gain more knowledge from it. PMID:26221133

  14. A Hybrid Shortest Path Algorithm for Navigation System

    NASA Astrophysics Data System (ADS)

    Cho, Hsun-Jung; Lan, Chien-Lun

    2007-12-01

    Combined with Geographic Information Systems (GIS) and the Global Positioning System (GPS), vehicle navigation systems have become quite popular products in daily life. A key component of a navigation system is the shortest path algorithm. Navigation in the real world must deal with a network consisting of tens of thousands of nodes and links, or even more. Under the limited computation capability of vehicle navigation equipment, it is difficult to satisfy the real-time response requirement that users expect. Hence, this study focused on a shortest path algorithm that enhances computation speed with a small memory requirement. Several well-known algorithms, such as Dijkstra, A* and hierarchical concepts, were integrated to build hybrid algorithms that reduce the search space and improve search speed. Numerical examples were conducted on the Taiwan highway network, which consists of more than four hundred thousand links and nearly three hundred thousand nodes. This real network was divided into two connected sub-networks (layers): the upper layer is constructed from freeways and expressways, the lower layer from local networks. Test origin-destination pairs were chosen randomly and divided into three distance categories: short, medium and long. The outcome is evaluated by actual length and travel time. The numerical examples reveal that the hybrid algorithm proposed by this research can be tens of thousands of times faster than the traditional Dijkstra algorithm, and its memory requirement is also much smaller. This shows that the proposed algorithm has a clear advantage for vehicle navigation systems.
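
    A minimal sketch of A* search with a straight-line-distance heuristic, one of the well-known building blocks the hybrid algorithm integrates. The adjacency-dict graph representation is illustrative, and the hierarchical two-layer decomposition is not shown.

        import heapq, math

        def a_star(graph, coords, start, goal):
            """graph: node -> [(neighbor, edge_length)]; coords: node -> (x, y)."""
            h = lambda n: math.dist(coords[n], coords[goal])   # admissible heuristic
            best = {start: 0.0}
            frontier = [(h(start), start, [start])]
            while frontier:
                f, node, path = heapq.heappop(frontier)
                if node == goal:
                    return path, best[node]
                for nbr, w in graph.get(node, []):
                    g = best[node] + w
                    if g < best.get(nbr, math.inf):
                        best[nbr] = g
                        heapq.heappush(frontier, (g + h(nbr), nbr, path + [nbr]))
            return None, math.inf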

  15. Deconvolution of interferometric data using interior point iterative algorithms

    NASA Astrophysics Data System (ADS)

    Theys, C.; Lantéri, H.; Aime, C.

    2016-09-01

    We address the problem of deconvolution of astronomical images that could be obtained with future large interferometers in space. The presentation is made in two complementary parts. The first part gives an introduction to image deconvolution with linear and nonlinear algorithms. The emphasis is placed on nonlinear iterative algorithms that verify the constraints of non-negativity and constant flux. The Richardson-Lucy algorithm appears there as a special case for photon counting conditions. More generally, the algorithm published recently by Lanteri et al. (2015) is based on scale-invariant divergences without any assumption on the statistical model of the data. The two proposed algorithms are interior-point algorithms, the latter being more efficient in terms of computation speed. These algorithms are applied to the deconvolution of simulated images corresponding to an interferometric system of 16 diluted telescopes in space. Two non-redundant configurations, one disposed around a circle and the other on a hexagonal lattice, are compared for their effectiveness on a simple astronomical object. The comparison is made in the direct and Fourier spaces. Raw "dirty" images have many artifacts due to replicas of the original object. Linear methods cannot remove these replicas, while the iterative methods clearly show their efficacy in these examples.
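
    A minimal sketch of the Richardson-Lucy iteration mentioned above: a multiplicative update that preserves non-negativity and, for a normalized PSF, approximately conserves flux. The FFT-based convolution via scipy and the iteration count are illustrative choices.

        import numpy as np
        from scipy.signal import fftconvolve

        def richardson_lucy(image, psf, iters=50):
            psf = psf / psf.sum()                       # normalized PSF
            psf_mirror = psf[::-1, ::-1]                # adjoint of the blur operator
            est = np.full_like(image, image.mean(), dtype=float)
            for _ in range(iters):
                blurred = fftconvolve(est, psf, mode='same')
                ratio = image / np.maximum(blurred, 1e-12)
                est *= fftconvolve(ratio, psf_mirror, mode='same')  # multiplicative update
            return est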

  16. Enhanced land use/cover classification using support vector machines and fuzzy k-means clustering algorithms

    NASA Astrophysics Data System (ADS)

    He, Tao; Sun, Yu-Jun; Xu, Ji-De; Wang, Xue-Jun; Hu, Chang-Ru

    2014-01-01

    Land use/cover (LUC) classification plays an important role in remote sensing and land change science. Because of the complexity of ground covers, LUC classification is still regarded as a difficult task. This study proposed a fusion algorithm which uses support vector machines (SVM) and the fuzzy k-means (FKM) clustering algorithm. The main scheme was divided into two steps. First, a clustering map was obtained from the original remote sensing image using FKM; simultaneously, a normalized difference vegetation index layer was extracted from the original image. Then, the classification map was generated by using an SVM classifier. Three different classification algorithms were compared, tested, and verified: parametric (maximum likelihood), nonparametric (SVM), and hybrid (unsupervised-supervised, fusion of SVM and FKM) classifiers, respectively. The proposed algorithm obtained the highest overall accuracy in our experiments.

  17. Cloud model bat algorithm.

    PubMed

    Zhou, Yongquan; Xie, Jian; Li, Liangliang; Ma, Mingzhi

    2014-01-01

    The bat algorithm (BA) is a novel stochastic global optimization algorithm. The cloud model is an effective tool for transforming between qualitative concepts and their quantitative representation. Based on the bat echolocation mechanism and the excellent characteristics of the cloud model for representing uncertain knowledge, a new cloud model bat algorithm (CBA) is proposed. This paper focuses on remodeling the echolocation model based on the living and preying characteristics of bats, utilizing the transformation theory of the cloud model to depict the qualitative concept "bats approach their prey." Furthermore, the Lévy flight mode and a population information communication mechanism of bats are introduced to balance exploration and exploitation. The simulation results show that the cloud model bat algorithm performs well on function optimization. PMID:24967425

  18. An analysis dictionary learning algorithm under a noisy data model with orthogonality constraint.

    PubMed

    Zhang, Ye; Yu, Tenglong; Wang, Wenwu

    2014-01-01

    Two common problems are often encountered in analysis dictionary learning (ADL) algorithms. The first one is that the original clean signals for learning the dictionary are assumed to be known, which otherwise need to be estimated from noisy measurements. This, however, renders a computationally slow optimization process and potentially unreliable estimation (if the noise level is high), as represented by the Analysis K-SVD (AK-SVD) algorithm. The other problem is the trivial solution to the dictionary, for example, the null dictionary matrix that may be given by a dictionary learning algorithm, as discussed in the learning overcomplete sparsifying transform (LOST) algorithm. Here we propose a novel optimization model and an iterative algorithm to learn the analysis dictionary, where we directly employ the observed data to compute the approximate analysis sparse representation of the original signals (leading to a fast optimization procedure) and enforce an orthogonality constraint on the optimization criterion to avoid the trivial solutions. Experiments demonstrate the competitive performance of the proposed algorithm as compared with three baselines, namely, the AK-SVD, LOST, and NAAOLA algorithms. PMID:25126605

  19. Algorithmic height compression of unordered trees.

    PubMed

    Ben-Naoum, Farah; Godin, Christophe

    2016-01-21

    By nature, tree structures frequently present similarities between their sub-parts. Making use of this redundancy, different types of tree compression techniques have been designed in the literature to reduce the complexity of tree structures. A popular and efficient way to compress a tree consists of merging its isomorphic subtrees, which produces a directed acyclic graph (DAG) equivalent to the original tree. An important property of this method is that the compressed structure (i.e. the DAG) has the same height as the original tree, thus partially limiting the possibility of compression. In this paper we address the problem of further compressing this DAG in height. The difficulty is that compression must be carried out on substructures that are not exactly isomorphic, as they are strictly nested within each other. We thus introduce a notion of quasi-isomorphism between subtrees that makes it possible to define similar patterns along any given path in a tree. We then propose an algorithm to detect these patterns and to merge them, thus leading to compressed structures corresponding to DAGs augmented with return edges. In this way, redundant information is removed from the original tree in both width and height, thus achieving minimal structural compression. The complete compression algorithm is then illustrated on the compression of various plant-like structures. PMID:26551155
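
    A minimal sketch of the classic first step described above: merging isomorphic subtrees of an unordered tree into a DAG by canonical hashing, which is the height-preserving compression. The nested-tuple tree representation is illustrative; the paper's quasi-isomorphism and return edges are not shown.

        def tree_to_dag(tree, table=None):
            """tree: (label, [children]); returns a canonical key and fills `table`,
            which maps each distinct subtree key to a single shared DAG node."""
            if table is None:
                table = {}
            label, children = tree
            # Sorting child keys makes the key order-independent (unordered trees).
            key = (label, tuple(sorted(tree_to_dag(c, table) for c in children)))
            if key not in table:                    # first occurrence: create the node
                table[key] = {'label': label,
                              'children': [table[k] for k in key[1]]}
            return key

        # Usage: two isomorphic subtrees ('B', ...) collapse to one shared DAG node.
        t = ('A', [('B', [('C', [])]), ('B', [('C', [])])])
        nodes = {}
        root_key = tree_to_dag(t, nodes)
        print(len(nodes))   # 3 distinct subtrees: the C leaf, the B subtree, the A root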

  20. Quantum algorithms

    NASA Astrophysics Data System (ADS)

    Abrams, Daniel S.

    This thesis describes several new quantum algorithms. These include a polynomial time algorithm that uses a quantum fast Fourier transform to find eigenvalues and eigenvectors of a Hamiltonian operator, and that can be applied in cases (commonly found in ab initio physics and chemistry problems) for which all known classical algorithms require exponential time. Fast algorithms for simulating many body Fermi systems are also provided in both first and second quantized descriptions. An efficient quantum algorithm for anti-symmetrization is given as well as a detailed discussion of a simulation of the Hubbard model. In addition, quantum algorithms that calculate numerical integrals and various characteristics of stochastic processes are described. Two techniques are given, both of which obtain an exponential speed increase in comparison to the fastest known classical deterministic algorithms and a quadratic speed increase in comparison to classical Monte Carlo (probabilistic) methods. I derive a simpler and slightly faster version of Grover's mean algorithm, show how to apply quantum counting to the problem, develop some variations of these algorithms, and show how both (apparently distinct) approaches can be understood from the same unified framework. Finally, the relationship between physics and computation is explored in some more depth, and it is shown that computational complexity theory depends very sensitively on physical laws. In particular, it is shown that nonlinear quantum mechanics allows for the polynomial time solution of NP-complete and #P oracle problems. Using the Weinberg model as a simple example, the explicit construction of the necessary gates is derived from the underlying physics. Nonlinear quantum algorithms are also presented using Polchinski type nonlinearities which do not allow for superluminal communication. (Copies available exclusively from MIT Libraries, Rm. 14- 0551, Cambridge, MA 02139-4307. Ph. 617-253-5668; Fax 617-253-1690.)

  1. Novel Spectrum Sensing Algorithms for OFDM Cognitive Radio Networks.

    PubMed

    Shi, Zhenguo; Wu, Zhilu; Yin, Zhendong; Cheng, Qingqing

    2015-01-01

    Spectrum sensing technology plays an increasingly important role in cognitive radio networks. Consequently, several spectrum sensing algorithms have been proposed in the literature. In this paper, we present a new spectrum sensing algorithm, "Differential Characteristics-Based OFDM (DC-OFDM)", which detects OFDM signals on the basis of their differential characteristics. Detection of the presence of a primary user is based on testing whether the channel gain θ is around zero. Furthermore, using the same differential operation, we improve two traditional OFDM sensing algorithms (the cyclic prefix and pilot tones detecting algorithms), and propose a "Differential Characteristics-Based Cyclic Prefix (DC-CP)" detector and a "Differential Characteristics-Based Pilot Tones (DC-PT)" detector, respectively. The DC-CP detector is based on the auto-correlation vector to sense the spectrum, while the DC-PT detector takes the frequency-domain cross-correlation of the pilot tones as the test statistic to detect the primary user. Moreover, the distributions of the test statistics of the three proposed methods have been derived. Simulation results illustrate that all three proposed methods can achieve good performance under low signal-to-noise ratio (SNR) in the presence of timing delay. Specifically, the DC-OFDM detector achieves the best performance among the presented detectors. Moreover, both the DC-CP and DC-PT detectors achieve significant improvements over their corresponding original detectors. PMID:26083226
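
    A minimal sketch of the classic cyclic-prefix autocorrelation statistic that the DC-CP detector builds its differential operation on: samples one FFT-length apart are correlated, which yields a large value when an OFDM signal with a cyclic prefix is present. The FFT size, CP length and toy noise input are illustrative assumptions.

        import numpy as np

        def cp_autocorr_statistic(r, n_fft=64, n_cp=16):
            m = len(r) - n_fft
            # Correlate each sample with the sample one FFT-length later; the CP
            # makes these pairs identical (up to noise) inside every OFDM symbol.
            corr = r[:m] * np.conj(r[n_fft:n_fft + m])
            energy = np.abs(r[:m]) ** 2 + np.abs(r[n_fft:n_fft + m]) ** 2
            return np.abs(corr.sum()) / (0.5 * energy.sum())

        # Usage: the statistic is on the order of n_cp/(n_fft+n_cp) for an OFDM
        # signal and near 0 for pure noise.
        rng = np.random.default_rng(1)
        noise = rng.standard_normal(4096) + 1j * rng.standard_normal(4096)
        print(cp_autocorr_statistic(noise))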

  2. Novel Spectrum Sensing Algorithms for OFDM Cognitive Radio Networks

    PubMed Central

    Shi, Zhenguo; Wu, Zhilu; Yin, Zhendong; Cheng, Qingqing

    2015-01-01

    Spectrum sensing technology plays an increasingly important role in cognitive radio networks. Consequently, several spectrum sensing algorithms have been proposed in the literature. In this paper, we present a new spectrum sensing algorithm, “Differential Characteristics-Based OFDM (DC-OFDM)”, which detects OFDM signals on the basis of their differential characteristics. Detection of the presence of a primary user is based on testing whether the channel gain θ is around zero. Furthermore, using the same differential operation, we improve two traditional OFDM sensing algorithms (the cyclic prefix and pilot tones detecting algorithms), and propose a “Differential Characteristics-Based Cyclic Prefix (DC-CP)” detector and a “Differential Characteristics-Based Pilot Tones (DC-PT)” detector, respectively. The DC-CP detector is based on the auto-correlation vector to sense the spectrum, while the DC-PT detector takes the frequency-domain cross-correlation of the pilot tones as the test statistic to detect the primary user. Moreover, the distributions of the test statistics of the three proposed methods have been derived. Simulation results illustrate that all three proposed methods can achieve good performance under low signal-to-noise ratio (SNR) in the presence of timing delay. Specifically, the DC-OFDM detector achieves the best performance among the presented detectors. Moreover, both the DC-CP and DC-PT detectors achieve significant improvements over their corresponding original detectors. PMID:26083226

  3. Development of a new metal artifact reduction algorithm by using an edge preserving method for CBCT imaging

    NASA Astrophysics Data System (ADS)

    Kim, Juhye; Nam, Haewon; Lee, Rena

    2015-07-01

    In CT (computed tomography) images, metal materials such as tooth supplements or surgical clips can cause metal artifacts and degrade image quality. In severe cases, this may lead to misdiagnosis. In this research, we developed a new MAR (metal artifact reduction) algorithm by using an edge-preserving filter and the MATLAB program (MathWorks, version R2012a). The proposed algorithm consists of six steps: image reconstruction from projection data, metal segmentation, forward projection, interpolation, application of an edge-preserving smoothing filter, and new image reconstruction. For an evaluation of the proposed algorithm, we obtained both numerical simulation data and data for a Rando phantom. In the numerical simulation data, four metal regions were added into the Shepp-Logan phantom to create metal artifacts. The projection data of the metal-inserted Rando phantom were obtained by using a prototype CBCT scanner manufactured by the medical engineering and medical physics (MEMP) laboratory research group in medical science at Ewha Womans University. The proposed algorithm was then applied, and the results were compared with the original image (with metal artifacts, without correction) and with a corrected image based on linear interpolation. Both visual and quantitative evaluations were performed. Compared with the original image with metal artifacts and with the image corrected by using linear interpolation, both the numerical and the experimental phantom data demonstrated that the proposed algorithm reduced the metal artifacts. In conclusion, the evaluation in this research showed that the proposed algorithm outperformed the interpolation-based MAR algorithm. If an optimization and a stability evaluation of the proposed algorithm can be performed, the developed algorithm is expected to be an effective tool for eliminating metal artifacts even in commercial CT systems.
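
    A minimal sketch of the linear-interpolation MAR baseline the proposed method is compared against: segment metal in a first reconstruction, forward-project the metal mask to find the affected sinogram bins, and replace those bins by interpolation before reconstructing again. Uses scikit-image's radon/iradon; the metal threshold is an illustrative assumption.

        import numpy as np
        from skimage.transform import radon, iradon

        def mar_linear_interpolation(sinogram, theta, metal_threshold=0.9):
            recon = iradon(sinogram, theta=theta)               # initial FBP image
            metal = (recon > metal_threshold).astype(float)     # metal segmentation
            trace = radon(metal, theta=theta)                   # metal trace in sinogram
            affected = trace > 0.01 * trace.max()
            corrected = sinogram.copy()
            idx = np.arange(sinogram.shape[0])
            for j in range(sinogram.shape[1]):                  # per projection angle
                bad = affected[:, j]
                if bad.any() and not bad.all():
                    corrected[bad, j] = np.interp(idx[bad], idx[~bad],
                                                  sinogram[~bad, j])
            return iradon(corrected, theta=theta)               # corrected FBP image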

  4. The Langley Parameterized Shortwave Algorithm (LPSA) for Surface Radiation Budget Studies. 1.0

    NASA Technical Reports Server (NTRS)

    Gupta, Shashi K.; Kratz, David P.; Stackhouse, Paul W., Jr.; Wilber, Anne C.

    2001-01-01

    An efficient algorithm was developed during the late 1980s and early 1990s by W. F. Staylor at NASA/LaRC for the purpose of deriving shortwave surface radiation budget parameters on a global scale. While the algorithm produced results in good agreement with observations, the lack of proper documentation resulted in weak acceptance by the science community. The primary purpose of this report is to develop detailed documentation of the algorithm. In the process, the algorithm was modified whenever discrepancies were found between the algorithm and its referenced literature sources. In some instances, assumptions made in the algorithm could not be justified and were replaced with those that were justifiable. The algorithm uses satellite and operational meteorological data as inputs. Most of the original data sources have been replaced by more recent, higher-quality data sources, and fluxes are now computed at a higher spatial resolution. Many more changes to the basic radiation scheme and meteorological inputs have been proposed to improve the algorithm and make the product more useful for new research projects. Because of the many changes already in place and more planned for the future, the algorithm has been renamed the Langley Parameterized Shortwave Algorithm (LPSA).

  5. Research on super-resolution image reconstruction based on an improved POCS algorithm

    NASA Astrophysics Data System (ADS)

    Xu, Haiming; Miao, Hong; Yang, Chong; Xiong, Cheng

    2015-07-01

    Super-resolution image reconstruction (SRIR) can improve the resolution of a blurred image, addressing the problems of insufficient spatial resolution, excessive noise, and low image quality. Firstly, we introduce the image degradation model to reveal that the essence of the super-resolution reconstruction process is a mathematically ill-posed inverse problem. Secondly, we analyze the causes of blurring in the optical imaging process (light diffraction and small-angle scattering are the main causes) and propose an image point spread function estimation method together with an improved projection onto convex sets (POCS) algorithm, whose effectiveness is indicated by analyzing the time-domain and frequency-domain behavior of the algorithm during the reconstruction process; we point out that the improved POCS algorithm, based on prior knowledge, can restore and approach the high-frequency content of the original scene. Finally, we apply the algorithm to reconstruct synchrotron radiation computed tomography (SRCT) images, and then use these images to reconstruct three-dimensional slice images. Comparing the original method with the super-resolution algorithm, it is obvious that the improved POCS algorithm can restrain noise and enhance image resolution, which indicates that the algorithm is effective. This study and exploration of super-resolution image reconstruction by an improved POCS algorithm proves it to be an effective method with important significance and broad application prospects, for example in CT medical image processing and in SRCT analysis of microstructure evolution mechanisms in ceramic sintering.

  6. High-performance combinatorial algorithms

    SciTech Connect

    Pinar, Ali

    2003-10-31

    Combinatorial algorithms have long played an important role in many applications of scientific computing, such as sparse matrix computations and parallel computing. The growing importance of combinatorial algorithms in emerging applications like computational biology and scientific data mining calls for the development of a high-performance library for combinatorial algorithms. Building such a library requires a new structure for combinatorial algorithms research that enables fast implementation of new algorithms. We propose a structure for combinatorial algorithms research that mimics the research structure of numerical algorithms. Numerical algorithms research is nicely complemented by high-performance libraries, and this can be attributed to the fact that there are only a small number of fundamental problems that underlie numerical solvers. Furthermore, there are only a handful of kernels that enable implementation of algorithms for these fundamental problems. Building a similar structure for combinatorial algorithms will enable efficient implementations of existing algorithms and fast implementation of new algorithms. Our results will promote the utilization of combinatorial techniques and will impact research in many scientific computing applications, some of which are listed.

  7. Haplotyping algorithms

    SciTech Connect

    Sobel, E.; Lange, K.; O`Connell, J.R.

    1996-12-31

    Haplotyping is the logical process of inferring gene flow in a pedigree based on phenotyping results at a small number of genetic loci. This paper formalizes the haplotyping problem and suggests four algorithms for haplotype reconstruction. These algorithms range from exhaustive enumeration of all haplotype vectors to combinatorial optimization by simulated annealing. Application of the algorithms to published genetic analyses shows that manual haplotyping is often erroneous. Haplotyping is employed in screening pedigrees for phenotyping errors and in positional cloning of disease genes from conserved haplotypes in population isolates. 26 refs., 6 figs., 3 tabs.

  8. Unsupervised and stable LBG algorithm for data classification: application to aerial multicomponent images

    NASA Astrophysics Data System (ADS)

    Taher, A.; Chehdi, K.; Cariou, C.

    2015-10-01

    In this paper a stable and unsupervised Linde-Buzo-Gray (LBG) algorithm named LBGO is presented. The originality of the proposed algorithm lies: i) in the use of an adaptive incremental technique to initialize the class centres, which re-examines the intermediate initializations; this technique makes the algorithm stable and deterministic, so the classification results do not vary from one run to another; and ii) in the unsupervised evaluation criteria applied to the intermediate classification results to estimate the optimal number of classes; this makes the algorithm unsupervised. The efficiency of this optimized version of LBG is shown through experimental results on synthetic and real aerial hyperspectral data. More precisely, we have tested the proposed classification approach in three respects: firstly its stability, secondly its correct classification rate, and thirdly its correct estimation of the number of classes.
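
    The LBGO initialization and stopping criteria are not spelled out in the abstract, so the following sketch shows only the classic LBG baseline (codebook splitting followed by k-means-style refinement) that the paper improves on; the data shapes and parameters are illustrative.

      # Classic LBG codebook training by iterative splitting, shown for context.
      # n_classes is assumed to be a power of two, as in standard LBG splitting.
      import numpy as np

      def lbg(data, n_classes=4, eps=0.05, n_iter=50):
          codebook = data.mean(axis=0, keepdims=True)   # start with one centre
          while len(codebook) < n_classes:
              # Split every centre into a perturbed pair, then refine (k-means).
              codebook = np.vstack([codebook + eps, codebook - eps])
              for _ in range(n_iter):
                  d = np.linalg.norm(data[:, None, :] - codebook[None], axis=2)
                  labels = d.argmin(axis=1)
                  for k in range(len(codebook)):
                      if (labels == k).any():
                          codebook[k] = data[labels == k].mean(axis=0)
          return codebook, labels

      rng = np.random.default_rng(1)
      pixels = rng.normal(size=(500, 3))     # e.g. spectral vectors
      centres, classes = lbg(pixels)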

  9. A fuzzy record-to-record travel algorithm for solving rough set attribute reduction

    NASA Astrophysics Data System (ADS)

    Mafarja, Majdi; Abdullah, Salwani

    2015-02-01

    Attribute reduction can be defined as the process of determining a minimal subset of attributes from an original set of attributes. This paper proposes a new attribute reduction method that is based on a record-to-record travel algorithm for solving rough set attribute reduction problems. This algorithm has a single parameter, called the DEVIATION, which, once pre-tuned, plays a pivotal role in controlling the acceptance of worse solutions. In this paper, we focus on a fuzzy-based record-to-record travel algorithm for attribute reduction (FuzzyRRTAR). This algorithm employs an intelligent fuzzy logic controller mechanism to control the value of DEVIATION, which is dynamically changed throughout the search process. The proposed method was tested on standard benchmark data sets. The results show that FuzzyRRTAR is efficient in solving attribute reduction problems when compared with other meta-heuristic approaches.
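
    The plain record-to-record travel acceptance rule is compact enough to show directly. In the sketch below, DEVIATION is a fixed constant, whereas the paper drives it with a fuzzy logic controller; the cost function and neighbour move are toy stand-ins.

      # Record-to-record travel (RRT) in a few lines: a candidate is accepted if
      # it is not worse than the best-so-far "record" by more than DEVIATION.
      import random

      def rrt_minimize(cost, neighbour, x0, deviation=0.1, n_iter=10_000):
          x, best_x = x0, x0
          record = cost(x0)
          for _ in range(n_iter):
              cand = neighbour(x)
              if cost(cand) < record + deviation:   # RRT acceptance criterion
                  x = cand
                  if cost(x) < record:
                      record, best_x = cost(x), x
          return best_x, record

      # Toy usage: minimize a 1-D quartic with random-walk neighbours.
      f = lambda v: (v - 2) ** 2 * (v + 1) ** 2
      step = lambda v: v + random.uniform(-0.5, 0.5)
      x_best, f_best = rrt_minimize(f, step, x0=5.0)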

  10. A numerical comparison of discrete Kalman filtering algorithms - An orbit determination case study

    NASA Technical Reports Server (NTRS)

    Thornton, C. L.; Bierman, G. J.

    1976-01-01

    An improved Kalman filter algorithm based on a modified Givens matrix triangularization technique is proposed for solving a nonstationary discrete-time linear filtering problem. The proposed U-D covariance factorization filter uses orthogonal transformation techniques; measurement and time updating of the U-D factors involve separate applications of Gentleman's fast square-root-free Givens rotations. The numerical stability and accuracy of the algorithm are compared with those of the conventional and stabilized Kalman filters and the Potter-Schmidt square-root filter, by applying these techniques to a realistic planetary navigation problem (orbit determination for the Saturn approach phase of the Mariner Jupiter-Saturn Mission, 1977). The new algorithm is shown to combine the numerical precision of square-root filtering with the efficiency of the original Kalman algorithm.
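
    The core primitive behind such a filter is the factorization P = U D Uᵀ with U unit upper-triangular and D diagonal. The sketch below shows only that textbook factorization step; the Givens-rotation-based measurement and time updates described above are not reproduced.

      # U-D factorization P = U diag(D) U^T of a symmetric positive definite
      # covariance matrix, the representation maintained by U-D filters.
      import numpy as np

      def ud_factorize(P):
          P = P.copy().astype(float)
          n = P.shape[0]
          U = np.eye(n)
          D = np.zeros(n)
          for j in reversed(range(n)):       # process columns last to first
              D[j] = P[j, j]
              for i in range(j):
                  U[i, j] = P[i, j] / D[j]
                  for k in range(i + 1):
                      P[k, i] -= U[k, j] * D[j] * U[i, j]
          return U, D

      cov = np.array([[4.0, 2.0], [2.0, 3.0]])
      U, D = ud_factorize(cov)
      assert np.allclose(U @ np.diag(D) @ U.T, cov)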

  11. Optimal band selection for high dimensional remote sensing data using genetic algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, Xianfeng; Sun, Quan; Li, Jonathan

    2009-06-01

    A 'fused' method may not be suitable for reducing the dimensionality of data, and a band/feature selection method needs to be used for selecting an optimal subset of original data bands. This study examined the efficiency of the GA in band selection for remote sensing classification. A GA-based algorithm for band selection was deliberately designed in which a Bhattacharyya distance index that indicates the separability between classes of interest is used as the fitness function. A binary string chromosome is designed in which each gene location has a value of 1, representing a band being included, or 0, representing a band being excluded. The algorithm was implemented in the MATLAB programming environment, and a band selection task for lithologic classification in the Chocolate Mountain area (California) was used to test the proposed algorithm. The proposed feature selection algorithm can be useful in multi-source remote sensing data preprocessing, especially in hyperspectral dimensionality reduction.
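
    A minimal sketch of the chromosome and fitness just described follows; the GA loop itself (selection, crossover, mutation) is elided, and the two-class Gaussian data, band count, and exact Bhattacharyya form are illustrative assumptions.

      # Binary chromosome (one gene per band) scored by Bhattacharyya distance.
      import numpy as np

      def bhattacharyya(x1, x2):
          """Bhattacharyya distance between two classes of band-subset samples."""
          m1, m2 = x1.mean(axis=0), x2.mean(axis=0)
          s1, s2 = np.cov(x1.T), np.cov(x2.T)
          s = (s1 + s2) / 2
          dm = m1 - m2
          return (dm @ np.linalg.solve(s, dm) / 8
                  + 0.5 * np.log(np.linalg.det(s)
                                 / np.sqrt(np.linalg.det(s1) * np.linalg.det(s2))))

      def fitness(chromosome, class1, class2):
          """chromosome: 0/1 vector; 1 means the band is included."""
          bands = np.flatnonzero(chromosome)
          if len(bands) < 2:
              return -np.inf                 # need at least two bands
          return bhattacharyya(class1[:, bands], class2[:, bands])

      rng = np.random.default_rng(2)
      c1 = rng.normal(0.0, 1.0, size=(50, 20))    # 50 samples, 20 bands
      c2 = rng.normal(0.5, 1.2, size=(50, 20))
      genes = rng.integers(0, 2, size=20)
      print(fitness(genes, c1, c2))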

  12. Constructive neural network learning algorithms

    SciTech Connect

    Parekh, R.; Yang, Jihoon; Honavar, V.

    1996-12-31

    Constructive algorithms offer an approach for incremental construction of potentially minimal neural network architectures for pattern classification tasks. These algorithms obviate the need for an ad hoc, a priori choice of the network topology. The constructive algorithm design involves alternately augmenting the existing network topology by adding one or more threshold logic units and training the newly added threshold neuron(s) using a stable variant of the perceptron learning algorithm (e.g., the pocket algorithm, thermal perceptron, and barycentric correction procedure). Several constructive algorithms, including tower, pyramid, tiling, upstart, and perceptron cascade, have been proposed for 2-category pattern classification. These algorithms differ in terms of their topological and connectivity constraints as well as the training strategies used for individual neurons.
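
    As one example of the stable perceptron variants mentioned, here is a minimal pocket algorithm: ordinary perceptron updates, with the best weights seen so far kept in the "pocket" (this sketch tracks total training errors, a common simplification of Gallant's run-length bookkeeping).

      # Pocket algorithm sketch: perceptron updates plus best-weights memory,
      # so a usable solution survives even on non-separable data.
      import numpy as np

      def pocket(X, y, n_iter=1000, seed=0):
          rng = np.random.default_rng(seed)
          X = np.hstack([X, np.ones((len(X), 1))])    # absorb the bias term
          w = np.zeros(X.shape[1])
          pocket_w, best_errors = w.copy(), np.inf
          for _ in range(n_iter):
              i = rng.integers(len(X))
              if np.sign(X[i] @ w) != y[i]:
                  w = w + y[i] * X[i]                  # perceptron update
              errors = np.sum(np.sign(X @ w) != y)
              if errors < best_errors:                 # keep best weights so far
                  best_errors, pocket_w = errors, w.copy()
          return pocket_w

      X = np.array([[0.0, 0], [0, 1], [1, 0], [1, 1]])
      y = np.array([-1, -1, -1, 1])                    # the AND function
      w = pocket(X, y)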

  13. Eukaryotic origins

    PubMed Central

    Lake, James A.

    2015-01-01

    The origin of the eukaryotes is a fundamental scientific question that for over 30 years has generated a spirited debate between the competing Archaea (or three domains) tree and the eocyte tree. As eukaryotes ourselves, humans have a personal interest in our origins. Eukaryotes contain their defining organelle, the nucleus, after which they are named. They have a complex evolutionary history, over time acquiring multiple organelles, including mitochondria, chloroplasts, smooth and rough endoplasmic reticula, and other organelles, all of which may hint at their origins. It is the evolutionary history of the nucleus and these other organelles that has intrigued molecular evolutionists, myself included, for the past 30 years and which continues to hold our interest as increasingly compelling evidence favours the eocyte tree. As with any orthodoxy, it takes time to embrace new concepts and techniques. PMID:26323753

  14. Research and implementation of the algorithm for unwrapped and distortion correction basing on CORDIC for panoramic image

    NASA Astrophysics Data System (ADS)

    Zhang, Zhenhai; Li, Kejie; Wu, Xiaobing; Zhang, Shujiang

    2008-03-01

    An unwrapping and distortion-correction algorithm based on the Coordinate Rotation Digital Computer (CORDIC) and a bilinear interpolation algorithm is presented in this paper, with the purpose of processing dynamic panoramic annular images. An original annular panoramic image captured by a panoramic annular lens (PAL) can be unwrapped and corrected to a conventional rectangular image without distortion, which is much more consistent with human vision. The algorithm for panoramic image processing is modeled in VHDL and implemented on an FPGA. The experimental results show that the proposed unwrapping and distortion-correction algorithm has low computational complexity, and that the architecture for dynamic panoramic image processing has low hardware cost and power consumption. The proposed algorithm is thus shown to be valid.
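
    For readers unfamiliar with CORDIC, the rotation mode at the heart of such unwrapping hardware reduces a rotation to shifts and adds, which is why it suits FPGA implementation. A floating-point sketch follows (real hardware uses fixed-point arithmetic, and the bilinear interpolation stage is not shown).

      # Rotation-mode CORDIC: rotate (x, y) by `angle` using only shift-like
      # scalings and additions. Converges for |angle| <= ~1.7433 rad.
      import math

      ANGLES = [math.atan(2.0 ** -i) for i in range(32)]
      GAIN = 1.0
      for a in ANGLES:
          GAIN *= math.cos(a)          # accumulated CORDIC scale factor

      def cordic_rotate(x, y, angle, n=32):
          for i in range(n):
              d = 1.0 if angle >= 0 else -1.0
              x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
              angle -= d * ANGLES[i]
          return x * GAIN, y * GAIN    # undo the accumulated gain

      # cos/sin of 30 degrees via CORDIC:
      c, s = cordic_rotate(1.0, 0.0, math.radians(30))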

  15. Parallel algorithms for unconstrained optimizations by multisplitting

    SciTech Connect

    He, Qing

    1994-12-31

    In this paper a new parallel iterative algorithm for unconstrained optimization using the idea of multisplitting is proposed. This algorithm uses the existing sequential algorithms without any parallelization. Some convergence and numerical results for this algorithm are presented. The experiments are performed on an Intel iPSC/860 Hyper Cube with 64 nodes. It is interesting that the sequential implementation on one node shows that if the problem is split properly, the algorithm converges much faster than one without splitting.

  16. A rapid reconstruction algorithm for three-dimensional scanning images

    NASA Astrophysics Data System (ADS)

    Xiang, Jiying; Wu, Zhen; Zhang, Ping; Huang, Dexiu

    1998-04-01

    A 'simulated fluorescence' three-dimensional reconstruction algorithm, which is especially suitable for confocal images of partially transparent biological samples, is proposed in this paper. To make the retinal projection of the object reappear and to avoid excessive memory consumption, the original image is rotated and compressed before processing. A left image and a right image are mixed in different colors to increase the sense of stereo. Details originally hidden in deep layers are well exhibited with the aid of an 'auxiliary directional source'. In addition, the time consumption is greatly reduced compared with conventional methods such as ray tracing. The realization of the algorithm is illustrated by a group of reconstructed images.

  17. Variable Selection using MM Algorithms

    PubMed Central

    Hunter, David R.; Li, Runze

    2009-01-01

    Variable selection is fundamental to high-dimensional statistical modeling. Many variable selection techniques may be implemented by maximum penalized likelihood using various penalty functions. Optimizing the penalized likelihood function is often challenging because it may be nondifferentiable and/or nonconcave. This article proposes a new class of algorithms for finding a maximizer of the penalized likelihood for a broad class of penalty functions. These algorithms operate by perturbing the penalty function slightly to render it differentiable, then optimizing this differentiable function using a minorize-maximize (MM) algorithm. MM algorithms are useful extensions of the well-known class of EM algorithms, a fact that allows us to analyze the local and global convergence of the proposed algorithm using some of the techniques employed for EM algorithms. In particular, we prove that when our MM algorithms converge, they must converge to a desirable point; we also discuss conditions under which this convergence may be guaranteed. We exploit the Newton-Raphson-like aspect of these algorithms to propose a sandwich estimator for the standard errors of the estimators. Our method performs well in numerical tests. PMID:19458786
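
    To illustrate the perturb-then-majorize idea on the simplest case, the sketch below applies it to L1-penalized least squares: the perturbed penalty is majorized by a quadratic, so each MM step reduces to a weighted ridge regression. This mirrors the paper's idea in spirit only; it is not the authors' estimator or their sandwich standard-error construction.

      # Majorize-minimize (MM) loop for L1-penalized least squares using the
      # perturbed local quadratic majorizer: |t| is replaced by a surrogate
      # with weight 1 / (|t_old| + eps), making each step a ridge solve.
      import numpy as np

      def mm_lasso(X, y, lam=0.1, eps=1e-6, n_iter=100):
          beta = np.linalg.lstsq(X, y, rcond=None)[0]
          for _ in range(n_iter):
              # Majorize lam*|b_j| by lam*b_j^2 / (2*(|b_old|+eps)) + const.
              w = lam / (np.abs(beta) + eps)
              beta = np.linalg.solve(X.T @ X + np.diag(w), X.T @ y)
          return beta

      rng = np.random.default_rng(3)
      X = rng.normal(size=(100, 10))
      true = np.zeros(10)
      true[:3] = [2.0, -1.0, 0.5]
      y = X @ true + 0.1 * rng.normal(size=100)
      print(np.round(mm_lasso(X, y, lam=5.0), 3))   # near-sparse estimate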

  18. Proposal Writing.

    ERIC Educational Resources Information Center

    Grant, Andrew; And Others

    1988-01-01

    The basics of effective proposal writing, from content to structure to length, are presented in three articles: "Knowledge Is Power" (Andrew Grant, Emily S. Berkowitz); "Write on the Money" (Lucy Knight); and "The Problem Proposal." (MLW)

  19. Heuristic-based tabu search algorithm for folding two-dimensional AB off-lattice model proteins.

    PubMed

    Liu, Jingfa; Sun, Yuanyuan; Li, Gang; Song, Beibei; Huang, Weibo

    2013-12-01

    The protein structure prediction problem is a classical NP hard problem in bioinformatics. The lack of an effective global optimization method is the key obstacle in solving this problem. As one of the global optimization algorithms, tabu search (TS) algorithm has been successfully applied in many optimization problems. We define the new neighborhood conformation, tabu object and acceptance criteria of current conformation based on the original TS algorithm and put forward an improved TS algorithm. By integrating the heuristic initialization mechanism, the heuristic conformation updating mechanism, and the gradient method into the improved TS algorithm, a heuristic-based tabu search (HTS) algorithm is presented for predicting the two-dimensional (2D) protein folding structure in AB off-lattice model which consists of hydrophobic (A) and hydrophilic (B) monomers. The tabu search minimization leads to the basins of local minima, near which a local search mechanism is then proposed to further search for lower-energy conformations. To test the performance of the proposed algorithm, experiments are performed on four Fibonacci sequences and two real protein sequences. The experimental results show that the proposed algorithm has found the lowest-energy conformations so far for three shorter Fibonacci sequences and renewed the results for the longest one, as well as two real protein sequences, demonstrating that the HTS algorithm is quite promising in finding the ground states for AB off-lattice model proteins. PMID:24077543

  20. Original Version

    Cancer.gov

    The EPEC-O (Education in Palliative and End-of-Life Care for Oncology) Self-Study Original Version is a free, comprehensive multimedia curriculum for health professionals caring for persons with cancer and their families. The curriculum is available as an online Self-Study Section and as a CD-ROM you can order.

  1. Genetic Algorithms with Local Minimum Escaping Technique

    NASA Astrophysics Data System (ADS)

    Tamura, Hiroki; Sakata, Kenichiro; Tang, Zheng; Ishii, Masahiro

    In this paper, we propose a genetic algorithm (GA) with a local minimum escaping technique. The proposed method can escape from a local minimum by correcting parameters when the genetic algorithm falls into one. Simulations are performed on a scheduling problem without buffer capacity using the proposed method, and its validity is shown.

  2. Algorithm That Synthesizes Other Algorithms for Hashing

    NASA Technical Reports Server (NTRS)

    James, Mark

    2010-01-01

    An algorithm that includes a collection of several subalgorithms has been devised as a means of synthesizing still other algorithms (which could include computer code) that utilize hashing to determine whether an element (typically, a number or other datum) is a member of a set (typically, a list of numbers). Each subalgorithm synthesizes an algorithm (e.g., a block of code) that maps a static set of key hashes to a somewhat linear monotonically increasing sequence of integers. The goal in formulating this mapping is to cause the length of the sequence thus generated to be as close as practicable to the original length of the set and thus to minimize gaps between the elements. The advantage of the approach embodied in this algorithm is that it completely avoids the traditional approach of hash-key look-ups that involve either secondary hash generation and look-up or further searching of a hash table for a desired key in the event of collisions. This algorithm guarantees that it will never be necessary to perform a search or to generate a secondary key in order to determine whether an element is a member of a set. This algorithm further guarantees that any algorithm that it synthesizes can be executed in constant time. To enforce these guarantees, the subalgorithms are formulated to employ a set of techniques, each of which works very effectively covering a certain class of hash-key values. These subalgorithms are of two types, summarized as follows: Given a list of numbers, try to find one or more solutions in which, if each number is shifted to the right by a constant number of bits and then masked with a rotating mask that isolates a set of bits, a unique number is thereby generated. In a variant of the foregoing procedure, omit the masking. Try various combinations of shifting, masking, and/or offsets until the solutions are found. From the set of solutions, select the one that provides the greatest compression for the representation and is executable in the
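
    The first type of subalgorithm can be illustrated with a brute-force search for a right-shift and mask under which a static key set maps injectively, which is what makes collision-free, constant-time membership possible; the search bounds and keys below are arbitrary illustrations, not NASA's code.

      # Search for (shift, mask) such that (k >> shift) & mask is unique for
      # every key, i.e. a collision-free mapping for the static set.
      def find_shift_mask(keys, max_shift=16, max_bits=12):
          for shift in range(max_shift):
              for bits in range(1, max_bits):
                  mask = (1 << bits) - 1
                  mapped = {(k >> shift) & mask for k in keys}
                  if len(mapped) == len(keys):      # injective: no collisions
                      return shift, mask
          return None

      keys = [0x1A2B, 0x3C4D, 0x5E6F, 0x7081, 0x92A3]
      result = find_shift_mask(keys)
      if result:
          shift, mask = result
          slots = sorted((k >> shift) & mask for k in keys)
          print(f"shift={shift}, mask={mask:#x}, slots={slots}")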

  3. Parameters Identification for Photovoltaic Module Based on an Improved Artificial Fish Swarm Algorithm

    PubMed Central

    Wang, Hong-Hua

    2014-01-01

    A precise mathematical model plays a pivotal role in the simulation, evaluation, and optimization of photovoltaic (PV) power systems. Different from the traditional linear model, the model of a PV module has the features of nonlinearity and multiple parameters. Since conventional methods are incapable of identifying the parameters of PV modules, an excellent optimization algorithm is required. The artificial fish swarm algorithm (AFSA), originally inspired by the simulation of the collective behavior of real fish swarms, is proposed to quickly and accurately extract the parameters of a PV module. In addition to the regular operations, a mutation operator (MO) is designed to enhance the searching performance of the algorithm. The feasibility of the proposed method is demonstrated with various parameters of a PV module under different environmental conditions, and the testing results are compared with those of other studied methods in terms of final solutions and computational time. The simulation results show that the proposed method is capable of obtaining higher parameter identification precision. PMID:25243233

  4. A three-dimensional weighted cone beam filtered backprojection (CB-FBP) algorithm for image reconstruction in volumetric CT under a circular source trajectory

    NASA Astrophysics Data System (ADS)

    Tang, Xiangyang; Hsieh, Jiang; Hagiwara, Akira; Nilsen, Roy A.; Thibault, Jean-Baptiste; Drapkin, Evgeny

    2005-08-01

    The original FDK algorithm proposed for cone beam (CB) image reconstruction under a circular source trajectory has been extensively employed in medical and industrial imaging applications. With increasing cone angle, CB artefacts in images reconstructed by the original FDK algorithm deteriorate, since the circular trajectory does not satisfy the so-called data sufficiency condition (DSC). A few 'circular plus' trajectories have been proposed in the past to help the original FDK algorithm to reduce CB artefacts by meeting the DSC. However, the circular trajectory has distinct advantages over other scanning trajectories in practical CT imaging, such as head imaging, breast imaging, cardiac, vascular and perfusion applications. In addition to looking into the DSC, another insight into the CB artefacts existing in the original FDK algorithm is the inconsistency between conjugate rays that are 180° apart in view angle (namely conjugate ray inconsistency). The conjugate ray inconsistency is pixel dependent, varying dramatically over pixels within the image plane to be reconstructed. However, the original FDK algorithm treats all conjugate rays equally, resulting in CB artefacts that can be avoided if appropriate weighting strategies are exercised. Along with an experimental evaluation and verification, a three-dimensional (3D) weighted axial cone beam filtered backprojection (CB-FBP) algorithm is proposed in this paper for image reconstruction in volumetric CT under a circular source trajectory. Without extra trajectories supplemental to the circular trajectory, the proposed algorithm applies 3D weighting on projection data before 3D backprojection to reduce conjugate ray inconsistency by suppressing the contribution from one of the conjugate rays with a larger cone angle. Furthermore, the 3D weighting is dependent on the distance between the reconstruction plane and the central plane determined by the circular trajectory. The proposed 3D weighted axial CB-FBP algorithm

  5. Localized density matrix minimization and linear-scaling algorithms

    NASA Astrophysics Data System (ADS)

    Lai, Rongjie; Lu, Jianfeng

    2016-06-01

    We propose a convex variational approach to compute localized density matrices for both zero temperature and finite temperature cases, by adding an entry-wise ℓ1 regularization to the free energy of the quantum system. Based on the fact that the density matrix decays exponentially away from the diagonal for insulating systems or systems at finite temperature, the proposed ℓ1 regularized variational method provides an effective way to approximate the original quantum system. We provide theoretical analysis of the approximation behavior and also design convergence guaranteed numerical algorithms based on Bregman iteration. More importantly, the ℓ1 regularized system naturally leads to localized density matrices with banded structure, which enables us to develop approximating algorithms to find the localized density matrices with computation cost linearly dependent on the problem size.
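
    In symbols, the zero-temperature problem sketched above can be paraphrased as the following convex program (this rendering is a plausible reading of the abstract; the exact constraint set and normalization are as in the paper):

      \min_{X}\; \operatorname{tr}(HX) \;+\; \mu\,\|X\|_{1}
      \quad \text{subject to} \quad
      X = X^{\mathsf{T}}, \qquad \operatorname{tr}(X) = N, \qquad 0 \preceq X \preceq I,

    where H is the discretized Hamiltonian, N the electron number, \|X\|_1 the entry-wise ℓ1 norm, and μ the regularization strength; at finite temperature the objective tr(HX) is replaced by the free energy.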

  6. Classifying scaled and rotated textures using a region-matched algorithm

    NASA Astrophysics Data System (ADS)

    Yao, Chih-Chia; Chen, Yu-Tin

    2012-07-01

    A novel method to correct texture variations resulting from scale magnification, narrowing caused by cropping into the original size, or spatial rotation is discussed. The variations usually occur in images captured by a camera using different focal lengths. A representative region-matched algorithm is developed to improve texture classification after magnification, narrowing, and spatial rotation. Using a minimum ellipse, the representative region-matched algorithm encloses a specific region extracted by the J-image segmentation algorithm. After translating the coordinates, the equation of an ellipse in the rotated texture can be formulated as that of an ellipse in the original texture. The rotation-invariant property of the ellipse provides an efficient method to identify the rotated texture. Additionally, the scale-variant representative region can be classified by adopting scale-invariant parameters. Moreover, a hybrid texture filter is developed. In the hybrid texture filter, the scheme of texture feature extraction includes the Gabor wavelet and the representative region-matched algorithm. Support vector machines are introduced as the classifier. The proposed hybrid texture filter performs excellently in classifying both stochastic and structural textures. Furthermore, experimental results demonstrate that the proposed algorithm outperforms conventional design algorithms.

  7. A Monotonically Convergent Algorithm for Orthogonal Congruence Rotation.

    ERIC Educational Resources Information Center

    Kiers, Henk A. L.; Groenen, Patrick

    1996-01-01

    An iterative majorization algorithm is proposed for orthogonal congruence rotation that is guaranteed to converge from every starting point. In addition, the algorithm is easier to program than the algorithm proposed by F. B. Brokken, which is not guaranteed to converge. The derivation of the algorithm is traced in detail. (SLD)

  8. Multi-pattern string matching algorithms comparison for intrusion detection system

    NASA Astrophysics Data System (ADS)

    Hasan, Awsan A.; Rashid, Nur'Aini Abdul; Abdulrazzaq, Atheer A.

    2014-12-01

    Computer networks are developing exponentially and running at high speeds. With the increasing number of Internet users, computers have become the preferred target for complex attacks that require complex analyses to be detected. The intrusion detection system (IDS) was created and has become an important part of any modern network to protect the network from attacks. The IDS relies on string matching algorithms to identify network attacks, but these string matching algorithms consume a considerable amount of IDS processing time, thereby slowing down IDS performance. A new algorithm that can overcome this weakness of the IDS needs to be developed. Improving the multi-pattern matching algorithm ensures that an IDS can work properly and that its limitations can be overcome. In this paper, we perform a comparison between our three multi-pattern matching algorithms, MP-KR, MPH-QS and MPH-BMH, and their corresponding original algorithms KR, QS and BMH, respectively. The experiments show that MPH-QS performs best among the proposed algorithms, followed by MPH-BMH, while MP-KR is the slowest. MPH-QS detects a large number of signature patterns in a short time compared to the other two algorithms. This finding proves that the multi-pattern matching algorithms are more efficient in high-speed networks.
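
    For reference, the single-pattern Boyer-Moore-Horspool (BMH) algorithm that MPH-BMH extends is shown below; the multi-pattern variants compared in the paper are not reproduced here.

      # Boyer-Moore-Horspool: skip ahead using a bad-character shift table.
      def horspool(text, pattern):
          m, n = len(pattern), len(text)
          if m == 0 or m > n:
              return -1
          # Shift table: distance from each character's last occurrence
          # (excluding the final position) to the end of the pattern.
          shift = {c: m - i - 1 for i, c in enumerate(pattern[:-1])}
          i = m - 1
          while i < n:
              if text[i - m + 1:i + 1] == pattern:
                  return i - m + 1                 # match found
              i += shift.get(text[i], m)           # slide the window
          return -1

      print(horspool("intrusion detection system", "detect"))  # -> 10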

  9. Quantum adiabatic algorithm for factorization and its experimental implementation.

    PubMed

    Peng, Xinhua; Liao, Zeyang; Xu, Nanyang; Qin, Gan; Zhou, Xianyi; Suter, Dieter; Du, Jiangfeng

    2008-11-28

    We propose an adiabatic quantum algorithm capable of factorizing numbers, using fewer qubits than Shor's algorithm. We implement the algorithm in a NMR quantum information processor and experimentally factorize the number 21. In the range that our classical computer could simulate, the quantum adiabatic algorithm works well, providing evidence that the running time of this algorithm scales polynomially with the problem size. PMID:19113467

  10. Development of a new signal processing algorithm based on independent component analysis for single channel ECG data.

    PubMed

    Lee, J; Lee, K J; Yoo, S K

    2004-01-01

    In this paper, we propose a new signal processing algorithm based on independent component analysis (ICA) for single-channel ECG data. To apply ICA to single-channel data, mixed (multi-channel) signals are constructed by adding delays to the original data. Signal enhancement is achieved by ICA. To validate the usefulness of the enhanced signal, QRS complex detection was also performed. In the QRS detection process, the Hilbert transform and the wavelet transform were used, and good QRS detection efficacy was obtained. Furthermore, signals that could not be filtered properly using existing algorithms also showed better enhancement. In the future, the algorithm needs to be optimized and simplified. PMID:17271650
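
    A sketch of the delay-embedding trick described above: delayed copies of the single lead form a pseudo multi-channel matrix to which ICA is applied. scikit-learn's FastICA and the synthetic spiky waveform are stand-ins for the authors' ICA implementation and real ECG data.

      # Build a pseudo multi-channel matrix from one lead, then run ICA.
      import numpy as np
      from sklearn.decomposition import FastICA

      def delay_embed(x, n_channels=4, delay=2):
          """Stack delayed copies of 1-D x into a (samples, channels) matrix."""
          n = len(x) - (n_channels - 1) * delay
          return np.column_stack([x[i * delay: i * delay + n]
                                  for i in range(n_channels)])

      fs = 250                                       # assumed sampling rate (Hz)
      t = np.arange(0, 10, 1 / fs)
      ecg_like = np.sin(2 * np.pi * 1.2 * t) ** 63   # crude spiky waveform
      noisy = ecg_like + 0.3 * np.random.default_rng(4).normal(size=t.size)

      X = delay_embed(noisy)
      sources = FastICA(n_components=4, random_state=0).fit_transform(X)
      # One estimated component should carry an enhanced QRS-like signal.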

  11. An efficient Earth Mover's Distance algorithm for robust histogram comparison.

    PubMed

    Ling, Haibin; Okada, Kazunori

    2007-05-01

    We propose EMD-L1: a fast and exact algorithm for computing the Earth Mover's Distance (EMD) between a pair of histograms. The efficiency of the new algorithm enables its application to problems that were previously prohibitive due to high time complexities. The proposed EMD-L1 significantly simplifies the original linear programming formulation of EMD. Exploiting the L1 metric structure, the number of unknown variables in EMD-L1 is reduced to O(N) from O(N2) of the original EMD for a histogram with N bins. In addition, the number of constraints is reduced by half and the objective function of the linear program is simplified. Formally, without any approximation, we prove that the EMD-L1 formulation is equivalent to the original EMD with a L1 ground distance. To perform the EMD-L1 computation, we propose an efficient tree-based algorithm, Tree-EMD. Tree-EMD exploits the fact that a basic feasible solution of the simplex algorithm-based solver forms a spanning tree when we interpret EMD-L1 as a network flow optimization problem. We empirically show that this new algorithm has an average time complexity of O(N2), which significantly improves the best reported supercubic complexity of the original EMD. The accuracy of the proposed methods is evaluated by experiments for two computation-intensive problems: shape recognition and interest point matching using multidimensional histogram-based local features. For shape recognition, EMD-L1 is applied to compare shape contexts on the widely tested MPEG7 shape data set, as well as an articulated shape data set. For interest point matching, SIFT, shape context and spin image are tested on both synthetic and real image pairs with large geometrical deformation, illumination change, and heavy intensity noise. The results demonstrate that our EMD-L1-based solutions outperform previously reported state-of-the-art features and distance measures in solving the two tasks. PMID:17356203

  12. Photocopy of original drawings (original located at the National Archives, ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    Photocopy of original drawings (original located at the National Archives, San Bruno, California, Navy # 104-A-4, showing organ recess. Dept. yards & docks, U.S. Navy, Mare Island, Cal., "Plan & Sections, proposed addition, St. Peter's Chapel, December 1904 - Mare Island Naval Shipyard, St. Peter's Chapel, Walnut Street & Cedar Parkway, Vallejo, Solano County, CA

  13. 12. Photocopy of original construction drawing, undated. (Original print in ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    12. Photocopy of original construction drawing, undated. (Original print in the possession of U.S. Army Corps of Engineers, Portland District, Portland, OR.) PROPOSED EXTENSION TO ADMINISTRATION BUILDING. - Bonneville Project, Administration Building, South side of main entrance, Bonneville Project, Bonneville, Multnomah County, OR

  14. Analyzing the applicability of the least risk path algorithm in indoor space

    NASA Astrophysics Data System (ADS)

    Vanclooster, A.; Viaene, P.; Van de Weghe, N.; Fack, V.; De Maeyer, Ph.

    2013-11-01

    Over the last couple of years, applications that support navigation and wayfinding in indoor environments have become a booming industry. However, the algorithmic support for indoor navigation has so far been left mostly untouched, as most applications mainly rely on adapting Dijkstra's shortest path algorithm to an indoor network. In outdoor space, several alternative algorithms have been proposed that add a more cognitive notion to the calculated paths and as such adhere to natural wayfinding behavior (e.g. simplest paths, least risk paths). The need for indoor cognitive algorithms is highlighted by more challenging navigation and orientation due to the specific indoor structure (e.g. fragmentation, less visibility, confined areas…). Therefore, the aim of this research is to extend those richer cognitive algorithms to three-dimensional indoor environments. More specifically for this paper, we will focus on the application of the least risk path algorithm of Grum (2005) to an indoor space. The algorithm as proposed by Grum (2005) is duplicated and tested in a complex multi-story building. Several analyses compare shortest and least risk paths in indoor and in outdoor space. The results of these analyses indicate that the current outdoor least risk path algorithm does not calculate less risky paths compared to its shortest paths. In some cases, worse routes have been suggested. Adjustments to the original algorithm are proposed to align it more closely with the specific structure of indoor environments. In a later stage, other cognitive algorithms will be implemented and tested in both an indoor and a combined indoor-outdoor setting, in an effort to improve the overall user experience during navigation in indoor environments.

  15. A fast implementation of the incremental backprojection algorithms for parallel beam geometries

    SciTech Connect

    Chen, C.M.; Wang, C.Y.; Cho, Z.H.

    1996-12-01

    Filtered-backprojection algorithms are the most widely used approaches for reconstruction of computed tomographic (CT) images, such as X-ray CT and positron emission tomographic (PET) images. The Incremental backprojection algorithm is a fast backprojection approach based on restructuring the Shepp and Logan algorithm. By exploiting the interdependency (position and values) of adjacent pixels, the Incremental algorithm requires only O(N) and O(N^2) multiplications, in contrast to O(N^2) and O(N^3) multiplications for the Shepp and Logan algorithm in two-dimensional (2-D) and three-dimensional (3-D) backprojections, respectively, for each view, where N is the size of the image in each dimension. In addition, it may reduce the number of additions for each pixel computation. The improvement achieved by the Incremental algorithm in practice was not, however, as significant as expected. One of the main reasons is the inevitable visiting of pixels outside the beam in the searching flow scheme originally developed for the Incremental algorithm. To optimize the implementation of the Incremental algorithm, an efficient scheme, namely the coded searching flow scheme, is proposed in this paper to minimize the overhead caused by searching for all pixels in a beam. The key idea of this scheme is to encode the searching flow for all pixels inside each beam. While backprojecting, all pixels may be visited without any overhead, because the coded searching flow is used as a priori information. The proposed coded searching flow scheme has been implemented on Sun Sparc 10 and Sun Sparc 20 workstations. The implementation results show that the proposed scheme is 1.45-2.0 times faster than the original searching flow scheme for most cases tested.

  16. Algorithm design of liquid lens inspection system

    NASA Astrophysics Data System (ADS)

    Hsieh, Lu-Lin; Wang, Chun-Chieh

    2008-08-01

    In the mobile lens domain, glass lenses are often applied where high resolution is required; but a glass zoom lens needs to be collocated with movable machinery and a voice-coil motor, which usually imposes space limits on miniaturized designs. With the development of high-level molding component technology, the liquid lens has become the focus of mobile phone and digital camera companies. A liquid lens set combined with solid optical lenses and a driving circuit has replaced the original components. As a result, the volume requirement is decreased to merely 50% of the original design. Besides, with its high focus adjustment speed, low energy requirement, high durability, and low-cost manufacturing process, the liquid lens shows advantages in the competitive market. In the past, only scrape defects caused by external force had to be inspected for glass lenses. For the liquid lens, the states of four different structural layers need to be inspected because of the different design and structure. In this paper, machine vision and digital image processing technology are applied to carry out inspections of a particular layer according to the needs of users. According to our experimental results, the proposed algorithm can automatically remove the out-of-focus background, extract the region of interest, and find and analyze the defects in a particular layer efficiently. In the future, the algorithm will be combined with autofocus technology to implement inside inspection based on product inspection demands.

  17. Paideia: Origins.

    ERIC Educational Resources Information Center

    Burns, John W.

    The ideas in Mortimer Adler's educational manifesto, "The Paideia Proposal," are compared to the Greek concept of paideia (meaning upbringing of a child) and discredited. Committed to universal education, Adler wants schooling based on a set of uniformly applied objectives achieved by packaging pre-organized knowledge in established areas of…

  18. Algorithm development

    NASA Technical Reports Server (NTRS)

    Barth, Timothy J.; Lomax, Harvard

    1987-01-01

    The past decade has seen considerable activity in algorithm development for the Navier-Stokes equations. This has resulted in a wide variety of useful new techniques. Some examples for the numerical solution of the Navier-Stokes equations are presented, divided into two parts. One is devoted to the incompressible Navier-Stokes equations, and the other to the compressible form.

  19. Cropping and noise resilient steganography algorithm using secret image sharing

    NASA Astrophysics Data System (ADS)

    Juarez-Sandoval, Oswaldo; Fierro-Radilla, Atoany; Espejel-Trujillo, Angelina; Nakano-Miyatake, Mariko; Perez-Meana, Hector

    2015-03-01

    This paper proposes an image steganography scheme in which a secret image is hidden in a cover image using a secret image sharing (SIS) scheme. By taking advantage of the fault-tolerant property of the (k,n)-threshold SIS, where using any k of n shares (k≤n) the secret data can be recovered without any ambiguity, the proposed steganography algorithm becomes resilient to cropping and impulsive noise contamination. Among the many SIS schemes proposed until now, Lin and Chan's scheme is selected as the SIS, due to its lossless recovery capability for a large amount of secret data. The proposed scheme is evaluated from several points of view, such as imperceptibility of the stego-image with respect to its original cover image, and robustness of the hidden data to cropping and impulsive noise contamination. The evaluation results show a high quality of the extracted secret image from the stego-image even when it has suffered more than 20% cropping or high-density noise contamination.
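
    To make the fault tolerance behind the scheme concrete, here is the classic Shamir (k,n)-threshold sharing over a small prime field. Note the paper actually uses Lin and Chan's scheme, chosen for its lossless, high-capacity recovery, so this is a generic stand-in.

      # Shamir (k, n)-threshold secret sharing over GF(257): any k shares
      # reconstruct the secret exactly; fewer reveal nothing.
      import random

      P = 257                                    # prime field (fits 8-bit pixels)

      def make_shares(secret, k, n):
          coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
          return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
                  for x in range(1, n + 1)]

      def reconstruct(shares):                   # Lagrange interpolation at x=0
          secret = 0
          for xj, yj in shares:
              num = den = 1
              for xm, _ in shares:
                  if xm != xj:
                      num = num * (-xm) % P
                      den = den * (xj - xm) % P
              secret = (secret + yj * num * pow(den, P - 2, P)) % P
          return secret

      shares = make_shares(123, k=3, n=5)
      assert reconstruct(shares[:3]) == 123      # any 3 of 5 shares suffice
      assert reconstruct(shares[2:]) == 123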

  20. A joint watermarking/encryption algorithm for verifying medical image integrity and authenticity in both encrypted and spatial domains.

    PubMed

    Bouslimi, D; Coatrieux, G; Roux, Ch

    2011-01-01

    In this paper, we propose a new joint watermarking/encryption algorithm for the purpose of verifying the reliability of medical images in both the encrypted and the spatial domains. It combines a substitutive watermarking algorithm, quantization index modulation (QIM), with a block cipher algorithm, the Advanced Encryption Standard (AES), in CBC mode of operation. The proposed solution gives access to the outcomes of the image integrity and origin checks even though the image is stored encrypted. Experimental results achieved on 8-bit encoded ultrasound images illustrate the overall performance of the proposed scheme. By making use of the AES block cipher in CBC mode, the proposed solution is compliant with or transparent to the DICOM standard. PMID:22256213
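
    The QIM component can be sketched in a few lines: each sample is quantized onto one of two interleaved lattices according to the bit to embed. The step size below is illustrative, clipping to the pixel range is omitted, and the AES-CBC encryption layer of the actual scheme is not shown.

      # Quantization index modulation (QIM), simplest dither form.
      import numpy as np

      DELTA = 8.0

      def qim_embed(samples, bits):
          # bit 0 -> lattice {k*DELTA}; bit 1 -> lattice {k*DELTA + DELTA/2}
          offsets = np.where(np.asarray(bits) == 0, 0.0, DELTA / 2)
          return DELTA * np.round((samples - offsets) / DELTA) + offsets

      def qim_extract(samples):
          # Decode each sample to whichever lattice is nearer.
          d0 = np.abs(samples - DELTA * np.round(samples / DELTA))
          d1 = np.abs(samples - (DELTA * np.round((samples - DELTA / 2) / DELTA)
                                 + DELTA / 2))
          return (d1 < d0).astype(int)

      rng = np.random.default_rng(5)
      pixels = rng.uniform(0, 255, size=8)
      watermark = rng.integers(0, 2, size=8)
      marked = qim_embed(pixels, watermark)
      assert (qim_extract(marked) == watermark).all()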

  1. Learning Intelligent Genetic Algorithms Using Japanese Nonograms

    ERIC Educational Resources Information Center

    Tsai, Jinn-Tsong; Chou, Ping-Yi; Fang, Jia-Cen

    2012-01-01

    An intelligent genetic algorithm (IGA) is proposed to solve Japanese nonograms and is used as a method in a university course to learn evolutionary algorithms. The IGA combines the global exploration capabilities of a canonical genetic algorithm (CGA) with effective condensed encoding, improved fitness function, and modified crossover and…

  2. Highly Scalable Matching Pursuit Signal Decomposition Algorithm

    NASA Technical Reports Server (NTRS)

    Christensen, Daniel; Das, Santanu; Srivastava, Ashok N.

    2009-01-01

    Matching Pursuit Decomposition (MPD) is a powerful iterative algorithm for signal decomposition and feature extraction. MPD decomposes any signal into linear combinations of its dictionary elements, or atoms. A best-fit atom from an arbitrarily defined dictionary is determined through cross-correlation. The selected atom is subtracted from the signal, and this procedure is repeated on the residual in the subsequent iterations until a stopping criterion is met. The reconstructed signal reveals the waveform structure of the original signal. However, a sufficiently large dictionary is required for an accurate reconstruction; this in turn increases the computational burden of the algorithm, thus limiting its applicability and level of adoption. The purpose of this research is to improve the scalability and performance of the classical MPD algorithm. Correlation thresholds were defined to prune insignificant atoms from the dictionary. The Coarse-Fine Grids and Multiple Atom Extraction techniques were proposed to decrease the computational burden of the algorithm. The Coarse-Fine Grids method enabled the approximation and refinement of the parameters for the best-fit atom. The ability to extract multiple atoms within a single iteration enhanced the effectiveness and efficiency of each iteration. These improvements were implemented to produce an improved Matching Pursuit Decomposition algorithm entitled MPD++. Disparate signal decomposition applications may require a particular emphasis on accuracy or computational efficiency. The prominence of the key signal features required for the proper signal classification dictates the level of accuracy necessary in the decomposition. The MPD++ algorithm may be easily adapted to accommodate the imposed requirements. Certain feature extraction applications may require rapid signal decomposition. The full potential of MPD++ may be utilized to produce incredible performance gains while extracting only slightly less energy than the
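
    For context, here is baseline matching pursuit over an explicit dictionary, i.e., the classical MPD loop that MPD++ accelerates; the correlation-threshold pruning, coarse-fine grids, and multiple-atom extraction are not shown, and the cosine-like dictionary is a toy choice.

      # Classical matching pursuit: correlate, pick the best atom, subtract.
      import numpy as np

      def matching_pursuit(signal, dictionary, n_atoms=5):
          """dictionary: columns are unit-norm atoms."""
          residual = signal.astype(float).copy()
          decomposition = []
          for _ in range(n_atoms):
              corr = dictionary.T @ residual          # cross-correlation step
              k = np.argmax(np.abs(corr))             # best-fit atom
              decomposition.append((k, corr[k]))
              residual -= corr[k] * dictionary[:, k]  # subtract and repeat
          return decomposition, residual

      n = 64
      D = np.fft.fft(np.eye(n)).real                  # toy cosine-like dictionary
      D /= np.linalg.norm(D, axis=0)
      x = 3.0 * D[:, 5] - 2.0 * D[:, 12]
      atoms, r = matching_pursuit(x, D, n_atoms=2)
      print(atoms, np.linalg.norm(r))                 # residual ~ 0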

  3. Correction of Faulty Sensors in Phased Array Radars Using Symmetrical Sensor Failure Technique and Cultural Algorithm with Differential Evolution

    PubMed Central

    Khan, S. U.; Qureshi, I. M.; Zaman, F.; Shoaib, B.; Naveed, A.; Basit, A.

    2014-01-01

    Three issues regarding sensor failure at any position in the antenna array are discussed. We assume that the sensor position is known. The issues include a rise in sidelobe levels, displacement of nulls from their original positions, and diminishing of null depth. The required null depth is achieved by making the weight of the symmetrical complement sensor passive. A hybrid method based on a memetic computing algorithm is proposed. The hybrid method combines the cultural algorithm with differential evolution (CADE), which is used for the reduction of sidelobe levels and the placement of nulls at their original positions. A fitness function is used to minimize the error between the desired and estimated beam patterns along with null constraints. Simulation results for various scenarios are given to exhibit the validity and performance of the proposed algorithm. PMID:24688440

  4. The algorithm of motion blur image restoration based on PSF half-blind estimation

    NASA Astrophysics Data System (ADS)

    Chen, Da-Ke; Lin, Zhe

    2011-08-01

    A novel motion-blur image restoration algorithm based on half-blind PSF estimation with the Hough transform is introduced, on the basis of a full analysis of the principle of the TDICCD camera, to address the problem that using a vertical uniform linear motion estimate as the initial value of the PSF in the IBD algorithm leads to distortion in the restored image. Firstly, a mathematical model of image degradation is established using a priori information from multi-frame images, and the two parameters that crucially influence PSF estimation (motion blur length and angle) are set accordingly. Finally, the restored image is acquired through multiple iterations in the Fourier domain, starting from the initial PSF estimate obtained by the above method. Experimental results show that the proposed algorithm not only effectively solves the image distortion caused by relative motion between the TDICCD camera and moving objects, but also clearly restores the detailed characteristics of the original image.

  5. Spaceborne SAR Imaging Algorithm for Coherence Optimized

    PubMed Central

    Qiu, Zhiwei; Yue, Jianping; Wang, Xueqin; Yue, Shun

    2016-01-01

    This paper proposes an SAR imaging algorithm that maximizes coherence, built on existing SAR imaging algorithms. The basic idea in SAR imaging processing is that the output signal attains maximum signal-to-noise ratio (SNR) when optimal imaging parameters are used. A traditional imaging algorithm can achieve the best focusing effect, but it introduces decoherence in the subsequent interferometric processing. The algorithm proposed in this paper applies consistent imaging parameters to the SAR echo during focusing. Although the SNR of the output signal is reduced slightly, coherence is largely preserved, and an interferogram of high quality is finally obtained. In this paper, two scenes of Envisat ASAR data over Zhangbei are employed to test the algorithm. Compared with the interferogram from the traditional algorithm, the results show that this algorithm is more suitable for SAR interferometry (InSAR) research and applications. PMID:26871446

  6. Speech Enhancement based on Compressive Sensing Algorithm

    NASA Astrophysics Data System (ADS)

    Sulong, Amart; Gunawan, Teddy S.; Khalifa, Othman O.; Chebil, Jalel

    2013-12-01

    Various methods of speech enhancement have been proposed over the years. Accurate speech enhancement design focuses mainly on quality and intelligibility, and the proposed method targets a high performance level. Compressive sensing (CS) is a new paradigm for acquiring signals, fundamentally different from uniform-rate digitization followed by compression, often used for transmission or storage. CS can reduce the number of degrees of freedom of a sparse or compressible signal by permitting only certain configurations of large and zero/small coefficients and structured sparsity models. Therefore, CS provides a way of reconstructing a compressed version of the speech in the original signal by taking only a small number of linear, non-adaptive measurements. The performance of the overall algorithm is evaluated on speech quality using an informal listening test and the Perceptual Evaluation of Speech Quality (PESQ). Experimental results show that the CS algorithm performs very well over a wide range of speech tests and gives good performance for speech enhancement, with better noise suppression than conventional approaches and no obvious degradation of speech quality.

  7. Clever eye algorithm for target detection of remote sensing imagery

    NASA Astrophysics Data System (ADS)

    Geng, Xiurui; Ji, Luyan; Sun, Kang

    2016-04-01

    Target detection algorithms for hyperspectral remote sensing imagery, such as the two most commonly used remote sensing detection algorithms, the constrained energy minimization (CEM) and matched filter (MF), can usually be attributed to the inner product between a weight filter (or detector) and a pixel vector. CEM and MF have the same expression except that MF requires data centralization first. However, this difference leads to a difference in the target detection results. That is to say, the selection of the data origin could directly affect the performance of the detector. Therefore, does there exist another data origin other than the zero and mean-vector points for a better target detection performance? This is a very meaningful issue in the field of target detection, but it has not been paid enough attention yet. In this study, we propose a novel objective function by introducing the data origin as another variable, and the solution of the function is corresponding to the data origin with the minimal output energy. The process of finding the optimal solution can be vividly regarded as a clever eye automatically searching the best observing position and direction in the feature space, which corresponds to the largest separation between the target and background. Therefore, this new algorithm is referred to as the clever eye algorithm (CE). Based on the Sherman-Morrison formula and the gradient ascent method, CE could derive the optimal target detection result in terms of energy. Experiments with both synthetic and real hyperspectral data have verified the effectiveness of our method.
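
    The CEM filter that the clever eye algorithm generalizes has a closed form, w = R⁻¹d / (dᵀR⁻¹d), minimizing the average output energy subject to wᵀd = 1. A sketch follows; CE's additional search over the data origin is omitted, and the data are synthetic.

      # Constrained energy minimization (CEM) detector.
      import numpy as np

      def cem(data, target):
          """data: (pixels, bands); target: (bands,) spectrum. Returns scores."""
          R = data.T @ data / len(data)            # sample correlation matrix
          Rinv_d = np.linalg.solve(R, target)
          w = Rinv_d / (target @ Rinv_d)           # w = R^-1 d / (d^T R^-1 d)
          return data @ w

      rng = np.random.default_rng(6)
      background = rng.normal(size=(1000, 30))
      d = rng.uniform(0.5, 1.0, size=30)           # assumed target signature
      scene = np.vstack([background, d + 0.05 * rng.normal(size=30)])
      scores = cem(scene, d)
      print(scores[-1], scores[:-1].mean())        # target pixel scores near 1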

  8. Fast pixel shifting phase unwrapping algorithm in quantitative interferometric microscopy

    NASA Astrophysics Data System (ADS)

    Xu, Mingfei; Shan, Yanke; Yan, Keding; Xue, Liang; Wang, Shouyu; Liu, Fei

    2014-11-01

    Quantitative interferometric microscopy is an important method for observing biological samples such as cells and tissues. In order to obtain the continuous phase distribution of the sample from the interferogram, both phase extraction and phase unwrapping are needed in quantitative interferometric microscopy. Phase extraction methods include the fast Fourier transform method and the Hilbert transform method, among others, and almost all of them are rapid. However, traditional unwrapping methods, such as the least squares algorithm and the minimum network flow method, are time-consuming in locating the phase discontinuities, which leads to low processing efficiency. Other proposed high-speed phase unwrapping methods always need at least two interferograms to recover the final phase distribution, which prevents real-time processing. Therefore, a high-speed phase unwrapping algorithm for a single interferogram is required to improve calculation efficiency. Here, we propose a fast phase unwrapping algorithm to realize high-speed quantitative interferometric microscopy: by shifting the mod-2π wrapped phase map by one pixel and then multiplying the original phase map with the shifted one, the locations of the phase discontinuities can be easily determined. Both numerical simulation and experiments confirm that the algorithm is fast, precise, and reliable.
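
    A one-dimensional sketch of the pixel-shifting idea follows: comparing the wrapped phase with a one-pixel-shifted copy exposes the 2π discontinuities, which are then compensated cumulatively. This is a simplified reading of the method, not the authors' exact detection criterion, and the thresholds are illustrative.

      # Pixel-shift phase unwrapping sketch on a synthetic 1-D phase ramp.
      import numpy as np

      true_phase = np.linspace(0, 6 * np.pi, 500)          # continuous ramp
      wrapped = np.mod(true_phase + np.pi, 2 * np.pi) - np.pi

      shifted = np.roll(wrapped, 1)
      product = wrapped * shifted       # the multiply step: large negative at jumps
      jumps = product < -1.0            # discontinuity mask (threshold illustrative)
      jumps[0] = False                  # np.roll wraps around at index 0

      # Unwrap: accumulate +/- 2*pi at each detected discontinuity.
      steps = np.where(jumps, -np.sign(wrapped - shifted) * 2 * np.pi, 0.0)
      unwrapped = wrapped + np.cumsum(steps)
      assert np.allclose(unwrapped, true_phase, atol=1e-6)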

  9. Research on Routing Selection Algorithm Based on Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Gao, Guohong; Zhang, Baojian; Li, Xueyong; Lv, Jinna

    The genetic algorithm is a random search and optimization method based on natural selection and genetic mechanisms in living beings. In recent years, because of its potential in solving complicated problems and its successful application in industrial projects, the genetic algorithm has attracted wide attention from domestic and international scholars. Routing selection communication has been defined as a standard communication model of IP version 6. This paper proposes a service model for routing selection communication, and designs and implements a new routing selection algorithm based on a genetic algorithm. The experimental simulation results show that this algorithm can obtain better solutions in less time with a more balanced network load, which enhances the search ratio and the availability of network resources and improves the quality of service.

  10. Evolving Stochastic Learning Algorithm based on Tsallis entropic index

    NASA Astrophysics Data System (ADS)

    Anastasiadis, A. D.; Magoulas, G. D.

    2006-03-01

    In this paper, inspired by our previous algorithm, which was based on the theory of Tsallis statistical mechanics, we develop a new evolving stochastic learning algorithm for neural networks. The new algorithm combines deterministic and stochastic search steps by employing a different adaptive stepsize for each network weight, and applies a form of noise that is characterized by the nonextensive entropic index q, regulated by a weight decay term. The behavior of the learning algorithm can be made more stochastic or deterministic depending on the trade-off between the temperature T and the q values. This is achieved by introducing a formula that defines a time-dependent relationship between these two important learning parameters. Our experimental study verifies that there are indeed improvements in the convergence speed of this new evolving stochastic learning algorithm, which makes learning faster than the original Hybrid Learning Scheme (HLS). In addition, experiments are conducted to explore the influence of the entropic index q and the temperature T on the convergence speed and stability of the proposed method.

  11. Supergravity, complex parameters and the Janis-Newman algorithm

    NASA Astrophysics Data System (ADS)

    Erbin, Harold; Heurtier, Lucien

    2015-08-01

    The Demiański-Janis-Newman (DJN) algorithm is an original solution-generating technique. For a long time it has been limited to producing rotating solutions, restricted to the case of a metric and real scalar fields, despite the fact that Demiański extended it to include more parameters such as a NUT charge. Recently two independent prescriptions have been given for extending the algorithm to gauge fields and thus electrically charged configurations. In this paper we aim to complete the algorithm by providing a missing but important piece: how the transformation is applied to complex scalar fields. We illustrate our proposal through several examples taken from N = 2 supergravity, including the stationary BPS solutions from Behrndt et al and Sen's axion-dilaton rotating black hole. Moreover we discuss solutions that include pairs of complex parameters, such as the mass and the NUT charge, or the electric and magnetic charges, and we explain how to perform the algorithm in this context (with the example of Kerr-Newman-Taub-NUT and dyonic Kerr-Newman black holes). The final formulation of the DJN algorithm can possibly handle solutions with five of the six Plebański-Demiański parameters along with any type of bosonic fields with spin less than two (exemplified with the stationary Israel-Wilson-Perjes solutions). This provides all the necessary tools for applications to general matter-coupled gravity and to (gauged) supergravity.

  12. Neural-network algorithms and architectures for pattern classification

    SciTech Connect

    Mao, Weidong.

    1991-01-01

    The study of artificial neural networks is an integrated research field that involves the disciplines of applied mathematics, physics, neurobiology, computer science, information, control, parallel processing and VLSI. This dissertation deals with a number of topics from a broad spectrum of neural network research in models, algorithms, applications and VLSI architectures. Specifically, this dissertation is aimed at studying neural network algorithms and architectures for pattern classification tasks. The work presented in this dissertation has a wide range of applications, including speech recognition, image recognition, and high-level knowledge processing. Supervised neural networks, such as the back-propagation network, can be used for classification tasks as the result of approximating an input/output mapping. They are approximation-based classifiers. The original gradient descent back-propagation learning algorithm exhibits slow convergence. Fast algorithms such as the conjugate gradient and quasi-Newton algorithms can be adopted. The main emphasis of this dissertation regarding neural network classifiers is on competition-based classifiers. Due to the rapid advances in VLSI technology, parallel processing, and computer-aided design (CAD), application-specific VLSI systems are becoming more and more powerful and feasible. In particular, VLSI array processors offer high speed and efficiency through their massive parallelism and pipelining, regularity, modularity, and local communication. A unified VLSI array architecture can be used for implementing neural networks and hidden Markov models. The author also proposes a pipeline interleaving approach to designing VLSI array architectures for real-time image and video signal processing.

  13. Development of a memetic clustering algorithm for optimal spectral histology: application to FTIR images of normal human colon.

    PubMed

    Farah, Ihsen; Nguyen, Thi Nguyet Que; Groh, Audrey; Guenot, Dominique; Jeannesson, Pierre; Gobinet, Cyril

    2016-05-23

    The coupling between Fourier-transform infrared (FTIR) imaging and unsupervised classification is effective in revealing the different structures of human tissues based on their specific biomolecular IR signatures; the spectral histology of the studied samples is thus achieved. However, the most widely applied clustering methods in spectral histology are local search algorithms, which converge to a local optimum depending on initialization, so multiple runs of these techniques can yield multiple different solutions. Here, we propose a memetic algorithm, based on a genetic algorithm with a k-means clustering refinement, to perform optimal clustering. This approach was then applied to FTIR images of normal human colon tissues originating from five patients. The results show the efficiency of the proposed memetic algorithm in achieving the optimal spectral histology of these samples, contrary to k-means. PMID:27110605
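
    The genetic-algorithm-plus-k-means structure described above can be sketched compactly. The following is a minimal illustration of the memetic pattern under assumed settings (population size, uniform crossover, Gaussian mutation, SSE fitness); it is not the authors' implementation.

```python
import numpy as np

def kmeans_refine(X, centroids, n_iter=3):
    """Local refinement: a few standard k-means updates."""
    for _ in range(n_iter):
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for k in range(len(centroids)):
            if np.any(labels == k):
                centroids[k] = X[labels == k].mean(axis=0)
    return centroids

def sse(X, centroids):
    """Fitness: sum of squared distances to the nearest centroid."""
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return (d.min(axis=1) ** 2).sum()

def memetic_clustering(X, k, pop_size=20, generations=50,
                       rng=np.random.default_rng(0)):
    # Population of candidate centroid sets, initialised from random samples.
    pop = [X[rng.choice(len(X), k, replace=False)].copy()
           for _ in range(pop_size)]
    for _ in range(generations):
        pop = [kmeans_refine(X, c) for c in pop]   # memetic local search step
        pop.sort(key=lambda c: sse(X, c))          # rank by fitness
        elite = pop[: pop_size // 2]
        children = []
        for _ in range(pop_size - len(elite)):
            i, j = rng.choice(len(elite), 2, replace=False)
            mask = rng.random((k, 1)) < 0.5        # uniform crossover
            child = np.where(mask, elite[i], elite[j])
            children.append(child + rng.normal(0, 0.01, child.shape))  # mutation
        pop = elite + children
    return pop[0]  # best centroid set found
```

    Running k-means only as a refinement step inside the evolutionary loop is what distinguishes the memetic scheme from simply restarting k-means from random seeds.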

  14. A Generalization of Takane's Algorithm for DEDICOM.

    ERIC Educational Resources Information Center

    Kiers, Henk A. L.; And Others

    1990-01-01

    An algorithm is described for fitting the DEDICOM model (proposed by R. A. Harshman in 1978) for the analysis of asymmetric data matrices. The method modifies a procedure proposed by Y. Takane (1985) to provide guaranteed monotonic convergence. The algorithm is based on a technique known as majorization. (SLD)

  15. A Support Vector Machine Blind Equalization Algorithm Based on Immune Clone Algorithm

    NASA Astrophysics Data System (ADS)

    Yecai, Guo; Rui, Ding

    To address the impact of parameter selection on the application of support vector machines (SVM) to blind equalization, an SVM constant modulus blind equalization algorithm based on the immune clone selection algorithm (CSA-SVM-CMA) is proposed. In this algorithm, the immune clone algorithm is used to optimize the parameters of the SVM, exploiting its advantages of preventing premature convergence, avoiding local optima, and converging quickly. The proposed algorithm improves the parameter selection efficiency of the SVM constant modulus blind equalization algorithm (SVM-CMA) and overcomes the defect of manually set parameters. Accordingly, the CSA-SVM-CMA has a faster convergence rate and smaller mean square error than the SVM-CMA. Computer simulations in underwater acoustic channels have proved the validity of the algorithm.

  16. Analysis of the contact graph routing algorithm: Bounding interplanetary paths

    NASA Astrophysics Data System (ADS)

    Birrane, Edward; Burleigh, Scott; Kasch, Niels

    2012-06-01

    Interplanetary communication networks comprise orbiters, deep-space relays, and stations on planetary surfaces. These networks must overcome node mobility, constrained resources, and significant propagation delays. Opportunities for wireless contact rely on calculating transmit and receive opportunities, but the Euclidean-distance diameter of these networks (measured in light-seconds and light-minutes) precludes node discovery and contact negotiation. Propagation delay may exceed the duration of line-of-sight contact between nodes. For example, Mars and Earth orbiters may be separated by up to 20.8 min of signal propagation time. Such spacecraft may never share line-of-sight, but may uni-directionally communicate if one orbiter knows the other's future position. The Contact Graph Routing (CGR) approach is a family of algorithms presented to solve the messaging problem of interplanetary communications. These algorithms exploit networks where nodes exhibit deterministic mobility. For CGR, mobility and bandwidth information is pre-configured throughout the network, allowing nodes to construct transmit opportunities. Once constructed, routing algorithms operate on this contact graph to build an efficient path through the network. The interpretation of the contact graph, and the construction of a bounded approximate path, is critically important for adoption in operational systems. Brute-force approaches, while effective in small networks, are computationally expensive and will not scale. Methods of inferring cycles or other librations within the graph are difficult to detect and will guide the practical implementation of any routing algorithm. This paper presents a mathematical analysis of a multi-destination contact graph algorithm (MD-CGR), demonstrates that it is NP-complete, and proposes realistic constraints that make the problem solvable in polynomial time, as is the case with the originally proposed CGR algorithm. An analysis of path construction to complement hop
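
    The heart of a CGR-style router, a Dijkstra-like search over a time-tagged contact plan for the earliest-arrival path, can be sketched as below. The contact-tuple layout and the assumption that a transmission fits inside its contact window are illustrative simplifications, not details of the flight implementation.

```python
import heapq

def earliest_arrival(contacts, source, destination, t0=0.0):
    """Each contact is (sender, receiver, window_open, window_close, owlt),
    where owlt is the one-way light time. Returns earliest arrival time."""
    best = {source: t0}
    heap = [(t0, source)]
    while heap:
        t, node = heapq.heappop(heap)
        if node == destination:
            return t
        if t > best.get(node, float("inf")):
            continue  # stale heap entry
        for sender, receiver, open_, close, owlt in contacts:
            if sender != node:
                continue
            depart = max(t, open_)      # wait for the contact window to open
            if depart > close:
                continue                # window already closed
            arrive = depart + owlt
            if arrive < best.get(receiver, float("inf")):
                best[receiver] = arrive
                heapq.heappush(heap, (arrive, receiver))
    return None  # unreachable with the known contact plan

# A relay whose contact opens only after the first hop's signal arrives:
plan = [("earth", "orbiter", 10, 100, 600),
        ("orbiter", "lander", 700, 900, 1)]
print(earliest_arrival(plan, "earth", "lander"))  # -> 701
```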

  17. The registration algorithms for a video see-through augmented reality tabletop system

    NASA Astrophysics Data System (ADS)

    Zhang, Yang; Zhou, Ya; Yan, Da-yuan; Ma, Jin-tao; Liu, Xian-peng

    2008-03-01

    Traditional tabletop AR systems generally use a head-mounted display (HMD), which has shortcomings such as poor registration precision and low flexibility. To solve these problems, a new design of video see-through tabletop system is presented. In this paper, we describe an outline of the system and the registration algorithms for it, and a system origin calibration algorithm is proposed. In the calibration experiment, a sign cube is introduced for the first shot of the camera. The position of the sign cube becomes the origin of the world reference frame, in which the translation and rotation of virtual objects relative to the origin can be calculated easily. The experimental results show that the video see-through tabletop system meets the precision and flexibility requirements very well.

  18. Performance of Thorup's Shortest Path Algorithm for Large-Scale Network Simulation

    NASA Astrophysics Data System (ADS)

    Sakumoto, Yusuke; Ohsaki, Hiroyuki; Imase, Makoto

    In this paper, we investigate the performance of Thorup's algorithm by comparing it to Dijkstra's algorithm for large-scale network simulations. One of the challenges toward the realization of large-scale network simulations is the efficient computation of shortest paths in a graph with N vertices and M edges. The time complexity for solving a single-source shortest path (SSSP) problem with Dijkstra's algorithm with a binary heap (DIJKSTRA-BH) is O((M+N)log N). A sophisticated algorithm called Thorup's algorithm has been proposed. The original version of Thorup's algorithm (THORUP-FR) has a time complexity of O(M+N). A simplified version of Thorup's algorithm (THORUP-KL) has a time complexity of O(Mα(N)+N), where α(N) is the functional inverse of the Ackermann function. In this paper, we compare the performances (i.e., execution time and memory consumption) of THORUP-KL and DIJKSTRA-BH, since it is known that THORUP-FR is at least ten times slower than Dijkstra's algorithm with a Fibonacci heap. We find that (1) THORUP-KL is almost always faster than DIJKSTRA-BH for large-scale network simulations, and (2) the performances of THORUP-KL and DIJKSTRA-BH deviate from their time complexities due to the presence of the memory cache in the microprocessor.
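
    For reference, the DIJKSTRA-BH baseline is a few lines with Python's built-in binary heap; the adjacency-dict graph format here is an illustrative choice. Each vertex may be pushed several times, which is what yields the O((M+N)log N) bound.

```python
import heapq

def dijkstra_bh(graph, source):
    """Single-source shortest paths; graph = {u: [(v, weight), ...]}."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry left by an earlier relaxation
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

print(dijkstra_bh({"a": [("b", 2), ("c", 5)], "b": [("c", 1)]}, "a"))
# {'a': 0.0, 'b': 2.0, 'c': 3.0}
```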

  19. Program Proposal

    ERIC Educational Resources Information Center

    Baskas, Richard S.

    2012-01-01

    A study was conducted to determine if a deficiency, or learning gap, existed in a particular working environment. To determine if an assessment was to be conducted, a program proposal would need to be developed to explore this situation. In order for a particular environment to react and grow with other environments, it must be able to take on…

  20. An item-oriented recommendation algorithm on cold-start problem

    NASA Astrophysics Data System (ADS)

    Qiu, Tian; Chen, Guang; Zhang, Zi-Ke; Zhou, Tao

    2011-09-01

    Based on a hybrid algorithm incorporating the heat conduction and probability spreading processes (Proc. Natl. Acad. Sci. U.S.A., 107 (2010) 4511), in this letter we propose an improved method that introduces an item-oriented function, focusing on resolving the dilemma of recommendation accuracy between cold and popular items. Unlike previous works, the present algorithm does not require any additional information (e.g., tags). Experimental results obtained on three real datasets, RYM, Netflix and MovieLens, show that, compared with the original hybrid method, the proposed algorithm significantly enhances the recommendation accuracy for cold items, while it maintains the recommendation accuracy for the overall and popular items. This work might shed some light on both understanding and designing effective methods for long-tailed online applications of recommender systems.
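
    The underlying hybrid on which the item-oriented method builds mixes heat conduction (which favors cold items) with probability spreading (which favors popular items) through a parameter λ. A minimal sketch of that hybrid is below; making λ a per-item function is the spirit of the proposal, but the authors' exact item-oriented function is not reproduced here.

```python
import numpy as np

def hybrid_scores(A, lam=0.5):
    """A: binary user-item matrix (users x items); lam in [0, 1] interpolates
    between the heat-conduction and probability-spreading limits.
    Returns a users x items matrix of recommendation scores."""
    k_item = np.maximum(A.sum(axis=0), 1)   # item degrees
    k_user = np.maximum(A.sum(axis=1), 1)   # user degrees
    # W[i, j] = sum_l A[l, i] * A[l, j] / k_user[l]   (items x items)
    W = (A / k_user[:, None]).T @ A
    W /= k_item[:, None] ** (1 - lam) * k_item[None, :] ** lam
    return A @ W.T                          # scores for every user-item pair
```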

  1. Double color image encryption using iterative phase retrieval algorithm in quaternion gyrator domain.

    PubMed

    Shao, Zhuhong; Shu, Huazhong; Wu, Jiasong; Dong, Zhifang; Coatrieux, Gouenou; Coatrieux, Jean Louis

    2014-03-10

    This paper describes a novel algorithm to encrypt double color images into a single undistinguishable image in the quaternion gyrator domain. By using an iterative phase retrieval algorithm, the phase masks used for encryption are obtained. Subsequently, the encrypted image is generated via cascaded quaternion gyrator transforms with different rotation angles. The parameters of the quaternion gyrator transforms and the phases serve as encryption keys. By knowing these keys, the original color images can be fully recovered. Numerical simulations have demonstrated the validity of the proposed encryption system as well as its robustness against loss of data and additive Gaussian noise. PMID:24663832

  2. Robust matching algorithm for image mosaic

    NASA Astrophysics Data System (ADS)

    Zeng, Luan; Tan, Jiu-bin

    2010-08-01

    In order to improve the matching accuracy and the level of automation of image mosaicking, a matching algorithm based on SIFT (Scale Invariant Feature Transform) features is proposed, as detailed below. Firstly, according to the result of a cursory comparison with a given basal matching threshold, the collection of corresponding SIFT features, which still contains mismatches, is obtained. Secondly, after calculating for all corresponding features the ratio of the Euclidean distance to the closest neighbor over the distance to the second-closest, we select the image coordinates of the corresponding SIFT features with the eight smallest ratios to solve for the initial parameters of the pin-hole camera model, and then calculate the maximum error σ between the transformed coordinates and the original image coordinates of these eight corresponding features. Thirdly, we calculate the ratio of the largest original image coordinate of the eight corresponding features to the entire image size; this ratio is used as the control parameter k of the matching error threshold. Finally, we compute the difference between the transformed coordinates and the original image coordinates of all the features in the collection, deleting the corresponding features whose difference is larger than 3kσ. We can then obtain the exact collection of matching features to solve the parameters of the pin-hole camera model. Experimental results indicate that the proposed method is stable and reliable in the presence of variations of viewpoint, illumination, rotation and scale, and it has been used to achieve excellent matching accuracy on the experimental images. Moreover, the proposed method can select the matching threshold of different images automatically, without any manual intervention.
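
    The ratio-test stage of the abstract maps directly onto OpenCV's SIFT pipeline. The sketch below uses a RANSAC homography as a stand-in for the paper's pin-hole model fit and 3kσ rejection rule; file names and thresholds are placeholders.

```python
import cv2
import numpy as np

img1 = cv2.imread("left.jpg", cv2.IMREAD_GRAYSCALE)   # placeholder inputs
img2 = cv2.imread("right.jpg", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Ratio of the closest to the second-closest Euclidean distance.
matcher = cv2.BFMatcher(cv2.NORM_L2)
pairs = matcher.knnMatch(des1, des2, k=2)
good = [m for m, n in pairs if m.distance < 0.75 * n.distance]

src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)

# Outlier rejection while fitting the geometric model.
H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
```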

  3. An image hiding method based on cascaded iterative Fourier transform and public-key encryption algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, B.; Sang, Jun; Alam, Mohammad S.

    2013-03-01

    An image hiding method based on the cascaded iterative Fourier transform and a public-key encryption algorithm was proposed. Firstly, the original secret image was encrypted into two phase-only masks M1 and M2 via the cascaded iterative Fourier transform (CIFT) algorithm. Then, the public-key encryption algorithm RSA was adopted to encrypt M2 into M2'. Finally, a host image was enlarged by extending each pixel into 2×2 pixels, and each element of M1 and M2' was multiplied by a superimposition coefficient and added to or subtracted from two different elements in the 2×2 pixels of the enlarged host image. To recover the secret image, the two masks were extracted from the stego-image without the original host image. By applying a public-key encryption algorithm, key distribution was facilitated; moreover, compared with the image hiding method based on optical interference, the proposed method may reach higher robustness by employing the characteristics of the CIFT algorithm. Computer simulations show that this method has good robustness against image processing.

  4. DNA replication origins in archaea

    PubMed Central

    Wu, Zhenfang; Liu, Jingfang; Yang, Haibo; Xiang, Hua

    2014-01-01

    DNA replication initiation, which starts at specific chromosomal sites (known as replication origins), is the key regulatory stage of chromosome replication. Archaea, the third domain of life, use a single origin or multiple origins to initiate replication of their circular chromosomes. The basic structure of replication origins is conserved among archaea, typically including an AT-rich unwinding region flanked by several conserved repeats (origin recognition boxes, ORBs) that are located adjacent to a replication initiator gene. Both the ORB sequence and the adjacent initiator gene are considerably diverse among different replication origins, while in silico and genetic analyses have indicated the specificity between the initiator genes and their cognate origins. These replicator–initiator pairings are reminiscent of the oriC-dnaA system in bacteria, and a model for the negative regulation of origin activity by a downstream cluster of ORB elements has recently been proposed in haloarchaea. Moreover, comparative genomic analyses have revealed that the mosaics of replicator-initiator pairings in archaeal chromosomes originated from the integration of extrachromosomal elements. This review summarizes the research progress in understanding archaeal replication origins, with particular focus on the utilization, control and evolution of multiple replication origins in haloarchaea. PMID:24808892

  5. Brief Report: exploratory analysis of the ADOS revised algorithm: specificity and predictive value with Hispanic children referred for autism spectrum disorders.

    PubMed

    Overton, Terry; Fielding, Cheryl; de Alba, Roman Garcia

    2008-07-01

    This study compared Autism Diagnostic Observation Schedule (ADOS) algorithm scores for a sample of 26 children who were administered modules 1-3 of the ADOS with the scores obtained by applying the revised ADOS algorithm proposed by Gotham et al. (2007). Results of this application were inconsistent, yielding slightly more accurate results for module 1. New algorithm scores on modules 2 and 3 remained consistent with the original algorithm scores. The Mann-Whitney U test was applied to compare revised algorithm scores and clinical levels of social impairment to determine whether significant differences were evident. Results of the Mann-Whitney U analyses were inconsistent and demonstrated less specificity for children with milder levels of social impairment. The revised algorithm demonstrated accuracy for the more severely autistic group. PMID:18026872

  6. A genetic-algorithm-based method to find unitary transformations for any desired quantum computation and application to a one-bit oracle decision problem

    NASA Astrophysics Data System (ADS)

    Bang, Jeongho; Yoo, Seokwon

    2014-12-01

    We propose a genetic-algorithm-based method to find the unitary transformations for any desired quantum computation. We formulate a simple genetic algorithm by introducing a "genetic parameter vector" for the unitary transformations to be found. In the genetic algorithm process, all components of the genetic parameter vectors are expected to evolve to the solution parameters of the unitary transformations. We apply our method to find the optimal unitary transformations and to generalize the corresponding quantum algorithms for a realistic problem: the one-bit oracle decision problem, often called the Deutsch problem. By numerical simulations, we can faithfully find the appropriate unitary transformations to solve the problem using our method. We analyze the quantum algorithms identified by the found unitary transformations and generalize the variant models of the original Deutsch algorithm.

  7. A Bat Algorithm with Mutation for UCAV Path Planning

    PubMed Central

    Wang, Gaige; Guo, Lihong; Duan, Hong; Liu, Luo; Wang, Heqi

    2012-01-01

    Path planning for an uninhabited combat air vehicle (UCAV) is a complicated high-dimensional optimization problem, which mainly centers on optimizing the flight route subject to different kinds of constraints in complicated battlefield environments. The original bat algorithm (BA) is first used to solve the UCAV path planning problem. Furthermore, a new bat algorithm with mutation (BAM) is proposed, in which a modification is applied to mutate between bats during the updating of new solutions. The UCAV can then find a safe path by connecting chosen nodes of the coordinates while avoiding threat areas and minimizing fuel cost. This new approach accelerates the global convergence speed while preserving the strong robustness of the basic BA. The realization procedures for the original BA and the improved metaheuristic BAM are also presented. To prove the performance of the proposed metaheuristic method, BAM is compared with BA and other population-based optimization methods, such as ACO, BBO, DE, ES, GA, PBIL, PSO, and SGA. The experiments show that the proposed approach is more effective and feasible for UCAV path planning than the other models. PMID:23365518
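
    A minimal sketch of the bat-algorithm-with-mutation idea on a generic objective is given below. The frequency range, the blend-style mutation between bats, and the toy sphere objective are illustrative assumptions; the paper's UCAV cost model and exact operators are not reproduced.

```python
import numpy as np

def bam(objective, dim, n_bats=30, iters=200,
        fmin=0.0, fmax=2.0, pm=0.1, rng=np.random.default_rng(1)):
    x = rng.uniform(-5, 5, (n_bats, dim))           # candidate solutions
    v = np.zeros((n_bats, dim))                     # velocities
    fit = np.apply_along_axis(objective, 1, x)
    best = x[fit.argmin()].copy()
    for _ in range(iters):
        f = fmin + (fmax - fmin) * rng.random((n_bats, 1))  # frequencies
        v += (x - best) * f                         # pull toward the best bat
        cand = x + v
        # Mutation between bats: blend selected bats with random partners.
        mutate = rng.random(n_bats) < pm
        partners = rng.integers(n_bats, size=n_bats)
        cand[mutate] = 0.5 * (cand[mutate] + x[partners[mutate]])
        cfit = np.apply_along_axis(objective, 1, cand)
        improved = cfit < fit                       # greedy acceptance
        x[improved], fit[improved] = cand[improved], cfit[improved]
        best = x[fit.argmin()].copy()
    return best, fit.min()

best, val = bam(lambda z: (z ** 2).sum(), dim=5)    # toy objective
```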

  9. Parallel algorithms for unconstrained optimization by multisplitting with inexact subspace search - the abstract

    SciTech Connect

    Renaut, R.; He, Q.

    1994-12-31

    A new parallel iterative algorithm for unconstrained optimization by multisplitting is proposed. In this algorithm the original problem is split into a set of small optimization subproblems which are solved using well-known sequential algorithms. These algorithms are iterative in nature, e.g. the DFP variable metric method. Here the authors use sequential algorithms based on an inexact subspace search, which is an extension of the usual idea of an inexact line search. Essentially, the idea of the inexact line search for nonlinear minimization is that at each iteration one only finds an approximate minimum in the line search direction. Hence by inexact subspace search they mean that, instead of finding the minimum of the subproblem at each iteration, they do an incomplete downhill search to give an approximate minimum. Some convergence and numerical results for this algorithm will be presented. Further, the original theory will be generalized to the situation with a singular Hessian. Applications to nonlinear least squares problems will be presented. Experimental results will be presented for implementations on an Intel iPSC/860 Hypercube with 64 nodes as well as on the Intel Paragon.

  10. An enhanced version of the heat exchange algorithm with excellent energy conservation properties

    NASA Astrophysics Data System (ADS)

    Wirnsberger, P.; Frenkel, D.; Dellago, C.

    2015-09-01

    We propose a new algorithm for non-equilibrium molecular dynamics simulations of thermal gradients. The algorithm is an extension of the heat exchange algorithm developed by Hafskjold et al. [Mol. Phys. 80, 1389 (1993); 81, 251 (1994)], in which a certain amount of heat is added to one region and removed from another by rescaling velocities appropriately. Since the amount of added and removed heat is the same and the dynamics between velocity rescaling steps is Hamiltonian, the heat exchange algorithm is expected to conserve the energy. However, it has been reported previously that the original version of the heat exchange algorithm exhibits a pronounced drift in the total energy, the exact cause of which remained hitherto unclear. Here, we show that the energy drift is due to the truncation error arising from the operator splitting and suggest an additional coordinate integration step as a remedy. The new algorithm retains all the advantages of the original one whilst exhibiting excellent energy conservation as illustrated for a Lennard-Jones liquid and SPC/E water.
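
    One heat-exchange step in the Hafskjold scheme rescales the peculiar velocities (velocities relative to the region's centre of mass) so that momentum is conserved while the kinetic energy changes by exactly dQ. A sketch under those assumptions is below; the paper's eHEX remedy, the additional coordinate integration step that removes the operator-splitting truncation error, is not reproduced here.

```python
import numpy as np

def exchange_heat(v, m, region, dQ):
    """Rescale velocities of atoms in `region` (index array) to add energy dQ.
    v: (n, 3) velocities, m: (n,) masses. Conserves total momentum."""
    vr, mr = v[region], m[region, None]
    v_com = (mr * vr).sum(axis=0) / mr.sum()    # centre-of-mass velocity
    dv = vr - v_com                             # peculiar velocities
    K_rel = 0.5 * (mr * dv ** 2).sum()          # thermal kinetic energy
    xi = np.sqrt(1.0 + dQ / K_rel)              # rescaling factor
    v[region] = v_com + xi * dv
    return v

# Per MD step: add dQ to the hot slab and remove it from the cold slab, e.g.
#   v = exchange_heat(v, m, hot_idx, +dQ)
#   v = exchange_heat(v, m, cold_idx, -dQ)
```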

  11. A Synchronous-Asynchronous Particle Swarm Optimisation Algorithm

    PubMed Central

    Ab Aziz, Nor Azlina; Mubin, Marizan; Mohamad, Mohd Saberi; Ab Aziz, Kamarulzaman

    2014-01-01

    In the original particle swarm optimisation (PSO) algorithm, the particles' velocities and positions are updated after the whole swarm performance is evaluated. This algorithm is also known as synchronous PSO (S-PSO). The strength of this update method is in the exploitation of the information. Asynchronous update PSO (A-PSO) has been proposed as an alternative to S-PSO. A particle in A-PSO updates its velocity and position as soon as its own performance has been evaluated. Hence, particles are updated using partial information, leading to stronger exploration. In this paper, we attempt to improve PSO by merging both update methods to utilise the strengths of both methods. The proposed synchronous-asynchronous PSO (SA-PSO) algorithm divides the particles into smaller groups. The best member of a group and the swarm's best are chosen to lead the search. Members within a group are updated synchronously, while the groups themselves are asynchronously updated. Five well-known unimodal functions, four multimodal functions, and a real world optimisation problem are used to study the performance of SA-PSO, which is compared with the performances of S-PSO and A-PSO. The results are statistically analysed and show that the proposed SA-PSO has performed consistently well. PMID:25121109
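
    The synchronous-within, asynchronous-across update pattern can be sketched as follows: each group's members move together, but the global best is refreshed after every group, so later groups in the same iteration already see earlier groups' discoveries. Coefficients are common PSO defaults, not the paper's tuned values.

```python
import numpy as np

def sa_pso(objective, dim, n_groups=5, group_size=6, iters=100,
           w=0.7, c1=1.5, c2=1.5, rng=np.random.default_rng(2)):
    n = n_groups * group_size
    x = rng.uniform(-5, 5, (n, dim))
    v = np.zeros((n, dim))
    pbest = x.copy()
    pfit = np.apply_along_axis(objective, 1, x)
    gbest = pbest[pfit.argmin()].copy()
    for _ in range(iters):
        for g in range(n_groups):                     # groups taken in turn
            idx = slice(g * group_size, (g + 1) * group_size)
            r1, r2 = rng.random((2, group_size, dim))
            v[idx] = (w * v[idx] + c1 * r1 * (pbest[idx] - x[idx])
                      + c2 * r2 * (gbest - x[idx]))
            x[idx] += v[idx]                          # synchronous within group
            fit = np.apply_along_axis(objective, 1, x[idx])
            better = fit < pfit[idx]
            pbest[idx][better] = x[idx][better]
            pfit[idx][better] = fit[better]
            gbest = pbest[pfit.argmin()].copy()       # refreshed between groups
    return gbest, pfit.min()

best, val = sa_pso(lambda z: (z ** 2).sum(), dim=10)  # toy objective
```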

  12. WDM Multicast Tree Construction Algorithms and Their Comparative Evaluations

    NASA Astrophysics Data System (ADS)

    Makabe, Tsutomu; Mikoshi, Taiju; Takenaka, Toyofumi

    We propose novel tree construction algorithms for multicast communication in photonic networks. Since multicast communications consume many more link resources than unicast communications, effective algorithms for route selection and wavelength assignment are required. We propose a novel tree construction algorithm, called the Weighted Steiner Tree (WST) algorithm, and a variation of it, called the Composite Weighted Steiner Tree (CWST) algorithm. Because these algorithms are based on the Steiner tree algorithm, link resources among source and destination pairs tend to be shared and link utilization ratios are improved. As a result, these algorithms can accept many more multicast requests than other multicast tree construction algorithms based on the Dijkstra algorithm. However, under certain delay constraints, the blocking characteristics of the proposed Weighted Steiner Tree algorithm deteriorate, since some light paths between sources and destinations use many hops and cannot satisfy the delay constraint. In order to adapt the approach to delay-sensitive environments, we have devised the Composite Weighted Steiner Tree algorithm, which combines the Weighted Steiner Tree algorithm and the Dijkstra algorithm for use in delay-constrained environments such as IPTV applications. In this paper, we also give the results of simulation experiments which demonstrate the superiority of the proposed Composite Weighted Steiner Tree algorithm over the Distributed Minimum Hop Tree (DMHT) algorithm, from the viewpoint of light-tree request blocking.

  13. Encoded Expansion: An Efficient Algorithm to Discover Identical String Motifs

    PubMed Central

    Azmi, Aqil M.; Al-Ssulami, Abdulrakeeb

    2014-01-01

    A major task in computational biology is the discovery of short recurring string patterns known as motifs. Most of the schemes to discover motifs are either stochastic or combinatorial in nature. Stochastic approaches do not guarantee finding the correct motifs, while the combinatorial schemes tend to have an exponential time complexity with respect to motif length. To alleviate the cost, the combinatorial approach exploits dynamic data structures such as trees or graphs. Recently, Karci (2009, Efficient automatic exact motif discovery algorithms for biological sequences, Expert Systems with Applications 36:7952–7963) devised a deterministic algorithm that finds all the identical copies of string motifs of all sizes, with theoretical time and space complexities expressed in terms of n, the length of the input sequence, and the length of the longest possible string motif. In this paper, we present a significant improvement on Karci's original algorithm. The algorithm that we propose reports all identical string motifs that occur at least a prescribed number of times, q. Our algorithm starts with string motifs of size 2, and at each iteration it expands the candidate string motifs by one symbol, throwing out those that occur fewer than q times in the entire input sequence. We use a simple array and data encoding to achieve a theoretical worst-case time complexity that is linear in the sequence length. Encoding of the substrings speeds up the comparison between string motifs. Experimental results on random and real biological sequences confirm that our algorithm has indeed a linear time complexity, and it is more scalable in terms of sequence length than the existing algorithms. PMID:24871320
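
    The expand-and-prune loop described above is easy to state with plain dictionaries; the paper's integer encoding of substrings (which makes comparisons cheap) is replaced here by ordinary Python strings, so this sketch shows the control flow rather than the constant factors.

```python
from collections import defaultdict

def identical_motifs(seq, q=2):
    """Return {length: set of motifs} for all motifs occurring >= q times."""
    motifs = {}
    length = 2
    counts = defaultdict(int)
    for i in range(len(seq) - 1):                  # count all 2-mers
        counts[seq[i:i + 2]] += 1
    current = {m for m, c in counts.items() if c >= q}
    while current:
        motifs[length] = current
        counts = defaultdict(int)
        for i in range(len(seq) - length):
            cand = seq[i:i + length + 1]
            if cand[:-1] in current:               # expand only survivors
                counts[cand] += 1
        length += 1
        current = {m for m, c in counts.items() if c >= q}
    return motifs

print(identical_motifs("ABABAB"))
# {2: {'AB', 'BA'}, 3: {'ABA', 'BAB'}, 4: {'ABAB'}}
```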

  14. Adaptive Inverse Hyperbolic Tangent Algorithm for Dynamic Contrast Adjustment in Displaying Scenes

    NASA Astrophysics Data System (ADS)

    Yu, Cheng-Yi; Ouyang, Yen-Chieh; Wang, Chuin-Mu; Chang, Chein-I.

    2010-12-01

    Contrast has a great influence on the quality of an image in human visual perception. A poorly illuminated environment can significantly affect the contrast ratio, producing an unexpected image. This paper proposes an Adaptive Inverse Hyperbolic Tangent (AIHT) algorithm to improve the display quality and contrast of a scene. Because digital cameras must keep shadows in a middle range of luminance that includes a main object such as a face, a gamma function is generally used for this purpose. However, this function has a severe weakness in that it decreases highlight contrast. To mitigate this problem, contrast enhancement algorithms have been designed to adjust contrast to suit human visual perception. The proposed AIHT determines the contrast levels of an original image as well as a parameter space for different contrast types, so that not only are the original histogram shape features preserved, but the contrast is also enhanced effectively. Experimental results show that the proposed algorithm is capable of adaptively enhancing the global contrast of the original image while bringing out the details of objects.
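
    An inverse-hyperbolic-tangent tone curve with a crude image-adaptive slope can be sketched as below. The adaptation rule (scaling by mean brightness) is an illustrative stand-in; the AIHT paper's parameter estimation is not reproduced.

```python
import numpy as np

def aiht_like(img):
    """img: float array scaled to [0, 1]; returns adjusted image in [0, 1]."""
    x = np.clip(img, 1e-6, 1 - 1e-6)        # keep arctanh finite
    gain = 0.5 / max(x.mean(), 1e-6)        # brighter image -> gentler curve
    y = np.arctanh(2 * x - 1) * gain        # inverse hyperbolic tangent map
    return (np.tanh(y) + 1) / 2             # squash back to [0, 1]
```

    With gain = 1 this curve is the identity; a gain above 1 stretches mid-tone contrast while compressing the extremes smoothly, which is the qualitative behavior the abstract describes.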

  15. Algorithms versus architectures for computational chemistry

    NASA Technical Reports Server (NTRS)

    Partridge, H.; Bauschlicher, C. W., Jr.

    1986-01-01

    The algorithms employed are computationally intensive and, as a result, increased performance (both algorithmic and architectural) is required to improve accuracy and to treat larger molecular systems. Several benchmark quantum chemistry codes are examined on a variety of architectures. While these codes are only a small portion of a typical quantum chemistry library, they illustrate many of the computationally intensive kernels and data manipulation requirements of some applications. Furthermore, understanding the performance of the existing algorithms on present and proposed supercomputers serves as a guide for future program and algorithm development. The algorithms investigated are: (1) a sparse symmetric matrix-vector product; (2) a four-index integral transformation; and (3) the calculation of diatomic two-electron Slater integrals. The vectorization strategies for these algorithms are examined for both the Cyber 205 and the Cray X-MP. In addition, multiprocessor implementations of the algorithms are examined on the Cray X-MP and on the MIT static dataflow machine proposed by Dennis.

  16. A Synthesized Heuristic Task Scheduling Algorithm

    PubMed Central

    Dai, Yanyan; Zhang, Xiangli

    2014-01-01

    Aiming at static task scheduling problems in heterogeneous environments, a heuristic task scheduling algorithm named HCPPEFT is proposed. In the task prioritizing phase, the algorithm uses three levels of priority to choose tasks: critical tasks have the highest priority, then tasks with a longer path to the exit task are selected, and finally the algorithm chooses tasks with fewer predecessors to schedule. In the resource selection phase, the algorithm uses task duplication to reduce the inter-resource communication cost; in addition, forecasting the impact of an assignment on all children of the current task permits better decisions to be made in selecting resources. The proposed algorithm is compared with the STDH, PEFT, and HEFT algorithms on randomly generated graphs and sets of task graphs. The experimental results show that the new algorithm achieves better scheduling performance. PMID:25254244

  17. Algorithms, games, and evolution

    PubMed Central

    Chastain, Erick; Livnat, Adi; Papadimitriou, Christos; Vazirani, Umesh

    2014-01-01

    Even the most seasoned students of evolution, starting with Darwin himself, have occasionally expressed amazement that the mechanism of natural selection has produced the whole of Life as we see it around us. There is a computational way to articulate the same amazement: “What algorithm could possibly achieve all this in a mere three and a half billion years?” In this paper we propose an answer: We demonstrate that in the regime of weak selection, the standard equations of population genetics describing natural selection in the presence of sex become identical to those of a repeated game between genes played according to multiplicative weight updates (MWUA), an algorithm known in computer science to be surprisingly powerful and versatile. MWUA maximizes a tradeoff between cumulative performance and entropy, which suggests a new view on the maintenance of diversity in evolution. PMID:24979793
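
    The multiplicative weights update itself is a three-line loop. In the paper's reading, the weights are allele frequencies and the per-round payoffs come from the fitness landscape under weak selection; here the payoff function is a generic stub.

```python
import numpy as np

def mwua(payoff_fn, n_actions, rounds=100, eps=0.1):
    w = np.ones(n_actions)
    for t in range(rounds):
        p = w / w.sum()               # current mixed strategy (frequencies)
        payoffs = payoff_fn(t, p)     # payoff of each action this round
        w *= 1.0 + eps * payoffs      # multiplicative update
    return w / w.sum()

# Toy run: constant payoffs; mass concentrates on the best action over time.
print(mwua(lambda t, p: np.array([0.1, 0.5, 0.3]), n_actions=3))
```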

  18. Adaptive link selection algorithms for distributed estimation

    NASA Astrophysics Data System (ADS)

    Xu, Songcen; de Lamare, Rodrigo C.; Poor, H. Vincent

    2015-12-01

    This paper presents adaptive link selection algorithms for distributed estimation and considers their application to wireless sensor networks and smart grids. In particular, exhaustive search-based least mean squares (LMS) / recursive least squares (RLS) link selection algorithms and sparsity-inspired LMS / RLS link selection algorithms that can exploit the topology of networks with poor-quality links are considered. The proposed link selection algorithms are then analyzed in terms of their stability, steady-state, and tracking performance and computational complexity. In comparison with the existing centralized or distributed estimation strategies, the key features of the proposed algorithms are as follows: (1) more accurate estimates and faster convergence speed can be obtained and (2) the network is equipped with the ability of link selection that can circumvent link failures and improve the estimation performance. The performance of the proposed algorithms for distributed estimation is illustrated via simulations in applications of wireless sensor networks and smart grids.

  19. Linear Bregman algorithm implemented in parallel GPU

    NASA Astrophysics Data System (ADS)

    Li, Pengyan; Ke, Jue; Sui, Dong; Wei, Ping

    2015-08-01

    At present, most compressed sensing (CS) algorithms converge slowly and are therefore difficult to run on a PC. To deal with this issue, we use a parallel GPU to implement a broadly used compressed sensing algorithm, the linearized Bregman algorithm. The linearized Bregman algorithm is a reconstruction algorithm proposed by Osher and Cai. Compared with other CS reconstruction algorithms, it involves only vector and matrix multiplications and a thresholding operation, and is simpler and more efficient to program. We use C as the development language and adopt CUDA (Compute Unified Device Architecture) as the parallel computing architecture. In this paper, we compare the parallel Bregman algorithm with the traditional CPU implementation of the Bregman algorithm. In addition, we also compare the parallel Bregman algorithm with other CS reconstruction algorithms, such as the OMP and TwIST algorithms. The results show that the parallel Bregman algorithm needs less time, and is thus more convenient for real-time object reconstruction, which is important given the fast-growing demands of information technology.
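
    For orientation, a CPU version of the linearized Bregman iteration for basis pursuit (min ||u||_1 subject to Au = b) is only a few lines: one matrix-vector product and one soft-threshold per step, which is exactly the structure that parallelizes well on a GPU. The step size and iteration count below are illustrative; stability requires a sufficiently small delta relative to the norm of A.

```python
import numpy as np

def soft_threshold(x, mu):
    return np.sign(x) * np.maximum(np.abs(x) - mu, 0.0)

def linearized_bregman(A, b, mu=5.0, delta=0.1, iters=2000):
    v = np.zeros(A.shape[1])
    u = np.zeros(A.shape[1])
    for _ in range(iters):
        v += A.T @ (b - A @ u)              # Bregman (gradient-style) update
        u = delta * soft_threshold(v, mu)   # shrinkage / thresholding step
    return u

# Toy recovery of a sparse vector from compressed measurements.
rng = np.random.default_rng(4)
A = rng.normal(size=(50, 200)) / np.sqrt(50)
x_true = np.zeros(200)
x_true[[3, 77, 150]] = [1.0, -2.0, 1.5]
u = linearized_bregman(A, A @ x_true)
```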

  20. A generalized vector-valued total variation algorithm

    SciTech Connect

    Wohlberg, Brendt; Rodriguez, Paul

    2009-01-01

    We propose a simple but flexible method for solving the generalized vector-valued TV (VTV) functional, which includes both the ℓ²-VTV and ℓ¹-VTV regularizations as special cases, to address the problems of deconvolution and denoising of vector-valued (e.g. color) images with Gaussian or salt-and-pepper noise. This algorithm is the vectorial extension of the Iteratively Reweighted Norm (IRN) algorithm [1] originally developed for scalar (grayscale) images. This method offers competitive computational performance for denoising and deconvolving vector-valued images corrupted with Gaussian (ℓ²-VTV case) and salt-and-pepper noise (ℓ¹-VTV case).

  1. An improved algorithm of mask image dodging for aerial image

    NASA Astrophysics Data System (ADS)

    Zhang, Zuxun; Zou, Songbai; Zuo, Zhiqi

    2011-12-01

    Mask image dodging based on the Fourier transform is a good algorithm for removing uneven luminance within a single image. At present, the difference method and the ratio method are the methods in common use, but both have their own defects. For example, the difference method can keep the brightness uniformity of the whole image but is deficient in local contrast, while the ratio method works better for local contrast but sometimes makes the dark areas of the original image too bright. In order to remove the defects of the two methods effectively, this paper, on the basis of a study of both methods, proposes a balanced solution. Experiments show that the scheme not only combines the advantages of the difference method and the ratio method, but also avoids the deficiencies of the two algorithms.

  2. A new frame-based registration algorithm.

    PubMed

    Yan, C H; Whalen, R T; Beaupre, G S; Sumanaweera, T S; Yen, S Y; Napel, S

    1998-01-01

    This paper presents a new algorithm for frame registration. Our algorithm requires only that the frame be comprised of straight rods, as opposed to the N structures or an accurate frame model required by existing algorithms. The algorithm utilizes the full 3D information in the frame as well as a least squares weighting scheme to achieve highly accurate registration. We use simulated CT data to assess the accuracy of our algorithm. We compare the performance of the proposed algorithm to two commonly used algorithms. Simulation results show that the proposed algorithm is comparable to the best existing techniques with knowledge of the exact mathematical frame model. For CT data corrupted with an unknown in-plane rotation or translation, the proposed technique is also comparable to the best existing techniques. However, in situations where there is a discrepancy of more than 2 mm (0.7% of the frame dimension) between the frame and the mathematical model, the proposed technique is significantly better (p < or = 0.05) than the existing techniques. The proposed algorithm can be applied to any existing frame without modification. It provides better registration accuracy and is robust against model mis-match. It allows greater flexibility on the frame structure. Lastly, it reduces the frame construction cost as adherence to a concise model is not required. PMID:9472834

  4. A combined NLP-differential evolution algorithm approach for the optimization of looped water distribution systems

    NASA Astrophysics Data System (ADS)

    Zheng, Feifei; Simpson, Angus R.; Zecchin, Aaron C.

    2011-08-01

    This paper proposes a novel optimization approach for the least cost design of looped water distribution systems (WDSs). Three distinct steps are involved in the proposed optimization approach. In the first step, the shortest-distance tree within the looped network is identified using the Dijkstra graph theory algorithm, for which an extension is proposed to find the shortest-distance tree for multisource WDSs. In the second step, a nonlinear programming (NLP) solver is employed to optimize the pipe diameters for the shortest-distance tree (chords of the shortest-distance tree are allocated the minimum allowable pipe sizes). Finally, in the third step, the original looped water network is optimized using a differential evolution (DE) algorithm seeded with diameters in the proximity of the continuous pipe sizes obtained in step two. As such, the proposed optimization approach combines the traditional deterministic optimization technique of NLP with the emerging evolutionary algorithm DE via the proposed network decomposition. The proposed methodology has been tested on four looped WDSs with the number of decision variables ranging from 21 to 454. Results obtained show the proposed approach is able to find optimal solutions with significantly less computational effort than other optimization techniques.

  5. Dual signal subspace projection (DSSP): a novel algorithm for removing large interference in biomagnetic measurements

    NASA Astrophysics Data System (ADS)

    Sekihara, Kensuke; Kawabata, Yuya; Ushio, Shuta; Sumiya, Satoshi; Kawabata, Shigenori; Adachi, Yoshiaki; Nagarajan, Srikantan S.

    2016-06-01

    Objective. In functional electrophysiological imaging, signals are often contaminated by interference that can be of considerable magnitude compared to the signals of interest. This paper proposes a novel algorithm for removing such interferences that does not require separate noise measurements. Approach. The algorithm is based on a dual definition of the signal subspace in the spatial- and time-domains. Since the algorithm makes use of this duality, it is named the dual signal subspace projection (DSSP). The DSSP algorithm first projects the columns of the measured data matrix onto the inside and outside of the spatial-domain signal subspace, creating a set of two preprocessed data matrices. The intersection of the row spans of these two matrices is estimated as the time-domain interference subspace. The original data matrix is projected onto the subspace that is orthogonal to this interference subspace. Main results. The DSSP algorithm is validated by using the computer simulation, and using two sets of real biomagnetic data: spinal cord evoked field data measured from a healthy volunteer and magnetoencephalography data from a patient with a vagus nerve stimulator. Significance. The proposed DSSP algorithm is effective for removing overlapped interference in a wide variety of biomagnetic measurements.

  6. Proposal for DICOM multiframe medical image integrity and authenticity.

    PubMed

    Kobayashi, Luiz O M; Furuie, Sergio S

    2009-03-01

    This paper presents a novel algorithm to successfully achieve viable integrity and authenticity addition and verification of n-frame DICOM medical images using cryptographic mechanisms. The aim of this work is the enhancement of DICOM security measures, especially for multiframe images. Current approaches have limitations that should be properly addressed for improved security. The algorithm proposed in this work uses data encryption to provide integrity and authenticity, along with digital signature. Relevant header data and digital signature are used as inputs to cipher the image. Therefore, one can only retrieve the original data if and only if the images and the inputs are correct. The encryption process itself is a cascading scheme, where a frame is ciphered with data related to the previous frames, generating also additional data on image integrity and authenticity. Decryption is similar to encryption, featuring also the standard security verification of the image. The implementation was done in JAVA, and a performance evaluation was carried out comparing the speed of the algorithm with other existing approaches. The evaluation showed a good performance of the algorithm, which is an encouraging result to use it in a real environment. PMID:18266035

  7. Improved autonomous star identification algorithm

    NASA Astrophysics Data System (ADS)

    Luo, Li-Yan; Xu, Lu-Ping; Zhang, Hua; Sun, Jing-Rong

    2015-06-01

    The log-polar transform (LPT) is introduced into star identification because of its rotation invariance. An improved autonomous star identification algorithm is proposed in this paper to avoid the circular shift of the feature vector and to reduce the time consumed by star identification algorithms using the LPT. In the proposed algorithm, the star pattern of the same navigation star remains unchanged when the stellar image is rotated, which reduces the star identification time. The logarithmic values of the plane distances between the navigation star and its neighbor stars are adopted to structure the feature vector of the navigation star, which enhances the robustness of star identification. In addition, some efforts are made to find the identification result with fewer comparisons, instead of searching the whole feature database. The simulation results demonstrate that the proposed algorithm effectively accelerates star identification. Moreover, the recognition rate and robustness of the proposed algorithm are better than those of the LPT algorithm and the modified grid algorithm. Project supported by the National Natural Science Foundation of China (Grant Nos. 61172138 and 61401340), the Open Research Fund of the Academy of Satellite Application, China (Grant No. 2014_CXJJ-DH_12), the Fundamental Research Funds for the Central Universities, China (Grant Nos. JB141303 and 201413B), the Natural Science Basic Research Plan in Shaanxi Province, China (Grant No. 2013JQ8040), the Research Fund for the Doctoral Program of Higher Education of China (Grant No. 20130203120004), and the Xi’an Science and Technology Plan, China (Grant No. CXY1350(4)).

  8. Feature extraction and classification algorithms for high dimensional data

    NASA Technical Reports Server (NTRS)

    Lee, Chulhee; Landgrebe, David

    1993-01-01

    Feature extraction and classification algorithms for high dimensional data are investigated. Developments with regard to sensors for Earth observation are moving in the direction of providing much higher dimensional multispectral imagery than is now possible. In analyzing such high dimensional data, processing time becomes an important factor. With large increases in dimensionality and the number of classes, processing time will increase significantly. To address this problem, a multistage classification scheme is proposed which reduces the processing time substantially by eliminating unlikely classes from further consideration at each stage. Several truncation criteria are developed and the relationship between thresholds and the error caused by the truncation is investigated. Next an approach to feature extraction for classification is proposed based directly on the decision boundaries. It is shown that all the features needed for classification can be extracted from decision boundaries. A characteristic of the proposed method arises by noting that only a portion of the decision boundary is effective in discriminating between classes, and the concept of the effective decision boundary is introduced. The proposed feature extraction algorithm has several desirable properties: it predicts the minimum number of features necessary to achieve the same classification accuracy as in the original space for a given pattern recognition problem; and it finds the necessary feature vectors. The proposed algorithm does not deteriorate under the circumstances of equal means or equal covariances as some previous algorithms do. In addition, the decision boundary feature extraction algorithm can be used both for parametric and non-parametric classifiers. Finally, some problems encountered in analyzing high dimensional data are studied and possible solutions are proposed. First, the increased importance of the second order statistics in analyzing high dimensional data is recognized

  9. A New Differential Evolution Algorithm and Its Application to Real Life Problems

    NASA Astrophysics Data System (ADS)

    Pant, Millie; Ali, Musrrat; Singh, V. P.

    2009-07-01

    Most of the real-life problems occurring in various disciplines of science and engineering can be modeled as optimization problems. Most of these problems are nonlinear in nature, which requires a suitable and efficient optimization algorithm to reach an optimum value. In the past few years various algorithms have been proposed to deal with nonlinear optimization problems. Differential Evolution (DE) is a stochastic, population-based search technique, which can be classified as an Evolutionary Algorithm (EA) that uses the concepts of selection, crossover and reproduction to guide the search. It has emerged as a powerful tool for solving optimization problems in the past few years. However, the convergence rate of DE still does not meet all requirements, and attempts to speed up differential evolution are considered necessary. In order to improve the performance of DE, we propose a modified DE algorithm called DEPCX, which uses a parent-centric approach to manipulate the solution vectors. The performance of DEPCX is validated on a test bed of five benchmark functions and five real-life engineering design problems. Numerical results are compared with the original differential evolution (DE) and with TDE, another recently modified version of DE. Empirical analysis of the results clearly indicates the competence and efficiency of the proposed DEPCX algorithm for solving benchmark as well as real-life problems with a good convergence rate.
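
    For comparison with DEPCX, the classic DE/rand/1/bin loop is sketched below; DEPCX replaces the difference-vector mutation with a parent-centric recombination, which is not reproduced here.

```python
import numpy as np

def de(objective, bounds, pop_size=30, F=0.8, CR=0.9, gens=200,
       rng=np.random.default_rng(5)):
    lo, hi = np.asarray(bounds, dtype=float).T
    dim = len(lo)
    pop = rng.uniform(lo, hi, (pop_size, dim))
    fit = np.apply_along_axis(objective, 1, pop)
    for _ in range(gens):
        for i in range(pop_size):
            others = [j for j in range(pop_size) if j != i]
            a, b, c = pop[rng.choice(others, 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)     # mutation
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True               # binomial crossover
            trial = np.where(cross, mutant, pop[i])
            tfit = objective(trial)
            if tfit <= fit[i]:                            # greedy selection
                pop[i], fit[i] = trial, tfit
    return pop[fit.argmin()], fit.min()

best, val = de(lambda z: (z ** 2).sum(), bounds=[(-5, 5)] * 10)
```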

  10. Parallelized Dilate Algorithm for Remote Sensing Image

    PubMed Central

    Zhang, Suli; Hu, Haoran; Pan, Xin

    2014-01-01

    As an important algorithm, the dilate algorithm can give a more connected view of a remote sensing image that contains broken lines or objects. However, with the technological progress of satellite sensors, the resolution of remote sensing images has been increasing and the data quantities have become very large. This can decrease the algorithm's running speed, or prevent a result from being obtained within limited memory or time. To solve this problem, our research proposes a parallelized dilate algorithm for remote sensing images based on MPI and MP. Experiments show that our method runs faster than the traditional single-process algorithm. PMID:24955392

  11. Algorithm for Compressing Time-Series Data

    NASA Technical Reports Server (NTRS)

    Hawkins, S. Edward, III; Darlington, Edward Hugo

    2012-01-01

    An algorithm based on Chebyshev polynomials effects lossy compression of time-series data or other one-dimensional data streams (e.g., spectral data) that are arranged in blocks for sequential transmission. The algorithm was developed for use in transmitting data from spacecraft scientific instruments to Earth stations. In spite of its lossy nature, the algorithm preserves the information needed for scientific analysis. The algorithm is computationally simple, yet compresses data streams by factors much greater than two. The algorithm is not restricted to spacecraft or scientific uses: it is applicable to time-series data in general. The algorithm can also be applied to general multidimensional data that have been converted to time-series data, a typical example being image data acquired by raster scanning. However, unlike most prior image-data-compression algorithms, this algorithm neither depends on nor exploits the two-dimensional spatial correlations that are generally present in images. In order to understand the essence of this compression algorithm, it is necessary to understand that the net effect of this algorithm and the associated decompression algorithm is to approximate the original stream of data as a sequence of finite series of Chebyshev polynomials. For the purpose of this algorithm, a block of data or interval of time for which a Chebyshev polynomial series is fitted to the original data is denoted a fitting interval. Chebyshev approximation has two properties that make it particularly effective for compressing serial data streams with minimal loss of scientific information: The errors associated with a Chebyshev approximation are nearly uniformly distributed over the fitting interval (this is known in the art as the "equal error property"); and the maximum deviations of the fitted Chebyshev polynomial from the original data have the smallest possible values (this is known in the art as the "min-max property").
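
    Block-wise Chebyshev fitting of a time series is directly available in NumPy. The sketch below uses a least-squares fit (`chebfit`) as a convenient stand-in for the near-minimax fits the article describes; the block length and degree are the tuning knobs, and the example assumes the signal length is a multiple of the block size.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

BLOCK, DEGREE = 64, 8          # illustrative fitting interval and series order

def compress(signal):
    coeffs = []
    for start in range(0, len(signal), BLOCK):
        y = signal[start:start + BLOCK]
        t = np.linspace(-1.0, 1.0, len(y))      # map the interval onto [-1, 1]
        coeffs.append(C.chebfit(t, y, DEGREE))  # 64 samples -> 9 coefficients
    return coeffs

def decompress(coeffs):
    t = np.linspace(-1.0, 1.0, BLOCK)
    return np.concatenate([C.chebval(t, c) for c in coeffs])

x = np.sin(np.linspace(0, 20, 640))
rec = decompress(compress(x))
print(np.abs(rec - x).max())   # small, near-uniform residual per block
```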

  12. CenLP: A centrality-based label propagation algorithm for community detection in networks

    NASA Astrophysics Data System (ADS)

    Sun, Heli; Liu, Jiao; Huang, Jianbin; Wang, Guangtao; Yang, Zhou; Song, Qinbao; Jia, Xiaolin

    2015-10-01

    Community detection is an important task for discovering the structure and features of complex networks. Many existing methods are sensitive to critical user-dependent parameters or are time-consuming in practice. In this paper, we propose a novel label propagation algorithm, called CenLP (Centrality-based Label Propagation). The algorithm introduces a new function that measures the centrality of nodes quantitatively, without any user interaction, by calculating for each node the local density and the similarity with higher-density neighbors. Based on the centrality of nodes, we present a new label propagation algorithm with a specific update order and node preference that uncovers communities in large-scale networks automatically, without imposing any prior restriction. Experiments on both real-world and synthetic networks show that our algorithm retains the simplicity, effectiveness, and scalability of the original label propagation algorithm while becoming more robust and accurate. Extensive experiments demonstrate the superior performance of our algorithm over the baseline methods. Moreover, our detailed experimental evaluation on real-world networks indicates that our algorithm can effectively measure the centrality of nodes in social networks.

  13. Segmentation of pomegranate MR images using spatial fuzzy c-means (SFCM) algorithm

    NASA Astrophysics Data System (ADS)

    Moradi, Ghobad; Shamsi, Mousa; Sedaaghi, M. H.; Alsharif, M. R.

    2011-10-01

    Segmentation is one of the fundamental issues of image processing and machine vision, and it plays a prominent role in a variety of image processing applications. In this paper, one of the most important applications of image processing, the segmentation of pomegranate MR images, is explored. Pomegranate is a fruit with pharmacological properties, such as being anti-viral and anti-cancer. Having a high-quality product in hand is a critical factor in its marketing, and the internal quality of the product is comprehensively important in the sorting process. The determination of qualitative features cannot be made manually; therefore, the segmentation of the internal structures of the fruit needs to be performed as accurately as possible in the presence of noise. The fuzzy c-means (FCM) algorithm is noise-sensitive, and pixels with noise are classified incorrectly. As a solution, this paper proposes the spatial FCM (SFCM) algorithm for the segmentation of pomegranate MR images. The algorithm incorporates spatial neighborhood information into FCM and modifies the fuzzy membership function of each class. The segmentation results on the original pomegranate MR images and on images corrupted by Gaussian, salt & pepper and speckle noise show that the SFCM algorithm performs much better than the FCM algorithm. Also, after diverse steps of qualitative and quantitative analysis, we have concluded that the SFCM algorithm with a 5×5 window size is better than the other window sizes.

  14. Volume learning algorithm artificial neural networks for 3D QSAR studies.

    PubMed

    Tetko, I V; Kovalishyn, V V; Livingstone, D J

    2001-07-19

    The current study introduces a new method, the volume learning algorithm (VLA), for the investigation of three-dimensional quantitative structure-activity relationships (QSAR) of chemical compounds. This method incorporates the advantages of comparative molecular field analysis (CoMFA) and artificial neural network approaches. VLA is a combination of supervised and unsupervised neural networks applied to solve the same problem. The supervised algorithm is a feed-forward neural network trained with a back-propagation algorithm, while the unsupervised network is a self-organizing map of Kohonen. The use of both of these algorithms makes it possible to cluster the input CoMFA field variables and to use only a small number of the most relevant parameters to correlate spatial properties of the molecules with their activity. The statistical coefficients calculated by the proposed algorithm for cannabimimetic aminoalkyl indoles were comparable to, or improved upon, those of the original study using the partial least squares algorithm. The results of the algorithm can be visualized and easily interpreted. Thus, VLA is a new convenient tool for three-dimensional QSAR studies. PMID:11448223

  15. Algorithmic advances in stochastic programming

    SciTech Connect

    Morton, D.P.

    1993-07-01

    Practical planning problems with deterministic forecasts of inherently uncertain parameters often yield unsatisfactory solutions. Stochastic programming formulations allow uncertain parameters to be modeled as random variables with known distributions, but the size of the resulting mathematical programs can be formidable. Decomposition-based algorithms take advantage of special structure and provide an attractive approach to such problems. We consider two classes of decomposition-based stochastic programming algorithms. The first type of algorithm addresses problems with a "manageable" number of scenarios. The second class incorporates Monte Carlo sampling within a decomposition algorithm. We develop and empirically study an enhanced Benders decomposition algorithm for solving multistage stochastic linear programs within a prespecified tolerance. The enhancements include warm start basis selection, preliminary cut generation, the multicut procedure, and decision tree traversing strategies. Computational results are presented for a collection of "real-world" multistage stochastic hydroelectric scheduling problems. Recently, there has been an increased focus on decomposition-based algorithms that use sampling within the optimization framework. These approaches hold much promise for solving stochastic programs with many scenarios. A critical component of such algorithms is a stopping criterion to ensure the quality of the solution. With this as motivation, we develop a stopping rule theory for algorithms in which bounds on the optimal objective function value are estimated by sampling. Rules are provided for selecting sample sizes and terminating the algorithm under which asymptotic validity of confidence interval statements for the quality of the proposed solution can be verified. Issues associated with the application of this theory to two sampling-based algorithms are considered, and preliminary empirical coverage results are presented.
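
    As a minimal illustration of the sampling-based stopping idea (not Morton's specific rule), one can estimate the optimality gap from sampled bound observations and stop once an asymptotic confidence interval on the mean gap falls below a tolerance:

        import math, random

        def stop_by_confidence(gap_samples, tol, z=1.96):
            """Sketch: stop when the upper end of an asymptotic confidence
            interval for the mean optimality gap is below the tolerance."""
            n = len(gap_samples)
            mean = sum(gap_samples) / n
            var = sum((g - mean) ** 2 for g in gap_samples) / (n - 1)
            half_width = z * math.sqrt(var / n)
            return mean + half_width <= tol

        # usage: gap observations from (upper bound - lower bound) estimates
        random.seed(0)
        samples = [abs(random.gauss(0.01, 0.02)) for _ in range(200)]
        print(stop_by_confidence(samples, tol=0.05))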

  16. Algorithmic chemistry

    SciTech Connect

    Fontana, W.

    1990-12-13

    In this paper complex adaptive systems are defined by a self-referential loop in which objects encode functions that act back on these objects. A model for this loop is presented. It uses a simple recursive formal language, derived from the lambda-calculus, to provide a semantics that maps character strings into functions that manipulate symbols on strings. The interaction between two functions, or algorithms, is defined naturally within the language through function composition, and results in the production of a new function. An iterated map acting on sets of functions and a corresponding graph representation are defined. Their properties are useful to discuss the behavior of a fixed size ensemble of randomly interacting functions. This "function gas", or "Turing gas", is studied under various conditions, and evolves cooperative interaction patterns of considerable intricacy. These patterns adapt under the influence of perturbations consisting of the addition of new random functions to the system. Different organizations emerge depending on the availability of self-replicators.

  17. Scheduling Jobs with Genetic Algorithms

    NASA Astrophysics Data System (ADS)

    Ferrolho, António; Crisóstomo, Manuel

    Most scheduling problems are NP-hard: the time required to solve the problem optimally increases exponentially with the size of the problem. Scheduling problems have important applications, and a number of heuristic algorithms have been proposed to determine relatively good solutions in polynomial time. Recently, genetic algorithms (GA) have been successfully used to solve scheduling problems, as shown by the growing number of papers. GA are known as one of the most efficient algorithms for solving scheduling problems. However, when a GA is applied to scheduling problems, various crossover and mutation operators are applicable. This paper presents and examines a new concept of genetic operators for scheduling problems. A software tool called hybrid and flexible genetic algorithm (HybFlexGA) was developed to examine the performance of various crossover and mutation operators by computing simulations of job scheduling problems.
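
    The HybFlexGA operators themselves are not described in the abstract; the sketch below shows two standard permutation operators of the kind examined in such studies, order crossover (OX) and swap mutation, applied to job sequences.

        import random

        def order_crossover(p1, p2):
            """OX: copy a slice from parent 1, fill the rest in parent-2 order."""
            n = len(p1)
            a, b = sorted(random.sample(range(n), 2))
            child = [None] * n
            child[a:b] = p1[a:b]
            fill = [j for j in p2 if j not in child[a:b]]
            pos = [i for i in range(n) if child[i] is None]
            for i, j in zip(pos, fill):
                child[i] = j
            return child

        def swap_mutation(perm, rate=0.1):
            """Swap two random jobs with the given probability."""
            perm = perm[:]
            if random.random() < rate:
                i, j = random.sample(range(len(perm)), 2)
                perm[i], perm[j] = perm[j], perm[i]
            return perm

        random.seed(1)
        jobs = list(range(8))
        print(order_crossover(jobs, random.sample(jobs, 8)))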

  18. Modified Golomb Algorithm for Computing Unit Fraction Expansions

    ERIC Educational Resources Information Center

    Man, Yiu-Kwong

    2004-01-01

    In this note, a modified Golomb algorithm for computing unit fraction expansions is presented. This algorithm has the advantage that the maximal denominators involved in the expansions will not exceed those computed by the original algorithm. In fact, the differences between the maximal denominators or the number of terms obtained by these two…
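
    For reference, a sketch of the original Golomb algorithm (the modified version of the note is not reproduced here): for p/q in lowest terms, take p' = p^(-1) mod q; then p/q = 1/(p'q) + m/p' with m = (pp' - 1)/q, and since p' < q the denominators of the remainders strictly decrease, so the expansion terminates.

        from math import gcd

        def golomb_unit_fractions(p, q):
            """Expand p/q (0 < p < q, gcd(p, q) = 1) as a sum of unit
            fractions using Golomb's algorithm."""
            denominators = []
            while p != 1:
                p_inv = pow(p, -1, q)               # modular inverse (Python 3.8+)
                denominators.append(p_inv * q)      # term 1/(p'*q)
                p, q = (p * p_inv - 1) // q, p_inv  # remainder m/p'
                g = gcd(p, q)
                p, q = p // g, q // g
            denominators.append(q)
            return sorted(denominators)

        print(golomb_unit_fractions(3, 7))  # [3, 15, 35]: 3/7 = 1/3 + 1/15 + 1/35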

  19. Projection Classification Based Iterative Algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, Ruiqiu; Li, Chen; Gao, Wenhua

    2015-05-01

    Iterative algorithms perform well in 3D image reconstruction because they do not need complete projection data. They can therefore be applied to the inspection of BGA solder joints, which is usually performed with X-ray laminography and yields poorer reconstructed images, but their convergence speed is low. This paper explores a projection classification based method that separates the object into three parts, i.e. solute, solution and air, and assumes that the reconstruction speed decreases linearly from the solution to the other two parts on both sides. The SART and CAV algorithms are then improved under the proposed idea. Simulation experiments with incomplete projection images indicate the fast convergence speed of the improved iterative algorithms and the effectiveness of the proposed method: the fewer the projection images, the greater the advantage.
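
    For reference, the baseline SART update that the proposed projection-classification weighting modifies (the classification-dependent speeds are not reproduced here); for a system Ax ≈ b with nonnegative system matrix A:

        import numpy as np

        def sart(A, b, iters=50, lam=1.0):
            """Simultaneous Algebraic Reconstruction Technique:
            x <- x + lam * D_col^-1 * A^T * D_row^-1 * (b - A x),
            where D_row/D_col hold the row and column sums of A."""
            row = A.sum(axis=1); row[row == 0] = 1
            col = A.sum(axis=0); col[col == 0] = 1
            x = np.zeros(A.shape[1])
            for _ in range(iters):
                x += lam * (A.T @ ((b - A @ x) / row)) / col
            return x

        # toy example: recover a 2-pixel "image" from 3 ray sums
        A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
        x_true = np.array([2.0, 3.0])
        print(sart(A, A @ x_true))  # ~ [2, 3]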

  20. Some nonlinear space decomposition algorithms

    SciTech Connect

    Tai, Xue-Cheng; Espedal, M.

    1996-12-31

    Convergence of a space decomposition method is proved for a general convex programming problem. The space decomposition refers to methods that decompose a space into sums of subspaces, which could be a domain decomposition or a multigrid method for partial differential equations. Two algorithms are proposed. Both can be used for linear as well as nonlinear elliptic problems and they reduce to the standard additive and multiplicative Schwarz methods for linear elliptic problems. Two "hybrid" algorithms are also presented. They converge faster than the additive one and have better parallelism than the multiplicative method. Numerical tests with a two level domain decomposition for linear, nonlinear and interface elliptic problems are presented for the proposed algorithms.

  1. Secure 3D watermarking algorithm based on point set projection

    NASA Astrophysics Data System (ADS)

    Liu, Quan; Zhang, Xiaomei

    2007-11-01

    3D digital models greatly facilitate the distribution and storage of information, and their copyright protection has attracted increasing research interest. A novel secure digital watermarking algorithm for 3D models is proposed in this paper. In order to survive most attacks, such as rotation, cropping, smoothing and noise addition, the projection of the model's point set is chosen as the carrier of the watermark in the presented algorithm; the watermark contains copyright information such as logos and text. Projections of the model's point set onto the x, y and z planes are calculated respectively. Before the embedding process, the original watermark is scrambled with a key. Each projection is singular value decomposed, and the scrambled watermark is embedded into the SVD (singular value decomposition) domain of each of the x, y and z planes. The watermarked projections are then used to recover the vertices of the model, and the watermarked model is attained. Only a legal user can remove the watermark from the watermarked model using the private key. Experiments presented in the paper show that the proposed algorithm performs well under various malicious attacks.
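
    A common way to embed a watermark in an SVD domain, used here as an illustrative stand-in for the paper's exact embedding rule, is to perturb the singular values of the carrier (here, a projection image) with a scaled watermark and keep the side information needed for extraction:

        import numpy as np

        def svd_embed(carrier, watermark, alpha=0.05):
            """Embed a (scrambled) watermark into the singular values of a
            projection image; returns the watermarked matrix and the SVD
            side information needed for extraction."""
            U, S, Vt = np.linalg.svd(carrier, full_matrices=False)
            Uw, Sw, Vwt = np.linalg.svd(np.diag(S) + alpha * watermark,
                                        full_matrices=False)
            marked = U @ np.diag(Sw) @ Vt
            return marked, (Uw, Vwt, S)

        def svd_extract(marked, side, alpha=0.05):
            """Recover the watermark using the stored side information."""
            Uw, Vwt, S = side
            Sw = np.linalg.svd(marked, compute_uv=False)
            D = Uw @ np.diag(Sw) @ Vwt        # re-composed embedded matrix
            return (D - np.diag(S)) / alpha

        rng = np.random.default_rng(0)
        carrier = rng.random((8, 8)); wm = rng.random((8, 8))
        marked, side = svd_embed(carrier, wm)
        print(np.allclose(svd_extract(marked, side), wm, atol=1e-6))  # True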

  2. Bouc-Wen hysteresis model identification using Modified Firefly Algorithm

    NASA Astrophysics Data System (ADS)

    Zaman, Mohammad Asif; Sikder, Urmita

    2015-12-01

    The parameters of the Bouc-Wen hysteresis model are identified using a Modified Firefly Algorithm. The proposed algorithm uses dynamic process control parameters to improve its performance. The algorithm is used to find the model parameter values that result in the least error between a set of given data points and points obtained from the Bouc-Wen model. The performance of the algorithm is compared with that of the conventional Firefly Algorithm, Genetic Algorithm and Differential Evolution algorithm in terms of convergence rate and accuracy. Compared to the other three optimization algorithms, the proposed algorithm is found to have a good convergence rate with a high degree of accuracy in identifying Bouc-Wen model parameters. Finally, the proposed method is used to find the Bouc-Wen model parameters from experimental data. The obtained model is found to be in good agreement with the measured data.
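
    The core firefly move that such modified variants build on (the paper's dynamic control of the process parameters is not reproduced here) attracts each firefly toward every brighter one:

        import numpy as np

        def firefly_step(X, fitness, alpha=0.2, beta0=1.0, gamma=1.0):
            """One iteration of the standard firefly algorithm (minimization).
            X: (n, d) positions; fitness: callable on a position vector."""
            n, d = X.shape
            f = np.array([fitness(x) for x in X])
            Xn = X.copy()
            for i in range(n):
                for j in range(n):
                    if f[j] < f[i]:                       # j is brighter
                        r2 = np.sum((X[j] - X[i]) ** 2)
                        beta = beta0 * np.exp(-gamma * r2)
                        Xn[i] += beta * (X[j] - X[i]) \
                                 + alpha * (np.random.rand(d) - 0.5)
            return Xn

        # usage: minimize a quadratic as a stand-in for the model-fitting error
        rng = np.random.default_rng(0)
        X = rng.uniform(-5, 5, (20, 2))
        for _ in range(100):
            X = firefly_step(X, lambda x: np.sum(x ** 2))
        print(X[np.argmin([np.sum(x ** 2) for x in X])])  # near [0, 0]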

  3. An Algorithmic Framework for Multiobjective Optimization

    PubMed Central

    Ganesan, T.; Elamvazuthi, I.; Shaari, Ku Zilati Ku; Vasant, P.

    2013-01-01

    Multiobjective (MO) optimization is an emerging field which is increasingly being encountered in many areas globally. Various metaheuristic techniques such as differential evolution (DE), genetic algorithm (GA), gravitational search algorithm (GSA), and particle swarm optimization (PSO) have been used in conjunction with scalarization techniques such as the weighted sum approach and the normal-boundary intersection (NBI) method to solve MO problems. Nevertheless, many challenges still arise, especially when dealing with problems with many objectives (more than two). In addition, problems with extensive computational overhead emerge when dealing with hybrid algorithms. This paper discusses these issues by proposing an alternative framework that utilizes algorithmic concepts related to the problem structure for generating efficient and effective algorithms. This paper proposes a framework to generate new high-performance algorithms with minimal computational overhead for MO optimization. PMID:24470795

  4. An algorithmic framework for multiobjective optimization.

    PubMed

    Ganesan, T; Elamvazuthi, I; Shaari, Ku Zilati Ku; Vasant, P

    2013-01-01

    Multiobjective (MO) optimization is an emerging field which is increasingly being encountered in many areas globally. Various metaheuristic techniques such as differential evolution (DE), genetic algorithm (GA), gravitational search algorithm (GSA), and particle swarm optimization (PSO) have been used in conjunction with scalarization techniques such as the weighted sum approach and the normal-boundary intersection (NBI) method to solve MO problems. Nevertheless, many challenges still arise, especially when dealing with problems with many objectives (more than two). In addition, problems with extensive computational overhead emerge when dealing with hybrid algorithms. This paper discusses these issues by proposing an alternative framework that utilizes algorithmic concepts related to the problem structure for generating efficient and effective algorithms. This paper proposes a framework to generate new high-performance algorithms with minimal computational overhead for MO optimization. PMID:24470795

  5. Orbital objects detection algorithm using faint streaks

    NASA Astrophysics Data System (ADS)

    Tagawa, Makoto; Yanagisawa, Toshifumi; Kurosaki, Hirohisa; Oda, Hiroshi; Hanada, Toshiya

    2016-02-01

    This study proposes an algorithm to detect orbital objects that are small or moving at high apparent velocities from optical images by utilizing their faint streaks. In the conventional object-detection algorithm, a high signal-to-noise ratio (e.g., 3 or more) is required, whereas in our proposed algorithm the signals are summed along the streak direction to improve object-detection sensitivity. Objects with lower signal-to-noise ratios were detected by applying the algorithm to a time series of images. The algorithm comprises the following steps: (1) image skewing, (2) image compression along the vertical axis, (3) detection and determination of streak position, (4) searching for object candidates using the time-series streak-position data, and (5) selecting the candidate with the best linearity and reliability. Our algorithm's ability to detect streaks with signals weaker than the background noise was confirmed using images from the Australia Remote Observatory.
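
    Steps (1) and (2) amount to shift-and-add: skew the image so that a candidate streak direction becomes vertical, then compress along that axis so the streak signal accumulates while the background noise grows only as the square root of the number of summed pixels. A minimal sketch, assuming a linear streak of known slope:

        import numpy as np

        def streak_response(img, slope):
            """Skew the image so streaks of the given slope become vertical,
            then compress along the vertical axis; a streak appears as a peak
            in the resulting 1-D profile."""
            H, W = img.shape
            out = np.zeros(W)
            for r in range(H):
                shift = int(round(slope * r))   # horizontal offset of the streak
                cols = np.arange(W) + shift
                ok = (cols >= 0) & (cols < W)
                out[np.arange(W)[ok]] += img[r, cols[ok]]
            return out  # peak position ~ streak's column at row 0

        # a faint streak below the single-pixel noise level
        rng = np.random.default_rng(1)
        img = rng.normal(0, 1.0, (100, 100))
        for r in range(100):                    # streak of SNR 0.5 per pixel
            img[r, 20 + r // 4] += 0.5
        print(np.argmax(streak_response(img, 0.25)))  # near 20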

  6. Threshold extended ID3 algorithm

    NASA Astrophysics Data System (ADS)

    Kumar, A. B. Rajesh; Ramesh, C. Phani; Madhusudhan, E.; Padmavathamma, M.

    2012-04-01

    Providing authentication and confidentiality for information exchanged over insecure networks is a significant problem in data mining. In this paper we propose a novel authenticated multiparty ID3 algorithm used to construct a multiparty secret-sharing decision tree for implementation in medical transactions.

  7. Understanding Air Transportation Market Dynamics Using a Search Algorithm for Calibrating Travel Demand and Price

    NASA Technical Reports Server (NTRS)

    Kumar, Vivek; Horio, Brant M.; DeCicco, Anthony H.; Hasan, Shahab; Stouffer, Virginia L.; Smith, Jeremy C.; Guerreiro, Nelson M.

    2015-01-01

    This paper presents a search-algorithm-based framework to calibrate origin-destination (O-D) market-specific airline ticket demands and prices for the Air Transportation System (ATS). This framework is used for calibrating an agent-based model of the air ticket buy-sell process - the Airline Evolutionary Simulation (Airline EVOS) - that has fidelity of detail accounting for airline and consumer behaviors and the interdependencies they share between themselves and the NAS. More specifically, the algorithm simultaneously calibrates demand and airfares for each O-D market to within a specified threshold of pre-specified target values. The proposed algorithm is illustrated with market data targets provided by the Transportation System Analysis Model (TSAM) and the Airline Origin and Destination Survey (DB1B). Although we specify these models and data sources for this calibration exercise, the methods described in this paper are applicable to calibrating any low-level model of the ATS to other demand-forecast-model-based data. We argue that using a calibration algorithm such as the one presented here to synchronize ATS models with specialized demand forecast models is a powerful tool for establishing credible baseline conditions in experiments analyzing the effects of proposed policy changes to the ATS.
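
    A minimal sketch of an iterative calibration loop of this general kind (the update rule below is hypothetical, not the paper's algorithm): scale a market's demand and fare multipliers by damped ratios of target to simulated values until both are within the threshold.

        def calibrate(simulate, target_demand, target_fare, tol=0.01,
                      damp=0.5, max_iter=100):
            """Adjust per-market demand/fare multipliers until the simulated
            O-D demand and fare match the targets within tol (relative)."""
            dm, fm = 1.0, 1.0                    # demand and fare multipliers
            for _ in range(max_iter):
                demand, fare = simulate(dm, fm)
                ed = target_demand / demand - 1.0
                ef = target_fare / fare - 1.0
                if abs(ed) < tol and abs(ef) < tol:
                    break
                dm *= 1.0 + damp * ed            # damped multiplicative update
                fm *= 1.0 + damp * ef
            return dm, fm

        # stand-in "simulator" with coupled demand and fare responses
        sim = lambda dm, fm: (1000 * dm * fm ** -0.3, 250 * fm * dm ** 0.1)
        print(calibrate(sim, target_demand=1200, target_fare=230))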

  8. Modified artificial bee colony algorithm for reactive power optimization

    NASA Astrophysics Data System (ADS)

    Sulaiman, Noorazliza; Mohamad-Saleh, Junita; Abro, Abdul Ghani

    2015-05-01

    Bio-inspired algorithms (BIAs) implemented to solve various optimization problems have shown promising results, which is very important in this severely complex real world. The Artificial Bee Colony (ABC) algorithm, a kind of BIA, has demonstrated tremendous results as compared to other optimization algorithms. This paper presents a new modified ABC algorithm, referred to as JA-ABC3, with the aim of enhancing convergence speed and avoiding premature convergence. The proposed algorithm has been simulated on ten commonly used benchmark functions, and its performance has been compared with other existing ABC variants. To justify its robust applicability, the proposed algorithm has been tested on the reactive power optimization problem. The results have shown that the proposed algorithm has superior performance to other existing ABC variants, e.g. GABC, BABC1, BABC2, BsfABC and IABC, in terms of convergence speed. Furthermore, the proposed algorithm has also demonstrated excellent performance in solving the reactive power optimization problem.

  9. Systolic array architecture for convolutional decoding algorithms: Viterbi algorithm and stack algorithm

    SciTech Connect

    Chang, C.Y.

    1986-01-01

    New results on efficient forms of decoding convolutional codes based on Viterbi and stack algorithms using systolic array architecture are presented. Some theoretical aspects of systolic arrays are also investigated. First, systolic array implementation of the Viterbi algorithm is considered, and various properties of convolutional codes are derived. A technique called strongly connected trellis decoding is introduced to increase the efficient utilization of all the systolic array processors. The issues dealing with the composite branch metric generation, survivor updating, overall system architecture, throughput rate, and computation overhead ratio are also investigated. Second, the existing stack algorithm is modified and restated in a more concise version so that it can be efficiently implemented by a special type of systolic array called a systolic priority queue. Three general schemes of systolic priority queue based on random access memory, shift register, and ripple register are proposed. Finally, a systematic approach is presented to design systolic arrays for certain general classes of recursively formulated algorithms.
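
    For reference, a compact serial Viterbi decoder for the standard rate-1/2, constraint-length-3 convolutional code (generators 7 and 5 octal); this is the baseline computation that the systolic array parallelizes:

        def viterbi_decode(bits, G=(0b111, 0b101), K=3):
            """Hard-decision Viterbi decoding of a rate-1/2 convolutional code
            (state = last K-1 input bits, most recent bit in the MSB)."""
            n_states = 1 << (K - 1)
            parity = lambda x: bin(x).count('1') & 1
            INF = float('inf')
            metric = [0.0] + [INF] * (n_states - 1)   # encoder starts at state 0
            paths = [[] for _ in range(n_states)]
            for i in range(0, len(bits), 2):
                r = bits[i:i + 2]
                nm = [INF] * n_states
                npaths = [None] * n_states
                for s in range(n_states):
                    if metric[s] == INF:
                        continue
                    for b in (0, 1):                  # candidate input bit
                        reg = (b << (K - 1)) | s      # shift-register contents
                        out = [parity(reg & g) for g in G]
                        bm = sum(o != x for o, x in zip(out, r))  # Hamming metric
                        ns = (b << (K - 2)) | (s >> 1)            # next state
                        if metric[s] + bm < nm[ns]:
                            nm[ns] = metric[s] + bm
                            npaths[ns] = paths[s] + [b]
                metric, paths = nm, npaths
            return paths[min(range(n_states), key=lambda s: metric[s])]

        # encode a message with the same convention, then decode it
        msg, state, code = [1, 0, 1, 1], 0, []
        for b in msg:
            reg = (b << 2) | state
            code += [bin(reg & 7).count('1') & 1, bin(reg & 5).count('1') & 1]
            state = (b << 1) | (state >> 1)
        print(viterbi_decode(code) == msg)  # True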

  10. A New Modified Artificial Bee Colony Algorithm with Exponential Function Adaptive Steps

    PubMed Central

    Mao, Wei; Li, Hao-ru

    2016-01-01

    As one of the most recent popular swarm intelligence techniques, the artificial bee colony algorithm is poor at exploitation and has some defects, such as slow search speed, poor population diversity, stagnation in the working process, and being trapped in local optima. The purpose of this paper is to develop a new modified artificial bee colony algorithm in view of the initial population structure, subpopulation groups, step updating, and population elimination. Further, drawing on opposition-based learning theory and the new modified algorithms, an improved S-type grouping method is proposed, and the original roulette wheel selection is replaced by a sensitivity-pheromone scheme. Then, an adaptive step with exponential functions is designed to replace the original random step. Finally, based on the CEC13 test functions, six benchmark functions with dimensions D = 20 and D = 40 are chosen and applied in experiments analyzing and comparing the iteration speed and accuracy of the new modified algorithms. The experimental results show that the new modified algorithm searches faster and more stably and can quickly improve poor population diversity and find the global optimal solutions. PMID:27293426

  11. A New Modified Artificial Bee Colony Algorithm with Exponential Function Adaptive Steps.

    PubMed

    Mao, Wei; Lan, Heng-You; Li, Hao-Ru

    2016-01-01

    As one of the most recent popular swarm intelligence techniques, the artificial bee colony algorithm is poor at exploitation and has some defects, such as slow search speed, poor population diversity, stagnation in the working process, and being trapped in local optima. The purpose of this paper is to develop a new modified artificial bee colony algorithm in view of the initial population structure, subpopulation groups, step updating, and population elimination. Further, drawing on opposition-based learning theory and the new modified algorithms, an improved S-type grouping method is proposed, and the original roulette wheel selection is replaced by a sensitivity-pheromone scheme. Then, an adaptive step with exponential functions is designed to replace the original random step. Finally, based on the CEC13 test functions, six benchmark functions with dimensions D = 20 and D = 40 are chosen and applied in experiments analyzing and comparing the iteration speed and accuracy of the new modified algorithms. The experimental results show that the new modified algorithm searches faster and more stably and can quickly improve poor population diversity and find the global optimal solutions. PMID:27293426

  12. The Orthogonally Partitioned EM Algorithm: Extending the EM Algorithm for Algorithmic Stability and Bias Correction Due to Imperfect Data.

    PubMed

    Regier, Michael D; Moodie, Erica E M

    2016-05-01

    We propose an extension of the EM algorithm that exploits the common assumption of unique parameterization, corrects for biases due to missing data and measurement error, converges for the specified model when the standard implementation of the EM algorithm has a low probability of convergence, and reduces a potentially complex algorithm into a sequence of smaller, simpler, self-contained EM algorithms. We use the theory surrounding the EM algorithm to derive the theoretical results of our proposal, showing that an optimal solution over the parameter space is obtained. A simulation study is used to explore the finite sample properties of the proposed extension when there is missing data and measurement error. We observe that partitioning the EM algorithm into simpler steps may provide better bias reduction in the estimation of model parameters. The ability to break down a complicated problem into a series of simpler, more accessible problems will permit a broader implementation of the EM algorithm, permit the use of software packages that now implement and/or automate the EM algorithm, and make the EM algorithm more accessible to a wider and more general audience. PMID:27227718
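
    For background, a compact standard EM iteration (here for a two-component 1-D Gaussian mixture) of the kind the proposed extension partitions into smaller, self-contained EM runs; the partitioning scheme itself is not reproduced:

        import numpy as np

        def em_gmm(x, iters=100):
            """Standard EM for a two-component 1-D Gaussian mixture."""
            mu = np.array([x.min(), x.max()])
            var = np.array([x.var(), x.var()])
            pi = np.array([0.5, 0.5])
            for _ in range(iters):
                # E-step: responsibilities
                dens = (pi / np.sqrt(2 * np.pi * var)
                        * np.exp(-(x[:, None] - mu) ** 2 / (2 * var)))
                resp = dens / dens.sum(axis=1, keepdims=True)
                # M-step: weighted parameter updates
                nk = resp.sum(axis=0)
                mu = (resp * x[:, None]).sum(axis=0) / nk
                var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk
                pi = nk / len(x)
            return pi, mu, var

        rng = np.random.default_rng(0)
        x = np.concatenate([rng.normal(-2, 1, 300), rng.normal(3, 0.5, 200)])
        print(em_gmm(x))  # weights ~ (0.6, 0.4), means ~ (-2, 3)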

  13. Basic firefly algorithm for document clustering

    NASA Astrophysics Data System (ADS)

    Mohammed, Athraa Jasim; Yusof, Yuhanis; Husni, Husniza

    2015-12-01

    Document clustering plays a significant role in Information Retrieval (IR), where it organizes documents prior to the retrieval process. To date, various clustering algorithms have been proposed, including K-means and Particle Swarm Optimization. Even though these algorithms have been widely applied in many disciplines due to their simplicity, such approaches tend to be trapped in a local minimum during the search for an optimal solution. To address this shortcoming, this paper proposes a Basic Firefly (Basic FA) algorithm to cluster text documents. The algorithm employs the Average Distance to Document Centroid (ADDC) as the objective function of the search. Experiments utilizing the proposed algorithm were conducted on the 20Newsgroups benchmark dataset. Results demonstrate that the Basic FA generates more robust and compact clusters than those produced by K-means and Particle Swarm Optimization (PSO).

  14. An algorithm on distributed mining association rules

    NASA Astrophysics Data System (ADS)

    Xu, Fan

    2005-12-01

    With the rapid development of the Internet/Intranet, distributed databases have become a broadly used environment in various areas, and mining association rules in distributed databases is a critical task. The algorithms for distributed mining of association rules can be divided into two classes: DD algorithms and CD algorithms. A DD algorithm focuses on data partition optimization so as to enhance efficiency. A CD algorithm, on the other hand, considers a setting where the data is arbitrarily partitioned horizontally among the parties to begin with, and focuses on parallelizing the communication. A DD algorithm is not always applicable, however: at the time the data is generated, it is often already partitioned, and in many cases it cannot be gathered and repartitioned for reasons of security and secrecy, transmission cost, or sheer efficiency. A CD algorithm may be a more appealing solution for systems which are naturally distributed over large expanses, such as stock exchange and credit card systems. The FDM algorithm provides an enhancement of the CD algorithm; however, CD and FDM algorithms are both based on net structures and execute on non-shareable resources. In practical applications, distributed databases are often star-structured. This paper proposes an algorithm based on star-structured networks, which are more practical in application, have lower maintenance costs, and are easier to construct. In addition, the algorithm provides high efficiency in communication and good extension in parallel computation.

  15. Parameter incremental learning algorithm for neural networks.

    PubMed

    Wan, Sheng; Banta, Larry E

    2006-11-01

    In this paper, a novel stochastic (or online) training algorithm for neural networks, named parameter incremental learning (PIL) algorithm, is proposed and developed. The main idea of the PIL strategy is that the learning algorithm should not only adapt to the newly presented input-output training pattern by adjusting parameters, but also preserve the prior results. A general PIL algorithm for feedforward neural networks is accordingly presented as the first-order approximate solution to an optimization problem, where the performance index is the combination of proper measures of preservation and adaptation. The PIL algorithms for the multilayer perceptron (MLP) are subsequently derived. Numerical studies show that for all the three benchmark problems used in this paper the PIL algorithm for MLP is measurably superior to the standard online backpropagation (BP) algorithm and the stochastic diagonal Levenberg-Marquardt (SDLM) algorithm in terms of the convergence speed and accuracy. Other appealing features of the PIL algorithm are that it is computationally as simple as the BP algorithm, and as easy to use as the BP algorithm. It, therefore, can be applied, with better performance, to any situation where the standard online BP algorithm is applicable. PMID:17131658

  16. Algorithmic causets

    NASA Astrophysics Data System (ADS)

    Bolognesi, Tommaso

    2011-07-01

    In the context of quantum gravity theories, several researchers have proposed causal sets as appropriate discrete models of spacetime. We investigate families of causal sets obtained from two simple models of computation - 2D Turing machines and network mobile automata - that operate on 'high-dimensional' supports, namely 2D arrays of cells and planar graphs, respectively. We study a number of quantitative and qualitative emergent properties of these causal sets, including dimension, curvature and localized structures, or 'particles'. We show how the possibility to detect and separate particles from background space depends on the choice between a global or local view at the causal set. Finally, we spot very rare cases of pseudo-randomness, or deterministic chaos; these exhibit a spontaneous phenomenon of 'causal compartmentation' that appears as a prerequisite for the occurrence of anything of physical interest in the evolution of spacetime.

  17. Optimized Swinging Door Algorithm for Wind Power Ramp Event Detection: Preprint

    SciTech Connect

    Cui, Mingjian; Zhang, Jie; Florita, Anthony R.; Hodge, Bri-Mathias; Ke, Deping; Sun, Yuanzhang

    2015-08-06

    Significant wind power ramp events (WPREs) are those that influence the integration of wind power, and they are a concern to the continued reliable operation of the power grid. As wind power penetration has increased in recent years, so has the importance of wind power ramps. In this paper, an optimized swinging door algorithm (SDA) is developed to improve ramp detection performance. Wind power time series data are segmented by the original SDA, and then all significant ramps are detected and merged through a dynamic programming algorithm. An application of the optimized SDA is provided to ascertain the optimal parameter of the original SDA. Measured wind power data from the Electric Reliability Council of Texas (ERCOT) are used to evaluate the proposed optimized SDA.
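
    For reference, a minimal sketch of the original swinging door segmentation that the paper optimizes; the deviation threshold dev plays the role of the parameter whose optimal value the optimized SDA ascertains.

        def swinging_door(times, values, dev):
            """Swinging door compression: archive a point only when a new
            sample no longer fits inside the 'door' of slopes (within
            +/- dev) opened from the last archived point."""
            archived = [(times[0], values[0])]
            t0, y0 = times[0], values[0]
            s_min, s_max = float('-inf'), float('inf')
            prev = (times[0], values[0])
            for t, y in zip(times[1:], values[1:]):
                s_max = min(s_max, (y + dev - y0) / (t - t0))  # upper door slope
                s_min = max(s_min, (y - dev - y0) / (t - t0))  # lower door slope
                if s_min > s_max:              # door closed: archive prev point
                    archived.append(prev)
                    t0, y0 = prev
                    s_max = (y + dev - y0) / (t - t0)
                    s_min = (y - dev - y0) / (t - t0)
                prev = (t, y)
            archived.append(prev)
            return archived

        # a ramp with a kink compresses to its 3 corner points
        ts = list(range(9))
        ys = [0, 1, 2, 3, 4, 3, 2, 1, 0]
        print(swinging_door(ts, ys, dev=0.1))  # [(0, 0), (4, 4), (8, 0)]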

  18. The annealing robust backpropagation (ARBP) learning algorithm.

    PubMed

    Chuang, C C; Su, S F; Hsiao, C C

    2000-01-01

    Multilayer feedforward neural networks are often referred to as universal approximators. Nevertheless, if the used training data are corrupted by large noise, such as outliers, traditional backpropagation learning schemes may not always come up with acceptable performance. Even though various robust learning algorithms have been proposed in the literature, those approaches still suffer from the initialization problem. In those robust learning algorithms, the so-called M-estimator is employed. For the M-estimation type of learning algorithms, the loss function is used to play the role in discriminating against outliers from the majority by degrading the effects of those outliers in learning. However, the loss function used in those algorithms may not correctly discriminate against those outliers. In this paper, the annealing robust backpropagation learning algorithm (ARBP) that adopts the annealing concept into the robust learning algorithms is proposed to deal with the problem of modeling under the existence of outliers. The proposed algorithm has been employed in various examples. Those results all demonstrated its superiority over other robust learning algorithms, independent of outliers. In the paper, not only is the annealing concept adopted into the robust learning algorithms, but the annealing schedule k/t was also found experimentally to achieve the best performance among other annealing schedules, where k is a constant and t is the epoch number. PMID:18249835
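
    A minimal sketch of the annealing idea: use an M-estimator whose scale shrinks with the annealing schedule beta(t) = k/t, so that early epochs treat all residuals nearly quadratically and later epochs progressively down-weight outliers. The Cauchy loss below is one common M-estimator choice, an assumption rather than the paper's exact loss function.

        import numpy as np

        def annealed_cauchy_weights(residuals, epoch, k=10.0):
            """Per-sample gradient weights (d rho / d r) / r for the Cauchy
            M-estimator rho(r) = (beta/2) * ln(1 + r^2 / beta), with the
            annealing schedule beta = k / t."""
            beta = k / max(epoch, 1)
            return 1.0 / (1.0 + residuals ** 2 / beta)

        # large residuals (outliers) keep weight ~ 1 early and ~ 0 later
        r = np.array([0.1, 0.5, 5.0])
        print(annealed_cauchy_weights(r, epoch=1))    # ~ [1.00, 0.98, 0.29]
        print(annealed_cauchy_weights(r, epoch=100))  # ~ [0.91, 0.29, 0.004]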

  19. QPSO-Based Adaptive DNA Computing Algorithm

    PubMed Central

    Karakose, Mehmet; Cigdem, Ugur

    2013-01-01

    DNA (deoxyribonucleic acid) computing, a new computation model that uses DNA molecules for information storage, has been increasingly applied to optimization and data analysis in recent years. However, the DNA computing algorithm has some limitations in terms of convergence speed, adaptability, and effectiveness. In this paper, a new approach for the improvement of DNA computing is proposed. This approach aims to run the DNA computing algorithm with adaptive parameters towards the desired goal using quantum-behaved particle swarm optimization (QPSO). The contributions provided by the proposed QPSO-based adaptive DNA computing algorithm are as follows: (1) the population size, crossover rate, maximum number of operations, enzyme and virus mutation rates, and fitness function of the DNA computing algorithm are simultaneously tuned for the adaptive process; (2) the adaptive algorithm is performed using the QPSO algorithm for goal-driven progress, faster operation, and flexibility in data; and (3) a numerical realization of the DNA computing algorithm with the proposed approach is implemented in system identification. Two experiments with different systems were carried out to evaluate the performance of the proposed approach, with comparative results. Experimental results obtained with Matlab and FPGA demonstrate the ability to provide effective optimization, considerable convergence speed, and high accuracy relative to the conventional DNA computing algorithm. PMID:23935409

  20. An implementation of the look-ahead Lanczos algorithm for non-Hermitian matrices

    NASA Technical Reports Server (NTRS)

    Freund, Roland W.; Gutknecht, Martin H.; Nachtigal, Noel M.

    1991-01-01

    The nonsymmetric Lanczos method can be used to compute eigenvalues of large sparse non-Hermitian matrices or to solve large sparse non-Hermitian linear systems. However, the original Lanczos algorithm is susceptible to possible breakdowns and potential instabilities. An implementation is presented of a look-ahead version of the Lanczos algorithm that, except for the very special situation of an incurable breakdown, overcomes these problems by skipping over those steps in which a breakdown or near-breakdown would occur in the standard process. The proposed algorithm can handle look-ahead steps of any length and requires the same number of matrix-vector products and inner products as the standard Lanczos process without look-ahead.

  1. An implementation of the look-ahead Lanczos algorithm for non-Hermitian matrices, part 1

    NASA Technical Reports Server (NTRS)

    Freund, Roland W.; Gutknecht, Martin H.; Nachtigal, Noel M.

    1990-01-01

    The nonsymmetric Lanczos method can be used to compute eigenvalues of large sparse non-Hermitian matrices or to solve large sparse non-Hermitian linear systems. However, the original Lanczos algorithm is susceptible to possible breakdowns and potential instabilities. We present an implementation of a look-ahead version of the Lanczos algorithm which overcomes these problems by skipping over those steps in which a breakdown or near-breakdown would occur in the standard process. The proposed algorithm can handle look-ahead steps of any length and is not restricted to steps of length 2, as earlier implementations are. Also, our implementation has the feature that it requires roughly the same number of inner products as the standard Lanczos process without look-ahead.

  2. Stability of Bareiss algorithm

    NASA Astrophysics Data System (ADS)

    Bojanczyk, Adam W.; Brent, Richard P.; de Hoog, F. R.

    1991-12-01

    In this paper, we present a numerical stability analysis of Bareiss algorithm for solving a symmetric positive definite Toeplitz system of linear equations. We also compare Bareiss algorithm with Levinson algorithm and conclude that the former has superior numerical properties.

  3. The Origin of Roman Numerals

    ERIC Educational Resources Information Center

    Dapre, P. A.

    1977-01-01

    A theory on the origin of Roman numerals proposes that the principal numbers can be stylized in terms of a square. It is speculated that the abacus or its equivalents, such as the counter or chequer-board, was used to count before the alphabet became common. (SW)

  4. PARM--an efficient algorithm to mine association rules from spatial data.

    PubMed

    Ding, Qin; Ding, Qiang; Perrizo, William

    2008-12-01

    Association rule mining, originally proposed for market basket data, has potential applications in many areas. Spatial data, such as remote sensed imagery (RSI) data, is one of the promising application areas. Extracting interesting patterns and rules from spatial data sets, composed of images and associated ground data, can be of importance in precision agriculture, resource discovery, and other areas. However, in most cases, the sizes of the spatial data sets are too large to be mined in a reasonable amount of time using existing algorithms. In this paper, we propose an efficient approach to derive association rules from spatial data using Peano Count Tree (P-tree) structure. P-tree structure provides a lossless and compressed representation of spatial data. Based on P-trees, an efficient association rule mining algorithm PARM with fast support calculation and significant pruning techniques is introduced to improve the efficiency of the rule mining process. The P-tree based Association Rule Mining (PARM) algorithm is implemented and compared with FP-growth and Apriori algorithms. Experimental results showed that our algorithm is superior for association rule mining on RSI spatial data. PMID:19022723
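
    The speed of P-tree-based mining comes from computing itemset supports with logical operations on compressed bit representations instead of database scans. The essence is shown here with plain (uncompressed) bit vectors as a simplifying assumption, with hypothetical item names:

        import numpy as np

        def support(bitmaps, itemset):
            """Support of an itemset = count of 1s in the AND of its bitmaps."""
            mask = np.ones_like(next(iter(bitmaps.values())))
            for item in itemset:
                mask &= bitmaps[item]
            return int(mask.sum())

        # toy transaction database as per-item bit vectors (one bit per record)
        bitmaps = {
            'high_ndvi': np.array([1, 1, 0, 1, 1], dtype=np.uint8),
            'high_yield': np.array([1, 0, 0, 1, 1], dtype=np.uint8),
            'low_moisture': np.array([0, 0, 1, 1, 0], dtype=np.uint8),
        }
        print(support(bitmaps, ['high_ndvi', 'high_yield']))  # 3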

  5. A modified decision tree algorithm based on genetic algorithm for mobile user classification problem.

    PubMed

    Liu, Dong-sheng; Fan, Shu-jiang

    2014-01-01

    In order to offer mobile customers better service, we should first classify the mobile users. Aiming at the limitations of previous classification methods, this paper puts forward a modified decision tree algorithm for mobile user classification, which introduces a genetic algorithm to optimize the results of the decision tree algorithm. We also take context information as a classification attribute for the mobile user, and we classify the context into public context and private context classes. We then analyze the processes and operators of the algorithm. Finally, we conduct an experiment on mobile user data: the algorithm classifies mobile users into Basic service, E-service, Plus service, and Total service user classes, and it also yields rules about the mobile users. Compared to the C4.5 decision tree algorithm and the SVM algorithm, the algorithm proposed in this paper achieves higher accuracy and greater simplicity. PMID:24688389

  6. A Modified Decision Tree Algorithm Based on Genetic Algorithm for Mobile User Classification Problem

    PubMed Central

    Liu, Dong-sheng; Fan, Shu-jiang

    2014-01-01

    In order to offer mobile customers better service, we should first classify the mobile users. Aiming at the limitations of previous classification methods, this paper puts forward a modified decision tree algorithm for mobile user classification, which introduces a genetic algorithm to optimize the results of the decision tree algorithm. We also take context information as a classification attribute for the mobile user, and we classify the context into public context and private context classes. We then analyze the processes and operators of the algorithm. Finally, we conduct an experiment on mobile user data: the algorithm classifies mobile users into Basic service, E-service, Plus service, and Total service user classes, and it also yields rules about the mobile users. Compared to the C4.5 decision tree algorithm and the SVM algorithm, the algorithm proposed in this paper achieves higher accuracy and greater simplicity. PMID:24688389

  7. Modifications to Axially Symmetric Simulations Using New DSMC (2007) Algorithms

    NASA Technical Reports Server (NTRS)

    Liechty, Derek S.

    2008-01-01

    Several modifications aimed at improving physical accuracy are proposed for solving axially symmetric problems building on the DSMC (2007) algorithms introduced by Bird. Originally developed to solve nonequilibrium, rarefied flows, the DSMC method is now regularly used to solve complex problems over a wide range of Knudsen numbers. These new algorithms include features such as nearest neighbor collisions excluding the previous collision partners, separate collision and sampling cells, automatically adaptive variable time steps, a modified no-time counter procedure for collisions, and discontinuous and event-driven physical processes. Axially symmetric solutions require radial weighting for the simulated molecules since the molecules near the axis represent fewer real molecules than those farther away from the axis due to the difference in volume of the cells. In the present methodology, these radial weighting factors are continuous, linear functions that vary with the radial position of each simulated molecule. It is shown that how one defines the number of tentative collisions greatly influences the mean collision time near the axis. The method by which the grid is treated for axially symmetric problems also plays an important role near the axis, especially for scalar pressure. A new method to treat how the molecules are traced through the grid is proposed to alleviate the decrease in scalar pressure at the axis near the surface. Also, a modification to the duplication buffer is proposed to vary the duplicated molecular velocities while retaining the molecular kinetic energy and axially symmetric nature of the problem.

  8. Concurrent constant modulus algorithm and multi-modulus algorithm scheme for high-order QAM signals

    NASA Astrophysics Data System (ADS)

    Rao, Wei

    2011-10-01

    In order to overcome the slow convergence rate and large steady-state mean square error of the constant modulus algorithm (CMA), a concurrent constant modulus algorithm and multi-modulus algorithm scheme for high-order QAM signals is proposed, which makes full use of the fact that high-order QAM constellation points lie on several different moduli. The scheme uses the CMA as the base mode and the multi-modulus algorithm as the second mode, and the two modes operate concurrently. The efficiency of the method is demonstrated by computer simulations in underwater acoustic channels.
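
    For reference, the baseline CMA tap update used as the first mode (the concurrent multi-modulus mode follows the same pattern with separate real and imaginary moduli):

        import numpy as np

        def cma_step(w, x, R2, mu=1e-3):
            """One constant-modulus-algorithm update of equalizer taps w on
            an input window x (newest sample first).
            Error: e = y * (|y|^2 - R2); update: w <- w - mu * e * conj(x)."""
            y = np.dot(w, x)                     # equalizer output
            e = y * (np.abs(y) ** 2 - R2)        # constant-modulus error
            return w - mu * e * np.conj(x), y

        # dispersion constant for a constellation: R2 = E|a|^4 / E|a|^2
        def dispersion_constant(constellation):
            a = np.asarray(constellation)
            return np.mean(np.abs(a) ** 4) / np.mean(np.abs(a) ** 2)

        qam4 = [1+1j, 1-1j, -1+1j, -1-1j]
        print(dispersion_constant(qam4))  # 2.0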

  9. A Winner Determination Algorithm for Combinatorial Auctions Based on Hybrid Artificial Fish Swarm Algorithm

    NASA Astrophysics Data System (ADS)

    Zheng, Genrang; Lin, ZhengChun

    The problem of winner determination in combinatorial auctions is a hot topic in electronic business and an NP-hard problem. A Hybrid Artificial Fish Swarm Algorithm (HAFSA), which combines the First Suite Heuristic Algorithm (FSHA) with the Artificial Fish Swarm Algorithm (AFSA), is proposed to solve the problem, building on the theory of AFSA. Experiment results show that the HAFSA is a rapid and efficient algorithm for winner determination. Compared with an Ant Colony Optimization algorithm, it shows good performance with broad and promising applications.

  10. Analysis of the geophysical data using a posteriori algorithms

    NASA Astrophysics Data System (ADS)

    Voskoboynikova, Gyulnara; Khairetdinov, Marat

    2016-04-01

    The problems of monitoring, prediction and prevention of extraordinary natural and technogenic events are among the priority problems of our time. Such events include earthquakes, volcanic eruptions, lunar-solar tides, landslides, falling celestial bodies, explosions of stockpiled ammunition, and the numerous quarry explosions in open coal mines that provoke technogenic earthquakes. Monitoring is based on a number of successive stages, which include remote registration of the event responses and measurement of the main parameters, such as the arrival times of seismic waves or the original waveforms. At the final stage, the inverse problems associated with determining the geographic location and time of the registered event are solved. Improving the accuracy of parameter estimation from the original records under high noise is therefore an important problem. As is known, the main measurement errors arise from the influence of external noise, the difference between the real and model structures of the medium, imprecision in the timing of the event's epicenter, and instrumental errors. Therefore, a posteriori algorithms that are more accurate than the known ones are proposed and investigated. They are based on a combination of a discrete optimization method and a fractal approach for the joint detection and estimation of arrival times in quasi-periodic waveform sequences in geophysical monitoring problems, with improved accuracy. The alternative approaches existing today do not provide the required accuracy. The proposed algorithms are considered for the tasks of vibration sounding of the Earth during lunar and solar tides and for the problem of monitoring the location of a borehole seismic source in production drilling.

  11. Cooperative scheduling of imaging observation tasks for high-altitude airships based on propagation algorithm.

    PubMed

    Chuan, He; Dishan, Qiu; Jin, Liu

    2012-01-01

    The cooperative scheduling problem on high-altitude airships for imaging observation tasks is discussed. A constraint programming model is established by analyzing the main constraints, taking the maximum task benefit and the minimum cruising distance as two optimization objectives. The cooperative scheduling problem of high-altitude airships is converted into a main problem and a subproblem by adopting a hierarchical architecture. The solution to the main problem constructs the preliminary matching between tasks and observation resources in order to reduce the search space of the original problem. Furthermore, the solution to the subproblem detects the key nodes that each airship needs to fly through in sequence, so as to obtain the cruising path. Firstly, the task set is divided by using the k-core neighborhood growth cluster algorithm (K-NGCA). Then, a novel swarm intelligence algorithm named the propagation algorithm (PA) is combined with the key node search algorithm (KNSA) to optimize the cruising path of each airship and determine the execution time interval of each task. Meanwhile, this paper also provides the realization approach of the above algorithm and gives a detailed introduction to the encoding rules, search models, and propagation mechanism of the PA. Finally, the application results and comparison analysis show that the proposed models and algorithms are effective and feasible. PMID:23365522

  12. Cooperative Scheduling of Imaging Observation Tasks for High-Altitude Airships Based on Propagation Algorithm

    PubMed Central

    Chuan, He; Dishan, Qiu; Jin, Liu

    2012-01-01

    The cooperative scheduling problem on high-altitude airships for imaging observation tasks is discussed. A constraint programming model is established by analyzing the main constraints, taking the maximum task benefit and the minimum cruising distance as two optimization objectives. The cooperative scheduling problem of high-altitude airships is converted into a main problem and a subproblem by adopting a hierarchical architecture. The solution to the main problem constructs the preliminary matching between tasks and observation resources in order to reduce the search space of the original problem. Furthermore, the solution to the subproblem detects the key nodes that each airship needs to fly through in sequence, so as to obtain the cruising path. Firstly, the task set is divided by using the k-core neighborhood growth cluster algorithm (K-NGCA). Then, a novel swarm intelligence algorithm named the propagation algorithm (PA) is combined with the key node search algorithm (KNSA) to optimize the cruising path of each airship and determine the execution time interval of each task. Meanwhile, this paper also provides the realization approach of the above algorithm and gives a detailed introduction to the encoding rules, search models, and propagation mechanism of the PA. Finally, the application results and comparison analysis show that the proposed models and algorithms are effective and feasible. PMID:23365522

  13. Use of Algorithm of Changes for Optimal Design of Heat Exchanger

    NASA Astrophysics Data System (ADS)

    Tam, S. C.; Tam, H. K.; Chio, C. H.; Tam, L. M.

    2010-05-01

    For economic reasons, the optimal design of heat exchangers is required. The design of a heat exchanger is usually based on an iterative process involving the design conditions, equipment geometries, and the heat transfer and friction factor correlations. Using the traditional iterative method, many trials are needed to satisfy the compromise between heat exchange performance and cost, the process is cumbersome, and the optimal design often depends on the design engineer's experience. Therefore, in recent studies, many researchers, reviewed in [1], have applied the genetic algorithm (GA) [2] to heat exchanger design, with results that outperformed the traditional method. In this study, an alternative approach, the algorithm of changes, is proposed for the optimal design of shell-and-tube heat exchangers [3]. This new method, based on the I Ching (易經), was developed originally by the author. In the algorithm, the hexagram operations of the I Ching are generalized to the binary string case, and an iterative procedure that imitates I Ching inference is defined. Following [3], the shell inside diameter, tube outside diameter, and baffle spacing were treated as the design (optimized) variables, and the cost of the heat exchanger was taken as the objective function. The case study shows that the algorithm of changes is comparable to the GA method: both can find the optimal solution in a short time. However, since it does not interchange information between binary strings, the algorithm of changes has an advantage over GA in parallel computation.

  14. Algorithms for error localization of discrete data

    SciTech Connect

    Liepins, G.E.

    1984-07-01

    A self-contained derivation and evaluation of the three principal algorithms for error localization of discrete data are provided: (1) generation of a sufficient set of edits by single-sweep with fathoming (suggested by the original work of Fellegi and Holt), (2) sequential row generation (developed by Garfinkel), and (3) modifications of the Chernikova algorithm (suggested by the work of Rubin and Sande). The first two approaches can be characterized as Boolean-based, whereas the last is a nonpivoting extremal ray search. The background to the Boolean approaches to data editing is provided, as are Burger's results on extremal rays, Chernikova's original algorithm, and Rubin's modifications. For selected results, new elementary proofs are given in the appendix. For the Chernikova algorithm, novel use is made of the problem structure of localization to minimize the undesirable tendency to generate excessively large matrices.

  15. Improved algorithm for calculating the Chandrasekhar function

    NASA Astrophysics Data System (ADS)

    Jablonski, A.

    2013-02-01

    …number of abscissas N. (4) For Romberg quadrature, to optimize performance, the mixed algorithm C was proposed, in which algorithm A is used for arguments x smaller than or equal to x0 = 0.4 while algorithm B is used for x larger than 0.4 [1]. For Gauss-Legendre quadrature, the limit x0 was found to depend on the number of abscissas N. For each value of N considered, the time of calculation of the H function was determined for pairs of arguments uniformly distributed in the ranges 0 <= x <= 0.05, 0 <= omega <= 1 and 0.05 <= x <= 1, 0 <= omega <= 1. [Fig. 2 caption: Comparison of the running times of algorithms A and B; open circles mark argument pairs where algorithm B is faster, full circles where algorithm A is faster.] For N = 64, algorithm A is faster than algorithm B for x smaller than or equal to 0.0225; thus the value x0 = 0.0225 is proposed for the mixed algorithm C when Gauss-Legendre quadrature with N = 64 is used. Similar computer experiments performed for other values of N are summarized below (the flag L is one of the input parameters for the subroutine GAUSS):

        L    N    x0
        1    16   0.25
        2    20   0.15
        3    24   0.10
        4    32   0.050
        5    40   0.030
        6    48   0.045
        7    64   0.0225 (recommended)
        8    80   0.0125
        9    96   0.020

    In the programs implementing algorithms A, B, and C (CHANDRA, CHANDRB, and CHANDRC), Gauss-Legendre quadrature with N = 64 is currently set; as follows from Fig. 1, algorithm B (and consequently algorithm C) is the fastest in that case. It is still possible to change the number of abscissas: the flag L then has to be modified in lines 165, 169, 185, 189, and 304 of program CHANDRAS_v2, and the value of x0 in line 111 has to be adjusted according to the table above. (5) The above modifications of the code did not affect the accuracy of the calculated Chandrasekhar function, as compared to the original code [1].
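
    For orientation, the H function for isotropic scattering can be computed by iterating Chandrasekhar's exact identity 1/H(mu) = sqrt(1 - omega) + (omega/2) * integral from 0 to 1 of mu' H(mu')/(mu + mu') dmu' on Gauss-Legendre abscissas; the sketch below illustrates the general approach (it is not the published CHANDRAS code):

        import numpy as np

        def chandrasekhar_H(x, omega, n=64, tol=1e-12, max_iter=500):
            """Iteratively solve Chandrasekhar's H-function (isotropic
            scattering, omega < 1) with N-point Gauss-Legendre quadrature
            mapped from [-1, 1] to [0, 1]."""
            nodes, weights = np.polynomial.legendre.leggauss(n)
            mu = 0.5 * (nodes + 1.0)          # abscissas on (0, 1)
            w = 0.5 * weights                 # corresponding weights
            H = np.ones(n)                    # H(mu) at the abscissas
            sq = np.sqrt(1.0 - omega)
            for _ in range(max_iter):
                integral = np.array([np.sum(w * mu * H / (m + mu)) for m in mu])
                H_new = 1.0 / (sq + 0.5 * omega * integral)
                if np.max(np.abs(H_new - H)) < tol:
                    H = H_new
                    break
                H = H_new
            # evaluate at the requested argument x with the converged H(mu)
            return 1.0 / (sq + 0.5 * omega * np.sum(w * mu * H / (x + mu)))

        print(chandrasekhar_H(0.5, 0.9))  # e.g. H(0.5) for albedo 0.9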

  16. Practical algorithmic probability: an image inpainting example

    NASA Astrophysics Data System (ADS)

    Potapov, Alexey; Scherbakov, Oleg; Zhdanov, Innokentii

    2013-12-01

    The possibility of practical application of algorithmic probability is analyzed on the example of the image inpainting problem, which corresponds precisely to the prediction problem. Such consideration is fruitful both for the theory of universal prediction and for practical image inpainting methods. Efficient application of algorithmic probability implies that its computation is essentially optimized for some specific data representation. In this paper, we considered one image representation, namely the spectral representation, for which an image inpainting algorithm is proposed based on the spectrum entropy criterion. This algorithm showed promising results in spite of the very simple representation. The same approach can be used for introducing an ALP-based criterion for more powerful image representations.

  17. Atmospheric channel for bistatic optical communication: simulation algorithms

    NASA Astrophysics Data System (ADS)

    Belov, V. V.; Tarasenkov, M. V.

    2015-11-01

    Three algorithms for statistical simulation of the impulse response (IR) of the atmospheric optical communication channel are considered: the local estimate and double local estimate algorithms, and the algorithm suggested by us. On the example of a homogeneous molecular atmosphere, it is demonstrated that the double local estimate algorithm and the suggested algorithm are more efficient than the local estimate algorithm. For small optical path lengths, the proposed algorithm is more efficient, and for large optical path lengths, the double local estimate algorithm is more efficient. Using the proposed algorithm, the communication quality is estimated for a particular case of the atmospheric channel under conditions of intermediate turbidity. The communication quality is characterized by the maximum IR, the time of maximum IR, the integral IR, and the bandwidth of the communication channel. Calculations of these criteria demonstrated that communication is most efficient when the point of intersection of the directions toward the source and the receiver is closest to the source point.

  18. An Adaptive Unified Differential Evolution Algorithm for Global Optimization

    SciTech Connect

    Qiang, Ji; Mitchell, Chad

    2014-11-03

    In this paper, we propose a new adaptive unified differential evolution algorithm for single-objective global optimization. Instead of the multiple mutation strategies proposed in conventional differential evolution algorithms, this algorithm employs a single equation unifying multiple strategies into one expression. It has the virtue of mathematical simplicity and also provides users the flexibility for broader exploration of the space of mutation operators. By making all control parameters in the proposed algorithm self-adaptively evolve during the process of optimization, it frees the application users from the burden of choosing appropriate control parameters and also improves the performance of the algorithm. In numerical tests using thirteen basic unimodal and multimodal functions, the proposed adaptive unified algorithm shows promising performance in comparison to several conventional differential evolution algorithms.
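
    A minimal sketch of differential evolution with self-adaptively evolving control parameters, in the spirit of the paper; the unified multi-strategy mutation expression itself is not reproduced, and jDE-style self-adaptation of F and CR on a DE/rand/1/bin base is used as a stand-in:

        import numpy as np

        def adaptive_de(f, bounds, pop=30, gens=200, seed=0):
            """DE/rand/1/bin with self-adaptive F and CR: each individual
            carries its own F, CR, which are randomly regenerated with
            probability 0.1 and survive only if the trial vector wins."""
            rng = np.random.default_rng(seed)
            lo, hi = np.array(bounds).T
            d = len(lo)
            X = rng.uniform(lo, hi, (pop, d))
            F = np.full(pop, 0.5); CR = np.full(pop, 0.9)
            fit = np.array([f(x) for x in X])
            for _ in range(gens):
                for i in range(pop):
                    Fi = rng.uniform(0.1, 1.0) if rng.random() < 0.1 else F[i]
                    CRi = rng.random() if rng.random() < 0.1 else CR[i]
                    a, b, c = rng.choice([j for j in range(pop) if j != i], 3,
                                         replace=False)
                    v = np.clip(X[a] + Fi * (X[b] - X[c]), lo, hi)  # mutation
                    cross = rng.random(d) < CRi
                    cross[rng.integers(d)] = True       # binomial crossover
                    u = np.where(cross, v, X[i])
                    fu = f(u)
                    if fu <= fit[i]:                    # greedy selection
                        X[i], fit[i], F[i], CR[i] = u, fu, Fi, CRi
            return X[fit.argmin()], fit.min()

        print(adaptive_de(lambda x: np.sum(x ** 2), [(-5, 5)] * 5))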

  19. The origins of originality: the neural bases of creative thinking and originality.

    PubMed

    Shamay-Tsoory, S G; Adler, N; Aharon-Peretz, J; Perry, D; Mayseless, N

    2011-01-01

    Although creativity has been related to prefrontal activity, recent neurological case studies postulate that patients who have left frontal and temporal degeneration involving deterioration of language abilities may actually develop de novo artistic abilities. In this study, we propose a neural and cognitive model according to which a balance between the two hemispheres affects a major aspect of creative cognition, namely, originality. In order to examine the neural basis of originality, that is, the ability to produce statistically infrequent ideas, patients with localized lesions in the medial prefrontal cortex (mPFC), inferior frontal gyrus (IFG), and posterior parietal and temporal cortex (PC), were assessed by two tasks involving divergent thinking and originality. Results indicate that lesions in the mPFC involved the most profound impairment in originality. Furthermore, precise anatomical mapping of lesions indicated that while the extent of lesion in the right mPFC was associated with impaired originality, lesions in the left PC were associated with somewhat elevated levels of originality. A positive correlation between creativity scores and left PC lesions indicated that the larger the lesion is in this area the greater the originality. On the other hand, a negative correlation was observed between originality scores and lesions in the right mPFC. It is concluded that the right mPFC is part of a right fronto-parietal network which is responsible for producing original ideas. It is possible that more linear cognitive processing such as language, mediated by left hemisphere structures interferes with creative cognition. Therefore, lesions in the left hemisphere may be associated with elevated levels of originality. PMID:21126528

  20. Vector Quantization Algorithm Based on Associative Memories

    NASA Astrophysics Data System (ADS)

    Guzmán, Enrique; Pogrebnyak, Oleksiy; Yáñez, Cornelio; Manrique, Pablo

    This paper presents a vector quantization algorithm for image compression based on extended associative memories. The proposed algorithm is divided into two stages. First, an associative network is generated by applying the learning phase of the extended associative memories (EAM) to a codebook generated by the LBG algorithm and a training set. This associative network, named the EAM-codebook, represents a new codebook which is used in the next stage; it establishes a relation between the training set and the LBG codebook. Second, the vector quantization process is performed by means of the recalling stage of the EAM, using the EAM-codebook as the associative memory. This process generates the set of class indices to which each input vector belongs. Compared with the LBG algorithm, the main advantages offered by the proposed algorithm are high processing speed and low demand on resources (system memory); results on image compression and quality are presented.
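
    For context on the vector quantization stage, the sketch below shows the classical codebook-based encode/decode step that the LBG-generated codebook supports (plain nearest-codeword assignment in numpy; the EAM learning and recall phases themselves are not reproduced here):

        import numpy as np

        def vq_encode(blocks, codebook):
            # blocks: (N, d) image vectors; codebook: (K, d) codewords.
            # Returns the index of the nearest codeword for each block,
            # i.e. the class index to which each input vector belongs.
            d2 = ((blocks[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
            return d2.argmin(axis=1)

        def vq_decode(indices, codebook):
            # Reconstruction: replace each index by its codeword.
            return codebook[indices]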

  1. An enhanced algorithm for multiple sequence alignment of protein sequences using genetic algorithm

    PubMed Central

    Kumar, Manish

    2015-01-01

    One of the most fundamental operations in biological sequence analysis is multiple sequence alignment (MSA). The basic multiple sequence alignment problem is to determine the most biologically plausible alignment of protein or DNA sequences. In this paper, an alignment method using a genetic algorithm for multiple sequence alignment is proposed. Two genetic operators, crossover and mutation, were defined and implemented with the proposed method in order to study the population evolution and the quality of the aligned sequences. The proposed method is assessed on a protein benchmark dataset, BAliBASE, by comparing the obtained results to those obtained with other alignment algorithms, e.g., SAGA, RBT-GA, PRRP, HMMT, SB-PIMA, CLUSTALX, CLUSTAL W, DIALIGN and PILEUP8. Experiments on a wide range of data have shown that the proposed algorithm is much better (in terms of score) than previously proposed algorithms in its ability to achieve high alignment quality. PMID:27065770
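
    GA-based aligners need a fitness function to rank candidate alignments; a common choice is the column-wise sum-of-pairs score sketched below (the paper's exact scoring scheme is not given in this record, so the match/mismatch/gap weights here are placeholders):

        from itertools import combinations

        def sum_of_pairs(alignment, match=1, mismatch=-1, gap=-2):
            # alignment: list of equal-length strings with '-' for gaps.
            score = 0
            for column in zip(*alignment):
                for a, b in combinations(column, 2):
                    if a == '-' and b == '-':
                        continue          # gap-gap pairs are ignored
                    elif a == '-' or b == '-':
                        score += gap
                    elif a == b:
                        score += match
                    else:
                        score += mismatch
            return score

        # Example: sum_of_pairs(["AC-GT", "ACGGT", "A--GT"])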

  3. Library of Continuation Algorithms

    2005-03-01

    LOCA (Library of Continuation Algorithms) is scientific software written in C++ that provides advanced analysis tools for nonlinear systems. In particular, it provides parameter continuation algorithms, bifurcation tracking algorithms, and drivers for linear stability analysis. The algorithms are aimed at large-scale applications that use Newton's method for their nonlinear solve.

  4. Novel and efficient tag SNPs selection algorithms.

    PubMed

    Chen, Wen-Pei; Hung, Che-Lun; Tsai, Suh-Jen Jane; Lin, Yaw-Ling

    2014-01-01

    SNPs are the most abundant form of genetic variation amongst species; association studies between complex diseases and SNPs or haplotypes have received great attention. However, these studies are restricted by the cost of genotyping all SNPs; thus, it is necessary to find smaller subsets, or tag SNPs, that represent the rest of the SNPs. The existing tag SNP selection algorithms are notoriously time-consuming. An efficient algorithm for tag SNP selection is presented and applied to analyze the HapMap YRI data. The experimental results show that the proposed algorithm achieves better performance than the existing tag SNP selection algorithms; in most cases it is at least ten times faster than the existing methods, and in many cases, when the redundant ratio of the block is high, it can be thousands of times faster. Tools and web services for haplotype block analysis, integrated through the Hadoop MapReduce framework, have also been developed using the proposed algorithm as the computation kernel. PMID:24212035

  5. Trial encoding algorithms ensemble.

    PubMed

    Cheng, Lipin Bill; Yeh, Ren Jye

    2013-01-01

    This paper proposes trial algorithms for some basic components in cryptography and lossless bit compression. The symmetric encryption is accomplished by mixing up randomizations and scrambling, with hashing of the key playing an essential role. The digital signature is adapted from the Hill cipher, with the verification key matrices incorporating non-invertible parts to hide the signature matrix. The hash is a straight running summation (addition chain) of data bytes plus some randomization; one simplified version can serve as a burst-error-correcting code. The lossless bit compressor is Shannon-Fano coding, which is less optimal than the later Huffman and arithmetic coding but can be conveniently implemented without the use of a tree structure and improved with byte concatenation. PMID:27057475
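
    Since the record stresses that Shannon-Fano coding can be implemented without a tree structure, here is a minimal sketch of the textbook top-down construction (the generic algorithm, not the authors' specific implementation):

        def shannon_fano(freqs):
            # freqs: dict mapping symbol -> count.
            # Returns dict mapping symbol -> prefix-free bitstring.
            symbols = sorted(freqs, key=freqs.get, reverse=True)
            codes = {s: '' for s in symbols}

            def split(group):
                if len(group) < 2:
                    return
                total = sum(freqs[s] for s in group)
                acc, cut = 0, 1
                for i, s in enumerate(group[:-1]):
                    acc += freqs[s]
                    if acc >= total / 2:
                        cut = i + 1
                        break
                for s in group[:cut]:
                    codes[s] += '0'   # upper half gets a 0 bit
                for s in group[cut:]:
                    codes[s] += '1'   # lower half gets a 1 bit
                split(group[:cut])
                split(group[cut:])

            split(symbols)
            return codes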

  6. The BR eigenvalue algorithm

    SciTech Connect

    Geist, G.A.; Howell, G.W.; Watkins, D.S.

    1997-11-01

    The BR algorithm, a new method for calculating the eigenvalues of an upper Hessenberg matrix, is introduced. It is a bulge-chasing algorithm like the QR algorithm, but, unlike the QR algorithm, it is well adapted to computing the eigenvalues of the narrowband, nearly tridiagonal matrices generated by the look-ahead Lanczos process. This paper describes the BR algorithm and gives numerical evidence that it works well in conjunction with the Lanczos process. On the biggest problems run so far, the BR algorithm beats the QR algorithm by a factor of 30-60 in computing time and a factor of over 100 in matrix storage space.

  7. DHCP Origin Traceback

    NASA Astrophysics Data System (ADS)

    Majumdar, Saugat; Kulkarni, Dhananjay; Ravishankar, Chinya V.

    Imagine that the DHCP server is under attack from malicious hosts in your network. How would you know where these DHCP packets are coming from, or which path they took in the network? This paper investigates the problem of determining the origin of a DHCP packet in a network. We propose a practical method for adding a new option field that does not violate any RFCs, which we believe should be a crucial requirement for any related solution. The new DHCP option contains the ingress port and the switch MAC address. We recommend that this option be added at the network edge so that the recorded value can be used for traceback. The computational overhead of our solution is low, and the associated network management burden is low as well. We also address issues related to securing the field in order to maintain the privacy of switch MAC addresses, fragmentation of packets, and possible attack scenarios. Our study shows that the traceback scheme is effective and practical to use in most network environments.

  8. Runtime support for parallelizing data mining algorithms

    NASA Astrophysics Data System (ADS)

    Jin, Ruoming; Agrawal, Gagan

    2002-03-01

    With recent technological advances, shared memory parallel machines have become more scalable and offer large main memories and high bus bandwidths. They are emerging as good platforms for data warehousing and data mining. In this paper, we focus on shared memory parallelization of data mining algorithms. We have developed a series of techniques for parallelization of data mining algorithms, including full replication, full locking, fixed locking, optimized full locking, and cache-sensitive locking. Unlike previous work on shared memory parallelization of specific data mining algorithms, all of our techniques apply to a large number of common data mining algorithms. In addition, we propose a reduction-object based interface for specifying a data mining algorithm. We show how our runtime system can apply any of the techniques we have developed starting from a common specification of the algorithm.

  9. Modified OMP Algorithm for Exponentially Decaying Signals

    PubMed Central

    Kazimierczuk, Krzysztof; Kasprzak, Paweł

    2015-01-01

    A group of signal reconstruction methods, referred to as compressed sensing (CS), has recently found a variety of applications in numerous branches of science and technology. However, the condition for the applicability of standard CS algorithms (e.g., orthogonal matching pursuit, OMP), i.e., the existence of a strictly sparse representation of a signal, is rarely met. Thus, dedicated algorithms for solving particular problems have to be developed. In this paper, we introduce a modification of OMP motivated by the nuclear magnetic resonance (NMR) application of CS. The algorithm is based on the fact that an NMR spectrum consists of Lorentzian peaks and matches a single Lorentzian peak in each of its iterations. Thus, we propose the name Lorentzian peak matching pursuit (LPMP). We also consider a modification of the algorithm that restricts the allowed positions of the Lorentzian peaks' centers. Our results show that the LPMP algorithm outperforms other CS algorithms when applied to exponentially decaying signals. PMID:25609044
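
    The iteration described above — fit one Lorentzian peak per pass and subtract it from the residual — follows the generic matching-pursuit template sketched here (the dictionary grids and normalization are illustrative assumptions, not the authors' code):

        import numpy as np

        def lorentzian(f, center, width):
            # Unit-height Lorentzian line shape on the frequency grid f.
            return width ** 2 / ((f - center) ** 2 + width ** 2)

        def lorentzian_pursuit(signal, f, centers, widths, n_peaks):
            # Greedy pursuit: pick the atom best correlated with the
            # residual, project it out, repeat.
            residual = signal.astype(float).copy()
            atoms = [(c, w, lorentzian(f, c, w)) for c in centers for w in widths]
            peaks = []
            for _ in range(n_peaks):
                c, w, atom = max(
                    atoms,
                    key=lambda a: abs(residual @ a[2]) / np.linalg.norm(a[2]))
                amp = (residual @ atom) / (atom @ atom)  # least-squares amplitude
                residual -= amp * atom
                peaks.append((c, w, amp))
            return peaks, residual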

  10. Aggressive multiple sclerosis: proposed definition and treatment algorithm.

    PubMed

    Rush, Carolina A; MacLean, Heather J; Freedman, Mark S

    2015-07-01

    Multiple sclerosis (MS) is a CNS disorder characterized by inflammation, demyelination and neurodegeneration, and is the most common cause of acquired nontraumatic neurological disability in young adults. The course of the disease varies between individuals: some patients accumulate minimal disability over their lives, whereas others experience a rapidly disabling disease course. This latter subset of patients, whose MS is marked by the rampant progression of disability over a short time period, is often referred to as having 'aggressive' MS. Treatment of patients with aggressive MS is challenging, and optimal strategies have yet to be defined. It is important to identify patients who are at risk of aggressive MS as early as possible and implement an effective treatment strategy. Early intervention might protect patients from irreversible damage and disability, and prevent the development of a secondary progressive course, which thus far lacks effective therapy. PMID:26032396

  11. Memetic algorithm for community detection in networks.

    PubMed

    Gong, Maoguo; Fu, Bao; Jiao, Licheng; Du, Haifeng

    2011-11-01

    Community structure is one of the most important properties in networks, and community detection has received an enormous amount of attention in recent years. Modularity is by far the most used and best known quality function for measuring the quality of a partition of a network, and many community detection algorithms are developed to optimize it. However, there is a resolution limit problem in modularity optimization methods. In this study, a memetic algorithm, named Meme-Net, is proposed to optimize another quality function, modularity density, which includes a tunable parameter that allows one to explore the network at different resolutions. Our proposed algorithm is a synergy of a genetic algorithm with a hill-climbing strategy as the local search procedure. Experiments on computer-generated and real-world networks show the effectiveness and the multiresolution ability of the proposed method. PMID:22181467

  12. Research on Palmprint Identification Method Based on Quantum Algorithms

    PubMed Central

    Zhang, Zhanzhan

    2014-01-01

    Quantum image recognition is a technology that uses quantum algorithms to process image information; it can obtain better results than classical algorithms. In this paper, four different quantum algorithms are used in the three stages of palmprint recognition. First, a quantum adaptive median filtering algorithm is presented for palmprint filtering; comparison shows that it achieves a better filtering result than the classical algorithm. Next, the quantum Fourier transform (QFT) is used to extract pattern features in only one operation, owing to quantum parallelism. The proposed algorithm exhibits an exponential speed-up compared with the discrete Fourier transform in the feature extraction. Finally, quantum set operations and the Grover algorithm are used in palmprint matching. According to the experimental results, the quantum algorithm needs only on the order of the square root of N operations to find the target palmprint, whereas the traditional method needs N calculations. At the same time, the matching accuracy of the quantum algorithm is almost 100%. PMID:25105165
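
    The square-root-of-N figure quoted above is the standard Grover bound; the snippet below gives the usual estimate of the optimal number of Grover iterations for a database of N items (a back-of-the-envelope helper, not the paper's implementation):

        import math

        def grover_iterations(n_items, n_targets=1):
            # Optimal iteration count is about (pi/4) * sqrt(N/M)
            # for M marked items among N.
            theta = math.asin(math.sqrt(n_targets / n_items))
            return max(1, round(math.pi / (4 * theta) - 0.5))

        # Example: one target among 10**6 items needs ~785 Grover
        # iterations, versus ~500000 expected classical lookups.
        print(grover_iterations(10**6))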

  13. Performance Comparison Of Evolutionary Algorithms For Image Clustering

    NASA Astrophysics Data System (ADS)

    Civicioglu, P.; Atasever, U. H.; Ozkan, C.; Besdok, E.; Karkinli, A. E.; Kesikoglu, A.

    2014-09-01

    Evolutionary computation tools are able to process real-valued numerical sets in order to extract suboptimal solutions to a designed problem. Data clustering algorithms have been intensively used for image segmentation in remote sensing applications. Despite the wide usage of evolutionary algorithms for data clustering, their clustering performance has been scarcely studied using clustering validation indexes. In this paper, recently proposed evolutionary algorithms (i.e., the Artificial Bee Colony Algorithm (ABC), Gravitational Search Algorithm (GSA), Cuckoo Search Algorithm (CS), Adaptive Differential Evolution Algorithm (JADE), Differential Search Algorithm (DSA) and Backtracking Search Optimization Algorithm (BSA)) and some classical image clustering techniques (i.e., k-means, FCM, SOM networks) have been used to cluster images, and their performance has been compared using four clustering validation indexes. Experimental results showed that evolutionary algorithms give more reliable cluster centers than classical clustering techniques, but their convergence time is quite long.

  14. Dual-Layer Video Encryption using RSA Algorithm

    NASA Astrophysics Data System (ADS)

    Chadha, Aman; Mallik, Sushmit; Chadha, Ankit; Johar, Ravdeep; Mani Roja, M.

    2015-04-01

    This paper proposes a video encryption algorithm using RSA and Pseudo Noise (PN) sequences, aimed at applications requiring sensitive video information transfers. The system is primarily designed to work with files encoded using the Audio Video Interleaved (AVI) codec, although it can be easily ported for use with Moving Picture Experts Group (MPEG) encoded files. The audio and video components of the source separately undergo two layers of encryption to ensure a reasonable level of security. Encryption of the video component involves applying the RSA algorithm followed by PN-based encryption. Similarly, the audio component is first encrypted using PN and further subjected to encryption using the Discrete Cosine Transform. Combining these techniques yields an efficient system, resistant to security breaches and attacks, with favorable values of parameters such as encryption/decryption speed, encryption/decryption ratio and visual degradation. For applications requiring encryption of sensitive data wherein stringent security requirements are of prime concern, the system is found to yield negligible similarity in visual perception between the original and the encrypted video sequence. For applications wherein visual similarity is not of major concern, we limit the encryption task to a single level of encryption, accomplished using RSA, thereby quickening the encryption process. Although some similarity between the original and encrypted video is observed in this case, it is not enough to comprehend the happenings in the video.
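
    As a reminder of the RSA primitive behind the first encryption layer, here is a toy-sized sketch (the tiny primes are for demonstration only; a real system would use 2048-bit keys with proper padding, and this is not the paper's implementation):

        # Toy RSA key generation and single-block encryption.
        p, q = 61, 53                  # demonstration-sized primes
        n = p * q                      # public modulus
        phi = (p - 1) * (q - 1)
        e = 17                         # public exponent, coprime to phi
        d = pow(e, -1, phi)            # private exponent (modular inverse)

        def rsa_encrypt(m):
            return pow(m, e, n)        # c = m^e mod n

        def rsa_decrypt(c):
            return pow(c, d, n)        # m = c^d mod n

        block = 42                     # plaintext block, 0 <= block < n
        assert rsa_decrypt(rsa_encrypt(block)) == block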

  15. Efficient hardware implementation of the lightweight block encryption algorithm LEA.

    PubMed

    Lee, Donggeon; Kim, Dong-Chan; Kwon, Daesung; Kim, Howon

    2014-01-01

    Recently, due to the advent of resource-constrained trends, such as smartphones and smart devices, the computing environment is changing. Because our daily life is deeply intertwined with ubiquitous networks, the importance of security is growing. A lightweight encryption algorithm is essential for secure communication between these kinds of resource-constrained devices, and many researchers have been investigating this field. Recently, a lightweight block cipher called LEA was proposed. LEA was originally targeted at efficient implementation on microprocessors, as it is fast when implemented in software and, furthermore, has a small memory footprint. To reflect recent technology, all required calculations utilize 32-bit wide operations. In addition, the algorithm is composed not of complex S-box-like structures but of simple Addition, Rotation, and XOR (ARX) operations. To the best of our knowledge, this paper is the first report on a comprehensive hardware implementation of LEA. We present various hardware structures and their implementation results according to key sizes. Even though LEA was originally targeted at software efficiency, it also shows high efficiency when implemented as hardware. PMID:24406859
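
    To illustrate what the Addition-Rotation-XOR (ARX) style looks like in 32-bit software, here is a generic round sketch (the rotation amount, word layout, and key usage below are placeholders, not LEA's actual round function or key schedule):

        MASK32 = 0xFFFFFFFF

        def rol32(x, r):
            # 32-bit left rotation.
            return ((x << r) | (x >> (32 - r))) & MASK32

        def arx_step(x0, x1, k0, k1, r=9):
            # One generic ARX mixing step: XOR in round-key words,
            # add modulo 2**32, then rotate. LEA chains several such
            # steps per round with different rotation amounts.
            return rol32(((x0 ^ k0) + (x1 ^ k1)) & MASK32, r)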

  17. Expectation-maximization algorithms for learning a finite mixture of univariate survival time distributions from partially specified class values

    SciTech Connect

    Lee, Youngrok

    2013-05-15

    Heterogeneity exists in a data set when samples from different classes are merged into it. Finite mixture models can represent a survival time distribution on a heterogeneous patient group through the proportions of each class and the survival time distribution within each class. The heterogeneous data set cannot be explicitly decomposed into homogeneous subgroups unless all the samples are precisely labeled by their origin classes; such impossibility of decomposition is a barrier to overcome when estimating finite mixture models. The expectation-maximization (EM) algorithm has been used to obtain maximum likelihood estimates of finite mixture models by soft decomposition of heterogeneous samples without labels for a subset or the entire set of data. In medical surveillance databases we can find partially labeled data, that is, data that are not completely unlabeled but carry only imprecise information about class values. In this study we propose new EM algorithms that take advantage of such partial labels and thus incorporate more information than traditional EM algorithms. We propose four variants of the EM algorithm, named EM-OCML, EM-PCML, EM-HCML and EM-CPCML, each of which assumes a specific mechanism of missing class values. We conducted a simulation study on exponential survival trees with five classes and showed that the advantages of incorporating a substantial amount of partially labeled data can be highly significant. We also showed that model selection based on AIC values works fairly well for selecting the best proposed algorithm on each specific data set. A case study on a real-world gastric cancer data set provided by the Surveillance, Epidemiology and End Results (SEER) program showed the superiority of EM-CPCML not only to the other proposed EM algorithms but also to conventional supervised, unsupervised and semi-supervised learning algorithms.
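
    The central idea — known class values pin their responsibilities down while unlabeled samples are soft-assigned — can be sketched for a two-component exponential survival mixture as follows (a minimal stand-in for the EM-OCML/EM-PCML family; the hard-label treatment and initialization are simplifying assumptions, not the authors' code):

        import numpy as np

        def em_exp_mixture(t, labels, n_iter=200):
            # t: survival times; labels: 0/1 class value, or -1 if unlabeled.
            rng = np.random.default_rng(0)
            pi, lam = 0.5, rng.uniform(0.5, 2.0, size=2)  # weight, rates
            for _ in range(n_iter):
                # E-step: posterior responsibility of component 1.
                p1 = pi * lam[1] * np.exp(-lam[1] * t)
                p0 = (1 - pi) * lam[0] * np.exp(-lam[0] * t)
                r = p1 / (p0 + p1)
                r[labels == 0] = 0.0   # known labels override the posterior
                r[labels == 1] = 1.0
                # M-step: closed-form updates for weight and rates.
                pi = r.mean()
                lam[1] = r.sum() / (r * t).sum()
                lam[0] = (1 - r).sum() / ((1 - r) * t).sum()
            return pi, lam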

  18. A new wavelet-based reconstruction algorithm for twin image removal in digital in-line holography

    NASA Astrophysics Data System (ADS)

    Hattay, Jamel; Belaid, Samir; Aguili, Taoufik; Lebrun, Denis

    2016-07-01

    Two original methods are proposed here for digital in-line hologram processing. First, we propose an entropy-based method to retrieve the focus plane, which is very useful for digital hologram reconstruction. Second, we introduce a new approach to remove the so-called twin image that arises in hologram reconstruction. This is achieved by means of the Blind Source Separation (BSS) technique. The proposed method is made up of two steps: an Adaptive Quincunx Lifting Scheme (AQLS) and a statistical unmixing algorithm. The AQLS tool is based on the wavelet packet transform, whose role is to maximize the sparseness of the input holograms. The unmixing algorithm uses the Independent Component Analysis (ICA) tool. Experimental results confirm the ability of convolutive blind source separation to discard the unwanted twin image from in-line digital holograms.

  19. Hybrid algorithm for NARX network parameters' determination using differential evolution and genetic algorithm

    NASA Astrophysics Data System (ADS)

    Salami, M. J. E.; Tijani, I. B.; Abdullateef, A. I.; Aibinu, M. A.

    2013-12-01

    A hybrid optimization algorithm using Differential Evolution (DE) and a Genetic Algorithm (GA) is proposed in this study to address the problem of network parameter determination associated with the Nonlinear Autoregressive with eXogenous inputs (NARX) network. The proposed algorithm involves a two-level optimization scheme that searches for both the optimal network architecture and the weights. The DE at the upper level is formulated as combinatorial optimization to search for the network architecture, while the associated network weights that minimize the prediction error are provided by the GA at the lower level. The performance of the algorithm is evaluated on the identification of a laboratory rotary motion system. The system identification results show the effectiveness of the proposed algorithm for nonparametric model development.

  20. A novel algorithm combining finite state method and genetic algorithm for solving crude oil scheduling problem.

    PubMed

    Duan, Qian-Qian; Yang, Gen-Ke; Pan, Chang-Chun

    2014-01-01

    A hybrid optimization algorithm combining the finite state method (FSM) and a genetic algorithm (GA) is proposed to solve the crude oil scheduling problem. The FSM and GA are combined to take advantage of each method and compensate for the deficiencies of the individual methods. In the proposed algorithm, the finite state method makes up for the weakness of the GA, whose local search ability is poor. The heuristic returned by the FSM can guide the GA towards good solutions; the idea behind this is that promising substructures or partial solutions can be generated using the FSM. Furthermore, the FSM can guarantee that the entire solution space is uniformly covered. Therefore, the combination of the two algorithms has better global performance than either GA or FSM operating individually. Finally, a real-life crude oil scheduling problem from the literature is used for simulation. The experimental results validate that the proposed method outperforms the state-of-the-art GA method. PMID:24772031

  1. An arc-sequencing algorithm for intensity modulated arc therapy

    SciTech Connect

    Shepard, D. M.; Cao, D.; Afghan, M. K. N.; Earl, M. A.

    2007-02-15

    Intensity modulated arc therapy (IMAT) is an intensity modulated radiation therapy delivery technique originally proposed as an alternative to tomotherapy. IMAT uses a series of overlapping arcs to deliver optimized intensity patterns from each beam direction. The full potential of IMAT has gone largely unrealized due in part to a lack of robust and commercially available inverse planning tools. To address this, we have implemented an IMAT arc-sequencing algorithm that translates optimized intensity maps into deliverable IMAT plans. The sequencing algorithm uses simulated annealing to simultaneously optimize the aperture shapes and weights throughout each arc. The sequencer enforces the delivery constraints while minimizing the discrepancies between the optimized and sequenced intensity maps. The performance of the algorithm has been tested for ten patient cases (3 prostate, 3 brain, 2 head-and-neck, 1 lung, and 1 pancreas). Seven coplanar IMAT plans were created using an average of 4.6 arcs and 685 monitor units. Additionally, three noncoplanar plans were created using an average of 16 arcs and 498 monitor units. The results demonstrate that the arc sequencer can provide efficient and highly conformal IMAT plans. An average sequencing time of approximately 20 min was observed.
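
    The aperture-shape and weight search described above follows the generic simulated-annealing template sketched below (the objective, move generator, and cooling schedule are placeholders; the delivery constraints enforced by the actual sequencer are far richer):

        import math, random

        def simulated_annealing(state, energy, neighbor,
                                t0=1.0, cooling=0.995, steps=10000):
            # Generic SA loop: always accept improvements, accept uphill
            # moves with Boltzmann probability exp(-dE / T).
            cur, e_cur = state, energy(state)
            best, e_best = cur, e_cur
            t = t0
            for _ in range(steps):
                cand = neighbor(cur)
                de = energy(cand) - e_cur
                if de <= 0 or random.random() < math.exp(-de / t):
                    cur, e_cur = cand, e_cur + de
                    if e_cur < e_best:
                        best, e_best = cur, e_cur
                t *= cooling   # geometric cooling schedule
            return best, e_best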

  2. Adaptive color image watermarking algorithm

    NASA Astrophysics Data System (ADS)

    Feng, Gui; Lin, Qiwei

    2008-03-01

    As a major method for intellectual property protection, digital watermarking techniques have been widely studied and used. However, due to the problems of data volume and color shift, watermarking techniques for color images have been less widely studied, even though color images are the principal part of multimedia usage. Considering the characteristics of the Human Visual System (HVS), an adaptive color image watermarking algorithm is proposed in this paper. In this algorithm, the HSI color model is adopted for both the host and watermark images; the DCT coefficients of the intensity component (I) of the host color image are used for watermark data embedding, and while embedding the watermark, the number of embedded bits is adaptively changed with the complexity of the host image. As to the watermark image, preprocessing is applied first, in which the watermark image is decomposed by a two-layer wavelet transform. At the same time, to enhance the anti-attack ability and security of the watermarking algorithm, the watermark image is scrambled. According to their significance, some watermark bits are selected and some are deleted to form the actual embedding data. The experimental results show that the proposed watermarking algorithm is robust to several common attacks and has good perceptual quality at the same time.

  3. Fourier Lucas-Kanade algorithm.

    PubMed

    Lucey, Simon; Navarathna, Rajitha; Ashraf, Ahmed Bilal; Sridharan, Sridha

    2013-06-01

    In this paper, we propose a framework for both gradient descent image and object alignment in the Fourier domain. Our method centers upon the classical Lucas & Kanade (LK) algorithm where we represent the source and template/model in the complex 2D Fourier domain rather than in the spatial 2D domain. We refer to our approach as the Fourier LK (FLK) algorithm. The FLK formulation is advantageous when one preprocesses the source image and template/model with a bank of filters (e.g., oriented edges, Gabor, etc.) as 1) it can handle substantial illumination variations, 2) the inefficient preprocessing filter bank step can be subsumed within the FLK algorithm as a sparse diagonal weighting matrix, 3) unlike traditional LK, the computational cost is invariant to the number of filters and as a result is far more efficient, and 4) this approach can be extended to the Inverse Compositional (IC) form of the LK algorithm where nearly all steps (including Fourier transform and filter bank preprocessing) can be precomputed, leading to an extremely efficient and robust approach to gradient descent image matching. Further, these computational savings translate to nonrigid object alignment tasks that are considered extensions of the LK algorithm, such as those found in Active Appearance Models (AAMs). PMID:23599053

  4. Birkhoffian symplectic algorithms derived from Hamiltonian symplectic algorithms

    NASA Astrophysics Data System (ADS)

    Xin-Lei, Kong; Hui-Bin, Wu; Feng-Xiang, Mei

    2016-01-01

    In this paper, we focus on the construction of structure-preserving algorithms for Birkhoffian systems, based on existing symplectic schemes for the Hamiltonian equations. The key of the method is to seek an invertible transformation that reduces the Birkhoffian equations to the Hamiltonian equations. When such a transformation exists, applying the corresponding inverse map to a symplectic discretization of the Hamiltonian equations yields difference schemes that are verified to be Birkhoffian symplectic for the original Birkhoffian equations. To illustrate the operation of the method, we construct several desirable algorithms for the linear damped oscillator and the single pendulum with linear dissipation, respectively. All of them exhibit excellent numerical behavior, especially in preserving conserved quantities. Project supported by the National Natural Science Foundation of China (Grant No. 11272050), the Excellent Young Teachers Program of North China University of Technology (Grant No. XN132), and the Construction Plan for Innovative Research Team of North China University of Technology (Grant No. XN129).

  5. Efficient irregular wavefront propagation algorithms on Intel® Xeon Phi™

    PubMed Central

    Gomes, Jeremias M.; Teodoro, George; de Melo, Alba; Kong, Jun; Kurc, Tahsin; Saltz, Joel H.

    2016-01-01

    We investigate the execution of the Irregular Wavefront Propagation Pattern (IWPP), a fundamental computing structure used in several image analysis operations, on the Intel® Xeon Phi™ co-processor. An efficient implementation of IWPP on the Xeon Phi is a challenging problem because of IWPP’s irregularity and the use of atomic instructions in the original IWPP algorithm to resolve race conditions. On the Xeon Phi, the use of SIMD and vectorization instructions is critical to attain high performance. However, SIMD atomic instructions are not supported. Therefore, we propose a new IWPP algorithm that can take advantage of the supported SIMD instruction set. We also evaluate an alternate storage container (priority queue) to track active elements in the wavefront in an effort to improve the parallel algorithm efficiency. The new IWPP algorithm is evaluated with Morphological Reconstruction and Imfill operations as use cases. Our results show performance improvements of up to 5.63× on top of the original IWPP due to vectorization. Moreover, the new IWPP achieves speedups of 45.7× and 1.62×, respectively, as compared to efficient CPU and GPU implementations. PMID:27298591

  6. Stabilizing the Richardson eigenvector algorithm by controlling chaos

    SciTech Connect

    He, S.

    1997-03-01

    By viewing the operations of the Richardson purification algorithm as a discrete time dynamical process, we propose a method to overcome the instability of this eigenvector algorithm by controlling chaos. We present theoretical analysis and numerical results on the behavior and performance of the stabilized algorithm.

  7. Newton Algorithms for Analytic Rotation: An Implicit Function Approach

    ERIC Educational Resources Information Center

    Boik, Robert J.

    2008-01-01

    In this paper implicit function-based parameterizations for orthogonal and oblique rotation matrices are proposed. The parameterizations are used to construct Newton algorithms for minimizing differentiable rotation criteria applied to "m" factors and "p" variables. The speed of the new algorithms is compared to that of existing algorithms and to…

  8. Calculation of shock-wave parameters far from origination by combined numerical-analytical methods

    NASA Astrophysics Data System (ADS)

    Potapkin, A. V.; Moskvichev, D. Yu.

    2011-03-01

    An algorithm is proposed for calculating the parameters of weak shock waves at large distances from their origination. In chosen meridional planes, the parameters of the near field of the three-dimensional flow are used to determine the streamwise coordinates of "phantom bodies" by linear relations. When the initial body is replaced by a system of "phantom bodies" for which discrete values of the Whitham function are found, the far-field parameters are calculated by the Whitham theory, independently in each meridional plane. Results calculated for a body with axial symmetry and for bodies with spatial symmetry are presented.

  9. Birefringent filter design by use of a modified genetic algorithm.

    PubMed

    Wen, Mengtao; Yao, Jianping

    2006-06-10

    A modified genetic algorithm is proposed for the optimization of fiber birefringent filters. The orientation angles and the element lengths are determined by the genetic algorithm to minimize the sidelobe levels of the filters. Unlike the standard genetic algorithm, the proposed algorithm reduces the problem space of the birefringent filter design to achieve faster speed and better performance. The design of 4-, 8-, and 14-section birefringent filters with an improved sidelobe suppression ratio is realized, and a 4-section birefringent filter designed with the algorithm is experimentally demonstrated. PMID:16761031

  10. A Unified Differential Evolution Algorithm for Global Optimization

    SciTech Connect

    Qiang, Ji; Mitchell, Chad

    2014-06-24

    In this paper, we propose a new unified differential evolution (uDE) algorithm for single-objective global optimization. Instead of selecting among multiple mutation strategies as in the conventional differential evolution algorithm, this algorithm employs a single equation as the mutation strategy. It has the virtue of mathematical simplicity and also provides users the flexibility for broader exploration of different mutation strategies. Numerical tests using twelve basic unimodal and multimodal functions show promising performance of the proposed algorithm in comparison to conventional differential evolution algorithms.

  11. Updated treatment algorithm of pulmonary arterial hypertension.

    PubMed

    Galiè, Nazzareno; Corris, Paul A; Frost, Adaani; Girgis, Reda E; Granton, John; Jing, Zhi Cheng; Klepetko, Walter; McGoon, Michael D; McLaughlin, Vallerie V; Preston, Ioana R; Rubin, Lewis J; Sandoval, Julio; Seeger, Werner; Keogh, Anne

    2013-12-24

    The demands on a pulmonary arterial hypertension (PAH) treatment algorithm are multiple and in some ways conflicting. The treatment algorithm usually includes different types of recommendations with varying degrees of scientific evidence. In addition, the algorithm is required to be comprehensive but not too complex, informative yet simple and straightforward. The types of information in the treatment algorithm are heterogeneous, including clinical, hemodynamic, medical, interventional, pharmacological and regulatory recommendations. Stakeholders (or users), including physicians from various specialties and with variable expertise in PAH, nurses, patients and patients' associations, healthcare providers, regulatory agencies and industry, are often interested in the PAH treatment algorithm for different reasons. These are the considerable challenges faced when proposing appropriate updates to the current evidence-based treatment algorithm. The current treatment algorithm may be divided into 3 main areas: 1) general measures, supportive therapy, referral strategy, acute vasoreactivity testing and chronic treatment with calcium channel blockers; 2) initial therapy with approved PAH drugs; and 3) clinical response to the initial therapy, combination therapy, balloon atrial septostomy, and lung transplantation. All three sections will be revisited, highlighting information newly available in the past 5 years and proposing updates where appropriate. The European Society of Cardiology grades of recommendation and levels of evidence will be adopted to rank the proposed treatments. PMID:24355643

  12. LCD motion blur: modeling, analysis, and algorithm.

    PubMed

    Chan, Stanley H; Nguyen, Truong Q

    2011-08-01

    Liquid crystal display (LCD) devices are well known for their slow responses due to the physical limitations of liquid crystals. Therefore, fast moving objects in a scene are often perceived as blurred. This effect is known as LCD motion blur. In order to reduce LCD motion blur, an accurate LCD model and an efficient deblurring algorithm are needed. However, existing LCD motion blur models are insufficient to reflect the limitations of the human-eye-tracking system. Also, the spatiotemporal equivalence in LCD motion blur models has not been proven directly in the discrete 2-D spatial domain, although it is widely used. There are three main contributions of this paper: modeling, analysis, and algorithm. First, a comprehensive LCD motion blur model is presented, in which human-eye-tracking limits are taken into consideration. Second, a complete analysis of spatiotemporal equivalence is provided and verified using real video sequences. Third, an LCD motion blur reduction algorithm is proposed. The proposed algorithm solves an ℓ1-norm regularized least-squares minimization problem using a subgradient projection method. Numerical results show that the proposed algorithm gives higher peak SNR, lower temporal error, and lower spatial error than motion-compensated inverse filtering and the Lucy-Richardson deconvolution algorithm, which are two state-of-the-art LCD deblurring algorithms. PMID:21292596
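
    The record names an ℓ1-norm regularized least-squares problem solved by subgradient projection; the sketch below shows the plain subgradient core for that objective (fixed step size, no projection step, and no convergence test — simplifications relative to the paper's method):

        import numpy as np

        def subgradient_l1_ls(A, b, lam=0.1, step=1e-3, iters=5000):
            # Minimize ||A x - b||^2 + lam * ||x||_1 by subgradient descent.
            x = np.zeros(A.shape[1])
            for _ in range(iters):
                g = 2 * A.T @ (A @ x - b) + lam * np.sign(x)  # subgradient
                x -= step * g
            return x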

  13. Two New PRP Conjugate Gradient Algorithms for Minimization Optimization Models

    PubMed Central

    Yuan, Gonglin; Duan, Xiabin; Liu, Wenjie; Wang, Xiaoliang; Cui, Zengru; Sheng, Zhou

    2015-01-01

    Two new PRP conjugate gradient algorithms are proposed in this paper based on two modified PRP conjugate gradient methods: the first algorithm is proposed for solving unconstrained optimization problems, and the second algorithm is proposed for solving nonlinear equations. The first method contains two aspects of information: function values and gradient values. Both methods possess some good properties: (1) βk ≥ 0; (2) the search direction has the trust region property without the use of any line search method; (3) the search direction has the sufficient descent property without the use of any line search method. Under some suitable conditions, we establish the global convergence of the two algorithms. We conduct numerical experiments to evaluate our algorithms. The numerical results indicate that the first algorithm is effective and competitive for solving unconstrained optimization problems and that the second algorithm is effective for solving large-scale nonlinear equations. PMID:26502409
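
    For reference, the classical PRP update that both modified methods build on computes the conjugacy parameter from successive gradients; a minimal sketch follows (the paper's specific modifications guaranteeing βk ≥ 0 are not reproduced here, so the clipping shown is the simple PRP+ variant):

        import numpy as np

        def beta_prp(g_new, g_old):
            # Polak-Ribiere-Polyak parameter:
            # beta_k = g_k^T (g_k - g_{k-1}) / ||g_{k-1}||^2
            return g_new @ (g_new - g_old) / (g_old @ g_old)

        def prp_direction(g_new, g_old, d_old):
            # PRP+ clips beta at zero, one simple way to enforce
            # the beta_k >= 0 property mentioned above.
            beta = max(0.0, beta_prp(g_new, g_old))
            return -g_new + beta * d_old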

  14. Performance analysis of cone detection algorithms.

    PubMed

    Mariotti, Letizia; Devaney, Nicholas

    2015-04-01

    Many algorithms have been proposed to help clinicians evaluate cone density and spacing, as these may be related to the onset of retinal diseases. However, there has been no rigorous comparison of the performance of these algorithms. In addition, the performance of such algorithms is typically determined by comparison with human observers. Here we propose a technique to simulate realistic images of the cone mosaic. We use the simulated images to test the performance of three popular cone detection algorithms, and we introduce an algorithm which is used by astronomers to detect stars in astronomical images. We use Free Response Operating Characteristic (FROC) curves to evaluate and compare the performance of the four algorithms. This allows us to optimize the performance of each algorithm. We observe that performance is significantly enhanced by up-sampling the images. We investigate the effect of noise and image quality on cone mosaic parameters estimated using the different algorithms, finding that the estimated regularity is the most sensitive parameter. PMID:26366758

  15. An Innovative Thinking-Based Intelligent Information Fusion Algorithm

    PubMed Central

    Hu, Liang; Liu, Gang; Zhou, Jin

    2013-01-01

    This study proposes an intelligent algorithm that realizes information fusion with reference to research achievements in brain cognitive theory and innovative computation. This algorithm treats knowledge as the core and information fusion as a knowledge-based innovative thinking process. The five key parts of this algorithm, including information sense and perception, memory storage, divergent thinking, convergent thinking, and an evaluation system, are simulated and modeled. This algorithm fully develops the innovative thinking role of knowledge in information fusion and is an attempt to convert the abstract concepts of brain cognitive science into specific and operable research routes and strategies. Furthermore, the influence of each parameter of this algorithm on algorithm performance is analyzed and compared with that of classical intelligent algorithms through tests. Test results suggest that the proposed algorithm can obtain the optimal problem solution with fewer target evaluations, improve optimization effectiveness, and achieve effective fusion of information. PMID:23956699

  16. Multidisciplinary Multiobjective Optimal Design for Turbomachinery Using Evolutionary Algorithm

    NASA Technical Reports Server (NTRS)

    2005-01-01

    This report summarizes Dr. Lian's efforts toward developing a robust and efficient tool for multidisciplinary and multi-objective optimal design for turbomachinery using evolutionary algorithms. This work consisted of two stages. In the first stage (from July 2003 to June 2004), Dr. Lian focused on building the essential capabilities required for the project. More specifically, Dr. Lian worked on two subjects: an enhanced genetic algorithm (GA) and an integrated optimization system with a GA and a surrogate model. In the second stage (from July 2004 to February 2005), Dr. Lian formulated aerodynamic optimization and structural optimization as a multi-objective optimization problem and performed multidisciplinary and multi-objective optimizations on a transonic compressor blade based on the proposed model. Dr. Lian's numerical results showed that the proposed approach can effectively reduce the blade weight and increase the stage pressure ratio in an efficient manner. In addition, the new design was structurally safer than the original design. Five conference papers and three journal papers were published on this topic by Dr. Lian.

  17. Origins of GEMS Grains

    NASA Technical Reports Server (NTRS)

    Messenger, S.; Walker, R. M.

    2012-01-01

    Interplanetary dust particles (IDPs) collected in the Earth's stratosphere contain high abundances of submicrometer amorphous silicates known as GEMS grains. From their birth as condensates in the outflows of oxygen-rich evolved stars, processing in interstellar space, and incorporation into disks around new stars, amorphous silicates predominate in most astrophysical environments. Amorphous silicates were a major building block of our Solar System and are prominent in infrared spectra of comets. Anhydrous interplanetary dust particles (IDPs) thought to derive from comets contain abundant amorphous silicates known as GEMS (glass with embedded metal and sulfides) grains. GEMS grains have been proposed to be isotopically and chemically homogenized interstellar amorphous silicate dust. We evaluated this hypothesis through coordinated chemical and isotopic analyses of GEMS grains in a suite of IDPs to constrain their origins. GEMS grains show order of magnitude variations in Mg, Fe, Ca, and S abundances. GEMS grains do not match the average element abundances inferred for ISM dust containing on average, too little Mg, Fe, and Ca, and too much S. GEMS grains have complementary compositions to the crystalline components in IDPs suggesting that they formed from the same reservoir. We did not observe any unequivocal microstructural or chemical evidence that GEMS grains experienced prolonged exposure to radiation. We identified four GEMS grains having O isotopic compositions that point to origins in red giant branch or asymptotic giant branch stars and supernovae. Based on their O isotopic compositions, we estimate that 1-6% of GEMS grains are surviving circumstellar grains. The remaining 94-99% of GEMS grains have O isotopic compositions that are indistinguishable from terrestrial materials and carbonaceous chondrites. These isotopically solar GEMS grains either formed in the Solar System or were completely homogenized in the interstellar medium (ISM). However, the

  18. Applications and accuracy of the parallel diagonal dominant algorithm

    NASA Technical Reports Server (NTRS)

    Sun, Xian-He

    1993-01-01

    The Parallel Diagonal Dominant (PDD) algorithm is a highly efficient, ideally scalable tridiagonal solver. In this paper, a detailed study of the PDD algorithm is given. First the PDD algorithm is introduced. Then the algorithm is extended to solve periodic tridiagonal systems. A variant, the reduced PDD algorithm, is also proposed. Accuracy analysis is provided for a class of tridiagonal systems, the symmetric and anti-symmetric Toeplitz tridiagonal systems. Implementation results show that the analysis gives a good bound on the relative error, and the algorithm is a good candidate for the emerging massively parallel machines.
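
    For context, the sequential baseline that PDD partitions and parallelizes is the classic Thomas algorithm; a compact sketch of that O(n) solver follows (standard textbook form, not the PDD decomposition itself):

        import numpy as np

        def thomas(a, b, c, d):
            # Tridiagonal solve: a = sub-, b = main, c = super-diagonal,
            # d = right-hand side (a[0] and c[-1] are unused).
            n = len(b)
            cp, dp = np.empty(n), np.empty(n)
            cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
            for i in range(1, n):                    # forward elimination
                m = b[i] - a[i] * cp[i - 1]
                cp[i] = c[i] / m if i < n - 1 else 0.0
                dp[i] = (d[i] - a[i] * dp[i - 1]) / m
            x = np.empty(n)
            x[-1] = dp[-1]
            for i in range(n - 2, -1, -1):           # back substitution
                x[i] = dp[i] - cp[i] * x[i + 1]
            return x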

  19. A VLSI architecture for simplified arithmetic Fourier transform algorithm

    NASA Technical Reports Server (NTRS)

    Reed, Irving S.; Shih, Ming-Tang; Truong, T. K.; Hendon, E.; Tufts, D. W.

    1992-01-01

    The arithmetic Fourier transform (AFT) is a number-theoretic approach to Fourier analysis which has been shown to perform competitively with the classical FFT in terms of accuracy, complexity, and speed. Theorems developed in a previous paper for the AFT algorithm are used here to derive the original AFT algorithm which Bruns found in 1903. This is shown to yield an algorithm of less complexity and of improved performance over certain recent AFT algorithms. A VLSI architecture is suggested for this simplified AFT algorithm. This architecture uses a butterfly structure which reduces the number of additions by 25 percent of that used in the direct method.

  20. Basis for spectral curvature algorithms in remote sensing of chlorophyll

    NASA Technical Reports Server (NTRS)

    Campbell, J. W.; Esaias, W. E.

    1983-01-01

    A simple, empirically derived algorithm for estimating oceanic chlorophyll concentrations from spectral radiances measured by a low-flying spectroradiometer has proved highly successful in field experiments in 1980-82. The sensor used was the Multichannel Ocean Color Sensor, and the originator of the algorithm was Grew (1981). This paper presents an explanation for the algorithm based on the optical properties of waters containing chlorophyll and other phytoplankton pigments and the radiative transfer equations governing the remotely sensed signal. The effects of varying solar zenith, atmospheric transmittance, and interfering substances in the water on the chlorophyll algorithm are characterized, and applicability of the algorithm is discussed.

  1. An improved edge detection algorithm for depth map inpainting

    NASA Astrophysics Data System (ADS)

    Chen, Weihai; Yue, Haosong; Wang, Jianhua; Wu, Xingming

    2014-04-01

    Three-dimensional (3D) measurement technology has been widely used in many scientific and engineering areas. The emergence of the Kinect sensor makes 3D measurement much easier. However, the depth map captured by the Kinect sensor has some invalid regions, especially at object boundaries, and these missing regions should be filled first. This paper proposes a depth-assisted edge detection algorithm and improves an existing depth map inpainting algorithm using the extracted edges. In the proposed algorithm, both the color image and the raw depth data are used to extract initial edges. Then the edges are optimized and utilized to assist depth map inpainting. Comparative experiments demonstrate that the proposed edge detection algorithm can extract object boundaries and inhibit non-boundary edges caused by textures on object surfaces. The proposed depth inpainting algorithm predicts missing depth values successfully and has better performance than the existing algorithm around object boundaries.

  2. Advances on image interpolation based on ant colony algorithm.

    PubMed

    Rukundo, Olivier; Cao, Hanqiang

    2016-01-01

    This paper presents an advance on image interpolation based on the ant colony algorithm (AACA) for high-resolution image scaling. The difference between the proposed algorithm and the previously proposed optimization of bilinear interpolation based on the ant colony algorithm (OBACA) is that AACA uses a global weighting scheme, whereas OBACA uses a local one. The strength of the proposed global weighting in AACA lies in employing solely the pheromone-matrix information of any group of four adjacent pixels to decide whether a case deserves the maximum global weight value. Experimental results are provided to show the higher performance of the proposed AACA algorithm compared with the algorithms mentioned in this paper. PMID:27047729

  3. Algorithm and program for information processing with the filin apparatus

    NASA Technical Reports Server (NTRS)

    Gurin, L. S.; Morkrov, V. S.; Moskalenko, Y. I.; Tsoy, K. A.

    1979-01-01

    The reduction of spectral radiation data from space sources is described. The algorithm and program for identifying segments of information obtained from the Filin telescope-spectrometer on Salyut-4 are presented. The information segments represent suspected X-ray sources. The proposed algorithm is an algorithm of the lowest level; following evaluation, information free of uninformative segments is subject to further processing with algorithms of a higher level. The language used is FORTRAN 4.

  4. A novel waveband routing algorithm in hierarchical WDM optical networks

    NASA Astrophysics Data System (ADS)

    Huang, Jun; Guo, Xiaojin; Qiu, Shaofeng; Luo, Jiangtao; Zhang, Zhizhong

    2007-11-01

    Hybrid waveband/wavelength switching in intelligent optical networks is gaining more and more academic attention, and it is very challenging to develop algorithms that efficiently use waveband switching capability. In this paper, we propose a novel cross-layer routing algorithm, the waveband layered graph routing algorithm (WBLGR), for waveband-switching-enabled optical networks. Extensive simulation shows that the WBLGR algorithm can significantly improve performance in terms of reduced call-blocking probability.

  5. Quantitative performance evaluation of a blurring restoration algorithm based on principal component analysis

    NASA Astrophysics Data System (ADS)

    Greco, Mario; Huebner, Claudia; Marchi, Gabriele

    2008-10-01

    In the field of blind image deconvolution, a promising new algorithm based on Principal Component Analysis (PCA) has recently been proposed in the literature. The main advantages of the algorithm are the following: its computational complexity is generally lower than that of other deconvolution techniques (e.g., the widely used Iterative Blind Deconvolution (IBD) method); it is robust to white noise; and only the support of the blurring point spread function is required to perform single-observation deconvolution (i.e., when a single degraded observation of a scene is available), while the multiple-observation case (i.e., when multiple degraded observations of a scene are available) is completely unsupervised. However, the effectiveness of the PCA-based restoration algorithm has so far been confirmed only by visual inspection and, to the best of our knowledge, no objective image quality assessment has been performed. In this paper a generalization of the original algorithm is proposed; the previously unexplored issue is then considered and the achieved results are compared with those of the IBD method, which is used as a benchmark.

  6. Despeckling algorithm on ultrasonic image using adaptive block-based singular value decomposition

    NASA Astrophysics Data System (ADS)

    Sae-Bae, Napa; Udomhunsakul, Somkait

    2008-03-01

    Speckle noise reduction is an important technique for enhancing the quality of ultrasonic images. In this paper, a despeckling algorithm based on adaptive block-based singular value decomposition filtering (BSVD) applied to ultrasonic images is presented. Instead of applying BSVD directly to the ultrasonic image, we propose to apply BSVD to the noisy edge image obtained from the difference between the logarithmic transformations of the original image and a blurred version of it. The recovered image is produced by combining the speckle-noise-free edge image with the blurred version. Finally, an exponential transformation is applied in order to obtain the reconstructed image. To evaluate our algorithm against well-known algorithms such as the Lee filter, the Kuan filter, the homomorphic Wiener filter, the median filter and wavelet soft thresholding, four image quality measurements are used: Mean Square Error (MSE), Signal to MSE (S/MSE), edge preservation (β), and correlation (ρ). The results clearly show that the proposed algorithm outperforms the other methods in terms of both quantitative and subjective assessments.

  7. Algorithm architecture co-design for ultra low-power image sensor

    NASA Astrophysics Data System (ADS)

    Laforest, T.; Dupret, A.; Verdant, A.; Lattard, D.; Villard, P.

    2012-03-01

    In the context of embedded video surveillance, stand-alone left-behind image sensors are used to detect events with a high level of confidence, but also with very low power consumption. With a steady camera, motion detection algorithms based on background estimation to find regions in movement are simple to implement and computationally efficient. To reduce power consumption, the background is estimated using a down-sampled image formed of macropixels. In order to extend the class of moving objects to be detected, we propose an original mixed-mode architecture developed through an algorithm-architecture co-design methodology. This programmable architecture is composed of a vector of SIMD processors. A basic RISC architecture was optimized in order to implement motion detection algorithms with a dedicated set of 42 instructions. The definition of delta modulation as a calculation primitive has allowed algorithms to be implemented in a very compact way. Thereby, a 1920x1080@25fps CMOS image sensor performing integrated motion detection is proposed, with an estimated power consumption of 1.8 mW.

  8. Reasoning about systolic algorithms

    SciTech Connect

    Purushothaman, S.

    1986-01-01

    Systolic algorithms are a class of parallel algorithms, with small-grain concurrency, well suited for implementation in VLSI. They are intended to be implemented as high-performance, computation-bound back-end processors and are characterized by a tessellating interconnection of identical processing elements. This dissertation investigates the problem of proving the correctness of systolic algorithms. The following are reported in this dissertation: (1) a methodology for verifying the correctness of systolic algorithms based on solving the representation of an algorithm as recurrence equations; the methodology is demonstrated by proving the correctness of a systolic architecture for optimal parenthesization; (2) the implementation of mechanical proofs of correctness of two systolic algorithms, a convolution algorithm and an optimal parenthesization algorithm, using the Boyer-Moore theorem prover; (3) an induction principle for proving the correctness of systolic arrays which are modular. Two attendant inference rules, weak equivalence and shift transformation, which capture equivalent behavior of systolic arrays, are also presented.

  9. Algorithm-development activities

    NASA Technical Reports Server (NTRS)

    Carder, Kendall L.

    1994-01-01

    Algorithm-development activities at USF continue. The algorithm for determining chlorophyll a concentration (Chl a) and the gelbstoff absorption coefficient from SeaWiFS and MODIS-N radiance data is our current priority.

  10. Validation of a dose warping algorithm using clinically realistic scenarios

    PubMed Central

    Dehghani, H; Green, S; Webster, G J

    2015-01-01

    Objective: Dose warping following deformable image registration (DIR) has been proposed for interfractional dose accumulation. Robust evaluation workflows are vital to clinically implement such procedures. This study demonstrates such a workflow and quantifies the accuracy of a commercial DIR algorithm for this purpose under clinically realistic scenarios. Methods: 12 head and neck (H&N) patient data sets were used for this retrospective study. For each case, four clinically relevant anatomical changes have been manually generated. Dose distributions were then calculated on each artificially deformed image and warped back to the original anatomy following DIR by a commercial algorithm. Spatial registration was evaluated by quantitative comparison of the original and warped structure sets, using conformity index and mean distance to conformity (MDC) metrics. Dosimetric evaluation was performed by quantitative comparison of the dose–volume histograms generated for the calculated and warped dose distributions, which should be identical for the ideal “perfect” registration of mass-conserving deformations. Results: Spatial registration of the artificially deformed image back to the planning CT was accurate (MDC range of 1–2 voxels or 1.2–2.4 mm). Dosimetric discrepancies introduced by the DIR were low (0.02 ± 0.03 Gy per fraction in clinically relevant dose metrics) with no statistically significant difference found (Wilcoxon test, 0.6 ≥ p ≥ 0.2). Conclusion: The reliability of CT-to-CT DIR-based dose warping and image registration was demonstrated for a commercial algorithm with H&N patient data. Advances in knowledge: This study demonstrates a workflow for validation of dose warping following DIR that could assist physicists and physicians in quantifying the uncertainties associated with dose accumulation in clinical scenarios. PMID:25791569

  11. Improvement of wavelet threshold filtered back-projection image reconstruction algorithm

    NASA Astrophysics Data System (ADS)

    Ren, Zhong; Liu, Guodong; Huang, Zhen

    2014-11-01

    Image reconstruction techniques have been applied in many fields, including medical imaging such as X-ray computed tomography (X-CT), positron emission tomography (PET) and nuclear magnetic resonance imaging (MRI), but the reconstruction results are still not satisfactory because the original projection data are inevitably polluted by noise during image reconstruction. Although traditional filters, e.g., the Shepp-Logan (SL) and Ram-Lak (RL) filters, can remove some of this noise, the Gibbs oscillation phenomenon is produced and the artifacts caused by back-projection are not greatly reduced. Wavelet threshold denoising can overcome the interference of noise in image reconstruction. Since the traditional soft and hard threshold functions have some inherent defects, an improved wavelet threshold function combined with the filtered back-projection (FBP) algorithm is proposed in this paper. Four different reconstruction algorithms were compared in simulated experiments. Experimental results demonstrate that the improved algorithm largely eliminates the discontinuity and large distortion of the traditional threshold functions, as well as the Gibbs oscillation. Finally, the usefulness of the improved algorithm was verified by comparing two evaluation criteria, mean square error (MSE) and peak signal-to-noise ratio (PSNR), among the four algorithms, and the optimal dual threshold values of the improved wavelet threshold function were obtained.
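
    The paper's dual-threshold function is its own; as a hedged illustration of the two defects it targets, the well-known non-negative garrote already has both desired properties relative to the soft and hard rules:

```python
import numpy as np

def soft_threshold(w, t):
    # Soft rule: continuous at |w| = t, but biases every large coefficient by t.
    return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)

def garrote_threshold(w, t):
    """Non-negative garrote: zero inside [-t, t] like hard thresholding,
    continuous at |w| = t like soft thresholding, and w - t^2/w -> w for
    large |w|, so the constant bias of the soft rule disappears."""
    w = np.asarray(w, float)
    out = np.zeros_like(w)
    big = np.abs(w) > t
    out[big] = w[big] - t**2 / w[big]
    return out
```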

  12. A dynamic material discrimination algorithm for dual MV energy X-ray digital radiography.

    PubMed

    Li, Liang; Li, Ruizhe; Zhang, Siyuan; Zhao, Tiao; Chen, Zhiqiang

    2016-08-01

    Dual-energy X-ray radiography has become a well-established technique in medical, industrial, and security applications because of its material or tissue discrimination capability. The main difficulty of this technique is the material-overlap problem: when there are two or more materials along the X-ray beam path, discrimination performance degrades. In order to solve this problem, a new dynamic material discrimination algorithm is proposed for dual-energy X-ray digital radiography, which can also be extended to multi-energy X-ray situations. The algorithm has three steps: α-curve-based pre-classification, decomposition of overlapped materials, and final material recognition. The key to the algorithm is establishing a dual-energy radiograph database of both pure basis materials and their pairwise combinations. Based on the pre-classification results, the original dual-energy projections of overlapped materials can be dynamically decomposed into two sets of dual-energy radiographs of each pure material. Thus, more accurate discrimination results can be provided even in the presence of overlap. Both numerical and experimental results that prove the validity and effectiveness of the algorithm are presented. PMID:27239987

  13. Parallel Implementations Of The Nelder-Mead Simplex Algorithm For Unconstrained Optimization

    NASA Astrophysics Data System (ADS)

    Dennis, J. E.; Torczon, Virginia

    1988-04-01

    We are interested in implementing direct search methods on parallel computers to solve the unconstrained minimization problem: given a function f : ℝⁿ → ℝ, find an x ∈ ℝⁿ that minimizes f(x). Our preliminary work has focused on the Nelder-Mead simplex algorithm. The origin of the algorithm can be found in a 1962 paper by Spendley, Hext and Himsworth [1]; Nelder and Mead [2] proposed an adaptive version which proved to be much more robust in practice. Dennis and Woods [3] give a clear presentation of the standard Nelder-Mead simplex algorithm; Woods [4] includes a more complete discussion of implementation details as well as some preliminary convergence results. Since descriptions of the standard Nelder-Mead simplex algorithm appear in Nelder and Mead [2], Dennis and Woods [3], and Woods [4], we will limit our introductory discussion to the advantages and disadvantages of the algorithm, as well as some of the features which make it so popular. We then outline the approaches we have taken and discuss our preliminary results. We conclude with a discussion of future research and some observations about our findings.
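
    For reference, a compact sketch of the standard sequential Nelder-Mead iteration (order, reflect, expand, contract, shrink) that parallel variants start from; the coefficients are the usual 1, 2, 1/2 choices.

```python
import numpy as np

def nelder_mead(f, x0, step=0.5, tol=1e-8, max_iter=500):
    """Standard (sequential) Nelder-Mead simplex method."""
    n = len(x0)
    simplex = [np.asarray(x0, float)]
    for i in range(n):                          # initial simplex around x0
        v = simplex[0].copy(); v[i] += step
        simplex.append(v)
    fvals = [f(v) for v in simplex]
    for _ in range(max_iter):
        order = np.argsort(fvals)
        simplex = [simplex[i] for i in order]
        fvals = [fvals[i] for i in order]
        if abs(fvals[-1] - fvals[0]) < tol:
            break
        c = np.mean(simplex[:-1], axis=0)       # centroid of the best n vertices
        xr = c + (c - simplex[-1]); fr = f(xr)  # reflection
        if fr < fvals[0]:
            xe = c + 2.0 * (c - simplex[-1]); fe = f(xe)     # expansion
            simplex[-1], fvals[-1] = (xe, fe) if fe < fr else (xr, fr)
        elif fr < fvals[-2]:
            simplex[-1], fvals[-1] = xr, fr
        else:
            xc = c + 0.5 * (simplex[-1] - c); fc = f(xc)     # contraction
            if fc < fvals[-1]:
                simplex[-1], fvals[-1] = xc, fc
            else:                                            # shrink toward the best vertex
                simplex = [simplex[0]] + [simplex[0] + 0.5 * (v - simplex[0]) for v in simplex[1:]]
                fvals = [f(v) for v in simplex]
    return simplex[0], fvals[0]

# e.g. nelder_mead(lambda x: (x[0]-1)**2 + 100*(x[1]-x[0]**2)**2, [0.0, 0.0])
```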

  14. Comparison of Statistical Algorithms for the Detection of Infectious Disease Outbreaks in Large Multiple Surveillance Systems.

    PubMed

    Enki, Doyo G; Garthwaite, Paul H; Farrington, C Paddy; Noufaily, Angela; Andrews, Nick J; Charlett, Andre

    2016-01-01

    A large-scale multiple surveillance system for infectious disease outbreaks has been in operation in England and Wales since the early 1990s. Changes to the statistical algorithm at the heart of the system were proposed and the purpose of this paper is to compare two new algorithms with the original algorithm. Test data to evaluate performance are created from weekly counts of the number of cases of each of more than 2000 diseases over a twenty-year period. The time series of each disease is separated into one series giving the baseline (background) disease incidence and a second series giving disease outbreaks. One series is shifted forward by twelve months and the two are then recombined, giving a realistic series in which it is known where outbreaks have been added. The metrics used to evaluate performance include a scoring rule that appropriately balances sensitivity against specificity and is sensitive to variation in probabilities near 1. In the context of disease surveillance, a scoring rule can be adapted to reflect the size of outbreaks and this was done. Results indicate that the two new algorithms are comparable to each other and better than the algorithm they were designed to replace. PMID:27513749

  15. Comparison of Statistical Algorithms for the Detection of Infectious Disease Outbreaks in Large Multiple Surveillance Systems

    PubMed Central

    Farrington, C. Paddy; Noufaily, Angela; Andrews, Nick J.; Charlett, Andre

    2016-01-01

    A large-scale multiple surveillance system for infectious disease outbreaks has been in operation in England and Wales since the early 1990s. Changes to the statistical algorithm at the heart of the system were proposed and the purpose of this paper is to compare two new algorithms with the original algorithm. Test data to evaluate performance are created from weekly counts of the number of cases of each of more than 2000 diseases over a twenty-year period. The time series of each disease is separated into one series giving the baseline (background) disease incidence and a second series giving disease outbreaks. One series is shifted forward by twelve months and the two are then recombined, giving a realistic series in which it is known where outbreaks have been added. The metrics used to evaluate performance include a scoring rule that appropriately balances sensitivity against specificity and is sensitive to variation in probabilities near 1. In the context of disease surveillance, a scoring rule can be adapted to reflect the size of outbreaks and this was done. Results indicate that the two new algorithms are comparable to each other and better than the algorithm they were designed to replace. PMID:27513749

  16. Stitching algorithm of the images acquired from different points of fixation

    NASA Astrophysics Data System (ADS)

    Semenishchev, E. A.; Voronin, V. V.; Marchuk, V. I.; Pismenskova, M. M.

    2015-02-01

    Image mosaicing is the act of combining two or more images, and it is used in many applications in computer vision, image processing, and computer graphics. It aims to combine images such that no obstructive boundaries exist around overlapped regions and to create a mosaic that exhibits as little distortion as possible relative to the original images. Most existing algorithms are computationally complex and do not always produce good results when the images to be stitched differ in scale, lighting, or viewpoint. In this paper we consider an algorithm that increases the processing speed when stitching high-resolution images. We reduce the computational complexity by using edge-image analysis and a saliency map on highly detailed areas. For the detected areas, the rotation angles, scaling factors, color-correction coefficients and the transformation matrix are determined. We define key points using the SURF detector and reject false correspondences based on correlation analysis. The proposed algorithm makes it possible to combine images taken from arbitrary viewpoints with different color balances, shutter times and scales. We perform a comparative study and show that, statistically, the new algorithm delivers good-quality results compared with existing algorithms.
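
    A rough sketch of the keypoint-based alignment stage follows. The paper uses the SURF detector with correlation-based rejection of false matches; here ORB (freely available in OpenCV) and RANSAC are substituted, so this approximates the described stage rather than reproducing the authors' method.

```python
import cv2
import numpy as np

def stitch_pair(img1, img2):
    # Detect and describe key points (ORB substituted for SURF, which
    # ships only with opencv-contrib).
    orb = cv2.ORB_create(2000)
    k1, d1 = orb.detectAndCompute(img1, None)
    k2, d2 = orb.detectAndCompute(img2, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(d1, d2), key=lambda m: m.distance)[:200]
    src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    # RANSAC stands in for the paper's correlation-based rejection of false
    # correspondences; H encodes rotation, scale and translation jointly.
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return cv2.warpPerspective(img1, H, (img2.shape[1], img2.shape[0]))
```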

  17. Fast algorithm for scaling analysis with higher-order detrending moving average method

    NASA Astrophysics Data System (ADS)

    Tsujimoto, Yutaka; Miki, Yuki; Shimatani, Satoshi; Kiyono, Ken

    2016-05-01

    Among scaling analysis methods based on the root-mean-square deviation from the estimated trend, it has been demonstrated that centered detrending moving average (DMA) analysis with a simple moving average has good performance when characterizing long-range correlation or fractal scaling behavior. Furthermore, higher-order DMA has also been proposed; it is shown to have better detrending capability, removing higher-order polynomial trends than the original DMA. However, a straightforward implementation of higher-order DMA requires a very high computational cost, which would prevent practical use of this method. To solve this issue, in this study, we introduce a fast algorithm for higher-order DMA, which consists of two techniques: (1) parallel translation of moving-averaging windows by a fixed interval; (2) recurrence formulas for the calculation of summations. Our algorithm can significantly reduce computational cost. Monte Carlo experiments show that the computational time of our algorithm is approximately proportional to the data length, whereas that of the conventional algorithm is proportional to the square of the data length. The efficiency of our algorithm is also shown by a systematic study of the performance of higher-order DMA, such as the range of detectable scaling exponents and the detrending capability for removing polynomial trends. In addition, through the analysis of heart-rate variability time series, we discuss possible applications of higher-order DMA.
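
    A minimal sketch of the cumulative-sum idea for zeroth-order (simple moving average) DMA; the paper's recurrence formulas generalize this O(N)-per-scale evaluation to higher-order polynomial moving averages.

```python
import numpy as np

def dma_fluctuation(x, n):
    """Fluctuation F(n) of zeroth-order centered DMA (n odd): RMS deviation
    of the profile from its centered simple moving average. The moving
    average comes from a cumulative sum, so each scale costs O(N) -- the
    summation-recurrence idea behind the fast algorithm."""
    y = np.cumsum(x - np.mean(x))           # integrated series (profile)
    c = np.cumsum(np.insert(y, 0, 0.0))
    ma = (c[n:] - c[:-n]) / n               # all length-n window means in O(N)
    half = n // 2
    resid = y[half:len(y) - half] - ma      # center the window on each point
    return np.sqrt(np.mean(resid ** 2))

# Scaling exponent: slope of log F(n) vs log n over, e.g., n in [5, 9, 17, 33, 65].
```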

  18. Iterative phase retrieval algorithms. I: optimization.

    PubMed

    Guo, Changliang; Liu, Shi; Sheridan, John T

    2015-05-20

    Two modified Gerchberg-Saxton (GS) iterative phase retrieval algorithms are proposed. The first we refer to as the spatial phase perturbation GS algorithm (SPP GSA). The second is a combined GS hybrid input-output algorithm (GS/HIOA). In this paper (Part I), it is demonstrated that the SPP GS and GS/HIO algorithms are both much better at avoiding stagnation during phase retrieval, allowing them to successfully locate superior solutions compared with either the GS or the HIO algorithms. The performances of the SPP GS and GS/HIO algorithms are also compared. Then, the error reduction (ER) algorithm is combined with the HIO algorithm (ER/HIOA) to retrieve the input object image and the phase, given only some knowledge of its extent and the amplitude in the Fourier domain. In Part II, the algorithms developed here are applied to carry out known plaintext and ciphertext attacks on amplitude encoding and phase encoding double random phase encryption systems. Significantly, ER/HIOA is then used to carry out a ciphertext-only attack on AE DRPE systems. PMID:26192504
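
    For orientation, the baseline GS loop that both proposed variants modify (the SPP variant injects random spatial phase perturbations to escape stagnation, which is omitted in this sketch):

```python
import numpy as np

def gerchberg_saxton(src_amp, tgt_amp, iters=200):
    # Baseline GS: enforce the known amplitude in each domain, keep the phase.
    field = src_amp * np.exp(2j * np.pi * np.random.rand(*src_amp.shape))
    for _ in range(iters):
        F = np.fft.fft2(field)
        F = tgt_amp * np.exp(1j * np.angle(F))           # Fourier-domain constraint
        field = np.fft.ifft2(F)
        field = src_amp * np.exp(1j * np.angle(field))   # object-domain constraint
    return np.angle(field)                               # retrieved phase
```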

  19. Accurate Finite Difference Algorithms

    NASA Technical Reports Server (NTRS)

    Goodrich, John W.

    1996-01-01

    Two families of finite difference algorithms for computational aeroacoustics are presented and compared. All of the algorithms are single-step explicit methods; they have the same order of accuracy in both space and time, with examples up to eleventh order, and they have multidimensional extensions. One of the algorithm families has spectral-like high resolution. Propagation with high-order and high-resolution algorithms can produce accurate results after O(10^6) periods of propagation with eight grid points per wavelength.

  20. Double regions growing algorithm for automated satellite image mosaicking

    NASA Astrophysics Data System (ADS)

    Tan, Yihua; Chen, Chen; Tian, Jinwen

    2011-12-01

    Feathering is the most widely used method in seamless satellite image mosaicking. A simple but effective algorithm, double regions growing (DRG), which utilizes the shape content of the images' valid regions, is proposed for generating a robust feathering line before feathering. It works without any human intervention, and experiments on real satellite images show the advantages of the proposed method.

  1. Optimal Pid Controller Design Using Adaptive Vurpso Algorithm

    NASA Astrophysics Data System (ADS)

    Zirkohi, Majid Moradi

    2015-04-01

    The purpose of this paper is to improve the Velocity Update Relaxation Particle Swarm Optimization (VURPSO) algorithm. The improved algorithm is called the Adaptive VURPSO (AVURPSO) algorithm. An optimal design of a Proportional-Integral-Derivative (PID) controller is then obtained using the AVURPSO algorithm. An adaptive momentum factor is used to regulate the trade-off between the global and the local exploration abilities of the proposed algorithm; this helps the system reach the optimal solution quickly and saves computation time. Comparisons on the optimal PID controller design confirm the superiority of the AVURPSO algorithm over the optimization algorithms mentioned in this paper, namely the VURPSO algorithm, the Ant Colony algorithm, and the conventional approach. Comparisons on the speed of convergence confirm that the proposed algorithm converges faster, in less computation time, to a global optimum value. The proposed AVURPSO can be used in diverse areas of optimization such as industrial planning, resource allocation, scheduling, decision making, pattern recognition and machine learning. The proposed AVURPSO algorithm is efficiently used to design an optimal PID controller.
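
    A generic PSO sketch with a decaying inertia (momentum) factor standing in for the AVURPSO update; the cost function mapping (Kp, Ki, Kd) to a performance index such as ITAE, and the gain bounds, are assumptions supplied by the caller.

```python
import numpy as np

def pso_tune_pid(cost, bounds, n_particles=20, iters=60):
    # bounds: [(Kp_min, Kp_max), (Ki_min, Ki_max), (Kd_min, Kd_max)]
    lo, hi = np.array(bounds, float).T
    x = lo + (hi - lo) * np.random.rand(n_particles, 3)
    v = np.zeros_like(x)
    pbest = x.copy()
    pval = np.array([cost(p) for p in x])
    g = pbest[pval.argmin()].copy()
    for k in range(iters):
        w = 0.9 - 0.5 * k / iters            # momentum decays: global -> local search
        r1, r2 = np.random.rand(2, n_particles, 1)
        v = w * v + 2.0 * r1 * (pbest - x) + 2.0 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        val = np.array([cost(p) for p in x])
        better = val < pval
        pbest[better], pval[better] = x[better], val[better]
        g = pbest[pval.argmin()].copy()
    return g                                  # best (Kp, Ki, Kd) found
```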

  2. A Palmprint Recognition Algorithm Using Phase-Only Correlation

    NASA Astrophysics Data System (ADS)

    Ito, Koichi; Aoki, Takafumi; Nakajima, Hiroshi; Kobayashi, Koji; Higuchi, Tatsuo

    This paper presents a palmprint recognition algorithm using Phase-Only Correlation (POC). The use of phase components in 2D (two-dimensional) discrete Fourier transforms of palmprint images makes it possible to achieve highly robust image registration and matching. In the proposed algorithm, POC is used to align scaling, rotation and translation between two palmprint images, and evaluate similarity between them. Experimental evaluation using a palmprint image database clearly demonstrates efficient matching performance of the proposed algorithm.
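
    A minimal numpy sketch of the POC similarity measure for pure translation; the rotation- and scale-alignment steps of the proposed algorithm are not shown here.

```python
import numpy as np

def phase_only_correlation(f, g):
    """POC surface of two same-size images: the peak location gives the
    translation between them, the peak height their similarity."""
    F, G = np.fft.fft2(f), np.fft.fft2(g)
    R = F * np.conj(G)
    R /= np.abs(R) + 1e-12                      # discard magnitude, keep phase only
    poc = np.fft.fftshift(np.real(np.fft.ifft2(R)))
    peak = np.unravel_index(np.argmax(poc), poc.shape)
    return poc, peak, float(poc[peak])
```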

  3. A parallel algorithm for random searches

    NASA Astrophysics Data System (ADS)

    Wosniack, M. E.; Raposo, E. P.; Viswanathan, G. M.; da Luz, M. G. E.

    2015-11-01

    We discuss a parallelization procedure for a two-dimensional random search of a single individual, a typical sequential process. To assure the same features of the sequential random search in the parallel version, we analyze the former's spatial patterns of encountered targets for different search strategies and densities of homogeneously distributed targets. We identify a lognormal tendency for the distribution of distances between consecutively detected targets. Then, by assigning the distinct mean and standard deviation of this distribution for each corresponding configuration in the parallel simulations (constituted by parallel random walkers), we are able to recover important statistical properties, e.g., the target detection efficiency, of the original problem. The proposed parallel approach presents a speedup of nearly one order of magnitude compared with the sequential implementation. This algorithm can be easily adapted to different instances, such as searches in three dimensions. Its possible range of applicability covers problems in areas as diverse as automated computer searches in high-capacity databases and animal foraging.

  4. Prediction of Saccharomyces cerevisiae replication origins

    PubMed Central

    Breier, Adam M; Chatterji, Sourav; Cozzarelli, Nicholas R

    2004-01-01

    Background Autonomously replicating sequences (ARSs) function as replication origins in Saccharomyces cerevisiae. ARSs contain the 17 bp ARS consensus sequence (ACS), which binds the origin recognition complex. The yeast genome contains more than 10,000 ACS matches, but there are only a few hundred origins, and little flanking sequence similarity has been found. Thus, identification of origins by sequence alone has not been possible. Results We developed an algorithm, Oriscan, to predict yeast origins using similarity to 26 characterized origins. Oriscan used 268 bp of sequence, including the T-rich ACS and a 3' A-rich region. The predictions identified the exact location of the ACS. A total of 84 of the top 100 Oriscan predictions, and 56% of the top 350, matched known ARSs or replication protein binding sites. The true accuracy was even higher because we tested 25 discrepancies, and 15 were in fact ARSs. Thus, 94% of the top 100 predictions and an estimated 70% of the top 350 were correct. We compared the predictions to corresponding sequences in related Saccharomyces species and found that the ACSs of experimentally supported predictions show significant conservation. Conclusions The high accuracy of the predictions indicates that we have defined near-sufficient conditions for ARS activity, the A-rich region is a recognizable feature of ARS elements with a probable role in replication initiation, and nucleotide sequence is a reliable predictor of yeast origins. Oriscan detected most origins in the genome, demonstrating previously unrecognized generality in yeast replication origins and significant discriminatory power in the algorithm. PMID:15059255

  5. Eusociality: Origin and consequences

    PubMed Central

    Wilson, Edward O.; Hölldobler, Bert

    2005-01-01

    In this new assessment of the empirical evidence, an alternative to the standard model is proposed: group selection is the strong binding force in eusocial evolution; individual selection, the strong dissolutive force; and kin selection (narrowly defined), either a weak binding or weak dissolutive force, according to circumstance. Close kinship may be more a consequence of eusociality than a factor promoting its origin. A point of no return to the solitary state exists, as a rule when workers become anatomically differentiated. Eusociality has been rare in evolution, evidently due to the scarcity of environmental pressures adequate to tip the balance among countervailing forces in favor of group selection. Eusociality in ants and termites in the irreversible stage is the key to their ecological dominance and has (at least in ants) shaped some features of internal phylogeny. Their colonies are consistently superior to solitary and preeusocial competitors, due to the altruistic behavior among nestmates and their ability to organize coordinated action by pheromonal communication. PMID:16157878

  6. Origin of Neutron Stars

    NASA Astrophysics Data System (ADS)

    Brecher, K.

    1999-12-01

    The origin of the concept of neutron stars can be traced to two brief, incredibly insightful publications. Work on the earlier paper by Lev Landau (Phys. Z. Sowjetunion, 1, 285, 1932) actually predated the discovery of neutrons. Nonetheless, Landau arrived at the notion of a collapsed star with the density of a nucleus (really a "nucleus star") and demonstrated (at about the same time as, and independent of, Chandrasekhar) that there is an upper mass limit for dense stellar objects of about 1.5 solar masses. Perhaps even more remarkable is the abstract of a talk presented at the December 1933 meeting of the American Physical Society published by Walter Baade and Fritz Zwicky in 1934 (Phys. Rev. 45, 138). It followed the discovery of the neutron by just over a year. Their report, which was about the same length as the present abstract: (1) invented the concept and word supernova; (2) suggested that cosmic rays are produced by supernovae; and (3) in the authors' own words, proposed "with all reserve ... the view that supernovae represent the transitions from ordinary stars to neutron stars (italics in the original), which in their final stages consist of extremely closely packed neutrons." The abstract by Baade and Zwicky probably contains the highest density of new, important (and correct) ideas in high energy astrophysics ever published in a single paper. In this talk, we will discuss some of the facts and myths surrounding these two publications.

  7. Origins of magnetospheric physics

    SciTech Connect

    Van Allen, J.A.

    1983-01-01

    The history of the scientific investigation of the Earth's magnetosphere during the period 1946-1960 is reviewed, with a focus on satellite missions leading to the discovery of the inner and outer radiation belts. Chapters are devoted to ground-based studies of the Earth's magnetic field through the 1930s, the first U.S. rocket flights carrying scientific instruments, the rockoon flights from the polar regions (1952-1957), U.S. planning for scientific use of artificial satellites (1956), the launch of Sputnik I (1957), the discovery of the inner belt by Explorers I and III (1958), the Argus high-altitude atomic-explosion tests (1958), the confirmation of the inner belt and discovery of the outer belt by Explorer IV and Pioneers I-V, related studies by Sputniks II and III and Luniks I-III, and the observational and theoretical advances of 1959-1961. Photographs, drawings, diagrams, graphs, and copies of original notes and research proposals are provided. 227 references.

  8. The gradient boosting algorithm and random boosting for genome-assisted evaluation in large data sets.

    PubMed

    González-Recio, O; Jiménez-Montero, J A; Alenda, R

    2013-01-01

    In the next few years, with the advent of high-density single nucleotide polymorphism (SNP) arrays and genome sequencing, genomic evaluation methods will need to deal with a large number of genetic variants and an increasing sample size. The boosting algorithm is a machine-learning technique that may alleviate the drawbacks of dealing with such large data sets. This algorithm combines different predictors in a sequential manner with some shrinkage on them; each predictor is applied consecutively to the residuals from the committee formed by the previous ones to form a final prediction based on a subset of covariates. Here, a detailed description is provided and examples using a toy data set are included. A modification of the algorithm called "random boosting" was proposed to increase predictive ability and decrease computation time of genome-assisted evaluation in large data sets. Random boosting uses a random selection of markers to add a subsequent weak learner to the predictive model. These modifications were applied to a real data set composed of 1,797 bulls genotyped for 39,714 SNP. Deregressed proofs of 4 yield traits and 1 type trait from January 2009 routine evaluations were used as dependent variables. A 2-fold cross-validation scenario was implemented. Sires born before 2005 were used as a training sample (1,576 and 1,562 for production and type traits, respectively), whereas younger sires were used as a testing sample to evaluate predictive ability of the algorithm on yet-to-be-observed phenotypes. Comparison with the original algorithm was provided. The predictive ability of the algorithm was measured as Pearson correlations between observed and predicted responses. Further, estimated bias was computed as the average difference between observed and predicted phenotypes. The results showed that the modification of the original boosting algorithm could be run in 1% of the time used with the original algorithm and with negligible differences in accuracy
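
    An illustrative sketch of the random-boosting idea: each round samples a random fraction of the markers and adds one shrunken single-marker regression fitted to the current residuals. The weak learner and the sampled fraction are assumptions for illustration; the paper's learners may differ.

```python
import numpy as np

def random_boosting(X, y, n_rounds=500, shrinkage=0.1, frac=0.05):
    """Boosting on residuals with a random subset of markers per round
    (X: n_animals x n_snps genotype matrix, y: deregressed proofs)."""
    resid = y - y.mean()
    model = [("intercept", y.mean())]
    for _ in range(n_rounds):
        cols = np.random.choice(X.shape[1], max(1, int(frac * X.shape[1])), replace=False)
        # choose the sampled marker with the strongest relation to the residuals
        scores = np.abs((X[:, cols] - X[:, cols].mean(0)).T @ resid)
        j = cols[int(np.argmax(scores))]
        b = (X[:, j] @ resid) / (X[:, j] @ X[:, j] + 1e-12)
        model.append((j, shrinkage * b))        # shrunken weak learner
        resid = resid - shrinkage * b * X[:, j] # committee moves to new residuals
    return model
```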

  9. LiveWire interactive boundary extraction algorithm based on Haar wavelet transform and control point set direction search

    NASA Astrophysics Data System (ADS)

    Cheng, Jun; Zhang, Jun; Tian, Jinwen

    2015-12-01

    Based on a deep analysis of the LiveWire interactive boundary extraction algorithm, a new algorithm focused on improving the speed of the LiveWire algorithm is proposed in this paper. First, the Haar wavelet transform is applied to the input image, and the boundary is extracted on the low-resolution image obtained from the transform. Second, the LiveWire shortest path is calculated using a direction search over the control-point set, utilizing the spatial relationship between the two control points that users provide in real time. Third, the search order of the points adjacent to the starting node is set in advance, and an ordinary queue instead of a priority queue is used as the storage pool when optimizing shortest-path values, reducing the complexity of the algorithm from O(n^2) to O(n). Finally, a region-iterative backward-projection method based on neighborhood pixel polling is used to convert the dual-pixel boundary of the reconstructed image into a single-pixel boundary after the inverse Haar wavelet transform. The proposed algorithm combines the advantages of the Haar wavelet transform, whose image decomposition and reconstruction are fast and consistent with the texture features of the image, with those of the optimal path search based on the control-point set direction search, which reduces the time complexity of the original algorithm. The algorithm thus speeds up interactive boundary extraction while reflecting the boundary information of the image more comprehensively. All of the methods mentioned above play a large role in improving the execution efficiency and robustness of the algorithm.

  10. Testing block subdivision algorithms on block designs

    NASA Astrophysics Data System (ADS)

    Wiseman, Natalie; Patterson, Zachary

    2016-01-01

    Integrated land use-transportation models predict future transportation demand taking into account how households and firms arrange themselves partly as a function of the transportation system. Recent integrated models require parcels as inputs and produce household and employment predictions at the parcel scale. Block subdivision algorithms automatically generate parcel patterns within blocks. Evaluating block subdivision algorithms is done by way of generating parcels and comparing them to those in a parcel database. Three block subdivision algorithms are evaluated on how closely they reproduce parcels of different block types found in a parcel database from Montreal, Canada. While the authors who developed each of the algorithms have evaluated them, they have used their own metrics and block types to evaluate their own algorithms. This makes it difficult to compare their strengths and weaknesses. The contribution of this paper is in resolving this difficulty with the aim of finding a better algorithm suited to subdividing each block type. The proposed hypothesis is that given the different approaches that block subdivision algorithms take, it's likely that different algorithms are better adapted to subdividing different block types. To test this, a standardized block type classification is used that consists of mutually exclusive and comprehensive categories. A statistical method is used for finding a better algorithm and the probability it will perform well for a given block type. Results suggest the oriented bounding box algorithm performs better for warped non-uniform sites, as well as gridiron and fragmented uniform sites. It also produces more similar parcel areas and widths. The Generalized Parcel Divider 1 algorithm performs better for gridiron non-uniform sites. The Straight Skeleton algorithm performs better for loop and lollipop networks as well as fragmented non-uniform and warped uniform sites. It also produces more similar parcel shapes and patterns.

  11. Reactive power optimization by genetic algorithm

    SciTech Connect

    Iba, Kenji )

    1994-05-01

    This paper presents a new approach to optimal reactive power planning based on a genetic algorithm. Many outstanding methods for this problem have been proposed in the past. However, most of these approaches share the common defect of being caught in local minimum solutions. The integer problem, which yields integer-valued solutions for discrete controllers/banks, also remains difficult. The genetic algorithm is a search algorithm based on the mechanics of natural selection and genetics. It can search for a global solution using multiple paths and treat integer problems naturally. The proposed method was applied to practical 51-bus and 224-bus systems to show its feasibility and capabilities. Although this method is not as fast as sophisticated traditional methods, the concept is quite promising and useful.
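
    A sketch of an integer-coded GA of the kind described, where each gene is a discrete tap or capacitor-bank level; the fitness function (e.g., network losses plus constraint penalties) is hypothetical and left to the caller.

```python
import numpy as np

def genetic_search(fitness, n_vars, levels, pop=40, gens=100, pm=0.05):
    """Integer-coded GA: each gene takes a value in {0..levels-1}, so
    discrete controllers/banks are handled natively (no rounding)."""
    P = np.random.randint(levels, size=(pop, n_vars))
    for _ in range(gens):
        fit = np.array([fitness(ind) for ind in P])
        elite = P[fit.argmin()].copy()                 # keep the best individual
        a, b = np.random.randint(pop, size=(2, pop))
        parents = P[np.where(fit[a] < fit[b], a, b)]   # tournament selection
        mates = parents[np.random.permutation(pop)]
        cut = np.random.randint(1, n_vars, size=(pop, 1))
        child = np.where(np.arange(n_vars) < cut, parents, mates)  # one-point crossover
        mut = np.random.rand(pop, n_vars) < pm
        child[mut] = np.random.randint(levels, size=mut.sum())     # integer mutation
        child[0] = elite                               # elitism
        P = child
    fit = np.array([fitness(ind) for ind in P])
    return P[fit.argmin()], fit.min()
```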

  12. Swarm-based algorithm for phase unwrapping.

    PubMed

    da Silva Maciel, Lucas; Albertazzi, Armando G

    2014-08-20

    A novel algorithm for phase unwrapping based on swarm intelligence is proposed. The algorithm was designed based on three main goals: maximum coverage of reliable information, focused effort for better efficiency, and reliable unwrapping. Experiments were performed, and a new agent was designed to follow a simple set of five rules in order to collectively achieve these goals. These rules consist of random walking for unwrapping and searching, ambiguity evaluation by comparing unwrapped regions, and a replication behavior responsible for the good distribution of agents throughout the image. The results were comparable with the results from established methods. The swarm-based algorithm was able to suppress ambiguities better than the flood-fill algorithm without relying on lengthy processing times. In addition, future developments such as parallel processing and better-quality evaluation present great potential for the proposed method. PMID:25321125

  13. Self-adaptive parameters in genetic algorithms

    NASA Astrophysics Data System (ADS)

    Pellerin, Eric; Pigeon, Luc; Delisle, Sylvain

    2004-04-01

    Genetic algorithms are powerful search algorithms that can be applied to a wide range of problems. Generally, parameter setting is accomplished prior to running a Genetic Algorithm (GA) and this setting remains unchanged during execution. The problem of interest to us here is the self-adaptive parameters adjustment of a GA. In this research, we propose an approach in which the control of a genetic algorithm's parameters can be encoded within the chromosome of each individual. The parameters' values are entirely dependent on the evolution mechanism and on the problem context. Our preliminary results show that a GA is able to learn and evaluate the quality of self-set parameters according to their degree of contribution to the resolution of the problem. These results are indicative of a promising approach to the development of GAs with self-adaptive parameter settings that do not require the user to pre-adjust parameters at the outset.
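
    A minimal sketch of the self-adaptation idea: each chromosome carries its own mutation step size, which is inherited and perturbed before being used (the log-normal rule borrowed from evolution strategies; the paper's encoding of GA parameters may differ).

```python
import numpy as np

def self_adaptive_ga(fitness, n_genes, pop=50, gens=200):
    """Each individual carries an extra gene, its mutation step size,
    which evolves along with the solution itself."""
    X = np.random.randn(pop, n_genes)
    sigma = np.full(pop, 0.3)                  # per-individual mutation step sizes
    for _ in range(gens):
        fit = np.array([fitness(x) for x in X])
        order = np.argsort(fit)                # minimization
        X, sigma = X[order], sigma[order]
        parents, psig = X[:pop // 2], sigma[:pop // 2]
        # offspring first inherit and perturb their own mutation parameter...
        csig = psig * np.exp(0.2 * np.random.randn(pop // 2))
        # ...then mutate the solution genes with the inherited step size
        children = parents + csig[:, None] * np.random.randn(pop // 2, n_genes)
        X = np.vstack([parents, children])
        sigma = np.concatenate([psig, csig])
    return X[0]
```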

  14. Acceleration of iterative image restoration algorithms.

    PubMed

    Biggs, D S; Andrews, M

    1997-03-10

    A new technique for the acceleration of iterative image restoration algorithms is proposed. The method is based on the principles of vector extrapolation and does not require the minimization of a cost function. The algorithm is derived and its performance illustrated with Richardson-Lucy (R-L) and maximum entropy (ME) deconvolution algorithms and the Gerchberg-Saxton magnitude and phase retrieval algorithms. Considerable reduction in restoration times is achieved with little image distortion or computational overhead per iteration. The speedup achieved is shown to increase with the number of iterations performed and is easily adapted to suit different algorithms. An example R-L restoration achieves an average speedup of 40 times after 250 iterations and an ME method 20 times after only 50 iterations. An expression for estimating the acceleration factor is derived and confirmed experimentally. Comparisons with other acceleration techniques in the literature reveal significant improvements in speed and stability. PMID:18250863
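
    A simplified version of the extrapolation idea, wrapped around any fixed-point iteration (e.g., one R-L update): predict along the difference of successive iterates with a data-driven acceleration factor, then apply one correction step. The clamping of the factor is an assumption for stability; the published scheme computes it slightly differently.

```python
import numpy as np

def accelerated(iterate, x0, n_iter=100):
    """Vector-extrapolation acceleration of a fixed-point iteration."""
    x_prev, g_prev = x0, None
    x = iterate(x0)
    for _ in range(n_iter):
        g = x - x_prev                          # latest update direction
        if g_prev is None:
            y = x
        else:
            num = float(np.sum(g * g_prev))
            den = float(np.sum(g_prev * g_prev)) + 1e-12
            alpha = min(max(num / den, 0.0), 1.0)   # clamped acceleration factor
            y = x + alpha * g                   # extrapolated prediction
        x_prev, g_prev = x, g
        x = iterate(y)                          # correction
    return x
```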

  15. Automatic ionospheric layers detection: Algorithms analysis

    NASA Astrophysics Data System (ADS)

    Molina, María G.; Zuccheretti, Enrico; Cabrera, Miguel A.; Bianchi, Cesidio; Sciacca, Umberto; Baskaradas, James

    2016-03-01

    Vertical sounding is a widely used technique for obtaining ionosphere measurements, such as an estimation of virtual height versus scanned frequency. It is performed by a high-frequency radar for geophysical applications called an "ionospheric sounder" (or "ionosonde"). Radar detection depends mainly on target characteristics. While the behavior of several kinds of targets and the corresponding echo-detection algorithms have been studied, a survey is needed to identify a suitable algorithm for the ionospheric sounder. This paper focuses on automatic echo-detection algorithms implemented specifically for an ionospheric sounder; the specific characteristics of its targets were studied as well. Adaptive threshold detection algorithms are proposed, compared with the currently implemented algorithm, and tested using actual data obtained from the Advanced Ionospheric Sounder (AIS-INGV) at the Rome Ionospheric Observatory. Different case studies were selected according to typical ionospheric and detection conditions.
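
    The paper's adaptive-threshold detectors are not specified here; as a sketch of the general idea, a cell-averaging CFAR-style rule estimates the local noise level around each range cell and scales it by a factor k (guard size, training size, and k are hypothetical):

```python
import numpy as np

def adaptive_threshold_detect(power, guard=4, train=16, k=4.0):
    """Flag range cells whose echo power exceeds k times the mean noise
    level estimated from surrounding training cells (guard cells excluded)."""
    n = len(power)
    hits = np.zeros(n, dtype=bool)
    for i in range(n):
        lo = max(0, i - guard - train)
        hi = min(n, i + guard + train + 1)
        cells = np.r_[power[lo:max(0, i - guard)], power[i + guard + 1:hi]]
        if cells.size and power[i] > k * cells.mean():
            hits[i] = True
    return hits
```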

  16. Implementation of a new iterative learning control algorithm on real data.

    PubMed

    Zamanian, Hamed; Koohi, Ardavan

    2016-02-01

    In this paper, a newly presented approach is proposed for closed-loop automatic tuning of a proportional-integral-derivative (PID) controller based on an iterative learning control (ILC) algorithm. A modified ILC scheme iteratively adjusts the control signal. Once satisfactory performance is achieved, a linear compensator is identified from the ILC behavior using the causal relationship between the closed-loop signals. This compensator is approximated by a PD controller, which is used to tune the original PID controller. Results of implementing this approach are presented for experimental data from the Damavand tokamak and are consistent with the simulation outcomes. PMID:26931852
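
    For orientation, the standard P-type ILC trial update that such schemes build on, with a hypothetical learning gain:

```python
import numpy as np

def ilc_update(u, e, L=0.5):
    # P-type ILC law: u_{k+1}(t) = u_k(t) + L * e_k(t+1); the one-step
    # shift uses the error that the current input produced one sample later.
    u_next = u + L * np.roll(e, -1)
    u_next[-1] = u[-1]     # no future error is available at the final sample
    return u_next
```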

  17. An Efficient Algorithm for Some Highly Nonlinear Fractional PDEs in Mathematical Physics

    PubMed Central

    Ahmad, Jamshad; Mohyud-Din, Syed Tauseef

    2014-01-01

    In this paper, a fractional complex transform (FCT) is used to convert the given fractional partial differential equations (FPDEs) into corresponding partial differential equations (PDEs), and subsequently the Reduced Differential Transform Method (RDTM) is applied to the transformed system of linear and nonlinear time-fractional PDEs. The results so obtained are restated using the inverse transformation, which yields them in terms of the original variables. It is observed that the proposed algorithm is highly efficient and appropriate for fractional PDEs and hence can be extended to other complex problems of diversified nonlinear nature. PMID:25525804
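
    For concreteness, the commonly used form of the fractional complex transform (assuming a modified Riemann-Liouville derivative of order 0 < α ≤ 1; the paper's exact transform may differ):

```latex
% Map the fractional time variable t to an integer-order variable T:
T = \frac{p\,t^{\alpha}}{\Gamma(1+\alpha)}
\quad\Longrightarrow\quad
\frac{\partial^{\alpha} u}{\partial t^{\alpha}} = p\,\frac{\partial u}{\partial T},
% so the FPDE becomes an ordinary PDE that RDTM can handle.
```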

  18. Implementation of a new iterative learning control algorithm on real data

    NASA Astrophysics Data System (ADS)

    Zamanian, Hamed; Koohi, Ardavan

    2016-02-01

    In this paper, a newly presented approach is proposed for closed-loop automatic tuning of a proportional-integral-derivative (PID) controller based on an iterative learning control (ILC) algorithm. A modified ILC scheme iteratively adjusts the control signal. Once satisfactory performance is achieved, a linear compensator is identified from the ILC behavior using the causal relationship between the closed-loop signals. This compensator is approximated by a PD controller, which is used to tune the original PID controller. Results of implementing this approach are presented for experimental data from the Damavand tokamak and are consistent with the simulation outcomes.

  19. Improved hybrid optimization algorithm for 3D protein structure prediction.

    PubMed

    Zhou, Changjun; Hou, Caixia; Wei, Xiaopeng; Zhang, Qiang

    2014-07-01

    A new improved hybrid optimization algorithm, the PGATS algorithm, based on the toy off-lattice model, is presented for three-dimensional protein structure prediction problems. The algorithm combines particle swarm optimization (PSO), a genetic algorithm (GA), and tabu search (TS), and several improvement strategies are adopted: a stochastic disturbance factor is added to the particle swarm optimization to improve its search ability; the crossover and mutation operations of the genetic algorithm are replaced by a random linear method; and the tabu search algorithm is improved by appending a mutation operator. Through the combination of these strategies and algorithms, protein structure prediction (PSP) in a 3D off-lattice model is achieved. The PSP problem is NP-hard, but it can be cast as a global optimization problem with many extrema and many parameters; this is the theoretical basis of the hybrid optimization algorithm proposed in this paper. The algorithm combines local search and global search, which overcomes the shortcomings of a single algorithm and gives full play to the advantages of each. The method is validated on the standard Fibonacci sequences and on real protein sequences. Experiments show that the proposed method outperforms single algorithms in the accuracy of the computed protein sequence energy value, proving it an effective way to predict the structure of proteins. PMID:25069136

  20. Modern human origins: progress and prospects.

    PubMed Central

    Stringer, Chris

    2002-01-01

    The question of the mode of origin of modern humans (Homo sapiens) has dominated palaeoanthropological debate over the last decade. This review discusses the main models proposed to explain modern human origins, and examines relevant fossil evidence from Eurasia, Africa and Australasia. Archaeological and genetic data are also discussed, as well as problems with the concept of 'modernity' itself. It is concluded that a recent African origin can be supported for H. sapiens, morphologically, behaviourally and genetically, but that more evidence will be needed, both from Africa and elsewhere, before an absolute African origin for our species and its behavioural characteristics can be established and explained. PMID:12028792

  1. Blind restoration method of three-dimensional microscope image based on RL algorithm

    NASA Astrophysics Data System (ADS)

    Yao, Jin-li; Tian, Si; Wang, Xiang-rong; Wang, Jing-li

    2013-08-01

    Thin specimens of biological tissue appear three-dimensionally transparent under a microscope. Optical slice images can be captured by moving the focal plane to different locations in the specimen. The captured images have low resolution due to out-of-focus information from the planes adjacent to the focal plane. Traditional methods can remove the blur to a certain degree, but they require accurate knowledge of the point spread function (PSF) of the imaging system, and the accuracy of the PSF greatly influences the restoration result; in practice, an accurate PSF is difficult to obtain. In order to restore the original appearance of the specimen when the imaging-system parameters are unknown, or when there is noise and spherical aberration in the system, a blind restoration method for three-dimensional microscope images based on the R-L algorithm is proposed in this paper. Building on an exhaustive study of the two-dimensional R-L algorithm, and drawing on microscopy imaging theory and wavelet-transform denoising as a pretreatment, we extend the R-L algorithm to three-dimensional space. It is a nonlinear restoration method with a maximum-entropy constraint, and it does not need precise knowledge of the PSF of the microscopy imaging system to recover the blurred image: the image and the PSF converge to optimal solutions through many alternating iterations and corrections. MATLAB simulations and experimental results show that the extended algorithm is better in visual quality, peak signal-to-noise ratio and improved signal-to-noise ratio than the PML algorithm, and that the proposed algorithm can suppress noise, restore more details of the target, and increase image resolution.
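
    A hedged sketch of an alternating blind R-L scheme (without the paper's wavelet pretreatment or maximum-entropy constraint): the object and PSF are updated in turn, and np.flip keeps the adjoint dimension-agnostic, so the same code runs on 2-D slices or full 3-D stacks. The initial PSF guess is assumed padded to the data shape and normalized to unit sum.

```python
import numpy as np
from scipy.signal import fftconvolve

def blind_rl(data, psf0, n_outer=10, n_inner=5, eps=1e-12):
    """Blind Richardson-Lucy: alternate RL updates of the object and the PSF."""
    data = np.asarray(data, float)
    obj = np.full_like(data, data.mean())
    psf = np.asarray(psf0, float)
    for _ in range(n_outer):
        for _ in range(n_inner):                    # update the object, PSF fixed
            ratio = data / (fftconvolve(obj, psf, mode='same') + eps)
            obj *= fftconvolve(ratio, np.flip(psf), mode='same')
        for _ in range(n_inner):                    # update the PSF, object fixed
            ratio = data / (fftconvolve(obj, psf, mode='same') + eps)
            psf *= fftconvolve(ratio, np.flip(obj), mode='same')
            psf = np.clip(psf, 0, None)
            psf /= psf.sum() + eps                  # keep the PSF normalized
    return obj, psf
```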

  2. Facial Composite System Using Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Zahradníková, Barbora; Duchovičová, Soňa; Schreiber, Peter

    2014-12-01

    The article deals with genetic algorithms and their application in face identification. The purpose of the research is to develop a free and open-source facial composite system using evolutionary algorithms, primarily the processes of selection and breeding. The initial testing demonstrated higher quality of the final composites and a massive reduction in composite processing time. System requirements were specified, and future research directions were proposed in order to improve the results.

  3. Adaptive Cuckoo Search Algorithm for Unconstrained Optimization

    PubMed Central

    2014-01-01

    Modification of the intensification and diversification approaches in the recently developed cuckoo search algorithm (CSA) is performed. The alteration involves the implementation of adaptive step size adjustment strategy, and thus enabling faster convergence to the global optimal solutions. The feasibility of the proposed algorithm is validated against benchmark optimization functions, where the obtained results demonstrate a marked improvement over the standard CSA, in all the cases. PMID:25298971
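
    A sketch of CSA with Mantegna's Lévy flights, in which a simple geometric decay of the step size stands in for the paper's adaptive step-size adjustment strategy:

```python
import math
import numpy as np

def adaptive_cuckoo_search(f, lo, hi, dim, n=25, iters=500, pa=0.25):
    """CSA with Levy flights and a decaying (adaptive) step size."""
    nests = lo + (hi - lo) * np.random.rand(n, dim)
    fit = np.array([f(x) for x in nests])
    best = nests[fit.argmin()].copy()
    beta = 1.5                                       # Levy exponent
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2) /
             (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    for t in range(iters):
        alpha = 0.1 * 0.01 ** (t / iters)            # step size shrinks over the run
        step = (np.random.randn(n, dim) * sigma /
                np.abs(np.random.randn(n, dim)) ** (1 / beta))   # Mantegna's Levy steps
        new = np.clip(nests + alpha * step * (nests - best), lo, hi)
        new_fit = np.array([f(x) for x in new])
        better = new_fit < fit
        nests[better], fit[better] = new[better], new_fit[better]
        worst = np.random.rand(n) < pa               # abandon a fraction of nests
        nests[worst] = lo + (hi - lo) * np.random.rand(int(worst.sum()), dim)
        fit[worst] = [f(x) for x in nests[worst]]
        best = nests[fit.argmin()].copy()
    return best, fit.min()
```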

  4. Self-organization and clustering algorithms

    NASA Technical Reports Server (NTRS)

    Bezdek, James C.

    1991-01-01

    Kohonen's feature maps approach to clustering is often likened to the k or c-means clustering algorithms. Here, the author identifies some similarities and differences between the hard and fuzzy c-Means (HCM/FCM) or ISODATA algorithms and Kohonen's self-organizing approach. The author concludes that some differences are significant, but at the same time there may be some important unknown relationships between the two methodologies. Several avenues of research are proposed.
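
    For reference, the standard FCM alternation (centroid and membership updates) that the comparison refers to, in a compact numpy form:

```python
import numpy as np

def fuzzy_c_means(X, c, m=2.0, iters=100, tol=1e-6):
    """Standard FCM: alternate centroid and membership updates until the
    memberships stop changing (X: n_samples x n_features)."""
    n = X.shape[0]
    U = np.random.dirichlet(np.ones(c), size=n)          # fuzzy memberships (n x c)
    for _ in range(iters):
        Um = U ** m
        V = (Um.T @ X) / Um.sum(axis=0)[:, None]         # weighted cluster centers
        d = np.linalg.norm(X[:, None, :] - V[None, :, :], axis=2) + 1e-12
        ratio = (d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1.0))
        U_new = 1.0 / ratio.sum(axis=2)                  # u_ik = 1 / sum_j (d_ik/d_jk)^(2/(m-1))
        if np.abs(U_new - U).max() < tol:
            return U_new, V
        U = U_new
    return U, V
```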

  5. An Iterative Soft-Decision Decoding Algorithm

    NASA Technical Reports Server (NTRS)

    Lin, Shu; Koumoto, Takuya; Takata, Toyoo; Kasami, Tadao

    1996-01-01

    This paper presents a new minimum-weight trellis-based soft-decision iterative decoding algorithm for binary linear block codes. Simulation results for the RM(64,22), EBCH(64,24), RM(64,42) and EBCH(64,45) codes show that the proposed decoding algorithm achieves practically (or near) optimal error performance with significant reduction in decoding computational complexity. The average number of search iterations is also small even for low signal-to-noise ratio.

  6. Adaptive cuckoo search algorithm for unconstrained optimization.

    PubMed

    Ong, Pauline

    2014-01-01

    Modification of the intensification and diversification approaches in the recently developed cuckoo search algorithm (CSA) is performed. The alteration involves the implementation of adaptive step size adjustment strategy, and thus enabling faster convergence to the global optimal solutions. The feasibility of the proposed algorithm is validated against benchmark optimization functions, where the obtained results demonstrate a marked improvement over the standard CSA, in all the cases. PMID:25298971

  7. Multilevel algorithms for nonlinear optimization

    NASA Technical Reports Server (NTRS)

    Alexandrov, Natalia; Dennis, J. E., Jr.

    1994-01-01

    Multidisciplinary design optimization (MDO) gives rise to nonlinear optimization problems characterized by a large number of constraints that naturally occur in blocks. We propose a class of multilevel optimization methods motivated by the structure and number of constraints and by the expense of the derivative computations for MDO. The algorithms are an extension to the nonlinear programming problem of the successful class of local Brown-Brent algorithms for nonlinear equations. Our extensions allow the user to partition constraints into arbitrary blocks to fit the application, and they separately process each block and the objective function, restricted to certain subspaces. The methods use trust regions as a globalization strategy, and they have been shown to be globally convergent under reasonable assumptions. The multilevel algorithms can be applied to all classes of MDO formulations. Multilevel algorithms for solving nonlinear systems of equations are a special case of the multilevel optimization methods. In this case, they can be viewed as a trust-region globalization of the Brown-Brent class.

  8. Node Self-Deployment Algorithm Based on an Uneven Cluster with Radius Adjusting for Underwater Sensor Networks.

    PubMed

    Jiang, Peng; Xu, Yiming; Wu, Feng

    2016-01-01

    Existing move-restricted node self-deployment algorithms assume a fixed node communication radius and evaluate performance only by network coverage or connectivity rate; they do not consider the number of nodes near the sink node or the energy-consumption distribution of the network topology, which degrades network reliability and the energy-consumption balance. We therefore propose a distributed underwater node self-deployment algorithm. First, each node begins uneven clustering based on distance on the water surface, and each cluster-head node selects its next-hop node to synchronously construct a connected path to the sink node. Second, each cluster-head node adjusts its depth while maintaining the layout formed by the uneven clustering, and then adjusts the positions of the in-cluster nodes. The algorithm is novel in considering network reliability and the energy-consumption balance during node deployment, and it also accounts for the coverage redundancy rate of all positions a node may reach during position adjustment. Simulation results show that, compared with the connected-dominating-set (CDS) based depth computation algorithm, the proposed algorithm can increase the number of nodes near the sink node and improve network reliability while guaranteeing the network connectivity rate. Moreover, it can balance energy consumption during network operation, further improve the network coverage rate, and reduce energy consumption. PMID:26784193

  9. Node Self-Deployment Algorithm Based on an Uneven Cluster with Radius Adjusting for Underwater Sensor Networks

    PubMed Central

    Jiang, Peng; Xu, Yiming; Wu, Feng

    2016-01-01

    Existing move-restricted node self-deployment algorithms assume a fixed node communication radius and evaluate performance only by network coverage or connectivity rate; they do not consider the number of nodes near the sink node or the energy-consumption distribution of the network topology, which degrades network reliability and the energy-consumption balance. We therefore propose a distributed underwater node self-deployment algorithm. First, each node begins uneven clustering based on distance on the water surface, and each cluster-head node selects its next-hop node to synchronously construct a connected path to the sink node. Second, each cluster-head node adjusts its depth while maintaining the layout formed by the uneven clustering, and then adjusts the positions of the in-cluster nodes. The algorithm is novel in considering network reliability and the energy-consumption balance during node deployment, and it also accounts for the coverage redundancy rate of all positions a node may reach during position adjustment. Simulation results show that, compared with the connected-dominating-set (CDS) based depth computation algorithm, the proposed algorithm can increase the number of nodes near the sink node and improve network reliability while guaranteeing the network connectivity rate. Moreover, it can balance energy consumption during network operation, further improve the network coverage rate, and reduce energy consumption. PMID:26784193

  10. Dynamic displacement measurement of large-scale structures based on the Lucas-Kanade template tracking algorithm

    NASA Astrophysics Data System (ADS)

    Guo, Jie; Zhu, Chang'an

    2016-01-01

    The development of optics and computer technologies enables the application of vision-based techniques that use digital cameras to the displacement measurement of large-scale structures. Compared with traditional contact measurements, the vision-based technique allows remote measurement, is non-intrusive, and does not introduce additional mass to the structure. In this study, a high-speed camera system is developed to complete the displacement measurement in real time. The system consists of a high-speed camera and a notebook computer; the camera can capture images at a speed of hundreds of frames per second. To process the captured images, the Lucas-Kanade template tracking algorithm from the field of computer vision is introduced. Additionally, a modified inverse compositional algorithm is proposed to reduce the computing time of the original algorithm and further improve its efficiency. The modified algorithm can accomplish one displacement extraction within 1 ms without having to install any pre-designed target panel on the structure in advance. The accuracy and efficiency of the system for remote measurement of dynamic displacement are demonstrated in experiments on a motion platform and on the sound barrier of a suspension viaduct. Experimental results show that the proposed algorithm can extract an accurate displacement signal and accomplish the vibration measurement of large-scale structures.
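
    A minimal sketch of the inverse compositional LK idea for a pure-translation warp: gradients, steepest-descent images, and the Hessian are precomputed on the template once, which is where the per-frame savings come from. The translation-only warp model and the parameters here are simplifying assumptions; the paper's modified algorithm may differ in its details.

```python
import numpy as np
from scipy.ndimage import shift as nd_shift

def ic_lk_translation(template, image, p=np.zeros(2), iters=50, tol=1e-4):
    """Inverse compositional Lucas-Kanade for the warp W(x; p) = x + p."""
    gy, gx = np.gradient(np.asarray(template, float))
    sd = np.stack([gx.ravel(), gy.ravel()], axis=1)   # steepest-descent images
    H_inv = np.linalg.inv(sd.T @ sd)                  # 2x2 Hessian, precomputed once
    p = np.asarray(p, float).copy()
    for _ in range(iters):
        # warp the image toward the template with the current estimate
        warped = nd_shift(np.asarray(image, float), shift=(-p[1], -p[0]), order=1)
        err = (warped - template).ravel()
        dp = H_inv @ (sd.T @ err)
        p -= dp                                       # inverse compositional update
        if np.linalg.norm(dp) < tol:
            break
    return p   # estimated (dx, dy) displacement of the template in the image
```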

  11. The Origin(s) of Whales

    NASA Astrophysics Data System (ADS)

    Uhen, Mark D.

    2010-05-01

    Whales are first found in the fossil record approximately 52.5 million years ago (Mya) during the early Eocene in Indo-Pakistan. Our knowledge of early and middle Eocene whales has increased dramatically during the past three decades to the point where hypotheses of whale origins can be supported with a great deal of evidence from paleontology, anatomy, stratigraphy, and molecular biology. Fossils also provide preserved evidence of behavior and habitats, allowing the reconstruction of the modes of life of these semiaquatic animals during their transition from land to sea. Modern whales originated from ancient whales at or near the Eocene/Oligocene boundary, approximately 33.7 Mya. During the Oligocene, ancient whales coexisted with early baleen whales and early toothed whales. By the end of the Miocene, most modern families had originated, and most archaic forms had gone extinct. Whale diversity peaked in the late middle Miocene and fell thereafter toward the Recent, yielding our depauperate modern whale fauna.

  12. A three-dimensional-weighted cone beam filtered backprojection (CB-FBP) algorithm for image reconstruction in volumetric CT-helical scanning.

    PubMed

    Tang, Xiangyang; Hsieh, Jiang; Nilsen, Roy A; Dutta, Sandeep; Samsonov, Dmitry; Hagiwara, Akira

    2006-02-21

    Based on the structure of the original helical FDK algorithm, a three-dimensional (3D)-weighted cone beam filtered backprojection (CB-FBP) algorithm is proposed for image reconstruction in volumetric CT under helical source trajectory. In addition to its dependence on view and fan angles, the 3D weighting utilizes the cone angle dependency of a ray to improve reconstruction accuracy. The 3D weighting is ray-dependent and the underlying mechanism is to give a favourable weight to the ray with the smaller cone angle out of a pair of conjugate rays but an unfavourable weight to the ray with the larger cone angle out of the conjugate ray pair. The proposed 3D-weighted helical CB-FBP reconstruction algorithm is implemented in the cone-parallel geometry that can improve noise uniformity and image generation speed significantly. Under the cone-parallel geometry, the filtering is naturally carried out along the tangential direction of the helical source trajectory. By exploring the 3D weighting's dependence on cone angle, the proposed helical 3D-weighted CB-FBP reconstruction algorithm can provide significantly improved reconstruction accuracy at moderate cone angle and high helical pitches. The 3D-weighted CB-FBP algorithm is experimentally evaluated by computer-simulated phantoms and phantoms scanned by a diagnostic volumetric CT system with a detector dimension of 64 x 0.625 mm over various helical pitches. The computer simulation study shows that the 3D weighting enables the proposed algorithm to reach reconstruction accuracy comparable to that of exact CB reconstruction algorithms, such as the Katsevich algorithm, under a moderate cone angle (4 degrees) and various helical pitches. Meanwhile, the experimental evaluation using the phantoms scanned by a volumetric CT system shows that the spatial resolution along the z-direction and noise characteristics of the proposed 3D-weighted helical CB-FBP reconstruction algorithm are maintained very well in comparison to the FDK

  13. A three-dimensional-weighted cone beam filtered backprojection (CB-FBP) algorithm for image reconstruction in volumetric CT—helical scanning

    NASA Astrophysics Data System (ADS)

    Tang, Xiangyang; Hsieh, Jiang; Nilsen, Roy A.; Dutta, Sandeep; Samsonov, Dmitry; Hagiwara, Akira

    2006-02-01

    Based on the structure of the original helical FDK algorithm, a three-dimensional (3D)-weighted cone beam filtered backprojection (CB-FBP) algorithm is proposed for image reconstruction in volumetric CT under helical source trajectory. In addition to its dependence on view and fan angles, the 3D weighting utilizes the cone angle dependency of a ray to improve reconstruction accuracy. The 3D weighting is ray-dependent and the underlying mechanism is to give a favourable weight to the ray with the smaller cone angle out of a pair of conjugate rays but an unfavourable weight to the ray with the larger cone angle out of the conjugate ray pair. The proposed 3D-weighted helical CB-FBP reconstruction algorithm is implemented in the cone-parallel geometry that can improve noise uniformity and image generation speed significantly. Under the cone-parallel geometry, the filtering is naturally carried out along the tangential direction of the helical source trajectory. By exploring the 3D weighting's dependence on cone angle, the proposed helical 3D-weighted CB-FBP reconstruction algorithm can provide significantly improved reconstruction accuracy at moderate cone angle and high helical pitches. The 3D-weighted CB-FBP algorithm is experimentally evaluated by computer-simulated phantoms and phantoms scanned by a diagnostic volumetric CT system with a detector dimension of 64 × 0.625 mm over various helical pitches. The computer simulation study shows that the 3D weighting enables the proposed algorithm to reach reconstruction accuracy comparable to that of exact CB reconstruction algorithms, such as the Katsevich algorithm, under a moderate cone angle (4°) and various helical pitches. Meanwhile, the experimental evaluation using the phantoms scanned by a volumetric CT system shows that the spatial resolution along the z-direction and noise characteristics of the proposed 3D-weighted helical CB-FBP reconstruction algorithm are maintained very well in comparison to the FDK
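
    The conjugate-ray weighting described in the two records above can be sketched compactly. The quadratic form below is an illustrative assumption (the published weighting function is more elaborate), but it shows the normalized, cone-angle-dependent trade-off within a conjugate ray pair:

        import numpy as np

        def conjugate_ray_weights(cone_a, cone_b, order=2):
            """Return normalized weights (w_a, w_b) for a conjugate ray
            pair, favouring the ray with the smaller |cone angle|."""
            wa = np.abs(cone_b) ** order
            wb = np.abs(cone_a) ** order
            total = wa + wb
            if total == 0.0:          # both rays lie in the mid-plane
                return 0.5, 0.5
            return wa / total, wb / total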

  14. Why different passive microwave algorithms give different soil moisture retrievals

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Several algorithms have been used to retrieve surface soil moisture from brightness temperature observations provided by low frequency microwave satellite sensors such as the Advanced Microwave Scanning Radiometer on NASA EOS satellite Aqua (AMSR-E). Most of these algorithms have originated from the...

  15. Planning Readings: A Comparative Exploration of Basic Algorithms

    ERIC Educational Resources Information Center

    Piater, Justus H.

    2009-01-01

    Conventional introduction to computer science presents individual algorithmic paradigms in the context of specific, prototypical problems. To complement this algorithm-centric instruction, this study additionally advocates problem-centric instruction. I present an original problem drawn from students' life that is simply stated but provides rich…

  16. An Augmentation of G-Guidance Algorithms

    NASA Technical Reports Server (NTRS)

    Carson, John M. III; Acikmese, Behcet

    2011-01-01

    The original G-Guidance algorithm provided an autonomous guidance and control policy for small-body proximity operations that took into account uncertainty and dynamics disturbances. However, it lacked robustness with regard to object proximity while in autonomous mode. The modified G-Guidance algorithm was augmented with a second operational mode that allows switching into a safety hover mode. This causes the spacecraft to hover in place until a mission-planning algorithm can compute a safe new trajectory; no state or control constraints are violated. When a new, feasible state trajectory is calculated, the spacecraft returns to standard mode and maneuvers toward the target. The main goal of this augmentation is to protect the spacecraft in the event that a landing surface or obstacle is closer or farther than anticipated. The algorithm can be used to mitigate any unexpected trajectory or state changes that occur during standard-mode operations.
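
    A minimal sketch of the two-mode switching logic described above, assuming a simple range test and a replanning flag; the mode names and parameters are illustrative, not the flight implementation:

        from enum import Enum, auto

        class Mode(Enum):
            STANDARD = auto()      # nominal guidance toward the target
            SAFETY_HOVER = auto()  # hold position until replanning completes

        def update_mode(mode, obstacle_range, safe_range, new_trajectory_ready):
            """Switch to SAFETY_HOVER when the sensed range violates the
            safety margin; return to STANDARD only once the mission planner
            reports a feasible replacement trajectory."""
            if mode is Mode.STANDARD and obstacle_range < safe_range:
                return Mode.SAFETY_HOVER
            if mode is Mode.SAFETY_HOVER and new_trajectory_ready:
                return Mode.STANDARD
            return mode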

  17. Insect-Inspired Navigation Algorithm for an Aerial Agent Using Satellite Imagery

    PubMed Central

    Gaffin, Douglas D.; Dewar, Alexander; Graham, Paul; Philippides, Andrew

    2015-01-01

    Humans have long marveled at the ability of animals to navigate swiftly, accurately, and across long distances. Many mechanisms have been proposed for how animals acquire, store, and retrace learned routes, yet many of these hypotheses appear incongruent with behavioral observations and the animals’ neural constraints. The “Navigation by Scene Familiarity Hypothesis” proposed originally for insect navigation offers an elegantly simple solution for retracing previously experienced routes without the need for complex neural architectures and memory retrieval mechanisms. This hypothesis proposes that an animal can return to a target location by simply moving toward the most familiar scene at any given point. Proof of concept simulations have used computer-generated ant’s-eye views of the world, but here we test the ability of scene familiarity algorithms to navigate training routes across satellite images extracted from Google Maps. We find that Google satellite images are so rich in visual information that familiarity algorithms can be used to retrace even tortuous routes with low-resolution sensors. We discuss the implications of these findings not only for animal navigation but also for the potential development of visual augmentation systems and robot guidance algorithms. PMID:25874764
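
    The familiarity test at the core of the hypothesis is simple to sketch. Below, views are flattened grayscale vectors, memory_bank is a stored matrix of training views, and scans maps candidate headings to the views seen in those directions; all names are hypothetical:

        import numpy as np

        def familiarity(view, memory_bank):
            """Familiarity of a view: negative distance to the nearest
            stored training view (higher means more familiar)."""
            d = np.linalg.norm(memory_bank - view.ravel(), axis=1)
            return -d.min()

        def best_heading(scans, memory_bank):
            """Move toward the most familiar scene: pick the heading whose
            view best matches the training memory."""
            return max(scans, key=lambda h: familiarity(scans[h], memory_bank))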

  18. An Efficient Method of Key-Frame Extraction Based on a Cluster Algorithm

    PubMed Central

    Zhang, Qiang; Yu, Shao-Pei; Zhou, Dong-Sheng; Wei, Xiao-Peng

    2013-01-01

    This paper proposes a novel method of key-frame extraction for use with motion capture data. This method is based on an unsupervised cluster algorithm. First, the motion sequence is clustered into two classes by the similarity distance of the adjacent frames so that the thresholds needed in the next step can be determined adaptively. Second, a dynamic cluster algorithm called ISODATA is used to cluster all the frames and the frames nearest to the center of each class are automatically extracted as key-frames of the sequence. Unlike many other clustering techniques, the present improved cluster algorithm can automatically address different motion types without any need for specified parameters from users. The proposed method is capable of summarizing motion capture data reliably and efficiently. The present work also provides a meaningful comparison between the results of the proposed key-frame extraction technique and other previous methods. These results are evaluated in terms of metrics that measure reconstructed motion and the mean absolute error value, which are derived from the reconstructed data and the original data. PMID:24511336
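
    A condensed sketch of the extraction step, using k-means as a stand-in for the paper's ISODATA clustering (ISODATA additionally splits and merges clusters on its own, which is what removes the need for user-specified parameters):

        import numpy as np
        from sklearn.cluster import KMeans

        def extract_key_frames(frames, n_clusters=8):
            """Cluster motion-capture frames and return the index of the
            frame nearest each cluster centre as a key-frame."""
            X = np.asarray(frames, dtype=float)
            km = KMeans(n_clusters=n_clusters, n_init=10).fit(X)
            keys = [int(np.argmin(np.linalg.norm(X - c, axis=1)))
                    for c in km.cluster_centers_]
            return sorted(set(keys))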

  19. Detection algorithm for glass bottle mouth defect by continuous wavelet transform based on machine vision

    NASA Astrophysics Data System (ADS)

    Qian, Jinfang; Zhang, Changjiang

    2014-11-01

    An efficient algorithm based on the continuous wavelet transform combined with prior knowledge, which can be used to detect defects of the glass bottle mouth, is proposed. First, under a ball integral light source, a reference image of a defect-free bottle mouth is acquired by a Japanese Computar camera through an IEEE-1394b interface. A single-threshold method based on the gray-level histogram is used to obtain the binary image of the bottle mouth. To suppress noise efficiently, a moving-average filter is employed to smooth the histogram of the original bottle mouth image, and the continuous wavelet transform is then applied to determine the segmentation threshold accurately. Mathematical morphology operations are used to obtain the normal binary bottle mouth mask. A bottle to be inspected is moved into the detection zone by a conveyor belt, and its mouth image and binary image are obtained by the above method. The binary image is multiplied with the normal bottle mask to yield a region of interest. Four parameters (number of connected regions, centroid coordinates, diameter of the inner circle, and area of the annular region) are computed from the region of interest, and detection rules based on these four parameters are designed to detect and identify defect conditions of the bottle mouth accurately. Finally, glass bottles from the Coca-Cola Company are used to verify the proposed algorithm. The experimental results show that the proposed algorithm can accurately detect defect conditions of the glass bottles, with 98% detection accuracy.
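
    A schematic of the inspection stage in Python with OpenCV; the threshold value is an assumption (the paper derives it from a wavelet analysis of the smoothed histogram), and only the region-counting and area features are shown:

        import cv2
        import numpy as np

        def mouth_features(gray, normal_mask, thresh=128):
            """Binarize the bottle-mouth image, mask it with the normal
            (defect-free) mask, and extract simple region features."""
            _, binary = cv2.threshold(gray, thresh, 255, cv2.THRESH_BINARY)
            roi = cv2.bitwise_and(binary, normal_mask)
            roi = cv2.morphologyEx(roi, cv2.MORPH_OPEN,
                                   np.ones((3, 3), np.uint8))
            n, _, stats, centroids = cv2.connectedComponentsWithStats(roi)
            area = int(stats[1:, cv2.CC_STAT_AREA].sum()) if n > 1 else 0
            return {"regions": n - 1, "centroids": centroids[1:], "area": area}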

  20. Protein folding simulations of the hydrophobic-hydrophilic model by combining tabu search with genetic algorithms

    NASA Astrophysics Data System (ADS)

    Jiang, Tianzi; Cui, Qinghua; Shi, Guihua; Ma, Songde

    2003-08-01

    In this paper, a novel hybrid algorithm combining genetic algorithms and tabu search is presented. In the proposed hybrid algorithm, the idea of tabu search is applied to the crossover operator. We demonstrate that the hybrid algorithm can be applied successfully to the protein folding problem based on a hydrophobic-hydrophilic lattice model. The results show that in all cases the hybrid algorithm works better than a genetic algorithm alone. A comparison with other methods is also made.
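
    For context, the fitness function in hydrophobic-hydrophilic (HP) lattice folding is easy to state: count hydrophobic contacts between non-consecutive residues. A minimal 2-D version is shown below; the paper's GA with tabu-augmented crossover searches over conformations scored by such a function:

        def hp_energy(sequence, path):
            """Energy of a 2-D HP conformation: -1 for each pair of
            non-consecutive H residues on adjacent lattice sites.
            path is a self-avoiding list of (x, y) positions."""
            pos = {p: i for i, p in enumerate(path)}
            energy = 0
            for i, (x, y) in enumerate(path):
                if sequence[i] != 'H':
                    continue
                for nb in ((x + 1, y), (x, y + 1)):  # count each pair once
                    j = pos.get(nb)
                    if j is not None and sequence[j] == 'H' and abs(i - j) > 1:
                        energy -= 1
            return energy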

  1. Algorithmic Perspectives on Problem Formulations in MDO

    NASA Technical Reports Server (NTRS)

    Alexandrov, Natalia M.; Lewis, Robert Michael

    2000-01-01

    This work is concerned with an approach to formulating the multidisciplinary optimization (MDO) problem that reflects an algorithmic perspective on MDO problem solution. The algorithmic perspective focuses on formulating the problem in light of the abilities and inabilities of optimization algorithms, so that the resulting nonlinear programming problem can be solved reliably and efficiently by conventional optimization techniques. We propose a modular approach to formulating MDO problems that takes advantage of the problem structure, maximizes the autonomy of implementation, and allows for multiple easily interchangeable problem statements to be used depending on the available resources and the characteristics of the application problem.

  2. Complexity of the Quantum Adiabatic Algorithm

    NASA Technical Reports Server (NTRS)

    Hen, Itay

    2013-01-01

    The Quantum Adiabatic Algorithm (QAA) has been proposed as a mechanism for efficiently solving optimization problems on a quantum computer. Since adiabatic computation is analog in nature and does not require the design and use of quantum gates, it can be thought of as a simpler and perhaps more profound method for performing quantum computations that might also be easier to implement experimentally. While these features have generated substantial research in QAA, to date there is still a lack of solid evidence that the algorithm can outperform classical optimization algorithms.

  3. Robust and low complexity localization algorithm based on head-related impulse responses and interaural time difference.

    PubMed

    Wan, Xinwang; Liang, Juan

    2013-01-01

    This article introduces a biologically inspired localization algorithm using two microphones, for a mobile robot. The proposed algorithm has two steps. First, the coarse azimuth angle of the sound source is estimated by cross-correlation algorithm based on interaural time difference. Then, the accurate azimuth angle is obtained by cross-channel algorithm based on head-related impulse responses. The proposed algorithm has lower computational complexity compared to the cross-channel algorithm. Experimental results illustrate that the localization performance of the proposed algorithm is better than those of the cross-correlation and cross-channel algorithms. PMID:23298016
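
    The first (coarse) stage is a textbook interaural-time-difference estimate; a minimal free-field sketch, with microphone spacing and the speed of sound as assumed parameters:

        import numpy as np

        def coarse_azimuth(left, right, fs, mic_distance=0.18, c=343.0):
            """Coarse azimuth (degrees) from the ITD estimated by
            cross-correlation of the two microphone signals."""
            corr = np.correlate(left, right, mode="full")
            lag = np.argmax(corr) - (len(right) - 1)
            itd = lag / fs
            s = np.clip(itd * c / mic_distance, -1.0, 1.0)
            return np.degrees(np.arcsin(s))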

  4. An algorithmic approach to crustal deformation analysis

    NASA Technical Reports Server (NTRS)

    Iz, Huseyin Baki

    1987-01-01

    In recent years the analysis of crustal deformation measurements has become important as a result of current improvements in geodetic methods and an increasing amount of theoretical and observational data provided by several earth sciences. A first-generation data analysis algorithm which combines a priori information with current geodetic measurements was proposed. Relevant methods which can be used in the algorithm were discussed. Prior information is the unifying feature of this algorithm. Some of the problems which may arise through the use of a priori information in the analysis were indicated and preventive measures were demonstrated. The first step in the algorithm is the optimal design of deformation networks. The second step in the algorithm identifies the descriptive model of the deformation field. The final step in the algorithm is the improved estimation of deformation parameters. Although deformation parameters are estimated in the process of model discrimination, they can further be improved by the use of a priori information about them. According to the proposed algorithm this information must first be tested against the estimates calculated using the sample data only. Null-hypothesis testing procedures were developed for this purpose. Six different estimators which employ a priori information were examined. Emphasis was put on the case when the prior information is wrong and analytical expressions for possible improvements under incompatible prior information were derived.

  5. Image segmentation using an improved differential algorithm

    NASA Astrophysics Data System (ADS)

    Gao, Hao; Shi, Yujiao; Wu, Dongmei

    2014-10-01

    Among all existing segmentation techniques, thresholding is one of the most popular due to its simplicity, robustness, and accuracy (e.g. the maximum entropy method, Otsu's method, and K-means clustering). However, the computation time of these algorithms grows exponentially with the number of thresholds because of their exhaustive search strategy. As a population-based optimization algorithm, differential evolution (DE) uses a population of potential solutions and decision-making processes. It has shown considerable success in solving complex optimization problems within a reasonable time limit; applying it to segmentation is therefore a good choice due to its fast computation. In this paper, we first propose a new DE algorithm with a balance strategy, which seeks a balance between the exploration of new regions and the exploitation of already sampled regions. We then apply the new DE to the traditional Otsu method to shorten its computation time. Experimental results on a variety of images show that, compared with EA-based thresholding methods, the proposed DE algorithm produces more effective and efficient results, and it shortens the computation time of the traditional Otsu method.
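
    A compact illustration of the idea, assuming a plain DE/rand/1/bin search that maximizes Otsu's between-class variance over k thresholds; the paper's exploration/exploitation balance strategy is omitted, and all parameter values are illustrative:

        import numpy as np

        def de_otsu_thresholds(hist, k=2, pop=30, gens=100, F=0.5, CR=0.9, seed=0):
            """Search k gray-level thresholds with basic differential
            evolution; hist is a 256-bin grayscale histogram."""
            rng = np.random.default_rng(seed)
            p = hist / hist.sum()
            levels = np.arange(256)

            def objective(t):
                # Sum of w_i * mu_i^2 equals the between-class variance
                # up to a constant, so maximizing it is equivalent.
                edges = np.concatenate(([0], np.sort(t.astype(int)), [256]))
                total = 0.0
                for a, b in zip(edges[:-1], edges[1:]):
                    w = p[a:b].sum()
                    if w > 0:
                        mu = (levels[a:b] * p[a:b]).sum() / w
                        total += w * mu * mu
                return total

            X = rng.integers(1, 255, size=(pop, k)).astype(float)
            fit = np.array([objective(x) for x in X])
            for _ in range(gens):
                for i in range(pop):
                    a, b, c = X[rng.choice(pop, 3, replace=False)]
                    trial = np.where(rng.random(k) < CR, a + F * (b - c), X[i])
                    trial = np.clip(trial, 1, 254)
                    f = objective(trial)
                    if f > fit[i]:
                        X[i], fit[i] = trial, f
            return np.sort(X[np.argmax(fit)].astype(int))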

  6. Algorithm for Public Electric Transport Schedule Control for Intelligent Embedded Devices

    NASA Astrophysics Data System (ADS)

    Alps, Ivars; Potapov, Andrey; Gorobetz, Mikhail; Levchenkov, Anatoly

    2010-01-01

    In this paper the authors present a heuristic algorithm for precise schedule fulfilment in city traffic conditions, taking traffic lights into account. The algorithm is designed for a programmable logic controller (PLC) installed in an electric vehicle, which controls the vehicle's motion speed in response to traffic light signals. The algorithm is tested using a real controller connected to virtual devices and to functional models of real tram devices. Experimental results show high precision of public transport schedule fulfilment using the proposed algorithm.

  7. Optimal battery sizing in photovoltaic based distributed generation using enhanced opposition-based firefly algorithm for voltage rise mitigation.

    PubMed

    Wong, Ling Ai; Shareef, Hussain; Mohamed, Azah; Ibrahim, Ahmad Asrul

    2014-01-01

    This paper presents the application of enhanced opposition-based firefly algorithm in obtaining the optimal battery energy storage systems (BESS) sizing in photovoltaic generation integrated radial distribution network in order to mitigate the voltage rise problem. Initially, the performance of the original firefly algorithm is enhanced by utilizing the opposition-based learning and introducing inertia weight. After evaluating the performance of the enhanced opposition-based firefly algorithm (EOFA) with fifteen benchmark functions, it is then adopted to determine the optimal size for BESS. Two optimization processes are conducted where the first optimization aims to obtain the optimal battery output power on hourly basis and the second optimization aims to obtain the optimal BESS capacity by considering the state of charge constraint of BESS. The effectiveness of the proposed method is validated by applying the algorithm to the 69-bus distribution system and by comparing the performance of EOFA with conventional firefly algorithm and gravitational search algorithm. Results show that EOFA has the best performance comparatively in terms of mitigating the voltage rise problem. PMID:25054184

  8. Optimal Battery Sizing in Photovoltaic Based Distributed Generation Using Enhanced Opposition-Based Firefly Algorithm for Voltage Rise Mitigation

    PubMed Central

    Wong, Ling Ai; Shareef, Hussain; Mohamed, Azah; Ibrahim, Ahmad Asrul

    2014-01-01

    This paper presents the application of enhanced opposition-based firefly algorithm in obtaining the optimal battery energy storage systems (BESS) sizing in photovoltaic generation integrated radial distribution network in order to mitigate the voltage rise problem. Initially, the performance of the original firefly algorithm is enhanced by utilizing the opposition-based learning and introducing inertia weight. After evaluating the performance of the enhanced opposition-based firefly algorithm (EOFA) with fifteen benchmark functions, it is then adopted to determine the optimal size for BESS. Two optimization processes are conducted where the first optimization aims to obtain the optimal battery output power on hourly basis and the second optimization aims to obtain the optimal BESS capacity by considering the state of charge constraint of BESS. The effectiveness of the proposed method is validated by applying the algorithm to the 69-bus distribution system and by comparing the performance of EOFA with conventional firefly algorithm and gravitational search algorithm. Results show that EOFA has the best performance comparatively in terms of mitigating the voltage rise problem. PMID:25054184
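
    The opposition-based learning step is easy to sketch: mirror each candidate within the search bounds and keep the better half. This is a schematic of the enhancement only (the inertia weight and the firefly moves themselves are omitted; minimization is assumed):

        import numpy as np

        def opposition_init(obj, n, lo, hi, seed=0):
            """Sample n candidates, mirror each to lo + hi - x, and keep
            the n best of the 2n candidates."""
            rng = np.random.default_rng(seed)
            lo, hi = np.asarray(lo, float), np.asarray(hi, float)
            X = rng.uniform(lo, hi, size=(n, lo.size))
            both = np.vstack([X, lo + hi - X])
            scores = np.apply_along_axis(obj, 1, both)
            return both[np.argsort(scores)[:n]]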

  9. Historical Development of Origins Research

    PubMed Central

    Lazcano, Antonio

    2010-01-01

    Following the publication of the Origin of Species in 1859, many naturalists adopted the idea that living organisms were the historical outcome of gradual transformation of lifeless matter. These views soon merged with the developments of biochemistry and cell biology and led to proposals in which the origin of protoplasm was equated with the origin of life. The heterotrophic origin of life proposed by Oparin and Haldane in the 1920s was part of this tradition, which Oparin enriched by transforming the discussion of the emergence of the first cells into a workable multidisciplinary research program. On the other hand, the scientific trend toward understanding biological phenomena at the molecular level led authors like Troland, Muller, and others to propose that single molecules or viruses represented primordial living systems. The contrast between these opposing views on the origin of life represents not only contrasting views of the nature of life itself, but also major ideological discussions that reached a surprising intensity in the years following Stanley Miller’s seminal result which showed the ease with which organic compounds of biochemical significance could be synthesized under putative primitive conditions. In fact, during the years following the Miller experiment, attempts to understand the origin of life were strongly influenced by research on DNA replication and protein biosynthesis, and, in socio-political terms, by the atmosphere created by Cold War tensions. The catalytic versatility of RNA molecules clearly merits a critical reappraisal of Muller’s viewpoint. However, the discovery of ribozymes does not imply that autocatalytic nucleic acid molecules ready to be used as primordial genes were floating in the primitive oceans, or that the RNA world emerged completely assembled from simple precursors present in the prebiotic soup. The evidence supporting the presence of a wide range of organic molecules on the primitive Earth, including membrane

  10. A cross-layer optimization algorithm for wireless sensor network

    NASA Astrophysics Data System (ADS)

    Wang, Yan; Liu, Le Qing

    2010-07-01

    Energy is critical for typical wireless sensor networks (WSN), and reducing energy consumption while maximizing network lifetime is a big challenge; cross-layer algorithms are the main method for addressing this problem. In this paper, we first analyse current layer-based optimization methods in wireless sensor networks and summarize the physical, link and routing optimization techniques. Secondly, we compare strategies used in cross-layer optimization algorithms. Based on this analysis and a summary of current lifetime algorithms in wireless sensor networks, a cross-layer optimization algorithm is proposed. This optimization algorithm is then adopted to improve the traditional LEACH routing protocol. Simulation results show that this algorithm is an excellent cross-layer algorithm for reducing energy consumption.

  11. A Cross Unequal Clustering Routing Algorithm for Sensor Network

    NASA Astrophysics Data System (ADS)

    Tong, Wang; Jiyi, Wu; He, Xu; Jinghua, Zhu; Munyabugingo, Charles

    2013-08-01

    In clustering routing protocols for wireless sensor networks, the cluster size is generally fixed, which can easily lead to the "hot spot" problem. Furthermore, the majority of routing algorithms barely consider the problem of long-distance communication between adjacent cluster heads, which brings high energy consumption. Therefore, this paper proposes a new cross unequal clustering routing algorithm based on the EEUC algorithm. To address the defects of EEUC, the calculation of the competition radius takes both the node's position and its remaining energy into account to balance the load of cluster heads. At the same time, adjacent-cluster nodes are used to transport data, reducing the energy loss of cluster heads. Simulation experiments show that, compared with LEACH and EEUC, the proposed algorithm can effectively reduce the energy loss of cluster heads, balance the energy consumption among all nodes in the network, and improve the network lifetime.
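
    A sketch of the modified competition-radius computation described above: the EEUC-style distance term is extended with a residual-energy term. The weights alpha and beta are illustrative assumptions, not the paper's values:

        def competition_radius(d_to_bs, d_max, d_min, energy, e_max,
                               r_max=80.0, alpha=0.5, beta=0.3):
            """Smaller radius for nodes closer to the base station (they
            relay more traffic) and for nodes with less residual energy."""
            span = max(d_max - d_min, 1e-9)
            scale = (1.0 - alpha * (d_max - d_to_bs) / span
                     - beta * (1.0 - energy / e_max))
            return r_max * max(scale, 0.0)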

  12. An efficient algorithm for estimating noise covariances in distributed systems

    NASA Technical Reports Server (NTRS)

    Dee, D. P.; Cohn, S. E.; Ghil, M.; Dalcher, A.

    1985-01-01

    An efficient computational algorithm for estimating the noise covariance matrices of large linear discrete stochastic-dynamic systems is presented. Such systems arise typically by discretizing distributed-parameter systems, and their size renders computational efficiency a major consideration. The proposed adaptive filtering algorithm is based on the ideas of Belanger, and is algebraically equivalent to his algorithm. The earlier algorithm, however, has computational complexity proportional to p^6, where p is the number of observations of the system state, while the new algorithm has complexity proportional to only p^3. Further, the formulation of noise covariance estimation as a secondary filter, analogous to state estimation as a primary filter, suggests several generalizations of the earlier algorithm. The performance of the proposed algorithm is demonstrated for a distributed system arising in numerical weather prediction.

  13. A MARKOV CHAIN MONTE CARLO ALGORITHM FOR ANALYSIS OF LOW SIGNAL-TO-NOISE COSMIC MICROWAVE BACKGROUND DATA

    SciTech Connect

    Jewell, J. B.; O'Dwyer, I. J.; Huey, Greg; Gorski, K. M.; Eriksen, H. K.; Wandelt, B. D. E-mail: h.k.k.eriksen@astro.uio.no

    2009-05-20

    We present a new Markov Chain Monte Carlo (MCMC) algorithm for cosmic microwave background (CMB) analysis in the low signal-to-noise regime. This method builds on and complements the previously described CMB Gibbs sampler, and effectively solves the low signal-to-noise inefficiency problem of the direct Gibbs sampler. The new algorithm is a simple Metropolis-Hastings sampler with a general proposal rule for the power spectrum, C_l, followed by a particular deterministic rescaling operation of the sky signal, s. The acceptance probability for this joint move depends on the sky map only through the difference of χ² between the original and proposed sky sample, which is close to unity in the low signal-to-noise regime. The algorithm is completed by alternating this move with a standard Gibbs move. Together, these two proposals constitute a computationally efficient algorithm for mapping out the full joint CMB posterior, both in the high and low signal-to-noise regimes.
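
    A schematic of the accept/reject test for the joint move, assuming a Gaussian likelihood so that the log acceptance ratio reduces to the χ² difference quoted above (a sketch, not the authors' code):

        import numpy as np

        def accept_joint_move(chi2_current, chi2_proposed, rng):
            """Metropolis-Hastings test for the joint (C_l, s) proposal:
            the ratio depends on the sky map only through the chi-squared
            difference between proposed and current samples."""
            log_ratio = -0.5 * (chi2_proposed - chi2_current)
            return np.log(rng.random()) < log_ratio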

  14. A floor-map-aided WiFi/pseudo-odometry integration algorithm for an indoor positioning system.

    PubMed

    Wang, Jian; Hu, Andong; Liu, Chunyan; Li, Xin

    2015-01-01

    This paper proposes a scheme for indoor positioning by fusing floor map, WiFi and smartphone sensor data to provide meter-level positioning without additional infrastructure. A topology-constrained K nearest neighbor (KNN) algorithm based on a floor map layout provides the coordinates required to integrate WiFi data with pseudo-odometry (P-O) measurements simulated using a pedestrian dead reckoning (PDR) approach. One method of further improving the positioning accuracy is to use a more effective multi-threshold step detection algorithm, as proposed by the authors. The "go and back" phenomenon caused by incorrect matching of the reference points (RPs) of a WiFi algorithm is eliminated using an adaptive fading-factor-based extended Kalman filter (EKF), taking WiFi positioning coordinates, P-O measurements and fused heading angles as observations. The "cross-wall" problem is solved based on the development of a floor-map-aided particle filter algorithm by weighting the particles, thereby also eliminating the gross-error effects originating from WiFi or P-O measurements. The performance observed in a field experiment performed on the fourth floor of the School of Environmental Science and Spatial Informatics (SESSI) building on the China University of Mining and Technology (CUMT) campus confirms that the proposed scheme can reliably achieve meter-level positioning. PMID:25811224

  15. A Floor-Map-Aided WiFi/Pseudo-Odometry Integration Algorithm for an Indoor Positioning System

    PubMed Central

    Wang, Jian; Hu, Andong; Liu, Chunyan; Li, Xin

    2015-01-01

    This paper proposes a scheme for indoor positioning by fusing floor map, WiFi and smartphone sensor data to provide meter-level positioning without additional infrastructure. A topology-constrained K nearest neighbor (KNN) algorithm based on a floor map layout provides the coordinates required to integrate WiFi data with pseudo-odometry (P-O) measurements simulated using a pedestrian dead reckoning (PDR) approach. One method of further improving the positioning accuracy is to use a more effective multi-threshold step detection algorithm, as proposed by the authors. The “go and back” phenomenon caused by incorrect matching of the reference points (RPs) of a WiFi algorithm is eliminated using an adaptive fading-factor-based extended Kalman filter (EKF), taking WiFi positioning coordinates, P-O measurements and fused heading angles as observations. The “cross-wall” problem is solved based on the development of a floor-map-aided particle filter algorithm by weighting the particles, thereby also eliminating the gross-error effects originating from WiFi or P-O measurements. The performance observed in a field experiment performed on the fourth floor of the School of Environmental Science and Spatial Informatics (SESSI) building on the China University of Mining and Technology (CUMT) campus confirms that the proposed scheme can reliably achieve meter-level positioning. PMID:25811224
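
    One component that is easy to sketch is a multi-threshold step detector over the accelerometer norm: a step is a local peak whose height lies between two bounds and which occurs at least a minimum interval after the previous step. All threshold values below are assumptions, not the paper's tuning:

        import numpy as np

        def detect_steps(acc_norm, fs, min_peak=1.2, max_peak=8.0, min_gap=0.3):
            """Return sample indices of detected steps in the acceleration
            magnitude signal acc_norm sampled at fs Hz."""
            steps, last_t = [], -np.inf
            for i in range(1, len(acc_norm) - 1):
                a, t = acc_norm[i], i / fs
                if (acc_norm[i - 1] < a >= acc_norm[i + 1]
                        and min_peak <= a <= max_peak
                        and t - last_t >= min_gap):
                    steps.append(i)
                    last_t = t
            return steps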

  16. Minimalist ensemble algorithms for genome-wide protein localization prediction

    PubMed Central

    2012-01-01

    Background Computational prediction of protein subcellular localization can greatly help to elucidate its functions. Despite the existence of dozens of protein localization prediction algorithms, the prediction accuracy and coverage are still low. Several ensemble algorithms have been proposed to improve the prediction performance, which usually include as many as 10 or more individual localization algorithms. However, their performance is still limited by the running complexity and redundancy among individual prediction algorithms. Results This paper proposed a novel method for rational design of minimalist ensemble algorithms for practical genome-wide protein subcellular localization prediction. The algorithm is based on combining a feature selection based filter and a logistic regression classifier. Using a novel concept of contribution scores, we analyzed issues of algorithm redundancy, consensus mistakes, and algorithm complementarity in designing ensemble algorithms. We applied the proposed minimalist logistic regression (LR) ensemble algorithm to two genome-wide datasets of Yeast and Human and compared its performance with current ensemble algorithms. Experimental results showed that the minimalist ensemble algorithm can achieve high prediction accuracy with only 1/3 to 1/2 of individual predictors of current ensemble algorithms, which greatly reduces computational complexity and running time. It was found that the high performance ensemble algorithms are usually composed of the predictors that together cover most of available features. Compared to the best individual predictor, our ensemble algorithm improved the prediction accuracy from AUC score of 0.558 to 0.707 for the Yeast dataset and from 0.628 to 0.646 for the Human dataset. Compared with popular weighted voting based ensemble algorithms, our classifier-based ensemble algorithms achieved much better performance without suffering from inclusion of too many individual predictors. Conclusions We
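
    In outline, the minimalist ensemble is a feature-selection filter in front of a logistic regression over the individual predictors' scores. A schematic with scikit-learn follows; the paper's exact filter and contribution-score analysis are not reproduced here:

        from sklearn.feature_selection import SelectKBest, f_classif
        from sklearn.linear_model import LogisticRegression
        from sklearn.pipeline import make_pipeline

        def minimalist_ensemble(pred_scores, labels, k=3):
            """Keep the k most informative individual predictors and stack
            them with logistic regression.
            pred_scores: (n_proteins, n_predictors) matrix of scores."""
            model = make_pipeline(SelectKBest(f_classif, k=k),
                                  LogisticRegression(max_iter=1000))
            return model.fit(pred_scores, labels)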

  17. An Improved Back Propagation Neural Network Algorithm on Classification Problems

    NASA Astrophysics Data System (ADS)

    Nawi, Nazri Mohd; Ransing, R. S.; Salleh, Mohd Najib Mohd; Ghazali, Rozaida; Hamid, Norhamreeza Abdul

    The back-propagation algorithm is one of the most popular algorithms for training feed-forward neural networks. However, its convergence is slow, mainly because it relies on gradient descent. Previous research demonstrated that in the feed-forward computation, the slope of the activation function is directly influenced by a parameter referred to as the 'gain'. This research proposes an algorithm for improving the performance of back propagation by introducing an adaptive gain for the activation function, with the gain value changing adaptively for each node. The influence of the adaptive gain on the learning ability of a neural network is analysed, and multi-layer feed-forward neural networks are assessed. A physical interpretation of the relationship between the gain value, the learning rate, and the weight values is given. The efficiency of the proposed algorithm is compared with the conventional gradient descent method and verified by simulation on four classification problems. The simulation results demonstrate that the proposed method converged faster on the Wisconsin breast cancer data set with an improvement ratio of nearly 2.8, was 1.76 times faster on the diabetes problem, 65% better on the thyroid data sets, and 97% faster on the IRIS classification problem. The results clearly show that the proposed algorithm significantly improves the learning speed of the conventional back-propagation algorithm.
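
    A minimal sketch of the gain-scaled activation and its gradient for a logistic unit; the per-node update form is illustrative:

        import numpy as np

        def sigmoid(net, gain):
            """Logistic activation with an explicit gain; the gain scales
            the slope of the activation around net = 0."""
            return 1.0 / (1.0 + np.exp(-gain * net))

        def gain_gradient(err, a, net):
            """dE/d(gain) for a = sigmoid(gain * net) by the chain rule:
            err is dE/da and da/d(gain) = a * (1 - a) * net."""
            return err * a * (1.0 - a) * net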

  18. RNA-RNA interaction prediction using genetic algorithm

    PubMed Central

    2014-01-01

    Background RNA-RNA interaction plays an important role in the regulation of gene expression and cell development. In this process, an RNA molecule prohibits the translation of another RNA molecule by establishing stable interactions with it. In the RNA-RNA interaction prediction problem, two RNA sequences are given as inputs and the goal is to find the optimal secondary structure of two RNAs and between them. Some different algorithms have been proposed to predict RNA-RNA interaction structure. However, most of them suffer from high computational time. Results In this paper, we introduce a novel genetic algorithm called GRNAs to predict the RNA-RNA interaction. The proposed algorithm is performed on some standard datasets with appropriate accuracy and lower time complexity in comparison to the other state-of-the-art algorithms. In the proposed algorithm, each individual is a secondary structure of two interacting RNAs. The minimum free energy is considered as a fitness function for each individual. In each generation, the algorithm is converged to find the optimal secondary structure (minimum free energy structure) of two interacting RNAs by using crossover and mutation operations. Conclusions This algorithm is properly employed for joint secondary structure prediction. The results achieved on a set of known interacting RNA pairs are compared with the other related algorithms and the effectiveness and validity of the proposed algorithm have been demonstrated. It has been shown that time complexity of the algorithm in each iteration is as efficient as the other approaches. PMID:25114714

  19. Constrained Multiobjective Biogeography Optimization Algorithm

    PubMed Central

    Mo, Hongwei; Xu, Zhidan; Xu, Lifang; Wu, Zhou; Ma, Haiping

    2014-01-01

    Multiobjective optimization involves minimizing or maximizing multiple objective functions subject to a set of constraints. In this study, a novel constrained multiobjective biogeography optimization algorithm (CMBOA) is proposed. It is the first biogeography optimization algorithm for constrained multiobjective optimization. In CMBOA, a disturbance migration operator is designed to generate diverse feasible individuals in order to promote the diversity of individuals on the Pareto front. Infeasible individuals near the feasible region are evolved toward feasibility by recombining with their nearest nondominated feasible individuals. The convergence of CMBOA is proved using probability theory. The performance of CMBOA is evaluated on a set of 6 benchmark problems; experimental results show that CMBOA performs better than or comparably to the classical NSGA-II and IS-MOEA. PMID:25006591

  20. Constrained multiobjective biogeography optimization algorithm.

    PubMed

    Mo, Hongwei; Xu, Zhidan; Xu, Lifang; Wu, Zhou; Ma, Haiping

    2014-01-01

    Multiobjective optimization involves minimizing or maximizing multiple objective functions subject to a set of constraints. In this study, a novel constrained multiobjective biogeography optimization algorithm (CMBOA) is proposed. It is the first biogeography optimization algorithm for constrained multiobjective optimization. In CMBOA, a disturbance migration operator is designed to generate diverse feasible individuals in order to promote the diversity of individuals on the Pareto front. Infeasible individuals near the feasible region are evolved toward feasibility by recombining with their nearest nondominated feasible individuals. The convergence of CMBOA is proved using probability theory. The performance of CMBOA is evaluated on a set of 6 benchmark problems; experimental results show that CMBOA performs better than or comparably to the classical NSGA-II and IS-MOEA. PMID:25006591

  1. A possible hypercomputational quantum algorithm

    NASA Astrophysics Data System (ADS)

    Sicard, Andres; Velez, Mario; Ospina, Juan

    2005-05-01

    The term 'hypermachine' denotes any data processing device (theoretical or physically implementable) capable of carrying out tasks that cannot be performed by a Turing machine. We present a possible quantum algorithm for a classically non-computable decision problem, Hilbert's tenth problem; more specifically, we present a possible hypercomputation model based on quantum computation. Our algorithm is inspired by the one proposed by Tien D. Kieu, but we have selected the infinite square well instead of the (one-dimensional) simple harmonic oscillator as the underlying physical system. Our model exploits the quantum adiabatic process and the characteristics of the representation of the dynamical Lie algebra su(1,1) associated to the infinite square well.

  2. MUSIC algorithms for rebar detection

    NASA Astrophysics Data System (ADS)

    Solimene, Raffaele; Leone, Giovanni; Dell'Aversano, Angela

    2013-12-01

    The MUSIC (MUltiple SIgnal Classification) algorithm is employed to detect and localize an unknown number of scattering objects which are small in size as compared to the wavelength. The ensemble of objects to be detected consists of both strong and weak scatterers. This represents a scattering environment challenging for detection purposes as strong scatterers tend to mask the weak ones. Consequently, the detection of more weakly scattering objects is not always guaranteed and can be completely impaired when the noise corrupting data is of a relatively high level. To overcome this drawback, here a new technique is proposed, starting from the idea of applying a two-stage MUSIC algorithm. In the first stage strong scatterers are detected. Then, information concerning their number and location is employed in the second stage focusing only on the weak scatterers. The role of an adequate scattering model is emphasized to improve drastically detection performance in realistic scenarios.
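
    For reference, the core MUSIC computation on which the two-stage scheme builds: project candidate steering vectors onto the noise subspace of the data covariance and look for peaks of the pseudospectrum (a generic sketch, not the authors' two-stage code):

        import numpy as np

        def music_spectrum(R, steering, n_sources):
            """MUSIC pseudospectrum.
            R: (m, m) data covariance; steering: (m, n_grid) candidate
            steering vectors; n_sources: assumed number of scatterers."""
            w, V = np.linalg.eigh(R)                 # ascending eigenvalues
            En = V[:, : R.shape[0] - n_sources]      # noise subspace
            proj = En.conj().T @ steering
            return 1.0 / np.linalg.norm(proj, axis=0) ** 2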

  3. Electricity Load Forecasting Using Support Vector Regression with Memetic Algorithms

    PubMed Central

    Hu, Zhongyi; Xiong, Tao

    2013-01-01

    Electricity load forecasting is an important issue that is widely explored and examined in the power systems operation literature as well as in the literature on commercial transactions in electricity markets. Among existing forecasting models, support vector regression (SVR) has gained much attention. Because the performance of SVR depends strongly on its parameters, this study proposed a firefly algorithm (FA) based memetic algorithm (FA-MA) to appropriately determine the parameters of the SVR forecasting model. In the proposed FA-MA algorithm, the FA is applied to explore the solution space, and pattern search is used to conduct individual learning and thus enhance the exploitation of the FA. Experimental results confirm that the proposed FA-MA based SVR model can not only yield more accurate forecasting results than the other four evolutionary-algorithm-based SVR models and three well-known forecasting models, but also outperform the hybrid algorithms in the related existing literature. PMID:24459425

  4. A novel fitness evaluation method for evolutionary algorithms

    NASA Astrophysics Data System (ADS)

    Wang, Ji-feng; Tang, Ke-zong

    2013-03-01

    Fitness evaluation is a crucial task in evolutionary algorithms because it can affect both the convergence speed and the quality of the final solution, and these algorithms may require huge computation power for solving nonlinear programming problems. This paper proposes a novel fitness evaluation approach which employs similarity-based learning embedded in a classical differential evolution (SDE) to evaluate all new individuals. Each individual consists of three elements: a parameter vector (v), a fitness value (f), and a reliability value (r). The fitness f is estimated using the proposed approach, and only when the reliability r is below a threshold is f calculated using the true fitness function. Moreover, applying an error compensation system to the proposed algorithm further enhances its performance, bringing the estimated fitness much closer to the true fitness value for each new child. Simulation results over a comprehensive set of benchmark functions show that the convergence rate of the proposed algorithm is much faster than that of the compared algorithms.

  5. Algorithmic Mechanism Design of Evolutionary Computation

    PubMed Central

    Pei, Yan

    2015-01-01

    We consider algorithmic design, enhancement, and improvement of evolutionary computation as a mechanism design problem. All individuals or several groups of individuals can be considered as self-interested agents. The individuals in evolutionary computation can manipulate parameter settings and operations by satisfying their own preferences, which are defined by an evolutionary computation algorithm designer, rather than by following a fixed algorithm rule. Evolutionary computation algorithm designers or self-adaptive methods should construct proper rules and mechanisms for all agents (individuals) to conduct their evolution behaviour correctly in order to definitely achieve the desired and preset objective(s). As a case study, we propose a formal framework on parameter setting, strategy selection, and algorithmic design of evolutionary computation by considering the Nash strategy equilibrium of a mechanism design in the search process. The evaluation results present the efficiency of the framework. This primary principle can be implemented in any evolutionary computation algorithm that needs to consider strategy selection issues in its optimization process. The final objective of our work is to solve evolutionary computation design as an algorithmic mechanism design problem and establish its fundamental aspect by taking this perspective. This paper is the first step towards achieving this objective by implementing a strategy equilibrium solution (such as Nash equilibrium) in evolutionary computation algorithm. PMID:26257777

  6. Algorithmic Mechanism Design of Evolutionary Computation.

    PubMed

    Pei, Yan

    2015-01-01

    We consider algorithmic design, enhancement, and improvement of evolutionary computation as a mechanism design problem. All individuals or several groups of individuals can be considered as self-interested agents. The individuals in evolutionary computation can manipulate parameter settings and operations by satisfying their own preferences, which are defined by an evolutionary computation algorithm designer, rather than by following a fixed algorithm rule. Evolutionary computation algorithm designers or self-adaptive methods should construct proper rules and mechanisms for all agents (individuals) to conduct their evolution behaviour correctly in order to definitely achieve the desired and preset objective(s). As a case study, we propose a formal framework on parameter setting, strategy selection, and algorithmic design of evolutionary computation by considering the Nash strategy equilibrium of a mechanism design in the search process. The evaluation results present the efficiency of the framework. This primary principle can be implemented in any evolutionary computation algorithm that needs to consider strategy selection issues in its optimization process. The final objective of our work is to solve evolutionary computation design as an algorithmic mechanism design problem and establish its fundamental aspect by taking this perspective. This paper is the first step towards achieving this objective by implementing a strategy equilibrium solution (such as Nash equilibrium) in evolutionary computation algorithm. PMID:26257777

  7. A generating set direct search augmented Lagrangian algorithm for optimization with a combination of general and linear constraints.

    SciTech Connect

    Lewis, Robert Michael (College of William and Mary, Williamsburg, VA); Torczon, Virginia Joanne (College of William and Mary, Williamsburg, VA); Kolda, Tamara Gibson

    2006-08-01

    We consider the solution of nonlinear programs in the case where derivatives of the objective function and nonlinear constraints are unavailable. To solve such problems, we propose an adaptation of a method due to Conn, Gould, Sartenaer, and Toint that proceeds by approximately minimizing a succession of linearly constrained augmented Lagrangians. Our modification is to use a derivative-free generating set direct search algorithm to solve the linearly constrained subproblems. The stopping criterion proposed by Conn, Gould, Sartenaer and Toint for the approximate solution of the subproblems requires explicit knowledge of derivatives. Such information is presumed absent in the generating set search method we employ. Instead, we show that stationarity results for linearly constrained generating set search methods provide a derivative-free stopping criterion, based on a step-length control parameter, that is sufficient to preserve the convergence properties of the original augmented Lagrangian algorithm.

  8. Rolling ball algorithm as a multitask filter for terrain conductivity measurements

    NASA Astrophysics Data System (ADS)

    Rashed, Mohamed

    2016-09-01

    Portable frequency domain electromagnetic devices, commonly known as terrain conductivity meters, have become increasingly popular in recent years, especially in locating underground utilities. Data collected using these devices, however, usually suffer from major problems such as complexity and interference of apparent conductivity anomalies, near edge local spikes, and fading of conductivity contrast between a utility and the surrounding soil. This study presents the experience of adopting the rolling ball algorithm, originally designed to remove background from medical images, to treat these major problems in terrain conductivity measurements. Applying the proposed procedure to data collected using different terrain conductivity meters at different locations and conditions proves the capability of the rolling ball algorithm to treat these data both efficiently and quickly.
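
    A compact approximation of the filter on a one-dimensional conductivity profile, using morphological grey opening with a ball-shaped structuring element (a common stand-in for the original rolling-ball construction; the radius is an assumed parameter):

        import numpy as np
        from scipy.ndimage import grey_opening

        def rolling_ball_background(profile, radius):
            """Estimate the slowly varying background of a profile by grey
            opening with a semicircular (ball cross-section) element."""
            x = np.arange(-radius, radius + 1)
            ball = np.sqrt(np.maximum(radius**2 - x**2, 0.0))
            return grey_opening(profile, structure=ball)

    The residual, profile - rolling_ball_background(profile, radius), then retains the local utility anomalies while removing the background drift.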

  9. Model reduction algorithms for optimal control and importance sampling of diffusions

    NASA Astrophysics Data System (ADS)

    Hartmann, Carsten; Schütte, Christof; Zhang, Wei

    2016-08-01

    We propose numerical algorithms for solving optimal control and importance sampling problems based on simplified models. The algorithms combine model reduction techniques for multiscale diffusions and stochastic optimization tools, with the aim of reducing the original, possibly high-dimensional problem to a lower dimensional representation of the dynamics, in which only a few relevant degrees of freedom are controlled or biased. Specifically, we study situations in which either a reaction coordinate onto which the dynamics can be projected is known, or situations in which the dynamics shows strongly localized behavior in the small noise regime. No explicit assumptions about small parameters or scale separation have to be made. We illustrate the approach with simple, but paradigmatic numerical examples.

  10. An effective detection algorithm for region duplication forgery in digital images

    NASA Astrophysics Data System (ADS)

    Yavuz, Fatih; Bal, Abdullah; Cukur, Huseyin

    2016-04-01

    Powerful image-editing tools are common and easy to use these days, which makes it possible to forge digital images by adding or removing content. In order to detect such forgeries, in particular region duplication, we present an effective algorithm based on fixed-size block computation and the discrete wavelet transform (DWT). In this approach, the original image is divided into fixed-size blocks, and the wavelet transform is applied for dimension reduction. Each block is then processed by the Fourier transform and represented by circle regions, and four features are extracted from each block. Finally, the feature vectors are lexicographically sorted, and duplicated image blocks are detected according to the comparison metric results. The experimental results show that the proposed algorithm is computationally efficient due to its fixed-size circle block architecture.
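
    An illustrative block-matching skeleton, with low-frequency FFT magnitudes standing in for the paper's DWT and circle-region features; block size, feature count, and tolerance are assumptions:

        import numpy as np

        def duplicated_blocks(img, bsize=16, keep=4, tol=1e-3):
            """Slide fixed-size blocks over a grayscale image, reduce each
            to a small feature vector, lexicographically sort the vectors,
            and report neighbouring pairs that are nearly identical."""
            feats, coords = [], []
            for y in range(0, img.shape[0] - bsize + 1, bsize):
                for x in range(0, img.shape[1] - bsize + 1, bsize):
                    block = img[y:y + bsize, x:x + bsize]
                    f = np.abs(np.fft.fft2(block))[:keep, :keep].ravel()
                    feats.append(f / (f[0] + 1e-12))   # brightness-invariant
                    coords.append((y, x))
            order = np.lexsort(np.array(feats).T[::-1])
            pairs = []
            for a, b in zip(order[:-1], order[1:]):
                if np.linalg.norm(feats[a] - feats[b]) < tol:
                    pairs.append((coords[a], coords[b]))
            return pairs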

  11. A Globally Convergent Augmented Lagrangian Pattern Search Algorithm for Optimization with General Constraints and Simple Bounds

    NASA Technical Reports Server (NTRS)

    Lewis, Robert Michael; Torczon, Virginia

    1998-01-01

    We give a pattern search adaptation of an augmented Lagrangian method due to Conn, Gould, and Toint. The algorithm proceeds by successive bound constrained minimization of an augmented Lagrangian. In the pattern search adaptation we solve this subproblem approximately using a bound constrained pattern search method. The stopping criterion proposed by Conn, Gould, and Toint for the solution of this subproblem requires explicit knowledge of derivatives. Such information is presumed absent in pattern search methods; however, we show how we can replace this with a stopping criterion based on the pattern size in a way that preserves the convergence properties of the original algorithm. In this way we proceed by successive, inexact, bound constrained minimization without knowing exactly how inexact the minimization is. So far as we know, this is the first provably convergent direct search method for general nonlinear programming.

  12. Fast parallel molecular algorithms for DNA-based computation: factoring integers.

    PubMed

    Chang, Weng-Long; Guo, Minyi; Ho, Michael Shan-Hui

    2005-06-01

    The RSA public-key cryptosystem is an algorithm that converts input data to an unrecognizable encryption and converts the unrecognizable data back into its original decrypted form. The security of the RSA public-key cryptosystem is based on the difficulty of factoring the product of two large prime numbers. This paper demonstrates how to factor the product of two large prime numbers using a molecular computer, a breakthrough in basic biological operations. In order to achieve this, we propose three DNA-based algorithms (a parallel subtractor, a parallel comparator, and parallel modular arithmetic) and formally verify the designed molecular solutions for factoring the product of two large prime numbers. Furthermore, this work indicates that public-key cryptosystems may be insecure and also presents clear evidence of the ability of molecular computing to perform complicated mathematical operations. PMID:16117023

  13. A novel algorithm for detecting protein complexes with the breadth first search.

    PubMed

    Tang, Xiwei; Wang, Jianxin; Li, Min; He, Yiming; Pan, Yi

    2014-01-01

    Most biological processes are carried out by protein complexes. A substantial number of false positives in protein-protein interaction (PPI) data can compromise the utility of the datasets for complex reconstruction. In order to reduce the impact of such discrepancies, a number of data integration and affinity scoring schemes have been devised; these methods encode the reliability (confidence) of physical interactions between pairs of proteins. The challenge now is to identify novel and meaningful protein complexes from the weighted PPI network. To address this problem, a novel protein complex mining algorithm, ClusterBFS (Cluster with Breadth-First Search), is proposed. Based on weighted density, ClusterBFS detects protein complexes in the weighted network by breadth-first search, starting from a given seed protein. The experimental results show that ClusterBFS performs significantly better than the other computational approaches in terms of the identification of protein complexes. PMID:24818139
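
    A toy rendering of the seed-and-grow idea, assuming weighted density = total edge weight divided by the number of possible pairs, with a fixed admission threshold (both details are assumptions):

        from collections import deque

        def cluster_bfs(graph, seed, min_density=0.3):
            """Grow a candidate complex from a seed protein by BFS,
            admitting a neighbour only while the weighted density of the
            cluster stays above the threshold.
            graph[u][v] is the confidence weight of interaction (u, v)."""
            cluster, weight = {seed}, 0.0
            queue = deque(graph[seed])
            while queue:
                v = queue.popleft()
                if v in cluster:
                    continue
                gain = sum(graph[v].get(u, 0.0) for u in cluster)
                n = len(cluster) + 1
                if (weight + gain) / (n * (n - 1) / 2) >= min_density:
                    cluster.add(v)
                    weight += gain
                    queue.extend(graph[v])
            return cluster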

  14. Analysis of Multivariate Experimental Data Using A Simplified Regression Model Search Algorithm

    NASA Technical Reports Server (NTRS)

    Ulbrich, Norbert M.

    2013-01-01

    A new regression model search algorithm was developed that may be applied to both general multivariate experimental data sets and wind tunnel strain-gage balance calibration data. The algorithm is a simplified version of a more complex algorithm that was originally developed for the NASA Ames Balance Calibration Laboratory. The new algorithm performs regression model term reduction to prevent overfitting of data. It has the advantage that it needs only about one tenth of the original algorithm's CPU time for the completion of a regression model search. In addition, extensive testing showed that the prediction accuracy of math models obtained from the simplified algorithm is similar to the prediction accuracy of math models obtained from the original algorithm. The simplified algorithm, however, cannot guarantee that search constraints related to a set of statistical quality requirements are always satisfied in the optimized regression model. Therefore, the simplified algorithm is not intended to replace the original algorithm. Instead, it may be used to generate an alternate optimized regression model of experimental data whenever the application of the original search algorithm fails or requires too much CPU time. Data from a machine calibration of NASA's MK40 force balance is used to illustrate the application of the new search algorithm.

  15. Analysis of Multivariate Experimental Data Using A Simplified Regression Model Search Algorithm

    NASA Technical Reports Server (NTRS)

    Ulbrich, Norbert Manfred

    2013-01-01

    A new regression model search algorithm was developed in 2011 that may be used to analyze both general multivariate experimental data sets and wind tunnel strain-gage balance calibration data. The new algorithm is a simplified version of a more complex search algorithm that was originally developed at the NASA Ames Balance Calibration Laboratory. The new algorithm has the advantage that it needs only about one tenth of the original algorithm's CPU time for the completion of a search. In addition, extensive testing showed that the prediction accuracy of math models obtained from the simplified algorithm is similar to the prediction accuracy of math models obtained from the original algorithm. The simplified algorithm, however, cannot guarantee that search constraints related to a set of statistical quality requirements are always satisfied in the optimized regression models. Therefore, the simplified search algorithm is not intended to replace the original search algorithm. Instead, it may be used to generate an alternate optimized regression model of experimental data whenever the application of the original search algorithm either fails or requires too much CPU time. Data from a machine calibration of NASA's MK40 force balance is used to illustrate the application of the new regression model search algorithm.

  16. Genomic-enabled prediction with classification algorithms.

    PubMed

    Ornella, L; Pérez, P; Tapia, E; González-Camacho, J M; Burgueño, J; Zhang, X; Singh, S; Vicente, F S; Bonnett, D; Dreisigacker, S; Singh, R; Long, N; Crossa, J

    2014-06-01

    Pearson's correlation coefficient (ρ) is the most commonly reported metric of the success of prediction in genomic selection (GS). However, in real breeding ρ may not be very useful for assessing the quality of the regression in the tails of the distribution, where individuals are chosen for selection. This research used 14 maize and 16 wheat data sets with different trait-environment combinations. Six different models were evaluated by means of a cross-validation scheme (50 random partitions each, with 90% of the individuals in the training set and 10% in the testing set). The predictive accuracy of these algorithms for selecting individuals belonging to the best α=10, 15, 20, 25, 30, 35, 40% of the distribution was estimated using Cohen's kappa coefficient (κ) and an ad hoc measure, which we call relative efficiency (RE), which indicates the expected genetic gain due to selection when individuals are selected based on GS exclusively. We put special emphasis on the analysis for α=15%, because it is a percentile commonly used in plant breeding programmes (for example, at CIMMYT). We also used ρ as a criterion for overall success. The algorithms used were: Bayesian LASSO (BL), Ridge Regression (RR), Reproducing Kernel Hilbert Spaces (RHKS), Random Forest Regression (RFR), and Support Vector Regression (SVR) with linear (lin) and Gaussian kernels (rbf). The performance of regression methods for selecting the best individuals was compared with that of three supervised classification algorithms: Random Forest Classification (RFC) and Support Vector Classification (SVC) with linear (lin) and Gaussian (rbf) kernels. Classification methods were evaluated using the same cross-validation scheme but with the response vector of the original training sets dichotomised using a given threshold. For α=15%, SVC-lin presented the highest κ coefficients in 13 of the 14 maize data sets, with best values ranging from 0.131 to 0.722 (statistically significant in 9 data sets

  17. Genomic-enabled prediction with classification algorithms

    PubMed Central

    Ornella, L; Pérez, P; Tapia, E; González-Camacho, J M; Burgueño, J; Zhang, X; Singh, S; Vicente, F S; Bonnett, D; Dreisigacker, S; Singh, R; Long, N; Crossa, J

    2014-01-01

    Pearson's correlation coefficient (ρ) is the most commonly reported metric of the success of prediction in genomic selection (GS). However, in real breeding ρ may not be very useful for assessing the quality of the regression in the tails of the distribution, where individuals are chosen for selection. This research used 14 maize and 16 wheat data sets with different trait–environment combinations. Six different models were evaluated by means of a cross-validation scheme (50 random partitions each, with 90% of the individuals in the training set and 10% in the testing set). The predictive accuracy of these algorithms for selecting individuals belonging to the best α=10, 15, 20, 25, 30, 35, 40% of the distribution was estimated using Cohen's kappa coefficient (κ) and an ad hoc measure, which we call relative efficiency (RE), which indicates the expected genetic gain due to selection when individuals are selected based on GS exclusively. We put special emphasis on the analysis for α=15%, because it is a percentile commonly used in plant breeding programmes (for example, at CIMMYT). We also used ρ as a criterion for overall success. The algorithms used were: Bayesian LASSO (BL), Ridge Regression (RR), Reproducing Kernel Hilbert Spaces (RKHS), Random Forest Regression (RFR), and Support Vector Regression (SVR) with linear (lin) and Gaussian kernels (rbf). The performance of regression methods for selecting the best individuals was compared with that of three supervised classification algorithms: Random Forest Classification (RFC) and Support Vector Classification (SVC) with linear (lin) and Gaussian (rbf) kernels. Classification methods were evaluated using the same cross-validation scheme but with the response vector of the original training sets dichotomised using a given threshold. For α=15%, SVC-lin presented the highest κ coefficients in 13 of the 14 maize data sets, with best values ranging from 0.131 to 0.722 (statistically significant in 9 data sets

  18. A modified genetic algorithm with fuzzy roulette wheel selection for job-shop scheduling problems

    NASA Astrophysics Data System (ADS)

    Thammano, Arit; Teekeng, Wannaporn

    2015-05-01

    The job-shop scheduling problem is one of the most difficult production planning problems. Since it is in the NP-hard class, a recent trend in solving the job-shop scheduling problem is shifting towards the use of heuristic and metaheuristic algorithms. This paper proposes a novel metaheuristic algorithm, which is a modification of the genetic algorithm. The proposed algorithm introduces two new concepts to the standard genetic algorithm: (1) fuzzy roulette wheel selection and (2) the mutation operation with tabu list. The proposed algorithm has been evaluated and compared with several state-of-the-art algorithms in the literature. The experimental results on 53 JSSPs show that the proposed algorithm is very effective in solving this combinatorial optimization problem. It outperforms all state-of-the-art algorithms on all benchmark problems in terms of the ability to achieve the optimal solution and the computational time.
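
    The fuzzy selection operator itself is not reproduced in this record. For orientation, the sketch below shows plain roulette-wheel selection, the operator that the fuzzy variant modifies; a fuzzy membership function would reshape the fitness values before the selection probabilities are formed.

        import random

        def roulette_wheel_select(population, fitness):
            """Pick one individual with probability proportional to its fitness."""
            total = sum(fitness)
            r = random.uniform(0.0, total)
            acc = 0.0
            for individual, f in zip(population, fitness):
                acc += f
                if acc >= r:
                    return individual
            return population[-1]   # numerical safety net for rounding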

  19. Genetic Algorithms for Digital Quantum Simulations

    NASA Astrophysics Data System (ADS)

    Las Heras, U.; Alvarez-Rodriguez, U.; Solano, E.; Sanz, M.

    2016-06-01

    We propose genetic algorithms, which are robust optimization techniques inspired by natural selection, to enhance the versatility of digital quantum simulations. In this sense, we show that genetic algorithms can be employed to increase the fidelity and optimize the resource requirements of digital quantum simulation protocols while adapting naturally to the experimental constraints. Furthermore, this method allows us to reduce not only digital errors but also experimental errors in quantum gates. Indeed, by adding ancillary qubits, we design a modular gate made out of imperfect gates, whose fidelity is larger than the fidelity of any of the constituent gates. Finally, we prove that the proposed modular gates are resilient against different gate errors.

  20. Multiple origins of life

    NASA Technical Reports Server (NTRS)

    Raup, D. M.; Valentine, J. W.

    1983-01-01

    There is some indication that life may have originated readily under primitive earth conditions. If there were multiple origins of life, the result could have been a polyphyletic biota today. Using simple stochastic models for diversification and extinction, we conclude: (1) the probability of survival of life is low unless there are multiple origins, and (2) given survival of life and given as many as 10 independent origins of life, the odds are that all but one would have gone extinct, yielding the monophyletic biota we have now. The fact of the survival of our particular form of life does not imply that it was unique or superior.

  1. Improved Bat Algorithm Applied to Multilevel Image Thresholding

    PubMed Central

    2014-01-01

    Multilevel image thresholding is a very important image processing technique that is used as a basis for image segmentation and further higher level processing. However, the required computational time for exhaustive search grows exponentially with the number of desired thresholds. Swarm intelligence metaheuristics are well known as successful and efficient optimization methods for intractable problems. In this paper, we adjusted one of the latest swarm intelligence algorithms, the bat algorithm, for the multilevel image thresholding problem. The results of testing on standard benchmark images show that the bat algorithm is comparable with other state-of-the-art algorithms. We improved the standard bat algorithm with modifications that add some elements from differential evolution and from the artificial bee colony algorithm. The proposed improved bat algorithm proved to be better than five other state-of-the-art algorithms, improving the quality of results in all cases and significantly improving convergence speed. PMID:25165733
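
    For readers unfamiliar with the algorithm being modified, the core update of the standard bat algorithm (Yang's formulation) is sketched below; the paper's improvements graft differential-evolution and artificial-bee-colony elements onto this loop, and those additions are not shown.

        import numpy as np

        def bat_step(x, v, x_best, f_min=0.0, f_max=2.0, rng=np.random.default_rng()):
            """One frequency/velocity/position update for a population of bats (rows of x)."""
            beta = rng.random((x.shape[0], 1))       # random mixing factor per bat
            freq = f_min + (f_max - f_min) * beta    # pulse frequency
            v = v + (x - x_best) * freq              # velocity pulled toward the best bat
            return x + v, v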

  2. Fast three-step phase-shifting algorithm

    SciTech Connect

    Huang, Peisen S.; Zhang Song

    2006-07-20

    We propose a new three-step phase-shifting algorithm, which is much faster than the traditional three-step algorithm. We achieve the speed advantage by using a simple intensity ratio function to replace the arc tangent function in the traditional algorithm. The phase error caused by this new algorithm is compensated for by use of a lookup table. Our experimental results show that both the new algorithm and the traditional algorithm generate similar results, but the new algorithm is 3.4 times faster. By implementing this new algorithm in a high-resolution, real-time three-dimensional shape measurement system, we were able to achieve a measurement speed of 40 frames per second at a resolution of 532x500 pixels, all with an ordinary personal computer.
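
    For context, the traditional three-step computation that the paper accelerates can be sketched as follows for phase shifts of -120, 0 and +120 degrees; the fast variant replaces the arc tangent with a simple intensity-ratio function plus an error-compensating lookup table, which is not reproduced here.

        import numpy as np

        def three_step_phase(I1, I2, I3):
            """Wrapped phase from three fringe images with -120/0/+120 degree shifts."""
            return np.arctan2(np.sqrt(3.0) * (I1 - I3), 2.0 * I2 - I1 - I3)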

  3. Improved bat algorithm applied to multilevel image thresholding.

    PubMed

    Alihodzic, Adis; Tuba, Milan

    2014-01-01

    Multilevel image thresholding is a very important image processing technique that is used as a basis for image segmentation and further higher level processing. However, the required computational time for exhaustive search grows exponentially with the number of desired thresholds. Swarm intelligence metaheuristics are well known as successful and efficient optimization methods for intractable problems. In this paper, we adjusted one of the latest swarm intelligence algorithms, the bat algorithm, for the multilevel image thresholding problem. The results of testing on standard benchmark images show that the bat algorithm is comparable with other state-of-the-art algorithms. We improved the standard bat algorithm with modifications that add some elements from differential evolution and from the artificial bee colony algorithm. The proposed improved bat algorithm proved to be better than five other state-of-the-art algorithms, improving the quality of results in all cases and significantly improving convergence speed. PMID:25165733

  4. The origin of risk aversion

    PubMed Central

    Zhang, Ruixun; Brennan, Thomas J.; Lo, Andrew W.

    2014-01-01

    Risk aversion is one of the most basic assumptions of economic behavior, but few studies have addressed the question of where risk preferences come from and why they differ from one individual to the next. Here, we propose an evolutionary explanation for the origin of risk aversion. In the context of a simple binary-choice model, we show that risk aversion emerges by natural selection if reproductive risk is systematic (i.e., correlated across individuals in a given generation). In contrast, risk neutrality emerges if reproductive risk is idiosyncratic (i.e., uncorrelated across each given generation). More generally, our framework implies that the degree of risk aversion is determined by the stochastic nature of reproductive rates, and we show that different statistical properties lead to different utility functions. The simplicity and generality of our model suggest that these implications are primitive and cut across species, physiology, and genetic origins. PMID:25453072
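
    A minimal numerical illustration (not the authors' code) of the paper's central contrast: with systematic reproductive risk a lineage grows at the expected log reproduction rate, so the risk-averse (log-utility) choice wins, while with idiosyncratic risk independent draws average out within each generation and the lineage grows at the log of the expected reproduction rate.

        import numpy as np

        p, a, b = 0.5, 3.0, 0.5   # risky choice: a offspring with prob p, else b
        safe = 1.3                # sure offspring count of the safe choice

        growth_systematic = p * np.log(a) + (1 - p) * np.log(b)   # E[log X]
        growth_idiosyncratic = np.log(p * a + (1 - p) * b)        # log E[X]

        print(growth_systematic, np.log(safe), growth_idiosyncratic)
        # ~0.203 < ~0.262 < ~0.560: the safe choice beats the risky one
        # only when the risk is systematic, i.e. risk aversion is favored.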

  5. The origin of risk aversion.

    PubMed

    Zhang, Ruixun; Brennan, Thomas J; Lo, Andrew W

    2014-12-16

    Risk aversion is one of the most basic assumptions of economic behavior, but few studies have addressed the question of where risk preferences come from and why they differ from one individual to the next. Here, we propose an evolutionary explanation for the origin of risk aversion. In the context of a simple binary-choice model, we show that risk aversion emerges by natural selection if reproductive risk is systematic (i.e., correlated across individuals in a given generation). In contrast, risk neutrality emerges if reproductive risk is idiosyncratic (i.e., uncorrelated across each given generation). More generally, our framework implies that the degree of risk aversion is determined by the stochastic nature of reproductive rates, and we show that different statistical properties lead to different utility functions. The simplicity and generality of our model suggest that these implications are primitive and cut across species, physiology, and genetic origins. PMID:25453072

  6. Robust facial expression recognition algorithm based on local metric learning

    NASA Astrophysics Data System (ADS)

    Jiang, Bin; Jia, Kebin

    2016-01-01

    In facial expression recognition tasks, different facial expressions are often confused with each other. Motivated by the fact that a learned metric can significantly improve the accuracy of classification, a facial expression recognition algorithm based on local metric learning is proposed. First, k-nearest neighbors of the given testing sample are determined from the total training data. Second, chunklets are selected from the k-nearest neighbors. Finally, the optimal transformation matrix is computed by maximizing the total variance between different chunklets and minimizing the total variance of instances in the same chunklet. The proposed algorithm can find the suitable distance metric for every testing sample and improve the performance on facial expression recognition. Furthermore, the proposed algorithm can be used for vector-based and matrix-based facial expression recognition. Experimental results demonstrate that the proposed algorithm could achieve higher recognition rates and be more robust than baseline algorithms on the JAFFE, CK, and RaFD databases.

  7. Artificial Bee Colony Algorithm for Solving Optimal Power Flow Problem

    PubMed Central

    Le Dinh, Luong; Vo Ngoc, Dieu

    2013-01-01

    This paper proposes an artificial bee colony (ABC) algorithm for solving the optimal power flow (OPF) problem. The objective of the OPF problem is to minimize the total cost of thermal units while satisfying the unit and system constraints such as generator capacity limits, power balance, line flow limits, bus voltage limits, and transformer tap setting limits. The ABC algorithm is an optimization method inspired by the foraging behavior of honey bees. The proposed algorithm has been tested on the IEEE 30-bus, 57-bus, and 118-bus systems. The numerical results indicate that the proposed algorithm can quickly find high-quality solutions for the problem, as shown by comparisons with other methods in the literature. Therefore, the proposed ABC algorithm can be a favorable method for solving the OPF problem. PMID:24470790
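
    The OPF objective and constraints are problem-specific, so only the generic optimization engine can be sketched here: the ABC loop with employed, onlooker and scout phases, shown on an unconstrained toy cost function. All parameter values are illustrative.

        import numpy as np

        def abc_minimize(cost, dim, n_food=20, limit=30, iters=200, lo=-5.0, hi=5.0, seed=1):
            rng = np.random.default_rng(seed)
            X = rng.uniform(lo, hi, (n_food, dim))        # food sources = candidate solutions
            f = np.array([cost(x) for x in X])
            trials = np.zeros(n_food, dtype=int)

            def try_neighbor(i):
                k = int(rng.integers(n_food))             # random partner source
                j = int(rng.integers(dim))                # random coordinate to perturb
                cand = X[i].copy()
                cand[j] += rng.uniform(-1.0, 1.0) * (X[i, j] - X[k, j])
                fc = cost(cand)
                if fc < f[i]:                             # greedy replacement
                    X[i], f[i], trials[i] = cand, fc, 0
                else:
                    trials[i] += 1

            for _ in range(iters):
                for i in range(n_food):                   # employed-bee phase
                    try_neighbor(i)
                w = f.max() - f + 1e-12                   # lower cost -> higher weight
                for i in rng.choice(n_food, n_food, p=w / w.sum()):   # onlooker phase
                    try_neighbor(i)
                for i in np.where(trials > limit)[0]:     # scout phase: reset stale sources
                    X[i] = rng.uniform(lo, hi, dim)
                    f[i], trials[i] = cost(X[i]), 0
            return X[f.argmin()], f.min()

        # e.g. abc_minimize(lambda x: float(np.sum(x ** 2)), dim=4)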

  8. Artificial bee colony algorithm for solving optimal power flow problem.

    PubMed

    Le Dinh, Luong; Vo Ngoc, Dieu; Vasant, Pandian

    2013-01-01

    This paper proposes an artificial bee colony (ABC) algorithm for solving optimal power flow (OPF) problem. The objective of the OPF problem is to minimize total cost of thermal units while satisfying the unit and system constraints such as generator capacity limits, power balance, line flow limits, bus voltages limits, and transformer tap settings limits. The ABC algorithm is an optimization method inspired from the foraging behavior of honey bees. The proposed algorithm has been tested on the IEEE 30-bus, 57-bus, and 118-bus systems. The numerical results have indicated that the proposed algorithm can find high quality solution for the problem in a fast manner via the result comparisons with other methods in the literature. Therefore, the proposed ABC algorithm can be a favorable method for solving the OPF problem. PMID:24470790

  9. Reasoning about systolic algorithms

    SciTech Connect

    Purushothaman, S.; Subrahmanyam, P.A.

    1988-12-01

    The authors present a methodology for verifying correctness of systolic algorithms. The methodology is based on solving a set of Uniform Recurrence Equations obtained from a description of systolic algorithms as a set of recursive equations. They present an approach to mechanically verify correctness of systolic algorithms, using the Boyer-Moore theorem proven. A mechanical correctness proof of an example from the literature is also presented.

  10. On algorithmic rate-coded AER generation.

    PubMed

    Linares-Barranco, Alejandro; Jimenez-Moreno, Gabriel; Linares-Barranco, Bernabé; Civit-Balcells, Antón

    2006-05-01

    This paper addresses the problem of converting a conventional video stream based on sequences of frames into the spike event-based representation known as the address-event-representation (AER). In this paper we concentrate on rate-coded AER. The problem is addressed as an algorithmic problem, in which different methods are proposed, implemented and tested through software algorithms. The proposed algorithms are comparatively evaluated according to different criteria. Emphasis is put on the potential of such algorithms for (a) performing the frame-based to event-based conversion in real time, and (b) producing event streams that resemble as much as possible those generated naturally by rate-coded address-event VLSI chips, such as silicon AER retinae. It is found that simple and straightforward algorithms tend to have high potential for real time but produce event distributions that differ considerably from those obtained in AER VLSI chips. On the other hand, sophisticated algorithms that yield better event distributions are not efficient for real-time operation. Methods based on linear-feedback-shift-register (LFSR) pseudorandom number generation are a good compromise: they are feasible for real time and yield reasonably well-distributed events in time. Our software experiments, on a 1.6-GHz Pentium IV, show that at 50% AER bus load the proposed algorithms require between 0.011 and 1.14 ms per 8-bit pixel per frame. One of the proposed LFSR methods is implemented in real-time hardware using a prototyping board that includes a VirtexE 300 FPGA. The demonstration hardware is capable of transforming frames of 64 x 64 pixels of 8-bit depth at a frame rate of 25 frames per second, producing spike events at a peak rate of 10(7) events per second. PMID:16722179
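
    The LFSR-based compromise can be sketched under simple assumptions: a 16-bit maximal-length Galois LFSR (the common x^16 + x^14 + x^13 + x^11 + 1 example polynomial) supplies pseudorandom numbers, and a pixel emits an address event in a time slot whenever the draw falls below its 8-bit intensity, so the event rate tracks the pixel value.

        def lfsr16(seed=0xACE1):
            """16-bit maximal-length Galois LFSR (x^16 + x^14 + x^13 + x^11 + 1)."""
            state = seed
            while True:
                lsb = state & 1
                state >>= 1
                if lsb:
                    state ^= 0xB400
                yield state

        def frame_to_events(frame, n_slots=256):
            """Rate-coded AER: pixel (r, c) fires in a slot when the scaled
            LFSR draw is below its 8-bit intensity."""
            rnd = lfsr16()
            events = []
            for slot in range(n_slots):
                for r, row in enumerate(frame):
                    for c, intensity in enumerate(row):
                        if (next(rnd) & 0xFF) < intensity:
                            events.append((slot, r, c))   # one address event
            return events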

  11. The Application Research of MD5 Encryption Algorithm in DCT Digital Watermarking

    NASA Astrophysics Data System (ADS)

    Xijin, Wang; Linxiu, Fan

    This article presents a preliminary study of the application of the MD5 algorithm to digital watermarking. The copyright information is encrypted using the MD5 algorithm and formed into a binary image watermark, which is embedded into the carrier image through a DCT-based algorithm. The extraction algorithm can pick up the watermark and restore the MD5 code.
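
    Only the first stage of the scheme can be sketched from this short record: hashing the copyright string with MD5 and expanding the 128-bit digest into the bit sequence that a DCT-based embedder would hide in the carrier image. The embedding rules themselves are not described in the record; hashlib is the Python standard-library module, and the copyright string is a hypothetical example.

        import hashlib

        def copyright_to_bits(text):
            """MD5-hash the copyright string and unpack the digest into 0/1 bits."""
            digest = hashlib.md5(text.encode("utf-8")).digest()   # 16 bytes = 128 bits
            return [(byte >> i) & 1 for byte in digest for i in range(7, -1, -1)]

        bits = copyright_to_bits("Copyright (c) Example Holder")
        assert len(bits) == 128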

  12. An algorithmic approach for clinical management of chronic spinal pain.

    PubMed

    Manchikanti, Laxmaiah; Helm, Standiford; Singh, Vijay; Benyamin, Ramsin M; Datta, Sukdeb; Hayek, Salim M; Fellows, Bert; Boswell, Mark V

    2009-01-01

    Interventional pain management, and the interventional techniques which are an integral part of that specialty, are subject to widely varying definitions and practices. How interventional techniques are applied by various specialties is highly variable, even for the most common procedures and conditions. At the same time, many payors, publications, and guidelines are showing increasing interest in the performance and costs of interventional techniques. There is a lack of consensus among interventional pain management specialists with regards to how to diagnose and manage spinal pain and the type and frequency of spinal interventional techniques which should be utilized to treat spinal pain. Therefore, an algorithmic approach is proposed, providing a step-by-step procedure for managing chronic spinal pain patients based upon evidence-based guidelines. The algorithmic approach is developed based on the best available evidence regarding the epidemiology of various identifiable sources of chronic spinal pain. Such an approach to spinal pain includes an appropriate history, examination, and medical decision making in the management of low back pain, neck pain and thoracic pain. This algorithm also provides diagnostic and therapeutic approaches to clinical management utilizing case examples of cervical, lumbar, and thoracic spinal pain. An algorithm for investigating chronic low back pain without disc herniation commences with a clinical question, examination and imaging findings. If there is evidence of radiculitis, spinal stenosis, or other demonstrable causes resulting in radiculitis, one may proceed with diagnostic or therapeutic epidural injections. In the algorithmic approach, facet joints are entertained first in the algorithm because of their commonality as a source of chronic low back pain followed by sacroiliac joint blocks if indicated and provocation discography as the last step. Based on the literature, in the United States, in patients without disc

  13. Dual-Byte-Marker Algorithm for Detecting JFIF Header

    NASA Astrophysics Data System (ADS)

    Mohamad, Kamaruddin Malik; Herawan, Tutut; Deris, Mustafa Mat

    The use of an efficient algorithm to detect JPEG files is vital to reduce the time taken for analyzing the ever-increasing data in hard drives or physical memory. In a previous paper, the single-byte-marker algorithm was proposed for header detection. In this paper, another novel header detection algorithm called dual-byte-marker is proposed. Based on experiments done on images from hard disk, physical memory and the data set from the DFRWS 2006 Challenge, results showed that the dual-byte-marker algorithm gives better performance, with faster execution time for header detection, than the single-byte-marker algorithm.
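
    The record gives no implementation details, but a dual-byte-marker scan can be sketched as follows: look for the two-byte SOI marker (0xFF 0xD8) and confirm the JFIF layout behind it, i.e. the APP0 marker (0xFF 0xE0), a two-byte segment length, and the ASCII "JFIF" identifier.

        def find_jfif_headers(buf):
            """Return offsets in a byte buffer where a JFIF header plausibly starts."""
            hits, i = [], 0
            while True:
                i = buf.find(b"\xff\xd8", i)          # SOI: the dual-byte marker
                if i < 0:
                    return hits
                # APP0 marker right after SOI, then 2 length bytes, then "JFIF\0"
                if buf[i + 2:i + 4] == b"\xff\xe0" and buf[i + 6:i + 11] == b"JFIF\x00":
                    hits.append(i)
                i += 2                                # continue scanning past this SOI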

  14. Machine learning algorithms for damage detection: Kernel-based approaches

    NASA Astrophysics Data System (ADS)

    Santos, Adam; Figueiredo, Eloi; Silva, M. F. M.; Sales, C. S.; Costa, J. C. W. A.

    2016-02-01

    This paper presents four kernel-based algorithms for damage detection under varying operational and environmental conditions, based on the one-class support vector machine, support vector data description, kernel principal component analysis and greedy kernel principal component analysis. Acceleration time-series from an array of accelerometers were obtained from a laboratory structure and used for performance comparison. The main contribution of this study is the demonstration of the proposed algorithms' applicability for damage detection, as well as a comparison of their classification performance with that of four other algorithms already considered reliable approaches in the literature. All proposed algorithms were shown to have better classification performance than the previous ones.
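
    As one concrete instance of the kernel-based family discussed above (a sketch, not the study's code), the snippet trains scikit-learn's one-class support vector machine on features from the undamaged baseline condition and flags departures as potential damage; feature extraction from the acceleration time-series is reduced here to simple per-channel statistics, and the data are synthetic stand-ins.

        import numpy as np
        from sklearn.svm import OneClassSVM

        def features(signals):
            """Toy features per measurement: mean, std and peak of each channel."""
            return np.hstack([signals.mean(axis=-1), signals.std(axis=-1),
                              np.abs(signals).max(axis=-1)])

        rng = np.random.default_rng(0)
        X_baseline = rng.normal(size=(50, 4, 1024))          # undamaged training data
        X_test = np.concatenate([rng.normal(size=(10, 4, 1024)),
                                 rng.normal(scale=1.6, size=(10, 4, 1024))])  # "damaged"

        clf = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale")
        clf.fit(np.array([features(x) for x in X_baseline]))
        labels = clf.predict(np.array([features(x) for x in X_test]))  # +1 normal, -1 outlier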

  15. An affine projection algorithm using grouping selection of input vectors

    NASA Astrophysics Data System (ADS)

    Shin, JaeWook; Kong, NamWoong; Park, PooGyeon

    2011-10-01

    This paper presents an affine projection algorithm (APA) using grouping-based selection of input vectors. To improve the performance of the conventional APA, the proposed algorithm adjusts the number of input vectors using two procedures: a grouping procedure and a selection procedure. In the grouping procedure, input vectors that carry overlapping information for the update are grouped using the normalized inner product. Then, in the selection procedure, a few input vectors that carry enough information for the coefficient update are selected using the steady-state mean square error (MSE). Finally, the filter coefficients are updated using the selected input vectors. The experimental results show that the proposed algorithm has smaller steady-state estimation errors compared with existing algorithms.
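
    For context, the conventional affine projection update that the grouping and selection procedures feed into is sketched below (a generic textbook form, not the paper's variant); the paper's contribution is deciding which recent input vectors enter the matrix X, grouping near-parallel vectors via normalized inner products and keeping only the informative ones.

        import numpy as np

        def apa_update(w, X, d, mu=0.5, delta=1e-6):
            """One affine projection update.
            X : (L, K) matrix whose K columns are the retained input vectors
            d : (K,) desired samples paired with those columns"""
            e = d - X.T @ w                               # a-priori errors
            w += mu * X @ np.linalg.solve(X.T @ X + delta * np.eye(X.shape[1]), e)
            return w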

  16. Competing Sudakov veto algorithms

    NASA Astrophysics Data System (ADS)

    Kleiss, Ronald; Verheyen, Rob

    2016-07-01

    We present a formalism to analyze the distribution produced by a Monte Carlo algorithm. We perform these analyses on several versions of the Sudakov veto algorithm, adding a cutoff, a second variable and competition between emission channels. The formal analysis allows us to prove that multiple, seemingly different competition algorithms, including those that are currently implemented in most parton showers, lead to the same result. Finally, we test their performance in a semi-realistic setting and show that there are significantly faster alternatives to the commonly used algorithms.
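
    The basic single-channel veto algorithm that the paper's analysis starts from can be sketched as follows, assuming an evolution variable that increases and an emission rate f(t) bounded by a constant overestimate g_max on the region of interest: propose the next scale from the overestimate, then accept with probability f/g (the veto step).

        import math, random

        def next_emission(t, f, g_max, t_max):
            """Sudakov veto algorithm with a constant overestimate g_max >= f(t)."""
            while t < t_max:
                t += -math.log(random.random()) / g_max   # proposal from the overestimate
                if t >= t_max:
                    return None                           # no emission before the cutoff
                if random.random() < f(t) / g_max:        # veto step
                    return t
            return None

        # e.g. next_emission(0.0, lambda t: 0.3 * math.exp(-t), g_max=0.3, t_max=10.0)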

  17. Speed-up hyperspheres homotopic path tracking algorithm for PWL circuits simulations.

    PubMed

    Ramirez-Pinero, A; Vazquez-Leal, H; Jimenez-Fernandez, V M; Sedighi, H M; Rashidi, M M; Filobello-Nino, U; Castaneda-Sheissa, R; Huerta-Chua, J; Sarmiento-Reyes, L A; Laguna-Camacho, J R; Castro-Gonzalez, F

    2016-01-01

    In the present work, we introduce an improved version of the hyperspheres path tracking method adapted for piecewise linear (PWL) circuits. This enhanced version takes advantage of the PWL characteristics of the homotopic curve, achieving faster path tracking and improving the performance of the homotopy continuation method (HCM). Faster computing time allows the study of circuits with higher complexity; the proposed method also significantly decreases the probability of a divergence problem when using the Newton-Raphson method, because it is applied just twice per linear region on the homotopic path. Equilibrium equations of the studied circuits are obtained by applying modified nodal analysis; this method allows us to propose an algorithm for nonlinear circuit analysis. Besides, a starting-point criterion is proposed to obtain better performance of the HCM, and a technique for avoiding the reversion phenomenon is also proposed. To prove the efficiency of the path tracking method, several case studies with bipolar (BJT) and CMOS transistors are provided. Simulation results show that the proposed approach can be up to twelve times faster than the original path tracking method and also helps to avoid several reversion cases that appeared when the original hyperspheres path tracking scheme was employed. PMID:27386338

  18. A Hybrid Monkey Search Algorithm for Clustering Analysis

    PubMed Central

    Chen, Xin; Zhou, Yongquan; Luo, Qifang

    2014-01-01

    Clustering is a popular data analysis and data mining technique. The k-means clustering algorithm is one of the most commonly used methods. However, it depends heavily on the initial solution and easily falls into a local optimum. In view of these disadvantages of the k-means method, this paper proposes a hybrid monkey algorithm, based on the search operator of the artificial bee colony algorithm, for clustering analysis; experiments on synthetic and real-life datasets show that the algorithm performs better than the basic monkey algorithm for clustering analysis. PMID:24772039

  19. Novel biomedical tetrahedral mesh methods: algorithms and applications

    NASA Astrophysics Data System (ADS)

    Yu, Xiao; Jin, Yanfeng; Chen, Weitao; Huang, Pengfei; Gu, Lixu

    2007-12-01

    The tetrahedral mesh generation algorithm, as a prerequisite of many soft tissue simulation methods, is very important in virtual surgery programs because of the real-time requirement. Aiming to speed up the computation in the simulation, we propose a revised Delaunay algorithm which strikes a good balance among quality of tetrahedra, boundary preservation and time complexity, with many improved methods. Another mesh algorithm named Space-Disassembling is also presented in this paper, and a comparison of Space-Disassembling, the traditional Delaunay algorithm and the revised Delaunay algorithm is performed on clinical soft-tissue simulation projects, including craniofacial plastic surgery and breast reconstruction plastic surgery.

  20. A Collaborative Recommend Algorithm Based on Bipartite Community

    PubMed Central

    Fu, Yuchen; Liu, Quan; Cui, Zhiming

    2014-01-01

    The recommendation algorithm based on the bipartite network is superior to traditional methods in accuracy and diversity, which proves that considering the network topology of recommendation systems can help us improve recommendation results. However, existing algorithms mainly focus on the overall topology structure, while local characteristics could also play an important role in collaborative recommendation processing. Therefore, taking account of the data characteristics and application requirements of collaborative recommendation systems, we propose a link community partitioning algorithm based on label propagation and a collaborative recommendation algorithm based on the bipartite community. We then designed numerical experiments to verify the algorithms' validity on benchmark and real databases. PMID:24955393

  1. Solving SAT Problem Based on Hybrid Differential Evolution Algorithm

    NASA Astrophysics Data System (ADS)

    Liu, Kunqi; Zhang, Jingmin; Liu, Gang; Kang, Lishan

    The satisfiability (SAT) problem is an NP-complete problem. Based on an analysis of the problem, SAT is transformed equivalently into an optimization problem of minimizing an objective function. A hybrid differential evolution algorithm is proposed to solve the satisfiability problem. It makes full use of the strong local search capacity of the hill-climbing algorithm and the strong global search capability of the differential evolution algorithm, compensating for their respective disadvantages, improving the efficiency of the algorithm and avoiding the stagnation phenomenon. The experimental results show that the hybrid algorithm is efficient in solving the SAT problem.
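
    A minimal sketch of the hybrid idea, under stated assumptions: clauses are lists of signed literals, a continuous DE population is thresholded into boolean assignments for evaluation, and each DE trial vector is polished by one pass of greedy bit-flip hill climbing before it competes with its parent. This illustrates the combination, not the paper's exact operators.

        import numpy as np

        def satisfied(assign, clauses):
            """Count satisfied clauses; literal +i means x_i True, -i means x_i False."""
            return sum(any(assign[abs(l) - 1] == (l > 0) for l in cl) for cl in clauses)

        def hybrid_de_sat(clauses, n_vars, pop=30, gens=200, F=0.7, CR=0.9, seed=2):
            rng = np.random.default_rng(seed)
            P = rng.random((pop, n_vars))                     # continuous genotypes in [0, 1]
            fit = np.array([satisfied(p > 0.5, clauses) for p in P])
            for _ in range(gens):
                for i in range(pop):
                    a, b, c = rng.choice(pop, 3, replace=False)
                    trial = np.where(rng.random(n_vars) < CR,
                                     P[a] + F * (P[b] - P[c]), P[i])   # DE/rand/1/bin
                    assign = trial > 0.5
                    score = satisfied(assign, clauses)
                    for j in range(n_vars):                   # greedy bit-flip sweep
                        assign[j] = not assign[j]
                        s = satisfied(assign, clauses)
                        if s > score:
                            score = s                         # keep the improving flip
                        else:
                            assign[j] = not assign[j]         # revert
                    if score > fit[i]:                        # replace parent if better
                        P[i], fit[i] = assign.astype(float), score
                if fit.max() == len(clauses):                 # all clauses satisfied
                    break
            best = int(fit.argmax())
            return P[best] > 0.5, int(fit[best])

        # e.g. hybrid_de_sat([[1, -2], [2, 3], [-1, -3], [-2, -3]], n_vars=3)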

  2. Generalized Jaynes-Cummings model as a quantum search algorithm

    SciTech Connect

    Romanelli, A.

    2009-07-15

    We propose a continuous time quantum search algorithm using a generalization of the Jaynes-Cummings model. In this model the states of the atom are the elements among which the algorithm realizes the search, exciting resonances between the initial and the searched states. This algorithm behaves like Grover's algorithm; the optimal search time is proportional to the square root of the size of the search set and the probability to find the searched state oscillates periodically in time. In this frame, it is possible to reinterpret the usual Jaynes-Cummings model as a trivial case of the quantum search algorithm.

  3. Musical emotions: functions, origins, evolution.

    PubMed

    Perlovsky, Leonid

    2010-03-01

    Theories of music origins and the role of musical emotions in the mind are reviewed. Most existing theories contradict each other, and cannot explain mechanisms or roles of musical emotions in workings of the mind, nor evolutionary reasons for music origins. Music seems to be an enigma. Nevertheless, a synthesis of cognitive science and mathematical models of the mind has been proposed describing a fundamental role of music in the functioning and evolution of the mind, consciousness, and cultures. The review considers ancient theories of music as well as contemporary theories advanced by leading authors in this field. It addresses one hypothesis that promises to unify the field and proposes a theory of musical origin based on a fundamental role of music in cognition and evolution of consciousness and culture. We consider a split in the vocalizations of proto-humans into two types: one less emotional and more concretely-semantic, evolving into language, and the other preserving emotional connections along with semantic ambiguity, evolving into music. The proposed hypothesis departs from other theories in considering specific mechanisms of the mind-brain, which required the evolution of music parallel with the evolution of cultures and languages. Arguments are reviewed that the evolution of language toward becoming the semantically powerful tool of today required emancipation from emotional encumbrances. The opposite, no less powerful mechanisms required a compensatory evolution of music toward more differentiated and refined emotionality. The need for refined music in the process of cultural evolution is grounded in fundamental mechanisms of the mind. This is why today's human mind and cultures cannot exist without today's music. The reviewed hypothesis gives a basis for future analysis of why different evolutionary paths of languages were paralleled by different evolutionary paths of music. Approaches toward experimental verification of this hypothesis in

  4. Musical emotions: Functions, origins, evolution

    NASA Astrophysics Data System (ADS)

    Perlovsky, Leonid

    2010-03-01

    Theories of music origins and the role of musical emotions in the mind are reviewed. Most existing theories contradict each other, and cannot explain mechanisms or roles of musical emotions in workings of the mind, nor evolutionary reasons for music origins. Music seems to be an enigma. Nevertheless, a synthesis of cognitive science and mathematical models of the mind has been proposed describing a fundamental role of music in the functioning and evolution of the mind, consciousness, and cultures. The review considers ancient theories of music as well as contemporary theories advanced by leading authors in this field. It addresses one hypothesis that promises to unify the field and proposes a theory of musical origin based on a fundamental role of music in cognition and evolution of consciousness and culture. We consider a split in the vocalizations of proto-humans into two types: one less emotional and more concretely-semantic, evolving into language, and the other preserving emotional connections along with semantic ambiguity, evolving into music. The proposed hypothesis departs from other theories in considering specific mechanisms of the mind-brain, which required the evolution of music parallel with the evolution of cultures and languages. Arguments are reviewed that the evolution of language toward becoming the semantically powerful tool of today required emancipation from emotional encumbrances. The opposite, no less powerful mechanisms required a compensatory evolution of music toward more differentiated and refined emotionality. The need for refined music in the process of cultural evolution is grounded in fundamental mechanisms of the mind. This is why today's human mind and cultures cannot exist without today's music. The reviewed hypothesis gives a basis for future analysis of why different evolutionary paths of languages were paralleled by different evolutionary paths of music. Approaches toward experimental verification of this hypothesis in

  5. Nonlinear Smoothing and the EM Algorithm for Positive Integral Equations of the First Kind

    SciTech Connect

    Eggermont, P. P. B.

    1999-01-15

    We study a modification of the EMS algorithm in which each step of the EMS algorithm is preceded by a nonlinear smoothing step of the form Nf = exp(S log f), where S is the smoothing operator of the EMS algorithm. In the context of positive integral equations (a la positron emission tomography) the resulting algorithm is related to a convex minimization problem which always admits a unique smooth solution, in contrast to the unmodified maximum likelihood setup. The new algorithm has slightly stronger monotonicity properties than the original EM algorithm. This suggests that the modified EMS algorithm is actually an EM algorithm for the modified problem. The existence of a smooth solution to the modified maximum likelihood problem and the monotonicity together imply the strong convergence of the new algorithm. We also present some simulation results for the integral equation of stereology, which suggests that the new algorithm behaves roughly like the EMS algorithm.
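
    A numerical sketch of one iteration, under an assumption-laden reading of the abstract: the integral equation is discretized as g ≈ A f with a column-normalized kernel A, S is a smoothing matrix whose rows sum to one, and each EM step for the Poisson model is preceded by the nonlinear smoothing Nf = exp(S log f).

        import numpy as np

        def ems_step(f, A, g, S):
            """One modified-EMS iteration: nonlinear smoothing, then an EM step."""
            f = np.exp(S @ np.log(np.maximum(f, 1e-300)))   # Nf = exp(S log f)
            ratio = g / np.maximum(A @ f, 1e-300)           # data / current prediction
            return f * (A.T @ ratio) / A.sum(axis=0)        # multiplicative EM update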

  6. Chemical Origins of Life

    ERIC Educational Resources Information Center

    Fox, J. Lawrence

    1972-01-01

    Reviews ideas and evidence bearing on the origin of life. Shows that evidence to support modifications of Oparin's theories of the origin of biological constituents from inorganic materials is accumulating, and that the necessary components are readily obtained from the simple gases found in the universe. (AL)

  7. The Moon's Origin.

    ERIC Educational Resources Information Center

    Cadogan, Peter

    1983-01-01

    Presents findings and conclusions about the origin of the moon, favoring the capture hypothesis of lunar origin. Advantage of the hypothesis is that it allows the moon to have been formed elsewhere, specifically in a hotter part of the solar nebula, accounting for chemical differences between earth and moon. (JN)

  8. DNA Replication Origins

    PubMed Central

    Leonard, Alan C.; Méchali, Marcel

    2013-01-01

    The onset of genomic DNA synthesis requires precise interactions of specialized initiator proteins with DNA at sites where the replication machinery can be loaded. These sites, defined as replication origins, are found at a few unique locations in all of the prokaryotic chromosomes examined so far. However, replication origins are dispersed among tens of thousands of loci in metazoan chromosomes, thereby raising questions regarding the role of specific nucleotide sequences and chromatin environment in origin selection and the mechanisms used by initiators to recognize replication origins. Close examination of bacterial and archaeal replication origins reveals an array of DNA sequence motifs that position individual initiator protein molecules and promote initiator oligomerization on origin DNA. Conversely, the need for specific recognition sequences in eukaryotic replication origins is relaxed. In fact, the primary rule for origin selection appears to be flexibility, a feature that is modulated either by structural elements or by epigenetic mechanisms at least partly linked to the organization of the genome for gene expression. PMID:23838439

  9. The Growth of Originalism

    ERIC Educational Resources Information Center

    Bork, Robert H.

    2011-01-01

    The latest episode in the long-running struggle for control of the Constitution, and the political power that goes with it, is playing out in the federal courts in California. The contending philosophies are originalism, which holds that the Constitution should be read as it was originally understood by the framers and ratifiers, and the congeries…

  10. Originalism in the Classroom

    ERIC Educational Resources Information Center

    Forte, David F.

    2011-01-01

    In this article, the author provides a detailed legal history of originalism and investigates whether, and to what extent, originalism is a part of law school teaching on the Constitution. He shares the results of an examination of the leading constitutional law textbooks used in the top fifty law schools and a selection of responses gathered from…

  11. Religion: Origins and Evolution.

    ERIC Educational Resources Information Center

    Meyer, John K.

    2004-01-01

    We present the purpose of studying the origins and development of affect-relevant and religion-relevant hypotheses, and conjectured predictions of proto-religious sequences in pre-human anthropoids and primitive human cultures. We anticipate a more comprehensive study of the modern cultural outcomes of these origins and developments.

  12. Multichannel active control of nonlinear noise processes using diagonal structure bilinear FXLMS algorithm

    NASA Astrophysics Data System (ADS)

    Chen, Dong; Yuan, Ding; Li, Tan; Sidan, Du

    2015-12-01

    A novel nonlinear adaptive algorithm, named diagonal structure bilinear filtered-x least mean square (DBFXLMS), for multichannel nonlinear active noise control is proposed in this paper. The performance of the proposed algorithm is shown below, and its computational complexity is compared with those of the second-order Volterra filtered-x LMS (VFXLMS) algorithm and the filtered-s least mean square (FSLMS) algorithm, in terms of normalized mean square error (NMSE), for multichannel active control of nonlinear noise processes. Both the simulations and the computational complexity analyses demonstrate that the proposed method improves on the VFXLMS and FSLMS algorithms.
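
    For orientation, the single-channel filtered-x LMS core that all of these variants extend is sketched below (a generic textbook update, not the paper's diagonal-structure bilinear filter): the reference signal is filtered through an estimate of the secondary path, and the adaptive weights are updated with the residual measured at the error sensor.

        import numpy as np

        def fxlms_step(w, x_buf, xf_buf, e_n, mu=1e-3):
            """One FXLMS update.
            x_buf  : last L reference samples (most recent first)
            xf_buf : the same samples filtered through the secondary-path estimate
            e_n    : residual measured at the error sensor"""
            y_n = w @ x_buf            # anti-noise sample sent to the actuator
            w += mu * e_n * xf_buf     # gradient step on the filtered reference
            return y_n, w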

  13. The origin of membrane bioenergetics.

    PubMed

    Lane, Nick; Martin, William F

    2012-12-21

    Harnessing energy as ion gradients across membranes is as universal as the genetic code. We leverage new insights into anaerobe metabolism to propose geochemical origins that account for the ubiquity of chemiosmotic coupling, and Na(+)/H(+) transporters in particular. Natural proton gradients acting across thin FeS walls within alkaline hydrothermal vents could drive carbon assimilation, leading to the emergence of protocells within vent pores. Protocell membranes that were initially leaky would eventually become less permeable, forcing cells dependent on natural H(+) gradients to pump Na(+) ions. Our hypothesis accounts for the Na(+)/H(+) promiscuity of bioenergetic proteins, as well as the deep divergence between bacteria and archaea. PMID:23260134

  14. Annealed Importance Sampling Reversible Jump MCMC algorithms

    SciTech Connect

    Karagiannis, Georgios; Andrieu, Christophe

    2013-03-20

    It will soon be 20 years since reversible jump Markov chain Monte Carlo (RJ-MCMC) algorithms have been proposed. They have significantly extended the scope of Markov chain Monte Carlo simulation methods, offering the promise to be able to routinely tackle transdimensional sampling problems, as encountered in Bayesian model selection problems for example, in a principled and flexible fashion. Their practical efficient implementation, however, still remains a challenge. A particular difficulty encountered in practice is in the choice of the dimension matching variables (both their nature and their distribution) and the reversible transformations which allow one to define the one-to-one mappings underpinning the design of these algorithms. Indeed, even seemingly sensible choices can lead to algorithms with very poor performance. The focus of this paper is the development and performance evaluation of a method, annealed importance sampling RJ-MCMC (aisRJ), which addresses this problem by mitigating the sensitivity of RJ-MCMC algorithms to the aforementioned poor design. As we shall see the algorithm can be understood as being an “exact approximation” of an idealized MCMC algorithm that would sample from the model probabilities directly in a model selection set-up. Such an idealized algorithm may have good theoretical convergence properties, but typically cannot be implemented, and our algorithms can approximate the performance of such idealized algorithms to an arbitrary degree while not introducing any bias for any degree of approximation. Our approach combines the dimension matching ideas of RJ-MCMC with annealed importance sampling and its Markov chain Monte Carlo implementation. We illustrate the performance of the algorithm with numerical simulations which indicate that, although the approach may at first appear computationally involved, it is in fact competitive.
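
    Annealed importance sampling itself, the ingredient grafted onto RJ-MCMC here, can be sketched for a toy one-dimensional problem: a geometric ladder of distributions bridges a tractable start p0 and the target pT, one Metropolis move is applied at each rung, and the accumulated log-weight gives an unbiased importance weight. The densities and tuning values below are illustrative.

        import numpy as np

        rng = np.random.default_rng(3)

        def log_p0(x):                     # tractable start: N(0, 1), up to a constant
            return -0.5 * x ** 2

        def log_pT(x):                     # target: N(3, 0.5^2), up to a constant
            return -0.5 * ((x - 3.0) / 0.5) ** 2

        def ais_log_weight(n_rungs=100, step=0.5):
            betas = np.linspace(0.0, 1.0, n_rungs + 1)
            x = rng.normal()                                     # exact draw from p0
            logw = 0.0
            for b_prev, b in zip(betas[:-1], betas[1:]):
                logw += (b - b_prev) * (log_pT(x) - log_p0(x))   # weight increment
                prop = x + step * rng.normal()                   # Metropolis move at rung b
                log_acc = ((1 - b) * (log_p0(prop) - log_p0(x))
                           + b * (log_pT(prop) - log_pT(x)))
                if np.log(rng.random()) < log_acc:
                    x = prop
            return logw

        # averaging exp(log-weight) over many runs estimates the ratio of
        # normalizing constants (0.5 for these two unnormalized densities)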

  15. Binarization algorithm for document image with complex background

    NASA Astrophysics Data System (ADS)

    Miao, Shaojun; Lu, Tongwei; Min, Feng

    2015-12-01

    The most important step in image preprocessing for Optical Character Recognition (OCR) is binarization. Due to the complex background or varying light in the text image, binarization is a very difficult problem. This paper presents an improved binarization algorithm. The algorithm can be divided into several steps. First, the background approximation is obtained by polynomial fitting, and the text is sharpened by using a bilateral filter. Second, image contrast compensation is done to reduce the impact of light and improve the contrast of the original image. Third, the first derivatives of the pixels in the compensated image are calculated to get the average value of the threshold, and the edge map is obtained. Fourth, the stroke width of the text is estimated by measuring the distance between edge pixels. The final stroke width is determined by choosing the most frequent distance in the histogram. Fifth, according to the value of the final stroke width, the window size is calculated, and a local threshold estimation approach is used to binarize the image. Finally, small noise is removed using morphological operators. The experimental results show that the proposed method can effectively remove the noise caused by complex background and varying light.

  16. Formal Verification of a Conflict Resolution and Recovery Algorithm

    NASA Technical Reports Server (NTRS)

    Maddalon, Jeffrey; Butler, Ricky; Geser, Alfons; Munoz, Cesar

    2004-01-01

    New air traffic management concepts distribute the duty of traffic separation among system participants. As a consequence, these concepts have a greater dependency and rely heavily on on-board software and hardware systems. One example of a new on-board capability in a distributed air traffic management system is air traffic conflict detection and resolution (CD&R). Traditional methods for safety assessment such as human-in-the-loop simulations, testing, and flight experiments may not be sufficient for this highly distributed system as the set of possible scenarios is too large to have a reasonable coverage. This paper proposes a new method for the safety assessment of avionics systems that makes use of formal methods to drive the development of critical systems. As a case study of this approach, the mechanical verification of an algorithm for air traffic conflict resolution and recovery called RR3D is presented. The RR3D algorithm uses a geometric optimization technique to provide a choice of resolution and recovery maneuvers. If the aircraft adheres to these maneuvers, they will bring the aircraft out of conflict and the aircraft will follow a conflict-free path to its original destination. Verification of RR3D is carried out using the Prototype Verification System (PVS).

  17. Hybrid Algorithms for Fuzzy Reverse Supply Chain Network Design

    PubMed Central

    Che, Z. H.; Chiang, Tzu-An; Kuo, Y. C.

    2014-01-01

    In consideration of capacity constraints, fuzzy defect ratio, and fuzzy transport loss ratio, this paper attempted to establish an optimized decision model for production planning and distribution of a multiphase, multiproduct reverse supply chain, which addresses defects returned to original manufacturers, and in addition, develops hybrid algorithms such as Particle Swarm Optimization-Genetic Algorithm (PSO-GA), Genetic Algorithm-Simulated Annealing (GA-SA), and Particle Swarm Optimization-Simulated Annealing (PSO-SA) for solving the optimized model. During a case study of a multi-phase, multi-product reverse supply chain network, this paper explained the suitability of the optimized decision model and the applicability of the algorithms. Finally, the hybrid algorithms showed excellent solving capability when compared with original GA and PSO methods. PMID:24892057

  18. Differential Search Algorithm Based Edge Detection

    NASA Astrophysics Data System (ADS)

    Gunen, M. A.; Civicioglu, P.; Beşdok, E.

    2016-06-01

    In this paper, a new method is presented for the extraction of edge information using the Differential Search Optimization Algorithm. The proposed method is based on a new heuristic image thresholding method for edge detection. The success of the proposed method has been examined on the fusion of two remotely sensed images. The applicability of the proposed method to edge detection and image fusion problems has been analysed in detail, and the empirical results show that the proposed method is useful for solving these problems.

  19. Current state of the art brachytherapy treatment planning dosimetry algorithms

    PubMed Central

    Pantelis, E; Karaiskos, P

    2014-01-01

    Following literature contributions delineating the deficiencies introduced by the approximations of conventional brachytherapy dosimetry, different model-based dosimetry algorithms have been incorporated into commercial systems for 192Ir brachytherapy treatment planning. The calculation settings of these algorithms are pre-configured according to criteria established by their developers for optimizing computation speed vs accuracy. Their clinical use is hence straightforward. A basic understanding of these algorithms and their limitations is essential, however, for commissioning; detecting differences from conventional algorithms; explaining their origin; assessing their impact; and maintaining global uniformity of clinical practice. PMID:25027247

  20. An adaptive, lossless data compression algorithm and VLSI implementations

    NASA Technical Reports Server (NTRS)

    Venbrux, Jack; Zweigle, Greg; Gambles, Jody; Wiseman, Don; Miller, Warner H.; Yeh, Pen-Shu

    1993-01-01

    This paper first provides an overview of an adaptive, lossless, data compression algorithm originally devised by Rice in the early '70s. It then reports the development of a VLSI encoder/decoder chip set developed which implements this algorithm. A recent effort in making a space qualified version of the encoder is described along with several enhancements to the algorithm. The performance of the enhanced algorithm is compared with those from other currently available lossless compression techniques on multiple sets of test data. The results favor our implemented technique in many applications.
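
    For reference, the Rice code at the heart of the algorithm writes a non-negative value as a unary quotient plus k remainder bits; the adaptive part of the algorithm (not shown) chooses k per block to minimize the coded length. A textbook sketch, not the flight implementation:

        def rice_encode(values, k):
            """Rice code: quotient in unary (q ones + a zero), remainder in k bits."""
            out = []
            for v in values:
                q, r = v >> k, v & ((1 << k) - 1)
                out.extend([1] * q + [0])                                # unary quotient
                out.extend((r >> i) & 1 for i in range(k - 1, -1, -1))   # k-bit remainder
            return out

        def rice_decode(bits, count, k):
            vals, i = [], 0
            for _ in range(count):
                q = 0
                while bits[i] == 1:          # read the unary quotient
                    q += 1; i += 1
                i += 1                       # skip the terminating zero
                r = 0
                for _ in range(k):
                    r = (r << 1) | bits[i]; i += 1
                vals.append((q << k) | r)
            return vals

        assert rice_decode(rice_encode([0, 3, 17], 3), 3, 3) == [0, 3, 17]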

  1. A new optimization approach for shell and tube heat exchangers by using electromagnetism-like algorithm (EM)

    NASA Astrophysics Data System (ADS)

    Abed, Azher M.; Abed, Issa Ahmed; Majdi, Hasan Sh.; Al-Shamani, Ali Najah; Sopian, K.

    2016-02-01

    This study proposes a new procedure for the optimal design of shell and tube heat exchangers. The electromagnetism-like algorithm is applied to reduce heat exchanger capital cost, designing a compact, high-performance heat exchanger with effective use of the allowable pressure drop (cost of the pump). An optimization algorithm is then utilized to determine the optimal values of both the geometric design parameters and the maximum allowable pressure drop by pursuing the minimization of a total cost function. A computer code is developed for the optimal design of shell and tube heat exchangers. Different test cases are solved to demonstrate the effectiveness and ability of the proposed algorithm. Results are also compared with those obtained by other approaches available in the literature. The comparisons indicate that the proposed design procedure can be successfully applied to the optimal design of shell and tube heat exchangers. In particular, in the examined cases, reductions of total costs of up to 30, 29 and 56.15% compared with the original design, and up to 18, 5.5 and 7.4% compared with other approaches, are observed for case studies 1, 2 and 3, respectively. The economic optimizations resulting from the proposed design procedure are especially relevant when size/volume is critical for a high-performance, compact unit and moderate volume and cost are needed.

  2. Scalable Nearest Neighbor Algorithms for High Dimensional Data.

    PubMed

    Muja, Marius; Lowe, David G

    2014-11-01

    For many computer vision and machine learning problems, large training sets are key for good performance. However, the most computationally expensive part of many computer vision and machine learning algorithms consists of finding nearest neighbor matches to high dimensional vectors that represent the training data. We propose new algorithms for approximate nearest neighbor matching and evaluate and compare them with previous algorithms. For matching high dimensional features, we find two algorithms to be the most efficient: the randomized k-d forest and a new algorithm proposed in this paper, the priority search k-means tree. We also propose a new algorithm for matching binary features by searching multiple hierarchical clustering trees and show it outperforms methods typically used in the literature. We show that the optimal nearest neighbor algorithm and its parameters depend on the data set characteristics and describe an automated configuration procedure for finding the best algorithm to search a particular data set. In order to scale to very large data sets that would otherwise not fit in the memory of a single machine, we propose a distributed nearest neighbor matching framework that can be used with any of the algorithms described in the paper. All this research has been released as an open source library called fast library for approximate nearest neighbors (FLANN), which has been incorporated into OpenCV and is now one of the most popular libraries for nearest neighbor matching. PMID:26353063

  3. Parallel scheduling algorithms

    SciTech Connect

    Dekel, E.; Sahni, S.

    1983-01-01

    Parallel algorithms are given for scheduling problems such as scheduling to minimize the number of tardy jobs, job sequencing with deadlines, scheduling to minimize earliness and tardiness penalties, channel assignment, and minimizing the mean finish time. The shared memory model of parallel computers is used to obtain fast algorithms. 26 references.

  4. An improved vein image segmentation algorithm based on SLIC and Niblack threshold method

    NASA Astrophysics Data System (ADS)

    Zhou, Muqing; Wu, Zhaoguo; Chen, Difan; Zhou, Ya

    2013-12-01

    Subcutaneous vein images are often obtained by using the absorbency difference of near-infrared (NIR) light between the vein and its surrounding tissue under NIR illumination. Vein images of high quality are critical to biometric identification, which requires segmenting the vein skeleton from the original images accurately. To address this issue, we propose a vein image segmentation method based on the simple linear iterative clustering (SLIC) method and the Niblack threshold method. The SLIC method is used to pre-segment the original images into superpixels, and all the information in the superpixels is transferred into a matrix (Block Matrix). Subsequently, the Niblack thresholding method is adopted to binarize the Block Matrix. Finally, we obtain segmented vein images from the binarized Block Matrix. In several experiments, most of the vein skeleton is revealed, compared to the traditional Niblack segmentation algorithm.
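
    The Niblack rule used in the second stage thresholds each pixel at T(x, y) = m(x, y) + k * s(x, y), where m and s are the mean and standard deviation over a local window; window size and k are the usual tuning knobs (k is typically negative for dark objects on a bright background). A sketch using SciPy's uniform filter:

        import numpy as np
        from scipy.ndimage import uniform_filter

        def niblack_binarize(img, window=25, k=-0.2):
            """Local Niblack threshold T = m + k*s over a sliding window."""
            img = img.astype(float)
            m = uniform_filter(img, window)                    # local mean
            s = np.sqrt(np.maximum(uniform_filter(img ** 2, window) - m ** 2, 0.0))
            return img > m + k * s                             # True = foreground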

  5. Parallel algorithms for computer vision. Annual report No. 2, 31 August 1986-31 August 1987

    SciTech Connect

    Poggio, T.; Little, J.

    1988-03-01

    Much work during the past year has focused on building the Vision Machine system. The Vision Machine is a testbed for the research on parallel vision algorithms and their integration. The system consists of an input device--a movable two-camera Eye-Head system with six degrees of freedom--and the 16K Connection Machine (CM-1). The authors concentrated on implementing and testing early vision algorithms, and on developing new sophisticated techniques for their integration. The output of the integration stage will be used for navigation and recognition tasks. From August 31, 1986 to August 31, 1987, the Connection Machine delivered on July 31, 1986 by Thinking Machines Corporation was used. A substantial body of vision software was developed and tested on the machine. The development of an integrated Vision Machine that includes several early vision algorithms and an integration stage of middle vision was also nearly completed. As outlined in their original proposal, the authors have begun to explore parallel algorithms at the higher level of recognition. They have also studied the performance of alternative, nonconventional architectures for navigation, and worked on the difficult issue of alternative parallel languages for the Connection Machine, in addition to LISP and C. The body of this report gives an overview of the results of the research during the second twelve months of funding.

  6. Evaluation of an Area-Based matching algorithm with advanced shape models

    NASA Astrophysics Data System (ADS)

    Re, C.; Roncella, R.; Forlani, G.; Cremonese, G.; Naletto, G.

    2014-04-01

    Nowadays, the scientific institutions involved in planetary mapping are working on new strategies to produce accurate high-resolution DTMs from space images at planetary scale, usually dealing with extremely large data volumes. From a methodological point of view, despite the introduction of a series of new algorithms for image matching (e.g. Semi-Global Matching) that yield superior results (especially because they usually produce smooth and continuous surfaces) with lower processing times, the preference in this field still goes to well-established area-based matching techniques. Many efforts are consequently directed to improving each phase of the photogrammetric process, from image pre-processing to DTM interpolation. In this context, the Dense Matcher software (DM) developed at the University of Parma has recently been optimized to cope with the very high resolution images provided by the most recent missions (LROC NAC and HiRISE), focusing the efforts mainly on the improvement of the correlation phase and on process automation. Important changes have been made to the correlation algorithm, still maintaining its high performance in terms of precision and accuracy, by implementing an advanced version of the Least Squares Matching (LSM) algorithm. In particular, an iterative algorithm has been developed to adapt the geometric transformation in image resampling, using different shape functions as originally proposed by other authors in different applications.

  7. A Novel Quantum-Behaved Bat Algorithm with Mean Best Position Directed for Numerical Optimization

    PubMed Central

    Zhu, Wenyong; Liu, Zijuan; Duan, Qingyan; Cao, Long

    2016-01-01

    This paper proposes a novel quantum-behaved bat algorithm with the direction of the mean best position (QMBA). In QMBA, the position of each bat is mainly updated by the current optimal solution in the early stage of the search, and in the late stage it also depends on the mean best position, which can enhance the convergence speed of the algorithm. During the search, quantum behavior of the bats is introduced, which helps them jump out of local optima and gives the algorithm a better ability to adapt to complex environments. Meanwhile, QMBA makes good use of the statistical information of the best positions the bats have experienced to generate better-quality solutions. This approach not only inherits the quick convergence, simplicity, and easy implementation of the original bat algorithm, but also increases the diversity of the population and improves the accuracy of solutions. Twenty-four benchmark test functions are tested and compared with other variant bat algorithms for numerical optimization; the simulation results show that this approach is simple and efficient and can achieve a more accurate solution. PMID:27293424

  8. A Novel Quantum-Behaved Bat Algorithm with Mean Best Position Directed for Numerical Optimization.

    PubMed

    Zhu, Binglian; Zhu, Wenyong; Liu, Zijuan; Duan, Qingyan; Cao, Long

    2016-01-01

    This paper proposes a novel quantum-behaved bat algorithm with the direction of mean best position (QMBA). In QMBA, the position of each bat is mainly updated by the current optimal solution in the early stage of the search, while in the late stage it also depends on the mean best position, which enhances the convergence speed of the algorithm. During the search, quantum behavior of the bats is introduced, which helps them jump out of local optima and gives the algorithm a better ability to adapt to complex environments. Meanwhile, QMBA makes good use of the statistical information of the best positions the bats have experienced to generate better-quality solutions. This approach not only inherits the quick convergence, simplicity, and easy implementation of the original bat algorithm, but also increases the diversity of the population and improves the accuracy of the solution. Twenty-four benchmark test functions are tested and compared with other variant bat algorithms for numerical optimization. The simulation results show that this approach is simple and efficient and can achieve a more accurate solution. PMID:27293424

  9. Sensitivity Analysis of the Scattering-Based SARBM3D Despeckling Algorithm.

    PubMed

    Di Simone, Alessio

    2016-01-01

    Synthetic Aperture Radar (SAR) imagery greatly suffers from multiplicative speckle noise, typical of coherent image acquisition sensors such as SAR systems. Therefore, a proper and accurate despeckling preprocessing step is almost mandatory to aid the interpretation and processing of SAR data by human users and computer algorithms, respectively. Very recently, a scattering-oriented version of the popular SAR Block-Matching 3D (SARBM3D) despeckling filter, named Scattering-Based (SB)-SARBM3D, was proposed. The new filter is based on a priori knowledge of the local topography of the scene. In this paper, an experimental sensitivity analysis of the above-mentioned despeckling algorithm is carried out, and the main results are shown and discussed. In particular, the role of both the electromagnetic and geometrical parameters of the surface, and the impact of its scattering behavior, are investigated. Furthermore, a comprehensive sensitivity analysis of the SB-SARBM3D filter against the Digital Elevation Model (DEM) resolution and the SAR image-DEM coregistration step is also provided. The sensitivity analysis shows a significant robustness of the algorithm against most of the surface parameters, while the DEM resolution plays a key role in the despeckling process. Furthermore, the SB-SARBM3D algorithm outperforms the original SARBM3D in the presence of the most realistic scattering behaviors of the surface. An actual scenario is also presented to assess the DEM role in real-life conditions. PMID:27347971
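
    For context, the multiplicative noise model that despeckling filters such as (SB-)SARBM3D address can be simulated in a few lines of Python; this sketch only illustrates the speckle model and is not the filter itself.

        # L-look SAR intensity speckle: Y = X * F with F ~ Gamma(L, 1/L),
        # a unit-mean multiplicative fading term (std = 1/sqrt(L)).
        import numpy as np

        def add_speckle(intensity, looks=1, rng=np.random.default_rng()):
            """Multiply a clean intensity image by unit-mean Gamma fading."""
            fading = rng.gamma(shape=looks, scale=1.0 / looks, size=intensity.shape)
            return intensity * fading

        noisy = add_speckle(np.ones((128, 128)), looks=4)  # std ~ 0.5 on a flat scene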

  10. Sensitivity Analysis of the Scattering-Based SARBM3D Despeckling Algorithm

    PubMed Central

    Di Simone, Alessio

    2016-01-01

    Synthetic Aperture Radar (SAR) imagery greatly suffers from multiplicative speckle noise, typical of coherent image acquisition sensors such as SAR systems. Therefore, a proper and accurate despeckling preprocessing step is almost mandatory to aid the interpretation and processing of SAR data by human users and computer algorithms, respectively. Very recently, a scattering-oriented version of the popular SAR Block-Matching 3D (SARBM3D) despeckling filter, named Scattering-Based (SB)-SARBM3D, was proposed. The new filter is based on a priori knowledge of the local topography of the scene. In this paper, an experimental sensitivity analysis of the above-mentioned despeckling algorithm is carried out, and the main results are shown and discussed. In particular, the role of both the electromagnetic and geometrical parameters of the surface, and the impact of its scattering behavior, are investigated. Furthermore, a comprehensive sensitivity analysis of the SB-SARBM3D filter against the Digital Elevation Model (DEM) resolution and the SAR image-DEM coregistration step is also provided. The sensitivity analysis shows a significant robustness of the algorithm against most of the surface parameters, while the DEM resolution plays a key role in the despeckling process. Furthermore, the SB-SARBM3D algorithm outperforms the original SARBM3D in the presence of the most realistic scattering behaviors of the surface. An actual scenario is also presented to assess the DEM role in real-life conditions. PMID:27347971

  11. The conception and implementation of a local HDR fusion algorithm depending on contrast and luminosity parameters

    NASA Astrophysics Data System (ADS)

    Besrour, Amine; Abdelkefi, Fatma; Siala, Mohamed; Snoussi, Hichem

    2015-09-01

    Nowadays, high dynamic range (HDR) imaging is the subject of much research. The major challenge lies in designing an algorithm that achieves the best video quality; in particular, the fusion step must keep up with the rapid motion between video frames, and previously implemented merging algorithms were not fast enough to reconstruct HDR video. In this paper, we review the existing work before detailing our algorithm and presenting results from the acquired HDR images, tone mapped with various techniques. Our proposed algorithm is faster than existing ones and produces more enhanced output. It computes a saturation matrix based on the saturation rate of the neighboring pixels, and the resulting coefficients are assigned to each of the tested pictures. This analysis provides faster and more efficient results in terms of quality and brightness. The originality of our work lies in its processing method, which accounts for pixel saturation across all captured pictures and combines them to obtain the picture showing all recoverable details. These parameters are computed for each zone depending on the contrast and luminosity of the current pixel and its neighborhood. The final HDR image coefficients are calculated dynamically, balancing brightness and contrast to produce the best final image.
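
    The per-pixel weighting idea can be illustrated with a minimal Mertens-style exposure fusion sketch in Python. It is not the authors' algorithm: their saturation-matrix computation and zone-based coefficients are not reproduced here, so standard contrast and well-exposedness weights stand in for them.

        # Weighted fusion of differently exposed frames: each pixel's
        # coefficient grows with local contrast and with how well exposed
        # it is, then the weights are normalized per pixel.
        import numpy as np
        from scipy.ndimage import laplace

        def fuse(exposures):
            """Fuse a list of HxW grayscale frames scaled to [0, 1]."""
            weights = []
            for img in exposures:
                contrast = np.abs(laplace(img))                 # local contrast
                well_exposed = np.exp(-((img - 0.5) ** 2) / (2 * 0.2 ** 2))
                weights.append(contrast * well_exposed + 1e-12)
            weights = np.stack(weights)
            weights /= weights.sum(axis=0, keepdims=True)       # per-pixel normalization
            return (weights * np.stack(exposures)).sum(axis=0)

    A production implementation would blend the weighted frames with multiresolution pyramids to avoid seams; the naive per-pixel sum above only shows how the coefficients enter.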

  12. A Stochastic Algorithm for Generating Realistic Virtual Interstitial Cell of Cajal Networks.

    PubMed

    Gao, Jerry; Sathar, Shameer; O'Grady, Gregory; Archer, Rosalind; Cheng, Leo K

    2015-08-01

    Interstitial cells of Cajal (ICC) play a central role in coordinating normal gastrointestinal (GI) motility. Depletion of ICC numbers and network integrity contributes to major functional GI motility disorders. However, the mechanisms relating ICC structure to GI function and dysfunction remain unclear, partly because there is a lack of large-scale ICC network imaging data across a spectrum of depletion levels to guide models. Experimental imaging of these large-scale networks remains challenging because of technical constraints, and hence we propose the generation of realistic virtual ICC networks in silico using the single normal equation simulation (SNESIM) algorithm. ICC network imaging data obtained from wild-type (normal) and 5-HT2B serotonin receptor knockout (depleted ICC) mice were used to inform the algorithm, and the virtual networks generated were assessed using ICC network structural metrics and biophysically based computational modeling. When the virtual networks were compared to the original networks, there was less than 10% error for four out of five structural metrics and all four functional measures. The SNESIM algorithm was then modified to enable the generation of ICC networks across a spectrum of depletion levels, and as a proof of concept, virtual networks were successfully generated with a range of structural and functional properties. The SNESIM and modified SNESIM algorithms therefore offer an alternative strategy for obtaining large-scale ICC network imaging data across a spectrum of depletion levels. These models can be applied to accurately inform the physiological consequences of ICC depletion. PMID:25781477
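
    A highly simplified, single-grid sketch of the SNESIM idea on a binary image follows (Python). Real SNESIM uses multiple grids and a search tree rather than this brute-force training-image scan, and the tiny four-neighbor template is an assumption made for brevity.

        # Visit nodes in random order; for each node, count how often the
        # current (partially informed) neighborhood pattern occurs in the
        # training image, then draw the node value from that conditional
        # probability. Purely illustrative and very slow.
        import numpy as np

        def snesim_sketch(training, shape, rng=np.random.default_rng()):
            th, tw = training.shape
            sim = np.full(shape, -1)                      # -1 = not yet simulated
            offsets = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # tiny conditioning template
            nodes = [(i, j) for i in range(shape[0]) for j in range(shape[1])]
            rng.shuffle(nodes)
            for i, j in nodes:
                counts = np.zeros(2)
                for ci in range(1, th - 1):               # scan the training image
                    for cj in range(1, tw - 1):
                        ok = all(sim[i + di, j + dj] in (-1, training[ci + di, cj + dj])
                                 for di, dj in offsets
                                 if 0 <= i + di < shape[0] and 0 <= j + dj < shape[1])
                        if ok:
                            counts[training[ci, cj]] += 1
                total = counts.sum()
                p_one = counts[1] / total if total else training.mean()
                sim[i, j] = int(rng.random() < p_one)
            return sim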

  13. The NLO jet vertex in the small-cone approximation for kt and cone algorithms

    NASA Astrophysics Data System (ADS)

    Colferai, D.; Niccoli, A.

    2015-04-01

    We determine the jet vertex for Mueller-Navelet jets and forward jets in the small-cone approximation for two particular choices of jet algorithms: the kt algorithm and the cone algorithm. These choices are motivated by the extensive use of such algorithms in the phenomenology of jets. The differences with the original calculation of the small-cone jet vertex by Ivanov and Papa, which is found to be equivalent to an algorithm formerly proposed by Furman, are shown at both the analytic and numerical level, and turn out to be sizeable. A detailed numerical study of the error introduced by the small-cone approximation is also presented, for various observables of phenomenological interest. For a jet "radius" R = 0.5, the use of the small-cone approximation amounts to an error of about 5% at the level of the cross section, while it reduces to less than 2% for ratios of distributions such as those involved in the measurement of the azimuthal decorrelation of dijets.

  14. Advanced Imaging Algorithms for Radiation Imaging Systems

    SciTech Connect

    Marleau, Peter

    2015-10-01

    The intent of the proposed work, in collaboration with the University of Michigan, is to develop the algorithms that will take the analysis from qualitative images to quantitative attributes of objects containing SNM. The first step toward this is to develop an in-depth understanding of the intrinsic errors associated with the deconvolution and MLEM algorithms. A significant new effort will be undertaken to relate the image data to a posited three-dimensional model of geometric primitives that can be adjusted to get the best fit. In this way, parameters of the model such as sizes, shapes, and masses can be extracted for both radioactive and non-radioactive materials. This model-based algorithm will need the integrated response of a hypothesized configuration of material to be calculated many times. As such, both the MLEM and the model-based algorithm require significant increases in calculation speed in order to converge to solutions in practical amounts of time.
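
    For reference, the generic MLEM update that such an error analysis would target is only a few lines; this is a textbook sketch with a dense system matrix, not Sandia's implementation.

        # MLEM for emission imaging, y ~ Poisson(A x):
        #   x <- x * A^T(y / (A x)) / A^T 1
        import numpy as np

        def mlem(A, y, n_iter=50):
            """Generic MLEM; A is the (m, n) system matrix, y the measured counts."""
            m, n = A.shape
            x = np.ones(n)                        # strictly positive start
            sens = A.T @ np.ones(m)               # sensitivity (column sums)
            for _ in range(n_iter):
                proj = A @ x                      # forward projection
                ratio = y / np.maximum(proj, 1e-12)
                x = x * (A.T @ ratio) / np.maximum(sens, 1e-12)
            return x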

  15. Five-dimensional Janis-Newman algorithm

    NASA Astrophysics Data System (ADS)

    Erbin, Harold; Heurtier, Lucien

    2015-08-01

    The Janis-Newman algorithm has been shown to be successful in finding new stationary solutions of four-dimensional gravity. Generalizations to higher dimensions have so far been found only for the restricted case of a single angular momentum. In this paper we propose an extension of this algorithm to five dimensions with two angular momenta--using the prescription of Giampieri--through two specific examples: the Myers-Perry and BMPV black holes. We also discuss possible extensions of our prescription to other dimensions and to the maximal number of angular momenta, and show how dimensions higher than six appear to be much more challenging to treat within this framework. Nonetheless, this general algorithm unifies the formulation of the Janis-Newman algorithm in d = 3, 4, 5, and several examples are presented, including the BTZ black hole.
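
    For background, the four-dimensional version of the trick in Giampieri's prescription can be sketched in a few lines of LaTeX (standard material from the literature, not a result of this paper):

        % Schwarzschild seed in advanced Eddington-Finkelstein coordinates:
        ds^2 = -f(r)\,du^2 - 2\,du\,dr + r^2\,d\Omega^2,
        \qquad f(r) = 1 - \frac{2m}{r}.
        % Complexify the coordinates,
        r \to r + i a \cos\theta, \qquad u \to u - i a \cos\theta,
        % and replace the metric function by its real combination:
        f(r) \to \tilde f(r,\theta) = 1 - \frac{2 m r}{\rho^2},
        \qquad \rho^2 = r^2 + a^2 \cos^2\theta.

    Together with Giampieri's ansatz i\,d\theta = \sin\theta\,d\phi in the differentials, this turns the Schwarzschild seed into the Kerr solution with angular momentum per unit mass a.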

  16. A Dynamic Navigation Algorithm Considering Network Disruptions

    NASA Astrophysics Data System (ADS)

    Jiang, J.; Wu, L.

    2014-04-01

    In a traffic network, link disruptions or recoveries caused by sudden accidents, bad weather, and traffic congestion lead to significant increases or decreases in travel times on some network links. A similar situation occurs in real-time emergency evacuation planning in indoor areas. As the dynamic nature of real-time network information generates better navigation solutions than static information, a real-time dynamic navigation algorithm for emergency evacuation under stochastic disruptions or recoveries in the network is presented in this paper. Compared with existing algorithms, this new algorithm adjusts the pre-existing path to a new optimal one according to the changing link travel times. With real-time network information, it can quickly provide the optimal path to adapt to rapidly changing network properties. Theoretical analysis and experimental results demonstrate that the proposed algorithm achieves high time efficiency in obtaining exact solutions, and that indirect information can be calculated in spare time.
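
    The replanning idea can be sketched as follows in Python. The paper's algorithm adjusts the pre-existing path incrementally; a full Dijkstra re-run from the traveler's current node over the updated weights is shown here only as the simplest correct baseline, and all names are illustrative.

        # When a link's travel time changes, recompute shortest paths from
        # the current node instead of following the stale pre-computed path.
        import heapq

        def dijkstra(graph, source):               # graph: {u: {v: travel_time}}
            dist, prev = {source: 0.0}, {}
            pq = [(0.0, source)]
            while pq:
                d, u = heapq.heappop(pq)
                if d > dist.get(u, float("inf")):
                    continue
                for v, w in graph[u].items():
                    nd = d + w
                    if nd < dist.get(v, float("inf")):
                        dist[v], prev[v] = nd, u
                        heapq.heappush(pq, (nd, v))
            return dist, prev

        def replan(graph, current_node, target, disrupted_link, new_time):
            u, v = disrupted_link
            graph[u][v] = new_time                  # real-time travel-time update
            dist, prev = dijkstra(graph, current_node)
            path, node = [], target                 # rebuild path (assumes reachable)
            while node != current_node:
                path.append(node)
                node = prev[node]
            return [current_node] + path[::-1]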

  17. Cell list algorithms for nonequilibrium molecular dynamics

    NASA Astrophysics Data System (ADS)

    Dobson, Matthew; Fox, Ian; Saracino, Alexandra

    2016-06-01

    We present two modifications of the standard cell list algorithm that handle molecular dynamics simulations with deforming periodic geometry. Such geometry naturally arises in the simulation of homogeneous, linear nonequilibrium flow modeled with periodic boundary conditions, and recent progress has been made developing boundary conditions suitable for general 3D flows of this type. Previous works focused on the planar flows handled by Lees-Edwards or Kraynik-Reinelt boundary conditions, while the new versions of the cell list algorithm presented here are formulated to handle the general 3D deforming simulation geometry. As in the case of equilibrium, for short-ranged pairwise interactions, the cell list algorithm reduces the computational complexity of the force computation from O(N²) to O(N), where N is the total number of particles in the simulation box. We include a comparison of the complexity and efficiency of the two proposed modifications of the standard algorithm.
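
    A minimal equilibrium (non-deforming, cubic box) cell list sketch in Python shows where the O(N) scaling comes from; the paper's contribution, handling deforming periodic boxes, is not reproduced here.

        # Bin particles into cells at least `rc` wide, then search only the
        # 27 adjacent cells per cell for pairs within the cutoff.
        import numpy as np
        from itertools import product

        def neighbor_pairs(pos, box, rc):
            """All pairs closer than rc in a cubic periodic box of side `box`."""
            ncell = max(1, int(box // rc))       # cells per side
            size = box / ncell
            cells = {}
            for i, p in enumerate(pos):          # bin particles into cells
                cells.setdefault(tuple((p // size).astype(int) % ncell), []).append(i)
            pairs = set()                        # set avoids double counting
            for c, members in cells.items():
                for d in product((-1, 0, 1), repeat=3):
                    nb = tuple((c[k] + d[k]) % ncell for k in range(3))
                    for i in members:
                        for j in cells.get(nb, ()):
                            if i < j:
                                delta = pos[i] - pos[j]
                                delta -= box * np.round(delta / box)  # minimum image
                                if delta @ delta < rc * rc:
                                    pairs.add((i, j))
            return sorted(pairs)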

  18. Genetic algorithms as global random search methods

    NASA Technical Reports Server (NTRS)

    Peck, Charles C.; Dhawan, Atam P.

    1995-01-01

    Genetic algorithm behavior is described in terms of the construction and evolution of the sampling distributions over the space of candidate solutions. This novel perspective is motivated by analysis indicating that the schema theory is inadequate for completely and properly explaining genetic algorithm behavior. Based on the proposed theory, it is argued that the similarities of candidate solutions should be exploited directly, rather than encoding candidate solutions and then exploiting their similarities. Proportional selection is characterized as a global search operator, and recombination is characterized as the search process that exploits similarities. Sequential algorithms and many deletion methods are also analyzed. It is shown that by properly constraining the search breadth of recombination operators, convergence of genetic algorithms to a global optimum can be ensured.

  19. Genetic algorithms as global random search methods

    NASA Technical Reports Server (NTRS)

    Peck, Charles C.; Dhawan, Atam P.

    1995-01-01

    Genetic algorithm behavior is described in terms of the construction and evolution of the sampling distributions over the space of candidate solutions. This novel perspective is motivated by analysis indicating that the schema theory is inadequate for completely and properly explaining genetic algorithm behavior. Based on the proposed theory, it is argued that the similarities of candidate solutions should be exploited directly, rather than encoding candidate solutions and then exploiting their similarities. Proportional selection is characterized as a global search operator, and recombination is characterized as the search process that exploits similarities. Sequential algorithms and many deletion methods are also analyzed. It is shown that by properly constraining the search breadth of recombination operators, convergence of genetic algorithms to a global optimum can be ensured.
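
    As a concrete example of the operator being characterized, here is a minimal roulette-wheel (proportional) selection sketch in Python, assuming non-negative fitness values:

        # Proportional selection: each candidate is sampled with probability
        # proportional to its fitness, acting as a global search operator
        # over the sampling distribution.
        import numpy as np

        def proportional_selection(population, fitness, rng=np.random.default_rng()):
            """Sample a new population with probability proportional to fitness."""
            f = np.asarray(fitness, dtype=float)
            probs = f / f.sum()                   # assumes non-negative fitness
            idx = rng.choice(len(population), size=len(population), p=probs)
            return [population[i] for i in idx]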

  20. A signal invariant wavelet function selection algorithm.

    PubMed

    Garg, Girisha

    2016-04-01

    This paper addresses the problem of mother wavelet selection for wavelet signal processing in feature extraction and pattern recognition. The problem is formulated as an optimization criterion, where a wavelet library is defined using a set of parameters to find the best mother wavelet function. Analysis of variance is used to estimate the fitness function adopted to evaluate the performance of a wavelet function. A genetic algorithm is exploited to optimize the determination of the best mother wavelet function. For experimental evaluation, solutions for best mother wavelet selection are evaluated on various biomedical signal classification problems, where the solutions of the proposed algorithm are assessed and compared with manual trial-and-error methods. The results show that the solutions of the automated mother wavelet selection algorithm are consistent with the manual selection of wavelet functions. The algorithm is found to be invariant to the type of signals used for classification. PMID:26253283
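
    The fitness idea can be sketched in Python with PyWavelets and SciPy: score each candidate mother wavelet by the ANOVA F-statistic of its decomposition energies across classes. The paper couples such a fitness with a genetic algorithm over a parameterized wavelet library; the exhaustive loop over a few named wavelets below is a simplification, and the function names are illustrative.

        # Score candidate mother wavelets by how well their band energies
        # separate classes (mean ANOVA F-statistic), keeping the best one.
        import numpy as np
        import pywt
        from scipy.stats import f_oneway

        def band_energies(signal, wavelet, level=4):
            """Energy of each wavelet decomposition band of a 1-D signal."""
            coeffs = pywt.wavedec(signal, wavelet, level=level)
            return [float(np.sum(c ** 2)) for c in coeffs]

        def select_wavelet(signals, labels, candidates=("db2", "db4", "sym5", "coif3")):
            labels = np.asarray(labels)
            best, best_score = None, -np.inf
            for w in candidates:
                feats = np.array([band_energies(s, w) for s in signals])
                groups = [feats[labels == c] for c in np.unique(labels)]
                score = np.mean([f_oneway(*(g[:, k] for g in groups)).statistic
                                 for k in range(feats.shape[1])])
                if score > best_score:
                    best, best_score = w, score
            return best, best_score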