NASA Astrophysics Data System (ADS)
Mahjoub, Mehdi
Solving the Boltzmann equation remains an important step in predicting the behaviour of a nuclear reactor. Unfortunately, solving this equation is still a challenge for complex geometries (a reactor) as well as for simple ones (a cell). To predict the behaviour of a nuclear reactor, a two-step calculation scheme is therefore used. The first step obtains the nuclear parameters of a reactor cell after a homogenization and condensation step. The second step is a diffusion calculation over the whole reactor using the results of the first step, with the reactor geometry simplified to a set of homogeneous cells surrounded by reflector. During transients (accidents), these two steps are insufficient to predict the reactor's behaviour. Since solving the time-dependent form of the Boltzmann equation remains a major challenge for all geometry types, another calculation scheme is needed. To circumvent this difficulty, the adiabatic hypothesis is used; it takes the form of a four-step calculation scheme. The first and second steps remain the same, for nominal reactor conditions. The third step obtains the new nuclear properties of the cell following the perturbation, which are used in the fourth step in a new full-reactor calculation to obtain the effect of the perturbation on the reactor. This project aims to verify this hypothesis, and a new calculation scheme was therefore defined. The first stage of this project was to create new software capable of solving the time-dependent Boltzmann equation with the stochastic Monte Carlo method, in order to obtain cross sections that evolve in time. This code was used to simulate a LOCA accident in a CANDU-6 nuclear reactor. The time-dependent cross sections were then used in a space-time diffusion calculation for a CANDU-6 reactor undergoing a LOCA affecting half the core, in order to observe its behaviour during all phases of the perturbation. In the development phase, we chose to start from the OpenMC code, developed at MIT, as the initial development platform. Introducing and handling delayed neutrons during the simulation was a major challenge to overcome (a minimal sketch of this sampling step is given below). It is worth noting that the Monte Carlo code developed can be used at large scale to simulate all types of nuclear reactors, provided the computing resources are available.
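The delayed-neutron difficulty mentioned above can be made concrete with a short sketch of the sampling step a time-dependent Monte Carlo code must add at every fission site. This is a minimal illustration under stated assumptions: six precursor groups with typical U-235 thermal constants used as placeholders, and a bare function rather than the OpenMC-based implementation described in the thesis.

```python
import math
import random

# Illustrative six-group delayed-neutron constants (fractions beta_i and
# decay constants lambda_i in 1/s); typical U-235 thermal values, used
# here only as placeholders, not data from the thesis.
BETAS   = [2.15e-4, 1.424e-3, 1.274e-3, 2.568e-3, 7.48e-4, 2.73e-4]
LAMBDAS = [0.0124, 0.0305, 0.111, 0.301, 1.14, 3.01]
BETA = sum(BETAS)

def sample_fission_neutrons(t_fission, nu=2.43):
    """Return (emission_time, kind) pairs for one fission at t_fission.

    Each fission neutron is delayed with probability BETA; a delayed
    neutron is assigned precursor group i with probability beta_i/BETA
    and an emission time drawn from that group's exponential decay.
    """
    n = int(nu) + (1 if random.random() < nu - int(nu) else 0)
    emitted = []
    for _ in range(n):
        if random.random() < BETA:                 # delayed neutron
            r, cum = random.random() * BETA, 0.0
            for beta_i, lam_i in zip(BETAS, LAMBDAS):
                cum += beta_i
                if r < cum:
                    delay = -math.log(1.0 - random.random()) / lam_i
                    emitted.append((t_fission + delay, "delayed"))
                    break
        else:                                      # prompt neutron
            emitted.append((t_fission, "prompt"))
    return emitted
```

Because precursor half-lives reach tens of seconds while prompt neutrons live for microseconds, the event queue must span wildly different time scales, which is why the delayed-neutron treatment dominates the development effort.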
Formability Extension of Aerospace Alloys for Tube Hydroforming Applications
NASA Astrophysics Data System (ADS)
Anderson, Melissa
Tube hydroforming is an innovative metal-forming process that uses fluid pressure, generally water, inside a closed die to plastically deform thin-walled parts and thereby manufacture tubular components with complex geometries. The process has many advantages, such as part weight reduction, lower tooling and assembly costs, fewer manufacturing steps, and the excellent surface finish of hydroformed parts. Despite these strengths, however, hydroforming remains a marginal process in the aerospace field because of several factors, including the limited formability of aerospace alloys. The main objective of the research conducted in this thesis is to study a method for increasing the formability of two designated aerospace alloys through a multi-step forming process that includes forming cycles followed by intermediate heat-treatment steps. An exhaustive literature review of existing methods for improving the formability of the target materials, and of the available softening heat treatments, allowed an appropriate experimental procedure to be established. The process comprises several successive forming-plus-heat-treatment sequences until the final part is obtained. Inserting intermediate heat-treatment steps, and the "deformation + heat treatment" combination, influence the mechanical and metallurgical behaviour of the alloys. A complete material characterization was therefore carried out at each step of the process. From a mechanical standpoint, the effect of the heat treatments, and more generally of the multi-step process, on the mechanical properties and constitutive laws of the alloys was studied in detail. On the metallurgical side, the influence of the process on the microstructural characteristics of the alloys, such as grain size, phases present, and texture, was analyzed. These two parallel studies provided a complete and detailed understanding of the impact of the metallurgical changes induced by the multi-step process on the macroscopic mechanical behaviour and the final properties of the part.
Flexible injection manufacturing of conical parts for aerospace applications
NASA Astrophysics Data System (ADS)
Shebib Loiselle, Vincent
Composite materials have been used in the nozzles of space engines since the 1960s. Today, the advent of three-dimensional fabrics brings an innovative solution to the delamination problem that limited the mechanical properties of these composites. Using these fabrics, however, requires the design of better-suited manufacturing processes. A new method for manufacturing composite parts for aerospace applications was studied throughout this work. It applies the principles of flexible injection (the Polyflex process) to the manufacture of thick conical parts. The validation part to be manufactured is a scale model of a space-engine nozzle part, made of a three-dimensional carbon-fibre reinforcement and a phenolic resin. The success of the project is defined by several criteria on the compaction and wrinkling of the reinforcement and on porosity formation in the manufactured part. A large number of steps were required before two validation parts could be manufactured. First, to meet the reinforcement-compaction criterion, a characterization tool was designed. The compaction study was carried out to obtain the information needed to understand the deformation of an axisymmetric 3D reinforcement. Next, the injection principle for the part was defined for this new process. To validate the proposed concepts, the permeability of the fibrous reinforcement and the viscosity of the resin had to be characterized. Using these data, a series of flow simulations of the part injection were run and an approximate filling time calculated, as sketched below. After this step, the design of the nozzle mould was undertaken, supported by a mechanical simulation of its resistance to the manufacturing conditions. Several tools required for manufacturing were also designed and installed in the new CGD (large-scale composites) laboratory. In parallel, several studies were carried out to understand the phenomena influencing the polymerization of the resin.
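The filling-time approximation mentioned above rests on Darcy's law for resin flow through the fibrous reinforcement. As a minimal one-dimensional sketch (constant injection pressure and rectilinear flow are assumptions of this illustration, not of the thesis's actual simulations):

$$\mathbf{v} = -\frac{K}{\mu}\,\nabla p, \qquad t_{\text{fill}} \approx \frac{\phi\,\mu\,L^{2}}{2\,K\,\Delta p},$$

where K is the measured permeability, μ the resin viscosity, φ the reinforcement porosity, L the flow length, and Δp the injection pressure difference. This is why the permeability and viscosity characterizations had to precede the flow simulations.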
ERIC Educational Resources Information Center
Department of Indian Affairs and Northern Development, Ottawa (Ontario).
Gathering Strength is an integrated government-wide plan to address the key challenges facing Canada's Aboriginal people. Following an initial section on reconciliation of historic grievances, this report describes initiatives in the four areas addressed by the action plan: (1) partnerships (all schools received public awareness materials;…
Experimental performance analysis of a lithium battery for aeronautics
NASA Astrophysics Data System (ADS)
Bonnin, Romain
The objective of this thesis is to identify and study the performance required for a lithium battery to be used in the aeronautical sector. Within the framework of our research, we therefore propose a test procedure for analyzing and determining whether a lithium battery can be installed in an aircraft. To address the performance analysis, a study of the functions required by the aircraft and of the pre-existing standards is carried out. Following this step, we design a test bench. Once the test bench is completed, we test a lithium battery that is assumed to have all the technical characteristics required for installation in an aircraft. These tests allow us to give an opinion on the use of lithium batteries in the aeronautical field.
NASA Astrophysics Data System (ADS)
Boussaboun, Zakariae
Clay minerals are possible catalysts for forming graphene from organic precursors such as sucrose. Clays are abundant, safe, and economical for graphene formation. The main objective of this thesis is to demonstrate that it is possible to synthesize a hybrid material containing clay and graphene. These carbon materials based on clay (bentonite and Cloisite) and sucrose were prepared by two methods. The first method has three steps: 1) a contact period between the clay and the carbon source in a humid environment, 2) infiltration of the carbonaceous matter and transformation in a microwave oven, 3) heating at 750°C under nitrogen to obtain the carbon materials. The second method has two steps, without the microwave, and with an increased amount of carbon source (sucrose and alginate). Characterization of the material made it possible to follow the reactions transforming the carbon source into graphene. This characterization was done by FTIR and Raman spectroscopy, thermogravimetric analysis (TGA), specific surface area (BET method), and scanning electron microscopy (SEM). Electrical conductivity was measured with a dielectric spectrometer and, as a function of applied pressure, with a multimeter. The resulting material was incorporated into a low-density polyethylene matrix to obtain a polymer with specific characteristics. Thermal conductivity was then measured according to ASTM E1530. The sample produced with the second method, with one part bentonite to five parts sucrose (M2 B1:S5), indicates the possibility of producing graphene materials from natural resources. The specific surface area increased considerably, from 75.88 m²/g for untreated bentonite to 139.76 m²/g for the M2 B1:S5 sample. A significant increase in conductivity under pressure (95.3 S/m at 6.5 MPa, versus 1.45×10⁻³ S/m for bentonite) and in thermal conductivity in low-density polyethylene at a 10% additive concentration (0.332 W/m·K versus 0.279 W/m·K) was observed for the same M2 B1:S5 sample compared to untreated bentonite. Possible applications include, for example, pressure sensors and actuators.
Carbon nanostructures: Production and characterization
NASA Astrophysics Data System (ADS)
Beig Agha, Rosa
The objective of this thesis is to prepare and characterize carbon nanostructures (CNS, licensed to the Hydrogen Research Institute, Quebec, Canada), a carbon with a higher degree of graphitization and better porosity. Chapter 1 is a general description of polymer electrolyte membrane fuel cells (PEMFCs), and more particularly of CNS as a catalyst support, together with their synthesis and purification. Chapter 2 describes in more detail the CNS synthesis and purification method, the theory of nanostructure formation, and the various characterization techniques we used, such as X-ray diffraction (XRD), transmission electron microscopy (TEM), Raman spectroscopy, nitrogen adsorption isotherms at 77 K (BET, t-plot, and DFT analyses), mercury intrusion, and thermogravimetric analysis (TGA). Chapter 3 presents the results obtained at each step of the CNS synthesis, with samples produced using a SPEX-type mill (SPEX/CertiPrep 8000D) and a planetary mill (Fritsch Pulverisette 5). The essential difference between these two mill types is the way the materials are milled. The SPEX mill shakes the crucible containing the materials and steel balls along three axes, producing very high-energy impacts. The planetary mill rotates and moves the crucible containing the materials and steel balls along two axes (in a plane). The materials are therefore milled differently, and the objective is to see whether the resulting CNS have the same structures and properties. During our work we faced a major problem: we could not reproduce the CNS whose synthesis method had originally been developed in the laboratories of the Hydrogen Research Institute (IRH). Our samples always showed a large amount of iron carbide at the expense of carbon nanostructure formation. After several months of investigation, we found that the base metals, iron and cobalt, were contaminated. Nevertheless, this work taught us a great deal, and the results are presented in Appendices I to III. The starting carbon is a commercial activated carbon (CNS201) that was first heated at 1,000°C under vacuum for 90 minutes to remove any moisture and other impurities. In the first step, pretreated CNS201 was mixed in a hardened-steel crucible with given amounts of Fe and Co (99.9% pure). Typical proportions are 50 wt.%, 44 wt.%, and 6 wt.% for C, Fe, and Co respectively. For samples prepared with the SPEX mill, three to six hardened-steel balls were used, with a ball-to-powder mass ratio of 35 to 1. For samples prepared with the planetary mill, thirty-six hardened-steel balls were used, with a ball-to-powder mass ratio of 10 to 1. Hydrogen was then introduced into the crucible, for both mill types, at a pressure of 1.4 MPa, and the sample was milled for 12 h in the SPEX and 24 h in the planetary mill. The SPEX mill has a higher mechanical energy-transfer efficiency than a planetary mill, but has the disadvantage of contaminating the sample more with Fe through attrition. However, this can be neglected since Fe was one of the metal catalysts added to the crucible. In the second step, the milled sample is transferred under inert gas (argon) into a quartz tube, which is then heated at 700°C for 90 minutes. Powder X-ray diffraction patterns were measured to characterize the structural changes of the CNS during the synthesis steps. These measurements were taken with a Bruker D8 FOCUS diffractometer using Cu Kα radiation (λ = 1.54054 Å) in a θ/2θ geometry. Figure 3.1 shows the X-ray diffraction pattern of the activated carbon used as the precursor to produce the CNS. The activated carbon is preheated at high temperature (1,000°C) for 1 h to remove moisture. Figure 3.2 shows the X-ray diffraction patterns of the SPEX and planetary samples after 12 h and 24 h of milling, respectively. The carbon structures are not yet well defined, but a peak at 2θ ≈ 20°-30° corresponds to small turbostratic crystallites, and a peak corresponding to iron and iron carbide appears at 2θ ≈ 45°. These angles convert to d-spacings through Bragg's law, as shown below. (Abstract shortened by UMI.)
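As a worked example of how the quoted peak positions convert to structural length scales, Bragg's law with the Cu Kα wavelength given above yields

$$d = \frac{\lambda}{2\sin\theta}, \qquad d(2\theta = 25^{\circ}) = \frac{1.54054\ \text{Å}}{2\sin 12.5^{\circ}} \approx 3.6\ \text{Å}, \qquad d(2\theta = 45^{\circ}) \approx 2.0\ \text{Å},$$

the former consistent with the enlarged interlayer spacing of turbostratic carbon and the latter with the Fe and iron-carbide reflections mentioned in the text.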
ERIC Educational Resources Information Center
Ontario Public Health Association, Toronto.
The Literacy and Health Project was set up to determine how reading and health problems were connected. A research phase documented the relationship between literacy and health. Information was collected from community organizations, literature review, three case studies in Ontario, and key informant interviews. The consultation process involved…
Parallel iterative methods: Applications in neutronics and fluid mechanics
NASA Astrophysics Data System (ADS)
Qaddouri, Abdessamad
In this thesis, parallel computing is applied successively to neutronics and to fluid mechanics. In each of these two applications, iterative methods are used to solve the system of algebraic equations resulting from the discretization of the equations of the physical problem. In the neutronics problem, the computation of the collision probability (CP) matrices and a multigroup iterative scheme using an inverse power method are parallelized. In the fluid-mechanics problem, a finite-element code using a preconditioned GMRES-type iterative algorithm is parallelized. This thesis is presented as six articles followed by a conclusion. The first five articles deal with the neutronics applications and trace the evolution of our work in this field. This evolution starts with a parallel computation of the CP matrices and a parallel multigroup algorithm tested on a one-dimensional problem (article 1), followed by two parallel algorithms, one multiregion and one multigroup, tested on two-dimensional problems (articles 2-3). These first two stages are followed by the application of two acceleration techniques, neutron rebalancing and residual minimization, to the two parallel algorithms (article 4). Finally, the multigroup algorithm and the parallel computation of the CP matrices were implemented in the production code DRAGON, where the tests are more realistic and can be three-dimensional (article 5). The sixth article (article 6), devoted to the fluid-mechanics application, deals with the parallelization of a finite-element code, FES, in which the METIS graph partitioner and the PSPARSLIB library are used.
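As a serial illustration of the preconditioned GMRES iteration at the heart of article 6, the sketch below solves a stand-in sparse system with an incomplete-LU preconditioner. The matrix and SciPy calls are assumptions of this sketch, a serial analogue rather than a reconstruction of the distributed FES/PSPARSLIB code.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Stand-in sparse system playing the role of a finite-element matrix.
n = 1000
A = sp.diags([-1.0, 4.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)

# Incomplete-LU factorization used as a preconditioner: the serial
# analogue of the distributed preconditioning done via PSPARSLIB.
ilu = spla.spilu(A, drop_tol=1e-5)
M = spla.LinearOperator((n, n), matvec=ilu.solve)

x, info = spla.gmres(A, b, M=M, restart=30)
print("converged" if info == 0 else f"stopped with flag {info}",
      "| residual:", np.linalg.norm(b - A @ x))
```

In the parallel setting described in the article, METIS partitions the finite-element graph so that each process owns a block of the system, and PSPARSLIB provides the distributed matrix-vector products and preconditioning that SciPy performs serially here.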
NASA Astrophysics Data System (ADS)
Lirette-Pitre, Nicole T.
2009-07-01
Girls' academic success increasingly leads them to pursue postsecondary education and to practice professions that demand a high level of scientific knowledge and expertise. Nevertheless, very few girls still consider a career in science (chemistry and physics), engineering, or ICT (information and communication technology), that is, a career linked to the new economy. For many girls, science and ICT are not school subjects they find interesting, even when they do very well in them. These girls admit that their learning experiences in science and ICT have not allowed them to develop an interest in these subjects or to feel confident in their ability to succeed in them. Consequently, few girls choose to pursue postsecondary studies in these disciplines. Social cognitive career theory was chosen as the theoretical model to better understand which variables come into play when girls choose their careers. Our study concerns the design and effectiveness evaluation of instructional material designed specifically to improve the science and ICT learning experiences of grade 9 girls in New Brunswick. The pedagogical approach favoured in our material implemented teaching strategies drawn from the best practices we identified, aimed particularly at increasing girls' self-efficacy and interest in these disciplines. This material, available on the Internet at http://www.umoncton.ca/lirettn/scientic, is directly linked to the grade 9 natural science curriculum of New Brunswick. The evaluation of the effectiveness of our instructional material followed two main methodological stages: 1) evaluation of the usability and user-friendliness of the material, and 2) evaluation of the effect of the material on various variables related to girls' interest and self-efficacy in science and ICT. This research was conducted within a pragmatic research paradigm; pragmatism guided our choices regarding the research model and the techniques used. The research combined qualitative and quantitative techniques, particularly for data collection and analysis. The data collected in the first stage, the evaluation of the usability and user-friendliness of the material by science teachers and by the girls, revealed that the material is very usable and user-friendly, although a few small improvements will be made in a subsequent version to further ease navigation. As for the evaluation of the effects of the material on the variables related to self-efficacy and interest during the quasi-experimental stage, our qualitative data indicated that the material had positive effects on the self-efficacy and interests of the girls who used it. However, our quantitative data did not allow us to infer a direct causal link between use of the material and an increase in girls' self-efficacy and interest in science and ICT. In light of the results obtained, we concluded that the material had the expected effects. We therefore recommend the creation and use of material of this kind in all science classes from grade 6 to grade 12 in New Brunswick.
RANS and potential flow coupling algorithms
NASA Astrophysics Data System (ADS)
Gallay, Sylvain
In the aircraft development process, the selected solution must satisfy numerous criteria in many domains, such as structures, aerodynamics, stability and control, performance, and safety, while meeting strict schedules and minimizing costs. Candidate geometries are numerous in the early product-definition and preliminary-design stages, and multidisciplinary optimization environments are being developed by the various aerospace industries. Different methods involving different levels of modelling are required for the different phases of a project. During the preliminary definition and design phases, fast methods are needed to study the candidates efficiently. Developing methods that improve the accuracy of existing methods while keeping computational cost low provides a higher level of fidelity in the early phases of a project and thus greatly reduces the associated risks. In aerodynamics, the development of viscous/inviscid coupling algorithms upgrades linear inviscid calculation methods into nonlinear methods that account for viscous effects. These methods make it possible to characterize the viscous flow over configurations and to predict, among other things, stall mechanisms or the position of shock waves on lifting surfaces. This thesis focuses on the coupling between a three-dimensional potential-flow method and two-dimensional viscous sectional data. Existing methods are implemented and their limits identified; an original method is then developed and validated (a minimal sketch of this class of coupling is given below). Results on an elliptic wing demonstrate the capability of the algorithm at high angles of attack and in the post-stall region. The coupling algorithm was compared with higher-fidelity data on configurations from the literature. A fuselage model based on empirical relations and RANS simulations was tested and validated. The lift, drag, and pitching-moment coefficients, as well as the pressure coefficients extracted along the span, showed good agreement with wind-tunnel data and RANS models for transonic configurations. A high-lift configuration was used to study the potential-flow method's modelling of high-lift surfaces, demonstrating that camber can be taken into account solely through the viscous data.
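To make the coupling idea concrete, here is a minimal Γ-based sketch in which a discrete lifting line (the 3D potential model) is closed with a 2D sectional lift polar and iterated with under-relaxation. The planform, the smooth stall roll-off in cl_2d, and all numerical settings are assumptions of this illustration; in the thesis the sectional data would come from viscous 2D computations or experiments, and the actual algorithm differs.

```python
import numpy as np

# Nonlinear lifting line closed with a "viscous" 2D polar (illustrative).
N, b, V, c0 = 50, 10.0, 30.0, 1.0
alpha = np.deg2rad(10.0)                                   # geometric AoA
y_edges = -b / 2 * np.cos(np.linspace(0.0, np.pi, N + 1))  # cosine spacing
y = 0.5 * (y_edges[:-1] + y_edges[1:])                     # control points
chord = c0 * np.sqrt(1.0 - (2.0 * y / b) ** 2)             # elliptic chord

def cl_2d(a):
    # Hypothetical sectional polar: linear lift with a smooth stall
    # roll-off, standing in for measured or RANS 2D viscous data.
    return 2.0 * np.pi * a / (1.0 + (a / np.deg2rad(18.0)) ** 6)

# Downwash influence of each trailing-vortex pair on each control point.
aic = (1.0 / (y[:, None] - y_edges[None, 1:])
       - 1.0 / (y[:, None] - y_edges[None, :-1])) / (4.0 * np.pi)

gamma = np.zeros(N)
for _ in range(5000):
    alpha_eff = alpha + (aic @ gamma) / V            # downwash is negative
    gamma_new = 0.5 * V * chord * cl_2d(alpha_eff)   # Kutta-Joukowski
    if np.max(np.abs(gamma_new - gamma)) < 1e-7 * V:
        break
    gamma += 0.05 * (gamma_new - gamma)              # relaxation near stall

dy = np.diff(y_edges)
CL = 2.0 * np.sum(gamma * dy) / (V * np.sum(chord * dy))
print(f"CL = {CL:.3f}")
```

The under-relaxation step is what lets couplings of this kind reach the post-stall branch, where a naive fixed-point update diverges.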
NASA Astrophysics Data System (ADS)
Communier, David
In the structural study of an aircraft wing, it is difficult to model faithfully the aerodynamic forces experienced by the wing. To simplify the analysis, the theoretical maximum lift of the wing is distributed over its main spar or over its ribs. The distribution used implies that the entire wing will be stronger than necessary and therefore that the structure will not be fully optimized. To overcome this problem, an aerodynamic lift distribution should be applied over the complete surface of the wing, giving a much more reliable load distribution. To achieve this, we need to couple the results of a software package that computes the aerodynamic loads on the wing with those of a software package for its design and structural analysis. In this project, the software used to compute the pressure coefficients on the wing is XFLR5, and the software for design and structural analysis is CATIA V5. XFLR5 allows a fast analysis of a wing based on the analysis of its airfoils. It computes airfoil performance in the same way as XFOIL and offers a choice of three calculation methods for wing performance: Lifting Line Theory (LLT), the Vortex Lattice Method (VLM), and 3D Panels. In our methodology, we use the 3D Panels method, whose validity was tested in a wind tunnel to confirm the XFLR5 calculations. As for finite-element design and analysis of the structure, CATIA V5 is commonly used in the aerospace field and allows the wing design steps to be automated. This thesis therefore describes a methodology for the aerostructural study of an aircraft wing.
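The load-transfer step at the heart of this coupling can be sketched simply: each pressure coefficient from the aerodynamic solver becomes a gauge pressure and then a panel force for the structural model. The array layout and values below are hypothetical; neither XFLR5's export format nor the CATIA V5 load interface is reproduced here.

```python
import numpy as np

# Convert panel pressure coefficients into structural panel forces
# (illustrative stand-in for an XFLR5 export; format assumed).
rho, V = 1.225, 40.0                    # air density [kg/m^3], speed [m/s]
q_inf = 0.5 * rho * V ** 2              # dynamic pressure [Pa]

areas = np.array([0.010, 0.012, 0.011])            # panel areas [m^2]
normals = np.array([[0.0,  0.1, 0.995],
                    [0.0,  0.0, 1.000],
                    [0.0, -0.1, 0.995]])           # outward unit normals
cp = np.array([-1.2, -0.8, -0.5])                  # pressure coefficients

# Gauge pressure p - p_inf = q_inf * Cp acts along the inward normal,
# hence the minus sign; each row is the force on one panel.
forces = -(q_inf * cp)[:, None] * areas[:, None] * normals
print("total aerodynamic force [N]:", forces.sum(axis=0))
```

Applied panel by panel over the whole wing surface, this is what replaces the single-spar or rib lumping criticized above.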
High-resolution measurement of the radial distribution function of pure amorphous silicon
NASA Astrophysics Data System (ADS)
Laaziri, Khalid
1999-11-01
This thesis deals with the structure of amorphous silicon prepared by ion irradiation. It presents X-ray diffraction measurements on crystalline silicon powder and on relaxed and unrelaxed amorphous silicon, together with all the mathematical and physical developments needed to extract the radial distribution function for each sample. In Chapter I, we present a method for fabricating thin membranes of pure amorphous silicon. There are two major steps in the fabrication process: ion implantation, to create an amorphous layer several microns thick, and chemical etching, to remove the remaining crystalline material. We first characterized the amorphous silicon membranes by Raman spectroscopy to verify that no trace of crystalline material remained in the amorphous films. A second characterization by elastic recoil detection (ERD-TOF) on the same membranes showed that there is less than 0.1 atomic % of contaminants such as oxygen, carbon, and hydrogen. In Chapter II, we propose a new method for correcting the inelastic "Compton" contribution to the total scattering spectra in order to extract the elastic scattering peaks responsible for Bragg diffraction. The article first presents a simplified description of a theory of inelastic scattering known as the Impulse Approximation (IA), which allows Compton profiles to be calculated as a function of energy and scattering angle 2θ. These profiles are used as fitting functions for the experimental Compton scattering. To fit the elastic scattering peaks, we used an asymmetric peak function. In Chapter III, we describe in detail the results of the X-ray diffraction experiments on the amorphous silicon membranes and the crystalline silicon powder we prepared. We also cover the various experimental and analysis steps, and the methods for computing and filtering the Fourier transforms of the diffraction data (the standard transform is recalled below). A comparison of the radial distribution functions of relaxed and unrelaxed amorphous silicon indicates that structural relaxation in amorphous silicon is probably due largely to defect annihilation rather than to a global atomic reorganization of the amorphous silicon network. The coordination numbers deduced from Gaussian fits of the first-neighbour peaks are 3.88 for relaxed amorphous silicon and 3.79 for unrelaxed material, while the reference measurement on crystalline silicon powder gives 4, as expected. The under-coordination of amorphous silicon would explain why its density is lower than that of crystalline silicon. (Abstract shortened by UMI.)
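For a monatomic sample such as silicon, the transform referred to above takes the standard form: with S(Q) the structure factor obtained from the corrected elastic intensity and ρ₀ the average atomic density, the radial distribution function is

$$J(r) = 4\pi r^{2}\rho_{0} + \frac{2r}{\pi}\int_{0}^{Q_{\max}} Q\,[S(Q)-1]\,\sin(Qr)\,dQ,$$

and the coordination numbers quoted above (3.88, 3.79, and 4) are the areas under Gaussian fits to the first-neighbour peak of J(r). The finite cutoff Q_max is what introduces the termination ripples that the filtering step has to control.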
Sell, Timothy C; Abt, John P; Crawford, Kim; Lovalekar, Mita; Nagai, Takashi; Deluzio, Jennifer B; Smalley, Brain W; McGrail, Mark A; Rowe, Russell S; Cardin, Sylvain; Lephart, Scott M
2010-01-01
Physical training for United States military personnel requires a combination of injury prevention and performance optimization to counter unintentional musculoskeletal injuries and maximize warrior capabilities. Determining the most effective activities and tasks to meet these goals requires a systematic, research-based approach that is population specific, based on the tasks and demands placed on the Warrior. The authors have modified the traditional approach to injury prevention to implement a comprehensive injury prevention and performance optimization research program with the 101st Airborne Division (Air Assault) at Fort Campbell, KY. This is the second of two companion papers and presents the last three steps of the research model: Design and Validation of the Interventions, Program Integration and Implementation, and Monitor and Determine the Effectiveness of the Program. An 8-week trial was performed to validate the Eagle Tactical Athlete Program (ETAP) for improving the modifiable suboptimal characteristics identified in Part I. The experimental group participated in ETAP under the direction of an ETAP Strength and Conditioning Specialist, while the control group performed the current physical training at Fort Campbell under the direction of a Physical Training Leader, as governed by FM 21-20, for the 8-week study period. Soldiers performing ETAP demonstrated improvements in several tests of strength, flexibility, performance, and physiology, and in the APFT, compared to the current physical training performed at Fort Campbell. ETAP was proven valid for improving certain suboptimal characteristics within the 8-week trial as compared to the current training performed at Fort Campbell. ETAP has long-term implications, with greater improvements expected when implemented into a Division pre-deployment cycle of 10-12 months, which will result in further systemic adaptations for each variable.
Animation-Based Learning in Geology: Impact of Animations Coupled with Seductive Details
NASA Astrophysics Data System (ADS)
Clayton, Rodney L.
Research is not clear on how to address the difficulty students have conceptualizing geologic processes and phenomena. This study investigated how animations coupled with seductive details affect learners' situational interest and emotions. A quantitative quasi-experimental study employing an independent-measures factorial design was used. The participants included a convenience sample of 102 undergraduates. There was a main effect of seductive details on comprehension, F(2, 94) = 10.02, p < .001, ηp² = .176. Contrasts revealed that the presence of seductive details significantly increased comprehension of learning material when compared to no seductive details, t(94) = -2.56, p = .012, ηp² = .065. There was an effect of seductive details on cognitive load, F(2, 94) = 4.96, p = .009, ηp² = .095, but a non-significant effect of presentational modality, F(1, 94) = 3.50, p = .064, ηp² = .036. Contrasts showed that perceived cognitive load significantly decreased under the textual seductive-details condition (ΔM = -.82, p = .017). The greatest significant decrease in total cognitive load occurred under the video seductive-details condition (ΔM = -.99, p = .004). There was a significant main effect of modality on comprehension, F(1, 94) = 7.74, p = .007, ηp² = .076. Contrasts revealed that learning with animations significantly increased learning performance compared to illustrations, t(94) = 2.03, p < .05, ηp² = .042. Contrast results also showed a significant difference in means when comparing animations to illustrations (ΔM = 7.93, p = .007). There was a significant effect of seductive details on perceived interest after controlling for spatial ability and prior knowledge, F(2, 94) = 3.65, p = .030, ηp² = .072. Learners' prior knowledge also had a significant effect on perceived interest, F(1, 94) = 4.74, p = .032, ηp² = .048. There appeared to be no effect of presentational modality on perceived interest (p > .05). Considering the inconsistent results of studies, and the potential impact of affective factors, further research is needed to evaluate the efficacy of animations and the use of seductive details under different learning conditions.
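For reference, the effect sizes reported above (rendered "etap2" in the original record) are partial eta squared values, which can be recovered from each reported F statistic and its degrees of freedom:

$$\eta_p^{2} = \frac{SS_{\text{effect}}}{SS_{\text{effect}} + SS_{\text{error}}} = \frac{F \cdot df_{\text{effect}}}{F \cdot df_{\text{effect}} + df_{\text{error}}}.$$

As a check against the text, F(2, 94) = 10.02 gives ηp² = (10.02 × 2)/(10.02 × 2 + 94) ≈ .176, exactly the comprehension effect size reported above.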
[Application of the Smoking Scale for Primary Care (ETAP) in clinical practice].
González Romero, M P; Cuevas-Fernández, F J; Marcelino-Rodríguez, I; Covas, V J; Rodríguez Pérez, M C; Cabrera de León, A; Aguirre-Jaime, A
2017-08-23
To determine whether the ETAP smoking scale, which measures accumulated active and passive exposure to tobacco, is applicable and effective in Primary Care clinical practice for the prevention of acute myocardial infarction (AMI). The setting was Barranco Grande Health Centre in Tenerife, Spain. A study of 61 cases (AMI) and 144 controls, sampled with a random start, without matching. ROC curves were analysed, and effectiveness was estimated using sensitivity and negative predictive value (NPV), defined below. A questionnaire on the applicability of ETAP in the clinic was given to the participating family physicians. The opinion of the participating physicians was unanimously favourable: ETAP was easy to use in the clinic, required less than 3 min per patient, and was useful for reinforcing the preventive intervention. The ETAP ROC curve showed that 20 years of exposure was the best cut-off point, with an area under the curve of 0.70 (95% CI: 0.62-0.78) and a combination of sensitivity (98%) and NPV (96%) for AMI. When stratifying by age and gender, all groups achieved sensitivities and NPVs close to 100%, except for men aged ≥55 years, in whom the NPV fell to 75%. The results indicate that ETAP is a valid tool that can be applied and be effective in Primary Care clinical practice for the prevention of AMI related to smoking exposure. Copyright © 2017 Elsevier España, S.L.U. All rights reserved.
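For reference, the two effectiveness measures quoted above are defined from the 2×2 table of scale result (above or below the 20-year cut-off) against AMI status:

$$\text{Sensitivity} = \frac{TP}{TP + FN}, \qquad \text{NPV} = \frac{TN}{TN + FN}.$$

A sensitivity of 98% thus means that almost all AMI cases exceeded the cut-off, and an NPV of 96% means that a patient below the cut-off was very unlikely to be a case.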
Gadow, Kenneth D; Roohi, Jasmin; DeVincent, Carla J; Kirsch, Sarah; Hatchwell, Eli
2009-11-01
The aim of the study is to examine rs4680 (COMT) and rs6265 (BDNF) as genetic markers of anxiety, ADHD, and tics. Parents and teachers completed a DSM-IV-referenced rating scale for a total sample of 67 children with autism spectrum disorder (ASD). Both COMT (p = 0.06) and BDNF (p = 0.07) genotypes were marginally significant for teacher ratings of social phobia (ηp² = 0.06). Analyses also indicated associations of BDNF genotype with parent-rated ADHD (p = 0.01, ηp² = 0.10) and teacher-rated tics (p = 0.04, ηp² = 0.07). There was also evidence of a possible interaction (p = 0.02, ηp² = 0.09) of BDNF genotype with the DAT1 3' VNTR for tic severity. BDNF and COMT may be biomarkers for phenotypic variation in ASD, but these preliminary findings remain tentative pending replication with larger, independent samples.
Epitaxial growth of GaAs on Ge substrates by chemical beam epitaxy
NASA Astrophysics Data System (ADS)
Belanger, Simon
The energy situation and the environmental issues facing society are driving growing interest in producing electricity from solar energy. Among currently available technologies, concentrator photovoltaics (CPV) offers superior efficiency and attractive potential, provided its production costs are competitive. Chemical beam epitaxy (CBE) has several characteristics that make it attractive for the large-scale production of multi-junction photovoltaic cells based on III-V semiconductors. This type of cell has the best efficiency achieved to date and is used on satellites and in the most efficient concentrator photovoltaic (CPV) systems. One of the main strengths of the CBE technique is its potential for source-material utilization efficiency, which is higher than that of the epitaxy technique currently used for large-scale production of these cells. This master's thesis presents work carried out to evaluate the potential of CBE for growing GaAs layers on Ge substrates. This growth is the first fabrication step of many designs of the high-performance solar cells described above. The project required developing a surface-preparation process for the germanium substrates, carrying out numerous epitaxial growth runs, and characterizing the resulting materials by optical microscopy, atomic force microscopy (AFM), high-resolution X-ray diffraction (HRXRD), transmission electron microscopy (TEM), low-temperature photoluminescence (LTPL), and secondary ion mass spectrometry (SIMS). The experiments confirmed the effectiveness of the surface-preparation process and identified the optimal growth conditions. The characterization results indicate that the materials obtained have very low surface roughness, good crystalline quality, and a relatively high residual doping. Moreover, the GaAs/Ge interface has a low defect density. Finally, arsenic diffusion into the germanium substrate is comparable to values found in the literature for low-temperature growth with the other common epitaxy processes. These results confirm that chemical beam epitaxy (CBE) can produce GaAs layers on Ge of adequate quality for fabricating high-performance solar cells. The contribution to the scientific community was maximized through an article submitted to the Journal of Crystal Growth and a presentation of the work at the Photovoltaics Canada 2010 conference. Keywords: épitaxie par jets chimiques, chemical beam epitaxy, CBE, MOMBE, germanium, GaAs, Ge
NASA Astrophysics Data System (ADS)
Bretin, Remy
Fatigue damage of materials is a common problem in many fields, including aeronautics. To prevent fatigue failure, the fatigue life of materials must be determined. Unfortunately, because of the many heterogeneities present, fatigue life can vary greatly between two identical parts made of the same material that have undergone the same treatments. These heterogeneities must therefore be considered in our models to obtain a better estimate of material fatigue life. As a first step toward better accounting for heterogeneities in our models, a linear-elastic study of the influence of crystallographic orientations on the strain and stress fields in a polycrystal was carried out using the finite-element method. Correlations were established from the results, and an analytical linear-elastic model taking into account crystallographic orientation distributions and neighbourhood effects was developed. This model builds on classical homogenization models, such as the self-consistent scheme, and also borrows the neighbourhood principles of cellular automata. Taking the finite-element results as the reference, the analytical model developed here proved twice as accurate as the self-consistent model, whatever the material studied.
Design and optimization of a composite skin for a morphing wing
NASA Astrophysics Data System (ADS)
Michaud, Francois
Economic and environmental concerns are major drivers for the development of new technologies in aeronautics. The MDO-505 project, entitled Morphing Architectures and Related Technologies for Wing Efficiency Improvement, was born in this context. The objective of this project is to design an active morphing wing that improves laminarity and thereby reduces aircraft fuel consumption and emissions. The research carried out led to the design and optimization of an adaptive composite skin that improves laminarity while preserving structural integrity. First, a three-step optimization method was developed with the objective of minimizing the mass of the composite skin while ensuring that, through active control of the morphing surface, it conforms to the desired aerodynamic profiles. The optimization process also included strength, stability, and stiffness constraints on the composite skin. After optimization, the optimized skin was simplified to ease manufacturing and to respect Bombardier Aerospace design rules. This optimization process produced a composite skin whose deviations, or shape errors, from the optimized aerodynamic profiles were greatly reduced. Aerodynamic analyses based on these shapes predicted good laminarity improvements. A series of analytical validations was then carried out to validate the structural integrity of the composite skin following the methods generally used at Bombardier Aerospace. First, a comparative finite-element analysis validated that the morphing wing has a stiffness equivalent to that of the original wing section. The finite-element model was then looped with calculation sheets to validate the stability and strength of the composite skin under real aerodynamic load cases. Finally, a bolted-joint analysis was performed using an internal tool named LJ 85 BJSFM GO.v9 developed by Bombardier Aerospace. These analyses numerically validated the structural integrity of the composite skin for typical aeronautical loadings and material allowables.
Vertical trajectory optimization using the harmony search method
NASA Astrophysics Data System (ADS)
Ruby, Margaux
In the face of global warming, solutions to reduce CO2 emissions are urgently needed. Trajectory optimization is one way to reduce fuel consumption during a flight. To determine the optimal aircraft trajectory, various algorithms have been developed. The goal of these algorithms is to minimize the total cost of a flight, which is directly linked to fuel consumption and flight time; another parameter, called the cost index, enters the definition of the flight cost. Fuel consumption is provided through performance data for each flight phase. In this thesis, the phases of a complete flight (climb, cruise, and descent) are studied. "Step climbs", defined as 2,000-ft climbs during the cruise phase, are also studied. The algorithm developed in this thesis is a metaheuristic called harmony search, which combines two types of search: local search and population-based search. The algorithm is based on the observation of musicians in a concert, or more exactly on the ability of the music to find its best harmony, which in optimization terms is the lowest cost (a minimal sketch is given below). Various inputs, such as aircraft weight, destination, initial aircraft speed, and the number of iterations, must be supplied to the algorithm so that it can determine the optimal solution, defined as: [climb speed, altitude, cruise speed, descent speed]. The algorithm was developed in MATLAB and tested for several destinations and several weights for a single aircraft type. For validation, the results obtained by this algorithm were first compared with those of an exhaustive search over all possible combinations. The exhaustive search provides the global optimum; the solution of our algorithm must therefore come as close as possible to it to show that the algorithm yields results near the global optimum. A second comparison was made between the algorithm's results and those of the Flight Management System (FMS), an avionics system located in the aircraft cockpit that provides the route to follow to optimize the trajectory. The goal is to show that the harmony search algorithm gives better results than the algorithm implemented in the FMS.
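The sketch below illustrates the harmony-search loop in miniature. The solution vector mirrors the one quoted above; the bounds, parameter values, and flight_cost() stand-in are hypothetical, since the real cost comes from the aircraft performance database and the cost index, and the thesis implementation is in MATLAB.

```python
import random

# Minimal harmony-search sketch over the solution vector
# [climb speed, altitude, cruise speed, descent speed].
# BOUNDS and flight_cost() are hypothetical placeholders.
BOUNDS = [(250, 350), (28000, 41000), (0.70, 0.82), (250, 350)]
HMS, HMCR, PAR, BW, ITERS = 20, 0.9, 0.3, 0.05, 5000

def flight_cost(x):
    # Placeholder convex cost standing in for fuel + cost-index * time.
    return sum((v - (lo + hi) / 2) ** 2 / (hi - lo) ** 2
               for v, (lo, hi) in zip(x, BOUNDS))

memory = [[random.uniform(lo, hi) for lo, hi in BOUNDS] for _ in range(HMS)]
costs = [flight_cost(h) for h in memory]

for _ in range(ITERS):
    new = []
    for d, (lo, hi) in enumerate(BOUNDS):
        if random.random() < HMCR:            # consider the harmony memory
            v = random.choice(memory)[d]
            if random.random() < PAR:         # pitch adjustment
                v += random.uniform(-1.0, 1.0) * BW * (hi - lo)
        else:                                 # random improvisation
            v = random.uniform(lo, hi)
        new.append(min(max(v, lo), hi))
    c = flight_cost(new)
    worst = max(range(HMS), key=costs.__getitem__)
    if c < costs[worst]:                      # keep only improving harmonies
        memory[worst], costs[worst] = new, c

best = min(range(HMS), key=costs.__getitem__)
print("best harmony:", memory[best], "cost:", costs[best])
```

The HMCR/PAR split is what gives the method the mix of population-based and local search mentioned above.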
NASA Astrophysics Data System (ADS)
Fareh, Fouad
Low-pressure powder injection moulding (LPIM) of metallic powders is a manufacturing technique that produces parts with the complexity of castings but the mechanical properties of wrought parts. However, the debinding and sintering steps have so far been optimized using feedstocks whose optimal mouldability has not yet been demonstrated; the understanding of feedstock rheology and segregation is therefore very limited, and this is the weak point of the LPIM process. The objective of this research project was to characterize the influence of binders on the rheological behaviour of feedstocks by measuring the viscosity and segregation of the low-viscosity feedstocks used in the LPIM process. To this end, rheological and thermogravimetric tests were conducted on 12 feedstocks, prepared from spherical Inconel 718 powder (constant solid loading of 60%) with waxes and surfactant or thickening agents. The rheological tests were used, among other things, to calculate the mouldability index α_STV of the feedstocks, while the thermogravimetric tests precisely evaluated powder segregation in the feedstocks. It was shown that the three feedstocks containing paraffin wax and stearic acid exhibit higher α_STV indices, which is advantageous for metal injection moulding (MIM), but segregate far too much for the moulded part to have good mechanical characteristics. Conversely, the feedstock containing paraffin wax and ethylene-vinyl acetate, as well as the feedstock containing only carnauba wax, segregate little or not at all but have very low α_STV indices and are therefore difficult to injection-mould. The best compromise thus appears to be the feedstocks containing wax (paraffin, beeswax, and carnauba) with low contents of stearic acid and ethylene-vinyl acetate. Moreover, pre-existing physical laws confirmed the results of the rheological and thermogravimetric tests and also highlighted the influence of segregation on the rheological properties of the feedstocks. These tests also showed the effect of the binder constituents and of the time spent in the molten state on the intensity of segregation in the feedstocks; feedstocks containing stearic acid segregate quickly. The characterization of feedstocks developed for low-pressure powder injection moulding must therefore use a short-duration method to avoid segregation and to measure precisely the flowability of these feedstocks.
Derguy, C; Poumeyreau, M; Pingault, S; M'bailara, K
2017-11-24
Autism Spectrum Disorders (ASD) are characterized by particularities of cognitive and socio-adaptive functioning. Daily, they require specific interventions for the disabled person as well as support for parents, who often report deterioration in their physical and mental health. To this end, the latest Autism Plan 2013-2017 highlights the need "to help families to be present and active alongside their loved ones, to avoid situations of exhaustion and stress and to enable them to play their role fully in the long term". Support programmes must therefore be based on an analysis of parents' needs and propose multiple intervention modalities that respond to the complexity of the caregiving mission. Therapeutic education (TE) seems to answer these different elements by proposing a global approach that develops child-centred skills addressing the educational challenges (self-care skills) as well as skills centred on the projects and fulfilment of the parent (psychosocial skills). The ETAP (Therapeutic Education Autism and Parenting) program is an initial TE offering intended for parents of children with ASD aged between 3 and 10 years. It consists of seven group sessions and two semi-structured interviews, called the educational diagnosis. A booster session is also proposed three months after the last session. It was developed following rigorously the guidelines on program construction published by the French High Authority for Health. In addition, it is based on an assessment of parents' needs, an in-depth analysis of the literature, and the opinion of nine experts in this area. The objective of this study is to evaluate the effectiveness of the ETAP program on the quality of life and anxious-depressive symptoms of parents of a child with ASD. To our knowledge, the ETAP program is the first TE program in France for parents of children with ASD to have been evaluated. Our sample is composed of 40 participants, including 30 parents who participated in the ETAP program ("ETAP group") and ten controls who did not participate but are on a waiting list ("control group"). Each participant completed a quality-of-life questionnaire (WHOQOL-BREF) and an anxiety-depression questionnaire (HADS) before the start of the program (T1) and after session 7 (T2). Preliminary analyses show good intergroup matching on socio-demographic and medical data, and the two groups do not differ significantly at T1 on the set of dependent variables measured. Our results show an improvement in participants' quality of life and depressive symptomatology. On the other hand, we did not observe any significant decrease in anxiety symptoms. However, when we consider the proportion of parents with clinically significant anxiety (HADS score ≥10), it tends to decrease after the program only in the ETAP group. These data should be interpreted with caution because of their preliminary nature and the small size of our sample. Nevertheless, these first results are encouraging and confirm the value of the therapeutic education model for parents of children with ASD. The information given during the sessions takes into account the parent's prior representations, knowledge, and skills; the program thus promotes the upholding and development of parents' individual resources. In addition, the targeted psychosocial skills also ease access to available environmental resources. Finally, in a more indirect way, the ETAP program aims to maintain or restore a positive parental and individual identity and the progressive development of new ways of interacting with the environment. An adaptation of Hobfoll's conservation of resources model is proposed by the authors to formulate hypotheses about the mechanisms of action of the ETAP program. Copyright © 2017 L'Encéphale, Paris. Published by Elsevier Masson SAS. All rights reserved.
Acute and overuse injuries of the abdomen and groin in athletes.
Atkins, Justin M; Taylor, Jonathan C; Kane, Shawn F
2010-01-01
Abdominal and groin injuries are common problems encountered by athletes across a wide variety of sports. They range from benign but annoying, such as exercise-related transient abdominal pain (ETAP), to the activity-limiting and possibly career-ending condition of athletic hernia. This article covers ETAP, rectus abdominis injuries, osteitis pubis, athletic hernia, and abdominal/groin hernias to provide an update on the current pathophysiology and treatment of common abdominal and pelvic conditions in the athlete.
The effect of severe plastic deformation on the hydriding properties of magnesium
NASA Astrophysics Data System (ADS)
Lang, Julien
The research carried out during my master's project in physics at the Université du Québec à Trois-Rivières, in the laboratories of the Hydrogen Research Institute, compared the effect of cold rolling MgH2 powder with that of ball milling. We studied this new technique using a vertical rolling mill designed specifically for rolling powder, rolling the MgH2 powder 5, 25, 50, and 100 times. The morphology of the MgH2 powder as received from the manufacturer, and after 30 minutes of ball milling, was compared with that of the rolled powder using a scanning electron microscope. We then measured the hydrogen sorption properties with a Sievert-type PCT apparatus and determined the crystal structure by X-ray diffraction. From these results, we found that the optimal number of rolling passes is five, which gives hydrogen absorption/desorption characteristics similar to 30 minutes of ball milling. We also used the hydrogen absorption and desorption kinetics curves to determine the rate-limiting step in the sorption reactions of the rolled samples (a sketch of this kind of model fitting is given below). Since five rolling passes take about 10 seconds, cold rolling is industrially more attractive than ball milling because of the large savings in time and energy.
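Determining the rate-limiting step from sorption curves is conventionally done by fitting the normalized reacted fraction X(t) to standard solid-state kinetic models and comparing fit quality; the model whose form best matches the data points to the controlling mechanism. The sketch below assumes that conventional approach: the model forms are textbook ones and the data are synthetic, so this illustrates the method rather than the thesis's actual analysis.

```python
import numpy as np
from scipy.optimize import curve_fit

# Candidate solid-state kinetic models for the reacted fraction X(t);
# the best-fitting form suggests the rate-limiting step.
models = {
    "surface controlled":     lambda t, k: np.clip(k * t, 0.0, 1.0),
    "JMA n=1 (nucleation)":   lambda t, k: 1.0 - np.exp(-k * t),
    "3D contracting volume":  lambda t, k: 1.0 - np.clip(1.0 - k * t, 0.0, None) ** 3,
}

t = np.linspace(0.0, 600.0, 61)        # time grid in seconds (illustrative)
X = 1.0 - np.exp(-0.008 * t)           # synthetic "measured" reacted fraction

for name, f in models.items():
    k_opt, _ = curve_fit(f, t, X, p0=[1e-3])
    rss = float(np.sum((f(t, *k_opt) - X) ** 2))
    print(f"{name:22s} k = {k_opt[0]:.4g}  RSS = {rss:.3e}")
```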
NASA Astrophysics Data System (ADS)
Villeneuve, Eric
This project, carried out at the request of the Laboratoire International des Matériaux Antigivre (LIMA), aimed to measure and experimentally characterize the impact of hydrophobic coatings on the drag and lift coefficients of a NACA 0012 airfoil. To do so, the LIMA aerodynamic balance first had to be upgraded to offer sufficient sensitivity for the project. Several improvements were made, such as replacing the load cells, reducing their number, and replacing the balance frame. Once these improvements were completed, the repeatability, accuracy and sensitivity of the balance were validated to ensure the reliability of its results. For the angles of attack studied with the coatings, namely -6° and 0°, the balance has a repeatability of ±2.06% at a Reynolds number of 360,000. To validate the sensitivity, tests at -6° and 0° angle of attack and Reynolds numbers of 360,000 and 500,000 were performed with sandpapers. The results of these tests made it possible to plot trend curves of the drag coefficient of the NACA 0012 as a function of surface roughness and to establish the sensitivity of the balance at ±8 μm. Five popular coatings were selected for the experiments: Wearlon, Staclean, Hirec, Phasebreak and Nusil. The coatings were subjected to the same experimental conditions as the sandpapers, and an equivalent roughness was found by extrapolating the results; however, the equivalent surface roughnesses differ between -6° and 0°. Tests with Staclean and Hirec gave drag coefficients equivalent to those of bare aluminum, whereas Wearlon, Nusil and Phasebreak increased the drag coefficient by 13%, 17% and 25%, respectively, relative to aluminum. For the lift coefficients, the balance did not detect any effect of the coatings, or of the sandpapers, on the lift force, meaning the effect falls within the insensitivity of the balance. The final experimental step consisted in measuring the impact of the coatings on ice formation and on the evolution of the drag and lift coefficients of the NACA 0012 as ice accumulated on it. Wearlon was chosen as the coating because of its wide popularity. Tests at -5°C and -20°C were performed, and the results showed that Wearlon brings no beneficial effect to the NACA 0012 under icing conditions. The increase in drag coefficient of the Wearlon-coated profile began sooner than on aluminum, and water froze slightly farther toward the rear of the profile during the tests, which is undesirable. The drag coefficient was about 13% higher for Wearlon than for aluminum throughout the ice accretion, matching the gap observed without ice. The lift coefficient results could not be used, for a reason that remains to be investigated.
Lelong, Bernard; de Chaisemartin, Cécile; Meillat, Helene; Cournier, Sandra; Boher, Jean Marie; Genre, Dominique; Karoui, Mehdi; Tuech, Jean Jacques; Delpero, Jean Robert
2017-04-11
Total mesorectal excision is the standard surgical treatment for mid- and low-rectal cancer. Laparoscopy represents a clear leap forward in the management of rectal cancer patients, offering significant improvements in post-operative measures such as pain, first bowel movement, and hospital length of stay. However, there are still some limits to its application, especially in difficult cases. Such cases may entail either conversion to an open procedure or positive resection margins. Transanal endoscopic proctectomy (ETAP) was recently described and could address the difficulties of approaching the lower third of the rectum. Early series and case-control studies have shown favourable short-term results, such as a low conversion rate, reduced hospital length of stay and oncological outcomes comparable to laparoscopic surgery. The aim of the proposed study is to compare the rate of positive resection margins (R1 resection) with ETAP versus laparoscopic proctectomy (LAP), with patients randomly assigned to each arm. The proposed study is a multicentre randomised trial using two parallel groups to compare ETAP and LAP. Patients with T3 lower-third rectal adenocarcinomas for whom conservative surgery with manual coloanal anastomosis is planned will be recruited. Randomisation will be performed immediately prior to surgery after ensuring that the patient meets the inclusion criteria and completing the baseline functional and quality of life tests. The study is designed as a non-inferiority trial with a main criterion of R0/R1 resection. Secondary endpoints will include the conversion rate, the minimal invasiveness of the abdominal approach, postoperative morbidity, the length of hospital stay, mesorectal macroscopic assessment, functional urologic and sexual results, faecal continence, global quality of life, stoma-free survival, and disease-free survival at 3 years. The inclusion period will be 3 years, and every patient will be followed for 3 years. The number of patients needed is 226. There is a strong need for optimal evaluation of ETAP because of substantial changes in the operative technique. Assessment of oncological safety and septic risk, as well as digestive and urological functional results, is particularly mandatory. Moreover, the benefits of the ETAP technique could be demonstrated in post-operative outcomes. ClinicalTrials.gov: NCT02584985. Date and version identifier: Version n°2 - 2015 July 6.
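For illustration only, the order of magnitude of such an enrolment target can be reproduced with a textbook normal-approximation calculation for a non-inferiority comparison of two proportions; the rates and margin below are hypothetical placeholders, not the protocol's actual design parameters:

```python
# Illustrative sketch: per-arm sample size for a one-sided non-inferiority
# test of two proportions (normal approximation). All inputs are placeholders.
from scipy.stats import norm

p0, p1 = 0.10, 0.10      # assumed R1 resection rate in each arm (placeholder)
delta = 0.10             # non-inferiority margin (placeholder)
alpha, power = 0.05, 0.80
z_a, z_b = norm.ppf(1 - alpha), norm.ppf(power)

# n per arm for H0: p1 - p0 >= delta versus H1: p1 - p0 < delta.
n = (z_a + z_b) ** 2 * (p0 * (1 - p0) + p1 * (1 - p1)) / (delta - (p1 - p0)) ** 2
print(f"approximately {n:.0f} patients per arm, before inflation for dropout")
```

With these placeholder inputs the formula gives roughly 112 patients per arm, which is the same order as the 226 total stated above, though the trial's own assumptions are not given in the abstract.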
NASA Astrophysics Data System (ADS)
Conrad, Heloise
The technological evolution of electronic components creates obsolescence-management problems in the aeronautics sector. Aeronautical systems indeed have service lives far longer than the components they contain. This difference in service life, together with the strict standards specific to aeronautics, forces manufacturers to put effective obsolescence management in place to avoid extra maintenance costs and delays. Moreover, because of the small production volumes they represent, aircraft manufacturers have little control over their supply chain. The literature offers many studies on obsolescence applied to aeronautics. The authors recommend putting obsolescence-management and obsolescence-forecasting processes in place, and building collaborative relationships with suppliers, who have more visibility over the supply chain. This research first presents the development of a list of criteria for good obsolescence management, as well as the creation of a method for generating a transition and implementation plan to improve obsolescence management and forecasting for a concrete case. The method was created for an aircraft manufacturer with no proactive obsolescence-management or forecasting systems. It was developed following the design-science methodology, involving the employees concerned with obsolescence management, and comprises twelve (12) steps leading to the development of the transition and implementation plan. To apply the method, various individual and group interviews were conducted. These interviews also made it possible to list the criteria for effective obsolescence management and forecasting, and this list was compared with the criteria from the literature. In keeping with the needs expressed by the employees and the advice of an industry expert in avionics-component obsolescence, the transition and implementation plan is divided into three (3) phases: 1) improving obsolescence management, 2) improving obsolescence forecasting, and 3) managing suppliers. Although the transition plan was not applied in the partner company, the method and the plan created were approved by the employees and users.
Analysis of the energy interactions between an ice arena and its refrigeration system
NASA Astrophysics Data System (ADS)
Seghouani, Lotfi
This thesis is part of a strategic project on ice arenas funded by NSERC (the Natural Sciences and Engineering Research Council of Canada), whose main goal is the development of a numerical tool capable of estimating and optimizing energy consumption in arenas and curling rinks. Our work follows up on work already carried out by Daoud et al. (2006, 2007), who developed a transient 3D model (AIM) of the Camilien Houde arena in Montreal that computes the heat fluxes through the building envelope as well as the temperature and humidity distributions over a typical meteorological year. In particular, it computes the heat fluxes through the ice sheet due to convection, radiation and condensation. We first developed a model of the structure under the ice (BIM) that accounts for its 3D geometry, the different layers, transient effects, heat gains from the ground beneath and around the arena, and the inlet temperature of the brine in the concrete slab. The BIM was then coupled to the AIM. In a second step, we developed a quasi-steady-state model of the refrigeration system (REFSYS) for the arena under study, based on a combination of thermodynamic relations, heat-transfer correlations and relations derived from data available in the manufacturer's catalogue. Finally, the AIM+BIM and the REFSYS were coupled within the TRNSYS software environment. Several parametric studies were undertaken to evaluate the effects of climate, brine temperature, ice thickness, etc. on the arena's energy consumption, and some strategies for reducing this consumption were studied. The considerable heat-recovery potential at the condensers, which can reduce the energy required by the arena's ventilation system, was highlighted. Keywords: ice arena, refrigeration system, energy consumption, energy efficiency, ground conduction, annual performance.
Upper gastrointestinal issues in athletes.
Waterman, Jason J; Kapur, Rahul
2012-01-01
Gastrointestinal (GI) complaints are common among athletes with rates in the range of 30% to 70%. Both the intensity of sport and the type of sporting activity have been shown to be contributing factors in the development of GI symptoms. Three important factors have been postulated as contributing to the pathophysiology of GI complaints in athletes: mechanical forces, altered GI blood flow, and neuroendocrine changes. As a result of those factors, gastroesophageal reflux disease (GERD), nausea, vomiting, gastritis, peptic ulcers, GI bleeding, or exercise-related transient abdominal pain (ETAP) may develop. GERD may be treated with changes in eating habits, lifestyle modifications, and training modifications. Nausea and vomiting may respond to simple training modifications, including no solid food 3 hours prior to an athletic event. Mechanical trauma, decreased splanchnic blood flow during exercise, and non-steroidal anti-inflammatory drugs (NSAID) contribute to gastritis, GI bleeding, and ulcer formation in athletes. Acid suppression with proton-pump inhibitors may be useful in athletes with persistence of any of the above symptoms. ETAP is a common, poorly-understood, self-limited acute abdominal pain which is difficult to treat. ETAP incidence increases in athletes beginning a new exercise program or increasing the intensity of their current exercise program. ETAP may respond to changes in breathing patterns or may resolve simply with continued training. Evaluation of the athlete with upper GI symptoms requires a thorough history, a detailed training log, a focused physical examination aimed at ruling out potentially serious causes of symptoms, and follow-up laboratory testing based on concerning physical examination findings.
NASA Astrophysics Data System (ADS)
Coulibaly, Issa
As the main source of drinking water for the municipality of Edmundston, the Iroquois/Blanchette watershed is of critical importance to the city, hence the constant efforts deployed to preserve its water quality. Several studies have been conducted there; the most recent identified pollution threats of various origins, including those associated with climate change (e.g., Maaref 2012). Given the projected impacts of climate change across New Brunswick, the Iroquois/Blanchette watershed could be strongly affected, and in various ways. Several impact scenarios are possible, notably flooding, erosion and pollution risks driven by increased precipitation and runoff. In the face of these potential threats, the objective of this study is to assess the potential impacts of climate change on erosion and pollution risks at the scale of the Iroquois/Blanchette watershed. To do so, the Canadian version of the Revised Universal Soil Loss Equation, RUSLE-CAN, and the hydrological model SWAT (Soil and Water Assessment Tool) were used to model erosion and pollution risks in the study area. The data used for this work come from diverse and varied sources (remote sensing, pedological, topographic, meteorological, etc.). The simulations were carried out in two distinct stages: first under current conditions, with 2013 chosen as the reference year, and then for 2025 and 2050. The results show an upward trend in sediment production in the coming years. The maximum annual production increases by 8.34% and 8.08% in 2025 and 2050, respectively, under our most optimistic scenario, and by 29.99% in 2025 and 29.72% in 2050 under the most pessimistic scenario, relative to 2013. As for pollution, the observed concentrations (sediment, nitrate and phosphorus) evolve with climate change. The maximum sediment concentration decreases in 2025 and 2050 relative to 2013: from 11.20 mg/L in 2013 it falls to 9.03 mg/L in 2025 and then to 6.25 mg/L in 2050. The maximum nitrate concentration is also expected to decrease over the years, most markedly in 2025: from 4.12 mg/L in 2013 it falls to 1.85 mg/L in 2025 and then 2.90 mg/L in 2050. The phosphorus concentration, by contrast, increases in the coming years relative to 2013, going from 0.056 mg/L in 2013 to 0.234 mg/L in 2025 and then 0.144 mg/L in 2050.
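For reference, RUSLE-type models estimate the long-term average annual soil loss as a product of empirical factors; the generic form of the equation (not this study's specific calibration) is:

```latex
% Revised Universal Soil Loss Equation (generic form)
A = R \cdot K \cdot LS \cdot C \cdot P
% A: annual soil loss, R: rainfall-runoff erosivity, K: soil erodibility,
% LS: slope length-steepness factor, C: cover-management factor,
% P: support-practice factor
```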
Adachi, H; Sakurai, S; Tanehata, M; Oshima, S; Taniguchi, K
2000-11-01
Blood viscosity (ηB) is low in athletes, but the effect of exercise training on ηB during endurance exercise at an anaerobic threshold (AT) intensity in non-athletes is not well known, although it is known that exercise training sometimes induces the hyperviscosity syndrome. Fourteen subjects were recruited and divided into 2 groups: those who trained at an AT intensity for 30 min/day, 3 times weekly for 1 year (Group T, n=8), and sedentary subjects (Group C, n=6). The test protocol consisted of a single 30-min treadmill exercise at each individual's AT intensity, which was determined in advance. The ηB, plasma viscosity (ηP), and hematocrit were measured just before and at the end of the treadmill exercise. The subjects were not allowed to drink any water before exercise. In the Group C subjects, the hematocrit and ηP increased significantly and the ηB tended to increase. However, in the Group T subjects, the hematocrit and ηP did not increase and the ηB decreased significantly. These data indicate that long-term exercise training attenuates the increase in blood viscosity during exercise.
NASA Astrophysics Data System (ADS)
Homier, Ram
In the current environmental context, photovoltaics benefits from increased research effort in renewable energy. To reduce the cost of producing electricity by direct conversion of light into electricity, concentrated photovoltaics is attractive. The principle is to concentrate a large amount of light energy onto small areas of high-efficiency multi-junction solar cells. When fabricating a solar cell, it is essential to include a method for reducing light reflection at the device surface. The design of an antireflective coating (ARC) for multi-junction solar cells is challenging because of the wide absorption band and the need to balance the current produced by each subcell. Silicon nitride deposited by PECVD under standard conditions is widely used in the silicon solar-cell industry; however, this dielectric absorbs in the short-wavelength range. We propose using silicon nitride deposited by low-frequency PECVD (LFSiN), optimized for a high refractive index and low optical absorption, as the ARC for III-V/Ge triple-junction solar cells. This material can also serve as a passivation/encapsulation layer. Simulations show that a SiO2/LFSiN double-layer ARC can be very effective at reducing reflection losses in the wavelength range of the limiting subcell, both for triple-junction cells limited by the top subcell and for those limited by the middle subcell. We also show that the performance of the structure is robust to fluctuations in the PECVD layer parameters (thicknesses, refractive index). Keywords: concentrated photovoltaics (CPV), multi-junction solar cells (MJSC), antireflective coating (ARC), III-V semiconductor passivation, silicon nitride (SixNy), PECVD.
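As background, the textbook single-layer quarter-wave condition (a generic optics result, not the thesis's optimized SiO2/LFSiN stack) shows why a high, tunable refractive index is valuable for index matching:

```latex
% Ideal single-layer antireflective coating at design wavelength \lambda_0:
n_c = \sqrt{n_0 \, n_s}, \qquad d = \frac{\lambda_0}{4 n_c}
% n_0: ambient index, n_s: substrate index,
% n_c, d: index and thickness of the coating layer
```

A double-layer stack generalizes this idea, giving two thicknesses and two indices to tune so that low reflectance can be held over the wide band that multi-junction cells absorb.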
Processing and characterization of sinter-hardening parts produced with new master alloys
NASA Astrophysics Data System (ADS)
Bouchemit, Arslane Abdelkader
Sinter hardening of powder-metallurgy (PM) steels makes it possible to obtain parts with an as-quenched microstructure (martensite and/or bainite) directly during cooling at the exit of the sintering furnace (industrial sintering: 10-45 °C/min [550 to 350 °C]). Among other things, this eliminates the austenitization and quenching heat treatments (in water: ≈ 2700 °C/min, or in oil: ≈ 1100 °C/min [550 to 350 °C] [17]) generally required after sintering to obtain a martensitic microstructure. The manufacturing process is thus simplified and less costly, and part distortion due to rapid cooling during quenching is avoided. In addition, oil baths are eliminated, making the process safer and more environmentally friendly. The main parameters governing sinter hardenability are the cooling rate and the chemical composition of the steel. Nowadays, forced-convection cooling systems combined with industrial furnaces provide high cooling rates at the furnace exit (60-300 °C/min [550 to 350 °C]) [18, 19]. Moreover, the critical cooling rate that induces the formation of the quenched structure is strongly influenced by the chemical composition of the steel: the more alloyed the steel (up to a certain limit), the lower this critical cooling rate. Molybdenum, nickel and copper are the elements usually used in PM. Manganese and chromium are less costly and have a stronger impact on sinter hardenability; nevertheless, they are rarely used because of their susceptibility to oxidation and the degradation of compressibility caused by manganese. The main objective of this project is to develop sinter-hardening mixes by adding master alloys (MA: MA1, MA2 and MA4) highly alloyed with manganese (5-15 wt%) and chromium (5-15 wt%) and containing a large amount of carbon (≈ 4 wt%), developed by Ian Bailon-Poujol during his master's work [20]. The high carbon content of these master alloys protects the oxidation-prone alloying elements during all steps of the process: in the liquid bath during melting and water atomization, during grinding, and during sintering of parts containing these master alloys. Previously, Ian Bailon-Poujol had studied the grinding of some water-atomized master alloys and had begun developing sinter-hardening mixes as well as studies of alloying-element diffusion. For this project, the development of the sinter-hardening mixes involved optimizing every processing step in order to obtain the best possible properties of the mixes before sintering (flow, green strength...) and after sintering (hardness, microstructure...), both for the master alloys water-atomized by Ian Bailon-Poujol and for a master alloy of similar chemistry that was gas-atomized. (Abstract shortened by ProQuest.).
Consumer Perceptions About Pilot Training: An Emotional Response
NASA Astrophysics Data System (ADS)
Rosser, Timothy G.
Civilian pilot training has followed a traditional path for several decades. With a potential pilot shortage approaching, ICAO proposed a new paradigm in pilot training methodology called the Multi-Crew Pilot License. This new methodology puts a pilot in the cockpit of an airliner with significantly less flight-time experience than the traditional methodology. The purpose of this study was to determine to what extent gender, country of origin and pilot training methodology affect an aviation consumer's willingness to fly. Additionally, this study attempted to determine what emotions mediate a consumer's decision. This study surveyed participants from India and the United States to measure their willingness to fly using the Willingness to Fly Scale, shown to be valid and reliable by Rice et al. (2015). The scale uses a five-point Likert-type scale. To determine the mediating emotions, Ekman and Friesen's (1979) universal emotions, which are happiness, surprise, fear, disgust, anger, and sadness, were used. Data were analyzed using SPSS. Descriptive statistics are provided for respondents' age and willingness-to-fly values. An ANOVA was conducted to test the first four hypotheses, and Hayes's (2004, 2008) bootstrapping process was used for the mediation analysis. Results indicated a significant main effect for training, F(1, 972) = 227.76, p < .001, ηp² = 0.190, a significant main effect for country of origin, F(1, 972) = 28.86, p < .001, ηp² = 0.029, and a two-way interaction between training and country of origin, F(7, 972) = 46.71, p < .001, ηp² = 0.252. Mediation analysis indicated that the emotions anger, fear, happiness, and surprise mediated the effects of training and country of origin. The findings of this study are important to designers of MPL training programs and airline marketers.
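For readers unfamiliar with the effect-size measure reported above, partial eta squared is defined from the ANOVA sums of squares:

```latex
\eta_p^2 = \frac{SS_{\text{effect}}}{SS_{\text{effect}} + SS_{\text{error}}}
% proportion of variance attributable to an effect,
% partialling out the other effects in the model
```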
Development of non-intrusive diagnostic techniques based on optical tomography
NASA Astrophysics Data System (ADS)
Dubot, Fabien
Whether in industrial processes or medical imaging, the last two decades have seen growing development of optical diagnostic techniques. The appeal of these methods rests mainly on the fact that they are completely non-invasive, use radiation sources that are harmless to humans and the environment, and are relatively inexpensive and easy to implement compared with other imaging techniques. One of these techniques is Diffuse Optical Tomography (DOT). This three-dimensional imaging method consists in characterizing the radiative properties of a semi-transparent medium (STM) from near-infrared optical measurements obtained with a set of sources and detectors located on the boundary of the probed domain. It relies on a forward model of light propagation in the STM, providing the predictions, and on an algorithm minimizing a cost function that combines predictions and measurements, allowing reconstruction of the parameters of interest. In this work, the forward model is the diffuse approximation of the radiative transfer equation in the frequency domain, while the parameters of interest are the spatial distributions of the absorption and reduced scattering coefficients. This thesis is devoted to developing a robust inverse method for solving the DOT problem in the frequency domain. To meet this objective, the work is structured in three parts, which constitute the main axes of the thesis. First, a comparison of the damped Gauss-Newton and Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithms is proposed in the two-dimensional case. Two regularization methods are combined for each algorithm: mesh-based reduction of the dimension of the control space together with Tikhonov penalization for the damped Gauss-Newton algorithm, and mesh-based regularization together with Sobolev gradients, uniform or spatially dependent, applied during extraction of the cost-function gradient, for the BFGS method. Numerical results indicate that BFGS outperforms damped Gauss-Newton in terms of reconstruction quality, computation time and ease of selecting the regularization parameter. Second, a study of the quasi-independence of the optimal Tikhonov penalization parameter with respect to the dimension of the control space, in inverse problems of estimating spatially dependent functions, is carried out. This study follows an observation made in the first part of the work, where the Tikhonov parameter determined by the L-curve method turns out to be independent of the dimension of the control space in the under-determined case. This hypothesis is proved theoretically and then verified numerically, first on a linear inverse heat-conduction problem and then on the nonlinear inverse DOT problem. The numerical verification rests on determining an optimal Tikhonov parameter, defined as the one minimizing the discrepancies between the targets and the reconstructions.
The theoretical proof rests on the Morozov discrepancy principle in the linear case, while in the nonlinear case it relies essentially on the assumption that the radiative functions to be reconstructed are normally distributed random variables. In conclusion, the thesis shows that the Tikhonov parameter can be determined using a parameterization of the control variables associated with a coarse mesh, thereby reducing computation time. Third, a wavelet-based multiscale inverse method combined with the BFGS algorithm is developed. This method, which reformulates the original inverse problem as a sequence of inverse subproblems from the largest scale to the smallest using the wavelet transform, copes with the local-convergence property of the optimizer and with the presence of numerous local minima in the cost function. Numerical results show that the proposed method is more stable with respect to the initial estimate of the radiative properties and provides more accurate final reconstructions than the ordinary BFGS algorithm, while requiring similar computation times. The results of this work are presented in this thesis as four articles. The first was accepted in the International Journal of Thermal Sciences, the second in Inverse Problems in Science and Engineering, the third in the Journal of Computational and Applied Mathematics, and the fourth was submitted to the Journal of Quantitative Spectroscopy & Radiative Transfer. Ten other articles were published in peer-reviewed conference proceedings. These articles are available in PDF format on the website of the t3e research chair (www.t3e.info).
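A schematic statement of the Tikhonov-penalized output-least-squares functional discussed above (generic form; the thesis's exact operators and norms may differ) is:

```latex
% Penalized cost functional for the DOT inverse problem:
J_\alpha(\mu) = \tfrac{1}{2}\,\lVert F(\mu) - y \rVert_2^2
              + \tfrac{\alpha}{2}\,\lVert L(\mu - \mu_0) \rVert_2^2
% F: forward (diffusion-approximation) model, y: boundary measurements,
% \mu: absorption and reduced scattering distributions, \mu_0: prior,
% L: regularization operator, \alpha: Tikhonov parameter
% (here selected by the L-curve: the corner of log-residual vs log-penalty)
```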
NASA Astrophysics Data System (ADS)
Amrani, Salah
Aluminum is produced in an electrolysis cell, an operation that uses carbon anodes. Assessing the quality of these anodes before use is indispensable, because the presence of cracks disturbs the electrolysis process and lowers its performance. This project was undertaken to determine the impact of the various anode-manufacturing process parameters on the cracking of dense anodes. These parameters include those of green-anode forming, the properties of the raw materials, and baking. A literature review covering all aspects of carbon-anode cracking was carried out to compile previous work, and a detailed methodology was developed to guide the work and reach the stated objectives. Most of this document is devoted to discussing the results obtained in the UQAC laboratory and at the industrial level. Regarding the studies carried out at UQAC, part of the experimental work is devoted to identifying the different cracking mechanisms in the dense anodes used in the aluminum industry. The approach was first based on qualitative characterization of the cracking mechanism at the surface and in depth. A quantitative characterization was then performed to determine the distribution of crack width along the crack length, as well as the percentage of crack area relative to the total sample area. This study was carried out using image analysis to characterize the cracking of a baked-anode sample; surface and depth analysis of this sample clearly showed crack formation over a large part of the analyzed surface. The other part of the work is based on characterizing defects in industrially produced green-anode samples by determining the profiles of various physical properties. The method based on measuring the distribution of electrical resistivity over the whole sample was used to locate cracks and macro-pores, while optical microscopy and image analysis characterized the cracked zones and determined the structure of the analyzed samples at the microscopic scale. Other tests were carried out on cylindrical anode cores, 50 mm in diameter and 130 mm long, baked in a furnace at UQAC at different heating rates in order to determine the influence of baking parameters on crack formation in such cores. The baked samples were characterized using scanning electron microscopy and ultrasound. The last part of the work carried out at UQAC is a study of anodes produced in the laboratory under different operating conditions, whose quality was followed using several techniques. The cooling-temperature evolution of green laboratory anodes was measured, and a mathematical model was developed and validated against the experimental data in order to estimate the cooling rate and the thermal stress.
All the anodes produced were characterized before baking by determining certain physical properties (electrical resistivity, apparent density, optical density and percentage of defects). Tomography and electrical-resistivity mapping, which are non-destructive techniques, were used to evaluate the internal defects of the anodes. During baking of the laboratory anodes, the evolution of the electrical resistivity was tracked and the devolatilization stage was identified. Some anodes were baked at different heating rates (low, medium, high, and a combined profile) with the aim of finding the baking conditions that minimize cracking. Other anodes were baked to different baking levels in order to identify at which stage of the baking operation cracking begins to develop. After baking, the anodes were characterized again with the same techniques used previously. The main objective of this part was to reveal the impact on cracking of the various parameters distributed along the entire anode production chain. The percentage of butts, the amount of pitch and the particle-size distribution are important factors to consider when studying the effect of raw materials on cracking. Concerning the effect of the manufacturing-process parameters, vibration time, compaction pressure and the cooling process formed the basis of this study. Finally, the influence of the baking phase on the appearance of cracking was taken into account through the heating rate and the baking level. The work at the industrial level was done during a measurement campaign aimed at evaluating the quality of carbon anodes in general, investigating the cracking problem in particular, and then revealing the effects of different parameters on cracking. Twenty-four baked anodes were used. They were produced with different raw materials (pitch, coke, butts) and under various conditions (pressure, vibration time). A crack-density parameter was calculated based on visual inspection of core cracking. This makes it possible to classify the different cracks into several categories according to criteria such as crack type (horizontal, vertical and inclined), longitudinal location (bottom, middle and top of the anode) and transverse location (left, center and right). The effects of the raw materials, the green-anode manufacturing parameters and the baking conditions on cracking were studied. Cracking of dense carbon anodes causes a serious problem for the primary aluminum industry. This project revealed different cracking mechanisms, classified cracking by several criteria (position, type, location) and evaluated the impact of different parameters on cracking. The studies carried out on baking open the possibility of improving the operation and reducing anode cracking.
The work also identified techniques capable of evaluating anode quality (ultrasound, tomography and electrical-resistivity mapping). Carbon-anode cracking is considered a complex problem because its appearance depends on several parameters distributed along the entire production chain. In this project, several new studies were performed, giving originality to the research done on carbon-anode cracking for the primary aluminum industry. The studies carried out in this project add, on the one hand, scientific value toward a better understanding of the anode-cracking problem and, on the other hand, propose methods that can reduce this problem at the industrial scale.
1992-03-01
de Logiciels") etaient en cours de developpement pour resoudre des problimes similaires dans le monde de la gestion . le Panel... gestion des sp cifications, d’algorithmes et de reprtsentations. Techniques et Sciences Informatiques, 4(3), 1985. 4-21 R. Jacquart, M. Lemoine, and G...Guidance and Control Systems Software (Les Diff~rentes Approches "G6neration" pour la Conception et le D~veloppement de Logiciels de Guidage et de
Nimbalkar, S D; Jade, S S; Kauthale, V K; Agale, S; Bahulikar, R A
2018-03-01
Madhuca indica provides livelihood to several tribal people in India, where the flowers are used for extraction of sweet juices having multiple applications. Certain trees have more value as judged by the tribal people, mainly based on the yield and quality performance of the trees, and these trees were selected for the genetic diversity analyses. Genetic diversity of 48 candidate Mahua trees from Etapalli, Dhadgaon, and Jawhar, Maharashtra, India, was assessed using ISSR markers. Fourteen ISSR primers revealed a total of 132 polymorphic bands, giving overall 92% polymorphism. Genetic diversity, in terms of the expected number of alleles (Ne), the observed number of alleles (Na), Nei's genetic diversity (H), and Shannon's information index (I), was 1.921, 1.333, 0.211, and 0.337, respectively, suggesting lower genetic diversity. Region-wise analysis revealed higher genetic diversity for the Etapalli site (H = 0.206) and the lowest at Dhadgaon (H = 0.140). The Etapalli area possesses higher forest cover than Dhadgaon and Jawhar. Additionally, in Dhadgaon and Jawhar, M. indica trees are restricted to field bunds; both reasons might contribute to the lower genetic diversity in these regions. The dendrogram and the principal coordinate analyses showed no region-specific clustering. The clustering patterns were supported by AMOVA, where higher genetic variance was observed within trees and lower variance among regions. Long-distance dispersal and/or higher human interference might be responsible for the low diversity and the higher genetic variance within the candidate trees.
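The two diversity statistics quoted above have standard definitions in terms of allele frequencies p_i at a locus, averaged over loci:

```latex
H = 1 - \sum_i p_i^2, \qquad I = -\sum_i p_i \ln p_i
% H: Nei's gene diversity (expected heterozygosity),
% I: Shannon's information index; both are 0 for a monomorphic locus
```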
Multi-Team and Multi-Organization Systems
2009-11-01
Irandoust, A. Benaskeur; DRDC Valcartier TR 2009-198; Recherche et développement pour la défense Canada – Valcartier; November 2009. [Remainder of the fragment is table-of-contents and list-of-figures residue: partnership defining features; purpose of partnership; control; the spectrum of organizations; properties of emergent and planned multi-organization systems; costs.]
Combat Resource Allocation Planning in Naval Engagements
2007-08-01
…presented and discussed in this report. The coordination problems are discussed in the companion report [2]. The developed agent- and multi-agent-based… [Remainder of the fragment is Defence R&D Canada cover-page residue.]
75 FR 16074 - Availability of Seats for the Monterey Bay National Marine Sanctuary Advisory Council
Federal Register 2010, 2011, 2012, 2013, 2014
2010-03-31
… experience in relation to the seat for which they are applying; community and professional affiliations… Business and Tourism Activity Panel ("ETAP") chaired by the Business/Industry Representative, each…
Literature Review on Best Practices in Collective Learning
2010-11-01
…potential indicators of collective learning in a multi-organization environment. This report will be useful to groups attempting to assess in… publications are documented. They are then transformed into potential indicators of collective learning in a multi-organization environment.
Investigation of the impact of thyroid surgery on vocal tract steadiness.
Timon, Conrad I; Hirani, Shashi P; Epstein, Ruth; Rafferty, Mark A
2010-09-01
Subjective nonspecific upper aerodigestive symptoms are not uncommon after thyroid surgery. These are postulated to be related to injury of an extrinsic perithyroid nerve plexus that innervates the muscles of the supraglottic and glottic larynx. This plexus is thought to receive contributing branches from both the recurrent and superior laryngeal nerves. The technique of linear predictive coding was used to estimate the F2 values from a sustained vowel /a/ in patients before and 48 hours after thyroid or parathyroid surgery. These patients were controlled against a matched pair undergoing surgery without any theoretical effect on the supraglottic musculature. In total, 12 patients were recruited into each group. Each patient had the formant frequency fluctuation (FFF) and the formant frequency fluctuation ratio (FFFR) calculated for F1 and F2. Mixed analysis of variance (ANOVA) for all acoustic parameters revealed that the mean F2 FFF showed a significant "time" main effect (F(1,22) = 7.196, P = 0.014, partial η² = 0.246) and a significant "time by group" interaction effect (F(1,22) = 8.036, P = 0.010, ηp² = 0.268), with changes over time for the thyroid group but not for the controls. Similarly, the mean F2 FFFR showed a similar significant "time" main effect (F(1,22) = 6.488, P = 0.018, ηp² = 0.228) and a "time by group" interaction effect (F(1,22) = 7.134, P = 0.014, ηp² = 0.245). This work suggests that thyroid surgery produces a significant reduction in vocal tract stability in contrast to the controls. This noninvasive measurement offers a potential instrument to investigate the functional implications of any disturbance that thyroid surgery may have on pharyngeal innervation. Copyright © 2010 The Voice Foundation. Published by Mosby, Inc. All rights reserved.
Kojeszewski, Tricia; Fish, Frank E
2007-07-01
The submerged swimming of the Florida manatee (Trichechus manatus latirostris), a subspecies of the West Indian manatee, was studied by filming individuals as they swam rectilinearly in a large pool at several rehabilitation centers. The swimming was analyzed using videography to detail the kinematics in conjunction with a hydromechanical model to determine the power output (Pt) and propulsive efficiency (ηp). Manatees swam at velocities of 0.06-1.14 m s^-1. Locomotion was accomplished by undulation of the body and caudal fluke. Undulatory locomotion is a rapid and relatively high-powered propulsive mode involved in cruising and migrating by a variety of swimmers. Manatees displayed an undulatory swimming mode by passing a dorso-ventrally oriented traveling wave posteriorly along the body. The propulsive wave traveled at a higher velocity than the forward velocity of the animal. The frequency of the propulsive cycle (f) increased linearly with increasing swimming velocity (U). Amplitude at the tip of the caudal fluke (A) remained constant with respect to U and was 22% of body length. Pt increased curvilinearly with U. The mean ηp, expressing the relationship of the thrust power generated by the paddle-shaped caudal fluke to the total mechanical power, was 0.73. The maximum ηp was 0.82 at 0.95 m s^-1. Despite use of a primitive undulatory swimming mode and paddle-like fluke for propulsion, the manatee is capable of swimming with a high efficiency but lower power outputs compared with the oscillatory movements of the high-aspect-ratio flukes of cetaceans. The swimming performance of the manatee is in accordance with its habits as an aquatic grazer that seasonally migrates over extended distances.
Detecting the end of anode compaction by sound
NASA Astrophysics Data System (ADS)
Sanogo, Bazoumana
The objective of this project was to develop a real-time control tool for compaction time using the sound generated by the vibrocompactor during the forming of green anodes. An application was therefore developed to analyze the recorded sounds. Trials were carried out with different microphones to improve measurement quality, and one was chosen for the rest of the project. Likewise, various tests were performed on laboratory anodes as well as industrial-scale anodes in order to establish a method for detecting the optimal time needed to form the anodes. The work at the carbon laboratory of the Université du Québec à Chicoutimi (UQAC) consisted in recording the sound of anodes produced on site with different configurations, and in characterizing some plant anodes. The anodes produced in the laboratory fall into two groups. The first comprises the anodes used to validate our method; these were produced with different compaction times. The carbon laboratory at UQAC is unique in that it can produce anodes with the same properties as industrial anodes; consequently, the validation initially planned at the plant was carried out with the laboratory anodes. The second group served to study the effects of raw materials on compaction time, with the coke type and pitch type constituting the variations within this group. As for the tests and measurements at the plant, they were carried out in three measurement campaigns. The first campaign, in June 2014, served to standardize the procedure, find the best positioning of the instruments, configure the software and take the first measurements. A second campaign, in May 2015, recorded sound while classifying the anodes by compaction time. The third and final campaign, in December 2015, hosted the final plant tests, producing anodes under different conditions (varying compaction time, pitch content, manual compactor stop, varying the pressure of the compactor's upper bellows). These anodes were then analyzed at the UQAC laboratory. In parallel with this work, the sound-analysis application was improved through the choice and standardization of the analysis parameters. The results of the first laboratory tests and of the June 2014 campaign showed that anode forming proceeds in three stages: rearrangement of the particles and pitch, compaction and consolidation, and finally finishing. This work further showed that compaction time plays a very important role in defining the final anode properties. Thus, in addition to the pitch type, pitch content and coke type, over-compaction and under-compaction times must be taken into account; indeed, this was demonstrated through the two validations that were performed. The characterization results of the samples (from the December 2015 campaign anodes) showed that an anode compacted for an optimal time acquires good compressive strength and its electrical resistivity decreases. We also note that the compaction time in our case decreased slightly as the pressure of the vibrocompactor's upper bellows increased, which had the effect of increasing the green density of the anode.
However, one should refrain from generalizing this observation because the number of anodes tested in our case was small. Moreover, this study shows that the time needed to form an anode grows as the pitch content increases and decreases slightly as the bellows pressure increases. (Abstract shortened by ProQuest.).
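As a purely illustrative sketch of the sound-based stopping criterion described above (the study's actual analysis parameters are not reproduced here), one could flag the end of compaction when band-limited spectral energy stabilizes; the file name, frequency band and thresholds below are placeholders:

```python
# Hypothetical sketch: declare "end of compaction" when the short-time spectral
# energy of the vibrocompactor sound stops changing. Assumes a mono WAV file;
# all numeric parameters are illustrative, not those used in the project.
import numpy as np
from scipy.io import wavfile
from scipy.signal import spectrogram

fs, x = wavfile.read("compaction.wav")               # mono recording (placeholder)
f, t, Sxx = spectrogram(x.astype(float), fs=fs, nperseg=4096)

band = (f >= 100) & (f <= 2000)                      # band assumed to carry the forming signature
e = Sxx[band].sum(axis=0)                            # band energy per time slice
e /= e.max()

rel = np.abs(np.diff(e)) / np.maximum(e[:-1], 1e-12) # relative change between slices
window = 5                                           # consecutive quiet slices required
for i in range(len(rel) - window):
    if np.all(rel[i:i + window] < 0.02):             # under 2% change: energy has stabilized
        print(f"spectral energy stabilizes near t = {t[i]:.1f} s")
        break
```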
Separation of Target Rigid Body and Micro-Doppler Effects in ISAR/SAR Imaging
2006-09-01
…rotating and vibrating parts of the target. New algorithms and methods will therefore have to be studied in greater depth in order to… [DRDC Ottawa TM 2006-187; remainder of the fragment is cover-page and classification-form residue.]
NASA Astrophysics Data System (ADS)
Zaag, Mahdi
The availability of accurate aircraft models is among the key elements enabling their improvement. These models serve to improve flight controls and to design new aerodynamic systems for morphing-wing aircraft. This project consists in designing a system to identify certain parameters of the engine model of the Cessna Citation X American business jet for the cruise phase from flight tests. These tests were performed on the flight simulator designed and manufactured by CAE Inc., which has Level D flight-dynamics qualification; Level D is the highest fidelity level granted by the FAA, the US federal civil-aviation regulator. A methodology based on neural networks optimized with an algorithm called the "extended great deluge" was used to design this identification system. Several flight tests at different altitudes and Mach numbers were performed to serve as databases for training the neural networks. The model was validated against simulator data. Despite the nonlinearity and complexity of the system, the engine parameters were predicted very well over a given flight envelope. This estimated model could be used for engine-operation analyses and could support control of the aircraft during the cruise phase. Engine-parameter identification could also be carried out for the climb and descent phases in order to obtain a complete model over the entire flight envelope of the Cessna Citation X (climb, cruise, descent). The method employed in this work could also be effective for building a model to identify the aerodynamic coefficients of the same aircraft, again from flight tests.
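The "extended great deluge" mentioned above is a known acceptance-based metaheuristic derived from Dueck's great deluge algorithm; a minimal sketch of its acceptance rule, with a toy cost function standing in for the thesis's actual network-training objective, is:

```python
# Minimal sketch of an extended-great-deluge acceptance rule: accept a candidate
# if it improves the current cost or falls below a slowly lowering "water level".
# The cost function and neighborhood below are toy placeholders.
import random

def extended_great_deluge(cost, neighbor, x0, iters=10_000, decay=None):
    x, f = x0, cost(x0)
    best_x, best_f = x, f
    level = f                                          # water level starts at initial cost
    decay = decay if decay is not None else f / iters  # linear decay rate (assumption)
    for _ in range(iters):
        cand = neighbor(x)
        fc = cost(cand)
        if fc <= f or fc <= level:                     # accept improvements or anything under the level
            x, f = cand, fc
            if fc < best_f:
                best_x, best_f = cand, fc
        level -= decay                                 # lower the water level each iteration
    return best_x, best_f

# Toy usage: minimize a 1D quadratic by random perturbation.
sol, val = extended_great_deluge(
    cost=lambda v: (v - 3.0) ** 2,
    neighbor=lambda v: v + random.uniform(-0.5, 0.5),
    x0=10.0,
)
print(sol, val)
```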
Proposals for Updating TAI Algorithm
1997-12-01
…1997 meeting, the Comité International des Poids et Mesures (CIPM) decided to change the name of the Comité Consultatif pour la Définition de la … Report of the BIPM Time Section, 1988, 1, D1-D22. [2] P. Tavella, C. Thomas, Comparative study of time scale algorithms, Metrologia, 1991, 28, 57… alternative choice for implementing an upper limit of clock weights, Metrologia, 1996, 33, 227-240. [5] C. Thomas, Impact of New Clock Technologies…
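As context for the clock-weight discussion in these references, a free atomic timescale of this family is, schematically, a weighted average of clock readings corrected by their predictions, with weights inversely proportional to estimated instability and capped at an upper limit; this is a generic sketch, not the exact ALGOS definition:

```latex
\mathrm{TA}(t) = \sum_i w_i \left[ h_i(t) - \hat{h}_i(t) \right],
\qquad \sum_i w_i = 1,
\qquad w_i \propto \sigma_i^{-2} \ \text{(capped at } w_{\max}\text{)}
% h_i: reading of clock i, \hat{h}_i: its predicted correction,
% \sigma_i^2: estimated instability of clock i, w_max: weight ceiling
```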
NASA Astrophysics Data System (ADS)
Bansal, Shonak; Singh, Arun Kumar; Gupta, Neena
2017-02-01
In real life, multi-objective engineering design problems are very tough and time-consuming optimization problems because of their high degree of nonlinearity, complexity and inhomogeneity. Nature-inspired multi-objective optimization algorithms are becoming popular for solving such problems. This paper proposes an original multi-objective Bat algorithm (MOBA) and its extended form, a novel parallel hybrid multi-objective Bat algorithm (PHMOBA), to generate shortest-length Golomb rulers, called optimal Golomb rulers (OGRs), in reasonable computation time. OGRs find application in optical wavelength division multiplexing (WDM) systems as a channel-allocation algorithm to reduce four-wave mixing (FWM) crosstalk. The performance of both proposed algorithms in generating OGRs for optical WDM channel allocation is compared with that of existing classical computing and nature-inspired algorithms, including extended quadratic congruence (EQC), search algorithm (SA), genetic algorithms (GAs), biogeography based optimization (BBO) and big bang-big crunch (BB-BC) optimization algorithms. Simulations conclude that the proposed parallel hybrid multi-objective Bat algorithm works more efficiently than the original multi-objective Bat algorithm and the other existing algorithms in generating OGRs for optical WDM systems. The PHMOBA algorithm has a higher convergence and success rate than the original MOBA. The efficiency improvement of the proposed PHMOBA in generating OGRs up to 20 marks, in terms of ruler length and total optical channel bandwidth (TBW), is 100%, versus 85% for the original MOBA. Finally, the implications for further research are also discussed.
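To make the objective concrete: a Golomb ruler is a set of integer marks whose pairwise differences are all distinct, and an OGR is the shortest such ruler for a given number of marks. A small checker illustrates the constraint:

```python
# Helper illustrating the Golomb-ruler constraint: all pairwise mark
# differences must be distinct; the ruler length is the largest mark.
from itertools import combinations

def is_golomb(marks: list[int]) -> bool:
    diffs = [b - a for a, b in combinations(sorted(marks), 2)]
    return len(diffs) == len(set(diffs))    # no repeated difference

ruler = [0, 1, 4, 9, 11]                    # a known optimal 5-mark Golomb ruler
print(is_golomb(ruler), max(ruler))         # True, length 11
```

An optimizer such as MOBA would search over candidate mark sets, scoring them on ruler length and total bandwidth while rejecting sets that fail this check.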
1981-01-01
This fact being established, leptokurtic and platykurtic density functions are defined in terms of deviations from the normal density function. Thus the usual definitions (Ref. 6) are: leptokurtic, a density function that is peaked, K > 0 [18]; and platykurtic, a density function that is flat, K < 0 [19]. It has long been accepted that a symmetrical platykurtic density function, with K < 0, is characterized by a flatter top and more abrupt terminals than the…
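The coefficient K in these definitions is the excess kurtosis, which vanishes for the normal density:

```latex
K = \frac{\mu_4}{\mu_2^{\,2}} - 3
% \mu_2, \mu_4: second and fourth central moments;
% K > 0: peaked (leptokurtic), K < 0: flat (platykurtic), K = 0: normal
```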
NASA Astrophysics Data System (ADS)
Satrio, Reza Indra; Subiyanto
2018-03-01
The growth of electric loads has a direct impact on power distribution systems, and voltage drop and power losses are among the important issues in power distribution. This paper presents a modelling approach used to restructure the electrical network configuration, reduce voltage drop, reduce power losses and add a new distribution transformer to enhance the reliability of the distribution system. The network restructuring was aimed at analysing and investigating the electric loads of a distribution transformer. Measurements of actual voltage and current were made twice for each consumer, once in the morning period and once in the night (peak-load) period. Design and simulation were conducted using the ETAP Power Station software. Based on the simulation results and the real measurements, the percentage voltage drop and the total power losses did not comply with SPLN (Standard PLN) 72:1987. After adding a new distribution transformer and restructuring the network configuration, the simulation showed that the voltage drop could be brought from 1.3%-31.3% to 8.1%-9.6% and the power losses reduced from 646.7 W to 233.29 W. The results show that restructuring the network configuration and adding a new distribution transformer can be applied as an effective method to reduce voltage drop and power losses.
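The quantities the simulation evaluates follow standard feeder approximations; for a balanced three-phase radial line (a generic formula, not ETAP's internal load-flow model):

```latex
\Delta V \approx \sqrt{3}\, I \,(R\cos\varphi + X\sin\varphi)\,\ell,
\qquad P_{\text{loss}} = 3 I^2 R\,\ell
% I: line current, R, X: resistance and reactance per unit length,
% \ell: feeder length, \cos\varphi: load power factor
```

Shortening feeders or offloading them onto a new transformer lowers both I and the effective length, which is why both the drop and the losses fall together in the reported results.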
NASA Astrophysics Data System (ADS)
Lasri, Abdel-Halim
In this developmental research, we designed, developed and field-tested an interactive simulator to foster the learning of the probabilistic laws involved in Mendelian genetics. This computerized environment is meant to allow students to conduct simulated experiments, using statistics and probability as mathematical tools to model the phenomenon of the transmission of hereditary traits. The didactic approach is essentially oriented toward the use of the quantitative methods involved in experimenting with hereditary factors. By incorporating into the simulator the "cognitive lens" principle of Nonnon (1986), the student was placed in a situation where he or she could synchronize the perception of the iconic (concrete) and symbolic (abstract) representations of Mendel's probabilistic laws. Using this environment, we led the student to identify the hereditary trait(s) of the parents to be crossed, to predict the probable phenotypic frequencies of the offspring resulting from the cross, to observe the statistical results and their fluctuation on the frequency histogram, to compare these results with the anticipated predictions, to interpret the data and to select further experiments accordingly. The steps of the inductive approach are favored from beginning to end of the proposed activities. The simulator and its accompanying documents were developed from some twenty guiding principles and a model of action, which derive from psychological, didactic and technological theoretical considerations. The research describes the structure of the different parts making up the simulator. Its architecture is built around a central unit, the "Main" unit, whose links and ramifications with the other units give the whole simulator its flexibility and ease of use. The "Genetics" simulator, at the prototype stage, and its documentation underwent two trials: one functional, the other empirical. The functional trial, conducted with a group of expert teachers, identified shortcomings in the material so that the necessary adjustments could be made. The empirical trial, conducted with a group of eleven (11) secondary-school students, aimed, on the one hand, to test the ease of use of the "Genetics" simulator and the accompanying documents and, on the other hand, to verify whether the participants derived pedagogical benefits from this environment. Three techniques were used to collect the data from the empirical trial. The analysis of the results allowed a critical review of the concrete products of this research and led to the necessary modifications to both the simulator and the accompanying documents. This analysis also led to the conclusion that our interactive simulator fosters an inductive approach allowing students to appropriate Mendel's probabilistic laws. Finally, the conclusion identifies avenues for future research, particularly for studies interested in developing simulators that integrate concrete and abstract representations presented in real time.
The diskettes of the "Genetics" simulator and the accompanying documents are appended to this research.
MAX: Multiplatform Applications for XAFS
NASA Astrophysics Data System (ADS)
Michalowicz, Alain; Moscovici, Jacques; Muller-Bouvet, Diane; Provost, Karine
2009-11-01
MAX is a new EXAFS and XANES analysis package, replacing our old "EXAFS pour le Mac" software suite. The major improvement is the ability to work with strictly the same code, compiled at once for Microsoft Windows, Apple MacOSX and LINUX systems, justifying the title "Multiplatform Applications for XAFS". It is organized as four modules: ABSORBIX (X-ray absorbance and fluorescence self-absorption calculations), CHEROKEE (EXAFS and XANES data treatment), ROUNDMIDNIGHT (EXAFS modeling and fit) and CRYSTALFFREV (from crystal structures and molecular modeling to FEFF EXAFS and XANES theoretical calculations). Most features developed in "EXAFS pour le Mac" are still available, but with many improvements in the user interface and the data-treatment algorithms, as well as new functionalities.
Parallel Lattice Basis Reduction Using a Multi-threaded Schnorr-Euchner LLL Algorithm
NASA Astrophysics Data System (ADS)
Backes, Werner; Wetzel, Susanne
In this paper, we introduce a new parallel variant of the LLL lattice basis reduction algorithm. Our new, multi-threaded algorithm is the first to provide an efficient, parallel implementation of the Schnorr-Euchner algorithm for today's multi-processor, multi-core computer architectures. Experiments with sparse and dense lattice bases show a speed-up factor of about 1.8 for the 2-thread version and about 3.2 for the 4-thread version of our new parallel lattice basis reduction algorithm in comparison to the traditional non-parallel algorithm.
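As a companion to the abstract above, here is a minimal, serial, textbook LLL reduction in Python, shown only to fix the notions of size reduction and the Lovász condition that any Schnorr-Euchner-style variant builds on; it is not the authors' multi-threaded implementation, and the exact-rational `Fraction` arithmetic and `delta = 3/4` are standard illustrative choices, not details taken from the paper.

```python
from fractions import Fraction

def dot(u, v):
    return sum(Fraction(a) * Fraction(b) for a, b in zip(u, v))

def lll_reduce(B, delta=Fraction(3, 4)):
    """Textbook LLL reduction of an integer basis (list of row vectors)."""
    B = [list(row) for row in B]
    n = len(B)

    def gso():
        # Gram-Schmidt orthogonalization: returns B* and the mu coefficients.
        Bs, mu = [], [[Fraction(0)] * n for _ in range(n)]
        for i in range(n):
            v = [Fraction(x) for x in B[i]]
            for j in range(i):
                mu[i][j] = dot(B[i], Bs[j]) / dot(Bs[j], Bs[j])
                v = [vi - mu[i][j] * bj for vi, bj in zip(v, Bs[j])]
            Bs.append(v)
        return Bs, mu

    k = 1
    while k < n:
        Bs, mu = gso()
        # Size-reduce b_k against b_{k-1}, ..., b_0.
        for j in range(k - 1, -1, -1):
            q = round(mu[k][j])
            if q:
                B[k] = [a - q * b for a, b in zip(B[k], B[j])]
        Bs, mu = gso()
        # Lovasz condition decides whether to advance or swap.
        if dot(Bs[k], Bs[k]) >= (delta - mu[k][k - 1] ** 2) * dot(Bs[k - 1], Bs[k - 1]):
            k += 1
        else:
            B[k], B[k - 1] = B[k - 1], B[k]
            k = max(k - 1, 1)
    return B

print(lll_reduce([[1, 1, 1], [-1, 0, 2], [3, 5, 6]]))
```

The paper's parallel variant distributes this kind of vector arithmetic across threads; the serial loop above is only the reference point for that speed-up.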
Developpement d'une commande pour une hydrolienne de riviere et optimisation =
NASA Astrophysics Data System (ADS)
Tetrault, Philippe
Following the development of renewable energies, this study provides a theoretical basis for the fundamental principles required for the proper operation and implementation of a river-current turbine. The problem behind this new type of device is first presented. The electrical machine used in the application, a permanent-magnet synchronous machine, is studied: its mechanical and electrical dynamic equations are developed, introducing at the same time the concept of the rotating reference frame. The operation of the inverter used, a two-level full-bridge semiconductor topology, is explained and put into equations to allow an understanding of the available modulation strategies. A brief history of these strategies is given before emphasizing space-vector modulation, the strategy used in the present application. The various modules are assembled in a Matlab simulation to confirm their proper operation and to compare the simulation results with theoretical calculations. Different algorithms for tracking and maintaining an optimal operating point are presented. The behavior of the river is studied to assess the magnitude of the disturbances the system will have to handle. Finally, a new approach is presented and compared with a more conservative strategy using another Matlab simulation model.
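To make the optimum-point tracking concrete, here is a minimal perturb-and-observe tracker in Python, one common baseline in this family of algorithms; it is only a sketch, and `measure_power` and `set_speed` are hypothetical callbacks standing in for the turbine's control interface, not functions from the thesis.

```python
def perturb_and_observe(measure_power, set_speed, speed0, step=0.05, iters=200):
    """Hill-climbing MPPT ("perturb and observe"): nudge the operating
    point and keep the direction that increased extracted power."""
    speed = speed0
    set_speed(speed)
    p_prev = measure_power()
    direction = 1.0
    for _ in range(iters):
        speed += direction * step
        set_speed(speed)
        p = measure_power()
        if p < p_prev:            # power dropped: reverse the perturbation
            direction = -direction
        p_prev = p
    return speed

# Toy power curve with an optimum at speed 2.0 (illustrative only).
state = {"speed": 0.0}
power = lambda: -(state["speed"] - 2.0) ** 2 + 4.0
setter = lambda s: state.update(speed=s)
print(perturb_and_observe(power, setter, speed0=0.5))  # settles near 2.0
```

The tracker oscillates within one `step` of the optimum, which is the classic trade-off this family of algorithms tunes against the river's disturbances.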
A joint swarm intelligence algorithm for multi-user detection in MIMO-OFDM system
NASA Astrophysics Data System (ADS)
Hu, Fengye; Du, Dakun; Zhang, Peng; Wang, Zhijun
2014-11-01
In the multi-input multi-output orthogonal frequency division multiplexing (MIMO-OFDM) system, traditional multi-user detection (MUD) algorithms, usually used to suppress multiple access interference, struggle to balance detection performance against algorithmic complexity. To solve this problem, this paper proposes a joint swarm intelligence algorithm called Ant Colony and Particle Swarm Optimisation (AC-PSO) by integrating particle swarm optimisation (PSO) and ant colony optimisation (ACO) algorithms. Simulation results show that, with low computational complexity, MUD for the MIMO-OFDM system based on the AC-PSO algorithm achieves detection performance comparable with the maximum likelihood algorithm. Thus, the proposed AC-PSO algorithm provides a satisfactory trade-off between computational complexity and detection performance.
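The AC-PSO detector itself is not spelled out in the abstract, but the particle update rule it inherits from PSO is standard; the sketch below is a generic continuous PSO minimizing a toy objective (a stand-in for a detection cost), with illustrative hyperparameters.

```python
import random

def pso(objective, dim, n_particles=30, iters=200,
        w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0)):
    """Minimal particle swarm optimizer for a continuous objective."""
    lo, hi = bounds
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]

    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # Inertia + pull toward personal best + pull toward global best.
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Toy stand-in for a MUD cost (a real detector would minimize a likelihood metric).
sphere = lambda x: sum(v * v for v in x)
print(pso(sphere, dim=4))
```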
NASA Astrophysics Data System (ADS)
Titeux, Isabelle; Li, Yuming M.; Debray, Karl; Guo, Ying-Qiao
2004-11-01
This Note deals with an efficient algorithm to carry out the plastic integration and compute the stresses due to large strains for materials satisfying Hill's anisotropic yield criterion. Classical plastic-integration algorithms such as the 'return mapping method' are widely used for nonlinear structural analyses and numerical simulations of forming processes, but they require an iterative scheme and may have convergence problems. A new direct algorithm based on a scalar method is developed which allows us to obtain the plastic multiplier directly, without an iterative procedure; the computation time is thus greatly reduced and the numerical problems are avoided. To cite this article: I. Titeux et al., C. R. Mecanique 332 (2004).
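The Note's direct scalar method for the Hill criterion is not reproduced here; the sketch below shows the simplest setting in which a plastic multiplier comes out in closed form, one step of 1D von Mises return mapping with linear isotropic hardening, so the contrast with iterative return mapping is visible. The material constants are illustrative.

```python
def radial_return_1d(eps_new, eps_p_old, alpha_old,
                     E=200e3, H=2e3, sigma_y0=250.0):
    """One step of a 1D elastoplastic stress update (von Mises yield,
    linear isotropic hardening).  With linear hardening the plastic
    multiplier is obtained in closed form, so no local iteration is needed."""
    sigma_trial = E * (eps_new - eps_p_old)              # elastic predictor
    f_trial = abs(sigma_trial) - (sigma_y0 + H * alpha_old)
    if f_trial <= 0.0:                                   # purely elastic step
        return sigma_trial, eps_p_old, alpha_old
    dlam = f_trial / (E + H)                             # direct plastic multiplier
    sign = 1.0 if sigma_trial >= 0.0 else -1.0
    sigma = sigma_trial - E * dlam * sign                # plastic corrector
    return sigma, eps_p_old + dlam * sign, alpha_old + dlam

# Strain of 0.4% drives the point past yield; stress returns to the yield surface.
print(radial_return_1d(0.004, 0.0, 0.0))
```

For nonlinear hardening or anisotropic criteria the same consistency condition generally needs Newton iterations at each integration point, which is the cost the Note's direct algorithm removes.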
NASA Astrophysics Data System (ADS)
Fan, Tian-E.; Shao, Gui-Fang; Ji, Qing-Shuang; Zheng, Ji-Wen; Liu, Tun-dong; Wen, Yu-Hua
2016-11-01
Theoretically, determining the structure of a cluster amounts to searching for the global minimum on its potential energy surface. This global minimization problem is often nondeterministic-polynomial-time (NP) hard, and the number of local minima grows exponentially with cluster size. In this article, a multi-population, multi-strategy differential evolution algorithm is proposed to search for the globally stable structures of Fe and Cr nanoclusters. The algorithm combines a multi-population differential evolution with an elite pool scheme to maintain the diversity of the solutions and avoid premature trapping in local optima. Moreover, multiple strategies, namely a growing method in the initialization and three differential strategies in the mutation, are introduced to improve the convergence speed and lower the computational cost. The accuracy and effectiveness of the algorithm were verified by comparing the results for Fe clusters with the Cambridge Cluster Database. The performance of the algorithm was analyzed by comparing its convergence rate and number of energy evaluations with the classical DE algorithm, and the contributions of the multiple populations, the multi-strategy mutation, and the growing initialization were considered separately. Furthermore, the structural growth pattern of Cr clusters was predicted with this algorithm. The results show that the lowest-energy structures of Cr clusters contain many icosahedra, and the number of icosahedral rings rises with increasing size.
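For reference, the classical DE/rand/1/bin scheme that the proposed multi-population, multi-strategy variant extends looks as follows in Python; the bounds, control parameters, and toy objective are illustrative assumptions, and none of the paper's extensions (elite pool, growing initialization, multiple mutation strategies) are included.

```python
import random

def differential_evolution(f, dim, bounds=(-5.0, 5.0),
                           np_=40, F=0.5, CR=0.9, gens=300):
    """Classical DE/rand/1/bin minimization of f over a box."""
    lo, hi = bounds
    pop = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(np_)]
    fit = [f(x) for x in pop]
    for _ in range(gens):
        for i in range(np_):
            # Mutation: perturb a random base vector with a scaled difference.
            a, b, c = random.sample([j for j in range(np_) if j != i], 3)
            jrand = random.randrange(dim)
            # Binomial crossover with the current individual.
            trial = [pop[a][d] + F * (pop[b][d] - pop[c][d])
                     if (random.random() < CR or d == jrand) else pop[i][d]
                     for d in range(dim)]
            trial = [min(hi, max(lo, v)) for v in trial]
            ft = f(trial)
            if ft <= fit[i]:          # greedy one-to-one selection
                pop[i], fit[i] = trial, ft
    best = min(range(np_), key=lambda i: fit[i])
    return pop[best], fit[best]

# Toy stand-in for a cluster potential energy surface.
print(differential_evolution(lambda x: sum(v * v for v in x), dim=5))
```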
NASA Astrophysics Data System (ADS)
Ayoub, Simon
The electricity distribution and transmission grid is being modernized in several countries, including Canada. The new generation of this grid, known as the smart grid, enables among other things the automation of generation, distribution, and load management on the customer side. At the same time, smart household appliances equipped with a communication interface for smart grid applications are beginning to appear on the market. These smart appliances could form a virtual community to optimize their consumption in a distributed fashion. Distributed management of these smart loads requires communication among a large number of electrical devices, an important challenge if the cost of infrastructure and maintenance is not to increase. In this thesis two distinct systems were designed: a peer-to-peer communication system, called Ring-Tree, enabling communication among a large number of nodes (up to on the order of a million), such as communicating electrical appliances, and a distributed load-management technique for the electrical grid. The Ring-Tree communication system includes a new network topology that had never been defined or exploited before, as well as algorithms for creating, operating, and maintaining this network. It is simple enough to be implemented on controllers attached to devices such as water heaters, storage heaters, electric charging stations, etc. It uses no centralized server (or barely one, only when a node wants to join the network). It offers a distributed solution that can be deployed without any infrastructure other than the controllers on the targeted devices. Finally, a response time of a few seconds to reach the entire network can be obtained, which is sufficient for the intended applications. The communication protocols rely on a transport protocol that can be one of those used on the Internet, such as TCP or UDP. To validate the distributed control technique and the Ring-Tree communication system, a simulator was developed, and a water-heater model was integrated into it as an example load. Simulating a community of smart water heaters showed that the load-management technique, combined with thermal energy storage, can produce varied consumption profiles, including a uniform profile representing a 100% load factor, without affecting user comfort. Keywords: Distributed Algorithm, Demand Response, Electrical Load Management, M2M (Machine-to-Machine), P2P (Peer-to-Peer), Smart Electrical Grid, Ring-Tree, Smart Grid
Bromuri, Stefano; Zufferey, Damien; Hennebert, Jean; Schumacher, Michael
2014-10-01
This research is motivated by the issue of classifying illnesses of chronically ill patients for decision support in clinical settings. Our main objective is to propose multi-label classification of multivariate time series contained in medical records of chronically ill patients, by means of quantization methods, such as bag of words (BoW), and multi-label classification algorithms. Our second objective is to compare supervised dimensionality reduction techniques to state-of-the-art multi-label classification algorithms. The hypothesis is that kernel methods and locality preserving projections make such algorithms good candidates to study multi-label medical time series. We combine BoW and supervised dimensionality reduction algorithms to perform multi-label classification on health records of chronically ill patients. The considered algorithms are compared with state-of-the-art multi-label classifiers in two real world datasets. Portavita dataset contains 525 diabetes type 2 (DT2) patients, with co-morbidities of DT2 such as hypertension, dyslipidemia, and microvascular or macrovascular issues. MIMIC II dataset contains 2635 patients affected by thyroid disease, diabetes mellitus, lipoid metabolism disease, fluid electrolyte disease, hypertensive disease, thrombosis, hypotension, chronic obstructive pulmonary disease (COPD), liver disease and kidney disease. The algorithms are evaluated using multi-label evaluation metrics such as hamming loss, one error, coverage, ranking loss, and average precision. Non-linear dimensionality reduction approaches behave well on medical time series quantized using the BoW algorithm, with results comparable to state-of-the-art multi-label classification algorithms. Chaining the projected features has a positive impact on the performance of the algorithm with respect to pure binary relevance approaches. The evaluation highlights the feasibility of representing medical health records using the BoW for multi-label classification tasks. The study also highlights that dimensionality reduction algorithms based on kernel methods, locality preserving projections or both are good candidates to deal with multi-label classification tasks in medical time series with many missing values and high label density.
2017-11-01
Toxicology assessment of the novel energetics methyl trinitropyrazol (MTNP) and 1,3-dimethylhexahydropyrimidine (DHP). Prepared by Emily N. Reinke, Ph.D., Health Effects Division, Toxicology, Army Public Health Center. Only fragments of the abstract survive extraction; they mention reducing costs, conserving physical resources, and sustaining the health of those potentially exposed, goals to which the U.S. Army RDECOM ETAP has been dedicated.
Multi-object Detection and Discrimination Algorithms
2015-03-26
This document contains an overview of research and work performed and published at the University of Florida from October 1, 2009 to October 31, 2013 pertaining to proposal 57306CS: Multi-object Detection and Discrimination Algorithms. A surviving fragment notes that one stage of the algorithm, similar to a depth-first search, runs in O(CN) time.
NASA Technical Reports Server (NTRS)
Leutenegger, Scott T.; Horton, Graham
1994-01-01
Recently the Multi-Level algorithm was introduced as a general-purpose solver for steady-state Markov chains. In this paper, we consider the performance of the Multi-Level algorithm for solving Nearly Completely Decomposable (NCD) Markov chains, for which special-purpose iterative aggregation/disaggregation algorithms such as the Koury-McAllister-Stewart (KMS) method have been developed that can exploit the decomposability of the Markov chain. We present experimental results indicating that the general-purpose Multi-Level algorithm is competitive, and can be significantly faster than the special-purpose KMS algorithm when Gauss-Seidel and Gaussian elimination are used for solving the individual blocks.
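Neither the Multi-Level nor the KMS algorithm is reproduced here, but the baseline they are measured against is easy to state: power iteration on the stationarity equation pi = pi P. The toy chain below is nearly completely decomposable (two strongly coupled blocks with weak leakage eps), which is exactly the structure that makes simple iterations converge slowly and motivates both special-purpose methods; the transition values are illustrative.

```python
import numpy as np

def steady_state_power(P, tol=1e-10, max_iter=100000):
    """Baseline power iteration for pi = pi P (row-stochastic P)."""
    n = P.shape[0]
    pi = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        nxt = pi @ P
        nxt /= nxt.sum()               # guard against round-off drift
        if np.abs(nxt - pi).max() < tol:
            return nxt
        pi = nxt
    return pi

# An NCD chain: two strongly coupled 2-state blocks, weak cross-block leakage.
eps = 1e-4
P = np.array([[0.7 - eps, 0.3,       eps,       0.0],
              [0.4,       0.6 - eps, 0.0,       eps],
              [eps,       0.0,       0.5 - eps, 0.5],
              [0.0,       eps,       0.6,       0.4 - eps]])
print(steady_state_power(P))
```

As eps shrinks, the subdominant eigenvalue of P approaches 1 and this iteration needs ever more sweeps, which is the behavior the aggregation-based and multi-level methods are designed to sidestep.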
Optimization of multi-objective micro-grid based on improved particle swarm optimization algorithm
NASA Astrophysics Data System (ADS)
Zhang, Jian; Gan, Yang
2018-04-01
The paper presents a multi-objective optimal configuration model for an independent micro-grid with the aims of economy and environmental protection. The Pareto solution set can be obtained by solving the multi-objective optimization configuration model of the micro-grid with the improved particle swarm algorithm. The feasibility of the improved particle swarm optimization algorithm for the multi-objective model is verified, which provides an important reference for the multi-objective optimization of independent micro-grids.
Multi-objective Optimization Design of Gear Reducer Based on Adaptive Genetic Algorithms
NASA Astrophysics Data System (ADS)
Li, Rui; Chang, Tian; Wang, Jianwei; Wei, Xiaopeng; Wang, Jinming
2008-11-01
An adaptive Genetic Algorithm (GA) is introduced to solve the multi-objective optimized design of a reducer. Firstly, according to the structure, strength, etc. of a reducer, a multi-objective optimization model of the helical gear reducer is established. Then an adaptive GA based on a fuzzy controller is introduced, aimed at the characteristics of multi-objective, multi-parameter, multi-constraint conditions. Finally, a numerical example illustrates the advantages of this approach and the effectiveness of an adaptive genetic algorithm in the optimized design of a reducer.
1998-04-01
The result of the project is a demonstration of the fusion process, the sensor management, and the real-time capabilities using simulated sensors. The demonstrator (TAD) is a system that demonstrates the core element of a battlefield ground surveillance system by simulation in near real-time. The core comprises fusion, management, and sensor/platform simulation. The surveillance system observes the real world through a non-collocated, heterogeneous multisensor system.
Kamalakar, Kotte; Mahesh, Goli; Prasad, Rachapudi B N; Karuna, Mallampalli S L
2015-01-01
Castor oil, a non-edible oil containing the hydroxy fatty acid ricinoleic acid (89.3 %), was chemically modified using a two-step procedure. The first step involved acylation (C2-C6 alkanoic anhydrides) of the -OH functionality employing a green catalyst, Kieselguhr-G, in a solvent-free medium. The catalyst was filtered after the reaction and reused several times without loss of activity. The second step was esterification of the acylated castor fatty acids with a branched mono-alcohol, 2-ethylhexanol, and with the polyols neopentyl glycol (NPG), trimethylolpropane (TMP) and pentaerythritol (PE), to obtain 16 novel base stocks. When evaluated for different lubricant properties, the base stocks showed very low pour points (-30 to -45°C), broad viscosity ranges (20.27 cSt to 370.73 cSt), high viscosity indices (144-171), good thermal and oxidative stabilities, and high weld load capacities, making them suitable for multi-range industrial applications such as hydraulic fluids, metal working fluids, gear oils, forging and aviation applications. The study revealed that acylated branched mono- and polyol esters rich in monounsaturation are desirable for developing low-pour-point base stocks.
Parallel Multi-Step/Multi-Rate Integration of Two-Time Scale Dynamic Systems
NASA Technical Reports Server (NTRS)
Chang, Johnny T.; Ploen, Scott R.; Sohl, Garett. A,; Martin, Bryan J.
2004-01-01
Increasing demands on the fidelity of real-time and high-fidelity simulations are stressing the capacity of modern processors. New integration techniques are required that provide maximum efficiency for systems that are parallelizable. However, many current techniques make assumptions that are at odds with non-cascadable systems. A new serial multi-step/multi-rate integration algorithm for dual-timescale continuous-state systems is presented which applies to these systems, and it is extended to a parallel multi-step/multi-rate algorithm. The superior performance of both algorithms is demonstrated through a representative example.
NASA Astrophysics Data System (ADS)
Piriou, F.; Razek, A.
1991-03-01
In this paper a 3D model for the coupling of magnetic and electric equations is presented. The magnetic equations are solved with the finite element method using the magnetic vector potential formulation. To take into account the effects of magnetic saturation we use the Newton-Raphson algorithm. We develop the analysis permitting the coupling of magnetic and electric equations to obtain a differential system of equations which can be solved by numerical integration. As an example we model an iron-core coil, and the validity of our model is verified by comparing the obtained results with an analytical solution and a 2D code calculation.
Self-adaptive multi-objective harmony search for optimal design of water distribution networks
NASA Astrophysics Data System (ADS)
Choi, Young Hwan; Lee, Ho Min; Yoo, Do Guen; Kim, Joong Hoon
2017-11-01
In multi-objective optimization computing, it is important to assign suitable parameters to each optimization problem to obtain better solutions. In this study, a self-adaptive multi-objective harmony search (SaMOHS) algorithm is developed to apply the parameter-setting-free technique, which is an example of a self-adaptive methodology. The SaMOHS algorithm attempts to remove some of the inconvenience from parameter setting and selects the most adaptive parameters during the iterative solution search process. To verify the proposed algorithm, an optimal least cost water distribution network design problem is applied to three different target networks. The results are compared with other well-known algorithms such as multi-objective harmony search and the non-dominated sorting genetic algorithm-II. The efficiency of the proposed algorithm is quantified by suitable performance indices. The results indicate that SaMOHS can be efficiently applied to the search for Pareto-optimal solutions in a multi-objective solution space.
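As background for the abstract above, a minimal single-objective harmony search kernel is sketched below; SaMOHS layers the parameter-setting-free adaptation of `hmcr`, `par`, and `bw` and a Pareto archive on top of this loop, none of which is reproduced, and the parameter values shown are conventional defaults, not the paper's settings.

```python
import random

def harmony_search(f, dim, bounds=(-10.0, 10.0), hms=20,
                   hmcr=0.9, par=0.3, bw=0.05, iters=5000):
    """Minimal single-objective harmony search."""
    lo, hi = bounds
    hm = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(hms)]
    fit = [f(x) for x in hm]
    for _ in range(iters):
        new = []
        for d in range(dim):
            if random.random() < hmcr:                 # memory consideration
                v = hm[random.randrange(hms)][d]
                if random.random() < par:              # pitch adjustment
                    v += random.uniform(-bw, bw) * (hi - lo)
            else:                                      # random selection
                v = random.uniform(lo, hi)
            new.append(min(hi, max(lo, v)))
        fv = f(new)
        worst = max(range(hms), key=lambda i: fit[i])
        if fv < fit[worst]:                            # replace the worst harmony
            hm[worst], fit[worst] = new, fv
    best = min(range(hms), key=lambda i: fit[i])
    return hm[best], fit[best]

print(harmony_search(lambda x: sum(v * v for v in x), dim=3))
```

The self-adaptive idea is to let values like `hmcr` and `par` evolve with the search instead of being fixed as above, which is what "parameter-setting-free" refers to in the abstract.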
NASA Astrophysics Data System (ADS)
Boissonneault, Maxime
Circuit quantum electrodynamics is a promising architecture for quantum computing as well as for studying quantum optics. In this architecture, one or more superconducting qubits playing the role of atoms are coupled to one or more resonators playing the role of optical cavities. In this thesis, I study the interaction between a single superconducting qubit and a single resonator, while allowing the qubit to have more than two levels and the resonator to have a Kerr nonlinearity. I am particularly interested in reading out the qubit state and improving this readout, in the back-action of the measurement process on the qubit, and in studying the quantum properties of the resonator by means of the qubit. For this purpose I use a reduced analytical model that I develop from the complete description of the system, mainly using unitary transformations and an adiabatic elimination. I also use an in-house numerical library that efficiently simulates the evolution of the complete system. I compare the predictions of the reduced analytical model and the results of numerical simulations with experimental results obtained by the quantronics team at CEA Saclay. These results come from spectroscopy of a superconducting qubit coupled to a driven nonlinear resonator. In a low spectroscopy-power regime, the reduced model correctly predicts the position and width of the qubit line. The line position undergoes the Lamb and Stark shifts, and its width is dominated by measurement-induced dephasing. I show that, for typical circuit QED parameters, quantitative agreement requires a nonlinear-response model of the intra-resonator field, such as the one developed here. In a high spectroscopy-power regime, sidebands appear, caused by quantum fluctuations of the intra-resonator electromagnetic field around its equilibrium value. These fluctuations are caused by squeezing of the electromagnetic field due to the resonator nonlinearity, and observing their effect via qubit spectroscopy constitutes a first. Following these quantitative successes of the reduced model, I show that two parameter regimes marginally improve the dispersive readout of a qubit with a linear resonator, and significantly improve a bifurcation readout with a nonlinear resonator. I explain the operation of a qubit readout in a linear resonator developed by an experimental team at Yale University. This readout, which uses the qubit-induced nonlinearities, has high fidelity but requires very high power and is destructive. In all these cases, the multi-level structure of the qubit proves crucial for the measurement. By suggesting ways to improve the readout of superconducting qubits, and by quantitatively describing the physics of a multi-level system coupled to a driven nonlinear resonator, the results presented in this thesis are relevant both to the use of the circuit QED architecture for quantum computing and to quantum optics. Keywords: circuit quantum electrodynamics, quantum computing, measurement, superconducting qubit, transmon, Kerr nonlinearity
Application of composite dictionary multi-atom matching in gear fault diagnosis.
Cui, Lingli; Kang, Chenhui; Wang, Huaqing; Chen, Peng
2011-01-01
Sparse decomposition based on matching pursuit is an adaptive sparse expression method for signals. This paper proposes a composite-dictionary multi-atom matching decomposition and reconstruction algorithm, and introduces threshold de-noising into the reconstruction algorithm. Based on the structural characteristics of gear fault signals, a composite dictionary combining an impulse time-frequency dictionary and a Fourier dictionary was constructed, and a genetic algorithm was applied to search for the best matching atom. The analysis of simulated gear fault signals demonstrated the effectiveness of the hard threshold, and the impulse and harmonic characteristic components could be extracted separately. The robustness of the composite-dictionary multi-atom matching algorithm at different noise levels was also investigated. To address the effect of data length on computational efficiency, an improved segmented decomposition and reconstruction algorithm was proposed, which significantly enhanced the efficiency of the decomposition. In addition, the multi-atom matching algorithm was shown to be superior to the single-atom matching algorithm in both computational efficiency and robustness. Finally, the algorithm was applied to gear fault engineering signals, with good results.
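A minimal matching-pursuit loop, assuming unit-norm atoms, shows the decomposition step the composite-dictionary algorithm generalizes; the paper's genetic atom search, composite impulse/Fourier dictionary, and threshold de-noising are not reproduced, and the toy dictionary below is a random illustration.

```python
import numpy as np

def matching_pursuit(signal, dictionary, n_atoms=10, tol=1e-6):
    """Greedy matching pursuit: repeatedly pick the (unit-norm) atom most
    correlated with the residual and subtract its contribution."""
    residual = signal.astype(float).copy()
    coeffs = np.zeros(dictionary.shape[1])
    for _ in range(n_atoms):
        corr = dictionary.T @ residual            # correlation with every atom
        k = np.argmax(np.abs(corr))
        if abs(corr[k]) < tol:
            break
        coeffs[k] += corr[k]
        residual -= corr[k] * dictionary[:, k]    # peel off the matched atom
    return coeffs, residual

# Toy dictionary with unit-norm columns; the signal mixes two known atoms.
rng = np.random.default_rng(0)
D = rng.standard_normal((64, 128))
D /= np.linalg.norm(D, axis=0)
x = 2.0 * D[:, 3] - 1.5 * D[:, 40]
c, r = matching_pursuit(x, D, n_atoms=5)
print(np.nonzero(c)[0], np.linalg.norm(r))  # selected atom indices, residual norm
```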
Research and application of multi-agent genetic algorithm in tower defense game
NASA Astrophysics Data System (ADS)
Jin, Shaohua
2018-04-01
In this paper, a new multi-agent genetic algorithm based on orthogonal experiments is proposed, combining a multi-agent system, a genetic algorithm, and orthogonal experimental design. The algorithm incorporates a neighborhood competition operator, an orthogonal crossover operator, and a self-learning operator. It is applied to a mobile tower defense game: according to the characteristics of the game, a mathematical model is established, and the algorithm ultimately increases the value of the game's monsters.
Research status of multi - robot systems task allocation and uncertainty treatment
NASA Astrophysics Data System (ADS)
Li, Dahui; Fan, Qi; Dai, Xuefeng
2017-08-01
Multi-robot coordination algorithms have become a hot research topic in robotics in recent years, with a wide range of applications and good prospects. This paper analyzes and summarizes the current state of research on multi-robot coordination algorithms. From the perspectives of task allocation and uncertainty handling, it discusses the main multi-robot coordination algorithms and presents the advantages and disadvantages of each commonly used method.
Fuzzy Neural Network-Based Interacting Multiple Model for Multi-Node Target Tracking Algorithm
Sun, Baoliang; Jiang, Chunlan; Li, Ming
2016-01-01
An interacting multiple model for multi-node target tracking algorithm was proposed based on a fuzzy neural network (FNN) to solve the multi-node target tracking problem of wireless sensor networks (WSNs). Measured error variance was adaptively adjusted during the multiple model interacting output stage using the difference between the theoretical and estimated values of the measured error covariance matrix. The FNN fusion system was established during multi-node fusion to integrate with the target state estimated data from different nodes and consequently obtain network target state estimation. The feasibility of the algorithm was verified based on a network of nine detection nodes. Experimental results indicated that the proposed algorithm could trace the maneuvering target effectively under sensor failure and unknown system measurement errors. The proposed algorithm exhibited great practicability in the multi-node target tracking of WSNs. PMID:27809271
2010-11-01
November 2010. Context: the computing power available today allows us to study problems for which no analytical solution exists. Only front-matter fragments of this report (DRDC CORA TM 2010-249) survive extraction; they reference a proof of a corollary, optimal capacities for links, and an example figure showing the probability of achieving the optimum.
1992-06-01
Third Seminar on Battlefield Robotics, Defence Research Group (unclassified). Only fragments survive extraction: one passage envisions an intelligence-gathering robot, a sort of deep-reconnaissance unit, expected to do better than what can be hoped for with human crews; other fragments mention basic vehicle-piloting functions intended to improve and simplify remote controllability (M. Urvoy) and an adaptation of the A* algorithm.
NASA Astrophysics Data System (ADS)
Zhang, Tianzhen; Wang, Xiumei; Gao, Xinbo
2018-04-01
Nowadays, many datasets are represented by multiple views, which usually contain shared and complementary information. Multi-view clustering methods integrate the information of the views to obtain better clustering results. Nonnegative matrix factorization has become an essential and popular tool in clustering methods because of its interpretability. However, existing multi-view clustering algorithms based on nonnegative matrix factorization do not consider the disagreement between views and neglect the fact that different views contribute differently to the data distribution. In this paper, we propose a new multi-view clustering method, named adaptive multi-view clustering based on nonnegative matrix factorization and pairwise co-regularization. The proposed algorithm obtains a parts-based representation of the multi-view data by nonnegative matrix factorization, and pairwise co-regularization is then used to measure the disagreement between views. A single parameter suffices to automatically learn the weight of each view according to its contribution to the data distribution. Experimental results show that the proposed algorithm outperforms several state-of-the-art algorithms for multi-view clustering.
Multi-Level Sequential Pattern Mining Based on Prime Encoding
NASA Astrophysics Data System (ADS)
Lianglei, Sun; Yun, Li; Jiang, Yin
Encoding not only expresses the hierarchical relationship but also facilitates identifying the relationship between different levels, which directly affects the efficiency of algorithms for mining multi-level sequential patterns. In this paper, we prove that with prime encoding a single division operation decides the parent-child relationship between different levels, and we present the PMSM and CROSS-PMSM algorithms, both based on prime encoding, for mining multi-level and cross-level sequential patterns respectively. Experimental results show that the algorithms can effectively extract multi-level and cross-level sequential patterns from a sequence database.
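A hypothetical reconstruction of the divisibility idea: if every node's code is the product of distinct primes along its root path, then one integer division decides whether one node is an ancestor of another. The tree, the prime assignment order, and the function names below are illustrative, not the paper's exact encoding.

```python
def assign_codes(children, root):
    """Encode every node as the product of distinct primes along its root
    path, so divisibility of codes exactly mirrors path containment."""
    primes = iter([2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43])
    codes = {root: next(primes)}
    stack = [root]
    while stack:
        node = stack.pop()
        for child in children.get(node, []):
            codes[child] = codes[node] * next(primes)
            stack.append(child)
    return codes

def is_ancestor(codes, a, b):
    """True if a is an ancestor of (or equal to) b: one integer division."""
    return codes[b] % codes[a] == 0

# Toy product hierarchy: food -> dairy -> milk, and food -> bread.
children = {"food": ["dairy", "bread"], "dairy": ["milk"]}
codes = assign_codes(children, "food")
print(is_ancestor(codes, "food", "milk"))   # True
print(is_ancestor(codes, "bread", "milk"))  # False
```

Because every node receives its own prime, the code of a descendant is divisible only by the codes of its ancestors, which is what makes the single-division level test possible.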
NASA Astrophysics Data System (ADS)
Lv, Gangming; Zhu, Shihua; Hui, Hui
Multi-cell resource allocation under a minimum rate requirement for each user in OFDMA networks is addressed in this paper. Based on Lagrange dual decomposition theory, the joint multi-cell resource allocation problem is decomposed and modeled as a limited-cooperation game, and a distributed multi-cell resource allocation algorithm is thus proposed. Analysis and simulation results show that, compared with the non-cooperative iterative water-filling algorithm, the proposed algorithm can remarkably reduce the inter-cell interference (ICI) level and improve overall system performance.
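For context, the single-cell water-filling step that the non-cooperative iterative baseline repeats per cell can be sketched as follows; the bisection on the water level is a standard implementation choice, and the paper's game-theoretic multi-cell coordination is not reproduced.

```python
def water_filling(gains, total_power, tol=1e-9):
    """Single-cell water-filling: p_i = max(0, mu - 1/g_i), with the water
    level mu chosen by bisection so the powers sum to the budget."""
    lo, hi = 0.0, total_power + max(1.0 / g for g in gains)
    while hi - lo > tol:
        mu = 0.5 * (lo + hi)
        used = sum(max(0.0, mu - 1.0 / g) for g in gains)
        if used > total_power:
            hi = mu
        else:
            lo = mu
    mu = 0.5 * (lo + hi)
    return [max(0.0, mu - 1.0 / g) for g in gains]

# Four subcarriers with unequal channel gains and a unit power budget:
# the best channels get the most power, the worst may get none.
print(water_filling([2.0, 1.0, 0.5, 0.25], total_power=1.0))
```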
Cooperative path planning for multi-USV based on improved artificial bee colony algorithm
NASA Astrophysics Data System (ADS)
Cao, Lu; Chen, Qiwei
2018-03-01
Due to the complex constraints, numerous uncertain factors, and critical real-time demands of path planning for multiple unmanned surface vehicles (multi-USV), an improved artificial bee colony (I-ABC) algorithm is proposed to solve the cooperative path-planning model for multi-USV. First, a Voronoi diagram of the battlefield space is constructed to generate the optimal region for USV paths. Then a chaotic search algorithm is used to initialize the collection of paths, which serves as the food sources of the ABC algorithm. With limited data, the initial collection can cover the optimal path region well. Finally, simulations of multi-USV path planning under various threats were carried out. The results verify that the I-ABC algorithm improves the diversity of the nectar sources and the convergence rate of the algorithm, and that it increases the USVs' adaptability to dynamic battlefields and unexpected threats.
NASA Astrophysics Data System (ADS)
Louis, Ognel Pierre
The goal of this study is to develop a tool for estimating the risk level of vigor loss in forest stands of the Gounamitz region in northwestern New Brunswick, using forest inventory data and remote sensing data. To this end, a 100 m x 100 m marteloscope and 20 sampling plots were delimited. Within these, the risk of vigor loss was determined for trees with a DBH greater than or equal to 9 cm. To characterize the risk of vigor loss, the spatial positions of the trees were recorded with a GPS, taking stem defects into account. To carry out this work, the vegetation and texture indices and the spectral bands of the airborne image were extracted and treated as independent variables. The risk level of vigor loss obtained per tree species from the forest inventories was treated as the dependent variable. To obtain the area of the forest stands in the study region, a supervised classification of the images using the maximum likelihood algorithm was performed. The risk of vigor loss per tree type was then estimated with neural networks, using a multilayer perceptron: a network composed of 11 neurons on the input layer, corresponding to the independent variables, 35 neurons on the hidden layer, and 4 neurons on the output layer. Prediction with the neural networks produces a confusion matrix yielding quantitative estimation measures, notably an overall classification rate of 91.7% for predicting the risk of vigor loss in the softwood stand and 89.7% for the hardwood stand. Evaluating the performance of the neural networks gives an overall MSE (Mean Square Error) of 0.04 and an overall RMSE (Root Mean Square Error) of 0.20 for the hardwood stand. For the softwood stand, an overall MSE of 0.05 and an overall RMSE of 0.22 were obtained. To validate the results, the predicted risk of vigor loss was compared with the reference risk of vigor loss. The results give a coefficient of determination of 0.98 for the hardwood stand and 0.93 for the softwood stand.
Single and Multiple Object Tracking Using a Multi-Feature Joint Sparse Representation.
Hu, Weiming; Li, Wei; Zhang, Xiaoqin; Maybank, Stephen
2015-04-01
In this paper, we propose a tracking algorithm based on a multi-feature joint sparse representation. The templates for the sparse representation can include pixel values, textures, and edges. In the multi-feature joint optimization, noise or occlusion is dealt with using a set of trivial templates. A sparse weight constraint is introduced to dynamically select the relevant templates from the full set of templates. A variance ratio measure is adopted to adaptively adjust the weights of different features. The multi-feature template set is updated adaptively. We further propose an algorithm for tracking multiple objects with occlusion handling based on the multi-feature joint sparse reconstruction. The observation model based on sparse reconstruction automatically focuses on the visible parts of an occluded object by using the information in the trivial templates. The multi-object tracking is simplified into a joint Bayesian inference. The experimental results show the superiority of our algorithm over several state-of-the-art tracking algorithms.
A Modified Distributed Bees Algorithm for Multi-Sensor Task Allocation.
Tkach, Itshak; Jevtić, Aleksandar; Nof, Shimon Y; Edan, Yael
2018-03-02
Multi-sensor systems can play an important role in monitoring tasks and detecting targets. However, real-time allocation of heterogeneous sensors to dynamic targets/tasks that are unknown a priori in their locations and priorities is a challenge. This paper presents a Modified Distributed Bees Algorithm (MDBA) that is developed to allocate stationary heterogeneous sensors to upcoming unknown tasks using a decentralized, swarm intelligence approach to minimize the task detection times. Sensors are allocated to tasks based on sensors' performance, tasks' priorities, and the distances of the sensors from the locations where the tasks are being executed. The algorithm was compared to a Distributed Bees Algorithm (DBA), a Bees System, and two common multi-sensor algorithms, market-based and greedy-based algorithms, which were fitted for the specific task. Simulation analyses revealed that MDBA achieved statistically significant improved performance by 7% with respect to DBA as the second-best algorithm, and by 19% with respect to Greedy algorithm, which was the worst, thus indicating its fitness to provide solutions for heterogeneous multi-sensor systems.
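A hedged sketch of a DBA-style probabilistic assignment, where the chance that a sensor takes a task rises with task priority and falls with distance, illustrates the swarm-intelligence flavor of the approach; the exponents `alpha` and `beta`, the score formula, and the data layout are illustrative assumptions, and the MDBA's specific modifications are not reproduced.

```python
import math
import random

def dist(p, q):
    """Euclidean distance, floored to avoid division by zero."""
    return math.hypot(p[0] - q[0], p[1] - q[1]) or 1e-9

def allocation_probabilities(sensor, tasks, alpha=1.0, beta=1.0):
    """DBA-style probabilities for one sensor: higher task priority and
    shorter distance raise the chance of being chosen (illustrative form)."""
    scores = [(t["priority"] ** alpha) / (dist(sensor["pos"], t["pos"]) ** beta)
              for t in tasks]
    total = sum(scores)
    return [s / total for s in scores]

def choose_task(sensor, tasks):
    """Roulette-wheel selection over the allocation probabilities."""
    probs = allocation_probabilities(sensor, tasks)
    r, acc = random.random(), 0.0
    for task, p in zip(tasks, probs):
        acc += p
        if r <= acc:
            return task
    return tasks[-1]

sensor = {"pos": (0.0, 0.0)}
tasks = [{"pos": (1.0, 0.0), "priority": 3.0},
         {"pos": (4.0, 3.0), "priority": 1.0}]
print(choose_task(sensor, tasks))  # usually picks the close, high-priority task
```

Because every sensor draws independently from its own distribution, the scheme stays decentralized, which is the property the abstract emphasizes.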
NASA Astrophysics Data System (ADS)
Hou, Zhenlong; Huang, Danian
2017-09-01
In this paper, we first study the inversion of probability tomography (IPT) with gravity gradiometry data. The spatial resolution of the results is improved by multi-tensor joint inversion, a depth-weighting matrix, and other methods. To address the problems brought by big data in exploration, we present a parallel algorithm and its performance analysis, combining the Compute Unified Device Architecture (CUDA) with Open Multi-Processing (OpenMP) for Graphics Processing Unit (GPU) acceleration. Tests on a synthetic model and on real data from the Vinton Dome yield improved results, demonstrating that the improved inversion algorithm is effective and feasible. The parallel algorithm we designed performs better than other CUDA implementations, with a maximum speedup of more than 200. In the performance analysis, multi-GPU speedup and multi-GPU efficiency are applied to analyze the scalability of the multi-GPU programs. The designed parallel algorithm is demonstrated to be able to process larger volumes of data, and the new analysis method is practical.
NASA Astrophysics Data System (ADS)
Tabik, S.; Romero, L. F.; Mimica, P.; Plata, O.; Zapata, E. L.
2012-09-01
A broad area in astronomy focuses on simulating extragalactic objects based on Very Long Baseline Interferometry (VLBI) radio-maps. Several algorithms in this scope simulate what the observed radio-maps would be if emitted from a predefined extragalactic object. This work analyzes the performance and scaling of this kind of algorithm on multi-socket, multi-core architectures. In particular, we evaluate a sharing approach, a privatizing approach and a hybrid approach on systems with a complex memory hierarchy that includes a shared Last Level Cache (LLC). In addition, we investigate which manual processes can be systematized and then automated in future work. The experiments show that the data-privatizing model scales efficiently on medium-scale multi-socket, multi-core systems (up to 48 cores), while, regardless of algorithmic and scheduling optimizations, the sharing approach is unable to reach acceptable scalability on more than one socket. The hybrid model with a specific level of data-sharing, however, provides the best scalability over all the multi-socket, multi-core systems used.
A multi-level solution algorithm for steady-state Markov chains
NASA Technical Reports Server (NTRS)
Horton, Graham; Leutenegger, Scott T.
1993-01-01
A new iterative algorithm, the multi-level algorithm, for the numerical solution of steady state Markov chains is presented. The method utilizes a set of recursively coarsened representations of the original system to achieve accelerated convergence. It is motivated by multigrid methods, which are widely used for fast solution of partial differential equations. Initial results of numerical experiments are reported, showing significant reductions in computation time, often an order of magnitude or more, relative to the Gauss-Seidel and optimal SOR algorithms for a variety of test problems. The multi-level method is compared and contrasted with the iterative aggregation-disaggregation algorithm of Takahashi.
Multi-task feature selection in microarray data by binary integer programming.
Lan, Liang; Vucetic, Slobodan
2013-12-20
A major challenge in microarray classification is that the number of features is typically orders of magnitude larger than the number of examples. In this paper, we propose a novel feature filter algorithm to select the feature subset with maximal discriminative power and minimal redundancy by solving a quadratic objective function with binary integer constraints. To improve the computational efficiency, the binary integer constraints are relaxed and a low-rank approximation to the quadratic term is applied. The proposed feature selection algorithm was extended to solve multi-task microarray classification problems. We compared the single-task version of the proposed feature selection algorithm with 9 existing feature selection methods on 4 benchmark microarray data sets. The empirical results show that the proposed method achieved the most accurate predictions overall. We also evaluated the multi-task version of the proposed algorithm on 8 multi-task microarray datasets. The multi-task feature selection algorithm resulted in significantly higher accuracy than when using the single-task feature selection methods.
EDMC: An enhanced distributed multi-channel anti-collision algorithm for RFID reader system
NASA Astrophysics Data System (ADS)
Zhang, YuJing; Cui, Yinghua
2017-05-01
In this paper, we propose an enhanced distributed multi-channel reader anti-collision algorithm for RFID environments, based on the existing distributed multi-channel reader anti-collision algorithm for RFID environments (DiMCA). We propose a monitoring method to decide whether a reader has received the latest control message after it selects its data channel. Simulation results show that the new algorithm reduces interrogation delay.
NASA Astrophysics Data System (ADS)
Fanuel, Ibrahim Mwita; Mushi, Allen; Kajunguri, Damian
2018-03-01
This paper analyzes more than 40 papers on the application of the Multi-Objective Genetic Algorithm, the Non-Dominated Sorting Genetic Algorithm-II, and Multi-Objective Differential Evolution (MODE) to multi-objective problems in agricultural water management. The review focuses on different application aspects, including water allocation, irrigation planning, cropping patterns, and allocation of available land, and discusses the performance and results of these techniques. The review finds considerable potential for using MODE to analyze multi-objective problems, all the more so because it is a simpler yet more powerful technique than many other evolutionary algorithms. The paper concludes with promising directions for future research that call for effective use of MODE, such as incorporating the benefits derived from farm byproducts and the production costs into the model.
Value of the rocuronium-sugammadex algorithm in suspension direct laryngoscopy
El jaouhari, Sidi Driss; Meziane, Mohamed; Ahtil, Redouane; Bensghir, Mustapha; Haimeur, Charki
2017-01-01
Suspension direct laryngoscopy is a diagnostic and/or therapeutic surgical procedure for endolaryngeal lesions. Its anesthetic management is complicated, and different anesthetic techniques can be proposed. Despite their constraints, muscle relaxants retain their full value. The combination of rocuronium and sugammadex is a viable option: it allows rapid reversal of deep neuromuscular blockade and consequently a reduction in postoperative morbidity. We report a case of suspension direct laryngoscopy performed under general anesthesia in which the use of the rocuronium-sugammadex combination made the surgical procedure easier, ensured patient safety, and provided comfort for the anesthesiologist. PMID:28690746
Multi-scale graph-cut algorithm for efficient water-fat separation.
Berglund, Johan; Skorpil, Mikael
2017-09-01
The aim was to improve the accuracy and robustness to noise of water-fat separation by unifying the multi-scale and graph-cut based approaches to B0 correction. A previously proposed water-fat separation algorithm that corrects for B0 field inhomogeneity in 3D by a single quadratic pseudo-Boolean optimization (QPBO) graph cut was incorporated into a multi-scale framework, where field-map solutions are propagated from coarse to fine scales for voxels that are not resolved by the graph cut. The accuracy of the single-scale and multi-scale QPBO algorithms was evaluated against benchmark reference datasets. The robustness to noise was evaluated by adding noise to the input data prior to water-fat separation. Both algorithms achieved the highest accuracy when compared with seven previously published methods, while computation times were acceptable for implementation in clinical routine. The multi-scale algorithm was more robust to noise than the single-scale algorithm, while causing only a small increase (+10%) in reconstruction time. The proposed 3D multi-scale QPBO algorithm offers accurate water-fat separation, robustness to noise, and fast reconstruction. The software implementation is freely available to the research community. Magn Reson Med 78:941-949, 2017. © 2016 International Society for Magnetic Resonance in Medicine.
Learning Behavior Characterization with Multi-Feature, Hierarchical Activity Sequences
ERIC Educational Resources Information Center
Ye, Cheng; Segedy, James R.; Kinnebrew, John S.; Biswas, Gautam
2015-01-01
This paper discusses Multi-Feature Hierarchical Sequential Pattern Mining, MFH-SPAM, a novel algorithm that efficiently extracts patterns from students' learning activity sequences. This algorithm extends an existing sequential pattern mining algorithm by dynamically selecting the level of specificity for hierarchically-defined features…
A multi-pattern hash-binary hybrid algorithm for URL matching in the HTTP protocol.
Zeng, Ping; Tan, Qingping; Meng, Xiankai; Shao, Zeming; Xie, Qinzheng; Yan, Ying; Cao, Wei; Xu, Jianjun
2017-01-01
In this paper, based on our previous multi-pattern uniform resource locator (URL) binary-matching algorithm called HEM, we propose an improved multi-pattern matching algorithm called MH that is based on hash tables and binary tables. The MH algorithm can be applied to the fields of network security, data analysis, load balancing, cloud robotic communications, and so on, all of which require string matching from a fixed starting position. Our approach effectively solves the performance problems of the classical multi-pattern matching algorithms. This paper explores ways to improve string matching performance under the HTTP protocol by using a hash method combined with a binary method that transforms the symbol-space matching problem into a digital-space numerical-size comparison and hashing problem. The MH approach has a fast matching speed, requires little memory, performs better than both the classical algorithms and HEM for matching fields in an HTTP stream, and has great promise for use in real-world applications.
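A minimal sketch of the fixed-start multi-pattern idea, with patterns bucketed by length so that a hash probe replaces symbol-by-symbol scanning; this illustrates the general hash-based approach, not the MH algorithm's actual hash/binary table layout.

```python
def build_index(patterns):
    """Bucket patterns by length; each bucket is a hash set, so candidate
    lookup is a numeric hash probe rather than a per-symbol comparison."""
    index = {}
    for p in patterns:
        index.setdefault(len(p), set()).add(p)
    return index

def match_at_start(text, index):
    """Return every pattern that matches text from position 0."""
    hits = []
    for length, bucket in index.items():
        if length <= len(text) and text[:length] in bucket:
            hits.append(text[:length])
    return hits

idx = build_index(["GET /", "GET /index", "POST /login"])
print(match_at_start("GET /index.html HTTP/1.1", idx))  # ['GET /', 'GET /index']
```

The cost per query is one hash probe per distinct pattern length, which is where the speed advantage over naive per-pattern scanning comes from.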
NASA Astrophysics Data System (ADS)
Zhao, Wei-hu; Zhao, Jing; Zhao, Shang-hong; Li, Yong-jun; Wang, Xiang; Dong, Yi; Dong, Chen
2013-08-01
Optical satellite communication, with its advantages of broad bandwidth, large capacity, and low power consumption, breaks the bottleneck of traditional microwave satellite communication. Building the space-based information system on high-performance optical inter-satellite links, with global seamless coverage and mobile terminal access, is the inevitable trend in the development of optical satellite communication. Considering the resources, missions, and constraints of a data relay satellite optical communication system, a model of optical communication resource scheduling is established and a scheduling algorithm based on artificial-intelligence optimization is put forward. For multiple relay satellites, multiple user satellites, multiple optical antennas, and multiple missions with different priority weights, resources are scheduled reasonably through two operations: "Ascertain Current Mission Scheduling Time" and "Refresh Latter Mission Time-Window". The priority weight enters the fitness function, and the scheduling plan is optimized by a genetic algorithm. In a simulation scenario comprising 3 relay satellites with 6 optical antennas, 12 user satellites, and 30 missions, the results reveal that the algorithm achieves satisfactory efficiency and performance, and that the resource scheduling model and the optimization algorithm suit the multi-relay-satellite, multi-user-satellite, multi-optical-antenna resource scheduling problem.
[Research on non-rigid registration of multi-modal medical image based on Demons algorithm].
Hao, Peibo; Chen, Zhen; Jiang, Shaofeng; Wang, Yang
2014-02-01
Non-rigid medical image registration is a popular research subject in medical imaging and has important clinical value. In this paper we put forward an improved Demons algorithm that combines a gray-level conservation model with a local structure tensor conservation model to construct a new energy function for the multi-modal registration problem. We then apply the L-BFGS algorithm to optimize the energy function and solve the complex three-dimensional optimization problem. Finally, multi-scale hierarchical refinement is used to handle large-deformation registration. Experimental results showed that the proposed algorithm performs well for large-deformation, multi-modal, three-dimensional medical image registration.
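For orientation, the classical (Thirion) demons force that the improved algorithm starts from can be written in a few lines of NumPy; the gray-level and structure-tensor energy terms and the L-BFGS optimization from the paper are not reproduced, and a full registration loop would also resample the moving image with the current field, which is omitted here.

```python
import numpy as np

def demons_step(fixed, moving, u, v):
    """One classical demons update on 2D images:
    force = (m - f) * grad(f) / (|grad f|^2 + (m - f)^2),
    followed by a cheap smoothing of the displacement field (u, v)."""
    diff = moving - fixed
    gy, gx = np.gradient(fixed)              # image gradients (rows, cols)
    denom = gx**2 + gy**2 + diff**2
    denom[denom == 0] = 1.0                  # avoid division by zero
    u = u + diff * gx / denom
    v = v + diff * gy / denom
    # 4-neighbour averaging stands in for the usual Gaussian smoothing.
    for w in (u, v):
        w[1:-1, 1:-1] = 0.2 * (w[1:-1, 1:-1] + w[:-2, 1:-1] + w[2:, 1:-1]
                               + w[1:-1, :-2] + w[1:-1, 2:])
    return u, v

# Toy pair: a square and the same square shifted by two pixels.
fixed = np.zeros((32, 32)); fixed[8:24, 8:24] = 1.0
moving = np.roll(fixed, 2, axis=1)
u = np.zeros_like(fixed); v = np.zeros_like(fixed)
u, v = demons_step(fixed, moving, u, v)      # one force/smooth update
```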
ERIC Educational Resources Information Center
Pugliese, Cara E.; Kenworthy, Lauren; Bal, Vanessa Hus; Wallace, Gregory L.; Yerys, Benjamin E.; Maddox, Brenna B.; White, Susan W.; Popal, Haroon; Armour, Anna Chelsea; Miller, Judith; Herrington, John D.; Schultz, Robert T.; Martin, Alex; Anthony, Laura Gutermuth
2015-01-01
Recent updates have been proposed to the Autism Diagnostic Observation Schedule-2 Module 4 diagnostic algorithm. This new algorithm, however, has not yet been validated in an independent sample without intellectual disability (ID). This multi-site study compared the original and revised algorithms in individuals with ASD without ID. The revised…
Multipoint to multipoint routing and wavelength assignment in multi-domain optical networks
NASA Astrophysics Data System (ADS)
Qin, Panke; Wu, Jingru; Li, Xudong; Tang, Yongli
2018-01-01
In multi-point to multi-point (MP2MP) routing and wavelength assignment (RWA) problems, researchers usually assume the optical network to be a single domain. In practice, however, optical networks are developing toward multiple domains and larger scales. In this context, multi-core shared tree (MST)-based MP2MP RWA introduces new problems, including selecting the optimal multicast domain sequence and deciding in which domains the core nodes should reside. In this letter, we focus on MST-based MP2MP RWA problems in multi-domain optical networks and present mixed integer linear programming (MILP) formulations to optimally construct MP2MP multicast trees. A heuristic algorithm based on network virtualization and a weighted clustering algorithm (NV-WCA) is proposed. Simulation results show that, under different traffic patterns, the proposed algorithm achieves significant improvements in network resource occupation and multicast tree setup latency compared with conventional algorithms designed for single-domain networks.
An adaptive evolutionary multi-objective approach based on simulated annealing.
Li, H; Landa-Silva, D
2011-01-01
A multi-objective optimization problem can be solved by decomposing it into one or more single objective subproblems in some multi-objective metaheuristic algorithms. Each subproblem corresponds to one weighted aggregation function. For example, MOEA/D is an evolutionary multi-objective optimization (EMO) algorithm that attempts to optimize multiple subproblems simultaneously by evolving a population of solutions. However, the performance of MOEA/D highly depends on the initial setting and diversity of the weight vectors. In this paper, we present an improved version of MOEA/D, called EMOSA, which incorporates an advanced local search technique (simulated annealing) and adapts the search directions (weight vectors) corresponding to various subproblems. In EMOSA, the weight vector of each subproblem is adaptively modified at the lowest temperature in order to diversify the search toward the unexplored parts of the Pareto-optimal front. Our computational results show that EMOSA outperforms six other well established multi-objective metaheuristic algorithms on both the (constrained) multi-objective knapsack problem and the (unconstrained) multi-objective traveling salesman problem. Moreover, the effects of the main algorithmic components and parameter sensitivities on the search performance of EMOSA are experimentally investigated.
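The abstract presupposes the standard simulated-annealing acceptance rule; for completeness, a plain single-objective kernel is sketched below. EMOSA's weight-vector adaptation and Pareto archive are built on top of this and are not reproduced; the toy objective, neighbourhood move, and cooling schedule are illustrative.

```python
import math
import random

def simulated_annealing(f, x0, neighbour, t0=1.0, cooling=0.995, iters=20000):
    """Plain simulated annealing: always accept improvements, accept
    worsening moves with probability exp(-delta / T), and cool T geometrically."""
    x, fx = x0, f(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(iters):
        y = neighbour(x)
        fy = f(y)
        if fy <= fx or random.random() < math.exp((fx - fy) / t):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
        t *= cooling              # geometric cooling schedule
    return best, fbest

# Toy: minimize a 1-D multimodal function with a local-move neighbourhood.
f = lambda x: x * x + 10 * math.cos(3 * x)
nb = lambda x: x + random.uniform(-0.5, 0.5)
print(simulated_annealing(f, x0=5.0, neighbour=nb))
```

In EMOSA the scalar objective `f` is one weighted aggregation of the multiple objectives, and it is the weight vector of each such subproblem that is adapted at the lowest temperature.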
Using multi-class queuing network to solve performance models of e-business sites.
Zheng, Xiao-ying; Chen, De-ren
2004-01-01
Due to e-business's variety of customers with different navigational patterns and demands, multi-class queuing network is a natural performance model for it. The open multi-class queuing network(QN) models are based on the assumption that no service center is saturated as a result of the combined loads of all the classes. Several formulas are used to calculate performance measures, including throughput, residence time, queue length, response time and the average number of requests. The solution technique of closed multi-class QN models is an approximate mean value analysis algorithm (MVA) based on three key equations, because the exact algorithm needs huge time and space requirement. As mixed multi-class QN models, include some open and some closed classes, the open classes should be eliminated to create a closed multi-class QN so that the closed model algorithm can be applied. Some corresponding examples are given to show how to apply the algorithms mentioned in this article. These examples indicate that multi-class QN is a reasonably accurate model of e-business and can be solved efficiently.
T-L Plane Abstraction-Based Energy-Efficient Real-Time Scheduling for Multi-Core Wireless Sensors.
Kim, Youngmin; Lee, Ki-Seong; Pham, Ngoc-Son; Lee, Sun-Ro; Lee, Chan-Gun
2016-07-08
Energy efficiency is considered as a critical requirement for wireless sensor networks. As more wireless sensor nodes are equipped with multi-cores, there are emerging needs for energy-efficient real-time scheduling algorithms. The T-L plane-based scheme is known to be an optimal global scheduling technique for periodic real-time tasks on multi-cores. Unfortunately, there has been a scarcity of studies on extending T-L plane-based scheduling algorithms to exploit energy-saving techniques. In this paper, we propose a new T-L plane-based algorithm enabling energy-efficient real-time scheduling on multi-core sensor nodes with dynamic power management (DPM). Our approach addresses the overhead of processor mode transitions and reduces fragmentations of the idle time, which are inherent in T-L plane-based algorithms. Our experimental results show the effectiveness of the proposed algorithm compared to other energy-aware scheduling methods on T-L plane abstraction.
The optimal algorithm for Multi-source RS image fusion.
Fu, Wei; Huang, Shui-Guang; Li, Zeng-Shun; Shen, Hao; Li, Jun-Shuai; Wang, Peng-Yuan
2016-01-01
In order to solve the issue which the fusion rules cannot be self-adaptively adjusted by using available fusion methods according to the subsequent processing requirements of Remote Sensing (RS) image, this paper puts forward GSDA (genetic-iterative self-organizing data analysis algorithm) by integrating the merit of genetic arithmetic together with the advantage of iterative self-organizing data analysis algorithm for multi-source RS image fusion. The proposed algorithm considers the wavelet transform of the translation invariance as the model operator, also regards the contrast pyramid conversion as the observed operator. The algorithm then designs the objective function by taking use of the weighted sum of evaluation indices, and optimizes the objective function by employing GSDA so as to get a higher resolution of RS image. As discussed above, the bullet points of the text are summarized as follows.•The contribution proposes the iterative self-organizing data analysis algorithm for multi-source RS image fusion.•This article presents GSDA algorithm for the self-adaptively adjustment of the fusion rules.•This text comes up with the model operator and the observed operator as the fusion scheme of RS image based on GSDA. The proposed algorithm opens up a novel algorithmic pathway for multi-source RS image fusion by means of GSDA.
Multi-Source Multi-Target Dictionary Learning for Prediction of Cognitive Decline.
Zhang, Jie; Li, Qingyang; Caselli, Richard J; Thompson, Paul M; Ye, Jieping; Wang, Yalin
2017-06-01
Alzheimer's Disease (AD) is the most common type of dementia. Identifying correct biomarkers may determine pre-symptomatic AD subjects and enable early intervention. Recently, Multi-task sparse feature learning has been successfully applied to many computer vision and biomedical informatics researches. It aims to improve the generalization performance by exploiting the shared features among different tasks. However, most of the existing algorithms are formulated as a supervised learning scheme. Its drawback is with either insufficient feature numbers or missing label information. To address these challenges, we formulate an unsupervised framework for multi-task sparse feature learning based on a novel dictionary learning algorithm. To solve the unsupervised learning problem, we propose a two-stage Multi-Source Multi-Target Dictionary Learning (MMDL) algorithm. In stage 1, we propose a multi-source dictionary learning method to utilize the common and individual sparse features in different time slots. In stage 2, supported by a rigorous theoretical analysis, we develop a multi-task learning method to solve the missing label problem. Empirical studies on an N = 3970 longitudinal brain image data set, which involves 2 sources and 5 targets, demonstrate the improved prediction accuracy and speed efficiency of MMDL in comparison with other state-of-the-art algorithms.
Algorithmic and user study of an autocompletion algorithm on a large medical vocabulary.
Sevenster, Merlijn; van Ommering, Rob; Qian, Yuechen
2012-02-01
Autocompletion supports human-computer interaction in software applications that let users enter textual data. We will be inspired by the use case in which medical professionals enter ontology concepts, catering the ongoing demand for structured and standardized data in medicine. Goal is to give an algorithmic analysis of one particular autocompletion algorithm, called multi-prefix matching algorithm, which suggests terms whose words' prefixes contain all words in the string typed by the user, e.g., in this sense, opt ner me matches optic nerve meningioma. Second we aim to investigate how well it supports users entering concepts from a large and comprehensive medical vocabulary (snomed ct). We give a concise description of the multi-prefix algorithm, and sketch how it can be optimized to meet required response time. Performance will be compared to a baseline algorithm, which gives suggestions that extend the string typed by the user to the right, e.g. optic nerve m gives optic nerve meningioma, but opt ner me does not. We conduct a user experiment in which 12 participants are invited to complete 40 snomed ct terms with the baseline algorithm and another set of 40 snomed ct terms with the multi-prefix algorithm. Our results show that users need significantly fewer keystrokes when supported by the multi-prefix algorithm than when supported by the baseline algorithm. The proposed algorithm is a competitive candidate for searching and retrieving terms from a large medical ontology. Copyright © 2011 Elsevier Inc. All rights reserved.
Implementation of a Multi-Robot Coverage Algorithm on a Two-Dimensional, Grid-Based Environment
2017-06-01
two planar laser range finders with a 180-degree field of view , color camera, vision beacons, and wireless communicator. In their system, the robots...Master’s thesis 4. TITLE AND SUBTITLE IMPLEMENTATION OF A MULTI -ROBOT COVERAGE ALGORITHM ON A TWO -DIMENSIONAL, GRID-BASED ENVIRONMENT 5. FUNDING NUMBERS...path planning coverage algorithm for a multi -robot system in a two -dimensional, grid-based environment. We assess the applicability of a topology
NASA Astrophysics Data System (ADS)
Wang, L.; Wang, T. G.; Wu, J. H.; Cheng, G. P.
2016-09-01
A novel multi-objective optimization algorithm incorporating evolution strategies and vector mechanisms, referred as VD-MOEA, is proposed and applied in aerodynamic- structural integrated design of wind turbine blade. In the algorithm, a set of uniformly distributed vectors is constructed to guide population in moving forward to the Pareto front rapidly and maintain population diversity with high efficiency. For example, two- and three- objective designs of 1.5MW wind turbine blade are subsequently carried out for the optimization objectives of maximum annual energy production, minimum blade mass, and minimum extreme root thrust. The results show that the Pareto optimal solutions can be obtained in one single simulation run and uniformly distributed in the objective space, maximally maintaining the population diversity. In comparison to conventional evolution algorithms, VD-MOEA displays dramatic improvement of algorithm performance in both convergence and diversity preservation for handling complex problems of multi-variables, multi-objectives and multi-constraints. This provides a reliable high-performance optimization approach for the aerodynamic-structural integrated design of wind turbine blade.
NASA Astrophysics Data System (ADS)
Pollender-Moreau, Olivier
Ce document présente, dans le cadre d'un contexte conceptuel, une méthode d'enchaînement servant à faire le lien entre les différentes étapes qui permettent de réaliser la simulation d'un aéronef à partir de ses données géométriques et de ses propriétés massiques. En utilisant le cas de l'avion d'affaires Hawker 800XP de la compagnie Hawker Beechcraft, on démontre, via des données, un processus de traitement par lots et une plate-forme de simulation, comment (1) modéliser la géométrie d'un aéronef en plusieurs surfaces, (2) calculer les forces aérodynamiques selon une technique connue sous le nom de
He, Chenlong; Feng, Zuren; Ren, Zhigang
2018-01-01
In this paper, we propose a connectivity-preserving flocking algorithm for multi-agent systems in which the neighbor set of each agent is determined by the hybrid metric-topological distance so that the interaction topology can be represented as the range-limited Delaunay graph, which combines the properties of the commonly used disk graph and Delaunay graph. As a result, the proposed flocking algorithm has the following advantages over the existing ones. First, range-limited Delaunay graph is sparser than the disk graph so that the information exchange among agents is reduced significantly. Second, some links irrelevant to the connectivity can be dynamically deleted during the evolution of the system. Thus, the proposed flocking algorithm is more flexible than existing algorithms, where links are not allowed to be disconnected once they are created. Finally, the multi-agent system spontaneously generates a regular quasi-lattice formation without imposing the constraint on the ratio of the sensing range of the agent to the desired distance between two adjacent agents. With the interaction topology induced by the hybrid distance, the proposed flocking algorithm can still be implemented in a distributed manner. We prove that the proposed flocking algorithm can steer the multi-agent system to a stable flocking motion, provided the initial interaction topology of multi-agent systems is connected and the hysteresis in link addition is smaller than a derived upper bound. The correctness and effectiveness of the proposed algorithm are verified by extensive numerical simulations, where the flocking algorithms based on the disk and Delaunay graph are compared.
Feng, Zuren; Ren, Zhigang
2018-01-01
In this paper, we propose a connectivity-preserving flocking algorithm for multi-agent systems in which the neighbor set of each agent is determined by the hybrid metric-topological distance so that the interaction topology can be represented as the range-limited Delaunay graph, which combines the properties of the commonly used disk graph and Delaunay graph. As a result, the proposed flocking algorithm has the following advantages over the existing ones. First, range-limited Delaunay graph is sparser than the disk graph so that the information exchange among agents is reduced significantly. Second, some links irrelevant to the connectivity can be dynamically deleted during the evolution of the system. Thus, the proposed flocking algorithm is more flexible than existing algorithms, where links are not allowed to be disconnected once they are created. Finally, the multi-agent system spontaneously generates a regular quasi-lattice formation without imposing the constraint on the ratio of the sensing range of the agent to the desired distance between two adjacent agents. With the interaction topology induced by the hybrid distance, the proposed flocking algorithm can still be implemented in a distributed manner. We prove that the proposed flocking algorithm can steer the multi-agent system to a stable flocking motion, provided the initial interaction topology of multi-agent systems is connected and the hysteresis in link addition is smaller than a derived upper bound. The correctness and effectiveness of the proposed algorithm are verified by extensive numerical simulations, where the flocking algorithms based on the disk and Delaunay graph are compared. PMID:29462217
2012-01-01
Background Multi-target therapeutics has been shown to be effective for treating complex diseases, and currently, it is a common practice to combine multiple drugs to treat such diseases to optimize the therapeutic outcomes. However, considering the huge number of possible ways to mix multiple drugs at different concentrations, it is practically difficult to identify the optimal drug combination through exhaustive testing. Results In this paper, we propose a novel stochastic search algorithm, called the adaptive reference update (ARU) algorithm, that can provide an efficient and systematic way for optimizing multi-drug cocktails. The ARU algorithm iteratively updates the drug combination to improve its response, where the update is made by comparing the response of the current combination with that of a reference combination, based on which the beneficial update direction is predicted. The reference combination is continuously updated based on the drug response values observed in the past, thereby adapting to the underlying drug response function. To demonstrate the effectiveness of the proposed algorithm, we evaluated its performance based on various multi-dimensional drug functions and compared it with existing algorithms. Conclusions Simulation results show that the ARU algorithm significantly outperforms existing stochastic search algorithms, including the Gur Game algorithm. In fact, the ARU algorithm can more effectively identify potent drug combinations and it typically spends fewer iterations for finding effective combinations. Furthermore, the ARU algorithm is robust to random fluctuations and noise in the measured drug response, which makes the algorithm well-suited for practical drug optimization applications. PMID:23134742
LORENE: Spectral methods differential equations solver
NASA Astrophysics Data System (ADS)
Gourgoulhon, Eric; Grandclément, Philippe; Marck, Jean-Alain; Novak, Jérôme; Taniguchi, Keisuke
2016-08-01
LORENE (Langage Objet pour la RElativité NumériquE) solves various problems arising in numerical relativity, and more generally in computational astrophysics. It is a set of C++ classes and provides tools to solve partial differential equations by means of multi-domain spectral methods. LORENE classes implement basic structures such as arrays and matrices, but also abstract mathematical objects, such as tensors, and astrophysical objects, such as stars and black holes.
Etude exploratoire des conceptions de la circulation sanguine aupres d'eleves de l'ordre collegial
NASA Astrophysics Data System (ADS)
Robitaille, Jean-Marc
Il existe peu d'etudes sur les conceptions touchant les domaines de la biologie, notamment sur les conceptions de la circulation sanguine Nous avons observe egalement l'absence de recherche menee aupres d'eleves de l'ordre collegial sur cette question. Nous avons voulu combler une lacune en menant une recherche sur les conceptions de la circulation sanguine aupres d'eleves de l'ordre collegial. Pour mener cette recherche nous nous sommes inspires d'une methode developpee par Treagust (1988). Le premier niveau de formulation didactique etablit l'architecture du systeme et la fonction nutritive de la circulation. Le second niveau de formulation didactique decrit et relie les parametres de la dynamique de la circulation et leur relation: Pression, Debit et Resistance. Le troisieme niveau de formulation didactique s'interesse au controle de la circulation du sang dans un contexte d'homeostasie qui implique la regulation de la pression arterielle. Nous avons construit un questionnaire en nous guidant sur les niveaux de formulation didactique et l'analyse des entrevues menees aupres de dix-huit eleves, representatifs de la population cible. Ce questionnaire fut administre a un echantillon de 2300 eleves disperses dans six colleges de la region de Montreal. Notre echantillon comprend des eleves inscrits a des programmes de l'ordre collegial en Sciences de la nature et en Techniques de la sante et qui n'ont pas suivi le cours sur la circulation sanguine. Notre analyse des reponses des eleves de notre echantillon aux questions sur le premier niveau de formulation didactique revele que la majorite des eleves considerent que le systeme circulatoire relie les organes les uns aux autres dans un circuit en serie. Notre analyse revele egalement que la majorite des eleves estiment que les nutriments sont extraits du sang par les organes selon un processus de selection base sur les besoins determines par la fonction de l'organe. Ces besoins sont differents selon les organes qui ne prelevent que les nutriments necessaires. Au second niveau les reponses des eleves de la population indiquent une conception de la dynamique cardio-vasculaire axee d'abord sur le coeur, laissant aux vaisseaux un role passif de canaux. Ces reponses indiquent egalement que la dynamique circulatoire est reduite a une sequence d'etapes ponctuelles sans relation les unes avec les autres. Au troisieme niveau les reponses des eleves de la population font etat d'une conception du controle qui privilegie la satisfaction de besoins locaux, sans relation systemique. Nos resultats suggerent que les eleves de notre echantillon affichent une plus grande concordance avec l'expert pour les questions du premier niveau (70%) que pour les niveaux II (54%) et III (50%). Notre analyse des donnees revele que l'accord avec l'expert est eleve lorsque la questions touchent la description des structures et la definition de leurs roles et plus faible lorsque les questions touchent la dynamique et le controle. Il existerait donc un niveau de formulation qui correspond a la description de structures et un autre niveau qui recoupe toute la dynamique de la circulation et son controle. Du point de vue didactique lanalyse des donnees suggere que nous ne retrouvons pas une correspondance entre les niveaux de formulation didactique et les conceptions des eleves. (Abstract shortened by UMI.)
A new hybrid meta-heuristic algorithm for optimal design of large-scale dome structures
NASA Astrophysics Data System (ADS)
Kaveh, A.; Ilchi Ghazaan, M.
2018-02-01
In this article a hybrid algorithm based on a vibrating particles system (VPS) algorithm, multi-design variable configuration (Multi-DVC) cascade optimization, and an upper bound strategy (UBS) is presented for global optimization of large-scale dome truss structures. The new algorithm is called MDVC-UVPS in which the VPS algorithm acts as the main engine of the algorithm. The VPS algorithm is one of the most recent multi-agent meta-heuristic algorithms mimicking the mechanisms of damped free vibration of single degree of freedom systems. In order to handle a large number of variables, cascade sizing optimization utilizing a series of DVCs is used. Moreover, the UBS is utilized to reduce the computational time. Various dome truss examples are studied to demonstrate the effectiveness and robustness of the proposed method, as compared to some existing structural optimization techniques. The results indicate that the MDVC-UVPS technique is a powerful search and optimization method for optimizing structural engineering problems.
Low-thrust orbit transfer optimization with refined Q-law and multi-objective genetic algorithm
NASA Technical Reports Server (NTRS)
Lee, Seungwon; Petropoulos, Anastassios E.; von Allmen, Paul
2005-01-01
An optimization method for low-thrust orbit transfers around a central body is developed using the Q-law and a multi-objective genetic algorithm. in the hybrid method, the Q-law generates candidate orbit transfers, and the multi-objective genetic algorithm optimizes the Q-law control parameters in order to simultaneously minimize both the consumed propellant mass and flight time of the orbit tranfer. This paper addresses the problem of finding optimal orbit transfers for low-thrust spacecraft.
NASA Astrophysics Data System (ADS)
Iny, David
2007-09-01
This paper addresses the out-of-sequence measurement (OOSM) problem associated with multiple platform tracking systems. The problem arises due to different transmission delays in communication of detection reports across platforms. Much of the literature focuses on the improvement to the state estimate by incorporating the OOSM. As the time lag increases, there is diminishing improvement to the state estimate. However, this paper shows that optimal processing of OOSMs may still be beneficial by improving data association as part of a multi-target tracker. This paper derives exact multi-lag algorithms with the property that the standard log likelihood track scoring is independent of the order in which the measurements are processed. The orthogonality principle is applied to generalize the method of Bar- Shalom in deriving the exact A1 algorithm for 1-lag estimation. Theory is also developed for optimal filtering of time averaged measurements and measurements correlated through periodic updates of a target aim-point. An alternative derivation of the multi-lag algorithms is also achieved using an efficient variant of the augmented state Kalman filter (AS-KF). This results in practical and reasonably efficient multi-lag algorithms. Results are compared to a well known ad hoc algorithm for incorporating OOSMs. Finally, the paper presents some simulated multi-target multi-static scenarios where there is a benefit to processing the data out of sequence in order to improve pruning efficiency.
A new chaotic multi-verse optimization algorithm for solving engineering optimization problems
NASA Astrophysics Data System (ADS)
Sayed, Gehad Ismail; Darwish, Ashraf; Hassanien, Aboul Ella
2018-03-01
Multi-verse optimization algorithm (MVO) is one of the recent meta-heuristic optimization algorithms. The main inspiration of this algorithm came from multi-verse theory in physics. However, MVO like most optimization algorithms suffers from low convergence rate and entrapment in local optima. In this paper, a new chaotic multi-verse optimization algorithm (CMVO) is proposed to overcome these problems. The proposed CMVO is applied on 13 benchmark functions and 7 well-known design problems in the engineering and mechanical field; namely, three-bar trust, speed reduce design, pressure vessel problem, spring design, welded beam, rolling element-bearing and multiple disc clutch brake. In the current study, a modified feasible-based mechanism is employed to handle constraints. In this mechanism, four rules were used to handle the specific constraint problem through maintaining a balance between feasible and infeasible solutions. Moreover, 10 well-known chaotic maps are used to improve the performance of MVO. The experimental results showed that CMVO outperforms other meta-heuristic optimization algorithms on most of the optimization problems. Also, the results reveal that sine chaotic map is the most appropriate map to significantly boost MVO's performance.
Multi-label literature classification based on the Gene Ontology graph.
Jin, Bo; Muller, Brian; Zhai, Chengxiang; Lu, Xinghua
2008-12-08
The Gene Ontology is a controlled vocabulary for representing knowledge related to genes and proteins in a computable form. The current effort of manually annotating proteins with the Gene Ontology is outpaced by the rate of accumulation of biomedical knowledge in literature, which urges the development of text mining approaches to facilitate the process by automatically extracting the Gene Ontology annotation from literature. The task is usually cast as a text classification problem, and contemporary methods are confronted with unbalanced training data and the difficulties associated with multi-label classification. In this research, we investigated the methods of enhancing automatic multi-label classification of biomedical literature by utilizing the structure of the Gene Ontology graph. We have studied three graph-based multi-label classification algorithms, including a novel stochastic algorithm and two top-down hierarchical classification methods for multi-label literature classification. We systematically evaluated and compared these graph-based classification algorithms to a conventional flat multi-label algorithm. The results indicate that, through utilizing the information from the structure of the Gene Ontology graph, the graph-based multi-label classification methods can significantly improve predictions of the Gene Ontology terms implied by the analyzed text. Furthermore, the graph-based multi-label classifiers are capable of suggesting Gene Ontology annotations (to curators) that are closely related to the true annotations even if they fail to predict the true ones directly. A software package implementing the studied algorithms is available for the research community. Through utilizing the information from the structure of the Gene Ontology graph, the graph-based multi-label classification methods have better potential than the conventional flat multi-label classification approach to facilitate protein annotation based on the literature.
Mental Computation or Standard Algorithm? Children's Strategy Choices on Multi-Digit Subtractions
ERIC Educational Resources Information Center
Torbeyns, Joke; Verschaffel, Lieven
2016-01-01
This study analyzed children's use of mental computation strategies and the standard algorithm on multi-digit subtractions. Fifty-eight Flemish 4th graders of varying mathematical achievement level were individually offered subtractions that either stimulated the use of mental computation strategies or the standard algorithm in one choice and two…
Chang, Hing-Chiu; Guhaniyogi, Shayan; Chen, Nan-kuei
2014-01-01
Purpose We report a series of techniques to reliably eliminate artifacts in interleaved echo-planar imaging (EPI) based diffusion weighted imaging (DWI). Methods First, we integrate the previously reported multiplexed sensitivity encoding (MUSE) algorithm with a new adaptive Homodyne partial-Fourier reconstruction algorithm, so that images reconstructed from interleaved partial-Fourier DWI data are free from artifacts even in the presence of either a) motion-induced k-space energy peak displacement, or b) susceptibility field gradient induced fast phase changes. Second, we generalize the previously reported single-band MUSE framework to multi-band MUSE, so that both through-plane and in-plane aliasing artifacts in multi-band multi-shot interleaved DWI data can be effectively eliminated. Results The new adaptive Homodyne-MUSE reconstruction algorithm reliably produces high-quality and high-resolution DWI, eliminating residual artifacts in images reconstructed with previously reported methods. Furthermore, the generalized MUSE algorithm is compatible with multi-band and high-throughput DWI. Conclusion The integration of the multi-band and adaptive Homodyne-MUSE algorithms significantly improves the spatial-resolution, image quality, and scan throughput of interleaved DWI. We expect that the reported reconstruction framework will play an important role in enabling high-resolution DWI for both neuroscience research and clinical uses. PMID:24925000
Fuzzy multi objective transportation problem – evolutionary algorithm approach
NASA Astrophysics Data System (ADS)
Karthy, T.; Ganesan, K.
2018-04-01
This paper deals with fuzzy multi objective transportation problem. An fuzzy optimal compromise solution is obtained by using Fuzzy Genetic Algorithm. A numerical example is provided to illustrate the methodology.
Topology Control in Aerial Multi-Beam Directional Networks
2017-04-24
underlying challenges to topology control in multi -beam direction networks. Two topology control algorithms are developed: a centralized algorithm...main beam, the gain is negligible. Thus, for topology control in a multi -beam system, two nodes that are being simultaneously transmitted to or...the network. As the network size is larger than the communication range, even the original network will require some multi -hop traffic. The second two
Multi-Source Multi-Target Dictionary Learning for Prediction of Cognitive Decline
Zhang, Jie; Li, Qingyang; Caselli, Richard J.; Thompson, Paul M.; Ye, Jieping; Wang, Yalin
2017-01-01
Alzheimer’s Disease (AD) is the most common type of dementia. Identifying correct biomarkers may determine pre-symptomatic AD subjects and enable early intervention. Recently, Multi-task sparse feature learning has been successfully applied to many computer vision and biomedical informatics researches. It aims to improve the generalization performance by exploiting the shared features among different tasks. However, most of the existing algorithms are formulated as a supervised learning scheme. Its drawback is with either insufficient feature numbers or missing label information. To address these challenges, we formulate an unsupervised framework for multi-task sparse feature learning based on a novel dictionary learning algorithm. To solve the unsupervised learning problem, we propose a two-stage Multi-Source Multi-Target Dictionary Learning (MMDL) algorithm. In stage 1, we propose a multi-source dictionary learning method to utilize the common and individual sparse features in different time slots. In stage 2, supported by a rigorous theoretical analysis, we develop a multi-task learning method to solve the missing label problem. Empirical studies on an N = 3970 longitudinal brain image data set, which involves 2 sources and 5 targets, demonstrate the improved prediction accuracy and speed efficiency of MMDL in comparison with other state-of-the-art algorithms. PMID:28943731
T-L Plane Abstraction-Based Energy-Efficient Real-Time Scheduling for Multi-Core Wireless Sensors
Kim, Youngmin; Lee, Ki-Seong; Pham, Ngoc-Son; Lee, Sun-Ro; Lee, Chan-Gun
2016-01-01
Energy efficiency is considered as a critical requirement for wireless sensor networks. As more wireless sensor nodes are equipped with multi-cores, there are emerging needs for energy-efficient real-time scheduling algorithms. The T-L plane-based scheme is known to be an optimal global scheduling technique for periodic real-time tasks on multi-cores. Unfortunately, there has been a scarcity of studies on extending T-L plane-based scheduling algorithms to exploit energy-saving techniques. In this paper, we propose a new T-L plane-based algorithm enabling energy-efficient real-time scheduling on multi-core sensor nodes with dynamic power management (DPM). Our approach addresses the overhead of processor mode transitions and reduces fragmentations of the idle time, which are inherent in T-L plane-based algorithms. Our experimental results show the effectiveness of the proposed algorithm compared to other energy-aware scheduling methods on T-L plane abstraction. PMID:27399722
An improved KCF tracking algorithm based on multi-feature and multi-scale
NASA Astrophysics Data System (ADS)
Wu, Wei; Wang, Ding; Luo, Xin; Su, Yang; Tian, Weiye
2018-02-01
The purpose of visual tracking is to associate the target object in a continuous video frame. In recent years, the method based on the kernel correlation filter has become the research hotspot. However, the algorithm still has some problems such as video capture equipment fast jitter, tracking scale transformation. In order to improve the ability of scale transformation and feature description, this paper has carried an innovative algorithm based on the multi feature fusion and multi-scale transform. The experimental results show that our method solves the problem that the target model update when is blocked or its scale transforms. The accuracy of the evaluation (OPE) is 77.0%, 75.4% and the success rate is 69.7%, 66.4% on the VOT and OTB datasets. Compared with the optimal one of the existing target-based tracking algorithms, the accuracy of the algorithm is improved by 6.7% and 6.3% respectively. The success rates are improved by 13.7% and 14.2% respectively.
NASA Astrophysics Data System (ADS)
Tavakkoli-Moghaddam, Reza; Vazifeh-Noshafagh, Samira; Taleizadeh, Ata Allah; Hajipour, Vahid; Mahmoudi, Amin
2017-01-01
This article presents a new multi-objective model for a facility location problem with congestion and pricing policies. This model considers situations in which immobile service facilities are congested by a stochastic demand following M/M/m/k queues. The presented model belongs to the class of mixed-integer nonlinear programming models and NP-hard problems. To solve such a hard model, a new multi-objective optimization algorithm based on a vibration theory, namely multi-objective vibration damping optimization (MOVDO), is developed. In order to tune the algorithms parameters, the Taguchi approach using a response metric is implemented. The computational results are compared with those of the non-dominated ranking genetic algorithm and non-dominated sorting genetic algorithm. The outputs demonstrate the robustness of the proposed MOVDO in large-sized problems.
NASA Astrophysics Data System (ADS)
Zhang, Ka; Sheng, Yehua; Wang, Meizhen; Fu, Suxia
2018-05-01
The traditional multi-view vertical line locus (TMVLL) matching method is an object-space-based method that is commonly used to directly acquire spatial 3D coordinates of ground objects in photogrammetry. However, the TMVLL method can only obtain one elevation and lacks an accurate means of validating the matching results. In this paper, we propose an enhanced multi-view vertical line locus (EMVLL) matching algorithm based on positioning consistency for aerial or space images. The algorithm involves three components: confirming candidate pixels of the ground primitive in the base image, multi-view image matching based on the object space constraints for all candidate pixels, and validating the consistency of the object space coordinates with the multi-view matching result. The proposed algorithm was tested using actual aerial images and space images. Experimental results show that the EMVLL method successfully solves the problems associated with the TMVLL method, and has greater reliability, accuracy and computing efficiency.
Data association approaches in bearings-only multi-target tracking
NASA Astrophysics Data System (ADS)
Xu, Benlian; Wang, Zhiquan
2008-03-01
According to requirements of time computation complexity and correctness of data association of the multi-target tracking, two algorithms are suggested in this paper. The proposed Algorithm 1 is developed from the modified version of dual Simplex method, and it has the advantage of direct and explicit form of the optimal solution. The Algorithm 2 is based on the idea of Algorithm 1 and rotational sort method, it combines not only advantages of Algorithm 1, but also reduces the computational burden, whose complexity is only 1/ N times that of Algorithm 1. Finally, numerical analyses are carried out to evaluate the performance of the two data association algorithms.
Guo, Y C; Wang, H; Wu, H P; Zhang, M Q
2015-12-21
Aimed to address the defects of the large mean square error (MSE), and the slow convergence speed in equalizing the multi-modulus signals of the constant modulus algorithm (CMA), a multi-modulus algorithm (MMA) based on global artificial fish swarm (GAFS) intelligent optimization of DNA encoding sequences (GAFS-DNA-MMA) was proposed. To improve the convergence rate and reduce the MSE, this proposed algorithm adopted an encoding method based on DNA nucleotide chains to provide a possible solution to the problem. Furthermore, the GAFS algorithm, with its fast convergence and global search ability, was used to find the best sequence. The real and imaginary parts of the initial optimal weight vector of MMA were obtained through DNA coding of the best sequence. The simulation results show that the proposed algorithm has a faster convergence speed and smaller MSE in comparison with the CMA, the MMA, and the AFS-DNA-MMA.
Multi-robot task allocation based on two dimensional artificial fish swarm algorithm
NASA Astrophysics Data System (ADS)
Zheng, Taixiong; Li, Xueqin; Yang, Liangyi
2007-12-01
The problem of task allocation for multiple robots is to allocate more relative-tasks to less relative-robots so as to minimize the processing time of these tasks. In order to get optimal multi-robot task allocation scheme, a twodimensional artificial swarm algorithm based approach is proposed in this paper. In this approach, the normal artificial fish is extended to be two dimension artificial fish. In the two dimension artificial fish, each vector of primary artificial fish is extended to be an m-dimensional vector. Thus, each vector can express a group of tasks. By redefining the distance between artificial fish and the center of artificial fish, the behavior of two dimension fish is designed and the task allocation algorithm based on two dimension artificial swarm algorithm is put forward. At last, the proposed algorithm is applied to the problem of multi-robot task allocation and comparer with GA and SA based algorithm is done. Simulation and compare result shows the proposed algorithm is effective.
Zheng, Wei; Yan, Xiaoyong; Zhao, Wei; Qian, Chengshan
2017-12-20
A novel large-scale multi-hop localization algorithm based on regularized extreme learning is proposed in this paper. The large-scale multi-hop localization problem is formulated as a learning problem. Unlike other similar localization algorithms, the proposed algorithm overcomes the shortcoming of the traditional algorithms which are only applicable to an isotropic network, therefore has a strong adaptability to the complex deployment environment. The proposed algorithm is composed of three stages: data acquisition, modeling and location estimation. In data acquisition stage, the training information between nodes of the given network is collected. In modeling stage, the model among the hop-counts and the physical distances between nodes is constructed using regularized extreme learning. In location estimation stage, each node finds its specific location in a distributed manner. Theoretical analysis and several experiments show that the proposed algorithm can adapt to the different topological environments with low computational cost. Furthermore, high accuracy can be achieved by this method without setting complex parameters.
Peissig, Peggy L; Rasmussen, Luke V; Berg, Richard L; Linneman, James G; McCarty, Catherine A; Waudby, Carol; Chen, Lin; Denny, Joshua C; Wilke, Russell A; Pathak, Jyotishman; Carrell, David; Kho, Abel N; Starren, Justin B
2012-01-01
There is increasing interest in using electronic health records (EHRs) to identify subjects for genomic association studies, due in part to the availability of large amounts of clinical data and the expected cost efficiencies of subject identification. We describe the construction and validation of an EHR-based algorithm to identify subjects with age-related cataracts. We used a multi-modal strategy consisting of structured database querying, natural language processing on free-text documents, and optical character recognition on scanned clinical images to identify cataract subjects and related cataract attributes. Extensive validation on 3657 subjects compared the multi-modal results to manual chart review. The algorithm was also implemented at participating electronic MEdical Records and GEnomics (eMERGE) institutions. An EHR-based cataract phenotyping algorithm was successfully developed and validated, resulting in positive predictive values (PPVs) >95%. The multi-modal approach increased the identification of cataract subject attributes by a factor of three compared to single-mode approaches while maintaining high PPV. Components of the cataract algorithm were successfully deployed at three other institutions with similar accuracy. A multi-modal strategy incorporating optical character recognition and natural language processing may increase the number of cases identified while maintaining similar PPVs. Such algorithms, however, require that the needed information be embedded within clinical documents. We have demonstrated that algorithms to identify and characterize cataracts can be developed utilizing data collected via the EHR. These algorithms provide a high level of accuracy even when implemented across multiple EHRs and institutional boundaries.
Path planning of decentralized multi-quadrotor based on fuzzy-cell decomposition algorithm
NASA Astrophysics Data System (ADS)
Iswanto, Wahyunggoro, Oyas; Cahyadi, Adha Imam
2017-04-01
The paper aims to present a design algorithm for multi quadrotor lanes in order to move towards the goal quickly and avoid obstacles in an area with obstacles. There are several problems in path planning including how to get to the goal position quickly and avoid static and dynamic obstacles. To overcome the problem, therefore, the paper presents fuzzy logic algorithm and fuzzy cell decomposition algorithm. Fuzzy logic algorithm is one of the artificial intelligence algorithms which can be applied to robot path planning that is able to detect static and dynamic obstacles. Cell decomposition algorithm is an algorithm of graph theory used to make a robot path map. By using the two algorithms the robot is able to get to the goal position and avoid obstacles but it takes a considerable time because they are able to find the shortest path. Therefore, this paper describes a modification of the algorithms by adding a potential field algorithm used to provide weight values on the map applied for each quadrotor by using decentralized controlled, so that the quadrotor is able to move to the goal position quickly by finding the shortest path. The simulations conducted have shown that multi-quadrotor can avoid various obstacles and find the shortest path by using the proposed algorithms.
Multi-agent cooperation rescue algorithm based on influence degree and state prediction
NASA Astrophysics Data System (ADS)
Zheng, Yanbin; Ma, Guangfu; Wang, Linlin; Xi, Pengxue
2018-04-01
Aiming at the multi-agent cooperative rescue in disaster, a multi-agent cooperative rescue algorithm based on impact degree and state prediction is proposed. Firstly, based on the influence of the information in the scene on the collaborative task, the influence degree function is used to filter the information. Secondly, using the selected information to predict the state of the system and Agent behavior. Finally, according to the result of the forecast, the cooperative behavior of Agent is guided and improved the efficiency of individual collaboration. The simulation results show that this algorithm can effectively solve the cooperative rescue problem of multi-agent and ensure the efficient completion of the task.
A Pseudo-Temporal Multi-Grid Relaxation Scheme for Solving the Parabolized Navier-Stokes Equations
NASA Technical Reports Server (NTRS)
White, J. A.; Morrison, J. H.
1999-01-01
A multi-grid, flux-difference-split, finite-volume code, VULCAN, is presented for solving the elliptic and parabolized form of the equations governing three-dimensional, turbulent, calorically perfect and non-equilibrium chemically reacting flows. The space marching algorithms developed to improve convergence rate and or reduce computational cost are emphasized. The algorithms presented are extensions to the class of implicit pseudo-time iterative, upwind space-marching schemes. A full approximate storage, full multi-grid scheme is also described which is used to accelerate the convergence of a Gauss-Seidel relaxation method. The multi-grid algorithm is shown to significantly improve convergence on high aspect ratio grids.
NASA Astrophysics Data System (ADS)
Karkra, Rashmi; Kumar, Prashant; Bansod, Baban K. S.; Bagchi, Sudeshna; Sharma, Pooja; Krishna, C. Rama
2017-11-01
Access to potable water for the common people is one of the most challenging tasks in the present era. Contamination of drinking water has become a serious problem due to various anthropogenic and geogenic events. The paper demonstrates the application of evolutionary algorithms, viz., particle swan optimization and genetic algorithm to 24 water samples containing eight different heavy metal ions (Cd, Cu, Co, Pb, Zn, Ar, Cr and Ni) for the optimal estimation of electrode and frequency to classify the heavy metal ions. The work has been carried out on multi-variate data, viz., single electrode multi-frequency, single frequency multi-electrode and multi-frequency multi-electrode water samples. The electrodes used are platinum, gold, silver nanoparticles and glassy carbon electrodes. Various hazardous metal ions present in the water samples have been optimally classified and validated by the application of Davis Bouldin index. Such studies are useful in the segregation of hazardous heavy metal ions found in water resources, thereby quantifying the degree of water quality.
NASA Astrophysics Data System (ADS)
Vikram, K. Arun; Ratnam, Ch; Lakshmi, VVK; Kumar, A. Sunny; Ramakanth, RT
2018-02-01
Meta-heuristic multi-response optimization methods are widely in use to solve multi-objective problems to obtain Pareto optimal solutions during optimization. This work focuses on optimal multi-response evaluation of process parameters in generating responses like surface roughness (Ra), surface hardness (H) and tool vibration displacement amplitude (Vib) while performing operations like tangential and orthogonal turn-mill processes on A-axis Computer Numerical Control vertical milling center. Process parameters like tool speed, feed rate and depth of cut are considered as process parameters machined over brass material under dry condition with high speed steel end milling cutters using Taguchi design of experiments (DOE). Meta-heuristic like Dragonfly algorithm is used to optimize the multi-objectives like ‘Ra’, ‘H’ and ‘Vib’ to identify the optimal multi-response process parameters combination. Later, the results thus obtained from multi-objective dragonfly algorithm (MODA) are compared with another multi-response optimization technique Viz. Grey relational analysis (GRA).
Integrative systems modeling and multi-objective optimization
This presentation presents a number of algorithms, tools, and methods for utilizing multi-objective optimization within integrated systems modeling frameworks. We first present innovative methods using a genetic algorithm to optimally calibrate the VELMA and SWAT ecohydrological ...
A joint precoding scheme for indoor downlink multi-user MIMO VLC systems
NASA Astrophysics Data System (ADS)
Zhao, Qiong; Fan, Yangyu; Kang, Bochao
2017-11-01
In this study, we aim to improve the system performance and reduce the implementation complexity of precoding scheme for visible light communication (VLC) systems. By incorporating the power-method algorithm and the block diagonalization (BD) algorithm, we propose a joint precoding scheme for indoor downlink multi-user multi-input-multi-output (MU-MIMO) VLC systems. In this scheme, we apply the BD algorithm to eliminate the co-channel interference (CCI) among users firstly. Secondly, the power-method algorithm is used to search the precoding weight for each user based on the optimal criterion of signal to interference plus noise ratio (SINR) maximization. Finally, the optical power restrictions of VLC systems are taken into account to constrain the precoding weight matrix. Comprehensive computer simulations in two scenarios indicate that the proposed scheme always has better bit error rate (BER) performance and lower computation complexity than that of the traditional scheme.
Automatic Clustering Using Multi-objective Particle Swarm and Simulated Annealing
Abubaker, Ahmad; Baharum, Adam; Alrefaei, Mahmoud
2015-01-01
This paper puts forward a new automatic clustering algorithm based on Multi-Objective Particle Swarm Optimization and Simulated Annealing, “MOPSOSA”. The proposed algorithm is capable of automatic clustering which is appropriate for partitioning datasets to a suitable number of clusters. MOPSOSA combines the features of the multi-objective based particle swarm optimization (PSO) and the Multi-Objective Simulated Annealing (MOSA). Three cluster validity indices were optimized simultaneously to establish the suitable number of clusters and the appropriate clustering for a dataset. The first cluster validity index is centred on Euclidean distance, the second on the point symmetry distance, and the last cluster validity index is based on short distance. A number of algorithms have been compared with the MOPSOSA algorithm in resolving clustering problems by determining the actual number of clusters and optimal clustering. Computational experiments were carried out to study fourteen artificial and five real life datasets. PMID:26132309
Image Segmentation Method Using Fuzzy C Mean Clustering Based on Multi-Objective Optimization
NASA Astrophysics Data System (ADS)
Chen, Jinlin; Yang, Chunzhi; Xu, Guangkui; Ning, Li
2018-04-01
Image segmentation is not only one of the hottest topics in digital image processing, but also an important part of computer vision applications. As one kind of image segmentation algorithms, fuzzy C-means clustering is an effective and concise segmentation algorithm. However, the drawback of FCM is that it is sensitive to image noise. To solve the problem, this paper designs a novel fuzzy C-mean clustering algorithm based on multi-objective optimization. We add a parameter λ to the fuzzy distance measurement formula to improve the multi-objective optimization. The parameter λ can adjust the weights of the pixel local information. In the algorithm, the local correlation of neighboring pixels is added to the improved multi-objective mathematical model to optimize the clustering cent. Two different experimental results show that the novel fuzzy C-means approach has an efficient performance and computational time while segmenting images by different type of noises.
A highly efficient multi-core algorithm for clustering extremely large datasets
2010-01-01
Background In recent years, the demand for computational power in computational biology has increased due to rapidly growing data sets from microarray and other high-throughput technologies. This demand is likely to increase. Standard algorithms for analyzing data, such as cluster algorithms, need to be parallelized for fast processing. Unfortunately, most approaches for parallelizing algorithms largely rely on network communication protocols connecting and requiring multiple computers. One answer to this problem is to utilize the intrinsic capabilities in current multi-core hardware to distribute the tasks among the different cores of one computer. Results We introduce a multi-core parallelization of the k-means and k-modes cluster algorithms based on the design principles of transactional memory for clustering gene expression microarray type data and categorial SNP data. Our new shared memory parallel algorithms show to be highly efficient. We demonstrate their computational power and show their utility in cluster stability and sensitivity analysis employing repeated runs with slightly changed parameters. Computation speed of our Java based algorithm was increased by a factor of 10 for large data sets while preserving computational accuracy compared to single-core implementations and a recently published network based parallelization. Conclusions Most desktop computers and even notebooks provide at least dual-core processors. Our multi-core algorithms show that using modern algorithmic concepts, parallelization makes it possible to perform even such laborious tasks as cluster sensitivity and cluster number estimation on the laboratory computer. PMID:20370922
Tse, Chi-Shing; Balota, David A; Yap, Melvin J; Duchek, Janet M; McCabe, David P
2010-05-01
The characteristics of response time (RT) distributions beyond measures of central tendency were explored in 3 attention tasks across groups of young adults, healthy older adults, and individuals with very mild dementia of the Alzheimer's type (DAT). Participants were administered computerized Stroop, Simon, and switching tasks, along with psychometric tasks that tap various cognitive abilities and a standard personality inventory (NEO-FFI). Ex-Gaussian (and Vincentile) analyses were used to capture the characteristics of the RT distributions for each participant across the 3 tasks, which afforded 3 components: mu and sigma (mean and standard deviation of the modal portion of the distribution) and tau (the positive tail of the distribution). The results indicated that across all 3 attention tasks, healthy aging produced large changes in the central tendency mu parameter of the distribution along with some change in sigma and tau (mean etap(2) = .17, .08, and .04, respectively). In contrast, early stage DAT primarily produced an increase in the tau component (mean etap(2) = .06). tau was also correlated with the psychometric measures of episodic/semantic memory, working memory, and processing speed, and with the personality traits of neuroticism and conscientiousness. Structural equation modeling indicated a unique relation between a latent tau construct (-.90), as opposed to sigma (-.09) and mu constructs (.24), with working memory measures. The results suggest a critical role of attentional control systems in discriminating healthy aging from early stage DAT and the utility of RT distribution analyses to better specify the nature of such change.
Content-Based Multi-Channel Network Coding Algorithm in the Millimeter-Wave Sensor Network
Lin, Kai; Wang, Di; Hu, Long
2016-01-01
With the development of wireless technology, the widespread use of 5G is already an irreversible trend, and millimeter-wave sensor networks are becoming more and more common. However, due to the high degree of complexity and bandwidth bottlenecks, the millimeter-wave sensor network still faces numerous problems. In this paper, we propose a novel content-based multi-channel network coding algorithm, which uses the functions of data fusion, multi-channel and network coding to improve the data transmission; the algorithm is referred to as content-based multi-channel network coding (CMNC). The CMNC algorithm provides a fusion-driven model based on the Dempster-Shafer (D-S) evidence theory to classify the sensor nodes into different classes according to the data content. By using the result of the classification, the CMNC algorithm also provides the channel assignment strategy and uses network coding to further improve the quality of data transmission in the millimeter-wave sensor network. Extensive simulations are carried out and compared to other methods. Our simulation results show that the proposed CMNC algorithm can effectively improve the quality of data transmission and has better performance than the compared methods. PMID:27376302
Classification algorithm of lung lobe for lung disease cases based on multislice CT images
NASA Astrophysics Data System (ADS)
Matsuhiro, M.; Kawata, Y.; Niki, N.; Nakano, Y.; Mishima, M.; Ohmatsu, H.; Tsuchida, T.; Eguchi, K.; Kaneko, M.; Moriyama, N.
2011-03-01
With the development of multi-slice CT technology, to obtain an accurate 3D image of lung field in a short time is possible. To support that, a lot of image processing methods need to be developed. In clinical setting for diagnosis of lung cancer, it is important to study and analyse lung structure. Therefore, classification of lung lobe provides useful information for lung cancer analysis. In this report, we describe algorithm which classify lungs into lung lobes for lung disease cases from multi-slice CT images. The classification algorithm of lung lobes is efficiently carried out using information of lung blood vessel, bronchus, and interlobar fissure. Applying the classification algorithms to multi-slice CT images of 20 normal cases and 5 lung disease cases, we demonstrate the usefulness of the proposed algorithms.
NASA Astrophysics Data System (ADS)
Oesterle, Jonathan; Lionel, Amodeo
2018-06-01
The current competitive situation increases the importance of realistically estimating product costs during the early phases of product and assembly line planning projects. In this article, several multi-objective algorithms using different dominance rules are proposed to solve the problem of selecting the most effective combination of product and assembly lines. The developed algorithms include variants of ant colony algorithms, evolutionary algorithms and imperialist competitive algorithms. The performance of each algorithm and dominance rule is analysed using five multi-objective quality indicators on fifty problem instances, and the algorithms and dominance rules are ranked using a non-parametric statistical test.
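The abstract does not spell out the dominance rules that were compared; the baseline they all refine is plain Pareto dominance. A minimal sketch, assuming minimization and invented objective vectors:

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization):
    a is no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated(points):
    """Filter a list of objective vectors down to its Pareto set."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# (cost, assembly time) pairs for candidate line configurations (toy values)
print(non_dominated([(3, 5), (2, 6), (4, 4), (3, 4)]))  # -> [(2, 6), (3, 4)]
```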
EIT image regularization by a new Multi-Objective Simulated Annealing algorithm.
Castro Martins, Thiago; Sales Guerra Tsuzuki, Marcos
2015-01-01
Multi-Objective Optimization can be used to produce regularized Electrical Impedance Tomography (EIT) images where the weight of the regularization term is not known a priori. This paper proposes a novel Multi-Objective Optimization algorithm based on Simulated Annealing tailored for EIT image reconstruction. Images are reconstructed from experimental data and compared with images from other Multi and Single Objective optimization methods. A significant performance enhancement from traditional techniques can be inferred from the results.
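The tailored Multi-Objective Simulated Annealing itself is not given in the abstract; a generic dominance-aware acceptance rule of the kind such algorithms use, here for two minimized objectives (say, data misfit and the regularization norm), might look like the following sketch. The temperature and objective values are illustrative only:

```python
import math
import random

def mosa_accept(curr, cand, temp):
    """Generic multi-objective SA acceptance for minimization: a candidate
    that is no worse in every objective is always kept; otherwise it is
    kept with a Boltzmann probability driven by the total objective increase."""
    if all(c <= x for c, x in zip(cand, curr)):
        return True
    delta = sum(max(c - x, 0.0) for c, x in zip(cand, curr))
    return random.random() < math.exp(-delta / temp)

current = (1.00, 0.80)            # (misfit, regularization norm)
candidate = (0.90, 0.85)          # better misfit, slightly worse norm
print(mosa_accept(current, candidate, temp=0.5))
```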
Ceberio, Josu; Calvo, Borja; Mendiburu, Alexander; Lozano, Jose A
2018-02-15
In the last decade, many works in combinatorial optimisation have shown that, due to the advances in multi-objective optimisation, the algorithms from this field could be used for solving single-objective problems as well. In this sense, a number of papers have proposed multi-objectivising single-objective problems in order to use multi-objective algorithms in their optimisation. In this article, we follow up this idea by presenting a methodology for multi-objectivising combinatorial optimisation problems based on elementary landscape decompositions of their objective function. Under this framework, each of the elementary landscapes obtained from the decomposition is considered as an independent objective function to optimise. In order to illustrate this general methodology, we consider four problems from different domains: the quadratic assignment problem and the linear ordering problem (permutation domain), the 0-1 unconstrained quadratic optimisation problem (binary domain), and the frequency assignment problem (integer domain). We implemented two widely known multi-objective algorithms, NSGA-II and SPEA2, and compared their performance with that of a single-objective GA. The experiments conducted on a large benchmark of instances of the four problems show that the multi-objective algorithms clearly outperform the single-objective approaches. Furthermore, a discussion on the results suggests that the multi-objective space generated by this decomposition enhances the exploration ability, thus permitting NSGA-II and SPEA2 to obtain better results in the majority of the tested instances.
Page, Andrew J.; Keane, Thomas M.; Naughton, Thomas J.
2010-01-01
We present a multi-heuristic evolutionary task allocation algorithm to dynamically map tasks to processors in a heterogeneous distributed system. It utilizes a genetic algorithm, combined with eight common heuristics, in an effort to minimize the total execution time. It operates on batches of unmapped tasks and can preemptively remap tasks to processors. The algorithm has been implemented on a Java distributed system and evaluated with a set of six problems from the areas of bioinformatics, biomedical engineering, computer science and cryptography. Experiments using up to 150 heterogeneous processors show that the algorithm achieves better efficiency than other state-of-the-art heuristic algorithms. PMID:20862190
Genetics-based control of a mimo boiler-turbine plant
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dimeo, R.M.; Lee, K.Y.
1994-12-31
A genetic algorithm is used to develop an optimal controller for a non-linear, multi-input/multi-output boiler-turbine plant. The algorithm is used to train a control system for the plant over a wide operating range in an effort to obtain better performance. The results of the genetic algorithm's controller are compared with those of a controller designed from the linearized plant model at a nominal operating point. Because the genetic algorithm is well suited to solving traditionally difficult optimization problems, it is found to be capable of developing the controller from input/output information only. This controller achieves a performance comparable to the standard linear quadratic regulator.
Photoemission of CsI induced by an intense femtosecond laser pulse
NASA Astrophysics Data System (ADS)
Belsky, A.; Vasil'Ev, A.; Yatsenko, B.; Bachau, H.; Martin, P.; Geoffroy, G.; Guizard, S.
2003-06-01
We have measured, for the first time, the photoelectron spectra emitted by a wide-band-gap insulating crystal, CsI, with a dynamic range of 10^6 counts/s, excited by the high-repetition-rate laser source of C.E.L.I.A (800 nm, 40 fs, 1 kHz, 1 TW). Electron emission at energies up to several tens of electron-volts was observed for pulse irradiances between 0.5 and 3 TW/cm^2, which is relatively weak compared with the irradiances needed to accelerate the electrons of an atom to the same energies. All of these spectra contain, in particular, two bands in the low electron-energy region (<5 eV) that were also observed in previous studies. The most energetic electrons form an intense, slightly structured plateau terminated by an exponential cut-off; for 3 TW/cm^2 pulses this cut-off lies at 27 eV. The inadequacy of the electron-photon-phonon mechanism, regarded until now as the main process heating electrons in solids interacting non-destructively with a laser field, led us to propose an alternative mechanism. This model highlights direct multiphoton transitions within the conduction band of the solid, which are unavoidable given its multi-branch electronic structure.
Rapid Calculation of Max-Min Fair Rates for Multi-Commodity Flows in Fat-Tree Networks
Mollah, Md Atiqul; Yuan, Xin; Pakin, Scott; ...
2017-08-29
Max-min fairness is often used in the performance modeling of interconnection networks. Existing methods to compute max-min fair rates for multi-commodity flows have high complexity and are computationally infeasible for large networks. In this paper, we show that by considering topological features, this problem can be solved efficiently for the fat-tree topology that is widely used in data centers and high performance compute clusters. Several efficient new algorithms are developed for this problem, including a parallel algorithm that can take advantage of multi-core and shared-memory architectures. Using these algorithms, we demonstrate that it is possible to find the max-min fair rate allocation for multi-commodity flows in fat-tree networks that support tens of thousands of nodes. We evaluate the run-time performance of the proposed algorithms and show improvements of orders of magnitude over the previously best known method. Finally, we further demonstrate a new application of max-min fair rate allocation that is only computationally feasible using our new algorithms.
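The fat-tree-specific algorithms are not reproduced in the abstract; the definition they accelerate, max-min fairness by progressive filling, can be sketched generically (flow routes and link capacities below are invented). Rates of all unfrozen flows are raised uniformly until some link saturates, and the flows crossing it are then frozen:

```python
def max_min_fair(flows, links, capacity):
    """Progressive filling for max-min fair rates.
    `flows` maps flow id -> set of link ids it traverses."""
    rate = {f: 0.0 for f in flows}
    cap = dict(capacity)
    active = set(flows)
    while active:
        # smallest equal increment that saturates some used link
        inc = min(cap[l] / sum(1 for f in active if l in flows[f])
                  for l in links if any(l in flows[f] for f in active))
        for f in active:
            rate[f] += inc
        for l in links:
            users = [f for f in active if l in flows[f]]
            cap[l] -= inc * len(users)
            if users and cap[l] <= 1e-12:     # link saturated: freeze its flows
                active -= set(users)
    return rate

flows = {"f1": {"a"}, "f2": {"a", "b"}, "f3": {"b"}}
print(max_min_fair(flows, {"a", "b"}, {"a": 1.0, "b": 2.0}))
# -> f1 and f2 share link a at 0.5 each; f3 takes the remaining 1.5 on b
```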
Katouda, Michio; Naruse, Akira; Hirano, Yukihiko; Nakajima, Takahito
2016-11-15
A new parallel algorithm and its implementation for the RI-MP2 energy calculation utilizing peta-flop-class many-core supercomputers are presented. Some improvements from the previous algorithm (J. Chem. Theory Comput. 2013, 9, 5373) have been performed: (1) a dual-level hierarchical parallelization scheme that enables the use of more than 10,000 Message Passing Interface (MPI) processes and (2) a new data communication scheme that reduces network communication overhead. A multi-node and multi-GPU implementation of the present algorithm is presented for calculations on a central processing unit (CPU)/graphics processing unit (GPU) hybrid supercomputer. Benchmark results of the new algorithm and its implementation using the K computer (CPU clustering system) and TSUBAME 2.5 (CPU/GPU hybrid system) demonstrate high efficiency. The peak performance of 3.1 PFLOPS is attained using 80,199 nodes of the K computer. The peak performance of the multi-node and multi-GPU implementation is 514 TFLOPS using 1349 nodes and 4047 GPUs of TSUBAME 2.5. © 2016 Wiley Periodicals, Inc.
A Multi-Scale Settlement Matching Algorithm Based on ARG
NASA Astrophysics Data System (ADS)
Yue, Han; Zhu, Xinyan; Chen, Di; Liu, Lingjia
2016-06-01
Homonymous entity matching is an important part of multi-source spatial data integration, automatic updating and change detection. Considering the low accuracy of existing methods in matching multi-scale settlement data, an algorithm based on the Attributed Relational Graph (ARG) is proposed. The algorithm first divides two settlement scenes at different scales into blocks using the small-scale road network and constructs local ARGs in each block. It then ascertains candidate sets through merging procedures and obtains the optimal matching pairs by iteratively comparing the similarity of the ARGs. Finally, the corresponding relations between settlements at large and small scales are identified. A demonstration at the end of this article indicates that the proposed algorithm is capable of handling sophisticated cases.
An improved feature extraction algorithm based on KAZE for multi-spectral image
NASA Astrophysics Data System (ADS)
Yang, Jianping; Li, Jun
2018-02-01
Multi-spectral images contain abundant spectral information and are widely used in fields such as resource exploration, meteorological observation and modern military applications. Image preprocessing, such as feature extraction and matching, is indispensable when dealing with multi-spectral remote sensing images. Although feature matching algorithms based on a linear scale space, such as SIFT and SURF, are robust, their local accuracy cannot be guaranteed. This paper therefore proposes an improved KAZE algorithm, based on a nonlinear scale space, to increase the number of features and to enhance the matching rate by using an adjusted-cosine vector. The experimental results show that the feature count and matching rate of the improved KAZE are markedly higher than those of the original KAZE algorithm.
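The abstract does not define the adjusted-cosine vector precisely; one plausible reading, mean-centring each descriptor before taking the cosine similarity (which cancels a constant intensity offset between spectral bands), is sketched below. The threshold and the greedy matching loop are illustrative assumptions, not the paper's procedure:

```python
import numpy as np

def adjusted_cosine(a, b, eps=1e-12):
    """Cosine similarity of mean-centred descriptor vectors."""
    a, b = a - a.mean(), b - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + eps))

def match_descriptors(desc1, desc2, thresh=0.9):
    """Greedy best-score matching between two sets of keypoint descriptors."""
    pairs = []
    for i, d in enumerate(desc1):
        scores = [adjusted_cosine(d, e) for e in desc2]
        j = int(np.argmax(scores))
        if scores[j] > thresh:
            pairs.append((i, j))
    return pairs
```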
Rasmussen, Luke V; Berg, Richard L; Linneman, James G; McCarty, Catherine A; Waudby, Carol; Chen, Lin; Denny, Joshua C; Wilke, Russell A; Pathak, Jyotishman; Carrell, David; Kho, Abel N; Starren, Justin B
2012-01-01
Objective There is increasing interest in using electronic health records (EHRs) to identify subjects for genomic association studies, due in part to the availability of large amounts of clinical data and the expected cost efficiencies of subject identification. We describe the construction and validation of an EHR-based algorithm to identify subjects with age-related cataracts. Materials and methods We used a multi-modal strategy consisting of structured database querying, natural language processing on free-text documents, and optical character recognition on scanned clinical images to identify cataract subjects and related cataract attributes. Extensive validation on 3657 subjects compared the multi-modal results to manual chart review. The algorithm was also implemented at participating electronic MEdical Records and GEnomics (eMERGE) institutions. Results An EHR-based cataract phenotyping algorithm was successfully developed and validated, resulting in positive predictive values (PPVs) >95%. The multi-modal approach increased the identification of cataract subject attributes by a factor of three compared to single-mode approaches while maintaining high PPV. Components of the cataract algorithm were successfully deployed at three other institutions with similar accuracy. Discussion A multi-modal strategy incorporating optical character recognition and natural language processing may increase the number of cases identified while maintaining similar PPVs. Such algorithms, however, require that the needed information be embedded within clinical documents. Conclusion We have demonstrated that algorithms to identify and characterize cataracts can be developed utilizing data collected via the EHR. These algorithms provide a high level of accuracy even when implemented across multiple EHRs and institutional boundaries. PMID:22319176
Multi-objective evolutionary algorithms for fuzzy classification in survival prediction.
Jiménez, Fernando; Sánchez, Gracia; Juárez, José M
2014-03-01
This paper presents a novel rule-based fuzzy classification methodology for survival/mortality prediction in severe burnt patients. Due to the ethical aspects involved in this medical scenario, physicians tend not to accept a computer-based evaluation unless they understand why and how such a recommendation is given. Therefore, any fuzzy classifier model must be both accurate and interpretable. The proposed methodology is a three-step process: (1) multi-objective constrained optimization of a patient's data set, using Pareto-based elitist multi-objective evolutionary algorithms to maximize accuracy and minimize the complexity (number of rules) of classifiers, subject to interpretability constraints; this step produces a set of alternative (Pareto) classifiers; (2) linguistic labeling, which assigns a linguistic label to each fuzzy set of the classifiers; this step is essential to the interpretability of the classifiers; (3) decision making, whereby a classifier is chosen, if it is satisfactory, according to the preferences of the decision maker. If no classifier is satisfactory for the decision maker, the process starts again in step (1) with a different input parameter set. The performance of three multi-objective evolutionary algorithms, the niched pre-selection multi-objective algorithm, the elitist Pareto-based multi-objective evolutionary algorithm for diversity reinforcement (ENORA) and the non-dominated sorting genetic algorithm (NSGA-II), was tested using a patient's data set from an intensive care burn unit and a data set from a standard machine learning repository. The results are compared using the hypervolume multi-objective metric. In addition, the results have been compared with other non-evolutionary techniques and validated with a multi-objective cross-validation technique. Our proposal improves the classification rate obtained by other non-evolutionary techniques (decision trees, artificial neural networks, Naive Bayes, and case-based reasoning), obtaining with ENORA a classification rate of 0.9298, specificity of 0.9385, and sensitivity of 0.9364, with 14.2 interpretable fuzzy rules on average. Our proposal improves the accuracy and interpretability of the classifiers compared with other non-evolutionary techniques. We also conclude that ENORA outperforms the niched pre-selection and NSGA-II algorithms. Moreover, given that our multi-objective evolutionary methodology is non-combinational, based on real-parameter optimization, the time cost is significantly reduced compared with other evolutionary approaches in the literature based on combinational optimization. Copyright © 2014 Elsevier B.V. All rights reserved.
Towards Symbolic Model Checking for Multi-Agent Systems via OBDDs
NASA Technical Reports Server (NTRS)
Raimondi, Franco; Lomuscio, Alessio
2004-01-01
We present an algorithm for model checking temporal-epistemic properties of multi-agent systems, expressed in the formalism of interpreted systems. We first introduce a technique for the translation of interpreted systems into boolean formulae, and then present a model-checking algorithm based on this translation. The algorithm is based on OBDDs, as they offer a compact and efficient representation for boolean formulae.
Li, Jianjun; Zhang, Rubo; Yang, Yu
2017-01-01
This paper studies a distributed task planning model for multi-autonomous underwater vehicles (MAUV). A scroll time domain quantum artificial bee colony (STDQABC) optimization algorithm is proposed to solve the multi-AUV optimal task planning scheme. In the uncertain marine environment, the rolling time domain control technique is used to realize a numerical optimization in a narrowed time range. Rolling time domain control is one of the better task planning techniques, as it can greatly reduce the computational workload and realize the tradeoff between AUV dynamics, environment and cost. Finally, a simulation experiment was performed to evaluate the distributed task planning performance of the scroll time domain quantum bee colony optimization algorithm. The simulation results demonstrate that the STDQABC algorithm converges faster than the QABC and ABC algorithms in terms of both iterations and running time. The STDQABC algorithm can effectively improve MAUV distributed task planning performance, complete the task goal and obtain an approximately optimal solution. PMID:29186166
Deadbeat Predictive Controllers
NASA Technical Reports Server (NTRS)
Juang, Jer-Nan; Phan, Minh
1997-01-01
Several new computational algorithms are presented to compute the deadbeat predictive control law. The first algorithm makes use of a multi-step-ahead output prediction to compute the control law without explicitly calculating the controllability matrix; the system identification must be performed first, and then the predictive control law is designed. The second algorithm uses the input and output data directly to compute the feedback law, combining the system identification and the predictive control law into one formulation. The third algorithm uses an observable-canonical form realization to design the predictive controller. The relationship between all three algorithms is established through the use of the state-space representation. All algorithms are applicable to multi-input, multi-output systems with disturbance inputs. In addition to the feedback terms, feedforward terms may also be added for disturbance inputs if they are measurable. Although the feedforward terms do not influence the stability of the closed-loop feedback law, they enhance the performance of the controlled system.
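As a rough illustration of the multi-step-ahead output prediction behind the first algorithm, the sketch below stacks the predicted outputs of a known discrete state-space model and solves a least squares problem for the input sequence that zeroes them. This is a generic, textbook-style deadbeat computation on an assumed model, not the authors' identification-based law:

```python
import numpy as np

def deadbeat_inputs(A, B, C, x0, p):
    """Find u(0..p-1) zeroing predicted outputs y(1..p) of
    x+ = A x + B u, y = C x, via y = F x0 + G u with G block-Toeplitz."""
    F = np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(1, p + 1)])
    blocks = [C @ np.linalg.matrix_power(A, k) @ B for k in range(p)]
    ny, nu = blocks[0].shape
    G = np.zeros((p * ny, p * nu))
    for i in range(p):
        for j in range(i + 1):
            G[i*ny:(i+1)*ny, j*nu:(j+1)*nu] = blocks[i - j]
    u = np.linalg.lstsq(G, -F @ x0, rcond=None)[0]
    return u.reshape(p, nu)

A = np.array([[1.0, 0.1], [0.0, 1.0]])   # toy double-integrator-like plant
B = np.array([[0.0], [0.1]])
C = np.array([[1.0, 0.0]])
print(deadbeat_inputs(A, B, C, np.array([1.0, 0.0]), p=4))
```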
NASA Astrophysics Data System (ADS)
Yan, Dan; Bai, Lianfa; Zhang, Yi; Han, Jing
2018-02-01
To address the missing-detail and performance problems of colorization based on sparse representation, we propose a conceptual model framework for colorizing gray-scale images, and on this framework we build a multi-sparse dictionary colorization algorithm based on feature classification and detail enhancement (CEMDC). The algorithm achieves a natural colorized effect for a gray-scale image that is consistent with human vision. First, the algorithm establishes a multi-sparse dictionary classification colorization model. Then, to improve the accuracy of the classification, a corresponding local constraint algorithm is proposed. Finally, we propose a detail enhancement based on the Laplacian pyramid, which is effective in solving the problem of missing details and improving the speed of image colorization. In addition, the algorithm not only realizes the colorization of visible-light gray-scale images, but can also be applied to other areas, such as color transfer between color images, colorizing gray fusion images, and infrared images.
Evaluation of Genetic Algorithm Concepts Using Model Problems. Part 2; Multi-Objective Optimization
NASA Technical Reports Server (NTRS)
Holst, Terry L.; Pulliam, Thomas H.
2003-01-01
A genetic algorithm approach suitable for solving multi-objective optimization problems is described and evaluated using a series of simple model problems. Several new features, including a binning selection algorithm and a gene-space transformation procedure, are included. The genetic algorithm is suitable for finding Pareto optimal solutions in search spaces that are defined by any number of genes and that contain any number of local extrema. Results indicate that the genetic algorithm optimization approach is flexible in application and extremely reliable, providing optimal results for all optimization problems attempted. The binning algorithm generally provides Pareto front quality enhancements and moderate convergence efficiency improvements for most of the model problems. The gene-space transformation procedure provides a large convergence efficiency enhancement for problems with non-convoluted Pareto fronts and a degradation in efficiency for problems with convoluted Pareto fronts. The most difficult problems (multi-mode search spaces with a large number of genes and convoluted Pareto fronts) require a large number of function evaluations for convergence, but the GA always converges.
Performance of Multi-chaotic PSO on a shifted benchmark functions set
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pluhacek, Michal; Senkerik, Roman; Zelinka, Ivan
2015-03-10
In this paper the performance of the Multi-chaotic PSO algorithm is investigated using two shifted benchmark functions. The purpose of shifted benchmark functions is to simulate time-variant real-world problems. The results of the multi-chaotic PSO are compared with the canonical version of the algorithm. It is concluded that the multi-chaotic approach can lead to better results in the optimization of shifted functions.
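The abstract does not name the chaotic maps used; a common way to build a multi-chaotic PSO is to replace the uniform random draws in the velocity update with sequences from chaotic maps such as the logistic map, roughly as in this sketch (coefficients are conventional PSO defaults, not the paper's settings):

```python
import numpy as np

def logistic_map(x):
    return 4.0 * x * (1.0 - x)      # fully chaotic regime of the logistic map

def chaotic_pso(f, dim, n=20, iters=200, w=0.72, c1=1.49, c2=1.49, seed=1):
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n, dim))
    v = np.zeros((n, dim))
    pbest = x.copy()
    pval = np.apply_along_axis(f, 1, x)
    r1 = rng.uniform(0.1, 0.9, (n, dim))   # seeds for the chaotic sequences
    r2 = rng.uniform(0.1, 0.9, (n, dim))
    for _ in range(iters):
        g = pbest[pval.argmin()]
        r1, r2 = logistic_map(r1), logistic_map(r2)   # chaotic "random" draws
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        val = np.apply_along_axis(f, 1, x)
        better = val < pval
        pbest[better], pval[better] = x[better], val[better]
    return pbest[pval.argmin()], float(pval.min())

print(chaotic_pso(lambda z: float((z ** 2).sum()), dim=5))  # sphere function
```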
Shi, Jun; Liu, Xiao; Li, Yan; Zhang, Qi; Li, Yingjie; Ying, Shihui
2015-10-30
Electroencephalography (EEG) based sleep staging is commonly used in clinical routine. Feature extraction and representation plays a crucial role in EEG-based automatic classification of sleep stages. Sparse representation (SR) is a state-of-the-art unsupervised feature learning method suitable for EEG feature representation. Collaborative representation (CR) is an effective data coding method used as a classifier. Here we use CR as a data representation method to learn features from the EEG signal. A joint collaboration model is established to develop a multi-view learning algorithm and generate joint CR (JCR) codes to fuse and represent multi-channel EEG signals. A two-stage multi-view learning-based sleep staging framework is then constructed, in which JCR and joint sparse representation (JSR) algorithms first fuse and learn the feature representation from multi-channel EEG signals, respectively. Multi-view JCR and JSR features are then integrated and sleep stages are recognized by a multiple kernel extreme learning machine (MK-ELM) algorithm with grid search. The proposed two-stage multi-view learning algorithm achieves superior performance for sleep staging. With a K-means clustering based dictionary, the mean classification accuracy, sensitivity and specificity are 81.10 ± 0.15%, 71.42 ± 0.66% and 94.57 ± 0.07%, respectively; while with the dictionary learned using the submodular optimization method, they are 80.29 ± 0.22%, 71.26 ± 0.78% and 94.38 ± 0.10%, respectively. The two-stage multi-view learning based sleep staging framework outperforms all other classification methods compared in this work, while JCR is superior to JSR. The proposed multi-view learning framework has the potential for sleep staging based on multi-channel or multi-modality polysomnography signals. Copyright © 2015 Elsevier B.V. All rights reserved.
Conductivity in the two-dimensional Hubbard model at weak coupling
NASA Astrophysics Data System (ADS)
Bergeron, Dominic
The two-dimensional (2D) Hubbard model is often considered the minimal model for the copper-oxide high-critical-temperature superconductors (HTSC). On a square lattice, this model exhibits the phases that are common to all HTSC: the antiferromagnetic phase, the superconducting phase and the so-called pseudogap phase. It has no exact solution; however, several approximate methods allow its properties to be studied numerically. Optical and transport properties are well known in the HTSC and are therefore good candidates for validating a theoretical model and helping to better understand the physics of these materials. This thesis concerns the calculation of those properties for the 2D Hubbard model at weak to intermediate coupling. The calculation method used is the two-particle self-consistent (TPSC) approach, which is non-perturbative and includes the effect of spin and charge fluctuations at all wavelengths. The complete derivation of the conductivity expression within the TPSC approach is presented. This expression contains what are called vertex corrections, which account for correlations between quasiparticles. To make the numerical computation of these corrections possible, algorithms using, among other things, fast Fourier transforms and cubic splines are developed. The calculations are done for the square lattice with nearest-neighbour hopping around the antiferromagnetic critical point. At dopings below the critical point, the optical conductivity shows a mid-infrared bump at low temperature, as observed in several HTSC. In the resistivity as a function of temperature, insulating behaviour is found in the pseudogap when vertex corrections are neglected, and metallic behaviour when they are taken into account. Near the critical point, the resistivity is linear in T at low temperature and becomes progressively proportional to T^2 at high doping. Some results with hopping to more distant neighbours are also presented. Keywords: Hubbard, quantum critical point, conductivity, vertex corrections
Experiments with a Parallel Multi-Objective Evolutionary Algorithm for Scheduling
NASA Technical Reports Server (NTRS)
Brown, Matthew; Johnston, Mark D.
2013-01-01
Evolutionary multi-objective algorithms have great potential for scheduling in those situations where tradeoffs among competing objectives represent a key requirement. One challenge, however, is runtime performance, as a consequence of evolving not just a single schedule, but an entire population, while attempting to sample the Pareto frontier as accurately and uniformly as possible. The growing availability of multi-core processors in end user workstations, and even laptops, has raised the question of the extent to which such hardware can be used to speed up evolutionary algorithms. In this paper we report on early experiments in parallelizing a Generalized Differential Evolution (GDE) algorithm for scheduling long-range activities on NASA's Deep Space Network. Initial results show that significant speedups can be achieved, but that performance does not necessarily improve as more cores are utilized. We describe our preliminary results and some initial suggestions from parallelizing the GDE algorithm. Directions for future work are outlined.
NASA Astrophysics Data System (ADS)
LeBlanc, Luc R.
Composite materials are increasingly used in fields such as aerospace, high-performance cars and sporting goods, to name a few. Studies have shown that exposure to moisture degrades the strength of composites by promoting the initiation and propagation of delamination. Of these studies, very few address the effect of moisture on delamination initiation under mixed-mode I/II loading, and none addresses the effect of moisture on the delamination growth rate under mixed-mode I/II loading in a composite. The first part of this thesis consists of determining the effects of moisture on delamination growth under mixed-mode I/II loading. Specimens of a unidirectional carbon/epoxy composite (G40-800/5276-1) were immersed in a distilled water bath at 70°C until saturation. Quasi-static tests over a range of mode I/II mixities (0%, 25%, 50%, 75% and 100%) were performed to determine the effects of moisture on the delamination resistance of the composite. Fatigue tests were carried out, with the same range of mode I/II mixities, to determine the effect of moisture on delamination initiation and on the delamination growth rate. The quasi-static results showed that moisture reduces the delamination resistance of a carbon/epoxy composite over the whole range of mode I/II mixities, except in mode I, where the delamination resistance increases after moisture exposure. Under fatigue loading, moisture accelerates delamination initiation and increases the growth rate for all mode I/II mixities. The experimental data collected were used to determine which of the static delamination criteria and mixed-mode I/II fatigue growth-rate models proposed in the literature best represent the delamination of the composite studied. A regression curve was used to determine the best fit between the experimental data and the static delamination criteria considered, and a regression surface was used to determine the best fit between the experimental data and the fatigue growth-rate models considered. According to these fits, the best static delamination criterion is the B-K criterion and the best fatigue growth model is the Kenane-Benzeggagh model. Numerical models can be used to predict delamination when designing complex parts. Predicting the delamination length under fatigue loading is very important to ensure that an interlaminar crack will not grow excessively and cause the failure of a part before the end of its design life. Following the recent trend, such models are often based on the cohesive zone approach within a finite element formulation. In the work presented in this thesis, the fatigue delamination growth model of Landry & LaPlante (2012) was improved by adding the treatment of mixed-mode I/II loadings and by modifying the algorithm that computes the maximum delamination driving force. The cohesive zone parameters were calibrated from the quasi-static experiments in modes I and II.
Numerical simulations of the quasi-static mixed-mode I/II tests, with dry and wet specimens, were compared with the experiments. Fatigue simulations were also performed and compared with the experimental delamination growth rates. The numerical results for both the quasi-static and fatigue tests showed good correlation with the experimental results over the whole range of mode I/II mixities studied.
Ma, Changxi; Hao, Wei; Pan, Fuquan; Xiang, Wang
2018-01-01
Route optimization is one of the basic steps in ensuring the safety of hazardous materials transportation. The optimization scheme may itself pose a safety risk if road screening is not completed before the distribution route is optimized. For the road screening problem in hazardous materials transportation, a screening algorithm is built based on a genetic algorithm and a Levenberg-Marquardt neural network (GA-LM-NN), analysing 15 attributes of each road network section. A multi-objective robust optimization model with adjustable robustness is then constructed for the hazardous materials transportation problem of a single distribution center, minimizing transportation risk and time. A multi-objective genetic algorithm is designed to solve the problem according to the characteristics of the model. The algorithm uses an improved strategy for the selection operation, applies partial matching crossover and single ortho swap methods for the crossover and mutation operations, and employs an exclusive method to construct Pareto optimal solutions. Case studies show that the sets of roads suitable for hazardous materials transportation can be found quickly with the proposed GA-LM-NN screening algorithm, while distribution route Pareto solutions with different levels of robustness can be found rapidly with the proposed multi-objective robust optimization model and algorithm.
A new implementation of the CMRH method for solving dense linear systems
NASA Astrophysics Data System (ADS)
Heyouni, M.; Sadok, H.
2008-04-01
The CMRH method [H. Sadok, Methodes de projections pour les systemes lineaires et non lineaires, Habilitation thesis, University of Lille 1, Lille, France, 1994; H. Sadok, CMRH: A new method for solving nonsymmetric linear systems based on the Hessenberg reduction algorithm, Numer. Algorithms 20 (1999) 303-321] is an algorithm for solving nonsymmetric linear systems in which the Arnoldi component of GMRES is replaced by the Hessenberg process, which generates Krylov basis vectors that are orthogonal to standard unit basis vectors rather than mutually orthogonal. The iterate is formed from these vectors by solving a small least squares problem involving a Hessenberg matrix. Like GMRES, this method requires one matrix-vector product per iteration. However, it can be implemented to require half as much arithmetic work and less storage. Moreover, numerical experiments show that this method performs accurately and reduces the residual about as fast as GMRES. With this new implementation, we show that the CMRH method is the only long-recurrence method that does not require storing both the entire Krylov basis and the original matrix at the same time, as the GMRES algorithm does. A comparison with Gaussian elimination is provided.
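The Hessenberg process underlying CMRH is easy to sketch: each new Krylov vector is reduced against the earlier basis vectors by elimination (zeroing its leading entries) instead of orthogonalization. The pivotless variant below assumes b[0] != 0 and no breakdown; a practical code would pivot:

```python
import numpy as np

def hessenberg_process(A, b, m):
    """m steps of the (pivotless) Hessenberg process: returns L with unit
    diagonal and zeros above it, and (m+1) x m H with A @ L[:, :m] = L @ H."""
    n = b.size
    L = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    L[:, 0] = b / b[0]                    # assumes b[0] != 0 (no pivoting)
    for j in range(m):
        u = A @ L[:, j]
        for i in range(j + 1):            # eliminate the leading entries
            H[i, j] = u[i]
            u -= H[i, j] * L[:, i]
        H[j + 1, j] = u[j + 1]            # assumes no breakdown
        L[:, j + 1] = u / H[j + 1, j]
    return L, H

A = np.array([[4.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]])
b = np.array([1.0, 0.0, 0.0])
L, H = hessenberg_process(A, b, 2)
print(np.allclose(A @ L[:, :2], L @ H))   # the Hessenberg relation holds
```

CMRH then minimizes the residual through a small least squares problem with H, much as GMRES does with its Arnoldi-generated Hessenberg matrix.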
Sun, J; Wang, T; Li, Z D; Shao, Y; Zhang, Z Y; Feng, H; Zou, D H; Chen, Y J
2017-12-01
To reconstruct a vehicle-bicycle-cyclist crash and analyse the injuries using 3D laser scanning technology, multi-rigid-body dynamics and an optimized genetic algorithm, and to provide a biomechanical basis for the forensic identification of the cause of death. The vehicle was measured by 3D laser scanning, and multi-rigid-body models of the cyclist, bicycle and vehicle were developed based on the measurements. The value range of the optimization variables was set, and a multi-objective genetic algorithm, the non-dominated sorting genetic algorithm, was used to find the optimal solutions, which were compared to the recording of the surveillance video near the accident scene. The laser-scanning reconstruction of the vehicle was satisfactory. In the optimal solutions found by the genetic algorithm, the dynamic behaviours of the dummy, bicycle and vehicle corresponded to those recorded by the surveillance video, and the injury parameters of the dummy were consistent with the position and nature of the real injuries of the cyclist in the accident. The motion status before the accident, the crash damage process and a mechanical analysis of the victim's injuries can be reconstructed using 3D laser scanning technology, multi-rigid-body dynamics and an optimized genetic algorithm, which have application value in identifying the manner of injury and analysing the cause of death in traffic accidents. Copyright© by the Editorial Department of Journal of Forensic Medicine
NASA Astrophysics Data System (ADS)
Tang, Xiangyang
2003-05-01
In multi-slice helical CT, the single-tilted-plane-based reconstruction algorithm has been proposed to combat helical and cone beam artifacts by tilting a reconstruction plane to fit a helical source trajectory optimally. Furthermore, to improve the noise characteristics or dose efficiency of the single-tilted-plane-based reconstruction algorithm, the multi-tilted-plane-based reconstruction algorithm has been proposed, in which the reconstruction plane deviates from the globally optimized pose due to an extra rotation along the third axis. As a result, the capability of the multi-tilted-plane-based reconstruction algorithm to suppress helical and cone beam artifacts is compromised. An optimized tilted-plane-based reconstruction algorithm is proposed in this paper, in which a matched view weighting strategy is used to optimize both the suppression of helical and cone beam artifacts and the noise characteristics. A helical body phantom is employed to quantitatively evaluate the imaging performance of the matched view weighting approach by tabulating artifact index and noise characteristics, showing that matched view weighting significantly improves both helical artifact suppression and noise characteristics or dose efficiency in comparison to the case in which non-matched view weighting is applied. Finally, it is believed that the matched view weighting approach is of practical importance in the development of multi-slice helical CT, because it maintains the computational structure of fan beam filtered backprojection and demands no extra computational resources.
MACVIA clinical decision algorithm in adolescents and adults with allergic rhinitis.
Bousquet, Jean; Schünemann, Holger J; Hellings, Peter W; Arnavielhe, Sylvie; Bachert, Claus; Bedbrook, Anna; Bergmann, Karl-Christian; Bosnic-Anticevich, Sinthia; Brozek, Jan; Calderon, Moises; Canonica, G Walter; Casale, Thomas B; Chavannes, Niels H; Cox, Linda; Chrystyn, Henry; Cruz, Alvaro A; Dahl, Ronald; De Carlo, Giuseppe; Demoly, Pascal; Devillier, Philippe; Dray, Gérard; Fletcher, Monica; Fokkens, Wytske J; Fonseca, Joao; Gonzalez-Diaz, Sandra N; Grouse, Lawrence; Keil, Thomas; Kuna, Piotr; Larenas-Linnemann, Désirée; Lodrup Carlsen, Karin C; Meltzer, Eli O; Mullol, Joaquim; Muraro, Antonella; Naclerio, Robert N; Palkonen, Susanna; Papadopoulos, Nikolaos G; Passalacqua, Giovanni; Price, David; Ryan, Dermot; Samolinski, Boleslaw; Scadding, Glenis K; Sheikh, Aziz; Spertini, François; Valiulis, Arunas; Valovirta, Erkka; Walker, Samantha; Wickman, Magnus; Yorgancioglu, Arzu; Haahtela, Tari; Zuberbier, Torsten
2016-08-01
The selection of pharmacotherapy for patients with allergic rhinitis (AR) depends on several factors, including age, prominent symptoms, symptom severity, control of AR, patient preferences, and cost. Allergen exposure and the resulting symptoms vary, and treatment adjustment is required. Clinical decision support systems (CDSSs) might be beneficial for the assessment of disease control. CDSSs should be based on the best evidence and algorithms to aid patients and health care professionals to jointly determine treatment and its step-up or step-down strategy depending on AR control. Contre les MAladies Chroniques pour un VIeillissement Actif en Languedoc-Roussillon (MACVIA-LR [fighting chronic diseases for active and healthy ageing]), one of the reference sites of the European Innovation Partnership on Active and Healthy Ageing, has initiated an allergy sentinel network (the MACVIA-ARIA Sentinel Network). A CDSS is currently being developed to optimize AR control. An algorithm developed by consensus is presented in this article. This algorithm should be confirmed by appropriate trials. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
Li, Dongming; Sun, Changming; Yang, Jinhua; Liu, Huan; Peng, Jiaqi; Zhang, Lijuan
2017-04-06
An adaptive optics (AO) system provides real-time compensation for atmospheric turbulence. However, an AO image is usually of poor contrast because of the nature of the imaging process, meaning that the image contains information coming from both out-of-focus and in-focus planes of the object, which also brings about a loss in quality. In this paper, we present a robust multi-frame adaptive optics image restoration algorithm via maximum likelihood estimation. Our proposed algorithm uses a maximum likelihood method with image regularization as the basic principle, and constructs the joint log likelihood function for multi-frame AO images based on a Poisson distribution model. To begin with, a frame selection method based on image variance is applied to the observed multi-frame AO images to select images with better quality to improve the convergence of a blind deconvolution algorithm. Then, by combining the imaging conditions and the AO system properties, a point spread function estimation model is built. Finally, we develop our iterative solutions for AO image restoration addressing the joint deconvolution issue. We conduct a number of experiments to evaluate the performances of our proposed algorithm. Experimental results show that our algorithm produces accurate AO image restoration results and outperforms the current state-of-the-art blind deconvolution methods.
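The Poisson maximum-likelihood estimation the authors build on is closely related to multi-frame Richardson-Lucy deconvolution. Stripped of the paper's regularization, frame selection and PSF estimation, the core multiplicative update can be sketched as follows (PSFs are assumed known and normalized to unit sum):

```python
import numpy as np
from scipy.signal import fftconvolve

def multiframe_rl(frames, psfs, iters=30, eps=1e-12):
    """Multi-frame Richardson-Lucy: multiplicative update maximizing the
    joint Poisson likelihood of all frames for a common latent image."""
    x = np.full_like(frames[0], frames[0].mean())
    for _ in range(iters):
        ratio = np.zeros_like(x)
        for y, h in zip(frames, psfs):
            model = fftconvolve(x, h, mode="same") + eps
            # correlation with the flipped PSF plays the role of H^T
            ratio += fftconvolve(y / model, h[::-1, ::-1], mode="same")
        x *= ratio / len(frames)
        np.clip(x, 0.0, None, out=x)
    return x

# Toy usage: two noisy, blurred frames of a point source.
rng = np.random.default_rng(0)
truth = np.zeros((32, 32)); truth[16, 16] = 100.0
psf = np.ones((5, 5)) / 25.0
blur = np.maximum(fftconvolve(truth, psf, mode="same"), 0.0)
frames = [rng.poisson(blur).astype(float) for _ in range(2)]
estimate = multiframe_rl(frames, [psf, psf])
```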
Li, Dongming; Sun, Changming; Yang, Jinhua; Liu, Huan; Peng, Jiaqi; Zhang, Lijuan
2017-01-01
An adaptive optics (AO) system provides real-time compensation for atmospheric turbulence. However, an AO image is usually of poor contrast because of the nature of the imaging process, meaning that the image contains information coming from both out-of-focus and in-focus planes of the object, which also brings about a loss in quality. In this paper, we present a robust multi-frame adaptive optics image restoration algorithm via maximum likelihood estimation. Our proposed algorithm uses a maximum likelihood method with image regularization as the basic principle, and constructs the joint log likelihood function for multi-frame AO images based on a Poisson distribution model. To begin with, a frame selection method based on image variance is applied to the observed multi-frame AO images to select images with better quality to improve the convergence of a blind deconvolution algorithm. Then, by combining the imaging conditions and the AO system properties, a point spread function estimation model is built. Finally, we develop our iterative solutions for AO image restoration addressing the joint deconvolution issue. We conduct a number of experiments to evaluate the performances of our proposed algorithm. Experimental results show that our algorithm produces accurate AO image restoration results and outperforms the current state-of-the-art blind deconvolution methods. PMID:28383503
A new route for the design of intervertebral implants
NASA Astrophysics Data System (ADS)
Gradel, T.; Tabourot, L.; Arrieux, R.; Balland, P.
2002-12-01
The objective of our work is the design of a new generation of interbody implants that adapt perfectly to the geometry of the vertebral endplates by deforming. To this end, we used a new approach that consists of fully simulating the manufacturing process, in this case deep drawing. By preserving the history of the loads applied to the material during forming, this simulation makes it possible to validate its mechanical strength at the end of the cycle very precisely. In this study, we conducted two so-called "cooperative" analyses in parallel: one based on a HILL-type phenomenological model and the other on a multi-scale model that takes more physical phenomena into account, in order to gain a good understanding of the material's behaviour during deformation. We chose T40 (pure titanium) as the material for its good strength, its biocompatibility and its radiological properties.
Data fusion algorithm for rapid multi-mode dust concentration measurement system based on MEMS
NASA Astrophysics Data System (ADS)
Liao, Maohao; Lou, Wenzhong; Wang, Jinkui; Zhang, Yan
2018-03-01
As a single measurement method cannot fully meet the technical requirements of dust concentration measurement, a multi-mode detection method is put forward, which brings new requirements for data processing. This paper presents a new dust concentration measurement system containing a MEMS ultrasonic sensor and a MEMS capacitance sensor, together with a new data fusion algorithm for this multi-mode measurement system. After analysing the relations between the data from the two measurement modes, a data fusion algorithm based on Kalman filtering is established, which effectively improves the measurement accuracy and ultimately forms a rapid data fusion model for dust concentration measurement. Test results show that the data fusion algorithm achieves rapid and accurate concentration detection.
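As an illustration of the Kalman-filtering fusion step, a scalar random-walk model of the dust concentration with two measurement updates per time step (one per sensing mode) can be sketched as follows; the process and measurement variances are invented for illustration:

```python
def kalman_fuse(z_ultrasonic, z_capacitance, q=0.01, r_u=0.25, r_c=0.49):
    """Scalar Kalman filter: concentration modelled as a random walk,
    each step fuses one ultrasonic and one capacitance reading."""
    x, p = z_ultrasonic[0], 1.0           # state estimate and its variance
    fused = []
    for zu, zc in zip(z_ultrasonic, z_capacitance):
        p += q                            # predict (random-walk model)
        for z, r in ((zu, r_u), (zc, r_c)):
            k = p / (p + r)               # Kalman gain for this sensor
            x += k * (z - x)              # measurement update
            p *= 1.0 - k
        fused.append(x)
    return fused

print(kalman_fuse([10.2, 10.8, 11.1], [9.9, 10.5, 11.4]))
```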
Outdoor flocking of quadcopter drones with decentralized model predictive control.
Yuan, Quan; Zhan, Jingyuan; Li, Xiang
2017-11-01
In this paper, we present a multi-drone system featuring a decentralized model predictive control (DMPC) flocking algorithm. The drones gather localized information from neighbors and update their velocities using the DMPC flocking algorithm. In the multi-drone system, data packages are transmitted through XBee® wireless modules in broadcast mode, yielding an anonymous and decentralized system in which all calculations and controls are completed on an onboard minicomputer of each drone. Each drone is a double-layered agent system, with the coordination layer running multi-drone flocking algorithms and the flight control layer navigating the drone; the final formation of the flock relies on both the communication range and the desired inter-drone distance. We give both numerical simulations and field tests with a flock of five drones, showing that the DMPC flocking algorithm performs well on the presented multi-drone system in both convergence rate and the ability to track a desired path. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Karakostas, Spiros
2015-05-01
The multi-objective nature of most spatial planning initiatives and the numerous constraints that are introduced in the planning process by decision makers, stakeholders, etc., synthesize a complex spatial planning context in which the concept of solid and meaningful optimization is a unique challenge. This article investigates new approaches to enhance the effectiveness of multi-objective evolutionary algorithms (MOEAs) via the adoption of a well-known metaheuristic: the non-dominated sorting genetic algorithm II (NSGA-II). In particular, the contribution of a sophisticated crossover operator coupled with an enhanced initialization heuristic is evaluated against a series of metrics measuring the effectiveness of MOEAs. Encouraging results emerge for both the convergence rate of the evolutionary optimization process and the occupation of valuable regions of the objective space by non-dominated solutions, facilitating the work of spatial planners and decision makers. Based on the promising behaviour of both heuristics, topics for further research are proposed to improve their effectiveness.
Multi-Agent Patrolling under Uncertainty and Threats.
Chen, Shaofei; Wu, Feng; Shen, Lincheng; Chen, Jing; Ramchurn, Sarvapali D
2015-01-01
We investigate a multi-agent patrolling problem where information is distributed alongside threats in environments with uncertainties. Specifically, the information and threat at each location are independently modelled as multi-state Markov chains, whose states are not observed until the location is visited by an agent. While agents will obtain information at a location, they may also suffer damage from the threat at that location. Therefore, the goal of the agents is to gather as much information as possible while mitigating the damage incurred. To address this challenge, we formulate the single-agent patrolling problem as a Partially Observable Markov Decision Process (POMDP) and propose a computationally efficient algorithm to solve this model. Building upon this, to compute patrols for multiple agents, the single-agent algorithm is extended for each agent with the aim of maximising its marginal contribution to the team. We empirically evaluate our algorithm on problems of multi-agent patrolling and show that it outperforms a baseline algorithm up to 44% for 10 agents and by 21% for 15 agents in large domains.
Engblom, Henrik; Tufvesson, Jane; Jablonowski, Robert; Carlsson, Marcus; Aletras, Anthony H; Hoffmann, Pavel; Jacquier, Alexis; Kober, Frank; Metzler, Bernhard; Erlinge, David; Atar, Dan; Arheden, Håkan; Heiberg, Einar
2016-05-04
Late gadolinium enhancement (LGE) cardiovascular magnetic resonance (CMR) using magnitude inversion recovery (IR) or phase sensitive inversion recovery (PSIR) has become the clinical standard for assessment of myocardial infarction (MI). However, there is no clinical standard for quantification of MI, even though multiple methods have been proposed. Simple thresholds have yielded varying results and advanced algorithms have only been validated in single center studies. Therefore, the aim of this study was to develop an automatic algorithm for MI quantification in IR and PSIR LGE images and to validate the new algorithm experimentally and compare it to expert delineations in multi-center, multi-vendor patient data. The new automatic algorithm, EWA (Expectation Maximization, weighted intensity, a priori information), was implemented using an intensity threshold by Expectation Maximization (EM) and a weighted summation to account for partial volume effects. The EWA algorithm was validated in vivo against triphenyltetrazolium-chloride (TTC) staining (n = 7 pigs with paired IR and PSIR images) and against ex-vivo high resolution T1-weighted images (n = 23 IR and n = 13 PSIR images). The EWA algorithm was also compared to expert delineation in 124 patients from multi-center, multi-vendor clinical trials 2-6 days following first-time ST-elevation myocardial infarction (STEMI) treated with percutaneous coronary intervention (PCI) (n = 124 IR and n = 49 PSIR images). Infarct size by the EWA algorithm in vivo in pigs showed a bias to ex-vivo TTC of -1 ± 4%LVM (R = 0.84) in IR and -2 ± 3%LVM (R = 0.92) in PSIR images, and a bias to ex-vivo T1-weighted images of 0 ± 4%LVM (R = 0.94) in IR and 0 ± 5%LVM (R = 0.79) in PSIR images. In multi-center patient studies, infarct size by the EWA algorithm showed a bias to expert delineation of -2 ± 6%LVM (R = 0.81) in IR images (n = 124) and 0 ± 5%LVM (R = 0.89) in PSIR images (n = 49). The EWA algorithm was validated experimentally and in patient data with a low bias in both IR and PSIR LGE images. Thus, the use of EM and a weighted intensity as in the EWA algorithm may serve as a clinical standard for the quantification of myocardial infarction in LGE CMR images. Trial registrations: CHILL-MI: NCT01379261; NCT01374321.
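The EM intensity classification inside EWA is akin to fitting a two-component Gaussian mixture to myocardial pixel intensities and treating the brighter component as infarct. A generic sketch of that step, without the algorithm's a priori information or partial-volume weighting, follows:

```python
import numpy as np

def em_two_gaussians(x, iters=100):
    """Fit a 2-component 1-D Gaussian mixture by EM; return each sample's
    posterior probability of belonging to the brighter (hyperenhanced) class."""
    mu = np.percentile(x, [25.0, 75.0])
    var = np.array([x.var(), x.var()]) + 1e-9
    w = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: responsibilities under the current mixture
        pdf = w / np.sqrt(2 * np.pi * var) * np.exp(-(x[:, None] - mu) ** 2 / (2 * var))
        r = pdf / pdf.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means and variances
        nk = r.sum(axis=0) + 1e-12
        w = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-9
    return r[:, int(np.argmax(mu))]

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(1.0, 0.3, 500), rng.normal(4.0, 0.5, 100)])
print((em_two_gaussians(x) > 0.5).sum())   # roughly the 100 bright samples
```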
An adaptive multi-level simulation algorithm for stochastic biological systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lester, C., E-mail: lesterc@maths.ox.ac.uk; Giles, M. B.; Baker, R. E.
2015-01-14
Discrete-state, continuous-time Markov models are widely used in the modeling of biochemical reaction networks. Their complexity often precludes analytic solution, and we rely on stochastic simulation algorithms (SSA) to estimate system statistics. The Gillespie algorithm is exact, but computationally costly as it simulates every single reaction. As such, approximate stochastic simulation algorithms such as the tau-leap algorithm are often used. Potentially computationally more efficient, the system statistics generated suffer from significant bias unless tau is relatively small, in which case the computational time can be comparable to that of the Gillespie algorithm. The multi-level method [Anderson and Higham, "Multi-level Monte Carlo for continuous time Markov chains, with applications in biochemical kinetics," SIAM Multiscale Model. Simul. 10(1), 146–179 (2012)] tackles this problem. A base estimator is computed using many (cheap) sample paths at low accuracy. The bias inherent in this estimator is then reduced using a number of corrections. Each correction term is estimated using a collection of paired sample paths where one path of each pair is generated at a higher accuracy compared to the other (and so more expensive). By sharing random variables between these paired paths, the variance of each correction estimator can be reduced. This renders the multi-level method very efficient as only a relatively small number of paired paths are required to calculate each correction term. In the original multi-level method, each sample path is simulated using the tau-leap algorithm with a fixed value of τ. This approach can result in poor performance when the reaction activity of a system changes substantially over the timescale of interest. By introducing a novel adaptive time-stepping approach where τ is chosen according to the stochastic behaviour of each sample path, we extend the applicability of the multi-level method to such cases. We demonstrate the efficiency of our method using a number of examples.
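The fixed-τ tau-leap simulation that the multi-level method builds on is compact: over each step of length τ, every reaction channel fires a Poisson-distributed number of times. A minimal sketch with a toy dimerisation system (invented rate constants):

```python
import numpy as np

def tau_leap(x0, stoich, propensity, tau, t_end, seed=0):
    """Fixed-step tau-leaping; `stoich` has one row per reaction channel.
    Contrast with the exact, one-reaction-at-a-time Gillespie SSA."""
    rng = np.random.default_rng(seed)
    x, t = np.asarray(x0, dtype=float), 0.0
    while t < t_end:
        a = propensity(x)                    # channel propensities
        k = rng.poisson(a * tau)             # firings of each channel
        x = np.maximum(x + k @ stoich, 0.0)  # crude guard against negatives
        t += tau
    return x

# 2A -> B at rate c1, B -> 2A at rate c2 (toy values)
stoich = np.array([[-2.0, 1.0], [2.0, -1.0]])
prop = lambda x: np.array([0.001 * x[0] * (x[0] - 1) / 2.0, 0.1 * x[1]])
print(tau_leap([1000.0, 0.0], stoich, prop, tau=0.05, t_end=10.0))
```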
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Kok Foong; Patterson, Robert I.A.; Wagner, Wolfgang
2015-12-15
This paper introduces stochastic weighted particle algorithms for the solution of multi-compartment population balance equations. In particular, it presents a class of fragmentation weight transfer functions which are constructed such that the number of computational particles stays constant during fragmentation events. The weight transfer functions are constructed based on systems of weighted computational particles, and each of them leads to a stochastic particle algorithm for the numerical treatment of population balance equations. Besides fragmentation, the algorithms also consider physical processes such as coagulation and the exchange of mass with the surroundings. The numerical properties of the algorithms are compared to the direct simulation algorithm and to an existing method for the fragmentation of weighted particles; the numerical errors of the stochastic solutions are assessed as a function of the fragmentation rate, and the algorithms are applied to a multi-dimensional granulation model. It is found that the new algorithms show better numerical performance than the two existing methods, especially for systems with a significant amount of large particles and high fragmentation rates.
Yan, Zheping; Wang, Lu; Wang, Tongda; Yang, Zewen; Chen, Tao; Xu, Jian
2018-03-30
To solve the navigation accuracy problems of multi-Unmanned Underwater Vehicles (multi-UUVs) in the polar region, a polar cooperative navigation algorithm for multi-UUVs considering communication delays is proposed in this paper. UUVs are important pieces of equipment in ocean engineering for marine development. For UUVs to complete missions, precise navigation is necessary. It is difficult for UUVs to establish true headings because of the rapid convergence of Earth meridians and the severe polar environment. Based on the polar grid navigation algorithm, UUV navigation in the polar region can be accomplished with the Strapdown Inertial Navigation System (SINS) in the grid frame. To save costs, a leader-follower type of system is introduced in this paper. The leader UUV helps the follower UUVs to achieve high navigation accuracy. Follower UUVs correct their own states based on the information sent by the leader UUV and the relative position measured by ultra-short baseline (USBL) acoustic positioning. The underwater acoustic communication delay is quantized by the model. In this paper, considering underwater acoustic communication delay, the conventional adaptive Kalman filter (AKF) is modified to adapt to polar cooperative navigation. The results demonstrate that the polar cooperative navigation algorithm for multi-UUVs that considers communication delays can effectively navigate the sailing of multi-UUVs in the polar region.
Boiler-turbine control system design using a genetic algorithm
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dimeo, R.; Lee, K.Y.
1995-12-01
This paper discusses the application of a genetic algorithm to control system design for a boiler-turbine plant. In particular the authors study the ability of the genetic algorithm to develop a proportional-integral (PI) controller and a state feedback controller for a non-linear multi-input/multi-output (MIMO) plant model. The plant model is presented along with a discussion of the inherent difficulties in such controller development. A sketch of the genetic algorithm (GA) is presented and its strategy as a method of control system design is discussed. Results are presented for two different control systems that have been designed with the genetic algorithm.
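The GA-based tuning loop described here can be sketched generically: encode the controller gains as a chromosome, score each candidate by simulating a closed-loop response, and evolve. The first-order SISO plant, the integral-squared-error cost, and the GA operators below are illustrative assumptions, not the paper's boiler-turbine model.

```python
import numpy as np

def step_response_ise(kp, ki, dt=0.01, t_end=5.0):
    """Integral-squared-error of a unit step response for a PI controller
    driving an assumed first-order plant y' = -y + u (Euler-discretised)."""
    y, integ, ise = 0.0, 0.0, 0.0
    for _ in range(int(t_end / dt)):
        e = 1.0 - y
        integ += e * dt
        u = kp * e + ki * integ
        y += dt * (-y + u)
        ise += e * e * dt
    return ise

def ga_tune(pop_size=40, generations=60, rng=np.random.default_rng(2)):
    pop = rng.uniform(0.0, 10.0, size=(pop_size, 2))   # columns: Kp, Ki
    for _ in range(generations):
        cost = np.array([step_response_ise(kp, ki) for kp, ki in pop])
        parents = pop[np.argsort(cost)[: pop_size // 2]]  # truncation selection
        # Arithmetic crossover + Gaussian mutation.
        idx = rng.integers(0, len(parents), size=(pop_size, 2))
        alpha = rng.random((pop_size, 1))
        pop = alpha * parents[idx[:, 0]] + (1 - alpha) * parents[idx[:, 1]]
        pop = np.clip(pop + rng.normal(0.0, 0.2, pop.shape), 0.0, 10.0)
    return pop[np.argmin([step_response_ise(kp, ki) for kp, ki in pop])]

print("tuned (Kp, Ki):", ga_tune())
```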
Multi-Optimisation Consensus Clustering
NASA Astrophysics Data System (ADS)
Li, Jian; Swift, Stephen; Liu, Xiaohui
Ensemble Clustering has been developed to provide an alternative way of obtaining more stable and accurate clustering results. It aims to avoid the biases of individual clustering algorithms. However, it is still a challenge to develop an efficient and robust method for Ensemble Clustering. Based on an existing ensemble clustering method, Consensus Clustering (CC), this paper introduces an advanced Consensus Clustering algorithm called Multi-Optimisation Consensus Clustering (MOCC), which utilises an optimised Agreement Separation criterion and a Multi-Optimisation framework to improve the performance of CC. Fifteen different data sets are used for evaluating the performance of MOCC. The results reveal that MOCC can generate more accurate clustering results than the original CC algorithm.
Retinex enhancement of infrared images.
Li, Ying; He, Renjie; Xu, Guizhi; Hou, Changzhi; Sun, Yunyan; Guo, Lei; Rao, Liyun; Yan, Weili
2008-01-01
With the ability to image the temperature distribution of the body, infrared imaging is promising for the diagnosis and prognosis of diseases. However, the poor quality of raw infrared images has hindered applications, and one of the essential problems is the low-contrast appearance of the imaged object. In this paper, image enhancement based on the Retinex theory is studied, a process that automatically restores visual realism to images. The algorithms, including the Frankle-McCann algorithm, the McCann99 algorithm, the single-scale Retinex algorithm, the multi-scale Retinex algorithm, and the multi-scale Retinex algorithm with color restoration (MSRCR), are applied to the enhancement of infrared images. Entropy measurements along with visual inspection were compared, and the results show that the algorithms based on the Retinex theory have the ability to enhance infrared images. Of the algorithms compared, MSRCR demonstrated the best performance.
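A minimal sketch of single- and multi-scale Retinex is given below, assuming grayscale input. The per-scale rescaling shown here is one common variant, and the color-restoration step of MSRCR is omitted.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def single_scale_retinex(image, sigma=30.0):
    """Single-scale Retinex: log of the image minus log of a Gaussian-blurred
    illumination estimate, rescaled to [0, 255]."""
    img = image.astype(float) + 1.0                       # avoid log(0)
    retinex = np.log(img) - np.log(gaussian_filter(img, sigma) + 1.0)
    out = retinex - retinex.min()
    return (255.0 * out / (out.max() + 1e-12)).astype(np.uint8)

def multi_scale_retinex(image, sigmas=(15.0, 80.0, 250.0)):
    """Multi-scale Retinex: equally weighted average of SSR outputs."""
    acc = sum(single_scale_retinex(image, s).astype(float) for s in sigmas)
    return (acc / len(sigmas)).astype(np.uint8)

# Example on a synthetic low-contrast frame.
frame = np.outer(np.linspace(80, 120, 256), np.ones(256)).astype(np.uint8)
enhanced = multi_scale_retinex(frame)
```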
Analyzing gene expression time-courses based on multi-resolution shape mixture model.
Li, Ying; He, Ye; Zhang, Yu
2016-11-01
Biological processes are dynamic molecular processes over time. Time-course gene expression experiments provide opportunities to explore patterns of gene expression change over time and to understand the dynamic behavior of gene expression, which is crucial for studying the development and progression of biology and disease. Analysis of gene expression time-course profiles has not been fully exploited so far and remains a challenging problem. We propose a novel shape-based mixture model clustering method for gene expression time-course profiles to explore significant gene groups. Based on multi-resolution fractal features and a mixture clustering model, we propose a multi-resolution shape mixture model algorithm. The multi-resolution fractal features are computed by wavelet decomposition, which explores patterns of change over time in gene expression at different resolutions. Our proposed multi-resolution shape mixture model algorithm is a probabilistic framework which offers a more natural and robust way of clustering time-course gene expression. We assessed the performance of our proposed algorithm using yeast time-course gene expression profiles, compared with several popular clustering methods for gene expression profiles. The grouped genes identified by the different methods are evaluated by enrichment analysis of biological pathways and known protein-protein interactions from experimental evidence. The grouped genes identified by our proposed algorithm have stronger biological significance. In summary, a novel multi-resolution shape mixture model algorithm based on multi-resolution fractal features is proposed; it provides a novel horizon and an alternative tool for the visualization and analysis of time-course gene expression profiles. The R and Matlab programs are available upon request. Copyright © 2016 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Farda, N. M.
2017-12-01
Coastal wetlands provide ecosystem services essential to people and the environment. Changes in coastal wetlands, especially in land use, are important to monitor by utilizing multi-temporal imagery. The Google Earth Engine (GEE) provides many machine learning algorithms (10 algorithms) that are very useful for extracting land use from imagery. The research objective is to explore machine learning in Google Earth Engine and its accuracy for multi-temporal land use mapping of a coastal wetland area. Landsat 3 MSS (1978), Landsat 5 TM (1991), Landsat 7 ETM+ (2001), and Landsat 8 OLI (2014) images located in the Segara Anakan lagoon are selected to represent multi-temporal images. The inputs for machine learning are visible and near-infrared bands, PCA bands, inverse PCA bands, a bare soil index, a vegetation index, a wetness index, elevation from ASTER GDEM, and GLCM (Haralick) texture, together with polygon samples at 140 locations. Ten machine learning algorithms are applied to extract coastal wetland land use from the Landsat imagery: Fast Naive Bayes, CART (Classification and Regression Tree), Random Forests, GMO Max Entropy, Perceptron (Multi Class Perceptron), Winnow, Voting SVM, Margin SVM, Pegasos (Primal Estimated sub-GrAdient SOlver for SVM), and IKPamir (Intersection Kernel Passive Aggressive Method for Information Retrieval, SVM). Machine learning in Google Earth Engine is very helpful for multi-temporal land use mapping; the highest accuracy for land use mapping of the coastal wetland is obtained with CART, at 96.98% overall accuracy using K-fold cross validation (K = 10). GEE is particularly useful for multi-temporal land use mapping with ready-to-use imagery and classification algorithms, and is also promising for other applications.
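As an illustration, a CART classification in GEE's Python API might look like the sketch below. The asset ID, band names, and class property are placeholders, and the current API exposes CART as ee.Classifier.smileCart, which differs from the classifier names listed in the abstract; this is an assumed workflow, not the authors' script.

```python
import ee
ee.Initialize()  # assumes prior Earth Engine authentication

# Placeholder inputs: a Landsat 8 scene and labelled sample polygons.
image = ee.Image('LANDSAT/LC08/C02/T1_L2/LC08_120065_20140712')  # hypothetical ID
bands = ['SR_B2', 'SR_B3', 'SR_B4', 'SR_B5', 'SR_B6', 'SR_B7']
samples = ee.FeatureCollection('users/example/wetland_training')  # hypothetical asset

# Sample the image at the training polygons; 'landuse' is the class property.
training = image.select(bands).sampleRegions(
    collection=samples, properties=['landuse'], scale=30)

# Train a CART classifier and classify the image.
classifier = ee.Classifier.smileCart().train(
    features=training, classProperty='landuse', inputProperties=bands)
classified = image.select(bands).classify(classifier)
```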
Liu, Chun; Kroll, Andreas
2016-01-01
Multi-robot task allocation determines the task sequence and distribution for a group of robots in multi-robot systems. It is a constrained combinatorial optimization problem, and it becomes more complex in the case of cooperative tasks because they introduce additional spatial and temporal constraints. To solve multi-robot task allocation problems with cooperative tasks efficiently, a subpopulation-based genetic algorithm, a crossover-free genetic algorithm employing mutation operators and elitism selection in each subpopulation, is developed in this paper. Moreover, the impact of mutation operators (swap, insertion, inversion, displacement, and their various combinations) is analyzed when solving several industrial plant inspection problems. The experimental results show that: (1) the proposed genetic algorithm can obtain better solutions than the tested binary tournament genetic algorithm with partially mapped crossover; (2) inversion mutation performs better than the other tested mutation operators when solving problems without cooperative tasks, and the swap-inversion combination performs better than the other tested mutation operators/combinations when solving problems with cooperative tasks. As it is difficult to produce all desired effects with a single mutation operator, using multiple mutation operators (including both inversion and swap) is suggested when solving similar combinatorial optimization problems.
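The four permutation mutation operators studied are standard and easy to state precisely. Here is a minimal sketch on a list-encoded tour; the in-place operators follow the usual conventions rather than the paper's code.

```python
import random

def swap(tour):
    """Exchange two randomly chosen positions."""
    i, j = random.sample(range(len(tour)), 2)
    tour[i], tour[j] = tour[j], tour[i]

def insertion(tour):
    """Remove one element and reinsert it at another position."""
    i, j = random.sample(range(len(tour)), 2)
    tour.insert(j, tour.pop(i))

def inversion(tour):
    """Reverse a randomly chosen segment."""
    i, j = sorted(random.sample(range(len(tour)), 2))
    tour[i:j + 1] = reversed(tour[i:j + 1])

def displacement(tour):
    """Cut a segment out and splice it back in elsewhere."""
    i, j = sorted(random.sample(range(len(tour)), 2))
    segment = tour[i:j + 1]
    del tour[i:j + 1]
    k = random.randrange(len(tour) + 1)
    tour[k:k] = segment

tour = list(range(10))
for op in (swap, insertion, inversion, displacement):
    op(tour)
    print(op.__name__, tour)
```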
NASA Astrophysics Data System (ADS)
Rojali, Siahaan, Ida Sri Rejeki; Soewito, Benfano
2017-08-01
Steganography is the art and science of hiding secret messages so that the existence of the message cannot be detected by human senses. Data concealment uses the Multi Pixel Value Differencing (MPVD) algorithm, which exploits the difference between pixels. The development was done using six interval tables. The objective of this algorithm is to enhance the message capacity while maintaining data security.
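For illustration, the single-table pixel-value-differencing step that MPVD generalizes can be sketched as follows. The range table and the split of the pixel adjustment are the classic Wu-Tsai choices, boundary-overflow handling is omitted, and the paper's six-table MPVD variant is not reproduced here.

```python
RANGES = [(0, 7), (8, 15), (16, 31), (32, 63), (64, 127), (128, 255)]

def embed_pair(p1, p2, bits):
    """Embed bits into one pixel pair by adjusting their difference
    (Wu-Tsai style PVD; MPVD extends this with several interval tables)."""
    d = p2 - p1
    lo, hi = next(r for r in RANGES if r[0] <= abs(d) <= r[1])
    n = (hi - lo + 1).bit_length() - 1            # capacity of this range in bits
    value = int(bits[:n].ljust(n, '0'), 2)
    new_d = (lo + value) if d >= 0 else -(lo + value)
    m = new_d - d
    # Split the change between the two pixels so the new difference encodes value.
    return p1 - m // 2, p2 + (m - m // 2), bits[n:]

p1, p2, rest = embed_pair(100, 120, '10110')
print(p1, p2, p2 - p1)   # the difference now carries the first bits
```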
Cross contrast multi-channel image registration using image synthesis for MR brain images.
Chen, Min; Carass, Aaron; Jog, Amod; Lee, Junghoon; Roy, Snehashis; Prince, Jerry L
2017-02-01
Multi-modal deformable registration is important for many medical image analysis tasks such as atlas alignment, image fusion, and distortion correction. Whereas a conventional method would register images with different modalities using modality independent features or information theoretic metrics such as mutual information, this paper presents a new framework that addresses the problem using a two-channel registration algorithm capable of using mono-modal similarity measures such as sum of squared differences or cross-correlation. To make it possible to use these same-modality measures, image synthesis is used to create proxy images for the opposite modality as well as intensity-normalized images from each of the two available images. The new deformable registration framework was evaluated by performing intra-subject deformation recovery, intra-subject boundary alignment, and inter-subject label transfer experiments using multi-contrast magnetic resonance brain imaging data. Three different multi-channel registration algorithms were evaluated, revealing that the framework is robust to the multi-channel deformable registration algorithm that is used. With a single exception, all results demonstrated improvements when compared against single channel registrations using the same algorithm with mutual information. Copyright © 2016 Elsevier B.V. All rights reserved.
Hybrid algorithms for fuzzy reverse supply chain network design.
Che, Z H; Chiang, Tzu-An; Kuo, Y C; Cui, Zhihua
2014-01-01
In consideration of capacity constraints, fuzzy defect ratio, and fuzzy transport loss ratio, this paper establishes an optimized decision model for production planning and distribution of a multi-phase, multi-product reverse supply chain, which addresses defects returned to original manufacturers, and develops hybrid algorithms, namely Particle Swarm Optimization-Genetic Algorithm (PSO-GA), Genetic Algorithm-Simulated Annealing (GA-SA), and Particle Swarm Optimization-Simulated Annealing (PSO-SA), for solving the optimized model. Through a case study of a multi-phase, multi-product reverse supply chain network, the paper demonstrates the suitability of the optimized decision model and the applicability of the algorithms. Finally, the hybrid algorithms showed excellent solving capability when compared with the original GA and PSO methods.
Physics-Based Computational Algorithm for the Multi-Fluid Plasma Model
2014-06-30
Figure 6 (blended finite element method applied to the species separation problem in capsule implosions): number densities and the electric field are shown after the laser drive has compressed the multi-fluid plasma, and a species separation clearly develops. The solution is found using an explicit advance (CFL = 1).
2009-07-01
Performance Analysis of the Probabilistic Multi-Hypothesis Tracking Algorithm on the SEABAR Data Sets. Dr. Christian G. Hempel, Naval Undersea Warfare Center Division, Newport, RI.
Ulrich, Daniela; Edwards, Sharon L; Letouzey, Vincent; Su, Kai; White, Jacinta F; Rosamilia, Anna; Gargett, Caroline E; Werkmeister, Jerome A
2014-01-01
There are increasing numbers of reports describing human vaginal tissue composition in women with and without pelvic organ prolapse, with conflicting results. The aim of this study was to compare ovine and human posterior vaginal tissue in terms of histological and biochemical tissue composition and to assess the passive biomechanical properties of ovine vagina, to further characterise this animal model for pelvic organ prolapse research. Vaginal tissue was collected from ovariectomised sheep (n = 6) and from postmenopausal women (n = 7) from the proximal, middle and distal thirds. Tissue histology was analyzed using Masson's Trichrome staining; total collagen was quantified by hydroxyproline assays, collagen III/I+III ratios by delayed reduction SDS-PAGE, glycosaminoglycans by dimethylmethylene blue assay, and elastic tissue associated proteins (ETAP) by amino acid analysis. Young's modulus, maximum stress/strain, and permanent strain following cyclic loading were determined in ovine vagina. Both sheep and human vaginal tissue showed comparable tissue composition. Ovine vaginal tissue showed significantly higher total collagen and glycosaminoglycan values (p<0.05) nearest the cervix. No significant differences were found along the length of the human vagina for collagen, GAG or ETAP content. The proximal region was the stiffest (Young's modulus, p<0.05) and strongest (maximum stress, p<0.05) compared to the distal region, and the most elastic (permanent strain). Sheep tissue composition and mechanical properties showed regional differences along the postmenopausal vaginal wall not apparent in the human vagina, although the absolute protein contents were similar. Knowledge of this baseline variation in the composition and mechanical properties of the vaginal wall will assist future studies using sheep as a model for vaginal surgery.
Multi-objective community detection based on memetic algorithm.
Wu, Peng; Pan, Li
2015-01-01
Community detection has drawn a lot of attention as it can provide invaluable help in understanding the function and visualizing the structure of networks. Since single objective optimization methods have intrinsic drawbacks to identifying multiple significant community structures, some methods formulate the community detection as multi-objective problems and adopt population-based evolutionary algorithms to obtain multiple community structures. Evolutionary algorithms have strong global search ability, but have difficulty in locating local optima efficiently. In this study, in order to identify multiple significant community structures more effectively, a multi-objective memetic algorithm for community detection is proposed by combining multi-objective evolutionary algorithm with a local search procedure. The local search procedure is designed by addressing three issues. Firstly, nondominated solutions generated by evolutionary operations and solutions in dominant population are set as initial individuals for local search procedure. Then, a new direction vector named as pseudonormal vector is proposed to integrate two objective functions together to form a fitness function. Finally, a network specific local search strategy based on label propagation rule is expanded to search the local optimal solutions efficiently. The extensive experiments on both artificial and real-world networks evaluate the proposed method from three aspects. Firstly, experiments on influence of local search procedure demonstrate that the local search procedure can speed up the convergence to better partitions and make the algorithm more stable. Secondly, comparisons with a set of classic community detection methods illustrate the proposed method can find single partitions effectively. Finally, the method is applied to identify hierarchical structures of networks which are beneficial for analyzing networks in multi-resolution levels.
Multi-layer service function chaining scheduling based on auxiliary graph in IP over optical network
NASA Astrophysics Data System (ADS)
Li, Yixuan; Li, Hui; Liu, Yuze; Ji, Yuefeng
2017-10-01
Software Defined Optical Network (SDON) can be considered an extension of Software Defined Network (SDN) to optical networks. SDON offers a unified control plane and makes the optical network an intelligent transport network with dynamic flexibility and service adaptability. For this reason, a comprehensive optical transmission service, able to achieve service differentiation all the way down to the optical transport layer, can be provided to service function chaining (SFC). IP over optical network, a promising networking architecture for interconnecting data centers, is the most widely used scenario for SFC. In this paper, we offer a flexible and dynamic resource allocation method for diverse SFC service requests in the IP over optical network. To do so, we first propose the concept of optical service function (OSF) and a multi-layer SFC model. OSF represents the comprehensive optical transmission service (e.g., multicast, low latency, quality of service, etc.), which can be achieved in the multi-layer SFC model; OSF can also be considered a special SF. Secondly, we design a resource allocation algorithm, which we call the OSF-oriented optical service scheduling algorithm. It is able to address multi-layer SFC optical service scheduling and provide comprehensive optical transmission service, while meeting multiple optical transmission requirements (e.g., bandwidth, latency, availability). Moreover, the algorithm exploits the concept of an Auxiliary Graph. Finally, we compare our algorithm with a baseline algorithm in simulation, and the simulation results show that our algorithm achieves superior performance under low traffic load conditions.
NASA Astrophysics Data System (ADS)
Ma, Weiwei; Gong, Cailan; Hu, Yong; Li, Long; Meng, Peng
2015-10-01
Remote sensing technology has been broadly recognized for its convenience and efficiency in mapping vegetation, particularly in high-altitude and inaccessible areas where in-situ observations are lacking. In this study, Landsat Thematic Mapper (TM) images and Chinese environmental mitigation satellite CCD sensor (HJ-1 CCD) images, both at 30 m spatial resolution, were employed for identifying and monitoring vegetation types in an area of western China, the Qinghai Lake Watershed (QHLW). A decision classification tree (DCT) algorithm using multiple characteristics, including seasonal TM/HJ-1 CCD time series data combined with a digital elevation model (DEM) dataset, and a supervised maximum likelihood classification (MLC) algorithm with a single-date TM image were applied to vegetation classification. The accuracy of the two algorithms was assessed using field observation data. Based on the produced vegetation classification maps, the DCT using multi-season data and geomorphologic parameters was found to be superior to the MLC algorithm using a single-date image, improving the overall accuracy by 11.86% at the second class level and significantly reducing the "salt and pepper" noise. The DCT algorithm applied to TM/HJ-1 CCD time series data and geomorphologic parameters appeared to be a valuable and reliable tool for monitoring vegetation at the first class level (5 vegetation classes) and the second class level (8 vegetation subclasses). The DCT algorithm using multiple characteristics might provide a theoretical basis and a general approach to the automatic extraction of vegetation types from remote sensing imagery over plateau areas.
NASA Astrophysics Data System (ADS)
Feng, Ju; Shen, Wen Zhong; Xu, Chang
2016-09-01
A new algorithm for multi-objective wind farm layout optimization is presented. It formulates the wind turbine locations as continuous variables and is capable of optimizing the number of turbines and their locations in the wind farm simultaneously. Two objectives are considered. One is to maximize the total power production, which is calculated by considering the wake effects using the Jensen wake model combined with the local wind distribution. The other is to minimize the total electrical cable length. This length is assumed to be the total length of the minimal spanning tree that connects all turbines and is calculated using Prim's algorithm. Constraints on the wind farm boundary and wind turbine proximity are also considered. An ideal test case shows that the proposed algorithm largely outperforms a well-known multi-objective genetic algorithm (NSGA-II). In a real test case based on the Horns Rev 1 wind farm, the algorithm also obtains useful Pareto frontiers and provides a wide range of Pareto-optimal layouts with different numbers of turbines for a real-life wind farm developer.
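The cable-length objective is simple to reproduce: build the minimum spanning tree over the turbine coordinates with Prim's algorithm and sum its edges. A minimal sketch follows; the layout coordinates are made up, and a real evaluation would add substation locations and routing constraints.

```python
import numpy as np

def prim_cable_length(coords):
    """Total length of the minimum spanning tree over turbine positions,
    built with Prim's algorithm (O(n^2), adequate for wind-farm sizes)."""
    pts = np.asarray(coords, dtype=float)
    n = len(pts)
    dist = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    in_tree = np.zeros(n, dtype=bool)
    in_tree[0] = True
    best = dist[0].copy()      # cheapest edge from the tree to each node
    total = 0.0
    for _ in range(n - 1):
        best[in_tree] = np.inf
        j = int(np.argmin(best))
        total += best[j]
        in_tree[j] = True
        best = np.minimum(best, dist[j])
    return total

layout = [(0, 0), (500, 0), (500, 400), (0, 400), (250, 800)]
print(f"cable length: {prim_cable_length(layout):.0f} m")
```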
NASA Astrophysics Data System (ADS)
Francoeur, Dany
This doctoral thesis is part of CRIAQ (Consortium de recherche et d'innovation en aerospatiale du Quebec) projects oriented towards the development of on-board approaches for defect detection in aeronautical structures. The originality of this thesis lies in the development and validation of a new method for detecting, quantifying, and locating a notch in a lap-joint structure through the propagation of vibration waves. The first part reviews the state of knowledge on defect identification in the context of Structural Health Monitoring (SHM), as well as the modeling of lap joints. Chapter 3 develops the wave propagation model of a lap joint damaged by a notch, for a flexural wave in the mid-frequency range (10-50 kHz). To this end, a transmission line model (TLM) is built to represent a one-dimensional (1D) joint. This 1D model is then adapted to a two-dimensional (2D) joint under the assumption of a plane incident wavefront perpendicular to the joint. A parametric identification method is then developed to allow both the calibration of the model of the healthy lap joint and the detection and characterization of the notch located on the joint. This method is coupled with an algorithm that performs an exhaustive search of the entire parameter space, making it possible to extract an uncertainty zone associated with the parameters of the optimal model. A sensitivity study of the identification is also carried out. Several measurement campaigns on 1D and 2D lap joints are performed, allowing the study of the repeatability of the results and the variability of different damage cases. The results of this study first demonstrate that the proposed detection method is very effective and makes it possible to track damage progression. Very good notch quantification and localization results were obtained on the various joints tested (1D and 2D). It is expected that the use of Lamb waves would extend the range of validity of the method to smaller damage. This work primarily targets the in-situ monitoring of lap-joint structures, but other types of defects (such as disbonds) and complex structures can also be envisaged. Keywords: lap joint, in-situ monitoring, damage localization and characterization
NASA Astrophysics Data System (ADS)
Luo, Bin; Lin, Lin; Zhong, ShiSheng
2018-02-01
In this research, we propose a preference-guided optimisation algorithm for multi-criteria decision-making (MCDM) problems with interval-valued fuzzy preferences. The interval-valued fuzzy preferences are first decomposed into a series of precise and evenly distributed preference-vectors (reference directions) for the objectives to be optimised, on the basis of a uniform design strategy. Then the preference information is further incorporated into the preference-vectors based on the boundary intersection approach; meanwhile, the MCDM problem with interval-valued fuzzy preferences is reformulated into a series of single-objective optimisation sub-problems (each sub-problem corresponding to a decomposed preference-vector). Finally, a preference-guided optimisation algorithm based on MOEA/D (multi-objective evolutionary algorithm based on decomposition) is proposed to solve the sub-problems in a single run. The proposed algorithm incorporates the preference-vectors within the optimisation process to guide the search towards a more promising subset of the efficient solutions matching the interval-valued fuzzy preferences. Numerous test instances and an engineering application are employed to validate the performance of the proposed algorithm, and the results demonstrate its effectiveness and feasibility.
van der Lee, J H; Svrcek, W Y; Young, B R
2008-01-01
Model Predictive Control (MPC) is a valuable tool for the process control engineer in a wide variety of applications, and because of this the structure of an MPC can vary dramatically from application to application. A number of works have been dedicated to MPC tuning for specific cases; since MPCs can differ significantly, these tuning methods become inapplicable in general, and a trial-and-error tuning approach must be used, which can be quite time consuming and can result in non-optimal tuning. In an attempt to resolve this, a generalized automated tuning algorithm for MPCs was developed. The approach is numerically based and combines a genetic algorithm with multi-objective fuzzy decision-making. The key advantages of this approach are that genetic algorithms are not problem specific and only need to be adapted to account for the number and ranges of tuning parameters for a given MPC, and that multi-objective fuzzy decision-making can handle qualitative statements of what optimum control is, in addition to being able to use multiple inputs to determine tuning parameters that best match the desired results. This is particularly useful for multi-input, multi-output (MIMO) cases where the definition of "optimum" control is subject to the opinion of the control engineer tuning the system. A case study is presented to illustrate the use of the tuning algorithm, including how different definitions of "optimum" control can arise and how they are accounted for in the multi-objective decision-making algorithm. The resulting tuning parameters from each of the definition sets are compared, and in doing so it is shown that the tuning parameters vary to meet each definition of optimum control, demonstrating that the generalized automated tuning approach for MPCs is feasible.
NASA Astrophysics Data System (ADS)
Zhang, Chen; Ni, Zhiwei; Ni, Liping; Tang, Na
2016-10-01
Feature selection is an important method of data preprocessing in data mining. In this paper, a novel feature selection method based on multi-fractal dimension and the harmony search algorithm is proposed. The multi-fractal dimension is adopted as the evaluation criterion of the feature subset, which can determine the number of selected features. An improved harmony search algorithm is used as the search strategy to improve the efficiency of feature selection. The performance of the proposed method is compared with that of other feature selection algorithms on UCI datasets. Besides, the proposed method is also used to predict the daily average concentration of PM2.5 in China. Experimental results show that the proposed method can obtain competitive results in terms of both prediction accuracy and the number of selected features.
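The harmony search loop for a binary feature mask is straightforward to sketch. The objective below is a placeholder (the paper instead scores subsets by multi-fractal dimension), and the memory size and HMCR/PAR parameters are assumed values.

```python
import numpy as np

rng = np.random.default_rng(3)

def objective(mask, X, y):
    """Placeholder score: correlation of the selected-feature mean with y,
    penalised by subset size. The paper uses multi-fractal dimension instead."""
    if not mask.any():
        return -np.inf
    score = abs(np.corrcoef(X[:, mask].mean(axis=1), y)[0, 1])
    return score - 0.01 * mask.sum()

def harmony_search(X, y, hms=20, hmcr=0.9, par=0.3, iters=500):
    n = X.shape[1]
    memory = rng.random((hms, n)) < 0.5                   # harmony memory of masks
    fitness = np.array([objective(m, X, y) for m in memory])
    for _ in range(iters):
        new = np.empty(n, dtype=bool)
        for j in range(n):
            if rng.random() < hmcr:                       # memory consideration
                new[j] = memory[rng.integers(hms), j]
                if rng.random() < par:                    # pitch adjustment: flip bit
                    new[j] = ~new[j]
            else:                                         # random consideration
                new[j] = rng.random() < 0.5
        f = objective(new, X, y)
        worst = int(np.argmin(fitness))
        if f > fitness[worst]:                            # replace worst harmony
            memory[worst], fitness[worst] = new, f
    return memory[int(np.argmax(fitness))]

X = rng.normal(size=(200, 15))
y = X[:, 2] + X[:, 7] + rng.normal(size=200)
print("selected features:", np.flatnonzero(harmony_search(X, y)))
```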
NASA Astrophysics Data System (ADS)
Lu, Jianbo; Xi, Yugeng; Li, Dewei; Xu, Yuli; Gan, Zhongxue
2018-01-01
A common objective of model predictive control (MPC) design is the large initial feasible region, low online computational burden as well as satisfactory control performance of the resulting algorithm. It is well known that interpolation-based MPC can achieve a favourable trade-off among these different aspects. However, the existing results are usually based on fixed prediction scenarios, which inevitably limits the performance of the obtained algorithms. So by replacing the fixed prediction scenarios with the time-varying multi-step prediction scenarios, this paper provides a new insight into improvement of the existing MPC designs. The adopted control law is a combination of predetermined multi-step feedback control laws, based on which two MPC algorithms with guaranteed recursive feasibility and asymptotic stability are presented. The efficacy of the proposed algorithms is illustrated by a numerical example.
The concrete technology of post pouring zone of raft foundation of Hongyun Building B tower
NASA Astrophysics Data System (ADS)
Yin, Suhua; Yu, Liu; Wu, Yanli; Zhao, Ying
2017-08-01
The foundation of the Hongyun Building B tower is a raft foundation 3300 mm thick, which involves a large volume of concrete pouring and a post-pouring (late-poured) band formed at the pouring settlement. The temperature of the pour was controlled in order to prevent cracking during construction of the post-pouring band. The reinforcement of the post-pouring band was designed and monitored. In this way, the quality of the post-pouring band in the raft concrete foundation of the Hongyun Building B tower is guaranteed.
Multiple feature fusion via covariance matrix for visual tracking
NASA Astrophysics Data System (ADS)
Jin, Zefenfen; Hou, Zhiqiang; Yu, Wangsheng; Wang, Xin; Sun, Hui
2018-04-01
Aiming at the problem of complicated dynamic scenes in visual target tracking, a multi-feature fusion tracking algorithm based on the covariance matrix is proposed to improve the robustness of the tracking algorithm. In the framework of a quantum genetic algorithm, this paper uses the region covariance descriptor to fuse the color, edge, and texture features, and uses a fast covariance intersection algorithm to update the model. The low dimension of the region covariance descriptor, the fast convergence speed and strong global optimization ability of the quantum genetic algorithm, and the fast computation of the fast covariance intersection algorithm improve the computational efficiency of the fusion, matching, and updating processes, so that the algorithm achieves fast and effective multi-feature fusion tracking. Experiments show that the proposed algorithm can not only achieve fast and robust tracking but also effectively handle interference from occlusion, rotation, deformation, motion blur, and so on.
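The region covariance descriptor at the heart of this fusion is compact to sketch: stack per-pixel feature channels, take their covariance over the patch, and compare patches with a generalized-eigenvalue metric. The feature choice below ([x, y, intensity, gradient magnitudes], Tuzel-style) is an illustrative assumption; the paper fuses color, edge, and texture channels the same way.

```python
import numpy as np
from scipy.linalg import eigh

def region_covariance(patch):
    """Covariance of per-pixel feature vectors [x, y, I, |dI/dx|, |dI/dy|]
    over a grayscale image patch."""
    h, w = patch.shape
    ys, xs = np.mgrid[0:h, 0:w]
    gy, gx = np.gradient(patch.astype(float))
    feats = np.stack([xs, ys, patch, np.abs(gx), np.abs(gy)], axis=-1)
    return np.cov(feats.reshape(-1, feats.shape[-1]), rowvar=False)

def covariance_distance(c1, c2):
    """Dissimilarity via generalized eigenvalues (Forstner-style metric)."""
    lam = eigh(c1, c2, eigvals_only=True)
    return np.sqrt((np.log(lam) ** 2).sum())

rng = np.random.default_rng(4)
a, b = rng.random((32, 32)), rng.random((32, 32))
print(covariance_distance(region_covariance(a), region_covariance(b)))
```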
MIMO signal progressing with RLSCMA algorithm for multi-mode multi-core optical transmission system
NASA Astrophysics Data System (ADS)
Bi, Yuan; Liu, Bo; Zhang, Li-jia; Xin, Xiang-jun; Zhang, Qi; Wang, Yong-jun; Tian, Qing-hua; Tian, Feng; Mao, Ya-ya
2018-01-01
When signals are transmitted over multi-mode multi-core fiber, mode coupling occurs between modes, and mode dispersion also arises because each mode has a different transmission speed in the link. Mode coupling and mode dispersion damage the useful signal in the transmission link, so the receiver needs to apply digital signal processing to the received signal and compensate for the damage in the link. We first analyze the influence of mode coupling and mode dispersion in the transmission of signals over multi-mode multi-core fiber, then present the relationship between the coupling coefficient and the dispersion coefficient. We then carry out adaptive signal processing with MIMO equalizers based on the recursive least squares constant modulus algorithm (RLSCMA). The MIMO equalization algorithm adapts the equalizer taps according to the degree of crosstalk between cores or modes, which eliminates the interference among different modes and cores in a space division multiplexing (SDM) transmission system. The simulation results show that the distorted signals are restored efficiently with fast convergence speed.
Xiao, Li; Cai, Qin; Li, Zhilin; Zhao, Hongkai; Luo, Ray
2014-11-25
A multi-scale framework is proposed for more realistic molecular dynamics simulations in continuum solvent models by coupling a molecular mechanics treatment of solute with a fluid mechanics treatment of solvent. This article reports our initial efforts to formulate the physical concepts necessary for coupling the two mechanics and develop a 3D numerical algorithm to simulate the solvent fluid via the Navier-Stokes equation. The numerical algorithm was validated with multiple test cases. The validation shows that the algorithm is effective and stable, with observed accuracy consistent with our design.
NASA Technical Reports Server (NTRS)
Gao, Bo-Cai; Montes, Marcos J.; Davis, Curtiss O.
2003-01-01
This SIMBIOS contract supports several activities over its three-year time-span. These include certain computational aspects of atmospheric correction, including the modification of our hyperspectral atmospheric correction algorithm Tafkaa for various multi-spectral instruments, such as SeaWiFS, MODIS, and GLI. Additionally, since absorbing aerosols are becoming common in many coastal areas, we are making the model calculations to incorporate various absorbing aerosol models into tables used by our Tafkaa atmospheric correction algorithm. Finally, we have developed the algorithms to use MODIS data to characterize thin cirrus effects on aerosol retrieval.
NASA Astrophysics Data System (ADS)
Lallier-Daniels, Dominic
Fan design is often based on a trial-and-error methodology of improving existing geometries, as well as on the design experience and experimental results accumulated by companies. However, this methodology can prove costly in case of failure; even when successful, significant performance improvements are often difficult, if not impossible, to obtain. The present project proposes the development and validation of a design methodology based on the use of meridional flow calculations for the preliminary design of mixed-flow turbomachines and on computational fluid dynamics (CFD) for the detailed design. The meridional calculation method underlying the proposed design process is presented first. The theoretical framework is developed first; since the meridional calculation remains fundamentally an iterative process, the calculation procedure is also presented, including the numerical methods used to solve the fundamental equations. The meridional code written as part of this master's project is validated against a meridional calculation algorithm developed by the author of the method, as well as against numerical simulation results from a commercial code. The turbomachine design methodology developed in this study is then presented in the form of a case study for a mixed-flow fan based on specifications provided by the industrial partner Venmar. The methodology is divided into three steps: the meridional calculation is used for preliminary sizing, followed by 2D cascade simulations for the detailed design of the blades, and finally a 3D numerical analysis for validation and fine optimization of the geometry. The meridional calculation results are also compared with the simulation results for the 3D geometry in order to validate the use of the meridional calculation as a preliminary design tool.
NASA Astrophysics Data System (ADS)
Zhileykin, M. M.; Kotiev, G. O.; Nagatsev, M. V.
2018-02-01
In order to meet the growing mobility requirements for wheeled vehicles on all types of terrain, engineers have to develop a large number of specialized control algorithms for the multi-axle wheeled vehicle (MWV) suspension, improving such qualities as ride comfort, handling, and stability. The authors have developed an adaptive algorithm for the dynamic damping of MWV body oscillations. The algorithm provides high ride comfort and high mobility of the vehicle. The article discloses a method for the synthesis of an adaptive dynamic continuous algorithm for MWV body oscillation damping and provides simulation results proving the high efficiency of the developed control algorithm.
An Improved Iris Recognition Algorithm Based on Hybrid Feature and ELM
NASA Astrophysics Data System (ADS)
Wang, Juan
2018-03-01
Iris images are easily polluted by noise and uneven illumination. This paper proposes an improved extreme learning machine (ELM) based iris recognition algorithm with hybrid features. 2D Gabor filters and the GLCM are employed to generate a multi-granularity hybrid feature vector: the 2D Gabor filters capture low-to-intermediate frequency texture information, while the GLCM features capture high-frequency texture information. Finally, an extreme learning machine is used for iris recognition. Experimental results show that the proposed ELM-based multi-granularity iris recognition algorithm (ELM-MGIR) achieves a higher accuracy of 99.86% and a lower EER of 0.12% while maintaining real-time performance, outperforming other mainstream iris recognition algorithms.
Wang, Rui; Zhou, Yongquan; Zhao, Chengyan; Wu, Haizhou
2015-01-01
Multi-threshold image segmentation is a powerful image processing technique that is used for the preprocessing of pattern recognition and computer vision. However, traditional multilevel thresholding methods are computationally expensive because they involve exhaustively searching the optimal thresholds to optimize the objective functions. To overcome this drawback, this paper proposes a flower pollination algorithm with a randomized location modification. The proposed algorithm is used to find optimal threshold values for maximizing Otsu's objective functions with regard to eight medical grayscale images. When benchmarked against other state-of-the-art evolutionary algorithms, the new algorithm proves itself to be robust and effective through numerical experimental results including Otsu's objective values and standard deviations.
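The objective being maximised here is Otsu's between-class variance, which generalises directly to several thresholds. A minimal sketch follows, with a brute-force search standing in for the flower pollination optimiser; the synthetic histogram and the two-threshold setting are illustrative only.

```python
import numpy as np

def between_class_variance(hist, thresholds):
    """Otsu's between-class variance for an arbitrary number of thresholds;
    this is the objective that the evolutionary search maximises."""
    p = hist / hist.sum()
    levels = np.arange(len(hist))
    edges = [0, *sorted(thresholds), len(hist)]
    mu_total = (p * levels).sum()
    var = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        w = p[lo:hi].sum()
        if w > 0:
            mu = (p[lo:hi] * levels[lo:hi]).sum() / w
            var += w * (mu - mu_total) ** 2
    return var

# Brute-force search for two thresholds on a synthetic trimodal histogram;
# the paper replaces this loop with flower pollination plus a randomized
# location modification to stay tractable at higher threshold counts.
rng = np.random.default_rng(5)
img = np.concatenate([rng.normal(60, 10, 4000), rng.normal(130, 12, 4000),
                      rng.normal(200, 8, 2000)]).clip(0, 255).astype(int)
hist = np.bincount(img, minlength=256)
best = max(((t1, t2) for t1 in range(1, 255) for t2 in range(t1 + 1, 256)),
           key=lambda t: between_class_variance(hist, t))
print("optimal thresholds:", best)
```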
Optimizing Constrained Single Period Problem under Random Fuzzy Demand
NASA Astrophysics Data System (ADS)
Taleizadeh, Ata Allah; Shavandi, Hassan; Riazi, Afshin
2008-09-01
In this paper, we consider the multi-product multi-constraint newsboy problem with random fuzzy demands and total discount. The demand for the products is often stochastic in the real world, but the estimation of the parameters of the distribution function may be done in a fuzzy manner, so an appropriate option for modeling the demand of the products is the random fuzzy variable. The objective function of the proposed model is to maximize the expected profit of the newsboy. We consider constraints such as warehouse space, restrictions on the order quantities of the products, and a budget restriction, and we also consider batch sizes for the product orders. Finally, we introduce a random fuzzy multi-product multi-constraint newsboy problem (RFM-PM-CNP), which is transformed into a multi-objective mixed integer nonlinear programming model. Furthermore, a hybrid intelligent algorithm based on a genetic algorithm, Pareto and TOPSIS is presented for the developed model. Finally, an illustrative example is presented to show the performance of the developed model and algorithm.
Employing multi-GPU power for molecular dynamics simulation: an extension of GALAMOST
NASA Astrophysics Data System (ADS)
Zhu, You-Liang; Pan, Deng; Li, Zhan-Wei; Liu, Hong; Qian, Hu-Jun; Zhao, Yang; Lu, Zhong-Yuan; Sun, Zhao-Yan
2018-04-01
We describe the algorithm for employing multi-GPU power on the basis of Message Passing Interface (MPI) domain decomposition in a molecular dynamics code, GALAMOST, which is designed for the coarse-grained simulation of soft matter. The multi-GPU version is developed based on our previous single-GPU version. In multi-GPU runs, one GPU takes charge of one domain and runs the single-GPU code path. The communication between neighbouring domains follows an algorithm similar to that of the CPU-based LAMMPS code, but is optimised specifically for GPUs. We employ a memory-saving design which enlarges the maximum system size under the same device conditions. An optimisation algorithm is employed to prolong the update period of the neighbour list. We demonstrate good performance of multi-GPU runs on simulations of a Lennard-Jones liquid, a dissipative particle dynamics liquid, a polymer-nanoparticle composite, and two-patch particles on a workstation. Good scaling over many cluster nodes is presented for two-patch particles.
Local SIMPLE multi-atlas-based segmentation applied to lung lobe detection on chest CT
NASA Astrophysics Data System (ADS)
Agarwal, M.; Hendriks, E. A.; Stoel, B. C.; Bakker, M. E.; Reiber, J. H. C.; Staring, M.
2012-02-01
For multi-atlas-based segmentation approaches, a segmentation fusion scheme which considers local performance measures may be more accurate than a method which uses a global performance measure. We improve upon an existing segmentation fusion method called SIMPLE and extend it to be localized and suitable for multi-labeled segmentations. We demonstrate the algorithm's performance on 23 CT scans of COPD patients using a leave-one-out experiment. Our algorithm performs significantly better (p < 0.01) than majority voting, STAPLE, and SIMPLE, with a median overlap of the fissure of 0.45, 0.48, 0.55, and 0.6 for majority voting, STAPLE, SIMPLE, and the proposed algorithm, respectively.
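For reference, the majority-voting baseline against which these fusion schemes are compared reduces to a per-voxel vote. A minimal sketch, assuming the label maps are already co-registered integer arrays (2D toy volumes here); SIMPLE and the proposed local extension instead weight or discard atlases by estimated performance.

```python
import numpy as np

def majority_vote(label_maps):
    """Per-voxel majority vote over labels propagated from each atlas."""
    stack = np.stack(label_maps)                    # (n_atlases, *volume_shape)
    n_labels = int(stack.max()) + 1
    votes = np.stack([(stack == k).sum(axis=0) for k in range(n_labels)])
    return votes.argmax(axis=0)

rng = np.random.default_rng(6)
atlases = [rng.integers(0, 3, size=(4, 4)) for _ in range(5)]
print(majority_vote(atlases))
```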
Evaluation of prostate segmentation algorithms for MRI: the PROMISE12 challenge
Litjens, Geert; Toth, Robert; van de Ven, Wendy; Hoeks, Caroline; Kerkstra, Sjoerd; van Ginneken, Bram; Vincent, Graham; Guillard, Gwenael; Birbeck, Neil; Zhang, Jindang; Strand, Robin; Malmberg, Filip; Ou, Yangming; Davatzikos, Christos; Kirschner, Matthias; Jung, Florian; Yuan, Jing; Qiu, Wu; Gao, Qinquan; Edwards, Philip “Eddie”; Maan, Bianca; van der Heijden, Ferdinand; Ghose, Soumya; Mitra, Jhimli; Dowling, Jason; Barratt, Dean; Huisman, Henkjan; Madabhushi, Anant
2014-01-01
Prostate MRI image segmentation has been an area of intense research due to the increased use of MRI as a modality for the clinical workup of prostate cancer. Segmentation is useful for various tasks, e.g. to accurately localize prostate boundaries for radiotherapy or to initialize multi-modal registration algorithms. In the past, it has been difficult for research groups to evaluate prostate segmentation algorithms on multi-center, multi-vendor and multi-protocol data. Especially because we are dealing with MR images, image appearance, resolution and the presence of artifacts are affected by differences in scanners and/or protocols, which in turn can have a large influence on algorithm accuracy. The Prostate MR Image Segmentation (PROMISE12) challenge was set up to allow a fair and meaningful comparison of segmentation methods on the basis of performance and robustness. In this work we discuss the initial results of the online PROMISE12 challenge, and the results obtained in the live challenge workshop hosted by the MICCAI 2012 conference. In the challenge, 100 prostate MR cases from 4 different centers were included, with differences in scanner manufacturer, field strength and protocol. A total of 11 teams from academic research groups and industry participated. Algorithms showed a wide variety in methods and implementation, including active appearance models, atlas registration and level sets. Evaluation was performed using boundary- and volume-based metrics which were combined into a single score relating the metrics to human expert performance. The winners of the challenge were the algorithms by teams Imorphics and ScrAutoProstate, with scores of 85.72 and 84.29 overall. Both algorithms were significantly better than all other algorithms in the challenge (p < 0.05) and had efficient implementations, with run times of 8 minutes and 3 seconds per case, respectively. Overall, active appearance model based approaches seemed to outperform other approaches like multi-atlas registration, both in accuracy and computation time. Although average algorithm performance was good to excellent and the Imorphics algorithm outperformed the second observer on average, we showed that algorithm combination might lead to further improvement, indicating that optimal performance for prostate segmentation is not yet obtained. All results are available online at http://promise12.grand-challenge.org/. PMID:24418598
Multiple sequence alignment using multi-objective based bacterial foraging optimization algorithm.
Rani, R Ranjani; Ramyachitra, D
2016-12-01
Multiple sequence alignment (MSA) is a widespread approach in computational biology and bioinformatics. MSA deals with how sequences of nucleotides and amino acids are aligned with the minimum number of gaps between them, which points to the functional, evolutionary and structural relationships among the sequences. Still, computing an MSA with high accuracy and statistically significant results remains a challenging task. In this work, the Bacterial Foraging Optimization (BFO) algorithm was employed to align the biological sequences, resulting in a non-dominated optimal solution. It employs multiple objectives: maximization of similarity, non-gap percentage, and conserved blocks, and minimization of gap penalty. The BAliBASE 3.0 benchmark database was utilized to examine the proposed algorithm against other methods. In this paper, two algorithms have been proposed: a Hybrid Genetic Algorithm with Artificial Bee Colony (GA-ABC) and a Bacterial Foraging Optimization algorithm. It was found that the Hybrid Genetic Algorithm with Artificial Bee Colony performed better than the existing optimization algorithms, but conserved blocks were not obtained using GA-ABC; BFO was then used for the alignment, and the conserved blocks were obtained. The proposed Multi-Objective Bacterial Foraging Optimization Algorithm (MO-BFO) was compared with the widely used MSA methods Clustal Omega, Kalign, MUSCLE, MAFFT, Genetic Algorithm (GA), Ant Colony Optimization (ACO), Artificial Bee Colony (ABC), Particle Swarm Optimization (PSO) and the Hybrid Genetic Algorithm with Artificial Bee Colony (GA-ABC). The final results show that the proposed MO-BFO algorithm yields better alignment than the most widely used methods. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Big drinkers: how BMI, gender and rules of thumb influence the free pouring of wine.
Smarandescu, Laura; Walker, Doug; Wansink, Brian
2014-11-01
This research examines free pouring behavior and provides an account of how Body Mass Index (BMI) and gender might lead to the overpouring, and consequently the overconsumption of wine. An observational study with young adults investigated how BMI and gender affect free-pouring of wine over a variety of pouring scenarios, and how rules-of-thumb in pouring affect the quantities of alcohol poured by men and women across BMI categories. For men, the amount poured was positively related to BMI. However, BMI did not affect pours by women. The use of the "half glass" rule-of-thumb in pouring reduced the volume of wine poured by over 20% for both men and women. Importantly, this rule-of-thumb substantially attenuated the pours by men at high BMI levels. Increasing awareness of pouring biases represents an early and effective step toward curbing alcohol consumption among men, and especially those who are overweight. Additionally, using a simple "half glass" rule-of-thumb may be an effective way to curb overpouring, despite non-standard glass sizes. Copyright © 2014. Published by Elsevier B.V.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kikinzon, Evgeny; Kuznetsov, Yuri; Lipnikov, Konstatin
In this study, we describe a new algorithm for solving the multi-material diffusion problem when material interfaces are not aligned with the mesh. In this case, interface reconstruction methods are used to construct an approximate representation of the interfaces between materials. They produce so-called multi-material cells, in which materials are represented by material polygons that each contain only one material. The reconstructed interface is not continuous between cells. We propose a new method for solving multi-material diffusion problems on such meshes and compare its performance with known homogenization methods.
NASA Astrophysics Data System (ADS)
Rebaine, Ali
1997-08-01
This work concerns the numerical simulation of two-dimensional laminar and turbulent compressible internal flows, with particular attention to flows in supersonic ejectors. The Navier-Stokes equations are written in conservative form using the so-called enthalpic variables as independent variables, namely the static pressure, the momentum and the specific total enthalpy. A stable variational formulation of the Navier-Stokes equations is used, based on the SUPG (Streamline Upwinding Petrov Galerkin) method with an operator for capturing strong gradients. A turbulence model for simulating ejector flows is developed. It separates two distinct regions: a region near the solid wall, where the Baldwin-Lomax model is used, and a region far from the wall, where a new formulation based on Schlichting's jet model is proposed. A technique for computing the turbulent viscosity on an unstructured mesh is implemented. The spatial discretization of the variational form is performed with the finite element method using a mixed approximation: quadratic for the momentum and velocity components and linear for the remaining variables. The temporal discretization is performed by a finite difference method using the implicit Euler scheme. The matrix system resulting from the space-time discretization is solved with the GMRES algorithm using a diagonal preconditioner. Numerical validations were carried out on several types of nozzles and ejectors. The main validation is the simulation of the flow in the ejector tested at the NASA Lewis research center. The results obtained compare very well with previous work and are markedly superior for turbulent flows in ejectors.
Using Alternative Multiplication Algorithms to "Offload" Cognition
ERIC Educational Resources Information Center
Jazby, Dan; Pearn, Cath
2015-01-01
When viewed through a lens of embedded cognition, algorithms may enable aspects of the cognitive work of multi-digit multiplication to be "offloaded" to the environmental structure created by an algorithm. This study analyses four multiplication algorithms by viewing different algorithms as enabling cognitive work to be distributed…
NASA Astrophysics Data System (ADS)
Moon, Byung-Young
2005-12-01
A hybrid neural-genetic multi-model parameter estimation algorithm is demonstrated. The method can be applied to structured system identification of an electro-hydraulic servo system. The algorithm consists of a recurrent incremental credit assignment (ICRA) neural network and a genetic algorithm: the ICRA neural network evaluates each member of a generation of models, and the genetic algorithm produces the next generation of models. To evaluate the proposed method, an electro-hydraulic servo system was designed and manufactured, and experiments were carried out to assess the hybrid neural-genetic multi-model parameter estimation algorithm. As a result, the dynamic characteristics were obtained, i.e., the parameters (mass, damping coefficient, bulk modulus, spring coefficient) that minimize the total squared error. The results of this study can be applied to hydraulic systems in industrial fields.
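As an illustration of the genetic-algorithm half of such a method, the hedged sketch below fits the four named physical parameters by minimizing total squared error against a measured response. The second-order model and all constants are toy stand-ins, and the paper's ICRA neural-network evaluator is not reproduced.

import numpy as np

# Illustrative sketch only: a plain GA searching over (mass, damping,
# bulk modulus, spring coefficient) to minimize total squared error.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 200)

def simulate(params):
    m, c, beta, k = params                     # toy stand-in for servo dynamics
    wn = np.sqrt((k + beta) / m)
    zeta = c / (2 * np.sqrt(m * (k + beta)))
    return 1 - np.exp(-zeta * wn * t) * np.cos(wn * np.sqrt(max(1 - zeta**2, 1e-9)) * t)

true_params = np.array([2.0, 1.5, 40.0, 60.0])           # assumed "unknowns"
measured = simulate(true_params) + 0.01 * rng.standard_normal(t.size)

def fitness(p):
    return -np.sum((simulate(p) - measured) ** 2)        # GA maximizes fitness

lo, hi = np.array([0.1, 0.1, 1.0, 1.0]), np.array([10, 10, 100, 100])
pop = rng.uniform(lo, hi, size=(40, 4))
for _ in range(100):
    f = np.array([fitness(p) for p in pop])
    parents = pop[np.argsort(f)[-20:]]                   # truncation selection
    cut = rng.integers(1, 4, size=20)
    mates = parents[rng.permutation(20)]
    children = np.where(np.arange(4) < cut[:, None], parents, mates)  # 1-point crossover
    children += rng.normal(0, 0.05, children.shape) * (hi - lo)       # Gaussian mutation
    pop = np.clip(np.vstack([parents, children]), lo, hi)

best = pop[np.argmax([fitness(p) for p in pop])]
print("estimated (m, c, beta, k):", best.round(2))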
Parallel and fault-tolerant algorithms for hypercube multiprocessors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aykanat, C.
1988-01-01
Several techniques for increasing the performance of parallel algorithms on distributed-memory message-passing multiprocessor systems are investigated. These techniques are effectively implemented for the parallelization of the Scaled Conjugate Gradient (SCG) algorithm on a hypercube-connected message-passing multiprocessor, and significant performance improvement is achieved. The SCG algorithm is used for the solution phase of an FE modeling system. Almost linear speed-up is achieved, and it is shown that the hypercube topology is scalable for this class of FE problems. The SCG algorithm is also shown to be suitable for vectorization, and near-supercomputer performance is achieved on a vector hypercube multiprocessor by exploiting both parallelization and vectorization. Fault-tolerance issues for the parallel SCG algorithm and for the hypercube topology are also addressed.
A label distance maximum-based classifier for multi-label learning.
Liu, Xiaoli; Bao, Hang; Zhao, Dazhe; Cao, Peng
2015-01-01
Multi-label classification is useful in many bioinformatics tasks such as gene function prediction and protein site localization. This paper presents an improved neural network algorithm, the Max Label Distance Back-Propagation Algorithm for Multi-Label Classification. The method modifies the total error function of standard back-propagation by adding a penalty term that maximizes the distance between the positive and negative labels. Extensive experiments were conducted to compare this method against state-of-the-art multi-label methods on three popular bioinformatics benchmark datasets. The results show that the proposed method is more effective for bioinformatics multi-label classification than commonly used techniques.
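The penalty idea lends itself to a compact sketch. The following hedged Python example adds a label-distance penalty to a sum-of-squares error; the hinge-style margin and the weighting lam are illustrative choices, not the paper's exact formulation.

import numpy as np

# Sketch of the loss idea: standard BP sum-of-squares error plus a penalty
# rewarding a large distance between outputs of positive and negative labels.
def multilabel_loss(y_pred, y_true, lam=0.1, margin=1.0):
    mse = np.sum((y_pred - y_true) ** 2)
    pos, neg = y_pred[y_true == 1], y_pred[y_true == 0]
    if pos.size == 0 or neg.size == 0:
        return mse
    # Distance between the least-confident positive and most-confident negative.
    dist = pos.min() - neg.max()
    penalty = max(0.0, margin - dist)          # penalize a small label distance
    return mse + lam * penalty

y_true = np.array([1, 0, 1, 0, 0])
y_pred = np.array([0.8, 0.3, 0.6, 0.2, 0.4])
print(multilabel_loss(y_pred, y_true))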
Multi-particle phase space integration with arbitrary set of singularities in CompHEP
NASA Astrophysics Data System (ADS)
Kovalenko, D. N.; Pukhov, A. E.
1997-02-01
We describe an algorithm for multi-particle phase space integration for collision and decay processes, implemented in the CompHEP package, version 3.2. In the framework of this algorithm it is possible to regularize an arbitrary set of singularities caused by virtual particle propagators. The algorithm is based on the method of recursive representation of kinematics and on the multichannel Monte Carlo approach. The CompHEP package is available on the WWW: http://theory.npi.msu.su/pukhov/comphep.html
Multi-objective optimisation and decision-making of space station logistics strategies
NASA Astrophysics Data System (ADS)
Zhu, Yue-he; Luo, Ya-zhong
2016-10-01
Space station logistics strategy optimisation is a complex engineering problem with multiple objectives. Finding a decision-maker-preferred compromise solution becomes more significant when solving such a problem. However, the designer-preferred solution is not easy to determine using the traditional method. Thus, a hybrid approach that combines the multi-objective evolutionary algorithm, physical programming, and differential evolution (DE) algorithm is proposed to deal with the optimisation and decision-making of space station logistics strategies. A multi-objective evolutionary algorithm is used to acquire a Pareto frontier and help determine the range parameters of the physical programming. Physical programming is employed to convert the four-objective problem into a single-objective problem, and a DE algorithm is applied to solve the resulting physical programming-based optimisation problem. Five kinds of objective preference are simulated and compared. The simulation results indicate that the proposed approach can produce good compromise solutions corresponding to different decision-makers' preferences.
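Since the final stage reduces to a single-objective search, a minimal DE/rand/1/bin sketch is shown below; the placeholder objective merely stands in for the aggregate preference function that physical programming would produce, and all parameter values are illustrative.

import numpy as np

# Minimal DE/rand/1/bin loop; the objective is a toy stand-in.
rng = np.random.default_rng(1)

def aggregate_preference(x):                   # placeholder scalarized objective
    return np.sum((x - 0.3) ** 2) + 0.1 * np.sum(np.sin(5 * x) ** 2)

dim, NP, F, CR = 6, 30, 0.7, 0.9
pop = rng.random((NP, dim))
cost = np.array([aggregate_preference(x) for x in pop])
for _ in range(200):
    for i in range(NP):
        a, b, c = pop[rng.choice([j for j in range(NP) if j != i], 3, replace=False)]
        mutant = np.clip(a + F * (b - c), 0, 1)
        cross = rng.random(dim) < CR
        cross[rng.integers(dim)] = True        # ensure at least one gene crosses over
        trial = np.where(cross, mutant, pop[i])
        tc = aggregate_preference(trial)
        if tc < cost[i]:                       # greedy one-to-one selection
            pop[i], cost[i] = trial, tc
print("best cost:", cost.min())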
Advances in multi-sensor data fusion: algorithms and applications.
Dong, Jiang; Zhuang, Dafang; Huang, Yaohuan; Fu, Jingying
2009-01-01
With the development of satellite and remote sensing techniques, more and more image data from airborne/satellite sensors have become available. Multi-sensor image fusion seeks to combine information from different images to obtain more inferences than can be derived from a single sensor. In image-based application fields, image fusion has emerged as a promising research area since the end of the last century. The paper presents an overview of recent advances in multi-sensor satellite image fusion. Firstly, the most popular existing fusion algorithms are introduced, with emphasis on their recent improvements. Advances in the main application fields in remote sensing, including object identification, classification, change detection and maneuvering target tracking, are described. Both advantages and limitations of those applications are then discussed. Recommendations are offered, including: (1) improvement of fusion algorithms; (2) development of "algorithm fusion" methods; (3) establishment of an automatic quality assessment scheme.
Bi, Sheng; Zeng, Xiao; Tang, Xin; Qin, Shujia; Lai, King Wai Chiu
2016-01-01
Compressive sensing (CS) theory has opened up new paths for the development of signal processing applications. Based on this theory, a novel single-pixel camera architecture has been introduced to overcome the current limitations and challenges of traditional focal plane arrays. However, video quality based on this method is limited by existing acquisition and recovery methods, which are also time-consuming. In this paper, a multi-frame motion estimation algorithm is proposed for CS video to enhance the video quality. The proposed algorithm uses multiple frames to implement motion estimation. Experimental results show that using multi-frame motion estimation can improve the quality of recovered videos. To further reduce the motion estimation time, a block match algorithm is used to process motion estimation. Experiments demonstrate that using the block match algorithm can reduce motion estimation time by 30%. PMID:26950127
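Block matching is a standard technique, so a generic full-search sketch can illustrate the motion-estimation step; the block size, search radius, and sum-of-absolute-differences criterion below are common defaults, not necessarily the paper's settings.

import numpy as np

# Generic full-search block matching: for each block of the current frame,
# search a small window of the previous frame for the minimum SAD.
def block_match(prev, curr, block=8, search=4):
    H, W = curr.shape
    motion = np.zeros((H // block, W // block, 2), dtype=int)
    for by in range(0, H - block + 1, block):
        for bx in range(0, W - block + 1, block):
            target = curr[by:by + block, bx:bx + block]
            best, best_dv = np.inf, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y, x = by + dy, bx + dx
                    if 0 <= y <= H - block and 0 <= x <= W - block:
                        sad = np.abs(prev[y:y + block, x:x + block] - target).sum()
                        if sad < best:
                            best, best_dv = sad, (dy, dx)
            motion[by // block, bx // block] = best_dv
    return motion

rng = np.random.default_rng(2)
prev = rng.random((32, 32))
curr = np.roll(prev, shift=(2, 1), axis=(0, 1))   # content moved down 2, right 1
# Interior block is found 2 up, 1 left in the previous frame, so prints [-2 -1]:
print(block_match(prev, curr)[1, 1])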
NASA Astrophysics Data System (ADS)
Gu, Hui; Zhu, Hongxia; Cui, Yanfeng; Si, Fengqi; Xue, Rui; Xi, Han; Zhang, Jiayu
2018-06-01
An integrated combustion optimization scheme is proposed that jointly considers coal-fired boiler combustion efficiency and outlet NOx emissions. Continuous attribute discretization and reduction techniques are handled as optimization preparation by the E-Cluster and C_RED methods, in which the number of segments need not be provided in advance and adapts continuously to the character of the data. In order to obtain multi-objective results with a clustering method for mixed data, a modified K-prototypes algorithm is then proposed. This algorithm can be divided into two stages: a K-prototypes algorithm with self-adaptation of the number of clusters, and clustering for multi-objective optimization. Field tests were carried out at a 660 MW coal-fired boiler to provide real data as a case study for controllable attribute discretization and reduction in the boiler system, and for obtaining optimization parameters under the multi-objective rule [max ηb, min yNOx].
MREG V1.1: a multi-scale image registration algorithm for SAR applications.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eichel, Paul H.
2013-08-01
MREG V1.1 is the sixth-generation SAR image registration algorithm developed by the Signal Processing & Technology Department for Synthetic Aperture Radar applications. Like its predecessor algorithm REGI, it employs a powerful iterative multi-scale paradigm to achieve the competing goals of sub-pixel registration accuracy and the ability to handle large initial offsets. Since it is not model based, it allows for high-fidelity tracking of spatially varying terrain-induced misregistration. Since it does not rely on image-domain phase, it is equally adept at coherent and noncoherent image registration. This document provides a brief history of the registration processors developed by Dept. 5962 leading up to MREG V1.1, a full description of the signal processing steps involved in the algorithm, and a user's manual with application-specific recommendations for CCD, TwoColor MultiView, and SAR stereoscopy.
Simulation evaluation of capacitor bank impact on increasing supply current for aluminium production
NASA Astrophysics Data System (ADS)
Hasan, S.; Badra, K.; Dinzi, R.; Suherman
2018-03-01
The DC current supplied to power the electrolysis process for producing aluminium at PT Indonesia Asahan Aluminium (Persero) is about 193 kA. At this condition, the load voltage regulator (LVR) transformer operates at a 0.89 lagging power factor. By adding a capacitor bank to reduce the harmonic distortion, the supply current is expected to increase. This paper evaluates the impact of capacitor bank installation on the system using an ETAP 12.0 simulation. It was found that by installing a 90 MVAR capacitor bank on the secondary side of the LVR, the power factor is improved by about 8% and the DC current increases by about 13.5%.
MultiMiTar: a novel multi objective optimization based miRNA-target prediction method.
Mitra, Ramkrishna; Bandyopadhyay, Sanghamitra
2011-01-01
Machine learning based miRNA-target prediction algorithms often fail to obtain a balanced prediction accuracy in terms of both sensitivity and specificity, due to the lack of a gold standard of negative examples, of miRNA-targeting-site context-specific relevant features, and of an efficient feature selection process. Moreover, all the sequence, structure and machine learning based algorithms are unable to distribute the true positive predictions preferentially at the top of the ranked list; hence the algorithms become unreliable to biologists. In addition, these algorithms fail to obtain a considerable combination of precision and recall for target transcripts that are translationally repressed at the protein level. In this article, we introduce an efficient miRNA-target prediction system, MultiMiTar, a Support Vector Machine (SVM) based classifier integrated with a multi-objective metaheuristic based feature selection technique. The robust performance of the proposed method is mainly the result of using high-quality negative examples and the selection of biologically relevant miRNA-targeting-site context-specific features. The features are selected by a novel feature selection technique, AMOSA-SVM, that integrates the multi-objective optimization technique Archived Multi-Objective Simulated Annealing (AMOSA) and SVM. MultiMiTar is found to achieve a much higher Matthews correlation coefficient (MCC) of 0.583 and average class-wise accuracy (ACA) of 0.8 compared to other target prediction methods on a completely independent test data set. The MCC and ACA values of these other algorithms range from -0.269 to 0.155 and from 0.321 to 0.582, respectively. Moreover, it shows a more balanced result in terms of precision and sensitivity (recall) for the translationally repressed data set compared to all the other existing methods. An important aspect is that the true positive predictions are distributed preferentially at the top of the ranked list, which makes MultiMiTar reliable for biologists. MultiMiTar is now available as an online tool at www.isical.ac.in/~bioinfo_miu/multimitar.htm. MultiMiTar software can be downloaded from www.isical.ac.in/~bioinfo_miu/multimitar-download.htm.
HEURISTIC OPTIMIZATION AND ALGORITHM TUNING APPLIED TO SORPTIVE BARRIER DESIGN
While heuristic optimization is applied in environmental applications, ad-hoc algorithm configuration is typical. We use a multi-layer sorptive barrier design problem as a benchmark for an algorithm-tuning procedure, as applied to three heuristics (genetic algorithms, simulated ...
A Note on Evolutionary Algorithms and Its Applications
ERIC Educational Resources Information Center
Bhargava, Shifali
2013-01-01
This paper introduces evolutionary algorithms and their applications in multi-objective optimization. Elitist and non-elitist multiobjective evolutionary algorithms are discussed with their advantages and disadvantages. We also discuss constrained multiobjective evolutionary algorithms and their applications in various areas.
3D reconstruction from multi-view VHR-satellite images in MicMac
NASA Astrophysics Data System (ADS)
Rupnik, Ewelina; Pierrot-Deseilligny, Marc; Delorme, Arthur
2018-05-01
This work addresses the generation of high-quality digital surface models by fusing multiple depth maps calculated with a dense image matching method. The algorithm is adapted to very high resolution multi-view satellite images, and the main contributions of this work are in the multi-view fusion. The algorithm is insensitive to outliers, takes into account the matching quality indicators, handles non-correlated zones (e.g. occlusions), and is solved with a multi-directional dynamic programming approach. No geometric constraints (e.g. surface planarity) or auxiliary data in the form of ground control points are required for its operation. Prior to the fusion procedure, the RPC geolocation parameters of all images are improved in a bundle block adjustment routine. The performance of the algorithm is evaluated on two VHR (Very High Resolution) satellite image datasets (Pléiades, WorldView-3), revealing its good performance in reconstructing non-textured areas, repetitive patterns, and surface discontinuities.
Zhang, Guoqing; Zhang, Xianku; Pang, Hongshuai
2015-09-01
This research is concerned with the problem of 4-degrees-of-freedom (DOF) ship manoeuvring identification modelling using full-scale trial data. To avoid the multi-innovation matrix inversion in the conventional multi-innovation least squares (MILS) algorithm, a new transformed multi-innovation least squares (TMILS) algorithm is first developed by virtue of the coupling identification concept, and care is taken to guarantee uniformly ultimate convergence. Furthermore, an auto-constructed TMILS scheme is derived for ship manoeuvring motion identification by combination with a statistical index. Compared with existing results, the proposed scheme has a significant computational advantage and is able to estimate the model structure. Illustrative examples demonstrate the effectiveness of the proposed algorithm, including an identification application with full-scale trial data. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
Using MOEA with Redistribution and Consensus Branches to Infer Phylogenies.
Min, Xiaoping; Zhang, Mouzhao; Yuan, Sisi; Ge, Shengxiang; Liu, Xiangrong; Zeng, Xiangxiang; Xia, Ningshao
2017-12-26
In recent years, more and more research on inferring phylogenies, which is an NP-hard problem, has focused on using metaheuristics. Maximum Parsimony and Maximum Likelihood are two effective ways to conduct inference; based on these methods, which can also be considered the optimality criteria for phylogenies, various kinds of multi-objective metaheuristics have been used to reconstruct phylogenies. However, combining these two time-consuming methods makes such multi-objective metaheuristics slower than single-objective ones. Therefore, we propose a novel multi-objective optimization algorithm, MOEA-RC, to accelerate the process of rebuilding phylogenies using structural information of elites in the current population. We compare MOEA-RC with two representative multi-objective algorithms, MOEA/D and NSGA-II, and with a non-consensus version of MOEA-RC, on three real-world datasets. The result is that, within a given number of iterations, MOEA-RC achieves better solutions than the other algorithms.
NASA Astrophysics Data System (ADS)
Selvam, Kayalvizhi; Vinod Kumar, D. M.; Siripuram, Ramakanth
2017-04-01
In this paper, an optimization technique called peer-enhanced teaching-learning-based optimization (PeTLBO) is used in the multi-objective problem domain. The PeTLBO algorithm is parameter-free, which reduces the computational burden. The proposed peer-enhanced multi-objective TLBO (PeMOTLBO) algorithm has been utilized to find a set of non-dominated optimal solutions [distributed generation (DG) location and sizing in a distribution network]. The objectives considered are real power loss and voltage deviation, subject to voltage limits and a maximum penetration level of DG in the distribution network. Since the DG considered is capable of injecting real and reactive power into the distribution network, the power factor is taken as 0.85 leading. The proposed peer-enhanced multi-objective optimization technique provides different trade-off solutions; to find the best compromise solution, a fuzzy set theory approach is used. The effectiveness of the proposed PeMOTLBO is tested on the IEEE 33-bus and Indian 85-bus distribution systems. The performance is validated with Pareto fronts and two performance metrics (C-metric and S-metric), by comparing with the robust multi-objective technique non-dominated sorting genetic algorithm-II (NSGA-II) and with the basic TLBO.
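For reference, the teacher and learner phases of basic single-objective TLBO can be sketched in a few lines; the peer-enhancement and multi-objective machinery are not reproduced here, and the sphere function stands in for the power-loss objective.

import numpy as np

# Basic TLBO: a teacher phase pulling learners toward the best solution,
# then a learner phase of pairwise interactions. Parameter-free apart from
# the population size, as the abstract notes.
rng = np.random.default_rng(3)

def f(x):
    return np.sum(x ** 2)          # toy stand-in for the real objective

NP, dim = 20, 5
pop = rng.uniform(-5, 5, (NP, dim))
for _ in range(100):
    cost = np.array([f(x) for x in pop])
    teacher, mean = pop[cost.argmin()], pop.mean(axis=0)
    # Teacher phase: move everyone toward the teacher, away from the mean.
    TF = rng.integers(1, 3)        # teaching factor in {1, 2}
    new = pop + rng.random((NP, dim)) * (teacher - TF * mean)
    improve = np.array([f(x) for x in new]) < cost
    pop[improve] = new[improve]
    # Learner phase: each learner moves relative to a random partner.
    cost = np.array([f(x) for x in pop])
    for i in range(NP):
        j = rng.integers(NP)
        if j == i:
            continue
        direction = pop[i] - pop[j] if cost[i] < cost[j] else pop[j] - pop[i]
        cand = pop[i] + rng.random(dim) * direction
        if f(cand) < cost[i]:
            pop[i], cost[i] = cand, f(cand)
print("best:", min(f(x) for x in pop))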
Multi-fidelity stochastic collocation method for computation of statistical moments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhu, Xueyu, E-mail: xueyu-zhu@uiowa.edu; Linebarger, Erin M., E-mail: aerinline@sci.utah.edu; Xiu, Dongbin, E-mail: xiu.16@osu.edu
We present an efficient numerical algorithm to approximate the statistical moments of stochastic problems in the presence of models with different fidelities. The method extends a previously developed multi-fidelity approximation method. By combining the efficiency of low-fidelity models and the accuracy of high-fidelity models, our method exhibits fast convergence with a limited number of high-fidelity simulations. We establish an error bound for the method and present several numerical examples to demonstrate the efficiency and applicability of the multi-fidelity algorithm.
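The general multi-fidelity idea, many cheap low-fidelity samples correcting a few high-fidelity ones, can be illustrated with a control-variate-style mean estimator. This is a hedged sketch of the concept only, not the paper's specific collocation construction; both model functions are toy stand-ins.

import numpy as np

# Control-variate-style mean estimate: 50 high-fidelity runs, corrected by
# 100,000 cheap low-fidelity evaluations.
rng = np.random.default_rng(4)

def high_fidelity(z):                  # "expensive" model (toy)
    return np.exp(0.3 * z) + 0.05 * np.sin(10 * z)

def low_fidelity(z):                   # cheap approximate model (toy)
    return 1 + 0.3 * z + 0.045 * z ** 2

z_cheap = rng.standard_normal(100_000)     # affordable in the low-fidelity model
z_exp = z_cheap[:50]                       # only 50 high-fidelity runs

yl, yh = low_fidelity(z_cheap), high_fidelity(z_exp)
yl_small = yl[:50]
alpha = np.cov(yh, yl_small)[0, 1] / yl_small.var()    # regression coefficient
mean_est = yh.mean() + alpha * (yl.mean() - yl_small.mean())
print("multi-fidelity mean estimate:", mean_est)
print("high-fidelity-only estimate: ", yh.mean())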
Options for Parallelizing a Planning and Scheduling Algorithm
NASA Technical Reports Server (NTRS)
Clement, Bradley J.; Estlin, Tara A.; Bornstein, Benjamin D.
2011-01-01
Space missions have a growing interest in putting multi-core processors onboard spacecraft. For many missions, limited processing power significantly slows operations. We investigate how continual planning and scheduling algorithms can exploit multi-core processing and outline different potential design decisions for a parallelized planning architecture. This organization of choices and challenges helps us with an initial design for parallelizing the CASPER planning system for a mesh multi-core processor. This work extends that presented at another workshop with some preliminary results.
Improved blood glucose estimation through multi-sensor fusion.
Xiong, Feiyu; Hipszer, Brian R; Joseph, Jeffrey; Kam, Moshe
2011-01-01
Continuous glucose monitoring systems are an integral component of diabetes management. Efforts to improve the accuracy and robustness of these systems are at the forefront of diabetes research. Towards this goal, a multi-sensor approach was evaluated in hospitalized patients. In this paper, we report on a multi-sensor fusion algorithm to combine glucose sensor measurements in a retrospective fashion. The results demonstrate the algorithm's ability to improve the accuracy and robustness of the blood glucose estimation with current glucose sensor technology.
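One standard way to fuse redundant sensor streams is inverse-variance weighting; the sketch below illustrates that generic approach on synthetic glucose traces and should not be read as the paper's actual fusion algorithm. The sensor noise levels are assumed values.

import numpy as np

# Inverse-variance weighted fusion of three noisy sensors tracking one signal.
rng = np.random.default_rng(5)
true_glucose = 120 + 30 * np.sin(np.linspace(0, 6, 300))      # mg/dL, toy profile
sigmas = np.array([8.0, 12.0, 20.0])                          # assumed noise levels
readings = true_glucose + sigmas[:, None] * rng.standard_normal((3, 300))

w = 1 / sigmas ** 2
fused = (w[:, None] * readings).sum(axis=0) / w.sum()

rmse = lambda x: np.sqrt(np.mean((x - true_glucose) ** 2))
print("per-sensor RMSE:", [round(rmse(r), 1) for r in readings])
print("fused RMSE:     ", round(rmse(fused), 1))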
Distributed Optimization of Multi-Agent Systems: Framework, Local Optimizer, and Applications
NASA Astrophysics Data System (ADS)
Zu, Yue
Convex optimization problems can be solved in a centralized or distributed manner. Compared with centralized methods based on a single-agent system, distributed algorithms rely on multi-agent systems that exchange information among connected neighbors, which greatly improves system fault tolerance: a task within a multi-agent system can be completed even in the presence of partial agent failures. By problem decomposition, a large-scale problem can be divided into a set of small-scale sub-problems that can be solved in sequence or in parallel, so the computational complexity is greatly reduced by distributed algorithms in multi-agent systems. Moreover, distributed algorithms allow data to be collected and stored in a distributed fashion, which overcomes the drawbacks of multicast under bandwidth limitations. Distributed algorithms have been applied to a variety of real-world problems, and our research focuses on framework and local optimizer design in practical engineering applications. First, we propose a multi-sensor, multi-agent scheme for spatial motion estimation of a rigid body; estimation performance is improved in terms of accuracy and convergence speed. Second, we develop a cyber-physical system with distributed computation devices to optimize in-building evacuation paths when a hazard occurs; the proposed Bellman-Ford dual-subgradient path planning method relieves congestion in corridor and exit areas. In the third project, highway traffic flow is managed by adjusting speed limits to minimize fuel consumption and travel time. An optimal control strategy is designed through both centralized and distributed algorithms based on a convex problem formulation, and a hybrid control scheme is presented for minimizing travel time over a highway network. Compared with the uncontrolled case and a conventional highway traffic control strategy, the proposed hybrid control strategy greatly reduces total travel time on the test highway network.
NASA Astrophysics Data System (ADS)
Chuan, Zun Liang; Ismail, Noriszura; Shinyie, Wendy Ling; Lit Ken, Tan; Fam, Soo-Fen; Senawi, Azlyna; Yusoff, Wan Nur Syahidah Wan
2018-04-01
Due to the limited availability of historical precipitation records, agglomerative hierarchical clustering algorithms are widely used to extrapolate information from gauged to ungauged precipitation catchments, yielding a more reliable projection of extreme hydro-meteorological events such as extreme precipitation. However, accurately identifying the optimum number of homogeneous precipitation catchments from the dendrogram produced by agglomerative hierarchical algorithms is very subjective. The main objective of this study is to propose an efficient regionalization algorithm to identify homogeneous precipitation catchments for non-stationary precipitation time series. The homogeneous precipitation catchments are identified using an average-linkage hierarchical clustering algorithm combined with multi-scale bootstrap resampling, with the uncentered correlation coefficient as the similarity measure. The regionalized homogeneous precipitation catchments are consolidated using the K-sample Anderson-Darling non-parametric test. The analysis shows that the proposed regionalization algorithm performs better than the agglomerative hierarchical clustering algorithms proposed in previous studies.
PHY-DLL dialogue: cross-layer design for optical-wireless OFDM downlink transmission
NASA Astrophysics Data System (ADS)
Wang, Xuguo; Li, Lee
2005-11-01
The use of radio over fiber to provide radio access has a number of advantages, including the ability to deploy small, low-cost remote antenna units and ease of upgrade. Due to its great potential for increasing capacity and quality of service, the combination of Orthogonal Frequency Division Multiplexing (OFDM) modulation and sub-carrier multiplexed optical transmission is one of the best solutions for future millimeter-wave mobile communication, and this makes optimal use of valuable radio resources essential. This paper devises a cross-layer adaptive algorithm for an optical-wireless OFDM system, which takes into consideration not only the transmission power limitation in the physical layer, but also traffic scheduling and user fairness at the data-link layer. Following the proportional fairness principle and the water-pouring theorem, we give a complete description of this six-step cross-layer adaptive downlink transmission algorithm. Simulation results show that the proposed cross-layer algorithm markedly outperforms a physical-layer-only adaptive algorithm: the novel scheme improves the packet success rate per time chip and the average packet delay, and supports additional active users.
Xiao, Li; Cai, Qin; Li, Zhilin; Zhao, Hongkai; Luo, Ray
2014-01-01
A multi-scale framework is proposed for more realistic molecular dynamics simulations in continuum solvent models by coupling a molecular mechanics treatment of solute with a fluid mechanics treatment of solvent. This article reports our initial efforts to formulate the physical concepts necessary for coupling the two mechanics and develop a 3D numerical algorithm to simulate the solvent fluid via the Navier-Stokes equation. The numerical algorithm was validated with multiple test cases. The validation shows that the algorithm is effective and stable, with observed accuracy consistent with our design. PMID:25404761
NASA Astrophysics Data System (ADS)
Hely, Clement
During the past 50 years, the use of composite materials has increased drastically, mainly owing to the interest of the aeronautical industry in these strong and lightweight materials. To improve the productivity of composite materials manufacturing, some of the largest aeronautics companies began to develop automated processes such as Automated Fibre Placement (AFP). The AFP workcells currently used by industry were mainly developed for the production of large, nearly flat plates with low curvature, such as aircraft fuselages. However, the aeronautics and sporting goods industries are now showing interest in the manufacturing of smaller and more complex parts. The project in which this research takes place aims to design a new AFP workcell and to develop new techniques allowing the production of parts with small size and complex geometry. The work presented in this thesis focuses on path planning on multi-axial revolution surfaces, e.g. Y-shaped tubes of constant circular cross-section. Several path planning algorithms are presented, aiming at the exhaustive coverage of a mandrel with pre-impregnated (prepreg) composite tape. The methodology used in two of these algorithms is to individually cover each branch of the Y-shaped part with paths derived from a helix. In the first, the helix is cut at the boundary between a branch and the junction region (algorithm HD), while in the second (algorithm HA) the pseudo-helix path is adjusted to follow this boundary. These two methods were shown to have drawbacks compromising their practical use and possibly leading to parts with diminished mechanical properties. To avoid these drawbacks, two other algorithms were developed with a new methodology whose aim is to cover two branches of the Y-shape with a continuous course (i.e., without cuts). The first uses a well-known strategy that defines plies with a constant fibre orientation; parallel paths are then computed to generate a full and uniform ply covering two branches. Once again this method suffers from a main drawback, namely that it can produce highly curved paths leading to manufacturing defects. To overcome this limitation, a last algorithm is proposed that ensures the maximal curvature of a trajectory stays below a fixed threshold. However, fulfilling this constraint prevents predicting the complete shape of the path and ensuring a perfectly uniform coverage. It is thus proposed to generate an exhaustive set of trajectories having different shapes and covering all of the part, and then to use a selection algorithm to choose the ones best suited according to selection criteria. To help define these criteria, a finite element analysis is conducted to give some insight into the best-suited shapes for specific loading cases. Finally, simulations were carried out with a workcell consisting of a robotic manipulator and a rotary table to verify the feasibility of the paths generated by the different algorithms.
NASA Astrophysics Data System (ADS)
He, Xiaojun; Ma, Haotong; Luo, Chuanxin
2016-10-01
The optical multi-aperture imaging system is an effective way to enlarge the aperture and increase the resolution of a telescope optical system; the difficulty lies in detecting and correcting the co-phase error. This paper presents a method based on the stochastic parallel gradient descent (SPGD) algorithm to correct the co-phase error. Compared with current methods, SPGD avoids explicit detection of the co-phase error. This paper analyzes the influence of piston error and tilt error on image quality for a double-aperture imaging system, introduces the basic principle of the SPGD algorithm, and discusses the influence of the SPGD algorithm's key parameters (the gain coefficient and the disturbance amplitude) on error-control performance. The results show that SPGD can efficiently correct the co-phase error. The convergence speed of the SPGD algorithm improves as the gain coefficient and disturbance amplitude increase, but the stability of the algorithm is reduced; an adaptive gain coefficient can address this problem appropriately. These results can provide a theoretical reference for the co-phase error correction of multi-aperture imaging systems.
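The SPGD update itself is compact enough to sketch: apply a random +/- perturbation to the control vector, measure the metric twice, and step along the perturbation scaled by the metric change. The quadratic metric below is a toy stand-in for a real image-quality metric, and the gain and amplitude values are illustrative.

import numpy as np

# Minimal SPGD loop over piston/tilt controls of a toy system.
rng = np.random.default_rng(6)
true_phase = rng.uniform(-1, 1, 6)          # unknown piston/tilt errors (toy)

def metric(u):
    return -np.sum((u - true_phase) ** 2)   # larger is better; peaks at u = true_phase

u = np.zeros(6)
gain, amp = 0.5, 0.05                        # the two key parameters discussed above
for _ in range(500):
    du = amp * rng.choice([-1.0, 1.0], size=6)    # Bernoulli perturbation
    dJ = metric(u + du) - metric(u - du)          # two-sided metric measurement
    u += gain * dJ * du                           # stochastic parallel gradient step
print("residual co-phase error:", np.abs(u - true_phase).max())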
2013-01-01
[Report-form excerpt; only fragments survive: waveform parameters are intelligently selected using adaptive algorithms that optimize them based on the EM environment. Subject terms: cognitive radar, adaptive sensing, spectrum sensing, multi-objective optimization, genetic algorithms, machine learning.]
ERIC Educational Resources Information Center
Schulz, Andreas
2018-01-01
Theoretical analysis of whole number-based calculation strategies and digit-based algorithms for multi-digit multiplication and division reveals that strategy use includes two kinds of reasoning: reasoning about the relations between numbers and reasoning about the relations between operations. In contrast, algorithms aim to reduce the necessary…
Fair and efficient network congestion control based on minority game
NASA Astrophysics Data System (ADS)
Wang, Zuxi; Wang, Wen; Hu, Hanping; Deng, Zhaozhang
2011-12-01
Low link utilization, RTT unfairness, and unfairness in multi-bottleneck networks are problems common to present network congestion control algorithms. Through an analogy between network congestion control and the "El Farol Bar" problem, we establish a congestion control model based on the minority game (MG) and then present a novel network congestion control algorithm based on the model. Simulation results indicate that the proposed algorithm achieves link utilization close to 100%, zero packet loss, and small queue sizes. Moreover, RTT unfairness and multi-bottleneck unfairness can be resolved, achieving max-min fairness in multi-bottleneck networks, while the "ping-pong" oscillation caused by global synchronization is efficiently damped.
Eliseyev, Andrey; Aksenova, Tetiana
2016-01-01
In this paper, decoding algorithms for motor-related BCI systems for continuous upper-limb trajectory prediction are considered. Two methods for smooth prediction, namely Sobolev and Polynomial Penalized Multi-Way Partial Least Squares (PLS) regressions, are proposed. The methods are compared to the Multi-Way Partial Least Squares and Kalman Filter approaches. The comparison demonstrated that the proposed methods combine the prediction accuracy of the PLS family of algorithms with the trajectory smoothness of the Kalman Filter. In addition, the prediction delay is significantly lower for the proposed algorithms than for the Kalman Filter approach. The proposed methods could be applied in a wide range of applications beyond neuroscience. PMID:27196417
CQPSO scheduling algorithm for heterogeneous multi-core DAG task model
NASA Astrophysics Data System (ADS)
Zhai, Wenzheng; Hu, Yue-Li; Ran, Feng
2017-07-01
Efficient task scheduling is critical to achieving high performance in a heterogeneous multi-core computing environment. The paper focuses on the heterogeneous multi-core directed acyclic graph (DAG) task model and proposes a novel task scheduling method based on an improved chaotic quantum-behaved particle swarm optimization (CQPSO) algorithm. A task priority scheduling list is built, and the processor with the minimum cumulative earliest finish time (EFT) is selected for the first task assignment. The task precedence relationships are satisfied and the total execution time of all tasks is minimized. The experimental results show that the proposed algorithm offers strong optimization ability, simplicity and feasibility, and fast convergence, and can be applied to task scheduling optimization in other heterogeneous and distributed environments.
Micro-Doppler Signal Time-Frequency Algorithm Based on STFRFT.
Pang, Cunsuo; Han, Yan; Hou, Huiling; Liu, Shengheng; Zhang, Nan
2016-09-24
This paper proposes a time-frequency algorithm based on the short-time fractional-order Fourier transform (STFRFT) for identification of targets with complicated movements. The algorithm, consisting of an STFRFT order-changing and quick-selection method, is effective in reducing the computational load. A multi-order STFRFT time-frequency algorithm is also developed that makes use of the time-frequency feature of each micro-Doppler component signal; it improves the estimation accuracy of time-frequency curve fitting through multi-order matching. Finally, experimental data were used to demonstrate STFRFT's performance in micro-Doppler time-frequency analysis. The results validated the higher estimation accuracy of the proposed algorithm. It may be applied to LFM (linear frequency modulated) pulse radar, SAR (synthetic aperture radar), or ISAR (inverse synthetic aperture radar) to improve the probability of target recognition.
The Pandora multi-algorithm approach to automated pattern recognition in LAr TPC detectors
NASA Astrophysics Data System (ADS)
Marshall, J. S.; Blake, A. S. T.; Thomson, M. A.; Escudero, L.; de Vries, J.; Weston, J.;
2017-09-01
The development and operation of Liquid Argon Time Projection Chambers (LAr TPCs) for neutrino physics has created a need for new approaches to pattern recognition, in order to fully exploit the superb imaging capabilities offered by this technology. The Pandora Software Development Kit provides functionality to aid the process of designing, implementing and running pattern recognition algorithms. It promotes the use of a multi-algorithm approach to pattern recognition: individual algorithms each address a specific task in a particular topology; a series of many tens of algorithms then carefully builds up a picture of the event. The input to the Pandora pattern recognition is a list of 2D Hits. The output from the chain of over 70 algorithms is a hierarchy of reconstructed 3D Particles, each with an identified particle type, vertex and direction.
Li, Bing; Yuan, Chunfeng; Xiong, Weihua; Hu, Weiming; Peng, Houwen; Ding, Xinmiao; Maybank, Steve
2017-12-01
In multi-instance learning (MIL), the relations among instances in a bag convey important contextual information in many applications. Previous studies on MIL either ignore such relations or simply model them with a fixed graph structure, so that the overall performance inevitably degrades in complex environments. To address this problem, this paper proposes a novel multi-view multi-instance learning algorithm (M²IL) that combines multiple context structures in a bag into a unified framework. The novel aspects are: (i) we propose a sparse ε-graph model that can generate different graphs with different parameters to represent various context relations in a bag; (ii) we propose a multi-view joint sparse representation that integrates these graphs into a unified framework for bag classification; and (iii) we propose a multi-view dictionary learning algorithm to obtain a multi-view graph dictionary that considers cues from all views simultaneously to improve the discrimination of the M²IL. Experiments and analyses in many practical applications prove the effectiveness of the M²IL.
A Global Registration Algorithm for the Single-Closed-Ring Multi-Station Point Cloud
NASA Astrophysics Data System (ADS)
Yang, R.; Pan, L.; Xiang, Z.; Zeng, H.
2018-04-01
To address the global registration problem for a single closed ring of multi-station point clouds, a formula for calculating the error of the rotation matrix was constructed according to the definition of the error. A global registration algorithm for multi-station point clouds was derived that minimizes the rotation matrix error, and fast formulas for computing the transformation matrix were given together with their implementation steps and a simulation experiment scheme. Comparing three different processing schemes for multi-station point clouds, the experimental results verified the effectiveness of the new global registration method, which can effectively complete the global registration of point clouds.
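Two standard building blocks behind such ring registration can be sketched as follows: an SVD-based least-squares rotation estimate between stations, and the ring-closure error that a consistent loop of rotations should drive to zero. This is illustrative only and is not the authors' global algorithm.

import numpy as np

rng = np.random.default_rng(7)

def rotz(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def estimate_rotation(P, Q):
    """Least-squares rotation R with Q ~ R @ P (both 3xN and centered)."""
    U, _, Vt = np.linalg.svd(Q @ P.T)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])  # avoid reflections
    return U @ D @ Vt

def ring_closure_error(rotations):
    """Frobenius distance from identity of the composed ring of rotations."""
    R = np.eye(3)
    for Ri in rotations:
        R = Ri @ R
    return np.linalg.norm(R - np.eye(3))

P = rng.standard_normal((3, 50))
print(np.allclose(estimate_rotation(P, rotz(0.3) @ P), rotz(0.3)))  # True

# A consistent ring composes to the identity; 0.01 rad of drift does not.
ring = [rotz(0.3), rotz(0.5), rotz(-0.8 + 0.01)]
print("ring closure error:", ring_closure_error(ring))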
Improved NSGA model for multi objective operation scheduling and its evaluation
NASA Astrophysics Data System (ADS)
Li, Weining; Wang, Fuyu
2017-09-01
Reasonable operation scheduling can increase hospital income and improve patient satisfaction. In this paper, a multi-objective operation scheduling method using an improved NSGA algorithm shortens operation time, reduces operation cost, and lowers operation risk. A multi-objective optimization model is established for flexible operation scheduling; after data standardization, Pareto solutions are obtained through MATLAB simulation. The optimal scheduling scheme is selected using a combined entropy-weight and TOPSIS method. The results show that the algorithm is feasible for solving the multi-objective operation scheduling problem and provides a reference for hospital operation scheduling.
Neighbor Discovery Algorithm in Wireless Local Area Networks Using Multi-beam Directional Antennas
NASA Astrophysics Data System (ADS)
Wang, Jin; Peng, Wei; Liu, Song
2017-10-01
Neighbor discovery is an important step for Wireless Local Area Networks (WLAN), and the use of multi-beam directional antennas can greatly improve network performance. However, most neighbor discovery algorithms in WLAN based on multi-beam directional antennas work effectively only in synchronous systems, not in asynchronous ones, and collisions at the access point (AP) remain a bottleneck for neighbor discovery. In this paper, we propose two asynchronous neighbor discovery algorithms: asynchronous hierarchical scanning (AHS) and asynchronous directional scanning (ADS). Both are based on a three-way handshaking mechanism; AHS and ADS reduce collisions at the AP in a hierarchical way and a directional way, respectively. Finally, the performance of AHS and ADS is tested in OMNeT++, and the application scenarios and the factors affecting the performance of these algorithms are analyzed. The simulation results show that AHS is suitable for densely populated scenes around the AP, while ADS is suitable when most of the neighboring nodes are far from the AP.
Challenge toward the prediction of typhoon behaviour and downpour
NASA Astrophysics Data System (ADS)
Takahashi, K.; Onishi, R.; Baba, Y.; Kida, S.; Matsuda, K.; Goto, K.; Fuchigami, H.
2013-08-01
Mechanisms of interaction among phenomena at different scales play important roles in the forecasting of weather and climate. The Multi-scale Simulator for the Geoenvironment (MSSG), which deals with multi-scale multi-physics phenomena, is a coupled non-hydrostatic atmosphere-ocean model designed to run efficiently on the Earth Simulator. We present simulation results at the world's highest 1.9 km horizontal resolution for the entire globe, as well as regional heavy-rain simulations at 1 km horizontal resolution and urban-area simulations at 5 m horizontal/vertical resolution. To gain high performance by exploiting the system's capabilities, we adopt performance evaluation metrics introduced in previous studies that incorporate the effects of the data caching mechanism between CPU and memory. With a useful code optimization guideline based on such metrics, we demonstrate that MSSG can achieve an excellent peak performance ratio of 32.2% on the Earth Simulator, with single-core performance found to be a key to a reduced time-to-solution.
Level 2 Ancillary Products and Datasets Algorithm Theoretical Basis
NASA Technical Reports Server (NTRS)
Diner, D.; Abdou, W.; Gordon, H.; Kahn, R.; Knyazikhin, Y.; Martonchik, J.; McDonald, D.; McMuldroch, S.; Myneni, R.; West, R.
1999-01-01
This Algorithm Theoretical Basis (ATB) document describes the algorithms used to generate the parameters of certain ancillary products and datasets used during Level 2 processing of Multi-angle Imaging SpectroRadiometer (MISR) data.
[Multi-mathematical modelings for compatibility optimization of Jiangzhi granules].
Yang, Ming; Zhang, Li; Ge, Yingli; Lu, Yanliu; Ji, Guang
2011-12-01
To investigate a method of multi-activity-index evaluation and combination optimization of multiple components for Chinese herbal formulas. Following a scheme of uniform experimental design, efficacy experiments, multi-index evaluation, least absolute shrinkage and selection operator (LASSO) modeling, evolutionary optimization, and validation experiments, we optimized the combination of Jiangzhi granules based on the activity indexes of serum ALT, AST, TG, TC, HDL and LDL, the TG level of liver tissue, and the ratio of liver weight to body weight. The analytic hierarchy process (AHP) combined with criteria importance through intercriteria correlation (CRITIC) for multi-activity-index evaluation was more reasonable and objective, reflecting both the ordering of the activity indexes and the objective sample data. LASSO modeling could accurately reflect the relationship between different combinations of Jiangzhi granules and the comprehensive activity indexes. The optimized combination of Jiangzhi granules showed better comprehensive activity index values than the original formula in the validation experiment. AHP combined with CRITIC can be used for multi-activity-index evaluation, and the LASSO algorithm is suitable for combination optimization of Chinese herbal formulas.
NASA Astrophysics Data System (ADS)
Pirpinia, Kleopatra; Bosman, Peter A. N.; Sonke, Jan-Jakob; van Herk, Marcel; Alderliesten, Tanja
2015-03-01
The use of gradient information is well known to be highly useful in single-objective optimization-based image registration methods. However, its usefulness has not yet been investigated for deformable image registration from a multi-objective optimization perspective. To this end, within a previously introduced multi-objective optimization framework, we use a smooth B-spline-based dual-dynamic transformation model that allows us to derive gradient information analytically, while still being able to account for large deformations. Within the multi-objective framework, we previously employed a powerful evolutionary algorithm (EA) that computes and advances multiple outcomes at once, resulting in a set of solutions (a so-called Pareto front) that represents efficient trade-offs between the objectives. With the addition of the B-spline-based transformation model, we studied the usefulness of gradient information in multi-objective deformable image registration using three different optimization algorithms: the (gradient-less) EA, a gradient-only algorithm, and a hybridization of these two. We evaluated the algorithms on the registration of highly deformed images: 2D MRI slices of the breast in prone and supine positions. Results demonstrate that gradient-based multi-objective optimization significantly speeds up optimization in its initial stages. However, given sufficient computational resources, better results could still be obtained with the EA. Ultimately, the hybrid EA found the best overall approximation of the optimal Pareto front, further indicating that adding gradient-based optimization to multi-objective optimization-based deformable image registration can indeed be beneficial.
HD-MTL: Hierarchical Deep Multi-Task Learning for Large-Scale Visual Recognition.
Fan, Jianping; Zhao, Tianyi; Kuang, Zhenzhong; Zheng, Yu; Zhang, Ji; Yu, Jun; Peng, Jinye
2017-02-09
In this paper, a hierarchical deep multi-task learning (HD-MTL) algorithm is developed to support large-scale visual recognition (e.g., recognizing thousands or even tens of thousands of atomic object classes automatically). First, multiple sets of multi-level deep features are extracted from different layers of deep convolutional neural networks (deep CNNs), and they are used to achieve more effective accomplishment of the coarse-to-fine tasks for hierarchical visual recognition. A visual tree is then learned by assigning the visually similar atomic object classes with similar learning complexities into the same group, which can provide a good environment for determining the inter-related learning tasks automatically. By leveraging the inter-task relatedness (inter-class similarities) to learn more discriminative group-specific deep representations, our deep multi-task learning algorithm can train more discriminative node classifiers for distinguishing the visually similar atomic object classes effectively. Our HD-MTL algorithm can integrate two discriminative regularization terms to control the inter-level error propagation effectively, and it provides an end-to-end approach for jointly learning more representative deep CNNs (for image representation) and a more discriminative tree classifier (for large-scale visual recognition) and updating them simultaneously. Our incremental deep learning algorithms can effectively adapt both the deep CNNs and the tree classifier to new training images and new object classes. Our experimental results have demonstrated that our HD-MTL algorithm can achieve very competitive results on improving the accuracy rates for large-scale visual recognition.
Improvement of Speckle Contrast Image Processing by an Efficient Algorithm.
Steimers, A; Farnung, W; Kohl-Bareis, M
2016-01-01
We demonstrate an efficient algorithm for the temporally and spatially based calculation of speckle contrast for the imaging of blood flow by laser speckle contrast analysis (LASCA). It reduces the numerical complexity of the necessary calculations, facilitates multi-core and many-core implementations of the speckle analysis, and decouples temporal or spatial resolution from SNR. The new algorithm was evaluated for both spatially and temporally based analysis of speckle patterns with different image sizes and numbers of recruited pixels, as sequential, multi-core and many-core code.
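The spatial LASCA quantity K = sigma/mu over a sliding window can be computed from local sums in a single pass, which conveys the kind of redundancy removal alluded to above; the window size and the exponential test pattern are illustrative, and this is not the authors' exact algorithm.

import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

# Spatial speckle contrast from windowed sums: mean and variance share a pass.
def speckle_contrast(img, win=7):
    w = sliding_window_view(img, (win, win))          # (H-win+1, W-win+1, win, win)
    s1 = w.sum(axis=(2, 3))
    s2 = (w ** 2).sum(axis=(2, 3))
    n = win * win
    mean = s1 / n
    var = s2 / n - mean ** 2                          # E[x^2] - E[x]^2
    return np.sqrt(np.maximum(var, 0)) / mean

rng = np.random.default_rng(8)
raw = rng.exponential(1.0, (128, 128))                # fully developed speckle, K ~ 1
print(speckle_contrast(raw).mean())                   # close to 1 for static speckle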
2008-06-01
[Thesis front-matter excerpt; only the title survives: Use of a Genetic Algorithm and Multi-Agent System to Explore Emergent Patterns of Social Rationality and a Distress-Based Model for Deceit in the Workplace.]
Harmonic regression based multi-temporal cloud filtering algorithm for Landsat 8
NASA Astrophysics Data System (ADS)
Joshi, P.
2015-12-01
The Landsat data archive, though rich, has missing dates and periods owing to weather irregularities and inconsistent coverage. The satellite images are further subject to cloud-cover effects, resulting in erroneous analysis and observation of ground features. In earlier studies, a change detection algorithm using statistical control charts on harmonic residuals of multi-temporal Landsat 5 data was shown to detect a few prominent remnant clouds [Brooks, Evan B., et al., 2014]. In this work we build on this harmonic regression approach to detect and filter clouds using a multi-temporal series of Landsat 8 images. Firstly, we compute the harmonic coefficients by fitting models to annual training data. The time series of residuals is then subjected to Shewhart X-bar control charts, which signal the deviations of cloud points from the fitted multi-temporal Fourier curve. For a process with standard deviation σ, we found second- and third-order harmonic regression with an X-bar chart control limit Lσ in the range 0.5σ < Lσ < σ to be most efficient in detecting clouds. By implementing second-order harmonic regression with successive X-bar chart control limits of L and 0.5L on the NDVI, NDSI and haze-optimized transformation (HOT), and utilizing the seasonal physical properties of these parameters, we have designed a novel multi-temporal algorithm for filtering clouds from Landsat 8 images. The method is applied to Virginia and Alabama in Landsat 8 UTM zones 17 and 16, respectively. Our algorithm efficiently filters all types of cloud cover with an overall accuracy greater than 90%. As a result of the multi-temporal operation and the ability to recreate the multi-temporal database of images using only the coefficients of the Fourier regression, our algorithm is largely storage- and time-efficient. The results show good potential for this multi-temporal approach to cloud detection as a timely and targeted solution for the Landsat 8 research community, catering to the need for innovative processing solutions in the early stage of the satellite's mission.
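The per-pixel procedure can be sketched directly: fit a second-order harmonic (Fourier) regression to a year of index values and flag residuals beyond an X-bar style limit of L·σ. The synthetic NDVI series and the choice L = 0.8 below are illustrative, and a real workflow would fit on clean training data.

import numpy as np

# Second-order harmonic regression on a per-pixel NDVI time series, then
# an X-bar style residual test at L*sigma.
rng = np.random.default_rng(9)
doy = np.arange(1, 366, 16)                       # 16-day observation dates
t = 2 * np.pi * doy / 365.25
ndvi = 0.5 + 0.2 * np.sin(t) + 0.05 * np.cos(2 * t) + 0.02 * rng.standard_normal(t.size)
ndvi[7] = -0.1                                    # a cloud-contaminated observation

# Design matrix for a second-order harmonic regression.
X = np.column_stack([np.ones_like(t), np.sin(t), np.cos(t), np.sin(2 * t), np.cos(2 * t)])
coef, *_ = np.linalg.lstsq(X, ndvi, rcond=None)
resid = ndvi - X @ coef
sigma = resid.std()

L = 0.8                                           # control limit within (0.5, 1.0)
cloudy = np.abs(resid) > L * sigma
print("flagged dates:", doy[cloudy])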
NASA Technical Reports Server (NTRS)
Craig, Roy R., Jr.
1987-01-01
The major accomplishments of this research are: (1) the refinement and documentation of a multi-input, multi-output modal parameter estimation algorithm which is applicable to general linear, time-invariant dynamic systems; (2) the development and testing of an unsymmetric block-Lanczos algorithm for reduced-order modeling of linear systems with arbitrary damping; and (3) the development of a control-structure-interaction (CSI) test facility.
NASA Technical Reports Server (NTRS)
Rediniotis, Othon K.
1999-01-01
Two new calibration algorithms were developed for the calibration of non-nulling multi-hole probes in compressible, subsonic flowfields. The reduction algorithms are robust and able to reduce data from any multi-hole probe inserted into any subsonic flowfield to generate very accurate predictions of the velocity vector, flow direction, total pressure and static pressure. One of the algorithms, PROBENET, is based on the theory of neural networks, while the other is of a more conventional nature (a polynomial approximation technique) and introduces a novel idea of local least-squares fits. Both algorithms have been developed into complete, user-friendly software packages. New technology was developed for the fabrication of miniature multi-hole probes, with probe tip diameters down to 0.035". Several miniature 5- and 7-hole probes, with different probe tip geometries (hemispherical, conical, faceted) and different overall shapes (straight, cobra, elbow probes), were fabricated, calibrated and tested. Emphasis was placed on the development of four stainless-steel conical 7-hole probes, 1/16" in diameter, calibrated at NASA Langley for the entire subsonic regime. The developed calibration algorithms were extensively tested with these probes, demonstrating excellent prediction capabilities. The probes were used in the "trap wing" wind tunnel tests in the 14'x22' wind tunnel at NASA Langley, providing valuable information on the flowfield over the wing. This report is organized as follows: it consists of a "Technical Achievements" section that summarizes the major achievements, followed by an assembly of journal articles produced from this project, and ends with two manuals for the two probe calibration algorithms developed.
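The polynomial-regression step of such calibrations can be illustrated with a small least-squares fit of flow angle against two non-dimensional pressure coefficients. The synthetic data and quadratic basis below are assumptions for demonstration; the report's conventional method uses local least-squares fits over sectors of the calibration map rather than one global fit.

import numpy as np

# Toy polynomial calibration: pitch angle as a quadratic in two pressure
# coefficients. All data here are synthetic.
rng = np.random.default_rng(10)
n = 400
b_alpha = rng.uniform(-1, 1, n)          # pitch pressure coefficient (synthetic)
b_beta = rng.uniform(-1, 1, n)           # yaw pressure coefficient (synthetic)
alpha = 12 * b_alpha + 1.5 * b_alpha * b_beta + 0.2 * rng.standard_normal(n)  # deg

def basis(x, y):
    return np.column_stack([np.ones_like(x), x, y, x * y, x ** 2, y ** 2])

coef, *_ = np.linalg.lstsq(basis(b_alpha, b_beta), alpha, rcond=None)
pred = basis(b_alpha, b_beta) @ coef
print("fit RMS error (deg):", np.sqrt(np.mean((pred - alpha) ** 2)).round(3))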
Wang, Lin; Qu, Hui; Liu, Shan; Dun, Cai-xia
2013-01-01
As a practical inventory and transportation problem, it is important to synthesize several objectives for the joint replenishment and delivery (JRD) decision. In this paper, a new multi-objective stochastic JRD (MSJRD) of one-warehouse, n-retailer systems, considering the balance of service level and total cost simultaneously, is proposed. The goal of this problem is to determine a reasonable replenishment interval, safety stock factor, and traveling route. Secondly, two approaches are designed to handle this complex multi-objective optimization problem: a linear programming (LP) approach converts the multiple objectives into a single objective, while a multi-objective evolutionary algorithm (MOEA) solves the multi-objective problem directly. Thirdly, three intelligent optimization algorithms, the differential evolution algorithm (DE), hybrid DE (HDE), and genetic algorithm (GA), are utilized in the LP-based and MOEA-based approaches. Results of the MSJRD with the LP-based and MOEA-based approaches are compared in a contrastive numerical example. To analyse the non-dominated solutions of the MOEA, a metric is also used to measure the distribution of the final generation of solutions. Results show that HDE outperforms DE and GA whether LP or MOEA is adopted. PMID:24302880
Temporal phase unwrapping algorithms for fringe projection profilometry: A comparative review
Zuo, Chao; Huang, Lei; Zhang, Minliang; ...
2016-05-06
In fringe projection profilometry (FPP), temporal phase unwrapping is an essential procedure to recover an unambiguous absolute phase even in the presence of large discontinuities or spatially isolated surfaces. So far, there are typically three groups of temporal phase unwrapping algorithms proposed in the literature: the multi-frequency (hierarchical) approach, the multi-wavelength (heterodyne) approach, and the number-theoretical approach. In this paper, the three methods are investigated and compared in detail by analytical, numerical, and experimental means. The basic principles and recent developments of the three kinds of algorithms are first reviewed. Then, the reliability of the different phase unwrapping algorithms is compared based on a rigorous stochastic noise model. Moreover, this noise model is used to predict the optimum fringe period for each unwrapping approach, which is a key factor governing the phase measurement accuracy in FPP. Simulations and experimental results verified the correctness and validity of the proposed noise model as well as the prediction scheme. The results show that multi-frequency temporal phase unwrapping provides the best unwrapping reliability, while the multi-wavelength approach is the most susceptible to noise-induced unwrapping errors.
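The hierarchical (multi-frequency) scheme admits a short worked sketch: the unit-frequency phase map is unambiguous by construction, and each finer wrapped map is unwrapped by using the scaled coarser result to pick its fringe order. A minimal sketch of that standard recursion (not the paper's implementation) follows:

```python
import numpy as np

def unwrap_hierarchical(phi_wrapped, freqs):
    """Multi-frequency temporal phase unwrapping.

    phi_wrapped : list of wrapped phase maps (radians), coarsest first
    freqs       : corresponding fringe frequencies (periods per field
                  of view); freqs[0] == 1, so the coarsest map is
                  already an absolute phase
    """
    phi_abs = phi_wrapped[0]          # unit-frequency phase is absolute
    for i in range(1, len(freqs)):
        scale = freqs[i] / freqs[i - 1]
        # fringe order from the scaled coarser absolute phase
        k = np.round((scale * phi_abs - phi_wrapped[i]) / (2 * np.pi))
        phi_abs = phi_wrapped[i] + 2 * np.pi * k
    return phi_abs
```

The paper's noise analysis concerns exactly the rounding step above: noise in the scaled coarse phase can flip k by one fringe, which is why the optimum fringe period matters.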
Yu, Yang; Wang, Sihan; Tang, Jiafu; Kaku, Ikou; Sun, Wei
2016-01-01
Productivity can be greatly improved by converting the traditional assembly line to a seru system, especially in the business environment with short product life cycles, uncertain product types and fluctuating production volumes. Line-seru conversion includes two decision processes, i.e., seru formation and seru load. For simplicity, however, previous studies focus on the seru formation with a given scheduling rule in seru load. We select ten scheduling rules usually used in seru load to investigate the influence of different scheduling rules on the performance of line-seru conversion. Moreover, we clarify the complexities of line-seru conversion for ten different scheduling rules from the theoretical perspective. In addition, multi-objective decisions are often used in line-seru conversion. To obtain Pareto-optimal solutions of multi-objective line-seru conversion, we develop two improved exact algorithms based on reducing time complexity and space complexity respectively. Compared with the enumeration based on non-dominated sorting to solve multi-objective problem, the two improved exact algorithms saves computation time greatly. Several numerical simulation experiments are performed to show the performance improvement brought by the two proposed exact algorithms.
Multi scales based sparse matrix spectral clustering image segmentation
NASA Astrophysics Data System (ADS)
Liu, Zhongmin; Chen, Zhicai; Li, Zhanming; Hu, Wenjin
2018-04-01
In image segmentation, spectral clustering algorithms have to adopt an appropriate scaling parameter to calculate the similarity matrix between the pixels, which can have a great impact on the clustering result. Moreover, when the number of data instances is large, the computational complexity and memory use of the algorithm increase greatly. To solve these two problems, we propose a new spectral clustering image segmentation algorithm based on multiple scales and a sparse matrix. We first devise a new feature extraction method, then extract the features of the image on different scales, and finally use the feature information to construct a sparse similarity matrix, which improves the operating efficiency. Compared with the traditional spectral clustering algorithm, image segmentation experiments show that our algorithm has better accuracy and robustness.
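A minimal sketch of the sparse-similarity idea follows: pixels are connected only within a spatial radius, so the similarity matrix (and hence the eigen-decomposition) stays sparse. The radius, Gaussian kernel, and unnormalised Laplacian below are illustrative assumptions, not the paper's exact construction:

```python
import numpy as np
from scipy.sparse import csr_matrix, diags
from scipy.sparse.linalg import eigsh
from scipy.cluster.vq import kmeans2

def sparse_spectral_segment(features, coords, k, radius=5.0, sigma=1.0):
    """Spectral clustering with a sparse similarity matrix: points are
    linked only within a spatial radius, keeping W sparse."""
    n = len(features)
    rows, cols, vals = [], [], []
    for i in range(n):
        d_sp = np.linalg.norm(coords - coords[i], axis=1)
        for j in np.where((d_sp < radius) & (d_sp > 0))[0]:
            w = np.exp(-np.sum((features[i] - features[j]) ** 2)
                       / (2 * sigma ** 2))
            rows.append(i); cols.append(j); vals.append(w)
    W = csr_matrix((vals, (rows, cols)), shape=(n, n))
    d = np.asarray(W.sum(axis=1)).ravel()
    L = diags(d) - W                       # unnormalised graph Laplacian
    _, vecs = eigsh(L, k=k, which='SM')    # k smallest eigenvectors
    _, labels = kmeans2(vecs.real, k, seed=0, minit='++')
    return labels

rng = np.random.default_rng(0)
feats = rng.random((100, 3))               # mock per-pixel features
xy = rng.random((100, 2)) * 20             # mock pixel coordinates
print(np.bincount(sparse_spectral_segment(feats, xy, k=3, radius=6.0)))
```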
Pugliese, Cara E; Kenworthy, Lauren; Bal, Vanessa Hus; Wallace, Gregory L; Yerys, Benjamin E; Maddox, Brenna B; White, Susan W; Popal, Haroon; Armour, Anna Chelsea; Miller, Judith; Herrington, John D; Schultz, Robert T; Martin, Alex; Anthony, Laura Gutermuth
2015-12-01
Recent updates have been proposed to the Autism Diagnostic Observation Schedule-2 Module 4 diagnostic algorithm. This new algorithm, however, has not yet been validated in an independent sample without intellectual disability (ID). This multi-site study compared the original and revised algorithms in individuals with ASD without ID. The revised algorithm demonstrated increased sensitivity, but lower specificity in the overall sample. Estimates were highest for females, individuals with a verbal IQ below 85 or above 115, and ages 16 and older. Best practice diagnostic procedures should include the Module 4 in conjunction with other assessment tools. Balancing needs for sensitivity and specificity depending on the purpose of assessment (e.g., clinical vs. research) and demographic characteristics mentioned above will enhance its utility.
A multi-group firefly algorithm for numerical optimization
NASA Astrophysics Data System (ADS)
Tong, Nan; Fu, Qiang; Zhong, Caiming; Wang, Pengjun
2017-08-01
To solve the problem of premature convergence of the firefly algorithm (FA), this paper analyzes the evolution mechanism of the algorithm and proposes an improved firefly algorithm based on a modified evolution model and a multi-group learning mechanism (IMGFA). A firefly colony is divided into several subgroups with different model parameters. Within each subgroup, the optimal firefly is responsible for leading the other fireflies in the early global evolution and for establishing mutual information exchange among the fireflies. Each firefly then performs a local search by following the brighter fireflies among its neighbors. At the same time, a learning mechanism by which the best fireflies of the various subgroups exchange information helps the population reach the global optimization goals more effectively. Experimental results verify the effectiveness of the proposed algorithm.
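A toy version of the multi-group scheme is sketched below. The per-subgroup absorption coefficients, the attraction step, and the "subgroup bests learn from the global best" exchange are illustrative assumptions standing in for the paper's modified evolution model:

```python
import numpy as np
rng = np.random.default_rng(0)

def sphere(x):                            # benchmark objective (minimise)
    return float(np.sum(x ** 2))

def imgfa(n_groups=4, per_group=10, dim=10, iters=100):
    """Toy multi-group firefly algorithm: each subgroup evolves with its
    own absorption coefficient, and each iteration ends with the
    subgroup bests learning from the global best."""
    pos = rng.uniform(-5, 5, (n_groups, per_group, dim))
    gammas = rng.uniform(0.1, 1.0, n_groups)  # per-subgroup parameters
    for _ in range(iters):
        for g in range(n_groups):
            fit = np.array([sphere(x) for x in pos[g]])
            for i in range(per_group):
                for j in range(per_group):
                    if fit[j] < fit[i]:       # j is brighter: attract i
                        r2 = float(np.sum((pos[g, i] - pos[g, j]) ** 2))
                        beta = np.exp(-gammas[g] * r2)
                        pos[g, i] += (beta * (pos[g, j] - pos[g, i])
                                      + 0.1 * rng.normal(size=dim))
        flat = pos.reshape(-1, dim)           # inter-group learning step
        gbest = flat[np.argmin([sphere(x) for x in flat])].copy()
        for g in range(n_groups):
            b = int(np.argmin([sphere(x) for x in pos[g]]))
            pos[g, b] += 0.5 * (gbest - pos[g, b])
    return gbest

print(sphere(imgfa()))
```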
NASA Astrophysics Data System (ADS)
Zou, Zhen-Zhen; Yu, Xu-Tao; Zhang, Zai-Chen
2018-04-01
First, the entanglement source deployment problem is studied in a quantum multi-hop network, where it has a significant influence on quantum connectivity. Two optimization algorithms with limited entanglement sources are introduced in this paper. A deployment algorithm based on node position (DNP) improves connectivity by guaranteeing that all overlapping areas of the distribution ranges of the entanglement sources contain nodes. In addition, a deployment algorithm based on an improved genetic algorithm (DIGA) is implemented by dividing the region into grids. From the simulation results, DNP and DIGA improve quantum connectivity by 213.73% and 248.83%, respectively, compared to random deployment, and the latter performs better in terms of connectivity. However, DNP is more flexible and adaptive to change, as it stops running when all nodes are covered.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chin, George; Marquez, Andres; Choudhury, Sutanay
2012-09-01
Triadic analysis encompasses a useful set of graph mining methods that is centered on the concept of a triad, which is a subgraph of three nodes and the configuration of directed edges across the nodes. Such methods are often applied in the social sciences as well as many other diverse fields. Triadic methods commonly operate on a triad census that counts the number of triads of every possible edge configuration in a graph. Like other graph algorithms, triadic census algorithms do not scale well when graphs reach tens of millions to billions of nodes. To enable the triadic analysis of large-scale graphs, we developed and optimized a triad census algorithm to efficiently execute on shared memory architectures. We will retrace the development and evolution of a parallel triad census algorithm. Over the course of several versions, we continually adapted the code's data structures and program logic to expose more opportunities to exploit parallelism on shared memory that would translate into improved computational performance. We will recall the critical steps and modifications that occurred during code development and optimization. Furthermore, we will compare the performances of triad census algorithm versions on three specific systems: Cray XMT, HP Superdome, and AMD multi-core NUMA machine. These three systems have shared memory architectures but with markedly different hardware capabilities to manage parallelism.
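The O(n^3) triple enumeration that motivates the parallel implementation is easy to exhibit. The sketch below buckets each node triple only by its number of directed arcs (0 to 6), rather than computing the full 16-class Holland-Leinhardt census the paper's algorithm performs:

```python
from itertools import combinations

def triad_arc_census(nodes, edges):
    """Simplified triad census for a directed graph: bucket every node
    triple by how many directed arcs connect its members (0..6)."""
    eset = set(edges)
    census = {k: 0 for k in range(7)}
    for a, b, c in combinations(nodes, 3):
        arcs = sum((u, v) in eset
                   for u, v in [(a, b), (b, a), (a, c),
                                (c, a), (b, c), (c, b)])
        census[arcs] += 1
    return census

print(triad_arc_census(range(4), [(0, 1), (1, 2), (2, 0), (3, 0)]))
```

Each triple can be classified independently of the others, which is what makes the census amenable to the shared-memory parallelisation the paper describes.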
NASA Astrophysics Data System (ADS)
Taitano, W. T.; Chacón, L.; Simakov, A. N.; Molvig, K.
2015-09-01
In this study, we demonstrate a fully implicit algorithm for the multi-species, multidimensional Rosenbluth-Fokker-Planck equation which is exactly mass-, momentum-, and energy-conserving, and which preserves positivity. Unlike most earlier studies, we base our development on the Rosenbluth (rather than Landau) form of the Fokker-Planck collision operator, which reduces complexity while allowing for an optimal fully implicit treatment. Our discrete conservation strategy employs nonlinear constraints that force the continuum symmetries of the collision operator to be satisfied upon discretization. We converge the resulting nonlinear system iteratively using Jacobian-free Newton-Krylov methods, effectively preconditioned with multigrid methods for efficiency. Single- and multi-species numerical examples demonstrate the advertised accuracy properties of the scheme, and the superior algorithmic performance of our approach. In particular, the discretization approach is numerically shown to be second-order accurate in time and velocity space and to exhibit manifestly positive entropy production. That is, H-theorem behavior is indicated for all the examples we have tested. The solution approach is demonstrated to scale optimally with respect to grid refinement (with CPU time growing linearly with the number of mesh points) and time step (showing very weak dependence of CPU time on time-step size). As a result, the proposed algorithm delivers several orders-of-magnitude speedup vs. explicit algorithms.
Dependence of the pour point of diesel fuels on the properties of the initial components
NASA Technical Reports Server (NTRS)
Ostashov, V. M.; Bobrovskiy, S. A.
1979-01-01
An analytical expression is obtained for the dependence of the pour point of diesel fuels on the pour points and weight ratios of the initial components. To determine the pour point of a multicomponent fuel mixture, it is assumed that a mixture of two components has the pour point of a single equivalent component; the pour point of this equivalent component mixed with a third component is then calculated, and so on.
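The successive-reduction scheme can be sketched as a simple fold over the component list. The abstract does not reproduce the analytical blending expression itself, so the mass-weighted linear blend below is only a stand-in placeholder for it:

```python
def pour_point_mixture(temps, weights, blend=None):
    """Fold a multicomponent mixture into successive 'equivalent
    components': blend the first two components, then blend the result
    with the third, and so on. 'blend' should be the paper's analytical
    two-component rule; a mass-weighted linear blend is used here
    purely as a placeholder."""
    if blend is None:
        blend = lambda t1, w1, t2, w2: (w1 * t1 + w2 * t2) / (w1 + w2)
    t_eq, w_eq = temps[0], weights[0]
    for t, w in zip(temps[1:], weights[1:]):
        t_eq = blend(t_eq, w_eq, t, w)    # equivalent component + next one
        w_eq += w
    return t_eq

# pour points in deg C and weight fractions of three components
print(pour_point_mixture([-35.0, -15.0, -25.0], [0.5, 0.3, 0.2]))
```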
NASA Astrophysics Data System (ADS)
Montazeri, A.; West, C.; Monk, S. D.; Taylor, C. J.
2017-04-01
This paper concerns the problem of dynamic modelling and parameter estimation for a seven degree of freedom hydraulic manipulator. The laboratory example is a dual-manipulator mobile robotic platform used for research into nuclear decommissioning. In contrast to earlier control model-orientated research using the same machine, the paper develops a nonlinear, mechanistic simulation model that can subsequently be used to investigate physically meaningful disturbances. The second contribution is to optimise the parameters of the new model, i.e. to determine reliable estimates of the physical parameters of a complex robotic arm which are not known in advance. To address the nonlinear and non-convex nature of the problem, the research relies on the multi-objectivisation of an output error single-performance index. The developed algorithm utilises a multi-objective genetic algorithm (GA) in order to find a proper solution. The performance of the model and the GA is evaluated using both simulated (i.e. with a known set of 'true' parameters) and experimental data. Both simulation and experimental results show that multi-objectivisation has improved convergence of the estimated parameters compared to the single-objective output error problem formulation. This is achieved by integrating the validation phase inside the algorithm implicitly and exploiting the inherent structure of the multi-objective GA for this specific system identification problem.
MultiSETTER: web server for multiple RNA structure comparison.
Čech, Petr; Hoksza, David; Svozil, Daniel
2015-08-12
Understanding the architecture and function of RNA molecules requires methods for comparing and analyzing their tertiary and quaternary structures. While structural superposition of short RNAs is achievable in a reasonable time, large structures represent a much bigger challenge. Therefore, we have developed a fast and accurate algorithm for RNA pairwise structure superposition called SETTER and implemented it in the SETTER web server. However, though biological relationships can be inferred by a pairwise structure alignment, key features preserved by evolution can be identified only from a multiple structure alignment. Thus, we extended the SETTER algorithm to the alignment of multiple RNA structures and developed the MultiSETTER algorithm. In this paper, we present the updated version of the SETTER web server that implements a user-friendly interface to the MultiSETTER algorithm. The server accepts RNA structures either as a list of PDB IDs or as user-defined PDB files. After the superposition is computed, structures are visualized in 3D and several reports and statistics are generated. To the best of our knowledge, the MultiSETTER web server is the first publicly available tool for multiple RNA structure alignment. The MultiSETTER server offers the visual inspection of an alignment in 3D space, which may reveal structural and functional relationships not captured by other multiple alignment methods based either on sequence or on secondary structure motifs.
NASA Technical Reports Server (NTRS)
Schallhorn, Paul; Majumdar, Alok
2012-01-01
This paper describes a finite volume based numerical algorithm that allows multi-dimensional computation of fluid flow within a system level network flow analysis. There are several thermo-fluid engineering problems where higher fidelity solutions are needed that are not within the capacity of system level codes. The proposed algorithm will allow NASA's Generalized Fluid System Simulation Program (GFSSP) to perform multi-dimensional flow calculation within the framework of GFSSP's typical system level flow network consisting of fluid nodes and branches. The paper presents several classical two-dimensional fluid dynamics problems that have been solved by GFSSP's multi-dimensional flow solver. The numerical solutions are compared with the analytical and benchmark solutions of Poiseuille flow, Couette flow, and flow in a driven cavity.
Pisharady, Pramod Kumar; Sotiropoulos, Stamatios N; Sapiro, Guillermo; Lenglet, Christophe
2017-09-01
We propose a sparse Bayesian learning algorithm for improved estimation of white matter fiber parameters from compressed (under-sampled q-space) multi-shell diffusion MRI data. The multi-shell data is represented in dictionary form using a non-monoexponential decay model of diffusion, based on a continuous gamma distribution of diffusivities. The fiber volume fractions with predefined orientations, which are the unknown parameters, form the dictionary weights. These unknown parameters are estimated with a linear un-mixing framework, using a sparse Bayesian learning algorithm. A localized learning of hyperparameters at each voxel and for each possible fiber orientation improves the parameter estimation. Our experiments using synthetic data from the ISBI 2012 HARDI reconstruction challenge and in-vivo data from the Human Connectome Project demonstrate the improvements.
Parallel transformation of K-SVD solar image denoising algorithm
NASA Astrophysics Data System (ADS)
Liang, Youwen; Tian, Yu; Li, Mei
2017-02-01
Images of the Sun obtained through a large telescope suffer from noise due to the low SNR. The K-SVD denoising algorithm can effectively remove Gaussian white noise, but training dictionaries for sparse representations is a time-consuming task, due to the large size of the data involved and the complexity of the training algorithms. In this paper, OpenMP parallel programming is used to transform the serial algorithm into a parallel version, following a data-parallelism model. The biggest change is that multiple atoms, rather than one, are updated simultaneously. The denoising effect and acceleration performance were tested after completion of the parallel algorithm. The speedup of the program is 13.563 when using 16 cores. The parallel version can fully utilize multi-core CPU hardware resources, greatly reduces running time, and is easily ported to multi-core platforms.
Cavity control as a new quantum algorithms implementation treatment
NASA Astrophysics Data System (ADS)
AbuGhanem, M.; Homid, A. H.; Abdel-Aty, M.
2018-02-01
Based on recent experiments [ Nature 449, 438 (2007) and Nature Physics 6, 777 (2010)], a new approach for realizing quantum gates for the design of quantum algorithms was developed. Accordingly, the operation times of such gates while functioning in algorithm applications depend on the number of photons present in their resonant cavities. Multi-qubit algorithms can be realized in systems in which the photon number is increased slightly over the qubit number. In addition, the time required for operation is considerably less than the dephasing and relaxation times of the systems. The contextual use of the photon number as a main control in the realization of any algorithm was demonstrated. The results indicate the possibility of a full integration into the realization of multi-qubit multiphoton states and its application in algorithm designs. Furthermore, this approach will lead to a successful implementation of these designs in future experiments.
Genetic Algorithms Applied to Multi-Objective Aerodynamic Shape Optimization
NASA Technical Reports Server (NTRS)
Holst, Terry L.
2004-01-01
A genetic algorithm approach suitable for solving multi-objective optimization problems is described and evaluated using a series of aerodynamic shape optimization problems. Several new features including two variations of a binning selection algorithm and a gene-space transformation procedure are included. The genetic algorithm is suitable for finding Pareto optimal solutions in search spaces that are defined by any number of genes and that contain any number of local extrema. A new masking array capability is included, allowing any gene or gene subset to be eliminated as decision variables from the design space. This allows determination of the effect of a single gene or gene subset on the Pareto optimal solution. Results indicate that the genetic algorithm optimization approach is flexible in application and reliable. The binning selection algorithms generally provide Pareto front quality enhancements and moderate convergence efficiency improvements for most of the problems solved.
Memetic Algorithm-Based Multi-Objective Coverage Optimization for Wireless Sensor Networks
Chen, Zhi; Li, Shuai; Yue, Wenjing
2014-01-01
Maintaining effective coverage and extending the network lifetime as much as possible has become one of the most critical issues in the coverage of WSNs. In this paper, we propose a multi-objective coverage optimization algorithm for WSNs, namely MOCADMA, which models the coverage control of WSNs as a multi-objective optimization problem. MOCADMA uses a memetic algorithm with a dynamic local search strategy to optimize the coverage of WSNs and achieve objectives such as high network coverage, effective node utilization and more residual energy. In MOCADMA, the alternative solutions are represented as chromosomes in matrix form, and the optimal solutions are selected through numerous iterations of the evolution process, including selection, crossover, mutation, local enhancement, and fitness evaluation. The experiment and evaluation results show that MOCADMA maintains sensing coverage well, achieves higher network coverage while improving energy efficiency and effectively prolonging the network lifetime, and offers a significant improvement over some existing algorithms. PMID:25360579
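Two of the competing objectives are easy to make concrete on a discretised sensing field. The binary-disc sensing model and grid resolution below are illustrative assumptions, not MOCADMA's actual fitness functions:

```python
import numpy as np

def coverage_objectives(sensors, r_sense, area=(100.0, 100.0), grid=50):
    """Return (covered-area fraction, active-sensor count) for a set of
    sensor positions; the second objective is a proxy for energy use."""
    xs = np.linspace(0, area[0], grid)
    ys = np.linspace(0, area[1], grid)
    gx, gy = np.meshgrid(xs, ys)
    pts = np.column_stack([gx.ravel(), gy.ravel()])
    covered = np.zeros(len(pts), dtype=bool)
    for s in sensors:                     # binary-disc sensing model
        covered |= np.linalg.norm(pts - np.asarray(s), axis=1) <= r_sense
    return covered.mean(), len(sensors)

print(coverage_objectives([(25, 25), (75, 75), (25, 75)], r_sense=30))
```

A memetic algorithm such as MOCADMA evaluates objective vectors like these for each chromosome and retains the non-dominated trade-offs.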
Chen, Jun; Quan, Wenting; Cui, Tingwei
2015-01-01
In this study, two sample semi-analytical algorithms and one new unified multi-band semi-analytical algorithm (UMSA) for estimating chlorophyll-a (Chla) concentration were constructed by specifying optimal wavelengths. The three semi-analytical algorithms, namely the three-band semi-analytical algorithm (TSA), the four-band semi-analytical algorithm (FSA), and the UMSA algorithm, were calibrated and validated with a dataset collected in the Yellow River Estuary between September 1 and 10, 2009. A comparison of the assessment accuracy of the TSA, FSA, and UMSA algorithms showed that the UMSA algorithm performed better than the other two. Using the UMSA algorithm to retrieve Chla concentration in the Yellow River Estuary decreased the NRMSE (normalized root mean square error) by 25.54% compared with the FSA algorithm and by 29.66% compared with the TSA algorithm. These are very significant improvements upon previous methods. Additionally, the study revealed that the TSA and FSA algorithms are merely more specific forms of the UMSA algorithm. Owing to the special form of the UMSA algorithm, if the same bands were used for both the TSA and UMSA algorithms, or for the FSA and UMSA algorithms, the UMSA algorithm would theoretically produce superior results in comparison with the TSA and FSA algorithms. Thus, good results may also be produced if the UMSA algorithm were applied to predict Chla concentration for the datasets of Gitelson et al. (2008) and Le et al. (2009).
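For context, the widely used three-band index takes the Gitelson-style form Chla ≈ a·[Rrs(λ1)⁻¹ − Rrs(λ2)⁻¹]·Rrs(λ3) + b. The sketch below illustrates that form with placeholder coefficients; it is not the calibrated UMSA model of this paper:

```python
def tsa_chla(r1, r2, r3, a=1.0, b=0.0):
    """Three-band semi-analytical Chla index of the Gitelson form.

    r1, r2, r3 : remote-sensing reflectances at the three optimal bands
    a, b       : regression coefficients; the defaults are placeholders
                 and must be calibrated against in-situ data.
    """
    return a * (1.0 / r1 - 1.0 / r2) * r3 + b
```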
NASA Astrophysics Data System (ADS)
Wang, Wenrui; Wu, Yaohua; Wu, Yingying
2016-05-01
E-commerce, as an emerging marketing mode, has attracted more and more attention and gradually changed the way of our life. However, the existing layout of distribution centers cannot sufficiently fulfill the storage and picking demands of e-commerce. In this paper, a modified miniload automated storage/retrieval system is designed to fit these new characteristics of e-commerce in logistics. Meanwhile, a matching problem, concerned with improving picking efficiency in the new system, is studied in this paper. The problem is how to reduce the travelling distance of totes between aisles and picking stations. A multi-stage heuristic algorithm is proposed based on a statement and model of this problem. The main idea of this algorithm is to minimize, with some heuristic strategies based on similarity coefficients, the transport of items that cannot reach their destination picking stations through direct conveyors alone. The experimental results based on computer-generated cases show that the average reduction in indirect transport times can reach 14.36% with the application of the multi-stage heuristic algorithm. For the cases from a real e-commerce distribution center, the order processing time can be reduced from 11.20 h to 10.06 h with the help of the modified system and the proposed algorithm. In summary, this research proposed a modified system and a multi-stage heuristic algorithm that can reduce the travelling distance of totes effectively and improve the whole performance of an e-commerce distribution center.
Nonlinear inversion of potential-field data using a hybrid-encoding genetic algorithm
Chen, C.; Xia, J.; Liu, J.; Feng, G.
2006-01-01
Using a genetic algorithm to solve an inverse problem of complex nonlinear geophysical equations is advantageous because it does not require computing gradients of models or "good" initial models. The multi-point search of a genetic algorithm makes it easier to find the globally optimal solution while avoiding falling into a local extremum. As is the case in other optimization approaches, the search efficiency of a genetic algorithm is vital in finding desired solutions successfully in a multi-dimensional model space. A binary-encoding genetic algorithm is hardly ever used to resolve an optimization problem such as a simple geophysical inversion with only three unknowns. The encoding mechanism, genetic operators, and population size of the genetic algorithm greatly affect search processes in the evolution. It is clear that improved operators and a proper population size promote convergence. Nevertheless, not all genetic operations perform perfectly while searching under either a uniform binary or a decimal encoding system. With the binary encoding mechanism, the crossover scheme may produce more new individuals than with the decimal encoding. On the other hand, the mutation scheme in a decimal encoding system will create new genes larger in scope than those in the binary encoding. This paper discusses approaches to exploiting the search potential of genetic operations in the two encoding systems and presents an approach with a hybrid-encoding mechanism, multi-point crossover, and dynamic population size for geophysical inversion. We present a method based on the routine in which the mutation operation is conducted in the decimal code and the multi-point crossover operation in the binary code. The mixed-encoding algorithm is called the hybrid-encoding genetic algorithm (HEGA). HEGA provides better genes with a higher probability through the mutation operator and improves genetic algorithms in resolving complicated geophysical inverse problems. Another significant result is that the final solution is determined by the average model derived from multiple trials instead of one computation, due to the randomness in a genetic algorithm procedure. These advantages were demonstrated by synthetic and real-world examples of inversion of potential-field data.
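The hybrid encoding is straightforward to sketch: each real unknown has a fixed-point binary image used for multi-point crossover, while mutation acts directly on the decimal value. The bit depth, bounds, and Gaussian mutation below are illustrative assumptions, not the paper's settings:

```python
import random
random.seed(1)

BITS = 16
LO, HI = 0.0, 10.0                       # assumed search range per unknown

def encode(x):                           # decimal -> fixed-point binary
    q = int(round((x - LO) / (HI - LO) * (2**BITS - 1)))
    return format(q, f'0{BITS}b')

def decode(s):
    return LO + int(s, 2) / (2**BITS - 1) * (HI - LO)

def crossover_binary(p1, p2, n_points=2):
    """Multi-point crossover performed in the binary code."""
    cuts = sorted(random.sample(range(1, BITS), n_points))
    c1, c2, flip, last = [], [], False, 0
    for cut in cuts + [BITS]:
        a, b = (p2, p1) if flip else (p1, p2)
        c1.append(a[last:cut]); c2.append(b[last:cut])
        flip, last = not flip, cut
    return ''.join(c1), ''.join(c2)

def mutate_decimal(x, scale=0.5):
    """Mutation performed in the decimal code: a real-valued jump whose
    scope is wider than single-bit flips would allow."""
    return min(HI, max(LO, x + random.gauss(0.0, scale)))

# one hybrid step: cross in binary, mutate in decimal
a, b = crossover_binary(encode(3.2), encode(7.9))
print(mutate_decimal(decode(a)), mutate_decimal(decode(b)))
```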
Imputation of adverse drug reactions: Causality assessment in hospitals
Mastroianni, Patricia de Carvalho
2017-01-01
Background & objectives: Different algorithms have been developed to standardize the causality assessment of adverse drug reactions (ADR). Although most share common characteristics, the results of the causality assessment vary depending on the algorithm used. Therefore, using 10 different algorithms, the study aimed to compare inter-rater and multi-rater agreement for ADR causality assessment and to identify the algorithm most consistent for hospital use. Methods: Using ten causality algorithms, four judges independently assessed the first 44 cases of ADRs reported during the first year of implementation of a risk management service in a medium-complexity hospital in the state of Sao Paulo (Brazil). Owing to variations in the terminology used for causality, the equivalent imputation terms were grouped into four categories: definite, probable, possible and unlikely. Inter-rater and multi-rater agreement analysis was performed by calculating Cohen's and Light's kappa coefficients, respectively. Results: None of the algorithms showed 100% reproducibility in the causal imputation. Fair inter-rater and multi-rater agreement was found. The Emanuele (1984) and WHO-UMC (2010) algorithms showed a fair rate of agreement between the judges (k = 0.36). Interpretation & conclusions: Although the ADR causality assessment algorithms were poorly reproducible, our data suggest that the WHO-UMC algorithm is the most consistent for imputation in hospitals, since it allows evaluating the quality of the report. However, to improve the ability to assess causality using algorithms, it is necessary to include criteria for the evaluation of drug-related problems, which may be related to confounding variables that underestimate the causal association. PMID:28166274
Hop Optimization and Relay Node Selection in Multi-hop Wireless Ad-Hoc Networks
NASA Astrophysics Data System (ADS)
Li, Xiaohua (Edward)
In this paper we propose an efficient approach to determine the optimal hops for multi-hop ad hoc wireless networks. Based on the assumption that nodes use successive interference cancellation (SIC) and maximal ratio combining (MRC) to deal with mutual interference and to utilize all the received signal energy, we show that the signal-to-interference-plus-noise ratio (SINR) of a node is determined only by the nodes before it, not the nodes after it, along a packet forwarding path. Based on this observation, we propose an iterative procedure to select the relay nodes and to calculate the path SINR, as well as the capacity, of an arbitrary multi-hop packet forwarding path. The complexity of the algorithm is extremely low and scales well with network size, so the algorithm is applicable in arbitrarily large networks. Simulations demonstrate its desirable performance. The algorithm can be helpful in analyzing the performance of multi-hop wireless networks.
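The one-directional dependence makes the path evaluation a single forward sweep. The sketch below is a simplified stand-in for the paper's expressions: it assumes perfect SIC (already-decoded upstream packets contribute no interference) and MRC over all upstream transmissions, and rates the path by its bottleneck hop:

```python
import numpy as np

def path_capacity(gains, p_tx=1.0, noise=1e-3):
    """Evaluate a relay path node by node.

    gains : (n, n) matrix of channel power gains between path nodes
            0..n-1, in forwarding order.
    Node k's SINR depends only on nodes 0..k-1, so one forward pass
    suffices; the capacity is set by the bottleneck hop.
    """
    n = gains.shape[0]
    sinrs = []
    for k in range(1, n):
        # MRC: combine the copies received from every upstream node
        signal = sum(p_tx * gains[j, k] for j in range(k))
        interference = 0.0       # perfect SIC of already-decoded packets
        sinrs.append(signal / (interference + noise))
    return np.log2(1.0 + min(sinrs))

g = np.full((4, 4), 0.1)         # toy 3-hop path with flat gains
print(path_capacity(g))
```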
(n, N) type maintenance policy for multi-component systems with failure interactions
NASA Astrophysics Data System (ADS)
Zhang, Zhuoqi; Wu, Su; Li, Binfeng; Lee, Seungchul
2015-04-01
This paper studies maintenance policies for multi-component systems that involve failure interactions and opportunistic maintenance (OM). This maintenance problem can be formulated as a Markov decision process (MDP). However, since the action set and state space of the MDP expand exponentially as the number of components increases, traditional approaches are computationally intractable. To deal with the curse of dimensionality, we decompose such a multi-component system into mutually influential single-component systems. Each single-component system is formulated as an MDP with the objective of minimising its long-run average maintenance cost. Under some reasonable assumptions, we prove the existence of the optimal (n, N) type policy for a single-component system. An algorithm to obtain the optimal (n, N) type policy is also proposed. Based on the proposed algorithm, we develop an iterative approximation algorithm to obtain an acceptable maintenance policy for a multi-component system. Numerical examples show that failure interactions and OM have significant effects on a maintenance policy.
Yu, Hua-Gen
2015-01-28
We report a rigorous full dimensional quantum dynamics algorithm, the multi-layer Lanczos method, for computing vibrational energies and dipole transition intensities of polyatomic molecules without any dynamics approximation. The multi-layer Lanczos method is developed by using a few advanced techniques including the guided spectral transform Lanczos method, multi-layer Lanczos iteration approach, recursive residue generation method, and dipole-wavefunction contraction. The quantum molecular Hamiltonian at the total angular momentum J = 0 is represented in a set of orthogonal polyspherical coordinates so that the large amplitude motions of vibrations are naturally described. In particular, the algorithm is general and problem-independent. An application is illustrated by calculating the infrared vibrational dipole transition spectrum of CH₄ based on the ab initio T8 potential energy surface of Schwenke and Partridge and the low-order truncated ab initio dipole moment surfaces of Yurchenko and co-workers. A comparison with experiments is made. The algorithm is also applicable for Raman polarizability active spectra.
MultiNest: Efficient and Robust Bayesian Inference
NASA Astrophysics Data System (ADS)
Feroz, F.; Hobson, M. P.; Bridges, M.
2011-09-01
We present further development and the first public release of our multimodal nested sampling algorithm, called MultiNest. This Bayesian inference tool calculates the evidence, with an associated error estimate, and produces posterior samples from distributions that may contain multiple modes and pronounced (curving) degeneracies in high dimensions. The developments presented here lead to further substantial improvements in sampling efficiency and robustness, as compared to the original algorithm presented in Feroz & Hobson (2008), which itself significantly outperformed existing MCMC techniques in a wide range of astrophysical inference problems. The accuracy and economy of the MultiNest algorithm are demonstrated by application to two toy problems and to a cosmological inference problem focusing on the extension of the vanilla LambdaCDM model to include spatial curvature and a varying equation of state for dark energy. The MultiNest software is fully parallelized using MPI and includes an interface to CosmoMC. It will also be released as part of the SuperBayeS package, for the analysis of supersymmetric theories of particle physics, at this http URL.
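For orientation, the nested-sampling core that MultiNest refines can be written in a few lines. The sketch below uses naive rejection sampling from the prior and ignores the final live-point contribution to the evidence; MultiNest's contribution is precisely to replace that naive step with ellipsoidal decomposition so that multimodal and degenerate posteriors are sampled efficiently:

```python
import numpy as np
rng = np.random.default_rng(0)

def nested_sampling(loglike, prior_sample, n_live=100, n_iter=500):
    """Minimal nested-sampling loop: repeatedly replace the worst live
    point with a prior draw of higher likelihood, accumulating the
    evidence Z from the shrinking prior volume X_i ~ exp(-i/n_live)."""
    live = np.array([prior_sample() for _ in range(n_live)])
    logl = np.array([loglike(p) for p in live])
    log_z = -np.inf
    log_w = np.log(1.0 - np.exp(-1.0 / n_live))  # per-shell weight factor
    for i in range(n_iter):
        worst = int(np.argmin(logl))
        log_z = np.logaddexp(log_z, log_w - i / n_live + logl[worst])
        while True:                    # naive hard-constraint sampling
            p = prior_sample()
            if loglike(p) > logl[worst]:
                live[worst], logl[worst] = p, loglike(p)
                break
    return log_z

# toy run: unit Gaussian likelihood, uniform prior on [-5, 5]
ll = lambda t: -0.5 * float(t[0]) ** 2
print(nested_sampling(ll, lambda: rng.uniform(-5, 5, 1)))
```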
The Speech multi features fusion perceptual hash algorithm based on tensor decomposition
NASA Astrophysics Data System (ADS)
Huang, Y. B.; Fan, M. H.; Zhang, Q. Y.
2018-03-01
With constant progress in modern speech communication technologies, speech data are prone to be attacked by noise or maliciously tampered with. In order to give a speech perceptual hash algorithm strong robustness and high efficiency, this paper puts forward a speech perceptual hash algorithm based on tensor decomposition and multiple features. The algorithm analyses the perceptual features of speech, applying wavelet packet decomposition to obtain each speech component. The LPCC, LSP and ISP features of each speech component are extracted to constitute the speech feature tensor. Speech authentication is done by generating the hash values through feature matrix quantification using the mid-value. Experimental results show that the proposed algorithm is robust to content-preserving operations compared with similar algorithms, and is able to resist the attack of common background noise. Also, the algorithm is highly efficient in terms of arithmetic, and is able to meet the real-time requirements of speech communication and complete the speech authentication quickly.
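The mid-value quantification step is the easiest part to make concrete: each feature coefficient becomes one hash bit according to whether it exceeds the median, and authentication compares hashes by normalised Hamming distance. The upstream feature tensor (wavelet-packet components with LPCC/LSP/ISP features) and its tensor decomposition are not reproduced in this sketch:

```python
import numpy as np

def perceptual_hash(feature_matrix):
    """One hash bit per coefficient: 1 if above the median, else 0."""
    flat = np.asarray(feature_matrix, dtype=float).ravel()
    return (flat > np.median(flat)).astype(np.uint8)

def bit_error_rate(h1, h2):
    """Normalised Hamming distance used to accept or reject a match."""
    return float(np.mean(h1 != h2))

rng = np.random.default_rng(0)
f = rng.normal(size=(12, 24))            # stand-in feature matrix
f_noisy = f + 0.05 * rng.normal(size=f.shape)
print(bit_error_rate(perceptual_hash(f), perceptual_hash(f_noisy)))
```

Small perturbations (content-preserving operations) flip few bits, while tampering flips roughly half of them, which is what makes such a hash usable for authentication.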
Classifying epileptic EEG signals with delay permutation entropy and Multi-Scale K-means.
Zhu, Guohun; Li, Yan; Wen, Peng Paul; Wang, Shuaifang
2015-01-01
Most epileptic EEG classification algorithms are supervised and require large training datasets, which hinders their use in real-time applications. This chapter proposes an unsupervised Multi-Scale K-means (MSK-means) algorithm to distinguish epileptic EEG signals and identify epileptic zones. The random initialization of the K-means algorithm can lead to wrong clusters. Based on the characteristics of EEGs, the MSK-means algorithm initializes the coarse-scale centroid of a cluster with a suitable scale factor. In this chapter, the MSK-means algorithm is proved theoretically superior to the K-means algorithm in efficiency. In addition, three classifiers, K-means, MSK-means and the support vector machine (SVM), are used to identify seizures and localize the epileptogenic zone using delay permutation entropy features. The experimental results demonstrate that identifying seizures with the MSK-means algorithm and delay permutation entropy achieves 4.7% higher accuracy than K-means, and 0.7% higher accuracy than the SVM.
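The coarse-scale initialisation can be sketched as follows; the windowed averaging and quantile seeding below are plausible stand-ins, since the chapter's exact scale-factor rule is not reproduced here:

```python
import numpy as np
from scipy.cluster.vq import kmeans2

def msk_means(data, k, scale=4):
    """Multi-scale seeding for 1-D K-means: smooth the signal over
    non-overlapping windows of length 'scale' (the coarse scale), seed
    centroids at evenly spaced quantiles of the smoothed series, then
    run ordinary K-means from those deterministic seeds."""
    n = len(data) // scale * scale
    coarse = np.asarray(data[:n], float).reshape(-1, scale).mean(axis=1)
    seeds = np.quantile(coarse, np.linspace(0, 1, k + 2)[1:-1])
    centroids, labels = kmeans2(np.asarray(data, float).reshape(-1, 1),
                                seeds.reshape(-1, 1), minit='matrix')
    return centroids, labels

rng = np.random.default_rng(0)
sig = np.concatenate([rng.normal(0, 1, 300), rng.normal(6, 1, 300)])
print(msk_means(sig, k=2)[0].ravel())
```

Deterministic, scale-aware seeds avoid the wrong clusters that random K-means initialisation can produce.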
NASA Astrophysics Data System (ADS)
Bagherzadeh, Seyed Amin; Asadi, Davood
2017-05-01
In search of a precise method for analyzing nonlinear and non-stationary flight data of an aircraft in icing conditions, an Empirical Mode Decomposition (EMD) algorithm enhanced by multi-objective optimization is introduced. In the proposed method, dissimilar IMF definitions are considered by the Genetic Algorithm (GA) in order to find the best decision parameters of the signal trend. To resolve the disadvantages of the classical algorithm caused by the envelope concept, the signal trend is estimated directly in the proposed method. Furthermore, in order to simplify the performance and understanding of the EMD algorithm, the proposed method obviates the need for a repeated sifting process. The proposed enhanced EMD algorithm is verified on some benchmark signals. Afterwards, the enhanced algorithm is applied to simulated flight data in icing conditions in order to detect ice accretion on the aircraft. The results demonstrate the effectiveness of the proposed EMD algorithm in aircraft ice detection by providing a figure of merit for the icing severity.
NASA Astrophysics Data System (ADS)
Guo, Zhan; Yan, Xuefeng
2018-04-01
Different operating conditions of p-xylene oxidation have different influences on the product, purified terephthalic acid. It is necessary to obtain the optimal combination of reaction conditions to ensure the quality of the products, cut down on consumption and increase revenues. A multi-objective differential evolution (MODE) algorithm co-evolved with the population-based incremental learning (PBIL) algorithm, called PBMODE, is proposed. The PBMODE algorithm was designed as a co-evolutionary system. Each individual has its own parameter individual, which is co-evolved by PBIL. PBIL uses statistical analysis to build a model based on the corresponding symbiotic individuals of the superior original individuals during the main evolutionary process. The results of simulations and statistical analysis indicate that the overall performance of the PBMODE algorithm is better than that of the compared algorithms and it can be used to optimize the operating conditions of the p-xylene oxidation process effectively and efficiently.
Crabtree, Nathaniel M; Moore, Jason H; Bowyer, John F; George, Nysia I
2017-01-01
A computational evolution system (CES) is a knowledge discovery engine that can identify subtle, synergistic relationships in large datasets. Pareto optimization allows CESs to balance accuracy with model complexity when evolving classifiers. Using Pareto optimization, a CES is able to identify a very small number of features while maintaining high classification accuracy. A CES can be designed for various types of data, and the user can exploit expert knowledge about the classification problem in order to improve discrimination between classes. These characteristics give CES an advantage over other classification and feature selection algorithms, particularly when the goal is to identify a small number of highly relevant, non-redundant biomarkers. Previously, CESs have been developed only for binary class datasets. In this study, we developed a multi-class CES. The multi-class CES was compared to three common feature selection and classification algorithms: support vector machine (SVM), random k-nearest neighbor (RKNN), and random forest (RF). The algorithms were evaluated on three distinct multi-class RNA sequencing datasets. The comparison criteria were run-time, classification accuracy, number of selected features, and stability of selected feature set (as measured by the Tanimoto distance). The performance of each algorithm was data-dependent. CES performed best on the dataset with the smallest sample size, indicating that CES has a unique advantage since the accuracy of most classification methods suffer when sample size is small. The multi-class extension of CES increases the appeal of its application to complex, multi-class datasets in order to identify important biomarkers and features.
NASA Astrophysics Data System (ADS)
Bauer, Johannes; Dávila-Chacón, Jorge; Wermter, Stefan
2015-10-01
Humans and other animals have been shown to perform near-optimally in multi-sensory integration tasks. Probabilistic population codes (PPCs) have been proposed as a mechanism by which optimal integration can be accomplished. Previous approaches have focussed on how neural networks might produce PPCs from sensory input or perform calculations using them, like combining multiple PPCs. Less attention has been given to the question of how the necessary organisation of neurons can arise and how the required knowledge about the input statistics can be learned. In this paper, we propose a model of learning multi-sensory integration based on an unsupervised learning algorithm in which an artificial neural network learns the noise characteristics of each of its sources of input. Our algorithm borrows from the self-organising map the ability to learn latent-variable models of the input and extends it to learning to produce a PPC approximating a probability density function over the latent variable behind its (noisy) input. The neurons in our network are only required to perform simple calculations and we make few assumptions about input noise properties and tuning functions. We report on a neurorobotic experiment in which we apply our algorithm to multi-sensory integration in a humanoid robot to demonstrate its effectiveness and compare it to human multi-sensory integration on the behavioural level. We also show in simulations that our algorithm performs near-optimally under certain plausible conditions, and that it reproduces important aspects of natural multi-sensory integration on the neural level.
Comparison of Two Detection Combination Algorithms for Phased Array Radars
2015-07-01
Data were generated by a simulator of a multi-function radar (MFR), and the combination algorithms are evaluated with the recorded simulation data. An electronically scanned phased-array multi-function radar (MFR) is a type of radar whose transmitter and receiver functions are composed of numerous small transmit/receive modules. An MFR can perform many functions previously performed by individual, dedicated radars for search, tracking and ...
NASA Astrophysics Data System (ADS)
Hashimoto, M.; Nakajima, T.; Morimoto, S.; Takenaka, H.
2014-12-01
We have developed a new satellite remote sensing algorithm to retrieve aerosol optical characteristics using multi-wavelength and multi-pixel information from satellite imagers (the MWP method). In this algorithm, the inversion method is a combination of the maximum a posteriori (MAP) method (Rodgers, 2000) and the Phillips-Twomey method (Phillips, 1962; Twomey, 1963), the latter serving as a smoothing constraint on the state vector. Furthermore, with the progress of computing techniques, this method is combined with direct radiative transfer calculation, numerically solved at each iteration step of the non-linear inverse problem with several constraints, without using a look-up table (LUT). The retrieved parameters in our algorithm are aerosol optical properties, such as the aerosol optical thickness (AOT) of fine and coarse mode particles, the volume soot fraction in fine mode particles, and the ground surface albedo at each observed wavelength. We simultaneously retrieve all the parameters that characterize pixels in each of the horizontal sub-domains constituting the target area, and then successively apply the retrieval method to all the sub-domains in the target area. We conducted numerical tests for the retrieval of aerosol properties and ground surface albedo for GOSAT/CAI imager data to test the algorithm over land. The results showed that the AOTs of the fine and coarse modes, the soot fraction, and the ground surface albedo are successfully retrieved within the expected accuracy. We discuss the accuracy of the algorithm for various land surface types. We then applied this algorithm to GOSAT/CAI imager data and compared retrieved and surface-observed AOTs at the CAI pixel closest to an AERONET (Aerosol Robotic Network) or SKYNET site in each region. Comparison at several sites in urban areas indicated that the AOTs retrieved by our method agree with the surface-observed AOTs within ±0.066. Our future work is to extend the algorithm to the analysis of ADEOS-II/GLI and GCOM-C/SGLI data.
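Although the paper's exact functional is not reproduced in the abstract, a MAP retrieval with a Phillips-Twomey smoothing constraint is conventionally written as the minimisation of a cost of the following form (the symbols here are the standard ones, assumed rather than quoted from the paper):

```latex
J(\mathbf{x}) =
\left[\mathbf{y}-\mathbf{F}(\mathbf{x})\right]^{\mathrm{T}}
\mathbf{S}_{\epsilon}^{-1}
\left[\mathbf{y}-\mathbf{F}(\mathbf{x})\right]
+\left(\mathbf{x}-\mathbf{x}_{a}\right)^{\mathrm{T}}
\mathbf{S}_{a}^{-1}
\left(\mathbf{x}-\mathbf{x}_{a}\right)
+\gamma\,\lVert\mathbf{D}\mathbf{x}\rVert^{2}
```

where y is the multi-wavelength, multi-pixel measurement vector, F the forward radiative transfer model evaluated at each iteration, x the state vector (AOTs, soot fraction, surface albedos) with prior x_a, S_ε and S_a the measurement and prior covariances, D a difference operator over neighbouring pixels, and γ the weight of the Phillips-Twomey smoothness term.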
Rahaman, Mijanur; Pang, Chin-Tzong; Ishtyak, Mohd; Ahmad, Rais
2017-01-01
In this article, we introduce a perturbed system of generalized mixed quasi-equilibrium-like problems involving multi-valued mappings in Hilbert spaces. To calculate the approximate solutions of the perturbed system of generalized multi-valued mixed quasi-equilibrium-like problems, firstly we develop a perturbed system of auxiliary generalized multi-valued mixed quasi-equilibrium-like problems, and then by using the celebrated Fan-KKM technique, we establish the existence and uniqueness of solutions of the perturbed system of auxiliary generalized multi-valued mixed quasi-equilibrium-like problems. By deploying an auxiliary principle technique and an existence result, we formulate an iterative algorithm for solving the perturbed system of generalized multi-valued mixed quasi-equilibrium-like problems. Lastly, we study the strong convergence analysis of the proposed iterative sequences under monotonicity and some mild conditions. These results are new and generalize some known results in this field.
NASA Astrophysics Data System (ADS)
Ouyang, Qi; Lu, Wenxi; Hou, Zeyu; Zhang, Yu; Li, Shuai; Luo, Jiannan
2017-05-01
In this paper, a multi-algorithm genetically adaptive multi-objective (AMALGAM) method is proposed as a multi-objective optimization solver. It was implemented in the multi-objective optimization of a groundwater remediation design at sites contaminated by dense non-aqueous phase liquids. In this study, there were two objectives: minimization of the total remediation cost, and minimization of the remediation time. A non-dominated sorting genetic algorithm II (NSGA-II) was adopted for comparison with the proposed method. For efficiency, the time-consuming surfactant-enhanced aquifer remediation simulation model was replaced by a surrogate model constructed by a multi-gene genetic programming (MGGP) technique. Similarly, two other surrogate modeling methods, support vector regression (SVR) and Kriging (KRG), were employed for comparison with MGGP. In addition, the surrogate-modeling uncertainty was incorporated in the optimization model by chance-constrained programming (CCP). The results showed that, for the problem considered in this study, (1) the solutions obtained by AMALGAM incurred less remediation cost and required less time than those of NSGA-II, indicating that AMALGAM outperformed NSGA-II; (2) the MGGP surrogate model was more accurate than SVR and KRG; and (3) the remediation cost and time increased with the confidence level, which can enable decision makers to make a suitable choice by considering the given budget, remediation time, and reliability.
NASA Astrophysics Data System (ADS)
Luo, Qiankun; Wu, Jianfeng; Yang, Yun; Qian, Jiazhong; Wu, Jichun
2014-11-01
This study develops a new probabilistic multi-objective fast harmony search algorithm (PMOFHS) for optimal design of groundwater remediation systems under uncertainty associated with the hydraulic conductivity (K) of aquifers. The PMOFHS integrates the previously developed deterministic multi-objective optimization method, namely multi-objective fast harmony search algorithm (MOFHS) with a probabilistic sorting technique to search for Pareto-optimal solutions to multi-objective optimization problems in a noisy hydrogeological environment arising from insufficient K data. The PMOFHS is then coupled with the commonly used flow and transport codes, MODFLOW and MT3DMS, to identify the optimal design of groundwater remediation systems for a two-dimensional hypothetical test problem and a three-dimensional Indiana field application involving two objectives: (i) minimization of the total remediation cost through the engineering planning horizon, and (ii) minimization of the mass remaining in the aquifer at the end of the operational period, whereby the pump-and-treat (PAT) technology is used to clean up contaminated groundwater. Also, Monte Carlo (MC) analysis is employed to evaluate the effectiveness of the proposed methodology. Comprehensive analysis indicates that the proposed PMOFHS can find Pareto-optimal solutions with low variability and high reliability and is a potentially effective tool for optimizing multi-objective groundwater remediation problems under uncertainty.
NASA Astrophysics Data System (ADS)
Tereshin, Alexander A.; Usilin, Sergey A.; Arlazarov, Vladimir V.
2018-04-01
This paper studies the problem of multi-class object detection in a video stream with Viola-Jones cascades. An adaptive algorithm for selecting the Viola-Jones cascade, based on a greedy choice strategy for the N-armed bandit problem, is proposed. The efficiency of the algorithm is shown on the problem of detecting and recognizing bank card logos in a video stream. The proposed algorithm can be effectively used in document localization and identification, recognition of road scene elements, localization and tracking of lengthy objects, and other rigid-object detection problems in heterogeneous data flows. The computational efficiency of the algorithm makes it possible to use it both on personal computers and on mobile devices based on processors with low power consumption.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moignier, C; Huet, C; Barraux, V
Purpose: Advanced stereotactic radiotherapy (SRT) treatments require accurate dose calculation for treatment planning, especially for treatment sites involving heterogeneous patient anatomy. The purpose of this study was to evaluate the accuracy of the dose calculation algorithms, Raytracing and Monte Carlo (MC), implemented in the MultiPlan treatment planning system (TPS) in the presence of heterogeneities. Methods: First, the LINAC of a CyberKnife radiotherapy facility was modeled with the PENELOPE MC code. A protocol for the measurement of dose distributions with EBT3 films was established and validated through comparison between experimental dose distributions and calculated dose distributions obtained with the MultiPlan Raytracing and MC algorithms, as well as with the PENELOPE MC model, for treatments planned with the homogeneous Easycube phantom. Finally, bone and lung inserts were used to set up a heterogeneous Easycube phantom. Treatment plans with 10, 7.5, or 5 mm field sizes were generated in the MultiPlan TPS with different tumor localizations (in the lung and at the lung/bone/soft tissue interface). Experimental dose distributions were compared to the PENELOPE MC and MultiPlan calculations using the gamma index method. Results: Regarding the experiment in the homogeneous phantom, 100% of the points passed the 3%/3 mm tolerance criteria. These criteria include the global error of the method (CT-scan resolution, EBT3 dosimetry, LINAC positioning, ...), and were used afterwards to estimate the accuracy of the MultiPlan algorithms in heterogeneous media. Comparison of the dose distributions obtained in the heterogeneous phantom is in progress. Conclusion: This work has led to the development of numerical and experimental dosimetric tools for small beam dosimetry. The Raytracing and MC algorithms implemented in the MultiPlan TPS were evaluated in heterogeneous media.
Zaritsky, Assaf; Natan, Sari; Horev, Judith; Hecht, Inbal; Wolf, Lior; Ben-Jacob, Eshel; Tsarfaty, Ilan
2011-01-01
Confocal microscopy analysis of fluorescence and morphology is becoming the standard tool in cell biology and molecular imaging. Accurate quantification algorithms are required to enhance the understanding of different biological phenomena. We present a novel approach based on image-segmentation of multi-cellular regions in bright field images demonstrating enhanced quantitative analyses and better understanding of cell motility. We present MultiCellSeg, a segmentation algorithm to separate between multi-cellular and background regions for bright field images, which is based on classification of local patches within an image: a cascade of Support Vector Machines (SVMs) is applied using basic image features. Post processing includes additional classification and graph-cut segmentation to reclassify erroneous regions and refine the segmentation. This approach leads to a parameter-free and robust algorithm. Comparison to an alternative algorithm on wound healing assay images demonstrates its superiority. The proposed approach was used to evaluate common cell migration models such as wound healing and scatter assay. It was applied to quantify the acceleration effect of Hepatocyte growth factor/scatter factor (HGF/SF) on healing rate in a time lapse confocal microscopy wound healing assay and demonstrated that the healing rate is linear in both treated and untreated cells, and that HGF/SF accelerates the healing rate by approximately two-fold. A novel fully automated, accurate, zero-parameters method to classify and score scatter-assay images was developed and demonstrated that multi-cellular texture is an excellent descriptor to measure HGF/SF-induced cell scattering. We show that exploitation of textural information from differential interference contrast (DIC) images on the multi-cellular level can prove beneficial for the analyses of wound healing and scatter assays. The proposed approach is generic and can be used alone or alongside traditional fluorescence single-cell processing to perform objective, accurate quantitative analyses for various biological applications. PMID:22096600
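The heart of the pipeline, patch-level classification, is compact to sketch. The three hand-crafted features and the single LinearSVC stage below are simplifications of MultiCellSeg's feature set and SVM cascade, and the graph-cut refinement is omitted:

```python
import numpy as np
from sklearn.svm import LinearSVC

def patch_features(patch):
    """Basic per-patch statistics (mean, spread, gradient energy)."""
    gy, gx = np.gradient(patch.astype(float))
    return [patch.mean(), patch.std(), float(np.mean(gx**2 + gy**2))]

def train_patch_classifier(patches, labels):
    """Train one SVM stage to label patches: 1 = multi-cellular,
    0 = background."""
    X = np.array([patch_features(p) for p in patches])
    return LinearSVC(dual=False).fit(X, labels)

def segment(image, clf, size=16):
    """Tile the image into patches and classify each one."""
    h, w = image.shape
    mask = np.zeros((h // size, w // size), dtype=int)
    for i in range(h // size):
        for j in range(w // size):
            p = image[i * size:(i + 1) * size, j * size:(j + 1) * size]
            mask[i, j] = int(clf.predict([patch_features(p)])[0])
    return mask
```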
Lin, Na; Chen, Hanning; Jing, Shikai; Liu, Fang; Liang, Xiaodan
2017-03-01
In recent years, symbiosis, as a rich source of potential engineering applications and computational models, has attracted increasing attention in the adaptive complex systems and evolutionary computing domains. Inspired by the different forms of symbiotic coevolution found in nature, this paper proposes a series of multi-swarm particle swarm optimizers called PS2Os, which extend the single-population particle swarm optimization (PSO) algorithm to an interacting multi-swarm model by constructing hierarchical interaction topologies and enhanced dynamical update equations. According to different symbiotic interrelationships, four versions of PS2O are instantiated to mimic the mutualism, commensalism, predation, and competition mechanisms, respectively. In experiments on five benchmark problems, the proposed algorithms are shown to have considerable potential for solving complex optimization problems. The coevolutionary dynamics of the symbiotic species in each PS2O version are also studied to demonstrate how the heterogeneity of the different symbiotic interrelationships affects algorithm performance. PS2O is then used to solve the radio frequency identification (RFID) network planning (RNP) problem, which has a mixture of discrete and continuous variables. Simulation results show that the proposed algorithm outperforms the reference algorithms for planning RFID networks in terms of optimization accuracy and computational robustness.
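To make the multi-swarm idea concrete, the following is a minimal sketch of a two-swarm "mutualism-style" PSO in the spirit of PS2O; the symbiotic coupling term and all parameter values are illustrative assumptions, not the paper's exact update equations.

```python
# Minimal two-swarm PSO with a mutualism-style coupling term (assumed form).
import numpy as np

def sphere(x):
    return np.sum(x ** 2)

def multi_swarm_pso(f, dim=10, n_swarms=2, swarm_size=20, iters=200,
                    w=0.7, c1=1.5, c2=1.5, c3=0.5, bounds=(-5.0, 5.0)):
    lo, hi = bounds
    rng = np.random.default_rng(0)
    x = rng.uniform(lo, hi, (n_swarms, swarm_size, dim))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_val = np.array([[f(p) for p in s] for s in x])
    sbest = pbest[np.arange(n_swarms), pbest_val.argmin(axis=1)]  # per-swarm best
    for _ in range(iters):
        for s in range(n_swarms):
            partner = sbest[(s + 1) % n_swarms]        # best of the partner swarm
            r1, r2, r3 = rng.random((3, swarm_size, dim))
            v[s] = (w * v[s]
                    + c1 * r1 * (pbest[s] - x[s])       # cognitive term
                    + c2 * r2 * (sbest[s] - x[s])       # social term (own swarm)
                    + c3 * r3 * (partner - x[s]))       # symbiotic coupling
            x[s] = np.clip(x[s] + v[s], lo, hi)
            vals = np.array([f(p) for p in x[s]])
            better = vals < pbest_val[s]
            pbest[s][better], pbest_val[s][better] = x[s][better], vals[better]
            sbest[s] = pbest[s][pbest_val[s].argmin()]
    best = sbest[np.array([f(b) for b in sbest]).argmin()]
    return best, f(best)

best, val = multi_swarm_pso(sphere)
print(val)  # should approach 0 on the sphere benchmark
```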
Case-Based Multi-Sensor Intrusion Detection
NASA Astrophysics Data System (ADS)
Schwartz, Daniel G.; Long, Jidong
2009-08-01
Multi-sensor intrusion detection systems (IDSs) combine the alerts raised by individual IDSs and possibly other kinds of devices such as firewalls and antivirus software. A critical issue in building a multi-sensor IDS is alert correlation, i.e., determining which alerts are caused by the same attack. This paper explores a novel approach to alert correlation using case-based reasoning (CBR). Each case in the CBR system's library contains a pattern of alerts raised by some known attack type, together with the identity of the attack. During run time, the alert streams gleaned from the sensors are compared with the patterns in the cases, and a match indicates that the attack described by that case has occurred. For this purpose the design of a fast and accurate matching algorithm is imperative. Two such algorithms were explored: (i) the well-known Hungarian algorithm, and (ii) an order-preserving matching algorithm of our own devising. Tests were conducted using the DARPA Grand Challenge Problem attack simulator. These showed that both matching algorithms are effective in detecting attacks, but the Hungarian algorithm is inefficient, whereas the order-preserving one is very efficient and in fact runs in linear time.
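As an illustration of the Hungarian matching step, the sketch below assigns observed alerts to the alert slots of a stored case pattern by minimum total dissimilarity using SciPy's linear_sum_assignment; the alert encoding and cost function are hypothetical, not the paper's design.

```python
# Hungarian-style matching of observed alerts to a case pattern (toy costs).
import numpy as np
from scipy.optimize import linear_sum_assignment

def alert_cost(observed, pattern):
    # toy dissimilarity: 0 if alert types match, 1 otherwise, plus time gap
    type_cost = 0.0 if observed["type"] == pattern["type"] else 1.0
    return type_cost + 0.01 * abs(observed["t"] - pattern["t"])

observed = [{"type": "scan", "t": 0}, {"type": "login_fail", "t": 5},
            {"type": "exfil", "t": 9}]
case_pattern = [{"type": "scan", "t": 0}, {"type": "exfil", "t": 10},
                {"type": "login_fail", "t": 4}]

cost = np.array([[alert_cost(o, p) for p in case_pattern] for o in observed])
rows, cols = linear_sum_assignment(cost)        # Hungarian algorithm, O(n^3)
score = cost[rows, cols].sum()
print(list(zip(rows, cols)), score)  # a low score suggests the case's attack matches
```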
NASA Astrophysics Data System (ADS)
Li, Dong-xia; Ye, Qian-wen
An efficient out-of-band radiation suppression algorithm is required for a broadband aeronautical communication system so that it does not interfere with the operation of existing systems in the aviation L-band. After a brief introduction of the broadband aeronautical multi-carrier communication (B-AMC) system model, several sidelobe suppression techniques for orthogonal frequency division multiplexing (OFDM) systems are presented and analyzed in order to find a suitable algorithm for the B-AMC system. Simulation results show that raised-cosine windowing can effectively suppress the out-of-band radiation of the B-AMC system.
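A minimal numpy sketch of the raised-cosine windowing idea is given below: each OFDM symbol is cyclically extended and tapered with raised-cosine edges, and the out-of-band power fraction is compared against rectangular pulses. The subcarrier counts and roll-off length are illustrative, not B-AMC system parameters.

```python
# Raised-cosine edge windowing of OFDM symbols to reduce out-of-band power.
import numpy as np

rng = np.random.default_rng(1)
N, cp, roll, n_sym, n_active = 64, 16, 8, 200, 24

def rc_window(n_total, n_roll):
    w = np.ones(n_total)
    k = np.arange(1, n_roll + 1) / n_roll
    ramp = 0.5 * (1 - np.cos(np.pi * k))       # 0 -> 1 raised-cosine ramp
    w[:n_roll], w[-n_roll:] = ramp, ramp[::-1]
    return w

for windowed in (False, True):
    tx = []
    for _ in range(n_sym):
        syms = np.zeros(N, complex)
        idx = (np.arange(n_active) - n_active // 2) % N   # active band around DC
        bits = 2 * rng.integers(0, 2, (2, n_active)) - 1
        syms[idx] = bits[0] + 1j * bits[1]                # QPSK data
        t = np.fft.ifft(syms)
        t = np.concatenate([t[-(cp + roll):], t, t[:roll]])  # CP + cyclic suffix
        if windowed:
            t *= rc_window(len(t), roll)
        tx.append(t)
    sig = np.concatenate(tx)
    f = np.fft.fftfreq(sig.size)
    psd = np.abs(np.fft.fft(sig)) ** 2
    oob = psd[np.abs(f) > (n_active / 2 + 2) / N].sum() / psd.sum()
    print("windowed" if windowed else "rectangular", f"OOB fraction = {oob:.3e}")
```

Note that practical transmitters overlap-add the windowed symbol edges; simple concatenation is used here to keep the sketch short.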
Superiorization-based multi-energy CT image reconstruction
Yang, Q; Cong, W; Wang, G
2017-01-01
The recently developed superiorization approach is efficient and robust for solving various constrained optimization problems. This methodology can be applied to multi-energy CT image reconstruction with regularization in terms of the prior rank, intensity and sparsity model (PRISM). In this paper, we propose a superiorized version of the simultaneous algebraic reconstruction technique (SART) based on the PRISM model. We then compare the proposed superiorized algorithm with the Split-Bregman algorithm in numerical experiments. The results show that both the superiorized SART and the Split-Bregman algorithms generate good results with reduced noise and artefacts. PMID:28983142
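The superiorization pattern itself is compact. The sketch below interleaves a feasibility-seeking basic step (a Landweber update standing in for SART) with bounded, diminishing perturbations that reduce a secondary criterion (an L1 norm standing in for the PRISM regularizer); the operator and step sizes are illustrative assumptions, not the paper's formulation.

```python
# Schematic superiorization loop: basic step toward Ax = b, plus diminishing
# perturbations that decrease a secondary criterion (here ||x||_1).
import numpy as np

rng = np.random.default_rng(0)
A = rng.random((80, 50))
x_true = np.zeros(50); x_true[rng.choice(50, 5, replace=False)] = 1.0
b = A @ x_true

x = np.zeros(50)
L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
for k in range(300):
    g = np.sign(x)                       # subgradient of ||x||_1
    if np.any(g):
        x = x - (0.9 ** k) * g / np.linalg.norm(g)   # bounded, diminishing step
    x = x + (1.0 / L) * A.T @ (b - A @ x)            # feasibility-seeking step
print(np.linalg.norm(A @ x - b), np.linalg.norm(x - x_true))
```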
Li, Xiangyu; Xie, Nijie; Tian, Xinyue
2017-01-01
This paper proposes a scheduling and power management solution for an energy-harvesting heterogeneous multi-core WSN node SoC such that the system continues to operate perennially and uses the harvested energy efficiently. The solution consists of a task scheduling algorithm oriented to heterogeneous multi-core systems and a low-complexity dynamic workload scaling and configuration optimization algorithm suitable for lightweight platforms. Moreover, considering that the power consumption of most WSN applications is data dependent, we introduce a branch-handling mechanism into the solution as well. The experimental results show that the proposed algorithm can operate in real time on a lightweight embedded processor (MSP430), and that it enables a system to do more valuable work and to use more than 99.9% of the power budget. PMID:28208730
Wang, Wei; Song, Wei-Guo; Liu, Shi-Xing; Zhang, Yong-Ming; Zheng, Hong-Yang; Tian, Wei
2011-04-01
An improved cloud detection method combining K-means clustering and a multi-spectral threshold approach is described. On the basis of landmark spectrum analysis, MODIS data are first categorized into two major classes by the K-means method. The first class includes cloud, smoke, and snow, and the second class includes vegetation, water, and land. A multi-spectral threshold detection is then applied to the first class to eliminate interference such as smoke and snow. The method is tested with MODIS data at different times under different underlying surface conditions. Visual evaluation of the algorithm's performance showed that it can effectively detect small areas of cloud pixels and exclude interference from the underlying surface, providing a good foundation for a subsequent fire detection approach.
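A hedged sketch of the two-stage idea follows: K-means first splits pixels into a bright class (cloud/smoke/snow) and a dark class, then per-band thresholds prune non-cloud members of the bright class. The band layout and threshold values are placeholders, not the paper's calibrated MODIS values.

```python
# Two-stage cloud masking: K-means split, then multi-spectral thresholds.
import numpy as np
from sklearn.cluster import KMeans

h, w = 120, 160
rng = np.random.default_rng(2)
# synthetic 3-band "MODIS-like" cube: visible reflectance, NIR, brightness temp
bands = np.stack([rng.random((h, w)), rng.random((h, w)),
                  260 + 40 * rng.random((h, w))], axis=-1)
pixels = bands.reshape(-1, 3)

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(pixels)
# call the cluster with higher mean visible reflectance the "bright" class
bright = labels == np.argmax([pixels[labels == c, 0].mean() for c in (0, 1)])

# multi-spectral threshold refinement: clouds are bright AND cold
cloud = bright & (pixels[:, 0] > 0.4) & (pixels[:, 2] < 285.0)
cloud_mask = cloud.reshape(h, w)
print(cloud_mask.mean())   # fraction of pixels flagged as cloud
```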
Multi-sensor image fusion algorithm based on multi-objective particle swarm optimization algorithm
NASA Astrophysics Data System (ADS)
Xie, Xia-zhu; Xu, Ya-wei
2017-11-01
On the basis of the Dual-Tree Complex Wavelet Transform (DT-CWT), an approach based on multi-objective particle swarm optimization (MOPSO) is proposed to objectively choose the fusion weights of the low-frequency sub-bands. High- and low-frequency sub-bands are produced by the DT-CWT. The absolute value of the coefficients is adopted as the fusion rule for the high-frequency sub-bands, while the fusion weights of the low-frequency sub-bands are used as the particles in MOPSO, with spatial frequency and average gradient adopted as the two fitness functions. Experimental results show that the proposed approach performs better than average fusion and fusion methods based on local variance and local energy in brightness, clarity, and quantitative evaluation, the latter including entropy, spatial frequency, average gradient, and QAB/F.
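The two fitness functions are standard no-reference sharpness metrics; a small numpy version following the usual textbook definitions (the paper may normalize differently) is shown below.

```python
# Spatial frequency and average gradient, common fusion-quality metrics.
import numpy as np

def spatial_frequency(img):
    rf = np.sqrt(np.mean(np.diff(img, axis=1) ** 2))   # row frequency
    cf = np.sqrt(np.mean(np.diff(img, axis=0) ** 2))   # column frequency
    return np.sqrt(rf ** 2 + cf ** 2)

def average_gradient(img):
    gx, gy = np.gradient(img.astype(float))
    return np.mean(np.sqrt((gx ** 2 + gy ** 2) / 2.0))

img = np.random.default_rng(0).random((64, 64))
print(spatial_frequency(img), average_gradient(img))
```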
Demultiplexing based on frequency-domain joint decision MMA for MDM system
NASA Astrophysics Data System (ADS)
Caili, Gong; Li, Li; Guijun, Hu
2016-06-01
In this paper, we propose a demultiplexing method based on a frequency-domain joint decision multi-modulus algorithm (FD-JDMMA) for mode division multiplexing (MDM) systems. The performance of FD-JDMMA is compared with the frequency-domain multi-modulus algorithm (FD-MMA) and the frequency-domain least mean square (FD-LMS) algorithm. Simulation results show that FD-JDMMA outperforms FD-MMA in terms of BER and convergence speed for mQAM (m = 4, 16, and 64) formats. It is also demonstrated that FD-JDMMA achieves better BER performance and converges faster than FD-LMS for 16QAM and 64QAM. Furthermore, FD-JDMMA maintains a computational complexity similar to that of both equalization algorithms.
Scheduling Earth Observing Fleets Using Evolutionary Algorithms: Problem Description and Approach
NASA Technical Reports Server (NTRS)
Globus, Al; Crawford, James; Lohn, Jason; Morris, Robert; Clancy, Daniel (Technical Monitor)
2002-01-01
We describe work in progress concerning multi-instrument, multi-satellite scheduling. Most, although not all, Earth observing instruments currently in orbit are unique. In the relatively near future, however, we expect to see fleets of Earth observing spacecraft, many carrying nearly identical instruments. This presents a substantially new scheduling challenge. Inspired by successful commercial applications of evolutionary algorithms in scheduling domains, this paper presents work in progress regarding the use of evolutionary algorithms to solve a set of Earth observing related model problems. Both the model problems and the software are described. Since the larger problems will require substantial computation and evolutionary algorithms are embarrassingly parallel, we discuss our parallelization techniques using dedicated and cycle-scavenged workstations.
A versatile multi-objective FLUKA optimization using Genetic Algorithms
NASA Astrophysics Data System (ADS)
Vlachoudis, Vasilis; Antoniucci, Guido Arnau; Mathot, Serge; Kozlowska, Wioletta Sandra; Vretenar, Maurizio
2017-09-01
Monte Carlo simulation studies quite often require a multi-phase-space optimization, a complicated task that relies heavily on operator experience and judgment. Examples of such calculations include shielding calculations with stringent constraints on cost, residual dose, material properties, and available space, or, in the medical field, optimizing the dose delivered to a patient under hadron treatment. The present paper describes our implementation, inside flair [1], the advanced user interface of FLUKA [2,3], of a multi-objective genetic algorithm to facilitate the search for the optimum solution.
The LSST Data Mining Research Agenda
NASA Astrophysics Data System (ADS)
Borne, K.; Becla, J.; Davidson, I.; Szalay, A.; Tyson, J. A.
2008-12-01
We describe features of the LSST science database that are amenable to scientific data mining, object classification, outlier identification, anomaly detection, image quality assurance, and survey science validation. The data mining research agenda includes: scalability (at petabyte scales) of existing machine learning and data mining algorithms; development of grid-enabled parallel data mining algorithms; design of a robust system for brokering classifications from the LSST event pipeline (which may produce 10,000 or more event alerts per night); multi-resolution methods for exploration of petascale databases; indexing of multi-attribute multi-dimensional astronomical databases (beyond spatial indexing) for rapid querying of petabyte databases; and more.
NASA Astrophysics Data System (ADS)
Bonissone, Stefano R.
2001-11-01
There are many approaches to solving multi-objective optimization problems using evolutionary algorithms. We need to select methods for representing and aggregating preferences, as well as strategies for searching multi-dimensional objective spaces. First, we suggest the use of linguistic variables to represent preferences and the use of fuzzy rule systems to implement tradeoff aggregations. After a review of alternative EA methods for multi-objective optimization, we explore the use of multi-sexual genetic algorithms (MSGA). In using an MSGA, we need to modify certain parts of the GA, namely the selection and crossover operations. The selection operator groups solutions according to their gender tag to prepare them for crossover. The crossover is modified by appending a gender tag at the end of the chromosome; we use single- and double-point crossovers. The gender of the offspring is determined by the amount of genetic material provided by each parent: the parent that contributed the most to the creation of a specific offspring determines the gender that the offspring inherits. This is still a work in progress, and in the conclusion we examine many future extensions and experiments.
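A small sketch of the gender-tag mechanics described above follows: crossover pairs individuals of different genders, and the offspring inherits the gender of the parent that contributed more genes. The onemax objective and all parameters are toy assumptions, not the paper's experimental setup.

```python
# Toy multi-sexual GA: gender-tagged chromosomes, cross-gender crossover,
# offspring gender from the parent contributing more genetic material.
import numpy as np

rng = np.random.default_rng(3)
POP, GENES = 40, 16

def fitness(bits):
    return bits.sum()                            # onemax toy objective

pop = rng.integers(0, 2, (POP, GENES))
gender = rng.integers(0, 2, POP)                 # gender tag per chromosome

def tournament(group):
    i, j = rng.choice(group, 2)
    return i if fitness(pop[i]) >= fitness(pop[j]) else j

for _ in range(60):
    children, child_gender = [], []
    for _ in range(POP):
        a = tournament(np.flatnonzero(gender == 0))   # one parent per gender
        b = tournament(np.flatnonzero(gender == 1))
        cut = rng.integers(1, GENES)                  # single-point crossover
        children.append(np.concatenate([pop[a][:cut], pop[b][cut:]]))
        # offspring gender comes from the parent contributing more genes
        child_gender.append(gender[a] if cut >= GENES - cut else gender[b])
    pop, gender = np.array(children), np.array(child_gender)
    pop = np.where(rng.random(pop.shape) < 0.01, 1 - pop, pop)   # mutation
    if not (np.any(gender == 0) and np.any(gender == 1)):        # guard against
        gender = rng.integers(0, 2, POP)                         # gender extinction
print(max(fitness(p) for p in pop))
```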
Dynamic Task Allocation in Multi-Hop Multimedia Wireless Sensor Networks with Low Mobility
Jin, Yichao; Vural, Serdar; Gluhak, Alexander; Moessner, Klaus
2013-01-01
This paper presents a task allocation-oriented framework to enable efficient in-network processing and cost-effective multi-hop resource sharing for dynamic multi-hop multimedia wireless sensor networks with low node mobility, e.g., pedestrian speeds. The proposed system incorporates a fast task reallocation algorithm to quickly recover from possible network service disruptions, such as node or link failures. An evolutional self-learning mechanism based on a genetic algorithm continuously adapts the system parameters in order to meet the desired application delay requirements, while also achieving a sufficiently long network lifetime. Since the algorithm runtime incurs considerable time delay while updating task assignments, we introduce an adaptive window size to limit the delay periods and ensure an up-to-date solution based on node mobility patterns and device processing capabilities. To the best of our knowledge, this is the first study that yields multi-objective task allocation in a mobile multi-hop wireless environment under dynamic conditions. Simulations are performed in various settings, and the results show considerable performance improvement in extending network lifetime compared to heuristic mechanisms. Furthermore, the proposed framework provides noticeable reduction in the frequency of missing application deadlines. PMID:24135992
Medical image classification based on multi-scale non-negative sparse coding.
Zhang, Ruijie; Shen, Jian; Wei, Fushan; Li, Xiong; Sangaiah, Arun Kumar
2017-11-01
With the rapid development of modern medical imaging technology, medical image classification has become more and more important in medical diagnosis and clinical practice. Conventional medical image classification algorithms usually neglect the semantic gap between low-level features and high-level image semantics, which largely degrades classification performance. To solve this problem, we propose a multi-scale non-negative sparse coding based medical image classification algorithm. First, medical images are decomposed into multiple scale layers, so that diverse visual details can be extracted from different scale layers. Second, for each scale layer, a non-negative sparse coding model with Fisher discriminative analysis is constructed to obtain a discriminative sparse representation of the medical images. The resulting multi-scale non-negative sparse coding features are then combined into a multi-scale feature histogram as the final representation of a medical image. Finally, an SVM classifier is used to perform the medical image classification. The experimental results demonstrate that our proposed algorithm can effectively utilize multi-scale and contextual spatial information of medical images, reduce the semantic gap to a large degree and improve medical image classification performance. Copyright © 2017 Elsevier B.V. All rights reserved.
Algorithms and Architectures for Superconducting Quantum Computers
NASA Astrophysics Data System (ADS)
Blais, Alexandre
Since its formulation, information theory has been implicitly based on the laws of classical physics. Such a formulation is incomplete, however, since it does not account for quantum reality. Over the last twenty years, extending information theory to encompass purely quantum effects has attracted growing interest. Building a quantum information processing system, a quantum computer, nevertheless presents many challenges. This document addresses several aspects of these challenges. We begin by presenting algorithmic concepts such as the optimization of quantum computations and geometric quantum computation. We then turn to the design, and to various aspects of the use, of qubits based on Josephson junctions. In particular, a new superconducting qubit design is suggested. We also present an original approach to the interaction between qubits; this approach is very general, since it can be applied to different qubit designs. Finally, we consider the readout of superconducting flux qubits. The detector suggested here has the advantage that it can be decoupled from the qubit when no measurement is in progress.
Thermal Optimization of Injection Molds Built by Generative Processes
NASA Astrophysics Data System (ADS)
Boillat, E.; Glardon, R.; Paraschivescu, D.
2002-12-01
One of the most remarkable potential benefits of generative production processes, such as selective laser sintering, is their ability to build injection molds equipped directly with conformal cooling channels perfectly adapted to the cavities. For the injection molding industry to take full advantage of this new opportunity, mold makers must have access to simulation software capable of evaluating the gains in productivity and quality achievable with better-adapted cooling systems. Such software should also be able, where appropriate, to design the optimal cooling system in situations where the injection cavity is complex. Given the lack of available tools in this field, the goal of this article is to propose a simple model of injection molds. This model makes it possible to compare different cooling strategies and can be coupled with an optimization algorithm.
On the Importance of Periodic Orbits: Detection and Applications
NASA Astrophysics Data System (ADS)
Doyon, Bernard
The set of Unstable Periodic Orbits (UPOs) of a chaotic system is intimately related to its dynamical properties. From the (in principle infinite) set of UPOs hidden in phase space, one can obtain important dynamical quantities such as the Lyapunov exponents, the invariant measure, the topological entropy, and the fractal dimension. In quantum chaos (i.e., the study of quantum systems that have a chaotic counterpart in the classical limit), these same UPOs bridge the classical and quantum behavior of non-integrable systems. Locating these fundamental cycles is a complex problem. This thesis first addresses the detection of UPOs in chaotic systems. A comparative study of two recent algorithms is presented. We examine these two methods in depth in order to apply them to various systems, including dissipative and conservative continuous flows. An analysis of the convergence rate of the algorithms is also carried out to identify the strengths and limitations of these numerical schemes. The detection methods we use rely on a particular transformation of the initial dynamics. This trick inspired an alternative method for targeting and stabilizing an arbitrary periodic orbit in a chaotic system. Targeting is generally combined with control methods to quickly stabilize a given cycle, which in general requires knowing the position and stability of the cycle in question. The new targeting method we present does not require prior knowledge of the position and stability of the periodic orbits. It could serve as a complementary tool to current targeting and control methods.
2007-09-17
...been proposed; these include a combination of variable fidelity models, parallelisation strategies and hybridisation techniques (Coello, Veldhuizen et al. 2002). ... 4.4.2 HIERARCHICAL POPULATION TOPOLOGY: a hierarchical population topology, when integrated into ... hybrid parallel Multi-Objective Evolutionary Algorithms (pMOEA) (Cantu-Paz 2000; Veldhuizen, Zydallis et al. 2003); it uses a master-slave ...
A data-driven multi-model methodology with deep feature selection for short-term wind forecasting
DOE Office of Scientific and Technical Information (OSTI.GOV)
Feng, Cong; Cui, Mingjian; Hodge, Bri-Mathias
With growing wind penetration into power systems worldwide, improving wind power forecasting accuracy is becoming increasingly important to ensure continued economic and reliable power system operations. In this paper, a data-driven multi-model wind forecasting methodology is developed with a two-layer ensemble machine learning technique. The first layer is composed of multiple machine learning models that generate individual forecasts. A deep feature selection framework is developed to determine the most suitable inputs to the first-layer machine learning models. Then, a blending algorithm is applied in the second layer to create an ensemble of the forecasts produced by the first-layer models and generate both deterministic and probabilistic forecasts. This two-layer model seeks to utilize the statistically different characteristics of each machine learning algorithm. A number of machine learning algorithms are selected and compared in both layers. The developed multi-model wind forecasting methodology is compared to several benchmarks. The effectiveness of the proposed methodology is evaluated for 1-hour-ahead wind speed forecasting at seven locations of the Surface Radiation network. Numerical results show that, compared to single-algorithm models, the developed multi-model framework with the deep feature selection procedure improves forecasting accuracy by up to 30%.
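Conceptually, the two-layer scheme is a stacking ensemble. The following scikit-learn sketch mirrors it with two first-layer learners and a linear blender; the synthetic features stand in for the paper's meteorological inputs, and the deep feature selection step is omitted here.

```python
# Two-layer (stacking) ensemble: first-layer learners + second-layer blender.
import numpy as np
from sklearn.ensemble import (RandomForestRegressor, GradientBoostingRegressor,
                              StackingRegressor)
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
X = rng.random((2000, 6))                        # lagged wind speed, pressure, ...
y = 5 * X[:, 0] + 2 * np.sin(6 * X[:, 1]) + 0.3 * rng.standard_normal(2000)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

first_layer = [("rf", RandomForestRegressor(n_estimators=100, random_state=0)),
               ("gbm", GradientBoostingRegressor(random_state=0))]
model = StackingRegressor(estimators=first_layer, final_estimator=Ridge())
model.fit(X_tr, y_tr)                            # blender trained on OOF forecasts
print(mean_absolute_error(y_te, model.predict(X_te)))
```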
NASA Astrophysics Data System (ADS)
Wu, J.; Yang, Y.; Luo, Q.; Wu, J.
2012-12-01
This study presents a new hybrid multi-objective evolutionary algorithm, the niched Pareto tabu search combined with a genetic algorithm (NPTSGA), whereby the global search ability of the niched Pareto tabu search (NPTS) is improved by the diversification of candidate solutions arising from the evolving nondominated sorting genetic algorithm II (NSGA-II) population. The NPTSGA, coupled with the commonly used groundwater flow and transport codes MODFLOW and MT3DMS, is developed for multi-objective optimal design of groundwater remediation systems. The proposed methodology is then applied to a large-scale field groundwater remediation system for cleanup of a large trichloroethylene (TCE) plume at the Massachusetts Military Reservation (MMR) in Cape Cod, Massachusetts. Furthermore, a master-slave (MS) parallelization scheme based on the Message Passing Interface (MPI) is incorporated into the NPTSGA to carry out objective function evaluations in a distributed processor environment, which can greatly improve the efficiency of the NPTSGA in finding Pareto-optimal solutions for the real-world application. This study shows that, in comparison with the original NPTS and NSGA-II, the MS parallel NPTSGA balances the tradeoff between diversity and optimality of solutions during the search process and is an efficient and effective tool for optimizing the multi-objective design of groundwater remediation systems under complicated hydrogeologic conditions.
The design of multi-core DSP parallel model based on message passing and multi-level pipeline
NASA Astrophysics Data System (ADS)
Niu, Jingyu; Hu, Jian; He, Wenjing; Meng, Fanrong; Li, Chuanrong
2017-10-01
Currently, the design of embedded signal processing systems is often based on a specific application, an approach that is not conducive to the rapid development of signal processing technology. In this paper, a parallel processing model architecture based on a multi-core DSP platform is designed; it is mainly suitable for complex algorithms composed of different modules. This model combines the ideas of multi-level pipeline parallelism and message passing and draws on the advantages of the mainstream multi-core DSP models (the Master-Slave model and the Data Flow model) to achieve better performance. A three-dimensional image generation algorithm is used to validate the efficiency of the proposed model against the Master-Slave and Data Flow models.
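The combination of pipelining and message passing can be sketched with ordinary threads and queues standing in for DSP cores and inter-core messages; this is a toy analogue, not the paper's DSP implementation.

```python
# Toy multi-stage pipeline with message passing between stages.
import queue
import threading

def stage(fn, q_in, q_out):
    while True:
        item = q_in.get()
        if item is None:                  # poison pill terminates the stage
            if q_out:
                q_out.put(None)           # propagate shutdown downstream
            break
        if q_out:
            q_out.put(fn(item))           # pass result to the next stage
        else:
            print("result:", fn(item))    # final stage consumes the message

q1, q2, q3 = queue.Queue(), queue.Queue(), queue.Queue()
stages = [threading.Thread(target=stage, args=(lambda x: x + 1, q1, q2)),
          threading.Thread(target=stage, args=(lambda x: x * 2, q2, q3)),
          threading.Thread(target=stage, args=(lambda x: x - 3, q3, None))]
for t in stages:
    t.start()
for i in range(5):
    q1.put(i)                             # feed work into the first stage
q1.put(None)
for t in stages:
    t.join()
```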
Multi-sources data fusion framework for remote triage prioritization in telehealth.
Salman, O H; Rasid, M F A; Saripan, M I; Subramaniam, S K
2014-09-01
The healthcare industry is streamlining processes to offer more timely and effective services to all patients. Computerized software algorithms and smart devices can streamline the relation between users and doctors by providing more services within healthcare telemonitoring systems. This paper proposes a multi-source framework to support advanced healthcare applications. The proposed framework, named Multi-Sources Healthcare Architecture (MSHA), considers multiple sources: sensors (ECG, SpO2, and blood pressure) and text-based inputs from wireless and pervasive devices of a Wireless Body Area Network. The proposed framework is used to improve healthcare scalability and efficiency by enhancing the remote triaging and remote prioritization processes for patients. It is also used to provide intelligent services over telemonitoring healthcare systems by using a data fusion method and a prioritization technique. As the telemonitoring system consists of three tiers (sensors/sources, base station, and server), the simulation of the MSHA algorithm at the base station is demonstrated in this paper. Achieving a high level of accuracy in remotely prioritizing and triaging patients is our main goal, and the role of multi-source data fusion in telemonitoring healthcare services systems is demonstrated along the way. In addition, we discuss how the proposed framework can be applied in a healthcare telemonitoring scenario. Simulation results for different symptoms related to different emergency levels of chronic heart disease demonstrate the superiority of our algorithm over conventional algorithms in remotely classifying and prioritizing patients.
Multi-agent coordination algorithms for control of distributed energy resources in smart grids
NASA Astrophysics Data System (ADS)
Cortes, Andres
Sustainable energy is a top priority for researchers these days, since electricity and transportation are pillars of modern society. Integration of clean energy technologies such as wind, solar, and plug-in electric vehicles (PEVs) is a major engineering challenge in the operation and management of power systems. This is due to the uncertain nature of renewable energy technologies and the large amount of extra load that PEVs would add to the power grid. Given the networked structure of a power system, multi-agent control and optimization strategies are natural approaches to address the various problems of interest for the safe and reliable operation of the power grid. The distributed computation in multi-agent algorithms addresses three problems at the same time: i) it allows for the handling of problems with millions of variables that a single processor cannot compute, ii) it allows certain independence and privacy for electricity customers by not requiring any usage information, and iii) it is robust to localized failures in the communication network, being able to solve problems by simply neglecting the failing section of the system. We propose various algorithms to coordinate storage, generation, and demand resources in a power grid using multi-agent computation and decentralized decision making. First, we introduce a hierarchical vehicle-one-grid (V1G) algorithm for coordination of PEVs under usage constraints, where energy flows only from the grid into the batteries of PEVs. We then present a hierarchical vehicle-to-grid (V2G) algorithm for PEV coordination that takes into consideration line capacity constraints in the distribution grid, and where energy flows both ways, from the grid into the batteries and from the batteries to the grid. Next, we develop a greedy-like hierarchical algorithm for management of demand response events with on/off loads. Finally, we introduce distributed algorithms for the optimal control of distributed energy resources, i.e., generation and storage in a microgrid. The algorithms we present are provably correct and tested in simulation. Each algorithm is assumed to work on a particular network topology, and simulation studies are carried out to demonstrate its convergence properties to a desired solution.
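As a hedged single-step illustration of the V1G idea, the snippet below has an aggregator scale requested charging rates so that a feeder line limit is respected; the thesis's algorithms are hierarchical and iterative, and all numbers here are toy values.

```python
# One aggregation step: curtail PEV charging requests to a feeder line limit.
import numpy as np

requested = np.array([6.6, 3.3, 6.6, 1.4, 6.6])  # kW requested by each PEV
line_capacity = 15.0                             # kW feeder limit (toy value)

total = requested.sum()
scale = min(1.0, line_capacity / total)          # uniform curtailment factor
allocated = requested * scale                    # every PEV charges pro rata
print(allocated, allocated.sum())                # total respects the line limit
```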
PRF Ambiguity Determination for Radarsat ScanSAR System
NASA Technical Reports Server (NTRS)
Jin, Michael Y.
1998-01-01
PRF ambiguity is a potential problem for a spaceborne SAR operated at high frequencies. For a strip mode SAR, there have been several approaches to solving this problem. This paper, however, addresses PRF ambiguity determination algorithms suitable for a burst mode SAR system such as the Radarsat ScanSAR. The candidate algorithms include the wavelength diversity algorithm, the range look cross correlation algorithm, and the multi-PRF algorithm.
Development of an Aerosol Opacity Retrieval Algorithm for Use with Multi-Angle Land Surface Images
NASA Technical Reports Server (NTRS)
Diner, D.; Paradise, S.; Martonchik, J.
1994-01-01
In 1998, the Multi-angle Imaging SpectroRadiometer (MISR) will fly aboard the EOS-AM1 spacecraft. MISR will enable unique methods for retrieving the properties of atmospheric aerosols, by providing global imagery of the Earth at nine viewing angles in four visible and near-IR spectral bands. As part of the MISR algorithm development, theoretical methods of analyzing multi-angle, multi-spectral data are being tested using images acquired by the airborne Advanced Solid-State Array Spectroradiometer (ASAS). In this paper we derive a method to be used over land surfaces for retrieving the change in opacity between spectral bands, which can then be used in conjunction with an aerosol model to derive a bound on absolute opacity.
Evaluation of laser ablation crater relief by white light micro interferometer
NASA Astrophysics Data System (ADS)
Gurov, Igor; Volkov, Mikhail; Zhukova, Ekaterina; Ivanov, Nikita; Margaryants, Nikita; Potemkin, Andrey; Samokhvalov, Andrey; Shelygina, Svetlana
2017-06-01
A multi-view scanning method is suggested to assess complicated surface relief with a white-light interferometer. Peculiarities of the method are demonstrated on a special object in the form of a quadrangular pyramidal cavity, which is formed during micro-hardness measurement of materials using a hardness gauge. An algorithm for joint processing of the multi-view scanning results is developed that allows correct relief values to be recovered. Laser ablation craters were studied experimentally, and their relief was recovered using the developed method. It is shown that multi-view scanning reduces ambiguity when determining the local depth of the laser ablation crater micro-relief. Results of experimental studies of the multi-view scanning method and the data processing algorithm are presented.
A formation control strategy with coupling weights for the multi-robot system
NASA Astrophysics Data System (ADS)
Liang, Xudong; Wang, Siming; Li, Weijie
2017-12-01
The distributed formation problem for a multi-robot system with general linear dynamics and a directed communication topology is discussed. To prevent the multi-robot system from failing to maintain the desired formation in a complex communication environment, a distributed cooperative algorithm with coupling weights based on the Zipf distribution is designed. An asymptotic stability condition for the formation is given, and graph theory and Lyapunov theory are used to prove that, under this condition, the formation converges to the desired geometry and to the desired motion of the virtual leader. Nontrivial simulations are performed to validate the effectiveness of the distributed cooperative algorithm with coupling weights.
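The underlying control law is the standard consensus-style formation dynamics x_i' = -Σ_j w_ij ((x_i - d_i) - (x_j - d_j)), where d_i are desired offsets. The sketch below simulates it on a directed ring with Zipf-like coupling weights; the exact weight assignment is an assumption for illustration, not the paper's design.

```python
# Consensus-based formation control on a directed ring with Zipf-like weights.
import numpy as np

n, dt, steps = 4, 0.05, 400
rng = np.random.default_rng(0)
x = rng.random((n, 2)) * 10.0                          # initial positions
d = np.array([[0, 0], [2, 0], [2, 2], [0, 2]], float)  # square formation offsets
w = 1.0 / (np.arange(n) + 1)                           # Zipf-like weights ~ 1/k
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = w[i]                           # each robot hears one neighbor

for _ in range(steps):
    err = x - d                                        # formation error terms
    x += dt * (A @ err - A.sum(axis=1, keepdims=True) * err)

print(np.round((x - x[0]) - (d - d[0]), 2))            # relative errors -> ~0
```

The final relative positions match the desired offsets up to a common translation, which is exactly what consensus on the error variables guarantees.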
Direct data processing algorithm for multi-wavelength pyrometers (MWP).
Xing, Jian; Peng, Bo; Ma, Zhao; Guo, Xin; Dai, Li; Gu, Weihong; Song, Wenlong
2017-11-27
Data processing for a multi-wavelength pyrometer (MWP) is a difficult problem because the emissivity is unknown. The solutions developed so far generally assume a particular mathematical relation for emissivity versus wavelength or emissivity versus temperature; deviations between such hypotheses and the actual situation can seriously affect the inversion results. A direct data processing algorithm for MWP that does not need to assume a spectral emissivity model in advance is therefore the main aim of this study. Two new data processing algorithms for MWP, a Gradient Projection (GP) algorithm and an Internal Penalty Function (IPF) algorithm, neither of which requires a fixed emissivity model in advance, are proposed. The core novel idea is that the MWP data processing problem is transformed into a constrained optimization problem, which can then be solved by the GP or IPF algorithms. Comparison of simulation results for several typical spectral emissivity models shows that the IPF algorithm is superior to the GP algorithm in terms of accuracy and efficiency. Rocket nozzle temperature experiments show that the true temperature inversion results from the IPF algorithm agree well with the theoretical design temperature. The proposed combination of the IPF algorithm with MWP is thus expected to provide a direct data processing algorithm that clears the unknown-emissivity obstacle for MWP.
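The constrained-optimization reformulation can be illustrated as follows: the temperature and the per-channel emissivities are treated as free variables with bounds 0 < ε ≤ 1 and fitted to the measured radiances by penalized least squares. This L-BFGS-B sketch with a smoothness penalty is a stand-in in the spirit of the penalty-function idea, not the paper's exact GP/IPF formulation; wavelengths and emissivity values are illustrative.

```python
# Penalized constrained inversion of multi-wavelength pyrometer radiances.
import numpy as np
from scipy.optimize import minimize

C1, C2 = 1.191e8, 1.4388e4       # radiation constants: W*um^4/(m^2*sr), um*K

def planck(lam_um, T):
    # blackbody spectral radiance at wavelength lam_um (micrometres)
    return C1 / (lam_um ** 5 * np.expm1(C2 / (lam_um * T)))

lam = np.array([2.0, 2.5, 3.0, 3.5, 4.0])            # channel wavelengths, um
eps_true = np.array([0.80, 0.75, 0.70, 0.68, 0.66])  # unknown in practice
T_true = 1800.0
L_meas = eps_true * planck(lam, T_true)              # simulated measurements

def loss(p):
    T, eps = p[0], p[1:]
    resid = (eps * planck(lam, T) - L_meas) / L_meas  # normalized residuals
    smooth = np.sum(np.diff(eps) ** 2)                # penalty: smooth emissivity
    return np.sum(resid ** 2) + 1e-3 * smooth

p0 = np.concatenate([[1500.0], 0.9 * np.ones(lam.size)])
bounds = [(300.0, 3500.0)] + [(1e-3, 1.0)] * lam.size
res = minimize(loss, p0, method="L-BFGS-B", bounds=bounds)
print(res.x[0], res.x[1:])       # recovered temperature and emissivities
```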
High-performance sparse matrix-matrix products on Intel KNL and multicore architectures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nagasaka, Y; Matsuoka, S; Azad, A
Sparse matrix-matrix multiplication (SpGEMM) is a computational primitive that is widely used in areas ranging from traditional numerical applications to recent big data analysis and machine learning. Although many SpGEMM algorithms have been proposed, hardware-specific optimizations for multi- and many-core processors are lacking, and a detailed analysis of their performance under various use cases and matrices is not available. We first identify and mitigate multiple bottlenecks with memory management and thread scheduling on Intel Xeon Phi (Knights Landing or KNL). Specifically targeting multi- and many-core processors, we develop a hash-table-based algorithm and optimize a heap-based shared-memory SpGEMM algorithm. We examine their performance together with other publicly available codes. Unlike previous studies, our evaluation also includes use cases that are representative of real graph algorithms, such as multi-source breadth-first search and triangle counting. Our hash-table- and heap-based algorithms show significant speedups over existing libraries in the majority of cases, while different algorithms dominate the other scenarios depending on matrix size, sparsity, compression factor and operation type. We distill the in-depth evaluation results into a recipe for choosing the best SpGEMM algorithm for a target scenario. A critical finding is that hash-table-based SpGEMM gets a significant performance boost if the nonzeros are not required to be sorted within each row of the output matrix.
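The serial skeleton of the hash-table algorithm class is Gustavson's row-wise SpGEMM with a hash-map accumulator; a compact Python version (per-row dicts standing in for the paper's optimized hash tables) is sketched below.

```python
# Gustavson row-wise SpGEMM with a hash-map accumulator per output row.
import numpy as np
from scipy.sparse import random as sprandom, csr_matrix

def spgemm_hash(A, B):
    A, B = A.tocsr(), B.tocsr()
    indptr, indices, data = [0], [], []
    for i in range(A.shape[0]):
        acc = {}                                      # hash accumulator, row i
        for k_idx in range(A.indptr[i], A.indptr[i + 1]):
            k, a_ik = A.indices[k_idx], A.data[k_idx]
            for j_idx in range(B.indptr[k], B.indptr[k + 1]):
                j = B.indices[j_idx]
                acc[j] = acc.get(j, 0.0) + a_ik * B.data[j_idx]
        cols = sorted(acc)                            # sorted output (optional,
        indices.extend(cols)                          # see the finding above)
        data.extend(acc[c] for c in cols)
        indptr.append(len(indices))
    return csr_matrix((data, indices, indptr), shape=(A.shape[0], B.shape[1]))

A = sprandom(200, 200, density=0.02, format="csr", random_state=0)
B = sprandom(200, 200, density=0.02, format="csr", random_state=1)
assert abs(spgemm_hash(A, B) - A @ B).max() < 1e-12   # matches SciPy's product
```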
Takeshima, T; Takahashi, T; Yamashita, J; Okada, Y; Watanabe, S
2018-05-25
Multi-emitter fitting algorithms have been developed to improve the temporal resolution of single-molecule switching nanoscopy, but the molecular density range they can analyse is narrow and the computation required is intensive, significantly limiting their practical application. Here, we propose a computationally fast method, wedged template matching (WTM), an algorithm that uses a template matching technique to localise molecules at any overlapping molecular density from sparse to ultrahigh density with subdiffraction resolution. WTM achieves the localization of overlapping molecules at densities up to 600 molecules μm⁻² with a high detection sensitivity and fast computational speed. WTM also shows localization precision comparable with that of DAOSTORM (an algorithm for high-density super-resolution microscopy) at densities up to 20 molecules μm⁻², and better than DAOSTORM at higher molecular densities. The application of WTM to a high-density biological sample image demonstrated that it resolved protein dynamics from live cell images with subdiffraction resolution and a temporal resolution of several hundred milliseconds or less through a significant reduction in the number of camera images required for a high-density reconstruction. The WTM algorithm is a computationally fast multi-emitter fitting algorithm that can analyse a wide range of molecular densities. The algorithm is available through the website. https://doi.org/10.17632/bf3z6xpn5j.1. © 2018 The Authors. Journal of Microscopy published by John Wiley & Sons Ltd on behalf of Royal Microscopical Society.
Zhang, Jia-Hua; Li, Xin; Yao, Feng-Mei; Li, Xian-Hua
2009-08-01
Land surface temperature (LST) is an important parameter in the study of the exchange of matter and energy between the land surface and the air in land surface physical processes at regional and global scales. Many applications of satellite remotely sensed data require accurate and quantitative LST, such as monitoring of drought, high temperature, forest fire, earthquake, hydrology and vegetation, and models of global circulation and regional climate also need LST as an input parameter. Therefore, the retrieval of LST using remote sensing technology has become one of the key tasks in quantitative remote sensing research. Within the spectrum, the thermal infrared (TIR, 3-15 μm) and microwave (1 mm-1 m) bands are important for retrieval of the LST. In the present paper, firstly, several methods for estimating the LST on the basis of thermal infrared (TIR) remote sensing are reviewed, i.e., LST measured with a ground-based infrared thermometer, LST retrieval from the mono-window algorithm (MWA), the single-channel algorithm (SCA), split-window techniques (SWT) and the multi-channel algorithm (MCA), single-channel multi-angle algorithms and multi-channel multi-angle algorithms, and the retrieval of land surface component temperatures from thermal infrared satellite observations. Secondly, the status of research on land surface emissivity (ε) is presented. Thirdly, in order to retrieve LST under all weather conditions, microwave remotely sensed data have recently been developed as an alternative to thermal infrared data, and the LST retrieval method from passive microwave remotely sensed data is also introduced. Finally, the main merits and shortcomings of the different kinds of LST retrieval methods are discussed.
Jaiswal, Astha; Godinez, William J; Eils, Roland; Lehmann, Maik Jorg; Rohr, Karl
2015-11-01
Automatic fluorescent particle tracking is an essential task to study the dynamics of a large number of biological structures at a sub-cellular level. We have developed a probabilistic particle tracking approach based on multi-scale detection and two-step multi-frame association. The multi-scale detection scheme allows coping with particles in close proximity. For finding associations, we have developed a two-step multi-frame algorithm, which is based on a temporally semiglobal formulation as well as spatially local and global optimization. In the first step, reliable associations are determined for each particle individually in local neighborhoods. In the second step, the global spatial information over multiple frames is exploited jointly to determine optimal associations. The multi-scale detection scheme and the multi-frame association finding algorithm have been combined with a probabilistic tracking approach based on the Kalman filter. We have successfully applied our probabilistic tracking approach to synthetic as well as real microscopy image sequences of virus particles and quantified the performance. We found that the proposed approach outperforms previous approaches.
An evaluation of an educational intervention in psychology of injury for athletic training students.
Stiller-Ostrowski, Jennifer L; Gould, Daniel R; Covassin, Tracey
2009-01-01
"Psychosocial Intervention and Referral" is 1 of the 12 content areas in athletic training education programs, but knowledge gained and skill usage after an educational intervention in this area have never been evaluated. To evaluate the effectiveness of an educational intervention in increasing psychology-of-injury knowledge and skill usage in athletic training students (ATSs). Observational study. An accredited athletic training education program at a large Midwestern university. Participants included 26 ATSs divided into 2 groups: intervention group (4 men, 7 women; age = 21.4 +/- 0.67 years, grade point average = 3.37) and control group (7 men, 8 women; age = 21.5 +/- 3.8 years, grade point average = 3.27). All participants completed the Applied Sport Psychology for Athletic Trainers educational intervention. Psychology-of-injury knowledge tests and skill usage surveys were administered to all participants at the following intervals: baseline, intervention week 3, and intervention week 6. Retention tests were administered to intervention-group participants at 7 and 14 weeks after intervention. Analysis techniques included mixed-model analysis of variance (ANOVA) and repeated-measures ANOVA. The Applied Sport Psychology for Athletic Trainers educational intervention effectively increased psychology-of-injury knowledge (29-point increase from baseline to intervention week 6; F(2,23) = 29.358, P < .001, eta(p) (2) = 0.719) and skill usage (50-point increase from baseline to intervention week 6; F(2,23) = 5.999, P = .008, eta(p) (2) = 0.343) in undergraduate ATSs. These increases were maintained at the 7-week and 14-week retention testing (P < .001 for both). This first attempt at evaluating an educational intervention designed to improve ATSs' knowledge and skill usage revealed that the intervention was effective. Although both knowledge and skill usage scores decreased by the end of the retention period, the scores were still higher than baseline scores, indicating that the intervention was effective.
2008-06-01
...capacity planning; • electrical generation capacity planning; • machine scheduling; • freight scheduling; • dairy farm expansion planning ... Support Systems and Multi-Criteria Decision Analysis Products. A.2.11.2.2.1 ELECTRE IS: ELECTRE IS is a generalization of ELECTRE I. ... criteria, ELECTRE IS supports the user in the process of selecting one alternative or a subset of alternatives. The method consists of two parts.
Genetic algorithm for investigating flight MH370 in Indian Ocean using remotely sensed data
NASA Astrophysics Data System (ADS)
Marghany, Maged; Mansor, Shattri; Shariff, Abdul Rashid Bin Mohamed
2016-06-01
This study utilized a genetic algorithm (GA) for automatic detection and simulation of the trajectory movements of flight MH370 debris. In doing so, one and a half years of data from the Ocean Surface Topography Mission (OSTM) on the Jason-2 satellite were used to simulate the pattern of flight MH370 debris movements across the southern Indian Ocean. Further, a multi-objective evolutionary algorithm was also used to discriminate uncertainty in the imaging and detection of flight MH370 debris. The study shows that the ocean surface current speed is 0.5 m/s. These current patterns have developed a large anticlockwise gyre over a water depth of 8,000 m. The multi-objective evolutionary algorithm suggested that the objects present in the satellite data are not flight MH370 debris. In addition, the multi-objective evolutionary algorithm indicated the difficulty of acquiring the exact location of flight MH370 due to the complicated hydrodynamic movements across the southern Indian Ocean.
Tang, Wenming; Liu, Guixiong; Li, Yuzhong; Tan, Daji
2017-01-01
High data transmission efficiency is a key requirement for an ultrasonic phased array with multi-group ultrasonic sensors. Here, a novel FIFO scheduling algorithm is proposed, and data transmission efficiency is improved in hardware. The algorithm uses FIFOs as caches for the ultrasonic scanning data obtained from the sensors, with the output data shared across the available bandwidth; on this basis, an optimal length ratio of all the FIFOs is achieved, allowing read operations to switch among the FIFOs without waiting for time slots. The algorithm thus enhances the utilization of the read bandwidth resources and achieves higher efficiency than traditional scheduling algorithms. Its reliability and validity are substantiated by an implementation in field programmable gate array (FPGA) technology, and the bandwidth utilization ratio and the real-time performance of the ultrasonic phased array are enhanced. PMID:29035345
Rainfall Estimation over the Nile Basin using Multi-Spectral, Multi-Instrument Satellite Techniques
NASA Astrophysics Data System (ADS)
Habib, E.; Kuligowski, R.; Sazib, N.; Elshamy, M.; Amin, D.; Ahmed, M.
2012-04-01
Management of Egypt's Aswan High Dam is critical not only for flood control on the Nile but also for ensuring adequate water supplies for most of Egypt, since rainfall is scarce over the vast majority of its land area. However, reservoir inflow is driven by rainfall over Sudan, Ethiopia, Uganda, and several other countries from which routine rain gauge data are sparse. Satellite-derived estimates of rainfall offer a much more detailed and timely set of data to form a basis for decisions on the operation of the dam. A single-channel infrared (IR) algorithm is currently in operational use at the Egyptian Nile Forecast Center (NFC). In this study, the authors report on the adaptation of a multi-spectral, multi-instrument satellite rainfall estimation algorithm (Self-Calibrating Multivariate Precipitation Retrieval, SCaMPR) for operational application by the NFC over the Nile Basin. The algorithm uses a set of rainfall predictors derived from multi-spectral infrared cloud-top observations and self-calibrates them against a set of predictands from the more accurate, but less frequent, microwave (MW) rain rate estimates. For application over the Nile Basin, the SCaMPR algorithm uses multiple satellite IR channels that have recently become available to the NFC from the Spinning Enhanced Visible and Infrared Imager (SEVIRI). Microwave rain rates are acquired from multiple sources such as the Special Sensor Microwave/Imager (SSM/I), the Special Sensor Microwave Imager and Sounder (SSMIS), the Advanced Microwave Sounding Unit (AMSU), the Advanced Microwave Scanning Radiometer on EOS (AMSR-E), and the Tropical Rainfall Measuring Mission (TRMM) Microwave Imager (TMI). The algorithm has two main steps: rain/no-rain separation using discriminant analysis, and rain rate estimation using stepwise linear regression. We test two modes of algorithm calibration: real-time calibration with continuous updates of coefficients as new MW rain rates arrive, and calibration using static coefficients derived from IR-MW data from past observations. We also compare the SCaMPR algorithm to other global-scale satellite rainfall algorithms, such as the 'Tropical Rainfall Measuring Mission (TRMM) and other sources' (TRMM-3B42) product and the National Oceanographic and Atmospheric Administration Climate Prediction Center (NOAA-CPC) CMORPH product. The algorithm has several potential future applications, such as improving the performance accuracy of hydrologic forecasting models over the Nile Basin, and utilizing the enhanced rainfall datasets and better-calibrated hydrologic models to assess the impacts of climate change on the region's water availability using global circulation models and regional climate models.
Determination of Cutting Speed in Machining Using Neural Networks
NASA Astrophysics Data System (ADS)
Amor, Noureddine; Noureddine, Ali; Kherfane, Riad Lakhdar
2018-02-01
In machining by chip removal, it is necessary to know elements such as the geometry to be obtained, the material to be machined, the type of operation, the machine tool, the cutting tool, the depth of cut, the feed, and the cutting speed. The latter three quantifiable elements are determined using tables, charts, dedicated software, or CAD/CAM systems, which offer a wide range of choices but lack transparency and flexibility. The contribution of this article is to apply artificial intelligence techniques based on artificial neural networks (ANNs) to the development of a decision system for choosing cutting parameters. To model the cutting speed, we use an ANN with a backpropagation algorithm. Experimental values taken from a source chart are used to build and train the ANN to estimate the cutting speed, using a number of influential parameters as inputs. The validity of the results obtained shows that this method can be applied successfully and that its use in the machining domain can help optimize cutting conditions through a more precise and faster choice of cutting speed.
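A hedged scikit-learn sketch of the article's approach follows: a small backpropagation network regresses cutting speed from machining parameters. The synthetic training table and the Taylor-like relation generating it stand in for the handbook ("abaque") values used by the authors.

```python
# Backpropagation network regressing cutting speed from machining parameters.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
# columns: material hardness (HB), depth of cut (mm), feed (mm/rev)
X = np.column_stack([rng.uniform(100, 350, 400),
                     rng.uniform(0.5, 5.0, 400),
                     rng.uniform(0.05, 0.6, 400)])
# toy Taylor-like relation standing in for handbook cutting speeds (m/min)
y = 4000.0 / (X[:, 0] ** 0.5 * X[:, 1] ** 0.2 * X[:, 2] ** 0.3)

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=5000,
                                   random_state=0))
model.fit(X, y)
print(model.predict([[200.0, 2.0, 0.2]]))   # predicted cutting speed, m/min
```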
NASA Astrophysics Data System (ADS)
Tolson, B.; Matott, L. S.; Gaffoor, T. A.; Asadzadeh, M.; Shafii, M.; Pomorski, P.; Xu, X.; Jahanpour, M.; Razavi, S.; Haghnegahdar, A.; Craig, J. R.
2015-12-01
We introduce asynchronous parallel implementations of the Dynamically Dimensioned Search (DDS) family of algorithms, including DDS, discrete DDS, PA-DDS, and DDS-AU. These parallel algorithms are unique among parallel optimization algorithms in the water resources field in that parallel DDS is asynchronous and does not require an entire population (set of candidate solutions) to be evaluated before generating and sending a new candidate solution for evaluation. One key advance of this study is the development of the first parallel PA-DDS multi-objective optimization algorithm. The other is enhancing the computational efficiency of solving optimization problems (such as model calibration) by combining a parallel optimization algorithm with the deterministic model pre-emption concept; these two efficiency techniques can only be combined because of the asynchronous nature of parallel DDS. Model pre-emption terminates simulation model runs early, prior to completely simulating the model calibration period for example, when intermediate results indicate the candidate solution is so poor that it will definitely have no influence on the generation of further candidate solutions. The computational savings of deterministic model pre-emption available in serial implementations of population-based algorithms (e.g., PSO) disappear in synchronous parallel implementations of these algorithms. In addition to the key advances above, we implement the algorithms across a range of computation platforms (Windows and Unix-based operating systems, from multi-core desktops to a supercomputer system) and package them for future modellers within a model-independent calibration software package called Ostrich, as well as in MATLAB versions. Results across multiple platforms and multiple case studies (from 4 to 64 processors) demonstrate the vast improvement over serial DDS-based algorithms and highlight the important role model pre-emption plays in the performance of parallel, pre-emptable DDS algorithms. Case studies include single- and multiple-objective optimization problems in water resources model calibration, and in many cases linear or near-linear speedups are observed.
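For reference, a compact serial DDS sketch (after Tolson and Shoemaker's published description) is given below; each iteration perturbs a randomly chosen, progressively smaller subset of decision variables around the current best. Bound handling by clipping is a simplification of the original reflection rule, and the asynchronous parallel variants discussed above dispatch such candidate evaluations to workers without waiting for a full population.

```python
# Serial Dynamically Dimensioned Search (DDS) with a shrinking perturbation set.
import numpy as np

def dds(f, lo, hi, max_iter=1000, r=0.2, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    x_best = lo + rng.random(lo.size) * (hi - lo)
    f_best = f(x_best)
    for i in range(1, max_iter):
        p = 1.0 - np.log(i) / np.log(max_iter)     # shrinking selection prob.
        sel = rng.random(lo.size) < p
        if not sel.any():
            sel[rng.integers(lo.size)] = True      # perturb at least one dim
        x = x_best.copy()
        x[sel] += r * (hi - lo)[sel] * rng.standard_normal(sel.sum())
        x = np.clip(x, lo, hi)                     # simplified bound handling
        fx = f(x)
        if fx <= f_best:
            x_best, f_best = x, fx
    return x_best, f_best

x, v = dds(lambda z: np.sum(z ** 2), [-5] * 10, [5] * 10)
print(v)
```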
Parallelization strategies for continuum-generalized method of moments on the multi-thread systems
NASA Astrophysics Data System (ADS)
Bustamam, A.; Handhika, T.; Ernastuti, Kerami, D.
2017-07-01
The Continuum-Generalized Method of Moments (C-GMM) addresses the shortfall of the Generalized Method of Moments (GMM), which is not as efficient as the Maximum Likelihood estimator, by using a continuum of moment conditions in a GMM framework. However, this computation takes a very long time because the regularization parameter must be optimized. Unfortunately, these calculations are processed sequentially, whereas in fact all modern computers are now supported by hierarchical memory systems and hyperthreading technology that allow for parallel computing. This paper aims to speed up the calculation process of C-GMM by designing a parallel algorithm for C-GMM on multi-thread systems. First, parallel regions are detected in the original C-GMM algorithm. There are two parallel regions in the original C-GMM algorithm that contribute significantly to the reduction of computational time: the outer loop and the inner loop. Furthermore, this parallel algorithm is implemented with the standard shared-memory application programming interface, Open Multi-Processing (OpenMP). The experiment shows that outer-loop parallelization is the best strategy for any number of observations.
Super-resolution for imagery from integrated microgrid polarimeters.
Hardie, Russell C; LeMaster, Daniel A; Ratliff, Bradley M
2011-07-04
Imagery from microgrid polarimeters is obtained by using a mosaic of pixel-wise micropolarizers on a focal plane array (FPA). Each distinct polarization image is obtained by subsampling the full FPA image. Thus, the effective pixel pitch for each polarization channel is increased and the sampling frequency is decreased. As a result, aliasing artifacts from such undersampling can corrupt the true polarization content of the scene. Here we present the first multi-channel multi-frame super-resolution (SR) algorithms designed specifically for the problem of image restoration in microgrid polarization imagers. These SR algorithms can be used to address aliasing and other degradations, without sacrificing field of view or compromising optical resolution with an anti-aliasing filter. The new SR methods are designed to exploit correlation between the polarimetric channels. One of the new SR algorithms uses a form of regularized least squares and has an iterative solution. The other is based on the faster adaptive Wiener filter SR method. We demonstrate that the new multi-channel SR algorithms are capable of providing significant enhancement of polarimetric imagery and that they outperform their independent channel counterparts.
Multi-strategy coevolving aging particle optimization.
Iacca, Giovanni; Caraffini, Fabio; Neri, Ferrante
2014-02-01
We propose Multi-Strategy Coevolving Aging Particles (MS-CAP), a novel population-based algorithm for black-box optimization. In a memetic fashion, MS-CAP combines two components with complementary algorithm logics. In the first stage, each particle is perturbed independently along each dimension with a progressively shrinking (decaying) radius, and attracted towards the current best solution with an increasing force. In the second phase, the particles are mutated and recombined according to a multi-strategy approach in the fashion of the ensemble of mutation strategies in Differential Evolution. The proposed algorithm is tested, at different dimensionalities, on two complete black-box optimization benchmarks proposed at the Congress on Evolutionary Computation 2010 and 2013. To demonstrate the applicability of the approach, we also test MS-CAP to train a Feedforward Neural Network modeling the kinematics of an 8-link robot manipulator. The numerical results show that MS-CAP, for the setting considered in this study, tends to outperform the state-of-the-art optimization algorithms on a large set of problems, thus resulting in a robust and versatile optimizer.
Multi-linear sparse reconstruction for SAR imaging based on higher-order SVD
NASA Astrophysics Data System (ADS)
Gao, Yu-Fei; Gui, Guan; Cong, Xun-Chao; Yang, Yue; Zou, Yan-Bin; Wan, Qun
2017-12-01
This paper focuses on the spotlight synthetic aperture radar (SAR) imaging for point scattering targets based on tensor modeling. In a real-world scenario, scatterers usually distribute in the block sparse pattern. Such a distribution feature has been scarcely utilized by the previous studies of SAR imaging. Our work takes advantage of this structure property of the target scene, constructing a multi-linear sparse reconstruction algorithm for SAR imaging. The multi-linear block sparsity is introduced into higher-order singular value decomposition (SVD) with a dictionary constructing procedure by this research. The simulation experiments for ideal point targets show the robustness of the proposed algorithm to the noise and sidelobe disturbance which always influence the imaging quality of the conventional methods. The computational resources requirement is further investigated in this paper. As a consequence of the algorithm complexity analysis, the present method possesses the superiority on resource consumption compared with the classic matching pursuit method. The imaging implementations for practical measured data also demonstrate the effectiveness of the algorithm developed in this paper.
A Three-Dimensional Target Depth-Resolution Method with a Single-Vector Sensor
Zhao, Anbang; Bi, Xuejie; Hui, Juan; Zeng, Caigao; Ma, Lin
2018-01-01
This paper mainly studies and verifies the target number category-resolution method in multi-target cases and the target depth-resolution method of aerial targets. Firstly, target depth resolution is performed by using the sign distribution of the reactive component of the vertical complex acoustic intensity; the target category and the number resolution in multi-target cases is realized with a combination of the bearing-time recording information; and the corresponding simulation verification is carried out. The algorithm proposed in this paper can distinguish between the single-target multi-line spectrum case and the multi-target multi-line spectrum case. This paper presents an improved azimuth-estimation method for multi-target cases, which makes the estimation results more accurate. Using the Monte Carlo simulation, the feasibility of the proposed target number and category-resolution algorithm in multi-target cases is verified. In addition, by studying the field characteristics of the aerial and surface targets, the simulation results verify that there is only amplitude difference between the aerial target field and the surface target field under the same environmental parameters, and an aerial target can be treated as a special case of a surface target; the aerial target category resolution can then be realized based on the sign distribution of the reactive component of the vertical acoustic intensity so as to realize three-dimensional target depth resolution. By processing data from a sea experiment, the feasibility of the proposed aerial target three-dimensional depth-resolution algorithm is verified. PMID:29649173
A Three-Dimensional Target Depth-Resolution Method with a Single-Vector Sensor.
Zhao, Anbang; Bi, Xuejie; Hui, Juan; Zeng, Caigao; Ma, Lin
2018-04-12
This paper mainly studies and verifies the target number category-resolution method in multi-target cases and the target depth-resolution method of aerial targets. Firstly, target depth resolution is performed by using the sign distribution of the reactive component of the vertical complex acoustic intensity; the target category and the number resolution in multi-target cases is realized with a combination of the bearing-time recording information; and the corresponding simulation verification is carried out. The algorithm proposed in this paper can distinguish between the single-target multi-line spectrum case and the multi-target multi-line spectrum case. This paper presents an improved azimuth-estimation method for multi-target cases, which makes the estimation results more accurate. Using the Monte Carlo simulation, the feasibility of the proposed target number and category-resolution algorithm in multi-target cases is verified. In addition, by studying the field characteristics of the aerial and surface targets, the simulation results verify that there is only amplitude difference between the aerial target field and the surface target field under the same environmental parameters, and an aerial target can be treated as a special case of a surface target; the aerial target category resolution can then be realized based on the sign distribution of the reactive component of the vertical acoustic intensity so as to realize three-dimensional target depth resolution. By processing data from a sea experiment, the feasibility of the proposed aerial target three-dimensional depth-resolution algorithm is verified.
NASA Astrophysics Data System (ADS)
Tang, Jian; Qiao, Junfei; Wu, ZhiWei; Chai, Tianyou; Zhang, Jian; Yu, Wen
2018-01-01
Frequency spectral data of mechanical vibration and acoustic signals relate to difficult-to-measure production quality and quantity parameters of complex industrial processes. A selective ensemble (SEN) algorithm can be used to build a soft sensor model of these process parameters by fusing valued information selectively from different perspectives. However, a combination of several optimized ensemble sub-models with SEN cannot guarantee the best prediction model. In this study, we use several techniques to construct mechanical vibration and acoustic frequency spectra of a data-driven industrial process parameter model based on selective fusion multi-condition samples and multi-source features. Multi-layer SEN (MLSEN) strategy is used to simulate the domain expert cognitive process. Genetic algorithm and kernel partial least squares are used to construct the inside-layer SEN sub-model based on each mechanical vibration and acoustic frequency spectral feature subset. Branch-and-bound and adaptive weighted fusion algorithms are integrated to select and combine outputs of the inside-layer SEN sub-models. Then, the outside-layer SEN is constructed. Thus, "sub-sampling training examples"-based and "manipulating input features"-based ensemble construction methods are integrated, thereby realizing the selective information fusion process based on multi-condition history samples and multi-source input features. This novel approach is applied to a laboratory-scale ball mill grinding process. A comparison with other methods indicates that the proposed MLSEN approach effectively models mechanical vibration and acoustic signals.
Ouyang, Qi; Lu, Wenxi; Hou, Zeyu; Zhang, Yu; Li, Shuai; Luo, Jiannan
2017-05-01
In this paper, a multi-algorithm genetically adaptive multi-objective (AMALGAM) method is proposed as a multi-objective optimization solver. It was implemented in the multi-objective optimization of a groundwater remediation design at sites contaminated by dense non-aqueous phase liquids. In this study, there were two objectives: minimization of the total remediation cost, and minimization of the remediation time. A non-dominated sorting genetic algorithm II (NSGA-II) was adopted to compare with the proposed method. For efficiency, the time-consuming surfactant-enhanced aquifer remediation simulation model was replaced by a surrogate model constructed by a multi-gene genetic programming (MGGP) technique. Similarly, two other surrogate modeling methods-support vector regression (SVR) and Kriging (KRG)-were employed to make comparisons with MGGP. In addition, the surrogate-modeling uncertainty was incorporated in the optimization model by chance-constrained programming (CCP). The results showed that, for the problem considered in this study, (1) the solutions obtained by AMALGAM incurred less remediation cost and required less time than those of NSGA-II, indicating that AMALGAM outperformed NSGA-II. It was additionally shown that (2) the MGGP surrogate model was more accurate than SVR and KRG; and (3) the remediation cost and time increased with the confidence level, which can enable decision makers to make a suitable choice by considering the given budget, remediation time, and reliability. Copyright © 2017 Elsevier B.V. All rights reserved.
Multi Dimensional Honey Bee Foraging Algorithm Based on Optimal Energy Consumption
NASA Astrophysics Data System (ADS)
Saritha, R.; Vinod Chandra, S. S.
2017-10-01
In this paper a new nature inspired algorithm is proposed based on natural foraging behavior of multi-dimensional honey bee colonies. This method handles issues that arise when food is shared from multiple sources by multiple swarms at multiple destinations. The self organizing nature of natural honey bee swarms in multiple colonies is based on the principle of energy consumption. Swarms of multiple colonies select a food source to optimally fulfill the requirements of its colonies. This is based on the energy requirement for transporting food between a source and destination. Minimum use of energy leads to maximizing profit in each colony. The mathematical model proposed here is based on this principle. This has been successfully evaluated by applying it on multi-objective transportation problem for optimizing cost and time. The algorithm optimizes the needs at each destination in linear time.
Li, Ming; Miao, Chunyan; Leung, Cyril
2015-01-01
Coverage control is one of the most fundamental issues in directional sensor networks. In this paper, the coverage optimization problem in a directional sensor network is formulated as a multi-objective optimization problem. It takes into account the coverage rate of the network, the number of working sensor nodes and the connectivity of the network. The coverage problem considered in this paper is characterized by the geographical irregularity of the sensed events and heterogeneity of the sensor nodes in terms of sensing radius, field of angle and communication radius. To solve this multi-objective problem, we introduce a learning automata-based coral reef algorithm for adaptive parameter selection and use a novel Tchebycheff decomposition method to decompose the multi-objective problem into a single-objective problem. Simulation results show the consistent superiority of the proposed algorithm over alternative approaches. PMID:26690162
Li, Ming; Miao, Chunyan; Leung, Cyril
2015-12-04
Coverage control is one of the most fundamental issues in directional sensor networks. In this paper, the coverage optimization problem in a directional sensor network is formulated as a multi-objective optimization problem. It takes into account the coverage rate of the network, the number of working sensor nodes and the connectivity of the network. The coverage problem considered in this paper is characterized by the geographical irregularity of the sensed events and heterogeneity of the sensor nodes in terms of sensing radius, field of angle and communication radius. To solve this multi-objective problem, we introduce a learning automata-based coral reef algorithm for adaptive parameter selection and use a novel Tchebycheff decomposition method to decompose the multi-objective problem into a single-objective problem. Simulation results show the consistent superiority of the proposed algorithm over alternative approaches.
Multi-Algorithm Particle Simulations with Spatiocyte.
Arjunan, Satya N V; Takahashi, Koichi
2017-01-01
As quantitative biologists get more measurements of spatially regulated systems such as cell division and polarization, simulation of reaction and diffusion of proteins using the data is becoming increasingly relevant to uncover the mechanisms underlying the systems. Spatiocyte is a lattice-based stochastic particle simulator for biochemical reaction and diffusion processes. Simulations can be performed at single molecule and compartment spatial scales simultaneously. Molecules can diffuse and react in 1D (filament), 2D (membrane), and 3D (cytosol) compartments. The implications of crowded regions in the cell can be investigated because each diffusing molecule has spatial dimensions. Spatiocyte adopts multi-algorithm and multi-timescale frameworks to simulate models that simultaneously employ deterministic, stochastic, and particle reaction-diffusion algorithms. Comparison of light microscopy images to simulation snapshots is supported by Spatiocyte microscopy visualization and molecule tagging features. Spatiocyte is open-source software and is freely available at http://spatiocyte.org .
Study on Data Clustering and Intelligent Decision Algorithm of Indoor Localization
NASA Astrophysics Data System (ADS)
Liu, Zexi
2018-01-01
Indoor positioning technology enables the human beings to have the ability of positional perception in architectural space, and there is a shortage of single network coverage and the problem of location data redundancy. So this article puts forward the indoor positioning data clustering algorithm and intelligent decision-making research, design the basic ideas of multi-source indoor positioning technology, analyzes the fingerprint localization algorithm based on distance measurement, position and orientation of inertial device integration. By optimizing the clustering processing of massive indoor location data, the data normalization pretreatment, multi-dimensional controllable clustering center and multi-factor clustering are realized, and the redundancy of locating data is reduced. In addition, the path is proposed based on neural network inference and decision, design the sparse data input layer, the dynamic feedback hidden layer and output layer, low dimensional results improve the intelligent navigation path planning.
SU-E-J-88: Deformable Registration Using Multi-Resolution Demons Algorithm for 4DCT.
Li, Dengwang; Yin, Yong
2012-06-01
In order to register 4DCT efficiently, we propose an improved deformable registration algorithm based on improved multi-resolution demons strategy to improve the efficiency of the algorithm. 4DCT images of lung cancer patients are collected from a General Electric Discovery ST CT scanner from our cancer hospital. All of the images are sorted into groups and reconstructed according to their phases, and eachrespiratory cycle is divided into 10 phases with the time interval of 10%. Firstly, in our improved demons algorithm we use gradients of both reference and floating images as deformation forces and also redistribute the forces according to the proportion of the two forces. Furthermore, we introduce intermediate variable to cost function for decreasing the noise in registration process. At the same time, Gaussian multi-resolution strategy and BFGS method for optimization are used to improve speed and accuracy of the registration. To validate the performance of the algorithm, we register the previous 10 phase-images. We compared the difference of floating and reference images before and after registered where two landmarks are decided by experienced clinician. We registered 10 phase-images of 4D-CT which is lung cancer patient from cancer hospital and choose images in exhalationas the reference images, and all other images were registered into the reference images. This method has a good accuracy demonstrated by a higher similarity measure for registration of 4D-CT and it can register a large deformation precisely. Finally, we obtain the tumor target achieved by the deformation fields using proposed method, which is more accurately than the internal margin (IM) expanded by the Gross Tumor Volume (GTV). Furthermore, we achieve tumor and normal tissue tracking and dose accumulation using 4DCT data. An efficient deformable registration algorithm was proposed by using multi-resolution demons algorithm for 4DCT. © 2012 American Association of Physicists in Medicine.
Carvalho, Gustavo A; Minnett, Peter J; Fleming, Lora E; Banzon, Viva F; Baringer, Warner
2010-06-01
In a continuing effort to develop suitable methods for the surveillance of Harmful Algal Blooms (HABs) of Karenia brevis using satellite radiometers, a new multi-algorithm method was developed to explore whether improvements in the remote sensing detection of the Florida Red Tide was possible. A Hybrid Scheme was introduced that sequentially applies the optimized versions of two pre-existing satellite-based algorithms: an Empirical Approach (using water-leaving radiance as a function of chlorophyll concentration) and a Bio-optical Technique (using particulate backscatter along with chlorophyll concentration). The long-term evaluation of the new multi-algorithm method was performed using a multi-year MODIS dataset (2002 to 2006; during the boreal Summer-Fall periods - July to December) along the Central West Florida Shelf between 25.75°N and 28.25°N. Algorithm validation was done with in situ measurements of the abundances of K. brevis; cell counts ≥1.5×10(4) cells l(-1) defined a detectable HAB. Encouraging statistical results were derived when either or both algorithms correctly flagged known samples. The majority of the valid match-ups were correctly identified (~80% of both HABs and non-blooming conditions) and few false negatives or false positives were produced (~20% of each). Additionally, most of the HAB-positive identifications in the satellite data were indeed HAB samples (positive predictive value: ~70%) and those classified as HAB-negative were almost all non-bloom cases (negative predictive value: ~86%). These results demonstrate an excellent detection capability, on average ~10% more accurate than the individual algorithms used separately. Thus, the new Hybrid Scheme could become a powerful tool for environmental monitoring of K. brevis blooms, with valuable consequences including leading to the more rapid and efficient use of ships to make in situ measurements of HABs.
Carvalho, Gustavo A.; Minnett, Peter J.; Fleming, Lora E.; Banzon, Viva F.; Baringer, Warner
2010-01-01
In a continuing effort to develop suitable methods for the surveillance of Harmful Algal Blooms (HABs) of Karenia brevis using satellite radiometers, a new multi-algorithm method was developed to explore whether improvements in the remote sensing detection of the Florida Red Tide was possible. A Hybrid Scheme was introduced that sequentially applies the optimized versions of two pre-existing satellite-based algorithms: an Empirical Approach (using water-leaving radiance as a function of chlorophyll concentration) and a Bio-optical Technique (using particulate backscatter along with chlorophyll concentration). The long-term evaluation of the new multi-algorithm method was performed using a multi-year MODIS dataset (2002 to 2006; during the boreal Summer-Fall periods – July to December) along the Central West Florida Shelf between 25.75°N and 28.25°N. Algorithm validation was done with in situ measurements of the abundances of K. brevis; cell counts ≥1.5×104 cells l−1 defined a detectable HAB. Encouraging statistical results were derived when either or both algorithms correctly flagged known samples. The majority of the valid match-ups were correctly identified (~80% of both HABs and non-blooming conditions) and few false negatives or false positives were produced (~20% of each). Additionally, most of the HAB-positive identifications in the satellite data were indeed HAB samples (positive predictive value: ~70%) and those classified as HAB-negative were almost all non-bloom cases (negative predictive value: ~86%). These results demonstrate an excellent detection capability, on average ~10% more accurate than the individual algorithms used separately. Thus, the new Hybrid Scheme could become a powerful tool for environmental monitoring of K. brevis blooms, with valuable consequences including leading to the more rapid and efficient use of ships to make in situ measurements of HABs. PMID:21037979
NASA Astrophysics Data System (ADS)
Acciarri, R.; Adams, C.; An, R.; Anthony, J.; Asaadi, J.; Auger, M.; Bagby, L.; Balasubramanian, S.; Baller, B.; Barnes, C.; Barr, G.; Bass, M.; Bay, F.; Bishai, M.; Blake, A.; Bolton, T.; Camilleri, L.; Caratelli, D.; Carls, B.; Castillo Fernandez, R.; Cavanna, F.; Chen, H.; Church, E.; Cianci, D.; Cohen, E.; Collin, G. H.; Conrad, J. M.; Convery, M.; Crespo-Anadón, J. I.; Del Tutto, M.; Devitt, D.; Dytman, S.; Eberly, B.; Ereditato, A.; Escudero Sanchez, L.; Esquivel, J.; Fadeeva, A. A.; Fleming, B. T.; Foreman, W.; Furmanski, A. P.; Garcia-Gamez, D.; Garvey, G. T.; Genty, V.; Goeldi, D.; Gollapinni, S.; Graf, N.; Gramellini, E.; Greenlee, H.; Grosso, R.; Guenette, R.; Hackenburg, A.; Hamilton, P.; Hen, O.; Hewes, J.; Hill, C.; Ho, J.; Horton-Smith, G.; Hourlier, A.; Huang, E.-C.; James, C.; Jan de Vries, J.; Jen, C.-M.; Jiang, L.; Johnson, R. A.; Joshi, J.; Jostlein, H.; Kaleko, D.; Karagiorgi, G.; Ketchum, W.; Kirby, B.; Kirby, M.; Kobilarcik, T.; Kreslo, I.; Laube, A.; Li, Y.; Lister, A.; Littlejohn, B. R.; Lockwitz, S.; Lorca, D.; Louis, W. C.; Luethi, M.; Lundberg, B.; Luo, X.; Marchionni, A.; Mariani, C.; Marshall, J.; Martinez Caicedo, D. A.; Meddage, V.; Miceli, T.; Mills, G. B.; Moon, J.; Mooney, M.; Moore, C. D.; Mousseau, J.; Murrells, R.; Naples, D.; Nienaber, P.; Nowak, J.; Palamara, O.; Paolone, V.; Papavassiliou, V.; Pate, S. F.; Pavlovic, Z.; Piasetzky, E.; Porzio, D.; Pulliam, G.; Qian, X.; Raaf, J. L.; Rafique, A.; Rochester, L.; Rudolf von Rohr, C.; Russell, B.; Schmitz, D. W.; Schukraft, A.; Seligman, W.; Shaevitz, M. H.; Sinclair, J.; Smith, A.; Snider, E. L.; Soderberg, M.; Söldner-Rembold, S.; Soleti, S. R.; Spentzouris, P.; Spitz, J.; St. John, J.; Strauss, T.; Szelc, A. M.; Tagg, N.; Terao, K.; Thomson, M.; Toups, M.; Tsai, Y.-T.; Tufanli, S.; Usher, T.; Van De Pontseele, W.; Van de Water, R. G.; Viren, B.; Weber, M.; Wickremasinghe, D. A.; Wolbers, S.; Wongjirad, T.; Woodruff, K.; Yang, T.; Yates, L.; Zeller, G. P.; Zennamo, J.; Zhang, C.
2018-01-01
The development and operation of liquid-argon time-projection chambers for neutrino physics has created a need for new approaches to pattern recognition in order to fully exploit the imaging capabilities offered by this technology. Whereas the human brain can excel at identifying features in the recorded events, it is a significant challenge to develop an automated, algorithmic solution. The Pandora Software Development Kit provides functionality to aid the design and implementation of pattern-recognition algorithms. It promotes the use of a multi-algorithm approach to pattern recognition, in which individual algorithms each address a specific task in a particular topology. Many tens of algorithms then carefully build up a picture of the event and, together, provide a robust automated pattern-recognition solution. This paper describes details of the chain of over one hundred Pandora algorithms and tools used to reconstruct cosmic-ray muon and neutrino events in the MicroBooNE detector. Metrics that assess the current pattern-recognition performance are presented for simulated MicroBooNE events, using a selection of final-state event topologies.
Multi-exemplar affinity propagation.
Wang, Chang-Dong; Lai, Jian-Huang; Suen, Ching Y; Zhu, Jun-Yong
2013-09-01
The affinity propagation (AP) clustering algorithm has received much attention in the past few years. AP is appealing because it is efficient, insensitive to initialization, and it produces clusters at a lower error rate than other exemplar-based methods. However, its single-exemplar model becomes inadequate when applied to model multisubclasses in some situations such as scene analysis and character recognition. To remedy this deficiency, we have extended the single-exemplar model to a multi-exemplar one to create a new multi-exemplar affinity propagation (MEAP) algorithm. This new model automatically determines the number of exemplars in each cluster associated with a super exemplar to approximate the subclasses in the category. Solving the model is NP-hard and we tackle it with the max-sum belief propagation to produce neighborhood maximum clusters, with no need to specify beforehand the number of clusters, multi-exemplars, and superexemplars. Also, utilizing the sparsity in the data, we are able to reduce substantially the computational time and storage. Experimental studies have shown MEAP's significant improvements over other algorithms on unsupervised image categorization and the clustering of handwritten digits.
Multi-frame image processing with panning cameras and moving subjects
NASA Astrophysics Data System (ADS)
Paolini, Aaron; Humphrey, John; Curt, Petersen; Kelmelis, Eric
2014-06-01
Imaging scenarios commonly involve erratic, unpredictable camera behavior or subjects that are prone to movement, complicating multi-frame image processing techniques. To address these issues, we developed three techniques that can be applied to multi-frame image processing algorithms in order to mitigate the adverse effects observed when cameras are panning or subjects within the scene are moving. We provide a detailed overview of the techniques and discuss the applicability of each to various movement types. In addition to this, we evaluated algorithm efficacy with demonstrated benefits using field test video, which has been processed using our commercially available surveillance product. Our results show that algorithm efficacy is significantly improved in common scenarios, expanding our software's operational scope. Our methods introduce little computational burden, enabling their use in real-time and low-power solutions, and are appropriate for long observation periods. Our test cases focus on imaging through turbulence, a common use case for multi-frame techniques. We present results of a field study designed to test the efficacy of these techniques under expanded use cases.
NASA Astrophysics Data System (ADS)
Mansor, Zakwan; Zakaria, Mohd Zakimi; Nor, Azuwir Mohd; Saad, Mohd Sazli; Ahmad, Robiah; Jamaluddin, Hishamuddin
2017-09-01
This paper presents the black-box modelling of palm oil biodiesel engine (POB) using multi-objective optimization differential evolution (MOODE) algorithm. Two objective functions are considered in the algorithm for optimization; minimizing the number of term of a model structure and minimizing the mean square error between actual and predicted outputs. The mathematical model used in this study to represent the POB system is nonlinear auto-regressive moving average with exogenous input (NARMAX) model. Finally, model validity tests are applied in order to validate the possible models that was obtained from MOODE algorithm and lead to select an optimal model.
AEROFROSH: a shock condition calculator for multi-component fuel aerosol-laden flows
DOE Office of Scientific and Technical Information (OSTI.GOV)
Campbell, Matthew Frederick; Haylett, D. R.; Davidson, D. F.
Here, this paper introduces an algorithm that determines the thermodynamic conditions behind incident and reflectedshocksinaerosol-ladenflows.Importantly,the algorithm accounts for the effects of droplet evaporation on post-shock properties. Additionally, this article describes an algorithm for resolving the effects of multiple-component- fuel droplets. This article presents the solution methodology and compares the results to those of another similar shock calculator. It also provides examples to show the impact of droplets on post-shock properties and the impact that multi-component fuel droplets have on shock experimental parameters. Finally, this paper presents a detailed uncertainty analysis of this algorithm’s calculations given typical exper- imental uncertainties
Algorithms of walking and stability for an anthropomorphic robot
NASA Astrophysics Data System (ADS)
Sirazetdinov, R. T.; Devaev, V. M.; Nikitina, D. V.; Fadeev, A. Y.; Kamalov, A. R.
2017-09-01
Autonomous movement of an anthropomorphic robot is considered as a superposition of a set of typical elements of movement - so-called patterns, each of which can be considered as an agent of some multi-agent system [ 1 ]. To control the AP-601 robot, an information and communication infrastructure has been created that represents some multi-agent system that allows the development of algorithms for individual patterns of moving and run them in the system as a set of independently executed and interacting agents. The algorithms of lateral movement of the anthropomorphic robot AP-601 series with active stability due to the stability pattern are presented.
AEROFROSH: a shock condition calculator for multi-component fuel aerosol-laden flows
Campbell, Matthew Frederick; Haylett, D. R.; Davidson, D. F.; ...
2015-08-18
Here, this paper introduces an algorithm that determines the thermodynamic conditions behind incident and reflectedshocksinaerosol-ladenflows.Importantly,the algorithm accounts for the effects of droplet evaporation on post-shock properties. Additionally, this article describes an algorithm for resolving the effects of multiple-component- fuel droplets. This article presents the solution methodology and compares the results to those of another similar shock calculator. It also provides examples to show the impact of droplets on post-shock properties and the impact that multi-component fuel droplets have on shock experimental parameters. Finally, this paper presents a detailed uncertainty analysis of this algorithm’s calculations given typical exper- imental uncertainties
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fawley, William M.
We discuss the underlying reasoning behind and the details of the numerical algorithm used in the GINGER free-electron laser(FEL) simulation code to load the initial shot noise microbunching on the electron beam. In particular, we point out that there are some additional subtleties which must be followed for multi-dimensional codes which are not necessary for one-dimensional formulations. Moreover, requiring that the higher harmonics of the microbunching also be properly initialized with the correct statistics leads to additional complexities. We present some numerical results including the predicted incoherent, spontaneous emission as tests of the shot noise algorithm's correctness.
Multi-limit unsymmetrical MLIBD image restoration algorithm
NASA Astrophysics Data System (ADS)
Yang, Yang; Cheng, Yiping; Chen, Zai-wang; Bo, Chen
2012-11-01
A novel multi-limit unsymmetrical iterative blind deconvolution(MLIBD) algorithm was presented to enhance the performance of adaptive optics image restoration.The algorithm enhances the reliability of iterative blind deconvolution by introducing the bandwidth limit into the frequency domain of point spread(PSF),and adopts the PSF dynamic support region estimation to improve the convergence speed.The unsymmetrical factor is automatically computed to advance its adaptivity.Image deconvolution comparing experiments between Richardson-Lucy IBD and MLIBD were done,and the result indicates that the iteration number is reduced by 22.4% and the peak signal-to-noise ratio is improved by 10.18dB with MLIBD method. The performance of MLIBD algorithm is outstanding in the images restoration the FK5-857 adaptive optics and the double-star adaptive optics.
NASA Astrophysics Data System (ADS)
Jiang, Yulian; Liu, Jianchang; Tan, Shubin; Ming, Pingsong
2014-09-01
In this paper, a robust consensus algorithm is developed and sufficient conditions for convergence to consensus are proposed for a multi-agent system (MAS) with exogenous disturbances subject to partial information. By utilizing H∞ robust control, differential game theory and a design-based approach, the consensus problem of the MAS with exogenous bounded interference is resolved and the disturbances are restrained, simultaneously. Attention is focused on designing an H∞ robust controller (the robust consensus algorithm) based on minimisation of our proposed rational and individual cost functions according to goals of the MAS. Furthermore, sufficient conditions for convergence of the robust consensus algorithm are given. An example is employed to demonstrate that our results are effective and more capable to restrain exogenous disturbances than the existing literature.
Modified Multi Prime RSA Cryptosystem
NASA Astrophysics Data System (ADS)
Ghazali Kamardan, M.; Aminudin, N.; Che-Him, Norziha; Sufahani, Suliadi; Khalid, Kamil; Roslan, Rozaini
2018-04-01
RSA [1] is one of the mostly used cryptosystem in securing data and information. Though, it has been recently discovered that RSA has some weaknesses and in advance technology, RSA is believed to be inefficient especially when it comes to decryption. Thus, a new algorithm called Multi prime RSA, an extended version of the standard RSA is studied. Then, a modification is made to the Multi prime RSA where another keys is shared secretly between the receiver and the sender to increase the securerity. As in RSA, the methodology used for modified Multi-prime RSA also consists of three phases; 1. Key Generation in which the secret and public keys are generated and published. In this phase, the secrecy is improved by adding more prime numbers and addition of secret keys. 2. Encryption of the message using the public and secret keys given. 3. Decryption of the secret message using the secret key generated. For the decryption phase, a method called Chinese Remainder Theorem is used which helps to fasten the computation. Since Multi prime RSA use more than two prime numbers, the algorithm is more efficient and secure when compared to the standard RSA. Furthermore, in modified Multi prime RSA another secret key is introduced to increase the obstacle to the attacker. Therefore, it is strongly believed that this new algorithm is better and can be an alternative to the RSA.
Design and multi-physics optimization of rotary MRF brakes
NASA Astrophysics Data System (ADS)
Topcu, Okan; Taşcıoğlu, Yiğit; Konukseven, Erhan İlhan
2018-03-01
Particle swarm optimization (PSO) is a popular method to solve the optimization problems. However, calculations for each particle will be excessive when the number of particles and complexity of the problem increases. As a result, the execution speed will be too slow to achieve the optimized solution. Thus, this paper proposes an automated design and optimization method for rotary MRF brakes and similar multi-physics problems. A modified PSO algorithm is developed for solving multi-physics engineering optimization problems. The difference between the proposed method and the conventional PSO is to split up the original single population into several subpopulations according to the division of labor. The distribution of tasks and the transfer of information to the next party have been inspired by behaviors of a hunting party. Simulation results show that the proposed modified PSO algorithm can overcome the problem of heavy computational burden of multi-physics problems while improving the accuracy. Wire type, MR fluid type, magnetic core material, and ideal current inputs have been determined by the optimization process. To the best of the authors' knowledge, this multi-physics approach is novel for optimizing rotary MRF brakes and the developed PSO algorithm is capable of solving other multi-physics engineering optimization problems. The proposed method has showed both better performance compared to the conventional PSO and also has provided small, lightweight, high impedance rotary MRF brake designs.
A detail-preserved and luminance-consistent multi-exposure image fusion algorithm
NASA Astrophysics Data System (ADS)
Wang, Guanquan; Zhou, Yue
2018-04-01
When irradiance across a scene varies greatly, we can hardly get an image of the scene without over- or underexposure area, because of the constraints of cameras. Multi-exposure image fusion (MEF) is an effective method to deal with this problem by fusing multi-exposure images of a static scene. A novel MEF method is described in this paper. In the proposed algorithm, coarser-scale luminance consistency is preserved by contribution adjustment using the luminance information between blocks; detail-preserved smoothing filter can stitch blocks smoothly without losing details. Experiment results show that the proposed method performs well in preserving luminance consistency and details.
Global, Multi-Objective Trajectory Optimization With Parametric Spreading
NASA Technical Reports Server (NTRS)
Vavrina, Matthew A.; Englander, Jacob A.; Phillips, Sean M.; Hughes, Kyle M.
2017-01-01
Mission design problems are often characterized by multiple, competing trajectory optimization objectives. Recent multi-objective trajectory optimization formulations enable generation of globally-optimal, Pareto solutions via a multi-objective genetic algorithm. A byproduct of these formulations is that clustering in design space can occur in evolving the population towards the Pareto front. This clustering can be a drawback, however, if parametric evaluations of design variables are desired. This effort addresses clustering by incorporating operators that encourage a uniform spread over specified design variables while maintaining Pareto front representation. The algorithm is demonstrated on a Neptune orbiter mission, and enhanced multidimensional visualization strategies are presented.
2010-01-01
Background Irregularly shaped spatial clusters are difficult to delineate. A cluster found by an algorithm often spreads through large portions of the map, impacting its geographical meaning. Penalized likelihood methods for Kulldorff's spatial scan statistics have been used to control the excessive freedom of the shape of clusters. Penalty functions based on cluster geometry and non-connectivity have been proposed recently. Another approach involves the use of a multi-objective algorithm to maximize two objectives: the spatial scan statistics and the geometric penalty function. Results & Discussion We present a novel scan statistic algorithm employing a function based on the graph topology to penalize the presence of under-populated disconnection nodes in candidate clusters, the disconnection nodes cohesion function. A disconnection node is defined as a region within a cluster, such that its removal disconnects the cluster. By applying this function, the most geographically meaningful clusters are sifted through the immense set of possible irregularly shaped candidate cluster solutions. To evaluate the statistical significance of solutions for multi-objective scans, a statistical approach based on the concept of attainment function is used. In this paper we compared different penalized likelihoods employing the geometric and non-connectivity regularity functions and the novel disconnection nodes cohesion function. We also build multi-objective scans using those three functions and compare them with the previous penalized likelihood scans. An application is presented using comprehensive state-wide data for Chagas' disease in puerperal women in Minas Gerais state, Brazil. Conclusions We show that, compared to the other single-objective algorithms, multi-objective scans present better performance, regarding power, sensitivity and positive predicted value. The multi-objective non-connectivity scan is faster and better suited for the detection of moderately irregularly shaped clusters. The multi-objective cohesion scan is most effective for the detection of highly irregularly shaped clusters. PMID:21034451
Application of multi-objective nonlinear optimization technique for coordinated ramp-metering
DOE Office of Scientific and Technical Information (OSTI.GOV)
Haj Salem, Habib; Farhi, Nadir; Lebacque, Jean Patrick, E-mail: abib.haj-salem@ifsttar.fr, E-mail: nadir.frahi@ifsttar.fr, E-mail: jean-patrick.lebacque@ifsttar.fr
2015-03-10
This paper aims at developing a multi-objective nonlinear optimization algorithm applied to coordinated motorway ramp metering. The multi-objective function includes two components: traffic and safety. Off-line simulation studies were performed on A4 France Motorway including 4 on-ramps.
Automatic Structural Parcellation of Mouse Brain MRI Using Multi-Atlas Label Fusion
Ma, Da; Cardoso, Manuel J.; Modat, Marc; Powell, Nick; Wells, Jack; Holmes, Holly; Wiseman, Frances; Tybulewicz, Victor; Fisher, Elizabeth; Lythgoe, Mark F.; Ourselin, Sébastien
2014-01-01
Multi-atlas segmentation propagation has evolved quickly in recent years, becoming a state-of-the-art methodology for automatic parcellation of structural images. However, few studies have applied these methods to preclinical research. In this study, we present a fully automatic framework for mouse brain MRI structural parcellation using multi-atlas segmentation propagation. The framework adopts the similarity and truth estimation for propagated segmentations (STEPS) algorithm, which utilises a locally normalised cross correlation similarity metric for atlas selection and an extended simultaneous truth and performance level estimation (STAPLE) framework for multi-label fusion. The segmentation accuracy of the multi-atlas framework was evaluated using publicly available mouse brain atlas databases with pre-segmented manually labelled anatomical structures as the gold standard, and optimised parameters were obtained for the STEPS algorithm in the label fusion to achieve the best segmentation accuracy. We showed that our multi-atlas framework resulted in significantly higher segmentation accuracy compared to single-atlas based segmentation, as well as to the original STAPLE framework. PMID:24475148
Improved hybrid optimization algorithm for 3D protein structure prediction.
Zhou, Changjun; Hou, Caixia; Wei, Xiaopeng; Zhang, Qiang
2014-07-01
A new improved hybrid optimization algorithm - PGATS algorithm, which is based on toy off-lattice model, is presented for dealing with three-dimensional protein structure prediction problems. The algorithm combines the particle swarm optimization (PSO), genetic algorithm (GA), and tabu search (TS) algorithms. Otherwise, we also take some different improved strategies. The factor of stochastic disturbance is joined in the particle swarm optimization to improve the search ability; the operations of crossover and mutation that are in the genetic algorithm are changed to a kind of random liner method; at last tabu search algorithm is improved by appending a mutation operator. Through the combination of a variety of strategies and algorithms, the protein structure prediction (PSP) in a 3D off-lattice model is achieved. The PSP problem is an NP-hard problem, but the problem can be attributed to a global optimization problem of multi-extremum and multi-parameters. This is the theoretical principle of the hybrid optimization algorithm that is proposed in this paper. The algorithm combines local search and global search, which overcomes the shortcoming of a single algorithm, giving full play to the advantage of each algorithm. In the current universal standard sequences, Fibonacci sequences and real protein sequences are certified. Experiments show that the proposed new method outperforms single algorithms on the accuracy of calculating the protein sequence energy value, which is proved to be an effective way to predict the structure of proteins.
Dai, Erpeng; Zhang, Zhe; Ma, Xiaodong; Dong, Zijing; Li, Xuesong; Xiong, Yuhui; Yuan, Chun; Guo, Hua
2018-03-23
To study the effects of 2D navigator distortion and noise level on interleaved EPI (iEPI) DWI reconstruction, using either the image- or k-space-based method. The 2D navigator acquisition was adjusted by reducing its echo spacing in the readout direction and undersampling in the phase encoding direction. A POCS-based reconstruction using image-space sampling function (IRIS) algorithm (POCSIRIS) was developed to reduce the impact of navigator distortion. POCSIRIS was then compared with the original IRIS algorithm and a SPIRiT-based k-space algorithm, under different navigator distortion and noise levels. Reducing the navigator distortion can improve the reconstruction of iEPI DWI. The proposed POCSIRIS and SPIRiT-based algorithms are more tolerable to different navigator distortion levels, compared to the original IRIS algorithm. SPIRiT may be hindered by low SNR of the navigator. Multi-shot iEPI DWI reconstruction can be improved by reducing the 2D navigator distortion. Different reconstruction methods show variable sensitivity to navigator distortion or noise levels. Furthermore, the findings can be valuable in applications such as simultaneous multi-slice accelerated iEPI DWI and multi-slab diffusion imaging. © 2018 International Society for Magnetic Resonance in Medicine.
Huang, X N; Ren, H P
2016-05-13
Robust adaptation is a critical ability of gene regulatory network (GRN) to survive in a fluctuating environment, which represents the system responding to an input stimulus rapidly and then returning to its pre-stimulus steady state timely. In this paper, the GRN is modeled using the Michaelis-Menten rate equations, which are highly nonlinear differential equations containing 12 undetermined parameters. The robust adaption is quantitatively described by two conflicting indices. To identify the parameter sets in order to confer the GRNs with robust adaptation is a multi-variable, multi-objective, and multi-peak optimization problem, which is difficult to acquire satisfactory solutions especially high-quality solutions. A new best-neighbor particle swarm optimization algorithm is proposed to implement this task. The proposed algorithm employs a Latin hypercube sampling method to generate the initial population. The particle crossover operation and elitist preservation strategy are also used in the proposed algorithm. The simulation results revealed that the proposed algorithm could identify multiple solutions in one time running. Moreover, it demonstrated a superior performance as compared to the previous methods in the sense of detecting more high-quality solutions within an acceptable time. The proposed methodology, owing to its universality and simplicity, is useful for providing the guidance to design GRN with superior robust adaptation.
Mixed raster content (MRC) model for compound image compression
NASA Astrophysics Data System (ADS)
de Queiroz, Ricardo L.; Buckley, Robert R.; Xu, Ming
1998-12-01
This paper will describe the Mixed Raster Content (MRC) method for compressing compound images, containing both binary test and continuous-tone images. A single compression algorithm that simultaneously meets the requirements for both text and image compression has been elusive. MRC takes a different approach. Rather than using a single algorithm, MRC uses a multi-layered imaging model for representing the results of multiple compression algorithms, including ones developed specifically for text and for images. As a result, MRC can combine the best of existing or new compression algorithms and offer different quality-compression ratio tradeoffs. The algorithms used by MRC set the lower bound on its compression performance. Compared to existing algorithms, MRC has some image-processing overhead to manage multiple algorithms and the imaging model. This paper will develop the rationale for the MRC approach by describing the multi-layered imaging model in light of a rate-distortion trade-off. Results will be presented comparing images compressed using MRC, JPEG and state-of-the-art wavelet algorithms such as SPIHT. MRC has been approved or proposed as an architectural model for several standards, including ITU Color Fax, IETF Internet Fax, and JPEG 2000.
A novel multi-item joint replenishment problem considering multiple type discounts.
Cui, Ligang; Zhang, Yajun; Deng, Jie; Xu, Maozeng
2018-01-01
In business replenishment, discount offers of multi-item may either provide different discount schedules with a single discount type, or provide schedules with multiple discount types. The paper investigates the joint effects of multiple discount schemes on the decisions of multi-item joint replenishment. In this paper, a joint replenishment problem (JRP) model, considering three discount (all-unit discount, incremental discount, total volume discount) offers simultaneously, is constructed to determine the basic cycle time and joint replenishment frequencies of multi-item. To solve the proposed problem, a heuristic algorithm is proposed to find the optimal solutions and the corresponding total cost of the JRP model. Numerical experiment is performed to test the algorithm and the computational results of JRPs under different discount combinations show different significance in the replenishment cost reduction.
A non-oscillatory energy-splitting method for the computation of compressible multi-fluid flows
NASA Astrophysics Data System (ADS)
Lei, Xin; Li, Jiequan
2018-04-01
This paper proposes a new non-oscillatory energy-splitting conservative algorithm for computing multi-fluid flows in the Eulerian framework. In comparison with existing multi-fluid algorithms in the literature, it is shown that the mass fraction model with isobaric hypothesis is a plausible choice for designing numerical methods for multi-fluid flows. Then we construct a conservative Godunov-based scheme with the high order accurate extension by using the generalized Riemann problem solver, through the detailed analysis of kinetic energy exchange when fluids are mixed under the hypothesis of isobaric equilibrium. Numerical experiments are carried out for the shock-interface interaction and shock-bubble interaction problems, which display the excellent performance of this type of schemes and demonstrate that nonphysical oscillations are suppressed around material interfaces substantially.
Efficient selection of tagging single-nucleotide polymorphisms in multiple populations.
Howie, Bryan N; Carlson, Christopher S; Rieder, Mark J; Nickerson, Deborah A
2006-08-01
Common genetic polymorphism may explain a portion of the heritable risk for common diseases, so considerable effort has been devoted to finding and typing common single-nucleotide polymorphisms (SNPs) in the human genome. Many SNPs show correlated genotypes, or linkage disequilibrium (LD), suggesting that only a subset of all SNPs (known as tagging SNPs, or tagSNPs) need to be genotyped for disease association studies. Based on the genetic differences that exist among human populations, most tagSNP sets are defined in a single population and applied only in populations that are closely related. To improve the efficiency of multi-population analyses, we have developed an algorithm called MultiPop-TagSelect that finds a near-minimal union of population-specific tagSNP sets across an arbitrary number of populations. We present this approach as an extension of LD-select, a tagSNP selection method that uses a greedy algorithm to group SNPs into bins based on their pairwise association patterns, although the MultiPop-TagSelect algorithm could be used with any SNP tagging approach that allows choices between nearly equivalent SNPs. We evaluate the algorithm by considering tagSNP selection in candidate-gene resequencing data and lower density whole-chromosome data. Our analysis reveals that an exhaustive search is often intractable, while the developed algorithm can quickly and reliably find near-optimal solutions even for difficult tagSNP selection problems. Using populations of African, Asian, and European ancestry, we also show that an optimal multi-population set of tagSNPs can be substantially smaller (up to 44%) than a typical set obtained through independent or sequential selection.
Freer, Phoebe E; Slanetz, Priscilla J; Haas, Jennifer S; Tung, Nadine M; Hughes, Kevin S; Armstrong, Katrina; Semine, A Alan; Troyan, Susan L; Birdwell, Robyn L
2015-09-01
Stemming from breast density notification legislation in Massachusetts effective 2015, we sought to develop a collaborative evidence-based approach to density notification that could be used by practitioners across the state. Our goal was to develop an evidence-based consensus management algorithm to help patients and health care providers follow best practices to implement a coordinated, evidence-based, cost-effective, sustainable practice and to standardize care in recommendations for supplemental screening. We formed the Massachusetts Breast Risk Education and Assessment Task Force (MA-BREAST) a multi-institutional, multi-disciplinary panel of expert radiologists, surgeons, primary care physicians, and oncologists to develop a collaborative approach to density notification legislation. Using evidence-based data from the Institute for Clinical and Economic Review, the Cochrane review, National Comprehensive Cancer Network guidelines, American Cancer Society recommendations, and American College of Radiology appropriateness criteria, the group collaboratively developed an evidence-based best-practices algorithm. The expert consensus algorithm uses breast density as one element in the risk stratification to determine the need for supplemental screening. Women with dense breasts and otherwise low risk (<15% lifetime risk), do not routinely require supplemental screening per the expert consensus. Women of high risk (>20% lifetime) should consider supplemental screening MRI in addition to routine mammography regardless of breast density. We report the development of the multi-disciplinary collaborative approach to density notification. We propose a risk stratification algorithm to assess personal level of risk to determine the need for supplemental screening for an individual woman.
A multiobjective optimization algorithm is applied to a groundwater quality management problem involving remediation by pump-and-treat (PAT). The multiobjective optimization framework uses the niched Pareto genetic algorithm (NPGA) and is applied to simultaneously minimize the...
Accurate identification of microseismic P- and S-phase arrivals using the multi-step AIC algorithm
NASA Astrophysics Data System (ADS)
Zhu, Mengbo; Wang, Liguan; Liu, Xiaoming; Zhao, Jiaxuan; Peng, Ping'an
2018-03-01
Identification of P- and S-phase arrivals is the primary work in microseismic monitoring. In this study, a new multi-step AIC algorithm is proposed. This algorithm consists of P- and S-phase arrival pickers (P-picker and S-picker). The P-picker contains three steps: in step 1, a preliminary P-phase arrival window is determined by the waveform peak. Then a preliminary P-pick is identified using the AIC algorithm. Finally, the P-phase arrival window is narrowed based on the above P-pick. Thus the P-phase arrival can be identified accurately by using the AIC algorithm again. The S-picker contains five steps: in step 1, a narrow S-phase arrival window is determined based on the P-pick and the AIC curve of amplitude biquadratic time-series. In step 2, the S-picker automatically judges whether the S-phase arrival is clear to identify. In step 3 and 4, the AIC extreme points are extracted, and the relationship between the local minimum and the S-phase arrival is researched. In step 5, the S-phase arrival is picked based on the maximum probability criterion. To evaluate of the proposed algorithm, a P- and S-picks classification criterion is also established based on a source location numerical simulation. The field data tests show a considerable improvement of the multi-step AIC algorithm in comparison with the manual picks and the original AIC algorithm. Furthermore, the technique is independent of the kind of SNR. Even in the poor-quality signal group which the SNRs are below 5, the effective picking rates (the corresponding location error is <15 m) of P- and S-phase arrivals are still up to 80.9% and 76.4% respectively.
NASA Astrophysics Data System (ADS)
Fu, Lin; Hu, Xiangyu Y.; Adams, Nikolaus A.
2017-12-01
We propose efficient single-step formulations for reinitialization and extending algorithms, which are critical components of level-set based interface-tracking methods. The level-set field is reinitialized with a single-step (non-iterative) "forward tracing" algorithm. A minimum set of cells is defined that describes the interface, and reinitialization employs only data from these cells. Fluid states are extrapolated or extended across the interface by a single-step "backward tracing" algorithm. Both algorithms, which are motivated by analogy to ray-tracing, avoid multiple block-boundary data exchanges that are inevitable for iterative reinitialization and extending approaches within a parallel-computing environment. The single-step algorithms are combined with a multi-resolution conservative sharp-interface method and validated by a wide range of benchmark test cases. We demonstrate that the proposed reinitialization method achieves second-order accuracy in conserving the volume of each phase. The interface location is invariant to reapplication of the single-step reinitialization. Generally, we observe smaller absolute errors than for standard iterative reinitialization on the same grid. The computational efficiency is higher than for the standard and typical high-order iterative reinitialization methods. We observe a 2- to 6-times efficiency improvement over the standard method for serial execution. The proposed single-step extending algorithm, which is commonly employed for assigning data to ghost cells with ghost-fluid or conservative interface interaction methods, shows about a 10-times efficiency improvement over the standard method while maintaining the same accuracy. Despite their simplicity, the proposed algorithms offer an efficient and robust alternative to iterative reinitialization and extending methods for level-set based multi-phase simulations.
Akbari, Hamed; Bilello, Michel; Da, Xiao; Davatzikos, Christos
2015-01-01
Evaluating various algorithms for the inter-subject registration of brain magnetic resonance images (MRI) is a necessary task receiving growing attention. Existing studies evaluated image registration algorithms in specific tasks or using specific databases (e.g., only for skull-stripped images, only for single-site images, etc.). Consequently, the choice of registration algorithms seems task- and usage/parameter-dependent. Nevertheless, recent large-scale, often multi-institutional imaging-related studies create the need and raise the question whether some registration algorithms can 1) generally apply to various tasks/databases posing various challenges; 2) perform consistently well, and while doing so, 3) require minimal or ideally no parameter tuning. In seeking answers to this question, we evaluated 12 general-purpose registration algorithms for their generality, accuracy and robustness. We fixed their parameters at values suggested by algorithm developers as reported in the literature. We tested them in 7 databases/tasks, which present one or more of 4 commonly-encountered challenges: 1) inter-subject anatomical variability in skull-stripped images; 2) intensity inhomogeneity, noise and large structural differences in raw images; 3) imaging protocol and field-of-view (FOV) differences in multi-site data; and 4) missing correspondences in pathology-bearing images. In total, 7,562 registrations were performed. Registration accuracies were measured by (multi-)expert-annotated landmarks or regions of interest (ROIs). To ensure reproducibility, we used public software tools, public databases (whenever possible), and we fully disclose the parameter settings. We show evaluation results, and discuss the performances in light of algorithms' similarity metrics, transformation models and optimization strategies. We also discuss future directions for algorithm development and evaluation. PMID:24951685
Feng, Yen-Yi; Wu, I-Chin; Chen, Tzu-Li
2017-03-01
The number of emergency cases or emergency room visits rapidly increases annually, thus leading to an imbalance in supply and demand and to the long-term overcrowding of hospital emergency departments (EDs). However, current solutions to increase medical resources and improve the handling of patient needs are either impractical or infeasible in the Taiwanese environment. Therefore, EDs must optimize resource allocation given limited medical resources to minimize the average length of stay of patients and medical resource waste costs. This study constructs a multi-objective mathematical model for medical resource allocation in EDs in accordance with emergency flow or procedure. The proposed mathematical model is complex and difficult to solve because its performance value is stochastic; furthermore, the model considers both objectives simultaneously. Thus, this study develops a multi-objective simulation optimization algorithm by integrating a non-dominated sorting genetic algorithm II (NSGA II) with multi-objective computing budget allocation (MOCBA) to address the challenges of multi-objective medical resource allocation. NSGA II is used to investigate plausible solutions for medical resource allocation, and MOCBA identifies effective sets of feasible Pareto (non-dominated) medical resource allocation solutions in addition to effectively allocating simulation or computation budgets. The discrete event simulation model of ED flow is inspired by a Taiwan hospital case and is constructed to estimate the expected performance values of each medical allocation solution as obtained through NSGA II. Finally, computational experiments are performed to verify the effectiveness and performance of the integrated NSGA II and MOCBA method, as well as to derive non-dominated medical resource allocation solutions from the algorithms.
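As an illustration of the NSGA-II building block used above, the following is a minimal non-dominated sorting sketch (minimization assumed); the full NSGA-II adds crowding-distance ranking and genetic operators, and MOCBA would govern how many simulation replications each candidate allocation receives.

```python
import numpy as np

def dominates(a, b):
    # True if objective vector a Pareto-dominates b (minimization).
    return np.all(a <= b) and np.any(a < b)

def non_dominated_sort(F):
    # Split objective vectors F (n_points x n_objectives) into Pareto fronts.
    n = len(F)
    remaining = set(range(n))
    fronts = []
    while remaining:
        front = [i for i in remaining
                 if not any(dominates(F[j], F[i]) for j in remaining if j != i)]
        fronts.append(front)
        remaining -= set(front)
    return fronts

# e.g. non_dominated_sort(np.array([[1, 4], [2, 2], [3, 1], [3, 3]]))
```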
NASA Astrophysics Data System (ADS)
Safdernejad, Morteza S.; Karpenko, Oleksii; Ye, Chaofeng; Udpa, Lalita; Udpa, Satish
2016-02-01
The advent of Giant Magneto-Resistive (GMR) technology permits development of novel highly sensitive array probes for Eddy Current (EC) inspection of multi-layer riveted structures. Multi-frequency GMR measurements with different EC penetration depths show promise for detection of bottom-layer notches at fastener sites. However, the distortion of the induced magnetic field due to flaws is dominated by the strong fastener signal, which makes defect detection and classification a challenging problem. This issue is more pronounced for ferromagnetic fasteners that concentrate most of the magnetic flux. In the present work, a novel multi-frequency mixing algorithm is proposed to suppress the rivet signal response and enhance the defect detection capability of the GMR array probe. The algorithm is baseline-free and does not require any assumptions about the geometry of the sample being inspected. Fastener signal suppression is based upon the random sample consensus (RANSAC) method, which iteratively estimates parameters of a mathematical model from a set of observed data with outliers. Bottom-layer defects at fastener sites are simulated as EDM notches of different lengths. Performance of the proposed multi-frequency mixing approach is evaluated on finite element data and experimental GMR measurements obtained with unidirectional planar current excitation. Initial results are promising, demonstrating the feasibility of the approach.
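A schematic of the RANSAC idea behind the fastener-signal suppression, under the assumption that the fastener response can be approximated by a low-order parametric model and defect-induced samples appear as outliers; the polynomial model form, thresholds, and iteration count here are illustrative, not those of the paper.

```python
import numpy as np

def ransac_fit(x, y, degree=2, n_iter=200, tol=0.05, seed=None):
    # Fit a polynomial "fastener" model by RANSAC; samples far from the
    # consensus model (the outliers) are candidate defect responses.
    rng = np.random.default_rng(seed)
    x, y = np.asarray(x, float), np.asarray(y, float)
    best_inliers, m = None, degree + 1
    for _ in range(n_iter):
        idx = rng.choice(len(x), size=m, replace=False)  # minimal sample
        coef = np.polyfit(x[idx], y[idx], degree)
        inliers = np.abs(np.polyval(coef, x) - y) < tol
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # refit on the consensus set; the residual is the mixed/suppressed signal
    coef = np.polyfit(x[best_inliers], y[best_inliers], degree)
    return coef, y - np.polyval(coef, x)
```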
Alshamlan, Hala M; Badr, Ghada H; Alohali, Yousef A
2015-06-01
Naturally inspired evolutionary algorithms prove effective when used for solving feature selection and classification problems. Artificial Bee Colony (ABC) is a relatively new swarm intelligence method. In this paper, we propose a new hybrid gene selection method, namely the Genetic Bee Colony (GBC) algorithm. The proposed algorithm combines the use of a Genetic Algorithm (GA) with the Artificial Bee Colony (ABC) algorithm. The goal is to integrate the advantages of both algorithms. The proposed algorithm is applied to microarray gene expression profiles in order to select the most predictive and informative genes for cancer classification. In order to test the accuracy of the proposed algorithm, extensive experiments were conducted. Three binary microarray datasets are used: colon, leukemia, and lung. In addition, three multi-class microarray datasets are used: SRBCT, lymphoma, and leukemia. Results of the GBC algorithm are compared with our recently proposed technique, mRMR combined with the Artificial Bee Colony algorithm (mRMR-ABC). We also compared the combination of mRMR with GA (mRMR-GA) and with Particle Swarm Optimization (mRMR-PSO). In addition, we compared the GBC algorithm with other related algorithms that have been recently published in the literature, using all benchmark datasets. The GBC algorithm shows superior performance, as it achieved the highest classification accuracy along with the lowest average number of selected genes. This proves that the GBC algorithm is a promising approach for solving the gene selection problem in both binary and multi-class cancer classification. Copyright © 2015 Elsevier Ltd. All rights reserved.
Multi exposure image fusion algorithm based on YCbCr space
NASA Astrophysics Data System (ADS)
Yang, T. T.; Fang, P. Y.
2018-05-01
To address the difficulty of preserving scene detail and visual quality in high-dynamic-range image synthesis, we propose a multi-exposure image fusion algorithm that processes low-dynamic-range images in YCbCr space, with weighted blending applied to the luminance and chrominance (color-difference) components respectively. The experimental results show that the method retains the color of the fused image while balancing the detail of the bright and dark regions of the high-dynamic image.
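A minimal sketch of this kind of fusion, assuming a BT.601 RGB-to-YCbCr conversion and a Gaussian well-exposedness weight; the paper's actual weighting of the luminance and chrominance components may differ.

```python
import numpy as np

def rgb_to_ycbcr(img):
    # ITU-R BT.601 full-range RGB -> YCbCr for float images in [0, 1].
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    cb = -0.1687 * r - 0.3313 * g + 0.5 * b + 0.5
    cr =  0.5 * r - 0.4187 * g - 0.0813 * b + 0.5
    return np.stack([y, cb, cr], axis=-1)

def fuse_exposures(stack):
    # Weighted per-pixel blend of an exposure stack in YCbCr space.
    # The weight favours mid-range luminance (well-exposed pixels);
    # luminance and chroma are blended with the same normalized weights.
    ycc = np.array([rgb_to_ycbcr(img) for img in stack])
    w = np.exp(-((ycc[..., 0] - 0.5) ** 2) / (2 * 0.2 ** 2))
    w = w / (w.sum(axis=0) + 1e-12)
    return np.einsum('nij,nijc->ijc', w, ycc)
```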
Automated brain tumor segmentation using spatial accuracy-weighted hidden Markov Random Field.
Nie, Jingxin; Xue, Zhong; Liu, Tianming; Young, Geoffrey S; Setayesh, Kian; Guo, Lei; Wong, Stephen T C
2009-09-01
A variety of algorithms have been proposed for brain tumor segmentation from multi-channel sequences; however, most of them require isotropic or pseudo-isotropic resolution of the MR images. Although co-registration and interpolation of low-resolution sequences, such as T2-weighted images, onto the space of the high-resolution image, such as the T1-weighted image, can be performed prior to segmentation, the results are usually limited by partial volume effects due to interpolation of low-resolution images. To improve the quality of tumor segmentation in clinical applications where low-resolution sequences are commonly used together with high-resolution images, we propose an algorithm based on the Spatial accuracy-weighted Hidden Markov random field and Expectation maximization (SHE) approach for both automated tumor and enhanced-tumor segmentation. SHE incorporates the spatial interpolation accuracy of low-resolution images into the optimization procedure of the Hidden Markov Random Field (HMRF) to segment tumor using multi-channel MR images with different resolutions, e.g., high-resolution T1-weighted and low-resolution T2-weighted images. In experiments, we evaluated this algorithm using a set of simulated multi-channel brain MR images with known ground-truth tissue segmentation and also applied it to a dataset of MR images obtained during clinical trials of brain tumor chemotherapy. The results show that more accurate tumor segmentation can be obtained in comparison with conventional multi-channel segmentation algorithms.
Lim, Hansaim; Gray, Paul; Xie, Lei; Poleksic, Aleksandar
2016-01-01
Conventional one-drug-one-gene approach has been of limited success in modern drug discovery. Polypharmacology, which focuses on searching for multi-targeted drugs to perturb disease-causing networks instead of designing selective ligands to target individual proteins, has emerged as a new drug discovery paradigm. Although many methods for single-target virtual screening have been developed to improve the efficiency of drug discovery, few of these algorithms are designed for polypharmacology. Here, we present a novel theoretical framework and a corresponding algorithm for genome-scale multi-target virtual screening based on the one-class collaborative filtering technique. Our method overcomes the sparseness of the protein-chemical interaction data by means of interaction matrix weighting and dual regularization from both chemicals and proteins. While the statistical foundation behind our method is general enough to encompass genome-wide drug off-target prediction, the program is specifically tailored to find protein targets for new chemicals with little to no available interaction data. We extensively evaluate our method using a number of the most widely accepted gene-specific and cross-gene family benchmarks and demonstrate that our method outperforms other state-of-the-art algorithms for predicting the interaction of new chemicals with multiple proteins. Thus, the proposed algorithm may provide a powerful tool for multi-target drug design. PMID:27958331
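A schematic analogue of the described method: weighted one-class matrix factorization with dual graph regularization derived from chemical and protein similarities. The variable names, the gradient scheme, and all hyperparameters are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def wmf_dual_reg(R, W, Sc, Sp, k=10, lam=0.1, alpha=0.1, iters=200, lr=0.01):
    # R  : binary chemical-protein interaction matrix
    # W  : confidence weights (interaction matrix weighting), same shape as R
    # Sc : chemical-chemical similarity; Sp : protein-protein similarity
    rng = np.random.default_rng(0)
    U = 0.1 * rng.standard_normal((R.shape[0], k))
    V = 0.1 * rng.standard_normal((R.shape[1], k))
    Lc = np.diag(Sc.sum(1)) - Sc   # graph Laplacians for dual regularization
    Lp = np.diag(Sp.sum(1)) - Sp
    for _ in range(iters):
        E = W * (U @ V.T - R)      # weighted reconstruction error
        U -= lr * (E @ V + lam * U + alpha * Lc @ U)
        V -= lr * (E.T @ U + lam * V + alpha * Lp @ V)
    return U, V                    # score new pairs with U @ V.T
```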
Development of model reference adaptive control theory for electric power plant control applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mabius, L.E.
1982-09-15
The scope of this effort includes the theoretical development of a multi-input, multi-output (MIMO) Model Reference Control (MRC) algorithm, (i.e., model following control law), Model Reference Adaptive Control (MRAC) algorithm and the formulation of a nonlinear model of a typical electric power plant. Previous single-input, single-output MRAC algorithm designs have been generalized to MIMO MRAC designs using the MIMO MRC algorithm. This MRC algorithm, which has been developed using Command Generator Tracker methodologies, represents the steady state behavior (in the adaptive sense) of the MRAC algorithm. The MRC algorithm is a fundamental component in the MRAC design and stability analysis. An enhanced MRC algorithm, which has been developed for systems with more controls than regulated outputs, alleviates the MRC stability constraint of stable plant transmission zeroes. The nonlinear power plant model is based on the Cromby model with the addition of a governor valve management algorithm, turbine dynamics and turbine interactions with extraction flows. An application of the MRC algorithm to a linearization of this model demonstrates its applicability to power plant systems. In particular, the generated power changes at 7% per minute while throttle pressure and temperature, reheat temperature and drum level are held constant with a reasonable level of control. The enhanced algorithm significantly reduces control fluctuations without modifying the output response.
Hybrid protection algorithms based on game theory in multi-domain optical networks
NASA Astrophysics Data System (ADS)
Guo, Lei; Wu, Jingjing; Hou, Weigang; Liu, Yejun; Zhang, Lincong; Li, Hongming
2011-12-01
As network size increases, the optical backbone is divided into multiple domains, each with its own network operator and management policy. At the same time, failures in an optical network may lead to huge data losses, since each wavelength carries a large amount of traffic. Survivability in multi-domain optical networks is therefore very important. However, existing survivable algorithms achieve only a unilateral optimization of profit, for either users or network operators; they cannot find a win-win optimal solution that accounts for the economic factors of both. Thus, in this paper we develop a multi-domain network model involving multiple Quality of Service (QoS) parameters. After presenting a link evaluation approach based on fuzzy mathematics, we propose a game model to find the optimal solution maximizing the user's utility, the network operator's utility, and the joint utility of user and network operator. Since the problem of finding the win-win optimal solution is NP-complete, we propose two new hybrid protection algorithms, the Intra-domain Sub-path Protection (ISP) algorithm and the Inter-domain End-to-end Protection (IEP) algorithm. In ISP and IEP, hybrid protection means that an intelligent algorithm based on Bacterial Colony Optimization (BCO) and a heuristic algorithm are used to solve the survivability of intra-domain routing and inter-domain routing, respectively. Simulation results show that ISP and IEP have similar comprehensive utility. In addition, ISP has better resource utilization efficiency, lower blocking probability, and higher network operator's utility, while IEP has better user's utility.
NASA Astrophysics Data System (ADS)
Daïf, A.; Ali Chérif, A.; Bresson, J.; Sarh, B.
1995-10-01
The vaporization of one or two multi-component fuel droplets in a hot air stream is presented. A thermal wind tunnel with an experimental channel has been designed to support the experimental process. First, a comparison between experimental results and numerical data is presented for the case of an isolated multi-component droplet. The numerical method is based on the resolution of the heat and mass transfer equations between the droplet and the gas stream. This model includes the effect of Stefan flow, the effect of the variable thermophysical properties of the components in both phases, and a non-unitary Lewis number in the gas film. Beyond the deeper analysis afforded by the comparison between calculation and experiment, the experimental results show the micro-explosion phenomenon observed inside the liquid phase of the multi-component droplet at low temperature. The experimental case of two pure or multi-component droplets in interaction is also presented.
2014-01-01
Background Network-based learning algorithms for automated function prediction (AFP) are negatively affected by the limited coverage of experimental data and limited a priori known functional annotations. As a consequence their application to model organisms is often restricted to well characterized biological processes and pathways, and their effectiveness with poorly annotated species is relatively limited. A possible solution to this problem might consist in the construction of big networks including multiple species, but this in turn poses challenging computational problems, due to the scalability limitations of existing algorithms and the main memory requirements induced by the construction of big networks. Distributed computation or the usage of big computers could in principle address these issues, but raises further algorithmic problems and requires resources not satisfiable with simple off-the-shelf computers. Results We propose a novel framework for scalable network-based learning of multi-species protein functions based on both a local implementation of existing algorithms and the adoption of innovative technologies: we solve "locally" the AFP problem, by designing "vertex-centric" implementations of network-based algorithms, but we do not give up thinking "globally" by exploiting the overall topology of the network. This is made possible by the adoption of secondary memory-based technologies that allow the efficient use of the large memory available on disks, thus overcoming the main memory limitations of modern off-the-shelf computers. This approach has been applied to the analysis of a large multi-species network including more than 300 species of bacteria and to a network with more than 200,000 proteins belonging to 13 Eukaryotic species. To our knowledge this is the first work where secondary-memory based network analysis has been applied to multi-species function prediction using biological networks with hundreds of thousands of proteins. Conclusions The combination of these algorithmic and technological approaches makes feasible the analysis of large multi-species networks using ordinary computers with limited speed and primary memory, and in perspective could enable the analysis of huge networks (e.g. the whole proteomes available in SwissProt), using well-equipped stand-alone machines. PMID:24843788
Solomon, Justin; Mileto, Achille; Nelson, Rendon C; Roy Choudhury, Kingshuk; Samei, Ehsan
2016-04-01
To determine if radiation dose and reconstruction algorithm affect the computer-based extraction and analysis of quantitative imaging features in lung nodules, liver lesions, and renal stones at multi-detector row computed tomography (CT). Retrospective analysis of data from a prospective, multicenter, HIPAA-compliant, institutional review board-approved clinical trial was performed by extracting 23 quantitative imaging features (size, shape, attenuation, edge sharpness, pixel value distribution, and texture) of lesions on multi-detector row CT images of 20 adult patients (14 men, six women; mean age, 63 years; range, 38-72 years) referred for known or suspected focal liver lesions, lung nodules, or kidney stones. Data were acquired between September 2011 and April 2012. All multi-detector row CT scans were performed at two different radiation dose levels; images were reconstructed with filtered back projection, adaptive statistical iterative reconstruction, and model-based iterative reconstruction (MBIR) algorithms. A linear mixed-effects model was used to assess the effect of radiation dose and reconstruction algorithm on extracted features. Among the 23 imaging features assessed, radiation dose had a significant effect on five, three, and four of the features for liver lesions, lung nodules, and renal stones, respectively (P < .002 for all comparisons). Adaptive statistical iterative reconstruction had a significant effect on three, one, and one of the features for liver lesions, lung nodules, and renal stones, respectively (P < .002 for all comparisons). MBIR reconstruction had a significant effect on nine, 11, and 15 of the features for liver lesions, lung nodules, and renal stones, respectively (P < .002 for all comparisons). Of note, the measured size of lung nodules and renal stones with MBIR was significantly different from that measured with the other two algorithms (P < .002 for all comparisons). Although lesion texture was significantly affected by the reconstruction algorithm used (average of 3.33 features affected by MBIR throughout lesion types; P < .002 for all comparisons), no significant effect of the radiation dose setting was observed for all but one of the texture features (P = .002-.998). Radiation dose settings and reconstruction algorithms affect the extraction and analysis of quantitative imaging features in lesions at multi-detector row CT.
NASA Astrophysics Data System (ADS)
Hashimoto, M.; Takenaka, H.; Higurashi, A.; Nakajima, T.
2017-12-01
Aerosol in the atmosphere is an important constituent for determining the earth's radiation budget, so accurate aerosol retrieval from satellites is valuable. We have developed a satellite remote sensing algorithm to retrieve aerosol optical properties using multi-wavelength and multi-pixel information from satellite imagers (MWPM). The method simultaneously derives aerosol optical properties, such as aerosol optical thickness (AOT), single scattering albedo (SSA) and aerosol size information, by using spatial differences of wavelengths (multi-wavelength) and surface reflectances (multi-pixel). The method is useful for aerosol retrieval over spatially heterogeneous surfaces such as urban regions. In this algorithm, the inversion method combines an optimal estimation method with a smoothing constraint on the state vector. Furthermore, the method has been combined with direct radiative transfer model (RTM) calculation, numerically solved at each iteration step of the non-linear inverse problem, without using a look-up table (LUT), under several constraints. However, this takes too much computation time. To accelerate the calculation, we replaced the RTM with an accelerated RTM solver learned by a neural network-based method, EXAM (Takenaka et al., 2011), using Rster code. The calculation time was then shortened to about one thousandth. We applied MWPM combined with EXAM to GOSAT/TANSO-CAI (Cloud and Aerosol Imager). CAI is a supplementary sensor of TANSO-FTS, dedicated to measuring cloud and aerosol properties. CAI has four bands, 380, 674, 870 and 1600 nm, and observes at 500 m resolution for bands 1, 2 and 3, and 1.5 km for band 4. Retrieved parameters are aerosol optical properties, such as the aerosol optical thickness (AOT) of fine and coarse mode particles at a wavelength of 500 nm, the volume soot fraction in fine mode particles, and the ground surface albedo at each observed wavelength, obtained by combining a minimum reflectance method with Fukuda et al. (2013). We will show the results and discuss the accuracy of the algorithm for various surface types. Our future work is to extend the algorithm to the analysis of GOSAT-2/TANSO-CAI-2 and GCOM-C/SGLI data.
INTERIOR VIEW, LOOKING WEST, WITH CRANE OPERATOR, TED SEALS, POURING ...
INTERIOR VIEW, LOOKING WEST, WITH CRANE OPERATOR, TED SEALS, POURING MOLTEN METAL INTO A 1,300 TON ELECTRIC HOLDING FURNACE OR MIXER. AN ELECTRONIC SCALE RECORDED THAT 50.5 TONS OF METAL WERE POURED INTO THE FURNACE DURING THIS POUR. - American Cast Iron Pipe Company, Mixer Building, 1501 Thirty-first Avenue North, Birmingham, Jefferson County, AL
Nash equilibrium and multi criterion aerodynamic optimization
NASA Astrophysics Data System (ADS)
Tang, Zhili; Zhang, Lianhe
2016-06-01
Game theory, and in particular its Nash Equilibrium (NE), has been gaining importance in solving Multi Criterion Optimization (MCO) engineering problems over the past decade. The solution of an MCO problem can be viewed as a NE under the concept of competitive games. This paper surveys and proposes four efficient algorithms for calculating a NE of an MCO problem. Existence and equivalence of the solution are analyzed and proved in the paper based on a fixed-point theorem. A specific virtual symmetric Nash game is also presented to set up an optimization strategy for single-objective optimization problems. Two numerical examples are presented to verify the proposed algorithms. One is the optimization of mathematical functions, illustrating the detailed numerical procedures of the algorithms; the other is aerodynamic drag reduction of a civil transport wing-fuselage configuration using the virtual game. The successful application validates the efficiency of the algorithms in solving complex aerodynamic optimization problems.
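For intuition, a minimal best-response iteration for a two-player Nash game is sketched below; a fixed point of the loop is a Nash equilibrium. The toy objectives and the use of scipy.optimize.minimize are illustrative assumptions, not the paper's four algorithms.

```python
import numpy as np
from scipy.optimize import minimize

def nash_best_response(f1, f2, x0, y0, iters=50, tol=1e-8):
    # Player 1 controls x and minimizes f1(x, y); player 2 controls y and
    # minimizes f2(x, y). Alternate best responses until neither moves.
    x, y = np.atleast_1d(x0), np.atleast_1d(y0)
    for _ in range(iters):
        x_new = minimize(lambda x_: f1(x_, y), x).x
        y_new = minimize(lambda y_: f2(x_new, y_), y).x
        done = np.linalg.norm(x_new - x) + np.linalg.norm(y_new - y) < tol
        x, y = x_new, y_new
        if done:
            break
    return x, y

# toy example with coupled criteria:
# f1 = lambda x, y: (x[0] - 1) ** 2 + 0.1 * x[0] * y[0]
# f2 = lambda x, y: (y[0] + 1) ** 2 + 0.1 * x[0] * y[0]
# x_star, y_star = nash_best_response(f1, f2, [0.0], [0.0])
```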
NASA Astrophysics Data System (ADS)
Chen, Xiang; Zhang, Xiong; Jia, Zupeng
2017-06-01
The Multi-Material Arbitrary Lagrangian Eulerian (MMALE) method is an effective way to simulate multi-material flow with severe surface deformation. Compared with the traditional Arbitrary Lagrangian Eulerian (ALE) method, the MMALE method allows multiple materials in a single cell, which overcomes the difficulties in the grid refinement process. In recent decades, much research has been conducted on the Lagrangian, rezoning and surface reconstruction phases, but less attention has been paid to the multi-material remapping phase, especially for three-dimensional problems, due to two complex geometric problems: polyhedron subdivision and polyhedron intersection. In this paper, we propose a "Clipping and Projecting" algorithm for polyhedron intersection whose basic idea comes from the commonly used method by Grandy (1999) [29] and Jia et al. (2013) [34]. Our new algorithm solves the geometric problem by an incremental modification of the topology based on segment-plane intersections. A comparison with Jia et al. (2013) [34] shows our new method improves efficiency by 55% to 65% when calculating polyhedron intersections. Moreover, the instability caused by geometric degeneracy can be thoroughly avoided because geometric integrity is preserved in the new algorithm. We also focus on the polyhedron subdivision process and describe an algorithm which can automatically and precisely handle the various situations, including convex, non-convex and multiple subdivisions. Numerical studies indicate that by using our polyhedron subdivision and intersection algorithms, the volume conservation of the remapping phase can be exactly preserved in the MMALE simulation.
Multi-Objective Constraint Satisfaction for Mobile Robot Area Defense
2010-03-01
This research presents an algorithm that tasks robots to meet the two specific goals of mobile robot area defense, including alerting the other agents and ensuring trust in the system. The problem is defined as a constraint satisfaction problem solved using the Non-dominated Sorting Genetic Algorithm II (NSGA-II).
Transonic Wing Shape Optimization Using a Genetic Algorithm
NASA Technical Reports Server (NTRS)
Holst, Terry L.; Pulliam, Thomas H.; Kwak, Dochan (Technical Monitor)
2002-01-01
A method for aerodynamic shape optimization based on a genetic algorithm approach is demonstrated. The algorithm is coupled with a transonic full potential flow solver and is used to optimize the flow about transonic wings including multi-objective solutions that lead to the generation of Pareto fronts. The results indicate that the genetic algorithm is easy to implement, flexible in application and extremely reliable.
A novel communication mechanism based on node potential multi-path routing
NASA Astrophysics Data System (ADS)
Bu, Youjun; Zhang, Chuanhao; Jiang, YiMing; Zhang, Zhen
2016-10-01
As networks scale rapidly and new network applications emerge frequently, bandwidth supply on today's Internet cannot keep up with the rapidly increasing requirements. Unfortunately, inefficient use of network resources makes things worse. Current networks deploy single-next-hop optimized paths for data transmission, but such a "best effort" model leads to imbalanced use of network resources and often causes local congestion. Multi-path routing, on the other hand, can efficiently use the aggregate bandwidth of multiple paths and improve network robustness, security, load balancing and quality of service. As a result, multi-path routing has attracted much attention in the routing and switching research fields, and many important ideas and solutions have been proposed. This paper focuses on implementing parallel transmission of multi-next-hop data, balancing network traffic and reducing congestion. It aims to explore the key technologies of multi-path communication networks, which could provide feasible academic support for subsequent applications of multi-path communication networking. It proposes a novel multi-path algorithm based on node potential in the network. The algorithm can make full use of network link resources and effectively balance network link utilization.
Efficient Geometric Sound Propagation Using Visibility Culling
NASA Astrophysics Data System (ADS)
Chandak, Anish
2011-07-01
Simulating propagation of sound can improve the sense of realism in interactive applications such as video games and can lead to better designs in engineering applications such as architectural acoustics. In this thesis, we present geometric sound propagation techniques which are faster than prior methods and map well to upcoming parallel multi-core CPUs. We model specular reflections by using the image-source method and model finite-edge diffraction by using the well-known Biot-Tolstoy-Medwin (BTM) model. We accelerate the computation of specular reflections by applying novel visibility algorithms, FastV and AD-Frustum, which compute visibility from a point. We accelerate finite-edge diffraction modeling by applying a novel visibility algorithm which computes visibility from a region. Our visibility algorithms are based on frustum tracing and exploit recent advances in fast ray-hierarchy intersections, data-parallel computations, and scalable, multi-core algorithms. The AD-Frustum algorithm adapts its computation to the scene complexity and allows small errors in computing specular reflection paths for higher computational efficiency. FastV and our visibility algorithm from a region are general, object-space, conservative visibility algorithms that together significantly reduce the number of image sources compared to other techniques while preserving the same accuracy. Our geometric propagation algorithms are an order of magnitude faster than prior approaches for modeling specular reflections and two to ten times faster for modeling finite-edge diffraction. Our algorithms are interactive, scale almost linearly on multi-core CPUs, and can handle large, complex, and dynamic scenes. We also compare the accuracy of our sound propagation algorithms with other methods. Once sound propagation is performed, it is desirable to listen to the propagated sound in interactive and engineering applications. We can generate smooth, artifact-free output audio signals by applying efficient audio-processing algorithms. We also present the first efficient audio-processing algorithm for scenarios with simultaneously moving source and moving receiver (MS-MR) which incurs less than 25% overhead compared to static source and moving receiver (SS-MR) or moving source and static receiver (MS-SR) scenario.
Data Mining Algorithms for Classification of Complex Biomedical Data
ERIC Educational Resources Information Center
Lan, Liang
2012-01-01
In my dissertation, I will present my research which contributes to solving the following three open problems from biomedical informatics: (1) Multi-task approaches for microarray classification; (2) Multi-label classification of gene and protein prediction from multi-source biological data; (3) Spatial scan for movement data. In microarray…
Optimal configuration of power grid sources based on optimal particle swarm algorithm
NASA Astrophysics Data System (ADS)
Wen, Yuanhua
2018-04-01
In order to solve the optimal configuration problem of power grid sources, an improved particle swarm optimization algorithm is proposed. First, the concept of multi-objective optimization and the Pareto solution set are reviewed. Then, the performance of the classical genetic algorithm, the classical particle swarm optimization algorithm and the improved particle swarm optimization algorithm is analyzed, and the three algorithms are simulated respectively. Comparison of the test results demonstrates the superiority of the improved algorithm in convergence and optimization performance, which lays the foundation for the subsequent solution of the micro-grid power optimal configuration problem.
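For reference, a minimal standard PSO loop is sketched below; the paper's improvement would modify the inertia and learning terms, which are plain constants here.

```python
import numpy as np

def pso(f, dim, n=30, iters=100, w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0)):
    # Minimal particle swarm optimizer minimizing a scalar objective f.
    rng = np.random.default_rng(0)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n, dim))          # positions
    v = np.zeros((n, dim))                     # velocities
    pbest = x.copy()
    pbest_f = np.apply_along_axis(f, 1, x)     # personal bests
    g = pbest[np.argmin(pbest_f)]              # global best
    for _ in range(iters):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        fx = np.apply_along_axis(f, 1, x)
        improved = fx < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], fx[improved]
        g = pbest[np.argmin(pbest_f)]
    return g, f(g)

# e.g. pso(lambda v: np.sum(v ** 2), dim=3)
```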
Multiple R&D projects scheduling optimization with improved particle swarm algorithm.
Liu, Mengqi; Shan, Miyuan; Wu, Juan
2014-01-01
For most enterprises, in order to win the initiative in fierce market competition, a key step is to improve their R&D ability to meet the various demands of customers in a more timely and less costly way. This paper discusses the features of multiple R&D environments in large make-to-order enterprises under constrained human resources and budget, and puts forward a multi-project scheduling model over a given period. Furthermore, we make some improvements to the existing particle swarm algorithm and apply the version developed here to the resource-constrained multi-project scheduling model in a simulation experiment. The feasibility of the model and the validity of the algorithm are demonstrated in the experiment.
Fuzzy multi-objective chance-constrained programming model for hazardous materials transportation
NASA Astrophysics Data System (ADS)
Du, Jiaoman; Yu, Lean; Li, Xiang
2016-04-01
Hazardous materials transportation is an important and hot issue of public safety. Based on the shortest path model, this paper presents a fuzzy multi-objective programming model that minimizes the transportation risk to life, travel time and fuel consumption. First, we present the risk model, travel time model and fuel consumption model. Furthermore, we formulate a chance-constrained programming model within the framework of credibility theory, in which the lengths of arcs in the transportation network are assumed to be fuzzy variables. A hybrid intelligent algorithm integrating fuzzy simulation and genetic algorithm is designed for finding a satisfactory solution. Finally, some numerical examples are given to demonstrate the efficiency of the proposed model and algorithm.
NASA Astrophysics Data System (ADS)
Wickersham, Andrew Joseph
There are two critical research needs for the study of hydrocarbon combustion in high speed flows: 1) combustion diagnostics with adequate temporal and spatial resolution, and 2) mathematical techniques that can extract key information from large datasets. The goal of this work is to address these needs, respectively, by the use of high speed and multi-perspective chemiluminescence and advanced mathematical algorithms. To obtain the measurements, this work explored the application of high speed chemiluminescence diagnostics and the use of fiber-based endoscopes (FBEs) for non-intrusive and multi-perspective chemiluminescence imaging up to 20 kHz. Non-intrusive and full-field imaging measurements provide a wealth of information for model validation and design optimization of propulsion systems. However, it is challenging to obtain such measurements due to various implementation difficulties such as optical access, thermal management, and equipment cost. This work therefore explores the application of FBEs for non-intrusive imaging in supersonic propulsion systems. The FBEs used in this work are demonstrated to overcome many of the aforementioned difficulties and provided datasets from multiple angular positions up to 20 kHz in a supersonic combustor. The combustor operated on ethylene fuel at Mach 2 with an inlet stagnation temperature and pressure of approximately 640 degrees Fahrenheit and 70 psia, respectively. The imaging measurements were obtained from eight perspectives simultaneously, providing full-field datasets under such flow conditions for the first time, allowing the possibility of inferring multi-dimensional measurements. Due to the high speed and multi-perspective nature, such new diagnostic capability generates a large volume of data and calls for analysis algorithms that can process the data and extract key physics effectively. To extract the key combustion dynamics from the measurements, three mathematical methods were investigated in this work: Fourier analysis, proper orthogonal decomposition (POD), and wavelet analysis (WA). These algorithms were first demonstrated and tested on imaging measurements obtained from one perspective in a sub-sonic combustor (up to Mach 0.2). The results show that these algorithms are effective in extracting the key physics from large datasets, including the characteristic frequencies of flow-flame interactions, especially during transient processes such as lean blow off and ignition. After these relatively simple tests and demonstrations, the algorithms were applied to process the measurements obtained from multiple perspectives in the supersonic combustor. Compared to past analyses (which have been limited to data obtained from one perspective only), the availability of data from multiple perspectives provides further insights into the flame and flow structures in high speed flows. In summary, this work shows that high speed chemiluminescence is a simple yet powerful combustion diagnostic. Especially when combined with FBEs and the analysis algorithms described in this work, such diagnostics provide full-field imaging at high repetition rate in challenging flows. Based on such measurements, a wealth of information can be obtained from proper analysis algorithms, including characteristic frequency, dominating flame modes, and even multi-dimensional flame and flow structures.
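Of the three analysis methods, POD is the most compact to illustrate; a snapshot-matrix POD via SVD is sketched below, with the array shapes and the mean-subtraction choice as assumptions.

```python
import numpy as np

def pod_modes(frames, n_modes=5):
    # Proper orthogonal decomposition of an image sequence via SVD.
    # frames: array (n_snapshots, height, width) of chemiluminescence images.
    n, h, w = frames.shape
    X = frames.reshape(n, h * w).T             # snapshot matrix (pixels x time)
    X = X - X.mean(axis=1, keepdims=True)      # remove the mean field
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    modes = U[:, :n_modes].T.reshape(n_modes, h, w)   # spatial modes
    coeffs = (np.diag(s[:n_modes]) @ Vt[:n_modes]).T  # temporal coefficients
    energy = s ** 2 / np.sum(s ** 2)                  # modal energy fractions
    return modes, coeffs, energy
```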
Vogel, Curtis R; Yang, Qiang
2006-08-21
We present two different implementations of the Fourier domain preconditioned conjugate gradient algorithm (FD-PCG) to efficiently solve the large structured linear systems that arise in optimal volume turbulence estimation, or tomography, for multi-conjugate adaptive optics (MCAO). We describe how to deal with several critical technical issues, including the cone coordinate transformation problem and sensor subaperture grid spacing. We also extend the FD-PCG approach to handle the deformable mirror fitting problem for MCAO.
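A generic sketch of the idea behind FD-PCG: preconditioned conjugate gradients in which the preconditioner is diagonalized in the Fourier domain, so applying its inverse costs one FFT/IFFT pair. This 1-D, single-block version is only schematic of the MCAO tomography system.

```python
import numpy as np

def pcg_fft(apply_A, b, c_eig, iters=100, tol=1e-8):
    # apply_A : function computing A @ x for the (structured) system matrix
    # c_eig   : DFT eigenvalues of a circulant preconditioner C, so that
    #           C^{-1} r = ifft(fft(r) / c_eig).
    def precond(r):
        return np.real(np.fft.ifft(np.fft.fft(r) / c_eig))
    x = np.zeros_like(b)
    r = b - apply_A(x)
    z = precond(r)
    p = z.copy()
    rz = r @ z
    for _ in range(iters):
        Ap = apply_A(p)
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = precond(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x
```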
Swarm Intelligence for Urban Dynamics Modelling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ghnemat, Rawan; Bertelle, Cyrille; Duchamp, Gerard H. E.
2009-04-16
In this paper, we propose swarm intelligence algorithms to deal with dynamical and spatial organization emergence. The goal is to model and simulate the development of spatial centers using multi-criteria. We combine a decentralized approach based on emergent clustering mixed with spatial constraints or attractions. We propose an extension of the ant nest building algorithm with multi-center and adaptive process. Typically, this model is suitable to analyse and simulate urban dynamics like gentrification or the dynamics of cultural equipment in urban areas.
Scheduling for the National Hockey League Using a Multi-objective Evolutionary Algorithm
NASA Astrophysics Data System (ADS)
Craig, Sam; While, Lyndon; Barone, Luigi
We describe a multi-objective evolutionary algorithm that derives schedules for the National Hockey League according to three objectives: minimising the teams' total travel, promoting equity in rest time between games, and minimising long streaks of home or away games. Experiments show that the system is able to derive schedules that beat the 2008-9 NHL schedule in all objectives simultaneously, and that it returns a set of schedules that offer a range of trade-offs across the objectives.
NASA Technical Reports Server (NTRS)
Abbott, David; Batten, Adam; Carpenter, David; Dunlop, John; Edwards, Graeme; Farmer, Tony; Gaffney, Bruce; Hedley, Mark; Hoschke, Nigel; Isaacs, Peter;
2008-01-01
This report describes the first phase of the implementation of the Concept Demonstrator. The Concept Demonstrator system is a powerful and flexible experimental test-bed platform for developing sensors, communications systems, and multi-agent based algorithms for an intelligent vehicle health monitoring system for deployment in aerospace vehicles. The Concept Demonstrator contains sensors and processing hardware distributed throughout the structure, and uses multi-agent algorithms to characterize impacts and determine an appropriate response to these impacts.
Algorithm Development for the Multi-Fluid Plasma Model
2011-05-30
An Efficient Next Hop Selection Algorithm for Multi-Hop Body Area Networks
Ayatollahitafti, Vahid; Ngadi, Md Asri; Mohamad Sharif, Johan bin; Abdullahi, Mohammed
2016-01-01
Body Area Networks (BANs) consist of various sensors which gather a patient's vital signs and deliver them to doctors. One of the most significant challenges faced is the design of an energy-efficient next hop selection algorithm to satisfy Quality of Service (QoS) requirements for different healthcare applications. In this paper, a novel efficient next hop selection algorithm is proposed for multi-hop BANs. This algorithm uses the minimum hop count and a link cost function jointly in each node to choose the best next hop node. The link cost function includes the residual energy, free buffer size, and link reliability of the neighboring nodes, which is used to balance energy consumption and to satisfy QoS requirements in terms of end-to-end delay and reliability. Extensive simulation experiments were performed to evaluate the efficiency of the proposed algorithm using the NS-2 simulator. Simulation results show that our proposed algorithm provides significant improvement in terms of energy consumption, number of packets forwarded, end-to-end delay and packet delivery ratio compared to the existing routing protocol. PMID:26771586
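A minimal sketch of the selection rule described above: restrict to neighbors with the minimum hop count, then pick the one minimizing a link cost built from residual energy, free buffer size, and link reliability. The weight values and field names are illustrative assumptions, not the paper's exact cost function.

```python
def link_cost(neighbor, w_e=0.4, w_b=0.3, w_r=0.3):
    # Composite link cost; lower is better. Larger residual energy, more
    # free buffer, and higher reliability all reduce the cost.
    return (w_e / max(neighbor["residual_energy"], 1e-9)
            + w_b / max(neighbor["free_buffer"], 1e-9)
            + w_r / max(neighbor["link_reliability"], 1e-9))

def select_next_hop(neighbors):
    # Restrict to minimum-hop-count neighbors, then break ties by link cost.
    min_hops = min(n["hop_count"] for n in neighbors)
    candidates = [n for n in neighbors if n["hop_count"] == min_hops]
    return min(candidates, key=link_cost)

# example neighbor record (hypothetical fields):
# {"id": 3, "hop_count": 2, "residual_energy": 0.8,
#  "free_buffer": 0.5, "link_reliability": 0.9}
```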
Cardiac Arrhythmia Classification by Multi-Layer Perceptron and Convolution Neural Networks.
Savalia, Shalin; Emamian, Vahid
2018-05-04
The electrocardiogram (ECG) plays an imperative role in the medical field, as it records heart signals over time and is used to discover numerous cardiovascular diseases. If a documented ECG signal has a certain irregularity in its predefined features, this is called arrhythmia, the types of which include tachycardia, bradycardia, supraventricular arrhythmias, and ventricular arrhythmias. This has encouraged us to do research that consists of distinguishing between several arrhythmias by using deep neural network algorithms such as the multi-layer perceptron (MLP) and the convolutional neural network (CNN). The TensorFlow library, developed by Google for deep learning and machine learning, is used in Python to implement the algorithms proposed here. The ECG databases accessible at PhysioBank.com and kaggle.com were used for training, testing, and validation of the MLP and CNN algorithms. The proposed algorithm consists of a four-hidden-layer MLP (with weights and biases) and a four-layer convolutional neural network, which map ECG samples to the different classes of arrhythmia. The accuracy of the algorithm surpasses the performance of current algorithms developed by other cardiologists in both sensitivity and precision.
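A compact tf.keras sketch of a four-hidden-layer MLP of the kind described; the layer widths, input size, and training settings are illustrative, not the paper's exact configuration.

```python
import tensorflow as tf

def build_mlp(n_features, n_classes):
    # Four hidden Dense layers map an ECG feature vector to arrhythmia
    # class probabilities; widths here are assumptions.
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(n_features,)),
        tf.keras.layers.Dense(256, activation="relu"),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(32, activation="relu"),
        tf.keras.layers.Dense(n_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# hypothetical usage with integer class labels:
# model = build_mlp(n_features=187, n_classes=5)
# model.fit(x_train, y_train, epochs=20, validation_split=0.1)
```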
A Hybrid Cellular Genetic Algorithm for Multi-objective Crew Scheduling Problem
NASA Astrophysics Data System (ADS)
Jolai, Fariborz; Assadipour, Ghazal
Crew scheduling is one of the important problems of the airline industry. The problem aims to cover a number of flights by crew members, such that all the flights are covered. In a robust schedule the assignment should be such that the total cost, delays, and unbalanced utilization are minimized. As the problem is NP-hard and the objectives conflict with each other, a multi-objective meta-heuristic called CellDE, which is a hybrid cellular genetic algorithm, is implemented as the optimization method. The proposed algorithm provides the decision maker with a set of non-dominated or Pareto-optimal solutions, and enables them to choose the best one according to their preferences. A set of problems of different sizes is generated and solved using the proposed algorithm. To evaluate the performance of the proposed algorithm, three metrics are suggested, and the diversity and convergence of the achieved Pareto front are appraised. Finally a comparison is made between CellDE and PAES, another meta-heuristic algorithm. The results show the superiority of CellDE.
Liu, Ying; Lita, Lucian Vlad; Niculescu, Radu Stefan; Mitra, Prasenjit; Giles, C Lee
2008-11-06
Owing to new advances in computer hardware, large text databases have become more prevalent than ever. Automatically mining information from these databases proves to be a challenge due to slow pattern/string matching techniques. In this paper we present a new, fast multi-string pattern matching method based on the well-known Aho-Corasick algorithm. Advantages of our algorithm include: the ability to exploit the natural structure of text, the ability to perform significant character shifting, avoiding backtracking jumps that are not useful, efficiency in terms of matching time, and avoiding the typical "sub-string" false positive errors. Our algorithm is applicable to many fields with free text, such as the health care domain and the scientific document field. In this paper, we apply the BSS algorithm to health care data and mine hundreds of thousands of medical concepts from a large Electronic Medical Record (EMR) corpus simultaneously and efficiently. Experimental results show the superiority of our algorithm when compared with top-of-the-line multi-string matching algorithms.
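For context, here is a compact sketch of the classic Aho-Corasick automaton that the paper builds on; the paper's BSS enhancements, such as character shifting, are not reproduced here.

```python
from collections import deque

def build_automaton(patterns):
    # Goto trie, failure links, and output sets for multi-string matching.
    trie, fail, out = [{}], [0], [set()]
    for p in patterns:
        node = 0
        for ch in p:
            if ch not in trie[node]:
                trie[node][ch] = len(trie)
                trie.append({}); fail.append(0); out.append(set())
            node = trie[node][ch]
        out[node].add(p)
    q = deque(trie[0].values())
    while q:                              # BFS sets failure links level by level
        u = q.popleft()
        for ch, v in trie[u].items():
            q.append(v)
            f = fail[u]
            while f and ch not in trie[f]:
                f = fail[f]
            fail[v] = trie[f].get(ch, 0)
            out[v] |= out[fail[v]]        # inherit matches ending at the fallback
    return trie, fail, out

def search(text, automaton):
    # Yield (end_index, pattern) for every match in a single pass over text.
    trie, fail, out = automaton
    node = 0
    for i, ch in enumerate(text):
        while node and ch not in trie[node]:
            node = fail[node]
        node = trie[node].get(ch, 0)
        for p in out[node]:
            yield i, p

# e.g. list(search("acute myocardial infarction",
#                  build_automaton(["myocardial", "infarction"])))
```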
NASA Astrophysics Data System (ADS)
Zhang, Lijuan; Li, Yang; Wang, Junnan; Liu, Ying
2018-03-01
In this paper, we propose a point spread function (PSF) reconstruction method and a joint maximum a posteriori (JMAP) estimation method for adaptive optics image restoration. Using the JMAP method as the basic principle, we establish the joint log-likelihood function of multi-frame adaptive optics (AO) images based on Gaussian image noise models. First, combining the observation conditions and AO system characteristics, a predicted PSF model for the wavefront phase effect is developed; then, we build up iterative solution formulas for the AO image based on our proposed algorithm and describe the implementation of the multi-frame AO image joint deconvolution method. We conduct a series of experiments on simulated and real degraded AO images to evaluate our proposed algorithm. Compared with the Wiener iterative blind deconvolution (Wiener-IBD) algorithm and the Richardson-Lucy IBD algorithm, our algorithm has better restoration effects, including higher peak signal-to-noise ratio (PSNR) and Laplacian sum (LS) values. The research results have application value for actual AO image restoration.
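As a point of reference for the comparison, a minimal Richardson-Lucy deconvolution (the classical core of the Lucy-type baselines) is sketched below; the paper's JMAP multi-frame method itself is more involved and is not reproduced here.

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(img, psf, iters=30):
    # Classic single-frame Richardson-Lucy iteration:
    # est <- est * (psf_mirrored * (img / (psf * est)))
    img = np.asarray(img, dtype=float)
    est = np.full_like(img, img.mean())       # flat initial estimate
    psf_m = psf[::-1, ::-1]                   # mirrored PSF
    for _ in range(iters):
        conv = fftconvolve(est, psf, mode="same")
        ratio = img / np.maximum(conv, 1e-12)  # guard against division by zero
        est *= fftconvolve(ratio, psf_m, mode="same")
    return est
```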
NASA Astrophysics Data System (ADS)
Lan, Ma; Xiao, Wen; Chen, Zonghui; Hao, Hongliang; Pan, Feng
2018-01-01
Real-time micro-vibration measurement is widely used in engineering applications. It is difficult for traditional optical detection methods to meet real-time requirements for relatively high-frequency, multi-spot synchronous measurement of a region, especially at the nanoscale. Based on heterodyne interference, an experimental system for real-time measurement of micro-vibration is constructed to satisfy this demand in engineering applications. The vibration response signal is measured by combining optical heterodyne interferometry with a high-speed CMOS-DVR image acquisition system. Then, by extracting and processing multiple pixels at the same time, four digital demodulation techniques are implemented to simultaneously acquire the vibration velocity of the target from the recorded sequences of images. Different kinds of demodulation algorithms are analyzed, and the results show that the four demodulation algorithms suit different interference signals. Both the autocorrelation and the cross-correlation algorithms meet the needs of real-time measurement. The autocorrelation algorithm demodulates the frequency more accurately, while the cross-correlation algorithm is more accurate in recovering the amplitude.
BaTMAn: Bayesian Technique for Multi-image Analysis
NASA Astrophysics Data System (ADS)
Casado, J.; Ascasibar, Y.; García-Benito, R.; Guidi, G.; Choudhury, O. S.; Bellocchi, E.; Sánchez, S. F.; Díaz, A. I.
2016-12-01
Bayesian Technique for Multi-image Analysis (BaTMAn) characterizes any astronomical dataset containing spatial information and performs a tessellation based on the measurements and errors provided as input. The algorithm iteratively merges spatial elements as long as they are statistically consistent with carrying the same information (i.e. identical signal within the errors). The output segmentations successfully adapt to the underlying spatial structure, regardless of its morphology and/or the statistical properties of the noise. BaTMAn identifies (and keeps) all the statistically-significant information contained in the input multi-image (e.g. an IFS datacube). The main aim of the algorithm is to characterize spatially-resolved data prior to their analysis.
NASA Astrophysics Data System (ADS)
Qiu, J. P.; Niu, D. X.
The micro-grid is one of the key technologies of future energy supply. Taking the economic planning, reliability, and environmental protection of the micro-grid as a basis, we analyze multi-strategy objective programming problems for a micro-grid containing wind power, solar power, batteries and a micro gas turbine. We establish mathematical models of the generation characteristics and energy dissipation of each source, and convert the multi-objective micro-grid planning function under different operating strategies into a single-objective model based on the AHP method. An example analysis shows that a hybrid dynamic ant colony and genetic algorithm can obtain the optimal power output of this model.
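A minimal sketch of the AHP step used to collapse the multi-objective model to a single objective: priority weights taken from the principal eigenvector of a pairwise-comparison matrix. The example matrix values are illustrative assumptions.

```python
import numpy as np

def ahp_weights(pairwise):
    # Priority weights from the principal eigenvector of an AHP
    # pairwise-comparison matrix, normalized to sum to one.
    vals, vecs = np.linalg.eig(np.asarray(pairwise, dtype=float))
    v = np.real(vecs[:, np.argmax(np.real(vals))])
    return v / v.sum()

# hypothetical comparison of economy vs reliability vs environment:
# w = ahp_weights([[1, 3, 5], [1/3, 1, 2], [1/5, 1/2, 1]])
# single_objective = w @ [f_econ, f_rel, f_env]
```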
NASA Astrophysics Data System (ADS)
Nejlaoui, Mohamed; Houidi, Ajmi; Affi, Zouhaier; Romdhane, Lotfi
2017-10-01
This paper deals with the robust safety design optimization of a rail vehicle system moving on short-radius curved tracks. A combined multi-objective imperialist competitive algorithm and Monte Carlo method is developed and used for the robust multi-objective optimization of the rail vehicle system. This robust optimization of rail vehicle safety considers simultaneously the derailment angle and its standard deviation, taking the uncertainties of the design parameters into account. The obtained results show that the robust design significantly reduces the sensitivity of rail vehicle safety to design parameter uncertainties compared to the deterministic design and to literature results.
NASA Astrophysics Data System (ADS)
Ding, Zhongan; Gao, Chen; Yan, Shengteng; Yang, Canrong
2017-10-01
The power user electric energy data acquisition system (PUEEDAS) is an important part of the smart grid. This paper builds a multi-objective optimization model for the performance of the PUEEDAS from the combined viewpoint of comprehensive benefits and cost. The Chebyshev decomposition approach is used to decompose the multi-objective optimization problem, and we design an MOEA/D evolutionary algorithm to solve it. By analyzing the Pareto optimal solution set of the multi-objective optimization problem and comparing it with monitored values, the direction for optimizing the performance of the PUEEDAS is identified. Finally, an example is designed for specific analysis.
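For reference, the Chebyshev (Tchebycheff) decomposition used by MOEA/D scalarizes the objectives per weight vector as sketched below; the weight vectors and ideal point in the usage note are assumptions of the example.

```python
import numpy as np

def chebyshev(f, lam, z_star):
    # Tchebycheff scalarization: g(x | lam, z*) = max_i lam_i * |f_i(x) - z*_i|.
    # MOEA/D keeps one solution per weight vector lam and accepts a candidate
    # when it lowers this value for a neighboring subproblem.
    return np.max(np.asarray(lam) * np.abs(np.asarray(f) - np.asarray(z_star)))

# e.g. two objectives (benefit shortfall, cost) with ideal point z* = (0, 0):
# chebyshev(f=[0.3, 0.7], lam=[0.5, 0.5], z_star=[0.0, 0.0])
```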
Network-centric decision architecture for financial or 1/f data models
NASA Astrophysics Data System (ADS)
Jaenisch, Holger M.; Handley, James W.; Massey, Stoney; Case, Carl T.; Songy, Claude G.
2002-12-01
This paper presents a decision architecture algorithm for training neural equation based networks to make autonomous multi-goal oriented, multi-class decisions. These architectures make decisions based on their individual goals and draw from the same network centric feature set. Traditionally, these architectures are comprised of neural networks that offer marginal performance due to lack of convergence of the training set. We present an approach for autonomously extracting sample points as I/O exemplars for generation of multi-branch, multi-node decision architectures populated by adaptively derived neural equations. To test the robustness of this architecture, open source data sets in the form of financial time series were used, requiring a three-class decision space analogous to the lethal, non-lethal, and clutter discrimination problem. This algorithm and the results of its application are presented here.
Combinatorial Optimization in Project Selection Using Genetic Algorithm
NASA Astrophysics Data System (ADS)
Dewi, Sari; Sawaluddin
2018-01-01
This paper discusses the problem of project selection in the presence of two objective functions, maximizing profit and minimizing cost, under limited resource availability and available time, so that resources must be allocated across projects. These resources include human resources, machine resources, and raw materials, and they are treated as constraints so that the predetermined budget is not exceeded. The problem can thus be formulated mathematically as a multi-objective program whose constraints must be satisfied. To assist the project selection process, a multi-objective combinatorial optimization approach is used to obtain an optimal solution for selecting the right projects. A multi-objective genetic algorithm is then described as one such combinatorial optimization method to simplify the project selection process at large scale.
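A minimal genetic-algorithm sketch for this kind of budget-constrained project selection is shown below, with profit maximization and a penalty for exceeding the budget standing in for the full multi-objective formulation. The data, penalty weight, and operators are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
profit = np.array([90, 60, 40, 75, 30])        # per-project profit (toy data)
cost   = np.array([50, 30, 25, 45, 15])        # per-project cost (toy data)
budget = 100

def fitness(x):
    """Maximize total profit; infeasible selections are penalized (sketch)."""
    return profit @ x - 1000 * max(0, cost @ x - budget)

pop = rng.integers(0, 2, size=(30, 5))         # random initial selections
for _ in range(100):
    scores = np.array([fitness(x) for x in pop])
    parents = pop[np.argsort(scores)[-10:]]    # truncation selection
    children = []
    for _ in range(len(pop)):
        a, b = parents[rng.integers(10, size=2)]
        cut = rng.integers(1, 5)
        child = np.r_[a[:cut], b[cut:]]        # one-point crossover
        flip = rng.random(5) < 0.1             # bit-flip mutation
        children.append(np.where(flip, 1 - child, child))
    pop = np.array(children)

best = max(pop, key=fitness)
print(best, profit @ best, cost @ best)
```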
Fast parallel algorithm for slicing STL based on pipeline
NASA Astrophysics Data System (ADS)
Ma, Xulong; Lin, Feng; Yao, Bo
2016-05-01
In the field of Additive Manufacturing, current research on data processing mainly focuses on slicing large STL files or complicated CAD models. To improve efficiency and reduce slicing time, a parallel algorithm has great advantages. However, traditional algorithms cannot make full use of multi-core CPU hardware resources. In this paper, a fast parallel algorithm is presented to speed up data processing. The algorithm is designed around a pipeline mode, and the complexity of the pipeline algorithm is analyzed theoretically. To evaluate the performance of the new algorithm, the effects of thread count and layer count are investigated in a series of experiments. The experimental results show that thread count and layer count are two significant factors in the speedup ratio: speedup versus thread count shows a positive relationship that agrees well with Amdahl's law, and speedup versus layer count also shows a positive relationship, agreeing with Gustafson's law. The new algorithm uses topological information to compute contours in parallel. A data-parallel algorithm is also implemented in the experiments to show that the pipeline parallel mode is more efficient, and a concluding case study demonstrates the strong performance of the new parallel algorithm. Compared with the serial slicing algorithm, the new pipeline parallel algorithm makes full use of multi-core CPU hardware and accelerates the slicing process; compared with the data-parallel slicing algorithm, the pipeline model achieves a much higher speedup ratio and efficiency.
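The pipeline mode can be sketched generically: each stage runs in its own thread, consuming from one queue and feeding the next, so different layers occupy different stages simultaneously. The sketch below is an illustrative Python analogue under these assumptions, not the paper's STL slicer.

```python
import threading, queue

def pipeline(stages, items):
    """Run a list of stage functions as a thread pipeline: each stage
    consumes from one queue and feeds the next (illustrative sketch)."""
    qs = [queue.Queue() for _ in range(len(stages) + 1)]

    def worker(fn, qin, qout):
        while True:
            item = qin.get()
            if item is None:           # sentinel: propagate shutdown downstream
                qout.put(None)
                break
            qout.put(fn(item))

    threads = [threading.Thread(target=worker, args=(fn, qs[i], qs[i + 1]))
               for i, fn in enumerate(stages)]
    for t in threads:
        t.start()
    for item in items:                 # feed all layers into the first stage
        qs[0].put(item)
    qs[0].put(None)
    results = []
    while (out := qs[-1].get()) is not None:
        results.append(out)
    for t in threads:
        t.join()
    return results

# Three toy stages standing in for: read facets -> intersect plane -> link contour.
print(pipeline([lambda z: z + 0.1, lambda z: z * 2, lambda z: round(z, 2)],
               [0.0, 1.0, 2.0]))
```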
Minimum Interference Channel Assignment Algorithm for Multicast in a Wireless Mesh Network.
Choi, Sangil; Park, Jong Hyuk
2016-12-02
Wireless mesh networks (WMNs) have been considered as one of the key technologies for the configuration of wireless machines since they emerged. In a WMN, wireless routers provide multi-hop wireless connectivity between hosts in the network and also allow them to access the Internet via gateway devices. Wireless routers are typically equipped with multiple radios operating on different channels to increase network throughput. Multicast is a form of communication that delivers data from a source to a set of destinations simultaneously. It is used in a number of applications, such as distributed games, distance education, and video conferencing. In this study, we address a channel assignment problem for multicast in multi-radio multi-channel WMNs. In a multi-radio multi-channel WMN, two nearby nodes will interfere with each other and cause a throughput decrease when they transmit on the same channel. Thus, an important goal for multicast channel assignment is to reduce the interference among networked devices. We have developed a minimum interference channel assignment (MICA) algorithm for multicast that accurately models the interference relationship between pairs of multicast tree nodes using the concept of the interference factor and assigns channels to tree nodes to minimize interference within the multicast tree. Simulation results show that MICA achieves higher throughput and lower end-to-end packet delay compared with an existing channel assignment algorithm named multi-channel multicast (MCM). In addition, MICA achieves much lower throughput variation among the destination nodes than MCM.
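A hedged sketch of the general idea, greedy minimum-interference channel assignment over a multicast tree, is given below. The conflict lists, channel set, and tie-breaking rule are illustrative assumptions; this is not the MICA algorithm itself.

```python
def assign_channels(tree_edges, conflicts, channels):
    """Greedy sketch: walk the multicast tree edges in order and give each
    the channel least used among its interference-range neighbours."""
    assignment = {}
    for edge in tree_edges:
        used = [assignment.get(other) for other in conflicts.get(edge, ())]
        # Pick the channel that appears least often among interfering edges.
        best = min(channels, key=lambda ch: used.count(ch))
        assignment[edge] = best
    return assignment

tree_edges = ["a-b", "b-c", "b-d", "d-e"]            # toy multicast tree
conflicts = {"b-c": ["a-b"], "b-d": ["a-b", "b-c"], "d-e": ["b-d", "b-c"]}
print(assign_channels(tree_edges, conflicts, channels=[1, 6, 11]))
```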
NASA Astrophysics Data System (ADS)
Zheng, Yan
2015-03-01
The Internet of things (IoT), focused on providing users with information exchange and intelligent control, has attracted much attention from researchers all over the world since the beginning of this century. The IoT consists of a large number of sensor nodes and data processing units, and its most important characteristics are constrained energy, efficient communication, and high redundancy. As the number of sensor nodes grows, communication efficiency and available communication bandwidth become bottlenecks. Much existing research addresses cases with few joins, which is not appropriate for the growing number of multi-join queries across the Internet of things. To improve communication efficiency between parallel units in a distributed sensor network, this paper proposes a parallel query optimization algorithm based on a distribution-attribute cost graph. The algorithm considers stored-information relations and network communication cost, and an optimized information exchange rule is established. The experimental results show that the algorithm performs well and effectively uses the resources of each node in the distributed sensor network, improving the execution efficiency of multi-join queries across nodes.
NASA Astrophysics Data System (ADS)
Meng, Luming; Sheong, Fu Kit; Zeng, Xiangze; Zhu, Lizhe; Huang, Xuhui
2017-07-01
Constructing Markov state models from large-scale molecular dynamics simulation trajectories is a promising approach to dissect the kinetic mechanisms of complex chemical and biological processes. Combined with transition path theory, Markov state models can be applied to identify all pathways connecting any conformational states of interest. However, the identified pathways can be too complex to comprehend, especially for multi-body processes where numerous parallel pathways with comparable flux probability often coexist. Here, we have developed a path lumping method to group these parallel pathways into metastable path channels for analysis. We define the similarity between two pathways as the intercrossing flux between them and then apply the spectral clustering algorithm to lump these pathways into groups. We demonstrate the power of our method by applying it to two systems: a 2D-potential consisting of four metastable energy channels and the hydrophobic collapse process of two hydrophobic molecules. In both cases, our algorithm successfully reveals the metastable path channels. We expect this path lumping algorithm to be a promising tool for revealing unprecedented insights into the kinetic mechanisms of complex multi-body processes.
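The lumping step can be sketched with off-the-shelf tools: treating the pairwise intercrossing flux as a precomputed affinity matrix, spectral clustering groups the pathways into channels. The flux matrix below is hypothetical toy data.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

# Hypothetical pairwise intercrossing-flux matrix between 6 pathways:
# large entries mean two pathways exchange much flux (same channel).
flux = np.array([[0, 9, 8, 1, 1, 0],
                 [9, 0, 7, 1, 0, 1],
                 [8, 7, 0, 0, 1, 1],
                 [1, 1, 0, 0, 9, 8],
                 [1, 0, 1, 9, 0, 7],
                 [0, 1, 1, 8, 7, 0]], dtype=float)

# Spectral clustering on the precomputed affinity lumps the pathways
# into metastable path channels (here, two channels are recovered).
labels = SpectralClustering(n_clusters=2, affinity="precomputed",
                            random_state=0).fit_predict(flux)
print(labels)   # e.g. [0 0 0 1 1 1]
```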
NASA Astrophysics Data System (ADS)
Mayvan, Ali D.; Aghaeinia, Hassan; Kazemi, Mohammad
2017-12-01
This paper focuses on robust transceiver design for throughput enhancement on the interference channel (IC), under imperfect channel state information (CSI). In this paper, two algorithms are proposed to improve the throughput of the multi-input multi-output (MIMO) IC. Each transmitter and receiver has, respectively, M and N antennas and IC operates in a time division duplex mode. In the first proposed algorithm, each transceiver adjusts its filter to maximize the expected value of signal-to-interference-plus-noise ratio (SINR). On the other hand, the second algorithm tries to minimize the variances of the SINRs to hedge against the variability due to CSI error. Taylor expansion is exploited to approximate the effect of CSI imperfection on mean and variance. The proposed robust algorithms utilize the reciprocity of wireless networks to optimize the estimated statistical properties in two different working modes. Monte Carlo simulations are employed to investigate sum rate performance of the proposed algorithms and the advantage of incorporating variation minimization into the transceiver design.
NASA Astrophysics Data System (ADS)
Zarchi, Milad; Attaran, Behrooz
2017-11-01
This study develops a mathematical model to investigate the behaviour of adaptable shock absorber dynamics for the six-degree-of-freedom aircraft model in the taxiing phase. The purpose of this research is to design a proportional-integral-derivative technique for control of an active vibration absorber system using a hydraulic nonlinear actuator based on the bees algorithm. This optimization algorithm is inspired by the natural intelligent foraging behaviour of honey bees. The neighbourhood search strategy is used to find better solutions around the previous one. The parameters of the controller are adjusted by minimizing the aircraft's acceleration and impact force as the multi-objective function. The major advantages of this algorithm over other optimization algorithms are its simplicity, flexibility and robustness. The results of the numerical simulation indicate that the active suspension increases the comfort of the ride for passengers and the fatigue life of the structure. This is achieved by decreasing the impact force, displacement and acceleration significantly.
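A minimal bees-algorithm sketch follows: scouts sample the search space and recruited bees refine shrinking neighbourhoods around the best sites. The cost function is a toy stand-in for the paper's acceleration-plus-impact-force objective; the bounds and parameters are illustrative assumptions.

```python
import random

def bees_algorithm(cost, bounds, n_scout=20, n_best=5, n_recruit=10,
                   patch=0.1, iters=100):
    """Minimal bees-algorithm sketch: scouts explore, recruited bees
    search neighbourhoods around the best sites (illustrative only)."""
    def rand_site():
        return [random.uniform(lo, hi) for lo, hi in bounds]

    sites = [rand_site() for _ in range(n_scout)]
    for _ in range(iters):
        sites.sort(key=cost)
        new_sites = []
        for site in sites[:n_best]:            # neighbourhood search
            local = [site] + [
                [min(max(x + patch * random.uniform(-1, 1) * (hi - lo), lo), hi)
                 for x, (lo, hi) in zip(site, bounds)]
                for _ in range(n_recruit)]
            new_sites.append(min(local, key=cost))
        # Remaining bees scout randomly for new patches.
        new_sites += [rand_site() for _ in range(n_scout - n_best)]
        sites = new_sites
    return min(sites, key=cost)

# Toy objective standing in for "acceleration + impact force" (hypothetical).
cost = lambda g: (g[0] - 2) ** 2 + (g[1] - 0.5) ** 2 + (g[2] - 1) ** 2
print(bees_algorithm(cost, bounds=[(0, 10)] * 3))      # near [2, 0.5, 1]
```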
DOE Office of Scientific and Technical Information (OSTI.GOV)
Acciarri, R.; Adams, C.; An, R.
The development and operation of Liquid-Argon Time-Projection Chambers for neutrino physics has created a need for new approaches to pattern recognition in order to fully exploit the imaging capabilities offered by this technology. Whereas the human brain can excel at identifying features in the recorded events, it is a significant challenge to develop an automated, algorithmic solution. The Pandora Software Development Kit provides functionality to aid the design and implementation of pattern-recognition algorithms. It promotes the use of a multi-algorithm approach to pattern recognition, in which individual algorithms each address a specific task in a particular topology. Many tens of algorithms then carefully build up a picture of the event and, together, provide a robust automated pattern-recognition solution. This paper describes details of the chain of over one hundred Pandora algorithms and tools used to reconstruct cosmic-ray muon and neutrino events in the MicroBooNE detector. Metrics that assess the current pattern-recognition performance are presented for simulated MicroBooNE events, using a selection of final-state event topologies.
Progress Towards a Rad-Hydro Code for Modern Computing Architectures LA-UR-10-02825
NASA Astrophysics Data System (ADS)
Wohlbier, J. G.; Lowrie, R. B.; Bergen, B.; Calef, M.
2010-11-01
We are entering an era of high performance computing where data movement is the overwhelming bottleneck to scalable performance, as opposed to the speed of floating-point operations per processor. All multi-core hardware paradigms, whether heterogeneous or homogeneous, be it the Cell processor, GPGPU, or multi-core x86, share this common trait. In multi-physics applications such as inertial confinement fusion or astrophysics, one may be solving multi-material hydrodynamics with tabular equation of state data lookups, radiation transport, nuclear reactions, and charged particle transport in a single time cycle. The algorithms are intensely data dependent, e.g., EOS, opacity, nuclear data, and multi-core hardware memory restrictions are forcing code developers to rethink code and algorithm design. For the past two years LANL has been funding a small effort referred to as Multi-Physics on Multi-Core to explore ideas for code design as pertaining to inertial confinement fusion and astrophysics applications. The near term goals of this project are to have a multi-material radiation hydrodynamics capability, with tabular equation of state lookups, on cartesian and curvilinear block structured meshes. In the longer term we plan to add fully implicit multi-group radiation diffusion and material heat conduction, and block structured AMR. We will report on our progress to date.
Low Phase Noise Fiber Optics Links for Space Applications
2005-07-13
[Figure captions, translated from French: active link at 874.2 MHz with integrated photo-oscillator, for optical losses of 17, 18, 20, 23, and 25 dB.]
NASA Astrophysics Data System (ADS)
Li, Jing; Xie, Weixin; Pei, Jihong
2018-03-01
Sea-land segmentation is one of the key technologies for sea-target detection in remote sensing images. Existing algorithms suffer from low accuracy, low universality, and poor automation. This paper puts forward a sea-land segmentation algorithm based on multi-feature fusion for large-field remote sensing images, excluding islands. First, the coastline data are extracted and all land area is labeled using the geographic information in the large-field remote sensing image. Second, three features (local entropy, local texture, and local gradient mean) are extracted in the sea-land border area and combined into a 3D feature vector. A multi-Gaussian model is then adopted to describe the 3D feature vectors of the sea background at the edge of the coastline; based on this model, sea and land pixels near the coastline are classified more precisely. Finally, the coarse and fine segmentation results are fused to obtain an accurate sea-land segmentation. Subjective visual comparison of the experimental results shows that the proposed method has high segmentation accuracy, wide applicability, and strong robustness to disturbances.
Multi-AUV Target Search Based on Bioinspired Neurodynamics Model in 3-D Underwater Environments.
Cao, Xiang; Zhu, Daqi; Yang, Simon X
2016-11-01
Target search in 3-D underwater environments is a challenge in multiple autonomous underwater vehicles (multi-AUVs) exploration. This paper focuses on an effective strategy for multi-AUV target search in the 3-D underwater environments with obstacles. First, the Dempster-Shafer theory of evidence is applied to extract information of environment from the sonar data to build a grid map of the underwater environments. Second, a topologically organized bioinspired neurodynamics model based on the grid map is constructed to represent the dynamic environment. The target globally attracts the AUVs through the dynamic neural activity landscape of the model, while the obstacles locally push the AUVs away to avoid collision. Finally, the AUVs plan their search path to the targets autonomously by a steepest gradient descent rule. The proposed algorithm deals with various situations, such as static targets search, dynamic targets search, and one or several AUVs break down in the 3-D underwater environments with obstacles. The simulation results show that the proposed algorithm is capable of guiding multi-AUV to achieve search task of multiple targets with higher efficiency and adaptability compared with other algorithms.
Nonlinear information fusion algorithms for data-efficient multi-fidelity modelling.
Perdikaris, P; Raissi, M; Damianou, A; Lawrence, N D; Karniadakis, G E
2017-02-01
Multi-fidelity modelling enables accurate inference of quantities of interest by synergistically combining realizations of low-cost/low-fidelity models with a small set of high-fidelity observations. This is particularly effective when the low- and high-fidelity models exhibit strong correlations, and can lead to significant computational gains over approaches that solely rely on high-fidelity models. However, in many cases of practical interest, low-fidelity models can only be well correlated to their high-fidelity counterparts for a specific range of input parameters, and potentially return wrong trends and erroneous predictions if probed outside of their validity regime. Here we put forth a probabilistic framework based on Gaussian process regression and nonlinear autoregressive schemes that is capable of learning complex nonlinear and space-dependent cross-correlations between models of variable fidelity, and can effectively safeguard against low-fidelity models that provide wrong trends. This introduces a new class of multi-fidelity information fusion algorithms that provide a fundamental extension to the existing linear autoregressive methodologies, while still maintaining the same algorithmic complexity and overall computational cost. The performance of the proposed methods is tested in several benchmark problems involving both synthetic and real multi-fidelity datasets from computational fluid dynamics simulations.
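The nonlinear autoregressive construction can be sketched with standard Gaussian process regression: a GP is first fit to plentiful low-fidelity data, and a second GP learns the high-fidelity response from inputs augmented with the low-fidelity prediction. The toy function pair below is a common benchmark of this type; kernel choices are left at library defaults, so this is a sketch rather than the paper's framework.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

# Toy low-/high-fidelity pair (illustrative): the high-fidelity response
# is a nonlinear transform of the low-fidelity one.
f_lo = lambda x: np.sin(8 * np.pi * x)
f_hi = lambda x: (x - np.sqrt(2)) * f_lo(x) ** 2

x_lo = np.linspace(0, 1, 50)[:, None]          # plentiful cheap data
x_hi = np.linspace(0, 1, 8)[:, None]           # scarce expensive data

gp_lo = GaussianProcessRegressor().fit(x_lo, f_lo(x_lo).ravel())

# Nonlinear autoregressive step: learn f_hi(x) = g(x, f_lo(x)) with a GP
# whose inputs are augmented by the low-fidelity prediction at x.
aug = np.hstack([x_hi, gp_lo.predict(x_hi)[:, None]])
gp_hi = GaussianProcessRegressor().fit(aug, f_hi(x_hi).ravel())

x_test = np.linspace(0, 1, 5)[:, None]
aug_test = np.hstack([x_test, gp_lo.predict(x_test)[:, None]])
print(gp_hi.predict(aug_test))                 # multi-fidelity prediction
print(f_hi(x_test).ravel())                    # ground truth for comparison
```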
NASA Astrophysics Data System (ADS)
Kim, Hyo-Su; Kim, Dong-Hoi
The dynamic channel allocation (DCA) scheme in multi-cell systems causes a serious inter-cell interference (ICI) problem for some existing calls when channels for new calls are allocated. Such a problem can be addressed by an advanced centralized DCA design that minimizes ICI. Thus, in this paper, a centralized DCA is developed for the downlink of multi-cell orthogonal frequency division multiple access (OFDMA) systems with full spectral reuse. In practice, however, the search space of channel assignments for a centralized DCA scheme in multi-cell systems grows exponentially with the number of required calls, channels, and cells; the problem is NP-hard, and finding an optimum channel allocation is currently intractable. We therefore propose an ant colony optimization (ACO) based DCA scheme using a low-complexity ACO heuristic to solve the aforementioned problem. Simulation results demonstrate significant improvements over existing schemes in terms of the grade of service (GoS) and the forced termination probability of existing calls, without degrading the average throughput of the system.
Multi-Robot Coalitions Formation with Deadlines: Complexity Analysis and Solutions.
Guerrero, Jose; Oliver, Gabriel; Valero, Oscar
2017-01-01
Multi-robot task allocation is one of the main problems to address in order to design a multi-robot system, very especially when robots form coalitions that must carry out tasks before a deadline. A lot of factors affect the performance of these systems and among them, this paper is focused on the physical interference effect, produced when two or more robots want to access the same point simultaneously. To our best knowledge, this paper presents the first formal description of multi-robot task allocation that includes a model of interference. Thanks to this description, the complexity of the allocation problem is analyzed. Moreover, the main contribution of this paper is to provide the conditions under which the optimal solution of the aforementioned allocation problem can be obtained solving an integer linear problem. The optimal results are compared to previous allocation algorithms already proposed by the first two authors of this paper and with a new method proposed in this paper. The results obtained show how the new task allocation algorithms reach up more than an 80% of the median of the optimal solution, outperforming previous auction algorithms with a huge reduction of the execution time.
NASA Astrophysics Data System (ADS)
Peng, Ao-Ping; Li, Zhi-Hui; Wu, Jun-Lin; Jiang, Xin-Yu
2016-12-01
Based on previous research on the Gas-Kinetic Unified Algorithm (GKUA) for flows ranging from highly rarefied free-molecule flow through the transition regime to the continuum, a new implicit scheme of the cell-centered finite volume method is presented for directly solving the unified Boltzmann model equation covering all flow regimes. In view of the difficulty of generating a high-quality single-block grid system for complex irregular bodies, a multi-block docking grid generation method is designed on the basis of data transmission between blocks, and a data structure is constructed for processing arbitrary connection relations between blocks with high efficiency and reliability. As a result, the gas-kinetic unified algorithm with the implicit scheme and multi-block docking grid is established for the first time and used to solve reentry flow problems around multiple bodies covering all flow regimes, with Knudsen numbers ranging from 10 to 3.7E-6. The implicit and explicit schemes are applied to computing and analyzing supersonic flows in the near-continuum and continuum regimes around a circular cylinder, with careful comparison between the two. It is shown that the present algorithm and modelling possess much higher computational efficiency and faster convergence. Flow problems including two and three side-by-side cylinders are simulated from highly rarefied to near-continuum regimes, and the computed results agree well with related DSMC simulations and theoretical analysis, verifying the accuracy and reliability of the present method. It is observed that as the spacing between the bodies decreases, the obstruction at the cylindrical throat grows, the flow field around each body becomes more obviously asymmetrical, and the normal force coefficient increases; in the near-continuum transitional regime of near-space flight, once the spacing increases to six times the diameter of a single body, the interference effects between the bodies become negligible. Computing practice has confirmed that the present method is feasible for computing the aerodynamics and revealing the flow mechanism around complex multi-body vehicles across all flow regimes, from the gas-kinetic point of view of solving the unified Boltzmann model velocity distribution function equation.
Energy-Efficient Deadline-Aware Data-Gathering Scheme Using Multiple Mobile Data Collectors.
Dasgupta, Rumpa; Yoon, Seokhoon
2017-04-01
In wireless sensor networks, the data collected by sensors are usually forwarded to the sink through multi-hop forwarding. However, multi-hop forwarding can be inefficient due to the energy hole problem and high communications overhead. Moreover, when the monitored area is large and the number of sensors is small, sensors cannot send the data via multi-hop forwarding due to the lack of network connectivity. In order to address those problems of multi-hop forwarding, in this paper, we consider a data collection scheme that uses mobile data collectors (MDCs), which visit sensors and collect data from them. Due to the recent breakthroughs in wireless power transfer technology, MDCs can also be used to recharge the sensors to keep them from draining their energy. In MDC-based data-gathering schemes, a big challenge is how to find the MDCs' traveling paths in a balanced way, such that their energy consumption is minimized and the packet-delay constraint is satisfied. Therefore, in this paper, we aim at finding the MDCs' paths, taking energy efficiency and delay constraints into account. We first define an optimization problem, named the delay-constrained energy minimization (DCEM) problem, to find the paths for MDCs. An integer linear programming problem is formulated to find the optimal solution. We also propose a two-phase path-selection algorithm to efficiently solve the DCEM problem. Simulations are performed to compare the performance of the proposed algorithms with two heuristics algorithms for the vehicle routing problem under various scenarios. The simulation results show that the proposed algorithms can outperform existing algorithms in terms of energy efficiency and packet delay.
Freer, Phoebe E.; Slanetz, Priscilla J.; Haas, Jennifer S.; Tung, Nadine M.; Hughes, Kevin S.; Armstrong, Katrina; Semine, A. Alan; Troyan, Susan L.; Birdwell, Robyn L.
2015-01-01
Purpose: Stemming from breast density notification legislation in Massachusetts effective 2015, we sought to develop a collaborative evidence-based approach to density notification that could be used by practitioners across the state. Our goal was to develop an evidence-based consensus management algorithm to help patients and health care providers follow best practices to implement a coordinated, evidence-based, cost-effective, sustainable practice and to standardize care in recommendations for supplemental screening. Methods: We formed the Massachusetts Breast Risk Education and Assessment Task Force (MA-BREAST), a multi-institutional, multi-disciplinary panel of expert radiologists, surgeons, primary care physicians, and oncologists, to develop a collaborative approach to density notification legislation. Using evidence-based data from the Institute for Clinical and Economic Review (ICER), the Cochrane review, National Comprehensive Cancer Network (NCCN) guidelines, American Cancer Society (ACS) recommendations, and American College of Radiology (ACR) appropriateness criteria, the group collaboratively developed an evidence-based best-practices algorithm. Results: The expert consensus algorithm uses breast density as one element in the risk stratification to determine the need for supplemental screening. Women with dense breasts who are otherwise at low risk (<15% lifetime risk) do not routinely require supplemental screening per the expert consensus. Women at high risk (>20% lifetime risk) should consider supplemental screening MRI in addition to routine mammography, regardless of breast density. Conclusion: We report the development of a multi-disciplinary collaborative approach to density notification and propose a risk stratification algorithm to assess an individual woman's level of risk and determine the need for supplemental screening.
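The risk-stratification logic described in the abstract reduces to a small decision rule; the sketch below encodes only the two thresholds stated above and is illustrative, not clinical software.

```python
def supplemental_screening(lifetime_risk_pct, dense_breasts):
    """Sketch of the consensus logic described above: risk stratification
    drives supplemental screening, with density one input among several.
    Thresholds follow the abstract; this is not a clinical tool."""
    if lifetime_risk_pct > 20:
        # High risk: consider supplemental MRI regardless of density.
        return "routine mammography + consider supplemental MRI"
    if lifetime_risk_pct < 15 and dense_breasts:
        # Dense breasts but otherwise low risk: no routine supplemental screening.
        return "routine mammography"
    return "individualized discussion (intermediate risk)"

print(supplemental_screening(25, dense_breasts=False))
print(supplemental_screening(10, dense_breasts=True))
```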
Automatic detection of multi-level acetowhite regions in RGB color images of the uterine cervix
NASA Astrophysics Data System (ADS)
Lange, Holger
2005-04-01
Uterine cervical cancer is the second most common cancer among women worldwide. Colposcopy is a diagnostic method used to detect cancer precursors and cancer of the uterine cervix, whereby a physician (colposcopist) visually inspects the metaplastic epithelium on the cervix for certain distinctly abnormal morphologic features. A contrast agent, a 3-5% acetic acid solution, is used, causing abnormal and metaplastic epithelia to turn white. The colposcopist considers diagnostic features such as the acetowhite, blood vessel structure, and lesion margin to derive a clinical diagnosis. STI Medical Systems is developing a Computer-Aided-Diagnosis (CAD) system for colposcopy -- ColpoCAD, a complex image analysis system that at its core assesses the same visual features as used by colposcopists. The acetowhite feature has been identified as one of the most important individual predictors of lesion severity. Here, we present the details and preliminary results of a multi-level acetowhite region detection algorithm for RGB color images of the cervix, including the detection of the anatomic features: cervix, os and columnar region, which are used for the acetowhite region detection. The RGB images are assumed to be glare free, either obtained by cross-polarized image acquisition or glare removal pre-processing. The basic approach of the algorithm is to extract a feature image from the RGB image that provides a good acetowhite to cervix background ratio, to segment the feature image using novel pixel grouping and multi-stage region-growing algorithms that provide region segmentations with different levels of detail, to extract the acetowhite regions from the region segmentations using a novel region selection algorithm, and then finally to extract the multi-levels from the acetowhite regions using multiple thresholds. The performance of the algorithm is demonstrated using human subject data.
Equations of Motion of a Ground Moving Target for a Multi-Channel Spaceborne SAR
2009-03-01
[Report front matter, translated from French: © Her Majesty the Queen in Right of Canada, as represented by the Minister of National Defence, 2009. The work leading to this technical memorandum aimed to derive a set of equations of motion for a ground moving target observed by a multi-channel spaceborne SAR (e.g., RADARSAT-2 or TerraSAR-X). Citation: Dragošević; DRDC Ottawa TM 2008-326; Defence R&D Canada - Ottawa; March 2009.]
Urbanowicz, Ryan J; Kiralis, Jeff; Sinnott-Armstrong, Nicholas A; Heberling, Tamra; Fisher, Jonathan M; Moore, Jason H
2012-10-01
Geneticists who look beyond single locus disease associations require additional strategies for the detection of complex multi-locus effects. Epistasis, a multi-locus masking effect, presents a particular challenge, and has been the target of bioinformatic development. Thorough evaluation of new algorithms calls for simulation studies in which known disease models are sought. To date, the best methods for generating simulated multi-locus epistatic models rely on genetic algorithms. However, such methods are computationally expensive, difficult to adapt to multiple objectives, and unlikely to yield models with a precise form of epistasis which we refer to as pure and strict. Purely and strictly epistatic models constitute the worst-case in terms of detecting disease associations, since such associations may only be observed if all n-loci are included in the disease model. This makes them an attractive gold standard for simulation studies considering complex multi-locus effects. We introduce GAMETES, a user-friendly software package and algorithm which generates complex biallelic single nucleotide polymorphism (SNP) disease models for simulation studies. GAMETES rapidly and precisely generates random, pure, strict n-locus models with specified genetic constraints. These constraints include heritability, minor allele frequencies of the SNPs, and population prevalence. GAMETES also includes a simple dataset simulation strategy which may be utilized to rapidly generate an archive of simulated datasets for given genetic models. We highlight the utility and limitations of GAMETES with an example simulation study using MDR, an algorithm designed to detect epistasis. GAMETES is a fast, flexible, and precise tool for generating complex n-locus models with random architectures. While GAMETES has a limited ability to generate models with higher heritabilities, it is proficient at generating the lower heritability models typically used in simulation studies evaluating new algorithms. In addition, the GAMETES modeling strategy may be flexibly combined with any dataset simulation strategy. Beyond dataset simulation, GAMETES could be employed to pursue theoretical characterization of genetic models and epistasis.
NASA Astrophysics Data System (ADS)
Ling, Jun
Achieving reliable underwater acoustic communications (UAC) has long been recognized as a challenging problem owing to the scarce bandwidth available and the reverberant spread in both time and frequency domains. To pursue high data rates, we consider a multi-input multi-output (MIMO) UAC system, and our focus is placed on two main issues regarding a MIMO UAC system: (1) channel estimation, which involves the design of the training sequences and the development of a reliable channel estimation algorithm, and (2) symbol detection, which requires interference cancelation schemes due to simultaneous transmission from multiple transducers. To enhance channel estimation performance, we present a cyclic approach for designing training sequences with good auto- and cross-correlation properties, and a channel estimation algorithm called the iterative adaptive approach (IAA). Sparse channel estimates can be obtained by combining IAA with the Bayesian information criterion (BIC). Moreover, we present sparse learning via iterative minimization (SLIM) and demonstrate that SLIM gives similar performance to IAA but at a much lower computational cost. Furthermore, an extension of the SLIM algorithm is introduced to estimate the sparse and frequency modulated acoustic channels. The extended algorithm is referred to as generalization of SLIM (GoSLIM). Regarding symbol detection, a linear minimum mean-squared error based detection scheme, called RELAX-BLAST, which is a combination of vertical Bell Labs layered space-time (V-BLAST) algorithm and the cyclic principle of the RELAX algorithm, is presented and it is shown that RELAX-BLAST outperforms V-BLAST. We show that RELAX-BLAST can be implemented efficiently by making use of the conjugate gradient method and diagonalization properties of circulant matrices. This fast implementation approach requires only simple fast Fourier transform operations and facilitates parallel implementations. The effectiveness of the proposed MIMO schemes is verified by both computer simulations and experimental results obtained by analyzing the measurements acquired in multiple in-water experiments.
Dragas, Jelena; Jäckel, David; Hierlemann, Andreas; Franke, Felix
2017-01-01
Reliable real-time low-latency spike sorting with large data throughput is essential for studies of neural network dynamics and for brain-machine interfaces (BMIs), in which the stimulation of neural networks is based on the networks' most recent activity. However, the majority of existing multi-electrode spike-sorting algorithms are unsuited for processing high quantities of simultaneously recorded data. Recording from large neuronal networks using large high-density electrode sets (thousands of electrodes) imposes high demands on the data-processing hardware regarding computational complexity and data transmission bandwidth; this, in turn, entails demanding requirements in terms of chip area, memory resources and processing latency. This paper presents computational complexity optimization techniques, which facilitate the use of spike-sorting algorithms in large multi-electrode-based recording systems. The techniques are then applied to a previously published algorithm, on its own, unsuited for large electrode set recordings. Further, a real-time low-latency high-performance VLSI hardware architecture of the modified algorithm is presented, featuring a folded structure capable of processing the activity of hundreds of neurons simultaneously. The hardware is reconfigurable “on-the-fly” and adaptable to the nonstationarities of neuronal recordings. By transmitting exclusively spike time stamps and/or spike waveforms, its real-time processing offers the possibility of data bandwidth and data storage reduction.
[Cluster analysis in biomedical research].
Akopov, A S; Moskovtsev, A A; Dolenko, S A; Savina, G D
2013-01-01
Cluster analysis is one of the most popular methods for analyzing multi-parameter data. It reveals the internal structure of the data by grouping separate observations according to their degree of similarity. This review defines the basic concepts of cluster analysis and discusses the most popular clustering algorithms: k-means, hierarchical algorithms, and Kohonen network (self-organizing map) algorithms. Examples are given of the use of these algorithms in biomedical research.
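As a concrete example of the most widely used of these methods, a plain k-means (Lloyd's algorithm) implementation follows; the two-group toy data stand in for, say, two expression profiles.

```python
import numpy as np

def kmeans(data, k, iters=100, seed=0):
    """Plain k-means (Lloyd's algorithm): alternate assigning points to
    the nearest centre and recomputing the centres until convergence."""
    rng = np.random.default_rng(seed)
    centres = data[rng.choice(len(data), k, replace=False)]
    for _ in range(iters):
        # Assign each observation to its nearest centre.
        labels = np.argmin(((data[:, None] - centres) ** 2).sum(-1), axis=1)
        new = np.array([data[labels == j].mean(0) for j in range(k)])
        if np.allclose(new, centres):
            break
        centres = new
    return labels, centres

# Two well-separated toy groups of observations (illustrative data).
data = np.r_[np.random.randn(30, 2), np.random.randn(30, 2) + 5]
labels, centres = kmeans(data, k=2)
print(centres)    # near (0, 0) and (5, 5)
```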
Algorithm for designing smart factory Industry 4.0
NASA Astrophysics Data System (ADS)
Gurjanov, A. V.; Zakoldaev, D. A.; Shukalov, A. V.; Zharinov, I. O.
2018-03-01
The task of designing the production division of an Industry 4.0 item-designing company is studied. The authors propose an algorithm based on a modified V. L. Volkovich method. The algorithm generates options for arranging production with robotized technological equipment operating in automatic mode. The basis of the algorithm is the optimized solution of the multi-criteria task for an additive criterion.
Multi-period project portfolio selection under risk considerations and stochastic income
NASA Astrophysics Data System (ADS)
Tofighian, Ali Asghar; Moezzi, Hamid; Khakzar Barfuei, Morteza; Shafiee, Mahmood
2018-02-01
This paper deals with the multi-period project portfolio selection problem, in which the available budget is invested in the best portfolio of projects in each period so that the net profit is maximized. We also consider more realistic assumptions to cover a wider range of applications than previous studies. A novel mathematical model is presented that accounts for risks, stochastic incomes, and the possibility of investing extra budget in each time period. Due to the complexity of the problem, an effective meta-heuristic hybridized with a local search procedure is presented. The algorithm is based on a genetic algorithm (GA), a prominent method for this type of problem, enhanced by a new solution representation and well-chosen operators, and hybridized with a local search mechanism to obtain better solutions in less time. The performance of the proposed algorithm is compared with well-known algorithms, namely the basic genetic algorithm (GA), particle swarm optimization (PSO), and the electromagnetism-like algorithm (EM-like), by means of several prominent indicators. The computational results show the superiority of the proposed algorithm in terms of accuracy, robustness, and computation time. Finally, the proposed algorithm is combined with PSO to improve the computing time considerably.
NASA Astrophysics Data System (ADS)
Mansor, S. B.; Pormanafi, S.; Mahmud, A. R. B.; Pirasteh, S.
2012-08-01
In this study, a geospatial model for land use allocation was developed from the perspective of simulating biological autonomous adaptation to the environment and infrastructural preference. The model was based on a multi-agent genetic algorithm and customized to accommodate the constraints set for the study area, namely resource saving and environmental friendliness. It was then applied to solve practical multi-objective spatial optimization allocation problems of land use in the core region of the Menderjan Basin in Iran. The first task was to study the dominant crops and the economic suitability of the land. The second task was to determine the fitness function for the genetic algorithm. The third was to optimize the land use map using economic benefits. The results indicate that the proposed model performs much better on complex multi-objective spatial optimization allocation problems and is a promising method for generating land use alternatives for further consideration in spatial decision-making.
Wu, Tingzhu; Lin, Yue; Zheng, Lili; Guo, Ziquan; Xu, Jianxing; Liang, Shijie; Liu, Zhuguagn; Lu, Yijun; Shih, Tien-Mo; Chen, Zhong
2018-02-19
An optimal design of light-emitting diode (LED) lighting that benefits both the photosynthesis performance for plants and the visional health for human eyes has drawn considerable attention. In the present study, we have developed a multi-color driving algorithm that serves as a liaison between desired spectral power distributions and pulse-width-modulation duty cycles. With the aid of this algorithm, our multi-color plant-growth light sources can optimize correlated-color temperature (CCT) and color rendering index (CRI) such that photosynthetic luminous efficacy of radiation (PLER) is maximized regardless of the number of LEDs and the type of photosynthetic action spectrum (PAS). In order to illustrate the accuracies of the proposed algorithm and the practicalities of our plant-growth light sources, we choose six color LEDs and German PAS for experiments. Finally, our study can help provide a useful guide to improve light qualities in plant factories, in which long-term co-inhabitance of plants and human beings is required.
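Mapping a desired spectral power distribution to PWM duty cycles can be sketched as a non-negative least-squares problem: the mixed spectrum is a duty-cycle-weighted sum of the individual LED spectra. The LED spectra and target below are hypothetical, and the CCT/CRI/PLER optimization layer of the paper is not reproduced.

```python
import numpy as np
from scipy.optimize import nnls

# Hypothetical spectral power distributions of 3 LEDs sampled at 5 wavelength
# bins (rows: wavelength bins; columns: LED channels at 100% duty cycle).
led_spd = np.array([[0.9, 0.1, 0.0],
                    [0.4, 0.6, 0.1],
                    [0.1, 0.8, 0.3],
                    [0.0, 0.3, 0.9],
                    [0.0, 0.1, 0.7]])
target_spd = np.array([0.5, 0.5, 0.6, 0.5, 0.3])    # desired mixed spectrum

# Non-negative least squares gives the duty-cycle vector whose mixed
# spectrum best matches the target; clip to [0, 1] for valid PWM values.
duty, residual = nnls(led_spd, target_spd)
duty = np.clip(duty, 0.0, 1.0)
print(duty, residual)
```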
Implementation of a multi-threaded framework for large-scale scientific applications
Sexton-Kennedy, E.; Gartung, Patrick; Jones, C. D.; ...
2015-05-22
The CMS experiment has recently completed the development of a multi-threaded capable application framework. In this paper, we will discuss the design, implementation and application of this framework to production applications in CMS. For the 2015 LHC run, this functionality is particularly critical for both our online and offline production applications, which depend on faster turn-around times and a reduced memory footprint relative to before. These applications are complex codes, each including a large number of physics-driven algorithms. While the framework is capable of running a mix of thread-safe and 'legacy' modules, algorithms running in our production applications need to be thread-safe for optimal use of this multi-threaded framework at a large scale. Towards this end, we discuss the types of changes which were necessary for our algorithms to achieve good performance of our multi-threaded applications in a full-scale application. Lastly, performance numbers for what has been achieved for the 2015 run are presented.
NASA Astrophysics Data System (ADS)
Rabbani, Masoud; Montazeri, Mona; Farrokhi-Asl, Hamed; Rafiei, Hamed
2016-12-01
Mixed-model assembly lines are increasingly accepted in many industrial environments to meet the growing trend of greater product variability, diversification of customer demands, and shorter life cycles. In this research, a new mathematical model is presented considering balancing a mixed-model U-line and human-related issues, simultaneously. The objective function consists of two separate components. The first part of the objective function is related to balance problem. In this part, objective functions are minimizing the cycle time, minimizing the number of workstations, and maximizing the line efficiencies. The second part is related to human issues and consists of hiring cost, firing cost, training cost, and salary. To solve the presented model, two well-known multi-objective evolutionary algorithms, namely non-dominated sorting genetic algorithm and multi-objective particle swarm optimization, have been used. A simple solution representation is provided in this paper to encode the solutions. Finally, the computational results are compared and analyzed.
NASA Astrophysics Data System (ADS)
Yuan, Congcong; Jia, Xiaofeng; Liu, Shishuo; Zhang, Jie
2018-02-01
Accurate characterization of hydraulic fracturing zones is currently becoming increasingly important in production optimization, since hydraulic fracturing may increase the porosity and permeability of the reservoir significantly. Recently, the feasibility of the reverse time migration (RTM) method has been studied for the application in imaging fractures during borehole microseismic monitoring. However, strong low-frequency migration noise, poorly illuminated areas, and the low signal to noise ratio (SNR) data can degrade the imaging results. To improve the quality of the images, we propose a multi-cross-correlation staining algorithm to incorporate into the microseismic reverse time migration for imaging fractures using scattered data. Under the modified RTM method, our results are revealed in two images: one is the improved RTM image using the multi-cross-correlation condition, and the other is an image of the target region using the generalized staining algorithm. The numerical examples show that, compared with the conventional RTM, our method can significantly improve the spatial resolution of images, especially for the image of target region.
Multidisciplinary Multiobjective Optimal Design for Turbomachinery Using Evolutionary Algorithm
NASA Technical Reports Server (NTRS)
2005-01-01
This report summarizes Dr. Lian's efforts toward developing a robust and efficient tool for multidisciplinary and multi-objective optimal design for turbomachinery using evolutionary algorithms. The work consisted of two stages. In the first stage (July 2003 to June 2004), Dr. Lian focused on building the essential capabilities required for the project, working on two subjects: an enhanced genetic algorithm (GA) and an integrated optimization system combining a GA with a surrogate model. In the second stage (July 2004 to February 2005), Dr. Lian formulated aerodynamic and structural optimization as a multi-objective optimization problem and performed multidisciplinary, multi-objective optimization of a transonic compressor blade based on the proposed model. The numerical results showed that the proposed approach can effectively reduce the blade weight and increase the stage pressure ratio in an efficient manner; in addition, the new design was structurally safer than the original. Five conference papers and three journal papers were published on this topic.
Scientific Visualization and Simulation for Multi-dimensional Marine Environment Data
NASA Astrophysics Data System (ADS)
Su, T.; Liu, H.; Wang, W.; Song, Z.; Jia, Z.
2017-12-01
With growing attention to the ocean and the rapid development of marine sensing, there are increasing demands for realistic simulation and interactive visualization of the marine environment in real time. Based on advanced technologies such as GPU rendering, CUDA parallel computing, and a fast grid-oriented strategy, a series of efficient, high-quality visualization methods that can handle large-scale, multi-dimensional marine data under different environmental circumstances is proposed in this paper. First, a high-quality seawater simulation is realized with an FFT algorithm, bump mapping, and texture animation. Second, large-scale multi-dimensional marine hydrological environmental data are visualized with 3D interactive technologies and volume rendering techniques. Third, seabed terrain data are simulated with an improved Delaunay algorithm, surface reconstruction, a dynamic LOD algorithm, and GPU programming techniques. Fourth, seamless real-time modelling of both ocean and land on a digital globe is achieved with WebGL to meet the requirements of web-based applications. The experiments suggest that these methods not only produce a satisfying marine environment simulation but also meet the rendering requirements of global multi-dimensional marine data. Additionally, a simulation system for underwater oil spills is established with the OSG 3D rendering engine. Integrated with the marine visualization methods above, it dynamically and simultaneously shows movement processes, physical parameters, and current velocity and direction for different types of deep-water oil spill particles (oil particles, hydrate particles, gas particles, etc.) in multiple dimensions. Such an application provides valuable reference and decision-making information for understanding the progress of an oil spill in deep water, which is helpful for ocean disaster forecasting, warning, and emergency response.
Multi-Parent Clustering Algorithms from Stochastic Grammar Data Models
NASA Technical Reports Server (NTRS)
Mjoisness, Eric; Castano, Rebecca; Gray, Alexander
1999-01-01
We introduce a statistical data model and an associated optimization-based clustering algorithm which allows data vectors to belong to zero, one or several "parent" clusters. For each data vector the algorithm makes a discrete decision among these alternatives. Thus, a recursive version of this algorithm would place data clusters in a Directed Acyclic Graph rather than a tree. We test the algorithm with synthetic data generated according to the statistical data model. We also illustrate the algorithm using real data from large-scale gene expression assays.
Enhanced intelligent water drops algorithm for multi-depot vehicle routing problem
Ezugwu, Absalom E.; Akutsah, Francis; Olusanya, Micheal O.; Adewumi, Aderemi O.
2018-01-01
The intelligent water drop algorithm is a swarm-based metaheuristic inspired by the characteristics of water drops in a river and the environmental changes resulting from the action of the flowing river. Since its appearance as an alternative stochastic optimization method, the algorithm has found application in a wide range of combinatorial and functional optimization problems. This paper presents an improved intelligent water drop algorithm for solving multi-depot vehicle routing problems. A simulated annealing algorithm was introduced into the proposed algorithm as a local search metaheuristic to prevent the intelligent water drop algorithm from getting trapped in local minima and to improve its solution quality. In addition, some potential problems associated with using simulated annealing, including high computational runtime and the exponential calculation of the acceptance probability, are investigated. Because the exponential calculation of the acceptance probability in simulated annealing based techniques is computationally expensive, a better way of calculating it is considered in order to maximize the performance of the intelligent water drop algorithm using simulated annealing. The performance of the proposed hybrid algorithm is evaluated on 33 standard test problems, with the results compared against the solutions offered by four well-known techniques from the literature. Experimental results and statistical tests show that the new method offers outstanding performance in terms of solution quality and runtime. In addition, the proposed algorithm is suitable for solving large-scale problems. PMID:29554662
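The acceptance step under discussion can be illustrated as follows. The standard Metropolis criterion accepts a worse move with probability exp(-delta/T); one common way to avoid the per-candidate exp() call, shown here as an assumption rather than the authors' exact remedy, is to test the equivalent inequality in the log domain.

```python
import math, random

# Metropolis acceptance without calling exp() on every candidate:
#   u < exp(-delta / T)   <=>   log(u) < -delta / T
# Using 1 - random.random() keeps the draw in (0, 1], so log() is safe.

def accept(delta, T):
    """Always take improving moves; otherwise accept with
    probability exp(-delta / T), tested in the log domain."""
    if delta <= 0:
        return True
    return math.log(1.0 - random.random()) < -delta / T

random.seed(1)
print(accept(-0.5, 1.0))   # improving move: always accepted
print(accept(2.0, 0.5))    # worse move: accepted only occasionally
```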
NASA Astrophysics Data System (ADS)
Luo, Shouhua; Shen, Tao; Sun, Yi; Li, Jing; Li, Guang; Tang, Xiangyang
2018-04-01
In high-resolution (microscopic) CT applications, the scan field of view should cover the entire specimen or sample to allow complete data acquisition and image reconstruction. However, truncation may occur in the projection data, resulting in artifacts in the reconstructed images. In this study, we propose a low-resolution-image-constrained reconstruction algorithm (LRICR) for interior tomography in microscopic CT at high resolution. In general, multi-resolution acquisition based methods can be employed to solve the data truncation problem if the projection data acquired at low resolution are utilized to fill up the truncated projection data acquired at high resolution. However, most existing methods place quite strict restrictions on the data acquisition geometry, which greatly limits their utility in practice. In the proposed LRICR algorithm, full and partial data acquisitions (scans) at low and high resolutions, respectively, are carried out. Using the image reconstructed from the sparse projection data acquired at low resolution as the prior, a microscopic image at high resolution is reconstructed from the truncated projection data acquired at high resolution. Two synthesized digital phantoms, a raw bamboo culm and a specimen of mouse femur, were utilized to evaluate and verify the performance of the proposed LRICR algorithm. Compared with the conventional TV-minimization-based algorithm and the multi-resolution scout-reconstruction algorithm, the proposed LRICR algorithm shows a significant improvement in reducing the artifacts caused by data truncation, providing a practical solution for high-quality and reliable interior tomography in microscopic CT applications.
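As a toy illustration of reconstructing with a low-resolution prior (not the authors' LRICR algorithm), the sketch below solves a prior-regularized least-squares problem by gradient descent on synthetic data.

```python
import numpy as np

# Toy prior-constrained reconstruction: solve
#   min_x ||A x - b||^2 + lam * ||x - x_prior||^2
# where b plays the role of truncated high-resolution data and x_prior
# stands in for the image from a full low-resolution scan. A, b, and
# x_prior are synthetic; real CT uses a projection operator for A.

rng = np.random.default_rng(0)
n = 50
x_true = rng.random(n)
A = rng.standard_normal((30, n))                   # under-determined system
b = A @ x_true                                     # "truncated" measurements
x_prior = x_true + 0.05 * rng.standard_normal(n)   # imperfect low-res prior

lam, step = 1.0, 0.01
x = np.zeros(n)
for _ in range(2000):
    grad = A.T @ (A @ x - b) + lam * (x - x_prior)
    x -= step * grad

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```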
NASA Astrophysics Data System (ADS)
Lin, Yi-Kuei; Yeh, Cheng-Ta
2013-05-01
From the perspective of supply chain management, the selected carrier plays an important role in freight delivery. This article proposes a new criterion of multi-commodity reliability and optimises the carrier selection based on such a criterion for logistics networks with routes and nodes, over which multiple commodities are delivered. Carrier selection concerns the selection of exactly one carrier to deliver freight on each route. The capacity of each carrier has several available values associated with a probability distribution, since some of a carrier's capacity may be reserved for various orders. Therefore, the logistics network, given any carrier selection, is a multi-commodity multi-state logistics network. Multi-commodity reliability is defined as a probability that the logistics network can satisfy a customer's demand for various commodities, and is a performance indicator for freight delivery. To solve this problem, this study proposes an optimisation algorithm that integrates genetic algorithm, minimal paths and Recursive Sum of Disjoint Products. A practical example in which multi-sized LCD monitors are delivered from China to Germany is considered to illustrate the solution procedure.
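The reliability notion above can be illustrated with a toy Monte Carlo estimator; the two-route network, capacity distributions, and demands are invented for illustration, whereas the paper computes the probability exactly via minimal paths and the Recursive Sum of Disjoint Products.

```python
import random

# Toy multi-commodity reliability: probability that sampled carrier
# capacities on every route cover the demand routed over it.

# Per route: list of (capacity, probability) states for the chosen carrier.
routes = {
    "r1": [(0, 0.10), (5, 0.40), (10, 0.50)],
    "r2": [(0, 0.05), (8, 0.95)],
}
demand = {"r1": 5, "r2": 8}   # commodity units each route must carry

def sample_capacity(states):
    r, acc = random.random(), 0.0
    for cap, p in states:
        acc += p
        if r < acc:
            return cap
    return states[-1][0]

def reliability(n_trials=100_000):
    ok = sum(
        all(sample_capacity(routes[r]) >= demand[r] for r in routes)
        for _ in range(n_trials)
    )
    return ok / n_trials

random.seed(0)
print(f"estimated reliability ~= {reliability():.3f}")
# exact value here: P(r1 >= 5) * P(r2 >= 8) = 0.9 * 0.95 = 0.855
```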
Sampling Approaches for Multi-Domain Internet Performance Measurement Infrastructures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Calyam, Prasad
2014-09-15
The next generation of high-performance networks being developed in DOE communities is critical for supporting current and emerging data-intensive science applications. The goal of this project is to investigate multi-domain network status sampling techniques and tools to measure/analyze performance, and thereby provide “network awareness” to end users and network operators in DOE communities. We leverage the infrastructure and datasets available through perfSONAR, a multi-domain measurement framework that has been widely deployed in high-performance computing and networking communities; the DOE community is a core developer and the largest adopter of perfSONAR. Our investigations include the development of semantic scheduling algorithms, measurement federation policies, and tools to sample multi-domain and multi-layer network status within perfSONAR deployments. We validate our algorithms and policies with end-to-end measurement analysis tools for various monitoring objectives such as network weather forecasting, anomaly detection, and fault diagnosis. In addition, we develop a multi-domain architecture for an enterprise-specific perfSONAR deployment that can implement monitoring-objective-based sampling and that adheres to any domain-specific measurement policies.
Convergence and Applications of a Gossip-Based Gauss-Newton Algorithm
NASA Astrophysics Data System (ADS)
Li, Xiao; Scaglione, Anna
2013-11-01
The Gauss-Newton algorithm is a popular and efficient centralized method for solving non-linear least squares problems. In this paper, we propose a multi-agent distributed version of this algorithm, named Gossip-based Gauss-Newton (GGN) algorithm, which can be applied in general problems with non-convex objectives. Furthermore, we analyze and present sufficient conditions for its convergence and show numerically that the GGN algorithm achieves performance comparable to the centralized algorithm, with graceful degradation in case of network failures. More importantly, the GGN algorithm provides significant performance gains compared to other distributed first order methods.
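For readers unfamiliar with the centralized building block, the sketch below runs a plain Gauss-Newton iteration on a small nonlinear least-squares fit; the model and data are illustrative, and the gossip-based distribution of the computation is not reproduced here.

```python
import numpy as np

# Centralized Gauss-Newton for nonlinear least squares: fit
# y = exp(a * t) + b to noisy samples. The GGN paper distributes
# exactly this kind of iteration across agents via gossip.

def residual(theta, t, y):
    a, b = theta
    return np.exp(a * t) + b - y

def jacobian(theta, t):
    a, b = theta
    J = np.empty((t.size, 2))
    J[:, 0] = t * np.exp(a * t)   # d r / d a
    J[:, 1] = 1.0                 # d r / d b
    return J

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 50)
y = np.exp(0.8 * t) + 0.5 + 0.01 * rng.standard_normal(t.size)

theta = np.array([0.0, 0.0])
for _ in range(20):
    r = residual(theta, t, y)
    J = jacobian(theta, t)
    # Gauss-Newton step: solve (J^T J) d = -J^T r
    d = np.linalg.solve(J.T @ J, -J.T @ r)
    theta = theta + d

print(theta)   # close to the true parameters (0.8, 0.5)
```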
Ochi, Kento; Kamiura, Moto
2015-09-01
A multi-armed bandit problem is a search problem in which a learning agent must select the optimal arm among multiple slot machines generating random rewards. The UCB algorithm is one of the most popular methods for solving multi-armed bandit problems; it achieves logarithmic regret by balancing exploration and exploitation. Since the introduction of UCB algorithms, researchers have known empirically that optimistic value functions perform well in multi-armed bandit problems. The terms optimistic or optimism suggest that the value function is sufficiently larger than the sample mean of the rewards. The original definition of the UCB algorithm focuses on regret optimization and is not directly based on the optimism of a value function, so we need to understand why optimism yields good performance in multi-armed bandit problems. In the present article, we propose a new method, called the Overtaking method, for solving multi-armed bandit problems. The value function of the proposed method is defined as an upper bound of a confidence interval for an estimator of the expected reward: the value function asymptotically approaches the expected reward from above. If the value function is larger than the expected value under the asymptote, then the learning agent is almost sure to obtain the optimal arm. This structure, called the sand-sifter mechanism, prevents regrowth of the value functions of suboptimal arms, meaning the learning agent can play only the current best arm at each time step. Consequently, the proposed method achieves a high accuracy rate and low regret, and some of its value functions can outperform UCB algorithms. This study demonstrates, in one of the simplest frameworks, the advantage of agent optimism in uncertain environments.
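For context, the baseline UCB1 rule that the Overtaking method is compared against looks as follows; the Bernoulli arms and horizon are illustrative assumptions.

```python
import math, random

# Standard UCB1 sketch (the baseline, not the proposed Overtaking method).
# Each arm's value is its sample mean plus the bonus sqrt(2 ln t / n).

def ucb1(arm_means, horizon=10_000):
    n = [0] * len(arm_means)       # pulls per arm
    s = [0.0] * len(arm_means)     # summed rewards per arm
    for t in range(1, horizon + 1):
        if t <= len(arm_means):
            i = t - 1              # play each arm once to initialize
        else:
            i = max(range(len(arm_means)),
                    key=lambda a: s[a] / n[a] + math.sqrt(2 * math.log(t) / n[a]))
        reward = 1.0 if random.random() < arm_means[i] else 0.0
        n[i] += 1
        s[i] += reward
    return n

random.seed(0)
print(ucb1([0.3, 0.5, 0.7]))   # most pulls should go to the 0.7 arm
```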
NASA Astrophysics Data System (ADS)
Lee, Suk-Jun; Yu, Seung-Man
2017-08-01
The purpose of this study was to evaluate the usefulness and clinical applications of MultiVaneXD, which applies an iterative motion-correction reconstruction algorithm to T2-weighted images, compared with MultiVane images acquired on a 3T MRI system. A total of 20 patients with suspected pathologies of the liver and pancreatic-biliary system, based on clinical and laboratory findings, underwent upper abdominal MRI acquired using the MultiVane and MultiVaneXD techniques. Two reviewers analyzed the MultiVane and MultiVaneXD T2-weighted images qualitatively and quantitatively. Each reviewer evaluated vessel conspicuity by observing motion artifacts and the sharpness of the portal vein, hepatic vein, and upper abdominal organs. The signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) were calculated by one reviewer for quantitative analysis. The intraclass correlation coefficient was evaluated to measure inter-observer reliability. There were significant differences between MultiVane and MultiVaneXD in the motion artifact evaluation. Furthermore, MultiVane was given a better score than MultiVaneXD for abdominal organ sharpness and vessel conspicuity, but the difference was not significant. The reliability coefficient values were over 0.8 in every evaluation. MultiVaneXD (2.12) showed a higher value than MultiVane (1.98), but the difference was not significant (p = 0.135). MultiVaneXD is a more advanced motion-correction method than MultiVane, and it produced an increased SNR, resulting in a greater ability to detect focal abdominal lesions.
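The quantitative measurements mentioned above typically follow the conventions sketched below; the ROI placement and exact definitions are common practice assumed here, not details taken from the study.

```python
import numpy as np

# Common SNR/CNR conventions for MRI quantitative analysis,
# applied to synthetic ROI pixel values.

def snr(roi_signal, roi_background):
    """SNR = mean signal in tissue ROI / std of background noise."""
    return roi_signal.mean() / roi_background.std()

def cnr(roi_a, roi_b, roi_background):
    """CNR = absolute mean difference of two ROIs / background noise."""
    return abs(roi_a.mean() - roi_b.mean()) / roi_background.std()

rng = np.random.default_rng(0)
liver = 120 + 5 * rng.standard_normal(500)      # synthetic tissue ROI
lesion = 90 + 5 * rng.standard_normal(200)      # synthetic lesion ROI
air = 2 * rng.standard_normal(500)              # background noise ROI

print(f"SNR = {snr(liver, air):.1f}, CNR = {cnr(liver, lesion, air):.1f}")
```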
Computational electromagnetics: the physics of smooth versus oscillatory fields.
Chew, W C
2004-03-15
This paper starts by discussing the difference in physics between solutions to Laplace's equation (statics) and solutions to Maxwell's equations for dynamic problems (the Helmholtz equation). Their differing physical characters are illustrated by how the two fields convey information away from their source point. The paper elucidates how these differing characters affect the use of Laplacian and Helmholtz fields in imaging, as well as the design of fast computational algorithms for electromagnetic scattering problems. Specifically, a comparison is made between fast algorithms developed using wavelets, the simple fast multipole method, and the multi-level fast multipole algorithm for electrodynamics. The impact of the physical character of the dynamic field on the parallelization of the multi-level fast multipole algorithm is also discussed. The relationship of the diagonalization of translators to group theory is presented. Finally, future areas of research in computational electromagnetics are described.
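The contrast the paper draws can be summarized by the two operators' free-space Green's functions; the formulas below are standard results, not reproduced from the paper.

```latex
% Static (Laplace) vs. dynamic (Helmholtz) fields in three dimensions:
% the free-space Green's functions show why one field is smooth and the
% other oscillatory (requires amsmath).
\begin{align}
  \nabla^2 \phi &= 0,
    & G_{\mathrm{L}}(r) &= \frac{1}{4\pi r}
    && \text{(smooth, monotone decay)} \\
  (\nabla^2 + k^2)\,\psi &= 0,
    & G_{\mathrm{H}}(r) &= \frac{e^{\mathrm{i}kr}}{4\pi r}
    && \text{(oscillatory; the phase carries information to the far field)}
\end{align}
```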