Sample records for tabulation isat algorithm

  1. Large-Scale Parallel Simulations of Turbulent Combustion using Combined Dimension Reduction and Tabulation of Chemistry

    DTIC Science & Technology

    2012-05-22

    tabulation of the reduced space is performed using the In Situ Adaptive Tabulation (ISAT) algorithm. In addition, we use x2f_mpi – a Fortran library...for parallel vector-valued function evaluation (used with ISAT in this context) – to efficiently redistribute the chemistry workload among the...Constrained-Equilibrium (RCCE) method.

  2. Reduced description of reactive flows with tabulation of chemistry

    NASA Astrophysics Data System (ADS)

    Ren, Zhuyin; Goldin, Graham M.; Hiremath, Varun; Pope, Stephen B.

    2011-12-01

    The direct use of large chemical mechanisms in multi-dimensional Computational Fluid Dynamics (CFD) is computationally expensive due to the large number of chemical species and the wide range of chemical time scales involved. To meet this challenge, a reduced description of reactive flows in combination with chemistry tabulation is proposed to effectively reduce the computational cost. In the reduced description, the species are partitioned into represented species and unrepresented species; the reactive system is described in terms of a smaller number of represented species instead of the full set of chemical species in the mechanism; and the evolution equations are solved only for the represented species. When required, the unrepresented species are reconstructed assuming that they are in constrained chemical equilibrium. In situ adaptive tabulation (ISAT) is employed to speed up the chemistry calculations by tabulating information about the reduced system. The proposed dimension-reduction/tabulation methodology determines and tabulates in situ the necessary information about the n_r-dimensional reduced system based on the n_s-species detailed mechanism. Compared to the full description with ISAT, the reduced descriptions achieve additional computational speed-up by solving fewer transport equations and through faster ISAT retrieval. The approach is validated in both a methane/air premixed flame and a methane/air non-premixed flame. With the GRI 1.2 mechanism consisting of 31 species, the reduced descriptions (with 12 to 16 represented species) achieve a speed-up factor of up to three compared to the full description with ISAT, with a relatively moderate decrease in accuracy.
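
    Since ISAT is the thread running through several records here, a minimal conceptual sketch of its retrieve/add logic may help. This is an illustration of the general idea (local linear models reused inside a region of accuracy), not Pope's implementation: the names IsatTable and eoa_radius and the spherical accuracy region are invented for the sketch, and the "grow" step and binary-tree search of real ISAT are omitted.

      import numpy as np

      class IsatTable:
          """Toy in situ adaptive tabulation: cache f(x) with local linear models."""
          def __init__(self, f, jac, eoa_radius=1e-2):
              self.f = f              # expensive mapping, e.g. a chemistry ODE solve
              self.jac = jac          # Jacobian df/dx used for linear extrapolation
              self.eoa = eoa_radius   # crude spherical stand-in for the ellipsoid of accuracy
              self.entries = []       # list of (x0, f0, A) records

          def query(self, x):
              # RETRIEVE: reuse a stored linear model if x lies inside its region
              for x0, f0, A in self.entries:
                  if np.linalg.norm(x - x0) < self.eoa:
                      return f0 + A @ (x - x0)
              # ADD: otherwise evaluate directly and tabulate a new entry
              f0, A = self.f(x), self.jac(x)
              self.entries.append((x.copy(), f0, A))
              return f0

      # Usage with a cheap stand-in "reaction mapping":
      f = lambda x: np.array([np.exp(-x[0]), x[0] * x[1]])
      jac = lambda x: np.array([[-np.exp(-x[0]), 0.0], [x[1], x[0]]])
      table = IsatTable(f, jac)
      y1 = table.query(np.array([0.5, 1.0]))    # direct evaluation, entry added
      y2 = table.query(np.array([0.501, 1.0]))  # retrieved by linear extrapolation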

  3. Interchange Safety Analysis Tool (ISAT) : user manual

    DOT National Transportation Integrated Search

    2007-06-01

    This User Manual describes the usage and operation of the spreadsheet-based Interchange Safety Analysis Tool (ISAT). ISAT provides design and safety engineers with an automated tool for assessing the safety effects of geometric design and traffic con...

  4. Effects of combined dimension reduction and tabulation on the simulations of a turbulent premixed flame using a large-eddy simulation/probability density function method

    NASA Astrophysics Data System (ADS)

    Kim, Jeonglae; Pope, Stephen B.

    2014-05-01

    A turbulent lean-premixed propane-air flame stabilised by a triangular cylinder as a flame-holder is simulated to assess the accuracy and computational efficiency of combined dimension reduction and tabulation of chemistry. The computational condition matches the Volvo rig experiments. For the reactive simulation, the Lagrangian Large-Eddy Simulation/Probability Density Function (LES/PDF) formulation is used. A novel two-way coupling approach between LES and PDF is applied to obtain resolved density to reduce its statistical fluctuations. Composition mixing is evaluated by the modified Interaction-by-Exchange with the Mean (IEM) model. A baseline case uses In Situ Adaptive Tabulation (ISAT) to calculate chemical reactions efficiently. Its results demonstrate good agreement with the experimental measurements in turbulence statistics, temperature, and minor species mass fractions. For dimension reduction, 11 and 16 represented species are chosen and a variant of Rate Controlled Constrained Equilibrium (RCCE) is applied in conjunction with ISAT to each case. All the quantities in the comparison are indistinguishable from the baseline results using ISAT only. The combined use of RCCE/ISAT reduces the computational time for chemical reaction by more than 50%. However, for the current turbulent premixed flame, chemical reaction takes only a minor portion of the overall computational cost, in contrast to non-premixed flame simulations using LES/PDF, presumably due to the restricted manifold of purely premixed flame in the composition space. Instead, composition mixing is the major contributor to cost reduction since the mean-drift term, which is computationally expensive, is computed for the reduced representation. Overall, a reduction of more than 15% in the computational cost is obtained.

  5. Conversion of Isatin to Isatate as Related to Growth Promotion in Avena Coleoptile and Pisum Stem Sections 1

    PubMed Central

    Chen, H.-R.; Galston, A. W.; Milstone, L.

    1966-01-01

    Isatin (indole-2,3-dione), which promotes elongation of Pisum stem sections at concentrations exceeding 0.1 mm, promotes elongation of Avena coleoptile sections only at higher concentrations, exceeding 1 mm. Aged isatin solutions are more active than fresh solutions, due to the slow, spontaneous conversion to isatate (o-aminophenylglyoxylate). A concentration of 0.1 mm aged isatin is as active in Avena coleoptile sections as in peas. Isatate has been independently synthesized and its auxin activity in both Avena coleoptile and Pisum stem sections confirmed. The synthetic isatate is more effective than isatin in both systems. This suggests that the auxin activity of isatin is due to its conversion to isatate. PMID:16656429

  6. Thermal Analysis of Iodine Satellite (iSAT)

    NASA Technical Reports Server (NTRS)

    Mauro, Stephanie

    2015-01-01

    This paper presents the progress of the thermal analysis and design of the Iodine Satellite (iSAT). The purpose of the iSAT spacecraft (SC) is to demonstrate the iodine Hall thruster propulsion system throughout a one-year mission, in an effort to mature the system for use on future satellites. The benefit of this propulsion system is that it uses a propellant, iodine, that is easy to store and provides a high thrust-to-mass ratio. The spacecraft will also act as a bus for an Earth observation payload, the Long Wave Infrared (LWIR) Camera. Four phases of the mission, determined to either be critical to achieving requirements or phases of thermal concern, are modeled. The phases are the Right Ascension of the Ascending Node (RAAN) Change, Altitude Reduction, De-Orbit, and Science Phases. Each phase was modeled in a worst-case hot environment, and the coldest phase, the Science Phase, was also modeled in a worst-case cold environment. The thermal environments of the spacecraft are especially important to model because iSAT has a very high power density. The satellite is the size of a 12-unit cubesat, and at times dissipates slightly more than 75 watts of power as heat. The maximum temperatures for several components are above their maximum operational limit for one or more cases. The analysis done for the first Design and Analysis Cycle (DAC1) showed that many components were above or within 5 degrees Celsius of their maximum operational limit. The battery is a component of concern because, although it is not over its operational temperature limit, efficiency greatly decreases if it operates at the currently predicted temperatures. In the second Design and Analysis Cycle (DAC2), many steps were taken to mitigate the overheating of components, including isolating several high-temperature components, removal of components, and rearrangement of systems. These changes have greatly increased the thermal margin available.

  7. The Iodine Satellite (iSAT) Hall Thruster Demonstration Mission Concept and Development

    NASA Technical Reports Server (NTRS)

    Dankanich, John W.; Polzin, Kurt A.; Calvert, Derek; Kamhawi, Hani

    2014-01-01

    The use of iodine propellant for Hall thrusters has been studied and proposed by multiple organizations due to the potential mission benefits over xenon. In 2013, NASA Marshall Space Flight Center competitively selected a project for the maturation of an iodine flight operational feed system through the Technology Investment Program. Multiple partnerships and collaborations have allowed the team to expand the scope to include additional mission concept development and risk reduction to support a flight system demonstration, the iodine Satellite (iSAT). The iSAT project was initiated and is progressing towards a technology demonstration mission preliminary design review. The current status of the mission concept development and risk reduction efforts in support of this project is presented.

  8. Idaho Percentile Results for the 2015 and 2016 ISAT (SBAC) English Language Arts and Mathematics Tests in Grades 3-8 and 10

    ERIC Educational Resources Information Center

    Stoneberg, Bert D.

    2016-01-01

    Idaho uses the English Language Arts and Mathematics tests from the Smarter Balanced Assessment Consortium (SBAC) for the Idaho Standard Achievement Tests (ISAT). ISAT results have been reported almost exclusively as "percent proficient" statistics (i.e., the percentage of Idaho students who performed at the "A" level…

  9. Iowa satellite project ISAT-1

    NASA Technical Reports Server (NTRS)

    1993-01-01

    Satellite systems to date have been mainly scientific in nature. Only a few systems have been of direct use to the public, such as for telephone or television transmission. Space enterprises have remained a mystery to the general public and beyond the reach of the small business community. The result is a less than supportive public when it comes to space activities. The purpose of the ISAT-1 program is to develop a small and relatively inexpensive satellite that will serve the State of Iowa, primarily for educational purposes. It will provide products, services, and activities that will be educational, practical, and useful for a large number of people. The emphasis is on public awareness, 'space literacy', and routine practical applications rather than high technology. The initial conceptual design phase was complete when the current team took over the project. Some areas of the conceptual design were taken a little further, but for the most part this team started at the detailed design stage.

  10. 48 CFR 908.7117 - Tabulating machine cards.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 48 Federal Acquisition Regulations System 5 2011-10-01 2011-10-01 false Tabulating machine cards. 908.7117 Section 908.7117 Federal Acquisition Regulations System DEPARTMENT OF ENERGY COMPETITION... Tabulating machine cards. DOE offices shall acquire tabulating machine cards in accordance with FPMR 41 CFR...

  11. 48 CFR 908.7117 - Tabulating machine cards.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 5 2010-10-01 2010-10-01 false Tabulating machine cards. 908.7117 Section 908.7117 Federal Acquisition Regulations System DEPARTMENT OF ENERGY COMPETITION... Tabulating machine cards. DOE offices shall acquire tabulating machine cards in accordance with FPMR 41 CFR...

  12. Use of Tabulated Thermochemical Data for Pure Compounds

    NASA Technical Reports Server (NTRS)

    Jacobson, Nathan S.

    1999-01-01

    Thermodynamic data for inorganic compounds are found in a variety of tabulations and computer databases. An extensive listing of sources of inorganic thermodynamic data is provided. The three major tabulations are the JANAF tables, Thermodynamic Properties of Individual Substances, and the tabulation by Barin. The notation and choice of standard states differ among these tabulations, so combining data from the different tabulations is often a problem. By understanding the choice of standard states, it is possible to develop simple equations for converting the data from one form to another.
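
    One concrete example of the kind of conversion meant here, stated as standard thermochemistry rather than quoted from the report: JANAF-style tables list the Gibbs energy function referenced to the enthalpy at 298.15 K, from which a reaction Gibbs energy at temperature T is assembled as

      \[
      \mathrm{gef}(T) = -\frac{G^{\circ}(T) - H^{\circ}(298.15\,\mathrm{K})}{T},
      \qquad
      \Delta_r G^{\circ}(T) = \Delta_r H^{\circ}(298.15\,\mathrm{K}) - T\,\Delta_r\mathrm{gef}(T).
      \]

    A source that references its Gibbs energy function to 0 K instead must first be shifted by H°(298.15 K) - H°(0 K) before the two tabulations can be combined.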

  13. 41 CFR 101-26.509 - Tabulating machine cards.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 41 Public Contracts and Property Management 2 2011-07-01 2007-07-01 true Tabulating machine cards... PROGRAM 26.5-GSA Procurement Programs § 101-26.509 Tabulating machine cards. Procurement by Federal agencies of tabulating machine cards shall be made in accordance with the provisions of this § 101-26.509...

  14. Providing Transparency and Credibility: The Selection of International Students for Australian Universities. An Examination of the Relationship between Scores in the International Student Admissions Test (ISAT), Final Year Academic Programs and an Australian University's Foundation Program

    ERIC Educational Resources Information Center

    Lai, Kelvin; Nankervis, Susan; Story, Margot; Hodgson, Wayne; Lewenberg, Michael; Ball, Marita MacMahon

    2008-01-01

    Throughout 2003-04 five cohorts of students in their final year of school studies in various Malaysian colleges and a group of students completing an Australian university foundation year in Malaysia sat the International Student Admissions Test (ISAT). The ISAT is a multiple-choice test of general academic abilities developed for students whose…

  15. SmallSats, Iodine Propulsion Technology, Applications to Low-Cost Lunar Missions, and the Iodine Satellite (iSAT) Project

    NASA Technical Reports Server (NTRS)

    Dankanich, John W.

    2014-01-01

    Closing Remarks: (1) SmallSats hold significant potential for future low-cost, high-value missions; (2) Propulsion remains a key limiting capability for SmallSats that iodine can address: high Isp x density for volume-constrained spacecraft; indefinite quiescence, unpressurized and non-hazardous as a secondary payload; (3) Iodine enables MicroSat and SmallSat maneuverability: enables transfer into high-value orbits, constellation deployment and deorbit; (4) Iodine may enable a new class of planetary and exploration-class missions: enables GTO-launched secondary spacecraft to transit to the Moon, asteroids, and other interplanetary destinations for approximately 150 million dollars full life-cycle cost, including the launch; (5) ESPA-based OTVs are also volume constrained, and a shift from xenon to iodine can significantly increase the transfer vehicle's change-in-velocity capability, including transfers from GTO to a range of lunar orbits; (6) The iSAT project is a fast-paced, high-value iodine Hall technology demonstration mission: partnership between NASA GRC and NASA MSFC with industry partner Busek; (7) The iSAT mission is an approved project with PDR in November of 2014 and is targeting a flight opportunity in FY17.

  16. How Do They Compare? ITBS and ISAT Reading and Mathematics in the Chicago Public Schools, 1999 to 2002. Research Data Brief.

    ERIC Educational Resources Information Center

    Easton, John Q.; Correa, Macarena; Luppescu, Stuart; Park, Hye-Sook; Ponisciak, Stephen; Rosenkranz, Todd; Sporte, Susan

    For several decades, the Iowa Tests of Basic Skills (ITBS) held the preeminent role in measuring student and school performance in the Chicago Public Schools (CPS), Illinois. In the context of the No Child Left Behind Act and new calls for accountability, the CPS has decided to include results from the Illinois Standard Achievement Test (ISAT) in…

  17. TABULATED EQUIVALENT SDR FLAMELET (TESF) MODEL

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    KUNDU, PRITHWISH; AMEEN, MUHSIN MOHAMMED; UNNIKRISHNAN, UMESH

    The code consists of an implementation of a novel tabulated combustion model for non-premixed flames in CFD solvers. This novel technique/model is used to implement an unsteady flamelet tabulation without using progress variables for non-premixed flames. It also has the capability to include history effects, which is unique among tabulated flamelet models. The flamelet table generation code can be run in parallel to generate tables with large chemistry mechanisms in relatively short wall-clock times. The combustion model/code reads these tables. This framework can be coupled with any CFD solver with RANS as well as LES turbulence models. It enables CFD solvers to run large chemistry mechanisms with a large number of grid cells at relatively low computational cost. Currently it has been coupled with the Converge CFD code and validated against available experimental data. This model can be used to simulate non-premixed combustion in a variety of applications like reciprocating engines, gas turbines and industrial burners operating over a wide range of fuels.

  18. The Iodine Satellite (iSat) Project Development Towards Critical Design Review

    NASA Technical Reports Server (NTRS)

    Dankanich, John W.; Calvert, Derek; Kamhawi, Hani; Hickman, Tyler; Szabo, James; Byrne, Lawrence

    2015-01-01

    Despite the prevalence of small satellites in recent years, the systems flown to date have very limited propulsion capability. SmallSats are typically secondary payloads and have significant constraints for volume, mass, and power in addition to limitations on the use of hazardous propellants or stored energy. These constraints limit the options for SmallSat maneuverability. NASA's Space Technology Mission Directorate approved the iodine Satellite flight project for a rapid demonstration of iodine Hall thruster technology in a 12U (cubesat units) configuration under the Small Spacecraft Technology Program. The mission is a partnership between NASA MSFC, NASA GRC, and Busek Co, Inc., with the Air Force supporting the propulsion technology maturation. The team is working towards the critical design review in the final design and fabrication phase of the project. The current design shows positive technical performance margins in all areas. The iSat project is planned for launch readiness in the spring of 2017.

  19. Evaluation of different flamelet tabulation methods for laminar spray combustion

    NASA Astrophysics Data System (ADS)

    Luo, Yujuan; Wen, Xu; Wang, Haiou; Luo, Kun; Fan, Jianren

    2018-05-01

    In this work, three different flamelet tabulation methods for spray combustion are evaluated. Major differences among these methods lie in the treatment of the temperature boundary conditions of the flamelet equations. Particularly, in the first tabulation method ("M1"), both the fuel and oxidizer temperature boundary conditions are set to be fixed. In the second tabulation method ("M2"), the fuel temperature boundary condition is varied while the oxidizer temperature boundary condition is fixed. In the third tabulation method ("M3"), both the fuel and oxidizer temperature boundary conditions are varied and set to be equal. The focus of this work is to investigate whether the heat transfer between the droplet phase and gas phase can be represented by the studied tabulation methods through a priori analyses. To this end, spray flames stabilized in a three-dimensional counterflow are first simulated with detailed chemistry. Then, the trajectory variables are calculated from the detailed chemistry solutions. Finally, the tabulated thermo-chemical quantities are compared to the corresponding values from the detailed chemistry solutions. The comparisons show that the gas temperature cannot be predicted by "M1" with only a mixture fraction and reaction progress variable being the trajectory variables. The gas temperature can be correctly predicted by both "M2" and "M3," in which the total enthalpy is introduced as an additional manifold. In "M2," variations of the oxidizer temperature are considered with a temperature modification technique, which is not required in "M3." Interestingly, it is found that the mass fractions of the reactants and major products are not sensitive to the representation of the interphase heat transfer in the flamelet chemtables, and they can be correctly predicted by all tabulation methods. By contrast, the intermediate species CO and H2 in the premixed flame reaction zone are over-predicted by all tabulation methods.
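
    The three methods differ mainly in which variables index the chemtable. A minimal sketch of an M2/M3-style lookup, with total enthalpy as a third table dimension alongside mixture fraction and progress variable, is given below; the table contents are dummy data and all variable names are invented for illustration, so this shows the data structure rather than the authors' code.

      import numpy as np
      from scipy.interpolate import RegularGridInterpolator

      # Chemtable axes: mixture fraction Z, progress variable C, total enthalpy h
      Z = np.linspace(0.0, 1.0, 21)
      C = np.linspace(0.0, 1.0, 11)
      h = np.linspace(-2.0e5, 1.0e5, 7)       # J/kg, spanning heat loss and gain

      # Dummy temperature field standing in for stored flamelet solutions
      ZZ, CC, HH = np.meshgrid(Z, C, h, indexing="ij")
      T_table = 300.0 + 1500.0 * CC * 4.0 * ZZ * (1.0 - ZZ) + HH / 1004.0

      lookup_T = RegularGridInterpolator((Z, C, h), T_table)

      # CFD side: query the table with the local (Z, C, h) from transported scalars
      T = lookup_T(np.array([[0.35, 0.8, -5.0e4]]))[0]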

  20. 23 CFR 635.113 - Bid opening and bid tabulations.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... permitted. (b) The STD shall prepare and forward tabulations of bids to the Division Administrator. These tabulations shall be certified by a responsible STD official and shall show: (1) Bid item details for at least... opened and reviewed in accordance with the terms of the solicitation. The STD must use its own procedures...

  21. 23 CFR 635.113 - Bid opening and bid tabulations.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... permitted. (b) The STD shall prepare and forward tabulations of bids to the Division Administrator. These tabulations shall be certified by a responsible STD official and shall show: (1) Bid item details for at least... opened and reviewed in accordance with the terms of the solicitation. The STD must use its own procedures...

  22. 23 CFR 635.113 - Bid opening and bid tabulations.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... permitted. (b) The STD shall prepare and forward tabulations of bids to the Division Administrator. These tabulations shall be certified by a responsible STD official and shall show: (1) Bid item details for at least... opened and reviewed in accordance with the terms of the solicitation. The STD must use its own procedures...

  23. 23 CFR 635.113 - Bid opening and bid tabulations.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... permitted. (b) The STD shall prepare and forward tabulations of bids to the Division Administrator. These tabulations shall be certified by a responsible STD official and shall show: (1) Bid item details for at least... opened and reviewed in accordance with the terms of the solicitation. The STD must use its own procedures...

  24. Evidence of photosymbiosis in Palaeozoic tabulate corals.

    PubMed

    Zapalski, Mikolaj K

    2014-01-22

    Coral reefs form the most diverse of all marine ecosystems on Earth. Corals are among their main components and owe their bioconstructing abilities to a symbiosis with algae (Symbiodinium). The coral-algae symbiosis has been traced back to the Triassic (ca 240 Ma). Modern reef-building corals (Scleractinia) appeared after the Permian-Triassic crisis; in the Palaeozoic, some of the main reef constructors were the now-extinct tabulate corals. The calcium carbonate secreted by extant photosymbiotic corals bears characteristic isotope (C and O) signatures. The analysis of tabulate corals belonging to four orders (Favositida, Heliolitida, Syringoporida and Auloporida) from Silurian to Permian strata of Europe and Africa shows these characteristic carbon and oxygen stable isotope signatures. The δ(18)O to δ(13)C ratios in recent photosymbiotic scleractinians are very similar to those of Palaeozoic tabulates, thus providing strong evidence of such symbioses as early as the Middle Silurian (ca 430 Ma). Corals in Palaeozoic reefs used the same cellular mechanisms for carbonate secretion as recent reefs, and thus contributed to reef formation.

  25. The Iodine Satellite (iSat) Project Development Towards Critical Design Review (CDR)

    NASA Technical Reports Server (NTRS)

    Dankanich, John W.; Selby, Michael; Polzin, Kurt A.; Kamhawi, Hani; Hickman, Tyler; Byrne, Larry

    2016-01-01

    Despite the prevalence of Small Satellites in recent years, the systems flown to date have very limited propulsion capability. SmallSats are typically secondary payloads and have significant constraints for volume, mass, and power in addition to limitations on the use of hazardous propellants or stored energy (i.e. high pressure vessels). These constraints limit the options for SmallSat maneuverability. NASA's Space Technology Mission Directorate approved the iodine Satellite flight project for a rapid demonstration of iodine Hall thruster technology in a 12U configuration under the Small Spacecraft Technology Program. The project formally began in FY15 as a partnership between NASA MSFC, NASA GRC, and Busek Co, Inc., with the Air Force supporting the propulsion technology maturation. The team is in final preparation of the Critical Design Review prior to initiating the fabrication and integration phase of the project. The iSat project is on schedule for a launch opportunity in November 2017.

  26. 7 CFR 900.308 - Tabulation of ballots.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... Conduct of Referenda To Determine Producer Approval of Milk Marketing Orders To Be Made Effective Pursuant to Agricultural Marketing Agreement Act of 1937, as Amended § 900.308 Tabulation of ballots. (a... Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Marketing...

  27. Tabulated Neutron Emission Rates for Plutonium Oxide

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shores, Erik Frederick

    This work tabulates neutron emission rates for 80 plutonium oxide samples as reported in the literature. Plutonium-238 and plutonium-239 oxides are included, and such emission rates are useful for scaling tallies from Monte Carlo simulations and estimating dose rates for health physics applications.
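
    The scaling referred to is standard Monte Carlo practice, stated generically here rather than taken from the report: a tally normalized per source neutron is converted to an absolute rate by multiplying by the source strength,

      \[
      \dot{D} = T \times s_m \times m,
      \]

    where T is the per-source-neutron tally (e.g. dose per neutron), s_m is the tabulated specific emission rate in neutrons s^{-1} g^{-1} of oxide, and m is the sample mass.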

  28. BLS Machine-Readable Data and Tabulating Routines.

    ERIC Educational Resources Information Center

    DiFillipo, Tony

    This report describes the machine-readable data and tabulating routines that the Bureau of Labor Statistics (BLS) is prepared to distribute. An introduction discusses the LABSTAT (Labor Statistics) database and the BLS policy on release of unpublished data. Descriptions summarizing data stored in 25 files follow this format: overview, data…

  29. Research Trends with Cross Tabulation Search Engine

    ERIC Educational Resources Information Center

    Yin, Chengjiu; Hirokawa, Sachio; Yau, Jane Yin-Kim; Hashimoto, Kiyota; Tabata, Yoshiyuki; Nakatoh, Tetsuya

    2013-01-01

    To help researchers in building a knowledge foundation of their research fields which could be a time-consuming process, the authors have developed a Cross Tabulation Search Engine (CTSE). Its purpose is to assist researchers in 1) conducting research surveys, 2) efficiently and effectively retrieving information (such as important researchers,…

  30. Tabulated Combustion Model Development For Non-Premixed Flames

    NASA Astrophysics Data System (ADS)

    Kundu, Prithwish

    Turbulent non-premixed flames play a very important role in the field of engineering, ranging from power generation to propulsion. The coupling of fluid mechanics and the complicated combustion chemistry of fuels poses a challenge for the numerical modeling of this type of problem. Combustion modeling in Computational Fluid Dynamics (CFD) is one of the most important tools used for predictive modeling of complex systems and to understand the basic fundamentals of combustion. Traditional combustion models solve a transport equation for each species with a source term. In order to resolve the complex chemistry accurately it is important to include a large number of species. However, the computational cost is generally proportional to the cube of the number of species. The presence of a large number of species in a flame makes the use of CFD computationally expensive and beyond reach for some applications, or inaccurate when solved with simplified chemistry. For highly turbulent flows, it also becomes important to incorporate the effects of turbulence chemistry interaction (TCI). The aim of this work is to develop high fidelity combustion models based on the flamelet concept and to significantly advance the existing capabilities. A thorough investigation of existing models (Finite-rate chemistry and Representative Interactive Flamelet (RIF)) and a comparative study of combustion models was done initially on a constant volume combustion chamber with diesel fuel injection. The CFD modeling was validated with experimental results and was also successfully applied to a single cylinder diesel engine. The effect of the number of flamelets on the RIF model and flamelet initialization strategies were studied. The RIF model with multiple flamelets is computationally expensive, and a new model was therefore proposed on the framework of RIF. The new model was based on tabulated chemistry and incorporated TCI effects. A multidimensional tabulated chemistry database generation code was developed based on the 1

  31. 41 CFR 101-26.509 - Tabulating machine cards.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 41 Public Contracts and Property Management 2 2010-07-01 2010-07-01 true Tabulating machine cards. 101-26.509 Section 101-26.509 Public Contracts and Property Management Federal Property Management Regulations System FEDERAL PROPERTY MANAGEMENT REGULATIONS SUPPLY AND PROCUREMENT 26-PROCUREMENT SOURCES AND...

  32. Expansion of Tabulated Scattering Matrices in Generalized Spherical Functions

    NASA Technical Reports Server (NTRS)

    Mishchenko, Michael I.; Geogdzhayev, Igor V.; Yang, Ping

    2016-01-01

    An efficient way to solve the vector radiative transfer equation for plane-parallel turbid media is to Fourier-decompose it in azimuth. This methodology is typically based on the analytical computation of the Fourier components of the phase matrix and is predicated on the knowledge of the coefficients appearing in the expansion of the normalized scattering matrix in generalized spherical functions. Quite often the expansion coefficients have to be determined from tabulated values of the scattering matrix obtained from measurements or calculated by solving the Maxwell equations. In such cases one needs an efficient and accurate computer procedure converting a tabulated scattering matrix into the corresponding set of expansion coefficients. This short communication summarizes the theoretical basis of this procedure and serves as the user guide to a simple public-domain FORTRAN program.
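
    The full procedure expands all independent scattering-matrix elements in generalized spherical functions; the scalar analogue below, which recovers Legendre expansion coefficients from a tabulated phase function by Gauss-Legendre quadrature, shows the structure of such a conversion. It is a sketch of the underlying idea, not the public-domain FORTRAN program the record refers to.

      import numpy as np
      from numpy.polynomial.legendre import leggauss, legval

      def legendre_coefficients(phase_func, lmax, nquad=128):
          """alpha_l = (2l+1)/2 * integral of p(mu) P_l(mu) over [-1, 1]."""
          mu, w = leggauss(nquad)                 # Gauss-Legendre nodes and weights
          p = phase_func(mu)
          coeffs = []
          for l in range(lmax + 1):
              unit = np.zeros(l + 1)
              unit[l] = 1.0                       # selects P_l inside legval
              coeffs.append(0.5 * (2 * l + 1) * np.sum(w * p * legval(mu, unit)))
          return np.array(coeffs)

      # Check against the Henyey-Greenstein phase function, whose exact
      # expansion coefficients are alpha_l = (2l + 1) * g**l.
      g = 0.7
      hg = lambda mu: (1 - g**2) / (1 + g**2 - 2 * g * mu) ** 1.5
      alpha = legendre_coefficients(hg, lmax=10)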

  33. 23 CFR 635.113 - Bid opening and bid tabulations.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... CONSTRUCTION AND MAINTENANCE Contract Procedures § 635.113 Bid opening and bid tabulations. (a) All bids... contractors, during the period following the opening of bids and before the award of the contract shall not be...

  34. 41 CFR 101-26.509-1 - Requisitioning tabulating machine cards available from Federal Supply Schedule contracts.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 41 Public Contracts and Property Management 2 2011-07-01 2007-07-01 true Requisitioning tabulating... Contracts and Property Management Federal Property Management Regulations System FEDERAL PROPERTY MANAGEMENT... electrical and mechanical contact tabulating machines, including aperture cards and copy cards. Federal...

  35. 41 CFR 101-26.509-1 - Requisitioning tabulating machine cards available from Federal Supply Schedule contracts.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 41 Public Contracts and Property Management 2 2010-07-01 2010-07-01 true Requisitioning tabulating... Contracts and Property Management Federal Property Management Regulations System FEDERAL PROPERTY MANAGEMENT... electrical and mechanical contact tabulating machines, including aperture cards and copy cards. Federal...

  36. External Threat Risk Assessment Algorithm (ExTRAA)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Powell, Troy C.

    Two risk assessment algorithms and philosophies have been augmented and combined to form a new algorithm, the External Threat Risk Assessment Algorithm (ExTRAA), that allows for effective and statistically sound analysis of external threat sources in relation to individual attack methods. In addition to the attack method use probability and the attack method employment consequence, the concept of defining threat sources is added to the risk assessment process. Sample data are tabulated and depicted in radar plots and bar graphs for algorithm demonstration purposes. The largest success of ExTRAA is its ability to visualize the kind of risk posed in a given situation using the radar plot method.
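
    The radar-plot depiction mentioned above is straightforward to reproduce generically; the sketch below uses hypothetical threat-source scores (not ExTRAA data) and standard matplotlib calls.

      import numpy as np
      import matplotlib.pyplot as plt

      # Hypothetical risk scores per attack method for one threat source
      labels = ["Cyber", "Insider", "Theft", "Sabotage", "Standoff"]
      scores = [0.7, 0.4, 0.2, 0.5, 0.3]

      angles = np.linspace(0, 2 * np.pi, len(labels), endpoint=False)
      angles = np.concatenate([angles, angles[:1]])  # close the polygon
      values = scores + scores[:1]

      ax = plt.subplot(polar=True)
      ax.plot(angles, values)
      ax.fill(angles, values, alpha=0.25)
      ax.set_xticks(angles[:-1])
      ax.set_xticklabels(labels)
      plt.show()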

  37. Analysis of the Command and Control Segment (CCS) attitude estimation algorithm

    NASA Technical Reports Server (NTRS)

    Stockwell, Catherine

    1993-01-01

    This paper categorizes the qualitative behavior of the Command and Control Segment (CCS) differential correction algorithm as applied to attitude estimation using simultaneous spin axis sun angle and Earth chord length measurements. The categories of interest are the domains of convergence, divergence, and their boundaries. Three series of plots are discussed that show the dependence of the estimation algorithm on the vehicle radius, the sun/Earth angle, and the spacecraft attitude. Qualitative dynamics common to all three series are tabulated and discussed. Out-of-limits conditions for the estimation algorithm are identified and discussed.
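
    Differential correction of this kind is essentially iterated linear least squares. The sketch below is a generic Gauss-Newton loop fitted to a toy one-parameter model, offered as an illustration of the method's structure rather than the CCS algorithm itself; whether the loop converges or diverges depends on the starting estimate, which is exactly the domain structure the paper maps out.

      import numpy as np

      def differential_correction(model, jacobian, x0, observations, iters=20):
          """Gauss-Newton: repeatedly solve the linearized least-squares problem."""
          x = np.asarray(x0, dtype=float)
          for _ in range(iters):
              r = observations - model(x)     # residuals at the current estimate
              J = jacobian(x)                 # sensitivity of measurements to x
              dx, *_ = np.linalg.lstsq(J, r, rcond=None)
              x = x + dx
              if np.linalg.norm(dx) < 1e-12:  # converged
                  break
          return x

      # Toy example: estimate a phase angle from noisy sine measurements
      t = np.linspace(0.0, 1.0, 50)
      obs = np.sin(2 * np.pi * t + 0.3) + 0.01 * np.random.randn(t.size)
      model = lambda x: np.sin(2 * np.pi * t + x[0])
      jacobian = lambda x: np.cos(2 * np.pi * t + x[0]).reshape(-1, 1)
      estimate = differential_correction(model, jacobian, [0.0], obs)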

  38. An a priori study of different tabulation methods for turbulent pulverised coal combustion

    NASA Astrophysics Data System (ADS)

    Luo, Yujuan; Wen, Xu; Wang, Haiou; Luo, Kun; Jin, Hanhui; Fan, Jianren

    2018-05-01

    In many practical pulverised coal combustion systems, different oxidiser streams exist, e.g. the primary- and secondary-air streams in the power plant boilers, which makes the modelling of these systems challenging. In this work, three tabulation methods for modelling pulverised coal combustion are evaluated through an a priori study. Pulverised coal flames stabilised in a three-dimensional turbulent counterflow, consisting of different oxidiser streams, are simulated with detailed chemistry first. Then, the thermo-chemical quantities calculated with different tabulation methods are compared to those from detailed chemistry solutions. The comparison shows that the conventional two-stream flamelet model with a fixed oxidiser temperature cannot predict the flame temperature correctly. The conventional two-stream flamelet model is then modified to set the oxidiser temperature equal to the fuel temperature, both of which are varied in the flamelets. By this means, the variations of oxidiser temperature can be considered. It is found that this modified tabulation method performs very well on prediction of the flame temperature. The third tabulation method is an extended three-stream flamelet model that was initially proposed for gaseous combustion. The results show that the reference gaseous temperature profile can be overall reproduced by the extended three-stream flamelet model. Interestingly, it is found that the predictions of major species mass fractions are not sensitive to the oxidiser temperature boundary conditions for the flamelet equations in the a priori analyses.

  39. Tabulations of ambient ozone data obtained by GASP (Global Air Sampling Program) airliners, March 1975 to July 1979

    NASA Technical Reports Server (NTRS)

    Jasperson, W. H.; Holdeman, J. D.

    1984-01-01

    Tabulations are given of GASP ambient ozone mean, standard deviation, median, 84th percentile, and 98th percentile values, by month, flight level, and geographical region. These data are tabulated to conform to the temporal and spatial resolution required by FAA Advisory Circular 120-38 (monthly, by 2000 ft in altitude, by 5 deg in latitude) for climatological data used to show compliance with cabin ozone regulations. In addition, seasonal by 10 deg latitude tabulations are included which are directly comparable to, and supersede, the interim GASP ambient ozone tabulations given in appendix B of FAA-EE-80-43 (NASA TM-81528). Selected probability variations are highlighted to illustrate the spatial and temporal variability of ambient ozone and to compare results from the coarse- and fine-grid analyses.

  40. Network Prime-Time Violence Tabulations for 1975-76 Season.

    ERIC Educational Resources Information Center

    Klapper, Joseph T.

    This is an annual report on violence in prime-time television. The tabulations, based on 13 weeks of monitoring prime-time programs on three networks, indicate a decline in violence by 24% and a decline in the rate per hour of dramatic violence to 1.9 incidents per hour since last season. The study also indicated that the introduction of the…

  41. 41 CFR 101-26.509-2 - Requisitioning tabulating machine cards not available from Federal Supply Schedule contracts.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... machine cards not available from Federal Supply Schedule contracts. 101-26.509-2 Section 101-26.509-2... Programs § 101-26.509-2 Requisitioning tabulating machine cards not available from Federal Supply Schedule contracts. (a) Requisitions for tabulating machine cards covered by Federal Supply Schedule contracts which...

  42. Experimental study of a generic high-speed civil transport: Tabulated data

    NASA Technical Reports Server (NTRS)

    Belton, Pamela S.; Campbell, Richard L.

    1992-01-01

    An experimental study of a generic high-speed civil transport was conducted in LaRC's 8-Foot Transonic Pressure Tunnel. The database was obtained for the purpose of assessing the accuracy of various levels of computational analysis. Two models differing only in wing tip geometry were tested with and without flow-through nacelles. The baseline model has a curved or crescent wing tip shape, while the second model has a more conventional straight wing tip shape. The study was conducted at Mach numbers from 0.30 to 1.19. Force data were obtained on both the straight and curved wing tip models. Only the curved wing tip model was instrumented for measuring pressures. Longitudinal and lateral-directional aerodynamic data are presented without analysis in tabulated form. Pressure coefficients for the curved wing tip model are also presented in tabulated form.

  43. 15 CFR 101.1 - Report of tabulations of population to states and localities pursuant to 13 U.S.C. 141(c).

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 15 Commerce and Foreign Trade 1 2014-01-01 2014-01-01 false Report of tabulations of population to... DECENNIAL CENSUS POPULATION INFORMATION § 101.1 Report of tabulations of population to states and localities... the methodology to be used in calculating the tabulations of population reported to States and...

  44. 15 CFR 101.1 - Report of tabulations of population to states and localities pursuant to 13 U.S.C. 141(c).

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 15 Commerce and Foreign Trade 1 2012-01-01 2012-01-01 false Report of tabulations of population to... DECENNIAL CENSUS POPULATION INFORMATION § 101.1 Report of tabulations of population to states and localities... the methodology to be used in calculating the tabulations of population reported to States and...

  45. 15 CFR 101.1 - Report of tabulations of population to states and localities pursuant to 13 U.S.C. 141(c).

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 15 Commerce and Foreign Trade 1 2013-01-01 2013-01-01 false Report of tabulations of population to... DECENNIAL CENSUS POPULATION INFORMATION § 101.1 Report of tabulations of population to states and localities... the methodology to be used in calculating the tabulations of population reported to States and...

  46. The Fluidez en La Lectura Oral (FLO) Portion of the Indicadores Dinamicos De Exito en La Lectura (IDEL) and the English Language Portion of the Illinois Standard Achievement Test (ISAT): A Correlational Study of Second and Third Grade English Language Learners

    ERIC Educational Resources Information Center

    Ganan, Brian J.

    2012-01-01

    This study examined the relationship between Spanish oral reading fluency (ORF) at the end of second grade and students' performance on the third grade ISAT reading test. The major research question guiding this study was: What is the direction and strength of the relationship between performance on the 2nd grade IDEL FLO, a Spanish language ORF…

  47. Survey of United States Army Reserve (USAR) Troop Program Unit (TPU) soldiers 1989. Tabulation of Questionnaire Responses: Cross-Sectional Sample: Officers and Enlisted Personnel

    DTIC Science & Technology

    1989-09-30

    QUESTIONNAIRE INSTRUMENT...DATA TABULATION VOLUMES: This material provides information for use by readers to interpret...The second longitudinal Tabulation Volume reports the 1988 questionnaire responses of the junior enlisted "stayers" who were used as the sample to...the specific crossing variables used for the cross-sectional and longitudinal Tabulation Volumes.

  48. Propulsion System Testing for the Iodine Satellite (iSAT) Demonstration Mission

    NASA Technical Reports Server (NTRS)

    Polzin, Kurt A.; Kamhawi, Hani

    2015-01-01

    vacuum chamber (it is under 10(exp -6) torr at -75 C), making it possible to 'cryopump' the propellant with lower-cost recirculating refrigerant-based systems as opposed to using liquid nitrogen or low-temperature gaseous helium cryopanels. An iodine-based system is not without its challenges. The primary challenge is that the entire feed system must be maintained at an elevated temperature to prevent the iodine from depositing (transitioning from the gas phase directly back into the solid phase), which would block the propellant feed lines. Furthermore, deposition will occur unless the temperature in the lines is greater than the temperature of the propellant reservoir. The flow rate can be controlled by adjusting the heating applied to the reservoir, but as with any thermal control there is a relatively slow response to changes in the heating rate. In the present paper, we describe the propulsion and propellant feed system for the iodine satellite (iSAT) flight demonstration mission. The system is based around the Busek BHT-200 Hall thruster, which has been modified for chemical compatibility with iodine vapor. While the gross propellant flow rate is maintained by the heated propellant reservoir, the flows to the anode and cathode are adjusted using two heated Vacco proportional flow control valves (PFCV), which provide a very fast flow-rate adjustment response. The flight mission design layout will be presented, showing how the system will be packaged into the overall 12-U spacecraft and the techniques being employed to protect the remaining spacecraft hardware from the propulsion system (e.g., plasma impingement, iodine deposition, thermal loads). In addition to the flight system design, results of testing the thruster and cathode with both operating on iodine propellant are presented. The tests are conducted on a thrust stand (see Fig. 1) in a large vacuum chamber containing a beam dump chilled to below -100 C to 'cryopump' the propellant. The thruster

  49. UniGene Tabulator: a full parser for the UniGene format.

    PubMed

    Lenzi, Luca; Frabetti, Flavia; Facchin, Federica; Casadei, Raffaella; Vitale, Lorenza; Canaider, Silvia; Carinci, Paolo; Zannotti, Maria; Strippoli, Pierluigi

    2006-10-15

    UniGene Tabulator 1.0 provides a solution for full parsing of UniGene flat file format; it implements a structured graphical representation of each data field present in UniGene following import into a common database managing system usable in a personal computer. This database includes related tables for sequence, protein similarity, sequence-tagged site (STS) and transcript map interval (TXMAP) data, plus a summary table where each record represents a UniGene cluster. UniGene Tabulator enables full local management of UniGene data, allowing parsing, querying, indexing, retrieving, exporting and analysis of UniGene data in a relational database form, usable on Macintosh (OS X 10.3.9 or later) and Windows (2000, with service pack 4, XP, with service pack 2 or later) operating systems-based computers. The current release, including both the FileMaker runtime applications, is freely available at http://apollo11.isto.unibo.it/software/
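
    A minimal sketch of the kind of parsing involved, assuming the usual NCBI flat-file convention of KEY-value lines with records terminated by //; this is an illustration of the general approach, not the UniGene Tabulator code, and the field handling is deliberately simplified.

      def parse_unigene(path):
          """Yield one dict per cluster record from a UniGene-style flat file."""
          record = {}
          with open(path) as fh:
              for line in fh:
                  line = line.rstrip("\n")
                  if line.startswith("//"):           # record terminator
                      if record:
                          yield record
                      record = {}
                  elif line.strip():
                      key, _, value = line.partition(" ")
                      # Repeated keys (e.g. SEQUENCE lines) accumulate into lists
                      record.setdefault(key, []).append(value.strip())
          if record:                                  # file without a trailing //
              yield record

      # Usage: tabulate cluster IDs and titles into a summary listing
      # for rec in parse_unigene("Hs.data"):
      #     print(rec["ID"][0], rec.get("TITLE", ["?"])[0])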

  50. 2010 Military Family Life Project (MFLP) - Couples: Tabulations of Responses

    DTIC Science & Technology

    2013-08-31

    interest income; dividends; child support/alimony; social security; welfare assistance; and net rent, trusts, and royalties from any other investments...2010 Military Family Life Project: Couples, Tabulations of Responses...Defense Manpower Data Center, Human Resources Strategic Assessment Program

  51. Improvements on the minimax algorithm for the Laplace transformation of orbital energy denominators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Helmich-Paris, Benjamin, E-mail: b.helmichparis@vu.nl; Visscher, Lucas, E-mail: l.visscher@vu.nl

    2016-09-15

    We present a robust and non-heuristic algorithm that finds all extremum points of the error distribution function of numerically Laplace-transformed orbital energy denominators. The extremum point search is one of the two key steps for finding the minimax approximation. If pre-tabulation of initial guesses is to be avoided, strategies for a sufficiently robust algorithm have not been discussed so far. We compare our non-heuristic approach with a bracketing and bisection algorithm and demonstrate that three times fewer function evaluations are required altogether when applying it to typical non-relativistic and relativistic quantum chemical systems.
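
    For background, the standard form of the problem (stated here from the general minimax literature, not from the record itself): the orbital-energy denominator is approximated by a short exponential sum,

      \[
      \frac{1}{x} \approx \sum_{k=1}^{K} \omega_k e^{-\alpha_k x},
      \qquad x \in [x_{\min}, x_{\max}],
      \]

    with weights \omega_k and exponents \alpha_k chosen so that all extrema of the error distribution function \delta_K(x) = 1 - x \sum_k \omega_k e^{-\alpha_k x} attain equal magnitude; robustly locating all of those extremum points without pre-tabulated starting guesses is the step this record addresses.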

  52. 2012 Survey of Reserve Components Spouses (RCSS): Tabulations of Responses

    DTIC Science & Technology

    2012-09-30

    injury/medical problems; child care problems; other family/personal obligation; maternity/paternity leave; labor dispute; weather-affected job; school...Did you interact with the unit or Service point of contact?...How satisfied are you with the level of assistance...I did not interact with the unit or Service point of contact were tabulated separately, as responses to the constructed question Did you interact

  53. A New LES/PDF Method for Computational Modeling of Turbulent Reacting Flows

    NASA Astrophysics Data System (ADS)

    Turkeri, Hasret; Muradoglu, Metin; Pope, Stephen B.

    2013-11-01

    A new LES/PDF method is developed for computational modeling of turbulent reacting flows. The open source package, OpenFOAM, is adopted as the LES solver and combined with the particle-based Monte Carlo method to solve the LES/PDF model equations. The dynamic Smagorinsky model is employed to account for the subgrid-scale motions. The LES solver is first validated for the Sandia Flame D using a steady flamelet method in which the chemical compositions, density and temperature fields are parameterized by the mean mixture fraction and its variance. In this approach, the modeled transport equations for the mean mixture fraction and the square of the mixture fraction are solved and the variance is then computed from its definition. The results are found to be in a good agreement with the experimental data. Then the LES solver is combined with the particle-based Monte Carlo algorithm to form a complete solver for the LES/PDF model equations. The in situ adaptive tabulation (ISAT) algorithm is incorporated into the LES/PDF method for efficient implementation of detailed chemical kinetics. The LES/PDF method is also applied to the Sandia Flame D using the GRI-Mech 3.0 chemical mechanism and the results are compared with the experimental data and the earlier PDF simulations. The Scientific and Technical Research Council of Turkey (TUBITAK), Grant No. 111M067.
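
    In symbols (notation assumed here, with tildes denoting density-weighted filtered fields), the variance computed "from its definition" as described above is

      \[
      \widetilde{\xi''^{2}} = \widetilde{\xi^{2}} - \widetilde{\xi}^{\,2},
      \]

    which is why transport equations are needed only for the filtered mixture fraction and its square.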

  54. Tabulate Corals after the Frasnian/Famennian Crisis: A Unique Fauna from the Holy Cross Mountains, Poland

    PubMed Central

    Zapalski, Mikołaj K.; Berkowski, Błażej; Wrzołek, Tomasz

    2016-01-01

    Famennian tabulate corals were very rare worldwide, and their biodiversity was relatively low. Here we report a unique tabulate fauna from the mid- and late Famennian of the western part of the Holy Cross Mountains (Kowala and Ostrówka), Poland. We describe eight species (four of them new, namely ?Michelinia vinni sp. nov., Thamnoptychia mistiaeni sp. nov., Syringopora kowalensis sp. nov. and Syringopora hilarowiczi sp. nov.); the whole fauna consists of ten species (two others described in previous papers). These corals form two assemblages—the lower, mid-Famennian with Thamnoptychia and the upper, late Famennian with representatives of genera ?Michelinia, Favosites, Syringopora and ?Yavorskia. The Famennian tabulates from Kowala represent the richest Famennian assemblage appearing after the F/F crisis (these faunas appear some 10 Ma after the extinction event). Corals described here most probably inhabited deeper water settings, near the limit between euphotic and disphotic zones or slightly above. At generic level, these faunas show similarities to other Devonian and Carboniferous faunas, which might suggest their ancestry to at least several Carboniferous lineages. Tabulate faunas described here represent new recruits (the basin of the Holy Cross mountains was not a refuge during the F/F crisis) and have no direct evolutionary linkage to Frasnian faunas from Kowala. The colonization of the seafloor took place in two separate steps: first was monospecific assemblage of Thamnoptychia, and later came the diversified Favosites-Syringopora-Michelinia fauna. PMID:27007689

  55. Large eddy simulation of turbulent premixed combustion using tabulated detailed chemistry and presumed probability density function

    NASA Astrophysics Data System (ADS)

    Zhang, Hongda; Han, Chao; Ye, Taohong; Ren, Zhuyin

    2016-03-01

    A method of chemistry tabulation combined with a presumed probability density function (PDF) is applied to simulate piloted premixed jet burner flames with high Karlovitz number using large eddy simulation. Thermo-chemical states are tabulated by the combination of auto-ignition and extended auto-ignition models. To evaluate the predictive capability of the proposed tabulation method to represent the thermo-chemical states under different fresh-gas temperatures, an a priori study is conducted by performing idealised transient one-dimensional premixed flame simulations. A presumed PDF is used to account for the interaction of turbulence and flame, with a beta PDF to model the reaction progress variable distribution. Two presumed PDF models, a Dirichlet distribution and independent beta distributions, respectively, are applied to represent the interaction between the two mixture fractions that are associated with the three inlet streams. Comparisons of statistical results show that the two presumed PDF models for the two mixture fractions are both capable of predicting temperature and major species profiles; however, they are shown to have a significant effect on the predictions for intermediate species. An analysis of the thermo-chemical state-space representation of the sub-grid scale (SGS) combustion model is performed by comparing correlations between the carbon monoxide mass fraction and temperature. The SGS combustion model based on the proposed chemistry tabulation can reasonably capture the peak value and trend of intermediate species. Aspects regarding model extensions to adequately predict the peak location of intermediate species are discussed.
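
    The beta-PDF closure for the progress variable has a standard form, sketched below with assumed variable names (a generic illustration of the presumed-PDF convolution step, not the authors' code). The shape parameters come from the transported mean and variance, which must satisfy 0 < var < mean*(1 - mean).

      import numpy as np
      from scipy.stats import beta as beta_dist

      def presumed_beta_mean(phi_of_c, c_mean, c_var, n=2001):
          """Integrate a tabulated quantity phi(c) against a presumed beta PDF."""
          factor = c_mean * (1.0 - c_mean) / c_var - 1.0   # > 0 for a valid PDF
          a, b = c_mean * factor, (1.0 - c_mean) * factor
          c = np.linspace(1e-6, 1.0 - 1e-6, n)
          pdf = beta_dist.pdf(c, a, b)
          return np.trapz(phi_of_c(c) * pdf, c) / np.trapz(pdf, c)

      # Example: filtered temperature from a toy T(c) profile
      T_of_c = lambda c: 300.0 + 1500.0 * c
      T_filtered = presumed_beta_mean(T_of_c, c_mean=0.6, c_var=0.05)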

  56. Tabulate Corals after the Frasnian/Famennian Crisis: A Unique Fauna from the Holy Cross Mountains, Poland.

    PubMed

    Zapalski, Mikołaj K; Berkowski, Błażej; Wrzołek, Tomasz

    2016-01-01

    Famennian tabulate corals were very rare worldwide, and their biodiversity was relatively low. Here we report a unique tabulate fauna from the mid- and late Famennian of the western part of the Holy Cross Mountains (Kowala and Ostrówka), Poland. We describe eight species (four of them new, namely ?Michelinia vinni sp. nov., Thamnoptychia mistiaeni sp. nov., Syringopora kowalensis sp. nov. and Syringopora hilarowiczi sp. nov.); the whole fauna consists of ten species (two others described in previous papers). These corals form two assemblages-the lower, mid-Famennian with Thamnoptychia and the upper, late Famennian with representatives of genera ?Michelinia, Favosites, Syringopora and ?Yavorskia. The Famennian tabulates from Kowala represent the richest Famennian assemblage appearing after the F/F crisis (these faunas appear some 10 Ma after the extinction event). Corals described here most probably inhabited deeper water settings, near the limit between euphotic and disphotic zones or slightly above. At generic level, these faunas show similarities to other Devonian and Carboniferous faunas, which might suggest their ancestry to at least several Carboniferous lineages. Tabulate faunas described here represent new recruits (the basin of the Holy Cross mountains was not a refuge during the F/F crisis) and have no direct evolutionary linkage to Frasnian faunas from Kowala. The colonization of the seafloor took place in two separate steps: first was monospecific assemblage of Thamnoptychia, and later came the diversified Favosites-Syringopora-Michelinia fauna.

  57. FUEL-FLEXIBLE GASIFICATION-COMBUSTION TECHNOLOGY FOR PRODUCTION OF H2 AND SEQUESTRATION-READY CO2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    George Rizeq; Janice West; Arnaldo Frydman

    Further development of a combustion Large Eddy Simulation (LES) code for the design of advanced gaseous combustion systems is described in this sixth quarterly report. CFD Research Corporation (CFDRC) is developing the LES module within the parallel, unstructured solver included in the commercial CFD-ACE+ software. In this quarter, in-situ adaptive tabulation (ISAT) for efficient chemical rate storage and retrieval was implemented and tested within the Linear Eddy Model (LEM). ISAT type 3 is being tested so that extrapolation can be performed and further improve the retrieval rate. Further testing of the LEM for subgrid chemistry was performed for parallel applications and for multi-step chemistry. Validation of the software on backstep and bluff-body reacting cases was performed. Initial calculations of the SimVal experiment at Georgia Tech using their LES code were performed. Georgia Tech continues the effort to parameterize the LEM over composition space so that a neural net can be used efficiently in the combustion LES code. A new and improved Artificial Neural Network (ANN), with log-transformed output, for the 1-step chemistry was implemented in CFDRC's LES code and gave reasonable results. This quarter, the 2nd consortium meeting was held at CFDRC. Next quarter, LES software development and testing will continue. Alpha testing of the code will continue to be performed on cases of interest to the industrial consortium. Optimization of subgrid models will be pursued, particularly with the ISAT approach. Also next quarter, the demonstration of the neural net approach, for multi-step chemical kinetics speed-up in CFD-ACE+, will be accomplished.

  58. A tabulation of pipe length to diameter ratios as a function of Mach number and pressure ratios for compressible flow

    NASA Technical Reports Server (NTRS)

    Dixon, G. V.; Barringer, S. R.; Gray, C. E.; Leatherman, A. D.

    1975-01-01

    Computer programs and resulting tabulations are presented of pipeline length-to-diameter ratios as a function of Mach number and pressure ratios for compressible flow. The tabulations are applicable to air, nitrogen, oxygen, and hydrogen for compressible isothermal flow with friction and compressible adiabatic flow with friction. Also included are equations for the determination of weight flow. The tabulations presented cover a wider range of Mach numbers for choked, adiabatic flow than available from commonly used engineering literature. Additional information presented, but which is not available from this literature, is unchoked, adiabatic flow over a wide range of Mach numbers, and choked and unchoked, isothermal flow for a wide range of Mach numbers.
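
    For reference, the choked adiabatic (Fanno-flow) branch of such a tabulation is conventionally generated from the standard relation between friction length and Mach number, quoted here from compressible-flow theory rather than from the report:

      \[
      \frac{4 f L^{*}}{D} = \frac{1 - M^{2}}{\gamma M^{2}}
      + \frac{\gamma + 1}{2\gamma}
      \ln\!\left[\frac{(\gamma + 1) M^{2}}{2 + (\gamma - 1) M^{2}}\right],
      \]

    where L* is the duct length required to drive the flow from Mach number M to the choked state M = 1; the length-to-diameter ratio between two stations follows by differencing the tabulated values at the inlet and outlet Mach numbers.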

  19. PRT Impact Study Pre-PRT Phase : Volume 3. Frequency Tabulations from Four Transportation-Related Surveys

    DOT National Transportation Integrated Search

    1976-03-01

    The report gives tabulations of survey responses which were collected in Morgantown, West Virginia, as part of a study to assess the impact of the installation of the Personal Rapid Transit (PRT) System.

  20. Indoor radon regulation using tabulated values of temporal radon variation.

    PubMed

    Tsapalov, Andrey; Kovler, Konstantin

    2018-03-01

    Mass measurements of indoor radon concentrations have been conducted for about 30 years. In most countries, a national reference/action/limit level is adopted, limiting the annual average indoor radon (AAIR) concentration. However, until now, there has been no single, generally accepted international protocol for determining the AAIR with a known confidence interval, based on measurements of different durations. Obviously, as the duration of measurements increases, the uncertainty of the AAIR estimate decreases. The lack of information about the confidence interval of the determined AAIR level does not allow correct comparison with the radon reference level, which greatly complicates the development of an effective indoor radon measurement protocol and strategy. The paper proposes a general principle of indoor radon regulation, based on simple criteria widely used in metrology, and introduces a new parameter, the coefficient of temporal radon variation K_V(t), which depends on the measurement duration and determines the uncertainty of the AAIR. An algorithm for determining K_V(t) based on the results of annual continuous radon monitoring in experimental rooms is proposed. The monitoring covered indoor radon activity concentrations and the equilibrium equivalent concentration (EEC) of radon progeny, and was conducted in 10 selected experimental rooms located in 7 buildings, mainly in the Moscow region (Russia), from 2006 to 2013. The experimental and tabulated values of K_V(t), and also the values of the coefficient of temporal EEC variation as functions of the mode and duration of the measurements, were obtained. Recommendations to improve the efficiency and reliability of indoor radon regulation are given. The importance of taking geological factors into account is discussed. The representativity of the results of the study is estimated and an approach for their verification is proposed.
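
    One plausible way to use such a tabulated coefficient (an illustrative assumption on our part, not the paper's actual protocol) is to bound the AAIR by scaling a short-term mean concentration by 1 + K_V(t) for the chosen measurement duration:

        # Illustrative only: bounds on the annual average indoor radon (AAIR)
        # from a short-term mean C_t and a tabulated K_V(t). The grid and the
        # K_V values below are placeholders, not values from the study.
        import numpy as np

        durations_days = np.array([7, 30, 90, 180, 365])   # hypothetical grid
        k_v = np.array([0.9, 0.5, 0.3, 0.15, 0.0])         # hypothetical K_V

        def aair_bounds(c_measured, t_days):
            k = np.interp(t_days, durations_days, k_v)
            return c_measured / (1.0 + k), c_measured * (1.0 + k)

        low, high = aair_bounds(120.0, 30)   # Bq/m^3, one-month measurement
        print(f"AAIR within [{low:.0f}, {high:.0f}] Bq/m^3")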

  1. Thermal Analysis of Iodine Satellite (iSAT) from Preliminary Design Review (PDR) to Critical Design Review (CDR)

    NASA Technical Reports Server (NTRS)

    Mauro, Stephanie

    2016-01-01

    The Iodine Satellite (iSAT) is a 12U cubesat with a primary mission to demonstrate the iodine-fueled Hall Effect Thruster (HET) propulsion system. The spacecraft (SC) will operate throughout a one-year mission in an effort to mature the propulsion system for use in future applications. The benefit of the HET is that it uses a propellant, iodine, which is easy to store and provides a high thrust-to-mass ratio. This paper describes the thermal analysis and design of the SC between Preliminary Design Review (PDR) and Critical Design Review (CDR). The design of the satellite has undergone many changes due to a variety of challenges, both before PDR and during the time period discussed in this paper. Thermal challenges associated with the system include a high power density, small amounts of available radiative surface area, localized temperature requirements of the propulsion components, and unknown orbital parameters. The thermal control system is implemented to maintain component temperatures within their respective operational limits throughout the mission, while also maintaining propulsion components at the high temperatures needed to allow gaseous iodine propellant to flow. The design includes heaters, insulation, radiators, coatings, and thermal straps. Currently, the maximum temperatures of several components are near their maximum operational limits, and the battery is close to its minimum operational limit. Mitigation strategies and planned work to solve these challenges are discussed.

  2. Food Tabulator. DOT No. 211.582-010. Cafeteria Occupations. Coordinator's Guide. First Edition.

    ERIC Educational Resources Information Center

    East Texas State Univ., Commerce. Occupational Curriculum Lab.

    This study guide, one of eight individualized units developed for students enrolled in cooperative part-time training and employed in a cafeteria, is composed of information about one specific occupation; this unit focuses on the duties of the food tabulator. Materials provided in this guide for coordinator use include a student progress chart; a…

  3. Survey of United States Army Reserve (USAR) Troop Program Unit (TPU) soldiers 1989. Tabulation of Questionnaire Responses: Cross-Sectional Sample: Junior Enlisted (E1-E4)

    DTIC Science & Technology

    1989-09-30

    information for use by readers to interpret the tabulation volumes accompanying the final project report: 1989 Survey of U.S. Army Reserve (USAR) Troop... "stayers" who were used as the sample to generate the first longitudinal Tabulation Volume. Comparing questionnaire response frequencies between the... as described below). Detailed below are the specific crossing variables used for the cross-sectional and longitudinal Tabulation Volumes. Cross

  4. Tabulated dose uniformity ratio and minimum dose data: rectangular 60Co source plaques

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Galanter, L.

    1971-01-01

    The data tabulated herein extend to rectangular cobalt-60 plaques the information presented for square plaques in BNL 50145 (Revised). The user is referred to BNL 50145 (Revised) and to the other reports listed for a complete discussion of the parameters involved in data generation and for instructions on the use of these data in gamma irradiator design.

  5. Algorithm for Lossless Compression of Calibrated Hyperspectral Imagery

    NASA Technical Reports Server (NTRS)

    Kiely, Aaron B.; Klimesh, Matthew A.

    2010-01-01

    A two-stage predictive method was developed for lossless compression of calibrated hyperspectral imagery. The first prediction stage uses a conventional linear predictor intended to exploit spatial and/or spectral dependencies in the data. The compressor tabulates counts of the past values of the difference between this initial prediction and the actual sample value. To form the ultimate predicted value, in the second stage, these counts are combined with an adaptively updated weight function intended to capture information about data regularities introduced by the calibration process. Finally, prediction residuals are losslessly encoded using adaptive arithmetic coding. Algorithms of this type are commonly tested on a readily available collection of images from the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) hyperspectral imager. On the standard calibrated AVIRIS hyperspectral images that are most widely used for compression benchmarking, the new compressor provides more than 0.5 bits/sample improvement over the previous best compression results. The algorithm has been implemented in Mathematica. The compression algorithm was demonstrated as beneficial on 12-bit calibrated AVIRIS images.
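
    A minimal sketch of a two-stage scheme in this spirit follows: spectral linear prediction, then a correction learned from the history of stage-1 residuals. The actual compressor's predictor, count-based weight function, and arithmetic coder are more elaborate; this is an illustration of the structure only.

        import numpy as np

        def two_stage_residuals(cube):
            """cube: (bands, rows, cols) integer array; returns stage-2 residuals.

            Stage 1 predicts each sample from the same pixel in the previous
            band; stage 2 subtracts a bias learned from earlier stage-1
            residuals, a crude stand-in for the adaptively weighted count
            table described above. The output would then feed an adaptive
            arithmetic coder.
            """
            out = np.empty(cube.shape, dtype=np.int64)
            for b in range(cube.shape[0]):
                pred = cube[b - 1] if b > 0 else np.zeros_like(cube[0])
                r1 = cube[b].astype(np.int64) - pred      # stage-1 residual
                s, n = 0.0, 0                             # residual history
                for i, row in enumerate(r1):
                    bias = s / n if n else 0.0            # stage-2 correction
                    out[b, i] = row - int(round(bias))
                    s += row.sum()
                    n += row.size
            return out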

  6. 2015 Workplace and Gender Relations Survey of Reserve Component Members: Tabulations of Responses

    DTIC Science & Technology

    2016-03-17

    Defense Research, Surveys, and Statistics Center (RSSC), Defense Manpower Data Center, 4800 Mark Center Drive, Suite 04E25-01, Alexandria, VA 22350-4000.

  7. Moon view period tabulations (with station masking) for Manned Space Flight Network stations, book 1

    NASA Technical Reports Server (NTRS)

    Gattie, M. M.; Williams, R. L.

    1970-01-01

    The times during which MSFN stations can view the moon are tabulated. Station view periods for each month are given. All times and dates refer to Greenwich Mean Time. AOS and LOS refer to the center of the moon at zero degrees elevation for moon rise and set, respectively.

  8. Tabulation as a high-resolution alternative to coarse-graining protein interactions: Initial application to virus capsid subunits

    NASA Astrophysics Data System (ADS)

    Spiriti, Justin; Zuckerman, Daniel M.

    2015-12-01

    Traditional coarse-graining based on a reduced number of interaction sites often entails a significant sacrifice of chemical accuracy. As an alternative, we present a method for simulating large systems composed of interacting macromolecules using an energy tabulation strategy previously devised for small rigid molecules or molecular fragments [S. Lettieri and D. M. Zuckerman, J. Comput. Chem. 33, 268-275 (2012); J. Spiriti and D. M. Zuckerman, J. Chem. Theory Comput. 10, 5161-5177 (2014)]. We treat proteins as rigid and construct distance and orientation-dependent tables of the interaction energy between them. Arbitrarily detailed interactions may be incorporated into the tables, but as a proof-of-principle, we tabulate a simple α-carbon Gō-like model for interactions between dimeric subunits of the hepatitis B viral capsid. This model is significantly more structurally realistic than previous models used in capsid assembly studies. We are able to increase the speed of Monte Carlo simulations by a factor of up to 6700 compared to simulations without tables, with only minimal further loss in accuracy. To obtain further enhancement of sampling, we combine tabulation with the weighted ensemble (WE) method, in which multiple parallel simulations are occasionally replicated or pruned in order to sample targeted regions of a reaction coordinate space. In the initial study reported here, WE is able to yield pathways of the final ˜25% of the assembly process.
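
    A toy version of the tabulation idea, assuming a pair energy indexed by centre-of-mass distance only (the tables described above also index relative orientation), shows why the lookup is so much cheaper than re-evaluating the interaction model inside a Monte Carlo loop:

        import numpy as np

        # Hypothetical pair potential standing in for a rigid-subunit energy.
        def pair_energy(r, eps=1.0, sigma=3.0):
            return 4 * eps * ((sigma / r)**12 - (sigma / r)**6)

        r_grid = np.linspace(2.5, 12.0, 512)   # tabulation nodes
        e_grid = pair_energy(r_grid)           # built once, before the MC run

        def tabulated_energy(r):
            """O(1) lookup + linear interpolation replaces model evaluation."""
            return np.interp(r, r_grid, e_grid)

    Arbitrarily detailed models can populate e_grid, since the cost of building the table is paid once rather than at every Monte Carlo step.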

  9. 2017 Workplace and Gender Relations Survey of Reserve Component Members: Tabulations of Responses

    DTIC Science & Technology

    2018-04-30

    OPA Report No. 2018-012, April 2018. Office of People Analytics, Alexandria, VA 22350-4000. Additional copies may be ordered from DTIC: http://www.dtic.mil/dtic/order.html

  10. Exact Doppler broadening of tabulated cross sections. [SIGMA 1 kernel broadening method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cullen, D.E.; Weisbin, C.R.

    1976-07-01

    The SIGMA1 kernel broadening method is presented to Doppler broaden to any required accuracy a cross section that is described by a table of values and linear-linear interpolation in energy-cross section between tabulated values. The method is demonstrated to have no temperature or energy limitations and to be equally applicable to neutron or charged-particle cross sections. The method is qualitatively and quantitatively compared to contemporary approximate methods of Doppler broadening with particular emphasis on the effect of each approximation introduced.
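
    For reference, the quantity that kernel-broadening methods of this type evaluate is the standard Doppler-broadening integral (reproduced here from the general literature). Because the tabulated cross section is piecewise linear in energy, each panel of the integral reduces to error functions and exponentials, which is what makes the evaluation exact to within the tabulation:

        \bar{\sigma}(v,T) = \frac{1}{v^{2}} \sqrt{\frac{\alpha}{\pi}}
        \int_{0}^{\infty} v_r^{2} \, \sigma(v_r)
        \left[ e^{-\alpha (v_r - v)^{2}} - e^{-\alpha (v_r + v)^{2}} \right] dv_r,
        \qquad \alpha = \frac{M}{2kT},

    where v is the particle speed, v_r the relative speed, M the target mass, and T the temperature.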

  11. Opacplot2: Enabling tabulated EoS and opacity compatibility for HEDLP simulations with the FLASH code

    NASA Astrophysics Data System (ADS)

    Laune, Jordan; Tzeferacos, Petros; Feister, Scott; Fatenejad, Milad; Yurchak, Roman; Flocke, Norbert; Weide, Klaus; Lamb, Donald

    2017-10-01

    Thermodynamic and opacity properties of materials are necessary to accurately simulate laser-driven laboratory experiments. Such data are compiled in tabular format since the thermodynamic range that needs to be covered cannot be described with a single theoretical model. Moreover, tabulated data can be made available prior to runtime, reducing both compute cost and code complexity. This approach is employed by the FLASH code. Equation of state (EoS) and opacity data come in various formats, matrix layouts, and file structures. We discuss recent developments on opacplot2, an open-source Python module that manipulates tabulated EoS and opacity data. We present software that builds upon opacplot2 and enables easy-to-use conversion of different table formats into the IONMIX format, the native tabular input used by FLASH. Our work enables FLASH users to take advantage of a wider range of accurate EoS and opacity tables in simulating HEDLP experiments at the National Laser User Facilities.

  12. The 1986/87 Army Communications Objectives Measurement System: Supplementary Tabulations of Enlisted Markets

    DTIC Science & Technology

    1988-07-01

    Interim report covering October 1986 through June 1987. The Army Communications Objectives Measurement System survey has been designed to provide timely information to... During that time 6774 youth, ages 16 through 24, completed the 30-minute ACOMS youth interview. A similar volume is also available for the

  13. A marketing approach to carpool demand analysis. Technical memorandum II. Survey tabulations and evaluation. Conservation paper. [Commuter survey in 3 major urban areas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1976-07-01

    The memorandum contains many detailed tabulations, cross tabulations, and major conclusions for policy assessment resulting from a survey taken in connection with a research effort examining the role of individuals' attitudes and perceptions in deciding whether or not to carpool. The research was based upon a survey of commuters in 3 major urban areas and has resulted in a sizeable new data base on respondents' socio-economic and worktrip characteristics, travel perceptions, and travel preferences.

  14. Three computer codes to read, plot and tabulate operational test-site recorded solar data

    NASA Technical Reports Server (NTRS)

    Stewart, S. D.; Sampson, R. S., Jr.; Stonemetz, R. E.; Rouse, S. L.

    1980-01-01

    Computer programs used to process data that will be used in the evaluation of collector efficiency and solar system performance are described. The program, TAPFIL, reads data from an IBM 360 tape containing information (insolation, flowrates, temperatures, etc.) from 48 operational solar heating and cooling test sites. Two other programs, CHPLOT and WRTCNL, plot and tabulate the data from the direct access, unformatted TAPFIL file. The methodology of the programs, their inputs, and their outputs are described.

  15. Program to Produce Tabulated Data Set Describing NSWC Burn Model for Hydrodynamic Computations

    DTIC Science & Technology

    1990-09-11

    NAVSWC TR 90-364. Program to Produce Tabulated Data Set Describing NSWC Burn Model for Hydrodynamic Computations, by Lewis C. Hudson III. The report acknowledges the helpful insights of Dr. Raafat Guirguis of the Naval Surface Warfare Center on how the NSWC Burn Model works, and of Drs. Schittke and Feisler of...

  16. Human Action Recognition in Surveillance Videos using Abductive Reasoning on Linear Temporal Logic

    DTIC Science & Technology

    2012-08-29

    help of the optical flows (Lucas and Kanade, 1981). 3.2 Atomic Propositions: isAt(ti, Oj, Lk) denotes that object Oj is at location Lk at time... simultaneously at two locations in the same frame. This can be represented mathematically as: isAt(ti, Oj, Lk) ∧ isAt(ti, Oj, Lm) → Lk = Lm

  17. Implementation of a Tabulated Failure Model Into a Generalized Composite Material Model Suitable for Use in Impact Problems

    NASA Technical Reports Server (NTRS)

    Goldberg, Robert K.; Carney, Kelly S.; Dubois, Paul; Hoffarth, Canio; Khaled, Bilal; Shyamsunder, Loukham; Rajan, Subramaniam; Blankenhorn, Gunther

    2017-01-01

    The need for accurate material models to simulate the deformation, damage and failure of polymer matrix composites under impact conditions is becoming critical as these materials are gaining increased use in the aerospace and automotive communities. The aerospace community has identified several key capabilities which are currently lacking in the available material models in commercial transient dynamic finite element codes. To attempt to improve the predictive capability of composite impact simulations, a next generation material model is being developed for incorporation within the commercial transient dynamic finite element code LS-DYNA. The material model, which incorporates plasticity, damage and failure, utilizes experimentally based tabulated input to define the evolution of plasticity and damage and the initiation of failure as opposed to specifying discrete input parameters such as modulus and strength. The plasticity portion of the orthotropic, three-dimensional, macroscopic composite constitutive model is based on an extension of the Tsai-Wu composite failure model into a generalized yield function with a non-associative flow rule. For the damage model, a strain equivalent formulation is used to allow for the uncoupling of the deformation and damage analyses. For the failure model, a tabulated approach is utilized in which a stress or strain based invariant is defined as a function of the location of the current stress state in stress space to define the initiation of failure. Failure surfaces can be defined with any arbitrary shape, unlike traditional failure models where the mathematical functions used to define the failure surface impose a specific shape on the failure surface. In the current paper, the complete development of the failure model is described and the generation of a tabulated failure surface for a representative composite material is discussed.

  18. Retention time alignment of LC/MS data by a divide-and-conquer algorithm.

    PubMed

    Zhang, Zhongqi

    2012-04-01

    Liquid chromatography-mass spectrometry (LC/MS) has become the method of choice for characterizing complex mixtures. These analyses often involve quantitative comparison of components in multiple samples. To achieve automated sample comparison, the components of interest must be detected and identified, and their retention times aligned and peak areas calculated. This article describes a simple pairwise iterative retention time alignment algorithm, based on the divide-and-conquer approach, for alignment of ion features detected in LC/MS experiments. In this iterative algorithm, ion features in the sample run are first aligned with features in the reference run by applying a single constant shift of retention time. The sample chromatogram is then divided into two shorter chromatograms, which are aligned to the reference chromatogram the same way. Each shorter chromatogram is further divided into even shorter chromatograms. This process continues until each chromatogram is sufficiently narrow so that ion features within it have a similar retention time shift. In six pairwise LC/MS alignment examples containing a total of 6507 confirmed true corresponding feature pairs with retention time shifts up to five peak widths, the algorithm successfully aligned these features with an error rate of 0.2%. The alignment algorithm is demonstrated to be fast, robust, fully automatic, and superior to other algorithms. After alignment and gap-filling of detected ion features, their abundances can be tabulated for direct comparison between samples.
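
    A compact sketch of the recursion described above follows: score candidate constant shifts by the number of matched feature pairs, shift the segment, split it in two, and recurse. The published algorithm's scoring, matching tolerance, and stopping rules are richer; the values below are illustrative assumptions.

        import numpy as np

        def best_shift(sample_rts, ref_rts, max_shift=2.0, step=0.05, tol=0.1):
            """Single constant RT shift that matches the most feature pairs."""
            ref = np.asarray(ref_rts, dtype=float)
            shifts = np.arange(-max_shift, max_shift + step, step)
            score = [np.sum([np.abs(ref - (t + s)).min() < tol
                             for t in sample_rts])
                     for s in shifts]
            return shifts[int(np.argmax(score))]

        def align(sample_rts, ref_rts, min_span=1.0):
            """Shift the whole segment, then split it in two and recurse."""
            rts = np.asarray(sample_rts, dtype=float)
            if rts.size == 0:
                return rts
            shifted = rts + best_shift(rts, ref_rts)
            if shifted.max() - shifted.min() <= min_span:
                return shifted          # narrow enough: one shift suffices
            mid = (shifted.max() + shifted.min()) / 2.0
            return np.concatenate([align(shifted[shifted <= mid], ref_rts),
                                   align(shifted[shifted > mid], ref_rts)])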

  19. Survey of United States Army Reserve (USAR) Troop Program Unit (TPU) soldiers 1989. Tabulation of Questionnaire Responses: Longitudinal Sample: Junior Enlisted Stayers from 1988 to 1989. 1989 Questionnaire Responses

    DTIC Science & Technology

    1989-09-30

    AD-A237 531. 1989 Survey of United States Army Reserve (USAR) Troop Program Unit (TPU) Soldiers, Tabulation of Questionnaire Responses: Longitudinal... The Tabulation Volumes list questionnaire items and the percent of respondents (weighted to population estimates) who have... Reserve population eligible for selection was defined by the number of personnel records on a December 1988 SIDPERS data base; this totalled 280,265

  20. DARTAB: a program to combine airborne radionuclide environmental exposure data with dosimetric and health effects data to generate tabulations of predicted health impacts

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Begovich, C.L.; Eckerman, K.F.; Schlatter, E.C.

    1981-08-01

    The DARTAB computer code combines radionuclide environmental exposure data with dosimetric and health effects data to generate tabulations of the predicted impact of radioactive airborne effluents. DARTAB is independent of the environmental transport code used to generate the environmental exposure data and the codes used to produce the dosimetric and health effects data. Therefore human dose and risk calculations need not be added to every environmental transport code. Options are included in DARTAB to permit the user to request tabulations by various topics (e.g., cancer site, exposure pathway, etc.) to facilitate characterization of the human health impacts of the effluents. The DARTAB code was written at ORNL for the US Environmental Protection Agency, Office of Radiation Programs.

  1. Information Security – Guidance for Manually Completing the Information Security Awareness Training

    EPA Pesticide Factsheets

    The purpose of this guidance is to provide an alternative manual process for disseminating EPA Information Security Awareness Training (ISAT) materials and collecting results from EPA users who elect to complete the ISAT manually.

  2. Risk of recurrent subarachnoid haemorrhage, death, or dependence and standardised mortality ratios after clipping or coiling of an intracranial aneurysm in the International Subarachnoid Aneurysm Trial (ISAT): long-term follow-up

    PubMed Central

    Molyneux, Andrew J; Kerr, Richard SC; Birks, Jacqueline; Ramzi, Najib; Yarnold, Julia; Sneade, Mary; Rischmiller, Joan

    2009-01-01

    Background: Our aim was to assess the long-term risks of death, disability, and rebleeding in patients randomly assigned to clipping or endovascular coiling after rupture of an intracranial aneurysm in the follow-up of the International Subarachnoid Aneurysm Trial (ISAT). Methods: 2143 patients with ruptured intracranial aneurysms were enrolled between 1994 and 2002 at 43 neurosurgical centres and randomly assigned to clipping or coiling. Clinical outcomes at 1 year have been previously reported. All UK and some non-UK centres continued long-term follow-up of 2004 patients enrolled in the original cohort. Annual follow-up has been done for a minimum of 6 years and a maximum of 14 years (mean follow-up 9 years). All deaths and rebleeding events were recorded. Analysis of rebleeding was by allocation and by treatment received. ISAT is registered, number ISRCTN49866681. Findings: 24 rebleeds had occurred more than 1 year after treatment. Of these, 13 were from the treated aneurysm (ten in the coiling group and three in the clipping group; log rank p=0·06 by intention-to-treat analysis). There were 8447 person-years of follow-up in the coiling group and 8177 person-years of follow-up in the clipping group. Four rebleeds occurred from a pre-existing aneurysm and six from new aneurysms. At 5 years, 11% (112 of 1046) of the patients in the endovascular group and 14% (144 of 1041) of the patients in the neurosurgical group had died (log-rank p=0·03). The risk of death at 5 years was significantly lower in the coiling group than in the clipping group (relative risk 0·77, 95% CI 0·61–0·98; p=0·03), but the proportion of survivors at 5 years who were independent did not differ between the two groups: endovascular 83% (626 of 755) and neurosurgical 82% (584 of 713). The standardised mortality rate, conditional on survival at 1 year, was increased for patients treated for ruptured aneurysms compared with the general population (1·57, 95% CI 1·32–1·82; p<0

  3. Fluid mechanics experiments in oscillatory flow. Volume 2: Tabulated data

    NASA Technical Reports Server (NTRS)

    Seume, J.; Friedman, G.; Simon, T. W.

    1992-01-01

    Results of a fluid mechanics measurement program in oscillating flow within a circular duct are presented. The program began with a survey of transition behavior over a range of oscillation frequency and magnitude and continued with a detailed study at a single operating point. Such measurements were made in support of Stirling engine development. Values of three dimensionless parameters, Re_max, Re_w, and A_R, embody the velocity amplitude, frequency of oscillation, and mean fluid displacement of the cycle, respectively. Measurements were first made over a range of these parameters that are representative of the heat exchanger tubes in the heater section of NASA's Stirling cycle Space Power Research Engine (SPRE). Measurements were taken of the axial and radial components of ensemble-averaged velocity and rms velocity fluctuation and the dominant Reynolds shear stress, at various radial positions for each of four axial stations. In each run, transition from laminar to turbulent flow, and its reverse, were identified and sufficient data was gathered to propose the transition mechanism. Volume 2 contains data reduction program listings and tabulated data (including its graphics).

  4. Tabulation and summary of thermodynamic effects data for developed cavitation on ogive-nosed bodies

    NASA Technical Reports Server (NTRS)

    Holl, J. W.; Billet, M. L.; Weir, D. S.

    1978-01-01

    Thermodynamic effects data for developed cavitation on zero and quarter caliber ogives in Freon 113 and water are tabulated and summarized. These data include temperature depression (ΔT), flow coefficient (C_Q), and various geometrical characteristics of the cavity. For the ΔT tests, the free-stream temperature varied from 35 C to 95 C in Freon 113 and from 60 C to 125 C in water for a velocity range of 19.5 m/sec to 36.6 m/sec. Two correlations of the ΔT data by the entrainment method are presented. These correlations involve different combinations of the Nusselt, Reynolds, Froude, Weber, and Peclet numbers and dimensionless cavity length.

  5. Modelling alkali metal emissions in large-eddy simulation of a preheated pulverised-coal turbulent jet flame using tabulated chemistry

    NASA Astrophysics Data System (ADS)

    Wan, Kaidi; Xia, Jun; Vervisch, Luc; Liu, Yingzu; Wang, Zhihua; Cen, Kefa

    2018-03-01

    The numerical modelling of alkali metal reacting dynamics in turbulent pulverised-coal combustion is discussed using tabulated sodium chemistry in large eddy simulation (LES). A lookup table is constructed from a detailed sodium chemistry mechanism including five sodium species, i.e. Na, NaO, NaO2, NaOH and Na2O2H2, and 24 elementary reactions. This sodium chemistry table contains four coordinates, i.e. the equivalence ratio, the mass fraction of the sodium element, the gas-phase temperature, and a progress variable. The table is first validated against the detailed sodium chemistry mechanism by zero-dimensional simulations. Then, LES of a turbulent pulverised-coal jet flame is performed and major coal-flame parameters compared against experiments. The chemical percolation devolatilisation (CPD) model and the partially stirred reactor (PaSR) model are employed to predict coal pyrolysis and gas-phase combustion, respectively. The response of the five sodium species in the pulverised-coal jet flame is subsequently examined. Finally, a systematic global sensitivity analysis of the sodium lookup table is performed and the accuracy of the proposed tabulated sodium chemistry approach has been calibrated.
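
    A sketch of the kind of multilinear lookup that such a four-coordinate table implies is shown below; the grid sizes, ranges, and table contents are placeholders, not values from the study.

        import numpy as np
        from scipy.interpolate import RegularGridInterpolator

        # Hypothetical axes of the sodium-chemistry table described above.
        phi  = np.linspace(0.5, 1.5, 16)      # equivalence ratio
        y_na = np.linspace(0.0, 1e-3, 8)      # sodium element mass fraction
        temp = np.linspace(800.0, 2200.0, 32) # gas temperature [K]
        prog = np.linspace(0.0, 1.0, 32)      # progress variable

        # Placeholder contents; in practice each entry would be precomputed
        # from the detailed 5-species / 24-reaction sodium mechanism.
        table = np.random.rand(phi.size, y_na.size, temp.size, prog.size)

        lookup = RegularGridInterpolator((phi, y_na, temp, prog), table)
        value = lookup([[1.0, 5e-4, 1800.0, 0.4]])   # one query per LES cell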

  6. Tabulation of hybrid theory calculated e-N2 vibrational and rotational cross sections

    NASA Technical Reports Server (NTRS)

    Chandra, N.; Temkin, A.

    1976-01-01

    Vibrational excitation cross sections of N2 by electron impact are tabulated. Integrated cross sections are given for transitions v → v′ where 0 ≤ v′ ≤ 8 in the energy range 0.1 eV ≤ E ≤ 10 eV. The energy grid is chosen to be most dense in the resonance region (2 to 4 eV) so that the substructure is present in the numerical results. Coefficients in the angular distribution formula (differential scattering cross section) for transitions v = 0 → v′ ≤ 8 are also numerically given over the same grid of energies. Simultaneous rotation-vibration coefficients are also given for transitions v = 0, j = 0; 1 → v′ = 0, j = 0, 2, 4; 1, 3, 5. All results are obtained from the hybrid theory.

  7. Incorporation of Failure Into an Orthotropic Three-Dimensional Model with Tabulated Input Suitable for Use in Composite Impact Problems

    NASA Technical Reports Server (NTRS)

    Goldberg, Robert K.; Carney, Kelly S.; Dubois, Paul; Hoffarth, Canio; Khaled, Bilal; Shyamsunder, Loukham; Rajan, Subramaniam; Blankenhorn, Gunther

    2017-01-01

    The need for accurate material models to simulate the deformation, damage and failure of polymer matrix composites under impact conditions is becoming critical as these materials are gaining increased use in the aerospace and automotive communities. The aerospace community has identified several key capabilities which are currently lacking in the available material models in commercial transient dynamic finite element codes. To attempt to improve the predictive capability of composite impact simulations, a next generation material model is being developed for incorporation within the commercial transient dynamic finite element code LS-DYNA. The material model, which incorporates plasticity, damage and failure, utilizes experimentally based tabulated input to define the evolution of plasticity and damage and the initiation of failure as opposed to specifying discrete input parameters such as modulus and strength. The plasticity portion of the orthotropic, three-dimensional, macroscopic composite constitutive model is based on an extension of the Tsai-Wu composite failure model into a generalized yield function with a non-associative flow rule. For the damage model, a strain equivalent formulation is used to allow for the uncoupling of the deformation and damage analyses. In the damage model, a semi-coupled approach is employed where the overall damage in a particular coordinate direction is assumed to be a multiplicative combination of the damage in that direction resulting from the applied loads in various coordinate directions. For the failure model, a tabulated approach is utilized in which a stress or strain based invariant is defined as a function of the location of the current stress state in stress space to define the initiation of failure. Failure surfaces can be defined with any arbitrary shape, unlike traditional failure models where the mathematical functions used to define the failure surface impose a specific shape on the failure surface. In the current

  8. Public Elementary and Secondary School Revenues and Current Expenditures for Fiscal Year 1987 (School Year 1986-87): Preliminary Tabulations. E.D. TABS.

    ERIC Educational Resources Information Center

    National Center for Education Statistics (ED), Washington, DC.

    This document reports preliminary tabulations of public elementary and secondary school revenues and current expenditures for Fiscal Year 1987 (School Year 1986-87). Data shows revenues by local, state, intermediate, and federal sources, and current expenditures by categories of instruction, support services, noninstructional services, and fixed…

  9. Use of Management Pathways or Algorithms in Children With Chronic Cough: Systematic Reviews.

    PubMed

    Chang, Anne B; Oppenheimer, John J; Weinberger, Miles; Weir, Kelly; Rubin, Bruce K; Irwin, Richard S

    2016-01-01

    Use of appropriate cough pathways or algorithms may reduce the morbidity of chronic cough, lead to earlier diagnosis of chronic underlying illness, and reduce unnecessary costs and medications. We undertook three systematic reviews to examine three related key questions (KQ): In children aged ≤14 years with chronic cough (> 4 weeks' duration), KQ1, do cough management protocols (or algorithms) improve clinical outcomes? KQ2, should the cough management or testing algorithm differ depending on the duration and/or severity? KQ3, should the cough management or testing algorithm differ depending on the associated characteristics of the cough and clinical history? We used the CHEST expert cough panel's protocol. Two authors screened searches and selected and extracted data. Only systematic reviews, randomized controlled trials (RCTs), and cohort studies published in English were included. Data were presented in Preferred Reporting Items for Systematic Reviews and Meta-analyses flowcharts and summary tabulated. Nine studies were included in KQ1 (RCT = 1; cohort studies = 7) and eight in KQ3 (RCT = 2; cohort = 6), but none in KQ2. There is high-quality evidence that in children aged ≤14 years with chronic cough (> 4 weeks' duration), the use of cough management protocols (or algorithms) improves clinical outcomes and cough management or the testing algorithm should differ depending on the associated characteristics of the cough and clinical history. It remains uncertain whether the management or testing algorithm should depend on the duration or severity of chronic cough. Pending new data, chronic cough in children should be defined as > 4 weeks' duration and children should be systematically evaluated with treatment targeted to the underlying cause irrespective of the cough severity.

  10. 15 CFR 101.1 - Report of tabulations of population to states and localities pursuant to 13 U.S.C. 141(c).

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... localities pursuant to 13 U.S.C. 141(c). The determination of the Secretary will be published in the Federal... until after he or she receives the recommendation of the Director of the Census, together with the... Director of the Census analyzing the methodologies that may be used in making the tabulations of population...

  11. Temperature-Dependent, Linearly Interpolable, Tabulated Cross Section Library Based on ENDF/B-VI, Release 8.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    CULLEN, D. E.

    2005-02-21

    Version 00 As distributed, the original evaluated data include cross sections represented in the form of a combination of resonance parameters and/or tabulated energy dependent cross sections, nominally at 0 Kelvin temperature. For use in applications this library has been processed into the form of temperature dependent cross sections at eight neutron reactor like temperatures, between 0 and 2100 Kelvin, in steps of 300 Kelvin. It has also been processed to five astrophysics like temperatures, 1, 10, 100 eV, 1 and 10 keV. For reference purposes, 300 Kelvin is approximately 1/40 eV, so that 1 eV is approximately 12,000 Kelvin. At each temperature the cross sections are tabulated and linearly interpolable in energy. POINT2004 contains all of the evaluations in the ENDF/B-VI general purpose library, which contains evaluations for 328 materials (isotopes or naturally occurring elemental mixtures of isotopes). No special purpose ENDF/B-VI libraries, such as fission products, thermal scattering, or photon interaction data are included. The majority of these evaluations are complete, in the sense that they include all cross sections over the energy range 10-5 eV to at least 20 MeV. However, the following are only partial evaluations that either contain only single reactions and no total cross section (Mg24, K41, Ti46, Ti47, Ti48, Ti50 and Ni59), or do not include energy dependent cross sections above the resonance region (Ar40, Mo92, Mo98, Mo100, In115, Sn120, Sn122 and Sn124). The CCC-638/TART20002 code package is recommended for use with these data. Codes within TART can be used to display these data or to run calculations using these data.

  12. Influence of different dose calculation algorithms on the estimate of NTCP for lung complications.

    PubMed

    Hedin, Emma; Bäck, Anna

    2013-09-06

    Due to limitations and uncertainties in dose calculation algorithms, different algorithms can predict different dose distributions and dose-volume histograms for the same treatment. This can be a problem when estimating the normal tissue complication probability (NTCP) for patient-specific dose distributions. Published NTCP model parameters are often derived for a different dose calculation algorithm than the one used to calculate the actual dose distribution. The use of algorithm-specific NTCP model parameters can prevent errors caused by differences in dose calculation algorithms. The objective of this work was to determine how to change the NTCP model parameters for lung complications derived for a simple correction-based pencil beam dose calculation algorithm, in order to make them valid for three other common dose calculation algorithms. NTCP was calculated with the relative seriality (RS) and Lyman-Kutcher-Burman (LKB) models. The four dose calculation algorithms used were the pencil beam (PB) and collapsed cone (CC) algorithms employed by Oncentra, and the pencil beam convolution (PBC) and anisotropic analytical algorithm (AAA) employed by Eclipse. Original model parameters for lung complications were taken from four published studies on different grades of pneumonitis, and new algorithm-specific NTCP model parameters were determined. The difference between original and new model parameters was presented in relation to the reported model parameter uncertainties. Three different types of treatments were considered in the study: tangential and locoregional breast cancer treatment and lung cancer treatment. Changing the algorithm without the derivation of new model parameters caused changes in the NTCP value of up to 10 percentage points for the cases studied. Furthermore, the error introduced could be of the same magnitude as the confidence intervals of the calculated NTCP values. The new NTCP model parameters were tabulated as the algorithm was varied from PB
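
    For orientation, the two NTCP models referenced above have the following standard forms, reproduced here from the general literature rather than from this paper's fits. LKB is a probit of an effective uniform dose; RS composes per-voxel response with a seriality parameter s:

        \mathrm{NTCP}_{\mathrm{LKB}} = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{t} e^{-x^{2}/2} \, dx,
        \qquad t = \frac{D_{\mathrm{eff}} - TD_{50}}{m \, TD_{50}},
        \qquad D_{\mathrm{eff}} = \Bigl( \sum_{i} v_i D_i^{1/n} \Bigr)^{n},

        \mathrm{NTCP}_{\mathrm{RS}} = \Bigl[ 1 - \prod_{i} \bigl( 1 - P(D_i)^{s} \bigr)^{v_i} \Bigr]^{1/s},

    where (v_i, D_i) are the dose-volume histogram bins. Since D_eff and P(D_i) are computed from the algorithm-specific dose distribution, the fitted parameters (n, m, TD_50, s) inherit the algorithm dependence discussed above.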

  13. Generation of optimal artificial neural networks using a pattern search algorithm: application to approximation of chemical systems.

    PubMed

    Ihme, Matthias; Marsden, Alison L; Pitsch, Heinz

    2008-02-01

    A pattern search optimization method is applied to the generation of optimal artificial neural networks (ANNs). Optimization is performed using a mixed variable extension to the generalized pattern search method. This method offers the advantage that categorical variables, such as neural transfer functions and nodal connectivities, can be used as parameters in optimization. When used together with a surrogate, the resulting algorithm is highly efficient for expensive objective functions. Results demonstrate the effectiveness of this method in optimizing an ANN for the number of neurons, the type of transfer function, and the connectivity among neurons. The optimization method is applied to a chemistry approximation of practical relevance. In this application, temperature and a chemical source term are approximated as functions of two independent parameters using optimal ANNs. Comparison of the performance of optimal ANNs with conventional tabulation methods demonstrates equivalent accuracy by considerable savings in memory storage. The architecture of the optimal ANN for the approximation of the chemical source term consists of a fully connected feedforward network having four nonlinear hidden layers and 117 synaptic weights. An equivalent representation of the chemical source term using tabulation techniques would require a 500 x 500 grid point discretization of the parameter space.
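
    A bare-bones mixed-variable pattern search in the spirit described, polling categorical neighbours and coordinate steps and shrinking the mesh on failure, is sketched below. The published method additionally uses a surrogate and the rigorous generalized-pattern-search framework; names and defaults here are illustrative.

        import itertools

        def pattern_search(objective, x0, cats, cat0, step=1.0, tol=1e-3):
            """Minimize objective(x, c): x continuous list, c categorical tuple.

            cats: one candidate set per categorical variable, e.g. transfer
            functions or connectivity patterns for an ANN.
            """
            x, c, best = list(x0), tuple(cat0), objective(x0, cat0)
            while step > tol:
                improved = False
                # Poll continuous coordinate directions.
                for i, s in itertools.product(range(len(x)), (+step, -step)):
                    trial = x[:]
                    trial[i] += s
                    if (val := objective(trial, c)) < best:
                        x, best, improved = trial, val, True
                # Poll categorical neighbours.
                for j, options in enumerate(cats):
                    for opt in options:
                        trial_c = c[:j] + (opt,) + c[j + 1:]
                        if (val := objective(x, trial_c)) < best:
                            c, best, improved = trial_c, val, True
                if not improved:
                    step /= 2.0   # shrink the mesh when the poll fails
            return x, c, best

    Here objective(x, c) would train and score a candidate network, with x holding, say, neuron counts per layer and c holding transfer-function choices.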

  14. A Temperature-Dependent, Linearly Interpolable, Tabulated Cross Section Library Based on ENDF/B-VI, Release 7.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    CULLEN, D. E.

    2001-06-13

    Version 00 As distributed, the original evaluated data include cross sections represented in the form of a combination of resonance parameters and/or tabulated energy dependent cross sections, nominally at 0 Kelvin temperature. For use in applications, these ENDF/B-VI, Release 7 data were processed into the form of temperature dependent cross sections at eight temperatures between 0 and 2100 Kelvin, in steps of 300 Kelvin. At each temperature the cross sections are tabulated and linearly interpolable in energy. POINT2000 contains all of the evaluations in the ENDF/B-VI general purpose library, which contains evaluations for 324 materials (isotopes or naturally occurring elemental mixtures of isotopes). No special purpose ENDF/B-VI libraries, such as fission products, thermal scattering, photon interaction data are included. The majority of these evaluations are complete, in the sense that they include all cross sections over the energy range 10-5 eV to at least 20 MeV. However, the following are only partial evaluations that either only contain single reactions and no total cross section (Mg24, K41, Ti46, Ti47, Ti48, Ti50 and Ni59), or do not include energy dependent cross sections above the resonance region (Ar40, Mo92, Mo98, Mo100, In115, Sn120, Sn122 and Sn124). The CCC-638/TART96 code package will soon be updated to TART2000, which is recommended for use with these data. Codes within TART2000 can be used to display these data or to run calculations using these data.

  15. Using Tabulated Experimental Data to Drive an Orthotropic Elasto-Plastic Three-Dimensional Model for Impact Analysis

    NASA Technical Reports Server (NTRS)

    Hoffarth, C.; Khaled, B.; Rajan, S. D.; Goldberg, R.; Carney, K.; DuBois, P.; Blankenhorn, Gunther

    2016-01-01

    An orthotropic elasto-plastic-damage three-dimensional model with tabulated input has been developed to analyze the impact response of composite materials. The theory has been implemented as MAT 213 into a tailored version of LS-DYNA being developed under a joint effort of the FAA and NASA and has the following features: (a) the theory addresses any composite architecture that can be experimentally characterized as an orthotropic material and includes rate and temperature sensitivities, (b) the formulation is applicable for solid as well as shell element implementations and utilizes input data in a tabulated form directly from processed experimental data, (c) deformation and damage mechanics are both accounted for within the material model, (d) failure criteria are established that are functions of strain and damage parameters, and mesh size dependence is included, and (e) the theory can be efficiently implemented into a commercial code for both sequential and parallel executions. The salient features of the theory as implemented in LS-DYNA are illustrated using a widely used composite - the T800S/3900-2B[P2352W-19] BMS8-276 Rev-H-Unitape fiber/resin unidirectional composite. First, the experimental tests to characterize the deformation, damage and failure parameters in the material behavior are discussed. Second, the MAT213 input model and implementation details are presented with particular attention given to procedures that have been incorporated to ensure that the yield surfaces in the rate and temperature dependent plasticity model are convex. Finally, the paper concludes with a validation test designed to test the stability, accuracy and efficiency of the implemented model.

  16. The Impact of Year-Round Education on Fifth Grade African American Reading Achievement Scores in an Urban Illinois School

    ERIC Educational Resources Information Center

    Merrill, Carolyn Ann

    2012-01-01

    The purpose of this quantitative, causal-comparative study was to determine the impact of the year-round education school calendar on the standardized test performance of fifth grade African American students, as measured by the Illinois Standards Achievement Test (ISAT) in reading. The ISAT reading scores from two year-round education (YRE)…

  17. Model aerodynamic test results for two variable cycle engine coannular exhaust systems at simulated takeoff and cruise conditions. Comprehensive data report. Volume 2: Tabulated aerodynamic data book 2

    NASA Technical Reports Server (NTRS)

    Nelson, D. P.

    1981-01-01

    Tabulated aerodynamic data from coannular nozzle performance tests are given for test runs 26 through 37. The data include nozzle thrust coefficient parameters, nozzle discharge coefficients, and static pressure tap measurements.

  18. Modeling of Embedded Human Systems

    DTIC Science & Technology

    2013-07-01

    ISAT study [7] for DARPA in 2005 concretized the notion of an embedded human, who is a necessary component of the system. The proposed work integrates...Technology, IEEE Transactions on, vol. 16, no. 2, pp. 229–244, March 2008. [7] C. J. Tomlin and S. S. Sastry, "Embedded humans," tech. rep., DARPA ISAT

  19. A metabolic way to investigate related hurdles causing poor bioavailability in oral delivery of isoacteoside in rats employing ultrahigh-performance liquid chromatography/quadrupole time-of-flight tandem mass spectrometry.

    PubMed

    Cui, Qingling; Pan, Yingni; Yan, Xiaowei; Qu, Bao; Liu, Xiaoqiu; Xiao, Wei

    2017-02-28

    Isoacteoside (ISAT), a phenylethanoid glycoside that acts as the principal bioactive component in traditional Chinese medicines, possesses broad pharmacological effects such as neuroprotective, antihypertensive and hepatoprotective activities. However, its pharmaceutical development has been severely limited due to the poor oral bioavailability. It is essential and significant to investigate related hurdles leading to the poor bioavailability of isoacteoside. Whole animal metabolism studies were conducted in rats, followed by metabolic mechanism including gastrointestinal stability, intestinal flora metabolism and intestinal enzyme metabolism employing the powerful method ultrahigh-performance liquid chromatography combined with quadrupole time-of-flight tandem mass spectrometry (UPLC/QTOF-MS/MS). A simple, rapid and sensitive method has been developed which comprehensively revealed the underlying cause of poor bioavailability of ISAT in a metabolic manner. The prototype of ISAT and its combined metabolites have not been detected in plasma. Furthermore, the residual content of the parent compound in in vitro experiments was approximately 59%, 5% and barely none in intestinal bacteria, intestinal S9 and simulated intestinal juice at 6 h, respectively. The present work has demonstrated that the factors causing the poor bioavailability of isoacteoside should be attributed to the metabolism. In general, the metabolism that resulted from intestinal flora and intestinal enzymes were predominant reasons giving rise to the poor bioavailability of ISAT, which also suggested that metabolites might be responsible for the excellent pharmacological effect of ISAT.

  20. Hybrid finite-volume/transported PDF method for the simulation of turbulent reactive flows

    NASA Astrophysics Data System (ADS)

    Raman, Venkatramanan

    A novel computational scheme is formulated for simulating turbulent reactive flows in complex geometries with detailed chemical kinetics. A Probability Density Function (PDF) based method that handles the scalar transport equation is coupled with an existing Finite Volume (FV) Reynolds-Averaged Navier-Stokes (RANS) flow solver. The PDF formulation leads to closed chemical source terms and facilitates the use of detailed chemical mechanisms without approximations. The particle-based PDF scheme is modified to handle complex geometries and grid structures. Grid-independent particle evolution schemes that scale linearly with the problem size are implemented in the Monte-Carlo PDF solver. A novel algorithm, in situ adaptive tabulation (ISAT), is employed to ensure tractability of complex chemistry involving a multitude of species. Several non-reacting test cases are performed to ascertain the efficiency and accuracy of the method. Simulation results from a turbulent jet-diffusion flame case are compared against experimental data. The effect of micromixing model, turbulence model and reaction scheme on flame predictions are discussed extensively. Finally, the method is used to analyze the Dow Chlorination Reactor. Detailed kinetics involving 37 species and 158 reactions as well as a reduced form with 16 species and 21 reactions are used. The effect of inlet configuration on reactor behavior and product distribution is analyzed. Plant-scale reactors exhibit quenching phenomena that cannot be reproduced by conventional simulation methods. The FV-PDF method predicts quenching accurately and provides insight into the dynamics of the reactor near extinction. The accuracy of the fractional time-stepping technique is discussed in the context of apparent multiple-steady states observed in a non-premixed feed configuration of the chlorination reactor.
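
    Particle PDF methods of this kind typically close micromixing with a model such as IEM, which relaxes each particle's composition toward the local mean at a rate set by the turbulence frequency. A minimal sketch of one mixing substep follows; the solver's actual mixing model and constants may differ.

        import numpy as np

        def iem_mixing_step(phi, omega, dt, c_phi=2.0):
            """IEM micromixing: relax particle compositions toward the cell mean.

            phi   : (n_particles, n_species) compositions in one cell
            omega : turbulence frequency (epsilon/k) for the cell

            Integrates dphi/dt = -(c_phi/2) * omega * (phi - <phi>) exactly
            over dt, so the mean is conserved and variance decays.
            """
            mean = phi.mean(axis=0)
            decay = np.exp(-0.5 * c_phi * omega * dt)
            return mean + (phi - mean) * decay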

  2. Optimization of view weighting in tilted-plane-based reconstruction algorithms to minimize helical artifacts in multi-slice helical CT

    NASA Astrophysics Data System (ADS)

    Tang, Xiangyang

    2003-05-01

    In multi-slice helical CT, the single-tilted-plane-based reconstruction algorithm has been proposed to combat helical and cone beam artifacts by tilting a reconstruction plane to fit a helical source trajectory optimally. Furthermore, to improve the noise characteristics or dose efficiency of the single-tilted-plane-based reconstruction algorithm, the multi-tilted-plane-based reconstruction algorithm has been proposed, in which the reconstruction plane deviates from the globally optimized pose due to an extra rotation along the 3rd axis. As a result, the capability of suppressing helical and cone beam artifacts in the multi-tilted-plane-based reconstruction algorithm is compromised. An optimized tilted-plane-based reconstruction algorithm is proposed in this paper, in which a matched view weighting strategy is employed to optimize the capability of suppressing helical and cone beam artifacts and the noise characteristics. A helical body phantom is employed to quantitatively evaluate the imaging performance of the matched view weighting approach by tabulating artifact index and noise characteristics, showing that the matched view weighting improves both the helical artifact suppression and noise characteristics or dose efficiency significantly in comparison to the case in which non-matched view weighting is applied. Finally, it is believed that the matched view weighting approach is of practical importance in the development of multi-slice helical CT, because it maintains the computational structure of fan beam filtered backprojection and demands no extra computational services.

  3. Mirrorless Lasing in Optically Pumped Rubidium Vapor

    DTIC Science & Technology

    2013-03-01

    2 or 6P1/2-6S1/2, I is the pump intensity, and Isat is found using equation 4.3: Isat = hν32(Γ32 + Γ30)/σ32 (4.3), where ν32 is the... is the small-signal gain coefficient, Isat is the saturation intensity, and z is the gain path length. With this assumption the IR pulse energy at

  4. Shear flow control of cold and heated rectangular jets by mechanical tabs. Volume 2: Tabulated data

    NASA Technical Reports Server (NTRS)

    Brown, W. H.; Ahuja, K. K.

    1989-01-01

    The effects of mechanical protrusions on the jet mixing characteristics of rectangular nozzles for heated and unheated subsonic and supersonic jet plumes were studied. The characteristics of a rectangular nozzle of aspect ratio 4 without the mechanical protrusions were first investigated. Intrusive probes were used to make the flow measurements. Possible errors introduced by intrusive probes in making shear flow measurements were also examined. Several scaled sizes of mechanical tabs were then tested, configured around the perimeter of the rectangular jet. Both the number and the location of the tabs were varied. From this, the best configuration was selected. This volume contains tabulated data for each of the data runs cited in Volume 1. Baseline characteristics, mixing modifications (subsonic and supersonic, heated and unheated) and miscellaneous charts are included.

  5. Tabulation of data from tests of an NPL 9510 airfoil in the Langley 0.3-meter transonic cryogenic tunnel

    NASA Technical Reports Server (NTRS)

    Jenkins, R. V.

    1983-01-01

    Tabulated data are presented from tests of a six-inch-chord NPL 9510 airfoil in the Langley 0.3-Meter Transonic Cryogenic Tunnel. The tests were performed over the following range of conditions: Mach numbers of 0.35 to 0.82, total temperatures of 94 K to 300 K, total pressures of 1.20 to 5.81 atm, Reynolds numbers based on chord of 1.34 × 10^6 to 48.23 × 10^6, and angles of attack of 0 deg to 6 deg. The NPL 9510 airfoil was observed to have decreasing drag coefficient up to the highest test Reynolds number.

  6. Numerical investigation of a helicopter combustion chamber using LES and tabulated chemistry

    NASA Astrophysics Data System (ADS)

    Auzillon, Pierre; Riber, Eléonore; Gicquel, Laurent Y. M.; Gicquel, Olivier; Darabiha, Nasser; Veynante, Denis; Fiorina, Benoît

    2013-01-01

    This article presents Large Eddy Simulations (LES) of a realistic aeronautical combustor device: the chamber CTA1 designed by TURBOMECA. Under nominal operating conditions, experiments show hot spots on the combustor walls in the vicinity of the injectors. These high-temperature regions disappear when the fuel stream equivalence ratio is modified. In order to account for detailed chemistry effects within LES, the numerical simulation uses the recently developed turbulent combustion model F-TACLES (Filtered TAbulated Chemistry for LES). The principle of this model is first to generate a lookup table in which thermochemical variables are computed from a set of filtered laminar unstrained premixed flamelets. To model the interactions between the flame and the turbulence at the subgrid scale, a flame wrinkling analytical model is introduced and the Filtered Density Function (FDF) of the mixture fraction is modeled by a β function. Filtered thermochemical quantities are stored as a function of three coordinates: the filtered progress variable, the filtered mixture fraction, and the mixture fraction subgrid-scale variance. The chemical lookup table is then coupled with the LES using a mathematical formalism that ensures an accurate prediction of the flame dynamics. The numerical simulation of the CTA1 chamber with the F-TACLES turbulent combustion model reproduces the temperature fields observed in the experiments fairly well. In particular, the influence of the fuel stream equivalence ratio on the flame position is well captured.
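
    The β-function closure mentioned above amounts to pre-integrating each flamelet quantity against a β distribution parameterized by the filtered mixture fraction and its subgrid variance. A sketch of that convolution step is shown below; the flamelet profile q(z) is a placeholder for a tabulated quantity, and the variance is assumed to stay below its maximum z(1-z).

        import numpy as np
        from scipy.stats import beta as beta_dist

        def beta_filtered(q_of_z, z_mean, z_var, n=256):
            """Convolve a flamelet profile q(z) with a beta FDF of given
            mean and variance; q_of_z must accept numpy arrays."""
            if z_var < 1e-9:               # degenerate FDF: a delta at z_mean
                return q_of_z(z_mean)
            gamma = z_mean * (1 - z_mean) / z_var - 1.0
            a, b = z_mean * gamma, (1 - z_mean) * gamma
            z = np.linspace(1e-6, 1 - 1e-6, n)
            w = beta_dist.pdf(z, a, b)
            return np.trapz(q_of_z(z) * w, z) / np.trapz(w, z)

    Running this convolution offline for every table node is what turns the laminar flamelet data into the filtered lookup table used at run time.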

  7. Nuclear Magnetic Dipole and Electric Quadrupole Moments: Their Measurement and Tabulation as Accessible Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stone, N. J., E-mail: n.stone@physics.ox.ac.uk

    The most recent tabulations of nuclear magnetic dipole and electric quadrupole moments have been prepared and published by the Nuclear Data Section of the IAEA, Vienna [N. J. Stone, Report No. INDC(NDS)-0650 (2013); Report No. INDC(NDS)-0658 (2014)]. The first of these is a table of recommended quadrupole moments for all isotopes in which all experimental results are made consistent with a limited number of adopted standards for each element; the second is a combined listing of all measurements of both moments. Both tables cover all isotopes and energy levels. In this paper, the considerations relevant to the preparation of both tables are described, together with observations as to the importance and (where appropriate) application of necessary corrections to achieve the “best” values. Some discussion of experimental methods is included with emphasis on their precision. The aim of the published quadrupole moment table is to provide a standard reference in which the value given for each moment is the best available and for which full provenance is given. A table of recommended magnetic dipole moments is in preparation, with the same objective in view.

  8. The minimally invasive spinal deformity surgery algorithm: a reproducible rational framework for decision making in minimally invasive spinal deformity surgery.

    PubMed

    Mummaneni, Praveen V; Shaffrey, Christopher I; Lenke, Lawrence G; Park, Paul; Wang, Michael Y; La Marca, Frank; Smith, Justin S; Mundis, Gregory M; Okonkwo, David O; Moal, Bertrand; Fessler, Richard G; Anand, Neel; Uribe, Juan S; Kanter, Adam S; Akbarnia, Behrooz; Fu, Kai-Ming G

    2014-05-01

    Minimally invasive surgery (MIS) is an alternative to open deformity surgery for the treatment of patients with adult spinal deformity. However, at this time MIS techniques are not as versatile as open deformity techniques, and MIS techniques have been reported to result in suboptimal sagittal plane correction or pseudarthrosis when used for severe deformities. The minimally invasive spinal deformity surgery (MISDEF) algorithm was created to provide a framework for rational decision making for surgeons who are considering MIS versus open spine surgery. A team of experienced spinal deformity surgeons developed the MISDEF algorithm that incorporates a patient's preoperative radiographic parameters and leads to one of 3 general plans ranging from MIS direct or indirect decompression to open deformity surgery with osteotomies. The authors surveyed fellowship-trained spine surgeons experienced with spinal deformity surgery to validate the algorithm using a set of 20 cases to establish interobserver reliability. They then resurveyed the same surgeons 2 months later with the same cases presented in a different sequence to establish intraobserver reliability. Responses were collected and tabulated. Fleiss' analysis was performed using MATLAB software. Over a 3-month period, 11 surgeons completed the surveys. Responses for MISDEF algorithm case review demonstrated an interobserver kappa of 0.58 for the first round of surveys and an interobserver kappa of 0.69 for the second round of surveys, consistent with substantial agreement. In at least 10 cases there was perfect agreement between the reviewing surgeons. The mean intraobserver kappa for the 2 surveys was 0.86 ± 0.15 (± SD) and ranged from 0.62 to 1. The use of the MISDEF algorithm provides consistent and straightforward guidance for surgeons who are considering either an MIS or an open approach for the treatment of patients with adult spinal deformity. The MISDEF algorithm was found to have substantial inter- and intraobserver reliability.
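
    For reference, Fleiss' kappa for multiple raters can be computed directly from a table of per-case category counts. The sketch below is a generic implementation with a toy rating table, not the study's data.

    ```python
    import numpy as np

    def fleiss_kappa(counts):
        """counts[i, j] = number of raters assigning case i to category j.
        Assumes the same number of raters for every case."""
        counts = np.asarray(counts, dtype=float)
        n = counts[0].sum()                        # raters per case
        p_j = counts.sum(axis=0) / counts.sum()    # overall category proportions
        P_i = (np.square(counts).sum(axis=1) - n) / (n * (n - 1))
        P_bar, P_e = P_i.mean(), np.square(p_j).sum()
        return (P_bar - P_e) / (1 - P_e)

    # Toy example: 5 cases, 3 raters, 3 hypothetical treatment classes.
    print(fleiss_kappa([[3, 0, 0], [2, 1, 0], [0, 3, 0], [1, 1, 1], [0, 0, 3]]))
    ```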

  9. Incorporation of Plasticity and Damage Into an Orthotropic Three-Dimensional Model with Tabulated Input Suitable for Use in Composite Impact Problems

    NASA Technical Reports Server (NTRS)

    Goldberg, Robert K.; Carney, Kelly S.; DuBois, Paul; Hoffarth, Canio; Rajan, Subramaniam; Blackenhorn, Gunther

    2015-01-01

    The need for accurate material models to simulate the deformation, damage and failure of polymer matrix composites under impact conditions is becoming critical as these materials are gaining increased usage in the aerospace and automotive industries. While there are several composite material models currently available within commercial transient dynamic finite element codes, several features have been identified as being lacking in the currently available material models that could substantially enhance the predictive capability of the impact simulations. A specific desired feature pertains to the incorporation of both plasticity and damage within the material model. Another desired feature relates to using experimentally based tabulated stress-strain input to define the evolution of plasticity and damage as opposed to specifying discrete input properties (such as modulus and strength) and employing analytical functions to track the response of the material. To begin to address these needs, a combined plasticity and damage model suitable for use with both solid and shell elements is being developed for implementation within the commercial code LS-DYNA. The plasticity model is based on extending the Tsai-Wu composite failure model into a strain-hardening based orthotropic plasticity model with a non-associative flow rule. The evolution of the yield surface is determined based on tabulated stress-strain curves in the various normal and shear directions and is tracked using the effective plastic strain. The effective plastic strain is computed by using the non-associative flow rule in combination with appropriate numerical methods. To compute the evolution of damage, a strain equivalent semi-coupled formulation is used, in which a load in one direction results in a stiffness reduction in multiple coordinate directions. A specific laminated composite is examined to demonstrate the process of characterizing and analyzing the response of a composite using the developed model.
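
    As a minimal sketch of the tabulated-input idea, the snippet below interpolates a yield stress from a tabulated stress versus effective-plastic-strain curve. The numbers are illustrative, not measured composite data, and this is not the LS-DYNA implementation.

    ```python
    import numpy as np

    # Hypothetical tabulated yield stress vs. effective plastic strain for
    # one material direction (illustrative numbers, units MPa).
    eps_p = np.array([0.000, 0.005, 0.010, 0.020, 0.040])
    sigma = np.array([60.0, 68.0, 74.0, 82.0, 90.0])

    def yield_stress(effective_plastic_strain):
        # Piecewise-linear interpolation of the tabulated hardening curve;
        # np.interp holds the last value beyond the final table point.
        return np.interp(effective_plastic_strain, eps_p, sigma)

    print(yield_stress(0.015))  # 78.0
    ```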

  10. Tabulated Pressure Coefficient Data from a Tail Loads Investigation on a 1/15-Scale Model of the Goodyear XZP5K Airship

    NASA Technical Reports Server (NTRS)

    Cannon, Michael D.

    1956-01-01

    This paper contains tail and hull loads data obtained in an investigation of a 1/15-scale model of the Goodyear XZP5K airship. Data are presented in the form of tabulated pressure coefficients over pitch and yaw ranges of +/-20 deg and 0 deg to 30 deg, respectively, with various rudder and elevator deflections. Two tail configurations of different plan forms were tested on the model. The investigation was conducted in the Langley full-scale tunnel at a Reynolds number of approximately 16.5 x 10(exp 6) based on hull length, which corresponds to a Mach number of about 0.12.

  11. Tabulated Pressure Data for a Series of Controls on a 40 Deg Sweptback Wing at Mach Numbers of 1.61 and 2.01

    NASA Technical Reports Server (NTRS)

    Lord, D. R.

    1957-01-01

    An investigation has been made at Mach numbers of 1.61 and 2.01 and Reynolds numbers of 1.7 x 10(exp 6) and 3.6 x 10(exp 6) to determine the pressure distributions over a swept wing with a series of 14 control configurations. The wing had 40 deg of sweep of the quarter-chord line, an aspect ratio of 3.1, and a taper ratio of 0.4. Measurements were made at angles of attack from 0 deg to +/-15 deg for control deflections from -60 deg to 60 deg. This report contains tabulated pressure data for the complete range of test conditions.

  12. Findings of the International Subarachnoid Aneurysm Trial and the National Study of Subarachnoid Haemorrhage in context.

    PubMed

    Reeves, B C; Langham, J; Lindsay, K W; Molyneux, A J; Browne, J P; Copley, L; Shaw, D; Gholkar, A; Kirkpatrick, P J

    2007-08-01

    Concern has been expressed about the applicability of the findings of the International Subarachnoid Aneurysm Trial (ISAT) with respect to the relative effects on outcome of coiling and clipping. It has been suggested that the findings of the National Study of Subarachnoid Haemorrhage may have greater relevance for neurosurgical practice. The objective of this paper was to interpret the findings of these two studies in the context of differences in their study populations, design, execution and analysis. Because of differences in design and analysis, the findings of the two studies are not directly comparable. The ISAT analysed all randomized patients by intention-to-treat, including some who did not undergo a repair, and obtained the primary outcome for 99% of participants. The National Study only analysed participants who underwent clipping or coiling, according to the method of repair, and obtained the primary outcome for 91% of participants. Time to repair was also considered differently in the two studies. The comparison between coiling and clipping was susceptible to confounding in the National Study, but not in the ISAT. The two study populations differed to some extent, but inspection of these differences does not support the view that coiling was applied inappropriately in the National Study. Therefore, there are many reasons why the two studies estimated different sizes of effect. The possibility that there were real, systematic differences in practice between the ISAT and the National Study cannot be ruled out, but such explanations must be seen in the context of other explanations relating to chance, differences in design or analysis, or confounding.

  13. Comparing in Cylinder Pressure Modelling of a DI Diesel Engine Fuelled on Alternative Fuel Using Two Tabulated Chemistry Approaches.

    PubMed

    Ngayihi Abbe, Claude Valery; Nzengwa, Robert; Danwe, Raidandi

    2014-01-01

    This work presents a comparative simulation of a diesel engine fuelled on diesel fuel and biodiesel fuel. Two models based on tabulated chemistry were implemented for the simulation, and results were compared with experimental data obtained from a single-cylinder diesel engine. The first model is a single-zone model based on the Krieger and Bormann combustion model, while the second is a two-zone model based on the Olikara and Bormann combustion model. It was shown that both models can predict the engine's in-cylinder pressure as well as its overall performance well. The second model showed better accuracy than the first, while the first model was easier to implement and faster to compute. It was found that the first method was better suited for real-time engine control and monitoring, while the second was better suited for engine design and emission prediction.

  15. STAR Algorithm Integration Team - Facilitating operational algorithm development

    NASA Astrophysics Data System (ADS)

    Mikles, V. J.

    2015-12-01

    The NOAA/NESDIS Center for Satellite Research and Applications (STAR) provides technical support of the Joint Polar Satellite System (JPSS) algorithm development and integration tasks. Utilizing data from the S-NPP satellite, JPSS generates over thirty Environmental Data Records (EDRs) and Intermediate Products (IPs) spanning atmospheric, ocean, cryosphere, and land weather disciplines. The Algorithm Integration Team (AIT) brings technical expertise and support to product algorithms, specifically in testing and validating science algorithms in a pre-operational environment. The AIT verifies that new and updated algorithms function in the development environment, enforces established software development standards, and ensures that delivered packages are functional and complete. AIT facilitates the development of new JPSS-1 algorithms by implementing a review approach based on the Enterprise Product Lifecycle (EPL) process. Building on relationships established during the S-NPP algorithm development process and coordinating directly with science algorithm developers, the AIT has implemented structured reviews with self-contained document suites. The process has supported algorithm improvements for products such as ozone, active fire, vegetation index, and temperature and moisture profiles.

  16. Selfish Gene Algorithm Vs Genetic Algorithm: A Review

    NASA Astrophysics Data System (ADS)

    Ariff, Norharyati Md; Khalid, Noor Elaiza Abdul; Hashim, Rathiah; Noor, Noorhayati Mohamed

    2016-11-01

    Evolutionary algorithms (EAs) are among the algorithms inspired by nature, and within little more than a decade hundreds of papers have reported successful applications of EAs. This paper reviews the Selfish Gene Algorithm (SFGA), one of the more recent EAs, inspired by the Selfish Gene Theory, the biologist Richard Dawkins's interpretation of Darwinian ideas (1989). Following a brief introduction to the SFGA, the chronology of its evolution is presented. The purpose of this paper is to present an overview of the concepts of the SFGA as well as its opportunities and challenges. Accordingly, the history and the steps involved in the algorithm are discussed, and its different applications, together with an analysis of these applications, are evaluated.
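
    A minimal sketch of the virtual-population idea underlying the Selfish Gene Algorithm is given below: the population is represented by allele frequencies, and repeated two-individual tournaments shift those frequencies toward winners. The update rule and parameters are simplified assumptions, not the published SFGA.

    ```python
    import numpy as np

    def sfga(fitness, n_loci, epsilon=0.05, iters=2000, seed=0):
        rng = np.random.default_rng(seed)
        p = np.full(n_loci, 0.5)        # virtual population: P(allele_i = 1)
        best, best_f = None, -np.inf
        for _ in range(iters):
            a = (rng.random(n_loci) < p).astype(int)   # sample two individuals
            b = (rng.random(n_loci) < p).astype(int)
            fa, fb = fitness(a), fitness(b)
            winner = a if fa >= fb else b
            # Shift allele frequencies toward the tournament winner.
            p = np.clip(p + epsilon * (winner - p), 0.01, 0.99)
            if max(fa, fb) > best_f:
                best_f, best = max(fa, fb), winner.copy()
        return best, best_f

    # Toy usage: maximize the number of ones (OneMax).
    print(sfga(lambda x: int(x.sum()), n_loci=20))
    ```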

  17. An efficient algorithm for function optimization: modified stem cells algorithm

    NASA Astrophysics Data System (ADS)

    Taherdangkoo, Mohammad; Paziresh, Mahsa; Yazdi, Mehran; Bagheri, Mohammad Hadi

    2013-03-01

    In this paper, we propose an optimization algorithm based on the intelligent behavior of stem cell swarms in reproduction and self-organization. Optimization algorithms such as the Genetic Algorithm (GA), Particle Swarm Optimization (PSO) algorithm, Ant Colony Optimization (ACO) algorithm and Artificial Bee Colony (ABC) algorithm can give near-optimal solutions to linear and non-linear problems for many applications; however, in some cases they can suffer from becoming trapped in local optima. The Stem Cells Algorithm (SCA) is an optimization algorithm inspired by the natural behavior of stem cells in evolving themselves into new and improved cells. The SCA avoids the local optima problem successfully. In this paper, we have made small changes in the implementation of this algorithm to obtain improved performance over previous versions. Using a series of benchmark functions, we assess the performance of the proposed algorithm and compare it with that of the other aforementioned optimization algorithms. The obtained results demonstrate the superiority of the Modified Stem Cells Algorithm (MSCA).

  18. Interactive specification acquisition via scenarios: A proposal

    NASA Technical Reports Server (NTRS)

    Hall, Robert J.

    1992-01-01

    Some reactive systems are most naturally specified by giving large collections of behavior scenarios. These collections not only specify the behavior of the system, but also provide good test suites for validating the implemented system. Due to the complexity of the systems and the number of scenarios, however, it appears that automated assistance is necessary to make this software development process workable. Interactive Specification Acquisition Tool (ISAT) is a proposed interactive system for supporting the acquisition and maintenance of a formal system specification from scenarios, as well as automatic synthesis of control code and automated test generation. This paper discusses the background, motivation, proposed functions, and implementation status of ISAT.

  19. Denni Algorithm An Enhanced Of SMS (Scan, Move and Sort) Algorithm

    NASA Astrophysics Data System (ADS)

    Aprilsyah Lubis, Denni; Salim Sitompul, Opim; Marwan; Tulus; Andri Budiman, M.

    2017-12-01

    Sorting has been a profound area for algorithmic researchers, and many resources are invested in the search for more efficient sorting algorithms. For this purpose, many existing sorting algorithms have been studied in terms of their algorithmic complexity. Efficient sorting is important to optimize the use of other algorithms that require sorted lists to work correctly. Sorting has been considered a fundamental problem in the study of algorithms for many reasons: the need to sort information is inherent in many applications; algorithms often use sorting as a key subroutine; many essential algorithm-design techniques are represented in the body of sorting algorithms; and many engineering issues come to the fore when implementing sorting algorithms. Many algorithms are well known for sorting unordered lists, and one of the well-known algorithms that makes the sorting process more economical and efficient is the SMS (Scan, Move and Sort) algorithm, an enhancement of Quicksort invented by Rami Mansi in 2010. This paper presents a new sorting algorithm called the Denni algorithm. The Denni algorithm is considered an enhancement of the SMS algorithm in the average and worst cases. The Denni algorithm is compared with the SMS algorithm and the results were promising.
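
    Since the internals of the SMS and Denni algorithms are not given in the abstract, the sketch below is a generic timing harness of the kind typically used for such average/worst-case comparisons; the input classes and sizes are arbitrary choices.

    ```python
    import random
    import time

    def timed(sort_fn, data):
        start = time.perf_counter()
        sort_fn(data)
        return time.perf_counter() - start

    def benchmark(sort_fn, sizes=(1_000, 10_000, 100_000), trials=3, seed=42):
        rng = random.Random(seed)
        for n in sizes:
            base = [rng.randint(0, n) for _ in range(n)]
            cases = (("random", base),
                     ("sorted", sorted(base)),                  # best-case-like
                     ("reversed", sorted(base, reverse=True)))  # worst-case-like
            for label, data in cases:
                t = min(timed(sort_fn, list(data)) for _ in range(trials))
                print(f"n={n:>7} {label:>8}: {t:.4f} s")

    benchmark(lambda xs: xs.sort())   # Python's built-in sort as a stand-in
    ```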

  20. An abstraction layer for efficient memory management of tabulated chemistry and flamelet solutions

    NASA Astrophysics Data System (ADS)

    Weise, Steffen; Messig, Danny; Meyer, Bernd; Hasse, Christian

    2013-06-01

    A large number of methods for simulating reactive flows exist; some, for example, directly use detailed chemical kinetics, while others use precomputed and tabulated flame solutions. Both approaches tightly couple the research fields of computational fluid dynamics and chemistry, using either an online or an offline approach to solve the chemistry domain. The offline approach usually involves a method of generating databases or so-called Lookup-Tables (LUTs). As these LUTs are extended to contain not only material properties but also interactions between chemistry and turbulent flow, the number of parameters and thus dimensions increases. Given a reasonable discretisation, file sizes can increase drastically. The main goal of this work is to provide methods that handle large database files efficiently. A Memory Abstraction Layer (MAL) has been developed that handles requested LUT entries efficiently by splitting the database file into several smaller blocks. It keeps the total memory usage at a minimum, using thin allocation methods and compression to minimise filesystem operations. The MAL has been evaluated using three different test cases. The first, rather generic, one is a sequential reading operation on an LUT to evaluate the runtime behaviour as well as the memory consumption of the MAL. The second test case is a simulation of a non-premixed turbulent flame, the so-called HM1 flame, which is a well-known test case in the turbulent combustion community. The third test case is a simulation of a non-premixed laminar flame as described by McEnally in 1996 and Bennett in 2000. Using the previously developed solver 'flameletFoam' in conjunction with the MAL, the memory consumption and the performance penalty introduced were studied. The total memory used while running a parallel simulation was reduced significantly, while the CPU time overhead associated with the MAL remained low.
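
    A minimal sketch of the block-splitting idea is shown below: the table is stored as independently compressed blocks that are decompressed on demand into a small cache. The block size, eviction policy and zlib compression are assumptions; the authors' MAL is more elaborate.

    ```python
    import zlib
    import numpy as np

    class BlockLUT:
        """Lookup table stored as independently compressed blocks that are
        decompressed on demand into a small FIFO cache."""

        def __init__(self, table, block_rows=1024, cache_size=8):
            self.block_rows, self.cache_size = block_rows, cache_size
            self.shape, self.dtype, self.cache = table.shape, table.dtype, {}
            self.blocks = [zlib.compress(table[i:i + block_rows].tobytes())
                           for i in range(0, table.shape[0], block_rows)]

        def row(self, i):
            b, offset = divmod(i, self.block_rows)
            if b not in self.cache:
                if len(self.cache) >= self.cache_size:
                    self.cache.pop(next(iter(self.cache)))  # evict oldest block
                rows = min(self.block_rows, self.shape[0] - b * self.block_rows)
                raw = zlib.decompress(self.blocks[b])
                self.cache[b] = np.frombuffer(raw, self.dtype).reshape(rows, -1)
            return self.cache[b][offset]

    lut = BlockLUT(np.random.rand(100_000, 16))
    print(lut.row(54_321))
    ```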

  1. Static stability and control effectiveness of models 12-0 and 34-0 of the vehicle 3 configuration, volume 3. [tabulated source data

    NASA Technical Reports Server (NTRS)

    Allen, E. C.; Tuttle, T.

    1973-01-01

    Static stability and control effectiveness characteristics of two 0.004-scale models of the vehicle 3 configuration are reported. The components investigated consisted of a single aft body, vertical/rudder, OMS pods with two interchangeable wings, four interchangeable forward bodies, four trimmers, and a spoiler. The test was conducted in a 14 x 14 inch trisonic wind tunnel over a Mach number range from 0.6 to 4.96. Angles of attack from 0 to 60 degrees and angles of sideslip from -10 to 10 degrees at 0, 10, 20, 30, and 40 degrees angle of attack were tested. Elevon, body flap, and speed brake deflections composed the parametric considerations. No grit was placed on the models during the test. The tabulated source data and incremental data figures are presented.

  2. Real gas flow fields about three dimensional configurations

    NASA Technical Reports Server (NTRS)

    Balakrishnan, A.; Lombard, C. K.; Davy, W. C.

    1983-01-01

    Real gas, inviscid supersonic flow fields over a three-dimensional configuration are determined using a factored implicit algorithm. Air in chemical equilibrium is considered and its local thermodynamic properties are computed by an equilibrium composition method. Numerical solutions are presented for both real and ideal gases at three different Mach numbers and at two different altitudes. Selected results are illustrated by contour plots and are also tabulated for future reference. Results obtained compare well with existing tabulated numerical solutions and hence validate the solution technique.

  3. Hybrid Cryptosystem Using Tiny Encryption Algorithm and LUC Algorithm

    NASA Astrophysics Data System (ADS)

    Rachmawati, Dian; Sharif, Amer; Jaysilen; Andri Budiman, Mohammad

    2018-01-01

    Security is a very important issue in data transmission, and there are many methods to make files more secure. One of those methods is cryptography. Cryptography is a method of securing a file by transforming it into hidden code that conceals the original content; parties without the cryptographic key cannot decrypt the hidden code to read the original file. Many methods are used in cryptography, one of which is the hybrid cryptosystem. A hybrid cryptosystem is a method that uses a symmetric algorithm to secure the file and an asymmetric algorithm to secure the symmetric algorithm's key. In this research, the TEA algorithm is used as the symmetric algorithm and the LUC algorithm as the asymmetric algorithm. The system is tested by encrypting and decrypting a file using the TEA algorithm and using the LUC algorithm to encrypt and decrypt the TEA key. The result of this research is that, with the TEA algorithm encrypting the file, the ciphertext consists of characters from the ASCII (American Standard Code for Information Interchange) table written as hexadecimal numbers, and the ciphertext size increases by sixteen bytes as the plaintext length increases by eight characters.
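
    The TEA block cipher itself is standard and small enough to sketch; the snippet below implements its usual 32-round encryption and decryption of one 64-bit block. The LUC key-encryption step is omitted, and the key and plaintext are arbitrary examples.

    ```python
    def tea_encrypt_block(v, key, rounds=32):
        """Encrypt one 64-bit block (two 32-bit words) with a 128-bit key
        (four 32-bit words) using the standard TEA round function."""
        v0, v1 = v
        k0, k1, k2, k3 = key
        delta, total, mask = 0x9E3779B9, 0, 0xFFFFFFFF
        for _ in range(rounds):
            total = (total + delta) & mask
            v0 = (v0 + (((v1 << 4) + k0) ^ (v1 + total) ^ ((v1 >> 5) + k1))) & mask
            v1 = (v1 + (((v0 << 4) + k2) ^ (v0 + total) ^ ((v0 >> 5) + k3))) & mask
        return v0, v1

    def tea_decrypt_block(v, key, rounds=32):
        v0, v1 = v
        k0, k1, k2, k3 = key
        delta, mask = 0x9E3779B9, 0xFFFFFFFF
        total = (delta * rounds) & mask
        for _ in range(rounds):
            v1 = (v1 - (((v0 << 4) + k2) ^ (v0 + total) ^ ((v0 >> 5) + k3))) & mask
            v0 = (v0 - (((v1 << 4) + k0) ^ (v1 + total) ^ ((v1 >> 5) + k1))) & mask
            total = (total - delta) & mask
        return v0, v1

    key = (0x01234567, 0x89ABCDEF, 0xFEDCBA98, 0x76543210)
    ct = tea_encrypt_block((0xDEADBEEF, 0xCAFEBABE), key)
    assert tea_decrypt_block(ct, key) == (0xDEADBEEF, 0xCAFEBABE)
    ```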

  4. User Guide for HUFPrint, A Tabulation and Visualization Utility for the Hydrogeologic-Unit Flow (HUF) Package of MODFLOW

    USGS Publications Warehouse

    Banta, Edward R.; Provost, Alden M.

    2008-01-01

    This report documents HUFPrint, a computer program that extracts and displays information about model structure and hydraulic properties from the input data for a model built using the Hydrogeologic-Unit Flow (HUF) Package of the U.S. Geological Survey's MODFLOW program for modeling ground-water flow. HUFPrint reads the HUF Package and other MODFLOW input files, processes the data by hydrogeologic unit and by model layer, and generates text and graphics files useful for visualizing the data or for further processing. For hydrogeologic units, HUFPrint outputs such hydraulic properties as horizontal hydraulic conductivity along rows, horizontal hydraulic conductivity along columns, horizontal anisotropy, vertical hydraulic conductivity or anisotropy, specific storage, specific yield, and hydraulic-conductivity depth-dependence coefficient. For model layers, HUFPrint outputs such effective hydraulic properties as horizontal hydraulic conductivity along rows, horizontal hydraulic conductivity along columns, horizontal anisotropy, specific storage, primary direction of anisotropy, and vertical conductance. Text files tabulating hydraulic properties by hydrogeologic unit, by model layer, or in a specified vertical section may be generated. Graphics showing two-dimensional cross sections and one-dimensional vertical sections at specified locations also may be generated. HUFPrint reads input files designed for MODFLOW-2000 or MODFLOW-2005.

  5. Genetic algorithms

    NASA Technical Reports Server (NTRS)

    Wang, Lui; Bayer, Steven E.

    1991-01-01

    Genetic algorithms are mathematical, highly parallel, adaptive search procedures (i.e., problem solving methods) based loosely on the processes of natural genetics and Darwinian survival of the fittest. Basic genetic algorithms concepts are introduced, genetic algorithm applications are introduced, and results are presented from a project to develop a software tool that will enable the widespread use of genetic algorithm technology.
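
    A minimal generational GA with tournament selection, one-point crossover and bit-flip mutation might look as follows; the operators and parameters are textbook defaults, not the project's tool.

    ```python
    import random

    def genetic_algorithm(fitness, n_bits=20, pop_size=50, generations=100,
                          p_cross=0.9, p_mut=0.01, seed=1):
        rng = random.Random(seed)
        pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
        for _ in range(generations):
            # Tournament selection: the fitter of two random individuals wins.
            def select():
                a, b = rng.choice(pop), rng.choice(pop)
                return a if fitness(a) >= fitness(b) else b
            nxt = []
            while len(nxt) < pop_size:
                p1, p2 = select(), select()
                if rng.random() < p_cross:            # one-point crossover
                    cut = rng.randrange(1, n_bits)
                    p1 = p1[:cut] + p2[cut:]
                # Bit-flip mutation with probability p_mut per gene.
                nxt.append([bit ^ (rng.random() < p_mut) for bit in p1])
            pop = nxt
        return max(pop, key=fitness)

    best = genetic_algorithm(sum)   # OneMax: maximize the number of ones
    ```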

  6. Algorithm aversion: people erroneously avoid algorithms after seeing them err.

    PubMed

    Dietvorst, Berkeley J; Simmons, Joseph P; Massey, Cade

    2015-02-01

    Research shows that evidence-based algorithms more accurately predict the future than do human forecasters. Yet when forecasters are deciding whether to use a human forecaster or a statistical algorithm, they often choose the human forecaster. This phenomenon, which we call algorithm aversion, is costly, and it is important to understand its causes. We show that people are especially averse to algorithmic forecasters after seeing them perform, even when they see them outperform a human forecaster. This is because people more quickly lose confidence in algorithmic than human forecasters after seeing them make the same mistake. In 5 studies, participants either saw an algorithm make forecasts, a human make forecasts, both, or neither. They then decided whether to tie their incentives to the future predictions of the algorithm or the human. Participants who saw the algorithm perform were less confident in it, and less likely to choose it over an inferior human forecaster. This was true even among those who saw the algorithm outperform the human.

  7. The Texas Medication Algorithm Project (TMAP) schizophrenia algorithms.

    PubMed

    Miller, A L; Chiles, J A; Chiles, J K; Crismon, M L; Rush, A J; Shon, S P

    1999-10-01

    In the Texas Medication Algorithm Project (TMAP), detailed guidelines for medication management of schizophrenia and related disorders, bipolar disorders, and major depressive disorders have been developed and implemented. This article describes the algorithms developed for medication treatment of schizophrenia and related disorders. The guidelines recommend a sequence of medications and discuss dosing, duration, and switch-over tactics. They also specify response criteria at each stage of the algorithm for both positive and negative symptoms. The rationale and evidence for each aspect of the algorithms are presented.

  8. Algorithm Visualization System for Teaching Spatial Data Algorithms

    ERIC Educational Resources Information Center

    Nikander, Jussi; Helminen, Juha; Korhonen, Ari

    2010-01-01

    TRAKLA2 is a web-based learning environment for data structures and algorithms. The system delivers automatically assessed algorithm simulation exercises that are solved using a graphical user interface. In this work, we introduce a novel learning environment for spatial data algorithms, SDA-TRAKLA2, which has been implemented on top of the…

  9. Motion Cueing Algorithm Development: Initial Investigation and Redesign of the Algorithms

    NASA Technical Reports Server (NTRS)

    Telban, Robert J.; Wu, Weimin; Cardullo, Frank M.; Houck, Jacob A. (Technical Monitor)

    2000-01-01

    In this project four motion cueing algorithms were initially investigated. The classical algorithm generated results with large distortion and delay and low magnitude. The NASA adaptive algorithm proved to be well tuned with satisfactory performance, while the UTIAS adaptive algorithm produced less desirable results. Modifications were made to the adaptive algorithms to reduce the magnitude of undesirable spikes. The optimal algorithm was found to have the potential for improved performance with further redesign. The center of simulator rotation was redefined. More terms were added to the cost function to enable more tuning flexibility. A new design approach using a Fortran/Matlab/Simulink setup was employed. A new semicircular canals model was incorporated in the algorithm. With these changes results show the optimal algorithm has some advantages over the NASA adaptive algorithm. Two general problems observed in the initial investigation required solutions. A nonlinear gain algorithm was developed that scales the aircraft inputs by a third-order polynomial, maximizing the motion cues while remaining within the operational limits of the motion system. A braking algorithm was developed to bring the simulator to a full stop at its motion limit and later release the brake to follow the cueing algorithm output.
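
    The third-order-polynomial gain can be reconstructed in outline: choose the linear and cubic coefficients so that the largest expected input maps exactly onto the motion limit with zero slope there. The closed-form coefficients below follow from those two conditions and are an assumption about the design, not the tuned NASA algorithm.

    ```python
    import numpy as np

    def nonlinear_gain(u, limit, u_max):
        """Third-order polynomial scaling y = k1*u + k3*u**3 chosen so that
        y(u_max) = limit and y'(u_max) = 0, compressing large inputs
        smoothly into the motion envelope."""
        k1 = 3.0 * limit / (2.0 * u_max)
        k3 = -limit / (2.0 * u_max ** 3)
        u = np.asarray(u, dtype=float)
        y = k1 * u + k3 * u ** 3
        return np.clip(y, -limit, limit)   # safety clamp at the hard limit

    print(nonlinear_gain([0.5, 2.0, 5.0], limit=2.0, u_max=5.0))
    ```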

  10. Asynchronous machine rotor speed estimation using a tabulated numerical approach

    NASA Astrophysics Data System (ADS)

    Nguyen, Huu Phuc; De Miras, Jérôme; Charara, Ali; Eltabach, Mario; Bonnet, Stéphane

    2017-12-01

    This paper proposes a new method to estimate the rotor speed of the asynchronous machine by treating the estimation problem as a nonlinear optimal control problem. The behavior of the nonlinear plant model is approximated off-line as a prediction map using a numerical one-step time discretization obtained from simulations. At each time step, the speed of the induction machine is selected by solving the dynamic fitting problem between the plant output and the predicted output, so that the estimate follows the system's dynamical behavior. Thanks to the limitation of the prediction horizon to a single time step, the execution time of the algorithm is completely bounded. It can thus easily be implemented and embedded into a real-time system to observe the speed of the real induction motor. Simulation results show the performance and robustness of the proposed estimator.
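
    In outline, the estimator can be read as a one-dimensional search over a tabulated prediction map. The sketch below assumes a hypothetical prediction_map(state, speeds) that returns one-step output predictions for every candidate speed; the grid and names are illustrative.

    ```python
    import numpy as np

    speeds = np.linspace(0.0, 200.0, 2001)   # candidate rotor speeds, rad/s

    def estimate_speed(y_measured, x_state, prediction_map):
        """Select the tabulated speed whose one-step predicted output best
        fits the measured output (the dynamic fitting problem)."""
        y_pred = prediction_map(x_state, speeds)     # (n_speeds, n_outputs)
        errors = np.linalg.norm(y_pred - y_measured, axis=1)
        return speeds[np.argmin(errors)]
    ```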

  11. Experimental study of main rotor tip geometry and tail rotor interactions in hover. Volume 2: Run log and tabulated data

    NASA Technical Reports Server (NTRS)

    Balch, D. T.; Lombardi, J.

    1985-01-01

    A model scale hover test was conducted in the Sikorsky Aircraft Model Rotor Hover Facility to identify and quantify the impact of the tail rotor on the demonstrated advantages of advanced geometry tip configurations. The existence of mutual interference between a hovering main rotor and a tail rotor was acknowledged in the test. The test was conducted using the Basic Model Test Rig and two scaled main rotor systems, one representing a 1/5.727 scale UH-60A BLACK HAWK and the other a 1/4.71 scale S-76. Eight alternate rotor tip configurations were tested, 3 on the BLACK HAWK rotor and 6 on the S-76 rotor. Four of these tips were then selected for testing in close proximity to an operating tail rotor (operating in both tractor and pusher modes) to determine whether the performance advantages that could be obtained from the use of advanced geometry tips in a main-rotor-only environment would still exist in the more complex flow field involving a tail rotor. This volume contains the test run log and tabulated data.

  12. Tabulated Data From a Pressure-Distribution Investigation at Mach Number 2.01 of a 45 Deg Sweptback-Wing Airplane Model at Combined Angles of Attack and Sideslip

    NASA Technical Reports Server (NTRS)

    Gapcynski, John P.; Landrum, Emma Jean

    1958-01-01

    A pressure-distribution investigation of a wing-body combination has been conducted in the Langley 4- by 4-foot supersonic pressure tunnel at a Mach number of 2.01. The model configuration consisted of an ogive-circular-cylinder body (fineness ratio of approximately 11) and a wing with 45 deg of sweepback at the quarter-chord line, an aspect ratio of 4, and a taper ratio of 0.2. Data were obtained on high-, mid-, and low-wing configurations and for the body and wing alone for a range of angles of attack and yaw from 0 deg to 15 deg. The tabulated pressure coefficients are presented in this report.

  13. CMC prediction for ionic surfactants in pure water and aqueous salt solutions based solely on tabulated molecular parameters.

    PubMed

    Karakashev, Stoyan I; Smoukov, Stoyan K

    2017-09-01

    The critical micelle concentration (CMC) of various surfactants is difficult to predict accurately, yet predicting it is often necessary in both industry and science. Hence, quantum-chemical software packages for precise calculation of CMC have been developed, but they are expensive and time consuming. We show here an easy method for calculating CMC with reasonable accuracy. First, CMC0 (the intrinsic CMC, absent added salt) was coupled with a quantitative structure-property relationship (QSPR) through a parameter we define, the "CMC predictor" f1. It can be easily calculated from a number of tabulated molecular parameters: the adsorption energy of the surfactant's head, the adsorption energy of its methylene groups, its number of carbon atoms, the specific adsorption energy of its counter-ions, their valency, and their bare radius. We applied this method to determine CMC0 for a test set of 11 ionic surfactants, yielding 7.5% accuracy. Furthermore, we calculated CMC in the presence of added salts using an advanced version of the Corrin-Harkins equation, which accounts for both the intrinsic and the added counter-ions. Our salt-saturation multiplier accounts for both the type and concentration of the added counter-ions. We applied our theory to a test set containing 11 anionic/cationic surfactant+salt systems, achieving 8% accuracy.
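
    For orientation, the classic Corrin-Harkins relation (which the paper's salt-saturation multiplier refines) can be solved numerically for the CMC at a given added-salt concentration. The slope and the SDS-like numbers below are illustrative assumptions, not the paper's fitted parameters.

    ```python
    import numpy as np
    from scipy.optimize import brentq

    def cmc_with_salt(cmc0, slope, c_salt):
        """Solve the classic Corrin-Harkins relation
            log10(CMC) = A - slope * log10(CMC + c_salt)
        for CMC, with A fixed by the no-salt limit
            log10(cmc0) = A - slope * log10(cmc0).
        Concentrations are in mol/L."""
        A = np.log10(cmc0) + slope * np.log10(cmc0)
        f = lambda c: np.log10(c) - (A - slope * np.log10(c + c_salt))
        return brentq(f, 1e-8, 1.0)

    # SDS-like example: intrinsic CMC ~ 8 mM, Corrin-Harkins slope ~ 0.65.
    print(cmc_with_salt(8e-3, 0.65, c_salt=0.1))
    ```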

  14. Image processing meta-algorithm development via genetic manipulation of existing algorithm graphs

    NASA Astrophysics Data System (ADS)

    Schalkoff, Robert J.; Shaaban, Khaled M.

    1999-07-01

    Automatic algorithm generation for image processing applications is not a new idea; however, previous work is either restricted to morphological operators or impractical. In this paper, we show recent research results in the development and use of meta-algorithms, i.e., algorithms that lead to new algorithms. Although the concept is generally applicable, the application domain in this work is restricted to image processing. The meta-algorithm concept described in this paper is based upon our work on dynamic algorithms. The paper first presents the concept of dynamic algorithms, which, on the basis of training and archived algorithmic experience embedded in an algorithm graph (AG), dynamically adjust the sequence of operations applied to the input image data. Each node in the tree-based representation of a dynamic algorithm with out-degree greater than 2 is a decision node. At these nodes, the algorithm examines the input data and determines which path will most likely achieve the desired results. This is currently done using nearest-neighbor classification. The details of this implementation are shown. The constrained perturbation of existing algorithm graphs, coupled with a suitable search strategy, is one mechanism to achieve meta-algorithms and offers rich potential for the discovery of new algorithms. In our work, a meta-algorithm autonomously generates new dynamic algorithm graphs via genetic recombination of existing algorithm graphs. The AG representation is well suited to this genetic-like perturbation, using a commonly employed technique in artificial neural network synthesis, namely the blueprint representation of graphs. A number of examples are given. One of the principal limitations of our current approach is the need for significant human input in the learning phase. Efforts to overcome this limitation are discussed. Future research directions are indicated.

  15. Comparison of Algorithm-based Estimates of Occupational Diesel Exhaust Exposure to Those of Multiple Independent Raters in a Population-based Case–Control Study

    PubMed Central

    Friesen, Melissa C.

    2013-01-01

    Objectives: Algorithm-based exposure assessments based on patterns in questionnaire responses and professional judgment can readily apply transparent exposure decision rules to thousands of jobs quickly. However, we need to better understand how algorithms compare to a one-by-one job review by an exposure assessor. We compared algorithm-based estimates of diesel exhaust exposure to those of three independent raters within the New England Bladder Cancer Study, a population-based case–control study, and identified conditions under which disparities occurred in the assessments of the algorithm and the raters. Methods: Occupational diesel exhaust exposure was assessed previously using an algorithm and a single rater for all 14 983 jobs reported by 2631 study participants during personal interviews conducted from 2001 to 2004. Two additional raters independently assessed a random subset of 324 jobs that were selected based on strata defined by the cross-tabulations of the algorithm and the first rater’s probability assessments for each job, oversampling their disagreements. The algorithm and each rater assessed the probability, intensity and frequency of occupational diesel exhaust exposure, as well as a confidence rating for each metric. Agreement among the raters, their aggregate rating (average of the three raters’ ratings) and the algorithm were evaluated using proportion of agreement, kappa and weighted kappa (κw). Agreement analyses on the subset used inverse probability weighting to extrapolate the subset to estimate agreement for all jobs. Classification and Regression Tree (CART) models were used to identify patterns in questionnaire responses that predicted disparities in exposure status (i.e., unexposed versus exposed) between the first rater and the algorithm-based estimates. Results: For the probability, intensity and frequency exposure metrics, moderate to moderately high agreement was observed among raters (κw = 0.50–0.76) and between the raters and the algorithm.

  16. Motion Cueing Algorithm Development: Piloted Performance Testing of the Cueing Algorithms

    NASA Technical Reports Server (NTRS)

    Houck, Jacob A. (Technical Monitor); Telban, Robert J.; Cardullo, Frank M.; Kelly, Lon C.

    2005-01-01

    The relative effectiveness in simulating aircraft maneuvers with both current and newly developed motion cueing algorithms was assessed with an eleven-subject piloted performance evaluation conducted on the NASA Langley Visual Motion Simulator (VMS). In addition to the current NASA adaptive algorithm, two new cueing algorithms were evaluated: the optimal algorithm and the nonlinear algorithm. The test maneuvers included a straight-in approach with a rotating wind vector, an offset approach with severe turbulence and an on/off lateral gust that occurs as the aircraft approaches the runway threshold, and a takeoff both with and without engine failure after liftoff. The maneuvers were executed with each cueing algorithm with added visual display delay conditions ranging from zero to 200 msec. Two methods, the quasi-objective NASA Task Load Index (TLX), and power spectral density analysis of pilot control, were used to assess pilot workload. Piloted performance parameters for the approach maneuvers, the vertical velocity upon touchdown and the runway touchdown position, were also analyzed but did not show any noticeable difference among the cueing algorithms. TLX analysis reveals, in most cases, less workload and variation among pilots with the nonlinear algorithm. Control input analysis shows pilot-induced oscillations on a straight-in approach were less prevalent compared to the optimal algorithm. The augmented turbulence cues increased workload on an offset approach that the pilots deemed more realistic compared to the NASA adaptive algorithm. The takeoff with engine failure showed the least roll activity for the nonlinear algorithm, with the least rudder pedal activity for the optimal algorithm.

  17. Super-Encryption Implementation Using Monoalphabetic Algorithm and XOR Algorithm for Data Security

    NASA Astrophysics Data System (ADS)

    Rachmawati, Dian; Andri Budiman, Mohammad; Aulia, Indra

    2018-03-01

    The exchange of data that occurs offline and online is very vulnerable to the threat of data theft. In general, cryptography is a science and art of maintaining data secrecy. Encryption is a cryptographic operation in which data is transformed into ciphertext, which is unreadable and meaningless so that it cannot be read or understood by other parties. In super-encryption, two or more encryption algorithms are combined to make the result more secure. In this work, the Monoalphabetic algorithm and the XOR algorithm are combined to form a super-encryption. The Monoalphabetic algorithm works by changing a particular letter into a new letter based on existing keywords, while the XOR algorithm works by using the logic operation XOR. Since the Monoalphabetic algorithm is a classical cryptographic algorithm and the XOR algorithm is a modern cryptographic algorithm, this scheme is expected to be both easy to implement and more secure. The combination of the two algorithms is capable of securing the data and restoring it back to its original form (plaintext), so the data integrity is still ensured.
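
    A minimal sketch of this super-encryption is shown below: a monoalphabetic substitution stage followed by a repeating-key XOR stage, with decryption reversing the layers in the opposite order. The keys are arbitrary examples.

    ```python
    import random
    import string

    def mono_encrypt(text, key_map):
        return "".join(key_map.get(ch, ch) for ch in text)

    def xor_bytes(data, key):
        return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

    rng = random.Random(7)
    letters = list(string.ascii_lowercase)
    shuffled = letters[:]
    rng.shuffle(shuffled)
    key_map = dict(zip(letters, shuffled))              # substitution key
    inv_map = {v: k for k, v in key_map.items()}        # its inverse

    stage1 = mono_encrypt("attack at dawn", key_map)    # monoalphabetic layer
    cipher = xor_bytes(stage1.encode(), b"k3y")         # XOR layer
    plain = mono_encrypt(xor_bytes(cipher, b"k3y").decode(), inv_map)
    assert plain == "attack at dawn"
    ```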

  18. Proposal of a Clinical Decision Tree Algorithm Using Factors Associated with Severe Dengue Infection.

    PubMed

    Tamibmaniam, Jayashamani; Hussin, Narwani; Cheah, Wee Kooi; Ng, Kee Sing; Muninathan, Prema

    2016-01-01

    WHO's new classification in 2009 (dengue with or without warning signs, and severe dengue) has necessitated large numbers of hospital admissions of dengue patients, which in turn has imposed a huge economic and physical burden on many hospitals around the globe, particularly in South East Asia and Malaysia, where the disease has seen a rapid surge in numbers in recent years. The lack of a simple tool to differentiate mild from life-threatening infection has led to unnecessary hospitalization of dengue patients. We conducted a single-centre, retrospective study involving serologically confirmed dengue fever patients admitted to a single ward in Hospital Kuala Lumpur, Malaysia. Data were collected for 4 months, from February to May 2014. Sociodemography, co-morbidity, days of illness before admission, symptoms, warning signs, vital signs and laboratory results were all recorded. Descriptive statistics were tabulated, and simple and multiple logistic regression analyses were done to determine significant risk factors associated with severe dengue. 657 patients with confirmed dengue were analysed, of whom 59 (9.0%) had severe dengue. Overall, the commonest warning signs were vomiting (36.1%) and abdominal pain (32.1%). Previous co-morbidity, vomiting, diarrhoea, pleural effusion, low systolic blood pressure, high haematocrit, low albumin and high urea were found to be significant risk factors for severe dengue using simple logistic regression. However, the significant risk factors for severe dengue with multiple logistic regression were only vomiting, pleural effusion, and low systolic blood pressure. Using those 3 risk factors, we plotted an algorithm for predicting severe dengue. When compared with the classification of severe dengue based on the WHO criteria, the decision tree algorithm had a sensitivity of 0.81, a specificity of 0.54, a positive predictive value of 0.16 and a negative predictive value of 0.96. The decision tree algorithm proposed in this study showed high sensitivity.
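
    Read as code, a decision rule over the three retained risk factors might look like the sketch below; the split order is an assumption, since the published tree's exact structure is not reproduced here.

    ```python
    def dengue_risk(vomiting, pleural_effusion, low_systolic_bp):
        """Toy decision rule over the three retained risk factors; the
        published tree's split order and thresholds may differ."""
        if low_systolic_bp:
            return "high risk of severe dengue: admit"
        if pleural_effusion:
            return "high risk of severe dengue: admit"
        if vomiting:
            return "warning sign present: observe closely"
        return "low risk: consider outpatient management"

    print(dengue_risk(vomiting=True, pleural_effusion=False,
                      low_systolic_bp=False))
    ```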

  19. Linear feature detection algorithm for astronomical surveys - I. Algorithm description

    NASA Astrophysics Data System (ADS)

    Bektešević, Dino; Vinković, Dejan

    2017-11-01

    Computer vision algorithms are powerful tools in astronomical image analyses, especially when automation of object detection and extraction is required. Modern object detection algorithms in astronomy are oriented towards detection of stars and galaxies, ignoring completely the detection of existing linear features. With the emergence of wide-field sky surveys, linear features attract scientific interest as possible trails of fast flybys of near-Earth asteroids and meteors. In this work, we describe a new linear feature detection algorithm designed specifically for implementation in big data astronomy. The algorithm combines a series of algorithmic steps that first remove other objects (stars and galaxies) from the image and then enhance the line to enable more efficient line detection with the Hough algorithm. The rate of false positives is greatly reduced thanks to a step that replaces possible line segments with rectangles and then compares lines fitted to the rectangles with the lines obtained directly from the image. The speed of the algorithm and its applicability in astronomical surveys are also discussed.
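
    The final detection stage can be sketched with standard OpenCV calls: edge detection followed by a probabilistic Hough transform. The paper's star/galaxy removal and line-enhancement steps are omitted here, and all parameter values are illustrative.

    ```python
    import cv2
    import numpy as np

    def detect_line_segments(image_8bit):
        """Edge detection plus probabilistic Hough transform on an 8-bit
        grayscale image; returns an array of (x1, y1, x2, y2) segments."""
        edges = cv2.Canny(image_8bit, 50, 150)
        segments = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                                   threshold=100, minLineLength=80,
                                   maxLineGap=5)
        return [] if segments is None else segments.reshape(-1, 4)

    # Synthetic test: a single bright trail on a dark frame.
    img = np.zeros((200, 200), np.uint8)
    cv2.line(img, (20, 30), (180, 150), 255, 2)
    print(detect_line_segments(img))
    ```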

  20. The Orthogonally Partitioned EM Algorithm: Extending the EM Algorithm for Algorithmic Stability and Bias Correction Due to Imperfect Data.

    PubMed

    Regier, Michael D; Moodie, Erica E M

    2016-05-01

    We propose an extension of the EM algorithm that exploits the common assumption of unique parameterization, corrects for biases due to missing data and measurement error, converges for the specified model when standard implementation of the EM algorithm has a low probability of convergence, and reduces a potentially complex algorithm into a sequence of smaller, simpler, self-contained EM algorithms. We use the theory surrounding the EM algorithm to derive the theoretical results of our proposal, showing that an optimal solution over the parameter space is obtained. A simulation study is used to explore the finite sample properties of the proposed extension when there is missing data and measurement error. We observe that partitioning the EM algorithm into simpler steps may provide better bias reduction in the estimation of model parameters. The ability to break down a complicated problem into a series of simpler, more accessible problems will permit a broader implementation of the EM algorithm, permit the use of software packages that now implement and/or automate the EM algorithm, and make the EM algorithm more accessible to a wider and more general audience.
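
    For readers unfamiliar with the base algorithm being extended, a standard two-component Gaussian-mixture EM (alternating E- and M-steps) is sketched below; the partitioned extension itself is not reproduced.

    ```python
    import numpy as np

    def em_two_gaussians(x, iters=100, seed=0):
        rng = np.random.default_rng(seed)
        mu = rng.choice(x, 2)
        sigma = np.array([x.std(), x.std()])
        pi = np.array([0.5, 0.5])
        for _ in range(iters):
            # E-step: responsibilities of each component for each point.
            dens = np.array([pi[k] / (sigma[k] * np.sqrt(2 * np.pi))
                             * np.exp(-0.5 * ((x - mu[k]) / sigma[k]) ** 2)
                             for k in range(2)])
            r = dens / dens.sum(axis=0)
            # M-step: closed-form updates of weights, means and spreads.
            n_k = r.sum(axis=1)
            mu = (r * x).sum(axis=1) / n_k
            sigma = np.sqrt((r * (x - mu[:, None]) ** 2).sum(axis=1) / n_k)
            pi = n_k / len(x)
        return mu, sigma, pi

    rng = np.random.default_rng(1)
    data = np.concatenate([rng.normal(-2, 1, 500), rng.normal(3, 0.5, 500)])
    print(em_two_gaussians(data))
    ```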

  1. The Mendeleev-Meyer force project.

    PubMed

    Santos, Sergio; Lai, Chia-Yun; Amadei, Carlo A; Gadelrab, Karim R; Tang, Tzu-Chieh; Verdaguer, Albert; Barcons, Victor; Font, Josep; Colchero, Jaime; Chiesa, Matteo

    2016-10-14

    Here we present the Mendeleev-Meyer Force Project which aims at tabulating all materials and substances in a fashion similar to the periodic table. The goal is to group and tabulate substances using nanoscale force footprints rather than atomic number or electronic configuration as in the periodic table. The process is divided into: (1) acquiring nanoscale force data from materials, (2) parameterizing the raw data into standardized input features to generate a library, (3) feeding the standardized library into an algorithm to generate, enhance or exploit a model to identify a material or property. We propose producing databases mimicking the Materials Genome Initiative, the Medical Literature Analysis and Retrieval System Online (MEDLARS) or the PRoteomics IDEntifications database (PRIDE) and making these searchable online via search engines mimicking Pubmed or the PRIDE web interface. A prototype exploiting deep learning algorithms, i.e. multilayer neural networks, is presented.

  2. Gate length scaling optimization of FinFETs

    NASA Astrophysics Data System (ADS)

    Chen, Shoumian; Shang, Enming; Hu, Shaojian

    2018-06-01

    This paper introduces a device performance optimization approach for the FinFET through optimization of the gate length. As the gate length is reduced, the leakage current (Ioff) increases and the stress along the channel is enhanced, which leads to an increase in the drive current (Isat) of the PMOS. In order to sustain Ioff, the work function is adjusted to offset the effect of the increased stress. Changing the gate length of the transistor yields different drive currents when the leakage current is fixed by adjusting the work function. For a given device, an optimal gate length is found that provides the highest drive current. As an example, for a standard performance device with Ioff = 1 nA/um, the best performance is Isat = 856 uA/um at L = 34 nm for a 14 nm FinFET and Isat = 1130 uA/um at L = 21 nm for a 7 nm FinFET; the 7 nm FinFET thus exhibits a performance boost of 32% compared with the 14 nm FinFET. However, when the same method is applied to a 5 nm FinFET, the performance boost falls short of expectations compared with the 7 nm FinFET, owing to the severe short-channel effect and the exhausted channel stress in the FinFET.

  3. Map and tabulation of quaternary mass movements along the United States-Canadian Atlantic continental slope from 32 degrees 00 minutes to 47 degrees 00 minutes N. latitude

    USGS Publications Warehouse

    Booth, J.S.; O'Leary, Dennis W.; Popenoe, Peter; Robb, James M.; McGregor, B.A.

    1988-01-01

    Since the initial report on the Grand Banks Slump off southern Newfoundland (Heezen and Ewing, 1952), a large body of data on submarine mass movement along the Atlantic continental margin of the United States and Canada has been published.  These data were compiled to provide this distribution map (sheet 1) and tabulation (sheet 2) of "principal facts" on mass movement of the northwest Atlantic Continental Slope.  Although we prepared this inventory to facilitate our study of Quaternary mass movement along the U.S. Continental Slope, we judged the compilation to be large enough and detailed enough to be published as a generally useful data source and compendium.  Sheet 3 shows examples of mass movement styles.

  4. Algorithm improvement program nuclide identification algorithm scoring criteria and scoring application.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Enghauser, Michael

    2016-02-01

    The goal of the Domestic Nuclear Detection Office (DNDO) Algorithm Improvement Program (AIP) is to facilitate gamma-radiation detector nuclide identification algorithm development, improvement, and validation. Accordingly, scoring criteria have been developed to objectively assess the performance of nuclide identification algorithms. In addition, a Microsoft Excel spreadsheet application for automated nuclide identification scoring has been developed. This report provides an overview of the equations, nuclide weighting factors, nuclide equivalencies, and configuration weighting factors used by the application for scoring nuclide identification algorithm performance. Furthermore, this report presents a general overview of the nuclide identification algorithm scoring application including illustrative examples.

  5. Multimodal optimization by using hybrid of artificial bee colony algorithm and BFGS algorithm

    NASA Astrophysics Data System (ADS)

    Anam, S.

    2017-10-01

    Optimization has become one of the important fields in mathematics. Many problems in engineering and science can be formulated as optimization problems, and they may have many local optima. The optimization problem with many local optima, known as a multimodal optimization problem, poses the challenge of finding the global solution. Several metaheuristic methods have been proposed to solve multimodal optimization problems, such as Particle Swarm Optimization (PSO), the Genetic Algorithm (GA), the Artificial Bee Colony (ABC) algorithm, etc. The performance of the ABC algorithm is better than or similar to those of other population-based algorithms, with the advantage of employing fewer control parameters. The ABC algorithm also has the advantages of strong robustness, fast convergence and high flexibility. However, it has the disadvantage of premature convergence in the later search period, and the accuracy of the optimal value sometimes cannot meet the requirements. The Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm is a good iterative method for finding a local optimum and compares favorably with other local optimization methods. Based on the advantages of the ABC algorithm and the BFGS algorithm, this paper proposes a hybrid of the two to solve the multimodal optimization problem. In the first step, the ABC algorithm is run to find a point; in the second step, the point obtained in the first step is used as the initial point of the BFGS algorithm. The results show that the hybrid method can overcome the problems of the basic ABC algorithm for almost all test functions. However, if the shape of the function is flat, the proposed method does not work well.
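
    A compact sketch of the two-phase idea is given below: a derivative-free ABC-style global phase (only the employed-bee move is kept) followed by BFGS refinement via SciPy. Population size, iteration counts and the Rosenbrock test function are illustrative choices, not the paper's settings.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def hybrid_abc_bfgs(f, bounds, n_food=30, iters=200, seed=0):
        rng = np.random.default_rng(seed)
        lo, hi = np.array(bounds, dtype=float).T
        pop = rng.uniform(lo, hi, size=(n_food, len(lo)))
        vals = np.apply_along_axis(f, 1, pop)
        for _ in range(iters):
            for i in range(n_food):            # employed-bee style move
                j = rng.integers(n_food)
                phi = rng.uniform(-1, 1, len(lo))
                cand = np.clip(pop[i] + phi * (pop[i] - pop[j]), lo, hi)
                cv = f(cand)
                if cv < vals[i]:               # greedy selection
                    pop[i], vals[i] = cand, cv
        x0 = pop[np.argmin(vals)]              # best food source found
        return minimize(f, x0, method="BFGS")  # local BFGS refinement

    rosen = lambda x: (1 - x[0]) ** 2 + 100 * (x[1] - x[0] ** 2) ** 2
    print(hybrid_abc_bfgs(rosen, bounds=[(-5, 5), (-5, 5)]).x)
    ```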

  6. Adaptive cockroach swarm algorithm

    NASA Astrophysics Data System (ADS)

    Obagbuwa, Ibidun C.; Abidoye, Ademola P.

    2017-07-01

    An adaptive cockroach swarm optimization (ACSO) algorithm is proposed in this paper to strengthen the existing cockroach swarm optimization (CSO) algorithm. The ruthless component of the CSO algorithm is modified by employing a blend-crossover predator-prey evolution method, which helps the algorithm prevent possible population collapse, maintain population diversity, and create an adaptive search in each iteration. The performance of the proposed algorithm on 16 global optimization benchmark function problems was evaluated and compared with the existing CSO, cuckoo search, differential evolution, particle swarm optimization and artificial bee colony algorithms.

  7. Parallel consistent labeling algorithms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Samal, A.; Henderson, T.

    Mackworth and Freuder have analyzed the time complexity of several constraint satisfaction algorithms. Mohr and Henderson have given new algorithms, AC-4 and PC-3, for arc and path consistency, respectively, and have shown that the arc consistency algorithm is optimal in time complexity and of the same order space complexity as the earlier algorithms. In this paper, they give parallel algorithms for solving node and arc consistency. They show that any parallel algorithm for enforcing arc consistency in the worst case must have O(na) sequential steps, where n is the number of nodes and a is the number of labels per node. They give several parallel algorithms to do arc consistency. It is also shown that they all have optimal time complexity. The results of running the parallel algorithms on a BBN Butterfly multiprocessor are also presented.
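
    For reference, the sequential arc-consistency procedure that such parallel algorithms distribute across processors can be sketched as the classic AC-3 revision loop below (AC-4, the optimal algorithm discussed above, uses finer-grained support counting).

    ```python
    from collections import deque

    def ac3(domains, constraints):
        """Sequential AC-3. domains: dict var -> set of labels;
        constraints: dict (x, y) -> predicate over (value_x, value_y)."""
        queue = deque(constraints)
        while queue:
            x, y = queue.popleft()
            pred = constraints[(x, y)]
            # Revise: drop values of x with no supporting value in y.
            removed = {vx for vx in domains[x]
                       if not any(pred(vx, vy) for vy in domains[y])}
            if removed:
                domains[x] -= removed
                if not domains[x]:
                    return False                    # wipe-out: inconsistent
                queue.extend((z, x) for (z, w) in constraints
                             if w == x and z != y)  # re-check neighbors of x
        return True

    # Toy network: X < Y with domains {1, 2, 3}.
    doms = {"X": {1, 2, 3}, "Y": {1, 2, 3}}
    cons = {("X", "Y"): lambda a, b: a < b, ("Y", "X"): lambda a, b: b < a}
    print(ac3(doms, cons), doms)
    ```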

  8. Algorithm That Synthesizes Other Algorithms for Hashing

    NASA Technical Reports Server (NTRS)

    James, Mark

    2010-01-01

    An algorithm that includes a collection of several subalgorithms has been devised as a means of synthesizing still other algorithms (which could include computer code) that utilize hashing to determine whether an element (typically, a number or other datum) is a member of a set (typically, a list of numbers). Each subalgorithm synthesizes an algorithm (e.g., a block of code) that maps a static set of key hashes to a somewhat linear monotonically increasing sequence of integers. The goal in formulating this mapping is to cause the length of the sequence thus generated to be as close as practicable to the original length of the set and thus to minimize gaps between the elements. The advantage of the approach embodied in this algorithm is that it completely avoids the traditional approach of hash-key look-ups that involve either secondary hash generation and look-up or further searching of a hash table for a desired key in the event of collisions. This algorithm guarantees that it will never be necessary to perform a search or to generate a secondary key in order to determine whether an element is a member of a set. This algorithm further guarantees that any algorithm that it synthesizes can be executed in constant time. To enforce these guarantees, the subalgorithms are formulated to employ a set of techniques, each of which works very effectively covering a certain class of hash-key values. These subalgorithms are of two types, summarized as follows: Given a list of numbers, try to find one or more solutions in which, if each number is shifted to the right by a constant number of bits and then masked with a rotating mask that isolates a set of bits, a unique number is thereby generated. In a variant of the foregoing procedure, omit the masking. Try various combinations of shifting, masking, and/or offsets until the solutions are found. From the set of solutions, select the one that provides the greatest compression for the representation and is executable in the shortest time.
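
    The first family of subalgorithms can be sketched as a direct search for a shift-and-mask pair that maps every key hash to a unique slot; the search bounds and example keys below are arbitrary.

    ```python
    def synthesize_hash(keys, max_shift=32, max_mask_bits=16):
        """Search for (shift, mask) such that (k >> shift) & mask is unique
        for every key; prefer the smallest mask (densest table)."""
        best = None
        for shift in range(max_shift):
            for bits in range(1, max_mask_bits + 1):
                mask = (1 << bits) - 1
                hashed = {(k >> shift) & mask for k in keys}
                if len(hashed) == len(keys):        # collision-free mapping
                    if best is None or mask < best[1]:
                        best = (shift, mask)
                    break           # smallest workable mask for this shift
        return best

    keys = [0x10, 0x24, 0x38, 0x4C, 0x60]
    shift, mask = synthesize_hash(keys)
    table = {(k >> shift) & mask: k for k in keys}  # constant-time membership
    ```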

  9. Strategic Control Algorithm Development : Volume 3. Strategic Algorithm Report.

    DOT National Transportation Integrated Search

    1974-08-01

    The strategic algorithm report presents a detailed description of the functional basic strategic control arrival algorithm. This description is independent of a particular computer or language. Contained in this discussion are the geometrical and env...

  10. Algorithm to find high density EEG scalp coordinates and analysis of their correspondence to structural and functional regions of the brain.

    PubMed

    Giacometti, Paolo; Perdue, Katherine L; Diamond, Solomon G

    2014-05-30

    Interpretation and analysis of electroencephalography (EEG) measurements relies on the correspondence of electrode scalp coordinates to structural and functional regions of the brain. An algorithm is introduced for automatic calculation of the International 10-20, 10-10, and 10-5 scalp coordinates of EEG electrodes on a boundary element mesh of a human head. The EEG electrode positions are then used to generate parcellation regions of the cerebral cortex based on proximity to the EEG electrodes. The scalp electrode calculation method presented in this study effectively and efficiently identifies EEG locations without prior digitization of coordinates. The averages of the electrode proximity parcellations of the cortex were tabulated with respect to structural and functional regions of the brain in a population of 20 adult subjects. Parcellations based on electrode proximity and EEG sensitivity were compared. The parcellation regions based on sensitivity and proximity were found to have 44.0 ± 11.3% agreement when demarcated by the International 10-20, 32.4 ± 12.6% by the 10-10, and 24.7 ± 16.3% by the 10-5 electrode positioning system. The EEG positioning algorithm is a fast and easy method of locating EEG scalp coordinates without the need for digitized electrode positions. The parcellation method presented summarizes the EEG scalp locations with respect to brain regions without computation of a full EEG forward model solution. The reference table of electrode proximity versus cortical regions may be used by experimenters to select electrodes that correspond to anatomical and functional regions of interest. Copyright © 2014 Elsevier B.V. All rights reserved.

  11. Quick fuzzy backpropagation algorithm.

    PubMed

    Nikov, A; Stoeva, S

    2001-03-01

    A modification of the fuzzy backpropagation (FBP) algorithm, called the QuickFBP algorithm, is proposed, in which the computation of the net function is significantly quicker. It is proved that the FBP algorithm is of exponential time complexity, while the QuickFBP algorithm is of polynomial time complexity. Convergence conditions of the QuickFBP and FBP algorithms are defined and proved for: (1) single-output neural networks in the case of training patterns with different targets; and (2) multiple-output neural networks in the case of training patterns with an equivalued target vector. They support the automation of the weight training process (quasi-unsupervised learning), establishing the target value(s) depending on the network's input values. In these cases the simulation results confirm the convergence of both algorithms. An example with a large-sized neural network illustrates the significantly greater training speed of the QuickFBP algorithm compared with the FBP algorithm. The adaptation of an interactive web system to users on the basis of the QuickFBP algorithm is presented. Since the QuickFBP algorithm ensures quasi-unsupervised learning, this implies its broad applicability in areas such as adaptive and adaptable interactive systems and data mining.

  12. Algorithm Improvement Program Nuclide Identification Algorithm Scoring Criteria And Scoring Application - DNDO.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Enghauser, Michael

    2015-02-01

    The goal of the Domestic Nuclear Detection Office (DNDO) Algorithm Improvement Program (AIP) is to facilitate gamma-radiation detector nuclide identification algorithm development, improvement, and validation. Accordingly, scoring criteria have been developed to objectively assess the performance of nuclide identification algorithms. In addition, a Microsoft Excel spreadsheet application for automated nuclide identification scoring has been developed. This report provides an overview of the equations, nuclide weighting factors, nuclide equivalencies, and configuration weighting factors used by the application for scoring nuclide identification algorithm performance. Furthermore, this report presents a general overview of the nuclide identification algorithm scoring application including illustrative examples.

  13. Aerodynamic Loads at Mach Numbers from 0.70 to 2.22 on an Airplane Model Having a Wing and Canard of Triangular Plan Form and Either Single or Twin Vertical Tails. Supplement 1: Tabulated Data for the Model with Single Vertical Tail

    NASA Technical Reports Server (NTRS)

    Peterson, Victor L.; Menees, Gene P.

    1961-01-01

    Tabulated results of a wind-tunnel investigation of the aerodynamic loads on a canard airplane model with a single vertical tail are presented for Mach numbers from 0.70 to 2.22. The Reynolds number for the measurements was 2.9 x 10(exp 6) based on the wing mean aerodynamic chord. The results include local static pressure coefficients measured on the wing, body, and vertical tail for angles of attack from -4 deg to +16 deg, angles of sideslip of 0 deg and 5.3 deg, vertical-tail settings of 0 deg and 5 deg, and nominal canard deflections of 0 deg and 10 deg. Also included are section force and moment coefficients obtained from integrations of the local pressures and model-component force and moment coefficients obtained from integrations of the section coefficients. Geometric details of the model and the locations of the pressure orifices are shown. An index to the data contained herein is presented and definitions of nomenclature are given.

  14. Improved multivariate polynomial factoring algorithm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, P.S.

    1978-10-01

    A new algorithm for factoring multivariate polynomials over the integers based on an algorithm by Wang and Rothschild is described. The new algorithm has improved strategies for dealing with the known problems of the original algorithm, namely, the leading-coefficient problem, the bad-zero problem, and the occurrence of extraneous factors. It has an algorithm for correctly predetermining leading coefficients of the factors. A new and efficient p-adic algorithm named EEZ is described. Basically, it is a linearly convergent variable-by-variable parallel construction. The improved algorithm is generally faster and requires less storage than the original algorithm. Machine examples with comparative timing are included.

  15. Algorithmic and user study of an autocompletion algorithm on a large medical vocabulary.

    PubMed

    Sevenster, Merlijn; van Ommering, Rob; Qian, Yuechen

    2012-02-01

    Autocompletion supports human-computer interaction in software applications that let users enter textual data. We are inspired by the use case in which medical professionals enter ontology concepts, catering to the ongoing demand for structured and standardized data in medicine. Our goal is to give an algorithmic analysis of one particular autocompletion algorithm, called the multi-prefix matching algorithm, which suggests terms whose words' prefixes contain all words in the string typed by the user; in this sense, opt ner me matches optic nerve meningioma. Second, we aim to investigate how well it supports users entering concepts from a large and comprehensive medical vocabulary (snomed ct). We give a concise description of the multi-prefix algorithm and sketch how it can be optimized to meet the required response time. Its performance is compared to a baseline algorithm, which gives suggestions that extend the string typed by the user to the right, e.g., optic nerve m gives optic nerve meningioma, but opt ner me does not. We conduct a user experiment in which 12 participants are invited to complete 40 snomed ct terms with the baseline algorithm and another set of 40 snomed ct terms with the multi-prefix algorithm. Our results show that users need significantly fewer keystrokes when supported by the multi-prefix algorithm than when supported by the baseline algorithm. The proposed algorithm is a competitive candidate for searching and retrieving terms from a large medical ontology. Copyright © 2011 Elsevier Inc. All rights reserved.
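
    A direct, unoptimized reading of the multi-prefix matching rule can be written in a few lines; the sketch below is illustrative and is not the paper's optimized implementation.

        def multi_prefix_match(query, term):
            """True if every word of the query is a prefix of some distinct
            word of the term, e.g. "opt ner me" matches "optic nerve meningioma"."""
            words = term.lower().split()
            for q in query.lower().split():
                for i, w in enumerate(words):
                    if w.startswith(q):
                        del words[i]          # each term word may match only once
                        break
                else:
                    return False              # some query word had no match
            return True

        # multi_prefix_match("opt ner me", "optic nerve meningioma")    -> True
        # multi_prefix_match("optic nerve m", "optic nerve meningioma") -> True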

  16. Optimal Fungal Space Searching Algorithms.

    PubMed

    Asenova, Elitsa; Lin, Hsin-Yu; Fu, Eileen; Nicolau, Dan V; Nicolau, Dan V

    2016-10-01

    Previous experiments have shown that fungi use an efficient natural algorithm for searching the space available for their growth in micro-confined networks, e.g., mazes. This natural "master" algorithm, which comprises two "slave" sub-algorithms, i.e., collision-induced branching and directional memory, has been shown to be more efficient than alternatives with one, the other, or both sub-algorithms turned off. In contrast, the present contribution compares the performance of the fungal natural algorithm against several standard artificial homologues. It was found that the space-searching fungal algorithm consistently outperforms uninformed algorithms, such as Depth-First Search (DFS). Furthermore, while the natural algorithm is inferior to informed ones, such as A*, this under-performance does not grow appreciably with the size of the maze. These findings suggest that a systematic effort to harvest the natural space-searching algorithms used by microorganisms is warranted and possibly overdue. These natural algorithms, if efficient, can be reverse-engineered for graph and tree search strategies.

  17. Mathematical and Statistical Software Index.

    DTIC Science & Technology

    1986-08-01

    [Extraction residue from the index; recoverable entries include statistical routines such as HMEAN (harmonic mean), MEDIAN (median), MODE (mode), QUANT (quantiles), OGIVE (distribution curve), IQRNG (interpercentile range), and RANGE (range); keyword terms including multiphase pivoting algorithm, cross-classification, multiple discriminant analysis, cross-tabulation, multiple-objective model, and curve fitting; and catalog entries such as RANGEX (Correct Correlations for Curtailment of Range) and RUMMAGE II (Analysis...).]

  18. On factoring RSA modulus using random-restart hill-climbing algorithm and Pollard’s rho algorithm

    NASA Astrophysics Data System (ADS)

    Budiman, M. A.; Rachmawati, D.

    2017-12-01

    The security of the widely-used RSA public key cryptography algorithm depends on the difficulty of factoring a big integer into two large prime numbers. For many years, the integer factorization problem has been intensively and extensively studied in the field of number theory. As a result, a lot of deterministic algorithms, such as Euler's algorithm, Kraitchik's, and variants of Pollard's algorithms, have been researched comprehensively. Our study takes a rather uncommon approach: rather than making use of intensive number theory, we attempt to factorize the RSA modulus n using the random-restart hill-climbing algorithm, which belongs to the class of metaheuristic algorithms. The factorization time of RSA moduli with different lengths is recorded and compared with the factorization time of Pollard's rho algorithm, which is a deterministic algorithm. Our experimental results indicate that while the random-restart hill-climbing algorithm is an acceptable candidate for factorizing smaller RSA moduli, its factorization speed is much slower than that of Pollard's rho algorithm.
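
    For contrast with the metaheuristic approach, the baseline used in the comparison, Pollard's rho, fits in a few lines. This is a standard textbook version with Floyd cycle detection; the parameter choices are illustrative.

        import math
        import random

        def pollards_rho(n):
            """Return a nontrivial factor of composite n (Pollard's rho)."""
            if n % 2 == 0:
                return 2
            while True:
                x = y = random.randrange(2, n)
                c = random.randrange(1, n)
                d = 1
                while d == 1:
                    x = (x * x + c) % n           # tortoise: one step
                    y = (y * y + c) % n
                    y = (y * y + c) % n           # hare: two steps
                    d = math.gcd(abs(x - y), n)
                if d != n:                        # on failure, retry with new c
                    return d

        # pollards_rho(8051) -> 83 or 97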

  19. Iodine Satellite

    NASA Technical Reports Server (NTRS)

    Kamhawi, Hani; Dankanich, John; Martinez, Andres; Petro, Andrew

    2015-01-01

    The Iodine Satellite (iSat) spacecraft will be the first CubeSat to demonstrate high change in velocity from a primary propulsion system by using Hall thruster technology and iodine as a propellant. The mission will demonstrate CubeSat maneuverability, including plane change, altitude change and change in its closest approach to Earth to ensure atmospheric reentry in less than 90 days. The mission is planned for launch in fall 2017. Hall thruster technology is a type of electric propulsion. Electric propulsion uses electricity, typically from solar panels, to accelerate the propellant. Electric propulsion can accelerate propellant to 10 times higher velocities than traditional chemical propulsion systems, which significantly increases fuel efficiency. To enable the success of the propulsion subsystem, iSat will also demonstrate power management and thermal control capabilities well beyond the current state-of-the-art for spacecraft of its size. This technology is a viable primary propulsion system that can be used on small satellites ranging from about 22 pounds (10 kilograms) to more than 1,000 pounds (450 kilograms). iSat's fuel efficiency is ten times greater and its propulsion per volume is 100 times greater than current cold-gas systems and three times better than the same system operating on xenon. iSat's iodine propulsion system consists of a 200 watt (W) Hall thruster, a cathode, a tank to store solid iodine, a power processing unit (PPU) and the feed system to supply the iodine. This propulsion system is based on a 200 W Hall thruster developed by Busek Co. Inc., which was previously flown using xenon as the propellant. Several improvements have been made to the original system to include a compact PPU, targeting greater than 80 percent reduction in mass and volume of conventional PPU designs. The cathode technology is planned to enable heaterless cathode conditioning, significantly increasing total system efficiency. The feed system has been designed to

  20. A Modified Decision Tree Algorithm Based on Genetic Algorithm for Mobile User Classification Problem

    PubMed Central

    Liu, Dong-sheng; Fan, Shu-jiang

    2014-01-01

    In order to offer mobile customers better service, we should first classify mobile users. To address the limitations of previous classification methods, this paper puts forward a modified decision tree algorithm for mobile user classification, which introduces a genetic algorithm to optimize the results of the decision tree algorithm. We also take context information as a classification attribute for the mobile user, classifying context into public and private context classes. We then analyze the processes and operators of the algorithm. Finally, we run an experiment on mobile user data with the algorithm: we can classify mobile users into Basic service, E-service, Plus service, and Total service user classes, and we can also derive some rules about the mobile users. Compared to the C4.5 decision tree algorithm and the SVM algorithm, the algorithm proposed in this paper has higher accuracy and greater simplicity. PMID:24688389

  1. Genetic Algorithms and Local Search

    NASA Technical Reports Server (NTRS)

    Whitley, Darrell

    1996-01-01

    The first part of this presentation is a tutorial level introduction to the principles of genetic search and models of simple genetic algorithms. The second half covers the combination of genetic algorithms with local search methods to produce hybrid genetic algorithms. Hybrid algorithms can be modeled within the existing theoretical framework developed for simple genetic algorithms. An application of a hybrid to geometric model matching is given. The hybrid algorithm yields results that improve on the current state-of-the-art for this problem.
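
    A minimal sketch of the hybrid idea, assuming a real-valued encoding and a fitness function to maximize: a plain genetic algorithm whose offspring are refined by hill-climbing local search. All operators and parameters below are illustrative, not the presentation's model.

        import random

        def hill_climb(x, fitness, step=0.1, iters=50):
            """Local search applied to each offspring (the 'hybrid' part)."""
            best, best_f = x, fitness(x)
            for _ in range(iters):
                cand = [g + random.uniform(-step, step) for g in best]
                f = fitness(cand)
                if f > best_f:
                    best, best_f = cand, f
            return best

        def hybrid_ga(fitness, dim, pop_size=20, gens=100):
            pop = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
            for _ in range(gens):
                pop.sort(key=fitness, reverse=True)
                parents = pop[: pop_size // 2]                           # truncation selection
                children = []
                for _ in range(pop_size - len(parents)):
                    a, b = random.sample(parents, 2)
                    child = [random.choice(pair) for pair in zip(a, b)]  # uniform crossover
                    children.append(hill_climb(child, fitness))          # refine locally
                pop = parents + children
            return max(pop, key=fitness)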

  2. The Texas Medication Algorithm Project antipsychotic algorithm for schizophrenia: 2003 update.

    PubMed

    Miller, Alexander L; Hall, Catherine S; Buchanan, Robert W; Buckley, Peter F; Chiles, John A; Conley, Robert R; Crismon, M Lynn; Ereshefsky, Larry; Essock, Susan M; Finnerty, Molly; Marder, Stephen R; Miller, Del D; McEvoy, Joseph P; Rush, A John; Saeed, Sy A; Schooler, Nina R; Shon, Steven P; Stroup, Scott; Tarin-Godoy, Bernardo

    2004-04-01

    The Texas Medication Algorithm Project (TMAP) has been a public-academic collaboration in which guidelines for medication treatment of schizophrenia, bipolar disorder, and major depressive disorder were used in selected public outpatient clinics in Texas. Subsequently, these algorithms were implemented throughout Texas and are being used in other states. Guidelines require updating when significant new evidence emerges; the antipsychotic algorithm for schizophrenia was last updated in 1999. This article reports the recommendations developed in 2002 and 2003 by a group of experts, clinicians, and administrators. A conference in January 2002 began the update process. Before the conference, experts in the pharmacologic treatment of schizophrenia, clinicians, and administrators reviewed literature topics and prepared presentations. Topics included ziprasidone's inclusion in the algorithm, the number of antipsychotics tried before clozapine, and the role of first generation antipsychotics. Data were rated according to Agency for Healthcare Research and Quality criteria. After discussing the presentations, conference attendees arrived at consensus recommendations. Consideration of aripiprazole's inclusion was subsequently handled by electronic communications. The antipsychotic algorithm for schizophrenia was updated to include ziprasidone and aripiprazole among the first-line agents. Relative to the prior algorithm, the number of stages before clozapine was reduced. First generation antipsychotics were included but not as first-line choices. For patients refusing or not responding to clozapine and clozapine augmentation, preference was given to trying monotherapy with another antipsychotic before resorting to antipsychotic combinations. Consensus on algorithm revisions was achieved, but only further well-controlled research will answer many key questions about sequence and type of medication treatments of schizophrenia.

  3. Asteroid rotation. I - Tabulation and analysis of rates, pole positions and shapes. II - A theory for the collisional evolution of rotation rates

    NASA Technical Reports Server (NTRS)

    Harris, A. W.; Burns, J. A.

    1979-01-01

    Rotation properties and shape data for 182 asteroids are compiled and analyzed, and a collisional model for the evolution of the mean rotation rate of asteroids is proposed. Tabulations of asteroid rotation rates, taxonomic types, pole positions, sizes, and shapes, and plots of rotation frequency and light curve amplitude against size, indicate that asteroid rotational frequency increases with decreasing size for all asteroids except those of the C or S classes. Light curve data also indicate that small asteroids are more irregular in shape than large asteroids. The observed dispersion in rotation rates is well represented by a three-dimensional Maxwellian distribution, suggestive of collisional encounters between asteroids. In the proposed model, the rotation rate is found to tend toward an equilibrium value, at which spin-up due to infrequent, large collisions is balanced by a drag due to the larger number of small collisions. The lower mean rotation rate of C-type asteroids is attributed to a lower mean density for that class, and the increase in rotation rate with decreasing size is interpreted as indicative of a substantial population of strong asteroids.

  4. Accurate Finite Difference Algorithms

    NASA Technical Reports Server (NTRS)

    Goodrich, John W.

    1996-01-01

    Two families of finite difference algorithms for computational aeroacoustics are presented and compared. All of the algorithms are single-step explicit methods, they have the same order of accuracy in both space and time, with examples up to eleventh order, and they have multidimensional extensions. One of the algorithm families has spectral-like high resolution. Propagation with high-order and high-resolution algorithms can produce accurate results after O(10(exp 6)) periods of propagation with eight grid points per wavelength.
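
    The abstract's algorithm families are not reproduced here, but the flavor of high-order differencing can be seen in a standard fourth-order central difference stencil; a minimal sketch:

        import math

        def derivative_4th_order(f, x, h=1e-3):
            """Fourth-order central difference for f'(x):
            (-f(x+2h) + 8f(x+h) - 8f(x-h) + f(x-2h)) / (12h); error is O(h^4)."""
            return (-f(x + 2*h) + 8*f(x + h) - 8*f(x - h) + f(x - 2*h)) / (12*h)

        print(derivative_4th_order(math.sin, 0.0))   # ~1.0; the exact value is cos(0) = 1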

  5. Iodine Satellite

    NASA Technical Reports Server (NTRS)

    Dankanich, John; Kamhawi, Hani; Szabo, James

    2015-01-01

    This project is a collaborative effort to mature an iodine propulsion system while reducing risk and increasing fidelity of a technology demonstration mission concept.[1] The FY 2014 tasks include investments leveraged throughout NASA, from multiple mission directorates, as a partnership with NASA Glenn Research Center (GRC), a NASA Marshall Space Flight Center (MSFC) Technology Investment Project, and an Air Force partnership. Propulsion technology is often a critical enabling technology for space missions. NASA is investing in technologies to enable high value missions with very small and low-cost spacecraft, even CubeSats. However, these small spacecraft currently lack any appreciable propulsion capability. CubeSats are typically deployed and drift without any ability to transfer to higher value orbits, perform orbit maintenance, or deorbit. However, the iodine Hall system can allow the spacecraft to transfer into a higher value science orbit. The iodine satellite (iSAT) will be able to achieve a ΔV of >500 m/s with <1 kg of solid iodine propellant, which can be stored in an unpressurized benign state prior to launch. The iSAT propulsion system consists of the 200 W Hall thruster, solid iodine propellant tank, a power processing unit, and the necessary valves and tubing to route the iodine vapor. The propulsion system is led by GRC, with critical hardware provided by the Busek Co. The propellant tank begins with solid iodine unpressurized on the ground and in-flight before operations, which is then heated via tank heaters to a temperature at which solid iodine sublimates to iodine vapor. The vapor is then routed through tubing and custom valves to control mass flow to the thruster and cathode assembly.[2] The thruster then ionizes the vapor and accelerates it via magnetic and electrostatic fields, resulting in thrust with a specific impulse >1,300 s. The iSAT spacecraft, illustrated in figure 1, is currently a 12U CubeSat. The spacecraft chassis will be
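
    As a rough consistency check on the quoted figures (not part of the record), the rocket equation delta-v = Isp * g0 * ln(m0 / mf) can be evaluated; only the specific impulse and propellant mass below come from the abstract, while the 12U wet mass is an assumed round number.

        import math

        isp = 1300.0     # s, "specific impulse >1,300 s" from the record
        g0 = 9.80665     # m/s^2, standard gravity
        m_wet = 24.0     # kg, assumed 12U CubeSat wet mass (not from the record)
        m_prop = 0.9     # kg, iodine propellant (record states < 1 kg)

        delta_v = isp * g0 * math.log(m_wet / (m_wet - m_prop))
        print(f"delta-v = {delta_v:.0f} m/s")   # ~490 m/s under these assumptions,
                                                # consistent with the >500 m/s claim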

  6. Algorithm to find high density EEG scalp coordinates and analysis of their correspondence to structural and functional regions of the brain

    PubMed Central

    Giacometti, Paolo; Perdue, Katherine L.; Diamond, Solomon G.

    2014-01-01

    Background Interpretation and analysis of electroencephalography (EEG) measurements relies on the correspondence of electrode scalp coordinates to structural and functional regions of the brain. New Method An algorithm is introduced for automatic calculation of the International 10–20, 10-10, and 10-5 scalp coordinates of EEG electrodes on a boundary element mesh of a human head. The EEG electrode positions are then used to generate parcellation regions of the cerebral cortex based on proximity to the EEG electrodes. Results The scalp electrode calculation method presented in this study effectively and efficiently identifies EEG locations without prior digitization of coordinates. The averages of the electrode proximity parcellations of the cortex were tabulated with respect to structural and functional regions of the brain in a population of 20 adult subjects. Comparison with Existing Methods Parcellations based on electrode proximity and EEG sensitivity were compared. The parcellation regions based on sensitivity and proximity were found to have 44.0 ± 11.3% agreement when demarcated by the International 10–20, 32.4 ± 12.6% by the 10-10, and 24.7 ± 16.3% by the 10-5 electrode positioning system. Conclusions The EEG positioning algorithm is a fast and easy method of locating EEG scalp coordinates without the need for digitized electrode positions. The parcellation method presented summarizes the EEG scalp locations with respect to brain regions without computation of a full EEG forward model solution. The reference table of electrode proximity versus cortical regions may be used by experimenters to select electrodes that correspond to anatomical and functional regions of interest. PMID:24769168

  7. JPSS Cryosphere Algorithms: Integration and Testing in Algorithm Development Library (ADL)

    NASA Astrophysics Data System (ADS)

    Tsidulko, M.; Mahoney, R. L.; Meade, P.; Baldwin, D.; Tschudi, M. A.; Das, B.; Mikles, V. J.; Chen, W.; Tang, Y.; Sprietzer, K.; Zhao, Y.; Wolf, W.; Key, J.

    2014-12-01

    JPSS is a next-generation satellite system that is planned to be launched in 2017. The satellites will carry a suite of sensors that are already on board the Suomi National Polar-orbiting Partnership (S-NPP) satellite. The NOAA/NESDIS/STAR Algorithm Integration Team (AIT) works within the Algorithm Development Library (ADL) framework, which mimics the operational JPSS Interface Data Processing Segment (IDPS). The AIT contributes to the development, integration, and testing of scientific algorithms employed in the IDPS. This presentation discusses cryosphere-related activities performed in the ADL. The addition of a new ancillary data set, the NOAA Global Multisensor Automated Snow/Ice data (GMASI), with the accompanying ADL code modifications is described. The preliminary impact of GMASI on the gridded Snow/Ice product is estimated. Several modifications to the Ice Age algorithm, which mis-classifies ice type for certain areas/time periods, are tested in the ADL. Sensitivity runs for daytime, nighttime, and terminator-zone conditions are performed and presented. Comparisons between the original and modified versions of the Ice Age algorithm are also presented.

  8. Algorithm-development activities

    NASA Technical Reports Server (NTRS)

    Carder, Kendall L.

    1994-01-01

    Algorithm-development activities at USF continue. The algorithm for determining chlorophyll alpha concentration (Chl alpha) and the gelbstoff absorption coefficient for SeaWiFS and MODIS-N radiance data is our current priority.

  9. Theoretic derivation of directed acyclic subgraph algorithm and comparisons with message passing algorithm

    NASA Astrophysics Data System (ADS)

    Ha, Jeongmok; Jeong, Hong

    2016-07-01

    This study investigates the directed acyclic subgraph (DAS) algorithm, which is used to solve discrete labeling problems much more rapidly than other Markov-random-field-based inference methods, at competitive accuracy. However, the mechanism by which the DAS algorithm simultaneously achieves competitive accuracy and fast execution speed has not been elucidated by a theoretical derivation. We analyze the DAS algorithm by comparing it with a message passing algorithm. Graphical models, inference methods, and energy-minimization frameworks are compared between the DAS and message passing algorithms. Moreover, the performances of DAS and other message passing methods [sum-product belief propagation (BP), max-product BP, and tree-reweighted message passing] are experimentally compared.

  10. A Winner Determination Algorithm for Combinatorial Auctions Based on Hybrid Artificial Fish Swarm Algorithm

    NASA Astrophysics Data System (ADS)

    Zheng, Genrang; Lin, ZhengChun

    The problem of winner determination in combinatorial auctions is a hot topic in electronic business and an NP-hard problem. A Hybrid Artificial Fish Swarm Algorithm (HAFSA), which combines the First Suite Heuristic Algorithm (FSHA) with the Artificial Fish Swarm Algorithm (AFSA), is proposed to solve the problem, building on the theory of AFSA. Experimental results show that HAFSA is a rapid and efficient algorithm for the winner determination problem. Compared with the Ant Colony Optimization Algorithm, it performs well and has broad application prospects.

  11. A review on quantum search algorithms

    NASA Astrophysics Data System (ADS)

    Giri, Pulak Ranjan; Korepin, Vladimir E.

    2017-12-01

    The use of superposition of states in quantum computation, known as quantum parallelism, has a significant speed advantage over classical computation. This is evident from early quantum algorithms such as Deutsch's algorithm, the Deutsch-Jozsa algorithm and its variation the Bernstein-Vazirani algorithm, Simon's algorithm, Shor's algorithms, etc. Quantum parallelism also significantly speeds up database search, which is important in computer science because it appears as a subroutine in many important algorithms. Grover's quantum database search finds the target element in an unsorted database quadratically faster than a classical computer. We review Grover's quantum search algorithms for single and multiple target elements in a database. The partial search algorithm of Grover and Radhakrishnan and its optimization by Korepin, called the GRK algorithm, are also discussed.
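
    For a single target among N items, Grover's algorithm needs about (pi/4) * sqrt(N) oracle queries, against roughly N/2 expected for a classical scan; a tiny illustrative sketch of the iteration count:

        import math

        def grover_iterations(n_items, n_targets=1):
            """Optimal number of Grover iterations, floor(pi/4 * sqrt(N/M));
            each iteration rotates the state toward the target subspace."""
            return math.floor((math.pi / 4) * math.sqrt(n_items / n_targets))

        print(grover_iterations(1_000_000))   # 785 queries vs ~500,000 classically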

  12. Optimization of the Feature Weights of the Voting Feature Intervals 5 Algorithm Using the Particle Swarm Optimization Algorithm

    NASA Astrophysics Data System (ADS)

    Hayana Hasibuan, Eka; Mawengkang, Herman; Efendi, Syahril

    2017-12-01

    The Particle Swarm Optimization (PSO) algorithm is used in this research to optimize the feature weights in the Voting Feature Intervals 5 (VFI5) algorithm, yielding a combined PSO/VFI5 model. Optimizing the feature weights for diabetes and dyspepsia data is considered important because the data bear directly on people's lives: any inaccuracy in determining the most dominant feature weights could prove fatal. With the PSO algorithm, accuracy in fold 1 increased from 92.31% to 96.15%, a gain of 3.8%; accuracy in fold 2 remained at the 92.52% obtained by the VFI5 algorithm; and accuracy in fold 3 increased from 85.19% to 96.29%, a gain of 11%. The total accuracy across the three trials increased by 14%. In general, the Particle Swarm Optimization algorithm succeeded in increasing accuracy in several folds; we therefore conclude that the PSO algorithm is well suited to optimizing the VFI5 classification algorithm.

  13. Artifact removal algorithms for stroke detection using a multistatic MIST beamforming algorithm.

    PubMed

    Ricci, E; Di Domenico, S; Cianca, E; Rossi, T

    2015-01-01

    Microwave imaging (MWI) has recently been shown to be a promising imaging modality for low-complexity, low-cost and fast brain imaging tools, which could play a fundamental role in efficiently managing emergencies related to stroke and hemorrhages. This paper focuses on the UWB radar imaging approach, and in particular on the processing algorithms for the backscattered signals. Assuming the use of the multistatic version of the MIST (Microwave Imaging Space-Time) beamforming algorithm, developed by Hagness et al. for the early detection of breast cancer, the paper proposes and compares two artifact removal algorithms. Artifact removal is an essential step of any UWB radar imaging system, and the artifact removal algorithms considered to date have been shown not to be effective in the specific scenario of brain imaging. First, the paper proposes modifications of a known artifact removal algorithm; these modifications are shown to be effective in achieving good localization accuracy and fewer false positives. The main contribution, however, is an artifact removal algorithm based on statistical methods, which achieves even better performance at much lower computational complexity.

  14. Portfolios of quantum algorithms.

    PubMed

    Maurer, S M; Hogg, T; Huberman, B A

    2001-12-17

    Quantum computation holds promise for the solution of many intractable problems. However, since many quantum algorithms are stochastic in nature, they can find the solution of hard problems only probabilistically. Thus the efficiency of the algorithms has to be characterized by both the expected time to completion and the associated variance. In order to minimize both the running time and its uncertainty, we show that portfolios of quantum algorithms analogous to those of finance can outperform single algorithms when applied to NP-complete problems such as 3-satisfiability.

  15. DNABIT Compress - Genome compression algorithm.

    PubMed

    Rajarajeswari, Pothuraju; Apparao, Allam

    2011-01-22

    Data compression is concerned with how information is organized in data. Efficient storage means removal of redundancy from the data being stored in the DNA molecule. Data compression algorithms remove redundancy and are used to understand biologically important molecules. We present a compression algorithm, "DNABIT Compress", for DNA sequences based on a novel algorithm that assigns binary bits to smaller segments of DNA bases to compress both repetitive and non-repetitive DNA sequences. Our proposed algorithm achieves the best compression ratio for DNA sequences for larger genomes. Significantly better compression results show that the "DNABIT Compress" algorithm is the best among the remaining compression algorithms. While achieving the best compression ratios for DNA sequences (genomes), our new DNABIT Compress algorithm significantly improves the running time of all previous DNA compression programs. Assigning binary bits (a unique BIT CODE) to fragments of a DNA sequence (exact repeats, reverse repeats) is also a unique concept introduced in this algorithm for the first time in DNA compression. This proposed new algorithm achieves a compression ratio as low as 1.58 bits/base, whereas the existing best methods could not achieve a ratio below 1.72 bits/base.
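
    The two-bits-per-base baseline that DNABIT Compress improves on is easy to make concrete. The sketch below packs and unpacks a DNA string at exactly 2 bits/base; the repeat and palindrome coding that pushes the ratio down to 1.58 bits/base is not shown.

        CODE = {"A": 0b00, "C": 0b01, "G": 0b10, "T": 0b11}

        def pack(seq):
            """Pack a DNA string into an integer at exactly 2 bits/base."""
            bits = 0
            for base in seq:
                bits = (bits << 2) | CODE[base]
            return bits, len(seq)

        def unpack(bits, n):
            bases = "ACGT"
            return "".join(bases[(bits >> 2 * (n - 1 - i)) & 0b11] for i in range(n))

        # bits, n = pack("ACGT"); unpack(bits, n) -> "ACGT"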

  16. File text security using Hybrid Cryptosystem with Playfair Cipher Algorithm and Knapsack Naccache-Stern Algorithm

    NASA Astrophysics Data System (ADS)

    Amalia; Budiman, M. A.; Sitepu, R.

    2018-03-01

    Cryptography is one of the best methods to keep information safe from security attacks by unauthorized people. Many studies have been done by previous researchers to create more robust cryptographic algorithms that provide high security for data communication. One way to strengthen data security is the hybrid cryptosystem method, which combines symmetric and asymmetric algorithms. In this study, we examine a hybrid cryptosystem combining a modified 16x16 Playfair Cipher as the symmetric algorithm with the Knapsack Naccache-Stern cryptosystem as the asymmetric algorithm. We measured the running time of this hybrid algorithm in a series of experiments, testing message lengths of 10, 100, 1000, 10000, and 100000 characters and key lengths of 10, 20, 30, and 40. Our results show that the processing time for each algorithm's encryption and decryption is linearly proportional to message length: the longer the message, the more time is needed to encrypt and decrypt it. Encryption with the Knapsack Naccache-Stern algorithm takes longer than its decryption, while encryption with the modified 16x16 Playfair Cipher takes less time than its decryption.

  17. The global Minmax k-means algorithm.

    PubMed

    Wang, Xiaoyan; Bai, Yanping

    2016-01-01

    The global k-means algorithm is an incremental approach to clustering that dynamically adds one cluster center at a time through a deterministic global search procedure from suitable initial positions, and employs k-means to minimize the sum of the intra-cluster variances. However, the global k-means algorithm sometimes produces singleton clusters, and its initial positions are sometimes poor; after a bad initialization, the k-means algorithm can easily get stuck in a poor local optimum. In this paper, we first modified the global k-means algorithm to eliminate singleton clusters, and then applied the MinMax k-means clustering error method to the global k-means algorithm to overcome the effect of bad initialization, yielding the proposed global MinMax k-means algorithm. The proposed clustering method is tested on some popular data sets and compared to the k-means algorithm, the global k-means algorithm, and the MinMax k-means algorithm. The experimental results show that our proposed algorithm outperforms the other algorithms mentioned in the paper.
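
    A minimal sketch of the incremental global k-means idea (the baseline, not the authors' MinMax variant): grow from one centre, trying each data point as the start of the new centre and keeping the best refined result. The inner Lloyd refinement below is deliberately crude and all names are illustrative.

        import numpy as np

        def lloyd(X, centres, iters=10):
            """A few Lloyd iterations; returns refined centres and total error."""
            for _ in range(iters):
                labels = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(-1).argmin(1)
                centres = np.array([X[labels == j].mean(0) if (labels == j).any()
                                    else centres[j] for j in range(len(centres))])
            return centres, ((X - centres[labels]) ** 2).sum()

        def global_kmeans(X, k_max):
            centres = X.mean(0, keepdims=True)            # solution for k = 1
            for _ in range(2, k_max + 1):
                trials = (lloyd(X, np.vstack([centres, x])) for x in X)
                centres, _ = min(trials, key=lambda t: t[1])
            return centres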

  18. Algorithmic Coordination in Robotic Networks

    DTIC Science & Technology

    2010-11-29

    [Extraction fragments: the project designs and analyzes coordination algorithms with appropriate performance, robustness, and scalability properties for various task allocation, surveillance, and information-gathering applications; building on the classic auction algorithms in static networks, it develops efficient distributed algorithms for target assignment.]

  19. Algorithm Engineering: Concepts and Practice

    NASA Astrophysics Data System (ADS)

    Chimani, Markus; Klein, Karsten

    Over the last years the term algorithm engineering has become a widespread synonym for experimental evaluation in the context of algorithm development. Yet it implies even more. We discuss the major weaknesses of traditional "pen and paper" algorithmics and the ever-growing gap between theory and practice in the context of modern computer hardware and real-world problem instances. We present the key ideas and concepts of the central algorithm engineering cycle that is based on a full feedback loop: It starts with the design of the algorithm, followed by the analysis, implementation, and experimental evaluation. The results of the latter can then be reused for modifications to the algorithmic design, stronger or input-specific theoretic performance guarantees, etc. We describe the individual steps of the cycle, explaining the rationale behind them and giving examples of how to conduct these steps thoughtfully. Thereby we give an introduction to current algorithmic key issues like I/O-efficient or parallel algorithms, succinct data structures, hardware-aware implementations, and others. We conclude with two especially insightful success stories (shortest path problems and text search) where the application of algorithm engineering techniques led to tremendous performance improvements compared with previous state-of-the-art approaches.

  20. Radicalization, Linkage, and Diversity: Current Trends in Terrorism in Europe

    DTIC Science & Technology

    2011-01-01

    [Extraction residue from an incident table: a June 2008 attack on a restaurant in Exeter in which only the attacker was injured (Independent); Nicholas Roddis, United Kingdom, attacker arrested, August (Independent).] ...is evolving into a sort of franchise organisation, which acts as a point of reference for independent terrorist groups or individuals.[5] Sageman's...

  1. CURE-SMOTE algorithm and hybrid algorithm for feature selection and parameter optimization based on random forests.

    PubMed

    Ma, Li; Fan, Suohai

    2017-03-14

    The random forests algorithm is a type of classifier with prominent universality, a wide application range, and robustness against overfitting. But there are still some drawbacks to random forests. Therefore, to improve the performance of random forests, this paper seeks to improve imbalanced-data processing, feature selection, and parameter optimization. We propose the CURE-SMOTE algorithm for the imbalanced data classification problem. Experiments on imbalanced UCI data reveal that combining Clustering Using Representatives (CURE) with the original synthetic minority oversampling technique (SMOTE) is effective compared with the classification results on the original data using random sampling, Borderline-SMOTE1, safe-level SMOTE, C-SMOTE, and k-means-SMOTE. Additionally, a hybrid RF (random forests) algorithm has been proposed for feature selection and parameter optimization, which uses the minimum out-of-bag (OOB) data error as its objective function. Simulation results on binary and higher-dimensional data indicate that the proposed hybrid RF algorithms, the hybrid genetic-random forests algorithm, the hybrid particle swarm-random forests algorithm, and the hybrid fish swarm-random forests algorithm, can achieve the minimum OOB error and show the best generalization ability. The training set produced from the proposed CURE-SMOTE algorithm is closer to the original data distribution because it contains minimal noise. Thus, better classification results are produced from this feasible and effective algorithm. Moreover, the hybrid algorithms' F-value, G-mean, AUC, and OOB scores demonstrate that they surpass the performance of the original RF algorithm. Hence, this hybrid algorithm provides a new way to perform feature selection and parameter optimization.
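
    The core of plain SMOTE, interpolating a synthetic minority sample between a point and one of its k nearest neighbours, can be sketched as follows; this is not the CURE-SMOTE hybrid, and all names are illustrative.

        import random

        def smote_sample(minority, k=5):
            """Generate one synthetic sample from a list of minority points
            (each a list of floats) by interpolating toward a neighbour."""
            x = random.choice(minority)
            neighbours = sorted(                     # crude k-NN by squared distance
                (p for p in minority if p is not x),
                key=lambda p: sum((a - b) ** 2 for a, b in zip(x, p)),
            )[:k]
            nn = random.choice(neighbours)
            t = random.random()                      # interpolation factor in [0, 1)
            return [a + t * (b - a) for a, b in zip(x, nn)]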

  2. Biological sequence compression algorithms.

    PubMed

    Matsumoto, T; Sadakane, K; Imai, H

    2000-01-01

    Today, more and more DNA sequences are becoming available. The information about DNA sequences is stored in molecular biology databases. The size and importance of these databases will continue to grow, so this information must be stored and communicated efficiently. Furthermore, sequence compression can be used to define similarities between biological sequences. Standard compression algorithms such as gzip or compress cannot compress DNA sequences; they only expand them in size. On the other hand, CTW (the Context Tree Weighting method) can compress DNA sequences to less than two bits per symbol. These algorithms do not use the special structures of biological sequences. Two characteristic structures of DNA sequences are known: one is called palindromes or reverse complements, and the other is approximate repeats. Several algorithms specific to DNA sequences that use these structures can compress them to less than two bits per symbol. In this paper, we improve CTW so that the characteristic structures of DNA sequences are available to it. Before encoding the next symbol, the algorithm searches for an approximate repeat and palindrome using hashing and dynamic programming. If there is a palindrome or an approximate repeat of sufficient length, our algorithm represents it with its length and distance. With this preprocessing, the new program achieves a slightly higher compression ratio than existing DNA-oriented compression algorithms. We also describe a new compression algorithm for protein sequences.

  3. Hybrid cryptosystem implementation using fast data encipherment algorithm (FEAL) and goldwasser-micali algorithm for file security

    NASA Astrophysics Data System (ADS)

    Rachmawati, D.; Budiman, M. A.; Siburian, W. S. E.

    2018-05-01

    In the process of exchanging files, security is indispensable to prevent the theft of data. Cryptography is one of the sciences used to secure data by encoding. The Fast Data Encipherment Algorithm (FEAL) is a symmetric block cipher. The file to be protected is therefore encrypted and decrypted using the FEAL algorithm. To strengthen the security of the data, the session key used by FEAL is encoded with the Goldwasser-Micali algorithm, an asymmetric, probabilistic cryptographic algorithm. In the encryption process, the key is converted into binary form, and the random selection of values of x causes the resulting cipher key to differ for each binary value. The combination of symmetric and asymmetric algorithms is called a hybrid cryptosystem. Using the FEAL and Goldwasser-Micali algorithms together restores the message to its original form, and the time FEAL requires for encryption and decryption is directly proportional to the length of the message. For the Goldwasser-Micali algorithm, however, the length of the message is not directly proportional to the time of encryption and decryption.

  4. Immigration and Its Effect on the College-Going Outcomes of Natives

    ERIC Educational Resources Information Center

    Neymotin, Florence

    2009-01-01

    In this paper, I analyze immigration's effect on the SAT-scores and college application patterns of high school students in California and Texas. The student-level dataset used is longitudinal in nature and is matched via a unique algorithm to the Census 2000 summary tabulation files to determine immigration at the local census-place level. The…

  5. Revisiting negative selection algorithms.

    PubMed

    Ji, Zhou; Dasgupta, Dipankar

    2007-01-01

    This paper reviews the progress of negative selection algorithms, an anomaly/change detection approach in Artificial Immune Systems (AIS). Following its initial model, we try to identify the fundamental characteristics of this family of algorithms and summarize their diversities. There exist various elements in this method, including data representation, coverage estimate, affinity measure, and matching rules, which are discussed for different variations. The various negative selection algorithms are categorized by different criteria as well. The relationship and possible combinations with other AIS or other machine learning methods are discussed. Prospective developments and the applicability of negative selection algorithms, along with their influence on related areas, are then speculated upon based on the discussion.

  6. Study of image matching algorithm and sub-pixel fitting algorithm in target tracking

    NASA Astrophysics Data System (ADS)

    Yang, Ming-dong; Jia, Jianjun; Qiang, Jia; Wang, Jian-yu

    2015-03-01

    Image correlation matching is a tracking method that searches for the region most similar to a target template, based on a correlation measure between two images. Because there is no need to segment the image, the computational cost of this method is low, making image correlation matching a basic method of target tracking. This paper mainly studies a gray-scale image matching algorithm whose precision is at the sub-pixel level. The matching algorithm used in this paper is the SAD (Sum of Absolute Differences) method, which excels in real-time systems because of its low computational complexity. The SAD method is introduced first, along with the most frequently used sub-pixel fitting algorithms. Those fitting algorithms are too complex for real-time systems; however, target tracking often requires high real-time performance. With this in mind, we put forward a paraboloidal fitting algorithm that is simple and easily realized in a real-time system. The result of this algorithm is compared with that of a surface fitting algorithm through image matching simulation; the precision difference between the two algorithms is small, less than 0.01 pixel. To investigate the influence of target rotation on matching precision, a camera rotation experiment was carried out. The detector used in the camera is a CMOS detector. It was fixed to an arc pendulum table, and pictures were taken with the camera rotated to different angles. A subarea of the original picture was chosen as the template, and the best matching spot was searched for using the image matching algorithm described above. The results show that the matching error grows as the target rotation angle increases, in an approximately linear relation. Finally, the influence of noise on matching precision was investigated. Gaussian noise and salt-and-pepper noise were added to the image respectively, and the image
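
    A compact sketch of the two pieces described above: an exhaustive SAD search followed by sub-pixel refinement. The refinement here is a separable 1-D parabolic fit in each axis rather than the paper's full paraboloidal fit; array shapes and names are assumptions.

        import numpy as np

        def sad_match(image, template):
            """Return the sub-pixel (row, col) of the best template match."""
            image, template = image.astype(float), template.astype(float)
            th, tw = template.shape
            sad = np.empty((image.shape[0] - th + 1, image.shape[1] - tw + 1))
            for r in range(sad.shape[0]):
                for c in range(sad.shape[1]):
                    sad[r, c] = np.abs(image[r:r+th, c:c+tw] - template).sum()
            r, c = np.unravel_index(sad.argmin(), sad.shape)

            def vertex(f_m, f_0, f_p):
                """Offset of the minimum of a parabola through three samples."""
                d = f_m - 2 * f_0 + f_p
                return 0.0 if d == 0 else 0.5 * (f_m - f_p) / d

            dy = vertex(sad[r-1, c], sad[r, c], sad[r+1, c]) if 0 < r < sad.shape[0] - 1 else 0.0
            dx = vertex(sad[r, c-1], sad[r, c], sad[r, c+1]) if 0 < c < sad.shape[1] - 1 else 0.0
            return r + dy, c + dx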

  7. Statistical efficiency of adaptive algorithms.

    PubMed

    Widrow, Bernard; Kamenetsky, Max

    2003-01-01

    The statistical efficiency of a learning algorithm applied to the adaptation of a given set of variable weights is defined as the ratio of the quality of the converged solution to the amount of data used in training the weights. Statistical efficiency is computed by averaging over an ensemble of learning experiences. A high quality solution is very close to optimal, while a low quality solution corresponds to noisy weights and less than optimal performance. In this work, two gradient descent adaptive algorithms are compared, the LMS algorithm and the LMS/Newton algorithm. LMS is simple and practical, and is used in many applications worldwide. LMS/Newton is based on Newton's method and the LMS algorithm. LMS/Newton is optimal in the least squares sense. It maximizes the quality of its adaptive solution while minimizing the use of training data. Many least squares adaptive algorithms have been devised over the years, but no other least squares algorithm can give better performance, on average, than LMS/Newton. LMS is easily implemented, but LMS/Newton, although of great mathematical interest, cannot be implemented in most practical applications. Because of its optimality, LMS/Newton serves as a benchmark for all least squares adaptive algorithms. The performances of LMS and LMS/Newton are compared, and it is found that under many circumstances, both algorithms provide equal performance. For example, when both algorithms are tested with statistically nonstationary input signals, their average performances are equal. When adapting with stationary input signals and with random initial conditions, their respective learning times are on average equal. However, under worst-case initial conditions, the learning time of LMS can be much greater than that of LMS/Newton, and this is the principal disadvantage of the LMS algorithm. But the strong points of LMS are ease of implementation and optimal performance under important practical conditions. For these reasons, the LMS
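
    As a reference point, plain LMS is only a few lines. The sketch below (NumPy assumed, parameter names illustrative) implements the update w <- w + mu * e * x; LMS/Newton would additionally premultiply the update by the inverse of the input autocorrelation matrix, which is the step that is rarely implementable in practice.

        import numpy as np

        def lms(x, d, n_taps=8, mu=0.01):
            """Adapt an FIR filter so that w . x approximates the desired d."""
            x, d = np.asarray(x, float), np.asarray(d, float)
            w = np.zeros(n_taps)
            for n in range(n_taps, len(x)):
                xn = x[n - n_taps:n][::-1]    # most recent samples first
                e = d[n] - w @ xn             # instantaneous error
                w += mu * e * xn              # stochastic gradient step
            return w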

  8. New development of the image matching algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, Xiaoqiang; Feng, Zhao

    2018-04-01

    To study image matching algorithms, four elements of such algorithms are described: similarity measure, feature space, search space, and search strategy. Four common indexes for evaluating image matching algorithms are also described: matching accuracy, matching efficiency, robustness, and universality. The paper then describes the principles of image matching algorithms based on gray values, features, frequency-domain analysis, neural networks, and semantic recognition, and analyzes their characteristics and latest research achievements. Finally, the development trend of image matching algorithms is discussed. This study is significant for algorithm improvement, new algorithm design, and algorithm selection in practice.

  9. Fusing face-verification algorithms and humans.

    PubMed

    O'Toole, Alice J; Abdi, Hervé; Jiang, Fang; Phillips, P Jonathon

    2007-10-01

    It has been demonstrated recently that state-of-the-art face-recognition algorithms can surpass human accuracy at matching faces over changes in illumination. The ranking of algorithms and humans by accuracy, however, does not provide information about whether algorithms and humans perform the task comparably or whether algorithms and humans can be fused to improve performance. In this paper, we fused humans and algorithms using partial least squares regression (PLSR). In the first experiment, we applied PLSR to face-pair similarity scores generated by seven algorithms participating in the Face Recognition Grand Challenge. The PLSR produced an optimal weighting of the similarity scores, which we tested for generality with a jackknife procedure. Fusing the algorithms' similarity scores using the optimal weights produced a twofold reduction in error rate over the most accurate algorithm. Next, human-subject-generated similarity scores were added to the PLSR analysis. Fusing humans and algorithms increased the performance to near-perfect classification accuracy. These results are discussed in terms of maximizing face-verification accuracy with hybrid systems consisting of multiple algorithms and humans.

  10. Evidence-based algorithm for heparin dosing before cardiopulmonary bypass. Part 1: Development of the algorithm.

    PubMed

    McKinney, Mark C; Riley, Jeffrey B

    2007-12-01

    The incidence of heparin resistance during adult cardiac surgery with cardiopulmonary bypass has been reported at 15%-20%. The consistent use of a clinical decision-making algorithm may increase the consistency of patient care and likely reduce the total required heparin dose and other problems associated with heparin dosing. After a directed survey of practicing perfusionists regarding treatment of heparin resistance and a literature search for high-level evidence regarding the diagnosis and treatment of heparin resistance, an evidence-based decision-making algorithm was constructed. The face validity of the algorithm's decisive steps and logic was confirmed by a second survey of practicing perfusionists. The algorithm begins with a review of the patient history to identify predictors of heparin resistance. The definition of heparin resistance contained in the algorithm is an activated clotting time < 450 seconds with > 450 IU/kg heparin loading dose. Based on the literature, the treatment for heparin resistance used in the algorithm is antithrombin III supplementation. The algorithm appears to be valid and is supported by high-level evidence and clinician opinion. The next step is a human randomized clinical trial to test the clinical procedure guideline algorithm vs. current standard clinical practice.

  11. Decryption of pure-position permutation algorithms.

    PubMed

    Zhao, Xiao-Yu; Chen, Gang; Zhang, Dan; Wang, Xiao-Hong; Dong, Guang-Chang

    2004-07-01

    Pure position permutation image encryption algorithms, commonly used for image encryption and investigated in this work, are unfortunately frail under known-plaintext attack. In view of this weakness of pure position permutation algorithms, we put forward an effective decryption algorithm for all pure-position permutation algorithms. First, a summary of pure position permutation image encryption algorithms is given by introducing the concept of ergodic matrices. Then, using probability theory and algebraic principles, the decryption probability of pure-position permutation algorithms is verified theoretically; and then, by defining the operation system of fuzzy ergodic matrices, we improve a specific decryption algorithm. Finally, some simulation results are shown.

  12. The Superior Lambert Algorithm

    NASA Astrophysics Data System (ADS)

    der, G.

    2011-09-01

    Lambert algorithms are used extensively for initial orbit determination, mission planning, space debris correlation, and missile targeting, just to name a few applications. Due to the significance of the Lambert problem in Astrodynamics, Gauss, Battin, Godal, Lancaster, Gooding, Sun and many others (References 1 to 15) have provided numerous formulations leading to various analytic solutions and iterative methods. Most Lambert algorithms and their computer programs can only work within one revolution, break down or converge slowly when the transfer angle is near zero or 180 degrees, and their multi-revolution limitations are either ignored or barely addressed. Despite claims of robustness, many Lambert algorithms fail without notice, and the users seldom have a clue why. The DerAstrodynamics lambert2 algorithm, which is based on the analytic solution formulated by Sun, works for any number of revolutions and converges rapidly at any transfer angle. It provides significant capability enhancements over every other Lambert algorithm in use today. These include improved speed, accuracy, robustness, and multirevolution capabilities as well as implementation simplicity. Additionally, the lambert2 algorithm provides a powerful tool for solving the angles-only problem without artificial singularities (pointed out by Gooding in Reference 16), which involves 3 lines of sight captured by optical sensors, or systems such as the Air Force Space Surveillance System (AFSSS). The analytic solution is derived from the extended Godal’s time equation by Sun, while the iterative method of solution is that of Laguerre, modified for robustness. The Keplerian solution of a Lambert algorithm can be extended to include the non-Keplerian terms of the Vinti algorithm via a simple targeting technique (References 17 to 19). Accurate analytic non-Keplerian trajectories can be predicted for satellites and ballistic missiles, while performing at least 100 times faster in speed than most

  13. The Texas Medication Algorithm Project antipsychotic algorithm for schizophrenia: 2006 update.

    PubMed

    Moore, Troy A; Buchanan, Robert W; Buckley, Peter F; Chiles, John A; Conley, Robert R; Crismon, M Lynn; Essock, Susan M; Finnerty, Molly; Marder, Stephen R; Miller, Del D; McEvoy, Joseph P; Robinson, Delbert G; Schooler, Nina R; Shon, Steven P; Stroup, T Scott; Miller, Alexander L

    2007-11-01

    A panel of academic psychiatrists and pharmacists, clinicians from the Texas public mental health system, advocates, and consumers met in June 2006 in Dallas, Tex., to review recent evidence in the pharmacologic treatment of schizophrenia. The goal of the consensus conference was to update and revise the Texas Medication Algorithm Project (TMAP) algorithm for schizophrenia used in the Texas Implementation of Medication Algorithms, a statewide quality assurance program for treatment of major psychiatric illness. Four questions were identified via premeeting teleconferences. (1) Should antipsychotic treatment of first-episode schizophrenia be different from that of multiepisode schizophrenia? (2) In which algorithm stages should first-generation antipsychotics (FGAs) be an option? (3) How many antipsychotic trials should precede a clozapine trial? (4) What is the status of augmentation strategies for clozapine? Subgroups reviewed the evidence in each area and presented their findings at the conference. The algorithm was updated to incorporate the following recommendations. (1) Persons with first-episode schizophrenia typically require lower antipsychotic doses and are more sensitive to side effects such as weight gain and extrapyramidal symptoms (group consensus). Second-generation antipsychotics (SGAs) are preferred for treatment of first-episode schizophrenia (majority opinion). (2) FGAs should be included in algorithm stages after first episode that include SGAs other than clozapine as options (group consensus). (3) The recommended number of trials of other antipsychotics that should precede a clozapine trial is 2, but earlier use of clozapine should be considered in the presence of persistent problems such as suicidality, comorbid violence, and substance abuse (group consensus). (4) Augmentation is reasonable for persons with inadequate response to clozapine, but published results on augmenting agents have not identified replicable positive results (group

  14. Algorithms and Libraries

    NASA Technical Reports Server (NTRS)

    Dongarra, Jack

    1998-01-01

    This exploratory study initiated our inquiry into algorithms and applications that would benefit from a latency-tolerant approach to algorithm building, including the construction of new algorithms where appropriate. In a multithreaded execution, when a processor reaches a point where remote memory access is necessary, the request is sent out on the network and a context switch occurs to a new thread of computation. This effectively masks the long and unpredictable latency of remote loads, thereby providing tolerance to remote access latency. We began to develop standards to profile various algorithm and application parameters, such as the degree of parallelism, granularity, precision, instruction set mix, interprocessor communication, and latency. These tools will continue to develop and evolve as the Information Power Grid environment matures. To provide a richer context for this research, the project also focused on issues of fault tolerance and computation migration of numerical algorithms and software. During the initial phase we sought to increase our understanding of the bottlenecks in single-processor performance. Our work began by developing an approach for the automatic generation and optimization of numerical software for processors with deep memory hierarchies and pipelined functional units. Based on the results we achieved in this study, we plan to study other architectures of interest, including development of cost models and code generators appropriate to these architectures.

  15. An efficient parallel termination detection algorithm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baker, A. H.; Crivelli, S.; Jessup, E. R.

    2004-05-27

    Information local to any one processor is insufficient to monitor the overall progress of most distributed computations. Typically, a second distributed computation for detecting termination of the main computation is necessary. In order to be a useful computational tool, the termination detection routine must operate concurrently with the main computation, adding minimal overhead, and it must promptly and correctly detect termination when it occurs. In this paper, we present a new algorithm for detecting the termination of a parallel computation on distributed-memory MIMD computers that satisfies all of those criteria. A variety of termination detection algorithms have been devised. Of these, the algorithm presented by Sinha, Kale, and Ramkumar (henceforth, the SKR algorithm) is unique in its ability to adapt to the load conditions of the system on which it runs, thereby minimizing the impact of termination detection on performance. Because their algorithm also detects termination quickly, we consider it to be the most efficient practical algorithm presently available. The termination detection algorithm presented here was developed for use in the PMESC programming library for distributed-memory MIMD computers. Like the SKR algorithm, our algorithm adapts to system loads and imposes little overhead. Also like the SKR algorithm, ours is tree-based, and it does not depend on any assumptions about the physical interconnection topology of the processors or the specifics of the distributed computation. In addition, our algorithm is easier to implement and requires only half as many tree traversals as does the SKR algorithm. This paper is organized as follows. In section 2, we define our computational model. In section 3, we review the SKR algorithm. We introduce our new algorithm in section 4, and prove its correctness in section 5. We discuss its efficiency and present experimental results in section 6.

  16. Verifying a Computer Algorithm Mathematically.

    ERIC Educational Resources Information Center

    Olson, Alton T.

    1986-01-01

    Presents an example of mathematics from an algorithmic point of view, with emphasis on the design and verification of this algorithm. The program involves finding roots for algebraic equations using the half-interval search algorithm. The program listing is included. (JN)
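
    For reference, a minimal Python version of the half-interval (bisection) search the program is built around; the test polynomial is illustrative.

      def half_interval_root(f, a, b, tol=1e-10):
          # Half-interval (bisection) search: f must change sign on [a, b].
          if f(a) * f(b) > 0:
              raise ValueError("f(a) and f(b) must have opposite signs")
          while b - a > tol:
              m = (a + b) / 2.0
              if f(a) * f(m) <= 0:   # root lies in the left half
                  b = m
              else:                  # root lies in the right half
                  a = m
          return (a + b) / 2.0

      # Root of x^3 - 2x - 5 on [2, 3]: ~2.0945515
      print(half_interval_root(lambda x: x**3 - 2*x - 5, 2.0, 3.0))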

  17. Aerodynamic Loads at Mach Numbers from 0.70 to 2.22 on an Airplane Model Having a Wing and Canard of Triangular Plan Form and Either Single or Twin Vertical Tails. Supplement 2; Tabulated Data for the Model with Twin Vertical Tails

    NASA Technical Reports Server (NTRS)

    Peterson, Victor L.; Menees, Gene P.

    1961-01-01

    Tabulated results of a wind-tunnel investigation of the aerodynamic loads on a canard airplane model with twin vertical tails are presented for Mach numbers from 0.70 to 2.22. The Reynolds number for the measurements was 2.9 x 10(exp 6) based on the wing mean aerodynamic chord. The results include local static-pressure coefficients measured on the wing, body, and one of the vertical tails for angles of attack from -4 degrees to 16 degrees, angles of sideslip of 0 degrees and 5.3 degrees, and nominal canard deflections of 0 degrees and 10 degrees. Also included are section force and moment coefficients obtained from integrations of the local pressures and model-component force and moment coefficients obtained from integrations of the section coefficients. Geometric details of the model and the locations of the pressure orifices are shown. An index to the data contained herein is presented and definitions of nomenclature are given. Detailed descriptions of the model and experiments and a brief discussion of some of the results are given. Tabulated results of measurements of the aerodynamic loads on the same canard model but having a single vertical tail instead of twin vertical tails are presented.

  18. Adaptive reference update (ARU) algorithm. A stochastic search algorithm for efficient optimization of multi-drug cocktails

    PubMed Central

    2012-01-01

    Background Multi-target therapeutics has been shown to be effective for treating complex diseases, and currently, it is a common practice to combine multiple drugs to treat such diseases to optimize the therapeutic outcomes. However, considering the huge number of possible ways to mix multiple drugs at different concentrations, it is practically difficult to identify the optimal drug combination through exhaustive testing. Results In this paper, we propose a novel stochastic search algorithm, called the adaptive reference update (ARU) algorithm, that can provide an efficient and systematic way for optimizing multi-drug cocktails. The ARU algorithm iteratively updates the drug combination to improve its response, where the update is made by comparing the response of the current combination with that of a reference combination, based on which the beneficial update direction is predicted. The reference combination is continuously updated based on the drug response values observed in the past, thereby adapting to the underlying drug response function. To demonstrate the effectiveness of the proposed algorithm, we evaluated its performance based on various multi-dimensional drug functions and compared it with existing algorithms. Conclusions Simulation results show that the ARU algorithm significantly outperforms existing stochastic search algorithms, including the Gur Game algorithm. In fact, the ARU algorithm can more effectively identify potent drug combinations and it typically spends fewer iterations for finding effective combinations. Furthermore, the ARU algorithm is robust to random fluctuations and noise in the measured drug response, which makes the algorithm well-suited for practical drug optimization applications. PMID:23134742

  19. The PlusCal Algorithm Language

    NASA Astrophysics Data System (ADS)

    Lamport, Leslie

    Algorithms are different from programs and should not be described with programming languages. The only simple alternative to programming languages has been pseudo-code. PlusCal is an algorithm language that can be used right now to replace pseudo-code, for both sequential and concurrent algorithms. It is based on the TLA+ specification language, and a PlusCal algorithm is automatically translated to a TLA+ specification that can be checked with the TLC model checker and reasoned about formally.

  20. G/SPLINES: A hybrid of Friedman's Multivariate Adaptive Regression Splines (MARS) algorithm with Holland's genetic algorithm

    NASA Technical Reports Server (NTRS)

    Rogers, David

    1991-01-01

    G/SPLINES is a hybrid of Friedman's Multivariate Adaptive Regression Splines (MARS) algorithm with Holland's genetic algorithm. In this hybrid, the incremental search is replaced by a genetic search. The G/SPLINES algorithm exhibits performance comparable to that of the MARS algorithm, requires fewer least-squares computations, and allows significantly larger problems to be considered.

  1. Software For Genetic Algorithms

    NASA Technical Reports Server (NTRS)

    Wang, Lui; Bayer, Steve E.

    1992-01-01

    SPLICER computer program is genetic-algorithm software tool used to solve search and optimization problems. Provides underlying framework and structure for building genetic-algorithm application program. Written in Think C.

  2. Cloud Model Bat Algorithm

    PubMed Central

    Zhou, Yongquan; Xie, Jian; Li, Liangliang; Ma, Mingzhi

    2014-01-01

    Bat algorithm (BA) is a novel stochastic global optimization algorithm. Cloud model is an effective tool in transforming between qualitative concepts and their quantitative representation. Based on the bat echolocation mechanism and excellent characteristics of cloud model on uncertainty knowledge representation, a new cloud model bat algorithm (CBA) is proposed. This paper focuses on remodeling echolocation model based on living and preying characteristics of bats, utilizing the transformation theory of cloud model to depict the qualitative concept: “bats approach their prey.” Furthermore, Lévy flight mode and population information communication mechanism of bats are introduced to balance the advantage between exploration and exploitation. The simulation results show that the cloud model bat algorithm has good performance on functions optimization. PMID:24967425

  3. Distributed k-Means Algorithm and Fuzzy c-Means Algorithm for Sensor Networks Based on Multiagent Consensus Theory.

    PubMed

    Qin, Jiahu; Fu, Weiming; Gao, Huijun; Zheng, Wei Xing

    2016-03-03

    This paper is concerned with developing a distributed k-means algorithm and a distributed fuzzy c-means algorithm for wireless sensor networks (WSNs) where each node is equipped with sensors. The underlying topology of the WSN is supposed to be strongly connected. The consensus algorithm in multiagent consensus theory is utilized to exchange the measurement information of the sensors in the WSN. To obtain a faster convergence speed as well as a higher possibility of having the global optimum, a distributed k-means++ algorithm is first proposed to find the initial centroids before executing the distributed k-means algorithm and the distributed fuzzy c-means algorithm. The proposed distributed k-means algorithm is capable of partitioning the data observed by the nodes into measure-dependent groups which have small in-group and large out-group distances, while the proposed distributed fuzzy c-means algorithm is capable of partitioning the data observed by the nodes into different measure-dependent groups with degrees of membership values ranging from 0 to 1. Simulation results show that the proposed distributed algorithms can achieve almost the same results as those given by the centralized clustering algorithms.
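
    A minimal Python sketch of the consensus building block referred to above: each node repeatedly averages with its neighbors, and all values converge to the network-wide mean without a central collector. The ring topology and step size are illustrative assumptions, not the paper's algorithm.

      import numpy as np

      def consensus_mean(values, neighbors, rounds=100, step=0.3):
          # Each node nudges its value toward its neighbors' values;
          # for step < 1/max_degree all nodes converge to the global mean.
          x = np.array(values, dtype=float)
          for _ in range(rounds):
              x = x + step * np.array(
                  [sum(x[j] - x[i] for j in neighbors[i]) for i in range(len(x))])
          return x

      values = [1.0, 5.0, 3.0, 7.0]                  # one measurement per node
      ring = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
      print(consensus_mean(values, ring))            # all entries -> 4.0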

  4. Quantum algorithm for support matrix machines

    NASA Astrophysics Data System (ADS)

    Duan, Bojia; Yuan, Jiabin; Liu, Ying; Li, Dan

    2017-09-01

    We propose a quantum algorithm for support matrix machines (SMMs) that efficiently addresses an image classification problem by introducing a least-squares reformulation. This algorithm consists of two core subroutines: a quantum matrix inversion (Harrow-Hassidim-Lloyd, HHL) algorithm and a quantum singular value thresholding (QSVT) algorithm. The two algorithms can be implemented on a universal quantum computer with complexity O[log(npq)] and O[log(pq)], respectively, where n is the number of training data and pq is the size of the feature space. By iterating the algorithms, we can find the parameters for the SMM classification model. Our analysis shows that both the HHL and QSVT algorithms achieve an exponential speed-up over their classical counterparts.

  5. Evolutionary pattern search algorithms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hart, W.E.

    1995-09-19

    This paper defines a class of evolutionary algorithms called evolutionary pattern search algorithms (EPSAs) and analyzes their convergence properties. This class of algorithms is closely related to evolutionary programming, evolution strategies and real-coded genetic algorithms. EPSAs are self-adapting systems that modify the step size of the mutation operator in response to the success of previous optimization steps. The rule used to adapt the step size can be used to provide a stationary point convergence theory for EPSAs on any continuous function. This convergence theory is based on an extension of the convergence theory for generalized pattern search methods. An experimental analysis of the performance of EPSAs demonstrates that these algorithms can perform a level of global search that is comparable to that of canonical EAs. We also describe a stopping rule for EPSAs, which reliably terminated near stationary points in our experiments. This is the first stopping rule for any class of EAs that can terminate at a given distance from stationary points.
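
    Not the EPSA scheme itself, but a minimal Python sketch of the kind of success-driven step-size adaptation described above, in the style of the classic 1/5th success rule; all constants are illustrative.

      import random

      def success_adaptive_search(f, x0, sigma=1.0, iters=2000):
          # Grow the mutation step after a success, shrink it after a
          # failure; the 1.1/0.98 pair balances at roughly a 1/5 success
          # rate (1/5th-rule style), which is what drives convergence.
          x, fx = x0, f(x0)
          for _ in range(iters):
              y = x + random.gauss(0.0, sigma)
              fy = f(y)
              if fy < fx:
                  x, fx, sigma = y, fy, sigma * 1.1   # success: expand
              else:
                  sigma *= 0.98                        # failure: contract
          return x, fx, sigma

      print(success_adaptive_search(lambda x: (x - 3.0) ** 2, x0=-10.0))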

  6. Effects of visualization on algorithm comprehension

    NASA Astrophysics Data System (ADS)

    Mulvey, Matthew

    Computer science students are expected to learn and apply a variety of core algorithms that are an essential part of the field. Any one of these algorithms by itself is not necessarily extremely complex, but remembering the large variety of algorithms and the differences between them is challenging. To address this challenge, we present a novel algorithm visualization tool designed to enhance students' understanding of Dijkstra's algorithm by allowing them to discover the rules of the algorithm for themselves. It is hoped that a deeper understanding of the algorithm will help students correctly select, adapt and apply the appropriate algorithm when presented with a problem to solve, and that what is learned here will be applicable to the design of other visualization tools designed to teach different algorithms. Our visualization tool is currently in the prototype stage, and this thesis will discuss the pedagogical approach that informs its design, as well as the results of some initial usability testing. Finally, to clarify the direction for further development of the tool, four different variations of the prototype were implemented, and the instructional effectiveness of each was assessed by having a small sample of participants use the different versions of the prototype and then take a quiz to assess their comprehension of the algorithm.

  7. Idaho Percentile Results for the SBAC English Language Arts and Mathematics Tests, 2015-2017, Grades 3-8 and 10

    ERIC Educational Resources Information Center

    Stoneberg, Bert D.

    2018-01-01

    Idaho uses the English Language Arts and Mathematics tests from the Smarter Balanced Assessment Consortium (SBAC) for the Idaho Standard Achievement Tests. ISAT results have been reported almost exclusively as "percent proficient or above" statistics (i.e., the percentage of Idaho students who performed at the "A" level). This…

  8. Idaho Region IV Fourth-Grade Teachers' Perceptions about the Educational Influence of Idaho State Achievement Standards and the Idaho State Achievement Tests

    ERIC Educational Resources Information Center

    Wiggins, Annette Marie

    2010-01-01

    The purpose of this study was to explore Idaho Region IV fourth-grade teachers' perceptions regarding the educational influence of Idaho State Achievement Standards and the Idaho Standards Achievement Tests (ISAT) in language usage, reading, and math. Differences between subgroups based on teacher/school demographics, specifically, teachers'…

  9. Two Meanings of Algorithmic Mathematics.

    ERIC Educational Resources Information Center

    Maurer, Stephen B.

    1984-01-01

    Two mathematical topics are interpreted from the viewpoints of traditional (performing algorithms) and contemporary (creating algorithms and thinking in terms of them for solving problems and developing theory) algorithmic mathematics. The two topics are Horner's method for evaluating polynomials and Gauss's method for solving systems of linear…

  10. Recursive Branching Simulated Annealing Algorithm

    NASA Technical Reports Server (NTRS)

    Bolcar, Matthew; Smith, J. Scott; Aronstein, David

    2012-01-01

    This innovation is a variation of a simulated-annealing optimization algorithm that uses a recursive-branching structure to parallelize the search of a parameter space for the globally optimal solution to an objective. The algorithm has been demonstrated to be more effective at searching a parameter space than traditional simulated-annealing methods for a particular problem of interest, and it can readily be applied to a wide variety of optimization problems, including those with a parameter space having both discrete-value parameters (combinatorial) and continuous-variable parameters. It can take the place of a conventional simulated-annealing, Monte Carlo, or random-walk algorithm. In a conventional simulated-annealing (SA) algorithm, a starting configuration is randomly selected within the parameter space. The algorithm randomly selects another configuration from the parameter space and evaluates the objective function for that configuration. If the objective function value is better than the previous value, the new configuration is adopted as the new point of interest in the parameter space. If the objective function value is worse than the previous value, the new configuration may be adopted, with a probability determined by a temperature parameter, used in analogy to annealing in metals. As the optimization continues, the region of the parameter space from which new configurations can be selected shrinks, and in conjunction with lowering the annealing temperature (and thus lowering the probability for adopting configurations in parameter space with worse objective functions), the algorithm can converge on the globally optimal configuration. The Recursive Branching Simulated Annealing (RBSA) algorithm shares some features with the SA algorithm, notably including the basic principles that a starting configuration is randomly selected from within the parameter space, the algorithm tests other configurations with the goal of finding the globally optimal
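
    A minimal Python sketch of the conventional SA loop described above (random start, probabilistic acceptance of worse moves, shrinking search region and temperature); it is not the RBSA innovation itself, and all schedules and constants are illustrative.

      import math, random

      def simulated_annealing(objective, lo, hi, steps=20000):
          x = random.uniform(lo, hi)                  # random starting point
          fx = objective(x)
          for k in range(steps):
              t = 1.0 - k / steps                     # temperature, 1 -> 0
              radius = (hi - lo) * max(t, 1e-3)       # shrinking search region
              y = min(hi, max(lo, x + random.uniform(-radius, radius)))
              fy = objective(y)
              # Accept better moves always; worse moves with probability
              # exp(-delta/t), in analogy to annealing in metals.
              if fy < fx or random.random() < math.exp(-(fy - fx) / max(t, 1e-9)):
                  x, fx = y, fy
          return x, fx

      # Multimodal test: global minimum of x^2 + 10*sin(3x), near x ~ -0.5
      print(simulated_annealing(lambda x: x * x + 10 * math.sin(3 * x), -10, 10))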

  11. Estimation of TOA based MUSIC algorithm and cross correlation algorithm of appropriate interval

    NASA Astrophysics Data System (ADS)

    Lin, Wei; Liu, Jun; Zhou, Yineng; Huang, Jiyan

    2017-03-01

    Localization of a mobile station (MS) has gained considerable attention due to its wide applications in military, environmental, health and commercial systems. The phase angle and the encoded data of the MSK system model are two critical parameters in the time-of-arrival (TOA) localization technique; nevertheless, precise values of the phase angle and encoded data are not easy to obtain in general. To match the actual situation, we consider the case in which the phase angle and encoded data are unknown. In this paper, a novel TOA localization method, which combines the MUSIC algorithm and the cross-correlation algorithm over an appropriate interval, is proposed. Simulations show that the proposed method performs better than either the MUSIC algorithm or the cross-correlation algorithm applied over the whole interval.
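
    A minimal Python sketch of the cross-correlation half of such a TOA estimator: the delay is the lag that maximizes the correlation between transmitted and received signals. The signal parameters are illustrative, and the MUSIC stage is not shown.

      import numpy as np

      def toa_by_cross_correlation(tx, rx, fs):
          # The lag maximizing |corr(rx, tx)| is the delay estimate.
          corr = np.correlate(rx, tx, mode="full")
          lag = np.argmax(np.abs(corr)) - (len(tx) - 1)
          return lag / fs

      fs = 1e6                                    # 1 MHz sampling (assumed)
      t = np.arange(256) / fs
      tx = np.sin(2 * np.pi * 50e3 * t)           # 50 kHz burst
      rx = np.concatenate([np.zeros(40), tx]) + 0.05 * np.random.randn(296)
      print(toa_by_cross_correlation(tx, rx, fs)) # ~ 40/fs = 40 microseconds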

  12. An overview of smart grid routing algorithms

    NASA Astrophysics Data System (ADS)

    Wang, Junsheng; OU, Qinghai; Shen, Haijuan

    2017-08-01

    This paper surveys typical routing algorithms for the smart grid by analyzing the communication services and communication requirements of the intelligent grid. Two kinds of routing algorithms are analyzed, namely clustering routing algorithms and flat (non-clustering) routing algorithms, and the advantages, disadvantages and applicability of each kind are discussed.

  13. Solar Occultation Retrieval Algorithm Development

    NASA Technical Reports Server (NTRS)

    Lumpe, Jerry D.

    2004-01-01

    This effort addresses the comparison and validation of currently operational solar occultation retrieval algorithms, and the development of generalized algorithms for future application to multiple platforms. Work in the first quarter included initial development of generalized forward-model algorithms capable of simulating transmission data from the POAM II/III and SAGE II/III instruments. Work in the second quarter will focus on completion of the forward-model algorithms, including accurate spectral characteristics for all instruments, and comparison of simulated transmission data with actual level 1 instrument data for specific occultation events.

  14. A Parametric k-Means Algorithm

    PubMed Central

    Tarpey, Thaddeus

    2007-01-01

    Summary The k points that optimally represent a distribution (usually in terms of a squared error loss) are called the k principal points. This paper presents a computationally intensive method that automatically determines the principal points of a parametric distribution. Cluster means from the k-means algorithm are nonparametric estimators of principal points. A parametric k-means approach is introduced for estimating principal points by running the k-means algorithm on a very large simulated data set from a distribution whose parameters are estimated using maximum likelihood. Theoretical and simulation results are presented comparing the parametric k-means algorithm to the usual k-means algorithm and an example on determining sizes of gas masks is used to illustrate the parametric k-means algorithm. PMID:17917692
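
    A minimal Python sketch of the parametric k-means recipe described above for a normal model: estimate parameters by maximum likelihood, simulate a large data set from the fitted distribution, and run k-means on it. For a standard normal and k = 2, the principal points are +/- sqrt(2/pi) ~ +/- 0.80, which the sketch approximately recovers.

      import numpy as np
      from scipy.cluster.vq import kmeans

      def parametric_k_principal_points(sample, k, n_sim=200000, seed=0):
          # MLE for a normal: sample mean and (population) standard deviation.
          rng = np.random.default_rng(seed)
          mu, sigma = sample.mean(), sample.std()
          simulated = rng.normal(mu, sigma, n_sim)   # very large simulated set
          centroids, _ = kmeans(simulated.reshape(-1, 1), k)
          return np.sort(centroids.ravel())          # estimated principal points

      rng = np.random.default_rng(1)
      data = rng.normal(0.0, 1.0, 500)
      print(parametric_k_principal_points(data, 2))  # ~ [-0.80, 0.80]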

  15. Analysis of an Optimized MLOS Tomographic Reconstruction Algorithm and Comparison to the MART Reconstruction Algorithm

    NASA Astrophysics Data System (ADS)

    La Foy, Roderick; Vlachos, Pavlos

    2011-11-01

    An optimally designed MLOS tomographic reconstruction algorithm for use in 3D PIV and PTV applications is analyzed. Using a set of optimized reconstruction parameters, the reconstructions produced by the MLOS algorithm are shown to be comparable to reconstructions produced by the MART algorithm for a range of camera geometries, camera numbers, and particle seeding densities. The resultant velocity field error calculated using PIV and PTV algorithms is further minimized by applying both pre- and post-processing to the reconstructed data sets.

  16. DNABIT Compress – Genome compression algorithm

    PubMed Central

    Rajarajeswari, Pothuraju; Apparao, Allam

    2011-01-01

    Data compression is concerned with how information is organized in data. Efficient storage means removal of redundancy from the data being stored in the DNA molecule. Data compression algorithms remove redundancy and are used to understand biologically important molecules. We present a compression algorithm, "DNABIT Compress", for DNA sequences based on a novel scheme of assigning binary bits to smaller segments of DNA bases to compress both repetitive and non-repetitive DNA sequences. Our proposed algorithm achieves the best compression ratio for DNA sequences for large genomes. Significantly better compression results show that "DNABIT Compress" is the best among the existing compression algorithms. While achieving the best compression ratios for DNA sequences (genomes), our new DNABIT Compress algorithm significantly improves the running time of all previous DNA compression programs. Assigning binary bits (unique bit codes) to fragments of a DNA sequence (exact repeats, reverse repeats) is also a unique concept introduced in this algorithm for the first time in DNA compression. This proposed new algorithm achieves a compression ratio as low as 1.58 bits/base, where the existing best methods could not achieve a ratio below 1.72 bits/base.
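
    For orientation, a minimal Python sketch of the underlying idea of assigning short bit codes to bases (2 bits/base); this is a baseline only, not the DNABIT Compress code assignment, which additionally gives unique bit codes to exact-repeat and reverse-repeat fragments.

      # Illustrative 2-bit base packing; the 1.58 bits/base figure above
      # comes from coding repeated fragments more cleverly than this.
      CODE = {"A": 0b00, "C": 0b01, "G": 0b10, "T": 0b11}
      BASE = {v: k for k, v in CODE.items()}

      def pack(seq):
          bits = 0
          for b in seq:
              bits = (bits << 2) | CODE[b]   # append 2 bits per base
          return bits, len(seq)

      def unpack(bits, n):
          return "".join(BASE[(bits >> 2 * (n - 1 - i)) & 0b11] for i in range(n))

      packed, n = pack("ACGTACGT")
      assert unpack(packed, n) == "ACGTACGT"   # 8 bases in 16 bits = 2 bits/base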

  17. Smell Detection Agent Based Optimization Algorithm

    NASA Astrophysics Data System (ADS)

    Vinod Chandra, S. S.

    2016-09-01

    In this paper, a novel nature-inspired optimization algorithm is employed: the trained behaviour of dogs in detecting smell trails is adapted into computational agents for problem solving. The algorithm involves creation of a surface with smell trails and subsequent iteration of the agents in resolving a path. It can be applied under different computational constraints that incorporate path-based problems, and its implementation can be treated as a shortest-path problem for a variety of datasets. The simulated agents have been used to evolve the shortest path between two nodes in a graph. The algorithm is useful for NP-hard problems related to path discovery as well as for many practical optimization problems, and its derivation extends to shortest-path problems generally.

  18. Quantum algorithm for linear regression

    NASA Astrophysics Data System (ADS)

    Wang, Guoming

    2017-07-01

    We present a quantum algorithm for fitting a linear regression model to a given data set using the least-squares approach. Unlike previous algorithms, which yield a quantum state encoding the optimal parameters, our algorithm outputs these numbers in classical form. So by running it once, one completely determines the fitted model and can then use it to make predictions on new data at little cost. Moreover, our algorithm works in the standard oracle model and can handle data sets with nonsparse design matrices. It runs in time poly(log2(N), d, κ, 1/ε), where N is the size of the data set, d is the number of adjustable parameters, κ is the condition number of the design matrix, and ε is the desired precision in the output. We also show that the polynomial dependence on d and κ is necessary; thus, our algorithm cannot be significantly improved. Furthermore, we give a quantum algorithm that estimates the quality of the least-squares fit (without computing its parameters explicitly). This algorithm runs faster than the one for finding the fit, and can be used to check whether the given data set qualifies for linear regression in the first place.
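
    For scale, the classical least-squares computation that the quantum algorithm accelerates, as a short NumPy sketch with illustrative data.

      import numpy as np

      # Classical baseline: given design matrix A and observations b,
      # the fitted parameters solve min ||A theta - b||_2.
      rng = np.random.default_rng(0)
      A = rng.standard_normal((100, 3))            # N = 100 points, d = 3
      theta_true = np.array([2.0, -1.0, 0.5])
      b = A @ theta_true + 0.01 * rng.standard_normal(100)
      theta, *_ = np.linalg.lstsq(A, b, rcond=None)
      print(theta)                                 # close to theta_true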

  19. Contact solution algorithms

    NASA Technical Reports Server (NTRS)

    Tielking, John T.

    1989-01-01

    Two algorithms for obtaining static contact solutions are described in this presentation. Although they were derived for contact problems involving specific structures (a tire and a solid rubber cylinder), they are sufficiently general to be applied to other shell-of-revolution and solid-body contact problems. The shell-of-revolution contact algorithm is a method of obtaining a point load influence coefficient matrix for the portion of shell surface that is expected to carry a contact load. If the shell is sufficiently linear with respect to contact loading, a single influence coefficient matrix can be used to obtain a good approximation of the contact pressure distribution. Otherwise, the matrix will be updated to reflect nonlinear load-deflection behavior. The solid-body contact algorithm utilizes a Lagrange multiplier to include the contact constraint in a potential energy functional. The solution is found by applying the principle of minimum potential energy. The Lagrange multiplier is identified as the contact load resultant for a specific deflection. At present, only frictionless contact solutions have been obtained with these algorithms. A sliding tread element has been developed to calculate friction shear force in the contact region of the rolling shell-of-revolution tire model.

  20. Synthesis of Greedy Algorithms Using Dominance Relations

    NASA Technical Reports Server (NTRS)

    Nedunuri, Srinivas; Smith, Douglas R.; Cook, William R.

    2010-01-01

    Greedy algorithms exploit problem structure and constraints to achieve linear-time performance. Yet there is still no completely satisfactory way of constructing greedy algorithms. For example, the Greedy Algorithm of Edmonds depends upon translating a problem into an algebraic structure called a matroid, but the existence of such a translation can be as hard to determine as the existence of a greedy algorithm itself. An alternative characterization of greedy algorithms is in terms of dominance relations, a well-known algorithmic technique used to prune search spaces. We demonstrate a process by which dominance relations can be methodically derived for a number of greedy algorithms, including activity selection and prefix-free codes. By incorporating our approach into an existing framework for algorithm synthesis, we demonstrate that it could be the basis for an effective engineering method for greedy algorithms. We also compare our approach with other characterizations of greedy algorithms.

  1. Algorithm for Compressing Time-Series Data

    NASA Technical Reports Server (NTRS)

    Hawkins, S. Edward, III; Darlington, Edward Hugo

    2012-01-01

    An algorithm based on Chebyshev polynomials effects lossy compression of time-series data or other one-dimensional data streams (e.g., spectral data) that are arranged in blocks for sequential transmission. The algorithm was developed for use in transmitting data from spacecraft scientific instruments to Earth stations. In spite of its lossy nature, the algorithm preserves the information needed for scientific analysis. The algorithm is computationally simple, yet compresses data streams by factors much greater than two. The algorithm is not restricted to spacecraft or scientific uses: it is applicable to time-series data in general. The algorithm can also be applied to general multidimensional data that have been converted to time-series data, a typical example being image data acquired by raster scanning. However, unlike most prior image-data-compression algorithms, this algorithm neither depends on nor exploits the two-dimensional spatial correlations that are generally present in images. In order to understand the essence of this compression algorithm, it is necessary to understand that the net effect of this algorithm and the associated decompression algorithm is to approximate the original stream of data as a sequence of finite series of Chebyshev polynomials. For the purpose of this algorithm, a block of data or interval of time for which a Chebyshev polynomial series is fitted to the original data is denoted a fitting interval. Chebyshev approximation has two properties that make it particularly effective for compressing serial data streams with minimal loss of scientific information: The errors associated with a Chebyshev approximation are nearly uniformly distributed over the fitting interval (this is known in the art as the "equal error property"); and the maximum deviations of the fitted Chebyshev polynomial from the original data have the smallest possible values (this is known in the art as the "min-max property").
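
    A minimal NumPy sketch of the scheme described above: fit a low-degree Chebyshev series per fitting interval and keep only the coefficients; block length and degree here are illustrative.

      import numpy as np
      from numpy.polynomial import chebyshev as C

      def compress_blocks(stream, block_len=64, degree=7):
          # Each fitting interval of 64 samples becomes 8 coefficients,
          # an 8x compression for smooth data.
          x = np.linspace(-1.0, 1.0, block_len)
          return [C.chebfit(x, stream[i:i + block_len], degree)
                  for i in range(0, len(stream) - block_len + 1, block_len)]

      def decompress_blocks(coeff_blocks, block_len=64):
          x = np.linspace(-1.0, 1.0, block_len)
          return np.concatenate([C.chebval(x, c) for c in coeff_blocks])

      t = np.linspace(0, 4, 256)
      stream = np.sin(2 * np.pi * t) + 0.3 * t       # smooth telemetry-like data
      blocks = compress_blocks(stream)               # 256 samples -> 4 x 8 coeffs
      # Max reconstruction error stays small for smooth data:
      print(np.max(np.abs(stream - decompress_blocks(blocks))))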

  2. Novel medical image enhancement algorithms

    NASA Astrophysics Data System (ADS)

    Agaian, Sos; McClendon, Stephen A.

    2010-01-01

    In this paper, we present two novel medical image enhancement algorithms. The first, a global image enhancement algorithm, utilizes an alpha-trimmed mean filter as its backbone to sharpen images. The second algorithm uses a cascaded unsharp masking technique to separate the high frequency components of an image in order for them to be enhanced using a modified adaptive contrast enhancement algorithm. Experimental results from enhancing electron microscopy, radiological, CT scan and MRI scan images, using the MATLAB environment, are then compared to the original images as well as other enhancement methods, such as histogram equalization and two forms of adaptive contrast enhancement. An image processing scheme for electron microscopy images of Purkinje cells will also be implemented and utilized as a comparison tool to evaluate the performance of our algorithm.
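
    A minimal Python sketch of the first method's core step as described: smooth with an alpha-trimmed mean filter, then add back the high-frequency residual (unsharp masking). Window size, trim count, and amount are illustrative assumptions.

      import numpy as np
      from scipy.ndimage import generic_filter

      def alpha_trimmed_mean(window, trim=2):
          # Mean of the window after discarding the `trim` smallest and
          # largest values; robust to impulse noise.
          w = np.sort(window)
          return w[trim:len(w) - trim].mean()

      def unsharp_enhance(img, amount=1.0):
          smooth = generic_filter(img.astype(float), alpha_trimmed_mean, size=3)
          return img + amount * (img - smooth)   # boost the residual detail

      img = np.zeros((32, 32)); img[8:24, 8:24] = 1.0   # toy edge image
      sharp = unsharp_enhance(img)
      print(float(sharp.max()))   # > 1.0: edges overshoot, i.e., sharpened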

  3. Tactical Synthesis Of Efficient Global Search Algorithms

    NASA Technical Reports Server (NTRS)

    Nedunuri, Srinivas; Smith, Douglas R.; Cook, William R.

    2009-01-01

    Algorithm synthesis transforms a formal specification into an efficient algorithm to solve a problem. Algorithm synthesis in Specware combines the formal specification of a problem with a high-level algorithm strategy. To derive an efficient algorithm, a developer must define operators that refine the algorithm by combining the generic operators in the algorithm with the details of the problem specification. This derivation requires skill and a deep understanding of the problem and the algorithmic strategy. In this paper we introduce two tactics to ease this process. The tactics serve a similar purpose to tactics used for determining indefinite integrals in calculus, that is, suggesting possible ways to attack the problem.

  4. Trees, bialgebras and intrinsic numerical algorithms

    NASA Technical Reports Server (NTRS)

    Crouch, Peter; Grossman, Robert; Larson, Richard

    1990-01-01

    Preliminary work about intrinsic numerical integrators evolving on groups is described. Fix a finite dimensional Lie group G; let g denote its Lie algebra, and let Y(sub 1),...,Y(sub N) denote a basis of g. A class of numerical algorithms is presented that approximate solutions to differential equations evolving on G of the form: dot-x(t) = F(x(t)), x(0) = p is an element of G. The algorithms depend upon constants c(sub i) and c(sub ij), for i = 1,...,k and j is less than i. The algorithms have the property that if the algorithm starts on the group, then it remains on the group. In addition, they also have the property that if G is the abelian group R(N), then the algorithm becomes the classical Runge-Kutta algorithm. The Cayley algebra generated by labeled, ordered trees is used to generate the equations that the coefficients c(sub i) and c(sub ij) must satisfy in order for the algorithm to yield an rth order numerical integrator and to analyze the resulting algorithms.
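
    The abelian special case mentioned above, for reference: on G = R^N the scheme reduces to classical Runge-Kutta, e.g. the familiar fourth-order step below, whose coefficients satisfy the order conditions that the labeled-tree calculus generates. The example equation is illustrative.

      import numpy as np

      def rk4_step(F, x, h):
          # Classical fourth-order Runge-Kutta step for dot-x = F(x).
          k1 = F(x)
          k2 = F(x + 0.5 * h * k1)
          k3 = F(x + 0.5 * h * k2)
          k4 = F(x + h * k3)
          return x + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

      # dot-x = -x from x(0) = 1: one step of size 0.1 vs exp(-0.1)
      print(rk4_step(lambda x: -x, np.array([1.0]), 0.1), np.exp(-0.1))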

  5. Empirical study of parallel LRU simulation algorithms

    NASA Technical Reports Server (NTRS)

    Carr, Eric; Nicol, David M.

    1994-01-01

    This paper reports on the performance of five parallel algorithms for simulating a fully associative cache operating under the LRU (Least-Recently-Used) replacement policy. Three of the algorithms are SIMD, and are implemented on the MasPar MP-2 architecture. The other two algorithms are parallelizations of an efficient serial algorithm on the Intel Paragon. One SIMD algorithm is quite simple, but its cost is linear in the cache size. The two other SIMD algorithms are more complex, but have costs that are independent of the cache size. Both the second and third SIMD algorithms compute all stack distances; the second SIMD algorithm is completely general, whereas the third SIMD algorithm presumes and takes advantage of bounds on the range of reference tags. Both MIMD algorithms implemented on the Paragon are general and compute all stack distances; they differ in one step that may affect their respective scalability. We assess the strengths and weaknesses of these algorithms as a function of problem size and characteristics, and compare their performance on traces derived from execution of three SPEC benchmark programs.
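
    For context, a minimal Python version of the serial LRU stack-distance computation that such simulators parallelize: a reference hits in a fully associative LRU cache of size C exactly when its stack distance is at most C.

      def lru_stack_distances(trace):
          # Stack distance = depth of the address in the LRU stack
          # (most recent on top); infinity marks a cold (first) reference.
          stack, dists = [], []
          for addr in trace:
              if addr in stack:
                  dists.append(len(stack) - stack.index(addr))
                  stack.remove(addr)
              else:
                  dists.append(float("inf"))      # cold miss
              stack.append(addr)                  # addr becomes most recent
          return dists

      print(lru_stack_distances(["a", "b", "c", "b", "a", "a"]))
      # [inf, inf, inf, 2, 3, 1]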

  6. Firefly algorithm with chaos

    NASA Astrophysics Data System (ADS)

    Gandomi, A. H.; Yang, X.-S.; Talatahari, S.; Alavi, A. H.

    2013-01-01

    A recently developed metaheuristic optimization algorithm, firefly algorithm (FA), mimics the social behavior of fireflies based on the flashing and attraction characteristics of fireflies. In the present study, we will introduce chaos into FA so as to increase its global search mobility for robust global optimization. Detailed studies are carried out on benchmark problems with different chaotic maps. Here, 12 different chaotic maps are utilized to tune the attractive movement of the fireflies in the algorithm. The results show that some chaotic FAs can clearly outperform the standard FA.
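
    A minimal Python sketch of one way to inject chaos into FA as described: drive a parameter (here the light-absorption coefficient gamma) with a logistic map instead of holding it fixed. The specific map/parameter pairing and all constants are illustrative assumptions, not the paper's tuned variants.

      import numpy as np

      def chaotic_firefly_min(f, dim, n=25, iters=200, seed=0):
          rng = np.random.default_rng(seed)
          X = rng.uniform(-5, 5, (n, dim))
          F = np.array([f(x) for x in X])
          beta0, alpha, g = 1.0, 0.2, 0.7           # g: chaotic state in (0,1)
          for _ in range(iters):
              g = 4.0 * g * (1.0 - g)               # logistic map, r = 4
              gamma = 0.1 + 2.0 * g                 # chaos-tuned absorption
              for i in range(n):
                  for j in range(n):
                      if F[j] < F[i]:               # move i toward brighter j
                          r2 = np.sum((X[i] - X[j]) ** 2)
                          beta = beta0 * np.exp(-gamma * r2)
                          X[i] += beta * (X[j] - X[i]) + alpha * rng.uniform(-0.5, 0.5, dim)
                          F[i] = f(X[i])
          return X[np.argmin(F)], F.min()

      sphere = lambda x: float(np.sum(x ** 2))
      print(chaotic_firefly_min(sphere, dim=3))     # converges near the origin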

  7. NOSS Altimeter Detailed Algorithm specifications

    NASA Technical Reports Server (NTRS)

    Hancock, D. W.; Mcmillan, J. D.

    1982-01-01

    The details of the algorithms and data sets required for satellite radar altimeter data processing are documented in a form suitable for (1) development of the benchmark software and (2) coding the operational software. The algorithms reported in detail are those established for altimeter processing. The algorithms which required some additional development before documenting for production were only scoped. The algorithms are divided into two levels of processing. The first level converts the data to engineering units and applies corrections for instrument variations. The second level provides geophysical measurements derived from altimeter parameters for oceanographic users.

  8. Algorithmic complexity of quantum capacity

    NASA Astrophysics Data System (ADS)

    Oskouei, Samad Khabbazi; Mancini, Stefano

    2018-04-01

    We analyze the notion of quantum capacity from the perspective of algorithmic (descriptive) complexity. To this end, we resort to the concept of semi-computability in order to describe quantum states and quantum channel maps. We introduce algorithmic entropies (like algorithmic quantum coherent information) and derive relevant properties for them. Then we show that quantum capacity based on semi-computable concept equals the entropy rate of algorithmic coherent information, which in turn equals the standard quantum capacity. Thanks to this, we finally prove that the quantum capacity, for a given semi-computable channel, is limit computable.

  9. Families of Graph Algorithms: SSSP Case Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kanewala Appuhamilage, Thejaka Amila Jay; Zalewski, Marcin J.; Lumsdaine, Andrew

    2017-08-28

    Single-Source Shortest Paths (SSSP) is a well-studied graph problem. Examples of SSSP algorithms include the original Dijkstra’s algorithm and the parallel Δ-stepping and KLA-SSSP algorithms. In this paper, we use a novel Abstract Graph Machine (AGM) model to show that all these algorithms share a common logic and differ from one another by the order in which they perform work. We use the AGM model to thoroughly analyze the family of algorithms that arises from the common logic. We start with the basic algorithm without any ordering (Chaotic), and then we derive the existing and new algorithms by methodically exploring semantic and spatial ordering of work. Our experimental results show that the newly derived algorithms show better performance than the existing distributed-memory parallel algorithms, especially at higher scales.

  10. Portable Health Algorithms Test System

    NASA Technical Reports Server (NTRS)

    Melcher, Kevin J.; Wong, Edmond; Fulton, Christopher E.; Sowers, Thomas S.; Maul, William A.

    2010-01-01

    A document discusses the Portable Health Algorithms Test (PHALT) System, which has been designed as a means for evolving the maturity and credibility of algorithms developed to assess the health of aerospace systems. Comprising an integrated hardware-software environment, the PHALT system allows systems health management algorithms to be developed in a graphical programming environment, to be tested and refined using system simulation or test data playback, and to be evaluated in a real-time hardware-in-the-loop mode with a live test article. The integrated hardware and software development environment provides a seamless transition from algorithm development to real-time implementation. The portability of the hardware makes it quick and easy to transport between test facilities. This hardware/software architecture is flexible enough to support a variety of diagnostic applications and test hardware, and the GUI-based rapid prototyping capability is sufficient to support development, execution, and testing of custom diagnostic algorithms. The PHALT operating system supports execution of diagnostic algorithms under real-time constraints. PHALT can perform real-time capture and playback of test rig data with the ability to augment/modify the data stream (e.g., inject simulated faults). It performs algorithm testing using a variety of data input sources, including real-time data acquisition, test data playback, and system simulations, and also provides system feedback to evaluate closed-loop diagnostic response and mitigation control.

  11. Evaluation of a Web-Based App Demonstrating an Exclusionary Algorithmic Approach to TNM Cancer Staging

    PubMed Central

    2015-01-01

    Background TNM staging plays a critical role in the evaluation and management of a range of different types of cancers. The conventional combinatorial approach to the determination of an anatomic stage relies on the identification of distinct tumor (T), node (N), and metastasis (M) classifications to generate a TNM grouping. This process is inherently inefficient due to the need for scrupulous review of the criteria specified for each classification to ensure accurate assignment. An exclusionary approach to TNM staging based on sequential constraint of options may serve to minimize the number of classifications that need to be reviewed to accurately determine an anatomic stage. Objective Our aim was to evaluate the usability and utility of a Web-based app configured to demonstrate an exclusionary approach to TNM staging. Methods Internal medicine residents, surgery residents, and oncology fellows engaged in clinical training were asked to evaluate a Web-based app developed as an instructional aid incorporating (1) an exclusionary algorithm that polls tabulated classifications and sorts them into ranked order based on frequency counts, (2) reconfiguration of classification criteria to generate disambiguated yes/no questions that function as selection and exclusion prompts, and (3) a selectable grid of TNM groupings that provides dynamic graphic demonstration of the effects of sequentially selecting or excluding specific classifications. Subjects were asked to evaluate the performance of this app after completing exercises simulating the staging of different types of cancers encountered during training. Results Survey responses indicated high levels of agreement with statements supporting the usability and utility of this app. Subjects reported that its user interface provided a clear display with intuitive controls and that the exclusionary approach to TNM staging it demonstrated represented an efficient process of assignment that helped to clarify distinctions
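
    A minimal Python sketch of the exclusionary mechanics described above, on a made-up grouping table: classifications are ranked by frequency across the remaining TNM groupings, and each yes/no answer prunes the grid. The groupings and answers here are hypothetical, not from any staging manual.

      from collections import Counter

      groupings = {
          ("T1", "N0", "M0"): "I",   ("T2", "N0", "M0"): "II",
          ("T1", "N1", "M0"): "II",  ("T2", "N1", "M0"): "III",
          ("T3", "N0", "M0"): "III", ("T3", "N1", "M0"): "III",
      }

      def rank_classifications(remaining):
          # Poll the remaining groupings; most frequent classifications first.
          counts = Counter(c for combo in remaining for c in combo)
          return [c for c, _ in counts.most_common()]

      def exclude(remaining, classification):
          # A "no" answer removes every grouping containing the classification.
          return {k: v for k, v in remaining.items() if classification not in k}

      remaining = dict(groupings)
      remaining = exclude(remaining, "T3")        # answer: not T3
      remaining = exclude(remaining, "N1")        # answer: not N1
      print(rank_classifications(remaining), set(remaining.values()))  # stages I, II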

  12. Hardware Acceleration of Adaptive Neural Algorithms.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    James, Conrad D.

    As traditional numerical computing has faced challenges, researchers have turned towards alternative computing approaches to reduce power-per-computation metrics and improve algorithm performance. Here, we describe an approach towards non-conventional computing that strengthens the connection between machine learning and neuroscience concepts. The Hardware Acceleration of Adaptive Neural Algorithms (HAANA) project has developed neural machine learning algorithms and hardware for applications in image processing and cybersecurity. While machine learning methods are effective at extracting relevant features from many types of data, the effectiveness of these algorithms degrades when subjected to real-world conditions. Our team has generated novel neural-inspired approaches to improve the resiliency and adaptability of machine learning algorithms. In addition, we have designed and fabricated hardware architectures and microelectronic devices specifically tuned towards the training and inference operations of neural-inspired algorithms. Finally, our multi-scale simulation framework allows us to assess the impact of microelectronic device properties on algorithm performance.

  13. A genetic algorithm for replica server placement

    NASA Astrophysics Data System (ADS)

    Eslami, Ghazaleh; Toroghi Haghighat, Abolfazl

    2012-01-01

    Modern distribution systems use replication to improve the communication delay experienced by their clients. Several techniques have been developed for web server replica placement. One of the previous studies is the Greedy algorithm proposed by Qiu et al., which requires knowledge of the network topology. In this paper, we first introduce a genetic algorithm for web server replica placement. Second, we compare our algorithm with the Greedy algorithm of Qiu et al. and with an optimal algorithm. We found that our approach achieves better results than the Greedy algorithm, although its computational time is greater.

  15. Greedy algorithms in disordered systems

    NASA Astrophysics Data System (ADS)

    Duxbury, P. M.; Dobrin, R.

    1999-08-01

    We discuss search, minimal path and minimal spanning tree algorithms and their applications to disordered systems. Greedy algorithms solve these problems exactly, and are related to extremal dynamics in physics. Minimal cost path (Dijkstra) and minimal cost spanning tree (Prim) algorithms provide extremal dynamics for a polymer in a random medium (the KPZ universality class) and invasion percolation (without trapping) respectively.
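
    A minimal Python sketch of the greedy extremal dynamics referred to above: Prim's algorithm repeatedly invades the cheapest boundary edge, which on random weights is the dynamics of invasion percolation without trapping. The random complete graph is illustrative.

      import heapq, random

      def prim_mst(nodes, edges):
          # Grow the cluster from one site by always invading the cheapest
          # edge on its boundary (greedy / extremal dynamics).
          adj = {u: [] for u in nodes}
          for u, v, w in edges:
              adj[u].append((w, v)); adj[v].append((w, u))
          start = nodes[0]
          in_tree, frontier, tree = {start}, list(adj[start]), []
          heapq.heapify(frontier)
          while frontier and len(in_tree) < len(nodes):
              w, v = heapq.heappop(frontier)
              if v not in in_tree:
                  in_tree.add(v); tree.append((w, v))
                  for e in adj[v]:
                      heapq.heappush(frontier, e)
          return tree

      # Random weights model the disorder; the invaded cluster depends
      # only on the rank order of the weights.
      random.seed(0)
      nodes = list(range(6))
      edges = [(u, v, random.random()) for u in nodes for v in nodes if u < v]
      print(prim_mst(nodes, edges))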

  16. A hardware-algorithm co-design approach to optimize seizure detection algorithms for implantable applications.

    PubMed

    Raghunathan, Shriram; Gupta, Sumeet K; Markandeya, Himanshu S; Roy, Kaushik; Irazoqui, Pedro P

    2010-10-30

    Implantable neural prostheses that deliver focal electrical stimulation upon demand are rapidly emerging as an alternate therapy for roughly a third of the epileptic patient population that is medically refractory. Seizure detection algorithms enable feedback mechanisms to provide focally and temporally specific intervention. Real-time feasibility and computational complexity often limit most reported detection algorithms to implementations using computers for bedside monitoring or external devices communicating with the implanted electrodes. A comparison of algorithms based on detection efficacy does not present a complete picture of the feasibility of the algorithm with limited computational power, as is the case with most battery-powered applications. We present a two-dimensional design optimization approach that takes into account both detection efficacy and hardware cost in evaluating algorithms for their feasibility in an implantable application. Detection features are first compared for their ability to detect electrographic seizures from micro-electrode data recorded from kainate-treated rats. Circuit models are then used to estimate the dynamic and leakage power consumption of the compared features. A score is assigned based on detection efficacy and the hardware cost for each of the features, then plotted on a two-dimensional design space. An optimal combination of compared features is used to construct an algorithm that provides maximal detection efficacy per unit hardware cost. The methods presented in this paper would facilitate the development of a common platform to benchmark seizure detection algorithms for comparison and feasibility analysis in the next generation of implantable neuroprosthetic devices to treat epilepsy. Copyright © 2010 Elsevier B.V. All rights reserved.

  17. Efficient image compression algorithm for computer-animated images

    NASA Astrophysics Data System (ADS)

    Yfantis, Evangelos A.; Au, Matthew Y.; Miel, G.

    1992-10-01

    An image compression algorithm is described. The algorithm is an extension of the run-length image compression algorithm and its implementation is relatively easy. This algorithm was implemented and compared with other existing popular compression algorithms and with the Lempel-Ziv (LZ) coding. The Lempel-Ziv algorithm is available as a utility in the UNIX operating system and is also referred to as the UNIX uncompress. Sometimes our algorithm is best in terms of saving memory space, and sometimes one of the competing algorithms is best. The algorithm is lossless, and the intent is for the algorithm to be used in computer graphics animated images. Comparisons made with the LZ algorithm indicate that the decompression time using our algorithm is faster than that using the LZ algorithm. Once the data are in memory, a relatively simple and fast transformation is applied to uncompress the file.
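
    For reference, a minimal Python version of the plain run-length coding that the algorithm extends (the extension itself is not reproduced here); the synthetic frame is illustrative.

      def rle_encode(data):
          # Each run of equal bytes becomes a (count, value) pair, with
          # counts capped at 255 so a pair fits in two bytes.
          runs, i = [], 0
          while i < len(data):
              j = i
              while j < len(data) and data[j] == data[i] and j - i < 255:
                  j += 1
              runs.append((j - i, data[i]))
              i = j
          return runs

      def rle_decode(runs):
          out = bytearray()
          for count, value in runs:
              out.extend([value] * count)
          return bytes(out)

      frame = bytes([0] * 500 + [255] * 20 + [0] * 480)   # flat-shaded scanline
      assert rle_decode(rle_encode(frame)) == frame        # lossless
      print(len(rle_encode(frame)) * 2, "bytes vs", len(frame))   # 10 vs 1000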

  18. Efficient RNA structure comparison algorithms.

    PubMed

    Arslan, Abdullah N; Anandan, Jithendar; Fry, Eric; Monschke, Keith; Ganneboina, Nitin; Bowerman, Jason

    2017-12-01

    Recently proposed relative addressing-based ([Formula: see text]) RNA secondary structure representation has important features by which an RNA structure database can be stored into a suffix array. A fast substructure search algorithm has been proposed based on binary search on this suffix array. Using this substructure search algorithm, we present a fast algorithm that finds the largest common substructure of given multiple RNA structures in [Formula: see text] format. The multiple RNA structure comparison problem is NP-hard in its general formulation. We introduced a new problem for comparing multiple RNA structures. This problem has more strict similarity definition and objective, and we propose an algorithm that solves this problem efficiently. We also develop another comparison algorithm that iteratively calls this algorithm to locate nonoverlapping large common substructures in compared RNAs. With the new resulting tools, we improved the RNASSAC website (linked from http://faculty.tamuc.edu/aarslan ). This website now also includes two drawing tools: one specialized for preparing RNA substructures that can be used as input by the search tool, and another one for automatically drawing the entire RNA structure from a given structure sequence.

  19. An algorithmic framework for multiobjective optimization.

    PubMed

    Ganesan, T; Elamvazuthi, I; Shaari, Ku Zilati Ku; Vasant, P

    2013-01-01

    Multiobjective (MO) optimization is an emerging field which is increasingly being encountered in many fields globally. Various metaheuristic techniques such as differential evolution (DE), genetic algorithm (GA), gravitational search algorithm (GSA), and particle swarm optimization (PSO) have been used in conjunction with scalarization techniques such as weighted sum approach and the normal-boundary intersection (NBI) method to solve MO problems. Nevertheless, many challenges still arise especially when dealing with problems with multiple objectives (especially in cases more than two). In addition, problems with extensive computational overhead emerge when dealing with hybrid algorithms. This paper discusses these issues by proposing an alternative framework that utilizes algorithmic concepts related to the problem structure for generating efficient and effective algorithms. This paper proposes a framework to generate new high-performance algorithms with minimal computational overhead for MO optimization.

  20. Testing an earthquake prediction algorithm

    USGS Publications Warehouse

    Kossobokov, V.G.; Healy, J.H.; Dewey, J.W.

    1997-01-01

    A test to evaluate earthquake prediction algorithms is being applied to a Russian algorithm known as M8. The M8 algorithm makes intermediate term predictions for earthquakes to occur in a large circle, based on integral counts of transient seismicity in the circle. In a retroactive prediction for the period January 1, 1985 to July 1, 1991 the algorithm as configured for the forward test would have predicted eight of ten strong earthquakes in the test area. A null hypothesis, based on random assignment of predictions, predicts eight earthquakes in 2.87% of the trials. The forward test began July 1, 1991 and will run through December 31, 1997. As of July 1, 1995, the algorithm had forward predicted five out of nine earthquakes in the test area, which success ratio would have been achieved in 53% of random trials with the null hypothesis.

  1. Advancements to the planogram frequency–distance rebinning algorithm

    PubMed Central

    Champley, Kyle M; Raylman, Raymond R; Kinahan, Paul E

    2010-01-01

    In this paper we consider the task of image reconstruction in positron emission tomography (PET) with the planogram frequency–distance rebinning (PFDR) algorithm. The PFDR algorithm is a rebinning algorithm for PET systems with panel detectors. The algorithm is derived in the planogram coordinate system which is a native data format for PET systems with panel detectors. A rebinning algorithm averages over the redundant four-dimensional set of PET data to produce a three-dimensional set of data. Images can be reconstructed from this rebinned three-dimensional set of data. This process enables one to reconstruct PET images more quickly than reconstructing directly from the four-dimensional PET data. The PFDR algorithm is an approximate rebinning algorithm. We show that implementing the PFDR algorithm followed by the (ramp) filtered backprojection (FBP) algorithm in linogram coordinates from multiple views reconstructs a filtered version of our image. We develop an explicit formula for this filter which can be used to achieve exact reconstruction by means of a modified FBP algorithm applied to the stack of rebinned linograms and can also be used to quantify the errors introduced by the PFDR algorithm. This filter is similar to the filter in the planogram filtered backprojection algorithm derived by Brasse et al. The planogram filtered backprojection and exact reconstruction with the PFDR algorithm require complete projections which can be completed with a reprojection algorithm. The PFDR algorithm is similar to the rebinning algorithm developed by Kao et al. By expressing the PFDR algorithm in detector coordinates, we provide a comparative analysis between the two algorithms. Numerical experiments using both simulated data and measured data from a positron emission mammography/tomography (PEM/PET) system are performed. Images are reconstructed by PFDR+FBP (PFDR followed by 2D FBP reconstruction), PFDRX (PFDR followed by the modified FBP algorithm for exact

  2. A Comparison of Three Curve Intersection Algorithms

    NASA Technical Reports Server (NTRS)

    Sederberg, T. W.; Parry, S. R.

    1985-01-01

    An empirical comparison is made between three algorithms for computing the points of intersection of two planar Bezier curves. The algorithms compared are: the well known Bezier subdivision algorithm, which is discussed in Lane 80; a subdivision algorithm based on interval analysis due to Koparkar and Mudur; and an algorithm due to Sederberg, Anderson and Goldman which reduces the problem to one of finding the roots of a univariate polynomial. The details of these three algorithms are presented in their respective references.
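
    A minimal Python sketch of the Bezier subdivision approach (the first of the three algorithms): recursively split both curves with de Casteljau's construction and prune subdivided pairs whose control-point bounding boxes do not overlap. The tolerance and the handling of near-duplicate hits are simplifications for illustration.

      def split_bezier(pts, t=0.5):
          """de Casteljau: split a Bezier curve (control-point list) in two."""
          left, right = [pts[0]], [pts[-1]]
          work = list(pts)
          while len(work) > 1:
              work = [((1 - t) * x0 + t * x1, (1 - t) * y0 + t * y1)
                      for (x0, y0), (x1, y1) in zip(work, work[1:])]
              left.append(work[0])
              right.append(work[-1])
          return left, right[::-1]

      def bbox(pts):
          xs, ys = zip(*pts)
          return min(xs), min(ys), max(xs), max(ys)

      def intersections(c1, c2, eps=1e-4, out=None):
          """Bezier subdivision: prune on bounding boxes, recurse otherwise.
          Returns approximate hits (several near-duplicates per crossing)."""
          out = [] if out is None else out
          a, b = bbox(c1), bbox(c2)
          if a[0] > b[2] or b[0] > a[2] or a[1] > b[3] or b[1] > a[3]:
              return out                        # boxes disjoint: no crossing
          if max(a[2] - a[0], a[3] - a[1], b[2] - b[0], b[3] - b[1]) < eps:
              out.append(((a[0] + a[2]) / 2, (a[1] + a[3]) / 2))
              return out
          l1, r1 = split_bezier(c1)
          l2, r2 = split_bezier(c2)
          for p in (l1, r1):
              for q in (l2, r2):
                  intersections(p, q, eps, out)
          return out

      c1 = [(0.0, 0.0), (1.0, 2.0), (2.0, -2.0), (3.0, 0.0)]
      c2 = [(0.0, -1.0), (1.0, 1.5), (2.0, 1.5), (3.0, -1.0)]
      print(len(intersections(c1, c2)))  # near-duplicate hits per crossing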

  3. One-dimensional swarm algorithm packaging

    NASA Astrophysics Data System (ADS)

    Lebedev, Boris K.; Lebedev, Oleg B.; Lebedeva, Ekaterina O.

    2018-05-01

    The paper considers an algorithm for solving the one-dimensional packing problem based on an adaptive-behaviour model of an ant colony. The key role in the development of the ant algorithm is played by the choice of representation (interpretation) of the solution. The structure of the solution-search graph, the procedure for finding solutions on the graph, and the methods of deposition and evaporation of pheromone are described. Unlike the canonical ant-algorithm paradigm, an ant on the solution-search graph generates sets of elements distributed across blocks. Experimental studies were conducted on an IBM PC. Compared with existing algorithms, the results are improved.

  4. Multiple-algorithm parallel fusion of infrared polarization and intensity images based on algorithmic complementarity and synergy

    NASA Astrophysics Data System (ADS)

    Zhang, Lei; Yang, Fengbao; Ji, Linna; Lv, Sheng

    2018-01-01

    Diverse image fusion methods perform differently, and each method has advantages and disadvantages compared with the others. One notion is that the advantages of different image fusion methods can be effectively combined. A multiple-algorithm parallel fusion method based on algorithmic complementarity and synergy is proposed. First, in view of the characteristics of the different algorithms and the difference-features among images, an index vector based on feature similarity is proposed to define the degree of complementarity and synergy. This proposed index vector is a reliable evidence indicator for algorithm selection. Second, the algorithms with a high degree of complementarity and synergy are selected. Then, the difference-degrees of the various features and the infrared intensity images are used as the initial weights for the nonnegative matrix factorization (NMF), which avoids the randomness of the NMF initialization parameters. Finally, the fused images of the different algorithms are integrated using the NMF because of its excellent data-fusing performance on independent features. Experimental results demonstrate that the visual effect and objective evaluation indices of the fused images obtained using the proposed method are better than those obtained using traditional methods. The proposed method retains all the advantages that the individual fusion algorithms have.

  5. QCCM Center for Quantum Algorithms

    DTIC Science & Technology

    2008-10-17

    Quantum algorithms (e.g., quantum walks and adiabatic computing), as well as theoretical advances relating algorithms to physical implementations (e.g., ...). Subject terms: quantum algorithms, quantum computing, fault-tolerant error correction (Richard Cleve, MITACS). Related publication: A. Ambainis, M. Beaudry, M. Golovkins, A. Kikusts, M. Mercer, D. Thérien, "Algebraic results on quantum automata," Theory of Computing Systems 39 (2006).

  6. One algorithm to rule them all? An evaluation and discussion of ten eye movement event-detection algorithms.

    PubMed

    Andersson, Richard; Larsson, Linnea; Holmqvist, Kenneth; Stridh, Martin; Nyström, Marcus

    2017-04-01

    Almost all eye-movement researchers use algorithms to parse raw data and detect distinct types of eye movement events, such as fixations, saccades, and pursuit, and then base their results on these. Surprisingly, these algorithms are rarely evaluated. We evaluated the classifications of ten eye-movement event detection algorithms on data from an SMI HiSpeed 1250 system, and compared them to the manual ratings of two human experts. The evaluation focused on fixations, saccades, and post-saccadic oscillations, and used both event-duration parameters and sample-by-sample comparisons to rank the algorithms. The resulting event durations varied substantially as a function of which algorithm was used. This evaluation differed from previous evaluations by considering a relatively large set of algorithms, multiple event types, and data from both static and dynamic stimuli. The main conclusion is that current detectors of only fixations and saccades work reasonably well for static stimuli, but barely better than chance for dynamic stimuli. Differing results across evaluation methods make it difficult to select one winner for fixation detection. For saccade detection, however, the algorithm by Larsson, Nyström and Stridh (IEEE Transactions on Biomedical Engineering, 60(9):2484-2493, 2013) outperforms all algorithms in data from both static and dynamic stimuli. The data also show how improperly selected algorithms applied to dynamic data misestimate fixation and saccade properties.
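
    For readers unfamiliar with event detection, a minimal velocity-threshold (I-VT) classifier, the simplest member of this family and not one of the ten evaluated algorithms, can be sketched in Python as follows; the sampling rate and threshold are illustrative.

      import numpy as np

      def ivt_classify(x, y, fs=1250.0, vel_threshold=100.0):
          """Label each gaze sample 'sac' if point-to-point velocity
          (deg/s) exceeds the threshold, else 'fix'."""
          speed = np.hypot(np.gradient(x), np.gradient(y)) * fs
          return np.where(speed > vel_threshold, "sac", "fix")

      # toy trace: fixation at 0 deg, a fast 5-deg jump, fixation at 5 deg
      x = np.concatenate([np.zeros(50), np.linspace(0, 5, 5), np.full(50, 5.0)])
      y = np.zeros_like(x)
      print(ivt_classify(x, y)[45:60])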

  7. Comparative evaluation of the Bio-Rad Geenius HIV-1/2 Confirmatory Assay and the Bio-Rad Multispot HIV-1/2 Rapid Test as an alternative differentiation assay for CLSI M53 algorithm-I.

    PubMed

    Malloch, L; Kadivar, K; Putz, J; Levett, P N; Tang, J; Hatchette, T F; Kadkhoda, K; Ng, D; Ho, J; Kim, J

    2013-12-01

    The CLSI M53-A, Criteria for Laboratory Testing and Diagnosis of Human Immunodeficiency Virus (HIV) Infection; Approved Guideline, includes an algorithm in which samples that are reactive on a 4th-generation EIA screen proceed to a supplemental assay that is able to confirm and differentiate between antibodies to HIV-1 and HIV-2. The recently CE-marked Bio-Rad Geenius HIV-1/2 Confirmatory Assay was evaluated as an alternative to the FDA-approved Bio-Rad Multispot HIV-1/HIV-2 Rapid Test, which has previously been validated for use in this new algorithm. This study used reference samples submitted to the Canadian NLHRS and samples from commercial sources. Data were tabulated in 2×2 tables for statistical analysis: sensitivity, specificity, predictive values, kappa, and likelihood ratios. The overall performance of the Geenius and Multispot was very high: sensitivity (100%, 100%), specificity (96.3%, 99.1%), positive (45.3, 181) and negative (0, 0) likelihood ratios, respectively, with high kappa (0.96) and a low bias index (0.0068). The ability to differentiate HIV-1 (99.2%, 100%) and HIV-2 (98.1%, 98.1%) antibodies was also very high. The Bio-Rad Geenius HIV-1/2 Confirmatory Assay is a suitable alternative to the validated Multispot for use in the second stage of CLSI M53 algorithm-I. The Geenius has additional features, including traceability and sample and cassette barcoding, that improve the quality management/assurance of HIV testing. It is anticipated that the CLSI M53 guideline and assays such as the Geenius will reduce the number of indeterminate test results previously associated with the HIV-1 WB and improve the ability to differentiate HIV-2 infections. Crown Copyright © 2013. Published by Elsevier B.V. All rights reserved.
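
    The statistics tabulated from the 2×2 tables can be computed with a short Python sketch; the counts in the example call are invented, not the study's data.

      def two_by_two_stats(tp, fp, fn, tn):
          """Diagnostic statistics from a 2x2 table of counts."""
          n = tp + fp + fn + tn
          se = tp / (tp + fn)                     # sensitivity
          sp = tn / (tn + fp)                     # specificity
          ppv = tp / (tp + fp)                    # positive predictive value
          npv = tn / (tn + fn)                    # negative predictive value
          lr_pos = se / (1 - sp) if sp < 1 else float("inf")
          lr_neg = (1 - se) / sp                  # negative likelihood ratio
          po = (tp + tn) / n                      # observed agreement
          pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n**2
          kappa = (po - pe) / (1 - pe)            # Cohen's kappa
          return dict(sensitivity=se, specificity=sp, ppv=ppv, npv=npv,
                      lr_pos=lr_pos, lr_neg=lr_neg, kappa=kappa)

      print(two_by_two_stats(tp=120, fp=4, fn=0, tn=104))  # illustrative counts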

  8. QPSO-Based Adaptive DNA Computing Algorithm

    PubMed Central

    Karakose, Mehmet; Cigdem, Ugur

    2013-01-01

    DNA (deoxyribonucleic acid) computing, a computation model that uses DNA molecules for information storage, has been increasingly used for optimization and data analysis in recent years. However, the DNA computing algorithm has some limitations in terms of convergence speed, adaptability, and effectiveness. In this paper, a new approach for the improvement of DNA computing is proposed, which aims to run the DNA computing algorithm with parameters adapted towards the desired goal using quantum-behaved particle swarm optimization (QPSO). The contributions provided by the proposed QPSO-based adaptive DNA computing algorithm are as follows: (1) the population size, crossover rate, maximum number of operations, enzyme and virus mutation rates, and fitness function of the DNA computing algorithm are simultaneously tuned for the adaptive process; (2) the adaptive algorithm is performed using the QPSO algorithm for goal-driven progress, faster operation, and flexibility in data; and (3) a numerical realization of the DNA computing algorithm with the proposed approach is implemented for system identification. Two experiments with different systems were carried out to evaluate the performance of the proposed approach, with comparative results. Experimental results obtained with Matlab and FPGA demonstrate its ability to provide effective optimization, considerable convergence speed, and high accuracy relative to the DNA computing algorithm. PMID:23935409
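
    A minimal Python sketch of the core QPSO update (the optimizer used here to tune the DNA computing parameters) on a toy objective; the contraction-expansion coefficient and other settings are illustrative.

      import numpy as np

      def qpso_minimize(f, dim=2, n_particles=20, iters=200, alpha=0.75,
                        bounds=(-5.0, 5.0), seed=0):
          """Core quantum-behaved PSO loop: particles are drawn around a
          per-particle attractor with a spread set by |mbest - x|."""
          rng = np.random.default_rng(seed)
          lo, hi = bounds
          x = rng.uniform(lo, hi, (n_particles, dim))
          pbest, pval = x.copy(), np.array([f(p) for p in x])
          for _ in range(iters):
              gbest = pbest[pval.argmin()]
              mbest = pbest.mean(axis=0)                  # mean best position
              phi = rng.uniform(size=(n_particles, dim))
              attractor = phi * pbest + (1.0 - phi) * gbest
              u = rng.uniform(size=(n_particles, dim))
              sign = rng.choice([-1.0, 1.0], size=(n_particles, dim))
              x = attractor + sign * alpha * np.abs(mbest - x) * np.log(1.0 / u)
              x = np.clip(x, lo, hi)
              fx = np.array([f(p) for p in x])
              better = fx < pval
              pbest[better], pval[better] = x[better], fx[better]
          return pbest[pval.argmin()], float(pval.min())

      print(qpso_minimize(lambda v: float(np.sum(v**2))))  # converges near (0, 0)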

  9. Algorithmic Mechanism Design of Evolutionary Computation.

    PubMed

    Pei, Yan

    2015-01-01

    We consider algorithmic design, enhancement, and improvement of evolutionary computation as a mechanism design problem. All individuals, or several groups of individuals, can be considered as self-interested agents. The individuals in evolutionary computation can manipulate parameter settings and operations by satisfying their own preferences, which are defined by an evolutionary computation algorithm designer, rather than by following a fixed algorithm rule. Evolutionary computation algorithm designers or self-adaptive methods should construct proper rules and mechanisms for all agents (individuals) to conduct their evolutionary behaviour correctly in order to achieve the desired and preset objective(s). As a case study, we propose a formal framework for parameter setting, strategy selection, and algorithmic design of evolutionary computation by considering the Nash strategy equilibrium of a mechanism design in the search process. The evaluation results demonstrate the efficiency of the framework. This primary principle can be implemented in any evolutionary computation algorithm that needs to consider strategy selection issues in its optimization process. The final objective of our work is to solve evolutionary computation design as an algorithmic mechanism design problem and establish its fundamental aspect by taking this perspective. This paper is the first step towards achieving this objective by implementing a strategy equilibrium solution (such as Nash equilibrium) in an evolutionary computation algorithm.

  10. Algorithmic Mechanism Design of Evolutionary Computation

    PubMed Central

    2015-01-01

    We consider algorithmic design, enhancement, and improvement of evolutionary computation as a mechanism design problem. All individuals, or several groups of individuals, can be considered as self-interested agents. The individuals in evolutionary computation can manipulate parameter settings and operations by satisfying their own preferences, which are defined by an evolutionary computation algorithm designer, rather than by following a fixed algorithm rule. Evolutionary computation algorithm designers or self-adaptive methods should construct proper rules and mechanisms for all agents (individuals) to conduct their evolutionary behaviour correctly in order to achieve the desired and preset objective(s). As a case study, we propose a formal framework for parameter setting, strategy selection, and algorithmic design of evolutionary computation by considering the Nash strategy equilibrium of a mechanism design in the search process. The evaluation results demonstrate the efficiency of the framework. This primary principle can be implemented in any evolutionary computation algorithm that needs to consider strategy selection issues in its optimization process. The final objective of our work is to solve evolutionary computation design as an algorithmic mechanism design problem and establish its fundamental aspect by taking this perspective. This paper is the first step towards achieving this objective by implementing a strategy equilibrium solution (such as Nash equilibrium) in an evolutionary computation algorithm. PMID:26257777

  11. Firefly Mating Algorithm for Continuous Optimization Problems.

    PubMed

    Ritthipakdee, Amarita; Thammano, Arit; Premasathian, Nol; Jitkongchuen, Duangjai

    2017-01-01

    This paper proposes a swarm intelligence algorithm, called the firefly mating algorithm (FMA), for solving continuous optimization problems. FMA uses a genetic algorithm as its core. The main feature of the algorithm is a novel mating-pair selection method inspired by the following two mating behaviors of fireflies in nature: (i) the mutual attraction between males and females causes them to mate and (ii) fireflies of both sexes are of the multiple-mating type, mating with multiple opposite-sex partners. A female continues mating until her spermatheca becomes full, and, in the same vein, a male can provide sperm for several females until his sperm reservoir is depleted. This new feature enhances the global convergence capability of the algorithm. The performance of FMA was tested with 20 benchmark functions (sixteen 30-dimensional functions and four 2-dimensional ones) against the FA, ALC-PSO, COA, MCPSO, LWGSODE, MPSODDS, DFOA, SHPSOS, LSA, MPDPGA, DE, and GABC algorithms. The experimental results showed that the success rates of the proposed algorithm on these functions were higher than those of the other algorithms and that the proposed algorithm also required fewer iterations to reach the global optima.

  12. Firefly Mating Algorithm for Continuous Optimization Problems

    PubMed Central

    Ritthipakdee, Amarita; Premasathian, Nol; Jitkongchuen, Duangjai

    2017-01-01

    This paper proposes a swarm intelligence algorithm, called the firefly mating algorithm (FMA), for solving continuous optimization problems. FMA uses a genetic algorithm as its core. The main feature of the algorithm is a novel mating-pair selection method inspired by the following two mating behaviors of fireflies in nature: (i) the mutual attraction between males and females causes them to mate and (ii) fireflies of both sexes are of the multiple-mating type, mating with multiple opposite-sex partners. A female continues mating until her spermatheca becomes full, and, in the same vein, a male can provide sperm for several females until his sperm reservoir is depleted. This new feature enhances the global convergence capability of the algorithm. The performance of FMA was tested with 20 benchmark functions (sixteen 30-dimensional functions and four 2-dimensional ones) against the FA, ALC-PSO, COA, MCPSO, LWGSODE, MPSODDS, DFOA, SHPSOS, LSA, MPDPGA, DE, and GABC algorithms. The experimental results showed that the success rates of the proposed algorithm on these functions were higher than those of the other algorithms and that the proposed algorithm also required fewer iterations to reach the global optima. PMID:28808442

  13. An Algorithmic Framework for Multiobjective Optimization

    PubMed Central

    Ganesan, T.; Elamvazuthi, I.; Shaari, Ku Zilati Ku; Vasant, P.

    2013-01-01

    Multiobjective (MO) optimization is an emerging field which is increasingly being encountered in many fields globally. Various metaheuristic techniques such as differential evolution (DE), genetic algorithm (GA), gravitational search algorithm (GSA), and particle swarm optimization (PSO) have been used in conjunction with scalarization techniques such as the weighted-sum approach and the normal-boundary intersection (NBI) method to solve MO problems. Nevertheless, many challenges still arise, especially when dealing with problems with more than two objectives. In addition, extensive computational overhead emerges when dealing with hybrid algorithms. This paper addresses these issues by proposing an alternative framework that utilizes algorithmic concepts related to the problem structure to generate efficient and effective algorithms with minimal computational overhead for MO optimization. PMID:24470795

  14. The Rational Hybrid Monte Carlo algorithm

    NASA Astrophysics Data System (ADS)

    Clark, Michael

    2006-12-01

    The past few years have seen considerable progress in algorithmic development for the generation of gauge fields including the effects of dynamical fermions. The Rational Hybrid Monte Carlo (RHMC) algorithm, in which Hybrid Monte Carlo is performed using a rational approximation in place of the usual inverse quark matrix kernel, is one of these developments. This algorithm has been found to be extremely beneficial in many areas of lattice QCD (chiral fermions, finite temperature, Wilson fermions, etc.). We review the algorithm and some of these benefits, and we compare it against other recent algorithmic developments. We conclude with an update of the Berlin wall plot comparing the costs of all popular fermion formulations.

  15. Parametric classification of handvein patterns based on texture features

    NASA Astrophysics Data System (ADS)

    Al Mahafzah, Harbi; Imran, Mohammad; Supreetha Gowda H., D.

    2018-04-01

    In this paper, we have developed a biometric recognition system adopting the hand-based modality of the hand vein, which has a unique pattern for each individual and is practically impossible to counterfeit or fabricate because it is an internal feature. For feature extraction we chose the LBP visual descriptor, the blur-insensitive LPQ texture operator, and the Log-Gabor texture descriptor, and for classification the well-known KNN and SVM classifiers. We experimented and tabulated the single-algorithm recognition rates for hand veins under different distance measures and kernel options. Feature-level fusion was then carried out, which increased the performance level.

  16. Algorithms for accelerated convergence of adaptive PCA.

    PubMed

    Chatterjee, C; Kang, Z; Roychowdhury, V P

    2000-01-01

    We derive and discuss new adaptive algorithms for principal component analysis (PCA) that are shown to converge faster than the traditional PCA algorithms due to Oja, Sanger, and Xu. It is well known that traditional PCA algorithms that are derived by using gradient descent on an objective function are slow to converge. Furthermore, the convergence of these algorithms depends on appropriate choices of the gain sequences. Since online applications demand faster convergence and an automatic selection of gains, we present new adaptive algorithms to solve these problems. We first present an unconstrained objective function, which can be minimized to obtain the principal components. We derive adaptive algorithms from this objective function by using: 1) gradient descent; 2) steepest descent; 3) conjugate direction; and 4) Newton-Raphson methods. Although gradient descent produces Xu's LMSER algorithm, the steepest descent, conjugate direction, and Newton-Raphson methods produce new adaptive algorithms for PCA. We also provide a discussion of the landscape of the objective function, and present a global convergence proof of the adaptive gradient descent PCA algorithm using stochastic approximation theory. Extensive experiments with stationary and nonstationary multidimensional Gaussian sequences show faster convergence of the new algorithms over the traditional gradient descent methods. We also compare the steepest descent adaptive algorithm with state-of-the-art methods on stationary and nonstationary sequences.
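
    For reference, the classical gradient-type adaptive PCA baseline mentioned above (Oja's rule for the first principal component) can be sketched in a few lines of Python; the learning rate and epoch count are illustrative.

      import numpy as np

      def oja_first_pc(samples, lr=0.01, epochs=10, seed=0):
          """Oja's rule for the first principal component:
          w <- w + lr * y * (x - y * w), with y = w . x."""
          rng = np.random.default_rng(seed)
          w = rng.normal(size=samples.shape[1])
          w /= np.linalg.norm(w)
          for _ in range(epochs):
              for x in samples:
                  y = w @ x
                  w += lr * y * (x - y * w)
          return w / np.linalg.norm(w)

      # anisotropic data: the leading component should align with the x-axis
      data = np.random.default_rng(1).normal(size=(2000, 2)) * [3.0, 0.5]
      print(oja_first_pc(data))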

  17. Multimodal Estimation of Distribution Algorithms.

    PubMed

    Yang, Qiang; Chen, Wei-Neng; Li, Yun; Chen, C L Philip; Xu, Xiang-Min; Zhang, Jun

    2016-02-15

    Taking advantage of estimation of distribution algorithms (EDAs) in preserving high diversity, this paper proposes a multimodal EDA. Integrated with clustering strategies for crowding and speciation, two versions of this algorithm are developed, which operate at the niche level. These two algorithms are then equipped with three distinctive techniques: 1) a dynamic cluster-sizing strategy; 2) an alternating use of Gaussian and Cauchy distributions to generate offspring; and 3) an adaptive local search. The dynamic cluster sizing affords a potential balance between exploration and exploitation and reduces the sensitivity to the cluster size in the niching methods. Taking advantage of the Gaussian and Cauchy distributions, we generate the offspring at the niche level by alternating between these two distributions, which can also potentially offer a balance between exploration and exploitation. Further, solution accuracy is enhanced through a new local search scheme probabilistically conducted around seeds of niches, with probabilities determined self-adaptively according to the fitness values of these seeds. Extensive experiments conducted on 20 benchmark multimodal problems confirm that both algorithms can achieve competitive performance compared with several state-of-the-art multimodal algorithms, which is supported by nonparametric tests. In particular, the proposed algorithms are very promising for complex problems with many local optima.

  18. Spaceborne SAR Imaging Algorithm for Coherence Optimized.

    PubMed

    Qiu, Zhiwei; Yue, Jianping; Wang, Xueqin; Yue, Shun

    2016-01-01

    This paper proposes a SAR imaging algorithm that maximizes coherence, built on existing SAR imaging algorithms. The basic idea of SAR imaging algorithms in image processing is that the output signal attains maximum signal-to-noise ratio (SNR) when optimal imaging parameters are used. A traditional imaging algorithm can acquire the best focusing effect but introduces decoherence in the subsequent interferometric processing. The algorithm proposed in this paper applies consistent imaging parameters to the SAR echoes during focusing. Although the SNR of the output signal is reduced slightly, coherence is preserved to a much greater degree, and an interferogram of high quality is finally obtained. In this paper, two scenes of Envisat ASAR data over Zhangbei are employed to test the algorithm. Compared with the interferogram from the traditional algorithm, the results show that this algorithm is more suitable for SAR interferometry (InSAR) research and application.

  19. Spaceborne SAR Imaging Algorithm for Coherence Optimized

    PubMed Central

    Qiu, Zhiwei; Yue, Jianping; Wang, Xueqin; Yue, Shun

    2016-01-01

    This paper proposes a SAR imaging algorithm that maximizes coherence, built on existing SAR imaging algorithms. The basic idea of SAR imaging algorithms in image processing is that the output signal attains maximum signal-to-noise ratio (SNR) when optimal imaging parameters are used. A traditional imaging algorithm can acquire the best focusing effect but introduces decoherence in the subsequent interferometric processing. The algorithm proposed in this paper applies consistent imaging parameters to the SAR echoes during focusing. Although the SNR of the output signal is reduced slightly, coherence is preserved to a much greater degree, and an interferogram of high quality is finally obtained. In this paper, two scenes of Envisat ASAR data over Zhangbei are employed to test the algorithm. Compared with the interferogram from the traditional algorithm, the results show that this algorithm is more suitable for SAR interferometry (InSAR) research and application. PMID:26871446

  20. Multi-object Detection and Discrimination Algorithms

    DTIC Science & Technology

    2015-03-26

    This document contains an overview of research and work performed and published at the University of Florida from October 1, 2009 to October 31, 2013 pertaining to proposal 57306CS: Multi-object Detection and Discrimination Algorithms. One stage of the detection method proceeds with an algorithm similar to a depth-first search; this stage of the algorithm is O(CN).

  1. QRS Detection Algorithm for Telehealth Electrocardiogram Recordings.

    PubMed

    Khamis, Heba; Weiss, Robert; Xie, Yang; Chang, Chan-Wei; Lovell, Nigel H; Redmond, Stephen J

    2016-07-01

    QRS detection algorithms are needed to analyze electrocardiogram (ECG) recordings generated in telehealth environments. However, the numerous published QRS detectors focus on clean clinical data. Here, a "UNSW" QRS detection algorithm is described that is suitable for clinical ECG and also poorer-quality telehealth ECG. The UNSW algorithm generates a feature signal containing information about ECG amplitude and derivative, which is filtered according to its frequency content, and an adaptive threshold is applied. The algorithm was tested on clinical and telehealth ECG, and its QRS detection performance is compared to the Pan-Tompkins (PT) and Gutiérrez-Rivas (GR) algorithms. For the MIT-BIH Arrhythmia database (virtually artifact-free clinical ECG), the overall sensitivity (Se) and positive predictivity (+P) of the UNSW algorithm were >99%, comparable to PT and GR. When applied to the MIT-BIH noise stress test database (clinical ECG with added calibrated noise) after artifact masking, all three algorithms had overall Se >99%, and the UNSW algorithm had higher +P (98%, p < 0.05) than PT and GR. For 250 telehealth ECG records (unsupervised recordings; dry metal electrodes), the UNSW algorithm had 98% Se and 95% +P, which was superior to PT (+P: p < 0.001) and GR (Se and +P: p < 0.001). This is the first study to describe a QRS detection algorithm for telehealth data and evaluate it on clinical and telehealth ECG, with results superior to published algorithms. The UNSW algorithm could be used to manage increasing telehealth ECG analysis workloads.
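
    A generic Python sketch of the ingredients described above (a feature signal built from amplitude and derivative, smoothing, and an adaptive threshold with a refractory period); all parameter values are illustrative assumptions, not the published UNSW settings.

      import numpy as np

      def detect_qrs(ecg, fs=500.0, smooth_s=0.1, long_s=2.0, frac=2.5,
                     refractory_s=0.25):
          """Toy QRS detector: amplitude-times-slope feature, short moving
          average, and a threshold adapted to a long-window baseline."""
          deriv = np.gradient(ecg) * fs
          feat = np.abs(ecg * deriv)                       # amplitude x slope
          k = max(1, int(smooth_s * fs))
          feat = np.convolve(feat, np.ones(k) / k, mode="same")
          m = max(1, int(long_s * fs))
          baseline = np.convolve(feat, np.ones(m) / m, mode="same")
          above = feat > frac * baseline                   # adaptive threshold
          beats, last = [], -int(refractory_s * fs)
          for i in np.flatnonzero(above[1:] & ~above[:-1]) + 1:
              if i - last >= int(refractory_s * fs):       # refractory period
                  beats.append(int(i))
                  last = i
          return np.array(beats)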

  2. A pipelined FPGA implementation of an encryption algorithm based on genetic algorithm

    NASA Astrophysics Data System (ADS)

    Thirer, Nonel

    2013-05-01

    With the evolution of digital data storage and exchange, it is essential to protect confidential information from unauthorized access. High-performance encryption algorithms have been developed and implemented in software and hardware, and many methods of attacking ciphertext have been developed as well. In recent years, the genetic algorithm has gained much interest in the cryptanalysis of ciphertext and also in encryption ciphers. This paper analyses the possibility of using the genetic algorithm as a multiple-key-sequence generator for an AES (Advanced Encryption Standard) cryptographic system, and of using a three-stage pipeline (with four main blocks: input data, AES core, key generator, output data) to provide fast encryption and storage/transmission of large amounts of data.

  3. Conflict-Aware Scheduling Algorithm

    NASA Technical Reports Server (NTRS)

    Wang, Yeou-Fang; Borden, Chester

    2006-01-01

    A conflict-aware scheduling algorithm is being developed to help automate the allocation of NASA's Deep Space Network (DSN) antennas and equipment that are used to communicate with interplanetary scientific spacecraft. The current approach for scheduling DSN ground resources seeks to provide an equitable distribution of tracking services among the multiple scientific missions and is very labor intensive. Due to the large (and increasing) number of mission requests for DSN services, combined with technical and geometric constraints, the DSN is highly oversubscribed. To help automate the process, and to reduce the DSN and spaceflight-project labor effort required for initiating, maintaining, and negotiating schedules, a new scheduling algorithm is being developed. The scheduling algorithm generates a "conflict-aware" schedule, where all requests are scheduled based on a dynamic priority scheme. The conflict-aware scheduling algorithm allocates all requests for DSN tracking services while identifying and maintaining the conflicts to facilitate collaboration and negotiation between spaceflight missions. This contrasts with traditional "conflict-free" scheduling algorithms that assign tracks that are not in conflict and mark the remainder as unscheduled. In the case where full schedule automation is desired (based on mission/event priorities, fairness, allocation rules, geometric constraints, and ground-system capabilities/constraints), a conflict-free schedule can easily be created from the conflict-aware schedule by removing lower-priority items that are in conflict.

  4. One improved LSB steganography algorithm

    NASA Astrophysics Data System (ADS)

    Song, Bing; Zhang, Zhi-hong

    2013-03-01

    Information hidden in a digital image with the LSB algorithm is easily detected with high accuracy by χ² and RS steganalysis. We started by selecting the information-embedding locations and modifying the embedding method; combined with a sub-affine transformation and matrix coding, the LSB algorithm was improved and a new LSB algorithm is proposed. Experimental results show that the improved algorithm can resist χ² and RS steganalysis effectively.
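
    For context, plain LSB embedding, the scheme that the improved algorithm hardens, can be sketched in Python as follows; the location-selection, sub-affine transformation, and matrix-coding steps of the paper are not shown.

      import numpy as np

      def embed_lsb(cover, bits):
          """Write payload bits into the least significant bit of the
          first len(bits) pixels of the cover image."""
          stego = cover.flatten()
          bits = np.asarray(bits, dtype=np.uint8)
          stego[:bits.size] = (stego[:bits.size] & 0xFE) | bits
          return stego.reshape(cover.shape)

      def extract_lsb(stego, n_bits):
          return stego.flatten()[:n_bits] & 1

      cover = np.random.default_rng(0).integers(0, 256, (8, 8), dtype=np.uint8)
      payload = [1, 0, 1, 1, 0, 0, 1, 0]
      assert extract_lsb(embed_lsb(cover, payload), 8).tolist() == payload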

  5. Integrating Algorithm Visualization Video into a First-Year Algorithm and Data Structure Course

    ERIC Educational Resources Information Center

    Crescenzi, Pilu; Malizia, Alessio; Verri, M. Cecilia; Diaz, Paloma; Aedo, Ignacio

    2012-01-01

    In this paper we describe the results that we have obtained while integrating algorithm visualization (AV) movies (tightly coupled with the other teaching material) within a first-year undergraduate course on algorithms and data structures. Our experimental results seem to support the hypothesis that making these movies available significantly…

  6. Firefly Algorithm, Lévy Flights and Global Optimization

    NASA Astrophysics Data System (ADS)

    Yang, Xin-She

    Nature-inspired algorithms such as Particle Swarm Optimization and the Firefly Algorithm are among the most powerful algorithms for optimization. In this paper, we intend to formulate a new metaheuristic algorithm by combining Lévy flights with the search strategy of the Firefly Algorithm. Numerical studies and results suggest that the proposed Lévy-flight firefly algorithm is superior to existing metaheuristic algorithms. Finally, implications for further research and wider applications are discussed.

  7. A novel algorithm for Bluetooth ECG.

    PubMed

    Pandya, Utpal T; Desai, Uday B

    2012-11-01

    In wireless transmission of ECG, data latency becomes significant when battery power level and data transmission distance are not maintained. In applications such as home monitoring or personalized care, a novel filtering strategy is required to overcome the joint effect of these wireless transmission issues and other ECG measurement noises. Here, a novel algorithm, identified as the peak rejection adaptive sampling modified moving average (PRASMMA) algorithm for wireless ECG, is introduced. This algorithm first removes bit-pattern errors in the received data, if they occurred during wireless transmission, and then removes baseline drift. Afterward, a modified moving average is applied everywhere except in the region of each QRS complex. The algorithm also sets its filtering parameters according to the sampling rate selected for signal acquisition. To demonstrate the work, a prototyped Bluetooth-based ECG module is used to capture ECG at different sampling rates and in different patient positions. This module transmits ECG wirelessly to Bluetooth-enabled devices, where the PRASMMA algorithm is applied to the captured ECG. The performance of the PRASMMA algorithm is compared with moving average and S-Golay algorithms both visually and numerically. The results show that the PRASMMA algorithm can significantly improve ECG reconstruction by efficiently removing the noise, and its use can be extended to any parameters where peaks are important for diagnostic purposes.

  8. Comparison of Snow Mass Estimates from a Prototype Passive Microwave Snow Algorithm, a Revised Algorithm and a Snow Depth Climatology

    NASA Technical Reports Server (NTRS)

    Foster, J. L.; Chang, A. T. C.; Hall, D. K.

    1997-01-01

    While it is recognized that no single snow algorithm is capable of producing accurate global estimates of snow depth, for research purposes it is useful to test an algorithm's performance in different climatic areas in order to see how it responds to a variety of snow conditions. This study is one of the first to develop separate passive microwave snow algorithms for North America and Eurasia by including parameters that consider the effects of variations in forest cover and crystal size on microwave brightness temperature. A new algorithm (GSFC 1996) is compared to a prototype algorithm (Chang et al., 1987) and to a snow depth climatology (SDC), which for this study is considered to be a standard reference or baseline. It is shown that the GSFC 1996 algorithm compares much more favorably to the SDC than does the Chang et al. (1987) algorithm. For example, in North America in February there is a 15% difference between the GSFC 1996 algorithm and the SDC, but with the Chang et al. (1987) algorithm the difference is greater than 50%. In Eurasia, also in February, there is only a 1.3% difference between the GSFC 1996 algorithm and the SDC, whereas with the Chang et al. (1987) algorithm the difference is about 20%. As expected, differences tend to be smaller when the snow cover extent is greater, particularly for Eurasia. The GSFC 1996 algorithm performs better in North America in every month than does the Chang et al. (1987) algorithm. This is also the case in Eurasia, except in April and May, when the Chang et al. (1987) algorithm is in closer accord with the SDC than is the GSFC 1996 algorithm.
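
    The Chang et al. (1987) prototype is essentially a brightness-temperature-difference formula; the Python sketch below uses its commonly cited form, so treat the coefficient and channel choice as an assumption rather than a quotation from this paper.

      def chang_1987_snow_depth(tb_18h, tb_37h, coeff=1.59):
          """Snow depth (cm) from horizontally polarized brightness
          temperatures (K) at 18 and 37 GHz, in the commonly cited form
          SD = 1.59 * (TB_18H - TB_37H), clipped at zero (no snow)."""
          return max(0.0, coeff * (tb_18h - tb_37h))

      print(chang_1987_snow_depth(tb_18h=240.0, tb_37h=225.0))  # 23.85 cm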

  9. Noise-enhanced clustering and competitive learning algorithms.

    PubMed

    Osoba, Osonde; Kosko, Bart

    2013-01-01

    Noise can provably speed up convergence in many centroid-based clustering algorithms. This includes the popular k-means clustering algorithm. The clustering noise benefit follows from the general noise benefit for the expectation-maximization algorithm because many clustering algorithms are special cases of the expectation-maximization algorithm. Simulations show that noise also speeds up convergence in stochastic unsupervised competitive learning, supervised competitive learning, and differential competitive learning. Copyright © 2012 Elsevier Ltd. All rights reserved.
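
    A minimal Python sketch of the idea of adding annealed noise to the centroid updates in k-means; this illustrates the noise-benefit concept and is not the paper's exact noise-injection scheme.

      import numpy as np

      def noisy_kmeans(X, k=3, iters=50, noise0=0.5, seed=0):
          """k-means whose centroid updates are perturbed by additive
          noise that decays (anneals) over the iterations."""
          rng = np.random.default_rng(seed)
          centroids = X[rng.choice(len(X), k, replace=False)]
          for t in range(iters):
              labels = np.argmin(((X[:, None] - centroids[None]) ** 2).sum(-1), axis=1)
              for j in range(k):
                  members = X[labels == j]
                  if len(members):
                      centroids[j] = members.mean(axis=0)
              centroids += rng.normal(scale=noise0 / (t + 1), size=centroids.shape)
          return centroids, labels

      rng = np.random.default_rng(1)
      X = np.vstack([rng.normal(m, 0.3, size=(100, 2)) for m in (0.0, 3.0, 6.0)])
      print(noisy_kmeans(X)[0].round(2))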

  10. Fast algorithm of adaptive Fourier series

    NASA Astrophysics Data System (ADS)

    Gao, You; Ku, Min; Qian, Tao

    2018-05-01

    Adaptive Fourier decomposition (AFD, precisely 1-D AFD or Core-AFD) originated from the goal of obtaining positive-frequency representations of signals. It achieved that goal and at the same time offered fast decompositions of signals. Several types of AFDs then arose. AFD merged with the greedy-algorithm idea and, in particular, motivated the so-called pre-orthogonal greedy algorithm (Pre-OGA), which was proven to be the most efficient greedy algorithm. The cost of the advantages of the AFD-type decompositions is, however, high computational complexity, due to the maximal selections of the dictionary parameters. The present paper offers a formulation of the 1-D AFD algorithm that builds the FFT algorithm into it. Accordingly, the algorithm complexity is reduced from the original $\mathcal{O}(M N^2)$ to $\mathcal{O}(M N\log_2 N)$, where $N$ denotes the number of discretization points on the unit circle and $M$ denotes the number of points in $[0,1)$. This greatly enhances the applicability of AFD. Experiments are carried out to show the high efficiency of the proposed algorithm.

  11. Comparison between iterative wavefront control algorithm and direct gradient wavefront control algorithm for adaptive optics system

    NASA Astrophysics Data System (ADS)

    Cheng, Sheng-Yi; Liu, Wen-Jin; Chen, Shan-Qiu; Dong, Li-Zhi; Yang, Ping; Xu, Bing

    2015-08-01

    Among the wavefront control algorithms used in adaptive optics systems, the direct gradient wavefront control algorithm is the most widespread and common method. This control algorithm obtains the actuator voltages directly from wavefront slopes by pre-measuring the relational matrix between the deformable mirror actuators and the Hartmann wavefront sensor, with excellent real-time characteristics and stability. However, as the numbers of sub-apertures in the wavefront sensor and of deformable mirror actuators in adaptive optics systems increase, the matrix operation in the direct gradient algorithm takes too much time, which becomes a major factor limiting the control effect of adaptive optics systems. In this paper we apply an iterative wavefront control algorithm to high-resolution adaptive optics systems, in which the voltages of each actuator are obtained through iteration, which gains a great advantage in calculation and storage. For an AO system with thousands of actuators, the computational complexity is about O(n^2) to O(n^3) for the direct gradient wavefront control algorithm, while it is about O(n) to O(n^{3/2}) for the iterative wavefront control algorithm, where n is the number of actuators of the AO system. The larger the numbers of sub-apertures and deformable mirror actuators, the more significant the advantage the iterative wavefront control algorithm exhibits. Project supported by the National Key Scientific and Research Equipment Development Project of China (Grant No. ZDYZ2013-2), the National Natural Science Foundation of China (Grant No. 11173008), and the Sichuan Provincial Outstanding Youth Academic Technology Leaders Program, China (Grant No. 2012JQ0012).
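
    The trade-off can be illustrated with a toy Python comparison between a precomputed matrix-vector reconstruction (direct) and an iterative conjugate-gradient solve; the system size and matrix below are invented stand-ins for a measured interaction matrix.

      import numpy as np
      from scipy.sparse.linalg import cg

      rng = np.random.default_rng(0)
      n = 500                                  # "actuators" (illustrative size)
      A = np.eye(n) + 0.01 * rng.standard_normal((n, n))
      A = A @ A.T                              # symmetric positive definite
      s = rng.standard_normal(n)               # slope-derived measurement vector

      # direct method: invert once offline, then one O(n^2) product per frame
      R = np.linalg.inv(A)
      v_direct = R @ s

      # iterative method: solve A v = s each frame, no explicit inverse stored
      v_iter, info = cg(A, s)
      print(info, float(np.linalg.norm(v_direct - v_iter)))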

  12. Using Alternative Multiplication Algorithms to "Offload" Cognition

    ERIC Educational Resources Information Center

    Jazby, Dan; Pearn, Cath

    2015-01-01

    When viewed through a lens of embedded cognition, algorithms may enable aspects of the cognitive work of multi-digit multiplication to be "offloaded" to the environmental structure created by an algorithm. This study analyses four multiplication algorithms by viewing different algorithms as enabling cognitive work to be distributed…

  13. Distilling the Verification Process for Prognostics Algorithms

    NASA Technical Reports Server (NTRS)

    Roychoudhury, Indranil; Saxena, Abhinav; Celaya, Jose R.; Goebel, Kai

    2013-01-01

    The goal of prognostics and health management (PHM) systems is to ensure system safety, and reduce downtime and maintenance costs. It is important that a PHM system is verified and validated before it can be successfully deployed. Prognostics algorithms are integral parts of PHM systems. This paper investigates a systematic process of verification of such prognostics algorithms. To this end, first, this paper distinguishes between technology maturation and product development. Then, the paper describes the verification process for a prognostics algorithm as it moves up to higher maturity levels. This process is shown to be an iterative process where verification activities are interleaved with validation activities at each maturation level. In this work, we adopt the concept of technology readiness levels (TRLs) to represent the different maturity levels of a prognostics algorithm. It is shown that at each TRL, the verification of a prognostics algorithm depends on verifying the different components of the algorithm according to the requirements laid out by the PHM system that adopts this prognostics algorithm. Finally, using simplified examples, the systematic process for verifying a prognostics algorithm is demonstrated as the prognostics algorithm moves up TRLs.

  14. Comparative measurements of plasma potential with ball-pen and Langmuir probe in low-temperature magnetized plasma

    NASA Astrophysics Data System (ADS)

    Zanáška, M.; Adámek, J.; Peterka, M.; Kudrna, P.; Tichý, M.

    2015-03-01

    The ball-pen probe (BPP) is used for direct plasma potential measurements in magnetized plasma. The probe can adjust the ratio of the electron and ion saturation currents, I_sat^- / I_sat^+, to be close to one, and therefore its I-V characteristic becomes nearly symmetric. If this is achieved, the floating potential of the BPP is close to the plasma potential. Because of its rather simple construction, it is an attractive probe for measurements in magnetized plasma. Comparative measurements of plasma potential by BPPs of different dimensions as well as one Langmuir probe (LP) in an argon discharge plasma of a cylindrical magnetron were performed under various experimental conditions. An additional comparison with an emissive probe was also performed. All these types of probes provide similar values of plasma potential over a wide range of plasma parameters. Our results for three different BPP dimensions indicate that the BPP can be operated in a cylindrical magnetron DC argon discharge if the ratio of the magnetic field to the neutral gas pressure, B/p, is greater than approximately 10 mT/Pa.

  15. Hybrid employment recommendation algorithm based on Spark

    NASA Astrophysics Data System (ADS)

    Li, Zuoquan; Lin, Yubei; Zhang, Xingming

    2017-08-01

    To support the real-time application of a collaborative filtering employment recommendation algorithm (CF), a clustering collaborative filtering recommendation algorithm (CCF) is developed, which applies hierarchical clustering to CF and narrows the query range of neighbour items. In addition, to solve the cold-start problem of the content-based recommendation algorithm (CB), a content-based algorithm with users’ information (CBUI) is introduced for job recommendation. Furthermore, a hybrid recommendation algorithm (HRA) which combines the CCF and CBUI algorithms is proposed and implemented on the Spark platform. The experimental results show that HRA can overcome the problems of cold start and data sparsity, and achieve good recommendation accuracy and scalability for employment recommendation.

  16. Categorizing Variations of Student-Implemented Sorting Algorithms

    ERIC Educational Resources Information Center

    Taherkhani, Ahmad; Korhonen, Ari; Malmi, Lauri

    2012-01-01

    In this study, we examined freshmen students' sorting algorithm implementations in data structures and algorithms' course in two phases: at the beginning of the course before the students received any instruction on sorting algorithms, and after taking a lecture on sorting algorithms. The analysis revealed that many students have insufficient…

  17. Project resource reallocation algorithm

    NASA Technical Reports Server (NTRS)

    Myers, J. E.

    1981-01-01

    A methodology for adjusting baseline cost estimates according to project schedule changes is described. An algorithm which performs a linear expansion or contraction of the baseline project resource distribution in proportion to the project schedule expansion or contraction is presented. Input to the algorithm consists of the deck of cards (PACE input data) prepared for the baseline project schedule as well as a specification of the nature of the baseline schedule change. Output of the algorithm is a new deck of cards with all work breakdown structure block and element of cost estimates redistributed for the new project schedule. This new deck can be processed through PACE to produce a detailed cost estimate for the new schedule.
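
    A minimal Python sketch of the core idea, linearly stretching or compressing a per-period resource distribution while conserving the total, follows; the PACE card format itself is not modeled.

      import numpy as np

      def rescale_resources(baseline, new_len):
          """Linearly stretch/compress a per-period resource distribution
          to a new schedule length while conserving the total."""
          old = np.asarray(baseline, dtype=float)
          new = np.interp(np.linspace(0.0, 1.0, new_len),
                          np.linspace(0.0, 1.0, len(old)), old)
          return new * old.sum() / new.sum()      # preserve total cost

      print(rescale_resources([10, 20, 40, 20, 10], 8).round(2))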

  18. Algorithm Estimates Microwave Water-Vapor Delay

    NASA Technical Reports Server (NTRS)

    Robinson, Steven E.

    1989-01-01

    Accuracy equals or exceeds that of conventional linear algorithms. The "profile" algorithm is an improved algorithm that uses water-vapor-radiometer data to produce estimates of microwave delays caused by water vapor in the troposphere. It does not require site-specific and weather-dependent empirical parameters other than standard meteorological data, latitude, and altitude, used in conjunction with published standard atmospheric data. The basic premise of the profile algorithm is that the wet-path delay is closely approximated by the solution to a simplified version of the nonlinear delay problem, generated numerically from each radiometer observation and simultaneous meteorological data.

  19. Decoding algorithm for vortex communications receiver

    NASA Astrophysics Data System (ADS)

    Kupferman, Judy; Arnon, Shlomi

    2018-01-01

    Vortex light beams can provide a tremendous alphabet for encoding information. We derive a symbol decoding algorithm for a direct detection matrix detector vortex beam receiver using Laguerre Gauss (LG) modes, and develop a mathematical model of symbol error rate (SER) for this receiver. We compare SER as a function of signal to noise ratio (SNR) for our algorithm and for the Pearson correlation algorithm. To our knowledge, this is the first comprehensive treatment of a decoding algorithm of a matrix detector for an LG receiver.
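
    The Pearson-correlation comparison algorithm mentioned above can be sketched in a few lines of Python; the 8×8 "templates" below are random stand-ins for actual LG-mode intensity patterns.

      import numpy as np

      def decode_symbol(frame, templates):
          """Pick the symbol whose intensity template has the highest
          Pearson correlation with the detector-matrix frame."""
          v = frame.ravel()
          scores = [np.corrcoef(v, t.ravel())[0, 1] for t in templates]
          return int(np.argmax(scores))

      rng = np.random.default_rng(0)
      templates = [rng.random((8, 8)) for _ in range(4)]   # stand-in patterns
      noisy = templates[2] + 0.3 * rng.standard_normal((8, 8))
      print(decode_symbol(noisy, templates))               # -> 2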

  20. A new bio-inspired optimisation algorithm: Bird Swarm Algorithm

    NASA Astrophysics Data System (ADS)

    Meng, Xian-Bing; Gao, X. Z.; Lu, Lihua; Liu, Yu; Zhang, Hengzhen

    2016-07-01

    A new bio-inspired algorithm, namely Bird Swarm Algorithm (BSA), is proposed for solving optimisation applications. BSA is based on the swarm intelligence extracted from the social behaviours and social interactions in bird swarms. Birds mainly have three kinds of behaviours: foraging behaviour, vigilance behaviour and flight behaviour. Birds may forage for food and escape from the predators by the social interactions to obtain a high chance of survival. By modelling these social behaviours, social interactions and the related swarm intelligence, four search strategies associated with five simplified rules are formulated in BSA. Simulations and comparisons based on eighteen benchmark problems demonstrate the effectiveness, superiority and stability of BSA. Some proposals for future research about BSA are also discussed.

  1. Selecting materialized views using random algorithm

    NASA Astrophysics Data System (ADS)

    Zhou, Lijuan; Hao, Zhongxiao; Liu, Chi

    2007-04-01

    The data warehouse is a repository of information collected from multiple, possibly heterogeneous, autonomous distributed databases. The information stored at the data warehouse is in the form of views, referred to as materialized views. The selection of the materialized views is one of the most important decisions in designing a data warehouse. Materialized views are stored in the data warehouse for the purpose of efficiently implementing on-line analytical processing queries. The first issue for the user to consider is query response time. So in this paper, we develop algorithms to select a set of views to materialize in a data warehouse in order to minimize the total view maintenance cost under the constraint of a given query response time; we call this the query-cost view-selection problem. First, the cost graph and cost model of the query-cost view-selection problem are presented. Second, methods for selecting materialized views using randomized algorithms are presented. The genetic algorithm is applied to the materialized view selection problem, but as the genetic process develops, legal solutions become more and more difficult to produce, so many solutions are eliminated and the time needed to produce solutions lengthens. Therefore, an improved algorithm is presented in this paper, which combines a simulated annealing algorithm with the genetic algorithm to solve the query-cost view-selection problem. Finally, simulation experiments are adopted to test the function and efficiency of our algorithms. The experiments show that the given methods can provide near-optimal solutions in limited time and work better in practical cases. Randomized algorithms will become invaluable tools for data warehouse evolution.
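
    A minimal Python sketch of the simulated-annealing component applied to view selection follows; the cost function and response-time constraint are toy stand-ins for the paper's cost graph and cost model.

      import math
      import random

      def anneal_views(views, cost, budget_ok, t0=100.0, cooling=0.98,
                       iters=2000, seed=0):
          """Simulated-annealing search over subsets of candidate views.
          cost(S): total view-maintenance cost; budget_ok(S): whether the
          query-response-time constraint holds. Both are toy stand-ins."""
          rng = random.Random(seed)
          current = set()
          cur_cost = cost(current)
          best, best_cost = set(current), cur_cost
          t = t0
          for _ in range(iters):
              cand = set(current)
              cand.symmetric_difference_update({rng.choice(views)})  # flip one view
              if budget_ok(cand):
                  delta = cost(cand) - cur_cost
                  # accept improvements always, worsenings with Boltzmann probability
                  if delta < 0 or rng.random() < math.exp(-delta / t):
                      current, cur_cost = cand, cur_cost + delta
                      if cur_cost < best_cost:
                          best, best_cost = set(current), cur_cost
              t *= cooling  # cool down
          return best, best_cost

      views = list(range(10))
      cost = lambda S: 100.0 - 7.0 * len(S) + float(sum(S))  # toy maintenance cost
      budget_ok = lambda S: len(S) <= 5                      # toy response-time budget
      print(anneal_views(views, cost, budget_ok))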

  2. Efficient Approximation Algorithms for Weighted $b$-Matching

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Khan, Arif; Pothen, Alex; Mostofa Ali Patwary, Md.

    2016-01-01

    We describe a half-approximation algorithm, b-Suitor, for computing a b-Matching of maximum weight in a graph with weights on the edges. b-Matching is a generalization of the well-known Matching problem in graphs, where the objective is to choose a subset M of edges in the graph such that at most a specified number b(v) of edges in M are incident on each vertex v. Subject to this restriction we maximize the sum of the weights of the edges in M. We prove that the b-Suitor algorithm computes the same b-Matching as the one obtained by the greedy algorithm for the problem. We implement the algorithm on serial and shared-memory parallel processors, and compare its performance against a collection of approximation algorithms that have been proposed for the Matching problem. Our results show that the b-Suitor algorithm outperforms the Greedy and Locally Dominant edge algorithms by one to two orders of magnitude on a serial processor. The b-Suitor algorithm has a high degree of concurrency, and it scales well up to 240 threads on a shared-memory multiprocessor. The b-Suitor algorithm outperforms the Locally Dominant edge algorithm by a factor of fourteen on 16 cores of an Intel Xeon multiprocessor.
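
    The greedy algorithm referenced above (which b-Suitor provably matches) is simple to sketch in Python: scan the edges in order of decreasing weight and keep an edge while both endpoints still have spare capacity b(v).

      def greedy_b_matching(edges, b):
          """Greedy 1/2-approximation for maximum-weight b-matching:
          keep an edge while both endpoints have spare capacity b(v)."""
          chosen, load = [], {}
          for w, u, v in sorted(edges, reverse=True):
              if load.get(u, 0) < b[u] and load.get(v, 0) < b[v]:
                  chosen.append((u, v, w))
                  load[u] = load.get(u, 0) + 1
                  load[v] = load.get(v, 0) + 1
          return chosen

      edges = [(5, "a", "b"), (4, "b", "c"), (3, "a", "c"), (1, "c", "d")]
      b = {"a": 1, "b": 2, "c": 2, "d": 1}
      print(greedy_b_matching(edges, b))  # [('a','b',5), ('b','c',4), ('c','d',1)]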

  3. [An improved algorithm for electrohysterogram envelope extraction].

    PubMed

    Lu, Yaosheng; Pan, Jie; Chen, Zhaoxia

    2017-02-01

    Extracting the uterine contraction signal from the abdominal uterine electromyogram (EMG) signal is considered the most promising method to replace the traditional tocodynamometer (TOCO) for detecting uterine contraction activity. The traditional root mean square (RMS) algorithm has only limited value in canceling impulsive noise. In our study, an improved algorithm for uterine EMG envelope extraction was proposed to overcome this problem. Firstly, in our experiment, a zero-crossing detection method was used to separate the bursts of uterine electrical activity from the raw uterine EMG signal. After processing the separated signals with two filtering windows of different widths, we used the traditional RMS algorithm to extract the uterine EMG envelope. To assess the performance of the algorithm, the improved algorithm was compared with two existing intensity of uterine electromyogram (IEMG) extraction algorithms. The results showed that the improved algorithm was better than the traditional ones in eliminating impulsive noise present in the uterine EMG signal. The measurement sensitivity and positive predictive value (PPV) of the improved algorithm were 0.952 and 0.922, respectively, which were not only significantly higher than the corresponding values (0.859 and 0.847) of the first comparison algorithm, but also higher than the values (0.928 and 0.877) of the second comparison algorithm. Thus the new method is reliable and effective.
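
    For context, the traditional moving-RMS envelope that the improved algorithm builds on can be sketched in Python as follows; the sampling rate and window length are illustrative.

      import numpy as np

      def rms_envelope(signal, fs=20.0, win_s=5.0):
          """Moving-RMS envelope: square, average over a sliding window,
          take the square root."""
          k = max(1, int(win_s * fs))
          return np.sqrt(np.convolve(signal ** 2, np.ones(k) / k, mode="same"))

      fs = 20.0
      t = np.arange(0.0, 60.0, 1.0 / fs)
      burst = np.sin(2 * np.pi * 0.5 * t) * ((t > 20) & (t < 40))  # 20-s burst
      print(round(float(rms_envelope(burst, fs).max()), 2))        # ~0.71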

  4. Seamless Merging of Hypertext and Algorithm Animation

    ERIC Educational Resources Information Center

    Karavirta, Ville

    2009-01-01

    Online learning material that students use by themselves is one of the typical usages of algorithm animation (AA). Thus, the integration of algorithm animations into hypertext is seen as an important topic today to promote the usage of algorithm animation in teaching. This article presents an algorithm animation viewer implemented purely using…

  5. Multicore and GPU algorithms for Nussinov RNA folding

    PubMed Central

    2014-01-01

    Background One segment of an RNA sequence might be paired with another segment of the same RNA sequence due to the force of hydrogen bonds. This two-dimensional structure is called the RNA sequence's secondary structure. Several algorithms have been proposed to predict an RNA sequence's secondary structure; these are referred to as RNA folding algorithms. Results We develop cache-efficient, multicore, and GPU algorithms for RNA folding using Nussinov's algorithm. Conclusions Our cache-efficient algorithm provides a speedup between 1.6 and 3.0 relative to a naive straightforward single-core code. The multicore version of the cache-efficient single-core algorithm provides a speedup, relative to the naive single-core algorithm, between 7.5 and 14.0 on a 6-core hyperthreaded CPU. Our GPU algorithm for the NVIDIA C2050 is up to 1582 times as fast as the naive single-core algorithm and between 5.1 and 11.2 times as fast as the fastest previously known GPU algorithm for Nussinov RNA folding. PMID:25082539
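
    For reference, the underlying Nussinov dynamic program (maximizing the number of base pairs) can be written as a short single-core Python sketch; the cache-efficient, multicore, and GPU variants of the paper build on this recurrence.

      def nussinov(seq, min_loop=1):
          """Nussinov maximum base-pairing DP, O(n^3) time, O(n^2) space:
          dp[i][j] = maximum number of pairs within seq[i..j]."""
          pairs = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"),
                   ("G", "U"), ("U", "G")}
          n = len(seq)
          dp = [[0] * n for _ in range(n)]
          for span in range(min_loop + 1, n):
              for i in range(n - span):
                  j = i + span
                  best = dp[i + 1][j]                       # i left unpaired
                  for k in range(i + min_loop + 1, j + 1):  # i pairs with k
                      if (seq[i], seq[k]) in pairs:
                          right = dp[k + 1][j] if k < j else 0
                          best = max(best, 1 + dp[i + 1][k - 1] + right)
                  dp[i][j] = best
          return dp[0][n - 1]

      print(nussinov("GGGAAAUCC"))  # 3 pairs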

  6. Array architectures for iterative algorithms

    NASA Technical Reports Server (NTRS)

    Jagadish, Hosagrahar V.; Rao, Sailesh K.; Kailath, Thomas

    1987-01-01

    Regular mesh-connected arrays are shown to be isomorphic to a class of so-called regular iterative algorithms. For a wide variety of problems it is shown how to obtain appropriate iterative algorithms and then how to translate these algorithms into arrays in a systematic fashion. Several 'systolic' arrays presented in the literature are shown to be specific cases of the variety of architectures that can be derived by the techniques presented here. These include arrays for Fourier Transform, Matrix Multiplication, and Sorting.

  7. Testing block subdivision algorithms on block designs

    NASA Astrophysics Data System (ADS)

    Wiseman, Natalie; Patterson, Zachary

    2016-01-01

    Integrated land use-transportation models predict future transportation demand taking into account how households and firms arrange themselves partly as a function of the transportation system. Recent integrated models require parcels as inputs and produce household and employment predictions at the parcel scale. Block subdivision algorithms automatically generate parcel patterns within blocks. Evaluating block subdivision algorithms is done by way of generating parcels and comparing them to those in a parcel database. Three block subdivision algorithms are evaluated on how closely they reproduce parcels of different block types found in a parcel database from Montreal, Canada. While the authors who developed each of the algorithms have evaluated them, they have used their own metrics and block types to evaluate their own algorithms. This makes it difficult to compare their strengths and weaknesses. The contribution of this paper is in resolving this difficulty with the aim of finding a better algorithm suited to subdividing each block type. The proposed hypothesis is that given the different approaches that block subdivision algorithms take, it's likely that different algorithms are better adapted to subdividing different block types. To test this, a standardized block type classification is used that consists of mutually exclusive and comprehensive categories. A statistical method is used for finding a better algorithm and the probability it will perform well for a given block type. Results suggest the oriented bounding box algorithm performs better for warped non-uniform sites, as well as gridiron and fragmented uniform sites. It also produces more similar parcel areas and widths. The Generalized Parcel Divider 1 algorithm performs better for gridiron non-uniform sites. The Straight Skeleton algorithm performs better for loop and lollipop networks as well as fragmented non-uniform and warped uniform sites. It also produces more similar parcel shapes and patterns.

  8. Digital signal processing algorithms for automatic voice recognition

    NASA Technical Reports Server (NTRS)

    Botros, Nazeih M.

    1987-01-01

    Current digital signal analysis algorithms that are implemented in automatic voice recognition are investigated. Automatic voice recognition means the capability of a computer to recognize and interact with verbal commands. The focus is on digital signal analysis rather than linguistic analysis of the speech signal. Several digital signal processing algorithms are available for voice recognition, among them Linear Predictive Coding (LPC), short-time Fourier analysis, and cepstrum analysis. Among these algorithms, LPC is the most widely used: it has a short execution time and does not require large memory storage. However, it has several limitations due to the assumptions used to develop it. The other two algorithms are frequency-domain algorithms with fewer assumptions, but they are not widely implemented or investigated. However, with recent advances in digital technology, namely signal processors, these two frequency-domain algorithms may be investigated in order to implement them in voice recognition. This research is concerned with real-time, microprocessor-based recognition algorithms.

  9. N-terminal pro-B-type natriuretic peptide diagnostic algorithm versus American Heart Association algorithm for Kawasaki disease.

    PubMed

    Dionne, Audrey; Meloche-Dumas, Léamarie; Desjardins, Laurent; Turgeon, Jean; Saint-Cyr, Claire; Autmizguine, Julie; Spigelblatt, Linda; Fournier, Anne; Dahdah, Nagib

    2017-03-01

    Diagnosis of Kawasaki disease (KD) can be challenging in the absence of a confirmatory test or pathognomonic finding, especially when clinical criteria are incomplete. We recently proposed serum N-terminal pro-B-type natriuretic peptide (NT-proBNP) as an adjunctive diagnostic test. We retrospectively tested a new algorithm to aid KD diagnosis based on NT-proBNP, coronary artery dilation (CAD) at onset, and abnormal serum albumin or C-reactive protein (CRP). The goal was to assess the performance of the algorithm and compare it with that of the 2004 American Heart Association (AHA)/American Academy of Pediatrics (AAP) algorithm. The algorithm was tested on 124 KD patients with NT-proBNP measured on admission at the present institutions between 2007 and 2013. Age at diagnosis was 3.4 ± 3.0 years, with a median of five diagnostic criteria; 55 of the 124 patients (44%) had incomplete KD. Coronary artery complications occurred in 64 (52%), with aneurysm in 14 (11%). Using this algorithm, 120/124 (97%) were to be treated: based on high NT-proBNP alone for 79 (64%), on CAD at onset for 14 (11%), and on high CRP or low albumin for 27 (22%). Using the AHA/AAP algorithm, 22/47 (47%) of the eligible patients with incomplete KD would not have been referred for treatment, compared with 3/55 (5%) with the NT-proBNP algorithm (P < 0.001). This NT-proBNP-based algorithm is effective in identifying and treating patients with KD, including those with incomplete KD. This study paves the way for a prospective validation trial of the algorithm. © 2016 Japan Pediatric Society.
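
    The branching structure described above can be summarized in a few lines of code. The sketch below only mirrors the abstract's treat/no-treat logic; the boolean flags stand in for the paper's actual laboratory cut-offs, which are not given here:

      def nt_probnp_algorithm(nt_probnp_high, cad_at_onset, crp_high, albumin_low):
          """Hypothetical rendering of the NT-proBNP-based decision rule:
          treat on high NT-proBNP, else on CAD at onset, else on abnormal
          CRP/albumin; otherwise re-evaluate."""
          if nt_probnp_high:
              return "treat (high NT-proBNP)"
          if cad_at_onset:
              return "treat (CAD at onset)"
          if crp_high or albumin_low:
              return "treat (high CRP or low albumin)"
          return "observe / re-evaluate"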

  10. Faster Parameterized Algorithms for Minor Containment

    NASA Astrophysics Data System (ADS)

    Adler, Isolde; Dorn, Frederic; Fomin, Fedor V.; Sau, Ignasi; Thilikos, Dimitrios M.

    The theory of Graph Minors by Robertson and Seymour is one of the deepest and most significant theories in modern Combinatorics. The theory also has a strong impact on the recent development of Algorithms, and several areas, like Parameterized Complexity, have roots in Graph Minors. Until very recently it was a common belief that Graph Minors theory is mainly of theoretical importance. However, it appears that many deep results from Robertson and Seymour's theory can also be used in the design of practical algorithms. Minor containment testing is one of the algorithmically most important and technical parts of the theory, and minor containment in graphs of bounded branchwidth is a basic ingredient of this algorithm. In order to implement minor containment testing on graphs of bounded branchwidth, Hicks [NETWORKS 04] described an algorithm that, in time O(3^{k^2} \cdot (h+k-1)! \cdot m), decides if a graph G with m edges and branchwidth k contains a fixed graph H on h vertices as a minor. That algorithm follows the ideas introduced by Robertson and Seymour in [JCTSB 95]. In this work we improve the dependence on k of Hicks' result by showing that checking if H is a minor of G can be done in time O(2^{(2k+1) \log k} \cdot h^{2k} \cdot 2^{2h^2} \cdot m). Our approach is based on a combinatorial object called a rooted packing, which captures the properties of the potential models of subgraphs of H that we seek in our dynamic programming algorithm. This formulation with rooted packings allows us to speed up the algorithm when G is embedded in a fixed surface, obtaining the first single-exponential algorithm for minor containment testing. Namely, it runs in time 2^{O(k)} \cdot h^{2k} \cdot 2^{O(h)} \cdot n, with n = |V(G)|. Finally, we show that slight modifications of our algorithm permit solving some related problems within the same time bounds, like induced minor or contraction minor containment.

  11. PSC algorithm description

    NASA Technical Reports Server (NTRS)

    Nobbs, Steven G.

    1995-01-01

    An overview of the performance seeking control (PSC) algorithm and details of the important components of the algorithm are given. The onboard propulsion system models, the linear programming optimization, and the engine control interface are described. The PSC algorithm receives input from various computers on the aircraft, including the digital flight computer, digital engine control, and electronic inlet control. The PSC algorithm contains compact models of the propulsion system including the inlet, engine, and nozzle. The models compute propulsion system parameters, such as inlet drag and fan stall margin, which are not directly measurable in flight. The compact models also compute sensitivities of the propulsion system parameters to changes in the control variables. The engine model consists of a linear steady state variable model (SSVM) and a nonlinear model. The SSVM is updated with efficiency factors calculated in the engine model update logic, or Kalman filter. The efficiency factors are used to adjust the SSVM to match the actual engine. The propulsion system models are mathematically integrated to form an overall propulsion system model. The propulsion system model is then optimized using a linear programming optimization scheme. The goal of the optimization is determined by the selected PSC mode of operation. The resulting trims are used to compute a new operating point, about which the optimization process is repeated. This process continues until an overall (global) optimum is reached, at which point the trims are applied to the controllers.
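
    The optimize-retrim-relinearize loop described above has the shape of a sequential linear program. A generic sketch follows; the model function and bounds are placeholders, not the PSC implementation:

      import numpy as np
      from scipy.optimize import linprog

      def optimize_trims(model, x0, trim_bound=0.05, tol=1e-6, max_iter=50):
          """Linearize the propulsion model about the current operating point,
          solve an LP for small control trims, apply them, and repeat until
          the trims vanish (a global optimum in the sense used above).
          `model(x)` is a hypothetical stand-in returning the local cost
          gradient and linearized constraints (c, A_ub, b_ub) at x."""
          x = np.asarray(x0, dtype=float)
          for _ in range(max_iter):
              c, A_ub, b_ub = model(x)
              bounds = [(-trim_bound, trim_bound)] * len(x)  # keep trims small
              res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
              if not res.success or np.linalg.norm(res.x) < tol:
                  break
              x = x + res.x   # apply trims, then re-linearize about new point
          return x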

  12. A community detection algorithm based on structural similarity

    NASA Astrophysics Data System (ADS)

    Guo, Xuchao; Hao, Xia; Liu, Yaqiong; Zhang, Li; Wang, Lu

    2017-09-01

    In order to further improve the efficiency and accuracy of community detection, a new algorithm named SSTCA (community detection algorithm based on structural similarity with threshold) is proposed. In this algorithm, structural similarities are taken as the weights of edges, and a threshold k is used to remove the edges whose weights are less than the threshold, improving the computational efficiency. The proposed algorithm was tested on Zachary's network, the dolphin social network, and the football dataset, and compared with the GN and SSNCA algorithms. The results show that the new algorithm is superior to the other algorithms in accuracy for dense networks, and that its operating efficiency is clearly improved.
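
    The edge-weighting step is standard enough to sketch. Below, structural similarity is taken as the cosine similarity of closed neighbourhoods (one common definition), and edges under the threshold are dropped; this is only the pre-processing the abstract describes, not the full SSTCA algorithm:

      import networkx as nx

      def prune_by_structural_similarity(G, threshold):
          """Weight each edge by the structural similarity of its endpoints'
          closed neighbourhoods, then remove edges below the threshold."""
          H = G.copy()
          for u, v in list(H.edges()):
              nu, nv = set(H[u]) | {u}, set(H[v]) | {v}
              H[u][v]["weight"] = len(nu & nv) / (len(nu) * len(nv)) ** 0.5
          H.remove_edges_from([(u, v) for u, v, w in H.edges(data="weight")
                               if w < threshold])
          return H

      # e.g. on one of the test networks named above:
      # H = prune_by_structural_similarity(nx.karate_club_graph(), 0.4)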

  13. Two Improved Algorithms for Envelope and Wavefront Reduction

    NASA Technical Reports Server (NTRS)

    Kumfert, Gary; Pothen, Alex

    1997-01-01

    Two algorithms for reordering sparse, symmetric matrices or undirected graphs to reduce envelope and wavefront are considered. The first is a combinatorial algorithm introduced by Sloan and further developed by Duff, Reid, and Scott; we describe enhancements to the Sloan algorithm that improve its quality and reduce its run time. Our test problems fall into two classes with differing asymptotic behavior of their envelope parameters as a function of the weights in the Sloan algorithm. We describe an efficient O(n log n + m) time implementation of the Sloan algorithm, where n is the number of rows (vertices) and m is the number of nonzeros (edges). On a collection of test problems, the improved Sloan algorithm required, on average, only twice the time required by the simpler Reverse Cuthill-McKee algorithm while improving the mean square wavefront by a factor of three. The second algorithm is a hybrid that combines a spectral algorithm for envelope and wavefront reduction with a refinement step that uses a modified Sloan algorithm. The hybrid algorithm reduces the envelope size and mean square wavefront obtained from the Sloan algorithm at the cost of greater running times. We illustrate how these reductions translate into tangible benefits for frontal Cholesky factorization and incomplete factorization preconditioning.
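
    The Reverse Cuthill-McKee baseline used for comparison is readily available; a minimal sketch using SciPy's implementation (the Sloan and hybrid algorithms of the paper are not in SciPy):

      import numpy as np
      from scipy.sparse import csr_matrix
      from scipy.sparse.csgraph import reverse_cuthill_mckee

      # a small symmetric sparse matrix
      rows = [0, 0, 1, 2, 3, 4]
      cols = [3, 4, 2, 4, 0, 2]
      A = csr_matrix((np.ones(len(rows)), (rows, cols)), shape=(5, 5))
      A = A + A.T   # symmetrize

      perm = reverse_cuthill_mckee(A, symmetric_mode=True)
      B = A[perm][:, perm]   # reordered matrix
      r, c = B.nonzero()
      print(perm, np.abs(r - c).max())   # permutation and resulting bandwidth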

  14. Privacy Preservation in Distributed Subgradient Optimization Algorithms.

    PubMed

    Lou, Youcheng; Yu, Lean; Wang, Shouyang; Yi, Peng

    2017-07-31

    In this paper, some privacy-preserving features for distributed subgradient optimization algorithms are considered. Most of the existing distributed algorithms focus mainly on the algorithm design and convergence analysis, but not on the protection of agents' privacy. Privacy is becoming an increasingly important issue in applications involving sensitive information. In this paper, we first show that the distributed subgradient synchronous homogeneous-stepsize algorithm is not privacy preserving, in the sense that a malicious agent can asymptotically discover other agents' subgradients by transmitting untrue estimates to its neighbors. Then a distributed subgradient asynchronous heterogeneous-stepsize projection algorithm is proposed, and its convergence and optimality are established accordingly. In contrast to the synchronous homogeneous-stepsize algorithm, in the new algorithm agents make their optimization updates asynchronously, with heterogeneous stepsizes. The two introduced mechanisms of projection and asynchronous heterogeneous-stepsize optimization guarantee that agents' privacy can be effectively protected.
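
    A numpy sketch of the synchronous baseline with the paper's two mechanisms grafted on (projection and per-agent stepsizes) gives the flavor; the actual asynchronous update order and convergence conditions are in the paper, not here:

      import numpy as np

      def distributed_subgradient(subgrads, project, W, x0, steps=200):
          """Each agent averages neighbours' estimates through the mixing
          matrix W, steps along its local subgradient with its own stepsize
          schedule, and projects onto the constraint set."""
          n = len(subgrads)
          x = np.tile(np.asarray(x0, dtype=float), (n, 1))
          for t in range(1, steps + 1):
              x = W @ x   # consensus averaging with neighbours
              for i in range(n):
                  alpha_i = 1.0 / (t + 10 * i + 1)   # heterogeneous stepsizes
                  x[i] = project(x[i] - alpha_i * subgrads[i](x[i]))
          return x.mean(axis=0)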

  15. The Texas Children's Medication Algorithm Project: Revision of the Algorithm for Pharmacotherapy of Attention-Deficit/Hyperactivity Disorder

    ERIC Educational Resources Information Center

    Pliszka, Steven R.; Crismon, M. Lynn; Hughes, Carroll W.; Conners, C. Keith; Emslie, Graham J.; Jensen, Peter S.; McCracken, James T.; Swanson, James M.; Lopez, Molly

    2006-01-01

    Objective: In 1998, the Texas Department of Mental Health and Mental Retardation developed algorithms for medication treatment of attention-deficit/hyperactivity disorder (ADHD). Advances in the psychopharmacology of ADHD and results of a feasibility study of algorithm use in community mental health centers caused the algorithm to be modified and…

  16. Classification of voting algorithms for N-version software

    NASA Astrophysics Data System (ADS)

    Tsarev, R. Yu; Durmuş, M. S.; Üstoglu, I.; Morozov, V. A.

    2018-05-01

    A voting algorithm in N-version software is a crucial component that evaluates the execution of each of the N versions and determines the correct result. Obviously, the result of the voting algorithm determines the outcome of the N-version software in general; thus, the choice of the voting algorithm is a vital issue. Many voting algorithms have already been developed, and they may be selected for implementation based on the specifics of the analysis of the input data. However, the voting algorithms applied in N-version software have not been classified. This article presents an overview of classic and recent voting algorithms used in N-version software and the authors' classification of the voting algorithms. Moreover, the steps of the voting algorithms are presented and the distinctive features of voting algorithms in N-version software are defined.
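
    The classic exact-match majority voter, the simplest member of the family surveyed above, fits in a few lines (a sketch for discrete outputs; inexact voting, e.g. median selection or plurality with tolerances, needs more care):

      from collections import Counter

      def majority_voter(outputs, quorum=None):
          """Return the value produced by a majority (or caller-chosen quorum)
          of the N versions, or raise if no agreement exists."""
          value, count = Counter(outputs).most_common(1)[0]
          needed = quorum if quorum is not None else len(outputs) // 2 + 1
          if count >= needed:
              return value
          raise RuntimeError("voter: no majority among version outputs")

      # majority_voter([42, 42, 41]) -> 42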

  17. Operational algorithm development and refinement approaches

    NASA Astrophysics Data System (ADS)

    Ardanuy, Philip E.

    2003-11-01

    Next-generation polar and geostationary systems, such as the National Polar-orbiting Operational Environmental Satellite System (NPOESS) and the Geostationary Operational Environmental Satellite (GOES)-R, will deploy new generations of electro-optical reflective and emissive capabilities. These will include low-radiometric-noise, improved-spatial-resolution multi-spectral and hyperspectral imagers and sounders. To achieve specified performances (e.g., measurement accuracy, precision, uncertainty, and stability), and to best utilize the advanced space-borne sensing capabilities, a new generation of retrieval algorithms will be implemented. In most cases, these advanced algorithms benefit from ongoing testing and validation using heritage research mission algorithms and data, e.g., the Earth Observing System (EOS) Moderate-resolution Imaging Spectroradiometer (MODIS) and the Shuttle Ozone Limb Scattering Experiment (SOLSE)/Limb Ozone Retrieval Experiment (LORE). In these instances, an algorithm's theoretical basis is not static, but rather improves with time. Once frozen, an operational algorithm can "lose ground" relative to research analogs. Cost/benefit analyses provide a basis for change management. The challenge is in reconciling and balancing the stability, and "comfort," that today's generation of operational platforms provides (well-characterized, known sensors and algorithms) with the greatly improved quality, opportunities, and risks that the next generation of operational sensors and algorithms offers. By using the best practices and lessons learned from heritage/groundbreaking activities, it is possible to implement an agile process that enables change while managing it. This approach combines a "known-risk" frozen baseline and preset completion schedules with insertion opportunities for algorithm advances as ongoing validation activities identify and repair areas of weak performance. This paper describes an objective, adaptive implementation roadmap that

  18. Intelligent Use of CFAR Algorithms

    DTIC Science & Technology

    1993-05-01

    the reference windows can raise the threshold too high in many CFAR algorithms and result in masking of targets. GCMLD is a modification of CMLD that... (AD-A267 755, RL-TR-93-75, interim report covering Jan 92 - Sep 92, May 1993; Kaman Sciences Corporation, P. Antonik et al.; contract F30602-91-C-0017.)

  19. Approximation algorithms for planning and control

    NASA Technical Reports Server (NTRS)

    Boddy, Mark; Dean, Thomas

    1989-01-01

    A control system operating in a complex environment will encounter a variety of different situations, with varying amounts of time available to respond to critical events. Ideally, such a control system will do the best possible with the time available. In other words, its responses should approximate those that would result from having unlimited time for computation, where the degree of the approximation depends on the amount of time it actually has. There exist approximation algorithms for a wide variety of problems. Unfortunately, the solution to any reasonably complex control problem will require solving several computationally intensive problems. Algorithms for successive approximation are a subclass of the class of anytime algorithms, algorithms that return answers for any amount of computation time, where the answers improve as more time is allotted. An architecture is described for allocating computation time to a set of anytime algorithms, based on expectations regarding the value of the answers they return. The architecture described is quite general, producing optimal schedules for a set of algorithms under widely varying conditions.

  20. Gradient Evolution-based Support Vector Machine Algorithm for Classification

    NASA Astrophysics Data System (ADS)

    Zulvia, Ferani E.; Kuo, R. J.

    2018-03-01

    This paper proposes a classification algorithm based on a support vector machine (SVM) and gradient evolution (GE) algorithms. SVM algorithm has been widely used in classification. However, its result is significantly influenced by the parameters. Therefore, this paper aims to propose an improvement of SVM algorithm which can find the best SVMs’ parameters automatically. The proposed algorithm employs a GE algorithm to automatically determine the SVMs’ parameters. The GE algorithm takes a role as a global optimizer in finding the best parameter which will be used by SVM algorithm. The proposed GE-SVM algorithm is verified using some benchmark datasets and compared with other metaheuristic-based SVM algorithms. The experimental results show that the proposed GE-SVM algorithm obtains better results than other algorithms tested in this paper.
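
    The abstract does not spell out the GE operators, so the sketch below uses a generic mutate-and-select evolutionary loop as a stand-in, with cross-validated accuracy as the fitness for the (C, gamma) parameters of an RBF SVM:

      import numpy as np
      from sklearn.datasets import load_iris
      from sklearn.model_selection import cross_val_score
      from sklearn.svm import SVC

      rng = np.random.default_rng(0)
      X, y = load_iris(return_X_y=True)

      def fitness(log_c, log_gamma):
          clf = SVC(C=10.0 ** log_c, gamma=10.0 ** log_gamma)
          return cross_val_score(clf, X, y, cv=5).mean()

      pop = rng.uniform([-2, -4], [3, 1], size=(10, 2))   # (log C, log gamma)
      for _ in range(20):
          children = pop + rng.normal(0.0, 0.3, pop.shape)   # mutate
          both = np.vstack([pop, children])
          scores = np.array([fitness(c, g) for c, g in both])
          pop = both[np.argsort(scores)[-10:]]               # select the best
      best_c, best_gamma = 10.0 ** pop[-1]
      print("best C=%.3g, gamma=%.3g" % (best_c, best_gamma))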

  1. A new algorithm for attitude-independent magnetometer calibration

    NASA Technical Reports Server (NTRS)

    Alonso, Roberto; Shuster, Malcolm D.

    1994-01-01

    A new algorithm is developed for inflight magnetometer bias determination without knowledge of the attitude. This algorithm combines the fast convergence of a heuristic algorithm currently in use with the correct treatment of the statistics and without discarding data. The algorithm performance is examined using simulated data and compared with previous algorithms.
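
    The attitude independence rests on a standard algebraic step, sketched here for context (this is the usual formulation in the literature, not a derivation from the paper itself): with the measurement model B_k = A_k H_k + b + ε_k, where A_k is the unknown attitude matrix, H_k the reference field, and b the bias, squaring eliminates A_k:

      $z_k \equiv |B_k|^2 - |H_k|^2 = 2\,B_k \cdot b - |b|^2 + \nu_k$

    so the bias b can be estimated from the scalar measurements z_k without any attitude information.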

  2. The Chandra Source Catalog: Algorithms

    NASA Astrophysics Data System (ADS)

    McDowell, Jonathan; Evans, I. N.; Primini, F. A.; Glotfelty, K. J.; McCollough, M. L.; Houck, J. C.; Nowak, M. A.; Karovska, M.; Davis, J. E.; Rots, A. H.; Siemiginowska, A. L.; Hain, R.; Evans, J. D.; Anderson, C. S.; Bonaventura, N. R.; Chen, J. C.; Doe, S. M.; Fabbiano, G.; Galle, E. C.; Gibbs, D. G., II; Grier, J. D.; Hall, D. M.; Harbo, P. N.; He, X.; Lauer, J.; Miller, J. B.; Mitschang, A. W.; Morgan, D. L.; Nichols, J. S.; Plummer, D. A.; Refsdal, B. L.; Sundheim, B. A.; Tibbetts, M. S.; van Stone, D. W.; Winkelman, S. L.; Zografou, P.

    2009-09-01

    Creation of the Chandra Source Catalog (CSC) required adjustment of existing pipeline processing, adaptation of existing interactive analysis software for automated use, and development of entirely new algorithms. Data calibration was based on the existing pipeline, but more rigorous data cleaning was applied and the latest calibration data products were used. For source detection, a local background map was created including the effects of ACIS source readout streaks. The existing wavelet source detection algorithm was modified and a set of post-processing scripts used to correct the results. To analyse the source properties we ran the SAOTrace ray-trace code for each source to generate a model point spread function, allowing us to find encircled energy correction factors and estimate source extent. Further algorithms were developed to characterize the spectral, spatial and temporal properties of the sources and to estimate the confidence intervals on count rates and fluxes. Finally, sources detected in multiple observations were matched, and best estimates of their merged properties derived. In this paper we present an overview of the algorithms used, with more detailed treatment of some of the newly developed algorithms presented in companion papers.

  3. A quantum–quantum Metropolis algorithm

    PubMed Central

    Yung, Man-Hong; Aspuru-Guzik, Alán

    2012-01-01

    The classical Metropolis sampling method is a cornerstone of many statistical modeling applications that range from physics, chemistry, and biology to economics. This method is particularly suitable for sampling the thermal distributions of classical systems. The challenge of extending this method to the simulation of arbitrary quantum systems is that, in general, eigenstates of quantum Hamiltonians cannot be obtained efficiently with a classical computer. However, this challenge can be overcome by quantum computers. Here, we present a quantum algorithm which fully generalizes the classical Metropolis algorithm to the quantum domain. The meaning of quantum generalization is twofold: The proposed algorithm is not only applicable to both classical and quantum systems, but also offers a quantum speedup relative to the classical counterpart. Furthermore, unlike the classical method of quantum Monte Carlo, this quantum algorithm does not suffer from the negative-sign problem associated with fermionic systems. Applications of this algorithm include the study of low-temperature properties of quantum systems, such as the Hubbard model, and preparing the thermal states of sizable molecules to simulate, for example, chemical reactions at an arbitrary temperature. PMID:22215584

  4. A Novel Algorithm Combining Finite State Method and Genetic Algorithm for Solving Crude Oil Scheduling Problem

    PubMed Central

    Duan, Qian-Qian; Yang, Gen-Ke; Pan, Chang-Chun

    2014-01-01

    A hybrid optimization algorithm combining the finite state method (FSM) and a genetic algorithm (GA) is proposed to solve the crude oil scheduling problem. The FSM and GA are combined to take advantage of each method and compensate for the deficiencies of the individual methods. In the proposed algorithm, the finite state method makes up for the weakness of the GA, which has poor local search ability. The heuristic returned by the FSM can guide the GA towards good solutions. The idea behind this is that promising substructures or partial solutions can be generated using the FSM. Furthermore, the FSM can guarantee that the entire solution space is uniformly covered. Therefore, the combination of the two algorithms has better global performance than the existing GA or FSM operated individually. Finally, a real-life crude oil scheduling problem from the literature is used for conducting simulation. The experimental results validate that the proposed method outperforms the state-of-the-art GA method. PMID:24772031

  5. A controllable sensor management algorithm capable of learning

    NASA Astrophysics Data System (ADS)

    Osadciw, Lisa A.; Veeramacheneni, Kalyan K.

    2005-03-01

    Sensor management technology progress is challenged by the geographic space it spans, the heterogeneity of the sensors, and the real-time timeframes within which plans controlling the assets are executed. This paper presents a new sensor management paradigm and demonstrates its application in a sensor management algorithm designed for a biometric access control system. This approach consists of an artificial intelligence (AI) algorithm focused on uncertainty measures, which makes the high-level decisions to reduce uncertainties and interfaces with the user, integrated cohesively with a bottom-up evolutionary algorithm, which optimizes the sensor network's operation as determined by the AI algorithm. The sensor management algorithm presented is composed of a Bayesian network, the AI algorithm component, and a swarm optimization algorithm, the evolutionary algorithm component. Thus, the algorithm can change its own performance goals in real time and will modify its own decisions based on observed measures within the sensor network. The definition of the measures, as well as the Bayesian network, determines the robustness of the algorithm and its utility in reacting dynamically to changes in the global system.

  6. Basic firefly algorithm for document clustering

    NASA Astrophysics Data System (ADS)

    Mohammed, Athraa Jasim; Yusof, Yuhanis; Husni, Husniza

    2015-12-01

    Document clustering plays a significant role in Information Retrieval (IR), where it organizes documents prior to the retrieval process. To date, various clustering algorithms have been proposed, including K-means and Particle Swarm Optimization. Even though these algorithms have been widely applied in many disciplines due to their simplicity, such approaches tend to be trapped in a local minimum during the search for an optimal solution. To address the shortcoming, this paper proposes a Basic Firefly (Basic FA) algorithm to cluster text documents. The algorithm employs the Average Distance to Document Centroid (ADDC) as the objective function of the search. Experiments utilizing the proposed algorithm were conducted on the 20Newsgroups benchmark dataset. Results demonstrate that the Basic FA generates more robust and compact clusters than the ones produced by K-means and Particle Swarm Optimization (PSO).
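
    The ADDC objective is simple to state in code; a sketch under one common reading (mean over clusters of the mean document-to-centroid distance, here Euclidean on TF-IDF-style vectors):

      import numpy as np

      def addc(docs, labels, k):
          """Average Distance of Documents to the cluster Centroid: the
          fitness the firefly search optimizes for a candidate clustering."""
          total = 0.0
          for c in range(k):
              members = docs[labels == c]
              if len(members) == 0:
                  continue
              centroid = members.mean(axis=0)
              total += np.linalg.norm(members - centroid, axis=1).mean()
          return total / k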

  7. A sampling algorithm for segregation analysis

    PubMed Central

    Tier, Bruce; Henshall, John

    2001-01-01

    Methods for detecting Quantitative Trait Loci (QTL) without markers have generally used iterative peeling algorithms for determining genotype probabilities. These algorithms have considerable shortcomings in complex pedigrees. A Monte Carlo Markov chain (MCMC) method which samples the pedigree of the whole population jointly is described. Simultaneous sampling of the pedigree was achieved by sampling descent graphs using the Metropolis-Hastings algorithm. A descent graph describes the inheritance state of each allele and provides pedigrees guaranteed to be consistent with Mendelian sampling. Sampling descent graphs overcomes most, if not all, of the limitations incurred by iterative peeling algorithms. The algorithm was able to find the QTL in most of the simulated populations. However, when the QTL was not modeled or found then its effect was ascribed to the polygenic component. No QTL were detected when they were not simulated. PMID:11742631

  8. Thermodynamic tabulations for selected phases in the system CaO-Al2O3-SiO2-H2O at 101.325 kPa (1 atm) between 273.15 and 1800 K

    USGS Publications Warehouse

    Haas, John L.; Robinson, Gilpin R.; Hemingway, Bruce S.

    1981-01-01

    The standard thermodynamic properties of phases in the lime-alumina-silica-water system between 273.15 and 1800 K at 101.325 kPa (1 atm) were evaluated from published experimental data. Phases included in the compilation are boehmite, diaspore, gibbsite, kaolinite, dickite, halloysite, andalusite, kyanite, sillimanite, Ca-Al clinopyroxene, anorthite, gehlenite, grossular, prehnite, zoisite, margarite, wollastonite, cyclowollastonite ( = pseudowollastonite), larnite, Ca olivine, hatrurite, and rankinite. The properties include the heat capacity, entropy, relative enthalpy, and Gibbs energy function of the phases and the enthalpies, Gibbs energies, and equilibrium constants for formation both from the elements and from the oxides. Tabulated values are given at 50 K intervals, with the 2-sigma confidence limit at 250 K intervals. Summaries for each phase give the temperature-dependent functions for heat capacity, entropy, and relative enthalpy and the experimental data used in the final evaluation.

  9. Tabulation of comet observations.

    NASA Astrophysics Data System (ADS)

    1991-07-01

    Concerning comets: 1957 III Arend-Roland, 1957 V Mrkos, 1958 III Burnham, 1959 III Bester-Hoffmeister, 1959 VI Alcock, 1959 VIII P/Giacobini-Zinner, 1960 I P/Wild 1, 1960 II Burnham, 1960 III P/Schaumasse, 1960 VIII P/Finlay, 1961 V Wilson-Hubbard, 1961 VIII Seki, 1962 III Seki-Lines, 1962 VIII Humason, 1963 I Ikeya, 1963 III Alcock, 1963 V Pereyra, 1964 VI Tomita-Gerber-Honda, 1964 VIII Ikeya, 1964 IX Everhart, 1979 X Bradfield, 1980 X P/Stephan-Oterma, 1980 XII Meier, 1980 XIII P/Tuttle, 1981 II Panther, 1982 I Bowell, 1982 IV P/Grigg-Skjellerup, 1982 VII P/d'Arrest, 1986 III P/Halley, 1987 IV Shoemaker, 1987 XII P/Hartley 3, 1987 XIX P/Schwassmann-Wachmann 2, 1987 XXIX Bradfield, 1987 XXX Levy, 1987 XXXII McNaught, 1987 XXXIII P/Borrelly, 1987 XXXVI P/Parker-Hartley, 1987 XXXVII P/Helin- Roman-Alu 1, 1988 III Shoemaker-Holt, 1988 V Liller, 1988 VIII P/Ge-Wang, 1988 XI P/Shoemaker-Holt 2, 1988 XIV P/Tempel 2, 1988 XV Machholz, 1988 XX Yanaka, 1988 XXI Shoemaker, 1988 XXIV Yanaka, 1989 III Shoemaker, 1989 V Shoemaker-Holt-Rodriquez, 1989 VIII P/Pons-Winnecke, 1989 X P/Brorsen-Metcalf, 1989 XI P/Gunn, 1989 XIII P/Lovas 1, 1989 XVIII McKenzie-Russell, 1989 XIX Okazaki-Levy-Rudenko, 1989 XX P/Clark, 1989 XXI Helin-Ronan-Alu, 1989 XXII Aarseth-Brewington, 1989h P/Van Biesbroeck, 1989t P/Wild 2, 1989u P/Kearns-Kwee, 1989c1 Austin, 1989e1 Skorichenko-George, 1990a P/Wild 4, 1990b Černis-Kiuchi-Nakamura, 1990c Levy, 1990e P/Wolf-Harrington, 1990f P/Honda-Mrkos-Pajdušáková, 1990g McNaught-Hughes, 1990i Tsuchiya-Kiuchi, 1990n P/Taylor, 1990ο P/Shoemaker-Levy 1, 1991a P/Metcalf-Brewington, 1991b Arai, 1991c P/Swift-Gehrels, 1991d Shoemaker-Levy, 1991e P/Shoemaker-Levy 3, 1991h P/Takamizawa, 1991j P/Hartley 1, 1991k P/Mrkos, 1991l Helin-Lawrence, 1991n P/Faye, 1991q P/Levy, 1991t P/Hartley 2, P/Encke, P/Schwassmann-Wachmann 1.

  10. Tabulation of comet observations.

    NASA Astrophysics Data System (ADS)

    1993-10-01

    Concerning comets: 1955 III Mrkos, 1955 IV Bakharev-Macfarlane-Krienke, 1955 V Honda, 1956 III Mrkos, 1956 IV P/Olbers, 1957 III Arend-Roland, 1957 V Mrkos, 1958 III Burnham, 1959 VIII P/Giacobini-Zinner, 1960 II Burnham, 1973 XII Kohoutek, 1974 III Bradfield, 1975 IX Kobayashi-Berger-Milon, 1975 X Suzuki-Saigusa-Mori, 1975 XI Bradfield, 1975 XII Mori-Sato-Fujikawa, 1976 IV Bradfield, 1976 VI West, 1979 VII Bradfield, 1980 X P/Stephan-Oterma, 1980 XII Meier, 1980 XIII P/Tuttle, 1981 II Panther, 1981 IV P/Borrelly, 1981 XIX P/Swift-Gehrels, 1982 I Bowell, 1982 IV P/Grigg-Skjellerup, 1982 VI Austin, 1982 VII P/d'Arrest, 1982 VIII P/Churyumov-Gerasimenko, 1983 V Sugano-Saigusa-Fujikawa, 1983 VII IRAS-Araki-Alcock, 1983 X P/Tempel 2, 1983 XI P/Tempel 1, 1983 XIII P/Kopff, 1983 XIV P/IRAS, 1983 XV Shoemaker, 1984 III P/Hartley-IRAS, 1984 IV P/Crommelin, 1984 XI P/Faye, 1984 XIII Austin, 1984 XIV P/Wild 2, 1984 XVI P/Shoemaker 1, 1984 XXIII Levy-Rudenko, 1985 I P/Tsuchinshan 1, 1985 XIII P/Giacobini-Zinner, 1985 XV P/Giclas, 1985 XVI P/Ciffréo, 1985 XVII Hartley-Good, 1985 XVIII P/Shoemaker 3, 1985 XIX Thiele, 1986 I P/Boethin, 1986 III P/Halley, 1986 VIII P/Machholz, 1986 XVII Levy, 1986 XVIII Terasako, 1987 II Sorrells, 1987 VII Wilson, 1987 XIX P/Schwassmann-Wachmann 2, 1987 XXI Levy, 1987 XXIII Rudenko, 1987 XXIV P/Brooks 2, 1987 XXVII P/Kohoutek, 1987 XXIX Bradfield, 1988 IV Furuyama, 1988 XIV P/Tempel 2, 1989 III Shoemaker, 1989 XV P/Schwassmann-Wachmann 1, 1989 XIX Okazaki-Levy-Rudenko, 1990 V Austin, 1990 XVII Tsuchiya-Kiuchi, 1990 XX Levy, 1990 XXI P/Encke, 1990 XXVI Arai, 1991 I P/Metcalf-Brewington, 1991 XV P/Hartley 2, 1991 XVII P/Arend-Rigaux, 1991a1 Shoemaker-Levy, 1991g1 Zanotta-Brewington, 1992c P/Howell, 1992d Tanaka-Machholz, 1992e P/Singer Brewster, 1992f P/Shoemaker-Levy 8, 1992h Spacewatch, 1992j P/Ashbrook-Jackson, 1992t P/Swift-Tuttle, 1992u P/Väisälä 1, 1992w P/Slaughter-Burnham, 1992x P/Schaumasse, 1992y Shoemaker, 1993a Mueller, 1993d Mueller, 1993e P/Shoemaker-Levy 9, 1993f P/Forbes, 1993i P/Holmes, 1993j P/Neujmin 3, 1993k P/Shajn-Schaldach, 1993l P/Helin-Lawrence, 1993m P/Hartley 3, 1993n P/Whipple, 1993ο P/West-Kohoutek-Ikemura, 1993p Mueller, P/Smirnova-Chernykh.

  11. Tabulation of comet observations.

    NASA Astrophysics Data System (ADS)

    Concerning comets: 1962 VIII Humason, 1971 V Toba, 1975 XI Bradfield, 1979 X Bradfield, 1980 X P/Stephan-Oterma, 1980 XI P/Encke, 1980 XIII P/Tuttle, 1981 II Panther, 1982 VI Austin, 1982 VIII P/Churyumov-Gerasimenko, 1983 V Sugano-Saigusa-Fujikawa, 1983 VII IRAS-Araki-Alcock, 1983 XIII P/Kopff, 1984 III P/Hartley-IRAS, 1985 XIII P/Giacobini-Zinner, 1985 XVII Hartley-Good, 1985 XIX Thiele, 1986 III P/Halley, 1986h P/Schwassmann-Wachmann 2, 1986j P/Comas Solá, 1986k P/Kohoutek, 1986l Wilson, 1986m P/Grigg-Skjellerup, 1986n Sorrells, 1987h P/Howell, 1987l P/Reinmuth 2, 1987m P/Brooks 2, 1987n P/Harrington, 1987p P/Borrelly, 1987r P/Reinmuth 1, 1987s Bradfield, 1987u Rudenko, 1987y Levy, 1987z P/Shoemaker-Holt, 1987b1 McNaught, 1987d1 Ichimura, 1987f1 Furuyama, 1988a Liller, 1988b Shoemaker, 1988c Maury-Phinney, 1988e Levy, P/Schwassmann-Wachmann 1.

  12. Tabulation of comet observations.

    NASA Astrophysics Data System (ADS)

    1983-07-01

    Concerning comets: 1974 II P/Schwassmann-Wachmann 1, 1977 XIV Kohler, 1978 XXI Meier, 1979 X Bradfield, 1980 X P/Stephan-Oterma, 1980 XII Meier, 1981 II Panther, 1982 VI Austin, 1982 VII P/d'Arrest, 1982 VIII P/Churyumov-Gerasimenko, 1982 IX P/Russell 3, 1982 X P/Gunn, 1982d P/Tempel 2, 1982j P/Tempel 1, 1982k P/Kopff, 1983b P/Pons-Winnecke, 1983d IRAS-Araki-Alcock, 1983e Sugano-Saigusa-Fujikawa, 1983h P/Johnson.

  13. Tabulation of comet observations.

    NASA Astrophysics Data System (ADS)

    1985-04-01

    Concerning comets: 1961 VIII Seki, 1962 III Seki-Lines, 1963 I Ikeya, 1963 III Alcock, 1964 VIII Ikeya, 1965 VIII Ikeya-Seki, 1966 V Kilston, 1967 II Rudnicki, 1968 I Ikeya-Seki, 1968 VI Honda, 1969 IX Tago-Sato-Kosaka, 1970 II Bennett, 1971 V Toba, 1973 XII Kohoutek, 1974 II P/Schwassmann-Wachmann 1, 1974 III Bradfield, 1975 IX Kobayashi-Berger-Milon, 1975 X Suzuki-Saigusa-Mori, 1975 XII Mori-Sato-Fujikawa, 1976 VI West, 1976 XI P/d'Arrest, 1979 X Bradfield, 1980 XI P/Encke, 1980 XIII P/Tuttle, 1980 XV Bradfield, 1981 II Panther, 1982i P/Halley, 1983 XIII P/Kopff, 1983n P/Crommelin, 1983v P/Hartley-IRAS, 1983w P/Clark, 1984c P/Neujmin, 1984f Shoemaker, 1984g P/Wolf-Harrington, 1984h P/Faye, 1984i Austin, 1984j P/Takamizawa, 1984k P/Arend-Rigaux, 1984m P/Schaumasse, 1984p Tsuchinshan 1, 1984q P/Shoemaker 1, 1984s Shoemaker, 1984t Levy-Rudenko.

  14. Tabulation of comet observations.

    NASA Astrophysics Data System (ADS)

    1993-01-01

    Concerning comets: 1973 XII Kohoutek, 1975 IX Kobayashi-Berger-Milon, 1976 VI West, 1976 XI P/d'Arrest, 1977 XIV Kohler, 1979 X Bradfield, 1980 X P/Stephan-Oterma, 1980 XV Bradfield, 1981 II Panther, 1982 VI Austin, 1983 V Sugano-Saigusa-Fujikawa, 1983 VII IRAS-Araki-Alcock, 1983 XIII P/Kopff, 1984 XIII Austin, 1984 XXIII Levy-Rudenko, 1985 XIII P/Giacobini-Zinner, 1985 XVII Hartley-Good, 1985 XIX Thiele, 1986 I P/Boethin, 1986 III P/Halley, 1986 XVIII Terasako, 1987 II Sorrells, 1987 III Nishikawa-Takamizawa-Tago, 1987 X P/Grigg-Skjellerup, 1987 XXIII Rudenko, 1987 XXIX Bradfield, 1987 XXXII McNaught, 1987 XXXIII P/Borrelly, 1988 IV Furuyama, 1988 V Liller, 1988 XIV P/Tempel 2, 1988 XV Machholz, 1988 XX Yanaka, 1988 XXIV Yanaka, 1989 X P/Brorsen-Metcalf, 1989 XV P/Schwassmann-Wachmann 1, 1989 XIX Okazaki-Levy-Rudenko, 1989 XXI Helin-Roman-Alu, 1989 XXII Aarseth-Brewington, 1990 III Černis-Kiuchi-Nakamura, 1990 VI Skorichenko-George, 1990 VIII P/Schwassmann-Wachmann 3, 1990 IX P/Peters-Hartley, 1990 X P/Wild 4, 1990 XIV P/Honda-Mrkos-Pajdušáková, 1990 XVII Tsuchiya-Kiuchi, 1990 XXI P/Encke, 1990 XXVI Arai, 1991 XI P/Levy, 1991 XV P/Hartley 2, 1991 XVI P/Wirtanen, 1991 XVII P/Arend-Rigaux, 1991 XXI P/Faye, 1991 XXIII P/Shoemaker 1, 1991 XXIV Shoemaker-Levy, 1991l Helin-Lawrence, 1991ο P/Chernykh, 1991r Helin-Alu, 1991a1 Shoemaker-Levy, 1991g1 Zanotta-Brewington, 1991h1 Mueller, 1992d Tanaka-Machholz, 1992f P/Shoemaker-Levy 8, 1992k Machholz, 1992l P/Giclas, 1992p P/Brewington, 1992q Helin-Lawrence, 1992s P/Ciffréo, 1992t P/Swift-Tuttle, 1992u P/Väisälä, 1992x P/Schaumasse, 1992y Shoemaker, 1992a1 Ohshita, 1993a Mueller, P/Smirnova-Chernykh.

  15. Tabulation of comet observations.

    NASA Astrophysics Data System (ADS)

    1999-07-01

    Concerning comets: C/1995 O1 (Hale-Bopp), C/1996 J1 (Evans-Drinkwater), C/1997 BA6 (Spacewatch), C/1997 D1 (Mueller), C/1997 H2 (SOHO), C/1997 J1 (Mueller), C/1997 J2 (Meunier-Dupouy), C/1997 N1 (Tabur), C/1997 O1 (Tilbrook), C/1997 T1 (Utsunomiya), C/1998 H1 (Stonehouse), C/1998 J1 (SOHO), C/1998 K1 (Mueller), C/1998 K2 (LINEAR), C/1998 K5 (LINEAR), C/1998 M1 (LINEAR), C/1998 M2 (LINEAR), C/1998 M3 (Larsen), C/1998 M4 (LINEAR), C/1998 M5 (LINEAR), C/1998 P1 (Williams), C/1998 T1 (LINEAR), C/1998 U5 (LINEAR), C/1999 F1 (Catalina), C/1999 F2 (Dalcanton), C/1999 H1 (Lee), C/1999 H3 (LINEAR), C/1999 J2 (Skiff), C/1999 J3 (LINEAR), C/1999 J4 (LINEAR), C/1999 K2 (Ferris), C/1999 K3 (LINEAR), C/1999 K5 (LINEAR), C/1999 K6 (LINEAR), C/1999 K7 (LINEAR), C/1999 K8 (LINEAR), C/1999 L2 (LINEAR), C/1999 N2 (Lynn), 2P/Encke, 9P/Tempel 1, 10P/Tempel 2, 21P/Giacobini-Zinner, 29P/Schwassmann-Wachmann 1, 37P/Forbes, 43P/Wolf-Harrington, 46P/Wirtanen, 48P/Johnson, 49P/Arend-Rigaux, 52P/Harrington-Abell, 55P/Tempel-Tuttle, 62P/Tsuchinshan 1, 65P/Gunn, 69P/Taylor, 78P/Gehrels 2, 81P/Wild 2, 88P/Howell, 92P/Lovas 1, 94P/Russell 4, 95P/Chiron, 100P/Hartley 1, 103P/Hartley 2, 104P/Kowal 2, 105P/Singer Brewster, 118P/Shoemaker-Levy 4, 121P/Shoemaker-Holt 2, 128P/Shoemaker-Holt 1, 132P/Helin-Roman-Alu 2, 134P/Kowal-Vávrová, 135P/Shoemaker-Levy 8, 137P/Shoemaker-Levy 2, 140P/Bowell-Skiff, P/1998 U3 (Jäger), P/1998 W1 (Spahr), P/1999 DN3 (Korlević-Jurić), P/1999 E1 (Li), P/1999 G1 (LINEAR), P/1999 J5 (LINEAR).

  16. Tabulation of comet observations.

    NASA Astrophysics Data System (ADS)

    2000-04-01

    Concerning comets: C/1995 O1 (Hale-Bopp), C/1997 BA6 (Spacewatch), C/1998 K2 (LINEAR), C/1998 M5 (LINEAR), C/1998 P1 (Williams), C/1998 T1 (LINEAR), C/1998 U5 (LINEAR), C/1999 A1 (Tilbrook), C/1999 E1 (Li), C/1999 F1 (Catalina), C/1999 F2 (Dalcanton), C/1999 H1 (Lee), C/1999 H3 (LINEAR), C/1999 J2 (Skiff), C/1999 J3 (LINEAR), C/1999 K1 (SOHO), C/1999 K2 (Ferris), C/1999 K3 (LINEAR), C/1999 K5 (LINEAR), C/1999 K6 (LINEAR), C/1999 K8 (LINEAR), C/1999 L3 (LINEAR), C/1999 N2 (Lynn), C/1999 N4 (LINEAR), C/1999 S2 (McNaught-Watson), C/1999 S3 (LINEAR), C/1999 S4 (LINEAR), C/1999 T1 (McNaught-Hartley), C/1999 T2 (LINEAR), C/1999 T3 (LINEAR), C/1999 U1 (Ferris), C/1999 U4 (Catalina-Skiff), C/1999 XS87 (LINEAR), C/1999 Y1 (LINEAR), C/2000 A1 (Montani), C/2000 B2 (LINEAR), C/2000 B4 (LINEAR), C/2000 CT54 (LINEAR), C/2000 D2 (LINEAR), 4P/Faye, 9P/Tempel 1, 10P/Tempel 2, 21P/Giacobini-Zinner, 29P/Schwassmann-Wachmann 1, 37P/Forbes, 50P/Arend, 52P/Harrington-Abell, 59P/Kearns-Kwee, 60P/Tsuchinshan 2, 63P/Wild 1, 71P/Clark, 73P/Schwassmann-Wachmann 3, 74P/Smirnova-Chernykh, 84P/Giclas, 93P/Lovas 1, 95P/Chiron, 105P/Singer Brewster, 106P/Schuster, 114P/Wiseman-Skiff, 140P/Bowell-Skiff, 141P/Machholz 2, 142P/Ge-Wang, 143P/Kowal-Mrkos, P/1998 S1 (LINEAR-Mueller), P/1998 U3 (Jäger), P/1998 W1 (Spahr), P/1998 Y2 (Li), P/1999 RO28 (LONEOS), P/1999 U3 (LINEAR), P/1999 V1 (Catalina), P/1999 WJ7 (Korlević), P/1999 X1 (Hug-Bell), P/1999 XB69 (LINEAR), P/1999 XN120 (Catalina), P/2000 B3 (LINEAR), P/2000 C1 (Hergenrother), P/2000 G1 (LINEAR).

  17. Tabulation of comet observations.

    NASA Astrophysics Data System (ADS)

    1999-10-01

    Concerning comets: C/1995 O1 (Hale-Bopp), C/1997 BA6 (Spacewatch), C/1997 J2 (Meunier-Dupouy), C/1998 K5 (LINEAR), C/1998 M2 (LINEAR), C/1998 M5 (LINEAR), C/1998 P1 (Williams), C/1998 Q1 (LINEAR), C/1998 T1 (LINEAR), C/1998 U5 (LINEAR), C/1999 F2 (Dalcanton), C/1999 H1 (Lee), C/1999 H3 (LINEAR), C/1999 J2 (Skiff), C/1999 J3 (LINEAR), C/1999 K2 (Ferris), C/1999 K3 (LINEAR), C/1999 K5 (LINEAR), C/1999 K6 (LINEAR), C/1999 K8 (LINEAR), C/1999 L2 (LINEAR), C/1999 N2 (Lynn), C/1999 N4 (LINEAR), C/1999 S3 (LINEAR), C/1999 S4 (LINEAR), C/1999 T1 (McNaught-Hartley), C/1999 T2 (LINEAR), C/1999 T3 (LINEAR), C/1999 U1 (Ferris), 2P/Encke, 4P/Faye, 10P/Tempel 2, 21P/Giacobini-Zinner, 29P/Schwassmann-Wachmann 1, 37P/Forbes, 46P/Wirtanen, 50P/Arend, 52P/Harrington-Abell, 59P/Kearns-Kwee, 74P/Smirnova-Chernykh, 84P/Giclas, 88P/Howell, 93P/Lovas 1, 106P/Schuster, 114P/Wiseman-Skiff, 136P/Mueller 3, 137P/Shoemaker-Levy 2, 141P/Machholz 2, 142P/Ge-Wang, P/1998 G1 (LINEAR), P/1998 QP54 (LONEOS-Tucker), P/1998 S1 (LINEAR-Mueller), P/1998 U2 (Mueller), P/1998 U3 (Jäger), P/1998 W1 (Spahr), P/1998 Y1 (LINEAR), P/1999 RO2 (LONEOS).

  18. Tabulation of comet observations.

    NASA Astrophysics Data System (ADS)

    2000-01-01

    Concerning comets: C/1995 O1 (Hale-Bopp), C/1997 BA6 (Spacewatch), C/1998 K2 (LINEAR), C/1998 M1 (LINEAR), C/1998 M5 (LINEAR), C/1998 P1 (Williams), C/1998 T1 (LINEAR), C/1998 W3 (LINEAR), C/1999 E1 (Li), C/1999 F2 (Dalcanton), C/1999 H1 (Lee), C/1999 H3 (LINEAR), C/1999 J2 (Skiff), C/1999 J3 (LINEAR), C/1999 K2 (Ferris), C/1999 K5 (LINEAR), C/1999 K6 (LINEAR), C/1999 K8 (LINEAR), C/1999 L2 (LINEAR), C/1999 L3 (LINEAR), C/1999 N2 (Lynn), C/1999 S3 (LINEAR), C/1999 S4 (LINEAR), C/1999 T1 (McNaught-Hartley), C/1999 T2 (LINEAR), C/1999 T3 (LINEAR), C/1999 U1 (Ferris), C/1999 U4 (Catalina-Skiff), C/1999 Y1 (LINEAR), 4P/Faye, 10P/Tempel 2, 29P/Schwassmann-Wachmann 1, 37P/Forbes, 50P/Arend, 59P/Kearns-Kwee, 63P/Wild 1, 65P/Gunn, 74P/Smirnova-Chernykh, 84P/Giclas, 106P/Schuster, 108P/Ciffréo, 114P/Wiseman-Skiff, 117P/Helin-Roman-Alu 1, 136P/Mueller 3, 137P/Shoemaker-Levy 2, 141P/Machholz 2, P/1998 U4 (Spahr), P/1999 RO28 (LONEOS), P/1999 U3 (LINEAR), P/1999 V1 (Catalina), P/1999 X1 (Hug-Bell).

  19. Tabulation of comet observations.

    NASA Astrophysics Data System (ADS)

    1993-07-01

    Concerning comets: 1955 III Mrkos, 1955 IV Bakharev-Macfarlane-Krienke, 1955 V Honda, 1956 III Mrkos, 1956 IV P/Olbers, 1957 V Mrkos, 1961 II Candy, 1961 V Wilson-Hubbard, 1962 III Seki-Lines, 1962 V P/Tuttle-Giacobini-Kresák, 1963 I Ikeya, 1963 III Alcock, 1964 VI Tomita-Gerber-Honda, 1964 IX Everhart, 1965 VIII Ikeya-Seki, 1966 II Barbon, 1966 V Kilston, 1967 III Wild, 1967 IV Seki, 1967 V P/Tuttle, 1967 X P/Tempel 2, 1970 I Daido-Fujikawa, 1975 IX Kobayashi-Berger-Milon, 1979 X Bradfield, 1986 III P/Halley, 1989 X P/Brorsen-Metcalf, 1989 XIX Okazaki-Levy-Rudenko, 1990 III Cernis-Kiuchi-Nakamura, 1990 V Austin, 1990 XIV P/Honda-Mrkos-Pajdušáková, 1990 XVII Tsuchiya-Kiuchi, 1990 XX Levy, 1990 XXI P/Encke, 1990 XXVIII P/Wild 2, 1991 XI P/Levy, 1991 XV P/Hartley 2, 1991a1 Shoemaker-Levy, 1992h Spacewatch, 1992l P/Giclas, 1992n P/Schuster, 1992ο P/Daniel, 1992s P/Ciffréo, 1992t P/Swift-Tuttle, 1992u P/Väisälä 1, 1992x P/Schaumasse, 1992y Shoemaker, 1992a1 Ohshita, 1993a Mueller, 1993e P/Shoemaker-Levy 9, P/Smirnova-Chernykh, P/Schwassmann-Wachmann 1.

  20. Tabulation of comet observations.

    NASA Astrophysics Data System (ADS)

    1995-07-01

    Concerning comets: C/1958 D1 (Burnham), C/1959 Q1 (Alcock), C/1959 Q2 (Alcock), C/1959 Y1 (Burnham), C/1960 Y1 (Candy), C/1961 O1 (Wilson-Hubbard), C/1961 R1 (Humason), C/1961 T1 (Seki), C/1962 H1 (Honda), C/1963 A1 (Ikeya), C/1963 F1 (Alcock), C/1963 R1 (Pereyra), C/1964 N1 (Ikeya), C/1964 P1 (Everhart), C/1966 P1 (Kilston), C/1966 P2 (Barbon), C/1966 R1 (Ikeya-Everhart), C/1966 T1 (Rudnicki), C/1967 Y1 (Ikeya-Seki), C/1968 H1 (Tago-Honda-Yamamoto), C/1968 L1 (Whitaker-Thomas), C/1968 N1 (Honda), C/1968 Q1 (Bally-Clayton), C/1968 Q2 (Honda), C/1968 U1 (Wild), C/1968 Y1 (Thomas), C/1969 O1 (Kohoutek), C/1969 P1 (Fujikawa), C/1969 Y1 (Bennett), C/1970 B1 (Daido-Fujikawa), C/1970 N1 (Abe), C/1970 U1 (Suzuki-Sato-Seki), C/1971 E1 (Toba), C/1972 E1 (Bradfield), C/1972 L1 (Sandage), C/1972 U1 (Kojima), C/1973 A1 (Heck-Sause), C/1973 E1 (Kohoutek), C/1975 T1 (Mori-Sato-Fujikawa), C/1975 T2 (Suzuki-Saigusa-Mori), C/1975 V1 (West), C/1975 V2 (Bradfield), C/1975 X1 (Sato), C/1976 D1 (Bradfield), C/1977 V1 (Tsuchinshan), C/1984 N1 (Austin), C/1987 P1 (Bradfield), C/1988 A1 (Liller), C/1989 Q1 (Okazaki-Levy-Rudenko), C/1989 X1 (Austin), C/1990 E1 (Černis-Kiuchi-Nakamura), C/1990 K1 (Levy), C/1990 N1 (Tsuchiya-Kiuchi), C/1991 A2 (Arai), C/1991 F2 (Helin-Lawrence), C/1991 T2 (Shoemaker-Levy), C/1991 X2 (Mueller), C/1991 Y1 (Zanotta-Brewington), C/1992 F1 (Tanaka-Machholz), C/1992 U1 (Shoemaker), C/1992 W1 (Ohshita), C/1994 J2 (Takamizawa), C/1994 N1 (Nakamura-Nishimura-Machholz), C/1994 T1 (Machholz), 1P/Halley, 2P/Encke, 4P/Faye, 6P/d'Arrest, 8P/Tuttle, 9P/Tempel 1, 10P/Tempel 2, 15P/Finlay, 16P/Brooks 2, 19P/Borrelly, 23P/Brorsen-Metcalf, 24P/Schaumasse, 29P/Schwassmann-Wachmann 1, 31P/Schwassmann-Wachmann 2, 40P/Väisälä 1, 41P/Tuttle-Giacobini-Kresák, 45P/Honda-Mrkos-Pajdušáková, 51P/Harrington, 59P/Kearns-Kwee, 64P/Swift-Gehrels, 65P/Gunn, 71P/Clark, 73P/Schwassmann-Wachmann 3, 75P/Kohoutek, 76P/West-Kohoutek-Ikemura, 77P/Longmore, 78P/Gehrels 2, 85P/Boethin, 95P/Chiron, 97P/Metcalf-Brewington, 103P/Hartley 2, 104P/Kowal 2, 108P/Ciffréo, 109P/Swift-Tuttle, 110P/Hartley 3, 116P/Wild 4, P/1991 L3 (Levy), P/1991 V1 (Shoemaker-Levy 6), P/1992 Q1 (Brewington), P/1993 W1 (Mueller 5), P/1994 P1 (Machholz 2).

  1. Maritime Casualty Tabulation (1972)

    DOT National Transportation Integrated Search

    1975-01-01

    The report creates a data base of the maritime casualties during 1972 which would have been candidates for a distress channel in a satellite communications service. There are 1546 casualties recorded in this report for the calendar year 1972; of thes...

  2. Halftoning Algorithms and Systems.

    DTIC Science & Technology

    1996-08-01

    Keywords: halftoning algorithms; error diffusion; color printing; topographic maps. ...graylevels for each screen level. In the case of error diffusion algorithms, the calibration procedure using the new centering concept manifests itself as a... Novel centering concept for overlapping correction, paper/transparency (patent applied 5/94); applications to error diffusion and to dithering (IS&T

  3. Sorting on STAR. [CDC computer algorithm timing comparison

    NASA Technical Reports Server (NTRS)

    Stone, H. S.

    1978-01-01

    Timing comparisons are given for three sorting algorithms written for the CDC STAR computer. One algorithm is Hoare's (1962) Quicksort, which is the fastest or nearly the fastest sorting algorithm for most computers. A second algorithm is a vector version of Quicksort that takes advantage of the STAR's vector operations. The third algorithm is an adaptation of Batcher's (1968) sorting algorithm, which makes especially good use of vector operations but has a complexity of N(log N)-squared as compared with a complexity of N log N for the Quicksort algorithms. In spite of its worse complexity, Batcher's sorting algorithm is competitive with the serial version of Quicksort for vectors up to the largest that can be treated by STAR. Vector Quicksort outperforms the other two algorithms and is generally preferred. These results indicate that unusual instruction sets can introduce biases in program execution time that counter results predicted by worst-case asymptotic complexity analysis.
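
    Batcher's method suits vector machines because every stage consists of many independent compare-exchanges that can be issued as one vector operation. A sketch of the odd-even merge sort with the inner compare-exchange vectorized in numpy (an illustration of the idea, not the STAR code; the input length must be a power of two):

      import numpy as np

      def batcher_sort(a):
          """Batcher odd-even merge sort with vectorized compare-exchange."""
          a, n = a.copy(), len(a)
          p = 1
          while p < n:
              k = p
              while k >= 1:
                  for j in range(k % p, n - k, 2 * k):
                      i = np.arange(0, min(k, n - j - k))
                      idx = i + j
                      # only compare-exchange within the same 2p-block
                      sel = idx[(idx // (2 * p)) == ((idx + k) // (2 * p))]
                      lo, hi = sel, sel + k
                      swap = a[lo] > a[hi]
                      a[lo[swap]], a[hi[swap]] = a[hi[swap]], a[lo[swap]]
                  k //= 2
              p *= 2
          return a

      # np.array_equal(batcher_sort(x), np.sort(x)) for len(x) a power of two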

  4. Genetic algorithms using SISAL parallel programming language

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tejada, S.

    1994-05-06

    Genetic algorithms are a mathematical optimization technique developed by John Holland at the University of Michigan [1]. The SISAL programming language possesses many of the characteristics desired for implementing genetic algorithms. SISAL is a deterministic, functional programming language which is inherently parallel. Because SISAL is functional and based on mathematical concepts, genetic algorithms can be efficiently translated into the language. Several of the steps involved in genetic algorithms, such as mutation, crossover, and fitness evaluation, can be parallelized using SISAL. In this paper I discuss the implementation and performance of parallel genetic algorithms in SISAL.

  5. In-Trail Procedure (ITP) Algorithm Design

    NASA Technical Reports Server (NTRS)

    Munoz, Cesar A.; Siminiceanu, Radu I.

    2007-01-01

    The primary objective of this document is to provide a detailed description of the In-Trail Procedure (ITP) algorithm, which is part of the Airborne Traffic Situational Awareness In-Trail Procedure (ATSA-ITP) application. To this end, the document presents a high level description of the ITP Algorithm and a prototype implementation of this algorithm in the programming language C.

  6. Clustering algorithm for determining community structure in large networks

    NASA Astrophysics Data System (ADS)

    Pujol, Josep M.; Béjar, Javier; Delgado, Jordi

    2006-07-01

    We propose an algorithm to find the community structure in complex networks based on the combination of spectral analysis and modularity optimization. The clustering produced by our algorithm is as accurate as the best algorithms in the literature on modularity optimization; however, the main asset of the algorithm is its efficiency. The best match for our algorithm is Newman’s fast algorithm, which is the reference algorithm for clustering in large networks due to its efficiency. When both algorithms are compared, our algorithm outperforms the fast algorithm in both efficiency and accuracy of the clustering, in terms of modularity. Thus, the results suggest that the proposed algorithm is a good choice for analyzing the community structure of medium and large networks in the range of tens to hundreds of thousands of vertices.

  7. Efficient sequential and parallel algorithms for record linkage.

    PubMed

    Mamun, Abdullah-Al; Mi, Tian; Aseltine, Robert; Rajasekaran, Sanguthevar

    2014-01-01

    Integrating data from multiple sources is a crucial and challenging problem. Even though there exist numerous algorithms for record linkage or deduplication, they suffer from either large time needs or restrictions on the number of datasets that they can integrate. In this paper we report efficient sequential and parallel algorithms for record linkage which handle any number of datasets and outperform previous algorithms. Our algorithms employ hierarchical clustering algorithms as the basis. A key idea that we use is radix sorting on certain attributes to eliminate identical records before any further processing. Another novel idea is to form a graph that links similar records and find the connected components. Our sequential and parallel algorithms have been tested on a real dataset of 1,083,878 records and synthetic datasets ranging in size from 50,000 to 9,000,000 records. Our sequential algorithm runs at least two times faster, for any dataset, than the previous best-known algorithm, the two-phase algorithm using faster computation of the edit distance (TPA (FCED)). The speedups obtained by our parallel algorithm are almost linear. For example, we get a speedup of 7.5 with 8 cores (residing in a single node), 14.1 with 16 cores (residing in two nodes), and 26.4 with 32 cores (residing in four nodes). We have compared the performance of our sequential algorithm with TPA (FCED) and found that our algorithm outperforms the previous one. The accuracy is the same as that of this previous best-known algorithm.
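
    The "form a graph and find connected components" step is the heart of the method and is easy to sketch with union-find; the pairwise matcher and the radix-sort blocking that keep the comparisons tractable are abstracted away here:

      def link_records(records, similar):
          """Cluster records into entities: union records that a pairwise
          matcher deems similar, then return the connected components."""
          parent = list(range(len(records)))

          def find(i):
              while parent[i] != i:
                  parent[i] = parent[parent[i]]   # path halving
                  i = parent[i]
              return i

          for i in range(len(records)):
              for j in range(i + 1, len(records)):   # blocking omitted: O(n^2)
                  if similar(records[i], records[j]):
                      parent[find(i)] = find(j)

          clusters = {}
          for i, rec in enumerate(records):
              clusters.setdefault(find(i), []).append(rec)
          return list(clusters.values())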

  8. The algorithms for rational spline interpolation of surfaces

    NASA Technical Reports Server (NTRS)

    Schiess, J. R.

    1986-01-01

    Two algorithms for interpolating surfaces with spline functions containing tension parameters are discussed. Both algorithms are based on the tensor products of univariate rational spline functions. The simpler algorithm uses a single tension parameter for the entire surface. This algorithm is generalized to use separate tension parameters for each rectangular subregion. The new algorithm allows for local control of tension on the interpolating surface. Both algorithms are illustrated and the results are compared with the results of bicubic spline and bilinear interpolation of terrain elevation data.

  9. A divide-and-conquer algorithm for large-scale de novo transcriptome assembly through combining small assemblies from existing algorithms.

    PubMed

    Sze, Sing-Hoi; Parrott, Jonathan J; Tarone, Aaron M

    2017-12-06

    While the continued development of high-throughput sequencing has facilitated studies of entire transcriptomes in non-model organisms, the incorporation of an increasing amount of RNA-Seq libraries has made de novo transcriptome assembly difficult. Although algorithms that can assemble a large amount of RNA-Seq data are available, they are generally very memory-intensive and can only be used to construct small assemblies. We develop a divide-and-conquer strategy that allows these algorithms to be utilized, by subdividing a large RNA-Seq data set into small libraries. Each individual library is assembled independently by an existing algorithm, and a merging algorithm is developed to combine these assemblies by picking a subset of high quality transcripts to form a large transcriptome. When compared to existing algorithms that return a single assembly directly, this strategy achieves comparable or increased accuracy as memory-efficient algorithms that can be used to process a large amount of RNA-Seq data, and comparable or decreased accuracy as memory-intensive algorithms that can only be used to construct small assemblies. Our divide-and-conquer strategy allows memory-intensive de novo transcriptome assembly algorithms to be utilized to construct large assemblies.

  10. Anti-aliasing algorithm development

    NASA Astrophysics Data System (ADS)

    Bodrucki, F.; Davis, J.; Becker, J.; Cordell, J.

    2017-10-01

    In this paper, we discuss testing image processing algorithms for the mitigation of aliasing artifacts under pulsed illumination. Previously, two sensors were tested, one with a fixed frame rate and one with an adjustable frame rate; the results showed different degrees of operability when subjected to a Quantum Cascade Laser (QCL) pulsed at the frame rate of the fixed-rate sensor. We implemented algorithms to allow the adjustable-frame-rate sensor to detect the presence of aliasing artifacts and, in response, to alter the frame rate of the sensor. The result was that the sensor output showed a varying laser intensity (beat note) as opposed to a fixed signal level. A MIRAGE Infrared Scene Projector (IRSP) was used to explore the efficiency of the new algorithms, introducing secondary elements into the sensor's field of view.

  11. Model aerodynamic test results for two variable cycle engine coannular exhaust systems at simulated takeoff and cruise conditions. Comprehensive data report. Volume 2: Tabulated aerodynamic data book 1

    NASA Technical Reports Server (NTRS)

    Nelson, D. P.

    1981-01-01

    Tabulated data from wind tunnel tests conducted to evaluate the aerodynamic performance of an advanced coannular exhaust nozzle for a future supersonic propulsion system are presented. Tests were conducted with two test configurations: (1) a short flap mechanism for fan stream control with an isentropic contoured flow splitter, and (2) an iris fan nozzle with a conical flow splitter. Both designs feature a translating primary plug and an auxiliary inlet ejector. Tests were conducted at takeoff and simulated cruise conditions. Data were acquired at Mach numbers of 0, 0.36, 0.9, and 2.0 for a wide range of nozzle operating conditions. At simulated supersonic cruise, both configurations demonstrated good performance, comparable to levels assumed in earlier advanced supersonic propulsion studies. However, at subsonic cruise, both configurations exhibited performance that was 6 to 7.5 percent less than the study assumptions. At takeoff conditions, the iris configuration performance approached the assumed levels, while the short flap design was 4 to 6 percent less. Data are provided through test run 25.

  12. Dose calculation accuracy of the Monte Carlo algorithm for CyberKnife compared with other commercially available dose calculation algorithms.

    PubMed

    Sharma, Subhash; Ott, Joseph; Williams, Jamone; Dickow, Danny

    2011-01-01

    Monte Carlo dose calculation algorithms have the potential for greater accuracy than traditional model-based algorithms. This enhanced accuracy is particularly evident in regions of lateral scatter disequilibrium, which can develop during treatments incorporating small field sizes and low-density tissue. A heterogeneous slab phantom was used to evaluate the accuracy of several commercially available dose calculation algorithms, including Monte Carlo dose calculation for CyberKnife, the Analytical Anisotropic Algorithm and Pencil Beam convolution for the Eclipse planning system, and convolution-superposition for the XiO planning system. The phantom accommodated slabs of varying density; comparisons between planned and measured dose distributions were accomplished with radiochromic film. The Monte Carlo algorithm provided the most accurate comparison between planned and measured dose distributions. In each phantom irradiation, the Monte Carlo predictions resulted in gamma analysis comparisons >97%, using acceptance criteria of 3% dose and 3-mm distance to agreement. In general, the gamma analysis comparisons for the other algorithms were <95%. The Monte Carlo dose calculation algorithm for CyberKnife provides more accurate dose distribution calculations in regions of lateral electron disequilibrium than commercially available model-based algorithms. This is primarily because of the ability of Monte Carlo algorithms to implicitly account for tissue heterogeneities; density scaling functions and/or effective depth correction factors are not required. Copyright © 2011 American Association of Medical Dosimetrists. Published by Elsevier Inc. All rights reserved.
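
    For context, the gamma index behind the ">97%" comparisons combines a dose-difference and a distance-to-agreement criterion; a 1-D sketch follows (the paper's film analysis is 2-D, and implementation details vary):

      import numpy as np

      def gamma_1d(dose_ref, dose_eval, x, dd=0.03, dta=3.0):
          """1-D gamma index with, e.g., 3% (of max reference dose) / 3 mm;
          a point passes when gamma <= 1."""
          d_ref = np.asarray(dose_ref, float)
          d_eval = np.asarray(dose_eval, float)
          x = np.asarray(x, float)
          norm = d_ref.max()
          gam = np.empty_like(d_ref)
          for i in range(len(d_ref)):
              dist2 = ((x - x[i]) / dta) ** 2
              dose2 = ((d_eval - d_ref[i]) / (dd * norm)) ** 2
              gam[i] = np.sqrt((dist2 + dose2).min())
          return gam

      # pass rate: (gamma_1d(planned, measured, positions_mm) <= 1).mean()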

  13. Fractal Landscape Algorithms for Environmental Simulations

    NASA Astrophysics Data System (ADS)

    Mao, H.; Moran, S.

    2014-12-01

    Natural science and geographical research are now able to take advantage of environmental simulations that more accurately test experimental hypotheses, resulting in deeper understanding. Experiments affected by the natural environment can benefit from 3D landscape simulations capable of simulating a variety of terrains and environmental phenomena. Such simulations can employ random terrain generation algorithms that dynamically simulate environments to test specific models against a variety of factors. Through the use of noise functions such as Perlin noise and Simplex noise, and of the diamond-square algorithm, computers can generate simulations that model a variety of landscapes and ecosystems. This study shows how these algorithms work together to create realistic landscapes. By seeding values into the diamond-square algorithm, one can control the shape of the landscape, while Perlin noise and Simplex noise are used to simulate moisture and temperature. The smooth gradient created by coherent noise allows more realistic landscapes to be simulated. Terrain generation algorithms can be used in environmental studies and physics simulations. Potential studies that would benefit from such simulations include the geophysical impact of flash floods or drought on a particular region and regional impacts on low-lying areas due to global warming and rising sea levels. Furthermore, terrain generation algorithms also serve as aesthetic tools to display landscapes (Google Earth) and to simulate planetary landscapes. Hence, they can be used as tools to assist science education. The algorithms used to generate these natural phenomena provide scientists a different approach to analyzing our world. The random algorithms used in terrain generation not only generate the terrains themselves, but are also capable of simulating weather patterns.
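
    A compact diamond-square implementation shows the seeding mentioned above: the four corner values shape the large-scale terrain, while a decaying random amplitude adds detail at finer scales (a sketch; parameterizations vary):

      import numpy as np

      def diamond_square(k, roughness=1.0, seed=0):
          """Return a (2**k + 1)-square heightmap; corners seed the shape."""
          rng = np.random.default_rng(seed)
          n = 2 ** k + 1
          h = np.zeros((n, n))
          h[0, 0], h[0, -1], h[-1, 0], h[-1, -1] = rng.uniform(-1, 1, 4)
          step, scale = n - 1, roughness
          while step > 1:
              half = step // 2
              # diamond step: centre of each square = mean of 4 corners + noise
              for y in range(half, n, step):
                  for x in range(half, n, step):
                      avg = (h[y - half, x - half] + h[y - half, x + half] +
                             h[y + half, x - half] + h[y + half, x + half]) / 4.0
                      h[y, x] = avg + rng.uniform(-scale, scale)
              # square step: each edge midpoint = mean of its neighbours + noise
              for y in range(0, n, half):
                  for x in range((y + half) % step, n, step):
                      s = cnt = 0
                      for dy, dx in ((-half, 0), (half, 0), (0, -half), (0, half)):
                          yy, xx = y + dy, x + dx
                          if 0 <= yy < n and 0 <= xx < n:
                              s += h[yy, xx]
                              cnt += 1
                      h[y, x] = s / cnt + rng.uniform(-scale, scale)
              step, scale = half, scale / 2.0 ** roughness
          return h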

  14. A quantum causal discovery algorithm

    NASA Astrophysics Data System (ADS)

    Giarmatzi, Christina; Costa, Fabio

    2018-03-01

    Finding a causal model for a set of classical variables is now a well-established task—but what about the quantum equivalent? Even the notion of a quantum causal model is controversial. Here, we present a causal discovery algorithm for quantum systems. The input to the algorithm is a process matrix describing correlations between quantum events. Its output consists of different levels of information about the underlying causal model. Our algorithm determines whether the process is causally ordered by grouping the events into causally ordered non-signaling sets. It detects if all relevant common causes are included in the process, which we label Markovian, or alternatively if some causal relations are mediated through some external memory. For a Markovian process, it outputs a causal model, namely the causal relations and the corresponding mechanisms, represented as quantum states and channels. Our algorithm opens the route to more general quantum causal discovery methods.

  15. Parallelizing flow-accumulation calculations on graphics processing units—From iterative DEM preprocessing algorithm to recursive multiple-flow-direction algorithm

    NASA Astrophysics Data System (ADS)

    Qin, Cheng-Zhi; Zhan, Lijun

    2012-06-01

    As one of the important tasks in digital terrain analysis, the calculation of flow accumulations from gridded digital elevation models (DEMs) usually involves two steps in a real application: (1) using an iterative DEM preprocessing algorithm to remove the depressions and flat areas commonly contained in real DEMs, and (2) using a recursive flow-direction algorithm to calculate the flow accumulation for every cell in the DEM. Because both algorithms are computationally intensive, quick calculation of the flow accumulations from a DEM (especially for a large area) presents a practical challenge to personal computer (PC) users. In recent years, rapid increases in hardware capacity of the graphics processing units (GPUs) provided in modern PCs have made it possible to meet this challenge in a PC environment. Parallel computing on GPUs using a compute-unified-device-architecture (CUDA) programming model has been explored to speed up the execution of the single-flow-direction algorithm (SFD). However, the parallel implementation on a GPU of the multiple-flow-direction (MFD) algorithm, which generally performs better than the SFD algorithm, has not been reported. Moreover, GPU-based parallelization of the DEM preprocessing step in the flow-accumulation calculations has not been addressed. This paper proposes a parallel approach to calculate flow accumulations (including both iterative DEM preprocessing and a recursive MFD algorithm) on a CUDA-compatible GPU. For the parallelization of an MFD algorithm (MFD-md), two different parallelization strategies using a GPU are explored. The first parallelization strategy, which has been used in the existing parallel SFD algorithm on GPU, suffers from redundant computation. Therefore, we designed a parallelization strategy based on graph theory. The application results show that the proposed parallel approach to calculate flow accumulations on a GPU performs much faster than either sequential algorithms or other parallel GPU
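
    To make the recurrence in step (2) concrete, here is a minimal sequential Python sketch of flow accumulation under the single-flow-direction (D8) rule on a depression-free DEM. The paper's contribution is a GPU-parallel multiple-flow-direction version; this serial D8 sketch only illustrates the underlying computation, and the test DEM is made up.

      import numpy as np

      OFFSETS = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]

      def d8_flow_accumulation(dem):
          rows, cols = dem.shape
          downstream = {}
          for r in range(rows):
              for c in range(cols):
                  best, drop = None, 0.0
                  for dr, dc in OFFSETS:
                      rr, cc = r + dr, c + dc
                      if 0 <= rr < rows and 0 <= cc < cols:
                          # Slope to the neighbour, scaled by distance.
                          d = (dem[r, c] - dem[rr, cc]) / (dr * dr + dc * dc) ** 0.5
                          if d > drop:
                              best, drop = (rr, cc), d
                  if best is not None:
                      downstream[(r, c)] = best
          # Visit cells from highest to lowest so every upstream contribution
          # is known before a cell passes its total downstream.
          acc = np.ones_like(dem, dtype=np.int64)   # each cell contributes itself
          for r, c in sorted(np.ndindex(rows, cols), key=lambda rc: -dem[rc]):
              if (r, c) in downstream:
                  acc[downstream[(r, c)]] += acc[r, c]
          return acc

      dem = np.array([[5., 4., 3.], [4., 3., 2.], [3., 2., 1.]])
      print(d8_flow_accumulation(dem))   # outlet cell accumulates all 9 cells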

  16. HYBRID FAST HANKEL TRANSFORM ALGORITHM FOR ELECTROMAGNETIC MODELING

    EPA Science Inventory

    A hybrid fast Hankel transform algorithm has been developed that uses several complementary features of two existing algorithms: Anderson's digital filtering or fast Hankel transform (FHT) algorithm and Chave's quadrature and continued fraction algorithm. A hybrid FHT subprogram ...

  17. Efficient Record Linkage Algorithms Using Complete Linkage Clustering

    PubMed Central

    Mamun, Abdullah-Al; Aseltine, Robert; Rajasekaran, Sanguthevar

    2016-01-01

    Datasets from different agencies often contain records for the same individuals. Linking these datasets to identify all the records belonging to the same individuals is a crucial and challenging problem, especially given the large volumes of data. A large number of available algorithms for record linkage are prone to either time inefficiency or low accuracy in finding matches and non-matches among the records. In this paper we propose efficient as well as reliable sequential and parallel algorithms for the record linkage problem employing hierarchical clustering methods. We employ complete linkage hierarchical clustering algorithms to address this problem. In addition to hierarchical clustering, we also use two other techniques: elimination of duplicate records and blocking. Our algorithms use sorting as a sub-routine to identify identical copies of records. We have tested our algorithms on datasets with millions of synthetic records. Experimental results show that our algorithms achieve nearly 100% accuracy. Parallel implementations achieve almost linear speedups. Time complexities of these algorithms do not exceed those of previous best-known algorithms. Our proposed algorithms outperform previous best-known algorithms in terms of accuracy while consuming reasonable run times. PMID:27124604

  18. Efficient Record Linkage Algorithms Using Complete Linkage Clustering.

    PubMed

    Mamun, Abdullah-Al; Aseltine, Robert; Rajasekaran, Sanguthevar

    2016-01-01

    Datasets from different agencies often contain records for the same individuals. Linking these datasets to identify all the records belonging to the same individuals is a crucial and challenging problem, especially given the large volumes of data. A large number of available algorithms for record linkage are prone to either time inefficiency or low accuracy in finding matches and non-matches among the records. In this paper we propose efficient as well as reliable sequential and parallel algorithms for the record linkage problem employing hierarchical clustering methods. We employ complete linkage hierarchical clustering algorithms to address this problem. In addition to hierarchical clustering, we also use two other techniques: elimination of duplicate records and blocking. Our algorithms use sorting as a sub-routine to identify identical copies of records. We have tested our algorithms on datasets with millions of synthetic records. Experimental results show that our algorithms achieve nearly 100% accuracy. Parallel implementations achieve almost linear speedups. Time complexities of these algorithms do not exceed those of previous best-known algorithms. Our proposed algorithms outperform previous best-known algorithms in terms of accuracy while consuming reasonable run times.
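
    A minimal Python sketch of the pipeline described above follows: sorting removes identical copies within a block, then records are clustered with a complete-linkage rule, joining a cluster only when a record is close to every member. The edit-distance threshold is a hypothetical stand-in for the papers' actual matching criteria.

      def edit_distance(a, b):
          # Standard dynamic-programming Levenshtein distance.
          prev = list(range(len(b) + 1))
          for i, ca in enumerate(a, 1):
              cur = [i]
              for j, cb in enumerate(b, 1):
                  cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
              prev = cur
          return prev[-1]

      def link_block(records, threshold=2):
          records = sorted(set(records))        # sorting removes identical copies
          clusters = []                         # each cluster is one linked entity
          for rec in records:
              for cl in clusters:
                  # Complete linkage: join only if rec is close to EVERY member.
                  if all(edit_distance(rec, m) <= threshold for m in cl):
                      cl.append(rec)
                      break
              else:
                  clusters.append([rec])
          return clusters

      block = ["jon smith", "john smith", "john smith", "jane smyth"]
      print(link_block(block))   # [['jane smyth'], ['john smith', 'jon smith']]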

  19. An Implementation of RC4+ Algorithm and Zig-zag Algorithm in a Super Encryption Scheme for Text Security

    NASA Astrophysics Data System (ADS)

    Budiman, M. A.; Amalia; Chayanie, N. I.

    2018-03-01

    Cryptography is the art and science of using mathematical methods to preserve message security. There are two types of cryptography, namely classical and modern cryptography. Nowadays, most people would rather use modern cryptography than classical cryptography because it is harder to break than the classical one. One classical algorithm is the Zig-zag algorithm, which uses the transposition technique: the original message is unreadable unless the person has the key to decrypt the message. To improve the security, the Zig-zag Cipher is combined with the RC4+ Cipher, one of the symmetric key algorithms in the form of a stream cipher. The two algorithms are combined to make a super-encryption. By combining these two algorithms, the message will be harder for a cryptanalyst to break. The result showed that the complexity of the combined algorithm is Θ(n^2), while the complexities of the Zig-zag Cipher and the RC4+ Cipher are Θ(n^2) and Θ(n), respectively.
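
    The transposition stage is straightforward to sketch. The following Python illustrates a Zig-zag (rail-fence) cipher over three rails; the RC4+ keystream stage of the super-encryption is omitted, and the rail count is an illustrative parameter (two or more rails assumed).

      def zigzag_encrypt(text, rails):
          # Write characters diagonally across the rails, then read row by row.
          rows = [[] for _ in range(rails)]
          row, step = 0, 1
          for ch in text:
              rows[row].append(ch)
              if row == 0:
                  step = 1
              elif row == rails - 1:
                  step = -1
              row += step
          return "".join("".join(r) for r in rows)

      def zigzag_decrypt(cipher, rails):
          # Recompute the rail index of every position, then refill the rails.
          pattern, row, step = [], 0, 1
          for _ in cipher:
              pattern.append(row)
              if row == 0:
                  step = 1
              elif row == rails - 1:
                  step = -1
              row += step
          it = iter(cipher)
          content = {r: [next(it) for p in pattern if p == r] for r in range(rails)}
          return "".join(content[r].pop(0) for r in pattern)

      c = zigzag_encrypt("SUPERENCRYPTION", 3)
      assert zigzag_decrypt(c, 3) == "SUPERENCRYPTION"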

  20. Psychophysical Comparisons in Image Compression Algorithms.

    DTIC Science & Technology

    1999-03-01

    Leister, M., "Lossy Lempel-Ziv Algorithm for Large Alphabet Sources and Applications to Image Compression," IEEE Proceedings, v. I, pp. 225-228, September ... 1623-1642, September 1990. Sanford, M.A., An Analysis of Data Compression Algorithms used in the Transmission of Imagery, Master's Thesis, Naval ... Naval Postgraduate School, Monterey, California. Thesis: Psychophysical Comparisons in Image Compression Algorithms, by Christopher J. Bodine, March ...

  1. Least significant qubit algorithm for quantum images

    NASA Astrophysics Data System (ADS)

    Sang, Jianzhi; Wang, Shen; Li, Qiong

    2016-11-01

    To study the feasibility of the classical image least significant bit (LSB) information hiding algorithm on a quantum computer, a least significant qubit (LSQb) information hiding algorithm for quantum images is proposed. In this paper, we focus on a novel quantum representation for color digital images (NCQI). Firstly, by designing a three-qubit comparator and unitary operators, the reasonableness and feasibility of LSQb based on NCQI are presented. Then, the concrete LSQb information hiding algorithm is proposed, which embeds the secret qubits into the least significant qubits of the RGB channels of a quantum cover image. The quantum circuit of the LSQb information hiding algorithm is also illustrated. Furthermore, the secret-extracting algorithm and circuit are illustrated through utilizing controlled-swap gates. The two merits of our algorithm are: (1) it is absolutely blind and (2) when extracting secret binary qubits, it does not need any quantum measurement operation or any other help from a classical computer. Finally, simulation and comparative analysis show the performance of our algorithm.

  2. Novel and efficient tag SNPs selection algorithms.

    PubMed

    Chen, Wen-Pei; Hung, Che-Lun; Tsai, Suh-Jen Jane; Lin, Yaw-Ling

    2014-01-01

    SNPs are the most abundant form of genetic variation among species; association studies between complex diseases and SNPs or haplotypes have received great attention. However, these studies are restricted by the cost of genotyping all SNPs; thus, it is necessary to find smaller subsets, or tag SNPs, representing the rest of the SNPs. In fact, the existing tag SNP selection algorithms are notoriously time-consuming. An efficient algorithm for tag SNP selection was presented and applied to analyze the HapMap YRI data. The experimental results show that the proposed algorithm can achieve better performance than the existing tag SNP selection algorithms; in most cases, this proposed algorithm is at least ten times faster than the existing methods. In many cases, when the redundant ratio of the block is high, the proposed algorithm can even be thousands of times faster than the previously known methods. Tools and web services for haplotype block analysis, integrated with the Hadoop MapReduce framework, are also developed using the proposed algorithm as the computation kernel.

  3. Exact and Heuristic Algorithms for Runway Scheduling

    NASA Technical Reports Server (NTRS)

    Malik, Waqar A.; Jung, Yoon C.

    2016-01-01

    This paper explores the Single Runway Scheduling (SRS) problem with arrivals, departures, and crossing aircraft on the airport surface. Constraints for wake vortex separations, departure area navigation separations and departure time window restrictions are explicitly considered. The main objective of this research is to develop exact and heuristic based algorithms that can be used in real-time decision support tools for Air Traffic Control Tower (ATCT) controllers. The paper provides a multi-objective dynamic programming (DP) based algorithm that finds the exact solution to the SRS problem, but may prove unusable for application in a real-time environment due to large computation times for moderate-sized problems. We next propose a second algorithm that uses heuristics to restrict the search space for the DP based algorithm. A third algorithm based on a combination of insertion and local search (ILS) heuristics is then presented. Simulations conducted for the east side of Dallas/Fort Worth International Airport allow comparison of the three proposed algorithms and indicate that the ILS algorithm performs favorably in its ability to find efficient solutions and in its computation times.
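
    To make the exact-DP idea concrete, here is a minimal Python sketch that sequences a small queue of operations over subsets (a Held-Karp-style recursion over bitmasks), minimising total wake-vortex separation time. The separation matrix and weight classes are hypothetical, and the paper's multi-objective formulation with time windows and crossings is not reproduced.

      from functools import lru_cache

      sep = {                     # required seconds between leader -> follower
          ("H", "H"): 90, ("H", "S"): 120, ("S", "H"): 60, ("S", "S"): 60,
      }
      aircraft = ["H", "S", "S", "H"]    # weight classes of queued operations

      def best_sequence(aircraft):
          n = len(aircraft)

          @lru_cache(maxsize=None)
          def dp(mask, last):
              # Minimal remaining separation time, having scheduled the set
              # `mask` with aircraft `last` most recent on the runway.
              if mask == (1 << n) - 1:
                  return 0
              best = float("inf")
              for nxt in range(n):
                  if not mask & (1 << nxt):
                      gap = sep[(aircraft[last], aircraft[nxt])]
                      best = min(best, gap + dp(mask | (1 << nxt), nxt))
              return best

          return min(dp(1 << first, first) for first in range(n))

      print(best_sequence(aircraft))   # 210 here: S, S, H, H is optimal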

  4. Efficient sequential and parallel algorithms for record linkage

    PubMed Central

    Mamun, Abdullah-Al; Mi, Tian; Aseltine, Robert; Rajasekaran, Sanguthevar

    2014-01-01

    Background and objective Integrating data from multiple sources is a crucial and challenging problem. Even though there exist numerous algorithms for record linkage or deduplication, they suffer from either large time needs or restrictions on the number of datasets that they can integrate. In this paper we report efficient sequential and parallel algorithms for record linkage which handle any number of datasets and outperform previous algorithms. Methods Our algorithms employ hierarchical clustering algorithms as the basis. A key idea that we use is radix sorting on certain attributes to eliminate identical records before any further processing. Another novel idea is to form a graph that links similar records and find the connected components. Results Our sequential and parallel algorithms have been tested on a real dataset of 1 083 878 records and synthetic datasets ranging in size from 50 000 to 9 000 000 records. Our sequential algorithm runs at least two times faster, for any dataset, than the previous best-known algorithm, the two-phase algorithm using faster computation of the edit distance (TPA (FCED)). The speedups obtained by our parallel algorithm are almost linear. For example, we get a speedup of 7.5 with 8 cores (residing in a single node), 14.1 with 16 cores (residing in two nodes), and 26.4 with 32 cores (residing in four nodes). Conclusions We have compared the performance of our sequential algorithm with TPA (FCED) and found that our algorithm outperforms the previous one. The accuracy is the same as that of this previous best-known algorithm. PMID:24154837
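
    The graph idea in the Methods can be sketched with union-find: link pairs of records whose similarity passes a threshold and take connected components, each component being one linked entity. The pair list below is illustrative; the paper derives the edges from edit-distance comparisons after deduplication and blocking.

      def find(parent, x):
          while parent[x] != x:
              parent[x] = parent[parent[x]]   # path halving keeps trees shallow
              x = parent[x]
          return x

      def connected_components(n_records, similar_pairs):
          parent = list(range(n_records))
          for a, b in similar_pairs:
              ra, rb = find(parent, a), find(parent, b)
              if ra != rb:
                  parent[ra] = rb             # union the two components
          comps = {}
          for i in range(n_records):
              comps.setdefault(find(parent, i), []).append(i)
          return list(comps.values())

      # Records 0, 1, 3 are pairwise linked via edges (0,1) and (1,3); 2 stands alone.
      print(connected_components(4, [(0, 1), (1, 3)]))   # [[0, 1, 3], [2]]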

  5. Runtime support for parallelizing data mining algorithms

    NASA Astrophysics Data System (ADS)

    Jin, Ruoming; Agrawal, Gagan

    2002-03-01

    With recent technological advances, shared memory parallel machines have become more scalable, and offer large main memories and high bus bandwidths. They are emerging as good platforms for data warehousing and data mining. In this paper, we focus on shared memory parallelization of data mining algorithms. We have developed a series of techniques for parallelization of data mining algorithms, including full replication, full locking, fixed locking, optimized full locking, and cache-sensitive locking. Unlike previous work on shared memory parallelization of specific data mining algorithms, all of our techniques apply to a large number of common data mining algorithms. In addition, we propose a reduction-object based interface for specifying a data mining algorithm. We show how our runtime system can apply any of the techniques we have developed starting from a common specification of the algorithm.
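
    A minimal Python sketch of the reduction-object idea under the full-replication technique follows: each thread accumulates into a private copy of the object, and the copies are merged at the end. The accumulate/merge interface is an illustrative guess at the shape of such an API, not the authors' actual specification.

      import threading
      from collections import Counter

      class CountReduction:
          """Reduction object for frequency counting, a common mining kernel."""
          def __init__(self):
              self.counts = Counter()
          def accumulate(self, item):
              self.counts[item] += 1
          def merge(self, other):
              self.counts.update(other.counts)

      def parallel_count(data, n_threads=4):
          replicas = [CountReduction() for _ in range(n_threads)]   # full replication
          def work(tid):
              for item in data[tid::n_threads]:   # strided partition of the input
                  replicas[tid].accumulate(item)
          threads = [threading.Thread(target=work, args=(t,)) for t in range(n_threads)]
          for t in threads:
              t.start()
          for t in threads:
              t.join()
          result = CountReduction()
          for r in replicas:                      # merge the replicated objects
              result.merge(r)
          return result.counts

      print(parallel_count(list("abracadabra") * 100))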

  6. Problem solving with genetic algorithms and Splicer

    NASA Technical Reports Server (NTRS)

    Bayer, Steven E.; Wang, Lui

    1991-01-01

    Genetic algorithms are highly parallel, adaptive search procedures (i.e., problem-solving methods) loosely based on the processes of population genetics and Darwinian survival of the fittest. Genetic algorithms have proven useful in domains where other optimization techniques perform poorly. The main purpose of the paper is to discuss a NASA-sponsored software development project to develop a general-purpose tool for using genetic algorithms. The tool, called Splicer, can be used to solve a wide variety of optimization problems and is currently available from NASA and COSMIC. This discussion is preceded by an introduction to basic genetic algorithm concepts and a discussion of genetic algorithm applications.

  7. Computional algorithm for lifetime exposure to antimicrobials in pigs using register data-The LEA algorithm.

    PubMed

    Birkegård, Anna Camilla; Andersen, Vibe Dalhoff; Halasa, Tariq; Jensen, Vibeke Frøkjær; Toft, Nils; Vigre, Håkan

    2017-10-01

    Accurate and detailed data on antimicrobial exposure in pig production are essential when studying the association between antimicrobial exposure and antimicrobial resistance. Due to difficulties in obtaining primary data on antimicrobial exposure in a large number of farms, there is a need for a robust and valid method to estimate the exposure using register data. An approach that estimates the antimicrobial exposure in every rearing period during the lifetime of a pig using register data was developed into a computational algorithm. In this approach data from national registers on antimicrobial purchases, movements of pigs and farm demographics registered at farm level are used. The algorithm traces batches of pigs retrospectively from slaughter to the farm(s) that housed the pigs during their finisher, weaner, and piglet period. Subsequently, the algorithm estimates the antimicrobial exposure as the number of Animal Defined Daily Doses for treatment of one kg pig in each of the rearing periods. Thus, the antimicrobial purchase data at farm level are translated into antimicrobial exposure estimates at batch level. A batch of pigs is defined here as pigs sent to slaughter on the same day from the same farm. In this study we present, validate, and optimise a computational algorithm that calculates the lifetime exposure to antimicrobials for slaughter pigs. The algorithm was evaluated by comparing the computed estimates to data on antimicrobial usage from farm records in 15 farm units. We found a good positive correlation between the two estimates. The algorithm was run for Danish slaughter pigs sent to slaughter in January to March 2015 from farms with more than 200 finishers to estimate the proportion of farms that it was applicable for. In the final process, the algorithm was successfully run for batches of pigs originating from 3026 farms with finisher units (77% of the initial population). This number can be increased if more accurate register data can be

  8. A Robustly Stabilizing Model Predictive Control Algorithm

    NASA Technical Reports Server (NTRS)

    Ackmece, A. Behcet; Carson, John M., III

    2007-01-01

    A model predictive control (MPC) algorithm that differs from prior MPC algorithms has been developed for controlling an uncertain nonlinear system. This algorithm guarantees the resolvability of an associated finite-horizon optimal-control problem in a receding-horizon implementation.

  9. US-VISIT Identity Matching Algorithm Evaluation Program: ADIS Algorithm Evaluation Project Plan Update

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grant, C W; Lenderman, J S; Gansemer, J D

    This document is an update to the 'ADIS Algorithm Evaluation Project Plan' specified in the Statement of Work for the US-VISIT Identity Matching Algorithm Evaluation Program, as deliverable II.D.1. The original plan was delivered in August 2010. This document modifies the plan to reflect changes to the deliverables caused by delays in obtaining a database refresh, and describes the revised schedule of the program deliverables. The detailed description of the processes used, the statistical analysis processes, and the results of the statistical analysis will be described fully in the program deliverables. The US-VISIT Identity Matching Algorithm Evaluation Program is work performed by Lawrence Livermore National Laboratory (LLNL) under IAA HSHQVT-07-X-00002 P00004 from the Department of Homeland Security (DHS).

  10. On Super-Resolution and the MUSIC Algorithm,

    DTIC Science & Technology

    1985-05-01

    ON SUPER-RESOLUTION AND THE MUSIC ALGORITHM. AUTHOR: G. D. de Villiers. DATE: May 1985. SUMMARY: Simulation results for phased array signal processing using the MUSIC algorithm are presented. The model used is more realistic than previous ones and it gives an indication as to how the algorithm would perform ... 1. INTRODUCTION: At present there is a considerable amount of interest in "high-resolution" ...
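
    For readers unfamiliar with the algorithm the report evaluates, the following numpy sketch computes a MUSIC pseudospectrum for a uniform linear array. The array geometry, source angles, snapshot count, and noise level are illustrative test values unrelated to the report's simulations.

      import numpy as np

      M, d, n_src, snap = 8, 0.5, 2, 200     # sensors, spacing (wavelengths), sources, snapshots
      angles = np.deg2rad([-10.0, 22.0])     # true directions of arrival

      def steering(theta):
          return np.exp(2j * np.pi * d * np.arange(M)[:, None] * np.sin(theta))

      rng = np.random.default_rng(0)
      A = steering(angles)
      S = rng.standard_normal((n_src, snap)) + 1j * rng.standard_normal((n_src, snap))
      N = 0.1 * (rng.standard_normal((M, snap)) + 1j * rng.standard_normal((M, snap)))
      X = A @ S + N

      R = X @ X.conj().T / snap              # sample covariance matrix
      w, V = np.linalg.eigh(R)               # eigenvalues ascending
      En = V[:, : M - n_src]                 # noise subspace

      grid = np.deg2rad(np.linspace(-90, 90, 1801))
      a = steering(grid)
      # Pseudospectrum is large where a(theta) is orthogonal to the noise subspace.
      p = 1.0 / np.sum(np.abs(En.conj().T @ a) ** 2, axis=0)
      peaks = np.where((p[1:-1] > p[:-2]) & (p[1:-1] > p[2:]))[0] + 1
      top = peaks[np.argsort(p[peaks])[-n_src:]]
      print(np.sort(np.rad2deg(grid[top])))  # close to [-10, 22]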

  11. An Image Encryption Algorithm Based on Information Hiding

    NASA Astrophysics Data System (ADS)

    Ge, Xin; Lu, Bin; Liu, Fenlin; Gong, Daofu

    Aiming at resolving the conflict between security and efficiency in the design of chaotic image encryption algorithms, an image encryption algorithm based on information hiding is proposed, following the “one-time pad” idea. A random parameter is introduced to ensure a different keystream for each encryption, which gives the scheme the character of a “one-time pad” and markedly improves the security of the algorithm without a significant increase in complexity. The random parameter is embedded into the ciphered image with information hiding technology, which avoids negotiation for its transport and makes the application of the algorithm easier. Algorithm analysis and experiments show that the algorithm is secure against chosen-plaintext attack, differential attack and divide-and-conquer attack, and has good statistical properties in ciphered images.

  12. Scalability problems of simple genetic algorithms.

    PubMed

    Thierens, D

    1999-01-01

    Scalable evolutionary computation has become an intensively studied research topic in recent years. The issue of scalability is predominant in any field of algorithmic design, but it became particularly relevant for the design of competent genetic algorithms once the scalability problems of simple genetic algorithms were understood. Here we present some of the work that has aided in getting a clear insight into the scalability problems of simple genetic algorithms. In particular, we discuss the important issue of building block mixing. We show how the need for mixing places a boundary in the GA parameter space that, together with the boundary from the schema theorem, delimits the region where the GA converges reliably to the optimum in problems of bounded difficulty. This region shrinks rapidly with increasing problem size unless the building blocks are tightly linked in the problem coding structure. In addition, we look at how straightforward extensions of the simple genetic algorithm, namely elitism, niching, and restricted mating, do not significantly alleviate the scalability problems.

  13. Passive microwave algorithm development and evaluation

    NASA Technical Reports Server (NTRS)

    Petty, Grant W.

    1995-01-01

    The scientific objectives of this grant are: (1) thoroughly evaluate, both theoretically and empirically, all available Special Sensor Microwave Imager (SSM/I) retrieval algorithms for column water vapor, column liquid water, and surface wind speed; (2) where both appropriate and feasible, develop, validate, and document satellite passive microwave retrieval algorithms that offer significantly improved performance compared with currently available algorithms; and (3) refine and validate a novel physical inversion scheme for retrieving rain rate over the ocean. This report summarizes work accomplished or in progress during the first year of a three year grant. The emphasis during the first year has been on the validation and refinement of the rain rate algorithm published by Petty and on the analysis of independent data sets that can be used to help evaluate the performance of rain rate algorithms over remote areas of the ocean. Two articles in the area of global oceanic precipitation are attached.

  14. Birkhoffian symplectic algorithms derived from Hamiltonian symplectic algorithms

    NASA Astrophysics Data System (ADS)

    Xin-Lei, Kong; Hui-Bin, Wu; Feng-Xiang, Mei

    2016-01-01

    In this paper, we focus on the construction of structure-preserving algorithms for Birkhoffian systems, based on existing symplectic schemes for the Hamiltonian equations. The key of the method is to seek an invertible transformation which reduces the Birkhoffian equations to the Hamiltonian equations. When such a transformation exists, applying the corresponding inverse map to a symplectic discretization of the Hamiltonian equations yields difference schemes that are verified to be Birkhoffian symplectic for the original Birkhoffian equations. To illustrate the operation of the method, we construct several desirable algorithms for the linear damped oscillator and the single pendulum with linear dissipation, respectively. All of them exhibit excellent numerical behavior, especially in preserving conserved quantities. Project supported by the National Natural Science Foundation of China (Grant No. 11272050), the Excellent Young Teachers Program of North China University of Technology (Grant No. XN132), and the Construction Plan for Innovative Research Team of North China University of Technology (Grant No. XN129).

  15. Kidney-inspired algorithm for optimization problems

    NASA Astrophysics Data System (ADS)

    Jaddi, Najmeh Sadat; Alvankarian, Jafar; Abdullah, Salwani

    2017-01-01

    In this paper, a population-based algorithm inspired by the kidney process in the human body is proposed. In this algorithm the solutions are filtered at a rate that is calculated from the mean of the objective values of all solutions in the current population of each iteration. The filtered solutions, as the better solutions, are moved to the filtered blood, and the rest are transferred to the waste, representing the worse solutions. This is a simulation of the glomerular filtration process in the kidney. The waste solutions are reconsidered in later iterations if, after a defined movement operator is applied, they satisfy the filtration rate; otherwise they are expelled from the waste solutions, simulating the reabsorption and excretion functions of the kidney. In addition, a solution assigned as a better solution is secreted if it is not better than the worst solutions, simulating the secretion process of blood in the kidney. After placement of all the solutions in the population, the best of them is ranked, the waste and filtered blood are merged to become a new population, and the filtration rate is updated. Filtration provides the required exploitation while generating a new solution, and reabsorption gives the necessary exploration for the algorithm. The algorithm is assessed by applying it to eight well-known benchmark test functions and comparing the results with other algorithms in the literature. The performance of the proposed algorithm is better on seven out of eight test functions when compared with the most recent research in the literature. The proposed kidney-inspired algorithm is able to find the global optimum with fewer function evaluations on six out of eight test functions. A statistical analysis further confirms the ability of this algorithm to produce good-quality results.
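
    A minimal Python sketch of the loop described above, for a one-dimensional minimisation, follows. The movement operator, filtration-rate update, and replacement rules are simplified stand-ins, and the secretion and ranking details are abbreviated; this is not the paper's implementation.

      import random

      def kidney_inspired(f, lo, hi, pop=20, iters=100, seed=1):
          rng = random.Random(seed)
          population = [rng.uniform(lo, hi) for _ in range(pop)]
          best = min(population, key=f)
          for _ in range(iters):
              fr = sum(f(x) for x in population) / len(population)  # filtration rate
              filtered, waste = [], []
              for x in population:
                  # Movement operator: drift toward the best solution plus noise.
                  x2 = x + rng.uniform(0, 1) * (best - x)
                  (filtered if f(x2) < fr else waste).append(x2)
              # Reabsorption: waste solutions get one more chance after a move;
              # those still failing the rate are excreted and replaced randomly.
              for x in waste:
                  x2 = x + rng.uniform(-0.1, 0.1) * (hi - lo)
                  filtered.append(x2 if f(x2) < fr else rng.uniform(lo, hi))
              population = filtered
              best = min(population + [best], key=f)
          return best

      print(kidney_inspired(lambda x: (x - 3.0) ** 2, -10, 10))   # near 3.0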

  16. Optimally stopped variational quantum algorithms

    NASA Astrophysics Data System (ADS)

    Vinci, Walter; Shabani, Alireza

    2018-04-01

    Quantum processors promise a paradigm shift in high-performance computing which needs to be assessed by accurate benchmarking measures. In this article, we introduce a benchmark for the variational quantum algorithm (VQA), recently proposed as a heuristic algorithm for small-scale quantum processors. In VQA, a classical optimization algorithm guides the processor's quantum dynamics to yield the best solution for a given problem. A complete assessment of the scalability and competitiveness of VQA should take into account both the quality and the time of dynamics optimization. The method of optimal stopping, employed here, provides such an assessment by explicitly including time as a cost factor. Here, we showcase this measure for benchmarking VQA as a solver for some quadratic unconstrained binary optimization problems. Moreover, we show that a better choice for the cost function of the classical routine can significantly improve the performance of the VQA algorithm and even improve its scaling properties.

  17. Multilevel algorithms for nonlinear optimization

    NASA Technical Reports Server (NTRS)

    Alexandrov, Natalia; Dennis, J. E., Jr.

    1994-01-01

    Multidisciplinary design optimization (MDO) gives rise to nonlinear optimization problems characterized by a large number of constraints that naturally occur in blocks. We propose a class of multilevel optimization methods motivated by the structure and number of constraints and by the expense of the derivative computations for MDO. The algorithms are an extension to the nonlinear programming problem of the successful class of local Brown-Brent algorithms for nonlinear equations. Our extensions allow the user to partition constraints into arbitrary blocks to fit the application, and they separately process each block and the objective function, restricted to certain subspaces. The methods use trust regions as a globalization strategy, and they have been shown to be globally convergent under reasonable assumptions. The multilevel algorithms can be applied to all classes of MDO formulations. Multilevel algorithms for solving nonlinear systems of equations are a special case of the multilevel optimization methods. In this case, they can be viewed as a trust-region globalization of the Brown-Brent class.

  18. A new optimized GA-RBF neural network algorithm.

    PubMed

    Jia, Weikuan; Zhao, Dean; Shen, Tian; Su, Chunyang; Hu, Chanli; Zhao, Yuyan

    2014-01-01

    When confronting complex problems, the radial basis function (RBF) neural network has the advantages of adaptivity and self-learning ability, but it is difficult to determine the number of hidden-layer neurons, and the ability to learn the weights from the hidden layer to the output layer is weak; these deficiencies easily lead to decreased learning ability and recognition precision. Aiming at this problem, we propose a new optimized RBF neural network algorithm based on a genetic algorithm (the GA-RBF algorithm), which uses a genetic algorithm to optimize the weights and structure of the RBF neural network; it adopts a new approach of hybrid encoding and simultaneous optimization. Binary encoding is used for the number of hidden-layer neurons, and real encoding is used for the connection weights; the hidden-layer neuron count and the connection weights are thus optimized simultaneously in the new algorithm. However, the connection-weight optimization is not complete; we use the least mean square (LMS) algorithm for further learning, and finally obtain the new algorithm model. Using two UCI standard data sets to test the new algorithm, the results show that the new algorithm improves operating efficiency in dealing with complex problems and also improves recognition precision, which proves that the new algorithm is valid.

  19. Algorithmic formulation of control problems in manipulation

    NASA Technical Reports Server (NTRS)

    Bejczy, A. K.

    1975-01-01

    The basic characteristics of manipulator control algorithms are discussed. The state of the art in the development of manipulator control algorithms is briefly reviewed. Different end-point control techniques are described together with control algorithms which operate on external sensor (imaging, proximity, tactile, and torque/force) signals in realtime. Manipulator control development at JPL is briefly described and illustrated with several figures. The JPL work pays special attention to the front or operator input end of the control algorithms.

  20. Annealed Importance Sampling Reversible Jump MCMC algorithms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Karagiannis, Georgios; Andrieu, Christophe

    2013-03-20

    It will soon be 20 years since reversible jump Markov chain Monte Carlo (RJ-MCMC) algorithms have been proposed. They have significantly extended the scope of Markov chain Monte Carlo simulation methods, offering the promise to be able to routinely tackle transdimensional sampling problems, as encountered in Bayesian model selection problems for example, in a principled and flexible fashion. Their practical efficient implementation, however, still remains a challenge. A particular difficulty encountered in practice is in the choice of the dimension matching variables (both their nature and their distribution) and the reversible transformations which allow one to define the one-to-one mappings underpinning the design of these algorithms. Indeed, even seemingly sensible choices can lead to algorithms with very poor performance. The focus of this paper is the development and performance evaluation of a method, annealed importance sampling RJ-MCMC (aisRJ), which addresses this problem by mitigating the sensitivity of RJ-MCMC algorithms to the aforementioned poor design. As we shall see the algorithm can be understood as being an “exact approximation” of an idealized MCMC algorithm that would sample from the model probabilities directly in a model selection set-up. Such an idealized algorithm may have good theoretical convergence properties, but typically cannot be implemented, and our algorithms can approximate the performance of such idealized algorithms to an arbitrary degree while not introducing any bias for any degree of approximation. Our approach combines the dimension matching ideas of RJ-MCMC with annealed importance sampling and its Markov chain Monte Carlo implementation. We illustrate the performance of the algorithm with numerical simulations which indicate that, although the approach may at first appear computationally involved, it is in fact competitive.

  1. Recent Advancements in Lightning Jump Algorithm Work

    NASA Technical Reports Server (NTRS)

    Schultz, Christopher J.; Petersen, Walter A.; Carey, Lawrence D.

    2010-01-01

    In the past year, the primary objectives were to show the usefulness of total lightning as compared to traditional cloud-to-ground (CG) networks, test the lightning jump algorithm configurations in other regions of the country, increase the number of thunderstorms within our thunderstorm database, and pinpoint environments that could prove difficult for any lightning jump configuration. A total of 561 thunderstorms have been examined in the past year (409 non-severe, 152 severe) from four regions of the country (North Alabama, Washington D.C., High Plains of CO/KS, and Oklahoma). Results continue to indicate that the 2σ lightning jump algorithm configuration holds the most promise in terms of prospective operational lightning jump algorithms, with a probability of detection (POD) of 81%, a false alarm rate (FAR) of 45%, a critical success index (CSI) of 49% and a Heidke Skill Score (HSS) of 0.66. The second best performing algorithm configuration was the Threshold 4 algorithm, which had a POD of 72%, a FAR of 51%, a CSI of 41% and an HSS of 0.58. Because a more complex algorithm configuration shows the most promise in terms of prospective operational lightning jump algorithms, accurate thunderstorm cell tracking work must be undertaken to track lightning trends on an individual thunderstorm basis over time. While these numbers for the 2σ configuration are impressive, the algorithm does have its weaknesses. Specifically, low-topped and tropical cyclone thunderstorm environments present issues for the 2σ lightning jump algorithm because of the impact of suppressed vertical depth on overall flash counts (i.e., a relative dearth in lightning). For example, in a sample of 120 thunderstorms from northern Alabama that contained 72 events missed by the 2σ algorithm, 36% of the misses were associated with these two environments (17 storms).
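
    For illustration, a 2σ-style jump test can be sketched in a few lines of Python: flag a jump when the latest increase in total flash rate exceeds twice the standard deviation of recent rate changes. The window length and analysis period below are illustrative, not the exact operational configuration evaluated in the report.

      import statistics

      def lightning_jump(flash_rates, history=6):
          """flash_rates: total flash rate per analysis period (e.g., per 2 min)."""
          if len(flash_rates) < history + 2:
              return False
          changes = [b - a for a, b in zip(flash_rates, flash_rates[1:])]
          recent, current = changes[-(history + 1):-1], changes[-1]
          sigma = statistics.pstdev(recent)     # spread of the recent rate changes
          return sigma > 0 and current > 2.0 * sigma

      rates = [2, 3, 3, 4, 5, 4, 5, 18]         # abrupt increase in the last period
      print(lightning_jump(rates))              # True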

  2. Online Performance-Improvement Algorithms

    DTIC Science & Technology

    1994-08-01

    fault rate as the request sequence length approaches infinity. Their algorithms are based on an innovative use of the classical Ziv-Lempel [85] data ... Report CS-TR-348-91. [85] J. Ziv and A. Lempel. Compression of individual sequences via variable-rate coding. IEEE Trans. Inf. Theory, 24:530-536, 1978. ... Deferred Data Structuring: Recall that our incremental multi-trip algorithm spreads the building of the fence-tree over several trips in order to

  3. An Innovative Thinking-Based Intelligent Information Fusion Algorithm

    PubMed Central

    Hu, Liang; Liu, Gang; Zhou, Jin

    2013-01-01

    This study proposes an intelligent algorithm that can realize information fusion, drawing on research achievements in brain cognitive theory and innovative computation. The algorithm treats knowledge as its core and treats information fusion as a knowledge-based innovative thinking process. Furthermore, the five key parts of this algorithm, namely information sensing and perception, memory storage, divergent thinking, convergent thinking, and an evaluation system, are simulated and modeled. The algorithm fully develops the innovative-thinking role of knowledge in information fusion and attempts to convert the abstract concepts of brain cognitive science into specific, operable research routes and strategies. Furthermore, the influence of each parameter of this algorithm on algorithm performance is analyzed and compared with that of classical intelligent algorithms through tests. Test results suggest that the algorithm proposed in this study can obtain the optimum problem solution with fewer objective evaluations, improve optimization effectiveness, and achieve the effective fusion of information. PMID:23956699

  4. An innovative thinking-based intelligent information fusion algorithm.

    PubMed

    Lu, Huimin; Hu, Liang; Liu, Gang; Zhou, Jin

    2013-01-01

    This study proposes an intelligent algorithm that can realize information fusion, drawing on research achievements in brain cognitive theory and innovative computation. The algorithm treats knowledge as its core and treats information fusion as a knowledge-based innovative thinking process. Furthermore, the five key parts of this algorithm, namely information sensing and perception, memory storage, divergent thinking, convergent thinking, and an evaluation system, are simulated and modeled. The algorithm fully develops the innovative-thinking role of knowledge in information fusion and attempts to convert the abstract concepts of brain cognitive science into specific, operable research routes and strategies. Furthermore, the influence of each parameter of this algorithm on algorithm performance is analyzed and compared with that of classical intelligent algorithms through tests. Test results suggest that the algorithm proposed in this study can obtain the optimum problem solution with fewer objective evaluations, improve optimization effectiveness, and achieve the effective fusion of information.

  5. Improved Bat Algorithm Applied to Multilevel Image Thresholding

    PubMed Central

    2014-01-01

    Multilevel image thresholding is a very important image processing technique that is used as a basis for image segmentation and further higher-level processing. However, the required computational time for exhaustive search grows exponentially with the number of desired thresholds. Swarm intelligence metaheuristics are well known as successful and efficient optimization methods for intractable problems. In this paper, we adapted one of the latest swarm intelligence algorithms, the bat algorithm, to the multilevel image thresholding problem. The results of testing on standard benchmark images show that the bat algorithm is comparable with other state-of-the-art algorithms. We then improved the standard bat algorithm, adding elements from differential evolution and from the artificial bee colony algorithm. Our proposed improved bat algorithm proved to be better than five other state-of-the-art algorithms, improving the quality of results in all cases and significantly improving convergence speed. PMID:25165733

  6. A Novel Hybrid Firefly Algorithm for Global Optimization.

    PubMed

    Zhang, Lina; Liu, Liqiang; Yang, Xin-She; Dai, Yuntao

    Global optimization problems are challenging to solve due to their nonlinearity and multimodality. Traditional algorithms such as gradient-based methods often struggle with such problems, and one current trend is to use metaheuristic algorithms. In this paper, a novel hybrid population-based global optimization algorithm, called the hybrid firefly algorithm (HFA), is proposed by combining the advantages of both the firefly algorithm (FA) and differential evolution (DE). FA and DE are executed in parallel to promote information sharing among the population and thus enhance searching efficiency. In order to evaluate the performance and efficiency of the proposed algorithm, a diverse set of selected benchmark functions is employed; these functions fall into two groups: unimodal and multimodal. The experimental results show better performance of the proposed algorithm compared to the original firefly algorithm (FA), differential evolution (DE) and particle swarm optimization (PSO) in the sense of avoiding local minima and increasing the convergence rate.

  7. A Novel Hybrid Firefly Algorithm for Global Optimization

    PubMed Central

    Zhang, Lina; Liu, Liqiang; Yang, Xin-She; Dai, Yuntao

    2016-01-01

    Global optimization problems are challenging to solve due to their nonlinearity and multimodality. Traditional algorithms such as gradient-based methods often struggle with such problems, and one current trend is to use metaheuristic algorithms. In this paper, a novel hybrid population-based global optimization algorithm, called the hybrid firefly algorithm (HFA), is proposed by combining the advantages of both the firefly algorithm (FA) and differential evolution (DE). FA and DE are executed in parallel to promote information sharing among the population and thus enhance searching efficiency. In order to evaluate the performance and efficiency of the proposed algorithm, a diverse set of selected benchmark functions is employed; these functions fall into two groups: unimodal and multimodal. The experimental results show better performance of the proposed algorithm compared to the original firefly algorithm (FA), differential evolution (DE) and particle swarm optimization (PSO) in the sense of avoiding local minima and increasing the convergence rate. PMID:27685869
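
    A compact Python sketch of the hybrid idea follows: one half of the population takes firefly moves, the other evolves by DE mutation and crossover, and the halves share the current best member each generation. The split, sharing rule, and all parameters are illustrative simplifications rather than the paper's settings.

      import random

      def hfa(f, dim, lo, hi, pop=20, iters=200, seed=0):
          rng = random.Random(seed)
          P = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop)]
          half = pop // 2
          for _ in range(iters):
              fa, de = P[:half], P[half:]
              # Firefly half: each firefly moves toward every brighter one.
              for i in range(len(fa)):
                  for y in fa:
                      if f(y) < f(fa[i]):
                          fa[i] = [xi + 0.8 * (yi - xi) + 0.05 * rng.uniform(-1, 1)
                                   for xi, yi in zip(fa[i], y)]
              # DE half: rand/1/bin mutation and crossover with greedy selection.
              for i, x in enumerate(de):
                  a, b, c = rng.sample(de, 3)
                  trial = [ai + 0.5 * (bi - ci) if rng.random() < 0.9 else xi
                           for xi, ai, bi, ci in zip(x, a, b, c)]
                  trial = [min(hi, max(lo, t)) for t in trial]
                  if f(trial) < f(x):
                      de[i] = trial
              P = fa + de
              P.sort(key=f)
              P[half] = P[0][:]      # share the global best with the DE half
          return P[0]

      print(hfa(lambda v: sum(x * x for x in v), dim=5, lo=-5, hi=5))  # near the origin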

  8. Efficient algorithms for single-axis attitude estimation

    NASA Technical Reports Server (NTRS)

    Shuster, M. D.

    1981-01-01

    The computationally efficient algorithms determine attitude from measurements of arc lengths and dihedral angles. The dependence of these algorithms on the solution of trigonometric equations was reduced. Both single-time and batch estimators are presented, along with a covariance analysis of each algorithm.

  9. A tunable algorithm for collective decision-making.

    PubMed

    Pratt, Stephen C; Sumpter, David J T

    2006-10-24

    Complex biological systems are increasingly understood in terms of the algorithms that guide the behavior of system components and the information pathways that link them. Much attention has been given to robust algorithms, or those that allow a system to maintain its functions in the face of internal or external perturbations. At the same time, environmental variation imposes a complementary need for algorithm versatility, or the ability to alter system function adaptively as external circumstances change. An important goal of systems biology is thus the identification of biological algorithms that can meet multiple challenges rather than being narrowly specified to particular problems. Here we show that emigrating colonies of the ant Temnothorax curvispinosus tune the parameters of a single decision algorithm to respond adaptively to two distinct problems: rapid abandonment of their old nest in a crisis and deliberative selection of the best available new home when their old nest is still intact. The algorithm uses a stepwise commitment scheme and a quorum rule to integrate information gathered by numerous individual ants visiting several candidate homes. By varying the rates at which they search for and accept these candidates, the ants yield a colony-level response that adaptively emphasizes either speed or accuracy. We propose such general but tunable algorithms as a design feature of complex systems, each algorithm providing elegant solutions to a wide range of problems.

  10. Localization Algorithms of Underwater Wireless Sensor Networks: A Survey

    PubMed Central

    Han, Guangjie; Jiang, Jinfang; Shu, Lei; Xu, Yongjun; Wang, Feng

    2012-01-01

    In Underwater Wireless Sensor Networks (UWSNs), localization is one of most important technologies since it plays a critical role in many applications. Motivated by widespread adoption of localization, in this paper, we present a comprehensive survey of localization algorithms. First, we classify localization algorithms into three categories based on sensor nodes’ mobility: stationary localization algorithms, mobile localization algorithms and hybrid localization algorithms. Moreover, we compare the localization algorithms in detail and analyze future research directions of localization algorithms in UWSNs. PMID:22438752

  11. An improved NAS-RIF algorithm for image restoration

    NASA Astrophysics Data System (ADS)

    Gao, Weizhe; Zou, Jianhua; Xu, Rong; Liu, Changhai; Li, Hengnian

    2016-10-01

    Space optical images are inevitably degraded by atmospheric turbulence, errors in the optical system, and motion. In order to recover the true image, a novel nonnegativity and support constraints recursive inverse filtering (NAS-RIF) algorithm is proposed to restore the degraded image. First, the image noise is weakened by a Contourlet denoising algorithm. Second, reliable estimation of the object support region is used to accelerate the convergence of the algorithm; we introduce an optimal threshold segmentation technique to improve the object support region. Finally, an object construction limit and the logarithm function are added to enhance algorithm stability. Experimental results demonstrate that the proposed algorithm increases the PSNR and improves the quality of the restored images. The convergence speed of the proposed algorithm is faster than that of the original NAS-RIF algorithm.

  12. Minimalist ensemble algorithms for genome-wide protein localization prediction.

    PubMed

    Lin, Jhih-Rong; Mondal, Ananda Mohan; Liu, Rong; Hu, Jianjun

    2012-07-03

    Computational prediction of protein subcellular localization can greatly help to elucidate its functions. Despite the existence of dozens of protein localization prediction algorithms, the prediction accuracy and coverage are still low. Several ensemble algorithms have been proposed to improve the prediction performance, which usually include as many as 10 or more individual localization algorithms. However, their performance is still limited by the running complexity and redundancy among individual prediction algorithms. This paper proposed a novel method for rational design of minimalist ensemble algorithms for practical genome-wide protein subcellular localization prediction. The algorithm is based on combining a feature selection based filter and a logistic regression classifier. Using a novel concept of contribution scores, we analyzed issues of algorithm redundancy, consensus mistakes, and algorithm complementarity in designing ensemble algorithms. We applied the proposed minimalist logistic regression (LR) ensemble algorithm to two genome-wide datasets of Yeast and Human and compared its performance with current ensemble algorithms. Experimental results showed that the minimalist ensemble algorithm can achieve high prediction accuracy with only 1/3 to 1/2 of individual predictors of current ensemble algorithms, which greatly reduces computational complexity and running time. It was found that the high performance ensemble algorithms are usually composed of the predictors that together cover most of available features. Compared to the best individual predictor, our ensemble algorithm improved the prediction accuracy from AUC score of 0.558 to 0.707 for the Yeast dataset and from 0.628 to 0.646 for the Human dataset. Compared with popular weighted voting based ensemble algorithms, our classifier-based ensemble algorithms achieved much better performance without suffering from inclusion of too many individual predictors. We proposed a method for rational design

  13. Minimalist ensemble algorithms for genome-wide protein localization prediction

    PubMed Central

    2012-01-01

    Background Computational prediction of protein subcellular localization can greatly help to elucidate its functions. Despite the existence of dozens of protein localization prediction algorithms, the prediction accuracy and coverage are still low. Several ensemble algorithms have been proposed to improve the prediction performance, which usually include as many as 10 or more individual localization algorithms. However, their performance is still limited by the running complexity and redundancy among individual prediction algorithms. Results This paper proposed a novel method for rational design of minimalist ensemble algorithms for practical genome-wide protein subcellular localization prediction. The algorithm is based on combining a feature selection based filter and a logistic regression classifier. Using a novel concept of contribution scores, we analyzed issues of algorithm redundancy, consensus mistakes, and algorithm complementarity in designing ensemble algorithms. We applied the proposed minimalist logistic regression (LR) ensemble algorithm to two genome-wide datasets of Yeast and Human and compared its performance with current ensemble algorithms. Experimental results showed that the minimalist ensemble algorithm can achieve high prediction accuracy with only 1/3 to 1/2 of individual predictors of current ensemble algorithms, which greatly reduces computational complexity and running time. It was found that the high performance ensemble algorithms are usually composed of the predictors that together cover most of available features. Compared to the best individual predictor, our ensemble algorithm improved the prediction accuracy from AUC score of 0.558 to 0.707 for the Yeast dataset and from 0.628 to 0.646 for the Human dataset. Compared with popular weighted voting based ensemble algorithms, our classifier-based ensemble algorithms achieved much better performance without suffering from inclusion of too many individual predictors. Conclusions We

  14. Algorithm Optimally Allocates Actuation of a Spacecraft

    NASA Technical Reports Server (NTRS)

    Motaghedi, Shi

    2007-01-01

    A report presents an algorithm that solves the following problem: Allocate the force and/or torque to be exerted by each thruster and reaction-wheel assembly on a spacecraft for best performance, defined as minimizing the error between (1) the total force and torque commanded by the spacecraft control system and (2) the total of forces and torques actually exerted by all the thrusters and reaction wheels. The algorithm incorporates the matrix vector relationship between (1) the total applied force and torque and (2) the individual actuator force and torque values. It takes account of such constraints as lower and upper limits on the force or torque that can be applied by a given actuator. The algorithm divides the aforementioned problem into two optimization problems that it solves sequentially. These problems are of a type, known in the art as semi-definite programming problems, that involve linear matrix inequalities. The algorithm incorporates, as sub-algorithms, prior algorithms that solve such optimization problems very efficiently. The algorithm affords the additional advantage that the solution requires the minimum rate of consumption of fuel for the given best performance.
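
    The core trade-off can be sketched as a bound-constrained least-squares problem: find actuator commands u minimising ||Bu - w|| subject to per-actuator limits, where B maps individual commands to net force and torque and w is the commanded total. The matrix, command, and limits below are made up, and the report's sequential semidefinite-programming formulation is richer than this sketch.

      import numpy as np
      from scipy.optimize import lsq_linear

      B = np.array([[1.0, 0.0, 0.7, 0.0],      # net force x per unit command
                    [0.0, 1.0, 0.0, 0.7],      # net force y per unit command
                    [0.2, -0.2, 0.5, -0.5]])   # net torque z per unit command
      w = np.array([1.0, 0.5, 0.1])            # commanded total force/torque
      lower = np.zeros(4)                      # thrusters cannot pull
      upper = np.full(4, 0.8)                  # saturation limits

      # Bounded least squares: minimise the error between commanded and
      # actually exerted totals while respecting actuator limits.
      res = lsq_linear(B, w, bounds=(lower, upper))
      print(res.x, "residual:", np.linalg.norm(B @ res.x - w))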

  15. Implementing a self-structuring data learning algorithm

    NASA Astrophysics Data System (ADS)

    Graham, James; Carson, Daniel; Ternovskiy, Igor

    2016-05-01

    In this paper, we elaborate on what we did to implement our self-structuring data learning algorithm. To recap, we are working to develop a data learning algorithm that will eventually be capable of goal driven pattern learning and extrapolation of more complex patterns from less complex ones. At this point we have developed a conceptual framework for the algorithm, but have yet to discuss our actual implementation and the consideration and shortcuts we needed to take to create said implementation. We will elaborate on our initial setup of the algorithm and the scenarios we used to test our early stage algorithm. While we want this to be a general algorithm, it is necessary to start with a simple scenario or two to provide a viable development and testing environment. To that end, our discussion will be geared toward what we include in our initial implementation and why, as well as what concerns we may have. In the future, we expect to be able to apply our algorithm to a more general approach, but to do so within a reasonable time, we needed to pick a place to start.

  16. Hybrid Architectures for Evolutionary Computing Algorithms

    DTIC Science & Technology

    2008-01-01

    other EC algorithms to FPGA Core. Burns, P1026/MAPLD 2005. Genetic Algorithm Hardware References: S. Scott, A. Samal, and S. Seth, "HGA: A Hardware Based Genetic Algorithm", Proceedings of the 1995 ACM Third ... on Parallel and Distributed Processing (IPPS/SPDP '98), pp. 316-320, IEEE Computer Society, 1998. [12] Scott, S. D., Samal, A., and ...

  17. Lifted worm algorithm for the Ising model

    NASA Astrophysics Data System (ADS)

    Elçi, Eren Metin; Grimm, Jens; Ding, Lijie; Nasrawi, Abrahim; Garoni, Timothy M.; Deng, Youjin

    2018-04-01

    We design an irreversible worm algorithm for the zero-field ferromagnetic Ising model by using the lifting technique. We study the dynamic critical behavior of an energylike observable on both the complete graph and toroidal grids, and compare our findings with reversible algorithms such as the Prokof'ev-Svistunov worm algorithm. Our results show that the lifted worm algorithm improves the dynamic exponent of the energylike observable on the complete graph and leads to a significant constant improvement on toroidal grids.

  18. A joint equalization algorithm in high speed communication systems

    NASA Astrophysics Data System (ADS)

    Hao, Xin; Lin, Changxing; Wang, Zhaohui; Cheng, Binbin; Deng, Xianjin

    2018-02-01

    This paper presents a joint equalization algorithm for high-speed communication systems. The algorithm combines the advantages of traditional equalization algorithms by using both pre-equalization and post-equalization. The pre-equalization stage takes advantage of the CMA algorithm, which is not sensitive to frequency offset; it is located before the carrier recovery loop in order to give the carrier recovery loop better performance and overcome most of the frequency offset. The post-equalization stage takes advantage of the MMA algorithm in order to overcome the residual frequency offset. The paper first analyzes the advantages and disadvantages of several equalization algorithms, and then simulates the proposed joint equalization algorithm on the Matlab platform. The simulation results include constellation diagrams and the bit error rate curve, and both show that the proposed joint equalization algorithm is better than the traditional algorithms. The residual frequency offset is shown directly in the constellation diagrams. When the SNR is 14 dB, the bit error rate of the simulated system with the proposed joint equalization algorithm is 103 times better than with the CMA algorithm, 77 times better than with MMA equalization, and 9 times better than with CMA-MMA equalization.
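
    A numpy sketch of the pre-equalization stage follows: a constant-modulus algorithm (CMA) adaptive filter, whose cost depends only on the output modulus and therefore tolerates carrier frequency offset. The channel, filter length, and step size are illustrative test values, and the MMA post-equalizer is omitted.

      import numpy as np

      rng = np.random.default_rng(3)
      symbols = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], size=5000) / np.sqrt(2)
      channel = np.array([1.0, 0.35 + 0.2j, -0.1j])    # dispersive channel (made up)
      x = np.convolve(symbols, channel)[: len(symbols)]

      taps, mu, R2 = 11, 1e-3, 1.0                     # R2: modulus target for QPSK
      w = np.zeros(taps, dtype=complex)
      w[taps // 2] = 1.0                               # centre-spike initialisation

      for n in range(taps, len(x)):
          u = x[n - taps:n][::-1]                      # regressor, most recent first
          y = w.conj() @ u
          e = y * (np.abs(y) ** 2 - R2)                # CMA error term
          w -= mu * e.conj() * u                       # stochastic gradient step

      y_tail = np.array([w.conj() @ x[n - taps:n][::-1]
                         for n in range(len(x) - 500, len(x))])
      # Mean output power should approach R2 as the equalizer converges.
      print("mean |y|^2 after adaptation:", np.mean(np.abs(y_tail) ** 2))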

  19. Ant algorithms for discrete optimization.

    PubMed

    Dorigo, M; Di Caro, G; Gambardella, L M

    1999-01-01

    This article presents an overview of recent work on ant algorithms, that is, algorithms for discrete optimization that took inspiration from the observation of ant colonies' foraging behavior, and introduces the ant colony optimization (ACO) metaheuristic. In the first part of the article the basic biological findings on real ants are reviewed and their artificial counterparts as well as the ACO metaheuristic are defined. In the second part of the article a number of applications of ACO algorithms to combinatorial optimization and routing in communications networks are described. We conclude with a discussion of related work and of some of the most important aspects of the ACO metaheuristic.
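
    As a concrete instance of the ACO metaheuristic, the following Python sketch solves a tiny symmetric travelling salesman instance with pheromone evaporation and quality-proportional deposit. The parameters (alpha, beta, evaporation rate) are common textbook values, not any specific published configuration.

      import random

      def aco_tsp(dist, ants=10, iters=50, alpha=1.0, beta=2.0, rho=0.5, seed=0):
          rng = random.Random(seed)
          n = len(dist)
          tau = [[1.0] * n for _ in range(n)]          # pheromone trails
          best_tour, best_len = None, float("inf")
          for _ in range(iters):
              tours = []
              for _ in range(ants):
                  tour = [rng.randrange(n)]
                  while len(tour) < n:
                      i = tour[-1]
                      cand = [j for j in range(n) if j not in tour]
                      # Transition weight: pheromone ** alpha * (1/distance) ** beta.
                      wts = [tau[i][j] ** alpha * (1.0 / dist[i][j]) ** beta for j in cand]
                      tour.append(rng.choices(cand, weights=wts)[0])
                  length = sum(dist[tour[k]][tour[(k + 1) % n]] for k in range(n))
                  tours.append((length, tour))
                  if length < best_len:
                      best_len, best_tour = length, tour
              # Evaporate, then deposit pheromone proportional to tour quality.
              tau = [[(1 - rho) * t for t in row] for row in tau]
              for length, tour in tours:
                  for k in range(n):
                      a, b = tour[k], tour[(k + 1) % n]
                      tau[a][b] += 1.0 / length
                      tau[b][a] += 1.0 / length
          return best_tour, best_len

      dist = [[0, 2, 9, 10], [2, 0, 6, 4], [9, 6, 0, 3], [10, 4, 3, 0]]
      print(aco_tsp(dist))   # length 18 is optimal for this instance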

  20. Research of improved banker algorithm

    NASA Astrophysics Data System (ADS)

    Yuan, Xingde; Xu, Hong; Qiao, Shijiao

    2013-03-01

    In a multi-process operating system, the system's resource-management strategy is a critical global issue, especially when many processes compete for limited resources, since unreasonable scheduling can cause deadlock. The classical solution to the deadlock problem is the banker's algorithm; however, it has deficiencies and can only avoid deadlock to a certain extent. This article aims to reduce unnecessary safety checking and then uses a new allocation strategy to improve the banker's algorithm. Full analysis and example verification of the new allocation strategy show that the improved banker's algorithm obtains a substantial increase in performance.
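
    The safety check that the article seeks to invoke less often is the classical one below (a textbook sketch, not the article's improved allocation strategy):

```python
def is_safe(available, allocation, need):
    """Banker's safety check: True if some ordering lets every process
    finish.  need[p][r] = max_claim[p][r] - allocation[p][r]."""
    work = list(available)
    finished = [False] * len(allocation)
    progressed = True
    while progressed:
        progressed = False
        for p in range(len(allocation)):
            if not finished[p] and all(need[p][r] <= work[r]
                                       for r in range(len(work))):
                for r in range(len(work)):        # p can finish and release
                    work[r] += allocation[p][r]   # everything it holds
                finished[p] = True
                progressed = True
    return all(finished)
```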

  1. Faster Fourier transformation: The algorithm of S. Winograd

    NASA Technical Reports Server (NTRS)

    Zohar, S.

    1979-01-01

    The new DFT algorithm of S. Winograd is developed and presented in detail. This is an algorithm which uses about 1/5 of the number of multiplications used by the Cooley-Tukey algorithm and is applicable to any order which is a product of relatively prime factors from the following list: 2,3,4,5,7,8,9,16. The algorithm is presented in terms of a series of tableaus which are convenient, compact, graphical representations of the sequence of arithmetic operations in the corresponding parts of the algorithm. Using these in conjunction with the included tables makes it relatively easy to apply the algorithm and evaluate its performance.

  2. Algorithm Calculates Cumulative Poisson Distribution

    NASA Technical Reports Server (NTRS)

    Bowerman, Paul N.; Nolty, Robert C.; Scheuer, Ernest M.

    1992-01-01

    Algorithm calculates accurate values of cumulative Poisson distribution under conditions where other algorithms fail because numbers are so small (underflow) or so large (overflow) that computer cannot process them. Factors inserted temporarily to prevent underflow and overflow. Implemented in CUMPOIS computer program described in "Cumulative Poisson Distribution Program" (NPO-17714).
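
    A common way to obtain the same robustness is to accumulate the distribution in log space; the sketch below is our own log-sum-exp variant, not necessarily the temporary-factor scheme of CUMPOIS.

```python
import math

def poisson_cdf(k, lam):
    """P(X <= k) for X ~ Poisson(lam), lam > 0, computed in log space so
    that tiny or huge intermediate terms neither underflow nor overflow."""
    log_terms = [-lam + n * math.log(lam) - math.lgamma(n + 1)
                 for n in range(k + 1)]           # log of each pmf term
    m = max(log_terms)                            # log-sum-exp trick
    return math.exp(m) * sum(math.exp(t - m) for t in log_terms)
```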

  3. A quasi-Newton algorithm for large-scale nonlinear equations.

    PubMed

    Huang, Linghua

    2017-01-01

    In this paper, the algorithm for large-scale nonlinear equations is designed by the following steps: (i) a conjugate gradient (CG) algorithm is designed as a sub-algorithm to obtain the initial points of the main algorithm, where the sub-algorithm's initial point does not have any restrictions; (ii) a quasi-Newton algorithm with the initial points given by sub-algorithm is defined as main algorithm, where a new nonmonotone line search technique is presented to get the step length [Formula: see text]. The given nonmonotone line search technique can avoid computing the Jacobian matrix. The global convergence and the [Formula: see text]-order convergent rate of the main algorithm are established under suitable conditions. Numerical results show that the proposed method is competitive with a similar method for large-scale problems.
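
    For context, a generic quasi-Newton iteration for F(x) = 0 looks like the Broyden sketch below; this is a textbook method, not the paper's CG-initialized algorithm or its nonmonotone line search.

```python
import numpy as np

def broyden(F, x0, tol=1e-10, max_iter=100):
    """Broyden's (good) method: a quasi-Newton iteration that avoids
    computing the Jacobian by using rank-one secant updates."""
    x = np.asarray(x0, dtype=float)
    B = np.eye(len(x))                  # Jacobian approximation
    Fx = F(x)
    for _ in range(max_iter):
        if np.linalg.norm(Fx) < tol:
            break
        s = np.linalg.solve(B, -Fx)     # quasi-Newton step
        x_new = x + s
        F_new = F(x_new)
        y = F_new - Fx
        B += np.outer(y - B @ s, s) / (s @ s)   # secant update: B_new s = y
        x, Fx = x_new, F_new
    return x
```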

  4. SDR input power estimation algorithms

    NASA Astrophysics Data System (ADS)

    Briones, J. C.; Nappier, J. M.

    The General Dynamics (GD) S-Band software defined radio (SDR) in the Space Communications and Navigation (SCAN) Testbed on the International Space Station (ISS) provides experimenters an opportunity to develop and demonstrate experimental waveforms in space. The SDR has an analog and a digital automatic gain control (AGC) and the response of the AGCs to changes in SDR input power and temperature was characterized prior to the launch and installation of the SCAN Testbed on the ISS. The AGCs were used to estimate the SDR input power and SNR of the received signal and the characterization results showed a nonlinear response to SDR input power and temperature. In order to estimate the SDR input from the AGCs, three algorithms were developed and implemented on the ground software of the SCAN Testbed. The algorithms include a linear straight line estimator, which used the digital AGC and the temperature to estimate the SDR input power over a narrower section of the SDR input power range. There is a linear adaptive filter algorithm that uses both AGCs and the temperature to estimate the SDR input power over a wide input power range. Finally, an algorithm that uses neural networks was designed to estimate the input power over a wide range. This paper describes the algorithms in detail and their associated performance in estimating the SDR input power.
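
    The first of these estimators amounts to a calibration fit; a minimal least-squares version is sketched below, where the affine model in AGC reading and temperature, and all names, are our illustrative assumptions rather than the flight implementation.

```python
import numpy as np

def fit_power_estimator(agc, temp, power):
    """Fit P ~ c0 + c1*AGC + c2*T by least squares over calibration data
    (hypothetical model; the real AGC response is nonlinear)."""
    A = np.column_stack([np.ones_like(agc), agc, temp])
    coeffs, *_ = np.linalg.lstsq(A, power, rcond=None)
    return coeffs

def estimate_power(coeffs, agc, temp):
    """Apply the fitted affine model to new AGC/temperature readings."""
    return coeffs[0] + coeffs[1] * agc + coeffs[2] * temp
```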

  5. POSE Algorithms for Automated Docking

    NASA Technical Reports Server (NTRS)

    Heaton, Andrew F.; Howard, Richard T.

    2011-01-01

    POSE (relative position and attitude) can be computed in many different ways. Given a sensor that measures bearing to a finite number of spots corresponding to known features (such as a target) of a spacecraft, a number of different algorithms can be used to compute the POSE. NASA has sponsored the development of a flash LIDAR proximity sensor called the Vision Navigation Sensor (VNS) for use by the Orion capsule in future docking missions. This sensor generates data that can be used by a variety of algorithms to compute POSE solutions inside of 15 meters, including at the critical docking range of approximately 1-2 meters. Previously NASA participated in a DARPA program called Orbital Express that achieved the first automated docking for the American space program. During this mission a large set of high quality mated sensor data was obtained at what is essentially the docking distance. This data set is perhaps the most accurate truth data in existence for docking proximity sensors in orbit. In this paper, the flight data from Orbital Express is used to test POSE algorithms at 1.22 meters range. Two different POSE algorithms are tested for two different Fields-of-View (FOVs) and two different pixel noise levels. The results of the analysis are used to predict future performance of the POSE algorithms with VNS data.

  6. SDR Input Power Estimation Algorithms

    NASA Technical Reports Server (NTRS)

    Nappier, Jennifer M.; Briones, Janette C.

    2013-01-01

    The General Dynamics (GD) S-Band software defined radio (SDR) in the Space Communications and Navigation (SCAN) Testbed on the International Space Station (ISS) provides experimenters an opportunity to develop and demonstrate experimental waveforms in space. The SDR has an analog and a digital automatic gain control (AGC) and the response of the AGCs to changes in SDR input power and temperature was characterized prior to the launch and installation of the SCAN Testbed on the ISS. The AGCs were used to estimate the SDR input power and SNR of the received signal and the characterization results showed a nonlinear response to SDR input power and temperature. In order to estimate the SDR input from the AGCs, three algorithms were developed and implemented on the ground software of the SCAN Testbed. The algorithms include a linear straight line estimator, which used the digital AGC and the temperature to estimate the SDR input power over a narrower section of the SDR input power range. There is a linear adaptive filter algorithm that uses both AGCs and the temperature to estimate the SDR input power over a wide input power range. Finally, an algorithm that uses neural networks was designed to estimate the input power over a wide range. This paper describes the algorithms in detail and their associated performance in estimating the SDR input power.

  7. Detection of algorithmic trading

    NASA Astrophysics Data System (ADS)

    Bogoev, Dimitar; Karam, Arzé

    2017-10-01

    We develop a new approach to reflect the behavior of algorithmic traders. Specifically, we provide an analytical and tractable way to infer patterns of quote volatility and price momentum consistent with different types of strategies employed by algorithmic traders, and we propose two ratios to quantify these patterns. Quote volatility ratio is based on the rate of oscillation of the best ask and best bid quotes over an extremely short period of time; whereas price momentum ratio is based on identifying patterns of rapid upward or downward movement in prices. The two ratios are evaluated across several asset classes. We further run a two-stage Artificial Neural Network experiment on the quote volatility ratio; the first stage is used to detect the quote volatility patterns resulting from algorithmic activity, while the second is used to validate the quality of signal detection provided by our measure.
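
    As a rough illustration of the first measure, one hypothetical way to turn quote oscillation into a ratio is sketched below; the definition here is ours, and the paper's exact construction may differ.

```python
def quote_volatility_ratio(best_ask):
    """Hypothetical quote-volatility measure: the fraction of successive
    best-ask moves, within a short window of ticks, that reverse direction."""
    moves = [b - a for a, b in zip(best_ask, best_ask[1:]) if b != a]
    reversals = sum(1 for m1, m2 in zip(moves, moves[1:]) if m1 * m2 < 0)
    return reversals / max(1, len(moves) - 1)
```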

  8. Knowledge-based tracking algorithm

    NASA Astrophysics Data System (ADS)

    Corbeil, Allan F.; Hawkins, Linda J.; Gilgallon, Paul F.

    1990-10-01

    This paper describes the Knowledge-Based Tracking (KBT) algorithm, for which a real-time flight test demonstration was recently conducted at Rome Air Development Center (RADC). In KBT processing, the radar signal in each resolution cell is thresholded at a lower than normal setting to detect low-RCS targets. This lower threshold produces a larger than normal false alarm rate. Therefore, additional signal processing, including spectral filtering, CFAR, and knowledge-based acceptance testing, is performed to eliminate some of the false alarms. TSC's knowledge-based Track-Before-Detect (TBD) algorithm is then applied to the data from each azimuth sector to detect target tracks. In this algorithm, tentative track templates are formed for each threshold crossing, and knowledge-based association rules are applied to the range, Doppler, and azimuth measurements from successive scans. Lastly, an M-association out of N-scan rule is used to declare a detection. This scan-to-scan integration enhances the probability of target detection while maintaining an acceptably low output false alarm rate. For a real-time demonstration of the KBT algorithm, the L-band radar in the Surveillance Laboratory (SL) at RADC was used to illuminate a small Cessna 310 test aircraft. The received radar signal was digitized and processed by an ST-100 Array Processor and VAX computer network in the lab. The ST-100 performed all of the radar signal processing functions, including Moving Target Indicator (MTI) pulse cancelling, FFT Doppler filtering, and CFAR detection. The VAX computers performed the remaining range-Doppler clustering, beamsplitting, and TBD processing functions. The KBT algorithm provided a 9.5 dB improvement relative to single-scan performance, with a nominal real-time delay of less than one second between illumination and display.
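
    The final detection step reduces to a sliding-window rule that is easy to state in code; the sketch below is a generic M-out-of-N test, with the window handling details assumed.

```python
def m_of_n_detect(hits, m, n):
    """Declare a detection when at least m of any n consecutive scans
    contain an associated threshold crossing (hits is a 0/1 sequence)."""
    return any(sum(hits[i:i + n]) >= m for i in range(len(hits) - n + 1))

# Example: hits on scans 1, 3, 4 and 6 -> a 3-of-4 rule fires.
assert m_of_n_detect([1, 0, 1, 1, 0, 1], m=3, n=4)
```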

  9. The serial message-passing schedule for LDPC decoding algorithms

    NASA Astrophysics Data System (ADS)

    Liu, Mingshan; Liu, Shanshan; Zhou, Yuan; Jiang, Xue

    2015-12-01

    The conventional message-passing schedule for LDPC decoding algorithms is the so-called flooding schedule. Its disadvantage is that updated messages cannot be used until the next iteration, which reduces the convergence speed. To address this, the layered belief-propagation (LBP) algorithm, based on a serial message-passing schedule, was proposed. This paper briefly introduces the decoding principle of the LBP algorithm and then proposes two improved algorithms: the grouped serial decoding algorithm (Grouped LBP) and the semi-serial decoding algorithm. Both improve the LBP algorithm's decoding speed while maintaining good decoding performance.

  10. Learning Intelligent Genetic Algorithms Using Japanese Nonograms

    ERIC Educational Resources Information Center

    Tsai, Jinn-Tsong; Chou, Ping-Yi; Fang, Jia-Cen

    2012-01-01

    An intelligent genetic algorithm (IGA) is proposed to solve Japanese nonograms and is used as a method in a university course to learn evolutionary algorithms. The IGA combines the global exploration capabilities of a canonical genetic algorithm (CGA) with effective condensed encoding, improved fitness function, and modified crossover and…

  11. One cutting plane algorithm using auxiliary functions

    NASA Astrophysics Data System (ADS)

    Zabotin, I. Ya; Kazaeva, K. E.

    2016-11-01

    We propose an algorithm for solving a convex programming problem from the class of cutting methods. The algorithm is characterized by the construction of approximations using auxiliary functions instead of the objective function, each auxiliary function being based on an exterior penalty function. In the proposed algorithm, the admissible set and the epigraph of each auxiliary function are embedded in polyhedral sets; consequently, the iteration points are found by solving linear programming problems. We discuss the implementation of the algorithm and prove its convergence.

  12. Research on Palmprint Identification Method Based on Quantum Algorithms

    PubMed Central

    Zhang, Zhanzhan

    2014-01-01

    Quantum image recognition is a technology that processes image information with quantum algorithms, and it can obtain better results than classical algorithms. In this paper, four different quantum algorithms are used in the three stages of palmprint recognition. First, a quantum adaptive median filtering algorithm is presented for palmprint filtering; comparison shows that it achieves a better filtering result than the classical algorithm. Next, the quantum Fourier transform (QFT) is used to extract pattern features in only one operation, thanks to quantum parallelism; the proposed algorithm exhibits an exponential speed-up over the discrete Fourier transform in feature extraction. Finally, quantum set operations and the Grover algorithm are used in palmprint matching. According to the experimental results, the quantum algorithm needs only on the order of the square root of N operations to find the target palmprint, whereas the traditional method needs N calculations. At the same time, the matching accuracy of the quantum algorithm is almost 100%. PMID:25105165
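
    The square-root scaling quoted here follows from the standard Grover query count; as a worked figure (our arithmetic, not from the paper):

```latex
k \approx \frac{\pi}{4}\sqrt{N}, \qquad
N = 10^{6} \;\Rightarrow\; k \approx \frac{\pi}{4}\cdot 10^{3} \approx 785
\text{ queries, versus } N = 10^{6} \text{ classically.}
```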

  13. Improved Collaborative Filtering Algorithm via Information Transformation

    NASA Astrophysics Data System (ADS)

    Liu, Jian-Guo; Wang, Bing-Hong; Guo, Qiang

    In this paper, we propose a spreading activation approach for collaborative filtering (SA-CF). By using an opinion-spreading process, the similarity between any two users can be obtained. The algorithm has remarkably higher accuracy than standard collaborative filtering using the Pearson correlation. Furthermore, we introduce a free parameter β to regulate the contributions of objects to user-user correlations. The numerical results indicate that decreasing the influence of popular objects can further improve the accuracy and personalization of the algorithm. We argue that a better algorithm should simultaneously require less computation and generate higher accuracy. Accordingly, we further propose an algorithm involving only the top-N similar neighbors for each target user, which has both lower computational complexity and higher algorithmic accuracy.

  14. Solving TSP problem with improved genetic algorithm

    NASA Astrophysics Data System (ADS)

    Fu, Chunhua; Zhang, Lijun; Wang, Xiaojing; Qiao, Liying

    2018-05-01

    The TSP is a typical NP-hard problem; vehicle routing (VRP) and city pipeline optimization can be cast as TSP instances, so solving the TSP efficiently is important. The genetic algorithm (GA) is one of the ideal methods for solving it, but the standard GA has limitations. Improving the GA's selection operator and introducing an elite-retention strategy preserve high-quality individuals during selection. In the mutation operation, adaptively selecting the mutation scheme improves the quality of the search and of the variation. After a chromosome has evolved, a one-way reverse-evolution operation is added, which gives offspring a better chance of inheriting high-quality parental genes and improves the algorithm's ability to find the optimal solution.
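
    A compact GA in this spirit, with elite retention, an adaptive mutation rate, and segment-reversal mutation, is sketched below; the diversity-based mutation schedule and all constants are illustrative assumptions, not the paper's exact operators.

```python
import random

def ga_tsp(dist, pop_size=100, generations=500, elite=2, base_mut=0.05):
    """GA for the TSP with elite retention, a diversity-adaptive mutation
    rate, and reversal mutation (all parameters illustrative)."""
    n = len(dist)

    def tour_len(t):
        return sum(dist[t[i]][t[(i + 1) % n]] for i in range(n))

    pop = [random.sample(range(n), n) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=tour_len)
        nxt = pop[:elite]                              # elite retention
        # crude diversity proxy drives the mutation rate (an assumption)
        diversity = len({tour_len(t) for t in pop}) / pop_size
        p_mut = base_mut + 0.2 * (1.0 - diversity)
        while len(nxt) < pop_size:
            a, b = random.sample(pop[:pop_size // 2], 2)  # truncation selection
            c1, c2 = sorted(random.sample(range(n), 2))
            seg = a[c1:c2]                             # order-based crossover
            child = seg + [city for city in b if city not in seg]
            if random.random() < p_mut:                # reversal mutation
                i, j = sorted(random.sample(range(n), 2))
                child[i:j] = reversed(child[i:j])
            nxt.append(child)
        pop = nxt
    return min(pop, key=tour_len)
```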

  15. Comparison of Quantum Dots-in-a-Double-Well and Quantum Dots-in-a-Well Focal Plane Arrays in the Long-Wave Infrared

    DTIC Science & Technology

    2011-07-01

    taken with the same camera head, operating temperature, range of calibrated blackbody illuminations, and using the same long-wavelength IR (LWIR) f/2 … measurements shown in this article and are tabulated for comparison purposes only. Images were taken with all four devices using an f/2 LWIR lens (8–12 μm) … These were acquired after a nonuniformity correction. A custom image-scaling algorithm was used to avoid the standard nonuniformity-corrected scaling

  16. Computer algorithm for coding gain

    NASA Technical Reports Server (NTRS)

    Dodd, E. E.

    1974-01-01

    Development of a computer algorithm for coding gain for use in an automated communications link design system. Using an empirical formula which defines coding gain as used in space communications engineering, an algorithm is constructed on the basis of available performance data for nonsystematic convolutional encoding with soft-decision (eight-level) Viterbi decoding.

  17. Lidar Ratios for Dust Aerosols Derived From Retrievals of CALIPSO Visible Extinction Profiles Constrained by Optical Depths from MODIS-Aqua and CALIPSO/CloudSat Ocean Surface Reflectance Measurements

    NASA Technical Reports Server (NTRS)

    Young, Stuart A.; Josset, Damien B.; Vaughan, Mark A.

    2010-01-01

    CALIPSO's (Cloud Aerosol Lidar Infrared Pathfinder Satellite Observations) analysis algorithms generally require the use of tabulated values of the lidar ratio in order to retrieve aerosol extinction and optical depth from measured profiles of attenuated backscatter. However, for any given time or location, the lidar ratio for a given aerosol type can differ from the tabulated value. To gain some insight as to the extent of the variability, we here calculate the lidar ratio for dust aerosols using aerosol optical depth constraints from two sources. Daytime measurements are constrained using Level 2, Collection 5, 550-nm aerosol optical depth measurements made over the ocean by the MODIS (Moderate Resolution Imaging Spectroradiometer) on board the Aqua satellite, which flies in formation with CALIPSO. We also retrieve lidar ratios from night-time profiles constrained by aerosol column optical depths obtained by analysis of CALIPSO and CloudSat backscatter signals from the ocean surface.

  18. Testing algorithms for critical slowing down

    NASA Astrophysics Data System (ADS)

    Cossu, Guido; Boyle, Peter; Christ, Norman; Jung, Chulwoo; Jüttner, Andreas; Sanfilippo, Francesco

    2018-03-01

    We present preliminary tests of two modifications of the Hybrid Monte Carlo (HMC) algorithm. Both algorithms are designed to travel much farther in the Hamiltonian phase space per trajectory and to reduce the autocorrelations among physical observables, thus tackling the critical slowing down towards the continuum limit. We present a comparison of the costs of the new algorithms with the standard HMC evolution for pure gauge fields, studying the autocorrelation times for various quantities including the topological charge.

  19. Duality quantum algorithm efficiently simulates open quantum systems

    PubMed Central

    Wei, Shi-Jie; Ruan, Dong; Long, Gui-Lu

    2016-01-01

    Because of inevitable coupling with the environment, nearly all practical quantum systems are open systems, where the evolution is not necessarily unitary. In this paper, we propose a duality quantum algorithm for simulating the Hamiltonian evolution of an open quantum system. In contrast to unitary evolution in a usual quantum computer, the evolution operator in a duality quantum computer is a linear combination of unitary operators. In this duality quantum algorithm, the time evolution of the open quantum system is realized by using Kraus operators, which are naturally implemented in a duality quantum computer. This duality quantum algorithm has two distinct advantages over existing quantum simulation algorithms with unitary evolution operations. First, the query complexity of the algorithm is O(d^3), in contrast to O(d^4) for the existing unitary simulation algorithm, where d is the dimension of the open quantum system. Second, by using a truncated Taylor series of the evolution operators, this duality quantum algorithm provides an exponential improvement in precision over the previous unitary simulation algorithm. PMID:27464855

  20. Parameterization of Keeling's network generation algorithm.

    PubMed

    Badham, Jennifer; Abbass, Hussein; Stocker, Rob

    2008-09-01

    Simulation is increasingly being used to examine epidemic behaviour and assess potential management options. The utility of the simulations relies on the ability to replicate those aspects of the social structure that are relevant to epidemic transmission. One approach is to generate networks with desired social properties. Recent research by Keeling and his colleagues has generated simulated networks with a range of properties, and examined the impact of these properties on epidemic processes occurring over the network. However, published work has included only limited analysis of the algorithm itself and the way in which the network properties are related to the algorithm parameters. This paper identifies some relationships between the algorithm parameters and selected network properties (mean degree, degree variation, clustering coefficient and assortativity). Our approach enables users of the algorithm to efficiently generate a network with given properties, thereby allowing realistic social networks to be used as the basis of epidemic simulations. Alternatively, the algorithm could be used to generate social networks with a range of property values, enabling analysis of the impact of these properties on epidemic behaviour.

  1. Bell-Curve Based Evolutionary Optimization Algorithm

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, J.; Laba, K.; Kincaid, R.

    1998-01-01

    The paper presents an optimization algorithm that falls into the category of genetic, or evolutionary, algorithms. While bit exchange is the basis of most genetic algorithms (GAs) in research and applications in America, alternatives in the same evolutionary category that use a direct, geometrical approach have gained popularity in Europe and Asia. The Bell-Curve Based Evolutionary Algorithm (BCB) is in this alternative category and is distinguished by its use of a combination of n-dimensional geometry and the normal distribution, the bell curve, in the generation of offspring. The tool for creating a child is a geometrical construct comprising a line connecting two parents and a weighted point on that line. The point that defines the child deviates from the weighted point in two directions, parallel and orthogonal to the connecting line, the deviation in each direction obeying a probabilistic distribution. Tests showed satisfactory performance of BCB. The principal advantage of BCB is its controllability via the normal distribution parameters and the geometrical construct variables.
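
    The offspring construction described above can be sketched directly; this is our reading of the description, and the deviation scales (and their scaling by the parent distance) are assumptions.

```python
import numpy as np

def bcb_child(p1, p2, sigma_par=0.1, sigma_orth=0.1, rng=np.random):
    """Bell-curve offspring: a weighted point on the segment joining two
    parents, perturbed by normal deviates parallel and orthogonal to the
    connecting line.  Assumes distinct parents."""
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    d = p2 - p1
    norm = np.linalg.norm(d)
    u = d / norm                                  # unit vector along the line
    base = p1 + rng.uniform() * d                 # weighted point on the line
    r = rng.standard_normal(len(p1))              # random orthogonal direction
    r -= (r @ u) * u
    r /= np.linalg.norm(r)
    return (base
            + rng.normal(0.0, sigma_par) * norm * u     # parallel deviation
            + rng.normal(0.0, sigma_orth) * norm * r)   # orthogonal deviation
```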

  2. Location-Aware Mobile Learning of Spatial Algorithms

    ERIC Educational Resources Information Center

    Karavirta, Ville

    2013-01-01

    Learning an algorithm--a systematic sequence of operations for solving a problem with given input--is often difficult for students due to the abstract nature of the algorithms and the data they process. To help students understand the behavior of algorithms, a subfield in computing education research has focused on algorithm…

  3. Verification of IEEE Compliant Subtractive Division Algorithms

    NASA Technical Reports Server (NTRS)

    Miner, Paul S.; Leathrum, James F., Jr.

    1996-01-01

    A parameterized definition of subtractive floating point division algorithms is presented and verified using PVS. The general algorithm is proven to satisfy a formal definition of an IEEE standard for floating point arithmetic. The utility of the general specification is illustrated using a number of different instances of the general algorithm.

  4. Bouc-Wen hysteresis model identification using Modified Firefly Algorithm

    NASA Astrophysics Data System (ADS)

    Zaman, Mohammad Asif; Sikder, Urmita

    2015-12-01

    The parameters of the Bouc-Wen hysteresis model are identified using a Modified Firefly Algorithm. The proposed algorithm uses dynamic process control parameters to improve its performance. The algorithm is used to find the model parameter values that result in the least amount of error between a set of given data points and points obtained from the Bouc-Wen model. The performance of the algorithm is compared with the performance of the conventional Firefly Algorithm, Genetic Algorithm, and Differential Evolution algorithm in terms of convergence rate and accuracy. Compared to the other three optimization algorithms, the proposed algorithm is found to have a good convergence rate with a high degree of accuracy in identifying Bouc-Wen model parameters. Finally, the proposed method is used to find the Bouc-Wen model parameters from experimental data. The obtained model is found to be in good agreement with measured data.

  5. Limitations and potentials of current motif discovery algorithms

    PubMed Central

    Hu, Jianjun; Li, Bin; Kihara, Daisuke

    2005-01-01

    Computational methods for de novo identification of gene regulation elements, such as transcription factor binding sites, have proved to be useful for deciphering genetic regulatory networks. However, despite the availability of a large number of algorithms, their strengths and weaknesses are not sufficiently understood. Here, we designed a comprehensive set of performance measures and benchmarked five modern sequence-based motif discovery algorithms using large datasets generated from Escherichia coli RegulonDB. Factors that affect the prediction accuracy, scalability and reliability are characterized. It is revealed that the nucleotide and the binding site level accuracy are very low, while the motif level accuracy is relatively high, which indicates that the algorithms can usually capture at least one correct motif in an input sequence. To exploit diverse predictions from multiple runs of one or more algorithms, a consensus ensemble algorithm has been developed, which achieved 6–45% improvement over the base algorithms by increasing both the sensitivity and specificity. Our study illustrates limitations and potentials of existing sequence-based motif discovery algorithms. Taking advantage of the revealed potentials, several promising directions for further improvements are discussed. Since the sequence-based algorithms are the baseline of most of the modern motif discovery algorithms, this paper suggests substantial improvements would be possible for them. PMID:16284194

  6. [A new peak detection algorithm of Raman spectra].

    PubMed

    Jiang, Cheng-Zhi; Sun, Qiang; Liu, Ying; Liang, Jing-Qiu; An, Yan; Liu, Bing

    2014-01-01

    The authors propose a new Raman peak recognition method, named the bi-scale correlation algorithm. The algorithm uses a combination of the correlation coefficient and the local signal-to-noise ratio under two scales to achieve Raman peak identification. We compared the performance of the proposed algorithm with that of the traditional continuous wavelet transform method in MATLAB and then tested the algorithm on real Raman spectra. The results show that the average time to identify a Raman spectrum is 0.51 s with the proposed algorithm, versus 0.71 s with the continuous wavelet transform. When the signal-to-noise ratio of a Raman peak is greater than or equal to 6 (modern Raman spectrometers feature an excellent signal-to-noise ratio), the recognition accuracy of the algorithm is higher than 99%, versus less than 84% for the continuous wavelet transform method. The mean and standard deviation of the peak-position identification error of the algorithm are both smaller than those of the continuous wavelet transform method. Simulation analysis and experimental verification show that the new algorithm has the following advantages: no need for human intervention, no need for de-noising or background-removal operations, higher recognition speed, and higher recognition accuracy.

  7. Mapped Landmark Algorithm for Precision Landing

    NASA Technical Reports Server (NTRS)

    Johnson, Andrew; Ansar, Adnan; Matthies, Larry

    2007-01-01

    A report discusses a computer vision algorithm for position estimation to enable precision landing during planetary descent. The Descent Image Motion Estimation System for the Mars Exploration Rovers has been used as a starting point for creating code for precision, terrain-relative navigation during planetary landing. The algorithm is designed to be general because it handles images taken at different scales and resolutions relative to the map, and can produce mapped landmark matches for any planetary terrain of sufficient texture. These matches provide a measurement of horizontal position relative to a known landing site specified on the surface map. Multiple mapped landmarks generated per image allow for automatic detection and elimination of bad matches. Attitude and position can be generated from each image; this image-based attitude measurement can be used by the onboard navigation filter to improve the attitude estimate, which will improve the position estimates. The algorithm uses normalized correlation of grayscale images, producing precise, sub-pixel matches. The algorithm has been broken into two sub-algorithms: (1) FFT Map Matching, which matches a single large template by correlation in the frequency domain, and (2) Mapped Landmark Refinement, which matches many small templates by correlation in the spatial domain. Each relies on feature selection, the homography transform, and 3D image correlation. The algorithm is implemented in C++ and is rated at Technology Readiness Level (TRL) 4.

  8. A Multistrategy Optimization Improved Artificial Bee Colony Algorithm

    PubMed Central

    Liu, Wen

    2014-01-01

    To address the artificial bee colony algorithm's proneness to premature convergence and its slow convergence rate, an improved algorithm is proposed. Chaotic reverse-learning strategies are used to initialize the swarm in order to improve the global search ability of the algorithm and preserve its diversity; the similarity between individuals of the population is used to characterize population diversity; a population diversity measure is set as an indicator to dynamically and adaptively adjust the nectar positions, so that premature and local convergence are effectively avoided; and a dual-population search mechanism is introduced into the search stage, whose parallel search considerably improves the convergence rate. Simulation experiments on 10 standard test functions, compared against other algorithms, show that the improved algorithm converges faster and escapes local optima more readily. PMID:24982924

  9. Local multiplicative Schwarz algorithms for convection-diffusion equations

    NASA Technical Reports Server (NTRS)

    Cai, Xiao-Chuan; Sarkis, Marcus

    1995-01-01

    We develop a new class of overlapping Schwarz type algorithms for solving scalar convection-diffusion equations discretized by finite element or finite difference methods. The preconditioners consist of two components, namely, the usual two-level additive Schwarz preconditioner and the sum of some quadratic terms constructed by using products of ordered neighboring subdomain preconditioners. The ordering of the subdomain preconditioners is determined by considering the direction of the flow. We prove that the algorithms are optimal in the sense that the convergence rates are independent of the mesh size, as well as the number of subdomains. We show by numerical examples that the new algorithms are less sensitive to the direction of the flow than the classical multiplicative Schwarz algorithms, and converge faster than the additive Schwarz algorithms. Thus, the new algorithms are more suitable for fluid flow applications than the classical additive or multiplicative Schwarz algorithms.

  10. Adaptively resizing populations: Algorithm, analysis, and first results

    NASA Technical Reports Server (NTRS)

    Smith, Robert E.; Smuda, Ellen

    1993-01-01

    Deciding on an appropriate population size for a given Genetic Algorithm (GA) application can often be critical to the algorithm's success. Too small, and the GA can fall victim to sampling error, affecting the efficacy of its search. Too large, and the GA wastes computational resources. Although advice exists for sizing GA populations, much of this advice involves theoretical aspects that are not accessible to the novice user. An algorithm for adaptively resizing GA populations is suggested. This algorithm is based on recent theoretical developments that relate population size to schema fitness variance. The suggested algorithm is developed theoretically and simulated with expected-value equations. The algorithm is then tested on a problem where population sizing can mislead the GA. The work presented suggests that the population-sizing algorithm may be a viable way to eliminate the population-sizing decision from the application of GAs.

  11. Selected-node stochastic simulation algorithm

    NASA Astrophysics Data System (ADS)

    Duso, Lorenzo; Zechner, Christoph

    2018-04-01

    Stochastic simulations of biochemical networks are of vital importance for understanding complex dynamics in cells and tissues. However, existing methods to perform such simulations are associated with computational difficulties and addressing those remains a daunting challenge to the present. Here we introduce the selected-node stochastic simulation algorithm (snSSA), which allows us to exclusively simulate an arbitrary, selected subset of molecular species of a possibly large and complex reaction network. The algorithm is based on an analytical elimination of chemical species, thereby avoiding explicit simulation of the associated chemical events. These species are instead described continuously in terms of statistical moments derived from a stochastic filtering equation, resulting in a substantial speedup when compared to Gillespie's stochastic simulation algorithm (SSA). Moreover, we show that statistics obtained via snSSA profit from a variance reduction, which can significantly lower the number of Monte Carlo samples needed to achieve a certain performance. We demonstrate the algorithm using several biological case studies for which the simulation time could be reduced by orders of magnitude.

  12. High-order Newton-penalty algorithms

    NASA Astrophysics Data System (ADS)

    Dussault, Jean-Pierre

    2005-10-01

    Recent efforts in differentiable non-linear programming have been focused on interior point methods, akin to penalty and barrier algorithms. In this paper, we address the classical equality-constrained program solved using the simple quadratic loss penalty function/algorithm. The suggestion to use extrapolations to track the differentiable trajectory associated with penalized subproblems goes back to the classic monograph of Fiacco & McCormick. This idea was further developed by Gould, who obtained a two-step quadratically convergent algorithm using prediction steps and Newton correction. Dussault interpreted the prediction step as a combined extrapolation with respect to the penalty parameter and the residual of the first-order optimality conditions. Extrapolation with respect to the residual coincides with a Newton step. We explore here higher-order extrapolations, and thus higher-order Newton-like methods. We first consider high-order variants of the Newton-Raphson method applied to non-linear systems of equations. Next, we obtain improved asymptotic convergence results for the quadratic loss penalty algorithm by using high-order extrapolation steps.

  13. Coagulation algorithms with size binning

    NASA Technical Reports Server (NTRS)

    Statton, David M.; Gans, Jason; Williams, Eric

    1994-01-01

    The Smoluchowski equation describes the time evolution of an aerosol particle size distribution due to aggregation or coagulation. Any algorithm for computerized solution of this equation requires a scheme for describing the continuum of aerosol particle sizes as a discrete set. One standard form of the Smoluchowski equation accomplishes this by restricting the particle sizes to integer multiples of a basic unit particle size (the monomer size). This can be inefficient when particle concentrations over a large range of particle sizes must be calculated. Two algorithms employing a geometric size binning convention are examined: the first assumes that the aerosol particle concentration as a function of size can be considered constant within each size bin; the second approximates the concentration as a linear function of particle size within each size bin. The output of each algorithm is compared to an analytical solution in a special case of the Smoluchowski equation for which an exact solution is known. The range of parameters more appropriate for each algorithm is examined.
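
    For reference, the monomer-multiple discretization that both binned algorithms are compared against can be time-stepped directly; the explicit-Euler sketch below is a generic textbook form (with dt small enough for stability), not the paper's binned algorithms.

```python
import numpy as np

def smoluchowski_step(n, K, dt):
    """One explicit-Euler step of the discrete Smoluchowski equation with
    sizes restricted to integer multiples of the monomer: n[k] is the
    concentration of (k+1)-mers, K[i, j] the coagulation kernel."""
    N = len(n)
    dn = np.zeros(N)
    for i in range(N):
        for j in range(N):
            rate = K[i, j] * n[i] * n[j]
            dn[i] -= rate                     # an i-mer is lost per collision
            if i + j + 1 < N:                 # product (i+1)+(j+1)-mer, if tracked
                dn[i + j + 1] += 0.5 * rate   # 1/2 corrects double counting
    return n + dt * dn
```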

  14. Fast algorithm for bilinear transforms in optics

    NASA Astrophysics Data System (ADS)

    Ostrovsky, Andrey S.; Martinez-Niconoff, Gabriel C.; Ramos Romero, Obdulio; Cortes, Liliana

    2000-10-01

    The fast algorithm for calculating the bilinear transform in the optical system is proposed. This algorithm is based on the coherent-mode representation of the cross-spectral density function of the illumination. The algorithm is computationally efficient when the illumination is partially coherent. Numerical examples are studied and compared with the theoretical results.

  15. Algorithms For Integrating Nonlinear Differential Equations

    NASA Technical Reports Server (NTRS)

    Freed, A. D.; Walker, K. P.

    1994-01-01

    Improved algorithms were developed for use in the numerical integration of systems of nonhomogeneous, nonlinear, first-order ordinary differential equations. In comparison with existing integration algorithms, these algorithms offer greater stability and accuracy. Several are asymptotically correct, thereby retaining stability and accuracy when large increments of the independent variable are used. The attainable accuracy is demonstrated by applying them to systems of nonlinear first-order differential equations that arise in the study of viscoplastic behavior, the spread of the acquired immune-deficiency syndrome (AIDS) virus, and predator/prey populations.

  16. Performance of the "CCS Algorithm" in real world patients.

    PubMed

    LaHaye, Stephen A; Olesen, Jonas B; Lacombe, Shawn P

    2015-06-01

    With the publication of the 2014 Focused Update of the Canadian Cardiovascular Society Guidelines for the Management of Atrial Fibrillation, the Canadian Cardiovascular Society Atrial Fibrillation Guidelines Committee has introduced a new triage and management algorithm, the so-called "CCS Algorithm". The CCS Algorithm is based upon expert opinion of the best available evidence; however, it has not yet been validated. Accordingly, the purpose of this study is to evaluate the performance of the CCS Algorithm in a cohort of real-world patients. We compared the CCS Algorithm with the European Society of Cardiology (ESC) Algorithm in 172 hospital inpatients who were at risk of stroke due to non-valvular atrial fibrillation and in whom anticoagulant therapy was being considered. The CCS Algorithm and the ESC Algorithm were concordant in 170/172 patients (99% of the time). There were two patients (1%) with vascular disease but no other thromboembolic risk factors who were classified as requiring oral anticoagulant therapy by the ESC Algorithm but for whom ASA was recommended by the CCS Algorithm. The CCS Algorithm appears to be unnecessarily complicated insofar as it does not appear to provide any additional discriminatory value beyond the ESC Algorithm, and its use could result in undertreatment of patients, specifically female patients with vascular disease, whose real risk of stroke has been understated by the Guidelines.

  17. Aeon: Synthesizing Scheduling Algorithms from High-Level Models

    NASA Astrophysics Data System (ADS)

    Monette, Jean-Noël; Deville, Yves; van Hentenryck, Pascal

    This paper describes the Aeon system, whose aim is to synthesize scheduling algorithms from high-level models. Aeon, which is entirely written in Comet, receives as input a high-level model of a scheduling application, which is then analyzed to generate a dedicated scheduling algorithm exploiting the structure of the model. Aeon provides a variety of synthesizers for generating complete or heuristic algorithms. Moreover, synthesizers are compositional, making it possible to generate complex hybrid algorithms naturally. Preliminary experimental results indicate that this approach may be competitive with state-of-the-art search algorithms.

  18. Teaching Multiplication Algorithms from Other Cultures

    ERIC Educational Resources Information Center

    Lin, Cheng-Yao

    2007-01-01

    This article describes a number of multiplication algorithms from different cultures around the world: Hindu, Egyptian, Russian, Japanese, and Chinese. Students can learn these algorithms and better understand the operation and properties of multiplication.

  19. Stochastic reaction-diffusion algorithms for macromolecular crowding

    NASA Astrophysics Data System (ADS)

    Sturrock, Marc

    2016-06-01

    Compartment-based (lattice-based) reaction-diffusion algorithms are often used for studying complex stochastic spatio-temporal processes inside cells. In this paper the influence of macromolecular crowding on stochastic reaction-diffusion simulations is investigated. Reaction-diffusion processes are considered on two different kinds of compartmental lattice, a cubic lattice and a hexagonal close packed lattice, and solved using two different algorithms, the stochastic simulation algorithm and the spatiocyte algorithm (Arjunan and Tomita 2010 Syst. Synth. Biol. 4, 35-53). Obstacles (modelling macromolecular crowding) are shown to have substantial effects on the mean squared displacement and average number of molecules in the domain but the nature of these effects is dependent on the choice of lattice, with the cubic lattice being more susceptible to the effects of the obstacles. Finally, improvements for both algorithms are presented.

  20. Parallelization of a blind deconvolution algorithm

    NASA Astrophysics Data System (ADS)

    Matson, Charles L.; Borelli, Kathy J.

    2006-09-01

    Often it is of interest to deblur imagery in order to obtain higher-resolution images. Deblurring requires knowledge of the blurring function - information that is often not available separately from the blurred imagery. Blind deconvolution algorithms overcome this problem by jointly estimating both the high-resolution image and the blurring function from the blurred imagery. Because blind deconvolution algorithms are iterative in nature, they can take minutes to days to deblur an image, depending on how many frames of data are used for the deblurring and the platforms on which the algorithms are executed. Here we present our progress in parallelizing a blind deconvolution algorithm to increase its execution speed. This progress includes sub-frame parallelization and a code structure that is not specialized to a specific computer hardware architecture.

  1. Line-drawing algorithms for parallel machines

    NASA Technical Reports Server (NTRS)

    Pang, Alex T.

    1990-01-01

    Conventional line-drawing algorithms, when applied directly on parallel machines, can lead to very inefficient code. It is suggested that, instead of modifying an existing algorithm for a parallel machine, a more efficient implementation can be produced by going back to the invariants in the definition. Popular line-drawing algorithms are compared with two alternatives: distance to a line (a point is on the line if it is sufficiently close to it) and intersection with a line (a point is on the line if it is an intersection point). For massively parallel single-instruction-multiple-data (SIMD) machines (with thousands of processors and up), the alternatives provide viable line-drawing algorithms. Because of the pixel-per-processor mapping, their performance is independent of the line length and orientation.

  2. Convergence Rates of Finite Difference Stochastic Approximation Algorithms

    DTIC Science & Technology

    2016-06-01

    … the Kiefer-Wolfowitz algorithm and the mirror descent algorithm, under various updating schemes using finite differences as gradient approximations. It is shown that the convergence of these algorithms can be accelerated by controlling the implementation of the …

  3. Information filtering via weighted heat conduction algorithm

    NASA Astrophysics Data System (ADS)

    Liu, Jian-Guo; Guo, Qiang; Zhang, Yi-Cheng

    2011-06-01

    In this paper, by taking into account the effects of user and object correlations on a heat conduction (HC) algorithm, a weighted heat conduction (WHC) algorithm is presented. We argue that the edge weights of the user-object bipartite network should be embedded into the HC algorithm to measure object similarity. The numerical results indicate that both accuracy and diversity are greatly improved compared with the standard HC algorithm, with the optimal values reached simultaneously. On the Movielens and Netflix datasets, the algorithmic accuracy, measured by the average ranking score, is improved by 39.7% and 56.1% in the optimal case, respectively, and the diversity reaches 0.9587 and 0.9317 when the recommendation list length equals 5. Further statistical analysis indicates that, in the optimal case, the distributions of the edge weight change to the Poisson form, which may be the reason the HC algorithm's performance can be improved. This work highlights the effect of edge weight on personalized recommendation, which may be an important factor affecting personalized recommendation performance.
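
    The unweighted baseline that WHC modifies scores objects through a normalized object-object transfer matrix; a minimal sketch of that baseline follows (the standard heat-conduction form, with WHC's edge weighting omitted; assumes no isolated users or objects).

```python
import numpy as np

def heat_conduction_scores(A, user):
    """Standard heat conduction on a user-object bipartite network.
    A is the users x objects 0/1 adjacency matrix; returns object scores
    for the target user (rank the uncollected objects by score)."""
    ku = A.sum(axis=1)                    # user degrees
    ko = A.sum(axis=0)                    # object degrees
    # W[i, j] = (1 / ko[i]) * sum_l A[l, i] * A[l, j] / ku[l]
    W = (A / ku[:, None]).T @ A / ko[:, None]
    f = A[user].astype(float)             # initial resource: collected objects
    return W @ f
```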

  4. A hierarchical exact accelerated stochastic simulation algorithm

    NASA Astrophysics Data System (ADS)

    Orendorff, David; Mjolsness, Eric

    2012-12-01

    A new algorithm, "HiER-leap" (hierarchical exact reaction-leaping), is derived which improves on the computational properties of the ER-leap algorithm for exact accelerated simulation of stochastic chemical kinetics. Unlike ER-leap, HiER-leap utilizes a hierarchical or divide-and-conquer organization of reaction channels into tightly coupled "blocks" and is thereby able to speed up systems with many reaction channels. Like ER-leap, HiER-leap is based on the use of upper and lower bounds on the reaction propensities to define a rejection sampling algorithm with inexpensive early rejection and acceptance steps. But in HiER-leap, large portions of intra-block sampling may be done in parallel. An accept/reject step is used to synchronize across blocks. This method scales well when many reaction channels are present and has desirable asymptotic properties. The algorithm is exact, parallelizable and achieves a significant speedup over the stochastic simulation algorithm and ER-leap on certain problems. This algorithm offers a potentially important step towards efficient in silico modeling of entire organisms.

  5. Multidimensional generalized-ensemble algorithms for complex systems.

    PubMed

    Mitsutake, Ayori; Okamoto, Yuko

    2009-06-07

    We give general formulations of the multidimensional multicanonical algorithm, simulated tempering, and replica-exchange method. We generalize the original potential energy function E(0) by adding any physical quantity V of interest as a new energy term. These multidimensional generalized-ensemble algorithms then perform a random walk not only in E(0) space but also in V space. Among the three algorithms, the replica-exchange method is the easiest to perform because the weight factor is just a product of regular Boltzmann-like factors, while the weight factors for the multicanonical algorithm and simulated tempering are not a priori known. We give a simple procedure for obtaining the weight factors for these two latter algorithms, which uses a short replica-exchange simulation and the multiple-histogram reweighting techniques. As an example of applications of these algorithms, we have performed a two-dimensional replica-exchange simulation and a two-dimensional simulated-tempering simulation using an alpha-helical peptide system. From these simulations, we study the helix-coil transitions of the peptide in gas phase and in aqueous solution.

  6. The Applications of Genetic Algorithms in Medicine.

    PubMed

    Ghaheri, Ali; Shoar, Saeed; Naderan, Mohammad; Hoseini, Sayed Shahabuddin

    2015-11-01

    A great wealth of information is hidden amid medical research data that in some cases cannot be easily analyzed, if at all, using classical statistical methods. Inspired by nature, metaheuristic algorithms have been developed to offer optimal or near-optimal solutions to complex data analysis and decision-making tasks in a reasonable time. Due to their powerful features, metaheuristic algorithms have frequently been used in other fields of science. In medicine, however, the use of these algorithms is not well known to physicians, who may well benefit from applying them to solve complex medical problems. Therefore, in this paper, we introduce the genetic algorithm and its applications in medicine. The use of the genetic algorithm has promising implications in various medical specialties including radiology, radiotherapy, oncology, pediatrics, cardiology, endocrinology, surgery, obstetrics and gynecology, pulmonology, infectious diseases, orthopedics, rehabilitation medicine, neurology, pharmacotherapy, and health care management. This review introduces the applications of the genetic algorithm in disease screening, diagnosis, treatment planning, pharmacovigilance, prognosis, and health care management, and enables physicians to envision possible applications of this metaheuristic method in their medical career.

  7. The Applications of Genetic Algorithms in Medicine

    PubMed Central

    Ghaheri, Ali; Shoar, Saeed; Naderan, Mohammad; Hoseini, Sayed Shahabuddin

    2015-01-01

    A great wealth of information is hidden amid medical research data that in some cases cannot be easily analyzed, if at all, using classical statistical methods. Inspired by nature, metaheuristic algorithms have been developed to offer optimal or near-optimal solutions to complex data analysis and decision-making tasks in a reasonable time. Due to their powerful features, metaheuristic algorithms have frequently been used in other fields of science. In medicine, however, the use of these algorithms is not well known to physicians, who may well benefit from applying them to solve complex medical problems. Therefore, in this paper, we introduce the genetic algorithm and its applications in medicine. The use of the genetic algorithm has promising implications in various medical specialties including radiology, radiotherapy, oncology, pediatrics, cardiology, endocrinology, surgery, obstetrics and gynecology, pulmonology, infectious diseases, orthopedics, rehabilitation medicine, neurology, pharmacotherapy, and health care management. This review introduces the applications of the genetic algorithm in disease screening, diagnosis, treatment planning, pharmacovigilance, prognosis, and health care management, and enables physicians to envision possible applications of this metaheuristic method in their medical career. PMID:26676060

  8. An adaptive grid algorithm for 3-D GIS landform optimization based on improved ant algorithm

    NASA Astrophysics Data System (ADS)

    Wu, Chenhan; Meng, Lingkui; Deng, Shijun

    2005-07-01

    The key technique of 3-D GIS is to realize quick, high-quality 3-D visualization, in which 3-D roaming systems based on landform play an important role. How to increase the efficiency of the 3-D roaming engine and process large amounts of landform data is a key problem in 3-D landform roaming systems, and handling it improperly results in tremendous consumption of system resources. High-speed processing of distributed landform DEM (Digital Elevation Model) data and high-speed distributed scheduling of the various 3-D landform data resources have therefore become central to 3-D roaming system design. In this paper we improve the basic ant algorithm and design a scheduling strategy for 3-D GIS landform resources based on it. By introducing initial hypothetical path weights σ_i, the information-factor (pheromone) update of the original algorithm changes from τ_j to Δτ_j + σ_i, with the weights determined by the 3-D computing capacity of the various nodes in the network environment. During the initial phase of task assignment, increasing the information factors of resources with high task-completion rates and decreasing those with low rates drives the completion rates toward a common value as quickly as possible; in the later phase of task assignment, the load-balancing ability of the system is further improved. Experimental results show that, with the improved ant algorithm, our system not only removes many disadvantages of the traditional ant algorithm but also, like ants foraging for food, effectively distributes the complicated landform computation across many computers for cooperative processing and obtains satisfying search results.

  9. A tuning algorithm for model predictive controllers based on genetic algorithms and fuzzy decision making.

    PubMed

    van der Lee, J H; Svrcek, W Y; Young, B R

    2008-01-01

    Model Predictive Control is a valuable tool for the process control engineer in a wide variety of applications. Because of this the structure of an MPC can vary dramatically from application to application. There have been a number of works dedicated to MPC tuning for specific cases. Since MPCs can differ significantly, this means that these tuning methods become inapplicable and a trial and error tuning approach must be used. This can be quite time consuming and can result in non-optimum tuning. In an attempt to resolve this, a generalized automated tuning algorithm for MPCs was developed. This approach is numerically based and combines a genetic algorithm with multi-objective fuzzy decision-making. The key advantages to this approach are that genetic algorithms are not problem specific and only need to be adapted to account for the number and ranges of tuning parameters for a given MPC. As well, multi-objective fuzzy decision-making can handle qualitative statements of what optimum control is, in addition to being able to use multiple inputs to determine tuning parameters that best match the desired results. This is particularly useful for multi-input, multi-output (MIMO) cases where the definition of "optimum" control is subject to the opinion of the control engineer tuning the system. A case study will be presented in order to illustrate the use of the tuning algorithm. This will include how different definitions of "optimum" control can arise, and how they are accounted for in the multi-objective decision making algorithm. The resulting tuning parameters from each of the definition sets will be compared, and in doing so show that the tuning parameters vary in order to meet each definition of optimum control, thus showing the generalized automated tuning algorithm approach for tuning MPCs is feasible.

  10. The psychopharmacology algorithm project at the Harvard South Shore Program: an algorithm for acute mania.

    PubMed

    Mohammad, Othman; Osser, David N

    2014-01-01

    This new algorithm for the pharmacotherapy of acute mania was developed by the Psychopharmacology Algorithm Project at the Harvard South Shore Program. The authors conducted a literature search in PubMed and reviewed key studies, other algorithms and guidelines, and their references. Treatments were prioritized considering three main considerations: (1) effectiveness in treating the current episode, (2) preventing potential relapses to depression, and (3) minimizing side effects over the short and long term. The algorithm presupposes that clinicians have made an accurate diagnosis, decided how to manage contributing medical causes (including substance misuse), discontinued antidepressants, and considered the patient's childbearing potential. We propose different algorithms for mixed and nonmixed mania. Patients with mixed mania may be treated first with a second-generation antipsychotic, of which the first choice is quetiapine because of its greater efficacy for depressive symptoms and episodes in bipolar disorder. Valproate and then either lithium or carbamazepine may be added. For nonmixed mania, lithium is the first-line recommendation. A second-generation antipsychotic can be added. Again, quetiapine is favored, but if quetiapine is unacceptable, risperidone is the next choice. Olanzapine is not considered a first-line treatment due to its long-term side effects, but it could be second-line. If the patient, whether mixed or nonmixed, is still refractory to the above medications, then depending on what has already been tried, consider carbamazepine, haloperidol, olanzapine, risperidone, and valproate first tier; aripiprazole, asenapine, and ziprasidone second tier; and clozapine third tier (because of its weaker evidence base and greater side effects). Electroconvulsive therapy may be considered at any point in the algorithm if the patient has a history of positive response or is intolerant of medications.

  11. Minimal-scan filtered backpropagation algorithms for diffraction tomography.

    PubMed

    Pan, X; Anastasio, M A

    1999-12-01

    The filtered backpropagation (FBPP) algorithm, originally developed by Devaney [Ultrason. Imaging 4, 336 (1982)], has been widely used for reconstructing images in diffraction tomography. It is generally known that the FBPP algorithm requires scattered data from a full angular range of 2π for exact reconstruction of a generally complex-valued object function. However, we reveal that one needs scattered data only over the angular range 0 ≤ φ ≤ 3π/2 for exact reconstruction of a generally complex-valued object function. Using this insight, we develop and analyze a family of minimal-scan filtered backpropagation (MS-FBPP) algorithms, which, unlike the FBPP algorithm, use scattered data acquired from view angles over the range 0 ≤ φ ≤ 3π/2. We show analytically that these MS-FBPP algorithms are mathematically identical to the FBPP algorithm. We also perform computer simulation studies for validation, demonstration, and comparison of these MS-FBPP algorithms. The numerical results in these simulation studies corroborate our theoretical assertions.

  12. FIVQ algorithm for interference hyper-spectral image compression

    NASA Astrophysics Data System (ADS)

    Wen, Jia; Ma, Caiwen; Zhao, Junsuo

    2014-07-01

    Based on the improved vector quantization (IVQ) algorithm [1] proposed in 2012, this paper proposes a further improved vector quantization (FIVQ) algorithm for LASIS (Large Aperture Static Imaging Spectrometer) interference hyper-spectral image compression. To obtain better image quality, the IVQ algorithm uses both the mean values and the VQ indices as encoding rules. Although IVQ improves both the bit rate and the image quality, it can be improved further to reach a much lower bit rate for LASIS interference patterns, which have special optical characteristics arising from the push-and-sweep LASIS imaging principle. In the proposed FIVQ algorithm, the neighborhood of each encoding block that uses the mean-value rule is checked to determine whether it has the same mean value as the current block. Experiments show that FIVQ achieves a lower bit rate than the IVQ algorithm on LASIS interference hyper-spectral sequences.

  13. PCA-LBG-based algorithms for VQ codebook generation

    NASA Astrophysics Data System (ADS)

    Tsai, Jinn-Tsong; Yang, Po-Yuan

    2015-04-01

    Vector quantisation (VQ) codebooks are generated by combining principal component analysis (PCA) algorithms with Linde-Buzo-Gray (LBG) algorithms. All training vectors are grouped according to the projected values of the principal components. The PCA-LBG-based algorithms include (1) PCA-LBG-Median, which selects the median vector of each group, (2) PCA-LBG-Centroid, which adopts the centroid vector of each group, and (3) PCA-LBG-Random, which randomly selects a vector of each group. The LBG algorithm then refines the codebook starting from the initial codevectors supplied by the PCA step. The PCA performs an orthogonal transformation to convert a set of potentially correlated variables into a set of variables that are not linearly correlated. Because the orthogonal transformation efficiently distinguishes test image vectors, the proposed PCA-LBG-based algorithms are expected to outperform conventional algorithms in designing VQ codebooks. The experimental results confirm that the proposed PCA-LBG-based algorithms indeed obtain better results compared to existing methods reported in the literature.
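
    As a concrete reading of the approach, the sketch below implements the PCA-LBG-Centroid variant: project the training vectors onto the first principal component, split them into equal-count groups, seed the codebook with group centroids, and refine with standard LBG iterations. Group counts, iteration limits, and the synthetic training data are illustrative assumptions.

      # PCA-LBG-Centroid sketch; sizes and convergence test are assumptions.
      import numpy as np

      def pca_lbg_centroid(train, k, iters=20):
          # First principal component of the training set.
          centered = train - train.mean(axis=0)
          _, vecs = np.linalg.eigh(np.cov(centered.T))
          pc1 = vecs[:, -1]                 # eigenvector of largest eigenvalue
          proj = centered @ pc1
          # Split the projected values into k equal-count groups and seed
          # the codebook with the centroid of each group.
          order = np.argsort(proj)
          groups = np.array_split(order, k)
          codebook = np.array([train[g].mean(axis=0) for g in groups])
          # Standard LBG refinement: nearest-codeword partition, recentre.
          for _ in range(iters):
              d = ((train[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
              nearest = d.argmin(axis=1)
              for j in range(k):
                  members = train[nearest == j]
                  if len(members):
                      codebook[j] = members.mean(axis=0)
          return codebook

      rng = np.random.default_rng(0)
      train = rng.normal(size=(1000, 8))    # stand-in for image vectors
      print(pca_lbg_centroid(train, k=16).shape)   # (16, 8)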

  14. Improved pulse laser ranging algorithm based on high speed sampling

    NASA Astrophysics Data System (ADS)

    Gao, Xuan-yi; Qian, Rui-hai; Zhang, Yan-mei; Li, Huan; Guo, Hai-chao; He, Shi-jie; Guo, Xiao-kang

    2016-10-01

    Narrow pulse laser ranging achieves long-range target detection using laser pulses with low-divergence beams, and is widely used in the military, industrial, civil, engineering, and transportation fields. In this paper, an improved narrow pulse laser ranging algorithm based on high-speed sampling is studied. First, theoretical simulation models of the laser emission and the pulse laser ranging algorithm are built and analyzed, and an improved pulse ranging algorithm is developed that combines the matched filter algorithm with the constant fraction discrimination (CFD) algorithm. After simulating the algorithm, a laser ranging hardware system is set up to implement it, consisting of a laser diode, a laser detector, and a high-sample-rate data logging circuit. The improved algorithm, a fusion of the matched filter and CFD algorithms, is then implemented in an FPGA chip using the Verilog HDL language. Finally, a laser ranging experiment is carried out with the hardware system to compare the ranging performance of the improved algorithm against the matched filter and CFD algorithms alone. The test analysis shows that the hardware system achieves high-speed processing and high-speed sampling data transmission, and that the improved algorithm achieves a ranging precision of 0.3 m, meeting the expected performance and consistent with the theoretical simulation.
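
    The fusion of the two timing methods can be illustrated compactly: run the echo through a matched filter (cross-correlation with the pulse template), then apply constant fraction discrimination to the filter output and take the zero crossing near the peak as the arrival estimate. In the sketch below, the pulse shape, noise level, and CFD fraction and delay are illustrative assumptions, not the hardware parameters of the paper.

      import numpy as np

      n = 2048
      pulse = np.exp(-((np.arange(n) - 20) / 5.0) ** 2)   # Gaussian template
      true_shift = 731                                     # samples

      rng = np.random.default_rng(1)
      echo = np.roll(pulse, true_shift) + 0.1 * rng.normal(size=n)

      # Matched filter: cross-correlate the echo with the template.
      mf = np.correlate(echo, pulse, mode="full")[n - 1:]
      peak = int(mf.argmax())

      # Constant fraction discrimination on the matched-filter output: an
      # attenuated copy minus a delayed copy crosses zero at a fixed
      # fraction of the amplitude, independent of the pulse height.
      frac, d = 0.5, 10                                    # assumed CFD settings
      cfd = frac * mf[d:] - mf[:-d]
      lo = max(0, peak - 20)
      win = cfd[lo:peak + 1]
      cross = np.where((win[:-1] > 0) & (win[1:] <= 0))[0]
      est = lo + int(cross[-1]) if cross.size else peak
      print("matched-filter peak:", peak, "CFD estimate:", est,
            "true shift:", true_shift)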

  15. An improved affine projection algorithm for active noise cancellation

    NASA Astrophysics Data System (ADS)

    Zhang, Congyan; Wang, Mingjiang; Han, Yufei; Sun, Yunzhuo

    2017-08-01

    The affine projection algorithm is a data-reuse algorithm with a good convergence rate compared to traditional adaptive filtering algorithms. Two factors affect its performance: the step size and the projection order. In this paper, we propose a new variable step size affine projection algorithm (VSS-APA) that dynamically changes the step size according to certain rules, so that it achieves a smaller steady-state error and faster convergence. Simulation results show that its performance is superior to the traditional affine projection algorithm, and in active noise control (ANC) applications the new algorithm obtains very good results.
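
    A minimal sketch of an affine projection update with a variable step size follows, applied to a system-identification toy problem. The paper's specific VSS rule is not reproduced; the rule below (shrinking the step as a smoothed error power decays) is only an illustrative stand-in.

      import numpy as np

      def vss_apa(x, d, taps=16, proj=4, mu_max=0.5, delta=1e-3):
          w = np.zeros(taps)
          err_pow = 1.0
          for k in range(taps + proj, len(x)):
              # Data matrix: the last `proj` input regressors (projection order).
              X = np.array([x[k - i - taps + 1:k - i + 1][::-1]
                            for i in range(proj)]).T
              e = d[k - proj + 1:k + 1][::-1] - X.T @ w
              # Illustrative variable step size: smaller steps as the
              # smoothed error power decays.
              err_pow = 0.9 * err_pow + 0.1 * float(e @ e) / proj
              mu = mu_max * err_pow / (err_pow + 1.0)
              # Standard affine projection update with regularisation delta.
              w += mu * X @ np.linalg.solve(X.T @ X + delta * np.eye(proj), e)
          return w

      rng = np.random.default_rng(0)
      h = rng.normal(size=16)                 # unknown system to identify
      x = rng.normal(size=4000)
      d = np.convolve(x, h)[:len(x)] + 0.01 * rng.normal(size=len(x))
      w = vss_apa(x, d)
      print("final misalignment:", np.linalg.norm(w - h) / np.linalg.norm(h))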

  16. New Results in Astrodynamics Using Genetic Algorithms

    NASA Technical Reports Server (NTRS)

    Coverstone-Carroll, V.; Hartmann, J. W.; Williams, S. N.; Mason, W. J.

    1998-01-01

    Genetic algorithms have gained popularity as an effective procedure for obtaining solutions to traditionally difficult space mission optimization problems. In this paper, a brief survey of the use of genetic algorithms to solve astrodynamics problems is presented and is followed by new results obtained from applying a Pareto genetic algorithm to the optimization of low-thrust interplanetary spacecraft missions.

  17. A new real-time tsunami detection algorithm

    NASA Astrophysics Data System (ADS)

    Chierici, F.; Embriaco, D.; Pignagnoli, L.

    2016-12-01

    Real-time tsunami detection algorithms play a key role in any Tsunami Early Warning System. We have developed a new algorithm for tsunami detection based on real-time tide removal and real-time band-pass filtering of sea-bed pressure recordings. The algorithm greatly increases the tsunami detection probability, shortens the detection delay and enhances detection reliability, at low computational cost. The algorithm is designed to be used also in autonomous early warning systems, with a set of input parameters and procedures which can be reconfigured in real time. We have also developed a methodology based on Monte Carlo simulations to test tsunami detection algorithms. The algorithm performance is estimated by defining and evaluating statistical parameters, namely the detection probability and the detection delay, which are functions of the tsunami amplitude and wavelength, and the rate of false alarms. Pressure data sets acquired by Bottom Pressure Recorders in different locations and environmental conditions have been used in order to consider real working scenarios in the test. We also present an application of the algorithm to the tsunami event which occurred at Haida Gwaii on October 28th, 2012, using data recorded by the Bullseye underwater node of Ocean Networks Canada. The algorithm successfully ran for test purposes in year-long missions onboard the GEOSTAR stand-alone multidisciplinary abyssal observatory, deployed in the Gulf of Cadiz during the EC project NEAREST, and on the NEMO-SN1 cabled observatory deployed in the Western Ionian Sea, an operational node of the European research infrastructure EMSO.
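
    The core of such a detector is easy to sketch: band-pass the pressure record between tsunami-band periods so that the tide is rejected, then trigger on a robust threshold. The synthetic series, band edges, filter order, and threshold below are illustrative assumptions, not the reconfigurable parameters of the deployed algorithm.

      import numpy as np
      from scipy.signal import butter, sosfiltfilt

      fs = 1 / 15.0                       # one sample every 15 s, assumed
      t = np.arange(0, 48 * 3600, 15.0)   # 48 h of bottom pressure data
      rng = np.random.default_rng(0)

      tide = 2.0 * np.sin(2 * np.pi * t / (12.42 * 3600))   # M2 tide, metres
      tsunami = 0.05 * np.exp(-((t - 30 * 3600) / 600) ** 2) \
                * np.sin(2 * np.pi * t / 900)               # 15-min wave burst
      p = tide + tsunami + 0.005 * rng.normal(size=t.size)

      # Band-pass between ~2 min and ~2 h periods; the tide falls far below
      # the low edge and is strongly attenuated.
      sos = butter(4, [1 / 7200.0, 1 / 120.0], btype="band", fs=fs,
                   output="sos")
      filtered = sosfiltfilt(sos, p)

      threshold = 6 * np.median(np.abs(filtered)) / 0.6745   # robust sigma
      core = slice(240, t.size - 240)     # skip first/last hour (edge effects)
      alarm = np.where(np.abs(filtered[core]) > threshold)[0]
      if alarm.size:
          print("detection at t = %.1f h" % (t[core][alarm[0]] / 3600))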

  18. Dynamic Inertia Weight Binary Bat Algorithm with Neighborhood Search

    PubMed Central

    2017-01-01

    Binary bat algorithm (BBA) is a binary version of the bat algorithm (BA). It has been proven that BBA is competitive compared to other binary heuristic algorithms. Since the update processes of velocity in the algorithm are consistent with BA, in some cases this algorithm also faces the premature convergence problem. This paper proposes an improved binary bat algorithm (IBBA) to solve this problem. To evaluate the performance of IBBA, standard benchmark functions and zero-one knapsack problems have been employed. The numerical results of the benchmark function experiments show that the proposed approach greatly outperforms the original BBA and binary particle swarm optimization (BPSO). Comparisons with several other heuristic algorithms on zero-one knapsack problems also verify that the proposed algorithm is better able to avoid local minima. PMID:28634487

  19. Dynamic Inertia Weight Binary Bat Algorithm with Neighborhood Search.

    PubMed

    Huang, Xingwang; Zeng, Xuewen; Han, Rui

    2017-01-01

    Binary bat algorithm (BBA) is a binary version of the bat algorithm (BA). It has been proven that BBA is competitive compared to other binary heuristic algorithms. Since the update processes of velocity in the algorithm are consistent with BA, in some cases this algorithm also faces the premature convergence problem. This paper proposes an improved binary bat algorithm (IBBA) to solve this problem. To evaluate the performance of IBBA, standard benchmark functions and zero-one knapsack problems have been employed. The numerical results of the benchmark function experiments show that the proposed approach greatly outperforms the original BBA and binary particle swarm optimization (BPSO). Comparisons with several other heuristic algorithms on zero-one knapsack problems also verify that the proposed algorithm is better able to avoid local minima.
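
    A minimal binary bat sketch on a 0/1 knapsack instance is given below, with a linearly decreasing inertia weight on the velocity update. The inertia schedule, transfer function, and problem data are generic illustrations; the paper's IBBA adds its own dynamic inertia rule and neighborhood search on top of this skeleton.

      import numpy as np

      rng = np.random.default_rng(3)
      n_items = 30
      w_items = rng.integers(1, 20, n_items)      # assumed knapsack instance
      v_items = rng.integers(1, 30, n_items)
      cap = int(0.5 * w_items.sum())

      def fitness(x):
          wt = (w_items * x).sum()
          return (v_items * x).sum() if wt <= cap else 0   # infeasible -> 0

      pop, iters = 40, 200
      X = rng.integers(0, 2, (pop, n_items))
      V = np.zeros((pop, n_items))
      best = X[np.argmax([fitness(x) for x in X])].copy()

      for it in range(iters):
          w_inertia = 0.9 - 0.5 * it / iters      # linearly decreasing weight
          freq = rng.uniform(0, 2, (pop, 1))      # bat pulse frequency
          V = w_inertia * V + (X - best) * freq
          # V-shaped transfer function maps velocity to a flip probability.
          flip = rng.random((pop, n_items)) < np.abs(np.tanh(V))
          X = np.where(flip, 1 - X, X)
          fit = np.array([fitness(x) for x in X])
          if fit.max() > fitness(best):
              best = X[fit.argmax()].copy()

      print("best value:", fitness(best), "weight:", (w_items * best).sum())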

  20. A noniterative greedy algorithm for multiframe point correspondence.

    PubMed

    Shafique, Khurram; Shah, Mubarak

    2005-01-01

    This paper presents a framework for finding point correspondences in monocular image sequences over multiple frames. The general problem of multiframe point correspondence is NP-hard for three or more frames. A polynomial time algorithm for a restriction of this problem is presented and is used as the basis of the proposed greedy algorithm for the general problem. The greedy nature of the proposed algorithm allows it to be used in real-time systems for tracking and surveillance, etc. In addition, the proposed algorithm deals with the problems of occlusion, missed detections, and false positives by using a single noniterative greedy optimization scheme and, hence, reduces the complexity of the overall algorithm as compared to most existing approaches where multiple heuristics are used for the same purpose. While most greedy algorithms for point tracking do not allow for entry and exit of the points from the scene, this is not a limitation for the proposed algorithm. Experiments with real and synthetic data over a wide range of scenarios and system parameters are presented to validate the claims about the performance of the proposed algorithm.

  1. Refined genetic algorithm -- Economic dispatch example

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sheble, G.B.; Brittig, K.

    1995-02-01

    A genetic-based algorithm is used to solve an economic dispatch (ED) problem. The algorithm utilizes payoff information of prospective solutions to evaluate optimality. Thus, the constraints of classical Lagrangian techniques on unit curves are eliminated. Using an economic dispatch problem as a basis for comparison, several different techniques which enhance program efficiency and accuracy, such as mutation prediction, elitism, interval approximation and penalty factors, are explored. Two unique genetic algorithms are also compared. The results are verified for a sample problem using a classical technique.
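
    The essential ingredients named in the record (a payoff-based fitness, penalty factors for the power balance, elitism, mutation) fit in a short sketch. The unit cost data and GA settings below are illustrative assumptions, not those of the paper.

      # Toy GA economic dispatch: quadratic unit costs, power balance via a
      # penalty factor, elitism. All data and settings are assumptions.
      import random

      units = [                    # (a, b, c, Pmin, Pmax): cost = a + b*P + c*P^2
          (100, 20, 0.050, 10, 100),
          (120, 18, 0.040, 20, 150),
          (80,  22, 0.060, 10, 80),
      ]
      DEMAND, PENALTY = 210.0, 1000.0

      def cost(p):
          c = sum(a + b * x + cc * x * x
                  for (a, b, cc, _, _), x in zip(units, p))
          return c + PENALTY * abs(sum(p) - DEMAND)   # penalty-factor constraint

      def random_solution():
          return [random.uniform(lo, hi) for (_, _, _, lo, hi) in units]

      def ga(pop_size=50, gens=300):
          pop = [random_solution() for _ in range(pop_size)]
          for _ in range(gens):
              pop.sort(key=cost)
              nxt = pop[:5]                            # elitism
              while len(nxt) < pop_size:
                  p1, p2 = random.sample(pop[:25], 2)  # mate the better half
                  child = [(x + y) / 2 for x, y in zip(p1, p2)]
                  if random.random() < 0.3:            # mutation within limits
                      i = random.randrange(len(units))
                      lo, hi = units[i][3], units[i][4]
                      child[i] = min(hi, max(lo, child[i] + random.gauss(0, 5)))
                  nxt.append(child)
              pop = nxt
          return min(pop, key=cost)

      best = ga()
      print("dispatch:", [round(p, 1) for p in best],
            "cost:", round(cost(best), 1))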

  2. Queue and stack sorting algorithm optimization and performance analysis

    NASA Astrophysics Data System (ADS)

    Qian, Mingzhu; Wang, Xiaobao

    2018-04-01

    Sorting is one of the basic operations in a wide range of software development, and data structures courses cover many kinds of sorting algorithms; the performance of the sorting algorithm is directly related to the efficiency of the software. Much research has been devoted to optimizing sorting algorithms for efficiency. Here, the author further studies sorting algorithms that combine queues with stacks. The algorithm mainly exploits the complementary storage properties of queues and stacks through alternating operations, thus avoiding the large number of exchange or move operations needed in traditional sorts. Building on existing work, improvements and optimizations are proposed with a focus on time complexity. The experimental results show that the improvements are effective, and the time complexity, space complexity, and stability of the algorithm are also studied. The improved and optimized algorithm is more practical.

  3. Testing the accuracy of redshift-space group-finding algorithms

    NASA Astrophysics Data System (ADS)

    Frederic, James J.

    1995-04-01

    Using simulated redshift surveys generated from a high-resolution N-body cosmological structure simulation, we study algorithms used to identify groups of galaxies in redshift space. Two algorithms are investigated; both are friends-of-friends schemes with variable linking lengths in the radial and transverse dimensions. The chief difference between the algorithms is in the redshift linking length. The algorithm proposed by Huchra & Geller (1982) uses a generous linking length designed to find 'fingers of god,' while that of Nolthenius & White (1987) uses a smaller linking length to minimize contamination by projection. We find that neither of the algorithms studied is intrinsically superior to the other; rather, the ideal algorithm as well as the ideal algorithm parameters depends on the purpose for which groups are to be studied. The Huchra & Geller algorithm misses few real groups, at the cost of including some spurious groups and members, while the Nolthenius & White algorithm misses high velocity dispersion groups and members but is less likely to include interlopers in its group assignments. Adjusting the parameters of either algorithm results in a trade-off between group accuracy and completeness. In a companion paper we investigate the accuracy of virial mass estimates and clustering properties of groups identified using these algorithms.

  4. A Polynomial Time, Numerically Stable Integer Relation Algorithm

    NASA Technical Reports Server (NTRS)

    Ferguson, Helaman R. P.; Bailey, Daivd H.; Kutler, Paul (Technical Monitor)

    1998-01-01

    Let x = (x1, x2, ..., xn) be a vector of real numbers. x is said to possess an integer relation if there exist integers ai, not all zero, such that a1x1 + a2x2 + ... + anxn = 0. Beginning in 1977, several algorithms (with proofs) have been discovered to recover the ai given x. The most efficient of these existing integer relation algorithms (in terms of run time and the precision required of the input) has the drawback of being very unstable numerically. It often requires a numeric precision level in the thousands of digits to reliably recover relations in modest-sized test problems. We present here a new algorithm for finding integer relations, which we have named the "PSLQ" algorithm. It is proved in this paper that the PSLQ algorithm terminates with a relation in a number of iterations that is bounded by a polynomial in n. Because this algorithm employs a numerically stable matrix reduction procedure, it is free from the numerical difficulties that plague other integer relation algorithms. Furthermore, its stability admits an efficient implementation with lower run times on average than other algorithms currently in use. Finally, this stability can be used to prove that relation bounds obtained from computer runs using this algorithm are numerically accurate.
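
    An implementation of PSLQ is available in the mpmath Python library, which makes the idea easy to try: given numerically computed values, it returns a small integer vector orthogonal to them, if one exists at the working precision. A quick check on the golden ratio:

      # PSLQ via mpmath: recover the relation 1 + phi - phi^2 = 0.
      from mpmath import mp, mpf, sqrt, pslq

      mp.dps = 50                          # 50 digits of working precision
      phi = (1 + sqrt(5)) / 2
      rel = pslq([mpf(1), phi, phi ** 2])  # integer vector a with a . x = 0
      print(rel)                           # [1, 1, -1] (up to sign)

    Since phi satisfies phi^2 = phi + 1, the recovered vector [1, 1, -1] is exactly the expected integer relation.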

  5. AeroADL: applying the integration of the Suomi-NPP science algorithms with the Algorithm Development Library to the calibration and validation task

    NASA Astrophysics Data System (ADS)

    Houchin, J. S.

    2014-09-01

    A common problem in the off-line validation of calibration algorithms and algorithm coefficients is being able to run science data through the exact same software used for on-line calibration of that data. The Joint Polar Satellite System (JPSS) program solved part of this problem by making the Algorithm Development Library (ADL) available, which allows the operational algorithm code to be compiled and run on a desktop Linux workstation using flat file input and output. However, this solved only part of the problem, as the toolkit and methods to initiate the processing of data through the algorithms were geared specifically toward the algorithm developer, not the calibration analyst. In algorithm development mode, a limited number of sets of test data are staged for the algorithm once, and then run through the algorithm over and over as the software is developed and debugged. In calibration analyst mode, new data sets are continually run through the algorithm, which requires significant effort to stage each of those data sets for the algorithm without additional tools. AeroADL solves this second problem by providing a set of scripts that wrap the ADL tools, providing efficient means to stage and process an input data set, to override static calibration coefficient look-up tables (LUTs) with experimental versions of those tables, and to manage a library containing multiple versions of each static LUT file in such a way that the correct set of LUTs required for each algorithm is automatically provided without analyst effort. Using AeroADL, The Aerospace Corporation's analyst team has demonstrated the ability to quickly and efficiently perform analysis tasks for both the VIIRS and OMPS sensors with minimal training on the software tools.

  6. Automated Spectroscopic Analysis Using the Particle Swarm Optimization Algorithm: Implementing a Guided Search Algorithm to Autofit

    NASA Astrophysics Data System (ADS)

    Ervin, Katherine; Shipman, Steven

    2017-06-01

    While rotational spectra can be rapidly collected, their analysis (especially for complex systems) is seldom straightforward, leading to a bottleneck. The AUTOFIT program was designed to serve that need by quickly matching rotational constants to spectra with little user input and supervision. This program can potentially be improved by incorporating an optimization algorithm in the search for a solution. The Particle Swarm Optimization Algorithm (PSO) was chosen for implementation. PSO is part of a family of optimization algorithms called heuristic algorithms, which seek approximate best answers. This is ideal for rotational spectra, where an exact match will not be found without incorporating distortion constants, etc., which would otherwise greatly increase the size of the search space. PSO was tested for robustness against five standard fitness functions and then applied to a custom fitness function created for rotational spectra. This talk will explain the Particle Swarm Optimization algorithm and how it works, describe how Autofit was modified to use PSO, discuss the fitness function developed to work with spectroscopic data, and show our current results. Seifert, N.A., Finneran, I.A., Perez, C., Zaleski, D.P., Neill, J.L., Steber, A.L., Suenram, R.D., Lesarri, A., Shipman, S.T., Pate, B.H., J. Mol. Spec. 312, 13-21 (2015)
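
    For readers unfamiliar with the optimizer itself, a generic PSO loop is sketched below on the standard Rastrigin benchmark rather than a spectroscopic fitness function; the inertia and acceleration coefficients are common textbook values, not those used in the modified AUTOFIT.

      import numpy as np

      def rastrigin(x):
          return 10 * x.size + (x ** 2 - 10 * np.cos(2 * np.pi * x)).sum()

      def pso(f, dim=5, pop=30, iters=300, w=0.7, c1=1.5, c2=1.5, bound=5.12):
          rng = np.random.default_rng(0)
          X = rng.uniform(-bound, bound, (pop, dim))
          V = np.zeros((pop, dim))
          pbest = X.copy()                          # personal bests
          pbest_f = np.array([f(x) for x in X])
          g = pbest[pbest_f.argmin()].copy()        # global best
          for _ in range(iters):
              r1, r2 = rng.random((2, pop, dim))
              V = w * V + c1 * r1 * (pbest - X) + c2 * r2 * (g - X)
              X = np.clip(X + V, -bound, bound)
              fx = np.array([f(x) for x in X])
              better = fx < pbest_f
              pbest[better], pbest_f[better] = X[better], fx[better]
              g = pbest[pbest_f.argmin()].copy()
          return g, pbest_f.min()

      g, fval = pso(rastrigin)
      print("best value found:", round(float(fval), 4))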

  7. The Psychopharmacology Algorithm Project at the Harvard South Shore Program: An Algorithm for Generalized Anxiety Disorder.

    PubMed

    Abejuela, Harmony Raylen; Osser, David N

    2016-01-01

    This revision of previous algorithms for the pharmacotherapy of generalized anxiety disorder was developed by the Psychopharmacology Algorithm Project at the Harvard South Shore Program. Algorithms from 1999 and 2010 and associated references were reevaluated. Newer studies and reviews published from 2008-14 were obtained from PubMed and analyzed with a focus on their potential to justify changes in the recommendations. Exceptions to the main algorithm for special patient populations, such as women of childbearing potential, pregnant women, the elderly, and those with common medical and psychiatric comorbidities, were considered. Selective serotonin reuptake inhibitors (SSRIs) are still the basic first-line medication. Early alternatives include duloxetine, buspirone, hydroxyzine, pregabalin, or bupropion, in that order. If response is inadequate, then the second recommendation is to try a different SSRI. Additional alternatives now include benzodiazepines, venlafaxine, kava, and agomelatine. If the response to the second SSRI is unsatisfactory, then the recommendation is to try a serotonin-norepinephrine reuptake inhibitor (SNRI). Other alternatives to SSRIs and SNRIs for treatment-resistant or treatment-intolerant patients include tricyclic antidepressants, second-generation antipsychotics, and valproate. This revision of the GAD algorithm responds to issues raised by new treatments under development (such as pregabalin) and organizes the evidence systematically for practical clinical application.

  8. Characterisation of edge turbulence in relation to edge magnetic field configuration in L-mode plasmas in the Mega Amp Spherical Tokamak.

    NASA Astrophysics Data System (ADS)

    Dewhurst, J.; Hnat, B.; Dudson, B.; Dendy, R. O.; Counsell, G. F.; Kirk, A.

    2007-12-01

    Almost all astrophysical and magnetically confined fusion plasmas are turbulent. Here, we examine ion saturation current (Isat) measurements of edge plasma turbulence for three MAST L-mode plasmas that differ primarily in their edge magnetic field configurations. First, absolute moments of the coarse grained data are examined to obtain accurate values of scaling exponents. The dual scaling behaviour is identified in all samples, with the temporal scale τ ≈ 40-60 μs separating the two regimes. Strong universality is then identified in the functional form of the probability density function (PDF) for Isat fluctuations, which is well approximated by the Fréchet distribution on temporal scales τ ≤ 40μs. For temporal scales τ > 40μs, the PDFs appear to converge to the Gumbel distribution, which has been previously identified as a universal feature of many other complex phenomena. The optimal fitting parameters k=1.15 for Fréchet and a=1.35 for Gumbel provide a simple quantitative characterisation of the full spectrum of fluctuations. We conclude that, to good approximation, the properties of the edge turbulence are independent of the edge magnetic field configuration.

  9. Characterization of edge turbulence in relation to edge magnetic field configuration in Ohmic L-mode plasmas in the Mega Amp Spherical Tokamak

    NASA Astrophysics Data System (ADS)

    Hnat, B.; Dudson, B. D.; Dendy, R. O.; Counsell, G. F.; Kirk, A.; MAST Team

    2008-08-01

    Ion saturation current (Isat) measurements of edge plasma turbulence are analysed for six MAST L-mode plasmas that differ primarily in their edge magnetic field configurations. The analysis techniques are designed to capture the strong nonlinearities of the datasets. First, absolute moments of the data are examined to obtain accurate values of scaling exponents. This confirms dual scaling behaviour in all samples, with the temporal scale τ ≈ 40-60 µs separating the two regimes. Strong universality is then identified in the functional form of the probability density function (PDF) for Isat fluctuations, which is well approximated by the Fréchet distribution on temporal scales τ <= 40 µs. For temporal scales τ > 40 µs, the PDFs appear to converge to the Gumbel distribution, which has been previously identified as a universal feature of many other complex phenomena. The optimal fitting parameters k = 1.15 for Fréchet and a = 1.35 for Gumbel provide a simple quantitative characterization of the full spectrum of fluctuations. It is concluded that, to good approximation, the properties of the edge turbulence are independent of the edge magnetic field configuration.

  10. Radiant™ Liquid Radioisotope Intravascular Radiation Therapy System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eigler, N.; Whiting, J.; Chernomorsky, A.

    1998-01-16

    RADIANT™ is manufactured by United States Surgical Corporation, Vascular Therapies Division (formerly Progressive Angioplasty Systems). The system comprises a liquid β-radiation source, a shielded isolation/transfer device (ISAT), modified over-the-wire or rapid-exchange delivery balloons, and accessory kits. The liquid β-source is Rhenium-188 in the form of sodium perrhenate (NaReO₄). Rhenium-188 is primarily a β-emitter with a physical half-life of 17.0 hours. The maximum energy of the β-particles is 2.1 MeV. The source is produced daily in the nuclear pharmacy hot lab by eluting a Tungsten-188/Rhenium-188 generator manufactured by Oak Ridge National Laboratory (ORNL). Using anion exchange columns and Millipore filters, the effluent is concentrated to approximately 100 mCi/ml, calibrated, and loaded into the ISAT, which is subsequently transported to the cardiac catheterization laboratory. The delivery catheters are modified Champion™ over-the-wire and TNT™ rapid-exchange stent delivery balloons. These balloons have thickened polyethylene walls to augment puncture resistance, dual radio-opaque markers, and specially configured connectors.

  11. Algorithm-Based Fault Tolerance Integrated with Replication

    NASA Technical Reports Server (NTRS)

    Some, Raphael; Rennels, David

    2008-01-01

    In a proposed approach to programming and utilization of commercial off-the-shelf computing equipment, a combination of algorithm-based fault tolerance (ABFT) and replication would be utilized to obtain high degrees of fault tolerance without incurring excessive costs. The basic idea of the proposed approach is to integrate ABFT with replication such that the algorithmic portions of computations would be protected by ABFT, and the logical portions by replication. ABFT is an extremely efficient, inexpensive, high-coverage technique for detecting and mitigating faults in computer systems used for algorithmic computations, but does not protect against errors in logical operations surrounding algorithms.
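
    The classic ABFT construction for matrix multiplication illustrates the idea: append a column-checksum row to A and a row-checksum column to B, and the product then carries checksums that both detect and locate a corrupted element. The sketch below is the textbook scheme, not the proposed flight software.

      import numpy as np

      rng = np.random.default_rng(0)
      A = rng.normal(size=(4, 4))
      B = rng.normal(size=(4, 4))

      Ac = np.vstack([A, A.sum(axis=0)])                 # column-checksum matrix
      Br = np.hstack([B, B.sum(axis=1, keepdims=True)])  # row-checksum matrix
      C = Ac @ Br                                        # full checksum product

      C[1, 2] += 0.5                                     # inject a fault

      # The last row/column of C should equal the column/row sums of its
      # interior; a mismatch locates the corrupted row and column.
      col_check = np.abs(C[:-1, :-1].sum(axis=0) - C[-1, :-1]) > 1e-8
      row_check = np.abs(C[:-1, :-1].sum(axis=1) - C[:-1, -1]) > 1e-8
      i, j = int(np.where(row_check)[0][0]), int(np.where(col_check)[0][0])
      print("fault located at", (i, j))                  # -> (1, 2)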

  12. Ascent guidance algorithm using lidar wind measurements

    NASA Technical Reports Server (NTRS)

    Cramer, Evin J.; Bradt, Jerre E.; Hardtla, John W.

    1990-01-01

    The formulation of a general nonlinear programming guidance algorithm that incorporates wind measurements in the computation of ascent guidance steering commands is discussed. A nonlinear programming (NLP) algorithm that is designed to solve a very general problem has the potential to address the diversity demanded by future launch systems. Using B-splines for the command functional form allows the NLP algorithm to adjust the shape of the command profile to achieve optimal performance. The algorithm flexibility is demonstrated by simulation of ascent with dynamic loading constraints through a set of random wind profiles with and without wind sensing capability.

  13. A Fast Implementation of the ISOCLUS Algorithm

    NASA Technical Reports Server (NTRS)

    Memarsadeghi, Nargess; Mount, David M.; Netanyahu, Nathan S.; LeMoigne, Jacqueline

    2003-01-01

    Unsupervised clustering is a fundamental tool in numerous image processing and remote sensing applications. For example, unsupervised clustering is often used to obtain vegetation maps of an area of interest. This approach is useful when reliable training data are either scarce or expensive, and when relatively little a priori information about the data is available. Unsupervised clustering methods play a significant role in the pursuit of unsupervised classification. One of the most popular and widely used clustering schemes for remote sensing applications is the ISOCLUS algorithm, which is based on the ISODATA method. The algorithm is given a set of n data points (or samples) in d-dimensional space, an integer k indicating the initial number of clusters, and a number of additional parameters. The general goal is to compute a set of cluster centers in d-space. Although there is no specific optimization criterion, the algorithm is similar in spirit to the well known k-means clustering method, in which the objective is to minimize the average squared distance of each point to its nearest center, called the average distortion. One significant feature of ISOCLUS over k-means is that clusters may be merged or split, and so the final number of clusters may be different from the number k supplied as part of the input. This algorithm will be described later in this paper. The ISOCLUS algorithm can run very slowly, particularly on large data sets. Given its wide use in remote sensing, its efficient computation is an important goal. We have developed a fast implementation of the ISOCLUS algorithm. Our improvement is based on a recent acceleration to the k-means algorithm, the filtering algorithm, by Kanungo et al. They showed that, by storing the data in a kd-tree, it was possible to significantly reduce the running time of k-means. We have adapted this method for the ISOCLUS algorithm. For technical reasons, which are explained later, it is necessary to make a minor modification.

  14. System engineering approach to GPM retrieval algorithms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rose, C. R.; Chandrasekar, V.

    2004-01-01

    System engineering principles and methods are very useful in large-scale complex systems for developing the engineering requirements from end-user needs. Integrating research into system engineering is a challenging task. The proposed Global Precipitation Mission (GPM) satellite will use a dual-wavelength precipitation radar to measure and map global precipitation with unprecedented accuracy, resolution and areal coverage. The satellite vehicle, precipitation radars, retrieval algorithms, and ground validation (GV) functions are all critical subsystems of the overall GPM system and each contributes to the success of the mission. Errors in the radar measurements and models can adversely affect the retrieved output values. Ground validation (GV) systems are intended to provide timely feedback to the satellite and retrieval algorithms based on measured data. These GV sites will consist of radars and DSD measurement systems and also have intrinsic constraints. One of the retrieval algorithms being studied for use with GPM is the dual-wavelength DSD algorithm that does not use the surface reference technique (SRT). The underlying microphysics of precipitation structures and drop-size distributions (DSDs) dictate the types of models and retrieval algorithms that can be used to estimate precipitation. Many types of dual-wavelength algorithms have been studied. Meneghini (2002) analyzed the performance of single-pass dual-wavelength surface-reference-technique (SRT) based algorithms. Mardiana (2003) demonstrated that a dual-wavelength retrieval algorithm could be successfully used without the use of the SRT. It uses an iterative approach based on measured reflectivities at both wavelengths and complex microphysical models to estimate both N0 and D0 at each range bin. More recently, Liao (2004) proposed a solution to the D0 ambiguity problem in rain within the dual-wavelength algorithm and showed a possible melting layer model based on stratified spheres. With the

  15. A Parallel Rendering Algorithm for MIMD Architectures

    NASA Technical Reports Server (NTRS)

    Crockett, Thomas W.; Orloff, Tobias

    1991-01-01

    Applications such as animation and scientific visualization demand high performance rendering of complex three-dimensional scenes. To deliver the necessary rendering rates, highly parallel hardware architectures are required. The challenge is then to design algorithms and software which effectively use the hardware parallelism. A rendering algorithm targeted to distributed-memory MIMD architectures is described. For maximum performance, the algorithm exploits both object-level and pixel-level parallelism. The behavior of the algorithm is examined both analytically and experimentally. Its performance for large numbers of processors is found to be limited primarily by communication overheads. An experimental implementation for the Intel iPSC/860 shows increasing performance from 1 to 128 processors across a wide range of scene complexities. It is shown that minimal modifications to the algorithm will adapt it for use on shared memory architectures as well.

  16. Fast self contained exponential random deviate algorithm

    NASA Astrophysics Data System (ADS)

    Fernández, Julio F.

    1997-03-01

    An algorithm that generates random numbers with an exponential distribution and is about ten times faster than other well known algorithms has been reported before (J. F. Fernández and J. Rivero, Comput. Phys. 10, 83 (1996)). That algorithm requires input of uniform random deviates. We now report a new version of it that needs no input and is nearly as fast. The only limitation we predict thus far for the quality of the output is the amount of computer memory available. Performance results under various tests will be reported. The algorithm works in close analogy to the setup often used in statistical physics to obtain the Gibbs distribution. N numbers, stored in N registers, change with time according to the rules of the algorithm, keeping their sum constant. Further details will be given.
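
    For contrast with the self-contained method described above, the textbook inverse-transform generator (which does require uniform deviates as input) is only a few lines:

      # Standard inverse-transform exponential generator; this is the
      # textbook method, not the register-based algorithm of the record.
      import numpy as np

      def exponential_deviates(n, lam=1.0, seed=0):
          u = np.random.default_rng(seed).random(n)   # uniform input deviates
          return -np.log1p(-u) / lam                  # F^-1(u) = -ln(1-u)/lambda

      x = exponential_deviates(100000)
      print("sample mean (expect ~1.0):", round(float(x.mean()), 3))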

  17. Intermediary Variables and Algorithm Parameters for an Electronic Algorithm for Intravenous Insulin Infusion

    PubMed Central

    Braithwaite, Susan S.; Godara, Hemant; Song, Julie; Cairns, Bruce A.; Jones, Samuel W.; Umpierrez, Guillermo E.

    2009-01-01

    Background Algorithms for intravenous insulin infusion may assign the infusion rate (IR) by a two-step process. First, the previous insulin infusion rate (IRprevious) and the rate of change of blood glucose (BG) from the previous iteration of the algorithm are used to estimate the maintenance rate (MR) of insulin infusion. Second, the insulin IR for the next iteration (IRnext) is assigned to be commensurate with the MR and the distance of the current blood glucose (BGcurrent) from target. With use of a specific set of algorithm parameter values, a family of iso-MR curves is created, each giving IR as a function of MR and BG. Method To test the feasibility of estimating MR from the IRprevious and the previous rate of change of BG, historical hyperglycemic data points were used to compute the “maintenance rate cross step next estimate” (MRcsne). Historical cases had been treated with intravenous insulin infusion using a tabular protocol that estimated MR according to column-change rules. The mean IR on historical stable intervals (MRtrue), an estimate of the biologic value of MR, was compared to MRcsne during the hyperglycemic iteration immediately preceding the stable interval. Hypothetically calculated MRcsne-dependent IRnext was compared to IRnext assigned historically. An expanded theory of an algorithm is developed mathematically. Practical recommendations for computerization are proposed. Results The MRtrue determined on each of 30 stable intervals and the MRcsne during the immediately preceding hyperglycemic iteration differed, having medians with interquartile ranges 2.7 (1.2–3.7) and 3.2 (1.5–4.6) units/h, respectively. However, these estimates of MR were strongly correlated (R2 = 0.88). During hyperglycemia at 941 time points the IRnext assigned historically and the hypothetically calculated MRcsne-dependent IRnext differed, having medians with interquartile ranges 4.0 (3.0–6.0) and 4.6 (3.0–6.8) units/h, respectively, but these paired values

  18. The Impact of "Possible Patients" on Phenotyping Algorithms: Electronic Phenotype Algorithms Can Only Be Reproduced by Sharing Detailed Annotation Criteria.

    PubMed

    Kagawa, Rina; Kawazoe, Yoshimasa; Shinohara, Emiko; Imai, Takeshi; Ohe, Kazuhiko

    2017-01-01

    Phenotyping is an automated technique for identifying patients diagnosed with a particular disease based on electronic health records (EHRs). To evaluate phenotyping algorithms, which should be reproducible, the annotation of EHRs as a gold standard is critical. However, we have found that the different types of EHRs cannot be definitively annotated into CASEs or CONTROLs. The influence of such "possible patients" on phenotyping algorithms is unknown. To assess these issues, for four chronic diseases, we annotated EHRs by using information not directly referring to the diseases and developed two types of phenotyping algorithms for each disease. We confirmed that each disease included different types of possible patients. The performance of phenotyping algorithms differed depending on whether possible patients were considered as CASEs, and this was independent of the type of algorithms. Our results indicate that researchers must share annotation criteria for classifying the possible patients to reproduce phenotyping algorithms.

  19. A hybrid monkey search algorithm for clustering analysis.

    PubMed

    Chen, Xin; Zhou, Yongquan; Luo, Qifang

    2014-01-01

    Clustering is a popular data analysis and data mining technique, and the k-means algorithm is one of the most commonly used methods. However, k-means depends strongly on the initial solution and easily falls into a local optimum. In view of these disadvantages, this paper proposes a hybrid monkey algorithm based on the search operator of the artificial bee colony algorithm for clustering analysis; experiments on synthetic and real-life datasets show that it performs better than the basic monkey algorithm for clustering analysis.

  20. Flocking algorithm for autonomous flying robots.

    PubMed

    Virágh, Csaba; Vásárhelyi, Gábor; Tarcai, Norbert; Szörényi, Tamás; Somorjai, Gergő; Nepusz, Tamás; Vicsek, Tamás

    2014-06-01

    Animal swarms displaying a variety of typical flocking patterns would not exist without the underlying safe, optimal and stable dynamics of the individuals. The emergence of these universal patterns can be efficiently reconstructed with agent-based models. If we want to reproduce these patterns with artificial systems, such as autonomous aerial robots, agent-based models can also be used in their control algorithms. However, finding the proper algorithms and thus understanding the essential characteristics of the emergent collective behaviour requires thorough and realistic modeling of the robot and also the environment. In this paper, we first present an abstract mathematical model of an autonomous flying robot. The model takes into account several realistic features, such as time delay and locality of communication, inaccuracy of the on-board sensors and inertial effects. We present two decentralized control algorithms. One is based on a simple self-propelled flocking model of animal collective motion, the other is a collective target tracking algorithm. Both algorithms contain a viscous friction-like term, which aligns the velocities of neighbouring agents parallel to each other. We show that this term can be essential for reducing the inherent instabilities of such a noisy and delayed realistic system. We discuss simulation results on the stability of the control algorithms, and perform real experiments to show the applicability of the algorithms on a group of autonomous quadcopters. In our case, bio-inspiration works in two ways. On the one hand, the whole idea of trying to build and control a swarm of robots comes from the observation that birds tend to flock to optimize their behaviour as a group. On the other hand, by using a realistic simulation framework and studying the group behaviour of autonomous robots we can learn about the major factors influencing the flight of bird flocks.
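
    The viscous friction-like alignment term is simple to write down: each agent accelerates toward the mean velocity of its neighbours within a communication range. The 2D sketch below uses illustrative gains and ranges, and omits the time delay, sensor noise, and target-tracking terms of the full model.

      import numpy as np

      rng = np.random.default_rng(0)
      N, dt, steps = 20, 0.05, 400
      pos = rng.uniform(-5, 5, (N, 2))
      vel = rng.normal(0, 1, (N, 2))
      v0, r_align, c_fric = 1.0, 3.0, 0.5      # assumed cruise speed and gains

      for _ in range(steps):
          acc = np.zeros_like(vel)
          for i in range(N):
              d = np.linalg.norm(pos - pos[i], axis=1)
              nbr = (d < r_align) & (d > 0)    # local communication only
              if nbr.any():
                  # Viscous friction-like alignment: relax towards the mean
                  # velocity of the neighbours.
                  acc[i] = c_fric * (vel[nbr].mean(axis=0) - vel[i])
          vel += dt * acc
          # Self-propulsion keeps a preferred cruise speed v0.
          speed = np.linalg.norm(vel, axis=1, keepdims=True)
          vel = vel / np.maximum(speed, 1e-9) * (speed + dt * (v0 - speed))
          pos += dt * vel

      spread = np.linalg.norm(vel - vel.mean(axis=0), axis=1).mean()
      print("velocity dispersion after alignment:", round(float(spread), 3))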

  1. Survey of PRT Vehicle Management Algorithms

    DOT National Transportation Integrated Search

    1974-01-01

    The document summarizes the results of a literature survey of state of the art vehicle management algorithms applicable to Personal Rapid Transit Systems(PRT). The surveyed vehicle management algorithms are organized into a set of five major componen...

  2. Reactive Collision Avoidance Algorithm

    NASA Technical Reports Server (NTRS)

    Scharf, Daniel; Acikmese, Behcet; Ploen, Scott; Hadaegh, Fred

    2010-01-01

    The reactive collision avoidance (RCA) algorithm allows a spacecraft to find a fuel-optimal trajectory for avoiding an arbitrary number of colliding spacecraft in real time while accounting for acceleration limits. In addition to spacecraft, the technology can be used for vehicles that can accelerate in any direction, such as helicopters and submersibles. In contrast to existing, passive algorithms that simultaneously design trajectories for a cluster of vehicles working to achieve a common goal, RCA is implemented onboard spacecraft only when an imminent collision is detected, and then plans a collision avoidance maneuver for only that host vehicle, thus preventing a collision in an off-nominal situation that passive algorithms cannot handle. An example scenario for such a situation might be when a spacecraft in the cluster is approaching another one, but enters safe mode and begins to drift. Functionally, the RCA detects colliding spacecraft, plans an evasion trajectory by solving the Evasion Trajectory Problem (ETP), and then recovers after the collision is avoided. A direct optimization approach was used to develop the algorithm so it can run in real time. In this innovation, a parameterized class of avoidance trajectories is specified, and then the optimal trajectory is found by searching over the parameters. The class of trajectories is selected as bang-off-bang as motivated by optimal control theory. That is, an avoiding spacecraft first applies full acceleration in a constant direction, then coasts, and finally applies full acceleration to stop. The parameter optimization problem can be solved offline and stored as a look-up table of values. Using a look-up table allows the algorithm to run in real time. Given a colliding spacecraft, the properties of the collision geometry serve as indices of the look-up table that gives the optimal trajectory. For multiple colliding spacecraft, the set of trajectories that avoid all spacecraft is rapidly searched on

  3. Adaptive algorithm of magnetic heading detection

    NASA Astrophysics Data System (ADS)

    Liu, Gong-Xu; Shi, Ling-Feng

    2017-11-01

    Magnetic data obtained from a magnetic sensor usually fluctuate in a certain range, which makes it difficult to estimate the magnetic heading accurately. In fact, magnetic heading information is usually submerged in noise because of all kinds of electromagnetic interference and the diversity of the pedestrian’s motion states. In order to solve this problem, a new adaptive algorithm based on the (typically) right-angled corridors of a building or residential buildings is put forward to process heading information. First, a 3D indoor localization platform is set up based on MPU9250. Then, several groups of data are measured by changing the experimental environment and pedestrian’s motion pace. The raw data from the attached inertial measurement unit are calibrated and arranged into a time-stamped array and written to a data file. Later, the data file is imported into MATLAB for processing and analysis using the proposed adaptive algorithm. Finally, the algorithm is verified by comparison with the existing algorithm. The experimental results show that the algorithm has strong robustness and good fault tolerance, which can detect the heading information accurately and in real-time.

  4. Comparison of subpixel image registration algorithms

    NASA Astrophysics Data System (ADS)

    Boye, R. R.; Nelson, C. L.

    2009-02-01

    Research into the use of multiframe superresolution has led to the development of algorithms for providing images with enhanced resolution using several lower resolution copies. An integral component of these algorithms is the determination of the registration of each of the low resolution images to a reference image. Without this information, no resolution enhancement can be attained. We have endeavored to find a suitable method for registering severely undersampled images by comparing several approaches. To test the algorithms, an ideal image is input to a simulated image formation program, creating several undersampled images with known geometric transformations. The registration algorithms are then applied to the set of low resolution images and the estimated registration parameters compared to the actual values. This investigation is limited to monochromatic images (extension to color images is not difficult) and only considers global geometric transformations. Each registration approach will be reviewed and evaluated with respect to the accuracy of the estimated registration parameters as well as the computational complexity required. In addition, the effects of image content, specifically spatial frequency content, as well as the immunity of the registration algorithms to noise will be discussed.
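
    One widely used approach of the kind compared in such studies is FFT phase correlation with a parabolic fit around the correlation peak. The sketch below assumes a global pure translation, matching the scope described above, and uses an integer shift for the demonstration.

      import numpy as np

      def phase_correlate(ref, img):
          # Normalised cross-power spectrum keeps only the phase difference.
          F = np.conj(np.fft.fft2(ref)) * np.fft.fft2(img)
          F /= np.abs(F) + 1e-12
          corr = np.fft.ifft2(F).real
          peak = np.unravel_index(corr.argmax(), corr.shape)
          shift = []
          for axis, p in enumerate(peak):
              # Parabolic interpolation through the peak and its two
              # neighbours along each axis gives a subpixel offset.
              step = np.eye(2, dtype=int)[axis]
              c0 = corr[peak]
              cm = corr[tuple(np.subtract(peak, step) % corr.shape)]
              cp = corr[tuple(np.add(peak, step) % corr.shape)]
              denom = cm - 2 * c0 + cp
              frac = 0.5 * (cm - cp) / denom if denom != 0 else 0.0
              s = p + frac
              if s > corr.shape[axis] / 2:    # unwrap negative shifts
                  s -= corr.shape[axis]
              shift.append(s)
          return shift

      rng = np.random.default_rng(0)
      ref = rng.normal(size=(64, 64))
      img = np.roll(ref, (3, -5), axis=(0, 1))   # integer shift for the demo
      print(phase_correlate(ref, img))           # ~[3.0, -5.0]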

  5. An algorithmic approach to crustal deformation analysis

    NASA Technical Reports Server (NTRS)

    Iz, Huseyin Baki

    1987-01-01

    In recent years the analysis of crustal deformation measurements has become important as a result of current improvements in geodetic methods and an increasing amount of theoretical and observational data provided by several earth sciences. A first-generation data analysis algorithm which combines a priori information with current geodetic measurements was proposed. Relevant methods which can be used in the algorithm were discussed. Prior information is the unifying feature of this algorithm. Some of the problems which may arise through the use of a priori information in the analysis were indicated and preventive measures were demonstrated. The first step in the algorithm is the optimal design of deformation networks. The second step in the algorithm identifies the descriptive model of the deformation field. The final step in the algorithm is the improved estimation of deformation parameters. Although deformation parameters are estimated in the process of model discrimination, they can further be improved by the use of a priori information about them. According to the proposed algorithm this information must first be tested against the estimates calculated using the sample data only. Null-hypothesis testing procedures were developed for this purpose. Six different estimators which employ a priori information were examined. Emphasis was put on the case when the prior information is wrong and analytical expressions for possible improvements under incompatible prior information were derived.

  6. Robust algorithm for aligning two-dimensional chromatograms.

    PubMed

    Gros, Jonas; Nabi, Deedar; Dimitriou-Christidis, Petros; Rutler, Rebecca; Arey, J Samuel

    2012-11-06

    Comprehensive two-dimensional gas chromatography (GC × GC) chromatograms typically exhibit run-to-run retention time variability. Chromatogram alignment is often a desirable step prior to further analysis of the data, for example, in studies of environmental forensics or weathering of complex mixtures. We present a new algorithm for aligning whole GC × GC chromatograms. This technique is based on alignment points that have locations indicated by the user both in a target chromatogram and in a reference chromatogram. We applied the algorithm to two sets of samples. First, we aligned the chromatograms of twelve compositionally distinct oil spill samples, all analyzed using the same instrument parameters. Second, we applied the algorithm to two compositionally distinct wastewater extracts analyzed using two different instrument temperature programs, thus involving larger retention time shifts than the first sample set. For both sample sets, the new algorithm performed favorably compared to two other available alignment algorithms: that of Pierce, K. M.; Wood, Lianna F.; Wright, B. W.; Synovec, R. E. Anal. Chem. 2005, 77, 7735-7743, and 2-D COW from Zhang, D.; Huang, X.; Regnier, F. E.; Zhang, M. Anal. Chem. 2008, 80, 2664-2671. The new algorithm achieves the best matches of retention times for test analytes, avoids some artifacts which result from the other alignment algorithms, and incurs the least modification of quantitative signal information.

  7. Algorithm Diversity for Resilient Systems

    DTIC Science & Technology

    2016-06-27

    Subject terms: computer security, software diversity, program transformation. ...systematic method for transforming Datalog rules with general universal and existential quantification into efficient algorithms with precise complexity ... worst case in the size of the ground rules. There are numerous choices during the transformation that lead to diverse algorithms and different

  8. Firefly Algorithm for Structural Search.

    PubMed

    Avendaño-Franco, Guillermo; Romero, Aldo H

    2016-07-12

    The problem of computational structure prediction of materials is approached using the firefly (FF) algorithm. Starting from the chemical composition and optionally using prior knowledge of similar structures, the FF method is able to predict not only known stable structures but also a variety of novel competitive metastable structures. This article focuses on the strengths and limitations of the algorithm as a multimodal global searcher. The algorithm has been implemented in software package PyChemia ( https://github.com/MaterialsDiscovery/PyChemia ), an open source python library for materials analysis. We present applications of the method to van der Waals clusters and crystal structures. The FF method is shown to be competitive when compared to other population-based global searchers.

  9. Comprehensive eye evaluation algorithm

    NASA Astrophysics Data System (ADS)

    Agurto, C.; Nemeth, S.; Zamora, G.; Vahtel, M.; Soliz, P.; Barriga, S.

    2016-03-01

    In recent years, several research groups have developed automatic algorithms to detect diabetic retinopathy (DR) in individuals with diabetes (DM), using digital retinal images. Studies have indicated that diabetics have 1.5 times the annual risk of developing primary open angle glaucoma (POAG) as do people without DM. Moreover, DM patients have 1.8 times the risk for age-related macular degeneration (AMD). Although numerous investigators are developing automatic DR detection algorithms, there have been few successful efforts to create an automatic algorithm that can detect other ocular diseases, such as POAG and AMD. Consequently, our aim in the current study was to develop a comprehensive eye evaluation algorithm that not only detects DR in retinal images, but also automatically identifies glaucoma suspects and AMD by integrating other personal medical information with the retinal features. The proposed system is fully automatic and provides the likelihood of each of the three eye diseases. The system was evaluated in two datasets of 104 and 88 diabetic cases. For each eye, we used two non-mydriatic digital color fundus photographs (macula and optic disc centered) and, when available, information about age, duration of diabetes, cataracts, hypertension, gender, and laboratory data. Our results show that the combination of multimodal features can increase the AUC by up to 5%, 7%, and 8% in the detection of AMD, DR, and glaucoma, respectively. Marked improvement was achieved when laboratory results were combined with retinal image features.

  10. Non-parametric diffeomorphic image registration with the demons algorithm.

    PubMed

    Vercauteren, Tom; Pennec, Xavier; Perchant, Aymeric; Ayache, Nicholas

    2007-01-01

    We propose a non-parametric diffeomorphic image registration algorithm based on Thirion's demons algorithm. The demons algorithm can be seen as an optimization procedure on the entire space of displacement fields. The main idea of our algorithm is to adapt this procedure to a space of diffeomorphic transformations. In contrast to many diffeomorphic registration algorithms, our solution is computationally efficient since in practice it only replaces an addition of free form deformations by a few compositions. Our experiments show that in addition to being diffeomorphic, our algorithm provides results that are similar to the ones from the demons algorithm but with transformations that are much smoother and closer to the true ones in terms of Jacobians.

  11. Fireworks algorithm for mean-VaR/CVaR models

    NASA Astrophysics Data System (ADS)

    Zhang, Tingting; Liu, Zhifeng

    2017-10-01

    Intelligent algorithms have been widely applied to portfolio optimization problems. In this paper, we introduce a novel intelligent algorithm, the fireworks algorithm, to solve the mean-VaR/CVaR model for the first time. The results show that, compared with the classical genetic algorithm, the fireworks algorithm not only improves the optimization accuracy and speed, but also makes the optimal solution more stable. We repeat our experiments at different confidence levels and different degrees of risk aversion, and the results are robust. This suggests that the fireworks algorithm has more advantages than the genetic algorithm in solving the portfolio optimization problem, and that it is feasible and promising to apply it in this field.

  12. Quantum algorithms for quantum field theories.

    PubMed

    Jordan, Stephen P; Lee, Keith S M; Preskill, John

    2012-06-01

    Quantum field theory reconciles quantum mechanics and special relativity, and plays a central role in many areas of physics. We developed a quantum algorithm to compute relativistic scattering probabilities in a massive quantum field theory with quartic self-interactions (φ⁴ theory) in spacetime of four and fewer dimensions. Its run time is polynomial in the number of particles, their energy, and the desired precision, and the algorithm applies at both weak and strong coupling. In the strong-coupling and high-precision regimes, our quantum algorithm achieves exponential speedup over the fastest known classical algorithm.

  13. Color transfer algorithm in medical images

    NASA Astrophysics Data System (ADS)

    Wang, Weihong; Xu, Yangfa

    2007-12-01

    In the digital virtual human project, image data are acquired from frozen slices of human body specimens. The color and brightness among a group of images of a certain organ can differ considerably, and this can cause great difficulty in edge extraction, segmentation, and 3D reconstruction, so it is necessary to unify the color of the images. The color transfer algorithm is well suited to this kind of problem. This paper introduces the principle of the algorithm and applies it to medical image processing.
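
    The statistics-matching core of the classic color transfer algorithm fits in a few lines: shift and scale each channel of the source image so its mean and standard deviation match those of a reference image. Working directly in RGB (rather than the decorrelated lαβ space of the original Reinhard method) keeps the sketch short; the stand-in random images are assumptions.

      import numpy as np

      def color_transfer(source, reference):
          src = source.astype(np.float64)
          ref = reference.astype(np.float64)
          out = np.empty_like(src)
          for c in range(3):                  # per-channel statistics match
              s_mu, s_sd = src[..., c].mean(), src[..., c].std() + 1e-9
              r_mu, r_sd = ref[..., c].mean(), ref[..., c].std()
              out[..., c] = (src[..., c] - s_mu) * (r_sd / s_sd) + r_mu
          return np.clip(out, 0, 255).astype(np.uint8)

      rng = np.random.default_rng(0)
      a = rng.integers(0, 256, (32, 32, 3), dtype=np.uint8)   # stand-in slices
      b = rng.integers(40, 200, (32, 32, 3), dtype=np.uint8)
      print(color_transfer(a, b).shape)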

  14. Complexity of the Quantum Adiabatic Algorithm

    NASA Technical Reports Server (NTRS)

    Hen, Itay

    2013-01-01

    The Quantum Adiabatic Algorithm (QAA) has been proposed as a mechanism for efficiently solving optimization problems on a quantum computer. Since adiabatic computation is analog in nature and does not require the design and use of quantum gates, it can be thought of as a simpler and perhaps more profound method for performing quantum computations that might also be easier to implement experimentally. While these features have generated substantial research in QAA, to date there is still a lack of solid evidence that the algorithm can outperform classical optimization algorithms.

  15. Fast parallel algorithm for slicing STL based on pipeline

    NASA Astrophysics Data System (ADS)

    Ma, Xulong; Lin, Feng; Yao, Bo

    2016-05-01

    In the Additive Manufacturing field, current research on data processing mainly focuses on the slicing of large STL files or complicated CAD models. To improve efficiency and reduce slicing time, a parallel algorithm has great advantages; however, traditional algorithms cannot make full use of multi-core CPU hardware resources. In this paper, a fast parallel algorithm is presented to speed up data processing. A pipeline mode is adopted to design the parallel algorithm, and the complexity of the pipeline algorithm is analyzed theoretically. To evaluate the performance of the new algorithm, the effects of thread count and layer count are investigated in a series of experiments. The experimental results show that thread count and layer count are two significant factors in the speedup ratio. The trend of speedup versus thread count reveals a positive relationship that agrees well with Amdahl's law, and the trend of speedup versus layer count also shows a positive relationship, agreeing with Gustafson's law. The new algorithm uses topological information to compute contours in parallel. Another parallel algorithm, based on data parallelism, is used in the experiments to show that the pipeline parallel mode is more efficient. A final case study demonstrates the strong performance of the new parallel algorithm: compared with the serial slicing algorithm, it makes full use of multi-core CPU hardware and accelerates the slicing process, and compared with the data-parallel slicing algorithm, it achieves a much higher speedup ratio and efficiency.
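
    A minimal sketch of the pipeline-parallel pattern the paper adopts, with each stage running in its own thread and items (e.g. layers) streaming through queues; the stage functions named in the usage comment are placeholders, not the paper's slicing code:

        import threading, queue

        def pipeline(stages, items):
            # Chain worker threads with queues so different items occupy different
            # stages concurrently; None is the end-of-stream marker.
            qs = [queue.Queue() for _ in range(len(stages) + 1)]
            def worker(f, qin, qout):
                while (item := qin.get()) is not None:
                    qout.put(f(item))
                qout.put(None)
            threads = [threading.Thread(target=worker, args=(f, qs[i], qs[i + 1]))
                       for i, f in enumerate(stages)]
            for t in threads:
                t.start()
            for it in items:
                qs[0].put(it)
            qs[0].put(None)
            out = []
            while (r := qs[-1].get()) is not None:
                out.append(r)
            for t in threads:
                t.join()
            return out

        # e.g. pipeline([load_layer, intersect_triangles, link_contours], layers)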

  16. Parallel Algorithms and Patterns

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Robey, Robert W.

    2016-06-16

    This is a PowerPoint presentation on parallel algorithms and patterns. A parallel algorithm is a well-defined, step-by-step computational procedure that emphasizes concurrency to solve a problem; examples of such problems include sorting, searching, optimization, and matrix operations. A parallel pattern is a computational step in a sequence of independent, potentially concurrent operations that occurs in diverse scenarios with some frequency; examples are reductions, prefix scans, and ghost cell updates. We only touch on parallel patterns in this presentation; the topic really deserves its own detailed discussion, which Gabe Rockefeller would like to develop.
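
    As an example of one of the patterns named above, a Hillis-Steele inclusive prefix scan expressed with vectorized shifted adds (a sketch of the pattern, not code from the presentation):

        import numpy as np

        def prefix_scan(a):
            # Inclusive scan in O(log n) data-parallel steps: at step s, every
            # element adds the element s positions to its left.
            a = np.array(a, dtype=float)
            shift = 1
            while shift < len(a):
                a[shift:] += a[:-shift].copy()   # copy avoids aliasing the update
                shift *= 2
            return a

        print(prefix_scan([1, 2, 3, 4]))   # [ 1.  3.  6. 10.]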

  17. Parallel Algorithms for Groebner-Basis Reduction

    DTIC Science & Technology

    1987-09-25

    Parallel Algorithms for Groebner-Basis Reduction. Technical report, Productivity Engineering in the UNIX Environment project. [Only these fragments are recoverable from the DTIC report-documentation form.]

  18. A comparative study of AGN feedback algorithms

    NASA Astrophysics Data System (ADS)

    Wurster, J.; Thacker, R. J.

    2013-05-01

    Modelling active galactic nuclei (AGN) feedback in numerical simulations is both technically and theoretically challenging, with numerous approaches having been published in the literature. We present a study of five distinct approaches to modelling AGN feedback within gravitohydrodynamic simulations of major mergers of Milky Way-sized galaxies. To constrain differences to only be between AGN feedback models, all simulations start from the same initial conditions and use the same star formation algorithm. Most AGN feedback algorithms have five key aspects: the black hole accretion rate, energy feedback rate and method, particle accretion algorithm, black hole advection algorithm and black hole merger algorithm. All models follow different accretion histories, and in some cases, accretion rates differ by up to three orders of magnitude at any given time. We consider models with either thermal or kinetic feedback, with the associated energy deposited locally around the black hole. Each feedback algorithm modifies the region around the black hole to different extents, yielding gas densities and temperatures within r ˜ 200 pc that differ by up to six orders of magnitude at any given time. The particle accretion algorithms usually maintain good agreement between the total mass accreted by ∫Ṁ dt and the total mass of gas particles removed from the simulation, although not all algorithms guarantee this to be true. The black hole advection algorithms dampen inappropriate dragging of the black holes by two-body interactions. Advecting the black hole a limited distance based upon local mass distributions has many desirable properties, such as avoiding large artificial jumps and allowing the possibility of the black hole remaining in a gas void. Lastly, two black holes instantly merge when given criteria are met, and we find a range of merger times for different criteria. This is important since the AGN feedback rate changes across the merger in a way that is dependent

  19. Aerocapture Guidance Algorithm Comparison Campaign

    NASA Technical Reports Server (NTRS)

    Rousseau, Stephane; Perot, Etienne; Graves, Claude; Masciarelli, James P.; Queen, Eric

    2002-01-01

    Aerocapture is a promising technique for future human interplanetary missions. The Mars Sample Return was initially based on insertion by aerocapture, and a CNES orbiter, Mars Premier, was developed to demonstrate this concept. Mainly due to budget constraints, the aerocapture was cancelled for the French orbiter. Many studies were carried out over the last three years to develop and test different guidance algorithms (APC, EC, TPC, NPC). This work was shared between CNES and NASA, with a fruitful joint working group. To conclude this study, an evaluation campaign was performed to test the different algorithms. The objective was to assess the robustness, accuracy, capability to limit the load, and complexity of each algorithm. A simulation campaign was specified and performed by CNES, with a similar activity on the NASA side to confirm the CNES results. This evaluation demonstrated that the numerical guidance principle is not competitive compared to the analytical concepts. All the other algorithms are well suited to guarantee the success of the aerocapture. The TPC appears to be the most robust, the APC the most accurate, and the EC a good compromise.

  20. Algorithms in Discrepancy Theory and Lattices

    NASA Astrophysics Data System (ADS)

    Ramadas, Harishchandra

    This thesis deals with algorithmic problems in discrepancy theory and lattices, and is based on two projects I worked on while at the University of Washington in Seattle. A brief overview is provided in Chapter 1 (Introduction). Chapter 2 covers joint work with Avi Levy and Thomas Rothvoss in the field of discrepancy minimization. A well-known theorem of Spencer shows that any set system with n sets over n elements admits a coloring of discrepancy O(√n). While the original proof was non-constructive, recent progress brought polynomial time algorithms by Bansal, Lovett and Meka, and Rothvoss. All those algorithms are randomized, even though Bansal's algorithm admitted a complicated derandomization. We propose an elegant deterministic polynomial time algorithm that is inspired by Lovett-Meka as well as the Multiplicative Weight Update method. The algorithm iteratively updates a fractional coloring while controlling the exponential weights that are assigned to the set constraints. A conjecture by Meka suggests that Spencer's bound can be generalized to symmetric matrices. We prove that n x n matrices that are block diagonal with block size q admit a coloring of discrepancy O(√n · √(log q)). Bansal, Dadush and Garg recently gave a randomized algorithm to find a vector x with entries in {-1,1} with ∥Ax∥∞ ≤ O(√(log n)) in polynomial time, where A is any matrix whose columns have length at most 1. We show that our method can be used to deterministically obtain such a vector. In Chapter 3, we discuss a result in the broad area of lattices and integer optimization, in joint work with Rebecca Hoberg, Thomas Rothvoss and Xin Yang. The number balancing (NBP) problem is the following: given real numbers a1, ..., an in [0,1], find two disjoint subsets I1, I2 of [n] so that the difference |Σ_{i∈I1} a_i - Σ_{i∈I2} a_i| of their sums is minimized. An application of the pigeonhole principle shows that there is always a solution where the difference is at most O(√n/2^n)
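
    As a concrete (and exponential-time) illustration of the number balancing objective defined above, a brute-force search over the 3^n assignments of each number to I1, I2, or neither; this is only a sketch of the problem, not the thesis's algorithm:

        import itertools

        def number_balance(a):
            # signs: +1 puts a[i] in I1, -1 puts a[i] in I2, 0 leaves it out
            best = (float("inf"), None)
            for signs in itertools.product((-1, 0, 1), repeat=len(a)):
                if not any(signs):
                    continue                      # both subsets empty
                d = abs(sum(s * x for s, x in zip(signs, a)))
                best = min(best, (d, signs))
            return best                           # (difference, assignment)

        print(number_balance([0.31, 0.46, 0.79, 0.16]))   # 0.31 + 0.16 - 0.46 ≈ 0.01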

  1. Scalable Nearest Neighbor Algorithms for High Dimensional Data.

    PubMed

    Muja, Marius; Lowe, David G

    2014-11-01

    For many computer vision and machine learning problems, large training sets are key for good performance. However, the most computationally expensive part of many computer vision and machine learning algorithms consists of finding nearest neighbor matches to high dimensional vectors that represent the training data. We propose new algorithms for approximate nearest neighbor matching and evaluate and compare them with previous algorithms. For matching high dimensional features, we find two algorithms to be the most efficient: the randomized k-d forest and a new algorithm proposed in this paper, the priority search k-means tree. We also propose a new algorithm for matching binary features by searching multiple hierarchical clustering trees and show it outperforms methods typically used in the literature. We show that the optimal nearest neighbor algorithm and its parameters depend on the data set characteristics and describe an automated configuration procedure for finding the best algorithm to search a particular data set. In order to scale to very large data sets that would otherwise not fit in the memory of a single machine, we propose a distributed nearest neighbor matching framework that can be used with any of the algorithms described in the paper. All this research has been released as an open source library called fast library for approximate nearest neighbors (FLANN), which has been incorporated into OpenCV and is now one of the most popular libraries for nearest neighbor matching.
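
    Since the abstract notes that FLANN has been incorporated into OpenCV, a common way to use the randomized k-d forest from Python looks like the following (the index and search parameters shown are typical illustrative values, and the descriptor arrays are synthetic stand-ins):

        import cv2
        import numpy as np

        FLANN_INDEX_KDTREE = 1                          # randomized k-d forest
        index_params = dict(algorithm=FLANN_INDEX_KDTREE, trees=5)
        search_params = dict(checks=50)                 # leaves to examine per query
        matcher = cv2.FlannBasedMatcher(index_params, search_params)

        train = np.random.rand(1000, 128).astype(np.float32)   # e.g. SIFT-like vectors
        query = np.random.rand(10, 128).astype(np.float32)
        matches = matcher.knnMatch(query, train, k=2)          # approximate 2-NN per query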

  2. Exact and heuristic algorithms for Space Information Flow.

    PubMed

    Uwitonze, Alfred; Huang, Jiaqing; Ye, Yuanqing; Cheng, Wenqing; Li, Zongpeng

    2018-01-01

    Space Information Flow (SIF) is a new promising research area that studies network coding in geometric space, such as Euclidean space. The design of algorithms that compute the optimal SIF solutions remains one of the key open problems in SIF. This work proposes the first exact SIF algorithm and a heuristic SIF algorithm that compute min-cost multicast network coding for N (N ≥ 3) given terminal nodes in 2-D Euclidean space. Furthermore, we find that the Butterfly network in Euclidean space is the second example besides the Pentagram network where SIF is strictly better than Euclidean Steiner minimal tree. The exact algorithm design is based on two key techniques: Delaunay triangulation and linear programming. Delaunay triangulation technique helps to find practically good candidate relay nodes, after which a min-cost multicast linear programming model is solved over the terminal nodes and the candidate relay nodes, to compute the optimal multicast network topology, including the optimal relay nodes selected by linear programming from all the candidate relay nodes and the flow rates on the connection links. The heuristic algorithm design is also based on Delaunay triangulation and linear programming techniques. The exact algorithm can achieve the optimal SIF solution with an exponential computational complexity, while the heuristic algorithm can achieve the sub-optimal SIF solution with a polynomial computational complexity. We prove the correctness of the exact SIF algorithm. The simulation results show the effectiveness of the heuristic SIF algorithm.
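
    A minimal sketch of the candidate-relay-node step described above, using scipy's Delaunay triangulation over the terminals (taking triangle centroids as candidates is an illustrative choice; the paper's exact construction and its linear programming stage are not reproduced here):

        import numpy as np
        from scipy.spatial import Delaunay

        terminals = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 0.9], [1.2, 0.8]])
        tri = Delaunay(terminals)
        # one candidate relay node per triangle, e.g. its centroid; the LP then
        # selects the useful relays and assigns flow rates on the links
        candidates = terminals[tri.simplices].mean(axis=1)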

  3. New Parallel Algorithms for Landscape Evolution Model

    NASA Astrophysics Data System (ADS)

    Jin, Y.; Zhang, H.; Shi, Y.

    2017-12-01

    Most landscape evolution models (LEM) developed in the last two decades solve the diffusion equation to simulate the transportation of surface sediments. This numerical approach is difficult to parallelize due to the computation of the drainage area for each node, which requires a huge amount of communication when run in parallel. To overcome this difficulty, we developed two parallel algorithms for LEM with a stream net. One algorithm partitions the grid with traditional methods and applies an efficient global reduction algorithm to compute drainage areas and transport rates for the stream net; the other algorithm is based on a new partition algorithm, which first partitions the nodes in catchments between processes and then partitions the cells according to the partition of nodes. Both methods focus on decreasing communication between processes and take advantage of massively parallel computing techniques, and numerical experiments show that both are adequate to handle large-scale problems with millions of cells. We implemented the two algorithms in our program based on the widely used finite element library deal.II, so that it can be easily coupled with ASPECT.
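
    The serial kernel that makes parallelization hard, accumulating drainage area down the stream net, can be sketched as follows (a toy version assuming a receiver array and an upstream-to-downstream ordering are given; the paper's parallel reductions replace this sequential pass):

        def drainage_area(receiver, order, cell_area):
            # receiver[i]: downstream neighbor of node i (receiver[i] == i at an outlet)
            # order: nodes listed upstream-first (a topological order of the stream net)
            area = list(cell_area)
            for i in order:
                if receiver[i] != i:
                    area[receiver[i]] += area[i]   # pass accumulated area downstream
            return area

        # nodes 0 -> 1 -> 2 (outlet), 3 -> 1
        print(drainage_area([1, 2, 2, 1], [0, 3, 1, 2], [1, 1, 1, 1]))  # [1, 3, 4, 1]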

  4. A new root-based direction-finding algorithm

    NASA Astrophysics Data System (ADS)

    Wasylkiwskyj, Wasyl; Kopriva, Ivica; DoroslovačKi, Miloš; Zaghloul, Amir I.

    2007-04-01

    Polynomial rooting direction-finding (DF) algorithms are a computationally efficient alternative to search-based DF algorithms and are particularly suitable for uniform linear arrays of physically identical elements, provided that mutual interaction among the array elements can be either neglected or compensated for. A popular algorithm in such situations is Root Multiple Signal Classification (Root MUSIC, or RM), wherein estimating the directions of arrival (DOA) requires computing the roots of a (2N - 2)-order polynomial, where N is the number of array elements. The DOA are estimated from the L pairs of roots closest to the unit circle, where L is the number of sources. In this paper we derive a modified root polynomial (MRP) algorithm requiring the calculation of only L roots in order to estimate the L DOA. We evaluate the performance of the MRP algorithm numerically and show that it is as accurate as the RM algorithm but with a significantly simpler algebraic structure. In order to demonstrate that the theoretically predicted performance can be achieved in an experimental setting, a decoupled array is emulated in hardware using phase shifters. The results are in excellent agreement with theory.
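
    For reference, the standard Root MUSIC baseline discussed above can be sketched as follows (this is the classical RM, rooting the full (2N - 2)-order polynomial, not the authors' MRP variant; half-wavelength element spacing is assumed):

        import numpy as np

        def root_music(R, L):
            # R: N x N sample covariance of the array snapshots; L: number of sources
            N = R.shape[0]
            _, V = np.linalg.eigh(R)
            En = V[:, :N - L]                        # noise-subspace eigenvectors
            C = En @ En.conj().T
            # polynomial coefficients are the diagonal sums of C, highest order first
            coeffs = np.array([np.trace(C, k) for k in range(N - 1, -N, -1)])
            roots = np.roots(coeffs)
            roots = roots[np.abs(roots) < 1.0]       # one root of each conjugate pair
            closest = roots[np.argsort(1.0 - np.abs(roots))[:L]]
            return np.arcsin(np.angle(closest) / np.pi)   # DOAs in radians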

  5. Algorithms for Discovery of Multiple Markov Boundaries

    PubMed Central

    Statnikov, Alexander; Lytkin, Nikita I.; Lemeire, Jan; Aliferis, Constantin F.

    2013-01-01

    Algorithms for Markov boundary discovery from data constitute an important recent development in machine learning, primarily because they offer a principled solution to the variable/feature selection problem and give insight on local causal structure. Over the last decade many sound algorithms have been proposed to identify a single Markov boundary of the response variable. Even though faithful distributions and, more broadly, distributions that satisfy the intersection property always have a single Markov boundary, other distributions/data sets may have multiple Markov boundaries of the response variable. The latter distributions/data sets are common in practical data-analytic applications, and there are several reasons why it is important to induce multiple Markov boundaries from such data. However, there are currently no sound and efficient algorithms that can accomplish this task. This paper describes a family of algorithms TIE* that can discover all Markov boundaries in a distribution. The broad applicability as well as efficiency of the new algorithmic family is demonstrated in an extensive benchmarking study that involved comparison with 26 state-of-the-art algorithms/variants in 15 data sets from a diversity of application domains. PMID:25285052

  6. N-Dimensional LLL Reduction Algorithm with Pivoted Reflection

    PubMed Central

    Deng, Zhongliang; Zhu, Di

    2018-01-01

    The Lenstra-Lenstra-Lovász (LLL) lattice reduction algorithm and many of its variants have been widely used in cryptography, multiple-input-multiple-output (MIMO) communication systems, and carrier phase positioning in global navigation satellite systems (GNSS) to solve the integer least squares (ILS) problem. In this paper, we propose an n-dimensional LLL reduction algorithm (n-LLL), expanding the Lovász condition in the LLL algorithm to n-dimensional space in order to obtain a further reduced basis. We also introduce pivoted Householder reflection into the algorithm to optimize the reduction time. For an m-order positive definite matrix, analysis shows that the n-LLL reduction algorithm converges within finitely many steps and always produces better results than the original LLL reduction algorithm for n > 2. The simulations clearly show that n-LLL is better than the original LLL at reducing the condition number of an ill-conditioned input matrix, with 39% improvement on average for typical cases, which can significantly reduce the search space for solving the ILS problem. The simulation results also show that the pivoted reflection reduces the number of swaps in the algorithm by 57%, making n-LLL a more practical reduction algorithm. PMID:29351224
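
    For orientation, the classical LLL loop that n-LLL generalizes, size reduction plus the Lovász condition with a swap on failure, can be sketched as follows (textbook LLL, recomputing the Gram-Schmidt data at each step for clarity; this is not the paper's n-LLL):

        import numpy as np

        def lll(B, delta=0.75):
            B = np.array(B, dtype=float)              # basis vectors as rows
            n = B.shape[0]
            def gso(B):                               # Gram-Schmidt orthogonalization
                Q, mu = np.zeros_like(B), np.zeros((n, n))
                for i in range(n):
                    Q[i] = B[i]
                    for j in range(i):
                        mu[i, j] = B[i] @ Q[j] / (Q[j] @ Q[j])
                        Q[i] -= mu[i, j] * Q[j]
                return Q, mu
            k = 1
            while k < n:
                for j in range(k - 1, -1, -1):        # size-reduce b_k
                    _, mu = gso(B)
                    B[k] -= np.round(mu[k, j]) * B[j]
                Q, mu = gso(B)
                # Lovász condition: |q_k|^2 >= (delta - mu_{k,k-1}^2) |q_{k-1}|^2
                if Q[k] @ Q[k] >= (delta - mu[k, k - 1] ** 2) * (Q[k - 1] @ Q[k - 1]):
                    k += 1
                else:
                    B[[k, k - 1]] = B[[k - 1, k]]     # swap and step back
                    k = max(k - 1, 1)
            return B

        print(lll([[1.0, 1.0, 1.0], [-1.0, 0.0, 2.0], [3.0, 5.0, 6.0]]))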

  7. Honey Bees Inspired Optimization Method: The Bees Algorithm.

    PubMed

    Yuce, Baris; Packianather, Michael S; Mastrocinque, Ernesto; Pham, Duc Truong; Lambiase, Alfredo

    2013-11-06

    Optimization algorithms are search methods whose goal is to find an optimal solution to a problem satisfying one or more objective functions, possibly subject to a set of constraints. Studies of social animals and social insects have resulted in a number of computational models of swarm intelligence. Within these swarms, collective behavior is usually very complex: the collective behavior of a swarm of social organisms emerges from the behaviors of the individuals of that swarm. Researchers have developed biologically inspired computational optimization methods such as Genetic Algorithms, Particle Swarm Optimization, and Ant Colony Optimization. The aim of this paper is to describe an optimization algorithm called the Bees Algorithm, inspired by the natural foraging behavior of honey bees. The algorithm combines an exploitative neighborhood search with a random explorative search. After an explanation of the natural foraging behavior of honey bees, the basic Bees Algorithm and its improved versions are described and implemented to optimize several benchmark functions, and the results are compared with those obtained with different optimization algorithms. The results show that the Bees Algorithm offers advantages over other optimization methods, depending on the nature of the problem.
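
    A compact sketch of the basic Bees Algorithm loop described above, with elite-site exploitation and random scouting (parameter names follow common descriptions of the algorithm; all values are illustrative):

        import numpy as np

        def bees_algorithm(f, lo, hi, n=20, m=5, e=2, nep=7, nsp=3, ngh=0.1, iters=100):
            # n scouts, m selected sites (e of them elite), nep/nsp recruited
            # foragers per elite/non-elite site, ngh neighbourhood half-width
            lo, hi = np.asarray(lo, float), np.asarray(hi, float)
            pop = np.random.uniform(lo, hi, (n, lo.size))
            for _ in range(iters):
                pop = pop[np.argsort([f(x) for x in pop])]
                new = []
                for i in range(m):                    # exploitative local search
                    bees = nep if i < e else nsp
                    patch = pop[i] + np.random.uniform(-ngh, ngh, (bees, lo.size))
                    new.append(min([pop[i], *np.clip(patch, lo, hi)], key=f))
                # remaining bees scout at random (explorative search)
                new.extend(np.random.uniform(lo, hi, (n - m, lo.size)))
                pop = np.array(new)
            return min(pop, key=f)

        sphere = lambda x: float((x ** 2).sum())
        print(bees_algorithm(sphere, [-5, -5], [5, 5]))   # converges near [0, 0]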

  8. Benchmarking monthly homogenization algorithms

    NASA Astrophysics Data System (ADS)

    Venema, V. K. C.; Mestre, O.; Aguilar, E.; Auer, I.; Guijarro, J. A.; Domonkos, P.; Vertacnik, G.; Szentimrey, T.; Stepanek, P.; Zahradnicek, P.; Viarre, J.; Müller-Westermeier, G.; Lakatos, M.; Williams, C. N.; Menne, M.; Lindau, R.; Rasol, D.; Rustemeier, E.; Kolokythas, K.; Marinova, T.; Andresen, L.; Acquaotta, F.; Fratianni, S.; Cheval, S.; Klancar, M.; Brunetti, M.; Gruber, C.; Prohom Duran, M.; Likso, T.; Esteban, P.; Brandsma, T.

    2011-08-01

    The COST (European Cooperation in Science and Technology) Action ES0601: Advances in homogenization methods of climate series: an integrated approach (HOME) has executed a blind intercomparison and validation study for monthly homogenization algorithms. Time series of monthly temperature and precipitation were evaluated because of their importance for climate studies and because they represent two important types of statistics (additive and multiplicative). The algorithms were validated against a realistic benchmark dataset. The benchmark contains real inhomogeneous data as well as simulated data with inserted inhomogeneities. Random break-type inhomogeneities were added to the simulated datasets modeled as a Poisson process with normally distributed breakpoint sizes. To approximate real world conditions, breaks were introduced that occur simultaneously in multiple station series within a simulated network of station data. The simulated time series also contained outliers, missing data periods and local station trends. Further, a stochastic nonlinear global (network-wide) trend was added. Participants provided 25 separate homogenized contributions as part of the blind study as well as 22 additional solutions submitted after the details of the imposed inhomogeneities were revealed. These homogenized datasets were assessed by a number of performance metrics including (i) the centered root mean square error relative to the true homogeneous value at various averaging scales, (ii) the error in linear trend estimates and (iii) traditional contingency skill scores. The metrics were computed both using the individual station series as well as the network average regional series. The performance of the contributions depends significantly on the error metric considered. Contingency scores by themselves are not very informative. Although relative homogenization algorithms typically improve the homogeneity of temperature data, only the best ones improve precipitation data
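
    Of the performance metrics listed above, the centered root mean square error is easy to make concrete (a sketch assuming numpy arrays for the homogenized and true series):

        import numpy as np

        def crmse(homogenized, truth):
            # Centered RMSE: compare anomalies about each series' own mean, so a
            # constant offset between the two series is not penalised
            dh = homogenized - homogenized.mean()
            dt = truth - truth.mean()
            return float(np.sqrt(np.mean((dh - dt) ** 2)))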

  9. Fast frequency acquisition via adaptive least squares algorithm

    NASA Technical Reports Server (NTRS)

    Kumar, R.

    1986-01-01

    A new least squares algorithm is proposed and investigated for fast frequency and phase acquisition of sinusoids in the presence of noise. This algorithm is a special case of more general, adaptive parameter-estimation techniques. The advantages of the algorithm are its conceptual simplicity, flexibility and applicability to general situations. For example, the frequency to be acquired can be time varying, and the noise can be non-Gaussian, nonstationary and colored. As the proposed algorithm can be made recursive in the number of observations, it is not necessary to have a priori knowledge of the received signal-to-noise ratio or to specify the measurement time, as would be required for batch processing techniques such as the fast Fourier transform (FFT). The proposed algorithm improves the frequency estimate on a recursive basis as more and more observations are obtained. When the algorithm is applied in real time, it has the extra advantage that the observations need not be stored. The algorithm also yields a real-time confidence measure as to the accuracy of the estimator.
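
    The paper's specific algorithm is not reproduced here, but the flavor of a recursive least squares frequency estimator can be sketched by fitting the one-step linear-prediction model x[k] ≈ a·x[k-1] with a = e^{jω}, updating the estimate as each sample arrives:

        import numpy as np

        def rls_freq(x, lam=0.99):
            # Scalar RLS on the model x[k] = a * x[k-1] + noise; angle(a) tracks
            # the (possibly slowly varying) frequency in radians per sample.
            a, P = 0.0 + 0.0j, 1e6
            est = []
            for k in range(1, len(x)):
                u = x[k - 1]
                g = P * np.conj(u) / (lam + P * abs(u) ** 2)   # RLS gain
                a += g * (x[k] - a * u)                        # innovation update
                P = (P - g * u * P) / lam                      # inverse-correlation update
                est.append(np.angle(a))
            return est

        n = np.arange(400)
        x = np.exp(1j * 0.3 * n) + 0.1 * (np.random.randn(400) + 1j * np.random.randn(400))
        print(rls_freq(x)[-1])    # converges near the true 0.3 rad/sample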

  10. Set covering algorithm, a subprogram of the scheduling algorithm for mission planning and logistic evaluation

    NASA Technical Reports Server (NTRS)

    Chang, H.

    1976-01-01

    A computer program using Lemke, Salkin and Spielberg's Set Covering Algorithm (SCA) to optimize a traffic model problem in the Scheduling Algorithm for Mission Planning and Logistics Evaluation (SAMPLE) was documented. SCA forms a submodule of SAMPLE and provides for input and output, subroutines, and an interactive feature for performing the optimization and arranging the results in a readily understandable form for output.

  11. MM Algorithms for Geometric and Signomial Programming

    PubMed Central

    Lange, Kenneth; Zhou, Hua

    2013-01-01

    This paper derives new algorithms for signomial programming, a generalization of geometric programming. The algorithms are based on a generic principle for optimization called the MM algorithm. In this setting, one can apply the geometric-arithmetic mean inequality and a supporting hyperplane inequality to create a surrogate function with parameters separated. Thus, unconstrained signomial programming reduces to a sequence of one-dimensional minimization problems. Simple examples demonstrate that the MM algorithm derived can converge to a boundary point or to one point of a continuum of minimum points. Conditions under which the minimum point is unique or occurs in the interior of parameter space are proved for geometric programming. Convergence to an interior point occurs at a linear rate. Finally, the MM framework easily accommodates equality and inequality constraints of signomial type. For the most important special case, constrained quadratic programming, the MM algorithm involves very simple updates. PMID:24634545
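
    The AM-GM majorization that drives the approach can be shown on a toy posynomial: the coupling term x·y is bounded by (y0/(2x0))x² + (x0/(2y0))y², tight at (x0, y0), so the surrogate separates into one-dimensional problems (a sketch of the MM principle on a hand-picked objective, not the paper's general signomial solver):

        def mm_minimize(x, y, iters=50):
            # Minimise f(x, y) = x*y + 1/x + 1/y over x, y > 0.  Majorising x*y
            # by AM-GM gives the separated surrogate a*t**2 + 1/t per variable,
            # whose minimiser is t = (2*a)**(-1/3).
            for _ in range(iters):
                ax, ay = y / (2 * x), x / (2 * y)
                x, y = (2 * ax) ** (-1 / 3), (2 * ay) ** (-1 / 3)
            return x, y

        print(mm_minimize(2.0, 0.5))   # converges to (1.0, 1.0), where f = 3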

  13. SU-E-T-516: Dosimetric Validation of AcurosXB Algorithm in Comparison with AAA & CCC Algorithms for VMAT Technique.

    PubMed

    Kathirvel, M; Subramanian, V Sai; Arun, G; Thirumalaiswamy, S; Ramalingam, K; Kumar, S Ashok; Jagadeesh, K

    2012-06-01

    To dosimetrically validate the AcurosXB algorithm for Volumetric Modulated Arc Therapy (VMAT) in comparison with the standard clinical Anisotropic Analytic Algorithm (AAA) and Collapsed Cone Convolution (CCC) dose calculation algorithms. The AcurosXB dose calculation algorithm is available with the Varian Eclipse treatment planning system (V10); it uses a grid-based Boltzmann equation solver to predict dose precisely in less time. This study was performed to assess the algorithm's ability to predict dose accurately as delivered, for which five clinical cases each of brain, head & neck, thoracic, pelvic, and SBRT treatments were taken. Verification plans were created on a multicube phantom with an iMatrixx-2D detector array; dose prediction was then done with the AcurosXB, AAA, and CCC (COMPASS system) algorithms, and the plans were delivered on a CLINAC-iX treatment machine. The delivered dose was captured in the iMatrixx plane for all 25 plans. The measured dose was taken as the reference to quantify the agreement of the AcurosXB calculation algorithm with the previously validated AAA and CCC algorithms. Gamma evaluation was performed in the omnipro-I'MRT software with clinical criteria of 3 and 2 mm distance-to-agreement and 3% and 2% dose difference. Plans were evaluated in terms of correlation coefficient, quantitative area gamma, and average gamma. The study shows good agreement, with mean correlations of 0.9979±0.0012, 0.9984±0.0009, and 0.9979±0.0011 for AAA, CCC, and Acuros, respectively. Mean area gamma for the 3mm/3% criterion was 98.80±1.04, 98.14±2.31, and 98.08±2.01, and for 2mm/2% it was 93.94±3.83, 87.17±10.54, and 92.36±5.46 for AAA, CCC, and Acuros, respectively. Mean average gamma for 3mm/3% was 0.26±0.07, 0.42±0.08, and 0.28±0.09, and for 2mm/2% it was 0.39±0.10, 0.64±0.11, and 0.42±0.13 for AAA, CCC, and Acuros, respectively. This study demonstrated that the AcurosXB algorithm has good agreement with AAA and CCC in terms of dose prediction. In conclusion, the AcurosXB algorithm provides a valid, accurate and speedy alternative to AAA
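
    The gamma evaluation used above can be made concrete with a simple one-dimensional global gamma index (a sketch assuming numpy dose arrays on a common spatial grid; clinical tools compute this in 2-D/3-D with interpolation):

        import numpy as np

        def gamma_index(dose_eval, dose_ref, x, dta=3.0, dd=0.03):
            # For each reference point, minimise the combined distance-to-agreement
            # (mm) and dose-difference (fraction of max dose) metric; g <= 1 passes.
            norm = dd * dose_ref.max()
            g = np.empty(len(dose_ref))
            for i, (xi, di) in enumerate(zip(x, dose_ref)):
                term = ((x - xi) / dta) ** 2 + ((dose_eval - di) / norm) ** 2
                g[i] = np.sqrt(term.min())
            return g   # pass rate for 3%/3mm: np.mean(gamma_index(...) <= 1)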

  14. NWRA AVOSS Wake Vortex Prediction Algorithm. 3.1.1

    NASA Technical Reports Server (NTRS)

    Robins, R. E.; Delisi, D. P.; Hinton, David (Technical Monitor)

    2002-01-01

    This report provides a detailed description of the wake vortex prediction algorithm used in the Demonstration Version of NASA's Aircraft Vortex Spacing System (AVOSS). The report includes all equations used in the algorithm, an explanation of how to run the algorithm, and a discussion of how the source code for the algorithm is organized. Several appendices contain important supplementary information, including suggestions for enhancing the algorithm and results from test cases.

  15. Java implementation of Class Association Rule algorithms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tamura, Makio

    2007-08-30

    Java implementation of three Class Association Rule mining algorithms: NETCAR, CARapriori, and clustering-based rule mining. The NETCAR algorithm is a novel algorithm developed by Makio Tamura; it is discussed in a paper (UCRL-JRNL-232466-DRAFT) to be published in a peer-reviewed scientific journal. The software is used to extract combinations of genes relevant to a phenotype from a phylogenetic profile and a phenotype profile. The phylogenetic profile is represented by a binary matrix, and the phenotype profile by a binary vector. The present application of this software is in genome analysis; however, it could be applied more generally.

  16. Automatic control algorithm effects on energy production

    NASA Technical Reports Server (NTRS)

    Mcnerney, G. M.

    1981-01-01

    A computer model was developed using actual wind time series and turbine performance data to simulate the power produced by the Sandia 17-m VAWT operating in automatic control. The model was used to investigate the influence of starting algorithms on annual energy production. The results indicate that, depending on turbine and local wind characteristics, a bad choice of a control algorithm can significantly reduce overall energy production. The model can be used to select control algorithms and threshold parameters that maximize long term energy production. The results from local site and turbine characteristics were generalized to obtain general guidelines for control algorithm design.

  17. A Collaborative Recommend Algorithm Based on Bipartite Community

    PubMed Central

    Fu, Yuchen; Liu, Quan; Cui, Zhiming

    2014-01-01

    The recommendation algorithm based on a bipartite network is superior to traditional methods in accuracy and diversity, which shows that considering the network topology of recommendation systems can help improve recommendation results. However, existing algorithms mainly focus on the overall topology structure, while local characteristics can also play an important role in collaborative recommendation. Therefore, considering the data characteristics and application requirements of collaborative recommender systems, we propose a link-community partitioning algorithm based on label propagation and a collaborative recommendation algorithm based on the bipartite community. We then design numerical experiments to verify the validity of the algorithms on benchmark and real databases. PMID:24955393

  18. Bio-inspired algorithms applied to molecular docking simulations.

    PubMed

    Heberlé, G; de Azevedo, W F

    2011-01-01

    Nature as a source of inspiration has been shown to have a great beneficial impact on the development of new computational methodologies. In this scenario, analyses of the interactions between a protein target and a ligand can be simulated by biologically inspired algorithms (BIAs). These algorithms mimic biological systems to create new paradigms for computation, such as neural networks, evolutionary computing, and swarm intelligence. This review provides a description of the main concepts behind BIAs applied to molecular docking simulations. Special attention is devoted to evolutionary algorithms, guided-directed evolutionary algorithms, and Lamarckian genetic algorithms. Recent applications of these methodologies to protein targets identified in the Mycobacterium tuberculosis genome are described.

  19. Traffic Noise Ground Attenuation Algorithm Evaluation

    NASA Astrophysics Data System (ADS)

    Herman, Lloyd Allen

    The Federal Highway Administration traffic noise prediction program, STAMINA 2.0, was evaluated for its accuracy. In addition, the ground attenuation algorithm used in the Ontario ORNAMENT method was evaluated to determine its potential to improve these predictions. Field measurements of sound levels were made at 41 sites on I-440 in Nashville, Tennessee, in order both to study noise barrier effectiveness and to evaluate STAMINA 2.0 and the performance of the ORNAMENT ground attenuation algorithm. The measurement sites, which contain large variations in terrain, included several cross sections. Further, all sites contain some type of barrier, natural or constructed, which could more fully expose the strengths and weaknesses of the ground attenuation algorithms. The noise barrier evaluation was accomplished in accordance with the American National Standard Methods for Determination of Insertion Loss of Outdoor Noise Barriers, which also resulted in an evaluation of that standard. The entire 7.2-mile length of I-440 was modeled using STAMINA 2.0. A multiple-run procedure was developed to emulate the results that would be obtained if the ORNAMENT algorithm were incorporated into STAMINA 2.0. Finally, the predicted noise levels based on STAMINA 2.0 and on STAMINA with the ORNAMENT ground attenuation algorithm were compared with each other and with the field measurements. It was found that STAMINA 2.0 overpredicted noise levels by an average of over 2 dB for the receivers on I-440, whereas STAMINA with the ORNAMENT ground attenuation algorithm overpredicted noise levels by an average of less than 0.5 dB. The mean errors for the two predictions were found to be statistically different from each other, and the mean error for the prediction with the ORNAMENT ground attenuation algorithm was not found to be statistically different from zero. The STAMINA 2.0 program predicts little, if any, ground attenuation for receivers at typical first-row distances from highways where noise barriers

  20. Bio-ALIRT biosurveillance detection algorithm evaluation.

    PubMed

    Siegrist, David; Pavlin, J

    2004-09-24

    Early detection of disease outbreaks by a medical biosurveillance system relies on two major components: 1) the contribution of early and reliable data sources and 2) the sensitivity, specificity, and timeliness of biosurveillance detection algorithms. This paper describes an effort to assess leading detection algorithms by arranging a common challenge problem and providing a common data set. The objectives of this study were to determine whether automated detection algorithms can reliably and quickly identify the onset of natural disease outbreaks that are surrogates for possible terrorist pathogen releases, and do so at acceptable false-alert rates (e.g., once every 2-6 weeks). Historic de-identified data were obtained from five metropolitan areas over 23 months; these data included International Classification of Diseases, Ninth Revision (ICD-9) codes related to respiratory and gastrointestinal illness syndromes. An outbreak detection group identified and labeled two natural disease outbreaks in these data and provided them to analysts for training of detection algorithms. All outbreaks in the remaining test data were identified but not revealed to the detection groups until after their analyses. The algorithms established a probability of outbreak for each day's counts. The probability of outbreak was assessed as an "actual" alert for different false-alert rates. The best algorithms were able to detect all of the outbreaks at false-alert rates of one every 2-6 weeks, often on the same day that human investigators had identified as the true start of the outbreak. Because minimal data exist for an actual biologic attack, determining how quickly an algorithm might detect such an attack is difficult. However, application of these algorithms in combination with other data-analysis methods to historic outbreak data indicates that biosurveillance techniques for analyzing syndrome counts can rapidly detect seasonal respiratory and gastrointestinal
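
    A representative count-based detector of the kind evaluated above is a one-sided CUSUM on standardized daily syndrome counts (a sketch, not one of the Bio-ALIRT algorithms; a real system would estimate the baseline from historical data only, not from the whole series):

        import numpy as np

        def cusum_alerts(counts, k=0.5, h=4.0):
            # Alert when the cumulative excess above k standard deviations
            # crosses the threshold h; a larger h gives fewer false alerts.
            z = (counts - counts.mean()) / (counts.std() + 1e-12)
            s, alerts = 0.0, []
            for day, zi in enumerate(z):
                s = max(0.0, s + zi - k)
                if s > h:
                    alerts.append(day)
                    s = 0.0          # reset after alerting
            return alerts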