Sample records for algorithm NSGA-II

  1. An efficient non-dominated sorting method for evolutionary algorithms.

    PubMed

    Fang, Hongbing; Wang, Qian; Tu, Yi-Cheng; Horstemeyer, Mark F

    2008-01-01

    We present a new non-dominated sorting algorithm to generate the non-dominated fronts in multi-objective optimization with evolutionary algorithms, particularly the NSGA-II. The non-dominated sorting algorithm used by NSGA-II has a time complexity of O(MN²) in generating non-dominated fronts in one generation (iteration) for a population size N and M objective functions. Since generating non-dominated fronts takes the majority of total computational time (excluding the cost of fitness evaluations) of NSGA-II, making this algorithm faster will significantly improve the overall efficiency of NSGA-II and other genetic algorithms using non-dominated sorting. The new non-dominated sorting algorithm proposed in this study reduces the number of redundant comparisons existing in the algorithm of NSGA-II by recording the dominance information among solutions from their first comparisons. By utilizing a new data structure called the dominance tree and the divide-and-conquer mechanism, the new algorithm is faster than NSGA-II for different numbers of objective functions. Although the number of solution comparisons by the proposed algorithm is close to that of NSGA-II when the number of objectives becomes large, the total computational time shows that the proposed algorithm still has better efficiency because of the adoption of the dominance tree structure and the divide-and-conquer mechanism.
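
    For reference, the baseline procedure named above can be sketched compactly. The following is a minimal Python sketch of Deb-style fast non-dominated sorting, the O(MN²) routine the abstract refers to, not the authors' dominance-tree variant:

    ```python
    def dominates(a, b):
        """True if objective vector a Pareto-dominates b (all objectives minimized)."""
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

    def fast_non_dominated_sort(objs):
        """objs: list of objective vectors; returns fronts as lists of indices."""
        n = len(objs)
        dominated_by = [[] for _ in range(n)]  # solutions that i dominates
        dom_count = [0] * n                    # how many solutions dominate i
        fronts = [[]]
        for i in range(n):
            for j in range(n):
                if i == j:
                    continue
                if dominates(objs[i], objs[j]):
                    dominated_by[i].append(j)
                elif dominates(objs[j], objs[i]):
                    dom_count[i] += 1
            if dom_count[i] == 0:
                fronts[0].append(i)
        k = 0
        while fronts[k]:
            nxt = []
            for i in fronts[k]:
                for j in dominated_by[i]:
                    dom_count[j] -= 1
                    if dom_count[j] == 0:
                        nxt.append(j)
            fronts.append(nxt)
            k += 1
        return fronts[:-1]  # drop the trailing empty front
    ```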

  2. Multi-Objective Constraint Satisfaction for Mobile Robot Area Defense

    DTIC Science & Technology

    2010-03-01

    …to alert the other agents and ensure trust in the system. This research presents an algorithm that tasks robots to meet the two specific goals of… The problem is defined as a constraint satisfaction problem solved using the Non-dominated Sorting Genetic Algorithm II (NSGA-II). Both goals of…

  3. δ-Similar Elimination to Enhance Search Performance of Multiobjective Evolutionary Algorithms

    NASA Astrophysics Data System (ADS)

    Aguirre, Hernán; Sato, Masahiko; Tanaka, Kiyoshi

    In this paper, we propose δ-similar elimination to improve the search performance of multiobjective evolutionary algorithms in combinatorial optimization problems. This method eliminates similar individuals in objective space to fairly distribute selection among the different regions of the instantaneous Pareto front. We investigate four elimination methods, analyzing their effects using NSGA-II. In addition, we compare the search performance of NSGA-II enhanced by our method and NSGA-II enhanced by controlled elitism.
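
    The record above does not spell out the four eliminating methods. As a hedged illustration only, one plausible variant drops an individual whose objective vector lies within δ (Euclidean distance) of one already kept:

    ```python
    def delta_similar_elimination(pop, objs, delta):
        """Sketch of one plausible δ-similar elimination rule: keep an individual
        only if its objective vector is at least `delta` away from every vector
        kept so far. Not necessarily any of the paper's four variants."""
        kept, kept_objs = [], []
        for ind, f in zip(pop, objs):
            if all(sum((a - b) ** 2 for a, b in zip(f, g)) ** 0.5 >= delta
                   for g in kept_objs):
                kept.append(ind)
                kept_objs.append(f)
        return kept
    ```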

  4. Resonance assignment of the NMR spectra of disordered proteins using a multi-objective non-dominated sorting genetic algorithm.

    PubMed

    Yang, Yu; Fritzsching, Keith J; Hong, Mei

    2013-11-01

    A multi-objective genetic algorithm is introduced to predict the assignment of protein solid-state NMR (SSNMR) spectra with partial resonance overlap and missing peaks due to broad linewidths, molecular motion, and low sensitivity. This non-dominated sorting genetic algorithm II (NSGA-II) aims to identify all possible assignments that are consistent with the spectra and to compare the relative merit of these assignments. Our approach is modeled after the recently introduced Monte-Carlo simulated-annealing (MC/SA) protocol, with the key difference that NSGA-II simultaneously optimizes multiple assignment objectives instead of searching for possible assignments based on a single composite score. The multiple objectives include maximizing the number of consistently assigned peaks between multiple spectra ("good connections"), maximizing the number of used peaks, minimizing the number of inconsistently assigned peaks between spectra ("bad connections"), and minimizing the number of assigned peaks that have no matching peaks in the other spectra ("edges"). Using six SSNMR protein chemical shift datasets with varying levels of imperfection that was introduced by peak deletion, random chemical shift changes, and manual peak picking of spectra with moderately broad linewidths, we show that the NSGA-II algorithm produces a large number of valid and good assignments rapidly. For high-quality chemical shift peak lists, NSGA-II and MC/SA perform similarly well. However, when the peak lists contain many missing peaks that are uncorrelated between different spectra and have chemical shift deviations between spectra, the modified NSGA-II produces a larger number of valid solutions than MC/SA, and is more effective at distinguishing good from mediocre assignments by avoiding the hazard of suboptimal weighting factors for the various objectives. These two advantages, namely diversity and better evaluation, lead to a higher probability of predicting the correct assignment for a larger number of residues. On the other hand, when there are multiple equally good assignments that are significantly different from each other, the modified NSGA-II is less efficient than MC/SA in finding all the solutions. This problem is solved by a combined NSGA-II/MC algorithm, which appears to have the advantages of both NSGA-II and MC/SA. This combination algorithm is robust for the three most difficult chemical shift datasets examined here and is expected to give the highest-quality de novo assignment of challenging protein NMR spectra.

  5. Optimal Solutions of Multiproduct Batch Chemical Process Using Multiobjective Genetic Algorithm with Expert Decision System

    PubMed Central

    Mokeddem, Diab; Khellaf, Abdelhafid

    2009-01-01

    Optimal design problems are widely known for their multiple performance measures that often compete with each other. In this paper, an optimal multiproduct batch chemical plant design is presented. The design is first formulated as a multiobjective optimization problem, to be solved using the well-suited non-dominated sorting genetic algorithm (NSGA-II). NSGA-II is capable of finely tuning variables to determine a set of non-dominated solutions distributed along the Pareto front in a single run of the algorithm. Its ability to identify a set of optimal solutions provides the decision-maker (DM) with a complete picture of the optimal solution space from which to make better and more appropriate choices. An outranking with PROMETHEE II then helps the decision-maker finalize the selection of the best compromise. The effectiveness of the NSGA-II method on the multiobjective optimization problem is illustrated through two carefully referenced examples. PMID:19543537

  6. A Bee Evolutionary Guiding Nondominated Sorting Genetic Algorithm II for Multiobjective Flexible Job-Shop Scheduling.

    PubMed

    Deng, Qianwang; Gong, Guiliang; Gong, Xuran; Zhang, Like; Liu, Wei; Ren, Qinghua

    2017-01-01

    The flexible job-shop scheduling problem (FJSP) is an NP-hard problem that inherits the characteristics of the job-shop scheduling problem (JSP). This paper presents a bee evolutionary guiding nondominated sorting genetic algorithm II (BEG-NSGA-II) for multiobjective FJSP (MO-FJSP) with the objectives of minimizing the maximal completion time, the workload of the most loaded machine, and the total workload of all machines. It adopts a two-stage optimization mechanism. In the first stage, the NSGA-II algorithm is run for T iterations to obtain the initial population N, in which a bee evolutionary guiding scheme is presented to exploit the solution space extensively. In the second stage, the NSGA-II algorithm is run again for GEN iterations to obtain the Pareto-optimal solutions. To enhance the searching ability and avoid premature convergence, an updating mechanism is employed in this stage: the population consists of three parts, each of which changes with the iteration count. Numerical simulations based on published benchmark instances are carried out, and the effectiveness of the proposed BEG-NSGA-II algorithm is shown by comparing its experimental results with those of several well-known existing algorithms.

  7. Optimising operational amplifiers by evolutionary algorithms and gm/Id method

    NASA Astrophysics Data System (ADS)

    Tlelo-Cuautle, E.; Sanabria-Borbon, A. C.

    2016-10-01

    The evolutionary algorithm called the non-dominated sorting genetic algorithm (NSGA-II) is applied herein to the optimisation of operational transconductance amplifiers. NSGA-II is accelerated by applying the gm/Id method to estimate reduced search spaces associated with the widths (W) and lengths (L) of the metal-oxide-semiconductor field-effect transistors (MOSFETs), and to guarantee their appropriate bias conditions. In addition, we introduce an integer encoding for the W/L sizes of the MOSFETs to avoid a post-processing step for rounding off their values to multiples of the integrated circuit fabrication technology. Finally, from the feasible solutions generated by NSGA-II, we introduce a second optimisation stage to guarantee that the final feasible W/L sizes support process, voltage and temperature (PVT) variations. The optimisation results lead us to conclude that the gm/Id method and integer encoding are quite useful for accelerating the convergence of the evolutionary algorithm NSGA-II, while the second optimisation stage guarantees the robustness of the feasible solutions to PVT variations.
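
    The integer-encoding idea can be sketched as follows; the grid constant below is a hypothetical placeholder, not a value from the paper. Each W/L gene is stored as an integer count of grid units, so every decoded size is automatically a legal multiple of the fabrication grid and no rounding-off step is needed:

    ```python
    GRID = 0.18e-6  # hypothetical fabrication grid in metres; the paper's value may differ

    def decode_width(gene: int) -> float:
        """Integer gene -> physical MOSFET width, guaranteed on-grid."""
        return gene * GRID

    # e.g. the integer chromosome [12, 40, 7] decodes to widths
    # [2.16e-06, 7.2e-06, 1.26e-06] m, all exact multiples of GRID.
    print([decode_width(g) for g in [12, 40, 7]])
    ```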

  8. An improved NSGA-II algorithm for mixed model assembly line balancing

    NASA Astrophysics Data System (ADS)

    Wu, Yongming; Xu, Yanxia; Luo, Lifei; Zhang, Han; Zhao, Xudong

    2018-05-01

    To address assembly line balancing and the path optimization of material vehicles in a mixed-model manufacturing system, a multi-objective mixed-model assembly line (MMAL) model is established based on the optimization objectives, influencing factors, and constraints. For this situation, an improved NSGA-II algorithm based on an ecological evolution strategy is designed, incorporating an environment self-detecting operator that detects whether the environment has changed. Finally, the effectiveness of the proposed model and algorithm is verified by examples from a concrete mixing system.

  9. A hybrid multi-objective evolutionary algorithm for wind-turbine blade optimization

    NASA Astrophysics Data System (ADS)

    Sessarego, M.; Dixon, K. R.; Rival, D. E.; Wood, D. H.

    2015-08-01

    A concurrent-hybrid non-dominated sorting genetic algorithm (hybrid NSGA-II) has been developed and applied to the simultaneous optimization of the annual energy production, flapwise root-bending moment and mass of the NREL 5 MW wind-turbine blade. By hybridizing a multi-objective evolutionary algorithm (MOEA) with gradient-based local search, it is believed that the optimal set of blade designs could be achieved at a lower computational cost than with a conventional MOEA. To measure the convergence between the hybrid and non-hybrid NSGA-II on a wind-turbine blade optimization problem, a computationally intensive case was performed using the non-hybrid NSGA-II. From this particular case, a three-dimensional surface representing the optimal trade-off between the annual energy production, flapwise root-bending moment and blade mass was achieved. The inclusion of local gradients in the blade optimization, however, shows no improvement in the convergence for this three-objective problem.
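
    The hybridization amounts to refining selected NSGA-II individuals with a few gradient steps between generations. The sketch below is illustrative only; the gradient function, scalarization, step size, and iteration count are assumptions, not the authors' settings:

    ```python
    import numpy as np

    def local_refine(x, grad_f, step=1e-2, iters=10):
        """A few steepest-descent steps on a (scalarized) objective, as a
        stand-in for the gradient-based local search hybridized with NSGA-II."""
        x = np.asarray(x, dtype=float)
        for _ in range(iters):
            x = x - step * grad_f(x)
        return x
    ```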

  10. Stochastic resource allocation in emergency departments with a multi-objective simulation optimization algorithm.

    PubMed

    Feng, Yen-Yi; Wu, I-Chin; Chen, Tzu-Li

    2017-03-01

    The number of emergency cases or emergency room visits rapidly increases annually, thus leading to an imbalance in supply and demand and to the long-term overcrowding of hospital emergency departments (EDs). However, current solutions to increase medical resources and improve the handling of patient needs are either impractical or infeasible in the Taiwanese environment. Therefore, EDs must optimize resource allocation given limited medical resources to minimize the average length of stay of patients and medical resource waste costs. This study constructs a multi-objective mathematical model for medical resource allocation in EDs in accordance with emergency flow or procedure. The proposed mathematical model is complex and difficult to solve because its performance value is stochastic; furthermore, the model considers both objectives simultaneously. Thus, this study develops a multi-objective simulation optimization algorithm by integrating a non-dominated sorting genetic algorithm II (NSGA II) with multi-objective computing budget allocation (MOCBA) to address the challenges of multi-objective medical resource allocation. NSGA II is used to investigate plausible solutions for medical resource allocation, and MOCBA identifies effective sets of feasible Pareto (non-dominated) medical resource allocation solutions in addition to effectively allocating simulation or computation budgets. The discrete event simulation model of ED flow is inspired by a Taiwan hospital case and is constructed to estimate the expected performance values of each medical allocation solution as obtained through NSGA II. Finally, computational experiments are performed to verify the effectiveness and performance of the integrated NSGA II and MOCBA method, as well as to derive non-dominated medical resource allocation solutions from the algorithms.

  11. Particle swarm optimization: an alternative in marine propeller optimization?

    NASA Astrophysics Data System (ADS)

    Vesting, F.; Bensow, R. E.

    2018-01-01

    This article deals with improving and evaluating the performance of two evolutionary algorithm approaches for automated engineering design optimization. Here a marine propeller design with constraints on cavitation nuisance is the intended application. For this purpose, the particle swarm optimization (PSO) algorithm is adapted for multi-objective optimization and constraint handling for use in propeller design. Three PSO algorithms are developed and tested for the optimization of four commercial propeller designs for different ship types. The results are evaluated by interrogating the generation medians and the Pareto front development. The same propellers are also optimized utilizing the well established NSGA-II genetic algorithm to provide benchmark results. The authors' PSO algorithms deliver comparable results to NSGA-II, but converge earlier and enhance the solution in terms of constraints violation.

  12. Chance-constrained multi-objective optimization of groundwater remediation design at DNAPLs-contaminated sites using a multi-algorithm genetically adaptive method

    NASA Astrophysics Data System (ADS)

    Ouyang, Qi; Lu, Wenxi; Hou, Zeyu; Zhang, Yu; Li, Shuai; Luo, Jiannan

    2017-05-01

    In this paper, a multi-algorithm genetically adaptive multi-objective (AMALGAM) method is proposed as a multi-objective optimization solver. It was implemented in the multi-objective optimization of a groundwater remediation design at sites contaminated by dense non-aqueous phase liquids. In this study, there were two objectives: minimization of the total remediation cost, and minimization of the remediation time. A non-dominated sorting genetic algorithm II (NSGA-II) was adopted for comparison with the proposed method. For efficiency, the time-consuming surfactant-enhanced aquifer remediation simulation model was replaced by a surrogate model constructed by a multi-gene genetic programming (MGGP) technique. Similarly, two other surrogate modeling methods, support vector regression (SVR) and Kriging (KRG), were employed to make comparisons with MGGP. In addition, the surrogate-modeling uncertainty was incorporated in the optimization model by chance-constrained programming (CCP). The results showed that, for the problem considered in this study, (1) the solutions obtained by AMALGAM incurred less remediation cost and required less time than those of NSGA-II, indicating that AMALGAM outperformed NSGA-II. It was additionally shown that (2) the MGGP surrogate model was more accurate than SVR and KRG; and (3) the remediation cost and time increased with the confidence level, which can enable decision makers to make a suitable choice by considering the given budget, remediation time, and reliability.

  13. Multi-objective Optimization of Pulsed Gas Metal Arc Welding Process Using Neuro NSGA-II

    NASA Astrophysics Data System (ADS)

    Pal, Kamal; Pal, Surjya K.

    2018-05-01

    Weld quality is a critical issue in fabrication industries where products are custom-designed. Multi-objective optimization yields a number of solutions on the Pareto-optimal front. Optimization methods based on mathematical regression models are often found to be inadequate for highly non-linear arc welding processes, and various global evolutionary approaches, such as artificial neural networks and genetic algorithms (GA), have therefore been developed. The present work applies the elitist non-dominated sorting GA (NSGA-II) to the optimization of the pulsed gas metal arc welding process, using back-propagation neural network (BPNN) models of weld quality features. The primary objective in maintaining butt-joint weld quality is the maximization of tensile strength with minimum plate distortion. After adequate training, the BPNN computes the fitness of each solution, while the NSGA-II algorithm generates the optimum solutions for the two conflicting objectives. Welding experiments were conducted on low-carbon steel using response surface methodology. The Pareto-optimal front obtained after 20 generations, with three ranked solutions, showed no further improvement and was taken as the best. According to the validated Pareto-optimal solutions, both joint strength and transverse shrinkage were drastically improved over the design-of-experiments results.
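
    The role of the BPNN here is that of a cheap objective-function evaluator inside the NSGA-II loop. A hedged sketch of the coupling, where `network.predict` is an assumed interface rather than the authors' code:

    ```python
    def surrogate_fitness(network, x):
        """Map process parameters x to the two objectives via the trained BPNN.
        NSGA-II here minimizes both entries, so tensile strength (to be
        maximized) is negated. `network.predict` is an assumed interface."""
        strength, distortion = network.predict(x)
        return (-strength, distortion)
    ```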

  14. Metaheuristics-Assisted Combinatorial Screening of Eu2+-Doped Ca-Sr-Ba-Li-Mg-Al-Si-Ge-N Compositional Space in Search of a Narrow-Band Green Emitting Phosphor and Density Functional Theory Calculations.

    PubMed

    Lee, Jin-Woong; Singh, Satendra Pal; Kim, Minseuk; Hong, Sung Un; Park, Woon Bae; Sohn, Kee-Sun

    2017-08-21

    A metaheuristics-based design would be of great help in relieving the enormous experimental burdens faced during the combinatorial screening of a huge, multidimensional search space, while providing the same effect as total enumeration. In order to tackle the high-throughput powder processing complications and to secure practical phosphors, a metaheuristic, the elitism-reinforced nondominated sorting genetic algorithm (NSGA-II), was employed in this study. The NSGA-II iteration targeted two objective functions. The first was to search for a higher emission efficacy. The second was to search for narrow-band green color emissions. The NSGA-II iteration finally converged on BaLi2Al2Si2N6:Eu2+ phosphors in the Eu2+-doped Ca-Sr-Ba-Li-Mg-Al-Si-Ge-N compositional search space. The BaLi2Al2Si2N6:Eu2+ phosphor, which was synthesized with no human intervention via the assistance of NSGA-II, was a clear single phase and gave an acceptable luminescence. The BaLi2Al2Si2N6:Eu2+ phosphor, as well as all other phosphors that appeared during the NSGA-II iterations, was examined in detail by employing powder X-ray diffraction-based Rietveld refinement, X-ray absorption near edge structure, density functional theory calculation, and time-resolved photoluminescence. The thermodynamic stability and the band structure plausibility were confirmed, and more importantly a novel approach to the energy transfer analysis was also introduced for BaLi2Al2Si2N6:Eu2+ phosphors.

  15. MULTIOBJECTIVE PARALLEL GENETIC ALGORITHM FOR WASTE MINIMIZATION

    EPA Science Inventory

    In this research we have developed an efficient multiobjective parallel genetic algorithm (MOPGA) for waste minimization problems. This MOPGA integrates PGAPack (Levine, 1996) and NSGA-II (Deb, 2000) with novel modifications. PGAPack is a master-slave parallel implementation of a...

  16. Optimal Bi-Objective Redundancy Allocation for Systems Reliability and Risk Management.

    PubMed

    Govindan, Kannan; Jafarian, Ahmad; Azbari, Mostafa E; Choi, Tsan-Ming

    2016-08-01

    In the big data era, systems reliability is critical to effective systems risk management. In this paper, a novel multiobjective approach hybridizing the well-known NSGA-II algorithm with an adaptive population-based simulated annealing (APBSA) method is developed to solve systems reliability optimization problems. First, a coevolutionary strategy is used to construct a strong algorithm. Since the proposed algorithm is very sensitive to parameter values, the response surface method is employed to estimate its appropriate parameters. Moreover, to examine the performance of the proposed approach, several test problems are generated, and the proposed hybrid algorithm and other commonly known approaches (i.e., MOGA, NRGA, and NSGA-II) are compared with respect to four performance measures: 1) mean ideal distance; 2) diversification metric; 3) percentage of domination; and 4) data envelopment analysis. The computational studies show that the proposed algorithm is an effective approach for systems reliability and risk management.
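
    Of the four measures, mean ideal distance (MID) is the most compact to state. A sketch using one common convention, with the component-wise best point as the ideal; the paper may normalize differently:

    ```python
    import numpy as np

    def mean_ideal_distance(front, ideal=None):
        """Mean Euclidean distance of the non-dominated points from the ideal point."""
        F = np.asarray(front, dtype=float)
        if ideal is None:
            ideal = F.min(axis=0)  # component-wise best (minimization assumed)
        return float(np.mean(np.linalg.norm(F - ideal, axis=1)))
    ```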

  17. Bi-Objective Flexible Job-Shop Scheduling Problem Considering Energy Consumption under Stochastic Processing Times.

    PubMed

    Yang, Xin; Zeng, Zhenxiang; Wang, Ruidong; Sun, Xueshan

    2016-01-01

    This paper presents a novel method for the optimization of the bi-objective flexible job-shop scheduling problem (FJSP) under stochastic processing times. The robust counterpart model and the non-dominated sorting genetic algorithm II (NSGA-II) are used to solve the bi-objective FJSP, considering the completion time and the total energy consumption under stochastic processing times. A case study on GM Corporation verifies that the NSGA-II used in this paper is effective and outperforms HPSO and PSO+SA in solving the proposed model. The idea and method of the paper can be generalized widely in the manufacturing industry, because they can reduce the energy consumption of energy-intensive manufacturing enterprises with little additional investment when the approach is applied to existing systems.

  18. A master-slave parallel hybrid multi-objective evolutionary algorithm for groundwater remediation design under general hydrogeological conditions

    NASA Astrophysics Data System (ADS)

    Wu, J.; Yang, Y.; Luo, Q.; Wu, J.

    2012-12-01

    This study presents a new hybrid multi-objective evolutionary algorithm, the niched Pareto tabu search combined with a genetic algorithm (NPTSGA), whereby the global search ability of the niched Pareto tabu search (NPTS) is improved by the diversification of candidate solutions arising from the evolving nondominated sorting genetic algorithm II (NSGA-II) population. The NPTSGA, coupled with the commonly used groundwater flow and transport codes MODFLOW and MT3DMS, is developed for the multi-objective optimal design of groundwater remediation systems. The proposed methodology is then applied to a large-scale field groundwater remediation system for cleanup of a large trichloroethylene (TCE) plume at the Massachusetts Military Reservation (MMR) in Cape Cod, Massachusetts. Furthermore, a master-slave (MS) parallelization scheme based on the Message Passing Interface (MPI) is incorporated into the NPTSGA to implement objective function evaluations in a distributed processor environment, which greatly improves the efficiency of the NPTSGA in finding Pareto-optimal solutions for the real-world application. This study shows that, in comparison with the original NPTS and NSGA-II, the MS parallel NPTSGA balances the tradeoff between the diversity and optimality of solutions during the search process and is an efficient and effective tool for optimizing the multi-objective design of groundwater remediation systems under complicated hydrogeologic conditions.

  19. Multi-objective parametric optimization of Inertance type pulse tube refrigerator using response surface methodology and non-dominated sorting genetic algorithm

    NASA Astrophysics Data System (ADS)

    Rout, Sachindra K.; Choudhury, Balaji K.; Sahoo, Ranjit K.; Sarangi, Sunil K.

    2014-07-01

    The modeling and optimization of a pulse tube refrigerator is a complicated task, due to the complexity of its geometry and nature. The aim of the present work is to optimize the dimensions of the pulse tube and regenerator for an inertance-type pulse tube refrigerator (ITPTR) by using response surface methodology (RSM) and the non-dominated sorting genetic algorithm II (NSGA-II). The Box-Behnken design of the response surface methodology is used in an experimental matrix, with four factors and two levels. The diameters and lengths of the pulse tube and regenerator are chosen as the design variables, while the rest of the dimensions and operating conditions of the ITPTR are held constant. The required output responses are the cold head temperature (Tcold) and compressor input power (Wcomp). Computational fluid dynamics (CFD) has been used to model and solve the ITPTR, and the CFD results agreed well with those of a previously published paper. Using the results from the 1-D simulation, RSM is conducted to analyse the effect of the independent variables on the responses. To check the accuracy of the model, the analysis of variance (ANOVA) method has been used. Based on the proposed mathematical RSM models, a multi-objective optimization study using NSGA-II has been performed to optimize the responses.

  20. A Novel Multiobjective Evolutionary Algorithm Based on Regression Analysis

    PubMed Central

    Song, Zhiming; Wang, Maocai; Dai, Guangming; Vasile, Massimiliano

    2015-01-01

    As is known, the Pareto set of a continuous multiobjective optimization problem with m objective functions is a piecewise continuous (m − 1)-dimensional manifold in the decision space under some mild conditions. However, how to utilize this regularity to design multiobjective optimization algorithms has become a research focus. In this paper, based on this regularity, a model-based multiobjective evolutionary algorithm with regression analysis (MMEA-RA) is put forward to solve continuous multiobjective optimization problems with variable linkages. In the algorithm, the optimization problem is modelled as a promising area in the decision space by a probability distribution, the centroid of which is an (m − 1)-dimensional piecewise continuous manifold. The least squares method is used to construct such a model. A selection strategy based on nondominated sorting is used to choose the individuals for the next generation. The new algorithm is tested and compared with NSGA-II and RM-MEDA. The results show that MMEA-RA outperforms RM-MEDA and NSGA-II on the test instances with variable linkages. At the same time, MMEA-RA has higher efficiency than the other two algorithms. A few shortcomings of MMEA-RA have also been identified and discussed in this paper. PMID:25874246

  21. Selection and placement of best management practices used to reduce water quality degradation in Lincoln Lake watershed

    NASA Astrophysics Data System (ADS)

    Rodriguez, Hector German; Popp, Jennie; Maringanti, Chetan; Chaubey, Indrajeet

    2011-01-01

    An increased loss of agricultural nutrients is a growing concern for water quality in Arkansas. Several studies have shown that best management practices (BMPs) are effective in controlling water pollution. However, those affected by water quality issues need water management plans that take into consideration BMP selection, placement, and affordability. This study used a nondominated sorting genetic algorithm (NSGA-II), a multiobjective algorithm that selects and locates BMPs that minimize nutrient pollution cost-effectively by providing trade-off curves (optimal fronts) between pollutant reduction and total net cost increase. The usefulness of this optimization framework was evaluated in the Lincoln Lake watershed. The final NSGA-II optimization model generated a number of near-optimal solutions by selecting from 35 BMPs (combinations of pasture management, buffer zones, and poultry litter application practices). Selection and placement of BMPs were analyzed under various cost solutions. The NSGA-II provides multiple solutions that could fit the water management plan for the watershed. For instance, by implementing all the BMP combinations recommended in the lowest-cost solution, total phosphorus (TP) could be reduced by at least 76% while increasing cost by less than 2% in the entire watershed; this represents a cost increase of $5.49 ha⁻¹ compared to the baseline. Implementing all the BMP combinations proposed in the medium- and highest-cost solutions could decrease TP drastically but would increase cost by $24,282 (7%) and $82,306 (25%), respectively.

  22. Distributed query plan generation using multiobjective genetic algorithm.

    PubMed

    Panicker, Shina; Kumar, T V Vijay

    2014-01-01

    A distributed query processing strategy, which is a key performance determinant in accessing distributed databases, aims to minimize the total query processing cost. One way to achieve this is by generating efficient distributed query plans that involve fewer sites for processing a query. In the case of distributed relational databases, the number of possible query plans increases exponentially with respect to the number of relations accessed by the query and the number of sites where these relations reside. Consequently, computing optimal distributed query plans becomes a complex problem. This distributed query plan generation (DQPG) problem has already been addressed using single objective genetic algorithm, where the objective is to minimize the total query processing cost comprising the local processing cost (LPC) and the site-to-site communication cost (CC). In this paper, this DQPG problem is formulated and solved as a biobjective optimization problem with the two objectives being minimize total LPC and minimize total CC. These objectives are simultaneously optimized using a multiobjective genetic algorithm NSGA-II. Experimental comparison of the proposed NSGA-II based DQPG algorithm with the single objective genetic algorithm shows that the former performs comparatively better and converges quickly towards optimal solutions for an observed crossover and mutation probability.

  23. Development of mathematical models and optimization of the process parameters of laser surface hardened EN25 steel using elitist non-dominated sorting genetic algorithm

    NASA Astrophysics Data System (ADS)

    Vignesh, S.; Dinesh Babu, P.; Surya, G.; Dinesh, S.; Marimuthu, P.

    2018-02-01

    The ultimate goal of all production entities is to select the process parameters that yield maximum strength with minimum wear and friction. Friction and wear are serious problems in most industries, influenced by the working set of parameters, the oxidation characteristics, and the mechanism of wear formation. The experimental input parameters, such as sliding distance, applied load, and temperature, are used to find the optimized solution for the desired output responses: coefficient of friction, wear rate, and volume loss. The optimization is performed with the elitist non-dominated sorting genetic algorithm (NSGA-II), an evolutionary algorithm. The regression equations obtained using response surface methodology (RSM) are used in determining the optimum process parameters. Further, the results achieved through the desirability approach in RSM are compared with the optimized solution obtained through NSGA-II. The results show that the proposed evolutionary technique is more effective and faster than the desirability approach.

  24. An efficient hybrid approach for multiobjective optimization of water distribution systems

    NASA Astrophysics Data System (ADS)

    Zheng, Feifei; Simpson, Angus R.; Zecchin, Aaron C.

    2014-05-01

    An efficient hybrid approach for the design of water distribution systems (WDSs) with multiple objectives is described in this paper. The objectives are the minimization of the network cost and maximization of the network resilience. A self-adaptive multiobjective differential evolution (SAMODE) algorithm has been developed, in which control parameters are automatically adapted by means of evolution instead of the presetting of fine-tuned parameter values. In the proposed method, a graph algorithm is first used to decompose a looped WDS into a shortest-distance tree (T) or forest, and chords (Ω). The original two-objective optimization problem is then approximated by a series of single-objective optimization problems of the T to be solved by nonlinear programming (NLP), thereby providing an approximate Pareto optimal front for the original whole network. Finally, the solutions at the approximate front are used to seed the SAMODE algorithm to find an improved front for the original entire network. The proposed approach is compared with two other conventional full-search optimization methods (the SAMODE algorithm and the NSGA-II) that seed the initial population with purely random solutions based on three case studies: a benchmark network and two real-world networks with multiple demand loading cases. Results show that (i) the proposed NLP-SAMODE method consistently generates better-quality Pareto fronts than the full-search methods with significantly improved efficiency; and (ii) the proposed SAMODE algorithm (no parameter tuning) exhibits better performance than the NSGA-II with calibrated parameter values in efficiently offering optimal fronts.
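
    The seeding step itself is conceptually simple: the evolutionary search starts from the NLP-derived near-optimal designs instead of a purely random population. A minimal sketch, with `random_solution` as an assumed generator for the remainder of the population:

    ```python
    import random

    def seeded_population(seeds, random_solution, pop_size):
        """Fill the initial population with the NLP-derived seeds first,
        then pad with random individuals up to pop_size."""
        pop = list(seeds)[:pop_size]
        while len(pop) < pop_size:
            pop.append(random_solution())
        random.shuffle(pop)
        return pop
    ```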

  25. Optimizing a multi-product closed-loop supply chain using NSGA-II, MOSA, and MOPSO meta-heuristic algorithms

    NASA Astrophysics Data System (ADS)

    Babaveisi, Vahid; Paydar, Mohammad Mahdi; Safaei, Abdul Sattar

    2018-07-01

    This study discusses solution methodologies for a closed-loop supply chain (CLSC) network that includes the collection of used products as well as the distribution of new products. This supply chain is representative of the class of problems that the proposed meta-heuristic algorithms can solve. A mathematical model is designed for a CLSC with three objective functions: maximizing profit, and minimizing the total risk and the shortages of products. Since three objective functions are considered, a multi-objective solution methodology is advantageous. Therefore, several approaches have been studied: an NSGA-II algorithm is first utilized, and the results are then validated using MOSA and MOPSO algorithms. Priority-based encoding, which is used in all the algorithms, is the core of the solution computations. To compare the performance of the meta-heuristics, random numerical instances are evaluated by four criteria: mean ideal distance, spread of non-dominated solutions, number of Pareto solutions, and CPU time. To enhance the performance of the algorithms, the Taguchi method is used for parameter tuning. Finally, sensitivity analyses are performed and the computational results are presented based on the sensitivity analyses in parameter tuning.

  26. Cost versus life cycle assessment-based environmental impact optimization of drinking water production plants.

    PubMed

    Capitanescu, F; Rege, S; Marvuglia, A; Benetto, E; Ahmadi, A; Gutiérrez, T Navarrete; Tiruta-Barna, L

    2016-07-15

    Empowering decision makers with cost-effective solutions for reducing industrial processes' environmental burden, at both the design and operation stages, is nowadays a major worldwide concern. The paper addresses this issue for the sector of drinking water production plants (DWPPs), seeking optimal solutions that trade off operation cost and life cycle assessment (LCA)-based environmental impact while satisfying outlet water quality criteria. This leads to a challenging bi-objective constrained optimization problem, which relies on a computationally expensive, intricate process-modelling simulator of the DWPP and has to be solved with a limited computational budget. Since mathematical programming methods are unusable in this case, the paper examines the performance in tackling these challenges of six off-the-shelf state-of-the-art global meta-heuristic optimization algorithms suitable for such simulation-based optimization, namely the Strength Pareto Evolutionary Algorithm (SPEA2), Non-dominated Sorting Genetic Algorithm (NSGA-II), Indicator-based Evolutionary Algorithm (IBEA), Multi-Objective Evolutionary Algorithm based on Decomposition (MOEA/D), Differential Evolution (DE), and Particle Swarm Optimization (PSO). The optimization results reveal that a good reduction in both the operating cost and the environmental impact of the DWPP can be obtained. Furthermore, NSGA-II outperforms the other competing algorithms, while MOEA/D and DE perform unexpectedly poorly.

  27. Multi-objective optimization of MOSFETs channel widths and supply voltage in the proposed dual edge-triggered static D flip-flop with minimum average power and delay by using fuzzy non-dominated sorting genetic algorithm-II.

    PubMed

    Keivanian, Farshid; Mehrshad, Nasser; Bijari, Abolfazl

    2016-01-01

    The D flip-flop is a digital circuit used as a timing element in many sophisticated circuits; optimum performance with the lowest power consumption and acceptable delay time is therefore a critical issue in electronic circuits. The layout of the newly proposed dual-edge-triggered static D flip-flop circuit is defined as a multi-objective optimization problem. For this, an optimum fuzzy inference system with fuzzy rules is proposed to enhance the performance and convergence of the non-dominated sorting genetic algorithm II by adaptive control of the exploration and exploitation parameters. Using the proposed fuzzy NSGA-II algorithm, more optimal values for the MOSFET channel widths and power supply are discovered in the search space than with ordinary NSGA variants. Moreover, the design parameters (NMOS and PMOS channel widths and power supply voltage) and the performance parameters (average power consumption and propagation delay time) are linked, and the required mathematical background is presented in this study. Based on the discovered optimum values of the design parameters, the power-delay product (PDP) is 6.32 pJ at a 125 MHz clock frequency, L = 0.18 µm, and T = 27 °C.

  28. Combining Multiobjective Optimization and Cluster Analysis to Study Vocal Fold Functional Morphology

    PubMed Central

    Palaparthi, Anil; Riede, Tobias

    2017-01-01

    Morphological design and the relationship between form and function have great influence on the functionality of a biological organ. However, the simultaneous investigation of morphological diversity and function is difficult in complex natural systems. We have developed a multiobjective optimization (MOO) approach in association with cluster analysis to study the form-function relation in vocal folds. An evolutionary algorithm (NSGA-II) was used to integrate MOO with an existing finite element model of the laryngeal sound source. Vocal fold morphology parameters served as decision variables and acoustic requirements (fundamental frequency, sound pressure level) as objective functions. A two-layer and a three-layer vocal fold configuration were explored to produce the targeted acoustic requirements. The mutation and crossover parameters of the NSGA-II algorithm were chosen to maximize a hypervolume indicator. The results were expressed using cluster analysis and were validated against a brute force method. Results from the MOO and the brute force approaches were comparable. The MOO approach demonstrated greater resolution in the exploration of the morphological space. In association with cluster analysis, MOO can efficiently explore vocal fold functional morphology. PMID:24771563
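
    The hypervolume indicator used for the parameter choice can be computed exactly in two dimensions with a simple sweep. A minimal sketch for two minimized objectives and a fixed reference point:

    ```python
    def hypervolume_2d(front, ref):
        """Area dominated by `front` (both objectives minimized) up to the
        reference point `ref`, computed by a left-to-right sweep."""
        pts = sorted(set(map(tuple, front)))  # ascending in the first objective
        hv, prev_f2 = 0.0, ref[1]
        for f1, f2 in pts:
            if f2 < prev_f2:                  # point contributes a new strip
                hv += (ref[0] - f1) * (prev_f2 - f2)
                prev_f2 = f2
        return hv

    # Example: two points trading off the objectives against ref = (2, 2).
    print(hypervolume_2d([(0.0, 1.0), (1.0, 0.0)], (2.0, 2.0)))  # -> 3.0
    ```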

  29. A modified NSGA-II solution for a new multi-objective hub maximal covering problem under uncertain shipments

    NASA Astrophysics Data System (ADS)

    Ebrahimi Zade, Amir; Sadegheih, Ahmad; Lotfi, Mohammad Mehdi

    2014-07-01

    Hubs are centers for the collection, rearrangement, and redistribution of commodities in transportation networks. In this paper, non-linear multi-objective formulations for single and multiple allocation hub maximal covering problems, as well as their linearized versions, are proposed. The formulations substantially reduce the complexity of the existing models owing to fewer constraints and variables. Uncertain shipments are also studied in the context of hub maximal covering problems: in many real-world applications, any link on the path from origin to destination may fail due to disruption. Therefore, in the proposed bi-objective model, maximizing the safety of the weakest path in the network is considered as a second objective alongside the traditional maximum coverage goal. Furthermore, to solve the bi-objective model, a modified version of NSGA-II with a new dynamic immigration operator is developed, in which the number of immigrants depends on the results of the other two common NSGA-II operators, i.e., mutation and crossover. Besides validating the proposed models, the computational results confirm the better performance of the modified NSGA-II over the traditional one.
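
    In general form, an immigration operator replaces part of the population with freshly generated individuals each generation. The sketch below takes the immigrant count as a plain parameter, whereas the paper sets it dynamically from the mutation and crossover outcomes:

    ```python
    def immigrate(pop, new_individual, n_immigrants):
        """Replace the tail of a best-first-sorted population with fresh
        individuals produced by the `new_individual` callable."""
        survivors = pop[:len(pop) - n_immigrants]
        return survivors + [new_individual() for _ in range(n_immigrants)]
    ```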

  30. Multi-objective optimization in spatial planning: Improving the effectiveness of multi-objective evolutionary algorithms (non-dominated sorting genetic algorithm II)

    NASA Astrophysics Data System (ADS)

    Karakostas, Spiros

    2015-05-01

    The multi-objective nature of most spatial planning initiatives and the numerous constraints that are introduced in the planning process by decision makers, stakeholders, etc., synthesize a complex spatial planning context in which the concept of solid and meaningful optimization is a unique challenge. This article investigates new approaches to enhance the effectiveness of multi-objective evolutionary algorithms (MOEAs) via the adoption of a well-known metaheuristic: the non-dominated sorting genetic algorithm II (NSGA-II). In particular, the contribution of a sophisticated crossover operator coupled with an enhanced initialization heuristic is evaluated against a series of metrics measuring the effectiveness of MOEAs. Encouraging results emerge for both the convergence rate of the evolutionary optimization process and the occupation of valuable regions of the objective space by non-dominated solutions, facilitating the work of spatial planners and decision makers. Based on the promising behaviour of both heuristics, topics for further research are proposed to improve their effectiveness.

  31. AMOBH: Adaptive Multiobjective Black Hole Algorithm.

    PubMed

    Wu, Chong; Wu, Tao; Fu, Kaiyuan; Zhu, Yuan; Li, Yongbo; He, Wangyong; Tang, Shengwen

    2017-01-01

    This paper proposes a new multiobjective evolutionary algorithm based on the black hole algorithm with a new individual density assessment (cell density), called the "adaptive multiobjective black hole algorithm" (AMOBH). Cell density has low computational complexity and maintains a good balance between the convergence and diversity of the Pareto front. The framework of AMOBH can be divided into three steps. First, the Pareto front is mapped to a new objective space called the parallel cell coordinate system. Then, to adjust the evolutionary strategies adaptively, Shannon entropy is employed to estimate the evolution status. Finally, the cell density is combined with a dominance strength assessment called cell dominance to evaluate the fitness of solutions. Compared with the state-of-the-art methods SPEA-II, PESA-II, NSGA-II, and MOEA/D, experimental results show that AMOBH performs well in terms of convergence rate, population diversity, population convergence, and coverage of different Pareto regions by subpopulations, and in most cases matches or improves on their time complexity.
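
    The entropy-based status estimate reduces to the Shannon entropy of how the current front occupies the grid cells of the parallel cell coordinate system. A minimal sketch over raw cell counts:

    ```python
    import math

    def shannon_entropy(cell_counts):
        """Entropy of the front's occupancy over grid cells; higher values
        mean the solutions are spread more evenly across cells."""
        total = sum(cell_counts)
        probs = [c / total for c in cell_counts if c > 0]
        return -sum(p * math.log(p) for p in probs)
    ```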

  32. Multi-Objectivising Combinatorial Optimisation Problems by Means of Elementary Landscape Decompositions.

    PubMed

    Ceberio, Josu; Calvo, Borja; Mendiburu, Alexander; Lozano, Jose A

    2018-02-15

    In the last decade, many works in combinatorial optimisation have shown that, due to the advances in multi-objective optimisation, the algorithms from this field could be used for solving single-objective problems as well. In this sense, a number of papers have proposed multi-objectivising single-objective problems in order to use multi-objective algorithms in their optimisation. In this article, we follow up this idea by presenting a methodology for multi-objectivising combinatorial optimisation problems based on elementary landscape decompositions of their objective function. Under this framework, each of the elementary landscapes obtained from the decomposition is considered as an independent objective function to optimise. In order to illustrate this general methodology, we consider four problems from different domains: the quadratic assignment problem and the linear ordering problem (permutation domain), the 0-1 unconstrained quadratic optimisation problem (binary domain), and the frequency assignment problem (integer domain). We implemented two widely known multi-objective algorithms, NSGA-II and SPEA2, and compared their performance with that of a single-objective GA. The experiments conducted on a large benchmark of instances of the four problems show that the multi-objective algorithms clearly outperform the single-objective approaches. Furthermore, a discussion on the results suggests that the multi-objective space generated by this decomposition enhances the exploration ability, thus permitting NSGA-II and SPEA2 to obtain better results in the majority of the tested instances.

  33. Derivation of Optimal Operating Rules for Large-scale Reservoir Systems Considering Multiple Trade-off

    NASA Astrophysics Data System (ADS)

    Zhang, J.; Lei, X.; Liu, P.; Wang, H.; Li, Z.

    2017-12-01

    Flood control operation of multi-reservoir systems, such as parallel and hybrid reservoirs, often suffers from complex interactions and trade-offs among tributaries and the mainstream. The optimization of such systems is computationally intensive due to nonlinear storage curves, numerous constraints and complex hydraulic connections. This paper aims to derive optimal flood control operating rules based on the trade-off among tributaries and the mainstream using a new algorithm known as the weighted non-dominated sorting genetic algorithm II (WNSGA II). WNSGA II locates the Pareto frontier in the non-dominated region efficiently owing to directed searching with a weighted crowding distance, and the results are compared with those of conventional operating rules (COR) and a single-objective genetic algorithm (GA). The Xijiang river basin in China is selected as a case study, with eight reservoirs and five flood control sections within four tributaries and the mainstream. Furthermore, the effects of inflow uncertainty have been assessed. Results indicate that: (1) WNSGA II locates the non-dominated solutions faster and provides a better Pareto frontier than the traditional non-dominated sorting genetic algorithm II (NSGA II) due to the weighted crowding distance; (2) WNSGA II outperforms COR and GA on flood control in the whole basin; and (3) the multi-objective operating rules from WNSGA II deal with the inflow uncertainties better than COR. Therefore, WNSGA II can be used to derive stable operating rules for large-scale reservoir systems effectively and efficiently.
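
    The core modification is a crowding distance with per-objective weights, which biases selection toward preferred regions of the front. A hedged sketch of one way to write it; the paper's exact weighting rule may differ:

    ```python
    import numpy as np

    def weighted_crowding_distance(front, weights):
        """Weighted NSGA-II crowding distance. front: (n, m) objective matrix,
        weights: length-m per-objective weights. Boundary points get inf."""
        F = np.asarray(front, dtype=float)
        n, m = F.shape
        d = np.zeros(n)
        for j in range(m):
            order = np.argsort(F[:, j])
            span = F[order[-1], j] - F[order[0], j] or 1.0  # guard zero span
            d[order[0]] = d[order[-1]] = np.inf             # keep extremes
            d[order[1:-1]] += weights[j] * (F[order[2:], j] - F[order[:-2], j]) / span
        return d
    ```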

  20. Multi-objective optimal design of sandwich panels using a genetic algorithm

    NASA Astrophysics Data System (ADS)

    Xu, Xiaomei; Jiang, Yiping; Lee, Heow Pueh

    2017-10-01

    In this study, an optimization problem concerning sandwich panels is investigated by simultaneously considering the two objectives of minimizing the panel mass and maximizing the sound insulation performance. First of all, the acoustic model of sandwich panels is discussed, which provides a foundation to model the acoustic objective function. Then the optimization problem is formulated as a bi-objective programming model, and a solution algorithm based on the non-dominated sorting genetic algorithm II (NSGA-II) is provided to solve the proposed model. Finally, taking an example of a sandwich panel that is expected to be used as an automotive roof panel, numerical experiments are carried out to verify the effectiveness of the proposed model and solution algorithm. Numerical results demonstrate in detail how the core material, geometric constraints and mechanical constraints impact the optimal designs of sandwich panels.

  1. Solving a bi-objective mathematical model for location-routing problem with time windows in multi-echelon reverse logistics using metaheuristic procedure

    NASA Astrophysics Data System (ADS)

    Ghezavati, V. R.; Beigi, M.

    2016-12-01

    During the last decade, stringent pressures from environmental and social requirements have spurred interest in designing reverse logistics (RL) networks. The success of a logistics system may depend on the decisions about facility locations and vehicle routings. The location-routing problem (LRP) simultaneously locates the facilities and designs the travel routes for vehicles among established facilities and existing demand points. In this paper, we consider the location-routing problem with time windows (LRPTW) with a homogeneous fleet, together with the design of a multi-echelon, capacitated reverse logistics network, a setting that arises in many real-life logistics management situations. The proposed RL network consists of hybrid collection/inspection centers, recovery centers and disposal centers. We present a new bi-objective mathematical programming (BOMP) formulation for the LRPTW in reverse logistics. Since this type of problem is NP-hard, the non-dominated sorting genetic algorithm II (NSGA-II) is proposed to obtain the Pareto frontier for the given problem. Several numerical examples are presented to illustrate the effectiveness of the proposed model and algorithm. The present work also implements the ɛ-constraint method in GAMS software for producing the Pareto-optimal solutions of the BOMP, and the results of the proposed algorithm are compared with those of the ɛ-constraint method. The computational results show that the ɛ-constraint method is able to solve small-size instances to optimality within reasonable computing times, while for medium-to-large-sized problems the proposed NSGA-II works better.
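
    For readers unfamiliar with the ɛ-constraint benchmark used here, the Python sketch below traces the frontier of a toy discrete bi-objective problem by minimizing f1 subject to f2 ≤ ɛ over a sweep of ɛ values. The toy objectives and the brute-force solver are illustrative stand-ins for the paper's LRPTW model and its GAMS implementation.

        # ε-constraint sketch: one single-objective problem per ε level.
        from itertools import product

        def f1(x):                 # toy cost objective (minimise)
            return sum(x)

        def f2(x):                 # toy penalty objective (minimise)
            return sum((1 - xi) * w for xi, w in zip(x, (4, 3, 2, 5)))

        solutions = list(product((0, 1), repeat=4))
        pareto = {}
        for eps in sorted({f2(x) for x in solutions}):
            feasible = [x for x in solutions if f2(x) <= eps]
            best = min(feasible, key=f1)        # brute force stands in for GAMS
            pareto[(f1(best), f2(best))] = best
        for obj, x in sorted(pareto.items()):
            print(obj, x)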

  2. New mathematical modeling for a location-routing-inventory problem in a multi-period closed-loop supply chain in a car industry

    NASA Astrophysics Data System (ADS)

    Forouzanfar, F.; Tavakkoli-Moghaddam, R.; Bashiri, M.; Baboli, A.; Hadji Molana, S. M.

    2017-11-01

    This paper studies a location-routing-inventory problem in a multi-period closed-loop supply chain with multiple suppliers, producers, distribution centers, customers, collection centers, recovery centers, and recycling centers. In this supply chain, the centers have multiple levels; a price increase factor is considered for operational costs at the centers; inventory and shortage (including lost sales and backlog) are allowed at production centers; and the arrival times of each plant's vehicles at its dedicated distribution centers, as well as their departures, are modeled, such that the sum of system costs and the sum of the maximum time at each level are minimized. The problem is formulated as a bi-objective nonlinear integer programming model. Due to its NP-hard nature, two meta-heuristics, namely the non-dominated sorting genetic algorithm (NSGA-II) and multi-objective particle swarm optimization (MOPSO), are used for large instances. In addition, a Taguchi method is used to tune the parameters of these algorithms to enhance their performance. To evaluate the efficiency of the proposed algorithms, the results for small-sized problems are compared with those of the ɛ-constraint method. Finally, four measuring metrics, namely the number of Pareto solutions, mean ideal distance, spacing metric, and quality metric, are used to compare NSGA-II and MOPSO.
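
    Two of the four comparison metrics have compact textbook definitions. The Python sketch below computes the number of Pareto solutions, the mean ideal distance (here taken from the component-wise minimum as the ideal point) and Schott's spacing metric; the paper's exact definitions, in particular its choice of ideal point and its quality metric, may differ.

        import math

        def mean_ideal_distance(front, ideal=None):
            """Mean Euclidean distance of front members from the ideal point."""
            if ideal is None:  # assume the component-wise minimum as the ideal
                ideal = [min(p[k] for p in front) for k in range(len(front[0]))]
            return sum(math.dist(p, ideal) for p in front) / len(front)

        def spacing(front):
            """Schott's spacing: spread of nearest-neighbour Manhattan distances."""
            d = [min(sum(abs(a - b) for a, b in zip(p, q))
                     for j, q in enumerate(front) if j != i)
                 for i, p in enumerate(front)]
            mean = sum(d) / len(d)
            return math.sqrt(sum((di - mean) ** 2 for di in d) / (len(d) - 1))

        front = [(1.0, 9.0), (2.0, 6.0), (4.0, 4.0), (8.0, 1.0)]
        print("Pareto solutions:", len(front))
        print("Mean ideal distance:", mean_ideal_distance(front))
        print("Spacing:", spacing(front))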

  3. Investigation on Reservoir Operation of Agricultural Water Resources Management for Drought Mitigation

    NASA Astrophysics Data System (ADS)

    Cheng, C. L.

    2015-12-01

    In Taiwan, population growth and economic development have led to considerable and increasing demands for natural water resources in the last decades. Under such conditions, water shortage problems have frequently occurred in northern Taiwan in recent years, such that water is usually transferred from irrigation sectors to public sectors during drought periods. Facing the uneven spatial and temporal distribution of water resources and the problem of increasing water shortages, it is a primary and critical issue to simultaneously satisfy multiple water uses through adequate reservoir operations for sustainable water resources management. Therefore, we intend to build an intelligent reservoir operation system for the assessment of agricultural water resources management strategies in response to food security during drought periods. This study first uses the grey system to forecast the agricultural water demand during February and April for assessing future agricultural water demands. In the second part, we build an intelligent water resources system by using the non-dominated sorting genetic algorithm-II (NSGA-II), an optimization tool, to search the water allocation series based on the different water demand scenarios created in the first part, optimizing the water supply operation for different water sectors. The results can serve as a reference guide for adequate agricultural water resources management during drought periods. Keywords: Non-dominated sorting genetic algorithm-II (NSGA-II); Grey System; Optimization; Agricultural Water Resources Management.

  4. An evolutionary algorithm technique for intelligence, surveillance, and reconnaissance plan optimization

    NASA Astrophysics Data System (ADS)

    Langton, John T.; Caroli, Joseph A.; Rosenberg, Brad

    2008-04-01

    To support an Effects Based Approach to Operations (EBAO), Intelligence, Surveillance, and Reconnaissance (ISR) planners must optimize collection plans within an evolving battlespace. A need exists for a decision support tool that allows ISR planners to rapidly generate and rehearse high-performing ISR plans that balance multiple objectives and constraints to address dynamic collection requirements for assessment. To meet this need we have designed an evolutionary algorithm (EA)-based "Integrated ISR Plan Analysis and Rehearsal System" (I2PARS) to support Effects-based Assessment (EBA). I2PARS supports ISR mission planning and dynamic replanning to coordinate assets and optimize their routes, allocation and tasking. It uses an evolutionary algorithm to address the large parametric space of route-finding problems which is sometimes discontinuous in the ISR domain because of conflicting objectives such as minimizing asset utilization yet maximizing ISR coverage. EAs are uniquely suited for generating solutions in dynamic environments and also allow user feedback. They are therefore ideal for "streaming optimization" and dynamic replanning of ISR mission plans. I2PARS uses the Non-dominated Sorting Genetic Algorithm (NSGA-II) to automatically generate a diverse set of high performing collection plans given multiple objectives, constraints, and assets. Intended end users of I2PARS include ISR planners in the Combined Air Operations Centers and Joint Intelligence Centers. Here we show the feasibility of applying the NSGA-II algorithm and EAs in general to the ISR planning domain. Unique genetic representations and operators for optimization within the ISR domain are presented along with multi-objective optimization criteria for ISR planning. Promising results of the I2PARS architecture design, early software prototype, and limited domain testing of the new algorithm are discussed. We also present plans for future research and development, as well as technology transition goals.

  5. Multiobjective immune algorithm with nondominated neighbor-based selection.

    PubMed

    Gong, Maoguo; Jiao, Licheng; Du, Haifeng; Bo, Liefeng

    2008-01-01

    The Nondominated Neighbor Immune Algorithm (NNIA) is proposed for multiobjective optimization, using a novel nondominated neighbor-based selection technique, an immune-inspired operator, two heuristic search operators, and elitism. The unique selection technique of NNIA selects only a minority of isolated nondominated individuals in the population. The selected individuals are then cloned proportionally to their crowding-distance values before heuristic search. By using nondominated neighbor-based selection and proportional cloning, NNIA pays more attention to the less-crowded regions of the current trade-off front. We compare NNIA with NSGA-II, SPEA2, PESA-II, and MISA in solving five DTLZ problems, five ZDT problems, and three low-dimensional problems. The statistical analysis based on three performance metrics, namely the coverage of two sets, the convergence metric, and the spacing, shows that the unique selection method is effective and that NNIA is an effective algorithm for solving multiobjective optimization problems. An empirical study of NNIA's scalability with respect to the number of objectives shows that the new algorithm scales well.

  6. Explore the impacts of river flow and quality on biodiversity for water resources management by AI techniques

    NASA Astrophysics Data System (ADS)

    Chang, Fi-John; Tsai, Wen-Ping; Chang, Li-Chiu

    2016-04-01

    Water resources development is very challenging in Taiwan due to its diverse geographic environment and climatic conditions. To pursue sustainable water resources development, rationality and integrity are essential for water resources planning. River water quality and flow regimes are closely related to each other and affect river ecosystems simultaneously. This study aims to explore the complex impacts of water quality and flow regimes on the fish community in order to comprehend the state of the eco-hydrological system in the Danshui River of northern Taiwan. To form an effective and comprehensive strategy for sustainable water resources management, this study first models fish diversity by implementing a hybrid artificial neural network (ANN) based on long-term heterogeneous observational data of water quality, stream flow and fish species in the river. Then we use stream flow to estimate the loss of dissolved oxygen based on back-propagation neural networks (BPNNs). Finally, the non-dominated sorting genetic algorithm II (NSGA-II) is established for river flow management over the Shihmen Reservoir, the main reservoir in the study area. In addition to satisfying the water demands of human beings and ecosystems, we also consider water quality in river flow management. The ecosystem requirement takes the form of maximizing fish diversity, which can be estimated by the hybrid ANN. The human requirement is to provide a higher satisfaction degree of water supply, while the water quality requirement is to reduce the loss of dissolved oxygen in the river among flow stations. The results demonstrate that the proposed methodology can offer diversified alternative strategies for reservoir operation and improve reservoir operation strategies for producing downstream flows that better meet both human and ecosystem needs as well as maintain river water quality. Keywords: Artificial intelligence (AI), Artificial neural networks (ANNs), Non-dominated sorting genetic algorithm II (NSGA-II), Sustainable water resources management, Flow regime, River ecosystem.

  7. Multi-objective optimal design of magnetorheological engine mount based on an improved non-dominated sorting genetic algorithm

    NASA Astrophysics Data System (ADS)

    Zheng, Ling; Duan, Xuwei; Deng, Zhaoxue; Li, Yinong

    2014-03-01

    A novel flow-mode magneto-rheological (MR) engine mount integrating a diaphragm de-coupler and a spoiler plate is designed and developed to isolate the engine and transmission from the chassis over a wide frequency range and to overcome stiffening at high frequencies. A lumped-parameter model of the MR engine mount in a single-degree-of-freedom system is further developed based on the bond graph method to predict the performance of the MR engine mount accurately. An optimization model is established to minimize the total force transmissibility over the several frequency ranges addressed. In this mathematical model, the lumped parameters are the design variables; the maximum force transmissibility and its corresponding frequency in the low-frequency range, as well as the individual lumped parameters, are treated as constraints. A multiple-interval sensitivity analysis method is developed to select the optimization variables and improve the efficiency of the optimization process. An improved non-dominated sorting genetic algorithm (NSGA-II) is used to solve the multi-objective optimization problem. A synthesized distance between individuals in the Pareto set and individuals in the engineering-feasible set is defined and calculated, from which a set of real design parameters is obtained via the relationship between the optimal lumped parameters and the practical design parameters of the MR engine mount. The program flowchart for the improved NSGA-II is given. The results demonstrate the effectiveness of the proposed optimization approach in minimizing the total force transmissibility over the frequency ranges addressed.

  8. Evaluation of medium-range ensemble flood forecasting based on calibration strategies and ensemble methods in Lanjiang Basin, Southeast China

    NASA Astrophysics Data System (ADS)

    Liu, Li; Gao, Chao; Xuan, Weidong; Xu, Yue-Ping

    2017-11-01

    Ensemble flood forecasts by hydrological models using numerical weather prediction products as forcing data are becoming more commonly used in operational flood forecasting applications. In this study, a hydrological ensemble flood forecasting system, comprising an automatically calibrated Variable Infiltration Capacity model and quantitative precipitation forecasts from the TIGGE dataset, is constructed for the Lanjiang Basin, Southeast China. The impacts of calibration strategies and ensemble methods on the performance of the system are then evaluated. The hydrological model is optimized by the parallel-programmed ε-NSGA-II multi-objective algorithm. According to the solutions found by ε-NSGA-II, two differently parameterized models are determined to simulate daily flows and peak flows at each of the three hydrological stations. A simple yet effective modular approach is then proposed to combine the daily and peak flows at the same station into one composite series. Five ensemble methods and various evaluation metrics are adopted. The results show that ε-NSGA-II provides an objective determination of parameter estimates, and the parallel program permits a more efficient simulation. It is also demonstrated that the forecasts from ECMWF have more favorable skill scores than the other Ensemble Prediction Systems. The multimodel ensembles have advantages over all the single-model ensembles, and the multimodel methods weighted on members and skill scores outperform the other methods. Furthermore, the overall performance at the three stations can be satisfactory up to ten days; however, hydrological errors can degrade the skill score by approximately 2 days, and their influence persists, with a weakening trend, up to the 10-day lead time. With respect to peak flows selected by the Peaks Over Threshold approach, the ensemble means from single models or multimodels are generally underestimated, indicating that while the ensemble mean brings overall improvement in forecasting flows, for peak values taking the flood forecasts from each individual member into account is more appropriate.

  9. Density functional theory calculations for the band gap and formation energy of Pr4-xCaxSi12O3+xN18-x; a highly disordered compound with low symmetry and a large cell size.

    PubMed

    Hong, Sung Un; Singh, Satendra Pal; Pyo, Myoungho; Park, Woon Bae; Sohn, Kee-Sun

    2017-06-28

    A novel oxynitride compound, Pr4-xCaxSi12O3+xN18-x, synthesized using a solid-state route, has been characterized as a monoclinic structure in the C2 space group using Rietveld refinement on synchrotron powder X-ray diffraction data. The crystal structure of this compound is disordered due to the random distribution of Ca/Pr and N/O ions at various Wyckoff sites. A pragmatic approach to ab initio calculation based on density functional theory (DFT) has been implemented for this disordered compound to obtain acceptable values of the band gap and formation energy. In general, DFT calculations for a disordered compound adopt a sufficiently large supercell and a large ensemble of configurations to simulate the random distribution of ions; however, such an approach is time consuming and cost ineffective. Even a single unit cell model gave rise to 43 008 independent configurations as input models for the DFT calculations. Since it was nearly impossible to calculate the formation energy and the band gap energy for all 43 008 configurations, an elitist non-dominated sorting genetic algorithm (NSGA-II) was employed to find the plausible configurations. In the NSGA-II, the 43 008 configurations were treated mathematically as genomes, with the calculated band gap and formation energy as the objective (fitness) functions. Generalized gradient approximation (GGA) was first employed in the preliminary screening using NSGA-II, and thereafter a hybrid functional calculation (HSE06) was executed only for the most plausible GGA-relaxed configurations, those with lower formation energies and higher band gap energies. The final band gap energy (3.62 eV), obtained after averaging over the selected configurations, closely resembles the experimental band gap value (4.11 eV).
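
    The genome encoding is the enabling device here: each configuration fixes which mixed Wyckoff sites host Ca rather than Pr and O rather than N. The Python sketch below shows one plausible way to encode such occupancies as genomes for an NSGA-II-style search; the site counts and the placeholder objective function are purely illustrative, since the real objectives come from DFT relaxations.

        import random

        N_CATION_SITES, N_CA = 8, 3   # illustrative counts, not the real structure
        N_ANION_SITES, N_O = 12, 4

        def random_genome(rng):
            """A genome = which cation sites take Ca and which anion sites take O."""
            return (tuple(sorted(rng.sample(range(N_CATION_SITES), N_CA))),
                    tuple(sorted(rng.sample(range(N_ANION_SITES), N_O))))

        def objectives(genome):
            """Placeholder for the DFT step: in the study each genome is relaxed
            and scored by (formation energy, band gap); dummy numbers here."""
            ca_sites, o_sites = genome
            return (0.01 * sum(ca_sites), -0.005 * sum(o_sites))

        rng = random.Random(1)
        for genome in {random_genome(rng) for _ in range(5)}:
            print(genome, objectives(genome))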

  10. Optimal platform design using non-dominated sorting genetic algorithm II and technique for order of preference by similarity to ideal solution; application to automotive suspension system

    NASA Astrophysics Data System (ADS)

    Shojaeefard, Mohammad Hassan; Khalkhali, Abolfazl; Faghihian, Hamed; Dahmardeh, Masoud

    2018-03-01

    Unlike conventional approaches where optimization is performed on a unique component of a specific product, optimal design of a set of components for use in a product family can significantly reduce costs. Increasing the commonality and performance of the product platform simultaneously is a multi-objective optimization problem (MOP). Several optimization methods have been reported to solve such MOPs; what is less discussed, however, is how to find the trade-off points among the obtained non-dominated optimum points. This article investigates the optimal design of a product family using the non-dominated sorting genetic algorithm II (NSGA-II) and proposes the technique for order of preference by similarity to ideal solution (TOPSIS) to find the trade-off points among the obtained non-dominated results while compromising all objective functions together. A case study for a family of suspension systems is presented, considering performance and commonality. The results indicate the effectiveness of the proposed method in obtaining trade-off points with the best possible performance while maximizing the common parts.
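
    TOPSIS itself is a short, well-defined procedure: vector-normalize the objective matrix, weight it, and score each Pareto solution by its relative closeness to the ideal point. The Python sketch below is a generic implementation; the weights and the toy commonality/performance numbers are illustrative, not the paper's data.

        import math

        def topsis(front, weights, benefit):
            """Rank alternatives by relative closeness to the ideal solution.
            `benefit[k]` is True if objective k is to be maximised."""
            m = len(front[0])
            norms = [math.sqrt(sum(p[k] ** 2 for p in front)) for k in range(m)]
            v = [[weights[k] * p[k] / norms[k] for k in range(m)] for p in front]
            cols = list(zip(*v))
            ideal = [max(c) if benefit[k] else min(c) for k, c in enumerate(cols)]
            anti = [min(c) if benefit[k] else max(c) for k, c in enumerate(cols)]
            return [math.dist(row, anti) / (math.dist(row, ideal) + math.dist(row, anti))
                    for row in v]

        # toy data: commonality (maximise) vs. performance deviation (minimise)
        front = [(0.9, 5.0), (0.7, 3.0), (0.5, 2.0)]
        scores = topsis(front, weights=(0.5, 0.5), benefit=(True, False))
        print(scores, "-> pick index", max(range(len(front)), key=scores.__getitem__))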

  11. A New Algorithm Using the Non-Dominated Tree to Improve Non-Dominated Sorting.

    PubMed

    Gustavsson, Patrik; Syberfeldt, Anna

    2018-01-01

    Non-dominated sorting is a technique often used in evolutionary algorithms to determine the quality of solutions in a population. The most common algorithm is the Fast Non-dominated Sort (FNS). This algorithm, however, has the drawback that its performance deteriorates when the population size grows. The same drawback applies to other non-dominated sorting algorithms such as the Efficient Non-dominated Sort with Binary Strategy (ENS-BS). An algorithm suggested to overcome this drawback is the Divide-and-Conquer Non-dominated Sort (DCNS), which works well on a limited number of objectives but deteriorates when the number of objectives grows. This article presents a new, more efficient algorithm called the Efficient Non-dominated Sort with Non-Dominated Tree (ENS-NDT). ENS-NDT is an extension of the ENS-BS algorithm and uses a novel Non-Dominated Tree (NDTree) to speed up the non-dominated sorting. ENS-NDT is able to handle large population sizes and large numbers of objectives more efficiently than existing algorithms for non-dominated sorting. In the article, it is shown that with ENS-NDT the runtime of multi-objective optimization algorithms such as the Non-Dominated Sorting Genetic Algorithm II (NSGA-II) can be substantially reduced.
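
    For reference, the O(MN^2) baseline that ENS-BS, DCNS and ENS-NDT all accelerate is Deb's Fast Non-dominated Sort. A compact Python version is sketched below; the faster algorithms return the same fronts but organize the pairwise comparisons differently.

        def dominates(u, v):
            return all(a <= b for a, b in zip(u, v)) and any(a < b for a, b in zip(u, v))

        def fast_nondominated_sort(objs):
            """Deb's FNS: returns fronts as lists of indices (minimisation)."""
            n = len(objs)
            dominated_by = [[] for _ in range(n)]  # S_p: solutions p dominates
            dom_count = [0] * n                    # n_p: how many dominate p
            fronts = [[]]
            for p in range(n):
                for q in range(n):
                    if p != q and dominates(objs[p], objs[q]):
                        dominated_by[p].append(q)
                    elif p != q and dominates(objs[q], objs[p]):
                        dom_count[p] += 1
                if dom_count[p] == 0:
                    fronts[0].append(p)
            i = 0
            while fronts[i]:
                nxt = []
                for p in fronts[i]:
                    for q in dominated_by[p]:
                        dom_count[q] -= 1
                        if dom_count[q] == 0:
                            nxt.append(q)
                fronts.append(nxt)
                i += 1
            return fronts[:-1]

        print(fast_nondominated_sort([(1, 5), (2, 3), (3, 4), (4, 1), (5, 5)]))
        # -> [[0, 1, 3], [2], [4]]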

  12. A new methodology for surcharge risk management in urban areas (case study: Gonbad-e-Kavus city).

    PubMed

    Hooshyaripor, Farhad; Yazdi, Jafar

    2017-02-01

    This research presents a simulation-optimization model for urban flood mitigation that integrates the Non-dominated Sorting Genetic Algorithm II (NSGA-II) with the Storm Water Management Model (SWMM) hydraulic model and a curve-number-based hydrologic model of low impact development technologies in Gonbad-e-Kavus, a small city in the north of Iran. In the developed model, the best performance of the system relies on the optimal layout and capacity of retention ponds over the study area so as to reduce surcharge from the manholes under a set of storm event loads, while the available investment plays a restricting role. This yields a multi-objective optimization problem with two conflicting objectives, solved successfully by NSGA-II to find a set of optimal solutions known as the Pareto front. To analyze the results, a new factor, the investment priority index (IPI), is defined, which reflects the risk of surcharging over the network and the priority of mitigation actions. The IPI is calculated from the probability of pond selection at candidate locations and the average depth of the ponds across all Pareto-front solutions. The IPI can help decision makers arrange a long-term progressive plan that prioritizes high-risk areas once an optimal solution has been selected.
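
    A minimal reading of the IPI is sketched below: for each candidate pond location, combine the fraction of Pareto-front solutions that select it with the average pond depth over those solutions. The product form and the data layout are assumptions; the paper's exact formula may differ.

        def investment_priority_index(pareto_designs, n_sites):
            """`pareto_designs`: one dict per Pareto solution mapping a selected
            site index to its pond depth (unselected sites are absent)."""
            n = len(pareto_designs)
            ipi = []
            for site in range(n_sites):
                depths = [d[site] for d in pareto_designs if site in d]
                p_select = len(depths) / n                 # selection probability
                avg_depth = sum(depths) / len(depths) if depths else 0.0
                ipi.append(p_select * avg_depth)
            return ipi

        designs = [{0: 1.2, 2: 0.8}, {0: 1.0}, {0: 1.4, 1: 0.5}]  # toy Pareto set
        print(investment_priority_index(designs, n_sites=3))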

  13. Heterogeneous Multi-Robot Multi-Sensor Platform for Intruder Detection

    DTIC Science & Technology

    2009-09-15

    Each sensor signal si follows the propagation model si ~ N(b0i + b1i · log Di, τi) with variance τi. The initial parameters (b0i, b1i, τi) of the model are unknown, and the training ... the advantage of the MOO-learned mode would become more significant over time compared with the other mode. [Figure residue (axis tick values) removed.] Cited: "... nondominated sorting genetic algorithm for multi-objective optimization: NSGA-II," in Parallel Problem Solving from Nature (PPSN VI), M. Schoenauer ...

  14. Constrained Multiobjective Biogeography Optimization Algorithm

    PubMed Central

    Mo, Hongwei; Xu, Zhidan; Xu, Lifang; Wu, Zhou; Ma, Haiping

    2014-01-01

    Multiobjective optimization involves minimizing or maximizing multiple objective functions subject to a set of constraints. In this study, a novel constrained multiobjective biogeography optimization algorithm (CMBOA) is proposed; it is the first biogeography optimization algorithm for constrained multiobjective optimization. In CMBOA, a disturbance migration operator is designed to generate diverse feasible individuals in order to promote the diversity of individuals on the Pareto front. Infeasible individuals near the feasible region are evolved toward feasibility by recombining with their nearest nondominated feasible individuals. The convergence of CMBOA is proved using probability theory. The performance of CMBOA is evaluated on a set of six benchmark problems, and the experimental results show that CMBOA performs better than or comparably to the classical NSGA-II and IS-MOEA. PMID:25006591

  15. A stochastic conflict resolution model for trading pollutant discharge permits in river systems.

    PubMed

    Niksokhan, Mohammad Hossein; Kerachian, Reza; Amin, Pedram

    2009-07-01

    This paper presents an efficient methodology for developing pollutant discharge permit trading in river systems, considering the conflicting interests of the decision-makers and stakeholders involved. In this methodology, a trade-off curve between objectives is developed using a powerful and recently developed multi-objective genetic algorithm known as the Nondominated Sorting Genetic Algorithm II (NSGA-II). The best non-dominated solution on the trade-off curve is identified using the Young conflict resolution theory, which considers the utility functions of the decision makers and stakeholders of the system. These utility functions are related to the total treatment cost and a fuzzy risk of violating the water quality standards. The fuzzy risk is evaluated using Monte Carlo analysis. Finally, an optimization model provides the discharge permit trading policies. The practical utility of the proposed methodology in decision-making is illustrated through a realistic example of the Zarjub River in the northern part of Iran.

  16. Water Quality Planning in Rivers: Assimilative Capacity and Dilution Flow.

    PubMed

    Hashemi Monfared, Seyed Arman; Dehghani Darmian, Mohsen; Snyder, Shane A; Azizyan, Gholamreza; Pirzadeh, Bahareh; Azhdary Moghaddam, Mehdi

    2017-11-01

    Population growth, urbanization and industrial expansion are consequentially linked to increasing pollution around the world. The sources of pollution are vast, including both point and nonpoint sources, which makes control and abatement intrinsically challenging. This paper focuses on pollutant concentration and the distance over which the pollution is in contact with the river water as objective functions to determine two characteristics necessary for water quality management in a river: the assimilative capacity and the dilution flow. The mean area of unacceptable concentration and the affected distance (X) are considered as the two objective functions in determining the dilution flow with a non-dominated sorting genetic algorithm II (NSGA-II) optimization algorithm. The results demonstrate that the variation of river flow discharge in different seasons can modify the assimilative capacity by up to 97%. Moreover, when using dilution flow as a water quality management tool, the results reveal that the mean area of unacceptable concentration and X change by up to 97% and 93%, respectively.

  17. Calibrating a Rainfall-Runoff and Routing Model for the Continental United States

    NASA Astrophysics Data System (ADS)

    Jankowfsky, S.; Li, S.; Assteerawatt, A.; Tillmanns, S.; Hilberts, A.

    2014-12-01

    Catastrophe risk models are widely used in the insurance industry to estimate the cost of risk. The models consist of hazard models linked to vulnerability and financial loss models. In flood risk models, the hazard model generates inundation maps. In order to develop country-wide inundation maps for different return periods, a rainfall-runoff and routing model is run using stochastic rainfall data. The simulated discharge and runoff are then input to a two-dimensional inundation model, which produces the flood maps. In order to obtain realistic flood maps, the rainfall-runoff and routing models have to be calibrated with observed discharge data. The rainfall-runoff model applied here is a semi-distributed model based on the Topmodel (Beven and Kirkby, 1979) approach, extended with snowmelt and evapotranspiration models. The routing model is based on the Muskingum-Cunge (Cunge, 1969) approach and includes the simulation of lakes and reservoirs using the linear reservoir approach. Both models were calibrated using the multiobjective NSGA-II (Deb et al., 2002) genetic algorithm with NLDAS forcing data and around 4500 USGS discharge gauges for the period 1979-2013. Additional gauges having no data after 1979 were calibrated using CPC rainfall data. The model performed well in wetter regions and shows the difficulty of simulating areas with sinks, such as karstic areas, and dry areas. Beven, K., Kirkby, M., 1979. A physically based, variable contributing area model of basin hydrology. Hydrol. Sci. Bull. 24 (1), 43-69. Cunge, J.A., 1969. On the subject of a flood propagation computation method (Muskingum method), J. Hydr. Research, 7(2), 205-230. Deb, K., Pratap, A., Agarwal, S., Meyarivan, T., 2002. A fast and elitist multiobjective genetic algorithm: NSGA-II, IEEE Transactions on evolutionary computation, 6(2), 182-197.

  18. Multi Objective Optimization of Yarn Quality and Fibre Quality Using Evolutionary Algorithm

    NASA Astrophysics Data System (ADS)

    Ghosh, Anindya; Das, Subhasis; Banerjee, Debamalya

    2013-03-01

    The quality and cost of the resulting yarn play a significant role in determining its end application. The challenging task of any spinner lies in producing a good quality yarn with an added cost benefit. The present work performs a multi-objective optimization on two objectives, viz. maximization of cotton yarn strength and minimization of raw material quality. The first objective function is formulated from an artificial neural network input-output relation between cotton fibre properties and yarn strength; the second from the well-known regression equation of the spinning consistency index. These two objectives are conflicting in nature, i.e. no single combination of cotton fibre parameters exists that produces maximum yarn strength and minimum cotton fibre quality simultaneously. The problem therefore has several optimal solutions, from which a trade-off is chosen depending upon the requirements of the user. In this work, the optimal solutions are obtained with an elitist multi-objective evolutionary algorithm, the Non-dominated Sorting Genetic Algorithm II (NSGA-II). These optimum solutions may lead to the efficient exploitation of raw materials to produce better quality yarns at low cost.

  19. A simulation-optimization model for Stone column-supported embankment stability considering rainfall effect

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Deb, Kousik, E-mail: kousik@civil.iitkgp.ernet.in; Dhar, Anirban, E-mail: anirban@civil.iitkgp.ernet.in; Purohit, Sandip, E-mail: sandip.purohit91@gmail.com

    Landslide due to rainfall has been and continues to be one of the most important concerns of geotechnical engineering. The paper presents the variation of the factor of safety of a stone column-supported embankment constructed over soft soil due to changes in water level during an incessant period of rainfall. A combined simulation-optimization based methodology has been proposed to predict the critical failure surface of the embankment and to optimize the corresponding factor of safety under rainfall conditions using the evolutionary genetic algorithm NSGA-II (Non-Dominated Sorted Genetic Algorithm-II). It has been observed that the position of the water table can be reliably estimated for varying periods of infiltration using the developed numerical method. A parametric study is presented on the optimum factor of safety of the embankment and its corresponding critical failure surface under the steady-state infiltration condition. Results show that in the case of floating stone columns, the period of infiltration has no effect on the factor of safety; the critical failure surfaces for a particular floating column length even remain the same irrespective of rainfall duration.

  20. Multi-objective evolutionary algorithms for fuzzy classification in survival prediction.

    PubMed

    Jiménez, Fernando; Sánchez, Gracia; Juárez, José M

    2014-03-01

    This paper presents a novel rule-based fuzzy classification methodology for survival/mortality prediction in severely burnt patients. Due to the ethical aspects involved in this medical scenario, physicians tend not to accept a computer-based evaluation unless they understand why and how such a recommendation is given. Therefore, any fuzzy classifier model must be both accurate and interpretable. The proposed methodology is a three-step process: (1) multi-objective constrained optimization of a patient's data set, using Pareto-based elitist multi-objective evolutionary algorithms to maximize accuracy and minimize the complexity (number of rules) of classifiers, subject to interpretability constraints; this step produces a set of alternative (Pareto) classifiers; (2) linguistic labeling, which assigns a linguistic label to each fuzzy set of the classifiers; this step is essential to the interpretability of the classifiers; (3) decision making, whereby a classifier is chosen, if it is satisfactory, according to the preferences of the decision maker; if no classifier is satisfactory, the process restarts at step (1) with a different input parameter set. The performance of three multi-objective evolutionary algorithms, the niched pre-selection multi-objective algorithm, the elitist Pareto-based multi-objective evolutionary algorithm for diversity reinforcement (ENORA) and the non-dominated sorting genetic algorithm (NSGA-II), was tested using a patient data set from an intensive care burn unit and a data set from a standard machine learning repository. The results are compared using the hypervolume multi-objective metric, have additionally been compared with other non-evolutionary techniques, and were validated with a multi-objective cross-validation technique. Our proposal improves on the classification rate obtained by non-evolutionary techniques (decision trees, artificial neural networks, Naive Bayes, and case-based reasoning), obtaining with ENORA a classification rate of 0.9298, a specificity of 0.9385, and a sensitivity of 0.9364, with 14.2 interpretable fuzzy rules on average, and it improves the accuracy and interpretability of the classifiers compared with those non-evolutionary techniques. We also conclude that ENORA outperforms the niched pre-selection and NSGA-II algorithms. Moreover, given that our multi-objective evolutionary methodology is non-combinatorial, being based on real-parameter optimization, the time cost is significantly reduced compared with evolutionary approaches in the literature based on combinatorial optimization. Copyright © 2014 Elsevier B.V. All rights reserved.
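
    The hypervolume metric used for the comparison has a simple closed form in two objectives: sweep the front in ascending order of the first objective and accumulate the rectangles it dominates up to a reference point. The Python sketch below covers the bi-objective minimization case with toy numbers; it is a generic illustration, not tied to the paper's data.

        def hypervolume_2d(front, ref):
            """Hypervolume of a bi-objective front (minimisation) relative to a
            reference point that every front member dominates."""
            hv, prev_f2 = 0.0, ref[1]
            for g1, g2 in sorted(front):        # ascending f1
                if g2 < prev_f2:                # skip points dominated in the sweep
                    hv += (ref[0] - g1) * (prev_f2 - g2)
                    prev_f2 = g2
            return hv

        front = [(1.0, 4.0), (2.0, 2.0), (3.0, 1.0)]
        print(hypervolume_2d(front, ref=(5.0, 5.0)))   # 4 + 6 + 2 = 12.0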

  1. Sensitivity analysis of multi-objective optimization of CPG parameters for quadruped robot locomotion

    NASA Astrophysics Data System (ADS)

    Oliveira, Miguel; Santos, Cristina P.; Costa, Lino

    2012-09-01

    In this paper, a study based on sensitivity analysis is performed for a gait multi-objective optimization system that combines bio-inspired Central Pattern Generators (CPGs) and a multi-objective evolutionary algorithm based on NSGA-II. In this system, CPGs are modeled as autonomous differential equations that generate the necessary limb movement to perform the required walking gait. In order to optimize the walking gait, a multi-objective problem with three conflicting objectives is formulated: maximization of the velocity, of the wide stability margin, and of the behavioral diversity. The experimental results highlight the effectiveness of this multi-objective approach and the importance of the objectives in finding different walking gait solutions for the quadruped robot.

  2. Development of closed-loop supply chain network in terms of corporate social responsibility.

    PubMed

    Pedram, Ali; Pedram, Payam; Yusoff, Nukman Bin; Sorooshian, Shahryar

    2017-01-01

    Due to the rise in awareness of environmental issues and the depletion of virgin resources, many firms have attempted to increase the sustainability of their activities. One efficient way to elevate sustainability is the consideration of corporate social responsibility (CSR) in designing a closed-loop supply chain (CLSC). This paper develops a mathematical model to increase corporate social responsibility in terms of job creation. Moreover, the model, in addition to increasing total CLSC profit, provides a range of strategic decision solutions for decision makers to select the best action plan for a CLSC. The proposed multi-objective mixed-integer linear programming (MILP) model was solved with the non-dominated sorting genetic algorithm II (NSGA-II). Fuzzy set theory was employed to select the best compromise solution from the Pareto-optimal solutions. A numerical example was used to validate the potential application of the proposed model. The results highlight the effect of CSR on the design of the CLSC.

  4. A multi-stakeholder framework for urban runoff quality management: Application of social choice and bargaining techniques.

    PubMed

    Ghodsi, Seyed Hamed; Kerachian, Reza; Zahmatkesh, Zahra

    2016-04-15

    In this paper, an integrated framework is proposed for urban runoff management. To control and improve runoff quality and quantity, Low Impact Development (LID) practices are utilized. In order to determine the LIDs' areas and locations, the Non-dominated Sorting Genetic Algorithm II (NSGA-II) is employed with three objective functions: minimizing runoff volume, runoff pollution, and the implementation cost of LIDs. In this framework, the Storm Water Management Model (SWMM) is used for stream flow simulation. The non-dominated solutions provided by NSGA-II are considered as management scenarios. To select the most preferred scenario, interactions among the main stakeholders in the study area with conflicting utilities are incorporated by utilizing bargaining models, including a non-cooperative game and the Nash model, and the social choice procedures of Borda count and approval voting. Moreover, a new social choice procedure, named the pairwise voting method, is proposed and applied. Based on each conflict resolution approach, a scenario is identified as the ideal solution, providing the LIDs' areas, locations and implementation cost. The proposed framework is applied to urban water quality and quantity management in the northern part of the Tehran metropolitan area, Iran. Results show that the proposed pairwise voting method tends to select a scenario with a higher percentage reduction in TSS (Total Suspended Solids) load and runoff volume than the Borda count and approval voting methods. The Nash method selects the management scenario with the highest LID implementation cost and the maximum percentage reductions in runoff volume and TSS. The results also signify that the selection of an appropriate management scenario by the stakeholders depends on the available financial resources and the relative importance of runoff quality improvement compared with reducing runoff volume. Copyright © 2016 Elsevier B.V. All rights reserved.
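
    The Borda count and approval voting steps are easy to make concrete. The Python sketch below scores a set of candidate scenarios under both procedures; the scenario names, rankings and approval sets are illustrative, not taken from the Tehran case study.

        def borda(rankings):
            """Borda count: rank r (0 = best) of n scenarios earns n - 1 - r points."""
            n = len(rankings[0])
            scores = {}
            for ranking in rankings:
                for r, scenario in enumerate(ranking):
                    scores[scenario] = scores.get(scenario, 0) + (n - 1 - r)
            return scores

        def approval(approvals):
            """Approval voting: each stakeholder approves a subset of scenarios."""
            scores = {}
            for approved in approvals:
                for scenario in approved:
                    scores[scenario] = scores.get(scenario, 0) + 1
            return scores

        rankings = [["S2", "S1", "S3"],   # stakeholder 1
                    ["S1", "S2", "S3"],   # stakeholder 2
                    ["S2", "S3", "S1"]]   # stakeholder 3
        print(borda(rankings))                                  # {'S2': 5, ...}
        print(approval([{"S1", "S2"}, {"S1"}, {"S2", "S3"}]))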

  5. Multi-Objective Random Search Algorithm for Simultaneously Optimizing Wind Farm Layout and Number of Turbines

    NASA Astrophysics Data System (ADS)

    Feng, Ju; Shen, Wen Zhong; Xu, Chang

    2016-09-01

    A new algorithm for multi-objective wind farm layout optimization is presented. It formulates the wind turbine locations as continuous variables and is capable of optimizing the number of turbines and their locations in the wind farm simultaneously. Two objectives are considered. One is to maximize the total power production, which is calculated by considering the wake effects using the Jensen wake model combined with the local wind distribution. The other is to minimize the total electrical cable length, taken as the total length of the minimal spanning tree that connects all turbines and calculated using Prim's algorithm. Constraints on the wind farm boundary and wind turbine proximity are also considered. An ideal test case shows that the proposed algorithm largely outperforms a well-known multi-objective genetic algorithm (NSGA-II). In a real test case based on the Horns Rev 1 wind farm, the algorithm also obtains useful Pareto frontiers and provides a wide range of Pareto-optimal layouts with different numbers of turbines for a real-life wind farm developer.
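
    The cable-length objective reduces to a minimum spanning tree over the turbine coordinates, which the paper computes with Prim's algorithm. The Python sketch below uses the O(n^2) form of Prim's algorithm, adequate for wind-farm-sized inputs; the layout coordinates are illustrative.

        import math

        def cable_length_prim(turbines):
            """Total cable length = weight of the minimum spanning tree over the
            turbine coordinates, grown with Prim's algorithm."""
            n = len(turbines)
            in_tree = [False] * n
            best = [math.inf] * n          # cheapest known edge into the tree
            best[0], total = 0.0, 0.0
            for _ in range(n):
                u = min((i for i in range(n) if not in_tree[i]),
                        key=best.__getitem__)
                in_tree[u], total = True, total + best[u]
                for v in range(n):
                    if not in_tree[v]:
                        best[v] = min(best[v], math.dist(turbines[u], turbines[v]))
            return total

        layout = [(0.0, 0.0), (0.0, 500.0), (400.0, 0.0), (400.0, 500.0)]
        print(cable_length_prim(layout))   # 1300.0 for this rectangle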

  6. Investigation on Multiple Algorithms for Multi-Objective Optimization of Gear Box

    NASA Astrophysics Data System (ADS)

    Ananthapadmanabhan, R.; Babu, S. Arun; Hareendranath, KR; Krishnamohan, C.; Krishnapillai, S.; A, Krishnan

    2016-09-01

    The field of gear design is an extremely important area in engineering. In this work, a spur gear reduction unit is considered. A review of the relevant literature on gear design indicates that compact gearbox design involves complicated engineering analysis. This work deals with the simultaneous optimization of the power and dimensions of a gearbox, objectives which are of a conflicting nature. The focus is on developing a design space based on module, pinion teeth and face-width using MATLAB. The feasible points are obtained through different multi-objective algorithms using various constraints drawn from the literature. Attention has been devoted to several novel constraints, such as the critical scoring criterion number, flash temperature, minimum film thickness, involute interference and contact ratio. The outputs of various algorithms, such as the genetic algorithm, fmincon (constrained nonlinear minimization) and NSGA-II, are compared to find the best result. This yields a more precise approach for obtaining practical values of the module, pinion teeth and face-width for a minimum centre distance and a maximum power transmission for any given material.

  7. Multi-objective optimization of a low specific speed centrifugal pump using an evolutionary algorithm

    NASA Astrophysics Data System (ADS)

    An, Zhao; Zhounian, Lai; Peng, Wu; Linlin, Cao; Dazhuan, Wu

    2016-07-01

    This paper describes the shape optimization of a low specific speed centrifugal pump at the design point. The target pump had already been manually modified on the basis of empirical knowledge. A genetic algorithm (NSGA-II) with certain enhancements is adopted to improve its performance further with respect to two goals. In order to limit the number of design variables without losing geometric information, the impeller is parametrized using a Bézier curve and a B-spline. Numerical simulation based on a Reynolds-averaged Navier-Stokes (RANS) turbulence model is performed in parallel to evaluate the flow field. A back-propagation neural network is constructed as a surrogate for performance prediction to save computing time, with initial samples selected according to an orthogonal array. Global Pareto-optimal solutions are then obtained and analysed. The results show that undesirable flow structures, such as the secondary flow on the meridian plane, have diminished or vanished in the optimized pump.

  8. Multi-objective optimization of an industrial penicillin V bioreactor train using non-dominated sorting genetic algorithm.

    PubMed

    Lee, Fook Choon; Rangaiah, Gade Pandu; Ray, Ajay Kumar

    2007-10-15

    The bulk of the penicillin produced is used as raw material for semi-synthetic penicillins (such as amoxicillin and ampicillin) and semi-synthetic cephalosporins (such as cephalexin and cefadroxil). In the present paper, an industrial penicillin V bioreactor train is optimized for multiple objectives simultaneously. An industrial train, comprising a bank of identical bioreactors, is run semi-continuously in a synchronous fashion. The fermentation taking place in a bioreactor is modeled using a morphologically structured mechanism. For multi-objective optimization with two and three objectives, the elitist non-dominated sorting genetic algorithm (NSGA-II) is chosen. Instead of a single optimum, as in traditional optimization, a wide range of optimal design and operating conditions depicting trade-offs among key performance indicators, such as batch cycle time, yield, profit and penicillin concentration, is successfully obtained. The effects of design and operating variables on the optimal solutions are discussed in detail. Copyright 2007 Wiley Periodicals, Inc.

  9. Design of isolated buildings with S-FBI system subjected to near-fault earthquakes using NSGA-II algorithm

    NASA Astrophysics Data System (ADS)

    Ozbulut, O. E.; Silwal, B.

    2014-04-01

    This study investigates the optimum design parameters of a superelastic friction base isolator (S-FBI) system through a multi-objective genetic algorithm and a performance-based evaluation approach. The S-FBI system consists of a flat steel-PTFE sliding bearing and a superelastic NiTi shape memory alloy (SMA) device. The sliding bearing limits the transfer of shear across the isolation interface and provides damping from sliding friction. The SMA device provides restoring-force capability to the isolation system together with additional damping characteristics. A three-story building is modeled with the S-FBI isolation system. Multi-objective numerical optimization that simultaneously minimizes isolation-level displacements and superstructure response is carried out with a genetic algorithm (GA) in order to optimize the S-FBI system. Nonlinear time history analyses of the building with the S-FBI system are performed using a set of 20 near-field ground motion records. Results show that the S-FBI system successfully controls the response of the building against near-fault earthquakes without sacrificing isolation efficacy or producing large isolation-level deformations.

  10. Efficient ecologic and economic operational rules for dammed systems by means of nondominated sorting genetic algorithm II

    NASA Astrophysics Data System (ADS)

    Niayifar, A.; Perona, P.

    2015-12-01

    River impoundment by dams is known to strongly affect the natural flow regime and, in turn, the river attributes and the related ecosystem biodiversity. Making hydropower sustainable implies seeking innovative operational policies able to generate dynamic environmental flows while maintaining economic efficiency. For dammed systems, we build the ecologic-economic efficiency plot for non-proportional flow redistribution operational rules compared with minimal flow operational rules. As for the case of small hydropower plants (e.g., see the companion paper by Gorla et al., this session), we use a four-parameter Fermi-Dirac statistical distribution to mathematically formulate non-proportional redistribution rules. These rules allocate a fraction of water to the riverine environment depending on the current reservoir inflows and storage. Riverine ecological benefits associated with dynamic environmental flows are computed by integrating the Weighted Usable Area (WUA) for fishes with Richter's hydrological indicators. We then apply the nondominated sorting genetic algorithm II (NSGA-II) to an ensemble of non-proportional and minimal flow redistribution rules in order to generate the Pareto frontier showing the system performance in the ecologic-economic space. This fast and elitist multiobjective optimization method is then applied to a case study. It is found that non-proportional dynamic flow releases ensure maximal power production on the one hand while reconciling ecological sustainability on the other. Much of the improvement in the environmental indicator arises from a better use of the reservoir storage dynamics, which allows the system to capture and laminate flood events while recovering part of them for energy production. In conclusion, adopting such new operational policies would unravel a spectrum of globally efficient performances of the dammed system when compared with policies based on constant minimum flow releases.
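
    The abstract does not spell out the four-parameter Fermi-Dirac rule, so the Python sketch below shows one plausible form: the fraction of the inflow released to the river rises smoothly from a floor m to a ceiling M around a threshold inflow q0 with steepness tau. All parameter and inflow values are illustrative.

        import math

        def release_fraction(inflow, m, M, q0, tau):
            """Fermi-Dirac (logistic-type) redistribution rule, assumed form:
            fraction of inflow released to the river as environmental flow."""
            return m + (M - m) / (1.0 + math.exp((q0 - inflow) / tau))

        for q in (2.0, 5.0, 10.0, 20.0, 40.0):         # inflow, m^3/s (toy values)
            env = release_fraction(q, m=0.1, M=0.8, q0=10.0, tau=4.0) * q
            print(f"inflow {q:5.1f} -> river {env:5.2f}, turbines {q - env:5.2f}")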

  11. A multi-objective simulation-optimization model for in situ bioremediation of groundwater contamination: Application of bargaining theory

    NASA Astrophysics Data System (ADS)

    Raei, Ehsan; Nikoo, Mohammad Reza; Pourshahabi, Shokoufeh

    2017-08-01

    In the present study, a BIOPLUME III simulation model is coupled with a non-dominated sorting genetic algorithm (NSGA-II)-based model for the optimal design of an in situ groundwater bioremediation system, considering the preferences of stakeholders. The Ministry of Energy (MOE), the Department of Environment (DOE), and the National Disaster Management Organization (NDMO) are the three stakeholders in the groundwater bioremediation problem in Iran. Based on the preferences of these stakeholders, the multi-objective optimization model seeks to minimize: (1) cost; (2) the sum of contaminant concentrations that violate the standard; and (3) contaminant plume fragmentation. The NSGA-II multi-objective optimization method yields Pareto-optimal solutions, and a compromise solution is determined using fallback bargaining with impasse to achieve a consensus among the stakeholders. Two different approaches are investigated and compared, based on two different domains for the locations of injection and extraction wells. In the first approach, a limited number of predefined locations is considered, following previous similar studies. In the second approach, all possible points in the study area are searched to find the optimal locations, arrangement, and flow rates of injection and extraction wells. The involvement of the stakeholders, the investigation of all possible points instead of a limited number of well locations, and the minimization of contaminant plume fragmentation during bioremediation are the innovations of this research. In addition, the simulation period is divided into smaller time intervals for more efficient optimization, and the Image Processing Toolbox in MATLAB® software is utilized to evaluate the third objective function. In comparison with previous studies, cost is reduced using the proposed methodology. Dispersion of the contaminant plume is reduced in both approaches through the third objective function. Considering all possible points in the study area when determining the optimal well locations (the second approach) leads to more desirable results, i.e. contaminant concentrations decreased to the standard level and a 20% to 40% cost reduction.
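
    Fallback bargaining with impasse (in the sense of Brams and Kilgour) admits a compact illustration: each stakeholder ranks the Pareto alternatives and states how far down its ranking it will go before preferring impasse, and the procedure descends rank by rank until some alternative is acceptable to all. The Python sketch below is a generic version with illustrative rankings, not the study's actual alternatives.

        def fallback_bargaining(rankings, impasse_depth):
            """`rankings`: one best-first list per stakeholder; `impasse_depth[i]`:
            deepest rank stakeholder i accepts before preferring impasse."""
            n_alts = len(rankings[0])
            for depth in range(1, n_alts + 1):
                if any(depth > d for d in impasse_depth):
                    return None                       # impasse reached
                acceptable = set(rankings[0][:depth])
                for ranking in rankings[1:]:
                    acceptable &= set(ranking[:depth])
                if acceptable:                        # tie-break: best total rank
                    return min(acceptable,
                               key=lambda a: sum(r.index(a) for r in rankings))
            return None

        rankings = [["A", "B", "C", "D"],   # e.g. MOE
                    ["B", "A", "D", "C"],   # e.g. DOE
                    ["B", "C", "A", "D"]]   # e.g. NDMO
        print(fallback_bargaining(rankings, impasse_depth=[3, 3, 3]))  # -> B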

  12. Fatigue design of a cellular phone folder using regression model-based multi-objective optimization

    NASA Astrophysics Data System (ADS)

    Kim, Young Gyun; Lee, Jongsoo

    2016-08-01

    In a folding cellular phone, the folding device is repeatedly opened and closed by the user, which eventually results in fatigue damage, particularly to the front of the folder. Hence, it is important to improve the safety and endurance of the folder while also reducing its weight. This article presents an optimal design for the folder front that maximizes its fatigue endurance while minimizing its thickness. Design data for analysis and optimization were obtained experimentally using a test jig. Multi-objective optimization was carried out using a nonlinear regression model. Three regression methods were employed: back-propagation neural networks, logistic regression and support vector machines. The AdaBoost ensemble technique was also used to improve the approximation. Two-objective Pareto-optimal solutions were identified using the non-dominated sorting genetic algorithm (NSGA-II). Finally, a numerically optimized solution was validated against experimental product data, in terms of both fatigue endurance and thickness index.

  13. A tabu search evolutionary algorithm for multiobjective optimization: Application to a bi-criterion aircraft structural reliability problem

    NASA Astrophysics Data System (ADS)

    Long, Kim Chenming

    Real-world engineering optimization problems often require the consideration of multiple conflicting and noncommensurate objectives, subject to nonconvex constraint regions in a high-dimensional decision space. Further challenges occur for combinatorial multiobjective problems in which the decision variables are not continuous. Traditional multiobjective optimization methods of operations research, such as weighting and epsilon constraint methods, are ill-suited to solving these complex, multiobjective problems. This has given rise to the application of a wide range of metaheuristic optimization algorithms, such as evolutionary, particle swarm, simulated annealing, and ant colony methods, to multiobjective optimization. Several multiobjective evolutionary algorithms have been developed, including the strength Pareto evolutionary algorithm (SPEA) and the non-dominated sorting genetic algorithm (NSGA), for determining the Pareto-optimal set of non-dominated solutions. Although numerous researchers have developed a wide range of multiobjective optimization algorithms, there is a continuing need to construct computationally efficient algorithms with an improved ability to converge to globally non-dominated solutions along the Pareto-optimal front for complex, large-scale, multiobjective engineering optimization problems. This is particularly important when the multiple objective functions and constraints of the real-world system cannot be expressed in explicit mathematical representations. This research presents a novel metaheuristic evolutionary algorithm for complex multiobjective optimization problems, which combines the metaheuristic tabu search algorithm with the evolutionary algorithm (TSEA), as embodied in genetic algorithms. TSEA is successfully applied to bicriteria (i.e., structural reliability and retrofit cost) optimization of the aircraft tail structure fatigue life, which increases its reliability by prolonging fatigue life. A comparison for this application of the proposed algorithm, TSEA, with several state-of-the-art multiobjective optimization algorithms reveals that TSEA outperforms these algorithms by providing retrofit solutions with greater reliability for the same costs (i.e., closer to the Pareto-optimal front) after the algorithms are executed for the same number of generations. This research also demonstrates that TSEA competes with and, in some situations, outperforms state-of-the-art multiobjective optimization algorithms such as NSGA II and SPEA 2 when applied to classic bicriteria test problems in the technical literature and other complex, sizable real-world applications. The successful implementation of TSEA contributes to the safety of aeronautical structures by providing a systematic way to guide aircraft structural retrofitting efforts, as well as a potentially useful algorithm for a wide range of multiobjective optimization problems in engineering and other fields.

  14. Dynamic Appliances Scheduling in Collaborative MicroGrids System

    PubMed Central

    Bilil, Hasnae; Aniba, Ghassane; Gharavi, Hamid

    2017-01-01

    In this paper, a new approach based on a collaborative system of MicroGrids (MGs) is proposed to enable household appliance scheduling. To achieve this, appliances are categorized into flexible and non-flexible Deferrable Loads (DLs), according to their electrical components. We propose a dynamic scheduling algorithm in which users can systematically manage the operation of their electric appliances. The main challenge is to develop a flattening function calculus (reshaping) for both flexible and non-flexible DLs. In addition, implementation of the proposed algorithm requires dynamically analyzing two successive multi-objective optimization (MOO) problems: the first targets the activation schedule of non-flexible DLs, and the second deals with the power profiles of flexible DLs. The MOO problems are solved using a fast and elitist multi-objective genetic algorithm (NSGA-II). Finally, to show the efficiency of the proposed approach, a case study of a collaborative system of 40 MGs enrolled in the load-curve flattening program has been developed. The results verify that the load curve can indeed become very flat by applying the proposed scheduling approach. PMID:28824226
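
    The load-curve flattening goal can be made concrete with a simple aggregate-load objective. The sketch below scores a candidate schedule of deferrable loads by the variance of the summed load curve; the slot resolution, base load, and appliance profiles are all hypothetical.

    ```python
    # Sketch of a load-curve flattening objective for appliance scheduling.
    # A schedule assigns each deferrable load a start slot; the objective is the
    # variance of the aggregate load over the day. All data here are hypothetical.
    import numpy as np

    SLOTS = 48  # half-hour slots

    def aggregate_load(base_load, profiles, starts):
        total = base_load.copy()
        for profile, start in zip(profiles, starts):
            # each start must leave room for the full profile within the day
            total[start:start + len(profile)] += profile
        return total

    def flatness(base_load, profiles, starts):
        return float(np.var(aggregate_load(base_load, profiles, starts)))

    rng = np.random.default_rng(0)
    base = 1.0 + 0.5 * rng.random(SLOTS)
    appliances = [np.full(4, 1.5), np.full(2, 2.0)]     # two deferrable loads
    print(flatness(base, appliances, starts=[10, 30]))  # spread out -> flatter
    print(flatness(base, appliances, starts=[20, 21]))  # overlapping -> less flat
    ```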

  15. Optimal colour quality of LED clusters based on memory colours.

    PubMed

    Smet, Kevin; Ryckaert, Wouter R; Pointer, Michael R; Deconinck, Geert; Hanselaer, Peter

    2011-03-28

    The spectral power distributions of tri- and tetrachromatic clusters of light-emitting diodes, composed of simulated and commercially available LEDs, were optimized with a genetic algorithm to maximize the luminous efficacy of radiation and the colour quality as assessed by the memory colour quality metric developed by the authors. The trade-off between the colour quality, as assessed by the memory colour metric, and the luminous efficacy of radiation was investigated by calculating the Pareto optimal front using the NSGA-II genetic algorithm. Optimal peak wavelengths and spectral widths of the LEDs were derived, and over half of them were found to be close to Thornton's prime colours. The Pareto optimal fronts of real LED clusters were always found to be smaller than those of the simulated clusters. The effect of binning on the design of a real LED cluster was investigated and found to be quite large. Finally, a real LED cluster of commercially available AlGaInP, InGaN and phosphor white LEDs was optimized to obtain a higher score on the memory colour quality scale than its corresponding CIE reference illuminant.

  16. Optimal Integration of Departures and Arrivals in Terminal Airspace

    NASA Technical Reports Server (NTRS)

    Xue, Min; Zelinski, Shannon Jean

    2013-01-01

    Coordination of operations with spatially and temporally shared resources, such as route segments, fixes, and runways, improves the efficiency of terminal airspace management. Problems in this category are, in general, computationally difficult compared to conventional scheduling problems. This paper presents a fast-time algorithm formulation using a non-dominated sorting genetic algorithm (NSGA). It was first applied to a test problem introduced in the existing literature, where the new method solved the 20-aircraft problem in fast time with a 65% (440 second) delay reduction using shared departure fixes. To test its application to a more realistic and complicated problem, the NSGA algorithm was then applied to LAX terminal airspace, where interactions between 28% of LAX arrivals and 10% of LAX departures are resolved by spatial separation in current operations, which may introduce unnecessary delays. In this work, three types of separation - spatial, temporal, and hybrid - were formulated using the new algorithm, with hybrid separation combining the temporal and spatial forms. Results showed that although temporal separation achieved less delay than spatial separation with a small uncertainty buffer, spatial separation outperformed temporal separation when the uncertainty buffer was increased. Hybrid separation introduced much less delay than both the spatial and temporal approaches. For a total of 15 interacting departures and arrivals, the delay reduction of hybrid separation relative to spatial separation varied between 11% (3.1 minutes) and 64% (10.7 minutes) as the uncertainty buffer grew from 0 to 60 seconds. Furthermore, a First-Come-First-Serve based heuristic method was implemented for hybrid separation as a comparison with the NSGA algorithm. Experiments showed that the NSGA algorithm yielded 9% to 42% less delay than the heuristic method across the varied uncertainty buffer sizes.

  17. AI techniques for optimizing multi-objective reservoir operation upon human and riverine ecosystem demands

    NASA Astrophysics Data System (ADS)

    Tsai, Wen-Ping; Chang, Fi-John; Chang, Li-Chiu; Herricks, Edwin E.

    2015-11-01

    Flow regime is the key driver of riverine ecology. This study proposes a novel hybrid methodology based on artificial intelligence (AI) techniques for quantifying riverine ecosystem requirements and delivering suitable flow regimes that sustain river and floodplain ecology through optimized reservoir operation. This approach addresses the challenge of better fitting riverine ecosystem requirements to existing human demands. We first explored and characterized the relationship between flow regimes and fish communities through a hybrid artificial neural network (ANN). Then the non-dominated sorting genetic algorithm II (NSGA-II) was established for river flow management of the Shihmen Reservoir in northern Taiwan. The ecosystem requirement took the form of maximizing fish diversity, which could be estimated by the hybrid ANN; the human requirement was to provide a higher satisfaction degree of water supply. The results demonstrated that the proposed methodology can offer a number of diversified alternative strategies for reservoir operation and improve operational strategies, producing downstream flows that meet both human and ecosystem needs. The wide spread of Pareto-optimal solutions makes this methodology attractive to water resources managers, allowing decision makers to easily determine the best compromise through the trade-off between reservoir operational strategies for human and ecosystem needs.

  18. A systematic approach for watershed ecological restoration strategy making: An application in the Taizi River Basin in northern China.

    PubMed

    Li, Mengdi; Fan, Juntao; Zhang, Yuan; Guo, Fen; Liu, Lusan; Xia, Rui; Xu, Zongxue; Wu, Fengchang

    2018-05-15

    Aiming to protect freshwater ecosystems, river ecological restoration has been brought into the research spotlight. However, it is challenging for decision makers to set appropriate objectives and select a combination of rehabilitation actions from numerous possible solutions to meet ecological, economic, and social demands. In this study, we developed a systematic approach to help make an optimal strategy for watershed restoration, which incorporated ecological security assessment and multi-objective optimization (MOO) into the planning process to enhance restoration efficiency and effectiveness. The river ecological security status was evaluated using a pressure-state-function-response (PSFR) assessment framework, and MOO was achieved by searching for the Pareto optimal solutions via the Non-dominated Sorting Genetic Algorithm II (NSGA-II) to balance tradeoffs between different objectives. Further, we clustered the searched solutions into three types in terms of their optimized objective function values in order to provide insightful information for decision makers. The proposed method was applied to an example rehabilitation project in the Taizi River Basin in northern China. The MOO result in the Taizi River presented a set of Pareto optimal solutions that were classified into three types: I - high ecological improvement, high cost and high benefits; II - medium ecological improvement, medium cost and medium economic benefits; III - low ecological improvement, low cost and low economic benefits. The proposed systematic approach can enhance the effectiveness of riverine ecological restoration projects and could provide a valuable reference for other ecological restoration planning. Copyright © 2018 Elsevier B.V. All rights reserved.
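
    The grouping of Pareto solutions into three strategy types is, in effect, a clustering of the objective vectors. The paper does not name its clustering method, so the sketch below assumes k-means (via scikit-learn) over hypothetical Pareto data purely for illustration.

    ```python
    # Sketch: clustering Pareto-front objective vectors into three strategy types.
    # The choice of k-means is an assumption; the paper only reports that solutions
    # were grouped into high/medium/low improvement-cost-benefit classes.
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    # Hypothetical Pareto set: columns = (ecological improvement, cost, benefit)
    pareto = np.sort(rng.random((60, 3)), axis=0)

    labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(pareto)
    for k in range(3):
        print(k, pareto[labels == k].mean(axis=0))  # per-cluster objective means
    ```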

  19. Comprehensive optimization of friction stir weld parameters of lap joint AA1100 plates using artificial neural networks and modified NSGA-II

    NASA Astrophysics Data System (ADS)

    Khalkhali, Abolfazl; Ebrahimi-Nejad, Salman; Geran Malek, Nima

    2018-06-01

    Friction stir welding (FSW) process overcomes many difficulties arising in conventional fusion welding processes of aluminum alloys. The current paper presents a comprehensive investigation on the effects of rotational speed, traverse speed, tool tilt angle and tool pin profile on the longitudinal force, axial force, maximum temperature, tensile strength, percent elongation, grain size, micro-hardness of welded zone and welded zone thickness of AA1100 aluminum alloy sheets. Design of experiments (DOE) was applied using the Taguchi approach and subsequently, effects of the input parameter on process outputs were investigated using analysis of variance (ANOVA). A perceptron neural network model was developed to find a correlation between the inputs and outputs. Multi-objective optimization using modified NSGA-II was implemented followed by NIP and TOPSIS approaches to propose optimum points for each of the square, pentagon, hexagon, and circular pin profiles. Results indicate that the optimization process can reach horizontal and vertical forces as low as 1452 N and 2913 N, respectively and a grain size as low as 2 μm. This results in hardness values of up to 57.2 and tensile strength, elongation and joint thickness of 2126 N, 5.9% and 3.7 mm, respectively. The maximum operating temperature can also reach a sufficiently high value of 374 °C to provide adequate material flow.
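
    TOPSIS itself is a small, well-defined calculation that can be applied to any Pareto set to pick a compromise point. The generic sketch below assumes equal criterion weights (the paper does not state its weighting) and hypothetical strength/grain-size values.

    ```python
    # Generic TOPSIS ranking of a Pareto set (numpy only). Weights are assumed
    # equal; 'benefit' marks objectives to maximize, the rest are minimized.
    import numpy as np

    def topsis(F, benefit, weights=None):
        F = np.asarray(F, dtype=float)
        w = np.ones(F.shape[1]) / F.shape[1] if weights is None else np.asarray(weights)
        V = w * F / np.linalg.norm(F, axis=0)            # weighted normalized matrix
        ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
        worst = np.where(benefit, V.min(axis=0), V.max(axis=0))
        d_pos = np.linalg.norm(V - ideal, axis=1)
        d_neg = np.linalg.norm(V - worst, axis=1)
        return d_neg / (d_pos + d_neg)                   # closeness: higher is better

    # Hypothetical FSW Pareto points: (tensile strength, grain size in um)
    F = [[120, 4.0], [110, 2.5], [95, 2.0]]
    scores = topsis(F, benefit=np.array([True, False]))
    print(scores.argsort()[::-1])  # best-first indices
    ```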

  20. An improved robust buffer allocation method for the project scheduling problem

    NASA Astrophysics Data System (ADS)

    Ghoddousi, Parviz; Ansari, Ramin; Makui, Ahmad

    2017-04-01

    Unpredictable uncertainties cause delays and additional costs for projects. Often, when using traditional approaches, the optimizing procedure of the baseline project plan fails and leads to delays. In this study, a two-stage multi-objective buffer allocation approach is applied for robust project scheduling. In the first stage, some decisions are made on buffer sizes and allocation to the project activities. A set of Pareto-optimal robust schedules is designed using the meta-heuristic non-dominated sorting genetic algorithm (NSGA-II) based on the decisions made in the buffer allocation step. In the second stage, the Pareto solutions are evaluated in terms of the deviation from the initial start time and due dates. The proposed approach was implemented on a real dam construction project. The outcomes indicated that the obtained buffered schedule reduces the cost of disruptions by 17.7% compared with the baseline plan, with an increase of about 0.3% in the project completion time.

  1. Fourier-Mellin moment-based intertwining map for image encryption

    NASA Astrophysics Data System (ADS)

    Kaur, Manjit; Kumar, Vijay

    2018-03-01

    In this paper, a robust image encryption technique that utilizes Fourier-Mellin moments and an intertwining logistic map is proposed. The Fourier-Mellin moment-based intertwining logistic map has been designed to overcome the issue of low sensitivity to the input image. A multi-objective Non-Dominated Sorting Genetic Algorithm (NSGA-II) based on Reinforcement Learning (MNSGA-RL) has been used to optimize the required parameters of the intertwining logistic map. Fourier-Mellin moments are used to make the secret keys more secure. Thereafter, permutation and diffusion operations are carried out on the input image using the secret keys. The performance of the proposed image encryption technique has been evaluated on five well-known benchmark images and compared with seven well-known existing encryption techniques. The experimental results reveal that the proposed technique outperforms the others in terms of entropy, correlation analysis, unified average changing intensity and number of pixels change rate. The simulation results reveal that the proposed technique provides a high level of security and robustness against various types of attacks.
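
    The permutation stage of such schemes is driven by a chaotic keystream. In the sketch below a plain logistic map stands in for the paper's intertwining variant, whose exact coupled equations are not reproduced here, and the key values x0 and r are hypothetical.

    ```python
    # Sketch of chaotic-map-driven pixel permutation. A plain logistic map stands
    # in for the intertwining variant; key parameters are hypothetical.
    import numpy as np

    def logistic_keystream(x0, r, n):
        xs = np.empty(n)
        x = x0
        for i in range(n):
            x = r * x * (1.0 - x)    # logistic map iteration
            xs[i] = x
        return xs

    def permute(img, x0=0.3456, r=3.99):
        flat = img.ravel()
        order = np.argsort(logistic_keystream(x0, r, flat.size))  # chaotic ordering
        return flat[order].reshape(img.shape), order

    def unpermute(scrambled, order):
        flat = np.empty(scrambled.size, dtype=scrambled.dtype)
        flat[order] = scrambled.ravel()   # invert the permutation
        return flat.reshape(scrambled.shape)

    img = np.arange(16, dtype=np.uint8).reshape(4, 4)
    scrambled, order = permute(img)
    assert np.array_equal(unpermute(scrambled, order), img)
    ```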

  2. An optimal design of wind turbine and ship structure based on neuro-response surface method

    NASA Astrophysics Data System (ADS)

    Lee, Jae-Chul; Shin, Sung-Chul; Kim, Soo-Young

    2015-07-01

    The geometry of engineering systems affects their performance. For this reason, the shape of engineering systems needs to be optimized in the initial design stage. However, engineering system design problems consist of multi-objective optimization, and performance analysis using commercial codes or numerical analysis is generally time-consuming. To solve these problems, many engineers perform the optimization using an approximation model (response surface). The Response Surface Method (RSM) is generally used to predict system performance in the engineering research field, but RSM presents some prediction errors for highly nonlinear systems. The major objective of this research is to establish an optimal design method for multi-objective problems and confirm its applicability. The proposed process is composed of three parts: definition of geometry, generation of the response surface, and the optimization process. To reduce the time for performance analysis and minimize the prediction errors, the approximation model is generated using a Backpropagation Artificial Neural Network (BPANN), an approach considered here as the Neuro-Response Surface Method (NRSM). The optimization is then performed on the generated response surface by the non-dominated sorting genetic algorithm II (NSGA-II). Through case studies of a marine system and a ship structure (the substructure of a floating offshore wind turbine considering hydrodynamic performance, and bulk carrier bottom stiffened panels considering structural performance), we have confirmed the applicability of the proposed method for multi-objective optimization problems with side constraints.

  3. Improved NSGA model for multi objective operation scheduling and its evaluation

    NASA Astrophysics Data System (ADS)

    Li, Weining; Wang, Fuyu

    2017-09-01

    Reasonable operation scheduling can increase the income of the hospital and improve patient satisfaction. In this paper, a multi-objective operation scheduling method based on an improved NSGA algorithm is used to shorten the operation time, reduce the operation cost and lower the operation risk. A multi-objective optimization model is established for flexible operation scheduling; the Pareto solutions are obtained through MATLAB simulation, and the data are standardized. The optimal scheduling scheme is then selected using a combined entropy weight-TOPSIS method. The results show that the algorithm is feasible for solving the multi-objective operation scheduling problem and provides a reference for hospital operation scheduling.

  4. Hybrid Multi-Objective Optimization of Folsom Reservoir Operation to Maximize Storage in Whole Watershed

    NASA Astrophysics Data System (ADS)

    Goharian, E.; Gailey, R.; Maples, S.; Azizipour, M.; Sandoval Solis, S.; Fogg, G. E.

    2017-12-01

    The drought incidents and growing water scarcity in California have a profound effect on human, agricultural, and environmental water needs. California has experienced multi-year droughts, which have caused groundwater overdraft, dropping groundwater levels, and dwindling of major reservoirs. These concerns call for a stringent evaluation of future water resources sustainability and security in the state. To answer this call, the Sustainable Groundwater Management Act (SGMA) was passed in 2014 to ensure sustainable groundwater management in California by 2042. SGMA refers to managed aquifer recharge (MAR) as a key management option, especially in areas with high intra- and inter-annual variation in water availability, to secure the refill of underground water storage and the return of groundwater quality to a desirable condition. The hybrid optimization of an integrated water resources system provides an opportunity to adapt surface reservoir operations to enhance groundwater recharge. Here, to re-operate Folsom Reservoir, the objectives are maximizing the storage in the whole American-Cosumnes watershed and maximizing hydropower generation from Folsom Reservoir. While a linear programming (LP) module maximizes the total groundwater recharge by distributing and spreading water over suitable lands in the basin, a genetic algorithm layer above it, the Non-dominated Sorting Genetic Algorithm II (NSGA-II), controls releases from the reservoir to secure hydropower generation, carry-over storage in the reservoir, available water for replenishment, and downstream water requirements. The preliminary results show additional releases from the reservoir for groundwater recharge during high-flow seasons. Moreover, tradeoffs between the objectives indicate that the new operation performs satisfactorily, increasing storage in the basin with insignificant effects on the other objectives.
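
    The layered structure described here, an outer evolutionary search over reservoir releases with an inner linear program spreading the released water across recharge areas, can be sketched compactly. A random outer search stands in for NSGA-II to keep the example short, and all capacities and coefficients are hypothetical.

    ```python
    # Sketch of the layered structure: an outer search over reservoir releases,
    # with an inner LP distributing release water across recharge basins.
    import numpy as np
    from scipy.optimize import linprog

    CAPACITY = np.array([3.0, 2.0, 4.0])   # hypothetical basin recharge capacities

    def inner_recharge(release):
        # Maximize total recharge (linprog minimizes, hence the minus sign),
        # subject to per-basin capacity and the total water released.
        res = linprog(c=-np.ones(3),
                      A_ub=np.vstack([np.eye(3), np.ones((1, 3))]),
                      b_ub=np.append(CAPACITY, release),
                      bounds=[(0, None)] * 3)
        return -res.fun

    def hydropower(release):
        return np.sqrt(release)            # toy concave power-revenue proxy

    rng = np.random.default_rng(1)         # random outer search in place of NSGA-II
    for release in rng.uniform(0.0, 10.0, size=5):
        print(f"release={release:5.2f}  recharge={inner_recharge(release):5.2f}  "
              f"power={hydropower(release):4.2f}")
    ```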

  5. New methods versus the smart application of existing tools in the design of water distribution network

    NASA Astrophysics Data System (ADS)

    Cisty, Milan; Bajtek, Zbynek; Celar, Lubomir; Soldanova, Veronika

    2017-04-01

    Finding effective ways to build irrigation systems which meet irrigation demands and also achieve positive environmental and economic outcomes requires, among other activities, the development of new modelling tools. Due to the high costs associated with the necessary material and the installation of an irrigation water distribution system (WDS), it is essential to optimize the design of the WDS while the hydraulic requirements of the network (e.g., the required pressure on irrigation machines) are satisfied. In this work, an optimal design of a water distribution network is proposed for large irrigation networks, using a multi-step approach in which the optimization is accomplished in two phases. In the first phase, suboptimal solutions are searched for; in the second phase, the optimization problem is solved with a search space reduced on the basis of these solutions, which significantly supports the finding of an optimal solution. The first phase consists of several runs of NSGA-II, applied while varying its parameters for every run, i.e., changing the population size, the number of generations, and the crossover and mutation parameters. This is done with the aim of obtaining different sub-optimal solutions which have a relatively low cost. These sub-optimal solutions are subsequently used in the second phase of the proposed methodology, in which the final optimization run is built on the sub-optimal solutions from the previous phase. The purpose of the second phase is to improve the results of the first phase by searching through the reduced search space. The reduction is based on the minimum and maximum diameters of each pipe over all the networks from the first stage; in this phase, NSGA-II does not consider diameters outside of this range. After the second-phase NSGA-II computations, the best result published so far for the Balerma benchmark network, which was used for methodology testing, was achieved in the presented work. The knowledge gained from these computational experiments lies not in offering a new advanced heuristic or hybrid optimization method for water distribution networks, but in the fact that it is possible to obtain very good results with simple, known methods if they are used in a methodologically sound way. ACKNOWLEDGEMENT This work was supported by the Slovak Research and Development Agency under Contract No. APVV-15-0489 and by the Scientific Grant Agency of the Ministry of Education of the Slovak Republic and the Slovak Academy of Sciences, Grant No. 1/0665/15.
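
    The search-space reduction between the two phases comes down to a per-pipe min/max over the phase-one networks. A minimal sketch follows, with hypothetical phase-one diameter choices standing in for actual NSGA-II runs.

    ```python
    # Sketch of the two-phase idea: per-pipe diameter bounds for the final run are
    # taken as the min/max over sub-optimal networks found in phase one.
    import numpy as np

    # Each row: one phase-one network; columns: chosen diameter index per pipe.
    phase_one = np.array([[3, 5, 2, 7],
                          [4, 5, 3, 6],
                          [3, 6, 2, 6]])

    lower = phase_one.min(axis=0)   # reduced per-pipe lower bounds
    upper = phase_one.max(axis=0)   # reduced per-pipe upper bounds
    print(list(zip(lower, upper)))  # phase-two search space per pipe
    # Phase two would rerun NSGA-II with diameters constrained to these ranges.
    ```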

  6. An optimized resistor pattern for temperature gradient control in microfluidics

    NASA Astrophysics Data System (ADS)

    Selva, Bertrand; Marchalot, Julien; Jullien, Marie-Caroline

    2009-06-01

    In this paper, we demonstrate the possibility of generating high-temperature gradients with a linear temperature profile when heating is provided in situ. Thanks to improved optimization algorithms, the shape of resistors, which constitute the heating source, is optimized by applying the genetic algorithm NSGA-II (acronym for the non-dominated sorting genetic algorithm) (Deb et al 2002 IEEE Trans. Evol. Comput. 6 2). Experimental validation of the linear temperature profile within the cavity is carried out using a thermally sensitive fluorophore, called Rhodamine B (Ross et al 2001 Anal. Chem. 73 4117-23, Erickson et al 2003 Lab Chip 3 141-9). The high level of agreement obtained between experimental and numerical results serves to validate the accuracy of this method for generating highly controlled temperature profiles. In the field of actuation, such a device is of potential interest since it allows for controlling bubbles or droplets moving by means of thermocapillary effects (Baroud et al 2007 Phys. Rev. E 75 046302). Digital microfluidics is a critical area in the field of microfluidics (Dreyfus et al 2003 Phys. Rev. Lett. 90 14) as well as in the so-called lab-on-a-chip technology. Through an example, the large application potential of such a technique is demonstrated, which entails handling a single bubble driven along a cavity using simple and tunable embedded resistors.

  7. A parallel optimization method for product configuration and supplier selection based on interval

    NASA Astrophysics Data System (ADS)

    Zheng, Jian; Zhang, Meng; Li, Guoxi

    2017-06-01

    In the process of design and manufacturing, product configuration is an important way of product development, and supplier selection is an essential component of supply chain management. To reduce the risk of procurement and maximize the profits of enterprises, this study proposes to combine product configuration and supplier selection, and to express the multiple uncertainties as interval numbers. An integrated optimization model of interval product configuration and supplier selection was established, and NSGA-II was employed to locate the Pareto-optimal solutions of the interval multiobjective optimization model.

  8. A novel model of magnetorheological damper with hysteresis division

    NASA Astrophysics Data System (ADS)

    Yu, Jianqiang; Dong, Xiaomin; Zhang, Zonglun

    2017-10-01

    Due to the complex nonlinearity of magnetorheological (MR) behavior, the modeling of MR dampers is a challenge, and a simple and effective model of the MR damper remains a work in progress. A novel model of the MR damper is proposed in this study using a force-velocity hysteresis division method. A typical hysteresis loop of an MR damper can be simply divided into two novel curves with this division idea: one is the backbone curve and the other is the branch curve. Exponential-family functions capturing the characteristics of the two curves can simplify the model and improve the identification efficiency. To illustrate and validate the novel phenomenological model with the hysteresis division idea, a dual-end MR damper is designed and tested. Based on the experimental data, the characteristics of the novel curves are investigated. To simplify parameter identification and obtain reversibility, the maximum force part, the non-dimensional backbone part and the non-dimensional branch part are derived from the two curves; the maximum force part and the non-dimensional parts are combined multiplicatively. The maximum force part is dependent on the current and the maximum velocity. The non-dominated sorting genetic algorithm II (NSGA-II), based on design of experiments (DOE), is employed to identify the parameters of the normalized shape functions. Comparative analysis conducted on the identification results shows that the novel model, with few identification parameters, has higher accuracy and better predictive ability.

  9. A Distributed and Energy-Efficient Algorithm for Event K-Coverage in Underwater Sensor Networks

    PubMed Central

    Jiang, Peng; Xu, Yiming; Liu, Jun

    2017-01-01

    For event dynamic K-coverage algorithms, each management node selects its assistant node using a greedy algorithm, without considering the residual energy or situations in which a node is selected by several events. This approach affects network energy consumption and balance. Therefore, this study proposes a distributed and energy-efficient event K-coverage algorithm (DEEKA). First, after the network achieves 1-coverage, the nodes that detect the same event compete for the role of event management node based on the number of candidate nodes, the average residual energy, and the distance to the event. Second, each management node estimates the probability of its neighbor nodes’ being selected by the events it manages, using the distance level, the residual energy level, and the number of dynamic coverage events of these nodes. Third, each management node establishes an optimization model that takes the expected energy consumption, the residual energy variance of its neighbor nodes, and the detection performance of the events it manages as targets. Finally, each management node uses a constrained non-dominated sorting genetic algorithm (NSGA-II) to obtain the Pareto set of the model, and the best strategy via the technique for order preference by similarity to an ideal solution (TOPSIS). The algorithm first considers the effect of harsh underwater environments on information collection and transmission. It also considers the residual energy of a node and situations in which the node is selected by several other events. Simulation results show that, unlike the on-demand variable sensing K-coverage algorithm, DEEKA balances and reduces network energy consumption, thereby prolonging the network’s best service quality and lifetime. PMID:28106837

  10. A Distributed and Energy-Efficient Algorithm for Event K-Coverage in Underwater Sensor Networks.

    PubMed

    Jiang, Peng; Xu, Yiming; Liu, Jun

    2017-01-19

    For event dynamic K-coverage algorithms, each management node selects its assistant node using a greedy algorithm, without considering the residual energy or situations in which a node is selected by several events. This approach affects network energy consumption and balance. Therefore, this study proposes a distributed and energy-efficient event K-coverage algorithm (DEEKA). First, after the network achieves 1-coverage, the nodes that detect the same event compete for the role of event management node based on the number of candidate nodes, the average residual energy, and the distance to the event. Second, each management node estimates the probability of its neighbor nodes' being selected by the events it manages, using the distance level, the residual energy level, and the number of dynamic coverage events of these nodes. Third, each management node establishes an optimization model that takes the expected energy consumption, the residual energy variance of its neighbor nodes, and the detection performance of the events it manages as targets. Finally, each management node uses a constrained non-dominated sorting genetic algorithm (NSGA-II) to obtain the Pareto set of the model, and the best strategy via the technique for order preference by similarity to an ideal solution (TOPSIS). The algorithm first considers the effect of harsh underwater environments on information collection and transmission. It also considers the residual energy of a node and situations in which the node is selected by several other events. Simulation results show that, unlike the on-demand variable sensing K-coverage algorithm, DEEKA balances and reduces network energy consumption, thereby prolonging the network's best service quality and lifetime.

  11. Multi-Satellite Observation Scheduling for Large Area Disaster Emergency Response

    NASA Astrophysics Data System (ADS)

    Niu, X. N.; Tang, H.; Wu, L. X.

    2018-04-01

    Generating an optimal imaging plan plays a key role in coordinating multiple satellites to monitor a disaster area. In this paper, to generate imaging plans dynamically as disaster relief proceeds, we propose a dynamic satellite task scheduling method for large-area disaster response. First, an initial robust scheduling scheme is generated by a robust satellite scheduling model in which both the profit and the robustness of the schedule are simultaneously maximized. Then, we use a multi-objective optimization model to obtain a series of decomposing schemes. Based on the initial imaging plan, we propose a mixed optimizing algorithm named HA_NSGA-II to allocate the decomposition results and thus obtain an adjusted imaging schedule. A real disaster scenario, the 2008 Wenchuan earthquake, is revisited in terms of rapid response using satellite resources and used to evaluate the performance of the proposed method against state-of-the-art approaches. We conclude that our satellite scheduling model can optimize the usage of satellite resources so as to obtain images for disaster response in a more timely and efficient manner.

  12. Identification of a thermo-elasto-viscoplastic behavior law for the simulation of thermoforming of high impact polystyrene

    NASA Astrophysics Data System (ADS)

    Atmani, O.; Abbès, B.; Abbès, F.; Li, Y. M.; Batkam, S.

    2018-05-01

    Thermoforming of high impact polystyrene (HIPS) sheets requires technical knowledge of material behavior, mold type, mold material, and process variables. Accurate thermoforming simulations are needed in the optimization process, and determining the behavior of the material under thermoforming conditions is one of the key requirements for an accurate simulation. The aim of this work is to identify the thermomechanical behavior of HIPS under thermoforming conditions. HIPS behavior is highly dependent on temperature and strain rate. In order to reproduce the behavior of such a material, a thermo-elasto-viscoplastic constitutive law was implemented in the finite element code ABAQUS. The proposed model parameters are considered thermo-dependent, and the strain-rate dependence is introduced using Prony series. Tensile tests were carried out at different temperatures and strain rates, and the material parameters were then identified using the NSGA-II algorithm. To validate the rheological model, experimental blowing tests were carried out on a thermoforming pilot machine; to compare the numerical results with the experimental ones, the thickness distribution and the bubble shape were investigated.

  13. Dealing with equality and benefit for water allocation in a lake watershed: A Gini-coefficient based stochastic optimization approach

    NASA Astrophysics Data System (ADS)

    Dai, C.; Qin, X. S.; Chen, Y.; Guo, H. C.

    2018-06-01

    A Gini-coefficient based stochastic optimization (GBSO) model was developed by integrating a hydrological model, a water balance model, the Gini coefficient and chance-constrained programming (CCP) into a general multi-objective optimization modeling framework for supporting water resources allocation at the watershed scale. The framework is advantageous in reflecting the conflicting equity and benefit objectives of water allocation, maintaining the water balance of the watershed, and dealing with system uncertainties. GBSO was solved by the non-dominated sorting genetic algorithm II (NSGA-II) after the parameter uncertainties of the hydrological model had been quantified into the probability distribution of runoff as the input of the CCP model, and the chance constraints had been converted to their corresponding deterministic versions. The proposed model was applied to identify the Pareto optimal water allocation schemes in the Lake Dianchi watershed, China. The Pareto-front results reflect the tradeoff between system benefit (αSB) and the Gini coefficient (αG) under different significance levels (i.e., q) and different drought scenarios, which reveals the conflicting nature of equity and efficiency in water allocation problems. A lower q generally implies a lower risk of violating the system constraints, and a worse drought intensity scenario corresponds to less available water; both lead to a decreased system benefit and a less equitable water allocation scheme. Thus, the proposed modeling framework can help obtain the Pareto optimal schemes under complexity and ensure that the proposed water allocation solutions are effective for coping with drought conditions, with a proper tradeoff between system benefit and water allocation equity.
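
    The equity objective here is the standard Gini coefficient, which for an allocation vector can be computed from the mean absolute pairwise difference. A minimal sketch with hypothetical allocations:

    ```python
    # Sketch of a Gini-coefficient equity objective over water allocations,
    # using the mean-absolute-difference form of the index.
    import numpy as np

    def gini(x):
        x = np.asarray(x, dtype=float)
        diffs = np.abs(x[:, None] - x[None, :]).mean()  # mean pairwise |xi - xj|
        return diffs / (2.0 * x.mean())

    print(gini([1.0, 1.0, 1.0, 1.0]))   # 0.0  -> perfectly equitable
    print(gini([4.0, 0.0, 0.0, 0.0]))   # 0.75 -> highly inequitable
    ```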

  14. Estimation of the discharges of the multiple water level stations by multi-objective optimization

    NASA Astrophysics Data System (ADS)

    Matsumoto, Kazuhiro; Miyamoto, Mamoru; Yamakage, Yuzuru; Tsuda, Morimasa; Yanami, Hitoshi; Anai, Hirokazu; Iwami, Yoichi

    2016-04-01

    This presentation addresses two aspects of parameter identification for estimating the discharges of multiple water level stations by multi-objective optimization: how to adjust the parameters to estimate the discharges accurately, and which optimization algorithms are suitable for the parameter identification. Among previous studies, one minimizes the weighted error of the discharges of multiple water level stations by single-objective optimization, while others minimize multiple error assessment functions of the discharge of a single water level station by multi-objective optimization. This presentation features the simultaneous minimization of the errors of the discharges of multiple water level stations by multi-objective optimization. The Abe River basin in Japan is targeted. The basin area is 567.0 km2, with thirteen rainfall stations and three water level stations. Nine flood events are investigated, which occurred from 2005 to 2012 with maximum discharges exceeding 1,000 m3/s. The discharges are calculated with the PWRI distributed hydrological model. The basin is partitioned into meshes of 500 m x 500 m, and two-layer tanks are placed on each mesh. Fourteen parameters are adjusted to estimate the discharges accurately: twelve hydrological parameters and two parameters for the initial water levels of the tanks. The three objective functions are the mean squared errors between the observed and calculated discharges at the water level stations. The discharges are first calculated with respect to parameter values sampled by a simplified version of Latin Hypercube sampling, a uniform sampling algorithm; the observed discharge is surrounded by the calculated discharges, which suggests that it might be possible to estimate the discharge accurately by adjusting the parameters. Indeed, the discharge of a single water level station can be estimated accurately by setting the parameter values optimized for that station. However, in some cases the discharge calculated with parameter values optimized for one water level station does not match the observed discharge at another station, and it is important to estimate the discharges of all the water level stations with some degree of accuracy. It turns out to be possible to select parameter values from the Pareto optimal solutions under the condition that, at every station, the error normalized by the minimum error of that station is under 3. The optimization performance of five implementations of the algorithms and a simplified version of Latin Hypercube sampling are compared. The five implementations are NSGA2 and PAES from the optimization software inspyred, and MCO_NSGA2R, MOPSOCD and NSGA2R_NSGA2R from the statistical software R. NSGA2, PAES and MOPSOCD are a genetic algorithm, an evolution strategy and a particle swarm optimization, respectively. The number of evaluations of the objective functions is 10,000. The two implementations of NSGA2 in R outperform the others and are promising candidates for the parameter identification of the PWRI distributed hydrological model.
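
    The reported selection rule, keeping only Pareto solutions whose error at every station stays within three times that station's best error, is straightforward to express. A sketch over a hypothetical Pareto front of station-wise errors:

    ```python
    # Sketch of the selection rule: keep Pareto solutions whose error at each
    # station, normalized by that station's best error, is below 3.
    import numpy as np

    # Rows: Pareto solutions; columns: MSE at the three water level stations.
    errors = np.array([[1.2, 9.0, 2.0],
                       [2.0, 3.5, 2.5],
                       [5.0, 3.0, 1.8],
                       [1.5, 4.0, 6.5]])

    best = errors.min(axis=0)                    # per-station minimum error
    normalized = errors / best
    selected = np.all(normalized < 3.0, axis=1)  # acceptable at every station
    print(np.where(selected)[0])                 # indices of balanced solutions
    ```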

  15. Optimization of multi-reservoir operation with a new hedging rule: application of fuzzy set theory and NSGA-II

    NASA Astrophysics Data System (ADS)

    Ahmadianfar, Iman; Adib, Arash; Taghian, Mehrdad

    2017-10-01

    The reservoir hedging rule curves are used to avoid severe water shortage during drought periods. In this method, reservoir storage is divided into several zones, and the rationing factors change immediately when the water storage level moves from one zone to another. In the present study, a hedging rule with fuzzy rationing factors was applied to create a transition zone above and below each rule curve, within which the rationing factor changes gradually. For this purpose, a monthly simulation model was developed and linked to the non-dominated sorting genetic algorithm for calculation of the modified shortage index of two objective functions involving water supply of minimum flow and agricultural demands over a long-term simulation period. The Zohre multi-reservoir system in southern Iran was considered as a case study. The proposed hedging rule improved long-term system performance by 10 to 27 percent in comparison with the simple hedging rule, demonstrating that the fuzzification of hedging factors increases the applicability and efficiency of the new hedging rule, relative to the conventional rule curve, for mitigating the water shortage problem.

  16. Structural damage detection-oriented multi-type sensor placement with multi-objective optimization

    NASA Astrophysics Data System (ADS)

    Lin, Jian-Fu; Xu, You-Lin; Law, Siu-Seong

    2018-05-01

    A structural damage detection-oriented multi-type sensor placement method with multi-objective optimization is developed in this study. The multi-type response covariance sensitivity-based damage detection method is first introduced. Two objective functions for optimal sensor placement are then introduced in terms of the response covariance sensitivity and the response independence. The multi-objective optimization problem is formed by using the two objective functions, and the non-dominated sorting genetic algorithm II (NSGA-II) is adopted to find the solution for the optimal multi-type sensor placement to achieve the best structural damage detection. The proposed method is finally applied to a nine-bay three-dimensional frame structure. Numerical results show that the optimal multi-type sensor placement determined by the proposed method can avoid redundant sensors and provide satisfactory results for structural damage detection. The restriction on the number of each type of sensor can reduce the searching space in the optimization and make the proposed method more effective. Moreover, how to select the most suitable sensor placement from the Pareto solutions via the utility function and the knee point method is demonstrated in the case study.

  17. On 4-degree-of-freedom biodynamic models of seated occupants: Lumped-parameter modeling

    NASA Astrophysics Data System (ADS)

    Bai, Xian-Xu; Xu, Shi-Xu; Cheng, Wei; Qian, Li-Jun

    2017-08-01

    It is useful to develop an effective biodynamic model of seated human occupants to help understand the human vibration exposure to transportation vehicle vibrations and to help design and improve the anti-vibration devices and/or test dummies. This study proposed and demonstrated a methodology for systematically identifying the best configuration or structure of a 4-degree-of-freedom (4DOF) human vibration model and for its parameter identification. First, an equivalent simplification expression for the models was made. Second, all of the possible 23 structural configurations of the models were identified. Third, each of them was calibrated using the frequency response functions recommended in a biodynamic standard. An improved version of non-dominated sorting genetic algorithm (NSGA-II) based on Pareto optimization principle was used to determine the model parameters. Finally, a model evaluation criterion proposed in this study was used to assess the models and to identify the best one, which was based on both the goodness of curve fits and comprehensive goodness of the fits. The identified top configurations were better than those reported in the literature. This methodology may also be extended and used to develop the models with other DOFs.

  18. Optimal fault-tolerant control strategy of a solid oxide fuel cell system

    NASA Astrophysics Data System (ADS)

    Wu, Xiaojuan; Gao, Danhui

    2017-10-01

    For solid oxide fuel cell (SOFC) development, load tracking, heat management, air excess ratio constraint, high efficiency, low cost and fault diagnosis are six key issues. However, no literature studies control techniques combining optimization and fault diagnosis for the SOFC system. An optimal fault-tolerant control strategy is presented in this paper, which involves four parts: a fault diagnosis module, a switching module, two backup optimizers and a control loop. The fault diagnosis part identifies the current SOFC fault type, and the switching module selects the appropriate backup optimizer based on the diagnosis result. NSGA-II and TOPSIS are employed to design the two backup optimizers for the normal and air compressor fault states. A PID algorithm is used to design the control loop, which includes a power tracking controller, an anode inlet temperature controller, a cathode inlet temperature controller and an air excess ratio controller. The simulation results show that the proposed optimal fault-tolerant control method can track the power, temperature and air excess ratio at the desired values, simultaneously achieving the maximum efficiency and the minimum unit cost, both under normal SOFC conditions and even under the air compressor fault.

  19. A meta-heuristic approach supported by NSGA-II for the design and plan of supply chain networks considering new product development

    NASA Astrophysics Data System (ADS)

    Alizadeh Afrouzy, Zahra; Paydar, Mohammad Mahdi; Nasseri, Seyed Hadi; Mahdavi, Iraj

    2018-03-01

    There are many reasons for the growing interest in developing new product projects for any firm. The most prominent reason is survival in a highly competitive industry in which customer tastes change rapidly. A well-managed supply chain network that considers new product development can provide the most profit for firms. Along with profit, customer satisfaction and the production of new products are goals which lead to a more efficient supply chain. As new products appear in the market, old products can become obsolete and then be phased out. The most important parameter in a supply chain which considers new and developed products is the time at which developed and new products are introduced and old products are phased out. With consideration of the factors noted above, this study proposes a tri-objective multi-echelon multi-product multi-period supply chain model, which incorporates product development and new product production and their effects on supply chain configuration. The supply chain under consideration is assumed to consist of suppliers, manufacturers, distributors and customer groups. To overcome the NP-hardness of the proposed model and solve the complicated problem, a non-dominated sorting genetic algorithm is employed. As there is no benchmark available in the literature, a non-dominated ranking genetic algorithm is developed to validate the results obtained, and test problems are provided to show the applicability of the proposed methodology and evaluate the performance of the algorithms.

  20. Investigation of trunk muscle activities during lifting using a multi-objective optimization-based model and intelligent optimization algorithms.

    PubMed

    Ghiasi, Mohammad Sadegh; Arjmand, Navid; Boroushaki, Mehrdad; Farahmand, Farzam

    2016-03-01

    A six-degree-of-freedom musculoskeletal model of the lumbar spine was developed to predict the activity of trunk muscles during light, moderate and heavy lifting tasks in a standing posture. The model was formulated as a multi-objective optimization problem, minimizing the sum of the cubed muscle stresses and maximizing the spinal stability index. Two intelligent optimization algorithms, i.e., vector evaluated particle swarm optimization (VEPSO) and the nondominated sorting genetic algorithm (NSGA), were employed to solve the optimization problem. The optimal solution for each task was then selected such that the corresponding in vivo intradiscal pressure could be reproduced. Results indicated that both algorithms predicted co-activity in the antagonistic abdominal muscles, as well as an increase in the stability index when going from the light to the heavy task. For all of the light, moderate and heavy tasks, the muscle activity predictions of VEPSO and NSGA were generally consistent and of the same order as the in vivo electromyography data. The proposed methodology is thought to provide improved estimations of muscle activities by considering spinal stability and incorporating the in vivo intradiscal pressure data.

  1. Implementing a Multiple Criteria Model Base in Co-Op with a Graphical User Interface Generator

    DTIC Science & Technology

    1993-09-23

    The report describes the basic algorithm of PROMETHEE I and PROMETHEE II, the use of the algorithm in PROMETHEE I and in PROMETHEE II, the algorithm of PROMETHEE V, and the screen designs of PROMETHEE I and PROMETHEE II.

  2. Performance comparison of optimal fractional order hybrid fuzzy PID controllers for handling oscillatory fractional order processes with dead time.

    PubMed

    Das, Saptarshi; Pan, Indranil; Das, Shantanu

    2013-07-01

    Fuzzy logic based PID controllers have been studied in this paper, considering several combinations of hybrid controllers by grouping the proportional, integral and derivative actions with fuzzy inferencing in different forms. Fractional order (FO) rate of error signal and FO integral of control signal have been used in the design of a family of decomposed hybrid FO fuzzy PID controllers. The input and output scaling factors (SF) along with the integro-differential operators are tuned with real coded genetic algorithm (GA) to produce optimum closed loop performance by simultaneous consideration of the control loop error index and the control signal. Three different classes of fractional order oscillatory processes with various levels of relative dominance between time constant and time delay have been used to test the comparative merits of the proposed family of hybrid fractional order fuzzy PID controllers. Performance comparison of the different FO fuzzy PID controller structures has been done in terms of optimal set-point tracking, load disturbance rejection and minimal variation of manipulated variable or smaller actuator requirement etc. In addition, multi-objective Non-dominated Sorting Genetic Algorithm (NSGA-II) has been used to study the Pareto optimal trade-offs between the set point tracking and control signal, and the set point tracking and load disturbance performance for each of the controller structure to handle the three different types of processes. Copyright © 2013 ISA. Published by Elsevier Ltd. All rights reserved.

  3. Identification of significant factors in fatal-injury highway crashes using genetic algorithm and neural network.

    PubMed

    Li, Yunjie; Ma, Dongfang; Zhu, Mengtao; Zeng, Ziqiang; Wang, Yinhai

    2018-02-01

    Identification of the significant factors of traffic crashes has been a primary concern of the transportation safety research community for many years. A fatal-injury crash is a comprehensive outcome influenced by multiple variables involved at the moment of the crash scenario. The main idea of this paper is to explore the process of significant factor identification from a multi-objective optimization (MOP) standpoint. It proposes a data-driven model which combines the Non-dominated Sorting Genetic Algorithm (NSGA-II) with a Neural Network (NN) architecture to efficiently search for optimal solutions. This paper also defines an index of Factor Significance (F_s) for quantitative evaluation of the significance of each factor. Based on three years of crash records collected from three main interstate highways in Washington State, the proposed method reveals that the top five significant factors for fatal-injury crash identification are 1) Driver Conduct, 2) Vehicle Action, 3) Roadway Surface Condition, 4) Driver Restraint and 5) Driver Age. The most sensitive factors from a spatiotemporal perspective are the Hour of Day, Most Severe Sobriety, and Roadway Characteristics. The method and results in this paper provide new insights into the injury pattern of highway crashes and may be used to improve the understanding, prevention, and enforcement efforts related to injury crashes in the future. Copyright © 2017. Published by Elsevier Ltd.

  4. ASR-9 processor augmentation card (9-PAC) phase II scan-scan correlator algorithms

    DOT National Transportation Integrated Search

    2001-04-26

    The report documents the scan-scan correlator (tracker) algorithm developed for Phase II of the ASR-9 Processor Augmentation Card (9-PAC) project. The improved correlation and tracking algorithms in 9-PAC Phase II decrease the incidence of false-alar...

  5. Evolutionary multiobjective design of a flexible caudal fin for robotic fish.

    PubMed

    Clark, Anthony J; Tan, Xiaobo; McKinley, Philip K

    2015-11-25

    Robotic fish accomplish swimming by deforming their bodies or other fin-like appendages. As an emerging class of embedded computing system, robotic fish are anticipated to play an important role in environmental monitoring, inspection of underwater structures, tracking of hazardous wastes and oil spills, and the study of live fish behaviors. While integration of flexible materials (into the fins and/or body) holds the promise of improved swimming performance (in terms of both speed and maneuverability) for these robots, such components also introduce significant design challenges due to the complex material mechanics and hydrodynamic interactions. The problem is further exacerbated by the need for the robots to meet multiple objectives (e.g., both speed and energy efficiency). In this paper, we propose an evolutionary multiobjective optimization approach to the design and control of a robotic fish with a flexible caudal fin. Specifically, we use the NSGA-II algorithm to investigate morphological and control parameter values that optimize swimming speed and power usage. Several evolved fin designs are validated experimentally with a small robotic fish, where fins of different stiffness values and sizes are printed with a multi-material 3D printer. Experimental results confirm the effectiveness of the proposed design approach in balancing the two competing objectives.

  6. A Multi-Objective Compounded Local Mobile Cloud Architecture Using Priority Queues to Process Multiple Jobs.

    PubMed

    Wei, Xiaohui; Sun, Bingyi; Cui, Jiaxu; Xu, Gaochao

    2016-01-01

    As a result of the greatly increased use of mobile devices, the disadvantages of portable devices have gradually begun to emerge. To solve these problems, the use of mobile cloud computing assisted by cloud data centers has been proposed. However, cloud data centers are always very far from the mobile requesters. In this paper, we propose an improved multi-objective local mobile cloud model: Compounded Local Mobile Cloud Architecture with Dynamic Priority Queues (LMCpri). This new architecture briefly stores jobs that arrive simultaneously at the cloudlet in different priority positions according to the result of auction processing, and then executes partitioned tasks on capable helpers. In the Scheduling Module, NSGA-II is employed as the scheduling algorithm to shorten processing time and decrease requester cost relative to PSO and sequential scheduling. The simulation results show that setting the number of iterations to 30 is the best choice for the system. In addition, compared with LMCque, LMCpri is able to effectively accommodate a requester who would like his job to be executed in advance, and it shortens execution time. Finally, we conduct a comparative experiment between LMCpri and a cloud-assisting architecture; the results reveal that LMCpri presents a clear performance advantage over the cloud-assisting architecture.

  7. A Multi-Objective Compounded Local Mobile Cloud Architecture Using Priority Queues to Process Multiple Jobs

    PubMed Central

    Wei, Xiaohui; Sun, Bingyi; Cui, Jiaxu; Xu, Gaochao

    2016-01-01

    As a result of the greatly increased use of mobile devices, the disadvantages of portable devices have gradually begun to emerge. To solve these problems, the use of mobile cloud computing assisted by cloud data centers has been proposed. However, cloud data centers are always very far from the mobile requesters. In this paper, we propose an improved multi-objective local mobile cloud model: Compounded Local Mobile Cloud Architecture with Dynamic Priority Queues (LMCpri). This new architecture briefly stores jobs that arrive simultaneously at the cloudlet in different priority positions according to the result of auction processing, and then executes partitioned tasks on capable helpers. In the Scheduling Module, NSGA-II is employed as the scheduling algorithm to shorten processing time and decrease requester cost relative to PSO and sequential scheduling. The simulation results show that setting the number of iterations to 30 is the best choice for the system. In addition, compared with LMCque, LMCpri is able to effectively accommodate a requester who would like his job to be executed in advance, and it shortens execution time. Finally, we conduct a comparative experiment between LMCpri and a cloud-assisting architecture; the results reveal that LMCpri presents a clear performance advantage over the cloud-assisting architecture. PMID:27419854
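
    The dynamic priority queue at the core of LMCpri can be illustrated with Python's heapq, ordering simultaneously arriving jobs by an auction-derived priority. The job names and bids below are hypothetical.

    ```python
    # Sketch of a dynamic priority queue for arriving jobs: jobs that arrive
    # together are served in order of auction-derived priority. heapq pops the
    # lowest value, so the priority is negated on insertion.
    import heapq

    queue = []
    for job, bid in [("job-A", 2.0), ("job-B", 5.0), ("job-C", 3.5)]:
        heapq.heappush(queue, (-bid, job))   # higher bid -> served earlier

    while queue:
        bid, job = heapq.heappop(queue)
        print(job, "dispatched with priority", -bid)
    ```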

  8. Algorithms and sensitivity analyses for Stratospheric Aerosol and Gas Experiment II water vapor retrieval

    NASA Technical Reports Server (NTRS)

    Chu, W. P.; Chiou, E. W.; Larsen, J. C.; Thomason, L. W.; Rind, D.; Buglia, J. J.; Oltmans, S.; Mccormick, M. P.; Mcmaster, L. M.

    1993-01-01

    The operational inversion algorithm used for the retrieval of the water-vapor vertical profiles from the Stratospheric Aerosol and Gas Experiment II (SAGE II) occultation data is presented. Unlike the algorithm used for the retrieval of aerosol, O3, and NO2, the water-vapor retrieval algorithm accounts for the nonlinear relationship between the concentration versus the broad-band absorption characteristics of water vapor. Problems related to the accuracy of the computational scheme, the accuracy of the removal of other interfering species, and the expected uncertainty of the retrieved profile are examined. Results are presented on the error analysis of the SAGE II water vapor retrieval, indicating that the SAGE II instrument produced good quality water vapor data.

  9. Study and Optimization of Helicopter Subfloor Energy Absorption Structure with Foldcore Sandwich Structures

    NASA Astrophysics Data System (ADS)

    HuaZhi, Zhou; ZhiJin, Wang

    2017-11-01

    The intersection element is an important part of the helicopter subfloor structure. In order to improve its crashworthiness properties, the floor and the skin of the intersection element are replaced with foldcore sandwich structures, a class of high-energy-absorption structures. Compared with the original structure, the new intersection element shows better buffering and energy-absorption capacity. To reduce the structure's mass while keeping the crashworthiness requirements satisfied, the geometric parameters of the intersection element are optimized using NSGA-II together with anisotropic Kriging. Replacing the numerical model with the anisotropic Kriging surrogate yields a significant saving in CPU time, and the optimization reduces the mass of the intersection element by 17.15%.

  10. Development of the atmospheric correction algorithm for the next generation geostationary ocean color sensor data

    NASA Astrophysics Data System (ADS)

    Lee, Kwon-Ho; Kim, Wonkook

    2017-04-01

    The Geostationary Ocean Color Imager-II (GOCI-II) is designed to focus on ocean environmental monitoring with better spatial (250 m for local and 1 km for full disk) and spectral (13 bands) resolution than the current operational GOCI-I mission, and will be launched in 2018. This study presents the algorithm currently under development for atmospheric correction and retrieval of surface reflectance over land, optimized for the sensor's characteristics. We first derived top-of-atmosphere radiances in the 13 GOCI-II bands as proxy data from a parameterized radiative transfer code. Based on the proxy data, the algorithm performs cloud masking, gas absorption correction, aerosol inversion, and aerosol extinction correction. The retrieved surface reflectances are evaluated against the MODIS level 2 surface reflectance products (MOD09). For the initial test period, the algorithm agreed with MOD09 to within an error of 0.05. Further work will fully implement the algorithm in the GOCI-II Ground Segment system (G2GS) algorithm development environment. This atmospherically corrected surface reflectance product will become a standard GOCI-II product after launch.

  11. Sambot II: A self-assembly modular swarm robot

    NASA Astrophysics Data System (ADS)

    Zhang, Yuchao; Wei, Hongxing; Yang, Bo; Jiang, Cancan

    2018-04-01

    Sambot II, the new generation of the self-assembly modular swarm robot Sambot, which adopts a laser and camera module for information collection, is introduced in this manuscript. The visual control algorithm of Sambot II is detailed, and its feasibility is verified by laser and camera experiments. At the end of the manuscript, autonomous docking experiments with two Sambot II robots are presented; the results are shown and analyzed to verify the feasibility of the whole Sambot II scheme.

  12. Multi-objective optimisation of wastewater treatment plant control to reduce greenhouse gas emissions.

    PubMed

    Sweetapple, Christine; Fu, Guangtao; Butler, David

    2014-05-15

    This study investigates the potential of control strategy optimisation for the reduction of operational greenhouse gas emissions from wastewater treatment in a cost-effective manner, and demonstrates that significant improvements can be realised. A multi-objective evolutionary algorithm, NSGA-II, is used to derive sets of Pareto optimal operational and control parameter values for an activated sludge wastewater treatment plant, with objectives including minimisation of greenhouse gas emissions, operational costs and effluent pollutant concentrations, subject to legislative compliance. Different problem formulations are explored, to identify the most effective approach to emissions reduction, and the sets of optimal solutions enable identification of trade-offs between conflicting objectives. It is found that multi-objective optimisation can facilitate a significant reduction in greenhouse gas emissions without the need for plant redesign or modification of the control strategy layout, but there are trade-offs to consider: most importantly, if operational costs are not to be increased, reduction of greenhouse gas emissions is likely to incur an increase in effluent ammonia and total nitrogen concentrations. Design of control strategies for a high effluent quality and low costs alone is likely to result in an inadvertent increase in greenhouse gas emissions, so it is of key importance that effects on emissions are considered in control strategy development and optimisation. Copyright © 2014 Elsevier Ltd. All rights reserved.
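
    For readers unfamiliar with the Pareto machinery NSGA-II relies on, the following minimal sketch filters a set of candidate operating points down to the non-dominated front. The objective values are illustrative placeholders, not data from the study.

```python
import numpy as np

def pareto_front(F):
    """Return indices of the non-dominated rows of F (all objectives minimized)."""
    n = F.shape[0]
    keep = np.ones(n, dtype=bool)
    for i in range(n):
        # row j dominates row i if j is <= i everywhere and < i somewhere
        dominated = np.all(F <= F[i], axis=1) & np.any(F < F[i], axis=1)
        if dominated.any():
            keep[i] = False
    return np.where(keep)[0]

# columns: (greenhouse gas emissions, operational cost), both to be minimized
F = np.array([[1.0, 9.0], [2.0, 4.0], [3.0, 3.0], [4.0, 8.0], [5.0, 1.0]])
print(pareto_front(F))  # [0 1 2 4]; the point [4, 8] is dominated by [2, 4]
```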

  13. Exploring synergistic benefits of Water-Food-Energy Nexus through multi-objective reservoir optimization schemes.

    PubMed

    Uen, Tinn-Shuan; Chang, Fi-John; Zhou, Yanlai; Tsai, Wen-Ping

    2018-08-15

    This study proposed a holistic three-fold scheme that synergistically optimizes the benefits of the Water-Food-Energy (WFE) Nexus by integrating the short/long-term joint operation of a multi-objective reservoir with irrigation ponds in response to urbanization. The three-fold scheme was implemented step by step: (1) optimizing short-term (daily scale) reservoir operation for maximizing hydropower output and final reservoir storage during typhoon seasons; (2) simulating long-term (ten-day scale) water shortage rates in consideration of the availability of irrigation ponds for both agricultural and public sectors during non-typhoon seasons; and (3) promoting the synergistic benefits of the WFE Nexus in a year-round perspective by integrating the short-term optimization and long-term simulation of reservoir operations. The pivotal Shihmen Reservoir and 745 irrigation ponds located in Taoyuan City of Taiwan, together with the surrounding urban areas, formed the study case. The results indicated that the optimal short-term reservoir operation obtained from the non-dominated sorting genetic algorithm II (NSGA-II) could largely increase hydropower output while only slightly affecting water supply. The simulation results of the reservoir coupled with irrigation ponds indicated that such joint operation could significantly reduce agricultural and public water shortage rates, by 22.2% and 23.7% on average, respectively, as compared to reservoir operation excluding irrigation ponds. The results of the year-round short/long-term joint operation showed that, in a wet year, water shortage rates could be reduced by up to 10%, the food production rate could be increased by up to 47%, and the hydropower benefit could increase by up to 9.33 million USD per year. Consequently, the proposed methodology could be a viable approach to promoting the synergistic benefits of the WFE Nexus, and the results provide unique insights for stakeholders and policymakers pursuing sustainable urban development plans. Copyright © 2018 Elsevier B.V. All rights reserved.

  14. SU-F-R-46: Predicting Distant Failure in Lung SBRT Using Multi-Objective Radiomics Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou, Z; Folkert, M; Iyengar, P

    2016-06-15

    Purpose: To predict distant failure in lung stereotactic body radiation therapy (SBRT) for early stage non-small cell lung cancer (NSCLC) using a new multi-objective radiomics model. Methods: Currently, most available radiomics models use overall accuracy as the objective function. However, due to data imbalance, a single objective may not reflect the performance of a predictive model. Therefore, we developed a multi-objective radiomics model that considers both sensitivity and specificity as objective functions simultaneously. The new model is used to predict distant failure in lung SBRT using 52 patients treated at our institute. Quantitative imaging features of PET and CT as well as clinical parameters are utilized to build the predictive model. Image features include intensity features (9), textural features (12), and geometric features (8). Clinical parameters for each patient include demographic parameters (4), tumor characteristics (8), treatment fraction schemes (4), and pretreatment medicines (6). The modeling procedure consists of two steps: extracting features from segmented tumors in PET and CT, and selecting features and training model parameters based on the multiple objectives. A support vector machine (SVM) is used as the predictive model, while the non-dominated sorting genetic algorithm II (NSGA-II) is used to solve the multi-objective optimization. Results: The accuracies for PET, clinical, CT, PET+clinical, PET+CT, CT+clinical, and PET+CT+clinical are 71.15%, 84.62%, 84.62%, 85.54%, 82.69%, 84.62%, and 86.54%, respectively. The sensitivities for the above seven combinations are 41.76%, 58.33%, 50.00%, 50.00%, 41.67%, 41.67%, and 58.33%, while the specificities are 80.00%, 92.50%, 90.00%, 97.50%, 92.50%, 97.50%, and 97.50%. Conclusion: A new multi-objective radiomics model for predicting distant failure in NSCLC treated with SBRT was developed. The experimental results show that the best performance is obtained by combining all features.
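
    The abstract's argument for two objectives instead of overall accuracy can be made concrete with a small sketch (synthetic labels; the SVM and feature selection are omitted): on imbalanced data, a trivial majority-class model scores 80% accuracy yet has zero sensitivity.

```python
import numpy as np

def sens_spec(y_true, y_pred):
    # sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    return tp / (tp + fn), tn / (tn + fp)

y = np.array([1, 1, 0, 0, 0, 0, 0, 0, 0, 0])  # imbalanced: 2 failures in 10
trivial = np.zeros(10, dtype=int)             # always predicts "no failure"
print(sens_spec(y, trivial))                  # (0.0, 1.0): 80% accurate, useless
```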

  15. Joint Inversion of 1-D Magnetotelluric and Surface-Wave Dispersion Data with an Improved Multi-Objective Genetic Algorithm and Application to the Data of the Longmenshan Fault Zone

    NASA Astrophysics Data System (ADS)

    Wu, Pingping; Tan, Handong; Peng, Miao; Ma, Huan; Wang, Mao

    2018-05-01

    Magnetotellurics and seismic surface waves are two prominent geophysical methods for deep underground exploration, and joint inversion of the two datasets can help enhance the accuracy of inversion. In this paper, we describe an improved multi-objective genetic algorithm (NSGA-SBX) and apply it to two numerical tests to verify its advantages. Our findings show that joint inversion with the NSGA-SBX method can improve the inversion results by strengthening structural coupling when the discontinuities of the electrical and velocity models are consistent, and in the case of inconsistent discontinuities, joint inversion retains the advantages of the individual inversions. By applying the algorithm to four detection points along the Longmenshan fault zone, we observe several features. The Sichuan Basin demonstrates low S-wave velocity and high conductivity in the shallow crust, probably due to thick sedimentary layers. The eastern margin of the Tibetan Plateau shows high velocity and high resistivity in the shallow crust, while two low-velocity layers and a high-conductivity layer are observed in the middle-lower crust, probably indicating mid-crustal channel flow. Along the Longmenshan fault zone, a high-conductivity layer from 8 to 20 km is observed beneath the northern segment and decreases with depth beneath the middle segment, which might be caused by the elevated fluid content of the fault zone.

  16. Optimal Integration of Departure and Arrivals in Terminal Airspace

    NASA Technical Reports Server (NTRS)

    Xue, Min; Zelinski, Shannon Jean

    2012-01-01

    Coordination of operations with spatially and temporally shared resources such as route segments, fixes, and runways improves the efficiency of terminal airspace management. Problems in this category involve both scheduling and routing, so they are normally harder to solve than pure scheduling problems. In order to reduce the computational time, a fast-time algorithm formulation using a non-dominated sorting genetic algorithm (NSGA) was introduced in this work and applied to a test case from the existing literature. The experiment showed that the new method can solve the whole problem in fast time, instead of solving sub-problems sequentially with a window technique, and that a 60% (406 second) delay reduction was achieved by sharing departure fixes (more details on the comparison with MILP results will be presented in the final paper). Furthermore, the NSGA algorithm was applied to a problem in LAX terminal airspace, where interactions between 28% of LAX arrivals and 10% of LAX departures are resolved by spatial segregation, which may introduce unnecessary delays. In this work, spatial segregation, temporal segregation, and hybrid segregation were formulated using the new algorithm. Results showed that the spatial and temporal segregation approaches achieved similar delay, while hybrid segregation introduced much less delay than the other two approaches. For a total of 9 interacting departures and arrivals, the delay reduction varied from 4 minutes to 6.4 minutes, corresponding to flight time uncertainties from 0 to 60 seconds. Considering the number of flights that could be affected, the total annual savings with hybrid segregation would be significant.

  17. A Regionalization Approach to select the final watershed parameter set among the Pareto solutions

    NASA Astrophysics Data System (ADS)

    Park, G. H.; Micheletty, P. D.; Carney, S.; Quebbeman, J.; Day, G. N.

    2017-12-01

    The calibration of hydrological models often results in model parameters that are inconsistent with those of neighboring basins. Considering that physical similarity exists within neighboring basins, some of the physically related parameters should be consistent among them. Traditional manual calibration techniques require an iterative process to make the parameters consistent, which takes additional effort in model calibration. We developed a multi-objective optimization procedure to calibrate the National Weather Service (NWS) Research Distributed Hydrological Model (RDHM), using the Non-dominated Sorting Genetic Algorithm (NSGA-II) with expert knowledge of the model parameter interrelationships as one objective function. The multi-objective algorithm enables us to obtain diverse parameter sets that are equally acceptable with respect to the objective functions and to choose one from the pool of parameter sets during a subsequent regionalization step. Although all Pareto solutions are non-inferior, we exclude parameter sets that show extreme values for any of the objective functions to expedite the selection process. We use an a priori model parameter set derived from the physical properties of the watershed (Koren et al., 2000) to assess the similarity of a given parameter across basins. Each parameter is assigned a weight based on its assumed similarity, such that parameters that are similar across basins are given higher weights. The parameter weights are used to compute a closeness measure between Pareto sets of nearby basins, and the regionalization approach chooses the Pareto parameter set that minimizes the closeness measure of the basin being regionalized. The presentation will describe the results of applying the regionalization approach to a set of pilot basins in the Upper Colorado basin as part of a NASA-funded project.
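
    A hedged sketch of the closeness idea described above, with invented weights, scales, and parameter values: each parameter is weighted by its assumed cross-basin similarity, and the Pareto set nearest to the neighboring basin's set is selected.

```python
import numpy as np

def closeness(candidate, neighbor, weights, scale):
    # normalized, weighted L2 distance; smaller means more regionally consistent
    d = (candidate - neighbor) / scale
    return np.sqrt(np.sum(weights * d**2) / np.sum(weights))

weights = np.array([1.0, 0.8, 0.2])   # first parameter assumed most similar
scale = np.array([10.0, 1.0, 100.0])  # typical parameter ranges, for normalization
neighbor = np.array([5.0, 0.3, 40.0]) # calibrated set from a nearby basin
pareto_sets = [np.array([5.5, 0.4, 90.0]), np.array([8.0, 0.35, 45.0])]
best = min(pareto_sets, key=lambda p: closeness(p, neighbor, weights, scale))
print(best)
```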

  18. The novel EuroSCORE II algorithm predicts the hospital mortality of thoracic aortic surgery in 461 consecutive Japanese patients better than both the original additive and logistic EuroSCORE algorithms.

    PubMed

    Nishida, Takahiro; Sonoda, Hiromichi; Oishi, Yasuhisa; Tanoue, Yoshihisa; Nakashima, Atsuhiro; Shiokawa, Yuichi; Tominaga, Ryuji

    2014-04-01

    The European System for Cardiac Operative Risk Evaluation (EuroSCORE) II was developed to improve the overestimation of surgical risk associated with the original (additive and logistic) EuroSCOREs. The purpose of this study was to evaluate the significance of the EuroSCORE II by comparing its performance with that of the original EuroSCOREs in Japanese patients undergoing surgery on the thoracic aorta. We have calculated the predicted mortalities according to the additive EuroSCORE, logistic EuroSCORE and EuroSCORE II algorithms in 461 patients who underwent surgery on the thoracic aorta during a period of 20 years (1993-2013). The actual in-hospital mortality rates in the low- (additive EuroSCORE of 3-6), moderate- (7-11) and high-risk (≥11) groups (followed by overall mortality) were 1.3, 6.2 and 14.4% (7.2% overall), respectively. Among the three different risk groups, the expected mortality rates were 5.5 ± 0.6, 9.1 ± 0.7 and 13.5 ± 0.2% (9.5 ± 0.1% overall) by the additive EuroSCORE algorithm, 5.3 ± 0.1, 16 ± 0.4 and 42.4 ± 1.3% (19.9 ± 0.7% overall) by the logistic EuroSCORE algorithm and 1.6 ± 0.1, 5.2 ± 0.2 and 18.5 ± 1.3% (7.4 ± 0.4% overall) by the EuroSCORE II algorithm, indicating poor prediction (P < 0.0001) of the mortality in the high-risk group, especially by the logistic EuroSCORE. The areas under the receiver operating characteristic curves of the additive EuroSCORE, logistic EuroSCORE and EuroSCORE II algorithms were 0.6937, 0.7169 and 0.7697, respectively. Thus, the mortality expected by the EuroSCORE II more closely matched the actual mortality in all three risk groups. In contrast, the mortality expected by the logistic EuroSCORE overestimated the risks in the moderate- (P = 0.0002) and high-risk (P < 0.0001) patient groups. Although all of the original EuroSCOREs and EuroSCORE II appreciably predicted the surgical mortality for thoracic aortic surgery in Japanese patients, the EuroSCORE II best predicted the mortalities in all risk groups.
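
    The areas under the receiver operating characteristic curves quoted above can be computed from predicted risks and observed outcomes with the Mann-Whitney formulation (the probability that a random death received a higher predicted risk than a random survivor), as in this sketch on made-up data.

```python
import numpy as np

def auc(risk, died):
    pos, neg = risk[died == 1], risk[died == 0]
    # count pairs where the death outranks the survivor; ties count half
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

risk = np.array([0.02, 0.05, 0.08, 0.15, 0.30, 0.42])  # predicted mortality
died = np.array([0, 0, 1, 0, 1, 1])                    # observed outcome
print(auc(risk, died))  # 0.889 for this toy sample
```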

  19. A methodology for rapid vehicle scaling and configuration space exploration

    NASA Astrophysics Data System (ADS)

    Balaba, Davis

    2009-12-01

    The Configuration-space Exploration and Scaling Methodology (CESM) entails the representation of component or sub-system geometries as matrices of points in 3D space. These typically large matrices are reduced using minimal convex sets or convex hulls. This reduction leads to significant gains in collision detection speed at minimal approximation expense. (The Gilbert-Johnson-Keerthi algorithm [79] is used for collision detection purposes in this methodology.) Once the components are laid out, their collective convex hull (from here on referred to as the super-hull) is used to approximate the inner mold line of the minimum enclosing envelope of the vehicle concept. A sectional slicing algorithm is used to extract the sectional dimensions of this envelope, and an offset is added to these dimensions to arrive at the sectional fuselage dimensions. Once the lift and control surfaces are added, vehicle-level objective functions can be evaluated and compared to other designs. The size of the design space, coupled with the fact that some key constraints such as the number of collisions are discontinuous, dictates that a domain-spanning optimization routine be used. Also, as this is a conceptual design tool, the goal is to provide the designer with a diverse baseline geometry space from which to choose. For these reasons, a domain-spanning algorithm with counter-measures against speciation and genetic drift is the recommended optimization approach; the Non-dominated Sorting Genetic Algorithm (NSGA-II) [60] is shown to work well for the proof-of-concept study. There are two major reasons why the need to evaluate higher-fidelity, custom geometric scaling laws became a part of this body of work. First, historical-data-based regressions become implicitly unreliable when the vehicle concept in question is designed around a disruptive technology. Second, it was shown that simpler approaches such as photographic scaling can result in highly suboptimal concepts even for very small scaling factors. Yet good scaling information is critical to the success of any conceptual design process. In the CESM methodology, it is assumed that the new technology has matured enough to permit the prediction of the scaling behavior of the various subsystems in response to requirement changes. Updated subsystem geometry data is generated by applying the new requirement settings to the affected subsystems. All collisions are then eliminated using the NSGA-II algorithm, while minimizing the adverse impact on the vehicle packing density. Once all collisions are eliminated, the vehicle geometry is reconstructed and system-level data such as fuselage volume can be harvested. This process is repeated for all requirement settings. Dimensional analysis and regression can be carried out using this data and all other pertinent metrics in the manner described by Mendez [124] and Segel [173]. The dominant parameters for each response show up in the dimensionally consistent groups that form the independent variables. More importantly, the impact of changes in any of these variables on system-level dependent variables can be easily and rapidly evaluated. In this way, the conceptual design process can be accelerated without sacrificing analysis accuracy. Scaling laws for take-off gross weight and fuselage volume as functions of fuel cell specific power and power density for a notional General Aviation vehicle are derived for the proof of concept.
CESM enables the designer to maintain design freedom by portably carrying multiple designs deeper into the design process. Also, since CESM is a bottom-up approach, all proposed baseline concepts are implicitly volumetrically feasible. System-level geometry parameters become fall-outs as opposed to inputs. This is a critical attribute since, without the benefit of experience, a designer would be hard pressed to set appropriate ranges for such parameters for a vehicle built around a disruptive technology. Furthermore, scaling laws generated from custom data for each concept are subject to less design noise than, say, regression-based approaches. Through these laws, key physics-based characteristics of vehicle subsystems such as energy density can be mapped onto key system-level metrics such as fuselage volume or take-off gross weight. These laws can then substitute for some historical-data-based analyses, thereby improving the fidelity of the analyses and reducing design time. (Abstract shortened by UMI.)
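
    A small sketch of the point-cloud reduction step CESM relies on, using scipy's ConvexHull on random placeholder geometry (the GJK collision test itself is omitted):

```python
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(0)
component = rng.normal(size=(5000, 3))       # densely sampled component geometry
hull = ConvexHull(component)
reduced = component[hull.vertices]           # only the extreme points survive
print(component.shape, "->", reduced.shape)  # e.g. (5000, 3) -> (~60, 3)
```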

  20. 1984-1995 Evolution of Stratospheric Aerosol Size, Surface Area, and Volume Derived by Combining SAGE II and CLAES Extinction Measurements

    NASA Technical Reports Server (NTRS)

    Russell, Philip B.; Bauman, Jill J.

    2000-01-01

    This SAGE II Science Team task focuses on the development of a multi-wavelength, multi-sensor Look-Up-Table (LUT) algorithm for retrieving information about stratospheric aerosols from global satellite-based observations of particulate extinction. The LUT algorithm combines the 4-wavelength SAGE II extinction measurements (0.385 <= lambda <= 1.02 microns) with the 7.96 micron and 12.82 micron extinction measurements from the Cryogenic Limb Array Etalon Spectrometer (CLAES) instrument, thus increasing the information content available from either sensor alone. The algorithm uses the SAGE II/CLAES composite spectra in month-latitude-altitude bins to retrieve values and uncertainties of particle effective radius R(sub eff), surface area S, volume V, and size distribution width sigma(sub g).

  1. A Centered Projective Algorithm for Linear Programming

    DTIC Science & Technology

    1988-02-01

    ... Karmarkar's algorithm iterates this procedure. An alternative method, the so-called affine variant, was first proposed by Dikin [6] in 1967. ... "trajectories, II: Legendre transform coordinates and central trajectories," manuscripts, to appear in Transactions of the American Mathematical Society. [6] I.I. Dikin, "Iterative solution of problems of linear and quadratic programming," Soviet Mathematics Doklady 8 (1967), 674-675. [7] I.I. Dikin, "On the speed of an ...

  2. Solar Occultation Retrieval Algorithm Development

    NASA Technical Reports Server (NTRS)

    Lumpe, Jerry D.

    2004-01-01

    This effort addresses the comparison and validation of currently operational solar occultation retrieval algorithms, and the development of generalized algorithms for future application to multiple platforms, beginning with generalized forward-model algorithms capable of simulating transmission data from the POAM II/III and SAGE II/III instruments. Work in the 2nd quarter will focus on completion of the forward-model algorithms, including accurate spectral characteristics for all instruments, and comparison of simulated transmission data with actual level 1 instrument data for specific occultation events.

  3. Development of a multiobjective optimization tool for the selection and placement of best management practices for nonpoint source pollution control

    NASA Astrophysics Data System (ADS)

    Maringanti, Chetan; Chaubey, Indrajeet; Popp, Jennie

    2009-06-01

    Best management practices (BMPs) are effective in reducing the transport of agricultural nonpoint source pollutants to receiving water bodies. However, selection of BMPs for placement in a watershed requires optimization of the available resources to obtain the maximum possible pollution reduction. In this study, an optimization methodology is developed to select and place BMPs in a watershed to provide solutions that are both economically and ecologically effective. This novel approach develops and utilizes a BMP tool, a database that stores the pollution reduction and cost information of the different BMPs under consideration. The BMP tool replaces the dynamic linkage to the distributed-parameter watershed model during optimization and therefore reduces the computation time considerably. Total pollutant load from the watershed and net cost increase from the baseline were the two objective functions minimized during the optimization process. The optimization model, consisting of a multiobjective genetic algorithm (NSGA-II) in combination with a watershed simulation tool (the Soil and Water Assessment Tool, SWAT), was developed and tested for nonpoint source pollution control in the L'Anguille River watershed located in eastern Arkansas. The optimized solutions provided a trade-off between the two objective functions for sediment, phosphorus, and nitrogen reduction. The results indicated that buffer strips were very effective in controlling the nonpoint source pollutants leaving the croplands. The optimized BMP plans resulted in potential reductions of 33%, 32%, and 13% in sediment, phosphorus, and nitrogen loads, respectively, from the watershed.
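
    The BMP-tool idea, pre-computed reductions and costs replacing repeated SWAT runs inside the GA loop, can be sketched as a simple lookup table. All names and numbers below are illustrative, not values from the study.

```python
baseline_load = 100.0  # baseline sediment load, t/yr (invented)
bmp_table = {
    # bmp name: (fractional sediment reduction, cost in $/ha/yr)
    "buffer_strip": (0.35, 120.0),
    "no_till":      (0.20,  40.0),
    "cover_crop":   (0.15,  60.0),
}

def evaluate(plan):
    """plan maps field id -> (bmp name, treated hectares)."""
    load, cost = baseline_load, 0.0
    for field, (bmp, ha) in plan.items():
        reduction, unit_cost = bmp_table[bmp]
        load -= baseline_load * reduction * ha / 1000.0  # toy area scaling
        cost += unit_cost * ha
    return load, cost  # the two objectives the GA minimizes

print(evaluate({"f1": ("buffer_strip", 50), "f2": ("no_till", 200)}))
```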

  4. Multi-objective optimization of discrete time-cost tradeoff problem in project networks using non-dominated sorting genetic algorithm

    NASA Astrophysics Data System (ADS)

    Shahriari, Mohammadreza

    2016-06-01

    The time-cost tradeoff problem is one of the most important and applicable problems in project scheduling. There are many factors that force managers to crash the schedule, including early utilization, early commissioning and operation, improving the project cash flow, avoiding unfavorable weather conditions, and compensating for delays. Since extra resources must be allocated to shorten the project finishing time, and project managers intend to spend the lowest possible amount of money while achieving the maximum crashing time, both direct and indirect costs are influenced, and the time value of money comes into play: when we crash the starting activities of a project, the extra investment is tied up until the end date of the project, whereas when we crash the final activities, the extra investment is tied up for a much shorter period. This study presents a two-objective mathematical model for balancing project time compression against activity delay, providing a suitable decision tool given the available facilities and project due dates. The scheduling problem is drawn closer to real-world conditions by considering a nonlinear objective function and the time value of money. The presented problem was solved using NSGA-II, and the effect of time compression on the non-dominated set is reported.
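
    The time-value-of-money argument is easy to make concrete: under an assumed discount rate, the same crashing expenditure effectively costs more on an early activity because the money stays tied up until project completion. A worked sketch with invented figures:

```python
def crash_cost_with_interest(extra_cost, months_tied_up, monthly_rate=0.01):
    # future value of the crashing expenditure at project completion
    return extra_cost * (1 + monthly_rate) ** months_tied_up

early = crash_cost_with_interest(10_000, months_tied_up=18)  # activity at start
late = crash_cost_with_interest(10_000, months_tied_up=2)    # activity near end
print(round(early, 2), round(late, 2))  # 11961.47 vs 10201.0
```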

  5. Smart reconfigurable parabolic space antenna for variable electromagnetic patterns

    NASA Astrophysics Data System (ADS)

    Kalra, Sahil; Datta, Rituparna; Munjal, B. S.; Bhattacharya, Bishakh

    2018-02-01

    An application of a reconfigurable parabolic space antenna for satellites is discussed in this paper. The present study focuses on shape morphing of a flexible parabolic antenna actuated with Shape Memory Alloy (SMA) wires. The antenna is able to transmit signals to the desired footprint on earth with a desired gain value. An SMA-wire-based actuation system with a locking device is developed for precise control of the antenna shape; the locking device holds the structure in its deformed configuration during power cutoff. The maximum controllable deflection at any point using this actuation system is about 25 mm, with a precision of ±100 µm. In order to control the shape of the antenna in a closed feedback loop, a Proportional, Integral and Derivative (PID) controller is developed using LabVIEW (NI) and experiments are performed. Numerical modeling and analysis of the structure are carried out using the finite element software ABAQUS. For data reduction and fast computation, the stiffness matrix generated by ABAQUS is condensed by the Guyan reduction technique, and shape optimization is performed using the Non-dominated Sorting Genetic Algorithm (NSGA-II). The close match between the numerical and experimental results shows the efficacy of our method. Thereafter, electromagnetic (EM) simulations of the deformed shape are carried out using the High Frequency Structure Simulator (HFSS). The proposed design is envisaged to be very effective for multipurpose satellite applications in future missions of the Indian Space Research Organisation (ISRO).

  6. Nitrogen and sulfur co-doped porous graphene aerogel as an efficient electrode material for high performance supercapacitor in ionic liquid electrolyte

    NASA Astrophysics Data System (ADS)

    Chen, Yujuan; Liu, Zhaoen; Sun, Li; Lu, Zhiwei; Zhuo, Kelei

    2018-06-01

    Nitrogen and sulfur co-doped graphene aerogel (NS-GA) is prepared by a one-pot process. The as-prepared materials are investigated as supercapacitor electrodes in an ionic liquid (1-ethyl-3-methylimidazolium tetrafluoroborate, EMIMBF4) electrolyte. The NS-GA is characterized using X-ray diffraction, X-ray photoelectron spectroscopy, Raman spectroscopy, and scanning electron microscopy. The results show that the NS-GA has a hierarchical porous structure. Electrochemical performance is investigated by cyclic voltammetry and galvanostatic charge-discharge. Notably, the supercapacitor based on NS-GA-5 possesses a maximum energy density of 100.7 Wh kg-1 at a power density of 0.94 kW kg-1. The electrode materials also offer a large specific capacitance of 203.2 F g-1 at a current density of 1 A g-1, and the capacitance retention of NS-GA-5 is 90% after 3000 cycles at 2 A g-1. With its numerous advantages, including low cost and remarkable electrochemical behavior, NS-GA-5 can be a promising electrode material for supercapacitor applications.

  7. Proposed method to construct Boolean functions with maximum possible annihilator immunity

    NASA Astrophysics Data System (ADS)

    Goyal, Rajni; Panigrahi, Anupama; Bansal, Rohit

    2017-07-01

    Nonlinearity and algebraic (annihilator) immunity are two core properties of a Boolean function, because optimum values of annihilator immunity and nonlinearity are required to resist fast algebraic attacks and differential cryptanalysis, respectively. For a secure cipher system, Boolean functions (S-boxes) should resist the maximum number of attacks, which is possible if a Boolean function has an optimal trade-off among its properties. Before constructing Boolean functions, we fixed the criteria of our constructions based on these properties; in the present work, our construction is based on annihilator immunity and nonlinearity. Keeping the above facts in mind, we developed a multi-objective evolutionary approach based on NSGA-II and obtained the optimum value of annihilator immunity with a good bound on nonlinearity. We constructed balanced Boolean functions having the best trade-off among balancedness, annihilator immunity and nonlinearity for 5, 6 and 7 variables by the proposed method.

  8. Pricing Resources in LTE Networks through Multiobjective Optimization

    PubMed Central

    Lai, Yung-Liang; Jiang, Jehn-Ruey

    2014-01-01

    The LTE technology offers versatile mobile services that use different numbers of resources. This enables operators to provide subscribers or users with differential quality of service (QoS) to boost their satisfaction. On one hand, LTE operators need to price the resources high for maximizing their profits. On the other hand, pricing also needs to consider user satisfaction with allocated resources and prices to avoid “user churn,” which means subscribers will unsubscribe services due to dissatisfaction with allocated resources or prices. In this paper, we study the pricing resources with profits and satisfaction optimization (PRPSO) problem in the LTE networks, considering the operator profit and subscribers' satisfaction at the same time. The problem is modelled as nonlinear multiobjective optimization with two optimal objectives: (1) maximizing operator profit and (2) maximizing user satisfaction. We propose to solve the problem based on the framework of the NSGA-II. Simulations are conducted for evaluating the proposed solution. PMID:24526889

  9. Pricing resources in LTE networks through multiobjective optimization.

    PubMed

    Lai, Yung-Liang; Jiang, Jehn-Ruey

    2014-01-01

    The LTE technology offers versatile mobile services that use different numbers of resources. This enables operators to provide subscribers or users with differential quality of service (QoS) to boost their satisfaction. On one hand, LTE operators need to price the resources high for maximizing their profits. On the other hand, pricing also needs to consider user satisfaction with allocated resources and prices to avoid "user churn," which means subscribers will unsubscribe services due to dissatisfaction with allocated resources or prices. In this paper, we study the pricing resources with profits and satisfaction optimization (PRPSO) problem in the LTE networks, considering the operator profit and subscribers' satisfaction at the same time. The problem is modelled as nonlinear multiobjective optimization with two optimal objectives: (1) maximizing operator profit and (2) maximizing user satisfaction. We propose to solve the problem based on the framework of the NSGA-II. Simulations are conducted for evaluating the proposed solution.

  10. Hyperparameterization of soil moisture statistical models for North America with Ensemble Learning Models (Elm)

    NASA Astrophysics Data System (ADS)

    Steinberg, P. D.; Brener, G.; Duffy, D.; Nearing, G. S.; Pelissier, C.

    2017-12-01

    Hyperparameterization of statistical models, i.e., automated model scoring and selection using methods such as evolutionary algorithms, grid searches, and randomized searches, can improve forecast model skill by reducing errors associated with model parameterization, model structure, and statistical properties of training data. Ensemble Learning Models (Elm), and the related Earthio package, provide a flexible interface for automating the selection of parameters and model structure for machine learning models common in climate science and land cover classification, offering convenient tools for loading NetCDF, HDF, Grib, or GeoTiff files, decomposition methods like PCA and manifold learning, and parallel training and prediction with unsupervised and supervised classification, clustering, and regression estimators. Continuum Analytics is using Elm to experiment with statistical soil moisture forecasting based on meteorological forcing data from NASA's North American Land Data Assimilation System (NLDAS), where Elm uses the NSGA-II multiobjective optimization algorithm to optimize the statistical preprocessing of forcing data and improve goodness-of-fit for statistical models (i.e., feature engineering). This presentation will discuss Elm and its components, including dask (distributed task scheduling), xarray (data structures for n-dimensional arrays), and scikit-learn (statistical preprocessing, clustering, classification, regression), and it will show how NSGA-II is being used to automate the selection of soil moisture forecast statistical models for North America.
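
    As a minimal stand-in for the hyperparameterization loop described above (random search rather than NSGA-II, and a placeholder scoring function rather than Elm's actual API):

```python
import random

random.seed(42)
space = {
    "n_components": [2, 4, 8, 16],     # e.g. PCA size in preprocessing
    "alpha": [0.001, 0.01, 0.1, 1.0],  # e.g. a regression regularizer
}

def score(params):
    # placeholder objective; in practice: fit on training data, return skill
    return -abs(params["n_components"] - 8) - abs(params["alpha"] - 0.01)

trials = [{k: random.choice(v) for k, v in space.items()} for _ in range(20)]
best = max(trials, key=score)
print(best)
```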

  11. A spline-based approach for computing spatial impulse responses.

    PubMed

    Ellis, Michael A; Guenther, Drake; Walker, William F

    2007-05-01

    Computer simulations are an essential tool for the design of phased-array ultrasonic imaging systems. FIELD II, which determines the two-way temporal response of a transducer at a point in space, is the current de facto standard for ultrasound simulation tools. However, the need often arises to obtain two-way spatial responses at a single point in time, a set of dimensions for which FIELD II is not well optimized. This paper describes an analytical approach for computing the two-way, far-field, spatial impulse response from rectangular transducer elements under arbitrary excitation. The described approach determines the response as the sum of polynomial functions, making computational implementation quite straightforward. The proposed algorithm, named DELFI, was implemented as a C routine under Matlab and results were compared to those obtained under similar conditions from the well-established FIELD II program. Under the specific conditions tested here, the proposed algorithm was approximately 142 times faster than FIELD II for computing spatial sensitivity functions with similar amounts of error. For temporal sensitivity functions with similar amounts of error, the proposed algorithm was about 1.7 times slower than FIELD II using rectangular elements and 19.2 times faster than FIELD II using triangular elements. DELFI is shown to be an attractive complement to FIELD II, especially when spatial responses are needed at a specific point in time.

  12. Issues in the assessment of personality disorder and substance abuse using the Millon Clinical Multiaxial Inventory (MCMI-II).

    PubMed

    Flynn, P M; McCann, J T; Fairbank, J A

    1995-05-01

    Substance abuse treatment clients often present other severe mental health problems that affect treatment outcomes. Hence, screening and assessment for psychological distress and personality disorder are an important part of effective treatment, discharge, and aftercare planning. The Millon Clinical Multiaxial Inventory-II (MCMI-II) frequently is used for this purpose. In this paper, several issues of concern to MCMI-II users are addressed. These include the extent to which MCMI-II scales correspond to DSM-III-R disorders; overdiagnosis of disorders using the MCMI-II; accuracy of MCMI-II diagnostic cut-off scores; and the clinical utility of MCMI-II diagnostic algorithms. Approaches to addressing these issues are offered.

  13. Discharge-nitrate data clustering for characterizing surface-subsurface flow interaction and calibration of a hydrologic model

    NASA Astrophysics Data System (ADS)

    Shrestha, R. R.; Rode, M.

    2008-12-01

    The concentration of reactive chemicals has different chemical signatures in baseflow and surface runoff. Previous studies on nitrate export from catchments indicate that the transport processes are driven by subsurface flow; the nitrate signature can therefore be used for understanding event and pre-event contributions to streamflow and surface-subsurface flow interactions. The study uses flow and nitrate concentration time series data to understand the relationship between these two variables. An unsupervised artificial-neural-network-based learning method called the self-organizing map is used for the identification of clusters in the datasets. Based on the cluster results, five different patterns are identified in the datasets, corresponding to (i) baseflow, (ii) subsurface flow increase, (iii) surface runoff increase, (iv) surface runoff recession, and (v) subsurface flow decrease regions. The cluster results, in combination with a hydrologic model, are used for discharge separation. For this purpose, the multi-objective optimization tool NSGA-II is used, where violation of the cluster results is one of the objective functions. The results show that the use of cluster results as supplementary information for the calibration of a hydrologic model gives a plausible simulation of subsurface flow as well as total runoff at the catchment outlet. The study is undertaken using data from the Weida catchment in northeastern Germany, a sub-catchment of the Weisse Elster river in the Elbe river basin.

  14. Advanced Avionics Verification and Validation Phase II (AAV&V-II)

    DTIC Science & Technology

    1999-01-01

    Table-of-contents fragments: 2.7 The Weak Control Dependence Algorithm; 2.8 The Indirect Dependence Algorithms; 2.9 Improvements to the Pleiades Object Management System; 2.1 The Interprocedural Control Flow Graph. ... The report describes some modifications made to the Pleiades object management system to increase the speed of the analysis ... slow as the edges in the graph increased. The time to insert edges was addressed by enhancements to the Pleiades object management system.

  15. Micromagnetic measurement for characterization of ferromagnetic materials' microstructural properties

    NASA Astrophysics Data System (ADS)

    Zhang, Shuo; Shi, Xiaodong; Udpa, Lalita; Deng, Yiming

    2018-05-01

    Magnetic Barkhausen noise (MBN) is measured in low carbon steels, and the relationship between carbon content and a parameter extracted from the MBN signal is investigated. The parameter is extracted experimentally by fitting the original profiles with two Gaussian curves: the gap between the two peaks (ΔG) of the fitted Gaussian curves shows the better linear relationship with the carbon contents of the samples in the experiment. The result is validated by Monte Carlo simulation. To ensure the sensitivity of the measurement, the advanced multi-objective optimization algorithm Non-dominated Sorting Genetic Algorithm III (NSGA-III) is used to optimize the magnetic core of the sensor.
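
    The ΔG extraction can be sketched with scipy's curve_fit on a synthetic MBN-like profile standing in for measured data; the two fitted peak positions give the gap directly.

```python
import numpy as np
from scipy.optimize import curve_fit

def two_gaussians(x, a1, mu1, s1, a2, mu2, s2):
    return (a1 * np.exp(-((x - mu1) ** 2) / (2 * s1 ** 2))
            + a2 * np.exp(-((x - mu2) ** 2) / (2 * s2 ** 2)))

x = np.linspace(-5, 5, 400)
y = two_gaussians(x, 1.0, -1.2, 0.8, 0.7, 1.5, 1.0)       # synthetic envelope
y += 0.02 * np.random.default_rng(1).normal(size=x.size)  # measurement noise

p0 = [1, -1, 1, 1, 1, 1]  # rough initial guess for the six parameters
popt, _ = curve_fit(two_gaussians, x, y, p0=p0)
delta_g = abs(popt[4] - popt[1])  # gap between the two fitted peak positions
print(round(delta_g, 3))          # ~2.7 for this synthetic profile
```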

  16. Acceleration of block-matching algorithms using a custom instruction-based paradigm on a Nios II microprocessor

    NASA Astrophysics Data System (ADS)

    González, Diego; Botella, Guillermo; García, Carlos; Prieto, Manuel; Tirado, Francisco

    2013-12-01

    This contribution focuses on the optimization of matching-based motion estimation algorithms widely used in video coding standards, using an Altera custom instruction-based paradigm and a combination of synchronous dynamic random access memory (SDRAM) with on-chip memory in Nios II processors. A complete profile of the algorithms is obtained before the optimization, locating the code bottlenecks; a custom instruction set is then created and added to the specific design, enhancing the original system. In addition, every possible memory combination between on-chip memory and SDRAM has been tested to achieve the best performance, and the final throughput of the complete designs is shown. This manuscript outlines a low-cost system, mapped using very-large-scale integration technology, which accelerates software algorithms by converting them into custom hardware logic blocks and shows the best combination of on-chip memory and SDRAM for the Nios II processor.

  17. Padé approximations for Painlevé I and II transcendents

    NASA Astrophysics Data System (ADS)

    Novokshenov, V. Yu.

    2009-06-01

    We use a version of the Fair-Luke algorithm to find the Padé approximate solutions of the Painlevé I and II equations. We find the distributions of poles for the well-known Ablowitz-Segur and Hastings-McLeod solutions of the Painlevé II equation. We show that the Boutroux tritronquée solution of the Painlevé I equation has poles only in the critical sector of the complex plane. The algorithm allows checking other analytic properties of the Painlevé transcendents, such as the asymptotic behavior at infinity in the complex plane.
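
    The core computation, obtaining an [L/M] Padé approximant from Taylor coefficients, reduces to one linear solve for the denominator. A compact sketch, demonstrated on the series of exp(x) rather than a Painlevé transcendent:

```python
import math
import numpy as np

def pade(c, L, M):
    """Given Taylor coefficients c[0..L+M], return numerator a and
    denominator b (with b[0] = 1) of the [L/M] Pade approximant."""
    # denominator: solve sum_{j=1..M} b_j c_{k-j} = -c_k for k = L+1 .. L+M
    A = np.array([[c[k - j] if k - j >= 0 else 0.0 for j in range(1, M + 1)]
                  for k in range(L + 1, L + M + 1)])
    b = np.concatenate(([1.0], np.linalg.solve(A, -c[L + 1:L + M + 1])))
    # numerator follows from the Cauchy product of the series with b
    a = np.array([sum(b[j] * c[k - j] for j in range(min(k, M) + 1))
                  for k in range(L + 1)])
    return a, b

c = np.array([1.0 / math.factorial(n) for n in range(7)])  # series of exp(x)
a, b = pade(c, 3, 3)
x = 0.5
print(np.polyval(a[::-1], x) / np.polyval(b[::-1], x) - np.exp(x))  # ~1e-7
```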

  18. SAGE Version 7.0 Algorithm: Application to SAGE II

    NASA Technical Reports Server (NTRS)

    Damadeo, R. P; Zawodny, J. M.; Thomason, L. W.; Iyer, N.

    2013-01-01

    This paper details the Stratospheric Aerosol and Gas Experiment (SAGE) version 7.0 algorithm and how it is applied to SAGE II. Changes made between the previous (v6.2) and current (v7.0) versions are described, and their impacts on the data products are explained for both coincident event comparisons and time-series analysis. Users of the data will notice a general improvement in all of the SAGE II data products, which are now in better agreement with more modern data sets (e.g., SAGE III) and more robust for use in trend studies.

  19. Hybrid fuzzy cluster ensemble framework for tumor clustering from biomolecular data.

    PubMed

    Yu, Zhiwen; Chen, Hantao; You, Jane; Han, Guoqiang; Li, Le

    2013-01-01

    Cancer class discovery using biomolecular data is one of the most important tasks for cancer diagnosis and treatment, and tumor clustering from gene expression data provides a new way to perform it. Most existing research adopts single-clustering algorithms to perform tumor clustering from biomolecular data, which lack robustness, stability, and accuracy. To further improve the performance of tumor clustering from biomolecular data, we introduce fuzzy theory into the cluster ensemble framework and propose four kinds of hybrid fuzzy cluster ensemble frameworks (HFCEF), named HFCEF-I, HFCEF-II, HFCEF-III, and HFCEF-IV, to identify samples that belong to different types of cancers. The difference between HFCEF-I and HFCEF-II is that they adopt different ensemble generator approaches to generate a set of fuzzy matrices in the ensemble. Specifically, HFCEF-I applies the affinity propagation algorithm (AP) to perform clustering on the sample dimension and generates a set of fuzzy matrices in the ensemble based on the fuzzy membership function and base samples selected by AP. HFCEF-II adopts AP to perform clustering on the attribute dimension, generates a set of subspaces, and obtains a set of fuzzy matrices in the ensemble by performing fuzzy c-means on the subspaces. HFCEF-III and HFCEF-IV build on the characteristics of HFCEF-I and HFCEF-II: HFCEF-III combines them in a serial way, while HFCEF-IV integrates them in a concurrent way. The HFCEFs adopt suitable consensus functions, such as the fuzzy c-means algorithm or the normalized cut algorithm (Ncut), to summarize the generated fuzzy matrices and obtain the final results. Experiments on real data sets from the UCI machine learning repository and cancer gene expression profiles illustrate that 1) the proposed hybrid fuzzy cluster ensemble frameworks work well on real data sets, especially biomolecular data, and 2) the proposed approaches provide more robust, stable, and accurate results than state-of-the-art single-clustering algorithms and traditional cluster ensemble approaches.

  20. Dynamic water allocation policies improve the global efficiency of storage systems

    NASA Astrophysics Data System (ADS)

    Niayifar, Amin; Perona, Paolo

    2017-06-01

    Water impoundment by dams strongly affects a river's natural flow regime, its attributes, and the related ecosystem biodiversity. Fostering the sustainability of water uses, e.g., in hydropower systems, thus implies searching for innovative operational policies able to generate Dynamic Environmental Flows (DEF) that mimic natural flow variability. The objective of this study is to propose a Direct Policy Search (DPS) framework based on dynamic flow release rules to improve the global efficiency of storage systems. The water allocation policies proposed for dammed systems extend the flow redistribution rules previously developed for small hydropower plants by Razurel et al. (2016). The mathematical form of the Fermi-Dirac statistical distribution, applied to lake equations for the water stored in the dam, is used to formulate non-proportional redistribution rules that partition the flow between energy production and environmental use. While energy production is computed from technical data, the riverine ecological benefits associated with DEF are computed by integrating the Weighted Usable Area (WUA) for fishes with Richter's hydrological indicators. Multiobjective evolutionary algorithms (MOEAs) are then applied to build the ecological-versus-economic efficiency plot and locate its Pareto frontier. This study benchmarks two MOEAs (NSGA-II and Borg MOEA) and compares their efficiency in terms of the quality of the Pareto frontier and computational cost. A detailed analysis of dam characteristics is performed to examine their impact on the global system efficiency and the choice of the best redistribution rule. Finally, it is found that non-proportional flow releases can statistically improve the global efficiency of the hydropower system, specifically its ecological component, when compared to constant minimal flows.
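
    A hedged sketch of a Fermi-Dirac-shaped release rule of the kind the abstract builds on, with invented parameter values: the fraction of inflow diverted to the plant rises smoothly with inflow, leaving nearly all of a low flow in the river and capping the share taken from floods.

```python
import numpy as np

def diverted_fraction(q, q_half=20.0, steepness=5.0, max_share=0.8):
    # logistic (Fermi-Dirac form) in the inflow q [m^3/s]; all parameters
    # are illustrative, not calibrated values from the study
    return max_share / (1.0 + np.exp(-(q - q_half) / steepness))

for q in (5.0, 20.0, 60.0):
    taken = q * diverted_fraction(q)
    print(f"inflow {q:5.1f} -> plant {taken:5.2f}, river {q - taken:5.2f}")
```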

  1. Input selection and performance optimization of ANN-based streamflow forecasts in the drought-prone Murray Darling Basin region using IIS and MODWT algorithm

    NASA Astrophysics Data System (ADS)

    Prasad, Ramendra; Deo, Ravinesh C.; Li, Yan; Maraseni, Tek

    2017-11-01

    Forecasting streamflow is vital for strategically planning, utilizing and redistributing water resources. In this paper, a wavelet-hybrid artificial neural network (ANN) model integrated with an iterative input selection (IIS) algorithm (IIS-W-ANN) is evaluated for its statistical preciseness in forecasting monthly streamflow and benchmarked against the M5 Tree model. To develop the hybrid IIS-W-ANN model, a global predictor matrix is constructed for three local hydrological sites (Richmond, Gwydir, and Darling River) in Australia's agricultural (Murray-Darling) Basin. Model inputs, comprising statistically significant lagged combinations of streamflow water levels, are supplemented by meteorological data (i.e., precipitation, maximum and minimum temperature, mean solar radiation, vapor pressure and evaporation) as potential model inputs. To establish robust forecasting models, the iterative input selection (IIS) algorithm is applied to screen the best data from the predictor matrix and is integrated with the non-decimated maximum overlap discrete wavelet transform (MODWT), applied to the IIS-selected variables. This resolves the frequencies contained in the predictor data while constructing the wavelet-hybrid (i.e., IIS-W-ANN and IIS-W-M5 Tree) models. The forecasting ability of IIS-W-ANN is evaluated via the correlation coefficient (r), Willmott's Index (WI), Nash-Sutcliffe Efficiency (ENS), root-mean-square error (RMSE), and mean absolute error (MAE), including the percentage RMSE and MAE. While the ANN models outperform M5 Tree models at all hydrological sites, the IIS variable selector is efficient in determining the appropriate predictors, as shown by the better performance of the IIS-coupled (ANN and M5 Tree) models relative to the models without IIS. When the IIS-coupled models are integrated with MODWT, the wavelet-hybrid IIS-W-ANN and IIS-W-M5 Tree attain significantly more accurate performance than their standalone counterparts. Importantly, the IIS-W-ANN model's accuracy outweighs that of IIS-ANN, as evidenced by a larger r and WI (by 7.5% and 3.8%, respectively) and a lower RMSE (by 21.3%). In comparison to the IIS-W-M5 Tree model, the IIS-W-ANN model yielded larger values of WI = 0.936-0.979 and ENS = 0.770-0.920. Correspondingly, the errors (RMSE and MAE) ranged from 0.162-0.487 m and 0.139-0.390 m, respectively, with relative errors RRMSE = (15.65-21.00)% and MAPE = (14.79-20.78)%. A distinct geographic signature is evident: the most and least accurately forecasted streamflow is attained for the Gwydir and Darling River, respectively. Conclusively, this study advocates the efficacy of iterative input selection, allowing the proper screening of model predictors, and its subsequent integration with MODWT, resulting in enhanced performance of the models applied in streamflow forecasting.
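
    The skill scores used above have standard definitions; the sketch below computes the Nash-Sutcliffe efficiency and Willmott's index on placeholder data.

```python
import numpy as np

def nash_sutcliffe(obs, sim):
    # ENS = 1 - sum((O - S)^2) / sum((O - mean(O))^2)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def willmott(obs, sim):
    # WI = 1 - sum((O - S)^2) / sum((|S - mean(O)| + |O - mean(O)|)^2)
    num = np.sum((obs - sim) ** 2)
    den = np.sum((np.abs(sim - obs.mean()) + np.abs(obs - obs.mean())) ** 2)
    return 1.0 - num / den

obs = np.array([1.2, 0.9, 1.5, 2.1, 1.8, 1.1])  # observed monthly levels (toy)
sim = np.array([1.1, 1.0, 1.4, 1.9, 1.9, 1.2])  # simulated counterparts
print(round(nash_sutcliffe(obs, sim), 3), round(willmott(obs, sim), 3))
# ~0.913 and ~0.975 for this toy series
```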

  2. Optimal design of tunable phononic bandgap plates under equibiaxial stretch

    NASA Astrophysics Data System (ADS)

    Hedayatrasa, Saeid; Abhary, Kazem; Uddin, M. S.; Guest, James K.

    2016-05-01

    Design and application of phononic crystal (PhCr) acoustic metamaterials have been a topic of tremendous growth of interest in the last decade due to their promising capabilities to manipulate acoustic and elastodynamic waves. Phononic controllability of waves through a particular PhCr is limited to the spectra located within its fixed bandgap frequency; hence the ability to tune a PhCr is desired, to add functionality over a variable bandgap frequency or for switchability. Deformation-induced bandgap tunability of elastomeric PhCr solids and plates with prescribed topology has been studied by other researchers. Principally, the internal stress state and distorted geometry of a deformed phononic crystal plate (PhP) change its effective stiffness and lead to deformation-induced tunability of the resultant modal band structure. Thus the microstructural topology of a PhP can be altered so that specific tunability features are met through prescribed deformation. In the present study, novel tunable PhPs of this kind with optimized bandgap efficiency-tunability of guided waves are computationally explored and evaluated. Low-loss transmission of guided waves throughout thin-walled structures makes them ideal for the fabrication of low-loss ultrasound devices and for structural health monitoring. Various tunability targets are defined to enhance or degrade complete bandgaps of plate waves through macroscopic tensile deformation. An elastomeric hyperelastic material is considered, which enables recoverable micromechanical deformation under tuning finite stretch. Phononic tunability through stable deformation of the phononic lattice is specifically required, so any topology showing buckling instability under the assumed deformation is disregarded. The non-dominated sorting genetic algorithm NSGA-II is adopted for evolutionary multiobjective topology optimization of the hypothesized tunable PhP with a square symmetric unit cell, and the relevant topologies are analyzed through the finite element method. Following earlier studies by the authors, a specialized GA algorithm and topology mapping, assessment and analysis techniques are employed to efficiently obtain feasible porous topologies of the assumed thick PhP.

  3. Robust optimization of supersonic ORC nozzle guide vanes

    NASA Astrophysics Data System (ADS)

    Bufi, Elio A.; Cinnella, Paola

    2017-03-01

    An efficient Robust Optimization (RO) strategy is developed for the design of 2D supersonic Organic Rankine Cycle turbine expanders. Dense gas effects are non-negligible for this application and are taken into account by describing the thermodynamics with the Peng-Robinson-Stryjek-Vera equation of state. The design methodology combines an Uncertainty Quantification (UQ) loop based on a Bayesian kriging model of the system response to the uncertain parameters, used to approximate the statistics (mean and variance) of the uncertain system output, a CFD solver, and a multi-objective non-dominated sorting algorithm (NSGA), also based on a kriging surrogate of the multi-objective fitness function, along with an adaptive infill strategy for surrogate enrichment at each generation of the NSGA. The objective functions are the average and variance of the isentropic efficiency. The blade shape is parametrized by means of a Free Form Deformation (FFD) approach. The robust optimal blades are compared to the baseline design (based on the Method of Characteristics) and to a blade obtained by means of a deterministic CFD-based optimization.

  4. The utility and limitations of current web-available algorithms to predict peptides recognized by CD4 T cells in response to pathogen infection

    PubMed Central

    Chaves, Francisco A.; Lee, Alvin H.; Nayak, Jennifer; Richards, Katherine A.; Sant, Andrea J.

    2012-01-01

    The ability to track CD4 T cells elicited in response to pathogen infection or vaccination is critical because of the role these cells play in protective immunity. Coupled with advances in genome sequencing of pathogenic organisms, there is considerable appeal for implementation of computer-based algorithms to predict peptides that bind to the class II molecules, forming the complex recognized by CD4 T cells. Despite recent progress in this area, there is a paucity of data regarding their success in identifying actual pathogen-derived epitopes. In this study, we sought to rigorously evaluate the performance of multiple web-available algorithms by comparing their predictions and our results using purely empirical methods for epitope discovery in influenza that utilized overlapping peptides and cytokine Elispots, for three independent class II molecules. We analyzed the data in different ways, trying to anticipate how an investigator might use these computational tools for epitope discovery. We come to the conclusion that currently available algorithms can indeed facilitate epitope discovery, but all shared a high degree of false positive and false negative predictions. Therefore, efficiencies were low. We also found dramatic disparities among algorithms and between predicted IC50 values and true dissociation rates of peptide:MHC class II complexes. We suggest that improved success of predictive algorithms will depend less on changes in computational methods or increased data sets and more on changes in parameters used to “train” the algorithms that factor in elements of T cell repertoire and peptide acquisition by class II molecules. PMID:22467652
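
    A small sketch of the evaluation logic described above: comparing an algorithm's predicted peptide list against empirically confirmed epitopes. The peptide identifiers are hypothetical.

    ```python
    predicted = {"pep03", "pep07", "pep11", "pep19", "pep24"}   # algorithm output
    confirmed = {"pep07", "pep11", "pep30"}                     # e.g., Elispot positives

    tp = len(predicted & confirmed)            # true positives
    sensitivity = tp / len(confirmed)          # fraction of real epitopes recovered
    ppv = tp / len(predicted)                  # "efficiency" of the predicted list
    print(f"sensitivity={sensitivity:.2f}, positive predictive value={ppv:.2f}")
    ```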

  5. Fuzzy physical programming for Space Manoeuvre Vehicles trajectory optimization based on hp-adaptive pseudospectral method

    NASA Astrophysics Data System (ADS)

    Chai, Runqi; Savvaris, Al; Tsourdos, Antonios

    2016-06-01

    In this paper, a fuzzy physical programming (FPP) method is introduced for solving the multi-objective Space Manoeuvre Vehicle (SMV) skip trajectory optimization problem based on hp-adaptive pseudospectral methods. The dynamic model of the SMV is elaborated and then, by employing hp-adaptive pseudospectral methods, the problem is transformed into a nonlinear programming (NLP) problem. According to the mission requirements, solutions were calculated for each single-objective scenario. To obtain a compromise solution across targets, the FPP model is proposed: the preference function is established with consideration of the fuzzy factors of the system, so that a proper compromise trajectory can be acquired. In addition, NSGA-II is run to obtain the Pareto-optimal solution set and verify the Pareto optimality of the FPP solution. Simulation results indicate that the proposed method is effective and feasible for the multi-objective skip trajectory optimization of the SMV.
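
    A hedged sketch of one simple way to form a compromise from per-objective fuzzy satisfaction degrees (a max-min rule with linear memberships); the paper's FPP preference functions are richer than this toy, and the numbers are illustrative.

    ```python
    import numpy as np

    def membership(f, f_best, f_worst):
        """Linear degree of satisfaction: 1 at the per-objective optimum, 0 at the worst."""
        return np.clip((f_worst - f) / (f_worst - f_best), 0.0, 1.0)

    # candidate trajectories scored on two objectives to minimize (toy numbers)
    F = np.array([[1.2, 8.0], [1.5, 6.5], [2.0, 5.9]])
    best, worst = F.min(axis=0), F.max(axis=0)
    mu = membership(F, best, worst)               # memberships per candidate/objective
    compromise = int(np.argmax(mu.min(axis=1)))   # maximize the weakest satisfaction
    print("compromise candidate:", compromise)
    ```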

  6. Impact of Spatial Pumping Patterns on Groundwater Management

    NASA Astrophysics Data System (ADS)

    Yin, J.; Tsai, F. T. C.

    2017-12-01

    Managing groundwater resources while balancing groundwater quantity and quality is challenging because of anthropogenic pumping activities and the complexity of the subsurface environment. In this study, to address the impact of spatial pumping patterns on groundwater management, a mixed-integer nonlinear multi-objective model is formulated that integrates three objectives within a management framework: (i) maximize total groundwater withdrawal from potential wells; (ii) minimize total electricity cost for well pumps; and (iii) keep groundwater levels at selected monitoring locations as close as possible to the target levels. Binary variables in the management model control the operative status of the pumping wells, and NSGA-II is linked with MODFLOW to solve the multi-objective problem (see the sketch below). The proposed method is applied to a groundwater management problem in the complex Baton Rouge aquifer system, southeastern Louisiana. Results show that (a) non-dominated trade-off solutions under various spatial distributions of active pumping wells can be achieved, each optimal with regard to its corresponding objectives; (b) the operative status, locations, and pumping rates of the wells significantly influence the distribution of hydraulic head, which in turn influences the optimization results; and (c) a wide range of optimal solutions is obtained, so that decision makers can select the most appropriate solution through negotiation with different stakeholders. This technique helps identify the extent to which the three concerns of water supply, energy, and subsidence can be balanced.
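
    A sketch of the decision encoding described above, under assumptions: a binary on/off status per candidate well plus a continuous pumping rate, evaluated against the three management objectives. The head response is a toy linear drawdown, not a MODFLOW simulation.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n_wells = 10
    status = rng.integers(0, 2, size=n_wells)        # binary operative status per well
    rate = rng.uniform(0.0, 500.0, size=n_wells)     # pumping rate, m^3/day per well
    q = status * rate                                # only active wells pump

    withdrawal = q.sum()                             # objective 1: maximize
    energy_cost = (0.02 * q ** 1.1).sum()            # objective 2: minimize (toy tariff)
    head_target = 30.0
    head_sim = 30.0 - 0.005 * q.sum()                # toy drawdown response (assumption)
    head_miss = abs(head_sim - head_target)          # objective 3: minimize
    print(withdrawal, energy_cost, head_miss)
    ```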

  7. Genetic Algorithm Based Framework for Automation of Stochastic Modeling of Multi-Season Streamflows

    NASA Astrophysics Data System (ADS)

    Srivastav, R. K.; Srinivasan, K.; Sudheer, K.

    2009-05-01

    Synthetic streamflow data generation involves the synthesis of likely streamflow patterns that are statistically indistinguishable from the observed streamflow data. The kinds of stochastic models adopted for multi-season streamflow generation in hydrology include: i) parametric models, which hypothesize the form of the periodic dependence structure and the distributional form a priori (e.g., PAR, PARMA), and disaggregation models that aim to preserve the correlation structure at the periodic level and the aggregated annual level; ii) nonparametric models (e.g., bootstrap/kernel-based methods such as k-nearest neighbor (k-NN) and matched block bootstrap (MABB), and nonparametric disaggregation), which characterize the laws of chance describing the streamflow process without prior assumptions as to the form or structure of those laws; and iii) hybrid models, which blend parametric and nonparametric components advantageously. Despite these developments over the last four decades, accurate prediction of storage and critical drought characteristics has remained a persistent challenge for the stochastic modeler. This is partly because the stochastic streamflow model parameters are usually estimated by minimizing a statistically based objective function (such as maximum likelihood (MLE) or least squares (LS) estimation), and the efficacy of the models is then validated by the accuracy with which the water-use characteristics are predicted, which requires a large number of trial simulations and the inspection of many plots and tables; even then, accurate prediction of the storage and critical drought characteristics is not ensured. In this study, a multi-objective optimization framework is proposed to find the optimal hybrid model (a blend of a simple parametric PAR(1) model and the matched block bootstrap (MABB)) based on explicit objective functions: minimizing the relative bias and the relative root mean square error in estimating the storage capacity of the reservoir. The optimal parameter set of the hybrid model is obtained by searching a multi-dimensional parameter space, simultaneously exploring the parametric (PAR(1)) and nonparametric (MABB) components, using the efficient evolutionary search-based optimization tool NSGA-II. This approach reduces the drudgery involved in manually selecting the hybrid model, in addition to accurately reproducing the basic summary statistics, dependence structure, marginal distribution, and water-use characteristics. The proposed framework is used to model the multi-season streamflows of the River Beaver and the River Weber in the USA. For both rivers, the proposed GA-based hybrid model, with simultaneous exploration of the parametric and nonparametric components, yields a much better prediction of storage capacity than the MLE-based hybrid models, in which model selection is done in two stages and probably results in a sub-optimal model. The framework can be extended to different linear and nonlinear hybrid stochastic models at other temporal and spatial scales.
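
    A sketch of the two explicit objectives named above, computed from synthetic replicates of a storage-capacity estimate; the numbers are illustrative.

    ```python
    import numpy as np

    def objectives(sim_storage, obs_storage):
        """Relative bias and relative RMSE of simulated storage capacity."""
        err = (np.asarray(sim_storage) - obs_storage) / obs_storage
        rel_bias = err.mean()
        rel_rmse = np.sqrt((err ** 2).mean())
        return abs(rel_bias), rel_rmse        # both to be minimized by NSGA-II

    sims = [980.0, 1040.0, 1010.0, 955.0]     # storage from synthetic replicates
    print(objectives(sims, obs_storage=1000.0))
    ```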

  8. Simulation of an enhanced TCAS 2 system in operation

    NASA Technical Reports Server (NTRS)

    Rojas, R. G.; Law, P.; Burnside, W. D.

    1987-01-01

    Described is a computer simulation of a Boeing 737 aircraft equipped with an enhanced Traffic Alert and Collision Avoidance System (TCAS II). In particular, an algorithm is developed that permits the computer simulation of the tracking of a target airplane by a Boeing 737 with a TCAS II array mounted on top of its fuselage. The algorithm has four main components: the target path, the noise source, the alpha-beta filter, and threat detection. The implementation of each of these components is described, and the areas where the present algorithm needs improvement are also noted.
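
    For orientation, a standard alpha-beta tracking filter of the kind named above, in generic textbook form; the 1-D setup, gains, and measurements are illustrative, not the report's.

    ```python
    def alpha_beta_track(measurements, dt=1.0, alpha=0.85, beta=0.005):
        x, v = measurements[0], 0.0              # initial position and rate
        estimates = []
        for z in measurements[1:]:
            x_pred = x + dt * v                  # predict
            r = z - x_pred                       # innovation (residual)
            x = x_pred + alpha * r               # correct position
            v = v + (beta / dt) * r              # correct rate
            estimates.append((x, v))
        return estimates

    zs = [0.0, 1.2, 1.9, 3.1, 4.0, 5.2]          # noisy range measurements
    for x, v in alpha_beta_track(zs):
        print(f"pos={x:.2f} rate={v:.2f}")
    ```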

  9. On adaptive learning rate that guarantees convergence in feedforward networks.

    PubMed

    Behera, Laxmidhar; Kumar, Swagat; Patnaik, Awhan

    2006-09-01

    This paper investigates new learning algorithms (LF I and LF II) based on Lyapunov functions for the training of feedforward neural networks. Such algorithms have an interesting parallel with the popular backpropagation (BP) algorithm, in which the fixed learning rate is replaced by an adaptive learning rate computed using a convergence theorem based on Lyapunov stability theory. LF II, a modified version of LF I, is introduced with the aim of avoiding local minima; this modification also improves the convergence speed in some cases. Conditions for achieving the global minimum with these kinds of algorithms are studied in detail. The performance of the proposed algorithms is compared with the BP algorithm and extended Kalman filtering (EKF) on three benchmark function approximation problems: XOR, 3-bit parity, and the 8-3 encoder. The comparisons are made in terms of the number of learning iterations and the computational time required for convergence. The proposed algorithms (LF I and II) converge much faster than the other two algorithms at the same accuracy. Finally, a comparison is made on a complex two-dimensional (2-D) Gabor function, and the effect of the adaptive learning rate on faster convergence is verified. In a nutshell, the investigations made in this paper help us better understand the learning procedure of feedforward neural networks in terms of adaptive learning rate, convergence speed, and local minima.
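
    A schematic only: the paper derives its adaptive rate from Lyapunov theory, whereas this toy uses a simple backtracking rule that enforces the same qualitative property, namely that the squared error V = e^2 never increases from step to step. It is not the LF I/LF II update law.

    ```python
    def train_step(w, x, t, eta=1.0):
        """One adaptive-rate gradient step for a 1-D linear neuron y = w*x."""
        e = t - w * x
        if e == 0.0:
            return w, eta                      # already at zero error
        grad = -e * x                          # dV/dw up to a constant factor
        while eta > 1e-12:
            w_new = w - eta * grad
            if (t - w_new * x) ** 2 < e ** 2:  # Lyapunov-style descent check on V = e^2
                return w_new, eta
            eta *= 0.5                         # shrink the rate until V decreases
        return w, eta

    w, eta = 0.0, 1.0
    for _ in range(5):
        w, eta = train_step(w, x=2.0, t=3.0, eta=eta)
        print(f"w={w:.4f} eta={eta:.4f}")
    ```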

  10. Classification of case-II waters using hyperspectral (HICO) data over North Indian Ocean

    NASA Astrophysics Data System (ADS)

    Srinivasa Rao, N.; Ramarao, E. P.; Srinivas, K.; Deka, P. C.

    2016-05-01

    State-of-the-art ocean color algorithms are proven for retrieving the ocean constituents (chlorophyll-a, CDOM, and suspended sediments) in case-I waters. However, these algorithms do not perform well in case-II waters because of their optical complexity. Hyperspectral data have been found promising for classifying case-II waters. The aim of this study is to propose spectral bands for future ocean color sensors to classify case-II waters. The study was performed with HICO remote sensing reflectances (Rrs) at the estuaries of the Indus and GBM rivers in the North Indian Ocean. Appropriate field samples were not available to validate and propose empirical models for retrieving concentrations, and the HICO sensor is no longer operational, precluding a validation exercise; Aqua MODIS data over case-I and case-II waters were therefore used as a complement to in situ data. Analysis of the spectral reflectance curves suggests the band ratios of Rrs 484 nm to Rrs 581 nm and of Rrs 490 nm to Rrs 426 nm for classifying chlorophyll-a and CDOM, respectively, while Rrs 610 nm gives the best scope for suspended sediment retrieval. The work suggests the need for ocean color sensors with central wavelengths of 426, 484, 490, 581, and 610 nm to estimate the concentrations of chlorophyll-a, suspended sediments, and CDOM in case-II waters.
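
    A sketch of the proposed band-ratio indices applied to a hypothetical remote-sensing reflectance spectrum sampled at the suggested wavelengths; the Rrs values are invented for illustration.

    ```python
    # illustrative Rrs values (sr^-1) at the suggested central wavelengths (nm)
    rrs = {426: 0.0042, 484: 0.0061, 490: 0.0063, 581: 0.0079, 610: 0.0085}

    chl_index  = rrs[484] / rrs[581]    # chlorophyll-a classification ratio
    cdom_index = rrs[490] / rrs[426]    # CDOM classification ratio
    tss_proxy  = rrs[610]               # single band for suspended sediment retrieval
    print(chl_index, cdom_index, tss_proxy)
    ```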

  11. Genetic algorithm enhanced by machine learning in dynamic aperture optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Yongjun; Cheng, Weixing; Yu, Li Hua

    With the aid of machine learning techniques, the genetic algorithm has been enhanced and applied to the multi-objective optimization problem presented by the dynamic aperture of the National Synchrotron Light Source II (NSLS-II) Storage Ring. During the evolution processes employed by the genetic algorithm, the population is classified into different clusters in the search space. The clusters with top average fitness are given “elite” status. Intervention on the population is implemented by repopulating some potentially competitive candidates based on the experience learned from the accumulated data. These candidates replace randomly selected candidates among the original data pool. The average fitness of the population is therefore improved while diversity is not lost. Maintaining diversity ensures that the optimization is global rather than local. The quality of the population increases and produces more competitive descendants accelerating the evolution process significantly. When identifying the distribution of optimal candidates, they appear to be located in isolated islands within the search space. Some of these optimal candidates have been experimentally confirmed at the NSLS-II storage ring. Furthermore, the machine learning techniques that exploit the genetic algorithm can also be used in other population-based optimization problems such as particle swarm algorithm.
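
    A compact sketch of the intervention described above, under assumptions: k-means clusters the population in the search space, the cluster with the best mean fitness is treated as "elite", and new candidates are resampled near its center to replace randomly chosen members. The fitness function and all numbers are toys.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(3)
    pop = rng.uniform(-1, 1, size=(60, 4))                 # 60 candidates, 4 knobs
    fitness = -np.sum(pop ** 2, axis=1)                    # toy objective (higher is better)

    labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit(pop).labels_
    elite = max(range(5), key=lambda c: fitness[labels == c].mean())

    center = pop[labels == elite].mean(axis=0)
    idx = rng.choice(len(pop), size=10, replace=False)     # victims chosen at random
    pop[idx] = center + rng.normal(0.0, 0.05, size=(10, 4))  # repopulate near elite cluster
    ```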

  12. Genetic algorithm enhanced by machine learning in dynamic aperture optimization

    NASA Astrophysics Data System (ADS)

    Li, Yongjun; Cheng, Weixing; Yu, Li Hua; Rainer, Robert

    2018-05-01

    With the aid of machine learning techniques, the genetic algorithm has been enhanced and applied to the multi-objective optimization problem presented by the dynamic aperture of the National Synchrotron Light Source II (NSLS-II) Storage Ring. During the evolution processes employed by the genetic algorithm, the population is classified into different clusters in the search space. The clusters with top average fitness are given "elite" status. Intervention on the population is implemented by repopulating some potentially competitive candidates based on the experience learned from the accumulated data. These candidates replace randomly selected candidates among the original data pool. The average fitness of the population is therefore improved while diversity is not lost. Maintaining diversity ensures that the optimization is global rather than local. The quality of the population increases and produces more competitive descendants accelerating the evolution process significantly. When identifying the distribution of optimal candidates, they appear to be located in isolated islands within the search space. Some of these optimal candidates have been experimentally confirmed at the NSLS-II storage ring. The machine learning techniques that exploit the genetic algorithm can also be used in other population-based optimization problems such as particle swarm algorithm.

  13. Genetic algorithm enhanced by machine learning in dynamic aperture optimization

    DOE PAGES

    Li, Yongjun; Cheng, Weixing; Yu, Li Hua; ...

    2018-05-29

    With the aid of machine learning techniques, the genetic algorithm has been enhanced and applied to the multi-objective optimization problem presented by the dynamic aperture of the National Synchrotron Light Source II (NSLS-II) Storage Ring. During the evolution processes employed by the genetic algorithm, the population is classified into different clusters in the search space. The clusters with top average fitness are given “elite” status. Intervention on the population is implemented by repopulating some potentially competitive candidates based on the experience learned from the accumulated data. These candidates replace randomly selected candidates among the original data pool. The average fitness of the population is therefore improved while diversity is not lost. Maintaining diversity ensures that the optimization is global rather than local. The quality of the population increases and produces more competitive descendants accelerating the evolution process significantly. When identifying the distribution of optimal candidates, they appear to be located in isolated islands within the search space. Some of these optimal candidates have been experimentally confirmed at the NSLS-II storage ring. Furthermore, the machine learning techniques that exploit the genetic algorithm can also be used in other population-based optimization problems such as particle swarm algorithm.

  14. Discovery of a phosphor for light emitting diode applications and its structural determination, Ba(Si,Al)5(O,N)8:Eu2+.

    PubMed

    Park, Woon Bae; Singh, Satendra Pal; Sohn, Kee-Sun

    2014-02-12

    Most of the novel phosphors that appear in the literature are either variants of well-known materials or hybrids of well-known materials. This situation has led to intellectual property (IP) complications in industry, and several lawsuits have resulted. The definition of a novel phosphor for use in light-emitting diodes should therefore be clarified. A recent trend in phosphor-related IP applications has been to focus on a novel crystallographic structure, so that a slight composition variance and/or a hybrid of well-known materials would not qualify from either a scientific or an industrial point of view. In our previous studies, we employed a systematic materials discovery strategy combining heuristic optimization and a high-throughput process to secure the discovery of genuinely novel and brilliant phosphors that would be immediately ready for use in light-emitting diodes. Despite this achievement, the strategy required further refinement to prove its versatility under any circumstance. To meet these demands, we improved our discovery strategy in the present investigation by incorporating an elitism-involved nondominated sorting genetic algorithm (NSGA-II) to guarantee the discovery of truly novel phosphors. Using the improved discovery strategy, we discovered an Eu(2+)-doped AB5X8 (A = Sr or Ba; B = Si and Al; X = O and N) phosphor in an orthorhombic structure (A21am) with lattice parameters a = 9.48461(3) Å, b = 13.47194(6) Å, c = 5.77323(2) Å, α = β = γ = 90°, which cannot be found in any existing inorganic compound database.

  15. Timing Analysis with INTEGRAL: Comparing Different Reconstruction Algorithms

    NASA Technical Reports Server (NTRS)

    Grinberg, V.; Kreykenboehm, I.; Fuerst, F.; Wilms, J.; Pottschmidt, K.; Bel, M. Cadolle; Rodriquez, J.; Marcu, D. M.; Suchy, S.; Markowitz, A.; hide

    2010-01-01

    INTEGRAL is one of the few instruments capable of detecting X-rays above 20 keV. It is therefore in principle well suited for studying X-ray variability in this regime. Because INTEGRAL uses coded-mask instruments for imaging, the reconstruction of light curves of X-ray sources is highly non-trivial. We present results from a comparison of two commonly employed algorithms, which primarily measure flux from mask deconvolution (ii-lc-extract) and from calculating the pixel-illuminated fraction (ii-light). Both methods agree well on timescales above about 10 s, the highest time resolution for which image reconstruction is possible. At higher time resolution, ii-light still produces meaningful results, although the overall variance of the light curves is not preserved.

  16. An Evaluation of the WSSC (Weapon System Support Cost) Cost Allocation Algorithms. II. Installation Support.

    DTIC Science & Technology

    1983-06-01

    … XX30XX or XX37XX is found. As a result, the following two host-financed tenant support accounts currently will be treated as unit operations costs … References cited include C. T. Horngren, Cost Accounting: A Managerial Emphasis, Prentice-Hall Inc., Englewood Cliffs, NJ, 1972; and D. B. Levine and J. M. Jondrow, "The …"

  17. Adaptive Decision Making and Coordination in Variable Structure Organizations

    DTIC Science & Technology

    1994-09-01

    … behavior of the net. The design problem is addressed by (a) focusing on algorithms that relate structural properties of the Petri Net model to behavioral characteristics; and (b) incorporating design requirements in the Lattice algorithm. … the more resource-consuming the process is. The architecture designer has to deal with these two parameters and perform some tradeoffs. The more …

  18. Status of the calibration and alignment framework at the Belle II experiment

    NASA Astrophysics Data System (ADS)

    Dossett, D.; Sevior, M.; Ritter, M.; Kuhr, T.; Bilka, T.; Yaschenko, S.; Belle Software Group, II

    2017-10-01

    The Belle II detector at the SuperKEKB e+e- collider plans to take first collision data in 2018. The monetary and CPU-time costs associated with storing and processing the data make it crucial for the detector components at Belle II to be calibrated quickly and accurately. A fast and accurate calibration system allows the high-level trigger to increase the efficiency of event selection, and can give users analysis-quality reconstruction promptly. A flexible framework to automate the fast production of calibration constants is being developed in the Belle II Analysis Software Framework (basf2). Detector experts only need to create two components from C++ base classes in order to use the automation system: the first collects data from Belle II event data files and outputs much smaller files to pass to the second component, which runs the main calibration algorithm to produce calibration constants ready for upload into the conditions database. A Python framework coordinates the input files, the order of processing, and the submission of jobs. Splitting the operation into collection and algorithm-processing stages allows the framework to optionally parallelize the collection stage on a batch system.
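
    A sketch of the two-stage pattern described above, in Python with invented names (this is not the basf2 API): a lightweight collector reduces each event file to a small summary, and a separate algorithm turns the merged summaries into a calibration constant for upload.

    ```python
    import random

    def read_timing_residuals(event_file):
        """Stand-in for reading detector quantities from one event file (hypothetical)."""
        random.seed(event_file)
        return [random.gauss(0.4, 0.1) for _ in range(1000)]

    def collect(event_file):
        """Stage 1: reduce one event file to a small summary (here: sums for a mean)."""
        values = read_timing_residuals(event_file)
        return {"n": len(values), "sum": sum(values)}

    def calibrate(summaries):
        """Stage 2: merge summaries from all files into a calibration constant."""
        n = sum(s["n"] for s in summaries)
        return {"t0_offset": sum(s["sum"] for s in summaries) / n}

    print(calibrate([collect(f) for f in ("run1.root", "run2.root", "run3.root")]))
    ```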

  19. Coordination Logic for Repulsive Resolution Maneuvers

    NASA Technical Reports Server (NTRS)

    Narkawicz, Anthony J.; Munoz, Cesar A.; Dutle, Aaron M.

    2016-01-01

    This paper presents an algorithm for determining the direction an aircraft should maneuver in the event of a potential conflict with another aircraft. The algorithm is implicitly coordinated, meaning that with perfectly reliable computations and information, it will independently provide directional information that is guaranteed to be coordinated without any additional information exchange or direct communication. The logic is inspired by the logic of TCAS II, the airborne system designed to reduce the risk of mid-air collisions between aircraft. TCAS II provides pilots with only vertical resolution advice, while the proposed algorithm, using a similar logic, provides implicitly coordinated vertical and horizontal directional advice.
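
    A toy illustration of implicit coordination (not the paper's algorithm or the TCAS II logic): both aircraft derive a turn direction from the sign of the same geometric quantity, which is invariant under swapping the two roles, so with identical data they independently reach the same choice with no communication.

    ```python
    def turn_direction(own_pos, own_vel, traffic_pos, traffic_vel):
        """Return +1 (turn right) or -1 (turn left) in the horizontal plane."""
        rx = traffic_pos[0] - own_pos[0]
        ry = traffic_pos[1] - own_pos[1]
        vx = traffic_vel[0] - own_vel[0]
        vy = traffic_vel[1] - own_vel[1]
        cross = rx * vy - ry * vx   # identical from either aircraft (r and v both flip sign)
        return 1 if cross >= 0 else -1

    a_pos, a_vel = (0.0, 0.0), (1.0, 0.0)
    b_pos, b_vel = (10.0, 1.0), (-1.0, 0.0)
    print(turn_direction(a_pos, a_vel, b_pos, b_vel),   # same answer from
          turn_direction(b_pos, b_vel, a_pos, a_vel))   # both perspectives
    ```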

  20. Online Learning Flight Control for Intelligent Flight Control Systems (IFCS)

    NASA Technical Reports Server (NTRS)

    Niewoehner, Kevin R.; Carter, John (Technical Monitor)

    2001-01-01

    The research accomplishments for the cooperative agreement 'Online Learning Flight Control for Intelligent Flight Control Systems (IFCS)' include the following: (1) previous IFC program data collection and analysis; (2) IFC program support site (configured IFC systems support network, configured Tornado/VxWorks OS development system, made Configuration and Documentation Management Systems Internet accessible); (3) Airborne Research Test Systems (ARTS) II Hardware (developed hardware requirements specification, developing environmental testing requirements, hardware design, and hardware design development); (4) ARTS II software development laboratory unit (procurement of lab style hardware, configured lab style hardware, and designed interface module equivalent to ARTS II faceplate); (5) program support documentation (developed software development plan, configuration management plan, and software verification and validation plan); (6) LWR algorithm analysis (performed timing and profiling on algorithm); (7) pre-trained neural network analysis; (8) Dynamic Cell Structures (DCS) Neural Network Analysis (performing timing and profiling on algorithm); and (9) conducted technical interchange and quarterly meetings to define IFC research goals.

  1. User's manual for the BNW-II optimization code for dry/wet-cooled power plants

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Braun, D.J.; Bamberger, J.A.; Braun, D.J.

    1978-05-01

    This volume provides a listing of the BNW-II dry/wet ammonia heat rejection optimization code and is an appendix to Volume I which gives a narrative description of the code's algorithms as well as logic, input and output information.

  2. Evaluation of chlorophyll-a retrieval algorithms based on MERIS bands for optically varying eutrophic inland lakes.

    PubMed

    Lyu, Heng; Li, Xiaojun; Wang, Yannan; Jin, Qi; Cao, Kai; Wang, Qiao; Li, Yunmei

    2015-10-15

    Fourteen field campaigns were conducted in five inland lakes during different seasons between 2006 and 2013, and a total of 398 water samples with varying optical characteristics were collected. The characteristics were analyzed based on remote sensing reflectance, and an automatic two-step clustering method was applied for water classification. The inland waters clustered into three types, labeled water types I, II, and III; from type I to type III, the effect of phytoplankton on the optical characteristics gradually decreases. Four chlorophyll-a retrieval algorithms for case-II water (two-band, three-band, four-band, and the Synthetic Chlorophyll Index, SCI) were evaluated for the three water types based on the MERIS bands, with different MERIS bands used for each water type in each algorithm. The four algorithms had different levels of retrieval accuracy for each water type, and no single algorithm could be successfully applied to all water types. For water types I and III the three-band algorithm performed best, while the four-band algorithm had the highest retrieval accuracy for water type II; the three-band algorithm is nevertheless preferable to the two-band algorithm for turbid eutrophic inland waters, and the SCI algorithm is recommended for highly turbid water with higher concentrations of total suspended solids. Our research indicates that chlorophyll-a retrieval by remote sensing in optically contrasting inland waters requires an algorithm specific to the optical characteristics of the water body to obtain higher estimation accuracy. Copyright © 2015 Elsevier B.V. All rights reserved.
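
    For orientation, the generic two-band and three-band red/NIR model forms commonly evaluated in studies of this kind, written for the MERIS band centers near 665, 708, and 753 nm; the reflectance values and the calibration coefficients below are placeholders, not the paper's fitted values or its per-type band choices.

    ```python
    rrs = {665: 0.0075, 708: 0.0093, 753: 0.0041}    # illustrative Rrs values, sr^-1

    two_band = rrs[708] / rrs[665]
    three_band = (1.0 / rrs[665] - 1.0 / rrs[708]) * rrs[753]

    a0, a1 = 10.0, 25.0                               # placeholder calibration coefficients
    chl_2b = a0 + a1 * (two_band - 1.0)
    chl_3b = a0 + a1 * three_band
    print(f"chl-a (2-band) ~ {chl_2b:.1f}, (3-band) ~ {chl_3b:.1f} mg m^-3")
    ```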

  3. Optimal Reference Strain Structure for Studying Dynamic Responses of Flexible Rockets

    NASA Technical Reports Server (NTRS)

    Tsushima, Natsuki; Su, Weihua; Wolf, Michael G.; Griffin, Edwin D.; Dumoulin, Marie P.

    2017-01-01

    In the proposed paper, the optimal design of reference strain structures (RSS) will be performed, targeting accurate observation of the dynamic bending and torsion deformation of a flexible rocket. A detailed description will be provided of the finite-element (FE) model of a notional flexible rocket created in MSC.Patran. The RSS will be attached longitudinally along the side of the rocket and will track the deformation of the thin-walled structure under external loads. An integrated surrogate-based multi-objective optimization approach will be developed to find the optimal design of the RSS using the FE model, with the Kriging method used to construct the surrogate model. For data sampling and performance evaluation, static and transient analyses will be performed with MSC.Nastran/Patran. The multi-objective optimization will be solved with NSGA-II to minimize the difference between the strains of the launch vehicle and the RSS. Finally, the performance of the optimal RSS will be evaluated by checking its strain-tracking capability in different numerical simulations of the flexible rocket.

  4. A novel iterative scheme and its application to differential equations.

    PubMed

    Khan, Yasir; Naeem, F; Šmarda, Zdeněk

    2014-01-01

    The purpose of this paper is to employ an alternative approach to reconstruct the standard variational iteration algorithm II proposed by He, including the Lagrange multiplier, and to give a simpler formulation of the Adomian decomposition and modified Adomian decomposition methods in terms of the newly proposed variational iteration method-II (VIM-II). Careful investigation of the earlier variational iteration algorithm and of the Adomian decomposition method reveals, respectively, unnecessary calculations of the Lagrange multiplier and repeated calculations in each iteration. Several examples are given to verify the reliability and efficiency of the method.
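
    For background, the two correction functionals at issue in the generic form found in the VIM literature (the operators L and N, forcing g, and multiplier λ are problem-specific; this is standard context, not the paper's new scheme):

    ```latex
    \begin{aligned}
    \text{algorithm I:}\quad  & u_{n+1}(t) = u_n(t) + \int_0^t \lambda(s)\,
        \bigl[L u_n(s) + N\tilde{u}_n(s) - g(s)\bigr]\,\mathrm{d}s,\\
    \text{algorithm II:}\quad & u_{n+1}(t) = u_0(t) + \int_0^t \lambda(s)\,
        N u_n(s)\,\mathrm{d}s.
    \end{aligned}
    ```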

  5. Clinical utility of the DSM-5 alternative model for borderline personality disorder: Differential diagnostic accuracy of the BFI, SCID-II-PQ, and PID-5.

    PubMed

    Fowler, J Christopher; Madan, Alok; Allen, Jon G; Patriquin, Michelle; Sharp, Carla; Oldham, John M; Frueh, B Christopher

    2018-01-01

    With the publication of the DSM-5 alternative model for personality disorders, it is critical to assess the components of the model against evidence-based models such as the five-factor model and the DSM-IV-TR categorical model. This study explored the relative clinical utility of these models in screening for borderline personality disorder (BPD). Receiver operating characteristic (ROC) and diagnostic efficiency statistics were calculated for three personality measures to ascertain the relative diagnostic efficiency of each measure. A total of 1653 adult inpatients at a specialist psychiatric hospital completed SCID-II interviews. Sample 1 (n=653) completed the SCID-II interviews, the SCID-II Personality Questionnaire (SCID-II-PQ), and the Big Five Inventory (BFI), while Sample 2 (n=1000) completed the SCID-II interviews, the Personality Inventory for DSM-5 (PID-5), and the BFI. The BFI evidenced moderate accuracy for two composites: high neuroticism + low agreeableness (AUC=0.72, SE=0.01, p<0.001) and high neuroticism + low agreeableness + low conscientiousness (AUC=0.73, SE=0.01, p<0.0001). The SCID-II-PQ evidenced moderate-to-excellent accuracy (AUC=0.86, SE=0.02, p<0.0001) with a good balance of specificity (SP=0.80) and sensitivity (SN=0.78). The PID-5 BPD algorithm (consisting of elevated emotional lability, anxiousness, separation insecurity, hostility, depressivity, impulsivity, and risk taking) evidenced moderate-to-excellent accuracy (AUC=0.87, SE=0.01, p<0.0001) with a good balance of specificity (SP=0.76) and sensitivity (SN=0.81). The findings generally support the use of the SCID-II-PQ and the PID-5 BPD algorithm for screening purposes, and further support the accuracy of the DSM-5 alternative model Criterion B trait constellation for diagnosing BPD. Limitations of the study include the single inpatient setting and the use of two discrete samples to assess the PID-5 and SCID-II-PQ. Copyright © 2017 Elsevier Inc. All rights reserved.
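
    A sketch of the diagnostic-efficiency computation described above, with made-up screener scores and diagnoses (1 = BPD) in place of the study's data; the Youden index is one common way to pick a balanced cutoff.

    ```python
    import numpy as np
    from sklearn.metrics import roc_auc_score, roc_curve

    y_true = np.array([0, 0, 1, 0, 1, 1, 0, 1, 0, 0])                 # SCID-II diagnoses
    scores = np.array([0.1, 0.3, 0.8, 0.2, 0.7, 0.9, 0.4, 0.6, 0.3, 0.2])

    auc = roc_auc_score(y_true, scores)
    fpr, tpr, thresholds = roc_curve(y_true, scores)
    j = np.argmax(tpr - fpr)                      # Youden index for a balanced cutoff
    print(f"AUC={auc:.2f}, cutoff={thresholds[j]:.2f}, "
          f"SN={tpr[j]:.2f}, SP={1 - fpr[j]:.2f}")
    ```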

  6. Comparison of Different Post-Processing Algorithms for Dynamic Susceptibility Contrast Perfusion Imaging of Cerebral Gliomas.

    PubMed

    Kudo, Kohsuke; Uwano, Ikuko; Hirai, Toshinori; Murakami, Ryuji; Nakamura, Hideo; Fujima, Noriyuki; Yamashita, Fumio; Goodwin, Jonathan; Higuchi, Satomi; Sasaki, Makoto

    2017-04-10

    The purpose of the present study was to compare different software algorithms for processing DSC perfusion images of cerebral tumors with respect to i) the calculated relative CBV (rCBV), ii) the cutoff value for discriminating low- and high-grade gliomas, and iii) the diagnostic performance for differentiating these tumors. Following institutional review board approval, informed consent was obtained from all patients. Thirty-five patients with primary glioma (grade II, 9; grade III, 8; and grade IV, 18 patients) were included. DSC perfusion imaging was performed with a 3-Tesla MRI scanner. CBV maps were generated using 11 different algorithms from four commercially available software packages and one academic program. The rCBV of each tumor relative to normal white matter was calculated by ROI measurements. Differences in rCBV were compared between algorithms for each tumor grade, and receiver operating characteristic (ROC) analysis was conducted to evaluate the diagnostic performance of the algorithms in differentiating between grades. Several algorithms showed significant differences in rCBV, especially for grade IV tumors. When differentiating between low-grade (II) and high-grade (III/IV) tumors, the area under the ROC curve (Az) was similar across algorithms (range 0.85-0.87), with no significant differences in Az between any pair of algorithms. In contrast, the optimal cutoff values varied between algorithms (range 4.18-6.53). Thus, rCBV values of tumors and cutoff values for discriminating low- and high-grade gliomas differ between software packages, suggesting that optimal software-specific cutoff values should be used for the diagnosis of high-grade gliomas.

  7. Feature and Statistical Model Development in Structural Health Monitoring

    NASA Astrophysics Data System (ADS)

    Kim, Inho

    All structures suffer wear and tear from impact, excessive load, fatigue, corrosion, and similar causes, in addition to inherent defects introduced during manufacturing and through exposure to various environmental effects. These structural degradations are often imperceptible, yet they can severely affect the structural performance of a component and thereby shorten its service life. Although previous studies of Structural Health Monitoring (SHM) have produced extensive knowledge of individual parts of the SHM process, such as operational evaluation, data processing, and feature extraction, few studies have addressed it from a systematic perspective, namely statistical model development. The first part of this dissertation reviews ultrasonic guided-wave-based SHM problems in terms of the characteristics of inverse scattering problems, such as ill-posedness and nonlinearity. The distinctive features and the selection of the analysis domain are investigated by analytically deriving the conditions for uniqueness of solutions, and the results are validated experimentally. Based on these distinctive features, a novel wave packet tracing (WPT) method for damage localization and size quantification is presented. This method creates time-space representations of the guided Lamb waves (GLWs), collected at a series of locations with a spatially dense distribution along paths at pre-selected angles with respect to the direction normal to the direction of wave propagation. The fringe patterns due to wave dispersion, which depend on the phase velocity, are selected as the primary features carrying information about wave propagation and scattering. The following part of the dissertation presents a novel damage-localization framework using a fully automated process: to construct the statistical model for autonomous damage localization, deep-learning techniques such as the restricted Boltzmann machine and the deep belief network are trained and used to interpret nonlinear far-field wave patterns. Finally, a novel bridge scour estimation approach is developed that combines the advantages of empirical and data-driven models. Two field datasets from the literature are used, and a Support Vector Machine (SVM), a machine-learning algorithm, fuses the field data samples and classifies the data according to physical phenomena. The fast Non-dominated Sorting Genetic Algorithm (NSGA-II) is applied to the model-performance objective functions to search for Pareto-optimal fronts.

  8. Artificial Neural Network Modeling and Genetic Algorithm Optimization for Cadmium Removal from Aqueous Solutions by Reduced Graphene Oxide-Supported Nanoscale Zero-Valent Iron (nZVI/rGO) Composites

    PubMed Central

    Fan, Mingyi; Li, Tongjun; Hu, Jiwei; Cao, Rensheng; Wei, Xionghui; Shi, Xuedan; Ruan, Wenqian

    2017-01-01

    Reduced graphene oxide-supported nanoscale zero-valent iron (nZVI/rGO) composites were synthesized in the present study by chemical deposition method and were then characterized by various methods, such as Fourier-transform infrared spectroscopy (FTIR) and X-ray photoelectron spectroscopy (XPS). The nZVI/rGO composites prepared were utilized for Cd(II) removal from aqueous solutions in batch mode at different initial Cd(II) concentrations, initial pH values, contact times, and operating temperatures. Response surface methodology (RSM) and artificial neural network hybridized with genetic algorithm (ANN-GA) were used for modeling the removal efficiency of Cd(II) and optimizing the four removal process variables. The average values of prediction errors for the RSM and ANN-GA models were 6.47% and 1.08%. Although both models were proven to be reliable in terms of predicting the removal efficiency of Cd(II), the ANN-GA model was found to be more accurate than the RSM model. In addition, experimental data were fitted to the Langmuir, Freundlich, and Dubinin-Radushkevich (D-R) isotherms. It was found that the Cd(II) adsorption was best fitted to the Langmuir isotherm. Examination on thermodynamic parameters revealed that the removal process was spontaneous and exothermic in nature. Furthermore, the pseudo-second-order model can better describe the kinetics of Cd(II) removal with a good R2 value than the pseudo-first-order model. PMID:28772901
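
    A sketch of the isotherm-fitting step reported above: a Langmuir model fitted to illustrative equilibrium data with scipy; the concentrations and uptakes are invented, not the paper's measurements.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def langmuir(ce, qmax, kl):
        """Langmuir isotherm: qe = qmax * KL * Ce / (1 + KL * Ce)."""
        return qmax * kl * ce / (1.0 + kl * ce)

    ce = np.array([2.0, 5.0, 10.0, 20.0, 40.0])     # equilibrium Cd(II) conc., mg/L
    qe = np.array([18.0, 35.0, 52.0, 66.0, 75.0])   # adsorbed amount, mg/g

    popt, pcov = curve_fit(langmuir, ce, qe, p0=(80.0, 0.1))
    qmax, kl = popt
    print(f"qmax={qmax:.1f} mg/g, KL={kl:.3f} L/mg")
    ```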

  9. Artificial Neural Network Modeling and Genetic Algorithm Optimization for Cadmium Removal from Aqueous Solutions by Reduced Graphene Oxide-Supported Nanoscale Zero-Valent Iron (nZVI/rGO) Composites.

    PubMed

    Fan, Mingyi; Li, Tongjun; Hu, Jiwei; Cao, Rensheng; Wei, Xionghui; Shi, Xuedan; Ruan, Wenqian

    2017-05-17

    Reduced graphene oxide-supported nanoscale zero-valent iron (nZVI/rGO) composites were synthesized in the present study by chemical deposition method and were then characterized by various methods, such as Fourier-transform infrared spectroscopy (FTIR) and X-ray photoelectron spectroscopy (XPS). The nZVI/rGO composites prepared were utilized for Cd(II) removal from aqueous solutions in batch mode at different initial Cd(II) concentrations, initial pH values, contact times, and operating temperatures. Response surface methodology (RSM) and artificial neural network hybridized with genetic algorithm (ANN-GA) were used for modeling the removal efficiency of Cd(II) and optimizing the four removal process variables. The average values of prediction errors for the RSM and ANN-GA models were 6.47% and 1.08%. Although both models were proven to be reliable in terms of predicting the removal efficiency of Cd(II), the ANN-GA model was found to be more accurate than the RSM model. In addition, experimental data were fitted to the Langmuir, Freundlich, and Dubinin-Radushkevich (D-R) isotherms. It was found that the Cd(II) adsorption was best fitted to the Langmuir isotherm. Examination on thermodynamic parameters revealed that the removal process was spontaneous and exothermic in nature. Furthermore, the pseudo-second-order model can better describe the kinetics of Cd(II) removal with a good R² value than the pseudo-first-order model.

  10. First Time Rapid and Accurate Detection of Massive Number of Metal Absorption Lines in the Early Universe Using Deep Neural Network

    NASA Astrophysics Data System (ADS)

    Zhao, Yinan; Ge, Jian; Yuan, Xiaoyong; Li, Xiaolin; Zhao, Tiffany; Wang, Cindy

    2018-01-01

    Metal absorption line systems in distant quasar spectra are among the most powerful tools for probing gas content in the early Universe. The MgII λλ2796, 2803 doublet is one of the most widely used metal absorption features and has been used to trace gas and global star formation at redshifts between ~0.5 and 2.5. In the past, machine-learning algorithms such as principal component analysis, Gaussian processes, and decision trees have been used to detect absorption line systems in large sky surveys, but the overall detection process is both complicated and time consuming: it usually takes a few months to go through the entire quasar spectral dataset from each Sloan Digital Sky Survey (SDSS) data release. In this work, we applied deep neural network ("deep learning") algorithms to the most recent SDSS DR14 quasar spectra and were able to search 20000 randomly selected quasar spectra and detect 2887 strong MgII absorption features in just 9 seconds. Our detection algorithms were verified against the previously released DR12 and DR7 data and the published MgII catalog, with a detection accuracy of 90%. This is the first time a deep neural network has demonstrated such promising power, in both speed and accuracy, for replacing tedious, repetitive human work in searching for narrow absorption patterns in a big dataset. We present our detection algorithms along with statistical results for the newly detected MgII absorption lines.
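
    A toy sketch of the underlying doublet-search idea (not the paper's network): scan redshift and flag where flux dips appear at both observed MgII wavelengths, whose separation ratio 2803.53/2796.35 is fixed regardless of z. The spectrum below is synthetic with one injected z=1 absorber.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    wave = np.linspace(3800.0, 9200.0, 8000)            # observed-frame grid, Angstrom
    flux = 1.0 + 0.01 * rng.standard_normal(wave.size)
    for line in (2796.35 * 2.0, 2803.53 * 2.0):         # inject a z=1 doublet
        flux -= 0.5 * np.exp(-0.5 * ((wave - line) / 1.5) ** 2)

    def doublet_score(z):
        w1, w2 = 2796.35 * (1 + z), 2803.53 * (1 + z)
        d1 = 1.0 - np.interp(w1, wave, flux)            # absorption depth at each line
        d2 = 1.0 - np.interp(w2, wave, flux)
        return min(d1, d2)                              # both lines must be deep

    zs = np.arange(0.4, 2.2, 1e-4)
    best = zs[np.argmax([doublet_score(z) for z in zs])]
    print(f"best-fit absorber redshift ~ {best:.3f}")
    ```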

  11. Channel and Switchbox Routing Using a Greedy Based Channel Algorithm with Outward Scanning Technique.

    DTIC Science & Technology

    1988-12-01

    ol ) V. CONCLUSION AND DISCUSSION......................... ... 6 APPENDIX A. NPGS ROUTER USER GUIDE........................6 APPENDIX B. C PROGRAM...problem and shows some of the terminology. previously mentioned. that is peculiar to VISI routing. Clq C4 C4 C4 -4 C4 Clq -4- o C CCD Co -4 q 04 -4 oL ...34 l II 92-. -.-- -.-- , -.... -4--*- -*-- tC I I + 62- -- - ----- -. .t -- +* 0C ’l i II I o -- - ..... 4+ - -+- j- - --- +-+-g9! 6 Ol ... ... "II g4

  12. MOLA II Laser Transmitter Calibration and Performance. 1.2

    NASA Technical Reports Server (NTRS)

    Afzal, Robert S.; Smith, David E. (Technical Monitor)

    1997-01-01

    The goal of the document is to explain the algorithm for determining the laser output energy from the telemetry data within the return packets from MOLA II. A simple algorithm is developed to convert the raw start detector data into laser energy, measured in millijoules. The conversion depends on three variables: start detector counts, array heat sink temperature, and start detector temperature, all of which are contained within the return packets. The conversion is applied to the GSFC Thermal Vacuum data as well as the in-space data to date, and shows good correlation.
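
    A hedged sketch of the conversion's functional shape as described above: laser energy from start-detector counts with temperature corrections. The linear form and all coefficients are illustrative placeholders, not the calibrated MOLA II values.

    ```python
    def laser_energy_mj(counts, t_heatsink_c, t_detector_c,
                        gain=0.012, c_hs=-0.0004, c_det=0.0007, offset=2.0):
        """Toy counts-to-millijoules conversion with temperature corrections (hypothetical)."""
        corrected = counts * (1.0 + c_hs * (t_heatsink_c - 20.0)
                              + c_det * (t_detector_c - 20.0))
        return gain * corrected + offset

    print(f"{laser_energy_mj(3500, t_heatsink_c=25.0, t_detector_c=18.0):.1f} mJ")
    ```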

  13. NetMHCIIpan-2.0 - Improved pan-specific HLA-DR predictions using a novel concurrent alignment and weight optimization training procedure.

    PubMed

    Nielsen, Morten; Justesen, Sune; Lund, Ole; Lundegaard, Claus; Buus, Søren

    2010-11-13

    The binding of peptides to Major Histocompatibility class II (MHC-II) molecules plays a central role in governing responses of the adaptive immune system. MHC-II molecules sample peptides from the extracellular space, allowing the immune system to detect the presence of foreign microbes in this compartment. Predicting which peptides bind to an MHC-II molecule is therefore of pivotal importance for understanding the immune response and its effect on host-pathogen interactions. The experimental cost of characterizing the binding motif of an MHC-II molecule is significant, and large efforts have therefore been devoted to developing accurate computational methods for predicting this binding event. Prediction of peptide binding to MHC-II is complicated by the open binding cleft of the MHC-II molecule, which allows binding of peptides extending out of the binding groove. Moreover, the genes encoding the MHC molecules are immensely diverse, leading to a large set of different MHC molecules, each potentially binding a unique set of peptides. Characterizing each MHC-II molecule using peptide-screening binding assays is hence not a viable option. Here, we present an MHC-II binding prediction algorithm aimed at these challenges. The method is a pan-specific version of the earlier published allele-specific NN-align algorithm and does not require any pre-alignment of the input data, which allows it to benefit also from information on alleles covered by limited binding data. The method is evaluated on a large and diverse benchmark dataset and is shown to significantly outperform state-of-the-art MHC-II prediction methods. In particular, the method boosts performance for alleles characterized by limited binding data, where conventional allele-specific methods tend to achieve poor prediction accuracy. The method thus shows great potential for efficiently boosting the accuracy of MHC-II binding prediction, as accurate predictions can be obtained for novel alleles at highly reduced experimental cost. Pan-specific binding predictions can be obtained for all alleles with known protein sequence, and the method can benefit from including training data from alleles for which only a few binders are known. The method and benchmark data are available at http://www.cbs.dtu.dk/services/NetMHCIIpan-2.0.

  14. Inversion of oceanic constituents in case I and II waters with genetic programming algorithms.

    PubMed

    Chami, Malik; Robilliard, Denis

    2002-10-20

    A stochastic inverse technique based on a genetic programming (GP) algorithm was developed to invert oceanic constituents from simulated data for case I and case II water applications. The simulations were carried out with the Ordres Successifs Ocean Atmosphere (OSOA) radiative transfer model. They include the effects of oceanic substances such as algal-related chlorophyll, nonchlorophyllous suspended matter, and dissolved organic matter. The synthetic dataset also accounts for the directional effects of particles through a variation of their phase function, which makes the simulated data realistic. It is shown that GP can be successfully applied to the inverse problem with acceptable stability in the presence of realistic noise in the data. GP is compared with a neural network methodology for case I waters; GP exhibits similar retrieval accuracy, which is greater than that of traditional techniques such as band-ratio algorithms. The application of GP to real satellite data from the Sea-viewing Wide Field-of-view Sensor (SeaWiFS) was also carried out for case I waters as a validation, with good agreement between the GP results and the SeaWiFS empirical algorithm. For case II waters the retrieval error of GP is below 33%, which remains satisfactory, at the present time, for remote sensing purposes.

  15. The 2013 ACC/AHA 10-year atherosclerotic cardiovascular disease risk index is better than SCORE and QRisk II in rheumatoid arthritis: is it enough?

    PubMed

    Ozen, Gulsen; Sunbul, Murat; Atagunduz, Pamir; Direskeneli, Haner; Tigen, Kursat; Inanc, Nevsun

    2016-03-01

    To determine the ability of the new American College of Cardiology and American Heart Association (ACC/AHA) 10-year atherosclerotic cardiovascular disease (ASCVD) risk algorithm to detect high cardiovascular (CV) risk in RA patients, as identified by carotid ultrasonography (US), it was compared with the Systematic Coronary Risk Evaluation (SCORE) and QRisk II algorithms. SCORE, QRisk II, the 2013 ACC/AHA 10-year ASCVD risk, and the EULAR-recommended modified versions were calculated in 216 RA patients. In the sonographic evaluation, a carotid intima-media thickness >0.90 mm and/or carotid plaques were used as the gold standard test for subclinical atherosclerosis and high CV risk (US+). Eleven (5.1%), 15 (6.9%), and 44 (20.4%) patients were classified as having high CV risk according to SCORE, QRisk II, and the ACC/AHA 10-year ASCVD risk, respectively. Fifty-two (24.1%) patients were US+, and of those, 8 (15.4%), 7 (13.5%), and 23 (44.2%) patients were classified as high CV risk according to SCORE, QRisk II, and the ACC/AHA 10-year ASCVD risk, respectively. The ACC/AHA 10-year ASCVD risk index identified US+ patients better than SCORE and QRisk II (P < 0.0001). With the EULAR modification, reclassification from moderate to high risk occurred in only two, five, and seven patients according to SCORE, QRisk II, and the ACC/AHA 10-year ASCVD risk, respectively. The 2013 ACC/AHA 10-year ASCVD risk estimator was better than the SCORE and QRisk II indices in RA, but still failed to identify 55% of high-risk patients; furthermore, adjustment of the threshold and the EULAR modification did not work well. © The Author 2015. Published by Oxford University Press on behalf of the British Society for Rheumatology. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  16. Worldwide genetic variability of the Duffy binding protein: insights into Plasmodium vivax vaccine development.

    PubMed

    Nóbrega de Sousa, Taís; Carvalho, Luzia Helena; Alves de Brito, Cristiana Ferreira

    2011-01-01

    The dependence of Plasmodium vivax on invasion mediated by the Duffy binding protein (DBP) makes this protein a prime candidate for vaccine development. However, the development of a DBP-based vaccine might be hampered by the high variability of the protein ligand (DBP(II)), which is known to bias the immune response toward a specific DBP variant. Here, the hypothesis investigated is that analysis of worldwide DBP(II) sequences will allow us to determine the minimum number of haplotypes (MNH) to be included in a DBP-based vaccine of broad coverage. To this end, all available DBP(II) sequences were compiled, and the MNH was based on the most frequent nonsynonymous single-nucleotide polymorphisms, the majority of which map to B- and T-cell epitopes. A preliminary analysis of DBP(II) genetic diversity from eight malaria-endemic countries estimated that between two and six DBP haplotypes (17 in total) would target at least 50% of the parasite population circulating in each endemic region. Aiming to avoid region-specific haplotypes, we next analyzed the MNH that would broadly cover worldwide parasite populations. The results demonstrated that seven haplotypes would be required to cover around 60% of the available DBP(II) sequences. Validating these selected haplotypes per country, we found that five of the eight countries would be covered by the MNH (67% of parasite populations, range 48-84%). In addition, to identify related subgroups of DBP(II) sequences, we used a Bayesian clustering algorithm, which grouped all DBP(II) sequences into six populations independent of geographic origin, with ancestral populations present in different proportions in each country. In conclusion, in this first attempt at a global analysis of DBP(II) variability, the results suggest that DBP-based vaccine development should consider multi-haplotype strategies; otherwise, a putative P. vivax vaccine may not target some parasite populations.
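
    A sketch of the coverage computation implied above: greedily pick the most frequent haplotypes until a target share of sampled sequences is covered. The haplotype counts are invented for illustration.

    ```python
    from collections import Counter

    seqs = Counter({"hapA": 120, "hapB": 90, "hapC": 60, "hapD": 20, "hapE": 10})
    total, target = sum(seqs.values()), 0.50   # aim to cover 50% of sequences

    chosen, covered = [], 0
    for hap, n in seqs.most_common():          # most frequent haplotypes first
        if covered / total >= target:
            break
        chosen.append(hap)
        covered += n
    print(chosen, f"coverage={covered / total:.0%}")
    ```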

  17. Direct and Electronic Health Record Access to the Clinical Decision Support for Immunizations in the Minnesota Immunization Information System.

    PubMed

    Rajamani, Sripriya; Bieringer, Aaron; Wallerius, Stephanie; Jensen, Daniel; Winden, Tamara; Muscoplat, Miriam Halstead

    2016-01-01

    Immunization information systems (IIS) are population-based, confidential computerized systems maintained by public health agencies that contain individual immunization data from participating health care providers. IIS hold comprehensive vaccination histories given across providers and over time. An important aspect of IIS is the clinical decision support for immunizations (CDSi), consisting of vaccine forecasting algorithms to determine needed immunizations. The study objective was to analyze the CDSi presentation by the IIS in Minnesota (Minnesota Immunization Information Connection [MIIC]) through direct access via the IIS interface and through access via electronic health records (EHRs), to outline similarities and differences. The immunization data presented were similar across the three systems examined, but with varying ability to integrate data across MIIC and the EHR, which impacts immunization data reconciliation. The study findings will lead to a better understanding of immunization data display, clinical decision support, and user functionalities, with the ultimate goal of promoting IIS CDSi to improve vaccination rates.

  18. 49 CFR 236.1033 - Communications and security requirements.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... shall: (1) Use an algorithm approved by the National Institute of Standards (NIST) or a similarly...; or (ii) When the key algorithm reaches its lifespan as defined by the standards body responsible for approval of the algorithm. (c) The cleartext form of the cryptographic keys shall be protected from...

  19. 49 CFR 236.1033 - Communications and security requirements.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... shall: (1) Use an algorithm approved by the National Institute of Standards (NIST) or a similarly...; or (ii) When the key algorithm reaches its lifespan as defined by the standards body responsible for approval of the algorithm. (c) The cleartext form of the cryptographic keys shall be protected from...

  20. 49 CFR 236.1033 - Communications and security requirements.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... shall: (1) Use an algorithm approved by the National Institute of Standards (NIST) or a similarly...; or (ii) When the key algorithm reaches its lifespan as defined by the standards body responsible for approval of the algorithm. (c) The cleartext form of the cryptographic keys shall be protected from...

  1. 49 CFR 236.1033 - Communications and security requirements.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... shall: (1) Use an algorithm approved by the National Institute of Standards (NIST) or a similarly...; or (ii) When the key algorithm reaches its lifespan as defined by the standards body responsible for approval of the algorithm. (c) The cleartext form of the cryptographic keys shall be protected from...

  2. 49 CFR 236.1033 - Communications and security requirements.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... shall: (1) Use an algorithm approved by the National Institute of Standards (NIST) or a similarly...; or (ii) When the key algorithm reaches its lifespan as defined by the standards body responsible for approval of the algorithm. (c) The cleartext form of the cryptographic keys shall be protected from...

  3. Type Ia supernova rate studies from the SDSS-II Supernova Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dilday, Benjamin

    2008-08-01

    The author presents new measurements of the type Ia SN rate from the SDSS-II Supernova Survey. The SDSS-II Supernova Survey was carried out during the Fall months (Sept.-Nov.) of 2005-2007 and discovered ~500 spectroscopically confirmed SNe Ia with densely sampled (once every ~4 days), multi-color light curves. Additionally, the SDSS-II Supernova Survey has discovered several hundred SNe Ia candidates with well-measured light curves, but without spectroscopic confirmation of type. This total, achieved in 9 months of observing, represents ~15-20% of the total SNe Ia discovered worldwide since 1885. The author describes some technical details of the SN Survey observations and SN search algorithms that contributed to the extremely high yield of discovered SNe and that are important as context for the SDSS-II Supernova Survey SN Ia rate measurements.

  4. Algorithm Development for the Multi-Fluid Plasma Model

    DTIC Science & Technology

    2011-05-30

    … 392, Sep 1995. [13] L. Chacon, D. C. Barnes, D. A. Knoll, and G. H. Miley. An implicit energy-conservative 2D Fokker-Planck algorithm. Journal of Computational Physics, 157(2):618-653, 2000. [14] L. Chacon, D. C. Barnes, D. A. Knoll, and G. H. Miley. An implicit energy-conservative 2D Fokker-Planck algorithm - II …

  5. Homology modeling, binding site identification and docking study of human angiotensin II type I (Ang II-AT1) receptor.

    PubMed

    Vyas, Vivek K; Ghate, Manjunath; Patel, Kinjal; Qureshi, Gulamnizami; Shah, Surmil

    2015-08-01

    Ang II-AT1 receptors play an important role in mediating virtually all of the physiological actions of Ang II. Several drugs (SARTANs) are available, which can block the AT1 receptor effectively and lower the blood pressure in the patients with hypertension. Currently, there is no experimental Ang II-AT1 structure available; therefore, in this study we modeled Ang II-AT1 receptor structure using homology modeling followed by identification and characterization of binding sites and thereby assessing druggability of the receptor. Homology models were constructed using MODELLER and I-TASSER server, refined and validated using PROCHECK in which 96.9% of 318 residues were present in the favoured regions of the Ramachandran plots. Various Ang II-AT1 receptor antagonist drugs are available in the market as antihypertensive drug, so we have performed docking study with the binding site prediction algorithms to predict different binding pockets on the modeled proteins. The identification of 3D structures and binding sites for various known drugs will guide us for the structure-based drug design of novel compounds as Ang II-AT1 receptor antagonists for the treatment of hypertension. Copyright © 2015 Elsevier Masson SAS. All rights reserved.

  6. Open-access MIMIC-II database for intensive care research.

    PubMed

    Lee, Joon; Scott, Daniel J; Villarroel, Mauricio; Clifford, Gari D; Saeed, Mohammed; Mark, Roger G

    2011-01-01

    The critical state of intensive care unit (ICU) patients demands close monitoring, and as a result a large volume of multi-parameter data is collected continuously. This represents a unique opportunity for researchers interested in clinical data mining. We sought to foster a more transparent and efficient intensive care research community by building a publicly available ICU database, namely Multiparameter Intelligent Monitoring in Intensive Care II (MIMIC-II). The data harnessed in MIMIC-II were collected from the ICUs of Beth Israel Deaconess Medical Center from 2001 to 2008 and represent 26,870 adult hospital admissions (version 2.6). MIMIC-II consists of two major components: clinical data and physiological waveforms. The clinical data, which include patient demographics, intravenous medication drip rates, and laboratory test results, were organized into a relational database. The physiological waveforms, including 125 Hz signals recorded at bedside and corresponding vital signs, were stored in an open-source format. MIMIC-II data were also deidentified in order to remove protected health information. Any interested researcher can gain access to MIMIC-II free of charge after signing a data use agreement and completing human subjects training. MIMIC-II can support a wide variety of research studies, ranging from the development of clinical decision support algorithms to retrospective clinical studies. We anticipate that MIMIC-II will be an invaluable resource for intensive care research by stimulating fair comparisons among different studies.

  7. Hydro-environmental management of groundwater resources: A fuzzy-based multi-objective compromise approach

    NASA Astrophysics Data System (ADS)

    Alizadeh, Mohammad Reza; Nikoo, Mohammad Reza; Rakhshandehroo, Gholam Reza

    2017-08-01

    Sustainable management of water resources necessitates close attention to social, economic and environmental aspects such as water quality and quantity concerns and potential conflicts. This study presents a new fuzzy-based multi-objective compromise methodology to determine the socio-optimal and sustainable policies for hydro-environmental management of groundwater resources, which simultaneously considers the conflicts and negotiation of involved stakeholders, uncertainties in decision makers' preferences, existing uncertainties in the groundwater parameters and groundwater quality and quantity issues. The fuzzy multi-objective simulation-optimization model is developed based on qualitative and quantitative groundwater simulation model (MODFLOW and MT3D), multi-objective optimization model (NSGA-II), Monte Carlo analysis and Fuzzy Transformation Method (FTM). Best compromise solutions (best management policies) on trade-off curves are determined using four different Fuzzy Social Choice (FSC) methods. Finally, a unanimity fallback bargaining method is utilized to suggest the most preferred FSC method. Kavar-Maharloo aquifer system in Fars, Iran, as a typical multi-stakeholder multi-objective real-world problem is considered to verify the proposed methodology. Results showed an effective performance of the framework for determining the most sustainable allocation policy in groundwater resource management.
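
    As a toy illustration of the compromise step described above — picking one policy from an NSGA-II trade-off curve — the sketch below applies the common max-min fuzzy-membership rule to a small Pareto front. It is an assumption-laden stand-in: the study's actual pipeline couples MODFLOW/MT3D, NSGA-II, Monte Carlo sampling, and several Fuzzy Social Choice methods, none of which are reproduced here.

```python
# Minimal sketch: fuzzy max-min compromise selection on a Pareto front.
import numpy as np

def fuzzy_compromise(front):
    """front: (n_solutions, n_objectives) array, all objectives minimized."""
    f_min = front.min(axis=0)
    f_max = front.max(axis=0)
    # Linear membership: 1 at the per-objective best value, 0 at the worst.
    mu = (f_max - front) / np.where(f_max > f_min, f_max - f_min, 1.0)
    # Max-min rule: pick the solution whose worst membership is largest.
    return int(np.argmax(mu.min(axis=1)))

front = np.array([[1.0, 9.0], [3.0, 4.0], [8.0, 1.5]])
print(fuzzy_compromise(front))  # -> 1, the balanced middle solution
```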

  8. Installation of automatic control at experimental breeder reactor II

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Larson, H.A.; Booty, W.F.; Chick, D.R.

    1985-08-01

    The Experimental Breeder Reactor II (EBR-II) has been modified to permit automatic control capability. Necessary mechanical and electrical changes were made on a regular control rod position; motor, gears, and controller were replaced. A digital computer system was installed that has the programming capability for varied power profiles. The modifications permit transient testing at EBR-II. Experiments were run that increased power linearly as much as 4 MW/s (16% of initial power of 25 MW(thermal)/s), held power constant, and decreased power at a rate no slower than the increase rate. Thus the performance of the automatic control algorithm, the mechanical and electrical control equipment, and the qualifications of the driver fuel for future power change experiments were all demonstrated.

  9. Artificial neural network (ANN) approach for modeling of Pb(II) adsorption from aqueous solution by Antep pistachio (Pistacia Vera L.) shells.

    PubMed

    Yetilmezsoy, Kaan; Demirel, Sevgi

    2008-05-30

    A three-layer artificial neural network (ANN) model was developed to predict the efficiency of Pb(II) ions removal from aqueous solution by Antep pistachio (Pistacia Vera L.) shells based on 66 experimental sets obtained in a laboratory batch study. The effect of operational parameters such as adsorbent dosage, initial concentration of Pb(II) ions, initial pH, operating temperature, and contact time were studied to optimise the conditions for maximum removal of Pb(II) ions. On the basis of batch test results, optimal operating conditions were determined to be an initial pH of 5.5, an adsorbent dosage of 1.0 g, an initial Pb(II) concentration of 30 ppm, and a temperature of 30 degrees C. Experimental results showed that a contact time of 45 min was generally sufficient to achieve equilibrium. After backpropagation (BP) training combined with principal component analysis (PCA), the ANN model was able to predict adsorption efficiency with a tangent sigmoid transfer function (tansig) at hidden layer with 11 neurons and a linear transfer function (purelin) at output layer. The Levenberg-Marquardt algorithm (LMA) was found as the best of 11 BP algorithms with a minimum mean squared error (MSE) of 0.000227875. The linear regression between the network outputs and the corresponding targets were proven to be satisfactory with a correlation coefficient of about 0.936 for five model variables used in this study.
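
    A minimal sketch of the 5-11-1 network architecture described above (tanh hidden layer, linear output), built with scikit-learn. Two loud assumptions: synthetic random data stands in for the 66 batch experiments, and the 'lbfgs' solver replaces Levenberg-Marquardt, which scikit-learn does not implement.

```python
# Sketch of a 5-11-1 regression network for adsorption efficiency.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Columns: dosage (g), initial Pb(II) (ppm), pH, temperature (C), time (min)
X = rng.uniform([0.1, 5, 2, 20, 5], [2.0, 50, 7, 50, 120], size=(66, 5))
y = rng.uniform(40, 99, size=66)  # removal efficiency (%), placeholder only

X_s = StandardScaler().fit_transform(X)
net = MLPRegressor(hidden_layer_sizes=(11,), activation="tanh",
                   solver="lbfgs", max_iter=5000, random_state=0)
net.fit(X_s, y)
print("train MSE:", np.mean((net.predict(X_s) - y) ** 2))
```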

  10. A risk-based multi-objective model for optimal placement of sensors in water distribution system

    NASA Astrophysics Data System (ADS)

    Naserizade, Sareh S.; Nikoo, Mohammad Reza; Montaseri, Hossein

    2018-02-01

    In this study, a new stochastic model based on Conditional Value at Risk (CVaR) and multi-objective optimization methods is developed for optimal placement of sensors in water distribution system (WDS). This model determines minimization of risk which is caused by simultaneous multi-point contamination injection in WDS using CVaR approach. The CVaR considers uncertainties of contamination injection in the form of probability distribution function and calculates low-probability extreme events. In this approach, extreme losses occur at tail of the losses distribution function. Four-objective optimization model based on NSGA-II algorithm is developed to minimize losses of contamination injection (through CVaR of affected population and detection time) and also minimize the two other main criteria of optimal placement of sensors including probability of undetected events and cost. Finally, to determine the best solution, Preference Ranking Organization METHod for Enrichment Evaluation (PROMETHEE), as a subgroup of Multi Criteria Decision Making (MCDM) approach, is utilized to rank the alternatives on the trade-off curve among objective functions. Also, sensitivity analysis is done to investigate the importance of each criterion on PROMETHEE results considering three relative weighting scenarios. The effectiveness of the proposed methodology is examined through applying it to Lamerd WDS in the southwestern part of Iran. The PROMETHEE suggests 6 sensors with suitable distribution that approximately cover all regions of WDS. Optimal values related to CVaR of affected population and detection time as well as probability of undetected events for the best optimal solution are equal to 17,055 persons, 31 mins and 0.045%, respectively. The obtained results of the proposed methodology in Lamerd WDS show applicability of CVaR-based multi-objective simulation-optimization model for incorporating the main uncertainties of contamination injection in order to evaluate extreme value of losses in WDS.
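
    The risk measure named above can be illustrated compactly: CVaR at level alpha is the mean loss over the worst (1 - alpha) tail of a simulated loss distribution. The sketch below uses synthetic lognormal losses as a placeholder for the study's contamination-event ensembles.

```python
# Minimal sketch of the CVaR computation behind the risk objective.
import numpy as np

def cvar(losses, alpha=0.95):
    losses = np.asarray(losses)
    var = np.quantile(losses, alpha)   # Value at Risk: the tail threshold
    return losses[losses >= var].mean()  # expected loss beyond VaR

rng = np.random.default_rng(1)
losses = rng.lognormal(mean=8.0, sigma=1.0, size=10_000)  # synthetic events
print(f"VaR95={np.quantile(losses, 0.95):.0f}  CVaR95={cvar(losses):.0f}")
```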

  11. Efficient Learning Algorithms with Limited Information

    ERIC Educational Resources Information Center

    De, Anindya

    2013-01-01

    The thesis explores efficient learning algorithms in settings which are more restrictive than the PAC model of learning (Valiant) in one of the following two senses: (i) The learning algorithm has a very weak access to the unknown function, as in, it does not get labeled samples for the unknown function (ii) The error guarantee required from the…

  12. Neuro-evolutionary computing paradigm for Painlevé equation-II in nonlinear optics

    NASA Astrophysics Data System (ADS)

    Ahmad, Iftikhar; Ahmad, Sufyan; Awais, Muhammad; Ul Islam Ahmad, Siraj; Asif Zahoor Raja, Muhammad

    2018-05-01

    The aim of this study is to investigate the numerical treatment of the Painlevé equation-II arising in physical models of nonlinear optics through artificial intelligence procedures by incorporating a single-layer structure of neural networks optimized with genetic algorithms, sequential quadratic programming and active set techniques. We constructed a mathematical model for the nonlinear Painlevé equation-II with the help of networks by defining an error-based cost function in the mean-square sense. The performance of the proposed technique is validated through statistical analyses by means of the one-way ANOVA test conducted on a dataset generated by a large number of independent runs.
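
    For concreteness, the sketch below writes down one plausible form of such a mean-square cost for Painlevé II, y'' = 2y³ + xy + α, with a single-layer tanh network as the trial solution. The network size, anchoring term, and grid are illustrative assumptions, not the authors' settings; any of the optimizers named above could be applied to cost().

```python
# Sketch: mean-square residual cost for Painleve II with a tanh network,
# y_hat(x) = sum_i w_i * tanh(b_i * x + c_i).
import numpy as np

alpha = 1.0
x = np.linspace(-1.0, 1.0, 50)

def y_hat(params, pts):
    w, b, c = params.reshape(3, -1)
    return np.tanh(np.outer(pts, b) + c) @ w

def cost(params, h=1e-4):
    # Second derivative via central differences; residual in MSE sense.
    ypp = (y_hat(params, x + h) - 2 * y_hat(params, x)
           + y_hat(params, x - h)) / h**2
    y = y_hat(params, x)
    residual = ypp - (2 * y**3 + x * y + alpha)
    # Assumed anchoring condition (illustrative): y(0) ~ 0.5.
    bc = (y_hat(params, np.array([0.0]))[0] - 0.5) ** 2
    return np.mean(residual**2) + bc

params0 = np.random.default_rng(2).normal(size=30)  # 10 hidden neurons
print("initial cost:", cost(params0))
```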

  13. Construction of type-II QC-LDPC codes with fast encoding based on perfect cyclic difference sets

    NASA Astrophysics Data System (ADS)

    Li, Ling-xiang; Li, Hai-bing; Li, Ji-bi; Jiang, Hua

    2017-09-01

    In view of the problems that the encoding complexity of quasi-cyclic low-density parity-check (QC-LDPC) codes is high and the minimum distance is not large enough which leads to the degradation of the error-correction performance, the new irregular type-II QC-LDPC codes based on perfect cyclic difference sets (CDSs) are constructed. The parity check matrices of these type-II QC-LDPC codes consist of the zero matrices with weight of 0, the circulant permutation matrices (CPMs) with weight of 1 and the circulant matrices with weight of 2 (W2CMs). The introduction of W2CMs in parity check matrices makes it possible to achieve the larger minimum distance which can improve the error-correction performance of the codes. The Tanner graphs of these codes have no girth-4, thus they have the excellent decoding convergence characteristics. In addition, because the parity check matrices have the quasi-dual diagonal structure, the fast encoding algorithm can reduce the encoding complexity effectively. Simulation results show that the new type-II QC-LDPC codes can achieve a more excellent error-correction performance and have no error floor phenomenon over the additive white Gaussian noise (AWGN) channel with sum-product algorithm (SPA) iterative decoding.
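
    The parity-check building blocks named above are easy to make concrete: a circulant permutation matrix (CPM) is a weight-1 circulant and a W2CM is a weight-2 circulant, and the quasi-cyclic parity-check matrix is tiled from such blocks. The sketch below builds one block row from placeholder shift values; a real construction would draw the shifts from a perfect cyclic difference set.

```python
# Sketch: circulant building blocks of a quasi-cyclic parity-check matrix.
import numpy as np

def circulant(p, shifts):
    """p x p circulant with a 1 in column (i + s) % p of row i per shift s."""
    m = np.zeros((p, p), dtype=np.uint8)
    for s in shifts:
        m[np.arange(p), (np.arange(p) + s) % p] = 1
    return m

p = 7
cpm = circulant(p, [3])       # weight-1 circulant permutation matrix (CPM)
w2cm = circulant(p, [1, 5])   # weight-2 circulant matrix (W2CM)
H_row = np.hstack([cpm, w2cm, np.zeros((p, p), dtype=np.uint8)])
print(H_row.sum(axis=1))      # each row of this block row has weight 3
```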

  14. Image compression evaluation for digital cinema: the case of Star Wars: Episode II

    NASA Astrophysics Data System (ADS)

    Schnuelle, David L.

    2003-05-01

    A program of evaluation of compression algorithms proposed for use in a digital cinema application is described and the results presented in general form. The work was intended to aid in the selection of a compression system to be used for the digital cinema release of Star Wars: Episode II, in May 2002. An additional goal was to provide feedback to the algorithm proponents on what parameters and performance levels the feature film industry is looking for in digital cinema compression. The primary conclusion of the test program is that any of the current digital cinema compression proponents will work for digital cinema distribution to today's theaters.

  15. Basics of identification measurement technology

    NASA Astrophysics Data System (ADS)

    Klikushin, Yu N.; Kobenko, V. Yu; Stepanov, P. P.

    2018-01-01

    None of the algorithms available and suitable for pattern recognition gives a 100% guarantee, so there is room for further scientific work in this direction and such studies remain relevant. It is proposed to develop existing technologies for pattern recognition in the form of application of identification measurements. The purpose of the study is to identify the possibility of recognizing images using identification measurement technologies. In solving problems of pattern recognition, neural networks and hidden Markov models are mainly used. A fundamentally new approach to the solution of problems of pattern recognition based on the technology of identification signal measurements (IIS) is proposed. The essence of IIS technology is the quantitative evaluation of the shape of images using special tools and algorithms.

  16. A Low Cost Matching Motion Estimation Sensor Based on the NIOS II Microprocessor

    PubMed Central

    González, Diego; Botella, Guillermo; Meyer-Baese, Uwe; García, Carlos; Sanz, Concepción; Prieto-Matías, Manuel; Tirado, Francisco

    2012-01-01

    This work presents the implementation of a matching-based motion estimation sensor on a Field Programmable Gate Array (FPGA) and NIOS II microprocessor applying a C to Hardware (C2H) acceleration paradigm. The design, which involves several matching algorithms, is mapped using Very Large Scale Integration (VLSI) technology. These algorithms, as well as the hardware implementation, are presented here together with an extensive analysis of the resources needed and the throughput obtained. The developed low-cost system is practical for real-time throughput and reduced power consumption and is useful in robotic applications, such as tracking, navigation using an unmanned vehicle, or as part of a more complex system. PMID:23201989

  17. Fe II emission lines. I - Chromospheric spectra of red giants

    NASA Technical Reports Server (NTRS)

    Judge, P. G.; Jordan, C.

    1991-01-01

    A 'difference filtering' algorithm developed by Ayers (1979) is used to construct high-quality high-dispersion long-wavelength IUE spectra of three giant stars. Measurements of all the emission lines seen between 2230 and 3100 A are tabulated. The emission spectrum of Fe II is discussed in comparison with other lines whose formation mechanisms are well understood. Systematic changes in the Fe II spectrum are related to the different physical conditions in the three stars, and examples are given of line profiles and ratios which can be used to determine conditions in the outer atmospheres of giants. It is concluded that most of the Fe II emission results from collisional excitation and/or absorption of photospheric photons at optical wavelengths, but some lines are formed by fluorescence, being photoexcited by other strong chromospheric lines. Between 10 and 20 percent of the radiative losses of Fe II arise from 10 eV levels radiatively excited by the strong chromospheric H Ly-alpha line.

  18. Developing a Screening Algorithm for Type II Diabetes Mellitus in the Resource-Limited Setting of Rural Tanzania.

    PubMed

    West, Caroline; Ploth, David; Fonner, Virginia; Mbwambo, Jessie; Fredrick, Francis; Sweat, Michael

    2016-04-01

    Noncommunicable diseases are on pace to outnumber infectious disease as the leading cause of death in sub-Saharan Africa, yet many questions remain unanswered concerning effective methods of screening for type II diabetes mellitus (DM) in this resource-limited setting. We aim to design a screening algorithm for type II DM that optimizes sensitivity and specificity of identifying individuals with undiagnosed DM, as well as affordability to health systems and individuals. Baseline demographic and clinical data, including hemoglobin A1c (HbA1c), were collected from 713 participants using probability sampling of the general population. We used these data, along with model parameters obtained from the literature, to mathematically model 8 proposed DM screening algorithms, while optimizing the sensitivity and specificity using Monte Carlo and Latin Hypercube simulation. An algorithm that combines risk assessment and measurement of fasting blood glucose was found to be superior for the most resource-limited settings (sensitivity 68%, specificity 99%, and a cost of $2.94 per patient identified as having DM). Incorporating HbA1c testing improves the sensitivity to 75.62%, but raises the cost per DM case identified to $6.04. The preferred algorithms are heavily biased to diagnose those with more severe cases of DM. Using basic risk assessment tools and fasting blood sugar testing in lieu of HbA1c testing in resource-limited settings could allow for significantly more feasible DM screening programs with reasonable sensitivity and specificity. Copyright © 2016 Southern Society for Clinical Investigation. Published by Elsevier Inc. All rights reserved.
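
    The kind of algorithm evaluation described above can be sketched as a simple two-stage simulation: a cheap risk questionnaire followed by fasting glucose testing on questionnaire positives, scored by sensitivity, specificity, and cost per detected case. Every rate and price below is an illustrative placeholder, not a fitted parameter from the Tanzanian data.

```python
# Sketch: simulating a two-stage screening algorithm's operating point.
import numpy as np

rng = np.random.default_rng(3)
n, prevalence = 100_000, 0.05
has_dm = rng.random(n) < prevalence

# Stage 1: risk questionnaire (assumed sens/spec), applied to everyone.
s1 = np.where(has_dm, rng.random(n) < 0.85, rng.random(n) < 0.40)
# Stage 2: fasting glucose (assumed sens/spec) on stage-1 positives only.
s2 = s1 & np.where(has_dm, rng.random(n) < 0.80, rng.random(n) < 0.02)

cost = n * 0.10 + s1.sum() * 1.50   # $0.10/questionnaire, $1.50/glucose test
sens = s2[has_dm].mean()
spec = 1 - s2[~has_dm].mean()
print(f"sens={sens:.2f} spec={spec:.3f} $/case={cost / s2[has_dm].sum():.2f}")
```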

  19. Development and Translation of Hybrid Optoacoustic/Ultrasonic Tomography for Early Breast Cancer Detection

    DTIC Science & Technology

    2014-09-01

    to develop an optimized system design and associated image reconstruction algorithms for a hybrid three-dimensional (3D) breast imaging system that... (i) developed time-of-flight extraction algorithms to perform USCT, (ii) developing image reconstruction algorithms for USCT, (iii) developed

  20. An algorithmic approach to the brain biopsy--part II.

    PubMed

    Prayson, Richard A; Kleinschmidt-DeMasters, B K

    2006-11-01

    The formulation of appropriate differential diagnoses for a slide is essential to the practice of surgical pathology but can be particularly challenging for residents and fellows. Algorithmic flow charts can help the less experienced pathologist to systematically consider all possible choices and eliminate incorrect diagnoses. They can assist pathologists-in-training in developing orderly, sequential, and logical thinking skills when confronting difficult cases. To present an algorithmic flow chart as an approach to formulating differential diagnoses for lesions seen in surgical neuropathology. An algorithmic flow chart to be used in teaching residents. Algorithms are not intended to be final diagnostic answers on any given case. Algorithms do not substitute for training received from experienced mentors nor do they substitute for comprehensive reading by trainees of reference textbooks. Algorithmic flow diagrams can, however, direct the viewer to the correct spot in reference texts for further in-depth reading once they hone down their diagnostic choices to a smaller number of entities. The best feature of algorithms is that they remind the user to consider all possibilities on each case, even if they can be quickly eliminated from further consideration. In Part II, we assist the resident in arriving at the correct diagnosis for neuropathologic lesions containing granulomatous inflammation, macrophages, or abnormal blood vessels.

  1. Whole-brain MRI phenotyping in dysplasia-related frontal lobe epilepsy.

    PubMed

    Hong, Seok-Jun; Bernhardt, Boris C; Schrader, Dewi S; Bernasconi, Neda; Bernasconi, Andrea

    2016-02-16

    To perform whole-brain morphometry in patients with frontal lobe epilepsy and evaluate the utility of group-level patterns for individualized diagnosis and prognosis. We compared MRI-based cortical thickness and folding complexity between 2 frontal lobe epilepsy cohorts with histologically verified focal cortical dysplasia (FCD) (13 type I; 28 type II) and 41 closely matched controls. Pattern learning algorithms evaluated the utility of group-level findings to predict histologic FCD subtype, the side of the seizure focus, and postsurgical seizure outcome in single individuals. Relative to controls, FCD type I displayed multilobar cortical thinning that was most marked in ipsilateral frontal cortices. Conversely, type II showed thickening in temporal and postcentral cortices. Cortical folding also diverged, with increased complexity in prefrontal cortices in type I and decreases in type II. Group-level findings successfully guided automated FCD subtype classification (type I: 100%; type II: 96%), seizure focus lateralization (type I: 92%; type II: 86%), and outcome prediction (type I: 92%; type II: 82%). FCD subtypes relate to diverse whole-brain structural phenotypes. While cortical thickening in type II may indicate delayed pruning, a thin cortex in type I likely results from combined effects of seizure excitotoxicity and the primary malformation. Group-level patterns have a high translational value in guiding individualized diagnostics. © 2016 American Academy of Neurology.

  2. Using Genetic Algorithm and MODFLOW to Characterize Aquifer System of Northwest Florida

    EPA Science Inventory

    By integrating Genetic Algorithm and MODFLOW2005, an optimizing tool is developed to characterize the aquifer system of Region II, Northwest Florida. The history and the newest available observation data of the aquifer system are fitted automatically by using the numerical model c...

  3. A Car Transportation System in Cooperation by Multiple Mobile Robots for Each Wheel: iCART II

    NASA Astrophysics Data System (ADS)

    Kashiwazaki, Koshi; Yonezawa, Naoaki; Kosuge, Kazuhiro; Sugahara, Yusuke; Hirata, Yasuhisa; Endo, Mitsuru; Kanbayashi, Takashi; Shinozuka, Hiroyuki; Suzuki, Koki; Ono, Yuki

    The authors proposed a car transportation system, iCART (intelligent Cooperative Autonomous Robot Transporters), for automation of mechanical parking systems by two mobile robots. However, it was difficult to downsize the mobile robot because the length of it requires at least the wheelbase of a car. This paper proposes a new car transportation system, iCART II (iCART - type II), based on “a-robot-for-a-wheel” concept. A prototype system, MRWheel (a Mobile Robot for a Wheel), is designed and downsized to less than half the length of the conventional robot. First, a method for lifting up a wheel by MRWheel is described. In general, it is very difficult for mobile robots such as MRWheel to move to desired positions without motion errors caused by slipping, etc. Therefore, we propose a follower's motion error estimation algorithm based on the internal force applied to each follower by extending a conventional leader-follower type decentralized control algorithm for cooperative object transportation. The proposed algorithm enables followers to estimate their motion errors and enables the robots to transport a car to a desired position. In addition, we analyze and prove the stability and convergence of the resultant system with the proposed algorithm. In order to extract only the internal force from the force applied to each robot, we also propose a model-based external force compensation method. Finally, the proposed methods are applied to the car transportation system, and the experimental results confirm their validity.

  4. Hyperspectral Image Classification for Land Cover Based on an Improved Interval Type-II Fuzzy C-Means Approach

    PubMed Central

    Li, Zhao-Liang

    2018-01-01

    Few studies have examined hyperspectral remote-sensing image classification with type-II fuzzy sets. This paper addresses image classification based on a hyperspectral remote-sensing technique using an improved interval type-II fuzzy c-means (IT2FCM*) approach. In this study, in contrast to other traditional fuzzy c-means-based approaches, the IT2FCM* algorithm considers the ranking of interval numbers and the spectral uncertainty. The classification results based on a hyperspectral dataset using the FCM, IT2FCM, and the proposed improved IT2FCM* algorithms show that the IT2FCM* method gives the best performance in terms of clustering accuracy. In this paper, in order to validate and demonstrate the separability of the IT2FCM*, four type-I fuzzy validity indexes are employed, and a comparative analysis of these fuzzy validity indexes as applied to the FCM and IT2FCM methods is also made. These four indexes are also applied to different spatial and spectral resolution datasets to analyze the effects of spectral and spatial scaling factors on the separability of the FCM, IT2FCM, and IT2FCM* methods. The results of these validity indexes from the hyperspectral datasets show that the improved IT2FCM* algorithm has the best values among these three algorithms in general. The results demonstrate that the IT2FCM* exhibits good performance in hyperspectral remote-sensing image classification because of its ability to handle hyperspectral uncertainty. PMID:29373548
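
    As background for the comparison above, here is a minimal implementation of the classical type-I fuzzy c-means baseline. The interval type-II variants additionally carry an upper and lower membership per point (two fuzzifiers) and a type-reduction step; that extension is deliberately omitted here.

```python
# Sketch: classical (type-I) fuzzy c-means, the baseline the paper extends.
import numpy as np

def fcm(X, c=3, m=2.0, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    U = rng.dirichlet(np.ones(c), size=len(X))          # memberships (n, c)
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]    # weighted means
        d = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-12
        U = 1.0 / d ** (2.0 / (m - 1.0))                # standard FCM update
        U /= U.sum(axis=1, keepdims=True)
    return centers, U

X = np.vstack([np.random.default_rng(1).normal(mu, 0.3, (50, 2))
               for mu in (0.0, 2.0, 4.0)])
centers, U = fcm(X)
print(np.round(centers, 2))
```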

  5. ARPANET Routing Algorithm Improvements. Volume 1

    DTIC Science & Technology

    1980-08-01

    the currently active PROCESS (at the head of the scheduling list). TIME returns the time. PASSIVATE puts the CURRENT PROCESS to sleep, and wakes up... Contract. Network buffer management issues are discussed and a new buffer management scheme for the ARPANET is designed. Logical addressing is discussed... and a design is given for a logical addressing scheme suitable for ARPANET or DIN II. The applicability of ARPANET Routing to DIN II is evaluated. The

  6. HOLIMO II: a digital holographic instrument for ground-based in-situ observations of microphysical properties of mixed-phase clouds

    NASA Astrophysics Data System (ADS)

    Henneberger, J.; Fugal, J. P.; Stetzer, O.; Lohmann, U.

    2013-05-01

    Measurements of the microphysical properties of mixed-phase clouds with high spatial resolution are important to understand the processes inside these clouds. This work describes the design and characterization of the newly developed ground-based field instrument HOLIMO II (HOLographic Imager for Microscopic Objects II). HOLIMO II uses digital in-line holography to in-situ image cloud particles in a well defined sample volume. By an automated algorithm, two-dimensional images of single cloud particles between 6 and 250 μm in diameter are obtained and the size spectrum, the concentration and water content of clouds are calculated. By testing the sizing algorithm with monosized beads a systematic overestimation near the resolution limit was found, which has been used to correct the measurements. Field measurements from the high altitude research station Jungfraujoch, Switzerland, are presented. The measured number size distributions are in good agreement with parallel measurements by a fog monitor (FM-100, DMT, Boulder USA). The field data shows that HOLIMO II is capable of measuring the number size distribution with a high spatial resolution and determines ice crystal shape, thus providing a method of quantifying variations in microphysical properties. A case study over a period of 8 h has been analyzed, exploring the transition from a liquid to a mixed-phase cloud, which is the longest observation of a cloud with a holographic device. During the measurement period, the cloud does not completely glaciate, contradicting earlier assumptions of the dominance of the Wegener-Bergeron-Findeisen (WBF) process.

  7. HOLIMO II: a digital holographic instrument for ground-based in situ observations of microphysical properties of mixed-phase clouds

    NASA Astrophysics Data System (ADS)

    Henneberger, J.; Fugal, J. P.; Stetzer, O.; Lohmann, U.

    2013-11-01

    Measurements of the microphysical properties of mixed-phase clouds with high spatial resolution are important to understand the processes inside these clouds. This work describes the design and characterization of the newly developed ground-based field instrument HOLIMO II (HOLographic Imager for Microscopic Objects II). HOLIMO II uses digital in-line holography to in situ image cloud particles in a well-defined sample volume. By an automated algorithm, two-dimensional images of single cloud particles between 6 and 250 μm in diameter are obtained and the size spectrum, the concentration and water content of clouds are calculated. By testing the sizing algorithm with monosized beads a systematic overestimation near the resolution limit was found, which has been used to correct the measurements. Field measurements from the high altitude research station Jungfraujoch, Switzerland, are presented. The measured number size distributions are in good agreement with parallel measurements by a fog monitor (FM-100, DMT, Boulder USA). The field data shows that HOLIMO II is capable of measuring the number size distribution with a high spatial resolution and determines ice crystal shape, thus providing a method of quantifying variations in microphysical properties. A case study over a period of 8 h has been analyzed, exploring the transition from a liquid to a mixed-phase cloud, which is the longest observation of a cloud with a holographic device. During the measurement period, the cloud does not completely glaciate, contradicting earlier assumptions of the dominance of the Wegener-Bergeron-Findeisen (WBF) process.

  8. Multi-Objective Scheduling for the Cluster II Constellation

    NASA Technical Reports Server (NTRS)

    Johnston, Mark D.; Giuliano, Mark

    2011-01-01

    This paper describes the application of the MUSE multiobjective scheduling framework to the Cluster II WBD scheduling domain. Cluster II is an ESA four-spacecraft constellation designed to study the plasma environment of the Earth and its magnetosphere. One of the instruments on each of the four spacecraft is the Wide Band Data (WBD) plasma wave experiment. We have applied the MUSE evolutionary algorithm to the scheduling problem represented by this instrument, and the result has been adopted and utilized by the WBD schedulers for nearly a year. This paper describes the WBD scheduling problem, its representation in MUSE, and some of the visualization elements that provide insight into objective value tradeoffs.

  9. The Icarus challenge - Predicting vulnerability to climate change using an algorithm-based species' trait approach

    EPA Science Inventory

    The Icarus challenge - Predicting vulnerability to climate change using an algorithm-based species’ trait approach. Henry Lee II, Christina Folger, Deborah A. Reusser, Patrick Clinton, and Rene Graham. U.S. EPA, Western Ecology Division, Newport, OR USA. E-mail: lee.henry@ep...

  10. Using Genetic Algorithm and MODFLOW to Characterize Aquifer System of Northwest Florida (Published Proceedings)

    EPA Science Inventory

    By integrating Genetic Algorithm and MODFLOW2005, an optimizing tool is developed to characterize the aquifer system of Region II, Northwest Florida. The history and the newest available observation data of the aquifer system are fitted automatically by using the numerical model c...

  11. Deeper Insights into the Circumgalactic Medium using Multivariate Analysis Methods

    NASA Astrophysics Data System (ADS)

    Lewis, James; Churchill, Christopher W.; Nielsen, Nikole M.; Kacprzak, Glenn

    2017-01-01

    Drawing from a database of galaxies whose surrounding gas has absorption from MgII, called the MgII-Absorbing Galaxy Catalog (MAGIICAT, Nielsen et al. 2013), we studied the circumgalactic medium (CGM) for a sample of 47 galaxies. Using multivariate analysis, in particular the k-means clustering algorithm, we determined that simultaneously examining column density (N), rest-frame B-K color, virial mass, and azimuthal angle (the projected angle between the galaxy major axis and the quasar line of sight) yields two distinct populations: (1) bluer, lower mass galaxies with higher column density along the minor axis, and (2) redder, higher mass galaxies with lower column density along the major axis. We support this grouping by running (i) two-sample, two-dimensional Kolmogorov-Smirnov (KS) tests on each of the six bivariate planes and (ii) two-sample KS tests on each of the four variables to show that the galaxies significantly cluster into two independent populations. To account for the fact that 16 of our 47 galaxies have upper limits on N, we performed Monte-Carlo tests whereby we replaced upper limits with random deviates drawn from a Schechter distribution fit, f(N). These tests strengthen the results of the KS tests. We examined the behavior of the MgII λ2796 absorption line equivalent width and velocity width for each galaxy population. We find that equivalent width and velocity width do not show similar characteristic distinctions between the two galaxy populations. We discuss the k-means clustering algorithm for optimizing the analysis of populations within datasets as opposed to using arbitrary bivariate subsample cuts. We also discuss the power of the k-means clustering algorithm in extracting deeper physical insight into the CGM in relationship to host galaxies.
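
    The analysis pattern described above — k-means with k = 2 on standardized features, followed by per-variable two-sample KS tests between the resulting groups — can be sketched directly with scikit-learn and SciPy. The data below are synthetic stand-ins, not MAGIICAT measurements.

```python
# Sketch: k-means (k=2) clustering plus per-variable two-sample KS tests.
import numpy as np
from scipy.stats import ks_2samp
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(4)
# Columns: log N(MgII), B-K color, log virial mass, azimuthal angle (deg)
pop1 = np.column_stack([rng.normal(13.5, 0.5, 24), rng.normal(1.2, 0.4, 24),
                        rng.normal(11.3, 0.3, 24), rng.uniform(45, 90, 24)])
pop2 = np.column_stack([rng.normal(12.8, 0.5, 23), rng.normal(2.4, 0.4, 23),
                        rng.normal(12.1, 0.3, 23), rng.uniform(0, 45, 23)])
X = np.vstack([pop1, pop2])

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(
    StandardScaler().fit_transform(X))
for j, name in enumerate(["logN", "B-K", "logM", "azimuth"]):
    stat, p = ks_2samp(X[labels == 0, j], X[labels == 1, j])
    print(f"{name}: KS p-value = {p:.3g}")
```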

  12. Evaluation of bioMérieux's Dissociated Vidas Lyme IgM II and IgG II as a First-Tier Diagnostic Assay for Lyme Disease

    PubMed Central

    Delorey, Mark J.; Replogle, Adam; Sexton, Christopher; Schriefer, Martin E.

    2017-01-01

    The recommended laboratory diagnostic approach for Lyme disease is a standard two-tiered testing (STTT) algorithm where the first tier is typically an enzyme immunoassay (EIA) that if positive or equivocal is reflexed to Western immunoblotting as the second tier. bioMérieux manufactures one of the most commonly used first-tier EIAs in the United States, the combined IgM/IgG Vidas test (LYT). Recently, bioMérieux launched its dissociated first-tier tests, the Vidas Lyme IgM II (LYM) and IgG II (LYG) EIAs, which use purified recombinant test antigens and a different algorithm than STTT. The dissociated LYM/LYG EIAs were evaluated against the combined LYT EIA using samples from 471 well-characterized Lyme patients and controls. Statistical analyses were conducted to assess the performance of these EIAs as first-tier tests and when used in two-tiered algorithms, including a modified two-tiered testing (MTTT) approach where the second-tier test was a C6 EIA. Similar sensitivities and specificities were obtained for the two testing strategies (LYT versus LYM/LYG) when used as first-tier tests (sensitivity, 83 to 85%; specificity, 85 to 88%) with an observed agreement of 80%. Sensitivities of 68 to 69% and 76 to 77% and specificities of 97% and 98 to 99% resulted when the two EIA strategies were followed by Western immunoblotting and when used in an MTTT, respectively. The MTTT approach resulted in significantly higher sensitivities than did STTT. Overall, the LYM/LYG EIAs performed equivalently to the LYT EIA in test-to-test comparisons or as first-tier assays in STTT or MTTT with few exceptions. PMID:28330884

  13. Cohomology of line bundles: Applications

    NASA Astrophysics Data System (ADS)

    Blumenhagen, Ralph; Jurke, Benjamin; Rahn, Thorsten; Roschy, Helmut

    2012-01-01

    Massless modes of both heterotic and Type II string compactifications on compact manifolds are determined by vector bundle valued cohomology classes. Various applications of our recent algorithm for the computation of line bundle valued cohomology classes over toric varieties are presented. For the heterotic string, the prime examples are so-called monad constructions on Calabi-Yau manifolds. In the context of Type II orientifolds, one often needs to compute cohomology for line bundles on finite group action coset spaces, necessitating us to generalize our algorithm to this case. Moreover, we exemplify that the different terms in Batyrev's formula and its generalizations can be given a one-to-one cohomological interpretation. Furthermore, we derive a combinatorial closed form expression for two Hodge numbers of a codimension two Calabi-Yau fourfold.

  14. The Value of the SYNTAX Score II in Predicting Clinical Outcomes in Patients Undergoing Transcatheter Aortic Valve Implantation.

    PubMed

    Ryan, Nicola; Nombela-Franco, Luis; Jiménez-Quevedo, Pilar; Biagioni, Corina; Salinas, Pablo; Aldazábal, Andrés; Cerrato, Enrico; Gonzalo, Nieves; Del Trigo, María; Núñez-Gil, Iván; Fernández-Ortiz, Antonio; Macaya, Carlos; Escaned, Javier

    2017-11-27

    The predictive value of the SYNTAX score (SS) for clinical outcomes after transcatheter aortic valve implantation (TAVI) is very limited and could potentially be improved by the combination of anatomic and clinical variables, the SS-II. We aimed to evaluate the value of the SS-II in predicting outcomes in patients undergoing TAVI. A total of 402 patients with severe symptomatic aortic stenosis undergoing transfemoral TAVI were included. Preprocedural TAVI angiograms were reviewed and the SS-I and SS-II were calculated using the SS algorithms. Patients were stratified in 3 groups according to SS-II tertiles. The coprimary endpoints were all-cause death and major adverse cardiovascular events (MACE), a composite of all-cause death, cerebrovascular event, or myocardial infarction at 1 year. Increased SS-II was associated with higher 30-day mortality (P=.036) and major bleeding (P=.015). The 1-year risk of death and MACE was higher among patients in the 3rd SS-II tertile (HR, 2.60; P=.002 and HR, 2.66; P<.001) and was similar among patients in the 2nd tertile (HR, 1.27; P=.507 and HR, 1.05; P=.895) compared with patients in the 1st tertile. The highest SS-II tertile was an independent predictor of long-term mortality (P=.046) and MACE (P=.001). The SS-II seems more suited to predict clinical outcomes in patients undergoing TAVI than the SS-I. Increased SS-II was associated with poorer clinical outcomes at 1 and 4 years post-TAVI, independently of the presence of coronary artery disease. Copyright © 2017 Sociedad Española de Cardiología. Published by Elsevier España, S.L.U. All rights reserved.

  15. Parallel and Distributed Methods for Constrained Nonconvex Optimization—Part I: Theory

    NASA Astrophysics Data System (ADS)

    Scutari, Gesualdo; Facchinei, Francisco; Lampariello, Lorenzo

    2017-04-01

    In Part I of this paper, we proposed and analyzed a novel algorithmic framework for the minimization of a nonconvex (smooth) objective function, subject to nonconvex constraints, based on inner convex approximations. This Part II is devoted to the application of the framework to some resource allocation problems in communication networks. In particular, we consider two non-trivial case-study applications, namely (generalizations of): i) rate profile maximization in MIMO interference broadcast networks; and ii) the max-min fair multicast multigroup beamforming problem in a multi-cell environment. We develop a new class of algorithms enjoying the following distinctive features: i) they are distributed across the base stations (with limited signaling) and lead to subproblems whose solutions are computable in closed form; and ii) differently from current relaxation-based schemes (e.g., semidefinite relaxation), they are proved to always converge to d-stationary solutions of the aforementioned class of nonconvex problems. Numerical results show that the proposed (distributed) schemes achieve larger worst-case rates (resp. signal-to-noise interference ratios) than state-of-the-art centralized ones while having comparable computational complexity.

  16. Novel histopathologic feature identified through image analysis augments stage II colorectal cancer clinical reporting

    PubMed Central

    Caie, Peter D.; Zhou, Ying; Turnbull, Arran K.; Oniscu, Anca; Harrison, David J.

    2016-01-01

    A number of candidate histopathologic factors show promise in identifying stage II colorectal cancer (CRC) patients at a high risk of disease-specific death, however they can suffer from low reproducibility and none have replaced classical pathologic staging. We developed an image analysis algorithm which standardized the quantification of specific histopathologic features and exported a multi-parametric feature-set captured without bias. The image analysis algorithm was executed across a training set (n = 50) and the resultant big data was distilled through decision tree modelling to identify the most informative parameters to sub-categorize stage II CRC patients. The most significant, and novel, parameter identified was the ‘sum area of poorly differentiated clusters’ (AreaPDC). This feature was validated across a second cohort of stage II CRC patients (n = 134) (HR = 4; 95% CI, 1.5–11). Finally, the AreaPDC was integrated with the significant features within the clinical pathology report, pT stage and differentiation, into a novel prognostic index (HR = 7.5; 95% CI, 3–18.5) which improved upon current clinical staging (HR = 4.26; 95% CI, 1.7–10.3). The identification of poorly differentiated clusters as being highly significant in disease progression presents evidence to suggest that these features could be the source of novel targets to decrease the risk of disease specific death. PMID:27322148
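
    The distillation step described above — reducing a large multi-parametric feature set to its most informative parameter via decision-tree modelling — looks roughly like the following sketch, with synthetic features standing in for the image-analysis output.

```python
# Sketch: surfacing the most informative feature with a shallow decision tree.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(5)
n = 50
area_pdc = rng.exponential(1.0, n)        # stand-in for 'AreaPDC'
other = rng.normal(size=(n, 9))           # nine uninformative features
X = np.column_stack([area_pdc, other])
y = (area_pdc + rng.normal(0, 0.5, n) > 1.0).astype(int)  # outcome link

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print("importance of AreaPDC:", tree.feature_importances_[0].round(2))
```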

  17. Heterogeneous Vision Data Fusion for Independently Moving Cameras

    DTIC Science & Technology

    2010-03-01

    target detection, tracking, and identification over a large terrain. The goal of the project is to investigate and evaluate the existing image... fusion algorithms, develop new real-time algorithms for Category-II image fusion, and apply these algorithms in moving target detection and tracking. The... moving target detection and classification. Subject terms: Image Fusion, Target Detection, Moving Cameras, IR Camera, EO Camera

  18. Stochastic and Deterministic Crystal Structure Solution Methods in GSAS-II: Monte Carlo/Simulated Annealing Versus Charge Flipping

    DOE PAGES

    Von Dreele, Robert

    2017-08-29

    One of the goals in developing GSAS-II was to expand from the capabilities of the original General Structure Analysis System (GSAS) which largely encompassed just structure refinement and post refinement analysis. GSAS-II has been written almost entirely in Python loaded with graphics, GUI and mathematical packages (matplotlib, pyOpenGL, wxpython, numpy and scipy). Thus, GSAS-II has a fully developed modern GUI as well as extensive graphical display of data and results. However, the structure and operation of Python has required new approaches to many of the algorithms used in crystal structure analysis. The extensions beyond GSAS include image calibration/integration as well as peak fitting and unit cell indexing for powder data which are precursors for structure solution. Structure solution within GSAS-II begins with either Pawley or LeBail extracted structure factors from powder data or those measured in a single crystal experiment. Both charge flipping and Monte Carlo-Simulated Annealing techniques are available; the former can be applied to (3+1) incommensurate structures as well as conventional 3D structures.
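
    Of the two solution methods named above, charge flipping is the easier to sketch: alternate between flipping weak density in real space and restoring observed structure-factor amplitudes in reciprocal space. The 1D toy below, loosely following the Oszlányi-Sütő scheme, is illustrative only; GSAS-II operates on full 3D data and its implementation is not reproduced here.

```python
# Sketch: 1D charge flipping from amplitudes-only "data".
import numpy as np

rng = np.random.default_rng(6)
n = 128
true_rho = np.zeros(n)
true_rho[[20, 55, 90]] = [3.0, 5.0, 2.0]        # toy point 'atoms'
F_obs = np.abs(np.fft.fft(true_rho))            # amplitudes only; phases lost

F = F_obs * np.exp(2j * np.pi * rng.random(n))  # random starting phases
delta = 0.1                                     # flipping threshold
for _ in range(500):
    rho = np.fft.ifft(F).real
    rho = np.where(rho < delta, -rho, rho)      # flip weak/negative density
    G = np.fft.fft(rho)
    F = F_obs * np.exp(1j * np.angle(G))        # keep new phases, restore |F|
    F[0] = G[0]                                 # let the DC term float

# A small residual suggests recovered phases (up to an origin shift).
r = np.abs(np.abs(G[1:]) - F_obs[1:]).sum() / F_obs[1:].sum()
print(f"amplitude R-factor after 500 cycles: {r:.3f}")
```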

  19. In silico prediction of ROCK II inhibitors by different classification approaches.

    PubMed

    Cai, Chuipu; Wu, Qihui; Luo, Yunxia; Ma, Huili; Shen, Jiangang; Zhang, Yongbin; Yang, Lei; Chen, Yunbo; Wen, Zehuai; Wang, Qi

    2017-11-01

    ROCK II is an important pharmacological target linked to central nervous system disorders such as Alzheimer's disease. The purpose of this research is to generate ROCK II inhibitor prediction models by machine learning approaches. Firstly, four sets of descriptors were calculated with MOE 2010 and PaDEL-Descriptor, and optimized by F-score and linear forward selection methods. In addition, four classification algorithms were used to initially build 16 classifiers with k-nearest neighbors (k-NN), naïve Bayes, random forest, and support vector machine. Furthermore, three sets of structural fingerprint descriptors were introduced to enhance the predictive capacity of classifiers, which were assessed with fivefold cross-validation, test set validation and external test set validation. The two best models, MFK + MACCS and MLR + SubFP, both have MCC values of 0.925 on the external test set. After that, a privileged substructure analysis was performed to reveal common chemical features of ROCK II inhibitors. Finally, binding modes were analyzed to identify relationships between molecular descriptors and activity, while main interactions were revealed by comparing the docking interaction of the most potent and the weakest ROCK II inhibitors. To the best of our knowledge, this is the first report on ROCK II inhibitors utilizing machine learning approaches that provides a new method for discovering novel ROCK II inhibitors.
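
    A minimal sketch of the evaluation loop implied above: fit a classifier on a descriptor matrix and score it with the Matthews correlation coefficient (MCC) under five-fold cross-validation. The random descriptor matrix is a placeholder for the MOE/PaDEL features.

```python
# Sketch: five-fold cross-validated MCC for a descriptor-based classifier.
import numpy as np
from sklearn.model_selection import cross_val_predict
from sklearn.svm import SVC
from sklearn.metrics import matthews_corrcoef

rng = np.random.default_rng(7)
X = rng.normal(size=(200, 30))              # placeholder descriptor matrix
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.5, 200) > 0).astype(int)

pred = cross_val_predict(SVC(kernel="rbf", C=1.0), X, y, cv=5)
print("5-fold MCC:", round(matthews_corrcoef(y, pred), 3))
```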

  20. Global Climate Monitoring with the EOS PM-Platform's Advanced Microwave Scanning Radiometer (AMSR-E)

    NASA Technical Reports Server (NTRS)

    Spencer, Roy W.

    2002-01-01

    The Advanced Microwave Scanning Radiometer (AMSR-E) is being built by NASDA to fly on NASA's PM Platform (now called Aqua) in December 2000. This is in addition to a copy of AMSR that will be launched on Japan's ADEOS-II satellite in 2001. The AMSRs improve upon the window frequency radiometer heritage of the SSM/I and SMMR instruments. Major improvements over those instruments include channels spanning the 6.9 GHz to 89 GHz frequency range, and higher spatial resolution from a 1.6 m reflector (AMSR-E) and 2.0 m reflector (ADEOS-II AMSR). The ADEOS-II AMSR also will have 50.3 and 52.8 GHz channels, providing sensitivity to lower tropospheric temperature. NASA funds an AMSR-E Science Team to provide algorithms for the routine production of a number of standard geophysical products. These products will be generated by the AMSR-E Science Investigator-led Processing System (SIPS) at the Global Hydrology Resource Center (GHRC) in Huntsville, Alabama. While there is a separate NASDA-sponsored activity to develop algorithms and produce products from AMSR, as well as a Joint (NASDA-NASA) AMSR Science Team activity, here I will review only the AMSR-E Team's algorithms and how they benefit from the new capabilities that AMSR-E will provide. The US Team's products will be archived at the National Snow and Ice Data Center (NSIDC).

  1. Limited Memory Block Krylov Subspace Optimization for Computing Dominant Singular Value Decompositions

    DTIC Science & Technology

    2012-03-22

    with performance profiles, Math. Program., 91 (2002), pp. 201–213. [6] P. Drineas, R. Kannan, and M. W. Mahoney, Fast Monte Carlo algorithms for matrices... computing invariant subspaces of non-Hermitian matrices, Numer. Math., 25 (1975/76), pp. 123–136. [25], Matrix algorithms Vol. II: Eigensystems

  2. Genetic loci associated with delayed clearance of Plasmodium falciparum following artemisinin treatment in Southeast Asia

    DTIC Science & Technology

    2013-01-02

    intensity data from the SNP array were normalized using the Affymetrix GeneChip Targeted Genotyping Analysis Software (GTGS). To assess robustness of SNP... calls, genotypes were called using three algorithms: (i) GTGS, (ii) illuminus (27), and (iii) a heuristic algorithm based on discrete cutoffs of

  3. SEBAL-A: A remote sensing ET algorithm that accounts for advection with limited data. Part II: Test for transferability

    USDA-ARS?s Scientific Manuscript database

    Because the Surface Energy Balance Algorithm for Land (SEBAL) tends to underestimate ET under conditions of advection, the model was modified by incorporating an advection component as part of the energy usable for crop evapotranspiration (ET). The modification involved the estimation of advected en...

  4. Systems Design and Pilot Operation of a Regional Center for Technical Processing for the Libraries of the New England State Universities. NELINET, New England Library Information Network. Progress Report, July 1, 1967 - March 30, 1968, Volume II, Appendices.

    ERIC Educational Resources Information Center

    Agenbroad, James E.; And Others

    Included in this volume of appendices to LI 000 979 are acquisitions flow charts; a current operations questionnaire; an algorithm for splitting the Library of Congress call number; analysis of the Machine-Readable Cataloging (MARC II) format; production problems and decisions; operating procedures for information transmittal in the New England…

  5. Quantized Overcomplete Expansions: Analysis, Synthesis and Algorithms

    DTIC Science & Technology

    1995-07-01

    would be in the spirit of the Lempel-Ziv algorithm. The decoder would have to be aware of changes in the dictionary, but depending on the nature of the... [contents fragment: 3.4 A General Vector Compression Algorithm Based on Frames; 3.4.1 Design Considerations] ...(§3.3). Along with exploring general properties of matching pursuit, we are interested in its application to compressing data vectors in R^N. A general

  6. On the accuracy of stratospheric aerosol extinction derived from in situ size distribution measurements and surface area density derived from remote SAGE II and HALOE extinction measurements

    DOE PAGES

    Kovilakam, Mahesh; Deshler, Terry

    2015-08-26

    In situ stratospheric aerosol measurements, from University of Wyoming optical particle counters (OPCs), are compared with Stratospheric Aerosol Gas Experiment (SAGE) II (versions 6.2 and 7.0) and Halogen Occultation Experiment (HALOE) satellite measurements to investigate differences between SAGE II/HALOE-measured extinction and derived surface area and OPC-derived extinction and surface area. Coincident OPC and SAGE II measurements are compared for a volcanic (1991-1996) and nonvolcanic (1997-2005) period. OPC calculated extinctions agree with SAGE II measurements, within instrumental uncertainty, during the volcanic period, but have been a factor of 2 low during the nonvolcanic period. Three systematic errors associated with the OPC measurements, anisokineticity, inlet particle evaporation, and counting efficiency, were investigated. An overestimation of the OPC counting efficiency is found to be the major source of systematic error. With this correction OPC calculated extinction increases by 15-30% (30-50%) for the volcanic (nonvolcanic) measurements. These changes significantly improve the comparison with SAGE II and HALOE extinctions in the nonvolcanic cases but slightly degrade the agreement in the volcanic period. These corrections have impacts on OPC-derived surface area density, exacerbating the poor agreement between OPC and SAGE II (version 6.2) surface areas. Furthermore, this disparity is reconciled with SAGE II version 7.0 surface areas. For both the volcanic and nonvolcanic cases these changes in OPC counting efficiency and in the operational SAGE II surface area algorithm leave the derived surface areas from both platforms in significantly better agreement and within the ± 40% precision of the OPC moment calculations.
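
    The forward step being compared above — computing extinction from an in situ size distribution — is the integral β = ∫ Q_ext(r) π r² n(r) dr. The sketch below evaluates it for an illustrative lognormal distribution, with a constant Q_ext = 2 (large-particle limit) standing in for the full Mie kernel; the distribution parameters are placeholders, not Wyoming OPC fits.

```python
# Sketch: aerosol extinction coefficient from a lognormal size distribution.
import numpy as np

r = np.logspace(-2, 1, 500)                      # radius grid, micrometers
N0, r_m, sigma = 10.0, 0.08, 1.8                 # cm^-3, um, unitless
n_r = (N0 / (np.sqrt(2 * np.pi) * np.log(sigma) * r)
       * np.exp(-np.log(r / r_m) ** 2 / (2 * np.log(sigma) ** 2)))

Q_ext = 2.0                                      # geometric-optics stand-in
dr = np.gradient(r)
beta = np.sum(Q_ext * np.pi * r**2 * n_r * dr)   # um^2 cm^-3 = 1e-8 cm^-1
print(f"extinction ~ {beta * 1e-8:.2e} cm^-1")
```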

  7. On the accuracy of stratospheric aerosol extinction derived from in situ size distribution measurements and surface area density derived from remote SAGE II and HALOE extinction measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kovilakam, Mahesh; Deshler, Terry

    In situ stratospheric aerosol measurements, from University of Wyoming optical particle counters (OPCs), are compared with Stratospheric Aerosol Gas Experiment (SAGE) II (versions 6.2 and 7.0) and Halogen Occultation Experiment (HALOE) satellite measurements to investigate differences between SAGE II/HALOE-measured extinction and derived surface area and OPC-derived extinction and surface area. Coincident OPC and SAGE II measurements are compared for a volcanic (1991-1996) and nonvolcanic (1997-2005) period. OPC calculated extinctions agree with SAGE II measurements, within instrumental uncertainty, during the volcanic period, but have been a factor of 2 low during the nonvolcanic period. Three systematic errors associated with the OPC measurements, anisokineticity, inlet particle evaporation, and counting efficiency, were investigated. An overestimation of the OPC counting efficiency is found to be the major source of systematic error. With this correction OPC calculated extinction increases by 15-30% (30-50%) for the volcanic (nonvolcanic) measurements. These changes significantly improve the comparison with SAGE II and HALOE extinctions in the nonvolcanic cases but slightly degrade the agreement in the volcanic period. These corrections have impacts on OPC-derived surface area density, exacerbating the poor agreement between OPC and SAGE II (version 6.2) surface areas. Furthermore, this disparity is reconciled with SAGE II version 7.0 surface areas. For both the volcanic and nonvolcanic cases these changes in OPC counting efficiency and in the operational SAGE II surface area algorithm leave the derived surface areas from both platforms in significantly better agreement and within the ± 40% precision of the OPC moment calculations.

  8. Measurement of the Inclusive Jet Cross Section using the k(T) algorithm in p anti-p collisions at s**(1/2) = 1.96-TeV with the CDF II Detector

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abulencia, A.; /Illinois U., Urbana; Adelman, J.

    2007-01-01

    The authors report on measurements of the inclusive jet production cross section as a function of the jet transverse momentum in p anti-p collisions at √s = 1.96 TeV, using the k_T algorithm and a data sample corresponding to 1.0 fb^-1 collected with the Collider Detector at Fermilab in Run II. The measurements are carried out in five different jet rapidity regions with |y^jet| < 2.1 and transverse momentum in the range 54 < p_T^jet < 700 GeV/c. Next-to-leading order perturbative QCD predictions are in good agreement with the measured cross sections.
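
    The clustering rule behind the measurement above can be sketched in a few lines: inclusive kT uses d_ij = min(kT_i², kT_j²) ΔR²_ij / D² between particles and d_iB = kT_i² to the beam, repeatedly merging the pair with the smallest d_ij, or promoting a particle to a final jet when a d_iB is smallest. The toy below ignores φ wrap-around and uses a simple pT-weighted recombination, so it is illustrative rather than a production-grade implementation.

```python
# Sketch: inclusive kT jet clustering on toy (pT, rapidity, phi) particles.
import numpy as np

def kt_cluster(pt, y, phi, D=0.7):
    parts = [(float(a), float(b), float(c)) for a, b, c in zip(pt, y, phi)]
    jets = []
    while parts:
        dmin, merge, beam = None, None, None
        for i, (pti, yi, phii) in enumerate(parts):
            if dmin is None or pti**2 < dmin:          # beam distance d_iB
                dmin, merge, beam = pti**2, None, i
            for j in range(i + 1, len(parts)):
                ptj, yj, phij = parts[j]
                dR2 = (yi - yj) ** 2 + (phii - phij) ** 2  # no phi wrap-around
                dij = min(pti, ptj) ** 2 * dR2 / D**2
                if dij < dmin:
                    dmin, merge, beam = dij, (i, j), None
        if merge is None:
            jets.append(parts.pop(beam))               # promote to final jet
        else:                                          # pT-weighted recombination
            i, j = merge
            (pti, yi, phii), (ptj, yj, phij) = parts[i], parts[j]
            new = (pti + ptj,
                   (pti * yi + ptj * yj) / (pti + ptj),
                   (pti * phii + ptj * phij) / (pti + ptj))
            parts = [p for k, p in enumerate(parts) if k not in (i, j)] + [new]
    return jets

rng = np.random.default_rng(8)
jets = kt_cluster(rng.uniform(5, 60, 6), rng.normal(0, 1, 6), rng.uniform(0, 2, 6))
print(f"{len(jets)} jets, leading pT = {max(j[0] for j in jets):.1f}")
```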

  9. Viewing-zone control of integral imaging display using a directional projection and elemental image resizing method.

    PubMed

    Alam, Md Ashraful; Piao, Mei-Lan; Bang, Le Thanh; Kim, Nam

    2013-10-01

    Viewing-zone control of integral imaging (II) displays using a directional projection and elemental image (EI) resizing method is proposed. Directional projection of EIs with the same size as the microlens pitch causes an EI mismatch at the EI plane. In this method, EIs are generated computationally using a newly introduced algorithm, the directional elemental image generation and resizing algorithm, which considers the directional projection geometry of each pixel and incorporates an EI resizing step to prevent the EI mismatch. Generated EIs are projected as a collimated projection beam with a predefined directional angle, either horizontally or vertically. The proposed II display system allows reconstruction of a 3D image within a predefined viewing zone that is determined by the directional projection angle.

  10. Intelligent Use of CFAR Algorithms

    DTIC Science & Technology

    1993-05-01

    the reference windows can raise the threshold too high in many CFAR algorithms and result in masking of targets. GCMLD is a modification of CMLD that... [report-cover fragment: AD-A267 755, RL-TR-93-75, Interim Report, May 1993, "Intelligent Use of CFAR Algorithms," Kaman Sciences Corporation, P. Antonik et al., covering Jan 92 - Sep 92, contract F30602-91-C-0017]

  11. Electrons and photons at High Level Trigger in CMS for Run II

    NASA Astrophysics Data System (ADS)

    Anuar, Afiq A.

    2015-12-01

    The CMS experiment has been designed with a 2-level trigger system. The first level is implemented using custom-designed electronics. The second level is the so-called High Level Trigger (HLT), a streamlined version of the CMS offline reconstruction software running on a computer farm. For Run II of the Large Hadron Collider, the increase in center-of-mass energy and luminosity will raise the event rate to a level challenging for the HLT algorithms. New approaches have been studied to keep the HLT output rate manageable while maintaining thresholds low enough to cover physics analyses. The strategy mainly relies on porting online the ingredients that have been successfully applied in the offline reconstruction, thus allowing the HLT selection to move closer to the offline cuts. Improvements in the HLT electron and photon definitions are presented, focusing in particular on the updated clustering algorithm and energy calibration procedure, the new Particle-Flow-based isolation approach and pileup mitigation techniques, and the electron-dedicated track fitting algorithm based on a Gaussian Sum Filter.

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Von Dreele, Robert

    One of the goals in developing GSAS-II was to expand from the capabilities of the original General Structure Analysis System (GSAS), which largely encompassed just structure refinement and post-refinement analysis. GSAS-II has been written almost entirely in Python loaded with graphics, GUI and mathematical packages (matplotlib, pyOpenGL, wxpython, numpy and scipy). Thus, GSAS-II has a fully developed modern GUI as well as extensive graphical display of data and results. However, the structure and operation of Python has required new approaches to many of the algorithms used in crystal structure analysis. The extensions beyond GSAS include image calibration/integration as well as peak fitting and unit cell indexing for powder data, which are precursors for structure solution. Structure solution within GSAS-II begins with either Pawley or LeBail extracted structure factors from powder data or those measured in a single crystal experiment. Both charge flipping and Monte Carlo-simulated annealing techniques are available; the former can be applied to (3+1) incommensurate structures as well as conventional 3D structures.
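
    Charge flipping, mentioned above, is simple enough to sketch in a few lines of numpy: starting from random phases, the density is repeatedly computed, values below a small threshold are sign-flipped, and the observed amplitudes are re-imposed while the new phases are kept. The toy below works in one dimension and ignores practical details (F(0) handling, symmetry, normalization); it illustrates the iteration, not GSAS-II's implementation.

    ```python
    import numpy as np

    def charge_flip(f_obs, delta=0.1, n_iter=200, seed=0):
        """Toy 1-D charge flipping. f_obs: observed amplitudes on the full FFT
        grid. Returns a real-space density estimate."""
        rng = np.random.default_rng(seed)
        F = f_obs * np.exp(2j * np.pi * rng.random(f_obs.shape))  # random phases
        for _ in range(n_iter):
            rho = np.fft.ifft(F).real                 # density from current phases
            rho = np.where(rho < delta, -rho, rho)    # flip weak/negative density
            G = np.fft.fft(rho)
            F = f_obs * np.exp(1j * np.angle(G))      # keep phases, force |F_obs|
        return np.fft.ifft(F).real
    ```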

  13. Angular Superresolution for a Scanning Antenna with Simulated Complex Scatterer-Type Targets

    DTIC Science & Technology

    2002-05-01

    Approved for public release; distribution unlimited. The Scan-MUSIC (MUltiple SIgnal Classification), or SMUSIC, algorithm was developed by the Millimeter... with the use of a single rotatable sensor scanning in an angular region of interest. This algorithm has been adapted and extended from the MUSIC... simulation.

  14. New techniques for modeling the reliability of reactor pressure vessels

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, K.I.; Simonen, F.A.; Liebetrau, A.M.

    1985-12-01

    In recent years several probabilistic fracture mechanics codes, including the VISA code, have been developed to predict the reliability of reactor pressure vessels. This paper describes new modeling techniques used in a second generation of the VISA code entitled VISA-II. Results are presented that show the sensitivity of vessel reliability predictions to such factors as inservice inspection to detect flaws, random positioning of flaws within the vessel wall thickness, and fluence distributions that vary throughout the vessel. The algorithms used to implement these modeling techniques are also described. Other new options in VISA-II are also described in this paper. The effect of vessel cladding has been included in the heat transfer, stress, and fracture mechanics solutions in VISA-II. The algorithm for simulating flaws has been changed to consider an entire vessel rather than a single flaw in a single weld. The flaw distribution was changed to include the distribution of both flaw depth and length. A menu of several alternate equations has been included to predict the shift in RTNDT. For flaws that arrest and later re-initiate, an option was also included to allow correlating the current arrest toughness with subsequent initiation toughnesses. 21 refs.

  15. Can Depression be Diagnosed by Response to Mother's Face? A Personalized Attachment-Based Paradigm for Diagnostic fMRI

    PubMed Central

    Zhang, Xian; Yaseen, Zimri S.; Galynker, Igor I.; Hirsch, Joy; Winston, Arnold

    2011-01-01

    Objective Objective measurement of depression remains elusive. Depression has been associated with insecure attachment, and both have been associated with changes in brain reactivity in response to viewing standard emotional and neutral faces. In this study, we developed a method to calculate predicted scores for the Beck Depression Inventory II (BDI-II) using personalized stimuli: fMRI imaging of subjects viewing pictures of their own mothers. Methods 28 female subjects ages 18–30 (14 healthy controls and 14 unipolar depressed diagnosed by MINI psychiatric interview) were scored on the Beck Depression Inventory II (BDI-II) and the Adult Attachment Interview (AAI) coherence of mind scale of global attachment security. Subjects viewed pictures of Mother (M), Friend (F) and Stranger (S), during functional magnetic resonance imaging (fMRI). Using a principal component regression method (PCR), a predicted Beck Depression Inventory II (BDI-II) score was obtained from activity patterns in the paracingulate gyrus (Brodmann area 32) and compared to clinical diagnosis and the measured BDI-II score. The same procedure was performed for AAI coherence of mind scores. Results Activity patterns in BA-32 identified depressed subjects. The categorical agreement between the derived BDI-II score (using the standard clinical cut-score of 14 on the BDI-II) and depression diagnosis by MINI psychiatric interview was 89%, with sensitivity 85.7% and specificity 92.8%. Predicted and measured BDI-II scores had a correlation of 0.55. Prediction of attachment security was not statistically significant. Conclusions Brain activity in response to viewing one's mother may be diagnostic of depression. Functional magnetic resonance imaging using personalized paradigms has the potential to provide objective assessments, even when behavioral measures are not informative. Further, fMRI based diagnostic algorithms may enhance our understanding of the neural mechanisms of depression by identifying distinctive neural features of the illness. PMID:22180777

  16. MHC2NNZ: A novel peptide binding prediction approach for HLA DQ molecules

    NASA Astrophysics Data System (ADS)

    Xie, Jiang; Zeng, Xu; Lu, Dongfang; Liu, Zhixiang; Wang, Jiao

    2017-07-01

    The major histocompatibility complex class II (MHC-II) molecule plays a crucial role in immunology. Computational prediction of MHC-II binding peptides can help researchers understand the mechanism of immune systems and design vaccines. Most of the prediction algorithms for MHC-II to date have focused on human leukocyte antigen (HLA, the name of the MHC in humans) molecules encoded in the DR locus. However, HLA DQ molecules are equally important and have seen less progress because they are more difficult to handle experimentally. In this study, we propose an artificial neural network-based approach called MHC2NNZ to predict peptides binding to HLA DQ molecules. Unlike previous artificial neural network-based methods, MHC2NNZ not only considers sequence similarity features but also captures chemical and physical properties, and a novel method incorporating these properties is proposed to represent the peptide flanking regions (PFR). Furthermore, MHC2NNZ improves prediction accuracy by combining these features with amino acid preferences at specific positions of the peptide binding core. By evaluation on 3549 peptides binding to the six most frequent HLA DQ molecules, MHC2NNZ is demonstrated to outperform other state-of-the-art MHC-II prediction methods.

  17. EUPDF-II: An Eulerian Joint Scalar Monte Carlo PDF Module : User's Manual

    NASA Technical Reports Server (NTRS)

    Raju, M. S.; Liu, Nan-Suey (Technical Monitor)

    2004-01-01

    EUPDF-II provides the solution for the species and temperature fields based on an evolution equation for PDF (Probability Density Function) and it is developed mainly for application with sprays, combustion, parallel computing, and unstructured grids. It is designed to be massively parallel and could easily be coupled with any existing gas-phase CFD and spray solvers. The solver accommodates the use of an unstructured mesh with mixed elements of either triangular, quadrilateral, and/or tetrahedral type. The manual provides the user with an understanding of the various models involved in the PDF formulation, its code structure and solution algorithm, and various other issues related to parallelization and its coupling with other solvers. The source code of EUPDF-II will be available with National Combustion Code (NCC) as a complete package.

  18. VizieR Online Data Catalog: Catalog of strong MgII absorbers (Lawther+, 2012)

    NASA Astrophysics Data System (ADS)

    Lawther, D.; Paarup, T.; Schmidt, M.; Vestergaard, M.; Hjorth, J.; Malesani, D.

    2012-08-01

    Here we present a catalog of strong (rest equivalent width Wr above a fixed threshold) intervening Mg II absorbers in the SDSS Data Release 7 quasar catalog (2010AJ....139.2360S, Cat. VII/260). The intervening absorbers were found by a semi-automatic algorithm written in IDL; for details of the algorithm see section 2 of our paper. A subset of the absorbers have been visually inspected; see the MAN_OK flag in the catalog. The number of sightlines searched, tabulated by absorber redshift, i.e. g(z), is available as an ASCII table (for S/N > 8 and S/N > 15). All analysis in our paper is based on the S/N > 8 coverage, and considers only sight-lines towards non-BAL quasars. Any questions regarding the catalog should be sent to Daniel Lawther (unclellama(at)gmail.com). (3 data files).

  19. Optic cup segmentation: type-II fuzzy thresholding approach and blood vessel extraction

    PubMed Central

    Almazroa, Ahmed; Alodhayb, Sami; Raahemifar, Kaamran; Lakshminarayanan, Vasudevan

    2017-01-01

    We introduce here a new technique for segmenting the optic cup using two-dimensional fundus images. Cup segmentation is the most challenging part of image processing of the optic nerve head due to the complexity of its structure, and using the blood vessels to segment the cup is important. Here, we report on blood vessel extraction using first a top-hat transform and Otsu's segmentation function to detect the curves in the blood vessels (kinks) which indicate the cup boundary. This was followed by an interval type-II fuzzy entropy procedure. Finally, the Hough transform was applied to approximate the cup boundary. The algorithm was evaluated on 550 fundus images from a large dataset, which contained three different sets of images, where the cup was manually marked by six ophthalmologists. First, the accuracy of the algorithm was tested on the three image sets independently; the final cup detection accuracy in terms of area and centroid was calculated to be 78.2% over 441 images. Finally, we compared the algorithm's performance with the manual markings of the six ophthalmologists. Agreement was determined between the ophthalmologists as well as with the algorithm. The best agreement was between ophthalmologists one, two and five in 398 of 550 images, while the algorithm agreed with them in 356 images. PMID:28515636
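
    The steps described above map naturally onto standard OpenCV primitives. The sketch below chains a top-hat transform, Otsu thresholding, and a circular Hough transform; the file name, kernel size, and Hough parameters are hypothetical, and the interval type-II fuzzy entropy stage is omitted, so this is a skeleton of the pipeline rather than the authors' method.

    ```python
    import cv2
    import numpy as np

    img = cv2.imread("fundus.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input image
    assert img is not None, "image not found"

    # 1) Top-hat transform emphasizes thin bright structures such as vessels.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (15, 15))
    tophat = cv2.morphologyEx(img, cv2.MORPH_TOPHAT, kernel)

    # 2) Otsu's method binarizes the vessel map automatically.
    _, vessels = cv2.threshold(tophat, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # 3) A circular Hough transform approximates the roughly circular cup boundary.
    circles = cv2.HoughCircles(vessels, cv2.HOUGH_GRADIENT, dp=2, minDist=200,
                               param1=100, param2=20, minRadius=30, maxRadius=120)
    if circles is not None:
        x, y, r = np.round(circles[0, 0]).astype(int)
        print(f"cup candidate: centre=({x}, {y}), radius={r}")
    ```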

  20. Optic cup segmentation: type-II fuzzy thresholding approach and blood vessel extraction.

    PubMed

    Almazroa, Ahmed; Alodhayb, Sami; Raahemifar, Kaamran; Lakshminarayanan, Vasudevan

    2017-01-01

    We introduce here a new technique for segmenting the optic cup using two-dimensional fundus images. Cup segmentation is the most challenging part of image processing of the optic nerve head due to the complexity of its structure, and using the blood vessels to segment the cup is important. Here, we report on blood vessel extraction using first a top-hat transform and Otsu's segmentation function to detect the curves in the blood vessels (kinks) which indicate the cup boundary. This was followed by an interval type-II fuzzy entropy procedure. Finally, the Hough transform was applied to approximate the cup boundary. The algorithm was evaluated on 550 fundus images from a large dataset, which contained three different sets of images, where the cup was manually marked by six ophthalmologists. First, the accuracy of the algorithm was tested on the three image sets independently; the final cup detection accuracy in terms of area and centroid was calculated to be 78.2% over 441 images. Finally, we compared the algorithm's performance with the manual markings of the six ophthalmologists. Agreement was determined between the ophthalmologists as well as with the algorithm. The best agreement was between ophthalmologists one, two and five in 398 of 550 images, while the algorithm agreed with them in 356 images.

  1. The Design and Implementation of a Read Prediction Buffer

    DTIC Science & Technology

    1992-12-01


  2. Predicting patchy particle crystals: variable box shape simulations and evolutionary algorithms.

    PubMed

    Bianchi, Emanuela; Doppelbauer, Günther; Filion, Laura; Dijkstra, Marjolein; Kahl, Gerhard

    2012-06-07

    We consider several patchy particle models that have been proposed in literature and we investigate their candidate crystal structures in a systematic way. We compare two different algorithms for predicting crystal structures: (i) an approach based on Monte Carlo simulations in the isobaric-isothermal ensemble and (ii) an optimization technique based on ideas of evolutionary algorithms. We show that the two methods are equally successful and provide consistent results on crystalline phases of patchy particle systems.

  3. National dosimetric audit network finds discrepancies in AAA lung inhomogeneity corrections.

    PubMed

    Dunn, Leon; Lehmann, Joerg; Lye, Jessica; Kenny, John; Kron, Tomas; Alves, Andrew; Cole, Andrew; Zifodya, Jackson; Williams, Ivan

    2015-07-01

    This work presents the Australian Clinical Dosimetry Service's (ACDS) findings of an investigation of systematic discrepancies between treatment planning system (TPS) calculated and measured audit doses. Specifically, a comparison between the Anisotropic Analytic Algorithm (AAA) and other common dose-calculation algorithms in regions downstream (≥ 2 cm) from low-density material in anthropomorphic and slab phantom geometries is presented. Two measurement setups involving rectilinear slab phantoms (ACDS Level II audit) and anthropomorphic geometries (ACDS Level III audit) were used in conjunction with ion chamber (planar 2D array and Farmer-type) measurements. Measured doses were compared to calculated doses for a variety of cases, with and without the presence of inhomogeneities and beam modifiers, in 71 audits. Results demonstrate a systematic AAA underdose, with an average discrepancy of 2.9 ± 1.2%, when the AAA algorithm is used in regions distal from lung-tissue interfaces and lateral beams are used with anthropomorphic phantoms. This systematic discrepancy was found for all Level III audits of facilities using the AAA algorithm. The discrepancy is not seen when identical measurements are compared for other common dose-calculation algorithms (average discrepancy -0.4 ± 1.7%), including the Acuros XB algorithm also available with the Eclipse TPS. For slab phantom geometries (Level II audits), with similar measurement points downstream from inhomogeneities, this discrepancy is also not seen.

  4. Diagnostic Criteria for Temporomandibular Disorders (DC/TMD) for Clinical and Research Applications: Recommendations of the International RDC/TMD Consortium Network* and Orofacial Pain Special Interest Group†

    PubMed Central

    Schiffman, Eric; Ohrbach, Richard; Truelove, Edmond; Look, John; Anderson, Gary; Goulet, Jean-Paul; List, Thomas; Svensson, Peter; Gonzalez, Yoly; Lobbezoo, Frank; Michelotti, Ambra; Brooks, Sharon L.; Ceusters, Werner; Drangsholt, Mark; Ettlin, Dominik; Gaul, Charly; Goldberg, Louis J.; Haythornthwaite, Jennifer A.; Hollender, Lars; Jensen, Rigmor; John, Mike T.; De Laat, Antoon; de Leeuw, Reny; Maixner, William; van der Meulen, Marylee; Murray, Greg M.; Nixdorf, Donald R.; Palla, Sandro; Petersson, Arne; Pionchon, Paul; Smith, Barry; Visscher, Corine M.; Zakrzewska, Joanna; Dworkin, Samuel F.

    2015-01-01

    Aims The original Research Diagnostic Criteria for Temporomandibular Disorders (RDC/TMD) Axis I diagnostic algorithms have been demonstrated to be reliable. However, the Validation Project determined that the RDC/TMD Axis I validity was below the target sensitivity of ≥ 0.70 and specificity of ≥ 0.95. Consequently, these empirical results supported the development of revised RDC/TMD Axis I diagnostic algorithms that were subsequently demonstrated to be valid for the most common pain-related TMD and for one temporomandibular joint (TMJ) intra-articular disorder. The original RDC/TMD Axis II instruments were shown to be both reliable and valid. Working from these findings and revisions, two international consensus workshops were convened, from which recommendations were obtained for the finalization of new Axis I diagnostic algorithms and new Axis II instruments. Methods Through a series of workshops and symposia, a panel of clinical and basic science pain experts modified the revised RDC/TMD Axis I algorithms by using comprehensive searches of published TMD diagnostic literature followed by review and consensus via a formal structured process. The panel's recommendations for further revision of the Axis I diagnostic algorithms were assessed for validity by using the Validation Project's data set, and for reliability by using newly collected data from the ongoing TMJ Impact Project, the follow-up study to the Validation Project. New Axis II instruments were identified through a comprehensive search of the literature providing valid instruments that, relative to the RDC/TMD, are shorter in length, are available in the public domain, and currently are being used in medical settings. Results The newly recommended Diagnostic Criteria for TMD (DC/TMD) Axis I protocol includes both a valid screener for detecting any pain-related TMD as well as valid diagnostic criteria for differentiating the most common pain-related TMD (sensitivity ≥ 0.86, specificity ≥ 0.98) and for one intra-articular disorder (sensitivity of 0.80 and specificity of 0.97). Diagnostic criteria for other common intra-articular disorders lack adequate validity for clinical diagnoses but can be used for screening purposes. Inter-examiner reliability for the clinical assessment associated with the validated DC/TMD criteria for pain-related TMD is excellent (kappa ≥ 0.85). Finally, a comprehensive classification system that includes both the common and less common TMD is also presented. The Axis II protocol retains selected original RDC/TMD screening instruments augmented with new instruments to assess jaw function as well as behavioral and additional psychosocial factors. The Axis II protocol is divided into screening and comprehensive self-report instrument sets. The screening instruments' 41 questions assess pain intensity, pain-related disability, psychological distress, jaw functional limitations, and parafunctional behaviors, and a pain drawing is used to assess locations of pain. The comprehensive instruments, composed of 81 questions, assess in further detail jaw functional limitations and psychological distress as well as additional constructs of anxiety and presence of comorbid pain conditions. Conclusion The recommended evidence-based new DC/TMD protocol is appropriate for use in both clinical and research settings. More comprehensive instruments augment short and simple screening instruments for Axis I and Axis II. These validated instruments allow for identification of patients with a range of simple to complex TMD presentations. PMID:24482784

  5. Measurements of reduced corkscrew motion on the ETA-II linear induction accelerator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Allen, S.L.; Brand, H.R.; Chambers, F.W.

    1991-05-01

    The ETA-II linear induction accelerator is used to drive a microwave free electron laser (FEL). Corkscrew motion, which previously limited performance, has been reduced by: (1) an improved pulse distribution system which reduces energy sweep, (2) improved magnetic alignment achieved with a stretched wire alignment technique (SWAT) and (3) a unique magnetic tuning algorithm. Experiments have been carried out on a 20-cell version of ETA-II operating at 1500 A and 2.7 MeV. The measured transverse beam motion is less than 0.5 mm for 40 ns of the pulse, an improvement of a factor of 2 to 3 over previous results. Details of the computerized tuning procedure, estimates of the corkscrew phase, and relevance of these results to future FEL experiments are presented. 11 refs.

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Deka, Deepjyoti; Backhaus, Scott N.; Chertkov, Michael

    Traditionally, power distribution networks are either not observable or only partially observable. This complicates the development and implementation of new smart grid technologies, such as those related to demand response, outage detection and management, and improved load monitoring. In this two-part paper, inspired by the proliferation of metering technology, we discuss estimation problems in structurally loopy but operationally radial distribution grids from measurements, e.g. voltage data, which are either already available or can be made available with a relatively minor investment. In Part I, the objective is to learn the operational layout of the grid. Part II of this paper presents algorithms that estimate load statistics or line parameters in addition to learning the grid structure. Further, Part II discusses the problem of structure estimation for systems with incomplete measurement sets. Our newly suggested algorithms apply to a wide range of realistic scenarios. The algorithms are also computationally efficient (polynomial in time), which is proven theoretically and illustrated computationally on a number of test cases. The technique developed can be applied to detect line failures in real time as well as to understand the scope of possible adversarial attacks on the grid.

  7. Reconciliation of Gene and Species Trees

    PubMed Central

    Rusin, L. Y.; Lyubetskaya, E. V.; Gorbunov, K. Y.; Lyubetsky, V. A.

    2014-01-01

    The first part of the paper briefly overviews the problem of gene and species trees reconciliation with the focus on defining and algorithmic construction of the evolutionary scenario. Basic ideas are discussed for the aspects of mapping definitions, costs of the mapping and evolutionary scenario, imposing time scales on a scenario, incorporating horizontal gene transfers, binarization and reconciliation of polytomous trees, and construction of species trees and scenarios. The review does not intend to cover the vast diversity of literature published on these subjects. Instead, the authors strived to overview the problem of the evolutionary scenario as a central concept in many areas of evolutionary research. The second part provides detailed mathematical proofs for the solutions of two problems: (i) inferring a gene evolution along a species tree accounting for various types of evolutionary events and (ii) trees reconciliation into a single species tree when only gene duplications and losses are allowed. All proposed algorithms have a cubic time complexity and are mathematically proved to find exact solutions. Solving algorithms for problem (ii) can be naturally extended to incorporate horizontal transfers, other evolutionary events, and time scales on the species tree. PMID:24800245
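
    The core of duplication-loss reconciliation, mapping each gene-tree node to the least common ancestor (LCA) of its leaf species, is compact enough to sketch. The toy below counts duplication nodes only (losses are omitted), represents trees as nested tuples, and assumes a hypothetical 'species_copy' naming convention for gene-tree leaves; it illustrates LCA mapping, not the authors' cubic-time algorithms.

    ```python
    # Trees as nested tuples (left, right); leaves are name strings.
    def clades(tree, acc=None):
        """Return (leaf set of tree, list of all clades as frozensets)."""
        if acc is None:
            acc = []
        if isinstance(tree, str):
            c = frozenset([tree])
        else:
            l, _ = clades(tree[0], acc)
            r, _ = clades(tree[1], acc)
            c = l | r
        acc.append(c)
        return c, acc

    def lca(all_clades, leaf_set):
        """Smallest species clade containing the given leaves."""
        return min((c for c in all_clades if leaf_set <= c), key=len)

    def count_duplications(gene_tree, species_tree):
        _, sp = clades(species_tree)
        dups = 0
        def walk(t):
            nonlocal dups
            if isinstance(t, str):                 # gene leaf 'species_copy'
                return frozenset([t.split('_')[0]])
            ml, mr = walk(t[0]), walk(t[1])
            m = lca(sp, ml | mr)
            if m == ml or m == mr:                 # maps onto a child's clade
                dups += 1
            return m
        walk(gene_tree)
        return dups

    S = (('A', 'B'), 'C')
    G = ((('A_1', 'B_1'), ('A_2', 'B_2')), 'C_1')
    print(count_duplications(G, S))  # -> 1 (one duplication above the A/B copies)
    ```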

  8. Providing reliable route guidance : phase II.

    DOT National Transportation Integrated Search

    2010-12-20

    The overarching goal of the project is to enhance travel reliability of highway users by providing : them with reliable route guidance produced from newly developed routing algorithms that : are validated and implemented with real traffic data. To th...

  9. 40 CFR 86.1809-12 - Prohibition of defeat devices.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... manufacturer must provide an explanation containing detailed information regarding test programs, engineering evaluations, design specifications, calibrations, on-board computer algorithms, and design strategies..., with the Part II certification application, an engineering evaluation demonstrating to the satisfaction...

  10. A simple and effective figure caption detection system for old-style documents

    NASA Astrophysics Data System (ADS)

    Liu, Zongyi; Zhou, Hanning

    2011-01-01

    Identifying figure captions has wide applications in producing high-quality e-books such as Kindle or iPad books. In this paper, we present a rule-based system to detect horizontal figure captions in old-style documents. Our algorithm consists of three steps: (i) segment images into regions of different types such as text and figures, (ii) search for the best caption region candidate based on heuristic rules such as region alignment and distance, and (iii) expand the caption regions identified in step (ii) with their neighboring text regions in order to correct oversegmentation errors; a sketch of the step (ii) heuristics appears below. We test our algorithm using 81 images collected from old-style books, with each image containing at least one figure area. We show that the approach correctly detects figure captions from images with different layouts, and we also measure its performance in terms of both precision and recall.
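
    As a rough illustration of the kind of alignment and distance rules step (ii) might use, the following toy scores text regions as caption candidates for each figure. The thresholds and region representation are hypothetical, and the step (iii) merge is omitted.

    ```python
    # Region = {'type': 'text'|'figure', 'box': (x0, y0, x1, y1)}, y grows downward.
    def h_overlap(a, b):
        """Horizontal overlap of two boxes, relative to the narrower one."""
        w = min(a[2], b[2]) - max(a[0], b[0])
        return max(w, 0) / min(a[2] - a[0], b[2] - b[0])

    def find_captions(regions, max_gap=40, min_overlap=0.5):
        """For each figure, pick the aligned text region closest below it."""
        captions = {}
        figures = [r for r in regions if r['type'] == 'figure']
        texts = [r for r in regions if r['type'] == 'text']
        for i, fig in enumerate(figures):
            fb = fig['box']
            cands = [t for t in texts
                     if 0 <= t['box'][1] - fb[3] <= max_gap        # just below
                     and h_overlap(fb, t['box']) >= min_overlap]   # aligned
            if cands:
                captions[i] = min(cands, key=lambda t: t['box'][1] - fb[3])
        return captions

    regions = [{'type': 'figure', 'box': (100, 100, 400, 300)},
               {'type': 'text', 'box': (120, 310, 380, 330)},   # likely caption
               {'type': 'text', 'box': (100, 500, 400, 700)}]   # body text
    print(find_captions(regions))
    ```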

  11. Tunable, Flexible and Efficient Optimization of Control Pulses for Superconducting Qubits, part II - Applications

    NASA Astrophysics Data System (ADS)

    Assémat, Elie; Machnes, Shai; Tannor, David; Wilhelm-Mauch, Frank

    In part I, we presented the theoretical foundations of the GOAT algorithm for the optimal control of quantum systems. Here in part II, we focus on several applications of GOAT to superconducting qubit architectures. First, we consider a controlled-Z gate on Xmon qubits with an Erf parametrization of the optimal pulse. We show that a fast and accurate gate can be obtained with only 16 parameters, as compared to the hundreds of parameters required by other algorithms. We present numerical evidence that such a parametrization should allow an efficient in-situ calibration of the pulse. Next, we consider the flux-tunable coupler by IBM. We show that the optimization can be carried out in a more realistic model of the system than was employed in the original study, which is expected to further simplify the calibration process. Moreover, GOAT reduced the complexity of the optimal pulse to only 6 Fourier components, composed with analytic wrappers.

  12. Scalable Faceted Ranking in Tagging Systems

    NASA Astrophysics Data System (ADS)

    Orlicki, José I.; Alvarez-Hamelin, J. Ignacio; Fierens, Pablo I.

    Nowadays, web collaborative tagging systems, which allow users to upload, comment on and recommend content, are growing. Such systems can be represented as graphs where nodes correspond to users and tagged links to recommendations. In this paper we analyze the problem of computing a ranking of users with respect to a facet described as a set of tags. A straightforward solution is to compute a PageRank-like algorithm on a facet-related graph, but it is not feasible for online computation. We propose an alternative: (i) a ranking for each tag is computed offline on the basis of tag-related subgraphs; (ii) a faceted order is generated online by merging the rankings corresponding to all the tags in the facet. Based on the graph analysis of YouTube and Flickr, we show that step (i) is scalable. We also present efficient algorithms for step (ii), which are evaluated by comparing their results with two gold standards.
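
    A minimal sketch of the two-step scheme follows, with hypothetical per-tag score tables standing in for the offline PageRank-like runs; the online step here merges by simple summation, one of several reasonable merge rules.

    ```python
    # Offline: one score table per tag (stand-in for a PageRank-like run on the
    # tag-specific subgraph). Online: merge the per-tag rankings for a facet.
    from collections import defaultdict

    tag_scores = {   # hypothetical offline results: tag -> {user: score}
        'jazz':  {'alice': 0.50, 'bob': 0.30, 'carol': 0.20},
        'piano': {'alice': 0.20, 'bob': 0.60, 'dave': 0.20},
    }

    def faceted_ranking(facet):
        """Merge per-tag scores by summation; any monotone merge rule would do."""
        merged = defaultdict(float)
        for tag in facet:
            for user, s in tag_scores.get(tag, {}).items():
                merged[user] += s
        return sorted(merged.items(), key=lambda kv: kv[1], reverse=True)

    print(faceted_ranking({'jazz', 'piano'}))  # bob first (0.9), then alice (0.7)
    ```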

  13. A review of data fusion techniques.

    PubMed

    Castanedo, Federico

    2013-01-01

    The integration of data and knowledge from several sources is known as data fusion. This paper summarizes the state of the data fusion field and describes the most relevant studies. We first enumerate and explain different classification schemes for data fusion. Then, the most common algorithms are reviewed. These methods and algorithms are presented using three different categories: (i) data association, (ii) state estimation, and (iii) decision fusion.

  14. Social Circles Detection from Ego Network and Profile Information

    DTIC Science & Technology

    2014-12-19

    The algorithm used to infer k-clique communities is exponential, which makes this technique unfeasible when treating egonets with a large number of users... problematic when considering RBMs. This inconvenience was solved by implementing a sparsity treatment with the RBM algorithm. (ii) The ground truth was...

  15. Algorithmic characterization results for the Kerr-NUT-(A)dS space-time. II. KIDs for the Kerr-(A)(de Sitter) family

    NASA Astrophysics Data System (ADS)

    Paetz, Tim-Torben

    2017-04-01

    We characterize Cauchy data sets leading to vacuum space-times with vanishing Mars-Simon tensor. This approach provides an algorithmic procedure to check whether a given initial data set (Σ, h_ij, K_ij) evolves into a space-time which is locally isometric to a member of the Kerr-(A)(dS) family.

  16. TargetSpy: a supervised machine learning approach for microRNA target prediction.

    PubMed

    Sturm, Martin; Hackenberg, Michael; Langenberger, David; Frishman, Dmitrij

    2010-05-28

    Virtually all currently available microRNA target site prediction algorithms require the presence of a (conserved) seed match to the 5' end of the microRNA. Recently, however, it has been shown that this requirement might be too stringent, leading to a substantial number of missed target sites. We developed TargetSpy, a novel computational approach for predicting target sites regardless of the presence of a seed match. It is based on machine learning and automatic feature selection using a wide spectrum of compositional, structural, and base pairing features covering current biological knowledge. Our model does not rely on evolutionary conservation, which allows the detection of species-specific interactions and makes TargetSpy suitable for analyzing unconserved genomic sequences. In order to allow for an unbiased comparison of TargetSpy to other methods, we classified all algorithms into three groups: I) no seed match requirement, II) seed match requirement, and III) conserved seed match requirement. TargetSpy predictions for classes II and III are generated by appropriate postfiltering. On a human dataset revealing fold-change in protein production for five selected microRNAs our method shows superior performance in all classes. In Drosophila melanogaster not only our class II and III predictions are on par with other algorithms, but notably the class I (no-seed) predictions are just marginally less accurate. We estimate that TargetSpy predicts between 26 and 112 functional target sites without a seed match per microRNA that are missed by all other currently available algorithms. Only a few algorithms can predict target sites without demanding a seed match, and TargetSpy demonstrates a substantial improvement in prediction accuracy in that class. Furthermore, when conservation and the presence of a seed match are required, the performance is comparable with state-of-the-art algorithms. TargetSpy was trained on mouse and performs well in human and Drosophila, suggesting that it may be applicable to a broad range of species. Moreover, we have demonstrated that the application of machine learning techniques in combination with upcoming deep sequencing data results in a powerful microRNA target site prediction tool http://www.targetspy.org.

  17. TargetSpy: a supervised machine learning approach for microRNA target prediction

    PubMed Central

    2010-01-01

    Background Virtually all currently available microRNA target site prediction algorithms require the presence of a (conserved) seed match to the 5' end of the microRNA. Recently, however, it has been shown that this requirement might be too stringent, leading to a substantial number of missed target sites. Results We developed TargetSpy, a novel computational approach for predicting target sites regardless of the presence of a seed match. It is based on machine learning and automatic feature selection using a wide spectrum of compositional, structural, and base pairing features covering current biological knowledge. Our model does not rely on evolutionary conservation, which allows the detection of species-specific interactions and makes TargetSpy suitable for analyzing unconserved genomic sequences. In order to allow for an unbiased comparison of TargetSpy to other methods, we classified all algorithms into three groups: I) no seed match requirement, II) seed match requirement, and III) conserved seed match requirement. TargetSpy predictions for classes II and III are generated by appropriate postfiltering. On a human dataset revealing fold-change in protein production for five selected microRNAs our method shows superior performance in all classes. In Drosophila melanogaster not only our class II and III predictions are on par with other algorithms, but notably the class I (no-seed) predictions are just marginally less accurate. We estimate that TargetSpy predicts between 26 and 112 functional target sites without a seed match per microRNA that are missed by all other currently available algorithms. Conclusion Only a few algorithms can predict target sites without demanding a seed match, and TargetSpy demonstrates a substantial improvement in prediction accuracy in that class. Furthermore, when conservation and the presence of a seed match are required, the performance is comparable with state-of-the-art algorithms. TargetSpy was trained on mouse and performs well in human and Drosophila, suggesting that it may be applicable to a broad range of species. Moreover, we have demonstrated that the application of machine learning techniques in combination with upcoming deep sequencing data results in a powerful microRNA target site prediction tool http://www.targetspy.org. PMID:20509939

  18. Inventory Uncertainty Quantification using TENDL Covariance Data in Fispact-II

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eastwood, J.W.; Morgan, J.G.; Sublet, J.-Ch., E-mail: jean-christophe.sublet@ccfe.ac.uk

    2015-01-15

    The new inventory code Fispact-II provides predictions of inventory, radiological quantities and their uncertainties using nuclear data covariance information. Central to the method is a novel fast pathways search algorithm using directed graphs. The pathways output provides (1) an aid to identifying important reactions, (2) fast estimates of uncertainties, (3) reduced models that retain important nuclides and reactions for use in the code's Monte Carlo sensitivity analysis module. Described are the methods that are being implemented for improving uncertainty predictions, quantification and propagation using the covariance data that the recent nuclear data libraries contain. In the TENDL library, above the upper energy of the resolved resonance range, a Monte Carlo method in which the covariance data come from uncertainties of the nuclear model calculations is used. The nuclear data files are read directly by FISPACT-II without any further intermediate processing. Variance and covariance data are processed and used by FISPACT-II to compute uncertainties in collapsed cross sections, and these are in turn used to predict uncertainties in inventories and all derived radiological data.
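
    The idea of a pathways search over a directed nuclide graph can be illustrated with a few lines of Python. The graph, branching weights, and cutoff below are entirely hypothetical, and real inventory codes work with rate matrices rather than static weights; this is only a sketch of depth-first pathway enumeration with pruning.

    ```python
    # Hypothetical transmutation graph: nuclide -> list of (daughter, weight).
    graph = {
        'Fe58': [('Fe59', 0.9), ('Mn55', 0.1)],
        'Fe59': [('Co59', 1.0)],
        'Co59': [('Co60', 0.3)],
    }

    def pathways(src, dst, cutoff=0.01, path=None, weight=1.0):
        """Depth-first enumeration of pathways from src to dst, pruning
        branches whose cumulative weight falls below `cutoff`."""
        path = path or [src]
        if src == dst:
            yield path, weight
            return
        for nxt, w in graph.get(src, []):
            if weight * w >= cutoff and nxt not in path:   # prune weak/looping
                yield from pathways(nxt, dst, cutoff, path + [nxt], weight * w)

    for p, w in pathways('Fe58', 'Co60'):
        print(' -> '.join(p), f'(weight {w:.3f})')
    ```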

  19. New techniques for modeling the reliability of reactor pressure vessels

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, K.I.; Simonen, F.A.; Liebetrau, A.M.

    1986-01-01

    In recent years several probabilistic fracture mechanics codes, including the VISA code, have been developed to predict the reliability of reactor pressure vessels. This paper describes several new modeling techniques used in a second generation of the VISA code entitled VISA-II. Results are presented that show the sensitivity of vessel reliability predictions to such factors as inservice inspection to detect flaws, random positioning of flaws within the vessel wall thickness, and fluence distributions that vary throughout the vessel. The algorithms used to implement these modeling techniques are also described. Other new options in VISA-II are also described in this paper. The effect of vessel cladding has been included in the heat transfer, stress, and fracture mechanics solutions in VISA-II. The algorithm for simulating flaws has been changed to consider an entire vessel rather than a single flaw in a single weld. The flaw distribution was changed to include the distribution of both flaw depth and length. A menu of several alternate equations has been included to predict the shift in RTNDT. For flaws that arrest and later re-initiate, an option was also included to allow correlating the current arrest toughness with subsequent initiation toughnesses.

  20. Theory of post-block 2 VLBI observable extraction

    NASA Technical Reports Server (NTRS)

    Lowe, Stephen T.

    1992-01-01

    The algorithms used in the post-Block II fringe-fitting software called 'Fit' are described. The steps needed to derive the very long baseline interferometry (VLBI) charged-particle corrected group delay, phase delay rate, and phase delay (the latter without resolving cycle ambiguities) are presented beginning with the set of complex fringe phasors as a function of observation frequency and time. The set of complex phasors is obtained from the JPL/CIT Block II correlator. The output of Fit is the set of charged-particle corrected observables (along with ancillary information) in a form amenable to the software program 'Modest.'
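
    The group-delay observable mentioned here is, at heart, the slope of fringe phase with frequency, tau_g = (1/2π) dφ/dν. The snippet below simulates that relationship and recovers the delay with a linear fit; it is a conceptual illustration, not the Fit software's estimator, which works with the full two-dimensional phasor set.

    ```python
    import numpy as np

    # Group delay from the slope of fringe phase vs. frequency:
    #   phi(nu) ~ phi0 + 2*pi*tau_g*(nu - nu0)  =>  tau_g = (1/2pi) dphi/dnu
    nu = np.linspace(8.40e9, 8.44e9, 16)           # channel frequencies (Hz)
    tau_true = 3.2e-9                              # simulated group delay (s)
    phasor = np.exp(2j * np.pi * tau_true * (nu - nu[0]))
    phasor *= np.exp(1j * 0.05 * np.random.default_rng(1).standard_normal(nu.size))

    phase = np.unwrap(np.angle(phasor))            # avoid 2*pi phase jumps
    slope = np.polyfit(nu - nu[0], phase, 1)[0]
    print("estimated group delay:", slope / (2 * np.pi), "s")  # ~3.2e-9
    ```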

  1. Ocean Simulation Model for Internal Waves

    DTIC Science & Technology

    1990-08-01

    [Scanned fragments only: data-file format tables (MODEL1.DAT, MODEL1.AUX) and table-of-contents entries, including "Background and Derivation of Algorithms" and "Stochastic Representation of...".]

  2. Mechanistic design data from ODOT instrumented pavement sites : phase II report.

    DOT National Transportation Integrated Search

    2017-03-01

    This investigation examined data obtained from three previously-instrumented pavement test sites in Oregon. Data processing algorithms and templates were developed for each test site that facilitated full processing of all the data to build databases...

  3. 40 CFR 86.1809-10 - Prohibition of defeat devices.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... detailed information regarding test programs, engineering evaluations, design specifications, calibrations, on-board computer algorithms, and design strategies incorporated for operation both during and... HLDT/MDPVs the manufacturer must submit, with the Part II certification application, an engineering...

  4. A quasi-Newton algorithm for large-scale nonlinear equations.

    PubMed

    Huang, Linghua

    2017-01-01

    In this paper, the algorithm for large-scale nonlinear equations is designed by the following steps: (i) a conjugate gradient (CG) algorithm is designed as a sub-algorithm to obtain the initial points of the main algorithm, where the sub-algorithm's initial point does not have any restrictions; (ii) a quasi-Newton algorithm with the initial points given by the sub-algorithm is defined as the main algorithm, where a new nonmonotone line search technique is presented to determine the step length. The given nonmonotone line search technique avoids computing the Jacobian matrix. The global convergence and the convergence rate of the main algorithm are established under suitable conditions. Numerical results show that the proposed method is competitive with a similar method for large-scale problems.
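
    To make the quasi-Newton step concrete, here is a generic Broyden rank-one method for F(x) = 0 with a simple derivative-free backtracking line search on ||F||. This is a textbook sketch, monotone rather than nonmonotone and without the CG initialization, so it only illustrates the class of method the paper builds on.

    ```python
    import numpy as np

    def broyden(F, x0, tol=1e-10, max_iter=100):
        """Broyden's 'good' quasi-Newton method for F(x) = 0, with a simple
        backtracking line search on the residual norm."""
        x = np.asarray(x0, float)
        Fx = F(x)
        B = np.eye(x.size)                      # Jacobian approximation
        for _ in range(max_iter):
            if np.linalg.norm(Fx) < tol:
                break
            d = np.linalg.solve(B, -Fx)         # quasi-Newton direction
            alpha = 1.0
            while (np.linalg.norm(F(x + alpha * d))
                   > (1 - 1e-4 * alpha) * np.linalg.norm(Fx)):
                alpha *= 0.5                    # backtrack on ||F||
                if alpha < 1e-8:
                    break
            s = alpha * d
            x_new = x + s
            F_new = F(x_new)
            y = F_new - Fx
            B += np.outer(y - B @ s, s) / (s @ s)   # Broyden rank-one update
            x, Fx = x_new, F_new
        return x

    F = lambda x: np.array([x[0] ** 2 + x[1] - 2.0, x[0] - x[1]])
    print(broyden(F, [2.0, 0.0]))  # -> approximately [1, 1]
    ```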

  5. Greedy Algorithms for Nonnegativity-Constrained Simultaneous Sparse Recovery

    PubMed Central

    Kim, Daeun; Haldar, Justin P.

    2016-01-01

    This work proposes a family of greedy algorithms to jointly reconstruct a set of vectors that are (i) nonnegative and (ii) simultaneously sparse with a shared support set. The proposed algorithms generalize previous approaches that were designed to impose these constraints individually. Similar to previous greedy algorithms for sparse recovery, the proposed algorithms iteratively identify promising support indices. In contrast to previous approaches, the support index selection procedure has been adapted to prioritize indices that are consistent with both the nonnegativity and shared support constraints. Empirical results demonstrate for the first time that the combined use of simultaneous sparsity and nonnegativity constraints can substantially improve recovery performance relative to existing greedy algorithms that impose less signal structure. PMID:26973368
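
    A minimal instance of this idea is a nonnegative variant of simultaneous orthogonal matching pursuit: pick the atom most positively correlated with the residuals of all columns, then re-fit every column on the accumulated support with a nonnegative least-squares solve. The sketch below uses scipy's nnls and is one plausible member of the family described, not the authors' exact algorithms.

    ```python
    import numpy as np
    from scipy.optimize import nnls

    def nn_somp(A, B, k):
        """Greedy recovery of X >= 0 with a shared row support, A @ X ~= B.
        A: (m, n) dictionary; B: (m, L) measurements; k: target support size."""
        support, R = [], B.copy()
        for _ in range(k):
            corr = A.T @ R                             # (n, L) correlations
            score = np.maximum(corr, 0.0).sum(axis=1)  # favour nonnegative fits
            score[support] = -np.inf                   # do not reselect atoms
            support.append(int(np.argmax(score)))
            X = np.zeros((A.shape[1], B.shape[1]))
            for j in range(B.shape[1]):                # NNLS refit per column
                X[support, j], _ = nnls(A[:, support], B[:, j])
            R = B - A @ X
        return X, sorted(support)

    rng = np.random.default_rng(0)
    A = rng.standard_normal((40, 100))
    X_true = np.zeros((100, 3))
    X_true[[7, 23, 61], :] = rng.uniform(0.5, 2.0, size=(3, 3))  # shared support
    X_hat, supp = nn_somp(A, A @ X_true, k=3)
    print(supp)  # -> [7, 23, 61] (typically, for this well-posed toy problem)
    ```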

  6. Analysis of estimation algorithms for CDTI and CAS applications

    NASA Technical Reports Server (NTRS)

    Goka, T.

    1985-01-01

    Estimation algorithms for Cockpit Display of Traffic Information (CDTI) and Collision Avoidance System (CAS) applications were analyzed and/or developed. The algorithms are based on actual or projected operational and performance characteristics of an Enhanced TCAS II traffic sensor developed by Bendix and the Federal Aviation Administration. Three algorithm areas are examined and discussed. These are horizontal x and y, range and altitude estimation algorithms. Raw estimation errors are quantified using Monte Carlo simulations developed for each application; the raw errors are then used to infer impacts on the CDTI and CAS applications. Applications of smoothing algorithms to CDTI problems are also discussed briefly. Technical conclusions are summarized based on the analysis of simulation results.

  7. Plasmid mapping computer program.

    PubMed Central

    Nolan, G P; Maina, C V; Szalay, A A

    1984-01-01

    Three new computer algorithms are described which rapidly order the restriction fragments of a plasmid DNA which has been cleaved with two restriction endonucleases in single and double digestions. Two of the algorithms are contained within a single computer program (called MPCIRC). The Rule-Oriented algorithm constructs all logical circular map solutions within sixty seconds (14 double-digestion fragments) when used in conjunction with the Permutation method. The program is written in Apple Pascal and runs on an Apple II Plus microcomputer with 64K of memory. A third algorithm is described which rapidly maps double digests and uses the above two algorithms as adducts. Modifications of the algorithms for linear mapping are also presented. PMID:6320105
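
    The underlying combinatorial problem is easy to state: find circular arrangements of the single-digest fragments of each enzyme whose combined cut positions reproduce the observed double-digest fragment sizes. The brute-force sketch below (exponential, so only for small maps) conveys the idea; the paper's Rule-Oriented and Permutation methods prune this search.

    ```python
    from itertools import permutations

    def double_digest(a_frags, b_frags, observed_ab):
        """Brute-force circular map search: find orderings of the single-digest
        fragments (enzymes A and B) whose combined cut sites reproduce the
        observed double-digest fragment sizes. Exponential; small maps only."""
        total = sum(a_frags)
        target = sorted(observed_ab)
        for pa in permutations(a_frags):
            a_cuts = {sum(pa[:i + 1]) % total for i in range(len(pa))}
            for pb in permutations(b_frags):
                for offset in range(total):        # rotate B map relative to A
                    b_cuts = {(offset + sum(pb[:i + 1])) % total
                              for i in range(len(pb))}
                    cuts = sorted(a_cuts | b_cuts)
                    frags = [cuts[i + 1] - cuts[i] for i in range(len(cuts) - 1)]
                    frags.append(total - cuts[-1] + cuts[0])
                    if sorted(frags) == target:
                        return pa, pb, offset
        return None

    # 6 kb circular plasmid: enzyme A gives fragments 2+4, enzyme B gives 3+3,
    # and the double digest gives 1+1+2+2.
    print(double_digest([2, 4], [3, 3], [1, 1, 2, 2]))
    ```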

  8. LSPRAY-II: A Lagrangian Spray Module

    NASA Technical Reports Server (NTRS)

    Raju, M. S.

    2004-01-01

    LSPRAY-II is a Lagrangian spray solver developed for application with parallel computing and unstructured grids. It is designed to be massively parallel and could easily be coupled with any existing gas-phase flow and/or Monte Carlo Probability Density Function (PDF) solvers. The solver accommodates the use of an unstructured mesh with mixed elements of either triangular, quadrilateral, and/or tetrahedral type for the gas flow grid representation. It is mainly designed to predict the flow, thermal and transport properties of a rapidly vaporizing spray because of its importance in aerospace application. The manual provides the user with an understanding of various models involved in the spray formulation, its code structure and solution algorithm, and various other issues related to parallelization and its coupling with other solvers. With the development of LSPRAY-II, we have advanced the state-of-the-art in spray computations in several important ways.

  9. A Review of Data Fusion Techniques

    PubMed Central

    2013-01-01

    The integration of data and knowledge from several sources is known as data fusion. This paper summarizes the state of the data fusion field and describes the most relevant studies. We first enumerate and explain different classification schemes for data fusion. Then, the most common algorithms are reviewed. These methods and algorithms are presented using three different categories: (i) data association, (ii) state estimation, and (iii) decision fusion. PMID:24288502

  10. Carrier-to-noise power estimation for the Block 5 Receiver

    NASA Technical Reports Server (NTRS)

    Monk, A. M.

    1991-01-01

    Two possible algorithms for the carrier-to-noise power (Pc/N0) estimation in the Block V Receiver are analyzed and their performances compared. The expected value and the variance of each estimator algorithm are derived. The two algorithms examined are known as the I-arm estimator, which relies on samples from only the in-phase arm of the digital phase lock loop, and the IQ-arm estimator, which uses both in-phase and quadrature-phase arm signals. The IQ-arm algorithm is currently implemented in the Advanced Receiver II (ARX II). Both estimators are biased. The performance degradation due to phase jitter in the carrier tracking loop is taken into account. Curves of the expected value and the signal-to-noise ratio of the Pc/N0 estimators vs. actual Pc/N0 are shown. From this, it is clear that the I-arm estimator performs better than the IQ-arm estimator when the data-to-noise power ratio (Pd/N0) is high, i.e., at high Pc/N0 values and a significant modulation index. When Pd/N0 is low, the two estimators have essentially the same performance.
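
    The distinction between the two estimators can be illustrated with schematic moment-based versions: the I-arm form takes both the signal and noise power from the in-phase samples, while the IQ-arm form takes the noise power from the quadrature arm. The scalings and formulas below are simplified stand-ins chosen for illustration, not the Block V Receiver's actual estimator expressions.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    pc_n0_db, bandwidth = 30.0, 1000.0          # assumed Pc/N0 (dB-Hz), BW (Hz)
    n = 100_000
    amp = np.sqrt(2 * 10 ** (pc_n0_db / 10) / bandwidth)  # carrier amplitude
    I = amp + rng.standard_normal(n)            # in-phase: signal + unit noise
    Q = rng.standard_normal(n)                  # quadrature: noise only (locked)

    # I-arm estimator: signal and noise power both from the in-phase samples.
    pc_n0_i = bandwidth * np.mean(I) ** 2 / (2 * np.var(I))
    # IQ-arm estimator: noise power taken from the quadrature arm instead.
    pc_n0_iq = bandwidth * np.mean(I) ** 2 / (2 * np.var(Q))

    print("I-arm :", 10 * np.log10(pc_n0_i), "dB-Hz")
    print("IQ-arm:", 10 * np.log10(pc_n0_iq), "dB-Hz")
    ```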

  11. Advanced Traffic Signal Control Algorithms Phase II

    DOT National Transportation Integrated Search

    2015-12-15

    The goal of the project was to design and implement an in-vehicle system that calculates and provide speed advice to the driver of the vehicle, using Signal Phase and Timing (SPaT) and Geometric Information Description (GID) information of the signal...

  12. 40 CFR 86.1809-12 - Prohibition of defeat devices.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... programs, engineering evaluations, design specifications, calibrations, on-board computer algorithms, and... manufacturer must submit, with the Part II certification application, an engineering evaluation demonstrating... vehicles, the engineering evaluation must also include particulate emissions. [75 FR 25685, May 7, 2010] ...

  13. Nonlinear 0-1 Programming: II. Dominance Relations and Algorithms. Revision.

    DTIC Science & Technology

    1983-02-01


  14. Adjuvant Chemotherapy for Stage II Right- and Left-Sided Colon Cancer: Analysis of SEER-Medicare Data

    PubMed Central

    Weiss, Jennifer M.; Schumacher, Jessica; Allen, Glenn O.; Neuman, Heather; Lange, Erin O’Connor; LoConte, Noelle K.; Greenberg, Caprice C.; Smith, Maureen A.

    2014-01-01

    Purpose Survival benefit from adjuvant chemotherapy is established for stage III colon cancer; however, uncertainty exists for stage II patients. Tumor heterogeneity, specifically microsatellite instability (MSI) which is more common in right-sided cancers, may be the reason for this observation. We examined the relationship between adjuvant chemotherapy and overall 5-year mortality for stage II colon cancer by location (right- versus left-side) as a surrogate for MSI. Methods Using Surveillance, Epidemiology, and End Results (SEER)-Medicare data, we identified Medicare beneficiaries from 1992 to 2005 with AJCC stage II (n=23,578) and III (n=17,148) primary adenocarcinoma of the colon who underwent surgery for curative intent. Overall 5-year mortality was examined with Kaplan-Meier survival analysis and Cox proportional hazards regression with propensity score weighting. Results Eighteen percent (n=2,941) of stage II patients with right-sided cancer and 22% (n=1,693) with left-sided cancer received adjuvant chemotherapy. After adjustment, overall 5-year survival benefit from chemotherapy was observed only for stage III patients (right-sided: HR 0.64; 95% CI, 0.59–0.68, p<0.001 and left-sided: HR 0.61; 95% CI, 0.56–0.68, p<0.001). No survival benefit was observed for stage II patients with either right-sided (HR 0.97; 95% CI, 0.87–1.09, p=0.64) or left-sided cancer (HR 0.97; 95% CI, 0.84–1.12, p=0.68). Conclusions Among Medicare patients with stage II colon cancer, a substantial number receive adjuvant chemotherapy. Adjuvant chemotherapy did not improve overall 5-year survival for either right- or left-sided colon cancers. Our results reinforce existing guidelines and should be considered in treatment algorithms for older adults with stage II colon cancer. PMID:24643898

  15. Smart sensors II; Proceedings of the Seminar, San Diego, CA, July 31, August 1, 1980

    NASA Astrophysics Data System (ADS)

    Barbe, D. F.

    1980-01-01

    Topics discussed include technology for smart sensors, smart sensors for tracking and surveillance, and techniques and algorithms for smart sensors. Papers are presented on the application of very large scale integrated circuits to smart sensors, imaging charge-coupled devices for deep-space surveillance, ultra-precise star tracking using charge coupled devices, and automatic target identification of blurred images with super-resolution features. Attention is also given to smart sensors for terminal homing, algorithms for estimating image position, and the computational efficiency of multiple image registration algorithms.

  16. Relationship Between Tumor Gene Expression and Recurrence in Four Independent Studies of Patients With Stage II/III Colon Cancer Treated With Surgery Alone or Surgery Plus Adjuvant Fluorouracil Plus Leucovorin

    PubMed Central

    O'Connell, Michael J.; Lavery, Ian; Yothers, Greg; Paik, Soonmyung; Clark-Langone, Kim M.; Lopatin, Margarita; Watson, Drew; Baehner, Frederick L.; Shak, Steven; Baker, Joffre; Cowens, J. Wayne; Wolmark, Norman

    2010-01-01

    Purpose These studies were conducted to determine the relationship between quantitative tumor gene expression and risk of cancer recurrence in patients with stage II or III colon cancer treated with surgery alone or surgery plus fluorouracil (FU) and leucovorin (LV) to develop multigene algorithms to quantify the risk of recurrence as well as the likelihood of differential treatment benefit of FU/LV adjuvant chemotherapy for individual patients. Patients and Methods We performed quantitative reverse transcription polymerase chain reaction (RT-qPCR) on RNA extracted from fixed, paraffin-embedded (FPE) tumor blocks from patients with stage II or III colon cancer who were treated with surgery alone (n = 270 from National Surgical Adjuvant Breast and Bowel Project [NSABP] C-01/C-02 and n = 765 from Cleveland Clinic [CC]) or surgery plus FU/LV (n = 308 from NSABP C-04 and n = 508 from NSABP C-06). Overall, 761 candidate genes were studied in C-01/C-02 and C-04, and a subset of 375 genes was studied in CC/C-06. Results A combined analysis of the four studies identified 48 genes significantly associated with risk of recurrence and 66 genes significantly associated with FU/LV benefit (with four genes in common). Seven recurrence-risk genes, six FU/LV-benefit genes, and five reference genes were selected, and algorithms were developed to identify groups of patients with low, intermediate, and high likelihood of recurrence and benefit from FU/LV. Conclusion RT-qPCR of FPE colon cancer tissue applied to four large independent populations has been used to develop multigene algorithms for estimating recurrence risk and benefit from FU/LV. These algorithms are being independently validated, and their clinical utility is being evaluated in the Quick and Simple and Reliable (QUASAR) study. PMID:20679606

  17. Relationship between tumor gene expression and recurrence in four independent studies of patients with stage II/III colon cancer treated with surgery alone or surgery plus adjuvant fluorouracil plus leucovorin.

    PubMed

    O'Connell, Michael J; Lavery, Ian; Yothers, Greg; Paik, Soonmyung; Clark-Langone, Kim M; Lopatin, Margarita; Watson, Drew; Baehner, Frederick L; Shak, Steven; Baker, Joffre; Cowens, J Wayne; Wolmark, Norman

    2010-09-01

    These studies were conducted to determine the relationship between quantitative tumor gene expression and risk of cancer recurrence in patients with stage II or III colon cancer treated with surgery alone or surgery plus fluorouracil (FU) and leucovorin (LV) to develop multigene algorithms to quantify the risk of recurrence as well as the likelihood of differential treatment benefit of FU/LV adjuvant chemotherapy for individual patients. We performed quantitative reverse transcription polymerase chain reaction (RT-qPCR) on RNA extracted from fixed, paraffin-embedded (FPE) tumor blocks from patients with stage II or III colon cancer who were treated with surgery alone (n = 270 from National Surgical Adjuvant Breast and Bowel Project [NSABP] C-01/C-02 and n = 765 from Cleveland Clinic [CC]) or surgery plus FU/LV (n = 308 from NSABP C-04 and n = 508 from NSABP C-06). Overall, 761 candidate genes were studied in C-01/C-02 and C-04, and a subset of 375 genes was studied in CC/C-06. A combined analysis of the four studies identified 48 genes significantly associated with risk of recurrence and 66 genes significantly associated with FU/LV benefit (with four genes in common). Seven recurrence-risk genes, six FU/LV-benefit genes, and five reference genes were selected, and algorithms were developed to identify groups of patients with low, intermediate, and high likelihood of recurrence and benefit from FU/LV. RT-qPCR of FPE colon cancer tissue applied to four large independent populations has been used to develop multigene algorithms for estimating recurrence risk and benefit from FU/LV. These algorithms are being independently validated, and their clinical utility is being evaluated in the Quick and Simple and Reliable (QUASAR) study.

  18. Galaxy clustering dependence on the [O II] emission line luminosity in the local Universe

    NASA Astrophysics Data System (ADS)

    Favole, Ginevra; Rodríguez-Torres, Sergio A.; Comparat, Johan; Prada, Francisco; Guo, Hong; Klypin, Anatoly; Montero-Dorta, Antonio D.

    2017-11-01

    We study the dependence of galaxy clustering on the [O II] emission line luminosity in the SDSS DR7 Main galaxy sample at mean redshift z ∼ 0.1. We select volume-limited samples of galaxies with different [O II] luminosity thresholds and measure their projected, monopole and quadrupole two-point correlation functions. We model these observations using the 1 h⁻¹ Gpc MultiDark-Planck cosmological simulation and generate light cones with the SUrvey GenerAtoR algorithm. To interpret our results, we adopt a modified (Sub)Halo Abundance Matching scheme that accounts for the stellar mass incompleteness of the emission line galaxies. The satellite fraction constitutes an extra parameter in this model and allows us to optimize the clustering fit on both small and intermediate scales (i.e. rp ≲ 30 h⁻¹ Mpc), with no need for any velocity bias correction. We find that, in the local Universe, the [O II] luminosity correlates with all the clustering statistics explored and with the galaxy bias. The latter quantity correlates more strongly with the SDSS r-band magnitude than with [O II] luminosity. In conclusion, we propose a straightforward method to produce reliable clustering models, built entirely on the simulation products, which provides robust predictions of the typical ELG host halo masses and satellite fraction values. The SDSS galaxy data, MultiDark mock catalogues and clustering results are made publicly available.

  19. Comparing the effects of positive and negative feedback in information-integration category learning.

    PubMed

    Freedberg, Michael; Glass, Brian; Filoteo, J Vincent; Hazeltine, Eliot; Maddox, W Todd

    2017-01-01

    Categorical learning is dependent on feedback. Here, we compare how positive and negative feedback affect information-integration (II) category learning. Ashby and O'Brien (2007) demonstrated that both positive and negative feedback are required to solve II category problems when feedback was not guaranteed on each trial, and reported no differences between positive-only and negative-only feedback in terms of their effectiveness. We followed up on these findings and conducted 3 experiments in which participants completed 2,400 II categorization trials across three days under 1 of 3 conditions: positive feedback only (PFB), negative feedback only (NFB), or both types of feedback (CP; control partial). An adaptive algorithm controlled the amount of feedback given to each group so that feedback was nearly equated. Using different feedback control procedures, Experiments 1 and 2 demonstrated that participants in the NFB and CP group were able to engage II learning strategies, whereas the PFB group was not. Additionally, the NFB group was able to achieve significantly higher accuracy than the PFB group by Day 3. Experiment 3 revealed that these differences remained even when we equated the information received on feedback trials. Thus, negative feedback appears significantly more effective for learning II category structures. This suggests that the human implicit learning system may be capable of learning in the absence of positive feedback.

  20. Visualization of grid-generated turbulence in He II using PTV

    NASA Astrophysics Data System (ADS)

    Mastracci, B.; Guo, W.

    2017-12-01

    Due to its low viscosity, cryogenic He II has potential use for simulating large-scale, high Reynolds number turbulent flow in a compact and efficient apparatus. To realize this potential, the behavior of the fluid in the simplest cases, such as turbulence generated by flow past a mesh grid, must be well understood. We have designed, constructed, and commissioned an apparatus to visualize the evolution of turbulence in the wake of a mesh grid towed through He II. Visualization is accomplished using the particle tracking velocimetry (PTV) technique, where μm-sized tracer particles are introduced to the flow, illuminated with a planar laser sheet, and recorded by a scientific imaging camera; the particles move with the fluid, and tracking their motion with a computer algorithm results in a complete map of the turbulent velocity field in the imaging region. In our experiment, this region is inside a carefully designed He II filled cast acrylic channel measuring approximately 16 × 16 × 330 mm. One of three different grids, which have mesh numbers M = 3, 3.75, or 5 mm, can be attached to the pulling system which moves it through the channel with constant velocity up to 600 mm/s. The consequent motion of the solidified deuterium tracer particles is used to investigate the energy statistics, effective kinematic viscosity, and quantized vortex dynamics in turbulent He II.
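
    The tracking step described above can be reduced to its simplest form: link each particle in one frame to its nearest neighbour in the next frame and difference the positions. A minimal sketch of that idea (production PTV codes add multi-frame prediction and ambiguity checks):

      import numpy as np
      from scipy.spatial import cKDTree

      def link_particles(frame_a, frame_b, dt, max_disp):
          # frame_a, frame_b: (N, 2) arrays of particle positions in consecutive images
          tree = cKDTree(frame_b)
          dist, idx = tree.query(frame_a, distance_upper_bound=max_disp)
          matched = np.isfinite(dist)          # unmatched queries return dist = inf
          velocities = (frame_b[idx[matched]] - frame_a[matched]) / dt
          return frame_a[matched], velocities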

  1. Java-based Graphical User Interface for MAVERIC-II

    NASA Technical Reports Server (NTRS)

    Seo, Suk Jai

    2005-01-01

    A computer program entitled "Marshall Aerospace Vehicle Representation in C II, (MAVERIC-II)" is a vehicle flight simulation program written primarily in the C programming language. It was written by James W. McCarter at NASA/Marshall Space Flight Center. The goal of the MAVERIC-II development effort is to provide a simulation tool that facilitates the rapid development of high-fidelity flight simulations for launch, orbital, and reentry vehicles of any user-defined configuration for all phases of flight. MAVERIC-II has been found invaluable in performing flight simulations for various Space Transportation Systems. The flexibility provided by MAVERIC-II has allowed several different launch vehicles, including the Saturn V, a Space Launch Initiative Two-Stage-to-Orbit concept and a Shuttle-derived launch vehicle, to be simulated during ascent and portions of on-orbit flight in an extremely efficient manner. It was found that MAVERIC-II provided the high-fidelity vehicle and flight environment models as well as the program modularity to allow efficient integration, modification and testing of advanced guidance and control algorithms. In addition to serving as an analysis tool for technology development, many researchers have found MAVERIC-II to be an efficient, powerful analysis tool that evaluates guidance, navigation, and control designs, vehicle robustness, and requirements. MAVERIC-II is currently designed to execute in a UNIX environment. The input to the program is composed of three segments: 1) the vehicle models, such as propulsion, aerodynamics, and guidance, navigation, and control; 2) the environment models, such as atmosphere and gravity; and 3) a simulation framework which is responsible for executing the vehicle and environment models, propagating the vehicle's states forward in time, and handling user input/output. MAVERIC users prepare data files for the above models and run the simulation program. They can see the output on screen and/or store it in files and examine the output data later. Users can also view the output stored in output files by calling a plotting program such as gnuplot. A typical scenario of the use of MAVERIC consists of three steps: editing existing input data files, running MAVERIC, and plotting output results.
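
    That three-step scenario could be scripted, for example, in Python; the executable and file names below are purely hypothetical placeholders for a user's own MAVERIC-II input and output files:

      import subprocess

      # run the simulation on previously edited input data files (names assumed)
      subprocess.run(["./maveric", "vehicle.dat", "environment.dat", "sim.dat"], check=True)

      # plot a stored output file with gnuplot (column layout assumed)
      script = b"set xlabel 'time (s)'; set ylabel 'altitude (m)'; plot 'output.dat' using 1:2 with lines\n"
      subprocess.run(["gnuplot", "-persist"], input=script, check=True)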

  2. 40 CFR 86.1809-10 - Prohibition of defeat devices.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... programs, engineering evaluations, design specifications, calibrations, on-board computer algorithms, and..., with the Part II certification application, an engineering evaluation demonstrating to the satisfaction... not occur in the temperature range of 20 to 86 °F. For diesel vehicles, the engineering evaluation...

  3. Deductive Synthesis of the Unification Algorithm,

    DTIC Science & Technology

    1981-06-01

    DEDUCTIVE SYNTHESIS OF THE UNIFICATION ALGORITHM. Zohar Manna, Richard Waldinger. Computer Science Department, Artificial Intelligence Center... "theorem proving," Artificial Intelligence Journal, Vol. 9, No. 1, pp. 1-35. Boyer, R. S. and J. S. Moore [Jan. 1975], "Proving theorems about LISP...d'Intelligence Artificielle, U.E.R. de Luminy, Université d'Aix-Marseille II. Green, C. C. [May 1969], "Application of theorem proving to problem

  4. Hardware Prototyping of Neural Network based Fetal Electrocardiogram Extraction

    NASA Astrophysics Data System (ADS)

    Hasan, M. A.; Reaz, M. B. I.

    2012-01-01

    The aim of this paper is to model an algorithm for fetal ECG (FECG) extraction from the composite abdominal ECG (AECG) using VHDL (Very High Speed Integrated Circuit Hardware Description Language) for FPGA (Field Programmable Gate Array) implementation. An artificial neural network that provides an efficient and effective way of separating the FECG signal from the composite AECG signal has been designed. The proposed method gives an accuracy of 93.7% for R-peak detection in FHR monitoring. The designed VHDL model is synthesized and fitted into Altera's Stratix II EP2S15F484C3 using the Quartus II version 8.0 Web Edition for FPGA implementation.
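
    The paper's extraction network is not reproduced here, but the R-peak detection it is scored on can be illustrated with a deliberately simple energy-threshold detector (an assumption-laden baseline, not the authors' ANN):

      import numpy as np

      def detect_r_peaks(ecg, fs, refractory=0.25):
          # squared-derivative energy, smoothed over ~150 ms
          energy = np.gradient(ecg) ** 2
          w = max(1, int(0.15 * fs))
          energy = np.convolve(energy, np.ones(w) / w, mode="same")
          threshold = 0.5 * energy.max()       # crude fixed threshold
          peaks, last = [], -np.inf
          for i in range(1, len(energy) - 1):
              is_local_max = energy[i] >= energy[i - 1] and energy[i] >= energy[i + 1]
              if energy[i] > threshold and is_local_max and i - last > refractory * fs:
                  peaks.append(i)
                  last = i
          return np.array(peaks)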

  5. Prospects of detection of the first sources with SKA using matched filters

    NASA Astrophysics Data System (ADS)

    Ghara, Raghunath; Choudhury, T. Roy; Datta, Kanan K.; Mellema, Garrelt; Choudhuri, Samir; Majumdar, Suman; Giri, Sambit K.

    2018-05-01

    The matched filtering technique is an efficient method to detect H ii bubbles and absorption regions in radio interferometric observations of the redshifted 21-cm signal from the epoch of reionization and the Cosmic Dawn. Here, we present an implementation of this technique for upcoming observations, such as with SKA1-low, for a blind search of absorption regions at the Cosmic Dawn. The pipeline explores a four-dimensional parameter space on the simulated mock visibilities using an MCMC algorithm. The framework is able to efficiently determine the positions and sizes of the absorption/H ii regions in the field of view.
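
    At its core the matched filter correlates the observed map with a template of the expected feature and reads candidate detections off the correlation peaks. A minimal image-plane sketch (the actual pipeline works on visibilities and fits four parameters with MCMC):

      import numpy as np
      from scipy.signal import fftconvolve

      def matched_filter_map(image, template):
          # cross-correlate with a zero-mean template, normalized by its energy
          kernel = template - template.mean()
          corr = fftconvolve(image, kernel[::-1, ::-1], mode="same")
          return corr / np.sqrt(np.sum(kernel ** 2))

      # candidate absorption regions: local extrema of matched_filter_map(...)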

  6. Real Time Implementation of an LPC Algorithm. Speech Signal Processing Research at CHI

    DTIC Science & Technology

    1975-05-01

    [OCR fragment of the report's front matter: a table of contents covering signal processing hardware (introduction, two-channel and multi-channel audio signal systems), lost or out-of-order message handling, and block diagrams of the two-channel audio signal system.]

  7. Classification of ring artifacts for their effective removal using type adaptive correction schemes.

    PubMed

    Anas, Emran Mohammad Abu; Lee, Soo Yeol; Hasan, Kamrul

    2011-06-01

    High resolution tomographic images acquired with a digital X-ray detector are often degraded by so-called ring artifacts. In this paper, a detailed analysis covering the classification, detection and correction of these ring artifacts is presented. At first, a novel idea for classifying rings into two categories, namely type I and type II rings, is proposed based on their statistical characteristics. Defective detector elements and dusty scintillator screens result in type I rings, and mis-calibrated detector elements lead to type II rings. Unlike conventional approaches, we emphasize separate detection and correction schemes for each type of ring for their effective removal. For the detection of type I rings, the histogram of the responses of the detector elements is used, and a modified fast image inpainting algorithm is adopted to correct the responses of the defective pixels. On the other hand, to detect type II rings, a simple filtering scheme based on the fast Fourier transform (FFT) is first presented to smooth the sum curve derived from the type I ring-corrected projection data. The difference between the sum curve and its smoothed version is then used to detect their positions. Then, to remove the bias (constant with view angle) in the responses of the mis-calibrated detector elements, an estimated dc shift is subtracted from them. The performance of the proposed algorithm is evaluated using real micro-CT images and is compared with three recently reported algorithms. Simulation results demonstrate superior performance of the proposed technique as compared to the techniques reported in the literature. Copyright © 2011 Elsevier Ltd. All rights reserved.
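
    The type II scheme described above is simple enough to sketch directly; the cutoff frequency and outlier threshold below are illustrative assumptions:

      import numpy as np

      def correct_type2_rings(sinogram, cutoff=20, k=3.0):
          # sum detector responses over all view angles and low-pass the curve via FFT
          sum_curve = sinogram.sum(axis=0)
          spec = np.fft.rfft(sum_curve)
          spec[cutoff:] = 0.0
          smooth = np.fft.irfft(spec, n=sum_curve.size)
          # large deviations from the smooth curve mark mis-calibrated elements
          diff = sum_curve - smooth
          bad = np.abs(diff) > k * diff.std()
          corrected = sinogram.copy()
          corrected[:, bad] -= diff[bad] / sinogram.shape[0]   # subtract estimated dc shift
          return corrected, np.where(bad)[0]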

  8. PySeqLab: an open source Python package for sequence labeling and segmentation.

    PubMed

    Allam, Ahmed; Krauthammer, Michael

    2017-11-01

    Text and genomic data are composed of sequential tokens, such as words and nucleotides, that give rise to higher-order syntactic constructs. In this work, we aim at providing a comprehensive Python library implementing conditional random fields (CRFs), a class of probabilistic graphical models, for robust prediction of these constructs from sequential data. Python Sequence Labeling (PySeqLab) is an open source package for performing supervised learning in structured prediction tasks. It implements CRF models, that is, discriminative models from (i) first-order to higher-order linear-chain CRFs, and from (ii) first-order to higher-order semi-Markov CRFs (semi-CRFs). Moreover, it provides multiple learning algorithms for estimating model parameters, such as (i) stochastic gradient descent (SGD) and its multiple variations, (ii) structured perceptron with multiple averaging schemes supporting exact and inexact search using the 'violation-fixing' framework, (iii) the search-based probabilistic online learning algorithm (SAPO) and (iv) an interface for Broyden-Fletcher-Goldfarb-Shanno (BFGS) and the limited-memory BFGS algorithms. Viterbi and Viterbi A* are used for inference and decoding of sequences. Using PySeqLab, we built models (classifiers) and evaluated their performance in three different domains: (i) biomedical natural language processing (NLP), (ii) predictive DNA sequence analysis and (iii) human activity recognition (HAR). State-of-the-art performance comparable to machine-learning based systems was achieved in the three domains without feature engineering or the use of knowledge sources. PySeqLab is available through https://bitbucket.org/A_2/pyseqlab with tutorials and documentation. ahmed.allam@yale.edu or michael.krauthammer@yale.edu. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com

  9. Recent Improvements to the Finite-Fault Rupture Detector Algorithm: FinDer II

    NASA Astrophysics Data System (ADS)

    Smith, D.; Boese, M.; Heaton, T. H.

    2015-12-01

    Constraining the finite-fault rupture extent and azimuth is crucial for accurately estimating ground-motion in large earthquakes. Detecting and modeling finite-fault ruptures in real-time is thus essential to both earthquake early warning (EEW) and rapid emergency response. Following extensive real-time and offline testing, the finite-fault rupture detector algorithm, FinDer (Böse et al., 2012 & 2015), was successfully integrated into the California-wide ShakeAlert EEW demonstration system. Since April 2015, FinDer has been scanning real-time waveform data from approximately 420 strong-motion stations in California for peak ground acceleration (PGA) patterns indicative of earthquakes. FinDer analyzes strong-motion data by comparing spatial images of observed PGA with theoretical templates modeled from empirical ground-motion prediction equations (GMPEs). If the correlation between the observed and theoretical PGA is sufficiently high, a report is sent to ShakeAlert including the estimated centroid position, length, and strike, and their uncertainties, of an ongoing fault rupture. Rupture estimates are continuously updated as new data arrives. As part of a joint effort between USGS Menlo Park, ETH Zurich, and Caltech, we have rewritten FinDer in C++ to obtain a faster and more flexible implementation. One new feature of FinDer II is that multiple contour lines of high-frequency PGA are computed and correlated with templates, allowing the detection of both large earthquakes and much smaller (~ M3.5) events shortly after their nucleation. Unlike previous EEW algorithms, FinDer II thus provides a modeling approach for both small-magnitude point-source and larger-magnitude finite-fault ruptures with consistent error estimates for the entire event magnitude range.
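
    The core of the approach is an image correlation between observed and modeled PGA. A toy version of that single step (FinDer itself scans banks of templates over length and strike and tracks uncertainties):

      import numpy as np

      def template_correlation(observed_pga, template_pga):
          # Pearson correlation between two PGA images of equal shape
          a, b = observed_pga.ravel(), template_pga.ravel()
          a = (a - a.mean()) / a.std()
          b = (b - b.mean()) / b.std()
          return float(np.mean(a * b))

      # best-fitting rupture model over a template bank:
      # best = max(templates, key=lambda t: template_correlation(obs, t))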

  10. Machine learning: a useful radiological adjunct in determination of a newly diagnosed glioma's grade and IDH status.

    PubMed

    De Looze, Céline; Beausang, Alan; Cryan, Jane; Loftus, Teresa; Buckley, Patrick G; Farrell, Michael; Looby, Seamus; Reilly, Richard; Brett, Francesca; Kearney, Hugh

    2018-05-16

    Machine learning methods have been introduced as a computer-aided diagnostic tool, with applications to glioma characterisation on MRI. Such an algorithmic approach may provide a useful adjunct for a rapid and accurate diagnosis of a glioma. The aim of this study is to devise a machine learning algorithm that may be used by radiologists in routine practice to aid diagnosis of both WHO grade and IDH mutation status in de novo gliomas. To evaluate the status quo, we interrogated the accuracy of neuroradiology reports in relation to WHO grade: grade II 96.49% (95% confidence interval [CI] 0.88, 0.99); III 36.51% (95% CI 0.24, 0.50); IV 72.9% (95% CI 0.67, 0.78). We derived five MRI parameters from the same diagnostic brain scans, in under two minutes per case, and then supplied these data to a random forest algorithm. Machine learning resulted in a high level of accuracy in prediction of tumour grade: grade II/III, area under the receiver operating characteristic curve (AUC) = 98%, sensitivity = 0.82, specificity = 0.94; grade II/IV, AUC = 100%, sensitivity = 1.0, specificity = 1.0; grade III/IV, AUC = 97%, sensitivity = 0.83, specificity = 0.97. Furthermore, machine learning also facilitated the discrimination of IDH status: AUC of 88%, sensitivity = 0.81, specificity = 0.77. These data demonstrate the ability of machine learning to accurately classify diffuse gliomas by both WHO grade and IDH status from routine MRI alone, without significant image processing, which may facilitate its use as a diagnostic adjunct in clinical practice.
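
    A random-forest workflow of the kind described is a few lines in scikit-learn; the feature matrix below is a random stand-in for the five MRI-derived parameters, so the printed AUC is meaningless except as a smoke test:

      import numpy as np
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(0)
      X = rng.normal(size=(120, 5))            # five MRI parameters per case (placeholder)
      y = rng.integers(0, 2, size=120)         # e.g. 0 = grade II, 1 = grade IV

      clf = RandomForestClassifier(n_estimators=500, random_state=0)
      print("cross-validated AUC:", cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean())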

  11. DoE Phase II SBIR: Spectrally-Assisted Vehicle Tracking

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Villeneuve, Pierre V.

    2013-02-28

    The goal of this Phase II SBIR is to develop a prototype software package to demonstrate spectrally-aided vehicle tracking performance. The primary application is to demonstrate improved target vehicle tracking performance in complex environments where traditional spatial tracker systems may show reduced performance. Example scenarios in Figure 1 include a) the target vehicle obscured by a large structure for an extended period of time, or b) the target engaging in extreme maneuvers amongst other civilian vehicles. The target information derived from spatial processing is unable to differentiate between the green and the red vehicle. Spectral signature exploitation enables comparison of new candidate targets with existing track signatures. The ambiguity in this confusing scenario is resolved by folding spectral analysis results into the target nomination and association processes. Figure 3 shows a number of example spectral signatures from a variety of natural and man-made materials. The work performed over the two-year effort was divided into three general areas: algorithm refinement, software prototype development, and prototype performance demonstration. The tasks performed under this Phase II to accomplish the program goals were as follows: 1. Acquire relevant vehicle target datasets to support the prototype. 2. Refine algorithms for target spectral feature exploitation. 3. Implement a prototype multi-hypothesis target tracking software package. 4. Demonstrate and quantify tracking performance using relevant data.

  12. An Interactive and Comprehensive Working Environment for High-Energy Physics Software with Python and Jupyter Notebooks

    NASA Astrophysics Data System (ADS)

    Braun, N.; Hauth, T.; Pulvermacher, C.; Ritter, M.

    2017-10-01

    Today’s analyses for high-energy physics (HEP) experiments involve processing a large amount of data with highly specialized algorithms. The contemporary workflow from recorded data to final results is based on the execution of small scripts - often written in Python or ROOT macros which call complex compiled algorithms in the background - to perform fitting procedures and generate plots. In recent years interactive programming environments, such as Jupyter, have become popular. Jupyter allows the development of Python-based applications, so-called notebooks, which bundle code, documentation and results, e.g. plots. Advantages over classical script-based approaches are the ability to recompute only parts of the analysis code, which allows for fast and iterative development, and a web-based user frontend, which can be hosted centrally and only requires a browser on the user side. In our novel approach, Python and Jupyter are tightly integrated into the Belle II Analysis Software Framework (basf2), currently being developed for the Belle II experiment in Japan. This makes it possible to develop code in Jupyter notebooks for every aspect of the event simulation, reconstruction and analysis chain. These interactive notebooks can be hosted as a centralized web service via jupyterhub with docker and used by all scientists of the Belle II Collaboration. Because of its generality and encapsulation, the setup can easily be scaled to large installations.

  13. Type II fuzzy systems for amyloid plaque segmentation in transgenic mouse brains for Alzheimer's disease quantification

    NASA Astrophysics Data System (ADS)

    Khademi, April; Hosseinzadeh, Danoush

    2014-03-01

    Alzheimer's disease (AD) is the most common form of dementia in the elderly, characterized by extracellular deposition of amyloid plaques (AP). Using animal models, AP loads have been manually measured from histological specimens to understand disease etiology as well as response to treatment. Due to the manual nature of these approaches, obtaining the AP load is laborious, subjective and error prone. Automated algorithms can be designed to alleviate these challenges by objectively segmenting AP. In this paper, we focus on the development of a novel algorithm for AP segmentation based on robust preprocessing and a Type II fuzzy system. Type II fuzzy systems offer clear advantages over traditional Type I fuzzy systems, since ambiguity in the membership function may be modeled and exploited to generate excellent segmentation results. The ambiguity in the membership function is defined as an adaptively changing parameter that is tuned based on the local contrast characteristics of the image. Using transgenic mouse brains with AP ground truth, validation studies were carried out showing a high degree of overlap and a low degree of oversegmentation (0.8233 and 0.0917, respectively). The results highlight that such a framework is able to handle plaques of various types (diffuse, punctate), plaques with varying Aβ concentrations, as well as intensity variation caused by treatment effects or staining variability.
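
    The distinguishing ingredient of a Type II system is that each membership value becomes an interval. One common construction, a Gaussian with an uncertain width, is sketched below; treating the width bounds as the locally tuned parameter is an assumption consistent with, but not copied from, the paper:

      import numpy as np

      def interval_type2_membership(x, mean, sigma_lo, sigma_hi):
          # lower/upper membership curves bound the "footprint of uncertainty"
          lower = np.exp(-0.5 * ((x - mean) / sigma_lo) ** 2)
          upper = np.exp(-0.5 * ((x - mean) / sigma_hi) ** 2)
          return lower, upper    # lower <= upper wherever sigma_lo <= sigma_hi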

  14. Cloud-based NEXRAD Data Processing and Analysis for Hydrologic Applications

    NASA Astrophysics Data System (ADS)

    Seo, B. C.; Demir, I.; Keem, M.; Goska, R.; Weber, J.; Krajewski, W. F.

    2016-12-01

    The real-time and full historical archive of NEXRAD Level II data, covering the entire United States from 1991 to present, recently became available on Amazon cloud S3. This provides a new opportunity to rebuild the Hydro-NEXRAD software system, which enabled users to access vast amounts of NEXRAD radar data in support of a wide range of research. The system processes basic radar data (Level II) and delivers radar-rainfall products based on the user's custom selection of features such as space and time domain, river basin, rainfall product space and time resolution, and rainfall estimation algorithms. The new cloud-based system eliminates prior challenges faced by Hydro-NEXRAD data acquisition and processing: (1) temporal and spatial limitations arising from limited data storage; (2) archive (past) data ingestion and format conversion; and (3) separate data processing flows for past and real-time Level II data. To enhance massive data processing and computational efficiency, the new system is implemented and tested for the Iowa domain. This pilot study begins by ingesting rainfall metadata and implementing Hydro-NEXRAD capabilities on the cloud using the new polarimetric features, as well as the existing algorithm modules and scripts. The authors address the reliability and feasibility of cloud computation and processing, followed by an assessment of response times from an interactive web-based system.
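
    Reading the archive programmatically is straightforward; assuming the public Open Data bucket is named noaa-nexrad-level2 with year/month/day/station key prefixes (an assumption about the current layout), a listing sketch with boto3 looks like:

      import boto3
      from botocore import UNSIGNED
      from botocore.config import Config

      # anonymous access to the public bucket
      s3 = boto3.client("s3", config=Config(signature_version=UNSIGNED))
      resp = s3.list_objects_v2(Bucket="noaa-nexrad-level2",
                                Prefix="2016/06/01/KDMX/", MaxKeys=10)
      for obj in resp.get("Contents", []):
          print(obj["Key"], obj["Size"])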

  15. An overview of remote sensing of chlorophyll fluorescence

    NASA Astrophysics Data System (ADS)

    Xing, Xiao-Gang; Zhao, Dong-Zhi; Liu, Yu-Guang; Yang, Jian-Hong; Xiu, Peng; Wang, Lin

    2007-03-01

    Besides empirical algorithms with the blue-green ratio, algorithms based on fluorescence are also important and valid methods for retrieving chlorophyll-a concentration in ocean waters, especially for Case II waters and seas with algal blooms. This study reviews the history of initial cognitions, investigations and detailed approaches towards chlorophyll fluorescence, and then introduces the biological mechanism of fluorescence remote sensing and its main spectral characteristics, such as the positive correlation between fluorescence and chlorophyll concentration and the red-shift phenomenon. Meanwhile, many factors increase the complexity of fluorescence remote sensing, such as the fluorescence quantum yield, the physiological status of various algae, substances with related optical properties in the ocean, atmospheric absorption, etc. Based on these cognitions, scientists have found two ways to calculate the amount of fluorescence detected by ocean color sensors: fluorescence line height and reflectance ratio. These two ways are currently the foundation for retrieval of chlorophyll-a concentration in the ocean. As in-situ measurements and synchronous satellite data continue to be accumulated, the fluorescence remote sensing of chlorophyll-a concentration in Case II waters should become more thoroughly understood, and new algorithms can be expected.
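
    Fluorescence line height is the simpler of the two quantities to state: the radiance in the fluorescence band is compared with a linear baseline interpolated between two flanking bands. A sketch with illustrative band centers (sensor-specific choices vary):

      import numpy as np

      def fluorescence_line_height(radiance, wavelengths, bands=(665.0, 681.0, 709.0)):
          # radiance in the band nearest each requested center
          l1, l2, l3 = (radiance[np.argmin(np.abs(wavelengths - b))] for b in bands)
          w1, w2, w3 = bands
          baseline = l3 + (l1 - l3) * (w3 - w2) / (w3 - w1)   # linear baseline at w2
          return l2 - baseline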

  16. Study on transient beam loading compensation for China ADS proton linac injector II

    NASA Astrophysics Data System (ADS)

    Gao, Zheng; He, Yuan; Wang, Xian-Wu; Chang, Wei; Zhang, Rui-Feng; Zhu, Zheng-Long; Zhang, Sheng-Hu; Chen, Qi; Powers, Tom

    2016-05-01

    Significant transient beam loading effects were observed during beam commissioning tests of prototype II of the injector for the accelerator driven sub-critical (ADS) system, which took place at the Institute of Modern Physics, Chinese Academy of Sciences, between October and December 2014. During these tests, experiments were performed with continuous wave (CW) operation of the cavities and pulsed beam current, and the system was configured to make use of a prototype digital low level radio frequency (LLRF) controller. The system was originally operated in pulsed mode with a simple proportional-integral-derivative (PID) feedback control algorithm, which was not able to maintain the desired gradient regulation during pulsed 10 mA beam operations. A simple transient beam loading compensation method, combining proportional-integral (PI) feedback with a feedforward control algorithm, was implemented in order to significantly reduce the beam-induced transient effect on the cavity gradients. The superconducting cavity field variation was reduced to less than 1.7% after turning on this control algorithm. The design and experimental results of this system are presented in this paper. Supported by National Natural Science Foundation of China (91426303, 11525523)
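
    The control idea is compact: a PI loop regulates the gradient error while a feedforward term injects the expected beam-loading compensation as soon as the pulse arrives. A toy discrete-time sketch (the gains and the first-order cavity model are arbitrary assumptions, not the LLRF firmware):

      def controller_step(v, setpoint, beam, kp, ki, integ, ff_gain):
          # PI feedback on the gradient error plus a feedforward kick
          err = setpoint - v
          integ += err
          return kp * err + ki * integ + ff_gain * beam, integ

      v, integ = 1.0, 0.0
      for t in range(200):
          beam = 1.0 if 50 <= t < 150 else 0.0          # pulsed beam (arbitrary units)
          drive, integ = controller_step(v, 1.0, beam, 0.2, 0.05, integ, 0.5)
          v += 0.1 * (drive - 0.5 * beam - 0.05 * v)    # crude cavity response model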

  17. Nonsequential Computation and Laws of Nature.

    DTIC Science & Technology

    1986-05-01

    computing engines arose as a byproduct of the Manhattan Project in World War II. Broadly speaking, their purpose was to compute numerical solutions to...nature, and to representing algorithms in structures of space and time. After the Manhattan Project had been fulfilled, computer designers quickly pro

  18. A Real-Time Marker-Based Visual Sensor Based on a FPGA and a Soft Core Processor

    PubMed Central

    Tayara, Hilal; Ham, Woonchul; Chong, Kil To

    2016-01-01

    This paper introduces a real-time marker-based visual sensor architecture for mobile robot localization and navigation. A hardware acceleration architecture for post video processing system was implemented on a field-programmable gate array (FPGA). The pose calculation algorithm was implemented in a System on Chip (SoC) with an Altera Nios II soft-core processor. For every frame, single pass image segmentation and Feature Accelerated Segment Test (FAST) corner detection were used for extracting the predefined markers with known geometries in FPGA. Coplanar PosIT algorithm was implemented on the Nios II soft-core processor supplied with floating point hardware for accelerating floating point operations. Trigonometric functions have been approximated using Taylor series and cubic approximation using Lagrange polynomials. Inverse square root method has been implemented for approximating square root computations. Real time results have been achieved and pixel streams have been processed on the fly without any need to buffer the input frame for further implementation. PMID:27983714
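
    The abstract does not spell out which inverse-square-root approximation was used; the best-known software version, a bit-level initial guess refined by one Newton-Raphson step, looks as follows in Python:

      import struct

      def inv_sqrt(x: float) -> float:
          i = struct.unpack("<I", struct.pack("<f", x))[0]
          i = 0x5F3759DF - (i >> 1)                 # magic-constant initial estimate
          y = struct.unpack("<f", struct.pack("<I", i))[0]
          return y * (1.5 - 0.5 * x * y * y)        # one Newton refinement step

      # sqrt(x) is then recovered as x * inv_sqrt(x)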

  19. BetaTPred: prediction of beta-TURNS in a protein using statistical algorithms.

    PubMed

    Kaur, Harpreet; Raghava, G P S

    2002-03-01

    Beta-turns play an important role from a structural and functional point of view. Beta-turns are the most common type of non-repetitive structure in proteins and comprise, on average, 25% of the residues. In the past, numerous methods have been developed to predict beta-turns in a protein. Most of these prediction methods are based on statistical approaches. In order to utilize the full potential of these methods, there is a need to develop a web server. This paper describes a web server called BetaTPred, developed for predicting beta-turns in a protein from its amino acid sequence. BetaTPred allows the user to predict turns in a protein using existing statistical algorithms. It also allows the user to predict different types of beta-turns, e.g. types I, I', II, II', VI, VIII and non-specific. This server assists users in predicting the consensus beta-turns in a protein. The server is accessible from http://imtech.res.in/raghava/betatpred/

  20. Programming an interim report on the SETL project. Part I: generalities. Part II: the SETL language and examples of its use

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schwartz, J T

    1975-06-01

    A summary of work during the past several years on SETL, a new programming language drawing its dictions and basic concepts from the mathematical theory of sets, is presented. The work was started with the idea that a programming language modeled after an appropriate version of the formal language of mathematics might allow a programming style with some of the succinctness of mathematics, and that this might ultimately enable one to express and experiment with more complex algorithms than are now within reach. Part I discusses the general approach followed in the work. Part II focuses directly on the details of the SETL language as it is now defined. It describes the facilities of SETL, includes short libraries of miscellaneous algorithms and of code optimization algorithms illustrating the use of SETL, and gives a detailed description of the manner in which the set-theoretic primitives provided by SETL are currently implemented. (RWR)

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang Yueyang; Deng Licai; Liu Chao

    A total of ~640,000 objects from the LAMOST pilot survey have been publicly released. In this work, we present a catalog of DA white dwarfs (DAWDs) from the entire pilot survey. We outline a new algorithm for the selection of white dwarfs (WDs) by fitting Sersic profiles to the Balmer Hβ, Hγ, and Hδ lines of the spectra, and calculating the equivalent width of the Ca II K line. Two thousand nine hundred sixty-four candidates are selected by constraining the fitting parameters and the equivalent width of the Ca II K line. All the spectra of candidates are visually inspected. We identify 230 DAWDs (59 of which are already included in the Villanova and SDSS WD catalogs), 20 of which are DAWDs with non-degenerate companions. In addition, 128 candidates are classified as DAWDs/subdwarfs, which means the classifications are ambiguous. The result is consistent with the expected DAWD number estimated based on the LEGUE target selection algorithm.
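
    The Ca II K selection cut rests on the equivalent width, which is just the integral of the normalized line depth. A minimal sketch, assuming the continuum has already been estimated:

      import numpy as np

      def equivalent_width(wavelength, flux, continuum):
          # EW = integral of (1 - F/F_cont) d(lambda); positive for an absorption line
          return np.trapz(1.0 - flux / continuum, wavelength)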

  2. A Real-Time Marker-Based Visual Sensor Based on a FPGA and a Soft Core Processor.

    PubMed

    Tayara, Hilal; Ham, Woonchul; Chong, Kil To

    2016-12-15

    This paper introduces a real-time marker-based visual sensor architecture for mobile robot localization and navigation. A hardware acceleration architecture for post video processing system was implemented on a field-programmable gate array (FPGA). The pose calculation algorithm was implemented in a System on Chip (SoC) with an Altera Nios II soft-core processor. For every frame, single pass image segmentation and Feature Accelerated Segment Test (FAST) corner detection were used for extracting the predefined markers with known geometries in FPGA. Coplanar PosIT algorithm was implemented on the Nios II soft-core processor supplied with floating point hardware for accelerating floating point operations. Trigonometric functions have been approximated using Taylor series and cubic approximation using Lagrange polynomials. Inverse square root method has been implemented for approximating square root computations. Real time results have been achieved and pixel streams have been processed on the fly without any need to buffer the input frame for further implementation.

  3. Algorithmic Complexity. Volume II.

    DTIC Science & Technology

    1982-06-01

    digital computers, this improvement will go unnoticed if only a few complex products are to be taken; however, it can become increasingly important as...computed in the reverse order. If the products are formed moving from the top of the tree downward, and then the divisions are performed going from the...the reverse order, going up the tree. (r = a mod m means that r is the remainder when a is divided by m.) The overall running time of the algorithm is

  4. Temporally Adjusted Complex Ambiguity Function Mapping Algorithm for Geolocating Radio Frequency Signals

    DTIC Science & Technology

    2014-12-01

    Introduction 1.1 Background In today's world of high-tech warfare, we have developed the ability to deploy virtually any type of ordnance quickly and...this time due to time constraints and the high computational complexity involved in the current implementation of the Moss algorithm. Full maps, with

  5. Delay compensation in integrated communication and control systems. II - Implementation and verification

    NASA Technical Reports Server (NTRS)

    Luck, Rogelio; Ray, Asok

    1990-01-01

    The implementation and verification of the delay-compensation algorithm are addressed. The delay compensator has been experimentally verified on an IEEE 802.4 network testbed for velocity control of a DC servomotor. The performance of the delay-compensation algorithm was also examined by combined discrete-event and continuous-time simulation of the flight control system of an advanced aircraft that uses the SAE (Society of Automotive Engineers) linear token passing bus for data communications.

  6. Computation of repetitions and regularities of biologically weighted sequences.

    PubMed

    Christodoulakis, M; Iliopoulos, C; Mouchard, L; Perdikuri, K; Tsakalidis, A; Tsichlas, K

    2006-01-01

    Biological weighted sequences are used extensively in molecular biology as profiles for protein families, in the representation of binding sites and often for the representation of sequences produced by a shotgun sequencing strategy. In this paper, we address three fundamental problems in the area of biologically weighted sequences: (i) computation of repetitions, (ii) pattern matching, and (iii) computation of regularities. Our algorithms can be used as basic building blocks for more sophisticated algorithms applied on weighted sequences.
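
    For intuition, pattern matching on a weighted sequence asks whether the product of per-position symbol probabilities clears a threshold. A small sketch of that definition (not the authors' algorithms, which achieve far better complexity):

      def weighted_matches(weighted_seq, pattern, threshold):
          # weighted_seq[i] maps each symbol to its probability at position i
          m, hits = len(pattern), []
          for i in range(len(weighted_seq) - m + 1):
              p = 1.0
              for j, symbol in enumerate(pattern):
                  p *= weighted_seq[i + j].get(symbol, 0.0)
                  if p < threshold:      # probabilities only shrink, so prune early
                      break
              if p >= threshold:
                  hits.append((i, p))
          return hits

      # weighted_matches([{'A': .5, 'C': .5}, {'A': 1.}, {'G': .9, 'T': .1}], "AAG", .25)
      # -> [(0, 0.45)]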

  7. Towards Development of Clustering Applications for Large-Scale Comparative Genotyping and Kinship Analysis Using Y-Short Tandem Repeats.

    PubMed

    Seman, Ali; Sapawi, Azizian Mohd; Salleh, Mohd Zaki

    2015-06-01

    Y-chromosome short tandem repeats (Y-STRs) are genetic markers with practical applications in human identification. However, where mass identification is required (e.g., in the aftermath of disasters with significant fatalities), the efficiency of the process could be improved with new statistical approaches. Clustering applications are relatively new tools for large-scale comparative genotyping, and the k-Approximate Modal Haplotype (k-AMH), an efficient algorithm for clustering large-scale Y-STR data, represents a promising method for developing these tools. In this study we improved the k-AMH and produced three new algorithms: the Nk-AMH I (including a new initial cluster center selection), the Nk-AMH II (including a new dominant weighting value), and the Nk-AMH III (combining I and II). The Nk-AMH III was the superior algorithm, with mean clustering accuracy that increased in four out of six datasets and remained at 100% in the other two. Additionally, the Nk-AMH III achieved a 2% higher overall mean clustering accuracy score than the k-AMH, as well as optimal accuracy for all datasets (0.84-1.00). With inclusion of the two new methods, the Nk-AMH III produced an optimal solution for clustering Y-STR data; thus, the algorithm has potential for further development towards fully automatic clustering of any large-scale genotypic data.

  8. Predicting the survival of diabetes using neural network

    NASA Astrophysics Data System (ADS)

    Mamuda, Mamman; Sathasivam, Saratha

    2017-08-01

    Data mining techniques are now used to predict diseases in the health care industry. The neural network is one of the prevailing data mining methods in the intelligent systems field for predicting diseases in health care. This paper presents a study on the prediction of the survival of diabetes patients using different supervised learning algorithms for neural networks. Three learning algorithms are considered in this study: (i) the Levenberg-Marquardt learning algorithm, (ii) the Bayesian regularization learning algorithm, and (iii) the scaled conjugate gradient learning algorithm. The network is trained using the Pima Indian Diabetes Dataset with the help of MATLAB R2014(a) software. The performance of each algorithm is further discussed through regression analysis. The prediction accuracy of the best algorithm is then computed to validate its predictions.

  9. Advancing computational methods for calibration of the Soil and Water Assessment Tool (SWAT): Application for modeling climate change impacts on water resources in the Upper Neuse Watershed of North Carolina

    NASA Astrophysics Data System (ADS)

    Ercan, Mehmet Bulent

    Watershed-scale hydrologic models are used for a variety of applications from flood prediction, to drought analysis, to water quality assessments. A particular challenge in applying these models is calibration of the model parameters, many of which are difficult to measure at the watershed-scale. A primary goal of this dissertation is to contribute new computational methods and tools for calibration of watershed-scale hydrologic models and the Soil and Water Assessment Tool (SWAT) model, in particular. SWAT is a physically-based, watershed-scale hydrologic model developed to predict the impact of land management practices on water quality and quantity. The dissertation follows a manuscript format meaning it is comprised of three separate but interrelated research studies. The first two research studies focus on SWAT model calibration, and the third research study presents an application of the new calibration methods and tools to study climate change impacts on water resources in the Upper Neuse Watershed of North Carolina using SWAT. The objective of the first two studies is to overcome computational challenges associated with calibration of SWAT models. The first study evaluates a parallel SWAT calibration tool built using the Windows Azure cloud environment and a parallel version of the Dynamically Dimensioned Search (DDS) calibration method modified to run in Azure. The calibration tool was tested for six model scenarios constructed using three watersheds of increasing size (the Eno, Upper Neuse, and Neuse) for both a 2 year and 10 year simulation duration. Leveraging the cloud as an on demand computing resource allowed for a significantly reduced calibration time such that calibration of the Neuse watershed went from taking 207 hours on a personal computer to only 3.4 hours using 256 cores in the Azure cloud. The second study aims at increasing SWAT model calibration efficiency by creating an open source, multi-objective calibration tool using the Non-Dominated Sorting Genetic Algorithm II (NSGA-II). This tool was demonstrated through an application for the Upper Neuse Watershed in North Carolina, USA. The objective functions used for the calibration were Nash-Sutcliffe (E) and Percent Bias (PB), and the objective sites were the Flat, Little, and Eno watershed outlets. The results show that the use of multi-objective calibration algorithms for SWAT calibration improved model performance especially in terms of minimizing PB compared to the single objective model calibration. The third study builds upon the first two studies by leveraging the new calibration methods and tools to study future climate impacts on the Upper Neuse watershed. Statistically downscaled outputs from eight Global Circulation Models (GCMs) were used for both low and high emission scenarios to drive a well calibrated SWAT model of the Upper Neuse watershed. The objective of the study was to understand the potential hydrologic response of the watershed, which serves as a public water supply for the growing Research Triangle Park region of North Carolina, under projected climate change scenarios. The future climate change scenarios, in general, indicate an increase in precipitation and temperature for the watershed in coming decades. The SWAT simulations using the future climate scenarios, in general, suggest an increase in soil water and water yield, and a decrease in evapotranspiration within the Upper Neuse watershed. 
In summary, this dissertation advances the field of watershed-scale hydrologic modeling by (i) providing some of the first work to apply cloud computing for the computationally demanding task of model calibration; (ii) providing a new, open source library that can be used by SWAT modelers to perform multi-objective calibration of their models; and (iii) advancing understanding of climate change impacts on water resources for an important watershed in the Research Triangle Park region of North Carolina. The third study leveraged the methodological advances presented in the first two studies. Therefore, the dissertation contains three independent but interrelated studies that collectively advance the field of watershed-scale hydrologic modeling and analysis.
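
    The two calibration objectives named in the second study are standard and easy to state exactly; a small sketch of both (an NSGA-II run would maximize E and drive PB toward zero at each objective site):

      import numpy as np

      def nash_sutcliffe(obs, sim):
          # E = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2); 1 is a perfect fit
          obs, sim = np.asarray(obs, float), np.asarray(sim, float)
          return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

      def percent_bias(obs, sim):
          # PB = 100 * sum(obs - sim) / sum(obs); 0 means no systematic bias
          obs, sim = np.asarray(obs, float), np.asarray(sim, float)
          return 100.0 * np.sum(obs - sim) / np.sum(obs)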

  10. SAGE II V7.00 Release Notes

    Atmospheric Science Data Center

    2013-02-21

    ... algorithms from SAGE III v4.00 Ceased removal of the water vapor extinction in the 600nm channel due to uncertainty in the H2O ... within the main aerosol layer generally reflecting excellent quality in previous versions. There is some minor decrease in extinction at ...

  11. 48 CFR 252.227-7013 - Rights in technical data-Noncommercial items.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... causing a computer to perform a specific operation or series of operations. (3) Computer software means computer programs, source code, source code listings, object code listings, design details, algorithms... or will be developed exclusively with Government funds; (ii) Studies, analyses, test data, or similar...

  12. 48 CFR 252.227-7013 - Rights in technical data-Noncommercial items.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... causing a computer to perform a specific operation or series of operations. (3) Computer software means computer programs, source code, source code listings, object code listings, design details, algorithms... or will be developed exclusively with Government funds; (ii) Studies, analyses, test data, or similar...

  13. 48 CFR 252.227-7013 - Rights in technical data-Noncommercial items.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... causing a computer to perform a specific operation or series of operations. (3) Computer software means computer programs, source code, source code listings, object code listings, design details, algorithms... or will be developed exclusively with Government funds; (ii) Studies, analyses, test data, or similar...

  14. 48 CFR 252.227-7013 - Rights in technical data-Noncommercial items.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... causing a computer to perform a specific operation or series of operations. (3) Computer software means computer programs, source code, source code listings, object code listings, design details, algorithms... developed exclusively with Government funds; (ii) Studies, analyses, test data, or similar data produced for...

  15. [Research and realization of signal processing algorithms based on FPGA in digital ophthalmic ultrasonography imaging].

    PubMed

    Fang, Simin; Zhou, Sheng; Wang, Xiaochun; Ye, Qingsheng; Tian, Ling; Ji, Jianjun; Wang, Yanqun

    2015-01-01

    To design and improve signal processing algorithms for ophthalmic ultrasonography based on FPGA, three signal processing modules were implemented in the Verilog HDL hardware language in Quartus II: a fully parallel distributed dynamic filter, digital quadrature demodulation, and logarithmic compression. Compared with the original system, the hardware cost is reduced, the whole image is clearer, more information about the deep eyeball is contained in the image, and the depth of detection increases from 5 cm to 6 cm. The new algorithms meet the design requirements and optimize the system, effectively improving the image quality of existing equipment.
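
    Digital quadrature demodulation itself is a textbook operation: mix the RF line with cosine and sine carriers, low-pass both rails, and log-compress the envelope. A numpy sketch with assumed parameters (the paper's implementation is fixed-point Verilog, not this):

      import numpy as np

      def quadrature_demodulate(rf, fs, f0, taps=64):
          n = np.arange(rf.size)
          i = rf * np.cos(2 * np.pi * f0 * n / fs)
          q = -rf * np.sin(2 * np.pi * f0 * n / fs)
          lp = np.ones(taps) / taps                       # crude moving-average low-pass
          i, q = np.convolve(i, lp, "same"), np.convolve(q, lp, "same")
          envelope = np.sqrt(i ** 2 + q ** 2)
          return 20.0 * np.log10(envelope + 1e-12)        # logarithmic compression (dB)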

  16. Effect of centrifuge test on blood serum lipids index of cadet pilots.

    PubMed

    Wochyński, Zbigniew; Kowalczuk, Krzysztof; Kłossowski, Marek; Sobiech, Krzysztof A

    2016-01-01

    This study aimed at investigating the relationship between the lipid index (WS) in the examined cadets and the duration of exposure to +Gz in the human centrifuge. The study involved 19 first-year cadets of the Polish Air Force Academy in Dęblin. Tests in the human centrifuge were repeated twice, i.e. prior to (test I) and 45 days after (test II). After exposure to +Gz, the examined cadets were divided into 2 groups. Group I (N=11) included cadets subjected to a shorter total duration of exposure to +Gz, while group II (N=8) included cadets with a longer total duration of exposure to +Gz. Total cholesterol (TC), high density lipoprotein (HDL), triglycerides (TG), and apolipoproteins A1 and B were assayed in blood serum prior to (assay A) and after (assay B) both exposures to +Gz. The low density lipoprotein (LDL) level was estimated from the Friedewald formula. WS is a mathematical index of the authors' own design. WS was higher in group II (assay A - 10.0 and B - 10.08) of test I in the human centrifuge than in group I, where the WS values were 6.91 and 6.96, respectively. WS was also higher in group II in assay A - 10.0 and B - 10.1 of test II in the human centrifuge than in group I - 6.96 and 6.80, respectively. The higher value of WS in group II, both after the first and second exposures to +Gz in the human centrifuge, in comparison with group I, indicated its usefulness for determining the maximum capability of applying acceleration of the interval type during training in the human centrifuge.
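
    The Friedewald estimate referred to above is, in its common statement (all quantities in mg/dL, valid only for TG below 400 mg/dL):

      \mathrm{LDL} = \mathrm{TC} - \mathrm{HDL} - \frac{\mathrm{TG}}{5}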

  17. A support vector machine for spectral classification of emission-line galaxies from the Sloan Digital Sky Survey

    NASA Astrophysics Data System (ADS)

    Shi, Fei; Liu, Yu-Yan; Sun, Guang-Lan; Li, Pei-Yu; Lei, Yu-Ming; Wang, Jian

    2015-10-01

    The emission lines of galaxies originate from massive young stars or supermassive black holes. As a result, the spectral classification of emission-line galaxies into star-forming galaxies, active galactic nucleus (AGN) hosts, or compositions of both relates closely to galaxy formation and evolution. To find an efficient and automatic spectral classification method, especially for large surveys and huge databases, a support vector machine (SVM) supervised learning algorithm is applied to a sample of emission-line galaxies from the Sloan Digital Sky Survey (SDSS) data release 9 (DR9) provided by the Max Planck Institute and the Johns Hopkins University (MPA/JHU). A two-step approach is adopted. (i) The SVM is trained with a subset of objects that are known to be AGN hosts, composites or star-forming galaxies, treating the strong emission-line flux measurements as input feature vectors in an n-dimensional space, where n is the number of strong emission-line flux ratios. (ii) After training on a sample of emission-line galaxies, the remaining galaxies are automatically classified. In the classification process, we use a 10-fold cross-validation technique. We show that classification diagrams based on [N II]/Hα versus another emission-line ratio, such as [O III]/Hβ, [Ne III]/[O II], ([O III]λ4959+[O III]λ5007)/[O III]λ4363, [O II]/Hβ, [Ar III]/[O III], [S II]/Hα, and [O I]/Hα, plus colour, allow us to separate unambiguously AGN hosts, composites and star-forming galaxies. Among them, the diagram of [N II]/Hα versus [O III]/Hβ achieved an accuracy of 99 per cent in separating the three classes of objects. The other diagrams give an accuracy of ˜91 per cent.
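
    The two-step SVM procedure maps directly onto a few lines of scikit-learn; the random arrays below merely stand in for the MPA/JHU line-ratio measurements and class labels:

      import numpy as np
      from sklearn.svm import SVC
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(1)
      X = rng.normal(size=(300, 2))     # e.g. log([N II]/Ha), log([O III]/Hb) per galaxy
      y = rng.integers(0, 3, size=300)  # 0 = star-forming, 1 = composite, 2 = AGN host

      svm = SVC(kernel="rbf", C=1.0, gamma="scale")
      scores = cross_val_score(svm, X, y, cv=10)          # 10-fold cross-validation
      print("mean accuracy:", scores.mean())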

  18. Dentate gyrus-cornu ammonis (CA) 4 volume is decreased and associated with depressive episodes and lipid peroxidation in bipolar II disorder: Longitudinal and cross-sectional analyses.

    PubMed

    Elvsåshagen, Torbjørn; Zuzarte, Pedro; Westlye, Lars T; Bøen, Erlend; Josefsen, Dag; Boye, Birgitte; Hol, Per K; Malt, Ulrik F; Young, L Trevor; Andreazza, Ana C

    2016-12-01

    Reduced dentate gyrus volume and increased oxidative stress have emerged as potential pathophysiological mechanisms in bipolar disorder. However, the relationship between dentate gyrus volume and peripheral oxidative stress markers remains unknown. Here, we examined dentate gyrus-cornu ammonis (CA) 4 volume longitudinally in patients with bipolar II disorder (BD-II) and healthy controls and investigated whether BD-II is associated with elevated peripheral levels of oxidative stress. We acquired high-resolution structural 3T-magnetic resonance imaging (MRI) images and quantified hippocampal subfield volumes using an automated segmentation algorithm in individuals with BD-II (n=29) and controls (n=33). The participants were scanned twice, at study inclusion and on average 2.4 years later. In addition, we measured peripheral levels of two lipid peroxidation markers (4-hydroxy-2-nonenal [4-HNE] and lipid hydroperoxides [LPH]). First, we demonstrated that the automated hippocampal subfield segmentation technique employed in this work reliably measured dentate gyrus-CA4 volume. Second, we found a decreased left dentate gyrus-CA4 volume in patients and that a larger number of depressive episodes between T1 and T2 predicted greater volume decline. Finally, we showed that 4-HNE was elevated in BD-II and that 4-HNE was negatively associated with left and right dentate gyrus-CA4 volumes in patients. These results are consistent with a role for the dentate gyrus in the pathophysiology of bipolar disorder and suggest that depressive episodes and elevated oxidative stress might contribute to hippocampal volume decreases. In addition, these findings provide further support for the hypothesis that peripheral lipid peroxidation markers may reflect brain alterations in bipolar disorders. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  19. Improved WIMP-search reach of the CDMS II germanium data

    DOE PAGES

    Agnese, R.

    2015-10-12

    CDMS II data from the five-tower runs at the Soudan Underground Laboratory were reprocessed with an improved charge-pulse fitting algorithm. Two new analysis techniques to reject surface-event backgrounds were applied to the 612 kg-days germanium-detector weakly interacting massive particle (WIMP)-search exposure. An extended analysis was also completed by decreasing the 10 keV analysis threshold to ~5 keV, to increase sensitivity near a WIMP mass of 8 GeV/c². After unblinding, there were zero candidate events above a deposited energy of 10 keV and six events in the lower-threshold analysis. This yielded minimum WIMP-nucleon spin-independent scattering cross-section limits of 1.8×10⁻⁴⁴ and 1.18×10⁻⁴¹ cm² at 90% confidence for 60 and 8.6 GeV/c² WIMPs, respectively. This improves the previous CDMS II result by a factor of 2.4 (2.7) for 60 (8.6) GeV/c² WIMPs.

  1. Study of the transverse beam motion in the DARHT Phase II accelerator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Yu-Jiuan; Fawley, W M; Houck, T L

    1998-08-20

    The accelerator for the second axis of the Dual Axis Radiographic Hydrodynamic Test (DARHT) facility will accelerate a 4-kA, 3-MeV, 2-µs-long electron current pulse to 20 MeV. The energy variation of the beam within the flat-top portion of the current pulse is ±0.5%. The performance of the DARHT Phase II radiographic machine requires the transverse beam motion to be much less than the beam spot size, which is about 1.5 mm in diameter on the x-ray converter. In general, the leading causes of transverse beam motion in an accelerator are the beam breakup instability (BBU) and the corkscrew motion. We have modeled the transverse beam motion in the DARHT Phase II accelerator with various magnetic tunes and accelerator cell configurations by using the BREAKUP code. The predicted sensitivity of corkscrew motion and BBU growth to different tuning algorithms will be presented.

  2. The ETA-II linear induction accelerator and IMP wiggler: A high-average-power millimeter-wave free-electron-laser for plasma heating

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Allen, S.L.; Scharlemann, E.T.

    1992-05-01

    We have constructed a 140-GHz free-electron laser to generate high-average-power microwaves for heating the MTX tokamak plasma. A 5.5-m steady-state wiggler (Intense Microwave Prototype-IMP) has been installed at the end of the upgraded 60-cell ETA-II accelerator, and is configured as an FEL amplifier for the output of a 140-GHz long-pulse gyrotron. Improvements in the ETA-II accelerator include a multicable-feed power distribution network, better magnetic alignment using a stretched-wire alignment technique (SWAT), and a computerized tuning algorithm that directly minimizes the transverse sweep (corkscrew motion) of the electron beam. The upgrades were first tested on the 20-cell, 3-MeV front end of ETA-II and resulted in greatly improved energy flatness and reduced corkscrew motion. The upgrades were then incorporated into the full 60-cell configuration of ETA-II, along with modifications to allow operation in 50-pulse bursts at pulse repetition frequencies up to 5 kHz. The pulse power modifications were developed and tested on the High Average Power Test Stand (HAPTS), and have significantly reduced the voltage and timing jitter of the MAG 1D magnetic pulse compressors. The 2-3 kA, 6-7 MeV beam from ETA-II is transported to the IMP wiggler, which has been reconfigured as a laced wiggler, with both permanent magnets and electromagnets, for high magnetic field operation. Tapering of the wiggler magnetic field is completely computer controlled and can be optimized based on the output power. The microwaves from the FEL are transmitted to the MTX tokamak by a windowless quasi-optical microwave transmission system. Experiments at MTX are focused on studies of electron-cyclotron-resonance heating (ECRH) of the plasma. We summarize here the accelerator and pulse power modifications, and describe the status of ETA-II, IMP, and MTX operations.

  3. Object Occlusion Detection Using Automatic Camera Calibration for a Wide-Area Video Surveillance System

    PubMed Central

    Jung, Jaehoon; Yoon, Inhye; Paik, Joonki

    2016-01-01

    This paper presents an object occlusion detection algorithm using object depth information that is estimated by automatic camera calibration. Object occlusion is a major factor degrading the performance of object tracking and recognition. To detect an object occlusion, the proposed algorithm consists of three steps: (i) automatic camera calibration using both moving objects and a background structure; (ii) object depth estimation; and (iii) detection of occluded regions. The proposed algorithm estimates the depth of the object without extra sensors, using only a generic red, green and blue (RGB) camera. As a result, the proposed algorithm can be applied to improve the performance of object tracking and object recognition algorithms for video surveillance systems. PMID:27347978
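
    As a rough illustration of steps (ii) and (iii), the sketch below derives an approximate depth for an object standing on a flat ground plane from the image row of its lowest point, and flags the overlap between a nearer and a farther bounding box as the occluded region. It assumes a pinhole camera of known focal length, principal point and height (a stand-in for the paper's automatic calibration); all names and constants are illustrative, not the authors' implementation.

        import numpy as np

        def depth_from_foot_row(y_foot, fy, cy, cam_height):
            # Rough depth of an object standing on a flat ground plane,
            # from the image row of its lowest point (pinhole camera with
            # optical axis roughly parallel to the ground; assumed geometry).
            return fy * cam_height / max(y_foot - cy, 1e-6)

        def occluded_region(box_near, box_far):
            # Intersection of two axis-aligned boxes (x0, y0, x1, y1);
            # the overlap on the farther object is flagged as occluded.
            x0 = max(box_near[0], box_far[0]); y0 = max(box_near[1], box_far[1])
            x1 = min(box_near[2], box_far[2]); y1 = min(box_near[3], box_far[3])
            return (x0, y0, x1, y1) if x0 < x1 and y0 < y1 else None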

  4. An FPGA-based trigger for the phase II of the MEG experiment

    NASA Astrophysics Data System (ADS)

    Baldini, A.; Bemporad, C.; Cei, F.; Galli, L.; Grassi, M.; Morsani, F.; Nicolò, D.; Ritt, S.; Venturini, M.

    2016-07-01

    For the phase II of MEG, we are going to develop a combined trigger and DAQ system. Here we focus on the former side, which operates an on-line reconstruction of detector signals and event selection within 450 μs from event occurrence. Trigger concentrator boards (TCB) are under development to gather data from different crates, each connected to a set of detector channels, to accomplish higher-level algorithms to issue a trigger in the case of a candidate signal event. We describe the major features of the new system, in comparison with phase I, as well as its performances in terms of selection efficiency and background rejection.

  5. Recent developments in software for the Belle II aerogel RICH

    NASA Astrophysics Data System (ADS)

    Šantelj, L.; Adachi, I.; Dolenec, R.; Hataya, K.; Iori, S.; Iwata, S.; Kakuno, H.; Kataura, R.; Kawai, H.; Kindo, H.; Kobayashi, T.; Korpar, S.; Križan, P.; Kumita, T.; Mrvar, M.; Nishida, S.; Ogawa, K.; Ogawa, S.; Pestotnik, R.; Sumiyoshi, T.; Tabata, M.; Yonenaga, M.; Yusa, Y.

    2017-12-01

    For the Belle II spectrometer a proximity focusing RICH counter with an aerogel radiator (ARICH) will be employed as a PID system in the forward end-cap region of the spectrometer. The detector will provide about 4σ separation of pions and kaons up to momenta of 3.5 GeV/c, at the kinematic limits of the experiment. We present the up-to-date status of the ARICH simulation and reconstruction software, focusing on the recent improvements of the reconstruction algorithms and detector description in the Geant4 simulation. In addition, as a demonstration of detector readout software functionality we show the first cosmic ray Cherenkov rings observed in the ARICH.

  6. Mathematical and Statistical Software Index.

    DTIC Science & Technology

    1986-08-01

    (geometric) mean; HMEAN - harmonic mean; MEDIAN - median; MODE - mode; QUANT - quantiles; OGIVE - distribution curve; IQRNG - interpercentile range; RANGE - range ... multiphase pivoting algorithm; cross-classification; multiple discriminant analysis; cross-tabulation; multiple-objective model; curve fitting ... *RANGEX (Correct Correlations for Curtailment of Range) ... *RUMMAGE II (Analysis

  7. Statistical Analysis of the LMS and Modified Stochastic Gradient Algorithms

    DTIC Science & Technology

    1989-05-14

    ... of the input data and incorporated directly into recursive descriptions and/or nonuniform weighted ... the algorithm ... a data-dependent time ... the weight transient behavior. These results are a measure of how rapidly the algorithm ...

  8. Network Aggregation in Transportation Planning : Volume II : A Fixed Point Method for Treating Traffic Equilibria

    DOT National Transportation Integrated Search

    1978-04-01

    Volume 2 defines a new algorithm for the network equilibrium model that works in the space of path flows and is based on fixed-point theory. The goals of the study were broadly defined as the identification of aggregation practices and ...

  9. 14 CFR 255.4 - Display of information.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... and the weight given to each criterion and the specifications used by the system's programmers in constructing the algorithm. (c) Systems shall not use any factors directly or indirectly relating to carrier...” connecting flights; and (iv) The weight given to each criterion in paragraphs (c)(3)(ii) and (iii) of this...

  10. 48 CFR 252.227-7013 - Rights in technical data-Noncommercial items.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... causing a computer to perform a specific operation or series of operations. (3) Computer software means computer programs, source code, source code listings, object code listings, design details, algorithms... funds; (ii) Studies, analyses, test data, or similar data produced for this contract, when the study...

  11. Efficient Pricing Technique for Resource Allocation Problem in Downlink OFDM Cognitive Radio Networks

    NASA Astrophysics Data System (ADS)

    Abdulghafoor, O. B.; Shaat, M. M. R.; Ismail, M.; Nordin, R.; Yuwono, T.; Alwahedy, O. N. A.

    2017-05-01

    In this paper, the problem of resource allocation in OFDM-based downlink cognitive radio (CR) networks is addressed. The purpose of this research is to decrease the computational complexity of the resource allocation algorithm for the downlink CR network while respecting the interference constraint of the primary network. This is achieved by adopting a pricing scheme to develop a power allocation algorithm with two aims: (i) reducing the complexity of the proposed algorithm; and (ii) providing firm control of the interference introduced to primary users (PUs). The performance of the proposed algorithm is tested for OFDM-based CR networks. The simulation results show that the performance of the proposed algorithm approaches that of the optimal algorithm at a lower computational complexity, i.e., O(N log N), which makes the proposed algorithm suitable for more practical applications.
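
    The abstract does not give the exact pricing rule, but one common low-complexity realization of price-controlled power allocation is a bisection search on a single interference price, as sketched below. Here h, g, noise and I_th are assumed inputs (subcarrier channel gains, interference gains toward the PU, noise power, and the PU interference threshold); this is an illustration in the same spirit, not the authors' algorithm.

        import numpy as np

        def priced_power_allocation(h, g, noise, I_th, iters=50):
            # p_i = [1/(lam * g_i) - noise/h_i]^+ with the price lam tuned by
            # bisection (in log space) so total interference sum(g_i * p_i)
            # stays within I_th.
            lam_lo, lam_hi = 1e-9, 1e9
            for _ in range(iters):
                lam = np.sqrt(lam_lo * lam_hi)
                p = np.maximum(1.0 / (lam * g) - noise / h, 0.0)
                if (g * p).sum() > I_th:
                    lam_lo = lam   # too much interference: raise the price
                else:
                    lam_hi = lam
            return p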

  12. Algorithmic support for graphic images rotation in avionics

    NASA Astrophysics Data System (ADS)

    Kniga, E. V.; Gurjanov, A. V.; Shukalov, A. V.; Zharinov, I. O.

    2018-05-01

    Avionics device design raises the practical problem of developing and evaluating algorithms to rotate the images shown on the on-board display. Image rotation algorithms are part of the software of avionics devices in the on-board computers of airplanes and helicopters, and the images to be rotated contain fragments of the flight location map. Rotation in the display system can be performed either in software or in hardware; the software option is slower than the hardware one. A comparison of several rotation algorithms on test images is presented, with the algorithms realized in hardware using the Altera Quartus II design environment.
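
    For reference, the usual software baseline against which such hardware implementations are compared is inverse-mapping rotation with bilinear interpolation; a minimal numpy sketch (not the paper's implementation) is:

        import numpy as np

        def rotate_image(img, angle_deg):
            # Rotate a 2-D grayscale image about its centre: for each output
            # pixel, find its source coordinate with the inverse rotation and
            # interpolate bilinearly.
            h, w = img.shape
            a = np.deg2rad(angle_deg)
            cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
            yy, xx = np.mgrid[0:h, 0:w]
            xs = np.cos(a) * (xx - cx) + np.sin(a) * (yy - cy) + cx
            ys = -np.sin(a) * (xx - cx) + np.cos(a) * (yy - cy) + cy
            x0, y0 = np.floor(xs).astype(int), np.floor(ys).astype(int)
            fx, fy = xs - x0, ys - y0
            valid = (x0 >= 0) & (x0 < w - 1) & (y0 >= 0) & (y0 < h - 1)
            out = np.zeros_like(img, dtype=float)
            x0v, y0v, fxv, fyv = x0[valid], y0[valid], fx[valid], fy[valid]
            out[valid] = (img[y0v, x0v] * (1 - fxv) * (1 - fyv)
                          + img[y0v, x0v + 1] * fxv * (1 - fyv)
                          + img[y0v + 1, x0v] * (1 - fxv) * fyv
                          + img[y0v + 1, x0v + 1] * fxv * fyv)
            return out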

  13. Distributed Database Control and Allocation. Volume 2. Performance Analysis of Concurrency Control Algorithms.

    DTIC Science & Technology

    1983-10-01

    Concurrency Control Algorithms Computer Corporation of America Wente K. Lin, Philip A. Bernstein, Nathan Goodman and Jerry Nolte APPROVED FOR PUBLIC ...84 03 IZ 004 ’KV This report has been reviewed by the RADC Public Affairs Office (PA) an is releasable to the National Technical Information Service...NTIS). At NTIS it will be releasable to the general public , including foreign na~ions. RADC-TR-83-226, Vol II (of three) has been reviewed and is

  14. Algorithms and Heuristics for Time-Window-Constrained Traveling Salesman Problems.

    DTIC Science & Technology

    1985-09-01

    w-r.- v-- n - ,u-,, u- v-v-.: .r-r-ri v-. r, - t -. \\ _ . . . S :.:, 1 .J - 1 5 ,*’:: C - V * t_ t. . 4’ *,W Ii NAVAL POSTGRADUATE SCHOOL Monterey...q- -- Computational experience is re- ported for all the heuristics and algorithms we develop. DD IFOAN3 1473 EDITION OF I NOV 65 IS OBSOLETE N ...Approved by Ri .R n ~~Advisor Ric E... a, R. shencSo deader A-lan R. Washburn Chairman, -~ Department of Operaiions Research Knealg--.T _ yarshall

  15. Modeling and prediction of copper removal from aqueous solutions by nZVI/rGO magnetic nanocomposites using ANN-GA and ANN-PSO.

    PubMed

    Fan, Mingyi; Hu, Jiwei; Cao, Rensheng; Xiong, Kangning; Wei, Xionghui

    2017-12-21

    Reduced graphene oxide-supported nanoscale zero-valent iron (nZVI/rGO) magnetic nanocomposites were prepared and then applied to Cu(II) removal from aqueous solutions. Scanning electron microscopy, transmission electron microscopy, X-ray photoelectron spectroscopy and a superconducting quantum interference device (SQUID) magnetometer were used to characterize the nZVI/rGO nanocomposites. In order to reduce the number of experiments and the economic cost, response surface methodology (RSM) combined with artificial intelligence (AI) techniques, such as artificial neural networks (ANN), genetic algorithms (GA) and particle swarm optimization (PSO), was utilized as a major tool to model and optimize the removal processes, given the recent rapid advances in AI and its increasingly broad applications. Based on RSM, ANN-GA and ANN-PSO were employed to model the Cu(II) removal process and optimize the operating parameters, e.g., operating temperature, initial pH, initial concentration and contact time. The ANN-PSO model proved to be an effective tool for modeling and optimizing the Cu(II) removal, with a low absolute error and a high removal efficiency. Furthermore, isotherm, kinetic and thermodynamic studies and the XPS analysis were performed to explore the mechanisms of the Cu(II) removal process.
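
    A minimal sketch of the ANN-PSO idea: train a neural-network surrogate of removal efficiency on experimental data, then let a particle swarm search the operating box for the predicted optimum. The training arrays below are random placeholders, and the bounds on (temperature, pH, initial concentration, contact time) are assumed, not the paper's values.

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(0)
        # placeholder experimental design and responses
        X_train = rng.uniform([20, 2, 10, 5], [60, 8, 200, 120], size=(50, 4))
        y_train = rng.uniform(50, 99, size=50)
        ann = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000,
                           random_state=0).fit(X_train, y_train)

        def pso_maximize(f, lo, hi, n=30, iters=100, w=0.7, c1=1.5, c2=1.5):
            # Plain global-best PSO; maximizes f over the box [lo, hi].
            lo, hi = np.asarray(lo, float), np.asarray(hi, float)
            x = rng.uniform(lo, hi, size=(n, len(lo)))
            v = np.zeros_like(x)
            pbest, pval = x.copy(), np.array([f(p) for p in x])
            g = pbest[pval.argmax()]
            for _ in range(iters):
                r1, r2 = rng.random(x.shape), rng.random(x.shape)
                v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
                x = np.clip(x + v, lo, hi)
                val = np.array([f(p) for p in x])
                better = val > pval
                pbest[better], pval[better] = x[better], val[better]
                g = pbest[pval.argmax()]
            return g, pval.max()

        best_x, best_y = pso_maximize(lambda p: ann.predict(p[None, :])[0],
                                      [20, 2, 10, 5], [60, 8, 200, 120])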

  16. A customizable system for real-time image processing using the Blackfin DSProcessor and the MicroC/OS-II real-time kernel

    NASA Astrophysics Data System (ADS)

    Coffey, Stephen; Connell, Joseph

    2005-06-01

    This paper presents a development platform for real-time image processing based on the ADSP-BF533 Blackfin processor and the MicroC/OS-II real-time operating system (RTOS). MicroC/OS-II is a completely portable, ROMable, pre-emptive, real-time kernel. The Blackfin Digital Signal Processors (DSPs), incorporating the Analog Devices/Intel Micro Signal Architecture (MSA), are a broad family of 16-bit fixed-point products with a dual Multiply Accumulate (MAC) core. In addition, they have a rich instruction set with variable instruction length and both DSP and MCU functionality thus making them ideal for media based applications. Using the MicroC/OS-II for task scheduling and management, the proposed system can capture and process raw RGB data from any standard 8-bit greyscale image sensor in soft real-time and then display the processed result using a simple PC graphical user interface (GUI). Additionally, the GUI allows configuration of the image capture rate and the system and core DSP clock rates thereby allowing connectivity to a selection of image sensors and memory devices. The GUI also allows selection from a set of image processing algorithms based in the embedded operating system.

  17. JANE: efficient mapping of prokaryotic ESTs and variable length sequence reads on related template genomes

    PubMed Central

    2009-01-01

    Background ESTs or variable sequence reads can be available in prokaryotic studies well before a complete genome is known. Use cases include (i) transcriptome studies or (ii) single cell sequencing of bacteria. Without suitable software their further analysis and mapping would have to await finalization of the corresponding genome. Results The tool JANE rapidly maps ESTs or variable sequence reads in prokaryotic sequencing and transcriptome efforts to related template genomes. It provides an easy-to-use graphics interface for information retrieval and a toolkit for EST or nucleotide sequence function prediction. Furthermore, we developed for rapid mapping an enhanced sequence alignment algorithm which reassembles and evaluates high scoring pairs provided from the BLAST algorithm. Rapid assembly on and replacement of the template genome by sequence reads or mapped ESTs is achieved. This is illustrated (i) by data from Staphylococci as well as from a Blattabacteria sequencing effort, and (ii) by mapping single-cell sequencing reads from poribacteria to the sister-phylum representative Rhodopirellula baltica SH1. The algorithm has been implemented in a web-server accessible at http://jane.bioapps.biozentrum.uni-wuerzburg.de. Conclusion Rapid prokaryotic EST mapping or mapping of sequence reads is achieved applying JANE even without knowing the cognate genome sequence. PMID:19943962

  18. Performance of the CMS precision electromagnetic calorimeter at LHC Run II and prospects for High-Luminosity LHC

    NASA Astrophysics Data System (ADS)

    Zhang, Zhicai

    2018-04-01

    Many physics analyses using the Compact Muon Solenoid (CMS) detector at the LHC require accurate, high-resolution electron and photon energy measurements. Following the excellent performance achieved during LHC Run I at center-of-mass energies of 7 and 8 TeV, the CMS electromagnetic calorimeter (ECAL) is operating at the LHC with proton-proton collisions at 13 TeV center-of-mass energy. The instantaneous luminosity delivered by the LHC during Run II has achieved unprecedented levels. The average number of concurrent proton-proton collisions per bunch-crossing (pileup) has reached up to 40 interactions in 2016 and may increase further in 2017. These high pileup levels necessitate a retuning of the ECAL readout and trigger thresholds and reconstruction algorithms. In addition, the energy response of the detector must be precisely calibrated and monitored. We present new reconstruction algorithms and calibration strategies that were implemented to maintain the excellent performance of the CMS ECAL throughout Run II. We will show performance results from the 2015-2016 data taking periods and provide an outlook on the expected Run II performance in the years to come. Beyond the LHC, challenging running conditions for CMS are expected after the High-Luminosity upgrade of the LHC (HL-LHC). We review the design and R&D studies for the CMS ECAL and present first test beam studies. Particular challenges at the HL-LHC are the harsh radiation environment, the increasing data rates, and the extreme level of pile-up events, with up to 200 simultaneous proton-proton collisions. We present test beam results of hadron-irradiated PbWO4 crystals up to fluences expected at the HL-LHC. We also report on the R&D for the new readout and trigger electronics, which must be upgraded due to the increased trigger and latency requirements at the HL-LHC.

  19. Pharmacophore Modelling and 4D-QSAR Study of Ruthenium(II) Arene Complexes as Anticancer Agents (Inhibitors) by Electron Conformational- Genetic Algorithm Method.

    PubMed

    Yavuz, Sevtap Caglar; Sabanci, Nazmiye; Saripinar, Emin

    2018-01-01

    The EC-GA method was employed in this study as a 4D-QSAR method, for the identification of the pharmacophore (Pha) of ruthenium(II) arene complex derivatives and quantitative prediction of activity. The arrangement of the computed geometric and electronic parameters for atoms and bonds of each compound occurring in a matrix is known as the electron-conformational matrix of congruity (ECMC). It contains the data from HF/3-21G level calculations. Compounds were represented by a group of conformers for each compound rather than a single conformation, known as the fourth dimension, to generate the model. ECMCs were compared within a certain range of tolerance values by using the EMRE program and the pharmacophore group responsible for the activity of ruthenium(II) arene complex derivatives was found. For selecting the sub-parameters which had the most effect on activity in the series and for calculating theoretical activity values, the non-linear least squares method and the genetic algorithm included in the EMRE program were used. In addition, compounds were classified into training and test sets and the accuracy of the models was tested statistically by cross-validation. The model for the training and test sets attained by the optimum 10 parameters gave highly satisfactory results with R²(training) = 0.817, q² = 0.718 and SE(training) = 0.066; q²(ext1) = 0.867, q²(ext2) = 0.849, q²(ext3) = 0.895; CCC(tr) = 0.895, CCC(test) = 0.930 and CCC(all) = 0.905. Since there is no 4D-QSAR research on metal-based organic complexes in the literature, this study is original and gives a powerful tool for the design of novel and selective ruthenium(II) arene complexes. Copyright © Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.

  20. Optimization of PET-MR Registrations for Nonhuman Primates Using Mutual Information Measures: A Multi-Transform Method (MTM)

    PubMed Central

    Sandiego, Christine M.; Weinzimmer, David; Carson, Richard E.

    2012-01-01

    An important step in PET brain kinetic analysis is the registration of functional data to an anatomical MR image. Typically, PET-MR registrations in nonhuman primate neuroreceptor studies used PET images acquired early post-injection (e.g., 0-10 min) to closely resemble the subject's MR image. However, a substantial fraction of these registrations (~25%) fail due to the differences in kinetics and distribution for various radiotracer studies and conditions (e.g., blocking studies). The Multi-Transform Method (MTM) was developed to improve the success of registrations between PET and MR images. Two algorithms were evaluated, MTM-I and MTM-II. The approach involves creating multiple transformations by registering PET images of different time intervals, from a dynamic study, to a single reference (i.e., MR image) (MTM-I) or to multiple reference images (i.e., MR and PET images pre-registered to the MR) (MTM-II). Normalized mutual information was used to compute similarity between the transformed PET images and the reference image(s) to choose the optimal transformation. This final transformation is used to map the dynamic dataset into the animal's anatomical MR space, required for kinetic analysis. The transformations chosen by MTM-I and MTM-II were evaluated using visual rating scores to assess the quality of spatial alignment between the resliced PET and reference. One hundred twenty PET datasets involving eleven different tracers from 3 different scanners were used to evaluate the MTM algorithms. Studies were performed with baboons and rhesus monkeys on the HR+, HRRT, and Focus-220. Successful transformations increased from 77.5%, 85.8%, to 96.7% using the 0-10 min method, MTM-I, and MTM-II, respectively, based on visual rating scores. The Multi-Transform Methods proved to be a robust technique for PET-MR registrations for a wide range of PET studies. PMID:22926293
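
    The scoring step of MTM reduces to computing normalized mutual information between a transformed PET image and the reference; one standard histogram-based formulation, NMI = (H(A) + H(B)) / H(A, B), is sketched below (the bin count is an assumption). Among the candidate transformations, the one whose resliced PET maximizes this score would be kept.

        import numpy as np

        def normalized_mutual_information(a, b, bins=64):
            # NMI between two images of equal shape, from their joint histogram.
            hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
            pxy = hist / hist.sum()
            px, py = pxy.sum(axis=1), pxy.sum(axis=0)
            nz = pxy > 0
            h_xy = -(pxy[nz] * np.log(pxy[nz])).sum()
            h_x = -(px[px > 0] * np.log(px[px > 0])).sum()
            h_y = -(py[py > 0] * np.log(py[py > 0])).sum()
            return (h_x + h_y) / h_xy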

  1. Optimal estimation retrieval of aerosol microphysical properties from SAGE II satellite observations in the volcanically unperturbed lower stratosphere

    NASA Astrophysics Data System (ADS)

    Wurl, D.; Grainger, R. G.; McDonald, A. J.; Deshler, T.

    2010-05-01

    Stratospheric aerosol particles under non-volcanic conditions are typically smaller than 0.1 μm. Due to fundamental limitations of the scattering theory in the Rayleigh limit, these tiny particles are hard to measure by satellite instruments. As a consequence, current estimates of global aerosol properties retrieved from spectral aerosol extinction measurements tend to be strongly biased. Aerosol surface area densities, for instance, are observed to be about 40% smaller than those derived from correlative in situ measurements (Deshler et al., 2003). An accurate knowledge of the global distribution of aerosol properties is, however, essential to better understand and quantify the role they play in atmospheric chemistry, dynamics, radiation and climate. To address this need a new retrieval algorithm was developed, which employs a nonlinear Optimal Estimation (OE) method to iteratively solve for the monomodal size distribution parameters which are statistically most consistent with both the satellite-measured multi-wavelength aerosol extinction data and a priori information. By thus combining spectral extinction measurements (at visible to near infrared wavelengths) with prior knowledge of aerosol properties at background level, even the smallest particles are taken into account which are practically invisible to optical remote sensing instruments. The performance of the OE retrieval algorithm was assessed based on synthetic spectral extinction data generated from both monomodal and small-mode-dominant bimodal sulphuric acid aerosol size distributions. For monomodal background aerosol, the new algorithm was shown to fairly accurately retrieve the particle sizes and associated integrated properties (surface area and volume densities), even in the presence of large extinction uncertainty. The associated retrieved uncertainties are a good estimate of the true errors. In the case of bimodal background aerosol, where the retrieved (monomodal) size distributions naturally differ from the correct bimodal values, the associated surface area (A) and volume densities (V) are, nevertheless, fairly accurately retrieved, except at values larger than 1.0 μm² cm⁻³ (A) and 0.05 μm³ cm⁻³ (V), where they tend to underestimate the true bimodal values. Due to the limited information content in the SAGE II spectral extinction measurements this kind of forward model error cannot be avoided here. Nevertheless, the retrieved uncertainties are a good estimate of the true errors in the retrieved integrated properties, except where the surface area density exceeds the 1.0 μm² cm⁻³ threshold. When applied to near-global SAGE II satellite extinction data measured in 1999, the retrieved OE surface area and volume densities are observed to be larger by, respectively, 20-50% and 10-40% compared to the estimates obtained by the SAGE II operational retrieval algorithm. An examination of the OE algorithm biases with in situ data indicates that the new OE aerosol property estimates tend to be more realistic than previous estimates obtained from remotely sensed data through other retrieval techniques. Based on the results of this study we therefore suggest that the new Optimal Estimation retrieval algorithm is able to contribute to an advancement in aerosol research by considerably improving current estimates of aerosol properties in the lower stratosphere under low aerosol loading conditions.
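
    The nonlinear Optimal Estimation step is typically an iteration of the Gauss-Newton form of Rodgers' method; the sketch below shows that update with generic forward and jacobian callables standing in for the aerosol extinction model (the paper's exact iteration, state vector and priors are not given in the abstract).

        import numpy as np

        def oe_retrieval(y, forward, jacobian, x_a, S_a, S_e, n_iter=10):
            # Gauss-Newton optimal estimation (n-form):
            # x_{i+1} = x_a + (K^T Se^-1 K + Sa^-1)^-1 K^T Se^-1
            #           (y - F(x_i) + K (x_i - x_a))
            x = x_a.copy()
            Sa_inv, Se_inv = np.linalg.inv(S_a), np.linalg.inv(S_e)
            for _ in range(n_iter):
                K = jacobian(x)
                S_hat = np.linalg.inv(K.T @ Se_inv @ K + Sa_inv)
                x = x_a + S_hat @ K.T @ Se_inv @ (y - forward(x) + K @ (x - x_a))
            return x, S_hat   # retrieved state and its error covariance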

  2. US-VISIT Identity Matching Algorithm Evaluation Program: ADIS Algorithm Evaluation Project Plan Update

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grant, C W; Lenderman, J S; Gansemer, J D

    This document is an update to the 'ADIS Algorithm Evaluation Project Plan' specified in the Statement of Work for the US-VISIT Identity Matching Algorithm Evaluation Program as deliverable II.D.1. The original plan was delivered in August 2010. This update revises the deliverables to account for delays in obtaining a database refresh and describes the revised schedule of the program deliverables. The detailed description of the processes used, the statistical analysis processes and the results of the statistical analysis will be described fully in the program deliverables. The US-VISIT Identity Matching Algorithm Evaluation Program is work performed by Lawrence Livermore National Laboratory (LLNL) under IAA HSHQVT-07-X-00002 P00004 from the Department of Homeland Security (DHS).

  3. Overview of implementation of DARPA GPU program in SAIC

    NASA Astrophysics Data System (ADS)

    Braunreiter, Dennis; Furtek, Jeremy; Chen, Hai-Wen; Healy, Dennis

    2008-04-01

    This paper reviews the implementation of the DARPA MTO STAP-BOY program, Phases I and II, conducted at Science Applications International Corporation (SAIC). The STAP-BOY program develops fast covariance factorization and tuning techniques for space-time adaptive processing (STAP) algorithm implementation on graphics processing unit (GPU) architectures for embedded systems. The first part of our presentation focuses on GPU implementation and algorithm innovations for a prototype radar STAP algorithm. The STAP algorithm is implemented on the GPU using stream programming (from companies such as PeakStream, ATI Technologies' CTM, and NVIDIA) and traditional graphics APIs. This algorithm includes fast range-adaptive STAP weight updates and beamforming applications, each of which has been modified to exploit the parallel nature of graphics architectures.

  4. A Social Potential Fields Approach for Self-Deployment and Self-Healing in Hierarchical Mobile Wireless Sensor Networks

    PubMed Central

    González-Parada, Eva; Cano-García, Jose; Aguilera, Francisco; Sandoval, Francisco; Urdiales, Cristina

    2017-01-01

    Autonomous mobile nodes in mobile wireless sensor networks (MWSN) allow self-deployment and self-healing. In both cases, the goals are: (i) to achieve adequate coverage; and (ii) to extend network life. In dynamic environments, nodes may use reactive algorithms so that each node locally decides when and where to move. This paper presents a behavior-based deployment and self-healing algorithm based on the social potential fields algorithm. In the proposed algorithm, nodes are attached to low cost robots to autonomously navigate in the coverage area. The proposed algorithm has been tested in environments with and without obstacles. Our study also analyzes the differences between non-hierarchical and hierarchical routing configurations in terms of network life and coverage. PMID:28075364

  5. A Social Potential Fields Approach for Self-Deployment and Self-Healing in Hierarchical Mobile Wireless Sensor Networks.

    PubMed

    González-Parada, Eva; Cano-García, Jose; Aguilera, Francisco; Sandoval, Francisco; Urdiales, Cristina

    2017-01-09

    Autonomous mobile nodes in mobile wireless sensor networks (MWSN) allow self-deployment and self-healing. In both cases, the goals are: (i) to achieve adequate coverage; and (ii) to extend network life. In dynamic environments, nodes may use reactive algorithms so that each node locally decides when and where to move. This paper presents a behavior-based deployment and self-healing algorithm based on the social potential fields algorithm. In the proposed algorithm, nodes are attached to low cost robots to autonomously navigate in the coverage area. The proposed algorithm has been tested in environments with and without obstacles. Our study also analyzes the differences between non-hierarchical and hierarchical routing configurations in terms of network life and coverage.
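
    A minimal sketch of the social-potential-fields rule the two records above describe: each node feels a short-range repulsion and a longer-range attraction from its neighbors and moves along the net force. The constants and exponents below are illustrative, not the paper's tuning.

        import numpy as np

        def social_potential_force(p_i, neighbors, c_rep=1.0, c_att=0.1,
                                   s_rep=2.0, s_att=1.0):
            # Net force on node at p_i: each neighbor contributes
            # (-c_rep / r**s_rep + c_att / r**s_att) along the joining line.
            f = np.zeros(2)
            for p_j in neighbors:
                d = p_j - p_i
                r = np.linalg.norm(d)
                if r < 1e-9:
                    continue
                mag = -c_rep / r**s_rep + c_att / r**s_att
                f += mag * d / r
            return f

    At each control step a node would move a small distance along this force, which spreads the network for coverage while keeping neighbors in radio range.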

  6. Hybrid genetic algorithm with an adaptive penalty function for fitting multimodal experimental data: application to exchange-coupled non-Kramers binuclear iron active sites.

    PubMed

    Beaser, Eric; Schwartz, Jennifer K; Bell, Caleb B; Solomon, Edward I

    2011-09-26

    A Genetic Algorithm (GA) is a stochastic optimization technique based on the mechanisms of biological evolution. These algorithms have been successfully applied in many fields to solve a variety of complex nonlinear problems. While they have been used with some success in chemical problems such as fitting spectroscopic and kinetic data, many have avoided their use due to the unconstrained nature of the fitting process. In engineering, this problem is now being addressed through incorporation of adaptive penalty functions, but their transfer to other fields has been slow. This study updates the Nanakorn adaptive penalty function theory, expanding its validity beyond maximization problems to minimization as well. The expanded theory, using a hybrid genetic algorithm with an adaptive penalty function, was applied to analyze variable temperature variable field magnetic circular dichroism (VTVH MCD) spectroscopic data collected on exchange-coupled Fe(II)Fe(II) enzyme active sites. The data obtained are described by a complex nonlinear multimodal solution space with at least 6 to 13 interdependent variables, which is costly to search efficiently. The use of the hybrid GA is shown to improve the probability of detecting the global optimum. It also provides large gains in computational and user efficiency. This method allows a full search of a multimodal solution space, greatly improving the quality and confidence in the final solution obtained, and can be applied to other complex systems such as fitting of other spectroscopic or kinetics data.
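
    In outline, an adaptive penalty turns constrained fitness evaluation into unconstrained selection with a weight that is re-estimated every generation from the current population. The sketch below shows one simplified scheme for a minimization GA; it is an illustration of the idea, not the exact Nanakorn formulation used by the authors.

        import numpy as np

        def penalized_fitness(f_vals, violations):
            # f_vals: raw objective values (minimization); violations: total
            # constraint violation per individual, 0 if feasible. The penalty
            # weight is rescaled each generation from the population spread so
            # penalties stay commensurate with the objective values.
            feasible = violations == 0
            if feasible.any():
                w = f_vals[feasible].max() - f_vals[feasible].min() + 1.0
            else:
                w = f_vals.max() - f_vals.min() + 1.0
            return f_vals + w * violations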

  7. Subjective audio quality evaluation of embedded-optimization-based distortion precompensation algorithms.

    PubMed

    Defraene, Bruno; van Waterschoot, Toon; Diehl, Moritz; Moonen, Marc

    2016-07-01

    Subjective audio quality evaluation experiments have been conducted to assess the performance of embedded-optimization-based precompensation algorithms for mitigating perceptible linear and nonlinear distortion in audio signals. It is concluded with statistical significance that the perceived audio quality is improved by applying an embedded-optimization-based precompensation algorithm, both in case (i) nonlinear distortion and (ii) a combination of linear and nonlinear distortion is present. Moreover, a significant positive correlation is reported between the collected subjective and objective PEAQ audio quality scores, supporting the validity of using PEAQ to predict the impact of linear and nonlinear distortion on the perceived audio quality.

  8. Implementation of LSCMA adaptive array terminal for mobile satellite communications

    NASA Astrophysics Data System (ADS)

    Zhou, Shun; Wang, Huali; Xu, Zhijun

    2007-11-01

    This paper considers the application of an adaptive array antenna based on the least squares constant modulus algorithm (LSCMA) for interference rejection in mobile SATCOM terminals. A two-element adaptive array scheme is implemented with a combination of ADI TS201S DSP chips and an Altera Stratix II FPGA device, which cooperate to compute the adaptive beamforming. Its interference-suppressing performance is verified via Matlab simulations. A digital hardware system is implemented to execute the operations of the LSCMA beamforming algorithm, which is represented by an algorithm flowchart. The simulation and test results indicate that this scheme can improve the anti-jamming performance of terminals.
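
    Block-iterative LSCMA alternates two steps: project the beamformer output onto the unit-modulus set, then re-solve a least-squares problem for the weights. A numpy sketch follows; the array geometry, block size and diagonal loading are assumptions, not the paper's design.

        import numpy as np

        def lscma(X, n_iter=20):
            # X: (num_elements x num_samples) block of array snapshots.
            n_el = X.shape[0]
            w = np.zeros(n_el, complex); w[0] = 1.0      # initial weights
            R = X @ X.conj().T                           # spatial covariance
            R_inv = np.linalg.inv(R + 1e-6 * np.eye(n_el))
            for _ in range(n_iter):
                y = w.conj() @ X                         # beamformer output
                d = y / np.maximum(np.abs(y), 1e-12)     # unit-modulus reference
                w = R_inv @ (X @ d.conj())               # LS solve of w^H X = d
            return w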

  9. Applying a Genetic Algorithm to Reconfigurable Hardware

    NASA Technical Reports Server (NTRS)

    Wells, B. Earl; Weir, John; Trevino, Luis; Patrick, Clint; Steincamp, Jim

    2004-01-01

    This paper investigates the feasibility of applying genetic algorithms to solve optimization problems that are implemented entirely in reconfigurable hardware. The paper highlights the performance/design space trade-offs that must be understood to effectively implement a standard genetic algorithm within a modern Field Programmable Gate Array (FPGA) reconfigurable hardware environment and presents a case study where this stochastic search technique is applied to standard test-case problems taken from the technical literature. In this research, the targeted FPGA-based platform and high-level design environment was the Starbridge Hypercomputing platform, which incorporates multiple Xilinx Virtex II FPGAs, and the Viva graphical hardware description language.

  10. Radical Computing II

    DTIC Science & Technology

    1984-06-01

    A.Arays, G.V.Sibiriakov. The AVTO-ANALIZ Program System. J. Comput. Math. and Math. Phys., v. 11, N.4, 1971, pp. 1071-1075. ... J. Comput. Math. and Comput. Mach., No.3, Kharkov, 1972. 2. S.A.Abramov. On Some Algorithms for Algebraic Transformations of ... 13. Z.A.Arays, G.V.Sibiriakov. AVTO- ...

  11. Impact of mechanism vibration characteristics by joint clearance and optimization design of its multi-objective robustness

    NASA Astrophysics Data System (ADS)

    Zeng, Baoping; Wang, Chao; Zhang, Yu; Gong, Yajun; Hu, Sanbao

    2017-12-01

    Joint clearances and friction characteristics significantly influence the vibration characteristics of a mechanism. In a clearance joint, the shaft and bearing collide as the mechanism works, producing a dynamic normal contact force and a tangential Coulomb friction force, so the whole system may vibrate; under these dynamic forces the mechanism passes from free movement into contact-impact, and the mechanism topology changes. The constraint relationship between joints is established by a repeated, complex nonlinear dynamic process (idle stroke, contact-impact, elastic compression, rebound, impact relief, and back to idle-stroke movement). Analysis of the vibration characteristics of joints therefore remains a challenging open task. The dynamic equations of a mechanism with clearance form a set of strongly coupled, high-dimensional, time-varying nonlinear differential equations that are very difficult to solve; moreover, the chaotic motions excited by clearance impacts are highly sensitive to initial values, which makes high-precision simulation and prediction of the dynamic behavior more difficult still, and the subsequent wear causes the clearance parameters to fluctuate, acting as a primary source of vibration in the mechanical system. In this study, a dynamic model of the device for opening the deepwater robot cabin door, including joint clearance, was established using the finite element method, and its vibration characteristics were analyzed. A response model was built using the DOE method, and a robust optimization design was performed on the joint clearance sizes and the friction coefficient range, so that the results can serve as reference data for selecting bearings and controlling manufacturing process parameters for the opening mechanism. The optimization models account for several objectives, such as the x/y/z accelerations at various measuring points and the dynamic reaction forces of the mounting brackets, and several constraints, including manufacturing process limits; they were solved using the multi-objective genetic algorithm NSGA-II. The vibration characteristics of the optimized opening mechanism are superior to those of the original design, and the numerical forecasts agree well with the test results of the prototype.
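
    NSGA-II itself is built around fast non-dominated sorting plus crowding-distance selection. As a reminder of the core mechanism used here, a plain-Python non-dominated sort over a matrix of minimization objectives is sketched below; the application-specific vibration models are omitted.

        import numpy as np

        def fast_non_dominated_sort(F):
            # F: (n_points x n_objectives) array of minimization objectives.
            # Returns a list of fronts, each a list of row indices.
            n = len(F)
            dominated_by = [[] for _ in range(n)]   # solutions i dominates
            n_dom = np.zeros(n, dtype=int)          # count dominating i
            for i in range(n):
                for j in range(i + 1, n):
                    if np.all(F[i] <= F[j]) and np.any(F[i] < F[j]):
                        dominated_by[i].append(j); n_dom[j] += 1
                    elif np.all(F[j] <= F[i]) and np.any(F[j] < F[i]):
                        dominated_by[j].append(i); n_dom[i] += 1
            fronts = [[i for i in range(n) if n_dom[i] == 0]]
            while fronts[-1]:
                nxt = []
                for i in fronts[-1]:
                    for j in dominated_by[i]:
                        n_dom[j] -= 1
                        if n_dom[j] == 0:
                            nxt.append(j)
                fronts.append(nxt)
            return fronts[:-1]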

  12. Coach simplified structure modeling and optimization study based on the PBM method

    NASA Astrophysics Data System (ADS)

    Zhang, Miaoli; Ren, Jindong; Yin, Ying; Du, Jian

    2016-09-01

    For the coach industry, rapid modeling and efficient optimization methods based on simplified structures are desirable, especially early in the concept phase; they must express the mechanical properties of the structure accurately while allowing flexible section forms. The present dimension-based methods cannot easily meet these requirements. To achieve these goals, the property-based modeling (PBM) beam method is studied, based on PBM theory and on the fact that beams are the main components of a coach structure. For a beam component of given length, the mechanical characteristics are primarily determined by the section properties. Four section parameters are adopted to describe the mechanical properties of a beam: the section area, the principal moments of inertia about the two principal axes, and the torsion constant of the section. Based on an equivalent-stiffness strategy, expressions for these section parameters are derived, and the PBM beam element is implemented in the HyperMesh software. A case study is presented in which the structure of a passenger coach is simplified using this method. The model precision is validated by comparing the basic performance of the simplified structure with that of the original structure, including the bending and torsion stiffness and the first-order bending and torsional modal frequencies. Sensitivity analysis is conducted to choose design variables. An optimal Latin hypercube experiment design is adopted to sample the test points, and polynomial response surfaces are used to fit these points. To improve the bending and torsion stiffness and the first-order torsional frequency, and taking the allowable maximum stresses of the braking and left-turning conditions as constraints, a multi-objective optimization of the structure is conducted using the NSGA-II genetic algorithm on the ISIGHT platform. A Pareto solution set is obtained, and the selection strategy for the final solution is discussed. The case study demonstrates that the mechanical performance of the structure can be well modeled and simulated by the PBM beam element. Because it needs few parameters and is convenient to use, this method is suitable for the concept stage. A further merit is that the optimization results are requirements on the mechanical performance of the beam sections rather than on their shapes and dimensions, bringing flexibility to the succeeding design.
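
    The response-surface step amounts to a least-squares fit of a low-order polynomial to the Latin-hypercube sample points. A minimal sketch for a full quadratic surface is given below; the actual objectives and design variables of the coach model are assumed inputs.

        import numpy as np

        def fit_quadratic_rs(X, y):
            # Fit y ~ b0 + sum b_i x_i + sum b_ij x_i x_j by least squares.
            # X: (n_samples x n_vars) design points; y: observed responses.
            n, d = X.shape
            cols = [np.ones(n)]
            cols += [X[:, i] for i in range(d)]
            cols += [X[:, i] * X[:, j] for i in range(d) for j in range(i, d)]
            A = np.column_stack(cols)
            beta, *_ = np.linalg.lstsq(A, y, rcond=None)
            return beta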

  13. High-fidelity digital recording and playback sphygmomanometry system: device description and proof of concept.

    PubMed

    Lee, Jongshill; Chee, Youngjoon; Kim, Inyoung; Karpettas, Nikos; Kollias, Anastasios; Atkins, Neil; Stergiou, George S; O'Brien, Eoin

    2015-10-01

    This study describes the development of a new digital sphygmocorder (DS-II), which allows the digital recording and playback of the Korotkoff sounds, together with cuff pressure waveform, and its performance in a pilot validation study. A condenser microphone and stethoscope head detect Korotkoff sounds and an electronic chip, dedicated to audio-signal processing, is used to record high-quality sounds. Systolic and diastolic blood pressure (SBP/DBP) are determined from the recorded signals with an automatic beat detection algorithm that displays the cuff pressure at each beat on the monitor. Recordings of Korotkoff sounds, with the cuff pressure waveforms, and the simultaneous on-site assessments of SBP/DBP were performed during 100 measurements in 10 individuals. The observers reassessed the recorded signals to verify their accuracy and differences were calculated. The features of the high-fidelity DS-II, the technical specifications and the assessment procedures utilizing the playback software are described. Interobserver absolute differences (mean±SD) in measurements were 0.7±1.1/1.3±1.3 mmHg (SBP/DBP) with a mercury sphygmomanometer and 0.3±0.9/0.8±1.2 mmHg with the DS-II. The absolute DS-II mercury sphygmomanometer differences were 1.3±1.9/1.5±1.3 mmHg (SBP/DBP). The high-fidelity DS-II device presents satisfactory agreement with simultaneous measurements of blood pressure with a mercury sphygmomanometer. The device will be a valuable methodology for validating new blood pressure measurement technologies and devices.

  14. NAVSIM 2: A computer program for simulating aided-inertial navigation for aircraft

    NASA Technical Reports Server (NTRS)

    Bjorkman, William S.

    1987-01-01

    NAVSIM II, a computer program for analytical simulation of aided-inertial navigation for aircraft, is described. The description is supported by a discussion of the program's application to the design and analysis of aided-inertial navigation systems as well as instructions for utilizing the program and for modifying it to accommodate new models, constraints, algorithms and scenarios. NAVSIM II simulates an airborne inertial navigation system built around a strapped-down inertial measurement unit and aided in its function by GPS, Doppler radar, altimeter, airspeed, and position-fix measurements. The measurements are incorporated into the navigation estimate via a UD-form Kalman filter. The simulation was designed and implemented using structured programming techniques and with particular attention to user-friendly operation.

  15. Man against machine: diagnostic performance of a deep learning convolutional neural network for dermoscopic melanoma recognition in comparison to 58 dermatologists.

    PubMed

    Haenssle, H A; Fink, C; Schneiderbauer, R; Toberer, F; Buhl, T; Blum, A; Kalloo, A; Hassen, A Ben Hadj; Thomas, L; Enk, A; Uhlmann, L

    2018-05-28

    Deep learning convolutional neural networks (CNN) may facilitate melanoma detection, but data comparing a CNN's diagnostic performance to larger groups of dermatologists are lacking. Google's Inception v4 CNN architecture was trained and validated using dermoscopic images and corresponding diagnoses. In a comparative cross-sectional reader study a 100-image test-set was used (level-I: dermoscopy only; level-II: dermoscopy plus clinical information and images). Main outcome measures were sensitivity, specificity and area under the curve (AUC) of receiver operating characteristics (ROC) for diagnostic classification (dichotomous) of lesions by the CNN versus an international group of 58 dermatologists during level-I or -II of the reader study. Secondary end points included the dermatologists' diagnostic performance in their management decisions and differences in the diagnostic performance of dermatologists during level-I and -II of the reader study. Additionally, the CNN's performance was compared with the top-five algorithms of the 2016 International Symposium on Biomedical Imaging (ISBI) challenge. In level-I dermatologists achieved a mean (±standard deviation) sensitivity and specificity for lesion classification of 86.6% (±9.3%) and 71.3% (±11.2%), respectively. More clinical information (level-II) improved the sensitivity to 88.9% (±9.6%, P = 0.19) and specificity to 75.7% (±11.7%, P < 0.05). The CNN ROC curve revealed a higher specificity of 82.5% when compared with dermatologists in level-I (71.3%, P < 0.01) and level-II (75.7%, P < 0.01) at their sensitivities of 86.6% and 88.9%, respectively. The CNN ROC AUC was greater than the mean ROC area of dermatologists (0.86 versus 0.79, P < 0.01). The CNN scored results close to the top three algorithms of the ISBI 2016 challenge. For the first time we compared a CNN's diagnostic performance with a large international group of 58 dermatologists, including 30 experts. Most dermatologists were outperformed by the CNN. Irrespective of any physicians' experience, they may benefit from assistance by a CNN's image classification. This study was registered at the German Clinical Trial Register (DRKS-Study-ID: DRKS00013570; https://www.drks.de/drks_web/).

  16. The Rise and Fall of Type Ia Supernova Light Curves in the SDSS-II Supernova Survey

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hayden, Brian T.; /Notre Dame U.; Garnavich, Peter M.

    2010-01-01

    We analyze the rise and fall times of Type Ia supernova (SN Ia) light curves discovered by the Sloan Digital Sky Survey-II (SDSS-II) Supernova Survey. From a set of 391 light curves k-corrected to the rest-frame B and V bands, we find a smaller dispersion in the rising portion of the light curve compared to the decline. This is in qualitative agreement with computer models which predict that variations in radioactive nickel yield have less impact on the rise than on the spread of the decline rates. The differences we find in the rise and fall properties suggest that a single 'stretch' correction to the light curve phase does not properly model the range of SN Ia light curve shapes. We select a subset of 105 light curves well observed in both rise and fall portions of the light curves and develop a '2-stretch' fit algorithm which estimates the rise and fall times independently. We find the average time from explosion to B-band peak brightness is 17.38 ± 0.17 days, but with a spread of rise times which range from 13 days to 23 days. Our average rise time is shorter than the 19.5 days found in previous studies; this reflects both the different light curve template used and the application of the 2-stretch algorithm. The SDSS-II supernova set and the local SNe Ia with well-observed early light curves show no significant differences in their average rise-time properties. We find that slow-declining events tend to have fast rise times, but that the distribution of rise minus fall time is broad and single peaked. This distribution is in contrast to the bimodality in this parameter that was first suggested by Strovink (2007) from an analysis of a small set of local SNe Ia. We divide the SDSS-II sample in half based on the rise minus fall value, t_r - t_f ≲ 2 days and t_r - t_f > 2 days, to search for differences in their host galaxy properties and Hubble residuals; we find no difference in host galaxy properties or Hubble residuals in our sample.
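
    A '2-stretch' fit can be expressed as a template light curve with independent time stretches before and after peak, fit with ordinary nonlinear least squares. The sketch below uses a toy parabolic template as a stand-in for a real SN Ia template; t_obs and flux_obs are assumed observed data, and none of this is the authors' code.

        import numpy as np
        from scipy.optimize import curve_fit

        def two_stretch(t, t_peak, s_rise, s_fall, amp, t_tmpl, f_tmpl):
            # Template light curve with independent rise/fall time stretches.
            phase = np.where(t < t_peak,
                             (t - t_peak) / s_rise,
                             (t - t_peak) / s_fall)
            return amp * np.interp(phase, t_tmpl, f_tmpl)

        # toy parabolic template peaking at phase 0
        tt = np.linspace(-20.0, 40.0, 200)
        ft = np.clip(1.0 - (tt / 20.0) ** 2, 0.0, None)
        model = lambda t, t0, sr, sf, a: two_stretch(t, t0, sr, sf, a, tt, ft)
        # with observed epochs t_obs and fluxes flux_obs (not shown):
        # popt, pcov = curve_fit(model, t_obs, flux_obs, p0=[0.0, 1.0, 1.0, 1.0])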

  17. Metabolic Surgery in the Treatment Algorithm for Type 2 Diabetes: A Joint Statement by International Diabetes Organizations.

    PubMed

    Rubino, Francesco; Nathan, David M; Eckel, Robert H; Schauer, Philip R; Alberti, K George M M; Zimmet, Paul Z; Del Prato, Stefano; Ji, Linong; Sadikot, Shaukat M; Herman, William H; Amiel, Stephanie A; Kaplan, Lee M; Taroncher-Oldenburg, Gaspar; Cummings, David E

    2016-07-01

    Despite growing evidence that bariatric/metabolic surgery powerfully improves type 2 diabetes (T2D), existing diabetes treatment algorithms do not include surgical options. The 2nd Diabetes Surgery Summit (DSS-II), an international consensus conference, was convened in collaboration with leading diabetes organizations to develop global guidelines to inform clinicians and policymakers about benefits and limitations of metabolic surgery for T2D. A multidisciplinary group of 48 international clinicians/scholars (75% nonsurgeons), including representatives of leading diabetes organizations, participated in DSS-II. After evidence appraisal (MEDLINE [1 January 2005-30 September 2015]), three rounds of Delphi-like questionnaires were used to measure consensus for 32 data-based conclusions. These drafts were presented at the combined DSS-II and 3rd World Congress on Interventional Therapies for Type 2 Diabetes (London, U.K., 28-30 September 2015), where they were open to public comment by other professionals and amended face-to-face by the Expert Committee. Given its role in metabolic regulation, the gastrointestinal tract constitutes a meaningful target to manage T2D. Numerous randomized clinical trials, albeit mostly short/midterm, demonstrate that metabolic surgery achieves excellent glycemic control and reduces cardiovascular risk factors. On the basis of such evidence, metabolic surgery should be recommended to treat T2D in patients with class III obesity (BMI ≥ 40 kg/m²) and in those with class II obesity (BMI 35.0-39.9 kg/m²) when hyperglycemia is inadequately controlled by lifestyle and optimal medical therapy. Surgery should also be considered for patients with T2D and BMI 30.0-34.9 kg/m² if hyperglycemia is inadequately controlled despite optimal treatment with either oral or injectable medications. These BMI thresholds should be reduced by 2.5 kg/m² for Asian patients. Although additional studies are needed to further demonstrate long-term benefits, there is sufficient clinical and mechanistic evidence to support inclusion of metabolic surgery among antidiabetes interventions for people with T2D and obesity. To date, the DSS-II guidelines have been formally endorsed by 45 worldwide medical and scientific societies. Health care regulators should introduce appropriate reimbursement policies. Copyright © 2016. Published by Elsevier Inc.

  18. Metabolic Surgery in the Treatment Algorithm for Type 2 Diabetes: a Joint Statement by International Diabetes Organizations.

    PubMed

    Rubino, Francesco; Nathan, David M; Eckel, Robert H; Schauer, Philip R; Alberti, K George M M; Zimmet, Paul Z; Del Prato, Stefano; Ji, Linong; Sadikot, Shaukat M; Herman, William H; Amiel, Stephanie A; Kaplan, Lee M; Taroncher-Oldenburg, Gaspar; Cummings, David E

    2017-01-01

    Despite growing evidence that bariatric/metabolic surgery powerfully improves type 2 diabetes (T2D), existing diabetes treatment algorithms do not include surgical options. The 2nd Diabetes Surgery Summit (DSS-II), an international consensus conference, was convened in collaboration with leading diabetes organizations to develop global guidelines to inform clinicians and policymakers about benefits and limitations of metabolic surgery for T2D. A multidisciplinary group of 48 international clinicians/scholars (75% nonsurgeons), including representatives of leading diabetes organizations, participated in DSS-II. After evidence appraisal (MEDLINE [1 January 2005-30 September 2015]), three rounds of Delphi-like questionnaires were used to measure consensus for 32 data-based conclusions. These drafts were presented at the combined DSS-II and 3rd World Congress on Interventional Therapies for Type 2 Diabetes (London, U.K., 28-30 September 2015), where they were open to public comment by other professionals and amended face-to-face by the Expert Committee. Given its role in metabolic regulation, the gastrointestinal tract constitutes a meaningful target to manage T2D. Numerous randomized clinical trials, albeit mostly short/midterm, demonstrate that metabolic surgery achieves excellent glycemic control and reduces cardiovascular risk factors. On the basis of such evidence, metabolic surgery should be recommended to treat T2D in patients with class III obesity (BMI ≥ 40 kg/m²) and in those with class II obesity (BMI 35.0-39.9 kg/m²) when hyperglycemia is inadequately controlled by lifestyle and optimal medical therapy. Surgery should also be considered for patients with T2D and BMI 30.0-34.9 kg/m² if hyperglycemia is inadequately controlled despite optimal treatment with either oral or injectable medications. These BMI thresholds should be reduced by 2.5 kg/m² for Asian patients. Although additional studies are needed to further demonstrate long-term benefits, there is sufficient clinical and mechanistic evidence to support inclusion of metabolic surgery among antidiabetes interventions for people with T2D and obesity. To date, the DSS-II guidelines have been formally endorsed by 45 worldwide medical and scientific societies. Health care regulators should introduce appropriate reimbursement policies.

  19. Experimental validation of improved 3D SBP positioning algorithm in PET applications using UW Phase II Board

    NASA Astrophysics Data System (ADS)

    Jorge, L. S.; Bonifacio, D. A. B.; DeWitt, Don; Miyaoka, R. S.

    2016-12-01

    Continuous scintillator-based detectors have been considered a competitive and cheaper alternative to highly pixelated discrete-crystal positron emission tomography (PET) detectors, despite the need for algorithms to estimate the 3D gamma interaction position. In this work, we report on the implementation of a positioning algorithm that estimates the 3D interaction position in a continuous-crystal PET detector using a Field Programmable Gate Array (FPGA). The evaluated method is the Statistics-Based Processing (SBP) technique, which requires characterization of the light response function and event positions. The algorithm was implemented in the Verilog language and evaluated using a data acquisition board that contains an Altera Stratix III FPGA. The 3D SBP algorithm had previously been implemented successfully on a Stratix II FPGA using simulated data and a different module design. In this work, improvements were made to the FPGA coding of the 3D positioning algorithm, reducing the total memory usage to around 34%. Further, the algorithm was evaluated using experimental data from a continuous miniature crystal element (cMiCE) detector module. Using our new implementation, the average FWHM (full width at half maximum) for the whole block is 1.71±0.01 mm, 1.70±0.01 mm and 1.632±0.005 mm in the x, y and z directions, respectively. Using a pipelined architecture, the FPGA is able to process 245,000 events per second for interactions inside the central area of the detector, which represents 64% of the total block area. The event rate weighted by regional area (corner, border and central regions) is about 198,000 events per second. This event rate is greater than the maximum expected coincidence rate for any given detector module in future PET systems using the cMiCE detector design.
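
    The core of SBP is a maximum-likelihood lookup: the photodetector channel responses are modeled as Gaussians whose mean and width are tabulated for every candidate interaction position, and an event is assigned the grid point of highest log-likelihood. A minimal numpy version follows; the table shapes and Gaussian model are assumptions drawn from the general SBP literature, not the authors' FPGA implementation.

        import numpy as np

        def sbp_estimate(signals, mu, sigma):
            # signals: (n_channels,) observed channel values for one event.
            # mu, sigma: (n_points x n_channels) tabulated mean and std of
            # each channel response at every candidate (x, y, z) grid point.
            # Returns the index of the grid point maximizing the Gaussian
            # log-likelihood (constant terms dropped).
            ll = -0.5 * (((signals - mu) / sigma) ** 2
                         + 2.0 * np.log(sigma)).sum(axis=1)
            return np.argmax(ll)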

  20. Lidar-based door and stair detection from a mobile robot

    NASA Astrophysics Data System (ADS)

    Bansal, Mayank; Southall, Ben; Matei, Bogdan; Eledath, Jayan; Sawhney, Harpreet

    2010-04-01

    We present an on-the-move LIDAR-based object detection system for autonomous and semi-autonomous unmanned vehicle systems. In this paper we make several contributions: (i) we describe an algorithm for real-time detection of objects such as doors and stairs in indoor environments; (ii) we describe efficient data structures and algorithms for processing 3D point clouds acquired by laser scanners in a streaming manner, which minimize the memory copying and access. We show qualitative results demonstrating the effectiveness of our approach on runs in an indoor office environment.
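
    One simple realization of contribution (ii), a streaming-friendly structure for LIDAR points, is a sparse voxel hash that accumulates hits scan by scan instead of buffering whole point clouds. The sketch below is illustrative of the idea, not the authors' exact data structure.

        import numpy as np
        from collections import defaultdict

        class StreamingVoxelGrid:
            # Accumulate LIDAR returns into a sparse voxel hash so scans can
            # be processed as they stream in, with memory proportional to the
            # number of occupied voxels rather than the number of points.
            def __init__(self, voxel=0.1):
                self.voxel = voxel
                self.counts = defaultdict(int)

            def add_points(self, pts):            # pts: (n, 3) array
                keys = np.floor(pts / self.voxel).astype(np.int64)
                for k in map(tuple, keys):
                    self.counts[k] += 1

            def occupied(self, min_hits=3):
                return [k for k, c in self.counts.items() if c >= min_hits]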

  1. Machine learning for medical images analysis.

    PubMed

    Criminisi, A

    2016-10-01

    This article discusses the application of machine learning for the analysis of medical images. Specifically: (i) We show how a special type of learning models can be thought of as automatically optimized, hierarchically-structured, rule-based algorithms, and (ii) We discuss how the issue of collecting large labelled datasets applies to both conventional algorithms as well as machine learning techniques. The size of the training database is a function of model complexity rather than a characteristic of machine learning methods. Crown Copyright © 2016. Published by Elsevier B.V. All rights reserved.

  2. Data Centric Sensor Stream Reduction for Real-Time Applications in Wireless Sensor Networks

    PubMed Central

    Aquino, Andre Luiz Lins; Nakamura, Eduardo Freire

    2009-01-01

    This work presents a data-centric strategy to meet deadlines in soft real-time applications in wireless sensor networks. This strategy considers three main aspects: (i) the design of the real-time application to obtain the minimum deadlines; (ii) an analytic model to estimate the ideal sample size used by data-reduction algorithms; and (iii) two data-centric stream-based sampling algorithms to perform data reduction whenever necessary. Simulation results show that our data-centric strategies meet deadlines without losing data representativeness. PMID:22303145
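
    A classic example of the stream-based sampling in aspect (iii) is reservoir sampling, which keeps a uniform random sample of size k from a stream of unknown length in O(k) memory. The sketch below shows the textbook scheme, offered as an illustration in the same spirit rather than the paper's algorithms.

        import random

        def reservoir_sample(stream, k):
            # Keep the first k items, then replace a random slot with
            # probability k/(n+1) for the (n+1)-th item; the result is a
            # uniform sample of everything seen so far.
            sample = []
            for n, item in enumerate(stream):
                if n < k:
                    sample.append(item)
                else:
                    j = random.randint(0, n)
                    if j < k:
                        sample[j] = item
            return sample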

  3. Comment on "Comment on 'Constant temperature molecular dynamics simulations by means of a stochastic collision model. II. The harmonic oscillator' [J. Chem. Phys. 104, 3732 (1996)]" [J. Chem. Phys. 106, 1646 (1997)].

    PubMed

    Kast, Stefan M

    2004-03-08

    An argument brought forward by Sholl and Fichthorn against the stochastic collision-based constant temperature algorithm for molecular dynamics simulations developed by Kast et al. is refuted. It is demonstrated that the large temperature fluctuations noted by Sholl and Fichthorn are due to improperly chosen initial conditions within their formulation of the algorithm. With the original form or by suitable initialization of their variant no deficient behavior is observed.

  4. Benefits Analysis Of Alternative Secondary National Ambient ...

    EPA Pesticide Factsheets

    ... These elasticities are comparable ...

  5. A Standardized Approach for Category II Fetal Heart Rate with Significant Decelerations: Maternal and Neonatal Outcomes.

    PubMed

    Shields, Laurence E; Wiesner, Suzanne; Klein, Catherine; Pelletreau, Barbara; Hedriana, Herman L

    2018-06-12

To determine if a standardized intervention process for Category II fetal heart rates (FHRs) with significant decelerations (SigDecels) would improve neonatal outcomes, and to determine the impact on mode of delivery rates. Patients with Category II FHRs from six hospitals were prospectively managed using a standardized approach based on the presence of recurrent SigDecels. Maternal and neonatal outcomes were compared between pre-implementation (6 months) and post-implementation (11 months). Neonatal outcomes were: 5-minute APGAR scores of <7, <5, <3, and severe unexpected newborn complications (UNC). Maternal outcomes included primary cesarean and operative vaginal birth rates of eligible deliveries. Post-implementation there were 8,515 eligible deliveries; 3,799 (44.6%) were screened, and 361 (9.5%) met criteria for recurrent SigDecels. Compliance with the algorithm was 97.8%. The algorithm recommended delivery in 68.0% of cases. Relative to pre-implementation, 5-minute APGAR scores of <7 were reduced by 24.6% (p < 0.05) and severe UNC by 26.6% (p < 0.05). The rate of primary cesarean decreased (19.8 vs. 18.3%, p < 0.05), while there were nonsignificant increases in vaginal (74.6 vs. 75.8%, p = 0.13) and operative vaginal births (5.7 vs. 5.9%, p = 0.6). CONCLUSION: Standardized management of recurrent SigDecels reduced the rate of 5-minute APGAR scores of <7 and severe UNC.

  6. The Ultimate Big Data Enterprise Initiative: Defining Functional Capabilities for an International Information System (IIS) for Orbital Space Data (OSD)

    NASA Astrophysics Data System (ADS)

    Raygan, R.

Global collaboration in support of an International Information System (IIS) for Orbital Space Data (OSD) literally requires a global enterprise. As with many information technology enterprise initiatives attempting to corral the desires of business with the budgets and limitations of technology, Space Situational Awareness (SSA) includes many of the same challenges: 1) an adaptive, intuitive dashboard that facilitates user experience design for a variety of users; 2) asset management of hundreds of thousands of objects moving at thousands of miles per hour, hundreds of miles up in space; 3) normalization and integration of diverse data in various languages, possibly hidden or protected from easy access; 4) expectations of near real-time information availability coupled with predictive analysis to affect decisions before critical points of no return, such as Space Object Conjunction Assessment (CA); 5) data ownership, management, taxonomy, and accuracy; 6) integrated metrics and easily modified algorithms for "what if" analysis. This paper proposes an approach to define the functional capabilities for an IIS for OSD. These functional capabilities not only address previously identified gaps in current systems but incorporate lessons learned from other big data, enterprise, and agile information technology initiatives that correlate to the space domain. Viewing the IIS as the "data service provider" allows adoption of existing information technology processes which strengthen governance and ensure service consumers certain levels of service dependability and accuracy.

  7. Machine Learning-Assisted Network Inference Approach to Identify a New Class of Genes that Coordinate the Functionality of Cancer Networks.

    PubMed

    Ghanat Bari, Mehrab; Ung, Choong Yong; Zhang, Cheng; Zhu, Shizhen; Li, Hu

    2017-08-01

Emerging evidence indicates the existence of a new class of cancer genes that act as "signal linkers" coordinating oncogenic signals between mutated and differentially expressed genes. While frequently mutated oncogenes and differentially expressed genes, which we term Class I cancer genes, are readily detected by most analytical tools, the new class of cancer-related genes, i.e., Class II, escapes detection because they are neither mutated nor differentially expressed. Given this hypothesis, we developed a Machine Learning-Assisted Network Inference (MALANI) algorithm, which assesses all genes regardless of expression or mutational status in the context of cancer etiology. We used 8807 expression arrays, corresponding to 9 cancer types, to build more than 2 × 10^8 Support Vector Machine (SVM) models for reconstructing a cancer network. We found that ~3% of the ~19,000 genes that are not differentially expressed are Class II cancer gene candidates. Some Class II genes that we found, such as SLC19A1 and ATAD3B, have been recently reported to associate with cancer outcomes. To our knowledge, this is the first study that utilizes both machine learning and network biology approaches to uncover Class II cancer genes coordinating functionality in cancer networks, and it will illuminate our understanding of how genes that are modulated in a tissue-specific network contribute to tumorigenesis and therapy development.

  8. Algorithms that eliminate the effects of calibration artefact and trial-imposed offsets of Masimo oximeter in BOOST-NZ trial.

    PubMed

    Zahari, Marina; Lee, Dominic Savio; Darlow, Brian Alexander

    2016-10-01

The displayed readings of Masimo pulse oximeters used in the Benefits Of Oxygen Saturation Targeting (BOOST) II and related trials in very preterm babies were influenced by trial-imposed offsets and an artefact in the calibration software. A study was undertaken to implement new algorithms that eliminate the effects of the offsets and artefact. In the BOOST-New Zealand trial, oxygen saturations were averaged and stored every 10 s up to 36 weeks' post-menstrual age. Two hundred and fifty-seven of 340 babies enrolled in the trial had at least two weeks of stored data. Oxygen saturation distribution patterns corresponding with a +3% or -3% offset in the 85-95% range were identified, together with that due to the calibration artefact. Algorithms involving linear and quadratic interpolations were developed, implemented on each baby of the dataset, and validated using the data of a UK preterm baby, as recorded from Masimo oximeters with the original software and a non-offset Siemens oximeter. The saturation distributions obtained were compared for both groups. There was a flat region at saturations of 85-87% and a peak at 96% from the lower-saturation-target oximeters, and at 93-95% and 84%, respectively, from the higher-saturation-target oximeters. The algorithms lowered the peaks and redistributed the accumulated frequencies to the flat regions and the artefact at 87-90%. The resulting distributions were very close to those obtained from the Siemens oximeter. The artefact and offsets of the Masimo oximeter's software had been addressed to determine the true saturation readings through the use of novel algorithms. The implementation would enable New Zealand data to be included in the meta-analysis of the BOOST II trials, and to be used in neonatal oxygen studies.

  9. A robust algorithm for automated target recognition using precomputed radar cross sections

    NASA Astrophysics Data System (ADS)

    Ehrman, Lisa M.; Lanterman, Aaron D.

    2004-09-01

Passive radar is an emerging technology that offers a number of unique benefits, including covert operation. Many such systems are already capable of detecting and tracking aircraft. The goal of this work is to develop a robust algorithm for adding automated target recognition (ATR) capabilities to existing passive radar systems. In previous papers, we proposed conducting ATR by comparing the precomputed RCS of known targets to that of detected targets. To make the precomputed RCS as accurate as possible, a coordinated flight model is used to estimate aircraft orientation. Once the aircraft's position and orientation are known, it is possible to determine the incident and observed angles on the aircraft, relative to the transmitter and receiver. This makes it possible to extract the appropriate radar cross section (RCS) from our simulated database. This RCS is then scaled to account for propagation losses and the receiver's antenna gain. A Rician likelihood model compares these expected signals from different targets to the received target profile. We have previously employed Monte Carlo runs to gauge the probability of error in the ATR algorithm; however, generation of a statistically significant set of Monte Carlo runs is computationally intensive. As an alternative to Monte Carlo runs, we derive the relative entropy (also known as the Kullback-Leibler distance) between two Rician distributions. Since the probability of Type II error in our hypothesis testing problem can be expressed as a function of the relative entropy via Stein's Lemma, this provides us with a computationally efficient method for determining an upper bound on our algorithm's performance. It also provides great insight into the types of classification errors we can expect from our algorithm. This paper compares the numerically approximated probability of Type II error with the results obtained from a set of Monte Carlo runs.
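
    Stein's lemma ties the Type II error exponent to the relative entropy between the two hypothesized Rician distributions. The paper's closed-form derivation is not quoted in the record, but the quantity is straightforward to approximate numerically; the sketch below (parameter values are illustrative, not from the paper) uses SciPy's rice distribution and quadrature.

    ```python
    import numpy as np
    from scipy.stats import rice
    from scipy.integrate import quad

    def rician_relative_entropy(b1, s1, b2, s2):
        """Numerically evaluate D(p || q) for two Rician densities.
        b is SciPy's shape parameter (nu/sigma) and s the scale (sigma);
        here they are assumed stand-ins for the signal amplitudes and
        noise levels of the two target hypotheses."""
        p = rice(b1, scale=s1).pdf
        q = rice(b2, scale=s2).pdf
        integrand = lambda x: p(x) * np.log(p(x) / q(x)) if p(x) > 0 else 0.0
        val, _ = quad(integrand, 0.0, np.inf, limit=200)
        return val

    # Larger relative entropy implies an exponentially smaller Type II
    # error probability via Stein's lemma.
    print(rician_relative_entropy(2.0, 1.0, 3.0, 1.0))
    ```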

  10. Validation of the 12-gene colon cancer recurrence score as a predictor of recurrence risk in stage II and III rectal cancer patients.

    PubMed

    Reimers, Marlies S; Kuppen, Peter J K; Lee, Mark; Lopatin, Margarita; Tezcan, Haluk; Putter, Hein; Clark-Langone, Kim; Liefers, Gerrit Jan; Shak, Steve; van de Velde, Cornelis J H

    2014-11-01

The 12-gene Recurrence Score assay is a validated predictor of recurrence risk in stage II and III colon cancer patients. We conducted a prospectively designed study to validate this assay for prediction of recurrence risk in stage II and III rectal cancer patients from the Dutch Total Mesorectal Excision (TME) trial. RNA was extracted from fixed paraffin-embedded primary rectal tumor tissue from stage II and III patients randomized to TME surgery alone, without (neo)adjuvant treatment. Recurrence Score was assessed by quantitative real-time polymerase chain reaction using the previously validated colon cancer genes and algorithm. Data were analysed by Cox proportional hazards regression, adjusting for stage and resection margin status. All statistical tests were two-sided. Recurrence Score predicted risk of recurrence (hazard ratio [HR] = 1.57, 95% confidence interval [CI] = 1.11 to 2.21, P = .01), risk of distant recurrence (HR = 1.50, 95% CI = 1.04 to 2.17, P = .03), and rectal cancer-specific survival (HR = 1.64, 95% CI = 1.15 to 2.34, P = .007). The effect of Recurrence Score was most prominent in stage II patients and attenuated with more advanced stage (P(interaction) ≤ .007 for each endpoint). In stage II, five-year cumulative incidence of recurrence ranged from 11.1% in the predefined low Recurrence Score group (48.5% of patients) to 43.3% in the high Recurrence Score group (23.1% of patients). The 12-gene Recurrence Score is a predictor of recurrence risk and cancer-specific survival in rectal cancer patients treated with surgery alone, suggesting a similar underlying biology in colon and rectal cancers.

  11. Improved prediction of MHC class I and class II epitopes using a novel Gibbs sampling approach.

    PubMed

    Nielsen, Morten; Lundegaard, Claus; Worning, Peder; Hvid, Christina Sylvester; Lamberth, Kasper; Buus, Søren; Brunak, Søren; Lund, Ole

    2004-06-12

Prediction of which peptides will bind a specific major histocompatibility complex (MHC) constitutes an important step in identifying potential T-cell epitopes suitable as vaccine candidates. MHC class II binding peptides have a broad length distribution complicating such predictions. Thus, identifying the correct alignment is a crucial part of identifying the core of an MHC class II binding motif. In this context, we wish to describe a novel Gibbs motif sampler method ideally suited for recognizing such weak sequence motifs. The method is based on the Gibbs sampling method, and it incorporates novel features optimized for the task of recognizing the binding motif of MHC classes I and II. The method locates the binding motif in a set of sequences and characterizes the motif in terms of a weight-matrix. Subsequently, the weight-matrix can be applied to effectively identifying potential MHC binding peptides and to guiding the process of rational vaccine design. We apply the motif sampler method to the complex problem of MHC class II binding. The input to the method is amino acid peptide sequences extracted from the public databases of SYFPEITHI and MHCPEP and known to bind to the MHC class II complex HLA-DR4(B1*0401). Prior identification of information-rich (anchor) positions in the binding motif is shown to improve the predictive performance of the Gibbs sampler. Similarly, a consensus solution obtained from an ensemble average over suboptimal solutions is shown to outperform the use of a single optimal solution. In a large-scale benchmark calculation, the performance is quantified using relative operating characteristics curve (ROC) plots and we make a detailed comparison of the performance with that of both the TEPITOPE method and a weight-matrix derived using the conventional alignment algorithm of ClustalW. The calculation demonstrates that the predictive performance of the Gibbs sampler is higher than that of ClustalW and in most cases also higher than that of the TEPITOPE method.
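
    For readers unfamiliar with the underlying idea, here is a compact sketch of a Lawrence-style Gibbs motif sampler: hold one peptide out, estimate a weight matrix from the motif windows of the remaining peptides, and resample the held-out start position in proportion to the window's odds score. The anchor-position weighting and ensemble averaging of the paper are omitted, and the peptide set below is synthetic.

    ```python
    import math, random

    AA = "ACDEFGHIKLMNPQRSTVWY"

    def gibbs_motif_sampler(seqs, width=6, iters=2000, rng=random.Random(7)):
        """Minimal Gibbs motif sampler: returns a sampled motif start
        position for each input peptide."""
        starts = [rng.randrange(len(s) - width + 1) for s in seqs]
        for _ in range(iters):
            i = rng.randrange(len(seqs))                # hold-out sequence
            counts = [{a: 0.1 for a in AA} for _ in range(width)]  # pseudocounts
            for j, s in enumerate(seqs):
                if j == i:
                    continue
                for k in range(width):
                    counts[k][s[starts[j] + k]] += 1.0
            totals = [sum(col.values()) for col in counts]
            # Score every candidate window in the held-out sequence
            # against a uniform amino-acid background.
            weights = []
            for pos in range(len(seqs[i]) - width + 1):
                logp = sum(math.log(counts[k][seqs[i][pos + k]] / totals[k])
                           - math.log(1.0 / len(AA))
                           for k in range(width))
                weights.append(math.exp(logp))
            r, acc = rng.random() * sum(weights), 0.0
            for pos, w in enumerate(weights):
                acc += w
                if r <= acc:
                    starts[i] = pos
                    break
        return starts

    # Synthetic peptides sharing a planted "KVLDR?" core at varying offsets.
    peptides = ["GIKVLDRYAAQTRS", "MMKVLDRWTTPE", "AAPKVLDRY", "QQKVLDRFNNAA"]
    print(gibbs_motif_sampler(peptides, width=6))
    ```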

  12. Development of CDMS-II Surface Event Rejection Techniques and Their Extensions to Lower Energy Thresholds

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hofer, Thomas James

    2014-12-01

The CDMS-II phase of the Cryogenic Dark Matter Search, a dark matter direct-detection experiment, was operated at the Soudan Underground Laboratory from 2003 to 2008. The full payload consisted of 30 ZIP detectors, totaling approximately 1.1 kg of Si and 4.8 kg of Ge, operated at temperatures of 50 mK. The ZIP detectors read out both ionization and phonon pulses from scatters within the crystals; channel segmentation and analysis of pulse timing parameters allowed effective fiducialization of the crystal volumes and background rejection sufficient to set world-leading limits at the times of their publications. A full re-analysis of the CDMS-II data was motivated by an improvement in the event reconstruction algorithms which improved the resolution of ionization energy and timing information. The Ge data were re-analyzed using three distinct background-rejection techniques; the Si data from runs 125-128 were analyzed for the first time using the most successful of the techniques from the Ge re-analysis. The results of these analyses prompted a novel "mid-threshold" analysis, wherein energy thresholds were lowered but background rejection using phonon timing information was still maintained. This technique proved to have significant discrimination power, maintaining adequate signal acceptance and minimizing background leakage. The primary background for CDMS-II analyses comes from surface events, whose poor ionization collection makes them difficult to distinguish from true nuclear recoil events. The novel detector technology of SuperCDMS, the successor to CDMS-II, uses interleaved electrodes to achieve full ionization collection for events occurring at the top and bottom detector surfaces. This, along with dual-sided ionization and phonon instrumentation, allows for excellent fiducialization and relegates the surface-event rejection techniques of CDMS-II to a secondary level of background discrimination. Current and future SuperCDMS results hold great promise for mid- to low-mass WIMP-search results.

  13. Structure Predictions of Two Bauhinia variegata Lectins Reveal Patterns of C-Terminal Properties in Single Chain Legume Lectins

    PubMed Central

    Moreira, Gustavo M. S. G.; Conceição, Fabricio R.; McBride, Alan J. A.; Pinto, Luciano da S.

    2013-01-01

Bauhinia variegata lectins (BVL-I and BVL-II) are single chain lectins isolated from the plant Bauhinia variegata. Single chain lectins undergo post-translational processing in their N-terminal and C-terminal regions, which determines their physiological targeting, carbohydrate-binding activity and pattern of quaternary association. These two lectins are isoforms, BVL-I being highly glycosylated, and thus far, it has not been possible to determine their structures. The present study used prediction and validation algorithms to elucidate the likely structures of BVL-I and -II. The program Bhageerath-H was chosen from among three different structure prediction programs due to its better overall reliability. In order to predict the C-terminal region cleavage sites, other lectins known to have this modification were analysed and three rules were created: (1) the first amino acid of the excised peptide is small or hydrophobic; (2) the cleavage occurs after an acid, polar, or hydrophobic residue, but not after a basic one; and (3) the cleavage spot is located 5-8 residues after a conserved Leu amino acid. These rules predicted that BVL-I and -II would have fifteen C-terminal residues cleaved, and this was confirmed experimentally by Edman degradation sequencing of BVL-I. Furthermore, the C-terminal analyses predicted that only BVL-II underwent α-helical folding in this region, similar to that seen in SBA and DBL. Conversely, BVL-I and -II contained four conserved regions of a GS-I association, providing evidence of a previously undescribed 'X4 + unusual' oligomerisation between the truncated BVL-I and the intact BVL-II. This is the first report on the structural analysis of lectins from Bauhinia spp. and therefore is important for the characterisation of C-terminal cleavage and patterns of quaternary association of single chain lectins. PMID:24260572

  15. Fast Algorithms for Earth Mover Distance Based on Optimal Transport and L1 Regularization II

    DTIC Science & Technology

    2016-09-01

The earth mover's distance (EMD) plays a central role in many applications, including image processing, computer vision and statistics [13, 17, 20, 24]. The EMD is a metric defined between probability distributions. Based on the theory of optimal transport, the EMD problem can be reformulated as a familiar L1 minimization. We use a regularization which gives us a unique solution ...
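
    In one dimension the optimal-transport formulation has a particularly simple consequence: the EMD between two normalized histograms on a common uniform grid equals the L1 distance between their cumulative sums. A small NumPy check of that special case (this is not the report's general algorithm, only a widely known corollary):

    ```python
    import numpy as np

    def emd_1d(p, q):
        """Earth mover's distance between two 1-D histograms on the same
        uniform grid: in one dimension W1 equals the L1 norm of the
        difference of the cumulative distributions (grid spacing 1)."""
        p = np.asarray(p, float); q = np.asarray(q, float)
        p, q = p / p.sum(), q / q.sum()          # normalize to unit mass
        return np.abs(np.cumsum(p - q)).sum()

    print(emd_1d([0, 1, 0, 0], [0, 0, 0, 1]))   # mass moves 2 bins -> 2.0
    ```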

  16. Contributions to Engineering Models of Human-Computer Interaction. Volume 1.

    DTIC Science & Technology

    1988-05-06

for those readers wishing to replicate my results. Volume II is on file in the Carnegie-Mellon library and is available upon request from the author. [Figure 4-20: Schedule chart of the perception-wait algorithm for the detection span task.]

  17. Photometry of Standard Stars and Open Star Clusters

    NASA Astrophysics Data System (ADS)

    Jefferies, Amanda; Frinchaboy, Peter

    2010-10-01

Photometric CCD observations of open star clusters and standard stars were carried out at the McDonald Observatory in Fort Davis, Texas. These data were analyzed using aperture photometry algorithms (DAOPHOT II and ALLSTAR) and the IRAF software package. Color-magnitude diagrams of these clusters were produced, showing the evolution of each cluster along the main sequence.

  18. An MDI (Minimum Discrimination Information) Model and an Algorithm for Composite Hypotheses Testing and Estimation in Marketing. Revision 2.

    DTIC Science & Technology

    1982-09-01

considered to be Markovian, and the fact that Ehrenberg has been openly critical of the use of first-order Markov processes in describing consumer behavior disinclines us to treating these data in this manner. We shall therefore interpret the p(i,i) as joint rather than conditional probabilities

  19. 40 CFR 85.2215 - Two speed idle test-EPA 91.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 18 2010-07-01 2010-07-01 false Two speed idle test-EPA 91. 85.2215... Tests § 85.2215 Two speed idle test—EPA 91. (a) General requirements—(1) Exhaust gas sampling algorithm...) of this section, consists of an idle mode followed by a high-speed mode. (ii) The second-chance high...

  20. 40 CFR 85.2215 - Two speed idle test-EPA 91.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 18 2011-07-01 2011-07-01 false Two speed idle test-EPA 91. 85.2215... Tests § 85.2215 Two speed idle test—EPA 91. (a) General requirements—(1) Exhaust gas sampling algorithm...) of this section, consists of an idle mode followed by a high-speed mode. (ii) The second-chance high...

  1. A microcomputer program for analysis of nucleic acid hybridization data

    PubMed Central

    Green, S.; Field, J.K.; Green, C.D.; Beynon, R.J.

    1982-01-01

The study of nucleic acid hybridization is facilitated by computer-mediated fitting of theoretical models to experimental data. This paper describes a non-linear curve fitting program, using the `Patternsearch' algorithm, written in BASIC for the Apple II microcomputer. The advantages and disadvantages of using a microcomputer for local data processing are discussed. PMID:7071017
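
    The original BASIC listing is not included in the record; as a rough stand-in, here is a minimal Hooke-Jeeves-style pattern search in Python, applied to a hypothetical saturation-curve fit of the kind used in hybridization analysis (the data values are made up for illustration).

    ```python
    import math

    def pattern_search(f, x0, step=0.5, shrink=0.5, tol=1e-6, max_iter=10_000):
        """Minimal Hooke-Jeeves-style 'Patternsearch': probe each coordinate
        in +/- step; when no probe improves, shrink the step until it falls
        below tol."""
        x, fx = list(x0), f(x0)
        for _ in range(max_iter):
            improved = False
            for i in range(len(x)):
                for d in (+step, -step):
                    trial = x[:]; trial[i] += d
                    ft = f(trial)
                    if ft < fx:
                        x, fx, improved = trial, ft, True
            if not improved:
                step *= shrink
                if step < tol:
                    break
        return x, fx

    # Hypothetical use: fit A*(1 - exp(-k*t)) to saturation-style data.
    t_obs = [0.5, 1, 2, 4, 8]
    y_obs = [0.39, 0.63, 0.86, 0.98, 1.0]
    sse = lambda p: sum((p[0] * (1 - math.exp(-p[1] * t)) - y) ** 2
                        for t, y in zip(t_obs, y_obs))
    print(pattern_search(sse, [0.5, 0.5]))  # should approach A ~ 1, k ~ 1
    ```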

  2. Design and Implementation of Hybrid CORDIC Algorithm Based on Phase Rotation Estimation for NCO

    PubMed Central

    Zhang, Chaozhu; Han, Jinan; Li, Ke

    2014-01-01

The numerical controlled oscillator has wide application in radar, digital receivers, and software radio systems. Firstly, this paper introduces the traditional CORDIC algorithm. Then, in order to improve computing speed and save resources, this paper proposes a kind of hybrid CORDIC algorithm based on phase rotation estimation applied in a numerical controlled oscillator (NCO). By estimating the direction of part of the phase rotations, the algorithm reduces the number of phase rotations and add-subtract units, so that it decreases delay. Furthermore, the paper simulates and implements the numerical controlled oscillator with the Quartus II and Modelsim software tools. Finally, simulation results indicate that an improvement over the traditional CORDIC algorithm is achieved in terms of ease of computation, resource utilization, and computing speed/delay while maintaining precision. It is suitable for high-speed and high-precision digital modulation and demodulation. PMID:25110750
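
    The hybrid phase-rotation-estimation variant is not specified in enough detail here to reproduce, but the baseline it improves on is the textbook rotation-mode CORDIC. A minimal floating-point sketch (the iteration count n is illustrative; hardware versions use fixed-point arithmetic and shifts):

    ```python
    import math

    def cordic_sin_cos(theta, n=32):
        """Classic rotation-mode CORDIC for |theta| <= pi/2: rotate the
        vector (1/K, 0) through arctan(2**-i) micro-rotations chosen by the
        sign of the residual angle. The paper's hybrid variant replaces the
        later rotations with a phase-rotation estimate; only the textbook
        iteration is shown."""
        angles = [math.atan(2.0 ** -i) for i in range(n)]
        k = 1.0
        for i in range(n):
            k /= math.sqrt(1.0 + 2.0 ** (-2 * i))  # pre-scale by 1/K
        x, y, z = k, 0.0, theta
        for i in range(n):
            d = 1.0 if z >= 0 else -1.0
            x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
            z -= d * angles[i]
        return y, x  # (sin(theta), cos(theta))

    print(cordic_sin_cos(math.pi / 6))  # ~ (0.5, 0.8660)
    ```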

  3. Input variable selection for data-driven models of Coriolis flowmeters for two-phase flow measurement

    NASA Astrophysics Data System (ADS)

    Wang, Lijuan; Yan, Yong; Wang, Xue; Wang, Tao

    2017-03-01

Input variable selection is an essential step in the development of data-driven models for environmental, biological and industrial applications. Through input variable selection to eliminate the irrelevant or redundant variables, a suitable subset of variables is identified as the input of a model. Meanwhile, through input variable selection the complexity of the model structure is simplified and the computational efficiency is improved. This paper describes the procedures of input variable selection for data-driven models for the measurement of liquid mass flowrate and gas volume fraction under two-phase flow conditions using Coriolis flowmeters. Three advanced input variable selection methods, including partial mutual information (PMI), genetic algorithm-artificial neural network (GA-ANN) and tree-based iterative input selection (IIS), are applied in this study. Typical data-driven models incorporating support vector machine (SVM) are established individually based on the input candidates resulting from the selection methods. The validity of the selection outcomes is assessed through an output performance comparison of the SVM-based data-driven models and sensitivity analysis. The validation and analysis results suggest that the input variables selected by the PMI algorithm provide more effective information for the models to measure liquid mass flowrate, while the IIS algorithm provides fewer but more effective variables for the models to predict gas volume fraction.
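
    None of the three selection methods is spelled out in the record. As a simplified stand-in for the mutual-information family (plain MI ranking rather than the iterative partial-MI procedure the paper uses), the following sketch ranks hypothetical Coriolis-flowmeter observables against a synthetic flowrate target; all variable names and data are illustrative.

    ```python
    import numpy as np
    from sklearn.feature_selection import mutual_info_regression

    # Hypothetical flowmeter observables (names are illustrative only).
    rng = np.random.default_rng(0)
    n = 500
    density    = rng.normal(998, 5, n)
    drive_gain = rng.normal(50, 10, n)
    temp       = rng.normal(20, 2, n)
    noise      = rng.normal(0, 1, n)
    # Synthetic target depending on only two of the candidates.
    flowrate = 0.02 * density + 0.5 * drive_gain + rng.normal(0, 1, n)

    X = np.column_stack([density, drive_gain, temp, noise])
    names = ["density", "drive_gain", "temp", "noise"]
    mi = mutual_info_regression(X, flowrate, random_state=0)
    ranked = sorted(zip(names, mi), key=lambda t: -t[1])
    print(ranked)  # informative variables should rank above temp/noise
    ```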

  4. Mapping the receptor site for alpha-scorpion toxins on a Na+ channel voltage sensor.

    PubMed

    Wang, Jinti; Yarov-Yarovoy, Vladimir; Kahn, Roy; Gordon, Dalia; Gurevitz, Michael; Scheuer, Todd; Catterall, William A

    2011-09-13

α-Scorpion toxins bind to the resting state of Na(+) channels and inhibit fast inactivation by interaction with a receptor site formed by domains I and IV. Mutants T1560A, F1610A, and E1613A in domain IV had lower affinities for Leiurus quinquestriatus hebraeus toxin II (LqhII), and mutant E1613R had ~73-fold lower affinity. Toxin dissociation was accelerated by depolarization and increased by these mutations, whereas association rates at negative membrane potentials were not changed. These results indicate that Thr1560 in the S1-S2 loop, Phe1610 in the S3 segment, and Glu1613 in the S3-S4 loop in domain IV participate in toxin binding. T393A in the SS2-S6 loop in domain I also had lower affinity for LqhII, indicating that this extracellular loop may form a secondary component of the receptor site. Analysis with the Rosetta-Membrane algorithm resulted in a model of LqhII binding to the voltage sensor in a resting state, in which amino acid residues in an extracellular cleft formed by the S1-S2 and S3-S4 loops in domain IV interact with two faces of the wedge-shaped LqhII molecule. The conserved gating charges in the S4 segment are in an inward position and form ion pairs with negatively charged amino acid residues in the S2 and S3 segments of the voltage sensor. This model defines the structure of the resting state of a voltage sensor of Na(+) channels and reveals its mode of interaction with a gating modifier toxin.

  5. SEALDH-II-An Autonomous, Holistically Controlled, First Principles TDLAS Hygrometer for Field and Airborne Applications: Design-Setup-Accuracy/Stability Stress Test.

    PubMed

    Buchholz, Bernhard; Kallweit, Sören; Ebert, Volker

    2016-12-30

Instrument operation in harsh environments often significantly impacts the trust level of measurement data. While commercial instrument manufacturers clearly define the deployment conditions needed to achieve trustworthy data in typical standard applications, it is frequently unavoidable in scientific field applications to operate instruments outside these specifications. Scientific instrumentation employs cutting-edge and often highly optimized technology, but frequently lacks the long-term field tests needed to assess field vs. laboratory performance. Recently, we developed the Selective Extractive Laser Diode Hygrometer (SEALDH-II), which addresses field and especially airborne applications as well as metrological laboratory validations. SEALDH-II targets reducing the deviations between airborne hygrometers (currently up to 20% between the most advanced hygrometers) with a new holistic internal control and validation concept, which guarantees the transfer of laboratory performance into field scenarios by capturing more than 80 instrument-internal "housekeeping" parameters to monitor SEALDH-II's health status almost completely. SEALDH-II uses a calibration-free, first-principles-based, direct Tuneable Diode Laser Absorption Spectroscopy (dTDLAS) approach to cover the entire atmospheric humidity measurement range from about 3 to 40,000 ppmv with a calculated maximum uncertainty of 4.3% ± 3 ppmv. This is achieved not only by innovations in internal instrument monitoring and design, but also by active control algorithms such as high-resolution spectral stabilization. This paper describes the setup, working principles, and instrument stabilization, as well as its precision validation and long-term stress tests in an environmental chamber over an environmental temperature and humidity range of ΔT = 50 K and ΔRH = 80% RH, respectively.

  6. Template-based de novo design for type II kinase inhibitors and its extended application to acetylcholinesterase inhibitors.

    PubMed

    Su, Bo-Han; Huang, Yi-Syuan; Chang, Chia-Yun; Tu, Yi-Shu; Tseng, Yufeng J

    2013-10-31

There is a compelling need to discover type II inhibitors targeting the unique DFG-out inactive kinase conformation, since they are likely to possess greater potency and selectivity relative to traditional type I inhibitors. Using a known inhibitor, such as a currently available and approved drug, as a template to design new drugs via computational de novo design is helpful when working with known ligand-receptor interactions. This study proposes a new template-based de novo design protocol to discover new inhibitors that preserve and also optimize the binding interactions of the type II kinase template. First, sorafenib (Nexavar) and nilotinib (Tasigna), two type II inhibitors with different ligand-receptor interactions, were selected as the template compounds. The five-step protocol can reassemble each drug from a large fragment library. Our procedure demonstrates that the selected template compounds can be successfully reassembled while the key ligand-receptor interactions are preserved. Furthermore, to demonstrate that the algorithm is able to construct more potent compounds, we considered kinase inhibitors and another protein dataset, acetylcholinesterase (AChE) inhibitors. The de novo optimization was initiated using template compounds possessing less than optimal activity: a series of aminoisoquinoline and TAK-285 compounds inhibiting type II kinases, and E2020 derivatives inhibiting AChE, respectively. Three compounds with greater potency than the template compound were discovered that were also included in the original congeneric series. This template-based lead optimization protocol with the fragment library can help to automatically design compounds with the preferred binding interactions of known inhibitors and to further optimize the compounds in the binding pockets.

  7. An apparatus for generation and quantitative measurement of homogeneous isotropic turbulence in He ii

    NASA Astrophysics Data System (ADS)

    Mastracci, Brian; Guo, Wei

    2018-01-01

    The superfluid phase of helium-4, known as He ii, exhibits extremely small kinematic viscosity and may be a useful tool for economically producing and studying high Reynolds number turbulent flow. Such applications are not currently possible because a comprehensive understanding of the complex two-fluid behavior of He ii is lacking. This situation could be remedied by a systematic investigation of simple, well controlled turbulence that can be directly compared with theoretical models. To this end, we have developed a new apparatus that combines flow visualization with second sound attenuation to study turbulence in the wake of a mesh grid towed through a He ii filled channel. One of three mesh grids (mesh number M = 3, 3.75, or 5 mm) can be pulled at speeds between 0.1 and 60 cm/s through a cast acrylic flow channel which has a 16 mm × 16 mm cross section and measures 330 mm long. The motion of solidified deuterium tracer particles, with diameter of the order 1 μm, in the resulting flow is captured by a high speed camera, and a particle tracking velocimetry algorithm resolves the Lagrangian particle trajectories through the turbulent flow field. A pair of oscillating superleak second sound transducers installed in the channel allows complementary measurement of vortex line density in the superfluid throughout the turbulent decay process. Success in early experiments demonstrates the effectiveness of both probes, and preliminary analysis of the data shows that both measurements strongly correlate with each other. Further investigations will provide comprehensive information that can be used to address open questions about turbulence in He ii and move toward the application of this fluid to high Reynolds number fluid research.

  8. The use of the decision tree technique and image cytometry to characterize aggressiveness in World Health Organization (WHO) grade II superficial transitional cell carcinomas of the bladder.

    PubMed

    Decaestecker, C; van Velthoven, R; Petein, M; Janssen, T; Salmon, I; Pasteels, J L; van Ham, P; Schulman, C; Kiss, R

    1996-03-01

The aggressiveness of human bladder tumours can be assessed by means of various classification systems, including the one proposed by the World Health Organization (WHO). According to the WHO classification, three levels of malignancy are identified as grades I (low), II (intermediate), and III (high). This classification system operates satisfactorily for two of the three grades in forecasting clinical progression, most grade I tumours being associated with good prognoses and most grade III with bad. In contrast, the grade II group is very heterogeneous in terms of clinical behaviour. The present study used two computer-assisted methods to investigate whether it is possible to sub-classify grade II tumours: computer-assisted microscope analysis (image cytometry) of Feulgen-stained nuclei and the Decision Tree Technique. The latter technique belongs to the family of supervised learning algorithms and enables an objective assessment to be made of the diagnostic value associated with a given parameter. The combined use of these two methods in a series of 292 superficial transitional cell carcinomas shows that it is possible to identify one subgroup of grade II tumours which behave clinically like grade I tumours and a second subgroup which behaves clinically like grade III tumours. Of the nine ploidy-related parameters computed by means of image cytometry [the DNA index (DI), DNA histogram type (DHT), and the percentages of diploid, hyperdiploid, triploid, hypertriploid, tetraploid, hypertetraploid, and polyploid cell nuclei], it was the percentage of hyperdiploid and hypertetraploid cell nuclei which enabled identification, rather than conventional parameters such as the DI or the DHT.
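
    As a hedged illustration of the technique only (synthetic data, and just the two features the study found discriminant, with made-up cluster statistics), a shallow decision tree can be fitted and inspected with scikit-learn:

    ```python
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier, export_text

    # Synthetic stand-ins for two image-cytometry features of grade II
    # tumours; distributions are invented for illustration.
    rng = np.random.default_rng(42)
    n = 200
    hyperdiploid    = np.r_[rng.normal(5, 2, n // 2), rng.normal(15, 4, n // 2)]
    hypertetraploid = np.r_[rng.normal(1, 0.5, n // 2), rng.normal(4, 1.5, n // 2)]
    # Label 0 = clinically grade-I-like, 1 = clinically grade-III-like.
    y = np.r_[np.zeros(n // 2, int), np.ones(n // 2, int)]

    X = np.column_stack([hyperdiploid, hypertetraploid])
    tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
    print(export_text(tree, feature_names=["%hyperdiploid", "%hypertetraploid"]))
    ```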

  9. SAGE II Measurements of Stratospheric Aerosol Properties at Non-Volcanic Levels

    NASA Technical Reports Server (NTRS)

    Thomason, Larry W.; Burton, Sharon P.; Luo, Bei-Ping; Peter, Thomas

    2008-01-01

Since 2000, stratospheric aerosol levels have been relatively stable and at the lowest levels observed in the historical record. Given the challenges of making satellite measurements of aerosol properties at these levels, we have performed a study of the sensitivity of the product to the major components of the processing algorithm used in the production of SAGE II aerosol extinction measurements and the retrieval process that produces the operational surface area density (SAD) product. We find that the aerosol extinction measurements, particularly at 1020 nm, remain robust and reliable at the observed aerosol levels. On the other hand, during background periods the SAD operational product has an uncertainty of at least a factor of 2 due to the lack of sensitivity to particles with radii less than 100 nm.

  10. BBU and Corkscrew Growth Predictions for the Darht Second Axis Accelerator

    NASA Astrophysics Data System (ADS)

    Chen, Y. J.; Fawley, W. M.

    2001-06-01

    This paper discusses the means by which we plan to control BBU and corkscrew growth in DARHT-II. In section 2 we present the current design for the solenoidal field tune; since the last PAC meeting in 1999, the design beam current has been lowered from 4 to 2 kA which has lowered the necessary field strengths. In Sec. 3 we discuss the present predictions for the expected BBU growth; these predictions were made having used recent experimental measurements for the impedance of the DARHT-II accelerator cells. Finally, in Sec. 4 we present our most recent calculations for the expected corkscrew growth and also the expected performance of the tuning-V algorithm, which can reduce this growth by more than an order of magnitude.

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bradonjic, Milan; Elsasser, Robert; Friedrich, Tobias

In this work, we consider the random broadcast time on random geometric graphs (RGGs). The classic random broadcast model, also known as the push algorithm, is defined as follows: starting with one informed node, in each succeeding round every informed node chooses one of its neighbors uniformly at random and informs it. We consider the random broadcast time on RGGs when, with high probability: (i) the RGG is connected; (ii) there exists a giant component in the RGG. We show that the random broadcast time is bounded by O(√n + diam(component)), where diam(component) is the diameter of the entire graph, or of the giant component, for regimes (i) or (ii), respectively. In other words, for both regimes, we derive the broadcast time to be Θ(diam(G)), which is asymptotically optimal.
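
    A quick way to observe the Θ(diam(G)) behaviour empirically is to simulate the push model on a connected RGG. The sketch below uses NetworkX; the size and radius are chosen only so that the graph is connected with high probability (radius of order sqrt(log n / n)).

    ```python
    import random
    import networkx as nx

    def push_broadcast_rounds(G, source=0, rng=random.Random(3)):
        """Rounds of the push model: every informed node informs one
        uniformly random neighbour per round, until all of G is informed."""
        informed = {source}
        rounds = 0
        while len(informed) < G.number_of_nodes():
            for u in list(informed):
                nbrs = list(G[u])
                if nbrs:
                    informed.add(rng.choice(nbrs))
            rounds += 1
        return rounds

    n = 500
    G = nx.random_geometric_graph(n, radius=0.12, seed=1)
    if nx.is_connected(G):  # guard: the simulation assumes regime (i)
        print(push_broadcast_rounds(G), "rounds; diameter =", nx.diameter(G))
    ```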

  12. Spontaneous Intramuscular Hematomas of the Abdomen and Pelvis: A New Multilevel Algorithm to Direct Transarterial Embolization and Patient Management.

    PubMed

    Popov, Milen; Sotiriadis, Charalampos; Gay, Frederique; Jouannic, Anne-Marie; Lachenal, Yann; Hajdu, Steven D; Doenz, Francesco; Qanadli, Salah D

    2017-04-01

To report our experience using a multilevel patient management algorithm to direct transarterial embolization (TAE) in managing spontaneous intramuscular hematoma (SIMH). From May 2006 to January 2014, twenty-seven patients with SIMH were referred for TAE to our Radiology department. The clinical status and coagulation characteristics of the patients are analyzed. An algorithm integrating CT findings is suggested to manage SIMH. Patients were classified into three groups: Type I, SIMH with no active bleeding (AB); Type II, SIMH with AB and no muscular fascia rupture (MFR); and Type III, SIMH with MFR and AB. Type II is further subcategorized as IIa, IIb and IIc. Types IIb, IIc and III were considered for TAE. The method of embolization as well as the materials used are described. Continuous variables are presented as mean ± SD. Categorical variables are reported as percentages. Technical success, clinical success, complications and 30-day mortality (d30M) were analyzed. Two patients (7.5%) had Type IIb, four (15%) Type IIc and 21 (77.5%) presented Type III. The detailed CT and CTA findings, embolization procedure and materials used are described. Technical success was 96% with a complication rate of 4%. Clinical success was 88%. The bleeding-related thirty-day mortality was 15% (all with Type III). TAE is a safe and efficient technique to control bleeding that should be considered in selected SIMH as soon as possible. The proposed algorithm integrating CT features provides a comprehensive chart to select patients for TAE.

  13. Nonlinear Multiobjective MPC-Based Optimal Operation of a High Consistency Refining System in Papermaking

    DOE PAGES

    Li, Mingjie; Zhou, Ping; Wang, Hong; ...

    2017-09-19

As one of the most important units in the papermaking industry, the high consistency (HC) refining system is confronted with challenges such as improving pulp quality, energy saving, and emissions reduction in its operation processes. In this correspondence, an optimal operation of the HC refining system is presented using nonlinear multiobjective model predictive control strategies that aim at the set-point tracking objective of pulp quality, an economic objective, and a specific energy (SE) consumption objective, respectively. First, a set of input and output data at different times are employed to construct the subprocess model of the state process model for the HC refining system, and then the Wiener-type model can be obtained through combining the mechanism model of Canadian Standard Freeness and the state process model, whose structures are determined based on the Akaike information criterion. Second, a multiobjective optimization strategy that simultaneously optimizes both the set-point tracking objective of pulp quality and SE consumption is proposed, which uses the NSGA-II approach to obtain the Pareto optimal set. Furthermore, targeting the set-point tracking objective of pulp quality, the economic objective, and the SE consumption objective, the sequential quadratic programming method is utilized to produce the optimal predictive controllers. In conclusion, the simulation results demonstrate that the proposed methods can make the HC refining system provide better set-point tracking performance of pulp quality when these predictive controllers are employed. In addition, with the optimal predictive controllers oriented toward the comprehensive economic objective and the SE consumption objective, it has been shown that they significantly reduce the energy consumption.

  15. Network-level accident-mapping: Distance based pattern matching using artificial neural network.

    PubMed

    Deka, Lipika; Quddus, Mohammed

    2014-04-01

The objective of an accident-mapping algorithm is to snap traffic accidents onto the correct road segments. Assigning accidents to the correct segments facilitates robustly carrying out some key analyses in accident research, including the identification of accident hot-spots, network-level risk mapping and segment-level accident risk modelling. Existing risk mapping algorithms have some severe limitations: (i) they are not easily 'transferable', as the algorithms are specific to given accident datasets; (ii) they do not perform well in all road-network environments, such as in areas of dense road network; and (iii) the methods used do not perform well in addressing the inaccuracies inherent in the data for each type of road environment. The purpose of this paper is to develop a new accident-mapping algorithm based on the common variables observed in most accident databases (e.g. road name and type, direction of vehicle movement before the accident and recorded accident location). The challenges here are to: (i) develop a method that takes into account uncertainties inherent to the recorded traffic accident data and the underlying digital road network data; (ii) accurately determine the type and proportion of inaccuracies; and (iii) develop a robust algorithm that can be adapted for any accident set and road network of varying complexity. In order to overcome these challenges, a distance-based pattern-matching approach is used to identify the correct road segment. This is based on vectors containing feature values that are common in the accident data and the network data. Since each feature does not contribute equally towards the identification of the correct road segments, an ANN approach using the single-layer perceptron is used to assist in "learning" the relative importance of each feature in the distance calculation and hence the correct link identification. The performance of the developed algorithm was evaluated based on a reference accident dataset from the UK, confirming that the accuracy is much better than that of other methods.
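
    As a hedged sketch of the idea of "learning" feature weights for the matching distance with a single-layer perceptron (the feature names and synthetic labels below are illustrative, not the paper's dataset):

    ```python
    import numpy as np

    # Feature differences between an accident record and a candidate road
    # segment (hypothetical: road-name similarity, road-type match, heading
    # difference, snapped distance); label 1 = correct segment (synthetic,
    # generated from an assumed linear rule so the problem is separable).
    rng = np.random.default_rng(0)
    X = rng.uniform(0, 1, (400, 4))
    y = (0.5 * X[:, 0] + 0.1 * X[:, 1] + 0.3 * X[:, 2] + 0.1 * X[:, 3]
         < 0.4).astype(int)

    # Single-layer perceptron learning the relative importance of each
    # feature in the matching score.
    w, b, lr = np.zeros(4), 0.0, 0.1
    for _ in range(50):
        for xi, yi in zip(X, y):
            pred = int(w @ xi + b > 0)
            w += lr * (yi - pred) * xi
            b += lr * (yi - pred)
    print(np.round(w, 2), round(b, 2))  # learned feature weights
    ```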

  16. An Orthogonal Evolutionary Algorithm With Learning Automata for Multiobjective Optimization.

    PubMed

    Dai, Cai; Wang, Yuping; Ye, Miao; Xue, Xingsi; Liu, Hailin

    2016-12-01

Research on multiobjective optimization problems has become one of the hottest topics in intelligent computation. In order to improve the search efficiency of an evolutionary algorithm and maintain the diversity of solutions, in this paper, learning automata (LA) are first used for quantization orthogonal crossover (QOX), and a new fitness function based on decomposition is proposed to achieve these two purposes. Based on these, an orthogonal evolutionary algorithm with LA for complex multiobjective optimization problems with continuous variables is proposed. The experimental results show that in continuous states, the proposed algorithm is able to achieve accurate Pareto-optimal sets and wide Pareto-optimal fronts efficiently. Moreover, the comparison with several existing well-known algorithms (nondominated sorting genetic algorithm II, decomposition-based multiobjective evolutionary algorithm, decomposition-based multiobjective evolutionary algorithm with an ensemble of neighborhood sizes, multiobjective optimization by LA, and multiobjective immune algorithm with nondominated neighbor-based selection) on 15 multiobjective benchmark problems shows that the proposed algorithm is able to find more accurate and evenly distributed Pareto-optimal fronts than the compared ones.

  17. Explicit Building Block Multiobjective Evolutionary Computation: Methods and Applications

    DTIC Science & Technology

    2005-06-16

[Indexed excerpt consists of front-matter fragments: a glossary entry citing Richard Dawkins' book "The Selfish Gene" [34]; acronym-list entries for the Pareto Envelope-based Selection Algorithm I and II, IGC (Intelligent Gene Collector), OED (Orthogonal Experimental Design), and MED (Main Effect ...); and a notation entry for the string length held within the computer, which can be longer than the number of genes.]

  18. Studies in Ambulatory Care Quality Assessment in the Indian Health Service. Volume II: Appraisal of System Performance.

    ERIC Educational Resources Information Center

    Nutting, Paul A.; And Others

    Six Indian Health Service (IHS) units, chosen in a non-random manner, were evaluated via a quality assessment methodology currently under development by the IHS Office of Research and Development. A set of seven health problems (tracers) was selected to represent major health problems, and clinical algorithms (process maps) were constructed for…

  19. Scan Line Difference Compression Algorithm Simulation Study.

    DTIC Science & Technology

    1985-08-01

introduced during the signal transmission process. [Figure A-1, Overall Data Compression Process: an image source feeds conditioned data through the SLDC encoder and an error-control encoder; after transmission, an error-control decoder and the SLDC decoder perform reconstruction.] ... of noise or an effective channel coding subsystem providing the necessary error control.

  20. Advances in QCD sum-rule calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Melikhov, Dmitri

    2016-01-22

We review the recent progress in the applications of QCD sum rules to hadron properties with the emphasis on the following selected problems: (i) development of new algorithms for the extraction of ground-state parameters from two-point correlators; (ii) form factors at large momentum transfers from three-point vacuum correlation functions; (iii) properties of exotic tetraquark hadrons from correlation functions of four-quark currents.

  1. 76 FR 61054 - Approval and Promulgation of State Implementation Plans; State of Colorado Regulation Number 3...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-10-03

    ....aa; II.D.1.bb; II.D.1.kk; II.D.1.nn; II.D.1.oo; II.D.1.aaa; II.D.1.bbb; II.D.1.ccc; II.D.1.fff; II.D...; II.D.1.y; II.D.1.aa; II.D.1.bb; II.D.1.kk; II.D.1.nn; II.D.1.oo; II.D.1.aaa; II.D.1.bbb; II.D.1.ccc...

  2. Multi-objective optimization of combustion, performance and emission parameters in a jatropha biodiesel engine using Non-dominated sorting genetic algorithm-II

    NASA Astrophysics Data System (ADS)

    Dhingra, Sunil; Bhushan, Gian; Dubey, Kashyap Kumar

    2014-03-01

The present work studies and identifies the different variables that affect the output parameters involved in a single cylinder direct injection compression ignition (CI) engine using jatropha biodiesel. Response surface methodology based on Central composite design (CCD) is used to design the experiments. Mathematical models are developed for combustion parameters (Brake specific fuel consumption (BSFC) and peak cylinder pressure (Pmax)), performance parameter brake thermal efficiency (BTE) and emission parameters (CO, NOx, unburnt HC and smoke) using regression techniques. These regression equations are further utilized for simultaneous optimization of combustion (BSFC, Pmax), performance (BTE) and emission (CO, NOx, HC, smoke) parameters. As the objective is to maximize BTE and minimize BSFC, Pmax, CO, NOx, HC, smoke, a multiobjective optimization problem is formulated. Nondominated sorting genetic algorithm-II is used in predicting the Pareto optimal sets of solution. Experiments are performed at suitable optimal solutions for predicting the combustion, performance and emission parameters to check the adequacy of the proposed model. The Pareto optimal sets of solution can be used as guidelines for the end users to select optimal combination of engine output and emission parameters depending upon their own requirements.
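
    The NSGA-II machinery used here rests on sorting the population into Pareto fronts by dominance. A self-contained sketch of that sorting step for a minimization problem (the toy objective pairs stand in for, e.g., (BSFC, NOx) values; this is the sorting idea only, not the full NSGA-II loop):

    ```python
    def fast_non_dominated_sort(points):
        """NSGA-II-style non-dominated sorting (minimization): returns the
        Pareto fronts as lists of indices, using O(MN^2) bookkeeping of
        domination counts."""
        n = len(points)
        dominates = lambda a, b: all(x <= y for x, y in zip(a, b)) and a != b
        S = [[] for _ in range(n)]   # solutions dominated by i
        counts = [0] * n             # number of solutions dominating i
        fronts = [[]]
        for i in range(n):
            for j in range(n):
                if dominates(points[i], points[j]):
                    S[i].append(j)
                elif dominates(points[j], points[i]):
                    counts[i] += 1
            if counts[i] == 0:
                fronts[0].append(i)
        while fronts[-1]:
            nxt = []
            for i in fronts[-1]:
                for j in S[i]:
                    counts[j] -= 1
                    if counts[j] == 0:
                        nxt.append(j)
            fronts.append(nxt)
        return fronts[:-1]

    # Toy bi-objective points to be minimized.
    pts = [(2, 9), (3, 5), (5, 4), (4, 7), (6, 2), (7, 6)]
    print(fast_non_dominated_sort(pts))  # first front: indices 0, 1, 2, 4
    ```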

  3. Phylogenetic Analyses of Meloidogyne Small Subunit rDNA.

    PubMed

    De Ley, Irma Tandingan; De Ley, Paul; Vierstraete, Andy; Karssen, Gerrit; Moens, Maurice; Vanfleteren, Jacques

    2002-12-01

    Phylogenies were inferred from nearly complete small subunit (SSU) 18S rDNA sequences of 12 species of Meloidogyne and 4 outgroup taxa (Globodera pallida, Nacobbus abberans, Subanguina radicicola, and Zygotylenchus guevarai). Alignments were generated manually from a secondary structure model, and computationally using ClustalX and Treealign. Trees were constructed using distance, parsimony, and likelihood algorithms in PAUP* 4.0b4a. Obtained tree topologies were stable across algorithms and alignments, supporting 3 clades: clade I = [M. incognita (M. javanica, M. arenaria)]; clade II = M. duytsi and M. maritima in an unresolved trichotomy with (M. hapla, M. microtyla); and clade III = (M. exigua (M. graminicola, M. chitwoodi)). Monophyly of [(clade I, clade II) clade III] was given maximal bootstrap support (mbs). M. artiellia was always a sister taxon to this joint clade, while M. ichinohei was consistently placed with mbs as a basal taxon within the genus. Affinities with the outgroup taxa remain unclear, although G. pallida and S. radicicola were never placed as closest relatives of Meloidogyne. Our results show that SSU sequence data are useful in addressing deeper phylogeny within Meloidogyne, and that both M. ichinohei and M. artiellia are credible outgroups for phylogenetic analysis of speciations among the major species.

  4. Identification of GATC- and CCGG-recognizing Type II REases and their putative specificity-determining positions using Scan2S—a novel motif scan algorithm with optional secondary structure constraints

    PubMed Central

    Niv, Masha Y.; Skrabanek, Lucy; Roberts, Richard J.; Scheraga, Harold A.; Weinstein, Harel

    2008-01-01

Restriction endonucleases (REases) are DNA-cleaving enzymes that have become indispensable tools in molecular biology. Type II REases are highly divergent in sequence despite their common structural core, function and, in some cases, common specificities towards DNA sequences. This makes it difficult to identify and classify them functionally based on sequence, and has hampered the efforts of specificity-engineering. Here, we define novel REase sequence motifs, which extend beyond the PD-(D/E)XK hallmark, and incorporate secondary structure information. The automated search using these motifs is carried out with a newly developed fast regular expression matching algorithm that accommodates long patterns with optional secondary structure constraints. Using this new tool, named Scan2S, motifs derived from REases with specificity towards GATC- and CCGG-containing DNA sequences successfully identify REases of the same specificity. Notably, some of these sequences are not identified by standard sequence detection tools. The new motifs highlight potential specificity-determining positions that do not fully overlap for the GATC- and the CCGG-recognizing REases and are candidates for specificity re-engineering. PMID:17972284
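
    Scan2S itself is not shown in the record; the flavour of a long-pattern motif scan can be conveyed with Python's re module. The pattern below is only an illustrative approximation of a PD-(D/E)XK-style motif with a variable-length spacer, not one of the paper's actual motifs, and the secondary-structure constraints are omitted.

    ```python
    import re

    # Illustrative approximation of a PD-(D/E)XK-style pattern; Scan2S
    # additionally constrains predicted secondary structure at marked
    # positions, which plain regular expressions cannot express.
    MOTIF = re.compile(r"[PV]D.{10,30}?[DE].K")

    def scan(seq, name="query"):
        """Report all (possibly overlapping) motif hits in a sequence."""
        hits, pos = [], 0
        while True:
            m = MOTIF.search(seq, pos)
            if not m:
                break
            hits.append((name, m.start(), m.group()))
            pos = m.start() + 1   # allow overlapping matches
        return hits

    toy = "MKTPDAAGGLLVVAATTSSDEKRRLLPDQQWWEERRTTYYDAKGG"
    print(scan(toy))
    ```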

  6. Identification of GATC- and CCGG-recognizing Type II REases and their putative specificity-determining positions using Scan2S--a novel motif scan algorithm with optional secondary structure constraints.

    PubMed

    Niv, Masha Y; Skrabanek, Lucy; Roberts, Richard J; Scheraga, Harold A; Weinstein, Harel

    2008-05-01

    Restriction endonucleases (REases) are DNA-cleaving enzymes that have become indispensable tools in molecular biology. Type II REases are highly divergent in sequence despite their common structural core, function and, in some cases, common specificities towards DNA sequences. This makes it difficult to identify and classify them functionally based on sequence, and has hampered the efforts of specificity-engineering. Here, we define novel REase sequence motifs, which extend beyond the PD-(D/E)XK hallmark, and incorporate secondary structure information. The automated search using these motifs is carried out with a newly developed fast regular expression matching algorithm that accommodates long patterns with optional secondary structure constraints. Using this new tool, named Scan2S, motifs derived from REases with specificity towards GATC- and CCGG-containing DNA sequences successfully identify REases of the same specificity. Notably, some of these sequences are not identified by standard sequence detection tools. The new motifs highlight potential specificity-determining positions that do not fully overlap for the GATC- and the CCGG-recognizing REases and are candidates for specificity re-engineering.
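
    As a rough illustration of the kind of motif scan Scan2S performs, the Python sketch below matches a long regular-expression pattern and optionally rejects hits that violate a secondary-structure constraint. The motif, sequence, and helix annotation are hypothetical; the real tool's pattern language and scoring are richer.

      import re

      # Hypothetical PD-(D/E)XK-like motif written as a regular expression.
      MOTIF = re.compile(r"PD.{10,30}[DE].K")

      def scan(seq, sec_struct=None, required_state=None):
          """Return motif hits; optionally require a secondary-structure state
          (e.g. 'H' for helix) at the position where the match starts."""
          hits = []
          for m in MOTIF.finditer(seq):
              if sec_struct and required_state and sec_struct[m.start()] != required_state:
                  continue  # optional structure constraint not satisfied
              hits.append((m.start(), m.end(), m.group()))
          return hits

      seq = "MK" + "PD" + "A" * 15 + "DLKGG"   # toy protein sequence
      ss  = "CC" + "H" * 19 + "CCC"            # toy helix/coil annotation
      print(scan(seq, ss, required_state="H"))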

  7. An efficient grid layout algorithm for biological networks utilizing various biological attributes

    PubMed Central

    Kojima, Kaname; Nagasaki, Masao; Jeong, Euna; Kato, Mitsuru; Miyano, Satoru

    2007-01-01

    Background Clearly visualized biopathways provide a great help in understanding biological systems. However, manual drawing of large-scale biopathways is time consuming. We proposed a grid layout algorithm that can handle gene-regulatory networks and signal transduction pathways by considering edge-edge crossing, node-edge crossing, distance measure between nodes, and subcellular localization information from Gene Ontology. Consequently, the layout algorithm succeeded in drastically reducing these crossings in the apoptosis model. However, for larger-scale networks, we encountered three problems: (i) the initial layout is often very far from any local optimum because nodes are initially placed at random; (ii) from a biological viewpoint, human layouts are still easier to understand than automatic layouts because, apart from subcellular localization, the algorithm does not fully utilize the biological information of pathways; and (iii) it employs a local search strategy in which the neighborhood is obtained by moving one node at each step, and automatic layouts suggest that simultaneous movements of multiple nodes are necessary for better layouts, although such an extension may worsen the time complexity. Results We propose a new grid layout algorithm. To address problem (i), we devised a new force-directed algorithm whose output is suitable as the initial layout. For (ii), we considered that an appropriate alignment of nodes having the same biological attribute is one of the most important factors for comprehension, and we defined a new score function that gives an advantage to such configurations. For solving problem (iii), we developed a search strategy that considers swapping nodes as well as moving a node, while keeping the order of the time complexity. Although a naïve implementation would increase the time complexity by one order, we avoided this by devising a method that caches the differences between the scores of a layout and its possible updates. Conclusion Layouts of the new grid layout algorithm are compared with those of the previous algorithm and a human layout in an endothelial cell model three times as large as the apoptosis model. The total cost of the result from the new grid layout algorithm is similar to that of the human layout. In addition, its convergence time is drastically reduced (a 40% reduction). PMID:17338825
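
    To make the swap-plus-delta idea concrete, here is a toy Python sketch of a grid local search whose move evaluation re-scores only the edges incident to the moved or swapped nodes (the quantity the paper caches). The single squared-edge-length score is a stand-in for the paper's multi-term cost function.

      import random

      def edge_len2(pos, u, v):
          # squared Manhattan length of one edge (stand-in for the real score)
          return (abs(pos[u][0] - pos[v][0]) + abs(pos[u][1] - pos[v][1])) ** 2

      def incident_cost(n, pos, edges):
          # only edges touching n can change when n is moved or swapped
          return sum(edge_len2(pos, u, v) for u, v in edges if n in (u, v))

      def local_search(pos, edges, free_cells, iters=20000):
          nodes = list(pos)
          for _ in range(iters):
              if free_cells and random.random() < 0.5:        # move one node
                  n = random.choice(nodes)
                  old, new = pos[n], random.choice(free_cells)
                  before = incident_cost(n, pos, edges)
                  pos[n] = new
                  if incident_cost(n, pos, edges) >= before:  # delta did not improve
                      pos[n] = old                            # reject the move
                  else:
                      free_cells[free_cells.index(new)] = old
              else:                                           # swap two nodes
                  a, b = random.sample(nodes, 2)
                  before = incident_cost(a, pos, edges) + incident_cost(b, pos, edges)
                  pos[a], pos[b] = pos[b], pos[a]
                  after = incident_cost(a, pos, edges) + incident_cost(b, pos, edges)
                  if after >= before:
                      pos[a], pos[b] = pos[b], pos[a]         # reject the swap
          return pos

      pos = {"A": (0, 0), "B": (3, 3), "C": (0, 3)}
      print(local_search(pos, [("A", "B"), ("B", "C")], free_cells=[(1, 0), (1, 1)]))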

  8. Evaluating the Global Precipitation Measurement mission with NOAA/NSSL Multi-Radar Multisensor: current status and future directions.

    NASA Astrophysics Data System (ADS)

    Kirstetter, P. E.; Petersen, W. A.; Gourley, J. J.; Kummerow, C.; Huffman, G. J.; Turk, J.; Tanelli, S.; Maggioni, V.; Anagnostou, E. N.; Hong, Y.; Schwaller, M.

    2017-12-01

    Accurate characterization of uncertainties in space-borne precipitation estimates is critical for many applications, including water budget studies or prediction of natural hazards at the global scale. The GPM precipitation Level II (active and passive) and Level III (IMERG) estimates are compared to the high-quality and high-resolution NEXRAD-based precipitation estimates derived from the NOAA/NSSL Multi-Radar Multi-Sensor (MRMS) platform. A surface reference is derived from the MRMS suite of products to be accurate with known uncertainty bounds and measured at a resolution below the pixel sizes of any GPM estimate, providing great flexibility in matching to grid scales or footprints. It provides an independent and consistent reference research framework for directly evaluating GPM precipitation products across a large number of meteorological regimes as a function of resolution, accuracy and sample size. The consistency of the ground- and space-based sensors in terms of precipitation detection, typology and quantification is systematically evaluated. Satellite precipitation retrievals are further investigated in terms of precipitation distributions, systematic biases and random errors, influence of precipitation sub-pixel variability, and comparison between satellite products. Prognostic analysis directly provides feedback to algorithm developers on how to improve the satellite estimates. Specific factors for passive (e.g. surface conditions for GMI) and active (e.g. non-uniform beam filling for DPR) sensors are investigated. This cross-product characterization acts as a bridge to intercalibrate microwave measurements from the GPM constellation satellites and propagates to the combined and global precipitation estimates. Precipitation features previously used to analyze Level II satellite estimates under various precipitation processes are now introduced for Level III to test several assumptions in the IMERG algorithm. Specifically, the contribution of Level II is explicitly characterized and a rigorous characterization is performed to migrate across scales, fully understanding the propagation of errors from Level II to Level III. Perspectives are presented to advance the use of uncertainty as an integral part of QPE for ground-based and space-borne sensors.

  9. Ordered Backward XPath Axis Processing against XML Streams

    NASA Astrophysics Data System (ADS)

    Nizar M., Abdul; Kumar, P. Sreenivasa

    Processing of backward XPath axes against XML streams is challenging for two reasons: (i) data is not cached for future access, and (ii) the query contains steps specifying navigation to data that has already passed by. While there are some attempts to process parent and ancestor axes, there are very few proposals to process the ordered backward axes, namely preceding and preceding-sibling. For ordered backward axis processing, the algorithm, in addition to overcoming the limitations on data availability, has to take care of the ordering constraints imposed by these axes. In this paper, we show how ordered backward axes can be effectively represented using forward constraints. We then discuss an algorithm for XML stream processing of XPath expressions containing ordered backward axes. The algorithm uses a layered cache structure to systematically accumulate query results. Our experiments show that the new algorithm gains a remarkable speed-up over the existing algorithm without compromising on buffer-space requirements.

  10. 160-fold acceleration of the Smith-Waterman algorithm using a field programmable gate array (FPGA)

    PubMed Central

    Li, Isaac TS; Shum, Warren; Truong, Kevin

    2007-01-01

    Background To infer homology and subsequently gene function, the Smith-Waterman (SW) algorithm is used to find the optimal local alignment between two sequences. When searching sequence databases that may contain hundreds of millions of sequences, this algorithm becomes computationally expensive. Results In this paper, we focused on accelerating the Smith-Waterman algorithm by using FPGA-based hardware that implemented a module for computing the score of a single cell of the SW matrix. Then, using a grid of this module, the entire SW matrix was computed at the speed of field propagation through the FPGA circuit. These modifications dramatically accelerated the algorithm's computation time by up to 160-fold compared to a pure software implementation running on the same FPGA with an Altera Nios II soft processor. Conclusion This design of FPGA-accelerated hardware offers a promising new direction for improving the computation speed of genomic database searching. PMID:17555593

  11. 160-fold acceleration of the Smith-Waterman algorithm using a field programmable gate array (FPGA).

    PubMed

    Li, Isaac T S; Shum, Warren; Truong, Kevin

    2007-06-07

    To infer homology and subsequently gene function, the Smith-Waterman (SW) algorithm is used to find the optimal local alignment between two sequences. When searching sequence databases that may contain hundreds of millions of sequences, this algorithm becomes computationally expensive. In this paper, we focused on accelerating the Smith-Waterman algorithm by using FPGA-based hardware that implemented a module for computing the score of a single cell of the SW matrix. Then, using a grid of this module, the entire SW matrix was computed at the speed of field propagation through the FPGA circuit. These modifications dramatically accelerated the algorithm's computation time by up to 160-fold compared to a pure software implementation running on the same FPGA with an Altera Nios II soft processor. This design of FPGA-accelerated hardware offers a promising new direction for improving the computation speed of genomic database searching.
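
    For reference, the recurrence that each hardware cell evaluates is easy to state in software. A minimal Python version of the SW scoring follows (linear gap penalty assumed, traceback omitted); each H[i][j] is what one FPGA cell module computes.

      def smith_waterman(a, b, match=2, mismatch=-1, gap=-1):
          """Plain software reference of the Smith-Waterman local-alignment score."""
          H = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
          best = 0
          for i in range(1, len(a) + 1):
              for j in range(1, len(b) + 1):
                  s = match if a[i-1] == b[j-1] else mismatch
                  H[i][j] = max(0,
                                H[i-1][j-1] + s,   # substitution or match
                                H[i-1][j] + gap,   # gap in sequence b
                                H[i][j-1] + gap)   # gap in sequence a
                  best = max(best, H[i][j])
          return best

      print(smith_waterman("ACACACTA", "AGCACACA"))  # small example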

  12. SCREEN: A simple layperson administered screening algorithm in low resource international settings significantly reduces waiting time for critically ill children in primary healthcare clinics.

    PubMed

    Hansoti, Bhakti; Jenson, Alexander; Kironji, Antony G; Katz, Joanne; Levin, Scott; Rothman, Richard; Kelen, Gabor D; Wallis, Lee A

    2017-01-01

    In low resource settings, an inadequate number of trained healthcare workers and high volumes of children presenting to Primary Healthcare Centers (PHC) result in prolonged waiting times and significant delays in identifying and evaluating critically ill children. The Sick Children Require Emergency Evaluation Now (SCREEN) program, a simple six-question screening algorithm administered by lay healthcare workers (queue marshals, QMs), was developed in 2014 to rapidly identify critically ill children and to expedite their care at the point of entry into a clinic. We sought to determine the impact of SCREEN on waiting times for critically ill children after real-world implementation in Cape Town, South Africa. This is a prospective, observational implementation-effectiveness hybrid study that sought to determine: (1) the impact of SCREEN implementation on waiting times as a primary outcome measure, and (2) the effectiveness of the SCREEN tool in accurately identifying critically ill children when utilised by the QM, and the QM's adherence to the SCREEN algorithm, as secondary outcome measures. The study was conducted in two phases: Phase I, control (pre-SCREEN implementation; three months in 2014), and Phase II (post-SCREEN implementation; two distinct three-month periods in 2016). In Phase I, 1600 (92.38%) of 1732 children presenting to 4 clinics had sufficient data for analysis and comprised the control sample. In Phase II, all 3383 of the children presenting to the 26 clinics during the sampling time frame had sufficient data for analysis. The proportion of critically ill children who saw a professional nurse within 10 minutes increased tenfold from 6.4% to 64% (Phase I to Phase II), with the median time to seeing a professional nurse reduced from 100.3 minutes to 4.9 minutes (p < .001). Overall, layperson screening compared to Integrated Management of Childhood Illnesses (IMCI) designation by a nurse had a sensitivity of 94.2% and a specificity of 88.1%, despite large variance in adherence to the SCREEN algorithm across clinics. The SCREEN program, when implemented in a real-world setting, can significantly reduce waiting times for critically ill children in PHCs; however, further work is required to improve the implementation of this innovative program.
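
    The reported accuracy figures reduce to standard confusion-matrix arithmetic. A small Python sketch with illustrative counts (chosen to reproduce the reported percentages, not the study's raw numbers):

      def sens_spec(tp, fn, tn, fp):
          # sensitivity: critically ill children flagged by SCREEN / all critically ill
          # specificity: non-urgent children not flagged / all non-urgent children
          return tp / (tp + fn), tn / (tn + fp)

      sens, spec = sens_spec(tp=97, fn=6, tn=2960, fp=400)   # illustrative counts only
      print(f"sensitivity {sens:.1%}, specificity {spec:.1%}")  # 94.2%, 88.1%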

  13. A framework for porting the NeuroBayes machine learning algorithm to FPGAs

    NASA Astrophysics Data System (ADS)

    Baehr, S.; Sander, O.; Heck, M.; Feindt, M.; Becker, J.

    2016-01-01

    The NeuroBayes machine learning algorithm is deployed for online data reduction at the pixel detector of Belle II. In order to test, characterize and easily adapt its implementation on FPGAs, a framework was developed. Within the framework, an HDL model written in Python using MyHDL is used for fast exploration of possible configurations. Using input data from physics simulations, figures of merit such as throughput, accuracy and resource demand of the implementation are evaluated in a fast and flexible way. Functional validation is supported by unit tests and HDL simulation for chosen configurations.
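
    The framework itself is not reproduced here, but the accuracy figure of merit for a fixed-point FPGA port can be illustrated in plain numpy. The layer shape, bit width, and tanh activation below are hypothetical stand-ins, not the NeuroBayes implementation:

      import numpy as np

      def quantize(x, frac_bits=8):
          # round to a signed fixed-point grid with resolution 2**-frac_bits
          return np.round(x * 2**frac_bits) / 2**frac_bits

      rng = np.random.default_rng(0)
      W = rng.normal(size=(16, 8))            # hypothetical layer weights
      x = rng.normal(size=(100, 16))          # simulated input features

      y_float = np.tanh(x @ W)                # floating-point reference
      y_fixed = np.tanh(quantize(x) @ quantize(W))

      # accuracy figure of merit: deviation introduced by the fixed-point port
      print("max abs error:", np.abs(y_float - y_fixed).max())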

  14. A combined direct/inverse three-dimensional transonic wing design method for vector computers

    NASA Technical Reports Server (NTRS)

    Weed, R. A.; Carlson, L. A.; Anderson, W. K.

    1984-01-01

    A three-dimensional transonic-wing design algorithm for vector computers is developed, and the results of sample computations are presented graphically. The method incorporates the direct/inverse scheme of Carlson (1975), a Cartesian grid system with boundary conditions applied at a mean plane, and a potential-flow solver based on the conservative form of the full potential equation and using the ZEBRA II vectorizable solution algorithm of South et al. (1980). The accuracy and consistency of the method with regard to direct and inverse analysis and trailing-edge closure are verified in the test computations.

  15. Differentiation of Candida albicans, Candida glabrata, and Candida krusei by FT-IR and chemometrics by CHROMagar™ Candida.

    PubMed

    Wohlmeister, Denise; Vianna, Débora Renz Barreto; Helfer, Virginia Etges; Calil, Luciane Noal; Buffon, Andréia; Fuentefria, Alexandre Meneghello; Corbellini, Valeriano Antonio; Pilger, Diogo André

    2017-10-01

    Pathogenic Candida species are detected in clinical infections. CHROMagar™ is a phenotypical method used to identify Candida species, although it has limitations, which indicates the need for more sensitive and specific techniques. Fourier-transform infrared (FT-IR) spectroscopy is an analytical vibrational technique used to identify patterns in the metabolic fingerprint of biological matrixes, particularly whole microbial cell systems such as Candida sp., in association with classificatory chemometric algorithms. Soft Independent Modeling by Class Analogy (SIMCA) is one such algorithm, still little employed in microbiological classification. This study demonstrates the applicability of the FT-IR technique by specular reflectance, associated with SIMCA, to discriminate Candida species isolated from vaginal discharges and grown on CHROMagar™. The differences in the spectra of C. albicans, C. glabrata and C. krusei were suitable for use in the discrimination of these species, which was observed by PCA. A SIMCA model was then constructed with standard samples of the three species using the spectral region of 1792-1561 cm(-1). All samples (n=48) were properly classified based on the chromogenic method using CHROMagar™ Candida. In total, 93.4% (n=45) of the samples were correctly and unambiguously classified (Class I). Two samples of C. albicans were classified correctly, though these could also have been C. glabrata (Class II). Also, one C. glabrata sample could have been classified as C. krusei (Class II). Concerning these three samples, one triplicate of each was included in Class II and two in Class I. Therefore, FT-IR associated with SIMCA can be used to identify samples of C. albicans, C. glabrata, and C. krusei grown on CHROMagar™ Candida, aiming to improve clinical applications of this technique. Copyright © 2017 Elsevier B.V. All rights reserved.
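
    SIMCA fits a separate principal-component model to each class and assigns a new spectrum by its residual distance to each class subspace. A minimal numpy sketch of that core step follows; the component count is fixed arbitrarily at 3, and the full method also sets per-class acceptance limits rather than simply taking the nearest model:

      import numpy as np

      def fit_class_model(X, k=3):
          # principal axes of one species' training spectra via SVD
          mu = X.mean(axis=0)
          _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
          return mu, Vt[:k]

      def residual(x, model):
          mu, P = model
          d = x - mu
          return np.linalg.norm(d - P.T @ (P @ d))   # distance to the class subspace

      def simca_classify(x, models, names):
          dists = [residual(x, m) for m in models]
          return names[int(np.argmin(dists))]        # nearest class model wins

      # usage: models = [fit_class_model(X_albicans), fit_class_model(X_glabrata), ...]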

  16. SAGE III solar ozone measurements: Initial results

    NASA Technical Reports Server (NTRS)

    Wang, Hsiang-Jui; Cunnold, Derek M.; Trepte, Chip; Thomason, Larry W.; Zawodny, Joseph M.

    2006-01-01

    Results from two retrieval algorithms, o3-aer and o3-mlr, used for SAGE III solar occultation ozone measurements in the stratosphere and upper troposphere are compared. The main differences between the two retrieved (version 3.0) ozone products are found at altitudes above 40 km and below 15 km. Compared to correlative measurements, the SAGE II type ozone retrievals (o3-aer) provide better precision above 40 km and do not induce artificial hemispheric differences in upper stratospheric ozone. The multiple linear regression technique (o3-mlr), however, can yield slightly more accurate ozone (by a few percent) in the lower stratosphere and upper troposphere. By using SAGE III (version 3.0) ozone from both algorithms, each in its preferred region, the agreement between SAGE III and correlative measurements is shown to be approx. 5% down to 17 km. Below 17 km, SAGE III ozone values are systematically higher, by 10% at 13 km, and a small hemispheric difference (a few percent) appears. Compared to SAGE III and HALOE, SAGE II ozone has the best accuracy in the lowest few kilometers of the stratosphere. The estimated precision of SAGE III ozone is about 5% or better between 20 and 40 km and approx. 10% at 50 km. The precision below 20 km is difficult to evaluate because of limited coincidences between SAGE III and sondes. SAGE III ozone values are systematically slightly larger (2-3%) than those from SAGE II, but the profile shapes are remarkably similar for altitudes above 15 km. There is no evidence of any relative drift or time-dependent differences between these two instruments for altitudes above 15-20 km.

  17. Spectral Confusion for Cosmological Surveys of Redshifted C II Emission

    NASA Technical Reports Server (NTRS)

    Kogut, A.; Dwek, E.; Moseley, S. H.

    2015-01-01

    Far-infrared cooling lines are ubiquitous features in the spectra of star-forming galaxies. Surveys of redshifted fine-structure lines provide a promising new tool to study structure formation and galactic evolution at redshifts including the epoch of reionization as well as the peak of star formation. Unlike neutral hydrogen surveys, where the 21 cm line is the only bright line, surveys of redshifted fine-structure lines suffer from confusion generated by line broadening, spectral overlap of different lines, and the crowding of sources with redshift. We use simulations to investigate the resulting spectral confusion and derive observing parameters to minimize these effects in pencil-beam surveys of redshifted far-IR line emission. We generate simulated spectra of the 17 brightest far-IR lines in galaxies, covering the 150-1300 µm wavelength region corresponding to redshifts 0 < z < 7, and develop a simple iterative algorithm that successfully identifies the 158 µm [C II] line and other lines. Although the [C II] line is a principal coolant for the interstellar medium, the assumption that the brightest observed lines in a given line of sight are always [C II] lines is a poor approximation to the simulated spectra once other lines are included. Blind line identification requires detection of fainter companion lines from the same host galaxies, driving survey sensitivity requirements. The observations require moderate spectral resolution 700 < R < 4000 with angular resolution between 20″ and 10', sufficiently narrow to minimize confusion yet sufficiently large to include a statistically meaningful number of sources.
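
    The iterative identification step can be caricatured in a few lines: try each observed line as [C II] and keep the trial redshift that also explains the most companion lines. The rest wavelengths below are approximate values for a few bright far-IR lines, and the test spectrum is invented:

      # Rest wavelengths (micron); the 158 micron [C II] line is the anchor.
      REST = {"[CII]158": 157.7, "[OI]63": 63.2, "[OIII]88": 88.4, "[NII]122": 121.9}

      def identify(obs_lines, tol=0.005):
          """Toy version of the iterative scheme: assign each observed line as
          [C II], then count how many lines the implied redshift explains."""
          best = (0, -1.0)
          for lam in obs_lines:
              z = lam / REST["[CII]158"] - 1.0
              if z < 0:
                  continue
              hits = sum(any(abs(l / (1 + z) - r) / r < tol for r in REST.values())
                         for l in obs_lines)
              best = max(best, (hits, round(z, 3)))
          return best  # (number of lines explained, redshift)

      print(identify([409.9, 164.3, 229.8]))  # lines of a hypothetical z = 1.6 galaxy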

  18. Statins, the renin-angiotensin-aldosterone system and hypertension - a tale of another beneficial effect of statins.

    PubMed

    Drapala, Adrian; Sikora, Mariusz; Ufnal, Marcin

    2014-09-01

    Statins, a class of lipid lowering drugs, decrease mortality associated with cardiovascular events. As hypercholesterolemia is often accompanied by hypertension, a large number of patients receive therapy with statins and antihypertensive drugs which act via the renin-angiotensin-aldosterone system (RAAS). New guidelines published by the American Heart Association and American College of Cardiology on the treatment of dyslipidaemia and the reduction of atherosclerotic cardiovascular risk, which use a risk prediction algorithm based on risk factors such as hypertension but not low-density lipoprotein (LDL) level, may even further increase the number of patients receiving such concomitant therapy. In this paper we review studies on the interaction between statins, the RAAS and antihypertensive drugs acting via the RAAS. Accumulating evidence suggests that the combination of statins and drugs affecting the RAAS exerts a synergistic effect on the circulatory system. For example, statins may lower arterial blood pressure and augment the effect of antihypertensive drugs acting via the RAAS. Statins may interact with the RAAS in a number of ways, i.e. by decreasing the expression of receptors for angiotensin II (Ang II), inhibiting Ang II-dependent intracellular signalling, reducing RAAS-dependent oxidative stress and inflammation, and inhibiting the synthesis of Ang II and aldosterone. Although statins given either alone or together with antihypertensive drugs acting via the RAAS may lower arterial blood pressure, further research is needed to evaluate the mechanisms and their therapeutic significance. © The Author(s) 2014.

  19. Free metal ion depletion by "Good's" buffers. III. N-(2-acetamido)iminodiacetic acid, 2:1 complexes with zinc(II), cobalt(II), nickel(II), and copper(II); amide deprotonation by Zn(II), Co(II), and Cu(II).

    PubMed

    Lance, E A; Rhodes, C W; Nakon, R

    1983-09-01

    Potentiometric, visible, infrared, electron spin, and nuclear magnetic resonance studies of the complexation of N-(2-acetamido)iminodiacetic acid (H2ADA) by Ca(II), Mg(II), Mn(II), Zn(II), Co(II), Ni(II), and Cu(II) are reported. Ca(II) and Mg(II) were found not to form 2:1 ADA(2-) to M(II) complexes, while Mn(II), Cu(II), Ni(II), Zn(II), and Co(II) did form 2:1 metal chelates at or below physiological pH values. Co(II) and Zn(II), but not Cu(II), were found to induce stepwise deprotonation of the amide groups to form [M(H-1ADA)2](4-). Formation (affinity) constants for the various metal complexes are reported, and the probable structures of the various metal chelates in solution are discussed on the basis of various spectral data.

  20. Metabolic Surgery in the Treatment Algorithm for Type 2 Diabetes: A Joint Statement by International Diabetes Organizations.

    PubMed

    Rubino, Francesco; Nathan, David M; Eckel, Robert H; Schauer, Philip R; Alberti, K George M M; Zimmet, Paul Z; Del Prato, Stefano; Ji, Linong; Sadikot, Shaukat M; Herman, William H; Amiel, Stephanie A; Kaplan, Lee M; Taroncher-Oldenburg, Gaspar; Cummings, David E

    2016-06-01

    Despite growing evidence that bariatric/metabolic surgery powerfully improves type 2 diabetes (T2D), existing diabetes treatment algorithms do not include surgical options. The 2nd Diabetes Surgery Summit (DSS-II), an international consensus conference, was convened in collaboration with leading diabetes organizations to develop global guidelines to inform clinicians and policymakers about benefits and limitations of metabolic surgery for T2D. A multidisciplinary group of 48 international clinicians/scholars (75% nonsurgeons), including representatives of leading diabetes organizations, participated in DSS-II. After evidence appraisal (MEDLINE [1 January 2005-30 September 2015]), three rounds of Delphi-like questionnaires were used to measure consensus for 32 data-based conclusions. These drafts were presented at the combined DSS-II and 3rd World Congress on Interventional Therapies for Type 2 Diabetes (London, U.K., 28-30 September 2015), where they were open to public comment by other professionals and amended face-to-face by the Expert Committee. Given its role in metabolic regulation, the gastrointestinal tract constitutes a meaningful target to manage T2D. Numerous randomized clinical trials, albeit mostly short/midterm, demonstrate that metabolic surgery achieves excellent glycemic control and reduces cardiovascular risk factors. On the basis of such evidence, metabolic surgery should be recommended to treat T2D in patients with class III obesity (BMI ≥40 kg/m(2)) and in those with class II obesity (BMI 35.0-39.9 kg/m(2)) when hyperglycemia is inadequately controlled by lifestyle and optimal medical therapy. Surgery should also be considered for patients with T2D and BMI 30.0-34.9 kg/m(2) if hyperglycemia is inadequately controlled despite optimal treatment with either oral or injectable medications. These BMI thresholds should be reduced by 2.5 kg/m(2) for Asian patients. Although additional studies are needed to further demonstrate long-term benefits, there is sufficient clinical and mechanistic evidence to support inclusion of metabolic surgery among antidiabetes interventions for people with T2D and obesity. To date, the DSS-II guidelines have been formally endorsed by 45 worldwide medical and scientific societies. Health care regulators should introduce appropriate reimbursement policies. © 2016 by the American Diabetes Association. Readers may use this article as long as the work is properly cited, the use is educational and not for profit, and the work is not altered.
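
    Because the BMI thresholds are stated explicitly, the core of the recommendation can be written as a small decision rule. The Python sketch below encodes only the thresholds quoted above; real guidance adds clinical judgement and treatment history beyond this single rule.

      def dss2_surgery_advice(bmi, inadequately_controlled, asian=False):
          """Sketch of the DSS-II BMI thresholds; not a substitute for the guidelines."""
          shift = 2.5 if asian else 0.0          # thresholds lowered for Asian patients
          if bmi >= 40.0 - shift:                # class III: recommend regardless
              return "recommend metabolic surgery"
          if bmi >= 35.0 - shift:                # class II: recommend if uncontrolled
              return ("recommend metabolic surgery" if inadequately_controlled
                      else "not covered by this rule")
          if bmi >= 30.0 - shift:                # class I: consider if uncontrolled
              return ("consider metabolic surgery" if inadequately_controlled
                      else "not indicated")
          return "not indicated"

      print(dss2_surgery_advice(bmi=33.0, inadequately_controlled=True, asian=True))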

  1. MATLAB-implemented estimation procedure for model-based assessment of hepatic insulin degradation from standard intravenous glucose tolerance test data.

    PubMed

    Di Nardo, Francesco; Mengoni, Michele; Morettini, Micaela

    2013-05-01

    The present study provides a novel MATLAB-based parameter estimation procedure for the individual assessment of the hepatic insulin degradation (HID) process from standard frequently-sampled intravenous glucose tolerance test (FSIGTT) data. Direct access to the source code, offered by MATLAB, enabled us to design an optimization procedure based on the alternating use of Gauss-Newton's and Levenberg-Marquardt's algorithms, which assures the full convergence of the process and the containment of computational time. Reliability was tested by direct comparison with the application, in eighteen non-diabetic subjects, of the well-known kinetic analysis software package SAAM II, and by application to different data. Agreement between MATLAB and SAAM II was supported by intraclass correlation coefficients ≥0.73, no significant differences between corresponding mean parameter estimates and predictions of HID rate, and consistent residual analysis. Moreover, the MATLAB optimization procedure resulted in a significant 51% reduction of CV% for the parameter worst-estimated by SAAM II and maintained all model-parameter CV% <20%. In conclusion, our MATLAB-based procedure is suggested as a suitable tool for the individual assessment of the HID process. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
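
    The alternating strategy is easy to sketch: take a Gauss-Newton step when it reduces the residual sum of squares, otherwise fall back to a damped Levenberg-Marquardt step. The Python toy below (numeric Jacobian, exponential-decay fit) illustrates this control flow only and is not the authors' MATLAB code:

      import numpy as np

      def jac(r, p, h=1e-6):
          # forward-difference Jacobian of the residual vector r(p)
          r0 = r(p)
          return np.column_stack([(r(p + h * e) - r0) / h for e in np.eye(len(p))])

      def fit(r, p, iters=50, lam=1e-3):
          for _ in range(iters):
              J, r0 = jac(r, p), r(p)
              g, H = J.T @ r0, J.T @ J
              step = np.linalg.solve(H, -g)                 # Gauss-Newton step
              if np.sum(r(p + step)**2) >= np.sum(r0**2):   # GN failed: damp it
                  step = np.linalg.solve(H + lam * np.eye(len(p)), -g)  # LM step
                  lam *= 10
              else:
                  lam = max(lam / 10, 1e-12)
              p = p + step
          return p

      # toy exponential-decay fit: recover amplitude 2.0 and rate 1.3
      t = np.linspace(0, 4, 30); y = 2.0 * np.exp(-1.3 * t)
      res = lambda p: p[0] * np.exp(-p[1] * t) - y
      print(fit(res, np.array([1.0, 0.5])))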

  2. Integral field spectroscopy of a sample of nearby galaxies. II. Properties of the H ii regions

    NASA Astrophysics Data System (ADS)

    Sánchez, S. F.; Rosales-Ortega, F. F.; Marino, R. A.; Iglesias-Páramo, J.; Vílchez, J. M.; Kennicutt, R. C.; Díaz, A. I.; Mast, D.; Monreal-Ibero, A.; García-Benito, R.; Bland-Hawthorn, J.; Pérez, E.; González Delgado, R.; Husemann, B.; López-Sánchez, Á. R.; Cid Fernandes, R.; Kehrig, C.; Walcher, C. J.; Gil de Paz, A.; Ellis, S.

    2012-10-01

    We analyse the spectroscopic properties of thousands of H ii regions identified in 38 face-on spiral galaxies. All galaxies were observed out to 2.4 effective radii using integral field spectroscopy (IFS) over the wavelength range ~3700 to ~6900 Å. The near uniform sample has been assembled from the PPAK IFS Nearby Galaxy (PINGS) survey and a sample described in Paper I. We develop a new automatic procedure to detect H ii regions, based on the contrast of the Hα intensity maps extracted from the datacubes. Once detected, the algorithm provides us with the integrated spectra of each individual segmented region. In total, we derive good quality spectroscopic information for ~2600 independent H ii regions/complexes. This is by far the largest H ii region survey of its kind. Our selection criteria and the use of 3D spectroscopy guarantee that we cover the regions in an unbiased way. A well-tested automatic decoupling procedure has been applied to remove the underlying stellar population, deriving the main properties (intensity, dispersion and velocity) of the strongest emission lines in the considered wavelength range (covering from [O ii] λ3727 to [S ii] λ6731). A final catalogue of the spectroscopic properties of H ii regions has been created for each galaxy, which includes information on morphology, spiral structure, gas kinematics, and surface brightness of the underlying stellar population. In the current study, we focus on understanding the average properties of the H ii regions and their radial distributions. We find a significant change in the ionisation characteristics of H ii regions within r < 0.25 re due to contamination from sources with different ionising characteristics, as we discuss. We find that the gas-phase oxygen abundance and the Hα equivalent width present a negative and positive gradient, respectively. The distribution of slopes is statistically compatible with a random Gaussian distribution around the mean value, if the radial distances are measured in units of the respective effective radius. No difference in the slope is found for galaxies of different morphologies, e.g. barred/non-barred, grand-design/flocculent. Therefore, the effective radius is a universal scale length for gradients in the evolution of galaxies. Some properties have a large variance across each object and between galaxies (e.g. electron density) without a clear characteristic value, but other properties are well described by an average value either galaxy by galaxy or among the different galaxies (e.g. dust attenuation). Based on observations collected at the Centro Astronómico Hispano Alemán (CAHA) at Calar Alto, operated jointly by the Max-Planck Institut für Astronomie and the Instituto de Astrofísica de Andalucía (CSIC). Appendices are available in electronic form at http://www.aanda.org. Catalogues are only available at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/546/A2

  3. Enhancing the performance of MOEAs: an experimental presentation of a new fitness guided mutation operator

    NASA Astrophysics Data System (ADS)

    Liagkouras, K.; Metaxiotis, K.

    2017-01-01

    Multi-objective evolutionary algorithms (MOEAs) are currently a dynamic field of research that has attracted considerable attention. Mutation operators have been utilized by MOEAs as variation mechanisms. In particular, polynomial mutation (PLM) is one of the most popular variation mechanisms and has been utilized by many well-known MOEAs. In this paper, we revisit the PLM operator and we propose a fitness-guided version of the PLM. Experimental results obtained by non-dominated sorting genetic algorithm II and strength Pareto evolutionary algorithm 2 show that the proposed fitness-guided mutation operator outperforms the classical PLM operator, based on different performance metrics that evaluate both the proximity of the solutions to the Pareto front and their dispersion on it.

  4. Fast Optimization for Aircraft Descent and Approach Trajectory

    NASA Technical Reports Server (NTRS)

    Luchinsky, Dmitry G.; Schuet, Stefan; Brenton, J.; Timucin, Dogan; Smith, David; Kaneshige, John

    2017-01-01

    We address the problem of on-line scheduling of aircraft descent and approach trajectories. We formulate a general multiphase optimal control problem for optimization of the descent trajectory and review available methods for its solution. We develop a fast algorithm for the solution of this problem using two key components: (i) fast inference of the dynamical and control variables of the descending trajectory from low-dimensional flight profile data, and (ii) efficient local search for the resulting reduced-dimensionality non-linear optimization problem. We compare the performance of the proposed algorithm with a numerical solution obtained using the General Pseudospectral Optimal Control Software toolbox. We present results of the solution of the scheduling problem for aircraft descent using the novel fast algorithm and discuss its future applications.
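
    Component (ii) amounts to a smooth low-dimensional optimization. As a toy stand-in only, the sketch below parameterizes a profile by three knots and polishes them with a local gradient-based solver; the cost function and bounds are invented and bear no relation to the paper's flight dynamics:

      import numpy as np
      from scipy.optimize import minimize

      def cost(knots):
          # smoothness of an interpolated profile plus a terminal condition
          profile = np.interp(np.linspace(0, 1, 50), [0.0, 0.5, 1.0], knots)
          return np.sum(np.diff(profile)**2) + (profile[-1] - 0.3)**2

      res = minimize(cost, x0=[0.8, 0.6, 0.5], method="SLSQP",
                     bounds=[(0.2, 1.0)] * 3)
      print(res.x)   # polished knot values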

  5. Irrigation water allocation optimization using multi-objective evolutionary algorithm (MOEA) - a review

    NASA Astrophysics Data System (ADS)

    Fanuel, Ibrahim Mwita; Mushi, Allen; Kajunguri, Damian

    2018-03-01

    This paper analyzes more than 40 papers in a restricted application area of the Multi-Objective Genetic Algorithm, the Non-Dominated Sorting Genetic Algorithm-II and Multi-Objective Differential Evolution (MODE) for solving multi-objective problems in agricultural water management. The paper focuses on different application aspects, which include water allocation, irrigation planning, crop pattern and allocation of available land. The performance and results of these techniques are discussed. The review finds that there is potential to use MODE to analyze multi-objective problems; its application is all the more attractive because it is a simple yet powerful technique compared with other evolutionary algorithms. The paper concludes with a promising new trend of research that demands effective use of MODE: the inclusion of benefits derived from farm byproducts and of production costs into the model.

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Li; Gu, Chun; Xu, Lixin, E-mail: xulixin@ustc.edu.cn

    The self-adapting algorithms are improved to optimize a beam configuration in a direct-drive laser fusion system with solid state lasers. A configuration of 32 laser beams is proposed for achieving high-uniformity illumination, with a root-mean-square deviation at the 10(-4) level. In our optimization, parameters such as beam number, beam arrangement, and beam intensity profile are taken into account. The robustness of the illumination uniformity versus parameters such as intensity profile deviations, power imbalance, intensity profile noise, pointing error, and target position error is also discussed. In this study, the model assumes solid-sphere illumination, and refraction effects of the incident light on the corona are not considered. Our results may have a potential application in the design of the direct-drive laser fusion of the Shen Guang-II Upgrading facility (SG-II-U, China).
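
    The figure of merit being minimized can be illustrated directly: the relative RMS nonuniformity of the summed illumination over a sampled sphere for a given beam set. In the numpy sketch below, the cosine-weighted beam footprint and the random beam directions are simplifying assumptions, not the paper's illumination model:

      import numpy as np

      rng = np.random.default_rng(1)
      # hypothetical beam directions: 32 unit vectors (random here; the paper optimizes them)
      beams = rng.normal(size=(32, 3)); beams /= np.linalg.norm(beams, axis=1, keepdims=True)

      # sample points on the target sphere
      pts = rng.normal(size=(4000, 3)); pts /= np.linalg.norm(pts, axis=1, keepdims=True)

      # toy intensity: each beam illuminates the facing hemisphere with cosine weighting
      cosang = np.clip(pts @ beams.T, 0, None)
      I = cosang.sum(axis=1)

      rms = np.sqrt(np.mean((I / I.mean() - 1.0)**2))
      print(f"relative RMS nonuniformity: {rms:.3e}")   # figure of merit to minimize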

  7. Design of a minimally constraining, passively supported gait training exoskeleton: ALEX II.

    PubMed

    Winfree, Kyle N; Stegall, Paul; Agrawal, Sunil K

    2011-01-01

    This paper discusses the design of a new, minimally constraining, passively supported gait training exoskeleton known as ALEX II. This device builds on the success and extends the features of the ALEX I device developed at the University of Delaware. Both ALEX (Active Leg EXoskeleton) devices have been designed to supply a controllable torque to a subject's hip and knee joint. The current control strategy makes use of an assist-as-needed algorithm. Following a brief review of previous work motivating this redesign, we discuss the key mechanical features of the new ALEX device. A short investigation was conducted to evaluate the effectiveness of the control strategy and impact of the exoskeleton on the gait of six healthy subjects. This paper concludes with a comparison between the subjects' gait both in and out of the exoskeleton. © 2011 IEEE

  8. A recursively formulated first-order semianalytic artificial satellite theory based on the generalized method of averaging. Volume 1: The generalized method of averaging applied to the artificial satellite problem

    NASA Technical Reports Server (NTRS)

    Mcclain, W. D.

    1977-01-01

    A recursively formulated, first-order, semianalytic artificial satellite theory, based on the generalized method of averaging is presented in two volumes. Volume I comprehensively discusses the theory of the generalized method of averaging applied to the artificial satellite problem. Volume II presents the explicit development in the nonsingular equinoctial elements of the first-order average equations of motion. The recursive algorithms used to evaluate the first-order averaged equations of motion are also presented in Volume II. This semianalytic theory is, in principle, valid for a term of arbitrary degree in the expansion of the third-body disturbing function (nonresonant cases only) and for a term of arbitrary degree and order in the expansion of the nonspherical gravitational potential function.

  9. A TCAS-II Resolution Advisory Detection Algorithm

    NASA Technical Reports Server (NTRS)

    Munoz, Cesar; Narkawicz, Anthony; Chamberlain, James

    2013-01-01

    The Traffic Alert and Collision Avoidance System (TCAS) is a family of airborne systems designed to reduce the risk of mid-air collisions between aircraft. TCAS II, the current generation of TCAS devices, provides resolution advisories that direct pilots to maintain or increase vertical separation when aircraft distance and time parameters are beyond designed system thresholds. This paper presents a mathematical model of the TCAS II Resolution Advisory (RA) logic that assumes accurate aircraft state information. Based on this model, an algorithm for RA detection is also presented. This algorithm is analogous to a conflict detection algorithm, but instead of predicting loss of separation, it predicts resolution advisories. It has been formally verified that, for a kinematic model of aircraft trajectories, this algorithm completely and correctly characterizes all encounter geometries between two aircraft that lead to a resolution advisory within a given lookahead time interval. The RA detection algorithm proposed in this paper is a fundamental component of a NASA sense-and-avoid concept for the integration of Unmanned Aircraft Systems in civil airspace.
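
    In the spirit of that model, a toy predictor on constant-velocity trajectories flags the first lookahead time at which both horizontal range and vertical separation fall below thresholds. The dmod and zthr values below are placeholders for the altitude-dependent TCAS II parameters, not the certified logic:

      import numpy as np

      def ra_detect(s_own, v_own, s_int, v_int, lookahead=40.0, dt=1.0,
                    dmod=0.35, zthr=180.0):
          """Toy RA predictor on a constant-velocity (kinematic) model:
          horizontal range in nmi, vertical separation in ft."""
          for t in np.arange(0.0, lookahead + dt, dt):
              so, si = s_own + t * v_own, s_int + t * v_int
              horiz = np.hypot(so[0] - si[0], so[1] - si[1])
              if horiz < dmod and abs(so[2] - si[2]) < zthr:
                  return t
          return None

      # head-on encounter: x, y in nmi, z in ft; speeds in nmi/s and ft/s
      own = np.array([0.0, 0.0, 30000.0]); v_own = np.array([0.12, 0.0, 0.0])
      intr = np.array([8.0, 0.0, 29850.0]); v_int = np.array([-0.12, 0.0, 0.0])
      print(ra_detect(own, v_own, intr, v_int))   # -> 32.0 (seconds)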

  10. Detection of Coronal Mass Ejections Using Multiple Features and Space-Time Continuity

    NASA Astrophysics Data System (ADS)

    Zhang, Ling; Yin, Jian-qin; Lin, Jia-ben; Feng, Zhi-quan; Zhou, Jin

    2017-07-01

    Coronal Mass Ejections (CMEs) release tremendous amounts of energy in the solar system, which has an impact on satellites, power facilities and wireless transmission. To effectively detect a CME in Large Angle Spectrometric Coronagraph (LASCO) C2 images, we propose a novel algorithm to locate the suspected CME regions, using the Extreme Learning Machine (ELM) method and taking into account grayscale and texture features. Furthermore, space-time continuity is used in the detection algorithm to exclude false CME regions. The algorithm includes three steps: i) define the feature vector, which contains textural and grayscale features of a running difference image; ii) design the detection algorithm based on the ELM method according to the feature vector; iii) improve the detection accuracy rate by using a decision rule based on space-time continuity. Experimental results show the efficiency and the superiority of the proposed algorithm in the detection of CMEs compared with other traditional methods. In addition, our algorithm is insensitive to most noise.
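
    Step ii) is the simplest part to show in code: an ELM draws random hidden-layer weights, never trains them, and obtains the output weights from one least-squares solve. A minimal numpy sketch (the feature extraction of step i) and the continuity rule of step iii) are omitted):

      import numpy as np

      def elm_train(X, y, hidden=200, seed=0):
          rng = np.random.default_rng(seed)
          W = rng.normal(size=(X.shape[1], hidden))   # random, never trained
          b = rng.normal(size=hidden)
          H = np.tanh(X @ W + b)                      # hidden-layer activations
          beta = np.linalg.pinv(H) @ y                # least-squares output weights
          return W, b, beta

      def elm_score(X, model):
          W, b, beta = model
          return np.tanh(X @ W + b) @ beta            # > 0 -> suspected CME region

      # X: one row of grayscale/texture features per image block; y: +1 CME, -1 not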

  11. Self-adaptive multi-objective harmony search for optimal design of water distribution networks

    NASA Astrophysics Data System (ADS)

    Choi, Young Hwan; Lee, Ho Min; Yoo, Do Guen; Kim, Joong Hoon

    2017-11-01

    In multi-objective optimization computing, it is important to assign suitable parameters to each optimization problem to obtain better solutions. In this study, a self-adaptive multi-objective harmony search (SaMOHS) algorithm is developed to apply the parameter-setting-free technique, which is an example of a self-adaptive methodology. The SaMOHS algorithm attempts to remove some of the inconvenience from parameter setting and selects the most adaptive parameters during the iterative solution search process. To verify the proposed algorithm, an optimal least cost water distribution network design problem is applied to three different target networks. The results are compared with other well-known algorithms such as multi-objective harmony search and the non-dominated sorting genetic algorithm-II. The efficiency of the proposed algorithm is quantified by suitable performance indices. The results indicate that SaMOHS can be efficiently applied to the search for Pareto-optimal solutions in a multi-objective solution space.
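
    The parameter-setting-free idea can be sketched in a single-objective toy: instead of fixing HMCR and PAR, re-estimate them each iteration from how the harmonies currently in memory were generated. The Python below is such a sketch; the multi-objective bookkeeping (Pareto dominance, archiving) used by SaMOHS is deliberately left out:

      import random

      def harmony_search(obj, bounds, hms=10, iters=3000, bw=0.05):
          mem = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(hms)]
          tags = [["random"] * len(bounds) for _ in range(hms)]
          for _ in range(iters):
              flat = [t for row in tags for t in row]
              used_memory = flat.count("memory") + flat.count("pitch")
              hmcr = max(0.5, used_memory / len(flat))             # re-estimated
              par = max(0.05, flat.count("pitch") / max(1, used_memory))
              x, tg = [], []
              for j, (lo, hi) in enumerate(bounds):
                  if random.random() < hmcr:
                      v = mem[random.randrange(hms)][j]
                      if random.random() < par:                    # pitch adjustment
                          v = min(hi, max(lo, v + random.uniform(-bw, bw) * (hi - lo)))
                          tg.append("pitch")
                      else:
                          tg.append("memory")
                  else:
                      v = random.uniform(lo, hi)                   # fresh random note
                      tg.append("random")
                  x.append(v)
              worst = max(range(hms), key=lambda i: obj(mem[i]))
              if obj(x) < obj(mem[worst]):                         # replace the worst
                  mem[worst], tags[worst] = x, tg
          return min(mem, key=obj)

      print(harmony_search(lambda x: sum(v * v for v in x), [(-5, 5)] * 3))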

  12. Application and Effects of Linguistic Functions on Information Retrieval in a German Language Full-Text Database: Comparison between Retrieval in Abstract and Full Text.

    ERIC Educational Resources Information Center

    Tauchert, Wolfgang; And Others

    1991-01-01

    Describes the PADOK-II project in Germany, which was designed to give information on the effects of linguistic algorithms on retrieval in a full-text database, the German Patent Information System (GPI). Relevance assessments are discussed, statistical evaluations are described, and searches are compared for the full-text section versus the…

  13. Geometric Folding Algorithms: Bridging Theory to Practice

    DTIC Science & Technology

    2009-11-03

    orthogonal polyhedron can be folded from a single, universal crease pattern (box pleating). II. ORIGAMI DESIGN a.) Developed mathematical theory for what happens in paper between creases, in particular for the case of circular creases. b.) Circular crease origami on permanent exhibition at MoMA in New York. c.) Developing mathematical theory of Robert Lang's TreeMaker framework for efficiently folding tree-shaped origami bases.

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Yongjun; Yang, Lingyun

    We report an efficient dynamic aperture (DA) optimization approach using a multiobjective genetic algorithm (MOGA) driven by the computation of nonlinear driving terms. It was found that having small low-order driving terms is a necessary but not sufficient condition for a decent DA. Direct DA tracking simulation is therefore performed on the last-generation candidates to select the best solutions. The approach was demonstrated successfully in optimizing the NSLS-II storage ring DA.

  15. Constrained Fisher Scoring for a Mixture of Factor Analyzers

    DTIC Science & Technology

    2016-09-01

    expectation-maximization algorithm with similar computational requirements. Lastly, we demonstrate the efficacy of the proposed method for learning a...

  16. Scalability of Robotic Controllers: An Evaluation of Controller Options-Experiment II

    DTIC Science & Technology

    2011-09-01

    for the Soldier, to ensure mission success while maximizing the survivability and lethality through the synergistic interaction of equipment...based touch interface for gloved finger interactions. This interface had to have larger-than-normal touch-screen buttons for commanding the robot...C.; Hill, S.; Pillalamarri, K. Extreme Scalability: Designing Interfaces and Algorithms for Soldier-Robotic Swarm Interaction, Year 2; ARL-TR

  17. The Third Ambient Aspirin Polymorph

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shtukenberg, Alexander G.; Hu, Chunhua T.; Zhu, Qiang

    Polymorphism in aspirin (acetylsalicylic acid), one of the most widely consumed medications, was equivocal until the structure of a second polymorph II, similar in structure to the original form I, was reported in 2005. Here, the third ambient polymorph of aspirin is described. It was crystallized from the melt and its structure was determined using a combination of X-ray powder diffraction analysis and crystal structure prediction algorithms.

  18. The Third Ambient Aspirin Polymorph

    DOE PAGES

    Shtukenberg, Alexander G.; Hu, Chunhua T.; Zhu, Qiang; ...

    2017-05-17

    Polymorphism in aspirin (acetylsalicylic acid), one of the most widely consumed medications, was equivocal until the structure of a second polymorph II, similar in structure to the original form I, was reported in 2005. Here, the third ambient polymorph of aspirin is described. It was crystallized from the melt and its structure was determined using a combination of X-ray powder diffraction analysis and crystal structure prediction algorithms.

  19. Application of multi-objective controller to optimal tuning of PID gains for a hydraulic turbine regulating system using adaptive grid particle swam optimization.

    PubMed

    Chen, Zhihuan; Yuan, Yanbin; Yuan, Xiaohui; Huang, Yuehua; Li, Xianshan; Li, Wenwu

    2015-05-01

    A hydraulic turbine regulating system (HTRS) is one of the most important components of a hydropower plant, playing a key role in maintaining the safety, stability and economical operation of hydro-electrical installations. At present, the conventional PID controller is widely applied in HTRS systems for its practicability and robustness, and the primary problem with respect to this control law is how to optimally tune the parameters, i.e. the determination of PID controller gains for satisfactory performance. In this paper, a multi-objective evolutionary algorithm named adaptive grid particle swarm optimization (AGPSO) is applied to solve the PID gain tuning problem of the HTRS system. This AGPSO-based method, unlike traditional single-objective optimization methods, is designed to take care of settling time and overshoot level simultaneously, generating a set of non-inferior alternative solutions (i.e. a Pareto set). Furthermore, a fuzzy-based membership value assignment method is employed to choose the best compromise solution from the obtained Pareto set. An illustrative example of parameter tuning for the nonlinear HTRS system is introduced to verify the feasibility and effectiveness of the proposed AGPSO-based optimization approach, as compared with two other prominent multi-objective algorithms, Non-dominated Sorting Genetic Algorithm II (NSGAII) and Strength Pareto Evolutionary Algorithm II (SPEAII), for the quality and diversity of the obtained Pareto solution sets. Simulation results show that the AGPSO-based approach outperforms the compared methods, with higher efficiency and better quality, whether the HTRS system works under no-load or load conditions. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.
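
    The best-compromise step is commonly done with the fuzzy membership rule mu[i,k] = (f_max[k] - F[i,k]) / (f_max[k] - f_min[k]), choosing the solution with the largest normalized total membership. A short numpy sketch (the two-objective Pareto points are invented):

      import numpy as np

      def best_compromise(F):
          F = np.asarray(F, dtype=float)
          fmin, fmax = F.min(axis=0), F.max(axis=0)
          span = np.where(fmax > fmin, fmax - fmin, 1.0)
          mu = (fmax - F) / span              # membership: 1 = best, 0 = worst
          score = mu.sum(axis=1) / mu.sum()   # normalized total membership
          return int(np.argmax(score))

      # invented Pareto points for (settling time [s], overshoot [fraction])
      pareto = [[2.0, 0.30], [3.5, 0.10], [6.0, 0.02]]
      print(best_compromise(pareto))          # index of the compromise solution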

  20. Managing the Sick Child in the Era of Declining Malaria Transmission: Development of ALMANACH, an Electronic Algorithm for Appropriate Use of Antimicrobials.

    PubMed

    Rambaud-Althaus, Clotilde; Shao, Amani Flexson; Kahama-Maro, Judith; Genton, Blaise; d'Acremont, Valérie

    2015-01-01

    To review the available knowledge on epidemiology and diagnoses of acute infections in children aged 2 to 59 months in primary care settings and develop an electronic algorithm for the Integrated Management of Childhood Illness to reach optimal clinical outcome and rational use of medicines. A structured literature review in Medline, Embase and the Cochrane Database of Systematic Reviews (CDSR) looked for available estimations of disease prevalence in outpatients aged 2-59 months, and for available evidence on i) accuracy of clinical predictors, and ii) performance of point-of-care tests for targeted diseases. A new algorithm for the management of childhood illness (ALMANACH) was designed based on the evidence retrieved and results of a study on etiologies of fever in Tanzanian children outpatients. The major changes in ALMANACH compared to IMCI (2008 version) are the following: i) assessment of 10 danger signs, ii) classification of non-severe children into febrile and non-febrile illness, the latter receiving no antibiotics, iii) classification of pneumonia based on a respiratory rate threshold of 50 assessed twice for febrile children 12-59 months; iv) malaria rapid diagnostic test performed for all febrile children. In the absence of an identified source of fever at the end of the assessment, v) urine dipstick performed for febrile children <2 years to consider urinary tract infection, vi) classification of 'possible typhoid' for febrile children >2 years with abdominal tenderness; and lastly vii) classification of 'likely viral infection' in case of negative results. This smartphone-run algorithm based on new evidence and two point-of-care tests should improve the quality of care of <5 year children and lead to more rational use of antimicrobials.
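
    Because points i)-vii) are explicit branch conditions, they translate almost directly into code. The sketch below is a drastic simplification of the published algorithm (age handled in months, each classification reduced to a single label):

      def almanach(age_mo, danger_signs, febrile, resp_rate,
                   rdt_malaria_pos, urine_dipstick_pos=False, abdominal_tenderness=False):
          """Simplified branch points i)-vii); the real algorithm asks many more questions."""
          if danger_signs:                               # i) any of 10 danger signs
              return "severe illness: refer"
          if not febrile:                                # ii) no fever -> no antibiotics
              return "non-febrile illness, no antibiotics"
          if 12 <= age_mo <= 59 and resp_rate >= 50:     # iii) RR threshold, read twice
              return "pneumonia"
          if rdt_malaria_pos:                            # iv) RDT for all febrile children
              return "malaria"
          if age_mo < 24 and urine_dipstick_pos:         # v) UTI work-up under 2 years
              return "urinary tract infection"
          if age_mo >= 24 and abdominal_tenderness:      # vi) 'possible typhoid'
              return "possible typhoid"
          return "likely viral infection"                # vii) all tests negative

      print(almanach(age_mo=30, danger_signs=False, febrile=True,
                     resp_rate=38, rdt_malaria_pos=False, abdominal_tenderness=True))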

  1. Spontaneous Intramuscular Hematomas of the Abdomen and Pelvis: A New Multilevel Algorithm to Direct Transarterial Embolization and Patient Management

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Popov, Milen; Sotiriadis, Charalampos; Gay, Frederique

    Purpose To report our experience using a multilevel patient management algorithm to direct transarterial embolization (TAE) in managing spontaneous intramuscular hematoma (SIMH). Materials and Methods From May 2006 to January 2014, twenty-seven patients with SIMH had been referred for TAE to our Radiology department. The clinical status and coagulation characteristics of the patients are analyzed. An algorithm integrating CT findings is suggested to manage SIMH. Patients were classified into three groups: Type I, SIMH with no active bleeding (AB); Type II, SIMH with AB and no muscular fascia rupture (MFR); and Type III, SIMH with MFR and AB. Type II is furthermore subcategorized as IIa, IIb and IIc. Types IIb, IIc and III were considered for TAE. The method of embolization as well as the materials used are described. Continuous variables are presented as mean ± SD. Categorical variables are reported as percentages. Technical success, clinical success, complications and 30-day mortality (d30 M) were analyzed. Results Two patients (7.5%) had Type IIb, four (15%) Type IIc and 21 (77.5%) presented Type III. The detailed CT and CTA findings, embolization procedure and materials used are described. Technical success was 96% with a complication rate of 4%. Clinical success was 88%. The bleeding-related thirty-day mortality was 15% (all with Type III). Conclusion TAE is a safe and efficient technique to control bleeding that should be considered in selected SIMH as soon as possible. The proposed algorithm integrating CT features provides a comprehensive chart to select patients for TAE. Level of Evidence 4.
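
    The grouping rule itself is a two-variable decision, sketched below; the IIa/IIb/IIc subtyping requires the paper's full CT criteria and is not reproduced:

      def simh_type(active_bleeding, muscular_fascia_rupture):
          """Types I-III as defined above; management notes follow the paper's triage."""
          if not active_bleeding:
              return "Type I (no AB: conservative management)"
          if not muscular_fascia_rupture:
              return "Type II (AB, no MFR: IIb/IIc considered for TAE)"
          return "Type III (AB with MFR: considered for TAE)"

      print(simh_type(active_bleeding=True, muscular_fascia_rupture=True))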

  2. Managing the Sick Child in the Era of Declining Malaria Transmission: Development of ALMANACH, an Electronic Algorithm for Appropriate Use of Antimicrobials

    PubMed Central

    Rambaud-Althaus, Clotilde; Shao, Amani Flexson; Genton, Blaise; d’Acremont, Valérie

    2015-01-01

    Objective To review the available knowledge on epidemiology and diagnoses of acute infections in children aged 2 to 59 months in primary care settings and develop an electronic algorithm for the Integrated Management of Childhood Illness to reach optimal clinical outcome and rational use of medicines. Methods A structured literature review in Medline, Embase and the Cochrane Database of Systematic Reviews (CDSR) looked for available estimations of disease prevalence in outpatients aged 2-59 months, and for available evidence on i) accuracy of clinical predictors, and ii) performance of point-of-care tests for targeted diseases. A new algorithm for the management of childhood illness (ALMANACH) was designed based on the evidence retrieved and results of a study on etiologies of fever in Tanzanian children outpatients. Findings The major changes in ALMANACH compared to IMCI (2008 version) are the following: i) assessment of 10 danger signs, ii) classification of non-severe children into febrile and non-febrile illness, the latter receiving no antibiotics, iii) classification of pneumonia based on a respiratory rate threshold of 50 assessed twice for febrile children 12-59 months; iv) malaria rapid diagnostic test performed for all febrile children. In the absence of an identified source of fever at the end of the assessment, v) urine dipstick performed for febrile children <2 years to consider urinary tract infection, vi) classification of ‘possible typhoid’ for febrile children >2 years with abdominal tenderness; and lastly vii) classification of ‘likely viral infection’ in case of negative results. Conclusion This smartphone-run algorithm based on new evidence and two point-of-care tests should improve the quality of care of <5 year children and lead to more rational use of antimicrobials. PMID:26161753

  3. Reliability of Modern Scores to Predict Long-Term Mortality After Isolated Aortic Valve Operations.

    PubMed

    Barili, Fabio; Pacini, Davide; D'Ovidio, Mariangela; Ventura, Martina; Alamanni, Francesco; Di Bartolomeo, Roberto; Grossi, Claudio; Davoli, Marina; Fusco, Danilo; Perucci, Carlo; Parolari, Alessandro

    2016-02-01

    Contemporary scores for estimating perioperative death have been proposed to also predict long-term death. The aim of the study was to evaluate the performance of the updated European System for Cardiac Operative Risk Evaluation II, The Society of Thoracic Surgeons Predicted Risk of Mortality score, and the Age, Creatinine, Left Ventricular Ejection Fraction score for predicting long-term mortality in a contemporary cohort of isolated aortic valve replacement (AVR). We also sought to develop for each score a simple algorithm based on predicted perioperative risk to predict long-term survival. Complete data on 1,444 patients who underwent isolated AVR in a 7-year period were retrieved from three prospective institutional databases and linked with the Italian Tax Register Information System. Data were evaluated with performance analyses and time-to-event semiparametric regression. Survival was 83.0% ± 1.1% at 5 years and 67.8% ± 1.9% at 8 years. Discrimination and calibration of all three scores worsened for prediction of death at 1 year and 5 years. Nonetheless, a significant relationship was found between long-term survival and quartiles of scores (p < 0.0001). The perioperative risk estimated by each model was used to develop an algorithm to predict long-term death. The hazard ratios for death were 1.1 (95% confidence interval, 1.07 to 1.12) for European System for Cardiac Operative Risk Evaluation II, 1.34 (95% CI, 1.28 to 1.40) for The Society of Thoracic Surgeons score, and 1.08 (95% CI, 1.06 to 1.10) for the Age, Creatinine, Left Ventricular Ejection Fraction score. The predicted risk generated by the European System for Cardiac Operative Risk Evaluation II, The Society of Thoracic Surgeons, and Age, Creatinine, Left Ventricular Ejection Fraction scores cannot be considered a direct estimate of the long-term risk of death. Nonetheless, the three scores can be used to derive an estimate of the long-term risk of death in patients who undergo isolated AVR with the use of a simple algorithm. Copyright © 2016 The Society of Thoracic Surgeons. Published by Elsevier Inc. All rights reserved.

  4. Effect of Cu(II), Cd(II) and Zn(II) on Pb(II) biosorption by algae Gelidium-derived materials.

    PubMed

    Vilar, Vítor J P; Botelho, Cidália M S; Boaventura, Rui A R

    2008-06-15

    Biosorption of Pb(II), Cu(II), Cd(II) and Zn(II) from binary metal solutions onto the algae Gelidium sesquipedale, an algal industrial waste and a waste-based composite material was investigated at pH 5.3, in a batch system. Binary Pb(II)/Cu(II), Pb(II)/Cd(II) and Pb(II)/Zn(II) solutions have been tested. For the same equilibrium concentrations of both metal ions (1 mmol l(-1)), approximately 66, 85 and 86% of the total uptake capacity of the biosorbents is taken by lead ions in the systems Pb(II)/Cu(II), Pb(II)/Cd(II) and Pb(II)/Zn(II), respectively. Two-metal results were fitted to a discrete and a continuous model, showing the inhibition of the primary metal biosorption by the co-cation. The model parameters suggest that Cd(II) and Zn(II) have the same decreasing effect on the Pb(II) uptake capacity. The uptake of Pb(II) was highly sensitive to the presence of Cu(II). From the discrete model it was possible to obtain the Langmuir affinity constant for Pb(II) biosorption. The presence of the co-cations decreases the apparent affinity of Pb(II). The experimental results were successfully fitted by the continuous model, at different pH values, for each biosorbent. The following sequence for the equilibrium affinity constants was found: Pb>Cu>Cd approximately Zn.

  5. Approximating Smooth Step Functions Using Partial Fourier Series Sums

    DTIC Science & Technology

    2012-09-01

    [Machine-extracted excerpt; the record consists of fragments of the report's MATLAB code, reproduced with extraction artifacts (curly quotes, stray spaces) repaired:] interp1(xt(ii), smoothstepbez(t(ii), min(t(ii)), max(t(ii)), 'y'), t(ii), 'linear', 'extrap'); ii = find(abs(t - tau/2) <= epi); ... interp1(xt(ii), smoothstepbez(rt, min(rt), max(rt), 'y'), t(ii), 'linear', 'extrap'); % stepm(ii) = 1 - interp1(xt(ii), smoothstepbez(t(ii), min(t(ii)), max(t(ii)), 'y'), t(ii), 'linear', 'extrap'); "In this case, because x is also defined as a function of the independent parameter ..."

  6. Biotechnological applications of mobile group II introns and their reverse transcriptases: gene targeting, RNA-seq, and non-coding RNA analysis.

    PubMed

    Enyeart, Peter J; Mohr, Georg; Ellington, Andrew D; Lambowitz, Alan M

    2014-01-13

    Mobile group II introns are bacterial retrotransposons that combine the activities of an autocatalytic intron RNA (a ribozyme) and an intron-encoded reverse transcriptase to insert site-specifically into DNA. They recognize DNA target sites largely by base pairing of sequences within the intron RNA and achieve high DNA target specificity by using the ribozyme active site to couple correct base pairing to RNA-catalyzed intron integration. Algorithms have been developed to program the DNA target site specificity of several mobile group II introns, allowing them to be made into 'targetrons.' Targetrons function for gene targeting in a wide variety of bacteria and typically integrate at efficiencies high enough to be screened easily by colony PCR, without the need for selectable markers. Targetrons have found wide application in microbiological research, enabling gene targeting and genetic engineering of bacteria that had been intractable to other methods. Recently, a thermostable targetron has been developed for use in bacterial thermophiles, and new methods have been developed for using targetrons to position recombinase recognition sites, enabling large-scale genome-editing operations, such as deletions, inversions, insertions, and 'cut-and-pastes' (that is, translocation of large DNA segments), in a wide range of bacteria at high efficiency. Using targetrons in eukaryotes presents challenges due to the difficulties of nuclear localization and sub-optimal magnesium concentrations, although supplementation with magnesium can increase integration efficiency, and directed evolution is being employed to overcome these barriers. Finally, spurred by new methods for expressing group II intron reverse transcriptases that yield large amounts of highly active protein, thermostable group II intron reverse transcriptases from bacterial thermophiles are being used as research tools for a variety of applications, including qRT-PCR and next-generation RNA sequencing (RNA-seq). The high processivity and fidelity of group II intron reverse transcriptases along with their novel template-switching activity, which can directly link RNA-seq adaptor sequences to cDNAs during reverse transcription, open new approaches for RNA-seq and the identification and profiling of non-coding RNAs, with potentially wide applications in research and biotechnology.

  7. Microjets in the penumbra of a sunspot

    NASA Astrophysics Data System (ADS)

    Drews, Ainar; Rouppe van der Voort, Luc

    2017-06-01

    Context. Penumbral microjets (PMJs) are short-lived jets found in the penumbra of sunspots, first observed as localized brightenings in wide-band Ca II H line observations, and are thought to be caused by magnetic reconnection. Earlier work on PMJs has focused on smaller samples of by-eye selected events and on case studies. Aims: Our goal is to present an automated study of a large sample of PMJs to place the basic statistics of PMJs on a sure footing and to study the PMJ Ca II 8542 Å spectral profile in detail. Methods: High spatial resolution and spectrally well-sampled observations in the Ca II 8542 Å line obtained from the Swedish 1-m Solar Telescope (SST) were reduced by a principal component analysis and subsequently used in the automated detection of PMJs using the simple machine learning algorithm k-nearest neighbour. PMJ detections were verified with co-temporal Ca II H line observations. Results: We find a total of 453 tracked PMJ events, 4253 PMJ detections tallied over all timeframes, and a detection rate of 21 events per timestep. From these, an average length, width and lifetime of 640 km, 210 km and 90 s are obtained. The average PMJ Ca II 8542 Å line profile is characterized by enhanced inner wings, often in the form of one or two distinct peaks, and a brighter line core as compared to the quiet-Sun average. Average blue and red peak positions are determined at -10.4 km s-1 and +10.2 km s-1 offsets from the Ca II 8542 Å line core. We find several clusters of PMJ hot-spots within the sunspot penumbra, in which PMJ events occur in the same general area repeatedly over time. Conclusions: Our results indicate smaller average PMJ sizes and longer lifetimes compared to previously published values, but with statistics still of the same orders of magnitude. The investigation and analysis of the PMJ line profiles strengthens the case for the proposed heating of PMJs to transition region temperatures. The presented statistics on PMJs form a solid basis for future investigations and numerical modelling of PMJs.
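
    The detection pipeline described above (dimensionality reduction by principal component analysis followed by k-nearest-neighbour classification) can be sketched in a few lines. The snippet below is purely illustrative: the synthetic arrays, the component count and the neighbour count are assumptions, not the authors' settings.

        # Illustrative PCA + k-NN detection sketch (scikit-learn); the data
        # here are random stand-ins for Ca II 8542 A spectral profiles.
        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.neighbors import KNeighborsClassifier

        rng = np.random.default_rng(0)
        profiles = rng.normal(size=(5000, 25))   # n_pixels x n_wavelengths
        labels = rng.integers(0, 2, size=5000)   # 1 = PMJ, 0 = background (training labels)

        pca = PCA(n_components=5)                # compress each profile to 5 coefficients
        coeffs = pca.fit_transform(profiles)

        knn = KNeighborsClassifier(n_neighbors=7).fit(coeffs, labels)
        new_coeffs = pca.transform(rng.normal(size=(100, 25)))
        detections = knn.predict(new_coeffs)     # majority vote of 7 nearest training profiles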

  8. Firefly Mating Algorithm for Continuous Optimization Problems

    PubMed Central

    Ritthipakdee, Amarita; Premasathian, Nol; Jitkongchuen, Duangjai

    2017-01-01

    This paper proposes a swarm intelligence algorithm, called the firefly mating algorithm (FMA), for solving continuous optimization problems. FMA uses a genetic algorithm as its core. The main feature of the algorithm is a novel mating-pair selection method inspired by the following two mating behaviors of fireflies in nature: (i) the mutual attraction between males and females causes them to mate and (ii) fireflies of both sexes are of the multiple-mating type, mating with multiple opposite-sex partners. A female continues mating until her spermatheca becomes full, and, in the same vein, a male can provide sperm for several females until his sperm reservoir is depleted. This new feature enhances the global convergence capability of the algorithm. The performance of FMA was tested with 20 benchmark functions (sixteen 30-dimensional functions and four 2-dimensional ones) against the FA, ALC-PSO, COA, MCPSO, LWGSODE, MPSODDS, DFOA, SHPSOS, LSA, MPDPGA, DE, and GABC algorithms. The experimental results showed that the success rates of our proposed algorithm on these functions were higher than those of the other algorithms, and the proposed algorithm also required fewer iterations to reach the global optima. PMID:28808442

  9. Firefly Mating Algorithm for Continuous Optimization Problems.

    PubMed

    Ritthipakdee, Amarita; Thammano, Arit; Premasathian, Nol; Jitkongchuen, Duangjai

    2017-01-01

    This paper proposes a swarm intelligence algorithm, called the firefly mating algorithm (FMA), for solving continuous optimization problems. FMA uses a genetic algorithm as its core. The main feature of the algorithm is a novel mating-pair selection method inspired by the following two mating behaviors of fireflies in nature: (i) the mutual attraction between males and females causes them to mate and (ii) fireflies of both sexes are of the multiple-mating type, mating with multiple opposite-sex partners. A female continues mating until her spermatheca becomes full, and, in the same vein, a male can provide sperm for several females until his sperm reservoir is depleted. This new feature enhances the global convergence capability of the algorithm. The performance of FMA was tested with 20 benchmark functions (sixteen 30-dimensional functions and four 2-dimensional ones) against the FA, ALC-PSO, COA, MCPSO, LWGSODE, MPSODDS, DFOA, SHPSOS, LSA, MPDPGA, DE, and GABC algorithms. The experimental results showed that the success rates of our proposed algorithm on these functions were higher than those of the other algorithms, and the proposed algorithm also required fewer iterations to reach the global optima.
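
    The mating-pair selection idea described in both records above can be sketched as distance-dependent attraction combined with finite mating budgets. The sketch below is an assumption-laden illustration: the attraction form beta0*exp(-gamma*d^2) and the capacity values are borrowed from generic firefly-algorithm conventions, not taken from the paper.

        # Minimal mating-pair selection sketch: attraction decays with
        # distance; each female and male has a finite mating budget.
        import numpy as np

        rng = np.random.default_rng(1)

        def select_pairs(males, females, beta0=1.0, gamma=0.1,
                         female_capacity=3, male_capacity=5):
            """Return a list of (male_index, female_index) mating pairs."""
            m_budget = np.full(len(males), male_capacity)
            pairs = []
            for f, fx in enumerate(females):
                f_budget = female_capacity
                d2 = np.sum((males - fx) ** 2, axis=1)   # squared distances to this female
                attract = beta0 * np.exp(-gamma * d2)
                attract[m_budget == 0] = 0.0
                while f_budget > 0 and attract.sum() > 0:
                    m = rng.choice(len(males), p=attract / attract.sum())
                    pairs.append((m, f))
                    m_budget[m] -= 1
                    f_budget -= 1
                    if m_budget[m] == 0:                 # sperm reservoir depleted
                        attract[m] = 0.0
            return pairs

        # offspring from each pair would then be produced by GA crossover/mutation
        pairs = select_pairs(rng.normal(size=(20, 2)), rng.normal(size=(20, 2)))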

  10. 78 FR 69302 - National Oil and Hazardous Substances Pollution Contingency Plan; National Priorities List...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-11-19

    [Truncated Federal Register record; the machine-readable excerpt is a list of installation parcel identifiers (I-B through III-D), followed by:] ... section 102(h) of CERCLA, to document that all environmental impacts associated with the DON's activities on...

  11. Forebody and base region real gas flow in severe planetary entry by a factored implicit numerical method. II - Equilibrium reactive gas

    NASA Technical Reports Server (NTRS)

    Davy, W. C.; Green, M. J.; Lombard, C. K.

    1981-01-01

    The factored-implicit, gas-dynamic algorithm has been adapted to the numerical simulation of equilibrium reactive flows. Changes required in the perfect gas version of the algorithm are developed, and the method of coupling gas-dynamic and chemistry variables is discussed. A flow-field solution that approximates a Jovian entry case was obtained by this method and compared with the same solution obtained by HYVIS, a computer program much used for the study of planetary entry. Comparison of surface pressure distribution and stagnation line shock-layer profiles indicates that the two solutions agree well.

  12. Automatic protein structure solution from weak X-ray data

    NASA Astrophysics Data System (ADS)

    Skubák, Pavol; Pannu, Navraj S.

    2013-11-01

    Determining new protein structures from X-ray diffraction data at low resolution or with a weak anomalous signal is a difficult and often an impossible task. Here we propose a multivariate algorithm that simultaneously combines the structure determination steps. In tests on over 140 real data sets from the protein data bank, we show that this combined approach can automatically build models where current algorithms fail, including an anisotropically diffracting 3.88 Å RNA polymerase II data set. The method seamlessly automates the process, is ideal for non-specialists and provides a mathematical framework for successfully combining various sources of information in image processing.

  13. [Prevention of gastrointestinal bleeding in patients with advanced burns].

    PubMed

    Vagner, D O; Krylov, K M; Verbitsky, V G; Shlyk, I V

    2018-01-01

    To reduce the incidence of gastrointestinal bleeding in patients with advanced burns by developing a prophylactic algorithm. The study consisted of a retrospective group of 488 patients with thermal burns of grade II-III over 20% of body surface area and a prospective group of 135 patients with similar thermal trauma. Standard clinical and laboratory examination was applied. Instrumental survey included fibrogastroduodenoscopy, endoscopic pH-metry and invasive volumetric monitoring (PiCCO plus). Statistical processing was carried out with Microsoft Office Excel 2007 and IBM SPSS 20.0. The new algorithm significantly decreased the incidence of gastrointestinal bleeding (p<0.001) and the mortality rate (p=0.006) in patients with advanced burns.

  14. An FPGA-based heterogeneous image fusion system design method

    NASA Astrophysics Data System (ADS)

    Song, Le; Lin, Yu-chi; Chen, Yan-hua; Zhao, Mei-rong

    2011-08-01

    Taking advantage of FPGAs' low cost and compact structure, an FPGA-based heterogeneous image fusion platform is established in this study. An Altera Cyclone IV series FPGA is adopted as the core processor of the platform, and a visible-light CCD camera and an infrared thermal imager are used as the image-capturing devices in order to obtain dual-channel heterogeneous video images. Tailor-made image fusion algorithms such as the gray-scale weighted averaging, maximum selection and minimum selection methods are analyzed and compared. VHDL and the synchronous design method are utilized to produce a reliable RTL-level description. Altera's Quartus II 9.0 software is applied to simulate and implement the algorithm modules. Comparative experiments with the various fusion algorithms show that good heterogeneous image fusion quality can be obtained with the proposed system. The applicable range of the different fusion algorithms is also discussed.
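
    The three pixel-level fusion rules compared in this record reduce to simple per-pixel arithmetic. The numpy mock-up below shows that arithmetic for registered 8-bit gray-scale frames; on the actual platform these rules run in FPGA logic, so the code is only a software illustration of the math.

        import numpy as np

        def fuse(visible, infrared, mode="weighted", w=0.5):
            """Fuse two registered 8-bit gray-scale frames of equal shape."""
            a = visible.astype(np.float32)
            b = infrared.astype(np.float32)
            if mode == "weighted":      # gray-scale weighted averaging
                out = w * a + (1.0 - w) * b
            elif mode == "max":         # maximum selection
                out = np.maximum(a, b)
            elif mode == "min":         # minimum selection
                out = np.minimum(a, b)
            else:
                raise ValueError(mode)
            return np.clip(out, 0, 255).astype(np.uint8)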

  15. Event-chain Monte Carlo algorithms for three- and many-particle interactions

    NASA Astrophysics Data System (ADS)

    Harland, J.; Michel, M.; Kampmann, T. A.; Kierfeld, J.

    2017-02-01

    We generalize the rejection-free event-chain Monte Carlo algorithm from many-particle systems with pairwise interactions to systems with arbitrary three- or many-particle interactions. We introduce generalized lifting probabilities between particles and obtain a general set of equations for lifting probabilities, the solution of which guarantees maximal global balance. We validate the resulting three-particle event-chain Monte Carlo algorithms on three different systems by comparison with conventional local Monte Carlo simulations: i) a test system of three particles with a three-particle interaction that depends on the enclosed triangle area; ii) a hard-needle system in two dimensions, where needle interactions constitute three-particle interactions of the needle end points; iii) a semiflexible polymer chain with a bending energy, which constitutes a three-particle interaction of neighboring chain beads. The examples demonstrate that the generalization to many-particle interactions broadens the applicability of event-chain algorithms considerably.
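
    For orientation, the pairwise event-chain algorithm that this work generalizes can be written down in a few lines for the simplest case, hard rods on a ring: a chain of fixed total displacement moves one rod until it collides, at which point the motion "lifts" to the rod that was hit. The sketch below is a standard textbook version under those assumptions, not the paper's many-particle generalization.

        import numpy as np

        def event_chain(x, L, sigma, ell, rng):
            """One event chain for hard rods of length sigma on a ring of
            circumference L; x must hold non-overlapping ordered positions."""
            n = len(x)
            i = rng.integers(n)                   # random starting rod
            budget = ell                          # total chain displacement
            while budget > 0.0:
                j = (i + 1) % n                   # right neighbour on the ring
                gap = (x[j] - x[i] - sigma) % L   # free distance before contact
                step = min(budget, gap)
                x[i] = (x[i] + step) % L
                budget -= step
                if budget > 0.0:                  # collision: lift motion to rod j
                    i = j
            return x

        rng = np.random.default_rng(2)
        L, n, sigma = 50.0, 10, 1.0
        x = np.sort(rng.uniform(0, L - n * sigma, n)) + sigma * np.arange(n)
        x = event_chain(x, L, sigma, ell=5.0, rng=rng)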

  16. Long-term ELBARA-II Assistance to SMOS Land Product and Algorithm Validation at the Valencia Anchor Station (MELBEX Experiment 2010-2013)

    NASA Astrophysics Data System (ADS)

    Lopez-Baeza, Ernesto; Wigneron, Jean-Pierre; Schwank, Mike; Miernecki, Maciej; Kerr, Yann; Casal, Tania; Delwart, Steven; Fernandez-Moran, Roberto; Mecklenburg, Susanne; Coll Pajaron, M. Amparo; Salgado Hernanz, Paula

    The main activity of the Valencia Anchor Station (VAS) is currently to support the validation of SMOS (Soil Moisture and Ocean Salinity) Level 2 and 3 land products (soil moisture, SM, and vegetation optical depth, TAU). With this aim, the European Space Agency (ESA) has provided the Climatology from Satellites Group of the University of Valencia with an ELBARA-II microwave radiometer under a loan agreement since September 2009. During this time, brightness temperatures (TB) have been acquired continuously, except during normal maintenance or minor repair interruptions. ELBARA-II is an L-band dual-polarization radiometer with two channels (1400-1418 MHz, 1409-1427 MHz). It is continuously measuring over a vineyard field (El Renegado, Caudete de las Fuentes, Valencia) from a 15 m platform, with a constant protocol for calibration and angular scanning measurements, with the aim of assisting the validation of SMOS land products and the calibration of the L-MEB (L-Band Emission of the Biosphere) model, the basis for the SMOS Level 2 Land Processor, over the VAS validation site. One of the advantages of the VAS site is the possibility of studying two different environmental conditions over the year. While the vine cycle extends mainly between April and October, during the rest of the year the area remains under bare soil conditions, adequate for the calibration of the soil model. The measurement protocol currently running has proven robust during the whole operation time and will be extended in time as much as possible to continue providing a long-term data set of ELBARA-II TB measurements and retrieved SM and TAU. This data set is also proving useful in support of SMOS scientific activities: the VAS area and, specifically, the ELBARA-II site offer good conditions to monitor the long-term evolution of SMOS Level 2 and Level 3 land products and to interpret possible anomalies that may point to hidden sensor biases. In addition, the SM and TAU that are currently retrieved from the ELBARA-II TB data by inversion of the L-MEB model can also be compared to the Level 2 and Level 3 SMOS products. L-band ELBARA-II measurements provide area-integrated estimates of SM and TAU that are much more representative of the soil and vegetation conditions at field scale than ground measurements (from capacitive probes for SM and destructive measurements for TAU). For instance, Miernecki et al. (2012) and Wigneron et al. (2012) showed that very good correlations could be obtained between TB data and SM retrievals from both SMOS and ELBARA-II over the 2010-2011 time period. The analysis of the quality of these correlations over a long time period can be very useful to evaluate the SMOS measurements and retrieved products (Levels 2 and 3). The present work, which now extends the analysis over almost four years (2010-2013), emphasizes the need to (i) maintain the long-term record of ELBARA-II measurements and (ii) enhance as much as possible the control over other parameters, especially soil roughness (SR), vegetation water content (VWC) and surface temperature, in order to interpret the retrieval results obtained from both the SMOS and ELBARA-II instruments.

  17. Comparison of magnetic resonance imaging and video capsule enteroscopy in diagnosing small-bowel pathology: localization-dependent diagnostic yield.

    PubMed

    Böcker, Ulrich; Dinter, Dietmar; Litterer, Caroline; Hummel, Frank; Knebel, Phillip; Franke, Andreas; Weiss, Christel; Singer, Manfred V; Löhr, J-Matthias

    2010-04-01

    New technology has considerably advanced the diagnosis of small-bowel pathology. However, its significance in clinical algorithms has not yet been fully assessed. The aim of the present analysis was to compare the diagnostic utility and yield of video-capsule enteroscopy (VCE) to that of magnetic resonance imaging (MRI) in patients with suspected or established Crohn's disease (Group I), obscure gastrointestinal blood loss (Group II), or suspected tumors (Group III). Forty-six out of 182 patients who underwent both modalities were included: 21 in Group I, 20 in Group II, and five in Group III. Pathology was assessed in three predetermined sections of the small bowel (upper, middle, and lower). The McNemar and Wilcoxon tests were used for statistical analysis. In Group I, lesions were found by VCE in nine of the 21 patients and by MRI in six. In five patients, both modalities showed pathology. In Group II, pathological changes were detected in 11 of the 20 patients by VCE and in eight patients by MRI. In five cases, pathology was found with both modalities. In Group III, neither modality showed small-bowel pathology. For the patient groups combined, diagnostic yield was 43% with VCE and 30% with MRI. The diagnostic yield of VCE was superior to that of MRI in the upper small bowel in both Groups I and II. VCE is superior to MRI for the detection of lesions related to Crohn's disease or obscure gastrointestinal bleeding in the upper small bowel.

  18. Desertification in the south Junggar Basin, 2000-2009: Part II. Model development and trend analysis

    NASA Astrophysics Data System (ADS)

    Jiang, Miao; Lin, Yi

    2018-07-01

    The essential objective of desertification monitoring is to derive its development trend, which facilitates making policies in advance to handle its potential influences. Aiming at this goal, previous studies have proposed a large number of remote sensing (RS) based methods to retrieve manifold indicators, as reviewed in Part I. However, most of these indicators, each capable of characterizing only a single aspect of land attributes (e.g., albedo quantifying land surface reflectivity), cannot show a full picture of desertification processes, and few comprehensive RS-based models have been published. To fill this gap, this Part II was dedicated to developing an RS information model for comprehensively characterizing desertification and deriving its trend, based on the indicators retrieved in Part I for the same case of the south Junggar Basin, China over the last decade (2000-2009). The proposed model was designed to have three dominant component modules, i.e., the vegetation-relevant sub-model, the soil-relevant sub-model, and the water-relevant sub-model, which synthesize all of the retrieved indicators to integrally reflect the processes of desertification; based on the model-output indices, the desertification trends were derived using the least absolute deviation fitting algorithm. Tests indicated that the proposed model worked and that the study area showed different development tendencies for different desertification levels. Overall, this Part II established a new comprehensive RS information model for desertification risk assessment and trend derivation, and the whole study comprising Parts I and II advanced a relatively standard framework for RS-based desertification monitoring.
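
    The trend-derivation step named above, least absolute deviation (LAD) fitting, can be sketched with iteratively reweighted least squares. The snippet is a generic LAD line fit under assumed tolerances and iteration counts, not the authors' implementation:

        import numpy as np

        def lad_trend(t, y, iters=50, eps=1e-8):
            """Fit y ~ a + b*t minimizing sum |residual|; returns (a, b)."""
            A = np.column_stack([np.ones_like(t), t])
            beta = np.linalg.lstsq(A, y, rcond=None)[0]          # L2 starting point
            for _ in range(iters):
                # reweighting by 1/sqrt(|r|) turns the L2 fit into an L1 fit
                w = 1.0 / np.sqrt(np.maximum(np.abs(y - A @ beta), eps))
                beta = np.linalg.lstsq(A * w[:, None], w * y, rcond=None)[0]
            return beta

        t = np.arange(2000, 2010, dtype=float)
        y = 0.02 * (t - 2000) + np.random.default_rng(3).normal(0, 0.05, t.size)
        a, b = lad_trend(t, y)    # the sign of b gives the development tendency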

  19. [Algorithm for estimating chlorophyll-a concentration in case II water body based on bio-optical model].

    PubMed

    Yang, Wei; Chen, Jin; Matsushita, Bunkei

    2009-01-01

    In the present study, a novel retrieval method for estimating chlorophyll-a concentration in case II waters based on a bio-optical model was proposed and tested with data measured in the laboratory. A series of reflectance spectra, for which the concentration of each sample constituent (for example chlorophyll-a, NPSS, etc.) was obtained from accurate experiments, were used to calculate the absorption and backscattering coefficients of the constituents of the case II waters. Then the non-negative least squares method was applied to calculate the concentrations of chlorophyll-a and non-phytoplankton suspended sediments (NPSS). Green algae were first collected from Lake Kasumigaura in Japan and then cultured in the laboratory. The reflectance spectra of waters with different amounts of phytoplankton and NPSS were measured in a dark room using a FieldSpec Pro VNIR (Analytical Spectral Devices Inc., Boulder, CO, USA). In order to validate whether this method can be applied to multispectral data (for example Landsat TM), the spectra measured in the laboratory were resampled to Landsat TM bands 1, 2, 3 and 4. Different combinations of TM bands were compared to derive the most appropriate wavelengths for detecting chlorophyll-a in case II water for green algae. The results indicated that the combination of TM bands 2, 3 and 4 achieved much better accuracy than the other combinations, and the estimated concentration of chlorophyll-a was significantly more accurate than that from empirical methods. It is expected that this method can be directly applied to real remotely sensed images because it is based on a bio-optical model.
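
    The core numerical step here, solving for constituent concentrations under a non-negativity constraint, is a textbook use of non-negative least squares. The sketch below unmixes a measured spectrum into placeholder constituent spectra; the Gaussian "basis" shapes are assumptions for illustration only, not real absorption/backscattering data:

        import numpy as np
        from scipy.optimize import nnls

        wavelengths = np.linspace(400, 900, 60)

        def feature(mu, s):                      # placeholder constituent spectrum
            return np.exp(-0.5 * ((wavelengths - mu) / s) ** 2)

        basis = np.column_stack([feature(675, 30),    # "chlorophyll-a"-like feature
                                 feature(550, 80),    # "NPSS"-like feature
                                 feature(750, 120)])  # "water"-like feature
        true_conc = np.array([2.0, 5.0, 1.0])
        measured = basis @ true_conc + 0.01 * np.random.default_rng(4).normal(size=60)

        conc, resid = nnls(basis, measured)      # concentrations are >= 0 by construction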

  20. Novel Mechanism for Disrupted Circadian Blood Pressure Rhythm in a Rat Model of Metabolic Syndrome—The Critical Role of Angiotensin II

    PubMed Central

    Sueta, Daisuke; Kataoka, Keiichiro; Koibuchi, Nobutaka; Toyama, Kensuke; Uekawa, Ken; Katayama, Tetsuji; MingJie, Ma; Nakagawa, Takashi; Waki, Hidefumi; Maeda, Masanobu; Yasuda, Osamu; Matsui, Kunihiko; Ogawa, Hisao; Kim‐Mitsuyama, Shokei

    2013-01-01

    Background This study was performed to determine the characteristics and mechanism of hypertension in SHR/NDmcr‐cp(+/+) rats (SHRcp), a new model of metabolic syndrome, with a focus on the autonomic nervous system, aldosterone, and angiotensin II. Methods and Results We measured arterial blood pressure (BP) in SHRcp by radiotelemetry combined with spectral analysis using a fast Fourier transformation algorithm and examined the effect of azilsartan, an AT1 receptor blocker. Compared with control Wistar‐Kyoto rats (WKY) and SHR, SHRcp exhibited a nondipper‐type hypertension and displayed increased urinary norepinephrine excretion and increased urinary and plasma aldosterone levels. Compared with WKY and SHR, SHRcp were characterized by an increase in the low‐frequency power (LF) of systolic BP and a decrease in spontaneous baroreflex gain (sBRG), indicating autonomic dysfunction. Thus, SHRcp are regarded as a useful model of human hypertension with metabolic syndrome. Oral administration of azilsartan once daily persistently lowered BP during the light period (inactive phase) and the dark period (active phase) in SHRcp more than in WKY and SHR. Thus, angiotensin II seems to be involved in the mechanism of disrupted diurnal BP rhythm in SHRcp. Azilsartan significantly reduced urinary norepinephrine and aldosterone excretion and significantly increased urinary sodium excretion in SHRcp. Furthermore, azilsartan significantly reduced LF of systolic BP and significantly increased sBRG in SHRcp. Conclusions These results strongly suggest that impairment of autonomic function and increased aldosterone in SHRcp mediate the effect of angiotensin II on circadian blood pressure rhythms. PMID:23629805

  1. Relative abundance of chemical forms of Cu(II) and Cd(II) on soybean roots as influenced by pH, cations and organic acids

    PubMed Central

    Zhou, Qin; Liu, Zhao-dong; Liu, Yuan; Jiang, Jun; Xu, Ren-kou

    2016-01-01

    Little information is available on the chemical forms of heavy metals on intact plant roots. KNO3 (1 M), 0.05 M EDTA at pH 6 and 0.01 M HCl were used sequentially to extract the exchangeable, complexed and precipitated forms of Cu(II) and Cd(II) from soybean roots and thus to investigate the chemical form distribution of Cu(II) and Cd(II) on soybean roots. Cu(II) and Cd(II) adsorbed on soybean roots were mainly in the exchangeable form, followed by the complexed form, while their precipitated forms were very low under acidic conditions. Soybean roots had a higher adsorption affinity for Cu(II) than Cd(II), leading to higher toxicity of Cu(II) than Cd(II). An increase in solution pH increased the negative charge on soybean roots and thus increased exchangeable Cu(II) and Cd(II) on the roots. Ca2+, Mg2+ and NH4+ reduced exchangeable Cu(II) and Cd(II) levels on soybean roots, and these cations showed greater effects on Cd(II) than Cu(II) due to the greater adsorption affinity of the roots for Cu(II) than Cd(II). L-malic and citric acids decreased exchangeable and complexed Cu(II) on soybean roots. In conclusion, Cu(II) and Cd(II) mainly existed in exchangeable and complexed forms on soybean roots. Ca2+ and Mg2+ cations and citric and L-malic acids can potentially alleviate Cu(II) and Cd(II) toxicity to plants. PMID:27805020

  2. Using Poisson-regularized inversion of Bremsstrahlung emission to extract full electron energy distribution functions from x-ray pulse-height detector data

    NASA Astrophysics Data System (ADS)

    Swanson, C.; Jandovitz, P.; Cohen, S. A.

    2018-02-01

    We measured Electron Energy Distribution Functions (EEDFs) from below 200 eV to over 8 keV, spanning five orders of magnitude in intensity, produced in a low-power, RF-heated, tandem mirror discharge in the PFRC-II apparatus. The EEDF was obtained from the x-ray energy distribution function (XEDF) using a novel Poisson-regularized spectrum inversion algorithm applied to pulse-height spectra that included both Bremsstrahlung and line emissions. The XEDF was measured using a specially calibrated Amptek Silicon Drift Detector (SDD) pulse-height system with 125 eV FWHM resolution at 5.9 keV. The algorithm is found to outperform current leading x-ray inversion algorithms when the error due to counting statistics is high.
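
    The underlying inverse problem is linear: the measured pulse-height spectrum is the electron distribution folded through an emission/detector response matrix, xedf = R @ eedf. As a simplified stand-in for the paper's Poisson-regularized algorithm, the sketch below inverts such a system with plain Tikhonov regularization plus a non-negativity constraint; the kernel, energy grids, and regularization strength are all assumptions:

        import numpy as np
        from scipy.optimize import nnls

        n_e, n_x = 40, 60
        E = np.linspace(0.2, 8.0, n_e)            # electron energies (keV)
        X = np.linspace(0.1, 8.0, n_x)            # photon energies (keV)
        # crude bremsstrahlung-like kernel: photons only below the electron energy
        R = np.where(X[:, None] < E[None, :], 1.0 / E[None, :], 0.0)

        eedf_true = np.exp(-E / 1.5)              # toy Maxwellian-like EEDF
        xedf = R @ eedf_true

        lam = 1e-2                                 # regularization strength (assumed)
        A = np.vstack([R, lam * np.eye(n_e)])     # minimizes ||R f - x||^2 + lam^2 ||f||^2, f >= 0
        b = np.concatenate([xedf, np.zeros(n_e)])
        eedf_est, _ = nnls(A, b)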

  3. New Parallel Algorithms for Landscape Evolution Model

    NASA Astrophysics Data System (ADS)

    Jin, Y.; Zhang, H.; Shi, Y.

    2017-12-01

    Most landscape evolution models (LEMs) developed in the last two decades solve the diffusion equation to simulate the transport of surface sediments. This numerical approach is difficult to parallelize due to the computation of the drainage area for each node, which requires a huge amount of communication when run in parallel. In order to overcome this difficulty, we developed two parallel algorithms for LEMs with a stream net. One algorithm handles the partition of the grid with traditional methods and applies an efficient global reduction algorithm to compute drainage areas and transport rates for the stream net; the other algorithm is based on a new partition algorithm, which first partitions the nodes in catchments between processes, and then partitions the cells according to the partition of nodes. Both methods focus on decreasing communication between processes and take advantage of massively parallel computing techniques, and numerical experiments show that both are adequate to handle large-scale problems with millions of cells. We implemented the two algorithms in our program based on the widely used finite element library deal.II, so that it can be easily coupled with ASPECT.
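
    The communication-heavy kernel mentioned above, drainage-area accumulation, is easy to state serially: visit nodes from highest to lowest elevation and pass each node's accumulated area to its downstream receiver. The sketch below is that serial reference version (the 'receiver' array, one downstream node per cell, is an assumed input from a prior flow-routing step); the paper's contribution is precisely how to distribute this dependency chain across processes.

        import numpy as np

        def drainage_area(elevation, receiver, cell_area):
            """receiver[i] = downstream node of i (receiver[i] == i marks an
            outlet).  Returns the contributing area of every node."""
            area = cell_area.astype(float).copy()
            for i in np.argsort(-elevation):      # visit nodes from high to low
                r = receiver[i]
                if r != i:
                    area[r] += area[i]            # pass accumulated area downstream
            return area

        elev = np.array([5.0, 4.0, 3.0, 1.0])
        recv = np.array([1, 2, 3, 3])             # a simple chain draining to node 3
        print(drainage_area(elev, recv, np.ones(4)))   # -> [1. 2. 3. 4.]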

  4. A multipopulation PSO based memetic algorithm for permutation flow shop scheduling.

    PubMed

    Liu, Ruochen; Ma, Chenlin; Ma, Wenping; Li, Yangyang

    2013-01-01

    The permutation flow shop scheduling problem (PFSSP) is part of production scheduling and is among the hardest combinatorial optimization problems. In this paper, a multipopulation particle swarm optimization (PSO) based memetic algorithm (MPSOMA) is proposed. In the proposed algorithm, the whole particle swarm population is divided into three subpopulations, in which each particle evolves by the standard PSO and each subpopulation is then updated using different local search schemes, such as variable neighborhood search (VNS) and an individual improvement scheme (IIS). Then, the best particle of each subpopulation is selected to construct a probabilistic model by using an estimation of distribution algorithm (EDA), and three particles are sampled from the probabilistic model to update the worst individual in each subpopulation. The best particle in the entire particle swarm is used to update the global optimal solution. The proposed MPSOMA is compared with two recently proposed algorithms, namely, a PSO based memetic algorithm (PSOMA) and a hybrid particle swarm optimization with estimation of distribution algorithm (PSOEDA), on 29 well-known PFSSP instances taken from the OR-Library, and the experimental results show that it is an effective approach for the PFSSP.
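
    The global-best PSO update at the core of each subpopulation has a compact textbook form, shown below for a continuous objective. This is only an orientation sketch with assumed parameter values (w, c1, c2); the paper's PFSSP variant additionally maps continuous particle positions to job permutations and adds the VNS/IIS and EDA steps described above.

        import numpy as np

        def pso(f, dim, n=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=5):
            rng = np.random.default_rng(seed)
            x = rng.uniform(-5, 5, (n, dim))            # particle positions
            v = np.zeros((n, dim))                      # particle velocities
            pbest = x.copy()
            pbest_val = np.apply_along_axis(f, 1, x)    # personal best values
            g = pbest[np.argmin(pbest_val)].copy()      # global best position
            for _ in range(iters):
                r1, r2 = rng.random((n, dim)), rng.random((n, dim))
                v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
                x = x + v
                val = np.apply_along_axis(f, 1, x)
                better = val < pbest_val
                pbest[better], pbest_val[better] = x[better], val[better]
                g = pbest[np.argmin(pbest_val)].copy()
            return g, pbest_val.min()

        best_x, best_f = pso(lambda z: np.sum(z ** 2), dim=10)   # sphere test function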

  5. Management of convulsive status epilepticus in children: an adapted clinical practice guideline for pediatricians in Saudi Arabia

    PubMed Central

    Bashiri, Fahad A.; Hamad, Muddathir H.; Amer, Yasser S.; Abouelkheir, Manal M.; Mohamed, Sarar; Kentab, Amal Y.; Salih, Mustafa A.; Nasser, Mohammad N. Al; Al-Eyadhy, Ayman A.; Othman, Mohammed A. Al; Al-Ahmadi, Tahani; Iqbal, Shaikh M.; Somily, Ali M.; Wahabi, Hayfaa A.; Hundallah, Khalid J.; Alwadei, Ali H.; Albaradie, Raidah S.; Al-Twaijri, Waleed A.; Jan, Mohammed M.; Al-Otaibi, Faisal; Alnemri, Abdulrahman M.; Al-Ansary, Lubna A.

    2017-01-01

    Objective: To increase the use of evidence-based approaches in the diagnosis, investigation and treatment of convulsive status epilepticus (CSE) in children in relevant care settings. Method: A Clinical Practice Guideline (CPG) adaptation group was formed at a university hospital in Riyadh. The group utilized two validated CPG tools, the ADAPTE method and the AGREE II instrument. Results: The group adapted three main categories of recommendations from one source CPG. The recommendations cover: (i) first-line treatment of CSE in the community; (ii) treatment of CSE in the hospital; and (iii) refractory CSE. Implementation tools were built to enhance knowledge translation of these recommendations, including a clinical algorithm, audit criteria, and a computerized provider order entry. Conclusion: A clinical practice guideline for the Saudi healthcare context was formulated using a guideline adaptation process to support relevant clinicians managing CSE in children. PMID:28416791

  6. Low-pass filtering of noisy field Schlumberger sounding curves. Part II: Application

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ghosh, N.; Wadhwa, R.S.; Shrotri, B.S.

    1986-02-01

    The basic principles of the application of linear system theory for smoothing noise-degraded d.c. geoelectrical sounding curves were recently established by Patella. A field Schlumberger sounding is presented here to demonstrate their application and validity. To this purpose, it is first pointed out that the required smoothing or low-pass filtering can be considered an intrinsic property of the transformation of original Schlumberger sounding curves into pole-pole (two-electrode) curves. The authors then sketch a numerical algorithm to perform the transformation, suitably modified from a known procedure for transforming dipole diagrams into Schlumberger ones. Finally, they show a field example with the double aim of demonstrating (i) the high quality of the low-pass filtering, and (ii) the reliability of the transformed pole-pole curve as far as quantitative interpretation is concerned.

  7. HeartMate II left ventricular assist system: from concept to first clinical use.

    PubMed

    Griffith, B P; Kormos, R L; Borovetz, H S; Litwak, K; Antaki, J F; Poirier, V L; Butler, K C

    2001-03-01

    The HeartMate II left ventricular assist device (LVAD) (ThermoCardiosystems, Inc, Woburn, MA) has evolved from 1991 when a partnership was struck between the McGowan Center of the University of Pittsburgh and Nimbus Company. Early iterations were conceptually based on axial-flow mini-pumps (Hemopump) and began with purge bearings. As the project developed, so did the understanding of new bearings, computational fluid design and flow visualization, and speed control algorithms. The acquisition of Nimbus by ThermoCardiosystems, Inc (TCI) sped developments of cannulas, controller, and power/monitor units. The system has been successfully tested in more than 40 calves since 1997 and the first human implant occurred in July 2000. Multicenter safety and feasibility trials are planned for Europe and soon thereafter a trial will be started in the United States to test 6-month survival in end-stage heart failure.

  8. Raman spectroscopic sensing of carbonate intercalation in breast microcalcifications at stereotactic biopsy

    PubMed Central

    Sathyavathi, R.; Saha, Anushree; Soares, Jaqueline S.; Spegazzini, Nicolas; McGee, Sasha; Rao Dasari, Ramachandra; Fitzmaurice, Maryann; Barman, Ishan

    2015-01-01

    Microcalcifications are an early mammographic sign of breast cancer and frequent target for stereotactic biopsy. Despite their indisputable value, microcalcifications, particularly of the type II variety that are comprised of calcium hydroxyapatite deposits, remain one of the least understood disease markers. Here we employed Raman spectroscopy to elucidate the relationship between pathogenicity of breast lesions in fresh biopsy cores and composition of type II microcalcifications. Using a chemometric model of chemical-morphological constituents, acquired Raman spectra were translated to characterize chemical makeup of the lesions. We find that increase in carbonate intercalation in the hydroxyapatite lattice can be reliably employed to differentiate benign from malignant lesions, with algorithms based only on carbonate and cytoplasmic protein content exhibiting excellent negative predictive value (93–98%). Our findings highlight the importance of calcium carbonate, an underrated constituent of microcalcifications, as a spectroscopic marker in breast pathology evaluation and pave the way for improved biopsy guidance. PMID:25927331

  9. Raman spectroscopic sensing of carbonate intercalation in breast microcalcifications at stereotactic biopsy

    NASA Astrophysics Data System (ADS)

    Sathyavathi, R.; Saha, Anushree; Soares, Jaqueline S.; Spegazzini, Nicolas; McGee, Sasha; Rao Dasari, Ramachandra; Fitzmaurice, Maryann; Barman, Ishan

    2015-04-01

    Microcalcifications are an early mammographic sign of breast cancer and frequent target for stereotactic biopsy. Despite their indisputable value, microcalcifications, particularly of the type II variety that are comprised of calcium hydroxyapatite deposits, remain one of the least understood disease markers. Here we employed Raman spectroscopy to elucidate the relationship between pathogenicity of breast lesions in fresh biopsy cores and composition of type II microcalcifications. Using a chemometric model of chemical-morphological constituents, acquired Raman spectra were translated to characterize chemical makeup of the lesions. We find that increase in carbonate intercalation in the hydroxyapatite lattice can be reliably employed to differentiate benign from malignant lesions, with algorithms based only on carbonate and cytoplasmic protein content exhibiting excellent negative predictive value (93-98%). Our findings highlight the importance of calcium carbonate, an underrated constituent of microcalcifications, as a spectroscopic marker in breast pathology evaluation and pave the way for improved biopsy guidance.

  10. Size-resolved ultrafine particle composition analysis 2. Houston

    NASA Astrophysics Data System (ADS)

    Phares, Denis J.; Rhoads, Kevin P.; Johnston, Murray V.; Wexler, Anthony S.

    2003-04-01

    Between 23 August and 18 September 2000, a single-ultrafine-particle mass spectrometer (RSMS-II) was deployed just east of Houston as part of a sampling intensive during the Houston Supersite Experiment. The sampling site was located just north of the major industrial emission sources. RSMS-II, which simultaneously measures the aerodynamic size and composition of individual ultrafine aerosols, is well suited to resolving some of the chemistry associated with secondary particle formation. Roughly 27,000 aerosol mass spectra were acquired during the intensive period. These were classified and labeled based on their spectral peak patterns using the neural network algorithm ART-2a. The frequency of occurrence of each particle class was correlated with time and wind direction. Some classes were present continuously, while others appeared intermittently or for very short durations. The most frequently detected species at the site were potassium and silicon, with lesser amounts of organics and heavier metals.

  11. Raman spectroscopic sensing of carbonate intercalation in breast microcalcifications at stereotactic biopsy.

    PubMed

    Sathyavathi, R; Saha, Anushree; Soares, Jaqueline S; Spegazzini, Nicolas; McGee, Sasha; Rao Dasari, Ramachandra; Fitzmaurice, Maryann; Barman, Ishan

    2015-04-30

    Microcalcifications are an early mammographic sign of breast cancer and frequent target for stereotactic biopsy. Despite their indisputable value, microcalcifications, particularly of the type II variety that are comprised of calcium hydroxyapatite deposits, remain one of the least understood disease markers. Here we employed Raman spectroscopy to elucidate the relationship between pathogenicity of breast lesions in fresh biopsy cores and composition of type II microcalcifications. Using a chemometric model of chemical-morphological constituents, acquired Raman spectra were translated to characterize chemical makeup of the lesions. We find that increase in carbonate intercalation in the hydroxyapatite lattice can be reliably employed to differentiate benign from malignant lesions, with algorithms based only on carbonate and cytoplasmic protein content exhibiting excellent negative predictive value (93-98%). Our findings highlight the importance of calcium carbonate, an underrated constituent of microcalcifications, as a spectroscopic marker in breast pathology evaluation and pave the way for improved biopsy guidance.

  12. Market-Based Coordination of Thermostatically Controlled Loads—Part II: Unknown Parameters and Case Studies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Sen; Zhang, Wei; Lian, Jianming

    This two-part paper considers the coordination of a population of Thermostatically Controlled Loads (TCLs) with unknown parameters to achieve group objectives. The problem involves designing the bidding and market clearing strategy to motivate self-interested users to realize efficient energy allocation subject to a peak power constraint. The companion paper (Part I) formulates the problem and proposes a load coordination framework using the mechanism design approach. To address the unknown parameters, Part II of this paper presents a joint state and parameter estimation framework based on the expectation maximization algorithm. The overall framework is then validated using real-world weather data and price data, and is compared with other approaches in terms of aggregated power response. Simulation results indicate that our coordination framework can effectively improve the efficiency of power grid operations and reduce power congestion at key times.

  13. Track vertex reconstruction with neural networks at the first level trigger of Belle II

    NASA Astrophysics Data System (ADS)

    Neuhaus, Sara; Skambraks, Sebastian; Kiesling, Christian

    2017-08-01

    The track trigger is one of the main components of the Belle II first level trigger, taking input from the Central Drift Chamber (CDC). It consists of several stages, first combining hits to track segments, followed by a 2D track finding in the transverse plane and finally a 3D track reconstruction. The results of the track trigger are the track multiplicity, the momentum vector of each track and the longitudinal displacement of the origin or production vertex of each track ("z-vertex"). The latter allows to reject background tracks from outside of the interaction region and thus to suppress a large fraction of the machine background. This contribution focuses on the track finding stage using Hough transforms and on the z-vertex reconstruction with neural networks. We describe the algorithms and show performance studies on simulated events.
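
    The first stage named above, 2D track finding with a Hough transform, follows the classic voting pattern: every hit votes for all parameter-space cells consistent with it, and accumulator peaks become track candidates. The toy sketch below uses a straight-line (theta, rho) parametrization for brevity; the trigger's actual track parametrization and binning differ.

        import numpy as np

        def hough_lines(hits, n_theta=180, n_rho=100, rho_max=10.0):
            thetas = np.linspace(0, np.pi, n_theta, endpoint=False)
            acc = np.zeros((n_theta, n_rho), dtype=int)
            for x, y in hits:
                rho = x * np.cos(thetas) + y * np.sin(thetas)   # one vote curve per hit
                k = np.round((rho + rho_max) / (2 * rho_max) * (n_rho - 1)).astype(int)
                ok = (k >= 0) & (k < n_rho)
                acc[np.arange(n_theta)[ok], k[ok]] += 1
            return acc, thetas

        # hits on the line y = 0.5 * x produce a single high accumulator peak
        hits = [(x, 0.5 * x) for x in np.linspace(-3, 3, 20)]
        acc, thetas = hough_lines(hits)
        peak = np.unravel_index(acc.argmax(), acc.shape)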

  14. Applications of multiple change point detections to monthly streamflow and rainfall in Xijiang River in southern China, part II: trend and mean

    NASA Astrophysics Data System (ADS)

    Chen, Yongqin David; Jiang, Jianmin; Zhu, Yuxiang; Huang, Changxing; Zhang, Qiang

    2018-05-01

    This article, as Part II, illustrates applications of two other algorithms, i.e., the scanning F test for change points in trend and the scanning t test for change points in mean, to both the series of the normalized streamflow index (NSI) at the Makou section of the Xijiang River and the normalized precipitation index (NPI) over the Xijiang River watershed. The results from these two tests show mainly positive coherency of changes between the NSI and NPI. However, some minor patches of negative coherency may expose certain impacts of human activities, although these were often associated with nearly normal climate periods. These results suggest that runoff still depends closely on precipitation in the Xijiang catchment: anthropogenic disturbances have not yet, on the whole, violated this natural relationship in the river.
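
    The scanning t test for change points in mean can be illustrated with a sliding two-sample t statistic: at each candidate point, compare the means of the two adjacent half-windows and flag indices where |t| is large. The window length and the handling of significance below are assumptions of this sketch, not the authors' exact procedure.

        import numpy as np

        def scanning_t(x, half=12):
            """Two-sample t statistic at each index (nan near the edges)."""
            n = len(x)
            t = np.full(n, np.nan)
            for i in range(half, n - half):
                a, b = x[i - half:i], x[i:i + half]
                sp2 = (a.var(ddof=1) + b.var(ddof=1)) / 2.0   # pooled variance
                t[i] = (b.mean() - a.mean()) / np.sqrt(2.0 * sp2 / half)
            return t

        rng = np.random.default_rng(6)
        series = np.concatenate([rng.normal(0, 1, 120), rng.normal(1.5, 1, 120)])
        t_stat = scanning_t(series)       # |t| peaks near the true change point at 120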

  15. Navier-Stokes simulation of external/internal transonic flow on the forebody/inlet of the AV-8B Harrier II

    NASA Technical Reports Server (NTRS)

    Mysko, Stephen J.; Chyu, Wei J.; Stortz, Michael W.; Chow, Chuen-Yen

    1993-01-01

    In this work, the computation of combined external/internal transonic flow over the complex forebody/inlet configuration of the AV-8B Harrier II is performed. The actual aircraft was measured, and its surface and surrounding domain, in which the fuselage and inlet share a common wall, were described using structured grids. The "thin-layer" Navier-Stokes equations were used to model the flow, along with the Chimera embedded multi-block technique. A fully conservative, alternating direction implicit (ADI), approximately factored, partially flux-split algorithm was employed to perform the computation. Comparisons with experimental wind tunnel data yielded good agreement for flow at zero incidence and angle of attack. The aim of this paper is to provide a methodology and computational tool for the numerical solution of complex external/internal flows.

  16. HNF4alpha dysfunction as a molecular rational for cyclosporine induced hypertension.

    PubMed

    Niehof, Monika; Borlak, Jürgen

    2011-01-27

    Induction of tolerance against grafted organs is achieved by the immunosuppressive agent cyclosporine, a prominent member of the calcineurin inhibitors. Unfortunately, its lifetime use is associated with hypertension and nephrotoxicity. Several mechanisms for cyclosporine-induced hypertension have been proposed, i.e., activation of the sympathetic nervous system, endothelin-mediated systemic vasoconstriction, impaired vasodilatation secondary to reduction in prostaglandin and nitric oxide, altered cytosolic calcium translocation, and activation of the renin-angiotensin system (RAS). In this regard, the molecular basis for undue RAS activation and increased signaling of the vasoactive oligopeptide angiotensin II (AngII) remains elusive. Notably, angiotensinogen (AGT) is the precursor of AngII, and transcriptional regulation of AGT is controlled by the hepatic nuclear factor HNF4alpha. To better understand the molecular events associated with cyclosporine-induced hypertension, we investigated the effect of cyclosporine on HNF4alpha expression and activity and searched for novel HNF4alpha target genes among members of the RAS cascade. Using bioinformatic algorithms and EMSA bandshift assays, we identified angiotensin II receptor type 1 (AGTR1), angiotensin I converting enzyme (ACE), and angiotensin I converting enzyme 2 (ACE2) as genes targeted by HNF4alpha. Notably, cyclosporine represses HNF4alpha gene and protein expression and its DNA-binding activity at consensus sequences in AGT, AGTR1, ACE, and ACE2. Consequently, the gene expression of AGT, AGTR1, and ACE2 was significantly reduced, as evidenced by quantitative real-time RT-PCR. While RAS is composed of a sophisticated interplay between multiple factors, we propose that a decrease of ACE2 enforces AngII signaling via AGTR1, ultimately resulting in vasoconstriction and hypertension. Taken collectively, we demonstrate that cyclosporine represses HNF4alpha activity through calcineurin-inhibitor-mediated inhibition of the nuclear factor of activated T-cells (NFAT), which in turn leads to a disturbed balance of the RAS.

  17. Sparsity-Based Representation for Classification Algorithms and Comparison Results for Transient Acoustic Signals

    DTIC Science & Technology

    2016-05-01

    [Machine-extracted DTIC record; report-form residue removed. The recoverable abstract fragments read:] ... large but correlated noise and signal interference (i.e., low-rank interference). Another contribution is the implementation of deep learning ... Keywords: representation, low rank, deep learning. Recoverable section headings: 3.1 Classification of Acoustic Transients; 3.2 Joint Sparse Representation with Low-Rank Interference; 3.3 Simultaneous Group-and-Joint Sparse Representation.

  18. Solid Phase Extraction of Trace Al(III), Fe(II), Co(II), Cu(II), Cd(II) and Pb(II) Ions in Beverages on Functionalized Polymer Microspheres Prior to Flame Atomic Absorption Spectrometric Determinations.

    PubMed

    Berber, Hale; Alpdogan, Güzin

    2017-01-01

    In this study, poly(glycidyl methacrylate-methyl methacrylate-divinylbenzene) was synthesized in the form of microspheres and then functionalized with the 2-aminobenzothiazole ligand. The sorption properties of these functionalized microspheres were investigated for the separation, preconcentration and determination of Al(III), Fe(II), Co(II), Cu(II), Cd(II) and Pb(II) ions using flame atomic absorption spectrometry. The optimum pH values for quantitative sorption were 2-4, 5-8, 6-8, 4-6, 2-6 and 2-3 for Al(III), Fe(II), Co(II), Cu(II), Cd(II) and Pb(II), respectively, and the highest sorption capacity of the functionalized microspheres was found for Cu(II), with a value of 1.87 mmol g(-1). The detection limits (3σ; N = 6) obtained for the studied metals under optimal conditions were in the range of 0.26-2.20 μg L(-1). The proposed method was successfully applied to different beverage samples for the determination of Al(III), Fe(II), Co(II), Cu(II), Cd(II) and Pb(II) ions, with relative standard deviations of <3.7%.

  19. A Two-Wheel Observing Mode for the MAP Spacecraft

    NASA Technical Reports Server (NTRS)

    Starin, Scott R.; ODonnell, James R., Jr.

    2001-01-01

    The Microwave Anisotropy Probe (MAP) is a follow-on to the Differential Microwave Radiometer (DMR) instrument on the Cosmic Background Explorer (COBE). Due to the MAP project's limited mass, power, and budget, a traditional reliability concept including fully redundant components was not feasible. The MAP design employs selective hardware redundancy, along with backup software modes and algorithms, to improve the odds of mission success. This paper describes the effort to develop a backup control mode, known as Observing II, that will allow the MAP science mission to continue in the event of a failure of one of its three reaction wheel assemblies. This backup science mode requires a change from MAP's nominal zero-momentum control system to a momentum-bias system. In this system, existing thruster-based control modes are used to establish a momentum bias about the sun line sufficient to spin the spacecraft up to the desired scan rate. The natural spacecraft dynamics exhibits spin and nutation similar to the nominal MAP science mode with different relative rotation rates, so the two reaction wheels are used to establish and maintain the desired nutation angle from the sun line. Detailed descriptions of the Observing II control algorithm and simulation results are presented, along with the operational considerations of performing the rest of MAP's necessary functions with only two wheels.

  20. On the definition of the concepts thinking, consciousness, and conscience.

    PubMed Central

    Monin, A S

    1992-01-01

    A complex system (CS) is defined as a set of elements, with connections between them, singled out of the environment, capable of getting information from the environment, capable of making decisions (i.e., of choosing between alternatives), and having purposefulness (i.e., an urge towards preferable states or other goals). Thinking is a process that takes place (or which can take place) in some of the CS and consists of (i) receiving information from the environment (and from itself), (ii) memorizing the information, (iii) the subconscious, and (iv) consciousness. Life is a process that takes place in some CS and consists of functions i and ii, as well as (v) reproduction with passing of hereditary information to progeny, and (vi) oriented energy and matter exchange with the environment sufficient for the maintenance of all life processes. Memory is a complex of processes of placing information in memory banks, keeping it there, and producing it according to prescriptions available in the system or to inquiries arising in it. Consciousness is a process of realization by the thinking CS of some set of algorithms consisting of the comparison of its knowledge, intentions, decisions, and actions with reality--i.e., with accumulated and continuously received internal and external information. Conscience is a realization of an algorithm of good and evil pattern recognition. PMID:1631060

  1. Fuzzy multiobjective models for optimal operation of a hydropower system

    NASA Astrophysics Data System (ADS)

    Teegavarapu, Ramesh S. V.; Ferreira, André R.; Simonovic, Slobodan P.

    2013-06-01

    Optimal operation models for a hydropower system are developed and evaluated in this study using new fuzzy multiobjective mathematical programming formulations. The models (i) use mixed integer nonlinear programming (MINLP) with binary variables and (ii) integrate a new turbine unit commitment formulation along with water quality constraints used for evaluating reservoir downstream impairment. The Reardon method, used in the solution of genetic algorithm optimization problems, forms the basis for the development of a new fuzzy multiobjective hydropower system optimization model with the creation of Reardon-type fuzzy membership functions. The models are applied to a real-life hydropower reservoir system in Brazil. Genetic algorithms (GAs) are used to (i) solve the optimization formulations to avoid computational intractability and the combinatorial problems associated with binary variables in unit commitment, (ii) efficiently address the Reardon method formulations, and (iii) deal with the local optimal solutions obtained from the use of traditional gradient-based solvers. Decision makers' preferences are incorporated within the fuzzy mathematical programming formulations to obtain compromise operating rules for a multiobjective reservoir operation problem dominated by the conflicting goals of energy production, water quality and conservation releases. Results provide insight into the compromise operation rules obtained using the new Reardon fuzzy multiobjective optimization framework and confirm its applicability to a variety of multiobjective water resources problems.

  2. Music to knowledge: A visual programming environment for the development and evaluation of music information retrieval techniques

    NASA Astrophysics Data System (ADS)

    Ehmann, Andreas F.; Downie, J. Stephen

    2005-09-01

    The objective of the International Music Information Retrieval Systems Evaluation Laboratory (IMIRSEL) project is the creation of a large, secure corpus of audio and symbolic music data accessible to the music information retrieval (MIR) community for the testing and evaluation of various MIR techniques. As part of the IMIRSEL project, a cross-platform Java-based visual programming environment called Music to Knowledge (M2K) is being developed for a variety of music information retrieval related tasks. The primary objective of M2K is to supply the MIR community with a toolset that provides the ability to rapidly prototype algorithms, as well as to foster the sharing of techniques within the MIR community through the use of a standardized set of tools. Due to the relatively large size of audio data and the computational costs associated with some digital signal processing and machine learning techniques, M2K is also designed to support distributed computing across computing clusters. In addition, facilities to allow the integration of non-Java-based (e.g., C/C++, MATLAB, etc.) algorithms and programs are provided within M2K. [Work supported by the Andrew W. Mellon Foundation and NSF Grants No. IIS-0340597 and No. IIS-0327371.]

  3. Hybrid multiscale modeling and prediction of cancer cell behavior

    PubMed Central

    Habibi, Jafar

    2017-01-01

    Background Understanding cancer development across several spatial-temporal scales is of great practical significance for better understanding and treating cancers. It is difficult to tackle this challenge by purely biological means. For this reason, hybrid modeling techniques have been proposed that combine the advantages of continuum and discrete methods to model multiscale problems. Methods In light of these problems, we have proposed a new hybrid vascular model to facilitate the multiscale modeling and simulation of cancer development using agent-based, cellular automata and machine learning methods. The purpose of this simulation is to create a dataset that can be used for the prediction of cell phenotypes. By using a proposed Q-learning method based on SVR-NSGA-II, the cells have the capability to predict their phenotypes autonomously, that is, to act on their own without external direction in response to the situations they encounter. Results Computational simulations of the model were performed in order to analyze its performance. The most striking feature of our results is that each cell can select its phenotype at each time step according to its condition. We provide evidence that the prediction of cell phenotypes is reliable. Conclusion Our proposed model, which we term a hybrid multiscale model of cancer cell behavior, has the potential to combine the best features of both continuum and discrete models. The in silico results indicate that the 3D model can represent key features of cancer growth, angiogenesis, and the related micro-environment, and show that the findings are in good agreement with biological tumor behavior. To the best of our knowledge, this paper presents the first hybrid vascular multiscale model of cancer cell behavior with the capability to predict cell phenotypes individually from a self-generated dataset. PMID:28846712

  4. Hybrid multiscale modeling and prediction of cancer cell behavior.

    PubMed

    Zangooei, Mohammad Hossein; Habibi, Jafar

    2017-01-01

    Understanding cancer development across several spatial-temporal scales is of great practical significance for better understanding and treating cancers. It is difficult to tackle this challenge with purely biological means. Hybrid modeling techniques have therefore been proposed that combine the advantages of continuum and discrete methods for modeling multiscale problems. In light of these problems, we have proposed a new hybrid vascular model to facilitate the multiscale modeling and simulation of cancer development by means of agent-based, cellular automata and machine learning methods. The purpose of this simulation is to create a dataset that can be used for the prediction of cell phenotypes. By using the proposed Q-learning method based on SVR-NSGA-II, the cells have the capability to predict their phenotypes autonomously, that is, to act on their own without external direction in response to situations they encounter. Computational simulations of the model were performed in order to analyze its performance. The most striking feature of our results is that each cell can select its phenotype at each time step according to its condition. We provide evidence that the prediction of cell phenotypes is reliable. Our proposed model, which we term a hybrid multiscale model of cancer cell behavior, has the potential to combine the best features of both continuum and discrete models. The in silico results indicate that the 3D model can represent key features of cancer growth, angiogenesis, and the related micro-environment, and show that the findings are in good agreement with biological tumor behavior. To the best of our knowledge, this paper presents the first hybrid vascular multiscale model of cancer cell behavior capable of predicting cell phenotypes individually from a self-generated dataset.

  5. Invariant Chain Complexes and Clusters as Platforms for MIF Signaling

    PubMed Central

    Lindner, Robert

    2017-01-01

    Invariant chain (Ii/CD74) has been identified as a surface receptor for migration inhibitory factor (MIF). Most cells that express Ii also synthesize major histocompatibility complex class II (MHC II) molecules, which depend on Ii as a chaperone and a targeting factor. The assembly of nonameric complexes consisting of one Ii trimer and three MHC II molecules (each of which is a heterodimer) has been regarded as a prerequisite for efficient delivery to the cell surface. Due to rapid endocytosis, however, only low levels of Ii-MHC II complexes, and very few free Ii trimers, are displayed on the cell surface of professional antigen-presenting cells. The association of Ii and MHC II has been reported to block the interaction with MIF, thus questioning the role of surface Ii as a receptor for MIF on MHC II-expressing cells. Recent work offers a potential solution to this conundrum: many Ii complexes at the cell surface appear to be under-saturated with MHC II, leaving unoccupied Ii subunits as potential binding sites for MIF. Some of this work also sheds light on novel aspects of signal transduction by Ii-bound MIF in B-lymphocytes: membrane raft association of Ii-MHC II complexes enables MIF to target Ii-MHC II to antigen-clustered B-cell receptors (BCR) and to foster BCR-driven signaling and intracellular trafficking. PMID:28208600

  6. Binding Selectivity of Methanobactin from Methylosinus trichosporium OB3b for Copper(I), Silver(I), Zinc(II), Nickel(II), Cobalt(II), Manganese(II), Lead(II), and Iron(II)

    NASA Astrophysics Data System (ADS)

    McCabe, Jacob W.; Vangala, Rajpal; Angel, Laurence A.

    2017-12-01

    Methanobactin (Mb) from Methylosinus trichosporium OB3b is a member of a class of metal binding peptides identified in methanotrophic bacteria. Mb will selectively bind and reduce Cu(II) to Cu(I), and is thought to mediate the acquisition of the copper cofactor for the enzyme methane monooxygenase. These copper chelating properties make Mb potentially useful as a chelating agent for the treatment of diseases where copper plays a role, including Wilson's disease, cancers, and neurodegenerative diseases. Utilizing traveling wave ion mobility-mass spectrometry (TWIMS), the competition for the Mb copper binding site from Ag(I), Pb(II), Co(II), Fe(II), Mn(II), Ni(II), and Zn(II) has been determined by a series of metal ion titrations, pH titrations, and metal ion displacement titrations. The TWIMS analyses allowed for the explicit identification and quantification of all the individual Mb species present during the titrations and measured their collision cross-sections (CCS) and collision-induced dissociation patterns. The results showed Ag(I) and Ni(II) could irreversibly bind to Mb and not be effectively displaced by Cu(I), whereas Ag(I) could also partially displace Cu(I) from the Mb complex. At pH ≈ 6.5, the Mb binding selectivity follows the order Ag(I)≈Cu(I)>Ni(II)≈Zn(II)>Co(II)>>Mn(II)≈Pb(II)>Fe(II), and at pH 7.5 to 10.4 the order is Ag(I)>Cu(I)>Ni(II)>Co(II)>Zn(II)>Mn(II)≈Pb(II)>Fe(II). Breakdown curves of the disulfide-reduced Cu(I) and Ag(I) complexes showed that a correlation exists between their relative stability and the compact folded structure indicated by their CCS. Fluorescence spectroscopy, which allowed the determination of the binding constant, compared well with the TWIMS analyses, with the exception of the Ni(II) complex.
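
    To connect a measured binding constant to the speciation seen in such titrations, a generic 1:1 binding model (not taken from the paper) can be solved exactly; the Python sketch below computes the bound fraction from a hypothetical dissociation constant.

        # Fraction of peptide carrying metal for M + L <=> ML with constant Kd,
        # from the exact root of the binding quadratic. All numbers hypothetical.
        import math

        def fraction_bound(m_total, l_total, kd):
            b = m_total + l_total + kd
            ml = (b - math.sqrt(b * b - 4.0 * m_total * l_total)) / 2.0
            return ml / l_total

        print(fraction_bound(10e-6, 5e-6, 1e-6))  # ~0.85 for 10 uM M, 5 uM L, Kd 1 uM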

  7. Binding Selectivity of Methanobactin from Methylosinus trichosporium OB3b for Copper(I), Silver(I), Zinc(II), Nickel(II), Cobalt(II), Manganese(II), Lead(II), and Iron(II).

    PubMed

    McCabe, Jacob W; Vangala, Rajpal; Angel, Laurence A

    2017-12-01

    Methanobactin (Mb) from Methylosinus trichosporium OB3b is a member of a class of metal binding peptides identified in methanotrophic bacteria. Mb will selectively bind and reduce Cu(II) to Cu(I), and is thought to mediate the acquisition of the copper cofactor for the enzyme methane monooxygenase. These copper chelating properties make Mb potentially useful as a chelating agent for the treatment of diseases where copper plays a role, including Wilson's disease, cancers, and neurodegenerative diseases. Utilizing traveling wave ion mobility-mass spectrometry (TWIMS), the competition for the Mb copper binding site from Ag(I), Pb(II), Co(II), Fe(II), Mn(II), Ni(II), and Zn(II) has been determined by a series of metal ion titrations, pH titrations, and metal ion displacement titrations. The TWIMS analyses allowed for the explicit identification and quantification of all the individual Mb species present during the titrations and measured their collision cross-sections (CCS) and collision-induced dissociation patterns. The results showed Ag(I) and Ni(II) could irreversibly bind to Mb and not be effectively displaced by Cu(I), whereas Ag(I) could also partially displace Cu(I) from the Mb complex. At pH ≈ 6.5, the Mb binding selectivity follows the order Ag(I)≈Cu(I)>Ni(II)≈Zn(II)>Co(II)>Mn(II)≈Pb(II)>Fe(II), and at pH 7.5 to 10.4 the order is Ag(I)>Cu(I)>Ni(II)>Co(II)>Zn(II)>Mn(II)≈Pb(II)>Fe(II). Breakdown curves of the disulfide-reduced Cu(I) and Ag(I) complexes showed that a correlation exists between their relative stability and the compact folded structure indicated by their CCS. Fluorescence spectroscopy, which allowed the determination of the binding constant, compared well with the TWIMS analyses, with the exception of the Ni(II) complex.

  8. Efficient Controls for Finitely Convergent Sequential Algorithms

    PubMed Central

    Chen, Wei; Herman, Gabor T.

    2010-01-01

    Finding a feasible point that satisfies a set of constraints is a common task in scientific computing: examples are the linear feasibility problem and the convex feasibility problem. Finitely convergent sequential algorithms can be used for solving such problems; an example of such an algorithm is ART3, whose control is cyclic in the sense that during its execution it repeatedly cycles through the given constraints. Previously we found a variant of ART3 whose control is no longer cyclic, but which is still finitely convergent and in practice usually converges faster than ART3 does. In this paper we propose a general methodology for the automatic transformation of finitely convergent sequential algorithms in such a way that (i) finite convergence is retained and (ii) the speed of convergence is improved. The first of these two properties is proven by mathematical theorems; the second is illustrated by applying the algorithms to a practical problem. PMID:20953327
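
    To make the notion of cyclic control concrete, here is a minimal Python sketch of a sequential algorithm for the linear feasibility problem that repeatedly cycles through the constraints, projecting onto each violated half-space; it illustrates cyclic control generically and is not the ART3 reflection scheme itself.

        # Find x with A @ x <= b by cyclic relaxed projections (schematic).
        import numpy as np

        def cyclic_feasibility(A, b, x0, relax=1.0, sweeps=100, tol=1e-9):
            x = x0.astype(float)
            for _ in range(sweeps):
                feasible = True
                for i in range(len(b)):        # cyclic control: i = 1, 2, ..., m, 1, ...
                    viol = A[i] @ x - b[i]
                    if viol > tol:
                        feasible = False
                        x -= relax * viol / (A[i] @ A[i]) * A[i]  # project onto half-space
                if feasible:
                    return x                    # a feasible point was reached
            return x

        A = np.array([[1.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])
        b = np.array([1.0, 0.0, 0.0])
        print(cyclic_feasibility(A, b, np.array([2.0, 2.0])))  # lands in the triangle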

  9. System for Anomaly and Failure Detection (SAFD) system development

    NASA Technical Reports Server (NTRS)

    Oreilly, D.

    1992-01-01

    This task specified developing the hardware and software necessary to implement the System for Anomaly and Failure Detection (SAFD) algorithm, developed under Technology Test Bed (TTB) Task 21, on the TTB engine stand. This effort involved building two units: one to be installed in the Block II Space Shuttle Main Engine (SSME) Hardware Simulation Lab (HSL) at Marshall Space Flight Center (MSFC), and one to be installed at the TTB engine stand. Rocketdyne personnel from the HSL performed the task. The SAFD algorithm was developed as an improvement over the current redline system used in the Space Shuttle Main Engine Controller (SSMEC). Simulation tests and execution against previous hot fire tests demonstrated that the SAFD algorithm can detect engine failure as much as tens of seconds before the redline system recognized the failure. Although the current algorithm only operates during steady-state conditions (engine not throttling), work is underway to expand the algorithm to work during transient conditions.
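
    The advantage claimed over fixed redlines can be seen in a toy contrast: a statistical band derived from nominal steady-state data trips on a slow drift long before a hard redline does. The Python sketch below is schematic and is not the SAFD algorithm; the sensor, limits, and constants are invented.

        import numpy as np

        rng = np.random.default_rng(0)
        nominal = rng.normal(1000.0, 5.0, 200)      # nominal steady-state sensor data
        mu, sigma = nominal.mean(), nominal.std()
        REDLINE = 1100.0                            # hypothetical hard limit
        drift = 1000.0 + 0.5 * np.arange(400) + rng.normal(0.0, 5.0, 400)

        for t, x in enumerate(drift):
            if abs(x - mu) > 4.0 * sigma:           # statistical trip (SAFD-style idea)
                print(f"statistical anomaly at t={t}")
                break
        for t, x in enumerate(drift):
            if x > REDLINE:                         # the redline trips much later
                print(f"redline exceeded at t={t}")
                break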

  10. Algorithms of maximum likelihood data clustering with applications

    NASA Astrophysics Data System (ADS)

    Giada, Lorenzo; Marsili, Matteo

    2002-12-01

    We address the problem of data clustering by introducing an unsupervised, parameter-free approach based on the maximum likelihood principle. Starting from the observation that data sets belonging to the same cluster share common information, we construct an expression for the likelihood of any possible cluster structure. The likelihood in turn depends only on the Pearson correlation coefficients of the data. We discuss clustering algorithms that provide a fast and reliable approximation to maximum likelihood configurations. Compared to standard clustering methods, our approach has the advantages that (i) it is parameter-free, (ii) the number of clusters need not be fixed in advance, and (iii) the interpretation of the results is transparent. In order to test our approach and compare it with standard clustering algorithms, we analyze two very different data sets: time series of financial market returns and gene expression data. We find that different maximization algorithms produce similar cluster structures, whereas the outcome of standard algorithms has a much wider variability.
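
    For reference, a compact Python sketch of this likelihood and a greedy merge heuristic follows. The per-cluster expression is our reading of the paper's formula (a cluster s of n_s > 1 objects with internal correlation sum c_s contributes (1/2)[log(n_s/c_s) + (n_s - 1) log((n_s^2 - n_s)/(n_s^2 - c_s))]), so verify against the original before serious use.

        import numpy as np

        def log_likelihood(C, clusters):
            total = 0.0
            for members in clusters:
                n = len(members)
                if n < 2:
                    continue                            # singletons contribute nothing
                c = C[np.ix_(members, members)].sum()   # includes the n diagonal ones
                total += 0.5 * (np.log(n / c)
                                + (n - 1) * np.log((n * n - n) / (n * n - c)))
            return total

        def greedy_cluster(C):
            """Repeatedly apply the pairwise merge that most increases the likelihood."""
            clusters, improved = [[i] for i in range(len(C))], True
            while improved and len(clusters) > 1:
                improved, best, base = False, (0.0, None), log_likelihood(C, clusters)
                for a in range(len(clusters)):
                    for b in range(a + 1, len(clusters)):
                        trial = [m for k, m in enumerate(clusters) if k not in (a, b)]
                        trial.append(clusters[a] + clusters[b])
                        gain = log_likelihood(C, trial) - base
                        if gain > best[0]:
                            best, improved = (gain, (a, b)), True
                if improved:
                    a, b = best[1]
                    merged = clusters[a] + clusters[b]
                    clusters = [m for k, m in enumerate(clusters)
                                if k not in (a, b)] + [merged]
            return clusters

        C = np.array([[1.0, 0.9, 0.1],
                      [0.9, 1.0, 0.1],
                      [0.1, 0.1, 1.0]])
        print(greedy_cluster(C))                        # [[2], [0, 1]]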

  11. A novel global Harmony Search method based on Ant Colony Optimisation algorithm

    NASA Astrophysics Data System (ADS)

    Fouad, Allouani; Boukhetala, Djamel; Boudjema, Fares; Zenger, Kai; Gao, Xiao-Zhi

    2016-03-01

    The Global-best Harmony Search (GHS) is a recently developed stochastic optimisation algorithm, which hybridises the Harmony Search (HS) method with the swarm intelligence concept of particle swarm optimisation (PSO) to enhance its performance. In this article, a new optimisation algorithm called GHSACO is developed by combining the GHS with the Ant Colony Optimisation algorithm (ACO). Our method introduces a novel improvisation process, which differs from that of the GHS in the following aspects: (i) a modified harmony memory (HM) representation and conception; (ii) the use of a global random switching mechanism to control the choice between the ACO and GHS rules; (iii) an additional memory consideration selection rule using the ACO random proportional transition rule with a pheromone trail update mechanism. The proposed GHSACO algorithm has been applied to various benchmark functions and constrained optimisation problems. Simulation results demonstrate that it can find significantly better solutions than the original HS and some of its variants.
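
    The switching idea fits in a few lines: during improvisation, each decision variable is drawn either with an ACO-style pheromone-weighted choice over the harmony memory or with the GHS rule of copying from the best harmony. The Python sketch below is schematic; the parameter names, pheromone layout, and update policy are ours, not the article's.

        import random

        def improvise(memory, fitness, pheromone, dim, hmcr=0.9, switch=0.5,
                      lo=-10.0, hi=10.0):
            """memory: list of harmony vectors; pheromone[i][d]: positive trail."""
            best = min(memory, key=fitness)                  # minimisation assumed
            new = []
            for d in range(dim):
                if random.random() < hmcr:                   # memory consideration
                    if random.random() < switch:             # ACO-like proportional rule
                        w = [pheromone[i][d] for i in range(len(memory))]
                        i = random.choices(range(len(memory)), weights=w)[0]
                        new.append(memory[i][d])
                    else:
                        new.append(best[d])                  # GHS: learn from the best
                else:
                    new.append(random.uniform(lo, hi))       # random consideration
            return new

        mem = [[1.0, 2.0], [0.5, -1.0], [3.0, 0.0]]
        pher = [[1.0, 1.0], [2.0, 1.0], [1.0, 3.0]]
        print(improvise(mem, lambda h: sum(x * x for x in h), pher, dim=2))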

  12. Kernel Recursive Least-Squares Temporal Difference Algorithms with Sparsification and Regularization

    PubMed Central

    Zhu, Qingxin; Niu, Xinzheng

    2016-01-01

    By incorporating sparse kernel methods, least-squares temporal difference (LSTD) algorithms can construct the feature dictionary automatically and obtain better generalization ability. However, previous kernel-based LSTD algorithms do not consider regularization, and their sparsification processes are batch or offline, which hinders their widespread application in online learning problems. In this paper, we combine the following five techniques and propose two novel kernel recursive LSTD algorithms: (i) online sparsification, which can cope with unknown state regions and be used for online learning; (ii) L2 and L1 regularization, which can avoid overfitting and eliminate the influence of noise; (iii) recursive least squares, which can eliminate matrix-inversion operations and reduce computational complexity; (iv) a sliding-window approach, which avoids caching all history samples and reduces the computational cost; and (v) fixed-point subiteration and online pruning, which make L1 regularization easy to implement. Finally, simulation results on two 50-state chain problems demonstrate the effectiveness of our algorithms. PMID:27436996
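
    The recursive-least-squares ingredient can be isolated in a short sketch: in a fixed linear feature space, the LSTD solution can be maintained without any matrix inversion via a Sherman-Morrison rank-one update. The kernel dictionary, online sparsification, L1 machinery, and sliding window of the paper's algorithms are omitted here; the regularization below is a simple L2-style initialization of our own choosing.

        import numpy as np

        class RecursiveLSTD:
            """Recursive LSTD(0) with linear features (schematic)."""
            def __init__(self, n_features, gamma=0.95, reg=1.0):
                self.gamma = gamma
                self.P = np.eye(n_features) / reg   # P ~ (reg * I)^-1 initially
                self.b = np.zeros(n_features)

            def observe(self, phi, reward, phi_next):
                d = phi - self.gamma * phi_next
                Pphi = self.P @ phi
                self.P -= np.outer(Pphi, d @ self.P) / (1.0 + d @ Pphi)  # Sherman-Morrison
                self.b += reward * phi

            @property
            def theta(self):
                return self.P @ self.b              # value-function weights

        est = RecursiveLSTD(n_features=2)
        est.observe(np.array([1.0, 0.0]), 1.0, np.array([0.0, 1.0]))
        print(est.theta)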

  13. Kernel Recursive Least-Squares Temporal Difference Algorithms with Sparsification and Regularization.

    PubMed

    Zhang, Chunyuan; Zhu, Qingxin; Niu, Xinzheng

    2016-01-01

    By incorporating sparse kernel methods, least-squares temporal difference (LSTD) algorithms can construct the feature dictionary automatically and obtain better generalization ability. However, previous kernel-based LSTD algorithms do not consider regularization, and their sparsification processes are batch or offline, which hinders their widespread application in online learning problems. In this paper, we combine the following five techniques and propose two novel kernel recursive LSTD algorithms: (i) online sparsification, which can cope with unknown state regions and be used for online learning; (ii) L2 and L1 regularization, which can avoid overfitting and eliminate the influence of noise; (iii) recursive least squares, which can eliminate matrix-inversion operations and reduce computational complexity; (iv) a sliding-window approach, which avoids caching all history samples and reduces the computational cost; and (v) fixed-point subiteration and online pruning, which make L1 regularization easy to implement. Finally, simulation results on two 50-state chain problems demonstrate the effectiveness of our algorithms.

  14. Structural alteration of hexagonal birnessite by aqueous Mn(II): Impacts on Ni(II) sorption

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lefkowitz, Joshua P.; Elzinga, Evert J.

    We studied the impacts of aqueous Mn(II) (1 mM) on the sorption of Ni(II) (200 μM) by hexagonal birnessite (0.1 g L⁻¹) at pH 6.5 and 7.5 with batch experiments and XRD, ATR-FTIR and Ni K-edge EXAFS analyses. In the absence of Mn(II)aq, sorbed Ni(II) was coordinated predominantly as triple corner-sharing complexes at layer vacancies at both pH values. Introduction of Mn(II)aq into Ni(II)-birnessite suspensions at pH 6.5 caused Ni(II) desorption and led to the formation of edge-sharing Ni(II) complexes. This was attributed to competitive displacement of Ni(II) from layer vacancies by either Mn(II) or by Mn(III) formed through interfacial Mn(II)-Mn(IV) comproportionation, and/or incorporation of Ni(II) into the birnessite lattice promoted by Mn(II)-catalyzed recrystallization of the sorbent. Similar to Mn(II)aq, the presence of HEPES or MES caused the formation of edge-sharing Ni(II) sorption complexes in Ni(II)-birnessite suspensions, which was attributed to partial reduction of the sorbent by the buffers. At pH 7.5, interaction with aqueous Mn(II) caused reductive transformation of birnessite into secondary feitknechtite that incorporated Ni(II), enhancing removal of Ni(II) from solution. These results demonstrate that reductive alteration of phyllomanganates may significantly affect the speciation and solubility of Ni(II) in anoxic and suboxic environments.

  15. Manganese acquisition by Lactobacillus plantarum

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Archibald, F.S.; Duong, M.N.

    1984-04-01

    Lactobacillus plantarum has an unusually high Mn(II) requirement for growth and accumulated over 30 mM intracellular Mn(II). The acquisition of Mn(II) by L. plantarum occurred via a specific active transport system powered by the transmembrane proton gradient. The Mn(II) uptake system has a Km of 0.2 μM and a Vmax of 24 nmol mg⁻¹ of protein min⁻¹. Above a medium Mn(II) concentration of 200 μM, the intracellular Mn(II) level was independent of the medium Mn(II) and unresponsive to oxygen stresses but was reduced by phosphate limitation. At a pH of 5.5, citrate, isocitrate, and cis-aconitate effectively promoted Mn(II) uptake, although measurable levels of 1,5-(¹⁴C)citrate were not accumulated. When cells were presented with equimolar Mn(II) and Cd(II), Cd(II) was preferentially taken up by the Mn(II) transport system. Both Mn(II) and Cd(II) uptake were greatly increased by Mn(II) starvation. Mn(II) uptake by Mn(II)-starved cells was subject to a negative feedback regulatory mechanism functioning less than 1 min after exposure of the cells to Mn(II) and independent of protein synthesis. When presented with a relatively large amount of exogenous Mn(II), Mn(II)-starved cells exhibited a measurable efflux of their internal Mn(II), but the rate was only a small fraction of the maximal Mn(II) uptake rate.
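
    The quoted constants define a Michaelis-Menten rate law, which a two-line Python check makes concrete (units as reported in the abstract):

        def uptake_rate(mn_uM, vmax=24.0, km=0.2):
            """Mn(II) uptake rate, nmol per mg protein per minute."""
            return vmax * mn_uM / (km + mn_uM)

        print(uptake_rate(0.2))    # 12.0 -- half-maximal at the Km, as expected
        print(uptake_rate(200.0))  # ~24 -- the system saturates far below 200 uM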

  16. A motif detection and classification method for peptide sequences using genetic programming.

    PubMed

    Tomita, Yasuyuki; Kato, Ryuji; Okochi, Mina; Honda, Hiroyuki

    2008-08-01

    The discovery of common rules (property motifs) in amino acid sequences is needed for the design of novel sequences and the elucidation of interactions between molecules controlled by their structural or physical environment. In the present study, we developed a new method to search for property motifs that are common in peptide sequence data. Our method has the following two characteristics: (i) the automatic determination of the position and length of common property motifs by calculating the physicochemical similarity of amino acids, and (ii) the quick and effective exploration of motif candidates that discriminate positives from negatives through the introduction of genetic programming (GP). Our method was evaluated on two types of model data sets. First, property motifs were searched for in artificially derived peptide data containing intentionally buried property motifs. As a result, the expected property motifs were correctly extracted by our algorithm. Second, peptide data that interact with MHC class II molecules were analyzed as a model of biologically active peptides with buried motifs of various lengths. Two-fold more MHC class II-binding peptides were identified with the rule found by our method than with the existing scoring matrix method. In conclusion, our GP-based motif searching approach enabled us to obtain knowledge of functional aspects of the peptides without any prior knowledge.
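
    The fitness driving such a GP search can be pictured as a discrimination score for a candidate rule over positive and negative peptide sets. The toy Python sketch below scores rules built from per-position hydrophobicity windows (using the Kyte-Doolittle scale); the rule encoding and scoring are simplified stand-ins of ours, not the paper's GP representation.

        # Candidate rule: list of (position, min_h, max_h) property-window predicates.
        HYDROPHOBICITY = dict(zip("ACDEFGHIKLMNPQRSTVWY",
            [1.8, 2.5, -3.5, -3.5, 2.8, -0.4, -3.2, 4.5, -3.9, 3.8,
             1.9, -3.5, -1.6, -3.5, -4.5, -0.8, -0.7, 4.2, -0.9, -1.3]))

        def matches(rule, peptide):
            return all(lo <= HYDROPHOBICITY[peptide[pos]] <= hi
                       for pos, lo, hi in rule if pos < len(peptide))

        def fitness(rule, positives, negatives):
            tp = sum(matches(rule, p) for p in positives)
            fp = sum(matches(rule, n) for n in negatives)
            return tp / len(positives) - fp / len(negatives)   # GP maximizes this

        rule = [(0, 2.0, 5.0), (2, -5.0, -2.0)]   # hydrophobic at 0, polar/charged at 2
        print(fitness(rule, ["IADKL", "LWDGH"], ["DKEAA", "GGGGG"]))  # 1.0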

  17. A Search for WIMP Dark Matter Using an Optimized Chi-square Technique on the Final Data from the Cryogenic Dark Matter Search Experiment (CDMS II)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Manungu Kiveni, Joseph

    2012-12-01

    This dissertation describes the results of a WIMP search using CDMS II data sets accumulated at the Soudan Underground Laboratory in Minnesota. Results from the original analysis of these data were published in 2009; two events were observed in the signal region with an expected leakage of 0.9 events. Further investigation revealed an issue with the ionization-pulse reconstruction algorithm, leading to a software upgrade and a subsequent reanalysis of the data. As part of the reanalysis, I developed an advanced discrimination technique to better distinguish (potential) signal events from backgrounds using a 5-dimensional chi-square method. This data analysis technique combines the event information recorded for each WIMP-search event to derive a background-discrimination parameter capable of reducing the expected background to less than one event, while maintaining high efficiency for signal events. Furthermore, optimizing the cut positions of this 5-dimensional chi-square parameter for the 14 viable germanium detectors yields an improved expected sensitivity to WIMP interactions relative to previous CDMS results. This dissertation describes my improved (and optimized) discrimination technique and the results obtained from a blind application to the reanalyzed CDMS II WIMP-search data.
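
    Generically, such a discrimination parameter sums squared, sigma-normalized deviations of several measured quantities from their signal-hypothesis expectations, and the cut position trades signal efficiency against background leakage. The Python sketch below illustrates this in five dimensions with invented Gaussian toy data; it is not the dissertation's calibrated CDMS parameter.

        import numpy as np

        def chi2(event, mu, sigma):
            """event, mu, sigma: length-5 arrays (measured vs. expected)."""
            return float(np.sum(((event - mu) / sigma) ** 2))

        def efficiency(events, mu, sigma, cut):
            return np.mean([chi2(e, mu, sigma) < cut for e in events])

        rng = np.random.default_rng(1)
        mu, sigma = np.zeros(5), np.ones(5)
        signal = rng.normal(0.0, 1.0, (1000, 5))       # signal-like toy events
        background = rng.normal(3.0, 1.0, (1000, 5))   # background shifted in all 5 dims
        for cut in (5.0, 11.1, 20.0):                  # scan the cut position
            print(cut, efficiency(signal, mu, sigma, cut),
                  efficiency(background, mu, sigma, cut))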

  18. Incorrect electrode cable connection during electrocardiographic recording.

    PubMed

    Batchvarov, Velislav N; Malik, Marek; Camm, A John

    2007-11-01

    Incorrect electrode cable connections during electrocardiographic (ECG) recording can simulate rhythm or conduction disturbance, myocardial ischaemia and infarction, as well as other clinically important abnormalities. When only precordial or only limb cables, excluding the neutral cable, have been interchanged, the waveforms in the different leads are re-arranged, inverted, or unchanged, whereas the duration of intervals is not changed. The mistake can be recognized by the presence of unusual P-QRS patterns (e.g. negative P-QRS in lead I or II, positive in lead aVR, P-QRS complexes of opposite direction in leads I and V6, etc.), a change in the P-QRS axis, or abnormal precordial QRS-T wave progression. Interchange of limb cables with the neutral cable distorts Wilson's terminal and the morphology of all precordial and unipolar limb leads. The telltale sign of the mistake is the presence of an (almost) flat line in lead I, II or III. Interchange of even one of the limb cables, except for the neutral cable, with a precordial cable distorts the morphology of most leads and leaves at most one lead (I, II, or III) unchanged. Computerized algorithms for the detection of lead misplacement, such as those based on artificial neural networks, or on correlation between original and reconstructed leads, have been developed.
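
    Two of the telltale signs described above lend themselves to a simple automated screen: an (almost) flat limb lead, suggesting an interchange involving the neutral cable, and a predominantly negative deflection in lead I, suggesting an arm-cable swap. The Python sketch below implements only these two checks with illustrative thresholds; it is a toy, not a validated detector like the neural-network or lead-reconstruction methods mentioned.

        import numpy as np

        def nearly_flat(lead_mv, tol_mv=0.05):
            return np.ptp(lead_mv) < tol_mv        # whole excursion within a tiny band

        def negative_dominant(lead_mv):
            return abs(lead_mv.min()) > abs(lead_mv.max())   # dominant deflection downward

        def screen_limb_leads(leads):
            warnings = [f"lead {name} almost flat: check neutral cable"
                        for name, sig in leads.items() if nearly_flat(sig)]
            if negative_dominant(leads["I"]):
                warnings.append("negative dominant lead I: possible arm-cable swap")
            return warnings

        t = np.linspace(0.0, 1.0, 500)
        qrs = np.exp(-((t - 0.5) ** 2) / 0.001)    # crude positive R-wave bump
        leads = {"I": -0.8 * qrs, "II": 0.002 * qrs, "III": 1.0 * qrs}
        print(screen_limb_leads(leads))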

  19. BBU and Corkscrew Growth Predictions for the Darht Second Axis Accelerator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Y.J.; Fawley, W.M.

    2001-06-12

    The second axis accelerator of the Dual Axis Radiographic Hydrodynamic Test (DARHT-II) facility will produce a 2-kA, 20-MeV, 2-μs output electron beam with a design goal of less than 1000 π mm-mrad normalized transverse emittance. In order to meet this goal, both the beam breakup instability (BBU) and transverse corkscrew motion (due to chromatic phase advance) must be limited in growth. Using data from recent experimental measurements of the transverse impedance of actual DARHT-II accelerator cells by Briggs et al. [2], they have used the LLNL BREAKUP code to predict BBU and corkscrew growth in DARHT-II. The results suggest that BBU growth should not seriously degrade the final achievable spot size at the x-ray converter, presuming the initial excitation level is of the order of 100 microns or smaller. For control of corkscrew growth, a major concern is the number of tuning shots needed to utilize the tuning-V algorithm [3] effectively. Presuming that the solenoid magnet alignment falls within spec, they believe that possibly as few as 50-100 shots will be necessary to set the dipole corrector magnet currents. They give some specific examples of tune determination for a hypothetical set of alignment errors.

  20. BBU and Corkscrew Growth Predictions for the DARHT Second Axis Accelerator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Y J; Fawley, W M

    2001-06-12

    The second axis accelerator of the Dual Axis Radiographic Hydrodynamic Test (DARHT-II) facility will produce a 2-kA, 20-MeV, 2-μs output electron beam with a design goal of less than 1000 π mm-mrad normalized transverse emittance. In order to meet this goal, both the beam breakup instability (BBU) and transverse "corkscrew" motion (due to chromatic phase advance) must be limited in growth. Using data from recent experimental measurements of the transverse impedance of actual DARHT-II accelerator cells by Briggs et al., they have used the LLNL BREAKUP code to predict BBU and corkscrew growth in DARHT-II. The results suggest that BBU growth should not seriously degrade the final achievable spot size at the x-ray converter, presuming the initial excitation level is of the order of 100 microns or smaller. For control of corkscrew growth, a major concern is the number of "tuning" shots needed to utilize the "tuning-V" algorithm effectively. Presuming that the solenoid magnet alignment falls within spec, they believe that possibly as few as 50-100 shots will be necessary to set the dipole corrector magnet currents. They give some specific examples of tune determination for a hypothetical set of alignment errors.
