Sample records for vector ordinal optimization

  1. Multiple Ordinal Regression by Maximizing the Sum of Margins

    PubMed Central

    Hamsici, Onur C.; Martinez, Aleix M.

    2016-01-01

    Human preferences are usually measured using ordinal variables. A system whose goal is to estimate the preferences of humans and their underlying decision mechanisms must learn the ordering of any given sample set. We consider the solution of this ordinal regression problem using a Support Vector Machine algorithm. Specifically, the goal is to learn a set of classifiers with common direction vectors and different biases correctly separating the ordered classes. Current algorithms either must solve a quadratic optimization problem, which is computationally expensive, or are based on maximizing the minimum margin (i.e., a fixed-margin strategy) between a set of hyperplanes, which biases the solution toward the closest margin. Another drawback of these strategies is that they are limited to ordering the classes using a single ranking variable (e.g., perceived length). In this paper, we define a multiple ordinal regression algorithm based on maximizing the sum of the margins between every pair of consecutive classes with respect to one or more rankings (e.g., perceived length and weight). We provide derivations of an efficient, easy-to-implement iterative solution using a Sequential Minimal Optimization procedure. We demonstrate the accuracy of our solutions on several datasets. In addition, we provide a key application of our algorithms in estimating human subjects' ordinal classification of attribute associations to object categories. We show that these ordinal associations perform better than the binary ones typically employed in the literature. PMID:26529784
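A minimal sketch of the prediction side of such threshold-based ordinal SVMs, with hypothetical numbers: once a shared direction vector and ordered biases have been learned (by the authors' SMO procedure or otherwise), classifying a sample reduces to counting how many separating hyperplanes it lies above.

```python
# Illustrative sketch only (not the authors' SMO solver): prediction with
# a single shared direction vector `w` and ordered biases, as in fixed-
# or sum-of-margins ordinal SVMs.  All numbers here are hypothetical.

def ordinal_predict(x, w, biases):
    """Assign x to class 1..K using K-1 ordered thresholds.

    The sample belongs to class k+1, where k is the number of
    hyperplanes w.x - b = 0 that x lies above.
    """
    score = sum(wi * xi for wi, xi in zip(w, x))
    return 1 + sum(score > b for b in biases)

w = [1.0, 0.5]            # shared direction vector (hypothetical)
biases = [0.0, 1.0, 2.5]  # one bias per consecutive-class boundary

print(ordinal_predict([-0.2, -0.2], w, biases))  # score -0.3 -> class 1
print(ordinal_predict([2.0, 2.0], w, biases))    # score 3.0 -> class 4
```

The ordering of the biases is what makes the classifiers share a common ranking direction rather than acting as independent binary decisions.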

  2. Three-Dimensional Orthogonal Co-ordinates

    ERIC Educational Resources Information Center

    Astin, J.

    1974-01-01

    A systematic approach to general orthogonal co-ordinates, suitable for use near the end of a beginning vector analysis course, is presented. It introduces students to tensor quantities and shows how equations and quantities needed in classical problems can be determined. (Author/LS)

  3. A multi-layer discrete-ordinate method for vector radiative transfer in a vertically-inhomogeneous, emitting and scattering atmosphere. I - Theory. II - Application

    NASA Technical Reports Server (NTRS)

    Weng, Fuzhong

    1992-01-01

    A theory is developed for discretizing the vector integro-differential radiative transfer equation including both solar and thermal radiation. A complete solution and boundary equations are obtained using the discrete-ordinate method. An efficient numerical procedure is presented for calculating the phase matrix and achieving computational stability. With natural light used as a beam source, the Stokes parameters from the model proposed here are compared with the analytical solutions of Chandrasekhar (1960) for a Rayleigh scattering atmosphere. The model is then applied to microwave frequencies with a thermal source, and the brightness temperatures are compared with those from Stamnes' (1988) radiative transfer model.
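The core of any discrete-ordinate method is the replacement of the angular integral in the transfer equation by a quadrature sum over a fixed set of directions. A minimal two-point Gauss-Legendre sketch (not the multi-layer vector scheme of the paper):

```python
# Minimal sketch of the discrete-ordinates idea: the angular integral in
# the radiative transfer equation is replaced by a weighted sum over a
# fixed set of ordinates (direction cosines).  Two-point Gauss-Legendre
# quadrature is shown; production codes use many more streams.
import math

MU = [-1 / math.sqrt(3), 1 / math.sqrt(3)]  # ordinates (direction cosines)
W = [1.0, 1.0]                              # quadrature weights

def angular_integral(intensity):
    """Approximate the integral of I(mu) over [-1, 1] by a quadrature sum."""
    return sum(w * intensity(mu) for mu, w in zip(MU, W))

# Two-point Gauss-Legendre is exact for polynomials up to degree 3;
# the integral of mu^2 over [-1, 1] is 2/3.
print(angular_integral(lambda mu: mu * mu))
```

In the vector case the scalar intensity becomes the four-component Stokes vector and the phase function becomes a phase matrix, but the quadrature structure is the same.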

  4. Optimizing the maximum reported cluster size in the spatial scan statistic for ordinal data.

    PubMed

    Kim, Sehwi; Jung, Inkyung

    2017-01-01

    The spatial scan statistic is an important tool for spatial cluster detection. There have been numerous studies on scanning window shapes. However, little research has been done on the maximum scanning window size or maximum reported cluster size. Recently, Han et al. proposed to use the Gini coefficient to optimize the maximum reported cluster size. However, the method has been developed and evaluated only for the Poisson model. We adopt the Gini coefficient to be applicable to the spatial scan statistic for ordinal data to determine the optimal maximum reported cluster size. Through a simulation study and application to a real data example, we evaluate the performance of the proposed approach. With some sophisticated modification, the Gini coefficient can be effectively employed for the ordinal model. The Gini coefficient most often picked the optimal maximum reported cluster sizes that were the same as or smaller than the true cluster sizes with very high accuracy. It seems that we can obtain a more refined collection of clusters by using the Gini coefficient. The Gini coefficient developed specifically for the ordinal model can be useful for optimizing the maximum reported cluster size for ordinal data and helpful for properly and informatively discovering cluster patterns.
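The Gini coefficient itself is straightforward to compute. A plain mean-absolute-difference sketch (the cluster-level statistic of Han et al. and its ordinal-model modification are not reproduced here):

```python
# Plain Gini coefficient in its mean-absolute-difference form.  The papers
# use it to score candidate maximum reported cluster sizes; only the
# generic coefficient is shown.

def gini(values):
    """Gini coefficient of a list of non-negative numbers."""
    n = len(values)
    mean = sum(values) / n
    if mean == 0:
        return 0.0
    abs_diff_sum = sum(abs(a - b) for a in values for b in values)
    return abs_diff_sum / (2 * n * n * mean)

print(gini([1, 1, 1, 1]))  # perfect equality  -> 0.0
print(gini([0, 0, 0, 1]))  # concentrated mass -> 0.75
```

Higher values indicate a more unequal, concentrated distribution, which is why the coefficient can flag the maximum reported cluster size that yields the most refined collection of clusters.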

  5. Optimizing the maximum reported cluster size in the spatial scan statistic for ordinal data

    PubMed Central

    Kim, Sehwi

    2017-01-01

    The spatial scan statistic is an important tool for spatial cluster detection. There have been numerous studies on scanning window shapes. However, little research has been done on the maximum scanning window size or maximum reported cluster size. Recently, Han et al. proposed to use the Gini coefficient to optimize the maximum reported cluster size. However, the method has been developed and evaluated only for the Poisson model. We adopt the Gini coefficient to be applicable to the spatial scan statistic for ordinal data to determine the optimal maximum reported cluster size. Through a simulation study and application to a real data example, we evaluate the performance of the proposed approach. With some sophisticated modification, the Gini coefficient can be effectively employed for the ordinal model. The Gini coefficient most often picked the optimal maximum reported cluster sizes that were the same as or smaller than the true cluster sizes with very high accuracy. It seems that we can obtain a more refined collection of clusters by using the Gini coefficient. The Gini coefficient developed specifically for the ordinal model can be useful for optimizing the maximum reported cluster size for ordinal data and helpful for properly and informatively discovering cluster patterns. PMID:28753674

  6. Quantitative characterisation of audio data by ordinal symbolic dynamics

    NASA Astrophysics Data System (ADS)

    Aschenbrenner, T.; Monetti, R.; Amigó, J. M.; Bunk, W.

    2013-06-01

    Ordinal symbolic dynamics has developed into a valuable method to describe complex systems. Recently, using the concept of transcripts, the coupling behaviour of systems was assessed, combining the properties of the symmetric group with information theoretic ideas. In this contribution, methods from the field of ordinal symbolic dynamics are applied to the characterisation of audio data. Coupling complexity between frequency bands of solo violin music, as a fingerprint of the instrument, is used for classification purposes within a support vector machine scheme. Our results suggest that coupling complexity is able to capture essential characteristics, sufficient to distinguish among different violins.
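The ordinal symbolisation step underlying this line of work is compact: each length-m window of the signal is replaced by the permutation that orders it, and the entropy of the resulting symbol distribution quantifies complexity. A minimal sketch (the transcript-based coupling measures of the paper build on top of this):

```python
# Sketch of ordinal symbolic dynamics: map each length-m window to its
# ordinal pattern (the argsort permutation), then compute the normalised
# Shannon entropy of the pattern distribution (permutation entropy).
import math
from collections import Counter

def ordinal_patterns(series, m):
    """Map each length-m window to its ordinal pattern (a permutation)."""
    return [tuple(sorted(range(m), key=lambda i: w[i]))
            for w in (series[k:k + m] for k in range(len(series) - m + 1))]

def permutation_entropy(series, m):
    """Normalised Shannon entropy of the ordinal pattern distribution."""
    counts = Counter(ordinal_patterns(series, m))
    total = sum(counts.values())
    h = -sum(c / total * math.log(c / total) for c in counts.values())
    return h / math.log(math.factorial(m))

print(permutation_entropy([1, 2, 3, 4, 5, 6], 3))  # monotone signal -> 0.0
```

Pattern distributions like these, computed per frequency band, are the kind of feature vectors a support vector machine scheme can then classify.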

  7. Reduction from cost-sensitive ordinal ranking to weighted binary classification.

    PubMed

    Lin, Hsuan-Tien; Li, Ling

    2012-05-01

    We present a reduction framework from ordinal ranking to binary classification. The framework consists of three steps: extracting extended examples from the original examples, learning a binary classifier on the extended examples with any binary classification algorithm, and constructing a ranker from the binary classifier. Based on the framework, we show that a weighted 0/1 loss of the binary classifier upper-bounds the mislabeling cost of the ranker, both error-wise and regret-wise. Our framework allows not only the design of good ordinal ranking algorithms based on well-tuned binary classification approaches, but also the derivation of new generalization bounds for ordinal ranking from known bounds for binary classification. In addition, our framework unifies many existing ordinal ranking algorithms, such as perceptron ranking and support vector ordinal regression. When compared empirically on benchmark data sets, some of our newly designed algorithms enjoy advantages in terms of both training speed and generalization performance over existing algorithms. In addition, the newly designed algorithms lead to better cost-sensitive ordinal ranking performance, as well as improved listwise ranking performance.
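The first step of the reduction can be sketched in a few lines: each ordinal example is expanded into K-1 weighted binary questions of the form "is the rank greater than k?". The weighting below assumes the absolute cost (which yields unit weights); other cost vectors give non-uniform weights.

```python
# Sketch of the extended-example construction in the reduction from
# ordinal ranking to weighted binary classification.  Each (x, y) with
# y in {1..K} becomes K-1 binary examples, one per threshold k.

def extend(x, y, K):
    """Return (extended_input, binary_label, weight) triples."""
    out = []
    for k in range(1, K):
        label = 1 if y > k else -1
        weight = 1.0  # absolute cost => unit weights; other costs differ
        out.append(((x, k), label, weight))
    return out

for ex in extend(x=[0.3, 0.7], y=3, K=4):
    print(ex)  # labels 1, 1, -1 for thresholds k = 1, 2, 3
```

Any binary classifier trained on these triples can then be turned back into a ranker by counting positive threshold decisions, which is what makes the upper bound on mislabeling cost possible.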

  8. A Modified Penalty Parameter Approach for Optimal Estimation of UH with Simultaneous Estimation of Infiltration Parameters

    NASA Astrophysics Data System (ADS)

    Bhattacharjya, Rajib Kumar

    2018-05-01

    The unit hydrograph and the infiltration parameters of a watershed can be obtained from observed rainfall-runoff data by using an inverse optimization technique. This is a two-stage optimization problem: the infiltration parameters are obtained in the first stage, and the unit hydrograph ordinates are estimated in the second. In order to combine this two-stage method into a single-stage one, a modified penalty parameter approach is proposed for converting the constrained optimization problem into an unconstrained one. The proposed approach is designed so that the model first obtains the infiltration parameters and then searches for the optimal unit hydrograph ordinates. The optimization model is solved using Genetic Algorithms. A reduction factor is used in the penalty parameter approach so that the optimal infiltration parameters already obtained are not destroyed during the subsequent generations of the genetic algorithm that search for the optimal unit hydrograph ordinates. The performance of the proposed methodology is evaluated using two example problems. The evaluation shows that the model is superior, conceptually simple, and has potential for field application.

  9. Ordinal feature selection for iris and palmprint recognition.

    PubMed

    Sun, Zhenan; Wang, Libin; Tan, Tieniu

    2014-09-01

    Ordinal measures have been demonstrated to be an effective feature representation model for iris and palmprint recognition. However, ordinal measures are a general concept of image analysis, and numerous variants with different parameter settings, such as location, scale, and orientation, can be derived to construct a huge feature space. This paper proposes a novel optimization formulation for ordinal feature selection with successful applications to both iris and palmprint recognition. The objective function of the proposed feature selection method has two parts, i.e., the misclassification error of intra- and interclass matching samples and the weighted sparsity of ordinal feature descriptors. The feature selection therefore aims to achieve an accurate and sparse representation of ordinal measures. The optimization is subject to a number of linear inequality constraints, which require that all intra- and interclass matching pairs be well separated with a large margin. Ordinal feature selection is formulated as a linear programming (LP) problem so that a solution can be obtained efficiently even on a large-scale feature pool and training database. Extensive experimental results demonstrate that the proposed LP formulation is advantageous over existing feature selection methods, such as mRMR, ReliefF, Boosting, and Lasso, for biometric recognition, reporting state-of-the-art accuracy on the CASIA and PolyU databases.
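The margin constraints of such an LP can be assembled mechanically: for every (intra-class, inter-class) pair of comparisons, the weighted feature distance must separate them by a margin. A sketch that only builds the constraint matrix, with hypothetical distances (the feature weights would then be found by any LP solver):

```python
# Sketch of setting up the linear inequality constraints: for every
# (intra, inter) pair of matching comparisons, require
#     w . (d_inter - d_intra) >= MARGIN
# over the per-feature distance vectors.  Only the constraint matrix is
# built here; solving for the sparse weights w is left to an LP solver.

MARGIN = 1.0

def constraint_rows(intra_dists, inter_dists):
    """One row per (intra, inter) pair of per-feature distance vectors."""
    rows, rhs = [], []
    for d_in in intra_dists:
        for d_out in inter_dists:
            rows.append([o - i for i, o in zip(d_in, d_out)])
            rhs.append(MARGIN)
    return rows, rhs

# Hypothetical per-feature Hamming distances (bit-count differences)
# for 3 ordinal features.
intra = [[1, 2, 1]]                  # genuine (same-identity) comparisons
inter = [[5, 4, 6], [6, 5, 5]]       # impostor comparisons
A, b = constraint_rows(intra, inter)
print(len(A), A[0])  # 2 [4, 2, 5]
```

Sparsity of the solution (most feature weights driven to zero) is what turns this into feature *selection* rather than mere weighting.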

  10. Fusagene vectors: a novel strategy for the expression of multiple genes from a single cistron.

    PubMed

    Gäken, J; Jiang, J; Daniel, K; van Berkel, E; Hughes, C; Kuiper, M; Darling, D; Tavassoli, M; Galea-Lauri, J; Ford, K; Kemeny, M; Russell, S; Farzaneh, F

    2000-12-01

    Transduction of cells with multiple genes, allowing their stable and co-ordinated expression, is difficult with the available methodologies. A method has been developed for expression of multiple gene products, as fusion proteins, from a single cistron. The encoded proteins are post-synthetically cleaved and processed into each of their constituent proteins as individual, biologically active factors. Specifically, linkers encoding cleavage sites for the Golgi expressed endoprotease, furin, have been incorporated between in-frame cDNA sequences encoding different secreted or membrane bound proteins. With this strategy we have developed expression vectors encoding multiple proteins (IL-2 and B7.1, IL-4 and B7.1, IL-4 and IL-2, IL-12 p40 and p35, and IL-12 p40, p35 and IL-2). Transduction and analysis of over 100 individual clones, derived from murine and human tumour cell lines, demonstrate the efficient expression and biological activity of each of the encoded proteins. Fusagene vectors enable the co-ordinated expression of multiple gene products from a single, monocistronic, expression cassette.

  11. Ordinal optimization and its application to complex deterministic problems

    NASA Astrophysics Data System (ADS)

    Yang, Mike Shang-Yu

    1998-10-01

    We present in this thesis a new perspective on a general class of optimization problems characterized by large deterministic complexities. Many problems of real-world concern today lack analyzable structures and almost always involve a high level of difficulty and complexity in the evaluation process. Advances in computer technology allow us to build computer models to simulate the evaluation process through numerical means, but the burden of high complexity remains, taxing the simulation with an exorbitant computing cost for each evaluation. Such a resource requirement makes local fine-tuning of a known design difficult under most circumstances, let alone global optimization. The Kolmogorov equivalence of complexity and randomness in computation theory is introduced to resolve this difficulty by converting the complex deterministic model into a stochastic pseudo-model composed of a simple deterministic component and a white-noise-like stochastic term. The resulting randomness is then dealt with by a noise-robust approach called Ordinal Optimization. Ordinal Optimization utilizes Goal Softening and Ordinal Comparison to achieve an efficient and quantifiable selection of designs in the initial search process. The approach is substantiated by a case study of the turbine blade manufacturing process. The problem involves the optimization of the manufacturing process of the integrally bladed rotor in the turbine engines of U.S. Air Force fighter jets. The intertwining interactions among the material, thermomechanical, and geometrical changes make the current FEM approach prohibitively uneconomical in the optimization process. The generalized OO approach to complex deterministic problems is applied here with great success. Empirical results indicate a saving of nearly 95% in computing cost.
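Goal softening and ordinal comparison can be illustrated with a toy experiment, using entirely hypothetical numbers: designs are ranked by a cheap but very noisy evaluation, and instead of trusting the single apparent best design, a whole set of apparent winners is kept.

```python
# Toy illustration of ordinal comparison with goal softening: rank designs
# by a cheap noisy surrogate and keep a *set* of apparent winners rather
# than the single best.  All values are hypothetical.
import random

random.seed(0)
true_value = {d: d for d in range(100)}           # design d has true cost d
noisy = {d: v + random.gauss(0, 20) for d, v in true_value.items()}

g, s = 10, 10                                     # true good set / selected set
true_good = set(sorted(true_value, key=true_value.get)[:g])
selected = set(sorted(noisy, key=noisy.get)[:s])

# Alignment: how many truly good designs survive selection under heavy noise.
print(len(true_good & selected))
```

The point of the approach is that order (which design is better) is far more robust to noise than value (by how much), so even a crude model reliably steers the search toward the good region before any expensive FEM evaluation is spent.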

  12. An approach to solve group-decision-making problems with ordinal interval numbers.

    PubMed

    Fan, Zhi-Ping; Liu, Yang

    2010-10-01

    The ordinal interval number is a form of uncertain preference information in group decision making (GDM), yet it is seldom discussed in the existing research. This paper investigates how the ranking order of alternatives is determined based on preference information of ordinal interval numbers in GDM problems. When ranking a large quantity of ordinal interval numbers, the efficiency and accuracy of the ranking process are critical. A new approach is proposed to rank alternatives using ordinal interval numbers when every ranking ordinal in an ordinal interval number is thought to be uniformly and independently distributed in its interval. First, we give the definition of the possibility degree for comparing two ordinal interval numbers and the related theoretical analysis. Then, to rank alternatives by comparing multiple ordinal interval numbers, a collective expectation possibility degree matrix on pairwise comparisons of alternatives is built, and an optimization model based on this matrix is constructed. Furthermore, an algorithm is also presented to rank alternatives by solving the model. Finally, two examples are used to illustrate the use of the proposed approach.
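Under the uniform-and-independent assumption stated in the abstract, a possibility degree for comparing two ordinal interval numbers can be computed by direct enumeration. A sketch, with the tie-handling convention (weight 1/2) being an assumption here rather than the paper's exact definition:

```python
# Sketch of a possibility degree for comparing two ordinal interval
# numbers, assuming each ranking ordinal is uniformly and independently
# distributed over its integer interval.  Ties are counted with weight
# 1/2; that convention is an assumption, not the paper's definition.

def possibility(a, b):
    """P(A ranks ahead of B) for ordinal intervals a=(al, au), b=(bl, bu)."""
    outcomes = [(i, j) for i in range(a[0], a[1] + 1)
                       for j in range(b[0], b[1] + 1)]
    wins = sum(1.0 if i < j else 0.5 if i == j else 0.0 for i, j in outcomes)
    return wins / len(outcomes)

print(possibility((1, 2), (3, 4)))  # disjoint, A always ahead -> 1.0
print(possibility((1, 3), (1, 3)))  # identical intervals      -> 0.5
```

Pairwise degrees like these are the entries of the collective expectation possibility degree matrix from which the ranking model is built.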

  13. Habitat suitability and ecological niche profile of major malaria vectors in Cameroon

    PubMed Central

    2009-01-01

    Background: Suitability of environmental conditions determines a species distribution in space and time. Understanding and modelling the ecological niche of mosquito disease vectors can, therefore, be a powerful predictor of the risk of exposure to the pathogens they transmit. In Africa, five anophelines are responsible for over 95% of total malaria transmission. However, detailed knowledge of the geographic distribution and ecological requirements of these species is to date still inadequate.

    Methods: Indoor-resting mosquitoes were sampled from 386 villages covering the full range of ecological settings available in Cameroon, Central Africa. Using a predictive species distribution modeling approach based only on presence records, habitat suitability maps were constructed for the five major malaria vectors Anopheles gambiae, Anopheles funestus, Anopheles arabiensis, Anopheles nili and Anopheles moucheti. The influence of 17 climatic, topographic, and land use variables on mosquito geographic distribution was assessed by multivariate regression and ordination techniques.

    Results: Twenty-four anopheline species were collected, of which 17 are known to transmit malaria in Africa. Ecological Niche Factor Analysis, Habitat Suitability modeling and Canonical Correspondence Analysis revealed marked differences among the five major malaria vector species, both in terms of ecological requirements and niche breadth. Eco-geographical variables (EGVs) related to human activity had the highest impact on habitat suitability for the five major malaria vectors, with areas of low population density being of marginal or unsuitable habitat quality. Sunlight exposure, rainfall, evapo-transpiration, relative humidity, and wind speed were among the most discriminative EGVs separating "forest" from "savanna" species.

    Conclusions: The distribution of major malaria vectors in Cameroon is strongly affected by the impact of humans on the environment, with variables related to proximity to human settings being among the best predictors of habitat suitability. The ecologically more tolerant species An. gambiae and An. funestus were recorded in a wide range of eco-climatic settings. The other three major vectors, An. arabiensis, An. moucheti, and An. nili, were more specialized. Ecological niche and species distribution modelling should help improve malaria vector control interventions by targeting places and times where the impact on vector populations and disease transmission can be optimized. PMID:20028559

  14. Habitat suitability and ecological niche profile of major malaria vectors in Cameroon.

    PubMed

    Ayala, Diego; Costantini, Carlo; Ose, Kenji; Kamdem, Guy C; Antonio-Nkondjio, Christophe; Agbor, Jean-Pierre; Awono-Ambene, Parfait; Fontenille, Didier; Simard, Frédéric

    2009-12-23

    Suitability of environmental conditions determines a species distribution in space and time. Understanding and modelling the ecological niche of mosquito disease vectors can, therefore, be a powerful predictor of the risk of exposure to the pathogens they transmit. In Africa, five anophelines are responsible for over 95% of total malaria transmission. However, detailed knowledge of the geographic distribution and ecological requirements of these species is to date still inadequate. Indoor-resting mosquitoes were sampled from 386 villages covering the full range of ecological settings available in Cameroon, Central Africa. Using a predictive species distribution modeling approach based only on presence records, habitat suitability maps were constructed for the five major malaria vectors Anopheles gambiae, Anopheles funestus, Anopheles arabiensis, Anopheles nili and Anopheles moucheti. The influence of 17 climatic, topographic, and land use variables on mosquito geographic distribution was assessed by multivariate regression and ordination techniques. Twenty-four anopheline species were collected, of which 17 are known to transmit malaria in Africa. Ecological Niche Factor Analysis, Habitat Suitability modeling and Canonical Correspondence Analysis revealed marked differences among the five major malaria vector species, both in terms of ecological requirements and niche breadth. Eco-geographical variables (EGVs) related to human activity had the highest impact on habitat suitability for the five major malaria vectors, with areas of low population density being of marginal or unsuitable habitat quality. Sunlight exposure, rainfall, evapo-transpiration, relative humidity, and wind speed were among the most discriminative EGVs separating "forest" from "savanna" species. 
The distribution of major malaria vectors in Cameroon is strongly affected by the impact of humans on the environment, with variables related to proximity to human settings being among the best predictors of habitat suitability. The ecologically more tolerant species An. gambiae and An. funestus were recorded in a wide range of eco-climatic settings. The other three major vectors, An. arabiensis, An. moucheti, and An. nili, were more specialized. Ecological niche and species distribution modelling should help improve malaria vector control interventions by targeting places and times where the impact on vector populations and disease transmission can be optimized.

  15. Statistical Optimality in Multipartite Ranking and Ordinal Regression.

    PubMed

    Uematsu, Kazuki; Lee, Yoonkyung

    2015-05-01

    Statistical optimality in multipartite ranking is investigated as an extension of bipartite ranking. We consider the optimality of ranking algorithms through minimization of the theoretical risk which combines pairwise ranking errors of ordinal categories with differential ranking costs. The extension shows that for a certain class of convex loss functions including exponential loss, the optimal ranking function can be represented as a ratio of weighted conditional probability of upper categories to lower categories, where the weights are given by the misranking costs. This result also bridges traditional ranking methods such as proportional odds model in statistics with various ranking algorithms in machine learning. Further, the analysis of multipartite ranking with different costs provides a new perspective on non-smooth list-wise ranking measures such as the discounted cumulative gain and preference learning. We illustrate our findings with simulation study and real data analysis.

  16. Discontinuous finite element method for vector radiative transfer

    NASA Astrophysics Data System (ADS)

    Wang, Cun-Hai; Yi, Hong-Liang; Tan, He-Ping

    2017-03-01

    The discontinuous finite element method (DFEM) is applied to solve vector radiative transfer in participating media. A derivation of the vector radiative transfer governing equations in discrete form is presented, in which the angular space is discretized by the discrete-ordinates approach with a local refinement modification, and the spatial domain is discretized into finite non-overlapping discontinuous elements. The elements in the whole solution domain are connected by modelling the boundary numerical flux between adjacent elements, which makes the DFEM numerically stable for solving radiative transfer equations. Several problems of vector radiative transfer are tested to verify the performance of the developed DFEM, including vector radiative transfer in a one-dimensional parallel slab containing a Mie/Rayleigh/strongly forward-scattering medium and in a two-dimensional square medium. The DFEM results agree very well with benchmark solutions in published references, showing that the DFEM developed in this paper is accurate and effective for solving vector radiative transfer problems.

  17. A Guided Tour of Mathematical Methods

    NASA Astrophysics Data System (ADS)

    Snieder, Roel

    2009-04-01

    1. Introduction; 2. Dimensional analysis; 3. Power series; 4. Spherical and cylindrical co-ordinates; 5. The gradient; 6. The divergence of a vector field; 7. The curl of a vector field; 8. The theorem of Gauss; 9. The theorem of Stokes; 10. The Laplacian; 11. Conservation laws; 12. Scale analysis; 13. Linear algebra; 14. The Dirac delta function; 15. Fourier analysis; 16. Analytic functions; 17. Complex integration; 18. Green's functions: principles; 19. Green's functions: examples; 20. Normal modes; 21. Potential theory; 22. Cartesian tensors; 23. Perturbation theory; 24. Asymptotic evaluation of integrals; 25. Variational calculus; 26. Epilogue, on power and knowledge; References.

  18. Importance of intersectoral co-ordination in the control of communicable diseases, with special reference to plague in Tanzania.

    PubMed

    Kilonzo, B S

    1994-07-01

    Human health, agriculture (including livestock), energy, education, wildlife, construction, forestry and trade sectors are inter-related, and their co-ordination is an important pre-requisite for successful control of most communicable diseases, including plague. A similar linkage between research, policy, training and extension activities in each sector is essential for any successful control strategy. Inadequate agricultural produce, people's lack of access to the available food, and ignorance of proper preparation and usage of available food materials are responsible for malnutrition, and malnourished people are very vulnerable to disease. Irrigation schemes facilitate breeding of various disease vectors and transmission of some communicable diseases. Forests are ecologically favourable for some disease vectors and reservoirs, such as tsetse flies and rodents, while deforestation leads to soil erosion, lack of rainfall and consequently reduced agricultural productivity, which may result in poor nutrition of the population. Wildlife and livestock serve as reservoirs and/or carriers of various zoonoses, including plague, trypanosomiasis and rabies. Lack of proper co-ordination of these sectors in communicable disease control programmes can result in serious and undesirable consequences. Indiscriminate killing of rodents in order to minimize food damage by these vermin forces their flea ectoparasites to seek alternative hosts, including man, a development which may result in transmission of plague from rodents to man. Similarly, avoidance of proper quarantine during plague epidemics, an undertaking usually aimed at maintaining economic and social links with places outside the affected focus, can result in the disease becoming widespread and consequently make any control strategy more difficult and expensive. (ABSTRACT TRUNCATED AT 250 WORDS)

  19. Interpretation for scales of measurement linking with abstract algebra.

    PubMed

    Sawamura, Jitsuki; Morishita, Shigeru; Ishigooka, Jun

    2014-01-01

    The Stevens classification of levels of measurement involves four types of scale: "Nominal", "Ordinal", "Interval" and "Ratio". This classification has been used widely in medical fields and has played an important role in the composition and interpretation of scales. With this classification, levels of measurement appear organized and validated. However, a group theory-like systematization beckons as an alternative because of its logical consistency and unexceptional applicability in the natural sciences, and it may offer great advantages in clinical medicine. According to this viewpoint, the Stevens classification is reformulated within an abstract algebra-like scheme: an 'Abelian modulo additive group' for the "Ordinal scale" accompanied by 'zero', an 'Abelian additive group' for the "Interval scale", and a 'field' for the "Ratio scale". Furthermore, a vector-like display arranges a mixture of schemes describing the assessment of patient states. With this vector-like notation, data mining and data-set combination are possible at a higher abstract structure level based upon a hierarchical-cluster form. Using simple examples, we show that operations acting on the corresponding mixed schemes of this display allow for a sophisticated means of classifying, updating, monitoring, and prognosis, from which better data mining/data usage and efficacy are expected.

  20. A Method for Optimal Load Dispatch of a Multi-zone Power System with Zonal Exchange Constraints

    NASA Astrophysics Data System (ADS)

    Hazarika, Durlav; Das, Ranjay

    2018-04-01

    This paper presents a method for economic generation scheduling of a multi-zone power system with inter-zonal operational constraints. For this purpose, generator rescheduling for a multi-area power system with inter-zonal operational constraints is represented as a two-step optimal generation scheduling problem. First, optimal generation scheduling is carried out for the zones having surplus or deficient generation, with proper spinning reserve, using the co-ordination equation. The power exchanges required for the deficit zones and the zones having no generation are estimated based on the load demand and generation of each zone. Incremental transmission loss formulas are formulated for the transmission lines participating in the power transfer among the zones. Using these incremental transmission loss expressions in the co-ordination equation, the optimal generation scheduling for the zonal exchange is determined. Simulation is carried out on the IEEE 118-bus test system to examine the applicability and validity of the method.
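The classical co-ordination equation underlying the first stage is the equal-incremental-cost condition. A lossless sketch with hypothetical unit data: with quadratic costs C_i(P) = a_i + b_i P + c_i P^2, each unit runs where its incremental cost b_i + 2 c_i P_i equals the system lambda, and lambda is chosen to meet the zone demand.

```python
# Sketch of the lossless co-ordination (equal incremental cost) equation.
# With quadratic costs, b_i + 2*c_i*P_i = lambda for every unit, so
# P_i = (lambda - b_i) / (2*c_i), and lambda follows from sum(P_i) = demand.
# The constant cost term a_i does not affect the dispatch.

def dispatch(units, demand):
    """Return (lambda, {unit: P}) for lossless economic dispatch."""
    inv = sum(1 / (2 * c) for _, b, c in units)
    lam = (demand + sum(b / (2 * c) for _, b, c in units)) / inv
    return lam, {name: (lam - b) / (2 * c) for name, b, c in units}

# Hypothetical units: (name, b [$/MWh], c [$/MWh^2]).
units = [("G1", 8.0, 0.02), ("G2", 10.0, 0.04)]
lam, P = dispatch(units, demand=300.0)
print(round(lam, 2), {k: round(v, 1) for k, v in P.items()})
```

The paper's second stage extends this condition with incremental transmission loss terms (penalty factors) for the tie lines carrying the zonal exchanges; those are omitted here.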

  21. Co-occurrence of viruses and mosquitoes at the vectors' optimal climate range: An underestimated risk to temperate regions?

    PubMed

    Blagrove, Marcus S C; Caminade, Cyril; Waldmann, Elisabeth; Sutton, Elizabeth R; Wardeh, Maya; Baylis, Matthew

    2017-06-01

    Mosquito-borne viruses have been estimated to cause over 100 million cases of human disease annually. Many methodologies have been developed to help identify areas most at risk from transmission of these viruses. However, generally, these methodologies focus predominantly on the effects of climate on either the vectors or the pathogens they spread, and do not consider the dynamic interaction between the optimal conditions for both vector and virus. Here, we use a new approach that considers the complex interplay between the optimal temperature for virus transmission, and the optimal climate for the mosquito vectors. Using published geolocated data we identified temperature and rainfall ranges in which a number of mosquito vectors have been observed to co-occur with West Nile virus, dengue virus or chikungunya virus. We then investigated whether the optimal climate for co-occurrence of vector and virus varies between "warmer" and "cooler" adapted vectors for the same virus. We found that different mosquito vectors co-occur with the same virus at different temperatures, despite significant overlap in vector temperature ranges. Specifically, we found that co-occurrence correlates with the optimal climatic conditions for the respective vector; cooler-adapted mosquitoes tend to co-occur with the same virus in cooler conditions than their warmer-adapted counterparts. We conclude that mosquitoes appear to be most able to transmit virus in the mosquitoes' optimal climate range, and hypothesise that this may be due to proportionally over-extended vector longevity, and other increased fitness attributes, within this optimal range. These results suggest that the threat posed by vector-competent mosquito species indigenous to temperate regions may have been underestimated, whilst the threat arising from invasive tropical vectors moving to cooler temperate regions may be overestimated.

  22. Ordinal preference elicitation methods in health economics and health services research: using discrete choice experiments and ranking methods.

    PubMed

    Ali, Shehzad; Ronaldson, Sarah

    2012-09-01

The predominant method of economic evaluation is cost-utility analysis, which uses cardinal preference elicitation methods such as the standard gamble and time trade-off. However, such an approach is not suitable for understanding trade-offs between process attributes, non-health outcomes and health outcomes in order to evaluate current practices, develop new programmes and predict demand for services and products. Ordinal preference elicitation methods, including discrete choice experiments and ranking methods, are therefore commonly used in health economics and health services research. Cardinal methods have been criticized on the grounds of cognitive complexity, difficulty of administration, contamination by risk and preference attitudes, and potential violation of underlying assumptions. Ordinal methods have gained popularity because of reduced cognitive burden, a lower degree of abstract reasoning, reduced measurement error, ease of administration and the ability to use both health and non-health outcomes. The underlying assumptions of ordinal methods may be violated when respondents use cognitive shortcuts, cannot comprehend the ordinal task or interpret attributes and levels, use 'irrational' choice behaviour, or refuse to trade off certain attributes. CURRENT USE AND GROWING AREAS: Ordinal methods are commonly used to evaluate preferences for attributes of health services, products, practices, interventions and policies and, more recently, to estimate utility weights. AREAS FOR ON-GOING RESEARCH: There is growing research on developing optimal designs, evaluating the rationalization process, using qualitative tools for developing ordinal methods, evaluating consistency with utility theory, appropriate statistical methods for analysis, generalizability of results, and comparing ordinal methods against each other and with cardinal measures.

  3. Detection of illegal transfer of videos over the Internet

    NASA Astrophysics Data System (ADS)

    Chaisorn, Lekha; Sainui, Janya; Manders, Corey

    2010-07-01

In this paper, a method for detecting infringements or modifications of a video in real time is proposed. The method first segments a video stream into shots, after which it extracts some reference frames as keyframes. This process is performed using a Singular Value Decomposition (SVD) technique developed in this work. Next, for each input video (represented by its keyframes), an ordinal-based signature and SIFT (Scale Invariant Feature Transform) descriptors are generated. The ordinal-based method employs a two-level bitmap indexing scheme to construct the index for each video signature: the first level clusters all input keyframes into k clusters, while the second level converts the ordinal-based signatures into bitmap vectors. The SIFT-based method, by contrast, uses the descriptors directly as the index. Given a suspect video (being streamed or transferred on the Internet), we generate its signature (ordinal and SIFT descriptors) and then compute the similarity between this signature and those in the database, based on the ordinal signature and the SIFT descriptors separately. For the similarity measure, besides the Euclidean distance, Boolean operators are also utilized during the matching process. We tested our system in several experiments on 50 videos (each about half an hour in duration) obtained from the TRECVID 2006 data set. For the experimental setup, we refer to the conditions of the TRECVID 2009 "Content-based copy detection" task, as well as the requirements issued in the MPEG call for proposals on a similar task. Initial results show that our framework is effective and robust. Compared with our previous work, which reduced the storage space and computation time of the ordinal-based method, introducing the SIFT features raised the overall accuracy to an F1 measure of about 96% (an improvement of about 8%).
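
    The ordinal-based signature can be illustrated with a minimal sketch (assuming each keyframe has been reduced to a small grid of mean block intensities; the two-level bitmap indexing and the SIFT branch are omitted):

```python
def ordinal_signature(block_means):
    """Rank-order signature: each block's mean intensity is replaced by
    its rank among all blocks. Two frames that differ in brightness but
    keep the same relative intensity ordering yield identical
    signatures, which makes the signature robust to luminance changes."""
    order = sorted(range(len(block_means)), key=lambda i: block_means[i])
    ranks = [0] * len(block_means)
    for rank, idx in enumerate(order):
        ranks[idx] = rank
    return ranks

def ordinal_distance(sig_a, sig_b):
    """L1 distance between two rank signatures (0 = identical ordering)."""
    return sum(abs(a - b) for a, b in zip(sig_a, sig_b))

# A frame and a brightness-shifted copy share the same signature:
frame = [52, 10, 31, 76]  # mean intensities of a 2x2 block grid
assert ordinal_signature(frame) == ordinal_signature([v + 40 for v in frame])
```

    This rank-based representation is what lets copy detection survive global brightness and contrast changes that would defeat raw intensity matching.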

  4. Order-constrained linear optimization.

    PubMed

    Tidwell, Joe W; Dougherty, Michael R; Chrabaszcz, Jeffrey S; Thomas, Rick P

    2017-11-01

    Despite the fact that data and theories in the social, behavioural, and health sciences are often represented on an ordinal scale, there has been relatively little emphasis on modelling ordinal properties. The most common analytic framework used in psychological science is the general linear model, whose variants include ANOVA, MANOVA, and ordinary linear regression. While these methods are designed to provide the best fit to the metric properties of the data, they are not designed to maximally model ordinal properties. In this paper, we develop an order-constrained linear least-squares (OCLO) optimization algorithm that maximizes the linear least-squares fit to the data conditional on maximizing the ordinal fit based on Kendall's τ. The algorithm builds on the maximum rank correlation estimator (Han, 1987, Journal of Econometrics, 35, 303) and the general monotone model (Dougherty & Thomas, 2012, Psychological Review, 119, 321). Analyses of simulated data indicate that when modelling data that adhere to the assumptions of ordinary least squares, OCLO shows minimal bias, little increase in variance, and almost no loss in out-of-sample predictive accuracy. In contrast, under conditions in which data include a small number of extreme scores (fat-tailed distributions), OCLO shows less bias and variance, and substantially better out-of-sample predictive accuracy, even when the outliers are removed. We show that the advantages of OCLO over ordinary least squares in predicting new observations hold across a variety of scenarios in which researchers must decide to retain or eliminate extreme scores when fitting data. © 2017 The British Psychological Society.
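
    The ordinal criterion that OCLO maximizes can be illustrated with a plain Kendall's τ computation (a simplified O(n²) sketch; the actual estimator searches for weight vectors that maximize this quantity before breaking ties by least squares):

```python
def kendall_tau(x, y):
    """Kendall's tau-a: (concordant - discordant) pairs divided by the
    total number of pairs. It depends only on the orderings of x and y,
    not on their metric values."""
    n = len(x)
    concordant = discordant = 0
    for i in range(n):
        for j in range(i + 1, n):
            s = (x[i] - x[j]) * (y[i] - y[j])
            if s > 0:
                concordant += 1
            elif s < 0:
                discordant += 1
    return (concordant - discordant) / (n * (n - 1) / 2)

# Perfectly monotone predictions score tau = 1 even when the metric
# (least-squares) fit is poor -- the ordinal property OCLO prioritizes:
observed = [1.0, 2.0, 3.0, 10.0]
predicted = [0.1, 0.2, 0.3, 0.4]
assert kendall_tau(observed, predicted) == 1.0
```

    Because τ is invariant to any monotone transformation of the predictions, an extreme score such as the 10.0 above cannot dominate the ordinal fit the way it dominates a squared-error fit.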

  5. CSOLNP: Numerical Optimization Engine for Solving Non-linearly Constrained Problems.

    PubMed

    Zahery, Mahsa; Maes, Hermine H; Neale, Michael C

    2017-08-01

    We introduce the optimizer CSOLNP, which is a C++ implementation of the R package RSOLNP (Ghalanos & Theussl, 2012, Rsolnp: General non-linear optimization using augmented Lagrange multiplier method. R package version, 1) alongside some improvements. CSOLNP solves non-linearly constrained optimization problems using a Sequential Quadratic Programming (SQP) algorithm. CSOLNP, NPSOL (a very popular implementation of SQP method in FORTRAN (Gill et al., 1986, User's guide for NPSOL (version 4.0): A Fortran package for nonlinear programming (No. SOL-86-2). Stanford, CA: Stanford University Systems Optimization Laboratory), and SLSQP (another SQP implementation available as part of the NLOPT collection (Johnson, 2014, The NLopt nonlinear-optimization package. Retrieved from http://ab-initio.mit.edu/nlopt)) are three optimizers available in OpenMx package. These optimizers are compared in terms of runtimes, final objective values, and memory consumption. A Monte Carlo analysis of the performance of the optimizers was performed on ordinal and continuous models with five variables and one or two factors. While the relative difference between the objective values is less than 0.5%, CSOLNP is in general faster than NPSOL and SLSQP for ordinal analysis. As for continuous data, none of the optimizers performs consistently faster than the others. In terms of memory usage, we used Valgrind's heap profiler tool, called Massif, on one-factor threshold models. CSOLNP and NPSOL consume the same amount of memory, while SLSQP uses 71 MB more memory than the other two optimizers.

  6. Estimated breeding values for canine hip dysplasia radiographic traits in a cohort of Australian German Shepherd dogs.

    PubMed

    Wilson, Bethany J; Nicholas, Frank W; James, John W; Wade, Claire M; Thomson, Peter C

    2013-01-01

Canine hip dysplasia (CHD) is a serious and common musculoskeletal disease of pedigree dogs and therefore represents both an important welfare concern and an imperative breeding priority. The typical heritability estimates for radiographic CHD traits suggest that the accuracy of breeding dog selection could be substantially improved by the use of estimated breeding values (EBVs) in place of selection based on phenotypes of individuals. The British Veterinary Association/Kennel Club scoring method is a complex measure composed of nine bilateral ordinal traits, intended to evaluate both early and late dysplastic changes. However, the ordinal nature of the traits may represent a technical challenge for calculation of EBVs using linear methods. The purpose of the current study was to calculate EBVs of British Veterinary Association/Kennel Club traits in the Australian population of German Shepherd Dogs, using linear (both as individual traits and a summed phenotype), binary and ordinal methods to determine the optimal method for EBV calculation. Ordinal EBVs correlated well with linear EBVs (r = 0.90-0.99) and somewhat well with EBVs for the sum of the individual traits (r = 0.58-0.92). Correlation of ordinal and binary EBVs varied widely (r = 0.24-0.99) depending on the trait and cut-point considered. The ordinal EBVs have increased accuracy (0.48-0.69) of selection compared with accuracies from individual phenotype-based selection (0.40-0.52). Despite the high correlations between linear and ordinal EBVs, the underlying relationship between EBVs calculated by the two methods was not always linear, leading us to suggest that ordinal models should be used wherever possible. As the population of German Shepherd Dogs which was studied was purportedly under selection for the traits studied, we examined the EBVs for evidence of a genetic trend in these traits and found substantial genetic improvement over time. This study suggests the use of ordinal EBVs could increase the rate of genetic improvement in this population.

  7. Development and validation of P-MODTRAN7 and P-MCScene, 1D and 3D polarimetric radiative transfer models

    NASA Astrophysics Data System (ADS)

    Hawes, Frederick T.; Berk, Alexander; Richtsmeier, Steven C.

    2016-05-01

    A validated, polarimetric 3-dimensional simulation capability, P-MCScene, is being developed by generalizing Spectral Sciences' Monte Carlo-based synthetic scene simulation model, MCScene, to include calculation of all 4 Stokes components. P-MCScene polarimetric optical databases will be generated by a new version (MODTRAN7) of the government-standard MODTRAN radiative transfer algorithm. The conversion of MODTRAN6 to a polarimetric model is being accomplished by (1) introducing polarimetric data, by (2) vectorizing the MODTRAN radiation calculations and by (3) integrating the newly revised and validated vector discrete ordinate model VDISORT3. Early results, presented here, demonstrate a clear pathway to the long-term goal of fully validated polarimetric models.

  8. A novel, kinetically stable, catalytically active, all-ferric, nitrite-bound complex of Paracoccus pantotrophus cytochrome cd1.

    PubMed Central

    Allen, James W A; Higham, Christopher W; Zajicek, Richard S; Watmough, Nicholas J; Ferguson, Stuart J

    2002-01-01

    The oxidized form of Paracoccus pantotrophus cytochrome cd(1) nitrite reductase, as isolated, has bis-histidinyl co-ordination of the c haem and His/Tyr co-ordination of the d(1) haem. On reduction, the haem co-ordinations change to His/Met and His/vacant respectively. If the latter form of the enzyme is reoxidized, a conformer is generated in which the ferric c haem is His/Met co-ordinated; this can revert to the 'as isolated' state of the enzyme over approx. 20 min at room temperature. However, addition of nitrite to the enzyme after a cycle of reduction and reoxidation produces a kinetically stable, all-ferric complex with nitrite bound to the d(1) haem and His/Met co-ordination of the c haem. This complex is catalytically active with the physiological electron donor protein pseudoazurin. The effective dissociation constant for nitrite is 2 mM. Evidence is presented that d(1) haem is optimized to bind nitrite, as opposed to other anions that are commonly good ligands to ferric haem. The all-ferric nitrite bound state of the enzyme could not be generated stoichiometrically by mixing nitrite with the 'as isolated' conformer of cytochrome cd(1) without redox cycling. PMID:12086580

  9. R programming for parameters estimation of geographically weighted ordinal logistic regression (GWOLR) model based on Newton Raphson

    NASA Astrophysics Data System (ADS)

    Zuhdi, Shaifudin; Saputro, Dewi Retno Sari

    2017-03-01

The GWOLR model is used to represent the relationship between a dependent variable whose categories lie on an ordinal scale and independent variables influenced by the geographical location of the observation site. Maximum likelihood estimation of the GWOLR model parameters leads to a system of nonlinear equations whose solution is hard to obtain analytically. Solving it amounts to finding the maximum of the likelihood, which is an optimization problem. The nonlinear system of equations is therefore optimized by numerical approximation, in this case the Newton-Raphson method. The purpose of this research is to construct the Newton-Raphson iteration algorithm and a program, written in R, to estimate the GWOLR model. The research shows that the R program can estimate the parameters of the GWOLR model by forming a syntax program with the command "while".
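
    The Newton-Raphson iteration at the core of the estimation can be sketched for a generic scalar score equation (a simplified stand-in for the full GWOLR likelihood; in R the analogous loop is driven by the `while` command mentioned above):

```python
def newton_raphson(score, hessian, theta0, tol=1e-10, max_iter=100):
    """Iterate theta <- theta - score(theta)/hessian(theta) until the
    score (the likelihood's first derivative) is numerically zero,
    i.e. until a stationary point of the log-likelihood is reached."""
    theta = theta0
    iterations = 0
    while abs(score(theta)) > tol and iterations < max_iter:
        theta -= score(theta) / hessian(theta)
        iterations += 1
    return theta

# Toy example: maximize l(theta) = -(theta - 3)^2,
# whose score is -2(theta - 3) and whose Hessian is -2.
theta_hat = newton_raphson(lambda t: -2 * (t - 3), lambda t: -2.0, theta0=0.0)
assert abs(theta_hat - 3.0) < 1e-8
```

    In the GWOLR setting the scalar `theta` becomes a parameter vector and the Hessian a matrix, but the `while`-driven update has the same shape.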

  10. A new vector radiative transfer model as a part of SCIATRAN 3.0 software package.

    NASA Astrophysics Data System (ADS)

    Rozanov, Alexei; Rozanov, Vladimir; Burrows, John P.

The SCIATRAN 3.0 package is the result of further development of the SCIATRAN 2.x software family and, like previous versions, comprises a radiative transfer model and a retrieval block. A major improvement over previous software versions was achieved by adding a vector mode to the radiative transfer model. Thus, the well-established Discrete Ordinate solver can now be run in vector mode to calculate the scattered solar radiation including polarization, i.e., to simulate all four components of the Stokes vector. As in the scalar version, the simulations can be performed for any viewing geometry typical of atmospheric observations in the UV-Vis-NIR spectral range (nadir, limb, off-axis, etc.) as well as for any observer position within or outside the Earth's atmosphere. Like its precursor, the new model is freely available for non-commercial use via the web page of the University of Bremen. This presentation gives a short description of the software package, especially of the new vector radiative transfer model, including remarks on its availability to the scientific community. Furthermore, comparisons with other vector models are shown and example problems are considered in which the polarization of the observed radiation must be accounted for to obtain high-quality results.

  11. Fast Quaternion Attitude Estimation from Two Vector Measurements

    NASA Technical Reports Server (NTRS)

    Markley, F. Landis; Bauer, Frank H. (Technical Monitor)

    2001-01-01

    Many spacecraft attitude determination methods use exactly two vector measurements. The two vectors are typically the unit vector to the Sun and the Earth's magnetic field vector for coarse "sun-mag" attitude determination or unit vectors to two stars tracked by two star trackers for fine attitude determination. Existing closed-form attitude estimates based on Wahba's optimality criterion for two arbitrarily weighted observations are somewhat slow to evaluate. This paper presents two new fast quaternion attitude estimation algorithms using two vector observations, one optimal and one suboptimal. The suboptimal method gives the same estimate as the TRIAD algorithm, at reduced computational cost. Simulations show that the TRIAD estimate is almost as accurate as the optimal estimate in representative test scenarios.
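
    The TRIAD construction mentioned above can be sketched in pure Python (a minimal unweighted version anchored on the first observation; the paper's fast quaternion formulations and optimal weighting are not reproduced here):

```python
import math

def _cross(a, b):
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

def _norm(v):
    n = math.sqrt(sum(c * c for c in v))
    return [c / n for c in v]

def triad(b1, b2, r1, r2):
    """Attitude matrix A (with A r = b) from two vector observations.

    b1, b2: observed directions in the body frame; r1, r2: the same
    directions in the reference frame. The orthonormal triad is anchored
    on the first (more trusted) observation, so errors in b2 only affect
    the rotation about b1."""
    tb = [_norm(b1), _norm(_cross(b1, b2))]
    tb.append(_cross(tb[0], tb[1]))
    tr = [_norm(r1), _norm(_cross(r1, r2))]
    tr.append(_cross(tr[0], tr[1]))
    # A = sum_k tb_k tr_k^T (the triad vectors act as matrix columns).
    return [[sum(tb[k][i] * tr[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

# Identical observations in both frames give the identity attitude:
A = triad([1, 0, 0], [0, 1, 0], [1, 0, 0], [0, 1, 0])
```

    In the "sun-mag" case, b1 would be the measured Sun direction and b2 the measured magnetic field, with r1, r2 the corresponding model vectors.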

  12. The mean-square error optimal linear discriminant function and its application to incomplete data vectors

    NASA Technical Reports Server (NTRS)

    Walker, H. F.

    1979-01-01

In many pattern recognition problems, data vectors are classified although one or more of the data vector elements are missing. This problem occurs in remote sensing when the ground is obscured by clouds. Optimal linear discrimination procedures for classifying incomplete data vectors are discussed.

  13. Optimal four-impulse rendezvous between coplanar elliptical orbits

    NASA Astrophysics Data System (ADS)

    Wang, JianXia; Baoyin, HeXi; Li, JunFeng; Sun, FuChun

    2011-04-01

Rendezvous in circular or near-circular orbits has been investigated in great detail, while rendezvous in elliptical orbits of arbitrary eccentricity is not sufficiently explored. Among the various optimization methods proposed for fuel-optimal orbital rendezvous, Lawden's primer vector theory is favored by many researchers for its clear physical concept and simplicity of solution. Prussing applied primer vector optimization theory to minimum-fuel, multiple-impulse, time-fixed orbital rendezvous in a near-circular orbit with great success. Extending Prussing's work, this paper employs primer vector theory to study trajectory optimization problems for elliptical-orbit rendezvous of arbitrary eccentricity. Based on linearized equations of relative motion on an elliptical reference orbit (the T-H equations), primer vector theory is used to treat time-fixed multiple-impulse optimal rendezvous between two coplanar, coaxial elliptical orbits with arbitrarily large eccentricity. A parameter adjustment method is developed for the primer vector to satisfy Lawden's necessary conditions for the optimal solution. Finally, the optimal multiple-impulse rendezvous solution, including the times, directions and magnitudes of the impulses, is obtained by solving the two-point boundary value problem. The rendezvous error of the linearized equations is also analyzed. The simulation results confirm the analysis: the rendezvous error is small for small eccentricities and large for higher eccentricities. For better rendezvous accuracy in high-eccentricity orbits, a multiplier penalty function method combined with the simplex search method is used for local optimization. The simplex search method is sensitive to the initial values of the optimization variables, but the simulation results show that, when initialized with the primer vector solution, the local optimization algorithm improves the rendezvous accuracy effectively with fast convergence, because the results obtained by primer vector theory are already very close to the actual optimal solution. If the initial values are taken randomly, it is difficult to converge to the optimal solution.

  14. Vector-model-supported approach in prostate plan optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Eva Sau Fan; Department of Health Technology and Informatics, The Hong Kong Polytechnic University; Wu, Vincent Wing Cheung

Lengthy time consumed in traditional manual plan optimization can limit the use of step-and-shoot intensity-modulated radiotherapy/volumetric-modulated radiotherapy (S&S IMRT/VMAT). A vector model that retrieves similar radiotherapy cases was developed with respect to the structural and physiologic features extracted from the Digital Imaging and Communications in Medicine (DICOM) files. Planning parameters were retrieved from the selected similar reference case and applied to the test case to bypass the gradual adjustment of planning parameters; the planning time spent on the traditional trial-and-error manual optimization approach at the beginning of optimization could therefore be reduced. Each S&S IMRT/VMAT prostate reference database comprised 100 previously treated cases. Prostate cases were replanned with both traditional optimization and vector-model-supported optimization based on the oncologists' clinical dose prescriptions. A total of 360 plans, consisting of 30 cases of S&S IMRT, 30 cases of 1-arc VMAT, and 30 cases of 2-arc VMAT plans, including first optimization and final optimization with and without vector-model-supported optimization, were compared using the 2-sided t-test and paired Wilcoxon signed rank test, with a significance level of 0.05 and a false discovery rate of less than 0.05. For S&S IMRT, 1-arc VMAT, and 2-arc VMAT prostate plans, there was a significant reduction, by almost 50%, in planning time and iterations with vector-model-supported optimization. When the first optimization plans were compared, 2-arc VMAT prostate plans had better plan quality than 1-arc VMAT plans. The volume receiving 35 Gy in the femoral head for 2-arc VMAT plans was reduced with vector-model-supported optimization compared with the traditional manual optimization approach; otherwise, the quality of plans from both approaches was comparable. Vector-model-supported optimization was shown to offer a much shortened planning time and iteration number without compromising plan quality.
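
    The retrieval step of such a vector model can be sketched as a nearest-neighbour search over feature vectors (a hypothetical illustration: the feature names, values and database layout here are invented for the sketch, not taken from the study):

```python
import math

def most_similar_case(query, database):
    """Return the reference case whose feature vector is closest to the
    query by Euclidean distance; its stored planning parameters are then
    used to initialize the optimization of the new case."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(database, key=lambda case: dist(case["features"], query["features"]))

# Hypothetical features, e.g. (target volume in cc, OAR overlap fraction):
database = [
    {"id": "case_A", "features": (60.0, 0.10), "params": "..."},
    {"id": "case_B", "features": (95.0, 0.25), "params": "..."},
]
query = {"features": (90.0, 0.22)}
assert most_similar_case(query, database)["id"] == "case_B"
```

    In practice the features would be normalized so that no single anatomic measure dominates the distance, but the retrieval logic stays this simple.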

  15. A Power Transformers Fault Diagnosis Model Based on Three DGA Ratios and PSO Optimization SVM

    NASA Astrophysics Data System (ADS)

    Ma, Hongzhe; Zhang, Wei; Wu, Rongrong; Yang, Chunyan

    2018-03-01

In order to make up for the shortcomings of existing transformer fault diagnosis methods in dissolved gas-in-oil analysis (DGA) feature selection and parameter optimization, a transformer fault diagnosis model based on three DGA ratios and a particle swarm optimization (PSO)-optimized support vector machine (SVM) is proposed. The SVM is extended to a nonlinear, multi-class classifier, PSO is used to optimize the multi-class SVM model, and transformer fault diagnosis is conducted under the cross-validation principle. The fault diagnosis results show that the average accuracy of the proposed method is better than that of the standard SVM and the genetic-algorithm-optimized SVM, demonstrating that the proposed method can effectively improve the accuracy of transformer fault diagnosis.

  16. Interactive optimization approach for optimal impulsive rendezvous using primer vector and evolutionary algorithms

    NASA Astrophysics Data System (ADS)

    Luo, Ya-Zhong; Zhang, Jin; Li, Hai-yang; Tang, Guo-Jin

    2010-08-01

In this paper, a new optimization approach combining primer vector theory and evolutionary algorithms for fuel-optimal non-linear impulsive rendezvous is proposed. The optimization approach is designed to seek the optimal number of impulses as well as the optimal impulse vectors. In this approach, the addition of a midcourse impulse is determined by an interactive method, i.e. by observing the primer-magnitude time history. An improved version of simulated annealing is employed to optimize the rendezvous trajectory with a fixed number of impulses. This interactive approach is evaluated on three test cases: coplanar circle-to-circle rendezvous, same-circle rendezvous and non-coplanar rendezvous. The results show that the interactive approach is effective and efficient in fuel-optimal non-linear rendezvous design, and that it can guarantee solutions that satisfy Lawden's necessary optimality conditions.

  17. Optimal control strategies using vaccination and fogging in a dengue fever transmission model

    NASA Astrophysics Data System (ADS)

    Fitria, Irma; Winarni, Pancahayani, Sigit; Subchan

    2017-08-01

This paper discusses a model and an optimal control problem of dengue fever transmission. We classified the model into human and vector (mosquito) population classes. The human population comprises three subclasses: susceptible, infected, and resistant. The vector population is divided into wiggler (larval), susceptible, and infected vector classes. Thus, the model consists of six dynamic equations. To minimize the number of dengue fever cases, we designed two optimal control variables in the model: fogging and vaccination. The objective function of this optimal control problem is to minimize the number of infected humans, the number of vectors, and the cost of the control efforts. By applying fogging optimally, the number of vectors can be minimized. We considered vaccination as a control variable because it is one of the efforts being developed to reduce the spread of dengue fever. We used the Pontryagin Minimum Principle to solve the optimal control problem. Furthermore, numerical simulation results are given to show the effect of the optimal control strategies in minimizing the epidemic of dengue fever.

  18. Parallel-vector computation for linear structural analysis and non-linear unconstrained optimization problems

    NASA Technical Reports Server (NTRS)

    Nguyen, D. T.; Al-Nasra, M.; Zhang, Y.; Baddourah, M. A.; Agarwal, T. K.; Storaasli, O. O.; Carmona, E. A.

    1991-01-01

    Several parallel-vector computational improvements to the unconstrained optimization procedure are described which speed up the structural analysis-synthesis process. A fast parallel-vector Choleski-based equation solver, pvsolve, is incorporated into the well-known SAP-4 general-purpose finite-element code. The new code, denoted PV-SAP, is tested for static structural analysis. Initial results on a four processor CRAY 2 show that using pvsolve reduces the equation solution time by a factor of 14-16 over the original SAP-4 code. In addition, parallel-vector procedures for the Golden Block Search technique and the BFGS method are developed and tested for nonlinear unconstrained optimization. A parallel version of an iterative solver and the pvsolve direct solver are incorporated into the BFGS method. Preliminary results on nonlinear unconstrained optimization test problems, using pvsolve in the analysis, show excellent parallel-vector performance indicating that these parallel-vector algorithms can be used in a new generation of finite-element based structural design/analysis-synthesis codes.
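
    The Choleski-based solve underlying pvsolve can be sketched in plain serial form (the parallel-vector loop restructuring that is the paper's contribution is omitted; this is only the textbook factor-and-substitute algorithm for a symmetric positive-definite system):

```python
import math

def cholesky_solve(A, b):
    """Solve A x = b for symmetric positive-definite A via A = L L^T,
    followed by forward and back substitution."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            L[i][j] = (math.sqrt(A[i][i] - s) if i == j
                       else (A[i][j] - s) / L[j][j])
    # Forward substitution: L y = b
    y = [0.0] * n
    for i in range(n):
        y[i] = (b[i] - sum(L[i][k] * y[k] for k in range(i))) / L[i][i]
    # Back substitution: L^T x = y
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (y[i] - sum(L[k][i] * x[k] for k in range(i + 1, n))) / L[i][i]
    return x

# Small 2x2 stiffness-like system: solution is x = [1.75, 1.5].
x = cholesky_solve([[4.0, 2.0], [2.0, 3.0]], [10.0, 8.0])
```

    In a finite-element code the same factorization is applied to the banded global stiffness matrix; the speedups reported above come from vectorizing and parallelizing the inner `s = ...` dot products.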

  19. Analytical Approach to the Fuel Optimal Impulsive Transfer Problem Using Primer Vector Method

    NASA Astrophysics Data System (ADS)

    Fitrianingsih, E.; Armellin, R.

    2018-04-01

    One of the objectives of mission design is selecting an optimum orbital transfer which often translated as a transfer which requires minimum propellant consumption. In order to assure the selected trajectory meets the requirement, the optimality of transfer should first be analyzed either by directly calculating the ΔV of the candidate trajectories and select the one that gives a minimum value or by evaluating the trajectory according to certain criteria of optimality. The second method is performed by analyzing the profile of the modulus of the thrust direction vector which is known as primer vector. Both methods come with their own advantages and disadvantages. However, it is possible to use the primer vector method to verify if the result from the direct method is truly optimal or if the ΔV can be reduced further by implementing correction maneuver to the reference trajectory. In addition to its capability to evaluate the transfer optimality without the need to calculate the transfer ΔV, primer vector also enables us to identify the time and position to apply correction maneuver in order to optimize a non-optimum transfer. This paper will present the analytical approach to the fuel optimal impulsive transfer using primer vector method. The validity of the method is confirmed by comparing the result to those from the numerical method. The investigation of the optimality of direct transfer is used to give an example of the application of the method. The case under study is the prograde elliptic transfers from Earth to Mars. The study enables us to identify the optimality of all the possible transfers.

  20. Image correlation based method for the analysis of collagen fibers patterns

    NASA Astrophysics Data System (ADS)

    Rosa, Ramon G. T.; Pratavieira, Sebastião.; Kurachi, Cristina

    2015-06-01

Collagen fibers are among the most important structural proteins in skin, being responsible for its strength and flexibility. It is known that their properties, such as fiber density, ordination and mean diameter, can be affected by several skin conditions, which makes these properties good parameters for the diagnosis and evaluation of skin aging, cancer, healing, and other conditions. There is, however, a need for methods capable of quantitatively analyzing the organization patterns of these fibers. To address this need, we developed a method based on the autocorrelation function of the images that allows the construction of vector field plots of the fiber directions and does not require any kind of curve fitting or optimization. The analyzed images were obtained through second harmonic generation imaging microscopy. This paper presents a concise review of the autocorrelation function and some of its applications to image processing, details the developed method, and reports the results obtained through the analysis of histopathological slides of Landrace porcine skin. The method has high accuracy in determining fiber direction and presents high performance. We look forward to performing further studies tracking different skin conditions over time.
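
    The idea of reading fiber direction from the autocorrelation function can be sketched as follows (a toy version using normalized correlation at unit pixel shifts; the actual method builds full vector-field plots from the 2D autocorrelation):

```python
def autocorr_at_shift(img, dy, dx):
    """Normalized correlation between the image and a copy of itself
    shifted by (dy, dx), over the overlapping region."""
    h, w = len(img), len(img[0])
    num = den_a = den_b = 0.0
    for y in range(h):
        for x in range(w):
            y2, x2 = y + dy, x + dx
            if 0 <= y2 < h and 0 <= x2 < w:
                num += img[y][x] * img[y2][x2]
                den_a += img[y][x] ** 2
                den_b += img[y2][x2] ** 2
    return num / (den_a * den_b) ** 0.5

def dominant_direction(img):
    """Direction (dy, dx) along which the autocorrelation decays
    slowest, taken as the local fiber direction."""
    shifts = [(0, 1), (1, 0), (1, 1), (1, -1)]
    return max(shifts, key=lambda s: autocorr_at_shift(img, *s))

# Horizontal stripes: intensity is constant along x, so (0, 1) wins.
stripes = [[1, 1, 1, 1] if y % 2 == 0 else [0, 0, 0, 0] for y in range(4)]
assert dominant_direction(stripes) == (0, 1)
```

    Because an image of parallel fibers correlates strongly with copies of itself shifted along the fiber axis, the autocorrelation is elongated in that direction, and no curve fitting is needed to extract it.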

  1. A Hybrid Neuro-Fuzzy Model For Integrating Large Earth-Science Datasets

    NASA Astrophysics Data System (ADS)

    Porwal, A.; Carranza, J.; Hale, M.

    2004-12-01

    A GIS-based hybrid neuro-fuzzy approach to integration of large earth-science datasets for mineral prospectivity mapping is described. It implements a Takagi-Sugeno type fuzzy inference system in the framework of a four-layered feed-forward adaptive neural network. Each unique combination of the datasets is considered a feature vector whose components are derived by knowledge-based ordinal encoding of the constituent datasets. A subset of feature vectors with a known output target vector (i.e., unique conditions known to be associated with either a mineralized or a barren location) is used for the training of an adaptive neuro-fuzzy inference system. Training involves iterative adjustment of parameters of the adaptive neuro-fuzzy inference system using a hybrid learning procedure for mapping each training vector to its output target vector with minimum sum of squared error. The trained adaptive neuro-fuzzy inference system is used to process all feature vectors. The output for each feature vector is a value that indicates the extent to which a feature vector belongs to the mineralized class or the barren class. These values are used to generate a prospectivity map. The procedure is demonstrated by an application to regional-scale base metal prospectivity mapping in a study area located in the Aravalli metallogenic province (western India). A comparison of the hybrid neuro-fuzzy approach with pure knowledge-driven fuzzy and pure data-driven neural network approaches indicates that the former offers a superior method for integrating large earth-science datasets for predictive spatial mathematical modelling.
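
    The knowledge-based ordinal encoding of the input datasets can be sketched as follows (a hypothetical illustration; the layer names and scores are invented here, not taken from the paper):

```python
# Each evidential map layer is encoded by expert-assigned ordinal scores,
# so that every unique combination of map classes at a location becomes
# a feature vector fed to the neuro-fuzzy inference system.
LITHOLOGY_SCORE = {"favourable": 3, "moderate": 2, "barren": 1}
ANOMALY_SCORE = {"strong": 3, "weak": 2, "background": 1}

def feature_vector(lithology, anomaly):
    """Ordinal feature vector for one unique-conditions cell."""
    return (LITHOLOGY_SCORE[lithology], ANOMALY_SCORE[anomaly])

# A cell with favourable lithology and a weak geochemical anomaly:
assert feature_vector("favourable", "weak") == (3, 2)
```

    The ordering of the scores, not their absolute values, carries the expert knowledge; the adaptive network then learns how to weight and combine these ordinal components.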

  2. IEP (Individualized Educational Program) Co-operation between Optimal Support of Students with Special Needs

    NASA Astrophysics Data System (ADS)

    Ogoshi, Yasuhiro; Nakai, Akio; Ogoshi, Sakiko; Mitsuhashi, Yoshinori; Araki, Chikahiro

A key aspect of the optimal support of students with special needs is co-ordination and co-operation between school, home and specialized agencies. Communication between these entities is of prime importance and can be facilitated through the use of a support system implementing ICF guidelines, as outlined here. This communication system can be considered preventative rather than allopathic support.

  3. Optimal control of malaria: combining vector interventions and drug therapies.

    PubMed

    Khamis, Doran; El Mouden, Claire; Kura, Klodeta; Bonsall, Michael B

    2018-04-24

    The sterile insect technique and transgenic equivalents are considered promising tools for controlling vector-borne disease in an age of increasing insecticide and drug-resistance. Combining vector interventions with artemisinin-based therapies may achieve the twin goals of suppressing malaria endemicity while managing artemisinin resistance. While the cost-effectiveness of these controls has been investigated independently, their combined usage has not been dynamically optimized in response to ecological and epidemiological processes. An optimal control framework based on coupled models of mosquito population dynamics and malaria epidemiology is used to investigate the cost-effectiveness of combining vector control with drug therapies in homogeneous environments with and without vector migration. The costs of endemic malaria are weighed against the costs of administering artemisinin therapies and releasing modified mosquitoes using various cost structures. Larval density dependence is shown to reduce the cost-effectiveness of conventional sterile insect releases compared with transgenic mosquitoes with a late-acting lethal gene. Using drug treatments can reduce the critical vector control release ratio necessary to cause disease fadeout. Combining vector control and drug therapies is the most effective and efficient use of resources, and using optimized implementation strategies can substantially reduce costs.

  4. Vector-model-supported optimization in volumetric-modulated arc stereotactic radiotherapy planning for brain metastasis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Eva Sau Fan; Department of Health Technology and Informatics, The Hong Kong Polytechnic University; Wu, Vincent Wing Cheung

    Long planning time in volumetric-modulated arc stereotactic radiotherapy (VMA-SRT) cases can limit its clinical efficiency and use. A vector model could retrieve previously successful radiotherapy cases that share various common anatomic features with the current case. The present study aimed to develop a vector model that could reduce planning time by applying the optimization parameters from those retrieved reference cases. Thirty-six VMA-SRT cases of brain metastasis (gender: male [n = 23], female [n = 13]; age range, 32 to 81 years) were collected and used as a reference database. Another 10 VMA-SRT cases were planned with both conventional optimization and vector-model-supported optimization, following the oncologists' clinical dose prescriptions. Planning time and plan quality measures were compared using the 2-sided paired Wilcoxon signed rank test with a significance level of 0.05 and a positive false discovery rate (pFDR) of less than 0.05. With vector-model-supported optimization, there was a significant reduction in the median planning time, a 40% reduction from 3.7 to 2.2 hours (p = 0.002, pFDR = 0.032), and in the number of iterations, a 30% reduction from 8.5 to 6.0 (p = 0.006, pFDR = 0.047). The quality of plans from both approaches was comparable. From these preliminary results, vector-model-supported optimization can expedite the optimization of VMA-SRT for brain metastasis while maintaining plan quality.
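The retrieval step described above amounts to a nearest-neighbour search over anatomic feature vectors. As a minimal sketch (the study's actual features and similarity measure are not specified here), cosine similarity can rank the reference database against the current case:

```python
import numpy as np

def retrieve_reference_cases(query, database, k=3):
    """Return indices of the k reference cases whose feature vectors
    are most similar (cosine similarity) to the query case."""
    db = np.asarray(database, dtype=float)
    q = np.asarray(query, dtype=float)
    sims = db @ q / (np.linalg.norm(db, axis=1) * np.linalg.norm(q))
    return np.argsort(sims)[::-1][:k]   # best matches first
```

The optimization parameters of the top-ranked cases would then seed the new plan's optimization.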

  5. A linearized theory method of constrained optimization for supersonic cruise wing design

    NASA Technical Reports Server (NTRS)

    Miller, D. S.; Carlson, H. W.; Middleton, W. D.

    1976-01-01

    A linearized theory wing design and optimization procedure which allows physical realism and practical considerations to be imposed as constraints on the optimum (least drag due to lift) solution is discussed and examples of application are presented. In addition to the usual constraints on lift and pitching moment, constraints are imposed on wing surface ordinates and wing upper surface pressure levels and gradients. The design procedure also provides the capability of including directly in the optimization process the effects of other aircraft components such as a fuselage, canards, and nacelles.

  6. Vectorization of transport and diffusion computations on the CDC Cyber 205

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abu-Shumays, I.K.

    1986-01-01

    The development and testing of alternative numerical methods and computational algorithms specifically designed for the vectorization of transport and diffusion computations on a Control Data Corporation (CDC) Cyber 205 vector computer are described. Two solution methods for the discrete ordinates approximation to the transport equation are summarized and compared. Factors of 4 to 7 reduction in run times for certain large transport problems were achieved on a Cyber 205 as compared with run times on a CDC-7600. The solution of tridiagonal systems of linear equations, central to several efficient numerical methods for multidimensional diffusion computations and essential for fluid flow and other physics and engineering problems, is also dealt with. Among the methods tested, a combined odd-even cyclic reduction and modified Cholesky factorization algorithm for solving linear symmetric positive definite tridiagonal systems is found to be the most effective for these systems on a Cyber 205. For large tridiagonal systems, computation with this algorithm is an order of magnitude faster on a Cyber 205 than computation with the best algorithm for tridiagonal systems on a CDC-7600.
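Odd-even cyclic reduction suits vector hardware because each level eliminates half of the unknowns with independent, slice-wise arithmetic. A serial numpy sketch of the idea, assuming n = 2^k - 1 unknowns (the Cyber 205 implementation and the modified Cholesky combination from the abstract are not reproduced):

```python
import numpy as np

def cyclic_reduction_solve(a, b, c, d):
    """Solve a tridiagonal system with sub-diagonal a, diagonal b,
    super-diagonal c, and right-hand side d, for n = 2**k - 1 unknowns.
    a[0] and c[-1] are treated as zero."""
    n = len(b)
    if n == 1:
        return d / b
    i = np.arange(1, n, 2)               # keep odd-indexed unknowns
    alpha = -a[i] / b[i - 1]             # eliminate left even neighbours
    gamma = -c[i] / b[i + 1]             # eliminate right even neighbours
    br = b[i] + alpha * c[i - 1] + gamma * a[i + 1]
    ar = alpha * a[i - 1]
    cr = gamma * c[i + 1]
    dr = d[i] + alpha * d[i - 1] + gamma * d[i + 1]
    xr = cyclic_reduction_solve(ar, br, cr, dr)   # half-size system
    x = np.empty(n)
    x[1::2] = xr
    # back-substitute the even-indexed unknowns (zero-padded neighbours)
    xpad = np.zeros(n + 2)
    xpad[2:n:2] = xr
    j = np.arange(0, n, 2)
    x[j] = (d[j] - a[j] * xpad[j] - c[j] * xpad[j + 2]) / b[j]
    return x
```

Every level's eliminations are data-independent across i, which is exactly the long-vector structure the Cyber 205 exploited.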

  7. Harmonic reduction of Direct Torque Control of six-phase induction motor.

    PubMed

    Taheri, A

    2016-07-01

    In this paper, a new switching method in Direct Torque Control (DTC) of a six-phase induction machine for the reduction of current harmonics is introduced. Selecting a single suitable vector in each sampling period is the ordinary method in the ST-DTC drive of a six-phase induction machine. The six-phase induction machine has 64 voltage vectors, which are divided into four groups. In the proposed DTC method, the suitable voltage vectors are selected from two vector groups. By a suitable selection of two vectors in each sampling period, the harmonic amplitude is decreased further in comparison with that of the ST-DTC drive. The harmonic losses are greatly reduced and the electromechanical energy loss is decreased, while the switching losses show only a small increase. A spectrum analysis of the phase current in the standard and new switching-table DTC of the six-phase induction machine, with determination of the amplitude of each harmonic, is presented in this paper. The proposed method requires less sampling time than the ordinary method. Harmonic analyses of the current at low and high speed show the performance of the presented method. The simplicity of the proposed method and its implementation without any extra hardware are further advantages. The simulation and experimental results show the superiority of the proposed method. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.

  8. Some Remarks on GMRES for Transport Theory

    NASA Technical Reports Server (NTRS)

    Patton, Bruce W.; Holloway, James Paul

    2003-01-01

    We review some work on the application of GMRES to the solution of the discrete ordinates transport equation in one-dimension. We note that GMRES can be applied directly to the angular flux vector, or it can be applied to only a vector of flux moments as needed to compute the scattering operator of the transport equation. In the former case we illustrate both the delights and defects of ILU right-preconditioners for problems with anisotropic scatter and for problems with upscatter. When working with flux moments we note that GMRES can be used as an accelerator for any existing transport code whose solver is based on a stationary fixed-point iteration, including transport sweeps and DSA transport sweeps. We also provide some numerical illustrations of this idea. We finally show how space can be traded for speed by taking multiple transport sweeps per GMRES iteration. Key Words: transport equation, GMRES, Krylov subspace
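GMRES builds an orthonormal Krylov basis with Arnoldi iterations and picks the iterate minimizing the residual via a small least-squares solve. A minimal full-memory numpy sketch of that structure (no restarts, preconditioning, or transport-specific operators, unlike the ILU and sweep variants discussed above):

```python
import numpy as np

def gmres_solve(A, b, m=30, tol=1e-10):
    """Minimal full-memory GMRES with zero initial guess: Arnoldi with
    modified Gram-Schmidt, then least-squares in the Krylov subspace."""
    n = len(b)
    Q = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    beta = np.linalg.norm(b)
    Q[:, 0] = b / beta
    for j in range(m):
        v = A @ Q[:, j]
        for i in range(j + 1):           # modified Gram-Schmidt
            H[i, j] = Q[:, i] @ v
            v -= H[i, j] * Q[:, i]
        H[j + 1, j] = np.linalg.norm(v)
        # minimize || beta*e1 - H y || over the current subspace
        e1 = np.zeros(j + 2); e1[0] = beta
        y, *_ = np.linalg.lstsq(H[:j + 2, :j + 1], e1, rcond=None)
        if H[j + 1, j] < tol:            # lucky breakdown: exact solve
            return Q[:, :j + 1] @ y
        Q[:, j + 1] = v / H[j + 1, j]
    return Q[:, :m] @ y
```

Applied to flux moments, the matrix-vector product `A @ q` would be replaced by one transport sweep, which is how GMRES accelerates an existing fixed-point solver.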

  9. Effective Clipart Image Vectorization through Direct Optimization of Bezigons.

    PubMed

    Yang, Ming; Chao, Hongyang; Zhang, Chi; Guo, Jun; Yuan, Lu; Sun, Jian

    2016-02-01

    Bezigons, i.e., closed paths composed of Bézier curves, have been widely employed to describe shapes in image vectorization results. However, most existing vectorization techniques infer the bezigons by simply approximating an intermediate vector representation (such as polygons). Consequently, the resultant bezigons are sometimes imperfect due to accumulated errors, fitting ambiguities, and a lack of curve priors, especially for low-resolution images. In this paper, we describe a novel method for vectorizing clipart images. In contrast to previous methods, we directly optimize the bezigons rather than using other intermediate representations; therefore, the resultant bezigons are not only of higher fidelity compared with the original raster image but also more reasonable, as if they had been traced by a proficient expert. To enable such optimization, we have overcome several challenges and have devised a differentiable data energy as well as several curve-based prior terms. To improve the efficiency of the optimization, we also take advantage of the local control property of bezigons and adopt an overlapped piecewise optimization strategy. The experimental results show that our method outperforms both the current state-of-the-art method and commonly used commercial software in terms of bezigon quality.

  10. GOSAT CO2 retrieval results using TANSO-CAI aerosol information over East Asia

    NASA Astrophysics Data System (ADS)

    KIM, M.; Kim, W.; Jung, Y.; Lee, S.; Kim, J.; Lee, H.; Boesch, H.; Goo, T. Y.

    2015-12-01

    In satellite remote sensing of CO2, incorrect aerosol information can induce large errors, as previous studies have suggested. Many factors, such as aerosol type, the wavelength dependency of AOD, and aerosol polarization effects, have been the main error sources. Because of these aerosol effects, a large number of retrieved data are screened out in quality control, or retrieval errors tend to increase if they are not screened out, especially in East Asia where aerosol concentrations are fairly high. To reduce these aerosol-induced errors, a CO2 retrieval algorithm using simultaneous TANSO-CAI aerosol information is developed. This algorithm adopts AOD and aerosol type information as a priori information from the CAI aerosol retrieval algorithm. The CO2 retrieval algorithm is based on the optimal estimation method and VLIDORT, a vector discrete ordinate radiative transfer model. The CO2 algorithm, developed with various state vectors to find accurate CO2 concentrations, shows reasonable results when compared with other datasets. This study concentrates on the validation of the retrieved results against ground-based TCCON measurements in East Asia and on comparison with previous retrievals from ACOS, NIES, and UoL. Although the retrieved CO2 concentration is lower than previous results by a few ppm, it shows a similar trend and a high correlation with them. Retrieved data and TCCON measurements are compared at three stations (Tsukuba, Saga, and Anmyeondo) in East Asia, with collocation criteria of ±2° in latitude/longitude and ±1 hour of GOSAT overpass time. The compared results also show a similar trend with good correlation. Based on the TCCON comparison results, a bias-correction equation is calculated and applied to the East Asia data.
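The collocation criteria above (±2° in latitude/longitude, ±1 hour of overpass time) amount to a simple box filter around each station. A toy sketch, assuming observations are `(lat, lon, time_offset_hours)` tuples (a hypothetical data layout, not the study's file format):

```python
def collocate(sat_obs, station, dlat=2.0, dlon=2.0, dt_hours=1.0):
    """Keep satellite soundings within +/-dlat, +/-dlon degrees and
    +/-dt_hours of a ground-station measurement."""
    lat0, lon0, t0 = station
    return [o for o in sat_obs
            if abs(o[0] - lat0) <= dlat
            and abs(o[1] - lon0) <= dlon
            and abs(o[2] - t0) <= dt_hours]
```

The retained pairs are what feed the trend comparison and the bias-correction fit.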

  11. Adaptive track scheduling to optimize concurrency and vectorization in GeantV

    DOE PAGES

    Apostolakis, J.; Bandieramonte, M.; Bitzes, G.; ...

    2015-05-22

    The GeantV project is focused on the R&D of new particle transport techniques to maximize parallelism on multiple levels, profiting from the use of both SIMD instructions and co-processors for the CPU-intensive calculations specific to this type of application. In our approach, vectors of tracks belonging to multiple events and matching different locality criteria must be gathered and dispatched to algorithms having vector signatures. While the transport propagates tracks and changes their individual states, data locality becomes harder to maintain. The scheduling policy has to be changed to maintain efficient vectors while keeping an optimal level of concurrency. The model has complex dynamics requiring tuning of the thresholds to switch between the normal regime and special modes, i.e., prioritizing events to allow flushing memory, adding new events to the transport pipeline to boost locality, dynamically adjusting the particle vector size, or switching from vector to single-track mode when vectorization causes only overhead. This work requires a comprehensive study to optimize these parameters and make the behaviour of the scheduler self-adapting; we present its initial results here.
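The core gathering step can be caricatured as basketizing tracks by a locality key and flushing a basket to the vector pipeline once it is full. This is a toy sketch of the idea only, not GeantV code; the key (here a geometry volume) and the fixed basket size are illustrative assumptions:

```python
from collections import defaultdict

def basketize(tracks, basket_size):
    """Group (track_id, volume) pairs by volume; dispatch a basket to
    the vector pipeline as soon as it reaches basket_size tracks."""
    baskets = defaultdict(list)
    dispatched = []
    for track_id, volume in tracks:
        baskets[volume].append(track_id)
        if len(baskets[volume]) == basket_size:
            dispatched.append((volume, baskets.pop(volume)))
    return dispatched, dict(baskets)   # full baskets, leftover partials
```

The scheduler's real difficulty, as the abstract notes, is tuning when partial baskets should be flushed anyway rather than waiting and starving the pipeline.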

  12. Virtual head rotation reveals a process of route reconstruction from human vestibular signals

    PubMed Central

    Day, Brian L; Fitzpatrick, Richard C

    2005-01-01

    The vestibular organs can feed perceptual processes that build a picture of our route as we move about in the world. However, raw vestibular signals do not define the path taken because, during travel, the head can undergo accelerations unrelated to the route and also be orientated in any direction to vary the signal. This study investigated the computational process by which the brain transforms raw vestibular signals for the purpose of route reconstruction. We electrically stimulated the vestibular nerves of human subjects to evoke a virtual head rotation fixed in skull co-ordinates and measure its perceptual effect. The virtual head rotation caused subjects to perceive an illusory whole-body rotation that was a cyclic function of head-pitch angle. They perceived whole-body yaw rotation in one direction with the head pitched forwards, the opposite direction with the head pitched backwards, and no rotation with the head in an intermediate position. A model based on vector operations and the anatomy and firing properties of semicircular canals precisely predicted these perceptions. In effect, a neural process computes the vector dot product between the craniocentric vestibular vector of head rotation and the gravitational unit vector. This computation yields the signal of body rotation in the horizontal plane that feeds our perception of the route travelled. PMID:16002439
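The dot-product computation the authors identify can be sketched directly: a virtual rotation vector fixed in skull coordinates, projected onto the gravitational unit vector, yields a perceived yaw that varies cyclically with head pitch. A minimal numpy sketch (the axis conventions below are an assumption, not taken from the paper):

```python
import numpy as np

def perceived_yaw(omega_skull, pitch):
    """Project a skull-fixed virtual rotation vector onto the
    gravitational vertical to get the perceived whole-body yaw rate."""
    omega = np.asarray(omega_skull, dtype=float)
    c, s = np.cos(pitch), np.sin(pitch)
    # rotate the skull-fixed vector into space coordinates
    # (pitch about the interaural y-axis)
    R = np.array([[c, 0.0, s],
                  [0.0, 1.0, 0.0],
                  [-s, 0.0, c]])
    g_hat = np.array([0.0, 0.0, 1.0])   # gravitational unit vector
    return (R @ omega) @ g_hat
```

With the virtual rotation along the naso-occipital axis, the perceived yaw is zero at intermediate pitch and reverses sign between forward and backward pitch, matching the illusion reported.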

  13. Multiprocessing MCNP on an IBM RS/6000 cluster

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McKinney, G.W.; West, J.T.

    1993-01-01

    The advent of high-performance computer systems has brought to maturity programming concepts like vectorization, multiprocessing, and multitasking. While there are many schools of thought as to the most significant factor in obtaining order-of-magnitude increases in performance, such speedup can only be achieved by integrating the computer system and application code. Vectorization leads to faster manipulation of arrays by overlapping instruction CPU cycles. Discrete ordinates codes, which require the solving of large matrices, have proved to be major benefactors of vectorization. Monte Carlo transport, on the other hand, typically contains numerous logic statements and requires extensive redevelopment to benefit from vectorization. Multiprocessing and multitasking provide additional CPU cycles via multiple processors. Such systems are generally designed with either common memory access (multitasking) or distributed memory access. In both cases, the theoretical speedup, as a function of the number of processors (P) and the fraction of task time that multiprocesses (f), can be formulated using Amdahl's law: S(f,P) = 1/[(1 - f) + f/P]. However, for most applications this theoretical limit cannot be achieved, due to additional terms not included in Amdahl's law. Monte Carlo transport is a natural candidate for multiprocessing, since the particle tracks are generally independent and the precision of the result increases as the square root of the number of particles tracked.
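Amdahl's law as stated above is a one-liner, which makes the limit it imposes easy to check numerically:

```python
def amdahl_speedup(f, P):
    """Theoretical speedup for parallel fraction f on P processors:
    S(f, P) = 1 / ((1 - f) + f / P)."""
    return 1.0 / ((1.0 - f) + f / P)
```

For example, a task that is 90% parallelizable gains less than a factor of 5 on 8 processors, and no processor count can push it past a factor of 10.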

  14. Machine learning techniques applied to the determination of road suitability for the transportation of dangerous substances.

    PubMed

    Matías, J M; Taboada, J; Ordóñez, C; Nieto, P G

    2007-08-17

    This article describes a methodology to model the degree of remedial action required to make short stretches of a roadway suitable for dangerous goods transport (DGT), particularly pollutant substances, using different variables associated with the characteristics of each segment. Thirty-one factors determining the impact of an accident on a particular stretch of road were identified and subdivided into two major groups: accident probability factors and accident severity factors. Given the number of factors determining the state of a particular road segment, the only viable statistical methods for implementing the model were machine learning techniques, such as multilayer perceptron networks (MLPs), classification trees (CARTs) and support vector machines (SVMs). The results produced by these techniques on a test sample were more favourable than those produced by traditional discriminant analysis, irrespective of whether dimensionality reduction techniques were applied. The best results were obtained using SVMs specifically adapted to ordinal data. This technique takes advantage of the ordinal information contained in the data without penalising the computational load. Furthermore, the technique permits the estimation of the utility function that is latent in expert knowledge.

  15. On the role of cost-sensitive learning in multi-class brain-computer interfaces.

    PubMed

    Devlaminck, Dieter; Waegeman, Willem; Wyns, Bart; Otte, Georges; Santens, Patrick

    2010-06-01

    Brain-computer interfaces (BCIs) present an alternative way of communication for people with severe disabilities. One of the shortcomings in current BCI systems, recently put forward in the fourth BCI competition, is the asynchronous detection of motor imagery versus resting state. We investigated this extension to the three-class case, in which the resting state is considered virtually lying between two motor classes, resulting in a large penalty when one motor task is misclassified into the other motor class. We particularly focus on the behavior of different machine-learning techniques and on the role of multi-class cost-sensitive learning in such a context. To this end, four different kernel methods are empirically compared, namely pairwise multi-class support vector machines (SVMs), two cost-sensitive multi-class SVMs and kernel-based ordinal regression. The experimental results illustrate that ordinal regression performs better than the other three approaches when a cost-sensitive performance measure such as the mean-squared error is considered. By contrast, multi-class cost-sensitive learning enables us to control the number of large errors made between two motor tasks.

  16. Explicit time integration of finite element models on a vectorized, concurrent computer with shared memory

    NASA Technical Reports Server (NTRS)

    Gilbertsen, Noreen D.; Belytschko, Ted

    1990-01-01

    The implementation of a nonlinear explicit program on a vectorized, concurrent computer with shared memory is described and studied. The conflict between vectorization and concurrency is described and some guidelines are given for optimal block sizes. Several example problems are summarized to illustrate the types of speed-ups which can be achieved by reprogramming as compared to compiler optimization.

  17. Research on intrusion detection based on Kohonen network and support vector machine

    NASA Astrophysics Data System (ADS)

    Shuai, Chunyan; Yang, Hengcheng; Gong, Zeweiyi

    2018-05-01

    Support vector machines applied directly to network intrusion detection systems suffer from low detection accuracy and long detection times. Optimizing the SVM parameters can greatly improve the detection accuracy, but the long optimization time makes this impractical for high-speed networks. A feature-selection method based on a Kohonen neural network is therefore proposed to reduce the time needed to optimize the support vector machine parameters. First, the weights of the KDD99 network intrusion data are calculated by the Kohonen network and features are selected by weight. Then, after feature selection is completed, a genetic algorithm (GA) and the grid-search method are used to find appropriate parameters, and the data are classified by support vector machines. Comparative experiments show that feature selection reduces the parameter-optimization time while having little influence on classification accuracy. The experiments suggest that support vector machines can be used in network intrusion detection systems and can reduce the missing rate.
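The grid-search half of the parameter optimization is simple to sketch: evaluate a scoring function (e.g. cross-validated accuracy) at every point of a (C, gamma) grid and keep the best pair. The GA half is omitted, and the scoring function here is a stand-in, not an actual SVM evaluation:

```python
from itertools import product

def grid_search(score, c_grid, gamma_grid):
    """Exhaustive grid search: return the (C, gamma) pair with the
    highest score. `score` would be cross-validated SVM accuracy."""
    return max(product(c_grid, gamma_grid), key=lambda p: score(*p))
```

Because every grid point costs a full train-and-validate cycle, shrinking the feature set first (as the Kohonen step does) directly shrinks this search's runtime.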

  18. Identifying ideal brow vector position: empirical analysis of three brow archetypes.

    PubMed

    Hamamoto, Ashley A; Liu, Tiffany W; Wong, Brian J

    2013-02-01

    Surgical browlifts counteract the effects of aging, correct ptosis, and optimize forehead aesthetics. While surgeons have control over brow shape, the metrics defining ideal brow shape are subjective. This study aims to empirically determine whether three expert brow design strategies are aesthetically equivalent by using expert focus group analysis and relating these findings to brow surgery. Comprehensive literature search identified three dominant brow design methods (Westmore, Lamas and Anastasia) that are heavily cited, referenced or internationally recognized in either medical literature or by the lay media. Using their respective guidelines, brow shape was modified for 10 synthetic female faces, yielding 30 images. A focus group of 50 professional makeup artists ranked the three images for each of the 10 faces to generate ordinal attractiveness scores. The contemporary methods employed by Anastasia and Lamas produce a brow arch more lateral than Westmore's classic method. Although the more laterally located brow arch is considered the current trend in facial aesthetics, this style was not empirically supported. No single method was consistently rated most or least attractive by the focus group, and no significant difference in attractiveness score for the different methods was observed (p = 0.2454). Although each method of brow placement has been promoted as the "best" approach, no single brow design method achieved statistical significance in optimizing attractiveness. Each can be used effectively as a guide in designing eyebrow shape during browlift procedures, making it possible to use the three methods interchangeably. Thieme Medical Publishers 333 Seventh Avenue, New York, NY 10001, USA.

  19. High-performance computing — an overview

    NASA Astrophysics Data System (ADS)

    Marksteiner, Peter

    1996-08-01

    An overview of high-performance computing (HPC) is given. Different types of computer architectures used in HPC are discussed: vector supercomputers, high-performance RISC processors, various parallel computers like symmetric multiprocessors, workstation clusters, massively parallel processors. Software tools and programming techniques used in HPC are reviewed: vectorizing compilers, optimization and vector tuning, optimization for RISC processors; parallel programming techniques like shared-memory parallelism, message passing and data parallelism; and numerical libraries.

  20. Modeling of Mean-VaR portfolio optimization by risk tolerance when the utility function is quadratic

    NASA Astrophysics Data System (ADS)

    Sukono, Sidi, Pramono; Bon, Abdul Talib bin; Supian, Sudradjat

    2017-03-01

    The problem in investing in financial assets is to choose a portfolio weighting that maximizes expected return while minimizing risk. This paper discusses the modeling of Mean-VaR portfolio optimization with risk tolerance when the utility function is quadratic. It is assumed that the asset returns follow a certain distribution and that the risk of the portfolio is measured using the Value-at-Risk (VaR). The optimization of the Mean-VaR portfolio model is carried out using a matrix-algebra approach, the Lagrange multiplier method, and the Kuhn-Tucker conditions. The result of the modeling is an equation for the weighting vector that depends on the mean return vector of the assets, the identity vector, the covariance matrix of asset returns, and the risk-tolerance factor. As a numerical illustration, five stocks traded on the Indonesian stock market are analyzed. Based on the return data of these five stocks, the weight-composition vector and the efficient-frontier graph of the portfolio are obtained. The weighting vector and the efficient-frontier graph can serve as a guide for investors in making investment decisions.
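The Lagrange-multiplier derivation yields a closed-form weighting vector. As a sketch, assume the quadratic-utility objective maximize tau*mu'w - (1/2)*w'Sigma*w subject to sum(w) = 1 (a standard formulation; the paper's exact Mean-VaR objective may differ). Setting the gradient to zero and applying the budget constraint gives:

```python
import numpy as np

def mean_var_weights(mu, cov, tau):
    """Closed-form weights for: max tau*mu'w - 0.5*w'Sigma*w, sum(w)=1.
    mu: mean return vector, cov: covariance matrix, tau: risk tolerance."""
    inv = np.linalg.inv(cov)
    e = np.ones(len(mu))
    # w = tau*Sigma^-1 mu + lambda-correction enforcing e'w = 1
    return tau * inv @ mu + ((1 - tau * e @ inv @ mu) / (e @ inv @ e)) * inv @ e
```

With tau = 0 this collapses to the global minimum-variance portfolio Sigma^-1 e / (e' Sigma^-1 e); sweeping tau traces out the efficient frontier mentioned in the abstract.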

  1. Optimized AAV rh.10 Vectors That Partially Evade Neutralizing Antibodies during Hepatic Gene Transfer

    PubMed Central

    Selot, Ruchita; Arumugam, Sathyathithan; Mary, Bertin; Cheemadan, Sabna; Jayandharan, Giridhara R.

    2017-01-01

    Of the 12 common serotypes used for gene delivery applications, the Adeno-associated virus (AAV)rh.10 serotype has shown sustained hepatic transduction and has the lowest seropositivity in humans. We have evaluated whether further modifications to AAVrh.10 at its phosphodegron-like regions or predicted immunogenic epitopes could improve its hepatic gene transfer and immune evasion potential. Mutant AAVrh.10 vectors were generated by site-directed mutagenesis of the predicted targets. These mutant vectors were first tested for their transduction efficiency in HeLa and HEK293T cells. The optimal vector was further evaluated for its cellular uptake, entry, and intracellular trafficking by quantitative PCR and time-lapse confocal microscopy. To evaluate their potential during hepatic gene therapy, C57BL/6 mice were administered wild-type or optimal mutant AAVrh.10, and the luciferase transgene expression was documented by serial bioluminescence imaging at 14, 30, 45, and 72 days post-gene transfer. Their hepatic transduction was further verified by a quantitative PCR analysis of AAV copy number in the liver tissue. The optimal AAVrh.10 vector was further evaluated for its immune escape potential in animals pre-immunized with human intravenous immunoglobulin. Our results demonstrate that a modified AAVrh.10 S671A vector had enhanced cellular entry (3.6 fold) and migrated rapidly to the perinuclear region (1 vs. >2 h for wild-type vectors) in vitro, which further translates to a modest increase in hepatic gene transfer efficiency in vivo. More importantly, the mutant AAVrh.10 vector was able to partially evade neutralizing antibodies (~27–64 fold) in pre-immunized animals. The development of an AAV vector system that can escape the circulating neutralizing antibodies in the host will substantially widen the scope of gene therapy applications in humans. PMID:28769791

  2. Some Problems of Rocket-Space Vehicles' Characteristics co- ordination

    NASA Astrophysics Data System (ADS)

    Sergienko, Alexander A.

    2002-01-01

    of the XX century suffered a reverse. The designers of the United States' firms and enterprises of the aviation and rocket-space industry (Boeing, Rocketdyne, Lockheed Martin, McDonnell Douglas, Rockwell, etc.) and NASA (Marshall Space Flight Center, Johnson Space Center, Langley Research Center, Lewis Research Center, and others) could not correctly co-ordinate the characteristics of the propulsion system and the space vehicle to elaborate the "Single-Stage-To-Orbit" reusable vehicle (SSTO) as an integral whole system that would be able to inject a payload into orbit and return to Earth. Jet nozzle design, as well as the choice of propulsion system characteristics ensuring high ballistic efficiency, is considered in the present report. The efficiency criteria for the optimization of engine and launch system parameters are discussed. New methods for choosing the optimal parameters of the nozzle block to satisfy the flight objective are suggested. A family of SSTO vehicles with a payload mass from 5 to 20 tons and an initial weight under 800 tons is considered.

  3. Vector processing efficiency of plasma MHD codes by use of the FACOM 230-75 APU

    NASA Astrophysics Data System (ADS)

    Matsuura, T.; Tanaka, Y.; Naraoka, K.; Takizuka, T.; Tsunematsu, T.; Tokuda, S.; Azumi, M.; Kurita, G.; Takeda, T.

    1982-06-01

    In the framework of pipelined vector architecture, the efficiency of vector processing is assessed with respect to plasma MHD codes in nuclear fusion research. By using a vector processor, the FACOM 230-75 APU, the limit of the enhancement factor due to parallelism of current vector machines is examined for three numerical codes based on a fluid model. Reasonable speed-up factors of approximately 6, 6, and 4 times faster than the highly optimized scalar version are obtained for ERATO (linear stability code), AEOLUS-R1 (nonlinear stability code), and APOLLO (1-1/2D transport code), respectively. Problems of the pipelined vector processors are discussed from the viewpoint of restructuring, optimization, and choice of algorithms. In conclusion, the important concept of "concurrency within pipelined parallelism" is emphasized.

  4. Prediction of two month modified Rankin Scale with an ordinal prediction model in patients with aneurysmal subarachnoid haemorrhage

    PubMed Central

    2010-01-01

    Background Aneurysmal subarachnoid haemorrhage (aSAH) is a devastating event with a frequently disabling outcome. Our aim was to develop a prognostic model to predict an ordinal clinical outcome at two months in patients with aSAH. Methods We studied patients enrolled in the International Subarachnoid Aneurysm Trial (ISAT), a randomized multicentre trial to compare coiling and clipping in aSAH patients. Several models were explored to estimate a patient's outcome according to the modified Rankin Scale (mRS) at two months after aSAH. Our final model was validated internally with bootstrapping techniques. Results The study population comprised 2,128 patients, of whom 159 patients died within 2 months (8%). Multivariable proportional odds analysis identified World Federation of Neurosurgical Societies (WFNS) grade as the most important predictor, followed by age, sex, lumen size of the aneurysm, Fisher grade, vasospasm on angiography, and treatment modality. The model discriminated moderately between those with poor and good mRS scores (c statistic = 0.65), with minor optimism according to bootstrap re-sampling (optimism-corrected c statistic = 0.64). Conclusion We presented a calibrated and internally validated ordinal prognostic model to predict two-month mRS in aSAH patients who survived the early stage up to a treatment decision. Although generalizability of the model is limited due to the selected population in which it was developed, this model could eventually be used to support clinical decision making after external validation. Trial Registration International Standard Randomised Controlled Trial, Number ISRCTN49866681 PMID:20920243
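The proportional odds model used in the analysis maps a linear predictor and K-1 ordered cutpoints to probabilities over the K ordinal outcome categories via differences of cumulative logistic probabilities. A generic numpy sketch (the cutpoints and predictor values below are illustrative, not the fitted ISAT model):

```python
import numpy as np

def prop_odds_probs(lin_pred, cutpoints):
    """Proportional-odds model: P(Y <= k) = logistic(theta_k - eta).
    Returns the probability of each of len(cutpoints)+1 categories."""
    cum = 1.0 / (1.0 + np.exp(-(np.asarray(cutpoints) - lin_pred)))
    cum = np.concatenate(([0.0], cum, [1.0]))   # pad with P<=(-inf), P<=(+inf)
    return np.diff(cum)                          # per-category probabilities
```

The "proportional" assumption is visible here: a single linear predictor shifts all cumulative probabilities together, which is what lets one model cover the whole ordinal mRS scale.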

  5. Hypergraph-Based Combinatorial Optimization of Matrix-Vector Multiplication

    ERIC Educational Resources Information Center

    Wolf, Michael Maclean

    2009-01-01

    Combinatorial scientific computing plays an important enabling role in computational science, particularly in high performance scientific computing. In this thesis, we will describe our work on optimizing matrix-vector multiplication using combinatorial techniques. Our research has focused on two different problems in combinatorial scientific…

  6. A Hamiltonian approach to the planar optimization of mid-course corrections

    NASA Astrophysics Data System (ADS)

    Iorfida, E.; Palmer, P. L.; Roberts, M.

    2016-04-01

    Lawden's primer vector theory gives a set of necessary conditions that characterize the optimality of a transfer orbit, defined according to the possibility of adding mid-course corrections. In this paper a novel approach is proposed where, through a polar coordinates transformation, the primer vector components decouple. Furthermore, the case when transfer, departure and arrival orbits are coplanar is analyzed using a Hamiltonian approach. This procedure leads to approximate analytic solutions for the in-plane components of the primer vector. Moreover, the solution for the circular transfer case is proven to be Hill's solution. The novel procedure reduces the mathematical and computational complexity of the original case study. It is shown that the primer vector is independent of the semi-major axis of the transfer orbit. The case with a fixed transfer trajectory and variable initial and final thrust impulses is studied. The resulting optimality maps are presented and analyzed; they express the likelihood that a given set of trajectories is optimal. Finally, the requirements that a set of departure and arrival orbits must satisfy in order to share the same primer vector profile are presented.

  7. Fault Detection of Bearing Systems through EEMD and Optimization Algorithm

    PubMed Central

    Lee, Dong-Han; Ahn, Jong-Hyo; Koh, Bong-Hwan

    2017-01-01

    This study proposes a fault detection and diagnosis method for bearing systems using ensemble empirical mode decomposition (EEMD)-based feature extraction, in conjunction with particle swarm optimization (PSO), principal component analysis (PCA), and Isomap. First, a mathematical model is assumed to generate vibration signals from damaged bearing components, such as the inner-race, outer-race, and rolling elements. The process of decomposing vibration signals into intrinsic mode functions (IMFs) and extracting statistical features is introduced to develop a damage-sensitive parameter vector. Finally, PCA and the Isomap algorithm are used to classify and visualize this parameter vector, to separate damage characteristics from healthy bearing components. Moreover, the PSO-based optimization algorithm improves the classification performance by selecting proper weightings for the parameter vector, to maximize the visualization effect of separating and grouping of parameter vectors in three-dimensional space. PMID:29143772
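
    The damage-sensitive parameter vector above is built from statistical features of each IMF. An illustrative sketch of such features — the paper's exact feature list is not specified here; RMS, skewness, and kurtosis are common choices in bearing diagnostics:

    ```python
    import math

    def imf_features(x):
        """Statistical features often extracted from an intrinsic mode
        function: RMS, skewness, and kurtosis (illustrative subset only)."""
        n = len(x)
        mean = sum(x) / n
        std = math.sqrt(sum((v - mean) ** 2 for v in x) / n)
        rms = math.sqrt(sum(v * v for v in x) / n)
        skew = sum((v - mean) ** 3 for v in x) / (n * std ** 3)
        kurt = sum((v - mean) ** 4 for v in x) / (n * std ** 4)
        return {"rms": rms, "skewness": skew, "kurtosis": kurt}
    ```

    Kurtosis in particular is widely used for rolling-element faults because impulsive defect signatures produce heavy-tailed amplitude distributions.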

  8. Multiprocessing MCNP on an IBM RS/6000 cluster

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McKinney, G.W.; West, J.T.

    1993-01-01

    The advent of high-performance computer systems has brought to maturity programming concepts like vectorization, multiprocessing, and multitasking. While there are many schools of thought as to the most significant factor in obtaining order-of-magnitude increases in performance, such speedup can only be achieved by integrating the computer system and application code. Vectorization leads to faster manipulation of arrays by overlapping instruction CPU cycles. Discrete ordinates codes, which require the solving of large matrices, have proved to be major beneficiaries of vectorization. Monte Carlo transport, on the other hand, typically contains numerous logic statements and requires extensive redevelopment to benefit from vectorization. Multiprocessing and multitasking provide additional CPU cycles via multiple processors. Such systems are generally designed with either common memory access (multitasking) or distributed memory access. In both cases, theoretical speedup, as a function of the number of processors P and the fraction f of task time that multiprocesses, can be formulated using Amdahl's law: S(f, P) = 1/(1 - f + f/P). However, for most applications, this theoretical limit cannot be achieved because of additional terms (e.g., multitasking overhead, memory overlap, etc.) that are not included in Amdahl's law. Monte Carlo transport is a natural candidate for multiprocessing because the particle tracks are generally independent, and the precision of the result increases as the square root of the number of particles tracked.
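
    Amdahl's law as quoted above is easy to check numerically; a one-function sketch:

    ```python
    def amdahl_speedup(f, p):
        """Theoretical speedup S(f, P) = 1 / (1 - f + f/P) for a task of
        which a fraction f runs in parallel on P processors (Amdahl's law)."""
        return 1.0 / ((1.0 - f) + f / p)
    ```

    The limiting cases match the formula: a fully parallel task (f = 1) speeds up by exactly P, while a fully serial one (f = 0) gains nothing no matter how many processors are added.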

  9. Multiprocessing MCNP on an IBM RS/6000 cluster

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McKinney, G.W.; West, J.T.

    1993-03-01

    The advent of high-performance computer systems has brought to maturity programming concepts like vectorization, multiprocessing, and multitasking. While there are many schools of thought as to the most significant factor in obtaining order-of-magnitude increases in performance, such speedup can only be achieved by integrating the computer system and application code. Vectorization leads to faster manipulation of arrays by overlapping instruction CPU cycles. Discrete ordinates codes, which require the solving of large matrices, have proved to be major beneficiaries of vectorization. Monte Carlo transport, on the other hand, typically contains numerous logic statements and requires extensive redevelopment to benefit from vectorization. Multiprocessing and multitasking provide additional CPU cycles via multiple processors. Such systems are generally designed with either common memory access (multitasking) or distributed memory access. In both cases, theoretical speedup, as a function of the number of processors (P) and the fraction of task time that multiprocesses (f), can be formulated using Amdahl's law: S(f, P) = 1/(1 - f + f/P). However, for most applications this theoretical limit cannot be achieved, due to additional terms not included in Amdahl's law. Monte Carlo transport is a natural candidate for multiprocessing, since the particle tracks are generally independent and the precision of the result increases as the square root of the number of particles tracked.

  10. Application of optimal control theory to the design of the NASA/JPL 70-meter antenna servos

    NASA Technical Reports Server (NTRS)

    Alvarez, L. S.; Nickerson, J.

    1989-01-01

    The application of Linear Quadratic Gaussian (LQG) techniques to the design of the 70-m axis servos is described. Linear quadratic optimal control and Kalman filter theory are reviewed, and model development and verification are discussed. Families of optimal controller and Kalman filter gain vectors were generated by varying weight parameters. Performance specifications were used to select final gain vectors.
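
    The abstract above describes generating families of optimal gains by varying weight parameters. As an illustration only — not the 70-m antenna model, which is continuous-time and multivariable — a scalar discrete-time LQR gain obtained by iterating the Riccati recursion shows the basic trade-off: a heavier control weight r yields a smaller gain, while both choices still stabilize the loop.

    ```python
    def dlqr_scalar(a, b, q, r, iters=500):
        """Scalar discrete-time LQR sketch: iterate the Riccati recursion
        P <- q + a^2 P - (a P b)^2 / (r + b^2 P) to a fixed point, then
        return the gain K = b P a / (r + b^2 P)."""
        p = q
        for _ in range(iters):
            p = q + a * a * p - (a * p * b) ** 2 / (r + b * b * p)
        return b * p * a / (r + b * b * p)
    ```

    For an unstable plant a = 1.1, b = 1 with q = 1, raising r from 1 to 10 roughly halves the gain; the closed-loop pole a − bK stays inside the unit circle in both cases.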

  11. Support vector machine firefly algorithm based optimization of lens system.

    PubMed

    Shamshirband, Shahaboddin; Petković, Dalibor; Pavlović, Nenad T; Ch, Sudheer; Altameem, Torki A; Gani, Abdullah

    2015-01-01

    Lens system design is an important factor in image quality. The main aspect of the lens system design methodology is the optimization procedure. Since optimization is a complex, nonlinear task, soft computing optimization algorithms can be used. There are many tools that can be employed to measure optical performance, but the spot diagram is the most useful. The spot diagram gives an indication of the image of a point object. In this paper, the spot size radius is considered an optimization criterion. An intelligent soft computing scheme, support vector machines (SVMs) coupled with the firefly algorithm (FFA), is implemented. The performance of the proposed estimators is confirmed by simulation results. The results of the proposed SVM-FFA model are compared with support vector regression (SVR), artificial neural network, and genetic programming methods. The results show that the SVM-FFA model performs more accurately than the other methodologies. Therefore, SVM-FFA can be used as an efficient soft computing technique in the optimization of lens system designs.
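
    The firefly algorithm used above is a population-based metaheuristic. A minimal generic sketch — not the paper's SVM-FFA coupling, and with hypothetical tuning constants: each firefly moves toward every brighter (lower-cost) firefly with attractiveness beta0*exp(-gamma*r^2), plus a shrinking random step.

    ```python
    import math
    import random

    def firefly_minimize(cost, dim, n=20, iters=150, beta0=1.0, gamma=1.0,
                         alpha=0.2, seed=0, lo=-2.0, hi=2.0):
        """Minimal firefly-algorithm sketch for unconstrained minimization."""
        rng = random.Random(seed)
        xs = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
        for _ in range(iters):
            costs = [cost(x) for x in xs]
            for i in range(n):
                for j in range(n):
                    if costs[j] < costs[i]:  # j is brighter: move i toward j
                        r2 = sum((a - b) ** 2 for a, b in zip(xs[i], xs[j]))
                        beta = beta0 * math.exp(-gamma * r2)
                        xs[i] = [a + beta * (b - a) + alpha * (rng.random() - 0.5)
                                 for a, b in zip(xs[i], xs[j])]
            alpha *= 0.97  # shrink the random walk over time
        best = min(xs, key=cost)
        return best, cost(best)
    ```

    On a 2-D sphere function the swarm clusters near the optimum as the random step decays; in the paper's setting the cost function would instead be a spot-size criterion evaluated through the SVM surrogate.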

  12. Robust resolution enhancement optimization methods to process variations based on vector imaging model

    NASA Astrophysics Data System (ADS)

    Ma, Xu; Li, Yanqiu; Guo, Xuejia; Dong, Lisong

    2012-03-01

    Optical proximity correction (OPC) and phase shifting mask (PSM) are the most widely used resolution enhancement techniques (RET) in the semiconductor industry. Recently, a set of OPC and PSM optimization algorithms has been developed to solve the inverse lithography problem, but these algorithms are designed only for the nominal imaging parameters, without sufficient attention to process variations due to aberrations, defocus, and dose variation. However, the effects of process variations in practical optical lithography systems become more pronounced as the critical dimension (CD) continuously shrinks. On the other hand, lithography systems with larger NA (NA > 0.6) are now extensively used, rendering the scalar imaging models inadequate to describe the vector nature of the electromagnetic field in current optical lithography systems. In order to tackle the above problems, this paper focuses on developing gradient-based OPC and PSM optimization algorithms that are robust to process variations under a vector imaging model. To achieve this goal, an integrative and analytic vector imaging model is applied to formulate the optimization problem, where the effects of process variations are explicitly incorporated in the optimization framework. The steepest descent algorithm is used to optimize the mask iteratively. In order to improve the efficiency of the proposed algorithms, a set of algorithm acceleration techniques (AAT) are exploited during the optimization procedure.

  13. Fabric wrinkle characterization and classification using modified wavelet coefficients and optimized support-vector-machine classifier

    USDA-ARS?s Scientific Manuscript database

    This paper presents a novel wrinkle evaluation method that uses modified wavelet coefficients and an optimized support-vector-machine (SVM) classification scheme to characterize and classify wrinkle appearance of fabric. Fabric images were decomposed with the wavelet transform (WT), and five parame...

  14. An Integrated Model of Co-ordinated Community-Based Care.

    PubMed

    Scharlach, Andrew E; Graham, Carrie L; Berridge, Clara

    2015-08-01

    Co-ordinated approaches to community-based care are a central component of current and proposed efforts to help vulnerable older adults obtain needed services and supports and reduce unnecessary use of health care resources. This study examines ElderHelp Concierge Club, an integrated community-based care model that includes comprehensive personal and environmental assessment, multilevel care co-ordination, a mix of professional and volunteer service providers, and a capitated, income-adjusted fee model. Evaluation includes a retrospective study (n = 96) of service use and perceived program impact, and a prospective study (n = 21) of changes in participant physical and social well-being and health services utilization. Over the period of this study, participants showed greater mobility, greater ability to meet household needs, greater access to health care, reduced social isolation, reduced home hazards, fewer falls, and greater perceived ability to obtain assistance needed to age in place. This study provides preliminary evidence that an integrated multilevel care co-ordination approach may be an effective and efficient model for serving vulnerable community-based elders, especially low- and moderate-income elders who otherwise could not afford the cost of care. The findings suggest the need for multisite controlled studies to more rigorously evaluate program impacts and the optimal mix of various program components.

  15. Vectorial mask optimization methods for robust optical lithography

    NASA Astrophysics Data System (ADS)

    Ma, Xu; Li, Yanqiu; Guo, Xuejia; Dong, Lisong; Arce, Gonzalo R.

    2012-10-01

    Continuous shrinkage of critical dimension in an integrated circuit impels the development of resolution enhancement techniques for low k1 lithography. Recently, several pixelated optical proximity correction (OPC) and phase-shifting mask (PSM) approaches were developed under scalar imaging models to account for the process variations. However, the lithography systems with larger-NA (NA>0.6) are predominant for current technology nodes, rendering the scalar models inadequate to describe the vector nature of the electromagnetic field that propagates through the optical lithography system. In addition, OPC and PSM algorithms based on scalar models can compensate for wavefront aberrations, but are incapable of mitigating polarization aberrations in practical lithography systems, which can only be dealt with under the vector model. To this end, we focus on developing robust pixelated gradient-based OPC and PSM optimization algorithms aimed at canceling defocus, dose variation, wavefront and polarization aberrations under a vector model. First, an integrative and analytic vector imaging model is applied to formulate the optimization problem, where the effects of process variations are explicitly incorporated in the optimization framework. A steepest descent algorithm is then used to iteratively optimize the mask patterns. Simulations show that the proposed algorithms can effectively improve the process windows of the optical lithography systems.

  16. Efficient Optimization of Low-Thrust Spacecraft Trajectories

    NASA Technical Reports Server (NTRS)

    Lee, Seungwon; Fink, Wolfgang; Russell, Ryan; Terrile, Richard; Petropoulos, Anastassios; vonAllmen, Paul

    2007-01-01

    A paper describes a computationally efficient method of optimizing trajectories of spacecraft driven by propulsion systems that generate low thrusts and, hence, must be operated for long times. A common goal in trajectory-optimization problems is to find minimum-time, minimum-fuel, or Pareto-optimal trajectories (here, Pareto-optimality signifies that no other solutions are superior with respect to both flight time and fuel consumption). The present method utilizes genetic and simulated-annealing algorithms to search for globally Pareto-optimal solutions. These algorithms are implemented in parallel form to reduce computation time. These algorithms are coupled with either of two traditional trajectory-design approaches called "direct" and "indirect." In the direct approach, thrust control is discretized in either arc time or arc length, and the resulting discrete thrust vectors are optimized. The indirect approach involves the primer-vector theory (introduced in 1963), in which the thrust control problem is transformed into a co-state control problem and the initial values of the co-state vector are optimized. In application to two example orbit-transfer problems, this method was found to generate solutions comparable to those of other state-of-the-art trajectory-optimization methods while requiring much less computation time.
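
    Pareto-optimality as defined in the abstract (no other solution better in both flight time and fuel) can be made concrete with a small non-dominated filter; a sketch over hypothetical (time, fuel) pairs:

    ```python
    def pareto_front(solutions):
        """Return the non-dominated subset of (flight_time, fuel) pairs:
        a solution is dropped if some other solution is no worse in both
        objectives (and is a different point). Note: exact duplicate points
        would eliminate each other under this simple test."""
        front = []
        for s in solutions:
            dominated = any(o[0] <= s[0] and o[1] <= s[1] and o != s
                            for o in solutions)
            if not dominated:
                front.append(s)
        return front
    ```

    A genetic or simulated-annealing search would maintain such a front while exploring the trade-off curve between flight time and fuel consumption.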

  17. Applicability of initial optimal maternal and fetal electrocardiogram combination vectors to subsequent recordings

    NASA Astrophysics Data System (ADS)

    Yan, Hua-Wen; Huang, Xiao-Lin; Zhao, Ying; Si, Jun-Feng; Liu, Tie-Bing; Liu, Hong-Xing

    2014-11-01

    A series of experiments are conducted to confirm whether the vectors calculated for an early section of a continuous non-invasive fetal electrocardiogram (fECG) recording can be directly applied to subsequent sections in order to reduce the computation required for real-time monitoring. Our results suggest that it is generally feasible to apply the initial optimal maternal and fetal ECG combination vectors to extract the fECG and maternal ECG in subsequent recorded sections.

  18. One health: the importance of companion animal vector-borne diseases

    PubMed Central

    2011-01-01

    The international prominence accorded the 'One Health' concept of co-ordinated activity of those involved in human and animal health is a modern incarnation of a long tradition of comparative medicine, with roots in the ancient civilizations and a golden era during the 19th century explosion of knowledge in the field of infectious disease research. Modern One Health tends to focus on zoonotic pathogens emerging from wildlife and production animal species, but one of the most significant One Health challenges is rabies for which there is a canine reservoir. This review considers the role of small companion animals in One Health and specifically addresses the major vector-borne infectious diseases that are shared by man, dogs and cats. The most significant of these are leishmaniosis, borreliosis, bartonellosis, ehrlichiosis, rickettsiosis and anaplasmosis. The challenges that lie ahead in this field of One Health are discussed, together with the role of the newly formed World Small Animal Veterinary Association One Health Committee. PMID:21489237

  19. Optimal distribution of integration time for intensity measurements in Stokes polarimetry.

    PubMed

    Li, Xiaobo; Liu, Tiegen; Huang, Bingjing; Song, Zhanjie; Hu, Haofeng

    2015-10-19

    We consider the typical Stokes polarimetry system, which performs four intensity measurements to estimate a Stokes vector. We show that if the total integration time of intensity measurements is fixed, the variance of the Stokes vector estimator depends on the distribution of the integration time over the four intensity measurements. Therefore, by optimizing the distribution of integration time, the variance of the Stokes vector estimator can be decreased. In this paper, we obtain the closed-form solution of the optimal distribution of integration time by employing the Lagrange multiplier method. According to the theoretical analysis and a real-world experiment, it is shown that the total variance of the Stokes vector estimator can be decreased by about 40% in the case discussed in this paper. The method proposed in this paper can effectively decrease the measurement variance and thus statistically improve the measurement accuracy of the polarimetric system.
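
    The paper's closed form depends on its specific noise model; as a hedged illustration, assume the variance contribution of measurement i scales as c_i / t_i (a shot-noise-like model with hypothetical weights c_i). Minimizing the total variance Σ c_i/t_i subject to Σ t_i = T with a Lagrange multiplier gives t_i proportional to √c_i:

    ```python
    import math

    def optimal_times(c, total):
        """Closed-form Lagrange-multiplier solution of
        minimize sum(c_i / t_i) subject to sum(t_i) = total:
        setting d/dt_i [sum(c_j/t_j) + lam*sum(t_j)] = 0 gives
        t_i = total * sqrt(c_i) / sum_j sqrt(c_j)."""
        roots = [math.sqrt(ci) for ci in c]
        s = sum(roots)
        return [total * r / s for r in roots]

    def total_variance(c, t):
        """Total estimator variance under the assumed c_i / t_i model."""
        return sum(ci / ti for ci, ti in zip(c, t))
    ```

    Under this model the optimal allocation never does worse than splitting the integration time equally, and the gap grows with the spread of the weights c_i.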

  20. Optimal Pitch Thrust-Vector Angle and Benefits for all Flight Regimes

    NASA Technical Reports Server (NTRS)

    Gilyard, Glenn B.; Bolonkin, Alexander

    2000-01-01

    The NASA Dryden Flight Research Center is exploring the optimum thrust-vector angle on aircraft. Simple aerodynamic performance models for various phases of aircraft flight are developed and optimization equations and algorithms are presented in this report. Results of optimal angles of thrust vectors and associated benefits for various flight regimes of aircraft (takeoff, climb, cruise, descent, final approach, and landing) are given. Results for a typical wide-body transport aircraft are also given. The benefits accruable for this class of aircraft are small, but the technique can be applied to other conventionally configured aircraft. The lower L/D aerodynamic characteristics of fighters generally would produce larger benefits than those produced for transport aircraft.
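
    For the takeoff ground run specifically, a classical result illustrates why a small upward thrust-vector angle helps: pitching the thrust unloads the wheels and so reduces rolling friction. Under the simplifying assumptions below (lift and drag held fixed over the scanned angles; thrust, weight, and friction coefficient are hypothetical values), acceleration is maximized where tan(theta) = mu:

    ```python
    import math

    def ground_run_accel(theta, thrust, weight, mu, lift=0.0, drag=0.0, g=9.81):
        """Ground-run acceleration with thrust pitched up by theta:
        a = [T cos(theta) - D - mu * (W - L - T sin(theta))] / m.
        The mu * T sin(theta) term is the friction saved by unloading
        the wheels."""
        m = weight / g
        normal = max(weight - lift - thrust * math.sin(theta), 0.0)
        return (thrust * math.cos(theta) - drag - mu * normal) / m

    # The accelerating force varies as T * (cos(theta) + mu * sin(theta)),
    # which is stationary where tan(theta) = mu, i.e. theta* = atan(mu).
    ```

    For typical rolling-friction coefficients (mu on the order of a few percent) the optimal angle is only a couple of degrees, consistent with the small benefits reported for transport aircraft.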

  1. Attitude Determination Using Two Vector Measurements

    NASA Technical Reports Server (NTRS)

    Markley, F. Landis

    1998-01-01

    Many spacecraft attitude determination methods use exactly two vector measurements. The two vectors are typically the unit vector to the Sun and the Earth's magnetic field vector for coarse "sun-mag" attitude determination or unit vectors to two stars tracked by two star trackers for fine attitude determination. TRIAD, the earliest published algorithm for determining spacecraft attitude from two vector measurements, has been widely used in both ground-based and onboard attitude determination. Later attitude determination methods have been based on Wahba's optimality criterion for n arbitrarily weighted observations. The solution of Wahba's problem is somewhat difficult in the general case, but there is a simple closed-form solution in the two-observation case. This solution reduces to the TRIAD solution for certain choices of measurement weights. This paper presents and compares these algorithms as well as sub-optimal algorithms proposed by Bar-Itzhack, Harman, and Reynolds. Some new results will be presented, but the paper is primarily a review and tutorial.
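
    The TRIAD construction itself is short: build an orthonormal triad from each vector pair and combine them. A plain-Python sketch (the first vector is trusted exactly; the second only fixes the rotation about it, and the pair must not be parallel):

    ```python
    import math

    def cross(a, b):
        return [a[1] * b[2] - a[2] * b[1],
                a[2] * b[0] - a[0] * b[2],
                a[0] * b[1] - a[1] * b[0]]

    def unit(v):
        n = math.sqrt(sum(c * c for c in v))
        return [c / n for c in v]

    def triad(b1, b2, r1, r2):
        """TRIAD attitude matrix A (with b = A r) from two measured
        body-frame vectors (b1, b2) and their reference-frame
        counterparts (r1, r2)."""
        tb = [unit(b1)]
        tb.append(unit(cross(b1, b2)))
        tb.append(cross(tb[0], tb[1]))
        tr = [unit(r1)]
        tr.append(unit(cross(r1, r2)))
        tr.append(cross(tr[0], tr[1]))
        # A = sum_k tb_k tr_k^T maps each reference triad axis to its
        # body-frame counterpart.
        return [[sum(tb[k][i] * tr[k][j] for k in range(3)) for j in range(3)]
                for i in range(3)]
    ```

    Because the second measurement enters only through the cross product, its error component along the first vector is discarded, which is exactly the asymmetry the Wahba-based two-observation solutions remove by weighting both measurements.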

  2. Expression of chicken parvovirus VP2 in chicken embryo fibroblasts requires codon optimization for production of naked DNA and vectored meleagrid herpesvirus type 1 vaccines.

    PubMed

    Spatz, Stephen J; Volkening, Jeremy D; Mullis, Robert; Li, Fenglan; Mercado, John; Zsak, Laszlo

    2013-10-01

    Meleagrid herpesvirus type 1 (MeHV-1) is an ideal vector for the expression of antigens from pathogenic avian organisms in order to generate vaccines. Chicken parvovirus (ChPV) is a widespread infectious virus that causes serious disease in chickens. It is one of the etiological agents largely suspected in causing Runting Stunting Syndrome (RSS) in chickens. Initial attempts to express the wild-type gene encoding the capsid protein VP2 of ChPV by insertion into the thymidine kinase gene of MeHV-1 were unsuccessful. However, transient expression of a codon-optimized synthetic VP2 gene cloned into the bicistronic vector pIRES2-Ds-Red2, could be demonstrated by immunocytochemical staining of transfected chicken embryo fibroblasts (CEFs). Red fluorescence could also be detected in these transfected cells since the red fluorescent protein gene is downstream from the internal ribosome entry site (IRES). Strikingly, fluorescence could not be demonstrated in cells transiently transfected with the bicistronic vector containing the wild-type or non-codon-optimized VP2 gene. Immunocytochemical staining of these cells also failed to demonstrate expression of wild-type VP2, indicating that the lack of expression was at the RNA level and the VP2 protein was not toxic to CEFs. Chickens vaccinated with a DNA vaccine consisting of the bicistronic vector containing the codon-optimized VP2 elicited a humoral immune response as measured by a VP2-specific ELISA. This VP2 codon-optimized bicistronic cassette was rescued into the MeHV-1 genome generating a vectored vaccine against ChPV disease.

  3. The primer vector in linear, relative-motion equations. [spacecraft trajectory optimization

    NASA Technical Reports Server (NTRS)

    1980-01-01

    Primer vector theory is used in analyzing a set of linear, relative-motion equations - the Clohessy-Wiltshire equations - to determine the criteria and necessary conditions for an optimal, N-impulse trajectory. Since the state vector for these equations is defined in terms of a linear system of ordinary differential equations, all fundamental relations defining the solution of the state and costate equations, and the necessary conditions for optimality, can be expressed in terms of elementary functions. The analysis develops the analytical criteria for improving a solution by (1) moving any dependent or independent variable in the initial and/or final orbit, and (2) adding intermediate impulses. If these criteria are violated, the theory establishes a sufficient number of analytical equations. The subsequent satisfaction of these equations will result in the optimal position vectors and times of an N-impulse trajectory. The solution is examined for the specific boundary conditions of (1) fixed-end conditions, two-impulse, and time-open transfer; (2) an orbit-to-orbit transfer; and (3) a generalized rendezvous problem. A sequence of rendezvous problems is solved to illustrate the analysis and the computational procedure.
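
    Because the Clohessy-Wiltshire equations are linear and time-invariant, their solution is a closed-form state transition matrix, which is what allows the state, costate, and optimality relations above to be written in elementary functions. A sketch of the standard CW transition matrix (x radial, y along-track, z cross-track, mean motion n; this is the textbook form, not the paper's notation):

    ```python
    import math

    def cw_stm(n, t):
        """State transition matrix of the Clohessy-Wiltshire equations for
        the state [x, y, z, vx, vy, vz] after elapsed time t."""
        c, s = math.cos(n * t), math.sin(n * t)
        return [
            [4 - 3 * c,       0, 0,      s / n,           2 * (1 - c) / n,     0],
            [6 * (s - n * t), 1, 0,      2 * (c - 1) / n, (4 * s - 3 * n * t) / n, 0],
            [0,               0, c,      0,               0,                   s / n],
            [3 * n * s,       0, 0,      c,               2 * s,               0],
            [6 * n * (c - 1), 0, 0,     -2 * s,           4 * c - 3,           0],
            [0,               0, -n * s, 0,               0,                   c],
        ]
    ```

    The matrix reduces to the identity at t = 0 and composes over time intervals, the two properties an N-impulse trajectory search exploits when chaining coast arcs between impulses.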

  4. Optimizing the method for generation of integration-free induced pluripotent stem cells from human peripheral blood.

    PubMed

    Gu, Haihui; Huang, Xia; Xu, Jing; Song, Lili; Liu, Shuping; Zhang, Xiao-Bing; Yuan, Weiping; Li, Yanxin

    2018-06-15

    Generation of induced pluripotent stem cells (iPSCs) from human peripheral blood provides a convenient and low-invasive way to obtain patient-specific iPSCs. The episomal vector is one of the best approaches for reprogramming somatic cells to pluripotent status because of its simplicity and affordability. However, the efficiency of episomal vector reprogramming of adult peripheral blood cells is relatively low compared with cord blood and bone marrow cells. In the present study, integration-free human iPSCs derived from peripheral blood were established via episomal technology. We optimized mononuclear cell isolation and cultivation, episomal vector promoters, and a combination of transcriptional factors to improve reprogramming efficiency. Here, we improved the generation efficiency of integration-free iPSCs from human peripheral blood mononuclear cells by optimizing the method of isolating mononuclear cells from peripheral blood, by modifying the integration of culture medium, and by adjusting the duration of culture time and the combination of different episomal vectors. With this optimized protocol, a valuable asset for banking patient-specific iPSCs has been established.

  5. [The preparation of recombinant adenovirus Ad-Rad50-GFP and detection of the optimal multiplicity of infection in CNE1 transfected by Ad-Rad50-GFP].

    PubMed

    Yan, Ruicheng; Huang, Jiancong; Zhu, Ling; Chang, Lihong; Li, Jingjia; Wu, Xifu; Ye, Jin; Zhang, Gehua

    2015-12-01

    The optimal multiplicity of infection (MOI) of the recombinant adenovirus Ad-Rad50-GFP carrying a mutant Rad50 gene expression region with respect to the cell growth of nasopharyngeal carcinoma, and the viral amplification efficiency of CNE1 cells infected by this adenovirus, were studied. The biological titer of Ad-Rad50-GFP was measured by end point dilution method. The impact of recombinant adenoviral vector transfection on the growth of CNE1 cells was observed by cell growth curve. Transfection efficacy of recombinant adenoviral vector was observed and calculated through fluorescence microscope. The expression of mutant Rad50 in the Ad-Rad50-GFP transfected CNE1 cells with optimal MOI was detected by Western Blot after transfection. The biological titer of Ad-Rad50-GFP was 1.26 x 10¹¹ pfu/ml. CNE1 cell growth was not influenced significantly as they were transfected by recombinant adenoviral vector with MOI less than 50. Transfection efficacy of recombinant adenoviral vector was most salient at 24 hours after transfection, with the high expression of mutant Rad50, and the efficiency still remained about 70% after 72 hours. Recombinant adenoviral vector Ad-Rad50-GFP could transfect CNE1 cells as well as result in the expression of mutant Rad50 in CNE1 cells effectively. MOI = 50 was the optimal multiplicity of infection of CNE1 cells transfected by recombinant adenoviral vector Ad-Rad50-GFP.

  6. Balancing aggregation and smoothing errors in inverse models

    DOE PAGES

    Turner, A. J.; Jacob, D. J.

    2015-06-30

    Inverse models use observations of a system (observation vector) to quantify the variables driving that system (state vector) by statistical optimization. When the observation vector is large, such as with satellite data, selecting a suitable dimension for the state vector is a challenge. A state vector that is too large cannot be effectively constrained by the observations, leading to smoothing error. However, reducing the dimension of the state vector leads to aggregation error as prior relationships between state vector elements are imposed rather than optimized. Here we present a method for quantifying aggregation and smoothing errors as a function of state vector dimension, so that a suitable dimension can be selected by minimizing the combined error. Reducing the state vector within the aggregation error constraints can have the added advantage of enabling analytical solution to the inverse problem with full error characterization. We compare three methods for reducing the dimension of the state vector from its native resolution: (1) merging adjacent elements (grid coarsening), (2) clustering with principal component analysis (PCA), and (3) applying a Gaussian mixture model (GMM) with Gaussian pdfs as state vector elements on which the native-resolution state vector elements are projected using radial basis functions (RBFs). The GMM method leads to somewhat lower aggregation error than the other methods, but more importantly it retains resolution of major local features in the state vector while smoothing weak and broad features.
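
    Grid coarsening, the first reduction method above, can be sketched in a few lines: adjacent native-resolution elements are merged by averaging, and whatever cannot be recovered on mapping back is the aggregation error (the block size here is an arbitrary illustrative choice):

    ```python
    def coarsen(state, block):
        """Reduce state-vector dimension by merging each run of `block`
        adjacent native-resolution elements into their mean."""
        assert len(state) % block == 0
        return [sum(state[i:i + block]) / block
                for i in range(0, len(state), block)]

    def prolong(coarse, block):
        """Map a coarse state back to native resolution by replication.
        The difference between the native state and prolong(coarsen(state))
        is the structure the reduced state vector can no longer represent,
        i.e. the source of aggregation error."""
        return [v for v in coarse for _ in range(block)]
    ```

    The PCA and GMM/RBF alternatives replace this fixed block-averaging operator with data-driven projection operators, which is why they can preserve sharp local features that uniform coarsening smears out.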

  9. Population Fisher information matrix and optimal design of discrete data responses in population pharmacodynamic experiments.

    PubMed

    Ogungbenro, Kayode; Aarons, Leon

    2011-08-01

    In recent years, interest in the application of experimental design theory to population pharmacokinetic (PK) and pharmacodynamic (PD) experiments has increased. The aim is to improve the efficiency and the precision with which parameters are estimated during data analysis and sometimes to increase the power and reduce the sample size required for hypothesis testing. The population Fisher information matrix (PFIM) has been described for uniresponse and multiresponse population PK experiments for design evaluation and optimisation. Despite these developments and the availability of tools for optimal design of population PK and PD experiments, much of the effort has been focused on repeated continuous-variable measurements, with less work being done on repeated discrete-type measurements. Discrete data arise mainly in PDs, e.g. ordinal, nominal, dichotomous or count measurements. This paper implements expressions for the PFIM for repeated ordinal, dichotomous and count measurements based on analysis by a mixed-effects modelling technique. Three simulation studies were used to investigate the performance of the expressions. Example 1 is based on repeated dichotomous measurements, Example 2 on repeated count measurements and Example 3 on repeated ordinal measurements. Data simulated in MATLAB were analysed using NONMEM (Laplace method) and the glmmML package in R (Laplace and adaptive Gauss-Hermite quadrature methods). The results obtained for Examples 1 and 2 showed good agreement between the relative standard errors obtained using the PFIM and simulations. The results obtained for Example 3 showed the importance of sampling at the most informative time points. Implementation of these expressions will provide the opportunity for efficient design of population PD experiments that involve discrete-type data through design evaluation and optimisation.
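To make the Fisher-information idea concrete for dichotomous data, here is a minimal sketch for a plain fixed-effects logistic model; the population matrix in the paper additionally integrates over the random effects of the mixed-effects model, which this sketch omits. The design matrix and parameter values are illustrative:

```python
import numpy as np

def fisher_info_dichotomous(X, beta):
    """Expected Fisher information for repeated dichotomous (0/1)
    responses under a fixed-effects logistic model
    P(y=1|x) = 1/(1+exp(-x.beta)): I(beta) = sum_i w_i x_i x_i^T
    with w_i = p_i (1 - p_i)."""
    eta = X @ beta
    p = 1.0 / (1.0 + np.exp(-eta))
    W = p * (1.0 - p)                 # per-observation Bernoulli variance
    return (X * W[:, None]).T @ X

X = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0]])  # intercept + sampling time
beta = np.array([0.0, 0.5])
I = fisher_info_dichotomous(X, beta)
se = np.sqrt(np.diag(np.linalg.inv(I)))  # asymptotic standard errors
```

Design evaluation then amounts to choosing sampling times that shrink these predicted standard errors, which is the comparison the simulation studies perform.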

  10. An optimal-estimation-based aerosol retrieval algorithm using OMI near-UV observations

    NASA Astrophysics Data System (ADS)

    Jeong, U.; Kim, J.; Ahn, C.; Torres, O.; Liu, X.; Bhartia, P. K.; Spurr, R. J. D.; Haffner, D.; Chance, K.; Holben, B. N.

    2016-01-01

    An optimal-estimation(OE)-based aerosol retrieval algorithm using the OMI (Ozone Monitoring Instrument) near-ultraviolet observation was developed in this study. The OE-based algorithm has the merit of providing useful estimates of errors simultaneously with the inversion products. Furthermore, instead of using the traditional look-up tables for inversion, it performs online radiative transfer calculations with the VLIDORT (linearized pseudo-spherical vector discrete ordinate radiative transfer code) to eliminate interpolation errors and improve stability. The measurements and inversion products of the Distributed Regional Aerosol Gridded Observation Network campaign in northeast Asia (DRAGON NE-Asia 2012) were used to validate the retrieved aerosol optical thickness (AOT) and single scattering albedo (SSA). The retrieved AOT and SSA at 388 nm have a correlation with the Aerosol Robotic Network (AERONET) products that is comparable to or better than the correlation with the operational product during the campaign. The OE-based estimated error represented the variance of actual biases of AOT at 388 nm between the retrieval and AERONET measurements better than the operational error estimates. The forward model parameter errors were analyzed separately for both AOT and SSA retrievals. The surface reflectance at 388 nm, the imaginary part of the refractive index at 354 nm, and the number fine-mode fraction (FMF) were found to be the most important parameters affecting the retrieval accuracy of AOT, while FMF was the most important parameter for the SSA retrieval. The additional information provided with the retrievals, including the estimated error and degrees of freedom, is expected to be valuable for relevant studies. Detailed advantages of using the OE method were described and discussed in this paper.

  11. An Optimal-Estimation-Based Aerosol Retrieval Algorithm Using OMI Near-UV Observations

    NASA Technical Reports Server (NTRS)

    Jeong, U; Kim, J.; Ahn, C.; Torres, O.; Liu, X.; Bhartia, P. K.; Spurr, R. J. D.; Haffner, D.; Chance, K.; Holben, B. N.

    2016-01-01

    An optimal-estimation (OE)-based aerosol retrieval algorithm using the OMI (Ozone Monitoring Instrument) near-ultraviolet observation was developed in this study. The OE-based algorithm has the merit of providing useful estimates of errors simultaneously with the inversion products. Furthermore, instead of using the traditional look-up tables for inversion, it performs online radiative transfer calculations with the VLIDORT (linearized pseudo-spherical vector discrete ordinate radiative transfer code) to eliminate interpolation errors and improve stability. The measurements and inversion products of the Distributed Regional Aerosol Gridded Observation Network campaign in northeast Asia (DRAGON NE-Asia 2012) were used to validate the retrieved aerosol optical thickness (AOT) and single scattering albedo (SSA). The retrieved AOT and SSA at 388 nm have a correlation with the Aerosol Robotic Network (AERONET) products that is comparable to or better than the correlation with the operational product during the campaign. The OE-based estimated error represented the variance of actual biases of AOT at 388 nm between the retrieval and AERONET measurements better than the operational error estimates. The forward model parameter errors were analyzed separately for both AOT and SSA retrievals. The surface reflectance at 388 nm, the imaginary part of the refractive index at 354 nm, and the number fine-mode fraction (FMF) were found to be the most important parameters affecting the retrieval accuracy of AOT, while FMF was the most important parameter for the SSA retrieval. The additional information provided with the retrievals, including the estimated error and degrees of freedom, is expected to be valuable for relevant studies. Detailed advantages of using the OE method were described and discussed in this paper.
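The OE machinery that yields the estimated error and degrees of freedom alongside the retrieval can be sketched generically; this is one Gauss-Newton step of a standard optimal-estimation update, not the OMI-specific implementation, and all variable names are illustrative:

```python
import numpy as np

def oe_retrieval(y, F_xa, K, x_a, S_a, S_e):
    """One Gauss-Newton step of optimal estimation: y is the
    observation vector, F_xa the forward model at the prior x_a,
    K the Jacobian, S_a/S_e the prior and observation error
    covariances. Returns the retrieved state, its posterior error
    covariance (the 'estimated error'), and the degrees of freedom
    for signal (trace of the averaging kernel)."""
    S_e_inv = np.linalg.inv(S_e)
    S_hat = np.linalg.inv(K.T @ S_e_inv @ K + np.linalg.inv(S_a))
    x_hat = x_a + S_hat @ K.T @ S_e_inv @ (y - F_xa)
    A = S_hat @ K.T @ S_e_inv @ K        # averaging kernel matrix
    return x_hat, S_hat, np.trace(A)
```

With equal prior and observation variances, the retrieval lands halfway between prior and measurement and half a degree of freedom comes from the data, which matches intuition about equally weighted information sources.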

  12. Validation of the Chinese Version of the Life Orientation Test with a Robust Weighted Least Squares Approach

    ERIC Educational Resources Information Center

    Li, Cheng-Hsien

    2012-01-01

    Of the several measures of optimism presently available in the literature, the Life Orientation Test (LOT; Scheier & Carver, 1985) has been the most widely used in empirical research. This article explores, confirms, and cross-validates the factor structure of the Chinese version of the LOT with ordinal data by using robust weighted least…

  13. SSE-based Thomas algorithm for quasi-block-tridiagonal linear equation systems, optimized for small dense blocks

    NASA Astrophysics Data System (ADS)

    Barnaś, Dawid; Bieniasz, Lesław K.

    2017-07-01

    We have recently developed a vectorized Thomas solver for quasi-block tridiagonal linear algebraic equation systems using Streaming SIMD Extensions (SSE) and Advanced Vector Extensions (AVX) in operations on dense blocks [D. Barnaś and L. K. Bieniasz, Int. J. Comput. Meth., accepted]. The acceleration caused by vectorization was observed for large block sizes, but was less satisfactory for small blocks. In this communication we report on another version of the solver, optimized for small blocks of size up to four rows and/or columns.
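For reference, the block-tridiagonal Thomas recursion itself (before any SSE/AVX vectorization, which is the paper's actual contribution) can be written in a few lines; this plain NumPy sketch treats the small dense blocks with generic solves:

```python
import numpy as np

def block_thomas(A, B, C, d):
    """Thomas algorithm for a block-tridiagonal system of n dense
    k-by-k blocks: B[i] on the diagonal, A[i] on the subdiagonal,
    C[i] on the superdiagonal (A[0] and C[n-1] are ignored);
    d is the list of right-hand-side block vectors."""
    n = len(B)
    Cp, dp = [None] * n, [None] * n
    Cp[0] = np.linalg.solve(B[0], C[0])
    dp[0] = np.linalg.solve(B[0], d[0])
    for i in range(1, n):                        # forward elimination
        denom = B[i] - A[i] @ Cp[i - 1]
        if i < n - 1:
            Cp[i] = np.linalg.solve(denom, C[i])
        dp[i] = np.linalg.solve(denom, d[i] - A[i] @ dp[i - 1])
    x = [None] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):               # back substitution
        x[i] = dp[i] - Cp[i] @ x[i + 1]
    return np.concatenate(x)
```

For blocks of size up to four, the dense solves above degenerate into a handful of fused multiply-adds, which is why hand vectorization of exactly this regime pays off.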

  14. P and M gene junction is the optimal insertion site in Newcastle disease virus vaccine vector for foreign gene expression

    USDA-ARS?s Scientific Manuscript database

    Newcastle disease virus (NDV) has been developed as a vector for vaccine and gene therapy purposes. However, the optimal insertion site for foreign gene expression remained to be determined. In the present study, we inserted the green fluorescence protein (GFP) gene into five different intergenic ...

  15. "RCL-Pooling Assay": A Simplified Method for the Detection of Replication-Competent Lentiviruses in Vector Batches Using Sequential Pooling.

    PubMed

    Corre, Guillaume; Dessainte, Michel; Marteau, Jean-Brice; Dalle, Bruno; Fenard, David; Galy, Anne

    2016-02-01

    Nonreplicative recombinant HIV-1-derived lentiviral vectors (LV) are increasingly used in gene therapy of various genetic diseases, infectious diseases, and cancer. Before they are used in humans, preparations of LV must undergo extensive quality control testing. In particular, testing of LV must demonstrate the absence of replication-competent lentiviruses (RCL) with suitable methods, on representative fractions of vector batches. Current methods based on cell culture are challenging because high titers of vector batches translate into high volumes of cell culture to be tested in RCL assays. As vector batch size and titers are continuously increasing because of the improvement of production and purification methods, it became necessary for us to modify the current RCL assay based on the detection of p24 in cultures of indicator cells. Here, we propose a practical optimization of this method using a pairwise pooling strategy enabling easier testing of higher vector inoculum volumes. These modifications significantly decrease material handling and operator time, leading to a cost-effective method, while maintaining optimal sensitivity of the RCL testing. This optimized "RCL-pooling assay" improves the feasibility of quality control of large-scale batches of clinical-grade LV while maintaining the same sensitivity.
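The economy of pooling is easy to illustrate. The sketch below is a generic binary-split pooling scheme, not the paper's exact pairwise protocol, and `is_positive` stands in for the p24 indicator-cell assay, assumed able to detect one positive sample within any pool size tested:

```python
def find_positives(samples, is_positive):
    """Pooling sketch: assay a pooled group once; only if the pool
    reacts, split it in two and recurse. Returns the positive
    samples found and the number of assays performed."""
    assays = [0]
    def recurse(group):
        assays[0] += 1
        if not is_positive(group):
            return []
        if len(group) == 1:
            return list(group)
        mid = len(group) // 2
        return recurse(group[:mid]) + recurse(group[mid:])
    return recurse(list(samples)), assays[0]

# 16 aliquots, one contaminated: found with far fewer than 16 assays
hits, n_assays = find_positives(range(16), lambda g: 7 in g)
```

With one positive among 16 aliquots this takes 9 assays instead of 16, and the gap widens as batch volumes grow, which is the practical point of the pooling assay.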

  16. Ontology Sparse Vector Learning Algorithm for Ontology Similarity Measuring and Ontology Mapping via ADAL Technology

    NASA Astrophysics Data System (ADS)

    Gao, Wei; Zhu, Linli; Wang, Kaiyun

    2015-12-01

    Ontology, a model of knowledge representation and storage, has had extensive applications in pharmaceutics, social science, chemistry and biology. In the age of “big data”, the constructed concepts are often represented as higher-dimensional data by scholars, and thus sparse learning techniques have been introduced into ontology algorithms. In this paper, based on the alternating direction augmented Lagrangian method, we present an ontology optimization algorithm for ontological sparse vector learning, together with a fast version of the algorithm. The optimal sparse vector is obtained by an iterative procedure, and the ontology function is then obtained from the sparse vector. Four simulation experiments show that our ontological sparse vector learning model has a higher precision ratio on plant ontology, humanoid robotics ontology, biology ontology and physics education ontology data for similarity measuring and ontology mapping applications.
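The sparse-vector step can be illustrated with a simpler solver than the paper's: below is a proximal-gradient (ISTA) sketch for the l1-regularized least-squares problem, not the alternating direction augmented Lagrangian method itself, though both rely on the same soft-thresholding operator to produce a sparse vector:

```python
import numpy as np

def ista_sparse_vector(A, b, lam=0.1, iters=500):
    """ISTA sketch for min_v 0.5*||A v - b||^2 + lam*||v||_1.
    Gradient step on the smooth term, then soft-thresholding
    (the proximal operator of the l1 norm) zeroes small entries."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    v = np.zeros(A.shape[1])
    for _ in range(iters):
        z = v - A.T @ (A @ v - b) / L
        v = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
    return v
```

Small coefficients are driven exactly to zero, so the learned ontology function depends on only a few concept dimensions.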

  17. SU-E-T-422: Fast Analytical Beamlet Optimization for Volumetric Intensity-Modulated Arc Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chan, Kenny S K; Lee, Louis K Y; Xing, L

    2015-06-15

    Purpose: To implement a fast optimization algorithm on a CPU/GPU heterogeneous computing platform and to obtain an optimal fluence for a given target dose distribution from the pre-calculated beamlets in an analytical approach. Methods: The 2D target dose distribution was modeled as an n-dimensional vector and estimated by a linear combination of independent basis vectors. The basis set was composed of the pre-calculated beamlet dose distributions at every 6 degrees of gantry angle, and the cost function was set as the squared magnitude of the vector difference between the target and the estimated dose distribution. The optimal weighting of the basis, which corresponds to the optimal fluence, was obtained analytically by the least-squares method. Those basis vectors with a positive weighting were selected for entering into the next level of optimization. In total, 7 levels of optimization were implemented in the study. Ten head-and-neck and ten prostate carcinoma cases were selected for the study and mapped to a round water phantom with a diameter of 20 cm. The Matlab computation was performed in a heterogeneous programming environment with Intel i7 CPU and NVIDIA Geforce 840M GPU. Results: In all selected cases, the estimated dose distribution was in good agreement with the given target dose distribution and their correlation coefficients were found to be in the range of 0.9992 to 0.9997. Their root-mean-square error decreased monotonically and converged after 7 cycles of optimization. The computation took only about 10 seconds and the optimal fluence maps at each gantry angle throughout an arc were quickly obtained. Conclusion: An analytical approach is derived for finding the optimal fluence for a given target dose distribution, and a fast optimization algorithm implemented on the CPU/GPU heterogeneous computing environment greatly reduces the optimization time.
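The level scheme in the abstract (least-squares fit, keep only positively weighted beamlets, refit) can be sketched directly; this is an illustrative reading of the abstract, not the authors' Matlab code, and the basis/target data are synthetic:

```python
import numpy as np

def fluence_by_positive_lsq(B, d_target, levels=7):
    """Analytical beamlet-weight sketch: columns of B are beamlet
    dose distributions. At each level, solve the least-squares fit
    and keep only beamlets with positive weight; stop early when
    all surviving weights are positive."""
    idx = np.arange(B.shape[1])
    w = np.zeros(0)
    for _ in range(levels):
        w, *_ = np.linalg.lstsq(B[:, idx], d_target, rcond=None)
        keep = w > 0
        if keep.all():
            break
        if not keep.any():
            idx, w = idx[:0], w[:0]
            break
        idx, w = idx[keep], w[keep]
    w_full = np.zeros(B.shape[1])
    w_full[idx] = w
    return w_full
```

Discarding negative weights enforces the physical constraint that fluence cannot be negative, at the cost of the extra refit levels.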

  18. Minimization of annotation work: diagnosis of mammographic masses via active learning

    NASA Astrophysics Data System (ADS)

    Zhao, Yu; Zhang, Jingyang; Xie, Hongzhi; Zhang, Shuyang; Gu, Lixu

    2018-06-01

    The prerequisite for establishing an effective prediction system for mammographic diagnosis is the annotation of each mammographic image. The manual annotation work is time-consuming and laborious, which becomes a great hindrance for researchers. In this article, we propose a novel active learning algorithm that can adequately address this problem, leading to the minimization of the labeling costs on the premise of guaranteed performance. Our proposed method is different from the existing active learning methods designed for the general problem as it is specifically designed for mammographic images. Through its modified discriminant functions and improved sample query criteria, the proposed method can fully utilize the pairing of mammographic images and select the most valuable images from both the mediolateral and craniocaudal views. Moreover, in order to extend active learning to the ordinal regression problem, which has no precedent in existing studies, but is essential for mammographic diagnosis (mammographic diagnosis is not only a classification task, but also an ordinal regression task for predicting an ordinal variable, viz. the malignancy risk of lesions), multiple sample query criteria need to be taken into consideration simultaneously. We formulate it as a criteria integration problem and further present an algorithm based on self-adaptive weighted rank aggregation to achieve a good solution. The efficacy of the proposed method was demonstrated on thousands of mammographic images from the Digital Database for Screening Mammography. The labeling costs of obtaining optimal performance in the classification and ordinal regression tasks fell to 33.8 and 19.8 percent of their original costs, respectively. The proposed method also generated 1228 wins, 369 ties and 47 losses for the classification task, and 1933 wins, 258 ties and 185 losses for the ordinal regression task compared to the other state-of-the-art active learning algorithms. By taking into account the particularities of mammographic images, the proposed active learning method can indeed reduce the manual annotation work to a great extent without sacrificing the performance of the prediction system for mammographic diagnosis.

  19. Minimization of annotation work: diagnosis of mammographic masses via active learning.

    PubMed

    Zhao, Yu; Zhang, Jingyang; Xie, Hongzhi; Zhang, Shuyang; Gu, Lixu

    2018-05-22

    The prerequisite for establishing an effective prediction system for mammographic diagnosis is the annotation of each mammographic image. The manual annotation work is time-consuming and laborious, which becomes a great hindrance for researchers. In this article, we propose a novel active learning algorithm that can adequately address this problem, leading to the minimization of the labeling costs on the premise of guaranteed performance. Our proposed method is different from the existing active learning methods designed for the general problem as it is specifically designed for mammographic images. Through its modified discriminant functions and improved sample query criteria, the proposed method can fully utilize the pairing of mammographic images and select the most valuable images from both the mediolateral and craniocaudal views. Moreover, in order to extend active learning to the ordinal regression problem, which has no precedent in existing studies, but is essential for mammographic diagnosis (mammographic diagnosis is not only a classification task, but also an ordinal regression task for predicting an ordinal variable, viz. the malignancy risk of lesions), multiple sample query criteria need to be taken into consideration simultaneously. We formulate it as a criteria integration problem and further present an algorithm based on self-adaptive weighted rank aggregation to achieve a good solution. The efficacy of the proposed method was demonstrated on thousands of mammographic images from the Digital Database for Screening Mammography. The labeling costs of obtaining optimal performance in the classification and ordinal regression tasks fell to 33.8 and 19.8 percent of their original costs, respectively. The proposed method also generated 1228 wins, 369 ties and 47 losses for the classification task, and 1933 wins, 258 ties and 185 losses for the ordinal regression task compared to the other state-of-the-art active learning algorithms. By taking into account the particularities of mammographic images, the proposed active learning method can indeed reduce the manual annotation work to a great extent without sacrificing the performance of the prediction system for mammographic diagnosis.
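The rank-aggregation step behind the multi-criteria query can be illustrated with a fixed-weight Borda scheme; the paper's method additionally learns the weights self-adaptively, which this sketch does not attempt:

```python
import numpy as np

def weighted_borda(rankings, weights):
    """Weighted rank aggregation sketch: each query criterion ranks
    the unlabeled samples (rank[j] = position of sample j, 0 = most
    valuable); the aggregate order sorts by the weighted sum of
    positions, lowest score first."""
    score = np.zeros(len(rankings[0]))
    for rank, w in zip(rankings, weights):
        score += w * np.asarray(rank, dtype=float)
    return np.argsort(score)

r1 = [0, 1, 2, 3]      # criterion 1: sample 0 most valuable
r2 = [1, 0, 2, 3]      # criterion 2 disagrees on the top pair
order = weighted_borda([r1, r2], [0.7, 0.3])
```

Whichever criterion carries more weight wins the disagreement, so adapting the weights amounts to deciding which notion of "most valuable image" to trust at each query round.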

  20. Optimal impulsive manoeuvres and aerodynamic braking

    NASA Technical Reports Server (NTRS)

    Jezewski, D. J.

    1985-01-01

    A method developed for obtaining solutions to the aerodynamic braking problem, using impulses in the exoatmospheric phases is discussed. The solution combines primer vector theory and the results of a suboptimal atmospheric guidance program. For a specified initial and final orbit, the solution determines: (1) the minimum impulsive cost using a maximum of four impulses, (2) the optimal atmospheric entry and exit-state vectors subject to equality and inequality constraints, and (3) the optimal coast times. Numerical solutions which illustrate the characteristics of the solution are presented.

  1. The Preventive Control of a Dengue Disease Using Pontryagin Minimum Principal

    NASA Astrophysics Data System (ADS)

    Ratna Sari, Eminugroho; Insani, Nur; Lestari, Dwi

    2017-06-01

    Behaviour analysis of the host-vector model of dengue disease without control is based on the value of the basic reproduction number obtained using next-generation matrices. The model is then further developed to involve a preventive control minimizing the contact between host and vector. The purpose is to obtain an optimal preventive strategy with minimal cost. The Pontryagin Minimum Principle is used to find the optimal control analytically. The derived optimality system is then solved numerically to investigate the control effort required to reduce the infected class.

  2. Reliability and agreement in the use of four- and six-point ordinal scales for the assessment of erythema in digital images of canine skin.

    PubMed

    Hill, Peter B

    2015-06-01

    Grading of erythema in clinical practice is a subjective assessment that cannot be confirmed using a definitive test; nevertheless, erythema scores are typically measured in clinical trials assessing the response to treatment interventions. Most commonly, ordinal scales are used for this purpose, but the optimal number of categories in such scales has not been determined. This study aimed to compare the reliability and agreement of a four-point and a six-point ordinal scale for the assessment of erythema in digital images of canine skin. Fifteen digital images showing varying degrees of erythema were assessed by specialist dermatologists and laypeople, using either the four-point or the six-point scale. Reliability between the raters was assessed using intraclass correlation coefficients and Cronbach's α. Agreement was assessed using the variation ratio (the percentage of respondents who chose the mode, the most common answer). Intraobserver variability was assessed by comparing the results of two grading sessions, at least 6 weeks apart. Both scales demonstrated high reliability, with intraclass correlation coefficient values and Cronbach's α above 0.99. However, the four-point scale demonstrated significantly superior agreement, with variation ratios for the four-point scale averaging 74.8%, compared with 56.2% for the six-point scale. Intraobserver consistency for the four-point scale was very high. Although both scales demonstrated high reliability, the four-point scale was superior in terms of agreement. For the assessment of erythema in clinical trials, a four-point ordinal scale is recommended. © 2014 ESVD and ACVD.
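The agreement statistic used in the study is straightforward to compute. Note the abstract's usage (the share of raters choosing the mode) differs from the textbook variation ratio, which is one minus that share; the sketch follows the abstract:

```python
from collections import Counter

def variation_ratio(ratings):
    """Agreement as defined in the abstract: the percentage of
    raters who chose the mode (the most common score).
    Higher values mean better agreement."""
    counts = Counter(ratings)
    return 100.0 * max(counts.values()) / len(ratings)

# 8 of 10 raters pick grade 2 on a four-point scale -> 80% agreement
print(variation_ratio([2, 2, 2, 2, 2, 2, 2, 2, 1, 3]))  # → 80.0
```

With more scale categories the votes spread over more options, which is exactly why the six-point scale produced the lower average (56.2% vs. 74.8%).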

  3. Optimal multiguidance integration in insect navigation.

    PubMed

    Hoinville, Thierry; Wehner, Rüdiger

    2018-03-13

    In the last decades, desert ants have become model organisms for the study of insect navigation. In finding their way, they use two major navigational routines: path integration using a celestial compass and landmark guidance based on sets of panoramic views of the terrestrial environment. It has been claimed that this information would enable the insect to acquire and use a centralized cognitive map of its foraging terrain. Here, we present a decentralized architecture, in which the concurrently operating path integration and landmark guidance routines contribute optimally to the directions to be steered, with "optimal" meaning maximizing the certainty (reliability) of the combined information. At any one time during its journey, the animal computes a path integration (global) vector and landmark guidance (local) vector, in which the length of each vector is proportional to the certainty of the individual estimates. Hence, these vectors represent the limited knowledge that the navigator has at any one place about the direction of the goal. The sum of the global and local vectors indicates the navigator's optimal directional estimate. Wherever applied, this decentralized model architecture is sufficient to simulate the results of quite a number of diverse cue-conflict experiments, which have recently been performed in various behavioral contexts by different authors in both desert ants and honeybees. They include even those experiments that have deliberately been designed by former authors to strengthen the evidence for a metric cognitive map in bees.
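The decentralized combination rule is a short computation: each routine contributes a direction vector whose length is proportional to its certainty, and the steered heading is the angle of the vector sum. A minimal sketch with illustrative numbers:

```python
import numpy as np

def combined_heading(dir_global, cert_global, dir_local, cert_local):
    """Sum a path-integration (global) vector and a landmark-guidance
    (local) vector, each a unit direction scaled by its certainty;
    return the heading of the resultant, so the more reliable cue
    dominates."""
    v = (cert_global * np.array([np.cos(dir_global), np.sin(dir_global)])
         + cert_local * np.array([np.cos(dir_local), np.sin(dir_local)]))
    return np.arctan2(v[1], v[0])

# PI says 0 rad with certainty 2, landmarks say pi/2 with certainty 1:
theta = combined_heading(0.0, 2.0, np.pi / 2, 1.0)   # ≈ 0.4636 rad
```

The resultant falls between the two cue directions but closer to the more certain one, which is the behavioural signature the cue-conflict experiments test for.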

  4. On the use of co-ordinate stretching in the numerical computation of high frequency scattering [of jet engine noise by fuselage]

    NASA Technical Reports Server (NTRS)

    Bayliss, A.

    1978-01-01

    The scattering of the sound of a jet engine by an airplane fuselage is modeled by solving the axially symmetric Helmholtz equation exterior to a long thin ellipsoid. The integral equation method based on the single layer potential formulation is used. A family of coordinate systems on the body is introduced and an algorithm is presented to determine the optimal coordinate system. Numerical results verify that the optimal choice enables the solution to be computed with a grid that is coarse relative to the wavelength.

  5. Optimal cue integration in ants.

    PubMed

    Wystrach, Antoine; Mangan, Michael; Webb, Barbara

    2015-10-07

    In situations with redundant or competing sensory information, humans have been shown to perform cue integration, weighting different cues according to their certainty in a quantifiably optimal manner. Ants have been shown to merge the directional information available from their path integration (PI) and visual memory, but as yet it is not clear that they do so in a way that reflects the relative certainty of the cues. In this study, we manipulate the variance of the PI home vector by allowing ants (Cataglyphis velox) to run different distances and testing their directional choice when the PI vector direction is put in competition with visual memory. Ants show progressively stronger weighting of their PI direction as PI length increases. The weighting is quantitatively predicted by modelling the expected directional variance of home vectors of different lengths and assuming optimal cue integration. However, a subsequent experiment suggests ants may not actually compute an internal estimate of the PI certainty, but are using the PI home vector length as a proxy. © 2015 The Author(s).

  6. Feature Vector Construction Method for IRIS Recognition

    NASA Astrophysics Data System (ADS)

    Odinokikh, G.; Fartukov, A.; Korobkin, M.; Yoo, J.

    2017-05-01

    One of the basic stages of the iris recognition pipeline is the iris feature vector construction procedure. The procedure extracts the iris texture information relevant to subsequent comparison. Thorough investigation of feature vectors obtained from the iris showed that not all the vector elements are equally relevant. Two characteristics determine the utility of a vector element: fragility and discriminability. Conventional iris feature extraction methods treat fragility as feature vector instability without regard to the nature of that instability. This work separates the sources of instability into natural and encoding-induced, which helps to investigate each source of instability independently. Following this separation, a novel approach to iris feature vector construction is proposed. The approach consists of two steps: iris feature extraction using Gabor filtering with optimal parameters, and quantization with separate, preliminarily optimized fragility thresholds. The proposed method has been tested on two different datasets of iris images captured under changing environmental conditions. The testing results show that the proposed method surpasses all the prior-art methods in recognition accuracy on both datasets.
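The Gabor-filtering-plus-thresholded-quantization pipeline can be sketched in one dimension; the filter parameters and threshold below are illustrative, not the optimized values from the paper:

```python
import numpy as np

def iris_code_1d(signal, freq=0.1, sigma=4.0, frag_thresh=0.05):
    """Sketch of Gabor-based encoding with a fragility threshold:
    convolve an iris texture row with a complex 1-D Gabor filter,
    take the signs of the real/imaginary responses as code bits,
    and mask out 'fragile' bits whose response magnitude falls
    below the threshold (those bits flip easily under noise)."""
    t = np.arange(-16, 17)
    gabor = np.exp(-t**2 / (2 * sigma**2)) * np.exp(2j * np.pi * freq * t)
    resp = np.convolve(signal, gabor, mode="same")
    bits = np.stack([resp.real > 0, resp.imag > 0]).astype(np.uint8)
    mask = np.stack([np.abs(resp.real), np.abs(resp.imag)]) > frag_thresh
    return bits, mask   # compare codes only where both masks are set
```

Matching then counts bit disagreements only at positions both masks keep, so the unstable bits identified at enrolment never contribute to the distance.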

  7. Initialization of Formation Flying Using Primer Vector Theory

    NASA Technical Reports Server (NTRS)

    Mailhe, Laurie; Schiff, Conrad; Folta, David

    2002-01-01

    In this paper, we extend primer vector analysis to formation flying. Optimization of the classical rendezvous or free-time transfer problem between two orbits using primer vector theory has been extensively studied for one spacecraft. However, an increasing number of missions are now considering flying a set of spacecraft in close formation. Missions such as the Magnetospheric MultiScale (MMS) and Leonardo-BRDF (Bidirectional Reflectance Distribution Function) need to determine strategies to transfer each spacecraft from the common launch orbit to their respective operational orbit. In addition, all the spacecraft must synchronize their states so that they achieve the same desired formation geometry over each orbit. This periodicity requirement imposes constraints on the boundary conditions that can be used for the primer vector algorithm. In this work we explore the impact of the periodicity requirement in optimizing each spacecraft transfer trajectory using primer vector theory. We first present our adaptation of primer vector theory to formation flying. Using this method, we then compute the ΔV budget for each spacecraft subject to different formation endpoint constraints.

  8. Attitude determination using vector observations: A fast optimal matrix algorithm

    NASA Technical Reports Server (NTRS)

    Markley, F. Landis

    1993-01-01

    The attitude matrix minimizing Wahba's loss function is computed directly by a method that is competitive with the fastest known algorithm for finding this optimal estimate. The method also provides an estimate of the attitude error covariance matrix. Analysis of the special case of two vector observations identifies those cases for which the TRIAD or algebraic method minimizes Wahba's loss function.
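The estimate in question can be sketched with the standard SVD solution of Wahba's problem; Markley's fast algorithm avoids computing the full SVD, but the SVD form is the clearest reference implementation:

```python
import numpy as np

def wahba_svd(body_vecs, ref_vecs, weights):
    """SVD solution of Wahba's problem: find the rotation A
    minimizing sum_i w_i ||b_i - A r_i||^2 for weighted pairs of
    body-frame and reference-frame unit vectors."""
    B = sum(w * np.outer(b, r)
            for w, b, r in zip(weights, body_vecs, ref_vecs))
    U, _, Vt = np.linalg.svd(B)
    d = np.linalg.det(U) * np.linalg.det(Vt)
    return U @ np.diag([1.0, 1.0, d]) @ Vt   # force a proper rotation

# two noise-free, non-collinear observations recover the attitude exactly
```

The determinant correction is what distinguishes the optimal proper rotation from a possible reflection, and the same two-observation special case is where TRIAD coincides with the Wahba optimum.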

  9. Automated vector selection of SIVQ and parallel computing integration MATLAB™: Innovations supporting large-scale and high-throughput image analysis studies.

    PubMed

    Cheng, Jerome; Hipp, Jason; Monaco, James; Lucas, David R; Madabhushi, Anant; Balis, Ulysses J

    2011-01-01

    Spatially invariant vector quantization (SIVQ) is a texture- and color-based image matching algorithm that queries the image space through the use of ring vectors. In prior studies, the selection of one or more optimal vectors for a particular feature of interest required a manual process, with the user initially stochastically selecting candidate vectors and subsequently testing them upon other regions of the image to verify the vector's sensitivity and specificity properties (typically by reviewing a resultant heat map). In carrying out the prior efforts, the SIVQ algorithm was noted to exhibit highly scalable computational properties, where each region of analysis can take place independently of others, making a compelling case for the exploration of its deployment on high-throughput computing platforms, with the hypothesis that such an exercise would result in performance gains that scale linearly with increasing processor count. An automated process was developed for the selection of optimal ring vectors to serve as the predicate matching operator in defining histopathological features of interest. Briefly, candidate vectors were generated from every possible coordinate origin within a user-defined vector selection area (VSA) and subsequently compared against user-identified positive and negative "ground truth" regions on the same image. Each vector from the VSA was assessed for its goodness-of-fit to both the positive and negative areas via the receiver operating characteristic (ROC) transfer function, with each assessment resulting in an associated area-under-the-curve (AUC) figure of merit. Use of the above-mentioned automated vector selection process was demonstrated in two use cases: first, to identify malignant colonic epithelium, and second, to identify soft tissue sarcoma. For both examples, a very satisfactory optimized vector was identified, as defined by the AUC metric. Finally, as an additional effort directed towards attaining high-throughput capability for the SIVQ algorithm, we demonstrated its successful incorporation with the MATrix LABoratory (MATLAB™) application interface. The SIVQ algorithm is suitable for automated vector selection settings and high-throughput computation.
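The AUC figure of merit used to rank candidate ring vectors can be computed without tracing the ROC curve, via the rank-sum (Mann-Whitney) identity; the match scores below are illustrative:

```python
import numpy as np

def auc(pos_scores, neg_scores):
    """Area under the ROC curve as the probability that a candidate
    vector scores a random positive ground-truth region above a
    random negative one (ties count half)."""
    pos, neg = np.asarray(pos_scores), np.asarray(neg_scores)
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (pos.size * neg.size)

# scores of one candidate vector on positive vs. negative regions
print(auc([0.9, 0.8, 0.5], [0.4, 0.6]))
```

Because each candidate's AUC is computed independently, this is exactly the per-vector work unit that parallelizes across processors in the high-throughput setting.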

  10. Supercomputer optimizations for stochastic optimal control applications

    NASA Technical Reports Server (NTRS)

    Chung, Siu-Leung; Hanson, Floyd B.; Xu, Huihuang

    1991-01-01

    Supercomputer optimizations for a computational method of solving stochastic, multibody, dynamic programming problems are presented. The computational method is valid for a general class of optimal control problems that are nonlinear, multibody dynamical systems, perturbed by general Markov noise in continuous time, i.e., nonsmooth Gaussian as well as jump Poisson random white noise. Optimization techniques for vector multiprocessors or vectorizing supercomputers include advanced data structures, loop restructuring, loop collapsing, blocking, and compiler directives. These advanced computing techniques and supercomputing hardware help alleviate Bellman's curse of dimensionality in dynamic programming computations, by permitting the solution of large multibody problems. Possible applications include lumped flight dynamics models for uncertain environments, such as large scale and background random aerospace fluctuations.

  11. Medical Image Compression Based on Vector Quantization with Variable Block Sizes in Wavelet Domain

    PubMed Central

    Jiang, Huiyan; Ma, Zhiyuan; Hu, Yang; Yang, Benqiang; Zhang, Libo

    2012-01-01

    An optimized medical image compression algorithm based on wavelet transform and improved vector quantization is introduced. The goal of the proposed method is to maintain the diagnostic-related information of the medical image at a high compression ratio. Wavelet transformation was first applied to the image. For the lowest-frequency subband of wavelet coefficients, a lossless compression method was exploited; for each of the high-frequency subbands, an optimized vector quantization with variable block size was implemented. In the novel vector quantization method, the local fractal dimension (LFD) was used to analyze the local complexity of each wavelet-coefficient subband. Then an optimal quadtree method was employed to partition each wavelet-coefficient subband into sub-blocks of several sizes. After that, a modified K-means approach based on an energy function was used in the codebook training phase. Finally, vector quantization coding was implemented on the different types of sub-blocks. In order to verify the effectiveness of the proposed algorithm, JPEG, JPEG2000, and a fractal coding approach were chosen as comparison algorithms. Experimental results show that the proposed method improves compression performance and achieves a balance between the compression ratio and the image visual quality. PMID:23049544
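
As a rough sketch of the variable-block-size idea (not the paper's method: local variance stands in for the local fractal dimension, and the thresholds are invented), a quadtree can partition a subband so that complex regions receive smaller blocks:

```python
import numpy as np

def quadtree_partition(band, max_size=8, min_size=2, thresh=0.05):
    """Recursively split a square subband into square blocks: regions of
    high local complexity (variance here, a stand-in for the paper's
    local fractal dimension) get smaller blocks.  Returns (y, x, size)."""
    blocks = []

    def split(y, x, size):
        block = band[y:y + size, x:x + size]
        if size <= min_size or block.var() < thresh:
            blocks.append((y, x, size))
        else:
            h = size // 2
            for dy in (0, h):
                for dx in (0, h):
                    split(y + dy, x + dx, h)

    n = band.shape[0]
    for y in range(0, n, max_size):
        for x in range(0, n, max_size):
            split(y, x, max_size)
    return blocks
```

Each block size would then get its own trained codebook, so smooth areas are coded with large cheap blocks while detailed areas keep small blocks.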

  12. Medical image compression based on vector quantization with variable block sizes in wavelet domain.

    PubMed

    Jiang, Huiyan; Ma, Zhiyuan; Hu, Yang; Yang, Benqiang; Zhang, Libo

    2012-01-01

    An optimized medical image compression algorithm based on wavelet transform and improved vector quantization is introduced. The goal of the proposed method is to maintain the diagnostic-related information of the medical image at a high compression ratio. Wavelet transformation was first applied to the image. For the lowest-frequency subband of wavelet coefficients, a lossless compression method was exploited; for each of the high-frequency subbands, an optimized vector quantization with variable block size was implemented. In the novel vector quantization method, the local fractal dimension (LFD) was used to analyze the local complexity of each wavelet-coefficient subband. Then an optimal quadtree method was employed to partition each wavelet-coefficient subband into sub-blocks of several sizes. After that, a modified K-means approach based on an energy function was used in the codebook training phase. Finally, vector quantization coding was implemented on the different types of sub-blocks. In order to verify the effectiveness of the proposed algorithm, JPEG, JPEG2000, and a fractal coding approach were chosen as comparison algorithms. Experimental results show that the proposed method improves compression performance and achieves a balance between the compression ratio and the image visual quality.

  13. Pulmonary Nodule Recognition Based on Multiple Kernel Learning Support Vector Machine-PSO

    PubMed Central

    Li, Yang; Zhu, Zhichuan; Hou, Alin; Zhao, Qingdong; Liu, Liwei; Zhang, Lijuan

    2018-01-01

    Pulmonary nodule recognition is the core module of lung CAD. The Support Vector Machine (SVM) algorithm has been widely used in pulmonary nodule recognition, and the Multiple Kernel Learning Support Vector Machine (MKL-SVM) algorithm has achieved good results therein. When based on grid search, however, the MKL-SVM algorithm requires a long optimization time for parameter tuning, and its identification accuracy depends on the fineness of the grid. In this paper, swarm intelligence is introduced and Particle Swarm Optimization (PSO) is combined with the MKL-SVM algorithm to form the MKL-SVM-PSO algorithm, which realizes rapid global optimization of the parameters. In order to obtain the global optimal solution, different inertia weights, such as a constant inertia weight, a linear inertia weight, and a nonlinear inertia weight, are applied to pulmonary nodule recognition. The experimental results show that the model training time of the proposed MKL-SVM-PSO algorithm is only 1/7 of that of the MKL-SVM grid search algorithm, while achieving a better recognition effect. Moreover, the Euclidean norm of the normalized error vector is proposed to measure the proximity between the average fitness curve and the optimal fitness curve after convergence. Statistical analysis of the average of 20 runs with different inertia weights shows that a dynamic inertia weight is superior to a constant inertia weight in the MKL-SVM-PSO algorithm. Among the dynamic inertia weight variants, the nonlinear inertia weight yields a shorter parameter optimization time and an average fitness value after convergence that is much closer to the optimal fitness value, outperforming the linear inertia weight. Besides, a better nonlinear inertia weight is verified. PMID:29853983
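
The inertia-weight schedules compared in the paper can be illustrated with a minimal PSO; this sketch is generic (the MKL-SVM fitness function is replaced by an arbitrary objective, and all constants are assumptions, not values from the paper):

```python
import numpy as np

def pso(f, dim, n=20, iters=200, w_mode="nonlinear", c1=1.5, c2=1.5, seed=0):
    """Minimal global-best PSO.  The inertia weight w trades exploration
    for exploitation; 'constant', 'linear' and 'nonlinear' schedules
    mirror the variants compared in the paper."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n, dim))
    v = np.zeros((n, dim))
    pbest = x.copy()
    pbest_f = np.apply_along_axis(f, 1, x)
    g = pbest[pbest_f.argmin()].copy()
    for t in range(iters):
        if w_mode == "constant":
            w = 0.7
        elif w_mode == "linear":            # 0.9 decaying to 0.4
            w = 0.9 - 0.5 * t / iters
        else:                               # nonlinear (quadratic) decay
            w = 0.4 + 0.5 * (1.0 - t / iters) ** 2
        r1, r2 = rng.random((2, n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        fx = np.apply_along_axis(f, 1, x)
        better = fx < pbest_f
        pbest[better], pbest_f[better] = x[better], fx[better]
        g = pbest[pbest_f.argmin()].copy()
    return g, pbest_f.min()
```

In the paper's setting, `f` would be the cross-validated MKL-SVM error as a function of the kernel and penalty parameters, so each fitness evaluation trains a model.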

  14. Pulmonary Nodule Recognition Based on Multiple Kernel Learning Support Vector Machine-PSO.

    PubMed

    Li, Yang; Zhu, Zhichuan; Hou, Alin; Zhao, Qingdong; Liu, Liwei; Zhang, Lijuan

    2018-01-01

    Pulmonary nodule recognition is the core module of lung CAD. The Support Vector Machine (SVM) algorithm has been widely used in pulmonary nodule recognition, and the Multiple Kernel Learning Support Vector Machine (MKL-SVM) algorithm has achieved good results therein. When based on grid search, however, the MKL-SVM algorithm requires a long optimization time for parameter tuning, and its identification accuracy depends on the fineness of the grid. In this paper, swarm intelligence is introduced and Particle Swarm Optimization (PSO) is combined with the MKL-SVM algorithm to form the MKL-SVM-PSO algorithm, which realizes rapid global optimization of the parameters. In order to obtain the global optimal solution, different inertia weights, such as a constant inertia weight, a linear inertia weight, and a nonlinear inertia weight, are applied to pulmonary nodule recognition. The experimental results show that the model training time of the proposed MKL-SVM-PSO algorithm is only 1/7 of that of the MKL-SVM grid search algorithm, while achieving a better recognition effect. Moreover, the Euclidean norm of the normalized error vector is proposed to measure the proximity between the average fitness curve and the optimal fitness curve after convergence. Statistical analysis of the average of 20 runs with different inertia weights shows that a dynamic inertia weight is superior to a constant inertia weight in the MKL-SVM-PSO algorithm. Among the dynamic inertia weight variants, the nonlinear inertia weight yields a shorter parameter optimization time and an average fitness value after convergence that is much closer to the optimal fitness value, outperforming the linear inertia weight. Besides, a better nonlinear inertia weight is verified.

  15. A New Unified Analysis of Estimate Errors by Model-Matching Phase-Estimation Methods for Sensorless Drive of Permanent-Magnet Synchronous Motors and New Trajectory-Oriented Vector Control, Part II

    NASA Astrophysics Data System (ADS)

    Shinnaka, Shinji

    This paper presents a new unified analysis of estimation errors in model-matching extended-back-EMF estimation methods for the sensorless drive of permanent-magnet synchronous motors. The analytical solutions for the estimation errors, whose validity is confirmed by numerical experiments, are rich in universality and applicability. As an example of this universality and applicability, a new trajectory-oriented vector control method is proposed, which can directly realize a quasi-optimal strategy minimizing total losses, with no additional computational load, by simply orienting one of the vector-control coordinates to the associated quasi-optimal trajectory. The coordinate orientation rule, which is derived analytically, is surprisingly simple. Consequently, the trajectory-oriented vector control method can be applied to a number of conventional vector control systems that use model-matching extended-back-EMF estimation methods.

  16. Primer Vector Optimization: Survey of Theory, New Analysis and Applications

    NASA Technical Reports Server (NTRS)

    Guzman, J. J.; Mailhe, L. M.; Schiff, C.; Hughes, S. P.; Folta, D. C.

    2002-01-01

    In this paper, a summary of primer vector theory is presented. The applicability of primer vector theory is examined in an effort to understand when and why the theory can fail. For example, since the Calculus of Variations is based on "small" variations, singularities in the linearized (variational) equations of motion along the arcs must be taken into account. These singularities are a recurring problem in analyses that employ small variations. Two examples, the initialization of an orbit and a line-of-apsides rotation, are presented. Recommendations, future work, and the possible addition of other optimization techniques are also discussed.

  17. Goal-based h-adaptivity of the 1-D diamond difference discrete ordinate method

    NASA Astrophysics Data System (ADS)

    Jeffers, R. S.; Kópházi, J.; Eaton, M. D.; Févotte, F.; Hülsemann, F.; Ragusa, J.

    2017-04-01

    The quantity of interest (QoI) associated with the solution of a partial differential equation (PDE) is not, in general, the solution itself, but a functional of the solution. Dual weighted residual (DWR) error estimators are one way of estimating the error in the QoI resulting from the discretisation of the PDE. This paper aims to estimate the error in the QoI due to the spatial discretisation, where the discretisation scheme being used is the diamond difference (DD) method in space and the discrete ordinate (SN) method in angle. The QoIs are the reaction rates in detectors for 1-D fixed-source neutron transport problems and the value of the eigenvalue (Keff) for 1-D criticality problems, respectively. Local values of the DWR over individual cells are used as error indicators for goal-based mesh refinement, which aims to give an optimal mesh for a given QoI.
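
In one dimension, the goal-based h-adaptivity loop can be caricatured in a much simpler setting than SN transport: take the QoI to be an integral, use the change in each cell's contribution under refinement as a local error indicator, and bisect the worst cells. Everything here (the midpoint-rule indicator, the 30% marking fraction) is an invented stand-in for the DWR machinery, not the paper's estimator:

```python
import numpy as np

def adapt_mesh(f, a, b, n0=4, tol=1e-6, max_cells=10_000):
    """Goal-based h-adaptivity sketch for the QoI J = integral of f:
    each cell's indicator is the change in its midpoint-rule
    contribution under one bisection, and the cells carrying most of
    the indicated error are refined until the estimate drops below tol."""
    edges = list(np.linspace(a, b, n0 + 1))
    while len(edges) - 1 < max_cells:
        cells = list(zip(edges[:-1], edges[1:]))

        def coarse(l, r):
            return (r - l) * f(0.5 * (l + r))

        def fine(l, r):
            m = 0.5 * (l + r)
            return coarse(l, m) + coarse(m, r)

        eta = [abs(fine(l, r) - coarse(l, r)) for l, r in cells]
        total = sum(eta)
        if total < tol:
            break
        # mark the cells holding the top 30% of the error indicator
        marked, acc = set(), 0.0
        for i in np.argsort(eta)[::-1]:
            marked.add(int(i))
            acc += eta[i]
            if acc > 0.3 * total:
                break
        new_edges = []
        for i, (l, r) in enumerate(cells):
            new_edges.append(l)
            if i in marked:
                new_edges.append(0.5 * (l + r))
        new_edges.append(b)
        edges = new_edges
    return np.array(edges)
```

The point mirrored from the paper is that refinement concentrates where the *goal* is sensitive, not where the solution itself is largest.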

  18. Optimal source coding, removable noise elimination, and natural coordinate system construction for general vector sources using replicator neural networks

    NASA Astrophysics Data System (ADS)

    Hecht-Nielsen, Robert

    1997-04-01

    A new universal one-chart smooth manifold model for vector information sources is introduced. Natural coordinates (a particular type of chart) for such data manifolds are then defined. Uniformly quantized natural coordinates form an optimal vector quantization code for a general vector source. Replicator neural networks (a specialized type of multilayer perceptron with three hidden layers) are then introduced. As properly configured examples of replicator networks approach minimum mean squared error (e.g., via training and architecture adjustment using randomly chosen vectors from the source), these networks automatically develop a mapping which, in the limit, produces natural coordinates for arbitrary source vectors. The new concept of removable noise (a noise model applicable to a wide variety of real-world noise processes) is then discussed. Replicator neural networks, when configured to approach minimum mean squared reconstruction error (e.g., via training and architecture adjustment on randomly chosen examples from a vector source, each with randomly chosen additive removable noise contamination), in the limit eliminate removable noise and produce natural coordinates for the data vector portions of the noise-corrupted source vectors. Considerations regarding the selection of the dimension of a data manifold source model and the training/configuration of replicator neural networks are discussed.

  19. Mosquito communities and disease risk influenced by land use change and seasonality in the Australian tropics.

    PubMed

    Meyer Steiger, Dagmar B; Ritchie, Scott A; Laurance, Susan G W

    2016-07-07

    Anthropogenic land use changes have contributed considerably to the rise of emerging and re-emerging mosquito-borne diseases. These diseases appear to be increasing as a result of the novel juxtapositions of habitats and species that can result in new interchanges of vectors, diseases and hosts. We studied whether the mosquito community structure varied between habitats and seasons and whether known disease vectors displayed habitat preferences in tropical Australia. Using CDC model 512 traps, adult mosquitoes were sampled across an anthropogenic disturbance gradient of grassland, rainforest edge and rainforest interior habitats, in both the wet and dry seasons. Nonmetric multidimensional scaling (NMS) ordinations were applied to examine major gradients in the composition of mosquito and vector communities. We captured ~13,000 mosquitoes from 288 trap nights across four study sites. A community analysis identified 29 species from 7 genera. Even though mosquito abundance and richness were similar between the three habitats, the community composition varied significantly in response to habitat type. The mosquito community in rainforest interiors was distinctly different from the community in grasslands, whereas forest edges acted as an ecotone with shared communities from both forest interiors and grasslands. We found two community patterns that will influence disease risk at our study sites: first, disease-vectoring mosquito species occurred all year round; second, anthropogenic grasslands adjacent to rainforests may increase the probability of novel disease transmission through changes to the vector community on rainforest edges, as most disease-transmitting species predominantly occurred in grasslands. Our results indicate that the strong influence of anthropogenic land use change on mosquito communities could have potential implications for pathogen transmission to humans and wildlife.

  20. A novel non-uniform control vector parameterization approach with time grid refinement for flight level tracking optimal control problems.

    PubMed

    Liu, Ping; Li, Guodong; Liu, Xinggao; Xiao, Long; Wang, Yalin; Yang, Chunhua; Gui, Weihua

    2018-02-01

    A high-quality control method is essential for the implementation of an aircraft autopilot system. An optimal control problem model considering the safe aerodynamic envelope is therefore established to improve the control quality of aircraft flight level tracking. A novel non-uniform control vector parameterization (CVP) method with time grid refinement is then proposed for solving the optimal control problem. By introducing Hilbert-Huang transform (HHT) analysis, an efficient time grid refinement approach is presented and an adaptive time grid is obtained automatically. With this refinement, the proposed method needs fewer optimization parameters to achieve better control quality than the uniform refinement CVP method, while the computational cost is lower. Two well-known flight level altitude tracking problems and one minimum time cost problem are tested as illustrations, with the uniform refinement control vector parameterization method adopted as the comparative baseline. Numerical results show that the proposed method achieves better performance in terms of optimization accuracy and computational cost; meanwhile, the control quality is efficiently improved. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
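
The core of control vector parameterization, reducing the control function to one value per time-grid interval and optimizing those values, can be sketched for a linear first-order tracking toy; the grid is passed in explicitly, which is where an HHT-style refinement would choose knot placement. The dynamics, cost and names are illustrative assumptions, not the paper's aircraft model:

```python
import numpy as np

def simulate(u, h0, dt_grid):
    """Integrate the toy climb dynamics h' = u, one piecewise-constant
    control value per (possibly non-uniform) grid interval."""
    h = [h0]
    for uk, dtk in zip(u, dt_grid):
        h.append(h[-1] + dtk * uk)
    return np.array(h)

def cvp_optimize(h_ref, h0, dt_grid, reg=1e-6):
    """Control vector parameterization: for linear dynamics and a
    quadratic tracking cost, the optimal parameterized control is a
    regularised least-squares solve.  Since h_j = h0 + sum_{k<=j} dt_k u_k,
    the map u -> h is the lower-triangular matrix A below."""
    n = len(h_ref)
    A = np.tril(np.ones((n, n))) * np.asarray(dt_grid)[None, :]
    rhs = np.asarray(h_ref) - h0
    return np.linalg.solve(A.T @ A + reg * np.eye(n), A.T @ rhs)
```

On a non-uniform grid (e.g. short intervals early, where the control varies fastest), the same solve applies unchanged; in this sense refinement only changes the parameterization, not the solver.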

  1. Aircraft Engine Thrust Estimator Design Based on GSA-LSSVM

    NASA Astrophysics Data System (ADS)

    Sheng, Hanlin; Zhang, Tianhong

    2017-08-01

    In view of the necessity of a highly precise and reliable thrust estimator to achieve direct thrust control of an aircraft engine, a GSA-LSSVM-based thrust estimator design solution is proposed, based on support vector regression (SVR) and the least squares support vector machine (LSSVM) together with a new optimization algorithm, the gravitational search algorithm (GSA), by performing integrated modelling and parameter optimization. The results show that, compared to the particle swarm optimization (PSO) algorithm, GSA finds the unknown optimization parameters better and endows the developed model with better prediction and generalization ability. The model can better predict aircraft engine thrust and thus fulfills the need for direct thrust control of the aircraft engine.
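
The LSSVM core that GSA tunes can be sketched compactly: training reduces to a single linear solve, and GSA (or PSO) would only search over the hyperparameters `gamma` and `sigma`. This is a generic LSSVM regression sketch under assumed names, not the authors' engine model:

```python
import numpy as np

def rbf_kernel(X1, X2, sigma=1.0):
    """Gaussian (RBF) kernel matrix between two sets of rows."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def lssvm_fit(X, y, gamma=1000.0, sigma=0.2):
    """LSSVM regression: solve the (n+1)x(n+1) KKT system
    [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y]."""
    n = len(y)
    K = rbf_kernel(X, X, sigma)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    return sol[0], sol[1:]          # bias b, dual weights alpha

def lssvm_predict(Xq, X, b, alpha, sigma=0.2):
    return rbf_kernel(Xq, X, sigma) @ alpha + b
```

A metaheuristic wrapper would repeatedly call `lssvm_fit` with candidate `(gamma, sigma)` pairs and score each on held-out data, which is the expensive fitness evaluation GSA accelerates relative to grid search.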

  2. Minimum impulse three-body trajectories.

    NASA Technical Reports Server (NTRS)

    D'Amario, L.; Edelbaum, T. N.

    1973-01-01

    A rapid and accurate method of calculating optimal impulsive transfers in the restricted problem of three bodies has been developed. The technique combines a multi-conic method of trajectory integration with primer vector theory and an accelerated gradient method of trajectory optimization. A unique feature is that the state transition matrix and the primer vector are found analytically, without additional integrations or differentiations. The method has been applied to the determination of optimal two- and three-impulse transfers between the L2 libration point and circular orbits about both the earth and the moon.

  3. Deterministic absorbed dose estimation in computed tomography using a discrete ordinates method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Norris, Edward T.; Liu, Xin, E-mail: xinliu@mst.edu; Hsieh, Jiang

    Purpose: Organ dose estimation for a patient undergoing computed tomography (CT) scanning is very important. Although Monte Carlo methods are considered the gold standard in patient dose estimation, the computation time required is formidable for routine clinical calculations. Here, the authors investigate a deterministic method for estimating an absorbed dose more efficiently. Methods: Compared with current Monte Carlo methods, a more efficient approach to estimating the absorbed dose is to solve the linear Boltzmann equation numerically. In this study, an axial CT scan was modeled with a software package, Denovo, which solved the linear Boltzmann equation using the discrete ordinates method. The CT scanning configuration included 16 x-ray source positions, beam collimators, flat filters, and bowtie filters. The phantom was the standard 32 cm CT dose index (CTDI) phantom. Four different Denovo simulations were performed with different simulation parameters, including the number of quadrature sets and the order of the Legendre polynomial expansions. A Monte Carlo simulation was also performed for benchmarking the Denovo simulations. A quantitative comparison was made of the simulation results obtained by the Denovo and the Monte Carlo methods. Results: The difference between the simulation results of the discrete ordinates method and those of the Monte Carlo method was found to be small, with a root-mean-square difference of around 2.4%. It was found that the discrete ordinates method, with a higher order of Legendre polynomial expansions, underestimated the absorbed dose near the center of the phantom (i.e., the low-dose region). Simulations with quadrature set 8 and the first order of the Legendre polynomial expansions proved to be the most efficient computation method in the authors' study. The single-thread computation time of this deterministic simulation was 21 min on a personal computer. Conclusions: The simulation results showed that the deterministic method can be effectively used to estimate the absorbed dose in a CTDI phantom. The accuracy of the discrete ordinates method was close to that of a Monte Carlo simulation, and the primary benefit of the discrete ordinates method lies in its rapid computation speed. It is expected that further optimization of this method for routine clinical CT dose estimation will improve its accuracy and speed.

  4. Identification and characterization of highly versatile peptide-vectors that bind non-competitively to the low-density lipoprotein receptor for in vivo targeting and delivery of small molecules and protein cargos

    PubMed Central

    David, Marion; Lécorché, Pascaline; Masse, Maxime; Faucon, Aude; Abouzid, Karima; Gaudin, Nicolas; Varini, Karine; Gassiot, Fanny; Ferracci, Géraldine; Jacquot, Guillaume; Vlieghe, Patrick

    2018-01-01

    Insufficient membrane penetration of drugs, in particular biotherapeutics, and/or low target specificity remain a major drawback in their efficacy. We propose here the rational characterization and optimization of peptides to be developed as vectors that target cells expressing specific receptors involved in endocytosis or transcytosis. Among the receptors involved in receptor-mediated transport is the LDL receptor. Screening complex phage-displayed peptide libraries on the human LDLR (hLDLR) stably expressed in cell lines led to the characterization of a family of cyclic and linear peptides that specifically bind the hLDLR. The VH411 lead cyclic peptide allowed endocytosis of payloads such as the S-Tag peptide or antibodies into cells expressing the hLDLR. Size reduction and chemical optimization of this lead peptide-vector led to improved receptor affinity. The optimized peptide-vectors were successfully conjugated to cargos of different natures and sizes, including small organic molecules, siRNAs, peptides and a protein moiety such as an Fc fragment. We show that in all cases, the peptide-vectors retain their binding affinity to the hLDLR and potential for endocytosis. Following i.v. administration in wild type or ldlr-/- mice, an Fc fragment chemically conjugated or fused in C-terminal to the peptide-vectors showed significant biodistribution in LDLR-enriched organs. We have thus developed highly versatile peptide-vectors endowed with good affinity for the LDLR as a target receptor. These peptide-vectors have the potential to be further developed for efficient transport of therapeutic or imaging agents into cells (including pathological cells) or organs that express the LDLR. PMID:29485998

  5. Thyra Abstract Interface Package

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bartlett, Roscoe A.

    2005-09-01

    Thyra primarily defines a set of abstract C++ class interfaces needed for the development of abstract numerical algorithms (ANAs) such as iterative linear solvers and transient solvers, all the way up to optimization. At the foundation of these interfaces are abstract C++ classes for vectors, vector spaces, linear operators and multi-vectors. Also included in the Thyra package is C++ code for creating concrete vector, vector space, linear operator, and multi-vector subclasses, as well as other utilities to aid in the development of ANAs. Currently, very general and efficient concrete subclass implementations exist for serial and SPMD in-core vectors and multi-vectors. Code also currently exists for testing objects and providing composite objects such as product vectors.

  6. Advances in Engineering Software for Lift Transportation Systems

    NASA Astrophysics Data System (ADS)

    Kazakoff, Alexander Borisoff

    2012-03-01

    In this paper, an attempt is made at the computer modelling of ropeway ski lift systems. The logic of these systems is based on travel between two terminals using high-capacity cabins, chairs, gondolas or draw-bars. The computer codes AUTOCAD, MATLAB and Compaq Visual Fortran (version 6.6) are used in the modelling. The computer modelling of the rope systems is organized in two stages. The first stage is the preparation of the ground relief profile and the design of the lift system as a whole, according to the terrain profile and the climatic and atmospheric conditions. The ground profile is prepared by geodesists and is presented in an AUTOCAD view. The next step is the design of the lift itself, which is performed by programmes written in MATLAB. The second stage of the computer modelling is performed after the optimization of the co-ordinates and the lift profile using MATLAB. The co-ordinates and parameters are then inserted into a program written in Compaq Visual Fortran (version 6.6), which calculates 171 lift parameters, organized in 42 tables. The objective of the work presented in this paper is the computer modelling of the design and parameter derivation of ropeway systems, together with their computer-based variation and optimization.

  7. Primer vector theory applied to the linear relative-motion equations. [for N-impulse space trajectory optimization

    NASA Technical Reports Server (NTRS)

    Jezewski, D.

    1980-01-01

    Primer vector theory is used in analyzing a set of linear relative-motion equations - the Clohessy-Wiltshire (C/W) equations - to determine the criteria and necessary conditions for an optimal N-impulse trajectory. The analysis develops the analytical criteria for improving a solution by: (1) moving any dependent or independent variable in the initial and/or final orbit, and (2) adding intermediate impulses. If these criteria are violated, the theory establishes a sufficient number of analytical equations whose subsequent satisfaction will result in the optimal position vectors and times of an N-impulse trajectory. The solution is examined for the specific boundary conditions of: (1) fixed-end conditions, two-impulse, time-open transfer; (2) an orbit-to-orbit transfer; and (3) a generalized rendezvous problem.
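
Primer vector analysis on the C/W equations leans on the closed-form state transition matrix of the linearized relative motion. A sketch of that standard matrix, for the state ordered [x, y, z, vx, vy, vz] (x radial, y along-track, z cross-track) and a target on a circular orbit with mean motion n:

```python
import numpy as np

def cw_stm(n, t):
    """State transition matrix of the Clohessy-Wiltshire equations,
    i.e. the closed-form solution of xddot = 3n^2 x + 2n ydot,
    yddot = -2n xdot, zddot = -n^2 z."""
    s, c = np.sin(n * t), np.cos(n * t)
    return np.array([
        [4 - 3 * c,       0, 0,      s / n,        2 * (1 - c) / n,       0],
        [6 * (s - n * t), 1, 0,      2 * (c - 1) / n, (4 * s - 3 * n * t) / n, 0],
        [0,               0, c,      0,            0,                     s / n],
        [3 * n * s,       0, 0,      c,            2 * s,                 0],
        [6 * n * (c - 1), 0, 0,     -2 * s,        4 * c - 3,             0],
        [0,               0, -n * s, 0,            0,                     c],
    ])
```

With this matrix, the two-impulse targeting underlying the paper's fixed-end-condition case is a linear solve on the upper-right 3x3 block, and the primer vector is propagated with the transpose-inverse of the same matrix.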

  8. Processing Ordinality and Quantity: The Case of Developmental Dyscalculia

    PubMed Central

    Rubinsten, Orly; Sury, Dana

    2011-01-01

    In contrast to quantity processing, to date, the nature of ordinality has received little attention from researchers, despite the fact that both quantity and ordinality are embodied in numerical information. Here we ask if there are two separate core systems that lie at the foundations of numerical cognition: (1) the traditional and well-accepted numerical magnitude system, but also (2) a core system for representing ordinal information. We report two novel experiments of ordinal processing that explored the relation between ordinal and numerical information processing in typically developing adults and adults with developmental dyscalculia (DD). Participants made “ordered” or “non-ordered” judgments about 3 groups of dots (non-symbolic numerical stimuli; in Experiment 1) and 3 numbers (symbolic task; Experiment 2). In contrast to previous findings and arguments about a quantity deficit in DD participants, when quantity and ordinality were dissociated (as in the current tasks), DD participants exhibited a normal ratio effect in the non-symbolic ordinal task. They did not show, however, the ordinality effect. The ordinality effect in DD appeared only when area and density were randomized, but only in the descending direction. In the symbolic task, the ordinality effect was modulated by ratio and direction in both groups. These findings suggest that there might be two separate cognitive representations of ordinal and quantity information and that linguistic knowledge may facilitate estimation of ordinal information. PMID:21935374

  9. Processing ordinality and quantity: the case of developmental dyscalculia.

    PubMed

    Rubinsten, Orly; Sury, Dana

    2011-01-01

    In contrast to quantity processing, to date, the nature of ordinality has received little attention from researchers, despite the fact that both quantity and ordinality are embodied in numerical information. Here we ask if there are two separate core systems that lie at the foundations of numerical cognition: (1) the traditional and well-accepted numerical magnitude system, but also (2) a core system for representing ordinal information. We report two novel experiments of ordinal processing that explored the relation between ordinal and numerical information processing in typically developing adults and adults with developmental dyscalculia (DD). Participants made "ordered" or "non-ordered" judgments about 3 groups of dots (non-symbolic numerical stimuli; in Experiment 1) and 3 numbers (symbolic task; Experiment 2). In contrast to previous findings and arguments about a quantity deficit in DD participants, when quantity and ordinality were dissociated (as in the current tasks), DD participants exhibited a normal ratio effect in the non-symbolic ordinal task. They did not show, however, the ordinality effect. The ordinality effect in DD appeared only when area and density were randomized, but only in the descending direction. In the symbolic task, the ordinality effect was modulated by ratio and direction in both groups. These findings suggest that there might be two separate cognitive representations of ordinal and quantity information and that linguistic knowledge may facilitate estimation of ordinal information.

  10. Attacking the mosquito on multiple fronts: Insights from the Vector Control Optimization Model (VCOM) for malaria elimination.

    PubMed

    Kiware, Samson S; Chitnis, Nakul; Tatarsky, Allison; Wu, Sean; Castellanos, Héctor Manuel Sánchez; Gosling, Roly; Smith, David; Marshall, John M

    2017-01-01

    Despite great achievements by insecticide-treated nets (ITNs) and indoor residual spraying (IRS) in reducing malaria transmission, it is unlikely these tools will be sufficient to eliminate malaria transmission on their own in many settings today. Fortunately, field experiments indicate that there are many promising vector control interventions that can be used to complement ITNs and/or IRS by targeting a wide range of biological and environmental mosquito resources. The majority of these experiments were performed to test a single vector control intervention in isolation; however, there is growing evidence and consensus that effective vector control with the goal of malaria elimination will require a combination of interventions. We have developed a model of mosquito population dynamics to describe the mosquito life and feeding cycles and to optimize the impact of combinations of vector control interventions at suppressing mosquito populations. The model simulations were performed for the three main malaria vectors in sub-Saharan Africa, Anopheles gambiae s.s., An. arabiensis and An. funestus. We considered areas having low, moderate and high malaria transmission, corresponding to entomological inoculation rates of 10, 50 and 100 infective bites per person per year, respectively. In all settings, we considered a baseline ITN coverage of 50% or 80% in addition to a range of other vector control tools to interrupt malaria transmission. The model was used to sweep through parameter space to select the best optimal intervention packages. Sample model simulations indicate that starting with ITNs at a coverage of 50% (An. gambiae s.s. and An. funestus) or 80% (An. arabiensis) and adding interventions that do not require human participation (e.g. larviciding at 80% coverage, endectocide-treated cattle at 50% coverage and attractive toxic sugar baits at 50% coverage) may be sufficient to suppress all three species to the extent required to achieve local malaria elimination. The Vector Control Optimization Model (VCOM) is a computational tool to predict the impact of combined vector control interventions at the mosquito population level in a range of eco-epidemiological settings. The model predicts specific combinations of vector control tools to achieve local malaria elimination and can assist researchers and program decision-makers in the design of experimental or operational research to test vector control interventions. A corresponding graphical user interface is available for national malaria control programs and other end users.
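
The parameter-sweep idea, trying coverage combinations and keeping those that suppress the population, can be caricatured with a toy multiplicative model; the efficacy numbers and the independence assumption below are invented for illustration and are not VCOM's mosquito life-cycle model:

```python
import itertools

def growth_factor(base_R, coverages, efficacy):
    """Toy per-generation growth factor: each intervention independently
    removes a fraction (coverage * efficacy) of the mosquitoes it targets."""
    R = base_R
    for tool, cov in coverages.items():
        R *= 1.0 - cov * efficacy[tool]
    return R

def sweep(base_R, efficacy, levels=(0.0, 0.5, 0.8)):
    """Sweep all coverage combinations and keep those that suppress the
    population (growth factor < 1), sorted by total coverage effort so
    the cheapest sufficient package comes first."""
    tools = sorted(efficacy)
    winners = []
    for combo in itertools.product(levels, repeat=len(tools)):
        cov = dict(zip(tools, combo))
        if growth_factor(base_R, cov, efficacy) < 1.0:
            winners.append((sum(combo), cov))
    return sorted(winners, key=lambda w: w[0])
```

The real model replaces `growth_factor` with stage-structured life- and feeding-cycle dynamics per vector species, but the outer sweep-and-rank structure is the same.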

  11. Modular protein expression by RNA trans-splicing enables flexible expression of antibody formats in mammalian cells from a dual-host phage display vector.

    PubMed

    Shang, Yonglei; Tesar, Devin; Hötzel, Isidro

    2015-10-01

    A recently described dual-host phage display vector that allows expression of immunoglobulin G (IgG) in mammalian cells bypasses the need for subcloning of phage display clone inserts to mammalian vectors for IgG expression in large antibody discovery and optimization campaigns. However, antibody discovery and optimization campaigns usually need different antibody formats for screening, requiring reformatting of the clones in the dual-host phage display vector to an alternative vector. We developed a modular protein expression system mediated by RNA trans-splicing to enable the expression of different antibody formats from the same phage display vector. The heavy-chain region encoded by the phage display vector is directly and precisely fused to different downstream heavy-chain sequences encoded by complementing plasmids simply by joining exons in different pre-mRNAs by trans-splicing. The modular expression system can be used to efficiently express structurally correct IgG and Fab fragments or other antibody formats from the same phage display clone in mammalian cells without clone reformatting.

  12. Methods, systems and apparatus for optimization of third harmonic current injection in a multi-phase machine

    DOEpatents

    Gallegos-Lopez, Gabriel

    2012-10-02

    Methods, systems and apparatus are provided for increasing voltage utilization in a five-phase vector controlled machine drive system that employs third harmonic current injection to increase torque and power output by a five-phase machine. To do so, a fundamental current angle of a fundamental current vector is optimized for each particular torque-speed operating point of the five-phase machine.

  13. Direct model-based predictive control scheme without cost function for voltage source inverters with reduced common-mode voltage

    NASA Astrophysics Data System (ADS)

    Kim, Jae-Chang; Moon, Sung-Ki; Kwak, Sangshin

    2018-04-01

    This paper presents a direct model-based predictive control scheme for voltage source inverters (VSIs) with reduced common-mode voltages (CMVs). The developed method directly finds optimal vectors without repetitive calculation of a cost function. To adjust output currents while keeping the CMVs in the range of -Vdc/6 to +Vdc/6, the developed method uses, as finite control resources, only the non-zero voltage vectors, excluding the zero voltage vectors, which produce CMVs of ±Vdc/2 in the VSI. In a model-based predictive control (MPC), not using zero voltage vectors increases the output current ripples and the current errors. To alleviate these problems, the developed method uses two non-zero voltage vectors in one sampling step. In addition, the voltage vectors scheduled to be used are directly selected at every sampling step once the developed method calculates the future reference voltage vector, saving the effort of repeatedly calculating a cost function. The two non-zero voltage vectors are then optimally allocated to make the output current approach the reference current as closely as possible. Thus, low CMV, rapid current-following capability and satisfactory output current ripple performance are attained by the developed method. The results of a simulation and an experiment verify the effectiveness of the developed method.
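
    The two-vector allocation step can be sketched geometrically. The sector search and duty-ratio solve below are a generic space-vector construction over the six active vectors of an assumed two-level VSI; the names and normalization are illustrative, not the paper's exact selection rule.

```python
import math

# The six active (non-zero) voltage vectors of a two-level VSI in the
# alpha-beta plane; zero vectors are deliberately excluded, as in the
# abstract, to keep the common-mode voltage within +/- Vdc/6.
def active_vectors(vdc):
    mag = 2.0 / 3.0 * vdc
    return [(mag * math.cos(k * math.pi / 3), mag * math.sin(k * math.pi / 3))
            for k in range(6)]

def pick_two_vectors(v_ref, vdc):
    """Select the two active vectors adjacent to the reference vector and
    split the sampling period between them by solving d1*v1 + d2*v2 = v_ref."""
    vecs = active_vectors(vdc)
    theta = math.atan2(v_ref[1], v_ref[0]) % (2 * math.pi)
    k = int(theta // (math.pi / 3))            # sector index 0..5
    v1, v2 = vecs[k], vecs[(k + 1) % 6]
    det = v1[0] * v2[1] - v1[1] * v2[0]        # 2x2 system for duty ratios
    d1 = (v_ref[0] * v2[1] - v_ref[1] * v2[0]) / det
    d2 = (v1[0] * v_ref[1] - v1[1] * v_ref[0]) / det
    return (k, (k + 1) % 6), (d1, d2)
```

Any reference inside the hexagon yields non-negative duty ratios, so the selected pair reconstructs the reference exactly over one sampling step.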

  14. Application of three controls optimally in a vector-borne disease - a mathematical study

    NASA Astrophysics Data System (ADS)

    Kar, T. K.; Jana, Soovoojeet

    2013-10-01

    We have proposed and analyzed a vector-borne disease model with three types of controls for the eradication of the disease. Four different classes for the human population, namely susceptible, infected, recovered and vaccinated, and two different classes for the vector population, namely susceptible and infected, are considered. In the first part of our analysis the disease dynamics are described for fixed controls and some inferences are drawn regarding the spread of the disease. Next the optimal control problem is formulated and solved considering the control parameters as time dependent. Different possible combinations of controls are used and their effectiveness is compared by numerical simulation.

  15. Optimal integer resolution for attitude determination using global positioning system signals

    NASA Technical Reports Server (NTRS)

    Crassidis, John L.; Markley, F. Landis; Lightsey, E. Glenn

    1998-01-01

    In this paper, a new motion-based algorithm for GPS integer ambiguity resolution is derived. The first step of this algorithm converts the reference sightline vectors into body frame vectors. This is accomplished by an optimal vectorized transformation of the phase difference measurements. The result of this transformation leads to the conversion of the integer ambiguities to vectorized biases. This essentially converts the problem to the familiar magnetometer-bias determination problem, for which an optimal and efficient solution exists. Also, the formulation in this paper is re-derived to provide a sequential estimate, so that a suitable stopping condition can be found during the vehicle motion. The advantages of the new algorithm include: it does not require an a priori estimate of the vehicle's attitude; it provides an inherent integrity check using a covariance-type expression; and it can sequentially estimate the ambiguities during the vehicle motion. The only disadvantage of the new algorithm is that it requires at least three non-coplanar baselines. The performance of the new algorithm is tested on a dynamic hardware simulator.
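
    The core idea, integer ambiguities becoming estimable biases, can be illustrated with a linear least-squares toy. This sketch assumes a single unknown baseline, one shared ambiguity and noise-free phase differences; the function names and the one-bias simplification are illustrative, not the paper's formulation.

```python
import numpy as np

# Each phase measurement is modeled as dot(sightline, baseline) + n, with
# the integer n treated as an unknown real bias estimated jointly with the
# baseline by linear least squares, then rounded. Needs >= 4 well-spread
# sightline directions for the 4 unknowns.
def resolve_integer(sightlines, phases):
    S = np.asarray(sightlines, float)
    A = np.hstack([S, np.ones((S.shape[0], 1))])       # [baseline | bias]
    sol, *_ = np.linalg.lstsq(A, np.asarray(phases, float), rcond=None)
    return sol[:3], int(round(sol[3]))                  # baseline, integer
```

With clean synthetic data the rounded bias recovers the integer exactly, which mirrors the covariance-based integrity check the abstract mentions: the bias estimate comes with a variance that tells you when rounding is safe.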

  16. The effect of ordinances requiring smoke-free restaurants and bars on revenues: a follow-up.

    PubMed Central

    Glantz, S A; Smith, L R

    1997-01-01

    OBJECTIVES: The purpose of this study was to extend an earlier evaluation of the economic effects of ordinances requiring smoke-free restaurants and bars. METHODS: Sales tax data for 15 cities with smoke-free restaurant ordinances, 5 cities and 2 counties with smoke-free bar ordinances, and matched comparison locations were analyzed by multiple regression, including time and a dummy variable for the ordinance. RESULTS: Ordinances had no significant effect on the fraction of total retail sales that went to eating and drinking places or on the ratio between sales in communities with ordinances and sales in comparison communities. Ordinances requiring smoke-free bars had no significant effect on the fraction of revenues going to eating and drinking places that serve all types of liquor. CONCLUSIONS: Smoke-free ordinances do not adversely affect either restaurant or bar sales. PMID:9357356

  17. Design of 2D time-varying vector fields.

    PubMed

    Chen, Guoning; Kwatra, Vivek; Wei, Li-Yi; Hansen, Charles D; Zhang, Eugene

    2012-10-01

    Design of time-varying vector fields, i.e., vector fields that can change over time, has a wide variety of important applications in computer graphics. Existing vector field design techniques do not address time-varying vector fields. In this paper, we present a framework for the design of time-varying vector fields, for both planar domains and manifold surfaces. Our system supports the creation and modification of various time-varying vector fields with desired spatial and temporal characteristics through several design metaphors, including streamlines, pathlines, singularity paths, and bifurcations. These design metaphors are integrated into an element-based design to generate the time-varying vector fields via a sequence of basis field summations or spatially constrained optimizations at the sampled times. Key-frame design and field deformation are also introduced to support other user design scenarios. Accordingly, a spatial-temporal constrained optimization and the time-varying transformation are employed to generate the desired fields for these two design scenarios, respectively. We apply the time-varying vector fields generated using our design system to a number of important computer graphics applications that require controllable dynamic effects, such as evolving surface appearance, dynamic scene design, steerable crowd movement, and painterly animation. Many of these are difficult or impossible to achieve via prior simulation-based methods. In these applications, the time-varying vector fields have been applied as either orientation fields or advection fields to control the instantaneous appearance or evolving trajectories of the dynamic effects.
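
    The element-based design idea can be sketched in a few lines: a time-varying 2D field assembled as a sum of basis elements, here radial "source" elements with time-varying strengths. The element type and weighting are illustrative assumptions, not the paper's basis functions.

```python
import math

# Evaluate a time-varying 2D vector field as a summation of basis elements.
# Each element is (center_x, center_y, strength(t)); the field of one element
# points radially away from its center and decays with squared distance.
def field(x, y, t, elements):
    vx = vy = 0.0
    for (cx, cy, strength) in elements:
        dx, dy = x - cx, y - cy
        r2 = dx * dx + dy * dy + 1e-9      # avoid the singularity at the center
        w = strength(t) / r2
        vx += w * dx
        vy += w * dy
    return vx, vy
```

Animating `strength(t)`, e.g. `lambda t: math.sin(t)`, already produces a field whose singularities change over time, which is the kind of temporal control the design metaphors above expose to the user.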

  18. Ant Navigation: Fractional Use of the Home Vector

    PubMed Central

    Cheung, Allen; Hiby, Lex; Narendra, Ajay

    2012-01-01

    Home is a special location for many animals, offering shelter from the elements, protection from predation, and a common place for gathering of the same species. Not surprisingly, many species have evolved efficient, robust homing strategies, which are used as part of each and every foraging journey. A basic strategy used by most animals is to take the shortest possible route home by accruing the net distances and directions travelled during foraging, a strategy well known as path integration (PI). This strategy is part of the navigation toolbox of ants occupying different landscapes. However, when there is a visual discrepancy between test and training conditions, the distance travelled by animals relying on the path integrator varies dramatically between species: from 90% of the home vector to an absolute distance of only 50 cm. Here we ask what the theoretically optimal balance between PI-driven and landmark-driven navigation should be. In combination with well-established results from optimal search theory, we show analytically that this fractional use of the home vector is an optimal homing strategy under a variety of circumstances. Assuming there is a familiar route that an ant recognizes, theoretically optimal search should always begin at some fraction of the home vector, depending on the region of familiarity. These results are shown to be largely independent of the search algorithm used. Ant species from different habitats appear to have optimized their navigation strategy based on the availability and nature of navigational information content in their environment. PMID:23209744

  19. Gain-adaptive vector quantization for medium-rate speech coding

    NASA Technical Reports Server (NTRS)

    Chen, J.-H.; Gersho, A.

    1985-01-01

    A class of adaptive vector quantizers (VQs) that can dynamically adjust the 'gain' of codevectors according to the input signal level is introduced. The encoder uses a gain estimator to determine a suitable normalization of each input vector prior to VQ coding. The normalized vectors have reduced dynamic range and can then be more efficiently coded. At the receiver, the VQ decoder output is multiplied by the estimated gain. Both forward and backward adaptation are considered and several different gain estimators are compared and evaluated. An approach to optimizing the design of gain estimators is introduced. Some of the more obvious techniques for achieving gain adaptation are substantially less effective than the use of optimized gain estimators. A novel design technique that is needed to generate the appropriate gain-normalized codebook for the vector quantizer is introduced. Experimental results show that a significant gain in segmental SNR can be obtained over nonadaptive VQ with a negligible increase in complexity.
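
    The forward-adaptive scheme described here can be sketched directly. In this minimal sketch the gain estimator is simply the RMS of the input vector and the codebook is a toy stand-in; the paper's optimized gain estimators and gain-normalized codebook design are not reproduced.

```python
import numpy as np

# Forward gain-adaptive VQ sketch: the encoder estimates a gain, normalizes
# the input, and picks the nearest codevector; the decoder rescales by the
# same (transmitted) gain. Codebook rows are gain-normalized codevectors.
def encode(x, codebook):
    x = np.asarray(x, float)
    gain = float(np.sqrt(np.mean(x ** 2))) or 1.0   # RMS; 1.0 guards silence
    idx = int(np.argmin(np.sum((codebook - x / gain) ** 2, axis=1)))
    return idx, gain

def decode(idx, gain, codebook):
    return gain * codebook[idx]
```

Because the codebook only has to cover gain-normalized shapes rather than the full dynamic range, the same number of codevectors covers the input space more densely, which is the source of the SNR gain reported above.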

  20. A New Unified Analysis of Estimate Errors by Model-Matching Phase-Estimation Methods for Sensorless Drive of Permanent-Magnet Synchronous Motors and New Trajectory-Oriented Vector Control, Part I

    NASA Astrophysics Data System (ADS)

    Shinnaka, Shinji; Sano, Kousuke

    This paper presents a new unified analysis of estimate errors by model-matching phase-estimation methods such as rotor-flux state-observers, back EMF state-observers, and back EMF disturbance-observers, for sensorless drive of permanent-magnet synchronous motors. Analytical solutions about estimate errors, whose validity is confirmed by numerical experiments, are rich in universality and applicability. As an example of universality and applicability, a new trajectory-oriented vector control method is proposed, which can realize directly quasi-optimal strategy minimizing total losses with no additional computational loads by simply orienting one of vector-control coordinates to the associated quasi-optimal trajectory. The coordinate orientation rule, which is analytically derived, is surprisingly simple. Consequently the trajectory-oriented vector control method can be applied to a number of conventional vector control systems using one of the model-matching phase-estimation methods.

  1. Optimization of the Brillouin operator on the KNL architecture

    NASA Astrophysics Data System (ADS)

    Dürr, Stephan

    2018-03-01

    Experiences with optimizing the matrix-times-vector application of the Brillouin operator on the Intel KNL processor are reported. Without adjustments to the memory layout, performance figures of 360 Gflop/s in single and 270 Gflop/s in double precision are observed. This is with Nc = 3 colors, Nv = 12 right-hand sides, Nthr = 256 threads, on lattices of size 32³ × 64, using exclusively OMP pragmas. Interestingly, the same routine performs quite well on Intel Core i7 architectures, too. Some observations on the much harder Wilson fermion matrix-times-vector optimization problem are added.

  2. Necessary conditions for the optimality of variable rate residual vector quantizers

    NASA Technical Reports Server (NTRS)

    Kossentini, Faouzi; Smith, Mark J. T.; Barnes, Christopher F.

    1993-01-01

    Residual vector quantization (RVQ), or multistage VQ, as it is also called, has recently been shown to be a competitive technique for data compression. The competitive performance of RVQ reported here results from the joint optimization of variable rate encoding and RVQ direct-sum codebooks. In this paper, necessary conditions for the optimality of variable rate RVQs are derived, and an iterative descent algorithm based on a Lagrangian formulation is introduced for designing RVQs having minimum average distortion subject to an entropy constraint. Simulation results for these entropy-constrained RVQs (EC-RVQs) are presented for memoryless Gaussian, Laplacian, and uniform sources. A Gauss-Markov source is also considered. The performance is superior to that of entropy-constrained scalar quantizers (EC-SQs) and practical entropy-constrained vector quantizers (EC-VQs), and is competitive with that of some of the best source coding techniques that have appeared in the literature.
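
    One ingredient of the Lagrangian formulation can be sketched as the encoding rule it induces: at each RVQ stage, pick the codevector minimizing distortion plus lambda times its ideal codeword length. The codebooks and probabilities below are toy assumptions; the paper's iterative codebook-design descent is not reproduced.

```python
import numpy as np

# Entropy-constrained residual VQ encoding sketch: stage by stage, choose
# the codevector minimizing  squared error + lam * (-log2 probability),
# then quantize the remaining residual at the next stage.
def ecrvq_encode(x, stage_codebooks, stage_probs, lam):
    residual = np.asarray(x, float)
    path = []
    for C, p in zip(stage_codebooks, stage_probs):
        rates = -np.log2(p)                         # ideal codeword lengths
        costs = np.sum((C - residual) ** 2, axis=1) + lam * rates
        j = int(np.argmin(costs))
        path.append(j)
        residual = residual - C[j]
    return path, residual                           # indices, final error
```

Sweeping `lam` trades distortion against rate: `lam = 0` recovers plain nearest-neighbor RVQ, while large `lam` biases the encoder toward frequent (cheap) codewords.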

  3. Tomo3D 2.0--exploitation of advanced vector extensions (AVX) for 3D reconstruction.

    PubMed

    Agulleiro, Jose-Ignacio; Fernandez, Jose-Jesus

    2015-02-01

    Tomo3D is a program for fast tomographic reconstruction on multicore computers. Its high speed stems from code optimization, vectorization with Streaming SIMD Extensions (SSE), multithreading and optimization of disk access. Recently, Advanced Vector eXtensions (AVX) have been introduced in the x86 processor architecture. Compared to SSE, AVX doubles the number of simultaneous operations, thus pointing to a potential twofold gain in speed. However, in practice, achieving this potential is extremely difficult. Here, we provide a technical description and an assessment of the optimizations included in Tomo3D to take advantage of AVX instructions. Tomo3D 2.0 allows huge reconstructions to be calculated on standard computers in a matter of minutes. Thus, it will be a valuable tool for electron tomography studies with increasing resolution needs.

  4. Structure, function and five basic needs of the global health research system

    PubMed Central

    Rudan, Igor; Sridhar, Devi

    2016-01-01

    Background Two major initiatives that were set up to support and co-ordinate global health research efforts have been largely discontinued in recent years: the Global Forum for Health Research and the World Health Organization's Department for Research Policy and Cooperation. These developments provide an interesting case study into the factors that contribute to the sustainability of initiatives to support and co-ordinate global health research in the 21st century. Methods We reviewed the history of attempts to govern, support or co-ordinate research in global health. Moreover, we studied the changes and shifts in funding flows attributed to global health research. This allowed us to map the structure of the global health research system, as it has evolved under the increased funding contributions of the past decade. Bearing in mind its structure, core functions and dynamic nature, we proposed a framework on how to effectively support the system to increase its efficiency. Results Based on our framework, which charted the structure and function of the global health research system and exposed places and roles for many stakeholders within the system, five basic needs emerged: (i) to co-ordinate funding among donors more effectively; (ii) to prioritize among many research ideas; (iii) to quickly recognize results of successful research; (iv) to ensure broad and rapid dissemination of results and their accessibility; and (v) to evaluate return on investments in health research. Conclusion The global health research system has evolved rapidly and spontaneously. It has not been optimally efficient, but it is possible to identify solutions that could improve this. There are already examples of effective responses to the need to prioritize research questions (e.g., the CHNRI method), to quickly recognize important research (e.g., systems used by editors of the leading journals) and to publish new knowledge rapidly and accessibly (e.g., the PLoS One journal). It is still necessary to develop tools that could assist donors in co-ordinating funding and ensuring more equity between areas in the provided support, and to evaluate the value for money invested in health research. PMID:26401270

  5. Squeezing Interval Change From Ordinal Panel Data: Latent Growth Curves With Ordinal Outcomes

    ERIC Educational Resources Information Center

    Mehta, Paras D.; Neale, Michael C.; Flay, Brian R.

    2004-01-01

    A didactic on latent growth curve modeling for ordinal outcomes is presented. The conceptual aspects of modeling growth with ordinal variables and the notion of threshold invariance are illustrated graphically using a hypothetical example. The ordinal growth model is described in terms of 3 nested models: (a) multivariate normality of the…

  6. [Extraction Optimization of Rhizome of Curcuma longa by Response Surface Methodology and Support Vector Regression].

    PubMed

    Zhou, Pei-pei; Shan, Jin-feng; Jiang, Jian-lan

    2015-12-01

    To optimize the microwave-assisted extraction of curcuminoids from Curcuma longa. On the basis of a single-factor experiment, the ethanol concentration, the ratio of liquid to solid and the microwave time were selected for further optimization. Support Vector Regression (SVR) and the Central Composite Design-Response Surface Methodology (CCD) algorithm were utilized to design and establish models respectively, while Particle Swarm Optimization (PSO) was introduced to optimize the parameters of the SVR models and to search for the optimal points of the models. The sum of curcumin, demethoxycurcumin and bisdemethoxycurcumin determined by HPLC was used as the evaluation indicator. The optimal parameters of microwave-assisted extraction were as follows: ethanol concentration of 69%, ratio of liquid to solid of 21:1, microwave time of 55 s. Under those conditions, the sum of the three curcuminoids was 28.97 mg/g (per gram of rhizome powder). Both the CCD model and the SVR model were credible, as they predicted similar process conditions and the deviation in yield was less than 1.2%.
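
    The PSO search step can be sketched as a plain swarm maximization over the three extraction factors named above (ethanol %, liquid:solid ratio, microwave time). The objective here is a stand-in surrogate, not the fitted SVR/CCD model, and all coefficients are illustrative defaults.

```python
import random

# Minimal particle swarm optimizer (maximization) with inertia 0.7 and
# cognitive/social weights 1.5; positions are clamped to the given bounds.
def pso_maximize(f, bounds, n=20, iters=60, seed=1):
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]
    pval = [f(p) for p in pos]
    g = max(range(n), key=lambda i: pval[i])
    gbest, gval = pbest[g][:], pval[g]
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))
                lo, hi = bounds[d]
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            v = f(pos[i])
            if v > pval[i]:
                pbest[i], pval[i] = pos[i][:], v
                if v > gval:
                    gbest, gval = pos[i][:], v
    return gbest, gval
```

In the workflow above one would pass the trained SVR model's prediction function as `f`, with bounds such as `[(50, 90), (10, 30), (30, 90)]` for the three factors.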

  7. ℓ(p)-Norm multikernel learning approach for stock market price forecasting.

    PubMed

    Shao, Xigao; Wu, Kun; Liao, Bifeng

    2012-01-01

    Linear multiple kernel learning model has been used for predicting financial time series. However, ℓ(1)-norm multiple support vector regression is rarely observed to outperform trivial baselines in practical applications. To allow for robust kernel mixtures that generalize well, we adopt ℓ(p)-norm multiple kernel support vector regression (1 ≤ p < ∞) as a stock price prediction model. The optimization problem is decomposed into smaller subproblems, and the interleaved optimization strategy is employed to solve the regression model. The model is evaluated on forecasting the daily stock closing prices of Shanghai Stock Index in China. Experimental results show that our proposed model performs better than ℓ(1)-norm multiple support vector regression model.

  8. Experiences in using the CYBER 203 for three-dimensional transonic flow calculations

    NASA Technical Reports Server (NTRS)

    Melson, N. D.; Keller, J. D.

    1982-01-01

    In this paper, the authors report on some of their experiences modifying two three-dimensional transonic flow programs (FLO22 and FLO27) for use on the NASA Langley Research Center CYBER 203. Both of the programs discussed were originally written for use on serial machines. Several methods were attempted to optimize the execution of the two programs on the vector machine, including: (1) leaving the program in a scalar form (i.e., serial computation) with compiler software used to optimize and vectorize the program, (2) vectorizing parts of the existing algorithm in the program, and (3) incorporating a new vectorizable algorithm (ZEBRA I or ZEBRA II) in the program.

  9. Optimization of large matrix calculations for execution on the Cray X-MP vector supercomputer

    NASA Technical Reports Server (NTRS)

    Hornfeck, William A.

    1988-01-01

    A considerable volume of large computational computer codes was developed for NASA over the past twenty-five years. This code represents algorithms developed for machines of an earlier generation. With the emergence of the vector supercomputer as a viable, commercially available machine, an opportunity exists to evaluate optimization strategies to improve the efficiency of existing software. This opportunity arises primarily from architectural differences between the latest generation of large-scale machines and the earlier, mostly uniprocessor, machines. A software package being used by NASA to perform computations on large matrices is described, and a strategy for conversion to the Cray X-MP vector supercomputer is also described.

  10. [Effects of plant viruses on vector and non-vector herbivorous arthropods and their natural enemies: a mini review].

    PubMed

    He, Xiao-Chan; Xu, Hong-Xing; Zhou, Xiao-Jun; Zheng, Xu-Song; Sun, Yu-Jian; Yang, Ya-Jun; Tian, Jun-Ce; Lü, Zhong-Xian

    2014-05-01

    Plant viruses transmitted by arthropods, as an important biotic factor, may not only directly affect the yield and quality of host plants and the development, physiological characteristics and ecological performance of their vector arthropods, but also directly or indirectly affect the non-vector herbivorous arthropods and their natural enemies in the same ecosystem, thereby influencing the whole agro-ecosystem. This paper reviews the progress on the effects of plant viruses on herbivorous arthropods, both vector and non-vector, and their natural enemies, and on the underlying ecological mechanisms, to provide a reference for optimizing the management of vector and non-vector arthropod populations and the sustainable control of plant viruses in agro-ecosystems.

  11. Predictability of a Coupled Model of ENSO Using Singular Vector Analysis: Optimal Growth and Forecast Skill.

    NASA Astrophysics Data System (ADS)

    Xue, Yan

    The optimal growth and its relationship with the forecast skill of the Zebiak and Cane model are studied using a simple statistical model best fit to the original nonlinear model and local linear tangent models about idealized climatic states (the mean background and ENSO cycles in a long model run), and the actual forecast states, including two sets of runs using two different initialization procedures. The seasonally varying Markov model best fit to a suite of 3-year forecasts in a reduced EOF space (18 EOFs) fits the original nonlinear model reasonably well and has comparable or better forecast skill. The initial error growth in a linear evolution operator A is governed by the eigenvalues of A^{T}A, and the square roots of eigenvalues and eigenvectors of A^{T}A are named singular values and singular vectors. One dominant growing singular vector is found, and the optimal 6 month growth rate is largest for a (boreal) spring start and smallest for a fall start. Most of the variation in the optimal growth rate of the two forecasts is seasonal, attributable to the seasonal variations in the mean background, except that in the cold events it is substantially suppressed. It is found that the mean background (zero anomaly) is the most unstable state, and the "forecast IC states" are more unstable than the "coupled model states". One dominant growing singular vector is found, characterized by north-south and east -west dipoles, convergent winds on the equator in the eastern Pacific and a deepened thermocline in the whole equatorial belt. This singular vector is insensitive to initial time and optimization time, but its final pattern is a strong function of initial states. The ENSO system is inherently unpredictable for the dominant singular vector can amplify 5-fold to 24-fold in 6 months and evolve into the large scales characteristic of ENSO. 
However, the inherent ENSO predictability is only a secondary factor; mismatches between the model and the data are the primary factor controlling the current forecast skill.
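
    The optimal-growth computation described above reduces to a singular value decomposition. As a minimal sketch, for a linear propagator A the perturbation amplification ||Ax||/||x|| is maximized by the leading right singular vector of A (equivalently the leading eigenvector of A^T A), and the optimal growth factor is the leading singular value; the matrix here is an arbitrary stand-in, not the Markov model fitted in the record.

```python
import numpy as np

# Leading singular value/vector of a linear propagator A: the singular
# value is the optimal growth factor over the interval A represents, and
# the right singular vector is the optimal initial perturbation.
def optimal_growth(A):
    U, s, Vt = np.linalg.svd(np.asarray(A, float))
    return s[0], Vt[0]
```

For a seasonally varying Markov model one would evaluate this for each start month's 6-month propagator, reproducing the spring-start maximum in growth rate the abstract reports.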

  12. An adaptive evolutionary multi-objective approach based on simulated annealing.

    PubMed

    Li, H; Landa-Silva, D

    2011-01-01

    A multi-objective optimization problem can be solved by decomposing it into one or more single-objective subproblems in some multi-objective metaheuristic algorithms. Each subproblem corresponds to one weighted aggregation function. For example, MOEA/D is an evolutionary multi-objective optimization (EMO) algorithm that attempts to optimize multiple subproblems simultaneously by evolving a population of solutions. However, the performance of MOEA/D highly depends on the initial setting and diversity of the weight vectors. In this paper, we present an improved version of MOEA/D, called EMOSA, which incorporates an advanced local search technique (simulated annealing) and adapts the search directions (weight vectors) corresponding to various subproblems. In EMOSA, the weight vector of each subproblem is adaptively modified at the lowest temperature in order to diversify the search toward the unexplored parts of the Pareto-optimal front. Our computational results show that EMOSA outperforms six other well-established multi-objective metaheuristic algorithms on both the (constrained) multi-objective knapsack problem and the (unconstrained) multi-objective traveling salesman problem. Moreover, the effects of the main algorithmic components and parameter sensitivities on the search performance of EMOSA are experimentally investigated.
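
    The scalarizing step shared by MOEA/D and EMOSA can be sketched in one function: each weight vector defines a subproblem via the weighted Tchebycheff function, and candidate solutions are compared on that scalar value. The reference point and objective values below are illustrative; EMOSA's annealing schedule and weight adaptation are not reproduced.

```python
# Weighted Tchebycheff aggregation: for objective vector f, weights w and
# ideal (reference) point z*, the subproblem minimizes the largest weighted
# deviation from z*. Smaller is better.
def tchebycheff(f, weights, z_star):
    return max(w * abs(fi - zi) for w, fi, zi in zip(weights, f, z_star))
```

Adapting a subproblem's search direction then amounts to perturbing its `weights` vector toward under-explored regions of the front and re-evaluating the population under the new scalarization.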

  13. On the Improvement of Convergence Performance for Integrated Design of Wind Turbine Blade Using a Vector Dominating Multi-objective Evolution Algorithm

    NASA Astrophysics Data System (ADS)

    Wang, L.; Wang, T. G.; Wu, J. H.; Cheng, G. P.

    2016-09-01

    A novel multi-objective optimization algorithm incorporating evolution strategies and vector mechanisms, referred to as VD-MOEA, is proposed and applied to the aerodynamic-structural integrated design of a wind turbine blade. In the algorithm, a set of uniformly distributed vectors is constructed to guide the population in moving rapidly toward the Pareto front and to maintain population diversity with high efficiency. Two- and three-objective designs of a 1.5 MW wind turbine blade are subsequently carried out for the optimization objectives of maximum annual energy production, minimum blade mass, and minimum extreme root thrust. The results show that the Pareto optimal solutions can be obtained in a single simulation run and are uniformly distributed in the objective space, maximally maintaining population diversity. In comparison to conventional evolution algorithms, VD-MOEA displays a dramatic improvement in algorithm performance in both convergence and diversity preservation when handling complex problems with multiple variables, objectives and constraints. This provides a reliable high-performance optimization approach for the aerodynamic-structural integrated design of wind turbine blades.

  14. Optimal control in a model of malaria with differential susceptibility

    NASA Astrophysics Data System (ADS)

    Hincapié, Doracelly; Ospina, Juan

    2014-06-01

    A malaria model with differential susceptibility is analyzed using the optimal control technique. In the model the human population is classified as susceptible, infected and recovered. Susceptibility is assumed to depend on genetic, physiological, or social characteristics that vary between individuals. The model is described by a system of differential equations that relate the human and vector populations, so that the infection is transmitted to humans by vectors and to vectors by humans. The model is analyzed using the optimal control method, where the controls consist of insecticide-treated nets and educational campaigns, and the optimality criterion is to minimize the number of infected humans while keeping the cost as low as possible. The first goal is to determine the effects of differential susceptibility on the proposed control mechanism, and the second goal is to determine the algebraic form of the basic reproductive number of the model. All computations are performed using computer algebra, specifically Maple. It is claimed that the analytical results obtained are important for the design and implementation of control measures for malaria. Some future investigations are suggested, such as the application of the method to other vector-borne diseases such as dengue or yellow fever, and the possible use of free computer algebra software such as Maxima.

  15. Application of support vector regression for optimization of vibration flow field of high-density polyethylene melts characterized by small angle light scattering

    NASA Astrophysics Data System (ADS)

    Xian, Guangming

    2018-03-01

    In this paper, the vibration flow field parameters of polymer melts in a visual slit die are optimized using an intelligent algorithm. Experimental small angle light scattering (SALS) patterns are shown to characterize the processing process. In order to capture the scattered light, a polarizer and an analyzer are placed before and after the polymer melts. The results reported in this study are obtained using high-density polyethylene (HDPE) at a rotation speed of 28 rpm. In addition, the support vector regression (SVR) analytical method is introduced for optimizing the parameters of the vibration flow field. This work establishes the general applicability of SVR for predicting the optimal parameters of vibration flow fields.

  16. Cardiac Gene Therapy: Optimization of Gene Delivery Techniques In Vivo

    PubMed Central

    Katz, Michael G.; Swain, JaBaris D.; White, Jennifer D.; Low, David; Stedman, Hansell

    2010-01-01

    Vector-mediated cardiac gene therapy holds tremendous promise as a translatable platform technology for treating many cardiovascular diseases. The ideal technique is one that is efficient and practical, allowing for global cardiac gene expression while minimizing collateral expression in other organs. Here we survey the available in vivo vector-mediated cardiac gene delivery methods, including transcutaneous, intravascular, intramuscular, and cardiopulmonary bypass techniques, with consideration of the relative merits and deficiencies of each. Review of the available techniques suggests that an optimal method for vector-mediated gene delivery to the large animal myocardium would employ retrograde and/or anterograde transcoronary gene delivery, extended vector residence time in the coronary circulation, an increased myocardial transcapillary gradient using physical methods, increased endothelial permeability with pharmacological agents, minimal collateral gene expression through isolation of the cardiac circulation from the systemic circulation, and low immunogenicity. PMID:19947886

  17. Rank-Optimized Logistic Matrix Regression toward Improved Matrix Data Classification.

    PubMed

    Zhang, Jianguang; Jiang, Jianmin

    2018-02-01

    Existing logistic regression suffers from overfitting and often fails to consider structural information; to overcome these weaknesses, we propose a novel matrix-based logistic regression. In the proposed method, 2D matrices are used directly to learn two groups of parameter vectors, one along each dimension, without vectorization, which allows the proposed method to fully exploit the structural information embedded in the 2D matrices. Further, we add a joint [Formula: see text]-norm on the two parameter matrices, which are organized by aligning each group of parameter vectors in columns. This added co-regularization term has two roles: enhancing the effect of regularization and optimizing the rank during the learning process. With our proposed fast iterative solution, we carried out extensive experiments. The results show that, in comparison to both traditional tensor-based methods and vector-based regression methods, our proposed solution achieves better performance for matrix data classification.

  18. Numerical Model of Multiple Scattering and Emission from Layering Snowpack for Microwave Remote Sensing

    NASA Astrophysics Data System (ADS)

    Jin, Y.; Liang, Z.

    2002-12-01

    The vector radiative transfer (VRT) equation is an integro-differential equation describing the multiple scattering, absorption, and transmission of the four Stokes parameters in random scattering media. From the formal integral solution of the VRT equation, low-order solutions, such as first-order scattering for a layered medium or second-order scattering for a half space, can be obtained. These low-order solutions are usually adequate at low frequencies, where high-order scattering is negligible, but iterating further to obtain high-order scattering solutions is not feasible because too many nested integrations would be involved. In spaceborne microwave remote sensing, for example, the DMSP (Defense Meteorological Satellite Program) SSM/I (Special Sensor Microwave/Imager) employs seven channels at 19, 22, 37, and 85 GHz, and multiple scattering from terrain surfaces such as snowpack cannot be neglected at these channels. The discrete ordinate and eigen-analysis method has been studied to account for multiple scattering and has been applied to remote sensing of atmospheric precipitation, snowpack, etc. Snowpack has been modeled as a layer of dense spherical particles, and the VRT for a layer of uniformly dense spherical particles has been studied numerically by the discrete ordinate method. However, due to surface melting and refrozen crusts, the snowpack stratifies to form inhomogeneous profiles of ice grain size, fractional volume, physical temperature, etc. It therefore becomes necessary to study multiple scattering and emission from stratified snowpack of dense ice grains. But the discrete ordinate and eigen-analysis method cannot simply be applied to a multilayer model, because numerically solving the resulting set of coupled VRT equations is difficult.
Stratifying the inhomogeneous medium into multiple slabs and employing the first-order Mueller matrix of each thin slab, this paper develops an iterative method to derive high-order scattering solutions for the whole scattering medium. High-order scattering and emission from inhomogeneous, stratified media of dense spherical particles are obtained numerically. Brightness temperatures are obtained at a low frequency such as 5.3 GHz without high-order scattering and at the SSM/I channels with high-order scattering. The approach is also compared with the conventional discrete ordinate method for a uniform-layer model, and numerical simulations for inhomogeneous snowpack are compared with microwave remote sensing measurements.

  19. Optimization of the transductional efficiency of lentiviral vectors: effect of sera and polycations

    PubMed Central

    Denning, Warren; Das, Suvendu; Guo, Siqi; Xu, Jun; Kappes, John C.; Hel, Zdenek

    2012-01-01

    Lentiviral vectors are widely used as effective gene-delivery vehicles. Optimization of the conditions for efficient lentiviral transduction is of high importance for a variety of research applications. The presence of positively charged polycations reduces the electrostatic repulsion forces between a negatively charged cell and an approaching enveloped lentiviral particle, resulting in an increase in transduction efficiency. Although a variety of polycations are commonly used to enhance transduction with retroviruses, the relative effect of various types of polycations on the efficiency of transduction, and on the potential bias in the determination of the titer of lentiviral vectors, is not fully understood. Here we present data suggesting that DEAE-dextran provides superior results in enhancing lentiviral transduction of most tested cell lines and primary cell cultures. The specific type and source of serum affect the efficiency of transduction of target cell populations. Non-specific binding of enhanced green fluorescent protein (EGFP)-containing membrane aggregates in the presence of DEAE-dextran does not significantly affect the determination of the titer of EGFP-expressing lentiviral vectors. In conclusion, various polycations and types of sera should be tested when optimizing lentiviral transduction of target cell populations. PMID:22407723

  20. Optimization of the transductional efficiency of lentiviral vectors: effect of sera and polycations.

    PubMed

    Denning, Warren; Das, Suvendu; Guo, Siqi; Xu, Jun; Kappes, John C; Hel, Zdenek

    2013-03-01

    Lentiviral vectors are widely used as effective gene-delivery vehicles. Optimization of the conditions for efficient lentiviral transduction is of high importance for a variety of research applications. The presence of positively charged polycations reduces the electrostatic repulsion forces between a negatively charged cell and an approaching enveloped lentiviral particle, resulting in an increase in transduction efficiency. Although a variety of polycations are commonly used to enhance transduction with retroviruses, the relative effect of various types of polycations on the efficiency of transduction, and on the potential bias in the determination of the titer of lentiviral vectors, is not fully understood. Here, we present data suggesting that DEAE-dextran provides superior results in enhancing lentiviral transduction of most tested cell lines and primary cell cultures. The specific type and source of serum affect the efficiency of transduction of target cell populations. Non-specific binding of enhanced green fluorescent protein (EGFP)-containing membrane aggregates in the presence of DEAE-dextran does not significantly affect the determination of the titer of EGFP-expressing lentiviral vectors. In conclusion, various polycations and types of sera should be tested when optimizing lentiviral transduction of target cell populations.

  1. Structures and reaction pathways of the molybdenum centres of sulfite-oxidizing enzymes by pulsed EPR spectroscopy.

    PubMed

    Enemark, John H; Astashkin, Andrei V; Raitsimring, Arnold M

    2008-12-01

    SOEs (sulfite-oxidizing enzymes) are physiologically vital and occur in all forms of life. During the catalytic cycle, the five-co-ordinate square pyramidal oxo-molybdenum active site passes through the Mo(V) state, and intimate details of the structure can be obtained from variable frequency pulsed EPR spectroscopy through the hyperfine and nuclear quadrupole interactions of nearby magnetic nuclei. By employing variable spectrometer operational frequencies, it is possible to optimize the measurement conditions for difficult quadrupolar nuclei of interest (e.g. (17)O, (33)S, (35)Cl and (37)Cl) and to simplify the interpretation of the spectra. Isotopically labelled model Mo(V) compounds provide further insight into the electronic and geometric structures and chemical reactions of the enzymes. Recently, blocked forms of SOEs having co-ordinated sulfate, the reaction product, were detected using (33)S (I=3/2) labelling. This blocking of product release is a possible contributor to fatal human sulfite oxidase deficiency in young children.

  2. A parallel Jacobson-Oksman optimization algorithm. [parallel processing (computers)

    NASA Technical Reports Server (NTRS)

    Straeter, T. A.; Markos, A. T.

    1975-01-01

    A gradient-dependent optimization technique which exploits the vector-streaming or parallel-computing capabilities of some modern computers is presented. The algorithm, derived by assuming that the function to be minimized is homogeneous, is a modification of the Jacobson-Oksman serial minimization method. In addition to describing the algorithm, conditions ensuring the convergence of its iterates and the results of numerical experiments on a group of sample test functions are presented. The results of these experiments indicate that this algorithm will solve optimization problems in less computing time than conventional serial methods on machines having vector-streaming or parallel-computing capabilities.

  3. Transfers between libration-point orbits in the elliptic restricted problem

    NASA Astrophysics Data System (ADS)

    Hiday, L. A.; Howell, K. C.

    The present time-fixed impulsive transfers between 3D libration point orbits in the vicinity of the interior L(1) libration point of the sun-earth-moon barycenter system are 'optimal' in that the total characteristic velocity required for implementation of the transfer exhibits a local minimum. The conditions necessary for a time-fixed, two-impulse transfer trajectory to be optimal are stated in terms of the primer vector, and the conditions necessary for satisfying the local optimality of a transfer trajectory containing additional impulses are addressed by requiring continuity of the Hamiltonian and the derivative of the primer vector at all interior impulses.

  4. Clean Indoor Air Ordinance Coverage in the Appalachian Region of the United States

    PubMed Central

    Liber, Alex; Pennell, Michael; Nealy, Darren; Hammer, Jana; Berman, Micah

    2010-01-01

    Objectives. We sought to quantitatively examine the pattern of, and socioeconomic factors associated with, adoption of clean indoor air ordinances in Appalachia. Methods. We collected and reviewed clean indoor air ordinances in Appalachian communities in 6 states and rated the ordinances for completeness of coverage in workplaces, restaurants, and bars. Additionally, we computed a strength score to measure coverage in 7 locations. We fit mixed-effects models to determine whether the presence of a comprehensive ordinance and the ordinance strength were related to community socioeconomic disadvantage. Results. Of the 332 communities included in the analysis, fewer than 20% had adopted a comprehensive workplace, restaurant, or bar ordinance. Most ordinances were weak, achieving on average only 43% of the total possible points. Communities with a higher unemployment rate were less likely and those with a higher education level were more likely to have a strong ordinance. Conclusions. The majority of residents in these communities are not protected from secondhand smoke. Efforts to pass strong statewide clean indoor air laws should take priority over local initiatives in these states. PMID:20466957

  5. Method and apparatus for optimized processing of sparse matrices

    DOEpatents

    Taylor, Valerie E.

    1993-01-01

    A computer architecture for processing a sparse matrix is disclosed. The apparatus stores a value-row vector corresponding to the nonzero values of a sparse matrix, each of which is located at a defined row and column position in the matrix. The value-row vector includes a first vector containing the nonzero values together with delimiting characters indicating a transition from one column to another. The value-row vector also includes a second vector which defines the row positions in the matrix corresponding to the nonzero values in the first vector and the column positions of those nonzero values. The architecture also includes a circuit for detecting the special delimiting character within the value-row vector. Matrix-vector multiplication is executed on the value-row vector by multiplying each value of the first vector by the corresponding column value from a second matrix to form a matrix-vector product, which is added to the previous matrix-vector product.
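
    The storage-and-multiply scheme described above can be sketched in software. The following is a simplified illustration assuming a sentinel object as the column delimiter; it captures the idea of the value-row vector, not the patented hardware implementation.

```python
# Simplified software sketch of a value-row storage scheme: nonzero
# values stored column-by-column with a sentinel marking each column
# transition, plus a parallel vector of row indices. The sentinel choice
# and packing details are illustrative assumptions.
COL_END = None  # delimiter marking a transition to the next column

def pack(matrix):
    """Pack a dense matrix (list of rows) into (values, rows) vectors."""
    values, rows = [], []
    n_rows, n_cols = len(matrix), len(matrix[0])
    for j in range(n_cols):
        for i in range(n_rows):
            if matrix[i][j] != 0:
                values.append(matrix[i][j])
                rows.append(i)
        values.append(COL_END)  # column delimiter
        rows.append(-1)
    return values, rows

def matvec(values, rows, x, n_rows):
    """Multiply the packed matrix by vector x (one entry per column)."""
    y = [0.0] * n_rows
    col = 0
    for v, r in zip(values, rows):
        if v is COL_END:        # "special character" detection
            col += 1
        else:
            y[r] += v * x[col]  # accumulate into the matrix-vector product
    return y

A = [[5, 0, 0],
     [0, 0, 3],
     [2, 0, 1]]
vals, rows = pack(A)
y = matvec(vals, rows, [1.0, 2.0, 3.0], 3)  # equals A @ [1, 2, 3]
```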

  6. City curfew ordinances and teenage motor vehicle injury.

    PubMed

    Preusser, D F; Williams, A F; Lund, A K; Zador, P L

    1990-08-01

    Several U.S. cities have curfew ordinances that limit the late night activities of minor teenagers in public places including highways. Detroit, Cleveland, and Columbus, which have curfew ordinances, were compared to Cincinnati, which does not have such an ordinance. The curfew ordinances were associated with a 23% reduction in motor vehicle related injury for 13- to 17-year-olds as passengers, drivers, pedestrians, or bicyclists during the curfew hours. It was concluded that city curfew ordinances, like the statewide driving curfews studied in other states, can reduce motor vehicle injury to teenagers during the particularly hazardous late night hours.

  7. A vectorization of the Jameson-Caughey NYU transonic swept-wing computer program FLO-22-V1 for the STAR-100 computer

    NASA Technical Reports Server (NTRS)

    Smith, R. E.; Pitts, J. I.; Lambiotte, J. J., Jr.

    1978-01-01

    The computer program FLO-22 for analyzing inviscid transonic flow past 3-D swept-wing configurations was modified to use vector operations and run on the STAR-100 computer. The vectorized version described herein was called FLO-22-V1. Vector operations were incorporated into Successive Line Over-Relaxation in the transformed horizontal direction. Vector relational operations and control vectors were used to implement upwind differencing at supersonic points. A high speed of computation and extended grid domain were characteristics of FLO-22-V1. The new program was not the optimal vectorization of Successive Line Over-Relaxation applied to transonic flow; however, it proved that vector operations can readily be implemented to increase the computation rate of the algorithm.

  8. Minimum Variance Distortionless Response Beamformer with Enhanced Nulling Level Control via Dynamic Mutated Artificial Immune System

    PubMed Central

    Kiong, Tiong Sieh; Salem, S. Balasem; Paw, Johnny Koh Siaw; Sankar, K. Prajindra

    2014-01-01

    In smart antenna applications, the adaptive beamforming technique is used to cancel interfering signals (by placing nulls) and to produce or steer a strong beam toward the target signal according to the calculated weight vectors. Minimum variance distortionless response (MVDR) beamforming is capable of determining the weight vectors for beam steering; however, its nulling level on the interference sources remains unsatisfactory. Beamforming can be considered an optimization problem in which the optimal weight vector is obtained through computation. Hence, in this paper, a new dynamic mutated artificial immune system (DM-AIS) is proposed to enhance MVDR beamforming by controlling the null steering of interference and increasing the signal-to-interference-plus-noise ratio (SINR) for wanted signals. PMID:25003136

  9. Minimum variance distortionless response beamformer with enhanced nulling level control via dynamic mutated artificial immune system.

    PubMed

    Kiong, Tiong Sieh; Salem, S Balasem; Paw, Johnny Koh Siaw; Sankar, K Prajindra; Darzi, Soodabeh

    2014-01-01

    In smart antenna applications, the adaptive beamforming technique is used to cancel interfering signals (by placing nulls) and to produce or steer a strong beam toward the target signal according to the calculated weight vectors. Minimum variance distortionless response (MVDR) beamforming is capable of determining the weight vectors for beam steering; however, its nulling level on the interference sources remains unsatisfactory. Beamforming can be considered an optimization problem in which the optimal weight vector is obtained through computation. Hence, in this paper, a new dynamic mutated artificial immune system (DM-AIS) is proposed to enhance MVDR beamforming by controlling the null steering of interference and increasing the signal-to-interference-plus-noise ratio (SINR) for wanted signals.

  10. Power line identification of millimeter wave radar based on PCA-GS-SVM

    NASA Astrophysics Data System (ADS)

    Fang, Fang; Zhang, Guifeng; Cheng, Yansheng

    2017-12-01

    Aiming at the problem that existing detection methods cannot effectively ensure the safety of UAV ultra-low-altitude flight in the presence of power lines, a power line recognition method based on grid search (GS) and principal component analysis with a support vector machine (PCA-SVM) is proposed. First, the candidate lines from the Hough transform are reduced by PCA, and the main features of the candidate lines are extracted. Then, the support vector machine (SVM) is optimized by the grid search (GS) method. Finally, the SVM classifier with optimized parameters is used to classify the candidate lines. MATLAB simulation results show that this method can effectively distinguish power lines from noise, with high recognition accuracy and algorithmic efficiency.
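
    The PCA-then-grid-searched-SVM classification stage described above can be sketched with scikit-learn. The two synthetic classes below stand in for "power line" versus "noise" candidate-line features; the Hough-transform front end is not reproduced here.

```python
# Hedged sketch of a PCA -> grid-searched SVM pipeline. The synthetic
# feature vectors are illustrative stand-ins for Hough-transform
# candidate-line features, which are assumed rather than reproduced.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X_noise = rng.normal(0.0, 1.0, size=(60, 10))   # "noise" class features
X_line = rng.normal(2.0, 1.0, size=(60, 10))    # "power line" class features
X = np.vstack([X_noise, X_line])
y = np.array([0] * 60 + [1] * 60)

pipe = Pipeline([("pca", PCA(n_components=3)), ("svm", SVC(kernel="rbf"))])
param_grid = {"svm__C": [0.1, 1, 10], "svm__gamma": ["scale", 0.01, 0.1]}
search = GridSearchCV(pipe, param_grid, cv=5).fit(X, y)
acc = search.best_score_                        # mean cross-validated accuracy
```

    Grid search simply evaluates every (C, gamma) pair by cross-validation and keeps the best, which is the "GS" optimization step the abstract refers to.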

  11. ℓp-Norm Multikernel Learning Approach for Stock Market Price Forecasting

    PubMed Central

    Shao, Xigao; Wu, Kun; Liao, Bifeng

    2012-01-01

    Linear multiple kernel learning models have been used for predicting financial time series. However, ℓ1-norm multiple kernel support vector regression is rarely observed to outperform trivial baselines in practical applications. To allow for robust kernel mixtures that generalize well, we adopt ℓp-norm multiple kernel support vector regression (1 ≤ p < ∞) as a stock price prediction model. The optimization problem is decomposed into smaller subproblems, and an interleaved optimization strategy is employed to solve the regression model. The model is evaluated on forecasting the daily closing prices of the Shanghai Stock Index in China. Experimental results show that our proposed model performs better than the ℓ1-norm multiple kernel support vector regression model. PMID:23365561

  12. Use of CYBER 203 and CYBER 205 computers for three-dimensional transonic flow calculations

    NASA Technical Reports Server (NTRS)

    Melson, N. D.; Keller, J. D.

    1983-01-01

    Experiences in modifying two three-dimensional transonic flow computer programs (FLO 22 and FLO 27) for use on the CDC CYBER 203 computer system are discussed. Both programs were originally written for serial machines. Several methods were attempted to optimize the execution of the two programs on the vector machine: leaving the program in scalar form (i.e., serial computation) with compiler software used to optimize and vectorize it, vectorizing parts of the existing algorithm, and incorporating a vectorizable algorithm (ZEBRA I or ZEBRA II) in the program. Comparison runs of the programs were made on CDC CYBER 175, CYBER 203, and two-pipe CDC CYBER 205 computer systems.

  13. Gap-minimal systems of notations and the constructible hierarchy

    NASA Technical Reports Server (NTRS)

    Lucian, M. L.

    1972-01-01

    If a constructibly countable ordinal alpha is a gap ordinal, then the order type of the set of index ordinals smaller than alpha is exactly alpha. The gap ordinals are the only points of discontinuity of a certain ordinal-valued function. The notion of gap minimality for well ordered systems of notations is defined, and the existence of gap-minimal systems of notations of arbitrarily large constructibly countable length is established.

  14. 78 FR 54670 - Miami Tribe of Oklahoma-Liquor Control Ordinance

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-09-05

    ... Tribe of Oklahoma--Liquor Control Ordinance AGENCY: Bureau of Indian Affairs, Interior. ACTION: Notice. SUMMARY: This notice publishes the Miami Tribe of Oklahoma--Liquor Control Ordinance. This Ordinance... Oklahoma, increases the ability of the tribal government to control the distribution and possession of...

  15. Tax revenue in Mississippi communities following implementation of smoke-free ordinances: an examination of tourism and economic development tax revenues.

    PubMed

    McMillen, Robert; Shackelford, Signe

    2012-10-01

    There is no safe level of exposure to tobacco smoke. More than 60 Mississippi communities have passed smoke-free ordinances in the past six years. Opponents claim that these ordinances harm local businesses. Mississippi law allows municipalities to place a tourism and economic development (TED) tax on local restaurants and hotels/motels. The objective of this study is to examine the impact of these ordinances on TED tax revenues. This study applies a pre/post quasi-experimental design to compare TED tax revenue before and after implementing ordinances. Descriptive analyses indicated that inflation-adjusted tax revenues increased during the 12 months following implementation of smoke-free ordinances while there was no change in aggregated control communities. Multivariate fixed-effects analyses found no statistically significant effect of smoke-free ordinances on hospitality tax revenue. No evidence was found that smoke-free ordinances have an adverse effect on the local hospitality industry.

  16. A self-contained, automated methodology for optimal flow control validated for transition delay

    NASA Technical Reports Server (NTRS)

    Joslin, Ronald D.; Gunzburger, Max D.; Nicolaides, R. A.; Erlebacher, Gordon; Hussaini, M. Yousuff

    1995-01-01

    This paper describes a self-contained, automated methodology for flow control along with a validation of the methodology for the problem of boundary layer instability suppression. The objective of control is to match the stress vector along a portion of the boundary to a given vector; instability suppression is achieved by choosing the given vector to be that of a steady base flow, e.g., Blasius boundary layer. Control is effected through the injection or suction of fluid through a single orifice on the boundary. The present approach couples the time-dependent Navier-Stokes system with an adjoint Navier-Stokes system and optimality conditions from which optimal states, i.e., unsteady flow fields, and control, e.g., actuators, may be determined. The results demonstrate that instability suppression can be achieved without any a priori knowledge of the disturbance, which is significant because other control techniques have required some knowledge of the flow unsteadiness such as frequencies, instability type, etc.

  17. Lanczos eigensolution method for high-performance computers

    NASA Technical Reports Server (NTRS)

    Bostic, Susan W.

    1991-01-01

    The theory, computational analysis, and applications of a Lanczos algorithm on high-performance computers are presented. The computationally intensive steps of the algorithm are identified as the matrix factorization, the forward/backward equation solution, and the matrix-vector multiplies. These computational steps are optimized to exploit the vector and parallel capabilities of high-performance computers. The savings in computational time from applying optimization techniques such as variable-band and sparse data storage and access, loop unrolling, use of local memory, and compiler directives are presented. Two large-scale structural analysis applications are described: the buckling of a composite blade-stiffened panel with a cutout, and the vibration analysis of a high-speed civil transport. The sequential computational time for the panel problem, 181.6 seconds on a CONVEX computer, was decreased to 14.1 seconds with the optimized vector algorithm. The best computational time for the transport problem, with 17,000 degrees of freedom, was 23 seconds on the Cray Y-MP using an average of 3.63 processors.
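
    A minimal dense-storage Lanczos tridiagonalization sketch (without the reorthogonalization a production solver would add) illustrates why the matrix-vector multiply dominates the cost and is the natural target for vectorization:

```python
# Minimal Lanczos tridiagonalization (no reorthogonalization). The inner
# loop is dominated by the A @ q matrix-vector product, the kernel the
# abstract singles out for vector/parallel optimization.
import numpy as np

def lanczos(A, k, seed=0):
    n = A.shape[0]
    q = np.random.default_rng(seed).normal(size=n)
    q /= np.linalg.norm(q)
    Q = np.zeros((n, k))
    alpha, beta = np.zeros(k), np.zeros(k - 1)
    q_prev, b = np.zeros(n), 0.0
    for j in range(k):
        Q[:, j] = q
        w = A @ q - b * q_prev          # dominant matrix-vector multiply
        alpha[j] = q @ w
        w -= alpha[j] * q
        if j < k - 1:
            b = np.linalg.norm(w)
            beta[j] = b
            q_prev, q = q, w / b
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    return Q, T

rng = np.random.default_rng(2)
M = rng.normal(size=(50, 50))
A = (M + M.T) / 2                        # symmetric test matrix
Q, T = lanczos(A, 20)
ritz = np.linalg.eigvalsh(T)             # Ritz values approximate eigenvalues
true_max = np.linalg.eigvalsh(A)[-1]
```

    The extreme Ritz values converge quickly, which is why a modest number of Lanczos steps suffices for buckling and vibration eigenproblems.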

  18. Sparse Solutions for Single Class SVMs: A Bi-Criterion Approach

    NASA Technical Reports Server (NTRS)

    Das, Santanu; Oza, Nikunj C.

    2011-01-01

    In this paper we propose an innovative learning algorithm, a variation of the one-class nu support vector machine (SVM) learning algorithm, that produces sparser solutions with much reduced computational complexity. The proposed technique returns an approximate solution, nearly as good as the solution set obtained by the classical approach, by minimizing the original risk function along with a regularization term. We introduce a bi-criterion optimization that helps guide the search toward the optimal set in much reduced time. The outcome of the proposed learning technique was compared with the benchmark one-class SVM algorithm, which more often leads to solutions with redundant support vectors. Throughout the analysis, the problem size for both optimization routines was kept consistent. We have tested the proposed algorithm on a variety of data sources under different conditions to demonstrate its effectiveness. In all cases the proposed algorithm closely preserves the accuracy of standard one-class nu SVMs while reducing both training time and test time by several factors.
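
    As a point of reference, the benchmark one-class nu-SVM mentioned above can be run with scikit-learn; nu lower-bounds the fraction of training points retained as support vectors, which is exactly the redundancy the proposed sparse variant targets. The data below are synthetic, and the bi-criterion method itself is not reproduced.

```python
# Baseline one-class nu-SVM (the abstract's comparison point) via
# scikit-learn on synthetic nominal data. nu upper-bounds the fraction
# of training points flagged as outliers and lower-bounds the fraction
# kept as support vectors.
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(3)
X = rng.normal(0.0, 1.0, size=(200, 2))       # nominal (in-class) samples
model = OneClassSVM(kernel="rbf", nu=0.1).fit(X)

n_sv = model.support_vectors_.shape[0]        # support-vector count
inlier_rate = (model.predict(X) == 1).mean()  # fraction predicted nominal
```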

  19. Customer demand prediction of service-oriented manufacturing using the least square support vector machine optimized by particle swarm optimization algorithm

    NASA Astrophysics Data System (ADS)

    Cao, Jin; Jiang, Zhibin; Wang, Kangzhou

    2017-07-01

    Many nonlinear customer satisfaction-related factors significantly influence the future customer demand for service-oriented manufacturing (SOM). To address this issue and enhance the prediction accuracy, this article develops a novel customer demand prediction approach for SOM. The approach combines the phase space reconstruction (PSR) technique with the optimized least square support vector machine (LSSVM). First, the prediction sample space is reconstructed by the PSR to enrich the time-series dynamics of the limited data sample. Then, the generalization and learning ability of the LSSVM are improved by the hybrid polynomial and radial basis function kernel. Finally, the key parameters of the LSSVM are optimized by the particle swarm optimization algorithm. In a real case study, the customer demand prediction of an air conditioner compressor is implemented. Furthermore, the effectiveness and validity of the proposed approach are demonstrated by comparison with other classical prediction approaches.

  20. Feed-Forward Neural Network Soft-Sensor Modeling of Flotation Process Based on Particle Swarm Optimization and Gravitational Search Algorithm

    PubMed Central

    Wang, Jie-Sheng; Han, Shuang

    2015-01-01

    For predicting the key technology indicators (concentrate grade and tailings recovery rate) of the flotation process, a feed-forward neural network (FNN) based soft-sensor model optimized by a hybrid algorithm combining the particle swarm optimization (PSO) algorithm and the gravitational search algorithm (GSA) is proposed. Although GSA has better optimization capability, it converges slowly and easily falls into local optima. In this paper, the velocity vector and position vector of GSA are therefore adjusted by the PSO algorithm in order to improve its convergence speed and prediction accuracy. Finally, the proposed hybrid algorithm is adopted to optimize the parameters of the FNN soft-sensor model. Simulation results show that the model has better generalization and prediction accuracy for the concentrate grade and tailings recovery rate, meeting the online soft-sensor requirements of real-time control in the flotation process. PMID:26583034

  1. 75 FR 65373 - Klamath Tribes Liquor Control Ordinance

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-10-22

    ... DEPARTMENT OF THE INTERIOR Bureau of Indian Affairs Klamath Tribes Liquor Control Ordinance AGENCY... certification of the amendment to the Klamath Tribes Liquor Control Ordinance. The first Ordinance was published... and controls the sale, possession and distribution of liquor within the tribal lands. The tribal lands...

  2. Interpretation for scales of measurement linking with abstract algebra

    PubMed Central

    2014-01-01

    The Stevens classification of levels of measurement involves four types of scale: “Nominal”, “Ordinal”, “Interval” and “Ratio”. This classification has been used widely in medical fields and has played an important role in the composition and interpretation of scales. With this classification, levels of measurement appear organized and validated. However, a group-theory-like systematization beckons as an alternative because of its logical consistency and broad applicability in the natural sciences, and it may offer great advantages in clinical medicine. From this viewpoint, the Stevens classification is reformulated within an abstract-algebra-like scheme: an ‘Abelian modulo additive group’ for the “Ordinal scale” accompanied by ‘zero’, an ‘Abelian additive group’ for the “Interval scale”, and a ‘field’ for the “Ratio scale”. Furthermore, a vector-like display arranges a mixture of schemes describing the assessment of patient states. With this vector-like notation, data mining and data-set combination are possible at a higher level of abstract structure based upon a hierarchical-cluster form. Using simple examples, we show that operations acting on the corresponding mixed schemes of this display allow for a sophisticated means of classifying, updating, monitoring, and prognosis, where better data mining/data usage and efficacy is expected. PMID:24987515

  3. Ordinal probability effect measures for group comparisons in multinomial cumulative link models.

    PubMed

    Agresti, Alan; Kateri, Maria

    2017-03-01

    We consider simple ordinal model-based probability effect measures for comparing distributions of two groups, adjusted for explanatory variables. An "ordinal superiority" measure summarizes the probability that an observation from one distribution falls above an independent observation from the other distribution, adjusted for explanatory variables in a model. The measure applies directly to normal linear models and to a normal latent variable model for ordinal response variables. It equals Φ(β/2) for the corresponding ordinal model that applies a probit link function to cumulative multinomial probabilities, for standard normal cdf Φ and effect β that is the coefficient of the group indicator variable. For the more general latent variable model for ordinal responses that corresponds to a linear model with other possible error distributions and corresponding link functions for cumulative multinomial probabilities, the ordinal superiority measure equals exp(β)/[1+exp(β)] with the log-log link and equals approximately exp(β/2)/[1+exp(β/2)] with the logit link, where β is the group effect. Another ordinal superiority measure generalizes the difference of proportions from binary to ordinal responses. We also present related measures directly for ordinal models for the observed response that need not assume corresponding latent response models. We present confidence intervals for the measures and illustrate with an example. © 2016, The International Biometric Society.
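
    The closed forms quoted above are easy to check numerically. The sketch below evaluates the ordinal superiority measure as a function of the group effect β under each link; at β = 0 every form reduces to 1/2, i.e., no group difference.

```python
# Numerical check of the closed forms quoted in the abstract for the
# ordinal superiority measure as a function of the group effect beta,
# one per link function; the logit-link form is the stated approximation.
from math import exp, sqrt, erf

def norm_cdf(z):
    """Standard normal cdf Phi via the error function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def gamma_probit(beta):
    return norm_cdf(beta / 2.0)                 # probit link: Phi(beta/2)

def gamma_loglog(beta):
    return exp(beta) / (1.0 + exp(beta))        # log-log link

def gamma_logit(beta):
    # approximate form stated for the logit link
    return exp(beta / 2.0) / (1.0 + exp(beta / 2.0))
```

    Each measure increases monotonically from 1/2 toward 1 as the group effect β grows, matching its interpretation as the probability that an observation from one group falls above one from the other.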

  4. Clinical prediction model to identify vulnerable patients in ambulatory surgery: towards optimal medical decision-making.

    PubMed

    Mijderwijk, Herjan; Stolker, Robert Jan; Duivenvoorden, Hugo J; Klimek, Markus; Steyerberg, Ewout W

    2016-09-01

    Ambulatory surgery patients are at risk of adverse psychological outcomes such as anxiety, aggression, fatigue, and depression. We developed and validated a clinical prediction model to identify patients who were vulnerable to these psychological outcome parameters. We prospectively assessed 383 mixed ambulatory surgery patients for psychological vulnerability, defined as the presence of anxiety (state/trait), aggression (state/trait), fatigue, and depression seven days after surgery. Three psychological vulnerability categories were considered, i.e., none, one, or multiple poor scores, a poor score being defined as a score exceeding one standard deviation above the mean for each single outcome according to normative data. The following determinants were assessed preoperatively: sociodemographic (age, sex, level of education, employment status, marital status, having children, religion, nationality), medical (heart rate and body mass index), and psychological variables (self-esteem and self-efficacy), in addition to anxiety, aggression, fatigue, and depression. A prediction model was constructed using ordinal polytomous logistic regression analysis, and bootstrapping was applied for internal validation. The ordinal c-index (ORC) quantified the discriminative ability of the model, in addition to measures for overall model performance (Nagelkerke's R²). In this population, 137 (36%) patients were identified as being psychologically vulnerable after surgery for at least one of the psychological outcomes. The most parsimonious and optimal prediction model combined sociodemographic variables (level of education, having children, and nationality) with psychological variables (trait anxiety, state/trait aggression, fatigue, and depression). Model performance was promising: R² = 30% and ORC = 0.76 after correction for optimism. This study identified a substantial group of vulnerable patients in ambulatory surgery. The proposed clinical prediction model could give healthcare professionals the opportunity to identify vulnerable patients in ambulatory surgery, although additional modification and validation are needed. (ClinicalTrials.gov number, NCT01441843).
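    The ordinal c-index (ORC) mentioned above rewards predictions that order patients correctly across the vulnerability categories. A simple pairwise-concordance sketch of one ORC-style variant, not necessarily the exact estimator used in the paper:

```python
from itertools import combinations

def ordinal_c_index(scores, outcomes):
    """Discrimination for an ordinal outcome: among all pairs of subjects with
    different outcome categories, the fraction in which the higher predicted
    score belongs to the worse outcome (ties in score count one half).
    An illustrative sketch of one ORC variant, not the paper's estimator."""
    concordant = 0.0
    usable = 0
    for (s1, y1), (s2, y2) in combinations(zip(scores, outcomes), 2):
        if y1 == y2:
            continue  # pairs with the same category carry no ordering information
        usable += 1
        if (s1 - s2) * (y1 - y2) > 0:
            concordant += 1.0
        elif s1 == s2:
            concordant += 0.5
    return concordant / usable if usable else float("nan")
```

A value of 1.0 means every usable pair is ordered correctly; 0.5 is chance-level discrimination.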

  5. Weighted optimization of irradiance for photodynamic therapy of port wine stains

    NASA Astrophysics Data System (ADS)

    He, Linhuan; Zhou, Ya; Hu, Xiaoming

    2016-10-01

    Planning of the irradiance distribution (PID) is one of the foremost factors for on-demand treatment of port wine stains (PWS) with photodynamic therapy (PDT). A weighted optimization method for PID was proposed according to the grading of PWS with a three-dimensional digital illumination instrument. First, the point clouds of the lesions were filtered to remove erroneous or redundant points; the surface was then triangulated, dividing the lesion into small triangular patches. Second, the parameters of each triangular patch needed for optimization, such as area, normal vector, and orthocenter, were calculated, and the weighted coefficients were determined from the erythema indexes and areas of the patches. Then the initial point for optimization was calculated from the normal vectors and orthocenters in order to optimize the light direction. Finally, the irradiance was optimized according to the cosines of the irradiance angles and the weighted coefficients. Comparing the irradiance distribution before and after optimization shows that the proposed weighted optimization method matches the irradiance distribution better to the characteristics of the lesions and has the potential to improve therapeutic efficacy.
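    A hypothetical sketch of the kind of weighted objective the abstract describes, with per-patch weights built from erythema index and area and a cosine factor for the irradiance angle. The field names and the exact weighting are illustrative assumptions, not the paper's formulation:

```python
import math

def normalize(v):
    """Return v scaled to unit length."""
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def weighted_irradiance_score(patches, light_point):
    """Hypothetical weighted objective: each triangular patch contributes
    weight * cos(angle between its normal and the direction from its
    orthocenter to the light), where the weight combines the patch's
    erythema index and area. Patches facing away contribute zero."""
    total = 0.0
    for patch in patches:
        weight = patch["erythema"] * patch["area"]
        direction = normalize(tuple(l - c for l, c in
                                    zip(light_point, patch["orthocenter"])))
        normal = normalize(patch["normal"])
        cos_angle = max(0.0, sum(a * b for a, b in zip(normal, direction)))
        total += weight * cos_angle
    return total
```

An optimizer would then move `light_point` (or the light direction) to maximize this score, so that strongly graded lesion patches receive near-normal incidence.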

  6. Ordinality and the nature of symbolic numbers.

    PubMed

    Lyons, Ian M; Beilock, Sian L

    2013-10-23

    The view that representations of symbolic and nonsymbolic numbers are closely tied to one another is widespread. However, the link between symbolic and nonsymbolic numbers is almost always inferred from cardinal processing tasks. In the current work, we show that considering ordinality instead points to striking differences between symbolic and nonsymbolic numbers. Human behavioral and neural data show that ordinal processing of symbolic numbers (Are three Indo-Arabic numerals in numerical order?) is distinct from symbolic cardinal processing (Which of two numerals represents the greater quantity?) and nonsymbolic number processing (ordinal and cardinal judgments of dot-arrays). Behaviorally, distance-effects were reversed when assessing ordinality in symbolic numbers, but canonical distance-effects were observed for cardinal judgments of symbolic numbers and all nonsymbolic judgments. At the neural level, symbolic number-ordering was the only numerical task that did not show number-specific activity (greater than control) in the intraparietal sulcus. Only activity in left premotor cortex was specifically associated with symbolic number-ordering. For nonsymbolic numbers, activation in cognitive-control areas during ordinal processing and a high degree of overlap between ordinal and cardinal processing networks indicate that nonsymbolic ordinality is assessed via iterative cardinality judgments. This contrasts with a striking lack of neural overlap between ordinal and cardinal judgments anywhere in the brain for symbolic numbers, suggesting that symbolic number processing varies substantially with computational context. Ordinal processing sheds light on key differences between symbolic and nonsymbolic number processing both behaviorally and in the brain. Ordinality may prove important for understanding the power of representing numbers symbolically.

  7. Optimization of Retinal Gene Therapy for X-Linked Retinitis Pigmentosa Due to RPGR Mutations.

    PubMed

    Beltran, William A; Cideciyan, Artur V; Boye, Shannon E; Ye, Guo-Jie; Iwabe, Simone; Dufour, Valerie L; Marinho, Luis Felipe; Swider, Malgorzata; Kosyk, Mychajlo S; Sha, Jin; Boye, Sanford L; Peterson, James J; Witherspoon, C Douglas; Alexander, John J; Ying, Gui-Shuang; Shearman, Mark S; Chulay, Jeffrey D; Hauswirth, William W; Gamlin, Paul D; Jacobson, Samuel G; Aguirre, Gustavo D

    2017-08-02

    X-linked retinitis pigmentosa (XLRP) caused by mutations in the RPGR gene is an early onset and severe cause of blindness. Successful proof-of-concept studies in a canine model have recently shown that development of a corrective gene therapy for RPGR-XLRP may now be an attainable goal. In preparation for a future clinical trial, we have here optimized the therapeutic AAV vector construct by showing that GRK1 (rather than IRBP) is a more efficient promoter for targeting gene expression to both rods and cones in non-human primates. Two transgenes were used in RPGR mutant (XLPRA2) dogs under the control of the GRK1 promoter. First was the previously developed stabilized human RPGR (hRPGRstb). Second was a new full-length stabilized and codon-optimized human RPGR (hRPGRco). Long-term (>2 years) studies with an AAV2/5 vector carrying hRPGRstb under control of the GRK1 promoter showed rescue of rods and cones from degeneration and retention of vision. Shorter term (3 months) studies demonstrated comparable preservation of photoreceptors in canine eyes treated with an AAV2/5 vector carrying either transgene under the control of the GRK1 promoter. These results provide the critical molecular components (GRK1 promoter, hRPGRco transgene) to now construct a therapeutic viral vector optimized for RPGR-XLRP patients. Copyright © 2017 The American Society of Gene and Cell Therapy. Published by Elsevier Inc. All rights reserved.

  8. Social Host Ordinances and Policies. Prevention Update

    ERIC Educational Resources Information Center

    Higher Education Center for Alcohol, Drug Abuse, and Violence Prevention, 2011

    2011-01-01

    Social host liability laws (also known as teen party ordinances, loud or unruly gathering ordinances, or response costs ordinances) target the location in which underage drinking takes place. Social host liability laws hold noncommercial individuals responsible for underage drinking events on property they own, lease, or otherwise control. They…

  9. 25 CFR 522.8 - Publication of class III ordinance and approval.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... Section 522.8 Indians NATIONAL INDIAN GAMING COMMISSION, DEPARTMENT OF THE INTERIOR APPROVAL OF CLASS II AND CLASS III ORDINANCES AND RESOLUTIONS SUBMISSION OF GAMING ORDINANCE OR RESOLUTION § 522.8 Publication of class III ordinance and approval. The Chairman shall publish a class III tribal gaming...

  10. 27 CFR 478.24 - Compilation of State laws and published ordinances.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... and published ordinances. 478.24 Section 478.24 Alcohol, Tobacco Products, and Firearms BUREAU OF... published ordinances. (a) The Director shall annually revise and furnish Federal firearms licensees with a compilation of State laws and published ordinances which are relevant to the enforcement of this part. The...

  11. Automated flare forecasting using a statistical learning technique

    NASA Astrophysics Data System (ADS)

    Yuan, Yuan; Shih, Frank Y.; Jing, Ju; Wang, Hai-Min

    2010-08-01

    We present a new method for automatically forecasting the occurrence of solar flares based on photospheric magnetic measurements. The method is a cascading combination of an ordinal logistic regression model and a support vector machine classifier. The predictive variables are three photospheric magnetic parameters, i.e., the total unsigned magnetic flux, the length of the strong-gradient magnetic polarity inversion line, and the total magnetic energy dissipation. The output is true or false for the occurrence of a certain level of flares within 24 hours. Experimental results, from a sample of 230 active regions between 1996 and 2005, show the accuracies of a 24-hour flare forecast to be 0.86, 0.72, 0.65, and 0.84, respectively, for the four different levels. Comparison shows an improvement in the accuracy of X-class flare forecasting.
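    The first stage of the cascade is a cumulative (ordinal) logistic model. A minimal sketch of how class probabilities and a threshold decision could be computed from such a model; the coefficients, thresholds, and function names below are placeholders, not the paper's fitted values:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def cumulative_logit_probs(x, betas, thetas):
    """Class probabilities from a fitted cumulative (ordinal) logistic model:
    P(Y <= j | x) = sigmoid(theta_j - x . beta), with thetas increasing.
    Differencing the cumulative probabilities gives per-class probabilities."""
    eta = sum(b * xi for b, xi in zip(betas, x))
    cum = [sigmoid(t - eta) for t in thetas] + [1.0]
    return [cum[0]] + [cum[j] - cum[j - 1] for j in range(1, len(cum))]

def forecast_flare(x, betas, thetas, level, threshold=0.5):
    """Illustrative cascade step: declare a flare at `level` or above when the
    ordinal model puts enough probability mass there (a simplified stand-in
    for the SVM stage of the paper's cascade)."""
    probs = cumulative_logit_probs(x, betas, thetas)
    return sum(probs[level:]) >= threshold
```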

  12. Research on bearing fault diagnosis of large machinery based on mathematical morphology

    NASA Astrophysics Data System (ADS)

    Wang, Yu

    2018-04-01

    To study automatic fault diagnosis of large machinery based on support vector machines, four common fault types of large machinery are classified and identified with a support vector machine. The extracted feature vectors are used as input and are trained and identified by a multi-classification method. The optimal parameters of the support vector machine are found by trial and error and by cross-validation. The support vector machine is then compared with a BP neural network. The results show that the support vector machine requires less training time and achieves higher classification accuracy, making it more suitable for fault diagnosis research in large machinery. It can therefore be concluded that support vector machines (SVMs) train quickly and perform well.
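    The trial-and-error/cross-validation parameter search described above can be sketched generically. The classifier is supplied by the caller (an SVM in the paper); the fold scheme and function names here are simple illustrative choices:

```python
from statistics import mean

def k_fold_indices(n, k):
    """Split range(n) into k interleaved folds."""
    return [list(range(i, n, k)) for i in range(k)]

def cross_val_accuracy(train_fn, X, y, params, k=3):
    """Trial-and-error parameter search: score each candidate parameter
    setting by k-fold cross-validated accuracy and return the best one.
    `train_fn(Xtr, ytr, p)` must return a `predict(x)` callable (an SVM
    trainer in the paper; any classifier works here)."""
    def score(p):
        accs = []
        for fold in k_fold_indices(len(X), k):
            test = set(fold)
            Xtr = [x for i, x in enumerate(X) if i not in test]
            ytr = [v for i, v in enumerate(y) if i not in test]
            model = train_fn(Xtr, ytr, p)
            accs.append(mean(1.0 if model(X[i]) == y[i] else 0.0 for i in fold))
        return mean(accs)
    return max(params, key=score)
```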

  13. The Flies and Eyes project: design and methods of a cluster-randomised intervention study to confirm the importance of flies as trachoma vectors in The Gambia and to test a sustainable method of fly control using pit latrines.

    PubMed

    Emerson, Paul M; Lindsay, Steve W; Walraven, Gijs E L; Dibba, Sheikh Mafuji; Lowe, Kebba O; Bailey, Robin L

    2002-04-01

    The Flies and Eyes project is a community-based, cluster-randomised intervention trial based in a rural area of The Gambia. It was designed to determine whether flies are mechanical vectors of trachoma, to quantify the relative importance of flies as vectors of trachoma, and to test the effectiveness of insecticide spraying and the provision of latrines in trachoma control. A total of 21 clusters, each composed of 300-550 people, are to be recruited in groups of three. One cluster from each group is randomly allocated to receive insecticide spraying, one to receive pit latrines, and the remaining cluster acts as a control. The seven groups of clusters are recruited on a step-wise basis separated by two months to aid logistics and allow all seasons to be covered. Standardised, validated trachoma surveys are conducted for people of all ages and both sexes at baseline and six months post-intervention. The Muscid fly population is monitored using standard traps, and fly-eye contact is measured with catches of flies direct from children's faces. The Flies and Eyes project has been designed to strengthen the evidence base for the 'E' component of the SAFE strategy for trachoma control. The results will assist programme planners and country co-ordinators to make informed decisions on the environmental aspects of trachoma control.

  14. Implementing Scientific Simulation Codes Highly Tailored for Vector Architectures Using Custom Configurable Computing Machines

    NASA Technical Reports Server (NTRS)

    Rutishauser, David

    2006-01-01

    The motivation for this work comes from an observation that amidst the push for Massively Parallel (MP) solutions to high-end computing problems such as numerical physical simulations, large amounts of legacy code exist that are highly optimized for vector supercomputers. Because re-hosting legacy code often requires a complete re-write of the original code, which can be a very long and expensive effort, this work examines the potential to exploit reconfigurable computing machines in place of a vector supercomputer to implement an essentially unmodified legacy source code. Custom and reconfigurable computing resources could be used to emulate an original application's target platform to the extent required to achieve high performance. To arrive at an architecture that delivers the desired performance subject to limited resources involves solving a multi-variable optimization problem with constraints. Prior research in the area of reconfigurable computing has demonstrated that designing an optimum hardware implementation of a given application under hardware resource constraints is an NP-complete problem. The premise of the approach is that the general issue of applying reconfigurable computing resources to the implementation of an application, maximizing the performance of the computation subject to physical resource constraints, can be made a tractable problem by assuming a computational paradigm, such as vector processing. This research contributes a formulation of the problem and a methodology to design a reconfigurable vector processing implementation of a given application that satisfies a performance metric. A generic, parametric, architectural framework for vector processing implemented in reconfigurable logic is developed as a target for a scheduling/mapping algorithm that maps an input computation to a given instance of the architecture. 
This algorithm is integrated with an optimization framework to arrive at a specification of the architecture parameters that attempts to minimize execution time, while staying within resource constraints. The flexibility of using a custom reconfigurable implementation is exploited in a unique manner to leverage the lessons learned in vector supercomputer development. The vector processing framework is tailored to the application, with variable parameters that are fixed in traditional vector processing. Benchmark data that demonstrates the functionality and utility of the approach is presented. The benchmark data includes an identified bottleneck in a real case study example vector code, the NASA Langley Terminal Area Simulation System (TASS) application.

  15. Optimal Low Energy Earth-Moon Transfers

    NASA Technical Reports Server (NTRS)

    Griesemer, Paul Ricord; Ocampo, Cesar; Cooley, D. S.

    2010-01-01

    The optimality of a low-energy Earth-Moon transfer is examined for the first time using primer vector theory. An optimal control problem is formed with the following free variables: the location, time, and magnitude of the transfer insertion burn, and the transfer time. A constraint is placed on the initial state of the spacecraft to bind it to a given initial orbit around a first body, and on the final state of the spacecraft to limit its Keplerian energy with respect to a second body. Optimal transfers in the system are shown to meet certain conditions placed on the primer vector and its time derivative. A two point boundary value problem containing these necessary conditions is created for use in targeting optimal transfers. The two point boundary value problem is then applied to the ballistic lunar capture problem, and an optimal trajectory is shown. Additionally, the ballistic lunar capture trajectory is examined to determine whether one or more additional impulses may improve on the cost of the transfer.
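    For context, the conditions referred to in this abstract are the standard necessary conditions of Lawden's primer vector theory; they are stated here from the general literature, not taken from the paper itself:

```latex
% Lawden's necessary conditions for an optimal impulsive transfer
% (standard primer vector theory, stated for context):
\begin{align}
  \ddot{\mathbf{p}} &= G(\mathbf{r})\,\mathbf{p},
      \qquad G = \partial \mathbf{g}/\partial \mathbf{r}
      \quad \text{(primer dynamics in the gravity field } \mathbf{g}\text{)} \\
  \|\mathbf{p}(t)\| &\le 1 \quad \text{for all } t, \\
  \|\mathbf{p}(t_i)\| &= 1, \qquad
      \Delta\mathbf{v}_i \parallel \mathbf{p}(t_i)
      \quad \text{at each impulse time } t_i, \\
  \frac{d}{dt}\|\mathbf{p}\| &= 0 \quad \text{at interior impulses.}
\end{align}
```

A trajectory satisfying all of these conditions cannot be improved by adding, removing, or relocating impulses, which is the criterion the two point boundary value problem in the abstract targets.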

  16. Analysis of the Pointing Accuracy of a 6U CubeSat Mission for Proximity Operations and Resident Space Object Imaging

    DTIC Science & Technology

    2013-05-29

    not necessarily express the views of and should not be attributed to ESA. 1 and visual navigation to maneuver autonomously to reduce the size of the...successful orbit and three-dimensional imaging of an RSO, using passive visual-only navigation and real-time near-optimal guidance. The mission design...Kit (STK) in the Earth-centered Earth-fixed (ECF) co-ordinate system, loaded to Simulink and transformed to the BFF for calculation of the SRP

  17. Rotman Lens Sidewall Design and Optimization with Hybrid Hardware/Software Based Programming

    DTIC Science & Technology

    2015-01-09

    conventional MoM and stored in memory. The components of Zfar are computed as needed through a fast matrix vector multiplication (MVM), which...V vector. Iterative methods, e.g. BiCGSTAB, are employed for solving the linear equation. The matrix-vector multiplications (MVMs), which dominate...most of the computation in the solving phase, consist of calculating near and far MVMs. The far MVM comprises aggregation, translation, and

  18. [Prokaryotic expression and immunological activity of human neutrophil gelatinase associated lipocalin].

    PubMed

    Wu, Jianwei; Cai, Lei; Qian, Wei; Jiao, Liyuan; Li, Jiangfeng; Song, Xiaoli; Wang, Jihua

    2015-07-01

    To construct a prokaryotic expression vector of human neutrophil gelatinase associated lipocalin (NGAL) and identify the bioactivity of the fusion protein. The cDNA of human NGAL obtained from GenBank was linked to a cloning vector to construct the prokaryotic expression vector pCold-NGAL. Then the vector was transformed into E. coli BL21(DE3) plysS. Under the optimal induction condition, the recombinant NGAL (rNGAL) was expressed and purified by Ni Sepharose 6 Fast Flow affinity chromatography. The purity and activity of the rNGAL were identified by SDS-PAGE and by Western blotting combined with NGAL reagent (latex-enhanced immunoturbidimetry), respectively. Restriction enzyme digestion and nucleotide sequencing proved that the expression vector pCold-NGAL was successfully constructed. Under the optimal induction condition that we determined, the rNGAL was expressed in soluble form in E. coli BL21(DE3) plysS. The relative molecular mass of the rNGAL was 25 000, and its purity was more than 98.0%. Furthermore, Western blotting and immunoturbidimetry indicated that the rNGAL reacted with NGAL mAb specifically. Human rNGAL of high purity and bioactivity was successfully produced in E. coli BL21(DE3) plysS using the expression vector pCold-NGAL.

  19. Methods for the analysis of ordinal response data in medical image quality assessment.

    PubMed

    Keeble, Claire; Baxter, Paul D; Gislason-Lee, Amber J; Treadgold, Laura A; Davies, Andrew G

    2016-07-01

    The assessment of image quality in medical imaging often requires observers to rate images for some metric or detectability task. These subjective results are used in optimization, radiation dose reduction or system comparison studies and may be compared to objective measures from a computer vision algorithm performing the same task. One popular scoring approach is to use a Likert scale, then assign consecutive numbers to the categories. The mean of these response values is then taken and used for comparison with the objective or second subjective response. Agreement is often assessed using correlation coefficients. We highlight a number of weaknesses in this common approach, including inappropriate analyses of ordinal data and the inability to properly account for correlations caused by repeated images or observers. We suggest alternative data collection and analysis techniques such as amendments to the scale and multilevel proportional odds models. We detail the suitability of each approach depending upon the data structure and demonstrate each method using a medical imaging example. Whilst others have raised some of these issues, we evaluated the entire study from data collection to analysis, suggested sources for software and further reading, and provided a checklist plus flowchart for use with any ordinal data. We hope that raised awareness of the limitations of the current approaches will encourage greater method consideration and the utilization of a more appropriate analysis. More accurate comparisons between measures in medical imaging will lead to a more robust contribution to the imaging literature and ultimately improved patient care.
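    A tiny example of why assigning consecutive numbers to Likert categories and averaging can mislead: two hypothetical rating samples with identical means but very different ordinal profiles, exactly the information a proportional odds analysis would retain:

```python
from statistics import mean

# Two hypothetical observer ratings on a 5-point Likert scale. Treating the
# categories as equally spaced numbers gives identical means, yet the
# distributions tell very different stories about image quality.
ratings_a = [3, 3, 3, 3, 3, 3]  # every observer rates "acceptable"
ratings_b = [1, 1, 1, 5, 5, 5]  # split between "very poor" and "excellent"

assert mean(ratings_a) == mean(ratings_b) == 3  # means cannot tell them apart
```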

  20. Using ordinal partition transition networks to analyze ECG data

    NASA Astrophysics Data System (ADS)

    Kulp, Christopher W.; Chobot, Jeremy M.; Freitas, Helena R.; Sprechini, Gene D.

    2016-07-01

    Electrocardiogram (ECG) data from patients with a variety of heart conditions are studied using ordinal pattern partition networks. The ordinal pattern partition networks are formed from the ECG time series by symbolizing the data into ordinal patterns. The ordinal patterns form the nodes of the network, and edges are defined through the time ordering of the ordinal patterns in the symbolized time series. A network measure, called the mean degree, is computed from each time series-generated network. In addition, the entropy and the number of non-occurring ordinal patterns (NFP) are computed for each series. The distributions of mean degrees, entropies, and NFPs for each heart condition studied are compared. A statistically significant difference between healthy patients and several groups of unhealthy patients with varying heart conditions is found for the distributions of the mean degrees, unlike for any of the distributions of the entropies or NFPs.
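    The symbolization and network construction described above can be sketched compactly. Conventions for self-loops and edge direction vary in the literature, so this undirected, self-loop-free version is one illustrative choice rather than the paper's exact definition:

```python
def ordinal_patterns(series, d):
    """Symbolize a time series into ordinal patterns of length d: each sliding
    window is replaced by the permutation that sorts it (its argsort)."""
    pats = []
    for i in range(len(series) - d + 1):
        window = series[i:i + d]
        pats.append(tuple(sorted(range(d), key=lambda j: window[j])))
    return pats

def mean_degree(series, d=3):
    """Build an ordinal partition transition network (nodes = patterns,
    edges = consecutive patterns in the symbolized series; undirected,
    self-loops dropped) and return the mean node degree."""
    pats = ordinal_patterns(series, d)
    edges = {frozenset((a, b)) for a, b in zip(pats, pats[1:]) if a != b}
    nodes = set(pats)
    if not nodes:
        return 0.0
    return 2.0 * len(edges) / len(nodes)
```

A monotone series produces a single repeated pattern and hence mean degree zero, while an alternating series already yields transitions between patterns.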

  1. Confirmatory Factor Analysis of Ordinal Variables with Misspecified Models

    ERIC Educational Resources Information Center

    Yang-Wallentin, Fan; Joreskog, Karl G.; Luo, Hao

    2010-01-01

    Ordinal variables are common in many empirical investigations in the social and behavioral sciences. Researchers often apply the maximum likelihood method to fit structural equation models to ordinal data. This assumes that the observed measures have normal distributions, which is not the case when the variables are ordinal. A better approach is…

  2. 75 FR 51102 - Liquor Ordinance of the Wichita and Affiliated Tribes; Correction

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-08-18

    ... Tribes; Correction AGENCY: Bureau of Indian Affairs, Interior ACTION: Notice; correction SUMMARY: The... Liquor Ordinance of the Wichita and Affiliated Tribes. The notice refers to an amended ordinance of the Wichita and Affiliated Tribes when in fact the Liquor Ordinance adopted by Resolution No. WT-10-31 on May...

  3. Estimating Ordinal Reliability for Likert-Type and Ordinal Item Response Data: A Conceptual, Empirical, and Practical Guide

    ERIC Educational Resources Information Center

    Gadermann, Anne M.; Guhn, Martin; Zumbo, Bruno D.

    2012-01-01

    This paper provides a conceptual, empirical, and practical guide for estimating ordinal reliability coefficients for ordinal item response data (also referred to as Likert, Likert-type, ordered categorical, or rating scale item responses). Conventionally, reliability coefficients, such as Cronbach's alpha, are calculated using a Pearson…

  4. The effect of ordinances requiring smoke-free restaurants on restaurant sales.

    PubMed Central

    Glantz, S A; Smith, L R

    1994-01-01

    OBJECTIVES: The effect on restaurant revenues of local ordinances requiring smoke-free restaurants is an important consideration for restaurateurs themselves and the cities that depend on sales tax revenues to provide services. METHODS: Data were obtained from the California State Board of Equalization and the Colorado State Department of Revenue on taxable restaurant sales from 1986 (1982 for Aspen) through 1993 for all 15 cities where ordinances were in force, as well as for 15 similar control communities without smoke-free ordinances during this period. These data were analyzed using multiple regression, including time and a dummy variable for whether an ordinance was in force. Total restaurant sales were analyzed as a fraction of total retail sales, and restaurant sales in smoke-free cities were compared with those in control communities similar in population, median income, and other factors. RESULTS: Ordinances had no significant effect on the fraction of total retail sales that went to restaurants or on the ratio of restaurant sales in communities with ordinances compared with those in the matched control communities. CONCLUSIONS: Smoke-free restaurant ordinances do not adversely affect restaurant sales. PMID:8017529
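    The regression design described in the methods, a sales fraction regressed on time plus a 0/1 ordinance dummy, can be sketched with a plain least-squares fit. The data in the usage example are synthetic, not the study's:

```python
def fit_ols(X, y):
    """Ordinary least squares via the normal equations (X'X) b = X'y, solved
    by Gaussian elimination with partial pivoting. X is a list of rows, e.g.
    [1, time, ordinance_dummy]; b is the coefficient vector."""
    n, p = len(X), len(X[0])
    A = [[sum(X[i][j] * X[i][k] for i in range(n)) for k in range(p)]
         for j in range(p)]
    c = [sum(X[i][j] * y[i] for i in range(n)) for j in range(p)]
    for col in range(p):
        piv = max(range(col, p), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        c[col], c[piv] = c[piv], c[col]
        for r in range(col + 1, p):
            f = A[r][col] / A[col][col]
            for k in range(col, p):
                A[r][k] -= f * A[col][k]
            c[r] -= f * c[col]
    b = [0.0] * p
    for r in range(p - 1, -1, -1):
        b[r] = (c[r] - sum(A[r][k] * b[k] for k in range(r + 1, p))) / A[r][r]
    return b
```

The study's null result corresponds to the dummy's coefficient being statistically indistinguishable from zero.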

  5. Ordinal measures for iris recognition.

    PubMed

    Sun, Zhenan; Tan, Tieniu

    2009-12-01

    Images of a human iris contain rich texture information useful for identity authentication. A key and still open issue in iris recognition is how best to represent such textural information using a compact set of features (iris features). In this paper, we propose using ordinal measures for iris feature representation with the objective of characterizing qualitative relationships between iris regions rather than precise measurements of iris image structures. Such a representation may lose some image-specific information, but it achieves a good trade-off between distinctiveness and robustness. We show that ordinal measures are intrinsic features of iris patterns and largely invariant to illumination changes. Moreover, compactness and low computational complexity of ordinal measures enable highly efficient iris recognition. Ordinal measures are a general concept useful for image analysis and many variants can be derived for ordinal feature extraction. In this paper, we develop multilobe differential filters to compute ordinal measures with flexible intralobe and interlobe parameters such as location, scale, orientation, and distance. Experimental results on three public iris image databases demonstrate the effectiveness of the proposed ordinal feature models.
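    At its simplest, an ordinal measure reduces to the sign of a difference of average intensities between two regions, which is what makes it largely invariant to illumination changes. The sketch below is a crude stand-in for the paper's multilobe differential filters, with illustrative function names:

```python
def ordinal_code(image, region_a, region_b):
    """One-bit ordinal measure: keep only the sign of the difference between
    the average intensities of two image regions (lists of (row, col) pairs).
    Adding a constant to every pixel leaves the bit unchanged."""
    def avg(region):
        vals = [image[r][c] for r, c in region]
        return sum(vals) / len(vals)
    return 1 if avg(region_a) >= avg(region_b) else 0

def hamming(code1, code2):
    """Iris codes built from many such bits are compared by Hamming distance."""
    return sum(b1 != b2 for b1, b2 in zip(code1, code2))
```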

  6. Enhancing speech recognition using improved particle swarm optimization based hidden Markov model.

    PubMed

    Selvaraj, Lokesh; Ganesan, Balakrishnan

    2014-01-01

    Enhancing speech recognition is the primary intention of this work. In this paper a novel speech recognition method based on vector quantization and improved particle swarm optimization (IPSO) is suggested. The suggested methodology contains four stages, namely, (i) denoising, (ii) feature mining, (iii) vector quantization, and (iv) an IPSO-based hidden Markov model (HMM) technique (IP-HMM). First, the speech signals are denoised using a median filter. Next, characteristics such as peak, pitch spectrum, Mel-frequency cepstral coefficients (MFCC), mean, standard deviation, and minimum and maximum of the signal are extracted from the denoised signal. Following that, to accomplish the training process, the extracted characteristics are given to genetic-algorithm-based codebook generation in vector quantization. The initial populations are created by selecting random code vectors from the training set for the codebooks in the genetic algorithm process, and the IP-HMM performs the recognition. The novelty here lies in one of the genetic operations, crossover. The proposed speech recognition technique offers 97.14% accuracy.
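    The vector quantization stage maps each feature vector to its nearest code vector. A minimal sketch; in the described system the codebook would come from the genetic-algorithm generation step, but here it is supplied directly:

```python
def quantize(vector, codebook):
    """Vector quantization step: map a feature vector (e.g., an MFCC frame)
    to the index of the nearest code vector by squared Euclidean distance."""
    def sqdist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(range(len(codebook)), key=lambda i: sqdist(vector, codebook[i]))
```

The resulting index sequence is what a discrete-observation HMM is trained on.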

  7. Food marketing to children through toys: response of restaurants to the first U.S. toy ordinance.

    PubMed

    Otten, Jennifer J; Hekler, Eric B; Krukowski, Rebecca A; Buman, Matthew P; Saelens, Brian E; Gardner, Christopher D; King, Abby C

    2012-01-01

    On August 9, 2010, Santa Clara County CA became the first U.S. jurisdiction to implement an ordinance that prohibits the distribution of toys and other incentives to children in conjunction with meals, foods, or beverages that do not meet minimal nutritional criteria. Restaurants had many different options for complying with this ordinance, such as introducing more healthful menu options, reformulating current menu items, or changing marketing or toy distribution practices. The purpose of this study was to assess how ordinance-affected restaurants changed their child menus, marketing, and toy distribution practices relative to non-affected restaurants. Children's menu items and child-directed marketing and toy distribution practices were examined before and at two time points after ordinance implementation (from July through November 2010) at ordinance-affected fast-food restaurants compared with demographically matched unaffected same-chain restaurants using the Children's Menu Assessment tool. Affected restaurants showed a 2.8- to 3.4-fold improvement in Children's Menu Assessment scores from pre- to post-ordinance, with minimal changes at unaffected restaurants. Response to the ordinance varied by restaurant. Improvements were seen in on-site nutritional guidance; promotion of healthy meals, beverages, and side items; and toy marketing and distribution activities. The ordinance appears to have positively influenced marketing of healthful menu items and toys as well as toy distribution practices at ordinance-affected restaurants, but did not affect the number of healthful food items offered. Copyright © 2012 American Journal of Preventive Medicine. Published by Elsevier Inc. All rights reserved.

  8. Vector boson fusion in the inert doublet model

    NASA Astrophysics Data System (ADS)

    Dutta, Bhaskar; Palacio, Guillermo; Restrepo, Diego; Ruiz-Álvarez, José D.

    2018-03-01

    In this paper we probe the inert Higgs doublet model at the LHC using a vector boson fusion (VBF) search strategy. We optimize the selection cuts, investigate the parameter space of the model, and show that the VBF search has a better reach than the monojet searches. We also investigate Drell-Yan type cuts and show that they can be important for smaller charged Higgs masses. We determine the 3σ reach for the parameter space using these optimized cuts for a luminosity of 3000 fb⁻¹.

  9. Optimum Multi-Impulse Rendezvous Program

    NASA Technical Reports Server (NTRS)

    Glandorf, D. R.; Onley, A. G.; Rozendaal, H. L.

    1970-01-01

    The OMIR program determines optimal n-impulse rendezvous trajectories under the restrictions of two-body motion in free space. Lawden's primer vector theory is applied to determine the optimum number of midcourse impulse applications. Global optimality is not guaranteed.

  10. Optimizing fusion PIC code performance at scale on Cori Phase 2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Koskela, T. S.; Deslippe, J.

    In this paper we present the results of optimizing the performance of the gyrokinetic full-f fusion PIC code XGC1 on the Cori Phase Two Knights Landing system. The code has undergone substantial development to enable the use of vector instructions in its most expensive kernels within the NERSC Exascale Science Applications Program. We study the single-node performance of the code on an absolute scale using the roofline methodology to guide optimization efforts. We have obtained 2x speedups in single node performance due to enabling vectorization and performing memory layout optimizations. On multiple nodes, the code is shown to scale well up to 4000 nodes, near half the size of the machine. We discuss some communication bottlenecks that were identified and resolved during the work.

  11. Application of high-performance computing to numerical simulation of human movement

    NASA Technical Reports Server (NTRS)

    Anderson, F. C.; Ziegler, J. M.; Pandy, M. G.; Whalen, R. T.

    1995-01-01

We have examined the feasibility of using massively parallel and vector-processing supercomputers to solve large-scale optimization problems for human movement. Specifically, we compared the computational expense of determining the optimal controls for the single support phase of gait using a conventional serial machine (SGI Iris 4D25), a MIMD parallel machine (Intel iPSC/860), and a parallel-vector-processing machine (Cray Y-MP 8/864). With the human body modeled as a 14 degree-of-freedom linkage actuated by 46 musculotendinous units, computation of the optimal controls for gait could take up to 3 months of CPU time on the Iris. Both the Cray and the Intel are able to reduce this time to practical levels. The optimal solution for gait can be found with about 77 hours of CPU time on the Cray and about 88 hours on the Intel. Although the overall speeds of the Cray and the Intel were found to be similar, the unique capabilities of each machine are better suited to different portions of the computational algorithm used. The Intel was best suited to computing the derivatives of the performance criterion and the constraints, whereas the Cray was best suited to parameter optimization of the controls. These results suggest that the ideal computer architecture for solving very large-scale optimal control problems is a hybrid system in which a vector-processing machine is integrated into the communication network of a MIMD parallel machine.

  12. Optimal ballistically captured Earth-Moon transfers

    NASA Astrophysics Data System (ADS)

    Ricord Griesemer, Paul; Ocampo, Cesar; Cooley, D. S.

    2012-07-01

The optimality of a low-energy Earth-Moon transfer terminating in ballistic capture is examined for the first time using primer vector theory. An optimal control problem is formed with the following free variables: the location, time, and magnitude of the transfer insertion burn, and the transfer time. A constraint is placed on the initial state of the spacecraft to bind it to a given initial orbit around a first body, and on the final state of the spacecraft to limit its Keplerian energy with respect to a second body. Optimal transfers in the system are shown to meet certain conditions placed on the primer vector and its time derivative. A two-point boundary-value problem containing these necessary conditions is created for use in targeting optimal transfers. The two-point boundary-value problem is then applied to the ballistic lunar capture problem, and an optimal trajectory is shown. Additionally, the problem is modified to fix the time of transfer, allowing for optimal multi-impulse transfers. The tradeoff between transfer time and fuel cost is shown for Earth-Moon ballistic lunar capture transfers.

  13. Parallel-vector computation for structural analysis and nonlinear unconstrained optimization problems

    NASA Technical Reports Server (NTRS)

    Nguyen, Duc T.

    1990-01-01

Practical engineering applications can often be formulated as constrained optimization problems. There are several solution algorithms for solving a constrained optimization problem. One approach is to convert a constrained problem into a series of unconstrained problems; unconstrained solution algorithms can then be used as part of the constrained solution algorithms. Structural optimization is an iterative process: one starts with an initial design, and a finite element structural analysis is performed to calculate the response of the system (displacements, stresses, eigenvalues, etc.). Based upon sensitivity information on the objective and constraint functions, an optimizer such as ADS or IDESIGN can be used to find a new, improved design. For the structural analysis phase, the equation solver for the system of simultaneous linear equations plays a key role, since it is needed for static, eigenvalue, and dynamic analysis alike. For practical, large-scale structural analysis-synthesis applications, computational time can be excessively large. Thus, it is necessary to have a new structural analysis-synthesis code which employs new solution algorithms to exploit both the parallel and vector capabilities offered by modern, high-performance computers such as the Convex, Cray-2, and Cray Y-MP. The objective of this research project is, therefore, to incorporate the latest developments in the parallel-vector equation solver PVSOLVE into a widely used finite-element production code such as SAP-4. Furthermore, several nonlinear unconstrained optimization subroutines have also been developed and tested in a parallel computing environment. The unconstrained optimization subroutines are not only useful in their own right, but can also be incorporated into a more popular constrained optimization code such as ADS.

  14. 25 CFR 522.7 - Disapproval of a class III ordinance.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    Title 25, Indians; NATIONAL INDIAN GAMING COMMISSION, DEPARTMENT OF THE INTERIOR; APPROVAL OF CLASS II AND CLASS III ORDINANCES AND RESOLUTIONS; SUBMISSION OF GAMING ORDINANCE OR RESOLUTION; § 522.7 Disapproval of a class III...

  15. 25 CFR 522.5 - Disapproval of a class II ordinance.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    Title 25, Indians; NATIONAL INDIAN GAMING COMMISSION, DEPARTMENT OF THE INTERIOR; APPROVAL OF CLASS II AND CLASS III ORDINANCES AND RESOLUTIONS; SUBMISSION OF GAMING ORDINANCE OR RESOLUTION; § 522.5 Disapproval of a class II...

  16. Ordinary Least Squares Estimation of Parameters in Exploratory Factor Analysis with Ordinal Data

    ERIC Educational Resources Information Center

    Lee, Chun-Ting; Zhang, Guangjian; Edwards, Michael C.

    2012-01-01

    Exploratory factor analysis (EFA) is often conducted with ordinal data (e.g., items with 5-point responses) in the social and behavioral sciences. These ordinal variables are often treated as if they were continuous in practice. An alternative strategy is to assume that a normally distributed continuous variable underlies each ordinal variable.…

  17. Local Area Co-Ordination: Strengthening Support for People with Learning Disabilities in Scotland

    ERIC Educational Resources Information Center

    Stalker, Kirsten Ogilvie; Malloch, Margaret; Barry, Monica Anne; Watson, June Ann

    2008-01-01

    This paper reports the findings of a study commissioned by the Scottish Executive which examined the introduction and implementation of local area co-ordination (LAC) in Scotland. A questionnaire about their posts was completed by 44 local area co-ordinators, interviews were conducted with 35 local area co-ordinators and 14 managers and case…

  18. Light scattering of rectangular slot antennas: parallel magnetic vector vs perpendicular electric vector

    NASA Astrophysics Data System (ADS)

    Lee, Dukhyung; Kim, Dai-Sik

    2016-01-01

We study light scattering off rectangular slot nanoantennas on a metal film, varying incident polarization and incident angle, to examine which field vector of light is more important: the electric vector perpendicular to, or the magnetic vector parallel to, the long axis of the rectangle. While vector Babinet's principle would favor the magnetic field along the long axis for optimizing slot antenna function, convention and intuition most often refer to the electric field perpendicular to it. Here, we demonstrate experimentally that, in accordance with vector Babinet's principle, the incident magnetic vector parallel to the long axis is the dominant component, with the perpendicular incident electric field making a small contribution of order 1/|ε|, the reciprocal of the absolute value of the dielectric constant of the metal, owing to the non-perfectness of metals at optical frequencies.

  19. Frozen orbit realization using LQR analogy

    NASA Astrophysics Data System (ADS)

    Nagarajan, N.; Rayan, H. Reno

In the case of remote sensing orbits, the frozen orbit concept minimizes altitude variations over a given region by passive means. This is achieved by establishing the mean eccentricity vector at the orbital poles, i.e., by fixing the mean argument of perigee at 90 deg with an appropriate eccentricity to balance the perturbations due to the zonal harmonics J2 and J3 of the Earth's potential. The eccentricity vector is a vector whose magnitude is the eccentricity and whose direction is the argument of perigee. Launcher dispersions result in an eccentricity vector that is away from the frozen orbit values. The objective is then to formulate an orbit maneuver strategy that optimizes the fuel required to achieve the frozen orbit in the presence of visibility and impulse constraints. It is shown that the motion of the eccentricity vector around the frozen perigee can be approximated as a circle. Combining the circular motion of the eccentricity vector around the frozen point with the maneuver equation yields the discrete equation X(k+1) = AX(k) + Bu(k), where X is the state (the eccentricity vector components), A the state transition matrix, u the scalar control force (dV in this case), and B the control matrix which transforms dV into an eccentricity vector change. Based on this, it is shown that the problem of optimizing the fuel can be treated as a Linear Quadratic Regulator (LQR) problem, in which the maneuver can be solved by deriving an analogous LQR design using control system design tools such as MATLAB.
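
    The LQR analogy above can be sketched as follows. The rotation angle, control matrix, and weights are illustrative values in normalized units, not flight data, and the Riccati equation is solved by plain backward iteration rather than a MATLAB toolbox.

    ```python
    import numpy as np

    # Eccentricity vector rotates about the frozen point: A is a rotation.
    theta = 0.5                       # rotation per maneuver epoch (assumed)
    A = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    B = np.array([[1.0], [0.0]])      # effect of a unit dV on the e-vector (assumed)
    Q = np.eye(2)                     # penalize deviation from the frozen point
    R = np.array([[1.0]])             # penalize fuel (dV)

    # Solve the discrete Riccati equation by backward iteration.
    P = Q.copy()
    for _ in range(500):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)

    # Closed loop drives the eccentricity-vector offset toward the frozen point.
    X = np.array([[0.02], [0.01]])    # initial offset from the frozen values
    for _ in range(100):
        u = -K @ X                    # maneuver magnitude dV
        X = A @ X + B @ u
    print(np.linalg.norm(X))          # residual offset, small after 100 epochs
    ```

    The same structure carries over directly to the paper's problem: only A, B, and the weights change, and the resulting gain K gives the fuel-optimal trade-off between offset decay and total dV.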

  20. From nonlinear optimization to convex optimization through firefly algorithm and indirect approach with applications to CAD/CAM.

    PubMed

    Gálvez, Akemi; Iglesias, Andrés

    2013-01-01

    Fitting spline curves to data points is a very important issue in many applied fields. It is also challenging, because these curves typically depend on many continuous variables in a highly interrelated nonlinear way. In general, it is not possible to compute these parameters analytically, so the problem is formulated as a continuous nonlinear optimization problem, for which traditional optimization techniques usually fail. This paper presents a new bioinspired method to tackle this issue. In this method, optimization is performed through a combination of two techniques. Firstly, we apply the indirect approach to the knots, in which they are not initially the subject of optimization but precomputed with a coarse approximation scheme. Secondly, a powerful bioinspired metaheuristic technique, the firefly algorithm, is applied to optimization of data parameterization; then, the knot vector is refined by using De Boor's method, thus yielding a better approximation to the optimal knot vector. This scheme converts the original nonlinear continuous optimization problem into a convex optimization problem, solved by singular value decomposition. Our method is applied to some illustrative real-world examples from the CAD/CAM field. Our experimental results show that the proposed scheme can solve the original continuous nonlinear optimization problem very efficiently.
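
    The key convexification step described above, fixing the data parameterization so that the remaining fit is a linear least-squares problem solvable by singular value decomposition, can be illustrated with a toy example. A polynomial basis stands in for the paper's B-spline basis, and the data are synthetic.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    t = np.linspace(0.0, 1.0, 50)            # fixed data parameterization
    points = np.sin(2 * np.pi * t) + 0.01 * rng.standard_normal(50)

    degree = 7
    B = np.vander(t, degree + 1)             # basis matrix, one row per data point

    # Solve min ||B c - points||^2 via the SVD (pseudoinverse).
    U, s, Vt = np.linalg.svd(B, full_matrices=False)
    c = Vt.T @ ((U.T @ points) / s)

    residual = np.linalg.norm(B @ c - points)
    print(residual)                          # small: dominated by the noise level
    ```

    In the paper's scheme, the metaheuristic searches only over the parameterization (and refined knots); for each candidate, a solve like this one gives the globally optimal coefficients for that candidate.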

  1. From Nonlinear Optimization to Convex Optimization through Firefly Algorithm and Indirect Approach with Applications to CAD/CAM

    PubMed Central

    Gálvez, Akemi; Iglesias, Andrés

    2013-01-01

    Fitting spline curves to data points is a very important issue in many applied fields. It is also challenging, because these curves typically depend on many continuous variables in a highly interrelated nonlinear way. In general, it is not possible to compute these parameters analytically, so the problem is formulated as a continuous nonlinear optimization problem, for which traditional optimization techniques usually fail. This paper presents a new bioinspired method to tackle this issue. In this method, optimization is performed through a combination of two techniques. Firstly, we apply the indirect approach to the knots, in which they are not initially the subject of optimization but precomputed with a coarse approximation scheme. Secondly, a powerful bioinspired metaheuristic technique, the firefly algorithm, is applied to optimization of data parameterization; then, the knot vector is refined by using De Boor's method, thus yielding a better approximation to the optimal knot vector. This scheme converts the original nonlinear continuous optimization problem into a convex optimization problem, solved by singular value decomposition. Our method is applied to some illustrative real-world examples from the CAD/CAM field. Our experimental results show that the proposed scheme can solve the original continuous nonlinear optimization problem very efficiently. PMID:24376380

  2. Tourism and hotel revenues before and after passage of smoke-free restaurant ordinances.

    PubMed

    Glantz, S A; Charlesworth, A

    1999-05-26

Claims that ordinances requiring smoke-free restaurants will adversely affect tourism have been used to argue against passing such ordinances. Data exist regarding the validity of these claims. To determine the changes in hotel revenues and international tourism after passage of smoke-free restaurant ordinances in locales where the effect has been debated. Comparison of hotel revenues and tourism rates before and after passage of 100% smoke-free restaurant ordinances and comparison with US hotel revenue overall. Three states (California, Utah, and Vermont) and 6 cities (Boulder, Colo; Flagstaff, Ariz; Los Angeles, Calif; Mesa, Ariz; New York, NY; and San Francisco, Calif) in which the effect on tourism of smoke-free restaurant ordinances had been debated. Hotel room revenues and hotel revenues as a fraction of total retail sales compared with preordinance revenues and overall US revenues. In constant 1997 dollars, passage of the smoke-free restaurant ordinance was associated with a statistically significant increase in the rate of change of hotel revenues in 4 localities, no significant change in 4 localities, and a significant slowing in the rate of increase (but not a decrease) in 1 locality. There was no significant change in the rate of change of hotel revenues as a fraction of total retail sales (P = .16) or total US hotel revenues associated with the ordinances when pooled across all localities (P = .93). International tourism was either unaffected or increased following implementation of the smoke-free ordinances. Smoke-free ordinances do not appear to adversely affect, and may increase, tourist business.

  3. Co-ordinated action between youth-care and sports: facilitators and barriers.

    PubMed

    Hermens, Niels; de Langen, Lisanne; Verkooijen, Kirsten T; Koelen, Maria A

    2017-07-01

In the Netherlands, youth-care organisations and community sports clubs are collaborating to increase socially vulnerable youths' participation in sport. This is rooted in the idea that sports clubs are settings for youth development. As not much is known about co-ordinated action involving professional care organisations and community sports clubs, this study aims to generate insight into facilitators of and barriers to successful co-ordinated action between these two organisations. A cross-sectional study was conducted using in-depth semi-structured qualitative interview data. In total, 23 interviews were held at five locations where co-ordinated action between youth-care and sports takes place. Interviewees were youth-care workers, representatives from community sports clubs, and Care Sport Connectors who were assigned to encourage and manage the co-ordinated action. Using inductive coding procedures, this study shows that existing and good relationships, a boundary spanner, care workers' attitudes, knowledge and competences of the participants, organisational policies and ambitions, and some elements external to the co-ordinated action were reported to be facilitators or barriers. In addition, the participants reported that the different facilitators and barriers influenced the success of the co-ordinated action at different stages of the co-ordinated action. Future research is recommended to further explore the role of boundary spanners in co-ordinated action involving social care organisations and community sports clubs, and to identify what external elements (e.g. events, processes, national policies) are turning points in the formation, implementation and continuation of such co-ordinated action.

  4. Survey of local forestry-related ordinances and regulations in the south

    Treesearch

    Jonathan J. Spink; Karry L. Haney; John L. Greene

    2000-01-01

A survey of the 13 southern states was conducted in 1999-2000 to obtain a comprehensive list of forestry-related ordinances enacted by various local governments. Each ordinance was examined to determine the date of adoption, regulatory objective, and its regulatory provisions. Based on the regulatory objective, the ordinances were categorized into five general types:...

  5. Knowledge of the ordinal position of list items in pigeons.

    PubMed

    Scarf, Damian; Colombo, Michael

    2011-10-01

Ordinal knowledge is a fundamental aspect of advanced cognition. It is self-evident that humans represent ordinal knowledge, and over the past 20 years it has become clear that nonhuman primates share this ability. In contrast, evidence that nonprimate species represent ordinal knowledge is missing from the comparative literature. To address this issue, in the present experiment we trained pigeons on three 4-item lists and then tested them with derived lists in which, relative to the training lists, the ordinal position of the items was either maintained or changed. Similar to the findings with human and nonhuman primates, our pigeons performed markedly better on the maintained lists compared to the changed lists, and displayed errors consistent with the view that they used their knowledge of ordinal position to guide responding on the derived lists. These findings demonstrate that the ability to acquire ordinal knowledge is not unique to the primate lineage.

  6. Development of a new calibration procedure and its experimental validation applied to a human motion capture system.

    PubMed

    Royo Sánchez, Ana Cristina; Aguilar Martín, Juan José; Santolaria Mazo, Jorge

    2014-12-01

Motion capture systems are often used for checking and analyzing human motion in biomechanical applications. It is important, in this context, that the systems provide the best possible accuracy. Among existing capture systems, optical systems are those with the highest accuracy. In this paper, the development of a new calibration procedure for optical human motion capture systems is presented. The performance and effectiveness of that new calibration procedure are also checked by experimental validation. The new calibration procedure consists of two stages. In the first stage, initial estimators of intrinsic and extrinsic parameters are sought. The camera calibration method used in this stage is the one proposed by Tsai. These parameters are determined from the camera characteristics, the spatial position of the camera, and the center of the capture volume. In the second stage, a simultaneous nonlinear optimization of all parameters is performed to identify the optimal values, which minimize the objective function. The objective function, in this case, minimizes two errors. The first error is the distance error between two markers placed in a wand. The second error is the error of position and orientation of the retroreflective markers of a static calibration object. The real co-ordinates of the two objects are calibrated in a co-ordinate measuring machine (CMM). The OrthoBio system is used to validate the new calibration procedure. The errors obtained are 90% lower than those from the previous calibration software and broadly comparable with results from a similarly configured Vicon system.

  7. Improved Prefusion Stability, Optimized Codon Usage, and Augmented Virion Packaging Enhance the Immunogenicity of Respiratory Syncytial Virus Fusion Protein in a Vectored-Vaccine Candidate

    PubMed Central

    Liang, Bo; Ngwuta, Joan O.; Surman, Sonja; Kabatova, Barbora; Liu, Xiang; Lingemann, Matthias; Liu, Xueqiao; Yang, Lijuan; Herbert, Richard; Swerczek, Joanna; Chen, Man; Moin, Syed M.; Kumar, Azad; McLellan, Jason S.; Kwong, Peter D.; Graham, Barney S.; Collins, Peter L.

    2017-01-01

Respiratory syncytial virus (RSV) is the most important viral agent of severe pediatric respiratory tract disease worldwide, but it lacks a licensed vaccine or suitable antiviral drug. A live attenuated chimeric bovine/human parainfluenza virus type 3 (rB/HPIV3) was developed previously as a vector expressing RSV fusion (F) protein to confer bivalent protection against RSV and HPIV3. In a previous clinical trial in virus-naive children, rB/HPIV3 was well tolerated but the immunogenicity of wild-type RSV F was unsatisfactory. We previously modified RSV F with a designed disulfide bond (DS) to increase stability in the prefusion (pre-F) conformation and to be efficiently packaged in the vector virion. Here, we further stabilized pre-F by adding both disulfide and cavity-filling mutations (DS-Cav1), and we also modified RSV F codon usage to have a lower CpG content and a higher level of expression. This RSV F open reading frame was evaluated in rB/HPIV3 in three forms: (i) pre-F without vector-packaging signal, (ii) pre-F with vector-packaging signal, and (iii) secreted pre-F ectodomain trimer. Despite being efficiently expressed, the secreted pre-F was poorly immunogenic. DS-Cav1 stabilized pre-F, with or without packaging, induced higher titers of pre-F specific antibodies in hamsters, and improved the quality of RSV-neutralizing serum antibodies. Codon-optimized RSV F containing fewer CpG dinucleotides had higher F expression, replicated more efficiently in vivo, and was more immunogenic. The combination of DS-Cav1 pre-F stabilization, optimized codon usage, reduced CpG content, and vector packaging significantly improved vector immunogenicity and protective efficacy against RSV. This provides an improved vectored RSV vaccine candidate suitable for pediatric clinical evaluation. IMPORTANCE RSV and HPIV3 are the first and second leading viral causes of severe pediatric respiratory disease worldwide. 
Licensed vaccines or suitable antiviral drugs are not available. We are developing a chimeric rB/HPIV3 vector expressing RSV F as a bivalent RSV/HPIV3 vaccine and have been evaluating means to increase RSV F immunogenicity. In this study, we evaluated the effects of improved stabilization of F in the pre-F conformation and of codon optimization resulting in reduced CpG content and greater pre-F expression. Reduced CpG content dampened the interferon response to infection, promoting higher replication and increased F expression. We demonstrate that improved pre-F stabilization and strategic manipulation of codon usage, together with efficient pre-F packaging into vector virions, significantly increased F immunogenicity in the bivalent RSV/HPIV3 vaccine. The improved immunogenicity included induction of increased titers of high-quality complement-independent antibodies with greater pre-F site Ø binding and greater protection against RSV challenge. PMID:28539444

  8. Self-Contained Automated Methodology for Optimal Flow Control

    NASA Technical Reports Server (NTRS)

    Joslin, Ronald D.; Gunzburger, Max D.; Nicolaides, Roy A.; Erlebacher, Gordon; Hussaini, M. Yousuff

    1997-01-01

    This paper describes a self-contained, automated methodology for active flow control which couples the time-dependent Navier-Stokes system with an adjoint Navier-Stokes system and optimality conditions from which optimal states, i.e., unsteady flow fields and controls (e.g., actuators), may be determined. The problem of boundary layer instability suppression through wave cancellation is used as the initial validation case to test the methodology. Here, the objective of control is to match the stress vector along a portion of the boundary to a given vector; instability suppression is achieved by choosing the given vector to be that of a steady base flow. Control is effected through the injection or suction of fluid through a single orifice on the boundary. The results demonstrate that instability suppression can be achieved without any a priori knowledge of the disturbance, which is significant because other control techniques have required some knowledge of the flow unsteadiness such as frequencies, instability type, etc. The present methodology has been extended to three dimensions and may potentially be applied to separation control, re-laminarization, and turbulence control applications using one to many sensors and actuators.

  9. The q-G method: A q-version of the Steepest Descent method for global optimization.

    PubMed

    Soterroni, Aline C; Galski, Roberto L; Scarabello, Marluce C; Ramos, Fernando M

    2015-01-01

In this work, the q-Gradient (q-G) method, a q-version of the Steepest Descent method, is presented. The main idea behind the q-G method is to use the negative of the q-gradient vector of the objective function as the search direction. The q-gradient vector, or simply the q-gradient, is a generalization of the classical gradient vector based on the concept of Jackson's derivative from the q-calculus. Its use provides the algorithm with an effective mechanism for escaping from local minima. The q-G method reduces to the Steepest Descent method when the parameter q tends to 1. The algorithm has three free parameters and is implemented so that the search process gradually shifts from global exploration in the beginning to local exploitation in the end. We evaluated the q-G method on 34 test functions, and compared its performance with that of 34 optimization algorithms, including derivative-free algorithms and the Steepest Descent method. Our results show that the q-G method is competitive and has great potential for solving multimodal optimization problems.
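
    The q-gradient at the heart of the method can be sketched from Jackson's derivative, D_q f(x) = (f(qx) - f(x)) / ((q - 1)x). The deterministic schedule for q below is a simplified assumption, not the authors' three-parameter scheme, and the objective is a trivial convex function rather than their multimodal test set.

    ```python
    import numpy as np

    def q_gradient(f, x, q):
        """Componentwise Jackson q-derivative of f at x:
        D_q f ~ (f(..., q*x_i, ...) - f(x)) / ((q - 1) * x_i)."""
        g = np.zeros_like(x)
        fx = f(x)
        for i in range(len(x)):
            xq = x.copy()
            xq[i] *= q
            g[i] = (f(xq) - fx) / ((q - 1.0) * x[i])
        return g

    def f(x):                       # simple convex test objective
        return np.sum(x ** 2)

    x = np.array([3.0, -2.0])
    q = 2.0                         # start far from 1: exploratory steps
    for _ in range(200):
        x = x - 0.05 * q_gradient(f, x, q)
        q = 1.0 + 0.99 * (q - 1.0)  # let q -> 1: recover steepest descent
    print(f(x))                     # objective value near the minimum at 0
    ```

    For q far from 1 the q-gradient probes the function at points scaled away from x, which is what gives the full method its global-exploration behavior before q shrinks toward 1.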

  10. Optimized scalar promotion with load and splat SIMD instructions

    DOEpatents

    Eichenberger, Alexander E; Gschwind, Michael K; Gunnels, John A

    2013-10-29

    Mechanisms for optimizing scalar code executed on a single instruction multiple data (SIMD) engine are provided. Placement of vector operation-splat operations may be determined based on an identification of scalar and SIMD operations in an original code representation. The original code representation may be modified to insert the vector operation-splat operations based on the determined placement of vector operation-splat operations to generate a first modified code representation. Placement of separate splat operations may be determined based on identification of scalar and SIMD operations in the first modified code representation. The first modified code representation may be modified to insert or delete separate splat operations based on the determined placement of the separate splat operations to generate a second modified code representation. SIMD code may be output based on the second modified code representation for execution by the SIMD engine.

  11. Optimized scalar promotion with load and splat SIMD instructions

    DOEpatents

    Eichenberger, Alexandre E [Chappaqua, NY; Gschwind, Michael K [Chappaqua, NY; Gunnels, John A [Yorktown Heights, NY

    2012-08-28

    Mechanisms for optimizing scalar code executed on a single instruction multiple data (SIMD) engine are provided. Placement of vector operation-splat operations may be determined based on an identification of scalar and SIMD operations in an original code representation. The original code representation may be modified to insert the vector operation-splat operations based on the determined placement of vector operation-splat operations to generate a first modified code representation. Placement of separate splat operations may be determined based on identification of scalar and SIMD operations in the first modified code representation. The first modified code representation may be modified to insert or delete separate splat operations based on the determined placement of the separate splat operations to generate a second modified code representation. SIMD code may be output based on the second modified code representation for execution by the SIMD engine.

  12. Principles for valid histopathologic scoring in research

    PubMed Central

    Gibson-Corley, Katherine N.; Olivier, Alicia K.; Meyerholz, David K.

    2013-01-01

    Histopathologic scoring is a tool by which semi-quantitative data can be obtained from tissues. Initially, a thorough understanding of the experimental design, study objectives and methods are required to allow the pathologist to appropriately examine tissues and develop lesion scoring approaches. Many principles go into the development of a scoring system such as tissue examination, lesion identification, scoring definitions and consistency in interpretation. Masking (a.k.a. “blinding”) of the pathologist to experimental groups is often necessary to constrain bias and multiple mechanisms are available. Development of a tissue scoring system requires appreciation of the attributes and limitations of the data (e.g. nominal, ordinal, interval and ratio data) to be evaluated. Incidence, ordinal and rank methods of tissue scoring are demonstrated along with key principles for statistical analyses and reporting. Validation of a scoring system occurs through two principal measures: 1) validation of repeatability and 2) validation of tissue pathobiology. Understanding key principles of tissue scoring can help in the development and/or optimization of scoring systems so as to consistently yield meaningful and valid scoring data. PMID:23558974
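
    The rank-based analysis mentioned above can be illustrated with a hand-rolled Mann-Whitney U computation on ordinal lesion scores; the scores, group sizes, and 0-4 scale below are invented for illustration.

    ```python
    import numpy as np

    control = np.array([0, 1, 1, 0, 2, 1])   # ordinal scores on a 0-4 scale
    treated = np.array([2, 3, 2, 4, 3, 2])

    # Rank all scores together, averaging ranks within tie blocks.
    combined = np.concatenate([control, treated])
    order = combined.argsort(kind="stable")
    ranks = np.empty(len(combined))
    i = 0
    while i < len(combined):
        j = i
        while j + 1 < len(combined) and combined[order[j + 1]] == combined[order[i]]:
            j += 1
        ranks[order[i:j + 1]] = (i + j) / 2 + 1   # average rank for the tie block
        i = j + 1

    n1 = len(control)
    R1 = ranks[:n1].sum()
    U1 = R1 - n1 * (n1 + 1) / 2    # U statistic for the control group
    print(U1)                       # -> 1.5 (near 0: control scores mostly lower)
    ```

    Working on ranks rather than treating the scores as interval data respects the ordinal nature of the scale, which is the statistical point the review emphasizes.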

  13. Urban Runoff: Model Ordinances for Erosion and Sediment Control

    EPA Pesticide Factsheets

The model ordinance in this section borrows language from erosion and sediment control ordinances, highlighting features that might help prevent erosion and sedimentation and protect natural resources more fully.

  14. Optimal preview control for a linear continuous-time stochastic control system in finite-time horizon

    NASA Astrophysics Data System (ADS)

    Wu, Jiang; Liao, Fucheng; Tomizuka, Masayoshi

    2017-01-01

This paper discusses the design of an optimal preview controller for a linear continuous-time stochastic control system in a finite-time horizon, using the augmented error system method. First, an assistant system is introduced for state shifting. Then, because the state equation of the stochastic control system cannot be differentiated due to Brownian motion, an integrator is introduced, and the augmented error system, which contains the integrator vector, control input, reference signal, error vector, and state of the system, is reconstructed. The tracking problem of optimal preview control for the linear stochastic control system is thereby transformed into the optimal output tracking problem for the augmented error system. Using dynamic programming from the theory of stochastic control, the optimal controller with previewable signals for the augmented error system, which is equivalent to the controller of the original system, is obtained. Finally, numerical simulations show the effectiveness of the controller.

  15. A dual host vector for Fab phage display and expression of native IgG in mammalian cells.

    PubMed

    Tesar, Devin; Hötzel, Isidro

    2013-10-01

    A significant bottleneck in antibody discovery by phage display is the transfer of immunoglobulin variable regions from phage clones to vectors that express immunoglobulin G (IgG) in mammalian cells for screening. Here, we describe a novel phagemid vector for Fab phage display that allows expression of native IgG in mammalian cells without sub-cloning. The vector uses an optimized mammalian signal sequence that drives robust expression of Fab fragments fused to an M13 phage coat protein in Escherichia coli and IgG expression in mammalian cells. To allow the expression of Fab fragments fused to a phage coat protein in E. coli and full-length IgG in mammalian cells from the same vector without sub-cloning, the sequence encoding the phage coat protein was embedded in an optimized synthetic intron within the immunoglobulin heavy chain gene. This intron is removed from transcripts in mammalian cells by RNA splicing. Using this vector, we constructed a synthetic Fab phage display library with diversity in the heavy chain only and selected for clones binding different antigens. Co-transfection of mammalian cells with DNA from individual phage clones and a plasmid expressing the invariant light chain resulted in the expression of native IgG that was used to assay affinity, ligand blocking activity and specificity.

  16. Attitude Control for an Aero-Vehicle Using Vector Thrusting and Variable Speed Control Moment Gyros

    NASA Technical Reports Server (NTRS)

    Shin, Jong-Yeob; Lim, K. B.; Moerder, D. D.

    2005-01-01

    Stabilization of passively unstable thrust-levitated vehicles can require significant control inputs. Although thrust vectoring is a straightforward choice for realizing these inputs, it may lead to difficulties discussed in the paper. This paper examines supplementing thrust vectoring with Variable-Speed Control Moment Gyroscopes (VSCMGs). The paper describes how to allocate VSCMGs and the vectored thrust mechanism for attitude stabilization in the frequency domain, and also shows the trade-off between vectored thrust and VSCMGs. Using an H2 control synthesis methodology based on LMI optimization, a feedback control law is designed for a thrust-levitated research vehicle and is simulated with the full nonlinear model. It is demonstrated that VSCMGs can reduce the use of vectored thrust variation for stabilizing the hovering platform in the presence of strong wind gusts.

  17. Data-Driven Sampling Matrix Boolean Optimization for Energy-Efficient Biomedical Signal Acquisition by Compressive Sensing.

    PubMed

    Wang, Yuhao; Li, Xin; Xu, Kai; Ren, Fengbo; Yu, Hao

    2017-04-01

    Compressive sensing is widely used in biomedical applications, and the sampling matrix plays a critical role in both the quality and power consumption of signal acquisition. It projects a high-dimensional vector of data into a low-dimensional subspace by matrix-vector multiplication. An optimal sampling matrix can ensure accurate data reconstruction and/or a high compression ratio. Most existing optimization methods can only produce real-valued embedding matrices, which result in large energy consumption during data acquisition. In this paper, we propose an efficient method that finds an optimal Boolean sampling matrix in order to reduce the energy consumption. Compared to random Boolean embedding, our data-driven Boolean sampling matrix can improve the image recovery quality by 9 dB. Moreover, in terms of sampling hardware complexity, it reduces the energy consumption by 4.6× and the silicon area by 1.9× over the data-driven real-valued embedding.
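
    The core operation described here is a single matrix-vector product y = Φx with an m × n sampling matrix Φ. As a minimal pure-Python sketch (not the paper's data-driven optimization; the Boolean matrix below is simply random, and all names are illustrative), it shows why Boolean entries are attractive for low-power hardware: each measurement reduces to additions gated by 0/1 entries, with no real-valued multiplications.

    ```python
    import random

    def boolean_sampling(x, m, seed=0):
        """Project an n-dimensional signal x into m < n measurements
        using a random Boolean (0/1) sampling matrix."""
        rng = random.Random(seed)
        n = len(x)
        phi = [[rng.randint(0, 1) for _ in range(n)] for _ in range(m)]
        # Each measurement y[i] is a sum of the entries of x selected by row i.
        y = [sum(xj for phi_ij, xj in zip(phi[i], x) if phi_ij) for i in range(m)]
        return phi, y

    x = [1.0, 0.0, 0.0, 2.0, 0.0, 0.0, 0.0, 3.0]  # toy sparse signal, n = 8
    phi, y = boolean_sampling(x, m=4)              # compress 8 samples into 4
    ```

    Recovering x from y is the separate (and harder) reconstruction step, typically solved by sparse optimization; the sketch covers only the acquisition side that the paper's hardware-energy figures refer to.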

  18. Intelligent control for PMSM based on online PSO considering parameters change

    NASA Astrophysics Data System (ADS)

    Song, Zhengqiang; Yang, Huiling

    2018-03-01

    A novel online particle swarm optimization (PSO) method is proposed to design the speed and current controllers of vector-controlled interior permanent magnet synchronous motor drives, taking stator resistance variation into account. In the proposed drive system, the space vector modulation technique is employed to generate the switching signals for a two-level voltage-source inverter. The nonlinearity of the inverter due to the dead-time, threshold and voltage drop of the switching devices is also taken into account in order to simulate the system under practical conditions. The speed and current PI controller gains are optimized online with PSO, and the fitness function is changed according to the system's dynamic and steady states. The proposed optimization algorithm is compared with the conventional PI control method under step speed changes and stator resistance variation, showing that the proposed online optimization method has better robustness and dynamic characteristics than the conventional PI controller design.

  19. Design of an optimal preview controller for linear discrete-time descriptor systems with state delay

    NASA Astrophysics Data System (ADS)

    Cao, Mengjuan; Liao, Fucheng

    2015-04-01

    In this paper, the linear discrete-time descriptor system with state delay is studied, and a design method for an optimal preview controller is proposed. First, by using the discrete lifting technique, the original system is transformed into a general descriptor system without state delay in form. Then, taking advantage of the first-order forward difference operator, we construct a descriptor augmented error system, including the state vectors of the lifted system, error vectors, and desired target signals. Rigorous mathematical proofs are given for the regularity, stabilisability, causal controllability, and causal observability of the descriptor augmented error system. Based on these, the optimal preview controller with preview feedforward compensation for the original system is obtained by using the standard optimal regulator theory of the descriptor system. The effectiveness of the proposed method is shown by numerical simulation.

  20. Insecticide Resistance and Malaria Vector Control: The Importance of Fitness Cost Mechanisms in Determining Economically Optimal Control Trajectories

    PubMed Central

    Brown, Zachary S.; Dickinson, Katherine L.; Kramer, Randall A.

    2014-01-01

    The evolutionary dynamics of insecticide resistance in harmful arthropods has economic implications, not only for the control of agricultural pests (as has been well studied), but also for the control of disease vectors, such as malaria-transmitting Anopheles mosquitoes. Previous economic work on insecticide resistance illustrates the policy relevance of knowing whether insecticide resistance mutations involve fitness costs. Using a theoretical model, this article investigates economically optimal strategies for controlling malaria-transmitting mosquitoes when there is the potential for mosquitoes to evolve resistance to insecticides. Consistent with previous literature, we find that fitness costs are a key element in the computation of economically optimal resistance management strategies. Additionally, our models indicate that different biological mechanisms underlying these fitness costs (e.g., increased adult mortality and/or decreased fecundity) can significantly alter economically optimal resistance management strategies. PMID:23448053

  1. Optimal sampling strategies for detecting zoonotic disease epidemics.

    PubMed

    Ferguson, Jake M; Langebrake, Jessica B; Cannataro, Vincent L; Garcia, Andres J; Hamman, Elizabeth A; Martcheva, Maia; Osenberg, Craig W

    2014-06-01

    The early detection of disease epidemics reduces the chance of successful introductions into new locales, minimizes the number of infections, and reduces the financial impact. We develop a framework to determine the optimal sampling strategy for disease detection in zoonotic host-vector epidemiological systems when a disease goes from below detectable levels to an epidemic. We find that if the time of disease introduction is known then the optimal sampling strategy can switch abruptly between sampling only from the vector population to sampling only from the host population. We also construct time-independent optimal sampling strategies when conducting periodic sampling that can involve sampling both the host and the vector populations simultaneously. Both time-dependent and -independent solutions can be useful for sampling design, depending on whether the time of introduction of the disease is known or not. We illustrate the approach with West Nile virus, a globally-spreading zoonotic arbovirus. Though our analytical results are based on a linearization of the dynamical systems, the sampling rules appear robust over a wide range of parameter space when compared to nonlinear simulation models. Our results suggest some simple rules that can be used by practitioners when developing surveillance programs. These rules require knowledge of transition rates between epidemiological compartments, which population was initially infected, and of the cost per sample for serological tests.

  2. Cache-Oblivious parallel SIMD Viterbi decoding for sequence search in HMMER.

    PubMed

    Ferreira, Miguel; Roma, Nuno; Russo, Luis M S

    2014-05-30

    HMMER is a commonly used bioinformatics tool based on Hidden Markov Models (HMMs) to analyze and process biological sequences. One of its main homology engines is based on the Viterbi decoding algorithm, which was already highly parallelized and optimized using Farrar's striped processing pattern with the Intel SSE2 instruction set extension. A new SIMD vectorization of the Viterbi decoding algorithm is proposed, based on an SSE2 inter-task parallelization approach similar to the DNA alignment algorithm proposed by Rognes. Besides this alternative vectorization scheme, the proposed implementation also introduces a new partitioning of the Markov model that allows a significantly more efficient exploitation of cache locality. This optimization, together with an improved loading of the emission scores, achieves a constant processing throughput, regardless of the innermost-cache size and of the dimension of the considered model. The proposed optimized vectorization of the Viterbi decoding algorithm was extensively evaluated and compared with the HMMER3 decoder on DNA and protein datasets, proving to be a rather competitive alternative implementation. Always faster than the already highly optimized ViterbiFilter implementation of HMMER3, the proposed Cache-Oblivious Parallel SIMD Viterbi (COPS) implementation provides a constant throughput and offers a processing speedup of up to two times, depending on the model's size.
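
    For readers less familiar with the underlying algorithm, a plain scalar Viterbi decoder is sketched below on a toy two-state HMM (the states and probabilities are illustrative, not taken from HMMER; the actual HMMER engine adds log-space scores, striped SIMD processing and the cache-aware model partitioning described above).

    ```python
    def viterbi(obs, states, start_p, trans_p, emit_p):
        """Return the most likely hidden-state path for an observation sequence."""
        # Dynamic-programming table: V[t][s] = best path probability ending in s at t.
        V = [{s: start_p[s] * emit_p[s][obs[0]] for s in states}]
        back = [{}]
        for t in range(1, len(obs)):
            V.append({})
            back.append({})
            for s in states:
                prob, prev = max(
                    (V[t - 1][p] * trans_p[p][s] * emit_p[s][obs[t]], p)
                    for p in states
                )
                V[t][s] = prob
                back[t][s] = prev
        # Trace back from the best final state.
        best = max(V[-1], key=V[-1].get)
        path = [best]
        for t in range(len(obs) - 1, 0, -1):
            path.insert(0, back[t][path[0]])
        return path

    states = ("M", "I")  # toy match/insert states
    start_p = {"M": 0.8, "I": 0.2}
    trans_p = {"M": {"M": 0.7, "I": 0.3}, "I": {"M": 0.4, "I": 0.6}}
    emit_p = {"M": {"A": 0.9, "C": 0.1}, "I": {"A": 0.2, "C": 0.8}}
    path = viterbi("ACA", states, start_p, trans_p, emit_p)
    ```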

  3. On the ordinality of numbers: A review of neural and behavioral studies.

    PubMed

    Lyons, I M; Vogel, S E; Ansari, D

    2016-01-01

    The last several years have seen steady growth in research on the cognitive and neuronal mechanisms underlying how numbers are represented as part of ordered sequences. In the present review, we synthesize what is currently known about numerical ordinality from behavioral and neuroimaging research, point out major gaps in our current knowledge, and propose several hypotheses that may bear further investigation. Evidence suggests that how we process ordinality differs from how we process cardinality, but that this difference depends strongly on context, in particular on whether numbers are presented symbolically or nonsymbolically. Results also reveal many commonalities between numerical and nonnumerical ordinal processing; however, the degree to which numerical ordinality can be reduced to domain-general mechanisms remains unclear. One proposal is that numerical ordinality relies upon more general short-term memory mechanisms as well as more numerically specific long-term memory representations. It is also evident that numerical ordinality is highly multifaceted, with symbolic representations in particular allowing for a wide range of different types of ordinal relations, the complexity of which appears to increase over development. We examine the proposal that these relations may form the basis of a richer set of associations that may prove crucial to the emergence of more complex math abilities and concepts. In sum, ordinality appears to be an important and relatively understudied facet of numerical cognition that presents substantial opportunities for new and ground-breaking research. © 2016 Elsevier B.V. All rights reserved.

  4. A solution quality assessment method for swarm intelligence optimization algorithms.

    PubMed

    Zhang, Zhaojun; Wang, Gai-Ge; Zou, Kuansheng; Zhang, Jianhua

    2014-01-01

    Nowadays, swarm intelligence optimization has become an important optimization tool and is widely used in many fields of application. In contrast to its many successful applications, its theoretical foundation is rather weak, and many problems remain to be solved. One such problem is how to quantify the performance of an algorithm in finite time, that is, how to evaluate the solution quality obtained by an algorithm on practical problems; this greatly limits application in practice. A solution quality assessment method for intelligent optimization is proposed in this paper. It is an experimental analysis method based on analysis of the search space and the characteristics of the algorithm itself. Instead of "value performance," "ordinal performance" is used as the evaluation criterion in this method. The feasible solutions were clustered according to distance to divide the solution samples into several parts. Then, the solution space and the "good enough" set can be decomposed based on the clustering results. Last, using relative knowledge of statistics, the evaluation result can be obtained. To validate the proposed method, intelligent algorithms such as ant colony optimization (ACO), particle swarm optimization (PSO), and the artificial fish swarm algorithm (AFS) were applied to the traveling salesman problem. Computational results indicate the feasibility of the proposed method.

  5. Reversal of blindness in animal models of leber congenital amaurosis using optimized AAV2-mediated gene transfer.

    PubMed

    Bennicelli, Jeannette; Wright, John Fraser; Komaromy, Andras; Jacobs, Jonathan B; Hauck, Bernd; Zelenaia, Olga; Mingozzi, Federico; Hui, Daniel; Chung, Daniel; Rex, Tonia S; Wei, Zhangyong; Qu, Guang; Zhou, Shangzhen; Zeiss, Caroline; Arruda, Valder R; Acland, Gregory M; Dell'Osso, Lou F; High, Katherine A; Maguire, Albert M; Bennett, Jean

    2008-03-01

    We evaluated the safety and efficacy of an optimized adeno-associated virus (AAV; AAV2.RPE65) in animal models of the RPE65 form of Leber congenital amaurosis (LCA). Protein expression was optimized by addition of a modified Kozak sequence at the translational start site of hRPE65. Modifications in AAV production and delivery included use of a long stuffer sequence to prevent reverse packaging from the AAV inverted-terminal repeats, and co-injection with a surfactant. The latter allows consistent and predictable delivery of a given dose of vector. We observed improved electroretinograms (ERGs) and visual acuity in Rpe65 mutant mice. This has not been reported previously using AAV2 vectors. Subretinal delivery of 8.25 × 10^10 vector genomes in affected dogs was well tolerated both locally and systemically, and treated animals showed improved visual behavior and pupillary responses, and reduced nystagmus within 2 weeks of injection. ERG responses confirmed the reversal of visual deficit. Immunohistochemistry confirmed transduction of retinal pigment epithelium cells and there was minimal toxicity to the retina as judged by histopathologic analysis. The data demonstrate that AAV2.RPE65 delivers the RPE65 transgene efficiently and quickly to the appropriate target cells in vivo in animal models. This vector holds great promise for treatment of LCA due to RPE65 mutations.

  6. Reversal of Blindness in Animal Models of Leber Congenital Amaurosis Using Optimized AAV2-mediated Gene Transfer

    PubMed Central

    Bennicelli, Jeannette; Wright, John Fraser; Komaromy, Andras; Jacobs, Jonathan B; Hauck, Bernd; Zelenaia, Olga; Mingozzi, Federico; Hui, Daniel; Chung, Daniel; Rex, Tonia S; Wei, Zhangyong; Qu, Guang; Zhou, Shangzhen; Zeiss, Caroline; Arruda, Valder R; Acland, Gregory M; Dell’Osso, Lou F; High, Katherine A; Maguire, Albert M; Bennett, Jean

    2010-01-01

    We evaluated the safety and efficacy of an optimized adeno-associated virus (AAV; AAV2.RPE65) in animal models of the RPE65 form of Leber congenital amaurosis (LCA). Protein expression was optimized by addition of a modified Kozak sequence at the translational start site of hRPE65. Modifications in AAV production and delivery included use of a long stuffer sequence to prevent reverse packaging from the AAV inverted-terminal repeats, and co-injection with a surfactant. The latter allows consistent and predictable delivery of a given dose of vector. We observed improved electroretinograms (ERGs) and visual acuity in Rpe65 mutant mice. This has not been reported previously using AAV2 vectors. Subretinal delivery of 8.25 × 10^10 vector genomes in affected dogs was well tolerated both locally and systemically, and treated animals showed improved visual behavior and pupillary responses, and reduced nystagmus within 2 weeks of injection. ERG responses confirmed the reversal of visual deficit. Immunohistochemistry confirmed transduction of retinal pigment epithelium cells and there was minimal toxicity to the retina as judged by histopathologic analysis. The data demonstrate that AAV2.RPE65 delivers the RPE65 transgene efficiently and quickly to the appropriate target cells in vivo in animal models. This vector holds great promise for treatment of LCA due to RPE65 mutations. PMID:18209734

  7. A Comprehensive Optimization Strategy for Real-time Spatial Feature Sharing and Visual Analytics in Cyberinfrastructure

    NASA Astrophysics Data System (ADS)

    Li, W.; Shao, H.

    2017-12-01

    For geospatial cyberinfrastructure-enabled web services, the ability to rapidly transmit and share spatial data over the Internet plays a critical role in meeting the demands of real-time change detection, response and decision-making. Especially for vector datasets, which serve as irreplaceable and concrete material in data-driven geospatial applications, their rich geometry and property information facilitates the development of interactive, efficient and intelligent data analysis and visualization applications. However, big-data issues have hindered the wide adoption of vector datasets in web services. In this research, we propose a comprehensive optimization strategy to enhance the performance of vector data transmission and processing. This strategy combines: 1) pre- and on-the-fly generalization, which automatically determines the proper simplification level through the introduction of an appropriate distance tolerance (ADT) to meet various visualization requirements while speeding up simplification; 2) a progressive attribute transmission method to reduce data size and therefore the service response time; 3) compressed data transmission and dynamic adoption of a compression method to maximize service efficiency under different computing and network environments. A cyberinfrastructure web portal was developed to implement the proposed technologies. After applying our optimization strategies, substantial performance enhancement is achieved. We expect this work to widen the use of web services providing vector data to support real-time spatial feature sharing, visual analytics and decision-making.

  8. Support Vector Data Description Model to Map Specific Land Cover with Optimal Parameters Determined from a Window-Based Validation Set.

    PubMed

    Zhang, Jinshui; Yuan, Zhoumiqi; Shuai, Guanyuan; Pan, Yaozhong; Zhu, Xiufang

    2017-04-26

    This paper developed an approach, the window-based validation set for support vector data description (WVS-SVDD), to determine optimal parameters for the support vector data description (SVDD) model to map specific land cover by integrating training and window-based validation sets. Compared to the conventional approach, where the validation set includes target and outlier pixels selected visually and randomly, the validation set derived from WVS-SVDD constructs a tightened hypersphere because of the compact constraint imposed by outlier pixels located adjacent to the target class in the spectral feature space. The overall accuracies achieved for wheat and bare land were as high as 89.25% and 83.65%, respectively. However, the target class was underestimated because the validation set covers only a small fraction of the heterogeneous spectra of the target class. Different window sizes were then tested to acquire more wheat pixels for the validation set. The results showed that classification accuracy increased with increasing window size, and the overall accuracies were higher than 88% at all window sizes. Moreover, WVS-SVDD showed much less sensitivity to untrained classes than the multi-class support vector machine (SVM) method. Therefore, the developed method showed its merits in using the optimal parameters, the tradeoff coefficient (C) and kernel width (s), to map homogeneous specific land cover.
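
    Conceptually, SVDD encloses the target class in a minimal hypersphere and labels anything falling inside it as target. The toy sketch below substitutes a crude centroid-plus-maximum-distance sphere for the quadratic program that SVDD actually solves (where the tradeoff coefficient C also admits slack for outliers), so it illustrates only the one-class decision rule, not the optimization; all data are made up.

    ```python
    import math

    def fit_sphere(points):
        """Crude stand-in for SVDD training: center = mean of target samples,
        radius = distance to the farthest training sample."""
        dim = len(points[0])
        center = [sum(p[i] for p in points) / len(points) for i in range(dim)]
        radius = max(math.dist(p, center) for p in points)
        return center, radius

    def is_target(x, center, radius):
        """One-class decision rule: inside the hypersphere = target class."""
        return math.dist(x, center) <= radius

    target = [(1.0, 1.0), (1.2, 0.9), (0.9, 1.1)]  # toy target-class spectra
    c, r = fit_sphere(target)
    ```

    In WVS-SVDD the window-based validation set serves to tune C and the kernel width so the learned sphere stays tight around the target class rather than being inflated by distant outliers.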

  9. Computer-aided diagnosis of lung nodule using gradient tree boosting and Bayesian optimization.

    PubMed

    Nishio, Mizuho; Nishizawa, Mitsuo; Sugiyama, Osamu; Kojima, Ryosuke; Yakami, Masahiro; Kuroda, Tomohiro; Togashi, Kaori

    2018-01-01

    We aimed to evaluate a computer-aided diagnosis (CADx) system for lung nodule classification focussing on (i) usefulness of the conventional CADx system (hand-crafted imaging feature + machine learning algorithm), (ii) comparison between support vector machine (SVM) and gradient tree boosting (XGBoost) as machine learning algorithms, and (iii) effectiveness of parameter optimization using Bayesian optimization and random search. Data on 99 lung nodules (62 lung cancers and 37 benign lung nodules) were included from public databases of CT images. A variant of the local binary pattern was used for calculating a feature vector. SVM or XGBoost was trained using the feature vector and its corresponding label. Tree Parzen Estimator (TPE) was used as Bayesian optimization for parameters of SVM and XGBoost. Random search was done for comparison with TPE. Leave-one-out cross-validation was used for optimizing and evaluating the performance of our CADx system. Performance was evaluated using area under the curve (AUC) of receiver operating characteristic analysis. AUC was calculated 10 times, and its average was obtained. The best averaged AUC of SVM and XGBoost was 0.850 and 0.896, respectively; both were obtained using TPE. XGBoost was generally superior to SVM. Optimal parameters for achieving high AUC were obtained with fewer numbers of trials when using TPE, compared with random search. Bayesian optimization of SVM and XGBoost parameters was more efficient than random search. Based on an observer study, the AUC values of two board-certified radiologists were 0.898 and 0.822. The results show that diagnostic accuracy of our CADx system was comparable to that of radiologists with respect to classifying lung nodules.

  10. On the characteristics of optimal transfers

    NASA Astrophysics Data System (ADS)

    Iorfida, Elisabetta

    In the past 50 years, scientists have developed and analysed methods and new algorithms that optimise an interplanetary trajectory according to one or more objectives. Within this field, in 1963 Lawden derived, from Pontryagin's minimum principle, the so-called `primer vector theory'. The main goal of this thesis is to develop a theoretical understanding of Lawden's theory, getting an insight into the optimality of a trajectory when mid-course corrections need to be applied. The novelty of the research is represented by a different approach to the primer vector theory, which simplifies the structure of the problem.

  11. An Optimization Principle for Deriving Nonequilibrium Statistical Models of Hamiltonian Dynamics

    NASA Astrophysics Data System (ADS)

    Turkington, Bruce

    2013-08-01

    A general method for deriving closed reduced models of Hamiltonian dynamical systems is developed using techniques from optimization and statistical estimation. Given a vector of resolved variables, selected to describe the macroscopic state of the system, a family of quasi-equilibrium probability densities on phase space corresponding to the resolved variables is employed as a statistical model, and the evolution of the mean resolved vector is estimated by optimizing over paths of these densities. Specifically, a cost function is constructed to quantify the lack-of-fit to the microscopic dynamics of any feasible path of densities from the statistical model; it is an ensemble-averaged, weighted, squared-norm of the residual that results from submitting the path of densities to the Liouville equation. The path that minimizes the time integral of the cost function determines the best-fit evolution of the mean resolved vector. The closed reduced equations satisfied by the optimal path are derived by Hamilton-Jacobi theory. When expressed in terms of the macroscopic variables, these equations have the generic structure of governing equations for nonequilibrium thermodynamics. In particular, the value function for the optimization principle coincides with the dissipation potential that defines the relation between thermodynamic forces and fluxes. The adjustable closure parameters in the best-fit reduced equations depend explicitly on the arbitrary weights that enter into the lack-of-fit cost function. Two particular model reductions are outlined to illustrate the general method. In each example the set of weights in the optimization principle contracts into a single effective closure parameter.

  12. Semi-supervised learning for ordinal Kernel Discriminant Analysis.

    PubMed

    Pérez-Ortiz, M; Gutiérrez, P A; Carbonero-Ruz, M; Hervás-Martínez, C

    2016-12-01

    Ordinal classification considers those classification problems where the labels of the variable to predict follow a given order. Naturally, labelled data is scarce or difficult to obtain in this type of problems because, in many cases, ordinal labels are given by a user or expert (e.g. in recommendation systems). Firstly, this paper develops a new strategy for ordinal classification where both labelled and unlabelled data are used in the model construction step (a scheme which is referred to as semi-supervised learning). More specifically, the ordinal version of kernel discriminant learning is extended for this setting considering the neighbourhood information of unlabelled data, which is proposed to be computed in the feature space induced by the kernel function. Secondly, a new method for semi-supervised kernel learning is devised in the context of ordinal classification, which is combined with our developed classification strategy to optimise the kernel parameters. The experiments conducted compare 6 different approaches for semi-supervised learning in the context of ordinal classification in a battery of 30 datasets, showing (1) the good synergy of the ordinal version of discriminant analysis and the use of unlabelled data and (2) the advantage of computing distances in the feature space induced by the kernel function. Copyright © 2016 Elsevier Ltd. All rights reserved.

  13. A Simple Method to Increase the Transduction Efficiency of Single-Stranded Adeno-Associated Virus Vectors In Vitro and In Vivo

    PubMed Central

    Ma, Wenqin; Li, Baozheng; Ling, Chen; Jayandharan, Giridhara R.; Byrne, Barry J.

    2011-01-01

    We have recently shown that co-administration of conventional single-stranded adeno-associated virus 2 (ssAAV2) vectors with self-complementary (sc) AAV2-protein phosphatase 5 (PP5) vectors leads to a significant increase in the transduction efficiency of ssAAV2 vectors in human cells in vitro as well as in murine hepatocytes in vivo. In the present study, this strategy has been further optimized by generating a mixed population of ssAAV2-EGFP and scAAV2-PP5 vectors at a 10:1 ratio to achieve enhanced green fluorescent protein (EGFP) transgene expression at approximately 5- to 10-fold higher efficiency, both in vitro and in vivo. This simple coproduction method should be adaptable to any ssAAV serotype vector containing transgene cassettes that are too large to be encapsidated in scAAV vectors. PMID:21219084

  14. Optimal Cloning of PCR Fragments by Homologous Recombination in Escherichia coli

    PubMed Central

    Jacobus, Ana Paula; Gross, Jeferson

    2015-01-01

    PCR fragments and linear vectors containing overlapping ends are easily assembled into a propagative plasmid by homologous recombination in Escherichia coli. Although this gap-repair cloning approach is straightforward, its existence is virtually unknown to most molecular biologists. To popularize this method, we tested critical parameters influencing the efficiency of PCR fragments cloning into PCR-amplified vectors by homologous recombination in the widely used E. coli strain DH5α. We found that the number of positive colonies after transformation increases with the length of overlap between the PCR fragment and linear vector. For most practical purposes, a 20 bp identity already ensures high-cloning yields. With an insert to vector ratio of 2:1, higher colony forming numbers are obtained when the amount of vector is in the range of 100 to 250 ng. An undesirable cloning background of empty vectors can be minimized during vector PCR amplification by applying a reduced amount of plasmid template or by using primers in which the 5′ termini are separated by a large gap. DpnI digestion of the plasmid template after PCR is also effective to decrease the background of negative colonies. We tested these optimized cloning parameters during the assembly of five independent DNA constructs and obtained 94% positive clones out of 100 colonies probed. We further demonstrated the efficient and simultaneous cloning of two PCR fragments into a vector. These results support the idea that homologous recombination in E. coli might be one of the most effective methods for cloning one or two PCR fragments. For its simplicity and high efficiency, we believe that recombinational cloning in E. coli has a great potential to become a routine procedure in most molecular biology-oriented laboratories. PMID:25774528
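
    The key design parameter above is the length of end homology shared by the PCR fragment and the linearized vector. A small illustrative helper (hypothetical sequences; the ~20 bp figure is the one reported above as ensuring high cloning yields in DH5α) computes that overlap:

    ```python
    def overlap_length(vector_end, insert_start):
        """Length of the longest suffix of the linearized vector that exactly
        matches a prefix of the PCR insert (the region repaired in E. coli)."""
        best = 0
        for k in range(1, min(len(vector_end), len(insert_start)) + 1):
            if vector_end[-k:] == insert_start[:k]:
                best = k
        return best

    vec = "GGTACCGAGCTCGGATCCACTAGT"   # 3' end of a linearized vector (made up)
    ins = "GGATCCACTAGTATGAAAGCTTTG"   # 5' end of a PCR insert (made up)
    shared = overlap_length(vec, ins)  # 12 bp here; >= 20 bp is recommended
    ```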

  15. Primer vector theory and applications

    NASA Technical Reports Server (NTRS)

    Jezewski, D. J.

    1975-01-01

    A method developed to compute two-body, optimal, N-impulse trajectories was presented. The necessary conditions established define the gradient structure of the primer vector and its derivative for any set of boundary conditions and any number of impulses. Inequality constraints, a conjugate gradient iterator technique, and the use of a penalty function were also discussed.

  16. Evaluation of the discrete vortex wake cross flow model using vector computers. Part 2: User's manual for DIVORCE

    NASA Technical Reports Server (NTRS)

    Deffenbaugh, F. D.; Vitz, J. F.

    1979-01-01

    The user's manual for the Discrete Vortex Cross flow Evaluator (DIVORCE) computer program is presented. DIVORCE was developed in FORTRAN IV for the CDC 6600 and CDC 7600 machines. Optimal calls to a NASA vector subroutine package are provided for use with the CDC 7600.

  17. A phage display vector optimized for the generation of human antibody combinatorial libraries and the molecular cloning of monoclonal antibody fragments.

    PubMed

    Solforosi, Laura; Mancini, Nicasio; Canducci, Filippo; Clementi, Nicola; Sautto, Giuseppe Andrea; Diotti, Roberta Antonia; Clementi, Massimo; Burioni, Roberto

    2012-07-01

    A novel phagemid vector, named pCM, was optimized for the cloning and display of antibody fragment (Fab) libraries on the surface of filamentous phage. This vector contains two long DNA "stuffer" fragments for easier differentiation of the correctly cut forms of the vector. Moreover, in pCM the fragment at the heavy-chain cloning site contains an acid phosphatase-encoding gene allowing an easy distinction of the Escherichia coli cells containing the unmodified form of the phagemid versus the heavy-chain fragment coding cDNA. In pCM transcription of heavy-chain Fd/gene III and light chain is driven by a single lacZ promoter. The light chain is directed to the periplasm by the ompA signal peptide, whereas the heavy-chain Fd/coat protein III is trafficked by the pelB signal peptide. The phagemid pCM was used to generate a human combinatorial phage display antibody library that allowed the selection of a monoclonal Fab fragment antibody directed against the nucleoprotein (NP) of Influenza A virus.

  18. Robust model predictive control for satellite formation keeping with eccentricity/inclination vector separation

    NASA Astrophysics Data System (ADS)

    Lim, Yeerang; Jung, Youeyun; Bang, Hyochoong

    2018-05-01

    This study presents model predictive formation control based on an eccentricity/inclination vector separation strategy. Alternative collision avoidance can be accomplished by using eccentricity/inclination vectors and adding a simple goal-function term to the optimization process. Real-time control is also achievable with a model predictive controller based on a convex formulation. A constraint-tightening approach is addressed as well to improve the robustness of the controller, and simulation results are presented to verify the performance enhancement of the proposed approach.

  19. Regularizing portfolio optimization

    NASA Astrophysics Data System (ADS)

    Still, Susanne; Kondor, Imre

    2010-07-01

    The optimization of large portfolios displays an inherent instability due to estimation error. This poses a fundamental problem, because solutions that are not stable under sample fluctuations may look optimal for a given sample, but are, in effect, very far from optimal with respect to the average risk. In this paper, we approach the problem from the point of view of statistical learning theory. The occurrence of the instability is intimately related to over-fitting, which can be avoided using known regularization methods. We show how regularized portfolio optimization with the expected shortfall as a risk measure is related to support vector regression. The budget constraint dictates a modification. We present the resulting optimization problem and discuss the solution. The L2 norm of the weight vector is used as a regularizer, which corresponds to a diversification 'pressure'. This means that diversification, besides counteracting downward fluctuations in some assets by upward fluctuations in others, is also crucial because it improves the stability of the solution. The approach we provide here allows for the simultaneous treatment of optimization and diversification in one framework that enables the investor to trade off between the two, depending on the size of the available dataset.
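    The link between the L2 regularizer and diversification can be made concrete with a toy calculation. The sketch below is our illustration only: the paper works with expected shortfall and its connection to support vector regression, whereas here plain sample variance stands in as the risk measure. It solves min_w w'Sw + lam*||w||^2 subject to the budget constraint sum(w) = 1, which has a closed form.

```python
import numpy as np

def regularized_min_variance_weights(returns, lam=0.1):
    """Minimize w' S w + lam * ||w||^2 subject to sum(w) = 1.

    Closed form: w is proportional to (S + lam*I)^(-1) 1. The L2
    penalty acts as a "diversification pressure", shrinking the
    weights toward the equal-weight portfolio 1/N.
    """
    S = np.cov(returns, rowvar=False)            # sample covariance matrix
    n = S.shape[0]
    w = np.linalg.solve(S + lam * np.eye(n), np.ones(n))
    return w / w.sum()                           # enforce the budget constraint

rng = np.random.default_rng(0)
r = rng.normal(size=(50, 5))                     # toy data: 50 days, 5 assets
w_raw = regularized_min_variance_weights(r, lam=0.0)
w_reg = regularized_min_variance_weights(r, lam=10.0)
# With a strong penalty, the weights approach the equal-weight portfolio.
```

    Increasing lam trades sample-specific optimality for stability, which is exactly the over-fitting control discussed in the abstract.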

  20. Multivariate normal maximum likelihood with both ordinal and continuous variables, and data missing at random.

    PubMed

    Pritikin, Joshua N; Brick, Timothy R; Neale, Michael C

    2018-04-01

    A novel method for the maximum likelihood estimation of structural equation models (SEM) with both ordinal and continuous indicators is introduced using a flexible multivariate probit model for the ordinal indicators. A full information approach ensures unbiased estimates for data missing at random. Exceeding the capability of prior methods, up to 13 ordinal variables can be included before integration time increases beyond 1 s per row. The method relies on the axiom of conditional probability to split apart the distribution of continuous and ordinal variables. Due to the symmetry of the axiom, two similar methods are available. A simulation study provides evidence that the two similar approaches offer equal accuracy. A further simulation is used to develop a heuristic to automatically select the most computationally efficient approach. Joint ordinal continuous SEM is implemented in OpenMx, free and open-source software.

  1. Comparison of Ordinal and Nominal Classification Trees to Predict Ordinal Expert-Based Occupational Exposure Estimates in a Case–Control Study

    PubMed Central

    Wheeler, David C.; Archer, Kellie J.; Burstyn, Igor; Yu, Kai; Stewart, Patricia A.; Colt, Joanne S.; Baris, Dalsu; Karagas, Margaret R.; Schwenn, Molly; Johnson, Alison; Armenti, Karla; Silverman, Debra T.; Friesen, Melissa C.

    2015-01-01

    Objectives: To evaluate occupational exposures in case–control studies, exposure assessors typically review each job individually to assign exposure estimates. This process lacks transparency and does not provide a mechanism for recreating the decision rules in other studies. In our previous work, nominal (unordered categorical) classification trees (CTs) generally successfully predicted expert-assessed ordinal exposure estimates (i.e. none, low, medium, high) derived from occupational questionnaire responses, but room for improvement remained. Our objective was to determine if using recently developed ordinal CTs would improve the performance of nominal trees in predicting ordinal occupational diesel exhaust exposure estimates in a case–control study. Methods: We used one nominal and four ordinal CT methods to predict expert-assessed probability, intensity, and frequency estimates of occupational diesel exhaust exposure (each categorized as none, low, medium, or high) derived from questionnaire responses for the 14983 jobs in the New England Bladder Cancer Study. To replicate the common use of a single tree, we applied each method to a single sample of 70% of the jobs, using 15% to test and 15% to validate each method. To characterize variability in performance, we conducted a resampling analysis that repeated the sample draws 100 times. We evaluated agreement between the tree predictions and expert estimates using Somers’ d, which measures differences in terms of ordinal association between predicted and observed scores and can be interpreted similarly to a correlation coefficient. 
Results: From the resampling analysis, compared with the nominal tree, an ordinal CT method that used a quadratic misclassification function and controlled tree size based on total misclassification cost had a slightly better predictive performance that was statistically significant for the frequency metric (Somers’ d: nominal tree = 0.61; ordinal tree = 0.63) and similar performance for the probability (nominal = 0.65; ordinal = 0.66) and intensity (nominal = 0.65; ordinal = 0.65) metrics. The best ordinal CT predicted fewer cases of large disagreement with the expert assessments (i.e. no exposure predicted for a job with high exposure and vice versa) compared with the nominal tree across all of the exposure metrics. For example, the percent of jobs with expert-assigned high intensity of exposure that the model predicted as no exposure was 29% for the nominal tree and 22% for the best ordinal tree. Conclusions: The overall agreements were similar across CT models; however, the use of ordinal models reduced the magnitude of the discrepancy when disagreements occurred. As the best performing model can vary by situation, researchers should consider evaluating multiple CT methods to maximize the predictive performance within their data. PMID:25433003
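    Somers' d, the agreement measure used above, can be computed directly by counting concordant and discordant pairs. The brute-force sketch below is our illustration, not the study's code; it follows the common d(Y|X) convention in which pairs tied on the predictor x are excluded from the denominator while pairs tied only on the response y are retained.

```python
from itertools import combinations

def somers_d(x, y):
    """Somers' d of y with respect to x (x = predictor, y = response).

    Pairs tied on x are excluded entirely; pairs tied only on y count
    in the denominator but are neither concordant nor discordant.
    """
    concordant = discordant = tied_y_only = 0
    for (x1, y1), (x2, y2) in combinations(zip(x, y), 2):
        if x1 == x2:
            continue                               # tied on x: excluded
        if y1 == y2:
            tied_y_only += 1
        elif (x1 - x2) * (y1 - y2) > 0:
            concordant += 1
        else:
            discordant += 1
    return (concordant - discordant) / (concordant + discordant + tied_y_only)

# Perfectly concordant ordinal scores (e.g. none < low < medium < high
# coded 0..3) give d = 1.0
print(somers_d([0, 1, 2, 3], [0, 1, 2, 3]))        # -> 1.0
```

    As the abstract notes, values near 1 indicate strong ordinal association between predicted and expert-assigned categories, which is why d can be read similarly to a correlation coefficient.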

  2. Promoting the energy structure optimization around Chinese Beijing-Tianjin area by developing biomass energy

    NASA Astrophysics Data System (ADS)

    Zhao, Li; Sun, Du; Wang, Shi-Yu; Zhao, Feng-Qing

    2017-06-01

    In recent years, remarkable achievements in the utilization of biomass energy have been made in China. However, there are still some problems, such as irrational industry layout, immature existing market survival mechanism and lack of core competitiveness. On the basis of investigation and research, some recommendations and strategies are proposed for the development of biomass energy around Chinese Beijing-Tianjin area: scientific planning and precise laying out of biomass industry; rationalizing the relationship between government and enterprises and promoting the establishment of a market-oriented survival mechanism; combining ‘supply side’ with ‘demand side’ to optimize product structure; extending industrial chain to promote industry upgrading and sustainable development; and comprehensive co-ordinating various types of biomass resources and extending product chain to achieve better economic benefits.

  3. Viral vector-based influenza vaccines

    PubMed Central

    de Vries, Rory D.; Rimmelzwaan, Guus F.

    2016-01-01

    Antigenic drift of seasonal influenza viruses and the occasional introduction of influenza viruses of novel subtypes into the human population complicate the timely production of effective vaccines that antigenically match the virus strains that cause epidemic or pandemic outbreaks. The development of game-changing vaccines that induce broadly protective immunity against a wide variety of influenza viruses is an unmet need, which recombinant viral vectors may help to fulfill. Use of viral vectors allows the delivery of any influenza virus antigen, or derivative thereof, to the immune system, resulting in the optimal induction of virus-specific B- and T-cell responses against this antigen of choice. This systematic review discusses results obtained with vectored influenza virus vaccines and advantages and disadvantages of the currently available viral vectors. PMID:27455345

  4. First experience of vectorizing electromagnetic physics models for detector simulation

    NASA Astrophysics Data System (ADS)

    Amadio, G.; Apostolakis, J.; Bandieramonte, M.; Bianchini, C.; Bitzes, G.; Brun, R.; Canal, P.; Carminati, F.; de Fine Licht, J.; Duhem, L.; Elvira, D.; Gheata, A.; Jun, S. Y.; Lima, G.; Novak, M.; Presbyterian, M.; Shadura, O.; Seghal, R.; Wenzel, S.

    2015-12-01

    The recent emergence of hardware architectures characterized by many-core or accelerated processors has opened new opportunities for concurrent programming models taking advantage of both SIMD and SIMT architectures. The GeantV vector prototype for detector simulations has been designed to exploit both the vector capability of mainstream CPUs and multi-threading capabilities of coprocessors including NVidia GPUs and Intel Xeon Phi. The characteristics of these architectures are very different in terms of the vectorization depth, parallelization needed to achieve optimal performance or memory access latency and speed. An additional challenge is to avoid the code duplication often inherent to supporting heterogeneous platforms. In this paper we present the first experience of vectorizing electromagnetic physics models developed for the GeantV project.

  5. Viral vector-based influenza vaccines.

    PubMed

    de Vries, Rory D; Rimmelzwaan, Guus F

    2016-11-01

    Antigenic drift of seasonal influenza viruses and the occasional introduction of influenza viruses of novel subtypes into the human population complicate the timely production of effective vaccines that antigenically match the virus strains that cause epidemic or pandemic outbreaks. The development of game-changing vaccines that induce broadly protective immunity against a wide variety of influenza viruses is an unmet need, which recombinant viral vectors may help to fulfill. Use of viral vectors allows the delivery of any influenza virus antigen, or derivative thereof, to the immune system, resulting in the optimal induction of virus-specific B- and T-cell responses against this antigen of choice. This systematic review discusses results obtained with vectored influenza virus vaccines and advantages and disadvantages of the currently available viral vectors.

  6. Expanding integrated vector management to promote healthy environments

    PubMed Central

    Lizzi, Karina M.; Qualls, Whitney A.; Brown, Scott C.; Beier, John C.

    2014-01-01

    Integrated Vector Management (IVM) strategies are intended to protect communities from pathogen transmission by arthropods. These strategies target multiple vectors and different ecological and socioeconomic settings, but the aggregate benefits of IVM are limited by the narrow focus of its approach; IVM strategies only aim to control arthropod vectors. We argue that IVM should encompass environmental modifications at early stages, for instance, infrastructural development and sanitation services, to regulate not only vectors but also nuisance-biting arthropods. An additional focus on nuisance-biting arthropods will improve public health, quality of life, and minimize social disparity issues fostered by pests. Optimally, IVM could incorporate environmental awareness and promotion of control methods in order to proactively reduce threats of serious pest situations. PMID:25028090

  7. Optimized Pan-species and Speciation Duplex Real-time PCR Assays for Plasmodium Parasites Detection in Malaria Vectors

    PubMed Central

    Sandeu, Maurice Marcel; Moussiliou, Azizath; Moiroux, Nicolas; Padonou, Gilles G.; Massougbodji, Achille; Corbel, Vincent; Tuikue Ndam, Nicaise

    2012-01-01

    Background An accurate method for detecting malaria parasites in mosquito vectors remains an essential component of vector control. The enzyme-linked immunosorbent assay specific for circumsporozoite protein (ELISA-CSP) is the gold standard method for the detection of malaria parasites in the vector, although it presents some limitations. Here, we optimized multiplex real-time PCR assays to accurately detect minor populations in mixed infections with multiple Plasmodium species in the African malaria vectors Anopheles gambiae and Anopheles funestus. Methods Complementary TaqMan-based real-time PCR assays that detect Plasmodium species using specific primers and probes were first evaluated on artificial mixtures of different targets inserted in plasmid constructs. The assays were further validated in comparison with ELISA-CSP on 200 field-caught Anopheles gambiae and Anopheles funestus mosquitoes collected in two localities in southern Benin. Results The validation of the duplex real-time PCR assays on the plasmid mixtures demonstrated robust specificity and sensitivity for detecting distinct targets. Using a panel of mosquito specimens, the real-time PCR showed a relatively high sensitivity (88.6%) and specificity (98%), compared to ELISA-CSP as the referent standard. The agreement between both methods was "excellent" (κ = 0.8, P < 0.05). The relative quantification of Plasmodium DNA between the two Anopheles species analyzed showed no significant difference (P = 0.2). All infected mosquito samples contained Plasmodium falciparum DNA, and mixed infections with P. malariae and/or P. ovale were observed in 18.6% and 13.6% of An. gambiae and An. funestus, respectively. Plasmodium vivax was found in none of the mosquito samples analyzed. Conclusion This study presents an optimized method for detecting the four Plasmodium species in the African malaria vectors. The study highlights substantial discordance with traditional ELISA-CSP, pointing out the utility of employing an accurate molecular diagnostic tool for detecting malaria parasites in field mosquito populations. PMID:23285168

  8. Testing the capacity of clothing to act as a vector for non-native seed in protected areas.

    PubMed

    Mount, Ann; Pickering, Catherine Marina

    2009-10-01

    Although humans are a major mechanism for short- and long-distance seed dispersal, there is limited research testing clothing as a vector. The effects of different types of material (sports vs hiking socks), of different items of clothing (boots, socks, laces vs legs), and of the same item (socks) worn in different places on seed composition were assessed in Kosciuszko National Park, Australia. Data were analyzed using repeated-measures ANOVA, independent and paired t-tests, multi-dimensional scaling ordinations, and analysis of similarity. A total of 24,776 seeds from 70 taxa were collected from the 207 pieces of clothing sampled, with seed identified from 31 native and 19 non-native species. Socks worn off-track collected more native seeds, while those worn on roadsides collected more non-native seeds. Sports socks collected a greater diversity of seeds and more native seeds than hiking socks. Boots, uncovered socks and laces collected more seeds than covered socks and laces, with 17% fewer seeds collected when wearing trousers. With seeds from over 179 species (134 recognized weeds) collected on clothing in this and nine other studies, it is clear that clothing contributes to unintended human-mediated seed dispersal, including for many invasive species.

  9. Impact of San Francisco's toy ordinance on restaurants and children's food purchases, 2011-2012.

    PubMed

    Otten, Jennifer J; Saelens, Brian E; Kapphahn, Kristopher I; Hekler, Eric B; Buman, Matthew P; Goldstein, Benjamin A; Krukowski, Rebecca A; O'Donohue, Laura S; Gardner, Christopher D; King, Abby C

    2014-07-17

    In 2011, San Francisco passed the first citywide ordinance to improve the nutritional standards of children's meals sold at restaurants by preventing the giving away of free toys or other incentives with meals unless nutritional criteria were met. This study examined the impact of the Healthy Food Incentives Ordinance at ordinance-affected restaurants on restaurant response (eg, toy-distribution practices, change in children's menus), and the energy and nutrient content of all orders and children's-meal-only orders purchased for children aged 0 through 12 years. Restaurant responses were examined from January 2010 through March 2012. Parent-caregiver/child dyads (n = 762) who were restaurant customers were surveyed at 2 points before and 1 seasonally matched point after ordinance enactment at Chain A and B restaurants (n = 30) in 2011 and 2012. Both restaurant chains responded to the ordinance by selling toys separately from children's meals, but neither changed their menus to meet ordinance-specified nutrition criteria. Among children for whom children's meals were purchased, significant decreases in kilocalories, sodium, and fat per order were likely due to changes in children's side dishes and beverages at Chain A. Although the changes at Chain A did not appear to be directly in response to the ordinance, the transition to a more healthful beverage and default side dish was consistent with the intent of the ordinance. Study results underscore the importance of policy wording, support the concept that more healthful defaults may be a powerful approach for improving dietary intake, and suggest that public policies may contribute to positive restaurant changes.

  10. Segmentation of discrete vector fields.

    PubMed

    Li, Hongyu; Chen, Wenbin; Shen, I-Fan

    2006-01-01

    In this paper, we propose an approach for 2D discrete vector field segmentation based on the Green function and normalized cut. The method is inspired by discrete Hodge Decomposition such that a discrete vector field can be broken down into three simpler components, namely, curl-free, divergence-free, and harmonic components. We show that the Green Function Method (GFM) can be used to approximate the curl-free and the divergence-free components to achieve our goal of the vector field segmentation. The final segmentation curves that represent the boundaries of the influence region of singularities are obtained from the optimal vector field segmentations. These curves are composed of piecewise smooth contours or streamlines. Our method is applicable to both linear and nonlinear discrete vector fields. Experiments show that the segmentations obtained using our approach essentially agree with human perceptual judgement.

  11. Polycation nanostructured lipid carrier, a novel nonviral vector constructed with triolein for efficient gene delivery.

    PubMed

    Zhang, Zhiwen; Sha, Xianyi; Shen, Anle; Wang, Yongzhong; Sun, Zhaogui; Gu, Zheng; Fang, Xiaoling

    2008-06-06

    A novel nonviral gene transfer vector was developed by modifying nanostructured lipid carrier (NLC) with cetylated polyethylenimine (PEI). Polycation nanostructured lipid carrier (PNLC) was prepared using the emulsion-solvent evaporation method. Its in vitro gene transfer properties were evaluated in the human lung adenocarcinoma cell line SPC-A1 and Chinese Hamster Ovary (CHO) cells. Enhanced transfection efficiency of PNLC was observed after the addition of triolein to the PNLC formulation and the transfection efficiency of the optimized PNLC was comparable to that of Lipofectamine 2000. In the presence of 10% serum the transfection efficiency of the optimal PNLC was not significantly changed in either cell line, whereas that of Lipofectamine 2000 was greatly decreased in both. Thus, PNLC is an effective nonviral gene transfer vector and the gene delivery activity of PNLC was enhanced after triolein was included into the PNLC formulation.

  12. Product demand forecasts using wavelet kernel support vector machine and particle swarm optimization in manufacture system

    NASA Astrophysics Data System (ADS)

    Wu, Qi

    2010-03-01

    Demand forecasts play a crucial role in supply chain management. The future demand for a certain product is the basis for the respective replenishment systems. For demand series with small samples, seasonal character, nonlinearity, randomicity and fuzziness, existing support vector kernels do not approximate the random curve of the sales time series well in L2 space (the space of square-integrable functions). In this paper, we present a hybrid intelligent system combining the wavelet kernel support vector machine and particle swarm optimization for demand forecasting. The results of application to car sales series forecasting show that the forecasting approach based on the hybrid PSOWv-SVM model is effective and feasible. A comparison between the method proposed in this paper and other ones is also given, which proves that this method is, for the discussed example, better than hybrid PSOv-SVM and other traditional methods.
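    For readers unfamiliar with wavelet kernels, one commonly used translation-invariant form is built from a Morlet-type mother wavelet h(u) = cos(1.75u)·exp(−u²/2), with the kernel taken as a product over dimensions. The sketch below is an illustration under that assumption; the paper's exact kernel construction and dilation settings may differ.

```python
import numpy as np

def wavelet_kernel(x, z, a=1.0):
    """Translation-invariant wavelet kernel from the Morlet-type mother
    wavelet h(u) = cos(1.75*u) * exp(-u**2 / 2):

        K(x, z) = prod_i h((x_i - z_i) / a)

    `a` is the dilation (scale) parameter controlling kernel width.
    """
    u = (np.asarray(x, dtype=float) - np.asarray(z, dtype=float)) / a
    return float(np.prod(np.cos(1.75 * u) * np.exp(-u**2 / 2.0)))

print(wavelet_kernel([1.0, 2.0], [1.0, 2.0]))   # -> 1.0 (identical inputs)
```

    Unlike the Gaussian kernel, the cosine factor lets the kernel oscillate, which is what gives wavelet kernels their ability to fit seasonal, wave-like demand curves.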

  13. Heterogeneous Tensor Decomposition for Clustering via Manifold Optimization.

    PubMed

    Sun, Yanfeng; Gao, Junbin; Hong, Xia; Mishra, Bamdev; Yin, Baocai

    2016-03-01

    Tensor clustering is an important tool that exploits intrinsically rich structures in real-world multiarray or tensor datasets. Often in dealing with those datasets, standard practice is to use subspace clustering based on vectorizing multiarray data. However, vectorization of tensorial data does not exploit the complete structure information. In this paper, we propose a subspace clustering algorithm without adopting any vectorization process. Our approach is based on a novel heterogeneous Tucker decomposition model that takes cluster membership information into account. We propose a new clustering algorithm that alternates between different modes of the proposed heterogeneous tensor model. All but the last mode have closed-form updates. Updating the last mode reduces to optimizing over the multinomial manifold, for which we investigate second-order Riemannian geometry and propose a trust-region algorithm. Numerical experiments show that our proposed algorithm competes effectively with state-of-the-art clustering algorithms that are based on tensor factorization.

  14. Absolute determination of single-stranded and self-complementary adeno-associated viral vector genome titers by droplet digital PCR.

    PubMed

    Lock, Martin; Alvira, Mauricio R; Chen, Shu-Jen; Wilson, James M

    2014-04-01

    Accurate titration of adeno-associated viral (AAV) vector genome copies is critical for ensuring correct and reproducible dosing in both preclinical and clinical settings. Quantitative PCR (qPCR) is the current method of choice for titrating AAV genomes because of the simplicity, accuracy, and robustness of the assay. However, issues with qPCR-based determination of self-complementary AAV vector genome titers, due to primer-probe exclusion through genome self-annealing or through packaging of prematurely terminated defective interfering (DI) genomes, have been reported. Alternative qPCR, gel-based, or Southern blotting titering methods have been designed to overcome these issues but may represent a backward step from standard qPCR methods in terms of simplicity, robustness, and precision. Droplet digital PCR (ddPCR) is a new PCR technique that directly quantifies DNA copies with an unparalleled degree of precision and without the need for a standard curve or for a high degree of amplification efficiency; all properties that lend themselves to the accurate quantification of both single-stranded and self-complementary AAV genomes. Here we compare a ddPCR-based AAV genome titer assay with a standard and an optimized qPCR assay for the titration of both single-stranded and self-complementary AAV genomes. We demonstrate absolute quantification of single-stranded AAV vector genomes by ddPCR with up to 4-fold increases in titer over a standard qPCR titration but with equivalent readout to an optimized qPCR assay. In the case of self-complementary vectors, ddPCR titers were on average 5-, 1.9-, and 2.3-fold higher than those determined by standard qPCR, optimized qPCR, and agarose gel assays, respectively. 
Droplet digital PCR-based genome titering was superior to qPCR in terms of both intra- and interassay precision and is more resistant to PCR inhibitors, a desirable feature for in-process monitoring of early-stage vector production and for vector genome biodistribution analysis in inhibitory tissues.
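    The absolute quantification that ddPCR achieves without a standard curve rests on Poisson statistics: templates partition randomly into droplets, so the mean copies per droplet is λ = −ln(fraction of negative droplets). A minimal sketch of that calculation follows; the 0.85 nL droplet volume is our assumption for illustration, not a value from this study.

```python
import math

def ddpcr_concentration(positive, total, droplet_volume_nl=0.85):
    """Estimate target copies per microliter from ddPCR droplet counts.

    Random partitioning of templates into droplets is Poisson, so the
    mean copies per droplet is lambda = -ln(negative fraction). The
    droplet volume here (0.85 nL) is an assumed, instrument-dependent
    constant used only for illustration.
    """
    neg_fraction = (total - positive) / total
    lam = -math.log(neg_fraction)             # mean copies per droplet
    return lam / (droplet_volume_nl * 1e-3)   # nL -> uL, copies per uL

# e.g. 4000 positive droplets out of 16000 accepted droplets
concentration = ddpcr_concentration(4000, 16000)
```

    Note that because of the logarithm, simply dividing positive droplets by total droplets would underestimate the titer whenever some droplets contain more than one genome copy.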

  15. Controlling and Coordinating Development in Vector-Transmitted Parasites

    PubMed Central

    Matthews, Keith R.

    2013-01-01

    Vector-borne parasites cause major human diseases of the developing world, including malaria, human African trypanosomiasis, Chagas disease, leishmaniasis, filariasis, and schistosomiasis. Although the life cycles of these parasites were defined over 100 years ago, the strategies they use to optimize their successful transmission are only now being understood in molecular terms. Parasites are now known to monitor their environment in both their host and vector and in response to other parasites. This allows them to adapt their developmental cycles and to counteract any unfavorable conditions they encounter. Here, I review the interactions that parasites engage in with their hosts and vectors to maximize their survival and spread. PMID:21385707

  16. The tunable pReX expression vector enables optimizing the T7-based production of membrane and secretory proteins in E. coli.

    PubMed

    Kuipers, Grietje; Karyolaimos, Alexandros; Zhang, Zhe; Ismail, Nurzian; Trinco, Gianluca; Vikström, David; Slotboom, Dirk Jan; de Gier, Jan-Willem

    2017-12-16

    To optimize the production of membrane and secretory proteins in Escherichia coli, it is critical to harmonize the expression rates of the genes encoding these proteins with the capacity of their biogenesis machineries. Therefore, we engineered the Lemo21(DE3) strain, which is derived from the T7 RNA polymerase-based BL21(DE3) protein production strain. In Lemo21(DE3), the T7 RNA polymerase activity can be modulated by the controlled co-production of its natural inhibitor T7 lysozyme. This setup enables precise tuning of target gene expression rates in Lemo21(DE3). The t7lys gene is expressed from the pLemo plasmid using the titratable rhamnose promoter. A disadvantage of the Lemo21(DE3) setup is that the system is based on two plasmids: a T7 expression vector and pLemo. The aim of this study was to simplify the Lemo21(DE3) setup by incorporating the key elements of pLemo in a standard T7-based expression vector. By incorporating the gene encoding the T7 lysozyme under control of the rhamnose promoter in a standard T7-based expression vector, pReX was created (ReX stands for Regulated gene eXpression). For two model membrane proteins and a model secretory protein we show that the optimized production yields obtained with the pReX expression vector in BL21(DE3) are similar to the ones obtained with Lemo21(DE3) using a standard T7 expression vector. For another secretory protein, a c-type cytochrome, we show that pReX, in contrast to Lemo21(DE3), enables the use of a helper plasmid that is required for the maturation and hence the production of this heme c protein. Here, we created pReX, a T7-based expression vector that contains the gene encoding the T7 lysozyme under control of the rhamnose promoter. pReX enables regulated T7-based target gene expression using only one plasmid. We show that with pReX the production of membrane and secretory proteins can be readily optimized. Importantly, pReX facilitates the use of helper plasmids. 
Furthermore, the use of pReX is not restricted to BL21(DE3), but it can in principle be used in any T7 RNAP-based strain. Thus, pReX is a versatile alternative to Lemo21(DE3).

  17. Optimal orientation in flows: providing a benchmark for animal movement strategies.

    PubMed

    McLaren, James D; Shamoun-Baranes, Judy; Dokter, Adriaan M; Klaassen, Raymond H G; Bouten, Willem

    2014-10-06

    Animal movements in air and water can be strongly affected by experienced flow. While various flow-orientation strategies have been proposed and observed, their performance in variable flow conditions remains unclear. We apply control theory to establish a benchmark for time-minimizing (optimal) orientation. We then define optimal orientation for movement in steady flow patterns and, using dynamic wind data, for short-distance mass movements of thrushes (Turdus sp.) and 6000 km non-stop migratory flights by great snipes, Gallinago media. Relative to the optimal benchmark, we assess the efficiency (travel speed) and reliability (success rate) of three generic orientation strategies: full compensation for lateral drift, vector orientation (single-heading movement) and goal orientation (continually heading towards the goal). Optimal orientation is characterized by detours to regions of high flow support, especially when flow speeds approach and exceed the animal's self-propelled speed. In strong predictable flow (short distance thrush flights), vector orientation adjusted to flow on departure is nearly optimal, whereas for unpredictable flow (inter-continental snipe flights), only goal orientation was near-optimally reliable and efficient. Optimal orientation provides a benchmark for assessing efficiency of responses to complex flow conditions, thereby offering insight into adaptive flow-orientation across taxa in the light of flow strength, predictability and navigation capacity.
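    The full-compensation strategy assessed above has a simple geometric form: the animal points its airspeed vector so that its lateral component exactly cancels the crosswind, and what remains of the airspeed carries it along the intended track. A minimal 2D sketch of that geometry (our illustration, with angles in radians; not the authors' code):

```python
import math

def full_compensation_heading(track, airspeed, wind_speed, wind_to):
    """Heading that exactly cancels lateral drift for a desired track.

    `wind_to` is the direction the wind blows toward. Returns
    (heading, ground_speed). Full compensation is impossible when the
    crosswind exceeds the animal's self-propelled (air) speed.
    """
    cross = wind_speed * math.sin(wind_to - track)   # lateral wind component
    along = wind_speed * math.cos(wind_to - track)   # along-track component
    if abs(cross) > airspeed:
        raise ValueError("crosswind exceeds self-propelled speed")
    delta = math.asin(-cross / airspeed)             # heading correction angle
    return track + delta, airspeed * math.cos(delta) + along

# A 6 m/s pure crosswind against a 10 m/s airspeed: the animal crabs
# into the wind, and its ground speed drops from 10 to 8 m/s.
heading, ground_speed = full_compensation_heading(0.0, 10.0, 6.0, math.pi / 2)
```

    The failure condition in the code mirrors the abstract's observation: once flow speeds approach or exceed the animal's self-propelled speed, strict drift compensation breaks down and detouring strategies such as optimal orientation become advantageous.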

  18. Optimal orientation in flows: providing a benchmark for animal movement strategies

    PubMed Central

    McLaren, James D.; Shamoun-Baranes, Judy; Dokter, Adriaan M.; Klaassen, Raymond H. G.; Bouten, Willem

    2014-01-01

    Animal movements in air and water can be strongly affected by experienced flow. While various flow-orientation strategies have been proposed and observed, their performance in variable flow conditions remains unclear. We apply control theory to establish a benchmark for time-minimizing (optimal) orientation. We then define optimal orientation for movement in steady flow patterns and, using dynamic wind data, for short-distance mass movements of thrushes (Turdus sp.) and 6000 km non-stop migratory flights by great snipes, Gallinago media. Relative to the optimal benchmark, we assess the efficiency (travel speed) and reliability (success rate) of three generic orientation strategies: full compensation for lateral drift, vector orientation (single-heading movement) and goal orientation (continually heading towards the goal). Optimal orientation is characterized by detours to regions of high flow support, especially when flow speeds approach and exceed the animal's self-propelled speed. In strong predictable flow (short distance thrush flights), vector orientation adjusted to flow on departure is nearly optimal, whereas for unpredictable flow (inter-continental snipe flights), only goal orientation was near-optimally reliable and efficient. Optimal orientation provides a benchmark for assessing efficiency of responses to complex flow conditions, thereby offering insight into adaptive flow-orientation across taxa in the light of flow strength, predictability and navigation capacity. PMID:25056213

  19. GPU Accelerated Vector Median Filter

    NASA Technical Reports Server (NTRS)

    Aras, Rifat; Shen, Yuzhong

    2011-01-01

    Noise reduction is an important step for most image processing tasks. For three-channel color images, a widely used technique is the vector median filter, in which the color values of pixels are treated as 3-component vectors. Vector median filters are computationally expensive; for a window size of n x n, each of the n² vectors has to be compared with the other n² - 1 vectors by distance. General-purpose computation on graphics processing units (GPUs) is the paradigm of utilizing high-performance many-core GPU architectures for computation tasks that are normally handled by CPUs. In this work, NVIDIA's Compute Unified Device Architecture (CUDA) paradigm is used to accelerate vector median filtering, which to the best of our knowledge has never been done before. The performance of the GPU-accelerated vector median filter is compared to that of CPU and MPI-based versions for different image and window sizes. Initial findings of the study showed a 100x performance improvement of the vector median filter implementation on GPUs over CPU implementations, and further speed-up is expected after more extensive optimizations of the GPU algorithm.
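
    The n² pairwise-distance structure described above is compact to state. This pure-Python sketch is a scalar reference for one filter window, not the paper's CUDA implementation:

```python
def vector_median(window):
    """Vector median of a filter window: the pixel whose summed Euclidean
    distance to all other pixels in the window is smallest (RGB 3-tuples)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    # Each of the n*n pixels is compared against all others: O(n**4) distances.
    return min(window, key=lambda p: sum(dist(p, q) for q in window))

# A lone bright outlier is rejected in favour of a colour from the cluster.
window = [(10, 10, 10), (12, 11, 10), (11, 12, 13), (250, 250, 250), (10, 12, 11)]
print(vector_median(window))
```

    The GPU version parallelizes exactly this per-window computation across all pixels of the image.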

  20. Heading-vector navigation based on head-direction cells and path integration.

    PubMed

    Kubie, John L; Fenton, André A

    2009-05-01

    Insect navigation is guided by heading vectors that are computed by path integration. Mammalian navigation models, on the other hand, are typically based on map-like place representations provided by hippocampal place cells. Such models compute optimal routes as a continuous series of locations that connect the current location to a goal. We propose a "heading-vector" model in which head-direction cells or their derivatives serve both as key elements in constructing the optimal route and as the straight-line guidance during route execution. The model is based on a memory structure termed the "shortcut matrix," which is constructed during the initial exploration of an environment when a set of shortcut vectors between sequential pairs of visited waypoint locations is stored. A mechanism is proposed for calculating and storing these vectors that relies on a hypothesized cell type termed an "accumulating head-direction cell." Following exploration, shortcut vectors connecting all pairs of waypoint locations are computed by vector arithmetic and stored in the shortcut matrix. On re-entry, when local view or place representations query the shortcut matrix with a current waypoint and goal, a shortcut trajectory is retrieved. Since the trajectory direction is in head-direction compass coordinates, navigation is accomplished by tracking the firing of head-direction cells that are tuned to the heading angle. Section 1 of the manuscript describes the properties of accumulating head-direction cells. It then shows how accumulating head-direction cells can store local vectors and perform vector arithmetic to perform path-integration-based homing. Section 2 describes the construction and use of the shortcut matrix for computing direct paths between any pair of locations that have been registered in the shortcut matrix. In the discussion, we analyze the advantages of heading-based navigation over map-based navigation. Finally, we survey behavioral evidence that nonhippocampal, heading-based navigation is used in small mammals and humans. Copyright 2008 Wiley-Liss, Inc.
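
    The shortcut-matrix construction can be sketched with plain vector arithmetic. The helper below is a hypothetical illustration (not the authors' neural model): leg vectors stored during exploration, in compass coordinates, are summed to yield a direct shortcut between any pair of waypoints:

```python
def build_shortcut_matrix(leg_vectors):
    """leg_vectors[i] is the displacement from waypoint i to waypoint i+1,
    in head-direction compass coordinates. Returns shortcuts[(i, j)], the
    direct vector from waypoint i to waypoint j, by vector addition."""
    n = len(leg_vectors) + 1
    shortcuts = {}
    for i in range(n):
        for j in range(n):
            # Sum the legs between the two waypoints; flip sign if j precedes i.
            dx = sum(v[0] for v in leg_vectors[min(i, j):max(i, j)])
            dy = sum(v[1] for v in leg_vectors[min(i, j):max(i, j)])
            sign = 1 if j >= i else -1
            shortcuts[(i, j)] = (sign * dx, sign * dy)
    return shortcuts

# Three explored legs; the direct shortcut from waypoint 0 to 3 is their sum.
legs = [(1.0, 0.0), (0.0, 2.0), (-1.0, 1.0)]
print(build_shortcut_matrix(legs)[(0, 3)])  # (0.0, 3.0)
```

    Route execution then only requires tracking head-direction cells tuned to the retrieved vector's compass angle.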

  1. Overstatement in happiness reporting with ordinal, bounded scale.

    PubMed

    Tanaka, Saori C; Yamada, Katsunori; Kitada, Ryo; Tanaka, Satoshi; Sugawara, Sho K; Ohtake, Fumio; Sadato, Norihiro

    2016-02-18

    There are various methods by which people can express subjective evaluations quantitatively. For example, happiness can be measured on a scale from 1 to 10, and has been suggested as a measure of economic policy. However, there is resistance to these types of measurement from economists, who often regard welfare to be a cardinal, unbounded quantity. It is unclear whether there are differences between subjective evaluation reported on ordinal, bounded scales and on cardinal, unbounded scales. To answer this question, we developed functional magnetic resonance imaging experimental tasks for reporting happiness from monetary gain and the perception of visual stimulus. Subjects tended to report higher values when they used ordinal scales instead of cardinal scales. There were differences in neural activation between ordinal and cardinal reporting scales. The posterior parietal area showed greater activation when subjects used an ordinal scale instead of a cardinal scale. Importantly, the striatum exhibited greater activation when asked to report happiness on an ordinal scale than when asked to report on a cardinal scale. The finding that ordinal (bounded) scales are associated with higher reported happiness and greater activation in the reward system shows that overstatement bias in happiness data must be considered.

  2. A new approach to impulsive rendezvous near circular orbit

    NASA Astrophysics Data System (ADS)

    Carter, Thomas; Humi, Mayer

    2012-04-01

    A new approach is presented for the problem of planar optimal impulsive rendezvous of a spacecraft in an inertial frame near a circular orbit in a Newtonian gravitational field. The total characteristic velocity to be minimized is replaced by a related characteristic-value function and this related optimization problem can be solved in closed form. The solution of this problem is shown to approach the solution of the original problem in the limit as the boundary conditions approach those of a circular orbit. Using a form of primer-vector theory the problem is formulated in a way that leads to relatively easy calculation of the optimal velocity increments. A certain vector that can easily be calculated from the boundary conditions determines the number of impulses required for solution of the optimization problem and also is useful in the computation of these velocity increments. Necessary and sufficient conditions for boundary conditions to require exactly three nonsingular non-degenerate impulses for solution of the related optimal rendezvous problem, and a means of calculating these velocity increments, are presented. A simple example of a three-impulse rendezvous problem is solved and the resulting trajectory is depicted. Optimal non-degenerate nonsingular two-impulse rendezvous for the related problem is found to consist of four categories of solutions depending on the four ways the primer vector locus intersects the unit circle. Necessary and sufficient conditions for each category of solutions are presented. The regions of the boundary values that admit each category of solutions of the related problem are found, and in each case a closed-form solution of the optimal velocity increments is presented. Similar results are presented for the simpler optimal rendezvous that require only one impulse. For brevity, degenerate and singular solutions are not discussed in detail, but will be presented in a following study. Although this approach is thought to provide simpler computations than existing methods, its main contribution may be in establishing a new approach to the more general problem.

  3. 75 FR 75694 - Klamath Tribes Liquor Control Ordinance Correction

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-12-06

    ... DEPARTMENT OF THE INTERIOR Bureau of Indian Affairs Klamath Tribes Liquor Control Ordinance... Control Ordinance of the Klamath Tribes. This correction removes incorrect references to an amended... follows: SUMMARY: This notice publishes the Secretary's certification of the Klamath Tribes Liquor Control...

  4. Development of a computer program to obtain ordinates for NACA 4-digit, 4-digit modified, 5-digit, and 16 series airfoils

    NASA Technical Reports Server (NTRS)

    Ladson, C. L.; Brooks, Cuyler W., Jr.

    1975-01-01

    A computer program developed to calculate the ordinates and surface slopes of any thickness, symmetrical or cambered NACA airfoil of the 4-digit, 4-digit modified, 5-digit, and 16-series airfoil families is presented. The program produces plots of the airfoil nondimensional ordinates and a punch card output of ordinates in the input format of a readily available program for determining the pressure distributions of arbitrary airfoils in subsonic potential viscous flow.
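
    The ordinates such a program computes for the 4-digit family follow the standard NACA thickness polynomial. A minimal sketch (symmetrical sections only, all quantities as fractions of chord; the full program also handles camber and the other families):

```python
import math

def naca4_thickness(x, t):
    """Half-thickness of a NACA 4-digit section at chord fraction x, for
    maximum thickness t (e.g. t = 0.12 for a NACA 0012). Standard polynomial
    with the finite-trailing-edge coefficients."""
    return 5.0 * t * (0.2969 * math.sqrt(x) - 0.1260 * x
                      - 0.3516 * x**2 + 0.2843 * x**3 - 0.1015 * x**4)

# Nondimensional upper-surface ordinates of a NACA 0012 at a few stations;
# the maximum half-thickness (0.06) occurs near 30% chord.
for x in (0.0, 0.3, 1.0):
    print(round(naca4_thickness(x, 0.12), 5))
```

    Surface slopes, as in the program described above, follow by differentiating the same polynomial.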

  5. Optimization lighting layout based on gene density improved genetic algorithm for indoor visible light communications

    NASA Astrophysics Data System (ADS)

    Liu, Huanlin; Wang, Xin; Chen, Yong; Kong, Deqian; Xia, Peijie

    2017-05-01

    For indoor visible light communication systems, the layout of LED lamps affects the uniformity of the received power on the communication plane. In order to find an optimized lighting layout that meets both lighting needs and communication needs, a gene density genetic algorithm (GDGA) is proposed. In the GDGA, a gene encodes the abscissa and ordinate of one LED, and an individual represents a LED layout in the room. A segmented crossover operation and a gene mutation strategy based on gene density are put forward to make the received power on the communication plane more uniform and to increase the population's diversity. A weighted-differences function between individuals is designed as the fitness function of the GDGA, reserving the population that carries useful LED-layout genetic information and ensuring the global convergence of the GDGA. Compared with the square layout and the circular layout, the optimized layout achieved by the GDGA increases the power uniformity by 83.3%, 83.1% and 55.4%, respectively. Furthermore, the convergence of the GDGA is verified in comparison with an evolutionary algorithm (EA). Experimental results show that the GDGA can quickly find an approximation of the optimal layout.

  6. Cache-Oblivious parallel SIMD Viterbi decoding for sequence search in HMMER

    PubMed Central

    2014-01-01

    Background HMMER is a commonly used bioinformatics tool based on Hidden Markov Models (HMMs) to analyze and process biological sequences. One of its main homology engines is based on the Viterbi decoding algorithm, which was already highly parallelized and optimized using Farrar’s striped processing pattern with Intel SSE2 instruction set extension. Results A new SIMD vectorization of the Viterbi decoding algorithm is proposed, based on an SSE2 inter-task parallelization approach similar to the DNA alignment algorithm proposed by Rognes. Besides this alternative vectorization scheme, the proposed implementation also introduces a new partitioning of the Markov model that allows a significantly more efficient exploitation of the cache locality. Such optimization, together with an improved loading of the emission scores, allows the achievement of a constant processing throughput, regardless of the innermost-cache size and of the dimension of the considered model. Conclusions The proposed optimized vectorization of the Viterbi decoding algorithm was extensively evaluated and compared with the HMMER3 decoder to process DNA and protein datasets, proving to be a rather competitive alternative implementation. Being always faster than the already highly optimized ViterbiFilter implementation of HMMER3, the proposed Cache-Oblivious Parallel SIMD Viterbi (COPS) implementation provides a constant throughput and offers a processing speedup as high as two times faster, depending on the model’s size. PMID:24884826
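
    The Viterbi recurrence that HMMER vectorizes can be stated compactly in scalar form. The sketch below is a plain reference implementation on a toy two-state HMM (illustrative probabilities of ours), not HMMER's striped SIMD layout or log-space scoring:

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Plain (scalar) Viterbi decoding: most probable state path for an
    observation sequence under an HMM. The paper parallelizes this core
    recurrence with SSE2; this is only the textbook version."""
    V = [{s: start_p[s] * emit_p[s][obs[0]] for s in states}]
    back = [{}]
    for t in range(1, len(obs)):
        V.append({})
        back.append({})
        for s in states:
            # Best predecessor for state s at time t.
            prob, prev = max((V[t - 1][p] * trans_p[p][s] * emit_p[s][obs[t]], p)
                             for p in states)
            V[t][s] = prob
            back[t][s] = prev
    # Trace back from the best final state.
    last = max(V[-1], key=V[-1].get)
    path = [last]
    for t in range(len(obs) - 1, 0, -1):
        path.append(back[t][path[-1]])
    return list(reversed(path))

states = ('H', 'L')  # toy high/low GC-content states, purely illustrative
start = {'H': 0.5, 'L': 0.5}
trans = {'H': {'H': 0.9, 'L': 0.1}, 'L': {'H': 0.1, 'L': 0.9}}
emit = {'H': {'G': 0.4, 'C': 0.4, 'A': 0.1, 'T': 0.1},
        'L': {'G': 0.1, 'C': 0.1, 'A': 0.4, 'T': 0.4}}
print(viterbi("GGCAAT", states, start, trans, emit))
```

    COPS's contribution is in how the independent cells of this recurrence are packed across tasks and cache-sized model partitions, not in the recurrence itself.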

  7. Electrocardiographic signals and swarm-based support vector machine for hypoglycemia detection.

    PubMed

    Nuryani, Nuryani; Ling, Steve S H; Nguyen, H T

    2012-04-01

    Cardiac arrhythmia relating to hypoglycemia is suggested as a cause of death in diabetic patients. This article introduces electrocardiographic (ECG) parameters for artificially induced hypoglycemia detection. In addition, a hybrid technique of swarm-based support vector machine (SVM) is introduced for hypoglycemia detection using the ECG parameters as inputs. In this technique, a particle swarm optimization (PSO) is proposed to optimize the SVM to detect hypoglycemia. In an experiment using medical data of patients with Type 1 diabetes, the introduced ECG parameters show significant contributions to the performance of the hypoglycemia detection and the proposed detection technique performs well in terms of sensitivity and specificity.

  8. Optimization of Support Vector Machine (SVM) for Object Classification

    NASA Technical Reports Server (NTRS)

    Scholten, Matthew; Dhingra, Neil; Lu, Thomas T.; Chao, Tien-Hsin

    2012-01-01

    The Support Vector Machine (SVM) is a powerful algorithm, useful in classifying data into classes. The SVMs implemented in this research were used as classifiers for the final stage in a Multistage Automatic Target Recognition (ATR) system. A single-kernel SVM known as SVMlight, and a modified version known as an SVM with K-Means Clustering, were used. These SVM algorithms were tested as classifiers under varying conditions: image noise levels varied, and the orientation of the targets changed. The classifiers were then optimized to demonstrate their maximum potential as classifiers. Results demonstrate the reliability of SVM as a method for classification. From trial to trial, SVM produces consistent results.

  9. The added value of ordinal analysis in clinical trials: an example in traumatic brain injury.

    PubMed

    Roozenbeek, Bob; Lingsma, Hester F; Perel, Pablo; Edwards, Phil; Roberts, Ian; Murray, Gordon D; Maas, Andrew Ir; Steyerberg, Ewout W

    2011-01-01

    In clinical trials, ordinal outcome measures are often dichotomized into two categories. In traumatic brain injury (TBI) the 5-point Glasgow outcome scale (GOS) is collapsed into unfavourable versus favourable outcome. Simulation studies have shown that exploiting the ordinal nature of the GOS increases chances of detecting treatment effects. The objective of this study is to quantify the benefits of ordinal analysis in the real-life situation of a large TBI trial. We used data from the CRASH trial that investigated the efficacy of corticosteroids in TBI patients (n = 9,554). We applied two techniques for ordinal analysis: proportional odds analysis and the sliding dichotomy approach, where the GOS is dichotomized at different cut-offs according to baseline prognostic risk. These approaches were compared to dichotomous analysis. The information density in each analysis was indicated by a Wald statistic. All analyses were adjusted for baseline characteristics. Dichotomous analysis of the six-month GOS showed a non-significant treatment effect (OR = 1.09, 95% CI 0.98 to 1.21, P = 0.096). Ordinal analysis with proportional odds regression or sliding dichotomy showed highly statistically significant treatment effects (OR 1.15, 95% CI 1.06 to 1.25, P = 0.0007 and 1.19, 95% CI 1.08 to 1.30, P = 0.0002), with 2.05-fold and 2.56-fold higher information density compared to the dichotomous approach respectively. Analysis of the CRASH trial data confirmed that ordinal analysis of outcome substantially increases statistical power. We expect these results to hold for other fields of critical care medicine that use ordinal outcome measures and recommend that future trials adopt ordinal analyses. This will permit detection of smaller treatment effects.
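
    The sliding-dichotomy idea described above can be illustrated in a few lines: the favourable/unfavourable cut-off on the ordinal scale slides with baseline prognostic risk, so the same outcome is judged against expectations. The cut-offs below are purely illustrative, not those used in the CRASH analysis:

```python
def sliding_dichotomy(gos, prognostic_band):
    """Sketch of the sliding dichotomy on the 5-point Glasgow Outcome Scale
    (1 = dead ... 5 = good recovery): a patient counts as 'favourable' if the
    observed GOS reaches a cut-off that depends on baseline prognosis.
    The band cut-offs here are illustrative assumptions."""
    cutoffs = {'good': 5, 'intermediate': 4, 'poor': 3}
    return gos >= cutoffs[prognostic_band]

# A GOS of 4 is favourable for a poor-prognosis patient but not for a
# good-prognosis one: the dichotomy slides with expected outcome.
print(sliding_dichotomy(4, 'poor'), sliding_dichotomy(4, 'good'))
```

    Because each risk band contributes information near its own cut-off, this recovers some of the ordinal information a single fixed dichotomy discards.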

  10. Impact of San Francisco’s Toy Ordinance on Restaurants and Children’s Food Purchases, 2011–2012

    PubMed Central

    Saelens, Brian E.; Kapphahn, Kristopher I.; Hekler, Eric B.; Buman, Matthew P.; Goldstein, Benjamin A.; Krukowski, Rebecca A.; O’Donohue, Laura S.; Gardner, Christopher D.; King, Abby C.

    2014-01-01

    Introduction In 2011, San Francisco passed the first citywide ordinance to improve the nutritional standards of children’s meals sold at restaurants by preventing the giving away of free toys or other incentives with meals unless nutritional criteria were met. This study examined the impact of the Healthy Food Incentives Ordinance at ordinance-affected restaurants on restaurant response (eg, toy-distribution practices, change in children’s menus), and the energy and nutrient content of all orders and children’s-meal–only orders purchased for children aged 0 through 12 years. Methods Restaurant responses were examined from January 2010 through March 2012. Parent–caregiver/child dyads (n = 762) who were restaurant customers were surveyed at 2 points before and 1 seasonally matched point after ordinance enactment at Chain A and B restaurants (n = 30) in 2011 and 2012. Results Both restaurant chains responded to the ordinance by selling toys separately from children’s meals, but neither changed their menus to meet ordinance-specified nutrition criteria. Among children for whom children’s meals were purchased, significant decreases in kilocalories, sodium, and fat per order were likely due to changes in children’s side dishes and beverages at Chain A. Conclusion Although the changes at Chain A did not appear to be directly in response to the ordinance, the transition to a more healthful beverage and default side dish was consistent with the intent of the ordinance. Study results underscore the importance of policy wording, support the concept that more healthful defaults may be a powerful approach for improving dietary intake, and suggest that public policies may contribute to positive restaurant changes. PMID:25032837

  11. Multivariate decoding of brain images using ordinal regression.

    PubMed

    Doyle, O M; Ashburner, J; Zelaya, F O; Williams, S C R; Mehta, M A; Marquand, A F

    2013-11-01

    Neuroimaging data are increasingly being used to predict potential outcomes or groupings, such as clinical severity, drug dose response, and transitional illness states. In these examples, the variable (target) we want to predict is ordinal in nature. Conventional classification schemes assume that the targets are nominal and hence ignore their ranked nature, whereas parametric and/or non-parametric regression models enforce a metric notion of distance between classes. Here, we propose a novel, alternative multivariate approach that overcomes these limitations - whole brain probabilistic ordinal regression using a Gaussian process framework. We applied this technique to two data sets of pharmacological neuroimaging data from healthy volunteers. The first study was designed to investigate the effect of ketamine on brain activity and its subsequent modulation with two compounds - lamotrigine and risperidone. The second study investigates the effect of scopolamine on cerebral blood flow and its modulation using donepezil. We compared ordinal regression to multi-class classification schemes and metric regression. Considering the modulation of ketamine with lamotrigine, we found that ordinal regression significantly outperformed multi-class classification and metric regression in terms of accuracy and mean absolute error. However, for risperidone ordinal regression significantly outperformed metric regression but performed similarly to multi-class classification both in terms of accuracy and mean absolute error. For the scopolamine data set, ordinal regression was found to outperform both multi-class and metric regression techniques considering the regional cerebral blood flow in the anterior cingulate cortex. Ordinal regression was thus the only method that performed well in all cases. Our results indicate the potential of an ordinal regression approach for neuroimaging data while providing a fully probabilistic framework with elegant approaches for model selection. Copyright © 2013. Published by Elsevier Inc.

  12. Transfers between libration-point orbits in the elliptic restricted problem

    NASA Astrophysics Data System (ADS)

    Hiday-Johnston, L. A.; Howell, K. C.

    1994-04-01

    A strategy is formulated to design optimal time-fixed impulsive transfers between three-dimensional libration-point orbits in the vicinity of the interior L1 libration point of the Sun-Earth/Moon barycenter system. The adjoint equation in terms of rotating coordinates in the elliptic restricted three-body problem is shown to be of a distinctly different form from that obtained in the analysis of trajectories in the two-body problem. Also, the necessary conditions for a time-fixed two-impulse transfer to be optimal are stated in terms of the primer vector. Primer vector theory is then extended to nonoptimal impulsive trajectories in order to establish a criterion whereby the addition of an interior impulse reduces total fuel expenditure. The necessary conditions for the local optimality of a transfer containing additional impulses are satisfied by requiring continuity of the Hamiltonian and the derivative of the primer vector at all interior impulses. Determination of location, orientation, and magnitude of each additional impulse is accomplished by the unconstrained minimization of the cost function using a multivariable search method. Results indicate that substantial savings in fuel can be achieved by the addition of interior impulsive maneuvers on transfers between libration-point orbits.
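
    The necessary conditions invoked above are instances of Lawden's classical primer-vector conditions. As a reminder (standard statement for the impulsive problem, not specific to this paper's elliptic three-body formulation):

```latex
% Lawden's necessary conditions for an optimal impulsive transfer, in terms
% of the primer vector p(t) (the adjoint of the velocity):
\begin{align*}
&\text{(i)}\quad \mathbf{p}(t)\ \text{and}\ \dot{\mathbf{p}}(t)\ \text{are continuous on the whole trajectory,}\\
&\text{(ii)}\quad \|\mathbf{p}(t)\| \le 1\ \text{throughout, with}\ \|\mathbf{p}(t_k)\| = 1\ \text{at every impulse time}\ t_k,\\
&\text{(iii)}\quad \Delta\mathbf{v}_k = \Delta v_k\,\mathbf{p}(t_k)\quad \text{(each impulse is aligned with the primer),}\\
&\text{(iv)}\quad \frac{d}{dt}\|\mathbf{p}(t)\|\Big|_{t_k} = 0\ \text{at interior (midcourse) impulses.}
\end{align*}
```

    Condition (iv) is the one the paper enforces via continuity of the Hamiltonian and of the primer derivative at interior impulses; violating (ii) along a nonoptimal transfer is precisely the signal that an added interior impulse reduces cost.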

  13. Primer Vector Optimization: Survey of Theory, new Analysis and Applications

    NASA Astrophysics Data System (ADS)

    Guzman

    This paper presents a preliminary study in developing a set of optimization tools for orbit rendezvous, transfer and station keeping. This work is part of a large-scale effort underway at NASA Goddard Space Flight Center and a.i. solutions, Inc. to build generic methods, which will enable missions with tight fuel budgets. Since no single optimization technique can solve all existing problems efficiently, a library of tools where the user could pick the method most suited for the particular mission is envisioned. The first trajectory optimization technique explored is Lawden's primer vector theory [Ref. 1]. Primer vector theory can be considered a byproduct of applying Calculus of Variations (COV) techniques to the problem of minimizing the fuel usage of impulsive trajectories. For an n-impulse trajectory, it involves the solution of n-1 two-point boundary value problems. In this paper, we look at some of the different formulations of the primer vector (dependent on the frame employed and on the force model). Also, the applicability of primer vector theory is examined in an effort to understand when and why the theory can fail. Specifically, since COV is based on "small variations", singularities in the linearized (variational) equations of motion along the arcs must be taken into account. These singularities are a recurring problem in analyses that employ "small variations" [Refs. 2, 3]. For example, singularities in the (2-body problem) variational equations along elliptic arcs occur when [Ref. 4]: 1) the difference between the initial and final times is a multiple of the reference orbit period, 2) the difference between the initial and final true anomalies is a multiple of pi (k pi, for k = 0, 1, 2, 3, ...), or 3) the time of flight is a minimum for the given difference in true anomaly. For the N-body problem, the situation is more complex and is still under investigation. Several examples, such as the initialization of an orbit (ascent trajectory) and rotation of the line of apsides, are utilized as test cases. Recommendations, future work, and the possible addition of other optimization techniques are also discussed. References: [1] Lawden D.F., Optimal Trajectories for Space Navigation, Butterworths, London, 1963. [2] Wilson, R.S., Howell, K.C., and Lo, M., "Optimization of Insertion Cost for Transfer Trajectories to Libration Point Orbits", AIAA/AAS Astrodynamics Specialist Conference, AAS 99-041, Girdwood, Alaska, August 16-19, 1999. [3] Goodson, T., "Monte-Carlo Maneuver Analysis for the Microwave Anisotropy Probe", AAS/AIAA Astrodynamics Specialist Conference, AAS 01-331, Quebec City, Canada, July 30 - August 2, 2001. [4] Stern, R.G., "Singularities in the Analytic Solution of the Linearized Variational Equations of Elliptical Motion", Report RE-8, May 1964, Experimental Astronomy Lab., Massachusetts Institute of Technology, Cambridge, Massachusetts.

  14. The Outlier Detection for Ordinal Data Using Scalling Technique of Regression Coefficients

    NASA Astrophysics Data System (ADS)

    Adnan, Arisman; Sugiarto, Sigit

    2017-06-01

    The aim of this study is to detect outliers by using the coefficients of Ordinal Logistic Regression (OLR) for the case of k-category responses, with scores from 1 (the best) to 8 (the worst). We detect them using the sum of moduli of the ordinal regression coefficients calculated by the jackknife technique. The technique is improved by scaling the regression coefficients to their means. The R language has been used on a set of ordinal data from a reference distribution. Furthermore, we compare this approach with studentised residual plots of the jackknife technique for ANOVA (Analysis of Variance) and OLR. This study shows that the jackknifing technique, along with proper scaling, may reveal outliers in ordinal regression reasonably well.
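
    The leave-one-out influence idea behind this jackknife screening can be sketched generically: refit the statistic with each observation deleted and flag observations whose deletion moves it most. For brevity, a least-squares slope stands in below for the (scaled) sum of moduli of OLR coefficients the paper actually uses; that substitution is our assumption:

```python
def jackknife_influence(xs, ys, stat):
    """Leave-one-out (jackknife) influence of each observation on a fitted
    statistic; a large |influence| flags a candidate outlier."""
    full = stat(xs, ys)
    return [stat(xs[:i] + xs[i + 1:], ys[:i] + ys[i + 1:]) - full
            for i in range(len(xs))]

def slope(xs, ys):
    """Least-squares slope: a simple stand-in for a regression coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

xs = [1, 2, 3, 4, 5]
ys = [1.0, 2.1, 2.9, 4.2, 20.0]  # the last point is a gross outlier
infl = jackknife_influence(xs, ys, slope)
print(max(range(len(infl)), key=lambda i: abs(infl[i])))  # index of the outlier
```

    In the paper's setting, `stat` would refit the OLR and return the scaled sum of moduli of its coefficients.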

  15. Survey of city ordinances and local enforcement regarding commercial availability of tobacco to minors in Minnesota, United States.

    PubMed

    Forster, J L; Komro, K A; Wolfson, M

    1996-01-01

    To determine the extent and nature of local ordinances to regulate tobacco sales to minors, the level of enforcement of local and state laws concerning tobacco availability to minors, and sanctions applied as a result of enforcement. Tobacco control ordinances were collected in 1993 from 222 of the 229 Minnesota, United States, cities with populations of 2000 or more. In addition, a telephone survey of the head of the agency responsible for enforcement of the tobacco ordinances was conducted. Measures included the presence or absence of legislative provisions dealing with youth and tobacco, including licensure of tobacco retailers, sanctions for selling tobacco products to minors, and restrictions on cigarette vending machines, self-service merchandising, and point-of-purchase advertising; and enforcement of these laws (use of inspections and "sting" operations, and sanctions imposed on businesses and minors). Almost 94% of cities required tobacco licences for retailers. However, 57% of the cities specified licences for cigarettes only. Annual licence fees ranged from $10 to $250, with the higher fees adopted in the previous four years. More than 25% of the cities had adopted some kind of restriction on cigarette vending machines, but only six communities had banned self-service cigarette displays. Three cities specified a minimum age for tobacco sales staff. Fewer than 25% of police officials reported having conducted compliance checks with minors or in-store observations of tobacco sales to determine if minors were being sold tobacco during the current year. Police carrying out compliance checks with youth were almost four times as likely to issue citations as those doing in-store observations. More than 90% of police reported enforcement of the law against tobacco purchase or possession by minors, and nearly 40% reported application of penalties against minors. Almost 75% of the cities have done nothing to change policies or enforcement practices to encourage compliance with tobacco age-of-sale legislation, and only a few of the remaining cities have adopted optimal policies. In addition, officials in Minnesota cities are much more likely to use enforcement strategies against minors who buy tobacco than against merchants who sell tobacco.

  16. A Short-Term and High-Resolution System Load Forecasting Approach Using Support Vector Regression with Hybrid Parameters Optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jiang, Huaiguang

    This work proposes an approach for distribution system load forecasting, which aims to provide highly accurate short-term load forecasting with high resolution utilizing a support vector regression (SVR) based forecaster and a two-step hybrid parameters optimization method. Specifically, because the load profiles in distribution systems contain abrupt deviations, a data normalization is designed as the pretreatment for the collected historical load data. Then an SVR model is trained by the load data to forecast the future load. For better performance of SVR, a two-step hybrid optimization algorithm is proposed to determine the best parameters. In the first step of the hybrid optimization algorithm, a designed grid traverse algorithm (GTA) is used to narrow the parameters searching area from a global to a local space. In the second step, based on the result of the GTA, particle swarm optimization (PSO) is used to determine the best parameters in the local parameter space. After the best parameters are determined, the SVR model is used to forecast the short-term load deviation in the distribution system.
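
    The two-step search can be sketched on a toy 2-D objective standing in for the SVR cross-validation error (all names and parameter settings here are hypothetical illustrations, not taken from the report): a coarse grid traverse narrows the box, then a standard PSO refines inside it.

```python
import random

def two_step_search(f, lo, hi, grid=5, particles=10, iters=60, seed=1):
    """Sketch of the two-step idea: a coarse grid traverse (GTA) narrows the
    search box, then particle swarm optimization (PSO) refines inside it.
    f(x, y) stands in for the cross-validation error to be minimized."""
    rng = random.Random(seed)
    # Step 1: grid traverse - locate the best coarse grid point.
    pts = [lo + (hi - lo) * i / (grid - 1) for i in range(grid)]
    bx, by = min(((x, y) for x in pts for y in pts), key=lambda c: f(*c))
    step = (hi - lo) / (grid - 1)
    box = [(max(lo, bx - step), min(hi, bx + step)),
           (max(lo, by - step), min(hi, by + step))]
    # Step 2: PSO confined to the narrowed box.
    pos = [[rng.uniform(*box[d]) for d in range(2)] for _ in range(particles)]
    vel = [[0.0, 0.0] for _ in range(particles)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=lambda p: f(*p))[:]
    for _ in range(iters):
        for i, p in enumerate(pos):
            for d in range(2):
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * rng.random() * (pbest[i][d] - p[d])
                             + 1.5 * rng.random() * (gbest[d] - p[d]))
                p[d] = min(max(p[d] + vel[i][d], box[d][0]), box[d][1])
            if f(*p) < f(*pbest[i]):
                pbest[i] = p[:]
            if f(*p) < f(*gbest):
                gbest = p[:]
    return gbest

# Toy objective with its minimum at (2, 3) inside the global box [0, 10]^2.
best = two_step_search(lambda x, y: (x - 2) ** 2 + (y - 3) ** 2, 0.0, 10.0)
print(best)
```

    In the actual forecaster the two search dimensions would be the SVR hyperparameters, and `f` would retrain and score the model at each candidate point.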

  17. 25 CFR 522.2 - Submission requirements.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... NATIONAL INDIAN GAMING COMMISSION, DEPARTMENT OF THE INTERIOR APPROVAL OF CLASS II AND CLASS III ORDINANCES AND RESOLUTIONS SUBMISSION OF GAMING ORDINANCE OR RESOLUTION § 522.2 Submission requirements. A tribe... officials and key employees; (d) Copies of all tribal gaming regulations; (e) When an ordinance or...

  18. Cable Television Report and Suggested Ordinance.

    ERIC Educational Resources Information Center

    League of California Cities, Sacramento.

    Guidelines and suggested ordinances for cable television regulation by local governments are comprehensively discussed in this report. The emphasis is placed on franchising the cable operator. Seventeen legal aspects of franchising are reviewed, and an exemplary ordinance is presented. In addition, current statistics about cable franchising in…

  19. An account of co-ordination mechanisms for humanitarian assistance during the international response to the 1994 crisis in Rwanda.

    PubMed

    Borton, J

    1996-12-01

    This paper examines the co-ordination strategies developed to respond to the Great Lakes crisis following the events of April 1994. It analyses the different functions and mechanisms which sought to achieve a co-ordinated response--ranging from facilitation at one extreme to management and direction at the other. The different regimes developed to facilitate co-ordination within Rwanda and neighbouring countries, focusing on both inter-agency and inter-country co-ordination issues, are then analysed. Finally, the paper highlights the absence of mechanisms to achieve coherence between the humanitarian, political and security domains. It concludes that effective co-ordination is critical not only to achieve programme efficiency, but to ensure that the appropriate instruments and strategies to respond to complex political emergencies are in place. It proposes a radical re-shaping of international humanitarian, political and security institutions, particularly the United Nations, to improve the effectiveness of humanitarian and political responses to crises such as that in the Great Lakes.

  20. No toy for you! The healthy food incentives ordinance: paternalism or consumer protection?

    PubMed

    Etow, Alexis M

    2012-01-01

    The newest approach to discouraging children's unhealthy eating habits, amidst increasing rates of childhood obesity and other diet-related diseases, seeks to ban something that is not even edible. In 2010, San Francisco enacted the Healthy Food Incentives Ordinance, which prohibits toys in kids' meals if the meals do not meet certain nutritional requirements. Notwithstanding the Ordinance's impact on interstate commerce or potential infringement on companies' commercial speech rights and on parents' rights to determine what their children eat, this Comment argues that the Ordinance does not violate the dormant Commerce Clause, the First Amendment, or substantive due process. The irony is that although the Ordinance likely avoids the constitutional hurdles that hindered earlier measures aimed at childhood obesity, it intrudes on civil liberties more than its predecessors. This Comment analyzes the legality of the Healthy Food Incentives Ordinance to understand its implications on subsequent legislation aimed at combating childhood obesity and on the progression of public health law.

  1. Cutoff Finder: A Comprehensive and Straightforward Web Application Enabling Rapid Biomarker Cutoff Optimization

    PubMed Central

    Budczies, Jan; Klauschen, Frederick; Sinn, Bruno V.; Győrffy, Balázs; Schmitt, Wolfgang D.; Darb-Esfahani, Silvia; Denkert, Carsten

    2012-01-01

    Gene or protein expression data are usually represented by metric or at least ordinal variables. In order to translate a continuous variable into a clinical decision, it is necessary to determine a cutoff point and to stratify patients into two groups, each requiring a different kind of treatment. Currently, there is no standard method or standard software for biomarker cutoff determination. Therefore, we developed Cutoff Finder, a bundle of optimization and visualization methods for cutoff determination that is accessible online. While one of the methods for cutoff optimization is based solely on the distribution of the marker under investigation, other methods optimize the correlation of the dichotomization with respect to an outcome or survival variable. We illustrate the functionality of Cutoff Finder by the analysis of the gene expression of estrogen receptor (ER) and progesterone receptor (PgR) in breast cancer tissues. The distribution of these important markers is analyzed and correlated with immunohistologically determined ER status and distant metastasis-free survival. Cutoff Finder is expected to fill a relevant gap in the available biometric software repertoire and will enable faster optimization of new diagnostic biomarkers. The tool can be accessed at http://molpath.charite.de/cutoff. PMID:23251644
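
    One of the outcome-oriented cutoff methods described above can be paraphrased as scanning every observed marker value and keeping the cutoff whose dichotomization associates most strongly with a binary outcome. The sketch below is an illustration of that idea only, not the authors' code; the chi-square criterion and the toy data are our own choices.

```python
# A minimal sketch of outcome-based cutoff optimization, in the spirit of
# Cutoff Finder's correlation-with-outcome methods (NOT the authors' code;
# the chi-square criterion and toy data are illustrative choices).

def chi_square(table):
    """Pearson chi-square statistic for a 2x2 table [[a, b], [c, d]]."""
    (a, b), (c, d) = table
    n = a + b + c + d
    den = (a + b) * (c + d) * (a + c) * (b + d)
    return n * (a * d - b * c) ** 2 / den if den else 0.0

def best_cutoff(marker, outcome):
    """Scan every observed marker value as a candidate cutoff and keep the
    one whose dichotomization associates most strongly with the outcome."""
    best, best_stat = None, -1.0
    for cut in sorted(set(marker)):
        hi_pos = sum(1 for m, y in zip(marker, outcome) if m > cut and y)
        hi_neg = sum(1 for m, y in zip(marker, outcome) if m > cut and not y)
        lo_pos = sum(1 for m, y in zip(marker, outcome) if m <= cut and y)
        lo_neg = sum(1 for m, y in zip(marker, outcome) if m <= cut and not y)
        stat = chi_square([[hi_pos, hi_neg], [lo_pos, lo_neg]])
        if stat > best_stat:
            best, best_stat = cut, stat
    return best

marker = [0.1, 0.3, 0.4, 0.9, 1.2, 1.5, 2.0, 2.2]   # toy expression values
outcome = [0, 0, 0, 0, 1, 1, 1, 1]                  # toy binary endpoint
print(best_cutoff(marker, outcome))  # 0.9 separates the two groups perfectly
```

    For a survival endpoint the same scan applies with a log-rank statistic in place of the chi-square.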

  2. Reduced state feedback gain computation. [optimization and control theory for aircraft control]

    NASA Technical Reports Server (NTRS)

    Kaufman, H.

    1976-01-01

    Because application of conventional optimal linear regulator theory to flight controller design requires the capability of measuring and/or estimating the entire state vector, it is of interest to consider procedures for computing controls that are restricted to be linear feedback functions of a lower-dimensional output vector and that take into account the presence of measurement noise and process uncertainty. Therefore, a stochastic linear model is presented that accounts for aircraft parameter and initial-condition uncertainty, measurement noise, turbulence, pilot commands, and a restricted number of measurable outputs. Optimization with respect to the corresponding output feedback gains was performed for both finite- and infinite-time performance indices, without gradient computation, by using Zangwill's modification of a procedure originally proposed by Powell. Results using a seventh-order process show the proposed procedures to be very effective.
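
    The gradient-free optimization step can be illustrated on a scalar analogue. The sketch below is not the original NASA procedure: it uses a simple pattern search (Powell/Zangwill-like in spirit only) and an illustrative first-order plant, with all constants chosen for the example.

```python
# A sketch of gradient-free gain optimization, Powell/Zangwill-like in spirit
# only (not the original NASA code): a scalar plant x[k+1] = a*x[k] + b*u[k]
# with output feedback u = -g*x, whose finite-horizon quadratic cost is
# minimized over the gain g by a simple pattern search.

def cost(g, a=1.1, b=1.0, r=0.1, x0=1.0, horizon=50):
    """Finite-horizon quadratic cost J = sum(x^2 + r*u^2) under u = -g*x."""
    x, total = x0, 0.0
    for _ in range(horizon):
        u = -g * x
        total += x * x + r * u * u
        x = a * x + b * u
    return total

def pattern_search(f, g0=0.0, step=1.0, tol=1e-6):
    """Derivative-free minimization: probe g +/- step, halve step on failure."""
    g, fg = g0, f(g0)
    while step > tol:
        moved = False
        for cand in (g + step, g - step):
            fc = f(cand)
            if fc < fg:
                g, fg, moved = cand, fc, True
        if not moved:
            step *= 0.5
    return g

g_opt = pattern_search(cost)
print(round(g_opt, 3))  # a stabilizing gain near the scalar LQR optimum
```

    The same probe-and-shrink loop generalizes to a vector of output feedback gains by cycling through one coordinate direction at a time.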

  3. Quantum optimization for training support vector machines.

    PubMed

    Anguita, Davide; Ridella, Sandro; Rivieccio, Fabio; Zunino, Rodolfo

    2003-01-01

    Refined concepts, such as Rademacher estimates of model complexity and nonlinear criteria for weighting empirical classification errors, represent recent and promising approaches to characterizing the generalization ability of Support Vector Machines (SVMs). The advantages of these techniques lie in both improving the SVM representation ability and yielding tighter generalization bounds. On the other hand, they often make Quadratic Programming algorithms no longer applicable, so SVM training cannot benefit from efficient, specialized optimization techniques. This paper considers the application of Quantum Computing to solve the problem of effective SVM training, especially in the case of digital implementations. The presented research compares the behavioral aspects of conventional and enhanced SVMs; experiments on both synthetic and real-world problems support the theoretical analysis. At the same time, the related differences between Quadratic Programming and Quantum-based optimization techniques are considered.

  4. Vectorized program architectures for supercomputer-aided circuit design

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rizzoli, V.; Ferlito, M.; Neri, A.

    1986-01-01

    Vector processors (supercomputers) can be effectively employed in MIC or MMIC applications to solve problems of large numerical size, such as broad-band nonlinear design or statistical design (yield optimization). In order to fully exploit the capabilities of vector hardware, any program architecture must be structured accordingly. This paper presents a possible approach to the "semantic" vectorization of microwave circuit design software. Speed-up factors of the order of 50 can be obtained on a typical vector processor (Cray X-MP) with respect to the most powerful scalar computers (CDC 7600), with cost reductions of more than one order of magnitude. This could broaden the horizon of microwave CAD techniques to include problems that are practically out of the reach of conventional systems.

  5. Inline Measurement of Particle Concentrations in Multicomponent Suspensions using Ultrasonic Sensor and Least Squares Support Vector Machines.

    PubMed

    Zhan, Xiaobin; Jiang, Shulan; Yang, Yili; Liang, Jian; Shi, Tielin; Li, Xiwen

    2015-09-18

    This paper proposes an ultrasonic measurement system based on least squares support vector machines (LS-SVM) for inline measurement of particle concentrations in multicomponent suspensions. Firstly, the ultrasonic signals are analyzed and processed, and the optimal feature subset that contributes to the best model performance is selected based on the importance of features. Secondly, the LS-SVM model is tuned, trained and tested with different feature subsets to obtain the optimal model. In addition, a comparison is made between the partial least squares (PLS) model and the LS-SVM model. Finally, the optimal LS-SVM model with the optimal feature subset is applied to inline measurement of particle concentrations in the mixing process. The results show that the proposed method is reliable and accurate for inline measurement of particle concentrations in multicomponent suspensions, and the measurement accuracy is sufficiently high for industrial application. Furthermore, the proposed method is applicable to dynamic modeling of nonlinear systems and provides a feasible way to monitor industrial processes.
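
    In its textbook form, LS-SVM training reduces to a single linear system rather than a quadratic program. The sketch below shows that core computation on toy one-dimensional data; it is not the calibrated ultrasonic model of the paper, and the RBF width and regularization gamma are illustrative values.

```python
# A textbook LS-SVM regression sketch (not the paper's calibrated ultrasonic
# model): training reduces to one linear system, solved here by Gaussian
# elimination. The RBF width and regularization gamma are illustrative.
import math

def rbf(a, b, width=1.0):
    return math.exp(-((a - b) ** 2) / (2 * width ** 2))

def solve(A, y):
    """Gaussian elimination with partial pivoting for A x = y."""
    n = len(A)
    M = [row[:] + [y[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def train_lssvm(xs, ys, gamma=100.0):
    """Solve the LS-SVM dual [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y]."""
    n = len(xs)
    A = [[0.0] + [1.0] * n]
    for i in range(n):
        A.append([1.0] + [rbf(xs[i], xs[j]) + (1.0 / gamma if i == j else 0.0)
                          for j in range(n)])
    sol = solve(A, [0.0] + list(ys))
    b, alpha = sol[0], sol[1:]
    return lambda x: b + sum(a * rbf(x, xi) for a, xi in zip(alpha, xs))

xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [0.0, 0.8, 0.9, 0.1, -0.8]      # rough samples of sin(x)
f = train_lssvm(xs, ys)
print(round(f(1.0), 2))              # close to the training target 0.8
```

    Because every training point contributes to the solution, tuning gamma (and the kernel width) against held-out data, as the paper does with different feature subsets, is what controls over- versus under-fitting.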

  6. Optimal cooperative time-fixed impulsive rendezvous

    NASA Technical Reports Server (NTRS)

    Mirfakhraie, Koorosh; Conway, Bruce A.; Prussing, John E.

    1988-01-01

    A method has been developed for determining optimal, i.e., minimum fuel, trajectories for the fixed-time cooperative rendezvous of two spacecraft. The method presently assumes that the vehicles perform a total of three impulsive maneuvers with each vehicle being active, that is, making at least one maneuver. The cost of a feasible 'reference' trajectory is improved by an optimizer which uses an analytical gradient developed using primer vector theory and a new solution for the optimal terminal (rendezvous) maneuver. Results are presented for a large number of cases in which the initial orbits of both vehicles are circular but in which the initial positions of the vehicles and the allotted time for rendezvous are varied. In general, the cost of the cooperative rendezvous is less than that of rendezvous with one vehicle passive. Further improvement in cost may be obtained in the future when additional, i.e., midcourse, impulses are allowed and inserted as indicated for some cases by the primer vector histories which are generated by the program.

  7. Efficient design of gain-flattened multi-pump Raman fiber amplifiers using least squares support vector regression

    NASA Astrophysics Data System (ADS)

    Chen, Jing; Qiu, Xiaojie; Yin, Cunyi; Jiang, Hao

    2018-02-01

    An efficient method to design broadband gain-flattened Raman fiber amplifiers with multiple pumps is proposed based on least squares support vector regression (LS-SVR). A multi-input multi-output LS-SVR model is introduced to replace the complicated solving process of the nonlinear coupled Raman amplification equations. The proposed approach contains two stages: an offline training stage and an online optimization stage. During the offline stage, the LS-SVR model is trained. Owing to the good generalization capability of LS-SVR, the net gain spectrum can be directly and accurately obtained when any combination of pump wavelengths and powers is input to the well-trained model. During the online stage, we incorporate the LS-SVR model into a particle swarm optimization algorithm to find the optimal pump configuration. The design results demonstrate that the proposed method greatly shortens the computation time and enhances the efficiency of pump parameter optimization for Raman fiber amplifier design.
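
    The online stage can be illustrated with a plain particle swarm loop. In the sketch below a toy two-dimensional quadratic stands in for the trained LS-SVR surrogate (the real objective maps pump wavelengths and powers to gain ripple); the inertia weight, acceleration constants and bounds are illustrative, not the paper's settings.

```python
# A plain particle swarm loop for the online stage, sketched with a toy 2-D
# quadratic standing in for the trained LS-SVR surrogate (the real objective
# maps pump wavelengths/powers to gain ripple). All constants illustrative.
import random

def pso(f, dim=2, particles=20, iters=200, lo=-5.0, hi=5.0, seed=1):
    rng = random.Random(seed)
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(particles)]
    vel = [[0.0] * dim for _ in range(particles)]
    pbest, pval = [p[:] for p in pos], [f(p) for p in pos]
    g = min(range(particles), key=lambda i: pval[i])
    gbest, gval = pbest[g][:], pval[g]
    w, c1, c2 = 0.72, 1.49, 1.49        # common constriction-style weights
    for _ in range(iters):
        for i in range(particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            v = f(pos[i])
            if v < pval[i]:
                pbest[i], pval[i] = pos[i][:], v
                if v < gval:
                    gbest, gval = pos[i][:], v
    return gbest, gval

ripple = lambda p: (p[0] - 1.0) ** 2 + (p[1] - 2.0) ** 2   # toy surrogate
best, val = pso(ripple)   # best converges to the vicinity of (1, 2)
```

    The speed advantage reported in the abstract comes from the surrogate: each swarm evaluation is a cheap regression call instead of a full integration of the coupled Raman equations.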

  8. Prediction of chemical biodegradability using support vector classifier optimized with differential evolution.

    PubMed

    Cao, Qi; Leung, K M

    2014-09-22

    Reliable computer models for the prediction of chemical biodegradability from molecular descriptors and fingerprints are very important for making health and environmental decisions. Coupling of the differential evolution (DE) algorithm with the support vector classifier (SVC) in order to optimize the main parameters of the classifier resulted in an improved classifier called the DE-SVC, which is introduced in this paper for use in chemical biodegradability studies. The DE-SVC was applied to predict the biodegradation of chemicals on the basis of extensive sample data sets and known structural features of molecules. Our optimization experiments showed that DE can efficiently find the proper parameters of the SVC. The resulting classifier possesses strong robustness and reliability compared with grid search, genetic algorithm, and particle swarm optimization methods. The classification experiments conducted here showed that the DE-SVC exhibits better classification performance than models previously used for such studies. It is a more effective and efficient prediction model for chemical biodegradability.
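
    The DE/rand/1/bin scheme at the heart of such a DE-SVC can be sketched independently of the classifier. Below, a toy error surface stands in for SVC cross-validation error over two hyperparameters; the bounds and the location of the optimum are hypothetical.

```python
# DE/rand/1/bin sketched without the classifier: a toy error surface stands
# in for SVC cross-validation error over two hyperparameters. The bounds and
# the optimum below are hypothetical, not from the paper.
import random

def de_rand_1_bin(f, bounds, pop_size=20, F=0.8, CR=0.9, gens=100, seed=0):
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    fit = [f(p) for p in pop]
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            jrand = rng.randrange(dim)
            trial = []
            for d in range(dim):
                if d == jrand or rng.random() < CR:            # crossover
                    lo, hi = bounds[d]
                    v = pop[a][d] + F * (pop[b][d] - pop[c][d])  # mutation
                    trial.append(min(hi, max(lo, v)))
                else:
                    trial.append(pop[i][d])
            ft = f(trial)
            if ft <= fit[i]:                                   # selection
                pop[i], fit[i] = trial, ft
    best = min(range(pop_size), key=lambda i: fit[i])
    return pop[best], fit[best]

# Hypothetical validation-error surface with its minimum at C=10, gamma=0.5.
err = lambda p: (p[0] - 10.0) ** 2 + 100.0 * (p[1] - 0.5) ** 2
best, val = de_rand_1_bin(err, [(0.1, 100.0), (1e-3, 10.0)])
print(val < 0.1)  # True: DE homes in on the optimum
```

    In the actual DE-SVC setting, `f` would train an SVC with the trial parameters and return its cross-validation error, which is why the population-based, gradient-free search pays off.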

  9. 77 FR 34981 - Stillaguamish Tribe of Indians-Liquor Control Ordinance

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-06-12

    ... DEPARTMENT OF THE INTERIOR Bureau of Indian Affairs Stillaguamish Tribe of Indians--Liquor Control... publishes the Stillaguamish Tribe of Indians' Liquor Control Ordinance. The Ordinance regulates and controls... of the Stillaguamish Tribe of Indians, will increase the ability of the tribal government to control...

  10. 7 CFR 1901.204 - Compliance reviews.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... Administrator, Community and Business Programs, for each recipient. (4) Mandatory hook-up ordinance. Compliance... under the provisions of a mandatory hook-up ordinance will consist of a certification by the borrower or grantee that the ordinance is still in effect and is being enforced. (5) Forwarding noncompliance report...

  11. 7 CFR 1901.204 - Compliance reviews.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... Administrator, Community and Business Programs, for each recipient. (4) Mandatory hook-up ordinance. Compliance... under the provisions of a mandatory hook-up ordinance will consist of a certification by the borrower or grantee that the ordinance is still in effect and is being enforced. (5) Forwarding noncompliance report...

  12. 7 CFR 1901.204 - Compliance reviews.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... Administrator, Community and Business Programs, for each recipient. (4) Mandatory hook-up ordinance. Compliance... under the provisions of a mandatory hook-up ordinance will consist of a certification by the borrower or grantee that the ordinance is still in effect and is being enforced. (5) Forwarding noncompliance report...

  13. 7 CFR 1901.204 - Compliance reviews.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... Administrator, Community and Business Programs, for each recipient. (4) Mandatory hook-up ordinance. Compliance... under the provisions of a mandatory hook-up ordinance will consist of a certification by the borrower or grantee that the ordinance is still in effect and is being enforced. (5) Forwarding noncompliance report...

  14. Quantum Support Vector Machine for Big Data Classification

    NASA Astrophysics Data System (ADS)

    Rebentrost, Patrick; Mohseni, Masoud; Lloyd, Seth

    2014-09-01

    Supervised machine learning is the classification of new data based on already classified training examples. In this work, we show that the support vector machine, an optimized binary classifier, can be implemented on a quantum computer, with complexity logarithmic in the size of the vectors and the number of training examples. In cases where classical sampling algorithms require polynomial time, an exponential speedup is obtained. At the core of this quantum big data algorithm is a nonsparse matrix exponentiation technique for efficiently performing a matrix inversion of the training data inner-product (kernel) matrix.

  15. Spatio-temporal evolution of perturbations in ensembles initialized by bred, Lyapunov and singular vectors

    NASA Astrophysics Data System (ADS)

    Pazó, Diego; Rodríguez, Miguel A.; López, Juan M.

    2010-05-01

    We study the evolution of finite perturbations in the Lorenz ‘96 model, a meteorological toy model of the atmosphere. The initial perturbations are chosen to be aligned along different dynamic vectors: bred, Lyapunov, and singular vectors. Using a particular vector determines not only the amplification rate of the perturbation but also the spatial structure of the perturbation and its stability under the evolution of the flow. The evolution of perturbations is systematically studied by means of the so-called mean-variance of logarithms diagram that provides in a very compact way the basic information to analyse the spatial structure. We discuss the corresponding advantages of using those different vectors for preparing initial perturbations to be used in ensemble prediction systems, focusing on key properties: dynamic adaptation to the flow, robustness, equivalence between members of the ensemble, etc. Among all the vectors considered here, the so-called characteristic Lyapunov vectors are possibly optimal, in the sense that they are both perfectly adapted to the flow and extremely robust.
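
    Of the three kinds of vectors, bred vectors are the simplest to compute: run a control and a perturbed trajectory and periodically rescale their difference. The sketch below does this in the Lorenz '96 model the paper uses; the integration scheme, rescaling interval and amplitude are illustrative choices, not the authors' settings.

```python
# Breeding a perturbation in the Lorenz '96 model used by the paper. The
# RK4 step, rescaling interval and amplitude are illustrative choices.
import math, random

N, F = 40, 8.0                      # standard Lorenz '96 size and forcing

def l96_tendency(x):
    return [(x[(i + 1) % N] - x[i - 2]) * x[i - 1] - x[i] + F for i in range(N)]

def rk4_step(x, dt=0.05):
    k1 = l96_tendency(x)
    k2 = l96_tendency([xi + 0.5 * dt * k for xi, k in zip(x, k1)])
    k3 = l96_tendency([xi + 0.5 * dt * k for xi, k in zip(x, k2)])
    k4 = l96_tendency([xi + dt * k for xi, k in zip(x, k3)])
    return [xi + dt / 6 * (a + 2 * b + 2 * c + d)
            for xi, a, b, c, d in zip(x, k1, k2, k3, k4)]

def breed(x, amplitude=1e-3, cycles=50, steps_per_cycle=4, seed=0):
    """Run control and perturbed trajectories; each cycle, rescale their
    difference back to a fixed amplitude. The result is the bred vector."""
    rng = random.Random(seed)
    pert = [rng.gauss(0.0, 1.0) for _ in range(N)]
    norm = math.sqrt(sum(p * p for p in pert))
    xp = [xi + amplitude * p / norm for xi, p in zip(x, pert)]
    for _ in range(cycles):
        for _ in range(steps_per_cycle):
            x, xp = rk4_step(x), rk4_step(xp)
        diff = [a - b for a, b in zip(xp, x)]
        norm = math.sqrt(sum(d * d for d in diff))
        xp = [xi + amplitude * d / norm for xi, d in zip(x, diff)]
    return x, [(a - b) / amplitude for a, b in zip(xp, x)]

state = [F] * N                      # spin up from a perturbed fixed point
state[0] += 0.01
for _ in range(500):
    state = rk4_step(state)
state, bv = breed(state)
print(round(sum(b * b for b in bv), 6))  # 1.0: bred vector has unit norm
```

    Lyapunov and singular vectors require linearizing the model (and, for singular vectors, an adjoint), which is the practical reason breeding is so widely used in operational ensembles.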

  17. Multiclass Reduced-Set Support Vector Machines

    NASA Technical Reports Server (NTRS)

    Tang, Benyang; Mazzoni, Dominic

    2006-01-01

    There are well-established methods for reducing the number of support vectors in a trained binary support vector machine, often with minimal impact on accuracy. We show how reduced-set methods can be applied to multiclass SVMs made up of several binary SVMs, with significantly better results than reducing each binary SVM independently. Our approach is based on Burges' approach that constructs each reduced-set vector as the pre-image of a vector in kernel space, but we extend this by recomputing the SVM weights and bias optimally using the original SVM objective function. This leads to greater accuracy for a binary reduced-set SVM, and also allows vectors to be 'shared' between multiple binary SVMs for greater multiclass accuracy with fewer reduced-set vectors. We also propose computing pre-images using differential evolution, which we have found to be more robust than gradient descent alone. We show experimental results on a variety of problems and find that this new approach is consistently better than previous multiclass reduced-set methods, sometimes with a dramatic difference.
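
    Burges-style reduced-set construction needs the pre-image of a vector in kernel space. For a Gaussian kernel the classic fixed-point iteration below is the usual baseline; the abstract notes that the paper instead uses differential evolution for robustness. One-dimensional inputs only, for brevity.

```python
# Classic fixed-point pre-image iteration for a Gaussian kernel (the baseline
# that the paper replaces with differential evolution). 1-D inputs for brevity.
import math

def preimage(xs, alphas, sigma=1.0, z0=0.0, iters=100):
    """Fixed-point iteration for a point z whose kernel image approximates
    the expansion sum_i alphas[i] * k(xs[i], .), k Gaussian with width sigma."""
    z = z0
    for _ in range(iters):
        ws = [a * math.exp(-((z - x) ** 2) / (2 * sigma ** 2))
              for a, x in zip(alphas, xs)]
        z = sum(w * x for w, x in zip(ws, xs)) / sum(ws)
    return z

# Two equally weighted support vectors: the single pre-image lands midway.
z = preimage([0.0, 1.0], [1.0, 1.0])
print(round(z, 6))  # 0.5
```

    Gradient or fixed-point searches like this one can stall in local optima, particularly with mixed-sign coefficients, which motivates the paper's switch to differential evolution.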

  18. Methods, systems and apparatus for controlling third harmonic voltage when operating a multi-phase machine in an overmodulation region

    DOEpatents

    Perisic, Milun; Kinoshita, Michael H; Ranson, Ray M; Gallegos-Lopez, Gabriel

    2014-06-03

    Methods, systems and apparatus are provided for controlling third harmonic voltages when operating a multi-phase machine in an overmodulation region. The multi-phase machine can be, for example, a five-phase machine in a vector controlled motor drive system that includes a five-phase PWM controlled inverter module that drives the five-phase machine. Techniques for overmodulating a reference voltage vector are provided. For example, when the reference voltage vector is determined to be within the overmodulation region, an angle of the reference voltage vector can be modified to generate a reference voltage overmodulation control angle, and a magnitude of the reference voltage vector can be modified, based on the reference voltage overmodulation control angle, to generate a modified magnitude of the reference voltage vector. By modifying the reference voltage vector, voltage command signals that control a five-phase inverter module can be optimized to increase output voltages generated by the five-phase inverter module.

  19. Pushing Memory Bandwidth Limitations Through Efficient Implementations of Block-Krylov Space Solvers on GPUs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Clark, M. A.; Strelchenko, Alexei; Vaquero, Alejandro

    Lattice quantum chromodynamics simulations in nuclear physics have benefited from a tremendous number of algorithmic advances such as multigrid and eigenvector deflation. These improve the time to solution but do not alleviate the intrinsic memory-bandwidth constraints of the matrix-vector operation dominating iterative solvers. Batching this operation for multiple vectors and exploiting cache and register blocking can yield a super-linear speed up. Block-Krylov solvers can naturally take advantage of such batched matrix-vector operations, further reducing the iterations to solution by sharing the Krylov space between solves. However, practical implementations typically suffer from the quadratic scaling in the number of vector-vector operations. Using the QUDA library, we present an implementation of a block-CG solver on NVIDIA GPUs which reduces the memory-bandwidth complexity of vector-vector operations from quadratic to linear. We present results for the HISQ discretization, showing a 5x speedup compared to highly-optimized independent Krylov solves on NVIDIA's SaturnV cluster.
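
    The bandwidth argument can be made concrete by counting element loads. The sketch below is illustrative only (plain Python, not QUDA's CUDA kernels): computing all pairwise inner products of k vectors naively re-reads each vector k times, while a fused single sweep loads each element once and updates every partial sum from values already held in registers.

```python
# Illustrative load-counting sketch (NOT QUDA code) of why fusing the
# Gram-matrix computation cuts memory traffic from k^2 to k vector reads.

def gram_naive(vecs, counter):
    """k^2 separate dot products: every vector is re-read k times."""
    k = len(vecs)
    G = [[0.0] * k for _ in range(k)]
    for i in range(k):
        for j in range(k):
            for a, b in zip(vecs[i], vecs[j]):
                counter[0] += 2              # two element loads per product
                G[i][j] += a * b
    return G

def gram_fused(vecs, counter):
    """One sweep over the data: each element is loaded once per sweep step
    and all k^2 partial sums are updated from 'register-resident' values."""
    k, n = len(vecs), len(vecs[0])
    G = [[0.0] * k for _ in range(k)]
    for t in range(n):
        col = [v[t] for v in vecs]
        counter[0] += k                      # k element loads per sweep step
        for i in range(k):
            for j in range(k):
                G[i][j] += col[i] * col[j]
    return G

vecs = [[float(i + j) for j in range(100)] for i in range(8)]
c1, c2 = [0], [0]
same = gram_naive(vecs, c1) == gram_fused(vecs, c2)
print(same, c1[0], c2[0])  # True 12800 800
```

    On a GPU the same fusion is what turns the quadratic number of vector-vector passes into a linear number of global-memory sweeps.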

  20. Novel Concepts for HIV Vaccine Vector Design.

    PubMed

    Alayo, Quazim A; Provine, Nicholas M; Penaloza-MacMaster, Pablo

    2017-01-01

    The unprecedented challenges of developing effective vaccines against intracellular pathogens such as HIV, malaria, and tuberculosis have resulted in more rational approaches to vaccine development. Apart from the recent advances in the design and selection of improved epitopes and adjuvants, there are also ongoing efforts to optimize delivery platforms. Viral vectors are the best-characterized delivery tools because of their intrinsic adjuvant capability, unique cellular tropism, and ability to trigger robust adaptive immune responses. However, a known limitation of viral vectors is preexisting immunity, and ongoing efforts are aimed at developing novel vector platforms with lower seroprevalence. It is also becoming increasingly clear that different vectors, even those derived from phylogenetically similar viruses, can elicit substantially distinct immune responses, in terms of quantity, quality, and location, which can ultimately affect immune protection. This review provides a summary of the status of viral vector development for HIV vaccines, with a particular focus on novel viral vectors and the types of adaptive immune responses that they induce.

  1. An evaluation of the accuracy of geomagnetic data obtained from an unattended, automated, quasi-absolute station

    USGS Publications Warehouse

    Herzog, D.C.

    1990-01-01

    A comparison is made of geomagnetic calibration data obtained from a high-sensitivity proton magnetometer enclosed within an orthogonal bias coil system, with data obtained from standard procedures at a mid-latitude U.S. Geological Survey magnetic observatory using a quartz horizontal magnetometer, a Ruska magnetometer, and a total field magnetometer. The orthogonal coil arrangement is used with the proton magnetometer to provide Deflected-Inclination-Deflected-Declination (DIDD) data from which quasi-absolute values of declination, horizontal intensity, and vertical intensity can be derived. Vector magnetometers provide the ordinate values to yield baseline calibrations for both the DIDD and standard observatory processes. Results obtained from a prototype system over a period of several months indicate that the DIDD unit can furnish adequate absolute field values for maintaining observatory calibration data, thus providing baseline control for unattended, remote stations.

  2. The impact of climate change on the epidemiology and control of Rift Valley fever.

    PubMed

    Martin, V; Chevalier, V; Ceccato, P; Anyamba, A; De Simone, L; Lubroth, J; de La Rocque, S; Domenech, J

    2008-08-01

    Climate change is likely to change the frequency of extreme weather events, such as tropical cyclones, floods, droughts and hurricanes, and may destabilise and weaken the ecosystem services upon which human society depends. Climate change is also expected to affect animal, human and plant health via indirect pathways: it is likely that the geography of infectious diseases and pests will be altered, including the distribution of vector-borne diseases, such as Rift Valley fever, yellow fever, malaria and dengue, which are highly sensitive to climatic conditions. Extreme weather events might then create the necessary conditions for Rift Valley fever to expand its geographical range northwards and cross the Mediterranean and Arabian seas, with an unexpected impact on the animal and human health of newly affected countries. Strengthening global, regional and national early warning systems is crucial, as are co-ordinated research programmes and subsequent prevention and intervention measures.

  3. Method for introducing unidirectional nested deletions

    DOEpatents

    Dunn, John J.; Quesada, Mark A.; Randesi, Matthew

    2001-01-01

    Disclosed is a method for the introduction of unidirectional deletions in a cloned DNA segment in the context of a cloning vector which contains an f1 endonuclease recognition sequence adjacent to the insertion site of the DNA segment. Also disclosed is a method for producing single-stranded DNA probes utilizing the same cloning vector. An optimal vector, PZIP, is described. Methods for introducing unidirectional deletions into a terminal location of a cloned DNA sequence which is inserted into the vector of the present invention are also disclosed. These methods are useful for introducing deletions into either or both ends of a cloned DNA insert, for high-throughput sequencing of any DNA of interest.

  4. Margin based ontology sparse vector learning algorithm and applied in biology science.

    PubMed

    Gao, Wei; Qudair Baig, Abdul; Ali, Haidar; Sajjad, Wasim; Reza Farahani, Mohammad

    2017-01-01

    In the biology field, ontology applications involve large amounts of genetic information and chemical information on molecular structure, so the knowledge attached to ontology concepts conveys a great deal of information. As a result, the vector corresponding to an ontology concept is often of very high dimension, which places higher demands on ontology algorithms. Against this background, we consider the design of an ontology sparse vector algorithm and its application in biology. In this paper, using marginal likelihood and marginal distributions, an optimized strategy for margin-based ontology sparse vector learning is presented. Finally, the new algorithm is applied to the gene ontology and the plant ontology to verify its efficiency.
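
    The abstract does not spell out its marginal-likelihood strategy in enough detail to reproduce, so the sketch below shows only the generic core of sparse vector learning: l1-penalized least squares (lasso) solved by coordinate descent, which zeroes out most components of the learned score vector. The data and penalty are illustrative.

```python
# Generic sparse-vector learning core (NOT the paper's algorithm): lasso
# solved by coordinate descent. Toy data and penalty are illustrative.

def soft_threshold(z, t):
    return (z - t) if z > t else (z + t) if z < -t else 0.0

def lasso_cd(X, y, lam=0.5, iters=100):
    """Coordinate descent for min_w 0.5*||y - Xw||^2 + lam*||w||_1."""
    n, p = len(X), len(X[0])
    norms = [sum(X[i][j] ** 2 for i in range(n)) for j in range(p)]
    w = [0.0] * p
    for _ in range(iters):
        for j in range(p):
            # correlation of column j with the partial residual
            rho = sum(X[i][j] * (y[i] - sum(X[i][k] * w[k]
                      for k in range(p) if k != j)) for i in range(n))
            w[j] = soft_threshold(rho, lam) / norms[j]
    return w

# 4 samples, 3 features; only the first feature carries real signal.
X = [[1.0, 0.0, 0.5], [0.0, 1.0, 0.5], [-1.0, 0.0, -0.5], [0.0, -1.0, -0.5]]
y = [2.0, 0.1, -2.0, -0.1]
w = lasso_cd(X, y)
print([round(v, 2) for v in w])  # [1.75, 0.0, 0.0]: a sparse score vector
```

    In the ontology setting, the rows of X would be concept feature vectors and the sparsity of w is what keeps the high-dimensional score function interpretable and cheap to evaluate.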

  5. The magnetofection method: using magnetic force to enhance gene delivery.

    PubMed

    Plank, Christian; Schillinger, Ulrike; Scherer, Franz; Bergemann, Christian; Rémy, Jean-Serge; Krötz, Florian; Anton, Martina; Lausier, Jim; Rosenecker, Joseph

    2003-05-01

    In order to enhance and target gene delivery we have previously established a novel method, termed magnetofection, which uses magnetic force acting on gene vectors that are associated with magnetic particles. Here we review the benefits, the mechanism and the potential of the method with regard to overcoming physical limitations to gene delivery. Magnetic particle chemistry and physics are discussed, followed by a detailed presentation of vector formulation and optimization work. While magnetofection does not necessarily improve the overall performance of any given standard gene transfer method in vitro, its major potential lies in the extraordinarily rapid and efficient transfection at low vector doses and the possibility of remotely controlled vector targeting in vivo.

  6. Testing of the Support Vector Machine for Binary-Class Classification

    NASA Technical Reports Server (NTRS)

    Scholten, Matthew

    2011-01-01

    The Support Vector Machine is a powerful algorithm, useful in classifying data into classes. The Support Vector Machines implemented in this research were used as classifiers for the final stage in a Multistage Autonomous Target Recognition system. A single-kernel SVM known as SVMlight, and a modified version known as a Support Vector Machine with K-Means Clustering, were used. These SVM algorithms were tested as classifiers under varying conditions: image noise levels varied, and the orientation of the targets changed. The classifiers were then optimized to demonstrate their maximum potential as classifiers. Results demonstrate the reliability of SVM as a method for classification; from trial to trial, SVM produces consistent results.

  7. Safety belt usage before and after enactment of a mandatory usage ordinance (Lexington-Fayette County, Kentucky)

    DOT National Transportation Integrated Search

    1990-10-01

    In the absence of a statewide law, a local ordinance was passed by the Lexington-Fayette Urban County Government mandating use of safety belts. The objective of this study was to conduct surveys before the ordinance was passed, during the implementat...

  8. Introducing Students to Plant Geography: Polar Ordination Applied to Hanging Gardens.

    ERIC Educational Resources Information Center

    Malanson, George P.; And Others

    1993-01-01

    Reports on a research study in which college students used a statistical ordination method to reveal relationships among plant community structures and physical, disturbance, and spatial variables. Concludes that polar ordination helps students understand the methodology of plant geography and encourages further student research. (CFR)

  9. Cardination and Ordination Learning in Young Children.

    ERIC Educational Resources Information Center

    Stock, William; Flora, June

    This paper analyzes Brainerd's work in assessing the developmental sequence of ordination and cardination concepts of number, and describes a study which investigated the hypothesis that task-specific difficulty could explain Brainerd's data. Three new tasks were designed for the assessment of ordination and cardination and administered to a…

  10. 25 CFR 522.6 - Approval requirements for class III ordinances.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... Section 522.6 Indians NATIONAL INDIAN GAMING COMMISSION, DEPARTMENT OF THE INTERIOR APPROVAL OF CLASS II AND CLASS III ORDINANCES AND RESOLUTIONS SUBMISSION OF GAMING ORDINANCE OR RESOLUTION § 522.6 Approval...) The tribe shall have the sole proprietary interest in and responsibility for the conduct of any gaming...

  11. 36 CFR 28.15 - Approval of local zoning ordinances.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 36 Parks, Forests, and Public Property 1 2010-07-01 2010-07-01 false Approval of local zoning ordinances. 28.15 Section 28.15 Parks, Forests, and Public Property NATIONAL PARK SERVICE, DEPARTMENT OF THE INTERIOR FIRE ISLAND NATIONAL SEASHORE: ZONING STANDARDS Federal Standards and Approval of Local Ordinances...

  12. Simulating Ordinal Data

    ERIC Educational Resources Information Center

    Ferrari, Pier Alda; Barbiero, Alessandro

    2012-01-01

    The increasing use of ordinal variables in different fields has led to the introduction of new statistical methods for their analysis. The performance of these methods needs to be investigated under a number of experimental conditions. Procedures to simulate from ordinal variables are then required. In this article, we deal with simulation from…

  13. Bayesian Adaptive Lasso for Ordinal Regression with Latent Variables

    ERIC Educational Resources Information Center

    Feng, Xiang-Nan; Wu, Hao-Tian; Song, Xin-Yuan

    2017-01-01

    We consider an ordinal regression model with latent variables to investigate the effects of observable and latent explanatory variables on the ordinal responses of interest. Each latent variable is characterized by correlated observed variables through a confirmatory factor analysis model. We develop a Bayesian adaptive lasso procedure to conduct…

  14. 36 CFR 28.15 - Approval of local zoning ordinances.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 36 Parks, Forests, and Public Property 1 2011-07-01 2011-07-01 false Approval of local zoning ordinances. 28.15 Section 28.15 Parks, Forests, and Public Property NATIONAL PARK SERVICE, DEPARTMENT OF THE INTERIOR FIRE ISLAND NATIONAL SEASHORE: ZONING STANDARDS Federal Standards and Approval of Local Ordinances...

  15. 25 CFR 900.136 - Do tribal employment rights ordinances apply to construction contracts and subcontracts?

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 25 Indians 2 2010-04-01 2010-04-01 false Do tribal employment rights ordinances apply to... OF THE INTERIOR, AND INDIAN HEALTH SERVICE, DEPARTMENT OF HEALTH AND HUMAN SERVICES CONTRACTS UNDER... rights ordinances apply to construction contracts and subcontracts? Yes. Tribal employment rights...

  16. The Duluth Clean Indoor Air Ordinance: Problems and Success in Fighting the Tobacco Industry at the Local Level in the 21st Century

    PubMed Central

    Tsoukalas, Theodore; Glantz, Stanton A.

    2003-01-01

    Case study methodology was used to investigate the tobacco industry’s strategies to fight local tobacco control efforts in Duluth, Minn. The industry opposed the clean indoor air ordinance indirectly through allies and front groups and directly in a referendum. Health groups failed to win a strong ordinance because they framed it as a youth issue rather than a workplace issue and failed to engage the industry’s economic claims. Opponents’ overexploitation of weaknesses in the ordinance allowed health advocates to construct a stronger version. Health advocates should assume that the tobacco industry will oppose all local tobacco control measures indirectly, directly, or both. Clean indoor air ordinances should be framed as workplace safety issues. PMID:12893598

  17. Identification and Optimization of New Leads for Malaria Vector Control.

    PubMed

    Hueter, Ottmar F; Hoppé, Mark; Wege, Philip; Maienfisch, Peter

    2016-10-01

    A significant proportion of the world's population remains at risk from malaria, and whilst great progress has been made in reducing the number of malaria cases globally through the use of vector control insecticides, these gains are under threat from the emergence of insecticide resistance. The spread of resistance in the vector populations, principally to pyrethroids, is driving the need for the development of new tools for malaria vector control. To identify new leads, 30,000 compounds from the Syngenta corporate chemical collection were tested in a newly developed screening platform. More than 3000 compounds (10%) showed activity at ≤200 mg active ingredient (AI) per litre against Anopheles stephensi. Further evaluation resulted in the identification of 12 viable leads for the control of adult mosquitoes, most originating from current or former insecticide projects. Surprisingly, one of these leads emerged from a former PPO herbicide project and one from a former complex III fungicide project. This indicates that representatives of certain herbicide and fungicide projects and modes of action can also represent a valuable source of leads for malaria vector control. Optimization of the diphenyl ether lead 1 resulted in the identification of the cyano-pyridyl compound 31. Compound 31 exhibits good activity against mosquito species including rdl-resistant Anopheles. It is only slightly weaker than permethrin and does not show relevant levels of cross-resistance to the organochlorine insecticide dieldrin.

  18. Development of Response Spectral Ground Motion Prediction Equations from Empirical Models for Fourier Spectra and Duration of Ground Motion

    NASA Astrophysics Data System (ADS)

    Bora, S. S.; Scherbaum, F.; Kuehn, N. M.; Stafford, P.; Edwards, B.

    2014-12-01

    In a probabilistic seismic hazard assessment (PSHA) framework, it still remains a challenge to adjust ground motion prediction equations (GMPEs) for application in different seismological environments. In this context, this study presents a complete framework for the development of a response spectral GMPE that is easily adjustable to different seismological conditions and does not suffer from the technical problems associated with adjustment in the response spectral domain. Essentially, the approach consists of an empirical FAS (Fourier Amplitude Spectrum) model and a duration model for ground motion, which are combined within the random vibration theory (RVT) framework to obtain the full response spectral ordinates. Additionally, the FAS corresponding to individual acceleration records are extrapolated beyond the frequency range defined by the data using the stochastic FAS model obtained by inversion, as described in Edwards & Faeh (2013). To that end, an empirical duration model, tuned at each oscillator frequency to optimize the fit between RVT-based and observed response spectral ordinates, is derived. Although the main motive of the presented approach was to address the adjustability issues of response spectral GMPEs, comparison of median predicted response spectra with other regional models indicates that the presented approach can also be used as a stand-alone model. Moreover, a significantly lower aleatory variability (σ < 0.5 in log units) at shorter periods, in comparison to other regional models, makes it a potentially viable alternative to the classical regression (on response spectral ordinates) based GMPEs for seismic hazard studies in the near future. The dataset used for the presented analysis is a subset of the recently compiled database RESORCE-2012, covering Europe, the Middle East and the Mediterranean region.
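    The RVT combination described above (an FAS model plus a duration model yielding a response spectral ordinate) can be sketched as follows. This is a generic illustration, assuming the Cartwright-Longuet-Higgins peak factor and an invented single-corner FAS shape; it is not the authors' exact formulation.

```python
import numpy as np

def rvt_sa(freq, fas, duration, osc_freq, damping=0.05):
    """Toy RVT estimate of a damped pseudo-spectral ordinate from a Fourier
    amplitude spectrum (FAS) and a ground-motion duration (generic sketch)."""
    # single-degree-of-freedom oscillator transfer function applied to the FAS
    r = freq / osc_freq
    h = 1.0 / np.sqrt((1 - r ** 2) ** 2 + (2 * damping * r) ** 2)
    y = fas * h
    w = 2 * np.pi * freq
    dw = w[1] - w[0]
    m0 = 2 * np.sum(y ** 2) * dw            # spectral moment of order 0
    m2 = 2 * np.sum(w ** 2 * y ** 2) * dw   # spectral moment of order 2
    y_rms = np.sqrt(m0 / duration)          # rms oscillator response
    n_z = max(duration * np.sqrt(m2 / m0) / (2 * np.pi), 2.0)  # zero crossings
    ln2n = np.sqrt(2 * np.log(n_z))
    peak_factor = ln2n + 0.5772 / ln2n      # Cartwright-Longuet-Higgins form
    return peak_factor * y_rms

freq = np.linspace(0.05, 25.0, 2000)              # Hz
fas = 0.01 * freq / (1 + (freq / 5.0) ** 2)       # invented FAS shape
sa_1hz = rvt_sa(freq, fas, duration=10.0, osc_freq=1.0)
```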

  19. Diversion of mentally disordered people from the criminal justice system in England and Wales: An overview.

    PubMed

    James, David V

    2010-01-01

    The form that diversion mechanisms take in a given jurisdiction will be influenced both by mental health law and sentencing policies, and by the structure of criminal justice and health care systems. In England and Wales, treatment in hospital in lieu of any other sentence is available as a disposal option following a finding of guilt. In addition, there is a National Health Service, free at the point of delivery, the existence of which creates the potential for a co-ordinated nationwide response to mental disorder within the criminal justice system. In recent years, the National Health Service has taken over the delivery of health care in prisons, including psychiatric services, with the principle being one of equivalence between the quality of health provision provided in the community and that provided in prisons. However, problems within the system dictate that an important place remains for add-on diversion initiatives at courts and police stations, which aim to circumvent some of the delays in dealing with mentally disordered people or to prevent them entering the criminal justice system in the first place. It has been demonstrated that such mechanisms can be highly effective, and a government-sponsored review in 1992 recommended their general adoption. A lack of central co-ordination determined that progress was very slow. A new government-commissioned report in 2009 set out detailed recommendations for reform throughout the system. It laid emphasis on a co-ordinated response at all levels and between all agencies, and placed importance on linking initiatives with community services and with preventative measures, including attention to the effects of social exclusion. Some grounds for optimism exist, although there are particular problems in implementing change at a time of financial austerity. Copyright 2010 Elsevier Ltd. All rights reserved.

  20. Re-engineering adenovirus vector systems to enable high-throughput analyses of gene function.

    PubMed

    Stanton, Richard J; McSharry, Brian P; Armstrong, Melanie; Tomasec, Peter; Wilkinson, Gavin W G

    2008-12-01

    With the enhanced capacity of bioinformatics to interrogate extensive banks of sequence data, more efficient technologies are needed to test gene function predictions. Replication-deficient recombinant adenovirus (Ad) vectors are widely used in expression analysis since they provide for extremely efficient expression of transgenes in a wide range of cell types. To facilitate rapid, high-throughput generation of recombinant viruses, we have re-engineered an adenovirus vector (designated AdZ) to allow single-step, directional gene insertion using recombineering technology. Recombineering allows for direct insertion into the Ad vector of PCR products, synthesized sequences, or oligonucleotides encoding shRNAs without requirement for a transfer vector Vectors were optimized for high-throughput applications by making them "self-excising" through incorporating the I-SceI homing endonuclease into the vector removing the need to linearize vectors prior to transfection into packaging cells. AdZ vectors allow genes to be expressed in their native form or with strep, V5, or GFP tags. Insertion of tetracycline operators downstream of the human cytomegalovirus major immediate early (HCMV MIE) promoter permits silencing of transgenes in helper cells expressing the tet repressor thus making the vector compatible with the cloning of toxic gene products. The AdZ vector system is robust, straightforward, and suited to both sporadic and high-throughput applications.

  1. Services and supports for young children with Down syndrome: parent and provider perspectives.

    PubMed

    Marshall, J; Tanner, J P; Kozyr, Y A; Kirby, R S

    2015-05-01

    As individuals with Down syndrome are living longer and more socially connected lives, early access to supports and services for their parents will ensure an optimal start and improved outcomes. The family's journey begins at the child's diagnosis, and cumulative experiences throughout infancy and childhood set the tone for a lifetime of decisions made by the family regarding services, supports and activities. This study utilized focus groups and interviews with seven nurses, five therapists, 25 service co-ordinators, and 10 English- and three Spanish-speaking parents to better understand family experiences and perceptions on accessing Down syndrome-related perinatal, infant and childhood services and supports. Parents and providers reflected on key early life issues for children with Down syndrome and their families in five areas: prenatal diagnosis; perinatal care; medical and developmental services; care co-ordination and services; and social and community support. Systems of care are not consistently prepared to provide appropriate family-centred services to individuals with Down syndrome and their families. Individuals with disabilities require formal and informal supports from birth to achieve and maintain a high quality of life. © 2014 John Wiley & Sons Ltd.

  2. A new paradigm for improved co-ordination and efficacy of European biomedical research: taking diabetes as a model.

    PubMed

    Halban, P A; Boulton, A J M; Smith, U

    2013-03-01

    Today, European biomedical and health-related research is insufficiently well funded and is fragmented, with no common vision, less-than-optimal sharing of resources, and inadequate support and training in clinical research. Improvements to the competitiveness of European biomedical research will depend on the creation of new infrastructures that must be dynamic and free of bureaucracy, involve all stakeholders and facilitate faster delivery of new discoveries from bench to bedside. Taking diabetes research as the model, a new paradigm for European biomedical research is presented, which offers improved co-ordination and common resources that will benefit both academic and industrial clinical research. This includes the creation of a European Council for Health Research, first proposed by the Alliance for Biomedical Research in Europe, which will bring together and consult with all health stakeholders to develop strategic and multidisciplinary research programmes addressing the full innovation cycle. A European Platform for Clinical Research in Diabetes is proposed by the Alliance for European Diabetes Research (EURADIA) in response to the special challenges and opportunities presented by research across the European region, with the need for common standards and shared expertise and data.

  3. Prediction of spectral acceleration response ordinates based on PGA attenuation

    USGS Publications Warehouse

    Graizer, V.; Kalkan, E.

    2009-01-01

    Developed herein is a new peak ground acceleration (PGA)-based predictive model for 5% damped pseudospectral acceleration (SA) ordinates of the free-field horizontal component of ground motion from shallow-crustal earthquakes. The predictive model of ground motion spectral shape (i.e., normalized spectrum) is generated as a continuous function of a few parameters. The proposed model eliminates the classical exhaustive matrix of estimator coefficients, and provides significant ease in its implementation. It is structured on the Next Generation Attenuation (NGA) database with a number of additions from recent Californian events including the 2003 San Simeon and 2004 Parkfield earthquakes. A unique feature of the model is its new functional form explicitly integrating PGA as a scaling factor. The spectral shape model is parameterized within an approximation function using moment magnitude, closest distance to the fault (fault distance) and VS30 (average shear-wave velocity in the upper 30 m) as independent variables. Mean values of its estimator coefficients were computed by fitting an approximation function to the spectral shape of each record using robust nonlinear optimization. The proposed spectral shape model is independent of the PGA attenuation, allowing utilization of various PGA attenuation relations to estimate the response spectrum of earthquake recordings.
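    The robust nonlinear fitting step described above (fitting an approximation function to each record's spectral shape) can be sketched as follows, using a hypothetical lognormal-bump shape function on synthetic data; the paper's actual functional form and its dependence on magnitude, distance and VS30 are not reproduced here.

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical spectral-shape function: normalized SA(T)/PGA as a lognormal
# bump over period T (stand-in for the paper's approximation function).
def shape(T, mu, sigma, amp):
    return 1.0 + amp * np.exp(-0.5 * ((np.log(T) - mu) / sigma) ** 2)

rng = np.random.default_rng(0)
T = np.logspace(-2, 1, 80)                       # periods in seconds
obs = shape(T, -1.2, 0.7, 1.5) * (1 + 0.05 * rng.standard_normal(T.size))

# Robust nonlinear fit of the shape parameters to one record's spectral shape
res = least_squares(lambda p: shape(T, *p) - obs,
                    x0=[-0.5, 0.8, 1.0], loss='soft_l1')
mu_hat, sigma_hat, amp_hat = res.x
```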

  4. Multi Objective Controller Design for Linear System via Optimal Interpolation

    NASA Technical Reports Server (NTRS)

    Ozbay, Hitay

    1996-01-01

    We propose a methodology for the design of a controller which satisfies a set of closed-loop objectives simultaneously. The set of objectives consists of: (1) pole placement, (2) decoupled command tracking of step inputs at steady-state, and (3) minimization of step response transients with respect to envelope specifications. We first obtain a characterization of all controllers placing the closed-loop poles in a prescribed region of the complex plane. In this characterization, the free parameter matrix Q(s) is to be determined to attain objectives (2) and (3). Objective (2) is expressed as determining a Pareto optimal solution to a vector valued optimization problem. The solution of this problem is obtained by transforming it to a scalar convex optimization problem. This solution determines Q(0), and the remaining freedom in choosing Q(s) is used to satisfy objective (3). We write Q(s) = (1/v(s))bar-Q(s) for a prescribed polynomial v(s). Bar-Q(s) is a polynomial matrix which is arbitrary except that Q(0) and the order of bar-Q(s) are fixed. Obeying these constraints, bar-Q(s) is now to be 'shaped' to minimize the step response characteristics of specific input/output pairs according to the maximum envelope violations. This problem is expressed as a vector valued optimization problem using the concept of Pareto optimality. We then investigate a scalar optimization problem associated with this vector valued problem and show that it is convex. The organization of the report is as follows. The next section includes some definitions and preliminary lemmas. We then give the problem statement, which is followed by a section including a detailed development of the design procedure. We then consider an aircraft control example. The last section gives some concluding remarks. The Appendix includes the proofs of technical lemmas, printouts of computer programs, and figures.
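    The reduction of a vector-valued Pareto problem to a scalar convex problem can be illustrated with a weighted-sum scalarization on two toy convex objectives; these objectives are invented stand-ins, not the report's envelope criteria.

```python
import numpy as np
from scipy.optimize import minimize

# Two toy convex objectives standing in for competing design criteria
f1 = lambda x: (x[0] - 1.0) ** 2 + x[1] ** 2
f2 = lambda x: x[0] ** 2 + (x[1] - 1.0) ** 2

def pareto_point(w):
    """Weighted-sum scalarization: for convex objectives, the minimizer of
    w*f1 + (1-w)*f2 is a Pareto optimal point of the vector problem (f1, f2)."""
    return minimize(lambda x: w * f1(x) + (1 - w) * f2(x), x0=[0.0, 0.0]).x

x_half = pareto_point(0.5)   # balanced trade-off between the two objectives
```

    Sweeping the weight w over (0, 1) traces out a family of Pareto optimal designs, from which a preferred trade-off can be selected.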

  5. Automation of POST Cases via External Optimizer and "Artificial p2" Calculation

    NASA Technical Reports Server (NTRS)

    Dees, Patrick D.; Zwack, Mathew R.

    2017-01-01

    During early conceptual design of complex systems, speed and accuracy are often at odds with one another. While many characteristics of the design are fluctuating rapidly during this phase there is nonetheless a need to acquire accurate data from which to down-select designs as these decisions will have a large impact upon program life-cycle cost. Therefore enabling the conceptual designer to produce accurate data in a timely manner is tantamount to program viability. For conceptual design of launch vehicles, trajectory analysis and optimization is a large hurdle. Tools such as the industry standard Program to Optimize Simulated Trajectories (POST) have traditionally required an expert in the loop for setting up inputs, running the program, and analyzing the output. The solution space for trajectory analysis is in general non-linear and multi-modal requiring an experienced analyst to weed out sub-optimal designs in pursuit of the global optimum. While an experienced analyst presented with a vehicle similar to one which they have already worked on can likely produce optimal performance figures in a timely manner, as soon as the "experienced" or "similar" adjectives are invalid the process can become lengthy. In addition, an experienced analyst working on a similar vehicle may go into the analysis with preconceived ideas about what the vehicle's trajectory should look like which can result in sub-optimal performance being recorded. Thus, in any case but the ideal either time or accuracy can be sacrificed. In the authors' previous work a tool called multiPOST was created which captures the heuristics of a human analyst over the process of executing trajectory analysis with POST. However without the instincts of a human in the loop, this method relied upon Monte Carlo simulation to find successful trajectories. 
Overall the method has mixed results, and in the context of optimizing multiple vehicles it is inefficient in comparison to the method presented here. POST's internal optimizer functions like any other gradient-based optimizer. It has a specified variable to optimize whose value is represented as optval, a set of dependent constraints to meet with associated forms and tolerances whose value is represented as p2, and a set of independent variables known as the u-vector to modify in pursuit of optimality. Each of these quantities is calculated or manipulated at a certain phase within the trajectory. The optimizer is further constrained by the requirement that the input u-vector must result in a trajectory which proceeds through each of the prescribed events in the input file. For example, if the input u-vector causes the vehicle to crash before it can achieve the orbital parameters required for a parking orbit, then the run will fail without engaging the optimizer, and a p2 value of exactly zero is returned. This poses a problem, as this "non-connecting" region of the u-vector space is far larger than the "connecting" region which returns a non-zero value of p2 and can be worked on by the internal optimizer. Finding this connecting region, and more specifically the global optimum within this region, has traditionally required the use of an expert analyst.
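    The Monte Carlo search for the "connecting" region of the u-vector space can be sketched with a toy stand-in for a POST run, in which the connecting region is an invented ball in a 2-D u-space and p2 is reported as exactly zero outside it:

```python
import numpy as np

rng = np.random.default_rng(1)

def run_trajectory(u):
    """Stand-in for a POST run: returns (connected, p2). The 'connecting'
    region here is a hypothetical ball around u = (0.3, 0.7); outside it the
    simulated trajectory fails and p2 comes back as exactly zero."""
    d = np.linalg.norm(u - np.array([0.3, 0.7]))
    if d > 0.25:
        return False, 0.0          # non-connecting: optimizer never engages
    return True, d ** 2            # connecting: p2 measures constraint error

# Monte Carlo sampling over the u-vector space to locate the connecting region
best_u, best_p2 = None, np.inf
for _ in range(2000):
    u = rng.uniform(0.0, 1.0, size=2)
    ok, p2 = run_trajectory(u)
    if ok and p2 < best_p2:
        best_u, best_p2 = u, p2
```

    Once a connecting sample is found, it can be handed to the internal gradient-based optimizer as a starting u-vector.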

  6. Numerical and Non-Numerical Ordinality Processing in Children with and without Developmental Dyscalculia: Evidence from fMRI

    ERIC Educational Resources Information Center

    Kaufmann, L.; Vogel, S. E.; Starke, M.; Kremser, C.; Schocke, M.

    2009-01-01

    Ordinality is--beyond numerical magnitude (i.e., quantity)--an important characteristic of the number system. There is converging empirical evidence that (intra)parietal brain regions mediate number magnitude processing. Furthermore, recent findings suggest that the human intraparietal sulcus (IPS) supports magnitude and ordinality in a…

  7. 25 CFR 522.1 - Scope of this part.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... INDIAN GAMING COMMISSION, DEPARTMENT OF THE INTERIOR APPROVAL OF CLASS II AND CLASS III ORDINANCES AND RESOLUTIONS SUBMISSION OF GAMING ORDINANCE OR RESOLUTION § 522.1 Scope of this part. This part applies to any gaming ordinance or resolution adopted by a tribe after February 22, 1993. Part 523 of this chapter...

  8. Land and Liberty: The Ordinances of the 1780s.

    ERIC Educational Resources Information Center

    Sheehan, Bernard W.

    The U.S. Constitution established the broad legal frame for the U.S. political order; the ordinances provided the indispensable means for the expansion of that order across the continent. The first effort at organizing the northwest occurred in 1784. Written by Thomas Jefferson, the Ordinance of 1784 defined the stages through which territories…

  9. Educational Legislation in Colonial Zimbabwe (1899-1979)

    ERIC Educational Resources Information Center

    Richards, Kimberly; Govere, Ephraim

    2003-01-01

    This article focuses on a historical series of education acts that impacted on education in Rhodesia. These Acts are the: (1) 1899 Education Ordinance; (2) 1903 Education Ordinance; (3) 1907 Education Ordinance; (4) 1929 Department of Native Development Act; (5) 1930 Compulsory Education Act; (6) 1959 African Education Act; (7) 1973 Education Act;…

  10. Content based image retrieval using local binary pattern operator and data mining techniques.

    PubMed

    Vatamanu, Oana Astrid; Frandeş, Mirela; Lungeanu, Diana; Mihalaş, Gheorghe-Ioan

    2015-01-01

    Content based image retrieval (CBIR) concerns the retrieval of similar images from image databases, using feature vectors extracted from images. These feature vectors globally describe the visual content present in an image, e.g., its texture, colour, shape, and spatial relations. Herein, we propose the definition of feature vectors using the Local Binary Pattern (LBP) operator. A study was performed in order to determine the optimum LBP variant for the general definition of image feature vectors. The chosen LBP variant is then subsequently used to build an ultrasound image database, and a database with images obtained from Wireless Capsule Endoscopy. The image indexing process is optimized using data clustering techniques for images belonging to the same class. Finally, the proposed indexing method is compared to the classical indexing technique, which is nowadays widely used.
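    A minimal sketch of the basic LBP operator used to build such feature vectors (the classic 8-neighbour, 3x3 variant; the paper evaluates several LBP variants beyond this one):

```python
import numpy as np

def lbp_code(patch):
    """Basic 8-neighbour LBP code for a 3x3 patch: threshold the neighbours
    against the centre pixel and read them as an 8-bit number (clockwise
    from the top-left neighbour)."""
    c = patch[1, 1]
    nb = [patch[0, 0], patch[0, 1], patch[0, 2], patch[1, 2],
          patch[2, 2], patch[2, 1], patch[2, 0], patch[1, 0]]
    return sum(int(v >= c) << i for i, v in enumerate(nb))

def lbp_histogram(img):
    """Normalized 256-bin LBP histogram, usable as a global texture feature
    vector for image retrieval."""
    h = np.zeros(256, dtype=float)
    for r in range(1, img.shape[0] - 1):
        for col in range(1, img.shape[1] - 1):
            h[lbp_code(img[r - 1:r + 2, col - 1:col + 2])] += 1
    return h / h.sum()

patch = np.array([[10, 10, 10],
                  [10, 20, 10],
                  [10, 10, 10]])
code = lbp_code(patch)   # all neighbours below the centre -> code 0
```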

  11. Knowledge, Attitude and Practices of Vector-Borne Disease Prevention during the Emergence of a New Arbovirus: Implications for the Control of Chikungunya Virus in French Guiana.

    PubMed

    Fritzell, Camille; Raude, Jocelyn; Adde, Antoine; Dusfour, Isabelle; Quenel, Philippe; Flamand, Claude

    2016-11-01

    During the last decade, French Guiana has been affected by major dengue fever outbreaks. Although this arbovirus has been a focus of many awareness campaigns, very little information is available about beliefs, attitudes and behaviors regarding vector-borne diseases among the population of French Guiana. During the first outbreak of the chikungunya virus, a quantitative survey was conducted among high school students to study experiences, practices and perceptions related to mosquito-borne diseases and to identify socio-demographic, cognitive and environmental factors that could be associated with the engagement in protective behaviors. A cross-sectional survey was administered in May 2014, with a total of 1462 students interviewed. Classrooms were randomly selected using a two-stage selection procedure with cluster samples. A multiple correspondence analysis (MCA) associated with a hierarchical cluster analysis and with an ordinal logistic regression was performed. Chikungunya was less understood and perceived as a more dreadful disease than dengue fever. The analysis identified three groups of individual protection levels against mosquito-borne diseases: "low" (30%), "moderate" (42%) and "high" (28%). Protective health behaviors were found to be performed more frequently among students who were female, had a parent with a higher educational status, lived in an individual house, and had a better understanding of the disease. This study allowed us to estimate the level of protective practices against vector-borne diseases among students after the emergence of a new arbovirus. These results revealed that the adoption of protective behaviors is a multi-factorial process that depends on both sociocultural and cognitive factors. These findings may help public health authorities to strengthen communication and outreach strategies, thereby increasing the adoption of protective health behaviors, particularly in high-risk populations.
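    The ordinal logistic regression mentioned above models cumulative probabilities over the ordered protection levels. A minimal proportional-odds sketch, with invented coefficients and cutpoints for illustration (not the study's fitted values):

```python
import numpy as np

def ordinal_probs(x, beta, cutpoints):
    """Proportional-odds model: P(Y <= k | x) = sigmoid(c_k - beta.x) with
    ordered cutpoints c_1 < ... < c_{K-1}; returns the K class probabilities
    (here K = 3, matching low/moderate/high protection)."""
    eta = np.dot(beta, x)
    cum = 1.0 / (1.0 + np.exp(-(np.asarray(cutpoints) - eta)))
    cum = np.concatenate([cum, [1.0]])
    return np.diff(np.concatenate([[0.0], cum]))

# Hypothetical covariates: [female, parental education, individual house, knowledge]
beta = np.array([0.4, 0.3, 0.2, 0.6])
p_high = ordinal_probs(np.ones(4), beta, cutpoints=[0.5, 2.0])
p_low = ordinal_probs(np.zeros(4), beta, cutpoints=[0.5, 2.0])
```

    With all protective covariates present, probability mass shifts toward the "high" protection class, which is the qualitative pattern the study reports.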

  12. HYBRID NEURAL NETWORK AND SUPPORT VECTOR MACHINE METHOD FOR OPTIMIZATION

    NASA Technical Reports Server (NTRS)

    Rai, Man Mohan (Inventor)

    2005-01-01

    System and method for optimization of a design associated with a response function, using a hybrid neural net and support vector machine (NN/SVM) analysis to minimize or maximize an objective function, optionally subject to one or more constraints. As a first example, the NN/SVM analysis is applied iteratively to design of an aerodynamic component, such as an airfoil shape, where the objective function measures deviation from a target pressure distribution on the perimeter of the aerodynamic component. As a second example, the NN/SVM analysis is applied to data classification of a sequence of data points in a multidimensional space. The NN/SVM analysis is also applied to data regression.
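    The patented hybrid combines neural-network and SVM analysis; as a simplified stand-in for that idea, surrogate-based optimization can be sketched with a single SVM regression model fitted to samples of an expensive objective and then minimized cheaply on a grid. The objective, sample counts and hyperparameters here are invented for illustration.

```python
import numpy as np
from sklearn.svm import SVR

# Expensive objective stand-in (e.g. deviation from a target pressure
# distribution, which in the patent would require a flow-solver run)
def objective(x):
    return (x - 0.6) ** 2

rng = np.random.default_rng(2)
X = rng.uniform(0.0, 1.0, size=(40, 1))      # sampled designs
y = objective(X[:, 0])

# Fit an SVM regression surrogate of the response function, then minimize
# the cheap surrogate on a dense grid instead of the expensive objective
surrogate = SVR(kernel='rbf', C=100.0, epsilon=0.001).fit(X, y)
grid = np.linspace(0.0, 1.0, 401).reshape(-1, 1)
x_best = grid[np.argmin(surrogate.predict(grid)), 0]
```

    In an iterative scheme, x_best would be evaluated with the true objective and added to the training set before refitting the surrogate.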

  13. Hybrid Neural Network and Support Vector Machine Method for Optimization

    NASA Technical Reports Server (NTRS)

    Rai, Man Mohan (Inventor)

    2007-01-01

    System and method for optimization of a design associated with a response function, using a hybrid neural net and support vector machine (NN/SVM) analysis to minimize or maximize an objective function, optionally subject to one or more constraints. As a first example, the NN/SVM analysis is applied iteratively to design of an aerodynamic component, such as an airfoil shape, where the objective function measures deviation from a target pressure distribution on the perimeter of the aerodynamic component. As a second example, the NN/SVM analysis is applied to data classification of a sequence of data points in a multidimensional space. The NN/SVM analysis is also applied to data regression.

  14. Guiding automated left ventricular chamber segmentation in cardiac imaging using the concept of conserved myocardial volume.

    PubMed

    Garson, Christopher D; Li, Bing; Acton, Scott T; Hossack, John A

    2008-06-01

    The active surface technique using gradient vector flow allows semi-automated segmentation of ventricular borders. The accuracy of the algorithm depends on the optimal selection of several key parameters. We investigated the use of conservation of myocardial volume for quantitative assessment of each of these parameters using synthetic and in vivo data. We predicted that for a given set of model parameters, strong conservation of volume would correlate with accurate segmentation. The metric was most useful when applied to the gradient vector field weighting and temporal step-size parameters, but less effective in guiding an optimal choice of the active surface tension and rigidity parameters.
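    The conserved-volume metric can be illustrated with a toy spherical-shell geometry standing in for the segmented myocardium; a well-tuned segmentation keeps the shell volume nearly constant across the cycle, while a poorly tuned one does not. The geometry and numbers below are invented.

```python
import numpy as np

def shell_volume(r_endo, r_epi):
    """Myocardial volume of an idealized spherical shell (toy stand-in for
    the segmented endo/epicardial surfaces)."""
    return 4.0 / 3.0 * np.pi * (r_epi ** 3 - r_endo ** 3)

def conservation_metric(volumes):
    """Coefficient of variation of myocardial volume across the cycle:
    near zero when the segmentation conserves volume."""
    v = np.asarray(volumes, dtype=float)
    return v.std() / v.mean()

# Incompressible toy cycle: the endocardial radius contracts while the
# epicardial radius adjusts to keep shell volume fixed
r_endo = np.linspace(2.0, 1.5, 10)
v0 = shell_volume(2.0, 3.0)
r_epi = (r_endo ** 3 + v0 * 3.0 / (4.0 * np.pi)) ** (1.0 / 3.0)
good = conservation_metric(shell_volume(r_endo, r_epi))

# A segmentation that freezes the epicardial surface violates conservation
bad = conservation_metric(shell_volume(r_endo, np.full(10, 3.0)))
```

    Comparing such metrics across parameter settings is the essence of using conservation of volume to guide parameter choice.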

  15. Prediction on sunspot activity based on fuzzy information granulation and support vector machine

    NASA Astrophysics Data System (ADS)

    Peng, Lingling; Yan, Haisheng; Yang, Zhigang

    2018-04-01

    In order to analyze the range of sunspot activity, a combined prediction method for forecasting the fluctuation range of sunspots based on fuzzy information granulation (FIG) and support vector machine (SVM) was put forward. Firstly, FIG is employed to granulate the sample data and extract the valid information of each window, namely the minimum value, the general average value and the maximum value of each window. Secondly, a forecasting model is built with SVM for each of these granule series, and cross-validation is used to optimize the model parameters. Finally, the fluctuation range of sunspots is forecasted with the optimized SVM model. A case study demonstrates that the model has high accuracy and can effectively predict the fluctuation of sunspots.
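    The FIG-plus-SVM scheme can be sketched as: granulate the series into window minima, means and maxima, then fit a separate SVR forecaster to each granule series. The window width, lag count and synthetic "sunspot-like" series below are invented for illustration.

```python
import numpy as np
from sklearn.svm import SVR

def granulate(series, width):
    """Simplified fuzzy-information-granulation step: split the series into
    windows and keep each window's minimum, mean and maximum."""
    w = series[:len(series) // width * width].reshape(-1, width)
    return w.min(axis=1), w.mean(axis=1), w.max(axis=1)

# Synthetic cyclic series standing in for sunspot numbers
t = np.arange(600)
series = (80 + 60 * np.sin(2 * np.pi * t / 132)
          + 5 * np.random.default_rng(3).standard_normal(600))

lo, mid, hi = granulate(series, width=12)

def forecast_next(vals, lag=4):
    """One-step-ahead SVR forecast from the previous `lag` granule values."""
    X = np.array([vals[i:i + lag] for i in range(len(vals) - lag)])
    y = vals[lag:]
    model = SVR(kernel='rbf', C=100.0).fit(X, y)
    return model.predict(vals[-lag:].reshape(1, -1))[0]

next_lo, next_hi = forecast_next(lo), forecast_next(hi)   # forecast range
```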

  16. The new immigration contestation: social movements and local immigration policy making in the United States, 2000-2011.

    PubMed

    Steil, Justin Peter; Vasi, Ion Bogdan

    2014-01-01

    Analyzing oppositional social movements in the context of municipal immigration ordinances, the authors examine whether the explanatory power of resource mobilization, political process, and strain theories of social movements' impact on policy outcomes differs when considering proactive as opposed to reactive movements. The adoption of pro-immigrant (proactive) ordinances was facilitated by the presence of immigrant community organizations and of sympathetic local political allies. The adoption of anti-immigrant (reactive) ordinances was influenced by structural social changes, such as rapid increases in the local Latino population, that were framed as threats. The study also finds that pro-immigrant protest events can influence policy in two ways, contributing both to the passage of pro-immigrant ordinances in the locality where protests occur and also inhibiting the passage of anti-immigrant ordinances in neighboring cities.

  17. OPTIMIZATION METHODOLOGY FOR LAND USE PATTERNS-EVALUATION BASED ON MULTISCALE HABITAT PATTERN COMPARISON. (R827169)

    EPA Science Inventory

    In this paper, the methodological concept of landscape optimization presented by Seppelt and Voinov [Ecol. Model. 151 (2/3) (2002) 125] is analyzed. Two aspects are chosen for detailed study. First, we generalize the performance criterion to assess a vector of ecosystem functi...

  18. A simple method to determine IgG light chain to heavy chain polypeptide ratios expressed by CHO cells.

    PubMed

    Gerster, Anja; Wodarczyk, Claas; Reichenbächer, Britta; Köhler, Janet; Schulze, Andreas; Krause, Felix; Müller, Dethardt

    2016-12-01

    To establish a high-throughput method for determination of antibodies' intra- and extracellular light chain (LC) to heavy chain (HC) polypeptide ratios as a screening parameter during cell line development. Chinese Hamster Ovary (CHO) TurboCell pools containing differently designed vectors expected to result in different LC:HC polypeptide ratios were generated by targeted integration. Cell culture supernatants and cell lysates from a fed-batch experiment were purified by combined Protein A and anti-kappa affinity batch purification in 96-well format. Capture of all antibodies and their fragments allowed the determination of the intra- and extracellular LC:HC polypeptide ratios by reduced SDS capillary electrophoresis. The results demonstrate that the method is suitable to show the significant impact of vector design on the intra- and extracellular LC:HC polypeptide ratios. Determination of LC:HC polypeptide ratios can provide important information for vector design optimization, leading to CHO cell lines with optimized antibody assembly and preferred product quality.
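    Converting reduced CE-SDS peak areas into a molar LC:HC ratio is simple arithmetic; the sketch below assumes (hypothetically) that corrected peak area is proportional to chain mass, and the ~23 kDa (LC) and ~50 kDa (HC) masses are typical antibody chain values, not figures from the paper.

```python
def lc_hc_ratio(lc_area, hc_area, lc_mass_kda=23.0, hc_mass_kda=50.0):
    """Toy molar LC:HC ratio from mass-proportional peak areas: convert each
    area to a molar amount by dividing by the chain mass, then take the ratio."""
    return (lc_area / lc_mass_kda) / (hc_area / hc_mass_kda)

# A balanced IgG (two LCs, two HCs) gives a molar ratio of ~1.0;
# excess free light chain pushes the ratio above 1
r_balanced = lc_hc_ratio(lc_area=4.6, hc_area=10.0)
r_excess_lc = lc_hc_ratio(lc_area=9.2, hc_area=10.0)
```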

  19. Efficient boundary hunting via vector quantization

    NASA Astrophysics Data System (ADS)

    Diamantini, Claudia; Panti, Maurizio

    2001-03-01

    A great amount of information about a classification problem is contained in those instances falling near the decision boundary. This intuition dates back to the earliest studies in pattern recognition, and to the more recent adaptive approaches to so-called boundary hunting, such as the work of Aha et al. on Instance Based Learning and the work of Vapnik et al. on Support Vector Machines. The latter work is of particular interest, since theoretical and experimental results ensure the accuracy of boundary reconstruction. However, its optimization approach has heavy computational and memory requirements, which limits its application to huge amounts of data. In the paper we describe an alternative approach to boundary hunting based on adaptive labeled quantization architectures. The adaptation is performed by a stochastic gradient algorithm for the minimization of the error probability. Error probability minimization guarantees the accurate approximation of the optimal decision boundary, while the use of a stochastic gradient algorithm defines an efficient method to reach such approximation. In the paper comparisons to Support Vector Machines are considered.
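    The adaptive labeled quantization idea can be illustrated with the classic LVQ1 stochastic update, a simpler relative of the paper's error-probability gradient scheme: the nearest prototype is pulled toward same-class samples and pushed away otherwise. The data and settings below are invented.

```python
import numpy as np

def lvq1_train(X, y, prototypes, proto_labels, lr=0.05, epochs=30, seed=0):
    """LVQ1: stochastic prototype updates that move labelled prototypes
    toward their own class and away from others, so the quantizer adapts
    toward the decision boundary region."""
    rng = np.random.default_rng(seed)
    P = prototypes.copy()
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            j = np.argmin(np.linalg.norm(P - X[i], axis=1))  # winner prototype
            sign = 1.0 if proto_labels[j] == y[i] else -1.0
            P[j] += sign * lr * (X[i] - P[j])
    return P

# Two well-separated 1-D Gaussian classes; prototypes start near the middle
rng = np.random.default_rng(4)
X = np.concatenate([rng.normal(-2, 0.5, 100),
                    rng.normal(2, 0.5, 100)]).reshape(-1, 1)
y = np.array([0] * 100 + [1] * 100)
P = lvq1_train(X, y, prototypes=np.array([[-0.5], [0.5]]), proto_labels=[0, 1])
```

    After training, the prototypes have drifted toward the two class clusters, and their perpendicular bisector approximates the decision boundary.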

  20. Tuning the cache memory usage in tomographic reconstruction on standard computers with Advanced Vector eXtensions (AVX)

    PubMed Central

    Agulleiro, Jose-Ignacio; Fernandez, Jose-Jesus

    2015-01-01

    Cache blocking is a technique widely used in scientific computing to minimize the exchange of information with main memory by reusing the data kept in cache memory. In tomographic reconstruction on standard computers using vector instructions, cache blocking turns out to be central to optimize performance. To this end, sinograms of the tilt-series and slices of the volumes to be reconstructed have to be divided into small blocks that fit into the different levels of cache memory. The code is then reorganized so as to operate with a block as much as possible before proceeding with another one. This data article is related to the research article titled Tomo3D 2.0 – Exploitation of Advanced Vector eXtensions (AVX) for 3D reconstruction (Agulleiro and Fernandez, 2015) [1]. Here we present data of a thorough study of the performance of tomographic reconstruction by varying cache block sizes, which allows derivation of expressions for their automatic quasi-optimal tuning. PMID:26217710

  1. Tuning the cache memory usage in tomographic reconstruction on standard computers with Advanced Vector eXtensions (AVX).

    PubMed

    Agulleiro, Jose-Ignacio; Fernandez, Jose-Jesus

    2015-06-01

    Cache blocking is a technique widely used in scientific computing to minimize the exchange of information with main memory by reusing the data kept in cache memory. In tomographic reconstruction on standard computers using vector instructions, cache blocking turns out to be central to optimize performance. To this end, sinograms of the tilt-series and slices of the volumes to be reconstructed have to be divided into small blocks that fit into the different levels of cache memory. The code is then reorganized so as to operate with a block as much as possible before proceeding with another one. This data article is related to the research article titled Tomo3D 2.0 - Exploitation of Advanced Vector eXtensions (AVX) for 3D reconstruction (Agulleiro and Fernandez, 2015) [1]. Here we present data of a thorough study of the performance of tomographic reconstruction by varying cache block sizes, which allows derivation of expressions for their automatic quasi-optimal tuning.
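The loop structure behind cache blocking can be illustrated with a blocked 2-D transpose (a generic sketch of the technique, not the Tomo3D code; Python will not show the actual speedup, the point is how the iteration space is tiled into cache-sized blocks):

```python
import numpy as np

def blocked_transpose(a, block=64):
    """Transpose a 2D array tile by tile.

    Each (block x block) tile is read and written while it still fits in
    cache memory; the result is identical to a plain transpose.
    """
    n, m = a.shape
    out = np.empty((m, n), dtype=a.dtype)
    for i in range(0, n, block):
        for j in range(0, m, block):
            tile = a[i:i + block, j:j + block]    # one cache-sized tile
            out[j:j + block, i:i + block] = tile.T
    return out

a = np.arange(300 * 200).reshape(300, 200)
assert np.array_equal(blocked_transpose(a, block=32), a.T)
```

In a compiled implementation the block size would be tuned to the L1/L2 cache capacities, which is exactly the parameter the data article studies.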

  2. Comparative seroprevalence and immunogenicity of six rare serotype recombinant adenovirus vaccine vectors from subgroups B and D.

    PubMed

    Abbink, Peter; Lemckert, Angelique A C; Ewald, Bonnie A; Lynch, Diana M; Denholtz, Matthew; Smits, Shirley; Holterman, Lennart; Damen, Irma; Vogels, Ronald; Thorner, Anna R; O'Brien, Kara L; Carville, Angela; Mansfield, Keith G; Goudsmit, Jaap; Havenga, Menzo J E; Barouch, Dan H

    2007-05-01

    Recombinant adenovirus serotype 5 (rAd5) vector-based vaccines are currently being developed for both human immunodeficiency virus type 1 and other pathogens. The potential limitations associated with rAd5 vectors, however, have led to the construction of novel rAd vectors derived from rare Ad serotypes. Several rare serotype rAd vectors have already been described, but a detailed comparison of multiple rAd vectors from subgroups B and D has not previously been reported. Such a comparison is critical for selecting optimal rAd vectors for advancement into clinical trials. Here we describe the construction of three novel rAd vector systems from Ad26, Ad48, and Ad50. We report comparative seroprevalence and immunogenicity studies involving rAd11, rAd35, and rAd50 vectors from subgroup B; rAd26, rAd48, and rAd49 vectors from subgroup D; and rAd5 vectors from subgroup C. All six rAd vectors from subgroups B and D exhibited low seroprevalence in a cohort of 200 individuals from sub-Saharan Africa, and they elicited Gag-specific cellular immune responses in mice both with and without preexisting anti-Ad5 immunity. The rAd vectors from subgroup D were also evaluated using rhesus monkeys and were shown to be immunogenic after a single injection. The rAd26 vectors proved the most immunogenic among the rare serotype rAd vectors studied, although all rare serotype rAd vectors were still less potent than rAd5 vectors in the absence of anti-Ad5 immunity. These studies substantially expand the portfolio of rare serotype rAd vectors that may prove useful as vaccine vectors for the developing world.

  3. The p40 Subunit of Interleukin (IL)-12 Promotes Stabilization and Export of the p35 Subunit

    PubMed Central

    Jalah, Rashmi; Rosati, Margherita; Ganneru, Brunda; Pilkington, Guy R.; Valentin, Antonio; Kulkarni, Viraj; Bergamaschi, Cristina; Chowdhury, Bhabadeb; Zhang, Gen-Mu; Beach, Rachel Kelly; Alicea, Candido; Broderick, Kate E.; Sardesai, Niranjan Y.; Pavlakis, George N.; Felber, Barbara K.

    2013-01-01

    IL-12 is a 70-kDa heterodimeric cytokine composed of the p35 and p40 subunits. To maximize cytokine production from plasmid DNA, molecular steps controlling IL-12p70 biosynthesis at the posttranscriptional and posttranslational levels were investigated. We show that the combination of RNA/codon-optimized gene sequences and fine-tuning of the relative expression levels of the two subunits within a cell resulted in increased production of the IL-12p70 heterodimer. We found that the p40 subunit plays a critical role in enhancing the stability, intracellular trafficking, and export of the p35 subunit. This posttranslational regulation mediated by the p40 subunit is conserved in mammals. Based on these findings, dual gene expression vectors were generated, producing an optimal ratio of the two subunits, resulting in a ∼1 log increase in human, rhesus, and murine IL-12p70 production compared with vectors expressing the wild type sequences. Such optimized DNA plasmids also produced significantly higher levels of systemic bioactive IL-12 upon in vivo DNA delivery in mice compared with plasmids expressing the wild type sequences. A single therapeutic injection of an optimized murine IL-12 DNA plasmid showed significantly more potent control of tumor development in the B16 melanoma cancer model in mice. Therefore, the improved IL-12p70 DNA vectors have promising potential for in vivo use as molecular vaccine adjuvants and in cancer immunotherapy. PMID:23297419

  4. Satellite scheduling considering maximum observation coverage time and minimum orbital transfer fuel cost

    NASA Astrophysics Data System (ADS)

    Zhu, Kai-Jian; Li, Jun-Feng; Baoyin, He-Xi

    2010-01-01

    In an emergency such as the Wenchuan earthquake, it is impossible to observe a given target on Earth by immediately launching new satellites. There is an urgent need for efficient satellite scheduling within a limited time period, so we must find a way to reasonably utilize the existing satellites to rapidly image the affected area during a short time period. Generally, the main consideration in orbit design is satellite coverage with the subsatellite nadir point as a standard of reference. Two factors must be taken into consideration simultaneously in orbit design, i.e., the maximum observation coverage time and the minimum orbital transfer fuel cost. The local time of visiting the given observation sites must satisfy the solar radiation requirement. When calculating the operational orbit elements as optimal parameters to be evaluated, we obtain the minimum objective function by comparing the results derived from the primer vector theory with those derived from the Hohmann transfer, because the operational orbit for observing the disaster area with impulse maneuvers is considered in this paper. The primer vector theory is utilized to optimize the transfer trajectory with three impulses, and the Hohmann transfer is utilized for coplanar cases and non-coplanar cases with small inclinations. Finally, we applied this method in a simulation of the rescue mission at Wenchuan city. The results of optimizing orbit design with a hybrid PSO and DE algorithm show that the primer vector and Hohmann transfer theory proved to be effective methods for multi-objective orbit optimization.
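The Hohmann transfer used here as the coplanar baseline has a closed-form delta-v between two circular orbits, which can be computed directly (the LEO-to-GEO radii below are illustrative, not from the paper):

```python
import math

def hohmann_delta_v(r1, r2, mu):
    """Total delta-v (m/s) for a Hohmann transfer between coplanar circular orbits.

    r1, r2: initial and final orbit radii (m); mu: gravitational parameter (m^3/s^2).
    First burn raises apoapsis to r2; second burn circularizes at r2.
    """
    dv1 = math.sqrt(mu / r1) * (math.sqrt(2 * r2 / (r1 + r2)) - 1)
    dv2 = math.sqrt(mu / r2) * (1 - math.sqrt(2 * r1 / (r1 + r2)))
    return abs(dv1) + abs(dv2)

# Example: 7000 km LEO to geostationary radius around Earth (mu = 3.986e14)
dv = hohmann_delta_v(7.0e6, 4.2164e7, 3.986e14)
```

An optimizer like the hybrid PSO/DE mentioned above would call such a cost function repeatedly while trading fuel cost against coverage time.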

  5. Expression of chicken parvovirus VP2 in chicken embryo fibroblasts requires codon optimization for production of naked DNA and vectored Meleagrid herpesvirus type 1 vaccines

    USDA-ARS?s Scientific Manuscript database

    Meleagrid herpesvirus type 1 (MeHV-1) is an ideal vector for the expression of antigens from pathogenic avian organisms in order to generate vaccines. Chicken parvovirus (ChPV) is a widespread infectious virus that causes serious disease in chickens. It is one of the etiological agents largely suspe...

  6. Adaptive Estimation and Heuristic Optimization of Nonlinear Spacecraft Attitude Dynamics

    DTIC Science & Technology

    2016-09-15

    Algorithm GPS Global Positioning System HOUF Higher Order Unscented Filter IC initial conditions IMM Interacting Multiple Model IMU Inertial Measurement Unit ...sources ranging from inertial measurement units to star sensors are used to construct observations for attitude estimation algorithms. The sensor...parameters. A single vector measurement will provide two independent parameters, as a unit vector constraint removes a DOF making the problem underdetermined

  7. Combinations of various CpG motifs cloned into plasmid backbone modulate and enhance protective immunity of viral replicon DNA anthrax vaccines.

    PubMed

    Yu, Yun-Zhou; Ma, Yao; Xu, Wen-Hui; Wang, Shuang; Sun, Zhi-Wei

    2015-08-01

    DNA vaccines are generally weak stimulators of the immune system. Fortunately, their efficacy can be improved using a viral replicon vector or by the addition of immunostimulatory CpG motifs, although the design of these engineered DNA vectors requires optimization. Our results clearly suggest that multiple copies of three types of CpG motifs, or combinations of various types of CpG motifs, cloned into a viral replicon vector backbone with strong immunostimulatory activity on human PBMC are efficient adjuvants for these DNA vaccines, modulating and enhancing protective immunity against anthrax, although modifications with these different CpG forms elicited inconsistent immune response profiles in vivo. Modification with more copies of CpG motifs elicited more potent adjuvant effects and enhanced immunity, indicating a CpG motif dose-dependent enhancement of antigen-specific immune responses. Notably, enhanced and/or synchronous adjuvant effects were observed with combinations of two different types of CpG motifs, which not only contributes to the knowledge base on the adjuvant activities of CpG motif combinations but also has implications for the rational design of optimal DNA vaccines with combinations of CpG motifs as "built-in" adjuvants. We describe an efficient strategy to design and optimize DNA vaccines by adding combined immunostimulatory CpG motifs to a viral replicon DNA plasmid to produce strong immune responses, indicating that the CpG-modified viral replicon DNA plasmid may be desirable as a vector for DNA vaccines.

  8. Viral Vectors for Gene Delivery to the Central Nervous System

    PubMed Central

    Lentz, Thomas B.; Gray, Steven J.; Samulski, R. Jude

    2011-01-01

    The potential benefits of gene therapy for neurological diseases such as Parkinson’s, Amyotrophic Lateral Sclerosis (ALS), Epilepsy, and Alzheimer’s are enormous. Even a delay in the onset of severe symptoms would be invaluable to patients suffering from these and other diseases. Significant effort has been placed in developing vectors capable of delivering therapeutic genes to the CNS in order to treat neurological disorders. At the forefront of potential vectors, viral systems have evolved to efficiently deliver their genetic material to a cell. The biology of different viruses offers unique solutions to the challenges of gene therapy, such as cell targeting, transgene expression and vector production. It is important to consider the natural biology of a vector when deciding whether it will be the most effective for a specific therapeutic function. In this review, we outline desired features of the ideal vector for gene delivery to the CNS and discuss how well available viral vectors compare to this model. Adeno-associated virus, retrovirus, adenovirus and herpesvirus vectors are covered. Focus is placed on features of the natural biology that have made these viruses effective tools for gene delivery with emphasis on their application in the CNS. Our goal is to provide insight into features of the optimal vector and which viral vectors can provide these features. PMID:22001604

  9. Correlational Analysis of Ordinal Data: From Pearson's "r" to Bayesian Polychoric Correlation

    ERIC Educational Resources Information Center

    Choi, Jaehwa; Peters, Michelle; Mueller, Ralph O.

    2010-01-01

    Correlational analyses are one of the most popular quantitative methods, yet also one of the mostly frequently misused methods in social and behavioral research, especially when analyzing ordinal data from Likert or other rating scales. Although several correlational analysis options have been developed for ordinal data, there seems to be a lack…
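A concrete ordinal-friendly alternative to Pearson's r on raw Likert codes is Spearman's rank correlation, i.e. Pearson's formula applied to ranks (a standard technique mentioned in this literature generally, not taken from the article; the sketch assumes no tied responses):

```python
import numpy as np

def spearman(x, y):
    """Spearman rank correlation: Pearson's r computed on the ranks.

    Uses argsort-of-argsort ranking, which assumes no ties; polychoric
    correlation goes further by modelling latent continuous thresholds.
    """
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    rx -= rx.mean()
    ry -= ry.mean()
    return float(rx @ ry / np.sqrt((rx @ rx) * (ry @ ry)))

x = np.array([1, 2, 3, 4, 5])   # hypothetical Likert responses
y = np.array([2, 1, 4, 3, 5])
r = spearman(x, y)               # agrees with 1 - 6*sum(d^2)/(n(n^2-1))
```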

  10. Reliability of Total Test Scores When Considered as Ordinal Measurements

    ERIC Educational Resources Information Center

    Biswas, Ajoy Kumar

    2006-01-01

    This article studies the ordinal reliability of (total) test scores. This study is based on a classical-type linear model of observed score (X), true score (T), and random error (E). Based on the idea of Kendall's tau-a coefficient, a measure of ordinal reliability for small-examinee populations is developed. This measure is extended to large…

  11. Economic Analysis of a Living Wage Ordinance.

    ERIC Educational Resources Information Center

    Tolley, George; Bernstein, Peter

    A study estimated the costs of the "Chicago Jobs and Living Wage Ordinance" that would require firms that receive assistance from the city of Chicago to pay their workers an hourly wage of at least $7.60. An estimate of the additional labor cost that would result from the proposed Ordinance was calculated. Results of a survey of…

  12. Dyslexia and Developmental Co-Ordination Disorder in Further and Higher Education--Similarities and Differences. Does the "Label" Influence the Support Given?

    ERIC Educational Resources Information Center

    Kirby, Amanda; Sugden, David; Beveridge, Sally; Edwards, Lisa; Edwards, Rachel

    2008-01-01

    Developmental co-ordination disorder (DCD) is a developmental disorder affecting motor co-ordination. The "Diagnostics Statistics Manual"--IV classification for DCD describes difficulties across a range of activities of daily living, impacting on everyday skills and academic performance in school. Recent evidence has shown that…

  13. The Development and Standardization of the Adult Developmental Co-Ordination Disorders/Dyspraxia Checklist (ADC)

    ERIC Educational Resources Information Center

    Kirby, Amanda; Edwards, Lisa; Sugden, David; Rosenblum, Sara

    2010-01-01

    Developmental Co-ordination Disorder (DCD), also known as Dyspraxia in the United Kingdom (U.K.), is a developmental disorder affecting motor co-ordination. In the past this was regarded as a childhood disorder, however there is increasing evidence that a significant number of children will continue to have persistent difficulties into adulthood.…

  14. How to Plan an Ordinance: An Outline and Some Examples.

    ERIC Educational Resources Information Center

    Cable Television Information Center, Washington, DC.

    Designed for public officials who must make policy decisions concerning cable television, this booklet forms a checklist to ensure that all basic questions have been considered in drafting an ordinance. The purpose of a cable television ordinance is to develop a law listing the specifications and obligations that will govern the franchising of a…

  15. Proposed Ordinance for the Regulation of Cable Television. Working Draft.

    ERIC Educational Resources Information Center

    Chicago City Council, IL.

    A model ordinance is proposed for the regulation of cable television in the city of Chicago. It defines the language of the ordinance, sets forth the method of granting franchises, and describes the terms of the franchises. The duties of a commission to regulate cable television are listed and the method of selecting commission members is…

  16. 25 CFR 11.108 - How are tribal ordinances affected by this part?

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 25 Indians 1 2014-04-01 2014-04-01 false How are tribal ordinances affected by this part? 11.108 Section 11.108 Indians BUREAU OF INDIAN AFFAIRS, DEPARTMENT OF THE INTERIOR LAW AND ORDER COURTS OF INDIAN OFFENSES AND LAW AND ORDER CODE Application; Jurisdiction § 11.108 How are tribal ordinances affected by...

  17. 25 CFR 11.108 - How are tribal ordinances affected by this part?

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 25 Indians 1 2013-04-01 2013-04-01 false How are tribal ordinances affected by this part? 11.108 Section 11.108 Indians BUREAU OF INDIAN AFFAIRS, DEPARTMENT OF THE INTERIOR LAW AND ORDER COURTS OF INDIAN OFFENSES AND LAW AND ORDER CODE Application; Jurisdiction § 11.108 How are tribal ordinances affected by...

  18. 25 CFR 11.108 - How are tribal ordinances affected by this part?

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 25 Indians 1 2012-04-01 2011-04-01 true How are tribal ordinances affected by this part? 11.108 Section 11.108 Indians BUREAU OF INDIAN AFFAIRS, DEPARTMENT OF THE INTERIOR LAW AND ORDER COURTS OF INDIAN OFFENSES AND LAW AND ORDER CODE Application; Jurisdiction § 11.108 How are tribal ordinances affected by...

  19. 25 CFR 11.108 - How are tribal ordinances affected by this part?

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 25 Indians 1 2011-04-01 2011-04-01 false How are tribal ordinances affected by this part? 11.108 Section 11.108 Indians BUREAU OF INDIAN AFFAIRS, DEPARTMENT OF THE INTERIOR LAW AND ORDER COURTS OF INDIAN OFFENSES AND LAW AND ORDER CODE Application; Jurisdiction § 11.108 How are tribal ordinances affected by...

  20. An Algorithm for Converting Ordinal Scale Measurement Data to Interval/Ratio Scale

    ERIC Educational Resources Information Center

    Granberg-Rademacker, J. Scott

    2010-01-01

    The extensive use of survey instruments in the social sciences has long created debate and concern about validity of outcomes, especially among instruments that gather ordinal-level data. Ordinal-level survey measurement of concepts that could be measured at the interval or ratio level produce errors because respondents are forced to truncate or…

  1. Image coding using entropy-constrained residual vector quantization

    NASA Technical Reports Server (NTRS)

    Kossentini, Faouzi; Smith, Mark J. T.; Barnes, Christopher F.

    1993-01-01

    The residual vector quantization (RVQ) structure is exploited to produce a variable-length codeword RVQ. Necessary conditions for the optimality of this RVQ are presented, and a new entropy-constrained RVQ (EC-RVQ) design algorithm is shown to be very effective in designing RVQ codebooks over a wide range of bit rates and vector sizes. The new EC-RVQ has several important advantages. It can outperform entropy-constrained VQ (ECVQ) in terms of peak signal-to-noise ratio (PSNR), memory, and computation requirements. It can also be used to design high-rate codebooks and codebooks with relatively large vector sizes. Experimental results indicate that when the new EC-RVQ is applied to image coding, very high quality is achieved at relatively low bit rates.
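The entropy-constrained encoding rule at the heart of such designs picks the codeword minimizing a Lagrangian distortion-plus-rate cost; a minimal sketch (codebook, codeword lengths, and lambda below are made-up illustrations, not values from the paper):

```python
import numpy as np

def ec_encode(x, codebook, lengths, lam):
    """Entropy-constrained codeword choice: argmin of distortion + lam * rate.

    codebook: (K, d) codewords; lengths: (K,) codeword bit lengths
    (roughly -log2 of codeword probability); lam trades distortion for rate.
    """
    d = np.sum((codebook - x) ** 2, axis=1)   # squared-error distortion
    j = d + lam * lengths                     # Lagrangian cost per codeword
    return int(np.argmin(j))

codebook = np.array([[0.0, 0.0], [1.0, 1.0]])
lengths = np.array([1.0, 5.0])   # the rarely used codeword costs more bits
x = np.array([0.6, 0.6])
nearest = ec_encode(x, codebook, lengths, 0.0)   # pure nearest-neighbour
cheap = ec_encode(x, codebook, lengths, 0.2)     # rate-aware choice
```

With lam = 0 the rule reduces to ordinary nearest-neighbour VQ; raising lam shifts selections toward short (probable) codewords, which is what lowers the entropy of the output.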

  2. Prediction of Skin Sensitization with a Particle Swarm Optimized Support Vector Machine

    PubMed Central

    Yuan, Hua; Huang, Jianping; Cao, Chenzhong

    2009-01-01

    Skin sensitization is the most commonly reported occupational illness, causing much suffering to a wide range of people. Identification and labeling of environmental allergens is urgently required to protect people from skin sensitization. The guinea pig maximization test (GPMT) and murine local lymph node assay (LLNA) are the two most important in vivo models for identification of skin sensitizers. In order to reduce the number of animal tests, quantitative structure-activity relationships (QSARs) are strongly encouraged in the assessment of skin sensitization of chemicals. This paper has investigated the skin sensitization potential of 162 compounds with LLNA results and 92 compounds with GPMT results using a support vector machine. A particle swarm optimization algorithm was implemented for feature selection from a large number of molecular descriptors calculated by Dragon. For the LLNA data set, the classification accuracies are 95.37% and 88.89% for the training and the test sets, respectively. For the GPMT data set, the classification accuracies are 91.80% and 90.32% for the training and the test sets, respectively. The classification performances were greatly improved compared to those reported in the literature, indicating that the support vector machine optimized by particle swarm in this paper is competent for the identification of skin sensitizers. PMID:19742136
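The particle swarm core that drives such descriptor selection can be sketched as a generic global-best PSO; in the paper's setting the objective would score an SVM trained on the selected Dragon descriptors, while here it is a toy function (all parameter values are illustrative assumptions, not the paper's):

```python
import numpy as np

def pso_minimize(f, dim, n=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimizer with a global-best topology."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n, dim))              # particle positions
    v = np.zeros((n, dim))                        # velocities
    pbest, pbest_f = x.copy(), np.array([f(p) for p in x])
    g = pbest[np.argmin(pbest_f)].copy()          # global best position
    for _ in range(iters):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        fx = np.array([f(p) for p in x])
        better = fx < pbest_f                     # update personal bests
        pbest[better], pbest_f[better] = x[better], fx[better]
        g = pbest[np.argmin(pbest_f)].copy()
    return g, float(pbest_f.min())

# Toy objective: squared distance to the point (3, 3).
best, val = pso_minimize(lambda p: float(np.sum((p - 3.0) ** 2)), dim=2)
```

For feature selection the position vector is typically binarized (e.g. thresholded per dimension) to indicate which descriptors enter the SVM.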

  3. A novel approach for dimension reduction of microarray.

    PubMed

    Aziz, Rabia; Verma, C K; Srivastava, Namita

    2017-12-01

    This paper proposes a new hybrid search technique for feature (gene) selection (FS), called ICA+ABC, that uses Independent Component Analysis (ICA) and the Artificial Bee Colony (ABC) algorithm to select informative genes for a Naïve Bayes (NB) classifier. An important trait of this technique is the optimization of the ICA feature vector using ABC. ICA+ABC combines the benefits of an extraction approach, which reduces the size of the data, with a wrapper approach, which optimizes the reduced feature vectors. The technique was evaluated on six standard gene expression classification datasets. Extensive experiments compared the performance of ICA+ABC with the results obtained from the recently published Minimum Redundancy Maximum Relevance (mRMR)+ABC algorithm for the NB classifier. To further assess how ICA+ABC performs as a feature selector with the NB classifier, the combination of ICA with popular filter techniques and with other similar bio-inspired algorithms, such as the Genetic Algorithm (GA) and Particle Swarm Optimization (PSO), was also compared. The results show that ICA+ABC has a significant ability to generate small subsets of genes from the ICA feature vector that significantly improve the classification accuracy of the NB classifier compared with other previously suggested methods. Copyright © 2017 Elsevier Ltd. All rights reserved.

  4. Optimization of a CRISPR/Cas9-mediated Knock-in Strategy at the Porcine Rosa26 Locus in Porcine Foetal Fibroblasts.

    PubMed

    Xie, Zicong; Pang, Daxin; Wang, Kankan; Li, Mengjing; Guo, Nannan; Yuan, Hongming; Li, Jianing; Zou, Xiaodong; Jiao, Huping; Ouyang, Hongsheng; Li, Zhanjun; Tang, Xiaochun

    2017-06-08

    Genetically modified pigs have important roles in agriculture and biomedicine. However, genome-specific knock-in techniques in pigs are still in their infancy and optimal strategies have not been extensively investigated. In this study, we performed electroporation to introduce a targeting donor vector (a non-linearized vector that did not contain a promoter or selectable marker) into Porcine Foetal Fibroblasts (PFFs) along with a CRISPR/Cas9 vector. After optimization, the efficiency of the EGFP site-specific knock-in could reach up to 29.6% at the pRosa26 locus in PFFs. Next, we used the EGFP reporter PFFs to address two key conditions in the process of achieving transgenic pigs: the limiting dilution method and the strategy to evaluate the safety and feasibility of the knock-in locus. This study establishes an efficient procedure for the exogenous gene knock-in technique and creates a platform to efficiently generate promoter-less and selectable-marker-free transgenic PFFs through the CRISPR/Cas9 system. It should contribute to the generation of promoter-less and selectable-marker-free transgenic pigs and may provide insights into sophisticated site-specific genome engineering techniques for additional species.

  5. Gammaretroviral Vectors: Biology, Technology and Application

    PubMed Central

    Maetzig, Tobias; Galla, Melanie; Baum, Christopher; Schambach, Axel

    2011-01-01

    Retroviruses are evolutionary optimized gene carriers that have naturally adapted to their hosts to efficiently deliver their nucleic acids into the target cell chromatin, thereby overcoming natural cellular barriers. Here we will review—starting with a deeper look into retroviral biology—how Murine Leukemia Virus (MLV), a simple gammaretrovirus, can be converted into an efficient vehicle of genetic therapeutics. Furthermore, we will describe how more rational vector backbones can be designed and how these so-called self-inactivating vectors can be pseudotyped and produced. Finally, we will provide an overview on existing clinical trials and how biosafety can be improved. PMID:21994751

  6. Case management for high-intensity service users: towards a relational approach to care co-ordination.

    PubMed

    McEvoy, Phil; Escott, Diane; Bee, Penny

    2011-01-01

    This study is based on a formative evaluation of a case management service for high-intensity service users in Northern England. The evaluation had three main purposes: (i) to assess the quality of the organisational infrastructure; (ii) to obtain a better understanding of the key influences that played a role in shaping the development of the service; and (iii) to identify potential changes in practice that may help to improve the quality of service provision. The evaluation was informed by Gittell's relational co-ordination theory, which focuses upon cross-boundary working practices that facilitate task integration. The Assessment of Chronic Illness Care Survey was used to assess the organisational infrastructure, and qualitative interviews with front-line staff were conducted to explore the key influences that shaped the development of the service. A high level of strategic commitment and political support for integrated working was identified. However, the quality of care co-ordination was variable. The most prominent operational factor that appeared to influence the scope and quality of care co-ordination was the pattern of interaction between the case managers and their co-workers. The co-ordination of patient care was much more effective in integrated co-ordination networks. Key features included clearly defined, task-focused relational workspaces with interactive forums where case managers could engage with co-workers in discussions about the management of interdependent care activities. In dispersed co-ordination networks with fewer relational workspaces, the case managers struggled to work as effectively. The evaluation concluded that the creation of flexible and efficient task-focused relational workspaces that are systemically managed and adequately resourced could help to improve the quality of care co-ordination, particularly in dispersed networks. © 2010 Blackwell Publishing Ltd.

  7. Projected health impact of the Los Angeles City living wage ordinance

    PubMed Central

    Cole, B.; Shimkhada, R.; Morgenstern, H.; Kominski, G.; Fielding, J.; Wu, S.

    2005-01-01

    Study objective: To estimate the relative health effects of the income and health insurance provisions of the Los Angeles City living wage ordinance. Setting and participants: About 10 000 employees of city contractors are subject to the Los Angeles City living wage ordinance, which establishes an annually adjusted minimum wage ($7.99 per hour in July 2002) and requires employers to contribute $1.25 per hour worked towards employees' health insurance, or, if health insurance is not provided, to add this amount to wages. Design: As part of a comprehensive health impact assessment (HIA), we used estimates of the effects of health insurance and income on mortality from the published literature to construct a model to estimate and compare potential reductions in mortality attributable to the increases in wage and changes in health insurance status among workers covered by the Los Angeles City living wage ordinance. Results: The model predicts that the ordinance currently reduces mortality by 1.4 deaths per year per 10 000 workers at a cost of $27.5 million per death prevented. If the ordinance were modified so that all uninsured workers received health insurance, mortality would be reduced by eight deaths per year per 10 000 workers at a cost of $3.4 million per death prevented. Conclusions: The health insurance provisions of the ordinance have the potential to benefit the health of covered workers far more cost effectively than the wage provisions of the ordinance. This analytical model can be adapted and used in other health impact assessments of related policy actions that might affect either income or access to health insurance in the affected population. PMID:16020640

  8. Projected health impact of the Los Angeles City living wage ordinance.

    PubMed

    Cole, Brian L; Shimkhada, Riti; Morgenstern, Hal; Kominski, Gerald; Fielding, Jonathan E; Wu, Sheng

    2005-08-01

    To estimate the relative health effects of the income and health insurance provisions of the Los Angeles City living wage ordinance. About 10 000 employees of city contractors are subject to the Los Angeles City living wage ordinance, which establishes an annually adjusted minimum wage (7.99 US dollars per hour in July 2002) and requires employers to contribute 1.25 US dollars per hour worked towards employees' health insurance, or, if health insurance is not provided, to add this amount to wages. As part of a comprehensive health impact assessment (HIA), we used estimates of the effects of health insurance and income on mortality from the published literature to construct a model to estimate and compare potential reductions in mortality attributable to the increases in wage and changes in health insurance status among workers covered by the Los Angeles City living wage ordinance. The model predicts that the ordinance currently reduces mortality by 1.4 deaths per year per 10,000 workers at a cost of 27.5 million US dollars per death prevented. If the ordinance were modified so that all uninsured workers received health insurance, mortality would be reduced by eight deaths per year per 10,000 workers at a cost of 3.4 million US dollars per death prevented. The health insurance provisions of the ordinance have the potential to benefit the health of covered workers far more cost effectively than the wage provisions of the ordinance. This analytical model can be adapted and used in other health impact assessments of related policy actions that might affect either income or access to health insurance in the affected population.

  9. Minimal entropy probability paths between genome families.

    PubMed

    Ahlbrandt, Calvin; Benson, Gary; Casey, William

    2004-05-01

    We develop a metric for probability distributions with applications to biological sequence analysis. Our distance metric is obtained by minimizing a functional defined on the class of paths over probability measures on N categories. The underlying mathematical theory is connected to a constrained problem in the calculus of variations. The solution presented is a numerical solution, which approximates the true solution in a set of cases called rich paths where none of the components of the path is zero. The functional to be minimized is motivated by entropy considerations, reflecting the idea that nature might efficiently carry out mutations of genome sequences in such a way that the increase in entropy involved in transformation is as small as possible. We characterize sequences by frequency profiles or probability vectors, in the case of DNA where N is 4 and the components of the probability vector are the frequency of occurrence of each of the bases A, C, G and T. Given two probability vectors a and b, we define a distance function based as the infimum of path integrals of the entropy function H( p) over all admissible paths p(t), 0 < or = t< or =1, with p(t) a probability vector such that p(0)=a and p(1)=b. If the probability paths p(t) are parameterized as y(s) in terms of arc length s and the optimal path is smooth with arc length L, then smooth and "rich" optimal probability paths may be numerically estimated by a hybrid method of iterating Newton's method on solutions of a two point boundary value problem, with unknown distance L between the abscissas, for the Euler-Lagrange equations resulting from a multiplier rule for the constrained optimization problem together with linear regression to improve the arc length estimate L. Matlab code for these numerical methods is provided which works only for "rich" optimal probability vectors. 
These methods motivate the definition of an elementary distance function that is easier and faster to calculate, works on non-rich vectors, involves neither variational theory nor differential equations, and is a better approximation of the minimal entropy path distance than the Euclidean distance ||b - a||_2. We compute minimal entropy distance matrices for examples of DNA myostatin genes and amino-acid sequences across several species. Output tree dendrograms for our minimal entropy metric are compared with dendrograms based on BLAST and BLAST identity scores.
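The path-integral definition above can be sketched numerically. The minimal example below, a rough stand-in rather than the authors' Matlab solution, integrates the Shannon entropy H(p) along the straight-line (chord) path between two base-frequency vectors; since the chord need not be the optimal path, this gives an upper bound on the minimal-entropy distance:

```python
import numpy as np

def entropy(p):
    # Shannon entropy H(p) = -sum_i p_i log p_i, with 0 log 0 := 0
    p = np.asarray(p, dtype=float)
    nz = p > 0
    return -np.sum(p[nz] * np.log(p[nz]))

def chord_entropy_distance(a, b, n=1000):
    """Integrate H along the straight-line path p(t) = (1-t)a + tb.
    This upper-bounds the infimum over all admissible paths."""
    t = np.linspace(0.0, 1.0, n + 1)
    path = np.outer(1 - t, a) + np.outer(t, b)
    h = np.array([entropy(p) for p in path])
    ds = np.linalg.norm(np.asarray(b) - np.asarray(a)) / n  # constant speed
    return ds * (0.5 * h[0] + h[1:-1].sum() + 0.5 * h[-1])  # trapezoid rule

a = np.array([0.25, 0.25, 0.25, 0.25])  # uniform A/C/G/T frequencies
b = np.array([0.40, 0.10, 0.10, 0.40])  # skewed frequency profile
print(chord_entropy_distance(a, b), np.linalg.norm(b - a))
```

Because the entropy along the path exceeds 1 nat here, the chord entropy distance comes out larger than ||b - a||_2, consistent with the abstract's point that the Euclidean distance is only a crude surrogate.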

  10. Optimal quantum cloning based on the maximin principle by using a priori information

    NASA Astrophysics Data System (ADS)

    Kang, Peng; Dai, Hong-Yi; Wei, Jia-Hua; Zhang, Ming

    2016-10-01

We propose an optimal 1 →2 quantum cloning method based on the maximin principle by making full use of a priori information of amplitude and phase about the general cloned qubit input set, which is a simply connected region enclosed by a "longitude-latitude grid" on the Bloch sphere. Theoretically, the fidelity of the optimal quantum cloning machine derived from this method is the largest in terms of the maximin principle compared with that of any other machine. Solving the problem is an optimization process that involves six unknown complex variables, six vectors in a complex vector space of uncertain dimension, and four equality constraints. Moreover, by restricting the structure of the quantum cloning machine, the optimization problem is simplified to a three-real-parameter suboptimization problem with only one equality constraint. We obtain the explicit formula for a suboptimal quantum cloning machine. Additionally, the fidelity of our suboptimal quantum cloning machine is higher than or at least equal to that of universal quantum cloning machines and phase-covariant quantum cloning machines. It is also underlined that the suboptimal cloning machine outperforms the "belt quantum cloning machine" for some cases.

  11. Evaluation of laser cutting process with auxiliary gas pressure by soft computing approach

    NASA Astrophysics Data System (ADS)

    Lazov, Lyubomir; Nikolić, Vlastimir; Jovic, Srdjan; Milovančević, Miloš; Deneva, Heristina; Teirumenieka, Erika; Arsic, Nebojsa

    2018-06-01

Evaluation of the optimal laser cutting parameters is very important for high cut quality. Laser cutting is a highly nonlinear process with many parameters, which is the main challenge in the optimization. Data mining is one of the most versatile methodologies that can be applied to laser cutting process optimization. Support vector regression (SVR) was implemented since it is a versatile and robust technique for highly nonlinear data regression. The goal of this study was to determine the optimal laser cutting parameters that ensure robust conditions for minimization of average surface roughness. Three cutting parameters, the cutting speed, the laser power, and the assist gas pressure, were used in the investigation. A TruLaser 1030 technological system was used as the laser, with nitrogen as the assist gas. The data mining prediction accuracy was very high according to the coefficient of determination (R2) and the root mean square error (RMSE): R2 = 0.9975 and RMSE = 0.0337. Therefore, the data mining approach could be used effectively to determine the optimal conditions of the laser cutting process.
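The SVR workflow described here can be sketched in a few lines. The example below uses synthetic data with made-up parameter ranges and a made-up roughness response, not the authors' measurements, and illustrative hyperparameters:

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
n = 200
speed = rng.uniform(1.0, 5.0, n)      # cutting speed (hypothetical units)
power = rng.uniform(1.0, 3.0, n)      # laser power (hypothetical units)
pressure = rng.uniform(6.0, 14.0, n)  # assist gas pressure (hypothetical units)
# Made-up nonlinear surface-roughness response with small measurement noise
ra = (0.5 + 0.3 * speed**2 / power + 0.02 * (pressure - 10.0)**2
      + rng.normal(0.0, 0.05, n))

X = np.column_stack([speed, power, pressure])
# Scale inputs, then fit an RBF-kernel SVR to the roughness response
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=100.0, epsilon=0.1))
model.fit(X, ra)
r2 = r2_score(ra, model.predict(X))
print("training R^2 =", round(r2, 4))
```

On real data one would of course report held-out rather than training R2; the block only shows the shape of the SVR pipeline.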

  12. Overview of Existing Wind Energy Ordinances

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oteri, F.

    2008-12-01

Due to increased energy demand in the United States, rural communities with limited or no experience with wind energy now have the opportunity to become involved in this industry. Communities with good wind resources may be approached by entities with plans to develop the resource. Although these opportunities can create new revenue in the form of construction jobs and land lease payments, they also create a new responsibility on the part of local governments to ensure that ordinances will be established to aid the development of safe facilities that will be embraced by the community. The purpose of this report is to educate and engage state and local governments, as well as policymakers, about existing large wind energy ordinances. These groups will have a collection of examples to utilize when they attempt to draft a new large wind energy ordinance in a town or county without existing ordinances.

  13. Penalized Ordinal Regression Methods for Predicting Stage of Cancer in High-Dimensional Covariate Spaces.

    PubMed

    Gentry, Amanda Elswick; Jackson-Cook, Colleen K; Lyon, Debra E; Archer, Kellie J

    2015-01-01

    The pathological description of the stage of a tumor is an important clinical designation and is considered, like many other forms of biomedical data, an ordinal outcome. Currently, statistical methods for predicting an ordinal outcome using clinical, demographic, and high-dimensional correlated features are lacking. In this paper, we propose a method that fits an ordinal response model to predict an ordinal outcome for high-dimensional covariate spaces. Our method penalizes some covariates (high-throughput genomic features) without penalizing others (such as demographic and/or clinical covariates). We demonstrate the application of our method to predict the stage of breast cancer. In our model, breast cancer subtype is a nonpenalized predictor, and CpG site methylation values from the Illumina Human Methylation 450K assay are penalized predictors. The method has been made available in the ordinalgmifs package in the R programming environment.
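The idea of penalizing genomic features while leaving clinical covariates unpenalized can be sketched with a small cumulative-logit (proportional odds) model. This is a hedged stand-in, not the paper's GMIFS algorithm: the data are synthetic, and a ridge penalty replaces the L1-style solution path for simplicity:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

rng = np.random.default_rng(1)
n, p_clin, p_gen = 300, 1, 20              # 1 clinical + 20 "genomic" features
X = rng.normal(size=(n, p_clin + p_gen))
true_beta = np.concatenate([[1.5], [1.0, -1.0], np.zeros(p_gen - 2)])
latent = X @ true_beta + rng.logistic(size=n)
y = np.digitize(latent, [-1.0, 1.5])       # three ordered "stages": 0, 1, 2

penalized = np.arange(p_clin + p_gen) >= p_clin   # penalize genomic block only

def objective(params, lam=1.0):
    # Cumulative-logit model; exp() keeps the two thresholds ordered
    alpha = np.array([params[0], params[0] + np.exp(params[1])])
    beta = params[2:]
    eta = X @ beta
    cdf = expit(alpha[None, :] - eta[:, None])          # P(Y<=0), P(Y<=1)
    cdf = np.hstack([np.zeros((n, 1)), cdf, np.ones((n, 1))])
    prob = np.clip(cdf[np.arange(n), y + 1] - cdf[np.arange(n), y], 1e-12, None)
    # Penalty applies only to the genomic block; clinical stays unpenalized
    return -np.log(prob).sum() + lam * np.sum(beta[penalized] ** 2)

res = minimize(objective, np.zeros(2 + p_clin + p_gen), method="L-BFGS-B")
beta_hat = res.x[2:]
print(np.round(beta_hat[:4], 2))
```

The unpenalized clinical coefficient is recovered near its true value, while the penalized genomic coefficients are shrunk, which is the qualitative behavior the method relies on.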

  14. Optimizing antibody expression: The nuts and bolts.

    PubMed

    Ayyar, B Vijayalakshmi; Arora, Sushrut; Ravi, Shiva Shankar

    2017-03-01

Antibodies are extensively utilized entities in biomedical research, and in the development of diagnostics and therapeutics. Many of these applications require large amounts of antibodies. However, meeting this ever-increasing demand for antibodies in the global market is one of the outstanding challenges. The need to maintain a balance between demand and supply of antibodies has led researchers to seek better methods for optimizing their expression. These strategies aim to increase the volumetric productivity of the antibodies along with the reduction of associated manufacturing costs. Recent years have witnessed major advances in recombinant protein technology, owing to the introduction of novel cloning strategies, gene manipulation techniques, and an array of cell and vector engineering techniques, together with progress in fermentation technologies. These innovations have also been highly beneficial for antibody expression. Antibody expression depends upon the complex interplay of multiple factors that may require fine tuning at diverse levels to achieve maximum yields. However, each antibody is unique and requires individual consideration and customization for optimizing the associated expression parameters. This review provides a comprehensive overview of several state-of-the-art approaches, such as host selection, strain engineering, codon optimization, gene optimization, vector modification and process optimization, that are deemed suitable for enhancing antibody expression. Copyright © 2017 Elsevier Inc. All rights reserved.

  15. Optimized Lentiviral Vector Design Improves Titer and Transgene Expression of Vectors Containing the Chicken β-Globin Locus HS4 Insulator Element

    PubMed Central

    Hanawa, Hideki; Yamamoto, Motoko; Zhao, Huifen; Shimada, Takashi; Persons, Derek A

    2009-01-01

    Hematopoietic cell gene therapy using retroviral vectors has achieved success in clinical trials. However, safety issues regarding vector insertional mutagenesis have emerged. In two different trials, vector insertion resulted in the transcriptional activation of proto-oncogenes. One strategy for potentially diminishing vector insertional mutagenesis is through the use of self-inactivating lentiviral vectors containing the 1.2-kb insulator element derived from the chicken β-globin locus. However, use of this element can dramatically decrease both vector titer and transgene expression, thereby compromising its practical use. Here, we studied lentiviral vectors containing either the full-length 1.2-kb insulator or the smaller 0.25-kb core element in both orientations in the partially deleted long-terminal repeat. We show that use of the 0.25-kb core insulator rescued vector titer by alleviating a postentry block to reverse transcription associated with the 1.2-kb element. In addition, in an orientation-dependent manner, the 0.25-kb core element significantly increased transgene expression from an internal promoter due to improved transcriptional termination. This element also demonstrated barrier activity, reducing variability of expression due to position effects. As it is known that the 0.25-kb core insulator has enhancer-blocking activity, this particular insulated lentiviral vector design may be useful for clinical application. PMID:19223867

  16. Construction and production of oncotropic vectors, derived from MVM(p), that share reduced sequence homology with helper plasmids.

    PubMed

    Clément, Nathalie; Velu, Thierry; Brandenburger, Annick

    2002-09-01

The production of currently available vectors derived from autonomous parvoviruses requires the expression of capsid proteins in trans, from helper sequences. Cotransfection of a helper plasmid always generates significant amounts of replication-competent virus (RCV), which can be reduced by integrating the helper sequences into a packaging cell line. Although stocks of minute virus of mice (MVM)-based vectors with no detectable RCV could be produced by transfection into packaging cells, RCVs appear after one or two rounds of replication, precluding further amplification of the vector stock. Indeed, once RCVs become detectable, they are efficiently amplified and rapidly take over the culture. Theoretically, RCV-free vector stocks could be produced if all homology between vector and helper DNA were eliminated, thus preventing homologous recombination. We constructed new vectors based on the structure of spontaneously occurring defective particles of MVM. Based on published observations related to the size of vectors and the sequence of the viral origin of replication, these vectors were modified by the insertion of foreign DNA sequences downstream of the transgene and by the introduction of a consensus NS-1 nick site near the origin of replication to optimize their production. In one of the vectors the inserted fragment of mouse genomic DNA had a synergistic effect with the modified origin of replication in increasing vector production.

  17. Optimal transfers between libration-point orbits in the elliptic restricted three-body problem

    NASA Astrophysics Data System (ADS)

    Hiday, Lisa Ann

    1992-09-01

A strategy is formulated to design optimal impulsive transfers between three-dimensional libration-point orbits in the vicinity of the interior L(1) libration point of the Sun-Earth/Moon barycenter system. Two methods of constructing nominal transfers, for which the fuel cost is to be minimized, are developed; both inferior and superior transfers between two halo orbits are considered. The necessary conditions for an optimal transfer trajectory are stated in terms of the primer vector. The adjoint equation relating reference and perturbed trajectories in this formulation of the elliptic restricted three-body problem is shown to be distinctly different from that obtained in the analysis of trajectories in the two-body problem. Criteria are established whereby the cost on a nominal transfer can be improved by the addition of an interior impulse or by the implementation of coasting arcs in the initial and final orbits. The necessary conditions for the local optimality of a time-fixed transfer trajectory possessing additional impulses are satisfied by requiring continuity of the Hamiltonian and the derivative of the primer vector at all interior impulses. The optimality of a time-free transfer containing coasting arcs is assessed by examination of the slopes at the endpoints of a plot of the magnitude of the primer vector over the duration of the transfer path. If the initial and final slopes of the primer magnitude are zero, the transfer trajectory is optimal; otherwise, the execution of coasts is warranted. The position and timing of each interior impulse applied to a time-fixed transfer, as well as the direction and length of coasting periods implemented on a time-free transfer, are specified by the unconstrained minimization of the appropriate variation in cost utilizing a multivariable search technique. Although optimal solutions in some instances are elusive, the time-fixed and time-free optimization algorithms prove to be very successful in diminishing costs on nominal transfer trajectories. The inclusion of coasting arcs on time-free superior and inferior transfers results in significant modification of the transfer time of flight caused by shifts in departure and arrival locations on the halo orbits.
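The endpoint-slope test from primer vector theory is simple to state in code. The sketch below checks a sampled primer-magnitude history, using made-up example histories rather than actual transfer data, and a finite-difference tolerance chosen for illustration:

```python
import numpy as np

def coasting_warranted(p, t, tol=0.05):
    """Time-free optimality check from primer vector theory: if the primer
    magnitude history has nonzero slope at either endpoint, adding an
    initial or final coasting arc can lower the transfer cost."""
    dp = np.gradient(p, t, edge_order=2)
    return abs(dp[0]) > tol or abs(dp[-1]) > tol

t = np.linspace(0.0, 1.0, 401)
flat_ends = 1.0 - 0.5 * np.cos(2 * np.pi * t)  # zero end slopes: no coast needed
sloped_ends = 0.5 + 0.5 * t                    # nonzero end slopes: coast helps
print(coasting_warranted(flat_ends, t), coasting_warranted(sloped_ends, t))
```

This only encodes the decision criterion; locating the coast lengths themselves requires the multivariable cost minimization described in the abstract.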

  18. Overcoming preexisting humoral immunity to AAV using capsid decoys.

    PubMed

    Mingozzi, Federico; Anguela, Xavier M; Pavani, Giulia; Chen, Yifeng; Davidson, Robert J; Hui, Daniel J; Yazicioglu, Mustafa; Elkouby, Liron; Hinderer, Christian J; Faella, Armida; Howard, Carolann; Tai, Alex; Podsakoff, Gregory M; Zhou, Shangzhen; Basner-Tschakarjan, Etiena; Wright, John Fraser; High, Katherine A

    2013-07-17

    Adeno-associated virus (AAV) vectors delivered through the systemic circulation successfully transduce various target tissues in animal models. However, similar attempts in humans have been hampered by the high prevalence of neutralizing antibodies to AAV, which completely block vector transduction. We show in both mouse and nonhuman primate models that addition of empty capsid to the final vector formulation can, in a dose-dependent manner, adsorb these antibodies, even at high titers, thus overcoming their inhibitory effect. To further enhance the safety of the approach, we mutated the receptor binding site of AAV2 to generate an empty capsid mutant that can adsorb antibodies but cannot enter a target cell. Our work suggests that optimizing the ratio of full/empty capsids in the final formulation of vector, based on a patient's anti-AAV titers, will maximize the efficacy of gene transfer after systemic vector delivery.

  19. Decoding and optimized implementation of SECDED codes over GF(q)

    DOEpatents

    Ward, H. Lee; Ganti, Anand; Resnick, David R

    2013-10-22

    A plurality of columns for a check matrix that implements a distance d linear error correcting code are populated by providing a set of vectors from which to populate the columns, and applying to the set of vectors a filter operation that reduces the set by eliminating therefrom all vectors that would, if used to populate the columns, prevent the check matrix from satisfying a column-wise linear independence requirement associated with check matrices of distance d linear codes. One of the vectors from the reduced set may then be selected to populate one of the columns. The filtering and selecting repeats iteratively until either all of the columns are populated or the number of currently unpopulated columns exceeds the number of vectors in the reduced set. Columns for the check matrix may be processed to reduce the amount of logic needed to implement the check matrix in circuit logic.
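The column-filtering idea can be sketched over GF(2): a candidate column is rejected if it equals the XOR of up to d-2 already-chosen columns, since that would make some d-1 columns linearly dependent. This is an illustrative specialization to binary codes, not the patent's general GF(q) construction:

```python
from itertools import product, combinations

def gf2_sum(vectors):
    # Componentwise XOR of a collection of binary tuples
    return tuple(sum(bits) % 2 for bits in zip(*vectors))

def populate_columns(r, d):
    """Greedy column population for a distance-d binary code: a candidate
    survives the filter only if it is nonzero and not the GF(2) sum of any
    1..(d-2) chosen columns, so every d-1 columns stay independent."""
    cols = []
    for v in product((0, 1), repeat=r):
        if not any(v):
            continue                      # the zero column is always dependent
        spans = {gf2_sum(c) for k in range(1, d - 1)
                 for c in combinations(cols, k)}
        if v not in spans:
            cols.append(v)
    return cols

print(len(populate_columns(3, 3)))   # 7: the [7,4] Hamming check matrix columns
print(len(populate_columns(4, 4)))   # 8: odd-weight columns, a distance-4 matrix
```

For d = 3 the filter reduces to "nonzero and distinct", yielding the 7 Hamming columns; for d = 4 with r = 4 the greedy scan keeps exactly the 8 odd-weight columns, matching the classic SECDED construction.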

  20. Design, decoding and optimized implementation of SECDED codes over GF(q)

    DOEpatents

    Ward, H Lee; Ganti, Anand; Resnick, David R

    2014-06-17

    A plurality of columns for a check matrix that implements a distance d linear error correcting code are populated by providing a set of vectors from which to populate the columns, and applying to the set of vectors a filter operation that reduces the set by eliminating therefrom all vectors that would, if used to populate the columns, prevent the check matrix from satisfying a column-wise linear independence requirement associated with check matrices of distance d linear codes. One of the vectors from the reduced set may then be selected to populate one of the columns. The filtering and selecting repeats iteratively until either all of the columns are populated or the number of currently unpopulated columns exceeds the number of vectors in the reduced set. Columns for the check matrix may be processed to reduce the amount of logic needed to implement the check matrix in circuit logic.

  1. Decoding and optimized implementation of SECDED codes over GF(q)

    DOEpatents

    Ward, H Lee; Ganti, Anand; Resnick, David R

    2014-11-18

    A plurality of columns for a check matrix that implements a distance d linear error correcting code are populated by providing a set of vectors from which to populate the columns, and applying to the set of vectors a filter operation that reduces the set by eliminating therefrom all vectors that would, if used to populate the columns, prevent the check matrix from satisfying a column-wise linear independence requirement associated with check matrices of distance d linear codes. One of the vectors from the reduced set may then be selected to populate one of the columns. The filtering and selecting repeats iteratively until either all of the columns are populated or the number of currently unpopulated columns exceeds the number of vectors in the reduced set. Columns for the check matrix may be processed to reduce the amount of logic needed to implement the check matrix in circuit logic.

  2. Overcoming Preexisting Humoral Immunity to AAV Using Capsid Decoys

    PubMed Central

    Anguela, Xavier M.; Pavani, Giulia; Chen, Yifeng; Davidson, Robert J.; Hui, Daniel J.; Yazicioglu, Mustafa; Elkouby, Liron; Hinderer, Christian J.; Faella, Armida; Howard, Carolann; Tai, Alex; Podsakoff, Gregory M.; Zhou, Shangzhen; Basner-Tschakarjan, Etiena; Wright, John Fraser

    2014-01-01

    Adeno-associated virus (AAV) vectors delivered through the systemic circulation successfully transduce various target tissues in animal models. However, similar attempts in humans have been hampered by the high prevalence of neutralizing antibodies to AAV, which completely block vector transduction. We show in both mouse and nonhuman primate models that addition of empty capsid to the final vector formulation can, in a dose-dependent manner, adsorb these antibodies, even at high titers, thus overcoming their inhibitory effect. To further enhance the safety of the approach, we mutated the receptor binding site of AAV2 to generate an empty capsid mutant that can adsorb antibodies but cannot enter a target cell. Our work suggests that optimizing the ratio of full/empty capsids in the final formulation of vector, based on a patient's anti-AAV titers, will maximize the efficacy of gene transfer after systemic vector delivery. PMID:23863832

  3. Young Children's Ability to Use Ordinal Labels in a Spatial Search Task

    ERIC Educational Resources Information Center

    Miller, Stephanie E.; Marcovitch, Stuart; Boseovski, Janet J.; Lewkowicz, David J.

    2015-01-01

    The use and understanding of ordinal terms (e.g., "first" and "second") is a developmental milestone that has been relatively unexplored in the preschool age range. In the present study, 4- and 5-year-olds watched as a reward was placed in one of three train cars labeled by the experimenter with an ordinal (e.g.,…

  4. 75 FR 39969 - Liquor Control Ordinance for the Manchester Band of Pomo Indians of the Manchester-Point Arena...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-07-13

    ... land. The tribal land is located on trust land and this Ordinance allows for the possession and sale of alcoholic beverages. This Ordinance will increase the ability of the tribal government to control the distribution and possession of liquor within their tribal land, and at the same time will provide an important...

  5. The assignment of scores procedure for ordinal categorical data.

    PubMed

    Chen, Han-Ching; Wang, Nae-Sheng

    2014-01-01

Ordinal data are the most frequently encountered type of data in the social sciences. Many statistical methods can be used to process such data. One common approach is to assign scores to the data, convert them into interval data, and then perform further statistical analysis. Several authors have recently developed methods for assigning scores to ordered categorical data. This paper proposes an approach that defines a scoring system for an ordinal categorical variable based on an underlying continuous latent distribution, with the interpretation illustrated through three case studies. The results show that the proposed score system performs well for skewed ordinal categorical data.

  6. Regenerating time series from ordinal networks.

    PubMed

    McCullough, Michael; Sakellariou, Konstantinos; Stemler, Thomas; Small, Michael

    2017-03-01

Recently proposed ordinal networks not only afford novel methods of nonlinear time series analysis but also constitute stochastic approximations of the deterministic flow time series from which the network models are constructed. In this paper, we construct ordinal networks from discretely sampled continuous chaotic time series and then regenerate new time series by taking random walks on the ordinal network. We then investigate the extent to which the dynamics of the original time series are encoded in the ordinal networks and retained through the process of regenerating new time series by using several distinct quantitative approaches. First, we use recurrence quantification analysis on traditional recurrence plots and order recurrence plots to compare the temporal structure of the original time series with random walk surrogate time series. Second, we estimate the largest Lyapunov exponent from the original time series and investigate the extent to which this invariant measure can be estimated from the surrogate time series. Finally, estimates of correlation dimension are computed to compare the topological properties of the original and surrogate time series dynamics. Our findings show that ordinal networks constructed from univariate time series data constitute stochastic models which approximate important dynamical properties of the original systems.
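The construct-then-regenerate loop can be sketched compactly: embed the series into ordinal patterns, count pattern transitions as weighted edges, then take a weighted random walk to emit a surrogate symbol sequence. This is a minimal sketch with a logistic-map source and illustrative parameters, not the authors' full analysis pipeline:

```python
import numpy as np
from collections import defaultdict

def ordinal_pattern(w):
    # Rank ordering of a length-m window, e.g. increasing -> (0, 1, 2)
    return tuple(np.argsort(w))

def build_ordinal_network(x, m=3):
    """Nodes are ordinal patterns of embedding dimension m; edge weights
    count observed transitions between consecutive patterns."""
    pats = [ordinal_pattern(x[i:i + m]) for i in range(len(x) - m + 1)]
    edges = defaultdict(int)
    for a, b in zip(pats, pats[1:]):
        edges[(a, b)] += 1
    return pats[0], edges

def random_walk(start, edges, steps, rng):
    """Regenerate a surrogate symbol sequence by a weighted random walk."""
    out = [start]
    for _ in range(steps):
        nbrs = [(b, w) for (a, b), w in edges.items() if a == out[-1]]
        if not nbrs:
            break
        nodes, ws = zip(*nbrs)
        out.append(nodes[rng.choice(len(nodes), p=np.array(ws, float) / sum(ws))])
    return out

rng = np.random.default_rng(0)
x = [0.4]                          # logistic map in the chaotic regime
for _ in range(2000):
    x.append(3.99 * x[-1] * (1 - x[-1]))
start, edges = build_ordinal_network(np.array(x), m=3)
walk = random_walk(start, edges, 500, rng)
print(len(set(walk)))              # distinct ordinal patterns visited
```

The walk can only visit patterns the source actually exhibits, so forbidden patterns of the chaotic map (the strictly decreasing length-3 pattern, for the logistic map) never appear in the surrogate.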

  7. Regenerating time series from ordinal networks

    NASA Astrophysics Data System (ADS)

    McCullough, Michael; Sakellariou, Konstantinos; Stemler, Thomas; Small, Michael

    2017-03-01

Recently proposed ordinal networks not only afford novel methods of nonlinear time series analysis but also constitute stochastic approximations of the deterministic flow time series from which the network models are constructed. In this paper, we construct ordinal networks from discretely sampled continuous chaotic time series and then regenerate new time series by taking random walks on the ordinal network. We then investigate the extent to which the dynamics of the original time series are encoded in the ordinal networks and retained through the process of regenerating new time series by using several distinct quantitative approaches. First, we use recurrence quantification analysis on traditional recurrence plots and order recurrence plots to compare the temporal structure of the original time series with random walk surrogate time series. Second, we estimate the largest Lyapunov exponent from the original time series and investigate the extent to which this invariant measure can be estimated from the surrogate time series. Finally, estimates of correlation dimension are computed to compare the topological properties of the original and surrogate time series dynamics. Our findings show that ordinal networks constructed from univariate time series data constitute stochastic models which approximate important dynamical properties of the original systems.

  8. Cloning strategy for producing brush-forming protein-based polymers.

    PubMed

    Henderson, Douglas B; Davis, Richey M; Ducker, William A; Van Cott, Kevin E

    2005-01-01

    Brush-forming polymers are being used in a variety of applications, and by using recombinant DNA technology, there exists the potential to produce protein-based polymers that incorporate unique structures and functions in these brush layers. Despite this potential, production of protein-based brush-forming polymers is not routinely performed. For the design and production of new protein-based polymers with optimal brush-forming properties, it would be desirable to have a cloning strategy that allows an iterative approach wherein the protein based-polymer product can be produced and evaluated, and then if necessary, it can be sequentially modified in a controlled manner to obtain optimal surface density and brush extension. In this work, we report on the development of a cloning strategy intended for the production of protein-based brush-forming polymers. This strategy is based on the assembly of modules of DNA that encode for blocks of protein-based polymers into a commercially available expression vector; there is no need for custom-modified vectors and no need for intermediate cloning vectors. Additionally, because the design of new protein-based biopolymers can be an iterative process, our method enables sequential modification of a protein-based polymer product. With at least 21 bacterial expression vectors and 11 yeast expression vectors compatible with this strategy, there are a number of options available for production of protein-based polymers. It is our intent that this strategy will aid in advancing the production of protein-based brush-forming polymers.

  9. Phase Response Design of Recursive All-Pass Digital Filters Using a Modified PSO Algorithm

    PubMed Central

    2015-01-01

This paper develops a new design scheme for the phase response of an all-pass recursive digital filter. A variant of the particle swarm optimization (PSO) algorithm is utilized for solving this kind of filter design problem. It is here called the modified PSO (MPSO) algorithm, in which an additional adjusting factor is introduced into the velocity-updating formula to improve the searching ability. In the proposed method, all of the designed filter coefficients are first collected into a parameter vector, and this vector is regarded as a particle of the algorithm. The MPSO with the modified velocity formula forces all particles to move toward the optimal or near-optimal solution by minimizing a defined objective function of the optimization problem. To show the effectiveness of the proposed method, two different kinds of linear phase response design examples are illustrated, and the general PSO algorithm is compared as well. The obtained results show that the MPSO is superior to the general PSO for the phase response design of recursive all-pass digital filters. PMID:26366168
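A generic PSO with an extra velocity term can be sketched as follows. The abstract does not give the exact form of the paper's adjusting factor, so the third term here (a pull toward the swarm mean, weight c3) is a hypothetical stand-in, and the objective is a simple sphere function rather than a filter phase-error criterion:

```python
import numpy as np

def mpso(f, dim, n=30, iters=200, w=0.7, c1=1.5, c2=1.5, c3=0.5, seed=0):
    """PSO with a hypothetical extra velocity term toward the swarm mean,
    standing in for the paper's (unspecified) additional adjusting factor."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5.0, 5.0, (n, dim))
    v = np.zeros((n, dim))
    pbest, pval = x.copy(), np.array([f(p) for p in x])
    g = pbest[pval.argmin()].copy()
    for _ in range(iters):
        r1, r2, r3 = rng.random((3, n, dim))
        v = (w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
             + c3 * r3 * (x.mean(axis=0) - x))      # extra adjusting term
        x = x + v
        fx = np.array([f(p) for p in x])
        better = fx < pval
        pbest[better], pval[better] = x[better], fx[better]
        g = pbest[pval.argmin()].copy()
    return g, pval.min()

g, best = mpso(lambda p: np.sum(p**2), dim=4)
print(best)
```

For a real all-pass design, f would measure the deviation of the filter's phase response (computed from the coefficient vector p) from the desired linear phase over a frequency grid.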

  10. Intelligent Fault Diagnosis of HVCB with Feature Space Optimization-Based Random Forest

    PubMed Central

    Ma, Suliang; Wu, Jianwen; Wang, Yuhao; Jia, Bowen; Jiang, Yuan

    2018-01-01

Mechanical faults of high-voltage circuit breakers (HVCBs) inevitably occur over long-term operation, so extracting fault features and identifying the fault type have become a key issue for ensuring the security and reliability of the power supply. Based on wavelet packet decomposition technology and the random forest algorithm, an effective identification system was developed in this paper. First, because Shannon entropy provides an incomplete description, the wavelet packet time-frequency energy rate (WTFER) was adopted as the input vector for the classifier model in the feature selection procedure. Then, a random forest classifier was used to diagnose the HVCB fault, assess the importance of the feature variables and optimize the feature space. Finally, the approach was verified on actual HVCB vibration signals covering six typical fault classes. The comparative experiment results show that the classification accuracy of the proposed method reached 93.33% with the original feature space and up to 95.56% with the optimized input feature vector. This indicates that the feature optimization procedure is successful and that the proposed diagnosis algorithm has higher efficiency and robustness than traditional methods. PMID:29659548
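The importance-driven feature-space optimization loop can be sketched with scikit-learn. The data here are synthetic stand-ins for wavelet-packet energy-rate features (the real pipeline would start from vibration signals), and the "keep the top ten features" rule is illustrative:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for energy-rate features across six fault classes
X, y = make_classification(n_samples=600, n_features=24, n_informative=8,
                           n_classes=6, n_clusters_per_class=1, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(Xtr, ytr)
base = rf.score(Xte, yte)

# Feature-space optimization: keep the most important features, retrain
keep = np.argsort(rf.feature_importances_)[-10:]
rf2 = RandomForestClassifier(n_estimators=200, random_state=0).fit(Xtr[:, keep], ytr)
tuned = rf2.score(Xte[:, keep], yte)
print(round(base, 3), round(tuned, 3))
```

Random forest importances come "for free" with the trained ensemble, which is why the paper can fold feature selection and classification into one model.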

  11. Direct discriminant locality preserving projection with Hammerstein polynomial expansion.

    PubMed

    Chen, Xi; Zhang, Jiashu; Li, Defang

    2012-12-01

Discriminant locality preserving projection (DLPP) is a linear approach that encodes discriminant information into the objective of locality preserving projection and improves its classification ability. To enhance the nonlinear description ability of DLPP, we can optimize the objective function of DLPP in reproducing kernel Hilbert space to form a kernel-based discriminant locality preserving projection (KDLPP). However, KDLPP suffers from the following problems: 1) a larger computational burden; 2) no explicit mapping functions, which results in more computational burden when projecting a new sample into the low-dimensional subspace; and 3) an inability to obtain the optimal discriminant vectors that best optimize the objective of DLPP. To overcome the weaknesses of KDLPP, in this paper, a direct discriminant locality preserving projection with Hammerstein polynomial expansion (HPDDLPP) is proposed. The proposed HPDDLPP directly implements the objective of DLPP in high-dimensional second-order Hammerstein polynomial space without matrix inverse, which extracts the optimal discriminant vectors for DLPP without a larger computational burden. Compared with some other related classical methods, experimental results for face and palmprint recognition problems indicate the effectiveness of the proposed HPDDLPP.

  12. A Coarse-Alignment Method Based on the Optimal-REQUEST Algorithm

    PubMed Central

    Zhu, Yongyun

    2018-01-01

    In this paper, we proposed a coarse-alignment method for strapdown inertial navigation systems based on attitude determination. The observation vectors, which can be obtained by inertial sensors, usually contain various types of noise, which affects the convergence rate and the accuracy of the coarse alignment. Given this drawback, we studied an attitude-determination method named optimal-REQUEST, which is an optimal method for attitude determination that is based on observation vectors. Compared to the traditional attitude-determination method, the filtering gain of the proposed method is tuned autonomously; thus, the convergence rate of the attitude determination is faster than in the traditional method. Within the proposed method, we developed an iterative method for determining the attitude quaternion. We carried out simulation and turntable tests, which we used to validate the proposed method’s performance. The experiment’s results showed that the convergence rate of the proposed optimal-REQUEST algorithm is faster and that the coarse alignment’s stability is higher. In summary, the proposed method has a high applicability to practical systems. PMID:29337895
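The batch problem underlying REQUEST-type attitude determination is Wahba's problem: find the attitude that best maps reference vectors onto body-frame observations. A minimal sketch via Davenport's q-method (the classic batch solution, not the paper's recursive optimal-REQUEST filter) follows; the quaternion is scalar-first and the two observation vectors are made up for the demo:

```python
import numpy as np

def cross_matrix(v):
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def attitude_matrix(q):
    # Scalar-first unit quaternion -> attitude (direction cosine) matrix
    q0, qv = q[0], q[1:]
    return ((q0**2 - qv @ qv) * np.eye(3)
            + 2.0 * np.outer(qv, qv) - 2.0 * q0 * cross_matrix(qv))

def davenport_q(body, ref, weights):
    """Davenport's q-method: the optimal quaternion for Wahba's problem is
    the eigenvector of the 4x4 K matrix with the largest eigenvalue."""
    B = sum(w * np.outer(b, r) for w, b, r in zip(weights, body, ref))
    z = np.array([B[1, 2] - B[2, 1], B[2, 0] - B[0, 2], B[0, 1] - B[1, 0]])
    K = np.zeros((4, 4))
    K[0, 0] = np.trace(B)
    K[0, 1:] = K[1:, 0] = z
    K[1:, 1:] = B + B.T - np.trace(B) * np.eye(3)
    vals, vecs = np.linalg.eigh(K)
    q = vecs[:, -1]                  # eigh sorts eigenvalues ascending
    return q / np.linalg.norm(q)

# Demo: recover a known attitude from two noise-free vector observations
q_true = np.array([np.cos(0.4), 0.0, 0.0, np.sin(0.4)])   # rotation about z
A_true = attitude_matrix(q_true)
ref = [np.array([1.0, 0.0, 0.0]), np.array([0.0, 0.0, 1.0])]
body = [A_true @ r for r in ref]
q_hat = davenport_q(body, ref, [0.5, 0.5])
```

REQUEST-style methods propagate and update the K matrix recursively as observations arrive, which is where the paper's autonomously tuned filtering gain enters.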

  13. Computation of optimal output-feedback compensators for linear time-invariant systems

    NASA Technical Reports Server (NTRS)

    Platzman, L. K.

    1972-01-01

The control of linear time-invariant systems with respect to a quadratic performance criterion was considered, subject to the constraint that the control vector be a constant linear transformation of the output vector. The optimal feedback matrix, f*, was selected to optimize the expected performance, given the covariance of the initial state. It is first shown that the expected performance criterion can be expressed as the ratio of two multinomials in the elements of f. This expression provides the basis for a feasible method of determining f* in the case of single-input single-output systems. A number of iterative algorithms are then proposed for the calculation of f* for multiple input-output systems. For two of these, monotone convergence is proved, but they involve the solution of nonlinear matrix equations at each iteration. Another is proposed involving the solution of Lyapunov equations at each iteration, and the gradual increase of the magnitude of a penalty function. Experience with this algorithm will be needed to determine whether or not it does, indeed, possess desirable convergence properties, and whether it can be used to determine the globally optimal f*.
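The building block such iterative algorithms evaluate at each step, the expected quadratic cost of a fixed output-feedback gain, can be sketched via a Lyapunov equation. The plant below is a made-up 2-state example, not from the report, and the gains are assumed to stabilize the closed loop:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Toy plant: xdot = A x + B u, y = C x, with static output feedback u = F y
A = np.array([[0.0, 1.0], [-2.0, -1.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
Q = np.eye(2)                  # state weighting
R = np.array([[1.0]])          # control weighting
X0 = np.eye(2)                 # covariance of the initial state

def expected_cost(F):
    """J(F) = tr(P X0), where Acl' P + P Acl + Q + (FC)' R (FC) = 0
    and Acl = A + B F C is the (assumed stable) closed-loop matrix."""
    Acl = A + B @ F @ C
    W = Q + C.T @ F.T @ R @ F @ C
    P = solve_continuous_lyapunov(Acl.T, -W)
    return float(np.trace(P @ X0))

print(expected_cost(np.array([[-1.0]])), expected_cost(np.array([[-2.0]])))
```

Searching over F with this cost function (by gradient or direct search) recovers the flavor of the report's Lyapunov-equation-per-iteration algorithm.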

  14. Development of two bacterial artificial chromosome shuttle vectors for a recombination-based cloning and regulated expression of large genes in mammalian cells.

    PubMed

    Hong, Y K; Kim, D H; Beletskii, A; Lee, C; Memili, E; Strauss, W M

    2001-04-01

    Most conditional expression vectors designed for mammalian cells have been valuable systems for studying genes of interest by regulating their expression. The available vectors, however, are reliable for short cDNA clones and not optimal for relatively long fragments of genomic DNA or long cDNAs. Here, we report the construction of two bacterial artificial chromosome (BAC) vectors capable of harboring large inserts and shuttling among Escherichia coli, yeast, and mammalian cells. These two vectors, pEYMT and pEYMI, contain conditional expression systems designed to be regulated by tetracycline and mouse interferons, respectively. To test the properties of the vectors, we cloned into both vectors the green fluorescent protein (GFP) gene through an in vitro ligation reaction and the 17.8-kb-long X-inactive-specific transcript (Xist) cDNA through homologous recombination in yeast. Subsequently, we characterized their regulated expression properties using real-time quantitative RT-PCR (TaqMan) and RNA fluorescence in situ hybridization (FISH). We demonstrate that these two BAC vectors are good systems for recombination-based cloning and regulated expression of large genes in mammalian cells. Copyright 2001 Academic Press.

  15. PROFILE user's guide

    NASA Technical Reports Server (NTRS)

    Collins, L.; Saunders, D.

    1986-01-01

    User information is provided for program PROFILE, an aerodynamic design utility for refining, plotting, and tabulating airfoil profiles. The theory and implementation details for two of the more complex options are also presented. These are the REFINE option, for smoothing curvature in selected regions while retaining or seeking some specified thickness ratio, and the OPTIMIZE option, which seeks a specified curvature distribution. REFINE uses linear techniques to manipulate ordinates via the central-difference approximation to second derivatives, while OPTIMIZE works directly with curvature using nonlinear least-squares techniques. Use of programs QPLOT and BPLOT is also described, since all of the plots provided by PROFILE (airfoil coordinates, curvature distributions) are produced via the general-purpose QPLOT utility. BPLOT illustrates (again, via QPLOT) the shape functions used by two of PROFILE's options. The programs were designed and implemented for the Applied Aerodynamics Branch at NASA Ames Research Center, Moffett Field, California; they are written in FORTRAN and run on a VAX-11/780 under VMS.

  16. Genetic Algorithm for Traveling Salesman Problem with Modified Cycle Crossover Operator

    PubMed Central

    Mohamd Shoukry, Alaa; Gani, Showkat

    2017-01-01

    Genetic algorithms are evolutionary techniques used for optimization, following the survival-of-the-fittest idea. These methods do not guarantee optimal solutions, but they usually give good approximations in reasonable time. Genetic algorithms are useful for NP-hard problems, especially the traveling salesman problem. A genetic algorithm depends on its selection criteria and its crossover and mutation operators. To tackle the traveling salesman problem with genetic algorithms, various representations are available, such as binary, path, adjacency, ordinal, and matrix representations. In this article, we propose a new crossover operator for the traveling salesman problem that minimizes the total distance. The approach uses path representation, the most natural way to represent a legal tour. Computational results on benchmark TSPLIB instances are reported for traditional path-representation operators, such as the partially mapped and order crossovers, alongside the new cycle crossover operator, and show improvements. PMID:29209364
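
    The record proposes a modified cycle crossover; the modification itself is not detailed in the abstract, so the sketch below implements the classical cycle crossover (CX) on path-represented tours, the operator the paper's variant builds on.

```python
def cycle_crossover(p1, p2):
    """Classical cycle crossover (CX) for permutation-encoded tours.

    Positions belonging to the first cycle are inherited from p1, and the
    remaining positions from p2, so the child is always a valid tour.
    """
    n = len(p1)
    child = [None] * n
    idx = 0
    while child[idx] is None:      # trace one cycle through both parents
        child[idx] = p1[idx]
        idx = p1.index(p2[idx])
    for i in range(n):             # fill the rest from the other parent
        if child[i] is None:
            child[i] = p2[i]
    return child
```

    Because positions are inherited wholesale from one parent per cycle, every city appears exactly once and no repair step is needed, unlike naive one-point crossover on permutations.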

  17. Genetic Algorithm for Traveling Salesman Problem with Modified Cycle Crossover Operator.

    PubMed

    Hussain, Abid; Muhammad, Yousaf Shad; Nauman Sajid, M; Hussain, Ijaz; Mohamd Shoukry, Alaa; Gani, Showkat

    2017-01-01

    Genetic algorithms are evolutionary techniques used for optimization, following the survival-of-the-fittest idea. These methods do not guarantee optimal solutions, but they usually give good approximations in reasonable time. Genetic algorithms are useful for NP-hard problems, especially the traveling salesman problem. A genetic algorithm depends on its selection criteria and its crossover and mutation operators. To tackle the traveling salesman problem with genetic algorithms, various representations are available, such as binary, path, adjacency, ordinal, and matrix representations. In this article, we propose a new crossover operator for the traveling salesman problem that minimizes the total distance. The approach uses path representation, the most natural way to represent a legal tour. Computational results on benchmark TSPLIB instances are reported for traditional path-representation operators, such as the partially mapped and order crossovers, alongside the new cycle crossover operator, and show improvements.

  18. Computational mechanics analysis tools for parallel-vector supercomputers

    NASA Technical Reports Server (NTRS)

    Storaasli, Olaf O.; Nguyen, Duc T.; Baddourah, Majdi; Qin, Jiangning

    1993-01-01

    Computational algorithms for structural analysis on parallel-vector supercomputers are reviewed. These parallel algorithms, developed by the authors, are for the assembly of structural equations, 'out-of-core' strategies for linear equation solution, massively distributed-memory equation solution, unsymmetric equation solution, general eigensolution, geometrically nonlinear finite element analysis, design sensitivity analysis for structural dynamics, optimization search analysis and domain decomposition. The source code for many of these algorithms is available.

  19. Interdependence of spin structure, anion height and electronic structure of BaFe2As2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sen, Smritijit, E-mail: smritijit.sen@gmail.com; Ghosh, Haranath, E-mail: hng@rrcat.gov.in; Homi Bhabha National Institute, Anushaktinagar, Mumbai, 400094

    2016-05-06

    Superconducting as well as other electronic properties of Fe-based superconductors are quite sensitive to the structural parameters, especially the anion height, which is intimately related to z_As, the fractional z co-ordinate of the As atom. Due to the presence of strong magnetic fluctuations in these Fe-based superconductors, the structural parameters (lattice parameters a, b, c, including z_As) optimized using density functional theory (DFT) under the generalized gradient approximation (GGA) do not match experimental values accurately. In this work, we show that the optimized value of z_As is strongly influenced by the spin structures in the orthorhombic phase of the BaFe2As2 system. We take all possible spin structures for the orthorhombic BaFe2As2 system and then optimize z_As. Using these optimized structures we calculate electronic structures such as the density of states, band structures, etc., for each spin configuration. From these studies we show that the electronic structure and the orbital order, which is responsible for the structural as well as the related nematic transition, are significantly influenced by the spin structures.

  20. Searching for transcription factor binding sites in vector spaces

    PubMed Central

    2012-01-01

    Background Computational approaches to transcription factor binding site identification have been actively researched in the past decade. By learning from known binding sites, such approaches can identify new binding sites of a transcription factor in unannotated sequences. A number of search methods have been introduced over the years. However, no single method performs best on all transcription factors; to identify the best method for a particular transcription factor, one usually has to compare a handful of methods. Hence, it is highly desirable for a method to perform automatic optimization for individual transcription factors. Results We proposed to search for transcription factor binding sites in vector spaces. This framework allows us to identify the best method for each individual transcription factor. We further introduced two novel methods, the negative-to-positive vector (NPV) and optimal discriminating vector (ODV) methods, to construct query vectors to search for binding sites in vector spaces. Extensive cross-validation experiments showed that the proposed methods significantly outperformed the ungapped-likelihood-under-positional-background method, a state-of-the-art method, and the widely used position-specific scoring matrix method. We further demonstrated that motif subtypes of a TF can be readily identified in this framework and that two variants, the kNPV and kODV methods, benefited significantly from motif subtype identification. Finally, independent validation on ChIP-seq data showed that the ODV and NPV methods significantly outperformed the other compared methods. Conclusions We conclude that the proposed framework is highly flexible. It enables the two novel methods to automatically identify a TF-specific subspace to search for binding sites. Implementations are available as source code at: http://biogrid.engr.uconn.edu/tfbs_search/. PMID:23244338
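
    The abstract does not specify the NPV method's exact feature encoding, so the following is an assumption-laden sketch: sites are one-hot encoded position by position, and the "negative-to-positive vector" is taken to be the difference between the positive and negative class centroids, used as an inner-product query. All names and data here are invented for illustration.

```python
BASES = "ACGT"

def encode(site):
    """One-hot, position-specific encoding of a fixed-length DNA site."""
    vec = []
    for ch in site:
        vec.extend(1.0 if ch == b else 0.0 for b in BASES)
    return vec

def centroid(vectors):
    """Elementwise mean of a list of equal-length vectors."""
    n = len(vectors)
    return [sum(col) / n for col in zip(*vectors)]

def npv_query(positives, negatives):
    """Query vector pointing from the negative centroid to the positive one."""
    cp = centroid([encode(s) for s in positives])
    cn = centroid([encode(s) for s in negatives])
    return [p - q for p, q in zip(cp, cn)]

def score(site, query):
    """Inner-product score of a candidate site against the query vector."""
    return sum(a * b for a, b in zip(encode(site), query))
```

    Candidate sites can then be ranked by score; the paper's framework goes further by choosing, per transcription factor, the query-construction method (NPV or ODV) that performs best in cross-validation.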

  1. Sea ice motion from low-resolution satellite sensors: An alternative method and its validation in the Arctic

    NASA Astrophysics Data System (ADS)

    Lavergne, T.; Eastwood, S.; Teffah, Z.; Schyberg, H.; Breivik, L.-A.

    2010-10-01

    The retrieval of sea ice motion with the Maximum Cross-Correlation (MCC) method from low-resolution (10-15 km) spaceborne imaging sensors is challenged by a dominating quantization noise as the time span of displacement vectors is shortened. To allow investigation of shorter displacements from these instruments, we introduce an alternative sea ice motion tracking algorithm that builds on the MCC method but relies on a continuous optimization step for computing the motion vector. The prime effect of this method is to effectively dampen the quantization noise, an artifact of the MCC. It allows retrieval of spatially smooth 48 h sea ice motion vector fields in the Arctic. Strategies to detect and correct erroneous vectors, and to optimally merge several polarization channels of a given instrument, are also described. A test processing chain is implemented and run with several active and passive microwave imagers (Advanced Microwave Scanning Radiometer-EOS (AMSR-E), Special Sensor Microwave Imager, and Advanced Scatterometer) during three Arctic autumn, winter, and spring seasons. Ice motion vectors are collocated to and compared with GPS positions of in situ drifters. Error statistics are shown to range from 2.5 to 4.5 km (standard deviation of the vector components) depending on the sensor, without significant bias. We discuss the relative contribution of measurement and representativeness errors by analyzing monthly validation statistics. The 37 GHz channels of the AMSR-E instrument allow for the best validation statistics. The operational low-resolution sea ice drift product of the EUMETSAT OSI SAF (European Organisation for the Exploitation of Meteorological Satellites Ocean and Sea Ice Satellite Application Facility) is based on the algorithms presented in this paper.
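
    The MCC step that the paper builds on can be illustrated with a toy, unnormalized cross-correlation search (the real method uses normalized correlation on satellite imagery; the grids and values here are invented):

```python
def mcc_displacement(img0, img1, max_shift=3):
    """Brute-force cross-correlation: find the integer (dy, dx) offset that
    best aligns img1 with img0. Both images are lists of equal-length rows."""
    h, w = len(img0), len(img0[0])
    best, best_score = (0, 0), float("-inf")
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            score = 0.0
            for y in range(h):
                for x in range(w):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < h and 0 <= xx < w:
                        score += img0[y][x] * img1[yy][xx]
            if score > best_score:
                best_score, best = score, (dy, dx)
    return best
```

    The returned displacement is inherently integer-valued in pixels, which is exactly the quantization noise the paper's continuous optimization step is designed to dampen.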

  2. Visual grading characteristics and ordinal regression analysis during optimisation of CT head examinations.

    PubMed

    Zarb, Francis; McEntee, Mark F; Rainford, Louise

    2015-06-01

    To evaluate visual grading characteristics (VGC) and ordinal regression analysis during head CT optimisation as a potential alternative to visual grading assessment (VGA), traditionally employed to score anatomical visualisation. Patient images (n = 66) were obtained using current and optimised imaging protocols from two CT suites: a 16-slice scanner at the national Maltese centre for trauma and a 64-slice scanner in a private centre. Local resident radiologists (n = 6) performed VGA followed by VGC and ordinal regression analysis. VGC alone indicated that optimised protocols had similar image quality as current protocols. Ordinal logistic regression analysis provided an in-depth evaluation, criterion by criterion allowing the selective implementation of the protocols. The local radiology review panel supported the implementation of optimised protocols for brain CT examinations (including trauma) in one centre, achieving radiation dose reductions ranging from 24 % to 36 %. In the second centre a 29 % reduction in radiation dose was achieved for follow-up cases. The combined use of VGC and ordinal logistic regression analysis led to clinical decisions being taken on the implementation of the optimised protocols. This improved method of image quality analysis provided the evidence to support imaging protocol optimisation, resulting in significant radiation dose savings. • There is need for scientifically based image quality evaluation during CT optimisation. • VGC and ordinal regression analysis in combination led to better informed clinical decisions. • VGC and ordinal regression analysis led to dose reductions without compromising diagnostic efficacy.
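
    As a hedged sketch of the VGC idea (the study's actual rating scale and software are not reproduced here), the function below computes the area under a visual grading characteristics curve from two sets of ordinal image-quality scores; an area near 0.5 indicates equal perceived quality between protocols, analogous to ROC analysis.

```python
def vgc_auc(scores_ref, scores_new, categories=(1, 2, 3, 4)):
    """Area under the VGC curve from ordinal scores of two imaging protocols.

    For each rating threshold, the fraction of 'new protocol' scores at or
    above the threshold is plotted against the same fraction for the
    reference protocol; the trapezoidal area under that curve is returned.
    """
    def frac_at_least(scores, t):
        return sum(1 for s in scores if s >= t) / len(scores)

    pts = {(0.0, 0.0), (1.0, 1.0)}
    for t in categories:
        pts.add((frac_at_least(scores_ref, t), frac_at_least(scores_new, t)))
    pts = sorted(pts)
    return sum((x2 - x1) * (y1 + y2) / 2.0
               for (x1, y1), (x2, y2) in zip(pts, pts[1:]))
```

    An AUC above 0.5 means the new protocol's ratings stochastically dominate the reference's; the study pairs this summary with ordinal regression to decide criterion by criterion whether a dose-reduced protocol can be adopted.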

  3. Adaptation and optimization of a line-by-line radiative transfer program for the STAR-100 (STARSMART)

    NASA Technical Reports Server (NTRS)

    Rarig, P. L.

    1980-01-01

    A program to calculate upwelling infrared radiation was modified to operate efficiently on the STAR-100. The modified software processes specific test cases significantly faster than the initial STAR-100 code. For example, a midlatitude summer atmospheric model is executed in less than 2% of the time originally required on the STAR-100. Furthermore, the optimized program performs extra operations to save the calculated absorption coefficients. Some of the advantages and pitfalls of virtual memory and vector processing are discussed along with strategies used to avoid loss of accuracy and computing power. Results from the vectorized code, in terms of speed, cost, and relative error with respect to serial code solutions are encouraging.

  4. About the use of vector optimization for company's contractors selection

    NASA Astrophysics Data System (ADS)

    Medvedeva, M. A.; Medvedev, M. A.

    2017-07-01

    For the effective functioning of an enterprise it is necessary to make the right choice of partners: suppliers of raw materials, buyers of finished products, and others with which the company interacts in the course of its business. However, the presence of a large number of enterprises on the market makes choosing the most appropriate among them difficult, and requires the ability to objectively assess possible partners based on a multilateral analysis of their activities. Such an analysis can be carried out by solving a multiobjective mathematical programming problem using the methods of vector optimization. The present work addresses the theoretical foundations of this approach and describes an algorithm realizing the proposed method on a practical example.
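
    The abstract does not give the concrete algorithm, so the following is a generic sketch of one common vector-optimization approach, weighted-sum scalarization over min-max-normalized criteria; the firm names, criteria, and weights are invented for illustration.

```python
def rank_partners(candidates, weights, maximize):
    """Rank candidates by a weighted sum of min-max-normalized criteria.

    candidates: {name: {criterion: value}}
    weights:    {criterion: weight}, typically summing to 1
    maximize:   {criterion: True if larger values are better}
    """
    crits = list(weights)
    lo = {c: min(v[c] for v in candidates.values()) for c in crits}
    hi = {c: max(v[c] for v in candidates.values()) for c in crits}

    def norm(c, value):
        if hi[c] == lo[c]:
            return 1.0                      # criterion does not discriminate
        frac = (value - lo[c]) / (hi[c] - lo[c])
        return frac if maximize[c] else 1.0 - frac

    scores = {name: sum(weights[c] * norm(c, vals[c]) for c in crits)
              for name, vals in candidates.items()}
    return sorted(scores, key=scores.get, reverse=True)
```

    Scalarization collapses the vector objective to a single score and therefore returns one Pareto-optimal choice per weight vector; sweeping the weights traces out other Pareto-optimal partners.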

  5. Computation of output feedback gains for linear stochastic systems using the Zangwill-Powell method

    NASA Technical Reports Server (NTRS)

    Kaufman, H.

    1977-01-01

    Because conventional optimal linear regulator theory results in a controller which requires the capability of measuring and/or estimating the entire state vector, it is of interest to consider procedures for computing controls which are restricted to be linear feedback functions of a lower dimensional output vector and which take into account the presence of measurement noise and process uncertainty. To this effect a stochastic linear model has been developed that accounts for process parameter and initial uncertainty, measurement noise, and a restricted number of measurable outputs. Optimization with respect to the corresponding output feedback gains was then performed for both finite and infinite time performance indices without gradient computation by using Zangwill's modification of a procedure originally proposed by Powell.
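
    Zangwill's modification of Powell's method is a conjugate-direction, derivative-free optimizer; reproducing it faithfully is beyond a short sketch, but the simpler cyclic coordinate search below illustrates the same gradient-free principle used to tune the feedback gains without computing gradients.

```python
def coordinate_search(f, x0, step=1.0, tol=1e-6, max_iter=500):
    """Minimize f without gradients by cyclic coordinate moves.

    Each coordinate is probed with +/- step; the step is halved whenever no
    move improves the objective. Powell-type methods refine this idea with
    conjugate search directions for faster convergence.
    """
    x = list(x0)
    fx = f(x)
    for _ in range(max_iter):
        improved = False
        for i in range(len(x)):
            for s in (step, -step):
                trial = list(x)
                trial[i] += s
                ft = f(trial)
                if ft < fx:
                    x, fx, improved = trial, ft, True
        if not improved:
            step /= 2.0
            if step < tol:
                break
    return x
```

    In the paper's setting, f would be the expected quadratic cost as a function of the output-feedback gain entries, evaluated by solving the closed-loop covariance equations at each trial point.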

  6. Fruit fly optimization based least square support vector regression for blind image restoration

    NASA Astrophysics Data System (ADS)

    Zhang, Jiao; Wang, Rui; Li, Junshan; Yang, Yawei

    2014-11-01

    The goal of image restoration is to reconstruct the original scene from a degraded observation; it is a critical and challenging task in image processing. Classical restoration requires explicit knowledge of the point spread function (PSF) and a description of the noise as priors. However, these are not available in many real image-processing settings, so recovery must proceed as blind image restoration. Since blind deconvolution is an ill-posed problem, many blind restoration methods make additional assumptions to impose restrictions. Because the PSF and noise energy differ from case to case, blurred images can be quite different, and it is difficult to achieve a good balance between proper assumptions and high restoration quality in blind deconvolution. Recently, machine learning techniques have been applied to blind image restoration. Least square support vector regression (LSSVR) has proven to offer strong potential in estimation and forecasting problems. Therefore, this paper proposes an LSSVR-based image restoration method. Selecting optimal parameters for the support vector machine is essential to the training result. As a novel meta-heuristic, the fruit fly optimization algorithm (FOA) can handle such optimization problems and has the advantage of fast convergence to the global optimum. In the proposed method, training samples map a neighborhood in the degraded image to the central pixel in the original image; this mapping is learned by training the LSSVR, whose two parameters are optimized through FOA with a fitness function given by the restoration error. With the acquired mapping, the degraded image can be recovered. Experimental results show that the proposed method obtains a satisfactory restoration effect. Compared with BP neural network regression, the SVR method, and the Lucy-Richardson algorithm, it speeds up restoration and performs better; both objective and subjective restoration performance are studied in the comparison experiments.

  7. Trajectory optimization of spacecraft high-thrust orbit transfer using a modified evolutionary algorithm

    NASA Astrophysics Data System (ADS)

    Shirazi, Abolfazl

    2016-10-01

    This article introduces a new method to optimize finite-burn orbital manoeuvres based on a modified evolutionary algorithm. Optimization is carried out by converting the orbital manoeuvre into a parameter optimization problem, assigning inverse tangential functions to the changes in the direction angles of the thrust vector. The problem is analysed using boundary delimitation in a common optimization algorithm. A method is introduced to achieve acceptable values for the optimization variables using nonlinear simulation, which results in an enlarged convergence domain. The presented algorithm achieves highly optimal solutions with fast convergence. A numerical example of a three-dimensional optimal orbital transfer is presented and the accuracy of the proposed algorithm is shown.

  8. Sacramento's parking lot shading ordinance: environmental and economic costs of compliance

    Treesearch

    E.G. McPherson

    2001-01-01

    A survey of 15 Sacramento parking lots and computer modeling were used to evaluate parking capacity and compliance with the 1983 ordinance requiring 50% shade of paved areas (PA) 15 years after development. There were 6% more parking spaces than required by ordinance, and 36% were vacant during peak use periods. Current shade was 14% with 44% of this amount provided by...

  9. 41 CFR 102-74.351 - If a state or local government has a smoke-free ordinance that is more strict than the smoking...

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... government has a smoke-free ordinance that is more strict than the smoking policy for Federal facilities... REGULATION REAL PROPERTY 74-FACILITY MANAGEMENT Facility Management Smoking § 102-74.351 If a state or local government has a smoke-free ordinance that is more strict than the smoking policy for Federal facilities...

  10. 41 CFR 102-74.351 - If a state or local government has a smoke-free ordinance that is more strict than the smoking...

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... government has a smoke-free ordinance that is more strict than the smoking policy for Federal facilities... REGULATION REAL PROPERTY 74-FACILITY MANAGEMENT Facility Management Smoking § 102-74.351 If a state or local government has a smoke-free ordinance that is more strict than the smoking policy for Federal facilities...

  11. 41 CFR 102-74.351 - If a state or local government has a smoke-free ordinance that is more strict than the smoking...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... government has a smoke-free ordinance that is more strict than the smoking policy for Federal facilities... REGULATION REAL PROPERTY 74-FACILITY MANAGEMENT Facility Management Smoking § 102-74.351 If a state or local government has a smoke-free ordinance that is more strict than the smoking policy for Federal facilities...

  12. 41 CFR 102-74.351 - If a state or local government has a smoke-free ordinance that is more strict than the smoking...

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... government has a smoke-free ordinance that is more strict than the smoking policy for Federal facilities... REGULATION REAL PROPERTY 74-FACILITY MANAGEMENT Facility Management Smoking § 102-74.351 If a state or local government has a smoke-free ordinance that is more strict than the smoking policy for Federal facilities...

  13. 41 CFR 102-74.351 - If a state or local government has a smoke-free ordinance that is more strict than the smoking...

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... government has a smoke-free ordinance that is more strict than the smoking policy for Federal facilities... REGULATION REAL PROPERTY 74-FACILITY MANAGEMENT Facility Management Smoking § 102-74.351 If a state or local government has a smoke-free ordinance that is more strict than the smoking policy for Federal facilities...

  14. Vaccination strategies for SIR vector-transmitted diseases.

    PubMed

    Cruz-Pacheco, Gustavo; Esteva, Lourdes; Vargas, Cristobal

    2014-08-01

    Vector-borne diseases are among the major public health problems in the world and have the fastest spreading rates. Control measures have focused on vector control, with poor results in most cases. Vaccines should help to reduce disease incidence, but vaccination strategies must also be defined. In this work, we propose a vector-transmitted SIR disease model with an age-structured population subject to a vaccination program. We find an expression for the age-dependent basic reproductive number R(0), and we show that the disease-free equilibrium is locally stable for R(0) ≤ 1 and that a unique endemic equilibrium exists for R(0) > 1. We apply the theoretical results to public data to evaluate vaccination strategies, immunization levels, and the optimal age of vaccination for dengue disease.
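
    The paper's age-structured model is not reproduced in the abstract; as a hedged illustration of the threshold behaviour around R(0) = 1, the sketch below Euler-integrates a minimal (non-age-structured) host SIR / vector SI model with invented parameter values.

```python
def peak_host_infection(beta_hv, beta_vh, gamma, mu_v,
                        ih0=0.01, days=300.0, dt=0.1):
    """Euler-integrate a minimal host SIR / vector SI model (all fractions).

    beta_hv: vector-to-host transmission rate, beta_vh: host-to-vector rate,
    gamma: host recovery rate, mu_v: vector birth/death rate.
    Returns the peak fraction of infectious hosts over the simulation.
    """
    sh, ih = 1.0 - ih0, ih0        # susceptible / infectious hosts
    sv, iv = 1.0, 0.0              # susceptible / infectious vectors
    peak, t = ih, 0.0
    while t < days:
        new_h = beta_hv * sh * iv  # new host infections per unit time
        new_v = beta_vh * sv * ih  # new vector infections per unit time
        sh -= dt * new_h
        ih += dt * (new_h - gamma * ih)
        sv += dt * (mu_v - new_v - mu_v * sv)
        iv += dt * (new_v - mu_v * iv)
        peak = max(peak, ih)
        t += dt
    return peak
```

    For this model the epidemic threshold is governed by beta_hv * beta_vh / (gamma * mu_v): well above 1 the seed grows into an outbreak, well below 1 it only decays, which is the dichotomy the paper establishes for its age-dependent R(0).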

  15. Gene Suppression of Mouse Testis In Vivo Using Small Interfering RNA Derived from Plasmid Vectors

    PubMed Central

    Takizawa, Takami; Ishikawa, Tomoko; Kosuge, Takuji; Mizuguchi, Yoshiaki; Sato, Yoko; Koji, Takehiko; Araki, Yoshihiko; Takizawa, Toshihiro

    2012-01-01

    We evaluated whether inhibiting gene expression by small interfering RNA (siRNA) can be used for an in vivo model using a germ cell-specific gene (Tex101) as a model target in mouse testis. We generated plasmid-based expression vectors of siRNA targeting the Tex101 gene and transfected them into postnatal day 10 mouse testes by in vivo electroporation. After optimizing the electroporation conditions using a vector transfected into the mouse testis, a combination of high- and low-voltage pulses showed excellent transfection efficiency for the vectors with minimal tissue damage, but gene suppression was transient. Gene suppression by in vivo electroporation may be helpful as an alternative approach when designing experiments to unravel the basic role of testicular molecules. PMID:22489107

  16. Design of a mixer for the thrust-vectoring system on the high-alpha research vehicle

    NASA Technical Reports Server (NTRS)

    Pahle, Joseph W.; Bundick, W. Thomas; Yeager, Jessie C.; Beissner, Fred L., Jr.

    1996-01-01

    One of the advanced control concepts being investigated on the High-Alpha Research Vehicle (HARV) is multi-axis thrust vectoring using an experimental thrust-vectoring (TV) system consisting of three hydraulically actuated vanes per engine. A mixer is used to translate the pitch-, roll-, and yaw-TV commands into the appropriate TV-vane commands for distribution to the vane actuators. A computer-aided optimization process was developed to perform the inversion of the thrust-vectoring effectiveness data for use by the mixer in performing this command translation. Using this process a new mixer was designed for the HARV and evaluated in simulation and flight. An important element of the mixer is the priority logic, which determines priority among the pitch-, roll-, and yaw-TV commands.

  17. Survey of city ordinances and local enforcement regarding commercial availability of tobacco to minors in Minnesota, United States

    PubMed Central

    Forster, J. L.; Komro, K. A.; Wolfson, M.

    1996-01-01

    OBJECTIVES: To determine the extent and nature of local ordinances to regulate tobacco sales to minors, the level of enforcement of local and state laws concerning tobacco availability to minors, and sanctions applied as a result of enforcement. DESIGN: Tobacco control ordinances were collected in 1993 from 222 of the 229 cities greater than or equal to 2000 population in Minnesota, United States. In addition a telephone survey with the head of the agency responsible for enforcement of the tobacco ordinances was conducted. MAIN OUTCOME MEASURES: Presence or absence of legislative provisions dealing with youth and tobacco, including licensure of tobacco retailers, sanctions for selling tobacco products to minors, and restrictions on cigarette vending machines, self-service merchandising, and point-of-purchase advertising; and enforcement of these laws (use of inspections and "sting" operations, and sanctions imposed on businesses and minors). RESULTS: Almost 94% of cities required tobacco licences for retailers. However, 57% of the cities specified licences for cigarettes only. Annual licence fees ranged from $10 to $250, with the higher fees adopted in the previous four years. More than 25% of the cities had adopted some kind of restriction on cigarette vending machines, but only six communities had banned self-service cigarette displays. Three cities specified a minimum age for tobacco sales staff. Fewer than 25% of police officials reported having conducted compliance checks with minors or in-store observations of tobacco sales to determine if minors were being sold tobacco during the current year. Police carrying out compliance checks with youth were almost four times as likely to issue citations as those doing in-store observations. More than 90% of police reported enforcement of the law against tobacco purchase or possession by minors, and nearly 40% reported application of penalties against minors. 
    CONCLUSIONS: Almost 75% of the cities have done nothing to change policies or enforcement practices to encourage compliance with tobacco age-of-sale legislation, and only a few of the remaining cities have adopted optimal policies. In addition, officials in Minnesota cities are much more likely to use enforcement strategies against minors who buy tobacco than against merchants who sell tobacco. PMID:8795859

  18. Low-Thrust Many-Revolution Trajectory Optimization via Differential Dynamic Programming and a Sundman Transformation

    NASA Technical Reports Server (NTRS)

    Aziz, Jonathan D.; Parker, Jeffrey S.; Scheeres, Daniel J.; Englander, Jacob A.

    2017-01-01

    Low-thrust trajectories about planetary bodies characteristically span a high count of orbital revolutions. Directing the thrust vector over many revolutions presents a challenging optimization problem for any conventional strategy. This paper demonstrates the tractability of low-thrust trajectory optimization about planetary bodies by applying a Sundman transformation to change the independent variable of the spacecraft equations of motion to the eccentric anomaly and performing the optimization with differential dynamic programming. Fuel-optimal geocentric transfers are shown in excess of 1000 revolutions while subject to Earth's J2 perturbation and lunar gravity.

  19. An accelerated training method for back propagation networks

    NASA Technical Reports Server (NTRS)

    Shelton, Robert O. (Inventor)

    1993-01-01

    The principal objective is to provide a training procedure for a feed-forward, back propagation neural network which greatly accelerates the training process. A set of orthogonal singular vectors is determined from the input matrix such that the standard deviations of the projections of the input vectors along these singular vectors, as a set, are substantially maximized, thus providing an optimal means of presenting the input data. Novelty exists in the method of extracting, from the set of input data, a set of features which can serve to represent the input data in a simplified manner, thus greatly reducing the time and expense of training the system.
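
    The patented procedure extracts singular vectors of the input matrix along which the projection standard deviations are maximized; in two dimensions this reduces to the leading eigenvector of the sample covariance, which the sketch below computes in closed form (a simplification of the patent's general SVD formulation; the data are invented).

```python
import math

def principal_direction(points):
    """Unit direction along which 2-D input points have maximal variance.

    Two-dimensional special case of the patent's idea: the leading singular
    vector of the centered input matrix, obtained here from the closed-form
    eigendecomposition of the 2x2 sample covariance matrix.
    """
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    sxx = sum((p[0] - mx) ** 2 for p in points) / n
    syy = sum((p[1] - my) ** 2 for p in points) / n
    sxy = sum((p[0] - mx) * (p[1] - my) for p in points) / n
    # Larger eigenvalue of [[sxx, sxy], [sxy, syy]].
    lam = (sxx + syy + math.sqrt((sxx - syy) ** 2 + 4 * sxy * sxy)) / 2
    if abs(sxy) < 1e-12:                   # axis-aligned data
        return (1.0, 0.0) if sxx >= syy else (0.0, 1.0)
    vx, vy = sxy, lam - sxx                # corresponding eigenvector
    norm = math.hypot(vx, vy)
    return (vx / norm, vy / norm)
```

    Projecting the inputs onto such directions (ordered by projection variance) decorrelates and compresses the training set, which is what speeds up back-propagation in the patent's scheme.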

  20. The relationship between temperamental traits and the level of performance of an eye-hand co-ordination task in jet pilots.

    PubMed

    Biernacki, Marcin; Tarnowski, Adam

    2008-01-01

    When assessing the psychological suitability for the profession of a pilot, it is important to consider personality traits and psychomotor abilities. Our study aimed at estimating the role of temperamental traits as components of pilots' personality in eye-hand co-ordination. The assumption was that differences in the escalation of the level of temperamental traits, as measured with the Formal Characteristic of Behaviour-Temperament Inventory (FCB-TI), will significantly influence eye-hand co-ordination. At the level of general scores, enhanced briskness proved to be the most important trait for eye-hand co-ordination. An analysis of partial scores additionally underlined the importance of sensory sensitivity, endurance and activity. The application of eye-hand co-ordination tasks, which involve energetic and temporal dimensions of performance, helped to disclose the role of biologically-based personality traits in psychomotor performance. The implication of these findings for selecting pilots is discussed.

  1. Genomic-Enabled Prediction of Ordinal Data with Bayesian Logistic Ordinal Regression.

    PubMed

    Montesinos-López, Osval A; Montesinos-López, Abelardo; Crossa, José; Burgueño, Juan; Eskridge, Kent

    2015-08-18

    Most genomic-enabled prediction models developed so far assume that the response variable is continuous and normally distributed. The exception is the probit model, developed for ordered categorical phenotypes. In statistical applications, because of the easy implementation of the Bayesian probit ordinal regression (BPOR) model, Bayesian logistic ordinal regression (BLOR) is rarely implemented in the context of genomic-enabled prediction [where the sample size (n) is much smaller than the number of parameters (p)]. For this reason, in this paper we propose a BLOR model using the Pólya-Gamma data augmentation approach that produces a Gibbs sampler with full conditional distributions similar to those of the BPOR model, and with the advantage that the BPOR model is a particular case of the BLOR model. We evaluated the proposed model by using simulation and two real data sets. Results indicate that our BLOR model is a good alternative for analyzing ordinal data in the context of genomic-enabled prediction with the probit or logit link. Copyright © 2015 Montesinos-López et al.
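
    The Pólya-Gamma Gibbs sampler itself is beyond a short sketch, but the cumulative-logit structure that BLOR shares with standard ordinal regression can be shown directly (the threshold values and linear predictor below are invented):

```python
import math

def category_probs(eta, thresholds):
    """Category probabilities under a cumulative-logit (ordinal) model.

    P(y <= c) = sigmoid(threshold_c - eta) for a linear predictor eta
    (e.g. a genomic breeding value); category probabilities are the
    successive differences of these cumulative probabilities.
    """
    sigmoid = lambda z: 1.0 / (1.0 + math.exp(-z))
    cum = [sigmoid(t - eta) for t in sorted(thresholds)] + [1.0]
    return [cum[0]] + [cum[i] - cum[i - 1] for i in range(1, len(cum))]
```

    Shifting eta upward moves probability mass toward higher ordinal categories, which is how the genomic linear predictor drives the predicted phenotype class; swapping the logistic sigmoid for the normal CDF gives the probit (BPOR) counterpart.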

  2. Computational mechanics analysis tools for parallel-vector supercomputers

    NASA Technical Reports Server (NTRS)

    Storaasli, O. O.; Nguyen, D. T.; Baddourah, M. A.; Qin, J.

    1993-01-01

    Computational algorithms for structural analysis on parallel-vector supercomputers are reviewed. These parallel algorithms, developed by the authors, are for the assembly of structural equations, 'out-of-core' strategies for linear equation solution, massively distributed-memory equation solution, unsymmetric equation solution, general eigen-solution, geometrically nonlinear finite element analysis, design sensitivity analysis for structural dynamics, optimization algorithms, and domain decomposition. The source code for many of these algorithms is available from NASA Langley.

  3. Existence and Hadamard well-posedness of a system of simultaneous generalized vector quasi-equilibrium problems.

    PubMed

    Zhang, Wenyan; Zeng, Jing

    2017-01-01

    An existence result for the solution set of a system of simultaneous generalized vector quasi-equilibrium problems (for short, (SSGVQEP)) is obtained, which improves Theorem 3.1 of the work of Ansari et al. (J. Optim. Theory Appl. 127:27-44, 2005). Moreover, a definition of Hadamard-type well-posedness for (SSGVQEP) is introduced and sufficient conditions for Hadamard well-posedness of (SSGVQEP) are established.

  4. Factor Models for Ordinal Variables With Covariate Effects on the Manifest and Latent Variables: A Comparison of LISREL and IRT Approaches

    ERIC Educational Resources Information Center

    Moustaki, Irini; Joreskog, Karl G.; Mavridis, Dimitris

    2004-01-01

    We consider a general type of model for analyzing ordinal variables with covariate effects and 2 approaches for analyzing data for such models, the item response theory (IRT) approach and the PRELIS-LISREL (PLA) approach. We compare these 2 approaches on the basis of 2 examples, 1 involving only covariate effects directly on the ordinal variables…

  5. Two centuries of masting data for European beech and Norway spruce across the European continent.

    PubMed

    Ascoli, Davide; Maringer, Janet; Hacket-Pain, Andy; Conedera, Marco; Drobyshev, Igor; Motta, Renzo; Cirolli, Mara; Kantorowicz, Władysław; Zang, Christian; Schueler, Silvio; Croisé, Luc; Piussi, Pietro; Berretti, Roberta; Palaghianu, Ciprian; Westergren, Marjana; Lageard, Jonathan G A; Burkart, Anton; Gehrig Bichsel, Regula; Thomas, Peter A; Beudert, Burkhard; Övergaard, Rolf; Vacchiano, Giorgio

    2017-05-01

    Tree masting is one of the most intensively studied ecological processes. It affects nutrient fluxes of trees, regeneration dynamics in forests, animal population densities, and ultimately influences ecosystem services. Despite a large volume of research focused on masting, its evolutionary ecology, spatial and temporal variability, and environmental drivers are still a matter of debate. Understanding the proximate and ultimate causes of masting at broad spatial and temporal scales will enable us to predict tree reproductive strategies and their response to a changing environment. Here we provide broad spatial (distribution range-wide) and temporal (century) masting data for the two main masting tree species in Europe, European beech (Fagus sylvatica L.) and Norway spruce (Picea abies (L.) H. Karst.). We collected masting data from a total of 359 sources through an extensive literature review and from unpublished surveys. The data set has a total of 1,747 series and 18,348 yearly observations from 28 countries and covering a time span of years 1677-2016 and 1791-2016 for beech and spruce, respectively. For each record, the following information is available: identification code; species; year of observation; proxy of masting (flower, pollen, fruit, seed, dendrochronological reconstructions); statistical data type (ordinal, continuous); data value; unit of measurement (only in case of continuous data); geographical location (country, Nomenclature of Units for Territorial Statistics NUTS-1 level, municipality, coordinates); first and last record year and related length; type of data source (field survey, peer reviewed scientific literature, gray literature, personal observation); source identification code; date when data were added to the database; comments. To provide a ready-to-use masting index we harmonized ordinal data into five classes.
Furthermore, we computed an additional field in which continuous series longer than 4 yr were converted into a five-class ordinal index. To our knowledge, this is the most comprehensive published database on species-specific masting behavior. It is useful to study spatial and temporal patterns of masting and its proximate and ultimate causes, to refine studies based on tree-ring chronologies, to understand dynamics of animal species and pests vectored by these animals affecting human health, and it may serve as calibration-validation data for dynamic forest models. © 2017 by the Ecological Society of America.

  6. Legislations combating counterfeit drugs in Hong Kong.

    PubMed

    Lai, C W; Chan, W K

    2013-08-01

    To understand legislation combating counterfeit drugs in Hong Kong. This study consisted of two parts. In part I, counterfeit drug-related ordinances and court cases were reviewed. In part II, in-depth interviews of the stakeholders were described. Hong Kong. All Hong Kong ordinances were screened manually to identify those combating counterfeit drugs. Court cases were searched for each of the identified ordinances. Then, the relevant judgement justifications were analysed to identify sentencing issues. In-depth interviews with the stakeholders were conducted to understand their perceptions about such legislation. Trade Marks Ordinance, Patents Ordinance, Trade Descriptions Ordinance, and Pharmacy and Poisons Ordinance were current legislative items combating counterfeit drugs. Sentencing criteria depended on: intention to deceive, quantity of seized drugs, presence of expected therapeutic effect or toxic ingredients, previous criminal records, cooperativeness with Customs officers, honest confessions, pleas of guilty, types of drugs, and precautionary measures to prevent sale of counterfeit drugs. Stakeholders’ perceptions were explored with respect to legislation regarding the scale and significance of the counterfeit drug problem, penalties and deterrents, drug-specific legislation and authority, and inspections and enforcement. To plug the loopholes, a specific law with heavy penalties should be adopted. This could be supplemented by non-legal measures like education of judges, lawyers, and the public; publishing the names of offending pharmacies; and emphasising the role of pharmacists to the public.

  7. Marker-based reconstruction of the kinematics of a chain of segments: a new method that incorporates joint kinematic constraints.

    PubMed

    Klous, Miriam; Klous, Sander

    2010-07-01

    The aim of skin-marker-based motion analysis is to reconstruct the motion of a kinematical model from noisy measured motion of skin markers. Existing kinematic models for reconstruction of chains of segments can be divided into two categories: analytical methods that do not take joint constraints into account and numerical global optimization methods that do take joint constraints into account but require numerical optimization of a large number of degrees of freedom, especially when the number of segments increases. In this study, a new and largely analytical method is presented for a chain of rigid bodies interconnected by spherical joints (the chain-method). In this method, the number of generalized coordinates to be determined through numerical optimization is three, irrespective of the number of segments. This new method is compared with the analytical method of Veldpaus et al. [1988, "A Least-Squares Algorithm for the Equiform Transformation From Spatial Marker Co-Ordinates," J. Biomech., 21, pp. 45-54] (Veldpaus-method, a method of the first category) and the numerical global optimization method of Lu and O'Connor [1999, "Bone Position Estimation From Skin-Marker Co-Ordinates Using Global Optimization With Joint Constraints," J. Biomech., 32, pp. 129-134] (Lu-method, a method of the second category) regarding the effects of continuous noise simulating skin movement artifacts and regarding systematic errors in joint constraints. The study is based on simulated data to allow a comparison of the results of the different algorithms with true (noise- and error-free) marker locations. Results indicate a clear trend that accuracy for the chain-method is higher than that of the Veldpaus-method and similar to that of the Lu-method. Because large parts of the equations in the chain-method can be solved analytically, the speed of convergence in this method is substantially higher than in the Lu-method.
With only three segments, the average number of required iterations with the chain-method is 3.0+/-0.2 times lower than with the Lu-method when skin movement artifacts are simulated by applying a continuous noise model. When simulating systematic errors in joint constraints, the number of iterations for the chain-method was almost a factor 5 lower than the number of iterations for the Lu-method. However, the Lu-method performs slightly better than the chain-method. The RMSD value between the reconstructed and actual marker positions is approximately 57% of the systematic error on the joint center positions for the Lu-method compared with 59% for the chain-method.

  8. Care co-ordination for older people in the third sector: scoping the evidence.

    PubMed

    Abendstern, Michele; Hughes, Jane; Jasper, Rowan; Sutcliffe, Caroline; Challis, David

    2018-05-01

    The third sector has played a significant role internationally in the delivery of adult social care services for many years. Its contribution to care co-ordination activities for older people, however, in England and elsewhere, is relatively unknown. A scoping review was therefore conducted to ascertain the character of the literature, the nature and extent of third sector care co-ordination activity, and to identify evidence gaps. It was undertaken between autumn 2013 and summer 2014 and updated with additional searches in 2016. Electronic and manual searches of international literature using distinct terms for different approaches to care co-ordination were undertaken. From a total of 835 papers, 26 met inclusion criteria. Data were organised in relation to care co-ordination approaches, types of third sector organisation and care recipients. Papers were predominantly from the UK and published this century. Key findings included that: a minority of literature focused specifically on older people and that those doing so described only one care co-ordination approach; third sector services tended to be associated with independence and person-centred practice; and working with the statutory sector, a prerequisite of care co-ordination, was challenging and required a range of features to be in place to support effective partnerships. Strengths and weaknesses of care co-ordination practice in the third sector according to key stakeholder groups were also highlighted. Areas for future research included the need for: a specific focus on older people's experiences; an investigation of workforce issues; detailed examination of third sector practices, outcomes and costs; interactions with the statutory sector; and an examination of quality assurance systems and their appropriateness to third sector practice. 
The main implication of the findings is a need to nurture variety within the third sector in order to provide older people and other adults with the range of service options desired. © 2017 John Wiley & Sons Ltd.

  9. An approach of the exact linearization techniques to analysis of population dynamics of the mosquito Aedes aegypti.

    PubMed

    Dos Reis, Célia A; Florentino, Helenice de O; Cólon, Diego; Rosa, Suélia R Fleury; Cantane, Daniela R

    2018-05-01

    Dengue fever, chikungunya and zika are caused by different viruses and mainly transmitted by Aedes aegypti mosquitoes. These diseases have received special attention of public health officials due to the large number of infected people in tropical and subtropical countries and the possible sequels that those diseases can cause. In severe cases, the infection can have devastating effects, affecting the central nervous system, muscles, brain and respiratory system, often resulting in death. Vaccines against these diseases are still under development and, therefore, current studies are focused on the treatment of diseases and vector (mosquito) control. This work focuses on this last topic, and presents the analysis of a mathematical model describing the population dynamics of Aedes aegypti, as well as the design of a control law for the mosquito population (vector control) via exact linearization techniques and optimal control. This control strategy optimizes the use of resources for vector control, and focuses on the aquatic stage of the mosquito life. Theoretical and computational results are also presented. Copyright © 2017 Elsevier Inc. All rights reserved.

  10. Effects of duration of electric pulse on in vitro development of cloned cat embryos with human artificial chromosome vector.

    PubMed

    Do, Ltk; Wittayarat, M; Terazono, T; Sato, Y; Taniguchi, M; Tanihara, F; Takemoto, T; Kazuki, Y; Kazuki, K; Oshimura, M; Otoi, T

    2016-12-01

    The current applications for cat cloning include production of models for the study of human and animal diseases. This study was conducted to investigate the optimal fusion protocol for in vitro development of transgenic cloned cat embryos by comparing durations of the electric pulse. Cat fibroblast cells containing a human artificial chromosome (HAC) vector were used as genetically modified nuclear donor cells. Couplets were fused and activated simultaneously with a single DC pulse of 3.0 kV/cm for either 30 or 60 μs. Low rates of fusion and embryo development to the blastocyst stage were observed in the reconstructed HAC-transchromosomic embryos, when the duration of fusion was prolonged to 60 μs. In contrast, the prolongation of electric pulse duration improved the embryo development and quality in the reconstructed control embryos without HAC vector. Our results suggested that the optimal parameters of electric pulses for fusion in cat somatic cell nuclear transfer vary with the type of donor cell used. © 2016 Blackwell Verlag GmbH.

  11. Optimizing support vector machine learning for semi-arid vegetation mapping by using clustering analysis

    NASA Astrophysics Data System (ADS)

    Su, Lihong

    In the remote sensing community, support vector machine (SVM) learning has recently received increasing attention. SVM learning usually requires large memory and enormous amounts of computation time on large training sets. According to SVM algorithms, the SVM classification decision function is fully determined by support vectors, which form a subset of the training set. In this regard, a solution to optimize SVM learning is to efficiently reduce training sets. In this paper, a data reduction method based on agglomerative hierarchical clustering is proposed to obtain smaller training sets for SVM learning. Using a multiple angle remote sensing dataset of a semi-arid region, the effectiveness of the proposed method is evaluated by classification experiments with a series of reduced training sets. The experiments show that there is no loss of SVM accuracy when the original training set is reduced to 34% using the proposed approach. Maximum likelihood classification (MLC) is also applied to the reduced training sets. The results show that MLC also maintains classification accuracy. This implies that the most informative data instances can be retained by this approach.
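
    The paper's multi-angle imagery pipeline is not reproduced here; the sketch below illustrates only the training-set reduction idea under simplifying assumptions: a naive centroid-linkage agglomerative clustering compresses a toy training set to cluster representatives, on which a linear SVM (a Pegasos-style subgradient solver, assumed here in place of the paper's SVM) is trained and evaluated against the full set.

```python
import numpy as np

def agglomerate(X, n_clusters):
    """Naive centroid-linkage agglomerative clustering (O(n^3); toy scale only)."""
    clusters = [[i] for i in range(len(X))]
    cents = [X[i].astype(float) for i in range(len(X))]
    while len(clusters) > n_clusters:
        best, bi, bj = np.inf, 0, 1
        for i in range(len(cents)):           # find the closest centroid pair
            for j in range(i + 1, len(cents)):
                d = float(np.sum((cents[i] - cents[j]) ** 2))
                if d < best:
                    best, bi, bj = d, i, j
        clusters[bi] += clusters[bj]
        cents[bi] = X[clusters[bi]].mean(axis=0)
        del clusters[bj], cents[bj]
    return clusters

def pegasos(X, y, lam=0.01, T=3000, seed=0):
    """Pegasos-style subgradient solver for a linear SVM; labels in {-1, +1}."""
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    for t in range(1, T + 1):
        i = rng.integers(len(X))
        eta = 1.0 / (lam * t)
        if y[i] * (w @ X[i]) < 1:             # margin violated: hinge subgradient
            w = (1 - eta * lam) * w + eta * y[i] * X[i]
        else:
            w = (1 - eta * lam) * w
    return w

# Two separable blobs: reduce 80 training points to 16 cluster representatives
rng = np.random.default_rng(2)
X = np.vstack([rng.normal(-2, 0.5, (40, 2)), rng.normal(2, 0.5, (40, 2))])
y = np.array([-1] * 40 + [1] * 40)
clusters = agglomerate(X, 16)
Xr = np.array([X[c].mean(axis=0) for c in clusters])
yr = np.array([np.sign(y[c].sum()) for c in clusters])    # majority label
w = pegasos(Xr, yr)
acc = np.mean(np.sign(X @ w) == y)
```

    On separable data the representatives preserve accuracy while cutting the SVM's training set to a fifth of its size; the paper's point is that a similar reduction holds on real imagery.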

  12. Nonlinear feedback control for high alpha flight

    NASA Technical Reports Server (NTRS)

    Stalford, Harold

    1990-01-01

    Analytical aerodynamic models are derived from a high alpha 6 DOF wind tunnel model. One detailed model requires some interpolation between nonlinear functions of alpha. One analytical model requires no interpolation and as such is a completely continuous model. Flight path optimization is conducted on the basic maneuvers: half-loop, 90-degree pitch-up, and level turn. The optimal control analysis uses the derived analytical model in the equations of motion and is based on both moment and force equations. The maximum principle solution for the half-loop is a poststall trajectory performing the half-loop in 13.6 seconds. The agility induced by thrust vectoring capability had minimal effect on reducing the maneuver time. By means of thrust vectoring control, the 90-degree pitch-up maneuver can be executed in a small space over a short time interval. The agility capability of thrust vectoring is quite beneficial for pitch-up maneuvers. The level turn results are based currently on only outer layer solutions of singular perturbation. Poststall solutions provide high turn rates but generate higher losses of energy than classical sustained solutions.

  13. Dynamic Price Vector Formation Model-Based Automatic Demand Response Strategy for PV-Assisted EV Charging Stations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Qifang; Wang, Fei; Hodge, Bri-Mathias

    A real-time price (RTP)-based automatic demand response (ADR) strategy for a PV-assisted electric vehicle (EV) charging station (PVCS) without vehicle-to-grid is proposed. The charging process is modeled as a dynamic linear program instead of the normal day-ahead and real-time regulation strategy, to capture the advantages of both global and real-time optimization. Different from conventional price forecasting algorithms, a dynamic price vector formation model is proposed based on a clustering algorithm to form an RTP vector for a particular day. A dynamic feasible energy demand region (DFEDR) model considering grid voltage profiles is designed to calculate the lower and upper bounds. A deduction method is proposed to deal with the unknown information of future intervals, such as the actual stochastic arrival and departure times of EVs, which makes the DFEDR model suitable for global optimization. Finally, comparative cases articulate the advantages of the developed methods, and the validity of the proposed strategy in reducing electricity costs, mitigating peak charging demand, and improving PV self-consumption is verified through simulation scenarios.

  14. Incorporating prior knowledge induced from stochastic differential equations in the classification of stochastic observations.

    PubMed

    Zollanvari, Amin; Dougherty, Edward R

    2016-12-01

    In classification, prior knowledge is incorporated in a Bayesian framework by assuming that the feature-label distribution belongs to an uncertainty class of feature-label distributions governed by a prior distribution. A posterior distribution is then derived from the prior and the sample data. An optimal Bayesian classifier (OBC) minimizes the expected misclassification error relative to the posterior distribution. From an application perspective, prior construction is critical. The prior distribution is formed by mapping a set of mathematical relations among the features and labels, the prior knowledge, into a distribution governing the probability mass across the uncertainty class. In this paper, we consider prior knowledge in the form of stochastic differential equations (SDEs). We consider a vector SDE in integral form involving a drift vector and dispersion matrix. Having constructed the prior, we develop the optimal Bayesian classifier between two models and examine, via synthetic experiments, the effects of uncertainty in the drift vector and dispersion matrix. We apply the theory to a set of SDEs for the purpose of differentiating the evolutionary history between two species.

  15. Multivariate ordination identifies vegetation types associated with spider conservation in brassica crops

    PubMed Central

    Saqib, Hafiz Sohaib Ahmed; You, Minsheng

    2017-01-01

    Conservation biological control emphasizes natural and other non-crop vegetation as a source of natural enemies to focal crops. There is an unmet need for better methods to identify the types of vegetation that are optimal to support specific natural enemies that may colonize the crops. Here we explore the commonality of the spider assemblage—considering abundance and diversity (H)—in brassica crops with that of adjacent non-crop and non-brassica crop vegetation. We employ spatial-based multivariate ordination approaches, hierarchical clustering and spatial eigenvector analysis. The small-scale mixed cropping and high disturbance frequency of southern Chinese vegetable farming offered a setting to test the role of alternate vegetation for spider conservation. Our findings indicate that spider families differ markedly in occurrence with respect to vegetation type. Grassy field margins, non-crop vegetation, taro and sweetpotato harbour spider morphospecies and functional groups that are also present in brassica crops. In contrast, pumpkin and litchi contain spiders not found in brassicas, and so may have little benefit for conservation biological control services for brassicas. Our findings also illustrate the utility of advanced statistical approaches for identifying spatial relationships between natural enemies and the land uses most likely to offer alternative habitats for conservation biological control efforts, generating testable hypotheses for future studies. PMID:29085741

  16. Mathematical methods to analysis of topology, functional variability and evolution of metabolic systems based on different decomposition concepts.

    PubMed

    Mrabet, Yassine; Semmar, Nabil

    2010-05-01

    The complexity of metabolic systems can be analysed at different scales (metabolites, metabolic pathways, metabolic network map, biological population) and under different aspects (structural, functional, evolutive). To analyse such complexity, metabolic systems need to be decomposed into different components according to different concepts. Four concepts are presented here, consisting in considering metabolic systems as sets of metabolites, chemical reactions, metabolic pathways or successive processes. From a metabolomic dataset, such decompositions are performed using different mathematical methods including correlation, stoichiometric, ordination, classification, combinatorial and kinetic analyses. Correlation analysis detects and quantifies affinities/oppositions between metabolites. Stoichiometric analysis aims to identify the organisation of a metabolic network into different metabolic pathways on the one hand, and to quantify/optimize the metabolic flux distribution through the different chemical reactions of the system on the other. Ordination and classification analyses help to identify different metabolic trends and their associated metabolites in order to highlight chemical polymorphism representing different variability poles of the metabolic system. Then, metabolic processes/correlations responsible for such a polymorphism can be extracted in silico by combining metabolic profiles representative of different metabolic trends according to a weighting bootstrap approach. Finally, evolution of metabolic processes in time can be analysed by different kinetic/dynamic modelling approaches.

  17. Two Hop Adaptive Vector Based Quality Forwarding for Void Hole Avoidance in Underwater WSNs

    PubMed Central

    Javaid, Nadeem; Ahmed, Farwa; Wadud, Zahid; Alrajeh, Nabil; Alabed, Mohamad Souheil; Ilahi, Manzoor

    2017-01-01

    Underwater wireless sensor networks (UWSNs) facilitate a wide range of aquatic applications in various domains. However, the harsh underwater environment poses challenges like low bandwidth, long propagation delay, high bit error rate, high deployment cost, irregular topological structure, etc. Node mobility and the uneven distribution of sensor nodes create void holes in UWSNs. Void hole creation has become a critical issue in UWSNs, as it severely affects the network performance. Avoiding void hole creation yields better coverage of an area, lower energy consumption in the network and higher throughput. For this purpose, this paper focuses on minimizing void hole probability, particularly in locally sparse regions. The two-hop adaptive hop by hop vector-based forwarding (2hop-AHH-VBF) protocol aims to avoid the void hole with the help of two-hop neighbor node information. The other protocol, quality forwarding adaptive hop by hop vector-based forwarding (QF-AHH-VBF), selects an optimal forwarder based on the composite priority function. QF-AHH-VBF improves network good-put because of optimal forwarder selection. QF-AHH-VBF aims to reduce void hole probability by optimally selecting next hop forwarders. To attain better network performance, mathematical problem formulation based on linear programming is performed. Simulation results show that by adopting these mechanisms, a significant reduction in end-to-end delay and better throughput are achieved in the network. PMID:28763014

  19. Correction of murine Rag2 severe combined immunodeficiency by lentiviral gene therapy using a codon-optimized RAG2 therapeutic transgene.

    PubMed

    van Til, Niek P; de Boer, Helen; Mashamba, Nomusa; Wabik, Agnieszka; Huston, Marshall; Visser, Trudi P; Fontana, Elena; Poliani, Pietro Luigi; Cassani, Barbara; Zhang, Fang; Thrasher, Adrian J; Villa, Anna; Wagemaker, Gerard

    2012-10-01

    Recombination activating gene 2 (RAG2) deficiency results in severe combined immunodeficiency (SCID) with complete lack of T and B lymphocytes. Initial gammaretroviral gene therapy trials for other types of SCID proved effective, but also revealed the necessity of safe vector design. We report the development of lentiviral vectors with the spleen focus forming virus (SF) promoter driving codon-optimized human RAG2 (RAG2co), which improved phenotype amelioration compared to native RAG2 in Rag2(-/-) mice. With the RAG2co therapeutic transgene, T-cell receptor (TCR) and immunoglobulin repertoire, T-cell mitogen responses, plasma immunoglobulin levels and T-cell dependent and independent specific antibody responses were restored. However, the thymus double positive T-cell population remained subnormal, possibly due to the SF virus derived element being sensitive to methylation/silencing in the thymus, which was prevented by replacing the SF promoter by the previously reported silencing resistant element (ubiquitous chromatin opening element (UCOE)), and also improved B-cell reconstitution to eventually near normal levels. Weak cellular promoters were effective in T-cell reconstitution, but deficient in B-cell reconstitution. We conclude that immune functions are corrected in Rag2(-/-) mice by genetic modification of stem cells using the UCOE driven codon-optimized RAG2, providing a valid optional vector for clinical implementation.

  20. Evaluation of a new parallel numerical parameter optimization algorithm for a dynamical system

    NASA Astrophysics Data System (ADS)

    Duran, Ahmet; Tuncel, Mehmet

    2016-10-01

    It is important to have a scalable parallel numerical parameter optimization algorithm for a dynamical system used in financial applications where time limitation is crucial. We use Message Passing Interface parallel programming and present such a new parallel algorithm for parameter estimation. For example, we apply the algorithm to the asset flow differential equations that have been developed and analyzed since 1989 (see [3-6] and references contained therein). We achieved speed-up for some time series on up to 512 cores (see [10]). Unlike [10], in this work we consider more extensive financial market situations, for example, in the presence of low volatility, high volatility, and a stock market price at a discount/premium to its net asset value of varying magnitude. Moreover, we evaluated the convergence of the model parameter vector, the nonlinear least squares error and the maximum improvement factor to quantify the success of the optimization process as a function of the number of initial parameter vectors.
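
    The asset-flow differential equations and the MPI parallelization are not reproduced here; as a serial, hedged sketch of the inner estimation step, the following fits a toy model's parameters by Gauss-Newton nonlinear least squares with a finite-difference Jacobian. The model, data, and tolerances are invented for illustration.

```python
import numpy as np

def gauss_newton(residual_fn, p0, n_iter=20, eps=1e-6):
    """Gauss-Newton with a finite-difference Jacobian (a minimal sketch)."""
    p = np.asarray(p0, dtype=float)
    for _ in range(n_iter):
        r = residual_fn(p)
        J = np.empty((len(r), len(p)))
        for j in range(len(p)):               # forward-difference Jacobian column
            d = np.zeros_like(p)
            d[j] = eps
            J[:, j] = (residual_fn(p + d) - r) / eps
        step, *_ = np.linalg.lstsq(J, -r, rcond=None)   # solve J step = -r
        p = p + step
    return p

# Toy model y = a * exp(b * t) with lightly noisy observations
rng = np.random.default_rng(3)
t = np.linspace(0, 2, 50)
y_obs = 1.5 * np.exp(0.8 * t) + rng.normal(0, 0.01, 50)
residual = lambda p: p[0] * np.exp(p[1] * t) - y_obs
p_hat = gauss_newton(residual, [1.0, 0.5])
```

    In the paper's setting the expensive part is evaluating such fits from many initial parameter vectors, which is what the MPI parallelization distributes across cores.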

  1. Distance Metric Learning via Iterated Support Vector Machines.

    PubMed

    Zuo, Wangmeng; Wang, Faqiang; Zhang, David; Lin, Liang; Huang, Yuchi; Meng, Deyu; Zhang, Lei

    2017-07-11

    Distance metric learning aims to learn from the given training data a valid distance metric, with which the similarity between data samples can be more effectively evaluated for classification. Metric learning is often formulated as a convex or nonconvex optimization problem, while most existing methods are based on customized optimizers and become inefficient for large-scale problems. In this paper, we formulate metric learning as a kernel classification problem with the positive semi-definite constraint, and solve it by iterated training of support vector machines (SVMs). The new formulation is easy to implement and efficient in training with off-the-shelf SVM solvers. Two novel metric learning models, namely Positive-semidefinite Constrained Metric Learning (PCML) and Nonnegative-coefficient Constrained Metric Learning (NCML), are developed. Both PCML and NCML can guarantee the global optimality of their solutions. Experiments are conducted on general classification, face verification and person re-identification to evaluate our methods. Compared with the state-of-the-art approaches, our methods can achieve comparable classification accuracy and are efficient in training.
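
    The PCML/NCML formulations and their SVM solvers are not reproduced here; the sketch below illustrates the underlying idea in simplified form: iterated hinge-loss updates on a Mahalanobis matrix, with a projection onto the positive semi-definite cone after each active step. Margins, rates, and data are invented for illustration.

```python
import numpy as np

def learn_metric(X, y, margin=20.0, lr=0.002, n_iter=300, seed=0):
    """Iterated hinge-loss Mahalanobis metric learning with PSD projection.

    Similar pairs are pulled below the margin, dissimilar pairs pushed above
    it; after each active update, M is projected back onto the positive
    semi-definite cone so it stays a valid metric.
    """
    rng = np.random.default_rng(seed)
    M = np.eye(X.shape[1])
    for _ in range(n_iter):
        i, j = rng.integers(len(X), size=2)
        if i == j:
            continue
        diff = X[i] - X[j]
        dist = diff @ M @ diff
        s = 1.0 if y[i] == y[j] else -1.0
        if 1.0 - s * (margin - dist) > 0:     # hinge active
            M = M - lr * s * np.outer(diff, diff)
            w, V = np.linalg.eigh(M)          # project onto the PSD cone
            M = (V * np.clip(w, 0.0, None)) @ V.T
    return M

def mean_pair_dists(X, y, M):
    sim, dis = [], []
    for i in range(len(X)):
        for j in range(i + 1, len(X)):
            d = (X[i] - X[j]) @ M @ (X[i] - X[j])
            (sim if y[i] == y[j] else dis).append(d)
    return np.mean(sim), np.mean(dis)

# Toy data: dimension 0 separates the classes, dimension 1 is pure noise
rng = np.random.default_rng(5)
x0 = np.concatenate([rng.normal(0, 0.3, 30), rng.normal(2, 0.3, 30)])
x1 = rng.normal(0, 3, 60)
X = np.column_stack([x0, x1])
y = np.array([0] * 30 + [1] * 30)
M = learn_metric(X, y)
mean_sim, mean_dis = mean_pair_dists(X, y, M)
```

    The eigendecomposition-based projection plays the role of the positive semi-definite constraint in the paper's models; the paper's contribution is solving the constrained problem with standard SVM machinery instead of such ad hoc updates.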

  2. Optimization of sparse matrix-vector multiplication on emerging multicore platforms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Williams, Samuel; Oliker, Leonid; Vuduc, Richard

    2007-01-01

    We are witnessing a dramatic change in computer architecture due to the multicore paradigm shift, as every electronic device from cell phones to supercomputers confronts parallelism of unprecedented scale. To fully unleash the potential of these systems, the HPC community must develop multicore specific optimization methodologies for important scientific computations. In this work, we examine sparse matrix-vector multiply (SpMV) - one of the most heavily used kernels in scientific computing - across a broad spectrum of multicore designs. Our experimental platform includes the homogeneous AMD dual-core and Intel quad-core designs, the heterogeneous STI Cell, as well as the first scientific study of the highly multithreaded Sun Niagara2. We present several optimization strategies especially effective for the multicore environment, and demonstrate significant performance improvements compared to existing state-of-the-art serial and parallel SpMV implementations. Additionally, we present key insights into the architectural tradeoffs of leading multicore design strategies, in the context of demanding memory-bound numerical algorithms.
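
    As context for the kernel being optimized, a baseline compressed sparse row (CSR) SpMV is sketched below; the contiguous sweep over stored nonzeros versus the irregular gather from the source vector is the access pattern that register blocking, prefetching, and similar optimizations target. The toy matrix is illustrative.

```python
import numpy as np

def csr_spmv(values, col_idx, row_ptr, x):
    """y = A @ x for a matrix stored in compressed sparse row (CSR) form."""
    n_rows = len(row_ptr) - 1
    y = np.zeros(n_rows)
    for i in range(n_rows):
        # contiguous slice of nonzeros for row i: good spatial locality on
        # values/col_idx, but an irregular gather on x
        for k in range(row_ptr[i], row_ptr[i + 1]):
            y[i] += values[k] * x[col_idx[k]]
    return y

# 3x4 example matrix with 5 nonzeros
A = np.array([[10, 0, 0, 2],
              [0, 3, 0, 0],
              [0, 0, 7, 1]], dtype=float)
values  = np.array([10.0, 2.0, 3.0, 7.0, 1.0])
col_idx = np.array([0, 3, 1, 2, 3])
row_ptr = np.array([0, 2, 3, 5])
x = np.array([1.0, 2.0, 3.0, 4.0])
y = csr_spmv(values, col_idx, row_ptr, x)   # matches the dense product A @ x
```

    Because each stored nonzero is touched exactly once, SpMV performs O(1) flops per byte of matrix data, which is why the paper treats it as a memory-bound kernel.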

  3. A Systematic Approach to Sensor Selection for Aircraft Engine Health Estimation

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Garg, Sanjay

    2009-01-01

    A systematic approach for selecting an optimal suite of sensors for on-board aircraft gas turbine engine health estimation is presented. The methodology optimally chooses the engine sensor suite and the model tuning parameter vector to minimize the Kalman filter mean squared estimation error in the engine's health parameters or other unmeasured engine outputs. This technique specifically addresses the underdetermined estimation problem where there are more unknown system health parameters representing degradation than available sensor measurements. This paper presents the theoretical estimation error equations, and describes the optimization approach that is applied to select the sensors and model tuning parameters to minimize these errors. Two different model tuning parameter vector selection approaches are evaluated: the conventional approach of selecting a subset of health parameters to serve as the tuning parameters, and an alternative approach that selects tuning parameters as a linear combination of all health parameters. Results from the application of the technique to an aircraft engine simulation are presented, and compared to those from an alternative sensor selection strategy.
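    The selection criterion (minimize the Kalman filter's steady-state estimation error over candidate sensor suites) can be sketched by brute-force enumeration; this is an illustrative toy with made-up matrices, not NASA's algorithm or error equations.

```python
import itertools
import numpy as np

def steady_state_cov(A, C, Q, R, iters=500):
    """Iterate the discrete Riccati recursion to the steady-state
    a-priori error covariance for sensor matrix C."""
    P = np.eye(A.shape[0])
    for _ in range(iters):
        S = C @ P @ C.T + R
        K = P @ C.T @ np.linalg.inv(S)
        P = A @ (P - K @ C @ P) @ A.T + Q
    return P

def best_sensor_subset(A, C_full, Q, r_var, k):
    """Pick the k sensor rows of C_full whose steady-state Kalman
    filter minimizes the trace of the estimation error covariance
    (a brute-force sketch of the selection idea)."""
    best = None
    for rows in itertools.combinations(range(C_full.shape[0]), k):
        C = C_full[list(rows)]
        cost = np.trace(steady_state_cov(A, C, Q, r_var * np.eye(k)))
        if best is None or cost < best[1]:
            best = (rows, cost)
    return best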

  4. Intelligent Optimization of the Film-to-Fiber Ratio of a Degradable Braided Bicomponent Ureteral Stent

    PubMed Central

    Liu, Xiaoyan; Li, Feng; Ding, Yongsheng; Zou, Ting; Wang, Lu; Hao, Kuangrong

    2015-01-01

    A hierarchical support vector regression (SVR) model (HSVRM) was employed to correlate the compositions and mechanical properties of bicomponent stents composed of poly(lactic-co-glycolic acid) (PGLA) film and poly(glycolic acid) (PGA) fibers for urethral repair for the first time. PGLA film and PGA fibers could provide ureteral stents with good compressive and tensile properties, respectively. In bicomponent stents, high film content led to high stiffness, while high fiber content resulted in poor compressional properties. To simplify the procedures to optimize the ratio of PGLA film and PGA fiber in the stents, a hierarchical support vector regression model (HSVRM) and particle swarm optimization (PSO) algorithm were used to construct relationships between the film-to-fiber weight ratio and the measured compressional/tensile properties of the stents. The experimental data and simulated data fit well, proving that the HSVRM could closely reflect the relationship between the component ratio and performance properties of the ureteral stents. PMID:28793658

  5. Study on Temperature and Synthetic Compensation of Piezo-Resistive Differential Pressure Sensors by Coupled Simulated Annealing and Simplex Optimized Kernel Extreme Learning Machine

    PubMed Central

    Li, Ji; Hu, Guoqing; Zhou, Yonghong; Zou, Chong; Peng, Wei; Alam SM, Jahangir

    2017-01-01

    As a high performance-cost ratio solution for differential pressure measurement, piezo-resistive differential pressure sensors are widely used in engineering processes. However, their performance is severely affected by the environmental temperature and the static pressure applied to them. In order to modify the non-linear measuring characteristics of the piezo-resistive differential pressure sensor, compensation actions should synthetically consider these two aspects. Advantages such as nonlinear approximation capability, highly desirable generalization ability and computational efficiency make the kernel extreme learning machine (KELM) a practical approach for this critical task. Since the KELM model is intrinsically sensitive to the regularization parameter and the kernel parameter, a searching scheme combining the coupled simulated annealing (CSA) algorithm and the Nelder-Mead simplex algorithm is adopted to find an optimal KLEM parameter set. A calibration experiment at different working pressure levels was conducted within the temperature range to assess the proposed method. In comparison with other compensation models such as the back-propagation neural network (BP), radius basis neural network (RBF), particle swarm optimization optimized support vector machine (PSO-SVM), particle swarm optimization optimized least squares support vector machine (PSO-LSSVM) and extreme learning machine (ELM), the compensation results show that the presented compensation algorithm exhibits a more satisfactory performance with respect to temperature compensation and synthetic compensation problems. PMID:28422080

  6. Study on Temperature and Synthetic Compensation of Piezo-Resistive Differential Pressure Sensors by Coupled Simulated Annealing and Simplex Optimized Kernel Extreme Learning Machine.

    PubMed

    Li, Ji; Hu, Guoqing; Zhou, Yonghong; Zou, Chong; Peng, Wei; Alam Sm, Jahangir

    2017-04-19

    As a high performance-cost ratio solution for differential pressure measurement, piezo-resistive differential pressure sensors are widely used in engineering processes. However, their performance is severely affected by the environmental temperature and the static pressure applied to them. In order to modify the non-linear measuring characteristics of the piezo-resistive differential pressure sensor, compensation actions should synthetically consider these two aspects. Advantages such as nonlinear approximation capability, highly desirable generalization ability and computational efficiency make the kernel extreme learning machine (KELM) a practical approach for this critical task. Since the KELM model is intrinsically sensitive to the regularization parameter and the kernel parameter, a searching scheme combining the coupled simulated annealing (CSA) algorithm and the Nelder-Mead simplex algorithm is adopted to find an optimal KLEM parameter set. A calibration experiment at different working pressure levels was conducted within the temperature range to assess the proposed method. In comparison with other compensation models such as the back-propagation neural network (BP), radius basis neural network (RBF), particle swarm optimization optimized support vector machine (PSO-SVM), particle swarm optimization optimized least squares support vector machine (PSO-LSSVM) and extreme learning machine (ELM), the compensation results show that the presented compensation algorithm exhibits a more satisfactory performance with respect to temperature compensation and synthetic compensation problems.

  7. Approximate optimal tracking control for near-surface AUVs with wave disturbances

    NASA Astrophysics Data System (ADS)

    Yang, Qing; Su, Hao; Tang, Gongyou

    2016-10-01

    This paper considers the optimal trajectory tracking control problem for near-surface autonomous underwater vehicles (AUVs) in the presence of wave disturbances. An approximate optimal tracking control (AOTC) approach is proposed. Firstly, a six-degrees-of-freedom (six-DOF) AUV model with its body-fixed coordinate system is decoupled and simplified and then a nonlinear control model of AUVs in the vertical plane is given. Also, an exosystem model of wave disturbances is constructed based on Hirom approximation formula. Secondly, the time-parameterized desired trajectory which is tracked by the AUV's system is represented by the exosystem. Then, the coupled two-point boundary value (TPBV) problem of optimal tracking control for AUVs is derived from the theory of quadratic optimal control. By using a recently developed successive approximation approach to construct sequences, the coupled TPBV problem is transformed into a problem of solving two decoupled linear differential sequences of state vectors and adjoint vectors. By iteratively solving the two equation sequences, the AOTC law is obtained, which consists of a nonlinear optimal feedback item, an expected output tracking item, a feedforward disturbances rejection item, and a nonlinear compensatory term. Furthermore, a wave disturbances observer model is designed in order to solve the physically realizable problem. Simulation is carried out by using the Remote Environmental Unit (REMUS) AUV model to demonstrate the effectiveness of the proposed algorithm.

  8. Point-based warping with optimized weighting factors of displacement vectors

    NASA Astrophysics Data System (ADS)

    Pielot, Ranier; Scholz, Michael; Obermayer, Klaus; Gundelfinger, Eckart D.; Hess, Andreas

    2000-06-01

    The accurate comparison of inter-individual 3D image brain datasets requires non-affine transformation techniques (warping) to reduce geometric variations. Constrained by the biological prerequisites we use in this study a landmark-based warping method with weighted sums of displacement vectors, which is enhanced by an optimization process. Furthermore, we investigate fast automatic procedures for determining landmarks to improve the practicability of 3D warping. This combined approach was tested on 3D autoradiographs of Gerbil brains. The autoradiographs were obtained after injecting a non-metabolized radioactive glucose derivative into the Gerbil thereby visualizing neuronal activity in the brain. Afterwards the brain was processed with standard autoradiographical methods. The landmark-generator computes corresponding reference points simultaneously within a given number of datasets by Monte-Carlo-techniques. The warping function is a distance weighted exponential function with a landmark- specific weighting factor. These weighting factors are optimized by a computational evolution strategy. The warping quality is quantified by several coefficients (correlation coefficient, overlap-index, and registration error). The described approach combines a highly suitable procedure to automatically detect landmarks in autoradiographical brain images and an enhanced point-based warping technique, optimizing the local weighting factors. This optimization process significantly improves the similarity between the warped and the target dataset.

  9. A short-term and high-resolution distribution system load forecasting approach using support vector regression with hybrid parameters optimization

    DOE PAGES

    Jiang, Huaiguang; Zhang, Yingchen; Muljadi, Eduard; ...

    2016-01-01

    This paper proposes an approach for distribution system load forecasting, which aims to provide highly accurate short-term load forecasting with high resolution utilizing a support vector regression (SVR) based forecaster and a two-step hybrid parameters optimization method. Specifically, because the load profiles in distribution systems contain abrupt deviations, a data normalization is designed as the pretreatment for the collected historical load data. Then an SVR model is trained by the load data to forecast the future load. For better performance of SVR, a two-step hybrid optimization algorithm is proposed to determine the best parameters. In the first step of themore » hybrid optimization algorithm, a designed grid traverse algorithm (GTA) is used to narrow the parameters searching area from a global to local space. In the second step, based on the result of the GTA, particle swarm optimization (PSO) is used to determine the best parameters in the local parameter space. After the best parameters are determined, the SVR model is used to forecast the short-term load deviation in the distribution system. The performance of the proposed approach is compared to some classic methods in later sections of the paper.« less

  10. Cloning and expression of codon-optimized recombinant darbepoetin alfa in Leishmania tarentolae T7-TR.

    PubMed

    Kianmehr, Anvarsadat; Golavar, Raziyeh; Rouintan, Mandana; Mahrooz, Abdolkarim; Fard-Esfahani, Pezhman; Oladnabi, Morteza; Khajeniazi, Safoura; Mostafavi, Seyede Samaneh; Omidinia, Eskandar

    2016-02-01

    Darbepoetin alfa is an engineered and hyperglycosylated analog of recombinant human erythropoietin (EPO) which is used as a drug in treating anemia in patients with chronic kidney failure and cancer. This study desribes the secretory expression of a codon-optimized recombinant form of darbepoetin alfa in Leishmania tarentolae T7-TR. Synthetic codon-optimized gene was amplified by PCR and cloned into the pLEXSY-I-blecherry3 vector. The resultant expression vector, pLEXSYDarbo, was purified, digested, and electroporated into the L. tarentolae. Expression of recombinant darbepoetin alfa was evaluated by ELISA, reverse-transcription PCR (RT-PCR), Western blotting, and biological activity. After codon optimization, codon adaptation index (CAI) of the gene raised from 0.50 to 0.99 and its GC% content changed from 56% to 58%. Expression analysis confirmed the presence of a protein band at 40 kDa. Furthermore, reticulocyte experiment results revealed that the activity of expressed darbepoetin alfa was similar to that of its equivalent expressed in Chinese hamster ovary (CHO) cells. These data suggested that the codon optimization and expression in L. tarentolae host provided an efficient approach for high level expression of darbepoetin alfa. Copyright © 2015 Elsevier Inc. All rights reserved.

  11. Use of the piggyBac transposon to create stable packaging cell lines for the production of clinical-grade self-inactivating γ-retroviral vectors.

    PubMed

    Feldman, Steven A; Xu, Hui; Black, Mary A; Park, Tristen S; Robbins, Paul F; Kochenderfer, James N; Morgan, Richard A; Rosenberg, Steven A

    2014-08-01

    Efforts to improve the biosafety of γ-retroviral-mediated gene therapy have resulted in a shift toward the use of self-inactivating (SIN) γ-retroviral vectors. However, scale-up and manufacturing of such vectors requires significant optimization of transient transfection-based processes or development of novel platforms for the generation of stable producer cell clones. To that end, we describe the use of the piggybac transposon to generate stable producer cell clones for the production of SIN γ-retroviral vectors. The piggybac transposon is a universal tool allowing for the stable integration of SIN γ-retroviral constructs into murine (PG13) and human 293-based Phoenix (GALV and RD114, respectively) packaging cell lines without reverse transcription. Following transposition, a high-titer clone is selected for manufacture of a master cell bank and subsequent γ-retroviral vector supernatant production. Packaging cell clones created using the piggybac transposon have comparable titers to non-SIN vectors generated via conventional methods. We describe herein the use of the piggybac transposon for the production of stable packaging cell clones for the manufacture of clinical-grade SIN γ-retroviral vectors for ex vivo gene therapy clinical trials.

  12. Surface coating of siRNA-peptidomimetic nano-self-assemblies with anionic lipid bilayers: enhanced gene silencing and reduced adverse effects in vitro

    NASA Astrophysics Data System (ADS)

    Zeng, Xianghui; de Groot, Anne Marit; Sijts, Alice J. A. M.; Broere, Femke; Oude Blenke, Erik; Colombo, Stefano; van Eden, Willem; Franzyk, Henrik; Nielsen, Hanne Mørck; Foged, Camilla

    2015-11-01

    Cationic vectors have demonstrated the potential to facilitate intracellular delivery of therapeutic oligonucleotides. However, enhanced transfection efficiency is usually associated with adverse effects, which also proves to be a challenge for vectors based on cationic peptides. In this study a series of proteolytically stable palmitoylated α-peptide/β-peptoid peptidomimetics with a systematically varied number of repeating lysine and homoarginine residues was shown to self-assemble with small interfering RNA (siRNA). The resulting well-defined nanocomplexes were coated with anionic lipids giving rise to net anionic liposomes. These complexes and the corresponding liposomes were optimized towards efficient gene silencing and low adverse effects. The optimal anionic liposomes mediated a high silencing effect, which was comparable to that of the control (cationic Lipofectamine 2000), and did not display any noticeable cytotoxicity and immunogenicity in vitro. In contrast, the corresponding nanocomplexes mediated a reduced silencing effect with a more narrow safety window. The surface coating with anionic lipid bilayers led to partial decomplexation of the siRNA-peptidomimetic nanocomplex core of the liposomes, which facilitated siRNA release. Additionally, the optimal anionic liposomes showed efficient intracellular uptake and endosomal escape. Therefore, these findings suggest that a more efficacious and safe formulation can be achieved by surface coating of the siRNA-peptidomimetic nano-self-assemblies with anionic lipid bilayers.Cationic vectors have demonstrated the potential to facilitate intracellular delivery of therapeutic oligonucleotides. However, enhanced transfection efficiency is usually associated with adverse effects, which also proves to be a challenge for vectors based on cationic peptides. 
In this study a series of proteolytically stable palmitoylated α-peptide/β-peptoid peptidomimetics with a systematically varied number of repeating lysine and homoarginine residues was shown to self-assemble with small interfering RNA (siRNA). The resulting well-defined nanocomplexes were coated with anionic lipids giving rise to net anionic liposomes. These complexes and the corresponding liposomes were optimized towards efficient gene silencing and low adverse effects. The optimal anionic liposomes mediated a high silencing effect, which was comparable to that of the control (cationic Lipofectamine 2000), and did not display any noticeable cytotoxicity and immunogenicity in vitro. In contrast, the corresponding nanocomplexes mediated a reduced silencing effect with a more narrow safety window. The surface coating with anionic lipid bilayers led to partial decomplexation of the siRNA-peptidomimetic nanocomplex core of the liposomes, which facilitated siRNA release. Additionally, the optimal anionic liposomes showed efficient intracellular uptake and endosomal escape. Therefore, these findings suggest that a more efficacious and safe formulation can be achieved by surface coating of the siRNA-peptidomimetic nano-self-assemblies with anionic lipid bilayers. Electronic supplementary information (ESI) available: Non-fusogenic liposomes; cytotoxicity of naked siRNA and the empty vector; immunogenicity; low-magnification images; DOPE/DPPC liposomes. See DOI: 10.1039/c5nr04807a

  13. Engineered Salmonella enterica serovar Typhimurium overcomes limitations of anti-bacterial immunity in bacteria-mediated tumor therapy

    PubMed Central

    Felgner, Sebastian; Kocijancic, Dino; Frahm, Michael; Heise, Ulrike; Rohde, Manfred; Zimmermann, Kurt; Falk, Christine; Weiss, Siegfried

    2018-01-01

    ABSTRACT Cancer is one of the leading causes of death in the industrialized world and represents a tremendous social and economic burden. As conventional therapies fail to provide a sustainable cure for most cancer patients, the emerging unique immune therapeutic approach of bacteria-mediated tumor therapy (BMTT) is marching towards a feasible solution. Although promising results have been obtained with BMTT using various preclinical tumor models, for advancement a major concern is immunity against the bacterial vector itself. Pre-exposure to the therapeutic agent under field conditions is a reasonable expectation and may limit the therapeutic efficacy of BMTT. In the present study, we investigated the therapeutic potential of Salmonella and E. coli vector strains in naïve and immunized tumor bearing mice. Pre-exposure to the therapeutic agent caused a significant aberrant phenotype of the microenvironment of colonized tumors and limited the in vivo efficacy of established BMTT vector strains Salmonella SL7207 and E. coli Symbioflor-2. Using targeted genetic engineering, we generated the optimized auxotrophic Salmonella vector strain SF200 (ΔlpxR9 ΔpagL7 ΔpagP8 ΔaroA ΔydiV ΔfliF) harboring modifications in Lipid A and flagella synthesis. This combination of mutations resulted in an increased immune-stimulatory capacity and as such the strain was able to overcome the efficacy-limiting effects of pre-exposure. Thus, we conclude that any limitations of BMTT concerning anti-bacterial immunity may be countered by strategies that optimize the immune-stimulatory capacity of the attenuated vector strains. PMID:29308303

  14. Sodium Chloride Enhances Recombinant Adeno-Associated Virus Production in a Serum-Free Suspension Manufacturing Platform Using the Herpes Simplex Virus System

    PubMed Central

    Adamson-Small, Laura; Potter, Mark; Byrne, Barry J.; Clément, Nathalie

    2017-01-01

    The increase in effective treatments using recombinant adeno-associated viral (rAAV) vectors has underscored the importance of scalable, high-yield manufacturing methods. Previous work from this group reported the use of recombinant herpes simplex virus type 1 (rHSV) vectors to produce rAAV in adherent HEK293 cells, demonstrating the capacity of this system and quality of the product generated. Here we report production and optimization of rAAV using the rHSV system in suspension HEK293 cells (Expi293F) grown in serum and animal component-free medium. Through adjustment of salt concentration in the medium and optimization of infection conditions, titers greater than 1 × 1014 vector genomes per liter (VG/liter) were observed in purified rAAV stocks produced in Expi293F cells. Furthermore, this system allowed for high-titer production of multiple rAAV serotypes (2, 5, and 9) as well as multiple transgenes (green fluorescent protein and acid α-glucosidase). A proportional increase in vector production was observed as this method was scaled, with a final 3-liter shaker flask production yielding an excess of 1 × 1015 VG in crude cell harvests and an average of 3.5 × 1014 total VG of purified rAAV9 material, resulting in greater than 1 × 105 VG/cell. These results support the use of this rHSV-based rAAV production method for large-scale preclinical and clinical vector production. PMID:28117600

  15. An Interrupted Time Series Analysis of the State College Nuisance Property Ordinance and an Assessment of Rental Property Managers as Place Manager/Intimate Handler of Offender

    ERIC Educational Resources Information Center

    Koehle, Gregory M.

    2011-01-01

    This research involves a legal impact study of the State College Nuisance Property Ordinance and an assessment of State College Rental Property Managers in the role of place manager/intimate handler of offender. The impact of the Ordinance was assessed by employing an interrupted time series design which examined five years of pre-ordinance…

  16. Design and optimization of stress centralized MEMS vector hydrophone with high sensitivity at low frequency

    NASA Astrophysics Data System (ADS)

    Zhang, Guojun; Ding, Junwen; Xu, Wei; Liu, Yuan; Wang, Renxin; Han, Janjun; Bai, Bing; Xue, Chenyang; Liu, Jun; Zhang, Wendong

    2018-05-01

    A micro hydrophone based on piezoresistive effect, "MEMS vector hydrophone" was developed for acoustic detection application. To improve the sensitivity of MEMS vector hydrophone at low frequency, we reported a stress centralized MEMS vector hydrophone (SCVH) mainly used in 20-500 Hz. Stress concentration area was actualized in sensitive unit of hydrophone by silicon micromachining technology. Then piezoresistors were placed in stress concentration area for better mechanical response, thereby obtaining higher sensitivity. Static analysis was done to compare the mechanical response of three different sensitive microstructure: SCVH, conventional micro-silicon four-beam vector hydrophone (CFVH) and Lollipop-shaped vector hydrophone (LVH) respectively. And fluid-structure interaction (FSI) was used to analyze the natural frequency of SCVH for ensuring the measurable bandwidth. Eventually, the calibration experiment in standing wave field was done to test the property of SCVH and verify the accuracy of simulation. The results show that the sensitivity of SCVH has nearly increased by 17.2 dB in contrast to CFVH and 7.6 dB in contrast to LVH during 20-500 Hz.

  17. Production and purification of lentiviral vectors generated in 293T suspension cells with baculoviral vectors.

    PubMed

    Lesch, H P; Laitinen, A; Peixoto, C; Vicente, T; Makkonen, K-E; Laitinen, L; Pikkarainen, J T; Samaranayake, H; Alves, P M; Carrondo, M J T; Ylä-Herttuala, S; Airenne, K J

    2011-06-01

    Lentivirus can be engineered to be a highly potent vector for gene therapy applications. However, generation of clinical grade vectors in enough quantities for therapeutic use is still troublesome and limits the preclinical and clinical experiments. As a first step to solve this unmet need we recently introduced a baculovirus-based production system for lentiviral vector (LV) production using adherent cells. Herein, we have adapted and optimized the production of these vectors to a suspension cell culture system using recombinant baculoviruses delivering all elements required for a safe latest generation LV preparation. High-titer LV stocks were achieved in 293T cells grown in suspension. Produced viruses were accurately characterized and the functionality was also tested in vivo. Produced viruses were compared with viruses produced by calcium phosphate transfection method in adherent cells and polyethylenimine transfection method in suspension cells. Furthermore, a scalable and cost-effective capture purification step was developed based on a diethylaminoethyl monolithic column capable of removing most of the baculoviruses from the LV pool with 65% recovery.

  18. Development of Sendai Virus Vectors and their Potential Applications in Gene Therapy and Regenerative Medicine

    PubMed Central

    Nakanishi, Mahito; Otsu, Makoto

    2012-01-01

    Gene delivery/expression vectors have been used as fundamental technologies in gene therapy since the 1980s. These technologies are also being applied in regenerative medicine as tools to reprogram cell genomes to a pluripotent state and to other cell lineages. Rapid progress in these new research areas and expectations for their translation into clinical applications have facilitated the development of more sophisticated gene delivery/expression technologies. Since its isolation in 1953 in Japan, Sendai virus (SeV) has been widely used as a research tool in cell biology and in industry, but the application of SeV as a recombinant viral vector has been investigated only recently. Recombinant SeV vectors have various unique characteristics, such as low pathogenicity, powerful capacity for gene expression and a wide host range. In addition, the cytoplasmic gene expression mediated by this vector is advantageous for applications, in that chromosomal integration of exogenous genes can be undesirable. In this review, we introduce a brief historical background on the development of recombinant SeV vectors and describe their current applications in gene therapy. We also describe the application of SeV vectors in advanced nuclear reprogramming and introduce a defective and persistent SeV vector (SeVdp) optimized for such reprogramming. PMID:22920683

  19. Continuous Coordination Tools and their Evaluation

    NASA Astrophysics Data System (ADS)

    Sarma, Anita; Al-Ani, Ban; Trainer, Erik; Silva Filho, Roberto S.; da Silva, Isabella A.; Redmiles, David; van der Hoek, André

    This chapter discusses a set of co-ordination tools (the Continuous Co-ordination (CC) tool suite that includes Ariadne, Workspace Activity Viewer (WAV), Lighthouse, Palantír, and YANCEES) and details of our evaluation framework for these tools. Specifically, we discuss how we assessed the usefulness and the usability of these tools within the context of a predefined evaluation framework called DESMETDESMET . For example, for visualization tools we evaluated the suitability of the level of abstraction and the mode of displaying information of each tool. Whereas for an infrastructure tool we evaluate the effort required to implement co-ordination tools based on the given tool. We conclude with pointers on factors to consider when evaluating co-ordination tools in general.

  20. GIS insulation co-ordination: On-site tests and dielectric diagnostic techniques, a utility point of view

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sabot, A.; Petit, A.; Taillebois, J.P.

    1996-07-01

    This paper summarizes the Electricite de France experience with insulation co-ordination of GIS. After a review of the insulation co-ordination practice mainly dealing with fast front overvoltage and the one minute AC test, some results of the on-site test procedure applied since 30 years are presented and related to the insulation co-ordination practice. The in-service return of experience dealing with dielectric failures is analyzed then the dielectric diagnostic techniques now available are briefly presented with their possibilities and limitations. According to this survey, the expectations of EDF from these diagnostic techniques as well as the new on-site test and on-linemore » monitoring tendencies at EDF are presented.« less

  1. Genetic transformation of Fusarium avenaceum by Agrobacterium tumefaciens mediated transformation and the development of a USER-Brick vector construction system

    PubMed Central

    2014-01-01

    Background The plant pathogenic and saprophytic fungus Fusarium avenaceum causes considerable in-field and post-field losses worldwide due to its infections of a wide range of different crops. Despite its significant impact on the profitability of agriculture production and a desire to characterize the infection process at the molecular biological level, no genetic transformation protocol has yet been established for F. avenaceum. In the current study, it is shown that F. avenaceum can be efficiently transformed by Agrobacterium tumefaciens mediated transformation. In addition, an efficient and versatile single step vector construction strategy relying on Uracil Specific Excision Reagent (USER) Fusion cloning, is developed. Results The new vector construction system, termed USER-Brick, is based on a limited number of PCR amplified vector fragments (core USER-Bricks) which are combined with PCR generated fragments from the gene of interest. The system was found to have an assembly efficiency of 97% with up to six DNA fragments, based on the construction of 55 vectors targeting different polyketide synthase (PKS) and PKS associated transcription factor encoding genes in F. avenaceum. Subsequently, the ΔFaPKS3 vector was used for optimizing A. tumefaciens mediated transformation (ATMT) of F. avenaceum with respect to six variables. Acetosyringone concentration, co-culturing time, co-culturing temperature and fungal inoculum were found to significantly impact the transformation frequency. Following optimization, an average of 140 transformants per 106 macroconidia was obtained in experiments aimed at introducing targeted genome modifications. Targeted deletion of FaPKS6 (FA08709.2) in F. avenaceum showed that this gene is essential for biosynthesis of the polyketide/nonribosomal compound fusaristatin A. 
Conclusion The new USER-Brick system is highly versatile by allowing for the reuse of a common set of building blocks to accommodate seven different types of genome modifications. New USER-Bricks with additional functionality can easily be added to the system by future users. The optimized protocol for ATMT of F. avenaceum represents the first reported targeted genome modification by double homologous recombination of this plant pathogen and will allow for future characterization of this fungus. Functional linkage of FaPKS6 to the production of the mycotoxin fusaristatin A serves as a first testimony to this. PMID:25048842

  2. Knockdown of the bovine prion gene PRNP by RNA interference (RNAi) technology.

    PubMed

    Sutou, Shizuyo; Kunishi, Miho; Kudo, Toshiyuki; Wongsrikeao, Pimprapar; Miyagishi, Makoto; Otoi, Takeshige

    2007-07-26

    Since prion gene-knockout mice do not contract prion diseases and animals in which production of prion protein (PrP) is reduced by half are resistant to the disease, we hypothesized that bovine animals with reduced PrP would be tolerant to BSE. Hence, attempts were made to knock down bovine PRNP (bPRNP) by RNA interference (RNAi) technology. Before an in vivo study, optimal conditions for knocking down bPRNP were determined in cultured mammalian cell systems. Factors examined included siRNA (short interfering RNA) expression plasmid vectors, target sites of PRNP, and lengths of siRNAs. Four siRNA expression plasmid vectors were used: three harboring different cloning sites were driven by the human U6 promoter (hU6), and one by the human tRNAVal promoter. Six target sites of bovine PRNP were designed using an algorithm. From 1 (22 mer) to 9 (19, 20, 21, 22, 23, 24, 25, 27, and 29 mer) siRNA expression vectors were constructed for each target site. As targets of siRNA, the entire bPRNP coding sequence was connected to the reporter gene of the fluorescent EGFP, of firefly luciferase, or of Renilla luciferase. Target plasmid DNA was co-transfected with siRNA expression vector DNA into HeLaS3 cells, and fluorescence or luminescence was measured. The activities of siRNAs varied widely depending on the target sites, lengths of the siRNAs, and vectors used. Longer siRNAs were less effective, and 19 mer or 21 mer was generally optimal. Although the 21 mer GGGGAGAACTTCACCGAAACT expressed by a hU6-driven plasmid with a BspMI cloning site was best under the present experimental conditions, the corresponding tRNA promoter-driven plasmid was almost equally useful. The effectiveness of this siRNA was confirmed by immunostaining and Western blotting. In summary, four siRNA expression plasmid vectors, six target sites of bPRNP, and various lengths of siRNAs from 19 mer to 29 mer were examined to establish optimal conditions for knockdown of bPRNP in vitro. 
The most effective siRNA so far tested was 21 mer GGGGAGAACTTCACCGAAACT driven either by a hU6 or tRNA promoter, a finding that provides a basis for further studies in vivo.

  3. Research of facial feature extraction based on MMC

    NASA Astrophysics Data System (ADS)

    Xue, Donglin; Zhao, Jiufen; Tang, Qinhong; Shi, Shaokun

    2017-07-01

    Based on the maximum margin criterion (MMC), a new algorithm for statistically uncorrelated optimal discriminant vectors and a new algorithm for orthogonal optimal discriminant vectors for feature extraction were proposed. The purpose of the maximum margin criterion is to maximize the inter-class scatter while simultaneously minimizing the intra-class scatter after the projection. Compared with the original MMC method and the principal component analysis (PCA) method, the proposed methods are better at reducing or eliminating the statistical correlation between features and at improving the recognition rate. The experimental results on the Olivetti Research Laboratory (ORL) face database show that the new feature extraction method of statistically uncorrelated maximum margin criterion (SUMMC) is better in terms of recognition rate and stability. Besides, the relations between the maximum margin criterion and the Fisher criterion for feature extraction were revealed.
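    The core MMC idea can be sketched briefly. The following is an illustrative NumPy sketch, not the paper's SUMMC algorithm (whose decorrelation and orthogonalization steps are not detailed in this abstract): it takes the top eigenvectors of the scatter difference Sb - Sw as projection directions.

```python
import numpy as np

def mmc_directions(X, y, k):
    """Return the top-k eigenvectors of (Sb - Sw), the difference of
    between-class and within-class scatter, as projection directions."""
    classes = np.unique(y)
    mean_all = X.mean(axis=0)
    d = X.shape[1]
    Sb = np.zeros((d, d))  # between-class scatter
    Sw = np.zeros((d, d))  # within-class scatter
    for c in classes:
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        diff = (mc - mean_all)[:, None]
        Sb += len(Xc) * (diff @ diff.T)
        Sw += (Xc - mc).T @ (Xc - mc)
    # eigh returns eigenvalues in ascending order, so reverse the columns.
    vals, vecs = np.linalg.eigh(Sb - Sw)
    return vecs[:, ::-1][:, :k]

# Toy data: two well-separated 2-D classes.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.1, (20, 2)), rng.normal(5.0, 0.1, (20, 2))])
y = np.array([0] * 20 + [1] * 20)
W = mmc_directions(X, y, 1)
Z = X @ W  # 1-D features that preserve the class separation
```

Unlike the Fisher criterion, this formulation needs no inversion of Sw, which is why MMC avoids the small-sample singularity problem.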

  4. A Scatter-Based Prototype Framework and Multi-Class Extension of Support Vector Machines

    PubMed Central

    Jenssen, Robert; Kloft, Marius; Zien, Alexander; Sonnenburg, Sören; Müller, Klaus-Robert

    2012-01-01

    We provide a novel interpretation of the dual of support vector machines (SVMs) in terms of scatter with respect to class prototypes and their mean. As a key contribution, we extend this framework to multiple classes, providing a new joint Scatter SVM algorithm that matches its binary counterpart in the number of optimization variables. This enables us to implement computationally efficient solvers based on sequential minimal and chunking optimization. As a further contribution, the primal problem formulation is developed in terms of regularized risk minimization and the hinge loss, revealing the score function to be used in the actual classification of test patterns. We investigate Scatter SVM properties related to generalization ability, computational efficiency, sparsity and sensitivity maps, and report promising results. PMID:23118845

  5. CFD Research, Parallel Computation and Aerodynamic Optimization

    NASA Technical Reports Server (NTRS)

    Ryan, James S.

    1995-01-01

    During the last five years, CFD has matured substantially. Pure CFD research remains to be done, but much of the focus has shifted to the integration of CFD into the design process. The work under these cooperative agreements reflects this trend. The recent work, and the work that is planned, is designed to enhance the competitiveness of the US aerospace industry. CFD and optimization approaches are being developed and tested, so that industry can better choose which methods to adopt in its design processes. The range of computer architectures has been dramatically broadened, as the assumption that only huge vector supercomputers could be useful has faded. Today, researchers and industry can trade off time, cost, and availability, choosing vector supercomputers, scalable parallel architectures, networked workstations, or heterogeneous combinations of these to complete required computations efficiently.

  6. A Study on Aircraft Engine Control Systems for Integrated Flight and Propulsion Control

    NASA Astrophysics Data System (ADS)

    Yamane, Hideaki; Matsunaga, Yasushi; Kusakawa, Takeshi; Yasui, Hisako

    The Integrated Flight and Propulsion Control (IFPC) of a highly maneuverable aircraft and a fighter-class engine with pitch/yaw thrust vectoring is described. Of the two IFPC functions, aircraft maneuver control utilizes thrust vectoring based on the aerodynamic control surface/thrust vectoring control allocation specified by the Integrated Control Unit (ICU) of a FADEC (Full Authority Digital Electronic Control) system. In the Performance Seeking Control (PSC) function, on the other hand, the ICU identifies various characteristic changes of the engine, optimizes the manipulated variables, and finally adjusts the engine control parameters in cooperation with the Engine Control Unit (ECU). It is shown by hardware-in-the-loop simulation that thrust vectoring can enhance aircraft maneuverability/agility and that the PSC can improve engine performance parameters such as SFC (specific fuel consumption), thrust and gas temperature.

  7. Hybrid NN/SVM Computational System for Optimizing Designs

    NASA Technical Reports Server (NTRS)

    Rai, Man Mohan

    2009-01-01

    A computational method and system based on a hybrid of an artificial neural network (NN) and a support vector machine (SVM) (see figure) has been conceived as a means of maximizing or minimizing an objective function, optionally subject to one or more constraints. Such maximization or minimization could be performed, for example, to solve a data-regression or data-classification problem or to optimize a design associated with a response function. A response function can be considered as a subset of a response surface, which is a surface in a vector space of design and performance parameters. A typical example of a design problem that the method and system can be used to solve is that of an airfoil, for which a response function could be the spatial distribution of pressure over the airfoil. In this example, the response surface would describe the pressure distribution as a function of the operating conditions and the geometric parameters of the airfoil. The use of NNs to analyze physical objects in order to optimize their responses under specified physical conditions is well known. NN analysis is suitable for multidimensional interpolation of data that lack structure and enables the representation and optimization of a succession of numerical solutions of increasing complexity or increasing fidelity to the real world. NN analysis is especially useful in helping to satisfy multiple design objectives. Feedforward NNs can be used to make estimates based on nonlinear mathematical models. One difficulty associated with the use of a feedforward NN arises from the need for nonlinear optimization to determine connection weights among input, intermediate, and output variables. It can be very expensive to train an NN in cases in which it is necessary to model large amounts of information. Less widely known (in comparison with NNs) are support vector machines (SVMs), which were originally applied in statistical learning theory. 
In terms that are necessarily oversimplified to fit the scope of this article, an SVM can be characterized as an algorithm that (1) effects a nonlinear mapping of input vectors into a higher-dimensional feature space and (2) involves a dual formulation of governing equations and constraints. One advantageous feature of the SVM approach is that an objective function (which one seeks to minimize to obtain coefficients that define an SVM mathematical model) is convex, so that unlike in the cases of many NN models, any local minimum of an SVM model is also a global minimum.
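    Point (1), the nonlinear mapping into a higher-dimensional feature space, can be illustrated with a toy example (the data and the feature map below are invented for illustration only): adding a squared-radius feature makes radially separated classes linearly separable.

```python
import numpy as np

# Inner points (class -1) vs. outer points (class +1): not linearly
# separable in the original 2-D space.
X = np.array([[0.1, 0.1], [-0.1, 0.2], [2.0, 0.0], [0.0, -2.0]])
y = np.array([-1, -1, 1, 1])

# Map each point (x1, x2) to (x1, x2, x1^2 + x2^2): append its squared radius.
phi = lambda p: np.array([p[0], p[1], p[0] ** 2 + p[1] ** 2])
Z = np.array([phi(p) for p in X])

# In the feature space a single plane z3 = 1 separates the classes.
w, b = np.array([0.0, 0.0, 1.0]), -1.0
preds = np.sign(Z @ w + b)
```

In a real SVM the mapping is applied implicitly through a kernel function, so the higher-dimensional coordinates are never computed explicitly.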

  8. Matrix Multiplication Algorithm Selection with Support Vector Machines

    DTIC Science & Technology

    2015-05-01

    libraries that could intelligently choose the optimal algorithm for a particular set of inputs. Users would be oblivious to the underlying algorithmic...SAT.” J. Artif . Intell. Res.(JAIR), vol. 32, pp. 565–606, 2008. [9] M. G. Lagoudakis and M. L. Littman, “Algorithm selection using reinforcement...Artificial Intelligence , vol. 21, no. 05, pp. 961–976, 2007. [15] C.-C. Chang and C.-J. Lin, “LIBSVM: A library for support vector machines,” ACM

  9. Massively Parallel Solution of Poisson Equation on Coarse Grain MIMD Architectures

    NASA Technical Reports Server (NTRS)

    Fijany, A.; Weinberger, D.; Roosta, R.; Gulati, S.

    1998-01-01

    In this paper a new algorithm, designated the Fast Invariant Imbedding algorithm, for the solution of the Poisson equation on vector and massively parallel MIMD architectures is presented. This algorithm achieves the same optimal computational efficiency as other fast Poisson solvers while offering a much better structure for vector and parallel implementation. Our implementation on the Intel Delta and Paragon shows that a speedup of over two orders of magnitude can be achieved even for moderate-size problems.

  10. Preclinical studies for a phase 1 clinical trial of autologous hematopoietic stem cell gene therapy for sickle cell disease.

    PubMed

    Urbinati, Fabrizia; Wherley, Jennifer; Geiger, Sabine; Fernandez, Beatriz Campo; Kaufman, Michael L; Cooper, Aaron; Romero, Zulema; Marchioni, Filippo; Reeves, Lilith; Read, Elizabeth; Nowicki, Barbara; Grassman, Elke; Viswanathan, Shivkumar; Wang, Xiaoyan; Hollis, Roger P; Kohn, Donald B

    2017-09-01

    Gene therapy by autologous hematopoietic stem cell transplantation (HSCT) represents a new approach to treat sickle cell disease (SCD). Optimization of the manufacture, characterization and testing of the transduced hematopoietic stem cell final cell product (FCP), as well as an in-depth in vivo toxicology study, are critical for advancing this approach to clinical trials. Data are shown to evaluate and establish the feasibility of isolating, transducing with the Lenti/βAS3-FB vector and cryopreserving CD34+ cells from human bone marrow (BM) at clinical scale. In vitro and in vivo characterization of the FCP was performed, showing that all the release criteria were successfully met. In vivo toxicology studies were conducted to evaluate potential toxicity of the Lenti/βAS3-FB LV in the context of a murine BM transplant. Primary and secondary transplantation did not reveal any toxicity from the lentiviral vector. Additionally, vector integration site analysis of murine and human BM cells did not show any clonal skewing caused by insertion of the Lenti/βAS3-FB vector in cells from primary and secondary transplanted mice. We present here a complete protocol, thoroughly optimized to manufacture, characterize and establish safety of a FCP for gene therapy of SCD. Copyright © 2017 International Society for Cellular Therapy. Published by Elsevier Inc. All rights reserved.

  11. Synthesis and optimization of cholesterol-based diquaternary ammonium Gemini Surfactant (Chol-GS) as a new gene delivery vector.

    PubMed

    Kim, Bieong-Kil; Doh, Kyung-Oh; Bae, Yun-Ui; Seu, Young-Bae

    2011-01-01

    Amongst a number of potential nonviral vectors, cationic liposomes have been actively researched, with both gemini surfactants and bola amphiphiles reported to possess good structures in terms of cell viability and in vitro transfection. In this study, a cholesterol-based diquaternary ammonium gemini surfactant (Chol-GS) was synthesized and assessed as a novel nonviral gene vector. Chol-GS was synthesized from cholesterol by way of four reaction steps. The optimal efficiency was found at a 1:4 weight ratio of lipid:DOPE (1,2-dioleoyl-L-alpha-glycero-3-phosphatidylethanolamine), and at a ratio of between 10:1~15:1 of liposome:DNA. The transfection efficiency was compared with that of the commercial liposomes Lipofectamine, 1,2-dimyristyloxypropyl-3-dimethylhydroxyethylammonium bromide (DMRIE-C), and N-[1-(2,3-dioleoyloxy)propyl]-N,N,N-trimethylammonium chloride (DOTAP). The results indicate that the efficiency of Chol-GS is greater than that of all the tested commercial liposomes in COS7 and Huh7 cells, and higher than that of DOTAP and Lipofectamine in A549 cells. These findings were confirmed through green fluorescent protein expression. Chol-GS exhibited only a moderate level of cytotoxicity at the optimum concentrations for efficient transfection, indicating good cell viability. Hence, the newly synthesized Chol-GS liposome has the potential to be an excellent nonviral vector for gene delivery.

  12. Rational plasmid design and bioprocess optimization to enhance recombinant adeno-associated virus (AAV) productivity in mammalian cells.

    PubMed

    Emmerling, Verena V; Pegel, Antje; Milian, Ernest G; Venereo-Sanchez, Alina; Kunz, Marion; Wegele, Jessica; Kamen, Amine A; Kochanek, Stefan; Hoerer, Markus

    2016-02-01

    Viral vectors used for gene and oncolytic therapy belong to the most promising biological products for future therapeutics. Clinical success of recombinant adeno-associated virus (rAAV)-based therapies raises considerable demand for viral vectors, which cannot be met by current manufacturing strategies. Addressing existing bottlenecks, we improved a plasmid system termed rep/cap split packaging and designed a minimal plasmid encoding adenoviral helper function. Plasmid modifications led to a 12-fold increase in rAAV vector titers compared to the widely used pDG standard system. Evaluation of different production approaches revealed superiority of processes based on anchorage- and serum-dependent HEK293T cells, exhibiting about 15-fold higher specific and volumetric productivity compared to well-established suspension cells cultivated in serum-free medium. As for most other viral vectors, classical stirred-tank bioreactor production is thus still not capable of providing drug product of sufficient amount. We show that manufacturing strategies employing classical surface-providing culture systems can be successfully transferred to the new fully-controlled, single-use bioreactor system Integrity(TM) iCELLis(TM). In summary, we demonstrate substantial bioprocess optimizations leading to more efficient and scalable production processes, suggesting a promising way toward flexible large-scale rAAV manufacturing. Copyright © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  13. Multiple sensor fault diagnosis for dynamic processes.

    PubMed

    Li, Cheng-Chih; Jeng, Jyh-Cheng

    2010-10-01

    Modern industrial plants are usually large-scale and contain a great number of sensors. Sensor fault diagnosis is crucial and necessary for process safety and optimal operation. This paper proposes a systematic approach to detect, isolate and identify multiple sensor faults for multivariate dynamic systems. The current work first defines deviation vectors for sensor observations, and further defines and derives the basic sensor fault matrix (BSFM), consisting of the normalized basic fault vectors, by several different methods. By projecting a process deviation vector onto the space spanned by the BSFM, this research uses a vector with the resulting weights in each direction for multiple sensor fault diagnosis. This study also proposes a novel monitoring index and derives the corresponding sensor fault detectability. The study further utilizes that vector to isolate and identify multiple sensor faults, and discusses isolatability and identifiability. Simulation examples and comparisons with two conventional PCA-based contribution plots are presented to demonstrate the effectiveness of the proposed methodology. Copyright © 2010 ISA. Published by Elsevier Ltd. All rights reserved.
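    The projection step can be sketched as follows. This is a minimal illustration under a simplifying assumption: the BSFM is taken as the identity, i.e. each basic fault vector is a unit vector corresponding to a single-sensor bias. The paper's construction of the BSFM and its monitoring index are more general.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5  # number of sensors

# Illustrative BSFM: column i is the direction of a bias fault in sensor i.
F = np.eye(n)

# Simulated deviation vector: biases of 3.0 in sensor 1 and 1.5 in sensor 4,
# plus small measurement noise.
e = 3.0 * F[:, 1] + 1.5 * F[:, 4] + rng.normal(0.0, 0.05, n)

# Project the deviation vector onto the space spanned by the BSFM:
# least squares gives the weight on each fault direction.
w, *_ = np.linalg.lstsq(F, e, rcond=None)

# Thresholding the weights isolates the faulty sensors.
faulty = np.flatnonzero(np.abs(w) > 0.5)
```

With a non-orthogonal BSFM the same least-squares projection applies, but overlapping fault directions then limit isolatability, which is what the paper's analysis addresses.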

  14. Recent Advances in Preclinical Developments Using Adenovirus Hybrid Vectors.

    PubMed

    Ehrke-Schulz, Eric; Zhang, Wenli; Gao, Jian; Ehrhardt, Anja

    2017-10-01

    Adenovirus (Ad)-based vectors are efficient gene-transfer vehicles for delivering foreign DNA into living organisms, offering large cargo capacity and low immunogenicity and genotoxicity. Because Ad shows low integration rates of its genome into host chromosomes, vector-derived gene expression decreases due to continuous cell cycling in regenerating tissues and dividing cell populations. To overcome this hurdle, adenoviral delivery can be combined with mechanisms leading to maintenance of the therapeutic DNA and long-term effects of the desired treatment. Several hybrid Ad vectors (AdV) exploiting various strategies for long-term treatment have been developed and characterized. This review summarizes recent developments of preclinical approaches using hybrid AdVs utilizing either the Sleeping Beauty transposase system for somatic integration into host chromosomes or designer nucleases, including transcription activator-like effector nucleases and clustered regularly interspaced short palindromic repeats/CRISPR-associated protein 9 nuclease, for permanent gene editing. Options for further optimization of these vectors are discussed, which may lead to future clinical applications of these versatile gene-therapy tools.

  15. Protection of Chickens against Avian Influenza with Non-Replicating Adenovirus-Vectored Vaccine

    PubMed Central

    Toro, Haroldo; Tang, De-chu C.; Suarez, David L.; Shi, Z.

    2009-01-01

    Protective immunity against avian influenza (AI) virus was elicited in chickens by single-dose vaccination with a replication-competent adenovirus (RCA)-free human adenovirus (Ad) vector encoding an H7 AI hemagglutinin (AdChNY94.H7). Chickens vaccinated in ovo with the previously described Ad vector encoding an AI H5 (AdTW68.H5), and subsequently vaccinated intramuscularly with AdChNY94.H7 post-hatch, responded with robust antibody titers against both the H5 and H7 AI proteins. Antibody responses to Ad vector in ovo vaccination followed dose-response kinetics. A synthetic AI H5 gene codon-optimized to match the chicken cell tRNA pool was more potent than the cognate H5 gene. The use of Ad-vectored vaccines to increase the resistance of chicken populations against multiple AI strains could reduce the risk of an avian-originating influenza pandemic in humans. PMID:18384919

  16. Nonlinear optimization with linear constraints using a projection method

    NASA Technical Reports Server (NTRS)

    Fox, T.

    1982-01-01

    Nonlinear optimization problems that are encountered in science and industry are examined. A method of projecting the gradient vector onto a set of linear constraints is developed, and a program that uses this method is presented. The algorithm that generates this projection matrix is based on the Gram-Schmidt method and overcomes some of the objections to the Rosen projection method.
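    For reference, the projection of a gradient onto the null space of active linear constraints A x = b can be written as P g with P = I - A^T (A A^T)^-1 A, so that a step along the projected gradient preserves feasibility. The sketch below uses a direct inverse for clarity; the report's Gram-Schmidt construction builds the same projector more stably.

```python
import numpy as np

def projected_gradient(grad, A):
    """Project `grad` onto the null space of the constraint matrix A,
    using P = I - A^T (A A^T)^-1 A (assumes A has full row rank)."""
    AAt_inv = np.linalg.inv(A @ A.T)
    P = np.eye(A.shape[1]) - A.T @ AAt_inv @ A
    return P @ grad

A = np.array([[1.0, 1.0, 1.0]])   # single constraint: x1 + x2 + x3 = const
g = np.array([3.0, 0.0, 0.0])     # raw gradient
gp = projected_gradient(g, A)     # feasible descent direction
# gp is orthogonal to the constraint normal, so A @ gp is (numerically) zero.
```

Moving from x along -gp reduces the objective to first order while keeping A x = b satisfied, which is the essence of gradient-projection methods.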

  17. Optimizing structure of complex technical system by heterogeneous vector criterion in interval form

    NASA Astrophysics Data System (ADS)

    Lysenko, A. V.; Kochegarov, I. I.; Yurkov, N. K.; Grishko, A. K.

    2018-05-01

    The article examines methods for the development and multi-criteria choice of the preferred structural variant of a complex technical system at the early stages of its life cycle, in the absence of sufficient knowledge of the parameters and variables for optimizing this structure. The suggested method takes into consideration the various fuzzy input data connected with the heterogeneous quality criteria of the designed system and the parameters set by their variation ranges. The suggested approach is based on the combined use of methods of interval analysis, fuzzy set theory, and decision-making theory. As a result, a method for normalizing heterogeneous quality criteria has been developed on the basis of establishing preference relations in interval form. The method of building preference relations in interval form on the basis of the vector of heterogeneous quality criteria suggests the use of membership functions instead of coefficients weighting the criteria values. The former show the degree of proximity of the realization of the designed system to the efficient or Pareto-optimal variants. The study analyzes an example of choosing the optimal variant for a complex system using heterogeneous quality criteria.

  18. A Memetic Algorithm for Global Optimization of Multimodal Nonseparable Problems.

    PubMed

    Zhang, Geng; Li, Yangmin

    2016-06-01

    Avoiding entrapment in local optima is a major challenge, especially for high-dimensional nonseparable problems in which the interdependencies among vector elements are unknown. In order to improve the performance of optimization algorithms, a novel memetic algorithm (MA) called cooperative particle swarm optimizer-modified harmony search (CPSO-MHS) is proposed in this paper, where the CPSO is used for local search and the MHS for global search. The CPSO, as a local search method, uses a 1-D swarm to search each dimension separately and thus converges fast. Besides, it can obtain global optimum elements according to our experimental results and analyses. The MHS implements the global search by recombining different vector elements and extracting global optimum elements. The interaction between local search and global search creates a set of local search zones, where global optimum elements reside within the search space. The CPSO-MHS algorithm is tested and compared with seven other optimization algorithms on a set of 28 standard benchmarks. Meanwhile, some MAs are also compared according to the results derived directly from their corresponding references. The experimental results demonstrate the good performance of the proposed CPSO-MHS algorithm in solving multimodal nonseparable problems.
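    For orientation, a minimal generic particle swarm optimizer is sketched below. This is not the CPSO-MHS algorithm itself, which adds per-dimension cooperative search and a modified harmony search on top of ideas like these; all parameter values here are illustrative defaults.

```python
import numpy as np

def pso(f, dim, n=30, iters=200, seed=0):
    """Minimal global-best particle swarm optimizer minimizing f."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5.0, 5.0, (n, dim))   # particle positions
    v = np.zeros((n, dim))                 # particle velocities
    pbest = x.copy()                       # personal best positions
    pval = np.apply_along_axis(f, 1, x)    # personal best values
    g = pbest[pval.argmin()]               # global best position
    for _ in range(iters):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        # Inertia + cognitive pull toward pbest + social pull toward g.
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = x + v
        fx = np.apply_along_axis(f, 1, x)
        improved = fx < pval
        pbest[improved], pval[improved] = x[improved], fx[improved]
        g = pbest[pval.argmin()]
    return g, float(pval.min())

sphere = lambda z: float(np.sum(z * z))    # separable toy benchmark
best, best_val = pso(sphere, dim=5)
```

On nonseparable problems this plain global-best update tends to stagnate, which is precisely the weakness the cooperative per-dimension search and harmony-search recombination in CPSO-MHS are designed to address.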

  19. Detecting glaucomatous change in visual fields: Analysis with an optimization framework.

    PubMed

    Yousefi, Siamak; Goldbaum, Michael H; Varnousfaderani, Ehsan S; Belghith, Akram; Jung, Tzyy-Ping; Medeiros, Felipe A; Zangwill, Linda M; Weinreb, Robert N; Liebmann, Jeffrey M; Girkin, Christopher A; Bowd, Christopher

    2015-12-01

    Detecting glaucomatous progression is an important aspect of glaucoma management. The assessment of longitudinal series of visual fields, measured using Standard Automated Perimetry (SAP), is considered the reference standard for this effort. We seek efficient techniques for determining progression from longitudinal visual fields by formulating the problem as an optimization framework, learned from a population of glaucoma data. The longitudinal data from each patient's eye were used in a convex optimization framework to find a vector that is representative of the progression direction of the sample population, as a whole. Post-hoc analysis of longitudinal visual fields across the derived vector led to optimal progression (change) detection. The proposed method was compared to recently described progression detection methods and to linear regression of instrument-defined global indices, and showed slightly higher sensitivities at the highest specificities than other methods (a clinically desirable result). The proposed approach is simpler, faster, and more efficient for detecting glaucomatous changes, compared to our previously proposed machine learning-based methods, although it provides somewhat less information. This approach has potential application in glaucoma clinics for patient monitoring and in research centers for classification of study participants. Copyright © 2015 Elsevier Inc. All rights reserved.

  20. A novel non-toxic combined CTA1-DD and ISCOMS adjuvant vector for effective mucosal immunization against influenza virus.

    PubMed

    Eliasson, Dubravka Grdic; Helgeby, Anja; Schön, Karin; Nygren, Caroline; El-Bakkouri, Karim; Fiers, Walter; Saelens, Xavier; Lövgren, Karin Bengtsson; Nyström, Ida; Lycke, Nils Y

    2011-05-23

    Here we demonstrate that, by using non-toxic fractions of saponin combined with CTA1-DD, we can achieve a safe and, above all, highly efficacious mucosal adjuvant vector. We optimized the construction, tested the requirements for function and evaluated proof-of-concept in an influenza A virus challenge model. We demonstrated that the CTA1-3M2e-DD/ISCOMS vector provided 100% protection against mortality and greatly reduced morbidity in the mouse model. The immunogenicity of the vector was superior to other vaccine formulations using the ISCOM or CTA1-DD adjuvants alone. The versatility of the vector was best exemplified by the many options to insert, incorporate or admix vaccine antigens with the vector. Furthermore, the CTA1-3M2e-DD/ISCOMS could be kept for 1 year at 4°C or as a freeze-dried powder without affecting the immunogenicity or adjuvanticity of the vector. Strong serum IgG and mucosal IgA responses were elicited, and CD4 T cell responses were greatly enhanced after intranasal administration of the combined vector. Together these findings hold promise for the combined vector as a mucosal vaccine against influenza virus infections, including pandemic influenza. The CTA1-DD/ISCOMS technology represents a breakthrough in mucosal vaccine vector design which successfully combines immunomodulation and targeting in a safe and stable particulate formulation. Copyright © 2011 Elsevier Ltd. All rights reserved.
