Sample records for minimum rank problem

  1. An Optimization-Based Method for Feature Ranking in Nonlinear Regression Problems.

    PubMed

    Bravi, Luca; Piccialli, Veronica; Sciandrone, Marco

    2017-04-01

    In this paper, we consider the feature ranking problem, where, given a set of training instances, the task is to associate a score with the features in order to assess their relevance. Feature ranking is a very important tool for decision support systems, and may be used as an auxiliary step of feature selection to reduce the high dimensionality of real-world data. We focus on regression problems by assuming that the process underlying the generated data can be approximated by a continuous function (for instance, a feedforward neural network). We formally state the notion of relevance of a feature by introducing a minimum zero-norm inversion problem of a neural network, which is a nonsmooth, constrained optimization problem. We employ a concave approximation of the zero-norm function, and we define a smooth, global optimization problem to be solved in order to assess the relevance of the features. We present the new feature ranking method based on the solution of instances of the global optimization problem depending on the available training data. Computational experiments on both artificial and real data sets are performed, and point out that the proposed feature ranking method is a valid alternative to existing methods in terms of effectiveness. The obtained results also show that the method is costly in terms of CPU time, and this may be a limitation in the solution of large-dimensional problems.
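
    The core construct here, a smooth concave surrogate of the zero-norm, is easy to illustrate. Below is a minimal sketch assuming the common exponential approximation ||x||_0 ≈ Σ_i (1 − exp(−α|x_i|)); the abstract does not name the specific surrogate the authors use, so the function and the α values are illustrative.

      import numpy as np

      def zero_norm_surrogate(x, alpha=5.0):
          # Concave approximation of the zero-norm ||x||_0:
          # sum_i (1 - exp(-alpha * |x_i|)). Smooth, and it tends to the
          # exact count of nonzero entries as alpha grows.
          return np.sum(1.0 - np.exp(-alpha * np.abs(x)))

      x = np.array([0.0, 0.01, 2.0, -3.0])  # three nonzero entries
      for alpha in (1.0, 5.0, 50.0):
          print(alpha, zero_norm_surrogate(x, alpha))  # approaches ||x||_0 = 3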

  2. Zero point and zero suffix methods with robust ranking for solving fully fuzzy transportation problems

    NASA Astrophysics Data System (ADS)

    Ngastiti, P. T. B.; Surarso, Bayu; Sutimin

    2018-05-01

    Transportation issues in distribution problems, such as moving commodities or goods from supply to demand, aim to minimize transportation costs. A fuzzy transportation problem is one in which the transport costs, supply and demand are given as fuzzy quantities. In the case study at CV. Bintang Anugerah Elektrik, a company engaged in the manufacture of gensets that has more than one distributor, we use the zero point and zero suffix methods to find the minimum transportation cost. In implementing both methods, we use the robust ranking technique for the defuzzification process. The study results show that the zero suffix method requires fewer iterations than the zero point method.
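
    The robust ranking index used for defuzzification is R(ã) = ∫₀¹ ½(a_α^L + a_α^U) dα over the α-cuts of ã. A minimal sketch for triangular fuzzy numbers (the abstract does not state the membership shapes used in the case study, so the triangular form and the sample cost are assumptions):

      import numpy as np

      def robust_rank_triangular(a1, a2, a3, n=1001):
          # Robust ranking R(a) = integral over alpha in [0, 1] of
          # 0.5 * (lower + upper), where the alpha-cut of the triangular
          # fuzzy number (a1, a2, a3) is
          # [a1 + alpha * (a2 - a1), a3 - alpha * (a3 - a2)].
          alphas = np.linspace(0.0, 1.0, n)
          integrand = 0.5 * ((a1 + alphas * (a2 - a1)) + (a3 - alphas * (a3 - a2)))
          return integrand.mean()  # exact here: the integrand is linear in alpha

      # Defuzzify a hypothetical fuzzy unit cost (4, 6, 8);
      # the closed form is (a1 + 2 * a2 + a3) / 4 = 6.0.
      print(robust_rank_triangular(4, 6, 8))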

  3. Ship detection in satellite imagery using rank-order greyscale hit-or-miss transforms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harvey, Neal R; Porter, Reid B; Theiler, James

    2010-01-01

    Ship detection from satellite imagery is something that has great utility in various communities. Knowing where ships are and their types provides useful intelligence information. However, detecting and recognizing ships is a difficult problem. Existing techniques suffer from too many false alarms. We describe approaches we have taken in trying to build ship detection algorithms that have reduced false alarms. Our approach uses a version of the grayscale morphological Hit-or-Miss transform. While this is well known and used in its standard form, we use a version in which we use a rank-order selection for the dilation and erosion parts of the transform, instead of the standard maximum and minimum operators. This provides some slack in the fitting that the algorithm employs and provides a method for tuning the algorithm's performance for particular detection problems. We describe our algorithms, show the effect of the rank-order parameter on the algorithm's performance and illustrate the use of this approach for real ship detection problems with panchromatic satellite imagery.
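
    The rank-order substitution is straightforward to sketch with percentile filters standing in for erosion (strict minimum) and dilation (strict maximum). This is a schematic reading of the transform with arbitrary window sizes and percentiles, not the authors' tuned detector:

      import numpy as np
      from scipy.ndimage import percentile_filter

      def rank_order_hit_or_miss(img, fg_size=3, bg_size=9, lo=10, hi=90):
          # Greyscale hit-or-miss response: fit to a bright foreground via a
          # rank-order "erosion" (low percentile instead of the minimum),
          # minus fit to the dark background via a rank-order "dilation"
          # (high percentile instead of the maximum). The gap between
          # lo/hi and 0/100 is the slack, i.e. the tunable rank parameter.
          fg_fit = percentile_filter(img, percentile=lo, size=fg_size)
          bg_fit = percentile_filter(img, percentile=hi, size=bg_size)
          return fg_fit - bg_fit  # large where a ship-like blob sits on dark sea

      img = np.random.rand(64, 64)  # stand-in for a panchromatic image chip
      response = rank_order_hit_or_miss(img)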

  4. Gas chimney detection based on improving the performance of combined multilayer perceptron and support vector classifier

    NASA Astrophysics Data System (ADS)

    Hashemi, H.; Tax, D. M. J.; Duin, R. P. W.; Javaherian, A.; de Groot, P.

    2008-11-01

    Seismic object detection is a relatively new field in which 3-D bodies are visualized and spatial relationships between objects of different origins are studied in order to extract geologic information. In this paper, we propose a method for finding an optimal classifier with the help of a statistical feature ranking technique and combining different classifiers. The method, which has general applicability, is demonstrated here on a gas chimney detection problem. First, we evaluate a set of input seismic attributes extracted at locations labeled by a human expert using regularized discriminant analysis (RDA). In order to find the RDA score for each seismic attribute, forward and backward search strategies are used. Subsequently, two non-linear classifiers: multilayer perceptron (MLP) and support vector classifier (SVC) are run on the ranked seismic attributes. Finally, to capitalize on the intrinsic differences between both classifiers, the MLP and SVC results are combined using logical rules of maximum, minimum and mean. The proposed method optimizes the ranked feature space size and yields the lowest classification error in the final combined result. We will show that the logical minimum reveals gas chimneys that exhibit both the softness of MLP and the resolution of SVC classifiers.

  5. Entanglement Distillation from Greenberger-Horne-Zeilinger Shares

    NASA Astrophysics Data System (ADS)

    Vrana, Péter; Christandl, Matthias

    2017-06-01

    We study the problem of converting a product of Greenberger-Horne-Zeilinger (GHZ) states shared by subsets of several parties in an arbitrary way into GHZ states shared by every party. Such a state can be described by a hypergraph on the parties as vertices and with each hyperedge corresponding to a GHZ state shared among the parties incident with it. Our result is that if SLOCC transformations are allowed, then the best asymptotic rate is the minimum of bipartite log-ranks of the initial state, which in turn equals the minimum cut of the hypergraph. This generalizes a result by Strassen on the asymptotic subrank of the matrix multiplication tensor.
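
    On small instances the rate formula can be checked by brute force: a GHZ state crossing a bipartition contributes one unit of bipartite log-rank, so the optimal SLOCC rate is the minimum cut of the hypergraph. A sketch (exponential enumeration, for illustration only):

      from itertools import combinations

      def hypergraph_min_cut(parties, hyperedges):
          # Minimum, over all bipartitions (S, complement of S), of the
          # number of hyperedges with vertices on both sides; each crossing
          # hyperedge is a GHZ state contributing bipartite log-rank 1.
          parties = list(parties)
          best = float("inf")
          for r in range(1, len(parties)):
              for subset in combinations(parties, r):
                  S = set(subset)
                  cut = sum(1 for e in hyperedges if set(e) & S and set(e) - S)
                  best = min(best, cut)
          return best

      # GHZ states shared on {A,B}, {B,C} and {A,B,C}: the minimum cut is 2
      print(hypergraph_min_cut("ABC", ["AB", "BC", "ABC"]))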

  6. On the rank-distance median of 3 permutations.

    PubMed

    Chindelevitch, Leonid; Pereira Zanetti, João Paulo; Meidanis, João

    2018-05-08

    Recently, Pereira Zanetti, Biller and Meidanis have proposed a new definition of a rearrangement distance between genomes. In this formulation, each genome is represented as a matrix, and the distance d is the rank distance between these matrices. Although defined in terms of matrices, the rank distance is equal to the minimum total weight of a series of weighted operations that leads from one genome to the other, including inversions, translocations, transpositions, and others. The computational complexity of the median-of-three problem according to this distance is currently unknown. The genome matrices are a special kind of permutation matrix, which we study in this paper. In their paper, the authors provide an [Formula: see text] algorithm for determining three candidate medians, prove the tight approximation ratio [Formula: see text], and provide a sufficient condition for their candidates to be true medians. They also conduct some experiments that suggest that their method is accurate on simulated and real data. In this paper, we extend their results and provide the following: (1) three invariants characterizing the problem of finding the median of 3 matrices; (2) a sufficient condition for uniqueness of medians that can be checked in O(n); (3) a faster, [Formula: see text] algorithm for determining the median under this condition; (4) a new heuristic algorithm for this problem based on compressed sensing; and (5) a [Formula: see text] algorithm that exactly solves the problem when the inputs are orthogonal matrices, a class that includes both permutations and genomes as special cases. Our work provides the first proof that, with respect to the rank distance, the problem of finding the median of 3 genomes, as well as the median of 3 permutations, is exactly solvable in polynomial time, a result which should be contrasted with its NP-hardness for the DCJ (double cut-and-join) distance and most other families of genome rearrangement operations. This result, backed by our experimental tests, indicates that the rank distance is a viable alternative to the DCJ distance widely used in genome comparisons.
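
    For plain permutations, the rank distance itself is one line of linear algebra: represent each permutation as a 0/1 matrix and take the rank of the difference. A minimal sketch (the genome matrices of the paper are a richer special case of this):

      import numpy as np

      def rank_distance(perm_a, perm_b):
          # d(A, B) = rank(A - B), where row i of the permutation matrix A
          # is the unit vector with a 1 in column perm_a[i].
          n = len(perm_a)
          A = np.eye(n)[perm_a]
          B = np.eye(n)[perm_b]
          return np.linalg.matrix_rank(A - B)

      # A single swap gives distance 1; in general the rank distance equals
      # n minus the number of cycles of the composed permutation.
      print(rank_distance([0, 1, 2, 3], [1, 0, 2, 3]))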

  7. Dispatching power system for preventive and corrective voltage collapse problem in a deregulated power system

    NASA Astrophysics Data System (ADS)

    Alemadi, Nasser Ahmed

    Deregulation has brought opportunities for increasing efficiency of production and delivery and reduced costs to customers. Deregulation has also brought great challenges to provide the reliability and security customers have come to expect and demand from the electrical delivery system. One of the challenges in the deregulated power system is voltage instability. Voltage instability has become the principal constraint on power system operation for many utilities. Voltage instability is a unique problem because it can produce an uncontrollable, cascading instability that results in blackout for a large region or an entire country. In this work we define a system of advanced analytical methods and tools for secure and efficient operation of the power system in the deregulated environment. The work consists of two modules: (a) a contingency selection module and (b) a security constrained optimization module. The contingency selection module to be used for voltage instability is the Voltage Stability Security Assessment and Diagnosis (VSSAD). VSSAD shows that each voltage control area and its reactive reserve basin describe a subsystem or agent that has a unique voltage instability problem. VSSAD identifies each such agent, assesses proximity to voltage instability for each agent, and ranks voltage instability agents for each contingency simulated. Contingency selection and ranking for each agent is also performed. Diagnosis of where, why, when, and what can be done to cure voltage instability for each equipment outage and transaction change combination that has no load flow solution is also performed. The security constrained optimization module developed solves a minimum control solvability problem. A minimum control solvability problem obtains the reactive reserves, through action of voltage control devices, that VSSAD determines are needed in each agent to obtain solution of the load flow. VSSAD makes a physically impossible recommendation of adding reactive generation capability to specific generators to allow a load flow solution to be obtained. The minimum control solvability problem can also obtain solution of the load flow without curtailing transactions that shed load and generation as recommended by VSSAD. A minimum control solvability problem will be implemented as a corrective control that will achieve the above objectives by using minimum control changes. The control includes: (1) voltage setpoints on generator bus voltage terminals; (2) under-load tap changer tap positions and switchable shunt capacitors; and (3) active generation at generator buses. The minimum control solvability problem uses the VSSAD recommendation to obtain the feasible stable starting point but completely eliminates the impossible or onerous recommendation made by VSSAD. This thesis reviews the capabilities of Voltage Stability Security Assessment and Diagnosis and how it can be used to implement a contingency selection module for the Open Access System Dispatch (OASYDIS). The OASYDIS will also use the corrective control computed by security constrained dispatch. The corrective control would be computed off line and stored for each contingency that produces voltage instability. The control is triggered and implemented to correct the voltage instability in the agent experiencing voltage instability only after the equipment outage or operating changes predicted to produce voltage instability have occurred. The advantages and the requirements to implement the corrective control are also discussed.

  8. Interval MULTIMOORA method with target values of attributes based on interval distance and preference degree: biomaterials selection

    NASA Astrophysics Data System (ADS)

    Hafezalkotob, Arian; Hafezalkotob, Ashkan

    2017-06-01

    A target-based MADM method covers beneficial and non-beneficial attributes besides target values for some attributes. Such techniques are considered comprehensive forms of MADM approaches. Target-based MADM methods can also be used in traditional decision-making problems in which only beneficial and non-beneficial attributes exist. In many practical selection problems, some attributes have given target values. The values of the decision matrix and target-based attributes can be provided as intervals in some such problems. Some target-based decision-making methods have recently been developed; however, a research gap exists in the area of MADM techniques with target-based attributes under uncertainty of information. We extend the MULTIMOORA method for solving practical material selection problems in which material properties and their target values are given as interval numbers. We employ various concepts of interval computations to reduce degeneration of uncertain data. In this regard, we use interval arithmetic and introduce an innovative formula for the interval distance of interval numbers to create an interval target-based normalization technique. Furthermore, we use a pairwise preference matrix based on the concept of degree of preference of interval numbers to calculate the maximum, minimum, and ranking of these numbers. Two decision-making problems regarding biomaterials selection of hip and knee prostheses are discussed. Preference degree-based ranking lists for subordinate parts of the extended MULTIMOORA method are generated by calculating the relative degrees of preference for the arranged assessment values of the biomaterials. The resultant rankings for the problem are compared with the outcomes of other target-based models in the literature.
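
    For intuition, one common possibility-degree formula for comparing two intervals reads P(a ≥ b) = min(max((a⁺ − b⁻)/((a⁺ − a⁻) + (b⁺ − b⁻)), 0), 1). The paper introduces its own preference-degree construction, so the sketch below is only a representative stand-in:

      def preference_degree(a, b):
          # Possibility degree P(a >= b) for intervals a = [al, au] and
          # b = [bl, bu]; a standard formula from the interval-comparison
          # literature, not necessarily the one used in the paper.
          al, au = a
          bl, bu = b
          la, lb = au - al, bu - bl
          if la + lb == 0:  # both intervals degenerate to crisp numbers
              return 0.5 if al == bl else float(al > bl)
          return min(max((au - bl) / (la + lb), 0.0), 1.0)

      print(preference_degree((2.0, 5.0), (3.0, 4.0)))  # 0.5: heavily overlapping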

  9. Traveling salesman problems with PageRank Distance on complex networks reveal community structure

    NASA Astrophysics Data System (ADS)

    Jiang, Zhongzhou; Liu, Jing; Wang, Shuai

    2016-12-01

    In this paper, we propose a new algorithm for community detection problems (CDPs) based on traveling salesman problems (TSPs), labeled as TSP-CDA. Since TSPs need to find a tour with minimum cost, cities close to each other are usually clustered in the tour. This inspired us to model CDPs as TSPs by taking each vertex as a city. Then, in the final tour, the vertices in the same community tend to cluster together, and the community structure can be obtained by cutting the tour into a couple of paths. There are two challenges. The first is to define a suitable distance between each pair of vertices which can reflect the probability that they belong to the same community. The second is to design a suitable strategy to cut the final tour into paths which can form communities. In TSP-CDA, we deal with these two challenges by defining a PageRank Distance and an automatic threshold-based cutting strategy. The PageRank Distance is designed with the intrinsic properties of CDPs in mind, and can be calculated efficiently. In the experiments, benchmark networks with 1000-10,000 nodes and varying structures are used to test the performance of TSP-CDA. A comparison is also made between TSP-CDA and two well-established community detection algorithms. The results show that TSP-CDA can find accurate community structure efficiently and outperforms the two existing algorithms.
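
    The abstract does not give the closed form of the PageRank Distance, but the quantity it is built on, PageRank itself, is cheap to compute by power iteration; a minimal sketch of that building block:

      import numpy as np

      def pagerank(adj, d=0.85, tol=1e-10):
          # Power-iteration PageRank on an adjacency matrix (dangling
          # nodes are ignored for brevity in this sketch).
          n = adj.shape[0]
          out = adj.sum(axis=1, keepdims=True)
          P = np.divide(adj, out, out=np.zeros_like(adj), where=out > 0)
          r = np.full(n, 1.0 / n)
          while True:
              r_next = (1.0 - d) / n + d * (P.T @ r)
              if np.abs(r_next - r).sum() < tol:
                  return r_next
              r = r_next

      adj = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]], dtype=float)
      print(pagerank(adj))  # the hub vertex 0 receives the largest score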

  10. System analysis for technology transfer readiness assessment of horticultural postharvest

    NASA Astrophysics Data System (ADS)

    Hayuningtyas, M.; Djatna, T.

    2018-04-01

    The availability of postharvest technology is becoming abundant, but only a few technologies are applicable and useful for wider community purposes. This problem calls for a reliable approach to assessing technology transfer readiness. The system assesses the readiness of a technology on levels 1-9 and minimizes the time of technology transfer at every level, so that the time a technology requires from the selection process onward can be minimal. The problem was solved by using the Relief method to determine a ranking, weighting feasible criteria of postharvest technology at each level, and PERT (Program Evaluation Review Technique) for scheduling. The ranking results show that the examined postharvest technology in the field of horticulture is able to pass level 7. Thus, the technology can be developed further to pilot scale, and the time required for technological readiness is minimized on the PERT schedule, with an optimistic time of 7.9 years. Readiness level 9 indicates that a technology has been tested under actual conditions, together with an estimated production price compared to competitors. This system can be used to determine the readiness of technology innovations that are derived from agricultural raw materials and pass certain stages.
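
    The PERT step rests on the classical three-point estimate; a minimal sketch of the formula (the 7.9-year optimistic time is taken from the abstract, the other two durations are made up for illustration):

      def pert_estimate(optimistic, most_likely, pessimistic):
          # Classical PERT estimate: t_e = (o + 4m + p) / 6,
          # with variance ((p - o) / 6) ** 2.
          t_e = (optimistic + 4.0 * most_likely + pessimistic) / 6.0
          variance = ((pessimistic - optimistic) / 6.0) ** 2
          return t_e, variance

      # Hypothetical activity durations in years.
      print(pert_estimate(7.9, 9.0, 11.0))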

  11. Imaging Tasks Scheduling for High-Altitude Airship in Emergency Condition Based on Energy-Aware Strategy

    PubMed Central

    Zhimeng, Li; Chuan, He; Dishan, Qiu; Jin, Liu; Manhao, Ma

    2013-01-01

    Aiming at the imaging task scheduling problem for a high-altitude airship in emergency conditions, programming models are constructed by analyzing the main constraints, taking the maximum task benefit and the minimum energy consumption as two optimization objectives. Firstly, a hierarchical architecture is adopted to convert this scheduling problem into three subproblems: task ranking, valuable task detection, and energy conservation optimization. Then, algorithms are designed for the subproblems, and their solving results correspond to a feasible solution, an efficient solution, and an optimized solution of the original problem, respectively. This paper introduces in detail the energy-aware optimization strategy, which can rationally adjust the airship's cruising speed based on the distribution of task deadlines, so as to decrease the total energy consumption caused by cruising activities. Finally, the application results and comparison analysis show that the proposed strategy and algorithm are effective and feasible. PMID:23864822

  12. Variational Quantum Tomography with Incomplete Information by Means of Semidefinite Programs

    NASA Astrophysics Data System (ADS)

    Maciel, Thiago O.; Cesário, André T.; Vianna, Reinaldo O.

    We introduce a new method to reconstruct unknown quantum states out of incomplete and noisy information. The method is a linear convex optimization problem, therefore with a unique minimum, which can be efficiently solved with Semidefinite Programs. Numerical simulations indicate that the estimated state does not overestimate purity, nor the expectation values of optimal entanglement witnesses. The convergence properties of the method are similar to those of compressed sensing approaches, in the sense that, in order to reconstruct low rank states, it needs just a fraction of the effort corresponding to an informationally complete measurement.
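
    A hedged sketch of the general idea, not the authors' exact variational program: find a physical density matrix (positive semidefinite, unit trace) consistent with a few noisy expectation values, posed as a convex program in cvxpy. The operator set and the measured values are hypothetical.

      import numpy as np
      import cvxpy as cp

      d = 2
      X = np.array([[0, 1], [1, 0]], dtype=complex)   # Pauli X
      Z = np.array([[1, 0], [0, -1]], dtype=complex)  # Pauli Z
      ops, meas = [X, Z], [0.60, 0.70]  # hypothetical incomplete, noisy data

      rho = cp.Variable((d, d), hermitian=True)
      residual = sum(cp.abs(cp.trace(O @ rho) - m) for O, m in zip(ops, meas))
      prob = cp.Problem(cp.Minimize(residual),
                        [rho >> 0, cp.trace(rho) == 1])
      prob.solve()  # convex; handled by an SDP-capable solver such as SCS
      print(np.round(rho.value, 3))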

  13. Finding minimum gene subsets with heuristic breadth-first search algorithm for robust tumor classification

    PubMed Central

    2012-01-01

    Background Previous studies on tumor classification based on gene expression profiles suggest that gene selection plays a key role in improving the classification performance. Moreover, finding important tumor-related genes with the highest accuracy is a very important task because these genes might serve as tumor biomarkers, which is of great benefit to not only tumor molecular diagnosis but also drug development. Results This paper proposes a novel gene selection method with rich biomedical meaning based on Heuristic Breadth-first Search Algorithm (HBSA) to find as many optimal gene subsets as possible. Due to the curse of dimensionality, this type of method could suffer from over-fitting and selection bias problems. To address these potential problems, a HBSA-based ensemble classifier is constructed using majority voting strategy from individual classifiers constructed by the selected gene subsets, and a novel HBSA-based gene ranking method is designed to find important tumor-related genes by measuring the significance of genes using their occurrence frequencies in the selected gene subsets. The experimental results on nine tumor datasets including three pairs of cross-platform datasets indicate that the proposed method can not only obtain better generalization performance but also find many important tumor-related genes. Conclusions It is found that the frequencies of the selected genes follow a power-law distribution, indicating that only a few top-ranked genes can be used as potential diagnosis biomarkers. Moreover, the top-ranked genes leading to very high prediction accuracy are closely related to specific tumor subtype and even hub genes. Compared with other related methods, the proposed method can achieve higher prediction accuracy with fewer genes. Moreover, they are further justified by analyzing the top-ranked genes in the context of individual gene function, biological pathway, and protein-protein interaction network. PMID:22830977

  14. Lazy orbits: An optimization problem on the sphere

    NASA Astrophysics Data System (ADS)

    Vincze, Csaba

    2018-01-01

    Non-transitive subgroups of the orthogonal group play an important role in non-Euclidean geometry. If G is a closed subgroup of the orthogonal group such that the orbit of a single Euclidean unit vector does not cover the (Euclidean) unit sphere centered at the origin, then there always exists a non-Euclidean Minkowski functional such that the elements of G preserve the Minkowskian length of vectors. In other words, the Minkowski geometry is an alternative to the Euclidean geometry for the subgroup G. It is rich in isometries if G is "close enough" to the orthogonal group or at least to one of its transitive subgroups. The measure of non-transitivity is related to the Hausdorff distances of the orbits under the elements of G to the Euclidean sphere. Its maximum/minimum belongs to the so-called lazy/busy orbits, i.e. they are the solutions of an optimization problem on the Euclidean sphere. The extremal distances allow us to characterize the reducible/irreducible subgroups. We also formulate an upper and a lower bound for the ratio of the extremal distances. As another application of the analytic tools we introduce the rank of a closed non-transitive group G. We shall see that if G is of maximal rank then it is finite or reducible. Since the reducible and the finite subgroups form two natural prototypes of non-transitive subgroups, the rank seems to be a fundamental notion in their characterization. Closed, non-transitive groups of rank n - 1 will also be characterized. Using the general results we classify all their possible types in the lower dimensional cases n = 2, 3 and 4. Finally we present some applications of the results to the holonomy group of a metric linear connection on a connected Riemannian manifold.

  15. Robust MST-Based Clustering Algorithm.

    PubMed

    Liu, Qidong; Zhang, Ruisheng; Zhao, Zhili; Wang, Zhenghai; Jiao, Mengyao; Wang, Guangjing

    2018-06-01

    Minimax similarity stresses the connectedness of points via mediating elements rather than favoring high mutual similarity. This grouping principle yields superior clustering results when mining arbitrarily-shaped clusters in data. However, it is not robust against noises and outliers in the data. There are two main problems with the grouping principle: first, a single object that is far away from all other objects defines a separate cluster, and second, two connected clusters would be regarded as two parts of one cluster. In order to solve such problems, we propose a robust minimum spanning tree (MST)-based clustering algorithm in this letter. First, we separate the connected objects by applying a density-based coarsening phase, resulting in a low-rank matrix in which each element denotes a supernode formed by combining a set of nodes. Then a greedy method is presented to partition those supernodes by working on the low-rank matrix. Instead of removing the longest edges from the MST, our algorithm groups the data set based on minimax similarity. Finally, the assignment of all data points can be achieved through their corresponding supernodes. Experimental results on many synthetic and real-world data sets show that our algorithm consistently outperforms the compared clustering algorithms.
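
    For contrast, the classical MST baseline that the letter argues against (build the tree, then delete the longest edges) fits in a few lines; a sketch, assuming n_clusters >= 2:

      import numpy as np
      from scipy.sparse.csgraph import minimum_spanning_tree, connected_components
      from scipy.spatial.distance import pdist, squareform

      def mst_cut_clusters(X, n_clusters=2):
          # Classical baseline: Euclidean MST, remove the (n_clusters - 1)
          # heaviest edges, label the remaining connected components. The
          # letter replaces this edge-cutting rule with grouping by
          # minimax similarity on density-coarsened supernodes.
          mst = minimum_spanning_tree(squareform(pdist(X))).toarray()
          edges = np.argwhere(mst > 0)
          order = np.argsort(mst[mst > 0])  # row-major, aligned with argwhere
          for i, j in edges[order[-(n_clusters - 1):]]:
              mst[i, j] = 0.0
          return connected_components(mst, directed=False)[1]

      rng = np.random.default_rng(0)
      X = np.vstack([rng.normal(0, 1, (20, 2)), rng.normal(6, 1, (20, 2))])
      print(mst_cut_clusters(X, 2))  # two well-separated blobs -> two labels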

  16. Problems in the Study of lineaments

    NASA Astrophysics Data System (ADS)

    Anokhin, Vladimir; Kholmyanskii, Michael

    2015-04-01

    The study of linear objects in the upper crust, called lineaments, led at one time to major scientific results: the discovery of the planetary regmatic network, the birth of new tectonic concepts, and the establishment of new search criteria for mineral deposits. But lineaments are now studied too little for such a promising research direction. Lineament geomorphology has a number of problems.

    1. Terminology problems. The lineament theme still has no generally accepted terminology base. Different scientists interpret even the definition of a lineament differently. We offer an expanded definition: lineaments are linear features of the earth's crust, expressed by linear landforms, linear geological forms, or linear anomalies of physical fields, which may follow each other and are associated with faults. The term "lineament" is not identical to the term "fault", but a lineament is always a reasonable suspicion of a fault, and this suspicion is justified in most cases. The structure of a lineament may include only objects that, at least presumably, can be attributed to deep processes. Specialists in the lineament theme can overcome the terminological problems if they create a common terminology database together.

    2. Methodological problems. The procedure of manual lineament selection mainly consists of drawing straight line segments along the axes of linear morphostructures on some cartographic basis. The subjective factors of manual selection can be reduced by following a few simple rules:
    - the choice of an optimal projection, scale and quality of the cartographic basis;
    - selection of the optimal type of linear objects under study;
    - the establishment of boundary conditions for lineament allocation (minimum length, maximum bending, minimum length-to-width ratio, etc.);
    - allocation of an increasing number of lineaments, for representative sampling and to reduce the influence of random errors;
    - ranking of lineaments: fine lines (rank 3) are combined to form larger lineaments of rank 2, which in turn are combined into large lineaments of rank 1;
    - correlation of the resulting pattern of lineaments with the pattern of already known faults in the study area;
    - separate allocation of lineaments by several experts, with correlation of the resulting schemes to create a common scheme.
    The problem of computer-based lineament allocation is not solved yet. Existing programs for lineament analysis are not so perfect that we can rely on them completely: in any of them, by changing the initial parameters, we can obtain lineament patterns of any desired configuration, and there is a high probability of heavy and hardly recognizable systematic errors. In any case, computer lineament patterns should be subjected to expert examination after their creation.

    3. Interpretive problems. To minimize distortion of the results of lineament analysis, it is advisable to stick to a few techniques and rules:
    - use of visualization techniques, in particular rose diagrams, which present the azimuths and lengths of the selected lineaments;
    - consistent downscaling of the analysis, with a preliminary analysis of a larger area that includes the area of interest and its surroundings;
    - use of the available information on the location of already known faults and other linear tectonic objects of the study area;
    - comparison of the lineament scheme with the schemes of other authors, which can reduce the element of subjectivity in the schemes.

    The study of lineaments is a very promising direction of geomorphology and tectonics. The challenges facing the lineament theme are solvable. To solve them, professionals should meet and talk to each other. The results of further work in this direction may exceed expectations.

  17. Vessel network detection using contour evolution and color components

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ushizima, Daniela; Medeiros, Fatima; Cuadros, Jorge

    2011-06-22

    Automated retinal screening relies on vasculature segmentation before the identification of other anatomical structures of the retina. Vasculature extraction can also be input to image quality ranking, neovascularization detection and image registration, among other applications. There is an extensive literature related to this problem, often excluding the inherent heterogeneity of ophthalmic clinical images. The contribution of this paper relies on an algorithm using front propagation to segment the vessel network. The algorithm includes a penalty in the wait queue on the fast marching heap to minimize leakage of the evolving interface. The method requires no manual labeling, a minimum number of parameters, and it is capable of segmenting color ocular fundus images in real scenarios, where multi-ethnicity and brightness variations are parts of the problem.

  18. Ranking metrics in gene set enrichment analysis: do they matter?

    PubMed

    Zyla, Joanna; Marczyk, Michal; Weiner, January; Polanska, Joanna

    2017-05-12

    There exist many methods for describing the complex relation between changes of gene expression in molecular pathways or gene ontologies under different experimental conditions. Among them, Gene Set Enrichment Analysis seems to be one of the most commonly used (over 10,000 citations). An important parameter, which could affect the final result, is the choice of a metric for the ranking of genes. Applying a default ranking metric may lead to poor results. In this work 28 benchmark data sets were used to evaluate the sensitivity and false positive rate of gene set analysis for 16 different ranking metrics, including new proposals. Furthermore, the robustness of the chosen methods to sample size was tested. Using the k-means clustering algorithm, a group of four metrics with the highest performance in terms of overall sensitivity, overall false positive rate and computational load was established, i.e. the absolute value of the Moderated Welch Test statistic, the Minimum Significant Difference, the absolute value of the Signal-To-Noise ratio and the Baumgartner-Weiss-Schindler test statistic. In the case of false positive rate estimation, all selected ranking metrics were robust with respect to sample size. In the case of sensitivity, the absolute value of the Moderated Welch Test statistic and the absolute value of the Signal-To-Noise ratio gave stable results, while the Baumgartner-Weiss-Schindler and Minimum Significant Difference showed better results for larger sample sizes. Finally, the Gene Set Enrichment Analysis method with all tested ranking metrics was parallelised and implemented in MATLAB, and is available at https://github.com/ZAEDPolSl/MrGSEA . Choosing a ranking metric in Gene Set Enrichment Analysis has a critical impact on the results of pathway enrichment analysis. The absolute value of the Moderated Welch Test has the best overall sensitivity and the Minimum Significant Difference has the best overall specificity of gene set analysis. When the number of non-normally distributed genes is high, using the Baumgartner-Weiss-Schindler test statistic gives better outcomes. It also finds more enriched pathways than the other tested metrics, which may induce new biological discoveries.
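
    Of the four recommended metrics, the absolute Signal-To-Noise ratio is the simplest to state per gene; a minimal sketch with made-up data:

      import numpy as np

      def abs_signal_to_noise(expr, labels):
          # |S2N| per gene (row): |mean_A - mean_B| / (std_A + std_B),
          # the classic GSEA ranking metric evaluated between two
          # phenotype groups.
          A = expr[:, labels == 0]
          B = expr[:, labels == 1]
          s2n = (A.mean(axis=1) - B.mean(axis=1)) / (
              A.std(axis=1, ddof=1) + B.std(axis=1, ddof=1))
          return np.abs(s2n)

      rng = np.random.default_rng(1)
      expr = rng.normal(size=(100, 12))        # 100 genes, 12 samples
      labels = np.array([0] * 6 + [1] * 6)     # two phenotype groups
      gene_order = np.argsort(-abs_signal_to_noise(expr, labels))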

  19. A new method of optimal capacitor switching based on minimum spanning tree theory in distribution systems

    NASA Astrophysics Data System (ADS)

    Li, H. W.; Pan, Z. Y.; Ren, Y. B.; Wang, J.; Gan, Y. L.; Zheng, Z. Z.; Wang, W.

    2018-03-01

    According to the radial operation characteristics of distribution systems, this paper proposes a new method based on the minimum spanning tree method for optimal capacitor switching. Firstly, taking minimal active power loss as the objective function and not considering the capacity constraints of capacitors and source, this paper uses the Prim algorithm, one of the minimum spanning tree algorithms, to obtain the power supply ranges of the capacitors and source. Then, with the capacity constraints of capacitors considered, capacitors are ranked by the method of breadth-first search. In order of the capacitor ranking from high to low, the capacitor compensation capacity is calculated based on their power supply ranges. Finally, the IEEE 69-bus system is adopted to test the accuracy and practicality of the proposed algorithm.
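
    The first stage is Prim's algorithm; a generic sketch over an edge list (the paper runs it on the feeder graph, with weights tied to active power loss):

      import heapq

      def prim_mst(n, edges, root=0):
          # Prim's algorithm on an undirected weighted graph given as
          # (u, v, weight) tuples; grows the tree from `root` and returns
          # the MST edges in the order they are added.
          adj = [[] for _ in range(n)]
          for u, v, w in edges:
              adj[u].append((w, u, v))
              adj[v].append((w, v, u))
          visited, mst = {root}, []
          heap = adj[root][:]
          heapq.heapify(heap)
          while heap and len(visited) < n:
              w, u, v = heapq.heappop(heap)
              if v in visited:
                  continue
              visited.add(v)
              mst.append((u, v, w))
              for e in adj[v]:
                  heapq.heappush(heap, e)
          return mst

      print(prim_mst(4, [(0, 1, 1.0), (1, 2, 2.0), (0, 2, 4.0), (2, 3, 1.5)]))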

  20. Model of Decision Making through Consensus in Ranking Case

    NASA Astrophysics Data System (ADS)

    Tarigan, Gim; Darnius, Open

    2018-01-01

    The basic problem in determining a ranking consensus is to combine the rankings decided by two or more decision makers (DMs) into one consensus ranking. DMs are frequently asked to present their preferences over a group of objects in terms of ranks, for example to determine a new project, a new product, a candidate in an election, and so on. Ranking problems can be classified into two major categories, namely cardinal and ordinal rankings. The objective of the study is to obtain the ranking consensus by applying some algorithms and methods. The algorithms and methods used in this study were a partial algorithm, optimal ranking consensus, and the BAK (Borda-Kendall) model. A method proposed as an alternative for ranking consensus is the Weighted Distance Forward-Backward (WDFB) method, which gave a slightly different ranking consensus result compared to the result of the example solved by Cook, et al. (2005).
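
    The Borda half of the BAK construction is the classical Borda count; a sketch of that baseline, which the WDFB method refines:

      from collections import defaultdict

      def borda_consensus(rankings):
          # Each ranking awards n - position points to an item; the group
          # ranking orders items by total points.
          scores = defaultdict(int)
          for r in rankings:
              n = len(r)
              for pos, item in enumerate(r):
                  scores[item] += n - pos
          return sorted(scores, key=scores.get, reverse=True)

      # Three DMs rank three candidate projects.
      print(borda_consensus([["a", "b", "c"], ["b", "a", "c"], ["a", "c", "b"]]))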

  1. A novel three-stage distance-based consensus ranking method

    NASA Astrophysics Data System (ADS)

    Aghayi, Nazila; Tavana, Madjid

    2018-05-01

    In this study, we propose a three-stage weighted sum method for identifying the group ranks of alternatives. In the first stage, a rank matrix, similar to the cross-efficiency matrix, is obtained by computing the individual rank position of each alternative based on importance weights. In the second stage, a secondary goal is defined to limit the vector of weights since the vector of weights obtained in the first stage is not unique. Finally, in the third stage, the group rank position of alternatives is obtained based on a distance of individual rank positions. The third stage determines a consensus solution for the group so that the ranks obtained have a minimum distance from the ranks acquired by each alternative in the previous stage. A numerical example is presented to demonstrate the applicability and exhibit the efficacy of the proposed method and algorithms.

  2. Connectivity ranking of heterogeneous random conductivity models

    NASA Astrophysics Data System (ADS)

    Rizzo, C. B.; de Barros, F.

    2017-12-01

    To overcome the challenges associated with hydrogeological data scarcity, the hydraulic conductivity (K) field is often represented by a spatial random process. The state-of-the-art provides several methods to generate 2D or 3D random K-fields, such as the classic multi-Gaussian fields or non-Gaussian fields, training image-based fields and object-based fields. We provide a systematic comparison of these models based on their connectivity. We use the minimum hydraulic resistance as a connectivity measure, which has been found to be strictly correlated with early time arrival of dissolved contaminants. A computationally efficient graph-based algorithm is employed, allowing a stochastic treatment of the minimum hydraulic resistance through a Monte-Carlo approach and therefore enabling the computation of its uncertainty. The results show the impact of geostatistical parameters on the connectivity for each group of random fields, making it possible to rank the fields according to their minimum hydraulic resistance.
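
    A graph-based sketch in the spirit of the abstract (not the authors' exact algorithm): treat each cell of the conductivity field as a node, weight edges by mean cell resistance 1/K, and take the cheapest left-to-right path as the minimum hydraulic resistance. Repeating this over many random K-fields gives the Monte-Carlo treatment described above.

      import numpy as np
      from scipy.sparse import coo_matrix
      from scipy.sparse.csgraph import dijkstra

      def min_hydraulic_resistance(K):
          # Cheapest path from any left-edge cell to any right-edge cell of
          # a 2D conductivity field K, with edge cost 0.5 * (1/K_i + 1/K_j).
          ny, nx = K.shape
          R = (1.0 / K).ravel()
          idx = np.arange(ny * nx).reshape(ny, nx)
          pairs = [(idx[:, :-1].ravel(), idx[:, 1:].ravel()),   # horizontal steps
                   (idx[:-1, :].ravel(), idx[1:, :].ravel())]   # vertical steps
          src = np.concatenate([p[0] for p in pairs])
          dst = np.concatenate([p[1] for p in pairs])
          graph = coo_matrix((0.5 * (R[src] + R[dst]), (src, dst)),
                             shape=(ny * nx, ny * nx))
          dist = dijkstra(graph, directed=False, indices=idx[:, 0])
          return dist[:, idx[:, -1]].min()

      K = np.random.lognormal(mean=0.0, sigma=1.0, size=(30, 30))  # one realization
      print(min_hydraulic_resistance(K))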

  3. Calibration of the clock-phase biases of GNSS networks: the closure-ambiguity approach

    NASA Astrophysics Data System (ADS)

    Lannes, A.; Prieur, J.-L.

    2013-08-01

    In global navigation satellite systems (GNSS), the problem of retrieving clock-phase biases from network data has a basic rank defect. We analyse the different ways of removing this rank defect, and define a particular strategy for obtaining these phase biases in a standard form. The minimum-constrained problem to be solved in the least-squares (LS) sense depends on some integer vector which can be fixed in an arbitrary manner. We propose to solve the problem via an undifferenced approach based on the notion of closure ambiguity. We present a theoretical justification of this closure-ambiguity approach (CAA), and the main elements for a practical implementation. The links with other methods are also established. We analyse all those methods in a unified interpretative framework, and derive functional relations between the corresponding solutions and our CAA solution. This could be interesting for many GNSS applications like real-time kinematic PPP for instance. To compare the methods providing LS estimates of clock-phase biases, we define a particular solution playing the role of reference solution. For this solution, when a phase bias is estimated for the first time, its fractional part is confined to the one-cycle width interval centred on zero; the integer-ambiguity set is modified accordingly. Our theoretical study is illustrated with some simple and generic examples; it could have applications in data processing of most GNSS networks, and particularly global networks using GPS, Glonass, Galileo, or BeiDou/Compass satellites.

  4. Multiple graph regularized protein domain ranking.

    PubMed

    Wang, Jim Jing-Yan; Bensmail, Halima; Gao, Xin

    2012-11-19

    Protein domain ranking is a fundamental task in structural biology. Most protein domain ranking methods rely on the pairwise comparison of protein domains while neglecting the global manifold structure of the protein domain database. Recently, graph regularized ranking that exploits the global structure of the graph defined by the pairwise similarities has been proposed. However, the existing graph regularized ranking methods are very sensitive to the choice of the graph model and parameters, and this remains a difficult problem for most of the protein domain ranking methods. To tackle this problem, we have developed the Multiple Graph regularized Ranking algorithm, MultiG-Rank. Instead of using a single graph to regularize the ranking scores, MultiG-Rank approximates the intrinsic manifold of protein domain distribution by combining multiple initial graphs for the regularization. Graph weights are learned with ranking scores jointly and automatically, by alternately minimizing an objective function in an iterative algorithm. Experimental results on a subset of the ASTRAL SCOP protein domain database demonstrate that MultiG-Rank achieves a better ranking performance than single graph regularized ranking methods and pairwise similarity based ranking methods. The problem of graph model and parameter selection in graph regularized protein domain ranking can be solved effectively by combining multiple graphs. This aspect of generalization introduces a new frontier in applying multiple graphs to solving protein domain ranking applications.

  5. Multiple graph regularized protein domain ranking

    PubMed Central

    2012-01-01

    Background Protein domain ranking is a fundamental task in structural biology. Most protein domain ranking methods rely on the pairwise comparison of protein domains while neglecting the global manifold structure of the protein domain database. Recently, graph regularized ranking that exploits the global structure of the graph defined by the pairwise similarities has been proposed. However, the existing graph regularized ranking methods are very sensitive to the choice of the graph model and parameters, and this remains a difficult problem for most of the protein domain ranking methods. Results To tackle this problem, we have developed the Multiple Graph regularized Ranking algorithm, MultiG-Rank. Instead of using a single graph to regularize the ranking scores, MultiG-Rank approximates the intrinsic manifold of protein domain distribution by combining multiple initial graphs for the regularization. Graph weights are learned with ranking scores jointly and automatically, by alternately minimizing an objective function in an iterative algorithm. Experimental results on a subset of the ASTRAL SCOP protein domain database demonstrate that MultiG-Rank achieves a better ranking performance than single graph regularized ranking methods and pairwise similarity based ranking methods. Conclusion The problem of graph model and parameter selection in graph regularized protein domain ranking can be solved effectively by combining multiple graphs. This aspect of generalization introduces a new frontier in applying multiple graphs to solving protein domain ranking applications. PMID:23157331

  6. FSMRank: feature selection algorithm for learning to rank.

    PubMed

    Lai, Han-Jiang; Pan, Yan; Tang, Yong; Yu, Rong

    2013-06-01

    In recent years, there has been growing interest in learning to rank. The introduction of feature selection into different learning problems has been proven effective. These facts motivate us to investigate the problem of feature selection for learning to rank. We propose a joint convex optimization formulation which minimizes ranking errors while simultaneously conducting feature selection. This optimization formulation provides a flexible framework in which we can easily incorporate various importance measures and similarity measures of the features. To solve this optimization problem, we use the Nesterov's approach to derive an accelerated gradient algorithm with a fast convergence rate O(1/T(2)). We further develop a generalization bound for the proposed optimization problem using the Rademacher complexities. Extensive experimental evaluations are conducted on the public LETOR benchmark datasets. The results demonstrate that the proposed method shows: 1) significant ranking performance gain compared to several feature selection baselines for ranking, and 2) very competitive performance compared to several state-of-the-art learning-to-rank algorithms.
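
    The O(1/T^2) rate cited here is the hallmark of Nesterov's momentum scheme; a generic sketch of that accelerated step on a smooth toy objective (not the FSMRank objective, which adds the feature-selection and similarity terms):

      import numpy as np

      def nesterov_descent(grad, x0, lipschitz, iters=200):
          # Accelerated gradient method: gradient step at the extrapolated
          # point y, then momentum update of y with the usual t-sequence.
          x, y, t = x0.copy(), x0.copy(), 1.0
          for _ in range(iters):
              x_next = y - grad(y) / lipschitz
              t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
              y = x_next + ((t - 1.0) / t_next) * (x_next - x)
              x, t = x_next, t_next
          return x

      # Toy smooth objective: minimize ||Ax - b||^2.
      rng = np.random.default_rng(2)
      A, b = rng.normal(size=(50, 10)), rng.normal(size=50)
      L = 2.0 * np.linalg.norm(A, 2) ** 2  # Lipschitz constant of the gradient
      x_hat = nesterov_descent(lambda x: 2.0 * A.T @ (A @ x - b), np.zeros(10), L)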

  7. Efficient greedy algorithms for economic manpower shift planning

    NASA Astrophysics Data System (ADS)

    Nearchou, A. C.; Giannikos, I. C.; Lagodimos, A. G.

    2015-01-01

    Consideration is given to the economic manpower shift planning (EMSP) problem, an NP-hard capacity planning problem appearing in various industrial settings including the packing stage of production in process industries and maintenance operations. EMSP aims to determine the manpower needed in each available workday shift of a given planning horizon so as to complete a set of independent jobs at minimum cost. Three greedy heuristics are presented for the EMSP solution. These practically constitute adaptations of an existing algorithm for a simplified version of EMSP which had shown excellent performance in terms of solution quality and speed. Experimentation shows that the new algorithms perform very well in comparison to the results obtained by both the CPLEX optimizer and an existing metaheuristic. Statistical analysis is deployed to rank the algorithms in terms of their solution quality and to identify the effects that critical planning factors may have on their relative efficiency.

  8. Perceptions About Competing Psychosocial Problems and Treatment Priorities Among Older Adults With Depression

    PubMed Central

    Proctor, Enola K.; Hasche, Leslie; Morrow-Howell, Nancy; Shumway, Martha; Snell, Grace

    2009-01-01

    Objective Depression often co-occurs with other conditions that may pose competing demands to depression care, particularly in later life. This study examined older adults’ perceptions of depression among cooccurring social, medical, and functional problems and compared the priority of depression with that of other problems. Methods The study’s purposeful sample comprised 49 adults age 60 or older with a history of depression and in publicly funded community long-term care. Fourpart, mixed-methods interviews sought to capture participants’ perceptions of life problems as well as the priority they placed on depression. Methods included standardized depression screening, semistructured qualitative interviews, listing of problems, and qualitative and quantitative analysis of problem rankings. Results Most participants identified health, functional, and psychosocial problems co-occurring with depressive symptoms. Depression was ranked low among the co-occurring conditions; 6% ranked depression as the most important of their problems, whereas 45% ranked it last. Relative rank scores for problems were remarkably similar, with the notable exception of depression, which was ranked lowest of all problems. Participants did not see depression as a high priority compared with co-occurring problems, particularly psychosocial ones. Conclusions Effective and durable improvements to mental health care must be shaped by an understanding of client perceptions and priorities. Motivational interviewing, health education, and assessment of treatment priorities may be necessary in helping older adults value and accept depression care. Nonspecialty settings of care may effectively link depression treatment to other services, thereby increasing receptivity to mental health services. PMID:18511588

  9. An impatient evolutionary algorithm with probabilistic tabu search for unified solution of some NP-hard problems in graph and set theory via clique finding.

    PubMed

    Guturu, Parthasarathy; Dantu, Ram

    2008-06-01

    Many graph- and set-theoretic problems, because of their tremendous application potential and theoretical appeal, have been well investigated by the researchers in complexity theory and were found to be NP-hard. Since the combinatorial complexity of these problems does not permit exhaustive searches for optimal solutions, only near-optimal solutions can be explored using either various problem-specific heuristic strategies or metaheuristic global-optimization methods, such as simulated annealing, genetic algorithms, etc. In this paper, we propose a unified evolutionary algorithm (EA) to the problems of maximum clique finding, maximum independent set, minimum vertex cover, subgraph and double subgraph isomorphism, set packing, set partitioning, and set cover. In the proposed approach, we first map these problems onto the maximum clique-finding problem (MCP), which is later solved using an evolutionary strategy. The proposed impatient EA with probabilistic tabu search (IEA-PTS) for the MCP integrates the best features of earlier successful approaches with a number of new heuristics that we developed to yield a performance that advances the state of the art in EAs for the exploration of the maximum cliques in a graph. Results of experimentation with the 37 DIMACS benchmark graphs and comparative analyses with six state-of-the-art algorithms, including two from the smaller EA community and four from the larger metaheuristics community, indicate that the IEA-PTS outperforms the EAs with respect to a Pareto-lexicographic ranking criterion and offers competitive performance on some graph instances when individually compared to the other heuristic algorithms. It has also successfully set a new benchmark on one graph instance. On another benchmark suite called Benchmarks with Hidden Optimal Solutions, IEA-PTS ranks second, after a very recent algorithm called COVER, among its peers that have experimented with this suite.
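
    The mapping step the paper relies on is classical; for instance, maximum independent set and minimum vertex cover both reduce to maximum clique on the complement graph, as this small sketch shows:

      import networkx as nx

      # Maximum independent set of G = maximum clique of the complement of G;
      # minimum vertex cover = all remaining vertices. Enumerating maximal
      # cliques is exponential in general, which is why the paper attacks
      # the clique problem with an evolutionary algorithm instead.
      G = nx.Graph([(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)])
      independent_set = max(nx.find_cliques(nx.complement(G)), key=len)
      vertex_cover = set(G.nodes) - set(independent_set)
      print(independent_set, vertex_cover)  # e.g. [1, 3] and {0, 2}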

  10. Oxygen enhanced switching to combustion of lower rank fuels

    DOEpatents

    Kobayashi, Hisashi; Bool, III, Lawrence E.; Wu, Kuang Tsai

    2004-03-02

    A furnace that combusts fuel, such as coal, of a given minimum energy content to obtain a stated minimum amount of energy per unit of time is enabled to combust fuel having a lower energy content, while still obtaining at least the stated minimum energy generation rate, by replacing a small amount of the combustion air fed to the furnace by oxygen. The replacement of oxygen for combustion air also provides reduction in the generation of NOx.

  11. Nonconvex Nonsmooth Low Rank Minimization via Iteratively Reweighted Nuclear Norm.

    PubMed

    Lu, Canyi; Tang, Jinhui; Yan, Shuicheng; Lin, Zhouchen

    2016-02-01

    The nuclear norm is widely used as a convex surrogate of the rank function in compressive sensing for low rank matrix recovery with its applications in image recovery and signal processing. However, solving the nuclear norm-based relaxed convex problem usually leads to a suboptimal solution of the original rank minimization problem. In this paper, we propose to use a family of nonconvex surrogates of L0-norm on the singular values of a matrix to approximate the rank function. This leads to a nonconvex nonsmooth minimization problem. Then, we propose to solve the problem by an iteratively re-weighted nuclear norm (IRNN) algorithm. IRNN iteratively solves a weighted singular value thresholding problem, which has a closed form solution due to the special properties of the nonconvex surrogate functions. We also extend IRNN to solve the nonconvex problem with two or more blocks of variables. In theory, we prove that the IRNN decreases the objective function value monotonically, and any limit point is a stationary point. Extensive experiments on both synthesized data and real images demonstrate that IRNN enhances the low rank matrix recovery compared with the state-of-the-art convex algorithms.
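
    The inner step of IRNN has the closed form mentioned above: a weighted singular value thresholding. A sketch of one update, with weights from the log-determinant surrogate as one member of the allowed family:

      import numpy as np

      def weighted_svt(Y, weights, step=1.0):
          # Shrink each singular value by its own weight; the closed form
          # is valid because the weights are non-decreasing in the index
          # (larger singular values get smaller weights), which is exactly
          # the monotonicity IRNN requires of the surrogate's gradient.
          U, s, Vt = np.linalg.svd(Y, full_matrices=False)
          s_shrunk = np.maximum(s - step * weights, 0.0)
          return (U * s_shrunk) @ Vt

      Y = np.random.randn(8, 6)
      s = np.linalg.svd(Y, compute_uv=False)
      w = 1.0 / (1.0 + s)  # gradient of the log(1 + sigma) surrogate
      X_low_rank = weighted_svt(Y, w)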

  12. Econophysics of a ranked demand and supply resource allocation problem

    NASA Astrophysics Data System (ADS)

    Priel, Avner; Tamir, Boaz

    2018-01-01

    We present a two-sided resource allocation problem, between demands and supplies, where both parties are ranked. Examples include Big Data problems, where a set of different computational tasks is divided between a set of computers each with its own resources, or job markets, where employees are ranked by their fitness and employers by their package benefits. The allocation process can be viewed as a repeated game where in each iteration the strategy is decided by a meta-rule, based on the ranks of both parties and the results of the previous games. We show the existence of a phase transition between an absorbing state, where all demands are satisfied, and an active one where part of the demands are always left unsatisfied. The phase transition is governed by the ratio between supplies and demands. In a job allocation problem we find positive correlation between the rank of the workers and the rank of the factories; higher ranked workers are usually allocated to higher ranked factories. All of this suggests global emergent properties stemming from local variables. To demonstrate the global versus local relations, we introduce a local inertial force that increases the rank of employees in proportion to their persistence time in the same factory. We show that such a local force induces nontrivial global effects, mostly to the benefit of the lower ranked employees.

  13. An Efficient Rank Based Approach for Closest String and Closest Substring

    PubMed Central

    2012-01-01

    This paper aims to present a new genetic approach that uses rank distance for solving two known NP-hard problems, and to compare rank distance with other distance measures for strings. The two NP-hard problems we are trying to solve are closest string and closest substring. For each problem we build a genetic algorithm and we describe the genetic operations involved. Both genetic algorithms use a fitness function based on rank distance. We compare our algorithms with other genetic algorithms that use different distance measures, such as Hamming distance or Levenshtein distance, on real DNA sequences. Our experiments show that the genetic algorithms based on rank distance have the best results. PMID:22675483
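
    Rank distance for strings, as commonly defined in this line of work, indexes repeated symbols by occurrence and sums positional differences, with unmatched occurrences contributing their own positions; a sketch:

      def rank_distance(s, t):
          # Annotate each character with its occurrence count (a -> a1, a2,
          # ...), record the position of each annotated symbol, then sum
          # |pos_in_s - pos_in_t| over shared symbols and the bare position
          # over symbols present in only one string.
          def positions(u):
              seen, out = {}, {}
              for pos, ch in enumerate(u, start=1):
                  seen[ch] = seen.get(ch, 0) + 1
                  out[(ch, seen[ch])] = pos
              return out
          a, b = positions(s), positions(t)
          shared = a.keys() & b.keys()
          return (sum(abs(a[k] - b[k]) for k in shared)
                  + sum(a[k] for k in a.keys() - shared)
                  + sum(b[k] for k in b.keys() - shared))

      print(rank_distance("AACG", "ACAG"))  # 2: the second A and the C each shift by 1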

  14. Optimal solution of full fuzzy transportation problems using total integral ranking

    NASA Astrophysics Data System (ADS)

    Sam’an, M.; Farikhin; Hariyanto, S.; Surarso, B.

    2018-03-01

    A full fuzzy transportation problem (FFTP) is a transportation problem where transport costs, demand, supply and decision variables are expressed in the form of fuzzy numbers. To solve a fuzzy transportation problem, the fuzzy number parameters must be converted to crisp numbers by a defuzzification method. In this work, a new total integral ranking method, based on converting trapezoidal fuzzy numbers to hexagonal fuzzy numbers, yields consistent defuzzification for symmetrical hexagonal fuzzy numbers and for non-symmetrical type 2 fuzzy numbers with triangular fuzzy numbers. To calculate the optimal solution of the FTP, a fuzzy transportation algorithm with the least cost method is used. From this optimal solution, it is found that using the total integral ranking form of fuzzy numbers with an index of optimism gives different optimum values. In addition, the total integral ranking value using hexagonal fuzzy numbers has a better optimal value than the total integral ranking value using trapezoidal fuzzy numbers.

  15. Top-d Rank Aggregation in Web Meta-search Engine

    NASA Astrophysics Data System (ADS)

    Fang, Qizhi; Xiao, Han; Zhu, Shanfeng

    In this paper, we consider the rank aggregation problem for information retrieval over the Web, making use of a kind of metric, the coherence, which considers both the normalized Kendall-τ distance and the size of overlap between two partial rankings. In general, the top-d coherence aggregation problem is defined as: given a collection of partial rankings Π = {τ_1, τ_2, ⋯, τ_K}, how to find a final ranking π of specified length d which maximizes the total coherence Φ(π, Π) = Σ_{i=1}^{K} Φ(π, τ_i). The corresponding complexity and algorithmic issues are discussed in this paper. Our main technical contribution is a polynomial time approximation scheme (PTAS) for a restricted top-d coherence aggregation problem.

  16. Improving the Incoherence of a Learned Dictionary via Rank Shrinkage.

    PubMed

    Ubaru, Shashanka; Seghouane, Abd-Krim; Saad, Yousef

    2017-01-01

    This letter considers the problem of dictionary learning for sparse signal representation whose atoms have low mutual coherence. To learn such dictionaries, at each step, we first update the dictionary using the method of optimal directions (MOD) and then apply a dictionary rank shrinkage step to decrease its mutual coherence. In the rank shrinkage step, we first compute a rank 1 decomposition of the column-normalized least squares estimate of the dictionary obtained from the MOD step. We then shrink the rank of this learned dictionary by transforming the problem of reducing the rank to a nonnegative garrotte estimation problem and solving it using a path-wise coordinate descent approach. We establish theoretical results that show that the rank shrinkage step included will reduce the coherence of the dictionary, which is further validated by experimental results. Numerical experiments illustrating the performance of the proposed algorithm in comparison to various other well-known dictionary learning algorithms are also presented.

  17. Misleading University Rankings: Cause and Cure for Discrepancies between Nominal and Attained Weights

    ERIC Educational Resources Information Center

    Soh, Kaycheng

    2013-01-01

    Recent research into university ranking methodologies uncovered several methodological problems among the systems currently in vogue. One of these is the discrepancy between the nominal and attained weights. The problem is the summation of unstandardized indicators for the total scores used in ranking. It is demonstrated that weight discrepancy…

  18. Classification of hyperbolic singularities of rank zero of integrable Hamiltonian systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oshemkov, Andrey A

    2010-10-06

    A complete invariant is constructed that is a solution of the problem of semilocal classification of saddle singularities of integrable Hamiltonian systems. Namely, a certain combinatorial object (an f_n-graph) is associated with every nondegenerate saddle singularity of rank zero; as a result, the problem of semilocal classification of saddle singularities of rank zero is reduced to the problem of enumeration of the f_n-graphs. This enables us to describe a simple algorithm for obtaining the lists of saddle singularities of rank zero for a given number of degrees of freedom and a given complexity. Bibliography: 24 titles.

  19. Trace Norm Regularized CANDECOMP/PARAFAC Decomposition With Missing Data.

    PubMed

    Liu, Yuanyuan; Shang, Fanhua; Jiao, Licheng; Cheng, James; Cheng, Hong

    2015-11-01

    In recent years, low-rank tensor completion (LRTC) problems have received a significant amount of attention in computer vision, data mining, and signal processing. The existing trace norm minimization algorithms for iteratively solving LRTC problems involve multiple singular value decompositions of very large matrices at each iteration. Therefore, they suffer from high computational cost. In this paper, we propose a novel trace norm regularized CANDECOMP/PARAFAC decomposition (TNCP) method for simultaneous tensor decomposition and completion. We first formulate a factor matrix rank minimization model by deducing the relation between the rank of each factor matrix and the mode-n rank of a tensor. Then, we introduce a tractable relaxation of our rank function and arrive at a convex combination of much smaller-scale matrix trace norm minimization problems. Finally, we develop an efficient algorithm based on the alternating direction method of multipliers to solve our problem. The promising experimental results on synthetic and real-world data validate the effectiveness of our TNCP method. Moreover, TNCP is significantly faster than the state-of-the-art methods and scales to larger problems.
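
    A building block worth seeing in code here is the singular value thresholding operator, the proximal operator of the matrix trace norm whose repeated evaluation on large matrices is exactly the per-iteration cost TNCP is designed to avoid; a minimal sketch with our own names.

      import numpy as np

      def singular_value_thresholding(X, tau):
          """Proximal operator of tau * ||.||_* (the trace/nuclear norm).

          Soft-thresholds the singular values of X by tau. On large unfolded
          tensors this full SVD is the bottleneck that factor-matrix methods
          such as TNCP avoid by working with much smaller matrices.
          """
          U, s, Vt = np.linalg.svd(X, full_matrices=False)
          s_shrunk = np.maximum(s - tau, 0.0)
          return (U * s_shrunk) @ Vt    # broadcasting scales U's columns

      rng = np.random.default_rng(1)
      X = rng.normal(size=(50, 40))
      print(np.linalg.matrix_rank(singular_value_thresholding(X, tau=5.0)))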

  20. The Ranking of Higher Education Institutions in Russia: Some Methodological Problems.

    ERIC Educational Resources Information Center

    Filinov, Nikolay B.; Ruchkina, Svetlana

    2002-01-01

    The ranking of higher education institutions in Russia is examined from two points of view: as a social phenomenon and as a multi-criteria decision-making problem. The first point of view introduces the idea of interested and involved parties; the second introduces certain principles on which a rational ranking methodology should be based.…

  1. Mirror, Mirror on the Wall: A Closer Look at the Top Ten in University Rankings

    ERIC Educational Resources Information Center

    Cheng, Soh Kay

    2011-01-01

    Notwithstanding criticisms and discussions on methodological grounds, much attention has been and still will be paid to university rankings for various reasons. The present paper uses published information of the 10 top-ranking universities of the world and demonstrates the problem of spurious precision. In view of the problem of measurement error…

  2. Problems of Indicator Weights and Multicolinearity in World University Rankings: Comparisons of Three Systems

    ERIC Educational Resources Information Center

    Soh, Kaycheng

    2014-01-01

    World university rankings (WUR) use the weight-and-sum approach to arrive at an overall measure which is then used to rank the participating universities of the world. Although the weight-and-sum procedure seems straightforward and accords with common sense, it has hidden methodological or statistical problems which render the meaning of the…

  3. Efficient l1-norm-based low-rank matrix approximations for large-scale problems using alternating rectified gradient method.

    PubMed

    Kim, Eunwoo; Lee, Minsik; Choi, Chong-Ho; Kwak, Nojun; Oh, Songhwai

    2015-02-01

    Low-rank matrix approximation plays an important role in the area of computer vision and image processing. Most of the conventional low-rank matrix approximation methods are based on the l2-norm (Frobenius norm) with principal component analysis (PCA) being the most popular among them. However, this can give a poor approximation for data contaminated by outliers (including missing data), because the l2-norm exaggerates the negative effect of outliers. Recently, to overcome this problem, various methods based on the l1-norm, such as robust PCA methods, have been proposed for low-rank matrix approximation. Despite the robustness of the methods, they require heavy computational effort and substantial memory for high-dimensional data, which is impractical for real-world problems. In this paper, we propose two efficient low-rank factorization methods based on the l1-norm that find proper projection and coefficient matrices using the alternating rectified gradient method. The proposed methods are applied to a number of low-rank matrix approximation problems to demonstrate their efficiency and robustness. The experimental results show that our proposals are efficient in both execution time and reconstruction performance unlike other state-of-the-art methods.

  4. Learning to Select Supplier Portfolios for Service Supply Chain

    PubMed Central

    Zhang, Rui; Li, Jingfei; Wu, Shaoyu; Meng, Dabin

    2016-01-01

    Research on service supply chains has attracted increasing attention from both academia and industry. In a service supply chain, the selection of a supplier portfolio is an important and difficult problem, because a supplier portfolio may include multiple suppliers from a variety of fields. To address this problem, we propose a novel supplier portfolio selection method based on a well-known machine learning approach, the Ranking Neural Network (RankNet). In the proposed method, we regard supplier portfolio selection as a ranking problem and integrate a large set of decision-making features into a ranking neural network. Extensive simulation experiments are conducted, which demonstrate the feasibility and effectiveness of the proposed method. The proposed supplier portfolio selection model can readily be applied in real corporations in the future. PMID:27195756

  5. Multiple Ordinal Regression by Maximizing the Sum of Margins

    PubMed Central

    Hamsici, Onur C.; Martinez, Aleix M.

    2016-01-01

    Human preferences are usually measured using ordinal variables. A system whose goal is to estimate the preferences of humans and their underlying decision mechanisms requires learning the ordering of any given sample set. We consider the solution of this ordinal regression problem using a Support Vector Machine algorithm. Specifically, the goal is to learn a set of classifiers with common direction vectors and different biases correctly separating the ordered classes. Current algorithms are either required to solve a quadratic optimization problem, which is computationally expensive, or are based on maximizing the minimum margin (i.e., a fixed margin strategy) between a set of hyperplanes, which biases the solution to the closest margin. Another drawback of these strategies is that they are limited to ordering the classes using a single ranking variable (e.g., perceived length). In this paper, we define a multiple ordinal regression algorithm based on maximizing the sum of the margins between every consecutive class with respect to one or more rankings (e.g., perceived length and weight). We provide derivations of an efficient, easy-to-implement iterative solution using a Sequential Minimal Optimization procedure. We demonstrate the accuracy of our solutions in several datasets. In addition, we provide a key application of our algorithms in estimating human subjects’ ordinal classification of attribute associations to object categories. We show that these ordinal associations perform better than the binary ones typically employed in the literature. PMID:26529784

  6. Solving fuzzy shortest path problem by genetic algorithm

    NASA Astrophysics Data System (ADS)

    Syarif, A.; Muludi, K.; Adrian, R.; Gen, M.

    2018-03-01

    The Shortest Path Problem (SPP) is one of the well-studied fields in the area of Operations Research and Mathematical Optimization. It has been applied to many engineering and management designs. The objective is usually to determine the path(s) in the network with minimum total cost or traveling time. In the past, the cost value for each arc was usually assigned or estimated as a deterministic value. For some real-world applications, however, it is often difficult to determine the cost value properly. One way of handling such uncertainty in decision making is to introduce a fuzzy approach, although this makes it difficult to solve the problem optimally. This paper presents investigations on the application of a Genetic Algorithm (GA) to a new SPP model in which the cost values are represented as Triangular Fuzzy Numbers (TFNs). We adopt the concept of ranking fuzzy numbers to evaluate the quality of solutions. Here, by giving a degree value, the decision maker can determine the range of the objective value. This would be very valuable for decision support systems in real-world applications. Simulation experiments were carried out by modifying several test problems with 10-25 nodes. It is noted that the proposed approach is capable of attaining good solutions under different degrees of optimism for the tested problems.

  7. Ranking Specific Sets of Objects.

    PubMed

    Maly, Jan; Woltran, Stefan

    2017-01-01

    Ranking sets of objects based on an order between the single elements has been thoroughly studied in the literature. In particular, it has been shown that it is in general impossible to find a total ranking - jointly satisfying properties such as dominance and independence - on the whole power set of objects. However, in many applications certain elements from the entire power set might not be required and can be neglected in the ranking process. For instance, certain sets might be ruled out due to hard constraints or may not satisfy some background theory. In this paper, we treat the computational problem of whether an order on a given subset of the power set of elements satisfying different variants of dominance and independence can be found, given a ranking on the elements. We show that this problem is tractable for partial rankings and NP-complete for total rankings.

  8. Rank-k modification methods for recursive least squares problems

    NASA Astrophysics Data System (ADS)

    Olszanskyj, Serge; Lebak, James; Bojanczyk, Adam

    1994-09-01

    In least squares problems, it is often desired to solve the same problem repeatedly but with several rows of the data either added, deleted, or both. Methods for quickly solving a problem after adding or deleting one row of data at a time are known. In this paper we introduce fundamental rank-k updating and downdating methods and show how extensions of rank-1 downdating methods based on LINPACK, Corrected Semi-Normal Equations (CSNE), and Gram-Schmidt factorizations, as well as new rank-k downdating methods, can all be derived from these fundamental results. We then analyze the cost of each new algorithm and make comparisons to k applications of the corresponding rank-1 algorithms. We provide experimental results comparing the numerical accuracy of the various algorithms, paying particular attention to the downdating methods, due to their potential numerical difficulties for ill-conditioned problems. We then discuss the computation involved for each downdating method, measured in terms of operation counts and BLAS calls. Finally, we provide serial execution timing results for these algorithms, noting preferable points for improvement and optimization. From our experiments we conclude that the Gram-Schmidt methods perform best in terms of numerical accuracy, but may be too costly for serial execution for large problems.
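
    A minimal numpy illustration of the flavor of rank-k updating discussed here: when k new rows U are appended to a matrix with a known QR factorization, the updated triangular factor can be obtained by refactorizing only the small stacked matrix [R; U] rather than the full data. This is a textbook block-QR identity, not one of the paper's LINPACK/CSNE/Gram-Schmidt variants; names are ours.

      import numpy as np

      rng = np.random.default_rng(2)
      A = rng.normal(size=(100, 5))            # original data matrix
      U = rng.normal(size=(3, 5))              # k = 3 new rows to append

      Q, R = np.linalg.qr(A)                   # factor the original problem once

      # Rank-k update: QR of the small (5 + 3) x 5 stacked matrix [R; U].
      R_new = np.linalg.qr(np.vstack([R, U]))[1]

      # Check against refactoring the full updated data from scratch
      # (R is unique up to row signs, so compare absolute values):
      R_direct = np.linalg.qr(np.vstack([A, U]))[1]
      print(np.allclose(np.abs(R_new), np.abs(R_direct)))   # True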

  9. Biomarker selection and classification of "-omics" data using a two-step bayes classification framework.

    PubMed

    Assawamakin, Anunchai; Prueksaaroon, Supakit; Kulawonganunchai, Supasak; Shaw, Philip James; Varavithya, Vara; Ruangrajitpakorn, Taneth; Tongsima, Sissades

    2013-01-01

    Identification of suitable biomarkers for accurate prediction of phenotypic outcomes is a goal for personalized medicine. However, current machine learning approaches are either too complex or perform poorly. Here, a novel two-step machine-learning framework is presented to address this need. First, a Naïve Bayes estimator is used to rank features from which the top-ranked will most likely contain the most informative features for prediction of the underlying biological classes. The top-ranked features are then used in a Hidden Naïve Bayes classifier to construct a classification prediction model from these filtered attributes. In order to obtain the minimum set of the most informative biomarkers, the bottom-ranked features are successively removed from the Naïve Bayes-filtered feature list one at a time, and the classification accuracy of the Hidden Naïve Bayes classifier is checked for each pruned feature set. The performance of the proposed two-step Bayes classification framework was tested on different types of -omics datasets including gene expression microarray, single nucleotide polymorphism microarray (SNParray), and surface-enhanced laser desorption/ionization time-of-flight (SELDI-TOF) proteomic data. The proposed two-step Bayes classification framework was equal to and, in some cases, outperformed other classification methods in terms of prediction accuracy, minimum number of classification markers, and computational time.
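
    A minimal sketch of the filter-then-prune loop described above, using scikit-learn; Hidden Naïve Bayes is not available there, so a plain Gaussian Naïve Bayes stands in for the second-step classifier, and the per-feature scoring rule is our simplification.

      import numpy as np
      from sklearn.datasets import load_breast_cancer
      from sklearn.model_selection import cross_val_score
      from sklearn.naive_bayes import GaussianNB

      X, y = load_breast_cancer(return_X_y=True)

      # Step 1: rank features by the accuracy of a single-feature Naive Bayes.
      scores = [cross_val_score(GaussianNB(), X[:, [j]], y, cv=5).mean()
                for j in range(X.shape[1])]
      ranked = np.argsort(scores)[::-1]              # best feature first

      # Step 2: remove bottom-ranked features one at a time and track accuracy.
      best_k, best_acc = None, -1.0
      for k in range(X.shape[1], 0, -1):
          acc = cross_val_score(GaussianNB(), X[:, ranked[:k]], y, cv=5).mean()
          if acc >= best_acc:                        # ties favor smaller sets
              best_k, best_acc = k, acc

      print(f"minimum informative set: {best_k} features, accuracy {best_acc:.3f}")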

  10. Simpson's Paradox and Confounding Factors in University Rankings: A Demonstration Using QS 2011-12 Data

    ERIC Educational Resources Information Center

    Soh, Kay Cheng

    2012-01-01

    University ranking has become ritualistic in higher education. Ranking results are taken as bona fide by rank users. Ranking systems usually use large data sets from highly heterogeneous universities of varied backgrounds. This poses the problem of Simpson's Paradox and the lurking variables causing it. Using QS 2011-2012 Ranking data, the dual…

  11. Impact of insects on multiple-use values of north-central forests: an experimental rating scheme.

    Treesearch

    Norton D. Addy; Harold O. Batzer; William J. Mattson; William E. Miller

    1971-01-01

    Ranking or assigning priorities to problems is an essential step in research problem selection. Up to now, no rigorous basis for ranking forest insects has been available. We evaluate and rank forest insects with a systematic numerical scheme that considers insect impact on the multiple-use values of timber, wildlife, recreation, and water. The result is a better...

  12. Kriging for Simulation Metamodeling: Experimental Design, Reduced Rank Kriging, and Omni-Rank Kriging

    NASA Astrophysics Data System (ADS)

    Hosking, Michael Robert

    This dissertation improves an analyst's use of simulation by offering improvements in the utilization of kriging metamodels. There are three main contributions. First, an analysis is performed of what comprises good experimental designs for practical (non-toy) problems when using a kriging metamodel. Second is an explanation and demonstration of how reduced rank decompositions can improve the performance of kriging, now referred to as reduced rank kriging. Third is the development of an extension of reduced rank kriging which solves an open question regarding the usage of reduced rank kriging in practice. This extension is called omni-rank kriging. Finally, these results are demonstrated on two case studies. The first contribution focuses on experimental design. Sequential designs are generally known to be more efficient than "one shot" designs. However, sequential designs require some sort of pilot design from which the sequential stage can be based. We seek to find good initial designs for these pilot studies, as well as designs which will be effective if there is no following sequential stage. We test a wide variety of designs over a small set of test-bed problems. Our findings indicate that analysts should take advantage of any prior information they have about their problem's shape and/or their goals in metamodeling. In the event of a total lack of information we find that Latin hypercube designs are robust default choices. Our work is most distinguished by its attention to the higher levels of dimensionality. The second contribution introduces and explains an alternative method for kriging when there is noise in the data, which we call reduced rank kriging. Reduced rank kriging is based on using a reduced rank decomposition which artificially smoothes the kriging weights similar to a nugget effect. Our primary focus will be showing how the reduced rank decomposition propagates through kriging empirically. In addition, we show further evidence for our explanation through tests of reduced rank kriging's performance over different situations. In total, reduced rank kriging is a useful tool for simulation metamodeling. For the third contribution we will answer the question of how to find the best rank for reduced rank kriging. We do this by creating an alternative method which does not need to search for a particular rank. Instead it uses all potential ranks; we call this approach omni-rank kriging. This modification realizes the potential gains from reduced rank kriging and provides a workable methodology for simulation metamodeling. Finally, we will demonstrate the use and value of these developments on two case studies: a clinic operation problem and a location problem. These cases will validate the value of this research. Simulation metamodeling always attempts to extract maximum information from limited data. Each one of these contributions will allow analysts to make better use of their constrained computational budgets.

  13. World University Rankings: Take with a Large Pinch of Salt

    ERIC Educational Resources Information Center

    Cheng, Soh Kay

    2011-01-01

    Equating the unequal is misleading, and this happens consistently in comparing rankings from different university ranking systems, as the NUT saga shows. This article illustrates the problem by analyzing the 2011 rankings of the top 100 universities in the ARWU, QSWUR and THEWUR ranking results. It also discusses the reasons why the rankings…

  14. Ranking REACH registered neutral, ionizable and ionic organic chemicals based on their aquatic persistency and mobility.

    PubMed

    Arp, H P H; Brown, T N; Berger, U; Hale, S E

    2017-07-19

    The contaminants that have the greatest chances of appearing in drinking water are those that are mobile enough in the aquatic environment to enter drinking water sources and persistent enough to survive treatment processes. Herein a screening procedure to rank neutral, ionizable and ionic organic compounds for being persistent and mobile organic compounds (PMOCs) is presented and applied to the list of industrial substances registered under the EU REACH legislation as of December 2014. This comprised 5155 identifiable, unique organic structures. The minimum cut-off criteria considered for PMOC classification herein are a freshwater half-life > 40 days, which is consistent with the REACH definition of freshwater persistency, and a log D_oc < 4.5 between pH 4-10 (where D_oc is the organic carbon-water distribution coefficient). Experimental data were given the highest priority, followed by data from an array of available quantitative structure-activity relationships (QSARs), and as a third resort, an original Iterative Fragment Selection (IFS) QSAR. In total, 52% of the unique REACH structures made the minimum criteria to be considered a PMOC, and 21% achieved the highest PMOC ranking (half-life > 40 days, log D_oc < 1.0 between pH 4-10). Only 9% of neutral substances received the highest PMOC ranking, compared to 30% of ionizable compounds and 44% of ionic compounds. Predicted hydrolysis products for all REACH parents (contributing 5043 additional structures) were found to have higher PMOC rankings than their parents, due to increased mobility but not persistence. The fewest experimental data available were for ionic compounds; therefore, their ranking is more uncertain than neutral and ionizable compounds. The most sensitive parameter for the PMOC ranking was freshwater persistency, which was also the parameter that QSARs performed the most poorly at predicting. Several prioritized drinking water contaminants in the EU and USA, and other contaminants of concern, were identified as PMOCs. This identification and ranking procedure for PMOCs can be part of a strategy to better identify contaminants that pose a threat to drinking water sources.
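
    The two stated cut-off criteria translate directly into code; a minimal sketch (the experimental-data prioritization and the QSAR cascade are not reproduced, the reading of the pH-range condition as "lowest log D_oc in range" is our assumption, and the names are ours):

      def pmoc_rank(half_life_days, log_doc_ph4_to_10):
          """Classify a substance by the abstract's two cut-off criteria.

          half_life_days:     freshwater half-life in days.
          log_doc_ph4_to_10:  iterable of log D_oc values over pH 4-10;
                              mobility is judged on the lowest value.
          """
          persistent = half_life_days > 40      # REACH freshwater persistency
          min_log_doc = min(log_doc_ph4_to_10)
          if persistent and min_log_doc < 1.0:
              return "highest PMOC ranking"
          if persistent and min_log_doc < 4.5:
              return "PMOC"
          return "not PMOC"

      print(pmoc_rank(60, [0.5, 0.8, 1.2]))     # highest PMOC ranking
      print(pmoc_rank(60, [2.0, 3.9]))          # PMOC
      print(pmoc_rank(10, [0.5]))               # not PMOC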

  15. Smoothed low rank and sparse matrix recovery by iteratively reweighted least squares minimization.

    PubMed

    Lu, Canyi; Lin, Zhouchen; Yan, Shuicheng

    2015-02-01

    This paper presents a general framework for solving the low-rank and/or sparse matrix minimization problems, which may involve multiple nonsmooth terms. The iteratively reweighted least squares (IRLS) method is a fast solver, which smooths the objective function and minimizes it by alternately updating the variables and their weights. However, the traditional IRLS can only solve a sparse-only or low-rank-only minimization problem with squared loss or an affine constraint. This paper generalizes IRLS to solve joint/mixed low-rank and sparse minimization problems, which are essential formulations for many tasks. As a concrete example, we solve the Schatten-p norm and l2,q-norm regularized low-rank representation problem by IRLS, and theoretically prove that the derived solution is a stationary point (globally optimal if p,q ≥ 1). Our convergence proof of IRLS is more general than the previous one, which depends on the special properties of the Schatten-p norm and l2,q-norm. Extensive experiments on both synthetic and real data sets demonstrate that our IRLS is much more efficient.
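
    To make the IRLS idea concrete, here is a minimal sketch for the simplest special case, l1-regularized least squares, where each iteration solves a weighted ridge problem in closed form; the paper's joint Schatten-p and l2,q generalization is far broader, and the epsilon smoothing and names below are ours.

      import numpy as np

      def irls_l1(A, b, lam=0.1, iters=50, eps=1e-8):
          """IRLS for min ||Ax - b||_2^2 + lam * ||x||_1.

          Each iteration majorizes |x_i| by x_i^2 / (2 sqrt(x_old_i^2 + eps))
          and solves the resulting weighted ridge problem exactly.
          """
          n = A.shape[1]
          x = np.zeros(n)
          for _ in range(iters):
              w = lam / (2.0 * np.sqrt(x ** 2 + eps))        # reweighting step
              x = np.linalg.solve(A.T @ A + np.diag(w), A.T @ b)
          return x

      rng = np.random.default_rng(3)
      A = rng.normal(size=(100, 20))
      x_true = np.zeros(20)
      x_true[[2, 7]] = [3.0, -2.0]                           # sparse ground truth
      b = A @ x_true + 0.01 * rng.normal(size=100)
      print(np.round(irls_l1(A, b), 2))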

  16. Active subspace: toward scalable low-rank learning.

    PubMed

    Liu, Guangcan; Yan, Shuicheng

    2012-12-01

    We address the scalability issues in low-rank matrix learning problems. Usually these problems resort to solving nuclear norm regularized optimization problems (NNROPs), which often suffer from high computational complexities if based on existing solvers, especially in large-scale settings. Based on the fact that the optimal solution matrix to an NNROP is often low rank, we revisit the classic mechanism of low-rank matrix factorization, based on which we present an active subspace algorithm for efficiently solving NNROPs by transforming large-scale NNROPs into small-scale problems. The transformation is achieved by factorizing the large solution matrix into the product of a small orthonormal matrix (active subspace) and another small matrix. Although such a transformation generally leads to nonconvex problems, we show that a suboptimal solution can be found by the augmented Lagrange alternating direction method. For the robust PCA (RPCA) (Candès, Li, Ma, & Wright, 2009) problem, a typical example of NNROPs, theoretical results verify the suboptimality of the solution produced by our algorithm. For the general NNROPs, we empirically show that our algorithm significantly reduces the computational complexity without loss of optimality.

  17. Application of learning to rank to protein remote homology detection.

    PubMed

    Liu, Bin; Chen, Junjie; Wang, Xiaolong

    2015-11-01

    Protein remote homology detection is one of the fundamental problems in computational biology, aiming to find protein sequences in a database of known structures that are evolutionarily related to a given query protein. Some computational methods treat this problem as a ranking problem and achieve the state-of-the-art performance, such as PSI-BLAST, HHblits and ProtEmbed. This raises the possibility of combining these methods to improve the predictive performance. In this regard, we propose a new computational method called ProtDec-LTR for protein remote homology detection, which is able to combine various ranking methods in a supervised manner using the Learning to Rank (LTR) algorithm derived from natural language processing. Experimental results on a widely used benchmark dataset showed that ProtDec-LTR can achieve an ROC1 score of 0.8442 and an ROC50 score of 0.9023, outperforming all the individual predictors and some state-of-the-art methods. These results indicate that it is correct to treat protein remote homology detection as a ranking problem, and predictive performance improvement can be achieved by combining different ranking approaches in a supervised manner using LTR. For users' convenience, the software tools of the three basic ranking predictors and the Learning to Rank algorithm were provided at http://bioinformatics.hitsz.edu.cn/ProtDec-LTR/home/

  18. Flavor structure in F-theory compactifications

    NASA Astrophysics Data System (ADS)

    Hayashi, Hirotaka; Kawano, Teruhiko; Tsuchiya, Yoichi; Watari, Taizan

    2010-08-01

    F-theory is one of the frameworks in string theory in which supersymmetric grand unification is accommodated and all the Yukawa couplings and Majorana masses of right-handed neutrinos are generated. Yukawa couplings of charged fermions are generated at codimension-3 singularities, and the contribution from a given singularity point is known to be approximately rank 1. Thus, the approximate rank of the Yukawa matrices in the low-energy effective theory of generic F-theory compactifications is the minimum of either the number of generations N_gen = 3 or the number of singularity points of certain types. If there were a geometry with only one E_6 type point and one D_6 type point over the entire 7-brane for SU(5) gauge fields, F-theory compactified on such a geometry would reproduce the approximately rank-1 Yukawa matrices of the real world. We found, however, that there is no such geometry. Thus, it is a problem how to generate hierarchical Yukawa eigenvalues in F-theory compactifications. A solution in the literature so far is to take an appropriate factorization limit. In this article, we propose an alternative solution to the hierarchical structure problem (which requires tuning some parameters) by studying how zero mode wavefunctions depend on complex structure moduli. In this solution, the N_gen × N_gen CKM matrix is predicted to have only N_gen entries of order unity without extra tuning of parameters, and lepton flavor anarchy is predicted for the lepton mixing matrix. The hierarchy among the Yukawa eigenvalues of the down-type and charged lepton sector is predicted to be smaller than that of the up-type sector, and the Majorana masses of left-handed neutrinos generated through the see-saw mechanism have a small hierarchy. All of these predictions agree with what we observe in the real world. We also obtained a precise description of zero mode wavefunctions near the E_6 type singularity points, where the up-type Yukawa couplings are generated.

  19. Rank 4 Premodular Categories

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bruillard, Paul J.; Galindo, Cesar; Ng, Siu Hung

    2016-09-01

    We consider the classification problem for rank 4 premodular categories. A formula for the 2nd Frobenius-Schur indicator of a premodular category is determined, and the classification of rank 4 premodular categories (up to Grothendieck equivalence) is completed. In the appendix we show rank finiteness for premodular categories.

  20. Clinical Reasoning Terms Included in Clinical Problem Solving Exercises?

    PubMed Central

    Musgrove, John L.; Morris, Jason; Estrada, Carlos A.; Kraemer, Ryan R.

    2016-01-01

    Background Published clinical problem solving exercises have emerged as a common tool to illustrate aspects of the clinical reasoning process. The specific clinical reasoning terms mentioned in such exercises are unknown. Objective We identified which clinical reasoning terms are mentioned in published clinical problem solving exercises and compared them to clinical reasoning terms given high priority by clinician educators. Methods A convenience sample of clinician educators prioritized a list of clinical reasoning terms (whether to include, weight percentage of top 20 terms). The authors then electronically searched the terms in the text of published reports of 4 internal medicine journals between January 2010 and May 2013. Results The top 5 clinical reasoning terms ranked by educators were dual-process thinking (weight percentage = 24%), problem representation (12%), illness scripts (9%), hypothesis generation (7%), and problem categorization (7%). The top clinical reasoning terms mentioned in the text of 79 published reports were context specificity (n = 20, 25%), bias (n = 13, 17%), dual-process thinking (n = 11, 14%), illness scripts (n = 11, 14%), and problem representation (n = 10, 13%). Context specificity and bias were not ranked highly by educators. Conclusions Some core concepts of modern clinical reasoning theory ranked highly by educators are mentioned explicitly in published clinical problem solving exercises. However, some highly ranked terms were not used, and some terms used were not ranked by the clinician educators. Efforts to teach clinical reasoning to trainees may benefit from a common nomenclature of clinical reasoning terms. PMID:27168884

  2. A Ranking Approach to Genomic Selection.

    PubMed

    Blondel, Mathieu; Onogi, Akio; Iwata, Hiroyoshi; Ueda, Naonori

    2015-01-01

    Genomic selection (GS) is a recent selective breeding method which uses predictive models based on whole-genome molecular markers. Until now, existing studies formulated GS as the problem of modeling an individual's breeding value for a particular trait of interest, i.e., as a regression problem. To assess predictive accuracy of the model, the Pearson correlation between observed and predicted trait values was used. In this paper, we propose to formulate GS as the problem of ranking individuals according to their breeding value. Our proposed framework allows us to employ machine learning methods for ranking which had previously not been considered in the GS literature. To assess ranking accuracy of a model, we introduce a new measure originating from the information retrieval literature called normalized discounted cumulative gain (NDCG). NDCG rewards more strongly models which assign a high rank to individuals with high breeding value. Therefore, NDCG reflects a prerequisite objective in selective breeding: accurate selection of individuals with high breeding value. We conducted a comparison of 10 existing regression methods and 3 new ranking methods on 6 datasets, consisting of 4 plant species and 25 traits. Our experimental results suggest that tree-based ensemble methods including McRank, Random Forests and Gradient Boosting Regression Trees achieve excellent ranking accuracy. RKHS regression and RankSVM also achieve good accuracy when used with an RBF kernel. Traditional regression methods such as Bayesian lasso, wBSR and BayesC were found less suitable for ranking. Pearson correlation was found to correlate poorly with NDCG. Our study suggests two important messages. First, ranking methods are a promising research direction in GS. Second, NDCG can be a useful evaluation measure for GS.
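
    A minimal implementation of the NDCG measure advocated here, in its common information-retrieval form with a log2 discount; the paper's exact gain/discount convention is not stated in the abstract, and the breeding-value numbers are invented.

      import numpy as np

      def dcg(relevances, k=None):
          """Discounted cumulative gain of a ranked list of relevance values."""
          rel = np.asarray(relevances, dtype=float)[:k]
          discounts = np.log2(np.arange(2, rel.size + 2))   # log2(rank + 1)
          return float(np.sum(rel / discounts))

      def ndcg(true_values, predicted_scores, k=None):
          """NDCG of ranking individuals by predicted_scores against true_values."""
          order = np.argsort(predicted_scores)[::-1]        # model's ranking
          ideal = np.sort(true_values)[::-1]                # best possible ranking
          return dcg(np.asarray(true_values)[order], k) / dcg(ideal, k)

      # Individuals' true breeding values vs. a model's predicted scores:
      truth = [3.1, 0.2, 2.5, 0.9, 1.7]
      preds = [2.8, 0.1, 2.9, 0.4, 0.8]
      print(round(ndcg(truth, preds, k=3), 3))   # rewards top-ranked high values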

  3. Academic Ranking of World Universities by Broad Subject Fields

    ERIC Educational Resources Information Center

    Cheng, Ying; Liu, Nian Cai

    2007-01-01

    Upon numerous requests to provide ranking of world universities by broad subject fields/schools/colleges and by subject fields/programs/departments, the authors present the ranking methodologies and problems that arose from the research by the Institute of Higher Education, Shanghai Jiao Tong University on the Academic Ranking of World…

  4. Test Scores, Class Rank and College Performance: Lessons for Broadening Access and Promoting Success.

    PubMed

    Niu, Sunny X; Tienda, Marta

    2012-04-01

    Using administrative data for five Texas universities that differ in selectivity, this study evaluates the relative influence of two key indicators for college success: high school class rank and standardized tests. Empirical results show that class rank is the superior predictor of college performance and that test score advantages do not insulate lower ranked students from academic underperformance. Using the UT-Austin campus as a test case, we conduct a simulation to evaluate the consequences of capping students admitted automatically using both achievement metrics. We find that using class rank to cap the number of students eligible for automatic admission would have roughly uniform impacts across high schools, but imposing a minimum test score threshold on all students would have highly unequal consequences, greatly reducing the admission eligibility of the highest performing students who attend poor high schools while not jeopardizing the admissibility of students who attend affluent high schools. We discuss the implications of the Texas admissions experiment for higher education in Europe.

  5. Learning to rank image tags with limited training examples.

    PubMed

    Songhe Feng; Zheyun Feng; Rong Jin

    2015-04-01

    With an increasing number of images that are available in social media, image annotation has emerged as an important research topic due to its application in image matching and retrieval. Most studies cast image annotation into a multilabel classification problem. The main shortcoming of this approach is that it requires a large number of training images with clean and complete annotations in order to learn a reliable model for tag prediction. We address this limitation by developing a novel approach that combines the strength of tag ranking with the power of matrix recovery. Instead of having to make a binary decision for each tag, our approach ranks tags in the descending order of their relevance to the given image, significantly simplifying the problem. In addition, the proposed method aggregates the prediction models for different tags into a matrix, and casts tag ranking into a matrix recovery problem. It introduces the matrix trace norm to explicitly control the model complexity, so that a reliable prediction model can be learned for tag ranking even when the tag space is large and the number of training images is limited. Experiments on multiple well-known image data sets demonstrate the effectiveness of the proposed framework for tag ranking compared with the state-of-the-art approaches for image annotation and tag ranking.

  6. Neophilia Ranking of Scientific Journals.

    PubMed

    Packalen, Mikko; Bhattacharya, Jay

    2017-01-01

    The ranking of scientific journals is important because of the signal it sends to scientists about what is considered most vital for scientific progress. Existing ranking systems focus on measuring the influence of a scientific paper (citations)-these rankings do not reward journals for publishing innovative work that builds on new ideas. We propose an alternative ranking based on the proclivity of journals to publish papers that build on new ideas, and we implement this ranking via a text-based analysis of all published biomedical papers dating back to 1946. In addition, we compare our neophilia ranking to citation-based (impact factor) rankings; this comparison shows that the two ranking approaches are distinct. Prior theoretical work suggests an active role for our neophilia index in science policy. Absent an explicit incentive to pursue novel science, scientists underinvest in innovative work because of a coordination problem: for work on a new idea to flourish, many scientists must decide to adopt it in their work. Rankings that are based purely on influence thus do not provide sufficient incentives for publishing innovative work. By contrast, adoption of the neophilia index as part of journal-ranking procedures by funding agencies and university administrators would provide an explicit incentive for journals to publish innovative work and thus help solve the coordination problem by increasing scientists' incentives to pursue innovative work.

  7. A parallel adaptive quantum genetic algorithm for the controllability of arbitrary networks.

    PubMed

    Li, Yuhong; Gong, Guanghong; Li, Ni

    2018-01-01

    In this paper, we propose a novel algorithm, the parallel adaptive quantum genetic algorithm, which can rapidly determine the minimum control nodes of arbitrary networks with both control nodes and state nodes. The corresponding network can be fully controlled with the obtained control scheme. We transformed the network controllability issue into a combinatorial optimization problem based on the Popov-Belevitch-Hautus rank condition. Experiments were carried out on a set of canonical networks and a list of real-world networks. Comparison results demonstrated that the algorithm is better suited to optimizing the controllability of networks, especially larger networks. We subsequently demonstrated that there are links between the optimal control nodes and some statistical characteristics of the network. The proposed algorithm provides an effective approach to improving the controllability optimization of large networks, or even extra-large networks with hundreds of thousands of nodes.
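
    The Popov-Belevitch-Hautus rank condition underlying the algorithm's objective can be checked directly; a minimal numpy sketch for a linear system x' = Ax + Bu (the genetic-algorithm search itself is not reproduced, and the example matrices are invented):

      import numpy as np

      def is_controllable_pbh(A, B, tol=1e-9):
          """PBH test: (A, B) is controllable iff rank([lambda*I - A, B]) = n
          for every eigenvalue lambda of A."""
          n = A.shape[0]
          for lam in np.linalg.eigvals(A):
              M = np.hstack([lam * np.eye(n) - A, B])
              if np.linalg.matrix_rank(M, tol=tol) < n:
                  return False
          return True

      # A 3-node chain driven from the first node: controllable.
      A = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.]])
      B = np.array([[1.], [0.], [0.]])
      print(is_controllable_pbh(A, B))    # True

      # Driving only the last node cannot reach the nodes upstream.
      B2 = np.array([[0.], [0.], [1.]])
      print(is_controllable_pbh(A, B2))   # False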

  8. Fuzzy α-minimum spanning tree problem: definition and solutions

    NASA Astrophysics Data System (ADS)

    Zhou, Jian; Chen, Lu; Wang, Ke; Yang, Fan

    2016-04-01

    In this paper, the minimum spanning tree problem is investigated on graphs with fuzzy edge weights. The notion of the fuzzy α-minimum spanning tree is presented based on the credibility measure, and then the solutions of the fuzzy α-minimum spanning tree problem are discussed under different assumptions. First, we assume that all the edge weights are triangular fuzzy numbers or trapezoidal fuzzy numbers, respectively, and prove that in these two cases the fuzzy α-minimum spanning tree problem can be transformed into a classical problem on a crisp graph, which can be solved in polynomial time by classical algorithms such as the Kruskal and Prim algorithms. Subsequently, for the case in which the edge weights are general fuzzy numbers, a fuzzy simulation-based genetic algorithm using the Prüfer number representation is designed for solving the fuzzy α-minimum spanning tree problem. Some numerical examples are also provided to illustrate the effectiveness of the proposed solutions.
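
    A plausible minimal sketch of the triangular-fuzzy reduction mentioned above, assuming the crisp weight of a triangular fuzzy number (a, b, c) is its α-pessimistic value under the credibility measure (the standard inverse credibility distribution; whether this matches the paper's exact transformation is an assumption), followed by ordinary Kruskal:

      def alpha_pessimistic(a, b, c, alpha):
          """alpha-pessimistic value of a triangular fuzzy number (a, b, c)
          under the credibility measure (inverse credibility distribution)."""
          if alpha <= 0.5:
              return a + 2.0 * alpha * (b - a)
          return 2.0 * b - c + 2.0 * alpha * (c - b)

      def kruskal(n, edges):
          """edges: list of (weight, u, v). Returns the total weight of an MST."""
          parent = list(range(n))
          def find(x):
              while parent[x] != x:
                  parent[x] = parent[parent[x]]   # path halving
                  x = parent[x]
              return x
          total = 0.0
          for w, u, v in sorted(edges):
              ru, rv = find(u), find(v)
              if ru != rv:
                  parent[ru] = rv
                  total += w
          return total

      # Triangular fuzzy edge weights (a, b, c) on a 4-node graph, alpha = 0.9:
      fuzzy_edges = [((1, 2, 4), 0, 1), ((2, 3, 5), 0, 2),
                     ((1, 4, 6), 1, 2), ((3, 5, 9), 1, 3), ((2, 2, 2), 2, 3)]
      crisp = [(alpha_pessimistic(*tfn, 0.9), u, v) for tfn, u, v in fuzzy_edges]
      print(kruskal(4, crisp))   # 10.2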

  9. Income and Social Rank Influence UK Children's Behavioral Problems: A Longitudinal Analysis.

    PubMed

    Garratt, Elisabeth A; Chandola, Tarani; Purdam, Kingsley; Wood, Alex M

    2017-07-01

    Children living in low-income households face elevated risks of behavioral problems, but the impact of absolute and relative income on this risk remains unexplored. Using the U.K. Millennium Cohort Study data, longitudinal associations between Strengths and Difficulties Questionnaire scores and absolute household income, distance from the regional median and mean income, and regional income rank were examined in 3- to 12-year-olds (n = 16,532). Higher absolute household incomes were associated with lower behavioral problems, while higher income rank was associated with lower behavioral problems only at the highest absolute incomes. Higher absolute household incomes were associated with lower behavioral problems among children in working households, indicating compounding effects of income and socioeconomic advantages. Both absolute and relative incomes therefore appear to influence behavioral problems.

  10. Remote Sensing of Environmental Pollution

    NASA Technical Reports Server (NTRS)

    North, G. W.

    1971-01-01

    Environmental pollution is a problem of international scope and concern. It can be subdivided into problems relating to water, air, or land pollution. Many of the problems in these three categories lend themselves to study and possible solution by remote sensing. Through the use of remote sensing systems and techniques, it is possible to detect and monitor, and in some cases, identify, measure, and study the effects of various environmental pollutants. As a guide for making decisions regarding the use of remote sensors for pollution studies, a special five-dimensional sensor/applications matrix has been designed. The matrix defines an environmental goal, ranks the various remote sensing objectives in terms of their ability to assist in solving environmental problems, lists the environmental problems, ranks the sensors that can be used for collecting data on each problem, and finally ranks the sensor platform options that are currently available.

  11. CNN-based ranking for biomedical entity normalization.

    PubMed

    Li, Haodi; Chen, Qingcai; Tang, Buzhou; Wang, Xiaolong; Xu, Hua; Wang, Baohua; Huang, Dong

    2017-10-03

    Most state-of-the-art biomedical entity normalization systems, such as rule-based systems, merely rely on morphological information of entity mentions, but rarely consider their semantic information. In this paper, we introduce a novel convolutional neural network (CNN) architecture that regards biomedical entity normalization as a ranking problem and benefits from semantic information of biomedical entities. The CNN-based ranking method first generates candidates using handcrafted rules, and then ranks the candidates according to their semantic information modeled by CNN as well as their morphological information. Experiments on two benchmark datasets for biomedical entity normalization show that our proposed CNN-based ranking method outperforms traditional rule-based method with state-of-the-art performance. We propose a CNN architecture that regards biomedical entity normalization as a ranking problem. Comparison results show that semantic information is beneficial to biomedical entity normalization and can be well combined with morphological information in our CNN architecture for further improvement.

  12. A descriptive study of ninety-two hospital libraries in Mexico.

    PubMed

    Macías-Chapula, C A

    1995-01-01

    This work reports on the current situation of ninety-two hospital libraries located at Mexico's Social Security Institute (IMSS). A descriptive, systems approach was used to explore the physical structure and space of the libraries, staff, furniture, equipment, collections, services, organization and management, and users. Structured interviews were conducted at each hospital library, and a questionnaire was used as a tool to collect data. A "control" system, designed to measure the status of each library, was applied through the assignment of values to several indicators derived from IMSS policy manuals. This procedure helped to identify which library was "more" or "less" adequate to IMSS standards. On a scale of 10 to 1 (10 = optimum; 1 = minimum), the mean rank of IMSS hospital libraries was 6. The major deficiencies found were those related to furniture (ranked 4) and services (ranked 5) and the lack of library professionals, found in 92% of the libraries.

  13. Beyond Low Rank + Sparse: Multi-scale Low Rank Matrix Decomposition

    PubMed Central

    Ong, Frank; Lustig, Michael

    2016-01-01

    We present a natural generalization of the recent low rank + sparse matrix decomposition and consider the decomposition of matrices into components of multiple scales. Such decomposition is well motivated in practice as data matrices often exhibit local correlations in multiple scales. Concretely, we propose a multi-scale low rank modeling that represents a data matrix as a sum of block-wise low rank matrices with increasing scales of block sizes. We then consider the inverse problem of decomposing the data matrix into its multi-scale low rank components and approach the problem via a convex formulation. Theoretically, we show that under various incoherence conditions, the convex program recovers the multi-scale low rank components either exactly or approximately. Practically, we provide guidance on selecting the regularization parameters and incorporate cycle spinning to reduce blocking artifacts. Experimentally, we show that the multi-scale low rank decomposition provides a more intuitive decomposition than conventional low rank methods and demonstrate its effectiveness in four applications, including illumination normalization for face images, motion separation for surveillance videos, multi-scale modeling of the dynamic contrast enhanced magnetic resonance imaging and collaborative filtering exploiting age information. PMID:28450978

  14. Optical ranked-order filtering using threshold decomposition

    DOEpatents

    Allebach, Jan P.; Ochoa, Ellen; Sweeney, Donald W.

    1990-01-01

    A hybrid optical/electronic system performs median filtering and related ranked-order operations using threshold decomposition to encode the image. Threshold decomposition transforms the nonlinear neighborhood ranking operation into a linear space-invariant filtering step followed by a point-to-point threshold comparison step. Spatial multiplexing allows parallel processing of all the threshold components as well as recombination by a second linear, space-invariant filtering step. An incoherent optical correlation system performs the linear filtering, using a magneto-optic spatial light modulator as the input device and a computer-generated hologram in the filter plane. Thresholding is done electronically. By adjusting the value of the threshold, the same architecture is used to perform median, minimum, and maximum filtering of images. A totally optical system is also disclosed.
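
    A minimal 1-D numpy sketch of the threshold-decomposition principle the patent exploits: decompose the signal into binary threshold slices, apply a linear space-invariant box filter to each slice, threshold point-wise, and stack the results; choosing the comparison threshold selects minimum, median, or maximum. The window handling and integer encoding are ours, not the optical system's.

      import numpy as np

      def rank_filter_threshold_decomposition(x, window, rank):
          """Ranked-order filter of odd-length `window` via threshold decomposition.

          rank = 0 gives the minimum, rank = window - 1 the maximum,
          rank = window // 2 the median. Edges are handled by replication.
          """
          x = np.asarray(x, dtype=int)          # nonnegative integer signal
          half = window // 2
          xp = np.pad(x, half, mode="edge")
          out = np.zeros_like(x)
          for t in range(1, x.max() + 1):       # one binary slice per level
              slice_t = (xp >= t).astype(int)
              # Linear space-invariant step: box sum over the window.
              box = np.convolve(slice_t, np.ones(window, dtype=int), mode="valid")
              # Point-wise threshold: the r-th smallest value in a binary
              # window is 1 iff the window contains >= window - rank ones.
              out += (box >= window - rank).astype(int)
          return out

      x = [3, 0, 5, 1, 2, 7, 2]
      print(rank_filter_threshold_decomposition(x, 3, 1))   # median:  [3 3 1 2 2 2 2]
      print(rank_filter_threshold_decomposition(x, 3, 0))   # minimum: [0 0 0 1 1 2 2]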

  15. A Ranking Approach on Large-Scale Graph With Multidimensional Heterogeneous Information.

    PubMed

    Wei, Wei; Gao, Bin; Liu, Tie-Yan; Wang, Taifeng; Li, Guohui; Li, Hang

    2016-04-01

    Graph-based ranking has been extensively studied and frequently applied in many applications, such as webpage ranking. It aims at mining potentially valuable information from the raw graph-structured data. Recently, with the proliferation of rich heterogeneous information (e.g., node/edge features and prior knowledge) available in many real-world graphs, how to effectively and efficiently leverage all information to improve the ranking performance becomes a new challenging problem. Previous methods only utilize part of such information and attempt to rank graph nodes according to link-based methods, of which the ranking performances are severely affected by several well-known issues, e.g., over-fitting or high computational complexity, especially when the scale of graph is very large. In this paper, we address the large-scale graph-based ranking problem and focus on how to effectively exploit rich heterogeneous information of the graph to improve the ranking performance. Specifically, we propose an innovative and effective semi-supervised PageRank (SSP) approach to parameterize the derived information within a unified semi-supervised learning framework (SSLF-GR), then simultaneously optimize the parameters and the ranking scores of graph nodes. Experiments on the real-world large-scale graphs demonstrate that our method significantly outperforms the algorithms that consider such graph information only partially.

  16. Symmetry breaking by bifundamentals

    NASA Astrophysics Data System (ADS)

    Schellekens, A. N.

    2018-03-01

    We derive all possible symmetry breaking patterns for all possible Higgs fields that can occur in intersecting brane models: bifundamentals and rank-2 tensors. This is a field-theoretic problem that was already partially solved in 1973 by Ling-Fong Li [1]. In that paper the solution was given for rank-2 tensors of orthogonal and unitary group, and U (N )×U (M ) and O (N )×O (M ) bifundamentals. We extend this first of all to symplectic groups. When formulated correctly, this turns out to be straightforward generalization of the previous results from real and complex numbers to quaternions. The extension to mixed bifundamentals is more challenging and interesting. The scalar potential has up to six real parameters. Its minima or saddle points are described by block-diagonal matrices built out of K blocks of size p ×q . Here p =q =1 for the solutions of Ling-Fong Li, and the number of possibilities for p ×q is equal to the number of real parameters in the potential, minus 1. The maximum block size is p ×q =2 ×4 . Different blocks cannot be combined, and the true minimum occurs for one choice of basic block, and for either K =1 or K maximal, depending on the parameter values.

  17. Regularized learning of linear ordered-statistic constant false alarm rate filters (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Havens, Timothy C.; Cummings, Ian; Botts, Jonathan; Summers, Jason E.

    2017-05-01

    The linear ordered statistic (LOS) is a parameterized ordered statistic (OS) that is a weighted average of a rank-ordered sample. LOS operators are useful generalizations of aggregation as they can represent any linear aggregation, from minimum to maximum, including conventional aggregations, such as mean and median. In the fuzzy logic field, these aggregations are called ordered weighted averages (OWAs). Here, we present a method for learning LOS operators from training data, viz., data for which you know the output of the desired LOS. We then extend the learning process with regularization, such that a lower complexity or sparse LOS can be learned. Hence, we discuss what 'lower complexity' means in this context and how to represent that in the optimization procedure. Finally, we apply our learning methods to the well-known constant-false-alarm-rate (CFAR) detection problem, specifically for the case of background levels modeled by long-tailed distributions, such as the K-distribution. These backgrounds arise in several pertinent imaging problems, including the modeling of clutter in synthetic aperture radar and sonar (SAR and SAS) and in wireless communications.
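
    A minimal illustration of the LOS/OWA operator itself; learning the weights from training data and the regularized variant are not shown, and the descending-sort convention below is an assumption:

      import numpy as np

      def los(samples, weights):
          """Linear ordered statistic: weighted average of the rank-ordered sample.

          `weights` applies to the sample sorted in descending order, so
          [1, 0, ..., 0] is the maximum, [0, ..., 0, 1] the minimum, uniform
          weights the mean, and a single 1 in the middle the median.
          """
          w = np.asarray(weights, dtype=float)
          return float(np.sort(samples)[::-1] @ w)

      x = [4.0, 1.0, 7.0, 2.0, 6.0]
      print(los(x, [1, 0, 0, 0, 0]))    # maximum: 7.0
      print(los(x, [0, 0, 0, 0, 1]))    # minimum: 1.0
      print(los(x, [0, 0, 1, 0, 0]))    # median:  4.0
      print(los(x, np.ones(5) / 5))     # mean:    4.0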

  19. Target Fishing for Chemical Compounds using Target-Ligand Activity data and Ranking based Methods

    PubMed Central

    Wale, Nikil; Karypis, George

    2009-01-01

    In recent years the development of computational techniques that identify all the likely targets for a given chemical compound, also termed the problem of Target Fishing, has been an active area of research. Identification of likely targets of a chemical compound helps to understand problems such as toxicity, lack of efficacy in humans, and poor physical properties associated with that compound in the early stages of drug discovery. In this paper we present a set of techniques whose goal is to rank or prioritize targets in the context of a given chemical compound such that most targets that this compound may show activity against appear higher in the ranked list. These methods are based on our extensions to the SVM and Ranking Perceptron algorithms for this problem. Our extensive experimental study shows that the methods developed in this work outperform previous approaches by 2% to 60% under different evaluation criteria. PMID:19764745

  20. A Constant-Factor Approximation Algorithm for the Link Building Problem

    NASA Astrophysics Data System (ADS)

    Olsen, Martin; Viglas, Anastasios; Zvedeniouk, Ilia

    In this work we consider the problem of maximizing the PageRank of a given target node in a graph by adding k new links. We consider the case that the new links must point to the given target node (backlinks). Previous work [7] shows that this problem has no fully polynomial time approximation schemes unless P = NP. We present a polynomial time algorithm yielding a PageRank value within a constant factor from the optimal. We also consider the naive algorithm where we choose backlinks from nodes with high PageRank values compared to the outdegree and show that the naive algorithm performs much worse on certain graphs compared to the constant factor approximation scheme.
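
    A minimal sketch of the naive baseline discussed above: compute PageRank by power iteration, then add backlinks to the target from the k candidate nodes with the highest PageRank-to-outdegree ratio; the graph encoding, damping factor, and tie-breaking are our choices, and the paper's constant-factor algorithm is not reproduced.

      import numpy as np

      def pagerank(adj, d=0.85, iters=100):
          """Power-iteration PageRank. adj[i] is the set of nodes i links to."""
          n = len(adj)
          pr = np.ones(n) / n
          for _ in range(iters):
              nxt = np.full(n, (1 - d) / n)
              for i, outs in enumerate(adj):
                  if outs:
                      for j in outs:
                          nxt[j] += d * pr[i] / len(outs)
                  else:                          # dangling node: spread uniformly
                      nxt += d * pr[i] / n
              pr = nxt
          return pr

      def naive_backlinks(adj, target, k):
          """Add k backlinks chosen by PageRank/outdegree; return target's new PR."""
          pr = pagerank(adj)
          ratio = [(pr[i] / (len(outs) + 1), i)  # +1 accounts for the new link
                   for i, outs in enumerate(adj)
                   if i != target and target not in outs]
          for _, i in sorted(ratio, reverse=True)[:k]:
              adj[i].add(target)
          return pagerank(adj)[target]

      adj = [{1, 2}, {2}, {0}, {0, 1}, set()]    # a small directed graph
      print(naive_backlinks(adj, target=3, k=2))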

  1. Quantum anonymous ranking

    NASA Astrophysics Data System (ADS)

    Huang, Wei; Wen, Qiao-Yan; Liu, Bin; Su, Qi; Qin, Su-Juan; Gao, Fei

    2014-03-01

    Anonymous ranking is a kind of privacy-preserving ranking whereby each of the involved participants can correctly and anonymously get the rankings of his data. It can be utilized to solve many practical problems, such as anonymously ranking the students' exam scores. We investigate the issue of how quantum mechanics can be of use in maintaining the anonymity of the participants in multiparty ranking and present a series of quantum anonymous multiparty, multidata ranking protocols. In each of these protocols, a participant can get the correct rankings of his data and nobody else can match the identity to his data. Furthermore, the security of these protocols with respect to different kinds of attacks is proved.

  2. Higher Education Ranking and Leagues Tables: Lessons Learned from Benchmarking

    ERIC Educational Resources Information Center

    Proulx, Roland

    2007-01-01

    The paper intends to contribute to the debate on ranking and league tables by adopting a critical approach to ranking methodologies from the point of view of a university benchmarking exercise. The absence of a strict benchmarking exercise in the ranking process has been, in the opinion of the author, one of the major problems encountered in the…

  3. An approach to solve group-decision-making problems with ordinal interval numbers.

    PubMed

    Fan, Zhi-Ping; Liu, Yang

    2010-10-01

    The ordinal interval number is a form of uncertain preference information in group decision making (GDM), yet it is seldom discussed in the existing research. This paper investigates how the ranking order of alternatives is determined based on preference information of ordinal interval numbers in GDM problems. When ranking a large quantity of ordinal interval numbers, the efficiency and accuracy of the ranking process are critical. A new approach is proposed to rank alternatives using ordinal interval numbers when every ranking ordinal in an ordinal interval number is assumed to be uniformly and independently distributed in its interval. First, we define the possibility degree for comparing two ordinal interval numbers and give the related theoretical analysis. Then, to rank alternatives by comparing multiple ordinal interval numbers, a collective expectation possibility degree matrix on pairwise comparisons of alternatives is built, and an optimization model based on this matrix is constructed. Furthermore, an algorithm is presented to rank alternatives by solving the model. Finally, two examples are used to illustrate the use of the proposed approach.
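
    A minimal sketch of the uniform-distribution comparison just described (the paper's exact possibility-degree formula may differ; the half-weight for ties is an assumption of this sketch):

      from itertools import product

      def possibility_degree(a, b):
          """Possibility that ordinal interval a=(al, au) is ranked ahead of b=(bl, bu),
          with each ranking ordinal uniform and independent on its integer interval.
          Ties count half (an assumption of this sketch)."""
          (al, au), (bl, bu) = a, b
          wins = ties = 0
          for ra, rb in product(range(al, au + 1), range(bl, bu + 1)):
              wins += ra < rb
              ties += ra == rb
          return (wins + 0.5 * ties) / ((au - al + 1) * (bu - bl + 1))

      # e.g. possibility_degree((1, 3), (2, 4)) -> 7/9, so the first alternative
      # is ahead of the second with possibility about 0.78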

  4. Rank restriction for the variational calculation of two-electron reduced density matrices of many-electron atoms and molecules

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Naftchi-Ardebili, Kasra; Hau, Nathania W.; Mazziotti, David A.

    2011-11-15

    Variational minimization of the ground-state energy as a function of the two-electron reduced density matrix (2-RDM), constrained by necessary N-representability conditions, provides a polynomial-scaling approach to studying strongly correlated molecules without computing the many-electron wave function. Here we introduce a route to enhancing necessary conditions for N-representability through rank restriction of the 2-RDM. Rather than adding computationally more expensive N-representability conditions, we directly enhance the accuracy of two-particle (2-positivity) conditions through rank restriction, which removes degrees of freedom in the 2-RDM that are not sufficiently constrained. We select the rank of the particle-hole 2-RDM by deriving the ranks associated with model wave functions, including both mean-field and antisymmetrized geminal power (AGP) wave functions. Because the 2-positivity conditions are exact for quantum systems with AGP ground states, the rank of the particle-hole 2-RDM from the AGP ansatz provides a minimum for its value in variational 2-RDM calculations of general quantum systems. To implement the rank-restricted conditions, we extend a first-order algorithm for large-scale semidefinite programming. The rank-restricted conditions significantly improve the accuracy of the energies; for example, the percentages of correlation energies recovered for HF, CO, and N₂ improve from 115.2%, 121.7%, and 121.5% without rank restriction to 97.8%, 101.1%, and 100.0% with rank restriction. Similar results are found at both equilibrium and nonequilibrium geometries. While more accurate, the rank-restricted N-representability conditions are less expensive computationally than the full-rank conditions.

  5. Local constructions of gender-based violence amongst IDPs in northern Uganda: analysis of archival data collected using a gender- and age-segmented participatory ranking methodology.

    PubMed

    Ager, Alastair; Bancroft, Carolyn; Berger, Elizabeth; Stark, Lindsay

    2018-01-01

    Gender-based violence (GBV) is a significant problem in conflict-affected settings. Understanding local constructions of such violence is crucial to developing preventive and responsive interventions to address this issue. This study reports on a secondary analysis of archived data collected as part of formative qualitative work - using a group participatory ranking methodology (PRM) - informing research on the prevalence of GBV amongst IDPs in northern Uganda in 2006. Sixty-four PRM group discussions were held with women, with men, with girls (aged 14 to 18 years), and with boys (aged 14 to 18 years) selected on a randomized basis across four internally displaced persons (IDP) camps in Lira District. Discussions elicited problems facing women in the camps, and - through structured participatory methods - consensus ranking of their importance and narrative accounts explaining these judgments. Amongst forms of GBV faced by women, rape was ranked as the greatest concern amongst participants (with a mean problem rank of 3.4), followed by marital rape (mean problem rank of 4.5) and intimate partner violence (mean problem rank of 4.9). Girls ranked all forms of GBV as higher priority concerns than other participants. Discussions indicated that these forms of GBV were generally considered normalized within the camp. Gender roles and power, economic deprivation, and physical and social characteristics of the camp setting emerged as key explanatory factors in accounts of GBV prevalence, although these played out in different ways with respect to differing forms of violence. All groups acknowledged GBV to represent a significant threat - among other major concerns such as transportation, water, shelter, food and security - for women residing in the camps. Given evidence of the significantly higher risk in the camp of intimate partner violence and marital rape, the relative prominence of the issue of rape in all rankings suggests normalization of violence within the home. Programs targeting reduction in GBV need to address community-identified root causes such as economic deprivation and social norms related to gender roles. More generally, PRM appears to offer an efficient means of identifying local constructions of prevailing challenges in a manner that can inform programming.

  6. CRITICAL PROBLEMS AND NEEDS OF CALIFORNIA JUNIOR COLLEGES.

    ERIC Educational Resources Information Center

    PETERSON, BASIL H.; AND OTHERS

    THROUGH A PROCEDURE INVOLVING RESPONSES FROM 85 PERCENT OF THE STATE JUNIOR COLLEGES AND WEIGHTED RANK ORDERING OF ITEMS BY MEMBERS OF THE CALIFORNIA JUNIOR COLLEGE ASSOCIATION COMMITTEE'S ADVISORY AND STEERING SUBCOMMITTEES, 26 OF THE MOST CRITICAL NEEDS AND PROBLEMS ARE IDENTIFIED AND RANKED. THE FIRST FIVE ITEMS ARE CONCERN FOR EFFECTIVENESS…

  7. The Seven Deadly Sins of World University Ranking: A Summary from Several Papers

    ERIC Educational Resources Information Center

    Soh, Kaycheng

    2017-01-01

    World university rankings use the weight-and-sum approach to process data. Although this seems to pass the common sense test, it has statistical problems. In recent years, seven such problems have been uncovered: spurious precision, weight discrepancies, assumed mutual compensation, indicator redundancy, inter-system discrepancy, negligence of…

  8. Handbook of Classroom Management: Research, Practice, and Contemporary Issues

    ERIC Educational Resources Information Center

    Evertson, Carolyn M., Ed.; Weinstein, Carol S., Ed.

    2006-01-01

    Classroom management is a topic of enduring concern for teachers, administrators, and the public. It consistently ranks as the first or second most serious educational problem in the eyes of the general public, and beginning teachers consistently rank it as their most pressing concern during their early teaching years. Management problems continue…

  9. A Coupled Approach for Structural Damage Detection with Incomplete Measurements

    NASA Technical Reports Server (NTRS)

    James, George; Cao, Timothy; Kaouk, Mo; Zimmerman, David

    2013-01-01

    This historical work couples model order reduction, damage detection, dynamic residual/mode shape expansion, and damage extent estimation to overcome the incomplete measurements problem by using an appropriate undamaged structural model. A contribution of this work is the development of a process to estimate the full dynamic residuals using the columns of a spring connectivity matrix obtained by disassembling the structural stiffness matrix. Another contribution is the extension of an eigenvector filtering procedure to produce full-order mode shapes that more closely match the measured active partition of the mode shapes using a set of modified Ritz vectors. The full dynamic residuals and full mode shapes are used as inputs to the minimum rank perturbation theory to provide an estimate of damage location and extent. The issues associated with this process are also discussed as drivers of near-term development activities to understand and improve this approach.

  10. Optical ranked-order filtering using threshold decomposition

    DOEpatents

    Allebach, J.P.; Ochoa, E.; Sweeney, D.W.

    1987-10-09

    A hybrid optical/electronic system performs median filtering and related ranked-order operations using threshold decomposition to encode the image. Threshold decomposition transforms the nonlinear neighborhood ranking operation into a linear space-invariant filtering step followed by a point-to-point threshold comparison step. Spatial multiplexing allows parallel processing of all the threshold components as well as recombination by a second linear, space-invariant filtering step. An incoherent optical correlation system performs the linear filtering, using a magneto-optic spatial light modulator as the input device and a computer-generated hologram in the filter plane. Thresholding is done electronically. By adjusting the value of the threshold, the same architecture is used to perform median, minimum, and maximum filtering of images. A totally optical system is also disclosed. 3 figs.
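
    A minimal numerical analogue of the threshold-decomposition pipeline (digital, not optical; the shift-based box correlation stands in for the optical linear filtering step, and the image is assumed to hold small integer gray levels):

      import numpy as np

      def rank_filter(img, rank, size=3):
          """Ranked-order filter via threshold decomposition: each binary slice goes
          through a linear, space-invariant step (a neighborhood sum) followed by a
          point threshold, and the filtered slices are re-stacked.
          For a 3x3 window: rank=5 gives the median, rank=9 the minimum, rank=1 the maximum."""
          pad = size // 2
          out = np.zeros(img.shape, dtype=int)
          for t in range(1, int(img.max()) + 1):
              binary = np.pad((img >= t).astype(int), pad, mode='edge')
              # linear step: box-kernel correlation implemented with shifts
              s = sum(np.roll(np.roll(binary, dy, 0), dx, 1)
                      for dy in range(-pad, pad + 1) for dx in range(-pad, pad + 1))
              # point threshold: output 1 iff at least `rank` window pixels are 1
              out += (s[pad:-pad, pad:-pad] >= rank).astype(int)
          return out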

  11. Multicolinearity and Indicator Redundancy Problem in World University Rankings: An Example Using Times Higher Education World University Ranking 2013-2014 Data

    ERIC Educational Resources Information Center

    Kaycheng, Soh

    2015-01-01

    World university ranking systems use the weight-and-sum approach to combine indicator scores into overall scores on which the universities are then ranked. This approach assumes that the indicators all independently contribute to the overall score in the specified proportions. In reality, this assumption is doubtful as the indicators tend to…

  12. Tripartite-to-Bipartite Entanglement Transformation by Stochastic Local Operations and Classical Communication and the Structure of Matrix Spaces

    NASA Astrophysics Data System (ADS)

    Li, Yinan; Qiao, Youming; Wang, Xin; Duan, Runyao

    2018-03-01

    We study the problem of transforming a tripartite pure state to a bipartite one using stochastic local operations and classical communication (SLOCC). It is known that the tripartite-to-bipartite SLOCC convertibility is characterized by the maximal Schmidt rank of the given tripartite state, i.e. the largest Schmidt rank over those bipartite states lying in the support of the reduced density operator. In this paper, we further study this problem and exhibit novel results in both multi-copy and asymptotic settings, utilizing powerful results from the structure of matrix spaces. In the multi-copy regime, we observe that the maximal Schmidt rank is strictly super-multiplicative, i.e. the maximal Schmidt rank of the tensor product of two tripartite pure states can be strictly larger than the product of their maximal Schmidt ranks. We then provide a full characterization of those tripartite states whose maximal Schmidt rank is strictly super-multiplicative when taking tensor product with itself. Notice that such tripartite states admit strict advantages in tripartite-to-bipartite SLOCC transformation when multiple copies are provided. In the asymptotic setting, we focus on determining the tripartite-to-bipartite SLOCC entanglement transformation rate. Computing this rate turns out to be equivalent to computing the asymptotic maximal Schmidt rank of the tripartite state, defined as the regularization of its maximal Schmidt rank. Despite the difficulty caused by the super-multiplicative property, we provide explicit formulas for evaluating the asymptotic maximal Schmidt ranks of two important families of tripartite pure states by resorting to certain results of the structure of matrix spaces, including the study of matrix semi-invariants. These formulas turn out to be powerful enough to give a sufficient and necessary condition to determine whether a given tripartite pure state can be transformed to the bipartite maximally entangled state under SLOCC, in the asymptotic setting. Applying the recent progress on the non-commutative rank problem, we can verify this condition in deterministic polynomial time.

  13. On size-constrained minimum s–t cut problems and size-constrained dense subgraph problems

    DOE PAGES

    Chen, Wenbin; Samatova, Nagiza F.; Stallmann, Matthias F.; ...

    2015-10-30

    In some application cases, the solutions of combinatorial optimization problems on graphs should satisfy an additional vertex size constraint. In this paper, we consider size-constrained minimum s–t cut problems and size-constrained dense subgraph problems. We introduce the minimum s–t cut with at-least-k vertices problem, the minimum s–t cut with at-most-k vertices problem, and the minimum s–t cut with exactly k vertices problem. We prove that they are NP-complete. Thus, they are not polynomially solvable unless P = NP. On the other hand, we also study the densest at-least-k-subgraph problem (DalkS) and the densest at-most-k-subgraph problem (DamkS) introduced by Andersen and Chellapilla [1]. We present a polynomial time algorithm for DalkS when k is bounded by some constant c. We also present two approximation algorithms for DamkS. The first approximation algorithm for DamkS has an approximation ratio of (n−1)/(k−1), where n is the number of vertices in the input graph. The second has an approximation ratio of O(n^δ) for some δ < 1/3.

  14. Improved dynamic MRI reconstruction by exploiting sparsity and rank-deficiency.

    PubMed

    Majumdar, Angshul

    2013-06-01

    In this paper we address the problem of dynamic MRI reconstruction from partially sampled K-space data. Our work is motivated by previous studies in this area that proposed exploiting the spatiotemporal correlation of the dynamic MRI sequence by posing the reconstruction problem as a least squares minimization regularized by sparsity and low-rank penalties. Ideally the sparsity and low-rank penalties should be represented by the l(0)-norm and the rank of a matrix; however both are NP hard penalties. The previous studies used the convex l(1)-norm as a surrogate for the l(0)-norm and the non-convex Schatten-q norm (0 < q < 1) as a surrogate for the rank of a matrix. …

  15. Online ranking by projecting.

    PubMed

    Crammer, Koby; Singer, Yoram

    2005-01-01

    We discuss the problem of ranking instances. In our framework, each instance is associated with a rank or a rating, which is an integer between 1 and k. Our goal is to find a rank-prediction rule that assigns each instance a rank that is as close as possible to the instance's true rank. We discuss a group of closely related online algorithms, analyze their performance in the mistake-bound model, and prove their correctness. We describe two sets of experiments, with synthetic data and with the EachMovie data set for collaborative filtering. In the experiments we performed, our algorithms outperform online algorithms for regression and classification applied to ranking.
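
    A minimal sketch in the spirit of the threshold-based online rankers the abstract describes (a PRank-style perceptron update with one weight vector and k-1 ordered thresholds; illustrative, not the paper's exact algorithm family):

      import numpy as np

      class PRank:
          """Online ordinal ranking: predict the rank from w.x against ordered thresholds,
          and on a mistake move the weight vector and the offending thresholds."""
          def __init__(self, dim, k):
              self.w = np.zeros(dim)
              self.b = np.zeros(k - 1)        # thresholds b_1 <= ... <= b_{k-1}
              self.k = k

          def predict(self, x):
              score = self.w @ x
              return int(np.searchsorted(self.b, score, side='right')) + 1  # rank in 1..k

          def update(self, x, y):
              if self.predict(x) == y:
                  return
              score = self.w @ x
              # y_r = +1 if the true rank lies above threshold r, else -1
              yr = np.where(np.arange(1, self.k) < y, 1.0, -1.0)
              tau = yr * ((score - self.b) * yr <= 0)  # only violated thresholds move
              self.w += tau.sum() * x
              self.b -= tau

      # stream usage: r = PRank(dim=10, k=5); for x, y in data: r.update(x, y)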

  16. Improve Biomedical Information Retrieval using Modified Learning to Rank Methods.

    PubMed

    Xu, Bo; Lin, Hongfei; Lin, Yuan; Ma, Yunlong; Yang, Liang; Wang, Jian; Yang, Zhihao

    2016-06-14

    In recent years, the number of biomedical articles has increased exponentially, making it difficult for biologists to capture all the needed information manually. Information retrieval technologies, as the core of search engines, can deal with the problem automatically, providing users with the needed information. However, it is a great challenge to apply these technologies directly to biomedical retrieval, because of the abundance of domain-specific terminologies. To enhance biomedical retrieval, we propose a novel framework based on learning to rank. Learning to rank is a family of state-of-the-art information retrieval techniques that has proved effective in many information retrieval tasks. In the proposed framework, we attempt to tackle the problem of the abundance of terminologies by constructing ranking models, which focus not only on retrieving the most relevant documents, but also on diversifying the search results to increase the completeness of the resulting list for a given query. In the model training, we propose two novel document labeling strategies, and combine several traditional retrieval models as learning features. We also investigate the usefulness of different learning to rank approaches in our framework. Experimental results on TREC Genomics datasets demonstrate the effectiveness of our framework for biomedical information retrieval.

  17. A discrepancy in objective and subjective measures of knowledge: do some medical students with learning problems delude themselves?

    PubMed

    Anthoney, T R

    1986-01-01

    In general, the rankings of first-year medical students on a written test of long-term neuroscience retention (RET) correlated strongly with how many of three neuroscience research presentations given within the following 2 days the students reported understanding. The lowest-ranking sixth of the class on RET, however, reported understanding almost every lecture, even more than the highest-ranking RET students did. Some of these low-ranking students were aware that they had areas of weakness, but simply tolerated more of them without reporting overall lack of understanding. Other low-ranking students, however, seemed genuinely unaware that they had any areas of weakness. This interpretation was further supported by data on small-group problem-solving performance during the first-year neuroscience course, on use of human resources during the final first-year neuroscience take-home examination, and on performance during the third-year clinical clerkships. Persistence of the problem, even after 5 months of instruction specifically designed to improve such information-processing skills, suggests that correction may be difficult to achieve. The need for specific valid evaluative instruments and effective correctional techniques is noted.

  18. Low-rank structure learning via nonconvex heuristic recovery.

    PubMed

    Deng, Yue; Dai, Qionghai; Liu, Risheng; Zhang, Zengke; Hu, Sanqing

    2013-03-01

    In this paper, we propose a nonconvex framework to learn the essential low-rank structure from corrupted data. Different from traditional approaches, which directly utilize convex norms to measure the sparseness, our method introduces more reasonable nonconvex measurements to enhance the sparsity in both the intrinsic low-rank structure and the sparse corruptions. We introduce, respectively, how to combine the widely used ℓp norm (0 < p < 1) and the log-sum term into the framework of low-rank structure learning. Although the proposed optimization is no longer convex, it still can be effectively solved by a majorization-minimization (MM)-type algorithm, with which the nonconvex objective function is iteratively replaced by its convex surrogate and the nonconvex problem finally falls into the general framework of reweighted approaches. We prove that the MM-type algorithm can converge to a stationary point after successive iterations. The proposed model is applied to solve two typical problems: robust principal component analysis and low-rank representation. Experimental results on low-rank structure learning demonstrate that our nonconvex heuristic methods, especially the log-sum heuristic recovery algorithm, generally perform much better than the convex-norm-based method (0 < p < 1) for both data with higher rank and with denser corruptions.
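
    A minimal sketch of the MM/reweighting idea on a simpler sparse-recovery instance (assumptions of this sketch: the log-sum penalty is applied to a vector rather than to singular values, and ISTA stands in for the paper's solver; `lam`, `eps` and the iteration counts are illustrative):

      import numpy as np

      def logsum_sparse_recovery(A, y, lam=0.1, eps=1e-3, outer=20, inner=50):
          """MM sketch for min 0.5*||Ax - y||^2 + lam * sum(log(|x_i| + eps)):
          each outer step majorizes the log-sum term by a weighted l1 norm and
          solves the surrogate with proximal-gradient (ISTA) steps."""
          x = np.zeros(A.shape[1])
          L = np.linalg.norm(A, 2) ** 2           # Lipschitz constant of the data-term gradient
          for _ in range(outer):
              w = lam / (np.abs(x) + eps)          # MM reweighting from the log-sum majorizer
              for _ in range(inner):
                  z = x - A.T @ (A @ x - y) / L
                  x = np.sign(z) * np.maximum(np.abs(z) - w / L, 0.0)  # weighted soft-threshold
          return x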

  19. The price of a drink: levels of consumption and price paid per unit of alcohol by Edinburgh's ill drinkers with a comparison to wider alcohol sales in Scotland

    PubMed Central

    Black, Heather; Gill, Jan; Chick, Jonathan

    2011-01-01

    Aim To compare alcohol purchasing and consumption by ill drinkers in Edinburgh with wider alcohol sales in Scotland. Design Cross-sectional. Setting Two hospitals in Edinburgh in 2008/09. Participants A total of 377 patients with serious alcohol problems; two-thirds were in-patients with medical, surgical or psychiatric problems due to alcohol; one-third were out-patients. Measurements Last week's or typical weekly consumption of alcohol: type, brand, units (1 UK unit = 8 g ethanol), purchase place and price. Findings Patients consumed a mean of 197.7 UK units/week. The mean price paid per unit was £0.43 (lowest £0.09/unit) (£1 = 1.6 US$ or 1.2€), which is below the mean unit price of £0.71 paid in Scotland in 2008. Of units consumed, 70.3% were sold at or below £0.40/unit (mid-range of price models proposed for minimum pricing legislation by the Scottish Government), and 83% at or below £0.50/unit proposed by the Chief Medical Officer of England. The lower the price paid per unit, the more units a patient consumed. A continuous increase in unit price from lower to higher social status, ranked according to the Scottish Index of Multiple Deprivation (based on postcode), was not seen; patients residing in postcodes in the mid-quintile paid the highest price per unit. Cheapness was quoted commonly as a reason for beverage choice; ciders, especially ‘white’ cider, and vodka were, at off-sales, cheapest per unit. Stealing alcohol or drinking alcohol substitutes was only very rarely reported. Conclusions Because patients with serious alcohol problems tend to purchase very cheap alcohol, elimination of the cheapest sales by minimum price or other legislation might reduce their consumption. It is unknown whether proposed price legislation in Scotland will encourage patients with serious alcohol problems to start stealing alcohol or drinking substitutes or will reduce the recruitment of new drinkers with serious alcohol problems and produce predicted longer-term gains in health and social wellbeing. PMID:21134019
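
    A small worked example of the unit arithmetic behind these figures (the bottle size, strength and price below are hypothetical, chosen only to show how a very low price per unit arises):

      def uk_units(volume_ml, abv_percent):
          """UK units in a drink: volume(ml) x ABV(%) / 1000 (1 unit = 8 g ethanol)."""
          return volume_ml * abv_percent / 1000.0

      # e.g. a hypothetical 3-litre bottle of 7.5% 'white' cider sold at 3.99 GBP:
      units = uk_units(3000, 7.5)        # 22.5 units in the bottle
      price_per_unit = 3.99 / units      # about 0.18 GBP per unit, far below 0.50/unit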

  20. Grade Non-Disclosure. NBER Working Paper No. 17465

    ERIC Educational Resources Information Center

    Gottlieb, Daniel; Smetters, Kent

    2011-01-01

    This paper documents and explains the existence of grade non-disclosure policies in Masters in Business Administration programs, why these policies are concentrated in highly-ranked programs, and why these policies are not prevalent in most other professional degree programs. Related policies, including honors and minimum grade requirements, are…

  1. Upward Bound. Program Objectives, Summer 1971.

    ERIC Educational Resources Information Center

    Wesleyan Univ., Middletown, CT.

    The primary program objectives were as follows: (1) The students will achieve passing grade in the college preparation program; (2) The students will achieve one year academic growth each year as measured by the SCAT and other standardized measurements; (3) The students will achieve the minimum PSAT percentile rank as anticipated for college…

  2. An 11-Year Analysis of Black Students' Experience of Problems and Use of Services: Implications for Counseling Professionals.

    ERIC Educational Resources Information Center

    June, Lee N.; And Others

    1990-01-01

    Examined problems experienced and services used by Black college students (N=1,261) over 11 years. Found issues of finances, academic adjustment, and living conditions were ranked highest. Use of several services could be predicted by sex, classification, age, and residence, but use of services was not always consistent with rankings, particularly…

  3. Maine Environmental Priorities Project: Summary of the Reports from the Technical Working Groups to the Steering Committee.

    ERIC Educational Resources Information Center

    National Association for Environmental Education, Miami, FL.

    The Maine Environmental Priorities Project (MEPP) is a comparative risk project designed to identify, compare, and rank the most serious environmental problems facing Maine. Once the problems are analyzed and ranked according to their threat or risk to Maine's ecological health, human health, and quality of life, the project will propose…

  4. Embedded feature ranking for ensemble MLP classifiers.

    PubMed

    Windeatt, Terry; Duangsoithong, Rakkrit; Smith, Raymond

    2011-06-01

    A feature ranking scheme for multilayer perceptron (MLP) ensembles is proposed, along with a stopping criterion based upon the out-of-bootstrap estimate. To solve multi-class problems, feature ranking is combined with modified error-correcting output coding. Experimental results on benchmark data demonstrate the versatility of the MLP base classifier in removing irrelevant features.

  5. Colleges Question Data Used by "Yahoo!" To Rank the "Most Wired" Campus.

    ERIC Educational Resources Information Center

    Young, Jeffrey R.

    1997-01-01

    College administrators are complaining that "Yahoo! Internet Life" magazine used a flawed surveying process and inaccurate data to select the institutions it named in a recent ranking of "American's 100 Most Wired Colleges." Even some institutions faring well in the ranking have concerns about the survey, citing problems with…

  6. Global University Rankings, Transnational Policy Discourse and Higher Education in Europe

    ERIC Educational Resources Information Center

    Erkkilä, Tero

    2014-01-01

    Global university rankings have portrayed European higher education institutions in varying lights, leading to intense reflection on the figures on the EU and national levels alike. The rankings have helped to construct a policy problem of "European higher education", framing higher education as an element of competitiveness in a global…

  7. Sparse subspace clustering for data with missing entries and high-rank matrix completion.

    PubMed

    Fan, Jicong; Chow, Tommy W S

    2017-09-01

    Many methods have recently been proposed for subspace clustering, but they are often unable to handle incomplete data because of missing entries. Using matrix completion methods to recover missing entries is a common way to solve the problem. Conventional matrix completion methods require that the matrix should be of low-rank intrinsically, but most matrices are of high-rank or even full-rank in practice, especially when the number of subspaces is large. In this paper, a new method called Sparse Representation with Missing Entries and Matrix Completion is proposed to solve the problems of incomplete-data subspace clustering and high-rank matrix completion. The proposed algorithm alternately computes the matrix of sparse representation coefficients and recovers the missing entries of a data matrix. The proposed algorithm recovers missing entries through minimizing the representation coefficients, representation errors, and matrix rank. Thorough experimental study and comparative analysis based on synthetic data and natural images were conducted. The presented results demonstrate that the proposed algorithm is more effective in subspace clustering and matrix completion compared with other existing methods. Copyright © 2017 Elsevier Ltd. All rights reserved.

  8. Ranking influential spreaders is an ill-defined problem

    NASA Astrophysics Data System (ADS)

    Gu, Jain; Lee, Sungmin; Saramäki, Jari; Holme, Petter

    2017-06-01

    Finding influential spreaders of information and disease in networks is an important theoretical problem, and one of considerable recent interest. It has been almost exclusively formulated as a node-ranking problem — methods for identifying influential spreaders output a ranking of the nodes. In this work, we show that such a greedy heuristic does not necessarily work: the set of most influential nodes depends on the number of nodes in the set. Therefore, the set of n most important nodes to vaccinate need not have any node in common with the set of n + 1 most important nodes. We propose a method for quantifying the extent and impact of this phenomenon, and use it to show that the phenomenon is common in both empirical and model networks.

  9. The Augmented Lagrange Multipliers Method for Matrix Completion from Corrupted Samplings with Application to Mixed Gaussian-Impulse Noise Removal

    PubMed Central

    Meng, Fan; Yang, Xiaomei; Zhou, Chenghu

    2014-01-01

    This paper studies the problem of the restoration of images corrupted by mixed Gaussian-impulse noise. In recent years, low-rank matrix reconstruction has become a research hotspot in many scientific and engineering domains such as machine learning, image processing, computer vision and bioinformatics, which mainly involves the problems of matrix completion and robust principal component analysis, namely recovering a low-rank matrix from an incomplete but accurate sampling subset of its entries and from an observed data matrix with an unknown fraction of its entries being arbitrarily corrupted, respectively. Inspired by these ideas, we consider the problem of recovering a low-rank matrix from an incomplete sampling subset of its entries with an unknown fraction of the samplings contaminated by arbitrary errors, which is defined as the problem of matrix completion from corrupted samplings and modeled as a convex optimization problem that minimizes a combination of the nuclear norm and the ℓ1-norm in this paper. Meanwhile, we put forward a novel and effective algorithm called augmented Lagrange multipliers to exactly solve the problem. For mixed Gaussian-impulse noise removal, we regard it as the problem of matrix completion from corrupted samplings, and restore the noisy image following an impulse-detecting procedure. Compared with some existing methods for mixed noise removal, the recovery quality performance of our method is dominant if images possess low-rank features such as geometrically regular textures and similar structured contents; especially when the density of impulse noise is relatively high and the variance of Gaussian noise is small, our method significantly outperforms the traditional methods not only in the simultaneous removal of Gaussian and impulse noise and in the restoration of a low-rank image matrix, but also in the preservation of textures and details in the image. PMID:25248103
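
    A minimal sketch of an augmented-Lagrange iteration for the nuclear-plus-ℓ1 program described above (RPCA-style; the parameter choices follow common inexact-ALM practice and are assumptions of this sketch, not the paper's exact algorithm):

      import numpy as np

      def svd_shrink(X, tau):
          """Singular-value soft-threshold: the proximal operator of the nuclear norm."""
          U, s, Vt = np.linalg.svd(X, full_matrices=False)
          return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

      def rpca_ialm(D, iters=200):
          """Inexact ALM sketch for min ||L||_* + lam*||S||_1  s.t.  D = L + S."""
          m, n = D.shape
          lam = 1.0 / np.sqrt(max(m, n))
          Y = D / max(np.linalg.norm(D, 2), np.abs(D).max() / lam)  # dual initialization
          mu = 1.25 / np.linalg.norm(D, 2)
          S = np.zeros_like(D)
          for _ in range(iters):
              L = svd_shrink(D - S + Y / mu, 1.0 / mu)              # nuclear-norm prox
              R = D - L + Y / mu
              S = np.sign(R) * np.maximum(np.abs(R) - lam / mu, 0)  # l1 prox
              Y += mu * (D - L - S)                                 # dual ascent
              mu *= 1.05
          return L, S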

  10. A case against a divide and conquer approach to the nonsymmetric eigenvalue problem

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jessup, E.R.

    1991-12-01

    Divide and conquer techniques based on rank-one updating have proven fast, accurate, and efficient in parallel for the real symmetric tridiagonal and unitary eigenvalue problems and for the bidiagonal singular value problem. Although the divide and conquer mechanism can also be adapted to the real nonsymmetric eigenproblem in a straightforward way, most of the desirable characteristics of the other algorithms are lost. In this paper, we examine the problems of accuracy and efficiency that can stand in the way of a nonsymmetric divide and conquer eigensolver based on low-rank updating. 31 refs., 2 figs.

  11. Ranking Support Vector Machine with Kernel Approximation

    PubMed Central

    Dou, Yong

    2017-01-01

    Learning to rank algorithm has become important in recent years due to its successful application in information retrieval, recommender system, and computational biology, and so forth. Ranking support vector machine (RankSVM) is one of the state-of-art ranking models and has been favorably used. Nonlinear RankSVM (RankSVM with nonlinear kernels) can give higher accuracy than linear RankSVM (RankSVM with a linear kernel) for complex nonlinear ranking problem. However, the learning methods for nonlinear RankSVM are still time-consuming because of the calculation of kernel matrix. In this paper, we propose a fast ranking algorithm based on kernel approximation to avoid computing the kernel matrix. We explore two types of kernel approximation methods, namely, the Nyström method and random Fourier features. Primal truncated Newton method is used to optimize the pairwise L2-loss (squared Hinge-loss) objective function of the ranking model after the nonlinear kernel approximation. Experimental results demonstrate that our proposed method gets a much faster training speed than kernel RankSVM and achieves comparable or better performance over state-of-the-art ranking algorithms. PMID:28293256

  12. Ranking Support Vector Machine with Kernel Approximation.

    PubMed

    Chen, Kai; Li, Rongchun; Dou, Yong; Liang, Zhengfa; Lv, Qi

    2017-01-01

    Learning to rank algorithm has become important in recent years due to its successful application in information retrieval, recommender system, and computational biology, and so forth. Ranking support vector machine (RankSVM) is one of the state-of-art ranking models and has been favorably used. Nonlinear RankSVM (RankSVM with nonlinear kernels) can give higher accuracy than linear RankSVM (RankSVM with a linear kernel) for complex nonlinear ranking problem. However, the learning methods for nonlinear RankSVM are still time-consuming because of the calculation of kernel matrix. In this paper, we propose a fast ranking algorithm based on kernel approximation to avoid computing the kernel matrix. We explore two types of kernel approximation methods, namely, the Nyström method and random Fourier features. Primal truncated Newton method is used to optimize the pairwise L2-loss (squared Hinge-loss) objective function of the ranking model after the nonlinear kernel approximation. Experimental results demonstrate that our proposed method gets a much faster training speed than kernel RankSVM and achieves comparable or better performance over state-of-the-art ranking algorithms.
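
    A minimal sketch of the kernel-approximation recipe (random Fourier features for an RBF kernel, then a pairwise squared-hinge objective; plain gradient descent stands in for the paper's primal truncated Newton step, and all parameter values are illustrative):

      import numpy as np

      def rank_svm_rff(X, pairs, D=200, gamma=1.0, lam=1e-2, lr=0.1, epochs=50, seed=0):
          """Pairwise L2-loss (squared hinge) ranking on random Fourier features.
          `pairs` holds index pairs (i, j) meaning item i should rank above item j."""
          rng = np.random.default_rng(seed)
          n, d = X.shape
          W = rng.normal(scale=np.sqrt(2 * gamma), size=(d, D))  # spectral samples of the RBF kernel
          b = rng.uniform(0, 2 * np.pi, D)
          phi = lambda M: np.sqrt(2.0 / D) * np.cos(M @ W + b)   # z(x), with z(x).z(x') ~ k(x, x')
          Z = phi(X)
          w = np.zeros(D)
          for _ in range(epochs):
              g = lam * w
              for i, j in pairs:
                  diff = Z[i] - Z[j]
                  margin = w @ diff
                  if margin < 1:                                 # squared-hinge gradient
                      g -= 2 * (1 - margin) * diff
              w -= lr * g / max(len(pairs), 1)
          return w, lambda Xnew: phi(Xnew) @ w                   # scores for ranking new items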

  13. Does the patient's inherent rating tendency influence reported satisfaction scores and affect division ranking?

    PubMed

    Francis, Patricia; Agoritsas, Thomas; Chopard, Pierre; Perneger, Thomas

    2016-04-01

    To determine the impact of adjusting for rating tendency (RT) on patient satisfaction scores in a large teaching hospital and to assess the impact of adjustment on the ranking of divisions. Cross-sectional survey. Large 2200-bed university teaching hospital. All adult patients hospitalized during a 1-month period in one of 20 medical divisions. None. Patient experience of care measured by the Picker Patient Experience questionnaire and RT scores. Problem scores were weakly but significantly associated with RT. Division ranking was slightly modified in RT adjusted models. Division ranking changed substantially in case-mix adjusted models. Adjusting patient self-reported problem scores for RT did impact ranking of divisions, although marginally. Further studies are needed to determine the impact of RT when comparing different institutions, particularly across inter-cultural settings, where the difference in RT may be more substantial. © The Author 2016. Published by Oxford University Press in association with the International Society for Quality in Health Care; all rights reserved.

  14. Leveraging Multiactions to Improve Medical Personalized Ranking for Collaborative Filtering.

    PubMed

    Gao, Shan; Guo, Guibing; Li, Runzhi; Wang, Zongmin

    2017-01-01

    Nowadays, providing high-quality recommendation services to users is an essential component in web applications, including shopping, making friends, and healthcare. This can be regarded either as a problem of estimating users' preference by exploiting explicit feedbacks (numerical ratings), or as a problem of collaborative ranking with implicit feedback (e.g., purchases, views, and clicks). Previous works for solving this issue include pointwise regression methods and pairwise ranking methods. The emerging healthcare websites and online medical databases impose a new challenge for medical service recommendation. In this paper, we develop a model, MBPR (Medical Bayesian Personalized Ranking over multiple users' actions), based on the simple observation that users tend to assign higher ranks to some kind of healthcare services that are meanwhile preferred in users' other actions. Experimental results on the real-world datasets demonstrate that MBPR achieves more accurate recommendations than several state-of-the-art methods and shows its generality and scalability via experiments on the datasets from one mobile shopping app.

  15. Leveraging Multiactions to Improve Medical Personalized Ranking for Collaborative Filtering

    PubMed Central

    2017-01-01

    Nowadays, providing high-quality recommendation services to users is an essential component in web applications, including shopping, making friends, and healthcare. This can be regarded either as a problem of estimating users' preference by exploiting explicit feedbacks (numerical ratings), or as a problem of collaborative ranking with implicit feedback (e.g., purchases, views, and clicks). Previous works for solving this issue include pointwise regression methods and pairwise ranking methods. The emerging healthcare websites and online medical databases impose a new challenge for medical service recommendation. In this paper, we develop a model, MBPR (Medical Bayesian Personalized Ranking over multiple users' actions), based on the simple observation that users tend to assign higher ranks to some kind of healthcare services that are meanwhile preferred in users' other actions. Experimental results on the real-world datasets demonstrate that MBPR achieves more accurate recommendations than several state-of-the-art methods and shows its generality and scalability via experiments on the datasets from one mobile shopping app. PMID:29118963

  16. Inference for Distributions over the Permutation Group

    DTIC Science & Technology

    2008-05-01

    …world problems, such as voting, ranking, and data association. Representing uncertainty over permutations is challenging, since there are n! possibilities. … the Kronecker (or Tensor) Product Representation. In general, the Kronecker product representation is reducible, and so it can be decomposed into a direct …

  17. ICTNET at Web Track 2012 Ad-hoc Task

    DTIC Science & Technology

    2012-11-01

    …Model and use it as baseline this year. 3.2 Learning to rank. Learning to rank (LTR) introduces machine learning to the retrieval ranking problem. It… Yoram Singer. An efficient boosting algorithm for combining preferences [J]. The Journal of Machine Learning Research, 2003.

  18. Rectifying an Honest Error in World University Rankings: A Solution to the Problem of Indicator Weight Discrepancies

    ERIC Educational Resources Information Center

    Soh, Kaycheng

    2013-01-01

    Discrepancies between the nominal and attained indicator weights misinform rank consumers as to the relative importance of the indicators. This may lead to unwarranted institutional judgements and misdirected actions, causing resources being wasted unnecessarily. As a follow-up to two earlier studies, data from the Academic Ranking of World…

  19. Student understanding of first order RC filters

    NASA Astrophysics Data System (ADS)

    Coppens, Pieter; Van den Bossche, Johan; De Cock, Mieke

    2017-12-01

    A series of interviews with second year electronics engineering students showed several problems with understanding first-order RC filters. To better explore how widespread these problems are, a questionnaire was administered to over 150 students in Belgium. One question asked students to rank the output voltage of a low-pass filter for an AC versus a DC input signal, while a second asked them to rank the output voltages of a high-pass filter with doubled or halved resistor and capacitor values. In addition to a discussion of the rankings and students' consistency, the results are compared to the most common reasoning patterns students used to explain their rankings. Despite lecture and laboratory instruction, students not only rarely recognize the circuits as filters, but also fail to correctly apply Kirchhoff's laws and Ohm's law to arrive at a correct answer.
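
    A small numerical check of the two questionnaire tasks, assuming ideal first-order RC filters (the component values and test frequency below are illustrative):

      import numpy as np

      # First-order RC filters: f_c = 1/(2*pi*R*C);
      # low-pass |H(f)| = 1/sqrt(1 + (f/f_c)^2), high-pass |H(f)| = (f/f_c)/sqrt(1 + (f/f_c)^2).
      def lowpass_gain(f, R, C):
          x = 2 * np.pi * f * R * C
          return 1.0 / np.sqrt(1.0 + x ** 2)

      def highpass_gain(f, R, C):
          x = 2 * np.pi * f * R * C
          return x / np.sqrt(1.0 + x ** 2)

      R, C, f = 1e3, 100e-9, 1e3                 # 1 kOhm, 100 nF, 1 kHz test tone
      print(lowpass_gain(0.0, R, C))             # DC input passes the low-pass filter: gain 1
      print(lowpass_gain(f, R, C))               # an AC input is attenuated
      print(highpass_gain(f, 2 * R, 2 * C))      # doubling R and C lowers f_c 4x: gain rises
      print(highpass_gain(f, R / 2, C / 2))      # halving R and C raises f_c 4x: gain drops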

  20. Scalable Nonparametric Low-Rank Kernel Learning Using Block Coordinate Descent.

    PubMed

    Hu, En-Liang; Kwok, James T

    2015-09-01

    Nonparametric kernel learning (NPKL) is a flexible approach to learn the kernel matrix directly without assuming any parametric form. It can be naturally formulated as a semidefinite program (SDP), which, however, is not very scalable. To address this problem, we propose the combined use of low-rank approximation and block coordinate descent (BCD). Low-rank approximation avoids the expensive positive semidefinite constraint in the SDP by replacing the kernel matrix variable with VᵀV, where V is a low-rank matrix. The resultant nonlinear optimization problem is then solved by BCD, which optimizes each column of V sequentially. It can be shown that the proposed algorithm has nice convergence properties and low computational complexity. Experiments on a number of real-world data sets show that the proposed algorithm outperforms state-of-the-art NPKL solvers.

  1. Low-rank matrix decomposition and spatio-temporal sparse recovery for STAP radar

    DOE PAGES

    Sen, Satyabrata

    2015-08-04

    We develop space-time adaptive processing (STAP) methods by leveraging the advantages of sparse signal processing techniques in order to detect a slowly-moving target. We observe that the inherent sparse characteristics of a STAP problem can be formulated as the low-rankness of the clutter covariance matrix when compared to the total adaptive degrees-of-freedom, and also as the sparse interference spectrum on the spatio-temporal domain. By exploiting these sparse properties, we propose two approaches for estimating the interference covariance matrix. In the first approach, we consider a constrained matrix rank minimization problem (RMP) to decompose the sample covariance matrix into a low-rank positive semidefinite matrix and a diagonal matrix. The solution of the RMP is obtained by applying the trace minimization technique and the singular value decomposition with a matrix shrinkage operator. Our second approach deals with the atomic norm minimization problem to recover the clutter response-vector that has a sparse support on the spatio-temporal plane. We use convex relaxation based standard sparse-recovery techniques to find the solutions. With extensive numerical examples, we demonstrate the performance of the proposed STAP approaches in both ideal and practical scenarios, involving Doppler-ambiguous clutter ridges and spatial and temporal decorrelation effects. The low-rank matrix decomposition based solution requires only about twice the clutter rank in secondary measurements to attain near-ideal STAP performance, whereas the spatio-temporal sparsity based approach needs a considerably smaller number of secondary data.

  2. Ranking in evolving complex networks

    NASA Astrophysics Data System (ADS)

    Liao, Hao; Mariani, Manuel Sebastian; Medo, Matúš; Zhang, Yi-Cheng; Zhou, Ming-Yang

    2017-05-01

    Complex networks have emerged as a simple yet powerful framework to represent and analyze a wide range of complex systems. The problem of ranking the nodes and the edges in complex networks is critical for a broad range of real-world problems because it affects how we access online information and products, how success and talent are evaluated in human activities, and how scarce resources are allocated by companies and policymakers, among others. This calls for a deep understanding of how existing ranking algorithms perform, and which are their possible biases that may impair their effectiveness. Many popular ranking algorithms (such as Google's PageRank) are static in nature and, as a consequence, they exhibit important shortcomings when applied to real networks that rapidly evolve in time. At the same time, recent advances in the understanding and modeling of evolving networks have enabled the development of a wide and diverse range of ranking algorithms that take the temporal dimension into account. The aim of this review is to survey the existing ranking algorithms, both static and time-aware, and their applications to evolving networks. We emphasize both the impact of network evolution on well-established static algorithms and the benefits from including the temporal dimension for tasks such as prediction of network traffic, prediction of future links, and identification of significant nodes.

  3. Control of Finite-State, Finite Memory Stochastic Systems

    NASA Technical Reports Server (NTRS)

    Sandell, Nils R.

    1974-01-01

    A generalized problem of stochastic control is discussed in which multiple controllers with different data bases are present. The vehicle for the investigation is the finite state, finite memory (FSFM) stochastic control problem. Optimality conditions are obtained by deriving an equivalent deterministic optimal control problem. A FSFM minimum principle is obtained via the equivalent deterministic problem. The minimum principle suggests the development of a numerical optimization algorithm, the min-H algorithm. The relationship between the sufficiency of the minimum principle and the informational properties of the problem are investigated. A problem of hypothesis testing with 1-bit memory is investigated to illustrate the application of control theoretic techniques to information processing problems.

  4. A Ranking Analysis/An Interlinking Approach of New Triangular Fuzzy Cognitive Maps and Combined Effective Time Dependent Matrix

    NASA Astrophysics Data System (ADS)

    Adiga, Shreemathi; Saraswathi, A.; Praveen Prakash, A.

    2018-04-01

    This paper presents an interlinking approach of new Triangular Fuzzy Cognitive Maps (TrFCM) and the Combined Effective Time Dependent (CETD) matrix to rank the problems faced by transgender people. Section one begins with an introduction that briefly describes the scope of TrFCM and the CETD matrix. Section two models the causes of the problems faced by transgender people using the TrFCM method and performs the calculations on the data collected among them. Section three discusses the main causes of these problems. Section four applies Charles Spearman's coefficient of rank correlation to interlink the TrFCM method and the CETD matrix. Section five presents the results of the study.
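
    A minimal sketch of the rank-correlation step used to interlink the two rankings (the five-problem example is illustrative):

      def spearman_rho(rank_a, rank_b):
          """Spearman's coefficient of rank correlation between two rankings of the
          same n problems (no ties assumed): rho = 1 - 6*sum(d_i^2) / (n*(n^2 - 1))."""
          n = len(rank_a)
          d2 = sum((a - b) ** 2 for a, b in zip(rank_a, rank_b))
          return 1 - 6 * d2 / (n * (n ** 2 - 1))

      # e.g. comparing a TrFCM ranking with a CETD-matrix ranking of five problems:
      # spearman_rho([1, 2, 3, 4, 5], [2, 1, 3, 5, 4]) -> 0.8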

  5. Learning of Rule Ensembles for Multiple Attribute Ranking Problems

    NASA Astrophysics Data System (ADS)

    Dembczyński, Krzysztof; Kotłowski, Wojciech; Słowiński, Roman; Szeląg, Marcin

    In this paper, we consider the multiple attribute ranking problem from a Machine Learning perspective. We propose two approaches to statistical learning of an ensemble of decision rules from decision examples provided by the Decision Maker in terms of pairwise comparisons of some objects. The first approach consists in learning a preference function defining a binary preference relation for a pair of objects. The result of applying this function to all pairs of objects to be ranked is then exploited using the Net Flow Score procedure, giving a linear ranking of objects. The second approach consists in learning a utility function for single objects. The utility function also gives a linear ranking of objects. In both approaches, the learning is based on the boosting technique. The presented approaches to Preference Learning share the good properties of the decision rule preference model and perform well in massive-data learning problems. As Preference Learning and Multiple Attribute Decision Aiding share many concepts and methodological issues, in the introduction we review some aspects bridging these two fields. To illustrate the two approaches proposed in this paper, we apply them to a toy example concerning the ranking of a set of cars evaluated on multiple attributes. Then, we perform a large-data experiment on real data sets. The first data set concerns credit rating. Since recent research in the field of Preference Learning is motivated by the increasing role of modeling preferences in recommender systems and information retrieval, we chose two other massive data sets from this area - one comes from the movie recommender system MovieLens, and the other concerns ranking of text documents from the 20 Newsgroups data set.
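
    A minimal sketch of the Net Flow Score exploitation step (here `pref` stands for the learned preference function; the name and its numeric output are assumptions of this sketch):

      def net_flow_ranking(objects, pref):
          """Rank objects by Net Flow Score: score(a) = sum over b of pref(a, b) - pref(b, a),
          where pref(a, b) is the learned (binary or valued) preference of a over b."""
          scores = {a: sum(pref(a, b) - pref(b, a) for b in objects if b != a)
                    for a in objects}
          return sorted(objects, key=lambda a: scores[a], reverse=True), scores

      # usage: ranking, scores = net_flow_ranking(cars, pref) with pref learned by boosting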

  6. Rank-based decompositions of morphological templates.

    PubMed

    Sussner, P; Ritter, G X

    2000-01-01

    Methods for matrix decomposition have found numerous applications in image processing, in particular for the problem of template decomposition. Since existing matrix decomposition techniques are mainly concerned with the linear domain, we consider it timely to investigate matrix decomposition techniques in the nonlinear domain with applications in image processing. The mathematical basis for these investigations is the new theory of rank within minimax algebra. Thus far, only minimax decompositions of rank 1 and rank 2 matrices into outer product expansions are known to the image processing community. We derive a heuristic algorithm for the decomposition of matrices having arbitrary rank.
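
    To make the rank-1 case concrete: in minimax algebra, a rank-1 template is a max-plus outer product t[i,j] = u[i] + v[j], so dilation by t separates into two cheaper 1-D dilations. A small self-check sketch (the naive dilation routine and the example template are illustrative):

      import numpy as np

      def grayscale_dilation(img, t):
          """Naive grayscale dilation of img by template t (a max-plus correlation
          with the reflected template; borders padded with -inf)."""
          ph, pw = t.shape[0] // 2, t.shape[1] // 2
          padded = np.pad(img, ((ph, ph), (pw, pw)), constant_values=-np.inf)
          out = np.empty(img.shape)
          for i in range(img.shape[0]):
              for j in range(img.shape[1]):
                  out[i, j] = np.max(padded[i:i + t.shape[0], j:j + t.shape[1]] + t[::-1, ::-1])
          return out

      # A minimax rank-1 template factors into a column and a row piece:
      u, v = np.array([0.0, 1.0, 0.0]), np.array([1.0, 2.0, 1.0])
      t = u[:, None] + v[None, :]           # t[i, j] = u[i] + v[j]
      img = np.random.rand(8, 8)
      full = grayscale_dilation(img, t)
      cascade = grayscale_dilation(grayscale_dilation(img, u[:, None]), v[None, :])
      assert np.allclose(full, cascade)     # the 2-D dilation equals the 1-D cascade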

  7. Learning to rank-based gene summary extraction.

    PubMed

    Shang, Yue; Hao, Huihui; Wu, Jiajin; Lin, Hongfei

    2014-01-01

    In recent years, the biomedical literature has been growing rapidly. These articles provide a large amount of information about proteins, genes and their interactions. Reading such a huge amount of literature is a tedious task for researchers seeking knowledge about a gene. As a result, it is important for biomedical researchers to gain a quick understanding of a query concept by integrating its relevant resources. In the task of gene summary generation, we regard automatic summarization as a ranking problem and apply learning to rank to solve it automatically. This paper uses three features as a basis for sentence selection: gene ontology relevance, topic relevance and TextRank. From there, we obtain the feature weight vector using the learning to rank algorithm, predict the scores of candidate summary sentences, and select the top sentences to generate the summary. ROUGE (a toolkit for the automatic evaluation of summaries) was used to evaluate the summarization results, and the experiments showed that our method outperforms the baseline techniques. According to the experimental results, the combination of the three features improves the performance of the summary. The application of learning to rank can facilitate the further expansion of features for measuring the significance of sentences.
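
    A minimal sketch of the final scoring-and-selection step (assumed interface: `features(s)` returns the three feature values for a sentence and `w` is the learned weight vector):

      def extract_summary(sentences, features, w, top_k=5):
          """Score candidate sentences with a learned weight vector over the three
          feature types the paper uses (GO relevance, topic relevance, TextRank),
          then keep the top-ranked sentences as the gene summary."""
          scored = sorted(sentences,
                          key=lambda s: sum(wi * fi for wi, fi in zip(w, features(s))),
                          reverse=True)
          return scored[:top_k]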

  8. Constrained Low-Rank Learning Using Least Squares-Based Regularization.

    PubMed

    Li, Ping; Yu, Jun; Wang, Meng; Zhang, Luming; Cai, Deng; Li, Xuelong

    2017-12-01

    Low-rank learning has attracted much attention recently due to its efficacy in a rich variety of real-world tasks, e.g., subspace segmentation and image categorization. Most low-rank methods are incapable of capturing low-dimensional subspace for supervised learning tasks, e.g., classification and regression. This paper aims to learn both the discriminant low-rank representation (LRR) and the robust projecting subspace in a supervised manner. To achieve this goal, we cast the problem into a constrained rank minimization framework by adopting the least squares regularization. Naturally, the data label structure tends to resemble that of the corresponding low-dimensional representation, which is derived from the robust subspace projection of clean data by low-rank learning. Moreover, the low-dimensional representation of original data can be paired with some informative structure by imposing an appropriate constraint, e.g., Laplacian regularizer. Therefore, we propose a novel constrained LRR method. The objective function is formulated as a constrained nuclear norm minimization problem, which can be solved by the inexact augmented Lagrange multiplier algorithm. Extensive experiments on image classification, human pose estimation, and robust face recovery have confirmed the superiority of our method.

  9. Generalization Performance of Regularized Ranking With Multiscale Kernels.

    PubMed

    Zhou, Yicong; Chen, Hong; Lan, Rushi; Pan, Zhibin

    2016-05-01

    The regularized kernel method for the ranking problem has attracted increasing attention in machine learning. The previous regularized ranking algorithms are usually based on reproducing kernel Hilbert spaces with a single kernel. In this paper, we go beyond this framework by investigating the generalization performance of the regularized ranking with multiscale kernels. A novel ranking algorithm with multiscale kernels is proposed and its representer theorem is proved. We establish the upper bound of the generalization error in terms of the complexity of hypothesis spaces. It shows that the multiscale ranking algorithm can achieve satisfactory learning rates under mild conditions. Experiments demonstrate the effectiveness of the proposed method for drug discovery and recommendation tasks.

  10. Short communication: Field fertility in Holstein bulls: Can type of breeding strategy (artificial insemination following estrus versus timed artificial insemination) alter service sire fertility?

    PubMed

    Batista, E O S; Vieira, L M; Sá Filho, M F; Carvalho, P D; Rivera, H; Cabrera, V; Wiltbank, M C; Baruselli, P S; Souza, A H

    2016-03-01

    The aim of this study was to compare pregnancy per artificial insemination (P/AI) from service sires used in artificial insemination after estrus detection (EAI) or timed artificial insemination (TAI) breedings. Confirmed artificial insemination outcome records from 3 national data centers were merged and used as a data source. Criteria edits were herd's overall P/AI within 20 and 60%, a minimum of 30 breedings reported per herd-year, service sires that were used in at least 10 different herds with no more than 40% of the breedings performed in a single herd, breeding records from lactating Holstein cows receiving their first to fifth postpartum breedings occurring within 45 to 375 d in milk, and cows with 1 to 5 lactations producing a minimum of 6,804 kg. Initially 1,142,859 breeding records were available for analysis. After editing, a subset of the data (n=857,539) was used to classify breeding codes into either EAI or TAI based on the weekly insemination profile in each individual herd. The procedure HPMIXED of SAS was used and took into account effects of state, farm, cow identification, breeding month, year, parity, days in milk at breeding, and service sire. This model was used independently for the 2 types of breeding codes (EAI vs. TAI), and service sire P/AI rankings within each breeding code were performed for sires with >700 breedings (n=94 sires) and for sires with >1,000 breedings (n=56 sires) following both EAI and TAI. Correlation of service sire fertility rankings following EAI and TAI was performed with PROC CORR of SAS. Correlations between service sire P/AI rankings produced with EAI and TAI were 0.81 (for sires with >700 breedings) and 0.84 (for sires with >1,000 breedings). In addition, important changes occurred in service sire P/AI ranking for EAI and TAI for sires with fewer than 10,000 recorded artificial inseminations. In conclusion, the type of breeding strategy (EAI or TAI) was associated with some changes in service sire P/AI ranking, but ranking changes declined as the number of breedings per service sire increased. Future randomized studies need to explore whether changes in P/AI ranking with EAI versus TAI are due to specific semen characteristics. Copyright © 2016 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  11. Comparison of potential method in analytic hierarchy process for multi-attribute of catering service companies

    NASA Astrophysics Data System (ADS)

    Mamat, Siti Salwana; Ahmad, Tahir; Awang, Siti Rahmah

    2017-08-01

    Analytic Hierarchy Process (AHP) is a method used in structuring, measuring and synthesizing criteria, in particular the ranking of multiple criteria in decision making problems. The Potential Method, on the other hand, is a ranking procedure that utilizes a preference graph ς(V, A). Two nodes are adjacent if they are compared in a pairwise comparison, with the assigned arc oriented towards the more preferred node. In this paper, the Potential Method is used to solve a catering service selection problem, and its result is compared with that of Extent Analysis. The Potential Method is found to produce the same ranking as Extent Analysis in AHP.

  12. System identification using Nuclear Norm & Tabu Search optimization

    NASA Astrophysics Data System (ADS)

    Ahmed, Asif A.; Schoen, Marco P.; Bosworth, Ken W.

    2018-01-01

    In recent years, subspace System Identification (SI) algorithms have seen increased research, stemming from advanced minimization methods being applied to the Nuclear Norm (NN) approach in system identification. These minimization algorithms are based on hard computing methodologies. To the authors' knowledge, no work has yet been reported that utilizes soft computing algorithms to address the minimization problem within the nuclear norm SI framework. A linear, time-invariant, discrete-time system is used in this work as the basic model for characterizing a dynamical system to be identified. The main objective is to extract a mathematical model from collected experimental input-output data. Hankel matrices are constructed from experimental data, and the extended observability matrix is employed to define an estimated output of the system. This estimated output and the actual (measured) output are utilized to construct a minimization problem. An embedded rank measure assures minimum state realization outcomes. Current NN-SI algorithms employ hard computing algorithms for minimization. In this work, we propose a simple Tabu Search (TS) algorithm for minimization. The TS-based SI is compared with NN-SI based on the iterative Alternating Direction Method of Multipliers (ADMM) line search optimization. For comparison, several different benchmark system identification problems are solved by both approaches. Results show improved performance of the proposed SI-TS algorithm compared to the NN-SI ADMM algorithm.

  13. Method and Apparatus for Powered Descent Guidance

    NASA Technical Reports Server (NTRS)

    Acikmese, Behcet (Inventor); Blackmore, James C. L. (Inventor); Scharf, Daniel P. (Inventor)

    2013-01-01

    A method and apparatus for landing a spacecraft having thrusters with non-convex constraints is described. The method first computes a solution to a minimum error landing problem with convexified constraints, then applies that solution to a minimum fuel landing problem, also with convexified constraints. The result is a minimum error, minimum fuel solution that is also feasible for the analogous system with non-convex thruster constraints.

  14. Non-Rigid Structure Estimation in Trajectory Space from Monocular Vision

    PubMed Central

    Wang, Yaming; Tong, Lingling; Jiang, Mingfeng; Zheng, Junbao

    2015-01-01

    In this paper, the problem of non-rigid structure estimation in trajectory space from monocular vision is investigated. Similar to the Point Trajectory Approach (PTA), the structure matrix is calculated by a factorization method, based on characteristic points' trajectories described by a predefined Discrete Cosine Transform (DCT) basis. To further optimize non-rigid structure estimation from monocular vision, a rank minimization problem on the structure matrix is formulated by introducing a basic low-rank condition. Moreover, the Accelerated Proximal Gradient (APG) algorithm is proposed to solve the rank minimization problem, optimizing the initial structure matrix calculated by the PTA method. The APG algorithm converges to efficient solutions quickly and noticeably reduces the reconstruction error. The reconstruction results on real image sequences indicate that the proposed approach runs reliably and effectively improves the accuracy of non-rigid structure estimation from monocular vision. PMID:26473863

  15. Visualizing Internet routing changes.

    PubMed

    Lad, Mohit; Massey, Dan; Zhang, Lixia

    2006-01-01

    Today's Internet provides a global data delivery service to millions of end users and routing protocols play a critical role in this service. It is important to be able to identify and diagnose any problems occurring in Internet routing. However, the Internet's sheer size makes this task difficult. One cannot easily extract out the most important or relevant routing information from the large amounts of data collected from multiple routers. To tackle this problem, we have developed Link-Rank, a tool to visualize Internet routing changes at the global scale. Link-Rank weighs links in a topological graph by the number of routes carried over each link and visually captures changes in link weights in the form of a topological graph with adjustable size. Using Link-Rank, network operators can easily observe important routing changes from massive amounts of routing data, discover otherwise unnoticed routing problems, understand the impact of topological events, and infer root causes of observed routing changes.

  16. Fast Component Pursuit for Large-Scale Inverse Covariance Estimation.

    PubMed

    Han, Lei; Zhang, Yu; Zhang, Tong

    2016-08-01

    The maximum likelihood estimation (MLE) for the Gaussian graphical model, which is also known as the inverse covariance estimation problem, has gained increasing interest recently. Most existing works assume that inverse covariance estimators contain sparse structure and then construct models with the ℓ1 regularization. In this paper, different from existing works, we study the inverse covariance estimation problem from another perspective, by efficiently modeling the low-rank structure in the inverse covariance, which is assumed to be a combination of a low-rank part and a diagonal matrix. One motivation for this assumption is that the low-rank structure is common in many applications, including climate and financial analysis; another is that such an assumption can reduce the computational complexity of computing the inverse. Specifically, we propose an efficient COmponent Pursuit (COP) method to obtain the low-rank part, where each component can be sparse. For optimization, the COP method greedily learns a rank-one component in each iteration by maximizing the log-likelihood. Moreover, the COP algorithm enjoys several appealing properties, including the existence of an efficient solution in each iteration and a theoretical guarantee on the convergence of this greedy approach. Experiments on large-scale synthetic and real-world datasets with thousands of millions of variables show that the COP method is faster than the state-of-the-art techniques for the inverse covariance estimation problem while achieving comparable log-likelihood on test data.

  17. Project Lefty: More Bang for the Search Query

    ERIC Educational Resources Information Center

    Varnum, Ken

    2010-01-01

    This article describes Project Lefty, a search system that, at a minimum, adds a layer on top of traditional federated search tools to make the wait for results more worthwhile for researchers. At best, Project Lefty improves search queries and relevance rankings for web-scale discovery tools to make the results themselves more relevant…

  18. Minimum distance classification in remote sensing

    NASA Technical Reports Server (NTRS)

    Wacker, A. G.; Landgrebe, D. A.

    1972-01-01

    The utilization of minimum distance classification methods in remote sensing problems, such as crop species identification, is considered. Literature concerning both minimum distance classification problems and distance measures is reviewed. Experimental results are presented for several examples. The objective of these examples is to: (a) compare the sample classification accuracy of a minimum distance classifier with the vector classification accuracy of a maximum likelihood classifier, and (b) compare the accuracy of a parametric minimum distance classifier with that of a nonparametric one. Results show the minimum distance classifier performance is 5% to 10% better than that of the maximum likelihood classifier. The nonparametric classifier is only slightly better than the parametric version.
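    The parametric minimum distance classifier compared here amounts to assigning each sample to the class with the nearest mean. A minimal sketch (not the authors' code), assuming Euclidean distance to class means estimated from labeled training pixels:

      import numpy as np

      def train_min_distance(X, y):
          """Estimate one mean vector per class from training samples X (n, d)."""
          classes = np.unique(y)
          means = np.stack([X[y == c].mean(axis=0) for c in classes])
          return classes, means

      def classify_min_distance(X, classes, means):
          """Assign each sample to the class with the nearest mean (Euclidean)."""
          d2 = ((X[:, None, :] - means[None, :, :]) ** 2).sum(axis=2)
          return classes[np.argmin(d2, axis=1)]

    Replacing the Euclidean distance with a Mahalanobis distance using per-class covariances gives another parametric variant; a nonparametric version would estimate distances from empirical distributions instead.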

  19. Bayesian CP Factorization of Incomplete Tensors with Automatic Rank Determination.

    PubMed

    Zhao, Qibin; Zhang, Liqing; Cichocki, Andrzej

    2015-09-01

    CANDECOMP/PARAFAC (CP) tensor factorization of incomplete data is a powerful technique for tensor completion through explicitly capturing the multilinear latent factors. The existing CP algorithms require the tensor rank to be manually specified; however, the determination of tensor rank remains a challenging problem, especially for the CP rank. In addition, existing approaches do not take into account uncertainty information of latent factors, as well as missing entries. To address these issues, we formulate CP factorization using a hierarchical probabilistic model and employ a fully Bayesian treatment by incorporating a sparsity-inducing prior over multiple latent factors and the appropriate hyperpriors over all hyperparameters, resulting in automatic rank determination. To learn the model, we develop an efficient deterministic Bayesian inference algorithm, which scales linearly with data size. Our method is characterized as a tuning parameter-free approach, which can effectively infer underlying multilinear factors with a low-rank constraint, while also providing predictive distributions over missing entries. Extensive simulations on synthetic data illustrate the intrinsic capability of our method to recover the ground-truth CP rank and prevent overfitting, even when a large number of entries are missing. Moreover, the results from real-world applications, including image inpainting and facial image synthesis, demonstrate that our method outperforms state-of-the-art approaches for both tensor factorization and tensor completion in terms of predictive performance.

  20. Rank One Strange Attractors in Periodically Kicked Predator-Prey System with Time-Delay

    NASA Astrophysics Data System (ADS)

    Yang, Wenjie; Lin, Yiping; Dai, Yunxian; Zhao, Huitao

    2016-06-01

    This paper is devoted to the study of rank one strange attractors in a periodically kicked predator-prey system with time-delay. Our discussion is based on the theory of rank one maps formulated by Wang and Young. Firstly, we extend the rank one chaotic theory to delayed systems. It is shown that strange attractors occur when the delayed system undergoes a Hopf bifurcation and encounters an external periodic force. Then we apply the theory to the periodically kicked predator-prey system with delay, deriving the conditions for Hopf bifurcation and rank one chaos, along with the results of numerical simulations.

  1. Locality for quantum systems on graphs depends on the number field

    NASA Astrophysics Data System (ADS)

    Hall, H. Tracy; Severini, Simone

    2013-07-01

    Adapting a definition of Aaronson and Ambainis (2005 Theory Comput. 1 47-79), we call a quantum dynamics on a digraph saturated Z-local if the nonzero transition amplitudes specifying the unitary evolution are in exact correspondence with the directed edges (including loops) of the digraph. This idea appears recurrently in a variety of contexts including angular momentum, quantum chaos, and combinatorial matrix theory. Complete characterization of the digraph properties that allow such a process to exist is a long-standing open question that can also be formulated in terms of minimum rank problems. We prove that saturated Z-local dynamics involving complex amplitudes occur on a proper superset of the digraphs that allow restriction to the real numbers or, even further, the rationals. Consequently, among these fields, complex numbers guarantee the largest possible choice of topologies supporting a discrete quantum evolution. A similar construction separates complex numbers from the skew field of quaternions. The result proposes a concrete ground for distinguishing between complex and quaternionic quantum mechanics.

  2. An improved grey wolf optimizer algorithm for the inversion of geoelectrical data

    NASA Astrophysics Data System (ADS)

    Li, Si-Yu; Wang, Shu-Ming; Wang, Peng-Fei; Su, Xiao-Lu; Zhang, Xin-Song; Dong, Zhi-Hui

    2018-05-01

    The grey wolf optimizer (GWO) is a novel bionics algorithm inspired by the social rank and prey-seeking behaviors of grey wolves. The GWO algorithm is easy to implement because of its basic concept, simple formulas, and small number of parameters. This paper develops a GWO algorithm with a nonlinear convergence factor and an adaptive location updating strategy, and applies this improved grey wolf optimizer (IGWO) algorithm to geophysical inversion problems using magnetotelluric (MT), DC resistivity and induced polarization (IP) methods. Numerical tests in MATLAB 2010b on both forward-modeled data and observed data show that the IGWO algorithm can find the global minimum and rarely gets trapped in local minima. For further study, inversion results using the IGWO are contrasted with those of particle swarm optimization (PSO) and the simulated annealing (SA) algorithm. The comparison reveals that IGWO and PSO perform similarly, and both balance exploration and exploitation better than SA for a given number of iterations.
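    For orientation, the position update of the basic GWO that the IGWO modifies can be sketched as follows; the nonlinear convergence factor and adaptive location updating strategy are paper-specific refinements not reproduced here:

      import numpy as np

      def gwo_step(wolves, fitness, a):
          """One basic grey wolf optimizer step.
          wolves: (n, d) positions; fitness: callable to minimize;
          a: convergence factor, conventionally decreased linearly from 2 to 0."""
          order = np.argsort([fitness(w) for w in wolves])
          alpha, beta, delta = wolves[order[:3]]      # the three best wolves lead the pack
          n, d = wolves.shape
          new = np.empty_like(wolves)
          for i in range(n):
              guides = []
              for leader in (alpha, beta, delta):
                  A = 2 * a * np.random.rand(d) - a   # controls exploration vs. exploitation
                  C = 2 * np.random.rand(d)
                  guides.append(leader - A * np.abs(C * leader - wolves[i]))
              new[i] = np.mean(guides, axis=0)        # move toward the average of the leaders
          return new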

  3. SWIFT-Review: a text-mining workbench for systematic review.

    PubMed

    Howard, Brian E; Phillips, Jason; Miller, Kyle; Tandon, Arpit; Mav, Deepak; Shah, Mihir R; Holmgren, Stephanie; Pelch, Katherine E; Walker, Vickie; Rooney, Andrew A; Macleod, Malcolm; Shah, Ruchir R; Thayer, Kristina

    2016-05-23

    There is growing interest in using machine learning approaches to priority-rank studies and reduce the human burden in screening literature when conducting systematic reviews. In addition, identifying addressable questions during the problem formulation phase of systematic review can be challenging, especially for topics having a large literature base. Here, we assess the performance of the SWIFT-Review priority ranking algorithm for identifying studies relevant to a given research question. We also explore the use of SWIFT-Review during problem formulation to identify, categorize, and visualize research areas that are data rich/data poor within a large literature corpus. Twenty case studies, including 15 public data sets, representing a range of complexity and size, were used to assess the priority ranking performance of SWIFT-Review. For each study, seed sets of manually annotated included and excluded titles and abstracts were used for machine training. The remaining references were then ranked for relevance using an algorithm that considers term frequency and latent Dirichlet allocation (LDA) topic modeling. This ranking was evaluated with respect to (1) the number of studies screened in order to identify 95% of known relevant studies and (2) the "Work Saved over Sampling" (WSS) performance metric. To assess SWIFT-Review for use in problem formulation, PubMed literature search results for 171 chemicals implicated as endocrine-disrupting chemicals (EDCs) were uploaded into SWIFT-Review (264,588 studies) and categorized based on evidence stream and health outcome. Patterns of search results were surveyed and visualized using a variety of interactive graphics. Compared with the reported performance of other tools using the same datasets, the SWIFT-Review ranking procedure obtained the highest scores on 11 of the 15 public datasets. Overall, these results suggest that using machine learning to triage documents for screening has the potential to save, on average, more than 50% of the screening effort ordinarily required when using un-ordered document lists. In addition, the tagging and annotation capabilities of SWIFT-Review can be useful during the activities of scoping and problem formulation. Text-mining and machine learning software such as SWIFT-Review can be valuable tools to reduce the human screening burden and assist in problem formulation.

  4. The phase transition of matrix recovery from Gaussian measurements matches the minimax MSE of matrix denoising.

    PubMed

    Donoho, David L; Gavish, Matan; Montanari, Andrea

    2013-05-21

    Let X_0 be an unknown M by N matrix. In matrix recovery, one takes n < MN linear measurements y_1, …, y_n of X_0, where y_i = Tr(A_i^T X_0) and each A_i is an M by N matrix. A popular approach for matrix recovery is nuclear norm minimization (NNM): solve the convex optimization problem min ||X||_* subject to y_i = Tr(A_i^T X) for all 1 ≤ i ≤ n, where ||·||_* denotes the nuclear norm, namely, the sum of singular values. Empirical work reveals a phase transition curve, stated in terms of the undersampling fraction δ(n,M,N) = n/(MN), rank fraction ρ = rank(X_0)/min{M,N}, and aspect ratio β = M/N. Specifically, when the measurement matrices A_i have independent standard Gaussian random entries, a curve δ*(ρ) = δ*(ρ;β) exists such that, if δ > δ*(ρ), NNM typically succeeds for large M,N, whereas if δ < δ*(ρ), it typically fails. An apparently quite different problem is matrix denoising in Gaussian noise, in which an unknown M by N matrix X_0 is to be estimated based on direct noisy measurements Y = X_0 + Z, where the matrix Z has independent and identically distributed Gaussian entries. A popular matrix denoising scheme solves the unconstrained optimization problem min ||Y − X||_F^2/2 + λ||X||_*. When optimally tuned, this scheme achieves the asymptotic minimax mean-squared error M(ρ;β) = lim_{M,N→∞} inf_λ sup_{rank(X) ≤ ρ·M} MSE(X, X̂_λ), where M/N → β. We report extensive experiments showing that the phase transition δ*(ρ) in the first problem, matrix recovery from Gaussian measurements, coincides with the minimax risk curve M(ρ) = M(ρ;β) in the second problem, matrix denoising in Gaussian noise: δ*(ρ) = M(ρ), for any rank fraction 0 < ρ < 1 (at each common aspect ratio β). Our experiments considered matrices belonging to two constraint classes: real M by N matrices, of various ranks and aspect ratios, and real symmetric positive-semidefinite N by N matrices, of various ranks.
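    The NNM program above is easy to reproduce at small scale with an off-the-shelf convex solver. The following sketch uses the cvxpy library (an assumption of this illustration, not part of the paper) with Gaussian measurement matrices as in the experiments:

      import numpy as np
      import cvxpy as cp

      M, N, n, r = 20, 20, 220, 2
      X0 = np.random.randn(M, r) @ np.random.randn(r, N)    # rank-r ground truth
      A = [np.random.randn(M, N) for _ in range(n)]         # Gaussian measurement matrices
      y = [np.trace(Ai.T @ X0) for Ai in A]

      X = cp.Variable((M, N))
      constraints = [cp.trace(A[i].T @ X) == y[i] for i in range(n)]
      cp.Problem(cp.Minimize(cp.normNuc(X)), constraints).solve()
      print("relative error:", np.linalg.norm(X.value - X0) / np.linalg.norm(X0))

    Sweeping the undersampling fraction δ = n/(MN) and the rank fraction ρ in such a script traces out the empirical phase transition curve δ*(ρ).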

  5. Fuzzy bi-objective linear programming for portfolio selection problem with magnitude ranking function

    NASA Astrophysics Data System (ADS)

    Kusumawati, Rosita; Subekti, Retno

    2017-04-01

    The fuzzy bi-objective linear programming (FBOLP) model is a bi-objective linear programming model over fuzzy numbers, in which the coefficients of the equations are fuzzy numbers. This model is proposed to solve the portfolio selection problem of generating an asset portfolio with the lowest risk and the highest expected return. The FBOLP model, with normal fuzzy numbers for the risk and expected return of stocks, is transformed into a linear programming (LP) model using a magnitude ranking function.

  6. A Gaussian-based rank approximation for subspace clustering

    NASA Astrophysics Data System (ADS)

    Xu, Fei; Peng, Chong; Hu, Yunhong; He, Guoping

    2018-04-01

    Low-rank representation (LRR) has been shown to be successful in seeking low-rank structures of data relationships in a union of subspaces. Generally, LRR and LRR-based variants need to solve nuclear norm-based minimization problems. Beyond the success of such methods, it has been widely noted that the nuclear norm may not be a good rank approximation, because it simply adds all singular values of a matrix together, and thus large singular values may dominate the weight. This results in a far from satisfactory rank approximation and may degrade the performance of low-rank models based on the nuclear norm. In this paper, we propose a novel nonconvex rank approximation based on the Gaussian distribution function, which has desirable properties that make it a better rank approximation than the nuclear norm. A low-rank model is then proposed based on the new rank approximation, with application to motion segmentation. Experimental results show significant improvements and verify the effectiveness of our method.

  7. Learning Incoherent Sparse and Low-Rank Patterns from Multiple Tasks

    PubMed Central

    Chen, Jianhui; Liu, Ji; Ye, Jieping

    2013-01-01

    We consider the problem of learning incoherent sparse and low-rank patterns from multiple tasks. Our approach is based on a linear multi-task learning formulation, in which the sparse and low-rank patterns are induced by a cardinality regularization term and a low-rank constraint, respectively. This formulation is non-convex; we convert it into its convex surrogate, which can be routinely solved via semidefinite programming for small-size problems. We propose to employ the general projected gradient scheme to efficiently solve such a convex surrogate; however, in the optimization formulation, the objective function is non-differentiable and the feasible domain is non-trivial. We present the procedures for computing the projected gradient and ensuring the global convergence of the projected gradient scheme. The computation of the projected gradient involves a constrained optimization problem; we show that the optimal solution to such a problem can be obtained via solving an unconstrained optimization subproblem and a Euclidean projection subproblem. We also present two projected gradient algorithms and analyze their rates of convergence in detail. In addition, we illustrate the use of the presented projected gradient algorithms for the proposed multi-task learning formulation using the least squares loss. Experimental results on a collection of real-world data sets demonstrate the effectiveness of the proposed multi-task learning formulation and the efficiency of the proposed projected gradient algorithms. PMID:24077658

  8. Learning Incoherent Sparse and Low-Rank Patterns from Multiple Tasks.

    PubMed

    Chen, Jianhui; Liu, Ji; Ye, Jieping

    2012-02-01

    We consider the problem of learning incoherent sparse and low-rank patterns from multiple tasks. Our approach is based on a linear multi-task learning formulation, in which the sparse and low-rank patterns are induced by a cardinality regularization term and a low-rank constraint, respectively. This formulation is non-convex; we convert it into its convex surrogate, which can be routinely solved via semidefinite programming for small-size problems. We propose to employ the general projected gradient scheme to efficiently solve such a convex surrogate; however, in the optimization formulation, the objective function is non-differentiable and the feasible domain is non-trivial. We present the procedures for computing the projected gradient and ensuring the global convergence of the projected gradient scheme. The computation of the projected gradient involves a constrained optimization problem; we show that the optimal solution to such a problem can be obtained via solving an unconstrained optimization subproblem and a Euclidean projection subproblem. We also present two projected gradient algorithms and analyze their rates of convergence in detail. In addition, we illustrate the use of the presented projected gradient algorithms for the proposed multi-task learning formulation using the least squares loss. Experimental results on a collection of real-world data sets demonstrate the effectiveness of the proposed multi-task learning formulation and the efficiency of the proposed projected gradient algorithms.

  9. The rank correlated FSK model for prediction of gas radiation in non-uniform media, and its relationship to the rank correlated SLW model

    NASA Astrophysics Data System (ADS)

    Solovjov, Vladimir P.; Webb, Brent W.; Andre, Frederic

    2018-07-01

    Following previous theoretical development based on the assumption of a rank correlated spectrum, the Rank Correlated Full Spectrum k-distribution (RC-FSK) method is proposed. The method proves advantageous for modeling radiation transfer in high temperature gases in non-uniform media in two important ways. First, and perhaps most importantly, the method requires no specification of a reference gas thermodynamic state. Second, the spectral construction of the RC-FSK model is simpler than that of the original correlated FSK models, requiring only two cumulative k-distributions. Further, although not exhaustive, the example problems presented here suggest that the method may also yield improved accuracy relative to prior methods, and may exhibit less sensitivity to the blackbody source temperature used in the model predictions. This paper outlines the theoretical development of the RC-FSK method, comparing its spectral construction with prior correlated spectrum FSK formulations. Further, the RC-FSK model's relationship to the Rank Correlated Spectral Line Weighted-sum-of-gray-gases (RC-SLW) model is defined. The work presents predictions using the Rank Correlated FSK method and previous FSK methods for three different example problems. Line-by-line benchmark predictions are used to assess the accuracy.

  10. A hybrid framework for reservoir characterization using fuzzy ranking and an artificial neural network

    NASA Astrophysics Data System (ADS)

    Wang, Baijie; Wang, Xin; Chen, Zhangxin

    2013-08-01

    Reservoir characterization refers to the process of quantitatively assigning reservoir properties using all available field data. Artificial neural networks (ANNs) have recently been introduced to solve reservoir characterization problems dealing with the complex underlying relationships inherent in well log data. Despite the utility of ANNs, a current limitation is that most existing applications simply implement existing ANN models directly instead of improving/customizing them to fit the specific reservoir characterization task at hand. In this paper, we propose a novel intelligent framework that integrates fuzzy ranking (FR) and multilayer perceptron (MLP) neural networks for reservoir characterization. FR automatically identifies a minimum subset of well log data as neural inputs, and the MLP is trained to learn the complex correlations from the selected well log data to a target reservoir property. FR guarantees the selection of the optimal subset of representative data from the overall well log data set for the characterization of a specific reservoir property, which implicitly improves the modeling and prediction accuracy of the MLP. In addition, a growing number of industrial agencies are implementing geographic information systems (GIS) in field data management; we have therefore designed the GFAR solution (GIS-based FR-ANN Reservoir characterization solution), which integrates the proposed framework into a GIS to provide an efficient characterization solution. Three separate petroleum wells from southwestern Alberta, Canada, were used in the presented case study of reservoir porosity characterization. Our experiments demonstrate that the method can generate reliable results.

  11. Characteristics of good quality pharmaceutical services common to community pharmacies and dispensing general practices.

    PubMed

    Grey, Elisabeth; Harris, Michael; Rodham, Karen; Weiss, Marjorie C

    2016-10-01

    In the United Kingdom, pharmaceutical services can be delivered by both community pharmacies (CPs) and dispensing doctor practices (DPs). Both must adhere to minimum standards set out in NHS regulations; however, no common framework exists to guide quality improvement. Previous phases of this research had developed a set of characteristics indicative of good pharmaceutical service provision. To ask key stakeholders to confirm, and rank the importance of, a set of characteristics of good pharmaceutical service provision. A two-round Delphi-type survey was conducted in south-west England and was sent to participants representing three stakeholder groups: DPs, CPs and patients/lay members. Participants were asked to confirm, and rank, the importance of these characteristics as representing good quality pharmaceutical services. Thirty people were sent the first round survey; 22 participants completed both rounds. Median ratings for the 23 characteristics showed that all were seen to represent important aspects of pharmaceutical service provision. Participants' comments highlighted potential problems with the practicality of the characteristics. Characteristics relating to patient safety were deemed to be the most important and those relating to public health the least important. A set of 23 characteristics for providing good pharmaceutical services in CPs and DPs was developed and attained approval from a sample of stakeholders. With further testing and wider discussion, it is hoped that the characteristics will form the basis of a quality improvement tool for CPs and DPs. © 2016 Royal Pharmaceutical Society.

  12. A selection of giant radio sources from NVSS

    DOE PAGES

    Proctor, D. D.

    2016-06-01

    Results of the application of pattern-recognition techniques to the problem of identifying giant radio sources (GRSs) from the data in the NVSS catalog are presented, and issues affecting the process are explored. Decision-tree pattern-recognition software was applied to training-set source pairs developed from known NVSS large-angular-size radio galaxies. The full training set consisted of 51,195 source pairs, 48 of which were known GRSs for which each lobe was primarily represented by a single catalog component. The source pairs had a maximum separation of 20′ and a minimum component area of 1.87 square arcmin at the 1.4 mJy level. The importance of comparing the resulting probability distributions of the training and application sets for cases of unknown class ratio is demonstrated. The probability of correctly ranking a randomly selected (GRS, non-GRS) pair from the best of the tested classifiers was determined to be 97.8 ± 1.5%. The best classifiers were applied to the over 870,000 candidate pairs from the entire catalog. Images of higher-ranked sources were visually screened, and a table of over 1600 candidates, including morphological annotation, is presented. These systems include doubles and triples, wide-angle tail and narrow-angle tail, S- or Z-shaped systems, and core-jets and resolved cores. In conclusion, while some resolved-lobe systems are recovered with this technique, generally it is expected that such systems would require a different approach.

  13. A finite-state, finite-memory minimum principle, part 2

    NASA Technical Reports Server (NTRS)

    Sandell, N. R., Jr.; Athans, M.

    1975-01-01

    In part 1 of this paper, a minimum principle was found for the finite-state, finite-memory (FSFM) stochastic control problem. In part 2, conditions for the sufficiency of the minimum principle are stated in terms of the informational properties of the problem. This is accomplished by introducing the notion of a signaling strategy. Then a min-H algorithm based on the FSFM minimum principle is presented. This algorithm converges, after a finite number of steps, to a person-by-person extremal solution.

  14. A New Direction of Cancer Classification: Positive Effect of Low-Ranking MicroRNAs.

    PubMed

    Li, Feifei; Piao, Minghao; Piao, Yongjun; Li, Meijing; Ryu, Keun Ho

    2014-10-01

    Many studies based on microRNA (miRNA) expression profiles have shown a new aspect of cancer classification. Because one characteristic of miRNA expression data is its high dimensionality, feature selection methods have been used to facilitate dimensionality reduction. These feature selection methods have had one shortcoming thus far: they only consider problems where the feature-to-class relationship is 1:1 or n:1. However, because one miRNA may influence more than one type of cancer, such miRNAs are ranked low by traditional feature selection methods and are removed most of the time. In view of the limited number of miRNAs, low-ranking miRNAs are also important to cancer classification. We considered both high- and low-ranking features to cover all problems (1:1, n:1, 1:n, and m:n) in cancer classification. First, we used the correlation-based feature selection method to select the high-ranking miRNAs, and chose the support vector machine, Bayes network, decision tree, k-nearest-neighbor, and logistic classifiers to construct cancer classifiers. Then, we chose the Chi-square test, information gain, gain ratio, and Pearson's correlation feature selection methods to build the m:n feature subset, and used the selected miRNAs to perform cancer classification. The low-ranking miRNA expression profiles achieved higher classification accuracy compared with using only the high-ranking miRNAs of traditional feature selection methods. Our results demonstrate that the m:n feature subset shows the positive effect of low-ranking miRNAs in cancer classification.

  15. A fast direct solver for boundary value problems on locally perturbed geometries

    NASA Astrophysics Data System (ADS)

    Zhang, Yabin; Gillman, Adrianna

    2018-03-01

    Many applications, including optimal design and adaptive discretization techniques, involve solving several boundary value problems on geometries that are local perturbations of an original geometry. This manuscript presents a fast direct solver for boundary value problems that are recast as boundary integral equations. The idea is to write the discretized boundary integral equation on a new geometry as a low rank update to the discretized problem on the original geometry. Using the Sherman-Morrison formula, the inverse can be expressed in terms of the inverse of the original system applied to the low rank factors and the right hand side. Numerical results illustrate that, for problems where the perturbation is localized, the fast direct solver is three times faster than building a new solver from scratch.
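    The algebraic identity underlying the solver, expressing the inverse of a low-rank update in terms of the inverse of the original system, can be illustrated with dense matrices (a sketch only; the actual solver applies this to compressed representations of discretized integral operators):

      import numpy as np

      def woodbury_solve(solve_A, U, V, b):
          """Solve (A + U V^T) x = b given a fast solver for A alone.
          solve_A: callable returning A^{-1} Y for a vector or matrix Y;
          U, V: (n, k) low-rank factors encoding the local perturbation."""
          k = U.shape[1]
          AinvU = solve_A(U)                  # k solves with the original system
          Ainvb = solve_A(b)
          S = np.eye(k) + V.T @ AinvU         # small k-by-k capacitance matrix
          return Ainvb - AinvU @ np.linalg.solve(S, V.T @ Ainvb)

    For a dense test, solve_A can simply be lambda Y: np.linalg.solve(A, Y); the point of the fast direct solver is that this step reuses a precomputed factorization of the original geometry.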

  16. Exchange-Hole Dipole Dispersion Model for Accurate Energy Ranking in Molecular Crystal Structure Prediction II: Nonplanar Molecules.

    PubMed

    Whittleton, Sarah R; Otero-de-la-Roza, A; Johnson, Erin R

    2017-11-14

    The crystal structure prediction (CSP) of a given compound from its molecular diagram is a fundamental challenge in computational chemistry with implications in relevant technological fields. A key component of CSP is the method to calculate the lattice energy of a crystal, which allows the ranking of candidate structures. This work is the second part of our investigation to assess the potential of the exchange-hole dipole moment (XDM) dispersion model for crystal structure prediction. In this article, we study the relatively large, nonplanar, mostly flexible molecules in the first five blind tests held by the Cambridge Crystallographic Data Centre. Four of the seven experimental structures are predicted as the energy minimum, and thermal effects are demonstrated to have a large impact on the ranking of at least another compound. As in the first part of this series, delocalization error affects the results for a single crystal (compound X), in this case by detrimentally overstabilizing the π-conjugated conformation of the monomer. Overall, B86bPBE-XDM correctly predicts 16 of the 21 compounds in the five blind tests, a result similar to the one obtained using the best CSP method available to date (dispersion-corrected PW91 by Neumann et al.). Perhaps more importantly, the systems for which B86bPBE-XDM fails to predict the experimental structure as the energy minimum are mostly the same as with Neumann's method, which suggests that similar difficulties (absence of vibrational free energy corrections, delocalization error,...) are not limited to B86bPBE-XDM but affect GGA-based DFT-methods in general. Our work confirms B86bPBE-XDM as an excellent option for crystal energy ranking in CSP and offers a guide to identify crystals (organic salts, conjugated flexible systems) where difficulties may appear.

  17. Using Concept Relations to Improve Ranking in Information Retrieval

    PubMed Central

    Price, Susan L.; Delcambre, Lois M.

    2005-01-01

    Despite improved search engine technology, most searches return numerous documents not directly related to the query. This problem is mitigated if relevant documents appear high on a ranked list of search results. We propose that some queries and the underlying information needs can be modeled as relationships between concepts (relations), and we match relations in queries to relations in documents to try to improve ranking of search results. We investigate four techniques to identify two relationships important in medicine, causes and treats, to improve the ranking of medical text documents relevant to clinical questions about causation and treatment. Preliminary results suggest that identifying relation instances can improve the ranking of search results. PMID:16779114

  18. A ranking algorithm for spacelab crew and experiment scheduling

    NASA Technical Reports Server (NTRS)

    Grone, R. D.; Mathis, F. H.

    1980-01-01

    The problem of obtaining an optimal or near-optimal schedule for scientific experiments to be performed on Spacelab missions is addressed. The current capabilities in this regard are examined, and a method of ranking experiments in order of difficulty is developed to support the existing software. Experimental data are obtained by applying this method to the sets of experiments corresponding to Spacelab missions 1, 2, and 3. Finally, suggestions are made concerning desirable modifications and features of second-generation software being developed for this problem.

  19. Ranking Information in Networks

    NASA Astrophysics Data System (ADS)

    Eliassi-Rad, Tina; Henderson, Keith

    Given a network, we are interested in ranking sets of nodes that score highest on user-specified criteria. For instance in graphs from bibliographic data (e.g. PubMed), we would like to discover sets of authors with expertise in a wide range of disciplines. We present this ranking task as a Top-K problem; utilize fixed-memory heuristic search; and present performance of both the serial and distributed search algorithms on synthetic and real-world data sets.

  20. Evaluating nodes importance in complex network based on PageRank algorithm

    NASA Astrophysics Data System (ADS)

    Li, Kai; He, Yongfeng

    2018-04-01

    To evaluate important nodes in a complex network, and to address problems existing in the traditional PageRank algorithm, we propose a modified PageRank algorithm. The modified algorithm converges when evaluating the importance of dangling (suspended) nodes and of nodes in networks containing directed loops. A simulation example shows the effectiveness of the modified algorithm for evaluating the importance of complex network nodes.
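    For reference, the standard PageRank power iteration that such modifications build on can be written in a few lines. The uniform redistribution of rank from dangling nodes shown here is the textbook remedy, not necessarily the modification proposed in this paper:

      import numpy as np

      def pagerank(adj, d=0.85, tol=1e-10, max_iter=1000):
          """Power-iteration PageRank on a dense adjacency matrix adj (n, n),
          where adj[i, j] = 1 denotes an edge i -> j."""
          n = adj.shape[0]
          out = adj.sum(axis=1)
          P = np.divide(adj, out[:, None],
                        out=np.zeros((n, n)), where=out[:, None] > 0)
          r = np.full(n, 1.0 / n)
          for _ in range(max_iter):
              dangling = r[out == 0].sum() / n          # rank held by nodes with no out-links
              r_new = (1 - d) / n + d * (P.T @ r + dangling)
              if np.abs(r_new - r).sum() < tol:
                  break
              r = r_new
          return r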

  1. Development of Gis Tool for the Solution of Minimum Spanning Tree Problem using Prim's Algorithm

    NASA Astrophysics Data System (ADS)

    Dutta, S.; Patra, D.; Shankar, H.; Alok Verma, P.

    2014-11-01

    A minimum spanning tree (MST) of a connected, undirected and weighted network is a tree of that network consisting of all its nodes, the sum of whose edge weights is minimum among all possible spanning trees of the same network. In this study, we have developed a new GIS tool using the well-known rudimentary algorithm, Prim's algorithm, to construct the minimum spanning tree of a connected, undirected and weighted road network. This algorithm is based on the weight (adjacency) matrix of a weighted network and helps to solve complex network MST problems easily, efficiently and effectively. The selection of an appropriate algorithm is essential, as otherwise it is very hard to obtain an optimal result. In the case of a road transportation network, it is essential to find optimal results by considering all the necessary points based on a cost factor (time or distance). This paper solves the minimum spanning tree problem of a road network by finding its minimum span, considering all the important network junction points. GIS technology is usually used to solve network-related problems such as the optimal path problem, the travelling salesman problem, vehicle routing problems, and location-allocation problems. Therefore, in this study we have developed a customized GIS tool, using a Python script in ArcGIS software, for the solution of the MST problem for the road transportation network of Dehradun city, considering distance and time as the impedance (cost) factors. The tool is user-friendly, requires little prior knowledge of the subject, and gives users access to varied information adapted to their needs. This GIS tool for MST can be applied to a nationwide plan in India, the Pradhan Mantri Gram Sadak Yojana, to provide optimal all-weather road connectivity to unconnected villages (points). The tool is also useful for constructing highways or railways spanning several cities optimally, or for connecting all cities with minimum total road length.
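    Independent of any GIS tooling, the heap-based core of Prim's algorithm is compact; a minimal sketch over a weighted adjacency list (illustrative, not the tool's ArcGIS script):

      import heapq

      def prim_mst(graph, start):
          """graph: dict node -> list of (weight, neighbor) pairs, undirected
          (each edge listed in both directions); returns the MST edge list."""
          visited = {start}
          frontier = [(w, start, v) for w, v in graph[start]]
          heapq.heapify(frontier)
          mst = []
          while frontier and len(visited) < len(graph):
              w, u, v = heapq.heappop(frontier)     # cheapest edge leaving the tree
              if v in visited:
                  continue
              visited.add(v)
              mst.append((u, v, w))
              for w2, x in graph[v]:
                  if x not in visited:
                      heapq.heappush(frontier, (w2, v, x))
          return mst

    For a road network, nodes would be junctions and weights the distance or travel time between them, matching the impedance factors used in the tool.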

  2. Ranking procedure for partial discriminant analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beckman, R.J.; Johnson, M.E.

    1981-09-01

    A rank procedure developed by Broffitt, Randles, and Hogg (1976) is modified to control the conditional probability of misclassification given that classification has been attempted. This modification leads to a useful solution to the two-population partial discriminant analysis problem for even moderately sized training sets.

  3. Assessing Threat Detection Scenarios through Hypothesis Generation and Testing

    DTIC Science & Technology

    2015-12-01

    [Recovered from front matter: Figures 1 and 2, "Rankings of priority threats identified in the Dog Day scenario".] Decision making in uncertain environments relies heavily on pattern matching. Cohen, Freeman, and Wolf (1996) reported that features of the decision problem…

  4. American Prisoners of Japan: Did Rank have Its Privilege?

    DTIC Science & Technology

    …transportation, leadership problems, and overall death rates. The study concludes that there were significant differences in treatment based on rank. These differences caused extremely high enlisted death rates during the first year of captivity. The officers fared worse as a group, however, because the…

  5. Reliability evaluation of hermetic dual in-line flat microcircuit packages

    NASA Technical Reports Server (NTRS)

    Johnson, G. M.; Conaway, L. K.

    1977-01-01

    The relative strengths and weaknesses of 35 commonly used hermetic flat and dual in-line packages were determined and used to rank each of the packages according to a numerical weighting scheme for package attributes. The list of attributes included desirable features in five major areas: lead and lead seal, body construction, body materials, lid and lid seal, and marking. The metal flat pack and the multilayer integral ceramic flat pack and DIP received the highest rankings, and the soft-glass Cerdip and Cerpak types received the lowest rankings. Loss of package hermeticity due to lead and lid seal problems was found to be the predominant failure mode in the literature/data search. However, environmental test results showed that lead and lid seal failures due to thermal stressing were only a problem with the hard-glass (ceramic) body DIP utilizing a metal lid and/or bottom. Insufficient failure data were generated for the other package types tested to correlate test results with the package ranking.

  6. A Novel Riemannian Metric Based on Riemannian Structure and Scaling Information for Fixed Low-Rank Matrix Completion.

    PubMed

    Mao, Shasha; Xiong, Lin; Jiao, Licheng; Feng, Tian; Yeung, Sai-Kit

    2017-05-01

    Riemannian optimization has been widely used to deal with the fixed low-rank matrix completion problem, and the Riemannian metric is a crucial factor in obtaining the search direction in Riemannian optimization. This paper proposes a new Riemannian metric that simultaneously considers the Riemannian geometry structure and the scaling information, and that is smoothly varying and invariant along the equivalence class. The proposed metric can make an effective tradeoff between the Riemannian geometry structure and the scaling information. Essentially, it can be viewed as a generalization of some existing metrics. Based on the proposed Riemannian metric, we also design a Riemannian nonlinear conjugate gradient algorithm that can efficiently solve the fixed low-rank matrix completion problem. Experiments on fixed low-rank matrix completion, collaborative filtering, and image and video recovery illustrate that the proposed method is superior to the state-of-the-art methods in convergence efficiency and numerical performance.

  7. Selection of suitable e-learning approach using TOPSIS technique with best ranked criteria weights

    NASA Astrophysics Data System (ADS)

    Mohammed, Husam Jasim; Kasim, Maznah Mat; Shaharanee, Izwan Nizal Mohd

    2017-11-01

    This paper compares the performance of four rank-based weighting assessment techniques, Rank Sum (RS), Rank Reciprocal (RR), Rank Exponent (RE), and Rank Order Centroid (ROC), on five identified e-learning criteria, in order to select the best weighting method. A total of 35 experts in a public university in Malaysia were asked to rank the criteria and to evaluate five e-learning approaches: blended learning, flipped classroom, ICT-supported face-to-face learning, synchronous learning, and asynchronous learning. The best ranked criteria weights, defined as the weights having the least total absolute difference from the geometric mean of all weights, were then used to select the most suitable e-learning approach with the TOPSIS method. The results show that the RR weights are the best, while the flipped classroom is the most suitable e-learning approach. This paper thus develops a decision framework to aid decision makers (DMs) in choosing the most suitable weighting method for solving MCDM problems.
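    The four rank-based weighting schemes have simple closed forms, and TOPSIS itself is a few lines of linear algebra. The sketch below is illustrative only (not the authors' computation); it assumes all criteria are benefit-type and treats the RE exponent p as a user-chosen dispersion parameter:

      import numpy as np

      def rank_weights(n, method="ROC", p=2):
          """Weights for criteria ranked 1 (most important) through n."""
          i = np.arange(1, n + 1)
          if method == "RS":        # rank sum
              return 2 * (n + 1 - i) / (n * (n + 1))
          if method == "RR":        # rank reciprocal
              return (1 / i) / np.sum(1 / i)
          if method == "RE":        # rank exponent; p is a dispersion parameter
              return (n + 1 - i) ** p / np.sum((n + 1 - i) ** p)
          # ROC: rank order centroid
          return np.array([np.sum(1 / np.arange(k, n + 1)) for k in i]) / n

      def topsis(D, w):
          """D: (alternatives, criteria) benefit-type decision matrix;
          returns relative closeness scores (higher is better)."""
          V = D / np.linalg.norm(D, axis=0) * w      # normalize columns, then weight
          ideal, anti = V.max(axis=0), V.min(axis=0)
          d_plus = np.linalg.norm(V - ideal, axis=1)
          d_minus = np.linalg.norm(V - anti, axis=1)
          return d_minus / (d_plus + d_minus)

    Ranking the five e-learning approaches then reduces to sorting the alternatives by descending closeness score.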

  8. Minimum Altitude-Loss Soaring in a Specified Vertical Wind Distribution

    NASA Technical Reports Server (NTRS)

    Pierson, B. L.; Chen, I.

    1979-01-01

    Minimum altitude-loss flight of a sailplane through a given vertical wind distribution is discussed. The problem is posed as an optimal control problem, and several numerical solutions are obtained for a sinusoidal wind distribution.

  9. Performance evaluation of the inverse dynamics method for optimal spacecraft reorientation

    NASA Astrophysics Data System (ADS)

    Ventura, Jacopo; Romano, Marcello; Walter, Ulrich

    2015-05-01

    This paper investigates the application of the inverse dynamics in the virtual domain method to Euler angles, quaternions, and modified Rodrigues parameters for rapid optimal attitude trajectory generation for spacecraft reorientation maneuvers. The impact of the virtual domain and attitude representation is numerically investigated for both minimum time and minimum energy problems. Owing to the nature of the inverse dynamics method, it yields sub-optimal solutions for minimum time problems. Furthermore, the virtual domain improves the optimality of the solution, but at the cost of more computational time. The attitude representation also affects solution quality and computational speed. For minimum energy problems, the optimal solution can be obtained without the virtual domain with any considered attitude representation.

  10. A finite state, finite memory minimum principle, part 2. [a discussion of game theory, signaling, stochastic processes, and control theory

    NASA Technical Reports Server (NTRS)

    Sandell, N. R., Jr.; Athans, M.

    1975-01-01

    The development of the theory of the finite-state, finite-memory (FSFM) stochastic control problem is discussed. The sufficiency of the FSFM minimum principle (which is in general only a necessary condition) was investigated. By introducing the notion of a signaling strategy, as defined in the literature on games, conditions under which the FSFM minimum principle is sufficient were determined. This result explicitly interconnects the information structure of the FSFM problem with its optimality conditions. The min-H algorithm for the FSFM problem was studied. It is demonstrated that a version of the algorithm always converges to a particular type of local minimum termed a person-by-person extremal.

  11. Fast secant methods for the iterative solution of large nonsymmetric linear systems

    NASA Technical Reports Server (NTRS)

    Deuflhard, Peter; Freund, Roland; Walter, Artur

    1990-01-01

    A family of secant methods based on general rank-1 updates was revisited in view of the construction of iterative solvers for large non-Hermitian linear systems. As it turns out, both Broyden's good and bad update techniques play a special role, but should be associated with two different line search principles. For Broyden's bad update technique, a minimum residual principle is natural, making it theoretically comparable with a series of well-known algorithms like GMRES. Broyden's good update technique, however, is shown to be naturally linked with a minimum next correction principle, which asymptotically mimics a minimum error principle. The two minimization principles differ significantly for sufficiently large system dimension. Numerical experiments on discretized partial differential equations of convection-diffusion type in 2-D with internal layers give a first impression of the possible power of the derived good Broyden variant.
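    Broyden's "good" rank-1 update, the building block revisited here, is easy to state for a general nonlinear (or linear) system; the sketch below omits the line search principles that the paper attaches to each variant:

      import numpy as np

      def broyden_good(F, x0, B0, tol=1e-10, max_iter=100):
          """Solve F(x) = 0 with Broyden's good method: the approximate
          Jacobian B is corrected by a rank-1 update at every step."""
          x, B = np.array(x0, float), np.array(B0, float)
          Fx = F(x)
          for _ in range(max_iter):
              s = np.linalg.solve(B, -Fx)            # quasi-Newton step
              x_new = x + s
              Fx_new = F(x_new)
              if np.linalg.norm(Fx_new) < tol:
                  return x_new
              yv = Fx_new - Fx
              B += np.outer(yv - B @ s, s) / (s @ s) # good Broyden rank-1 update
              x, Fx = x_new, Fx_new
          return x

    The "bad" variant instead updates an approximation to the inverse Jacobian directly; it is this choice that the paper pairs with a minimum residual line search.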

  12. Classification of singularities in the problem of motion of the Kovalevskaya top in a double force field

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ryabov, Pavel E; Kharlamov, Mikhail P

    2012-02-28

    The problem of motion of the Kovalevskaya top in a double force field is investigated (the integrable case of A.G. Reyman and M.A. Semenov-Tian-Shansky without a gyrostatic momentum). It is a completely integrable Hamiltonian system with three degrees of freedom not reducible to a family of systems with two degrees of freedom. The critical set of the integral map is studied. The critical subsystems and bifurcation diagrams are described. The classification of all nondegenerate critical points is given. The set of these points consists of equilibria (nondegenerate singularities of rank 0), of singular periodic motions (nondegenerate singularities of rank 1), and of critical two-frequency motions (nondegenerate singularities of rank 2). Bibliography: 32 titles.

  13. Probabilistic Low-Rank Multitask Learning.

    PubMed

    Kong, Yu; Shao, Ming; Li, Kang; Fu, Yun

    2018-03-01

    In this paper, we consider the problem of learning multiple related tasks simultaneously with the goal of improving the generalization performance of individual tasks. The key challenge is to effectively exploit the shared information across multiple tasks as well as preserve the discriminative information for each individual task. To address this, we propose a novel probabilistic model for multitask learning (MTL) that can automatically balance between low-rank and sparsity constraints. The former assumes a low-rank structure of the underlying predictive hypothesis space to explicitly capture the relationship of different tasks and the latter learns the incoherent sparse patterns private to each task. We derive and perform inference via variational Bayesian methods. Experimental results on both regression and classification tasks on real-world applications demonstrate the effectiveness of the proposed method in dealing with the MTL problems.

  14. Brain tumor segmentation from multimodal magnetic resonance images via sparse representation.

    PubMed

    Li, Yuhong; Jia, Fucang; Qin, Jing

    2016-10-01

    Accurately segmenting and quantifying brain gliomas from magnetic resonance (MR) images remains a challenging task because of the large spatial and structural variability among brain tumors. To develop a fully automatic and accurate brain tumor segmentation algorithm, we present a probabilistic model of multimodal MR brain tumor segmentation. This model combines sparse representation and the Markov random field (MRF) to solve the spatial and structural variability problem. We formulate the tumor segmentation problem as a multi-classification task by labeling each voxel as the maximum posterior probability. We estimate the maximum a posteriori (MAP) probability by introducing the sparse representation into a likelihood probability and a MRF into the prior probability. Considering the MAP as an NP-hard problem, we convert the maximum posterior probability estimation into a minimum energy optimization problem and employ graph cuts to find the solution to the MAP estimation. Our method is evaluated using the Brain Tumor Segmentation Challenge 2013 database (BRATS 2013) and obtained Dice coefficient metric values of 0.85, 0.75, and 0.69 on the high-grade Challenge data set, 0.73, 0.56, and 0.54 on the high-grade Challenge LeaderBoard data set, and 0.84, 0.54, and 0.57 on the low-grade Challenge data set for the complete, core, and enhancing regions. The experimental results show that the proposed algorithm is valid and ranks 2nd compared with the state-of-the-art tumor segmentation algorithms in the MICCAI BRATS 2013 challenge. Copyright © 2016 Elsevier B.V. All rights reserved.

  15. Fast and accurate matrix completion via truncated nuclear norm regularization.

    PubMed

    Hu, Yao; Zhang, Debing; Ye, Jieping; Li, Xuelong; He, Xiaofei

    2013-09-01

    Recovering a large matrix from a small subset of its entries is a challenging problem arising in many real applications, such as image inpainting and recommender systems. Many existing approaches formulate this problem as a general low-rank matrix approximation problem. Since the rank operator is nonconvex and discontinuous, most of the recent theoretical studies use the nuclear norm as a convex relaxation. One major limitation of the existing approaches based on nuclear norm minimization is that all the singular values are simultaneously minimized, and thus the rank may not be well approximated in practice. In this paper, we propose to achieve a better approximation to the rank of a matrix with the truncated nuclear norm, which is given by the nuclear norm minus the sum of the largest few singular values. In addition, we develop a novel matrix completion algorithm by minimizing the truncated nuclear norm. We further develop three efficient iterative procedures, TNNR-ADMM, TNNR-APGL, and TNNR-ADMMAP, to solve the optimization problem. TNNR-ADMM utilizes the alternating direction method of multipliers (ADMM), while TNNR-APGL applies the accelerated proximal gradient line search method (APGL) for the final optimization. For TNNR-ADMMAP, we make use of an adaptive penalty according to a novel update rule for ADMM to achieve a faster convergence rate. Our empirical study shows encouraging results for the proposed algorithms in comparison to state-of-the-art matrix completion algorithms on both synthetic and real visual datasets.
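    The proximal building block shared by such procedures is singular value shrinkage, the closed-form minimizer of the nuclear norm term in each iteration. A sketch of it, together with the truncated nuclear norm itself (illustrative, not the authors' implementation):

      import numpy as np

      def svd_shrink(Y, tau):
          """Proximal operator of tau * ||X||_*: soft-threshold singular values."""
          U, s, Vt = np.linalg.svd(Y, full_matrices=False)
          return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

      def truncated_nuclear_norm(X, r):
          """Sum of all singular values except the largest r: the objective
          minimized by the TNNR procedures."""
          s = np.linalg.svd(X, compute_uv=False)
          return s[r:].sum()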

  16. A Case-Based Reasoning Method with Rank Aggregation

    NASA Astrophysics Data System (ADS)

    Sun, Jinhua; Du, Jiao; Hu, Jian

    2018-03-01

    In order to improve the accuracy of case-based reasoning (CBR), this paper proposes a new CBR framework built on the basic principle of rank aggregation. First, cases are ranked in each attribute subspace, giving an ordering relation between cases on each attribute and hence a ranking matrix. Second, the similar-case retrieval process over the ranking matrix is transformed into a rank aggregation optimization problem, using the Kemeny optimal aggregation. On this basis, a rank aggregation case-based reasoning algorithm, named RA-CBR, is designed. Experimental results on UCI data sets show that the case retrieval accuracy of RA-CBR is higher than that of Euclidean distance CBR and Mahalanobis distance CBR, so we can conclude that the RA-CBR method can increase the performance and efficiency of CBR.
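    Kemeny-optimal aggregation, the optimization at the heart of the retrieval step, seeks the ranking minimizing the total Kendall tau distance to the attribute rankings. A brute-force sketch (feasible only for small case bases, since exact Kemeny aggregation is NP-hard in general; not the paper's algorithm):

      from itertools import permutations

      def kendall_tau(r1, r2):
          """Count item pairs ordered differently by rankings r1 and r2."""
          pos1 = {v: i for i, v in enumerate(r1)}
          pos2 = {v: i for i, v in enumerate(r2)}
          items = list(r1)
          return sum(1 for a in range(len(items)) for b in range(a + 1, len(items))
                     if (pos1[items[a]] - pos1[items[b]])
                        * (pos2[items[a]] - pos2[items[b]]) < 0)

      def kemeny_aggregate(rankings):
          """Exhaustively find the ranking closest (in total Kendall tau
          distance) to all input rankings."""
          items = rankings[0]
          return min(permutations(items),
                     key=lambda cand: sum(kendall_tau(cand, r) for r in rankings))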

  17. Delimitation of homogeneous regions in the UNIFESP/EPM healthcare center coverage area based on sociodemographic indicators.

    PubMed

    Harada, K Y; Silva, J G; Schenkman, S; Hayama, E T; Santos, F R; Prado, M C; Pontes, R H

    1999-01-07

    Drawing up public health action plans that address the true needs of the population would increase the chances of effectiveness and decrease unnecessary expenses. To identify homogeneous regions in the UNIFESP/EPM healthcare center (HCC) coverage area based on sociodemographic indicators and to relate them to causes of death in 1995. Secondary data analysis. HCC coverage area; primary care. Sociodemographic indicators were obtained from special tabulations of the Demographic Census of 1991: proportion of children and elderly in the population; family providers' education level (maximum: > 15 years, minimum: < 1 year) and income level (maximum: > 20 minimum wages, minimum: < 1 minimum wage); proportional mortality distribution. The maximum income permitted the construction of four homogeneous regions, according to income ranking. Although the proportions of children and of elderly did not vary significantly among the regions, minimum income and education showed a statistically significant (p < 0.05) difference between the first region (least affluent) and the others. A clear trend of increasing maximum education was observed across the regions. Mortality also differed in the first region, with deaths caused by possibly preventable infections. The inequalities observed may contribute to primary health prevention.

  18. A new fast algorithm for solving the minimum spanning tree problem based on DNA molecules computation.

    PubMed

    Wang, Zhaocai; Huang, Dongmei; Meng, Huajun; Tang, Chengpei

    2013-10-01

    The minimum spanning tree (MST) problem is to find a minimum-weight connected edge subset containing all the vertices of a given undirected graph. It is a vitally important problem in graph theory and applied mathematics, with numerous real-life applications. Moreover, in previous studies, DNA molecular operations were usually used to solve NP-complete head-to-tail path search problems, and rarely for problems whose solutions are multi-lateral path structures, such as the minimum spanning tree problem. In this paper, we present a new fast DNA algorithm for solving the MST problem using DNA molecular operations. For an undirected graph with n vertices and m edges, we design flexible-length DNA strands representing the vertices and edges, take appropriate steps, and obtain the solutions of the MST problem in a proper length range and O(3m+n) time complexity. We extend the application of DNA molecular operations and simultaneously simplify the complexity of the computation. Results of computer simulation experiments show that the proposed method updates some of the best known values in very short time and provides better solution accuracy than existing algorithms. Copyright © 2013 The Authors. Published by Elsevier Ireland Ltd.. All rights reserved.

  19. Faust Goes to College.

    ERIC Educational Resources Information Center

    Gilley, J. Wade

    1992-01-01

    Rankings of colleges and universities in the popular press have two problems: (1) they are gimmicks to sell publications; and (2) institutions have become pawns, juggling numbers in quest of higher rankings, the ethical equivalent of cheating. Higher education must return to truth, fairness, and honesty to regain its purpose and integrity. (MSE)

  20. Playing the Rankings Game

    ERIC Educational Resources Information Center

    Farrell, Elizabeth F.; Van Der Werf, Martin

    2007-01-01

    While some colleges claim not to care what "U.S. News & World Report" says, and experts cite problems in the way its annual rankings are done, many institutions scramble to improve their positions. There are well-documented examples of institutions that have solicited nominal donations from alumni to boost their percentage of giving, encouraged…

  1. Efficiently Ranking Hypotheses in Machine Learning

    NASA Technical Reports Server (NTRS)

    Chien, Steve

    1997-01-01

    This paper considers the problem of learning the ranking of a set of alternatives based upon incomplete information (e.g. a limited number of observations). At each decision cycle, the system can output a complete ordering on the hypotheses or decide to gather additional information (e.g. observation) at some cost.

  2. A new approach for minimum phase output definition

    NASA Astrophysics Data System (ADS)

    Jahangiri, Fatemeh; Talebi, Heidar Ali; Menhaj, Mohammad Bagher; Ebenbauer, Christian

    2017-01-01

    This paper presents a novel method for output redefinition for linear systems. The approach also determines possible relative degrees for the systems corresponding to any new output vector. To guarantee the minimum phase property with a prescribed relative degree, a set of new conditions is introduced. A key feature of these conditions is that no transformation of any form is needed, which makes the scheme suitable for optimisation problems in control that must ensure the minimum phase property. Moreover, the results are useful for sensor placement problems and for obtaining minimum phase approximations of non-minimum phase systems. Numerical examples, including an example of unmanned aerial vehicle systems, are given to demonstrate the effectiveness of the methodology.

  3. An Optimal Schedule for Urban Road Network Repair Based on the Greedy Algorithm

    PubMed Central

    Lu, Guangquan; Xiong, Ying; Wang, Yunpeng

    2016-01-01

    The schedule of urban road network recovery caused by rainstorms, snow, and other bad weather conditions, traffic incidents, and other daily events is essential. However, limited studies have been conducted to investigate this problem. We fill this research gap by proposing an optimal schedule for urban road network repair with limited repair resources based on the greedy algorithm. Critical links are given priority in repair according to the basic concept of the greedy algorithm. In this study, we define the critical link of the current network as the link whose restoration produces the minimum ratio of the system-wide travel time of the current network to that of the worst network. We re-evaluate the importance of damaged links after each repair process is completed; that is, the critical link ranking changes along with the repair process because of the interaction among links. We repair the most critical link for the specific network state based on the greedy algorithm to obtain the optimal schedule. The algorithm can still quickly obtain an optimal schedule even if the scale of the road network is large, because the greedy algorithm reduces computational complexity. We prove that the problem can be solved optimally using the greedy algorithm in theory. The algorithm is also demonstrated on the Sioux Falls network. The problem discussed in this paper is highly significant in dealing with urban road network restoration. PMID:27768732
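
    A compact sketch of that greedy loop follows. Here `travel_time` stands in for a full traffic-assignment evaluation of the network with a given set of links still damaged; it is an assumed callable rather than anything specified in the paper.

```python
def greedy_repair_schedule(damaged, travel_time):
    """Greedy repair order: at each step repair the link whose restoration
    gives the lowest network-wide travel time (the 'critical link').

    damaged     : set of damaged link ids
    travel_time : callable mapping the set of still-damaged links to a
                  system-wide travel time (a traffic-assignment stand-in)
    """
    remaining, schedule = set(damaged), []
    while remaining:
        best = min(remaining, key=lambda e: travel_time(remaining - {e}))
        schedule.append(best)       # repair the current critical link
        remaining.remove(best)      # then re-evaluate the rest
    return schedule
```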

  4. Riverine Bathymetry Imaging with Indirect Observations

    NASA Astrophysics Data System (ADS)

    Farthing, M.; Lee, J. H.; Ghorbanidehno, H.; Hesser, T.; Darve, E. F.; Kitanidis, P. K.

    2017-12-01

    Bathymetry, i.e., depth, imaging in a river is of crucial importance for shipping operations and flood management. With advancements in sensor technology and computational resources, various types of indirect measurements can be used to estimate high-resolution riverbed topography. In particular, the use of surface velocity measurements has been actively investigated recently, since they are easy to acquire at low cost in all river conditions and surface velocities are sensitive to the river depth. In this work, we image riverbed topography using depth-averaged quasi-steady velocity observations related to the topography through the 2D shallow water equations (SWE). The principal component geostatistical approach (PCGA), a fast and scalable variational inverse modeling method powered by a low-rank representation of the covariance matrix structure, is presented and applied to two "twin" riverine bathymetry identification problems. To compare the efficiency and effectiveness of the proposed method, an ensemble-based approach is also applied to the test problems. Results demonstrate that PCGA is superior to the ensemble-based approach in terms of computational effort and accuracy. In particular, the results obtained from PCGA capture small-scale bathymetry features irrespective of the initial guess through the successive linearization of the forward model. Analysis of the direct survey data of the riverine bathymetry used in one of the test problems shows an efficient, parsimonious choice of the solution basis in PCGA, so that the number of numerical model runs used to achieve the inversion results is close to the minimum number that reconstructs the underlying bathymetry.

  5. Generalized Higher Order Orthogonal Iteration for Tensor Learning and Decomposition.

    PubMed

    Liu, Yuanyuan; Shang, Fanhua; Fan, Wei; Cheng, James; Cheng, Hong

    2016-12-01

    Low-rank tensor completion (LRTC) has successfully been applied to a wide range of real-world problems. Despite the broad, successful applications, existing LRTC methods may become very slow or even not applicable for large-scale problems. To address this issue, a novel core tensor trace-norm minimization (CTNM) method is proposed for simultaneous tensor learning and decomposition, and has a much lower computational complexity. In our solution, first, the equivalence relation of trace norm of a low-rank tensor and its core tensor is induced. Second, the trace norm of the core tensor is used to replace that of the whole tensor, which leads to two much smaller scale matrix TNM problems. Finally, an efficient alternating direction augmented Lagrangian method is developed to solve our problems. Our CTNM formulation needs only O((R^N + NRI) log(√(I^N))) observations to reliably recover an Nth-order I×I×…×I tensor of n-rank (r, r, …, r), compared with the O(rI^(N-1)) observations required by those tensor TNM methods (I > R ≥ r). Extensive experimental results show that CTNM is usually more accurate than them, and is orders of magnitude faster.

  6. Thread Graphs, Linear Rank-Width and Their Algorithmic Applications

    NASA Astrophysics Data System (ADS)

    Ganian, Robert

    The introduction of tree-width by Robertson and Seymour [7] was a breakthrough in the design of graph algorithms. A lot of research since then has focused on obtaining a width measure which would be more general and still allow efficient algorithms for a wide range of NP-hard problems on graphs of bounded width. To this end, Oum and Seymour have proposed rank-width, which allows the solution of many such hard problems on less restricted graph classes (see e.g. [3,4]). But what about problems which are NP-hard even on graphs of bounded tree-width or even on trees? The parameter used most often for these exceptionally hard problems is path-width; however, it is extremely restrictive: for example, the graphs of path-width 1 are exactly paths.

  7. Modification of Prim’s algorithm on complete broadcasting graph

    NASA Astrophysics Data System (ADS)

    Dairina; Arif, Salmawaty; Munzir, Said; Halfiani, Vera; Ramli, Marwan

    2017-09-01

    Broadcasting is the dissemination of information from one object to others in a network through communication between pairs of objects. Broadcasting to n objects can be accomplished with n − 1 communications and a minimum number of time units of ⌈log₂ n⌉. In this paper, weighted graph broadcasting is considered, and the minimum weight of a complete broadcasting graph is determined. A broadcasting graph is said to be complete if every pair of vertices is connected. Determining the minimum weight of a complete broadcasting graph is thus equivalent to determining a minimum spanning tree of a complete graph. Kruskal's and Prim's algorithms are used to determine the minimum weight of a complete broadcasting graph regardless of the ⌈log₂ n⌉ minimum-time constraint, and a modified Prim's algorithm is developed for the problem with the ⌈log₂ n⌉ constraint. As an example case, the training-of-trainers problem is solved using these algorithms.
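
    For reference, a standard heap-based Prim's algorithm (the unmodified version, without the ⌈log₂ n⌉ time-unit constraint) can be sketched as follows; the example graph and its weights are made up.

```python
import heapq

def prim_mst(adj, start=0):
    """Prim's algorithm on an adjacency list {u: [(weight, v), ...]}.
    Returns the MST edges and their total weight."""
    visited, mst, total = {start}, [], 0
    heap = [(w, start, v) for w, v in adj[start]]
    heapq.heapify(heap)
    while heap and len(visited) < len(adj):
        w, u, v = heapq.heappop(heap)
        if v in visited:
            continue
        visited.add(v)
        mst.append((u, v, w))
        total += w
        for w2, x in adj[v]:
            if x not in visited:
                heapq.heappush(heap, (w2, v, x))
    return mst, total

# Complete broadcasting graph on 4 vertices (weights = communication costs)
adj = {
    0: [(1, 1), (4, 2), (3, 3)],
    1: [(1, 0), (2, 2), (5, 3)],
    2: [(4, 0), (2, 1), (6, 3)],
    3: [(3, 0), (5, 1), (6, 2)],
}
print(prim_mst(adj))   # minimum total weight of the broadcasting tree
```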

  8. The problem of genesis and systematic of sedimentary units of hydrocarbon reservoirs

    NASA Astrophysics Data System (ADS)

    Zhilina, E. N.; Chernova, O. S.

    2017-12-01

    The problem of identifying and ranking sedimentation units, facies associations, and their constituent parts - lithogenetic types of sedimentary rocks - is considered. As a basis for paleo-sedimentary modelling, the authors have developed a classification for terrigenous natural reservoirs that, for the first time, links separate sedimentological units into a single hierarchical system. The hierarchy ranking levels are based on a compilation of global knowledge and experience in sediment geology, sedimentological study and systematization, and data from deep-well cores representing Jurassic hydrocarbon-bearing formations of the southeastern margin of the Western Siberian sedimentary basin.

  9. Distance estimation and collision prediction for on-line robotic motion planning

    NASA Technical Reports Server (NTRS)

    Kyriakopoulos, K. J.; Saridis, G. N.

    1991-01-01

    An efficient method for computing the minimum distance and predicting collisions between moving objects is presented. This problem has been incorporated in the framework of an on-line motion planning algorithm to satisfy collision avoidance between a robot and moving objects modeled as convex polyhedra. First, the deterministic problem, where the information about the objects is assumed to be certain, is examined. If the L1 or L∞ norm is used instead of the Euclidean norm to represent distance, the problem becomes a linear programming problem. The stochastic problem is then formulated, where the uncertainty is induced by sensing and the unknown dynamics of the moving obstacles. Two problems are considered: (1) filtering of the minimum distance between the robot and the moving object at the present time; and (2) prediction of the minimum distance in the future, in order to predict possible collisions with the moving obstacles and estimate the collision time.
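
    The linear-programming reduction for the L1 case can be sketched directly with scipy: minimize the sum of auxiliary variables t bounding |x − y| componentwise, subject to each point lying in its polyhedron. The polyhedra below are illustrative unit squares, and the whole formulation is a sketch of the idea rather than the paper's exact LP.

```python
import numpy as np
from scipy.optimize import linprog

def l1_distance(A1, b1, A2, b2):
    """Minimum L1 distance between polyhedra {x: A1 x <= b1} and
    {y: A2 y <= b2}, posed as the LP
        min sum(t)  s.t.  -t <= x - y <= t,  A1 x <= b1,  A2 y <= b2."""
    n = A1.shape[1]
    I = np.eye(n)
    Z1, Z2 = np.zeros_like(A1), np.zeros_like(A2)
    A_ub = np.block([
        [ I, -I, -I],          #  x - y - t <= 0
        [-I,  I, -I],          # -x + y - t <= 0
        [A1, Z1, Z1],          # x inside the first polyhedron
        [Z2, A2, Z2],          # y inside the second polyhedron
    ])
    b_ub = np.concatenate([np.zeros(2 * n), b1, b2])
    c = np.concatenate([np.zeros(2 * n), np.ones(n)])   # minimize sum(t)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * (3 * n))
    return res.fun

# Two unit squares, [0,1]^2 and [2,3]^2 -> L1 distance 1 + 1 = 2
A = np.vstack([np.eye(2), -np.eye(2)])
print(l1_distance(A, np.array([1, 1, 0, 0]),
                  A, np.array([3, 3, -2, -2])))
```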

  10. An Electromagnetically-Controlled Precision Orbital Tracking Vehicle (POTV)

    DTIC Science & Technology

    1992-12-01

    assume that C > B > A. Then θ1(t) is purely sinusoidal. θ2(t) is also sinusoidal because the forcing function z(t) is sinusoidal. θ3(t) is more...an unpredictable manner. The problem arises from the rank deficiency of the G input matrix as shown below. Remember we have shown already that its...rank can never exceed five because rows two, four, and six are linearly dependent. The rank deficiency arises from the "translational part" of the input

  11. Plus Disease in Retinopathy of Prematurity: Improving Diagnosis by Ranking Disease Severity and Using Quantitative Image Analysis.

    PubMed

    Kalpathy-Cramer, Jayashree; Campbell, J Peter; Erdogmus, Deniz; Tian, Peng; Kedarisetti, Dharanish; Moleta, Chace; Reynolds, James D; Hutcheson, Kelly; Shapiro, Michael J; Repka, Michael X; Ferrone, Philip; Drenser, Kimberly; Horowitz, Jason; Sonmez, Kemal; Swan, Ryan; Ostmo, Susan; Jonas, Karyn E; Chan, R V Paul; Chiang, Michael F

    2016-11-01

    To determine expert agreement on relative retinopathy of prematurity (ROP) disease severity and whether computer-based image analysis can model relative disease severity, and to propose consideration of a more continuous severity score for ROP. We developed 2 databases of clinical images of varying disease severity (100 images and 34 images) as part of the Imaging and Informatics in ROP (i-ROP) cohort study and recruited expert physician, nonexpert physician, and nonphysician graders to classify and perform pairwise comparisons on both databases. Six participating expert ROP clinician-scientists, each with a minimum of 10 years of clinical ROP experience and 5 ROP publications, and 5 image graders (3 physicians and 2 nonphysician graders) who analyzed images that were obtained during routine ROP screening in neonatal intensive care units. Images in both databases were ranked by average disease classification (classification ranking), by pairwise comparison using the Elo rating method (comparison ranking), and by correlation with the i-ROP computer-based image analysis system. Interexpert agreement (weighted κ statistic) compared with the correlation coefficient (CC) between experts on pairwise comparisons and correlation between expert rankings and computer-based image analysis modeling. There was variable interexpert agreement on diagnostic classification of disease (plus, preplus, or normal) among the 6 experts (mean weighted κ, 0.27; range, 0.06-0.63), but good correlation between experts on comparison ranking of disease severity (mean CC, 0.84; range, 0.74-0.93) on the set of 34 images. Comparison ranking provided a severity ranking that was in good agreement with ranking obtained by classification ranking (CC, 0.92). Comparison ranking on the larger dataset by both expert and nonexpert graders demonstrated good correlation (mean CC, 0.97; range, 0.95-0.98). The i-ROP system was able to model this continuous severity with good correlation (CC, 0.86). Experts diagnose plus disease on a continuum, with poor absolute agreement on classification but good relative agreement on disease severity. These results suggest that the use of pairwise rankings and a continuous severity score, such as that provided by the i-ROP system, may improve agreement on disease severity in the future. Copyright © 2016 American Academy of Ophthalmology. Published by Elsevier Inc. All rights reserved.
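
    The Elo method mentioned for the comparison ranking updates two ratings after every pairwise judgment. A minimal sketch with conventional chess-style constants (K = 32, base 400, initial rating 1500; these are illustrative values, not parameters reported by the study):

```python
def elo_rank(images, comparisons, k=32, base=400.0, init=1500.0):
    """Rank items from pairwise comparisons with the standard Elo update.

    comparisons : iterable of (winner, loser) pairs, where the 'winner'
                  was judged to show more severe disease.
    """
    rating = {im: init for im in images}
    for winner, loser in comparisons:
        # expected score of the winner given the current rating gap
        expect_w = 1.0 / (1.0 + 10 ** ((rating[loser] - rating[winner]) / base))
        rating[winner] += k * (1.0 - expect_w)
        rating[loser] -= k * (1.0 - expect_w)
    return sorted(rating, key=rating.get, reverse=True)

print(elo_rank(["a", "b", "c"], [("a", "b"), ("a", "c"), ("b", "c")]))
# -> ['a', 'b', 'c']: most severe first
```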

  12. Job strain, rank, and mental health in the UK Armed Forces.

    PubMed

    Fear, Nicola Townsend; Rubin, G James; Hatch, Stephani; Hull, Lisa; Jones, Margaret; Hotopf, Matthew; Wessely, Simon; Rona, Roberto J

    2009-01-01

    We assessed whether job demand and job control have independent effects on psychological symptoms or whether job control modifies effect of job demand; we also assessed whether rank modified associations between job strain and psychological symptoms. We used the Post Traumatic Stress Disorder (PTSD) Checklist (PCL-C), General Health Questionnaire-12 (GHQ-12), Chalder Fatigue Scale, a checklist of 53 physical symptoms, and the WHO's Alcohol Use Disorders Identification Test (AUDIT). Job control, job demand, and rank were independently associated with PTSD, common mental disorders, multiple physical symptoms, and fatigue, but not with severe alcohol problems. Job control and demand had additive effects on psychological symptoms. Commissioned officers had lower risk of caseness for psychological symptoms than other ranks. Adjustment for rank had negligible effect on level of association between job strain and psychological symptoms. Reported job strain and rank contributed independently to psychological symptoms.

  13. Resident selection: how we are doing and why?

    PubMed

    Thordarson, David B; Ebramzadeh, Edward; Sangiorgio, Sophia N; Schnall, Stephen B; Patzakis, Michael J

    2007-06-01

    Selection of the best applicants for orthopaedic residency programs remains a difficult problem. Most quantifiable factors for residency selection evaluate test-taking ability and grades rather than other aspects, such as patient care, professionalism, moral reasoning, and integrity. Four current department members on our resident selection committee ranked four consecutive classes of orthopaedic residents interviewed for residency. We ranked incoming residents in order of best to least qualified and compared those rankings with rank lists by the same faculty on completion of residency. Rankings also were compared with the residents' United States Medical Licensing Examination (USMLE) Part I scores, American Board of Orthopaedic Surgery (ABOS) Part I scores, and fourth-year Orthopaedic-in-Training Examination (OITE) scores. We found fair or poor correlations between the residents' initial rankings, rankings on graduation, and their USMLE, ABOS, and OITE scores. The only relatively strong correlation found was between the OITE and ABOS scores. Despite the faculty's consensus regarding selection criteria, interviewers did not agree in their rankings of residents on graduation. Additional work is necessary to refine the inexact yet important science of selecting residency applicants.

  14. Exploring the Pattern of Links between Chinese University Web Sites.

    ERIC Educational Resources Information Center

    Tang, Rong; Thelwall, Mike

    2002-01-01

    Compares links between 76 Chinese university Web sites with ranks obtained from the NetBig lists, using a specialized Web crawler to collect data. Provides a background to the higher education system in mainland China, describes the NetBig ranking scheme, and explains Web site crawling problems encountered. (Author/LRW)

  15. You Cannot Judge a Book by Its Cover: The Problems with Journal Rankings

    ERIC Educational Resources Information Center

    Sangster, Alan

    2015-01-01

    Journal rankings lists have impacted and are impacting accounting educators and accounting education researchers around the world. Nowhere is the impact positive. It ranges from slight constraints on academic freedom to admonition, censure, reduced research allowances, non-promotion, non-short-listing for jobs, increased teaching loads, and…

  16. RANWAR: rank-based weighted association rule mining from gene expression and methylation data.

    PubMed

    Mallik, Saurav; Mukhopadhyay, Anirban; Maulik, Ujjwal

    2015-01-01

    Ranking of association rules is currently an interesting topic in data mining and bioinformatics. The huge number of rules over items (or genes) produced by association rule mining (ARM) algorithms confuses the decision maker. In this article, we propose a weighted rule-mining technique, RANWAR (rank-based weighted association rule mining), that ranks the rules using two novel rule-interestingness measures, viz., the rank-based weighted condensed support (wcs) and weighted condensed confidence (wcc) measures, to bypass this problem. These measures depend on the ranks of the items (genes); using the rank, we assign a weight to each item. RANWAR generates far fewer frequent itemsets than state-of-the-art association rule mining algorithms, and thus reduces execution time. We run RANWAR on gene expression and methylation datasets. The genes of the top rules are biologically validated by Gene Ontology (GO) and KEGG pathway analyses. Many top-ranked rules extracted by RANWAR that hold poor ranks in traditional Apriori are highly biologically significant to the related diseases. Finally, the top rules produced by RANWAR that are absent from Apriori's output are reported.

  17. Optimal heliocentric trajectories for solar sail with minimum area

    NASA Astrophysics Data System (ADS)

    Petukhov, Vyacheslav G.

    2018-05-01

    The fixed-time heliocentric trajectory optimization problem is considered for a planar solar sail with minimum area. Necessary optimality conditions are derived, a numerical method for solving the problem is developed, and numerical examples of optimal trajectories to Mars, Venus, and Mercury are presented. The dependence of the minimum solar sail area on the date of departure from the Earth, the time of flight, and the departing hyperbolic excess velocity is analyzed. In particular, for the rendezvous problem (approaching a target planet with zero relative velocity) with zero departing hyperbolic excess velocity and a flight duration of 1200 days, the minimum area-to-mass ratio is found to be about 12 m²/kg for the trajectory to Venus, 23.5 m²/kg for the trajectory to Mercury, and 25 m²/kg for the trajectory to Mars.

  18. Ranking of stopping criteria for log domain diffeomorphic demons application in clinical radiation therapy.

    PubMed

    Peroni, M; Golland, P; Sharp, G C; Baroni, G

    2011-01-01

    Deformable image registration is a complex optimization procedure with the goal of modeling a non-rigid transformation between two images. A crucial issue in this field is guaranteeing the user a robust but computationally reasonable algorithm. We rank the performances of four stopping criteria and six stopping-value computation strategies for a log-domain deformable registration. The stopping criteria we test are: (a) velocity field update magnitude, (b) vector field Jacobian, (c) mean squared error, and (d) harmonic energy. Experiments demonstrate that comparing the metric value over the last three iterations with the metric minimum of between four and six previous iterations is a robust and appropriate strategy. The harmonic energy and vector field update magnitude metrics give the best results in terms of robustness and speed of convergence.
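
    That best-performing stopping rule is easy to state in code. The sketch below assumes `history` holds one metric value (e.g. the harmonic energy) per iteration and that lower is better; the window sizes mirror the "last three versus four-to-six back" strategy.

```python
def should_stop(history, recent=3, lookback=6):
    """Stop when the metric minimum over the last `recent` iterations fails
    to improve on the minimum observed 4..`lookback` iterations ago."""
    if len(history) < lookback + 1:
        return False
    recent_best = min(history[-recent:])
    earlier_best = min(history[-lookback:-recent])   # iterations 4..6 back
    return recent_best >= earlier_best

energies = [10, 8, 6.5, 6.0, 5.95, 5.94, 5.94, 5.94, 5.94]
print(should_stop(energies))   # True: no improvement in the last 3 steps
```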

  19. Finding minimum-quotient cuts in planar graphs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Park, J.K.; Phillips, C.A.

    Given a graph G = (V, E) where each vertex v ∈ V is assigned a weight w(v) and each edge e ∈ E is assigned a cost c(e), the quotient of a cut partitioning the vertices of V into sets S and S̄ is c(S, S̄)/min{w(S), w(S̄)}, where c(S, S̄) is the sum of the costs of the edges crossing the cut and w(S) and w(S̄) are the sums of the weights of the vertices in S and S̄, respectively. The problem of finding a cut whose quotient is minimum for a graph has in recent years attracted considerable attention, due in large part to the work of Rao and of Leighton and Rao. They have shown that an algorithm (exact or approximation) for the minimum-quotient-cut problem can be used to obtain an approximation algorithm for the more famous minimum-b-balanced-cut problem, which requires finding a cut (S, S̄) minimizing c(S, S̄) subject to the constraint bW ≤ w(S) ≤ (1 − b)W, where W is the total vertex weight and b is some fixed balance in the range 0 < b ≤ 1/2. Unfortunately, the minimum-quotient-cut problem is strongly NP-hard for general graphs, and the best polynomial-time approximation algorithm known for the general problem guarantees only a cut whose quotient is at most O(lg n) times optimal, where n is the size of the graph. However, for planar graphs, the minimum-quotient-cut problem appears more tractable, as Rao has developed several efficient approximation algorithms for the planar version of the problem capable of finding a cut whose quotient is at most some constant times optimal. In this paper, we improve Rao's algorithms, both in terms of accuracy and speed. As our first result, we present two pseudopolynomial-time exact algorithms for the planar minimum-quotient-cut problem. As Rao's most accurate approximation algorithm for the problem, also a pseudopolynomial-time algorithm, guarantees only a 1.5-times-optimal cut, our algorithms represent a significant advance.
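
    The quotient itself is straightforward to compute for a candidate cut, which is useful when checking the output of any exact or approximation algorithm; the toy graph below is illustrative.

```python
def cut_quotient(S, weights, edges):
    """Quotient of the cut (S, S-bar): c(S, S-bar) / min(w(S), w(S-bar)).

    S       : set of vertices on one side of the cut
    weights : {vertex: weight}
    edges   : {(u, v): cost}
    """
    S = set(S)
    cross = sum(c for (u, v), c in edges.items() if (u in S) != (v in S))
    wS = sum(weights[v] for v in S)
    wSbar = sum(weights.values()) - wS
    return cross / min(wS, wSbar)

weights = {0: 1, 1: 1, 2: 1, 3: 1}
edges = {(0, 1): 2, (1, 2): 1, (2, 3): 2, (3, 0): 1}
print(cut_quotient({0, 1}, weights, edges))   # (1 + 1) / min(2, 2) = 1.0
```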

  20. Solving constrained minimum-time robot problems using the sequential gradient restoration algorithm

    NASA Technical Reports Server (NTRS)

    Lee, Allan Y.

    1991-01-01

    Three constrained minimum-time control problems of a two-link manipulator are solved using the Sequential Gradient and Restoration Algorithm (SGRA). The inequality constraints considered are reduced via Valentine-type transformations to nondifferential path equality constraints. The SGRA is then used to solve these transformed problems with equality constraints. The results obtained indicate that at least one of the two controls is at its limits at any instant in time. The remaining control then adjusts itself so that none of the system constraints is violated. Hence, the minimum-time control is either a pure bang-bang control or a combined bang-bang/singular control.

  1. Low rank approximation methods for MR fingerprinting with large scale dictionaries.

    PubMed

    Yang, Mingrui; Ma, Dan; Jiang, Yun; Hamilton, Jesse; Seiberlich, Nicole; Griswold, Mark A; McGivney, Debra

    2018-04-01

    This work proposes new low rank approximation approaches with significant memory savings for large scale MR fingerprinting (MRF) problems. We introduce a compressed MRF with randomized singular value decomposition method to significantly reduce the memory requirement for calculating a low rank approximation of large sized MRF dictionaries. We further relax this requirement by exploiting the structures of MRF dictionaries in the randomized singular value decomposition space and fitting them to low-degree polynomials to generate high resolution MRF parameter maps. In vivo 1.5T and 3T brain scan data are used to validate the approaches. T1, T2, and off-resonance maps are in good agreement with those of the standard MRF approach. Moreover, the memory savings are up to 1000-fold for the MRF-fast imaging with steady-state precession sequence and more than 15-fold for the MRF-balanced, steady-state free precession sequence. The proposed compressed MRF with randomized singular value decomposition and dictionary fitting methods are memory-efficient low rank approximation methods, which can benefit the usage of MRF in clinical settings. They also have great potential in large scale MRF problems, such as problems considering multi-component MRF parameters or high resolution in the parameter space. Magn Reson Med 79:2392-2400, 2018. © 2017 International Society for Magnetic Resonance in Medicine.

  2. Block-accelerated aggregation multigrid for Markov chains with application to PageRank problems

    NASA Astrophysics Data System (ADS)

    Shen, Zhao-Li; Huang, Ting-Zhu; Carpentieri, Bruno; Wen, Chun; Gu, Xian-Ming

    2018-06-01

    Recently, the adaptive algebraic aggregation multigrid method has been proposed for computing stationary distributions of Markov chains. This method updates aggregates on every iterative cycle to keep the coarse-level corrections highly accurate. Accordingly, its fast convergence rate is well guaranteed, but a large proportion of the run time is often spent in the aggregation processes. In this paper, we show that the aggregates on each level in this method can be used to transform the probability equation of that level into a block linear system. We then propose a Block-Jacobi relaxation that deals with the block system on each level to smooth the error. Some theoretical analysis of this technique is presented, and it is also adapted to solve PageRank problems. The purpose of this technique is to accelerate the adaptive aggregation multigrid method and its variants for solving Markov chains and PageRank problems. It also attempts to shed some light on new ways of making aggregation processes more cost-effective in aggregation multigrid methods. Numerical experiments are presented to illustrate the effectiveness of this technique.

  3. Support vector methods for survival analysis: a comparison between ranking and regression approaches.

    PubMed

    Van Belle, Vanya; Pelckmans, Kristiaan; Van Huffel, Sabine; Suykens, Johan A K

    2011-10-01

    To compare and evaluate ranking, regression, and combined machine learning approaches for the analysis of survival data. The literature describes two approaches based on support vector machines to deal with censored observations. In the first approach the key idea is to rephrase the task as a ranking problem via the concordance index, a problem which can be solved efficiently in a context of structural risk minimization and convex optimization techniques. In the second approach, one uses a regression approach, dealing with censoring by means of inequality constraints. The goal of this paper is then twofold: (i) introducing a new model combining the ranking and regression strategies, which retains the link with existing survival models such as the proportional hazards model via transformation models; and (ii) comparing the three techniques on 6 clinical and 3 high-dimensional datasets and discussing the relevance of these techniques relative to classical approaches for survival data. We compare svm-based survival models based on ranking constraints, on regression constraints, and on both ranking and regression constraints. The performance of the models is compared by means of three different measures: (i) the concordance index, measuring the model's discriminating ability; (ii) the logrank test statistic, indicating whether patients with a prognostic index lower than the median prognostic index have a significantly different survival than patients with a prognostic index higher than the median; and (iii) the hazard ratio after normalization to restrict the prognostic index between 0 and 1. Our results indicate a significantly better performance for models including regression constraints over models based only on ranking constraints. This work gives empirical evidence that svm-based models using regression constraints perform significantly better than svm-based models based on ranking constraints. Our experiments show a comparable performance for methods including only regression or both regression and ranking constraints on clinical data. On high-dimensional data, the former model performs better. However, this approach does not have a theoretical link with standard statistical models for survival data. This link can be made by means of transformation models when ranking constraints are included. Copyright © 2011 Elsevier B.V. All rights reserved.

  4. VDA, a Method of Choosing a Better Algorithm with Fewer Validations

    PubMed Central

    Kluger, Yuval

    2011-01-01

    The multitude of bioinformatics algorithms designed for performing a particular computational task presents end-users with the problem of selecting the most appropriate computational tool for analyzing their biological data. The choice of the best available method is often based on expensive experimental validation of the results. We propose an approach to design validation sets for method comparison and performance assessment that are effective in terms of cost and discrimination power. Validation Discriminant Analysis (VDA) is a method for designing a minimal validation dataset to allow reliable comparisons between the performances of different algorithms. Implementation of our VDA approach achieves this reduction by selecting predictions that maximize the minimum Hamming distance between algorithmic predictions in the validation set. We show that VDA can be used to correctly rank algorithms according to their performances. These results are further supported by simulations and by realistic algorithmic comparisons in silico. VDA is a novel, cost-efficient method for minimizing the number of validation experiments necessary for reliable performance estimation and fair comparison between algorithms. Our VDA software is available at http://sourceforge.net/projects/klugerlab/files/VDA/ PMID:22046256
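
    VDA's selection step can be approximated with a simple greedy heuristic: repeatedly add the prediction instance that most increases the smallest pairwise Hamming distance between the algorithms' prediction vectors. This is only a sketch of the idea under a greedy assumption; the actual VDA software may select differently.

```python
import numpy as np

def select_validation_set(preds, k):
    """Greedily pick k instances maximizing the minimum pairwise Hamming
    distance between algorithms' 0/1 predictions.

    preds : (n_algorithms, n_instances) binary prediction matrix
    """
    n_alg, n_inst = preds.shape
    pairs = [(i, j) for i in range(n_alg) for j in range(i + 1, n_alg)]
    dist = {p: 0 for p in pairs}     # Hamming distance so far, per pair
    chosen = []
    for _ in range(k):
        best, best_gain = None, -1
        for t in range(n_inst):
            if t in chosen:
                continue
            trial = min(dist[i, j] + (preds[i, t] != preds[j, t])
                        for i, j in pairs)
            if trial > best_gain:
                best, best_gain = t, trial
        chosen.append(best)
        for i, j in pairs:
            dist[i, j] += preds[i, best] != preds[j, best]
    return chosen

preds = np.array([[0, 1, 1, 0, 1],
                  [0, 1, 0, 1, 1],
                  [1, 0, 1, 0, 0]])
print(select_validation_set(preds, 3))
```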

  5. Implementation of the Hungarian Algorithm to Account for Ligand Symmetry and Similarity in Structure-Based Design

    PubMed Central

    2015-01-01

    False negative docking outcomes for highly symmetric molecules are a barrier to the accurate evaluation of docking programs, scoring functions, and protocols. This work describes an implementation of a symmetry-corrected root-mean-square deviation (RMSD) method into the program DOCK based on the Hungarian algorithm for solving the minimum assignment problem, which dynamically assigns atom correspondence in molecules with symmetry. The algorithm adds only a trivial amount of computation time to the RMSD calculations and is shown to increase the reported overall docking success rate by approximately 5% when tested over 1043 receptor–ligand systems. For some families of protein systems the results are even more dramatic, with success rate increases up to 16.7%. Several additional applications of the method are also presented including as a pairwise similarity metric to compare molecules during de novo design, as a scoring function to rank-order virtual screening results, and for the analysis of trajectories from molecular dynamics simulation. The new method, including source code, is available to registered users of DOCK6 (http://dock.compbio.ucsf.edu). PMID:24410429
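
    The core of the symmetry-corrected RMSD is a single call to a Hungarian-algorithm solver on the atom-atom squared-distance matrix; scipy's `linear_sum_assignment` serves that role here. For clarity the sketch below matches all atoms freely, whereas a faithful implementation, like the one described for DOCK, would restrict correspondences to chemically equivalent atoms.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def symmetry_corrected_rmsd(ref, pose):
    """Symmetry-corrected RMSD: solve the minimum assignment problem on
    squared interatomic distances so that equivalent atoms in symmetric
    molecules pair up optimally.

    ref, pose : (n_atoms, 3) coordinate arrays of the same molecule
    """
    diff = ref[:, None, :] - pose[None, :, :]
    cost = np.sum(diff ** 2, axis=-1)            # squared distance matrix
    rows, cols = linear_sum_assignment(cost)     # optimal atom correspondence
    return np.sqrt(cost[rows, cols].mean())

# A 2-fold symmetric pair of atoms, swapped in the docked pose:
ref = np.array([[1.0, 0.0, 0.0], [-1.0, 0.0, 0.0]])
pose = ref[::-1]
print(symmetry_corrected_rmsd(ref, pose))   # 0.0 instead of a false 2.0
```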

  6. Unfolded equations for current interactions of 4d massless fields as a free system in mixed dimensions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gelfond, O. A., E-mail: gel@lpi.ru; Vasiliev, M. A., E-mail: vasiliev@lpi.ru

    2015-03-15

    Interactions of massless fields of all spins in four dimensions with currents of any spin are shown to result from a solution of the linear problem that describes a gluing between a rank-one (massless) system and a rank-two (current) system in the unfolded dynamics approach. Since the rank-two system is dual to a free rank-one higher-dimensional system that effectively describes conformal fields in six space-time dimensions, the constructed system can be interpreted as describing a mixture between linear conformal fields in four and six dimensions. An interpretation of the obtained results in the spirit of the AdS/CFT correspondence is discussed.

  7. Primary-Care Weight-Management Strategies: Parental Priorities and Preferences.

    PubMed

    Turer, Christy Boling; Upperman, Carla; Merchant, Zahra; Montaño, Sergio; Flores, Glenn

    2016-04-01

    To examine parental perspectives/rankings of the most important weight-management clinical practices and to determine whether preferences/rankings differ when parents disagree that their child is overweight. We performed mixed-methods analysis of a 32-question survey of parents of 2- to 18-year-old overweight children assessing parental agreement that their child is overweight, the single most important thing providers can do to improve weight status, ranking American Academy of Pediatrics-recommended clinical practices, and preferred follow-up interval. Four independent reviewers analyzed open-response data to identify qualitative themes/subthemes. Multivariable analyses examined parental rankings, preferred follow-up interval, and differences by agreement with their child's overweight assessment. Thirty-six percent of 219 children were overweight, 42% obese, and 22% severely obese; 16% of parents disagreed with their child's overweight assessment. Qualitative analysis of the most important practice to help overweight children yielded 10 themes; unique to parents disagreeing with their children's overweight assessments was "change weight-status assessments." After adjustment, the 3 highest-ranked clinical practices included, "check for weight-related problems," "review growth chart," and "recommend general dietary changes" (all P < .01); parents disagreeing with their children's overweight assessments ranked "review growth chart" as less important and ranked "reducing screen time" and "general activity changes" as more important. The mean preferred weight-management follow-up interval (10-12 weeks) did not differ by agreement with children's overweight assessments. Parents prefer weight-management strategies that prioritize evaluating weight-related problems, growth-chart review, and regular follow-up. Parents who disagree that their child is overweight want changes in how overweight is assessed. Using parent-preferred weight-management strategies may prove useful in improving child weight status. Copyright © 2016 Academic Pediatric Association. Published by Elsevier Inc. All rights reserved.

  8. Reactive Power Compensation Method Considering Minimum Effective Reactive Power Reserve

    NASA Astrophysics Data System (ADS)

    Gong, Yiyu; Zhang, Kai; Pu, Zhang; Li, Xuenan; Zuo, Xianghong; Zhen, Jiao; Sudan, Teng

    2017-05-01

    Based on a model for calculating the minimum generator reactive power reserve that guarantees power system voltage stability, generator reactive power and reactive power compensation are managed jointly, forming a multi-objective optimization problem; we propose a reactive power compensation optimization method that takes the minimum generator reactive power reserve into account. Through improvements to the objective function and constraint conditions, the method determines the minimum reactive power compensation needed to restore the reactive power reserve when, as the system load grows, the generators' reactive power alone can no longer meet the requirements of safe operation.

  9. An ILP based memetic algorithm for finding minimum positive influence dominating sets in social networks

    NASA Astrophysics Data System (ADS)

    Lin, Geng; Guan, Jian; Feng, Huibin

    2018-06-01

    The positive influence dominating set problem is a variant of the minimum dominating set problem and has many applications in social networks. It is NP-hard and is receiving increasing attention. Various methods have been proposed to solve the positive influence dominating set problem; however, most existing work has focused on greedy algorithms, and the solution quality needs to be improved. In this paper, we formulate the minimum positive influence dominating set problem as an integer linear program (ILP) and propose an ILP-based memetic algorithm (ILPMA) for solving the problem. The ILPMA integrates a greedy randomized adaptive construction procedure, a crossover operator, a repair operator, and a tabu search procedure. The performance of ILPMA is validated on nine real-world social networks with up to 36,692 nodes. The results show that ILPMA significantly improves the solution quality and is robust.
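
    The underlying ILP is compact: one binary variable per node and one covering constraint per node requiring at least half of its neighbors in the dominating set. A sketch with the PuLP modeling library follows, assuming the common definition of a positive influence dominating set; the example graph is made up, and this is the plain ILP, not the paper's memetic algorithm.

```python
import pulp

def min_pids(nodes, neighbors):
    """Minimum positive influence dominating set: pick a smallest set D so
    that every node has at least ceil(deg(v)/2) neighbors in D.

    neighbors : {v: set of neighbors of v}
    """
    prob = pulp.LpProblem("PIDS", pulp.LpMinimize)
    x = pulp.LpVariable.dicts("x", nodes, cat="Binary")
    prob += pulp.lpSum(x[v] for v in nodes)              # minimize |D|
    for v in nodes:
        deg = len(neighbors[v])
        prob += pulp.lpSum(x[u] for u in neighbors[v]) >= -(-deg // 2)  # ceil
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return [v for v in nodes if x[v].value() == 1]

nbrs = {0: {1, 2}, 1: {0, 2, 3}, 2: {0, 1}, 3: {1}}
print(min_pids(list(nbrs), nbrs))
```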

  10. Is the problem list in the eye of the beholder? An exploration of consistency across physicians.

    PubMed

    Krauss, John C; Boonstra, Philip S; Vantsevich, Anna V; Friedman, Charles P

    2016-09-01

    Quantify the variability of patients' problem lists - in terms of the number, type, and ordering of problems - across multiple physicians, and assess physicians' criteria for organizing and ranking diagnoses. In an experimental setting, 32 primary care physicians generated and ordered problem lists for three identical complex internal medicine cases expressed as detailed 2- to 4-page abstracts and subsequently expressed their criteria for ordering items in the list. We studied variability in problem list length. We modified a previously validated rank-based similarity measure, with range of zero to one, to quantify agreement between pairs of lists and calculate a single consensus problem list that maximizes agreement with each physician. Physicians' reasoning for the ordering of the problem lists was recorded. Subjects' problem lists were highly variable. The median problem list length was 8 (range: 3-14) for Case A, 10 (range: 4-20) for Case B, and 7 (range: 3-13) for Case C. The median indices of agreement - taking into account the length, content, and order of lists - over all possible physician pairings were 0.479, 0.371, and 0.509 for Cases A, B, and C, respectively. The median agreements between the physicians' lists and the consensus list for each case were 0.683, 0.581, and 0.697 (for Cases A, B, and C, respectively). Out of a possible 1488 pairings, 2 lists were identical. Physicians most frequently ranked problem list items based on their acuity and immediate threat to health. The problem list is a physician's mental model of a patient's health status. These mental models were found to vary significantly between physicians, raising questions about whether problem lists created by individual physicians can serve their intended purpose to improve care coordination. © The Author 2016. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  11. Statistical analysis of effective singular values in matrix rank determination

    NASA Technical Reports Server (NTRS)

    Konstantinides, Konstantinos; Yao, Kung

    1988-01-01

    A major problem in using the SVD (singular value decomposition) as a tool for determining the effective rank of a perturbed matrix is that of distinguishing between significantly small and significantly large singular values. To this end, confidence regions are derived for the perturbed singular values of matrices with noisy observation data. The analysis is based on the theory of perturbations of singular values and on statistical significance testing. Threshold bounds for perturbation due to finite-precision and i.i.d. random models are evaluated. In the random models, the threshold bounds depend on the dimension of the matrix, the noise variance, and a predefined statistical level of significance. The results are applied to the problem of determining the effective order of a linear autoregressive system from the approximate rank of a sample autocorrelation matrix. Various numerical examples illustrating the usefulness of these bounds and comparisons to other previously known approaches are given.
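
    In practice this amounts to thresholding the singular values against a noise-dependent bound. The sketch below uses an ad hoc threshold of the form alpha·sigma·sqrt(max dimension) purely for illustration; the paper derives sharper, statistically justified bounds.

```python
import numpy as np

def effective_rank(A, sigma_noise, alpha=3.0):
    """Effective rank of a noisy matrix: count singular values above a
    noise-dependent threshold (an illustrative stand-in for the paper's
    confidence-region bounds)."""
    s = np.linalg.svd(A, compute_uv=False)
    threshold = alpha * sigma_noise * np.sqrt(max(A.shape))
    return int(np.sum(s > threshold))

rng = np.random.default_rng(0)
low_rank = rng.normal(size=(50, 3)) @ rng.normal(size=(3, 50))  # rank 3
noisy = low_rank + 0.01 * rng.normal(size=(50, 50))
print(effective_rank(noisy, sigma_noise=0.01))   # expected: 3
```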

  12. A collaborative filtering recommendation algorithm based on weighted SimRank and social trust

    NASA Astrophysics Data System (ADS)

    Su, Chang; Zhang, Butao

    2017-05-01

    Collaborative filtering is one of the most widely used recommendation technologies, but its data sparsity and cold-start problems are difficult to solve effectively. In order to alleviate the data sparsity problem in collaborative filtering, we first propose a weighted, improved SimRank algorithm to compute the rating similarity between users in the rating data set. The improved SimRank can find more nearest neighbors for target users according to the transmissibility of rating similarity. Then, we build a trust network and introduce the calculation of trust degrees on the trust relationship data set. Finally, we combine rating similarity and trust into a comprehensive similarity in order to find more appropriate nearest neighbors for the target user. Experimental results show that the proposed algorithm effectively improves the recommendation precision of collaborative filtering.
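
    For orientation, plain (unweighted) SimRank iterates the rule "two nodes are similar if their in-neighbors are similar"; the paper's weighted variant modifies this recursion, so the sketch below shows only the textbook baseline.

```python
import numpy as np

def simrank(adj, C=0.8, n_iter=10):
    """Basic SimRank iteration on a directed graph.
    adj[i, j] = 1 means an edge i -> j; C is the decay factor."""
    n = adj.shape[0]
    S = np.eye(n)
    in_nbrs = [np.flatnonzero(adj[:, v]) for v in range(n)]
    for _ in range(n_iter):
        S_new = np.eye(n)
        for a in range(n):
            for b in range(n):
                if a == b or len(in_nbrs[a]) == 0 or len(in_nbrs[b]) == 0:
                    continue
                # average similarity of all in-neighbor pairs, scaled by C
                S_new[a, b] = (C / (len(in_nbrs[a]) * len(in_nbrs[b]))
                               * S[np.ix_(in_nbrs[a], in_nbrs[b])].sum())
        S = S_new
    return S
```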

  13. Extremes in ecology: Avoiding the misleading effects of sampling variation in summary analyses

    USGS Publications Warehouse

    Link, W.A.; Sauer, J.R.

    1996-01-01

    Surveys such as the North American Breeding Bird Survey (BBS) produce large collections of parameter estimates. One's natural inclination when confronted with lists of parameter estimates is to look for the extreme values: in the BBS, these correspond to the species that appear to have the greatest changes in population size through time. Unfortunately, extreme estimates are liable to correspond to the most poorly estimated parameters. Consequently, the most extreme parameters may not match up with the most extreme parameter estimates. The ranking of parameter values on the basis of their estimates is a difficult statistical problem. We use data from the BBS and simulations to illustrate the potentially misleading effects of sampling variation on rankings of parameters. We describe empirical Bayes and constrained empirical Bayes procedures which provide partial solutions to the problem of ranking in the presence of sampling variation.

  14. Crime Scene Investigation: Clinical Application of Chemical Shift Imaging as a Problem Solving Tool

    DTIC Science & Technology

    2016-02-26

  15. The Typicality Ranking Task: A New Method to Derive Typicality Judgments from Children.

    PubMed

    Djalal, Farah Mutiasari; Ameel, Eef; Storms, Gert

    2016-01-01

    An alternative method for deriving typicality judgments, applicable in young children that are not familiar with numerical values yet, is introduced, allowing researchers to study gradedness at younger ages in concept development. Contrary to the long tradition of using rating-based procedures to derive typicality judgments, we propose a method that is based on typicality ranking rather than rating, in which items are gradually sorted according to their typicality, and that requires a minimum of linguistic knowledge. The validity of the method is investigated and the method is compared to the traditional typicality rating measurement in a large empirical study with eight different semantic concepts. The results show that the typicality ranking task can be used to assess children's category knowledge and to evaluate how this knowledge evolves over time. Contrary to earlier held assumptions in studies on typicality in young children, our results also show that preference is not so much a confounding variable to be avoided, but that both variables are often significantly correlated in older children and even in adults.

  16. Origin of crashes in three US stock markets: shocks and bubbles

    NASA Astrophysics Data System (ADS)

    Johansen, Anders

    2004-07-01

    This paper presents an exclusive classification of the largest crashes in the Dow Jones Industrial Average, S&P 500, and NASDAQ in the past century. Crashes are objectively defined as the top-rank filtered drawdowns (loss from the last local maximum to the next local minimum, disregarding noise fluctuations), where the size of the filter is determined by the historical volatility of the index. It is shown that all crashes can be linked to either an external shock, e.g., the outbreak of war, or a log-periodic power law (LPPL) bubble with an empirically well-defined complex value of the exponent. Conversely, with one sole exception, all previously identified LPPL bubbles are followed by a top-rank drawdown. As a consequence, the analysis presented suggests a one-to-one correspondence between market crashes defined as top-rank filtered drawdowns on the one hand and surprising news and LPPL bubbles on the other. We attribute this correspondence to the efficient market hypothesis being effective on two quite different time scales, depending on whether the market instability the crash represents is internally or externally generated.
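
    A filtered drawdown can be extracted from a price series in a few lines of code. The sketch below uses a fixed relative filter `eps` where the paper calibrates the filter to each index's historical volatility; the price series is made up.

```python
def filtered_drawdowns(prices, eps=0.0):
    """Decompose a price series into drawdowns: losses from a local maximum
    to the next local minimum, ignoring rebounds smaller than `eps`."""
    drawdowns, peak, trough = [], prices[0], prices[0]
    for p in prices[1:]:
        if p > trough * (1 + eps):          # rebound ends the drawdown
            if peak > trough:
                drawdowns.append((peak - trough) / peak)
            peak = trough = p
        else:
            peak = max(peak, p)
            trough = min(trough, p)
    if peak > trough:
        drawdowns.append((peak - trough) / peak)
    return sorted(drawdowns, reverse=True)   # top-rank drawdowns first

prices = [100, 98, 103, 99, 95, 97, 101, 90]
print(filtered_drawdowns(prices, eps=0.02))
```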

  18. Constrained dictionary learning and probabilistic hypergraph ranking for person re-identification

    NASA Astrophysics Data System (ADS)

    He, You; Wu, Song; Pu, Nan; Qian, Li; Xiao, Guoqiang

    2018-04-01

    Person re-identification is a fundamental and inevitable task in public security. In this paper, we propose a novel framework to improve the performance of this task. First, two different types of descriptors are extracted to represent a pedestrian: (1) appearance-based superpixel features, constituted mainly by conventional color features and extracted from superpixels rather than the whole picture; and (2) deep features extracted by a feature fusion network, used because of the limited discriminative power of appearance features. Second, a view-invariant subspace is learned by dictionary learning constrained by the minimum negative sample (termed DL-cMN) to reduce the noise in the appearance-based superpixel feature domain. Then, we use the deep features and the sparse codes transformed from the appearance-based features to establish hyperedges respectively by k-nearest neighbors, rather than simply concatenating different features. Finally, a final ranking is performed by a probabilistic hypergraph ranking algorithm. Extensive experiments on three challenging datasets (VIPeR, PRID450S and CUHK01) demonstrate the advantages and effectiveness of our proposed algorithm.

  19. Second-order optimality conditions for problems with C1 data

    NASA Astrophysics Data System (ADS)

    Ginchev, Ivan; Ivanov, Vsevolod I.

    2008-04-01

    In this paper we obtain second-order optimality conditions of Karush-Kuhn-Tucker and Fritz John type for a problem with inequality constraints and a set constraint in nonsmooth settings, using second-order directional derivatives. In the necessary conditions we suppose that the objective function and the active constraints are continuously differentiable, but their gradients are not necessarily locally Lipschitz. In the sufficient conditions for a global minimum we assume that the objective function is differentiable at the candidate point x̄ and second-order pseudoconvex at x̄, a notion introduced by the authors [I. Ginchev, V.I. Ivanov, Higher-order pseudoconvex functions, in: I.V. Konnov, D.T. Luc, A.M. Rubinov (Eds.), Generalized Convexity and Related Topics, in: Lecture Notes in Econom. and Math. Systems, vol. 583, Springer, 2007, pp. 247-264], and that the constraints are both differentiable and quasiconvex at x̄. In the sufficient conditions for an isolated local minimum of order two we suppose that the problem belongs to the class C1,1. We show that they do not hold for C1 problems which are not C1,1 ones. Finally, a new notion, the parabolic local minimum, is defined and applied to extend the sufficient conditions for an isolated local minimum from problems with C1,1 data to problems with C1 data.

  20. Characterization of topological structure on complex networks.

    PubMed

    Nakamura, Ikuo

    2003-10-01

    Characterizing the topological structure of complex networks is a significant problem, especially from the viewpoint of data mining on the World Wide Web. "Page rank," used in the commercial search engine Google, is such a measure of authority for ranking all the nodes matching a given query. We have investigated the page-rank distribution of the real Web and of a growing network model, both of which have directed links and exhibit power-law distributions of in-degree (the number of incoming links to a node) and out-degree (the number of outgoing links from a node). We find a concentration of page rank on a small number of nodes, and low page rank in high-degree regimes of the real Web, which can be explained by topological properties of the network, e.g., network motifs, and the connectivities of nearest neighbors.
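
    For reference, the standard PageRank recursion can be computed by power iteration in a few lines; the damping factor 0.85 and the tiny example graph are conventional illustrative choices, not values taken from the study.

```python
import numpy as np

def pagerank(adj, d=0.85, tol=1e-10):
    """PageRank by power iteration on a dense adjacency matrix.
    adj[i, j] = 1 if page i links to page j; d is the damping factor."""
    n = adj.shape[0]
    out = adj.sum(axis=1, keepdims=True)
    # row-stochastic transition matrix; dangling pages jump uniformly
    P = np.where(out > 0, adj / np.maximum(out, 1), 1.0 / n)
    r = np.full(n, 1.0 / n)
    while True:
        r_new = (1 - d) / n + d * (P.T @ r)
        if np.abs(r_new - r).sum() < tol:
            return r_new
        r = r_new

# Tiny directed web graph: 0 -> 1, 0 -> 2, 1 -> 2, 2 -> 0
A = np.array([[0, 1, 1],
              [0, 0, 1],
              [1, 0, 0]], dtype=float)
print(pagerank(A))   # node 2 collects the highest rank
```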

  1. Fast Low-Rank Bayesian Matrix Completion With Hierarchical Gaussian Prior Models

    NASA Astrophysics Data System (ADS)

    Yang, Linxiao; Fang, Jun; Duan, Huiping; Li, Hongbin; Zeng, Bing

    2018-06-01

    The problem of low rank matrix completion is considered in this paper. To exploit the underlying low-rank structure of the data matrix, we propose a hierarchical Gaussian prior model, where columns of the low-rank matrix are assumed to follow a Gaussian distribution with zero mean and a common precision matrix, and a Wishart distribution is specified as a hyperprior over the precision matrix. We show that such a hierarchical Gaussian prior has the potential to encourage a low-rank solution. Based on the proposed hierarchical prior model, a variational Bayesian method is developed for matrix completion, where the generalized approximate message passing (GAMP) technique is embedded into the variational Bayesian inference in order to circumvent cumbersome matrix inverse operations. Simulation results show that our proposed method demonstrates superiority over existing state-of-the-art matrix completion methods.

  2. Developing and Planning a Texas Based Homeschool Curriculum

    ERIC Educational Resources Information Center

    Terry, Bobby K.

    2011-01-01

    Texas has some of the lowest SAT scores in the nation; it ranks 36th nationwide in graduation rates, and its teacher salaries rank 33rd. The public school system in Texas has problems with overcrowding, violence, and poor performance on standardized testing. Currently 300,000 families have opted out of the public school system in order…

  3. A Methodology for Calculating Prestige Ranks of Academic Journals in Communication: A More Inclusive Alternative to Citation Metrics

    ERIC Educational Resources Information Center

    Stephen, Timothy D.

    2011-01-01

    The problem of how to rank academic journals in the communication field (human interaction, mass communication, speech, and rhetoric) is one of practical importance to scholars, university administrators, and librarians, yet there is no methodology that covers the field's journals comprehensively and objectively. This article reports a new ranking…

  4. Two variants of minimum discarded fill ordering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    D'Azevedo, E.F.; Forsyth, P.A.; Tang, Wei-Pai

    1991-01-01

    It is well known that the ordering of the unknowns can have a significant effect on the convergence of Preconditioned Conjugate Gradient (PCG) methods. There has been considerable experimental work on the effects of ordering for regular finite difference problems. In many cases, good results have been obtained with preconditioners based on diagonal, spiral or natural row orderings. However, for finite element problems having unstructured grids or grids generated by a local refinement approach, it is difficult to define many of the orderings used for more regular problems. A recently proposed Minimum Discarded Fill (MDF) ordering technique is effective in finding high-quality Incomplete LU (ILU) preconditioners, especially for problems arising from unstructured finite element grids. Testing indicates this algorithm can identify a rather complicated physical structure in an anisotropic problem and orders the unknowns in the "preferred" direction. The MDF technique may be viewed as the numerical analogue of the minimum deficiency algorithm in sparse matrix technology. At any stage of the partial elimination, the MDF technique chooses the next pivot node so as to minimize the amount of discarded fill. In this work, two efficient variants of the MDF technique are explored to produce cost-effective high-order ILU preconditioners. The Threshold MDF orderings combine MDF ideas with drop tolerance techniques to identify the sparsity pattern in the ILU preconditioners. These techniques identify an ordering that encourages fast decay of the entries in the ILU factorization. The Minimum Update Matrix (MUM) ordering technique is a simplification of the MDF ordering and is closely related to the minimum degree algorithm. The MUM ordering is especially suited to large problems arising from Navier-Stokes problems. Some interesting pictures of the orderings are presented using a visualization tool. 22 refs., 4 figs., 7 tabs.

  5. Statistical physics when the minimum temperature is not absolute zero

    NASA Astrophysics Data System (ADS)

    Chung, Won Sang; Hassanabadi, Hassan

    2018-04-01

    In this paper, a nonzero minimum temperature is considered, based on the third law of thermodynamics and the existence of a minimal momentum. Starting from the assumption of a nonzero positive minimum temperature in nature, we deform the definitions of some thermodynamical quantities and investigate the corresponding corrections to well-known thermodynamical problems.

  6. A minimum propellant solution to an orbit-to-orbit transfer using a low thrust propulsion system

    NASA Technical Reports Server (NTRS)

    Cobb, Shannon S.

    1991-01-01

    The Space Exploration Initiative is considering the use of low thrust (nuclear electric, solar electric) and intermediate thrust (nuclear thermal) propulsion systems for transfer to Mars and back. Given the duration of such a mission, a low thrust minimum-fuel solution is of interest; the fuel savings can be substantial if the propulsion system is allowed to be turned off and back on. This switching of the propulsion system helps distinguish the minimum-fuel problem from the well-known minimum-time problem. Optimal orbit transfers are also of interest for the development of a guidance system for orbital maneuvering vehicles, which will be needed, for example, to deliver cargoes to Space Station Freedom. The problem of optimizing trajectories for an orbit-to-orbit transfer with minimum fuel expenditure using a low thrust propulsion system is addressed.

  7. Optimization of memory use of fragment extension-based protein-ligand docking with an original fast minimum cost flow algorithm.

    PubMed

    Yanagisawa, Keisuke; Komine, Shunta; Kubota, Rikuto; Ohue, Masahito; Akiyama, Yutaka

    2018-06-01

    The need to accelerate large-scale protein-ligand docking in virtual screening against a huge compound database led researchers to propose a strategy that memorizes the evaluation result of a partial structure of a compound and reuses it to evaluate other compounds. However, the previous method required frequent disk accesses, resulting in insufficient acceleration. More efficient memory usage can therefore be expected to yield further acceleration, and optimal memory usage can be achieved by solving a minimum cost flow problem. In this research, we propose a fast algorithm for the minimum cost flow problem that exploits, as constraints, the characteristics of the graph generated for this problem. The proposed algorithm, which optimizes memory usage, was approximately seven times faster than existing minimum cost flow algorithms. Copyright © 2018 The Authors. Published by Elsevier Ltd. All rights reserved.
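
    For readers who want to experiment, the generic minimum cost flow formulation underlying this kind of memory optimization can be sketched with an off-the-shelf solver. The sketch below runs networkx's general-purpose solver on a toy graph with hypothetical fragment nodes; it is not the authors' specialized algorithm, which exploits the structure of the docking-specific graphs.

        import networkx as nx

        # Toy min-cost-flow instance (illustrative only, not the paper's graph):
        # route 4 units from source to sink at minimum total cost.
        G = nx.DiGraph()
        G.add_node("source", demand=-4)   # negative demand = supply
        G.add_node("sink", demand=4)      # positive demand = sink
        # weight = cost per unit of flow, capacity = max units on the edge
        G.add_edge("source", "fragA", weight=2, capacity=3)
        G.add_edge("source", "fragB", weight=5, capacity=3)
        G.add_edge("fragA", "sink", weight=1, capacity=3)
        G.add_edge("fragB", "sink", weight=1, capacity=3)

        flow = nx.min_cost_flow(G)              # flow[u][v] = units on edge (u, v)
        print(flow, nx.cost_of_flow(G, flow))   # optimal cost: 3*(2+1) + 1*(5+1) = 15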

  8. Enrichment and Ranking of the YouTube Tag Space and Integration with the Linked Data Cloud

    NASA Astrophysics Data System (ADS)

    Choudhury, Smitashree; Breslin, John G.; Passant, Alexandre

    The spread of personal digital cameras with video functionality and of video-enabled camera phones has increased the amount of user-generated video on the Web. People are spending more and more time viewing online videos as a major source of entertainment and "infotainment". Social websites allow users to assign shared free-form tags to user-generated multimedia resources, thus generating annotations for objects with a minimum amount of effort. Tagging allows communities to organise their multimedia items into browseable sets, but these tags may be poorly chosen and related tags may be omitted. Current techniques to retrieve, integrate and present this media to users are deficient and would benefit from improvement. In this paper, we describe a framework for semantic enrichment, ranking and integration of web video tags using Semantic Web technologies. Semantic enrichment of folksonomies can bridge the gap between the uncontrolled and flat structures typically found in user-generated content and the structures provided by the Semantic Web. The enhancement of tag spaces with semantics has been accomplished through two major tasks: (1) a tag space expansion and ranking step; and (2) concept matching and integration with the Linked Data cloud. We have explored social, temporal and spatial contexts to enrich and extend the existing tag space. The resulting semantic tag space is modelled via a local graph based on co-occurrence distances for ranking. A ranked tag list is mapped and integrated with the Linked Data cloud through the DBpedia resource repository. Multi-dimensional context filtering for tag expansion makes tag ranking much easier and provides less ambiguous tag-to-concept matching.

  9. A new concept for stainless steels ranking upon the resistance to cavitation erosion

    NASA Astrophysics Data System (ADS)

    Bordeasu, I.; Popoviciu, M. O.; Salcianu, L. C.; Ghera, C.; Micu, L. M.; Badarau, R.; Iosif, A.; Pirvulescu, L. D.; Podoleanu, C. E.

    2017-01-01

    At present, the ranking of materials by their resistance to cavitation erosion is obtained from laboratory tests summarized by the characteristic curves of mean depth of erosion versus time, MDE(t), and mean depth of erosion rate versus time, MDER(t). In previous papers, Bordeasu and co-workers gave procedures for establishing exponential equations representing these curves with minimum scatter of the experimentally obtained results. For a given material, the exponential equations MDE(t) and MDER(t) share the same values of the scale and shape parameters. For ranking materials it is sometimes important to establish a single figure of merit. Until now, three such numbers have been used in the Timisoara Polytechnic University Cavitation Laboratory: the stable value of the curve MDER(t), the resistance to cavitation erosion (Rcav ≡ 1/MDERstable), and the normalized cavitation resistance Rns, which is the ratio between vs = MDERstable for the analyzed material and vse = MDERse, the mean depth of erosion rate for the steel OH12NDL (Rns = vs/vse). OH12NDL is a material used for manufacturing the blades of numerous Kaplan turbines in Romania, for which both cavitation erosion laboratory tests and field measurements of cavitation erosion are available. In the present paper we recommend a new method for ranking materials by cavitation erosion resistance. This method uses the scale and shape parameters of the exponential equations representing the characteristic cavitation erosion curves. So far the method has been applied only to stainless steels. The experimental results show that the scale parameter provides an excellent basis for ranking stainless steels. In the future this kind of ranking will also be tested for other materials, especially bronzes used for manufacturing ship propellers.

  10. Real-time trajectory optimization on parallel processors

    NASA Technical Reports Server (NTRS)

    Psiaki, Mark L.

    1993-01-01

    A parallel algorithm has been developed for rapidly solving trajectory optimization problems. The goal of the work has been to develop an algorithm suitable for real-time, on-line optimal guidance through repeated solution of a trajectory optimization problem. The algorithm has been developed on an INTEL iPSC/860 message-passing parallel processor. It uses a zero-order-hold discretization of a continuous-time problem and solves the resulting nonlinear programming problem using a custom-designed augmented Lagrangian nonlinear programming algorithm. The algorithm achieves parallelism of function, derivative, and search direction calculations through the principle of domain decomposition applied along the time axis. It has been encoded and tested on three example problems: the Goddard problem; the acceleration-limited, planar minimum-time-to-the-origin problem; and a National Aerospace Plane minimum-fuel ascent guidance problem. Execution times as fast as 118 sec of wall clock time have been achieved for a 128-stage Goddard problem solved on 32 processors. A 32-stage minimum-time problem has been solved in 151 sec on 32 processors. A 32-stage National Aerospace Plane problem required 2 hours when solved on 32 processors. A speed-up factor of 7.2 has been achieved by using 32 nodes instead of 1 node to solve a 64-stage Goddard problem.

  11. MiRNA-TF-gene network analysis through ranking of biomolecules for multi-informative uterine leiomyoma dataset.

    PubMed

    Mallik, Saurav; Maulik, Ujjwal

    2015-10-01

    Gene ranking is an important problem in bioinformatics. Here, we propose a new framework for ranking biomolecules (viz., miRNAs, transcription factors/TFs and genes) in a multi-informative uterine leiomyoma dataset having both gene expression and methylation data, using a (statistical) eigenvector-centrality-based approach. First, genes that are both differentially expressed and differentially methylated are identified using the Limma statistical test. A network comprising these genes, the corresponding TFs from the TRANSFAC and ITFP databases, and the targeting miRNAs from the miRWalk database is then built. The biomolecules are then ranked based on eigenvector centrality. Our proposed method provides better average accuracy in hub-gene and non-hub-gene classification than other methods. Furthermore, pre-ranked Gene Set Enrichment Analysis is applied to the pathway and GO-term databases of the Molecular Signatures Database, using pre-ranked gene lists based on the different centrality values, to compare the ranking methods. Finally, top novel potential gene markers for uterine leiomyoma are provided. Copyright © 2015 Elsevier Inc. All rights reserved.
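
    The core ranking step, eigenvector centrality on a biomolecule network, is easy to reproduce in outline. The sketch below uses networkx on a toy graph whose miRNA/TF/gene names and edges are invented; the paper's network is built from Limma-filtered genes plus TRANSFAC/ITFP and miRWalk annotations.

        import networkx as nx

        # Hypothetical regulatory interactions (miRNA -> gene, TF -> gene).
        edges = [("miR-1", "GENE1"), ("TF-A", "GENE1"), ("TF-A", "GENE2"),
                 ("GENE1", "GENE2"), ("miR-2", "GENE2")]
        G = nx.Graph(edges)  # undirected, so centrality is symmetric

        scores = nx.eigenvector_centrality(G, max_iter=1000)
        ranking = sorted(scores, key=scores.get, reverse=True)
        print(ranking)  # biomolecules ordered by eigenvector centrality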

  12. Fair ranking of researchers and research teams.

    PubMed

    Vavryčuk, Václav

    2018-01-01

    The main drawback of ranking researchers by the number of papers, citations or by the Hirsch index is that it ignores the problem of distributing authorship among authors in multi-author publications. So far, single-author and multi-author publications have contributed equally to the publication record of a researcher. This full counting scheme is apparently unfair and causes unjust disproportions, in particular if the ranked researchers have distinctly different collaboration profiles. These disproportions are removed by the less common fractional or authorship-weighted counting schemes, which can distribute the authorship credit more properly and suppress the tendency toward unjustified inflation of co-authors. The urgent need to widely adopt a fair ranking scheme in practice is exemplified by analysing the citation profiles of several highly cited astronomers and astrophysicists. While the full counting scheme often leads to completely incorrect and misleading rankings, the fractional and authorship-weighted schemes are more accurate and applicable to the ranking of researchers as well as research teams. In addition, they suppress differences in ranking among scientific disciplines. These more appropriate schemes should urgently be adopted by scientific publication databases such as the Web of Science (Thomson Reuters) or Scopus (Elsevier).
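
    The fractional counting scheme advocated here is simple to compute. The sketch below, on an invented publication list, splits both paper and citation credit equally among co-authors; the authorship-weighted variants discussed in the paper would replace the uniform 1/n share with position-dependent weights.

        # Each paper contributes 1/n_authors of a paper and of its citations
        # to every co-author, instead of full credit (toy data).
        papers = [
            {"authors": ["A", "B", "C"], "citations": 30},
            {"authors": ["A"],           "citations": 10},
            {"authors": ["B", "C"],      "citations": 20},
        ]

        credit, cites = {}, {}
        for p in papers:
            n = len(p["authors"])
            for a in p["authors"]:
                credit[a] = credit.get(a, 0) + 1 / n             # fractional papers
                cites[a] = cites.get(a, 0) + p["citations"] / n  # fractional citations
        print(credit)  # A gets 1 + 1/3; B and C get 1/3 + 1/2 each
        print(cites)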

  13. The effect of uncertainties in distance-based ranking methods for multi-criteria decision making

    NASA Astrophysics Data System (ADS)

    Jaini, Nor I.; Utyuzhnikov, Sergei V.

    2017-08-01

    Data in multi-criteria decision making are often imprecise and changeable. Therefore, it is important to carry out a sensitivity analysis for the multi-criteria decision making problem. This paper presents a sensitivity analysis for several ranking techniques based on distance measures in multi-criteria decision making. Two types of uncertainty are considered for the sensitivity analysis. The first uncertainty is related to the input data, while the second concerns the Decision Maker's preferences (weights). The ranking techniques considered in this study are TOPSIS, the relative distance method and the trade-off ranking method. TOPSIS and the relative distance method measure the distance from an alternative to the ideal and anti-ideal solutions. In turn, the trade-off ranking method calculates the distance of an alternative to the extreme solutions and to the other alternatives. Several test cases are considered to study the performance of each ranking technique under both types of uncertainty.
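
    Of the techniques compared, TOPSIS is the most compact to state, which makes the sensitivity experiments easy to reproduce in outline. The numpy sketch below ranks three alternatives under three benefit criteria with invented scores and weights; perturbing X or w and re-running it is the kind of uncertainty test the paper performs.

        import numpy as np

        X = np.array([[7., 9., 8.],      # rows = alternatives
                      [8., 7., 8.],      # columns = benefit criteria
                      [9., 6., 7.]])
        w = np.array([0.5, 0.3, 0.2])    # Decision Maker weights

        R = X / np.linalg.norm(X, axis=0)           # normalize each criterion
        V = R * w                                   # weighted normalized matrix
        ideal, anti = V.max(axis=0), V.min(axis=0)  # ideal / anti-ideal solutions
        d_pos = np.linalg.norm(V - ideal, axis=1)   # distance to ideal
        d_neg = np.linalg.norm(V - anti, axis=1)    # distance to anti-ideal
        closeness = d_neg / (d_pos + d_neg)
        print(np.argsort(-closeness))               # alternatives, best first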

  14. Schmidt-number witnesses and bound entanglement

    NASA Astrophysics Data System (ADS)

    Sanpera, Anna; Bruß, Dagmar; Lewenstein, Maciej

    2001-05-01

    The Schmidt number of a mixed state characterizes the minimum Schmidt rank of the pure states needed to construct it. We investigate the Schmidt number of an arbitrary mixed state by studying Schmidt-number witnesses that detect it. We present a canonical form of such witnesses and provide constructive methods for their optimization. Finally, we present strong evidence that all bound entangled states with positive partial transpose in C^3 ⊗ C^3 have Schmidt number 2.

  15. Observations on Leadership, Problem Solving, and Preferred Futures of Universities

    ERIC Educational Resources Information Center

    Puncochar, Judith

    2013-01-01

    A focus on enrollments, rankings, uncertain budgets, and branding efforts to operate universities could have serious implications for discussions of sustainable solutions to complex problems and the decision-making processes of leaders. The Authentic Leadership Model for framing ill-defined problems in higher education is posited to improve the…

  16. Structure-preserving and rank-revealing QR-factorizations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bischof, C.H.; Hansen, P.C.

    1991-11-01

    The rank-revealing QR-factorization (RRQR-factorization) is a special QR-factorization that is guaranteed to reveal the numerical rank of the matrix under consideration. This makes the RRQR-factorization a useful tool in the numerical treatment of many rank-deficient problems in numerical linear algebra. In this paper, a framework is presented for the efficient implementation of RRQR algorithms, in particular for sparse matrices. A sparse RRQR algorithm should seek to preserve the structure and sparsity of the matrix as much as possible while retaining the ability to capture the numerical rank safely. To this end, the paper proposes to compute an initial QR-factorization using a restricted pivoting strategy guarded by incremental condition estimation (ICE), and then to apply the algorithm suggested by Chan and Foster to this QR-factorization. The column exchange strategy used in the initial QR-factorization exploits the fact that certain column exchanges do not change the sparsity structure, and computes a sparse QR-factorization that is a good approximation of the sought-after RRQR-factorization. Thanks to quantities produced by ICE, the Chan/Foster RRQR algorithm can be implemented very cheaply, thus verifying that the sought-after RRQR-factorization has indeed been computed. Experimental results on a model problem show that the initial QR-factorization is indeed very likely to produce an RRQR-factorization.

  17. Solving portfolio selection problems with minimum transaction lots based on conditional-value-at-risk

    NASA Astrophysics Data System (ADS)

    Setiawan, E. P.; Rosadi, D.

    2017-01-01

    Portfolio selection conventionally means 'minimizing the risk, given a certain level of return' from a set of financial assets. This problem is frequently solved with quadratic or linear programming methods, depending on the risk measure used in the objective function. However, the solutions obtained by these methods are real numbers, which may cause problems in real applications because each asset usually has a minimum transaction lot. Classical approaches considering minimum transaction lots were developed based on linear mean absolute deviation (MAD), variance (as in Markowitz's model), and semi-variance as risk measures. In this paper we investigate portfolio selection with minimum transaction lots using conditional value at risk (CVaR) as the risk measure. The mean-CVaR methodology involves only the part of the tail of the distribution that contributes to high losses, which is preferable when working with non-symmetric return distributions. Solutions of this model can be found with genetic algorithms (GA). We provide real examples using stocks from the Indonesian stock market.
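
    The CVaR objective that the genetic algorithm optimizes can be illustrated directly from scenario data. The sketch below evaluates the 95% VaR and CVaR of one lot-feasible allocation on invented returns; a GA would search over the integer lot vector to minimize CVaR subject to a required mean return.

        import numpy as np

        rng = np.random.default_rng(0)
        returns = rng.normal(0.0005, 0.01, size=(250, 3))  # 250 scenarios, 3 assets
        lots = np.array([3, 1, 2])                 # integer transaction lots
        lot_value = np.array([100., 100., 100.])   # value of one lot per asset
        weights = lots * lot_value / (lots * lot_value).sum()

        losses = -returns @ weights                # portfolio loss per scenario
        alpha = 0.95
        var = np.quantile(losses, alpha)           # Value-at-Risk at level alpha
        cvar = losses[losses >= var].mean()        # mean loss beyond VaR = CVaR
        print(var, cvar)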

  18. Solving the influence maximization problem reveals regulatory organization of the yeast cell cycle.

    PubMed

    Gibbs, David L; Shmulevich, Ilya

    2017-06-01

    The Influence Maximization Problem (IMP) aims to discover the set of nodes with the greatest influence on network dynamics. The problem has previously been applied in epidemiology and social network analysis. Here, we demonstrate the application to cell cycle regulatory network analysis for Saccharomyces cerevisiae. Fundamentally, gene regulation is linked to the flow of information. Therefore, our implementation of the IMP was framed as an information theoretic problem using network diffusion. Utilizing more than 26,000 regulatory edges from YeastMine, gene expression dynamics were encoded as edge weights using time lagged transfer entropy, a method for quantifying information transfer between variables. By picking a set of source nodes, a diffusion process covers a portion of the network. The size of the network cover relates to the influence of the source nodes. The set of nodes that maximizes influence is the solution to the IMP. By solving the IMP over different numbers of source nodes, an influence ranking on genes was produced. The influence ranking was compared to other metrics of network centrality. Although the top genes from each centrality ranking contained well-known cell cycle regulators, there was little agreement and no clear winner. However, it was found that influential genes tend to directly regulate or sit upstream of genes ranked by other centrality measures. The influential nodes act as critical sources of information flow, potentially having a large impact on the state of the network. Biological events that affect influential nodes and thereby affect information flow could have a strong effect on network dynamics, potentially leading to disease. Code and data can be found at: https://github.com/gibbsdavidl/miergolf.
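
    A minimal greedy treatment of the IMP conveys the flavor of the approach. The sketch below scores a seed set by plain reachability on a toy digraph; the gene names are borrowed from the yeast cell cycle but the edges and weights are invented, and the paper instead weights edges by time-lagged transfer entropy and measures the cover of a diffusion process.

        import networkx as nx

        G = nx.DiGraph()
        G.add_weighted_edges_from([("CLN3", "CLN1", .9), ("CLN3", "CLN2", .8),
                                   ("CLN1", "CLB5", .7), ("CLN2", "CLB6", .6),
                                   ("CLB5", "CDC20", .5)])

        def influence(seeds):
            # Crude influence proxy: how many nodes the seed set can reach.
            covered = set(seeds)
            for s in seeds:
                covered |= nx.descendants(G, s)
            return len(covered)

        seeds, k = [], 2
        for _ in range(k):  # greedily add the node with the best marginal gain
            best = max(set(G) - set(seeds), key=lambda v: influence(seeds + [v]))
            seeds.append(best)
        print(seeds, influence(seeds))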

  19. Depression literacy and help-seeking in Australian police.

    PubMed

    Reavley, Nicola J; Milner, Allison J; Martin, Angela; Too, Lay San; Papas, Alicia; Witt, Katrina; Keegel, Tessa; LaMontagne, Anthony D

    2018-02-01

    To assess depression literacy, help-seeking and help-offering to others in members of the police force in the state of Victoria, Australia. All staff in police stations involved in a cluster randomised controlled trial of an integrated workplace mental health intervention were invited to participate. Survey questions covered sociodemographic and employment information, recognition of depression in a vignette, stigma, treatment beliefs, willingness to assist co-workers with mental health problems, help-giving and help-seeking behaviours, and intentions to seek help. Using the baseline dataset associated with the trial, the paper presents a descriptive analysis of mental health literacy and helping behaviours, comparing police station leaders and lower ranks. Respondents were 806 staff, comprising 618 lower-ranked staff and 188 leaders. Almost 84% of respondents were able to correctly label the problem described in the vignette. Among those who had helped someone with a mental health problem, both lower ranks and leaders most commonly reported 'talking to the person' although leaders were more likely to facilitate professional help. Leaders' willingness to assist the person and confidence in doing so was very high, and over 80% of leaders appropriately rated police psychologists, general practitioners, psychologists, talking to a peer and contacting welfare as helpful. However, among both leaders and lower ranks with mental health problems, the proportion of those unlikely to seek professional help was greater than those who were likely to seek it. Knowledge about evidence-based interventions for depression was lower in this police sample than surveys in the general population, pointing to the need for education and training to improve mental health literacy. Such education should also aim to overcome barriers to professional help-seeking. Interventions that aim to improve mental health literacy and help-seeking behaviour appear to be suitable targets for better protecting police member mental health.

  20. Peeling Onions: Some Tools and a Recipe for Solving Ethical Dilemmas.

    ERIC Educational Resources Information Center

    Gordon, Joan Claire

    1993-01-01

    Presents a process for solving ethical dilemmas: define the problem; identify facts; determine values; "slice" the problem different ways--duties, virtues, rights, and common good; rank ethical considerations; consult colleagues; and take action. (SK)

  1. Direct Solve of Electrically Large Integral Equations for Problem Sizes to 1M Unknowns

    NASA Technical Reports Server (NTRS)

    Shaeffer, John

    2008-01-01

    Matrix methods for solving integral equations via direct LU factorization are presently limited to weeks or months of very expensive supercomputer time for problem sizes of several hundred thousand unknowns. This report presents matrix LU factor solutions of electromagnetic scattering problems for problem sizes up to one million unknowns with thousands of right-hand sides that run in mere days on PC-level hardware. This EM solution is accomplished by exploiting the numerical low-rank nature of spatially blocked unknowns, using the Adaptive Cross Approximation to compress the rank-deficient blocks of the system Z matrix, the L and U factors, the right-hand-side forcing function and the final current solution. This compressed matrix solution is applied to a frequency domain EM solution of Maxwell's equations using the standard Method of Moments approach. The compressed matrix storage and operation counts lead to orders-of-magnitude reductions in memory and run time.
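
    The compression engine here, the Adaptive Cross Approximation, can be sketched in a few lines. The version below uses full pivoting on an explicitly formed residual for clarity, which defeats the purpose at scale; production ACA uses partial pivoting and touches only a few rows and columns of each matrix block.

        import numpy as np

        def aca(A, tol=1e-6, max_rank=50):
            # Build A ~ U @ V from successive rank-1 "crosses" of the residual.
            R = A.copy()                 # explicit residual, for clarity only
            U, V = [], []
            for _ in range(max_rank):
                i, j = np.unravel_index(np.abs(R).argmax(), R.shape)
                if abs(R[i, j]) < tol:   # residual small enough: stop
                    break
                U.append(R[:, j] / R[i, j])    # scaled pivot column
                V.append(R[i, :].copy())       # pivot row
                R -= np.outer(U[-1], V[-1])    # peel off the rank-1 cross
            return np.column_stack(U), np.vstack(V)

        # Smooth off-diagonal decay makes this matrix numerically low rank.
        A = np.fromfunction(lambda i, j: 1.0 / (1.0 + abs(i - j)), (200, 200))
        U, V = aca(A)
        print(U.shape[1], np.linalg.norm(A - U @ V) / np.linalg.norm(A))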

  2. The effect of tidal forces on the minimum energy configurations of the full three-body problem

    NASA Astrophysics Data System (ADS)

    Levine, Edward

    We investigate the evolution of minimum energy configurations for the Full Three-Body Problem. A stable ternary asteroid system will gradually become unstable due to the Yarkovsky-O'Keefe-Radzievskii-Paddack (YORP) effect, and an unpredictable trajectory will ensue. Through the interaction of tidal torques, energy in the system dissipates in the form of heat until a stable minimum energy configuration is reached. We present a simulation that describes the dynamical evolution of three bodies under the mutual effects of gravity and tidal torques. The simulations show that the bodies do not get stuck in local minima but transition to the predicted minimum energy configuration.

  3. Risk Management using Dependency Structure Matrix

    NASA Astrophysics Data System (ADS)

    Petković, Ivan

    2011-09-01

    An efficient method based on dependency structure matrix (DSM) analysis is given for ranking risks in a complex system or process whose entities are mutually dependent. The rank is determined from the elements of the unique positive eigenvector corresponding to the spectral radius of the matrix modeling the considered engineering system. For demonstration, the risk problem of NASA's robotic spacecraft is analyzed.
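
    The ranking step reduces to computing the Perron (positive) eigenvector of a nonnegative matrix, which power iteration delivers directly. A sketch on an invented 3x3 dependency structure matrix:

        import numpy as np

        # Toy DSM: D[i, j] > 0 means risk item i is influenced by item j
        # (values are illustrative only).
        D = np.array([[0., 2., 1.],
                      [1., 0., 3.],
                      [2., 1., 0.]])

        x = np.ones(D.shape[0])
        for _ in range(200):          # power iteration: x converges to the
            x = D @ x                 # positive eigenvector associated with
            x /= np.linalg.norm(x)    # the spectral radius (Perron-Frobenius)
        print(x, np.argsort(-x))      # eigenvector and induced risk ranking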

  4. Spectral Regularization Algorithms for Learning Large Incomplete Matrices.

    PubMed

    Mazumder, Rahul; Hastie, Trevor; Tibshirani, Robert

    2010-03-01

    We use convex relaxation techniques to provide a sequence of regularized low-rank solutions for large-scale matrix completion problems. Using the nuclear norm as a regularizer, we provide a simple and very efficient convex algorithm for minimizing the reconstruction error subject to a bound on the nuclear norm. Our algorithm Soft-Impute iteratively replaces the missing elements with those obtained from a soft-thresholded SVD. With warm starts this allows us to efficiently compute an entire regularization path of solutions on a grid of values of the regularization parameter. The computationally intensive part of our algorithm is in computing a low-rank SVD of a dense matrix. Exploiting the problem structure, we show that the task can be performed with a complexity linear in the matrix dimensions. Our semidefinite-programming algorithm is readily scalable to large matrices: for example it can obtain a rank-80 approximation of a 10^6 × 10^6 incomplete matrix with 10^5 observed entries in 2.5 hours, and can fit a rank-40 approximation to the full Netflix training set in 6.6 hours. Our methods show very good performance both in training and test error when compared to other competitive state-of-the-art techniques.
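
    The core Soft-Impute iteration is short enough to sketch in full. The version below uses a dense SVD and a single fixed regularization value on synthetic data; the paper's implementation computes low-rank SVDs that exploit the sparse-plus-low-rank structure and follows a whole path of regularization values with warm starts.

        import numpy as np

        def soft_impute(X, mask, lam, n_iter=100):
            # Iteratively refill missing entries from a soft-thresholded SVD.
            Z = np.where(mask, X, 0.0)               # initial fill: zeros
            for _ in range(n_iter):
                U, s, Vt = np.linalg.svd(np.where(mask, X, Z), full_matrices=False)
                s = np.maximum(s - lam, 0.0)         # nuclear-norm shrinkage
                Z = (U * s) @ Vt                     # current low-rank estimate
            return Z

        rng = np.random.default_rng(1)
        X = rng.standard_normal((50, 5)) @ rng.standard_normal((5, 40))  # rank 5
        mask = rng.random(X.shape) < 0.5             # observe ~50% of entries
        Z = soft_impute(X, mask, lam=1.0)
        print(np.linalg.norm((Z - X)[~mask]) / np.linalg.norm(X[~mask]))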

  5. Spectral Regularization Algorithms for Learning Large Incomplete Matrices

    PubMed Central

    Mazumder, Rahul; Hastie, Trevor; Tibshirani, Robert

    2010-01-01

    We use convex relaxation techniques to provide a sequence of regularized low-rank solutions for large-scale matrix completion problems. Using the nuclear norm as a regularizer, we provide a simple and very efficient convex algorithm for minimizing the reconstruction error subject to a bound on the nuclear norm. Our algorithm Soft-Impute iteratively replaces the missing elements with those obtained from a soft-thresholded SVD. With warm starts this allows us to efficiently compute an entire regularization path of solutions on a grid of values of the regularization parameter. The computationally intensive part of our algorithm is in computing a low-rank SVD of a dense matrix. Exploiting the problem structure, we show that the task can be performed with a complexity linear in the matrix dimensions. Our semidefinite-programming algorithm is readily scalable to large matrices: for example it can obtain a rank-80 approximation of a 10^6 × 10^6 incomplete matrix with 10^5 observed entries in 2.5 hours, and can fit a rank-40 approximation to the full Netflix training set in 6.6 hours. Our methods show very good performance both in training and test error when compared to other competitive state-of-the-art techniques. PMID:21552465

  6. Steiner trees and spanning trees in six-pin soap films

    NASA Astrophysics Data System (ADS)

    Dutta, Prasun; Khastgir, S. Pratik; Roy, Anushree

    2010-02-01

    The problem of finding minimum (local as well as absolute) path lengths joining given points (or terminals) on a plane is known as the Steiner problem. The Steiner problem arises in finding the minimum total road length joining several towns and cities. We study the Steiner tree problem using six-pin soap films. Experimentally, we observe spanning trees as well as Steiner trees partly by varying the pin diameter. We propose a possibly exact expression for the length of a spanning tree or a Steiner tree, which fails mysteriously in certain cases.
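
    For three terminals the interior Steiner point coincides with the Fermat point, which Weiszfeld's iteration computes numerically. A sketch on three invented pin positions (valid when every angle of the triangle is below 120 degrees, so the minimizer lies in the interior):

        import numpy as np

        # Pin positions (toy coordinates); the Fermat/Steiner point minimizes
        # the total distance to the three pins.
        pins = np.array([[0.0, 0.0], [4.0, 0.0], [1.0, 3.0]])

        p = pins.mean(axis=0)                      # start from the centroid
        for _ in range(200):                       # Weiszfeld iteration
            d = np.linalg.norm(pins - p, axis=1)   # distances to each pin
            w = 1.0 / d
            p = (pins * w[:, None]).sum(axis=0) / w.sum()
        print(p, np.linalg.norm(pins - p, axis=1).sum())  # point, tree length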

  7. Weighted Discriminative Dictionary Learning based on Low-rank Representation

    NASA Astrophysics Data System (ADS)

    Chang, Heyou; Zheng, Hao

    2017-01-01

    Low-rank representation has been widely used in the field of pattern classification, especially when both training and testing images are corrupted with large noise. The dictionary plays an important role in low-rank representation. With respect to a semantic dictionary, the optimal representation matrix should be block-diagonal. However, traditional dictionary learning methods based on low-rank representation cannot effectively exploit the discriminative information between the data and the dictionary. To address this problem, this paper proposes weighted discriminative dictionary learning based on low-rank representation, in which a weighted representation regularization term is constructed. The regularization term associates the label information of both training samples and dictionary atoms, and encourages a discriminative representation with a class-wise block-diagonal structure, which can further improve classification performance when both training and testing images are corrupted with large noise. Experimental results demonstrate the advantages of the proposed method over state-of-the-art methods.

  8. Scalable Faceted Ranking in Tagging Systems

    NASA Astrophysics Data System (ADS)

    Orlicki, José I.; Alvarez-Hamelin, J. Ignacio; Fierens, Pablo I.

    Nowadays, collaborative web tagging systems, which allow users to upload, comment on and recommend content, are growing. Such systems can be represented as graphs where nodes correspond to users and tagged links to recommendations. In this paper we analyze the problem of computing a ranking of users with respect to a facet described as a set of tags. A straightforward solution is to compute a PageRank-like algorithm on a facet-related graph, but this is not feasible for online computation. We propose an alternative: (i) a ranking for each tag is computed offline on the basis of tag-related subgraphs; (ii) a faceted order is generated online by merging the rankings corresponding to all the tags in the facet. Based on a graph analysis of YouTube and Flickr, we show that step (i) is scalable. We also present efficient algorithms for step (ii), which are evaluated by comparing their results with two gold standards.

  9. An ensemble rank learning approach for gene prioritization.

    PubMed

    Lee, Po-Feng; Soo, Von-Wun

    2013-01-01

    Several different computational approaches have been developed to solve the gene prioritization problem. We use ensemble boosting learning techniques to combine several computational approaches for gene prioritization in order to improve overall performance. In particular, we add a heuristic weighting function to the RankBoost algorithm according to: 1) the absolute ranks generated by the adopted methods for a certain gene, and 2) the ranking relationships between all gene pairs from each prioritization result. We select 13 known prostate cancer genes in the OMIM database as the training set and protein-coding gene data in the HGNC database as the test set. We adopt a leave-one-out strategy for the ensemble rank boosting learning. The experimental results show that our ensemble learning approach outperforms the four gene-prioritization methods in the ToppGene suite in the ranking of the 13 known genes in terms of mean average precision, ROC and AUC measures.

  10. Routing Algorithm based on Minimum Spanning Tree and Minimum Cost Flow for Hybrid Wireless-optical Broadband Access Network

    NASA Astrophysics Data System (ADS)

    Le, Zichun; Suo, Kaihua; Fu, Minglei; Jiang, Ling; Dong, Wen

    2012-03-01

    In order to minimize the average end-to-end delay of data transport in a hybrid wireless-optical broadband access network, a novel routing algorithm named MSTMCF (minimum spanning tree and minimum cost flow) is devised. The routing problem is described as a minimum spanning tree and minimum cost flow model, and the corresponding algorithm procedures are given. To verify the effectiveness of the MSTMCF algorithm, extensive simulations based on OWNS have been carried out under different types of traffic source.
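
    The first stage of such a scheme is a plain minimum spanning tree computation. A toy sketch with networkx, using invented nodes and delay weights; the minimum cost flow stage would then route traffic over the resulting backbone.

        import networkx as nx

        # Toy access-network topology: edge weights model link delays.
        G = nx.Graph()
        G.add_weighted_edges_from([("gw", "a", 2.0), ("gw", "b", 1.5),
                                   ("a", "b", 1.0), ("a", "c", 2.5),
                                   ("b", "c", 1.2)])

        T = nx.minimum_spanning_tree(G)        # delay-minimal backbone tree
        print(sorted(T.edges(data="weight")))  # backbone edges with their delays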

  11. EPA'S WATERSHED MANAGEMENT AND MODELING RESEARCH PROGRAM

    EPA Science Inventory

    Watershed management presumes that community groups can best solve many water quality and ecosystem problems at the watershed level rather than at the individual site, receiving waterbody, or discharger level. After assessing and ranking watershed problems, and setting environ...

  12. Multichannel myopic deconvolution in underwater acoustic channels via low-rank recovery

    PubMed Central

    Tian, Ning; Byun, Sung-Hoon; Sabra, Karim; Romberg, Justin

    2017-01-01

    This paper presents a technique for solving the multichannel blind deconvolution problem. The authors observe the convolution of a single (unknown) source with K different (unknown) channel responses; from these channel outputs, the authors want to estimate both the source and the channel responses. The authors show how this classical signal processing problem can be viewed as solving a system of bilinear equations, which in turn can be recast as recovering a rank-1 matrix from a set of linear observations. Prior studies in the area of low-rank matrix recovery have identified effective convex relaxations for problems of this type, and efficient, scalable heuristic solvers that enable these techniques to work with thousands of unknown variables. The authors show how a priori information about the channels can be used to build a linear model for the channels, which in turn makes solving these systems of equations well-posed. This study demonstrates the robustness of the methodology to measurement noise and parametrization errors of the channel impulse responses with several stylized and shallow-water acoustic channel simulations. The performance of the methodology is also verified experimentally using shipping noise recorded on short bottom-mounted vertical line arrays. PMID:28599565

  13. The Role of Grunt Calls in the Social Dominance Hierarchy of the White-Lipped Peccary (Mammalia, Tayassuidae).

    PubMed

    Nogueira, Selene S C; Caselli, Christini B; Costa, Thaise S O; Moura, Leiliany N; Nogueira-Filho, Sérgio L G

    2016-01-01

    Grunt-like calls are present in the vocal repertoire of many group-living mammals and seem to facilitate social interactions between lower and higher-ranking members. The white-lipped peccary (Tayassu pecari) lives in stable hierarchical mixed-sex groups and like non-human primates, usually emits grunt-like calls following aggressive interactions, mainly during feeding contexts. We investigated the possible functions of peccaries' grunt-like calls and their relationship to the individuals' social rank, identity, and sexual dimorphism. We observed that low-ranking individuals emitted grunt-like calls more often than high-ranking ones, and that the alpha male never emitted this vocalization. Moreover, the mean minimum frequency of grunt-like calls decreased as the peccary's rank increased. The findings revealed differences among individual grunts, but the low accuracy of cross-validation (16%) suggests that individual recognition in peccaries may be less important than an honest signal of individual social status. In addition, the absence of differences in the acoustic parameters of grunt-like calls between males and females points to the lack of sexual dimorphism in this species. We verified that after hearing grunt calls, dominant opponents were more likely to cease attacking a victim, or at least delay the continuation of conflict, probably decreasing the severity of agonistic interactions. Our findings are particularly important to improve the current understanding of the role of grunt-like calls in herd-living mammals with linear dominant hierarchies, and strongly suggest that they are involved in the maintenance of herd social stability and cohesion.

  14. PROFESSIONAL INSECURITIES OF PROSPECTIVE TEACHERS.

    ERIC Educational Resources Information Center

    LUECK, WILLIAM R.

    To determine which common teaching problems cause the greatest concern or insecurity among prospective teachers, 445 juniors (243 in 1962-63 and 205 in 1963-64) taking a secondary school methods course were asked to rank twelve major problems in the order in which they caused concern. The problems were compiled from those occurring frequently in…

  15. Identifying Epigenetic Biomarkers using Maximal Relevance and Minimal Redundancy Based Feature Selection for Multi-Omics Data.

    PubMed

    Mallik, Saurav; Bhadra, Tapas; Maulik, Ujjwal

    2017-01-01

    Epigenetic biomarker discovery is an important task in bioinformatics. In this article, we develop a new framework for identifying statistically significant epigenetic biomarkers using maximal-relevance, minimal-redundancy criterion based feature (gene) selection for multi-omics datasets. First, we determine the genes that have both expression and methylation values and follow a normal distribution. Similarly, we identify the genes which have both expression and methylation values but do not follow a normal distribution. For each case, we utilize a gene-selection method that provides maximally relevant but variable-weighted minimally redundant genes as the top-ranked genes. For statistical validation, we apply the t-test on both the expression and methylation data consisting of only the normally distributed top-ranked genes to determine how many of them are both differentially expressed and methylated. Similarly, we utilize the Limma package to perform a non-parametric Empirical Bayes test on both the expression and methylation data comprising only the non-normally distributed top-ranked genes to identify how many of them are both differentially expressed and methylated. We finally report the top-ranking significant gene markers with biological validation. Moreover, our framework improves the positive predictive rate and reduces the false positive rate in marker identification. In addition, we provide a comparative analysis of our gene-selection method and other methods based on the classification performances obtained using several well-known classifiers.

  16. Achieving spectrum conservation for the minimum-span and minimum-order frequency assignment problems

    NASA Technical Reports Server (NTRS)

    Heyward, Ann O.

    1992-01-01

    Effective and efficient solution of frequency assignment problems assumes increasing importance as the radiofrequency spectrum experiences ever-increasing utilization by diverse communications services, requiring that the most efficient use of this resource be achieved. The research presented explores a general approach to the frequency assignment problem, in which such problems are categorized by the appropriate spectrum-conserving objective function and each is treated as an N-job, M-machine scheduling problem appropriate for that objective. The results obtained illustrate that such an approach is an effective means of achieving spectrum-conserving frequency assignments for communications systems in a variety of environments.

  17. First principles crystal engineering of nonlinear optical materials. I. Prototypical case of urea

    NASA Astrophysics Data System (ADS)

    Masunov, Artëm E.; Tannu, Arman; Dyakov, Alexander A.; Matveeva, Anastasia D.; Freidzon, Alexandra Ya.; Odinokov, Alexey V.; Bagaturyants, Alexander A.

    2017-06-01

    Crystalline materials with nonlinear optical (NLO) properties are critically important for several technological applications, including nanophotonic and second harmonic generation devices. Urea is often considered to be a standard NLO material, due to the combination of non-centrosymmetric crystal packing and capacity for intramolecular charge transfer. Various approaches to crystal engineering of non-centrosymmetric molecular materials have been reported in the literature. Here we propose using global lattice energy minimization to predict the crystal packing from first principles. We developed a methodology that includes the following: (1) parameter derivation for the polarizable force field AMOEBA; (2) local minimizations of crystal structures with these parameters, combined with the evolutionary algorithm for a global minimum search implemented in the program USPEX; (3) filtering out duplicate polymorphs produced; (4) reoptimization and final ranking based on density functional theory (DFT) with many-body dispersion (MBD) correction; and (5) prediction of the second-order susceptibility tensor by a finite field approach. This methodology was applied to predict virtual urea polymorphs. After filtering based on packing similarity, only two distinct packing modes were predicted: one experimental and one hypothetical. DFT + MBD ranking established the non-centrosymmetric crystal packing as the global minimum, in agreement with the experiment. The finite field approach was used to predict the nonlinear susceptibility, and H-bonding was found to account for a 2.5-fold increase from the molecular hyperpolarizability to the bulk value.

  18. A minimum version of log-rank test for testing the existence of cancer cure using relative survival data.

    PubMed

    Yu, Binbing

    2012-01-01

    Cancer survival is one of the most important measures to evaluate the effectiveness of treatment and early diagnosis. The ultimate goal of cancer research and patient care is the cure of cancer. As cancer treatments progress, cure becomes a reality for many cancers if patients are diagnosed early and get effective treatment. If a cure does exist for a certain type of cancer, it is useful to estimate the time of cure. For cancers that impose excess risk of mortality, it is informative to understand the difference in survival between cancer patients and the general cancer-free population. In population-based cancer survival studies, relative survival is the standard measure of excess mortality due to cancer. Cure is achieved when the survival of cancer patients is equivalent to that of the general population. This definition of cure is usually called the statistical cure, which is an important measure of burden due to cancer. In this paper, a minimum version of the log-rank test is proposed to test the equivalence of cancer patients' survival using the relative survival data. Performance of the proposed test is evaluated by simulation. Relative survival data from population-based cancer registries in SEER Program are used to examine patients' survival after diagnosis for various major cancer sites. Copyright © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
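
    The building block of the proposed procedure is the standard two-sample log-rank test, which survival packages provide directly. The sketch below runs it on invented survival times with the lifelines library; the paper's minimum version instead takes the minimum of such statistics over candidate cure time points, applied to relative survival data.

        import numpy as np
        from lifelines.statistics import logrank_test

        rng = np.random.default_rng(2)
        t_cancer = rng.exponential(8.0, 200)   # years survived, cancer cohort
        t_pop = rng.exponential(10.0, 200)     # years survived, comparison group
        observed = np.ones(200, dtype=bool)    # no censoring in this toy example

        res = logrank_test(t_cancer, t_pop,
                           event_observed_A=observed, event_observed_B=observed)
        print(res.test_statistic, res.p_value)  # small p rejects equivalent survival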

  19. Evaluating space station applications of automation and robotics technologies from a human productivity point of view

    NASA Technical Reports Server (NTRS)

    Bard, J. F.

    1986-01-01

    The role that automation, robotics, and artificial intelligence will play in Space Station operations is now beginning to take shape. Although there are only limited data on the precise nature of the payoffs that these technologies are likely to afford, there is a general consensus that, at a minimum, the following benefits will be realized: increased responsiveness to innovation, lower operating costs, and reduced exposure to hazards. Nevertheless, the question arises as to how much automation can be justified within the technical and economic constraints of the program. The purpose of this paper is to present a methodology which can be used to evaluate and rank different approaches to automating the functions and tasks planned for the Space Station. Special attention is given to the impact of advanced automation on human productivity. The methodology employed is based on the Analytic Hierarchy Process. This permits the introduction of individual judgements to resolve the conflict that normally arises when incomparable criteria underlie the selection process. Because of the large number of factors involved in the model, the overall problem is decomposed into four subproblems focusing on human productivity, economics, design, and operations, respectively. The results from each are then combined to yield the final rankings. To demonstrate the methodology, an example is developed based on the selection of an on-orbit assembly system. Five alternatives for performing this task are identified, ranging from an astronaut working in space to a dexterous manipulator with sensory feedback. Computational results are presented along with their implications. A final parametric analysis shows that the outcome is locally insensitive to all but complete reversals in preference.
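
    The Analytic Hierarchy Process step can be sketched compactly: given a pairwise comparison matrix, the priority vector is the normalized principal eigenvector. The matrix entries below are invented for illustration; the paper builds such matrices for each of its four subproblems.

        import numpy as np

        # P[i, j] on Saaty's 1-9 scale: how strongly criterion i is preferred
        # to criterion j (reciprocal matrix, toy values).
        P = np.array([[1.,    3.,   5.],
                      [1/3.,  1.,   2.],
                      [1/5., 1/2.,  1.]])

        vals, vecs = np.linalg.eig(P)
        principal = vecs[:, np.argmax(vals.real)].real
        w = np.abs(principal) / np.abs(principal).sum()  # priority vector
        print(w)  # criterion weights used to rank the alternatives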

  20. A simplified risk-ranking system for prioritizing toxic pollution sites in low- and middle-income countries.

    PubMed

    Caravanos, Jack; Gualtero, Sandra; Dowling, Russell; Ericson, Bret; Keith, John; Hanrahan, David; Fuller, Richard

    2014-01-01

    In low- and middle-income countries (LMICs), chemical exposures in the environment due to hazardous waste sites and toxic pollutants are typically poorly documented and their health impacts insufficiently quantified. Furthermore, there often is only limited understanding of the health and environmental consequences of point source pollution problems, and little consensus on how to assess and rank them. The contributions of toxic environmental exposures to the global burden of disease are not well characterized. The aim of this study was to describe the simple but effective approach taken by Blacksmith Institute's Toxic Sites Identification Program to quantify and rank toxic exposures in LMICs. This system is already in use at more than 3000 sites in 48 countries such as India, Indonesia, China, Ghana, Kenya, Tanzania, Peru, Bolivia, Argentina, Uruguay, Armenia, Azerbaijan, and Ukraine. A hazard ranking system formula, the Blacksmith Index (BI), takes into account important factors such as the scale of the pollution source, the size of the population possibly affected, and the exposure pathways, and is designed for use reliably in low-resource settings by local personnel provided with limited training. Four representative case studies are presented, with varying locations, populations, pollutants, and exposure pathways. The BI was successfully applied to assess the extent and severity of environmental pollution problems at these sites. The BI is a risk-ranking tool that provides direct and straightforward characterization, quantification, and prioritization of toxic pollution sites in settings where time, money, or resources are limited. It will be an important and useful tool for addressing toxic pollution problems in LMICs. Although the BI does not have the sophistication of the US Environmental Protection Agency's Hazard Ranking System, the case studies presented here document the effectiveness of the BI in the field, especially in low-resource settings. Understanding of the risks posed by toxic pollution sites helps assure better use of resources to manage sites and mitigate risks to public health. Quantification of these hazards is an important input to assessments of the global burden of disease. Copyright © 2014 Icahn School of Medicine at Mount Sinai. Published by Elsevier Inc. All rights reserved.

  1. Rank-based pooling for deep convolutional neural networks.

    PubMed

    Shi, Zenglin; Ye, Yangdong; Wu, Yunpeng

    2016-11-01

    Pooling is a key mechanism in deep convolutional neural networks (CNNs) which helps to achieve translation invariance. Numerous studies, both empirical and theoretical, show that pooling consistently boosts the performance of CNNs. Conventional pooling methods operate on activation values. In this work, we instead propose rank-based pooling. It is derived from the observation that the ranking list is invariant under changes of activation values in a pooling region, so rank-based pooling may achieve more robust performance. In addition, the reasonable use of ranks can avoid the scale problems encountered by value-based methods. The novel pooling mechanism can be regarded as an instance of weighted pooling, where a weighted sum of activations is used to generate the pooling output. This pooling mechanism can be realized as rank-based average pooling (RAP), rank-based weighted pooling (RWP) and rank-based stochastic pooling (RSP), according to different weighting strategies. As another major contribution, we present a novel criterion to analyze the discriminant ability of various pooling methods, a question heavily under-researched in the machine learning and computer vision communities. Experimental results on several image benchmarks show that rank-based pooling outperforms existing pooling methods in classification performance. We further demonstrate better performance on the CIFAR datasets by integrating RSP into Network-in-Network. Copyright © 2016 Elsevier Ltd. All rights reserved.
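
    Of the three variants, rank-based average pooling (RAP) is the simplest to sketch: each pooling region averages its t top-ranked activations rather than all of them, so t equal to the region size recovers average pooling and t = 1 recovers max pooling. A numpy sketch assuming a 2x2 window with stride 2:

        import numpy as np

        def rank_avg_pool(x, t=3):
            # Average the t highest-ranked activations in each 2x2 region.
            h, w = x.shape
            out = np.empty((h // 2, w // 2))
            for i in range(0, h, 2):
                for j in range(0, w, 2):
                    region = np.sort(x[i:i+2, j:j+2].ravel())[::-1]  # rank by value
                    out[i // 2, j // 2] = region[:t].mean()          # top-t average
            return out

        x = np.arange(16, dtype=float).reshape(4, 4)
        print(rank_avg_pool(x, t=3))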

  2. Planning Minimum-Energy Paths in an Off-Road Environment with Anisotropic Traversal Costs and Motion Constraints

    DTIC Science & Technology

    1989-06-01

    problems, and (3) weighted-region problems. The minimum-energy path-planning problem addressed in this dissertation is a hybrid: its traversal cost contains components that are strictly vehicle dependent, components that are strictly terrain dependent, and components representing a hybrid of the two (single-segment braking/multiple-segment hybrid). Using Eq. (3.46), the traversal cost U_{0,p-1} can be rewritten as U_{0,p-1} = mgD |tan theta| (4.12a).

  3. Fair ranking of researchers and research teams

    PubMed Central

    2018-01-01

    The main drawback of ranking researchers by the number of papers, citations or by the Hirsch index is that it ignores the problem of distributing authorship among authors in multi-author publications. So far, single-author and multi-author publications have contributed equally to the publication record of a researcher. This full counting scheme is apparently unfair and causes unjust disproportions, in particular if the ranked researchers have distinctly different collaboration profiles. These disproportions are removed by the less common fractional or authorship-weighted counting schemes, which can distribute the authorship credit more properly and suppress the tendency toward unjustified inflation of co-authors. The urgent need to widely adopt a fair ranking scheme in practice is exemplified by analysing the citation profiles of several highly cited astronomers and astrophysicists. While the full counting scheme often leads to completely incorrect and misleading rankings, the fractional and authorship-weighted schemes are more accurate and applicable to the ranking of researchers as well as research teams. In addition, they suppress differences in ranking among scientific disciplines. These more appropriate schemes should urgently be adopted by scientific publication databases such as the Web of Science (Thomson Reuters) or Scopus (Elsevier). PMID:29621316

  4. A new mutually reinforcing network node and link ranking algorithm

    PubMed Central

    Wang, Zhenghua; Dueñas-Osorio, Leonardo; Padgett, Jamie E.

    2015-01-01

    This study proposes a novel Normalized Wide network Ranking algorithm (NWRank) that has the advantage of ranking nodes and links of a network simultaneously. This algorithm combines the mutual reinforcement feature of Hypertext Induced Topic Selection (HITS) and the weight normalization feature of PageRank. Relative weights are assigned to links based on the degree of the adjacent neighbors and the Betweenness Centrality instead of assigning the same weight to every link as assumed in PageRank. Numerical experiment results show that NWRank performs consistently better than HITS, PageRank, eigenvector centrality, and edge betweenness from the perspective of network connectivity and approximate network flow, which is also supported by comparisons with the expensive N-1 benchmark removal criteria based on network efficiency. Furthermore, it can avoid some problems, such as the Tightly Knit Community effect, which exists in HITS. NWRank provides a new inexpensive way to rank nodes and links of a network, which has practical applications, particularly to prioritize resource allocation for upgrade of hierarchical and distributed networks, as well as to support decision making in the design of networks, where node and link importance depend on a balance of local and global integrity. PMID:26492958

  5. Learning to rank using user clicks and visual features for image retrieval.

    PubMed

    Yu, Jun; Tao, Dacheng; Wang, Meng; Rui, Yong

    2015-04-01

    The inconsistency between textual features and visual contents can cause poor image search results. To solve this problem, click features, which are more reliable than textual information in judging the relevance between a query and clicked images, are adopted in the image ranking model. However, existing ranking models cannot integrate visual features, which are effective in refining click-based search results. In this paper, we propose a novel ranking model based on the learning-to-rank framework. Visual features and click features are utilized simultaneously to obtain the ranking model. Specifically, the proposed approach is based on large-margin structured output learning, and visual consistency is integrated with the click features through a hypergraph regularizer term. In accordance with the fast alternating linearization method, we design a novel algorithm to optimize the objective function. This algorithm alternately minimizes two different approximations of the original objective function by keeping one function unchanged and linearizing the other. We conduct experiments on a large-scale dataset collected from the Microsoft Bing image search engine, and the results demonstrate that the proposed learning-to-rank models based on visual features and user clicks outperform state-of-the-art algorithms.

  6. Low-Rank Correction Methods for Algebraic Domain Decomposition Preconditioners

    DOE PAGES

    Li, Ruipeng; Saad, Yousef

    2017-08-01

    This study presents a parallel preconditioning method for distributed sparse linear systems, based on an approximate inverse of the original matrix, that adopts a general framework of distributed sparse matrices and exploits domain decomposition (DD) and low-rank corrections. The DD approach decouples the matrix and, once it is inverted, a low-rank approximation is applied by exploiting the Sherman-Morrison-Woodbury formula, which yields two variants of the preconditioning method. The low-rank expansion is computed by the Lanczos procedure with reorthogonalization. Numerical experiments indicate that, when combined with Krylov subspace accelerators, this preconditioner can be efficient and robust for solving symmetric sparse linear systems. Comparisons with pARMS, a DD-based parallel incomplete LU (ILU) preconditioning method, are presented for solving Poisson's equation and linear elasticity problems.
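
    The algebra at the heart of the low-rank correction is the Sherman-Morrison-Woodbury identity, which applies (B + U V^T)^{-1} using only solves with B and one small k-by-k system. A generic numpy sketch on toy data; in the paper, B comes from the decoupled domain-decomposition part and U, V from a Lanczos procedure.

        import numpy as np

        rng = np.random.default_rng(3)
        n, k = 100, 5
        d = rng.uniform(1.0, 2.0, n)        # diagonal stand-in for the DD part B
        U = rng.standard_normal((n, k))     # low-rank correction factors
        V = rng.standard_normal((n, k))
        b = rng.standard_normal(n)

        BiU = U / d[:, None]                           # B^{-1} U (diagonal solve)
        Bib = b / d                                    # B^{-1} b
        S = np.eye(k) + V.T @ BiU                      # small capacitance matrix
        x = Bib - BiU @ np.linalg.solve(S, V.T @ Bib)  # Woodbury solve
        print(np.linalg.norm((np.diag(d) + U @ V.T) @ x - b))  # ~ machine precision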

  7. Low-Rank Correction Methods for Algebraic Domain Decomposition Preconditioners

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Ruipeng; Saad, Yousef

    This study presents a parallel preconditioning method for distributed sparse linear systems, based on an approximate inverse of the original matrix, that adopts a general framework of distributed sparse matrices and exploits domain decomposition (DD) and low-rank corrections. The DD approach decouples the matrix and, once inverted, a low-rank approximation is applied by exploiting the Sherman--Morrison--Woodbury formula, which yields two variants of the preconditioning methods. The low-rank expansion is computed by the Lanczos procedure with reorthogonalizations. Numerical experiments indicate that, when combined with Krylov subspace accelerators, this preconditioner can be efficient and robust for solving symmetric sparse linear systems. Comparisons with pARMS, a DD-based parallel incomplete LU (ILU) preconditioning method, are presented for solving Poisson's equation and linear elasticity problems.

  8. Primary-Care Weight-Management Strategies: Parental Priorities and Preferences

    PubMed Central

    Turer, Christy Boling; Upperman, Carla; Merchant, Zahra; Montaño, Sergio; Flores, Glenn

    2015-01-01

    Objective Examine parental perspectives/rankings of the most important weight-management clinical practices and determine whether preferences/rankings differ when parents disagree that their child is overweight. Methods Mixed-methods analysis of a 32-question survey of parents of 2-18 year-old overweight children assessing parental agreement that their child is overweight, the single most important thing providers can do to improve weight status, rankings of AAP-recommended clinical practices, and preferred follow-up interval. Four independent reviewers analyzed open-response data to identify qualitative themes/subthemes. Multivariable analyses examined parental rankings, preferred follow-up interval, and differences by agreement with their child’s overweight assessment. Results Thirty-six percent of 219 children were overweight, 42% were obese, and 22% severely obese; 16% of parents disagreed with their child’s overweight assessment. Qualitative analysis of the most important practice to help overweight children yielded 10 themes; unique to parents disagreeing with their children’s overweight assessments was, “change weight-status assessments.” After adjustment, the three highest-ranked clinical practices were “check for weight-related problems,” “review growth chart,” and “recommend general dietary changes” (all P<.01); parents disagreeing with their children’s overweight assessments ranked “review growth chart” as less important, and “reducing screen time” and “general activity changes” as more important. The mean preferred weight-management follow-up interval (10-12 weeks) did not differ by agreement with children’s overweight assessments. Conclusions Parents prefer weight-management strategies that prioritize evaluating weight-related problems, growth-chart review, and regular follow-up. Parents who disagree that their child is overweight want changes in how overweight is assessed. Using parent-preferred weight-management strategies may prove useful in improving child weight status. PMID:26514648

  9. Constrained low-rank matrix estimation: phase transitions, approximate message passing and applications

    NASA Astrophysics Data System (ADS)

    Lesieur, Thibault; Krzakala, Florent; Zdeborová, Lenka

    2017-07-01

    This article is an extended version of previous work of Lesieur et al (2015 IEEE Int. Symp. on Information Theory Proc. pp 1635-9 and 2015 53rd Annual Allerton Conf. on Communication, Control and Computing (IEEE) pp 680-7) on low-rank matrix estimation in the presence of constraints on the factors into which the matrix is factorized. Low-rank matrix factorization is one of the basic methods used in data analysis for unsupervised learning of relevant features and other types of dimensionality reduction. We present a framework to study the constrained low-rank matrix estimation for a general prior on the factors, and a general output channel through which the matrix is observed. We draw a parallel with the study of vector-spin glass models—presenting a unifying way to study a number of problems considered previously in separate statistical physics works. We present a number of applications for the problem in data analysis. We derive in detail a general form of the low-rank approximate message passing (Low-RAMP) algorithm, that is known in statistical physics as the TAP equations. We thus unify the derivation of the TAP equations for models as different as the Sherrington-Kirkpatrick model, the restricted Boltzmann machine, the Hopfield model or vector (xy, Heisenberg and other) spin glasses. The state evolution of the Low-RAMP algorithm is also derived, and is equivalent to the replica symmetric solution for the large class of vector-spin glass models. In the section devoted to results, we study in detail phase diagrams and phase transitions for Bayes-optimal inference in low-rank matrix estimation. We present a typology of phase transitions and their relation to performance of algorithms such as the Low-RAMP or commonly used spectral methods.

  10. Fast iterative solution of the Bethe-Salpeter eigenvalue problem using low-rank and QTT tensor approximation

    NASA Astrophysics Data System (ADS)

    Benner, Peter; Dolgov, Sergey; Khoromskaia, Venera; Khoromskij, Boris N.

    2017-04-01

    In this paper, we propose and study two approaches to approximate the solution of the Bethe-Salpeter equation (BSE) by using structured iterative eigenvalue solvers. Both approaches are based on the reduced basis method and low-rank factorizations of the generating matrices. We also propose to represent the static screen interaction part in the BSE matrix by a small active sub-block, with a size balancing the storage for rank-structured representations of other matrix blocks. We demonstrate by various numerical tests that the combination of the diagonal plus low-rank plus reduced-block approximation exhibits higher precision with low numerical cost, providing as well a distinct two-sided error estimate for the smallest eigenvalues of the Bethe-Salpeter operator. The complexity is reduced to O(Nb^2) in the size of the atomic orbitals basis set, Nb, instead of the practically intractable O(Nb^6) scaling for the direct diagonalization. In the second approach, we apply the quantized-TT (QTT) tensor representation to both the long eigenvectors and the column vectors in the rank-structured BSE matrix blocks, and combine this with the ALS-type iteration in block QTT format. The QTT rank of the matrix entities is of almost the same magnitude as the number of occupied orbitals in the molecular systems, No.

  11. Tensor Factorization for Low-Rank Tensor Completion.

    PubMed

    Zhou, Pan; Lu, Canyi; Lin, Zhouchen; Zhang, Chao

    2018-03-01

    Recently, a tensor nuclear norm (TNN) based method was proposed to solve the tensor completion problem, which has achieved state-of-the-art performance on image and video inpainting tasks. However, it requires computing the tensor singular value decomposition (t-SVD), which is computationally expensive and thus cannot efficiently handle naturally large-scale tensor data. Motivated by TNN, we propose a novel low-rank tensor factorization method for efficiently solving the 3-way tensor completion problem. Our method preserves the low-rank structure of a tensor by factorizing it into the product of two tensors of smaller sizes. In the optimization process, our method only needs to update two smaller tensors, which can be done more efficiently than computing the t-SVD. Furthermore, we prove that the proposed alternating minimization algorithm converges to a Karush-Kuhn-Tucker point. Experimental results on synthetic data recovery and on image and video inpainting tasks clearly demonstrate the superior performance and efficiency of our method over state-of-the-art methods, including the TNN and matricization methods.
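
    A simplified matrix analogue of the factorization idea, alternating least squares on two small factors fitted to observed entries only, is sketched below; the paper itself works with 3-way tensors and the tensor-tensor product, which this sketch does not model:

        # Matrix analogue of low-rank factorization completion: recover
        # M ~ X @ Y from observed entries by alternating least squares.
        # Toy data and a fixed iteration count are assumptions.
        import numpy as np

        rng = np.random.default_rng(1)
        m, n, r = 30, 20, 2
        M = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))
        mask = rng.random((m, n)) < 0.5             # observed entries

        X = rng.standard_normal((m, r))
        Y = rng.standard_normal((r, n))
        for _ in range(200):
            for i in range(m):                      # update rows of X
                cols = mask[i]
                if cols.any():
                    X[i] = np.linalg.lstsq(Y[:, cols].T, M[i, cols], rcond=None)[0]
            for j in range(n):                      # update columns of Y
                rows = mask[:, j]
                if rows.any():
                    Y[:, j] = np.linalg.lstsq(X[rows], M[rows, j], rcond=None)[0]

        err = np.linalg.norm(X @ Y - M) / np.linalg.norm(M)
        print(f"relative recovery error: {err:.2e}")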

  12. Quantum annealing versus classical machine learning applied to a simplified computational biology problem

    NASA Astrophysics Data System (ADS)

    Li, Richard Y.; Di Felice, Rosa; Rohs, Remo; Lidar, Daniel A.

    2018-03-01

    Transcription factors regulate gene expression, but how these proteins recognize and specifically bind to their DNA targets is still debated. Machine learning models are effective means to reveal interaction mechanisms. Here we studied the ability of a quantum machine learning approach to classify and rank binding affinities. Using simplified data sets of a small number of DNA sequences derived from actual binding affinity experiments, we trained a commercially available quantum annealer to classify and rank transcription factor binding. The results were compared to state-of-the-art classical approaches for the same simplified data sets, including simulated annealing, simulated quantum annealing, multiple linear regression, LASSO, and extreme gradient boosting. Despite technological limitations, we find a slight advantage in classification performance and nearly equal ranking performance using the quantum annealer for these fairly small training data sets. Thus, we propose that quantum annealing might be an effective method to implement machine learning for certain computational biology problems.

  13. Influence of protonation, tautomeric, and stereoisomeric states on protein-ligand docking results.

    PubMed

    ten Brink, Tim; Exner, Thomas E

    2009-06-01

    In this work, we present a systematic investigation of the influence of ligand protonation states, stereoisomers, and tautomers on results obtained with the two protein-ligand docking programs GOLD and PLANTS. These different states were generated with a fully automated tool, called SPORES (Structure PrOtonation and Recognition System). First, the most probable protonations, as defined by this rule-based system, were compared to the ones stored in the well-known, manually revised CCDC/ASTEX data set. Then, to investigate the influence of the ligand protonation state on the docking results, different protonation states were created. Redocking and virtual screening experiments were conducted, demonstrating that both docking programs have problems in identifying the correct protomer for each complex. Therefore, a preselection of plausible protomers or an improvement of the scoring functions concerning their ability to rank different molecules/states is needed. Additionally, ligand stereoisomers were tested for a subset of the CCDC/ASTEX set, showing problems in ranking these stereoisomers similar to those in ranking the protomers.

  14. Analyzing Quadratic Unconstrained Binary Optimization Problems Via Multicommodity Flows

    PubMed Central

    Wang, Di; Kleinberg, Robert D.

    2009-01-01

    Quadratic Unconstrained Binary Optimization (QUBO) problems concern the minimization of quadratic polynomials in n {0, 1}-valued variables. These problems are NP-complete, but prior work has identified a sequence of polynomial-time computable lower bounds on the minimum value, denoted by C2, C3, C4,…. It is known that C2 can be computed by solving a maximum-flow problem, whereas the only previously known algorithms for computing Ck (k > 2) require solving a linear program. In this paper we prove that C3 can be computed by solving a maximum multicommodity flow problem in a graph constructed from the quadratic function. In addition to providing a lower bound on the minimum value of the quadratic function on {0, 1}n, this multicommodity flow problem also provides some information about the coordinates of the point where this minimum is achieved. By looking at the edges that are never saturated in any maximum multicommodity flow, we can identify relational persistencies: pairs of variables that must have the same or different values in any minimizing assignment. We furthermore show that all of these persistencies can be detected by solving single-commodity flow problems in the same network. PMID:20161596

  15. Analyzing Quadratic Unconstrained Binary Optimization Problems Via Multicommodity Flows.

    PubMed

    Wang, Di; Kleinberg, Robert D

    2009-11-28

    Quadratic Unconstrained Binary Optimization (QUBO) problems concern the minimization of quadratic polynomials in n {0, 1}-valued variables. These problems are NP-complete, but prior work has identified a sequence of polynomial-time computable lower bounds on the minimum value, denoted by C2, C3, C4,…. It is known that C2 can be computed by solving a maximum-flow problem, whereas the only previously known algorithms for computing Ck (k > 2) require solving a linear program. In this paper we prove that C3 can be computed by solving a maximum multicommodity flow problem in a graph constructed from the quadratic function. In addition to providing a lower bound on the minimum value of the quadratic function on {0, 1}n, this multicommodity flow problem also provides some information about the coordinates of the point where this minimum is achieved. By looking at the edges that are never saturated in any maximum multicommodity flow, we can identify relational persistencies: pairs of variables that must have the same or different values in any minimizing assignment. We furthermore show that all of these persistencies can be detected by solving single-commodity flow problems in the same network.
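
    For orientation, the quantity being bounded can be stated in a few lines. The brute-force reference below (toy instance, assumed data) just evaluates the QUBO objective over all assignments; it is not the flow-based construction from the paper:

        # Brute-force reference for the QUBO objective: minimize
        # f(x) = x^T Q x over x in {0, 1}^n. This defines the quantity
        # that the flow-based bounds C2, C3 approximate from below.
        import itertools
        import numpy as np

        Q = np.array([[ 1.0, -2.0,  0.5],
                      [-2.0,  1.0, -1.0],
                      [ 0.5, -1.0,  0.5]])          # toy instance

        best_val, best_x = min(
            (float(np.array(x) @ Q @ np.array(x)), x)
            for x in itertools.product((0, 1), repeat=Q.shape[0])
        )
        print(best_val, best_x)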

  16. Network information improves cancer outcome prediction.

    PubMed

    Roy, Janine; Winter, Christof; Isik, Zerrin; Schroeder, Michael

    2014-07-01

    Disease progression in cancer can vary substantially between patients. Yet, patients often receive the same treatment. Recently, there has been much work on predicting disease progression and patient outcome variables from gene expression in order to personalize treatment options. Despite the first diagnostic kits on the market, there are open problems such as the choice of random gene signatures or noisy expression data. One approach to deal with these two problems employs protein-protein interaction networks and ranks genes using the random surfer model of Google's PageRank algorithm. In this work, we created a benchmark dataset collection comprising 25 cancer outcome prediction datasets from the literature and systematically evaluated the use of networks and a PageRank derivative, NetRank, for signature identification. We show that NetRank performs significantly better than classical methods such as fold change or the t-test. Despite an order of magnitude difference in network size, a regulatory and a protein-protein interaction network perform equally well. Experimental evaluation on cancer outcome prediction in all of the 25 underlying datasets suggests that the network-based methodology identifies highly overlapping signatures over all cancer types, in contrast to classical methods that fail to identify highly common gene sets across the same cancer types. Integration of network information into gene expression analysis allows the identification of more reliable and accurate biomarkers and provides a deeper understanding of processes occurring in cancer development and progression. © The Author 2012. Published by Oxford University Press.
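
    A generic PageRank power iteration, the random-surfer model that NetRank derives from, is sketched below on a toy interaction network. NetRank itself additionally blends per-gene expression scores into the iteration, which this sketch omits:

        # Generic PageRank power iteration on a toy symmetric adjacency
        # matrix. Damping factor and network are illustrative assumptions.
        import numpy as np

        A = np.array([[0, 1, 1, 0],
                      [1, 0, 1, 0],
                      [1, 1, 0, 1],
                      [0, 0, 1, 0]], dtype=float)   # toy interaction network
        P = A / A.sum(axis=0)                       # column-stochastic transitions
        d, n = 0.85, A.shape[0]
        r = np.full(n, 1.0 / n)
        for _ in range(100):
            r_new = (1 - d) / n + d * P @ r         # teleport + follow links
            if np.linalg.norm(r_new - r, 1) < 1e-12:
                break
            r = r_new
        print("gene ranking:", np.argsort(-r))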

  17. Satellite Monitoring of Cyanobacterial Harmful Algal Bloom Frequency in Recreational Waters and Drinking Water Sources

    NASA Technical Reports Server (NTRS)

    Clark, John M.; Schaeffer, Blake A.; Darling, John A.; Urquhart, Erin A.; Johnston, John M.; Ignatius, Amber R.; Myer, Mark H.; Loftin, Keith A.; Werdell, P. Jeremy; Stumpf, Richard P.

    2017-01-01

    Cyanobacterial harmful algal blooms (cyanoHAB) cause extensive problems in lakes worldwide, including human and ecological health risks, anoxia and fish kills, and taste and odor problems. CyanoHABs are a particular concern in both recreational waters and drinking water sources because of their dense biomass and the risk of exposure to toxins. Successful cyanoHAB assessment using satellites may provide an indicator for human and ecological health protection. In this study, methods were developed to assess the utility of satellite technology for detecting cyanoHAB frequency of occurrence at locations of potential management interest. The European Space Agency's MEdium Resolution Imaging Spectrometer (MERIS) was evaluated to prepare for the equivalent series of Sentinel-3 Ocean and Land Colour Imagers (OLCI) launched in 2016 as part of the Copernicus program. Based on the 2012 National Lakes Assessment site evaluation guidelines and National Hydrography Dataset, the continental United States contains 275,897 lakes and reservoirs greater than 1 ha in area. Results from this study show that 5.6% of waterbodies were resolvable by satellites with 300 m single-pixel resolution and 0.7% of waterbodies were resolvable when a three by three pixel (3 x 3-pixel) array was applied based on minimum Euclidian distance from shore. Satellite data were spatially joined to U.S. public water surface intake (PWSI) locations, where single-pixel resolution resolved 57% of the PWSI locations and a 3 x 3-pixel array resolved 33% of the PWSI locations. Recreational and drinking water sources in Florida and Ohio were ranked from 2008 through 2011 by cyanoHAB frequency above the World Health Organization's (WHO) high threshold for risk of 100,000 cells mL−1. The ranking identified waterbodies with values above the WHO high threshold, where Lake Apopka, FL (99.1%) and Grand Lake St. Marys, OH (83%) had the highest observed bloom frequencies per region. The method presented here may indicate locations with high exposure to cyanoHABs and therefore can be used to assist in prioritizing management resources and actions for recreational and drinking water sources.

  18. Satellite monitoring of cyanobacterial harmful algal bloom frequency in recreational waters and drinking water sources

    USGS Publications Warehouse

    Clark, John M.; Schaeffer, Blake A.; Darling, John A.; Urquhart, Erin A.; Johnston, John M.; Ignatius, Amber R.; Myer, Mark H.; Loftin, Keith A.; Werdell, P. Jeremy; Stumpf, Richard P.

    2017-01-01

    Cyanobacterial harmful algal blooms (cyanoHAB) cause extensive problems in lakes worldwide, including human and ecological health risks, anoxia and fish kills, and taste and odor problems. CyanoHABs are a particular concern in both recreational waters and drinking water sources because of their dense biomass and the risk of exposure to toxins. Successful cyanoHAB assessment using satellites may provide an indicator for human and ecological health protection. In this study, methods were developed to assess the utility of satellite technology for detecting cyanoHAB frequency of occurrence at locations of potential management interest. The European Space Agency's MEdium Resolution Imaging Spectrometer (MERIS) was evaluated to prepare for the equivalent series of Sentinel-3 Ocean and Land Colour Imagers (OLCI) launched in 2016 as part of the Copernicus program. Based on the 2012 National Lakes Assessment site evaluation guidelines and National Hydrography Dataset, the continental United States contains 275,897 lakes and reservoirs >1 ha in area. Results from this study show that 5.6% of waterbodies were resolvable by satellites with 300 m single-pixel resolution and 0.7% of waterbodies were resolvable when a three by three pixel (3 × 3-pixel) array was applied based on minimum Euclidian distance from shore. Satellite data were spatially joined to U.S. public water surface intake (PWSI) locations, where single-pixel resolution resolved 57% of the PWSI locations and a 3 × 3-pixel array resolved 33% of the PWSI locations. Recreational and drinking water sources in Florida and Ohio were ranked from 2008 through 2011 by cyanoHAB frequency above the World Health Organization’s (WHO) high threshold for risk of 100,000 cells mL−1. The ranking identified waterbodies with values above the WHO high threshold, where Lake Apopka, FL (99.1%) and Grand Lake St. Marys, OH (83%) had the highest observed bloom frequencies per region. The method presented here may indicate locations with high exposure to cyanoHABs and therefore can be used to assist in prioritizing management resources and actions for recreational and drinking water sources.

  19. FBST for Cointegration Problems

    NASA Astrophysics Data System (ADS)

    Diniz, M.; Pereira, C. A. B.; Stern, J. M.

    2008-11-01

    In order to estimate causal relations, time series econometrics has to be aware of spurious correlation, a problem first mentioned by Yule [21]. To solve the problem, one can work with differenced series or use multivariate models like VAR or VEC models. In this case, the analysed series are going to present a long-run relation, i.e., a cointegration relation. Even though the Bayesian literature about inference on VAR/VEC models is quite advanced, Bauwens et al. [2] highlight that "the topic of selecting the cointegrating rank has not yet given very useful and convincing results." This paper presents the Full Bayesian Significance Test applied to cointegration rank selection tests in multivariate (VAR/VEC) time series models and shows how to implement it using data sets available in the literature as well as simulated data sets. A standard non-informative prior is assumed.

  20. Finding minimum-quotient cuts in planar graphs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Park, J.K.; Phillips, C.A.

    Given a graph G = (V, E) where each vertex v ∈ V is assigned a weight w(v) and each edge e ∈ E is assigned a cost c(e), the quotient of a cut partitioning the vertices of V into sets S and S̄ is c(S, S̄)/min{w(S), w(S̄)}, where c(S, S̄) is the sum of the costs of the edges crossing the cut and w(S) and w(S̄) are the sums of the weights of the vertices in S and S̄, respectively. The problem of finding a cut whose quotient is minimum for a graph has in recent years attracted considerable attention, due in large part to the work of Rao and Leighton and Rao. They have shown that an algorithm (exact or approximation) for the minimum-quotient-cut problem can be used to obtain an approximation algorithm for the more famous minimum b-balanced-cut problem, which requires finding a cut (S, S̄) minimizing c(S, S̄) subject to the constraint bW ≤ w(S) ≤ (1 − b)W, where W is the total vertex weight and b is some fixed balance in the range 0 < b ≤ 1/2. Unfortunately, the minimum-quotient-cut problem is strongly NP-hard for general graphs, and the best polynomial-time approximation algorithm known for the general problem guarantees only a cut whose quotient is at most O(lg n) times optimal, where n is the size of the graph. However, for planar graphs, the minimum-quotient-cut problem appears more tractable, as Rao has developed several efficient approximation algorithms for the planar version of the problem capable of finding a cut whose quotient is at most some constant times optimal. In this paper, we improve Rao's algorithms, both in terms of accuracy and speed. As our first result, we present two pseudopolynomial-time exact algorithms for the planar minimum-quotient-cut problem. As Rao's most accurate approximation algorithm for the problem -- also a pseudopolynomial-time algorithm -- guarantees only a 1.5-times-optimal cut, our algorithms represent a significant advance.
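
    A small checker makes the objective concrete: given a cut (S, S̄), its quotient is the crossing cost divided by the lighter side's weight. The toy data below are assumptions; the hard part, which the paper addresses, is searching over cuts:

        # Evaluating the quotient of a given cut (S, S-bar):
        # q = c(S, S-bar) / min(w(S), w(S-bar)).
        def cut_quotient(S, vertices, edge_costs, vertex_weights):
            """S: set of vertices on one side; edge_costs: {(u, v): cost}."""
            Sbar = set(vertices) - set(S)
            crossing = sum(c for (u, v), c in edge_costs.items()
                           if (u in S) != (v in S))
            wS = sum(vertex_weights[v] for v in S)
            wSbar = sum(vertex_weights[v] for v in Sbar)
            return crossing / min(wS, wSbar)

        # Toy 4-vertex example (hypothetical data):
        V = {0, 1, 2, 3}
        costs = {(0, 1): 1.0, (1, 2): 0.1, (2, 3): 1.0, (0, 2): 0.2}
        weights = {0: 1.0, 1: 1.0, 2: 1.0, 3: 1.0}
        print(cut_quotient({0, 1}, V, costs, weights))  # 0.3 / 2.0 = 0.15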

  1. A divide and conquer approach to the nonsymmetric eigenvalue problem

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jessup, E.R.

    1991-01-01

    Serial computation combined with high communication costs on distributed-memory multiprocessors makes parallel implementations of the QR method for the nonsymmetric eigenvalue problem inefficient. This paper introduces an alternative algorithm for the nonsymmetric tridiagonal eigenvalue problem based on rank two tearing and updating of the matrix. The parallelism of this divide and conquer approach stems from independent solution of the updating problems. 11 refs.

  2. Diameter-Constrained Steiner Tree

    NASA Astrophysics Data System (ADS)

    Ding, Wei; Lin, Guohui; Xue, Guoliang

    Given an edge-weighted undirected graph G = (V, E, c, w), where each edge e ∈ E has a cost c(e) and a weight w(e), a set S ⊆ V of terminals and a positive constant D0, we seek a minimum cost Steiner tree in which all terminals appear as leaves and whose diameter is bounded by D0. Note that the diameter of a tree is the maximum weight of a path connecting two different leaves in the tree. Such a problem is called the minimum cost diameter-constrained Steiner tree problem. This problem is NP-hard even when the topology of the Steiner tree is fixed. In the present paper we focus on this restricted version and present a fully polynomial time approximation scheme (FPTAS) for computing a minimum cost diameter-constrained Steiner tree under a fixed topology.

  3. 78 FR 16465 - Energy and Environment Trade Mission to Malaysia, Thailand and the Philippines

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-03-15

    ... experienced problems with the discharge of untreated sewage, particularly along the west coast. Malaysia's water pollution problem also extends to its rivers, of which 40 percent are polluted. The nation has 580... sources is still a problem. In the mid-1990s, Malaysia ranked among 50 nations with the world's highest...

  4. Development of Internalizing Problems from Adolescence to Emerging Adulthood: Accounting for Heterotypic Continuity with Vertical Scaling

    ERIC Educational Resources Information Center

    Petersen, Isaac T.; Lindhiem, Oliver; LeBeau, Brandon; Bates, John E.; Pettit, Gregory S.; Lansford, Jennifer E.; Dodge, Kenneth A.

    2018-01-01

    Manifestations of internalizing problems, such as specific symptoms of anxiety and depression, can change across development, even if individuals show strong continuity in rank-order levels of internalizing problems. This illustrates the concept of heterotypic continuity, and raises the question of whether common measures might be construct-valid…

  5. Simulation of Intra- or transboundary surface-water-rights hierarchies using the farm process for MODFLOW-2000

    USGS Publications Warehouse

    Schmid, W.; Hanson, R.T.

    2007-01-01

    Water-rights driven surface-water allocations for irrigated agriculture can be simulated using the farm process for MODFLOW-2000. This paper describes and develops a model, which simulates routed surface-water deliveries to farms limited by streamflow, equal-appropriation allotments, or a ranked prior-appropriation system. Simulated diversions account for deliveries to all farms along a canal according to their water-rights ranking and for conveyance losses and gains. Simulated minimum streamflow requirements on diversions help guarantee supplies to senior farms located on downstream diverting canals. Prior appropriation can be applied to individual farms or to groups of farms modeled as "virtual farms" representing irrigation districts, irrigated regions in transboundary settings, or natural vegetation habitats. The integrated approach of jointly simulating canal diversions, surface-water deliveries subject to water-rights constraints, and groundwater allocations is verified on numerical experiments based on a realistic, but hypothetical, system of ranked virtual farms. Results are discussed in light of transboundary water appropriation and demonstrate the approach's suitability for simulating effects of water-rights hierarchies represented by international treaties, interstate stream compacts, intrastate water rights, or ecological requirements. © 2007 ASCE.

  6. Development of education program for physical therapy assistant in Quang Tri province of Vietnam.

    PubMed

    Noh, Jin Won; Cho, Sang Hyun; Kim, Min Hee; Kim, Eun Joo

    2017-02-01

    [Purpose] The purpose of the present study was to develop an education program for physical therapy assistants in order to provide high quality physical therapy in the province of Quang Tri in Vietnam. [Subjects and Methods] Subjects consisted of 9 professors at Quang Tri medical college and 1 physical therapist at Quang Tri General hospital. A survey of lecturers involved in physical therapy assistant education at Quang Tri medical college was conducted as a pre-analysis of demand for development of the physical therapy assistant curriculum. The priority ranks of expectation and consciousness were measured for the curriculum subjects. [Results] Total educational expectation scores for the curriculum ranged from a minimum of 4 to a maximum of 5. When educational expectation was examined by background variable, differences in expectation scores according to educational experience were significant. Among the consciousness priorities of the curriculum subjects, the priority ranks of basic kinesiology and of physical therapy for international medicine & surgery were 9, the highest first-rank frequency. [Conclusion] The curriculum for physical therapy assistants was developed around 5 main subjects comprising a total of 420 hours (120 hours of theory and 300 hours of practice).

  7. Development of education program for physical therapy assistant in Quang Tri province of Vietnam

    PubMed Central

    Noh, Jin Won; Cho, Sang Hyun; Kim, Min Hee; Kim, Eun Joo

    2017-01-01

    [Purpose] The purpose of the present study was to develop an education program for physical therapy assistants in order to provide high quality physical therapy in the province of Quang Tri in Vietnam. [Subjects and Methods] Subjects consisted of 9 professors at Quang Tri medical college and 1 physical therapist at Quang Tri General hospital. A survey of lecturers involved in physical therapy assistant education at Quang Tri medical college was conducted as a pre-analysis of demand for development of the physical therapy assistant curriculum. The priority ranks of expectation and consciousness were measured for the curriculum subjects. [Results] Total educational expectation scores for the curriculum ranged from a minimum of 4 to a maximum of 5. When educational expectation was examined by background variable, differences in expectation scores according to educational experience were significant. Among the consciousness priorities of the curriculum subjects, the priority ranks of basic kinesiology and of physical therapy for international medicine & surgery were 9, the highest first-rank frequency. [Conclusion] The curriculum for physical therapy assistants was developed around 5 main subjects comprising a total of 420 hours (120 hours of theory and 300 hours of practice). PMID:28265176

  8. One-shot exogenous interventions increase subsequent coordination in Denmark, Spain and Ghana

    PubMed Central

    Abatayo, Anna Lou; Thorsen, Bo Jellesmark

    2017-01-01

    Every day, we are bombarded with periodic, exogenous appeals and instructions on how to behave. How do these appeals and instructions affect subsequent coordination? Using experimental methods, we investigate how a one-time exogenous instruction affects subsequent coordination among individuals in a lab. Participants play a minimum effort game repeated 5 times under fixed matching with a one-time behavioral instruction in either the first or second round. Since coordination behavior may vary across countries, we run experiments in Denmark, Spain and Ghana, and map cross-country rankings in coordination with known national measures of fractionalization, uncertainty avoidance and long-term orientation. Our results show that exogenous interventions increase subsequent coordination, with earlier interventions yielding better coordination than later interventions. We also find that cross-country rankings in coordination map with published national measures of fractionalization, uncertainty avoidance, and long-term orientation. PMID:29145411

  9. One-shot exogenous interventions increase subsequent coordination in Denmark, Spain and Ghana.

    PubMed

    Abatayo, Anna Lou; Thorsen, Bo Jellesmark

    2017-01-01

    Every day, we are bombarded with periodic, exogenous appeals and instructions on how to behave. How do these appeals and instructions affect subsequent coordination? Using experimental methods, we investigate how a one-time exogenous instruction affects subsequent coordination among individuals in a lab. Participants play a minimum effort game repeated 5 times under fixed matching with a one-time behavioral instruction in either the first or second round. Since coordination behavior may vary across countries, we run experiments in Denmark, Spain and Ghana, and map cross-country rankings in coordination with known national measures of fractionalization, uncertainty avoidance and long-term orientation. Our results show that exogenous interventions increase subsequent coordination, with earlier interventions yielding better coordination than later interventions. We also find that cross-country rankings in coordination map with published national measures of fractionalization, uncertainty avoidance, and long-term orientation.

  10. Quantum teleportation via quantum channels with non-maximal Schmidt rank

    NASA Astrophysics Data System (ADS)

    Solís-Prosser, M. A.; Jiménez, O.; Neves, L.; Delgado, A.

    2013-03-01

    We study the problem of teleporting unknown pure states of a single qudit via a pure quantum channel with non-maximal Schmidt rank. We relate this process to the discrimination of linearly dependent symmetric states with the help of the maximum-confidence discrimination strategy. We show that with a certain probability, it is possible to teleport with a fidelity larger than that of optimal deterministic teleportation.

  11. Enhancing collaborative filtering by user interest expansion via personalized ranking.

    PubMed

    Liu, Qi; Chen, Enhong; Xiong, Hui; Ding, Chris H Q; Chen, Jian

    2012-02-01

    Recommender systems suggest a few items from many possible choices to the users by understanding their past behaviors. In these systems, the user behaviors are influenced by the hidden interests of the users. Learning to leverage the information about user interests is often critical for making better recommendations. However, existing collaborative-filtering-based recommender systems are usually focused on exploiting the information about the user's interaction with the systems; the information about latent user interests is largely underexplored. To that end, inspired by topic models, in this paper we propose a novel collaborative-filtering-based recommender system based on user interest expansion via personalized ranking, named iExpand. The goal is to build an item-oriented, model-based collaborative-filtering framework. The iExpand method introduces a three-layer, user-interests-item, representation scheme, which leads to more accurate ranking recommendation results with less computation cost and helps the understanding of the interactions among users, items, and user interests. Moreover, iExpand strategically deals with many issues that exist in traditional collaborative-filtering approaches, such as the overspecialization problem and the cold-start problem. Finally, we evaluate iExpand on three benchmark data sets, and experimental results show that iExpand can lead to better ranking performance than state-of-the-art methods with a significant margin.

  12. A Note on Alternating Minimization Algorithm for the Matrix Completion Problem

    DOE PAGES

    Gamarnik, David; Misra, Sidhant

    2016-06-06

    Here, we consider the problem of reconstructing a low-rank matrix from a subset of its entries and analyze two variants of the so-called alternating minimization algorithm, which has been proposed in the past. We establish that when the underlying matrix has rank one, has positive bounded entries, and the graph underlying the revealed entries has diameter which is logarithmic in the size of the matrix, both algorithms succeed in reconstructing the matrix approximately in polynomial time starting from an arbitrary initialization. We further provide simulation results which suggest that the second variant, which is based on message-passing-type updates, performs significantly better.
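
    A sketch of the rank-one case under stated assumptions (toy positive rank-one matrix, random revealed entries; the paper's initialization scheme and graph-diameter condition are not modeled):

        # Alternating minimization for rank-one matrix completion:
        # fit M ~ u v^T from a subset of entries by alternately solving
        # for u and v in closed form.
        import numpy as np

        rng = np.random.default_rng(2)
        m, n = 40, 30
        u_true = rng.uniform(1.0, 2.0, m)           # positive bounded entries
        v_true = rng.uniform(1.0, 2.0, n)
        M = np.outer(u_true, v_true)
        mask = rng.random((m, n)) < 0.3             # revealed entries

        u = np.ones(m)
        v = np.ones(n)
        for _ in range(50):
            for i in range(m):                      # least-squares update of u_i
                obs = mask[i]
                if obs.any():
                    u[i] = (M[i, obs] @ v[obs]) / (v[obs] @ v[obs])
            for j in range(n):                      # least-squares update of v_j
                obs = mask[:, j]
                if obs.any():
                    v[j] = (M[obs, j] @ u[obs]) / (u[obs] @ u[obs])

        print(np.linalg.norm(np.outer(u, v) - M) / np.linalg.norm(M))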

  13. Modified truncated randomized singular value decomposition (MTRSVD) algorithms for large scale discrete ill-posed problems with general-form regularization

    NASA Astrophysics Data System (ADS)

    Jia, Zhongxiao; Yang, Yanfei

    2018-05-01

    In this paper, we propose new randomization based algorithms for large scale linear discrete ill-posed problems with general-form regularization: min ||Lx|| subject to ||Ax − b|| = min, where L is a regularization matrix. Our algorithms are inspired by the modified truncated singular value decomposition (MTSVD) method, which suits only small to medium scale problems, and randomized SVD (RSVD) algorithms that generate good low rank approximations to A. We use rank-k truncated randomized SVD (TRSVD) approximations to A, obtained by truncating rank-(k + q) RSVD approximations to A, where q is an oversampling parameter. The resulting algorithms are called modified TRSVD (MTRSVD) methods. At every step, we use the LSQR algorithm to solve the resulting inner least squares problem, which is proved to become better conditioned as k increases, so that LSQR converges faster. We present sharp bounds for the approximation accuracy of the RSVDs and TRSVDs for severely, moderately and mildly ill-posed problems, and substantially improve a known basic bound for TRSVD approximations. We prove how to choose the stopping tolerance for LSQR in order to guarantee that the computed and exact best regularized solutions have the same accuracy. Numerical experiments illustrate that the best regularized solutions by MTRSVD are as accurate as the ones by the truncated generalized singular value decomposition (TGSVD) algorithm, and at least as accurate as those by some existing truncated randomized generalized singular value decomposition (TRGSVD) algorithms. This work was supported in part by the National Science Foundation of China (Nos. 11771249 and 11371219).
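
    The RSVD building block that the MTRSVD methods truncate is short to write down (a Halko-Martinsson-Tropp-style range finder; the parameters and test matrix below are illustrative, not the paper's setup):

        # Randomized SVD with oversampling, truncated back to rank k (TRSVD):
        # sample the range, orthonormalize, project, SVD the small matrix.
        import numpy as np

        def rsvd(A, k, q=10):
            """Rank-(k+q) randomized SVD, truncated to rank k."""
            rng = np.random.default_rng(3)
            Omega = rng.standard_normal((A.shape[1], k + q))
            Q, _ = np.linalg.qr(A @ Omega)          # approximate range of A
            B = Q.T @ A                             # small (k+q) x n matrix
            Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
            return Q @ Ub[:, :k], s[:k], Vt[:k]

        rng = np.random.default_rng(4)
        # Synthetic matrix with rapidly decaying singular values:
        A = (rng.standard_normal((100, 15)) * 2.0 ** -np.arange(15)) @ \
            rng.standard_normal((15, 80))
        U, s, Vt = rsvd(A, k=8)
        print(np.linalg.norm(A - U @ np.diag(s) @ Vt) / np.linalg.norm(A))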

  14. Methods of Combinatorial Optimization to Reveal Factors Affecting Gene Length

    PubMed Central

    Bolshoy, Alexander; Tatarinova, Tatiana

    2012-01-01

    In this paper we present a novel method for genome ranking according to gene lengths. The main outcomes described in this paper are the following: the formulation of the genome ranking problem, the presentation of relevant approaches to solve it, and the demonstration of preliminary results from ordering prokaryotic genomes. Using a subset of prokaryotic genomes, we attempted to uncover factors affecting gene length. We have demonstrated that hyperthermophilic species have shorter genes as compared with mesophilic organisms, which probably means that environmental factors affect gene length. Moreover, these preliminary results show that environmental factors group together in the ranking of evolutionarily distant species. PMID:23300345

  15. Performance of low-rank QR approximation of the finite element Biot-Savart law

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    White, D A; Fasenfest, B J

    2006-01-12

    We are concerned with the computation of magnetic fields from known electric currents in the finite element setting. In finite element eddy current simulations it is necessary to prescribe the magnetic field (or potential, depending upon the formulation) on the conductor boundary. In situations where the magnetic field is due to a distributed current density, the Biot-Savart law can be used, eliminating the need to mesh the nonconducting regions. Computation of the Biot-Savart law can be significantly accelerated using a low-rank QR approximation. We review the low-rank QR method and report performance on selected problems.
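
    A column-pivoted QR low-rank compression of a smooth interaction kernel can be sketched as follows; the 1/r kernel matrix stands in for the Biot-Savart field-evaluation operator and is an assumption, not the paper's finite element setup:

        # Low-rank approximation via column-pivoted QR (scipy provides the
        # pivoted variant). Well-separated point sets make the kernel
        # matrix numerically low rank.
        import numpy as np
        from scipy.linalg import qr

        rng = np.random.default_rng(6)
        src = rng.random((200, 3))                  # source points
        obs = rng.random((150, 3)) + 5.0            # well-separated observers
        K = 1.0 / np.linalg.norm(obs[:, None] - src[None, :], axis=2)

        Q, R, piv = qr(K, mode='economic', pivoting=True)
        tol = 1e-10 * abs(R[0, 0])
        k = int(np.sum(np.abs(np.diag(R)) > tol))   # numerical rank
        K_approx = Q[:, :k] @ R[:k]                 # rank-k approximation
        K_approx = K_approx[:, np.argsort(piv)]     # undo column permutation
        print(k, np.linalg.norm(K - K_approx) / np.linalg.norm(K))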

  16. Relaxations to Sparse Optimization Problems and Applications

    NASA Astrophysics Data System (ADS)

    Skau, Erik West

    Parsimony is a fundamental property that is applied to many characteristics in a variety of fields. Of particular interest are optimization problems that apply rank, dimensionality, or support in a parsimonious manner. In this thesis we study some optimization problems and their relaxations, and focus on properties and qualities of the solutions of these problems. The Gramian tensor decomposition problem attempts to decompose a symmetric tensor as a sum of rank one tensors. We approach the Gramian tensor decomposition problem with a relaxation to a semidefinite program. We study conditions which ensure that the solution of the relaxed semidefinite problem gives the minimal Gramian rank decomposition. Sparse representations with learned dictionaries are one of the leading image modeling techniques for image restoration. When learning these dictionaries from a set of training images, the sparsity parameter of the dictionary learning algorithm strongly influences the content of the dictionary atoms. We describe geometrically the content of trained dictionaries and how it changes with the sparsity parameter. We use statistical analysis to characterize how the different content is used in sparse representations. Finally, a method to control the structure of the dictionaries is demonstrated, allowing us to learn a dictionary which can later be tailored for specific applications. Variations of dictionary learning can be broadly applied to a variety of applications. We explore a pansharpening problem with a triple factorization variant of coupled dictionary learning. Another application of dictionary learning is computer vision. Computer vision relies heavily on object detection, which we explore with a hierarchical convolutional dictionary learning model. Data fusion of disparate modalities is a growing topic of interest. We do a case study to demonstrate the benefit of using social media data with satellite imagery to estimate hazard extents. In this case study analysis we apply a maximum entropy model, guided by the social media data, to estimate the flooded regions during a 2013 flood in Boulder, CO and show that the results are comparable to those obtained using expert information.

  17. Analyzing the BBOB results by means of benchmarking concepts.

    PubMed

    Mersmann, O; Preuss, M; Trautmann, H; Bischl, B; Weihs, C

    2015-01-01

    We present methods to answer two basic questions that arise when benchmarking optimization algorithms. The first is: which algorithm is the "best" one? The second is: which algorithm should I use for my real-world problem? Both are connected and neither is easy to answer. We present a theoretical framework for designing and analyzing the raw data of such benchmark experiments. This represents a first step in answering the aforementioned questions. The 2009 and 2010 BBOB benchmark results are analyzed by means of this framework and we derive insight regarding the answers to the two questions. Furthermore, we discuss how to properly aggregate rankings from algorithm evaluations on individual problems into a consensus, its theoretical background and which common pitfalls should be avoided. Finally, we address the grouping of test problems into sets with similar optimizer rankings and investigate whether these are reflected by already proposed test problem characteristics, finding that this is not always the case.
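
    One common aggregation rule in this setting is the Borda count over per-problem rankings; the algorithm names and rankings below are made up for illustration, and the paper discusses when such consensus rules are or are not appropriate:

        # Borda-count consensus over per-problem rankings: sum each
        # algorithm's ranks across problems; lower totals rank higher.
        import numpy as np

        algos = ["CMA-ES", "DE", "PSO", "NM"]       # hypothetical contenders
        # rankings[p][i] = rank of algos[i] on problem p (1 = best)
        rankings = np.array([[1, 2, 3, 4],
                             [1, 3, 2, 4],
                             [2, 1, 4, 3],
                             [1, 2, 4, 3]])

        borda = rankings.sum(axis=0)
        order = np.argsort(borda)
        print([(algos[i], int(borda[i])) for i in order])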

  18. The Role of Social Support on the Relationship between Gender and Career Progression in STEM Academia

    DTIC Science & Technology

    2015-03-26

    sciences. According to the article, gender stereotyping was still an issue for women in STEM while treatment in other fields improved. The authors... [Descriptive statistics and regression output omitted; variables included a gender dummy (1 = male), Google Scholar article counts, and years of service, with academic rank as the dependent variable.] Results of analysis supported the hypothesis that gender is a predictor of the...

  19. Effects of Energy Dissipation in the Sphere-Restricted Full Three-Body Problem

    NASA Astrophysics Data System (ADS)

    Gabriel, T. S. J.

    Recently, the classical N-Body Problem has been adjusted to account for celestial bodies made of constituents of finite density. By imposing a minimum on the achievable distance between particles, minimum energy resting states are allowed by the problem. The Full N-Body Problem allows for the dissipation of mechanical energy through surface-surface interactions via impacts or by way of tidal deformation. Barring exogenous forces and allowing for the dissipation of energy, these systems have discrete, and sometimes multiple, minimum energy states for a given angular momentum. Building the dynamical framework of such finite density systems is a necessary process in outlining the evolution of rubble pile asteroids and other gravitational-granular systems such as protoplanetary discs, and potentially planetary rings, from a theoretical point of view. In all cases, resting states are expected to occur as a necessary step in the ongoing processes of solar system formation and evolution. Previous studies of this problem have been performed in the N=3 case where the bodies are indistinguishable spheres, with all possible relative equilibria and their stability having been identified as a function of the angular momentum of the system. These studies uncovered that at certain levels of angular momentum there exist two minimum energy states, a global and a local minimum. Thus a question of interest is in which of these states a dissipative system would preferentially settle, and how sensitive the results are to changes in dissipation parameters. Assuming equal-sized, perfectly-rigid bodies, this study investigates the dynamical evolution of three spheres under the influence of mutual gravity and impact mechanics as a function of dissipation parameters. A purpose-written, C-based, Hard Sphere Discrete Element Method code has been developed to integrate trajectories and resolve contact mechanics as grains evolve into minimum energy configurations. By testing many randomized initial conditions, statistics are measured regarding minimum energy states for a given angular momentum range. A trend in the Sphere-Restricted Full Three-Body Problem producing an end state of one configuration over another is found as a function of angular momentum and restitution.

  20. New Variational Formulations of Hybrid Stress Elements

    NASA Technical Reports Server (NTRS)

    Pian, T. H. H.; Sumihara, K.; Kang, D.

    1984-01-01

    In the variational formulations of finite elements by the Hu-Washizu and Hellinger-Reissner principles the stress equilibrium condition is maintained by the inclusion of internal displacements which function as the Lagrange multipliers for the constraints. These versions permit the use of natural coordinates and the relaxation of the equilibrium conditions and render considerable improvements in the assumed stress hybrid elements. These include the derivation of invariant hybrid elements which possess the ideal qualities such as minimum sensitivity to geometric distortions, a minimum number of independent stress parameters, rank sufficiency, and the ability to represent constant strain states and bending moments. Another application is the formulation of semiLoof thin shell elements which can yield excellent results for many severe test cases because the rigid body modes, the momentless membrane strains, and the inextensional bending modes are all represented.

  1. A statistical approach to rank multiple priorities in environmental epidemiology: an example from high-risk areas in Sardinia, Italy.

    PubMed

    Catelan, Dolores; Biggeri, Annibale

    2008-11-01

    In environmental epidemiology, long lists of relative risk estimates from exposed populations are compared to a reference to scrutinize the dataset for extremes. Here, inference on disease profiles for given areas, or for fixed disease population signatures, is of interest, and summaries can be obtained by averaging over areas or diseases. We have developed a multivariate hierarchical Bayesian approach to estimate posterior rank distributions, and we show how to produce league tables of ranks with credibility intervals useful for addressing the above-mentioned inferential problems. Applying the procedure to a real dataset from the report "Environment and Health in Sardinia (Italy)" we selected 18 areas characterized by high environmental pressure from industrial, mining or military activities, investigated for 29 causes of death among male residents. Ranking diseases highlighted the increased burdens of neoplastic (cancerous) and non-neoplastic respiratory diseases in the heavily polluted area of Portoscuso. The averaged ranks by disease over areas showed lung cancer among the three highest positions.
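
    The posterior-rank summary itself is mechanical once posterior draws are available. A sketch with simulated draws (not the Sardinian data): rank the areas within each draw, then report median ranks with credibility intervals:

        # Posterior rank distributions from MCMC-style draws of relative
        # risks; the draws here are simulated toy data.
        import numpy as np

        rng = np.random.default_rng(10)
        true_rr = np.array([0.8, 1.0, 1.1, 1.5, 2.0])   # 5 hypothetical areas
        draws = rng.lognormal(np.log(true_rr), 0.15, size=(4000, 5))

        # Rank within each posterior draw (1 = lowest relative risk).
        ranks = draws.argsort(axis=1).argsort(axis=1) + 1
        for a in range(5):
            lo, med, hi = np.percentile(ranks[:, a], [2.5, 50, 97.5])
            print(f"area {a}: median rank {med:.0f}, 95% CI [{lo:.0f}, {hi:.0f}]")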

  2. Ranking Reputation and Quality in Online Rating Systems

    PubMed Central

    Liao, Hao; Zeng, An; Xiao, Rui; Ren, Zhuo-Ming; Chen, Duan-Bing; Zhang, Yi-Cheng

    2014-01-01

    How to design an accurate and robust ranking algorithm is a fundamental problem with wide applications in many real systems. It is especially significant in online rating systems due to the existence of some spammers. In the literature, many well-performed iterative ranking methods have been proposed. These methods can effectively recognize the unreliable users and reduce their weight in judging the quality of objects, and finally lead to a more accurate evaluation of the online products. In this paper, we design an iterative ranking method with high performance in both accuracy and robustness. More specifically, a reputation redistribution process is introduced to enhance the influence of highly reputed users, and two penalty factors make the algorithm resistant to malicious behaviors. Validation of our method is performed in both artificial and real user-object bipartite networks. PMID:24819119
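
    A minimal iterative reputation scheme of the kind the paper builds on, with quality as a reputation-weighted mean of ratings and reputation as inverse disagreement, is sketched below; the reputation-redistribution step and the two penalty factors are omitted, and the ratings are toy data:

        # Iterative user-reputation / object-quality estimation on a toy
        # ratings matrix with a few injected spammers.
        import numpy as np

        rng = np.random.default_rng(7)
        true_q = rng.uniform(1, 5, 20)
        ratings = true_q + rng.normal(0, 0.2, (50, 20))
        ratings[:5] = rng.uniform(1, 5, (5, 20))    # five random spammers

        rep = np.ones(50)
        for _ in range(100):
            quality = rep @ ratings / rep.sum()     # reputation-weighted mean
            err = np.mean((ratings - quality) ** 2, axis=1)
            new_rep = 1.0 / (err + 1e-9)            # low disagreement -> high rep
            if np.allclose(new_rep, rep):
                break
            rep = new_rep

        print("spammer mean reputation:", rep[:5].mean())
        print("honest  mean reputation:", rep[5:].mean())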

  3. Blind separation of positive sources by globally convergent gradient search.

    PubMed

    Oja, Erkki; Plumbley, Mark

    2004-09-01

    The instantaneous noise-free linear mixing model in independent component analysis is largely a solved problem under the usual assumption of independent nongaussian sources and full column rank mixing matrix. However, with some prior information on the sources, like positivity, new analysis and perhaps simplified solution methods may yet become possible. In this letter, we consider the task of independent component analysis when the independent sources are known to be nonnegative and well grounded, which means that they have a nonzero pdf in the region of zero. It can be shown that in this case, the solution method is basically very simple: an orthogonal rotation of the whitened observation vector into nonnegative outputs will give a positive permutation of the original sources. We propose a cost function whose minimum coincides with nonnegativity and derive the gradient algorithm under the whitening constraint, under which the separating matrix is orthogonal. We further prove that in the Stiefel manifold of orthogonal matrices, the cost function is a Lyapunov function for the matrix gradient flow, implying global convergence. Thus, this algorithm is guaranteed to find the nonnegative well-grounded independent sources. The analysis is complemented by a numerical simulation, which illustrates the algorithm.
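
    A rough sketch of the procedure described: whiten without removing the mean (so the whitened data are an orthogonal rotation of the sources when the sources have unit covariance), then run projected gradient descent on the nonnegativity cost, re-orthonormalizing after each step. The step size, iteration count, and exponential sources are assumptions; the letter's analysis concerns the exact Stiefel-manifold gradient flow:

        # Nonnegative ICA sketch: minimize J(W) = E[ sum_i min(y_i, 0)^2 ]
        # over orthogonal W acting on whitened data.
        import numpy as np

        rng = np.random.default_rng(8)
        S = rng.exponential(1.0, (3, 5000))         # nonnegative, well grounded,
                                                    # unit-variance sources
        A = rng.standard_normal((3, 3))
        X = A @ S                                   # observed mixtures

        # Whiten with the covariance but keep the mean: Z = Q S for some
        # orthogonal Q when Cov(S) = I.
        d, E = np.linalg.eigh(np.cov(X))
        V = E @ np.diag(d ** -0.5) @ E.T
        Z = V @ X

        W = np.linalg.qr(rng.standard_normal((3, 3)))[0]
        for _ in range(2000):
            Y = W @ Z
            grad = 2.0 * np.minimum(Y, 0.0) @ Z.T / Z.shape[1]  # dJ/dW
            W = W - 0.5 * grad
            U, _, Vt = np.linalg.svd(W)             # retract onto orthogonal group
            W = U @ Vt

        print("residual cost:", np.mean(np.minimum(W @ Z, 0.0) ** 2))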

  4. Quantum Max-flow/Min-cut

    NASA Astrophysics Data System (ADS)

    Cui, Shawn X.; Freedman, Michael H.; Sattath, Or; Stong, Richard; Minton, Greg

    2016-06-01

    The classical max-flow min-cut theorem describes transport through certain idealized classical networks. We consider the quantum analog for tensor networks. By associating an integral capacity to each edge and a tensor to each vertex in a flow network, we can also interpret it as a tensor network and, more specifically, as a linear map from the input space to the output space. The quantum max-flow is defined to be the maximal rank of this linear map over all choices of tensors. The quantum min-cut is defined to be the minimum product of the capacities of edges over all cuts of the tensor network. We show that unlike the classical case, the quantum max-flow=min-cut conjecture is not true in general. Under certain conditions, e.g., when the capacity on each edge is some power of a fixed integer, the quantum max-flow is proved to equal the quantum min-cut. However, concrete examples are also provided where the equality does not hold. We also found connections of quantum max-flow/min-cut with entropy of entanglement and the quantum satisfiability problem. We speculate that the phenomena revealed may be of interest both in spin systems in condensed matter and in quantum gravity.

  5. Comparisons of methods for determining dominance rank in male and female prairie voles (Microtus ochrogaster)

    USGS Publications Warehouse

    Lanctot, Richard B.; Best, Louis B.

    2000-01-01

    Dominance ranks in male and female prairie voles (Microtus ochrogaster) were determined from 6 measurements that mimicked environmental situations that might be encountered by prairie voles in communal groups, including agonistic interactions resulting from competition for food and water and encounters in burrows. Male and female groups of 6 individuals each were tested against one another in pairwise encounters (i.e., dyads) for 5 of the measurements and together as a group in a 6th measurement. Two types of response variables, aggressive behaviors and possession time of a limiting resource, were collected during trials, and those data were used to determine cardinal ranks and principal component ranks for all animals within each group. Cardinal ranks and principal component ranks seldom yielded similar rankings for each animal across measurements. However, dominance measurements that were conducted in similar environmental contexts, regardless of the response variable recorded, ranked animals similarly. Our results suggest that individual dominance measurements assessed situation- or resource-specific responses. Our study demonstrates problems inherent in determining dominance rankings of individuals within groups, including choosing measurements, response variables, and statistical techniques. Researchers should avoid using a single measurement to represent social dominance until they have first demonstrated that a dominance relationship between 2 individuals has been learned (i.e., subsequent interactions show a reduced response rather than an escalation), that this relationship is relatively constant through time, and that the relationship is not context dependent. Such assessments of dominance status between all dyads then can be used to generate dominance rankings within social groups.

  6. On Making a Distinguished Vertex Minimum Degree by Vertex Deletion

    NASA Astrophysics Data System (ADS)

    Betzler, Nadja; Bredereck, Robert; Niedermeier, Rolf; Uhlmann, Johannes

    For directed and undirected graphs, we study the problem to make a distinguished vertex the unique minimum-(in)degree vertex through deletion of a minimum number of vertices. The corresponding NP-hard optimization problems are motivated by applications concerning control in elections and social network analysis. Continuing previous work for the directed case, we show that the problem is W[2]-hard when parameterized by the graph's feedback arc set number, whereas it becomes fixed-parameter tractable when combining the parameters "feedback vertex set number" and "number of vertices to delete". For the so far unstudied undirected case, we show that the problem is NP-hard and W[1]-hard when parameterized by the "number of vertices to delete". On the positive side, we show fixed-parameter tractability for several parameterizations measuring tree-likeness, including a vertex-linear problem kernel with respect to the parameter "feedback edge set number". On the contrary, we show a non-existence result concerning polynomial-size problem kernels for the combined parameter "vertex cover number and number of vertices to delete", implying corresponding nonexistence results when replacing vertex cover number by treewidth or feedback vertex set number.

  7. Conservation threats and the phylogenetic utility of IUCN Red List rankings in Incilius toads.

    PubMed

    Schachat, Sandra R; Mulcahy, Daniel G; Mendelson, Joseph R

    2016-02-01

    Phylogenetic analysis of extinction threat is an emerging tool in the field of conservation. However, there are problems with the methods and data as commonly used. Phylogenetic sampling usually extends to the level of family or genus, but International Union for Conservation of Nature (IUCN) rankings are available only for individual species, and, although different species within a taxonomic group may have the same IUCN rank, the species may have been ranked as such for different reasons. Therefore, IUCN rank may not reflect evolutionary history and thus may not be appropriate for use in a phylogenetic context. To be used appropriately, threat-risk data should reflect the cause of extinction threat rather than the IUCN threat ranking. In a case study of the toad genus Incilius, with phylogenetic sampling at the species level (so that the resolution of the phylogeny matches character data from the IUCN Red List), we analyzed causes of decline and IUCN threat rankings by calculating metrics of phylogenetic signal (such as Fritz and Purvis' D). We also analyzed the extent to which cause of decline and threat ranking overlap by calculating phylogenetic correlation between these 2 types of character data. Incilius species varied greatly in both threat ranking and cause of decline; this variability would be lost at a coarser taxonomic resolution. We found far more phylogenetic signal, likely correlated with evolutionary history, for causes of decline than for IUCN threat ranking. Individual causes of decline and IUCN threat rankings were largely uncorrelated on the phylogeny. Our results demonstrate the importance of character selection and taxonomic resolution when extinction threat is analyzed in a phylogenetic context. © 2015 Society for Conservation Biology.

  8. Fluorescence Excitation Spectroscopy for Phytoplankton Species Classification Using an All-Pairs Method: Characterization of a System with Unexpectedly Low Rank.

    PubMed

    Rekully, Cameron M; Faulkner, Stefan T; Lachenmyer, Eric M; Cunningham, Brady R; Shaw, Timothy J; Richardson, Tammi L; Myrick, Michael L

    2018-03-01

    An all-pairs method is used to analyze phytoplankton fluorescence excitation spectra. An initial set of nine phytoplankton species is analyzed in pairwise fashion to select two optical filter sets, and then the two filter sets are used to explore variations among a total of 31 species in a single-cell fluorescence imaging photometer. Results are presented in terms of pair analyses; we report that 411 of the 465 possible pairings of the larger group of 31 species can be distinguished using the initial nine-species-based selection of optical filters. A bootstrap analysis based on the larger data set shows that the distribution of possible pair separation results based on a randomly selected nine-species initial calibration set is strongly peaked in the 410-415 pair separation range, consistent with our experimental result. Further, filter selection using all 31 species also yields 411 pair separations. The set of phytoplankton fluorescence excitation spectra is intuitively high in rank due to the number and variety of pigments that contribute to the spectrum. However, the results in this report are consistent with an effective rank, as determined by a variety of heuristic and statistical methods, in the range of 2-3. These results are reviewed in light of how consistent the filter selections are from model to model for the data presented here. We discuss the common observation that rank is generally found to be relatively low even in many seemingly complex circumstances, so that it may be productive to assume a low rank from the beginning. If a low-rank hypothesis is valid, then relatively few samples are needed to explore an experimental space; under very restricted circumstances for uniformly distributed samples, the minimum number for an initial analysis might be as low as 8-11 random samples for 1-3 factors.
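
    The abstract names only "heuristic and statistical methods" for determining effective rank; one common SVD-based heuristic, counting the singular values needed to explain most of the variance, is sketched below as an assumption, not as the paper's specific procedure:

```python
# A sketch of one common "effective rank" heuristic: count the singular
# values of a (mean-centered) spectra matrix needed to explain a target
# fraction of the variance. Illustrative only; the paper's methods may differ.
import numpy as np

def effective_rank(X, energy=0.99):
    """X: (n_samples, n_wavelengths) matrix of excitation spectra."""
    s = np.linalg.svd(X - X.mean(axis=0), compute_uv=False)
    frac = np.cumsum(s**2) / np.sum(s**2)       # cumulative explained variance
    return int(np.searchsorted(frac, energy) + 1)

# Synthetic demo: 31 spectra mixed from 3 underlying pigment spectra have
# full numerical rank but a small effective rank (~3 here).
rng = np.random.default_rng(0)
base = rng.standard_normal((3, 120))
mix = rng.random((31, 3)) @ base
print(effective_rank(mix + 0.01 * rng.standard_normal(mix.shape)))
```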

  9. Stochastic Methods for Aircraft Design

    NASA Technical Reports Server (NTRS)

    Pelz, Richard B.; Ogot, Madara

    1998-01-01

    The global stochastic optimization method simulated annealing (SA) was adapted and applied to various problems in aircraft design. The research was aimed at overcoming the problem of finding an optimal design in a space with multiple minima and the roughness ubiquitous to numerically generated nonlinear objective functions. SA was modified to reduce the number of objective function evaluations needed to reach an optimal design, historically the main criticism of stochastic methods. SA was applied to many CFD/MDO problems, including low sonic-boom bodies, minimum drag on supersonic fore-bodies, minimum drag on supersonic aeroelastic fore-bodies, minimum drag on HSCT aeroelastic wings, the FLOPS preliminary design code, another preliminary aircraft design study with vortex-lattice aerodynamics, and HSR complete aircraft aerodynamics. In every case, SA provided a simple, robust and reliable optimization method which found optimal designs in on the order of 100 objective function evaluations. Perhaps most importantly, from this academic/industrial project, technology has been successfully transferred; this method is the method of choice for optimization problems at Northrop Grumman.
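
    For reference, a minimal simulated-annealing loop is sketched below. The one-dimensional objective, neighbourhood rule and geometric cooling schedule are generic placeholders, not the aircraft-design objectives or the paper's modified acceptance scheme:

```python
# A minimal simulated-annealing sketch on a toy double-well objective.
import math
import random

def simulated_annealing(f, x0, step=0.5, T0=1.0, cooling=0.95, iters=100):
    x, fx = x0, f(x0)
    best, fbest, T = x, fx, T0
    for _ in range(iters):                   # ~100 evaluations, as reported
        y = x + random.uniform(-step, step)  # random neighbour
        fy = f(y)
        # accept downhill always; accept uphill with Boltzmann probability
        if fy < fx or random.random() < math.exp(-(fy - fx) / T):
            x, fx = y, fy
        if fx < fbest:
            best, fbest = x, fx
        T *= cooling                         # geometric cooling
    return best, fbest

# A short run settles into one of the two wells of x^4 - 3x^2 + x.
print(simulated_annealing(lambda x: x**4 - 3 * x**2 + x, x0=2.0))
```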

  10. Identification of priorities for improvement of medication safety in primary care: a PRIORITIZE study.

    PubMed

    Tudor Car, Lorainne; Papachristou, Nikolaos; Gallagher, Joseph; Samra, Rajvinder; Wazny, Kerri; El-Khatib, Mona; Bull, Adrian; Majeed, Azeem; Aylin, Paul; Atun, Rifat; Rudan, Igor; Car, Josip; Bell, Helen; Vincent, Charles; Franklin, Bryony Dean

    2016-11-16

    Medication error is a frequent, harmful and costly patient safety incident. Research to date has mostly focused on medication errors in hospitals. In this study, we aimed to identify the main causes of, and solutions to, medication error in primary care. We used a novel priority-setting method for identifying and ranking patient safety problems and solutions called PRIORITIZE. We invited 500 North West London primary care clinicians to complete an open-ended questionnaire to identify three main problems and solutions relating to medication error in primary care. 113 clinicians submitted responses, which we thematically synthesized into a composite list of 48 distinct problems and 45 solutions. A group of 57 clinicians randomly selected from the initial cohort scored these and an overall ranking was derived. The agreement between the clinicians' scores was presented using the average expert agreement (AEA). The study was conducted between September 2013 and November 2014. The top three problems were incomplete reconciliation of medication during patient 'hand-overs', inadequate patient education about their medication use and poor discharge summaries. The highest ranked solutions included development of a standardized discharge summary template, reduction of unnecessary prescribing, and minimisation of polypharmacy. Overall, better communication between the healthcare provider and patient, quality assurance approaches during medication prescribing and monitoring, and patient education on how to use their medication were considered the top priorities. The highest ranked suggestions received the strongest agreement among the clinicians, i.e. the highest AEA score. Clinicians identified a range of suggestions for better medication management, quality assurance procedures and patient education. According to clinicians, medication errors can be largely prevented with feasible and affordable interventions. PRIORITIZE is a new, convenient, systematic, and replicable method, and merits further exploration with a view to becoming a part of a routine preventative patient safety monitoring mechanism.

  11. Transforming graph states using single-qubit operations.

    PubMed

    Dahlberg, Axel; Wehner, Stephanie

    2018-07-13

    Stabilizer states form an important class of states in quantum information, and are of central importance in quantum error correction. Here, we provide an algorithm for deciding whether one stabilizer (target) state can be obtained from another stabilizer (source) state by single-qubit Clifford operations (LC), single-qubit Pauli measurements (LPM) and classical communication (CC) between sites holding the individual qubits. What is more, we provide a recipe to obtain the sequence of LC+LPM+CC operations which prepare the desired target state from the source state, and show how these operations can be applied in parallel to reach the target state in constant time. Our algorithm has applications in quantum networks and quantum computing, and can also serve as a design tool, for example to find transformations between quantum error correcting codes. We provide a software implementation of our algorithm that makes this tool easier to apply. A key insight leading to our algorithm is to show that the problem is equivalent to one in graph theory, which is to decide whether some graph G' is a vertex-minor of another graph G. The vertex-minor problem is, in general, NP-Complete, but can be solved efficiently on graphs which are not too complex. A measure of the complexity of a graph is the rank-width, which equals the Schmidt-rank width of a subclass of stabilizer states called graph states, and thus intuitively is a measure of entanglement. Here, we show that the vertex-minor problem can be solved in time O(|G|^3), where |G| is the size of the graph G, whenever the rank-width of G and the size of G' are bounded. Our algorithm is based on techniques by Courcelle for solving fixed-parameter tractable problems, where here the relevant fixed parameter is the rank-width. The second half of this paper serves as an accessible but far from exhaustive introduction to these concepts, which could be useful for many other problems in quantum information. This article is part of a discussion meeting issue 'Foundations of quantum mechanics and their impact on contemporary society'. © 2018 The Author(s).

  12. Estimates of the absolute error and a scheme for an approximate solution to scheduling problems

    NASA Astrophysics Data System (ADS)

    Lazarev, A. A.

    2009-02-01

    An approach is proposed for estimating absolute errors and finding approximate solutions to classical NP-hard scheduling problems of minimizing the maximum lateness on one or several machines and of minimizing the makespan. The concept of a metric (distance) between instances of the problem is introduced. The idea behind the approach is, given the problem instance, to construct another instance for which an optimal or approximate solution can be found at the minimum distance from the initial instance in the metric introduced. Instead of solving the original problem (instance), a set of approximating polynomially/pseudopolynomially solvable problems (instances) are considered, an instance at the minimum distance from the given one is chosen, and the resulting schedule is then applied to the original instance.
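
    The scheme can be illustrated on a toy single-machine example: solve a "nearby" instance optimally (here with the earliest-due-date rule, which is optimal for minimizing maximum lateness without release dates) and apply its schedule to the original instance. The metric used below, the largest change in any due date, is an assumption for illustration; the paper's metric and problem variants may differ:

```python
# Illustrative sketch of the instance-metric idea for 1||Lmax.
def lmax(jobs, order):
    """jobs: {name: (processing_time, due_date)}; order: job sequence."""
    t, worst = 0, float("-inf")
    for j in order:
        p, d = jobs[j]
        t += p
        worst = max(worst, t - d)          # lateness of job j
    return worst

def edd_order(jobs):
    return sorted(jobs, key=lambda j: jobs[j][1])   # earliest due date first

def due_date_distance(a, b):
    # one conceivable metric: the largest change in any due date
    return max(abs(a[j][1] - b[j][1]) for j in a)

original = {"A": (3, 4), "B": (2, 9), "C": (4, 5)}
nearby   = {"A": (3, 4), "B": (2, 8), "C": (4, 5)}  # solvable stand-in
print(lmax(original, edd_order(nearby)), due_date_distance(original, nearby))
```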

  13. Stopping Criteria for Log-Domain Diffeomorphic Demons Registration: An Experimental Survey for Radiotherapy Application.

    PubMed

    Peroni, M; Golland, P; Sharp, G C; Baroni, G

    2016-02-01

    A crucial issue in deformable image registration is achieving a robust registration algorithm at a reasonable computational cost. Given the iterative nature of the optimization procedure, an algorithm must automatically detect convergence and stop the iterative process when most appropriate. This paper ranks the performance of three stopping criteria and six stopping-value computation strategies for a Log-Domain Demons deformable registration method, simulating both a coarse and a fine registration. The analyzed stopping criteria are: (a) velocity field update magnitude, (b) mean squared error, and (c) harmonic energy. Each stopping condition is formulated so that the user defines a threshold ∊, which quantifies the residual error that is acceptable for the particular problem and calculation strategy. In this work, we did not aim at assigning a value to ∊, but at giving insight into how to evaluate and set the threshold for a given exit strategy in a very popular registration scheme. Experiments on phantom and patient data demonstrate that comparing the optimization metric minimum over the most recent three iterations with the minimum over the fourth to sixth most recent iterations can be an appropriate stopping strategy. The harmonic energy was found to provide the best trade-off between robustness and speed of convergence for the analyzed registration method at coarse registration, but was outperformed by mean squared error when all the original pixel information is used. This suggests the need to develop mathematically sound new convergence criteria in which both image and vector field information could be used to detect the actual convergence, which could be especially useful when considering multi-resolution registrations. Further work should also be dedicated to studying the same strategies' performance in other deformable registration methods and anatomical regions. © The Author(s) 2014.
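
    The stopping strategy the experiments support translates directly into code; a minimal sketch (function name and example values are illustrative) is:

```python
# Stop when the minimum of the optimization metric over the three most
# recent iterations improves on the minimum over the fourth- to sixth-most-
# recent iterations by less than the user-defined threshold eps.
def should_stop(history, eps):
    """history: metric values per iteration, most recent last."""
    if len(history) < 6:
        return False
    recent = min(history[-3:])       # iterations n-2 .. n
    earlier = min(history[-6:-3])    # iterations n-5 .. n-3
    return earlier - recent < eps    # negligible improvement -> converged

# Example: improvement has levelled off, so the loop would stop.
print(should_stop([5.0, 2.1, 2.08, 2.06, 2.05, 2.04], eps=0.1))  # True
```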

  14. Realistic Approach to Innovation.

    ERIC Educational Resources Information Center

    Dawson, Garth C.

    Part of the Omaha police in-service training program was devoted to innovative approaches to solving police department problems and improving community relations. The sessions were an attempt to use the brainstorming technique to elicit new solutions to everyday problems faced by the rank-and-file members of the police department. The report…

  15. Telemetry, Tracking, and Control Working Group report

    NASA Technical Reports Server (NTRS)

    Campbell, Richard; Rogers, L. Joseph

    1986-01-01

    After assessing the design implications and the criteria to be used in technology selection, the technical problems that face the telemetry, tracking, and control (TTC) area were defined. For each of the problems identified, recommendations were made for needed technology developments. These recommendations are listed and ranked according to priority.

  16. Legislation Affecting School Crime and Violence.

    ERIC Educational Resources Information Center

    Menacker, Julius

    National polls of public attitudes toward public education consistently rank school safety and drug abuse at the top of the problem list. This paper describes some federal and state legislative responses to the problems and offers a preventative approach. Federal legislation has taken the form of two major statutes--the Comprehensive Drug Abuse…

  17. Indoor Air Quality Basics for Schools.

    ERIC Educational Resources Information Center

    Environmental Protection Agency, Washington, DC. Office of Radiation and Indoor Air.

    This fact sheet details important information on Indoor Air Quality (IAQ) in school buildings, problems associated with IAQ, and various prevention and problem-solving strategies. Most people spend 90 percent of their time indoors; therefore, the Environmental Protection Agency ranks IAQ among the top four environmental risks to the public. The…

  18. Low rank approach to computing first and higher order derivatives using automatic differentiation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reed, J. A.; Abdel-Khalik, H. S.; Utke, J.

    2012-07-01

    This manuscript outlines a new approach for increasing the efficiency of applying automatic differentiation (AD) to large scale computational models. By using the principles of the Efficient Subspace Method (ESM), low rank approximations of the derivatives for first and higher orders can be calculated using minimized computational resources. The output obtained from nuclear reactor calculations typically has a much smaller numerical rank compared to the number of inputs and outputs. This rank deficiency can be exploited to reduce the number of derivatives that need to be calculated using AD. The effective rank can be determined according to ESM by computing derivatives with AD at random inputs. Reduced or pseudo variables are then defined and new derivatives are calculated with respect to the pseudo variables. Two different AD packages are used: OpenAD and Rapsodia. OpenAD is used to determine the effective rank and the subspace that contains the derivatives. Rapsodia is then used to calculate derivatives with respect to the pseudo variables for the desired order. The overall approach is applied to two simple problems and to MATWS, a safety code for sodium cooled reactors. (authors)
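
    The ESM step of probing derivatives at random inputs and extracting the subspace they span can be sketched in numpy as below. Finite differences stand in here for the AD tools (OpenAD/Rapsodia), and all names and tolerances are assumptions:

```python
# Probe a model at random inputs, collect gradients, and use an SVD to find
# the low-dimensional subspace ("pseudo variables") containing all derivatives.
import numpy as np

def gradient(f, x, h=1e-6):
    """Forward-difference gradient; stands in for an AD-computed derivative."""
    fx = f(x)
    return np.array([(f(x + h * e) - fx) / h for e in np.eye(len(x))])

def pseudo_variable_basis(f, n, n_probes=20, tol=1e-4):
    rng = np.random.default_rng(0)
    G = np.column_stack([gradient(f, rng.standard_normal(n))
                         for _ in range(n_probes)])   # gradients as columns
    U, s, _ = np.linalg.svd(G, full_matrices=False)
    r = int(np.sum(s > tol * s[0]))                   # effective rank
    return U[:, :r]                                   # pseudo-variable basis

# Example: f depends on its 10 inputs only through 2 linear combinations,
# so all further derivatives live in a 2-dimensional subspace.
A = np.random.default_rng(1).standard_normal((2, 10))
f = lambda x: float(np.sum(np.tanh(A @ x) ** 2))
print(pseudo_variable_basis(f, 10).shape)             # (10, 2)
```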

  19. Validation of an association rule mining-based method to infer associations between medications and problems.

    PubMed

    Wright, A; McCoy, A; Henkin, S; Flaherty, M; Sittig, D

    2013-01-01

    In a prior study, we developed methods for automatically identifying associations between medications and problems using association rule mining on a large clinical data warehouse and validated these methods at a single site which used a self-developed electronic health record. Our objective here is to demonstrate the generalizability of these methods by validating them at an external site. We received data on medications and problems for 263,597 patients from the University of Texas Health Science Center at Houston Faculty Practice, an ambulatory practice that uses the Allscripts Enterprise commercial electronic health record product. We then conducted association rule mining to identify associated pairs of medications and problems and characterized these associations with five measures of interestingness: support, confidence, chi-square, interest, and conviction, and compared the top-ranked pairs to a gold standard. 25,088 medication-problem pairs were identified that exceeded our confidence and support thresholds. An analysis of the top 500 pairs according to each measure of interestingness showed a high degree of accuracy for highly-ranked pairs. The same technique was successfully employed at the University of Texas and accuracy was comparable to our previous results. Top associations included many medications that are highly specific for a particular problem as well as a large number of common, accurate medication-problem pairs that reflect practice patterns.
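
    The five interestingness measures have standard definitions over a 2x2 contingency table of co-occurrence counts; a sketch using those standard formulas is below (the study's exact thresholds and preprocessing are not given in the abstract):

```python
# Standard interestingness measures for one medication-problem pair, from
# counts of patients on the medication, with the problem, with both, and total.
def interestingness(n_med, n_prob, n_both, n_total):
    p_m, p_p, p_mp = n_med / n_total, n_prob / n_total, n_both / n_total
    support = p_mp
    confidence = n_both / n_med                 # P(problem | medication)
    interest = p_mp / (p_m * p_p)               # a.k.a. lift
    conviction = ((1 - p_p) / (1 - confidence)
                  if confidence < 1 else float("inf"))
    # chi-square over the full 2x2 contingency table
    obs = [n_both, n_med - n_both,
           n_prob - n_both, n_total - n_med - n_prob + n_both]
    exp = [n_total * p * q for p in (p_m, 1 - p_m) for q in (p_p, 1 - p_p)]
    chi_square = sum((o - e) ** 2 / e for o, e in zip(obs, exp))
    return support, confidence, chi_square, interest, conviction

print(interestingness(n_med=900, n_prob=1200, n_both=800, n_total=263597))
```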

  20. Low-rank matrix fitting based on subspace perturbation analysis with applications to structure from motion.

    PubMed

    Jia, Hongjun; Martinez, Aleix M

    2009-05-01

    The task of finding a low-rank (r) matrix that best fits an original data matrix of higher rank is a recurring problem in science and engineering. The problem becomes especially difficult when the original data matrix has some missing entries and contains an unknown additive noise term in the remaining elements. The former problem can be solved by concatenating a set of r-column matrices that share a common single r-dimensional solution space. Unfortunately, the number of possible submatrices is generally very large and, hence, the results obtained with one set of r-column matrices will generally be different from that captured by a different set. Ideally, we would like to find that solution that is least affected by noise. This requires that we determine which of the r-column matrices (i.e., which of the original feature points) are less influenced by the unknown noise term. This paper presents a criterion to successfully carry out such a selection. Our key result is to formally prove that the more distinct the r vectors of the r-column matrices are, the less they are swayed by noise. This key result is then combined with the use of a noise model to derive an upper bound for the effect that noise and occlusions have on each of the r-column matrices. It is shown how this criterion can be effectively used to recover the noise-free matrix of rank r. Finally, we derive the affine and projective structure-from-motion (SFM) algorithms using the proposed criterion. Extensive validation on synthetic and real data sets shows the superiority of the proposed approach over the state of the art.
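
    As background for the complete-data case: by the Eckart-Young theorem, the best rank-r fit in the least-squares sense is the truncated SVD. The paper's criterion addresses the harder setting with missing entries and noise; the baseline is simply:

```python
# Baseline rank-r fit of a complete data matrix via truncated SVD.
import numpy as np

def best_rank_r(W, r):
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r]            # rank-r approximation

rng = np.random.default_rng(0)
W = rng.standard_normal((20, 4)) @ rng.standard_normal((4, 15))  # rank 4
noisy = W + 1e-3 * rng.standard_normal(W.shape)
print(np.allclose(best_rank_r(noisy, 4), W, atol=0.1))           # True
```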

  1. Comparative minimum inhibitory and mutant prevention drug concentrations of enrofloxacin, ceftiofur, florfenicol, tilmicosin and tulathromycin against bovine clinical isolates of Mannheimia haemolytica.

    PubMed

    Blondeau, J M; Borsos, S; Blondeau, L D; Blondeau, B J J; Hesje, C E

    2012-11-09

    Mannheimia haemolytica is the most prevalent cause of bovine respiratory disease (BRD); this disease accounts for 75% of morbidity and 50-70% of feedlot deaths, and is estimated to cost up to $1 billion annually in the USA. Antimicrobial therapy is essential for reducing morbidity, mortality and the financial burden of this disease. Due to the concern of increasing antimicrobial resistance, investigation of antibacterial agents for their potential for selecting for resistance is of paramount importance. A novel in vitro measurement called the mutant prevention concentration (MPC) defines the antimicrobial drug concentration necessary to block the growth of the least susceptible cells present in high density (≥10^7 colony-forming units/ml) bacterial populations such as those seen in acute infection. We compared the minimum inhibitory concentration (MIC) and MPC values for 5 antimicrobial agents (ceftiofur, enrofloxacin, florfenicol, tilmicosin, tulathromycin) against 285 M. haemolytica clinical isolates. The MIC90/MPC90 values for these agents, respectively, were 0.016/2, 0.125/1, 2/≥16, 8/≥32 and 2/8. Dosing to achieve MPC concentrations (where possible) may serve to reduce the selection of bacterial subpopulations with reduced antimicrobial susceptibility. The rank order of potency based on MIC90 values was ceftiofur > enrofloxacin > florfenicol = tulathromycin > tilmicosin. The rank order of potency based on MPC90 values was enrofloxacin > ceftiofur > tulathromycin > florfenicol ≥ tilmicosin. Copyright © 2012 Elsevier B.V. All rights reserved.

  2. Data integration in physiology using Bayes’ rule and minimum Bayes’ factors: deubiquitylating enzymes in the renal collecting duct

    PubMed Central

    Xue, Zhe; Chen, Jia-Xu; Zhao, Yue; Medvar, Barbara

    2017-01-01

    A major challenge in physiology is to exploit the many large-scale data sets available from “-omic” studies to seek answers to key physiological questions. In previous studies, Bayes’ theorem has been used for this purpose. This approach requires a means to map continuously distributed experimental data to probabilities (likelihood values) to derive posterior probabilities from the combination of prior probabilities and new data. Here, we introduce the use of minimum Bayes’ factors for this purpose and illustrate the approach by addressing a physiological question, “Which deubiquitylating enzymes (DUBs) encoded by mammalian genomes are most likely to regulate plasma membrane transport processes in renal cortical collecting duct principal cells?” To do this, we have created a comprehensive online database of 110 DUBs present in the mammalian genome (https://hpcwebapps.cit.nih.gov/ESBL/Database/DUBs/). We used Bayes’ theorem to integrate available information from large-scale data sets derived from proteomic and transcriptomic studies of renal collecting duct cells to rank the 110 known DUBs with regard to likelihood of interacting with and regulating transport processes. The top-ranked DUBs were OTUB1, USP14, PSMD7, PSMD14, USP7, USP9X, OTUD4, USP10, and UCHL5. Among these USP7, USP9X, OTUD4, and USP10 are known to be involved in endosomal trafficking and have potential roles in endosomal recycling of plasma membrane proteins in the mammalian cortical collecting duct. PMID:28039431

  3. Data integration in physiology using Bayes' rule and minimum Bayes' factors: deubiquitylating enzymes in the renal collecting duct.

    PubMed

    Xue, Zhe; Chen, Jia-Xu; Zhao, Yue; Medvar, Barbara; Knepper, Mark A

    2017-03-01

    A major challenge in physiology is to exploit the many large-scale data sets available from "-omic" studies to seek answers to key physiological questions. In previous studies, Bayes' theorem has been used for this purpose. This approach requires a means to map continuously distributed experimental data to probabilities (likelihood values) to derive posterior probabilities from the combination of prior probabilities and new data. Here, we introduce the use of minimum Bayes' factors for this purpose and illustrate the approach by addressing a physiological question, "Which deubiquitylating enzymes (DUBs) encoded by mammalian genomes are most likely to regulate plasma membrane transport processes in renal cortical collecting duct principal cells?" To do this, we have created a comprehensive online database of 110 DUBs present in the mammalian genome (https://hpcwebapps.cit.nih.gov/ESBL/Database/DUBs/). We used Bayes' theorem to integrate available information from large-scale data sets derived from proteomic and transcriptomic studies of renal collecting duct cells to rank the 110 known DUBs with regard to likelihood of interacting with and regulating transport processes. The top-ranked DUBs were OTUB1, USP14, PSMD7, PSMD14, USP7, USP9X, OTUD4, USP10, and UCHL5. Among these USP7, USP9X, OTUD4, and USP10 are known to be involved in endosomal trafficking and have potential roles in endosomal recycling of plasma membrane proteins in the mammalian cortical collecting duct. Copyright © 2017 the American Physiological Society.
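
    The abstract describes mapping data to likelihoods via minimum Bayes' factors and chaining updates across data sets, but not the exact mapping. One standard closed form (Goodman's bound, BF_min = exp(-z^2/2) for a z-statistic) is assumed in the sketch below for illustration:

```python
# Posterior updating with minimum Bayes' factors. The closed form used here
# (Goodman's bound for a z-statistic) is an assumption for illustration; the
# paper's exact data-to-likelihood mapping is not given in the abstract.
import math

def min_bayes_factor(z):
    return math.exp(-z * z / 2.0)      # strongest evidence against H0, <= 1

def update(prior_prob, z):
    """Combine a prior probability for H0 with new data via Bayes' rule."""
    prior_odds = prior_prob / (1.0 - prior_prob)
    post_odds = prior_odds * min_bayes_factor(z)
    return post_odds / (1.0 + post_odds)

# Chaining data sets: each -omic data set further updates the posterior.
p = 0.5                                 # uninformative prior for one DUB
for z in (1.8, 2.4, 0.3):               # z-scores from three data sets
    p = update(p, z)
print(p)                                # posterior probability of H0
```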

  4. Application of multiple criteria decision methods in space exploration initiative design and planning

    NASA Technical Reports Server (NTRS)

    Masud, Abu S. M.

    1991-01-01

    Fellowship activities were directed towards the identification of opportunities for application of the Multiple Criteria Decision Making (MCDM) techniques in the Space Exploration Initiative (SEI) domain. I identified several application possibilities and proposed demonstration application in these three areas: evaluation and ranking of SEI architectures, space mission planning and selection, and space system design. Here, only the first problem is discussed. The most meaningful result of the analysis is the wide separation between the two top ranked architectures, indicating a significant preference difference between them. It must also be noted that the final ranking reflects, to some extent, the biases of the evaluators and their understanding of the architecture.

  5. SIMULTANEOUS MULTISLICE MAGNETIC RESONANCE FINGERPRINTING WITH LOW-RANK AND SUBSPACE MODELING

    PubMed Central

    Zhao, Bo; Bilgic, Berkin; Adalsteinsson, Elfar; Griswold, Mark A.; Wald, Lawrence L.; Setsompop, Kawin

    2018-01-01

    Magnetic resonance fingerprinting (MRF) is a new quantitative imaging paradigm that enables simultaneous acquisition of multiple magnetic resonance tissue parameters (e.g., T1, T2, and spin density). Recently, MRF has been integrated with simultaneous multislice (SMS) acquisitions to enable volumetric imaging with faster scan time. In this paper, we present a new image reconstruction method based on low-rank and subspace modeling for improved SMS-MRF. Here the low-rank model exploits strong spatiotemporal correlation among contrast-weighted images, while the subspace model captures the temporal evolution of magnetization dynamics. With the proposed model, the image reconstruction problem is formulated as a convex optimization problem, for which we develop an algorithm based on variable splitting and the alternating direction method of multipliers. The performance of the proposed method has been evaluated by numerical experiments, and the results demonstrate that the proposed method leads to improved accuracy over the conventional approach. Practically, the proposed method has a potential to allow for a 3x speedup with minimal reconstruction error, resulting in less than 5 sec imaging time per slice. PMID:29060594

  6. Simultaneous multislice magnetic resonance fingerprinting with low-rank and subspace modeling.

    PubMed

    Bo Zhao; Bilgic, Berkin; Adalsteinsson, Elfar; Griswold, Mark A; Wald, Lawrence L; Setsompop, Kawin

    2017-07-01

    Magnetic resonance fingerprinting (MRF) is a new quantitative imaging paradigm that enables simultaneous acquisition of multiple magnetic resonance tissue parameters (e.g., T1, T2, and spin density). Recently, MRF has been integrated with simultaneous multislice (SMS) acquisitions to enable volumetric imaging with faster scan time. In this paper, we present a new image reconstruction method based on low-rank and subspace modeling for improved SMS-MRF. Here the low-rank model exploits strong spatiotemporal correlation among contrast-weighted images, while the subspace model captures the temporal evolution of magnetization dynamics. With the proposed model, the image reconstruction problem is formulated as a convex optimization problem, for which we develop an algorithm based on variable splitting and the alternating direction method of multipliers. The performance of the proposed method has been evaluated by numerical experiments, and the results demonstrate that the proposed method leads to improved accuracy over the conventional approach. Practically, the proposed method has a potential to allow for a 3× speedup with minimal reconstruction error, resulting in less than 5 sec imaging time per slice.

  7. Supplier Selection Using Weighted Utility Additive Method

    NASA Astrophysics Data System (ADS)

    Karande, Prasad; Chakraborty, Shankar

    2015-10-01

    Supplier selection is a multi-criteria decision-making (MCDM) problem which mainly involves evaluating a number of available suppliers according to a set of common criteria for choosing the best one to meet the organizational needs. For any manufacturing or service organization, selecting the right upstream suppliers is a key success factor that will significantly reduce purchasing cost, increase downstream customer satisfaction and improve competitive ability. Past researchers have attempted to solve the supplier selection problem employing different MCDM techniques which involve active participation of the decision makers in the decision-making process. This paper deals with the application of the weighted utility additive (WUTA) method for solving supplier selection problems. The WUTA method, an extension of the utility additive approach, is based on ordinal regression and consists of building a piece-wise linear additive decision model from a preference structure using linear programming (LP). It adopts the preference disaggregation principle and addresses decision-making activities through operational models which need implicit preferences in the form of a preorder of reference alternatives, or a subset of these alternatives, present in the process. The preferential preorder provided by the decision maker is used as a restriction of an LP problem, whose objective function is the minimization of the sum of the errors associated with the ranking of each alternative. Based on a given reference ranking of alternatives, one or more additive utility functions are derived. Using these utility functions, the weighted utilities for individual criterion values are combined into an overall weighted utility for a given alternative. It is observed that the WUTA method, having a sound mathematical background, can provide an accurate ranking of the candidate suppliers and choose the best one to fulfill the organizational requirements. Two real-life examples are illustrated to prove its applicability and appropriateness in solving supplier selection problems.

  8. C-semiring Frameworks for Minimum Spanning Tree Problems

    NASA Astrophysics Data System (ADS)

    Bistarelli, Stefano; Santini, Francesco

    In this paper we define general algebraic frameworks for the Minimum Spanning Tree problem based on the structure of c-semirings. We propose general algorithms that can compute such trees by following different cost criteria, which must all be specific instantiations of c-semirings. Our algorithms are extensions of well-known procedures, such as Prim's or Kruskal's, and show the expressivity of these algebraic structures. They can also deal with partially ordered costs on the edges.
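
    The flavor of the framework can be sketched by parameterizing Kruskal's algorithm over a c-semiring-like cost structure: a combining operation, its unit, and the induced order. The sketch below assumes the order is total (sorting needs this; the paper's framework also handles partial orders) and all names are illustrative:

```python
# Kruskal's algorithm parameterized by a c-semiring-like cost structure:
# "times" combines edge costs, "one" is its unit, and "leq" is the order.
from functools import cmp_to_key

def kruskal(n, edges, times, leq, one):
    """edges: (cost, u, v) with vertices 0..n-1; returns (cost, tree edges)."""
    parent = list(range(n))
    def find(x):                               # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    key = cmp_to_key(lambda a, b: int(leq(b[0], a[0])) - int(leq(a[0], b[0])))
    total, tree = one, []
    for cost, u, v in sorted(edges, key=key):
        ru, rv = find(u), find(v)
        if ru != rv:                           # adding the edge makes no cycle
            parent[ru] = rv
            total, tree = times(total, cost), tree + [(u, v)]
    return total, tree

# The classical MST is the instantiation <R+, min, +, +inf, 0>.
edges = [(4, 0, 1), (1, 0, 2), (3, 1, 2), (2, 1, 3), (5, 2, 3)]
print(kruskal(4, edges, times=lambda a, b: a + b,
              leq=lambda a, b: a <= b, one=0))   # (6, [(0, 2), (1, 3), (1, 2)])
```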

  9. Web Image Search Re-ranking with Click-based Similarity and Typicality.

    PubMed

    Yang, Xiaopeng; Mei, Tao; Zhang, Yong Dong; Liu, Jie; Satoh, Shin'ichi

    2016-07-20

    In image search re-ranking, besides the well-known semantic gap, the intent gap, which is the gap between the representation of users' query/demand and the real intent of the users, is becoming a major problem restricting the development of image retrieval. To reduce human effort, in this paper we use image click-through data, which can be viewed as "implicit feedback" from users, to help overcome the intent gap and further improve image search performance. Generally, the hypothesis that visually similar images should be close in a ranking list and the strategy that images with higher relevance should be ranked higher than others are widely accepted. Thus, image similarity and the level of relevance typicality are the determining factors for obtaining satisfying search results. However, when measuring image similarity and typicality, conventional re-ranking approaches only consider visual information and the initial ranks of images, while overlooking the influence of click-through data. This paper presents a novel re-ranking approach, named spectral clustering re-ranking with click-based similarity and typicality (SCCST). First, to learn an appropriate similarity measurement, we propose a click-based multi-feature similarity learning algorithm (CMSL), which conducts metric learning based on click-based triplet selection, and integrates multiple features into a unified similarity space via multiple kernel learning. Then, based on the learnt click-based image similarity measure, we conduct spectral clustering to group visually and semantically similar images into the same clusters, and obtain the final re-ranked list by calculating click-based cluster typicality and within-cluster click-based image typicality in descending order. Our experiments conducted on two real-world query-image datasets with diverse representative queries show that our proposed re-ranking approach can significantly improve initial search results, and outperforms several existing re-ranking approaches.

  10. An improved stochastic fractal search algorithm for 3D protein structure prediction.

    PubMed

    Zhou, Changjun; Sun, Chuan; Wang, Bin; Wang, Xiaojun

    2018-05-03

    Protein structure prediction (PSP) is a significant area for biological information research, disease treatment and drug development. In this paper, three-dimensional structures of proteins are predicted based on the known amino acid sequences, and the structure prediction problem is transformed into a typical NP problem by an AB off-lattice model. This work applies a novel improved Stochastic Fractal Search algorithm (ISFS) to solve the problem. The Stochastic Fractal Search algorithm (SFS) is an effective evolutionary algorithm that performs well in exploring the search space but sometimes falls into local minima. In order to avoid this weakness, Lévy flight and internal feedback information are introduced in ISFS. In the experimental process, simulations are conducted with the ISFS algorithm on Fibonacci sequences and real peptide sequences. Experimental results prove that ISFS performs more efficiently and robustly in terms of finding the global minimum and avoiding getting stuck in local minima.
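
    Lévy-flight steps are commonly generated with Mantegna's algorithm; a sketch is given below. The exponent beta and the way such steps would be mixed into the SFS updates are assumptions for illustration, not the paper's exact formulation:

```python
# Lévy-flight step generation via Mantegna's algorithm: heavy-tailed random
# steps that are mostly small but occasionally make long jumps, helping an
# optimizer escape local minima.
import math
import numpy as np

def levy_step(dim, beta=1.5, rng=None):
    rng = rng or np.random.default_rng()
    sigma_u = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
               / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))
               ) ** (1 / beta)
    u = rng.normal(0.0, sigma_u, dim)
    v = rng.normal(0.0, 1.0, dim)
    return u / np.abs(v) ** (1 / beta)

# Perturb a candidate conformation in an off-lattice model (illustrative).
x = np.zeros(3)
print(x + 0.01 * levy_step(3))
```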

  11. Portfolio optimization and the random magnet problem

    NASA Astrophysics Data System (ADS)

    Rosenow, B.; Plerou, V.; Gopikrishnan, P.; Stanley, H. E.

    2002-08-01

    Diversification of an investment into independently fluctuating assets reduces its risk. In reality, movements of assets are mutually correlated, and therefore knowledge of cross-correlations among asset price movements is of great importance. Our results support the possibility that the problem of finding an investment in stocks which exposes invested funds to a minimum level of risk is analogous to the problem of finding the magnetization of a random magnet. The interactions for this "random magnet problem" are given by the cross-correlation matrix C of stock returns. We find that random matrix theory allows us to make an estimate for C which outperforms the standard estimate in terms of constructing an investment which carries a minimum level of risk.
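
    For reference, the classic fully invested minimum-variance portfolio for a given correlation (or covariance) matrix C has the closed form w = C^{-1}1 / (1^T C^{-1}1); an RMT-cleaned estimate of C, as the abstract suggests, would simply replace C in the sketch below:

```python
# Minimum-risk ("random magnet") portfolio weights from a correlation matrix.
import numpy as np

def min_variance_weights(C):
    ones = np.ones(C.shape[0])
    w = np.linalg.solve(C, ones)      # C^{-1} 1 without an explicit inverse
    return w / w.sum()                # normalize: weights sum to 1

C = np.array([[1.0, 0.3, 0.1],
              [0.3, 1.0, 0.2],
              [0.1, 0.2, 1.0]])       # toy correlation matrix
w = min_variance_weights(C)
print(w, w @ C @ w)                   # weights and resulting portfolio risk
```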

  12. Scope of Various Random Number Generators in Ant System Approach for TSP

    NASA Technical Reports Server (NTRS)

    Sen, S. K.; Shaykhian, Gholam Ali

    2007-01-01

    Several quasi- and pseudo-random number generators are tested within a heuristic based on an ant system approach to the traveling salesman problem. The experiment explores whether any particular generator is most desirable. Such an experiment on large samples has the potential to rank the performance of the generators for the foregoing heuristic, and thereby to seek an answer to the controversial performance ranking of the generators in a probabilistic/statistical sense.

  13. Profile of senior high school students’ creative thinking skills on biology material in low, medium, and high academic perspective

    NASA Astrophysics Data System (ADS)

    Nurhamidah, D.; Masykuri, M.; Dwiastuti, S.

    2018-04-01

    Creative thinking is one of the most important skills of the 21st century. Students are expected not only to be able to solve cognitive problems but also to face life problems. The aim of this study is to determine students' creative thinking skills in biology class for grade XI in three senior high schools in Ngawi regency. The approach used to categorise the three schools into low, medium and high academic rank was a norm-referenced test. The study involved 92 students who completed a test. Guilford's alternative uses task was used to measure the level of students' creative thinking skills. The results showed that in the school of high academic rank, 89.74% of students had low creative thinking skills and 10.25% were in the moderate category. In the medium academic rank school, 85.71% of students had low creative thinking skills and 14.29% were moderate. In the school of low academic rank, 8% of students had very low creative thinking skills, 88% were low, and 4% were moderate. Based on the findings of the research, the creative thinking skills of students in the three schools were categorised as low; therefore, learning designs should be developed that can improve students' creative thinking skills.

  14. Kernelized rank learning for personalized drug recommendation.

    PubMed

    He, Xiao; Folkman, Lukas; Borgwardt, Karsten

    2018-03-08

    Large-scale screenings of cancer cell lines with detailed molecular profiles against libraries of pharmacological compounds are currently being performed in order to gain a better understanding of the genetic component of drug response and to enhance our ability to recommend therapies given a patient's molecular profile. These comprehensive screens differ from the clinical setting in which (1) medical records only contain the response of a patient to very few drugs, (2) drugs are recommended by doctors based on their expert judgment, and (3) selecting the most promising therapy is often more important than accurately predicting the sensitivity to all potential drugs. Current regression models for drug sensitivity prediction fail to account for these three properties. We present a machine learning approach, named Kernelized Rank Learning (KRL), that ranks drugs based on their predicted effect per cell line (patient), circumventing the difficult problem of precisely predicting the sensitivity to the given drug. Our approach outperforms several state-of-the-art predictors in drug recommendation, particularly if the training dataset is sparse, and generalizes to patient data. Our work phrases personalized drug recommendation as a new type of machine learning problem with translational potential to the clinic. The Python implementation of KRL and scripts for running our experiments are available at https://github.com/BorgwardtLab/Kernelized-Rank-Learning. xiao.he@bsse.ethz.ch, lukas.folkman@bsse.ethz.ch. Supplementary data are available at Bioinformatics online.

  15. Distance estimation and collision prediction for on-line robotic motion planning

    NASA Technical Reports Server (NTRS)

    Kyriakopoulos, K. J.; Saridis, G. N.

    1992-01-01

    An efficient method for computing the minimum distance and predicting collisions between moving objects is presented. This problem is incorporated into the framework of an on-line motion-planning algorithm to satisfy collision avoidance between a robot and moving objects modeled as convex polyhedra. First, the deterministic problem, where the information about the objects is assumed to be certain, is examined. The L1 or L∞ norm is used to represent distance, and the problem becomes a linear programming problem. The stochastic problem, where the uncertainty is induced by sensing and the unknown dynamics of the moving obstacles, is then formulated. Two problems are considered: first, filtering of the distance between the robot and the moving object at the present time; second, prediction of the minimum distance in the future in order to predict the collision time.
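
    The deterministic step can be sketched as a linear program: with the two convex polyhedra given by their vertex sets, a point in each body is a convex combination of its vertices, and minimizing the L∞ distance between the two points is an LP (the L1 case is analogous, with one slack variable per coordinate). scipy is used below purely for illustration:

```python
# Minimum L-infinity distance between two convex polyhedra as an LP.
import numpy as np
from scipy.optimize import linprog

def linf_distance(V, W):
    """V: (m, d) vertex array of one polyhedron; W: (k, d) of the other."""
    (m, d), k = V.shape, W.shape[0]
    n = m + k + 1                         # variables: lambdas, mus, t
    c = np.zeros(n); c[-1] = 1.0          # minimize t
    A_ub, b_ub = [], []
    for i in range(d):                    # |(V^T lam - W^T mu)_i| <= t
        a = np.concatenate([V[:, i], -W[:, i]])
        A_ub.append(np.concatenate([a, [-1.0]]))
        A_ub.append(np.concatenate([-a, [-1.0]]))
        b_ub += [0.0, 0.0]
    A_eq = np.zeros((2, n))
    A_eq[0, :m], A_eq[1, m:m + k] = 1.0, 1.0   # convex-combination weights
    res = linprog(c, A_ub=np.array(A_ub), b_ub=b_ub, A_eq=A_eq,
                  b_eq=[1.0, 1.0], bounds=[(0, None)] * n)
    return res.fun

V = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])  # a triangle
W = np.array([[3.0, 0.0], [4.0, 0.0], [3.0, 1.0]])  # a disjoint triangle
print(linf_distance(V, W))                           # 2.0
```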

  16. Optimization of self-study room open problem based on green and low-carbon campus construction

    NASA Astrophysics Data System (ADS)

    Liu, Baoyou

    2017-04-01

    Optimizing the opening arrangement of self-study rooms in colleges and universities helps accelerate fine-grained campus management and promotes green and low-carbon campus construction. Firstly, based on actual survey data, the self-study and living areas were divided into different blocks, and the electricity consumption of each self-study room and the distances between the living and studying areas were normalized. Secondly, the minimum total satisfaction index and the minimum total electricity consumption were selected as the optimization targets. Mathematical linear programming models were established and solved with the LINGO software. The results showed that the minimum total satisfaction index was 4055.533 and the minimum total electricity consumption was 137,216 W. Finally, some advice is put forward on how to realize highly efficient administration of the study rooms.

  17. Learning Low-Rank Class-Specific Dictionary and Sparse Intra-Class Variant Dictionary for Face Recognition.

    PubMed

    Tang, Xin; Feng, Guo-Can; Li, Xiao-Xin; Cai, Jia-Xin

    2015-01-01

    Face recognition is challenging, especially when the images from different persons are similar to each other due to variations in illumination, expression, and occlusion. If we have sufficient training images of each person which can span the facial variations of that person under testing conditions, sparse representation based classification (SRC) achieves very promising results. However, in many applications, face recognition often encounters the small sample size problem arising from the small number of available training images for each person. In this paper, we present a novel face recognition framework by utilizing low-rank and sparse error matrix decomposition, and sparse coding techniques (LRSE+SC). Firstly, the low-rank matrix recovery technique is applied to decompose the face images per class into a low-rank matrix and a sparse error matrix. The low-rank matrix of each individual is a class-specific dictionary and it captures the discriminative feature of this individual. The sparse error matrix represents the intra-class variations, such as illumination and expression changes. Secondly, we combine the low-rank part (representative basis) of each person into a supervised dictionary and integrate all the sparse error matrices of each individual into a within-individual variant dictionary which can be applied to represent the possible variations between the testing and training images. Then these two dictionaries are used to code the query image. The within-individual variant dictionary can be shared by all the subjects and only contributes to explaining the lighting conditions, expressions, and occlusions of the query image rather than discrimination. Finally, a reconstruction-based scheme is adopted for face recognition. Since the within-individual dictionary is introduced, LRSE+SC can handle the problem of corrupted training data and the situation where not all subjects have enough samples for training. Experimental results show that our method achieves the state-of-the-art results on AR, FERET, FRGC and LFW databases.

  18. Learning Low-Rank Class-Specific Dictionary and Sparse Intra-Class Variant Dictionary for Face Recognition

    PubMed Central

    Tang, Xin; Feng, Guo-can; Li, Xiao-xin; Cai, Jia-xin

    2015-01-01

    Face recognition is challenging, especially when the images from different persons are similar to each other due to variations in illumination, expression, and occlusion. If we have sufficient training images of each person which can span the facial variations of that person under testing conditions, sparse representation based classification (SRC) achieves very promising results. However, in many applications, face recognition often encounters the small sample size problem arising from the small number of available training images for each person. In this paper, we present a novel face recognition framework by utilizing low-rank and sparse error matrix decomposition, and sparse coding techniques (LRSE+SC). Firstly, the low-rank matrix recovery technique is applied to decompose the face images per class into a low-rank matrix and a sparse error matrix. The low-rank matrix of each individual is a class-specific dictionary and it captures the discriminative feature of this individual. The sparse error matrix represents the intra-class variations, such as illumination and expression changes. Secondly, we combine the low-rank part (representative basis) of each person into a supervised dictionary and integrate all the sparse error matrices of each individual into a within-individual variant dictionary which can be applied to represent the possible variations between the testing and training images. Then these two dictionaries are used to code the query image. The within-individual variant dictionary can be shared by all the subjects and only contributes to explaining the lighting conditions, expressions, and occlusions of the query image rather than discrimination. Finally, a reconstruction-based scheme is adopted for face recognition. Since the within-individual dictionary is introduced, LRSE+SC can handle the problem of corrupted training data and the situation where not all subjects have enough samples for training. Experimental results show that our method achieves the state-of-the-art results on AR, FERET, FRGC and LFW databases. PMID:26571112
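
    The low-rank plus sparse-error decomposition that LRSE+SC builds on is usually computed with robust PCA; an inexact augmented Lagrange multiplier sketch with common default parameters is given below. The paper's full pipeline adds dictionary construction and sparse coding on top of this step:

```python
# Robust PCA: decompose X into a low-rank part L and a sparse error part S
# via an inexact augmented Lagrange multiplier (ALM) scheme.
import numpy as np

def svt(M, tau):                      # singular value thresholding
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U * np.maximum(s - tau, 0)) @ Vt

def shrink(M, tau):                   # soft thresholding (sparse part)
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0)

def robust_pca(X, n_iter=200, tol=1e-7):
    lam = 1.0 / np.sqrt(max(X.shape))                # common default weight
    mu = 0.25 * X.size / (np.abs(X).sum() + 1e-12)   # common step heuristic
    L, S, Y = (np.zeros_like(X) for _ in range(3))
    for _ in range(n_iter):
        L = svt(X - S + Y / mu, 1.0 / mu)
        S = shrink(X - L + Y / mu, lam / mu)
        R = X - L - S
        Y += mu * R                                  # dual update
        if np.linalg.norm(R) <= tol * np.linalg.norm(X):
            break
    return L, S

# Example: a rank-2 matrix with sparse gross corruption is separated.
rng = np.random.default_rng(0)
X = rng.standard_normal((60, 2)) @ rng.standard_normal((2, 40))
X[rng.random(X.shape) < 0.05] += 10.0
L, S = robust_pca(X)
```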

  19. At-Least Version of the Generalized Minimum Spanning Tree Problem: Optimization Through Ant Colony System and Genetic Algorithms

    NASA Technical Reports Server (NTRS)

    Janich, Karl W.

    2005-01-01

    The At-Least version of the Generalized Minimum Spanning Tree Problem (L-GMST) is a problem in which the optimal solution connects all defined clusters of nodes in a given network at a minimum cost. The L-GMST is NP-Hard; therefore, metaheuristic algorithms have been used to find reasonable solutions to the problem, as opposed to computationally feasible exact algorithms, which many believe do not exist for such a problem. One such metaheuristic uses a swarm-intelligent Ant Colony System (ACS) algorithm, in which agents converge on a solution through the weighing of local heuristics, such as the shortest available path and the number of agents that recently used a given path. However, in a network using a solution derived from the ACS algorithm, some nodes may move around to different clusters and cause small changes in the network makeup. Rerunning the algorithm from the start would be somewhat inefficient given the small scale of those changes, so a genetic algorithm based on the top few solutions found in the ACS algorithm is proposed to quickly and efficiently adapt the network to these small changes.

  20. A new fast direct solver for the boundary element method

    NASA Astrophysics Data System (ADS)

    Huang, S.; Liu, Y. J.

    2017-09-01

    A new fast direct linear equation solver for the boundary element method (BEM) is presented in this paper. The idea of the new fast direct solver stems from the concept of the hierarchical off-diagonal low-rank matrix. The hierarchical off-diagonal low-rank matrix can be decomposed into the multiplication of several diagonal block matrices. The inverse of the hierarchical off-diagonal low-rank matrix can be calculated efficiently with the Sherman-Morrison-Woodbury formula. In this paper, a more general and efficient approach to approximating the coefficient matrix of the BEM with the hierarchical off-diagonal low-rank matrix is proposed. Compared to the current fast direct solver based on the hierarchical off-diagonal low-rank matrix, the proposed method is suitable for solving general 3-D boundary element models. Several numerical examples of 3-D potential problems with up to over 200,000 unknowns are presented. The results show that the new fast direct solver can be applied to solve large 3-D BEM models accurately and with better efficiency than the conventional BEM.
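
    The Sherman-Morrison-Woodbury identity the solver relies on, (A + UV^T)^{-1} = A^{-1} - A^{-1}U(I + V^T A^{-1}U)^{-1}V^T A^{-1}, lets a low-rank update of an easily invertible matrix be inverted at the cost of a small k x k solve. A numpy sketch (with a diagonal A standing in for the solver's block structure) is:

```python
# Solve (A + U V^T) x = b via the Sherman-Morrison-Woodbury identity, given
# only the action of A^{-1} and the low-rank factors U, V (both n x k).
import numpy as np

def woodbury_solve(A_inv, U, V, b):
    Ab, AU = A_inv(b), A_inv(U)
    k = U.shape[1]
    small = np.eye(k) + V.T @ AU               # k x k capacitance matrix
    return Ab - AU @ np.linalg.solve(small, V.T @ Ab)

rng = np.random.default_rng(0)
n, k = 200, 5
d = rng.uniform(1, 2, n)                       # diagonal "A"
U, V = rng.standard_normal((n, k)), rng.standard_normal((n, k))
b = rng.standard_normal(n)
A_inv = lambda M: (M.T / d).T if M.ndim > 1 else M / d
x = woodbury_solve(A_inv, U, V, b)
print(np.allclose((np.diag(d) + U @ V.T) @ x, b))   # True
```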

  1. Automated diagnosis of dry eye using infrared thermography images

    NASA Astrophysics Data System (ADS)

    Acharya, U. Rajendra; Tan, Jen Hong; Koh, Joel E. W.; Sudarshan, Vidya K.; Yeo, Sharon; Too, Cheah Loon; Chua, Chua Kuang; Ng, E. Y. K.; Tong, Louis

    2015-07-01

    Dry eye (DE) is a condition of either decreased tear production or increased tear film evaporation. Prolonged DE damages the cornea, causing corneal scarring, thinning and perforation. No single uniform diagnostic test is available to date; combinations of diagnostic tests must be performed to diagnose DE. The current diagnostic methods are subjective, uncomfortable and invasive. Hence, in this paper, we have developed an efficient, fast and non-invasive technique for the automated identification of normal and DE classes using infrared thermography images. Features are extracted using a nonlinear method called Higher Order Spectra (HOS) and ranked using a t-test ranking strategy. The ranked features are fed to various classifiers, namely K-Nearest Neighbor (KNN), Naïve Bayesian Classifier (NBC), Decision Tree (DT), Probabilistic Neural Network (PNN), and Support Vector Machine (SVM), to select the best classifier using the minimum number of features. Our proposed system is able to identify the DE and normal classes automatically with a classification accuracy of 99.8%, sensitivity of 99.8%, and specificity of 99.8% for the left eye using the PNN and KNN classifiers, and with a classification accuracy of 99.8%, sensitivity of 99.9%, and specificity of 99.4% for the right eye using an SVM classifier with a polynomial kernel of order 2.
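
    The t-test ranking step translates into a few lines: score every feature by the absolute two-sample t-statistic between the two groups and keep the top-ranked ones. A sketch on synthetic data (function names and array shapes are illustrative) is:

```python
# Rank features by the absolute two-sample t-statistic between groups.
import numpy as np
from scipy import stats

def rank_features(X_normal, X_dryeye, top_k=10):
    """X_*: (n_subjects, n_features) arrays of extracted features."""
    t, _ = stats.ttest_ind(X_normal, X_dryeye, axis=0, equal_var=False)
    order = np.argsort(-np.abs(t))         # most discriminative first
    return order[:top_k]

rng = np.random.default_rng(0)
Xn = rng.standard_normal((40, 25))
Xd = rng.standard_normal((40, 25))
Xd[:, 3] += 2.0                            # feature 3 separates the groups
print(rank_features(Xn, Xd)[:3])           # feature 3 should rank first
```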

  2. Learning Short Binary Codes for Large-scale Image Retrieval.

    PubMed

    Liu, Li; Yu, Mengyang; Shao, Ling

    2017-03-01

    Large-scale visual information retrieval has become an active research area in this big data era. Recently, hashing/binary coding algorithms have proved to be effective for scalable retrieval applications. Most existing hashing methods require relatively long binary codes (i.e., over hundreds of bits, sometimes even thousands of bits) to achieve reasonable retrieval accuracies. However, for some realistic and unique applications, such as on wearable or mobile devices, only short binary codes can be used for efficient image retrieval due to the limitation of computational resources or bandwidth on these devices. In this paper, we propose a novel unsupervised hashing approach called min-cost ranking (MCR) specifically for learning powerful short binary codes (i.e., usually code lengths shorter than 100 bits) for scalable image retrieval tasks. By exploring the discriminative ability of each dimension of the data, MCR can generate a one-bit binary code for each dimension and simultaneously rank the discriminative separability of each bit according to the proposed cost function. Only top-ranked bits with minimum cost values are then selected and grouped together to compose the final salient binary codes. Extensive experimental results on large-scale retrieval demonstrate that MCR can achieve performance comparable to state-of-the-art hashing algorithms but with significantly shorter codes, leading to much faster large-scale retrieval.

  3. Learning Robust and Discriminative Subspace With Low-Rank Constraints.

    PubMed

    Li, Sheng; Fu, Yun

    2016-11-01

    In this paper, we aim at learning robust and discriminative subspaces from noisy data. Subspace learning is widely used in extracting discriminative features for classification. However, when data are contaminated with severe noise, the performance of most existing subspace learning methods would be limited. Recent advances in low-rank modeling provide effective solutions for removing noise or outliers contained in sample sets, which motivates us to take advantage of low-rank constraints in order to exploit robust and discriminative subspace for classification. In particular, we present a discriminative subspace learning method called the supervised regularization-based robust subspace (SRRS) approach, by incorporating the low-rank constraint. SRRS seeks low-rank representations from the noisy data, and learns a discriminative subspace from the recovered clean data jointly. A supervised regularization function is designed to make use of the class label information, and therefore to enhance the discriminability of subspace. Our approach is formulated as a constrained rank-minimization problem. We design an inexact augmented Lagrange multiplier optimization algorithm to solve it. Unlike the existing sparse representation and low-rank learning methods, our approach learns a low-dimensional subspace from recovered data, and explicitly incorporates the supervised information. Our approach and some baselines are evaluated on the COIL-100, ALOI, Extended YaleB, FERET, AR, and KinFace databases. The experimental results demonstrate the effectiveness of our approach, especially when the data contain considerable noise or variations.

  4. Discriminative Transfer Subspace Learning via Low-Rank and Sparse Representation.

    PubMed

    Xu, Yong; Fang, Xiaozhao; Wu, Jian; Li, Xuelong; Zhang, David

    2016-02-01

    In this paper, we address the problem of unsupervised domain transfer learning in which no labels are available in the target domain. We use a transformation matrix to transfer both the source and target data to a common subspace, where each target sample can be represented by a combination of source samples such that the samples from different domains can be well interlaced. In this way, the discrepancy of the source and target domains is reduced. By imposing joint low-rank and sparse constraints on the reconstruction coefficient matrix, the global and local structures of data can be preserved. To enlarge the margins between different classes as much as possible and provide more freedom to diminish the discrepancy, a flexible linear classifier (projection) is obtained by learning a non-negative label relaxation matrix that allows the strict binary label matrix to relax into a slack variable matrix. Our method can avoid a potentially negative transfer by using a sparse matrix to model the noise and, thus, is more robust to different types of noise. We formulate our problem as a constrained low-rankness and sparsity minimization problem and solve it by the inexact augmented Lagrange multiplier method. Extensive experiments on various visual domain adaptation tasks show the superiority of the proposed method over the state-of-the-art methods. The MATLAB code of our method will be publicly available at http://www.yongxu.org/lunwen.html.

  5. Air-To-Air Visual Target Acquisition Pilot Interview Survey.

    DTIC Science & Technology

    1979-01-01

    ...'top' 5 pilots in air-to-air visual target acquisition in your squadron," would/could you do it? yes no Comment: 2. Is the term "acquisition" as...meaningful as "spotting" and "seeing" in the context of visually detecting a "bogey" or another aircraft? yes no Comment: 3. Would/could you rank all...squadron pilots on the basis of their visual target acquisition capability? yes no Comment: 4. Is there a minimum number of observations required for

  6. A Corpus-Based Approach for Automatic Thai Unknown Word Recognition Using Boosting Techniques

    NASA Astrophysics Data System (ADS)

    Techo, Jakkrit; Nattee, Cholwich; Theeramunkong, Thanaruk

    While classification techniques can be applied to automatic unknown word recognition in a language without word boundaries, they face the problem of unbalanced datasets, where the number of positive unknown-word candidates is dominantly smaller than that of negative candidates. To solve this problem, this paper presents a corpus-based approach that introduces a so-called group-based ranking evaluation technique into ensemble learning in order to generate a sequence of classification models that later collaborate to select the most probable unknown word from multiple candidates. Given a classification model, group-based ranking evaluation (GRE) is applied to construct a training dataset for learning the succeeding model, by weighting each of its candidates according to their ranks and correctness when the candidates of an unknown word are considered as one group. A number of experiments have been conducted on a large Thai medical text to evaluate the performance of the proposed group-based ranking evaluation approach, namely V-GRE, compared to the conventional naïve Bayes classifier and our vanilla version without ensemble learning. As a result, the proposed method achieves an accuracy of 90.93±0.50% when the first rank is selected, and 97.26±0.26% when the top-ten candidates are considered; these are 8.45% and 6.79% improvements over the conventional record-based naïve Bayes classifier and the vanilla version, respectively. Applying only the best features yields 93.93±0.22% and up to 98.85±0.15% accuracy for top-1 and top-10, respectively, a 3.97% and 9.78% improvement over naïve Bayes and the vanilla version. Finally, an error analysis is given.

  7. A Delphi study on research priorities for trauma nursing.

    PubMed

    Bayley, E W; Richmond, T; Noroian, E L; Allen, L R

    1994-05-01

    To identify and prioritize research questions of importance to trauma patient care and of interest to trauma nurses. A three-round Delphi technique was used to solicit, identify, and prioritize problems for trauma nursing research. In round 1, experienced trauma nurses (N = 208) generated 513 problems, which were analyzed, categorized, and collapsed into 111 items for subsequent rounds. Round 2 participants rated each research question on a 1 to 7 scale on two criteria: impact on patient welfare and value for practicing nurses. Group median scores provided by 166 round 2 respondents and respondents' individual round 2 scores were indicated on the round 3 questionnaire. Subjects rated the questions again on the same criteria and indicated whether nurses, independently or in collaboration with other health professionals, should assume responsibility for that research. Median and mean scores and rank order were determined for each item. Respondents who completed all three rounds (n = 137) had a mean of 8.3 years of trauma experience. Nine research questions ranked within the top 20 on both criteria. The two research questions that ranked highest on both criteria were: What are the most effective nursing interventions in the prevention of pulmonary and circulatory complications in trauma patients? and What are the most effective methods for preventing aspiration in trauma patients during the postoperative phase? The third-ranked question regarding patient welfare was: What psychological and lifestyle changes result from traumatic injury? Regarding value for practicing nurses, What are the most effective educational methods to prepare and maintain proficiency in trauma care providers? ranked third. These research priorities provide impetus and direction for nursing and collaborative investigation in trauma care.

  8. Statistical mechanical analysis of linear programming relaxation for combinatorial optimization problems

    NASA Astrophysics Data System (ADS)

    Takabe, Satoshi; Hukushima, Koji

    2016-05-01

    Typical behavior of the linear programming (LP) problem is studied as a relaxation of the minimum vertex cover (min-VC), a type of integer programming (IP) problem. A lattice-gas model on Erdős-Rényi random graphs of α-uniform hyperedges is proposed to express both the LP and IP problems of the min-VC in a common statistical mechanical model with a one-parameter family. Statistical mechanical analyses reveal for α = 2 that the LP optimal solution is typically equal to that given by the IP below the critical average degree c = e in the thermodynamic limit. The critical threshold for good accuracy of the relaxation extends the mathematical result c = 1 and coincides with the replica symmetry-breaking threshold of the IP. The LP relaxation for the minimum hitting sets with α ≥ 3, minimum vertex covers on α-uniform random graphs, is also studied. Analytic and numerical results strongly suggest that the LP relaxation fails to estimate optimal values above the critical average degree c = e/(α - 1), where the replica symmetry is broken.

  9. Statistical mechanical analysis of linear programming relaxation for combinatorial optimization problems.

    PubMed

    Takabe, Satoshi; Hukushima, Koji

    2016-05-01

    Typical behavior of the linear programming (LP) problem is studied as a relaxation of the minimum vertex cover (min-VC), a type of integer programming (IP) problem. A lattice-gas model on the Erdős-Rényi random graphs of α-uniform hyperedges is proposed to express both the LP and IP problems of the min-VC in the common statistical mechanical model with a one-parameter family. Statistical mechanical analyses reveal for α=2 that the LP optimal solution is typically equal to that given by the IP below the critical average degree c=e in the thermodynamic limit. The critical threshold for good accuracy of the relaxation extends the mathematical result c=1 and coincides with the replica symmetry-breaking threshold of the IP. The LP relaxation for the minimum hitting sets with α≥3, minimum vertex covers on α-uniform random graphs, is also studied. Analytic and numerical results strongly suggest that the LP relaxation fails to estimate optimal values above the critical average degree c=e/(α-1) where the replica symmetry is broken.

  10. Interactive Inverse Groundwater Modeling - Addressing User Fatigue

    NASA Astrophysics Data System (ADS)

    Singh, A.; Minsker, B. S.

    2006-12-01

    This paper builds on ongoing research on developing an interactive and multi-objective framework to solve the groundwater inverse problem. In this work we solve the classic groundwater inverse problem of estimating a spatially continuous conductivity field, given field measurements of hydraulic heads. The proposed framework is based on an interactive multi-objective genetic algorithm (IMOGA) that not only considers quantitative measures such as calibration error and degree of regularization, but also takes into account expert knowledge about the structure of the underlying conductivity field, expressed as subjective rankings of potential conductivity fields by the expert. The IMOGA converges to the optimal Pareto front representing the best trade-off among the qualitative as well as quantitative objectives. However, since the IMOGA is a population-based iterative search, it requires the user to evaluate hundreds of solutions. This leads to the problem of 'user fatigue'. We propose a two-step methodology to combat user fatigue in such interactive systems. The first step is choosing only a few highly representative solutions to be shown to the expert for ranking. Spatial clustering is used to group the search space based on the similarity of the conductivity fields. Sampling is then carried out from different clusters to improve the diversity of the solutions shown to the user. Once the expert has ranked representative solutions from each cluster, a machine learning model is used to 'learn' user preferences and extrapolate them to the solutions not ranked by the expert. We investigate different machine learning models, such as decision trees, Bayesian learning models, and instance-based weighting, to model user preference. In addition, we also investigate ways to improve the performance of these models by providing information about the spatial structure of the conductivity fields (which is what the expert bases his or her ranking on). Results are shown for each of these machine learning models, and the advantages and disadvantages of each approach are discussed. These results indicate that the proposed two-step methodology leads to a significant reduction in user fatigue without deteriorating the solution quality of the IMOGA.

  11. Using Perturbed QR Factorizations To Solve Linear Least-Squares Problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Avron, Haim; Ng, Esmond G.; Toledo, Sivan

    2008-03-21

    We propose and analyze a new tool to help solve sparse linear least-squares problems min_x ||Ax - b||_2. Our method is based on a sparse QR factorization of a low-rank perturbation Â of A. More precisely, we show that the R factor of Â is an effective preconditioner for the least-squares problem min_x ||Ax - b||_2, when solved using LSQR. We propose applications for the new technique. When A is rank deficient, we can add rows to ensure that the preconditioner is well-conditioned without column pivoting. When A is sparse except for a few dense rows, we can drop these dense rows from A to obtain Â. Another application is solving an updated or downdated problem. If R is a good preconditioner for the original problem A, it is a good preconditioner for the updated/downdated problem Â. We can also solve what-if scenarios, where we want to find the solution if a column of the original matrix is changed/removed. We present a spectral theory that analyzes the generalized spectrum of the pencil (A*A, R*R) and analyze the applications.
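
    A dense toy version of the preconditioning idea is easy to state in Python: factor the perturbed matrix, then run LSQR on A right-preconditioned by R. This is only a sketch under stated assumptions (the paper targets sparse QR; numpy's dense QR and a row-padded Â built from scaled identity rows stand in here):

        import numpy as np
        from scipy.linalg import solve_triangular
        from scipy.sparse.linalg import LinearOperator, lsqr

        def r_preconditioned_lsqr(A, b, A_tilde):
            """Solve min ||Ax - b||_2 with LSQR, right-preconditioned by
            the R factor of a QR factorization of the perturbed A_tilde."""
            _, R = np.linalg.qr(A_tilde)        # R is n x n, upper triangular
            op = LinearOperator(
                shape=A.shape,
                matvec=lambda y: A @ solve_triangular(R, y),   # A R^{-1} y
                rmatvec=lambda z: solve_triangular(R.T, A.T @ z, lower=True),
            )
            y = lsqr(op, b)[0]                  # minimize ||A R^{-1} y - b||
            return solve_triangular(R, y)       # recover x = R^{-1} y

        # Toy usage: pad A with scaled identity rows so R stays well conditioned.
        A = np.random.randn(200, 50)
        b = np.random.randn(200)
        A_tilde = np.vstack([A, 1e-3 * np.eye(50)])
        x = r_preconditioned_lsqr(A, b, A_tilde)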

  12. Existence and stability, and discrete BB and rank conditions, for general mixed-hybrid finite elements in elasticity

    NASA Technical Reports Server (NTRS)

    Xue, W.-M.; Atluri, S. N.

    1985-01-01

    In this paper, all possible forms of mixed-hybrid finite element methods based on multi-field variational principles are examined with respect to the conditions for existence, stability, and uniqueness of their solutions. The reasons why certain 'simplified hybrid-mixed methods' in general, and the so-called 'simplified hybrid-displacement method' in particular (based on so-called simplified variational principles), become unstable are discussed. A comprehensive discussion of the 'discrete' BB conditions and the rank conditions of the matrices arising in mixed-hybrid methods is given. Some recent studies aimed at assuring such rank conditions, and the related problem of avoiding spurious kinematic modes, are presented.

  13. Development of an Obesity Prevention Dashboard for Wisconsin.

    PubMed

    Ryan, Karissa; Pillai, Parvathy; Remington, Patrick L; Malecki, Kristen; Lindberg, Sara

    2016-11-01

    A comprehensive obesity surveillance system monitors obesity rates along with causes and related health policies, which are valuable for tracking and identifying problems needing intervention. A statewide obesity dashboard was created using the County Health Rankings model. Indicators were obtained through publicly available secondary data sources and used to rank Wisconsin amongst other states on obesity rates, health factors, and policies. Wisconsin consistently ranks in the middle of states for a majority of indicators and has not implemented any of the evidence-based health policies. This state of obesity report shows Wisconsin has marked room for improvement regarding obesity prevention, especially with obesity-related health policies. Physicians and health care systems can play a pivotal role in making progress on obesity prevention.

  14. Panel flutter optimization by gradient projection

    NASA Technical Reports Server (NTRS)

    Pierson, B. L.

    1975-01-01

    A gradient projection optimal control algorithm incorporating conjugate gradient directions of search is described and applied to several minimum weight panel design problems subject to a flutter speed constraint. New numerical solutions are obtained for both simply-supported and clamped homogeneous panels of infinite span for various levels of inplane loading and minimum thickness. The minimum thickness inequality constraint is enforced by a simple transformation of variables.

  15. Minimum-Time Consensus-Based Approach for Power System Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Tao; Wu, Di; Sun, Yannan

    2016-02-01

    This paper presents minimum-time consensus-based distributed algorithms for power system applications, such as load shedding and economic dispatch. The proposed algorithms are capable of solving these problems in a minimum number of time steps, instead of asymptotically as in most existing studies. Moreover, these algorithms are applicable to both undirected and directed communication networks. Simulation results are used to validate the proposed algorithms.

  16. Identification of Successive ``Unobservable'' Cyber Data Attacks in Power Systems Through Matrix Decomposition

    NASA Astrophysics Data System (ADS)

    Gao, Pengzhi; Wang, Meng; Chow, Joe H.; Ghiocel, Scott G.; Fardanesh, Bruce; Stefopoulos, George; Razanousky, Michael P.

    2016-11-01

    This paper presents a new framework for identifying a series of cyber data attacks on power system synchrophasor measurements. We focus on detecting "unobservable" cyber data attacks that cannot be detected by any existing method that relies purely on measurements received at one time instant. Leveraging the approximate low-rank property of phasor measurement unit (PMU) data, we formulate the identification of successive unobservable cyber attacks as the decomposition of a matrix into a low-rank matrix plus a transformed column-sparse matrix. We propose a convex-optimization-based method and provide theoretical guarantees for the data identification. Numerical experiments on actual PMU data from the Central New York power system and on synthetic data verify the effectiveness of the proposed method.
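
    The flavor of such a decomposition can be sketched with a standard low-rank-plus-column-sparse ADMM split. Note that this simplified sketch omits the transformation applied to the column-sparse term in the paper, and the parameter choices are arbitrary:

        import numpy as np

        def svt(M, tau):
            """Singular value thresholding: prox of tau * nuclear norm."""
            U, s, Vt = np.linalg.svd(M, full_matrices=False)
            return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

        def col_shrink(M, tau):
            """Column-wise shrinkage: prox of tau * l2,1 norm (column sparsity)."""
            norms = np.linalg.norm(M, axis=0, keepdims=True)
            return M * np.maximum(1.0 - tau / np.maximum(norms, 1e-12), 0.0)

        def decompose(X, lam=0.5, rho=1.0, iters=200):
            """Split data X into low-rank L plus column-sparse C via ADMM."""
            L = np.zeros_like(X); C = np.zeros_like(X); Y = np.zeros_like(X)
            for _ in range(iters):
                L = svt(X - C + Y / rho, 1.0 / rho)         # low-rank update
                C = col_shrink(X - L + Y / rho, lam / rho)  # column-sparse update
                Y = Y + rho * (X - L - C)                   # dual update
            return L, C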

  17. Psychological distress and alcohol use among fire fighters.

    PubMed

    Boxer, P A; Wild, D

    1993-04-01

    Few studies have investigated stressors to which fire fighters are subjected and the potential psychological consequences. One hundred and forty-five fire fighters were studied to enumerate potential occupational stressors, assess psychological distress and problems with alcohol use, and determine whether a relationship exists between these measures and self-reported stressors. Hearing that children are in a burning building was the highest ranked stressor. According to three self-report instruments, between 33 and 41% of the fire fighters were experiencing significant psychological distress, and 29% had possible or probable problems with alcohol use. These figures are significantly higher than would be expected in a typical community or working population. In a logistic regression analysis, no relationship was found between measures of psychological distress and alcohol use and the 10 most highly ranked work stressors.

  18. A path following algorithm for the graph matching problem.

    PubMed

    Zaslavskiy, Mikhail; Bach, Francis; Vert, Jean-Philippe

    2009-12-01

    We propose a convex-concave programming approach for the labeled weighted graph matching problem. The convex-concave programming formulation is obtained by rewriting the weighted graph matching problem as a least-squares problem on the set of permutation matrices and relaxing it to two different optimization problems: a quadratic convex and a quadratic concave optimization problem on the set of doubly stochastic matrices. The concave relaxation has the same global minimum as the initial graph matching problem, but the search for its global minimum is also a hard combinatorial problem. We therefore construct an approximation of the concave problem's solution by following a solution path of a convex-concave problem obtained by linear interpolation of the convex and concave formulations, starting from the convex relaxation. This method makes it easy to integrate information on graph label similarities into the optimization problem and therefore to perform labeled weighted graph matching. The algorithm is compared with some of the best-performing graph matching methods on four datasets: simulated graphs, QAPLib, retina vessel images, and handwritten Chinese characters. In all cases, the results are competitive with the state of the art.

  19. Ant colony optimization for solving university facility layout problem

    NASA Astrophysics Data System (ADS)

    Mohd Jani, Nurul Hafiza; Mohd Radzi, Nor Haizan; Ngadiman, Mohd Salihin

    2013-04-01

    Quadratic Assignment Problems (QAP) are classified as NP-hard problems. The QAP has been used to model many problems in several areas, such as operational research, combinatorial data analysis, parallel and distributed computing, and optimization problems such as graph partitioning and the Traveling Salesman Problem (TSP). In the literature, researchers use exact algorithms, heuristic algorithms, and metaheuristic approaches to solve the QAP. The QAP is largely applied to the facility layout problem (FLP). In this paper, we use the QAP to model a university facility layout problem in which 8 facilities must be assigned to 8 locations. Hence, we have modeled a QAP with n ≤ 10 and developed an Ant Colony Optimization (ACO) algorithm to solve the university facility layout problem. The objective is to assign n facilities to n locations such that the minimum product of flows and distances is obtained, where flow is the movement from one facility to another and distance is the distance between the location of one facility and the locations of the others. For this problem, the objective of the QAP is to minimize the total walking (flow) of lecturers from one destination to another (distance).
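
    For concreteness, the QAP objective that the ACO minimizes can be evaluated as below; the instance data here are hypothetical stand-ins for the 8-facility university layout:

        import numpy as np

        def qap_cost(perm, flow, dist):
            """QAP objective: sum of flow(i, j) * dist(loc(i), loc(j)) over
            all facility pairs, where perm[i] is the location of facility i."""
            n = len(perm)
            return sum(flow[i, j] * dist[perm[i], perm[j]]
                       for i in range(n) for j in range(n))

        # Hypothetical 8-facility instance: random flows and distances.
        rng = np.random.default_rng(0)
        flow = rng.integers(0, 10, (8, 8))
        dist = rng.integers(1, 20, (8, 8))
        print(qap_cost(np.arange(8), flow, dist))   # cost of identity assignment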

  20. Optimization of rainfall networks using information entropy and temporal variability analysis

    NASA Astrophysics Data System (ADS)

    Wang, Wenqi; Wang, Dong; Singh, Vijay P.; Wang, Yuankun; Wu, Jichun; Wang, Lachun; Zou, Xinqing; Liu, Jiufu; Zou, Ying; He, Ruimin

    2018-04-01

    Rainfall networks are the most direct sources of precipitation data, and their optimization and evaluation are essential. Information entropy can not only represent the uncertainty of the rainfall distribution but also reflect the correlation and information transmission between rainfall stations. Using entropy, this study optimizes rainfall networks of similar size located in two big cities in China, Shanghai (in the Yangtze River basin) and Xi'an (in the Yellow River basin), with respect to temporal variability. Through an easy-to-implement greedy ranking algorithm based on the criterion called Maximum Information Minimum Redundancy (MIMR), stations of the networks in the two areas (each further divided into two subareas) are ranked over sliding inter-annual series and under different meteorological conditions. It is found that observation series with different starting days affect the ranking, pointing to temporal variability during network evaluation. We propose a dynamic network evaluation framework that accounts for this variability by ranking stations under different starting days with a fixed time window (1-year, 2-year, and 5-year). We can thereby identify rainfall stations that are temporarily important or redundant and provide useful suggestions for decision makers. The proposed framework can serve as a supplement to the primary MIMR optimization approach. In addition, during different periods (wet season or dry season) the optimal network from MIMR exhibits differences in entropy values, and the optimal network from the wet season tends to produce higher entropy values. Differences in the spatial distribution of the optimal networks suggest that optimizing the rainfall network for changing meteorological conditions is recommended.
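
    The greedy entropy-based ranking can be sketched as follows. This simplified version maximizes joint entropy only; the MIMR criterion additionally trades off transmitted information against redundancy, and the bin labels below are hypothetical:

        import numpy as np

        def joint_entropy(samples):
            """Empirical joint entropy (bits) of discretized station records.
            samples: (n_obs, n_stations) array of integer bin labels."""
            _, counts = np.unique(samples, axis=0, return_counts=True)
            p = counts / counts.sum()
            return -np.sum(p * np.log2(p))

        def greedy_rank(data, k):
            """Greedily pick k stations that maximize joint entropy; a
            simplified stand-in for MIMR, which also penalizes redundancy."""
            chosen, remaining = [], list(range(data.shape[1]))
            for _ in range(k):
                best = max(remaining,
                           key=lambda s: joint_entropy(data[:, chosen + [s]]))
                chosen.append(best)
                remaining.remove(best)
            return chosen

        # Hypothetical records: 365 daily values at 10 stations, 5 bins each.
        rng = np.random.default_rng(1)
        data = rng.integers(0, 5, (365, 10))
        print(greedy_rank(data, k=4))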

  1. Accurate computation of survival statistics in genome-wide studies.

    PubMed

    Vandin, Fabio; Papoutsaki, Alexandra; Raphael, Benjamin J; Upfal, Eli

    2015-05-01

    A key challenge in genomics is to identify genetic variants that distinguish patients with different survival times following diagnosis or treatment. While the log-rank test is widely used for this purpose, nearly all implementations of the log-rank test rely on an asymptotic approximation that is not appropriate in many genomics applications. This is because the two populations determined by a genetic variant may have very different sizes, and the evaluation of many possible variants demands highly accurate computation of very small p-values. We demonstrate this problem for cancer genomics data, where the standard log-rank test leads to many false positive associations between somatic mutations and survival time. We develop and analyze a novel algorithm, the Exact Log-rank Test (ExaLT), that accurately computes the p-value of the log-rank statistic under an exact distribution that is appropriate for populations of any size. We demonstrate the advantages of ExaLT on data from published cancer genomics studies, finding significant differences from the reported p-values. We analyze somatic mutations in six cancer types from The Cancer Genome Atlas (TCGA), finding mutations with known associations to survival as well as several novel associations. In contrast, standard implementations of the log-rank test report dozens to hundreds of likely false positive associations as more significant than these known associations.

  2. Model diagnostics in reduced-rank estimation

    PubMed Central

    Chen, Kun

    2016-01-01

    Reduced-rank methods are very popular in high-dimensional multivariate analysis for conducting simultaneous dimension reduction and model estimation. However, the commonly used reduced-rank methods are not robust, as the underlying reduced-rank structure can be easily distorted by only a few data outliers. Anomalies are bound to exist in big data problems, and in some applications they may themselves be of primary interest. While naive residual analysis is often inadequate for outlier detection due to potential masking and swamping, robust reduced-rank estimation approaches can be computationally demanding. Under Stein's unbiased risk estimation framework, we propose a set of tools, including the leverage score and the generalized information score, to perform model diagnostics and outlier detection in large-scale reduced-rank estimation. The leverage scores give an exact decomposition of the so-called model degrees of freedom down to the observation level, which leads to an exact decomposition of many commonly used information criteria; the resulting quantities are thus named information scores of the observations. The proposed information score approach provides a principled way of combining the residuals and leverage scores for anomaly detection. Simulation studies confirm that the proposed diagnostic tools work well. A pattern recognition example with handwritten digit images and a time series analysis example with monthly U.S. macroeconomic data further demonstrate the efficacy of the proposed approaches. PMID:28003860

  3. Accurate Computation of Survival Statistics in Genome-Wide Studies

    PubMed Central

    Vandin, Fabio; Papoutsaki, Alexandra; Raphael, Benjamin J.; Upfal, Eli

    2015-01-01

    A key challenge in genomics is to identify genetic variants that distinguish patients with different survival times following diagnosis or treatment. While the log-rank test is widely used for this purpose, nearly all implementations of the log-rank test rely on an asymptotic approximation that is not appropriate in many genomics applications. This is because the two populations determined by a genetic variant may have very different sizes, and the evaluation of many possible variants demands highly accurate computation of very small p-values. We demonstrate this problem for cancer genomics data, where the standard log-rank test leads to many false positive associations between somatic mutations and survival time. We develop and analyze a novel algorithm, the Exact Log-rank Test (ExaLT), that accurately computes the p-value of the log-rank statistic under an exact distribution that is appropriate for populations of any size. We demonstrate the advantages of ExaLT on data from published cancer genomics studies, finding significant differences from the reported p-values. We analyze somatic mutations in six cancer types from The Cancer Genome Atlas (TCGA), finding mutations with known associations to survival as well as several novel associations. In contrast, standard implementations of the log-rank test report dozens to hundreds of likely false positive associations as more significant than these known associations. PMID:25950620

  4. Model diagnostics in reduced-rank estimation.

    PubMed

    Chen, Kun

    2016-01-01

    Reduced-rank methods are very popular in high-dimensional multivariate analysis for conducting simultaneous dimension reduction and model estimation. However, the commonly used reduced-rank methods are not robust, as the underlying reduced-rank structure can be easily distorted by only a few data outliers. Anomalies are bound to exist in big data problems, and in some applications they may themselves be of primary interest. While naive residual analysis is often inadequate for outlier detection due to potential masking and swamping, robust reduced-rank estimation approaches can be computationally demanding. Under Stein's unbiased risk estimation framework, we propose a set of tools, including the leverage score and the generalized information score, to perform model diagnostics and outlier detection in large-scale reduced-rank estimation. The leverage scores give an exact decomposition of the so-called model degrees of freedom down to the observation level, which leads to an exact decomposition of many commonly used information criteria; the resulting quantities are thus named information scores of the observations. The proposed information score approach provides a principled way of combining the residuals and leverage scores for anomaly detection. Simulation studies confirm that the proposed diagnostic tools work well. A pattern recognition example with handwritten digit images and a time series analysis example with monthly U.S. macroeconomic data further demonstrate the efficacy of the proposed approaches.

  5. Multimodal biometric system using rank-level fusion approach.

    PubMed

    Monwar, Md Maruf; Gavrilova, Marina L

    2009-08-01

    In many real-world applications, unimodal biometric systems often face significant limitations due to sensitivity to noise, intraclass variability, data quality, nonuniversality, and other factors. Attempting to improve the performance of individual matchers in such situations may not prove highly effective. Multibiometric systems seek to alleviate some of these problems by providing multiple pieces of evidence of the same identity. These systems help achieve an increase in performance that may not be possible using a single biometric indicator. This paper presents an effective fusion scheme that combines information presented by multiple domain experts based on the rank-level fusion integration method. The developed multimodal biometric system possesses a number of unique qualities, from utilizing principal component analysis and Fisher's linear discriminant methods for identity authentication with the individual matchers (face, ear, and signature) to utilizing a novel rank-level fusion method to consolidate the results obtained from the different biometric matchers. The ranks of the individual matchers are combined using the highest rank, Borda count, and logistic regression approaches. The results indicate that fusion of individual modalities can improve the overall performance of the biometric system, even in the presence of low-quality data. Insights on multibiometric design using rank-level fusion and its performance on a variety of biometric databases are discussed in the concluding section.
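
    Of the three combination rules, the Borda count is the simplest to illustrate: each matcher awards an identity points by its rank position, and the totals define the fused ranking. A minimal sketch with hypothetical identities:

        def borda_fusion(rankings):
            """Combine per-matcher rankings with the Borda count.
            rankings: list of lists, each an identity ordering, best first."""
            n = len(rankings[0])
            scores = {}
            for ranking in rankings:
                for pos, identity in enumerate(ranking):
                    scores[identity] = scores.get(identity, 0) + (n - pos)
            # Higher total score = stronger consensus across matchers.
            return sorted(scores, key=scores.get, reverse=True)

        face = ["alice", "bob", "carol"]        # hypothetical matcher outputs
        ear = ["bob", "alice", "carol"]
        sign = ["alice", "carol", "bob"]
        print(borda_fusion([face, ear, sign]))  # ['alice', 'bob', 'carol']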

  6. Setting priorities for a research agenda to combat drug-resistant tuberculosis in children.

    PubMed

    Velayutham, B; Nair, D; Ramalingam, S; Perez-Velez, C M; Becerra, M C; Swaminathan, S

    2015-12-21

    Numerous knowledge gaps hamper the prevention and treatment of childhood drug-resistant tuberculosis (TB). Identifying research priorities is vital to inform and develop strategies to address this neglected problem. To systematically identify and rank research priorities in childhood drug-resistant TB. Adapting the Child Health and Nutrition Research Initiative (CHNRI) methodology, we compiled 53 research questions in four research areas, then classified the questions into three research types. We invited experts in childhood drug-resistant TB to score these questions through an online survey. A total of 81 respondents participated in the survey. The top-ranked research question was to identify the best combination of existing diagnostic tools for early diagnosis. Highly ranked treatment-related questions centred on the reasons for and interventions to improve treatment outcomes, adverse effects of drugs and optimal treatment duration. The prevalence of drug-resistant TB was the highest-ranked question in the epidemiology area. The development type questions that ranked highest focused on interventions for optimal diagnosis, treatment and modalities for treatment delivery. This is the first effort to identify and rank research priorities for childhood drug-resistant TB. The result is a resource to guide research to improve prevention and treatment of drug-resistant TB in children.

  7. Sequence-dependent rotation axis changes in tennis.

    PubMed

    Hansen, Clint; Martin, Caroline; Rezzoug, Nasser; Gorce, Philippe; Bideau, Benoit; Isableu, Brice

    2017-09-01

    The purpose of this study was to evaluate the role of rotation axes during a tennis serve. A motion capture system was used to evaluate the contribution of the potential axes of rotation (minimum inertia axis, shoulder-centre of mass axis and the shoulder-elbow axis) during the four discrete tennis serve phases (loading, cocking, acceleration and follow through). Ten ranked athletes (International Tennis Number 1-3) repeatedly performed a flat service aiming at a target on the other side of the net. The four serve phases are distinct and thus, each movement phase seems to be organised around specific rotation axes. The results showed that the limbs' rotational axis does not necessarily coincide with the minimum inertia axis across the cocking phase of the tennis serve. Even though individual serving strategies were exposed, all participants showed an effect due to the cocking phase and changed the rotation axis during the task. Taken together, the results showed that despite inter-individual differences, nine out of 10 participants changed the rotation axis towards the minimum inertia and/or the mass axis in an endeavour to maximise external rotation of the shoulder to optimally prepare for the acceleration phase.

  8. 76 FR 23631 - Proposed Collection; Comment Request

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-04-27

    ... non-compliance with the Commission's minimum performance standards regarding registered transfer agents, and (2) to assure that issuers are aware of certain problems and poor performances with respect... notice of a registered transfer agent's failure to comply with the Commission's minimum performance...

  9. Birth order rank as a moderator of the relation between behavior problems among children with an autism spectrum disorder and their siblings.

    PubMed

    Tomeny, Theodore S; Barry, Tammy D; Bader, Stephanie H

    2014-02-01

    Variability within the literature investigating typically-developing siblings of children with an autism spectrum disorder suggests that the quality of sibling outcomes may depend on specific factors. For this study, 42 parents of a child with an autism spectrum disorder and a typically-developing sibling provided data via online questionnaires. Birth order rank of the child with an autism spectrum disorder significantly moderated the relation between externalizing behaviors in children with an autism spectrum disorder and externalizing behaviors in their typically-developing siblings. Children with an autism spectrum disorder and higher levels of behavior problems had typically-developing siblings with higher levels of behavior problems only when the child with an autism spectrum disorder was older. These results provide a hint of clarification about the complex nature of sibling relations, but a great deal more research is needed to further examine outcomes of typically-developing siblings of children with an autism spectrum disorder.

  10. Efficient marginalization to compute protein posterior probabilities from shotgun mass spectrometry data

    PubMed Central

    Serang, Oliver; MacCoss, Michael J.; Noble, William Stafford

    2010-01-01

    The problem of identifying proteins from a shotgun proteomics experiment has not been definitively solved. Identifying the proteins in a sample requires ranking them, ideally with interpretable scores. In particular, “degenerate” peptides, which map to multiple proteins, have made such a ranking difficult to compute. The problem of computing posterior probabilities for the proteins, which can be interpreted as confidence in a protein’s presence, has been especially daunting. Previous approaches have either ignored the peptide degeneracy problem completely, addressed it by computing a heuristic set of proteins or heuristic posterior probabilities, or by estimating the posterior probabilities with sampling methods. We present a probabilistic model for protein identification in tandem mass spectrometry that recognizes peptide degeneracy. We then introduce graph-transforming algorithms that facilitate efficient computation of protein probabilities, even for large data sets. We evaluate our identification procedure on five different well-characterized data sets and demonstrate our ability to efficiently compute high-quality protein posteriors. PMID:20712337

  11. libSRES: a C library for stochastic ranking evolution strategy for parameter estimation.

    PubMed

    Ji, Xinglai; Xu, Ying

    2006-01-01

    Estimation of kinetic parameters in a biochemical pathway or network is a common problem in systems studies of biological processes. We have implemented a C library, named libSRES, to facilitate fast implementation of computer software for the study of non-linear biochemical pathways. This library implements a (mu, lambda)-ES evolutionary optimization algorithm that uses stochastic ranking as its constraint-handling technique. Considering the amount of computing time that solving a parameter-estimation problem might require, an MPI version of libSRES is provided for parallel implementation, as well as a simple user interface. libSRES is freely available and can be used directly in any C program as a library function. We have extensively tested the performance of libSRES on various pathway parameter-estimation problems and found its performance to be satisfactory. The source code (in C) is free for academic users at http://csbl.bmb.uga.edu/~jix/science/libSRES/
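
    The stochastic ranking procedure that libSRES builds on (Runarsson and Yao's bubble-sort-like scheme) can be paraphrased in a few lines. This Python sketch orders a population by objective value f with probability pf when constraints are violated, and by constraint violation phi otherwise:

        import random

        def stochastic_rank(pop, f, phi, pf=0.45):
            """Stochastic ranking: bubble-sort individuals, comparing by
            objective f with probability pf (or when both individuals are
            feasible), and by constraint violation phi otherwise."""
            idx = list(range(len(pop)))
            for _ in range(len(pop)):
                swapped = False
                for i in range(len(pop) - 1):
                    a, b = idx[i], idx[i + 1]
                    both_feasible = phi[a] == 0 and phi[b] == 0
                    if both_feasible or random.random() < pf:
                        out_of_order = f[a] > f[b]      # compare objectives
                    else:
                        out_of_order = phi[a] > phi[b]  # compare violations
                    if out_of_order:
                        idx[i], idx[i + 1] = b, a
                        swapped = True
                if not swapped:
                    break
            return [pop[i] for i in idx]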

  12. An Adaptive Pheromone Updation of the Ant-System using LMS Technique

    NASA Astrophysics Data System (ADS)

    Paul, Abhishek; Mukhopadhyay, Sumitra

    2010-10-01

    We propose a modified model of pheromone updating for the Ant System, called the Adaptive Ant System (AAS), based on the properties of basic adaptive filters. Here, we exploit the Least Mean Square (LMS) algorithm in the pheromone update to find the best minimum tour for the Travelling Salesman Problem (TSP). A TSP benchmark library is used for the selection of benchmark problems, and the proposed AAS determines the minimum tour length for problems containing large numbers of cities. Our algorithm shows effective results and gives the least tour length in most cases compared with other existing approaches.
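
    The abstract does not spell out the exact update rule, but an LMS-flavored pheromone update might look like the following sketch, where each used edge's pheromone is nudged toward a tour-quality signal by an LMS-style error term (the function, the target signal, and the learning rate mu are all assumptions for illustration):

        def lms_pheromone_update(tau, tours, lengths, mu=0.1):
            """Hypothetical LMS-style pheromone update: move each used
            edge's pheromone toward a desirability target, in the spirit
            of the LMS rule w += mu * (target - w)."""
            best = min(lengths)
            for tour, length in zip(tours, lengths):
                target = best / length            # in (0, 1]; 1 for best tour
                for i, j in zip(tour, tour[1:] + tour[:1]):
                    error = target - tau[i][j]    # LMS error term
                    tau[i][j] += mu * error
                    tau[j][i] = tau[i][j]         # symmetric TSP
            return tau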

  13. A Comparison of Trajectory Optimization Methods for the Impulsive Minimum Fuel Rendezvous Problem

    NASA Technical Reports Server (NTRS)

    Hughes, Steven P.; Mailhe, Laurie M.; Guzman, Jose J.

    2002-01-01

    In this paper we present a comparison of optimization approaches to the minimum fuel rendezvous problem. Both indirect and direct methods are compared for a variety of test cases. The indirect approach is based on primer vector theory. The direct approaches are implemented numerically and include Sequential Quadratic Programming (SQP), Quasi-Newton, Simplex, Genetic Algorithms, and Simulated Annealing. Each method is applied to a variety of test cases, including circular-to-circular coplanar orbits, LEO to GEO, and orbit phasing in highly elliptic orbits. We also compare different constrained optimization routines on complex orbit rendezvous problems with complicated, highly nonlinear constraints.

  14. Pollution Abatement Management System--Concept Definition.

    DTIC Science & Technology

    1978-05-01

    and (3) identify priority ranking of environmental pollution problems within the Department of the Army. This report formalizes the overall concept development of PAMS and the system’s developmental strategy.

  15. Generalized minimum dominating set and application in automatic text summarization

    NASA Astrophysics Data System (ADS)

    Xu, Yi-Zhi; Zhou, Hai-Jun

    2016-03-01

    For a graph formed by vertices and weighted edges, a generalized minimum dominating set (MDS) is a vertex set of smallest cardinality such that the summed weight of edges from each outside vertex to vertices in this set is equal to or larger than a certain threshold value. This generalized MDS problem reduces to the conventional MDS problem in the limiting case of all the edge weights being equal to the threshold value. We treat the generalized MDS problem in the present paper by a replica-symmetric spin glass theory and derive a set of belief-propagation equations. As a practical application we consider the problem of extracting a set of sentences that best summarize a given input text document. We carry out a preliminary test of the statistical physics-inspired method on this automatic text summarization problem.
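
    The defining condition of a generalized MDS is easy to check directly; a small sketch (the function and variable names are illustrative, not from the paper):

        import numpy as np

        def is_generalized_dominating(adj_w, D, theta):
            """Check the generalized MDS condition: every vertex outside D
            must receive total edge weight >= theta from vertices in D.
            adj_w: symmetric weight matrix (0 where there is no edge)."""
            D = set(D)
            n = adj_w.shape[0]
            return all(sum(adj_w[v, u] for u in D) >= theta
                       for v in range(n) if v not in D)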

  16. The Ranking of Global Environmental Issues and Problems by Polish Secondary Students and Teachers.

    ERIC Educational Resources Information Center

    Robinson, Michael; Trojok, Tomasz; Norwisz, Jan

    1997-01-01

    Identifies and discusses Polish student and teacher priorities of Bybee's 12 environmental problems in two cities in Katowice Province. Provides pertinent background on the Polish educational system. Presents reasons why the current science teaching model must be changed if the science curriculum is to provide more understanding of Bybee's 12…

  17. Shortcomings of the IQ-Based Construct of Underachievement

    ERIC Educational Resources Information Center

    Ziegler, Albert; Stoeger, Heidrun

    2012-01-01

    Despite being plagued by serious conceptual problems, underachievement ranks among the most popular constructs in research on the gifted. Many of its problems have their roots in the use of the IQ as the supposedly best method of measuring ability levels. Only a few decades ago the opinion was still widespread that the IQ-based construct of…

  18. Human Performance on Hard Non-Euclidean Graph Problems: Vertex Cover

    ERIC Educational Resources Information Center

    Carruthers, Sarah; Masson, Michael E. J.; Stege, Ulrike

    2012-01-01

    Recent studies on a computationally hard visual optimization problem, the Traveling Salesperson Problem (TSP), indicate that humans are capable of finding close to optimal solutions in near-linear time. The current study is a preliminary step in investigating human performance on another hard problem, the Minimum Vertex Cover Problem, in which…

  19. Tin Cans Revisited.

    ERIC Educational Resources Information Center

    Verderber, Nadine L.

    1992-01-01

    Presents the use of spreadsheets as an alternative method for precalculus students to solve maximum or minimum problems involving surface area and volume. Concludes that students with less technical backgrounds can solve problems normally requiring calculus and suggests sources for additional problems. (MDH)

  20. Solving fully fuzzy transportation problem using pentagonal fuzzy numbers

    NASA Astrophysics Data System (ADS)

    Maheswari, P. Uma; Ganesan, K.

    2018-04-01

    In this paper, we propose a simple approach for solving fuzzy transportation problems in a fuzzy environment, in which the transportation costs, supplies at sources, and demands at destinations are represented by pentagonal fuzzy numbers. The fuzzy transportation problem is solved without converting it to its equivalent crisp form, using a robust ranking technique and a new fuzzy arithmetic on pentagonal fuzzy numbers. A numerical example is provided to illustrate the proposed approach.
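
    One common way to realize a robust (Yager-style) ranking for a pentagonal fuzzy number (a, b, c, d, e) is to integrate the midpoints of its alpha-cuts. The sketch below assumes piecewise-linear membership with value 0.5 at the interior knots b and d, which is one convention among several and not necessarily the paper's:

        def alpha_cut(A, alpha):
            """Alpha-cut [lo, hi] of a pentagonal fuzzy number (a, b, c, d, e),
            assuming membership 0 at a and e, 0.5 at b and d, and 1 at c."""
            a, b, c, d, e = A
            if alpha <= 0.5:
                lo = a + 2 * alpha * (b - a)
                hi = e - 2 * alpha * (e - d)
            else:
                lo = b + 2 * (alpha - 0.5) * (c - b)
                hi = d - 2 * (alpha - 0.5) * (d - c)
            return lo, hi

        def robust_rank(A, steps=100):
            """Yager-style robust ranking: average the alpha-cut midpoints."""
            mids = []
            for k in range(steps):
                alpha = (k + 0.5) / steps
                lo, hi = alpha_cut(A, alpha)
                mids.append(0.5 * (lo + hi))
            return sum(mids) / steps

        print(robust_rank((1, 2, 3, 4, 5)))   # symmetric pentagon ranks at 3.0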

  1. Objective evaluation of linear and nonlinear tomosynthetic reconstruction algorithms

    NASA Astrophysics Data System (ADS)

    Webber, Richard L.; Hemler, Paul F.; Lavery, John E.

    2000-04-01

    This investigation objectively tests five different tomosynthetic reconstruction methods involving three different digital sensors, each used in a different radiologic application: chest, breast, and pelvis, respectively. The common task was to simulate a specific representative projection for each application by summation of appropriately shifted tomosynthetically generated slices produced using the five algorithms. These algorithms were, respectively: (1) conventional back projection; (2) iteratively deconvolved back projection; (3) a nonlinear algorithm similar to back projection, except that the minimum value from all of the component projections for each pixel is computed instead of the average value; (4) a similar algorithm wherein the maximum value is computed instead of the minimum value; and (5) the same type of algorithm except that the median value is computed. Using these five algorithms, we obtained data from each sensor-tissue combination, yielding three factorially distributed series of contiguous tomosynthetic slices. The respective slice stacks were then aligned orthogonally and averaged to yield an approximation of a single orthogonal projection radiograph of the complete (unsliced) tissue thickness. The resulting images were histogram equalized, and actual projection control images were subtracted from their tomosynthetically synthesized counterparts. Standard deviations of the resulting histograms were recorded as inverse figures of merit (FOMs). Visual rankings of image differences by five human observers of a subset (breast data only) were also performed to determine whether their subjective observations correlated with homologous FOMs. Nonparametric statistical analysis of these data demonstrated significant differences (P < 0.05) between reconstruction algorithms. The nonlinear minimization reconstruction method nearly always outperformed the other methods tested. Observer rankings were similar to those measured objectively.
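
    All five reconstructions reduce to a per-pixel aggregation over registered component projections, which makes the comparison easy to prototype; a numpy sketch with hypothetical data:

        import numpy as np

        def reconstruct(shifted_projections, mode="min"):
            """Tomosynthetic slice from component projections already shifted
            into register: combine per pixel by mean (back projection) or by
            a nonlinear order statistic (min / max / median)."""
            stack = np.stack(shifted_projections)     # (n_projections, H, W)
            ops = {"mean": np.mean, "min": np.min,
                   "max": np.max, "median": np.median}
            return ops[mode](stack, axis=0)

        # Hypothetical: 9 registered 64x64 projections of one slice plane.
        projs = [np.random.rand(64, 64) for _ in range(9)]
        slice_min = reconstruct(projs, "min")         # the study's best performer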

  2. Integrated design of multivariable hydrometric networks using entropy theory with a multiobjective optimization approach

    NASA Astrophysics Data System (ADS)

    Kim, Y.; Hwang, T.; Vose, J. M.; Martin, K. L.; Band, L. E.

    2016-12-01

    Obtaining quality hydrologic observations is the first step towards successful water resources management. While remote sensing techniques have made it possible to convert satellite images of the Earth's surface into hydrologic data, the importance of ground-based observations has never diminished, because in-situ data are often highly accurate and can be used to validate remote measurements. Efficient hydrometric networks are becoming more important for obtaining as much information as possible with minimum redundancy. The World Meteorological Organization (WMO) has recommended a guideline for minimum hydrometric network density based on physiography; however, this guideline is not for optimum network design but for avoiding serious deficiencies in a network. Moreover, all hydrologic variables are interconnected within the hydrologic cycle, while monitoring networks have been designed individually. This study proposes an integrated network design method using entropy theory with a multiobjective optimization approach. Specifically, a precipitation and a streamflow network in a semi-urban watershed in Ontario, Canada were designed simultaneously by maximizing joint entropy, minimizing total correlation, and maximizing the conditional entropy of the streamflow network given the precipitation network. Compared with typical individual network designs, the proposed design method can determine more efficient optimal networks by avoiding redundant stations whose hydrologic information is transferable. Additionally, four quantization cases were applied in the entropy calculations to assess their implications for the station rankings and the optimal networks. The results showed that the quantization method should be chosen carefully, because the rankings and optimal networks change accordingly.

  3. Integrated design of multivariable hydrometric networks using entropy theory with a multiobjective optimization approach

    NASA Astrophysics Data System (ADS)

    Keum, J.; Coulibaly, P. D.

    2017-12-01

    Obtaining quality hydrologic observations is the first step towards successful water resources management. While remote sensing techniques have made it possible to convert satellite images of the Earth's surface into hydrologic data, the importance of ground-based observations has never diminished, because in-situ data are often highly accurate and can be used to validate remote measurements. Efficient hydrometric networks are becoming more important for obtaining as much information as possible with minimum redundancy. The World Meteorological Organization (WMO) has recommended a guideline for minimum hydrometric network density based on physiography; however, this guideline is not for optimum network design but for avoiding serious deficiencies in a network. Moreover, all hydrologic variables are interconnected within the hydrologic cycle, while monitoring networks have been designed individually. This study proposes an integrated network design method using entropy theory with a multiobjective optimization approach. Specifically, a precipitation and a streamflow network in a semi-urban watershed in Ontario, Canada were designed simultaneously by maximizing joint entropy, minimizing total correlation, and maximizing the conditional entropy of the streamflow network given the precipitation network. Compared with typical individual network designs, the proposed design method can determine more efficient optimal networks by avoiding redundant stations whose hydrologic information is transferable. Additionally, four quantization cases were applied in the entropy calculations to assess their implications for the station rankings and the optimal networks. The results showed that the quantization method should be chosen carefully, because the rankings and optimal networks change accordingly.
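
    Two of the three design objectives above are standard entropy quantities; for discretized records they can be computed as below (the bin labels are hypothetical, and the conditional-entropy objective is omitted for brevity):

        import numpy as np

        def entropy(labels):
            """Empirical Shannon entropy (bits) of a discretized series
            (1-D) or of the joint record across stations (2-D)."""
            _, counts = np.unique(labels, axis=0, return_counts=True)
            p = counts / counts.sum()
            return -np.sum(p * np.log2(p))

        def total_correlation(data):
            """Total correlation C = sum of marginal entropies - joint
            entropy; lower C means less redundancy among the stations."""
            marginal = sum(entropy(data[:, j]) for j in range(data.shape[1]))
            return marginal - entropy(data)

        rng = np.random.default_rng(2)
        data = rng.integers(0, 4, (365, 6))   # hypothetical binned records
        print(total_correlation(data))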

  4. Using multi-attribute decision-making approaches in the selection of a hospital management system.

    PubMed

    Arasteh, Mohammad Ali; Shamshirband, Shahaboddin; Yee, Por Lip

    2018-01-01

    Choosing the most appropriate organizational software is a persistent challenge for managers, especially IT directors. The term "enterprise software selection" means purchasing, creating, or ordering software that, first, is best adapted to the needs of the organization and, second, has a suitable price and technical support. Specifying selection criteria and ranking them is the primary prerequisite for this action. This article provides a method to evaluate, rank, and compare the available enterprise software in order to choose the apt one. The method consists of a three-stage process. First, it identifies the organizational requirements and assesses them. Second, it selects the best approach among three possibilities: in-house production, buying software, and ordering special software for native use. Third, it evaluates, compares, and ranks the alternative software. The third stage uses different methods of multi-attribute decision making (MADM) and compares the resulting rankings. Based on different characteristics of the problem, several methods were tested, namely the Analytic Hierarchy Process (AHP), the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS), Elimination and Choice Expressing Reality (ELECTRE), and a simple weighting method. Finally, we propose the most practical method for similar problems.
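
    Among the MADM methods listed, TOPSIS is compact enough to sketch in full: normalize and weight the decision matrix, then rank alternatives by closeness to the ideal point. The criteria and scores below are invented for illustration:

        import numpy as np

        def topsis(X, weights, benefit):
            """TOPSIS ranking. X: (alternatives, criteria) decision matrix;
            weights: criterion weights summing to 1; benefit: True where
            larger is better, False for cost criteria."""
            # Vector-normalize columns, then weight them.
            V = weights * X / np.linalg.norm(X, axis=0)
            # Ideal and anti-ideal points per criterion.
            ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
            anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
            d_pos = np.linalg.norm(V - ideal, axis=1)
            d_neg = np.linalg.norm(V - anti, axis=1)
            closeness = d_neg / (d_pos + d_neg)   # 1 = best possible
            return np.argsort(-closeness), closeness

        # Hypothetical: 3 software packages scored on cost, support, fit.
        X = np.array([[300.0, 7, 8], [450.0, 9, 9], [200.0, 5, 6]])
        ranks, scores = topsis(X, np.array([0.5, 0.25, 0.25]),
                               np.array([False, True, True]))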

  5. Pure state `really' informationally complete with rank-1 POVM

    NASA Astrophysics Data System (ADS)

    Wang, Yu; Shang, Yun

    2018-03-01

    What is the minimal number of elements in a rank-1 positive operator-valued measure (POVM) that can uniquely determine any pure state in the d-dimensional Hilbert space H_d? The known result is that the number is no less than 3d - 2. We show that this lower bound is not tight except for d = 2 or 4. Then we give an upper bound of 4d - 3. For d = 2, many rank-1 POVMs with four elements can determine any pure state in H_2. For d = 3, we show by construction that eight is the minimal number. For d = 4, the minimal number is in the set {10, 11, 12, 13}. We show that if this number is greater than 10, an unsettled open problem can be solved: that three orthonormal bases cannot distinguish all pure states in H_4. For any dimension d, we construct d + 2k - 2 adaptive rank-1 positive operators for the reconstruction of any unknown pure state in H_d, where 1 ≤ k ≤ d.

  6. Robust subspace clustering via joint weighted Schatten-p norm and Lq norm minimization

    NASA Astrophysics Data System (ADS)

    Zhang, Tao; Tang, Zhenmin; Liu, Qing

    2017-05-01

    Low-rank representation (LRR) has been successfully applied to subspace clustering. However, the nuclear norm in the standard LRR is not optimal for approximating the rank function in many real-world applications. Meanwhile, the L21 norm in LRR also fails to characterize various noises properly. To address the above issues, we propose an improved LRR method, which achieves the low-rank property via a new formulation with a weighted Schatten-p norm and an Lq norm (WSPQ). Specifically, the nuclear norm is generalized to the Schatten-p norm, and different weights are assigned to the singular values, so that the rank function can be approximated more accurately. In addition, the Lq norm is further incorporated into WSPQ to model different noises and improve robustness. An efficient algorithm based on the inexact augmented Lagrange multiplier method is designed for the formulated problem. Extensive experiments on face clustering and motion segmentation clearly demonstrate the superiority of the proposed WSPQ over several state-of-the-art methods.

  7. A Model-Free Scheme for Meme Ranking in Social Media.

    PubMed

    He, Saike; Zheng, Xiaolong; Zeng, Daniel

    2016-01-01

    The prevalence of social media has greatly catalyzed the dissemination and proliferation of online memes (e.g., ideas, topics, melodies, tags, etc.). However, this information abundance is exceeding the capability of online users to consume it. Ranking memes based on their popularity could promote online advertisement and content distribution. Despite such importance, few existing works can solve this problem well: they are either daunted by impractical assumptions or are incapable of characterizing dynamic information. As such, in this paper, we elaborate a model-free scheme to rank online memes in the context of social media. This scheme is capable of characterizing the nonlinear interactions of online users, which mark the process of meme diffusion. Empirical studies on two large-scale, real-world datasets (one in English and one in Chinese) demonstrate the effectiveness and robustness of the proposed scheme. In addition, due to its fine-grained modeling of user dynamics, this ranking scheme can also be utilized to explain meme popularity through the lens of social influence.

  8. A Model-Free Scheme for Meme Ranking in Social Media

    PubMed Central

    He, Saike; Zheng, Xiaolong; Zeng, Daniel

    2015-01-01

    The prevalence of social media has greatly catalyzed the dissemination and proliferation of online memes (e.g., ideas, topics, melodies, tags, etc.). However, this information abundance is exceeding the capability of online users to consume it. Ranking memes based on their popularity could promote online advertisement and content distribution. Despite such importance, few existing works can solve this problem well: they are either daunted by impractical assumptions or are incapable of characterizing dynamic information. As such, in this paper, we elaborate a model-free scheme to rank online memes in the context of social media. This scheme is capable of characterizing the nonlinear interactions of online users, which mark the process of meme diffusion. Empirical studies on two large-scale, real-world datasets (one in English and one in Chinese) demonstrate the effectiveness and robustness of the proposed scheme. In addition, due to its fine-grained modeling of user dynamics, this ranking scheme can also be utilized to explain meme popularity through the lens of social influence. PMID:26823638

  9. Solving the wrong hierarchy problem

    DOE PAGES

    Blinov, Nikita; Hook, Anson

    2016-06-29

    Many theories require augmenting the Standard Model with additional scalar fields with large order one couplings. We present a new solution to the hierarchy problem for these scalar fields. We explore parity- and Z2-symmetric theories where the Standard Model Higgs potential has two vacua. The parity or Z2 copy of the Higgs lives in the minimum far from the origin while our Higgs occupies the minimum near the origin of the potential. This approach results in a theory with multiple light scalar fields but with only a single hierarchy problem, since the bare mass is tied to the Higgs mass by a discrete symmetry. The new scalar does not have a new hierarchy problem associated with it because its expectation value and mass are generated by dimensional transmutation of the scalar quartic coupling. The location of the second Higgs minimum is not a free parameter, but is rather a function of the matter content of the theory. As a result, these theories are extremely predictive. We develop this idea in the context of a solution to the strong CP problem. Lastly, we show this mechanism postdicts the top Yukawa to be within 1σ of the currently measured value and predicts scalar color octets with masses in the range 9-200 TeV.

  10. Multi-UAV Routing for Area Coverage and Remote Sensing with Minimum Time.

    PubMed

    Avellar, Gustavo S C; Pereira, Guilherme A S; Pimenta, Luciano C A; Iscold, Paulo

    2015-11-02

    This paper presents a solution to the problem of minimum-time coverage of ground areas using a group of unmanned air vehicles (UAVs) equipped with image sensors. The solution is divided into two parts: (i) modeling the task as a graph whose vertices are geographic coordinates determined in such a way that a single UAV would cover the area in minimum time; and (ii) solving a mixed integer linear programming problem, formulated according to the graph variables defined in the first part, to route the team of UAVs over the area. The main contribution of the proposed methodology, when compared with traditional vehicle routing problem (VRP) solutions, is that our method addresses practical problems only encountered during the execution of the task with actual UAVs. In this line, one of the main contributions of the paper is that the number of UAVs used to cover the area is selected automatically by solving the optimization problem. The number of UAVs is influenced by the vehicles' maximum flight time and by the setup time, which is the time needed to prepare and launch a UAV. To illustrate the methodology, the paper presents experimental results obtained with two hand-launched, fixed-wing UAVs.

  11. Adaptive low-rank subspace learning with online optimization for robust visual tracking.

    PubMed

    Liu, Risheng; Wang, Di; Han, Yuzhuo; Fan, Xin; Luo, Zhongxuan

    2017-04-01

    In recent years, sparse and low-rank models have been widely used to formulate appearance subspace for visual tracking. However, most existing methods only consider the sparsity or low-rankness of the coefficients, which is not sufficient for appearance subspace learning on complex video sequences. Moreover, as both the low-rank and the column sparse measures are tightly related to all the samples in the sequences, it is challenging to incrementally solve optimization problems with both nuclear norm and column sparse norm on sequentially obtained video data. To address the above limitations, this paper develops a novel low-rank subspace learning with adaptive penalization (LSAP) framework for subspace based robust visual tracking. Different from previous work, which often simply decomposes observations as low-rank features and sparse errors, LSAP simultaneously learns the subspace basis, low-rank coefficients and column sparse errors to formulate appearance subspace. Within the LSAP framework, we introduce a Hadamard product based regularization to incorporate rich generative/discriminative structure constraints to adaptively penalize the coefficients for subspace learning. It is shown that such adaptive penalization can significantly improve the robustness of LSAP on severely corrupted datasets. To utilize LSAP for online visual tracking, we also develop an efficient incremental optimization scheme for nuclear norm and column sparse norm minimizations. Experiments on 50 challenging video sequences demonstrate that our tracker outperforms other state-of-the-art methods. Copyright © 2017 Elsevier Ltd. All rights reserved.

  12. Tensor completion for estimating missing values in visual data.

    PubMed

    Liu, Ji; Musialski, Przemyslaw; Wonka, Peter; Ye, Jieping

    2013-01-01

    In this paper, we propose an algorithm to estimate missing values in tensors of visual data. The values can be missing due to problems in the acquisition process or because the user manually identified unwanted outliers. Our algorithm works even with a small number of samples and it can propagate structure to fill larger missing regions. Our methodology is built on recent studies about matrix completion using the matrix trace norm. The contribution of our paper is to extend the matrix case to the tensor case by proposing the first definition of the trace norm for tensors and then by building a working algorithm. First, we propose a definition for the tensor trace norm that generalizes the established definition of the matrix trace norm. Second, similarly to matrix completion, the tensor completion is formulated as a convex optimization problem. Unfortunately, the straightforward problem extension is significantly harder to solve than the matrix case because of the dependency among multiple constraints. To tackle this problem, we developed three algorithms: simple low rank tensor completion (SiLRTC), fast low rank tensor completion (FaLRTC), and high accuracy low rank tensor completion (HaLRTC). The SiLRTC algorithm is simple to implement and employs a relaxation technique to separate the dependent relationships and uses the block coordinate descent (BCD) method to achieve a globally optimal solution; the FaLRTC algorithm utilizes a smoothing scheme to transform the original nonsmooth problem into a smooth one and can be used to solve a general tensor trace norm minimization problem; the HaLRTC algorithm applies the alternating direction method of multipliers (ADMMs) to our problem. Our experiments show potential applications of our algorithms and the quantitative evaluation indicates that our methods are more accurate and robust than heuristic approaches. The efficiency comparison indicates that FaLRTC and HaLRTC are more efficient than SiLRTC; between FaLRTC and HaLRTC, the former is more efficient for obtaining a low-accuracy solution, while the latter is preferred if a high-accuracy solution is desired.
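
    To make the flavor of these solvers concrete, the sketch below gives a minimal SiLRTC-style loop in Python/NumPy: block coordinate descent that soft-thresholds the singular values of each mode unfolding, averages the refolded estimates, and re-imposes the observed entries. The fixed threshold and iteration count are illustrative choices, not the authors' implementation.

    ```python
    # Minimal SiLRTC-flavored tensor completion sketch (illustrative only).
    import numpy as np

    def unfold(T, mode):                    # mode-n unfolding
        return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

    def fold(M, mode, shape):               # inverse of unfold
        rest = [s for i, s in enumerate(shape) if i != mode]
        return np.moveaxis(M.reshape([shape[mode]] + rest), 0, mode)

    def svt(M, tau):                        # singular value soft-thresholding
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        return U @ np.diag(np.maximum(s - tau, 0)) @ Vt

    def silrtc(T_obs, mask, tau=1.0, n_iter=100):
        X = np.where(mask, T_obs, 0.0)
        for _ in range(n_iter):
            # average the low-rank estimates from all mode unfoldings
            M = sum(fold(svt(unfold(X, m), tau), m, X.shape)
                    for m in range(X.ndim)) / X.ndim
            X = np.where(mask, T_obs, M)    # keep observed entries fixed
        return X
    ```

    A call would pass the partially observed tensor and a boolean mask of the same shape; FaLRTC and HaLRTC replace this simple loop with smoothing and ADMM machinery, respectively.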

  13. Quantum Max-flow/Min-cut

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cui, Shawn X., E-mail: xingshan@math.ucsb.edu; Quantum Architectures and Computation Group, Microsoft Research, Redmond, Washington 98052; Freedman, Michael H., E-mail: michaelf@microsoft.com

    2016-06-15

    The classical max-flow min-cut theorem describes transport through certain idealized classical networks. We consider the quantum analog for tensor networks. By associating an integral capacity to each edge and a tensor to each vertex in a flow network, we can also interpret it as a tensor network and, more specifically, as a linear map from the input space to the output space. The quantum max-flow is defined to be the maximal rank of this linear map over all choices of tensors. The quantum min-cut is defined to be the minimum product of the capacities of edges over all cuts ofmore » the tensor network. We show that unlike the classical case, the quantum max-flow=min-cut conjecture is not true in general. Under certain conditions, e.g., when the capacity on each edge is some power of a fixed integer, the quantum max-flow is proved to equal the quantum min-cut. However, concrete examples are also provided where the equality does not hold. We also found connections of quantum max-flow/min-cut with entropy of entanglement and the quantum satisfiability problem. We speculate that the phenomena revealed may be of interest both in spin systems in condensed matter and in quantum gravity.« less
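
    The definitions invite direct numerical experimentation. The toy below is a simplification (an edge of capacity c is taken to carry a c-dimensional space, and random Gaussian tensors stand in for the generic choice of vertex tensors): it probes the quantum max-flow of a three-edge chain by computing the rank of the contracted linear map.

    ```python
    # Rank of the linear map induced by a tiny tensor network (toy example).
    import numpy as np

    rng = np.random.default_rng(0)
    # chain network: source --(2)--> v1 --(3)--> v2 --(2)--> sink
    T1 = rng.standard_normal((2, 3))    # tensor at v1: input edge x middle edge
    T2 = rng.standard_normal((3, 2))    # tensor at v2: middle edge x output edge
    M = T1 @ T2                         # contract the internal edge
    print(np.linalg.matrix_rank(M))     # generically 2, matching the min cut here
    ```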

  14. Measuring economic complexity of countries and products: which metric to use?

    NASA Astrophysics Data System (ADS)

    Mariani, Manuel Sebastian; Vidmer, Alexandre; Medo, Matúš; Zhang, Yi-Cheng

    2015-11-01

    Evaluating the economies of countries and their relations with products in the global market is a central problem in economics, with far-reaching implications for our theoretical understanding of international trade as well as for practical applications, such as policy making and financial investment planning. The recent Economic Complexity approach aims to quantify the competitiveness of countries and the quality of the exported products based on the empirical observation that the most competitive countries have diversified exports, whereas developing countries export only a few low-quality products - typically those exported by many other countries. Two different metrics, Fitness-Complexity and the Method of Reflections, have been proposed to measure country and product score in the Economic Complexity framework. We use international trade data and a recent ranking evaluation measure to quantitatively compare the ability of the two metrics to rank countries and products according to their importance in the network. The results show that the Fitness-Complexity metric outperforms the Method of Reflections in both the ranking of products and the ranking of countries. We also investigate a generalization of the Fitness-Complexity metric and show that it can produce improved rankings provided that the input data are reliable.
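
    For readers unfamiliar with the metric, the sketch below runs the standard Fitness-Complexity iteration from the Economic Complexity literature on a toy binary export matrix; the unit-mean normalization at each step is the conventional choice and the data are invented, so this illustrates the mechanics rather than this paper's computation.

    ```python
    # Fitness-Complexity iteration on a toy country-product export matrix.
    import numpy as np

    M = np.array([[1, 1, 1, 1],     # diversified country
                  [1, 1, 0, 0],
                  [1, 0, 0, 0]], dtype=float)

    F = np.ones(M.shape[0])         # country fitness
    Q = np.ones(M.shape[1])         # product complexity
    for _ in range(200):
        F_new = M @ Q                         # fit countries export complex products
        Q_new = 1.0 / (M.T @ (1.0 / F))       # ubiquitous products drag complexity down
        F, Q = F_new / F_new.mean(), Q_new / Q_new.mean()
    print(np.argsort(-F))           # country ranking, most competitive first
    ```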

  15. A multimedia retrieval framework based on semi-supervised ranking and relevance feedback.

    PubMed

    Yang, Yi; Nie, Feiping; Xu, Dong; Luo, Jiebo; Zhuang, Yueting; Pan, Yunhe

    2012-04-01

    We present a new framework for multimedia content analysis and retrieval which consists of two independent algorithms. First, we propose a new semi-supervised algorithm called ranking with Local Regression and Global Alignment (LRGA) to learn a robust Laplacian matrix for data ranking. In LRGA, for each data point, a local linear regression model is used to predict the ranking scores of its neighboring points. A unified objective function is then proposed to globally align the local models from all the data points so that an optimal ranking score can be assigned to each data point. Second, we propose a semi-supervised long-term Relevance Feedback (RF) algorithm to refine the multimedia data representation. The proposed long-term RF algorithm utilizes both the multimedia data distribution in the multimedia feature space and the historical RF information provided by users. A trace ratio optimization problem is then formulated and solved by an efficient algorithm. The algorithms have been applied to several content-based multimedia retrieval applications, including cross-media retrieval, image retrieval, and 3D motion/pose data retrieval. Comprehensive experiments on four data sets have demonstrated the framework's advantages in precision, robustness, scalability, and computational efficiency.

  16. The application of fuzzy Delphi and fuzzy inference system in supplier ranking and selection

    NASA Astrophysics Data System (ADS)

    Tahriri, Farzad; Mousavi, Maryam; Hozhabri Haghighi, Siamak; Zawiah Md Dawal, Siti

    2014-06-01

    In today's highly competitive market, an effective supplier selection process is vital to the success of any manufacturing system. Selecting the appropriate supplier is always a difficult task because suppliers possess varied strengths and weaknesses that necessitate careful evaluations prior to suppliers' ranking. This is a complex process with many subjective and objective factors to consider before the benefits of supplier selection are achieved. This paper identifies six extremely critical criteria and thirteen sub-criteria based on the literature. A new methodology employing those criteria and sub-criteria is proposed for the assessment and ranking of a given set of suppliers. To handle the subjectivity of the decision maker's assessment, an integration of fuzzy Delphi with a fuzzy inference system is applied, and a new ranking method is proposed for the supplier selection problem. This supplier selection model enables decision makers to rank the suppliers based on three classifications including "extremely preferred", "moderately preferred", and "weakly preferred". In addition, in each classification, suppliers are put in order from highest final score to the lowest. Finally, the methodology is verified and validated through an example of a numerical test bed.

  17. Exchange-Hole Dipole Dispersion Model for Accurate Energy Ranking in Molecular Crystal Structure Prediction.

    PubMed

    Whittleton, Sarah R; Otero-de-la-Roza, A; Johnson, Erin R

    2017-02-14

    Accurate energy ranking is a key facet to the problem of first-principles crystal-structure prediction (CSP) of molecular crystals. This work presents a systematic assessment of B86bPBE-XDM, a semilocal density functional combined with the exchange-hole dipole moment (XDM) dispersion model, for energy ranking using 14 compounds from the first five CSP blind tests. Specifically, the set of crystals studied comprises 11 rigid, planar compounds and 3 co-crystals. The experimental structure was correctly identified as the lowest in lattice energy for 12 of the 14 total crystals. One of the exceptions is 4-hydroxythiophene-2-carbonitrile, for which the experimental structure was correctly identified once a quasi-harmonic estimate of the vibrational free-energy contribution was included, evidencing the occasional importance of thermal corrections for accurate energy ranking. The other exception is an organic salt, where charge-transfer error (also called delocalization error) is expected to cause the base density functional to be unreliable. Provided the choice of base density functional is appropriate and an estimate of temperature effects is used, XDM-corrected density-functional theory is highly reliable for the energetic ranking of competing crystal structures.

  18. Low-rank regularization for learning gene expression programs.

    PubMed

    Ye, Guibo; Tang, Mengfan; Cai, Jian-Feng; Nie, Qing; Xie, Xiaohui

    2013-01-01

    Learning gene expression programs directly from a set of observations is challenging due to the complexity of gene regulation, high noise of experimental measurements, and insufficient number of experimental measurements. Imposing additional constraints with strong and biologically motivated regularizations is critical in developing reliable and effective algorithms for inferring gene expression programs. Here we propose a new form of regularization that constrains the number of independent connectivity patterns between regulators and targets, motivated by the modular design of gene regulatory programs and the belief that the total number of independent regulatory modules should be small. We formulate a multi-target linear regression framework to incorporate this type of regularization, in which the number of independent connectivity patterns is expressed as the rank of the connectivity matrix between regulators and targets. We then generalize the linear framework to nonlinear cases, and prove that the generalized low-rank regularization model is still convex. Efficient algorithms are derived to solve both the linear and nonlinear low-rank regularized problems. Finally, we test the algorithms on three gene expression datasets, and show that the low-rank regularization improves the accuracy of gene expression prediction in these three datasets.
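
    The convex nuclear-norm formulation has a classical hard-rank cousin, reduced-rank regression, which the sketch below uses to illustrate the core idea of a connectivity matrix with few independent regulatory patterns. This is a hedged illustration on synthetic data, not the authors' algorithm.

    ```python
    # Reduced-rank multi-target regression: a hard-rank analogue of the
    # low-rank regularization idea (synthetic data, illustrative only).
    import numpy as np

    def reduced_rank_regression(X, Y, rank):
        B_ols = np.linalg.lstsq(X, Y, rcond=None)[0]   # unconstrained fit
        _, _, Vt = np.linalg.svd(X @ B_ols, full_matrices=False)
        V_r = Vt[:rank].T                              # top right singular vectors
        return B_ols @ V_r @ V_r.T                     # rank-constrained connectivity

    rng = np.random.default_rng(1)
    X = rng.standard_normal((50, 8))                         # regulator expression
    B_true = rng.standard_normal((8, 2)) @ rng.standard_normal((2, 20))  # rank 2
    Y = X @ B_true + 0.1 * rng.standard_normal((50, 20))     # target expression
    B = reduced_rank_regression(X, Y, rank=2)
    print(np.linalg.matrix_rank(B))                          # -> 2
    ```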

  19. Target detection in GPR data using joint low-rank and sparsity constraints

    NASA Astrophysics Data System (ADS)

    Bouzerdoum, Abdesselam; Tivive, Fok Hing Chi; Abeynayake, Canicious

    2016-05-01

    In ground penetrating radars, background clutter, which comprises the signals backscattered from the rough, uneven ground surface and the background noise, impairs the visualization of buried objects and subsurface inspections. In this paper, a clutter mitigation method is proposed for target detection. The removal of background clutter is formulated as a constrained optimization problem to obtain a low-rank matrix and a sparse matrix. The low-rank matrix captures the ground surface reflections and the background noise, whereas the sparse matrix contains the target reflections. An optimization method based on split-Bregman algorithm is developed to estimate these two matrices from the input GPR data. Evaluated on real radar data, the proposed method achieves promising results in removing the background clutter and enhancing the target signature.
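
    The low-rank plus sparse split can be illustrated with a basic alternating scheme from the robust-PCA family, sketched below in Python/NumPy. The paper's actual solver is split-Bregman based; the thresholds here are common heuristics, not values from the paper.

    ```python
    # Naive alternating low-rank + sparse decomposition of a GPR B-scan.
    import numpy as np

    def svt(M, tau):                          # singular value soft-thresholding
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        return U @ np.diag(np.maximum(s - tau, 0)) @ Vt

    def lowrank_sparse(D, n_iter=100):
        """Split D into low-rank clutter L and sparse target component S."""
        m, n = D.shape
        lam = 1.0 / np.sqrt(max(m, n))        # standard RPCA sparsity weight
        mu = 0.25 * np.abs(D).mean()          # heuristic threshold scale
        L = np.zeros_like(D)
        S = np.zeros_like(D)
        for _ in range(n_iter):
            L = svt(D - S, mu)                                    # low-rank step
            S = np.sign(D - L) * np.maximum(np.abs(D - L) - lam * mu, 0)  # sparse step
        return L, S
    ```

    On a B-scan matrix, the nearly horizontal ground-surface banding tends to be absorbed by the low-rank factor, while the localized hyperbolic target signatures concentrate in the sparse one.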

  20. Comparison of Document Index Graph Using TextRank and HITS Weighting Method in Automatic Text Summarization

    NASA Astrophysics Data System (ADS)

    Hadyan, Fadhlil; Shaufiah; Arif Bijaksana, Moch.

    2017-01-01

    Automatic summarization helps a reader grasp the core information of a long text instantly by condensing it automatically. Many summarization systems have already been developed, but they still suffer from many problems. This project proposes a summarization method based on a document index graph. The method adapts the PageRank and HITS formulas, originally used to score web pages, to score the words and sentences of a text document. The expected outcome is a system that summarizes a single document by combining the document index graph with TextRank and HITS to automatically improve the quality of the summary.
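
    A minimal extractive TextRank sketch conveys the adaptation: sentences become graph nodes, word overlap supplies edge weights, and a PageRank-style power iteration scores each node. The similarity measure and damping factor below are standard defaults, not necessarily the ones used in this system.

    ```python
    # Tiny TextRank-style extractive summarizer (illustrative defaults).
    import numpy as np

    def textrank_summary(sentences, top_k=2, d=0.85, n_iter=50):
        bags = [set(s.lower().split()) for s in sentences]
        n = len(sentences)
        W = np.zeros((n, n))                  # word-overlap similarity graph
        for i in range(n):
            for j in range(n):
                if i != j:
                    W[i, j] = len(bags[i] & bags[j]) / (len(bags[i]) + len(bags[j]))
        row_sums = W.sum(axis=1, keepdims=True)
        P = np.divide(W, row_sums, out=np.zeros_like(W), where=row_sums > 0)
        r = np.ones(n) / n                    # PageRank power iteration
        for _ in range(n_iter):
            r = (1 - d) / n + d * (P.T @ r)
        return [sentences[i] for i in np.argsort(-r)[:top_k]]

    print(textrank_summary([
        "graph based ranking selects important sentences",
        "the ranking graph scores sentences like web pages",
        "my cat sat on the mat",
    ]))
    ```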

  1. Controlling misses and false alarms in a machine learning framework for predicting uniformity of printed pages

    NASA Astrophysics Data System (ADS)

    Nguyen, Minh Q.; Allebach, Jan P.

    2015-01-01

    In our previous work [1], we presented a block-based technique to analyze printed page uniformity both visually and metrically. The features learned from the models were then employed in a Support Vector Machine (SVM) framework to classify the pages into one of the two categories of acceptable and unacceptable quality. In this paper, we introduce a set of tools for machine learning in the assessment of printed page uniformity. This work is primarily targeted to the printing industry, specifically the ubiquitous laser, electrophotographic printer. We use features that are well-correlated with the rankings of expert observers to develop a novel machine learning framework that allows one to achieve the minimum "false alarm" rate, subject to a chosen "miss" rate. Surprisingly, most of the research that has been conducted on machine learning does not consider this framework. During the process of developing a new product, test engineers will print hundreds of test pages, which can be scanned and then analyzed by an autonomous algorithm. Among these pages, most may be of acceptable quality. The objective is to find the ones that are not. These will provide critically important information to systems designers, regarding issues that need to be addressed in improving the printer design. A "miss" is defined to be a page that is not of acceptable quality to an expert observer that the prediction algorithm declares to be a "pass". Misses are a serious problem, since they represent problems that will not be seen by the systems designers. On the other hand, "false alarms" correspond to pages that an expert observer would declare to be of acceptable quality, but which are flagged by the prediction algorithm as "fails". In a typical printer testing and development scenario, such pages would be examined by an expert, and found to be of acceptable quality after all. "False alarm" pages result in extra pages to be examined by expert observers, which increases labor cost. But "false alarms" are not nearly as catastrophic as "misses", which represent potentially serious problems that are never seen by the systems developers. This scenario motivates us to develop a machine learning framework that will achieve the minimum "false alarm" rate subject to a specified "miss" rate. In order to construct such a set of receiver operating characteristic (ROC) curves [2], we examine various tools for the prediction, ranging from an exhaustive search over the space of the nonlinear discriminants to a Cost-Sensitive SVM framework [3]. We then compare the curves gained from those methods. Our work shows promise for applying a standard framework to obtain a full ROC curve when it comes to tackling other machine learning problems in industry.
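
    One simple way to realize the "minimum false-alarm rate subject to a miss-rate budget" idea is to sweep the class weight of a cost-sensitive SVM and keep the cheapest operating point that respects the budget. The sketch below does this on synthetic stand-in data (and, for brevity, measures both rates on the training set, which a real study would not do).

    ```python
    # Sweep a cost-sensitive SVM to approximate "min false alarms s.t. miss <= 5%".
    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0, 1, (200, 2)),    # acceptable pages ("pass")
                   rng.normal(2, 1, (40, 2))])    # unacceptable pages ("fail")
    y = np.array([0] * 200 + [1] * 40)

    best = None
    for w in [1, 2, 5, 10, 20, 50]:               # penalty on missing a "fail"
        clf = SVC(kernel="rbf", class_weight={0: 1, 1: w}).fit(X, y)
        pred = clf.predict(X)
        miss = np.mean(pred[y == 1] == 0)         # fails declared "pass"
        fa = np.mean(pred[y == 0] == 1)           # passes declared "fail"
        if miss <= 0.05 and (best is None or fa < best[1]):
            best = (w, fa)
    print(best)                                   # (weight, false-alarm rate)
    ```

    Repeating the sweep over a grid of miss budgets traces out the kind of ROC curve the paper constructs.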

  2. Making sense of differing overdose mortality: contributions to improved understanding of European patterns.

    PubMed

    Waal, Helge; Gossop, Michael

    2014-01-01

    The European Monitoring Centre for Drugs and Drug Addiction, EMCDDA, publishes statistics for overdose deaths giving a European mean number, and ranking nations in a national 'league table' for overdose deaths. The interpretation of differing national levels of mortality is more problematic and more complex than is usually recognised. Different systems are used to compile mortality data and this causes problems for cross-national comparisons. Addiction behaviour can only be properly understood within its specific social and environmental ecology. Risk factors for overdose, such as the type of drug consumed, and the route of administration, are known to differ across countries. This paper describes problems associated with ranking and suggests how mortality data might be used in high-level countries aiming at reduction in the number of overdose deaths. Copyright © 2013 S. Karger AG, Basel.

  3. Model reduction method using variable-separation for stochastic saddle point problems

    NASA Astrophysics Data System (ADS)

    Jiang, Lijian; Li, Qiuqi

    2018-02-01

    In this paper, we consider a variable-separation (VS) method to solve the stochastic saddle point (SSP) problems. The VS method is applied to obtain the solution in tensor product structure for stochastic partial differential equations (SPDEs) in a mixed formulation. The aim of such a technique is to construct a reduced basis approximation of the solution of the SSP problems. The VS method attempts to get a low rank separated representation of the solution for SSP in a systematic enrichment manner. No iteration is performed at each enrichment step. In order to satisfy the inf-sup condition in the mixed formulation, we enrich the separated terms for the primal system variable at each enrichment step. For the SSP problems by regularization or penalty, we propose a more efficient variable-separation (VS) method, i.e., the variable-separation by penalty method. This can avoid further enrichment of the separated terms in the original mixed formulation. The computation of the variable-separation method decomposes into an offline phase and an online phase. A sparse low-rank tensor approximation method is used to significantly improve the online computation efficiency when the number of separated terms is large. For the applications of SSP problems, we present three numerical examples to illustrate the performance of the proposed methods.

  4. Precision of Sensitivity in the Design Optimization of Indeterminate Structures

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya N.; Pai, Shantaram S.; Hopkins, Dale A.

    2006-01-01

    Design sensitivity is central to most optimization methods. The analytical sensitivity expression for an indeterminate structural design optimization problem can be factored into a simple determinate term and a complicated indeterminate component. Sensitivity can be approximated by retaining only the determinate term and setting the indeterminate factor to zero. The optimum solution is reached with the approximate sensitivity. The central processing unit (CPU) time to solution is substantially reduced. The benefit that accrues from using the approximate sensitivity is quantified by solving a set of problems in a controlled environment. Each problem is solved twice: first using the closed-form sensitivity expression, then using the approximation. The problem solutions use the CometBoards testbed as the optimization tool with the integrated force method as the analyzer. The modification that may be required to use the stiffness method as the analysis tool in optimization is discussed. The design optimization problem of an indeterminate structure contains many dependent constraints because of the implicit relationship between stresses, as well as the relationship between the stresses and displacements. The design optimization process can become problematic because the implicit relationship reduces the rank of the sensitivity matrix. The proposed approximation restores the full rank and enhances the robustness of the design optimization method.

  5. Analysis of Criteria Influencing Contractor Selection Using TOPSIS Method

    NASA Astrophysics Data System (ADS)

    Alptekin, Orkun; Alptekin, Nesrin

    2017-10-01

    Selection of the most suitable contractor is an important process in public construction projects. This process is a major decision which may influence the progress and success of a construction project. Improper selection of contractors may lead to problems such as poor work quality and delays in project duration. Especially in construction projects for public buildings, the proper choice of contractor benefits the public institution. Public procurement processes differ across countries according to their political, social and economic characteristics. In Turkey, Turkish Public Procurement Law PPL 4734 is the main law regulating the procurement of public buildings. According to PPL 4734, public construction administrators must contract with the lowest bidder who meets the minimum requirements of the prequalification criteria. Because of these restrictive provisions, public administrators are not free to select the most suitable contractor. The lowest-bid method does not enable public construction administrators to select the most qualified contractor, and they have realised that selecting a contractor based on the lowest bid alone is inadequate and may lead to project failure through time delays and poor quality standards. In order to evaluate the overall efficiency of a project, it is necessary to identify selection criteria. This study focuses on identifying the importance of criteria other than the lowest bid in the contractor selection process of PPL 4734. A survey was conducted among the staff of the Department of Construction Works of Eskisehir Osmangazi University. According to the TOPSIS (Technique for Order Preference by Similarity to the Ideal Solution) analysis results, termination of construction work in previous tenders is the most important of the 12 criteria considered. The lowest-bid criterion ranks fifth.
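
    For reference, the core TOPSIS computation is short. The sketch below ranks three hypothetical contractors on three benefit criteria with invented scores and weights, following the standard normalize, weight, and distance-to-ideal steps.

    ```python
    # TOPSIS on an invented contractor decision matrix (benefit criteria only).
    import numpy as np

    X = np.array([[7.0, 9.0, 6.0],            # contractors x criteria scores
                  [8.0, 6.0, 7.0],
                  [6.0, 8.0, 9.0]])
    w = np.array([0.5, 0.3, 0.2])             # criterion weights

    V = w * X / np.linalg.norm(X, axis=0)     # weighted normalized matrix
    ideal, anti = V.max(axis=0), V.min(axis=0)
    d_pos = np.linalg.norm(V - ideal, axis=1) # distance to ideal solution
    d_neg = np.linalg.norm(V - anti, axis=1)  # distance to anti-ideal solution
    closeness = d_neg / (d_pos + d_neg)       # relative closeness to ideal
    print(np.argsort(-closeness))             # best contractor first
    ```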

  6. A method for minimum risk portfolio optimization under hybrid uncertainty

    NASA Astrophysics Data System (ADS)

    Egorova, Yu E.; Yazenin, A. V.

    2018-03-01

    In this paper, we investigate a minimum risk portfolio model under hybrid uncertainty when the profitability of financial assets is described by fuzzy random variables. According to Feng, the variance of a portfolio is defined as a crisp value. To aggregate fuzzy information the weakest (drastic) t-norm is used. We construct an equivalent stochastic problem of the minimum risk portfolio model and specify the stochastic penalty method for solving it.

  7. Student Difficulties in Analyzing Thin-Film Interference

    NASA Astrophysics Data System (ADS)

    Newburgh, Ronald; Goodale, Douglass

    2009-04-01

    A question we posed in a recent final examination has uncovered a fundamental difficulty for students in understanding destructive interference. The problem stated that glass of index n₃ was coated with a thin film of a substance with index n₂. The question then asked the student to calculate (a) the minimum coating thickness for maximum transmission into the glass and (b) the minimum thickness for minimum transmission into the glass, in both cases for a given wavelength. Questions from students during and after the examination showed that many had a problem in relating the interference to the transmission. We finally concluded that the source of confusion lay with an almost universally used figure in teaching interference in thin films, as well as the omission of the role of the electric field in reflection.

  8. Path planning for mobile robot using the novel repulsive force algorithm

    NASA Astrophysics Data System (ADS)

    Sun, Siyue; Yin, Guoqiang; Li, Xueping

    2018-01-01

    This paper proposes a new type of repulsive force algorithm to solve the local minimum and unreachable-goal problems of the classic Artificial Potential Field (APF) method. A Gaussian function of the distance between the robot and the target is added to the traditional repulsive force, solving the problem of an unreachable goal when an obstacle lies nearby; a variable coefficient is added to the repulsive force component to rescale it, which solves the local minimum problem when the robot, the obstacle and the target lie on the same line. The effectiveness of the algorithm is verified by MATLAB simulation and on an actual mobile robot platform.
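
    The sketch below illustrates the shape of the modified repulsion on a point robot: a standard APF attractive term plus a repulsive term scaled by a Gaussian factor of the robot-goal distance, so the repulsion fades as the goal is approached. The gains and Gaussian width are illustrative, not the paper's values.

    ```python
    # APF force with a Gaussian-attenuated repulsive term (illustrative gains).
    import numpy as np

    def apf_force(robot, goal, obstacle,
                  k_att=1.0, k_rep=50.0, rho0=3.0, sigma=2.0):
        f_att = k_att * (goal - robot)            # attractive force toward goal
        diff = robot - obstacle
        rho = np.linalg.norm(diff)                # distance to obstacle
        if rho >= rho0:                           # outside the obstacle's influence
            return f_att
        # Gaussian factor of the robot-goal distance: repulsion vanishes at the
        # goal, so a goal sitting next to an obstacle remains reachable
        gauss = 1.0 - np.exp(-np.sum((goal - robot) ** 2) / (2 * sigma ** 2))
        f_rep = k_rep * (1 / rho - 1 / rho0) / rho ** 2 * (diff / rho) * gauss
        return f_att + f_rep

    print(apf_force(np.array([0.0, 0.0]), np.array([5.0, 0.0]),
                    np.array([2.0, 0.5])))
    ```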

  9. The development of human factors research objectives for civil aviation

    NASA Technical Reports Server (NTRS)

    Post, T. J.

    1970-01-01

    Human factors research programs which would support civil aviation and be suitable for accomplishment by NASA research centers are identified. Aviation problems formed the basis for the research program recommendations and, accordingly, problems were identified, ranked and briefly defined in an informal report to the project monitor and other cognizant NASA personnel. The sources for this problem foundation were literature reviews and extensive interviews with NASA and non-NASA personnel. An overview of these findings is presented.

  10. Installation Restoration Program. Phase I. Records Search, Hazardous Materials Disposal Sites, Griffiss AFB, New York.

    DTIC Science & Technology

    1981-07-01

    Indexed excerpt (table of contents): disposal methods; evaluation of past and present waste disposal facilities (landfills, dry wells); rating of waste disposal sites; problems identified at base landfills; priority ranking of potential contamination sources; rating forms for waste disposal sites; rating factor system; executive summary.

  11. Analysis of the Hessian for Inverse Scattering Problems. Part 3. Inverse Medium Scattering of Electromagnetic Waves in Three Dimensions

    DTIC Science & Technology

    2012-08-01

    An implication of the compactness of the Hessian is that, for small data noise and model error, the discrete Hessian can be approximated by a low-rank matrix, which in turn enables fast solution of the inverse problem. For Gaussian data noise and model error, the posterior probability distribution has covariance given by the inverse of the Hessian of the negative log likelihood function.

  12. Anxiety and Art Therapy: Treatment in the Public Eye

    ERIC Educational Resources Information Center

    Chambala, Amanda

    2008-01-01

    Anxiety is one of the most common mental health problems in the United States today. It is the number one mental health problem among American women and ranks as a close second to substance abuse among men. In fact, alcoholism is the only other disorder that affects a greater number of people throughout the county. Fifteen percent of the American…

  13. 76 FR 38431 - Submission for OMB Review; Comment Request

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-06-30

    ... Commission's minimum performance standards regarding registered transfer agents, and (2) to assure that issuers are aware of certain problems and poor performances with respect to the transfer agents that are... failure to comply with the Commission's minimum performance standards then the issuer will be unable to...

  14. 75 FR 60333 - Hazardous Material; Miscellaneous Packaging Amendments

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-09-30

    ... minimum thickness requirements for remanufactured steel and plastic drums; (2) reinstate the previous... communication problem for emergency responders in that it may interfere with them discovering a large amount of... prescribed in Sec. 178.2(c). D. Minimum Thickness Requirement for Remanufactured Steel and Plastic Drums...

  15. High-dimensional statistical inference: From vector to matrix

    NASA Astrophysics Data System (ADS)

    Zhang, Anru

    Statistical inference for sparse signals or low-rank matrices in high-dimensional settings is of significant interest in a range of contemporary applications. It has attracted significant recent attention in many fields including statistics, applied mathematics and electrical engineering. In this thesis, we consider several problems, including sparse signal recovery (compressed sensing under restricted isometry) and low-rank matrix recovery (matrix recovery via rank-one projections and structured matrix completion). The first part of the thesis discusses compressed sensing and affine rank minimization in both noiseless and noisy cases and establishes sharp restricted isometry conditions for sparse signal and low-rank matrix recovery. The analysis relies on a key technical tool which represents points in a polytope by convex combinations of sparse vectors. The technique is elementary yet leads to sharp results. It is shown that, in compressed sensing, $\delta_k^A < 1/3$, $\delta_k^A + \theta_{k,k}^A < 1$, or $\delta_{tk}^A < \sqrt{(t-1)/t}$ for any given constant $t \ge 4/3$ guarantees the exact recovery of all $k$-sparse signals in the noiseless case through constrained $\ell_1$ minimization, and similarly in affine rank minimization $\delta_r^M < 1/3$, $\delta_r^M + \theta_{r,r}^M < 1$, or $\delta_{tr}^M < \sqrt{(t-1)/t}$ ensures the exact reconstruction of all matrices with rank at most $r$ in the noiseless case via constrained nuclear norm minimization. Moreover, for any $\epsilon > 0$, $\delta_k^A < 1/3 + \epsilon$, $\delta_k^A + \theta_{k,k}^A < 1 + \epsilon$, or $\delta_{tk}^A < \sqrt{(t-1)/t} + \epsilon$ are not sufficient to guarantee the exact recovery of all $k$-sparse signals for large $k$. A similar result also holds for matrix recovery. In addition, the conditions $\delta_k^A < 1/3$, $\delta_k^A + \theta_{k,k}^A < 1$, $\delta_{tk}^A < \sqrt{(t-1)/t}$ and $\delta_r^M < 1/3$, $\delta_r^M + \theta_{r,r}^M < 1$, $\delta_{tr}^M < \sqrt{(t-1)/t}$ are also shown to be sufficient respectively for stable recovery of approximately sparse signals and low-rank matrices in the noisy case. For the second part of the thesis, we introduce a rank-one projection model for low-rank matrix recovery and propose a constrained nuclear norm minimization method for stable recovery of low-rank matrices in the noisy case. The procedure is adaptive to the rank and robust against small perturbations. Both upper and lower bounds for the estimation accuracy under the Frobenius norm loss are obtained. The proposed estimator is shown to be rate-optimal under certain conditions. The estimator is easy to implement via convex programming and performs well numerically. The techniques and main results developed in the chapter also have implications for other related statistical problems. An application to estimation of spiked covariance matrices from one-dimensional random projections is considered. The results demonstrate that it is still possible to accurately estimate the covariance matrix of a high-dimensional distribution based only on one-dimensional projections. For the third part of the thesis, we consider another setting of low-rank matrix completion. Current literature on matrix completion focuses primarily on independent sampling models under which the individual observed entries are sampled independently. Motivated by applications in genomic data integration, we propose a new framework of structured matrix completion (SMC) to treat structured missingness by design. Specifically, our proposed method aims at efficient matrix recovery when a subset of the rows and columns of an approximately low-rank matrix are observed.
We provide theoretical justification for the proposed SMC method and derive lower bound for the estimation errors, which together establish the optimal rate of recovery over certain classes of approximately low-rank matrices. Simulation studies show that the method performs well in finite sample under a variety of configurations. The method is applied to integrate several ovarian cancer genomic studies with different extent of genomic measurements, which enables us to construct more accurate prediction rules for ovarian cancer survival.

  16. Sex differences in academic advancement. Results of a national study of pediatricians.

    PubMed

    Kaplan, S H; Sullivan, L M; Dukes, K A; Phillips, C F; Kelch, R P; Schaller, J G

    1996-10-24

    Although the numbers of women in training and in entry-level academic positions in medicine have increased substantially in recent years, the proportion of women in senior faculty positions has not changed. We conducted a study to determine the contributions of background and training, academic productivity, distribution of work time, institutional support, career attitudes, and family responsibilities to sex differences in academic rank and salary among faculty members of academic pediatric departments. We conducted a cross-sectional survey of all salaried physicians in 126 academic departments of pediatrics in the United States in January 1992. Of the 6441 questionnaires distributed, 4285 (67 percent) were returned. The sample was representative of U.S. pediatric faculty members. Multivariate models were used to relate academic rank and salary to 16 independent variables. Significantly fewer women than men achieved the rank of associate professor or higher. For both men and women, higher salaries and ranks were related to greater academic productivity (more publications and grants), more hours worked, more institutional support of research, greater overall career satisfaction, and fewer career problems. Less time spent in teaching and patient care was related to greater academic productivity for both sexes. Women in the low ranks were less academically productive and spent significantly more time in teaching and patient care than men in those ranks. Adjustment for all independent variables eliminated sex differences in academic rank but not in salary. Lower rates of academic productivity, more time spent in teaching and patient care and less time spent in research, less institutional support for research, and lower rates of specialization in highly paid subspecialties contributed to the lower ranks and salaries of female faculty members.

  17. Deep Multimodal Distance Metric Learning Using Click Constraints for Image Ranking.

    PubMed

    Yu, Jun; Yang, Xiaokang; Gao, Fei; Tao, Dacheng

    2017-12-01

    How do we retrieve images accurately? Also, how do we rank a group of images precisely and efficiently for specific queries? These problems are critical for researchers and engineers developing a novel image search engine. First, it is important to obtain an appropriate description that effectively represents the images. In this paper, multimodal features are considered for describing images. The images' unique properties are reflected by visual features, which are correlated to each other. However, semantic gaps always exist between images' visual features and their semantics. Therefore, we utilize click features to reduce the semantic gap. The second key issue is learning an appropriate distance metric to combine these multimodal features. This paper develops a novel deep multimodal distance metric learning (Deep-MDML) method. A structured ranking model is adopted to utilize both visual and click features in distance metric learning (DML). Specifically, images and their related ranking results are first collected to form the training set. Multimodal features, including click and visual features, are collected with these images. Next, a group of autoencoders is applied to initially obtain a distance metric in different visual spaces, and an MDML method is used to assign optimal weights for the different modalities. Next, we conduct alternating optimization to train the ranking model, which is used for the ranking of new queries with click features. Compared with existing image ranking methods, the proposed method adopts a new ranking model to use multimodal features, including click features and visual features, in DML. We conducted experiments analyzing the proposed Deep-MDML on two benchmark data sets, and the results validate the effectiveness of the method.

  18. Unified method of knowledge representation in the evolutionary artificial intelligence systems

    NASA Astrophysics Data System (ADS)

    Bykov, Nickolay M.; Bykova, Katherina N.

    2003-03-01

    The evolution of artificial intelligence systems, driven by the growing complexity of their application domains and by scientific progress, has led to a diversification of the methods and algorithms these systems use to represent and manipulate knowledge. For this reason it is often difficult to design effective methods of knowledge discovery and manipulation for such systems. In this work the authors propose a method for the unified representation of a system's knowledge about objects of the external world via a rank transformation of their descriptions, made in different feature spaces: deterministic, probabilistic, fuzzy and others. A proof is presented that the information about the rank configuration of the object states in the feature space is sufficient for decision making. It is shown that the geometrical and combinatorial models of the set of rank configurations can be introduced through a group of an incidence system, which allows the information about them to be stored in a compressed form. A method for describing rank configurations by a DRP code (distance-rank-preserving code) is proposed, and the questions of its completeness, information capacity, noise immunity and privacy are reviewed. It is shown that the capacity of a transmission channel for such a representation exceeds unity, since the code words carry information both about the object states and about the distance ranks between them. An efficient data clustering algorithm for identifying object states, based on this code, is described. Representing knowledge with rank configurations makes it possible to unify and simplify decision-making algorithms by performing logical operations on DRP code words. Examples of the proposed clustering technique operating on a given sample set, the rank configurations of the resulting clusters and their DRP codes are presented.

  19. DrugE-Rank: improving drug–target interaction prediction of new candidate drugs or targets by ensemble learning to rank

    PubMed Central

    Yuan, Qingjun; Gao, Junning; Wu, Dongliang; Zhang, Shihua; Mamitsuka, Hiroshi; Zhu, Shanfeng

    2016-01-01

    Motivation: Identifying drug–target interactions is an important task in drug discovery. To reduce the heavy time and financial cost of experimental approaches, many computational methods have been proposed. Although these approaches have used many different principles, their performance is far from satisfactory, especially in predicting drug–target interactions of new candidate drugs or targets. Methods: Approaches based on machine learning for this problem can be divided into two types: feature-based and similarity-based methods. Learning to rank is the most powerful technique in the feature-based methods. Similarity-based methods are well accepted, due to their idea of connecting the chemical and genomic spaces, represented by drug and target similarities, respectively. We propose a new method, DrugE-Rank, to improve the prediction performance by nicely combining the advantages of the two different types of methods. That is, DrugE-Rank uses LTR, for which multiple well-known similarity-based methods can be used as components of ensemble learning. Results: The performance of DrugE-Rank is thoroughly examined by three main experiments using data from DrugBank: (i) cross-validation on FDA (US Food and Drug Administration) approved drugs before March 2014; (ii) independent test on FDA approved drugs after March 2014; and (iii) independent test on FDA experimental drugs. Experimental results show that DrugE-Rank outperforms competing methods significantly, especially achieving more than 30% improvement in Area under Prediction Recall curve for FDA approved new drugs and FDA experimental drugs. Availability: http://datamining-iip.fudan.edu.cn/service/DrugE-Rank Contact: zhusf@fudan.edu.cn Supplementary information: Supplementary data are available at Bioinformatics online. PMID:27307615

  20. DrugE-Rank: improving drug-target interaction prediction of new candidate drugs or targets by ensemble learning to rank.

    PubMed

    Yuan, Qingjun; Gao, Junning; Wu, Dongliang; Zhang, Shihua; Mamitsuka, Hiroshi; Zhu, Shanfeng

    2016-06-15

    Identifying drug-target interactions is an important task in drug discovery. To reduce the heavy time and financial cost of experimental approaches, many computational methods have been proposed. Although these approaches have used many different principles, their performance is far from satisfactory, especially in predicting drug-target interactions of new candidate drugs or targets. Approaches based on machine learning for this problem can be divided into two types: feature-based and similarity-based methods. Learning to rank is the most powerful technique in the feature-based methods. Similarity-based methods are well accepted, due to their idea of connecting the chemical and genomic spaces, represented by drug and target similarities, respectively. We propose a new method, DrugE-Rank, to improve the prediction performance by nicely combining the advantages of the two different types of methods. That is, DrugE-Rank uses LTR, for which multiple well-known similarity-based methods can be used as components of ensemble learning. The performance of DrugE-Rank is thoroughly examined by three main experiments using data from DrugBank: (i) cross-validation on FDA (US Food and Drug Administration) approved drugs before March 2014; (ii) independent test on FDA approved drugs after March 2014; and (iii) independent test on FDA experimental drugs. Experimental results show that DrugE-Rank outperforms competing methods significantly, especially achieving more than 30% improvement in Area under Prediction Recall curve for FDA approved new drugs and FDA experimental drugs. Availability: http://datamining-iip.fudan.edu.cn/service/DrugE-Rank Contact: zhusf@fudan.edu.cn Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.

  1. Groupwise registration of MR brain images with tumors.

    PubMed

    Tang, Zhenyu; Wu, Yihong; Fan, Yong

    2017-08-04

    A novel groupwise image registration framework is developed for registering MR brain images with tumors. Our method iteratively estimates a normal-appearance counterpart for each tumor image to be registered and constructs a directed graph (digraph) of normal-appearance images to guide the groupwise image registration. Particularly, our method maps each tumor image to its normal appearance counterpart by identifying and inpainting brain tumor regions with intensity information estimated using a low-rank plus sparse matrix decomposition based image representation technique. The estimated normal-appearance images are groupwisely registered to a group center image guided by a digraph of images so that the total length of 'image registration paths' is minimized, and then the original tumor images are warped to the group center image using the resulting deformation fields. We have evaluated our method based on both simulated and real MR brain tumor images. The registration results were evaluated with overlap measures of corresponding brain regions and average entropy of image intensity information, and Wilcoxon signed rank tests were adopted to compare different methods with respect to their regional overlap measures. Compared with a groupwise image registration method that is applied to normal-appearance images estimated using the traditional low-rank plus sparse matrix decomposition based image inpainting, our method achieved higher image registration accuracy with statistical significance (p = 7.02 × 10⁻⁹).

  2. The Steiner Multigraph Problem: Wildlife corridor design for multiple species

    Treesearch

    Katherine J. Lai; Carla P. Gomes; Michael K. Schwartz; Kevin S. McKelvey; David E. Calkin; Claire A. Montgomery

    2011-01-01

    The conservation of wildlife corridors between existing habitat preserves is important for combating the effects of habitat loss and fragmentation facing species of concern. We introduce the Steiner Multigraph Problem to model the problem of minimum-cost wildlife corridor design for multiple species with different landscape requirements. This problem can also model...

  3. Application of genetic algorithms in nonlinear heat conduction problems.

    PubMed

    Kadri, Muhammad Bilal; Khan, Waqar A

    2014-01-01

    Genetic algorithms are employed to optimize dimensionless temperature in nonlinear heat conduction problems. Three common geometries are selected for the analysis and the concept of minimum entropy generation is used to determine the optimum temperatures under the same constraints. The thermal conductivity is assumed to vary linearly with temperature while internal heat generation is assumed to be uniform. The dimensionless governing equations are obtained for each selected geometry and the dimensionless temperature distributions are obtained using MATLAB. It is observed that GA gives the minimum dimensionless temperature in each selected geometry.
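
    A generic real-coded genetic algorithm of the kind employed here fits in a few lines. In the sketch below a toy quadratic stands in for the entropy-generation objective, and the truncation selection, blend crossover and Gaussian mutation operators are common textbook choices rather than the paper's exact settings.

    ```python
    # Minimal real-coded GA minimizing a stand-in objective (illustrative).
    import numpy as np

    rng = np.random.default_rng(42)

    def objective(pop):                     # stand-in for entropy generation
        return np.sum((pop - 0.3) ** 2, axis=1)

    pop = rng.uniform(0, 1, (40, 2))        # population of candidate designs
    for _ in range(100):
        order = np.argsort(objective(pop))
        parents = pop[order[:20]]                         # truncation selection
        mates = parents[rng.permutation(20)]
        alpha = rng.uniform(size=(20, 1))
        children = alpha * parents + (1 - alpha) * mates  # blend crossover
        children += rng.normal(0, 0.05, children.shape)   # Gaussian mutation
        pop = np.vstack([parents, np.clip(children, 0, 1)])
    print(pop[np.argmin(objective(pop))])   # -> approximately [0.3, 0.3]
    ```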

  4. Minimum Bayes risk image correlation

    NASA Technical Reports Server (NTRS)

    Minter, T. C., Jr.

    1980-01-01

    In this paper, the problem of designing a matched filter for image correlation will be treated as a statistical pattern recognition problem. It is shown that, by minimizing a suitable criterion, a matched filter can be estimated which approximates the optimum Bayes discriminant function in a least-squares sense. It is well known that the use of the Bayes discriminant function in target classification minimizes the Bayes risk, which in turn directly minimizes the probability of a false fix. A fast Fourier implementation of the minimum Bayes risk correlation procedure is described.
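
    The fast Fourier implementation alluded to above amounts to evaluating the correlation in the frequency domain. The sketch below demonstrates the mechanics with a template cut from a synthetic scene; here the filter is the plain template, whereas the paper's matched filter is estimated so as to approximate the Bayes discriminant function.

    ```python
    # FFT-based cross-correlation locating a template in a scene.
    import numpy as np

    def fft_correlate(scene, template):
        f_scene = np.fft.fft2(scene)
        f_tmpl = np.fft.fft2(template, s=scene.shape)   # zero-pad the template
        return np.real(np.fft.ifft2(f_scene * np.conj(f_tmpl)))

    rng = np.random.default_rng(3)
    scene = rng.standard_normal((64, 64))
    template = scene[20:28, 30:38].copy()               # the embedded "target"
    corr = fft_correlate(scene, template)
    print(np.unravel_index(np.argmax(corr), corr.shape))  # -> (20, 30)
    ```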

  5. The Wilkins Institute for Science Education: A science-centered magnet school

    NASA Astrophysics Data System (ADS)

    Wilkins, Gary Dean

    The problem that this study addressed is that excellent science instruction is not consistently provided by traditional public schools. This study utilized a review of the literature, interviews, surveys, and focus groups. This study provides the basis for the proposed design of a school that can be the solution to the problem. Conducted in 1995, the Third International Mathematics and Science Study (TIMSS) showed that our efforts to improve U.S. education have had some successes, but overall have been ineffective in raising U.S. performance from a middle-of-the-pack position. At the end of secondary schooling, which in the U.S. is 12 th grade, U.S. performance was among the lowest in both science and math, including our most advanced students (National Center for Educational Statistics, 2001). For this research project I surveyed 412 students and 218 parents or guardians. I conducted interviews and focus groups with 10 participants who were science teachers or educators, and 10 participants who were scientists. The surveys presented 12 factors, believed to be valued as part of an excellent science education, which were security, social activities, sports, computers, reading and writing, hands-on equipment, industry support, and cafeteria. The survey participants rated each factor from most to least important. The focus groups and the interviews covered science education in general, as well as these same 12 topics. Students and parents agreed that qualified instructors is the item that is most important to provide quality science instruction. Students and parents disagreed most on the item reading and writing, which students ranked 9th, but parents ranked 2nd, a difference of 7 rankings. Considering only the item that was ranked number 1, students identified sports most often as most important, but parents disagreed and ranked this 8th, a difference of 7 ranks. After this dissertation is completed, it is my intent to benefit students with the implementation of the Wilkins Institute for Science Education (WISE), a model K--12 school dedicated to the field of science. The school will be named for my father, George Wilkins, who made outstanding contributions to the field of aircraft engineering.

  6. Droplet squeezing through a narrow constriction: Minimum impulse and critical velocity

    NASA Astrophysics Data System (ADS)

    Zhang, Zhifeng; Drapaca, Corina; Chen, Xiaolin; Xu, Jie

    2017-07-01

    Models of a droplet passing through narrow constrictions have wide applications in science and engineering. In this paper, we report our findings on the minimum impulse (momentum change) of pushing a droplet through a narrow circular constriction. The existence of this minimum impulse is mathematically derived and numerically verified. The minimum impulse happens at a critical velocity when the time-averaged Young-Laplace pressure balances the total minor pressure loss in the constriction. Finally, numerical simulations are conducted to verify these concepts. These results could be relevant to problems of energy optimization and studies of chemical and biomedical systems.
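
    Stated as a worked relation (with assumed symbols, since the paper's notation is not reproduced here): modeling the total minor loss with a loss coefficient K, the balance that defines the critical velocity reads

    ```latex
    \Delta p_{\mathrm{YL}} = K\,\frac{\rho\,u^{*\,2}}{2}
    \qquad\Longrightarrow\qquad
    u^{*} = \sqrt{\frac{2\,\Delta p_{\mathrm{YL}}}{K\rho}},
    ```

    where Δp_YL is the time-averaged Young-Laplace pressure of the squeezed droplet, ρ the fluid density, and u* the velocity at which the impulse of pushing the droplet through the constriction is minimized.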

  7. Statistical indicators of collective behavior and functional clusters in gene networks of yeast

    NASA Astrophysics Data System (ADS)

    Živković, J.; Tadić, B.; Wick, N.; Thurner, S.

    2006-03-01

    We analyze gene expression time-series data of yeast (S. cerevisiae) measured along two full cell-cycles. We quantify these data by using q-exponentials, gene expression ranking and a temporal mean-variance analysis. We construct gene interaction networks based on correlation coefficients and study the formation of the corresponding giant components and minimum spanning trees. By coloring genes according to their cell function we find functional clusters in the correlation networks and functional branches in the associated trees. Our results suggest that a percolation point of functional clusters can be identified on these gene expression correlation networks.
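
    The network construction described above is straightforward to reproduce in outline. The sketch below builds a correlation-based distance matrix from toy expression data and extracts its minimum spanning tree with SciPy; the distance convention (one minus absolute correlation) is a common choice, not necessarily the authors'.

    ```python
    # Correlation network and minimum spanning tree from toy expression data.
    import numpy as np
    from scipy.sparse.csgraph import minimum_spanning_tree

    rng = np.random.default_rng(7)
    expr = rng.standard_normal((20, 12))      # 20 genes x 12 time points
    C = np.corrcoef(expr)                     # gene-gene correlation matrix
    D = 1.0 - np.abs(C)                       # correlation -> distance
    np.fill_diagonal(D, 0.0)
    mst = minimum_spanning_tree(D)            # sparse matrix of tree edges
    print(mst.nnz, "edges in the spanning tree")   # -> 19 for 20 genes
    ```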

  8. Team Dynamics. Implications for Coaching.

    ERIC Educational Resources Information Center

    Freishlag, Jerry

    1985-01-01

    A recent survey of coaches ranks team cohesion as the most critical problem coaches face. Optimal interpersonal relationships among athletes and their coaches can maximize collective performance. Team dynamics are discussed and coaching tips are provided. (MT)

  9. Seasonal differences in climate in the Chianti region of Tuscany and the relationship to vintage wine quality

    NASA Astrophysics Data System (ADS)

    Salinger, Michael James; Baldi, Marina; Grifoni, Daniele; Jones, Greg; Bartolini, Giorgio; Cecchi, Stefano; Messeri, Gianni; Dalla Marta, Anna; Orlandini, Simone; Dalu, Giovanni A.; Maracchi, Gianpiero

    2015-12-01

    Climatic factors and weather type frequencies affecting Tuscany are examined to discriminate between vintages ranked into the upper- and lower-quartile years as a consensus from six rating sources of Chianti wine during the period 1980 to 2011. These rankings represent a considerable improvement on any individual publisher ranking, displaying an overall good consensus for the best and worst vintage years. Climate variables are calculated and weather type frequencies are matched between the eight highest and the eight lowest ranked vintages in the main phenological phases of Sangiovese grapevine. Results show that higher heat units; mean, maximum and minimum temperature; and more days with temperature above 35 °C were the most important discriminators between good- and poor-quality vintages in the spring and summer growth phases, with heat units important during ripening. Precipitation influences on vintage quality are significant only during veraison where low precipitation amounts and precipitation days are important for better quality vintages. In agreement with these findings, weather type analysis shows good vintages are favoured by weather type 4 (more anticyclones over central Mediterranean Europe (CME)), giving warm dry growing season conditions. Poor vintages all relate to higher frequencies of either weather type 3, which, by producing perturbation crossing CME, favours cooler and wetter conditions, and/or weather type 7 which favours cold dry continental air masses from the east and north east over CME. This approach shows there are important weather type frequency differences between good- and poor-quality vintages. Trend analysis shows that changes in weather type frequencies are more important than any due to global warming.

  10. Seasonal differences in climate in the Chianti region of Tuscany and the relationship to vintage wine quality.

    PubMed

    Salinger, Michael James; Baldi, Marina; Grifoni, Daniele; Jones, Greg; Bartolini, Giorgio; Cecchi, Stefano; Messeri, Gianni; Dalla Marta, Anna; Orlandini, Simone; Dalu, Giovanni A; Maracchi, Gianpiero

    2015-12-01

    Climatic factors and weather type frequencies affecting Tuscany are examined to discriminate between vintages ranked into the upper- and lower-quartile years as a consensus from six rating sources of Chianti wine during the period 1980 to 2011. These rankings represent a considerable improvement on any individual publisher ranking, displaying an overall good consensus for the best and worst vintage years. Climate variables are calculated and weather type frequencies are matched between the eight highest and the eight lowest ranked vintages in the main phenological phases of Sangiovese grapevine. Results show that higher heat units; mean, maximum and minimum temperature; and more days with temperature above 35 °C were the most important discriminators between good- and poor-quality vintages in the spring and summer growth phases, with heat units important during ripening. Precipitation influences on vintage quality are significant only during veraison where low precipitation amounts and precipitation days are important for better quality vintages. In agreement with these findings, weather type analysis shows good vintages are favoured by weather type 4 (more anticyclones over central Mediterranean Europe (CME)), giving warm dry growing season conditions. Poor vintages all relate to higher frequencies of either weather type 3, which, by producing perturbation crossing CME, favours cooler and wetter conditions, and/or weather type 7 which favours cold dry continental air masses from the east and north east over CME. This approach shows there are important weather type frequency differences between good- and poor-quality vintages. Trend analysis shows that changes in weather type frequencies are more important than any due to global warming.

  11. Specific Features of Executive Dysfunction in Alzheimer-Type Mild Dementia Based on Computerized Cambridge Neuropsychological Test Automated Battery (CANTAB) Test Results.

    PubMed

    Kuzmickienė, Jurgita; Kaubrys, Gintaras

    2016-10-08

    BACKGROUND The primary manifestation of Alzheimer's disease (AD) is decline in memory. Dysexecutive symptoms have a tremendous impact on functional activities and quality of life. Data regarding frontal-executive dysfunction in mild AD are controversial. The aim of this study was to assess the presence and specific features of executive dysfunction in mild AD based on Cambridge Neuropsychological Test Automated Battery (CANTAB) results. MATERIAL AND METHODS Fifty newly diagnosed, treatment-naïve, mild, late-onset AD patients (MMSE ≥20, AD group) and 25 control subjects (CG group) were recruited in this prospective, cross-sectional study. The CANTAB tests CRT, SOC, PAL, and SWM were used for in-depth cognitive assessment. Comparisons were performed using the t test or Mann-Whitney U test, as appropriate. Correlations were evaluated by Pearson r or Spearman R. Statistical significance was set at p<0.05. RESULTS The AD and CG groups did not differ according to age, education, gender, or depression. Few differences were found between groups in the SOC test for performance measures: Mean moves (minimum 3 moves): AD (Rank Sum=2227), CG (Rank Sum=623), p<0.001. However, all SOC test time measures differed significantly between groups: SOC Mean subsequent thinking time (4 moves): AD (Rank Sum=2406), CG (Rank Sum=444), p<0.001. Correlations were weak between executive function (SOC) and episodic/working memory (PAL, SWM) (R=0.01-0.38) or attention/psychomotor speed (CRT) (R=0.02-0.37). CONCLUSIONS Frontal-executive functions are impaired in mild AD patients. Executive dysfunction is highly prominent in time measures, but minimal in performance measures. Executive disorders do not correlate with a decline in episodic and working memory or psychomotor speed in mild AD.

  12. Effect of Weight Transfer on a Vehicle's Stopping Distance.

    ERIC Educational Resources Information Center

    Whitmire, Daniel P.; Alleman, Timothy J.

    1979-01-01

    An analysis of the minimum stopping distance problem is presented taking into account the effect of weight transfer on nonskidding vehicles and front- or rear-wheels-skidding vehicles. Expressions for the minimum stopping distances are given in terms of vehicle geometry and the coefficients of friction. (Author/BB)

  13. Minimum Disclosure Counting for the Alternative Vote

    NASA Astrophysics Data System (ADS)

    Wen, Roland; Buckland, Richard

    Although there is a substantial body of work on preventing bribery and coercion of voters in cryptographic election schemes for plurality electoral systems, there are few attempts to construct such schemes for preferential electoral systems. The problem is that preferential systems are prone to bribery and coercion via subtle signature attacks during the counting. We introduce a minimum disclosure counting scheme for the alternative vote preferential system. Minimum disclosure provides protection from signature attacks by revealing only the winning candidate.
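
    For readers unfamiliar with the electoral system itself, the sketch below tallies a plain, non-cryptographic alternative vote count; it illustrates only the elimination rounds whose intermediate tallies the minimum disclosure scheme is designed to conceal. The ballots are made up, and ties and exhausted ballots are handled naively.

```python
# Minimal instant-runoff (alternative vote) counter. This is an
# illustration of the tallying process only, not the paper's
# cryptographic scheme; ballots and candidate names are hypothetical.
from collections import Counter

def irv_winner(ballots):
    """ballots: list of candidate lists, most-preferred first."""
    candidates = {c for b in ballots for c in b}
    while True:
        # Count each ballot for its highest-ranked remaining candidate.
        tally = Counter(next(c for c in b if c in candidates)
                        for b in ballots if any(c in candidates for c in b))
        total = sum(tally.values())
        leader, votes = tally.most_common(1)[0]
        if votes * 2 > total or len(candidates) == 1:
            return leader
        # Eliminate the candidate with the fewest first preferences.
        candidates.remove(min(tally, key=tally.get))

ballots = [["A", "B"], ["B", "A"], ["B", "C"], ["C", "B"], ["C", "B"]]
print(irv_winner(ballots))  # "B", after "A" is eliminated
```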

  14. A comparison of approaches for finding minimum identifying codes on graphs

    NASA Astrophysics Data System (ADS)

    Horan, Victoria; Adachi, Steve; Bak, Stanley

    2016-05-01

    In order to formulate mathematical conjectures likely to be true, a number of base cases must be determined. However, many combinatorial problems are NP-hard, and their computational complexity makes determining these base cases by standard brute force on a typical computer difficult. One sample problem explored is that of finding a minimum identifying code. To work around the computational issues, three methods are explored: a parallel computing approach using MATLAB, an adiabatic quantum optimization approach using a D-Wave quantum annealing processor, and satisfiability modulo theories (SMT) with corresponding SMT solvers. Each of these methods requires the problem to be formulated in a unique manner. In this paper, we address the challenges of computing solutions to this NP-hard problem with respect to each of these methods.
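
    As an informal illustration of the SMT route, the sketch below encodes the minimum identifying code problem for a small path graph using Z3's optimizing Python API; Z3 and the example graph are assumptions of this sketch, not necessarily the solver or instances used in the paper. An identifying code must intersect every closed neighborhood N[v] (domination) and every symmetric difference N[u] △ N[v] (separation).

```python
# Minimum identifying code on a 4-vertex path, solved with Z3's
# optimizing solver (illustrative stand-in for the paper's SMT solvers).
from itertools import combinations
from z3 import Bool, If, Optimize, Or, Sum, is_true, sat

n = 4
edges = [(0, 1), (1, 2), (2, 3)]          # path graph P4
nbhd = [{v} for v in range(n)]            # closed neighborhoods N[v]
for u, v in edges:
    nbhd[u].add(v)
    nbhd[v].add(u)

x = [Bool(f"x{v}") for v in range(n)]     # x[v]: vertex v is in the code
opt = Optimize()
for v in range(n):                        # domination: N[v] meets the code
    opt.add(Or([x[u] for u in nbhd[v]]))
for u, v in combinations(range(n), 2):    # separation: N[u]^N[v] meets the code
    opt.add(Or([x[w] for w in nbhd[u] ^ nbhd[v]]))
opt.minimize(Sum([If(xv, 1, 0) for xv in x]))

if opt.check() == sat:
    m = opt.model()
    print([v for v in range(n) if is_true(m[x[v]])])  # e.g. [1, 2, 3]
```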

  15. Three-Axis Time-Optimal Attitude Maneuvers of a Rigid-Body

    NASA Astrophysics Data System (ADS)

    Wang, Xijing; Li, Jisheng

    With the development trends of modern satellites towards both macro-scale and micro-scale, new demands are placed on attitude adjustment. Precise pointing control and rapid maneuvering capabilities have long been part of many space missions, and the continuing development of computer technology enables new optimal algorithms, providing a powerful tool for solving the problem. Many papers on attitude adjustment have been published, in which the spacecraft is modeled as a rigid body with flexible parts or as a gyrostat-type system, and the objective function is usually minimum time or minimum fuel. During earlier satellite missions, attitude acquisition was achieved using momentum exchange devices, performed by a sequential single-axis slewing strategy. Recently, simultaneous three-axis minimum-time maneuver (reorientation) problems have been studied by many researchers. Research on the minimum-time maneuver of a rigid spacecraft within onboard power limits is important both for potential space applications, such as surveying multiple targets in space, and for its academic value. The minimum-time maneuver of a rigid spacecraft is a basic problem because solutions for maneuvering flexible spacecraft build on the solution of the rigid-body slew problem. A new method for the open-loop solution of a rigid spacecraft maneuver is presented. Neglecting all perturbation torques, the necessary conditions for moving the spacecraft from one state to another can be determined. The single-axis and multi-axis cases differ: for a single axis an analytical solution is possible, and the switching curve passing through the state-space origin is parabolic; for multiple axes an analytical solution is impossible due to the dynamic coupling between the axes, and the problem must be solved numerically. Modern research has shown that Euler-axis rotations are in general only quasi-time-optimal. On the basis of the minimum principle, the reorientation of an inertially symmetric spacecraft with a time cost function, from an initial state of rest to a final state of rest, is studied. The solution proceeds as follows. Firstly, the necessary conditions for solving the problem are derived with the minimum principle. The necessary conditions for optimality yield a two-point boundary-value problem (TPBVP) which, when solved, produces the control history that minimizes the time performance index. In the nonsingular case the solution is a bang-bang maneuver, with saturated controls for the entire maneuver. Singular control may exist, but it is singular only in a mathematical sense: physically, the larger the magnitude of the control torque, the shorter the time, so saturated controls are also used in the singular case. Secondly, since the controls are always at their maximum, the key problem is to determine the switch points, so the original problem becomes one of finding the switching times. By adjusting the switch on/off times, a genetic algorithm, a robust optimization method, is used to determine the switching structure without the gyroscopic coupling; the traditional GA is improved upon in this research. The homotopy method for solving the resulting nonlinear algebraic equations is based on rigorous topological continuation theory; based on the homotopy idea, relaxation parameters are introduced and the switch points are computed with simulated annealing. Computer simulation results for a rigid body show that the new method is feasible and efficient. A practical method of computing approximate solutions to the time-optimal control switch times for rigid-body reorientation has been developed.

  16. Frames for exact inversion of the rank order coder.

    PubMed

    Masmoudi, Khaled; Antonini, Marc; Kornprobst, Pierre

    2012-02-01

    Our goal is to revisit rank order coding by proposing an original exact decoding procedure for it. Rank order coding was proposed by Thorpe et al., who stated that the order in which the retina cells are activated encodes the visual stimulus. Based on this idea, the authors proposed in [1] a rank order coder/decoder associated with a retinal model. However, it appeared that the decoding procedure employed yields reconstruction errors that limit the model's bit-cost/quality performance when used as an image codec. The attempts made in the literature to overcome this issue are time consuming and alter the coding procedure, or lack mathematical support and feasibility for standard size images. Here we solve this problem in an original fashion by using frame theory, where a frame of a vector space designates an extension of the notion of basis. Our contribution is twofold. First, we prove that the analyzing filter bank considered is a frame, and then we define the corresponding dual frame that is necessary for exact image reconstruction. Second, to deal with the problem of memory overhead, we design a recursive out-of-core blockwise algorithm for the computation of this dual frame. Our work provides a mathematical formalism for the retinal model under study and defines a simple and exact reverse transform for it, with more than 265 dB of increase in peak signal-to-noise ratio quality compared to [1]. Furthermore, the framework presented here can be extended to several models of the visual cortical areas using redundant representations.

  17. Multiobjective fuzzy stochastic linear programming problems with inexact probability distribution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hamadameen, Abdulqader Othman; Zainuddin, Zaitul Marlizawati

    This study deals with multiobjective fuzzy stochastic linear programming problems with uncertain probability distributions, defined as fuzzy assertions by ambiguous experts. The problem formulation is presented along with two solution strategies: a fuzzy transformation via a ranking function, and a stochastic transformation in which the α-cut technique and linguistic hedges are applied to the uncertain probability distribution. A development of Sen's method is employed to find a compromise solution, supported by an illustrative numerical example.

  18. On the Problems of Construction and Statistical Inference Associated with a Generalization of Canonical Variables.

    DTIC Science & Technology

    1982-02-01

    of them are presented in this paper. As an application, important practical problems similar to the one posed by Gnanadesikan (1977), p. 77, can be... Gnanadesikan and Wilk (1969) to search for a non-linear combination, giving rise to a non-linear first principal component. So, a p-dimensional vector can... distribution, Gnanadesikan and Gupta (1970) and earlier Eaton (1967) have considered the problem of ranking the r underlying populations according to the

  19. An analysis of the least-squares problem for the DSN systematic pointing error model

    NASA Technical Reports Server (NTRS)

    Alvarez, L. S.

    1991-01-01

    A systematic pointing error model is used to calibrate antennas in the Deep Space Network. The least-squares problem is described and analyzed, along with the solution methods used to determine the model's parameters. Specifically studied are the rank degeneracy problems resulting from beam pointing error measurement sets that incorporate inadequate sky coverage. A least-squares parameter subset selection method is described and its applicability to the systematic error modeling process is demonstrated on a Voyager 2 measurement distribution.

  20. Using dreams to assess clinical change during treatment.

    PubMed

    Glucksman, Myron L; Kramer, Milton

    2004-01-01

    This article describes several studies that examine the relationship between the manifest content of selected dreams reported by patients and their clinical progress during psychoanalytic and psychodynamically oriented treatment. There are a number of elements that dreaming and psychotherapy have in common: affect regulation; conflict resolution; problem-solving; self-awareness; mastery and adaptation. Four different studies examined the relationship between the manifest content of selected dreams and clinical progress during treatment. In each study, the ratings of manifest content and clinical progress by independent observers were rank-ordered and compared. In three of the four studies there was a significant correlation between the rankings of manifest content and the rankings of clinical progress. This finding suggests that the manifest content of dreams can be used as an independent variable to assess clinical progress during psychoanalytic and psychodynamically oriented treatment.

  1. Efficient algorithms for computing a strong rank-revealing QR factorization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gu, M.; Eisenstat, S.C.

    1996-07-01

    Given an m × n matrix M with m ≥ n, it is shown that there exists a permutation Π and an integer k such that the QR factorization given by equation (1) reveals the numerical rank of M: the k × k upper-triangular matrix A_k is well conditioned, the 2-norm of C_k is small, and B_k is linearly dependent on A_k with coefficients bounded by a low-degree polynomial in n. Existing rank-revealing QR (RRQR) algorithms are related to such factorizations, and two algorithms are presented for computing them. The new algorithms are nearly as efficient as QR with column pivoting for most problems and take O(mn²) floating-point operations in the worst case.
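
    As a minimal numerical illustration, the sketch below estimates numerical rank from SciPy's QR with column pivoting, the classical algorithm that strong RRQR refines; it is not an implementation of the Gu-Eisenstat algorithm itself, and the tolerance is an illustrative choice.

```python
# Numerical rank from pivoted QR: under column pivoting the decay of
# |R_kk| along the diagonal exposes the rank of a rank-deficient matrix.
import numpy as np
from scipy.linalg import qr

rng = np.random.default_rng(0)
M = rng.standard_normal((50, 4)) @ rng.standard_normal((4, 8))  # rank 4
Q, R, piv = qr(M, mode="economic", pivoting=True)

diag = np.abs(np.diag(R))
rank = int(np.sum(diag > diag[0] * 1e-10))  # illustrative tolerance
print(rank)                                 # 4
```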

  2. A Z-number-based decision making procedure with ranking fuzzy numbers method

    NASA Astrophysics Data System (ADS)

    Mohamad, Daud; Shaharani, Saidatull Akma; Kamis, Nor Hanimah

    2014-12-01

    The theory of fuzzy sets has been in the limelight of various applications in decision making problems due to its usefulness in portraying human perception and subjectivity. Generally, the evaluation in the decision making process is represented in the form of linguistic terms and the calculation is performed using fuzzy numbers. In 2011, Zadeh extended this concept by presenting the idea of the Z-number, a 2-tuple of fuzzy numbers that describes both the restriction and the reliability of the evaluation. The element of reliability in the evaluation is essential, as it affects the final result. Since the concept is still relatively new, methods that incorporate reliability for solving decision making problems are still scarce. In this paper, a decision making procedure based on Z-numbers is proposed. Due to the limitation of their basic properties, Z-numbers are first transformed to fuzzy numbers for simpler calculations. A method of ranking fuzzy numbers is then used to prioritize the alternatives. A risk analysis problem is presented to illustrate the effectiveness of the proposed procedure.
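
    The final prioritisation step reduces to ordering fuzzy numbers by a crisp score. Below is a minimal sketch using the centroid of triangular fuzzy numbers as the ranking index; the centroid is one common choice and the scores are invented, so this stands in for, rather than reproduces, the paper's ranking method.

```python
# Rank alternatives by the centroid (center of gravity) of their
# triangular fuzzy scores; the alternatives and scores are made up.
def centroid(tfn):
    a, b, c = tfn             # triangular fuzzy number, a <= b <= c
    return (a + b + c) / 3.0  # centroid of a triangular membership function

alternatives = {"A1": (0.2, 0.5, 0.7),
                "A2": (0.3, 0.4, 0.9),
                "A3": (0.1, 0.6, 0.8)}
ranked = sorted(alternatives, key=lambda k: centroid(alternatives[k]),
                reverse=True)
print(ranked)  # ['A2', 'A3', 'A1']
```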

  3. A multiple criteria analysis for household solid waste management in the urban community of Dakar.

    PubMed

    Kapepula, Ka-Mbayu; Colson, Gerard; Sabri, Karim; Thonart, Philippe

    2007-01-01

    Household solid waste management is a severe problem in big cities of developing countries. Mismanaged solid waste dumpsites produce bad sanitary, ecological and economic consequences for the whole population, especially for the poorest urban inhabitants. Addressing this problem, this paper uses field data collected in the urban community of Dakar to rank nine areas of the city with respect to multiple criteria of nuisance. Nine criteria are built and organized in three families that represent three classical viewpoints: the production of wastes, their collection and their treatment. Using the PROMETHEE method and the ARGOS software, we perform a pair-wise comparison of the nine areas, which allows multiple criteria rankings according to each viewpoint and then globally. Finding the worst and best areas in terms of nuisance for better waste management in the city is our final purpose, fitting the needs of the urban community as well as possible. Based on field knowledge and on the literature, we suggest applying general and area-specific remedies to the household solid waste problems.

  4. A systematic approach for locating optimum sites

    Treesearch

    Angel Ramos; Isabel Otero

    1979-01-01

    The basic information collected for landscape planning studies may be given the form of an "s x m" matrix, where s is the number of landscape units and m the number of data gathered for each unit. The problem of finding the optimum location for a given project translates into the problem of ranking the series of vectors in the matrix which represent landscape...

  5. The Current State and Problems of Czechoslovak Medical Biometeorology

    DTIC Science & Technology

    1961-03-24

    First Working Conference of the Bioclimatological Group of the Czechoslovak Meteorological Society of the Czechoslovak Academy of Sciences, held on 13... shows some systematic work. V. Struzka had already referred to the problems and tasks of biometeorology, bioclimatology, and spa meteorology at the First... International Society for Bioclimatology and Biometeorology in 1956; its ranks were joined by members of the individual laboratories in our country

  6. Efficient solution of a multi objective fuzzy transportation problem

    NASA Astrophysics Data System (ADS)

    Vidhya, V.; Ganesan, K.

    2018-04-01

    In this paper we present a methodology for the solution of the multi-objective fuzzy transportation problem when all the cost and time coefficients are trapezoidal fuzzy numbers and the supply and demand are crisp numbers. Using a new fuzzy arithmetic on the parametric form of trapezoidal fuzzy numbers and a new ranking method, all efficient solutions are obtained. The proposed method is illustrated with an example.

  7. Scattering theory for graphs isomorphic to a regular tree at infinity

    NASA Astrophysics Data System (ADS)

    Colin de Verdière, Yves; Truc, Françoise

    2013-06-01

    We describe the spectral theory of the adjacency operator of a graph which is isomorphic to a regular tree at infinity. Using some combinatorics, we reduce the problem to a scattering problem for a finite rank perturbation of the adjacency operator on a regular tree. We develop this scattering theory using the classical recipes for Schrödinger operators in Euclidean spaces.

  8. Household income and health problems during a period of labour-market change and widening income inequalities - a study among the Finnish population between 1987 and 2007.

    PubMed

    Aittomäki, Akseli; Martikainen, Pekka; Rahkonen, Ossi; Lahelma, Eero

    2014-01-01

    Income inequalities widened considerably from 1987 to 2007 in Finland. We compared the association between household income and health problems across three periods and in several different ways of modelling the dependence. Our aim was to find out whether the change in the distribution of income might have led to wider income-related inequalities in health problems. The data represent an 11-per-cent random sample of the Finnish population, and we restricted the analysed sample to those between 18 and 67 years of age and not in receipt of any pension in each of the three six-year periods examined (n between 280,106 and 291,198). The health outcome was sickness-allowance days compensated. Household-equivalent taxable income was applied with two different scale transformations: firstly, as real income adjusted for price level and secondly, as rank position on the income distribution. We used negative binomial regression models, with and without zero inflation, as well as decomposition analysis. We found that sickness-allowance days decreased with increasing income, while differences in the shape and magnitude of the association were found between the scales and the periods. During the study period the association strengthened considerably at both the lowest fifth and the top fifth of the rank scale, while the observed per-unit effect of real income changed less. Decomposition analysis suggested that slightly less than half of the observed increase in concentration of health problems at the lower end of the rank scale could be accounted for by the change in real income distribution. The results indicate that widening differences in household consumption potential may have contributed to an intensified impact of household income on inequalities in health problems. Explaining the change only in terms of consumption potential, however, was problematic, and changes in the interdependence of labour-market advantage and health problems are likely to contribute as well. Copyright © 2013 Elsevier Ltd. All rights reserved.

  9. Application of the GNSS-R in tomographic sounding of the Earth atmosphere

    NASA Astrophysics Data System (ADS)

    Jaberi Shafei, Milad; Mashhadi-Hossainali, Masoud

    2018-07-01

    Reflected GNSS signals offer a great opportunity for detecting and monitoring water level variation, land surface roughness and the atmosphere around the Earth. The type of application depends strongly on the satellites' geometry and the topography of the study area. GNSS-R can be used in sounding water vapor, one of the most important parameters in the troposphere. In view of its temporal and spatial changes, retrieval of this parameter is complicated. GNSS tomography is a common approach for this purpose. Given the dependence of this inverse approach on the number of stations and the satellites' coverage of the study area, tomographic reconstruction of water vapor is an ill-posed problem, and additional constraints are usually used to find a solution. In this research, reflected signals (GNSS-R) are proposed for the first time to resolve the rank deficiency of this problem. This has been implemented in a tomographic model already developed for modeling the water vapor in the north-west of Iran. In view of the low number of GPS stations in this area, the design matrix of the model is rank deficient. Simulated results demonstrate that the rank deficiency of this matrix can be reduced by implementing an appropriate number of GNSS-R stations when the spatial resolution of the model is optimized. The resolution matrix is used as a measure for analyzing the efficiency of the proposed method. Results from DOY 300 and 301 in year 2011 show that the applied method can remedy the rank deficiency of the design matrix; the satellites' constellation and the time response of the model are the effective parameters in this respect. On average, the rank deficiency of the design matrix is improved by more than 90% when the reflected signals are used, as is easily seen in the resolution matrix of the model. Here, the mean bias and RMSE of the reconstructed image are 0.2593 and 1.847 ppm, respectively.
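
    To make the resolution-matrix diagnostic concrete, here is a small numerical sketch with random matrices standing in for the tomographic design matrices (sizes and values are invented): the model resolution matrix R = G⁺G has trace equal to the rank of G, so appending GNSS-R observation rows visibly repairs the rank deficiency.

```python
# Rank-deficiency diagnosis via the resolution matrix R = pinv(G) @ G.
# Random stand-ins for the tomography kernels; not the paper's model.
import numpy as np

rng = np.random.default_rng(2)
G_gnss = rng.standard_normal((8, 12))   # too few rays: rank 8 < 12 unknowns
G_refl = rng.standard_normal((6, 12))   # extra GNSS-R observations
G_full = np.vstack([G_gnss, G_refl])

for name, G in [("GNSS only", G_gnss), ("GNSS + GNSS-R", G_full)]:
    R = np.linalg.pinv(G) @ G           # resolution matrix
    print(name, "rank:", np.linalg.matrix_rank(G),
          "trace(R):", round(np.trace(R), 2))  # trace(R) == rank(G)
```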

  10. Assessment and improvement of statistical tools for comparative proteomics analysis of sparse data sets with few experimental replicates.

    PubMed

    Schwämmle, Veit; León, Ileana Rodríguez; Jensen, Ole Nørregaard

    2013-09-06

    Large-scale quantitative analyses of biological systems are often performed with few replicate experiments, leading to multiple nonidentical data sets due to missing values. For example, mass spectrometry driven proteomics experiments are frequently performed with few biological or technical replicates due to sample scarcity, duty-cycle or sensitivity constraints, or limited capacity of the available instrumentation, leading to incomplete results where detection of significant feature changes becomes a challenge. This problem is further exacerbated for the detection of significant changes on the peptide level, for example, in phospho-proteomics experiments. In order to assess the extent of this problem and the implications for large-scale proteome analysis, we investigated and optimized the performance of three statistical approaches by using simulated and experimental data sets with varying numbers of missing values. We applied three tools for the detection of significantly changing features in simulated and experimental proteomics data sets with missing values: the standard t test, the moderated t test (also known as limma), and rank products. The rank product method was improved to work with data sets containing missing values. Extensive analysis of simulated and experimental data sets revealed that the performance of the statistical analysis tools depended on simple properties of the data sets. High-confidence results were obtained by using the limma and rank products methods for analyses of triplicate data sets that exhibited more than 1000 features and more than 50% missing values. The maximum number of differentially represented features was identified by using the limma and rank products methods in a complementary manner. We therefore recommend combined usage of these methods as a novel and optimal way to detect significantly changing features in these data sets. This approach is suitable for large quantitative data sets from stable isotope labeling and mass spectrometry experiments and should be applicable to large data sets of any type. An R script that implements the improved rank products algorithm and the combined analysis is available.
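
    The rank product statistic itself is simple: rank each feature within each replicate, then take the geometric mean of its ranks. The sketch below (synthetic data, NumPy) adapts it to missing values by averaging log-ranks only over the replicates in which a feature was observed; it follows the general idea rather than the exact algorithm of the paper's R script.

```python
# Rank product with missing values: geometric mean of within-replicate
# ranks, computed only over the replicates where a feature was observed.
import numpy as np

def rank_product(data):
    """data: features x replicates array, NaN = missing."""
    n_feat, n_rep = data.shape
    ranks = np.full_like(data, np.nan)
    for j in range(n_rep):
        col = data[:, j]
        obs = ~np.isnan(col)
        order = np.argsort(-col[obs])      # rank 1 = largest value
        r = np.empty(obs.sum())
        r[order] = np.arange(1, obs.sum() + 1)
        ranks[obs, j] = r
    k = np.sum(~np.isnan(ranks), axis=1)   # replicates observed per feature
    log_rp = np.nansum(np.log(ranks), axis=1) / k
    return np.exp(log_rp)                  # small = consistently top-ranked

data = np.array([[2.1, 1.8, np.nan],
                 [0.2, 0.1, 0.3],
                 [1.0, np.nan, 1.2]])
print(rank_product(data))  # approx [1.0, 2.29, 1.41]
```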

  11. Online Low-Rank Representation Learning for Joint Multi-subspace Recovery and Clustering.

    PubMed

    Li, Bo; Liu, Risheng; Cao, Junjie; Zhang, Jie; Lai, Yu-Kun; Liua, Xiuping

    2017-10-06

    Benefiting from global rank constraints, the low-rank representation (LRR) method has been shown to be an effective solution to subspace learning. However, the global mechanism also means that the LRR model is not suitable for handling large-scale or dynamic data. For large-scale data, the LRR method suffers from high time complexity, and for dynamic data, it has to recompute a complex rank minimization for the entire data set whenever new samples are dynamically added, making it prohibitively expensive. Existing attempts at online LRR either take a stochastic approach or build the representation purely from a small sample set and treat new input as out-of-sample data. The former often requires multiple runs for good performance and thus takes longer to run, and the latter formulates online LRR as an out-of-sample classification problem and is less robust to noise. In this paper, a novel online low-rank representation subspace learning method is proposed for both large-scale and dynamic data. The proposed algorithm is composed of two stages: static learning and dynamic updating. In the first stage, the subspace structure is learned from a small number of data samples. In the second stage, the intrinsic principal components of the entire data set are computed incrementally by utilizing the learned subspace structure, and the low-rank representation matrix can also be incrementally solved by an efficient online singular value decomposition (SVD) algorithm. The time complexity is reduced dramatically for large-scale data, and repeated computation is avoided for dynamic problems. We further perform theoretical analysis comparing the proposed online algorithm with the batch LRR method. Finally, experimental results on typical tasks of subspace recovery and subspace clustering show that the proposed algorithm performs comparably to or better than batch methods including batch LRR, and significantly outperforms state-of-the-art online methods.

  12. Quantum annealing versus classical machine learning applied to a simplified computational biology problem

    PubMed Central

    Li, Richard Y.; Di Felice, Rosa; Rohs, Remo; Lidar, Daniel A.

    2018-01-01

    Transcription factors regulate gene expression, but how these proteins recognize and specifically bind to their DNA targets is still debated. Machine learning models are effective means to reveal interaction mechanisms. Here we studied the ability of a quantum machine learning approach to predict binding specificity. Using simplified datasets of a small number of DNA sequences derived from actual binding affinity experiments, we trained a commercially available quantum annealer to classify and rank transcription factor binding. The results were compared to state-of-the-art classical approaches for the same simplified datasets, including simulated annealing, simulated quantum annealing, multiple linear regression, LASSO, and extreme gradient boosting. Despite technological limitations, we find a slight advantage in classification performance and nearly equal ranking performance using the quantum annealer for these fairly small training data sets. Thus, we propose that quantum annealing might be an effective method to implement machine learning for certain computational biology problems. PMID:29652405

  13. Drogue detection for vision-based autonomous aerial refueling via low rank and sparse decomposition with multiple features

    NASA Astrophysics Data System (ADS)

    Gao, Shibo; Cheng, Yongmei; Song, Chunhua

    2013-09-01

    Vision-based probe-and-drogue autonomous aerial refueling is a demanding task in modern aviation for both manned and unmanned aircraft. A key issue is to determine the relative orientation and position of the drogue and the probe accurately for the relative navigation system during the approach phase, which requires locating the drogue precisely. Drogue detection is challenging due to the disorderly motion of the drogue caused by both the tanker wake vortex and atmospheric turbulence. In this paper, the problem of drogue detection is treated as a problem of moving object detection. A drogue detection algorithm based on low rank and sparse decomposition with local multiple features is proposed. The global and local information of the drogue is introduced into the detection model in a unified way. Experimental results on real autonomous aerial refueling videos show that the proposed drogue detection algorithm is effective.
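
    The core low rank plus sparse split can be sketched compactly. Below is a generic robust PCA via the inexact augmented Lagrange multiplier scheme with standard default parameters; when video frames are stacked as matrix columns, the low-rank part models the background and the sparse part captures moving objects such as the drogue. It is a stand-in for, not a reproduction of, the paper's multi-feature model.

```python
# Low-rank + sparse decomposition (robust PCA) by inexact ALM:
# alternating singular value thresholding and elementwise shrinkage.
import numpy as np

def shrink(X, tau):
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def rpca(D, n_iter=200, tol=1e-7):
    """Decompose D into low-rank L plus sparse S, with D = L + S."""
    m, n = D.shape
    lam = 1.0 / np.sqrt(max(m, n))
    mu = 0.25 * m * n / np.abs(D).sum()  # common heuristic step size
    Y = np.zeros_like(D)                 # Lagrange multipliers
    S = np.zeros_like(D)
    for _ in range(n_iter):
        # L-step: singular value thresholding of D - S + Y/mu
        U, sig, Vt = np.linalg.svd(D - S + Y / mu, full_matrices=False)
        L = (U * shrink(sig, 1.0 / mu)) @ Vt
        # S-step: elementwise soft thresholding
        S = shrink(D - L + Y / mu, lam / mu)
        resid = D - L - S
        Y += mu * resid
        if np.linalg.norm(resid) <= tol * np.linalg.norm(D):
            break
    return L, S

# Usage: stack video frames as columns of D, then L, S = rpca(D).
```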

  14. Application of the fuzzy topsis multi-attribute decision making method to determine scholarship recipients

    NASA Astrophysics Data System (ADS)

    Irvanizam, I.

    2018-03-01

    Some scholarships have been routinely offered by the Ministry of Research, Technology and Higher Education of the Republic of Indonesia for students at Syiah Kuala University. In practice, the scholarship selection process is a subjective and highly complex problem. Multi-Attribute Decision Making (MADM) techniques can be a solution to the scholarship selection problem. In this study, we demonstrate the application of fuzzy TOPSIS as an MADM technique through a numerical example, calculating triangular fuzzy numbers for the fuzzy data and normalizing the weights. We then use these normalized values to construct the normalized fuzzy decision matrix. We finally use fuzzy TOPSIS to rank the alternatives in descending order of relative closeness to the ideal solution. The resulting final ranking differs slightly from previous work.
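
    The crisp core of TOPSIS, which the fuzzy variant generalises, fits in a few lines: normalise, weight, and rank by relative closeness to the ideal solution. The decision matrix, weights and criteria below are invented for illustration.

```python
# Crisp TOPSIS sketch: rank applicants by closeness to the ideal solution.
import numpy as np

X = np.array([[3.6, 2.1, 8.0],          # rows: applicants
              [3.2, 3.5, 6.5],          # cols: e.g. GPA, need, activities
              [3.9, 1.0, 7.0]])
w = np.array([0.5, 0.3, 0.2])           # criteria weights, summing to 1
benefit = np.array([True, True, True])  # all treated as benefit criteria

V = w * X / np.linalg.norm(X, axis=0)   # weighted normalised matrix
ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
d_pos = np.linalg.norm(V - ideal, axis=1)
d_neg = np.linalg.norm(V - anti, axis=1)
closeness = d_neg / (d_pos + d_neg)
print(np.argsort(-closeness))           # applicant indices, best first
```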

  15. E-Learning Technologies: Employing Matlab Web Server to Facilitate the Education of Mathematical Programming

    ERIC Educational Resources Information Center

    Karagiannis, P.; Markelis, I.; Paparrizos, K.; Samaras, N.; Sifaleras, A.

    2006-01-01

    This paper presents new web-based educational software (webNetPro) for "Linear Network Programming." It includes many algorithms for "Network Optimization" problems, such as shortest path problems, minimum spanning tree problems, maximum flow problems and other search algorithms. Therefore, webNetPro can assist the teaching process of courses such…

  16. Feedback laws for fuel minimization for transport aircraft

    NASA Technical Reports Server (NTRS)

    Price, D. B.; Gracey, C.

    1984-01-01

    The Theoretical Mechanics Branch has as one of its long-range goals solving real-time trajectory optimization problems on board an aircraft. This is a generic problem with application to all aspects of aviation, from general aviation through commercial to military. The overall interest is in the generic problem, but specific problems are examined to achieve concrete results. The problem is to develop control laws that generate approximately optimal trajectories with respect to some criterion such as minimum time, minimum fuel, or some combination of the two. These laws must be simple enough to be implemented on a computer flown on board an aircraft, which implies a major simplification of the two-point boundary value problem generated by a standard trajectory optimization. In addition, the control laws must allow for changes in end conditions during the flight and changes in weather along a planned flight path. Therefore, a feedback control law that generates commands based on the current state, rather than a precomputed open-loop control law, is desired. This requirement, along with the need for order reduction, argues for the application of singular perturbation techniques.

  17. Great geomagnetic storm of 9 November 1991: Association with a disappearing solar filament

    NASA Astrophysics Data System (ADS)

    Cliver, E. W.; Balasubramaniam, K. S.; Nitta, N. V.; Li, X.

    2009-02-01

    We attribute the great geomagnetic storm on 8-10 November 1991 to a large-scale eruption that encompassed the disappearance of a ~25° solar filament in the southern solar hemisphere. The resultant soft X-ray arcade spanned ~90° of solar longitude. The rapid growth of an active region lying at one end of the X-ray arcade appears to have triggered the eruption. This is the largest geomagnetic storm yet associated with the eruption of a quiescent filament. The minimum hourly Dst value of -354 nT on 9 November 1991 compares with a minimum Dst value of -161 nT for the largest 27-day recurrent (coronal hole) storm observed from 1972 to 2005 and the minimum -559 nT value observed during the flare-associated storm of 14 March 1989, the greatest magnetic storm recorded during the space age. Overall, the November 1991 storm ranks 15th on a list of Dst storms from 1905 to 2004, surpassing in intensity such well-known storms as 14 July 1982 (-310 nT) and 15 July 2000 (-317 nT). We used the Cliver et al. and Gopalswamy et al. empirical models of coronal mass ejection propagation in the solar wind to provide consistency checks on the eruption/storm association.

  18. Method for using global optimization to the estimation of surface-consistent residual statics

    DOEpatents

    Reister, David B.; Barhen, Jacob; Oblow, Edward M.

    2001-01-01

    An efficient method for generating residual statics corrections to compensate for surface-consistent static time shifts in stacked seismic traces. The method includes a step of framing the residual static corrections as a global optimization problem in a parameter space. The method also includes decoupling the global optimization problem involving all seismic traces into several one-dimensional problems. The method further utilizes a Stochastic Pijavskij Tunneling search to eliminate regions in the parameter space where a global minimum is unlikely to exist so that the global minimum may be quickly discovered. The method finds the residual statics corrections by maximizing the total stack power. The stack power is a measure of seismic energy transferred from energy sources to receivers.
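
    As a toy illustration of the objective being optimized, the sketch below chooses per-trace static shifts to maximize total stack power by exhaustive coordinate search on synthetic traces; the patent's Stochastic Pijavskij Tunneling global search is not reproduced here.

```python
# Maximise stack power over per-trace static shifts (coordinate search).
import numpy as np

def stack_power(traces, shifts):
    stacked = sum(np.roll(tr, s) for tr, s in zip(traces, shifts))
    return float(np.sum(stacked ** 2))

rng = np.random.default_rng(3)
wavelet = np.exp(-0.5 * ((np.arange(64) - 32) / 3.0) ** 2)
true = [5, -3, 2, 0]                      # synthetic statics
traces = [np.roll(wavelet, -s) + 0.05 * rng.standard_normal(64)
          for s in true]

shifts = [0] * len(traces)
for _ in range(3):                        # a few sweeps over all traces
    for i in range(len(traces)):
        shifts[i] = max(range(-8, 9), key=lambda s: stack_power(
            traces, shifts[:i] + [s] + shifts[i + 1:]))
print(shifts)  # close to [5, -3, 2, 0], up to a common constant shift
```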

  19. Proportional Topology Optimization: A New Non-Sensitivity Method for Solving Stress Constrained and Minimum Compliance Problems and Its Implementation in MATLAB

    PubMed Central

    Biyikli, Emre; To, Albert C.

    2015-01-01

    A new topology optimization method called Proportional Topology Optimization (PTO) is presented. As a non-sensitivity method, PTO is simple to understand, easy to implement, and at the same time efficient and accurate. It is implemented in two MATLAB programs to solve the stress constrained and minimum compliance problems. Descriptions of the algorithm and computer programs are provided in detail. The method is applied to three numerical examples for both types of problems, and shows efficiency and accuracy comparable to an existing optimality criteria method which computes sensitivities. Also, the PTO stress constrained and minimum compliance algorithms are compared by feeding the output of one algorithm to the other in an alternating manner, where the former yields lower maximum stress and volume fraction but higher compliance than the latter. Advantages and disadvantages of the proposed method and future work are discussed. The computer programs are self-contained and publicly shared at the website www.ptomethod.org. PMID:26678849
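
    The distinguishing step is the sensitivity-free material distribution: each iteration hands out the target amount of material in proportion to an element quantity (for example compliance), redistributing whatever clipping to the box bounds removes. The sketch below isolates that inner loop with a random stand-in for the finite-element compliance values; it is an assumption-laden sketch, not the paper's MATLAB program.

```python
# Proportional material distribution with redistribution after clipping;
# `comp` is a random placeholder for per-element compliance from an FE solve.
import numpy as np

rng = np.random.default_rng(4)
n_elem, vol_frac = 100, 0.7
comp = rng.random(n_elem)
target = vol_frac * n_elem
x = np.zeros(n_elem)                    # element densities in [0, 1]
remaining = target
for _ in range(20):                     # redistribute what clipping removed
    free = x < 1.0
    share = remaining * comp[free] / comp[free].sum()
    x[free] = np.minimum(x[free] + share, 1.0)
    remaining = target - x.sum()
    if remaining < 1e-9:
        break
print(round(x.sum() / n_elem, 3))       # volume fraction = 0.7
```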

  20. Testing the robustness of optimal access vessel fleet selection for operation and maintenance of offshore wind farms

    DOE PAGES

    Sperstad, Iver Bakken; Stålhane, Magnus; Dinwoodie, Iain; ...

    2017-09-23

    Optimising the operation and maintenance (O&M) and logistics strategy of offshore wind farms implies the decision problem of selecting the vessel fleet for O&M. Different strategic decision support tools can be applied to this problem, but much uncertainty remains regarding both input data and modelling assumptions. Our paper aims to investigate and ultimately reduce this uncertainty by comparing four simulation tools, one mathematical optimisation tool and one analytic spreadsheet-based tool applied to select the O&M access vessel fleet that minimizes the total O&M cost of a reference wind farm. The comparison shows that the tools generally agree on the optimal vessel fleet, but only partially agree on the relative ranking of the different vessel fleets in terms of total O&M cost. The robustness of the vessel fleet selection to various input data assumptions was tested, and the ranking was found to be particularly sensitive to the vessels' limiting significant wave height for turbine access. As this parameter also shows the greatest discrepancy between the tools, its accurate quantification and modelling is crucial. The ranking is moderately sensitive to turbine failure rates and vessel day rates but less sensitive to electricity price and vessel transit speed.

  2. Neural Network Solves "Traveling-Salesman" Problem

    NASA Technical Reports Server (NTRS)

    Thakoor, Anilkumar P.; Moopenn, Alexander W.

    1990-01-01

    Experimental electronic neural network solves "traveling-salesman" problem. Plans round trip of minimum distance among N cities, visiting every city once and only once (without backtracking). This problem is paradigm of many problems of global optimization (e.g., routing or allocation of resources) occurring in industry, business, and government. Applied to large number of cities (or resources), circuits of this kind expected to solve problem faster and more cheaply.

  3. Robustness of mission plans for unmanned aircraft

    NASA Astrophysics Data System (ADS)

    Niendorf, Moritz

    This thesis studies the robustness of optimal mission plans for unmanned aircraft. Mission planning typically involves tactical planning and path planning. Tactical planning refers to task scheduling and, in multi-aircraft scenarios, also includes establishing a communication topology. Path planning refers to computing a feasible and collision-free trajectory. For a prototypical mission planning problem, the traveling salesman problem on a weighted graph, the robustness of an optimal tour is analyzed with respect to changes to the edge costs. Specifically, the stability region of an optimal tour is obtained, i.e., the set of all edge cost perturbations for which that tour is optimal. The exact stability region of solutions to variants of the traveling salesman problem is obtained from a linear programming relaxation of an auxiliary problem. Edge cost tolerances and edge criticalities are derived from the stability region. For Euclidean traveling salesman problems, robustness with respect to perturbations of vertex locations is considered, and safe radii and vertex criticalities are introduced. For weighted-sum multi-objective problems, stability regions with respect to changes in the objectives, weights, and simultaneous changes are given. Most critical weight perturbations are derived. Computing exact stability regions is intractable for large instances, so tractable approximations are desirable. The stability regions of solutions to relaxations of the traveling salesman problem give under-approximations, and sets of tours give over-approximations. The application of these results to the two-neighborhood and the minimum 1-tree relaxation is discussed. Bounds on edge cost tolerances and approximate criticalities are obtainable likewise. A minimum spanning tree is an optimal communication topology for minimizing the cumulative transmission power in multi-aircraft missions. The stability region of a minimum spanning tree is given, and tolerances, stability balls, and criticalities are derived. This analysis is extended to Euclidean minimum spanning trees. This thesis aims at enabling increased mission performance by providing means of assessing the robustness and optimality of a mission and methods for identifying critical elements. Examples are given of the application to mission planning in contested environments, cargo aircraft mission planning, multi-objective mission planning, and planning optimal communication topologies for teams of unmanned aircraft.

  4. Query-Adaptive Hash Code Ranking for Large-Scale Multi-View Visual Search.

    PubMed

    Liu, Xianglong; Huang, Lei; Deng, Cheng; Lang, Bo; Tao, Dacheng

    2016-10-01

    Hash-based nearest neighbor search has become attractive in many applications. However, the quantization in hashing usually degenerates the discriminative power when using Hamming distance ranking. Moreover, for large-scale visual search, existing hashing methods cannot directly support efficient search over data with multiple sources, even though the literature has shown that adaptively incorporating complementary information from diverse sources or views can significantly boost search performance. To address these problems, this paper proposes a novel and generic approach to building multiple hash tables with multiple views and generating fine-grained ranking results at the bitwise and tablewise levels. For each hash table, a query-adaptive bitwise weighting is introduced to alleviate the quantization loss by simultaneously exploiting the quality of hash functions and their complementarity for nearest neighbor search. From the tablewise aspect, multiple hash tables are built for different data views as a joint index, over which a query-specific rank fusion is proposed to rerank all results from the bitwise ranking by diffusing in a graph. Comprehensive experiments on image search over three well-known benchmarks show that the proposed method achieves up to 17.11% and 20.28% performance gains on single and multiple table search over state-of-the-art methods.
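
    The bitwise-weighting idea can be illustrated in a few lines: ranking by a per-bit weighted Hamming distance rather than the integer Hamming distance yields a fine-grained ordering. The codes and weights below are random placeholders, not the paper's learned quantities.

```python
# Weighted Hamming ranking: each mismatched bit contributes its own weight.
import numpy as np

rng = np.random.default_rng(1)
db = rng.integers(0, 2, size=(1000, 32), dtype=np.uint8)  # database codes
q = rng.integers(0, 2, size=32, dtype=np.uint8)           # query code
w = rng.random(32)                                        # per-bit weights

dist = ((db != q) * w).sum(axis=1)  # weighted Hamming distance to query
top10 = np.argsort(dist)[:10]       # nearest neighbours under the weighting
print(top10)
```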

  5. Intrusion detection using rough set classification.

    PubMed

    Zhang, Lian-hua; Zhang, Guan-hua; Zhang, Jie; Bai, Ying-cai

    2004-09-01

    Recently, machine learning-based intrusion detection approaches have been the subject of extensive research because they can detect both misuse and anomaly. In this paper, rough set classification (RSC), a modern learning algorithm, is used to rank the features extracted for detecting intrusions and to generate intrusion detection models. Feature ranking is a very critical step when building the model. RSC performs feature ranking before generating rules, converting the feature ranking to a minimal hitting set problem addressed using a genetic algorithm (GA). In classical approaches using a Support Vector Machine (SVM), this is done by executing many iterations, each of which removes one useless feature; compared with those methods, our method avoids many iterations. In addition, a hybrid genetic algorithm is proposed to increase the convergence speed and decrease the training time of RSC. The models generated by RSC take the form of "IF-THEN" rules, which have the advantage of explicability. Tests and comparison of RSC with SVM on DARPA benchmark data showed that for Probe and DoS attacks both RSC and SVM yielded highly accurate results (greater than 99% accuracy on the testing set).

  6. Sigmund Freud and Otto Rank: debates and confrontations about anxiety and birth.

    PubMed

    Pizarro Obaid, Francisco

    2012-06-01

    The publication of Otto Rank's The Trauma of Birth (1924) gave rise to an intense debate within the secret Committee and confronted Freud with one of his most beloved disciples. After analyzing the letters that the Professor exchanged with his closest collaborators and reviewing the works he published during this period, it is clear that anxiety was a crucial element among the topics in dispute. His reflections linked to the signal anxiety concept allowed Freud to refute Rank's thesis that defined birth trauma as the paradigmatic key to understanding neurosis, and, in turn, was a way of confirming the validity of the concepts of Oedipus complex, repression and castration in the conceptualization of anxiety. The reasons for the modifications of anxiety theory in the mid-1920s cannot be reduced, as Freud would affirm officially in his work of 1926, to the detection of internal contradictions in his theory or to the desire to establish a metapsychological version of the problem, for they gain their essential impulse from the debate with Rank. Copyright © 2012 Institute of Psychoanalysis.

  7. Application of the PageRank Algorithm to Alarm Graphs

    NASA Astrophysics Data System (ADS)

    Treinen, James J.; Thurimella, Ramakrishna

    The task of separating genuine attacks from false alarms in large intrusion detection infrastructures is extremely difficult. The number of alarms received in such environments can easily run into millions of alerts per day. The overwhelming noise created by these alarms can cause genuine attacks to go unnoticed. As a means of highlighting these attacks, we introduce a host ranking technique utilizing Alarm Graphs. Rather than enumerate all potential attack paths as in Attack Graphs, we build and analyze graphs based on the alarms generated by the intrusion detection sensors installed on a network. Given that the alarms are predominantly false positives, the challenge is to identify, separate, and ideally predict future attacks. In this paper, we propose a novel approach to tackle this problem based on the PageRank algorithm. By elevating the rank of known attackers and victims, we are able to observe the effect that these hosts have on the other nodes in the Alarm Graph. Using this information we are able to discover previously overlooked attacks, as well as defend against future intrusions.
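
    A rough sketch of this biasing idea with NetworkX: personalised PageRank restarts at hosts already known to be malicious, so hosts tightly connected to them in the alarm graph rise in rank. The hosts and alarm edges are fabricated for illustration and are not from the paper.

```python
# Personalised PageRank over a graph built from IDS alarms.
import networkx as nx

alarms = [("10.0.0.5", "10.0.0.9"), ("10.0.0.9", "10.0.0.7"),
          ("10.0.0.5", "10.0.0.7"), ("10.0.0.2", "10.0.0.3")]
G = nx.DiGraph()
G.add_edges_from(alarms)                # edge = alarm from source to target

known_bad = {"10.0.0.5": 1.0}           # confirmed attacker biases the walk
scores = nx.pagerank(G, alpha=0.85, personalization=known_bad)
for host, s in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(host, round(s, 3))            # hosts near the attacker rank high
```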

  8. Dual Dynamically Orthogonal approximation of incompressible Navier Stokes equations with random boundary conditions

    NASA Astrophysics Data System (ADS)

    Musharbash, Eleonora; Nobile, Fabio

    2018-02-01

    In this paper we propose a method for the strong imposition of random Dirichlet boundary conditions in the Dynamical Low Rank (DLR) approximation of parabolic PDEs and, in particular, incompressible Navier-Stokes equations. We show that the DLR variational principle can be set in the constrained manifold of all rank-S random fields with a prescribed value on the boundary, expressed in low rank format with rank smaller than S. We characterize the tangent space to the constrained manifold by means of a Dual Dynamically Orthogonal (Dual DO) formulation, in which the stochastic modes are kept orthonormal and the deterministic modes satisfy suitable boundary conditions consistent with the original problem. The Dual DO formulation is also convenient for including the incompressibility constraint when dealing with incompressible Navier-Stokes equations. We show the performance of the proposed Dual DO approximation on two numerical test cases: the classical benchmark of a laminar flow around a cylinder with random inflow velocity, and a biomedical application simulating blood flow in a realistic carotid artery, reconstructed from MRI data, with random inflow conditions coming from Doppler measurements.

  9. Evaluation of ASPAN's preoperative patient teaching videos on general, regional, and minimum alveolar concentration/conscious sedation anesthesia.

    PubMed

    Krenzischek, D A; Wilson, L; Poole, E L

    2001-06-01

    This descriptive study was undertaken as part of a clinical improvement effort by the ASPAN Research and Education Committees to evaluate adult patients' perception of and satisfaction with the ASPAN Preoperative Patient Teaching videotape on general, regional, and minimum alveolar concentration (MAC)/conscious sedation anesthesia. Research findings on the use of videotapes for preoperative education are mixed. Some studies have reported that the use of videotapes increases knowledge and decreases anxiety, whereas other studies have shown a minimal effect on knowledge and anxiety. A convenience sample of 96 adult patients was chosen from those who were scheduled for surgeries with the above anesthesia techniques in 11 US hospitals and/or surgical centers within 4 ASPAN regional boundaries. Patients viewed the videotape the day(s) before surgery and then completed ASPAN's Preoperative Anesthesia Patient Teaching Questionnaire to measure patient perception and satisfaction. Sixty percent of the patients were women, and 50% had a college degree or higher. The average age of the patients was 51 (+/-17.2). Overall satisfaction scores had a potential range of 10 to 40, with higher scores indicating greater satisfaction. The mean satisfaction score for this study was 35 (+/-6.6). No significant relationships were found between satisfaction with the videotape and age, gender, or educational level. Patients were asked to rank each of 4 teaching methods. Among the choices of individualized instruction, written materials, Internet-based instruction, and videotape, the videotape method was ranked as most preferred. The information obtained from this study will be used to modify and improve the content of the patient education videotape produced by ASPAN. Copyright 2001 by American Society of PeriAnesthesia Nurses.

  10. Variability of Offending Allergens of Allergic Rhinitis According to Age: Optimization of Skin Prick Test Allergens

    PubMed Central

    Lee, Ji-Eun; Ahn, Jae-Chul; Han, Doo Hee; Kim, Dong-Young; Kim, Jung-Whun; Cho, Sang-Heon; Park, Heung-Woo

    2014-01-01

    Purpose This study evaluates offending allergens in patients with allergic rhinitis (AR) according to age, in order to establish a minimal panel of skin prick test (SPT) allergens required to identify whether a patient is sensitized. Methods We retrospectively analyzed SPT results according to age to determine the minimum test battery panel necessary to screen at least 93%-95% of AR patients. Allergic skin tests (common airborne indoor and outdoor allergens) were performed on 7,182 patients from January 2007 to June 2011. All patients were classified into 9 groups according to age; subsequently, we investigated offending allergens by age group. Results A total of 5,032 (70.1%) patients were found sensitized to at least one of the 55 aeroallergen extracts tested. The annual ranking of offending allergens did not differ significantly over the past 5 years. House dust mites (HDM) were the most prevalent allergens, ranked from first to third in all 5 years. The allergens in the minimum test panel differed slightly among the age groups; in addition, the types of sensitized allergen sources were more diverse in the older age groups. HDM covered a larger proportion of the sensitized allergens in the younger age groups. Testing with 5 allergens (Dermatophagoides farinae, Tetranychus urticae, oak, mugwort and cockroach) adequately identified over 90% of the sensitized patients. Conclusions An SPT with around 5-7 allergens adequately detected most of the sensitization in the majority of the age groups in Korea. However, this study suggests that physicians perform the SPT with appropriately selected allergens in each age category for the screening of AR. PMID:24404393

  12. Child Labour Remains "Massive Problem."

    ERIC Educational Resources Information Center

    World of Work, 2002

    2002-01-01

    Despite significant progress in efforts to abolish child labor, an alarming number of children are engaged in its worst forms. Although 106 million are engaged in acceptable labor (light work for those above the minimum age for employment), 246 million are involved in child labor that should be abolished (under minimum age, hazardous work). (JOW)

  13. Hyperspectral Super-Resolution of Locally Low Rank Images From Complementary Multisource Data.

    PubMed

    Veganzones, Miguel A; Simoes, Miguel; Licciardi, Giorgio; Yokoya, Naoto; Bioucas-Dias, Jose M; Chanussot, Jocelyn

    2016-01-01

    Remote sensing hyperspectral images (HSIs) are quite often low rank, in the sense that the data belong to a low-dimensional subspace/manifold. This has recently been exploited for the fusion of low spatial resolution HSI with high spatial resolution multispectral images in order to obtain super-resolution HSI. Most approaches adopt an unmixing or a matrix factorization perspective. The derived methods have led to state-of-the-art results when the spectral information lies in a low-dimensional subspace/manifold. However, if the subspace/manifold dimensionality spanned by the complete data set is large, i.e., larger than the number of multispectral bands, the performance of these methods degrades because the underlying sparse regression problem is severely ill-posed. In this paper, we propose a local approach to cope with this difficulty. Fundamentally, we exploit the fact that real-world HSIs are locally low rank, that is, pixels acquired from a given spatial neighborhood span a very low-dimensional subspace/manifold, i.e., of dimension lower than or equal to the number of multispectral bands. Thus, we propose to partition the image into patches and solve the data fusion problem independently for each patch. This way, in each patch the subspace/manifold dimensionality is low enough that the problem is no longer ill-posed. We propose two alternative approaches to define the hyperspectral super-resolution through local dictionary learning using endmember induction algorithms. We also explore two alternatives to define the local regions, using sliding windows and binary partition trees. The effectiveness of the proposed approaches is illustrated with synthetic and semi-real data.
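
    The locally-low-rank premise can be checked numerically. Below is a minimal, hypothetical numpy sketch (not the authors' fusion algorithm): it partitions a synthetic hyperspectral cube into spatial patches and estimates the per-patch spectral subspace dimension by truncated SVD; all names and data are illustrative.

    ```python
    import numpy as np

    def local_rank_estimate(hsi, patch=8, energy=0.99):
        """Estimate the per-patch spectral subspace dimension of an HSI cube.

        hsi: (rows, cols, bands) array; patch: spatial patch size;
        energy: fraction of spectral energy the subspace must capture.
        """
        rows, cols, bands = hsi.shape
        ranks = []
        for r in range(0, rows - patch + 1, patch):
            for c in range(0, cols - patch + 1, patch):
                # Pixels of one spatial neighborhood as a (patch*patch, bands) matrix.
                block = hsi[r:r + patch, c:c + patch, :].reshape(-1, bands)
                s = np.linalg.svd(block - block.mean(0), compute_uv=False)
                cum = np.cumsum(s**2) / np.sum(s**2)
                ranks.append(int(np.searchsorted(cum, energy)) + 1)
        return np.array(ranks)

    # Synthetic cube: a few spectral signatures mixed locally -> low local rank.
    rng = np.random.default_rng(0)
    signatures = rng.random((4, 50))                   # 4 endmembers, 50 bands
    abund = rng.dirichlet(np.ones(4), size=(32, 32))   # per-pixel abundances
    cube = abund @ signatures + 0.001 * rng.standard_normal((32, 32, 50))
    print(local_rank_estimate(cube).max())   # small, near the number of endmembers
    ```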

  14. Relation between inflammables and ignition sources in aircraft environments

    NASA Technical Reports Server (NTRS)

    Scull, Wilfred E

    1951-01-01

    A literature survey was conducted to determine the relation between aircraft ignition sources and inflammables. Available literature applicable to the problem of aircraft fire hazards is analyzed and discussed. Data pertaining to the effect of many variables on ignition temperatures, minimum ignition pressures, minimum spark-ignition energies of inflammables, quenching distances of electrode configurations, and size of openings through which flame will not propagate are presented and discussed. Ignition temperatures and limits of inflammability of gasoline in air in different test environments, and the minimum ignition pressures and minimum size of opening for flame propagation in gasoline-air mixtures are included; inerting of gasoline-air mixtures is discussed.

  15. Multi-UAV Routing for Area Coverage and Remote Sensing with Minimum Time

    PubMed Central

    Avellar, Gustavo S. C.; Pereira, Guilherme A. S.; Pimenta, Luciano C. A.; Iscold, Paulo

    2015-01-01

    This paper presents a solution for the problem of minimum time coverage of ground areas using a group of unmanned air vehicles (UAVs) equipped with image sensors. The solution is divided into two parts: (i) the task modeling as a graph whose vertices are geographic coordinates determined in such a way that a single UAV would cover the area in minimum time; and (ii) the solution of a mixed integer linear programming problem, formulated according to the graph variables defined in the first part, to route the team of UAVs over the area. The main contribution of the proposed methodology, when compared with the traditional vehicle routing problem’s (VRP) solutions, is the fact that our method solves some practical problems only encountered during the execution of the task with actual UAVs. In this line, one of the main contributions of the paper is that the number of UAVs used to cover the area is automatically selected by solving the optimization problem. The number of UAVs is influenced by the vehicles’ maximum flight time and by the setup time, which is the time needed to prepare and launch a UAV. To illustrate the methodology, the paper presents experimental results obtained with two hand-launched, fixed-wing UAVs. PMID:26540055
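
    As a toy illustration of how a solver can pick the number of UAVs automatically, the sketch below sets up a deliberately simplified assignment MILP with the PuLP library (assumed available with its bundled CBC solver). It is not the paper's formulation: the sequential-launch setup delay is modeled crudely as one setup time per active UAV, and all data are hypothetical.

    ```python
    import pulp

    # Hypothetical data: flight minutes per coverage row, and per-UAV setup time.
    rows = {"r1": 9.0, "r2": 7.5, "r3": 6.0, "r4": 8.0}
    uavs = ["u1", "u2", "u3"]
    setup = 4.0          # minutes to prepare and launch one UAV
    max_flight = 18.0    # per-vehicle endurance

    prob = pulp.LpProblem("uav_min_time_coverage", pulp.LpMinimize)
    x = pulp.LpVariable.dicts("x", (rows, uavs), cat="Binary")  # row -> UAV
    y = pulp.LpVariable.dicts("y", uavs, cat="Binary")          # UAV used?
    T = pulp.LpVariable("T", lowBound=0)                        # completion time

    prob += T                                                   # minimize mission time
    for r in rows:
        prob += pulp.lpSum(x[r][u] for u in uavs) == 1          # cover every row
    for u in uavs:
        load = pulp.lpSum(rows[r] * x[r][u] for r in rows)
        prob += load <= max_flight * y[u]                       # endurance, usage link
        # Sequential launches: every flight ends after all setups, roughly.
        prob += T >= load + setup * pulp.lpSum(y[v] for v in uavs)

    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    print(pulp.value(T), [u for u in uavs if y[u].value() == 1])
    ```

    Adding a third UAV here would cost 4 more setup minutes than it saves in flight time, so the solver settles on two vehicles, which is the trade-off the abstract describes.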

  16. Minimizing the semantic gap in biomedical content-based image retrieval

    NASA Astrophysics Data System (ADS)

    Guan, Haiying; Antani, Sameer; Long, L. Rodney; Thoma, George R.

    2010-03-01

    A major challenge in biomedical Content-Based Image Retrieval (CBIR) is to achieve meaningful mappings that minimize the semantic gap between the high-level biomedical semantic concepts and the low-level visual features in images. This paper presents a comprehensive learning-based scheme toward meeting this challenge and improving retrieval quality. The article presents two algorithms: a learning-based feature selection and fusion algorithm and the Ranking Support Vector Machine (Ranking SVM) algorithm. The feature selection algorithm aims to select 'good' features and fuse them using different similarity measurements to provide a better representation of the high-level concepts with the low-level image features. Ranking SVM is applied to learn the retrieval rank function and associate the selected low-level features with query concepts, given the ground-truth ranking of the training samples. The proposed scheme addresses four major issues in CBIR to improve the retrieval accuracy: image feature extraction, selection and fusion, similarity measurements, the association of the low-level features with high-level concepts, and the generation of the rank function to support high-level semantic image retrieval. It models the relationship between semantic concepts and image features, and enables retrieval at the semantic level. We apply it to the problem of vertebra shape retrieval from a digitized spine x-ray image set collected by the second National Health and Nutrition Examination Survey (NHANES II). The experimental results show an improvement of up to 41.92% in the mean average precision (MAP) over conventional image similarity computation methods.
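
    A common way to realize Ranking SVM with off-the-shelf tools is the pairwise-difference transform, sketched below with scikit-learn's LinearSVC on hypothetical feature vectors; this shows the idea of learning a rank function from ordered pairs, not the paper's full CBIR pipeline.

    ```python
    import numpy as np
    from sklearn.svm import LinearSVC

    def pairwise_transform(X, ranks):
        """Turn ranked items into pairwise difference samples for RankSVM.

        For items i, j with different relevance, (x_i - x_j) is labeled +1
        when i should rank above j, and -1 otherwise.
        """
        diffs, labels = [], []
        for i in range(len(X)):
            for j in range(len(X)):
                if i == j or ranks[i] == ranks[j]:
                    continue
                diffs.append(X[i] - X[j])
                labels.append(1 if ranks[i] > ranks[j] else -1)
        return np.array(diffs), np.array(labels)

    # Hypothetical toy data: 3-D visual features, higher rank = more relevant.
    rng = np.random.default_rng(1)
    X = rng.standard_normal((30, 3))
    ranks = (X @ np.array([2.0, -1.0, 0.5])).argsort().argsort()  # ground-truth order

    D, y = pairwise_transform(X, ranks)
    model = LinearSVC(C=1.0, max_iter=10000).fit(D, y)  # linear RankSVM on pairs
    scores = X @ model.coef_.ravel()                    # learned rank function
    print(np.corrcoef(scores, ranks)[0, 1])             # typically close to 1
    ```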

  17. Astronauts' menu problem.

    NASA Technical Reports Server (NTRS)

    Lesso, W. G.; Kenyon, E.

    1972-01-01

    Consideration of the problems involved in choosing appropriate menus for astronauts carrying out SKYLAB missions lasting up to eight weeks. The problem of planning balanced menus on the basis of prepackaged food items within limitations on the intake of calories, protein, and certain elements is noted, as well as a number of other restrictions of both physical and arbitrary nature. The tailoring of a set of menus for each astronaut on the basis of subjective rankings of each food by the astronaut in terms of a 'measure of pleasure' is described, and a computer solution to this problem by means of a mixed integer programming code is presented.
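
    The menu-selection idea reduces, in miniature, to an integer program that maximizes preference under nutrient constraints. A hypothetical PuLP sketch follows (food names, scores, and limits are invented for illustration; this is not the SKYLAB data or the original mixed integer code):

    ```python
    import pulp

    # Hypothetical food items: (pleasure score, calories, protein grams).
    foods = {
        "beef_pot_roast": (9, 300, 25), "shrimp_cocktail": (8, 120, 18),
        "peanut_butter":  (6, 190,  8), "fruit_bar":       (5, 150,  2),
        "scrambled_eggs": (7, 160, 12), "butter_cookies":  (4, 220,  3),
    }

    prob = pulp.LpProblem("daily_menu", pulp.LpMaximize)
    pick = pulp.LpVariable.dicts("pick", foods, cat="Binary")

    # Maximize the astronaut's total 'measure of pleasure'.
    prob += pulp.lpSum(foods[f][0] * pick[f] for f in foods)
    prob += pulp.lpSum(foods[f][1] * pick[f] for f in foods) <= 800   # calorie cap
    prob += pulp.lpSum(foods[f][2] * pick[f] for f in foods) >= 40    # protein floor

    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    print([f for f in foods if pick[f].value() == 1])
    ```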

  18. Working on Extremum Problems with the Help of Dynamic Geometry Systems

    ERIC Educational Resources Information Center

    Gortcheva, Iordanka

    2013-01-01

    Two problems from high school mathematics on finding minimum or maximum are discussed. The focus is on students' approaches and difficulties in identifying a correct solution and how dynamic geometry systems can help.

  19. Target Coverage in Wireless Sensor Networks with Probabilistic Sensors

    PubMed Central

    Shan, Anxing; Xu, Xianghua; Cheng, Zongmao

    2016-01-01

    Sensing coverage is a fundamental problem in wireless sensor networks (WSNs) and has attracted considerable attention. Conventional research on this topic focuses on the 0/1 coverage model, which is only a coarse approximation of the practical sensing model. In this paper, we study the target coverage problem, where the objective is to find the minimum number of sensor nodes in randomly-deployed WSNs based on the probabilistic sensing model. We analyze the joint detection probability of a target with multiple sensors. Based on the theoretical analysis of the detection probability, we formulate the minimum ϵ-detection coverage problem. We prove that the minimum ϵ-detection coverage problem is NP-hard and present an approximation algorithm called the Probabilistic Sensor Coverage Algorithm (PSCA) with provable approximation ratios. To evaluate our design, we analyze the performance of PSCA theoretically and also perform extensive simulations to demonstrate the effectiveness of our proposed algorithm. PMID:27618902
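
    For intuition, the joint detection probability of a target seen by independent sensors is 1 − ∏(1 − p_i). The sketch below is a simple greedy heuristic built on that identity (not the paper's PSCA algorithm): it adds sensors until an ε threshold is met.

    ```python
    def greedy_eps_coverage(probs, eps):
        """Select sensors until the joint detection probability
        1 - prod(1 - p_i) reaches eps; a simple greedy heuristic,
        not the paper's PSCA algorithm."""
        chosen, miss = [], 1.0          # miss: prob. that all chosen sensors miss
        for i in sorted(range(len(probs)), key=lambda i: -probs[i]):
            if 1.0 - miss >= eps:
                break
            chosen.append(i)
            miss *= 1.0 - probs[i]
        if 1.0 - miss < eps:
            raise ValueError("target cannot be eps-covered by the available sensors")
        return chosen

    print(greedy_eps_coverage([0.3, 0.8, 0.5, 0.4], eps=0.9))  # -> [1, 2]
    ```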

  20. An Effective Evolutionary Approach for Bicriteria Shortest Path Routing Problems

    NASA Astrophysics Data System (ADS)

    Lin, Lin; Gen, Mitsuo

    The routing problem is one of the important research issues in the field of communication networks. In this paper, we consider a bicriteria shortest path routing (bSPR) model dedicated to calculating nondominated paths for (1) minimum total cost and (2) minimum transmission delay. To solve this bSPR problem, we propose a new multiobjective genetic algorithm (moGA) with: (1) an efficient chromosome representation using the priority-based encoding method; (2) a new auto-tuning operator for the GA parameters, which adaptively regulates exploration and exploitation based on the change in the average fitness of parents and offspring at each generation; and (3) an interactive adaptive-weight fitness assignment mechanism that assigns weights to each objective and combines the weighted objectives into a single objective function. Numerical experiments on network design problems of various scales show the effectiveness and efficiency of our approach in comparison with recent research.
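
    Priority-based encoding can be illustrated with a short decoder: given per-node priorities (the chromosome), grow a path from source to sink by always moving to the highest-priority unvisited neighbor. A minimal sketch under that assumption, ignoring the repair mechanisms a full moGA would need:

    ```python
    def decode_priority_path(priorities, adjacency, source, sink):
        """Decode a priority vector into a path (priority-based encoding).

        Starting at `source`, repeatedly move to the unvisited neighbor with
        the highest priority until `sink` is reached; dead-end repair, which
        a complete GA would apply, is omitted here.
        """
        path, node, visited = [source], source, {source}
        while node != sink:
            candidates = [n for n in adjacency[node] if n not in visited]
            if not candidates:
                return None                      # dead end; the GA would penalize
            node = max(candidates, key=lambda n: priorities[n])
            visited.add(node)
            path.append(node)
        return path

    adjacency = {0: [1, 2], 1: [2, 3], 2: [3], 3: []}
    print(decode_priority_path([0, 3, 9, 1], adjacency, source=0, sink=3))  # [0, 2, 3]
    ```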

  1. A MATLAB implementation of the minimum relative entropy method for linear inverse problems

    NASA Astrophysics Data System (ADS)

    Neupauer, Roseanna M.; Borchers, Brian

    2001-08-01

    The minimum relative entropy (MRE) method can be used to solve linear inverse problems of the form Gm = d, where m is a vector of unknown model parameters and d is a vector of measured data. The MRE method treats the elements of m as random variables, and obtains a multivariate probability density function for m. The probability density function is constrained by prior information about the upper and lower bounds of m, a prior expected value of m, and the measured data. The solution of the inverse problem is the expected value of m, based on the derived probability density function. We present a MATLAB implementation of the MRE method. Several numerical issues arise in the implementation of the MRE method and are discussed here. We present the source history reconstruction problem from groundwater hydrology as an example of the MRE implementation.
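
    The Gm = d setup invites a tiny runnable illustration. The sketch below is not the MRE method (whose entropic posterior is more involved); it solves the same kind of bound-constrained linear inverse problem with SciPy's bounded least squares, on synthetic, hypothetical data.

    ```python
    import numpy as np
    from scipy.optimize import lsq_linear

    # Hypothetical discretized source-history problem: G maps a release
    # history m to downstream concentration measurements d.
    rng = np.random.default_rng(2)
    n = 40
    G = np.tril(rng.random((n, n))) / n          # causal mixing kernel
    m_true = np.exp(-0.5 * ((np.arange(n) - 15) / 4.0) ** 2)
    d = G @ m_true + 0.01 * rng.standard_normal(n)

    # Bound-constrained least squares: not MRE, but it enforces the same
    # kind of prior lower/upper bounds on the model parameters.
    res = lsq_linear(G, d, bounds=(0.0, 1.0))
    print(np.round(res.x[10:20], 2))             # recovered release pulse
    ```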

  2. Annealing Ant Colony Optimization with Mutation Operator for Solving TSP.

    PubMed

    Mohsen, Abdulqader M

    2016-01-01

    Ant Colony Optimization (ACO) has been successfully applied to solve a wide range of combinatorial optimization problems such as the minimum spanning tree, the traveling salesman problem, and the quadratic assignment problem. Basic ACO has the drawbacks of becoming trapped in local minima and a low convergence rate. Simulated annealing (SA) and the mutation operator provide jumping ability and global convergence, while local search can speed up convergence. Therefore, this paper proposes a hybrid ACO algorithm integrating the advantages of ACO, SA, the mutation operator, and a local search procedure to solve the traveling salesman problem. The core of the algorithm is based on ACO. SA and the mutation operator are used to increase the diversity of the ant population from time to time, and local search is used to exploit the current search area efficiently. Comparative experiments, using 24 TSP instances from TSPLIB, show that the proposed algorithm outperformed some well-known algorithms in the literature in terms of solution quality.
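
    Two of the hybrid's ingredients, the SA acceptance rule and local search via 2-opt reversal, are easy to show in isolation. The sketch below is just those components on a random instance, not the full ACO hybrid; all parameters are illustrative.

    ```python
    import math, random

    def tour_length(tour, dist):
        return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

    def sa_two_opt(dist, iters=20000, t0=10.0, alpha=0.9995, seed=0):
        """Simulated annealing with 2-opt moves for the TSP."""
        rnd = random.Random(seed)
        n = len(dist)
        tour = list(range(n))
        cur_len = tour_length(tour, dist)
        best, best_len, t = tour[:], cur_len, t0
        for _ in range(iters):
            i, j = sorted(rnd.sample(range(n), 2))
            cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]   # 2-opt reversal
            delta = tour_length(cand, dist) - cur_len
            if delta < 0 or rnd.random() < math.exp(-delta / t):   # SA acceptance
                tour, cur_len = cand, cur_len + delta
                if cur_len < best_len:
                    best, best_len = tour[:], cur_len
            t *= alpha                                             # cool down
        return best, best_len

    # Random symmetric instance with 25 cities.
    rnd = random.Random(42)
    pts = [(rnd.random(), rnd.random()) for _ in range(25)]
    dist = [[math.dist(p, q) for q in pts] for p in pts]
    print(round(sa_two_opt(dist)[1], 3))
    ```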

  3. Approximate Solutions for a Self-Folding Problem of Carbon Nanotubes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Y Mikata

    2006-08-22

    This paper treats approximate solutions for a self-folding problem of carbon nanotubes. It has been observed in molecular dynamics calculations [1] that a carbon nanotube with a large aspect ratio can self-fold due to van der Waals force between the parts of the same carbon nanotube. The main issue in the self-folding problem is to determine the minimum threshold length of the carbon nanotube at which it becomes possible for the carbon nanotube to self-fold due to the van der Waals force. An approximate mathematical model based on the force method is constructed for the self-folding problem of carbon nanotubes, and it is solved exactly as an elastica problem using elliptic functions. Additionally, three other mathematical models are constructed based on the energy method. As a particular example, the lower and upper estimates for the critical threshold (minimum) length are determined based on both methods for the (5,5) armchair carbon nanotube.

  4. Dental Continuing Education Preference Survey

    DTIC Science & Technology

    1992-06-01

    Subjects (clinical dentistry): Dx/Tx of TMJ problems, equilibration, discolored teeth, facial pain, health/nutrition, orofacial infections, endodontic failures, perio-endo ... Diagnosis/Treatment of Orofacial Infections. Of these six subjects, Medical Emergencies was ranked as a topic most needed by slightly over 36% of the ... Treatment of TMJ Problems, and Diagnosis/Treatment of Orofacial Infections among the top six topics. A "high need" for Oral Surgery topics was perceived by ...

  5. Reduction theorems for optimal unambiguous state discrimination of density matrices

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Raynal, Philippe; Luetkenhaus, Norbert; Enk, Steven J. van

    2003-08-01

    We present reduction theorems for the problem of optimal unambiguous state discrimination of two general density matrices. We show that this problem can be reduced to that of two density matrices that have the same rank n and are described in a Hilbert space of dimension 2n. We also show how to use the reduction theorems to discriminate unambiguously between N mixed states (N ≥ 2).

  6. Regularity Aspects in Inverse Musculoskeletal Biomechanics

    NASA Astrophysics Data System (ADS)

    Lund, Marie; Ståhl, Fredrik; Gulliksson, Mårten

    2008-09-01

    Inverse simulations of musculoskeletal models compute internal forces, such as muscle and joint reaction forces, which are hard to measure, using the more easily measured motion and external forces as input data. Because of the difficulties of measuring muscle forces and joint reactions, simulations are hard to validate. One way of reducing errors in the simulations is to ensure that the mathematical problem is well-posed. This paper presents a study of regularity aspects for an inverse simulation method, often called forward dynamics or dynamical optimization, that takes into account both measurement errors and muscle dynamics. Regularity is examined for a test problem around the optimum using the approximated quadratic problem. The results show improved rank when a regularization term that handles the mechanical over-determinacy is included in the objective. Using the 3-element Hill muscle model, the chosen regularization term is the norm of the activation. To make the problem full rank, only the excitation bounds should be included in the constraints. However, this results in small negative values of the activation, which indicates that muscles are pushing rather than pulling; this is unrealistic, but the error may be small enough to be accepted for specific applications. These results are a start toward ensuring better results of inverse musculoskeletal simulations from a numerical point of view.
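
    The underlying numerical point can be shown on a generic redundant linear system (not the paper's musculoskeletal model): adding a norm penalty to a rank-deficient least-squares problem makes the normal-equation matrix full rank. A minimal numpy sketch with invented values:

    ```python
    import numpy as np

    # A redundant 'muscle sharing' system: more unknowns than equations,
    # so the unregularized normal equations are rank deficient.
    A = np.array([[1.0, 1.0, 0.0],
                  [0.0, 1.0, 1.0]])      # 2 equations, 3 muscle activations
    b = np.array([1.0, 0.5])

    lam = 1e-2                           # weight on the activation-norm term
    # Normal equations of min ||A a - b||^2 + lam * ||a||^2:
    H = A.T @ A + lam * np.eye(3)        # full rank for any lam > 0
    a = np.linalg.solve(H, A.T @ b)
    print(np.round(a, 3),
          np.linalg.matrix_rank(A.T @ A),  # 2: rank deficient
          np.linalg.matrix_rank(H))        # 3: full rank
    ```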

  7. The constraints satisfaction problem approach in the design of an architectural functional layout

    NASA Astrophysics Data System (ADS)

    Zawidzki, Machi; Tateyama, Kazuyoshi; Nishikawa, Ikuko

    2011-09-01

    A design support system with a new strategy for finding the optimal functional configurations of rooms for architectural layouts is presented. A set of configurations satisfying given constraints is generated and ranked according to multiple objectives. The method can be applied to problems in architectural practice, urban design, or graphic design, wherever the allocation of related geometrical elements of known shape is optimized. Although the methodology is shown using simplified examples (a single-story residential building with two apartments, each having two rooms), the results resemble realistic functional layouts. One example of a practical-size problem, a layout of three apartments with a total of 20 rooms, is demonstrated, where the generated solution can be used as a base for a realistic architectural blueprint. The discretization of the design space is discussed, followed by the application of a backtrack search algorithm used for generating a set of potentially 'good' room configurations. Next, the solutions are classified by a machine learning method (FFN) as 'proper' or 'improper' according to internal communication criteria. Examples of interactive ranking of the 'proper' configurations according to multiple criteria and choosing 'the best' ones are presented. The proposed framework is general and universal: the criteria, parameters and weights can be individually defined by a user, and the search algorithm can be adjusted to a specific problem.

  8. An analysis of hypercritical states in elastic and inelastic systems

    NASA Astrophysics Data System (ADS)

    Kowalczk, Maciej

    The author raises a wide range of problems whose common characteristic is an analysis of hypercritical states in elastic and inelastic systems. The article consists of two basic parts. The first part primarily discusses problems of modelling hypercritical states, while the second analyzes numerical methods (so-called continuation methods) used to solve non-linear problems. The original approaches for modelling hypercritical states found in this article include the combination of plasticity theory and an energy condition for cracking, accounting for the variability and cyclical nature of the forms of fracture of a brittle material under a die, and the combination of plasticity theory and a simplified description of the phenomenon of localization along a discontinuity line. The author presents analytical solutions of three non-linear problems for systems made of elastic/brittle/plastic and elastic/ideally plastic materials. The author proceeds to discuss the analytical basics of continuation methods, analyzes the significance of the parameterization of non-linear problems, provides a method for selecting control parameters based on an analysis of the rank of a rectangular matrix of a uniform system of increment equations, and also provides a new method for selecting an equilibrium path originating from a bifurcation point. The author gives a general outline of continuation methods based on an analysis of the rank of a matrix of a corrective system of equations, and supplements the theoretical solutions with numerical solutions of non-linear problems for rod systems and for the plastic disintegration of a notched rectangular plastic plate.

  9. Ascent trajectory optimization for stratospheric airship with thermal effects

    NASA Astrophysics Data System (ADS)

    Guo, Xiao; Zhu, Ming

    2013-09-01

    Ascent trajectory optimization with thermal effects is addressed for a stratospheric airship. Basic thermal characteristics of the stratospheric airship are introduced, and the airship's equations of motion are constructed to include aerodynamic force, added mass, and wind profiles developed from a horizontal-wind model. For both minimum-time and minimum-energy flights during ascent, the trajectory optimization problem is described with path and terminal constraints in different scenarios and is then converted into a parameter optimization problem by a direct collocation method. Sparse Nonlinear OPTimizer (SNOPT) is employed as the nonlinear programming solver, and two scenarios are adopted. The solutions obtained illustrate that the trajectories are greatly affected by thermal behavior, which prolongs daytime minimum-time flights by about 20.8% compared with nighttime flights in scenario 1 and by about 10.5% in scenario 2; the same trend holds for minimum-energy flights. For the energy consumption of minimum-time flights, a 6% decrease is obtained in scenario 1 and a 5% decrease in scenario 2, whereas only a small reduction in energy consumption is achieved for minimum-energy flights. Solar radiation is the principal component, and the natural wind also affects the thermal behavior of the stratospheric airship during ascent. The relationship between take-off time and ascent performance is discussed; it is found that take-off at dusk is the best choice for a stratospheric airship. In addition, to save energy, the airship prefers to fly downwind.

  10. The spanwise distribution of lift for minimum induced drag of wings having a given lift and a given bending moment

    NASA Technical Reports Server (NTRS)

    Jones, R. T.

    1950-01-01

    The problem of the minimum induced drag of wings having a given lift and a given span is extended to include cases in which the bending moment to be supported by the wing is also given. The theory is limited to lifting surfaces traveling at subsonic speeds. It is found that the required shape of the downwash distribution can be obtained in an elementary way which is applicable to a variety of such problems. Expressions for the minimum drag and the corresponding spanwise load distributions are also given for the case in which the lift and the bending moment about the wing root are fixed while the span is allowed to vary. The results show a 15-percent reduction of the induced drag with a 15-percent increase in span as compared with results for an elliptically loaded wing having the same total lift and bending moment.

  11. An indirect method for numerical optimization using the Kreisselmeir-Steinhauser function

    NASA Technical Reports Server (NTRS)

    Wrenn, Gregory A.

    1989-01-01

    A technique is described for converting a constrained optimization problem into an unconstrained problem. The technique transforms one or more objective functions into reduced objective functions, which are analogous to the goal constraints used in the goal programming method. These reduced objective functions are appended to the set of constraints, and an envelope of the entire function set is computed using the Kreisselmeir-Steinhauser function. This envelope function is then searched for an unconstrained minimum. The technique may be categorized as a sequential unconstrained minimization technique (SUMT). Advantages of this approach are the use of unconstrained optimization methods to find a constrained minimum without the draw-down factor typical of penalty function methods, and that the technique may be started from the feasible or infeasible design space. In multiobjective applications, the approach has the advantage of locating a compromise minimum design without the need to optimize for each individual objective function separately.
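
    The Kreisselmeier-Steinhauser function itself is KS(g) = (1/ρ) ln Σ_i exp(ρ g_i), a smooth, conservative envelope of the component functions that tightens as ρ grows. A small sketch of the envelope idea on a one-variable toy problem (the reduced objective and goal value are invented for illustration, not taken from the report):

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def ks(values, rho=50.0):
        """Kreisselmeier-Steinhauser envelope: a smooth, conservative
        approximation of max(values) that tightens as rho grows."""
        m = np.max(values)                       # shift for numerical stability
        return m + np.log(np.sum(np.exp(rho * (np.asarray(values) - m)))) / rho

    # Toy problem: minimize (x-2)^2 subject to g(x) = x - 1 <= 0; the
    # constrained optimum is x = 1 with objective value 1.
    def envelope(x, f_goal=1.0, rho=50.0):
        f_reduced = (x[0] - 2.0) ** 2 - f_goal   # objective recast as a goal constraint
        g = x[0] - 1.0                           # original inequality constraint
        return ks([f_reduced, g], rho)           # single unconstrained function

    res = minimize(envelope, x0=[0.0], method="BFGS")
    print(res.x)   # lands close to the constrained minimum x = 1
    ```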

  12. Multipoint Optimal Minimum Entropy Deconvolution and Convolution Fix: Application to vibration fault detection

    NASA Astrophysics Data System (ADS)

    McDonald, Geoff L.; Zhao, Qing

    2017-01-01

    Minimum Entropy Deconvolution (MED) has been applied successfully to rotating machine fault detection from vibration data, however this method has limitations. A convolution adjustment to the MED definition and solution is proposed in this paper to address the discontinuity at the start of the signal - in some cases causing spurious impulses to be erroneously deconvolved. A problem with the MED solution is that it is an iterative selection process, and will not necessarily design an optimal filter for the posed problem. Additionally, the problem goal in MED prefers to deconvolve a single-impulse, while in rotating machine faults we expect one impulse-like vibration source per rotational period of the faulty element. Maximum Correlated Kurtosis Deconvolution was proposed to address some of these problems, and although it solves the target goal of multiple periodic impulses, it is still an iterative non-optimal solution to the posed problem and only solves for a limited set of impulses in a row. Ideally, the problem goal should target an impulse train as the output goal, and should directly solve for the optimal filter in a non-iterative manner. To meet these goals, we propose a non-iterative deconvolution approach called Multipoint Optimal Minimum Entropy Deconvolution Adjusted (MOMEDA). MOMEDA proposes a deconvolution problem with an infinite impulse train as the goal and the optimal filter solution can be solved for directly. From experimental data on a gearbox with and without a gear tooth chip, we show that MOMEDA and its deconvolution spectrums according to the period between the impulses can be used to detect faults and study the health of rotating machine elements effectively.
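
    The non-iterative spirit of MOMEDA can be approximated in a few lines: build lagged windows of the signal and solve a regularized least-squares problem so the filtered output matches a periodic impulse train. This is a simplified analogue, not the paper's exact derivation; the signal, period, and filter length below are hypothetical.

    ```python
    import numpy as np
    from numpy.lib.stride_tricks import sliding_window_view

    def periodic_deconv_filter(x, L, period, eps=1e-8):
        """Closed-form filter pushing the output toward a periodic impulse train.

        Builds lagged windows W of the signal x and solves the normal
        equations so that W @ f best matches a target train t; a least-squares
        analogue of MOMEDA's non-iterative idea, with details simplified."""
        W = sliding_window_view(x, L)            # (N - L + 1, L) lagged windows
        t = np.zeros(W.shape[0])
        t[::period] = 1.0                        # desired impulse-train output
        f = np.linalg.solve(W.T @ W + eps * np.eye(L), W.T @ t)
        return f, W @ f

    # Synthetic fault-like signal: periodic impulses smeared by a short filter.
    rng = np.random.default_rng(3)
    impulses = np.zeros(1000)
    impulses[::50] = 1.0
    x = np.convolve(impulses, [1.0, 0.7, 0.3], mode="same")
    x += 0.05 * rng.standard_normal(1000)

    f, y = periodic_deconv_filter(x, L=30, period=50)
    print(np.argsort(y)[-3:])   # strongest outputs sit near multiples of 50
    ```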

  13. Security techniques for prevention of rank manipulation in social tagging services including robotic domains.

    PubMed

    Choi, Okkyung; Jung, Hanyoung; Moon, Seungbin

    2014-01-01

    With smartphone distribution becoming common and robotic applications on the rise, social tagging services for various applications, including robotic domains, have advanced significantly. Though social tagging plays an important role when users are finding exact information through web search, the reliability and semantic relation between web contents and tags are not considered. Spammers make ill use of this aspect, deliberately putting irrelevant tags on contents to induce users toward advertised contents when they click items in search results. Therefore, this study proposes a detection method for tag-ranking manipulation to solve the problem that existing methods cannot guarantee the reliability of tagging. Similarity is measured to rank the grade of tags registered on the contents, and the weighted value of each tag is computed by means of synonym relevance, frequency, and semantic distances between tags. Lastly, experimental evaluation results are provided, and the method's efficiency and accuracy are verified through them.

  14. Non-Convex Sparse and Low-Rank Based Robust Subspace Segmentation for Data Mining.

    PubMed

    Cheng, Wenlong; Zhao, Mingbo; Xiong, Naixue; Chui, Kwok Tai

    2017-07-15

    Parsimony, including sparsity and low-rank, has shown great importance for data mining in social networks, particularly in tasks such as segmentation and recognition. Traditionally, such modeling approaches rely on an iterative algorithm that minimizes an objective function with convex ℓ1-norm or nuclear-norm constraints. However, the obtained results by convex optimization are usually suboptimal to solutions of original sparse or low-rank problems. In this paper, a novel robust subspace segmentation algorithm has been proposed by integrating ℓp-norm and Schatten p-norm constraints. Our so-obtained affinity graph can better capture local geometrical structure and the global information of the data. As a consequence, our algorithm is more generative, discriminative and robust. An efficient linearized alternating direction method is derived to realize our model. Extensive segmentation experiments are conducted on public datasets. The proposed algorithm is revealed to be more effective and robust compared to five existing algorithms.

  15. Dual Low-Rank Pursuit: Learning Salient Features for Saliency Detection.

    PubMed

    Lang, Congyan; Feng, Jiashi; Feng, Songhe; Wang, Jingdong; Yan, Shuicheng

    2016-06-01

    Saliency detection is an important procedure for machines to understand the visual world as humans do. In this paper, we consider a specific saliency detection problem of predicting human eye fixations when they freely view natural images, and propose a novel dual low-rank pursuit (DLRP) method. DLRP learns saliency-aware feature transformations by utilizing available supervision information and constructs discriminative bases for effectively detecting human fixation points under the popular low-rank and sparsity-pursuit framework. Benefiting from the embedded high-level information in the supervised learning process, DLRP is able to predict fixations accurately without performing the expensive object segmentation as in previous works. Comprehensive experiments clearly show the superiority of the proposed DLRP method over the established state-of-the-art methods. We also empirically demonstrate that DLRP provides stronger generalization performance across different data sets and inherits the advantages of both the bottom-up- and top-down-based saliency detection methods.

  16. Knowledge extraction from evolving spiking neural networks with rank order population coding.

    PubMed

    Soltic, Snjezana; Kasabov, Nikola

    2010-12-01

    This paper demonstrates how knowledge can be extracted from evolving spiking neural networks with rank order population coding. Knowledge discovery is a very important feature of intelligent systems. Yet, a disproportionately small amount of research is centered on the issue of knowledge extraction from spiking neural networks, which are considered to be the third generation of artificial neural networks. The lack of knowledge representation compatibility is becoming a major detriment to end users of these networks. We show that high-level knowledge can be obtained from evolving spiking neural networks. More specifically, we propose a method for fuzzy rule extraction from an evolving spiking network with rank order population coding. The proposed method was used for knowledge discovery on two benchmark taste recognition problems, where the knowledge learnt by an evolving spiking neural network was extracted in the form of zero-order Takagi-Sugeno fuzzy IF-THEN rules.

  17. Multicriteria ranking of workplaces regarding working conditions in a mining company.

    PubMed

    Bogdanović, Dejan; Stanković, Vladimir; Urošević, Snežana; Stojanović, Miloš

    2016-12-01

    Ranking of workplaces with respect to working conditions is very significant for each company. It indicates the positions where employees are most exposed to adverse effects resulting from the working environment, which endangers their health. This article presents the results obtained for 12 different production workplaces in the copper mining and smelting complex RTB Bor - 'Veliki Krivelj' open pit, based on six regularly measured parameters that define the following working environment conditions: air temperature, light, noise, dustiness, chemical hazards and vibrations. The ranking of workplaces has been performed by PROMETHEE/GAIA. Additional optimization of workplaces is done by PROMETHEE V with the given limits related to maximum permitted values for working environment parameters. The obtained results indicate that the most difficult workplace is at the excavation location (excavator operator). This method can be successfully used for solving similar kinds of problems, in order to improve working conditions.
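
    PROMETHEE II's ranking step reduces to computing net outranking flows from pairwise preference comparisons. A generic sketch with a linear preference function and invented workplace data (three criteria where larger values are worse); the weights and thresholds are illustrative, not the study's:

    ```python
    import numpy as np

    def promethee_ii(matrix, weights, p_threshold):
        """Net outranking flows (PROMETHEE II) with linear preference functions.

        matrix: alternatives x criteria, larger = worse here (noise, dust, ...),
        so the preference of a over b on a criterion grows with
        matrix[b] - matrix[a].
        """
        m = matrix.shape[0]
        pi = np.zeros((m, m))                      # aggregated preference indices
        for a in range(m):
            for b in range(m):
                d = matrix[b] - matrix[a]          # a preferred where its value is lower
                p = np.clip(d / p_threshold, 0.0, 1.0)   # linear preference in [0, 1]
                pi[a, b] = np.dot(weights, p)
        phi_plus = pi.sum(axis=1) / (m - 1)        # how strongly a outranks the rest
        phi_minus = pi.sum(axis=0) / (m - 1)       # how strongly a is outranked
        return phi_plus - phi_minus                # net flow: higher = better conditions

    # Hypothetical workplaces x (noise dB, dust mg/m3, vibration m/s2).
    data = np.array([[85.0, 4.0, 0.9],
                     [78.0, 2.5, 0.4],
                     [92.0, 6.0, 1.2]])
    weights = np.array([0.4, 0.4, 0.2])
    print(promethee_ii(data, weights, p_threshold=np.array([10.0, 3.0, 0.5])))
    ```

    The lowest net flow then flags the most difficult workplace, which is how such a ranking singles out positions needing intervention.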

  18. Optimization in First Semester Calculus: A Look at a Classic Problem

    ERIC Educational Resources Information Center

    LaRue, Renee; Infante, Nicole Engelke

    2015-01-01

    Optimization problems in first semester calculus have historically been a challenge for students. Focusing on the classic optimization problem of finding the minimum amount of fencing required to enclose a fixed area, we examine students' activity through the lens of Tall and Vinner's concept image and Carlson and Bloom's multidimensional…
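
    For context, the classic problem referenced here has a short closed-form solution. A standard worked version for a rectangular pen of fixed area A:

    ```latex
    \text{Minimize the fencing } P(x) = 2x + \frac{2A}{x}, \quad x > 0,
    \quad\text{where } y = \frac{A}{x} \text{ so that the area } xy = A \text{ is fixed.}

    P'(x) = 2 - \frac{2A}{x^{2}} = 0 \;\Longrightarrow\; x = \sqrt{A},
    \qquad P''(x) = \frac{4A}{x^{3}} > 0 \;\text{(a minimum)}.

    \text{Hence } y = \sqrt{A} \text{ and } P_{\min} = 4\sqrt{A}:
    \text{ the optimal rectangle is a square.}
    ```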

  19. An historical survey of computational methods in optimal control.

    NASA Technical Reports Server (NTRS)

    Polak, E.

    1973-01-01

    Review of some of the salient theoretical developments in the specific area of optimal control algorithms. The first algorithms for optimal control were aimed at unconstrained problems and were derived by using first- and second-variation methods of the calculus of variations. These methods have subsequently been recognized as gradient, Newton-Raphson, or Gauss-Newton methods in function space. A much more recent addition to the arsenal of unconstrained optimal control algorithms are several variations of conjugate-gradient methods. At first, constrained optimal control problems could only be solved by exterior penalty function methods. Later algorithms specifically designed for constrained problems have appeared. Among these are methods for solving the unconstrained linear quadratic regulator problem, as well as certain constrained minimum-time and minimum-energy problems. Differential-dynamic programming was developed from dynamic programming considerations. The conditional-gradient method, the gradient-projection method, and a couple of feasible directions methods were obtained as extensions or adaptations of related algorithms for finite-dimensional problems. Finally, the so-called epsilon-methods combine the Ritz method with penalty function techniques.

  20. Heuristics for Multiobjective Optimization of Two-Sided Assembly Line Systems

    PubMed Central

    Jawahar, N.; Ponnambalam, S. G.; Sivakumar, K.; Thangadurai, V.

    2014-01-01

    Products such as cars, trucks, and heavy machinery are assembled on two-sided assembly lines. Assembly line balancing has significant impact on the performance and productivity of flow-line manufacturing systems and has been an active research area for several decades. This paper addresses the line balancing problem of a two-sided assembly line in which tasks are to be assigned to the left (L) side, the right (R) side, or either side (denoted E). Two objectives, the minimum number of workstations and the minimum unbalance time among workstations, have been considered for balancing the assembly line. There are two approaches to solving a multiobjective optimization problem: the first combines all the objectives into a single composite function or moves all but one objective to the constraint set; the second determines the Pareto optimal solution set, as illustrated in the sketch below. This paper proposes two heuristics to evolve the optimal Pareto front for the TALBP under consideration: an Enumerative Heuristic Algorithm (EHA) to handle problems of small and medium size, and a Simulated Annealing Algorithm (SAA) for large-sized problems. The proposed approaches are illustrated with example problems, and their performance is compared on a set of test problems. PMID:24790568
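
    Determining the Pareto optimal solution set amounts to filtering out dominated solutions. A minimal sketch over invented (workstations, unbalance time) pairs, with both objectives minimized:

    ```python
    def pareto_front(solutions):
        """Filter (num_stations, unbalance_time) pairs down to the
        nondominated set; both objectives are minimized."""
        front = []
        for s in solutions:
            # s is dominated if some other solution o is no worse in both
            # objectives and differs from s (i.e., strictly better somewhere).
            if any(o[0] <= s[0] and o[1] <= s[1] and o != s for o in solutions):
                continue
            front.append(s)
        return sorted(set(front))

    # Hypothetical line-balancing outcomes: (workstations, unbalance time).
    sols = [(6, 14.0), (7, 9.0), (6, 11.5), (8, 9.0), (7, 12.0), (9, 7.5)]
    print(pareto_front(sols))   # [(6, 11.5), (7, 9.0), (9, 7.5)]
    ```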
