Single-machine common/slack due window assignment problems with linear decreasing processing times
NASA Astrophysics Data System (ADS)
Zhang, Xingong; Lin, Win-Chin; Wu, Wen-Hsiang; Wu, Chin-Chia
2017-08-01
This paper studies linear non-increasing processing times and the common/slack due window assignment problems on a single machine, where the actual processing time of a job is a linear non-increasing function of its starting time. The aim is to minimize the sum of the earliness cost, tardiness cost, due window location and due window size. Some optimality results are discussed for the common/slack due window assignment problems and two O(n log n) time algorithms are presented to solve the two problems. Finally, two examples are provided to illustrate the correctness of the corresponding algorithms.
Generalised Assignment Matrix Methodology in Linear Programming
ERIC Educational Resources Information Center
Jerome, Lawrence
2012-01-01
Discrete Mathematics instructors and students have long been struggling with various labelling and scanning algorithms for solving many important problems. This paper shows how to solve a wide variety of Discrete Mathematics and OR problems using assignment matrices and linear programming, specifically using Excel Solvers although the same…
Integer Linear Programming for Constrained Multi-Aspect Committee Review Assignment
Karimzadehgan, Maryam; Zhai, ChengXiang
2011-01-01
Automatic review assignment can significantly improve the productivity of many people such as conference organizers, journal editors and grant administrators. A general setup of the review assignment problem involves assigning a set of reviewers on a committee to a set of documents to be reviewed under the constraint of review quota so that the reviewers assigned to a document can collectively cover multiple topic aspects of the document. No previous work has addressed such a setup of committee review assignments while also considering matching multiple aspects of topics and expertise. In this paper, we tackle the problem of committee review assignment with multi-aspect expertise matching by casting it as an integer linear programming problem. The proposed algorithm can naturally accommodate any probabilistic or deterministic method for modeling multiple aspects to automate committee review assignments. Evaluation using a multi-aspect review assignment test set constructed using ACM SIGIR publications shows that the proposed algorithm is effective and efficient for committee review assignments based on multi-aspect expertise matching. PMID:22711970
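For illustration only (not the authors' exact formulation), the quota and aspect-coverage constraints described above can be sketched as a small integer linear program with the PuLP modelling library. The reviewers, papers, coverage scores and quota values below are hypothetical placeholders.

    # A minimal ILP sketch of committee review assignment with a review quota
    # per reviewer and a fixed committee size per paper; coverage scores would
    # in practice come from a topic model, here they are made-up numbers.
    import pulp

    reviewers = ["r1", "r2", "r3"]
    papers = ["p1", "p2"]
    aspects = ["a1", "a2"]
    quota = 2        # maximum papers per reviewer
    per_paper = 2    # reviewers required per paper

    # coverage[r][p][a]: how well reviewer r covers aspect a of paper p (hypothetical)
    coverage = {
        "r1": {"p1": {"a1": 0.9, "a2": 0.1}, "p2": {"a1": 0.3, "a2": 0.2}},
        "r2": {"p1": {"a1": 0.2, "a2": 0.8}, "p2": {"a1": 0.6, "a2": 0.4}},
        "r3": {"p1": {"a1": 0.4, "a2": 0.3}, "p2": {"a1": 0.5, "a2": 0.9}},
    }

    prob = pulp.LpProblem("committee_review_assignment", pulp.LpMaximize)
    x = pulp.LpVariable.dicts("x", (reviewers, papers), cat="Binary")

    # objective: total aspect coverage of the assigned committees
    prob += pulp.lpSum(coverage[r][p][a] * x[r][p]
                       for r in reviewers for p in papers for a in aspects)

    for r in reviewers:                      # review quota per reviewer
        prob += pulp.lpSum(x[r][p] for p in papers) <= quota
    for p in papers:                         # committee size per paper
        prob += pulp.lpSum(x[r][p] for r in reviewers) == per_paper

    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    print([(r, p) for r in reviewers for p in papers if x[r][p].value() == 1])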
Fleet Assignment Using Collective Intelligence
NASA Technical Reports Server (NTRS)
Antoine, Nicolas E.; Bieniawski, Stefan R.; Kroo, Ilan M.; Wolpert, David H.
2004-01-01
Product distribution theory is a new collective intelligence-based framework for analyzing and controlling distributed systems. Its usefulness in distributed stochastic optimization is illustrated here through an airline fleet assignment problem. This problem involves the allocation of aircraft to a set of flight legs in order to meet passenger demand, while satisfying a variety of linear and non-linear constraints. Over the course of the day, the routing of each aircraft is determined in order to minimize the number of required flights for a given fleet. The associated flow continuity and aircraft count constraints have led researchers to focus on obtaining quasi-optimal solutions, especially at larger scales. In this paper, the authors propose the application of this new stochastic optimization algorithm to a non-linear objective cold start fleet assignment problem. Results show that the optimizer can successfully solve such highly-constrained problems (130 variables, 184 constraints).
Frequency assignments for HFDF receivers in a search and rescue network
NASA Astrophysics Data System (ADS)
Johnson, Krista E.
1990-03-01
This thesis applies a multiobjective linear programming approach to the problem of assigning frequencies to high frequency direction finding (HFDF) receivers in a search-and-rescue network in order to maximize the expected number of geolocations of vessels in distress. The problem is formulated as a multiobjective integer linear programming problem. The integrality of the solutions is guaranteed by the total unimodularity of the A-matrix. Two approaches are taken to solve the multiobjective linear programming problem: (1) the multiobjective simplex method as implemented in ADBASE; and (2) an iterative approach. In this approach, the individual objective functions are weighted and combined in a single additive objective function. The resulting single objective problem is expressed as a network programming problem and solved using SAS NETFLOW. The process is then repeated with different weightings for the objective functions. The solutions obtained from the multiobjective linear programs are evaluated using a FORTRAN program to determine which solution provides the greatest expected number of geolocations. This solution is then compared to the sample mean and standard deviation for the expected number of geolocations resulting from 10,000 random frequency assignments for the network.
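The iterative weighted-sum step described above can be illustrated generically with scipy.optimize.linprog. The matrices below are hypothetical stand-ins rather than the thesis's HFDF network model; the weights are swept exactly as the abstract describes: combine, solve, repeat.

    # Weighted-sum scalarization of a small multiobjective LP (hypothetical data).
    import numpy as np
    from scipy.optimize import linprog

    C = np.array([[1.0, 2.0, 3.0],       # objective 1 coefficients (to maximize)
                  [3.0, 1.0, 1.0]])      # objective 2 coefficients (to maximize)
    A_ub = np.array([[1.0, 1.0, 1.0]])   # a single shared resource constraint
    b_ub = np.array([10.0])

    for w in [0.2, 0.5, 0.8]:            # repeat with different objective weightings
        c = -(w * C[0] + (1.0 - w) * C[1])   # linprog minimizes, so negate
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 3, method="highs")
        print(w, res.x, -res.fun)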
The generalized pole assignment problem. [dynamic output feedback problems
NASA Technical Reports Server (NTRS)
Djaferis, T. E.; Mitter, S. K.
1979-01-01
Two dynamic output feedback problems for a linear, strictly proper system are considered, along with their interrelationships. The problems are formulated in the frequency domain and investigated in terms of linear equations over rings of polynomials. Necessary and sufficient conditions are expressed using genericity.
Caetano, Tibério S; McAuley, Julian J; Cheng, Li; Le, Quoc V; Smola, Alex J
2009-06-01
As a fundamental problem in pattern recognition, graph matching has applications in a variety of fields, from computer vision to computational biology. In graph matching, patterns are modeled as graphs and pattern recognition amounts to finding a correspondence between the nodes of different graphs. Many formulations of this problem can be cast in general as a quadratic assignment problem, where a linear term in the objective function encodes node compatibility and a quadratic term encodes edge compatibility. The main research focus in this area has been on designing efficient algorithms for approximately solving the quadratic assignment problem, since it is NP-hard. In this paper we turn our attention to a different question: how to estimate compatibility functions such that the solution of the resulting graph matching problem best matches the expected solution that a human would manually provide. We present a method for learning graph matching: the training examples are pairs of graphs and the 'labels' are matches between them. Our experimental results reveal that learning can substantially improve the performance of standard graph matching algorithms. In particular, we find that simple linear assignment with such a learning scheme outperforms Graduated Assignment with bistochastic normalisation, a state-of-the-art quadratic assignment relaxation algorithm.
Cai, Wenyu; Zhang, Meiyan; Zheng, Yahong Rosa
2017-07-11
This paper investigates the task assignment and path planning problem for multiple AUVs in three dimensional (3D) underwater wireless sensor networks where nonholonomic motion constraints of underwater AUVs in 3D space are considered. The multi-target task assignment and path planning problem is modeled by the Multiple Traveling Sales Person (MTSP) problem and the Genetic Algorithm (GA) is used to solve the MTSP problem with Euclidean distance as the cost function and the Tour Hop Balance (THB) or Tour Length Balance (TLB) constraints as the stop criterion. The resulting tour sequences are mapped to 2D Dubins curves in the X-Y plane, and then interpolated linearly to obtain the Z coordinates. We demonstrate that the linear interpolation fails to achieve G1 continuity in the 3D Dubins path for multiple targets. Therefore, the interpolated 3D Dubins curves are checked against the AUV dynamics constraint and the ones satisfying the constraint are accepted to finalize the 3D Dubins curve selection. Simulation results demonstrate that the integration of the 3D Dubins curve with the MTSP model is successful and effective for solving the 3D target assignment and path planning problem.
Optimal processor assignment for pipeline computations
NASA Technical Reports Server (NTRS)
Nicol, David M.; Simha, Rahul; Choudhury, Alok N.; Narahari, Bhagirath
1991-01-01
The availability of large scale multitasked parallel architectures introduces the following processor assignment problem for pipelined computations. Given a set of tasks and their precedence constraints, along with their experimentally determined individual response times for different processor sizes, find an assignment of processors to tasks. Two objectives are of interest: minimal response time given a throughput requirement, and maximal throughput given a response time requirement. These assignment problems differ considerably from the classical mapping problem in which several tasks share a processor; instead, it is assumed that a large number of processors are to be assigned to a relatively small number of tasks. Efficient assignment algorithms were developed for different classes of task structures. For a p processor system and a series-parallel precedence graph with n constituent tasks, an O(np²) algorithm is provided that finds the optimal assignment for the response time optimization problem, and the assignment optimizing the constrained throughput is found in O(np² log p) time. Special cases of linear, independent, and tree graphs are also considered.
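A minimal sketch of the dynamic-programming idea for the linear (chain) special case is shown below, assuming a table t[i][k] of measured response times of stage i on k processors (hypothetical values). The published algorithm handles general series-parallel graphs, which this sketch does not.

    # O(n p^2) dynamic program: assign p processors to n chained pipeline stages
    # so the summed stage response time is minimal. Times beyond the measured
    # range are assumed to saturate at the last measured value.
    INF = float("inf")

    def min_response_time(t, p):
        n = len(t)
        # best[i][q] = minimal total response time of stages 0..i using exactly q processors
        best = [[INF] * (p + 1) for _ in range(n)]
        for q in range(1, p + 1):
            best[0][q] = t[0][q] if q < len(t[0]) else t[0][-1]
        for i in range(1, n):
            for q in range(i + 1, p + 1):          # at least one processor per stage
                for k in range(1, q - i + 1):      # processors given to stage i
                    ti = t[i][k] if k < len(t[i]) else t[i][-1]
                    best[i][q] = min(best[i][q], best[i - 1][q - k] + ti)
        return best[n - 1][p]

    # t[i][k]: response time of stage i with k processors (index 0 unused, hypothetical)
    t = [[INF, 8.0, 4.5, 3.2],
         [INF, 6.0, 3.5, 2.6],
         [INF, 9.0, 5.0, 3.8]]
    print(min_response_time(t, 6))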
An automated system for reduction of the firm's employees under maximal overall efficiency
NASA Astrophysics Data System (ADS)
Yonchev, Yoncho; Nikolov, Simeon; Baeva, Silvia
2012-11-01
Achieving maximal overall efficiency is a priority in all companies. This problem is formulated as a knapsack problem and afterwards as a linear assignment problem. An automated system is created for solving this problem.
Frequency Assignments for HFDF Receivers in a Search and Rescue Network
1990-03-01
SAR problem where whether or not a signal is detected by RS or HFDF at the various stations is described by probabilities. Daskin assumes the... allows the problem to be formulated with a linear objective function (6:52-53). Daskin also developed a heuristic solution algorithm to solve this...
NASA Astrophysics Data System (ADS)
Lahaie, Sébastien; Parkes, David C.
We consider the problem of fair allocation in the package assignment model, where a set of indivisible items, held by a single seller, must be efficiently allocated to agents with quasi-linear utilities. A fair assignment is one that is efficient and envy-free. We consider a model where bidders have superadditive valuations, meaning that items are pure complements. Our central result is that core outcomes are fair and even coalition-fair over this domain, while fair distributions may not even exist for general valuations. Of relevance to auction design, we also establish that the core is equivalent to the set of anonymous-price competitive equilibria, and that superadditive valuations are a maximal domain that guarantees the existence of anonymous-price competitive equilibrium. Our results are analogs of core equivalence results for linear prices in the standard assignment model, and for nonlinear, non-anonymous prices in the package assignment model with general valuations.
Thayer, Edward C.; Olson, Maynard V.; Karp, Richard M.
1999-01-01
Genetic and physical maps display the relative positions of objects or markers occurring within a target DNA molecule. In constructing maps, the primary objective is to determine the ordering of these objects. A further objective is to assign a coordinate to each object, indicating its distance from a reference end of the target molecule. This paper describes a computational method and a body of software for assigning coordinates to map objects, given a solution or partial solution to the ordering problem. We describe our method in the context of multiple–complete–digest (MCD) mapping, but it should be applicable to a variety of other mapping problems. Because of errors in the data or insufficient clone coverage to uniquely identify the true ordering of the map objects, a partial ordering is typically the best one can hope for. Once a partial ordering has been established, one often seeks to overlay a metric along the map to assess the distances between the map objects. This problem often proves intractable because of data errors such as erroneous local length measurements (e.g., large clone lengths on low-resolution physical maps). We present a solution to the coordinate assignment problem for MCD restriction-fragment mapping, in which a coordinated set of single-enzyme restriction maps are simultaneously constructed. We show that the coordinate assignment problem can be expressed as the solution of a system of linear constraints. If the linear system is free of inconsistencies, it can be solved using the standard Bellman–Ford algorithm. In the more typical case where the system is inconsistent, our program perturbs it to find a new consistent system of linear constraints, close to those of the given inconsistent system, using a modified Bellman–Ford algorithm. Examples are provided of simple map inconsistencies and the methods by which our program detects candidate data errors and directs the user to potential suspect regions of the map. PMID:9927487
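The reduction of coordinate assignment to a system of difference constraints solved by Bellman-Ford can be sketched as follows for the consistent case; the paper's perturbation of inconsistent systems is not reproduced, and the constraints below are hypothetical.

    # Coordinate assignment from difference constraints x_j - x_i <= c via
    # Bellman-Ford shortest paths from a virtual source (consistent case only).
    def assign_coordinates(n, constraints):
        # constraints: list of (i, j, c) meaning x_j - x_i <= c, i.e. edge i -> j with weight c
        dist = [0.0] * n                      # virtual source gives every node distance 0
        for _ in range(n + 1):                # standard Bellman-Ford relaxation passes
            changed = False
            for i, j, c in constraints:
                if dist[i] + c < dist[j]:
                    dist[j] = dist[i] + c
                    changed = True
            if not changed:
                return dist                   # a feasible coordinate assignment
        raise ValueError("inconsistent constraint system (negative cycle)")

    # x1 - x0 <= 5, x2 - x1 <= 3, x0 - x2 <= -7  (hypothetical map constraints)
    print(assign_coordinates(3, [(0, 1, 5), (1, 2, 3), (2, 0, -7)]))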
1993-05-31
program. In paper [28], we give a brief and elementary proof of a result of Hoffman [1952] about approximate solutions to systems of linear inequalities... UCLA, Westwood, CA, February 1993. "Linear Problems: Formulation and Solution," International Linear Algebra Society, Pensacola, FL, May 1993. Denise S... threshold: If there is a number h and a linear k-separator w assigning a real number to each vertex so that for any subset S of vertices, the sum of w
Engineering calculations for communications satellite systems planning
NASA Technical Reports Server (NTRS)
Reilly, C. H.; Levis, C. A.; Mount-Campbell, C.; Gonsalvez, D. J.; Wang, C. W.; Yamamura, Y.
1985-01-01
Computer-based techniques for optimizing communications-satellite orbit and frequency assignments are discussed. A gradient-search code was tested against a BSS scenario derived from the RARC-83 data. Improvement was obtained, but each iteration requires about 50 minutes of IBM-3081 CPU time. Gradient-search experiments on a small FSS test problem, consisting of a single service area served by 8 satellites, showed quickest convergence when the satellites were all initially placed near the center of the available orbital arc with moderate spacing. A transformation technique is proposed for investigating the surface topography of the objective function used in the gradient-search method. A new synthesis approach is based on transforming single-entry interference constraints into corresponding constraints on satellite spacings. These constraints are used with linear objective functions to formulate the co-channel orbital assignment task as a linear-programming (LP) problem or mixed integer programming (MIP) problem. Globally optimal solutions are always found with the MIP problems, but not necessarily with the LP problems. The MIP solutions can be used to evaluate the quality of the LP solutions. The initial results are very encouraging.
Analysis of labor employment assessment on production machine to minimize time production
NASA Astrophysics Data System (ADS)
Hernawati, Tri; Suliawati; Sari Gumay, Vita
2018-03-01
Every company, whether in services or manufacturing, constantly tries to improve the efficiency of its resource use. One resource that has an important role is labor. Workers have different efficiency levels for different jobs. Problems related to the optimal allocation of labor with different efficiency levels for different jobs are called assignment problems, which are a special case of linear programming. In this research, an analysis of labor assignment on production machines to minimize production time at PT PDM is carried out using the Hungarian algorithm. The aim of the research is to obtain an optimal assignment of labor to production machines that minimizes production time. The results show that the existing labor assignment is not suitable because its completion time is longer than that of the assignment obtained with the Hungarian algorithm. Applying the Hungarian algorithm yields a time saving of 16%.
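Illustrative only: the Hungarian step on a hypothetical worker-by-machine completion-time matrix (the PT PDM data behind the 16% figure are not reproducible from the abstract). SciPy's linear_sum_assignment performs exactly this computation.

    # Hungarian method on a made-up completion-time matrix (minutes).
    import numpy as np
    from scipy.optimize import linear_sum_assignment

    times = np.array([[14, 10, 12],    # rows: workers, columns: machines
                      [11, 15,  9],
                      [13, 12, 16]])
    rows, cols = linear_sum_assignment(times)
    print(list(zip(rows, cols)), times[rows, cols].sum())   # optimal pairing and total time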
A primary shift rotation nurse scheduling using zero-one linear goal programming.
Huarng, F
1999-01-01
In this study, the author discusses the effect of nurse shift schedules on circadian rhythm and some important ergonomics criteria. The author also reviews and compares different nurse shift scheduling methods via the criteria of flexibility, fairness, continuity in shift assignments, nurses' preferences, and ergonomics principles. In this article, a primary shift rotation system is proposed to provide better continuity in shift assignments to satisfy nurses' preferences. The primary shift rotation system is modeled as a zero-one linear goal programming (LGP) problem. To generate the shift assignment for a unit with 13 nurses, the zero-one LGP model takes less than 3 minutes on average, whereas the head nurses spend approximately 2 to 3 hours on shift scheduling. This study reports the process of implementing the primary shift rotation system.
ERIC Educational Resources Information Center
KANTASEWI, NIPHON
The purpose of the study was to compare the effectiveness of (1) lecture presentations, (2) linear program use in class with and without discussion, and (3) linear programs used outside of class with in-class problems or discussion. The 126 college students enrolled in a bacteriology course were randomly assigned to three groups. In a succeeding…
NASA Astrophysics Data System (ADS)
Azmi, N. I. L. Mohd; Ahmad, R.; Zainuddin, Z. M.
2017-09-01
This research explores the Mixed-Model Two-Sided Assembly Line (MMTSAL). There are two interrelated problems in MMTSAL: line balancing and model sequencing. In previous studies, many researchers considered these problems separately and only a few studied them simultaneously, and then only for one-sided lines. In this study, the two problems are solved simultaneously to obtain a more efficient solution. A Mixed Integer Linear Programming (MILP) model with the objectives of minimizing total utility work and idle time is formulated, considering a variable launching interval and an assignment restriction constraint. The problem is analysed using small-size test cases to validate the integrated model. Numerical experiments were conducted using the General Algebraic Modelling System (GAMS) with the CPLEX solver. Experimental results indicate that integrating model sequencing and line balancing helps to minimise the proposed objective functions.
A Unique Technique to get Kaprekar Iteration in Linear Programming Problem
NASA Astrophysics Data System (ADS)
Sumathi, P.; Preethy, V.
2018-04-01
This paper explores a frivolous number popularly known as the Kaprekar constant, and Kaprekar numbers. A large number of courses and differing classroom capacities and study periods make the assignment between classrooms and courses complicated. An approach for obtaining, through linear programming techniques, the minimum and maximum number of iterations needed to reach the Kaprekar constant for four-digit numbers is presented.
Minimizing distortion and internal forces in truss structures by simulated annealing
NASA Technical Reports Server (NTRS)
Kincaid, Rex K.
1989-01-01
Inaccuracies in the length of members and the diameters of joints of large truss reflector backup structures may produce unacceptable levels of surface distortion and member forces. However, if the member lengths and joint diameters can be measured accurately it is possible to configure the members and joints so that the root-mean-square (rms) surface error and/or rms member forces are minimized. Following Greene and Haftka (1989) it is assumed that the force vector f is linearly proportional to the member length errors e_M of dimension NMEMB (the number of members) and joint errors e_J of dimension NJOINT (the number of joints), and that the best-fit displacement vector d is a linear function of f. Let NNODES denote the number of positions on the surface of the truss where error influences are measured. The solution of the problem is discussed. To classify this problem, it was compared to a similar combinatorial optimization problem. In particular, when only the member length errors are considered, minimizing d²_rms is equivalent to the quadratic assignment problem. The quadratic assignment problem is a well known NP-complete problem in the operations research literature. Hence minimizing d²_rms is also an NP-complete problem. The focus of the research is the development of a simulated annealing algorithm to reduce d²_rms. The appeal of this technique lies in its recent success on a variety of NP-complete combinatorial optimization problems including the quadratic assignment problem. A physical analogy for simulated annealing is the way liquids freeze and crystallize. All computational experiments were done on a MicroVAX. The two-interchange heuristic is very fast but produces widely varying results. The two- and three-interchange heuristic provides less variability in the final objective function values but runs much more slowly. Simulated annealing produced the best objective function values for every starting configuration and was faster than the two- and three-interchange heuristic.
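A generic simulated-annealing sketch with two-interchange moves on a QAP-style permutation objective is given below; the truss-specific rms surface error is replaced by an arbitrary flow/distance cost, and the temperature schedule is a guess rather than the one used in the study.

    # Simulated annealing for a QAP-style objective with pairwise-swap moves.
    import math
    import random

    def qap_cost(flow, dist, p):
        n = len(p)
        return sum(flow[i][j] * dist[p[i]][p[j]] for i in range(n) for j in range(n))

    def anneal(flow, dist, temp=5.0, cooling=0.999, steps=20000, seed=1):
        rng = random.Random(seed)
        n = len(flow)
        cur = list(range(n))
        cur_val = qap_cost(flow, dist, cur)
        best, best_val = list(cur), cur_val
        for _ in range(steps):
            i, j = rng.sample(range(n), 2)          # two-interchange move
            cur[i], cur[j] = cur[j], cur[i]
            val = qap_cost(flow, dist, cur)
            if val < cur_val or rng.random() < math.exp((cur_val - val) / temp):
                cur_val = val
                if val < best_val:
                    best, best_val = list(cur), val
            else:
                cur[i], cur[j] = cur[j], cur[i]     # reject: undo the swap
            temp *= cooling
        return best, best_val

    rng = random.Random(0)
    n = 6
    flow = [[rng.randint(0, 9) for _ in range(n)] for _ in range(n)]
    dist = [[rng.randint(1, 9) for _ in range(n)] for _ in range(n)]
    print(anneal(flow, dist))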
Train repathing in emergencies based on fuzzy linear programming.
Meng, Xuelei; Cui, Bingmou
2014-01-01
Train pathing is the typical problem of assigning train trips to sets of rail segments, such as rail tracks and links. This paper focuses on the train pathing problem of determining the paths of train trips in emergencies. We analyze the factors influencing train pathing, such as transferring cost, running cost, and social adverse effect cost. With overall consideration of the segment and station capacity constraints, we build a fuzzy linear programming model to solve the train pathing problem. We design fuzzy membership functions to describe the fuzzy coefficients. Furthermore, contraction-expansion factors are introduced to contract or expand the value ranges of the fuzzy coefficients, coping with the uncertainty of these ranges. We propose a method based on triangular fuzzy coefficients and transform the train pathing problem (a fuzzy linear programming model) into a determinate linear model in order to solve the fuzzy linear programming problem. An emergency scenario is constructed based on real data from the Beijing-Shanghai Railway. The model was solved and the computational results demonstrate the validity of the model and the efficiency of the algorithm.
Autonomous Guidance Strategy for Spacecraft Formations and Reconfiguration Maneuvers
NASA Astrophysics Data System (ADS)
Wahl, Theodore P.
A guidance strategy for autonomous spacecraft formation reconfiguration maneuvers is presented. The guidance strategy is presented as an algorithm that solves the linked assignment and delivery problems. The assignment problem is the task of assigning the member spacecraft of the formation to their new positions in the desired formation geometry. The guidance algorithm uses an auction process (also called an "auction algorithm"), presented in the dissertation, to solve the assignment problem. The auction uses the estimated maneuver and time of flight costs between the spacecraft and targets to create assignments which minimize a specific "expense" function for the formation. The delivery problem is the task of delivering the spacecraft to their assigned positions, and it is addressed through one of two guidance schemes described in this work. The first is a delivery scheme based on artificial potential function (APF) guidance. APF guidance uses the relative distances between the spacecraft, targets, and any obstacles to design maneuvers based on gradients of potential fields. The second delivery scheme is based on model predictive control (MPC); this method uses a model of the system dynamics to plan a series of maneuvers designed to minimize a unique cost function. The guidance algorithm uses an analytic linearized approximation of the relative orbital dynamics, the Yamanaka-Ankersen state transition matrix, in the auction process and in both delivery methods. The proposed guidance strategy is successful, in simulations, in autonomously assigning the members of the formation to new positions and in delivering the spacecraft to these new positions safely using both delivery methods. This guidance algorithm can serve as the basis for future autonomous guidance strategies for spacecraft formation missions.
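A minimal Bertsekas-style auction sketch for the assignment step is given below, assuming a hypothetical benefit matrix in place of the dissertation's maneuver and time-of-flight costs and a simple epsilon bidding increment.

    # Auction algorithm: each unassigned spacecraft bids for its best target by
    # (best value - second-best value + eps); the previous owner is evicted.
    def auction_assignment(benefit, eps=0.01):
        n = len(benefit)
        prices = [0.0] * n
        owner = [None] * n                         # owner[target] = spacecraft index
        assigned = [None] * n                      # assigned[spacecraft] = target index
        unassigned = list(range(n))
        while unassigned:
            i = unassigned.pop()
            values = [benefit[i][j] - prices[j] for j in range(n)]
            j_best = max(range(n), key=lambda j: values[j])
            second = max(v for j, v in enumerate(values) if j != j_best)
            prices[j_best] += values[j_best] - second + eps    # raise price by the bid
            if owner[j_best] is not None:                      # evict previous owner
                assigned[owner[j_best]] = None
                unassigned.append(owner[j_best])
            owner[j_best] = i
            assigned[i] = j_best
        return assigned

    benefit = [[8, 6, 2],      # hypothetical spacecraft-to-target benefits
               [5, 9, 4],
               [7, 3, 5]]
    print(auction_assignment(benefit))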
Multicasting for all-optical multifiber networks
NASA Astrophysics Data System (ADS)
Köksal, Fatih; Ersoy, Cem
2007-02-01
All-optical wavelength-routed WDM WANs can support the high bandwidth and the long session duration requirements of the application scenarios such as interactive distance learning or on-line diagnosis of patients simultaneously in different hospitals. However, multifiber and limited sparse light splitting and wavelength conversion capabilities of switches result in a difficult optimization problem. We attack this problem using a layered graph model. The problem is defined as a k-edge-disjoint degree-constrained Steiner tree problem for routing and fiber and wavelength assignment of k multicasts. A mixed integer linear programming formulation for the problem is given, and a solution using CPLEX is provided. However, the complexity of the problem grows quickly with respect to the number of edges in the layered graph, which depends on the number of nodes, fibers, wavelengths, and multicast sessions. Hence, we propose two heuristics [layered all-optical multicast algorithm (LAMA) and conservative fiber and wavelength assignment (C-FWA)] to compare with CPLEX, existing work, and unicasting. Extensive computational experiments show that LAMA's performance is very close to CPLEX, and it is significantly better than existing work and C-FWA for nearly all metrics, since LAMA jointly optimizes routing and fiber-wavelength assignment phases compared with the other candidates, which attack the problem by decomposing two phases. Experiments also show that important metrics (e.g., session and group blocking probability, transmitter wavelength, and fiber conversion resources) are adversely affected by the separation of two phases. Finally, the fiber-wavelength assignment strategy of C-FWA (Ex-Fit) uses wavelength and fiber conversion resources more effectively than the First Fit.
Solution of the determinantal assignment problem using the Grassmann matrices
NASA Astrophysics Data System (ADS)
Karcanias, Nicos; Leventides, John
2016-02-01
The paper provides a direct solution to the determinantal assignment problem (DAP) which unifies all frequency assignment problems of the linear control theory. The current approach is based on the solvability of the exterior equation ? where ? is an n -dimensional vector space over ? which is an integral part of the solution of DAP. New criteria for existence of solution and their computation based on the properties of structured matrices are referred to as Grassmann matrices. The solvability of this exterior equation is referred to as decomposability of ?, and it is in turn characterised by the set of quadratic Plücker relations (QPRs) describing the Grassmann variety of the corresponding projective space. Alternative new tests for decomposability of the multi-vector ? are given in terms of the rank properties of the Grassmann matrix, ? of the vector ?, which is constructed by the coordinates of ?. It is shown that the exterior equation is solvable (? is decomposable), if and only if ? where ?; the solution space for a decomposable ?, is the space ?. This provides an alternative linear algebra characterisation of the decomposability problem and of the Grassmann variety to that defined by the QPRs. Further properties of the Grassmann matrices are explored by defining the Hodge-Grassmann matrix as the dual of the Grassmann matrix. The connections of the Hodge-Grassmann matrix to the solution of exterior equations are examined, and an alternative new characterisation of decomposability is given in terms of the dimension of its image space. The framework based on the Grassmann matrices provides the means for the development of a new computational method for the solutions of the exact DAP (when such solutions exist), as well as computing approximate solutions, when exact solutions do not exist.
A binary linear programming formulation of the graph edit distance.
Justice, Derek; Hero, Alfred
2006-08-01
A binary linear programming formulation of the graph edit distance for unweighted, undirected graphs with vertex attributes is derived and applied to a graph recognition problem. A general formulation for editing graphs is used to derive a graph edit distance that is proven to be a metric, provided the cost function for individual edit operations is a metric. Then, a binary linear program is developed for computing this graph edit distance, and polynomial time methods for determining upper and lower bounds on the solution of the binary program are derived by applying solution methods for standard linear programming and the assignment problem. A recognition problem of comparing a sample input graph to a database of known prototype graphs in the context of a chemical information system is presented as an application of the new method. The costs associated with various edit operations are chosen by using a minimum normalized variance criterion applied to pairwise distances between nearest neighbors in the database of prototypes. The new metric is shown to perform quite well in comparison to existing metrics when applied to a database of chemical graphs.
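For context only: the sketch below is the well-known bipartite (linear assignment) approximation of graph edit distance over vertex labels, not the authors' binary linear program or their bounds; the labels and unit edit costs are hypothetical.

    # Bipartite approximation of graph edit distance: substitution, deletion and
    # insertion costs arranged in an (n1+n2) x (n1+n2) matrix, solved as a
    # linear assignment problem.
    import numpy as np
    from scipy.optimize import linear_sum_assignment

    def bipartite_ged_approx(labels1, labels2, c_sub=1.0, c_del=1.0, c_ins=1.0):
        n1, n2 = len(labels1), len(labels2)
        big = 1e9
        C = np.zeros((n1 + n2, n1 + n2))
        C[:n1, :n2] = [[0.0 if a == b else c_sub for b in labels2] for a in labels1]
        C[:n1, n2:] = big
        np.fill_diagonal(C[:n1, n2:], c_del)     # deleting vertex i of graph 1
        C[n1:, :n2] = big
        np.fill_diagonal(C[n1:, :n2], c_ins)     # inserting vertex j of graph 2
        # bottom-right block stays 0 (dummy-to-dummy pairings)
        r, c = linear_sum_assignment(C)
        return C[r, c].sum()

    # hypothetical atom labels of two small chemical graphs
    print(bipartite_ged_approx(["C", "O", "N"], ["C", "C", "O"]))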
The role of service areas in the optimization of FSS orbital and frequency assignments
NASA Technical Reports Server (NTRS)
Levis, C. A.; Wang, C. W.; Yamamura, Y.; Reilly, C. H.; Gonsalvez, D. J.
1985-01-01
A relationship is derived, on a single-entry interference basis, for the minimum allowable spacing between two satellites as a function of electrical parameters and service-area geometries. For circular beams, universal curves relate the topocentric satellite spacing angle to the service-area separation angle measured at the satellite. The corresponding geocentric spacing depends only weakly on the mean longitude of the two satellites, and this is true also for elliptical antenna beams. As a consequence, if frequency channels are preassigned, the orbital assignment synthesis of a satellite system can be formulated as a mixed-integer programming (MIP) problem or approximated by a linear programming (LP) problem, with the interference protection requirements enforced by constraints while some linear function is optimized. Possible objective-function choices are discussed and explicit formulations are presented for the choice of the sum of the absolute deviations of the orbital locations from some prescribed ideal location set. A test problem is posed consisting of six service areas, each served by one satellite, all using elliptical antenna beams and the same frequency channels. Numerical results are given for the three ideal location prescriptions for both the MIP and LP formulations. The resulting scenarios also satisfy reasonable aggregate interference protection requirements.
From Feynman rules to conserved quantum numbers, I
NASA Astrophysics Data System (ADS)
Nogueira, P.
2017-05-01
In the context of Quantum Field Theory (QFT) there is often the need to find sets of graph-like diagrams (the so-called Feynman diagrams) for a given physical model. If negative, the answer to the related problem 'Are there any diagrams with this set of external fields?' may settle certain physical questions at once. Here the latter problem is formulated in terms of a system of linear diophantine equations derived from the Lagrangian density, from which necessary conditions for the existence of the required diagrams may be obtained. Those conditions are equalities that look like either linear diophantine equations or linear modular (i.e. congruence) equations, and may be found by means of fairly simple algorithms that involve integer computations. The diophantine equations so obtained represent (particle) number conservation rules, and are related to the conserved (additive) quantum numbers that may be assigned to the fields of the model.
da Fonseca Neto, João Viana; Abreu, Ivanildo Silva; da Silva, Fábio Nogueira
2010-04-01
Toward the synthesis of state-space controllers, a neural-genetic model based on the linear quadratic regulator design for the eigenstructure assignment of multivariable dynamic systems is presented. The neural-genetic model represents a fusion of a genetic algorithm and a recurrent neural network (RNN) to perform the selection of the weighting matrices and the algebraic Riccati equation solution, respectively. A fourth-order electric circuit model is used to evaluate the convergence of the computational intelligence paradigms and the control design method performance. The genetic search convergence evaluation is performed in terms of the fitness function statistics and the RNN convergence, which is evaluated by landscapes of the energy and norm, as a function of the parameter deviations. The control problem solution is evaluated in the time and frequency domains by the impulse response, singular values, and modal analysis.
Optimization of orbital assignment and specification of service areas in satellite communications
NASA Technical Reports Server (NTRS)
Wang, Cou-Way; Levis, Curt A.; Buyukdura, O. Merih
1987-01-01
The mathematical nature of the orbital and frequency assignment problem for communications satellites is explored, and it is shown that choosing the correct permutations of the orbit locations and frequency assignments is an important step in arriving at values which satisfy the signal-quality requirements. Two methods are proposed to achieve better spectrum/orbit utilization. The first, called the delta S concept, leads to orbital assignment solutions via either mixed-integer or restricted basis entry linear programming techniques; the method guarantees good single-entry carrier-to-interference ratio results. In the second, a basis for specifying service areas is proposed for the Fixed Satellite Service. It is suggested that service areas should be specified according to the communications-demand density in conjunction with the delta S concept in order to enable the system planner to specify more satellites and provide more communications supply.
Contributions au probleme d'affectation des types d'avion (Contributions to the aircraft-type assignment problem)
NASA Astrophysics Data System (ADS)
Belanger, Nicolas
In this thesis, we approach the problem of assigning aircraft types to flights (what is called aircraft fleet assignment) in a strategic planning context. The literature mentions many studies considering this problem on a daily flight schedule basis, but the proposed models do not allow one to consider many elements that are either necessary to ensure the practical feasibility of the solutions or relevant to obtaining more beneficial solutions. After describing the practical context of the problem (Chapter 1) and presenting the literature on the subject (Chapter 2), we propose new models and solution approaches to improve the quality of the solutions obtained. The general scheme of the thesis is presented in Chapter 3. We summarize here the models and solution approaches that we propose and present the main elements of our conclusions. First, in Chapter 4, we consider the problem of aircraft fleet assignment over a weekly flight schedule, integrating into the objective a homogeneity factor driving the choice of the aircraft types for the flights with the same flight number over the week. We present an integer linear model based on a time-space multicommodity network. This model includes, among others, decision variables relative to the aircraft type assigned to each flight and to the dominant aircraft type assigned to each flight number. We present in Chapter 5 the results of a research project carried out in collaboration with Air Canada within a consulting contract. The project aimed at analyzing the relevance, for the planners, of using optimization software to help them first identify non-profitable flight legs in the network and second efficiently establish the aircraft fleet assignment. In this chapter, we propose an iterative approach to take into account the fact that the passenger demand is not known on a leg basis, but rather on an origin-destination and departure-time basis. Finally, in Chapter 6, we propose a model and a solution approach for solving the fleet assignment problem over a periodic schedule in the case where there is flexibility in the flight departure times and the fleet size must be minimized. Moreover, the objective of this model includes the impact on the passenger demand for each flight of the variation of the flight departure times and of the closeness of the departure times of consecutive flights connecting the same pairs of stations. (Abstract shortened by UMI.)
Adaptive Discrete Hypergraph Matching.
Yan, Junchi; Li, Changsheng; Li, Yin; Cao, Guitao
2018-02-01
This paper addresses the problem of hypergraph matching using higher-order affinity information. We propose a solver that iteratively updates the solution in the discrete domain by linear assignment approximation. The proposed method is guaranteed to converge to a stationary discrete solution and avoids the annealing procedure and ad-hoc post binarization step that are required in several previous methods. Specifically, we start with a simple iterative discrete gradient assignment solver. This solver can be trapped in an -circle sequence under moderate conditions, where is the order of the graph matching problem. We then devise an adaptive relaxation mechanism to jump out this degenerating case and show that the resulting new path will converge to a fixed solution in the discrete domain. The proposed method is tested on both synthetic and real-world benchmarks. The experimental results corroborate the efficacy of our method.
Apaydin, Mehmet Serkan; Çatay, Bülent; Patrick, Nicholas; Donald, Bruce R
2011-05-01
Nuclear magnetic resonance (NMR) spectroscopy is an important experimental technique that allows one to study protein structure and dynamics in solution. An important bottleneck in NMR protein structure determination is the assignment of NMR peaks to the corresponding nuclei. Structure-based assignment (SBA) aims to solve this problem with the help of a template protein which is homologous to the target and has applications in the study of structure-activity relationship, protein-protein and protein-ligand interactions. We formulate SBA as a linear assignment problem with additional nuclear overhauser effect constraints, which can be solved within nuclear vector replacement's (NVR) framework (Langmead, C., Yan, A., Lilien, R., Wang, L. and Donald, B. (2003) A Polynomial-Time Nuclear Vector Replacement Algorithm for Automated NMR Resonance Assignments. Proc. the 7th Annual Int. Conf. Research in Computational Molecular Biology (RECOMB) , Berlin, Germany, April 10-13, pp. 176-187. ACM Press, New York, NY. J. Comp. Bio. , (2004), 11, pp. 277-298; Langmead, C. and Donald, B. (2004) An expectation/maximization nuclear vector replacement algorithm for automated NMR resonance assignments. J. Biomol. NMR , 29, 111-138). Our approach uses NVR's scoring function and data types and also gives the option of using CH and NH residual dipolar coupling (RDCs), instead of NH RDCs which NVR requires. We test our technique on NVR's data set as well as on four new proteins. Our results are comparable to NVR's assignment accuracy on NVR's test set, but higher on novel proteins. Our approach allows partial assignments. It is also complete and can return the optimum as well as near-optimum assignments. Furthermore, it allows us to analyze the information content of each data type and is easily extendable to accept new forms of input data, such as additional RDCs.
Multipoint to multipoint routing and wavelength assignment in multi-domain optical networks
NASA Astrophysics Data System (ADS)
Qin, Panke; Wu, Jingru; Li, Xudong; Tang, Yongli
2018-01-01
In multi-point to multi-point (MP2MP) routing and wavelength assignment (RWA) problems, researchers usually assume the optical network to be a single domain. However, optical networks are developing toward multi-domain and larger scales in practice. In this context, multi-core shared tree (MST)-based MP2MP RWA introduces problems including optimal multicast domain sequence selection, determining which domains the core nodes belong to, and so on. In this letter, we focus on MST-based MP2MP RWA problems in multi-domain optical networks; mixed integer linear programming (MILP) formulations to optimally construct MP2MP multicast trees are presented. A heuristic algorithm based on network virtualization and a weighted clustering algorithm (NV-WCA) is proposed. Simulation results show that, under different traffic patterns, the proposed algorithm achieves significant improvement in network resource occupation and multicast tree setup latency in contrast with conventional algorithms that were proposed for a single-domain network environment.
Congestion patterns of electric vehicles with limited battery capacity.
Jing, Wentao; Ramezani, Mohsen; An, Kun; Kim, Inhi
2018-01-01
The path choice behavior of battery electric vehicle (BEV) drivers is influenced by the lack of public charging stations, limited battery capacity, range anxiety and long battery charging time. This paper investigates the congestion/flow pattern captured by the stochastic user equilibrium (SUE) traffic assignment problem in transportation networks with BEVs, where the BEV paths are restricted by their battery capacities. The BEV energy consumption is assumed to be a linear function of path length and path travel time, which addresses both the path distance limit problem and the road congestion effect. A mathematical programming model is proposed for the path-based SUE traffic assignment where the path cost is the sum of the corresponding link costs and a path-specific out-of-energy penalty. We then apply the convergent Lagrangian dual method to transform the original problem into a concave maximization problem and develop a customized gradient projection algorithm to solve it. A column generation procedure is incorporated to generate the path set. Finally, two numerical examples are presented to demonstrate the applicability of the proposed model and the solution algorithm.
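A toy sketch of the cost structure described above follows: a linear energy model with an out-of-energy penalty, followed by a one-shot logit split. It omits congestion feedback, the equilibrium condition and the paper's gradient projection algorithm, and all coefficients are hypothetical.

    # Path costs with an out-of-energy penalty, then a simple logit route split.
    import math

    def path_flows(paths, demand, capacity, alpha=0.2, beta=0.05, penalty=60.0, theta=0.1):
        costs = []
        for length_km, time_min, base_cost in paths:
            energy = alpha * length_km + beta * time_min      # linear energy model
            costs.append(base_cost + (penalty if energy > capacity else 0.0))
        weights = [math.exp(-theta * c) for c in costs]       # logit route choice
        total = sum(weights)
        return [demand * w / total for w in weights]

    # (length km, travel time min, base cost) for three hypothetical paths
    paths = [(30.0, 40.0, 45.0), (22.0, 55.0, 50.0), (50.0, 35.0, 42.0)]
    print(path_flows(paths, demand=1000.0, capacity=9.0))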
Optimum use of air tankers in initial attack: selection, basing, and transfer rules
Francis E. Greulich; William G. O' Regan
1982-01-01
Fire managers face two interrelated problems in deciding the most efficient use of air tankers: where best to base them, and how best to reallocate them each day in anticipation of fire occurrence. A computerized model based on a mixed integer linear program can help in assigning air tankers throughout the fire season. The model was tested using information from...
Ceberio, Josu; Calvo, Borja; Mendiburu, Alexander; Lozano, Jose A
2018-02-15
In the last decade, many works in combinatorial optimisation have shown that, due to the advances in multi-objective optimisation, the algorithms from this field could be used for solving single-objective problems as well. In this sense, a number of papers have proposed multi-objectivising single-objective problems in order to use multi-objective algorithms in their optimisation. In this article, we follow up this idea by presenting a methodology for multi-objectivising combinatorial optimisation problems based on elementary landscape decompositions of their objective function. Under this framework, each of the elementary landscapes obtained from the decomposition is considered as an independent objective function to optimise. In order to illustrate this general methodology, we consider four problems from different domains: the quadratic assignment problem and the linear ordering problem (permutation domain), the 0-1 unconstrained quadratic optimisation problem (binary domain), and the frequency assignment problem (integer domain). We implemented two widely known multi-objective algorithms, NSGA-II and SPEA2, and compared their performance with that of a single-objective GA. The experiments conducted on a large benchmark of instances of the four problems show that the multi-objective algorithms clearly outperform the single-objective approaches. Furthermore, a discussion on the results suggests that the multi-objective space generated by this decomposition enhances the exploration ability, thus permitting NSGA-II and SPEA2 to obtain better results in the majority of the tested instances.
The generalized quadratic knapsack problem. A neuronal network approach.
Talaván, Pedro M; Yáñez, Javier
2006-05-01
The solution of an optimization problem through the continuous Hopfield network (CHN) is based on some energy or Lyapunov function, which decreases as the system evolves until a local minimum value is attained. A new energy function is proposed in this paper so that any 0-1 programming problem with linear constraints and a quadratic objective function can be solved. This problem, denoted as the generalized quadratic knapsack problem (GQKP), includes as particular cases well-known problems such as the traveling salesman problem (TSP) and the quadratic assignment problem (QAP). This new energy function generalizes those proposed by other authors. Through this energy function, any GQKP can be solved with an appropriate parameter setting procedure, which is detailed in this paper. As a particular case, and in order to test this generalized energy function, some computational experiments solving the traveling salesman problem are also included.
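For reference, a minimal sketch in standard continuous Hopfield network notation (not the specific parameter-setting procedure or generalized energy function of the paper): the CHN minimises an energy of the form

    E(v) = -\tfrac{1}{2}\, v^{\top} T v - i_b^{\top} v ,

and one common way to map a GQKP instance, minimise \tfrac{1}{2} x^{\top} Q x + c^{\top} x subject to A x = b with x \in \{0,1\}^n, onto this energy is through a quadratic penalty with a hypothetical weight \phi:

    E(x) = \tfrac{1}{2} x^{\top} Q x + c^{\top} x + \tfrac{\phi}{2} \lVert A x - b \rVert^2
    \quad\Longrightarrow\quad
    T = -\bigl(Q + \phi A^{\top} A\bigr), \qquad i_b = \phi A^{\top} b - c .

The paper's contribution is a more general energy function together with a detailed procedure for choosing such parameters.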
Manycast routing, modulation level and spectrum assignment over elastic optical networks
NASA Astrophysics Data System (ADS)
Luo, Xiao; Zhao, Yang; Chen, Xue; Wang, Lei; Zhang, Min; Zhang, Jie; Ji, Yuefeng; Wang, Huitao; Wang, Taili
2017-07-01
Manycast is a point-to-multipoint transmission framework that requires only a subset of destination nodes to be successfully reached. It is particularly applicable for dealing with large amounts of data simultaneously in bandwidth-hungry, dynamic and cloud-based applications. With the rapid increase of traffic in these applications, elastic optical networks (EONs) may be relied on to achieve high-throughput manycast. With finer spectrum granularity, EONs allow flexible access to the network spectrum and efficient provisioning of exactly the spectrum resources that demands require. In this paper, we focus on the manycast routing, modulation level and spectrum assignment (MA-RMLSA) problem in EONs. Both EON planning with static manycast traffic and EON provisioning with dynamic manycast traffic are investigated. An integer linear programming (ILP) model is formulated for the MA-RMLSA problem in the static manycast scenario. A corresponding heuristic algorithm, called the manycast routing, modulation level and spectrum assignment genetic algorithm (MA-RMLSA-GA), is then proposed for both static and dynamic manycast scenarios. MA-RMLSA-GA optimizes the MA-RMLSA problem jointly over destination node selection, routing light-tree constitution, modulation level allocation and spectrum resource assignment, achieving an effective improvement in network performance. Simulation results reveal that the MA-RMLSA strategies produced by MA-RMLSA-GA differ only slightly from the optimal solutions provided by the ILP model in the static scenario. Moreover, the results demonstrate that MA-RMLSA-GA realizes a highly efficient MA-RMLSA strategy with the lowest blocking probability in the dynamic scenario compared with benchmark algorithms.
NASA Astrophysics Data System (ADS)
Yari, Mojtaba; Bagherpour, Raheb; Jamali, Saeed; Asadi, Fatemeh
2015-03-01
One of the most important operations in mining is blasting. Improper design of the blasting pattern causes technical and safety problems. Considering the impact of blasting results on subsequent mining steps, correct pattern selection requires great caution. In selecting a blasting pattern, technical, economic and safety aspects should be considered. Thus, selection of the most appropriate pattern can be defined as a Multi Attribute Decision Making (MADM) problem. The linear assignment method is one of the most applicable methods for decision making problems. In this paper, this method was used for the first time to evaluate blasting patterns in a mine. In this ranking, safety and technical parameters have been considered to evaluate the blasting patterns. Finally, the blasting pattern with a burden of 3.5 m, spacing of 4.5 m, stemming of 3.8 m and hole length of 12.1 m is presented as the most suitable pattern obtained from the linear assignment model for the Sungun Copper Mine.
Principles for problem aggregation and assignment in medium scale multiprocessors
NASA Technical Reports Server (NTRS)
Nicol, David M.; Saltz, Joel H.
1987-01-01
One of the most important issues in parallel processing is the mapping of workload to processors. This paper considers a large class of problems having a high degree of potential fine grained parallelism, and execution requirements that are either not predictable, or are too costly to predict. The main issues in mapping such a problem onto medium scale multiprocessors are those of aggregation and assignment. We study a method of parameterized aggregation that makes few assumptions about the workload. The mapping of aggregate units of work onto processors is uniform, and exploits locality of workload intensity to balance the unknown workload. In general, a finer aggregate granularity leads to a better balance at the price of increased communication/synchronization costs; the aggregation parameters can be adjusted to find a reasonable granularity. The effectiveness of this scheme is demonstrated on three model problems: an adaptive one-dimensional fluid dynamics problem with message passing, a sparse triangular linear system solver on both a shared memory and a message-passing machine, and a two-dimensional time-driven battlefield simulation employing message passing. Using the model problems, the tradeoffs are studied between balanced workload and the communication/synchronization costs. Finally, an analytical model is used to explain why the method balances workload and minimizes the variance in system behavior.
Lyubetsky, Vassily; Gershgorin, Roman; Gorbunov, Konstantin
2017-12-06
Chromosome structure is a very limited model of the genome including the information about its chromosomes such as their linear or circular organization, the order of genes on them, and the DNA strand encoding a gene. Gene lengths, nucleotide composition, and intergenic regions are ignored. Although highly incomplete, such structure can be used in many cases, e.g., to reconstruct phylogeny and evolutionary events, to identify gene synteny, regulatory elements and promoters (considering highly conserved elements), etc. Three problems are considered; all assume unequal gene content and the presence of gene paralogs. The distance problem is to determine the minimum number of operations required to transform one chromosome structure into another and the corresponding transformation itself including the identification of paralogs in two structures. We use the DCJ model which is one of the most studied combinatorial rearrangement models. Double-, sesqui-, and single-operations as well as deletion and insertion of a chromosome region are considered in the model; the single ones comprise cut and join. In the reconstruction problem, a phylogenetic tree with chromosome structures in the leaves is given. It is necessary to assign the structures to inner nodes of the tree to minimize the sum of distances between terminal structures of each edge and to identify the mutual paralogs in a fairly large set of structures. A linear algorithm is known for the distance problem without paralogs, while the presence of paralogs makes it NP-hard. If paralogs are allowed but the insertion and deletion operations are missing (and special constraints are imposed), the reduction of the distance problem to integer linear programming is known. Apparently, the reconstruction problem is NP-hard even in the absence of paralogs. The problem of contigs is to find the optimal arrangements for each given set of contigs, which also includes the mutual identification of paralogs. We proved that these problems can be reduced to integer linear programming formulations, which allows an algorithm to redefine the problems to implement a very special case of the integer linear programming tool. The results were tested on synthetic and biological samples. Three well-known problems were reduced to a very special case of integer linear programming, which is a new method of their solutions. Integer linear programming is clearly among the main computational methods and, as generally accepted, is fast on average; in particular, computation systems specifically targeted at it are available. The challenges are to reduce the size of the corresponding integer linear programming formulations and to incorporate a more detailed biological concept in our model of the reconstruction.
Sensor selection cost optimisation for tracking structurally cyclic systems: a P-order solution
NASA Astrophysics Data System (ADS)
Doostmohammadian, M.; Zarrabi, H.; Rabiee, H. R.
2017-08-01
Measurements and sensing implementations impose certain costs in sensor networks. Sensor selection cost optimisation is the problem of minimising the sensing cost of monitoring a physical (or cyber-physical) system. Consider a given set of sensors tracking the states of a dynamical system for estimation purposes. For each sensor, assume different costs to measure different (realisable) states. The idea is to assign sensors to measure states such that the global cost is minimised. The number and selection of sensor measurements need to ensure observability, so that the dynamic state of the system can be tracked with bounded estimation error. The main question we address is how to select the state measurements to minimise the cost while satisfying the observability conditions. Relaxing the observability condition for structurally cyclic systems, the main contribution is a graph-theoretic approach that solves the problem in polynomial time. Note that polynomial-time algorithms are suitable for large-scale systems, as their running time is upper-bounded by a polynomial expression in the size of the algorithm's input. We frame the problem as a linear sum assignment problem, which admits a solution of polynomial-order complexity.
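To make the linear sum assignment formulation concrete, here is a minimal sketch (not the authors' implementation) that assigns sensors to states so that the total sensing cost is minimised; the cost values are hypothetical.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Hypothetical sensing costs: cost[i, j] = cost of sensor i measuring state j.
cost = np.array([
    [4.0, 1.0, 3.0],
    [2.0, 0.5, 5.0],
    [3.0, 2.0, 2.0],
])

# Polynomial-time Hungarian/Jonker-Volgenant style solver:
# each sensor is matched to exactly one state at minimum total cost.
sensors, states = linear_sum_assignment(cost)
print(list(zip(sensors, states)), cost[sensors, states].sum())
```

Observability restrictions on which sensor-state pairs are admissible could be modelled by setting the corresponding entries to a large penalty.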
Guaranteed cost control with poles assignment for a flexible air-breathing hypersonic vehicle
NASA Astrophysics Data System (ADS)
Li, Hongyi; Si, Yulin; Wu, Ligang; Hu, Xiaoxiang; Gao, Huijun
2011-05-01
This article investigates the problem of guaranteed cost control for a flexible air-breathing hypersonic vehicle (FAHV). The FAHV includes intricate coupling between the engine and flight dynamics as well as complex interplay between flexible and rigid modes, which results in an intractable system for the control design. A longitudinal model is adopted for control design due to the complexity of the vehicle. First, for a highly nonlinear and coupled FAHV, a linearised model is established around the trim condition, which includes the state of altitude, velocity, angle of attack, pitch angle and pitch rate, etc. Secondly, by using the Lyapunov approach, performance analysis is carried out for the resulting closed-loop FAHV system, whose criterion with respect to guaranteed performance cost and poles assignment is expressed in the framework of linear matrix inequalities (LMIs). The established criterion exhibits a kind of decoupling between the Lyapunov positive-definite matrices to be determined and the FAHV system matrices, which is enabled by the introduction of additional slack matrix variables. Thirdly, a convex optimisation problem with LMI constraints is formulated for designing an admissible controller, which guarantees a prescribed performance cost with the simultaneous consideration of poles assignment for the resulting closed-loop system. Finally, some simulation results are provided to show that the guaranteed cost controller could assign the poles to the desired region and achieve excellent reference altitude and velocity tracking performance.
Transmit Designs for the MIMO Broadcast Channel With Statistical CSI
NASA Astrophysics Data System (ADS)
Wu, Yongpeng; Jin, Shi; Gao, Xiqi; McKay, Matthew R.; Xiao, Chengshan
2014-09-01
We investigate the multiple-input multiple-output broadcast channel with statistical channel state information available at the transmitter. The so-called linear assignment operation is employed, and necessary conditions are derived for the optimal transmit design under general fading conditions. Based on this, we introduce an iterative algorithm to maximize the linear assignment weighted sum-rate by applying a gradient descent method. To reduce complexity, we derive an upper bound of the linear assignment achievable rate of each receiver, from which a simplified closed-form expression for a near-optimal linear assignment matrix is derived. This reveals an interesting construction analogous to that of dirty-paper coding. In light of this, a low complexity transmission scheme is provided. Numerical examples illustrate the significant performance of the proposed low complexity scheme.
Flight control synthesis for flexible aircraft using Eigenspace assignment
NASA Technical Reports Server (NTRS)
Davidson, J. B.; Schmidt, D. K.
1986-01-01
The use of eigenspace assignment techniques to synthesize flight control systems for flexible aircraft is explored. Eigenspace assignment techniques are used to achieve a specified desired eigenspace, chosen to yield desirable system impulse residue magnitudes for selected system responses. Two such techniques are investigated. The first directly determines constant measurement feedback gains that will yield a closed-loop system eigenspace close to a desired eigenspace. The second technique selects quadratic weighting matrices in a linear quadratic control synthesis that will asymptotically yield the closed-loop achievable eigenspace. Finally, the possibility of using either of these techniques with state estimation is explored. Application of the methods to synthesize integrated flight-control and structural-mode-control laws for a large flexible aircraft is demonstrated and results discussed. Eigenspace selection criteria based on design goals are discussed, and for the study case it would appear that a desirable eigenspace can be obtained. In addition, the importance of state-space selection is noted along with problems with reduced-order measurement feedback. Since the full-state control laws may be implemented with dynamic compensation (state estimation), the use of reduced-order measurement feedback is less desirable. This is especially true since no change in the transient response from the pilot's input results if state estimation is used appropriately. The potential is also noted for high actuator bandwidth requirements if the linear quadratic synthesis approach is utilized. Even with the actuator pole location selected, a problem with unmodeled modes is noted due to high bandwidth. Some suggestions for future research include investigating how to choose an eigenspace that will achieve certain desired dynamics and stability robustness, determining how the choice of measurements affects synthesis results, and exploring how the phase relationships between desired eigenvector elements affect the synthesis results.
Fragment assignment in the cloud with eXpress-D
2013-01-01
Background Probabilistic assignment of ambiguously mapped fragments produced by high-throughput sequencing experiments has been demonstrated to greatly improve accuracy in the analysis of RNA-Seq and ChIP-Seq, and is an essential step in many other sequence census experiments. A maximum likelihood method using the expectation-maximization (EM) algorithm for optimization is commonly used to solve this problem. However, batch EM-based approaches do not scale well with the size of sequencing datasets, which have been increasing dramatically over the past few years. Thus, current approaches to fragment assignment rely on heuristics or approximations for tractability. Results We present an implementation of a distributed EM solution to the fragment assignment problem using Spark, a data analytics framework that can scale by leveraging compute clusters within datacenters ("the cloud"). We demonstrate that our implementation easily scales to billions of sequenced fragments, while providing the exact maximum likelihood assignment of ambiguous fragments. The method is shown to improve on the accuracy of the most widely used tools available and to run in a constant amount of time when cluster resources are scaled linearly with the amount of input data. Conclusions The cloud offers one solution for the difficulties faced in the analysis of massive high-throughput sequencing data, which continue to grow rapidly. Researchers in bioinformatics must follow developments in distributed systems, such as new frameworks like Spark, for ways to port existing methods to the cloud and help them scale to the datasets of the future. Our software, eXpress-D, is freely available at: http://github.com/adarob/express-d. PMID:24314033
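For orientation, the core EM iteration that such distributed implementations parallelize can be sketched on a single machine as follows; the compatibility matrix is invented, and effective-length and error-model weighting used by real tools such as eXpress are omitted.

```python
import numpy as np

# compat[f, t] = 1 if fragment f maps (possibly ambiguously) to transcript t.
compat = np.array([[1, 1, 0],
                   [0, 1, 1],
                   [1, 0, 0],
                   [0, 1, 1]], dtype=float)

theta = np.full(compat.shape[1], 1.0 / compat.shape[1])  # transcript abundances
for _ in range(100):
    # E-step: responsibility of each transcript for each ambiguous fragment.
    weighted = compat * theta
    resp = weighted / weighted.sum(axis=1, keepdims=True)
    # M-step: re-estimate abundances from the expected fragment counts.
    theta = resp.sum(axis=0) / compat.shape[0]
print(theta)
```

A Spark-style version would distribute the E-step over fragment partitions and aggregate the expected counts for the M-step.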
Homans, S W; Dwek, R A; Fernandes, D L; Rademacher, T W
1984-01-01
A general property of the high-resolution proton NMR spectra of oligosaccharides is the appearance of low-field well-resolved resonances corresponding to the anomeric (H1) and H2 protons. The remaining skeletal protons resonate in the region 3-4 ppm, giving rise to an envelope of poorly resolved resonances. Assignments can be made from the H1 and H2 protons to their J-coupled neighbors (H2 and H3) within this main envelope by using 1H-1H correlated spectroscopy. However, the tight coupling (J congruent to delta) between further protons results in poor spectral dispersion with consequent assignment ambiguities. We describe here three-step two-dimensional relayed correlation spectroscopy and show how it can be used to correlate the resolved anomeric (H1) and H2 protons with remote (H4, H5) protons directly through a linear network of couplings using sequential magnetization transfer around the oligosaccharide rings. Resonance assignments are then obtained by inspection of cross-peaks that appear in well-resolved regions of the two-dimensional spectrum. This offers a general solution to the assignment problem in oligosaccharides and, importantly, these assignments will subsequently allow for the three-dimensional solution conformation to be determined by using one-dimensional and two-dimensional nuclear Overhauser experiments. PMID:6593701
ERIC Educational Resources Information Center
Parkhurst, John T.; Fleisher, Matthew S.; Skinner, Christopher H.; Woehr, David J.; Hawthorn-Embree, Meredith L.
2011-01-01
After completing the Multidimensional Work-Ethic Profile (MWEP), 98 college students were given a 20-problem math computation assignment and instructed to stop working on the assignment after completing 10 problems. Next, they were allowed to choose to finish either the partially completed assignment that had 10 problems remaining or a new…
Simulated annealing algorithm for solving chambering student-case assignment problem
NASA Astrophysics Data System (ADS)
Ghazali, Saadiah; Abdul-Rahman, Syariza
2015-12-01
The project assignment problem is a popular practical problem that frequently appears nowadays. The challenge of solving it rises whenever the complexity of preferences, the existence of real-world constraints, and the problem size increase. This study focuses on solving a chambering student-case assignment problem, which is classified as a project assignment problem, by using a simulated annealing algorithm. The project assignment problem is considered a hard combinatorial optimization problem, and solving it using a metaheuristic approach is advantageous because it can return a good solution in a reasonable time. The problem of assigning chambering students to cases has never been addressed in the literature before. In the proposed problem, it is essential for law graduates to complete chambering before they are qualified to become legal counsel. Thus, assigning the chambering students to cases is critically needed, especially when many preferences are involved. Hence, this study presents a preliminary study of the proposed project assignment problem. The objective of the study is to minimize the total completion time for all students in solving the given cases. The study employs a minimum-cost greedy heuristic to construct a feasible initial solution. The search then proceeds with a simulated annealing algorithm to further improve solution quality. The analysis of the obtained results shows that the proposed simulated annealing algorithm greatly improves the solution constructed by the minimum-cost greedy heuristic. Hence, this research demonstrates the advantages of solving project assignment problems by using metaheuristic techniques.
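A generic simulated annealing sketch for an assignment-type problem is given below; it is not the authors' chambering-student model, and the cost matrix, neighbourhood move (a swap) and cooling schedule are illustrative placeholders.

```python
import math
import random

def simulated_annealing(cost, n_iter=20000, t0=10.0, alpha=0.9995):
    """Assign n students to n cases (a permutation) minimising total cost.

    cost[i][j] is the hypothetical completion time of student i on case j.
    """
    n = len(cost)
    perm = list(range(n))                      # arbitrary initial assignment
    cur = sum(cost[i][perm[i]] for i in range(n))
    best, best_perm, t = cur, perm[:], t0
    for _ in range(n_iter):
        i, j = random.sample(range(n), 2)      # propose swapping two assignments
        delta = (cost[i][perm[j]] + cost[j][perm[i]]
                 - cost[i][perm[i]] - cost[j][perm[j]])
        if delta < 0 or random.random() < math.exp(-delta / t):
            perm[i], perm[j] = perm[j], perm[i]
            cur += delta
            if cur < best:
                best, best_perm = cur, perm[:]
        t *= alpha                             # geometric cooling
    return best_perm, best

random.seed(0)
cost = [[random.randint(1, 20) for _ in range(6)] for _ in range(6)]
print(simulated_annealing(cost))
```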
Assessment of combating-desertification strategies using the linear assignment method
NASA Astrophysics Data System (ADS)
Hassan Sadeghravesh, Mohammad; Khosravi, Hassan; Ghasemian, Soudeh
2016-04-01
Nowadays desertification, as a global problem, affects many countries in the world, especially developing countries like Iran. Given the increasing importance and complexity of desertification, attention to optimal combating-desertification alternatives is essential. Selecting appropriate strategies according to all effective criteria to combat the desertification process can be useful in rehabilitating degraded lands and avoiding degradation in vulnerable fields. This study provides systematic and optimal strategies of combating desertification by use of a group decision-making model. To this end, the preferences of indexes were obtained by using the Delphi model, within the framework of multi-attribute decision making (MADM). Then, priorities of strategies were evaluated by using the linear assignment (LA) method. According to the results, the strategies to prevent improper change of land use (A18), development and reclamation of plant cover (A23), and control overcharging of groundwater resources (A31) were identified as the most important strategies for combating desertification in this study area. Therefore, it is suggested that the aforementioned ranking results be considered in projects which control and reduce the effects of desertification and rehabilitate degraded lands.
Lee, Chaewoo
2014-01-01
Advances in wideband wireless networks support real-time services such as IPTV and live video streaming. However, because of the shared nature of the wireless medium, efficient resource allocation has been studied to achieve a high level of acceptability and proliferation of wireless multimedia. Scalable video coding (SVC) with adaptive modulation and coding (AMC) provides an excellent solution for wireless video streaming. By assigning different modulation and coding schemes (MCSs) to video layers, SVC can provide good video quality to users in good channel conditions and also basic video quality to users in bad channel conditions. For optimal resource allocation, a key issue in applying SVC in the wireless multicast service is how to assign MCSs and the time resources to each SVC layer under heterogeneous channel conditions. We formulate this problem with integer linear programming (ILP) and provide numerical results to show the performance in an IEEE 802.16m environment. The results show that our methodology enhances the overall system throughput compared to an existing algorithm. PMID:25276862
NASA Astrophysics Data System (ADS)
Mandrà, Salvatore; Giacomo Guerreschi, Gian; Aspuru-Guzik, Alán
2016-07-01
We present an exact quantum algorithm for solving the Exact Satisfiability problem, which belongs to the important NP-complete complexity class. The algorithm is based on an intuitive approach that can be divided into two parts: the first step consists of the identification and efficient characterization of a restricted subspace that contains all the valid assignments of the Exact Satisfiability problem, while the second part performs a quantum search in this restricted subspace. The quantum algorithm can be used either to find a valid assignment (or to certify that no solution exists) or to count the total number of valid assignments. The worst-case query complexities are respectively bounded by O(√(2^(n−M′))) and O(2^(n−M′)), where n is the number of variables and M′ is the number of linearly independent clauses. Remarkably, the proposed quantum algorithm turns out to be faster than any known exact classical algorithm for solving dense formulas of Exact Satisfiability. As a concrete application, we provide the worst-case complexity for the Hamiltonian cycle problem obtained after mapping it to a suitable Occupation problem. Specifically, we show that the time complexity for the proposed quantum algorithm is bounded by O(2^(n/4)) for 3-regular undirected graphs, where n is the number of nodes. The same worst-case complexity holds for (3,3)-regular bipartite graphs. As a reference, the current best classical algorithm has a (worst-case) running time bounded by O(2^(31n/96)). Finally, when compared to heuristic techniques for Exact Satisfiability problems, the proposed quantum algorithm is faster than the classical WalkSAT and Adiabatic Quantum Optimization for random instances with a density of constraints close to the satisfiability threshold, the regime in which instances are typically the hardest to solve. The proposed quantum algorithm can be straightforwardly extended to the generalized version of the Exact Satisfiability known as the Occupation problem. The general version of the algorithm is presented and analyzed.
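For contrast with the quantum query bounds above, a brute-force classical check of Exact Satisfiability (exactly one true literal per clause) is straightforward but exponential in n; the tiny instance below is invented.

```python
from itertools import product

def count_exact_sat(n_vars, clauses):
    """Count assignments in which exactly one literal of every clause is true.

    A literal is +i for variable i and -i for its negation (1-indexed).
    """
    count = 0
    for bits in product([False, True], repeat=n_vars):
        if all(sum(bits[abs(l) - 1] == (l > 0) for l in clause) == 1
               for clause in clauses):
            count += 1
    return count

# Hypothetical instance with clauses {x1, x2, not x3} and {not x1, x3}.
print(count_exact_sat(3, [[1, 2, -3], [-1, 3]]))
```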
An Exact Algorithm to Compute the Double-Cut-and-Join Distance for Genomes with Duplicate Genes.
Shao, Mingfu; Lin, Yu; Moret, Bernard M E
2015-05-01
Computing the edit distance between two genomes is a basic problem in the study of genome evolution. The double-cut-and-join (DCJ) model has formed the basis for most algorithmic research on rearrangements over the last few years. The edit distance under the DCJ model can be computed in linear time for genomes without duplicate genes, while the problem becomes NP-hard in the presence of duplicate genes. In this article, we propose an integer linear programming (ILP) formulation to compute the DCJ distance between two genomes with duplicate genes. We also provide an efficient preprocessing approach to simplify the ILP formulation while preserving optimality. Comparison on simulated genomes demonstrates that our method outperforms MSOAR in computing the edit distance, especially when the genomes contain long duplicated segments. We also apply our method to assign orthologous gene pairs among human, mouse, and rat genomes, where once again our method outperforms MSOAR.
Liu, Lan; Jiang, Tao
2007-01-01
With the launch of the international HapMap project, the haplotype inference problem has attracted a great deal of attention in the computational biology community recently. In this paper, we study the question of how to efficiently infer haplotypes from genotypes of individuals related by a pedigree without mating loops, assuming that the hereditary process was free of mutations (i.e. the Mendelian law of inheritance) and recombinants. We model the haplotype inference problem as a system of linear equations as in [10] and present an (optimal) linear-time (i.e. O(mn) time) algorithm to generate a particular solution (A particular solution of any linear system is an assignment of numerical values to the variables in the system which satisfies the equations in the system.) to the haplotype inference problem, where m is the number of loci (or markers) in a genotype and n is the number of individuals in the pedigree. Moreover, the algorithm also provides a general solution (A general solution of any linear system is given by the span of a basis in the solution space to its associated homogeneous system, offset from the origin by a vector, namely by any particular solution. A general solution for ZRHC is very useful in practice because it allows the end user to efficiently enumerate all solutions for ZRHC and perform tasks such as random sampling.) in O(mn²) time, which is optimal because the size of a general solution could be as large as Θ(mn²). The key ingredients of our construction are (i) a fast consistency checking procedure for the system of linear equations introduced in [10] based on a careful investigation of the relationship between the equations, and (ii) a novel linear-time method for solving linear equations without invoking the Gaussian elimination method. Although such a fast method for solving equations is not known for general systems of linear equations, we take advantage of the underlying loop-free pedigree graph and some special properties of the linear equations.
Geometry Helps to Compare Persistence Diagrams
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kerber, Michael; Morozov, Dmitriy; Nigmetov, Arnur
2015-11-16
Exploiting geometric structure to improve the asymptotic complexity of discrete assignment problems is a well-studied subject. In contrast, the practical advantages of using geometry for such problems have not been explored. We implement geometric variants of the Hopcroft-Karp algorithm for bottleneck matching (based on previous work by Efrat et al.), and of the auction algorithm by Bertsekas for Wasserstein distance computation. Both implementations use k-d trees to replace a linear scan with a geometric proximity query. Our interest in this problem stems from the desire to compute distances between persistence diagrams, a problem that comes up frequently in topological data analysis. We show that our geometric matching algorithms lead to a substantial performance gain, both in running time and in memory consumption, over their purely combinatorial counterparts. Moreover, our implementation significantly outperforms the only other implementation available for comparing persistence diagrams.
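The geometric ingredient, replacing a linear scan with a k-d tree proximity query, can be sketched as follows; the point sets are random stand-ins for persistence diagrams, and the real matchers embed such queries inside Hopcroft-Karp or the auction algorithm rather than the plain nearest-neighbour query shown here.

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
diag_a = rng.random((500, 2))   # (birth, death) points of diagram A
diag_b = rng.random((500, 2))   # (birth, death) points of diagram B

tree = cKDTree(diag_b)          # geometric index over diagram B

# One proximity query per point of A instead of a full linear scan over B;
# L-infinity distances, as used for the bottleneck metric.
dist, idx = tree.query(diag_a, k=1, p=np.inf)
print(dist.max())               # largest nearest-neighbour distance (not the true bottleneck cost)
```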
Constrained model predictive control, state estimation and coordination
NASA Astrophysics Data System (ADS)
Yan, Jun
In this dissertation, we study the interaction between the control performance and the quality of the state estimation in a constrained Model Predictive Control (MPC) framework for systems with stochastic disturbances. This consists of three parts: (i) the development of a constrained MPC formulation that adapts to the quality of the state estimation via constraints; (ii) the application of such a control law in a multi-vehicle formation coordinated control problem in which each vehicle operates subject to a no-collision constraint posed by others' imperfect prediction computed from finite bit-rate, communicated data; (iii) the design of the predictors and the communication resource assignment problem that satisfy the performance requirement from Part (ii). Model Predictive Control (MPC) is of interest because it is one of the few control design methods which preserves standard design variables and yet handles constraints. MPC is normally posed as a full-state feedback control and is implemented in a certainty-equivalence fashion with best estimates of the states being used in place of the exact state. However, if the state constraints were handled in the same certainty-equivalence fashion, the resulting control law could drive the real state to violate the constraints frequently. Part (i) focuses on exploring the inclusion of state estimates into the constraints. It does this by applying constrained MPC to a system with stochastic disturbances. The stochastic nature of the problem requires re-posing the constraints in a probabilistic form. In Part (ii), we consider applying constrained MPC as a local control law in a coordinated control problem of a group of distributed autonomous systems. Interactions between the systems are captured via constraints. First, we inspect the application of constrained MPC to a completely deterministic case. Formation stability theorems are derived for the subsystems and conditions on the local constraint set are derived in order to guarantee local stability or convergence to a target state. If these conditions are met for all subsystems, then this stability is inherited by the overall system. For the case when each subsystem suffers from disturbances in the dynamics, own self-measurement noises, and quantization errors on neighbors' information due to the finite-bit-rate channels, the constrained MPC strategy developed in Part (i) is appropriate to apply. In Part (iii), we discuss the local predictor design and bandwidth assignment problem in a coordinated vehicle formation context. The MPC controller used in Part (ii) relates the formation control performance and the information quality in the way that large standoff implies conservative performance. We first develop an LMI (Linear Matrix Inequality) formulation for cross-estimator design in a simple two-vehicle scenario with non-standard information: one vehicle does not have access to the other's exact control value applied at each sampling time, but to its known, pre-computed, coupling linear feedback control law. Then a similar LMI problem is formulated for the bandwidth assignment problem that minimizes the total number of bits by adjusting the prediction gain matrices and the number of bits assigned to each variable. (Abstract shortened by UMI.)
NASA Astrophysics Data System (ADS)
Murali, R. V.; Puri, A. B.; Fathi, Khalid
2010-10-01
This paper presents an extended version of a study already undertaken on the development of an artificial neural network (ANN) model for assigning workforce to virtual cells under virtual cellular manufacturing systems (VCMS) environments. Previously, the same authors introduced this concept and applied it to virtual cells in a two-cell configuration, and the results demonstrated that ANNs could be a worthwhile tool for carrying out workforce assignments. In this attempt, three-cell configuration problems are considered for the worker assignment task. Virtual cells are formed under a dual resource constraint (DRC) context in which the number of available workers is less than the total number of machines available. Since worker assignment tasks are highly non-linear and dynamic under varying inputs and conditions, and since ANNs have the ability to model complex relationships between inputs and outputs and find similar patterns effectively, an attempt was made earlier to employ ANNs for the above task. In this paper, the multilayered perceptron with feed-forward (MLP-FF) neural network model is reused for worker assignment tasks in three-cell configurations under the DRC context, and its performance at different time periods is analyzed. The previously proposed worker assignment model has been reconfigured, and cell formation solutions available for the three-cell configuration in the literature are used in combination to generate datasets for training the ANN framework. Finally, results of the study are presented and discussed.
Split delivery vehicle routing problem with time windows: a case study
NASA Astrophysics Data System (ADS)
Latiffianti, E.; Siswanto, N.; Firmandani, R. A.
2018-04-01
This paper aims to implement an extension of the VRP, the split delivery vehicle routing problem (SDVRP) with time windows, in a case study involving pickups and deliveries of workers from several points of origin to several destinations. Each origin represents a bus stop and each destination represents either a site or an office location. An integer linear programming formulation of the SDVRP is presented. The solution was generated in three stages: defining the starting points, assigning buses, and solving the SDVRP with time windows using an exact method. Although the overall computational time was relatively lengthy, the results indicate that the produced solution was better than the existing routing and scheduling used by the firm. The produced solution also reduced fuel cost by 9%, obtained from a shorter total distance travelled by the shuttle buses.
2013-03-30
Abstract: We study multi-robot routing problems (MR-LDR) where a team of robots has to visit a set of given targets with linear decreasing rewards over... time, such as required for the delivery of goods to rescue sites after disasters. The objective of MR-LDR is to find an assignment of targets to... We develop a mixed integer program that solves MR-LDR optimally with a flow-type formulation and can be solved faster than the standard TSP-type
Unifying Temporal and Structural Credit Assignment Problems
NASA Technical Reports Server (NTRS)
Agogino, Adrian K.; Tumer, Kagan
2004-01-01
Single-agent reinforcement learners in time-extended domains and multi-agent systems share a common dilemma known as the credit assignment problem. Multi-agent systems have the structural credit assignment problem of determining the contributions of a particular agent to a common task. In contrast, time-extended single-agent systems have the temporal credit assignment problem of determining the contribution of a particular action to the quality of the full sequence of actions. Traditionally these two problems are considered different and are handled in separate ways. In this article we show how these two forms of the credit assignment problem are equivalent. In this unified framework, a single-agent Markov decision process can be broken down into a single-time-step multi-agent process. Furthermore, we show that Monte-Carlo estimation or Q-learning (depending on whether the values of resulting actions in the episode are known at the time of learning) are equivalent to different agent utility functions in a multi-agent system. This equivalence shows how an often neglected issue in multi-agent systems is equivalent to a well-known deficiency in multi-time-step learning and lays the basis for solving time-extended multi-agent problems, where both credit assignment problems are present.
2017-03-23
solutions obtained through their proposed method to comparative instances of a generalized assignment problem with either ordinal cost components or... method flag: Designates the method by which the changed/ new assignment problem instance is solved. methodFlag = 0:SMAWarmstart Returns a matching...of randomized perturbations. We examine the contrasts between these methods in the context of assigning Army Officers among a set of identified
A Multiple Ant Colony Metaheuristic for the Air Refueling Tanker Assignment Problem
2002-03-01
Problem: The tanker assignment problem can be modeled as a job shop scheduling problem (JSSP). The JSSP is made up of n jobs, composed of m ordered... points) to be processed on all the machines (tankers). The problem with using the JSSP is that the tanker assignment problem has multiple objectives... The JSSP will minimize the time it takes for all jobs, but this may take an inordinate number of tankers. Thus using the JSSP alone is not necessarily a good
Warid, Warid; Hizam, Hashim; Mariun, Norman; Abdul-Wahab, Noor Izzri
2016-01-01
This paper proposes a new formulation for the multi-objective optimal power flow (MOOPF) problem for meshed power networks considering distributed generation. An efficacious multi-objective fuzzy linear programming optimization (MFLP) algorithm is proposed to solve the aforementioned problem with and without considering the distributed generation (DG) effect. A variant combination of objectives is considered for simultaneous optimization, including power loss, voltage stability, and shunt capacitors MVAR reserve. Fuzzy membership functions for these objectives are designed with extreme targets, whereas the inequality constraints are treated as hard constraints. The multi-objective fuzzy optimal power flow (OPF) formulation was converted into a crisp OPF in a successive linear programming (SLP) framework and solved using an efficient interior point method (IPM). To test the efficacy of the proposed approach, simulations are performed on the IEEE 30-busand IEEE 118-bus test systems. The MFLP optimization is solved for several optimization cases. The obtained results are compared with those presented in the literature. A unique solution with a high satisfaction for the assigned targets is gained. Results demonstrate the effectiveness of the proposed MFLP technique in terms of solution optimality and rapid convergence. Moreover, the results indicate that using the optimal DG location with the MFLP algorithm provides the solution with the highest quality. PMID:26954783
Some single-machine scheduling problems with learning effects and two competing agents.
Li, Hongjie; Li, Zeyuan; Yin, Yunqiang
2014-01-01
This study considers a scheduling environment in which there are two agents and a set of jobs, each of which belongs to one of the two agents and its actual processing time is defined as a decreasing linear function of its starting time. Each of the two agents competes to process its respective jobs on a single machine and has its own scheduling objective to optimize. The objective is to assign the jobs so that the resulting schedule performs well with respect to the objectives of both agents. The objective functions addressed in this study include the maximum cost, the total weighted completion time, and the discounted total weighted completion time. We investigate three problems arising from different combinations of the objectives of the two agents. The computational complexity of the problems is discussed and solution algorithms where possible are presented.
Achieving spectrum conservation for the minimum-span and minimum-order frequency assignment problems
NASA Technical Reports Server (NTRS)
Heyward, Ann O.
1992-01-01
Effective and efficient solutions of frequency assignment problems assume increasing importance as the radiofrequency spectrum experiences ever-increasing utilization by diverse communications services, requiring that the most efficient use of this resource be achieved. The research presented explores a general approach to the frequency assignment problem, in which such problems are categorized by the appropriate spectrum-conserving objective function, and are each treated as an N-job, M-machine scheduling problem appropriate for the objective. Results obtained and presented illustrate that such an approach presents an effective means of achieving spectrum-conserving frequency assignments for communications systems in a variety of environments.
NASA Astrophysics Data System (ADS)
Woradit, Kampol; Guyot, Matthieu; Vanichchanunt, Pisit; Saengudomlert, Poompat; Wuttisittikulkij, Lunchakorn
While the problem of multicast routing and wavelength assignment (MC-RWA) in optical wavelength division multiplexing (WDM) networks has been investigated, relatively few researchers have considered network survivability for multicasting. This paper provides an optimization framework to solve the MC-RWA problem in a multi-fiber WDM network that can recover from a single-link failure with shared protection. Using the light-tree (LT) concept to support multicast sessions, we consider two protection strategies that try to reduce service disruptions after a link failure. The first strategy, called light-tree reconfiguration (LTR) protection, computes a new multicast LT for each session affected by the failure. The second strategy, called optical branch reconfiguration (OBR) protection, tries to restore a logical connection between two adjacent multicast members disconnected by the failure. To solve the MC-RWA problem optimally, we propose an integer linear programming (ILP) formulation that minimizes the total number of fibers required for both working and backup traffic. The ILP formulation takes into account joint routing of working and backup traffic, the wavelength continuity constraint, and the limited splitting degree of multicast-capable optical cross-connects (MC-OXCs). After showing some numerical results for optimal solutions, we propose heuristic algorithms that reduce the computational complexity and make the problem solvable for large networks. Numerical results suggest that the proposed heuristic yields efficient solutions compared to optimal solutions obtained from exact optimization.
NASA Astrophysics Data System (ADS)
Giaccu, Gian Felice
2018-05-01
Pre-tensioned cable braces are widely used as bracing systems in various structural typologies. This technology is fundamentally utilized for stiffening purposes in the case of steel and timber structures. The pre-stressing force imparted to the braces provides the system with a remarkable increment of stiffness. On the other hand, the pre-tensioning force in the braces must be properly calibrated in order to satisfactorily meet both serviceability and ultimate limit states. The dynamic properties of these systems are, however, affected by non-linear behavior due to potential slackening of the pre-tensioned braces. In recent years the author has been working on a similar problem regarding the non-linear response of cables in cable-stayed bridges and braced structures. In the present paper a displacement-based approach is used to examine the non-linear behavior of a building system. The methodology operates through linearization and allows obtaining an equivalent linearized frequency to approximately characterize, mode by mode, the dynamic behavior of the system. The equivalent frequency depends on the mechanical characteristics of the system, the pre-tensioning level assigned to the braces, and a characteristic vibration amplitude. The proposed approach can be used as a simplified technique, capable of linearizing the response of structural systems characterized by non-linearity induced by the slackening of pre-tensioned braces.
Cai, C; Rodet, T; Legoupil, S; Mohammad-Djafari, A
2013-11-01
Dual-energy computed tomography (DECT) makes it possible to get two fractions of basis materials without segmentation. One is the soft-tissue equivalent water fraction and the other is the hard-matter equivalent bone fraction. Practical DECT measurements are usually obtained with polychromatic x-ray beams. Existing reconstruction approaches based on linear forward models without counting the beam polychromaticity fail to estimate the correct decomposition fractions and result in beam-hardening artifacts (BHA). The existing BHA correction approaches either need to refer to calibration measurements or suffer from the noise amplification caused by the negative-log preprocessing and the ill-conditioned water and bone separation problem. To overcome these problems, statistical DECT reconstruction approaches based on nonlinear forward models counting the beam polychromaticity show great potential for giving accurate fraction images. This work proposes a full-spectral Bayesian reconstruction approach which allows the reconstruction of high quality fraction images from ordinary polychromatic measurements. This approach is based on a Gaussian noise model with unknown variance assigned directly to the projections without taking negative-log. Referring to Bayesian inferences, the decomposition fractions and observation variance are estimated by using the joint maximum a posteriori (MAP) estimation method. Subject to an adaptive prior model assigned to the variance, the joint estimation problem is then simplified into a single estimation problem. It transforms the joint MAP estimation problem into a minimization problem with a nonquadratic cost function. To solve it, the use of a monotone conjugate gradient algorithm with suboptimal descent steps is proposed. The performance of the proposed approach is analyzed with both simulated and experimental data. The results show that the proposed Bayesian approach is robust to noise and materials. It is also necessary to have the accurate spectrum information about the source-detector system. When dealing with experimental data, the spectrum can be predicted by a Monte Carlo simulator. For the materials between water and bone, less than 5% separation errors are observed on the estimated decomposition fractions. The proposed approach is a statistical reconstruction approach based on a nonlinear forward model counting the full beam polychromaticity and applied directly to the projections without taking negative-log. Compared to the approaches based on linear forward models and the BHA correction approaches, it has advantages in noise robustness and reconstruction accuracy.
Borghero, Francesco; Demontis, Francesco
2016-09-01
In the framework of geometrical optics, we consider the following inverse problem: given a two-parameter family of curves (congruence) (i.e., f(x,y,z)=c1,g(x,y,z)=c2), construct the refractive-index distribution function n=n(x,y,z) of a 3D continuous transparent inhomogeneous isotropic medium, allowing for the creation of the given congruence as a family of monochromatic light rays. We solve this problem by following two different procedures: 1. By applying Fermat's principle, we establish a system of two first-order linear nonhomogeneous PDEs in the unique unknown function n=n(x,y,z) relating the assigned congruence of rays with all possible refractive-index profiles compatible with this family. Moreover, we furnish analytical proof that the family of rays must be a normal congruence. 2. By applying the eikonal equation, we establish a second system of two first-order linear homogeneous PDEs whose solutions give the equation S(x,y,z)=const. of the geometric wavefronts and, consequently, all pertinent refractive-index distribution functions n=n(x,y,z). Finally, we make a comparison between the two procedures described above, discussing appropriate examples having exact solutions.
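For reference, the two standard geometrical-optics relations underlying the procedures described above (textbook forms, not reproduced from the paper) are:

```latex
% Eikonal equation for the wavefronts S(x,y,z) = const in a medium n(x,y,z),
% and the ray equation obtained from Fermat's principle (s = arc length along a ray).
\[
  \lvert \nabla S \rvert^{2} = n^{2}(x,y,z),
  \qquad
  \frac{d}{ds}\!\left( n \,\frac{d\mathbf{r}}{ds} \right) = \nabla n .
\]
```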
NASA Astrophysics Data System (ADS)
Davoodi, M.; Meskin, N.; Khorasani, K.
2018-03-01
The problem of simultaneous fault detection, isolation and tracking (SFDIT) control design for linear systems subject to both bounded energy and bounded peak disturbances is considered in this work. A dynamic observer is proposed and implemented by using the H∞/H-/L1 formulation of the SFDIT problem. A single dynamic observer module is designed that generates the residuals as well as the control signals. The objective of the SFDIT module is to ensure that simultaneously the effects of disturbances and control signals on the residual signals are minimised (in order to accomplish the fault detection goal) subject to the constraint that the transfer matrix from the faults to the residuals is equal to a pre-assigned diagonal transfer matrix (in order to accomplish the fault isolation goal), while the effects of disturbances, reference inputs and faults on the specified control outputs are minimised (in order to accomplish the fault-tolerant and tracking control goals). A set of linear matrix inequality (LMI) feasibility conditions are derived to ensure solvability of the problem. In order to illustrate and demonstrate the effectiveness of our proposed design methodology, the developed and proposed schemes are applied to an autonomous unmanned underwater vehicle (AUV).
An Efficacy Study of Interleaved Mathematics Practice. Revised
ERIC Educational Resources Information Center
Rohrer, Doug; Dedrick, Robert F.; Burgess, Kaleena
2013-01-01
In a typical mathematics course, the material is divided into many lessons, and each lesson is followed by an assignment consisting of practice problems. Most commonly, each assignment consists solely of problems on the preceding lesson. For example, a lesson on ratios might be followed by an assignment with 12 problems on ratios. In other words,…
Capacity planning of link restorable optical networks under dynamic change of traffic
NASA Astrophysics Data System (ADS)
Ho, Kwok Shing; Cheung, Kwok Wai
2005-11-01
Future backbone networks will require full survivability and support for dynamic changes of traffic demands. The Generalized Survivable Networks (GSN) model was proposed to meet these challenges. GSN is fully survivable under dynamic traffic demand changes, so it offers a practical and guaranteed characterization framework for ASTN/ASON survivable network planning and bandwidth-on-demand resource allocation. The basic idea of GSN is to incorporate the non-blocking network concept into survivable network models. In GSN, each network node must specify its I/O capacity bound, which is taken as a constraint on any allowable traffic demand matrix. In this paper, we consider the following generic GSN network design problem: Given the I/O bounds of each network node, find a routing scheme (and the corresponding rerouting scheme under failure) and the link capacity assignment (both working and spare) which minimize the cost, such that any traffic matrix consistent with the given I/O bounds can be feasibly routed and is single-fault tolerant under the link restoration scheme. We first show how the initial, infeasible formal mixed integer programming formulation can be transformed into a more feasible problem using the duality transformation of the linear program. Then we show how the problem can be simplified using the Lagrangian Relaxation approach. Previous work has outlined a two-phase approach for solving this problem, where the first phase optimizes the working capacity assignment and the second phase optimizes the spare capacity assignment. In this paper, we present a jointly optimized framework for dimensioning the survivable optical network with the GSN model. Experimental results show that the jointly optimized GSN can bring about an average of 3.8% cost savings when compared with the separate, two-phase approach. Finally, we perform a cost comparison and show that GSN can be deployed at a reasonable cost.
NASA Astrophysics Data System (ADS)
Linde, N.; Vrugt, J. A.
2009-04-01
Geophysical models are increasingly used in hydrological simulations and inversions, where they are typically treated as an artificial data source with known uncorrelated "data errors". The model appraisal problem in classical deterministic linear and non-linear inversion approaches based on linearization is often addressed by calculating model resolution and model covariance matrices. These measures offer only a limited potential to assign a more appropriate "data covariance matrix" for future hydrological applications, simply because the regularization operators used to construct a stable inverse solution bear a strong imprint on such estimates and because the non-linearity of the geophysical inverse problem is not explored. We present a parallelized Markov Chain Monte Carlo (MCMC) scheme to efficiently derive the posterior spatially distributed radar slowness and water content between boreholes given first-arrival traveltimes. This method is called DiffeRential Evolution Adaptive Metropolis (DREAM_ZS) with snooker updater and sampling from past states. Our inverse scheme does not impose any smoothness on the final solution, and uses uniform prior ranges of the parameters. The posterior distribution of radar slowness is converted into spatially distributed soil moisture values using a petrophysical relationship. To benchmark the performance of DREAM_ZS, we first apply our inverse method to a synthetic two-dimensional infiltration experiment using 9421 traveltimes contaminated with Gaussian errors and 80 different model parameters, corresponding to a model discretization of 0.3 m × 0.3 m. After this, the method is applied to field data acquired in the vadose zone during snowmelt. This work demonstrates that fully non-linear stochastic inversion can be applied with few limiting assumptions to a range of common two-dimensional tomographic geophysical problems. The main advantage of DREAM_ZS is that it provides a full view of the posterior distribution of spatially distributed soil moisture, which is key to appropriately treat geophysical parameter uncertainty and infer hydrologic models.
Zhao, Chuan-Li; Hsu, Chou-Jung; Hsu, Hua-Feng
2014-01-01
This paper considers single machine scheduling and due date assignment with setup time. The setup time is proportional to the length of the already processed jobs; that is, the setup time is past-sequence-dependent (p-s-d). It is assumed that a job's processing time depends on its position in a sequence. The objective functions include total earliness, the weighted number of tardy jobs, and the cost of due date assignment. We analyze these problems with two different due date assignment methods. We first consider the model with job-dependent position effects. For each case, by converting the problem to a series of assignment problems, we proved that the problems can be solved in O(n⁴) time. For the model with job-independent position effects, we proved that the problems can be solved in O(n³) time by providing a dynamic programming algorithm.
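The "series of assignment problems" device mentioned above can be illustrated generically: with position-dependent processing times, the contribution of job j placed in position r can be tabulated as a cost matrix and the sequencing solved as a linear assignment problem. The positional cost model below (a total-completion-time-style weight (n - r + 1) and a power-law position effect r^a) is a hypothetical stand-in, not the paper's exact objective.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

p = np.array([5.0, 3.0, 8.0, 6.0])   # hypothetical base processing times
a = 0.2                              # hypothetical position-effect exponent
n = len(p)

# cost[j, r]: contribution of job j in position r (1-indexed positions);
# the factor (n - r + 1) counts the jobs that complete at or after job j.
r = np.arange(1, n + 1)
cost = np.outer(p, (n - r + 1) * r ** a)

jobs, pos = linear_sum_assignment(cost)
order = jobs[np.argsort(pos)]        # job sequence read off from assigned positions
print(order, cost[jobs, pos].sum())
```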
QUICR-learning for Multi-Agent Coordination
NASA Technical Reports Server (NTRS)
Agogino, Adrian K.; Tumer, Kagan
2006-01-01
Coordinating multiple agents that need to perform a sequence of actions to maximize a system-level reward requires solving two distinct credit assignment problems. First, credit must be assigned for an action taken at time step t that results in a reward at a later time step t′ > t. Second, credit must be assigned for the contribution of agent i to the overall system performance. The first credit assignment problem is typically addressed with temporal difference methods such as Q-learning. The second credit assignment problem is typically addressed by creating custom reward functions. To address both credit assignment problems simultaneously, we propose the "Q Updates with Immediate Counterfactual Rewards-learning" (QUICR-learning) algorithm, designed to improve both the convergence properties and performance of Q-learning in large multi-agent problems. QUICR-learning is based on previous work on single-time-step counterfactual rewards described by the collectives framework. Results on a traffic congestion problem show that QUICR-learning is significantly better than a Q-learner using collectives-based (single-time-step counterfactual) rewards. In addition, QUICR-learning provides significant gains over conventional and local Q-learning. Additional results on a multi-agent grid-world problem show that the improvements due to QUICR-learning are not domain-specific and can provide up to a tenfold increase in performance over existing methods.
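As background for the temporal credit assignment discussion, a bare-bones tabular Q-learning update is sketched below; QUICR-learning's counterfactual reward shaping is not reproduced, and all sizes and constants are placeholders.

```python
import numpy as np

n_states, n_actions = 5, 2
Q = np.zeros((n_states, n_actions))
alpha, gamma = 0.1, 0.95             # learning rate and discount factor

def q_update(s, a, r, s_next):
    """Temporal-difference backup: credit action a in state s for the
    immediate reward r plus the discounted value of the best next action."""
    Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])

# One hypothetical transition: state 0, action 1, reward 1.0, next state 3.
q_update(0, 1, 1.0, 3)
print(Q[0])
```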
Wang, Zhaocai; Pu, Jun; Cao, Liling; Tan, Jian
2015-10-23
The unbalanced assignment problem (UAP) is to optimally resolve the problem of assigning n jobs to m individuals (m < n), such that the minimum cost or maximum profit is obtained. It is a vitally important NP-complete problem in operations management and applied mathematics, having numerous real-life applications. In this paper, we present a new parallel DNA algorithm for solving the unbalanced assignment problem using DNA molecular operations. We reasonably design flexible-length DNA strands representing different jobs and individuals, take appropriate steps, and obtain the solutions of the UAP in the proper length range in O(mn) time. We extend the application of DNA molecular operations and exploit their inherent parallelism to reduce the complexity of the computation.
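Independently of the DNA-computing construction, one common classical way to state such an unbalanced instance is to pad the cost matrix with dummy rows and solve it as a standard linear assignment problem; in this sketch each individual performs one job and the leftover jobs are absorbed by zero-cost dummies (variants in which every job must be covered replicate the individual rows instead). The costs are invented.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# m = 3 individuals, n = 5 jobs (m < n).
cost = np.array([[9., 2., 7., 8., 6.],
                 [6., 4., 3., 7., 5.],
                 [5., 8., 1., 8., 4.]])
n_dummy = cost.shape[1] - cost.shape[0]
padded = np.vstack([cost, np.zeros((n_dummy, cost.shape[1]))])  # dummy individuals

rows, cols = linear_sum_assignment(padded)
real = rows < cost.shape[0]                  # discard assignments to dummies
print(list(zip(rows[real], cols[real])), padded[rows, cols].sum())
```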
Measuring Conceptual Gains and Benefits of Student Problem Designs
NASA Astrophysics Data System (ADS)
Mandell, Eric; Snyder, Rachel; Oswald, Wayne
2011-10-01
Writing assignments can be an effective way of getting students to practice higher-order learning skills in physics. One example of such an assignment is that of problem design. One version of the problem design assignment asks the student to evaluate the material from a chapter, after all instruction and other activities are complete. The student is to decide what concepts and ideas are most central, or critical in the chapter, and construct a problem that he or she feels best encompasses the major themes. Here, we use two concept surveys (FCI and EMCS) to measure conceptual gains for students completing the problem design assignment and present the preliminary results, comparing across several categories including gender, age, degree program, and class standing.
White Matter Tract Segmentation as Multiple Linear Assignment Problems
Sharmin, Nusrat; Olivetti, Emanuele; Avesani, Paolo
2018-01-01
Diffusion magnetic resonance imaging (dMRI) allows to reconstruct the main pathways of axons within the white matter of the brain as a set of polylines, called streamlines. The set of streamlines of the whole brain is called the tractogram. Organizing tractograms into anatomically meaningful structures, called tracts, is known as the tract segmentation problem, with important applications to neurosurgical planning and tractometry. Automatic tract segmentation techniques can be unsupervised or supervised. A common criticism of unsupervised methods, like clustering, is that there is no guarantee to obtain anatomically meaningful tracts. In this work, we focus on supervised tract segmentation, which is driven by prior knowledge from anatomical atlases or from examples, i.e., segmented tracts from different subjects. We present a supervised tract segmentation method that segments a given tract of interest in the tractogram of a new subject using multiple examples as prior information. Our proposed tract segmentation method is based on the idea of streamline correspondence i.e., on finding corresponding streamlines across different tractograms. In the literature, streamline correspondence has been addressed with the nearest neighbor (NN) strategy. Differently, here we formulate the problem of streamline correspondence as a linear assignment problem (LAP), which is a cornerstone of combinatorial optimization. With respect to the NN, the LAP introduces a constraint of one-to-one correspondence between streamlines, that forces the correspondences to follow the local anatomical differences between the example and the target tract, neglected by the NN. In the proposed solution, we combined the Jonker-Volgenant algorithm (LAPJV) for solving the LAP together with an efficient way of computing the nearest neighbors of a streamline, which massively reduces the total amount of computations needed to segment a tract. Moreover, we propose a ranking strategy to merge correspondences coming from different examples. We validate the proposed method on tractograms generated from the human connectome project (HCP) dataset and compare the segmentations with the NN method and the ROI-based method. The results show that LAP-based segmentation is vastly more accurate than ROI-based segmentation and substantially more accurate than the NN strategy. We provide a Free/OpenSource implementation of the proposed method. PMID:29467600
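A compact illustration of the contrast drawn above between nearest-neighbour correspondence and the LAP (random numbers stand in for example-to-target streamline distances; this is not the authors' LAPJV-based pipeline):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(1)
dist = rng.random((6, 6))                # hypothetical streamline distance matrix

nn = dist.argmin(axis=1)                 # NN strategy: targets may be reused
rows, lap = linear_sum_assignment(dist)  # LAP: enforced one-to-one correspondence

print("NN  targets:", nn)                # duplicates are possible here
print("LAP targets:", lap)               # always a permutation
```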
Analyzing Quadratic Unconstrained Binary Optimization Problems Via Multicommodity Flows
Wang, Di; Kleinberg, Robert D.
2009-01-01
Quadratic Unconstrained Binary Optimization (QUBO) problems concern the minimization of quadratic polynomials in n {0, 1}-valued variables. These problems are NP-complete, but prior work has identified a sequence of polynomial-time computable lower bounds on the minimum value, denoted by C2, C3, C4,…. It is known that C2 can be computed by solving a maximum-flow problem, whereas the only previously known algorithms for computing Ck (k > 2) require solving a linear program. In this paper we prove that C3 can be computed by solving a maximum multicommodity flow problem in a graph constructed from the quadratic function. In addition to providing a lower bound on the minimum value of the quadratic function on {0, 1}n, this multicommodity flow problem also provides some information about the coordinates of the point where this minimum is achieved. By looking at the edges that are never saturated in any maximum multicommodity flow, we can identify relational persistencies: pairs of variables that must have the same or different values in any minimizing assignment. We furthermore show that all of these persistencies can be detected by solving single-commodity flow problems in the same network. PMID:20161596
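For readers unfamiliar with the QUBO setting, the sketch below spells out the objective being bounded: brute-force minimization of a quadratic polynomial in {0, 1}-valued variables on a tiny hypothetical instance. It does not implement the flow-based bounds C2 or C3; any valid lower bound must lie at or below the value it prints.

```python
# Minimal illustration of the QUBO objective (not the flow-based bounds
# from the paper): brute-force minimization of a quadratic polynomial in
# {0,1}-valued variables for a tiny hypothetical instance.
import itertools
import numpy as np

Q = np.array([[ 2., -3.,  1.],     # hypothetical quadratic coefficients
              [ 0.,  1., -2.],
              [ 0.,  0.,  1.]])
c = np.array([-1., 0., 2.])        # hypothetical linear coefficients

def qubo_value(x):
    x = np.asarray(x, dtype=float)
    return float(x @ Q @ x + c @ x)

best = min((qubo_value(x), x) for x in itertools.product((0, 1), repeat=3))
print("minimum value:", best[0], "attained at:", best[1])
```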
The airport gate assignment problem: a survey.
Bouras, Abdelghani; Ghaleb, Mageed A; Suryahatmaja, Umar S; Salem, Ahmed M
2014-01-01
The airport gate assignment problem (AGAP) is one of the most important problems operations managers face daily. Much research has been done to solve this problem and tackle its complexity. The objective of the task is to assign each flight (aircraft) to an available gate while maximizing both convenience to passengers and the operational efficiency of the airport. This objective requires a solution that provides the ability to change and update the gate assignment data on a real-time basis. In this paper, we survey the state of the art of these problems and the various methods used to obtain solutions. Our survey covers both theoretical and real-world AGAP, with descriptions of mathematical formulations and resolution methods such as exact algorithms, heuristic algorithms, and metaheuristic algorithms. We also outline research trends that can point researchers toward new problems in this area.
Nonlinear Modeling by Assembling Piecewise Linear Models
NASA Technical Reports Server (NTRS)
Yao, Weigang; Liou, Meng-Sing
2013-01-01
To preserve the nonlinearity of a full-order system over a parameter range of interest, we propose a simple modeling approach by assembling a set of piecewise local solutions, including the first-order Taylor series terms expanded about some sampling states. The work by Rewienski and White inspired our use of piecewise linear local solutions. The assembly of these local approximations is accomplished by assigning nonlinear weights, through radial basis functions in this study. The efficacy of the proposed procedure is validated for a two-dimensional airfoil moving at different Mach numbers and pitching motions, under which the flow exhibits prominent nonlinear behaviors. All results confirm that our nonlinear model is accurate and stable for predicting not only aerodynamic forces but also detailed flowfields. Moreover, the model remains robust and accurate for inputs considerably different from the base trajectory in form and magnitude. This modeling preserves the nonlinearity of the problems considered in a rather simple and accurate manner.
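A minimal sketch of the assembly idea, assuming a Gaussian radial basis function and normalized weights (the paper's exact kernel and tuning are not given in the abstract): local first-order Taylor models anchored at sampling states are blended into a single nonlinear predictor.

```python
# Sketch of assembling local linear (first-order Taylor) models with
# radial-basis-function weights, in the spirit of the approach above.
# The Gaussian kernel and the normalised weighting are assumptions.
import numpy as np

class PiecewiseLinearModel:
    def __init__(self, centers, values, jacobians, width=1.0):
        self.centers = np.asarray(centers)      # sampling states x_i
        self.values = np.asarray(values)        # f(x_i)
        self.jacobians = np.asarray(jacobians)  # df/dx at x_i
        self.width = width

    def __call__(self, x):
        x = np.asarray(x, dtype=float)
        d2 = np.sum((self.centers - x) ** 2, axis=1)
        w = np.exp(-d2 / (2.0 * self.width ** 2))
        w = w / w.sum()                          # normalised RBF weights
        # Local first-order predictions blended by the weights.
        local = self.values + np.einsum('ijk,ik->ij',
                                        self.jacobians, x - self.centers)
        return (w[:, None] * local).sum(axis=0)

# Toy usage: approximate f(x) = sin(x0) + x1**2 from 3 sampling states.
f = lambda x: np.array([np.sin(x[0]) + x[1] ** 2])
J = lambda x: np.array([[np.cos(x[0]), 2 * x[1]]])
centers = np.array([[0.0, 0.0], [1.0, 0.5], [2.0, 1.0]])
model = PiecewiseLinearModel(centers,
                             [f(c) for c in centers],
                             [J(c) for c in centers], width=0.8)
print(model([1.2, 0.6]), "vs exact", f([1.2, 0.6]))
```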
Wang, Zhaocai; Pu, Jun; Cao, Liling; Tan, Jian
2015-01-01
The unbalanced assignment problem (UAP) is to optimally resolve the problem of assigning n jobs to m individuals (m < n), such that minimum cost or maximum profit is obtained. It is a vitally important Non-deterministic Polynomial (NP) complete problem in operations management and applied mathematics, with numerous real-life applications. In this paper, we present a new parallel DNA algorithm for solving the unbalanced assignment problem using DNA molecular operations. We design flexible-length DNA strands representing different jobs and individuals, take appropriate steps, and obtain the solutions of the UAP in the proper length range and O(mn) time. We extend the application of DNA molecular operations and exploit their simultaneity to reduce the complexity of the computation. PMID:26512650
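The DNA algorithm itself is not reproduced here; the sketch below only illustrates the underlying unbalanced assignment problem under one common interpretation (each individual performs exactly one job, surplus jobs go to zero-cost dummy individuals), solved classically by padding the cost matrix. The cost values are hypothetical.

```python
# Classical padding approach to the unbalanced assignment problem:
# m individuals, n > m jobs, dummy individuals absorb the extra jobs.
import numpy as np
from scipy.optimize import linear_sum_assignment

cost = np.array([[4., 2., 8., 7.],     # hypothetical: 2 individuals, 4 jobs
                 [3., 6., 5., 9.]])
m, n = cost.shape
padded = np.vstack([cost, np.zeros((n - m, n))])   # zero-cost dummy rows
rows, cols = linear_sum_assignment(padded)
for r, c in zip(rows, cols):
    if r < m:                                      # ignore dummy individuals
        print(f"individual {r} -> job {c} (cost {padded[r, c]})")
```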
NASA Astrophysics Data System (ADS)
Akram, Muhammad Farooq Bin
The management of technology portfolios is an important element of aerospace system design. New technologies are often applied to new product designs to ensure their competitiveness at the time they are introduced to market. The future performance of yet-to-be-designed components is inherently uncertain, necessitating subject matter expert knowledge, statistical methods and financial forecasting. Estimates of the appropriate parameter settings often come from disciplinary experts, who may disagree with each other because of varying experience and background. Due to the inherently uncertain nature of expert elicitation in the technology valuation process, appropriate uncertainty quantification and propagation are critical. The uncertainty in defining the impact of an input on the performance parameters of a system makes it difficult to use traditional probability theory. Often the available information is not enough to assign appropriate probability distributions to uncertain inputs. Another problem faced during technology elicitation pertains to technology interactions in a portfolio. When multiple technologies are applied simultaneously to a system, their cumulative impact is often non-linear. Current methods assume that technologies are either incompatible or linearly independent. It is observed that in the case of lack of knowledge about the problem, epistemic uncertainty is the most suitable representation of the process. It reduces the number of assumptions during the elicitation process, when experts are forced to assign probability distributions to their opinions without sufficient knowledge. Epistemic uncertainty can be quantified by many techniques. In the present research, it is proposed that interval analysis and the Dempster-Shafer theory of evidence are better suited for the quantification of epistemic uncertainty in the technology valuation process. The proposed technique seeks to offset some of the problems faced when using deterministic or traditional probabilistic approaches for uncertainty propagation. Non-linear behavior in technology interactions is captured through expert-elicitation-based technology synergy matrices (TSM). The proposed TSMs increase the fidelity of current technology forecasting methods by including higher-order technology interactions. A large-scale problem, a combined cycle power generation system, was selected as a test case for the quantification of epistemic uncertainty. A detailed multidisciplinary modeling and simulation environment was adopted for this problem. Results have shown that the evidence-theory-based technique provides more insight into the uncertainties arising from incomplete information or lack of knowledge compared with deterministic or probability theory methods. Margin analysis was also carried out for both techniques. A detailed description of TSMs and their usage in conjunction with technology impact matrices and technology compatibility matrices is discussed. Various combination methods are also proposed for higher-order interactions, which can be applied according to expert opinion or historical data. The introduction of the technology synergy matrix enabled the capture of higher-order technology interactions and an improvement in predicted system performance.
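As a minimal illustration of the evidence-theory building block mentioned above, the sketch below applies Dempster's rule of combination to two hypothetical expert mass functions over a small frame of discernment; it does not reproduce the technology synergy matrices or the interval-analysis machinery of the study.

```python
# Dempster's rule of combination for two basic probability assignments.
# Focal elements are frozensets; all numbers are hypothetical.
from itertools import product

def combine(m1, m2):
    """Combine two mass functions; return fused masses and conflict mass."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb
    k = 1.0 - conflict                       # normalisation constant
    return {s: w / k for s, w in combined.items()}, conflict

low, med, high = frozenset({'low'}), frozenset({'medium'}), frozenset({'high'})
theta = low | med | high                     # full frame (ignorance)
expert1 = {high: 0.6, med | high: 0.3, theta: 0.1}
expert2 = {med: 0.5, med | high: 0.4, theta: 0.1}
fused, conflict = combine(expert1, expert2)
print("conflict mass:", round(conflict, 3))
for focal, mass in fused.items():
    print(set(focal), round(mass, 3))
```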
Comparing genomes with rearrangements and segmental duplications.
Shao, Mingfu; Moret, Bernard M E
2015-06-15
Large-scale evolutionary events such as genomic rearrangements and segmental duplications form an important part of the evolution of genomes and are widely studied from both biological and computational perspectives. A basic computational problem is to infer these events in the evolutionary history for given modern genomes, a task for which many algorithms have been proposed under various constraints. Algorithms that can handle both rearrangements and content-modifying events such as duplications and losses remain few and limited in their applicability. We study the comparison of two genomes under a model including general rearrangements (through double-cut-and-join) and segmental duplications. We formulate the comparison as an optimization problem and describe an exact algorithm to solve it by using an integer linear program. We also devise a sufficient condition and an efficient algorithm to identify optimal substructures, which can simplify the problem while preserving optimality. Using the optimal substructures with the integer linear program (ILP) formulation yields a practical and exact algorithm to solve the problem. We then apply our algorithm to assign in-paralogs and orthologs (a necessary step in handling duplications) and compare its performance with that of the state-of-the-art method MSOAR, using both simulations and real data. On simulated datasets, our method outperforms MSOAR by a significant margin, and on five well-annotated species, MSOAR achieves high accuracy, yet our method performs slightly better on each of the 10 pairwise comparisons. http://lcbb.epfl.ch/softwares/coser. © The Author 2015. Published by Oxford University Press.
A scalable approach to solving dense linear algebra problems on hybrid CPU-GPU systems
Song, Fengguang; Dongarra, Jack
2014-10-01
Aiming to fully exploit the computing power of all CPUs and all graphics processing units (GPUs) on hybrid CPU-GPU systems to solve dense linear algebra problems, in this paper we design a class of heterogeneous tile algorithms to maximize the degree of parallelism, to minimize the communication volume, and to accommodate the heterogeneity between CPUs and GPUs. The new heterogeneous tile algorithms are executed upon our decentralized dynamic scheduling runtime system, which schedules a task graph dynamically and transfers data between compute nodes automatically. The runtime system uses a new distributed task assignment protocol to solve data dependencies between tasks without any coordination between processing units. By overlapping computation and communication through dynamic scheduling, we are able to attain scalable performance for the double-precision Cholesky factorization and QR factorization. Finally, our approach demonstrates a performance comparable to Intel MKL on shared-memory multicore systems and better performance than both vendor (e.g., Intel MKL) and open source libraries (e.g., StarPU) in the following three environments: heterogeneous clusters with GPUs, conventional clusters without GPUs, and shared-memory systems with multiple GPUs.
Minimizing embedding impact in steganography using trellis-coded quantization
NASA Astrophysics Data System (ADS)
Filler, Tomáš; Judas, Jan; Fridrich, Jessica
2010-01-01
In this paper, we propose a practical approach to minimizing embedding impact in steganography based on syndrome coding and trellis-coded quantization and contrast its performance with bounds derived from appropriate rate-distortion bounds. We assume that each cover element can be assigned a positive scalar expressing the impact of making an embedding change at that element (single-letter distortion). The problem is to embed a given payload with minimal possible average embedding impact. This task, which can be viewed as a generalization of matrix embedding or writing on wet paper, has been approached using heuristic and suboptimal tools in the past. Here, we propose a fast and very versatile solution to this problem that can theoretically achieve performance arbitrarily close to the bound. It is based on syndrome coding using linear convolutional codes with the optimal binary quantizer implemented using the Viterbi algorithm run in the dual domain. The complexity and memory requirements of the embedding algorithm are linear w.r.t. the number of cover elements. For practitioners, we include detailed algorithms for finding good codes and their implementation. Finally, we report extensive experimental results for a large set of relative payloads and for different distortion profiles, including the wet paper channel.
Primal-dual techniques for online algorithms and mechanisms
NASA Astrophysics Data System (ADS)
Liaghat, Vahid
An offline algorithm is one that knows the entire input in advance. An online algorithm, however, processes its input in a serial fashion. In contrast to offline algorithms, an online algorithm works in a local fashion and has to make irrevocable decisions without having the entire input. Online algorithms are often not optimal since their irrevocable decisions may turn out to be inefficient after receiving the rest of the input. For a given online problem, the goal is to design algorithms which are competitive against the offline optimal solutions. In a classical offline scenario, it is common to see a dual analysis of problems that can be formulated as a linear or convex program. Primal-dual and dual-fitting techniques have been successfully applied to many such problems. Unfortunately, the usual tricks fall short in an online setting since an online algorithm should make decisions without knowing even the whole program. In this thesis, we study the competitive analysis of fundamental problems in the literature, such as different variants of online matching and online Steiner connectivity, via online dual techniques. Although there are many generic tools for solving an optimization problem in the offline paradigm, in comparison, much less is known for tackling online problems. The main focus of this work is to design generic techniques for solving integral linear optimization problems where the solution space is restricted via a set of linear constraints. A general family of these problems is online packing/covering problems. Our work shows that for several seemingly unrelated problems, primal-dual techniques can be successfully applied as a unifying approach for analyzing these problems. We believe this leads to generic algorithmic frameworks for solving online problems. In the first part of the thesis, we show the effectiveness of our techniques in stochastic settings and their applications in Bayesian mechanism design. In particular, we introduce new techniques for solving a fundamental linear optimization problem, namely, the stochastic generalized assignment problem (GAP). This packing problem generalizes various problems such as online matching, ad allocation, bin packing, etc. We furthermore show applications of such results in mechanism design by introducing Prophet Secretary, a novel Bayesian model for online auctions. In the second part of the thesis, we focus on covering problems. We develop the framework of "Disk Painting" for a general class of network design problems that can be characterized by proper functions. This class generalizes the node-weighted and edge-weighted variants of several well-known Steiner connectivity problems. We furthermore design a generic technique for solving the prize-collecting variants of these problems when there exists a dual analysis for the non-prize-collecting counterparts. Hence, we solve the online prize-collecting variants of several network design problems for the first time. Finally, we focus on designing techniques for online problems with mixed packing/covering constraints. We initiate the study of degree-bounded graph optimization problems in the online setting by designing an online algorithm with a tight competitive ratio for the degree-bounded Steiner forest problem. We hope these techniques establish a starting point for the analysis of the important class of online degree-bounded optimization on graphs.
On the Global and Linear Convergence of the Generalized Alternating Direction Method of Multipliers
2012-08-01
[Only extraction fragments of this report survive: they reference the Gauss-Seidel order in which the ADM updates its variables (the less exact subproblem is solved later, as step 4 of Algorithm 2), an O(1/k) rate with an accelerated version descending at O(1/k^2), the same rates established for a Gauss-Seidel version in [14], and Fig. 5.1 (convergence curves of ADM for the elastic net problem; x-axis: iteration).]
NASA Astrophysics Data System (ADS)
Mukherjee, Sathi; Basu, Kajla
2010-10-01
In this paper we develop a methodology to solve the multiple attribute assignment problems where the attributes are considered to be Intuitionistic Fuzzy Sets (IFS). We apply the concept of similarity measures of IFS to solve the Intuitionistic Fuzzy Multi-Attribute Assignment Problem (IFMAAP). The weights of the attributes are determined from expert opinion. An illustrative example is solved to verify the developed approach and to demonstrate its practicality.
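The abstract does not specify the similarity measure or the data, so the sketch below is purely illustrative: it uses one commonly cited IFS similarity (based on absolute differences of membership and non-membership degrees), expert attribute weights, and an off-the-shelf assignment solver.

```python
# Similarity-based intuitionistic fuzzy assignment sketch; the paper's
# actual similarity measure, weights and data are assumptions here.
import numpy as np
from scipy.optimize import linear_sum_assignment

def ifs_similarity(a, b):
    """Similarity of two IFS values a=(mu,nu), b=(mu,nu)."""
    return 1.0 - 0.5 * (abs(a[0] - b[0]) + abs(a[1] - b[1]))

# Hypothetical ratings of 3 jobs and 3 candidates on 2 attributes.
jobs = np.array([[[0.8, 0.1], [0.6, 0.3]],
                 [[0.5, 0.4], [0.9, 0.0]],
                 [[0.7, 0.2], [0.4, 0.5]]])
candidates = np.array([[[0.7, 0.2], [0.5, 0.4]],
                       [[0.6, 0.3], [0.8, 0.1]],
                       [[0.9, 0.0], [0.3, 0.6]]])
weights = np.array([0.6, 0.4])              # attribute weights from experts

sim = np.zeros((len(jobs), len(candidates)))
for i, j in np.ndindex(sim.shape):
    per_attr = [ifs_similarity(jobs[i, k], candidates[j, k])
                for k in range(len(weights))]
    sim[i, j] = np.dot(weights, per_attr)

rows, cols = linear_sum_assignment(-sim)    # maximise total similarity
print(list(zip(rows, cols)))
```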
Sarkar, Jit; Cornuelle, Bruce D; Kuperman, W A
2011-09-01
Wave-theoretic ocean acoustic propagation modeling is used to derive the sensitivity of pressure, and complex demodulated amplitude and phase, at a receiver to the sound speed of the medium using the Born-Fréchet derivative. Although the procedure can be applied for pressure as a function of frequency instead of time, the time domain has advantages in practical problems, as linearity and signal-to-noise are more easily assigned in the time domain. The linearity and information content of these sensitivity kernels is explored for an example of a 3-4 kHz broadband pulse transmission in a 1 km shallow water Pekeris waveguide. Full-wave observations (pressure as a function of time) are seen to be too nonlinear for use in most practical cases, whereas envelope and phase data have a wider range of validity and provide complementary information. These results are used in simulated inversions with a more realistic sound speed profile, comparing the performance of amplitude and phase observations. © 2011 Acoustical Society of America
Self-organizing feature maps for dynamic control of radio resources in CDMA microcellular networks
NASA Astrophysics Data System (ADS)
Hortos, William S.
1998-03-01
The application of artificial neural networks to the channel assignment problem for code-division multiple access (CDMA) cellular networks has previously been investigated. CDMA takes advantage of voice activity and spatial isolation because its capacity is only interference-limited, unlike time-division multiple access (TDMA) and frequency-division multiple access (FDMA), where capacities are bandwidth-limited. Any reduction in interference in CDMA translates linearly into increased capacity. To satisfy the high demand for new services and improved connectivity for mobile communications, microcellular and picocellular systems are being introduced. For these systems, there is a need to develop robust and efficient management procedures for the allocation of power and spectrum to maximize radio capacity. Topology-conserving mappings play an important role in the biological processing of sensory inputs. The same principles underlying Kohonen's self-organizing feature maps (SOFMs) are applied to the adaptive control of radio resources to minimize interference and, hence, maximize capacity in direct-sequence (DS) CDMA networks. The approach based on SOFMs is applied to some published examples of both theoretical and empirical models of DS/CDMA microcellular networks in metropolitan areas. The results of the approach for these examples are informally compared to the performance of algorithms for the channel assignment problem based on Hopfield-Tank neural networks and on genetic algorithms.
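A minimal sketch of the Kohonen SOFM update rule underlying the approach above; the mapping from map units to CDMA power and spectrum allocations is not specified in the abstract and is omitted, and the input features are random placeholders.

```python
# Kohonen self-organising feature map: best-matching-unit search plus a
# neighbourhood-weighted weight update with decaying learning rate.
import numpy as np

rng = np.random.default_rng(1)
grid = np.array([(i, j) for i in range(6) for j in range(6)])  # 6x6 map
weights = rng.random((36, 3))               # 3-D input space (hypothetical)

def train(inputs, weights, epochs=50, lr0=0.5, sigma0=3.0):
    for t in range(epochs):
        lr = lr0 * np.exp(-t / epochs)
        sigma = sigma0 * np.exp(-t / epochs)
        for x in inputs:
            bmu = np.argmin(np.linalg.norm(weights - x, axis=1))
            # Gaussian neighbourhood function on the 2-D map lattice.
            d2 = np.sum((grid - grid[bmu]) ** 2, axis=1)
            h = np.exp(-d2 / (2.0 * sigma ** 2))
            weights += lr * h[:, None] * (x - weights)
    return weights

data = rng.random((200, 3))                 # e.g. normalised interference features
weights = train(data, weights)
```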
Automatic trajectory measurement of large numbers of crowded objects
NASA Astrophysics Data System (ADS)
Li, Hui; Liu, Ye; Chen, Yan Qiu
2013-06-01
Complex motion patterns of natural systems, such as fish schools, bird flocks, and cell groups, have attracted great attention from scientists for years. Trajectory measurement of individuals is vital for quantitative and high-throughput study of their collective behaviors. However, such data are rare, mainly due to the challenges of detecting and tracking large numbers of objects with similar visual features and frequent occlusions. We present an automatic and effective framework to measure trajectories of large numbers of crowded oval-shaped objects, such as fish and cells. We first use a novel dual ellipse locator to detect the coarse position of each individual and then propose a variance minimization active contour method to obtain the optimal segmentation results. For tracking, the cost matrix of assignments between consecutive frames is learned via a random forest classifier with many spatial, texture, and shape features. The optimal trajectories are found for the whole image sequence by solving two linear assignment problems. We evaluate the proposed method on many challenging data sets.
Multi-object tracking of human spermatozoa
NASA Astrophysics Data System (ADS)
Sørensen, Lauge; Østergaard, Jakob; Johansen, Peter; de Bruijne, Marleen
2008-03-01
We propose a system for tracking of human spermatozoa in phase-contrast microscopy image sequences. One of the main aims of a computer-aided sperm analysis (CASA) system is to automatically assess sperm quality based on spermatozoa motility variables. In our case, the problem of assessing sperm quality is cast as a multi-object tracking problem, where the objects being tracked are the spermatozoa. The system combines a particle filter and Kalman filters for robust motion estimation of the spermatozoa tracks. Further, the combinatorial aspect of assigning observations to labels in the particle filter is formulated as a linear assignment problem solved using the Hungarian algorithm on a rectangular cost matrix, making the algorithm capable of handling missing or spurious observations. The costs are calculated using hidden Markov models that express the plausibility of an observation being the next position in the track history of the particle labels. Observations are extracted using a scale-space blob detector utilizing the fact that the spermatozoa appear as bright blobs in a phase-contrast microscope. The output of the system is the complete motion track of each of the spermatozoa. Based on these tracks, different CASA motility variables can be computed, for example curvilinear velocity or straight-line velocity. The performance of the system is tested on three different phase-contrast image sequences of varying complexity, both by visual inspection of the estimated spermatozoa tracks and by measuring the mean squared error (MSE) between the estimated spermatozoa tracks and manually annotated tracks, showing good agreement.
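The sketch below illustrates the rectangular-assignment idea on a toy frame-to-frame association task: each track may be matched to an observation or to a dummy "missed" column with a fixed penalty. The costs are plain Euclidean distances rather than the HMM-based plausibilities used in the paper, and all numbers are hypothetical.

```python
# Frame-to-frame data association with an augmented cost matrix: one dummy
# column per track allows a track to stay unmatched (missed detection).
import numpy as np
from scipy.optimize import linear_sum_assignment

def associate(track_positions, observations, miss_cost=5.0):
    T, O = len(track_positions), len(observations)
    dist = np.linalg.norm(track_positions[:, None, :] -
                          observations[None, :, :], axis=2)
    cost = np.hstack([dist, np.full((T, T), miss_cost)])
    rows, cols = linear_sum_assignment(cost)
    return [(r, c if c < O else None) for r, c in zip(rows, cols)]

tracks = np.array([[0.0, 0.0], [4.0, 4.0], [9.0, 1.0]])
obs = np.array([[0.2, -0.1], [8.7, 1.3]])
print(associate(tracks, obs))   # track 1 is left unmatched (None)
```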
Evaluation of a Brief Homework Assignment Designed to Reduce Citation Problems
ERIC Educational Resources Information Center
Schuetze, Pamela
2004-01-01
I evaluated a brief homework assignment designed to reduce citation problems in research-based term papers. Students in 2 developmental psychology classes received a brief presentation and handout defining plagiarism with tips on how to cite sources to avoid plagiarizing. In addition, students in 1 class completed 2 brief homework assignments in…
Absolute Points for Multiple Assignment Problems
ERIC Educational Resources Information Center
Adlakha, V.; Kowalski, K.
2006-01-01
An algorithm is presented to solve multiple assignment problems in which a cost is incurred only when an assignment is made at a given cell. The proposed method recursively searches for single/group absolute points to identify cells that must be loaded in any optimal solution. Unlike other methods, the first solution is the optimal solution. The…
Computing Role Assignments of Proper Interval Graphs in Polynomial Time
NASA Astrophysics Data System (ADS)
Heggernes, Pinar; van't Hof, Pim; Paulusma, Daniël
A homomorphism from a graph G to a graph R is locally surjective if its restriction to the neighborhood of each vertex of G is surjective. Such a homomorphism is also called an R-role assignment of G. Role assignments have applications in distributed computing, social network theory, and topological graph theory. The Role Assignment problem has as input a pair of graphs (G,R) and asks whether G has an R-role assignment. This problem is NP-complete already on input pairs (G,R) where R is a path on three vertices. So far, the only known non-trivial tractable case consists of input pairs (G,R) where G is a tree. We present a polynomial time algorithm that solves Role Assignment on all input pairs (G,R) where G is a proper interval graph. Thus we identify the first graph class other than trees on which the problem is tractable. As a complementary result, we show that the problem is Graph Isomorphism-hard on chordal graphs, a superclass of proper interval graphs and trees.
Web-Based Problem-Solving Assignment and Grading System
NASA Astrophysics Data System (ADS)
Brereton, Giles; Rosenberg, Ronald
2014-11-01
In engineering courses with very specific learning objectives, such as fluid mechanics and thermodynamics, it is conventional to reinforce concepts and principles with problem-solving assignments and to measure success in problem solving as an indicator of student achievement. While the modern-day ease of copying and searching for online solutions can undermine the value of traditional assignments, web-based technologies also provide opportunities to generate individualized, well-posed problems with an infinite number of different combinations of initial/final/boundary conditions, so that the probability of any two students being assigned identical problems in a course is vanishingly small. Such problems can be designed and programmed to be single- or multiple-step and self-grading; to allow students single or multiple attempts; to provide feedback when incorrect; to be selectable according to difficulty; to be incorporated within gaming packages; etc. In this talk, we discuss the use of a homework/exam generating program of this kind in a single-semester course, within a web-based client-server system that ensures secure operation.
On nonlinear finite element analysis in single-, multi- and parallel-processors
NASA Technical Reports Server (NTRS)
Utku, S.; Melosh, R.; Islam, M.; Salama, M.
1982-01-01
Numerical solution of nonlinear equilibrium problems of structures by means of Newton-Raphson type iterations is reviewed. Each step of the iteration is shown to correspond to the solution of a linear problem, therefore the feasibility of the finite element method for nonlinear analysis is established. Organization and flow of data for various types of digital computers, such as single-processor/single-level memory, single-processor/two-level-memory, vector-processor/two-level-memory, and parallel-processors, with and without sub-structuring (i.e. partitioning) are given. The effect of the relative costs of computation, memory and data transfer on substructuring is shown. The idea of assigning comparable size substructures to parallel processors is exploited. Under Cholesky type factorization schemes, the efficiency of parallel processing is shown to decrease due to the occasional shared data, just as that due to the shared facilities.
Prediction of aquatic toxicity mode of action using linear discriminant and random forest models.
Martin, Todd M; Grulke, Christopher M; Young, Douglas M; Russom, Christine L; Wang, Nina Y; Jackson, Crystal R; Barron, Mace G
2013-09-23
The ability to determine the mode of action (MOA) for a diverse group of chemicals is a critical part of ecological risk assessment and chemical regulation. However, existing MOA assignment approaches in ecotoxicology have been limited to relatively few MOAs, have high uncertainty, or rely on professional judgment. In this study, machine learning algorithms (linear discriminant analysis and random forest) were used to develop models for assigning aquatic toxicity MOA. These methods were selected since they have been shown to be able to correlate diverse data sets and provide an indication of the most important descriptors. A data set of MOA assignments for 924 chemicals was developed using a combination of high-confidence assignments, international consensus classifications, ASTER (ASsessment Tools for the Evaluation of Risk) predictions, and weight-of-evidence professional judgment based on an assessment of structure and literature information. The overall data set was randomly divided into a training set (75%) and a validation set (25%) and then used to develop linear discriminant analysis (LDA) and random forest (RF) MOA assignment models. The LDA and RF models had high internal concordance and specificity and were able to produce overall prediction accuracies ranging from 84.5 to 87.7% for the validation set. These results demonstrate that computational chemistry approaches can be used to determine acute toxicity MOAs across a large range of structures and mechanisms.
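A sketch of the modelling step with scikit-learn, assuming a descriptor matrix and MOA labels are already in hand; the placeholders below stand in for the study's descriptors and 924-chemical data set, which are not reproduced here.

```python
# LDA and random forest classifiers on a 75/25 train/validation split,
# mirroring the setup described above. X and y are random placeholders.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(924, 20))               # placeholder descriptors
y = rng.integers(0, 6, size=924)             # placeholder MOA classes

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

for name, model in [("LDA", LinearDiscriminantAnalysis()),
                    ("RF", RandomForestClassifier(n_estimators=500,
                                                  random_state=0))]:
    model.fit(X_tr, y_tr)
    print(name, "validation accuracy:",
          round(accuracy_score(y_te, model.predict(X_te)), 3))
```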
An LMI approach for the Integral Sliding Mode and H∞ State Feedback Control Problem
NASA Astrophysics Data System (ADS)
Bezzaoucha, Souad; Henry, David
2015-11-01
This paper deals with the state feedback control problem for linear uncertain systems subject to both matched and unmatched perturbations. The proposed control law is based on the Integral Sliding Mode Control (ISMC) approach to tackle matched perturbations, as well as the H∞ paradigm for robustness against unmatched perturbations. The proposed method also parallels the work presented in [1], which addressed the same problem and proposed a solution involving an Algebraic Riccati Equation (ARE)-based formulation. The contribution of this paper concerns the establishment of a Linear Matrix Inequality (LMI)-based solution, which offers the possibility of considering other types of constraints such as 𝓓-stability constraints (pole assignment-like constraints). The proposed methodology is applied to a pilot three-tank system and experimental results illustrate its feasibility. Note that real experiments using SMC have rarely been reported in the past, due to the highly energetic behaviour of the control signal. It is important to point out that the paper does not aim at proposing an LMI formulation of an ARE. This has been done since 1971 [2] and is further discussed in [3], where the link between AREs and ARIs (algebraic Riccati inequalities) is established for the H∞ control problem. The main contribution of this paper is to establish the adequate LMI-based methodology (changes of matrix variables) so that the ARE that corresponds to the particular structure of the mixed ISMC/H∞ scheme proposed by [1] can be re-formulated within the LMI paradigm.
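The specific mixed ISMC/H∞ LMI is not reproduced in the abstract above. As a generic illustration of the change of matrix variables that turns state-feedback synthesis conditions into an LMI (an assumption of the standard quadratic-stability setup, not the paper's exact formulation), one searches for a symmetric X and a matrix Y such that

$$
A X + X A^{\top} + B Y + Y^{\top} B^{\top} \prec 0, \qquad X = X^{\top} \succ 0, \qquad K = Y X^{-1},
$$

which guarantees that the closed loop $\dot{x} = (A + BK)x$ is asymptotically stable; 𝓓-stability (pole-assignment-like) requirements add further LMIs in the same variables X and Y.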
More reliable protein NMR peak assignment via improved 2-interval scheduling.
Chen, Zhi-Zhong; Lin, Guohui; Rizzi, Romeo; Wen, Jianjun; Xu, Dong; Xu, Ying; Jiang, Tao
2005-03-01
Protein NMR peak assignment refers to the process of assigning a group of "spin systems" obtained experimentally to a protein sequence of amino acids. The automation of this process is still an unsolved and challenging problem in NMR protein structure determination. Recently, protein NMR peak assignment has been formulated as an interval scheduling problem (ISP), where a protein sequence P of amino acids is viewed as a discrete time interval I (the amino acids on P one-to-one correspond to the time units of I), each subset S of spin systems that are known to originate from consecutive amino acids of P is viewed as a "job" j(s), the preference of assigning S to a subsequence P' of consecutive amino acids on P is viewed as the profit of executing job j(s) in the subinterval of I corresponding to P', and the goal is to maximize the total profit of executing the jobs (on a single machine) during I. The interval scheduling problem is MAX SNP-hard in general; but in the real practice of protein NMR peak assignment, each job j(s) usually requires at most 10 consecutive time units, and typically the jobs that require one or two consecutive time units are the most difficult to assign/schedule. In order to solve these most difficult assignments, we present an efficient 13/7-approximation algorithm for the special case of the interval scheduling problem where each job takes one or two consecutive time units. Combining this algorithm with a greedy filtering strategy for handling long jobs (i.e., jobs that need more than two consecutive time units), we obtain a new efficient heuristic for protein NMR peak assignment. Our experimental study shows that the new heuristic produces the best peak assignment in most of the cases, compared with the NMR peak assignment algorithms in the recent literature. The above algorithm is also the first approximation algorithm for a nontrivial case of the well-known interval scheduling problem that breaks the ratio 2 barrier.
Single machine scheduling with slack due dates assignment
NASA Astrophysics Data System (ADS)
Liu, Weiguo; Hu, Xiangpei; Wang, Xuyin
2017-04-01
This paper considers a single machine scheduling problem in which each job is assigned an individual due date based on a common flow allowance (i.e. all jobs have slack due date). The goal is to find a sequence for jobs, together with a due date assignment, that minimizes a non-regular criterion comprising the total weighted absolute lateness value and common flow allowance cost, where the weight is a position-dependent weight. In order to solve this problem, an ? time algorithm is proposed. Some extensions of the problem are also shown.
A Stochastic Employment Problem
ERIC Educational Resources Information Center
Wu, Teng
2013-01-01
The Stochastic Employment Problem (SEP) is a variation of the Stochastic Assignment Problem which analyzes the scenario that one assigns balls into boxes. Balls arrive sequentially with each one having a binary vector X = (X_1, X_2, ..., X_n) attached, with the interpretation being that if X_i = 1 the ball…
ERIC Educational Resources Information Center
Saslow Gomez, Sarah A.; Faurie-Wisniewski, Danielle; Parsa, Arlen; Spitz, Jeff; Spitz, Jennifer Amdur; Loeb, Nancy C.; Geiger, Franz M.
2015-01-01
The classroom exercise outlined here is a self-directed assignment that connects students to the environmental contamination problem surrounding the DePue Superfund site. By connecting chemistry knowledge gained in the classroom with a real-world problem, students are encouraged to personally connect with the problem while simultaneously…
Pre-service teachers’ challenges in presenting mathematical problems
NASA Astrophysics Data System (ADS)
Desfitri, R.
2018-01-01
The purpose of this study was to analyze how pre-service teachers prepare and assign tasks or assignments in teaching practice situations. This study was also intended to discuss the kinds of tasks or assignments they gave to students. Participants of this study were 15 selected pre-service mathematics teachers from the mathematics education department who took part in a microteaching class as part of the teaching preparation program. Based on the data obtained, it was occasionally found that there were hidden errors in questions or tasks assigned by pre-service teachers, which might prevent their students from reaching a logical or correct answer. Although some answers might seem to be true, they were illogical or unfavourable. It is strongly recommended that pre-service teachers be more careful when posing mathematical problems so that students do not misunderstand the problems or the concepts, since both teachers and students were sometimes unaware of errors in the problems being worked on.
The role of service areas in the optimization of FSS orbital and frequency assignments
NASA Technical Reports Server (NTRS)
Levis, C. A.; Wang, C.-W.; Yamamura, Y.; Reilly, C. H.; Gonsalvez, D. J.
1986-01-01
An implicit relationship is derived which relates the topocentric separation of two satellites required for a given level of single-entry protection to the separation and orientation of their service areas. The results are presented explicitly for circular beams and topocentric angles. A computational approach is given for elliptical beams and for use with longitude and latitude variables. It is found that the geocentric separation depends primarily on the service area separation, secondarily on a parameter which characterizes the electrical design, and only slightly on the mean orbital position of the satellites. Both linear programming and mixed integer programming algorithms are implemented. Possible objective function choices are discussed, and explicit formulations are presented for the choice of the sum of the absolute deviations of the orbital locations from some prescribed 'ideal' location set. A test problem involving six service areas is examined with results that are encouraging with respect to applying the linear programming procedure to larger scenarios.
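The sum-of-absolute-deviations objective mentioned above can be linearized with auxiliary variables, which the sketch below illustrates on a hypothetical scenario with a fixed satellite ordering and minimum consecutive separations standing in for the paper's single-entry protection constraints.

```python
# Illustrative LP (not the paper's full formulation): minimise the sum of
# absolute deviations of orbital locations x_i from 'ideal' locations t_i,
# subject to a prescribed ordering and minimum consecutive spacings s_i.
# The absolute values are linearised with auxiliary variables d_i >= |x_i - t_i|.
import numpy as np
from scipy.optimize import linprog

t = np.array([100.0, 103.0, 105.0, 110.0])    # hypothetical ideal longitudes
s = np.array([4.0, 4.0, 4.0])                 # required consecutive spacing
n = len(t)

# Variables z = [x_1..x_n, d_1..d_n]; minimise sum(d).
c = np.concatenate([np.zeros(n), np.ones(n)])

A_ub, b_ub = [], []
for i in range(n):                            # d_i >= |x_i - t_i|
    row = np.zeros(2 * n); row[i] = 1.0; row[n + i] = -1.0
    A_ub.append(row); b_ub.append(t[i])       #  x_i - d_i <= t_i
    row = np.zeros(2 * n); row[i] = -1.0; row[n + i] = -1.0
    A_ub.append(row); b_ub.append(-t[i])      # -x_i - d_i <= -t_i
for i in range(n - 1):                        # x_{i+1} - x_i >= s_i
    row = np.zeros(2 * n); row[i] = 1.0; row[i + 1] = -1.0
    A_ub.append(row); b_ub.append(-s[i])

res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              bounds=[(None, None)] * n + [(0, None)] * n)
print("locations:", np.round(res.x[:n], 2), "total deviation:", round(res.fun, 2))
```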
Assigning uncertainties in the inversion of NMR relaxation data.
Parker, Robert L; Song, Yi-Qiao
2005-06-01
Recovering the relaxation-time density function (or distribution) from NMR decay records requires inverting a Laplace transform based on noisy data, an ill-posed inverse problem. An important objective in the face of the consequent ambiguity in the solutions is to establish what reliable information is contained in the measurements. To this end we describe how upper and lower bounds on linear functionals of the density function, and ratios of linear functionals, can be calculated using optimization theory. Those bounded quantities cover most of those commonly used in geophysical NMR, such as porosity, T(2) log-mean, and bound fluid volume fraction, and include averages over any finite interval of the density function itself. In the theory presented, statistical considerations enter to account for the presence of significant noise in the signal, but not in a prior characterization of density models. Our characterization of the uncertainties is conservative and informative; it will have wide application in geophysical NMR and elsewhere.
New optimization model for routing and spectrum assignment with nodes insecurity
NASA Astrophysics Data System (ADS)
Xuan, Hejun; Wang, Yuping; Xu, Zhanqi; Hao, Shanshan; Wang, Xiaoli
2017-04-01
By adopting orthogonal frequency division multiplexing technology, elastic optical networks can provide flexible and variable bandwidth allocation to each connection request and achieve higher spectrum utilization. The routing and spectrum assignment problem in elastic optical networks is a well-known NP-hard problem. In addition, information security has received worldwide attention. We combine these two problems to investigate the routing and spectrum assignment problem with guaranteed security in elastic optical networks, and establish a new optimization model to minimize the maximum index of the used frequency slots, which is used to determine an optimal routing and spectrum assignment scheme. To solve the model effectively, a hybrid genetic algorithm framework integrating a heuristic algorithm into a genetic algorithm is proposed. The heuristic algorithm is first used to sort the connection requests, and then the genetic algorithm is designed to look for an optimal routing and spectrum assignment scheme. In the genetic algorithm, tailor-made crossover, mutation and local search operators are designed. Moreover, simulation experiments are conducted with three heuristic strategies, and the experimental results indicate the effectiveness of the proposed model and algorithm framework.
Pupils' over-reliance on linearity: a scholastic effect?
Van Dooren, Wim; De Bock, Dirk; Janssens, Dirk; Verschaffel, Lieven
2007-06-01
From upper elementary education on, children develop a tendency to over-use linearity. Particularly, it is found that many pupils assume that if a figure enlarges k times, the area enlarges k times too. However, most research was conducted with traditional, school-like word problems. This study examines whether pupils also over-use linearity if non-linear problems are embedded in meaningful, authentic performance tasks instead of traditional, school-like word problems, and whether this experience influences later behaviour. Ninety-three sixth graders from two primary schools in Flanders, Belgium. Pupils received a pre-test with traditional word problems. Those who made a linear error on the non-linear area problem were subjected to individual interviews. They received one new non-linear problem, in the S-condition (again a traditional, scholastic word problem), D-condition (the same word problem with a drawing) or P-condition (a meaningful performance-based task). Shortly afterwards, pupils received a post-test, containing again a non-linear word problem. Most pupils from the S-condition displayed linear reasoning during the interview. Offering drawings (D-condition) had a positive effect, but presenting the problem as a performance task (P-condition) was more beneficial. Linear reasoning was nearly absent in the P-condition. Remarkably, at the post-test, most pupils from all three groups again applied linear strategies. Pupils' over-reliance on linearity seems partly elicited by the school-like word problem format of test items. Pupils perform much better if non-linear problems are offered as performance tasks. However, a single experience does not change performances on a comparable word problem test afterwards.
Robust stochastic optimization for reservoir operation
NASA Astrophysics Data System (ADS)
Pan, Limeng; Housh, Mashor; Liu, Pan; Cai, Ximing; Chen, Xin
2015-01-01
Optimal reservoir operation under uncertainty is a challenging engineering problem. Application of classic stochastic optimization methods to large-scale problems is limited due to computational difficulty. Moreover, classic stochastic methods assume that the estimated distribution function or the sample inflow data accurately represents the true probability distribution, which may be invalid and the performance of the algorithms may be undermined. In this study, we introduce a robust optimization (RO) approach, Iterative Linear Decision Rule (ILDR), so as to provide a tractable approximation for a multiperiod hydropower generation problem. The proposed approach extends the existing LDR method by accommodating nonlinear objective functions. It also provides users with the flexibility of choosing the accuracy of ILDR approximations by assigning a desired number of piecewise linear segments to each uncertainty. The performance of the ILDR is compared with benchmark policies including the sampling stochastic dynamic programming (SSDP) policy derived from historical data. The ILDR solves both the single and multireservoir systems efficiently. The single reservoir case study results show that the RO method is as good as SSDP when implemented on the original historical inflows and it outperforms SSDP policy when tested on generated inflows with the same mean and covariance matrix as those in history. For the multireservoir case study, which considers water supply in addition to power generation, numerical results show that the proposed approach performs as well as in the single reservoir case study in terms of optimal value and distributional robustness.
Problem-Based Assignments as a Trigger for Developing Ethical and Reflective Competencies
ERIC Educational Resources Information Center
Euler, Dieter; Kühner, Patrizia
2017-01-01
The following research question serves as the starting point of this research and development project: How, in the context of a didactic design, can problem-based assignments trigger learning activities for the development of ethical and reflective competencies in students in economics courses? This paper focuses on the design of problem-based…
ERIC Educational Resources Information Center
Zehavi, Nurit
This study explored student mathematical activity in open problem-solving situations, derived from the work of Polya on problem solving and Skemp on intelligent learning and teaching. Assignment projects with problems for ninth-grade students were developed; whether they elicited the desired cognitive and cogno-affective goals was investigated, and…
Ant colony optimization for solving university facility layout problem
NASA Astrophysics Data System (ADS)
Mohd Jani, Nurul Hafiza; Mohd Radzi, Nor Haizan; Ngadiman, Mohd Salihin
2013-04-01
The Quadratic Assignment Problem (QAP) is classified as an NP-hard problem. It has been used to model many problems in several areas such as operational research, combinatorial data analysis, and parallel and distributed computing, as well as optimization problems such as graph partitioning and the Travelling Salesman Problem (TSP). In the literature, researchers use exact algorithms, heuristic algorithms and metaheuristic approaches to solve the QAP. The QAP is largely applied to the facility layout problem (FLP). In this paper we use the QAP to model a university facility layout problem. There are 8 facilities that need to be assigned to 8 locations. Hence we have modeled a QAP problem with n ≤ 10 and developed an Ant Colony Optimization (ACO) algorithm to solve the university facility layout problem. The objective is to assign n facilities to n locations such that the minimum product of flows and distances is obtained. Flow is the movement from one facility to another, whereas distance is the distance between the location of one facility and the locations of the other facilities. The objective of the QAP is to obtain the minimum total walking (flow) of lecturers from one destination to another (distance).
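For reference, the sketch below spells out the QAP objective used above, i.e. assigning n facilities to n locations (a permutation p) so as to minimize the sum of flow[i][j] * dist[p[i]][p[j]]; for n = 8 the instance can be solved exactly by enumeration, which gives a baseline against which an ACO solution can be checked. The flow and distance matrices are hypothetical.

```python
# QAP objective and exhaustive baseline for a small (n = 8) instance.
import itertools
import numpy as np

rng = np.random.default_rng(42)
n = 8
flow = rng.integers(0, 10, size=(n, n)); np.fill_diagonal(flow, 0)
dist = rng.integers(1, 20, size=(n, n))
dist = (dist + dist.T) // 2                       # symmetric distances
np.fill_diagonal(dist, 0)

def qap_cost(perm):
    p = np.asarray(perm)
    return int((flow * dist[np.ix_(p, p)]).sum())

best_cost, best_perm = min((qap_cost(p), p)
                           for p in itertools.permutations(range(n)))
print("optimal assignment:", best_perm, "cost:", best_cost)
```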
Aggregation of LoD 1 building models as an optimization problem
NASA Astrophysics Data System (ADS)
Guercke, R.; Götzelmann, T.; Brenner, C.; Sester, M.
3D city models offered by digital map providers typically consist of several thousands or even millions of individual buildings. Those buildings are usually generated in an automated fashion from high-resolution cadastral and remote sensing data and can be very detailed. However, such a high degree of detail is not desirable in every application. One way to remove complexity is to aggregate individual buildings, simplify the ground plan and assign an appropriate average building height. This task is computationally complex because it includes the combinatorial optimization problem of determining which subset of the original set of buildings should best be aggregated to meet the demands of an application. In this article, we introduce approaches to express different aspects of the aggregation of LoD 1 building models in the form of Mixed Integer Programming (MIP) problems. The advantage of this approach is that for linear (and some quadratic) MIP problems, sophisticated software exists to find exact solutions (global optima) with reasonable effort. We also propose two different heuristic approaches based on the region growing strategy and evaluate their potential for optimization by comparing their performance to a MIP-based approach.
Use of infrared spectroscopy for the determination of electronegativity of rare earth elements.
Frost, Ray L; Erickson, Kristy L; Weier, Matt L; McKinnon, Adam R; Williams, Peter A; Leverett, Peter
2004-07-01
Infrared spectroscopy has been used to study a series of synthetic agardite minerals. Four OH stretching bands are observed at around 3568, 3482, 3362, and 3296 cm(-1). The first band is assigned to zeolitic, non-hydrogen-bonded water. The band at 3296 cm(-1) is assigned to strongly hydrogen-bonded water with an H bond distance of 2.72 Å. The water in agardites is better described as structured water and not as zeolitic water. Two bands at around 999 and 975 cm(-1) are assigned to OH deformation modes. Two sets of AsO symmetric stretching vibrations were found and assigned to the vibrational modes of AsO(4) and HAsO(4) units. Linear relationships between positions of infrared bands associated with bonding to the OH units and the electronegativity of the rare earth elements were derived, with correlation coefficients >0.92. These linear functions were then used to calculate the electronegativity of Eu, for which a value of 1.1808 on the Pauling scale was found.
Comparing Looping Teacher-Assigned and Traditional Teacher-Assigned Student Achievement Scores
ERIC Educational Resources Information Center
Lloyd, Melissa C.
2014-01-01
A problem in many elementary schools is determining which teacher assignment strategy best promotes the academic progress of students. To find and implement educational practices that address the academic needs of all learners, schools need research-based data focusing on the 2 teacher assignment strategies: looping assignment (LA) and traditional…
Code of Federal Regulations, 2012 CFR
2012-10-01
... geographical areas assigned to a County Zone Number and Per Acre Zone Value? 2806.21 Section 2806.21 Public... MANAGEMENT ACT Rents Linear Rights-Of-Way § 2806.21 When and how are counties or other geographical areas assigned to a County Zone Number and Per Acre Zone Value? Counties (or other geographical areas) are...
Code of Federal Regulations, 2011 CFR
2011-10-01
... geographical areas assigned to a County Zone Number and Per Acre Zone Value? 2806.21 Section 2806.21 Public... MANAGEMENT ACT Rents Linear Rights-Of-Way § 2806.21 When and how are counties or other geographical areas assigned to a County Zone Number and Per Acre Zone Value? Counties (or other geographical areas) are...
Code of Federal Regulations, 2013 CFR
2013-10-01
... geographical areas assigned to a County Zone Number and Per Acre Zone Value? 2806.21 Section 2806.21 Public... MANAGEMENT ACT Rents Linear Rights-Of-Way § 2806.21 When and how are counties or other geographical areas assigned to a County Zone Number and Per Acre Zone Value? Counties (or other geographical areas) are...
Code of Federal Regulations, 2014 CFR
2014-10-01
... geographical areas assigned to a County Zone Number and Per Acre Zone Value? 2806.21 Section 2806.21 Public... MANAGEMENT ACT Rents Linear Rights-Of-Way § 2806.21 When and how are counties or other geographical areas assigned to a County Zone Number and Per Acre Zone Value? Counties (or other geographical areas) are...
NASA Astrophysics Data System (ADS)
Gutin, Gregory; Kim, Eun Jung; Soleimanfallah, Arezou; Szeider, Stefan; Yeo, Anders
The NP-hard general factor problem asks, given a graph and for each vertex a list of integers, whether the graph has a spanning subgraph where each vertex has a degree that belongs to its assigned list. The problem remains NP-hard even if the given graph is bipartite with partition U ⊎ V, and each vertex in U is assigned the list {1}; this subproblem appears in the context of constraint programming as the consistency problem for the extended global cardinality constraint. We show that this subproblem is fixed-parameter tractable when parameterized by the size of the second partite set V. More generally, we show that the general factor problem for bipartite graphs, parameterized by |V|, is fixed-parameter tractable as long as all vertices in U are assigned lists of length 1, but becomes W[1]-hard if vertices in U are assigned lists of length at most 2. We establish fixed-parameter tractability by reducing the problem instance to a bounded number of acyclic instances, each of which can be solved in polynomial time by dynamic programming.
Quicker Q-Learning in Multi-Agent Systems
NASA Technical Reports Server (NTRS)
Agogino, Adrian K.; Tumer, Kagan
2005-01-01
Multi-agent learning in Markov Decision Problems is challenging because of the presence of two credit assignment problems: 1) how to credit an action taken at time step t for rewards received at t' greater than t; and 2) how to credit an action taken by agent i considering that the system reward is a function of the actions of all the agents. The first credit assignment problem is typically addressed with temporal difference methods such as Q-learning or TD(lambda). The second credit assignment problem is typically addressed either by hand-crafting reward functions that assign proper credit to an agent, or by making certain independence assumptions about an agent's state space and reward function. To address both credit assignment problems simultaneously, we propose Q Updates with Immediate Counterfactual Rewards learning (QUICR-learning), designed to improve both the convergence properties and performance of Q-learning in large multi-agent problems. Instead of assuming that an agent's value function can be made independent of other agents, this method suppresses the impact of other agents using counterfactual rewards. Results on multi-agent grid-world problems over multiple topologies show that QUICR-learning can achieve up to thirty-fold improvements in performance over both conventional and local Q-learning in the largest tested systems.
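The exact counterfactual construction of QUICR-learning is not given in the abstract; the sketch below only illustrates the general idea with a difference-style reward (global reward minus the reward obtained when an agent's action is replaced by a default) plugged into a standard tabular Q-learning update.

```python
# Difference-style counterfactual reward fed into a tabular Q-learning
# update. This is a generic illustration, not QUICR-learning itself.
import numpy as np

def difference_reward(global_reward, joint_action, i, default=0):
    """D_i = G(a) - G(a with agent i's action replaced by a default)."""
    counterfactual = list(joint_action)
    counterfactual[i] = default
    return global_reward(joint_action) - global_reward(tuple(counterfactual))

def q_update(Q, state, action, reward, next_state, alpha=0.1, gamma=0.95):
    td_target = reward + gamma * np.max(Q[next_state])
    Q[state, action] += alpha * (td_target - Q[state, action])

# Toy global reward: agents are rewarded for choosing distinct actions.
G = lambda a: float(len(set(a)))
joint = (2, 2, 1)
print([difference_reward(G, joint, i) for i in range(3)])

Q = np.zeros((4, 3))                       # 4 states, 3 actions (toy sizes)
q_update(Q, state=0, action=2, reward=difference_reward(G, joint, 2),
         next_state=1)
```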
Meta-cognitive student reflections
NASA Astrophysics Data System (ADS)
Barquist, Britt; Stewart, Jim
2009-05-01
We have recently concluded a project testing the effectiveness of a weekly assignment designed to encourage awareness and improvement of meta-cognitive skills. The project is based on the idea that successful problem solvers implement a meta-cognitive process in which they identify the specific concept they are struggling with, and then identify what they understand, what they don't understand, and what they need to know in order to resolve their problem. The assignment required the students to write an email assessing the level of completion of a weekly workbook assignment and to examine in detail their experiences regarding a specific topic they struggled with. The assignment guidelines were designed to coach them through this meta-cognitive process. We responded to most emails with advice for next week's assignment. Our data follow 12 students through a quarter consisting of 11 email assignments which were scored using a rubric based on the assignment guidelines. We found no correlation between rubric scores and final grades. We do have anecdotal evidence that the assignment was beneficial.
NASA Astrophysics Data System (ADS)
Tinney, Charles Evan
2007-12-01
By using the book "Physics for Scientists and Engineers" by Raymond A. Serway as a guide, CD problem sets for teaching a calculus-based physics course were developed, programmed, and evaluated for homework assignments during the 2003-2004 academic year at Utah State University. These CD sets were used to replace the traditionally handwritten and submitted homework sets. They included a research-based format that guided the students through problem-solving techniques using responseactivated helps and suggestions. The CD contents were designed to help the student improve his/her physics problem-solving skills. The analyzed score results showed a direct correlation between the scores obtained on the homework and the students' time spent per problem, as well as the number of helps used per problem.
Landsat D Thematic Mapper image dimensionality reduction and geometric correction accuracy
NASA Technical Reports Server (NTRS)
Ford, G. E.
1986-01-01
To characterize and quantify the performance of the Landsat thematic mapper (TM), techniques for dimensionality reduction by linear transformation have been studied and evaluated and the accuracy of the correction of geometric errors in TM images analyzed. Theoretical evaluations and comparisons for existing methods for the design of linear transformation for dimensionality reduction are presented. These methods include the discrete Karhunen Loeve (KL) expansion, Multiple Discriminant Analysis (MDA), Thematic Mapper (TM)-Tasseled Cap Linear Transformation and Singular Value Decomposition (SVD). A unified approach to these design problems is presented in which each method involves optimizing an objective function with respect to the linear transformation matrix. From these studies, four modified methods are proposed. They are referred to as the Space Variant Linear Transformation, the KL Transform-MDA hybrid method, and the First and Second Version of the Weighted MDA method. The modifications involve the assignment of weights to classes to achieve improvements in the class conditional probability of error for classes with high weights. Experimental evaluations of the existing and proposed methods have been performed using the six reflective bands of the TM data. It is shown that in terms of probability of classification error and the percentage of the cumulative eigenvalues, the six reflective bands of the TM data require only a three dimensional feature space. It is shown experimentally as well that for the proposed methods, the classes with high weights have improvements in class conditional probability of error estimates as expected.
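A minimal sketch of the discrete Karhunen-Loeve (principal component) transform that the methods above build on, applied to synthetic six-band pixel vectors standing in for TM data; the function and variable names are illustrative, not from the study.

import numpy as np

def kl_transform(pixels, k):
    """Project band vectors onto the k eigenvectors of the sample covariance
    with the largest eigenvalues (discrete Karhunen-Loeve expansion)."""
    mean = pixels.mean(axis=0)
    centered = pixels - mean
    cov = np.cov(centered, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)        # eigenvalues in ascending order
    order = np.argsort(eigvals)[::-1][:k]         # keep the top-k components
    explained = eigvals[order].sum() / eigvals.sum()
    return centered @ eigvecs[:, order], explained

# Synthetic stand-in for six reflective TM bands, 10000 pixels.
rng = np.random.default_rng(0)
bands = rng.normal(size=(10000, 6)) @ rng.normal(size=(6, 6))
features, frac = kl_transform(bands, 3)
print(features.shape, round(frac, 3))  # (10000, 3) and cumulative eigenvalue fraction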
Qualls, Joseph; Russomanno, David J.
2011-01-01
The lack of knowledge models to represent sensor systems, algorithms, and missions makes opportunistically discovering a synthesis of systems and algorithms that can satisfy high-level mission specifications impractical. A novel ontological problem-solving framework has been designed that leverages knowledge models describing sensors, algorithms, and high-level missions to facilitate automated inference of assigning systems to subtasks that may satisfy a given mission specification. To demonstrate the efficacy of the ontological problem-solving architecture, a family of persistence surveillance sensor systems and algorithms has been instantiated in a prototype environment to demonstrate the assignment of systems to subtasks of high-level missions. PMID:22164081
Introducing Computational Approaches in Intermediate Mechanics
NASA Astrophysics Data System (ADS)
Cook, David M.
2006-12-01
In the winter of 2003, we at Lawrence University moved Lagrangian mechanics and rigid body dynamics from a required sophomore course to an elective junior/senior course, freeing 40% of the time for computational approaches to ordinary differential equations (trajectory problems, the large amplitude pendulum, non-linear dynamics); evaluation of integrals (finding centers of mass and moment of inertia tensors, calculating gravitational potentials for various sources); and finding eigenvalues and eigenvectors of matrices (diagonalizing the moment of inertia tensor, finding principal axes), and to generating graphical displays of computed results. Further, students begin to use LaTeX to prepare some of their submitted problem solutions. Placed in the middle of the sophomore year, this course provides the background that permits faculty members as appropriate to assign computer-based exercises in subsequent courses. Further, students are encouraged to use our Computational Physics Laboratory on their own initiative whenever that use seems appropriate. (Curricular development supported in part by the W. M. Keck Foundation, the National Science Foundation, and Lawrence University.)
Topological numbering of features on a mesh
NASA Technical Reports Server (NTRS)
Atallah, Mikhail J.; Hambrusch, Susanne E.; Tewinkel, Lynn E.
1988-01-01
Assume an n×n binary image is given containing horizontally convex features; i.e., for each feature, the pixels it occupies in each row form an interval on that row. The problem of assigning topological numbers to such features is considered; i.e., assign a number to every feature f so that all features to the left of f have a smaller number assigned to them. This problem arises in solutions to the stereo matching problem. A parallel algorithm to solve the topological numbering problem in O(n) time on an n×n mesh of processors is presented. The key idea of the solution is to create a tree from which the topological numbers can be obtained even though the tree does not uniquely represent the 'to the left of' relationship of the features.
NASA Astrophysics Data System (ADS)
Chaves-González, José M.; Vega-Rodríguez, Miguel A.; Gómez-Pulido, Juan A.; Sánchez-Pérez, Juan M.
2011-08-01
This article analyses the use of a novel parallel evolutionary strategy to solve complex optimization problems. The work developed here has been focused on a relevant real-world problem from the telecommunication domain to verify the effectiveness of the approach. The problem, known as frequency assignment problem (FAP), basically consists of assigning a very small number of frequencies to a very large set of transceivers used in a cellular phone network. Real data FAP instances are very difficult to solve due to the NP-hard nature of the problem, therefore using an efficient parallel approach which makes the most of different evolutionary strategies can be considered as a good way to obtain high-quality solutions in short periods of time. Specifically, a parallel hyper-heuristic based on several meta-heuristics has been developed. After a complete experimental evaluation, results prove that the proposed approach obtains very high-quality solutions for the FAP and beats any other result published.
Linear dimension reduction and Bayes classification
NASA Technical Reports Server (NTRS)
Decell, H. P., Jr.; Odell, P. L.; Coberly, W. A.
1978-01-01
An explicit expression was developed for a compression matrix T of smallest possible left dimension K consistent with preserving the n-variate normal Bayes assignment of X to a given one of a finite number of populations and the K-variate Bayes assignment of TX to that population. The Bayes population assignments of X and TX were shown to be equivalent for a compression matrix T explicitly calculated as a function of the means and covariances of the given populations.
Automated 3D trajectory measuring of large numbers of moving particles.
Wu, Hai Shan; Zhao, Qi; Zou, Danping; Chen, Yan Qiu
2011-04-11
The complex dynamics of natural particle systems, such as insect swarms, bird flocks, and fish schools, have attracted great attention from scientists for years. Measuring the 3D trajectory of each individual in a group is vital for quantitative study of their dynamic properties, yet such empirical data are rare, mainly due to the challenges of maintaining the identities of large numbers of individuals with similar visual features and frequent occlusions. We here present an automatic and efficient algorithm to track 3D motion trajectories of large numbers of moving particles using two video cameras. Our method solves this problem by formulating it as three linear assignment problems (LAP). For each video sequence, the first LAP obtains 2D tracks of moving targets and is able to maintain target identities in the presence of occlusions; the second one matches the visually similar targets across two views via a novel technique named maximum epipolar co-motion length (MECL), which is not only able to effectively reduce matching ambiguity but also to further diminish the influence of frequent occlusions; the last one links 3D track segments into complete trajectories by computing a globally optimal assignment based on temporal and kinematic cues. Experimental results on simulated particle swarms with various particle densities validated the accuracy and robustness of the proposed method. As a real-world case, our method successfully acquired the 3D flight paths of a fruit fly (Drosophila melanogaster) group comprising hundreds of freely flying individuals. © 2011 Optical Society of America
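A minimal sketch of the first of the three linear assignment problems described above (frame-to-frame 2D linking of detections), assuming scipy's Hungarian-type solver is available; the gating threshold and toy detections are hypothetical, and this is not the authors' implementation.

import numpy as np
from scipy.optimize import linear_sum_assignment

def link_detections(prev_pts, curr_pts, max_dist=5.0):
    """Match detections between consecutive frames by solving a linear
    assignment problem on Euclidean distances; far pairs are forbidden."""
    cost = np.linalg.norm(prev_pts[:, None, :] - curr_pts[None, :, :], axis=2)
    cost[cost > max_dist] = 1e6                  # effectively disallowed links
    rows, cols = linear_sum_assignment(cost)
    return [(r, c) for r, c in zip(rows, cols) if cost[r, c] < 1e6]

prev_pts = np.array([[0.0, 0.0], [10.0, 10.0]])
curr_pts = np.array([[10.5, 9.5], [0.5, 0.2]])
print(link_detections(prev_pts, curr_pts))       # [(0, 1), (1, 0)]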
Show, Don't Tell: Using Photographic "Snapsignments" to Advance and Assess Creative Problem Solving
ERIC Educational Resources Information Center
Machin, Jane E.
2016-01-01
Traditional assignments that aim to develop and evaluate creative problem solving skills are frequently foregone in large marketing classes due to the daunting grading prospect they present. Here, a new assessment method is introduced: the "snapsignment." Through photography, individual projects can be assigned that promote higher order…
Menu-Driven Solver Of Linear-Programming Problems
NASA Technical Reports Server (NTRS)
Viterna, L. A.; Ferencz, D.
1992-01-01
Program assists inexperienced user in formulating linear-programming problems. A Linear Program Solver (ALPS) is a full-featured LP analysis computer program. Solves plain linear-programming problems as well as more-complicated mixed-integer and pure-integer programs. Also contains efficient technique for solution of purely binary linear-programming problems. Written entirely in IBM's APL2/PC software, Version 1.01. Packed program contains licensed material, property of IBM (copyright 1988, all rights reserved).
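ALPS itself is written in APL2; purely as an illustration of the plain linear-programming case it handles, here is a minimal sketch that solves a made-up two-variable LP with scipy.optimize.linprog (the problem data are hypothetical).

from scipy.optimize import linprog

# Maximize 3x + 2y subject to x + y <= 4, x + 3y <= 6, x >= 0, y >= 0.
# linprog minimizes, so the objective is negated.
res = linprog(c=[-3, -2],
              A_ub=[[1, 1], [1, 3]],
              b_ub=[4, 6],
              bounds=[(0, None), (0, None)],
              method="highs")
print(res.x, -res.fun)   # optimal point (4, 0) and maximized objective value 12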
Domain decomposition methods for the parallel computation of reacting flows
NASA Technical Reports Server (NTRS)
Keyes, David E.
1988-01-01
Domain decomposition is a natural route to parallel computing for partial differential equation solvers. Subdomains of which the original domain of definition is comprised are assigned to independent processors at the price of periodic coordination between processors to compute global parameters and maintain the requisite degree of continuity of the solution at the subdomain interfaces. In the domain-decomposed solution of steady multidimensional systems of PDEs by finite difference methods using a pseudo-transient version of Newton iteration, the only portion of the computation which generally stands in the way of efficient parallelization is the solution of the large, sparse linear systems arising at each Newton step. For some Jacobian matrices drawn from an actual two-dimensional reacting flow problem, comparisons are made between relaxation-based linear solvers and preconditioned iterative methods of conjugate gradient and Chebyshev type, focusing attention on both iteration count and global inner product count. The generalized minimum residual method with block-ILU preconditioning is judged the best serial method among those considered, and parallel numerical experiments on the Encore Multimax demonstrate for it approximately 10-fold speedup on 16 processors.
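A minimal sketch of the serial kernel singled out here, GMRES with an incomplete-LU preconditioner, using scipy's sparse solvers on a small synthetic matrix rather than a reacting-flow Jacobian; the matrix, tolerances, and restart length are illustrative only.

import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Small synthetic sparse system standing in for a Newton-step Jacobian.
n = 200
A = sp.diags([-1.0, 2.5, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)

ilu = spla.spilu(A, drop_tol=1e-4)                   # incomplete LU factors
M = spla.LinearOperator((n, n), matvec=ilu.solve)    # preconditioner M ~ A^-1

x, info = spla.gmres(A, b, M=M, restart=30)
print(info, np.linalg.norm(A @ x - b))               # info == 0 means converged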
The intelligence of dual simplex method to solve linear fractional fuzzy transportation problem.
Narayanamoorthy, S; Kalyani, S
2015-01-01
An approach is presented to solve a fuzzy transportation problem with linear fractional fuzzy objective function. In this proposed approach the fractional fuzzy transportation problem is decomposed into two linear fuzzy transportation problems. The optimal solution of the two linear fuzzy transportations is solved by dual simplex method and the optimal solution of the fractional fuzzy transportation problem is obtained. The proposed method is explained in detail with an example.
Estimation of absolute solvent and solvation shell entropies via permutation reduction
NASA Astrophysics Data System (ADS)
Reinhard, Friedemann; Grubmüller, Helmut
2007-01-01
Despite its prominent contribution to the free energy of solvated macromolecules such as proteins or DNA, and although principally contained within molecular dynamics simulations, the entropy of the solvation shell is inaccessible to straightforward application of established entropy estimation methods. The complication is twofold. First, the configurational space density of such systems is too complex for a sufficiently accurate fit. Second, and in contrast to the internal macromolecular dynamics, the configurational space volume explored by the diffusive motion of the solvent molecules is too large to be exhaustively sampled by current simulation techniques. Here, we develop a method to overcome the second problem and to significantly alleviate the first one. We propose to exploit the permutation symmetry of the solvent by transforming the trajectory in a way that renders established estimation methods applicable, such as the quasiharmonic approximation or principal component analysis. Our permutation-reduced approach involves a combinatorial problem, which is solved through its equivalence with the linear assignment problem, for which O(N³) methods exist. From test simulations of dense Lennard-Jones gases, enhanced convergence and improved entropy estimates are obtained.
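A minimal sketch of the permutation-reduction step, under stated assumptions: the interchangeable solvent molecules in each frame are relabeled by solving a linear assignment problem against a reference configuration, with scipy's solver standing in for the O(N³) method; the data are synthetic and the function name is illustrative.

import numpy as np
from scipy.optimize import linear_sum_assignment

def permute_to_reference(frame, reference):
    """Relabel identical solvent molecules so that each frame's labels follow
    the reference as closely as possible (squared-distance assignment cost)."""
    cost = ((frame[:, None, :] - reference[None, :, :]) ** 2).sum(axis=2)
    rows, cols = linear_sum_assignment(cost)      # rows are 0..N-1 in order
    perm = np.empty(len(frame), dtype=int)
    perm[cols] = rows                             # reference slot j gets frame molecule perm[j]
    return frame[perm]

rng = np.random.default_rng(1)
reference = rng.uniform(size=(50, 3))
frame = reference[rng.permutation(50)] + 0.01 * rng.normal(size=(50, 3))
relabeled = permute_to_reference(frame, reference)
print(np.abs(relabeled - reference).max())        # small: the labeling is restored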
Two-MILP models for scheduling elective surgeries within a private healthcare facility.
Khlif Hachicha, Hejer; Zeghal Mansour, Farah
2016-11-05
This paper deals with an Integrated Elective Surgery-Scheduling Problem (IESSP) that arises in a privately operated healthcare facility. It aims to optimize the resource utilization of the entire surgery process including pre-operative, per-operative and post-operative activities. Moreover, it addresses a specific feature of private facilities where surgeons are independent service providers and may conduct their surgeries in different private healthcare facilities. Thus, the problem requires the assignment of surgery patients to hospital beds, operating rooms and recovery beds as well as their sequencing over a 1-day period while taking into account surgeons' availability constraints. We present two Mixed Integer Linear Programs (MILP) that model the IESSP as a three-stage hybrid flow-shop scheduling problem with recirculation, resource synchronization, dedicated machines, and blocking constraints. To assess the empirical performance of the proposed models, we conducted experiments on real-world data of a Tunisian private clinic: Clinique Ennasr and on randomly generated instances. Two criteria were minimised: the patients' average length of stay and the number of patients' overnight stays. The computational results show that the proposed models can solve instances with up to 44 surgical cases in a reasonable CPU time using a general-purpose MILP solver.
Self-Coexistence among IEEE 802.22 Networks: Distributed Allocation of Power and Channel
Sakin, Sayef Azad; Alamri, Atif; Tran, Nguyen H.
2017-01-01
Ensuring self-coexistence among IEEE 802.22 networks is a challenging problem owing to opportunistic access of incumbent-free radio resources by users in co-located networks. In this study, we propose a fully-distributed non-cooperative approach to ensure self-coexistence in downlink channels of IEEE 802.22 networks. We formulate the self-coexistence problem as a mixed-integer non-linear optimization problem for maximizing the network data rate, which is an NP-hard one. This work explores a sub-optimal solution by dividing the optimization problem into downlink channel allocation and power assignment sub-problems. Considering fairness, quality of service and minimum interference for customer-premises-equipment, we also develop a greedy algorithm for channel allocation and a non-cooperative game-theoretic framework for near-optimal power allocation. The base stations of networks are treated as players in a game, where they try to increase spectrum utilization by controlling power and reaching a Nash equilibrium point. We further develop a utility function for the game to increase the data rate by minimizing the transmission power and, subsequently, the interference from neighboring networks. A theoretical proof of the uniqueness and existence of the Nash equilibrium has been presented. Performance improvements in terms of data-rate with a degree of fairness compared to a cooperative branch-and-bound-based algorithm and a non-cooperative greedy approach have been shown through simulation studies. PMID:29215591
WWC Review of the Report "Effects of Problem Based Economics on High School Economics Instruction"
ERIC Educational Resources Information Center
What Works Clearinghouse, 2012
2012-01-01
The study described in this report included 128 high school economics teachers from 106 schools in Arizona and California, half of whom were randomly assigned to the "Problem Based Economics Instruction" condition and half of whom were randomly assigned to the comparison condition. High levels of teacher attrition occurred after…
Neural Mechanisms of Credit Assignment in a Multicue Environment
Kolling, Nils; Brown, Joshua W.; Rushworth, Matthew
2016-01-01
In complex environments, many potential cues can guide a decision or be assigned responsibility for the outcome of the decision. We know little, however, about how humans and animals select relevant information sources that should guide behavior. We show that subjects solve this relevance selection and credit assignment problem by selecting one cue and its association with a particular outcome as the main focus of a hypothesis. To do this, we examined learning while using a task design that allowed us to estimate the focus of each subject's hypotheses on a trial-by-trial basis. When a prediction is confirmed by the outcome, then credit for the outcome is assigned to that cue rather than an alternative. Activity in medial frontal cortex is associated with the assignment of credit to the cue that is the main focus of the hypothesis. However, when the outcome disconfirms a prediction, the focus shifts between cues, and the credit for the outcome is assigned to an alternative cue. This process of reselection for credit assignment to an alternative cue is associated with lateral orbitofrontal cortex. SIGNIFICANCE STATEMENT Learners should infer which features of environments are predictive of significant events, such as rewards. This “credit assignment” problem is particularly challenging when any of several cues might be predictive. We show that human subjects solve the credit assignment problem by implicitly “hypothesizing” which cue is relevant for predicting subsequent outcomes, and then credit is assigned according to this hypothesis. This process is associated with a distinctive pattern of activity in a part of medial frontal cortex. By contrast, when unexpected outcomes occur, hypotheses are redirected toward alternative cues, and this process is associated with activity in lateral orbitofrontal cortex. PMID:26818500
Optimal assignment of workers to supporting services in a hospital
NASA Astrophysics Data System (ADS)
Sawik, Bartosz; Mikulik, Jerzy
2008-01-01
Supporting services play an important role in health care institutions such as hospitals. This paper presents an application of an operations research model for optimal allocation of workers among supporting services in a public hospital. The services include logistics, inventory management, financial management, operations management, medical analysis, etc. The optimality criterion of the problem is to minimize the operations costs of supporting services subject to some specific constraints. The constraints represent specific conditions for resource allocation in a hospital. The overall problem is formulated as an integer program known in the literature as the assignment problem, where the decision variables represent the assignment of people to various jobs. The results of some computational experiments based on real data from a selected Polish hospital are reported.
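A minimal sketch of the underlying assignment model (workers assigned to supporting services at minimum cost), with a hypothetical cost matrix in place of the hospital data; because the assignment polytope has integral vertices, a linear assignment solver recovers the integer-programming optimum for this simple case.

import numpy as np
from scipy.optimize import linear_sum_assignment

# Hypothetical operating costs: rows = workers, columns = supporting services
# (logistics, inventory, finance, operations, medical analysis).
cost = np.array([
    [9, 2, 7, 8, 6],
    [6, 4, 3, 7, 5],
    [5, 8, 1, 8, 4],
    [7, 6, 9, 4, 2],
    [8, 7, 6, 5, 3],
])
workers, services = linear_sum_assignment(cost)
print(list(zip(workers, services)), cost[workers, services].sum())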
NASA Astrophysics Data System (ADS)
Guo, Peng; Cheng, Wenming; Wang, Yi
2014-10-01
The quay crane scheduling problem (QCSP) determines the handling sequence of tasks at ship bays by a set of cranes assigned to a container vessel such that the vessel's service time is minimized. A number of heuristics or meta-heuristics have been proposed to obtain the near-optimal solutions to overcome the NP-hardness of the problem. In this article, the idea of generalized extremal optimization (GEO) is adapted to solve the QCSP with respect to various interference constraints. The resulting GEO is termed the modified GEO. A randomized searching method for neighbouring task-to-QC assignments to an incumbent task-to-QC assignment is developed in executing the modified GEO. In addition, a unidirectional search decoding scheme is employed to transform a task-to-QC assignment to an active quay crane schedule. The effectiveness of the developed GEO is tested on a suite of benchmark problems introduced by K.H. Kim and Y.M. Park in 2004 (European Journal of Operational Research, Vol. 156, No. 3). Compared with other well-known existing approaches, the experiment results show that the proposed modified GEO is capable of obtaining the optimal or near-optimal solution in a reasonable time, especially for large-sized problems.
NASA Astrophysics Data System (ADS)
Supianto, A. A.; Hayashi, Y.; Hirashima, T.
2017-02-01
Problem-posing is well known as an effective activity for learning problem-solving methods. Monsakun is an interactive problem-posing learning environment that facilitates learning of arithmetic word problems involving a single addition or subtraction operation. The characteristic of Monsakun is problem-posing as sentence integration, which lets learners build a problem from three sentences. Monsakun provides learners with five or six sentences, including dummies, which are designed through careful consideration by an expert teacher as a meaningful distraction that pushes learners to grasp the structure of arithmetic word problems. The results of practical use of Monsakun in elementary schools show that many learners have difficulty arranging the proper answer at the higher levels of the assignments. Analysis of the problem-posing process of such learners found that their misconceptions about arithmetic word problems cause impasses in their thinking and mislead them into using dummies. This study proposes a method of changing assignments as a support for overcoming such bottlenecks in thinking. In Monsakun, the bottlenecks are often detected as frequently repeated use of a specific dummy. If such a dummy can be detected, it is the key factor in supporting learners to overcome their difficulty. This paper discusses how to detect the bottlenecks and how to realize such support in learning by problem-posing.
QAPgrid: A Two Level QAP-Based Approach for Large-Scale Data Analysis and Visualization
Inostroza-Ponta, Mario; Berretta, Regina; Moscato, Pablo
2011-01-01
Background The visualization of large volumes of data is a computationally challenging task that often promises rewarding new insights. There is great potential in the application of new algorithms and models from combinatorial optimisation. Datasets often contain “hidden regularities” and a combined identification and visualization method should reveal these structures and present them in a way that helps analysis. While several methodologies exist, including those that use non-linear optimization algorithms, severe limitations exist even when working with only a few hundred objects. Methodology/Principal Findings We present a new data visualization approach (QAPgrid) that reveals patterns of similarities and differences in large datasets of objects for which a similarity measure can be computed. Objects are assigned to positions on an underlying square grid in a two-dimensional space. We use the Quadratic Assignment Problem (QAP) as a mathematical model to provide an objective function for assignment of objects to positions on the grid. We employ a Memetic Algorithm (a powerful metaheuristic) to tackle the large instances of this NP-hard combinatorial optimization problem, and we show its performance on the visualization of real data sets. Conclusions/Significance Overall, the results show that the QAPgrid algorithm is able to produce a layout that represents the relationships between objects in the data set. Furthermore, it also represents the relationships between clusters that are fed into the algorithm. We apply the QAPgrid on the 84 Indo-European languages instance, producing a near-optimal layout. Next, we produce a layout of 470 world universities with an observed high degree of correlation with the score used by the Academic Ranking of World Universities compiled by Shanghai Jiao Tong University, without the need for an ad hoc weighting of attributes. Finally, our Gene Ontology-based study on Saccharomyces cerevisiae fully demonstrates the scalability and precision of our method as a novel alternative tool for functional genomics. PMID:21267077
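A minimal sketch of the quadratic assignment objective behind QAPgrid (objects mapped to grid positions), with a naive 2-swap descent standing in for the memetic algorithm; the flow and distance matrices are synthetic and the function names are illustrative.

import numpy as np

def qap_cost(flow, dist, perm):
    """Sum of flow[i, j] * dist[perm[i], perm[j]] over all object pairs."""
    return (flow * dist[np.ix_(perm, perm)]).sum()

def swap_local_search(flow, dist, perm):
    """Greedy 2-swap descent; far simpler than the memetic algorithm."""
    improved = True
    while improved:
        improved = False
        for i in range(len(perm)):
            for j in range(i + 1, len(perm)):
                candidate = perm.copy()
                candidate[[i, j]] = candidate[[j, i]]
                if qap_cost(flow, dist, candidate) < qap_cost(flow, dist, perm):
                    perm, improved = candidate, True
    return perm

rng = np.random.default_rng(2)
n = 12
flow = rng.integers(0, 10, size=(n, n))
dist = rng.integers(0, 10, size=(n, n))
start = rng.permutation(n)
best = swap_local_search(flow, dist, start)
print(qap_cost(flow, dist, start), qap_cost(flow, dist, best))  # cost before and after descent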
An analysis of spectral envelope-reduction via quadratic assignment problems
NASA Technical Reports Server (NTRS)
George, Alan; Pothen, Alex
1994-01-01
A new spectral algorithm for reordering a sparse symmetric matrix to reduce its envelope size is described. The ordering is computed by associating a Laplacian matrix with the given matrix and then sorting the components of a specified eigenvector of the Laplacian. In this paper, we provide an analysis of the spectral envelope reduction algorithm. We describe the related 1- and 2-sum problems; the former is related to the envelope size, while the latter is related to an upper bound on the work involved in an envelope Cholesky factorization scheme. We formulate the latter two problems as quadratic assignment problems, and then study the 2-sum problem in more detail. We obtain lower bounds on the 2-sum by considering a projected quadratic assignment problem, and then show that finding a permutation matrix closest to an orthogonal matrix attaining one of the lower bounds justifies the spectral envelope reduction algorithm. The lower bound on the 2-sum is seen to be tight for reasonably 'uniform' finite element meshes. We also obtain asymptotically tight lower bounds on the envelope size for certain classes of meshes.
Towards Automated Structure-Based NMR Resonance Assignment
NASA Astrophysics Data System (ADS)
Jang, Richard; Gao, Xin; Li, Ming
We propose a general framework for solving the structure-based NMR backbone resonance assignment problem. The core is a novel 0-1 integer programming model that can start from a complete or partial assignment, generate multiple assignments, and model not only the assignment of spins to residues, but also pairwise dependencies consisting of pairs of spins to pairs of residues. It is still a challenge for automated resonance assignment systems to perform the assignment directly from spectra without any manual intervention. To test the feasibility of this for structure-based assignment, we integrated our system with our automated peak picking and sequence-based resonance assignment system to obtain an assignment for the protein TM1112 with 91% recall and 99% precision without manual intervention. Since using a known structure has the potential to allow one to use only N-labeled NMR data and avoid the added expense of using C-labeled data, we work towards the goal of automated structure-based assignment using only such labeled data. Our system reduced the assignment error of Xiong-Pandurangan-Bailey-Kellogg's contact replacement (CR) method, which to our knowledge is the most error-tolerant method for this problem, by 5 folds on average. By using an iterative algorithm, our system has the added capability of using the NOESY data to correct assignment errors due to errors in predicting the amino acid and secondary structure type of each spin system. On a publicly available data set for Ubiquitin, where the type prediction accuracy is 83%, we achieved 91% assignment accuracy, compared to the 59% accuracy that was obtained without correcting for typing errors.
Design of Linear Quadratic Regulators and Kalman Filters
NASA Technical Reports Server (NTRS)
Lehtinen, B.; Geyser, L.
1986-01-01
AESOP solves problems associated with design of controls and state estimators for linear time-invariant systems. Systems considered are modeled in state-variable form by set of linear differential and algebraic equations with constant coefficients. Two key problems solved by AESOP are linear quadratic regulator (LQR) design problem and steady-state Kalman filter design problem. AESOP is interactive. User solves design problems and analyzes solutions in single interactive session. Both numerical and graphical information available to user during the session.
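AESOP itself is an interactive NASA program; the following minimal sketch shows the standard continuous-time LQR computation that such a tool automates, using scipy's algebraic Riccati solver on a made-up double-integrator plant. The plant and weights are hypothetical, not from AESOP documentation.

import numpy as np
from scipy.linalg import solve_continuous_are

# Double-integrator plant: x1' = x2, x2' = u.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Q = np.diag([10.0, 1.0])     # state weighting
R = np.array([[1.0]])        # control weighting

P = solve_continuous_are(A, B, Q, R)      # solve the algebraic Riccati equation
K = np.linalg.solve(R, B.T @ P)           # optimal state-feedback gain, u = -K x
print(K)
print(np.linalg.eigvals(A - B @ K))       # closed-loop poles (stable half-plane)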
NASA Astrophysics Data System (ADS)
Rakhmawati, Fibri; Mawengkang, Herman; Buulolo, F.; Mardiningsih
2018-01-01
The hub location with single assignment is the problem of locating hubs and assigning the terminal nodes to hubs in order to minimize the cost of hub installation and the cost of routing the traffic in the network. There may also be capacity restrictions on the amount of traffic that can transit by hubs. This paper discusses how to model the polyhedral properties of the problems and develop a feasible neighbourhood search method to solve the model.
Abbas, Ahmed; Guo, Xianrong; Jing, Bing-Yi; Gao, Xin
2014-06-01
Despite significant advances in automated nuclear magnetic resonance-based protein structure determination, the high numbers of false positives and false negatives among the peaks selected by fully automated methods remain a problem. These false positives and negatives impair the performance of resonance assignment methods. One of the main reasons for this problem is that the computational research community often considers peak picking and resonance assignment to be two separate problems, whereas spectroscopists use expert knowledge to pick peaks and assign their resonances at the same time. We propose a novel framework that simultaneously conducts slice picking and spin system forming, an essential step in resonance assignment. Our framework then employs a genetic algorithm, directed by both connectivity information and amino acid typing information from the spin systems, to assign the spin systems to residues. The inputs to our framework can be as few as two commonly used spectra, i.e., CBCA(CO)NH and HNCACB. Unlike the existing peak picking and resonance assignment methods, which treat peaks as the units, our method is based on 'slices', which are one-dimensional vectors in three-dimensional spectra that correspond to certain ([Formula: see text]) values. Experimental results on both benchmark simulated data sets and four real protein data sets demonstrate that our method significantly outperforms the state-of-the-art methods while using fewer spectra than those methods. Our method is freely available at http://sfb.kaust.edu.sa/Pages/Software.aspx.
Analyzing the multiple-target-multiple-agent scenario using optimal assignment algorithms
NASA Astrophysics Data System (ADS)
Kwok, Kwan S.; Driessen, Brian J.; Phillips, Cynthia A.; Tovey, Craig A.
1997-09-01
This work considers the problem of maximum utilization of a set of mobile robots with limited sensor-range capabilities and limited travel distances. The robots are initially in random positions. A set of robots properly guards or covers a region if every point within the region is within the effective sensor range of at least one vehicle. We wish to move the vehicles into surveillance positions so as to guard or cover a region, while minimizing the maximum distance traveled by any vehicle. This problem can be formulated as an assignment problem, in which we must optimally decide which robot to assign to which slot of a desired matrix of grid points. The cost function is the maximum distance traveled by any robot. Assignment problems can be solved very efficiently. Solution times for one hundred robots took only seconds on a Silicon Graphics Crimson workstation. The initial positions of all the robots can be sampled by a central base station and their newly assigned positions communicated back to the robots. Alternatively, the robots can establish their own coordinate system with the origin fixed at one of the robots and orientation determined by the compass bearing of another robot relative to this robot. This paper presents example solutions to the multiple-target-multiple-agent scenario using a matching algorithm. Two separate cases with one hundred agents in each were analyzed using this method. We have found these mobile robot problems to be a very interesting application of network optimization methods, and we expect this to be a fruitful area for future research.
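A minimal sketch of the min-max (bottleneck) assignment described above: binary-search the largest allowed travel distance and test each threshold with an ordinary linear assignment on a 0/1 cost matrix. The robot and grid-slot coordinates are synthetic, and this is not the authors' matching code.

import numpy as np
from scipy.optimize import linear_sum_assignment

def bottleneck_assignment(robots, slots):
    """Assign robots to grid slots minimizing the maximum distance traveled."""
    dists = np.linalg.norm(robots[:, None, :] - slots[None, :, :], axis=2)
    candidates = np.unique(dists)
    lo, hi = 0, len(candidates) - 1
    while lo < hi:                                  # binary search on the distance threshold
        mid = (lo + hi) // 2
        forbidden = (dists > candidates[mid]).astype(float)
        rows, cols = linear_sum_assignment(forbidden)
        if forbidden[rows, cols].sum() == 0:        # perfect matching within the threshold
            hi = mid
        else:
            lo = mid + 1
    forbidden = (dists > candidates[lo]).astype(float)
    rows, cols = linear_sum_assignment(forbidden)
    return list(zip(rows, cols)), candidates[lo]

rng = np.random.default_rng(3)
robots = rng.uniform(0, 100, size=(100, 2))
xs, ys = np.meshgrid(np.linspace(5, 95, 10), np.linspace(5, 95, 10))
slots = np.column_stack([xs.ravel(), ys.ravel()])
assignment, max_dist = bottleneck_assignment(robots, slots)
print(len(assignment), round(max_dist, 2))          # 100 assignments, minimized worst-case travel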
Allen, Nancy; Whittemore, Robin; Melkus, Gail
2011-11-01
Diabetes technology has the potential to provide useful data for theory-based behavioral counseling. The aims of this study are to evaluate the feasibility, acceptability, and preliminary efficacy of a continuous glucose monitoring and problem-solving counseling intervention to change physical activity (PA) behavior in women with type 2 diabetes. Women (n=29) with type 2 diabetes were randomly assigned to one of two treatment conditions: continuous glucose monitoring counseling with problem-solving skills or continuous glucose monitoring counseling with general diabetes education. Feasibility data were obtained on intervention dose, implementation, and satisfaction. Preliminary efficacy data were collected at baseline and 12 weeks on the following measures: PA amount and intensity, diet, problem-solving skills, self-efficacy for PA, depression, hemoglobin A1c, weight, and blood pressure. Demographic and implementation variables were described using frequency distributions and summary statistics. Satisfaction data were analyzed using Wilcoxon rank. Differences between groups were analyzed using linear mixed-modeling. Women were mostly white/non-Latina with a mean age of 53 years, a 6.5-year history of diabetes, and suboptimal glycemic control. Continuous glucose monitoring plus problem-solving group participants had significantly greater problem-solving skills and had greater, although not statistically significant, dietary adherence, moderate activity minutes, weight loss, and higher intervention satisfaction pre- to post-intervention than did participants in the continuous glucose monitoring plus education group. A continuous glucose monitoring plus problem-solving intervention was feasible and acceptable, and participants had greater problem-solving skills than continuous glucose monitoring plus education group participants.
Zeng, Jianyang; Zhou, Pei; Donald, Bruce Randall
2011-01-01
One bottleneck in NMR structure determination lies in the laborious and time-consuming process of side-chain resonance and NOE assignments. Compared to the well-studied backbone resonance assignment problem, automated side-chain resonance and NOE assignments are relatively less explored. Most NOE assignment algorithms require nearly complete side-chain resonance assignments from a series of through-bond experiments such as HCCH-TOCSY or HCCCONH. Unfortunately, these TOCSY experiments perform poorly on large proteins. To overcome this deficiency, we present a novel algorithm, called NASCA (NOE Assignment and Side-Chain Assignment), to automate both side-chain resonance and NOE assignments and to perform high-resolution protein structure determination in the absence of any explicit through-bond experiment to facilitate side-chain resonance assignment, such as HCCH-TOCSY. After casting the assignment problem into a Markov Random Field (MRF), NASCA extends and applies combinatorial protein design algorithms to compute optimal assignments that best interpret the NMR data. The MRF captures the contact map information of the protein derived from NOESY spectra, exploits the backbone structural information determined by RDCs, and considers all possible side-chain rotamers. The complexity of the combinatorial search is reduced by using a dead-end elimination (DEE) algorithm, which prunes side-chain resonance assignments that are provably not part of the optimal solution. Then an A* search algorithm is employed to find a set of optimal side-chain resonance assignments that best fit the NMR data. These side-chain resonance assignments are then used to resolve the NOE assignment ambiguity and compute high-resolution protein structures. Tests on five proteins show that NASCA assigns resonances for more than 90% of side-chain protons, and achieves about 80% correct assignments. The final structures computed using the NOE distance restraints assigned by NASCA have backbone RMSD 0.8 – 1.5 Å from the reference structures determined by traditional NMR approaches. PMID:21706248
NASA Astrophysics Data System (ADS)
Ramdhani, M. N.; Baihaqi, I.; Siswanto, N.
2018-04-01
Waste collection and disposal have become a major problem for many metropolitan cities. A growing population, limited vehicles, and increased road traffic make waste transportation more complex. Waste collection involves some key considerations, such as vehicle assignment, vehicle routes, and vehicle scheduling. In the scheduling process, each vehicle has a scheduled departure to serve each route. Therefore, vehicle assignments should consider the time required to finish one assignment on that route. The objective of this study is to minimize the number of vehicles needed to serve all routes by developing a mathematical model which uses an assignment problem approach. The first step is to generate possible routes from the existing routes, followed by vehicle assignments for those routes. The model's results show that fewer vehicles are required to perform waste collection, as well as the number of journeys each vehicle makes to deliver the collected waste to the landfill. A comparison of the existing conditions with the model's results indicates that the latter performs better than the existing condition because each vehicle on a given route has an equal workload and, in the model's solution, each route requires at most two journeys.
ERIC Educational Resources Information Center
Linehan, Margaret; Walsh, James S.
2000-01-01
A study of 50 female senior managers who made international career moves found that senior experience before international assignments was more necessary for female than male managers. The glass ceiling in the home country resulted in fewer women in international management, and those with international assignments faced many gender-related…
Gassman-Pines, Anna; Godfrey, Erin B.; Yoshikawa, Hirokazu
2012-01-01
Grounded in Person-Environment Fit Theory, this study examined whether low-income mothers' preferences for education moderated the effects of employment- and education-focused welfare programs on children's positive and problem behaviors. The sample included 1,365 families with children between ages 3 and 5 at study entry. Results 5 years after random assignment, when children were ages 8 to 10, indicated that mothers' education preferences did moderate program impacts on teacher-reported child behavior problems and positive behavior. Children whose mothers were assigned to the education program were rated by teachers to have less externalizing behavior and more positive behavior than children whose mothers were assigned to the employment program, but only when mothers had strong preferences for education. PMID:22861169
Supervising Unsuccessful Student Teaching Assignments: Two Terminator's Tales.
ERIC Educational Resources Information Center
St. Maurice, Henry
2001-01-01
Discusses problems that arise when there is a conflict between a student teacher and the supervising teacher and when a student teacher does not perform satisfactorily. Focuses on how supervisors deal with failed assignments and how beginning teachers improve their teaching and learn from failed assignments. (Contains 21 references.) (JOW)
The Biomes of Homewood: Interactive Map Software
ERIC Educational Resources Information Center
Shingles, Richard; Feist, Theron; Brosnan, Rae
2005-01-01
To build a learning community, the General Biology faculty at Johns Hopkins University conducted collaborative, problem-based learning assignments outside of class in which students are assigned to specific areas on campus, and gather and report data about their area. To overcome the logistics challenges presented by conducting such assignments in…
Zhao, Yingfeng; Liu, Sanyang
2016-01-01
We present a practical branch and bound algorithm for globally solving the generalized linear multiplicative programming problem with multiplicative constraints. To solve the problem, a relaxation programming problem, which is equivalent to a linear program, is constructed by utilizing a new two-phase relaxation technique. In the algorithm, lower and upper bounds are simultaneously obtained by solving some linear relaxation programming problems. Global convergence has been proved, and the results of some sample examples and a small random experiment show that the proposed algorithm is feasible and efficient.
NASA Technical Reports Server (NTRS)
Peslen, C. A.; Koch, S. E.; Uccellini, L. W.
1984-01-01
Satellite-derived cloud motion 'wind' vectors (CMV) are increasingly used in mesoscale and in global analyses, and questions have been raised regarding the uncertainty of the level assignment for the CMV. One of two major problems in selecting a level for the CMV is related to uncertainties in assigning the motion vector to either the cloud top or base. The second problem is related to the inability to transfer the 'wind' derived from the CMV at individually specified heights to a standard coordinate surface. The objective of the present investigation is to determine whether the arbitrary level assignment represents a serious obstacle to the use of cloud motion wind vectors in the mesoscale analysis of a severe storm environment.
Xia, Youshen; Sun, Changyin; Zheng, Wei Xing
2012-05-01
There is growing interest in solving linear L1 estimation problems for sparsity of the solution and robustness against non-Gaussian noise. This paper proposes a discrete-time neural network which can calculate large linear L1 estimation problems fast. The proposed neural network has a fixed computational step length and is proved to be globally convergent to an optimal solution. Then, the proposed neural network is efficiently applied to image restoration. Numerical results show that the proposed neural network is not only efficient in solving degenerate problems resulting from the nonunique solutions of the linear L1 estimation problems but also needs much less computational time than the related algorithms in solving both linear L1 estimation and image restoration problems.
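The paper's method is a dedicated discrete-time neural network; purely as a point of reference, this minimal sketch solves the same linear L1 estimation problem through the standard linear-programming reformulation with auxiliary bound variables, using scipy. The data and tolerances are synthetic.

import numpy as np
from scipy.optimize import linprog

def l1_estimate(A, b):
    """Solve min ||Ax - b||_1 via the LP:  min sum(t)  s.t.  -t <= Ax - b <= t."""
    m, n = A.shape
    c = np.concatenate([np.zeros(n), np.ones(m)])            # objective: sum of slack t
    A_ub = np.block([[A, -np.eye(m)], [-A, -np.eye(m)]])      # Ax - t <= b and -Ax - t <= -b
    b_ub = np.concatenate([b, -b])
    bounds = [(None, None)] * n + [(0, None)] * m             # x free, t nonnegative
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[:n]

rng = np.random.default_rng(4)
A = rng.normal(size=(200, 5))
x_true = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
b = A @ x_true + rng.laplace(scale=0.1, size=200)             # heavy-tailed noise
print(np.round(l1_estimate(A, b), 2))                         # close to x_true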
Multi Objective Decision Analysis for Assignment Problems
2011-03-01
[Thesis front-matter and section fragments; text garbled in extraction] The surviving fragments note that missing data may be obtained from related databases, describe a deterministic analysis used to determine an overall score for each [truncated], include the disclaimer that the views expressed are those of the author and do not reflect the official policy or position of the Turkish Air [truncated], and state that the thesis was presented to the Faculty, Department of Operational Sciences, Graduate School of [truncated].
ERIC Educational Resources Information Center
Newby, Michael; Nguyen, ThuyUyen H.
2010-01-01
This paper examines the effectiveness of a technique that first appeared as a Teaching Tip in the Journal of Information Systems Education. In this approach the same problem is used in every programming assignment within a course, but the students are required to use different programming techniques. This approach was used in an intermediate C++…
Can Linear Superiorization Be Useful for Linear Optimization Problems?
Censor, Yair
2017-01-01
Linear superiorization considers linear programming problems but instead of attempting to solve them with linear optimization methods it employs perturbation resilient feasibility-seeking algorithms and steers them toward reduced (not necessarily minimal) target function values. The two questions that we set out to explore experimentally are (i) Does linear superiorization provide a feasible point whose linear target function value is lower than that obtained by running the same feasibility-seeking algorithm without superiorization under identical conditions? and (ii) How does linear superiorization fare in comparison with the Simplex method for solving linear programming problems? Based on our computational experiments presented here, the answers to these two questions are: “yes” and “very well”, respectively. PMID:29335660
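A minimal sketch of the idea under stated assumptions: cyclic projections onto the half-spaces a_i^T x <= b_i serve as the feasibility-seeking algorithm, and shrinking perturbations along -c steer toward reduced target values. This is schematic, not the exact algorithm tested in the paper; the step schedule and toy feasible region are hypothetical.

import numpy as np

def superiorized_feasibility(A, b, c, x0, sweeps=200, alpha=0.99):
    """Cyclic half-space projections (feasibility seeking) interleaved with
    summable, shrinking perturbations in the direction -c (target reduction)."""
    x = x0.astype(float)
    step = 1.0
    direction = -c / np.linalg.norm(c)
    for _ in range(sweeps):
        x = x + step * direction            # superiorization perturbation
        step *= alpha                       # keep the perturbations summable
        for a_i, b_i in zip(A, b):          # project onto each violated half-space
            violation = a_i @ x - b_i
            if violation > 0:
                x = x - violation * a_i / (a_i @ a_i)
    return x

# Feasible region x >= 0, y >= 0, x + y <= 4, y <= 3 written as A x <= b; target c^T x.
A = np.array([[-1.0, 0.0], [0.0, -1.0], [1.0, 1.0], [0.0, 1.0]])
b = np.array([0.0, 0.0, 4.0, 3.0])
c = np.array([1.0, 1.0])
x = superiorized_feasibility(A, b, c, x0=np.array([2.0, 2.0]))
print(np.round(x, 3), np.round(c @ x, 3))   # feasible point with reduced (not necessarily minimal) c^T x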
Yun, Lifen; Wang, Xifu; Fan, Hongqiang; Li, Xiaopeng
2017-01-01
This paper proposes a reliable facility location design model under imperfect information with site-dependent disruptions; i.e., each facility is subject to a unique disruption probability that varies across the space. In the imperfect information contexts, customers adopt a realistic “trial-and-error” strategy to visit facilities; i.e., they visit a number of pre-assigned facilities sequentially until they arrive at the first operational facility or give up looking for the service. This proposed model aims to balance initial facility investment and expected long-term operational cost by finding the optimal facility locations. A nonlinear integer programming model is proposed to describe this problem. We apply a linearization technique to reduce the difficulty of solving the proposed model. A number of problem instances are studied to illustrate the performance of the proposed model. The results indicate that our proposed model can reveal a number of interesting insights into the facility location design with site-dependent disruptions, including the benefit of backup facilities and system robustness against variation of the loss-of-service penalty. PMID:28486564
Integrated consensus-based frameworks for unmanned vehicle routing and targeting assignment
NASA Astrophysics Data System (ADS)
Barnawi, Waleed T.
Unmanned aerial vehicles (UAVs) are increasingly deployed in complex and dynamic environments to perform multiple tasks cooperatively with other UAVs that contribute to overarching mission effectiveness. Studies by the Department of Defense (DoD) indicate future operations may include anti-access/area-denial (A2AD) environments which limit human teleoperator decision-making and control. This research addresses the problem of decentralized vehicle re-routing and task reassignments through consensus-based UAV decision-making. An Integrated Consensus-Based Framework (ICF) is formulated as a solution to the combined single task assignment problem and vehicle routing problem. The multiple assignment and vehicle routing problem is solved with the Integrated Consensus-Based Bundle Framework (ICBF). The frameworks are hierarchically decomposed into two levels. The bottom layer utilizes the renowned Dijkstra's Algorithm. The top layer addresses task assignment with two methods. The single assignment approach, called the Caravan Auction (CarA) Algorithm, extends the Consensus-Based Auction Algorithm (CBAA) to provide awareness of task completion by agents and to adopt abandoned tasks. The multiple assignment approach, called the Caravan Auction Bundle (CarAB) Algorithm, extends the Consensus-Based Bundle Algorithm (CBBA) by providing awareness of lost resources, prioritizing remaining tasks, and adopting abandoned tasks. Research questions are investigated regarding the novelty and performance of the proposed frameworks, and conclusions regarding these questions are drawn through hypothesis testing, with Monte Carlo simulations providing the supporting evidence. The approach provided in this research addresses current and future military operations for unmanned aerial vehicles. However, the general framework implied by the proposed research is adaptable to any unmanned vehicle. Civil applications that involve missions where human observability is limited, such as exploration and fire surveillance, could also benefit from independent UAV task assignment.
On the problem of resonance assignments in solid state NMR of uniformly 15N, 13C-labeled proteins
NASA Astrophysics Data System (ADS)
Tycko, Robert
2015-04-01
Determination of accurate resonance assignments from multidimensional chemical shift correlation spectra is one of the major problems in biomolecular solid state NMR, particularly for relatively large proteins with less-than-ideal NMR linewidths. This article investigates the difficulty of resonance assignment, using a computational Monte Carlo/simulated annealing (MCSA) algorithm to search for assignments from artificial three-dimensional spectra that are constructed from the reported isotropic 15N and 13C chemical shifts of two proteins whose structures have been determined by solution NMR methods. The results demonstrate how assignment simulations can provide new insights into factors that affect the assignment process, which can then help guide the design of experimental strategies. Specifically, simulations are performed for the catalytic domain of SrtC (147 residues, primarily β-sheet secondary structure) and the N-terminal domain of MLKL (166 residues, primarily α-helical secondary structure). Assuming unambiguous residue-type assignments and four ideal three-dimensional data sets (NCACX, NCOCX, CONCA, and CANCA), uncertainties in chemical shifts must be less than 0.4 ppm for the SrtC assignments to be unique, and less than 0.2 ppm for MLKL. Eliminating CANCA data has no significant effect, but additionally eliminating CONCA data leads to more stringent requirements for chemical shift precision. Introducing moderate ambiguities in residue-type assignments does not have a significant effect.
Due-Window Assignment Scheduling with Variable Job Processing Times
Wu, Yu-Bin
2015-01-01
We consider a common due-window assignment scheduling problem for jobs with variable processing times on a single machine, where the processing time of a job is a function of its position in a sequence (i.e., learning effect) or of its starting time (i.e., deteriorating effect). The problem is to determine the optimal due-window and the processing sequence simultaneously so as to minimize a cost function that includes earliness, tardiness, the window location, the window size, and the weighted number of tardy jobs. We prove that the problem can be solved in polynomial time. PMID:25918745
An A Priori Multiobjective Optimization Model of a Search and Rescue Network
1992-03-01
... sequences. Classical sensitivity analysis and tolerance analysis were used to analyze the frequency assignments generated by the different weight ... function for excess coverage of a frequency. Sensitivity analysis is used to investigate the robustness of the frequency assignments produced by the ... interest. The linear program solution is used to produce classical sensitivity analysis for the weight ranges.
NASA Astrophysics Data System (ADS)
Aurora, Tarlok
2005-04-01
In a calculus-based introductory physics course, students were assigned to write out the statements of word problems (along with any accompanying diagrams), analyze them, identify important concepts and equations, and try to solve these end-of-chapter homework problems. They were required to bring their written assignments to class until the chapter was completed in lecture, and these were quickly checked at the beginning of class. In addition, re-doing selected solved examples in the textbook was assigned as homework. Where possible, students were asked to look for similarities between the solved examples and the end-of-chapter problems, or occasionally these were brought to the students' attention. It was observed that many students were able to solve several of the solved examples on the test even though the instructor had not solved them in class. This was seen as an improvement over previous years, and it made the students more responsible for their own learning. Another benefit was that it alleviated the problems previously created by many students not bringing their textbooks to class, and it allowed more time for problem solving and discussion in class.
Linear System of Equations, Matrix Inversion, and Linear Programming Using MS Excel
ERIC Educational Resources Information Center
El-Gebeily, M.; Yushau, B.
2008-01-01
In this note, we demonstrate with illustrations two different ways that MS Excel can be used to solve Linear Systems of Equations, Linear Programming Problems, and Matrix Inversion Problems. The advantage of using MS Excel is its availability and transparency (the user is responsible for most of the details of how a problem is solved). Further, we…
NASA Astrophysics Data System (ADS)
Pipkins, Daniel Scott
Two diverse topics of relevance in modern computational mechanics are treated. The first involves the modeling of linear and non-linear wave propagation in flexible, lattice structures. The technique used combines the Laplace Transform with the Finite Element Method (FEM). The procedure is to transform the governing differential equations and boundary conditions into the transform domain where the FEM formulation is carried out. For linear problems, the transformed differential equations can be solved exactly, hence the method is exact. As a result, each member of the lattice structure is modeled using only one element. In the non-linear problem, the method is no longer exact. The approximation introduced is a spatial discretization of the transformed non-linear terms. The non-linear terms are represented in the transform domain by making use of the complex convolution theorem. A weak formulation of the resulting transformed non-linear equations yields a set of element level matrix equations. The trial and test functions used in the weak formulation correspond to the exact solution of the linear part of the transformed governing differential equation. Numerical results are presented for both linear and non-linear systems. The linear systems modeled are longitudinal and torsional rods and Bernoulli-Euler and Timoshenko beams. For non-linear systems, a viscoelastic rod and Von Karman type beam are modeled. The second topic is the analysis of plates and shallow shells undergoing finite deflections by the Field/Boundary Element Method. Numerical results are presented for two plate problems. The first is the bifurcation problem associated with a square plate having free boundaries which is loaded by four self-equilibrating corner forces. The results are compared to two existing numerical solutions of the problem which differ substantially.
DCJ-indel and DCJ-substitution distances with distinct operation costs
2013-01-01
Background Classical approaches to compute the genomic distance are usually limited to genomes with the same content and take into consideration only rearrangements that change the organization of the genome (i.e. positions and orientation of pieces of DNA, number and type of chromosomes, etc.), such as inversions, translocations, fusions and fissions. These operations are generically represented by the double-cut and join (DCJ) operation. The distance between two genomes, in terms of number of DCJ operations, can be computed in linear time. In order to handle genomes with distinct contents, also insertions and deletions of fragments of DNA – named indels – must be allowed. More powerful than an indel is a substitution of a fragment of DNA by another fragment of DNA. Indels and substitutions are called content-modifying operations. It has been shown that both the DCJ-indel and the DCJ-substitution distances can also be computed in linear time, assuming that the same cost is assigned to any DCJ or content-modifying operation. Results In the present study we extend the DCJ-indel and the DCJ-substitution models, considering that the content-modifying cost is distinct from and upper bounded by the DCJ cost, and show that the distance in both models can still be computed in linear time. Although the triangular inequality can be disrupted in both models, we also show how to efficiently fix this problem a posteriori. PMID:23879938
A sequential linear optimization approach for controller design
NASA Technical Reports Server (NTRS)
Horta, L. G.; Juang, J.-N.; Junkins, J. L.
1985-01-01
A linear optimization approach with a simple real arithmetic algorithm is presented for reliable controller design and vibration suppression of flexible structures. Using first order sensitivity of the system eigenvalues with respect to the design parameters in conjunction with a continuation procedure, the method converts a nonlinear optimization problem into a maximization problem with linear inequality constraints. The method of linear programming is then applied to solve the converted linear optimization problem. The general efficiency of the linear programming approach allows the method to handle structural optimization problems with a large number of inequality constraints on the design vector. The method is demonstrated using a truss beam finite element model for the optimal sizing and placement of active/passive-structural members for damping augmentation. Results using both the sequential linear optimization approach and nonlinear optimization are presented and compared. The insensitivity to initial conditions of the linear optimization approach is also demonstrated.
Using linear programming to minimize the cost of nurse personnel.
Matthews, Charles H
2005-01-01
Nursing personnel costs make up a major portion of most hospital budgets. This report evaluates and optimizes the utility of the nurse personnel at the Internal Medicine Outpatient Clinic of Wake Forest University Baptist Medical Center. Linear programming (LP) was employed to determine the effective combination of nurses that would allow all weekly clinic tasks to be covered while providing the lowest possible cost to the department. Linear programming is a standard application of spreadsheet software that allows the operator to establish the variables to be optimized and then requires the operator to enter a series of constraints, each of which has an impact on the ultimate outcome. The application is therefore able to quantify and stratify the nurses necessary to execute the tasks. With the report, a specific sensitivity analysis can be performed to assess just how sensitive the outcome is to the stress of adding or deleting a nurse to or from the payroll. The nurse employee cost structure in this study consisted of five certified nurse assistants (CNAs), three licensed practical nurses (LPNs), and five registered nurses (RNs). The LP revealed that the outpatient clinic should staff four RNs, three LPNs, and four CNAs with 95 percent confidence of covering nurse demand on the floor. This combination of nurses would enable the clinic to: 1. Reduce annual staffing costs by 16 percent; 2. Force each level of nurse to be optimally productive by focusing on tasks specific to their expertise; 3. Assign accountability more efficiently as the nurses adhere to their specific duties; and 4. Ultimately provide a competitive advantage to the clinic as it relates to nurse employee and patient satisfaction. Linear programming can be used to solve capacity problems for just about any staffing situation, provided the model is indeed linear.
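A hedged sketch of this kind of staffing LP, written with SciPy; the costs, coverage requirements, and bounds below are made-up placeholders rather than the clinic's data, and the LP relaxation would still need rounding (or an integer solver) for whole-nurse counts.

```python
# Hedged sketch of a staffing LP in the spirit of the clinic study; the
# weekly costs and coverage requirements are assumed placeholders, not the
# paper's data. Variables are weekly counts of RNs, LPNs and CNAs.
from scipy.optimize import linprog

cost = [1200, 900, 700]          # weekly cost per RN, LPN, CNA (assumed)
# Coverage constraints written as A_ub @ x <= b_ub, i.e. -(coverage) <= -demand.
A_ub = [[-1, -1, -1],            # total nurses must cover floor demand (>= 11)
        [-1,  0,  0],            # RN-only tasks need at least 4 RNs (assumed)
        [ 0, -1, -1]]            # LPN+CNA pool for routine tasks (>= 7, assumed)
b_ub = [-11, -4, -7]
bounds = [(0, 5), (0, 3), (0, 5)]   # cannot exceed the current payroll

res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print(res.x, res.fun)            # fractional LP solution; round for staffing
```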
Fleet Assignment Using Collective Intelligence
NASA Technical Reports Server (NTRS)
Antoine, Nicolas E.; Bieniawski, Stefan R.; Kroo, Ilan M.; Wolpert, David H.
2004-01-01
Airline fleet assignment involves the allocation of aircraft to a set of flight legs in order to meet passenger demand, while satisfying a variety of constraints. Over the course of the day, the routing of each aircraft is determined in order to minimize the number of required flights for a given fleet. The associated flow continuity and aircraft count constraints have led researchers to focus on obtaining quasi-optimal solutions, especially at larger scales. In this paper, the authors propose the application of an agent-based integer optimization algorithm to a "cold start" fleet assignment problem. Results show that the optimizer can successfully solve such highly-constrained problems (129 variables, 184 constraints).
On the linear programming bound for linear Lee codes.
Astola, Helena; Tabus, Ioan
2016-01-01
Based on an invariance-type property of the Lee-compositions of a linear Lee code, additional equality constraints can be introduced to the linear programming problem of linear Lee codes. In this paper, we formulate this property in terms of an action of the multiplicative group of the field [Formula: see text] on the set of Lee-compositions. We show some useful properties of certain sums of Lee-numbers, which are the eigenvalues of the Lee association scheme, appearing in the linear programming problem of linear Lee codes. Using the additional equality constraints, we formulate the linear programming problem of linear Lee codes in a very compact form, leading to a fast execution, which allows the bounds for large parameter values of the linear codes to be computed efficiently.
Evaluation of Goal Programming for the Optimal Assignment of Inspectors to Construction Projects
1988-09-01
Only table-of-contents and list-of-figures fragments are available, covering equation coefficients; weights, priorities and the AHP; right-hand side values; the AHP hierarchy with k levels; a sample matrix for pairwise comparison; and AHP weights for an example problem (report AFIT/GEM/LSM/88S-16). The available abstract text breaks off at "The purpose of this study was".
ERIC Educational Resources Information Center
Vick, John W.; Houden, Dorothy
This report contains recommendations of a Wisconsin Task Assignment Steering Committee created to explore solutions to some significant problems facing adult chronic "revolving-detox-door" alcohol abusers (CRAs), persons with repeated admissions for detoxification services; and to examine the system that serves and funds them. This…
ERIC Educational Resources Information Center
Barak, Moshe; Assal, Muhammad
2018-01-01
This study presents the case of the development and evaluation of a STEM-oriented 30-hour robotics course for junior high school students (n = 32). Class activities were designed according to the P3 Task Taxonomy, which included: (1) practice: basic closed-ended tasks and exercises; (2) problem solving: small-scale open-ended assignments in which the…
Scheduling Jobs and a Variable Maintenance on a Single Machine with Common Due-Date Assignment
Wan, Long
2014-01-01
We investigate a common due-date assignment scheduling problem with a variable maintenance on a single machine. The goal is to minimize the total earliness, tardiness, and due-date cost. We derive some properties on an optimal solution for our problem. For a special case with identical jobs we propose an optimal polynomial time algorithm followed by a numerical example. PMID:25147861
NASA Technical Reports Server (NTRS)
Voigt, Kerstin
1992-01-01
We present MENDER, a knowledge-based system that implements software design techniques specialized to automatically compile generate-and-patch problem solvers for global resource assignment problems. We provide empirical evidence of the superior performance of generate-and-patch over generate-and-test, even with constrained generation, for a global constraint in the domain of '2D-floorplanning'. For a second constraint in '2D-floorplanning' we show that even when it is possible to incorporate the constraint into a constrained generator, a generate-and-patch problem solver may satisfy the constraint more rapidly. We also briefly summarize how an extended version of our system applies to a constraint in the domain of 'multiprocessor scheduling'.
Fundamental solution of the problem of linear programming and method of its determination
NASA Technical Reports Server (NTRS)
Petrunin, S. V.
1978-01-01
The idea of a fundamental solution to a problem in linear programming is introduced. A method of determining the fundamental solution and of applying this method to the solution of a problem in linear programming is proposed. Numerical examples are cited.
Static assignment of complex stochastic tasks using stochastic majorization
NASA Technical Reports Server (NTRS)
Nicol, David; Simha, Rahul; Towsley, Don
1992-01-01
We consider the problem of statically assigning many tasks to a (smaller) system of homogeneous processors, where a task's structure is modeled as a branching process, and all tasks are assumed to have identical behavior. We show how the theory of majorization can be used to obtain a partial order among possible task assignments. Our results show that if the vector of numbers of tasks assigned to each processor under one mapping is majorized by that of another mapping, then the former mapping is better than the latter with respect to a large number of objective functions. In particular, we show how measurements of finishing time, resource utilization, and reliability are all captured by the theory. We also show how the theory may be applied to the problem of partitioning a pool of processors for distribution among parallelizable tasks.
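The majorization order at the heart of the argument is easy to state in code. The helper below (with illustrative task-count vectors) checks whether one mapping's per-processor task counts are majorized by another's, i.e. whether its sorted partial sums never exceed the other's while the totals agree.

```python
# Small helper illustrating the majorization order used in the paper's
# argument: x is majorized by y when the sorted partial sums of x never
# exceed those of y and the totals agree (task counts per processor).
import numpy as np

def is_majorized_by(x, y):
    x = np.sort(x)[::-1]                 # sort in decreasing order
    y = np.sort(y)[::-1]
    if x.sum() != y.sum():
        return False
    return bool(np.all(np.cumsum(x) <= np.cumsum(y)))

# A balanced mapping of 12 tasks is majorized by a skewed one, so it is
# preferable for the objectives covered by the theory.
print(is_majorized_by([4, 4, 4], [6, 4, 2]))   # True
print(is_majorized_by([6, 4, 2], [4, 4, 4]))   # False
```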
NASA Astrophysics Data System (ADS)
Cui, Ke; Ren, Zhongjie; Li, Xiangyu; Liu, Zongkai; Zhu, Rihong
2017-01-01
Time-to-digital converters (TDCs) using the dedicated carry chains of field programmable gate arrays (FPGAs) are usually organized in a tapped-delay-line style and have been intensively researched in recent years. However, this method incurs poor differential nonlinearity (DNL), which arises from the inherently uneven bin granularity. This paper proposes a TDC architecture that utilizes the carry chains in a quite different manner in order to alleviate this long-standing problem. Two independent carry chains, working as the delay lines for the fine time interpolation, are organized in a ring-oscillator-based Vernier style, and the time difference between them is finely adjusted by assigning different numbers of basic delay cells. A specific design flow is described to obtain the desired delay difference. The TDC was implemented on a Stratix III FPGA. Test results show that the obtained resolution is 31 ps and the DNL/INL is in the range of (-0.080 LSB, 0.073 LSB)/(-0.087 LSB, 0.091 LSB). This demonstrates that the proposed architecture greatly improves linearity compared to previous techniques. Additionally, the resource cost is rather low, using only 319 LUTs and 104 registers per TDC channel.
Comparison of four approaches to a rock facies classification problem
Dubois, M.K.; Bohling, Geoffrey C.; Chakrabarti, S.
2007-01-01
In this study, seven classifiers based on four different approaches were tested on a rock facies classification problem: classical parametric methods using Bayes' rule, and non-parametric methods using fuzzy logic, k-nearest neighbor, and a feed-forward back-propagating artificial neural network. Determining the most effective classifier for geologic facies prediction in wells without cores in the Panoma gas field in southwest Kansas was the objective. Study data include 3600 samples with known rock facies class (from core), with each sample having either four or five measured properties (wire-line log curves) and two derived geologic properties (geologic constraining variables). The sample set was divided into two subsets, one for training and one for testing the ability of the trained classifier to correctly assign classes. Artificial neural networks clearly outperformed all other classifiers and are effective tools for this particular classification problem. Classical parametric models were inadequate due to the nature of the predictor variables (high dimensional and not linearly correlated) and the feature space of the classes (overlapping). The other non-parametric methods tested, k-nearest neighbor and fuzzy logic, would need considerable improvement to match the neural network effectiveness, but further work, possibly combining certain aspects of the three non-parametric methods, may be justified. © 2006 Elsevier Ltd. All rights reserved.
Tracking cells in Life Cell Imaging videos using topological alignments.
Mosig, Axel; Jäger, Stefan; Wang, Chaofeng; Nath, Sumit; Ersoy, Ilker; Palaniappan, Kannappan; Chen, Su-Shing
2009-07-16
With the increasing availability of live cell imaging technology, tracking cells and other moving objects in live cell videos has become a major challenge for bioimage informatics. An inherent problem for most cell tracking algorithms is over- or under-segmentation of cells - many algorithms tend to recognize one cell as several cells or vice versa. We propose to approach this problem through so-called topological alignments, which we apply to address the problem of linking segmentations of two consecutive frames in the video sequence. Starting from the output of a conventional segmentation procedure, we align pairs of consecutive frames through assigning sets of segments in one frame to sets of segments in the next frame. We achieve this through finding maximum weighted solutions to a generalized "bipartite matching" between two hierarchies of segments, where we derive weights from relative overlap scores of convex hulls of sets of segments. For solving the matching task, we rely on an integer linear program. Practical experiments demonstrate that the matching task can be solved efficiently in practice, and that our method is both effective and useful for tracking cells in data sets derived from a so-called Large Scale Digital Cell Analysis System (LSDCAS). The source code of the implementation is available for download from http://www.picb.ac.cn/patterns/Software/topaln.
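As a simplified illustration only: the paper solves a generalized matching between hierarchies of segments with an integer linear program, whereas the sketch below links individual segments one-to-one by maximizing synthetic overlap scores with the Hungarian algorithm from SciPy.

```python
# Simplified illustration only: the paper matches *hierarchies* of segments
# with an integer linear program; this sketch links individual segments
# one-to-one between consecutive frames by maximizing overlap scores with
# scipy's linear_sum_assignment (Hungarian algorithm). Scores are synthetic.
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(2)
overlap = rng.random((5, 6))              # overlap[i, j]: segment i in frame t
                                          # vs segment j in frame t+1
rows, cols = linear_sum_assignment(overlap, maximize=True)
for i, j in zip(rows, cols):
    print(f"frame-t segment {i} -> frame-t+1 segment {j} "
          f"(overlap {overlap[i, j]:.2f})")
```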
Rate Adaptive Based Resource Allocation with Proportional Fairness Constraints in OFDMA Systems
Yin, Zhendong; Zhuang, Shufeng; Wu, Zhilu; Ma, Bo
2015-01-01
Orthogonal frequency division multiple access (OFDMA), which is widely used in wireless sensor networks, allows different users to obtain different subcarriers according to their subchannel gains. Therefore, how to assign subcarriers and power to different users to achieve a high system sum rate is an important research area in OFDMA systems. In this paper, the focus of study is on rate adaptive (RA) based resource allocation with proportional fairness constraints. Since resource allocation is an NP-hard and non-convex optimization problem, a new efficient resource allocation algorithm, ACO-SPA, is proposed, which combines ant colony optimization (ACO) and suboptimal power allocation (SPA). To reduce the computational complexity, the optimization problem of resource allocation in OFDMA systems is separated into two steps. In the first step, the ant colony optimization algorithm is performed to solve the subcarrier allocation. Then, the suboptimal power allocation algorithm is developed with strict proportional fairness; the algorithm is based on the principle that the sums of power and the reciprocal of channel-to-noise ratio for each user in different subchannels are equal. Plenty of simulation results are presented to support the proposed method. In contrast with root-finding and linear methods, the proposed method provides better performance in solving the proportional resource allocation problem in OFDMA systems. PMID:26426016
Problems on Divisibility of Binomial Coefficients
ERIC Educational Resources Information Center
Osler, Thomas J.; Smoak, James
2004-01-01
Twelve unusual problems involving divisibility of the binomial coefficients are presented in this article. The problems are listed in "The Problems" section. All twelve problems have short solutions, which are listed in "The Solutions" section. These problems could be assigned to students in any course in which the binomial theorem and Pascal's…
An agglomerative hierarchical clustering approach to visualisation in Bayesian clustering problems
Dawson, Kevin J.; Belkhir, Khalid
2009-01-01
Clustering problems (including the clustering of individuals into outcrossing populations, hybrid generations, full-sib families and selfing lines) have recently received much attention in population genetics. In these clustering problems, the parameter of interest is a partition of the set of sampled individuals - the sample partition. In a fully Bayesian approach to clustering problems of this type, our knowledge about the sample partition is represented by a probability distribution on the space of possible sample partitions. Since the number of possible partitions grows very rapidly with the sample size, we cannot visualise this probability distribution in its entirety, unless the sample is very small. As a solution to this visualisation problem, we recommend using an agglomerative hierarchical clustering algorithm, which we call the exact linkage algorithm. This algorithm is a special case of the maximin clustering algorithm that we introduced previously. The exact linkage algorithm is now implemented in our software package Partition View. The exact linkage algorithm takes the posterior co-assignment probabilities as input, and yields as output a rooted binary tree - or, more generally, a forest of such trees. Each node of this forest defines a set of individuals, and the node height is the posterior co-assignment probability of this set. This provides a useful visual representation of the uncertainty associated with the assignment of individuals to categories. It is also a useful starting point for a more detailed exploration of the posterior distribution in terms of the co-assignment probabilities. PMID:19337306
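A hedged stand-in for this construction: given a matrix of posterior co-assignment probabilities (synthetic here), one can treat one minus the probability as a distance and build a standard single-linkage tree with SciPy. The authors' exact linkage algorithm sets node heights differently, but the resulting dendrogram conveys a similar visual summary of assignment uncertainty.

```python
# Hedged stand-in for the exact linkage algorithm: treat 1 - (posterior
# co-assignment probability) as a distance and build a standard
# single-linkage tree with SciPy. Co-assignment probabilities are synthetic.
import numpy as np
from scipy.spatial.distance import squareform
from scipy.cluster.hierarchy import linkage, dendrogram

rng = np.random.default_rng(3)
n = 8
coassign = np.eye(n)
for i in range(n):
    for j in range(i + 1, n):
        p = 0.9 if (i // 4) == (j // 4) else 0.1   # two synthetic groups
        coassign[i, j] = coassign[j, i] = p + rng.normal(0, 0.02)

dist = squareform(1.0 - coassign, checks=False)    # condensed distance vector
tree = linkage(dist, method='single')
print(dendrogram(tree, no_plot=True)['ivl'])       # leaf order of the tree
```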
Regulation of Split Linear Systems Over Rings: Coefficient-Assignment and Observers,
1980-02-22
Only garbled fragments of the report documentation page are available. The recoverable text reads "we give for the first time a method to obtain an observer for a finite-free strongly observable system" and mentions a K-linear map; a page header cites IEEE Transactions on Automatic Control, Vol. AC-27, No. 1, February 1982.
Reading Aloud, Play, and Social-Emotional Development.
Mendelsohn, Alan L; Cates, Carolyn Brockmeyer; Weisleder, Adriana; Berkule Johnson, Samantha; Seery, Anne M; Canfield, Caitlin F; Huberman, Harris S; Dreyer, Benard P
2018-05-01
To determine impacts on social-emotional development at school entry of a pediatric primary care intervention (Video Interaction Project [VIP]) promoting positive parenting through reading aloud and play, delivered in 2 phases: infant through toddler (VIP birth to 3 years [VIP 0-3]) and preschool-age (VIP 3 to 5 years [VIP 3-5]). Factorial randomized controlled trial with postpartum enrollment and random assignment to VIP 0-3, control 0 to 3 years, and a third group without school entry follow-up (Building Blocks) and 3-year second random assignment of VIP 0-3 and control 0 to 3 years to VIP 3-5 or control 3 to 5 years. In the VIP, a bilingual facilitator video recorded the parent and child reading and/or playing using provided learning materials and reviewed videos to reinforce positive interactions. Social-emotional development at 4.5 years was assessed by parent-report Behavior Assessment System for Children, Second Edition (Social Skills, Attention Problems, Hyperactivity, Aggression, Externalizing Problems). VIP 0-3 and VIP 3-5 were independently associated with improved 4.5-year Behavior Assessment System for Children, Second Edition T-scores, with effect sizes (Cohen's d) ∼-0.25 to -0.30. Receipt of combined VIP 0-3 and VIP 3-5 was associated with d = -0.63 reduction in Hyperactivity ( P = .001). VIP 0-3 resulted in reduced "Clinically Significant" Hyperactivity (relative risk reduction for overall sample: 69.2%; P = .03; relative risk reduction for increased psychosocial risk: 100%; P = .006). Multilevel models revealed significant VIP 0-3 linear effects and age × VIP 3-5 interactions. Phase VIP 0-3 resulted in sustained impacts on behavior problems 1.5 years after program completion. VIP 3-5 had additional, independent impacts. With our findings, we support the use of pediatric primary care to promote reading aloud and play from birth to 5 years, and the potential for such programs to enhance social-emotional development. Copyright © 2018 by the American Academy of Pediatrics.
Prefrontal Neurons Encode a Solution to the Credit-Assignment Problem
Perge, János A.; Eskandar, Emad N.
2017-01-01
To adapt successfully to our environments, we must use the outcomes of our choices to guide future behavior. Critically, we must be able to correctly assign credit for any particular outcome to the causal features which preceded it. In some cases, the causal features may be immediately evident, whereas in others they may be separated in time or intermingled with irrelevant environmental stimuli, creating a potentially nontrivial credit-assignment problem. We examined the neuronal representation of information relevant for credit assignment in the dorsolateral prefrontal cortex (dlPFC) of two male rhesus macaques performing a task that elicited key aspects of this problem. We found that neurons conveyed the information necessary for credit assignment. Specifically, neuronal activity reflected both the relevant cues and outcomes at the time of feedback and did so in a manner that was stable over time, in contrast to prior reports of representational instability in the dlPFC. Furthermore, these representations were most stable early in learning, when credit assignment was most needed. When the same features were not needed for credit assignment, these neuronal representations were much weaker or absent. These results demonstrate that the activity of dlPFC neurons conforms to the basic requirements of a system that performs credit assignment, and that spiking activity can serve as a stable mechanism that links causes and effects. SIGNIFICANCE STATEMENT Credit assignment is the process by which we infer the causes of our successes and failures. We found that neuronal activity in the dorsolateral prefrontal cortex conveyed the necessary information for performing credit assignment. Importantly, while there are various potential mechanisms to retain a “trace” of the causal events over time, we observed that spiking activity was sufficiently stable to act as the link between causes and effects, in contrast to prior reports that suggested spiking representations were unstable over time. In addition, we observed that this stability varied as a function of learning, such that the neural code was more reliable over time during early learning, when it was most needed. PMID:28634307
Approaches to catheter ablation for persistent atrial fibrillation.
Verma, Atul; Jiang, Chen-yang; Betts, Timothy R; Chen, Jian; Deisenhofer, Isabel; Mantovan, Roberto; Macle, Laurent; Morillo, Carlos A; Haverkamp, Wilhelm; Weerasooriya, Rukshen; Albenque, Jean-Paul; Nardi, Stefano; Menardi, Endrj; Novak, Paul; Sanders, Prashanthan
2015-05-07
Catheter ablation is less successful for persistent atrial fibrillation than for paroxysmal atrial fibrillation. Guidelines suggest that adjuvant substrate modification in addition to pulmonary-vein isolation is required in persistent atrial fibrillation. We randomly assigned 589 patients with persistent atrial fibrillation in a 1:4:4 ratio to ablation with pulmonary-vein isolation alone (67 patients), pulmonary-vein isolation plus ablation of electrograms showing complex fractionated activity (263 patients), or pulmonary-vein isolation plus additional linear ablation across the left atrial roof and mitral valve isthmus (259 patients). The duration of follow-up was 18 months. The primary end point was freedom from any documented recurrence of atrial fibrillation lasting longer than 30 seconds after a single ablation procedure. Procedure time was significantly shorter for pulmonary-vein isolation alone than for the other two procedures (P<0.001). After 18 months, 59% of patients assigned to pulmonary-vein isolation alone were free from recurrent atrial fibrillation, as compared with 49% of patients assigned to pulmonary-vein isolation plus complex electrogram ablation and 46% of patients assigned to pulmonary-vein isolation plus linear ablation (P=0.15). There were also no significant differences among the three groups for the secondary end points, including freedom from atrial fibrillation after two ablation procedures and freedom from any atrial arrhythmia. Complications included tamponade (three patients), stroke or transient ischemic attack (three patients), and atrioesophageal fistula (one patient). Among patients with persistent atrial fibrillation, we found no reduction in the rate of recurrent atrial fibrillation when either linear ablation or ablation of complex fractionated electrograms was performed in addition to pulmonary-vein isolation. (Funded by St. Jude Medical; ClinicalTrials.gov number, NCT01203748.).
Distributed resource allocation under communication constraints
NASA Astrophysics Data System (ADS)
Dodin, Pierre; Nimier, Vincent
2001-03-01
This paper deals with the multi-sensor management problem for multi-target tracking. The collaboration between many sensors observing the same target means that they are able to fuse their data during the information process. One must therefore take this possibility into account when computing the optimal sensor-to-target association at each time step. In order to solve this problem for real large-scale systems, one must consider both the information aspect and the control aspect of the problem. To unify these aspects, one possibility is to use a decentralized filtering algorithm locally driven by an assignment algorithm. The decentralized filtering algorithm we use in our model is the filtering algorithm of Grime, which relaxes the usual full-connected hypothesis. By full-connected, one means that the information in a full-connected system is totally distributed everywhere at the same moment, which is unacceptable for a real large-scale system. We model the distributed assignment decision with the help of a greedy algorithm. Each sensor performs a global optimization in order to estimate the other sensors' information sets. A consequence of relaxing the full-connected hypothesis is that the sensors' information sets are not the same at each time step, producing an information asymmetry in the system. The assignment algorithm uses local knowledge of this asymmetry. By testing the reactions and the coherence of the local assignment decisions of our system against maneuvering targets, we show that it is still possible to manage decentralized assignment control even though the system is not full-connected.
Eeren, Hester V; Goossens, Lucas M A; Scholte, Ron H J; Busschbach, Jan J V; van der Rijken, Rachel E A
2018-01-09
Multisystemic Therapy (MST) and Functional Family Therapy (FFT) have overlapping target populations and treatment goals. In this study, these interventions were compared on their effectiveness using a quasi-experimental design. Between October, 2009 and June, 2014, outcome data were collected from 697 adolescents (mean age 15.3 (SD 1.48), 61.9% male) assigned to either MST or FFT (422 MST; 275 FFT). Data were gathered during Routine Outcome Monitoring. The primary outcome was externalizing problem behavior (Child Behavior Checklist and Youth Self Report). Secondary outcomes were the proportion of adolescents living at home, engaged in school or work, and who lacked police contact during treatment. Because of the non-random assignment, a propensity score method was used to control for observed pre-treatment differences. Because the risk-need-responsivity (RNR) model guided treatment assignment, effectiveness was also estimated in youth with and without a court order as an indicator of their risk level. Looking at the whole sample, no difference in effect was found with regard to externalizing problems. For adolescents without a court order, effects on externalizing problems were larger after MST. Because many more adolescents with a court order were assigned to MST compared to FFT, the propensity score method could not balance the treatment groups in this subsample. In conclusion, few differences between MST and FFT were found. In line with the RNR model, higher risk adolescents were assigned to the more intensive treatment, namely MST. In the group with lower risk adolescents, this more intensive treatment was more effective in reducing externalizing problems.
Occupational stress and related factors among surgical residents in Korea
Kang, Sanghee; Jo, Hye Sung; Lee, Ji Sung; Kim, Chong Suk
2015-01-01
Purpose The application rate for surgical residents in Korea has continuously decreased over the past few years. The demanding workload and the occupational stress of surgical training are likely causes of this problem. The aim of this study was to investigate occupational stress and its related factors in Korean surgical residents. Methods With the support of the Korean Surgical Society, we conducted an electronic survey of Korean surgical residents related to occupational stress. We used the Korean Occupational Stress Scale (KOSS) to measure occupational stress. We analyzed the data focused on the stress level and the factors associated with occupational stress. Results The mean KOSS score of the surgical residents was 55.39, which was significantly higher than that of practicing surgeons (48.16, P < 0.001) and the average score of specialized professionals (46.03, P < 0.001). Exercise was the only factor found to be significantly associated with KOSS score (P = 0.001) in univariate analysis. However, in multiple linear regression analysis, the mean number of assigned patients, resident occupation rate and exercise were all significantly associated with KOSS score. Conclusion Surgical residents have high occupational stress compared to practicing surgeons and other professionals. Their mean number of assigned patients, resident recruitment rate and exercise were all significantly associated with occupational stress for surgical residents. PMID:26576407
Characteristics of Transgender Individuals Entering Substance Abuse Treatment
Heck, Nicholas C.; Sorensen, James L.
2014-01-01
Little is known about the needs or characteristics of transgender individuals in substance abuse treatment settings. Transgender (n=199) and non-transgender (cisgender, n=13440) individuals were compared on psychosocial factors related to treatment, health risk behaviors, medical and mental health status and utilization, and substance use behaviors within a database that documented individuals entering substance abuse treatment in San Francisco, CA from 2007–2009 using logistic and linear regression analyses (run separately by identified gender). Transgender men (assigned birth sex of female) differed from cisgender men across many psychosocial factors, including having more recent employment, less legal system involvement, greater incidence of living with a substance abuser, and greater family conflict, while transgender women (assigned birth sex of male) were less likely to have minor children than cisgender women. Transgender women reported greater needle use and HIV testing rates were greater among transgender women. Transgender men and women reported higher rates of physical health problems, mental health diagnoses, and psychiatric medications but there were no differences in service utilization. There were no differences in substance use behaviors except that transgender women were more likely to endorse primary methamphetamine use. Transgender individuals evidence unique strengths and challenges that could inform targeted services in substance abuse treatment. PMID:24561017
Gassman-Pines, Anna; Godfrey, Erin B; Yoshikawa, Hirokazu
2013-01-01
Grounded in person-environment fit theory, this study examined whether low-income mothers' preferences for education moderated the effects of employment- and education-focused welfare programs on children's positive and problem behaviors. The sample included 1,365 families with children between ages 3 and 5 years at study entry. Results 5 years after random assignment, when children were ages 8-10 years, indicated that mothers' education preferences did moderate program impacts on teacher-reported child behavior problems and positive behavior. Children whose mothers were assigned to the education program were rated by teachers to have less externalizing behavior and more positive behavior than children whose mothers were assigned to the employment program but only when mothers had strong preferences for education. © 2012 The Authors. Child Development © 2012 Society for Research in Child Development, Inc.
NASA Astrophysics Data System (ADS)
Colantonio, Alessandro; di Pietro, Roberto; Ocello, Alberto; Verde, Nino Vincenzo
In this paper we address the problem of generating a candidate role-set for an RBAC configuration that enjoys the following two key features: it minimizes the administration cost, and it is a stable candidate role-set. To achieve these goals, we implement a three-step methodology: first, we associate a weight with each role; second, we identify and remove the user-permission assignments that cannot belong to any role whose weight exceeds a given threshold; third, we restrict the problem of finding a candidate role-set for the given system configuration to only the user-permission assignments that have not been removed in the second step, that is, the user-permission assignments that belong to roles whose weight exceeds the given threshold. We formally show, with proofs rooted in graph theory, that this methodology achieves the intended goals. Finally, we discuss practical applications of our approach to the role mining problem.
Can linear superiorization be useful for linear optimization problems?
NASA Astrophysics Data System (ADS)
Censor, Yair
2017-04-01
Linear superiorization (LinSup) considers linear programming problems but instead of attempting to solve them with linear optimization methods it employs perturbation resilient feasibility-seeking algorithms and steers them toward reduced (not necessarily minimal) target function values. The two questions that we set out to explore experimentally are: (i) does LinSup provide a feasible point whose linear target function value is lower than that obtained by running the same feasibility-seeking algorithm without superiorization under identical conditions? (ii) How does LinSup fare in comparison with the Simplex method for solving linear programming problems? Based on our computational experiments presented here, the answers to these two questions are: ‘yes’ and ‘very well’, respectively.
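A minimal sketch of the linear superiorization idea, assuming synthetic data and a simple cyclic half-space projection method as the feasibility-seeking algorithm (this is not Censor's code): before each projection sweep the iterate is nudged along the negative objective direction with shrinking, summable step sizes.

```python
# Hedged sketch of linear superiorization: a cyclic projection method seeks
# a point satisfying Ax <= b, while before each sweep the iterate is
# perturbed along -c with shrinking step sizes to steer c^T x downward.
# The problem data are synthetic.
import numpy as np

rng = np.random.default_rng(4)
m, n = 20, 5
A = rng.normal(size=(m, n))
b = A @ rng.random(n) + 1.0                    # guarantees a nonempty feasible set
c = rng.normal(size=n)

x = np.zeros(n)
alpha = 1.0
for sweep in range(200):
    x = x - alpha * c / np.linalg.norm(c)      # superiorization perturbation
    alpha *= 0.95                              # geometric, summable step sizes
    for i in range(m):                         # feasibility-seeking sweep:
        viol = A[i] @ x - b[i]                 # project onto each half-space
        if viol > 0:
            x = x - viol * A[i] / (A[i] @ A[i])

print("max violation:", float(np.max(A @ x - b)), "c^T x:", float(c @ x))
```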
Castillo, Andrés M; Bernal, Andrés; Dieden, Reiner; Patiny, Luc; Wist, Julien
2016-01-01
We present "Ask Ernö", a self-learning system for the automatic analysis of NMR spectra, consisting of integrated chemical shift assignment and prediction tools. The output of the automatic assignment component initializes and improves a database of assigned protons that is used by the chemical shift predictor. In turn, the predictions provided by the latter facilitate improvement of the assignment process. Iteration on these steps allows Ask Ernö to improve its ability to assign and predict spectra without any prior knowledge or assistance from human experts. This concept was tested by training such a system with a dataset of 2341 molecules and their (1)H-NMR spectra, and evaluating the accuracy of chemical shift predictions on a test set of 298 partially assigned molecules (2007 assigned protons). After 10 iterations, Ask Ernö was able to decrease its prediction error by 17 %, reaching an average error of 0.265 ppm. Over 60 % of the test chemical shifts were predicted within 0.2 ppm, while only 5 % still presented a prediction error of more than 1 ppm. Ask Ernö introduces an innovative approach to automatic NMR analysis that constantly learns and improves when provided with new data. Furthermore, it completely avoids the need for manually assigned spectra. This system has the potential to be turned into a fully autonomous tool able to compete with the best alternatives currently available.Graphical abstractSelf-learning loop. Any progress in the prediction (forward problem) will improve the assignment ability (reverse problem) and vice versa.
Portfolio optimization using fuzzy linear programming
NASA Astrophysics Data System (ADS)
Pandit, Purnima K.
2013-09-01
Portfolio Optimization (PO) is a problem in finance in which an investor tries to maximize return and minimize risk by carefully choosing different assets. Expected return and risk are the most important parameters with regard to optimal portfolios. In its simple form, PO can be modeled as a quadratic programming problem, which can be put into an equivalent linear form. PO problems with fuzzy parameters can be solved as multi-objective fuzzy linear programming problems. In this paper we give the solution to such problems with an illustrative example.
NASA Technical Reports Server (NTRS)
Tuey, R. C.
1972-01-01
Computer solutions of linear programming problems are outlined. Information covers vector spaces, convex sets, and matrix algebra elements for solving simultaneous linear equations. Dual problems, reduced cost analysis, ranges, and error analysis are illustrated.
NASA Astrophysics Data System (ADS)
Li, Ni; Huai, Wenqing; Wang, Shaodan
2017-08-01
C2 (command and control) has been understood to be a critical military component in meeting an increasing demand for rapid information gathering and real-time decision-making in a dynamically changing battlefield environment. In this article, to improve a C2 behaviour model's reusability and interoperability, a behaviour modelling framework was proposed to specify a C2 model's internal modules and a set of interoperability interfaces based on the C-BML (coalition battle management language). WTA (weapon target assignment) is a typical C2 autonomous decision-making behaviour modelling problem. Unlike most WTA problem descriptions, here sensors were considered to be available detection resources and the relationship constraints between weapons and sensors were also taken into account, which brought the formulation much closer to practical application. A modified differential evolution (MDE) algorithm was developed to solve this high-dimensional optimisation problem and obtained an optimal assignment plan with high efficiency. In the case study, we built a simulation system to validate the proposed C2 modelling framework and interoperability interface specification. Also, the new optimisation solution was used to solve the WTA problem efficiently and successfully.
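A hedged toy version of the optimization step: a small weapon-target assignment solved with SciPy's standard differential evolution rather than the article's modified DE, and without the sensor constraints. Each weapon gets a continuous variable whose integer part is the target it engages; target values and kill probabilities are synthetic, and the objective is the expected surviving target value.

```python
# Toy weapon-target assignment solved with SciPy's standard differential
# evolution (the article uses its own modified DE and also models sensors,
# which are omitted here). Target values and kill probabilities are synthetic.
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(5)
n_weapons, n_targets = 6, 4
value = rng.uniform(1, 10, n_targets)              # target values (assumed)
pkill = rng.uniform(0.2, 0.9, (n_weapons, n_targets))

def surviving_value(x):
    # Floor each continuous variable to get the engaged target index.
    assign = np.minimum(x.astype(int), n_targets - 1)
    survive = np.ones(n_targets)
    for w, t in enumerate(assign):
        survive[t] *= 1.0 - pkill[w, t]
    return float(np.sum(value * survive))          # expected surviving value

bounds = [(0, n_targets)] * n_weapons
result = differential_evolution(surviving_value, bounds, seed=5, tol=1e-6)
print(np.minimum(result.x.astype(int), n_targets - 1), result.fun)
```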
ERIC Educational Resources Information Center
Natriello, Gary; And Others
By studying the process by which disadvantaged and low-achieving high school students are assigned to classes and special programs, how and why disadvantaged students are placed in inappropriate programs can be understood. Reasons exist to question the assumption that students are assigned to programs rationally on the basis of information about…
Game theory and traffic assignment.
DOT National Transportation Integrated Search
2013-09-01
Traffic assignment is used to determine the number of users on roadway links in a network. While this problem has been widely studied in transportation literature, its use of the concept of equilibrium has attracted considerable interest in the f...
Task Assignment Heuristics for Parallel and Distributed CFD Applications
NASA Technical Reports Server (NTRS)
Lopez-Benitez, Noe; Djomehri, M. Jahed; Biswas, Rupak
2003-01-01
This paper proposes a task graph (TG) model to represent a single discrete step of multi-block overset grid computational fluid dynamics (CFD) applications. The TG model is then used not only to balance the computational workload across the overset grids but also to reduce inter-grid communication costs. We have developed a set of task assignment heuristics based on the constraints inherent in this class of CFD problems. Two basic assignments, the smallest task first (STF) and the largest task first (LTF), are first presented; they are then systematically extended to account for inter-grid communication costs. To predict the performance of the proposed task assignment heuristics, extensive performance evaluations are conducted on a synthetic TG with tasks defined in terms of the number of grid points in predetermined overlapping grids. A TG derived from a realistic problem with eight million grid points is also used as a test case.
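The largest-task-first idea reduces to a classic greedy loop: sort blocks by work and repeatedly hand the largest remaining block to the least-loaded processor. The sketch below uses made-up grid-point counts and ignores the paper's communication-cost refinements.

```python
# Minimal sketch of the largest-task-first (LTF) heuristic: sort grid blocks
# by work (grid-point counts, made up here) and repeatedly give the largest
# remaining block to the least-loaded processor. The paper's
# communication-cost refinements are not modeled.
import heapq

def largest_task_first(task_sizes, n_procs):
    loads = [(0, p) for p in range(n_procs)]       # (current load, processor)
    heapq.heapify(loads)
    assignment = {p: [] for p in range(n_procs)}
    for task, size in sorted(enumerate(task_sizes),
                             key=lambda kv: kv[1], reverse=True):
        load, p = heapq.heappop(loads)             # least-loaded processor
        assignment[p].append(task)
        heapq.heappush(loads, (load + size, p))
    return assignment

blocks = [420000, 250000, 610000, 90000, 330000, 180000]   # grid points per block
print(largest_task_first(blocks, 3))
```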
NASA Technical Reports Server (NTRS)
Banks, H. T.; Silcox, R. J.; Keeling, S. L.; Wang, C.
1989-01-01
A unified treatment of the linear quadratic tracking (LQT) problem, in which a control system's dynamics are modeled by a linear evolution equation with a nonhomogeneous component that is linearly dependent on the control function u, is presented; the treatment proceeds from the theoretical formulation to a numerical approximation framework. Attention is given to two categories of LQT problems in an infinite time interval: the finite energy and the finite average energy. The behavior of the optimal solution for finite time-interval problems as the length of the interval tends to infinity is discussed. Also presented are the formulations and properties of LQT problems in a finite time interval.
Eigenvalue assignment by minimal state-feedback gain in LTI multivariable systems
NASA Astrophysics Data System (ADS)
Ataei, Mohammad; Enshaee, Ali
2011-12-01
In this article, an improved method for eigenvalue assignment via state feedback in linear time-invariant multivariable systems is proposed. This method is based on elementary similarity operations and mainly involves the utilisation of vector companion forms, and thus is very simple and easy to implement on a digital computer. In addition to controllable systems, the proposed method can be applied to stabilisable systems and also to systems with linearly dependent inputs. Moreover, two types of state-feedback gain matrices can be achieved by this method: (1) a numerical one, which is unique, and (2) a parametric one, in which the parameters are determined in order to achieve a gain matrix with minimum Frobenius norm. Numerical examples are presented to demonstrate the advantages of the proposed method.
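For orientation, the underlying problem (choosing a state-feedback gain that places the closed-loop eigenvalues) can be solved for a small example with SciPy's place_poles; this illustrates the task but not the article's companion-form construction or its minimum-Frobenius-norm parametric gain. The system matrices are arbitrary.

```python
# Standard eigenvalue assignment by state feedback for a small LTI system
# using SciPy's place_poles; this shows the problem being solved, not the
# article's method. A, B and the desired poles are arbitrary examples.
import numpy as np
from scipy.signal import place_poles

A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [1.0, -2.0, 3.0]])
B = np.array([[0.0, 0.0],
              [1.0, 0.0],
              [0.0, 1.0]])
desired = [-1.0, -2.0, -3.0]

fb = place_poles(A, B, desired)
K = fb.gain_matrix
print("K =", K)
print("closed-loop eigenvalues:", np.linalg.eigvals(A - B @ K))
```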
Streaming fragment assignment for real-time analysis of sequencing experiments
Roberts, Adam; Pachter, Lior
2013-01-01
We present eXpress, a software package for highly efficient probabilistic assignment of ambiguously mapping sequenced fragments. eXpress uses a streaming algorithm with linear run time and constant memory use. It can determine abundances of sequenced molecules in real time, and can be applied to ChIP-seq, metagenomics and other large-scale sequencing data. We demonstrate its use on RNA-seq data, showing greater efficiency than other quantification methods. PMID:23160280
Xu, Andrew Wei
2010-09-01
In genome rearrangement, given a set of genomes G and a distance measure d, the median problem asks for another genome q that minimizes the total distance Σg∈G d(q, g). This is a key problem in genome-rearrangement-based phylogenetic analysis. Although this problem is known to be NP-hard, we have shown in a previous article, on circular genomes and under the DCJ distance measure, that a family of patterns in the given genomes--represented by adequate subgraphs--allows us to rapidly find exact solutions to the median problem in a decomposition approach. In this article, we extend this result to the case of linear multichromosomal genomes, in order to solve more interesting problems on eukaryotic nuclear genomes. A multi-way capping problem in the linear multichromosomal case imposes an extra computational challenge on top of the difficulty in the circular case, and this difficulty has been underestimated in our previous study and is addressed in this article. We represent the median problem by the capped multiple breakpoint graph, extend the adequate subgraphs into the capped adequate subgraphs, and prove optimality-preserving decomposition theorems, which give us the tools to solve the median problem and the multi-way capping optimization problem together. We also develop an exact algorithm ASMedian-linear, which iteratively detects instances of (capped) adequate subgraphs and decomposes problems into subproblems. Tested on simulated data, ASMedian-linear can rapidly solve most problems with up to several thousand genes, and it can also provide optimal or near-optimal solutions to the median problem under the reversal/HP distance measures. ASMedian-linear is available at http://sites.google.com/site/andrewweixu .
ERIC Educational Resources Information Center
Kar, Tugrul
2016-01-01
This study examined prospective middle school mathematics teachers' problem-posing skills by investigating their ability to associate linear graphs with daily life situations. Prospective teachers were given linear graphs and asked to pose problems that could potentially be represented by the graphs. Their answers were analyzed in two stages. In…
ERIC Educational Resources Information Center
Rocconi, Louis M.
2011-01-01
Hierarchical linear models (HLM) solve the problems associated with the unit of analysis problem such as misestimated standard errors, heterogeneity of regression and aggregation bias by modeling all levels of interest simultaneously. Hierarchical linear modeling resolves the problem of misestimated standard errors by incorporating a unique random…
Quadratic constrained mixed discrete optimization with an adiabatic quantum optimizer
NASA Astrophysics Data System (ADS)
Chandra, Rishabh; Jacobson, N. Tobias; Moussa, Jonathan E.; Frankel, Steven H.; Kais, Sabre
2014-07-01
We extend the family of problems that may be implemented on an adiabatic quantum optimizer (AQO). When a quadratic optimization problem has at least one set of discrete controls and the constraints are linear, we call this a quadratic constrained mixed discrete optimization (QCMDO) problem. QCMDO problems are NP-hard, and no efficient classical algorithm for their solution is known. Included in the class of QCMDO problems are combinatorial optimization problems constrained by a linear partial differential equation (PDE) or system of linear PDEs. An essential complication commonly encountered in solving this type of problem is that the linear constraint may introduce many intermediate continuous variables into the optimization while the computational cost grows exponentially with problem size. We resolve this difficulty by developing a constructive mapping from QCMDO to quadratic unconstrained binary optimization (QUBO) such that the size of the QUBO problem depends only on the number of discrete control variables. With a suitable embedding, taking into account the physical constraints of the realizable coupling graph, the resulting QUBO problem can be implemented on an existing AQO. The mapping itself is efficient, scaling cubically with the number of continuous variables in the general case and linearly in the PDE case if an efficient preconditioner is available.
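A toy illustration of folding a linear constraint into a QUBO by a quadratic penalty, which is the flavor of reformulation involved (the paper's QCMDO-to-QUBO construction additionally eliminates the continuous PDE variables, which this sketch does not attempt). The cost matrix and the "exactly two variables on" constraint are made up.

```python
# Toy illustration of folding a linear equality constraint into a QUBO via a
# quadratic penalty. The cost matrix Q and the constraint are made up; the
# paper's construction also handles continuous PDE variables, omitted here.
import numpy as np
from itertools import product

n = 4
Q = np.array([[ 1.0, -2.0,  0.5,  0.0],
              [-2.0,  1.0,  0.0,  0.3],
              [ 0.5,  0.0, -1.0, -0.7],
              [ 0.0,  0.3, -0.7,  2.0]])
# Constraint: exactly two variables on, enforced by penalty * (sum_i x_i - 2)^2.
# For binary x, (sum x - 2)^2 = x^T (11^T - 4 I) x + 4, since x_i^2 = x_i.
penalty = 10.0
ones = np.ones(n)
Q_pen = Q + penalty * (np.outer(ones, ones) - 4.0 * np.diag(ones))
offset = penalty * 4.0                     # constant term from the expansion

best = min(product([0, 1], repeat=n),
           key=lambda x: np.array(x) @ Q_pen @ np.array(x) + offset)
print("best bitstring:", best, "ones:", sum(best))   # respects the constraint
```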
Overcoming an obstacle in expanding a UMLS semantic type extent.
Chen, Yan; Gu, Huanying; Perl, Yehoshua; Geller, James
2012-02-01
This paper strives to overcome a major problem encountered by a previous expansion methodology for discovering concepts highly likely to be missing a specific semantic type assignment in the UMLS. This methodology is the basis for an algorithm that presents the discovered concepts to a human auditor for review and possible correction. We analyzed the problem of the previous expansion methodology and discovered that it was due to an obstacle constituted by one or more concepts assigned the UMLS Semantic Network semantic type Classification. A new methodology was designed that bypasses such an obstacle without a combinatorial explosion in the number of concepts presented to the human auditor for review. The new expansion methodology with obstacle avoidance was tested with the semantic type Experimental Model of Disease and found over 500 concepts missed by the previous methodology that are in need of this semantic type assignment. Furthermore, other semantic types suffering from the same major problem were discovered, indicating that the methodology is of more general applicability. The algorithmic discovery of concepts that are likely missing a semantic type assignment is possible even in the face of obstacles, without an explosion in the number of processed concepts. Copyright © 2011 Elsevier Inc. All rights reserved.
Blum, Nancee; St John, Don; Pfohl, Bruce; Stuart, Scott; McCormick, Brett; Allen, Jeff; Arndt, Stephan; Black, Donald W
2008-04-01
Systems Training for Emotional Predictability and Problem Solving (STEPPS) is a 20-week manual-based group treatment program for outpatients with borderline personality disorder that combines cognitive behavioral elements and skills training with a systems component. The authors compared STEPPS plus treatment as usual with treatment as usual alone in a randomized controlled trial. Subjects with borderline personality disorder were randomly assigned to STEPPS plus treatment as usual or treatment as usual alone. Total score on the Zanarini Rating Scale for Borderline Personality Disorder was the primary outcome measure. Secondary outcomes included measures of global functioning, depression, impulsivity, and social functioning; suicide attempts and self-harm acts; and crisis utilization. Subjects were followed 1 year posttreatment. A linear mixed-effects model was used in the analysis. Data pertaining to 124 subjects (STEPPS plus treatment as usual [N=65]; treatment as usual alone [N=59]) were analyzed. Subjects assigned to STEPPS plus treatment as usual experienced greater improvement in the Zanarini Rating Scale for Borderline Personality Disorder total score and subscales assessing affective, cognitive, interpersonal, and impulsive domains. STEPPS plus treatment as usual also led to greater improvements in impulsivity, negative affectivity, mood, and global functioning. These differences yielded moderate to large effect sizes. There were no differences between groups for suicide attempts, self-harm acts, or hospitalizations. Most gains attributed to STEPPS were maintained during follow-up. Fewer STEPPS plus treatment as usual subjects had emergency department visits during treatment and follow-up. The discontinuation rate was high in both groups. STEPPS, an adjunctive group treatment, can deliver clinically meaningful improvements in borderline personality disorder-related symptoms and behaviors, enhance global functioning, and relieve depression.
NASA Astrophysics Data System (ADS)
Coco, Armando; Russo, Giovanni
2018-05-01
In this paper we propose a second-order accurate numerical method to solve elliptic problems with discontinuous coefficients (with general non-homogeneous jumps in the solution and its gradient) in 2D and 3D. The method consists of a finite-difference scheme on a Cartesian grid in which complex geometries (boundaries and interfaces) are embedded, and it is second-order accurate in both the solution and its gradient. In order to avoid the drop in accuracy caused by the discontinuity of the coefficients across the interface, two numerical values are assigned on grid points that are close to the interface: a real value, which represents the numerical solution on that grid point, and a ghost value, which represents the numerical solution extrapolated from the other side of the interface, obtained by enforcing the assigned non-homogeneous jump conditions on the solution and its flux. The method is also extended to the case of matrix coefficients. The linear system arising from the discretization is solved by an efficient multigrid approach. Unlike the 1D case, grid points are not necessarily aligned with the normal derivative, and therefore suitable stencils must be chosen to discretize the interface conditions in order to achieve second-order accuracy in the solution and its gradient. A proper treatment of the interface conditions allows the multigrid to attain the optimal convergence factor, comparable with the one obtained by Local Fourier Analysis for rectangular domains. The method is robust enough to handle large jumps in the coefficients: the order of accuracy, the monotonicity of the errors, and a good convergence factor are maintained by the scheme.
Eigensolution of finite element problems in a completely connected parallel architecture
NASA Technical Reports Server (NTRS)
Akl, F.; Morel, M.
1989-01-01
A parallel algorithm is presented for the solution of the generalized eigenproblem in linear elastic finite element analysis. The algorithm is based on a completely connected parallel architecture in which each processor is allowed to communicate with all other processors. The algorithm is successfully implemented on a tightly coupled MIMD parallel processor. A finite element model is divided into m domains each of which is assumed to process n elements. Each domain is then assigned to a processor or to a logical processor (task) if the number of domains exceeds the number of physical processors. The effect of the number of domains, the number of degrees-of-freedom located along the global fronts, and the dimension of the subspace on the performance of the algorithm is investigated. For a 64-element rectangular plate, speed-ups of 1.86, 3.13, 3.18, and 3.61 are achieved on two, four, six, and eight processors, respectively.
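For reference, those speed-ups correspond to the following parallel efficiencies (speed-up divided by processor count); this is just arithmetic on the reported figures, not additional data from the paper.

# Parallel efficiency = speed-up / number of processors (64-element plate case).
for p, s in [(2, 1.86), (4, 3.13), (6, 3.18), (8, 3.61)]:
    print(f"{p} processors: speed-up {s:.2f}, efficiency {s / p:.0%}")
# Roughly 93%, 78%, 53%, and 45% -- efficiency falls as the processor count grows.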
Self-adjoint elliptic operators with boundary conditions on not closed hypersurfaces
NASA Astrophysics Data System (ADS)
Mantile, Andrea; Posilicano, Andrea; Sini, Mourad
2016-07-01
The theory of self-adjoint extensions of symmetric operators is used to construct self-adjoint realizations of a second-order elliptic differential operator on Rn with linear boundary conditions on (a relatively open part of) a compact hypersurface. Our approach allows us to obtain Kreĭn-like resolvent formulae where the reference operator coincides with the "free" operator with domain H2 (Rn); this provides a useful tool for the scattering problem from a hypersurface. Concrete examples of this construction are developed in connection with the standard boundary conditions, Dirichlet, Neumann, Robin, δ and δ‧-type, assigned either on a (n - 1) dimensional compact boundary Γ = ∂ Ω or on a relatively open part Σ ⊂ Γ. Schatten-von Neumann estimates for the difference of the powers of resolvents of the free and the perturbed operators are also proven; these give existence and completeness of the wave operators of the associated scattering systems.
On the optimal use of a slow server in two-stage queueing systems
NASA Astrophysics Data System (ADS)
Papachristos, Ioannis; Pandelis, Dimitrios G.
2017-07-01
We consider two-stage tandem queueing systems with a dedicated server in each queue and a slower flexible server that can attend both queues. We assume Poisson arrivals and exponential service times, and linear holding costs for jobs present in the system. We study the optimal dynamic assignment of servers to jobs, assuming that two servers cannot collaborate to work on the same job and that preemptions are not allowed. We formulate the problem as a Markov decision process and derive properties of the optimal allocation for the dedicated (fast) servers. Specifically, we show that the downstream dedicated server should not idle, and the same holds for the upstream server when holding costs are larger upstream. The optimal allocation of the slow server is investigated through extensive numerical experiments that lead to conjectures on the structure of the optimal policy.
Smart-Grid Backbone Network Real-Time Delay Reduction via Integer Programming.
Pagadrai, Sasikanth; Yilmaz, Muhittin; Valluri, Pratyush
2016-08-01
This research investigates an optimal delay-based virtual topology design using integer linear programming (ILP), which is applied to current backbone networks such as smart-grid real-time communication systems. A network traffic matrix is applied and the corresponding virtual topology problem is solved using ILP formulations that include a network delay-dependent objective function and lightpath routing, wavelength assignment, wavelength continuity, flow routing, and traffic loss constraints. The proposed optimization approach provides an efficient deterministic integration of intelligent sensing and decision making, and network learning features for superior smart grid operations by adaptively responding to time-varying network traffic data as well as operational constraints to maintain optimal virtual topologies. A representative optical backbone network has been utilized to demonstrate the proposed optimization framework, whose simulation results indicate that superior smart-grid network performance can be achieved using commercial networks and integer programming.
Increases in Tolerance within Naturalistic, Self-Help Recovery Homes
Olson, Brad D.; Jason, Leonard A.; Davidson, Michelle; Ferrari, Joseph R.
2011-01-01
Changes in tolerance toward others (i.e., a universality/diversity measure) among 150 participants (93 women, 57 men) discharged from inpatient treatment centers, randomly assigned to either a self-help, communal living setting or usual after-care, and interviewed every 6 months for a 24-month period were explored. Hierarchical Linear Modeling examined the effect of condition (Therapeutic Communal Living versus Usual Care) and other moderator variables on wave trajectories of tolerance attitudes (i.e., universality/diversity scores). Over time, residents of the communal living recovery model showed significantly greater tolerance trajectories than usual care participants. Results supported the claim that residents of communal living settings unite around superordinate goals of overcoming substance abuse problems. In addition, older residents who had lived in a house for 6 or more months experienced greater increases in tolerance than younger residents. Theories regarding these differential increases in tolerance, such as social contact theory and transtheoretical processes of change, are discussed. PMID:19838787
Linear solver performance in elastoplastic problem solution on GPU cluster
NASA Astrophysics Data System (ADS)
Khalevitsky, Yu. V.; Konovalov, A. V.; Burmasheva, N. V.; Partin, A. S.
2017-12-01
Applying the finite element method to severe plastic deformation problems involves solving linear equation systems. While the solution procedure is relatively hard to parallelize and computationally intensive by itself, a long series of large-scale systems needs to be solved for each problem. When dealing with fine computational meshes, such as in the simulations of three-dimensional metal matrix composite microvolume deformation, tens to hundreds of hours may be needed to complete the whole solution procedure, even using modern supercomputers. In general, one of the preconditioned Krylov subspace methods is used in a linear solver for such problems. The convergence of these methods depends strongly on the spectrum of the problem's stiffness matrix. In order to choose the appropriate method, a series of computational experiments is used. Different methods may be preferable for different computational systems for the same problem. In this paper we present experimental data obtained by solving linear equation systems from an elastoplastic problem on a GPU cluster. The data can be used to substantiate the choice of the appropriate method for a linear solver to use in severe plastic deformation simulations.
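The kind of comparison described above can be illustrated on a CPU with SciPy's sparse Krylov solvers (a small stand-in system, not the authors' GPU code or their elastoplastic matrices):

import time
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Small symmetric positive-definite test matrix (2D Laplacian) standing in
# for an elastoplastic stiffness matrix.
n = 100
lap1d = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n))
A = (sp.kron(sp.eye(n), lap1d) + sp.kron(lap1d, sp.eye(n))).tocsc()
b = np.ones(A.shape[0])

# Incomplete-LU preconditioner, a common general-purpose choice.
ilu = spla.spilu(A, drop_tol=1e-4)
M = spla.LinearOperator(A.shape, ilu.solve)

# Time two Krylov methods; the better choice depends on the operator spectrum.
for name, solver in [("CG", spla.cg), ("BiCGSTAB", spla.bicgstab)]:
    t0 = time.perf_counter()
    x, info = solver(A, b, M=M)
    print(f"{name}: info={info}, residual={np.linalg.norm(A @ x - b):.2e}, "
          f"time={time.perf_counter() - t0:.3f}s")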
Formative feedback and scaffolding for developing complex problem solving and modelling outcomes
NASA Astrophysics Data System (ADS)
Frank, Brian; Simper, Natalie; Kaupp, James
2018-07-01
This paper discusses the use and impact of formative feedback and scaffolding to develop outcomes for complex problem solving in a required first-year course in engineering design and practice at a medium-sized research-intensive Canadian university. In 2010, the course began to use team-based, complex, open-ended contextualised problems to develop problem solving, communications, teamwork, modelling, and professional skills. Since then, formative feedback has been incorporated into: task and process-level feedback on scaffolded tasks in-class, formative assignments, and post-assignment review. Development in complex problem solving and modelling has been assessed through analysis of responses from student surveys, direct criterion-referenced assessment of course outcomes from 2013 to 2015, and an external longitudinal study. The findings suggest that students are improving in outcomes related to complex problem solving over the duration of the course. Most notably, the addition of new feedback and scaffolding coincided with improved student performance.
Child-Level Predictors of Responsiveness to Evidence-Based Mathematics Intervention.
Powell, Sarah R; Cirino, Paul T; Malone, Amelia S
2017-07-01
We identified child-level predictors of responsiveness to 2 types of mathematics (calculation and word-problem) intervention among 2nd-grade children with mathematics difficulty. Participants were 250 children in 107 classrooms in 23 schools pretested on mathematics and general cognitive measures and posttested on mathematics measures. Classrooms were randomly assigned to calculation intervention, word-problem intervention, or business-as-usual control. Intervention lasted 17 weeks. Path analyses indicated that scores on working memory and language comprehension assessments moderated responsiveness to calculation intervention. No moderators were identified for responsiveness to word-problem intervention. Across both intervention groups and the control group, attentive behavior predicted both outcomes. Initial calculation skill predicted the calculation outcome, and initial language comprehension predicted word-problem outcomes. These results indicate that screening for calculation intervention should include a focus on working memory, language comprehension, attentive behavior, and calculations. Screening for word-problem intervention should focus on attentive behavior and word problems.
29 CFR 785.37 - Home to work on special one-day assignment in another city.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 29 Labor 3 2011-07-01 2011-07-01 false Home to work on special one-day assignment in another city... another city. A problem arises when an employee who regularly works at a fixed location in one city is given a special 1-day work assignment in another city. For example, an employee who works in Washington...
29 CFR 785.37 - Home to work on special one-day assignment in another city.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 29 Labor 3 2010-07-01 2010-07-01 false Home to work on special one-day assignment in another city... another city. A problem arises when an employee who regularly works at a fixed location in one city is given a special 1-day work assignment in another city. For example, an employee who works in Washington...
Multiscale Methods, Parallel Computation, and Neural Networks for Real-Time Computer Vision.
NASA Astrophysics Data System (ADS)
Battiti, Roberto
1990-01-01
This thesis presents new algorithms for low and intermediate level computer vision. The guiding ideas in the presented approach are those of hierarchical and adaptive processing, concurrent computation, and supervised learning. Processing of the visual data at different resolutions is used not only to reduce the amount of computation necessary to reach the fixed point, but also to produce a more accurate estimation of the desired parameters. The presented adaptive multiple scale technique is applied to the problem of motion field estimation. Different parts of the image are analyzed at a resolution that is chosen in order to minimize the error in the coefficients of the differential equations to be solved. Tests with video-acquired images show that velocity estimation is more accurate over a wide range of motion with respect to the homogeneous scheme. In some cases introduction of explicit discontinuities coupled to the continuous variables can be used to avoid propagation of visual information from areas corresponding to objects with different physical and/or kinematic properties. The human visual system uses concurrent computation in order to process the vast amount of visual data in "real-time." Although with different technological constraints, parallel computation can be used efficiently for computer vision. All the presented algorithms have been implemented on medium grain distributed memory multicomputers with a speed-up approximately proportional to the number of processors used. A simple two-dimensional domain decomposition assigns regions of the multiresolution pyramid to the different processors. The inter-processor communication needed during the solution process is proportional to the linear dimension of the assigned domain, so that efficiency is close to 100% if a large region is assigned to each processor. Finally, learning algorithms are shown to be a viable technique to engineer computer vision systems for different applications starting from multiple-purpose modules. In the last part of the thesis a well known optimization method (the Broyden-Fletcher-Goldfarb-Shanno memoryless quasi-Newton method) is applied to simple classification problems and shown to be superior to the "error back-propagation" algorithm for numerical stability, automatic selection of parameters, and convergence properties.
NASA Astrophysics Data System (ADS)
Han, Xiaobao; Li, Huacong; Jia, Qiusheng
2017-12-01
For dynamic decoupling of polynomial linear parameter varying (PLPV) systems, a robust dominance pre-compensator design method is given. The parameterized pre-compensator design problem is converted into an optimal problem constrained with parameterized linear matrix inequalities (PLMI) by using the concept of a parameterized Lyapunov function (PLF). To solve the PLMI-constrained optimal problem, the pre-compensator design problem is reduced to a normal convex optimization problem with normal linear matrix inequality (LMI) constraints on a newly constructed convex polyhedron. Moreover, a parameter-scheduling pre-compensator is achieved, which satisfies both the robustness and decoupling performance requirements. Finally, the feasibility and validity of the robust diagonal dominance pre-compensator design method are verified by numerical simulation on a turbofan engine PLPV model.
Spectrum-to-Spectrum Searching Using a Proteome-wide Spectral Library*
Yen, Chia-Yu; Houel, Stephane; Ahn, Natalie G.; Old, William M.
2011-01-01
The unambiguous assignment of tandem mass spectra (MS/MS) to peptide sequences remains a key unsolved problem in proteomics. Spectral library search strategies have emerged as a promising alternative for peptide identification, in which MS/MS spectra are directly compared against a reference library of confidently assigned spectra. Two problems relate to library size. First, reference spectral libraries are limited to rediscovery of previously identified peptides and are not applicable to new peptides, because of their incomplete coverage of the human proteome. Second, problems arise when searching a spectral library the size of the entire human proteome. We observed that traditional dot product scoring methods do not scale well with spectral library size, showing a reduction in sensitivity when library size is increased. We show that this problem can be addressed by optimizing scoring metrics for spectrum-to-spectrum searches with large spectral libraries. MS/MS spectra for the 1.3 million predicted tryptic peptides in the human proteome are simulated using a kinetic fragmentation model (MassAnalyzer version 2.1) to create a proteome-wide simulated spectral library. Searches of the simulated library increase MS/MS assignments by 24% compared with Mascot, when using probabilistic and rank-based scoring methods. The proteome-wide coverage of the simulated library leads to an 11% increase in unique peptide assignments, compared with parallel searches of a reference spectral library. Further improvement is attained when reference spectra and simulated spectra are combined into a hybrid spectral library, yielding 52% increased MS/MS assignments compared with Mascot searches. Our study demonstrates the advantages of using probabilistic and rank-based scores to improve performance of spectrum-to-spectrum search strategies. PMID:21532008
Students’ difficulties in solving linear equation problems
NASA Astrophysics Data System (ADS)
Wati, S.; Fitriana, L.; Mardiyana
2018-03-01
A linear equation is an algebra topic that appears from junior high school to university. It is a very important topic for students in order to learn more advanced mathematics. Therefore, linear equation material is essential to be mastered. However, the result of the 2016 national examination in Indonesia showed that students' achievement in solving linear equation problems was low. This fact became the background to investigate students' difficulties in solving linear equation problems. This study used a qualitative descriptive method. An individual written test on linear equation tasks was administered, followed by interviews. Twenty-one sample students of grade VIII of SMPIT Insan Kamil Karanganyar did the written test, and 6 of them were interviewed afterward. The results showed that students with high mathematics achievement do not have difficulties, students with medium mathematics achievement have factual difficulties, and students with low mathematics achievement have factual, conceptual, operational, and principle difficulties. Based on these results, there is a need for a meaningful teaching strategy to help students overcome difficulties in solving linear equation problems.
NASA Astrophysics Data System (ADS)
Eyono Obono, S. D.; Basak, Sujit Kumar
2011-12-01
The general formulation of the assignment problem consists in the optimal allocation of a given set of tasks to a workforce. This problem is covered by the existing literature for different domains such as distributed databases, distributed systems, transportation, packet radio networks, IT outsourcing, and teaching allocation. This paper presents a new version of the assignment problem for the allocation of academic tasks to staff members in departments with long leave opportunities. It describes a workload allocation scheme and its algorithm for the allocation of an equitable number of tasks in academic departments where long leaves are necessary.
Assessing non-uniqueness: An algebraic approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vasco, Don W.
Geophysical inverse problems are endowed with a rich mathematical structure. When discretized, most differential and integral equations of interest are algebraic (polynomial) in form. Techniques from algebraic geometry and computational algebra provide a means to address questions of existence and uniqueness for both linear and non-linear inverse problems. In a sense, the methods extend ideas which have proven fruitful in treating linear inverse problems.
NASA Technical Reports Server (NTRS)
Ferencz, Donald C.; Viterna, Larry A.
1991-01-01
ALPS is a computer program which can be used to solve general linear programming (optimization) problems. ALPS was designed for those who have minimal linear programming (LP) knowledge and features a menu-driven scheme to guide the user through the process of creating and solving LP formulations. Once created, the problems can be edited and stored in standard DOS ASCII files to provide portability to various word processors or even other linear programming packages. Unlike many math-oriented LP solvers, ALPS contains an LP parser that reads through the LP formulation and reports several types of errors to the user. ALPS provides a large amount of solution data which is often useful in problem solving. In addition to pure linear programs, ALPS can solve integer, mixed integer, and binary type problems. Pure linear programs are solved with the revised simplex method. Integer or mixed integer programs are solved initially with the revised simplex method and then completed using the branch-and-bound technique. Binary programs are solved with the method of implicit enumeration. This manual describes how to use ALPS to create, edit, and solve linear programming problems. Instructions for installing ALPS on a PC-compatible computer are included in the appendices along with a general introduction to linear programming. A programmer's guide is also included for assistance in modifying and maintaining the program.
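ALPS itself is a menu-driven DOS program, but the problem classes it handles can be illustrated with modern library calls; the tiny instance below is hypothetical and uses SciPy rather than ALPS's own revised simplex and branch-and-bound code.

import numpy as np
from scipy.optimize import Bounds, LinearConstraint, linprog, milp

# Hypothetical problem: maximize 3x + 2y subject to x + y <= 4, x + 3y <= 6, x, y >= 0.
c = np.array([-3.0, -2.0])                # the solvers minimize, so negate
A_ub = np.array([[1.0, 1.0], [1.0, 3.0]])
b_ub = np.array([4.0, 6.0])

# Pure linear program (the case ALPS handles with the revised simplex method).
lp = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print("LP  solution:", lp.x, "objective:", -lp.fun)

# Same problem with both variables integer (the case ALPS hands to branch-and-bound).
ip = milp(c, constraints=LinearConstraint(A_ub, ub=b_ub),
          integrality=np.ones(2), bounds=Bounds(0, np.inf))
print("MILP solution:", ip.x, "objective:", -ip.fun)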
Bruhn, Peter; Geyer-Schulz, Andreas
2002-01-01
In this paper, we introduce genetic programming over context-free languages with linear constraints for combinatorial optimization, apply this method to several variants of the multidimensional knapsack problem, and discuss its performance relative to Michalewicz's genetic algorithm with penalty functions. With respect to Michalewicz's approach, we demonstrate that genetic programming over context-free languages with linear constraints improves convergence. A final result is that genetic programming over context-free languages with linear constraints is ideally suited to modeling complementarities between items in a knapsack problem: The more complementarities in the problem, the stronger the performance in comparison to its competitors.
Encouraging Sixth-Grade Students' Problem-Solving Performance by Teaching through Problem Solving
ERIC Educational Resources Information Center
Bostic, Jonathan D.; Pape, Stephen J.; Jacobbe, Tim
2016-01-01
This teaching experiment provided students with continuous engagement in a problem-solving based instructional approach during one mathematics unit. Three sections of sixth-grade mathematics were sampled from a school in Florida, U.S.A. and one section was randomly assigned to experience teaching through problem solving. Students' problem-solving…
Baran, Richard; Northen, Trent R
2013-10-15
Untargeted metabolite profiling using liquid chromatography and mass spectrometry coupled via electrospray ionization is a powerful tool for the discovery of novel natural products, metabolic capabilities, and biomarkers. However, the elucidation of the identities of uncharacterized metabolites from spectral features remains challenging. A critical step in the metabolite identification workflow is the assignment of redundant spectral features (adducts, fragments, multimers) and calculation of the underlying chemical formula. Inspection of the data by experts using computational tools solving partial problems (e.g., chemical formula calculation for individual ions) can be performed to disambiguate alternative solutions and provide reliable results. However, manual curation is tedious and not readily scalable or standardized. Here we describe an automated procedure for robust automated mass spectra interpretation and chemical formula calculation using mixed integer linear programming optimization (RAMSI). Chemical rules among related ions are expressed as linear constraints, and both the spectra interpretation and the chemical formula calculation are performed in a single optimization step. This approach is unbiased in that it does not require predefined sets of neutral losses, and positive- and negative-polarity spectra can be combined in a single optimization. The procedure was evaluated with 30 experimental mass spectra and was found to effectively identify the protonated or deprotonated molecule ([M + H]+ or [M - H]-) while being robust to the presence of background ions. RAMSI provides a much-needed standardized tool for interpreting ions for subsequent identification in untargeted metabolomics workflows.
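The single-ion subproblem mentioned above (chemical formula calculation for one ion) can be sketched by brute-force enumeration over CHNO compositions; this is a naive stand-in, not the RAMSI mixed-integer formulation, and the m/z value and atom limits below are illustrative.

from itertools import product

# Monoisotopic masses (Da) and the proton mass for a [M + H]+ adduct.
MASS = {"C": 12.0, "H": 1.0078250319, "N": 14.0030740052, "O": 15.9949146221}
PROTON = 1.007276

def formula_candidates(mz, tol_ppm=5.0, max_atoms=(40, 80, 10, 20)):
    """Enumerate CHNO formulas whose [M + H]+ m/z matches within a ppm tolerance."""
    hits = []
    for c, h, n, o in product(*(range(m + 1) for m in max_atoms)):
        mass = c * MASS["C"] + h * MASS["H"] + n * MASS["N"] + o * MASS["O"] + PROTON
        if abs(mass - mz) / mz * 1e6 <= tol_ppm:
            hits.append((f"C{c}H{h}N{n}O{o}", round(mass, 5)))
    return hits

# A glucose-like ion is observed near m/z 181.0707; C6H12O6 should be among the hits.
print(formula_candidates(181.0707))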
Preference in Random Assignment: Implications for the Interpretation of Randomized Trials
Gold, Paul B.; Hargreaves, William A.; Aronson, Elliot; Bickman, Leonard; Barreira, Paul J.; Jones, Danson R.; Rodican, Charles F.; Fisher, William H.
2009-01-01
Random assignment to a preferred experimental condition can increase service engagement and enhance outcomes, while assignment to a less-preferred condition can discourage service receipt and limit outcome attainment. We examined randomized trials for one prominent psychiatric rehabilitation intervention, supported employment, to gauge how often assignment preference might have complicated the interpretation of findings. Condition descriptions, and greater early attrition from services-as-usual comparison conditions, suggest that many study enrollees favored assignment to new rapid-job-placement supported employment, but no study took this possibility into account. Reviews of trials in other service fields are needed to determine whether this design problem is widespread. PMID:19434489
Assigning Oxidation States to Some Metal Dioxygen Complexes of Biological Interest.
ERIC Educational Resources Information Center
Summerville, David A.; And Others
1979-01-01
The bonding of dioxygen in metal-dioxygen complexes is discussed, paying particular attention to the problems encountered in assigning conventional oxidation numbers to both the metal center and coordinated dioxygen. Complexes of iron, cobalt, chromium, and manganese are considered. (BB)
ERIC Educational Resources Information Center
Joyce, Aaron W.; Ross, Michael J.; Vander Wal, Jillon S.; Austin, Chammie C.
2009-01-01
The present study examined differences in college students' preferences for processes of change across four kinds of problems: academic, relationship, depression, and anxiety. Two hundred eighteen undergraduates were randomly assigned to complete either an academic problems, relationship problems, depression, or anxiety Processes of Change…
Interleaved Practice Improves Mathematics Learning
ERIC Educational Resources Information Center
Rohrer, Doug; Dedrick, Robert F.; Stershic, Sandra
2015-01-01
A typical mathematics assignment consists primarily of practice problems requiring the strategy introduced in the immediately preceding lesson (e.g., a dozen problems that are solved by using the Pythagorean theorem). This means that students know which strategy is needed to solve each problem before they read the problem. In an alternative…
Leake, S.A.; Lilly, M.R.
1995-01-01
The Fairbanks, Alaska, area has many contaminated sites in a shallow alluvial aquifer. A ground-water flow model is being developed using the MODFLOW finite-difference ground-water flow model program with the River Package. The modeled area is discretized in the horizontal dimensions into 118 rows and 158 columns of approximately 150-meter square cells. The fine grid spacing has the advantage of providing needed detail at the contaminated sites and surface-water features that bound the aquifer. However, the fine spacing of cells adds difficulty to simulating interaction between the aquifer and the large, braided Tanana River. In particular, the assignment of a river head is difficult if cells are much smaller than the river width. This was solved by developing a procedure for interpolating and extrapolating river head using a river distance function. Another problem is that future transient simulations would require excessive numbers of input records using the current version of the River Package. The proposed solution to this problem is to modify the River Package to linearly interpolate river head for time steps within each stress period, thereby reducing the number of stress periods required.
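A minimal sketch of the head-interpolation idea (hypothetical stage observations, not the Fairbanks data): each river cell gets a river-distance coordinate, and its head is interpolated within the observed reach and extrapolated linearly beyond it.

import numpy as np

# Hypothetical observations: distance along the river (m) -> river stage (m).
dist_obs = np.array([0.0, 2000.0, 5000.0, 9000.0])
head_obs = np.array([135.2, 134.6, 133.9, 132.8])

def river_head(distance):
    """Interpolate head along the river; extend the end slopes for extrapolation."""
    d = np.atleast_1d(np.asarray(distance, dtype=float))
    h = np.interp(d, dist_obs, head_obs)            # clamps outside the data range
    lo, hi = d < dist_obs[0], d > dist_obs[-1]
    h[lo] = head_obs[0] + (d[lo] - dist_obs[0]) * (head_obs[1] - head_obs[0]) / (dist_obs[1] - dist_obs[0])
    h[hi] = head_obs[-1] + (d[hi] - dist_obs[-1]) * (head_obs[-1] - head_obs[-2]) / (dist_obs[-1] - dist_obs[-2])
    return h

# River-distance coordinates assigned to 150-meter cells along the river trace.
cell_distances = np.arange(0.0, 10500.0, 150.0)
print(river_head(cell_distances)[:3], river_head(10200.0))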
On the Structure of a Best Possible Crossover Selection Strategy in Genetic Algorithms
NASA Astrophysics Data System (ADS)
Lässig, Jörg; Hoffmann, Karl Heinz
The paper considers the problem of selecting individuals in the current population in genetic algorithms for crossover to find a solution with high fitness for a given optimization problem. Many different schemes have been described in the literature as possible strategies for this task but so far comparisons have been predominantly empirical. It is shown that if one wishes to maximize any linear function of the final state probabilities, e.g. the fitness of the best individual in the final population of the algorithm, then a best probability distribution for selecting an individual in each generation is a rectangular distribution over the individuals sorted in descending sequence by their fitness values. This means uniform probabilities have to be assigned to a group of the best individuals of the population but probabilities equal to zero to individuals with lower fitness, assuming that the probability distribution to choose individuals from the current population can be chosen independently for each iteration and each individual. This result is then generalized also to typical practically applied performance measures, such as maximizing the expected fitness value of the best individual seen in any generation.
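A sketch of the selection rule described above, with the cutoff k treated as a given parameter (the paper is about when and why such a rectangular distribution is optimal, not about this particular code):

import random

def rectangular_select(population, fitness, k):
    """Pick one parent uniformly from the k fittest individuals; the rest get probability zero."""
    ranked = sorted(population, key=fitness, reverse=True)
    return random.choice(ranked[:k])

# Toy usage: individuals are bit strings and fitness counts the ones.
pop = [[random.randint(0, 1) for _ in range(8)] for _ in range(20)]
parent = rectangular_select(pop, fitness=sum, k=5)
print(parent)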
A hybrid quantum-inspired genetic algorithm for multiobjective flow shop scheduling.
Li, Bin-Bin; Wang, Ling
2007-06-01
This paper proposes a hybrid quantum-inspired genetic algorithm (HQGA) for the multiobjective flow shop scheduling problem (FSSP), which is a typical NP-hard combinatorial optimization problem with a strong engineering background. On the one hand, a quantum-inspired GA (QGA) based on Q-bit representation is applied for exploration in the discrete 0-1 hyperspace by using the updating operator of the quantum gate and genetic operators of Q-bits. Moreover, random-key representation is used to convert the Q-bit representation to a job permutation for evaluating the objective values of the schedule solution. On the other hand, a permutation-based GA (PGA) is applied both for performing exploration in the permutation-based scheduling space and for stressing exploitation of good schedule solutions. To evaluate solutions in a multiobjective sense, a randomly weighted linear-sum function is used in the QGA, and a nondominated sorting technique including classification of Pareto fronts and fitness assignment is applied in the PGA with regard to both proximity and diversity of solutions. To maintain the diversity of the population, two trimming techniques for the population are proposed. The proposed HQGA is tested on some multiobjective FSSPs. Simulation results and comparisons based on several performance metrics demonstrate the effectiveness of the proposed HQGA.
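The randomly weighted linear-sum evaluation used in the QGA part can be sketched as follows for two minimization objectives (hypothetical makespan and total flow time values):

import random

def random_weights(n):
    """Draw n random weights normalized to sum to one."""
    w = [random.random() for _ in range(n)]
    s = sum(w)
    return [wi / s for wi in w]

# Hypothetical schedule objectives (both minimized): (makespan, total flow time).
candidates = {"schedule A": (120, 940), "schedule B": (132, 885)}
w = random_weights(2)                    # re-drawn at each evaluation step
scores = {name: sum(wi * oi for wi, oi in zip(w, objs))
          for name, objs in candidates.items()}
print("weights:", [round(wi, 2) for wi in w], "-> preferred:", min(scores, key=scores.get))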
Non-linear analytic and coanalytic problems ( L_p-theory, Clifford analysis, examples)
NASA Astrophysics Data System (ADS)
Dubinskii, Yu A.; Osipenko, A. S.
2000-02-01
Two kinds of new mathematical model of variational type are put forward: non-linear analytic and coanalytic problems. The formulation of these non-linear boundary-value problems is based on a decomposition of the complete scale of Sobolev spaces into the "orthogonal" sum of analytic and coanalytic subspaces. A similar decomposition is considered in the framework of Clifford analysis. Explicit examples are presented.
NASA Astrophysics Data System (ADS)
Xu, Shuo; Ji, Ze; Truong Pham, Duc; Yu, Fan
2011-11-01
The simultaneous mission assignment and home allocation problem for hospital service robots studied here is a Multidimensional Assignment Problem (MAP) with multiple objectives and multiple constraints. A population-based metaheuristic, the Binary Bees Algorithm (BBA), is proposed to optimize this NP-hard problem. Inspired by the foraging mechanism of honeybees, the BBA's most important feature is an explicit functional partitioning between global search and local search for exploration and exploitation, respectively. Its key parts consist of adaptive global search, three-step elitism selection (constraint handling, non-dominated solutions selection, and diversity preservation), and elites-centred local search within a Hamming neighbourhood. Two comparative experiments were conducted to investigate its single-objective optimization, optimization effectiveness (indexed by the S-metric and C-metric) and optimization efficiency (indexed by computational burden and CPU time) in detail. The BBA outperformed its competitors in almost all the quantitative indices. Hence, the overall scheme, and particularly the search-history-adapted global search strategy, was validated.
A computational algorithm for spacecraft control and momentum management
NASA Technical Reports Server (NTRS)
Dzielski, John; Bergmann, Edward; Paradiso, Joseph
1990-01-01
Developments in the area of nonlinear control theory have shown how coordinate changes in the state and input spaces of a dynamical system can be used to transform certain nonlinear differential equations into equivalent linear equations. These techniques are applied to the control of a spacecraft equipped with momentum exchange devices. An optimal control problem is formulated that incorporates a nonlinear spacecraft model. An algorithm is developed for solving the optimization problem using feedback linearization to transform to an equivalent problem involving a linear dynamical constraint and a functional approximation technique to solve for the linear dynamics in terms of the control. The original problem is transformed into an unconstrained nonlinear quadratic program that yields an approximate solution to the original problem. Two examples are presented to illustrate the results.
GCSE Assessment Notes: Six GCSE Assessment Assignments.
ERIC Educational Resources Information Center
Graham, Stephen
1988-01-01
Provided are copy masters, instructions for use, and grading criteria for six problems used as part of the practical assessment for a modular science course. Each problem gives a narrative and a list of materials necessary to complete the problem. (CW)
Solving the Problem of Linear Viscoelasticity for Piecewise-Homogeneous Anisotropic Plates
NASA Astrophysics Data System (ADS)
Kaloerov, S. A.; Koshkin, A. A.
2017-11-01
An approximate method for solving the problem of linear viscoelasticity for thin anisotropic plates subject to transverse bending is proposed. The small-parameter method is used to reduce the problem to a sequence of boundary-value problems in the applied theory of plate bending, which are solved using complex potentials. The general form of the complex potentials in the approximations and the boundary conditions for determining them are obtained. Problems for a plate with elliptic elastic inclusions are solved as an example. The numerical results for a plate with one or two elliptical (circular) inclusions and with linear inclusions are analyzed.
Sequential design of discrete linear quadratic regulators via optimal root-locus techniques
NASA Technical Reports Server (NTRS)
Shieh, Leang S.; Yates, Robert E.; Ganesan, Sekar
1989-01-01
A sequential method employing classical root-locus techniques has been developed in order to determine the quadratic weighting matrices and discrete linear quadratic regulators of multivariable control systems. At each recursive step, an intermediate unity rank state-weighting matrix that contains some invariant eigenvectors of that open-loop matrix is assigned, and an intermediate characteristic equation of the closed-loop system containing the invariant eigenvalues is created.
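For reference, the object being designed (a discrete LQR gain for given quadratic weighting matrices) can be computed directly from the discrete algebraic Riccati equation; the plant and weights below are hypothetical, and this is not the authors' sequential root-locus procedure.

import numpy as np
from scipy.linalg import solve_discrete_are

# Hypothetical discrete-time plant x[k+1] = A x[k] + B u[k] with weights Q, R.
A = np.array([[1.0, 0.1], [0.0, 0.95]])
B = np.array([[0.0], [0.1]])
Q = np.diag([1.0, 0.5])
R = np.array([[0.2]])

# Solve the discrete algebraic Riccati equation and form the state-feedback gain.
P = solve_discrete_are(A, B, Q, R)
K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)

# Closed-loop eigenvalues are the quantities a root-locus-based design shapes.
print("gain K:", K)
print("closed-loop eigenvalues:", np.linalg.eigvals(A - B @ K))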
Real life working shift assignment problem
NASA Astrophysics Data System (ADS)
Sze, San-Nah; Kwek, Yeek-Ling; Tiong, Wei-King; Chiew, Kang-Leng
2017-07-01
This study concerns the working shift assignment in an outlet of Supermarket X in Eastern Mall, Kuching. The working shift assignment needs to be solved at least once every month. The current approval process for working shifts is too troublesome and time-consuming. Furthermore, the management staff cannot get an overview of manpower and the working shift schedule. Thus, the aim of this study is to develop a working shift assignment simulation and propose a working shift assignment solution. The main objective of this study is to fulfill manpower demand at minimum operation cost. Besides, the day-off and meal-break policies should be fulfilled accordingly. A demand-based heuristic is proposed to assign working shifts, and the quality of the solution is evaluated using real data.
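A simplified sketch of a demand-based assignment of this kind (hypothetical shifts, demand figures and staff; the real scheme also has to respect the day-off and meal-break policies): the most under-staffed shift is filled first, always with the least-loaded worker not yet on that shift.

from collections import defaultdict

# Hypothetical daily demand: shift -> number of staff required.
demand = {"morning": 3, "afternoon": 2, "evening": 2}
staff = ["Ana", "Ben", "Cai", "Dee", "Eli"]
shifts_assigned = defaultdict(int)
roster = defaultdict(list)

# Fill the most under-staffed shift first, choosing the least-loaded worker
# who is not already on that shift.
while any(demand[s] > len(roster[s]) for s in demand):
    shift = max(demand, key=lambda s: demand[s] - len(roster[s]))
    worker = min((w for w in staff if w not in roster[shift]),
                 key=lambda w: shifts_assigned[w])
    roster[shift].append(worker)
    shifts_assigned[worker] += 1

print(dict(roster))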
Discriminative Learning of Receptive Fields from Responses to Non-Gaussian Stimulus Ensembles
Meyer, Arne F.; Diepenbrock, Jan-Philipp; Happel, Max F. K.; Ohl, Frank W.; Anemüller, Jörn
2014-01-01
Analysis of sensory neurons' processing characteristics requires simultaneous measurement of presented stimuli and concurrent spike responses. The functional transformation from high-dimensional stimulus space to the binary space of spike and non-spike responses is commonly described with linear-nonlinear models, whose linear filter component describes the neuron's receptive field. From a machine learning perspective, this corresponds to the binary classification problem of discriminating spike-eliciting from non-spike-eliciting stimulus examples. The classification-based receptive field (CbRF) estimation method proposed here adapts a linear large-margin classifier to optimally predict experimental stimulus-response data and subsequently interprets learned classifier weights as the neuron's receptive field filter. Computational learning theory provides a theoretical framework for learning from data and guarantees optimality in the sense that the risk of erroneously assigning a spike-eliciting stimulus example to the non-spike class (and vice versa) is minimized. Efficacy of the CbRF method is validated with simulations and for auditory spectro-temporal receptive field (STRF) estimation from experimental recordings in the auditory midbrain of Mongolian gerbils. Acoustic stimulation is performed with frequency-modulated tone complexes that mimic properties of natural stimuli, specifically non-Gaussian amplitude distribution and higher-order correlations. Results demonstrate that the proposed approach successfully identifies correct underlying STRFs, even in cases where second-order methods based on the spike-triggered average (STA) do not. Applied to small data samples, the method is shown to converge on smaller amounts of experimental recordings and with lower estimation variance than the generalized linear model and recent information theoretic methods. Thus, CbRF estimation may prove useful for investigation of neuronal processes in response to natural stimuli and in settings where rapid adaptation is induced by experimental design. PMID:24699631
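A toy version of the classification-based idea on simulated data (using scikit-learn's linear SVM as the large-margin classifier, which is an assumption rather than the authors' implementation): stimuli are labelled as spike-eliciting or not, a linear classifier is fitted, and its weight vector is read out as the receptive-field estimate.

import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(1)

# Ground-truth receptive field (linear filter) over a 20-dimensional stimulus.
true_rf = np.sin(np.linspace(0, np.pi, 20))

# Non-Gaussian (Laplacian) stimulus ensemble and spikes from an LN-type model.
stimuli = rng.laplace(size=(5000, 20))
drive = stimuli @ true_rf
p_spike = 1.0 / (1.0 + np.exp(-(drive - 2.0)))        # static nonlinearity
spikes = rng.random(5000) < p_spike

# Large-margin classification of spike vs. non-spike stimuli.
clf = LinearSVC(C=0.1, max_iter=10000).fit(stimuli, spikes)
rf_estimate = clf.coef_.ravel()

# The scale of the filter is not identifiable, so compare directions only.
print("correlation with true filter:", round(np.corrcoef(rf_estimate, true_rf)[0, 1], 2))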
NASA Astrophysics Data System (ADS)
Ebrahimnejad, Ali
2015-08-01
There are several methods, in the literature, for solving fuzzy variable linear programming problems (fuzzy linear programming in which the right-hand-side vectors and decision variables are represented by trapezoidal fuzzy numbers). In this paper, the shortcomings of some existing methods are pointed out and to overcome these shortcomings a new method based on the bounded dual simplex method is proposed to determine the fuzzy optimal solution of that kind of fuzzy variable linear programming problems in which some or all variables are restricted to lie within lower and upper bounds. To illustrate the proposed method, an application example is solved and the obtained results are given. The advantages of the proposed method over existing methods are discussed. Also, one application of this algorithm in solving bounded transportation problems with fuzzy supplies and demands is dealt with. The proposed method is easy to understand and to apply for determining the fuzzy optimal solution of bounded fuzzy variable linear programming problems occurring in real-life situations.
On Computing Breakpoint Distances for Genomes with Duplicate Genes.
Shao, Mingfu; Moret, Bernard M E
2017-06-01
A fundamental problem in comparative genomics is to compute the distance between two genomes in terms of their higher-level organization (given by genes or syntenic blocks). For two genomes without duplicate genes, we can easily define (and almost always efficiently compute) a variety of distance measures, but the problem is NP-hard under most models when genomes contain duplicate genes. To tackle duplicate genes, three formulations (exemplar, maximum matching, and any matching) have been proposed, all of which aim to build a matching between homologous genes so as to minimize some distance measure. Of the many distance measures, the breakpoint distance (the number of nonconserved adjacencies) was the first one to be studied and remains of significant interest because of its simplicity and model-free property. The three breakpoint distance problems corresponding to the three formulations have been widely studied. Although we provided a solution for the exemplar problem last year that runs very fast on full genomes, computing optimal solutions for the other two problems has remained challenging. In this article, we describe very fast, exact algorithms for these two problems. Our algorithms rely on a compact integer linear program that we further simplify by developing an algorithm to remove variables, based on new results on the structure of adjacencies and matchings. Through extensive experiments using both simulations and biological data sets, we show that our algorithms run very fast (in seconds) on mammalian genomes and scale well beyond. We also apply these algorithms (as well as the classic orthology tool MSOAR) to create orthology assignments, then compare their quality in terms of both accuracy and coverage. We find that our algorithm for the "any matching" formulation significantly outperforms other methods in terms of accuracy while achieving nearly maximum coverage.
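For genomes without duplicate genes the breakpoint distance really is simple to compute; the sketch below counts non-conserved adjacencies of two signed linear toy genomes (ignoring telomere adjacencies), whereas the paper's integer linear program is needed for the duplicate-gene formulations.

def adjacencies(genome):
    """Adjacencies of a signed linear genome given as a list such as [1, -3, 2]."""
    adj = set()
    for a, b in zip(genome, genome[1:]):
        # Reading the chromosome in reverse gives (-b, -a); keep one canonical form.
        adj.add(min((a, b), (-b, -a)))
    return adj

def breakpoint_distance(g1, g2):
    """Number of adjacencies of g1 that are not conserved in g2 (no duplicate genes)."""
    return len(adjacencies(g1) - adjacencies(g2))

# Toy genomes over the genes 1..5; the second is the first with the segment [2, 3] inverted.
print(breakpoint_distance([1, 2, 3, 4, 5], [1, -3, -2, 4, 5]))   # -> 2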
ERIC Educational Resources Information Center
Brusco, Michael J.
2007-01-01
The study of human performance on discrete optimization problems has a considerable history that spans various disciplines. The two most widely studied problems are the Euclidean traveling salesperson problem and the quadratic assignment problem. The purpose of this paper is to outline a program of study for the measurement of human performance on…
An investigation of the use of temporal decomposition in space mission scheduling
NASA Technical Reports Server (NTRS)
Bullington, Stanley E.; Narayanan, Venkat
1994-01-01
This research involves an examination of techniques for solving scheduling problems in long-duration space missions. The mission timeline is broken up into several time segments, which are then scheduled incrementally. Three methods are presented for identifying the activities that are to be attempted within these segments. The first method is a mathematical model, which is presented primarily to illustrate the structure of the temporal decomposition problem. Since the mathematical model is bound to be computationally prohibitive for realistic problems, two heuristic assignment procedures are also presented. The first heuristic method is based on dispatching rules for activity selection, and the second heuristic assigns performances of a model evenly over timeline segments. These heuristics are tested using a sample Space Station mission and a Spacelab mission. The results are compared with those obtained by scheduling the missions without any problem decomposition. The applicability of this approach to large-scale mission scheduling problems is also discussed.
Parallel, Asynchronous Executive (PAX): System concepts, facilities, and architecture
NASA Technical Reports Server (NTRS)
Jones, W. H.
1983-01-01
The Parallel, Asynchronous Executive (PAX) is a software operating system simulation that allows many computers to work on a single problem at the same time. PAX is currently implemented on a UNIVAC 1100/42 computer system. Independent UNIVAC runstreams are used to simulate independent computers. Data are shared among independent UNIVAC runstreams through shared mass-storage files. PAX has achieved the following: (1) applied several computing processes simultaneously to a single, logically unified problem; (2) resolved most parallel processor conflicts by careful work assignment; (3) resolved by means of worker requests to PAX all conflicts not resolved by work assignment; (4) provided fault isolation and recovery mechanisms to meet the problems of an actual parallel, asynchronous processing machine. Additionally, one real-life problem has been constructed for the PAX environment. This is CASPER, a collection of aerodynamic and structural dynamic problem simulation routines. CASPER is not discussed in this report except to provide examples of parallel-processing techniques.
Linear Programming and Its Application to Pattern Recognition Problems
NASA Technical Reports Server (NTRS)
Omalley, M. J.
1973-01-01
Linear programming and linear-programming-like techniques as applied to pattern recognition problems are discussed. Three relatively recent research articles on such applications are summarized. The main results of each paper are described, indicating the theoretical tools needed to obtain them. A synopsis of the author's comments is presented with regard to the applicability or non-applicability of his methods to particular problems, including computational results wherever given.
Sparks-Thissen, Rebecca L
2017-02-01
Biology education is undergoing a transformation toward a more student-centered, inquiry-driven classroom. Many educators have designed engaging assignments intended to help undergraduate students gain exposure to the scientific process and data analysis. One such assignment is the grant proposal assignment. Many instructors have used these assignments in lecture-based courses to help students process information in the literature and apply that information to a novel problem such as the design of an antiviral drug or a vaccine. These assignments have been helpful in engaging students in the scientific process in the absence of an inquiry-driven laboratory. This commentary discusses the application of grant proposal writing assignments to undergraduate biology courses. © FEMS 2017. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Ecologically-Based Family Therapy Outcome with Substance Abusing Runaway Adolescents
Slesnick, Natasha; Prestopnik, Jillian L.
2007-01-01
Runaway youth report a broader range and higher severity of substance-related, mental health, and family problems relative to non-runaway youth. Most studies to date have collected self-report data on the family and social history; virtually no research has examined treatment effectiveness with this population. This study is a treatment development project in which 124 runaway youth were randomly assigned to 1) Ecologically-Based Family Therapy (EBFT) or 2) Service as Usual (SAU) through a shelter. Youth completed intake, posttreatment, and 6- and 12-month follow-up assessments. Youth assigned to EBFT reported greater reductions in overall substance abuse compared to youth assigned to SAU, while other problem areas improved in both conditions. Findings suggest that EBFT is an efficacious intervention for this relatively severe population of youth. PMID:15878048
Using sound to solve syntactic problems: the role of phonology in grammatical category assignments.
Kelly, M H
1992-04-01
One ubiquitous problem in language processing involves the assignment of words to the correct grammatical category, such as noun or verb. In general, semantic and syntactic cues have been cited as the principal information for grammatical category assignment, to the neglect of possible phonological cues. This neglect is unwarranted, and the following claims are made: (a) Numerous correlations between phonology and grammatical class exist, (b) some of these correlations are large and can pervade the entire lexicon of a language and hence can involve thousands of words, (c) experiments have repeatedly found that adults and children have learned these correlations, and (d) explanations for how these correlations arose can be proposed and evaluated. Implications of these phenomena for language representation and processing are discussed.
Problem Solution Project: Transforming Curriculum and Empowering Urban Students and Teachers
ERIC Educational Resources Information Center
Jarrett, Olga S.; Stenhouse, Vera
2011-01-01
This article presents findings of 6 years of implementing a Problem Solution Project, an assignment influenced by service learning, problem-based learning, critical theory, and critical pedagogy whereby teachers help children tackle real problems. Projects of 135 teachers in an urban certification/master's program were summarized by cohort year…
2015-12-24
The report addresses minimizing a weighted sum of the time and control effort needed to collect sensor data; this problem formulation is a modified traveling salesman problem. The report also covers the shortest path problem, the traveling salesman problem, and an initial guess obtained from a traveling salesman problem solution.
American Viticultural Areas: A Problem in Regional Geography.
ERIC Educational Resources Information Center
Macdonald, Gerald M.; Lemaire, Denyse
1995-01-01
Maintains that growing grapes for winemaking has increased dramatically in the United States. Describes a college class assignment in which students analyzed climate and soil type to identify appropriate viticulture areas. Reports high student interest in the assignment and includes four figures illustrating the approach. (CFR)
Kwon, Oh-Hun; Park, Hyunjin; Seo, Sang-Won; Na, Duk L.; Lee, Jong-Min
2015-01-01
The mean diffusivity (MD) value has been used to describe microstructural properties in Diffusion Tensor Imaging (DTI) in cortical gray matter (GM). Recently, researchers have applied a cortical surface generated from the T1-weighted volume. When the DTI data are analyzed using the cortical surface, it is important to assign an accurate MD value from the volume space to the vertex of the cortical surface, considering the anatomical correspondence between the DTI and the T1-weighted image. Previous studies usually sampled the MD value using the nearest-neighbor (NN) method or Linear method, even though there are geometric distortions in diffusion-weighted volumes. Here we introduce a Surface Guided Diffusion Mapping (SGDM) method to compensate for such geometric distortions. We compared our SGDM method with results using NN and Linear methods by investigating differences in the sampled MD value. We also projected the tissue classification results of non-diffusion-weighted volumes to the cortical midsurface. The CSF probability values provided by the SGDM method were lower than those produced by the NN and Linear methods. The MD values provided by the NN and Linear methods were significantly greater than those of the SGDM method in regions suffering from geometric distortion. These results indicate that the NN and Linear methods assigned the MD value in the CSF region to the cortical midsurface (GM region). Our results suggest that the SGDM method is an effective way to correct such mapping errors. PMID:26236180
2014-10-24
In the second phase of the research, the problem of planning a route for a vehicle, given constraints on its motion, was formalized as the Dubins travelling salesman problem (TSP); the study addresses the Dubins Travelling Salesperson Problem (DTSP) and its variants.
PROBABILISTIC CROSS-IDENTIFICATION IN CROWDED FIELDS AS AN ASSIGNMENT PROBLEM
DOE Office of Scientific and Technical Information (OSTI.GOV)
Budavári, Tamás; Basu, Amitabh, E-mail: budavari@jhu.edu, E-mail: basu.amitabh@jhu.edu
2016-10-01
One of the outstanding challenges of cross-identification is multiplicity: detections in crowded regions of the sky are often linked to more than one candidate association of similar likelihood. We map the resulting maximum likelihood partitioning to the fundamental assignment problem of discrete mathematics and efficiently solve the two-way catalog-level matching in the realm of combinatorial optimization using the so-called Hungarian algorithm. We introduce the method, demonstrate its performance in a mock universe where the true associations are known, and discuss the applicability of the new procedure to large surveys.
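For illustration, the catalog-level matching step can be sketched with an off-the-shelf Hungarian-style solver. The toy coordinates and the squared-separation cost below are stand-ins for the likelihood-based costs used in the paper.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Toy detections from two catalogs (RA/Dec in degrees); values are made up.
cat1 = np.array([[10.001, -5.002], [10.003, -5.000], [10.010, -5.007]])
cat2 = np.array([[10.002, -5.001], [10.004, -4.999], [10.009, -5.008]])

# Cost matrix: here simply the pairwise separation squared.
# In the paper the costs come from (negative log) association likelihoods.
diff = cat1[:, None, :] - cat2[None, :, :]
cost = (diff ** 2).sum(axis=-1)

# Hungarian-style solver for the two-way catalog-level matching.
rows, cols = linear_sum_assignment(cost)
for i, j in zip(rows, cols):
    print(f"catalog-1 source {i} <-> catalog-2 source {j} (cost {cost[i, j]:.2e})")
```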
NASA Astrophysics Data System (ADS)
2018-05-01
Eigenvalues and eigenvectors, together, constitute the eigenstructure of the system. The design of vibrating systems aimed at satisfying specifications on eigenvalues and eigenvectors, which is commonly known as eigenstructure assignment, has drawn increasing interest over the recent years. The most natural mathematical framework for such problems is constituted by the inverse eigenproblems, which consist in the determination of the system model that features a desired set of eigenvalues and eigenvectors. Although such a problem is intrinsically challenging, several solutions have been proposed in the literature. The approaches to eigenstructure assignment can be basically divided into passive control and active control.
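As a minimal illustration of the active-control route, eigenvalue assignment by state feedback can be sketched as below. The two-mass system, actuator layout, and target eigenvalues are invented for the example, and this does not address the structural (passive) inverse eigenproblem methods the abstract surveys.

```python
import numpy as np
from scipy.signal import place_poles

# Hypothetical 2-DOF vibrating system in state-space form x' = A x + B u.
m1 = m2 = 1.0
k1 = k2 = 10.0
M = np.diag([m1, m2])
K = np.array([[k1 + k2, -k2], [-k2, k2]])
A = np.block([[np.zeros((2, 2)), np.eye(2)],
              [-np.linalg.solve(M, K), np.zeros((2, 2))]])
B = np.vstack([np.zeros((2, 2)), np.linalg.inv(M)])  # one actuator per mass

# Desired closed-loop eigenvalues (assumed targets for illustration).
desired = np.array([-1 + 3j, -1 - 3j, -2 + 5j, -2 - 5j])

fb = place_poles(A, B, desired)
print("achieved eigenvalues:", np.linalg.eigvals(A - B @ fb.gain_matrix))
```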
THE SUCCESSIVE LINEAR ESTIMATOR: A REVISIT. (R827114)
This paper examines the theoretical basis of the successive linear estimator (SLE) that has been developed for the inverse problem in subsurface hydrology. We show that the SLE algorithm is a non-linear iterative estimator to the inverse problem. The weights used in the SLE al...
Woelfle, J; Hoepffner, W; Sippell, W G; Brämswig, J H; Heidemann, P; Deiss, D; Bökenkamp, A; Roth, C; Irle, U; Wollmann, H A; Zachmann, M; Kubini, K; Albers, N
2002-02-01
In girls with congenital adrenal hyperplasia (CAH), genital ambiguity usually leads to a rapid neonatal diagnosis. Rarely, CAH causes complete virilization and male sex assignment with a delayed diagnosis. After being confronted with very specific problems in two such patients, we collected data on patients with CAH and complete virilization in a nationwide study to delineate the specific problems of these rare patients in order to improve their management. Through the German Working Group of Paediatric Endocrinology (Arbeitsgemeinschaft Pädiatrische Endokrinologie, APE), questionnaires were sent to all members caring for patients with CAH and complete virilization in their endocrine clinics. Data from 16 patients from 10 paediatric endocrine centres were assessed by questionnaire. The following problems were encountered. (1) Sex assignment/gender identity: initially all patients had a male sex assignment. Six patients were diagnosed during the first month of life. Five were reassigned to female sex immediately, one at the age of 19 months. Except in one girl demonstrating some tomboyish behaviour, gender role behaviour in these patients did not differ from unaffected girls. Ten patients were diagnosed late, at 3.4-7 years of age. In seven patients with a late diagnosis, male sex assignment was maintained; one of them expressed some concerns about living as a male. In three patients late sex reversal was performed; gender identity is very poor in one, and new sex assignment is currently under consideration. (2) Surgery: irrespective of the sex assigned, all patients had between one and three surgical procedures, including clitoris reduction and (repeated) vaginoplasties in patients with female sex assignment. Hysterectomy and ovarectomy were performed in patients with male sex assignment. (3) Short stature: patients with a late diagnosis of CAH had extremely advanced bone ages of +6.3 to +9.5 years, leading to severely reduced final heights of 137 to 150 cm in adult patients. Patients tended to follow the height percentiles of genetic females. One pubertal patient was suicidal due to short stature. (4) Central precocious puberty (CPP): prolonged exposure to adrenal androgens led to CPP in one patient. He was treated with GnRH analogues until gonadectomy. Patients with CAH and complete virilization have a high risk of being diagnosed late. There are major problems and uncertainties for the patients' families and the treating physicians concerning gender assignment. Gender identity is disturbed in some patients. In addition, multiple surgical procedures are necessary, and short stature as well as central precocious puberty might be important late sequelae to avoid. While some surgical interventions are probably unavoidable, most of these issues could be resolved with an early diagnosis. Thus, especially for these patients, a neonatal screening programme for CAH would be of paramount importance.
A neural network approach to job-shop scheduling.
Zhou, D N; Cherkassky, V; Baldwin, T R; Olson, D E
1991-01-01
A novel analog computational network is presented for solving NP-complete constraint satisfaction problems, i.e. job-shop scheduling. In contrast to most neural approaches to combinatorial optimization based on quadratic energy cost function, the authors propose to use linear cost functions. As a result, the network complexity (number of neurons and the number of resistive interconnections) grows only linearly with problem size, and large-scale implementations become possible. The proposed approach is related to the linear programming network described by D.W. Tank and J.J. Hopfield (1985), which also uses a linear cost function for a simple optimization problem. It is shown how to map a difficult constraint-satisfaction problem onto a simple neural net in which the number of neural processors equals the number of subjobs (operations) and the number of interconnections grows linearly with the total number of operations. Simulations show that the authors' approach produces better solutions than existing neural approaches to job-shop scheduling, i.e. the traveling salesman problem-type Hopfield approach and integer linear programming approach of J.P.S. Foo and Y. Takefuji (1988), in terms of the quality of the solution and the network complexity.
Development of a Multileaf Collimator for Proton Radiotherapy
2006-06-01
voxel size and slice thickness can be adjusted and determine the resolution. Each voxel is assigned a CT Number, in Hounsfield units, which is a... measure of the linear attenuation of the material in that voxel. The Hounsfield unit is a comparison of the linear attenuation coefficient of some... a header, which contains relevant patient and scan information, and the data, which is a sequential listing of the Hounsfield units of each voxel
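For reference, the CT-number definition alluded to above is commonly written as HU = 1000 (mu - mu_water) / (mu_water - mu_air). A small sketch, with approximate illustrative attenuation values:

```python
def hounsfield_unit(mu, mu_water, mu_air=0.0):
    """Standard CT-number definition: HU = 1000 * (mu - mu_water) / (mu_water - mu_air).
    With mu_air ~ 0 this reduces to 1000 * (mu - mu_water) / mu_water,
    so water maps to 0 HU and air to about -1000 HU."""
    return 1000.0 * (mu - mu_water) / (mu_water - mu_air)

# Illustrative linear attenuation coefficients (cm^-1, approximate values)
mu_water = 0.19
print(hounsfield_unit(0.19, mu_water))   # water           ->     0 HU
print(hounsfield_unit(0.0, mu_water))    # air             -> -1000 HU
print(hounsfield_unit(0.38, mu_water))   # dense bone-like -> +1000 HU
```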
A feasible DY conjugate gradient method for linear equality constraints
NASA Astrophysics Data System (ADS)
LI, Can
2017-09-01
In this paper, we propose a feasible conjugate gradient method for solving linear equality constrained optimization problems. The method is an extension of the Dai-Yuan conjugate gradient method proposed by Dai and Yuan to linear equality constrained optimization problems. It can be applied to solve large linear equality constrained problems due to its lower storage requirement. An attractive property of the method is that the generated direction is always a feasible descent direction. Under mild conditions, the global convergence of the proposed method with exact line search is established. Numerical experiments are also given, which show the efficiency of the method.
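A minimal sketch of the idea, assuming a simple quadratic test problem and an Armijo backtracking line search rather than the exact line search analyzed in the paper: iterates stay feasible because the Dai-Yuan directions are projected onto the null space of the constraint matrix. This is not the paper's exact algorithm.

```python
import numpy as np

def projected_dai_yuan(f, grad, x0, A, tol=1e-8, max_iter=500):
    """Feasible conjugate-gradient sketch for min f(x) subject to A x = b.

    x0 must be feasible. Search directions are projected onto null(A),
    so every iterate stays feasible. beta follows the Dai-Yuan formula
    beta_k = ||g_{k+1}||^2 / (d_k^T (g_{k+1} - g_k)).
    """
    P = np.eye(A.shape[1]) - A.T @ np.linalg.solve(A @ A.T, A)  # projector onto null(A)
    x = x0.astype(float).copy()
    g = P @ grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        # backtracking (Armijo) line search along the feasible direction d
        t, f0, slope = 1.0, f(x), g @ d
        while f(x + t * d) > f0 + 1e-4 * t * slope:
            t *= 0.5
        x_new = x + t * d
        g_new = P @ grad(x_new)
        beta = (g_new @ g_new) / (d @ (g_new - g))
        d = -g_new + beta * d
        x, g = x_new, g_new
    return x

# Hypothetical test problem: min 0.5*||x||^2 + c.x  subject to  x1 + x2 + x3 = 3
A = np.array([[1.0, 1.0, 1.0]])
c = np.array([1.0, -2.0, 0.5])
f = lambda x: 0.5 * x @ x + c @ x
grad = lambda x: x + c
x0 = np.array([1.0, 1.0, 1.0])           # feasible starting point
print(projected_dai_yuan(f, grad, x0, A))
```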
NASA Technical Reports Server (NTRS)
Gibson, J. S.; Rosen, I. G.
1986-01-01
An abstract approximation framework is developed for the finite and infinite time horizon discrete-time linear-quadratic regulator problem for systems whose state dynamics are described by a linear semigroup of operators on an infinite dimensional Hilbert space. The schemes included in the framework yield finite dimensional approximations to the linear state feedback gains which determine the optimal control law. Convergence arguments are given. Examples involving hereditary and parabolic systems and the vibration of a flexible beam are considered. Spline-based finite element schemes for these classes of problems, together with numerical results, are presented and discussed.
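In the finite-dimensional setting, the limiting state feedback gains that such schemes approximate are the familiar discrete-time LQR gains. A brief sketch using SciPy's Riccati solver, with arbitrary illustrative system matrices:

```python
import numpy as np
from scipy.linalg import solve_discrete_are

# Illustrative finite-dimensional discrete-time system x_{k+1} = A x_k + B u_k
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.005], [0.1]])
Q = np.diag([1.0, 0.1])   # state weight
R = np.array([[0.01]])    # control weight

# Infinite-horizon discrete-time algebraic Riccati equation and feedback gain
P = solve_discrete_are(A, B, Q, R)
K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)   # control law u_k = -K x_k
print("gain K:", K)
print("closed-loop eigenvalues:", np.linalg.eigvals(A - B @ K))
```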
Linear systems on balancing chemical reaction problem
NASA Astrophysics Data System (ADS)
Kafi, R. A.; Abdillah, B.
2018-01-01
The concept of linear systems appears in a variety of applications. This paper presents a small sample of the wide variety of real-world problems arising in our study of linear systems. We show that the problem of balancing a chemical reaction can be described by a homogeneous linear system. The solution of the system is obtained by performing elementary row operations, and it gives the coefficients of the chemical reaction. In addition, we present a computational calculation to show that mathematical software such as Matlab can be used to simplify solving the system, instead of performing the row operations manually.
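A small sketch of the same idea in Python (the paper itself uses Matlab): the element-by-species matrix of CH4 + O2 -> CO2 + H2O is built with product columns negated, and the null space of the homogeneous system yields the balancing coefficients.

```python
from sympy import Matrix, lcm

# Columns: CH4, O2, CO2, H2O; rows: C, H, O (products entered with minus signs)
A = Matrix([[1, 0, -1,  0],   # carbon
            [4, 0,  0, -2],   # hydrogen
            [0, 2, -2, -1]])  # oxygen

# Homogeneous system A x = 0: the null space gives the reaction coefficients
v = A.nullspace()[0]
coeffs = v * lcm([term.q for term in v])   # scale to the smallest integers
print(coeffs.T)    # -> [1, 2, 1, 2], i.e. CH4 + 2 O2 -> CO2 + 2 H2O
```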
Fixed order dynamic compensation for multivariable linear systems
NASA Technical Reports Server (NTRS)
Kramer, F. S.; Calise, A. J.
1986-01-01
This paper considers the design of fixed order dynamic compensators for multivariable time invariant linear systems, minimizing a linear quadratic performance cost functional. Attention is given to robustness issues in terms of multivariable frequency domain specifications. An output feedback formulation is adopted by suitably augmenting the system description to include the compensator states. Either a controller or observer canonical form is imposed on the compensator description to reduce the number of free parameters to its minimal number. The internal structure of the compensator is prespecified by assigning a set of ascending feedback invariant indices, thus forming a Brunovsky structure for the nominal compensator.
Neely, K W; Eldurkar, J A; Drake, M E
2000-02-01
Emergency medical services (EMS) systems increasingly seek to triage patients to alternative EMS resources. Emergency medical services dispatchers may be asked to perform this triage. New protocols may be necessary. Alternatively, existing protocols may be sufficient for this task. For an existing dispatch protocol to be sufficient, it at least must accurately categorize patient condition and severity based on an external standard. To examine the extent to which nature codes (NCs), or patient condition codes, and severity codes (SCs) currently assigned in one urban 911 center agree with paramedic field findings. The null hypothesis was that there is no routine agreement (75%) between dispatcher-assigned NC or SC and paramedic-assigned NC or SC for the same patient using the same protocol. Emergency medical services dispatch nature and severity code data and matching out-of-hospital data were prospectively gathered over six months. Dispatch data included the NC: caller-identified problem, and the SC: dispatcher-assessed severity. Each NC is modified by one of three SCs (1, 3, or 9): 1 is emergent, 3 is urgent, and 9 is neither. Paramedics verified and/or corrected dispatcher-assigned NCs and SCs using the same dispatch protocol. One thousand forty usable cases fell into 33 unique NC/SC combinations. The designation of SC 1 was assigned 275 times, SC 3 was assigned 736 times, and SC 9 was assigned 24 times. The SC was missing five times. The overall NC agreement was 0.70 (95% CI = 0.697 to 0.703). The overall SC agreement was 0.65 (95% CI = 0.645 to 0.655). The NC agreement exceeded 75% for ten (59%) NC/SC combinations. The SC agreement exceeded 75% for five (29%) NC/SC combinations. There was both NC and SC agreement for four (24%) combinations: urgent breathing problems, urgent diabetic problems, urgent falls, and urgent overdoses. The greatest NC/SC disagreement occurred within emergent and urgent traffic crashes. Paramedics adjusted SC toward lower severity 29% of the time and toward higher severity 5.4% of the time. There was no upward SC adjustment for eight (47%) combinations. Certain dispatcher-assigned NC and SC codes and NC/SC combinations achieved the study threshold. Overall agreement failed to achieve the threshold. The lowest SC level was rarely assigned, preventing a meaningful analysis of all severity levels.
Some New Results in Astrophysical Problems of Nonlinear Theory of Radiative Transfer
NASA Astrophysics Data System (ADS)
Pikichyan, H. V.
2017-07-01
Nonlinear problems of radiative transfer play a decisive role in the interpretation of observed astrophysical spectra, because multiple interactions between the matter of the cosmic medium and intense exciting radiation occur ubiquitously in astrophysical objects and in their vicinities. The exciting radiation changes the physical properties of the original medium and is itself modified, simultaneously and self-consistently, under the medium's influence. In the present report, we show that the consistent application of the principle of invariance to the nonlinear problem of bilateral external illumination of a scattering/absorbing one-dimensional anisotropic medium of finite geometrical thickness allows simplifications that were previously considered the prerogative of linear problems only. The nonlinear problem is analyzed through three forms of the principle of invariance: (i) the adding of layers, (ii) its limiting form, described by the differential equations of invariant imbedding, and (iii) a transition to the so-called functional equations of "Ambartsumyan's complete invariance". Thereby, as an alternative to the Boltzmann equation, a new type of equation, the so-called "kinetic equations of equivalence", is obtained. By introducing new functions - the so-called "linear images" of the solution of the nonlinear radiative transfer problem - the linear structure of the solution of the nonlinear problem under study is further revealed. The linear images make it possible to carry over naturally the statistical characteristics of the random walk of a "single quantum", or of a "beam of unit intensity", as well as the widely known "probabilistic interpretation of transfer phenomena", to the field of nonlinear problems. The structure of the equations obtained for determining the linear images is typical of linear problems.
Optimal likelihood-based matching of volcanic sources and deposits in the Auckland Volcanic Field
NASA Astrophysics Data System (ADS)
Kawabata, Emily; Bebbington, Mark S.; Cronin, Shane J.; Wang, Ting
2016-09-01
In monogenetic volcanic fields, where each eruption forms a new volcano, focusing and migration of activity over time is a very real possibility. In order for hazard estimates to reflect future, rather than past, behavior, it is vital to assemble as much reliable age data as possible on past eruptions. Multiple swamp/lake records have been extracted from the Auckland Volcanic Field, underlying the 1.4 million-population city of Auckland. We examine here the problem of matching these dated deposits to the volcanoes that produced them. The simplest issue is separation in time, which is handled by simulating prior volcano age sequences from direct dates where known, thinned via ordering constraints between the volcanoes. The subproblem of varying deposition thicknesses (which may be zero) at five locations of known distance and azimuth is quantified using a statistical attenuation model for the volcanic ash thickness. These elements are combined with other constraints, from widespread fingerprinted ash layers that separate eruptions and time-censoring of the records, into a likelihood that was optimized via linear programming. A second linear program was used to optimize over the Monte-Carlo simulated set of prior age profiles to determine the best overall match and consequent volcano age assignments. Considering all 20 matches, and the multiple factors of age, direction, and size/distance simultaneously, results in some non-intuitive assignments which would not be produced by single factor analyses. Compared with earlier work, the results provide better age control on a number of smaller centers such as Little Rangitoto, Otuataua, Taylors Hill, Wiri Mountain, Green Hill, Otara Hill, Hampton Park and Mt Cambria. Spatio-temporal hazard estimates are updated on the basis of the new ordering, which suggest that the scale of the 'flare-up' around 30 ka, while still highly significant, was less than previously thought.
Three geographic decomposition approaches in transportation network analysis
DOT National Transportation Integrated Search
1980-03-01
This document describes the results of research into the application of geographic decomposition techniques to practical transportation network problems. Three approaches are described for the solution of the traffic assignment problem. One approach ...
Application of Decomposition to Transportation Network Analysis
DOT National Transportation Integrated Search
1976-10-01
This document reports preliminary results of five potential applications of the decomposition techniques from mathematical programming to transportation network problems. The five application areas are (1) the traffic assignment problem with fixed de...
Linear and Quadratic Change: A Problem from Japan
ERIC Educational Resources Information Center
Peterson, Blake E.
2006-01-01
In the fall of 2003, the author conducted research on the student teaching process in Japan. The basis for most of the lessons observed was rich mathematics problems. Upon returning to the US, the author used one such problem while teaching an algebra 2 class. This article introduces that problem, which gives rise to both linear and quadratic…
Accelerated Math®. Primary Mathematics. What Works Clearinghouse Intervention Report
ERIC Educational Resources Information Center
What Works Clearinghouse, 2017
2017-01-01
"Accelerated Math®," published by Renaissance Learning, is a software tool that provides practice problems for students in grades K-12 and provides teachers with reports to monitor student progress. "Accelerated Math®" creates individualized student assignments, scores the assignments, and generates reports on student progress.…
ERIC Educational Resources Information Center
McGuire, Linda
2014-01-01
This article will describe a writing assignment designed for use in a liberal arts college whose mission stresses effective written communication both within and across disciplines. In this assignment, students write three separate solutions to the same mathematics problem: one for a mathematical peer, a second for a contemporary that does not…
Using Portfolio Assignments to Assess Students' Mathematical Thinking
ERIC Educational Resources Information Center
Fukawa-Connelly, Timothy; Buck, Stephen
2010-01-01
Writing in mathematics can improve procedural knowledge and communication skills and may also help students better understand and then remember problems. The majority of mathematics teachers know that they ought to include some writing assignments in their instructional plans, but the challenge of covering the curriculum and the time required to…
Student's Lab Assignments in PDE Course with MAPLE.
ERIC Educational Resources Information Center
Ponidi, B. Alhadi
Computer-aided software has been used intensively in many mathematics courses, especially in computational subjects, to solve initial value and boundary value problems in Partial Differential Equations (PDE). Many software packages were used in student lab assignments such as FORTRAN, PASCAL, MATLAB, MATHEMATICA, and MAPLE in order to accelerate…
Listener Reliability in Assigning Utterance Boundaries in Children's Spontaneous Speech
ERIC Educational Resources Information Center
Stockman, Ida J.
2010-01-01
Research and clinical practices often rely on an utterance unit for spoken language analysis. This paper calls attention to the problems encountered when identifying utterance boundaries in young children's spontaneous conversational speech. The results of a reliability study of utterance boundary assignment are described for 20 females with…
Distributed Collaborative Homework Activities in a Problem-Based Usability Engineering Course
ERIC Educational Resources Information Center
Carroll, John M.; Jiang, Hao; Borge, Marcela
2015-01-01
Teams of students in an upper-division undergraduate Usability Engineering course used a collaborative environment to carry out a series of three distributed collaborative homework assignments. Assignments were case-based analyses structured using a jigsaw design; students were provided a collaborative software environment and introduced to a…
Increasing Student-Learning Team Effectiveness with Team Charters
ERIC Educational Resources Information Center
Hunsaker, Phillip; Pavett, Cynthia; Hunsaker, Johanna
2011-01-01
Because teams are a ubiquitous part of most organizations today, it is common for business educators to use team assignments to help students experientially learn about course concepts and team process. Unfortunately, students frequently experience a number of problems during team assignments. The authors describe the results of their research and…
Teaching Case: A Systems Analysis Role-Play Exercise and Assignment
ERIC Educational Resources Information Center
Mitri, Michel; Cole, Carey; Atkins, Laura
2017-01-01
This paper presents a role-play exercise and assignment that provides an active learning experience related to the system investigation phase of an SDLC. Whether using waterfall or agile approaches, the first SDLC step usually involves system investigation activities, including problem identification, feasibility study, cost-benefit analysis, and…
NASA Astrophysics Data System (ADS)
Heinkenschloss, Matthias
2005-01-01
We study a class of time-domain decomposition-based methods for the numerical solution of large-scale linear quadratic optimal control problems. Our methods are based on a multiple shooting reformulation of the linear quadratic optimal control problem as a discrete-time optimal control (DTOC) problem. The optimality conditions for this DTOC problem lead to a linear block tridiagonal system. The diagonal blocks are invertible and are related to the original linear quadratic optimal control problem restricted to smaller time-subintervals. This motivates the application of block Gauss-Seidel (GS)-type methods for the solution of the block tridiagonal systems. Numerical experiments show that the spectral radii of the block GS iteration matrices are larger than one for typical applications, but that the eigenvalues of the iteration matrices decay to zero fast. Hence, while the GS method is not expected to converge for typical applications, it can be effective as a preconditioner for Krylov-subspace methods. This is confirmed by our numerical tests. A byproduct of this research is the insight that certain instantaneous control techniques can be viewed as the application of one step of the forward block GS method applied to the DTOC optimality system.
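A generic sketch of the forward block Gauss-Seidel sweep described above; the random blocks below merely stand in for the subinterval optimal-control subproblems.

```python
import numpy as np

def block_gs_sweep(D, L, U, b, x):
    """One forward block Gauss-Seidel sweep for a block tridiagonal system.

    D[i] : diagonal blocks (invertible), i = 0..N-1
    L[i] : sub-diagonal block coupling x[i] to x[i-1], i = 1..N-1
    U[i] : super-diagonal block coupling x[i] to x[i+1], i = 0..N-2
    Updates x[i] by solving D[i] x[i] = b[i] - L[i] x[i-1] - U[i] x[i+1].
    """
    N = len(D)
    for i in range(N):
        rhs = b[i].copy()
        if i > 0:
            rhs -= L[i] @ x[i - 1]
        if i < N - 1:
            rhs -= U[i] @ x[i + 1]
        x[i] = np.linalg.solve(D[i], rhs)
    return x

# Toy data: 4 subintervals, 3 unknowns per block (illustrative only)
rng = np.random.default_rng(0)
N, m = 4, 3
D = [np.eye(m) * 4 + rng.normal(scale=0.1, size=(m, m)) for _ in range(N)]
L = [None] + [rng.normal(scale=0.1, size=(m, m)) for _ in range(N - 1)]
U = [rng.normal(scale=0.1, size=(m, m)) for _ in range(N - 1)] + [None]
b = [rng.normal(size=m) for _ in range(N)]
x = [np.zeros(m) for _ in range(N)]
for _ in range(25):         # used here as a solver; in practice, as a preconditioner step
    x = block_gs_sweep(D, L, U, b, x)
```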
NASA Astrophysics Data System (ADS)
Ikelle, Luc T.; Osen, Are; Amundsen, Lasse; Shen, Yunqing
2004-12-01
The classical linear solutions to the problem of multiple attenuation, like predictive deconvolution, τ-p filtering, or F-K filtering, are generally fast, stable, and robust compared to non-linear solutions, which are generally either iterative or in the form of a series with an infinite number of terms. These qualities have made the linear solutions more attractive to seismic data-processing practitioners. However, most linear solutions, including predictive deconvolution or F-K filtering, contain severe assumptions about the model of the subsurface and the class of free-surface multiples they can attenuate. These assumptions limit their usefulness. In a recent paper, we described an exception to this assertion for OBS data. We showed in that paper that a linear and non-iterative solution to the problem of attenuating free-surface multiples which is as accurate as iterative non-linear solutions can be constructed for OBS data. We here present a similar linear and non-iterative solution for attenuating free-surface multiples in towed-streamer data. For most practical purposes, this linear solution is as accurate as the non-linear ones.
38 CFR 21.198 - “Discontinued” status.
Code of Federal Regulations, 2011 CFR
2011-07-01
... personal or other problems; or (ii) Inability of the veteran to benefit from rehabilitation services... of entitlement. (4) Medical and related problems. A veteran's case will be discontinued and assigned... program because of a serious physical or emotional problem for an extended period; and (ii) VA medical...
38 CFR 21.198 - “Discontinued” status.
Code of Federal Regulations, 2010 CFR
2010-07-01
... personal or other problems; or (ii) Inability of the veteran to benefit from rehabilitation services... of entitlement. (4) Medical and related problems. A veteran's case will be discontinued and assigned... program because of a serious physical or emotional problem for an extended period; and (ii) VA medical...
Iterative algorithms for a non-linear inverse problem in atmospheric lidar
NASA Astrophysics Data System (ADS)
Denevi, Giulia; Garbarino, Sara; Sorrentino, Alberto
2017-08-01
We consider the inverse problem of retrieving aerosol extinction coefficients from Raman lidar measurements. In this problem the unknown and the data are related through the exponential of a linear operator, the unknown is non-negative and the data follow the Poisson distribution. Standard methods work on the log-transformed data and solve the resulting linear inverse problem, but neglect to take into account the noise statistics. In this study we show that proper modelling of the noise distribution can improve substantially the quality of the reconstructed extinction profiles. To achieve this goal, we consider the non-linear inverse problem with non-negativity constraint, and propose two iterative algorithms derived using the Karush-Kuhn-Tucker conditions. We validate the algorithms with synthetic and experimental data. As expected, the proposed algorithms out-perform standard methods in terms of sensitivity to noise and reliability of the estimated profile.
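As a rough illustration only (not the KKT-derived algorithms proposed in the paper), a projected-gradient iteration for the Poisson negative log-likelihood under an exponential-of-linear forward model might look as follows; the operator, scale, and "true" profile are synthetic assumptions.

```python
import numpy as np

def projected_gradient_poisson(y, K, scale, x0, n_iter=500):
    """Minimize sum(lam - y*log(lam)) with lam = scale * exp(-K @ x) and x >= 0.

    Generic projected-gradient sketch; under this forward model the gradient of
    the Poisson negative log-likelihood is grad = -K.T @ (lam - y).
    """
    x = x0.copy()
    lam0 = scale * np.exp(-K @ x0)
    step = 1.0 / np.linalg.norm((K.T * lam0) @ K, 2)  # 1 / bound on the Hessian norm
    for _ in range(n_iter):
        lam = scale * np.exp(-K @ x)
        grad = -K.T @ (lam - y)
        x = np.maximum(0.0, x - step * grad)          # enforce non-negativity
    return x

# Synthetic example with a made-up operator and extinction profile
rng = np.random.default_rng(1)
n = 50
K = np.tril(np.ones((n, n))) * 0.02            # crude cumulative-attenuation operator
x_true = np.abs(np.sin(np.linspace(0, 3, n)))  # non-negative extinction coefficients
lam_true = 1e4 * np.exp(-K @ x_true)
y = rng.poisson(lam_true).astype(float)
x_hat = projected_gradient_poisson(y, K, 1e4, np.zeros(n))
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```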
Separation anxiety among birth-assigned male children in a specialty gender identity service.
VanderLaan, Doug P; Santarossa, Alanna; Nabbijohn, A Natisha; Wood, Hayley; Owen-Anderson, Allison; Zucker, Kenneth J
2018-01-01
Previous research suggested that separation anxiety disorder (SAD) is overrepresented among birth-assigned male children clinic-referred for gender dysphoria (GD). The present study examined maternally reported separation anxiety of birth-assigned male children assessed in a specialty gender identity service (N = 360). SAD was determined in relation to DSM-III and DSM-IV criteria, respectively. A dimensional metric of separation anxiety was examined in relation to several additional factors: age, ethnicity, parental marital status and social class, IQ, gender nonconformity, behavioral and emotional problems, and poor peer relations. When defined in a liberal fashion, 55.8% were classified as having SAD. When using a more conservative criterion, 5.3% were classified as having SAD, which was significantly greater than the estimated general population prevalence for boys, but not for girls. Dimensionally, separation anxiety was associated with having parents who were not married or cohabitating as well as with elevations in gender nonconformity; however, the association with gender nonconformity was no longer significant when statistically controlling for internalizing problems. Thus, SAD appears to be common among birth-assigned males clinic-referred for GD when defined in a liberal fashion, and more common than in boys, but not girls, from the general population even when more stringent criteria were applied. Also, the degree of separation anxiety appears to be linked to generic risk factors (i.e., parental marital status, internalizing problems). As such, although separation anxiety is common among birth-assigned male children clinic-referred for GD, it seems unlikely to hold unique significance for this population based on the current data.
Cui, Licong; Xu, Rong; Luo, Zhihui; Wentz, Susan; Scarberry, Kyle; Zhang, Guo-Qiang
2014-08-03
Finding quality consumer health information online can effectively bring important public health benefits to the general population. It can empower people with timely and current knowledge for managing their health and promoting wellbeing. Despite a popular belief that search engines such as Google can solve all information access problems, recent studies show that using search engines and simple search terms is not sufficient. Our objective is to provide an approach to organizing consumer health information for navigational exploration, complementing keyword-based direct search. Multi-topic assignment to health information, such as online questions, is a fundamental step for navigational exploration. We introduce a new multi-topic assignment method combining semantic annotation using UMLS concepts (CUIs) and Formal Concept Analysis (FCA). Each question was tagged with CUIs identified by MetaMap. The CUIs were filtered with term-frequency and a new term-strength index to construct a CUI-question context. The CUI-question context and a topic-subject context were used for multi-topic assignment, resulting in a topic-question context. The topic-question context was then directly used for constructing a prototype navigational exploration interface. Experimental evaluation was performed on the task of automatic multi-topic assignment of 99 predefined topics for about 60,000 consumer health questions from NetWellness. Using example-based metrics, suitable for multi-topic assignment problems, our method achieved a precision of 0.849, recall of 0.774, and F₁ measure of 0.782, using a reference standard of 278 questions with manually assigned topics. Compared to NetWellness' original topic assignment, a 36.5% increase in recall is achieved with virtually no sacrifice in precision. Enhancing the recall of multi-topic assignment without sacrificing precision is a prerequisite for achieving the benefits of navigational exploration. Our new multi-topic assignment method, combining term-strength, FCA, and information retrieval techniques, significantly improved recall and performed well according to example-based metrics.
Multigrid approaches to non-linear diffusion problems on unstructured meshes
NASA Technical Reports Server (NTRS)
Mavriplis, Dimitri J.; Bushnell, Dennis M. (Technical Monitor)
2001-01-01
The efficiency of three multigrid methods for solving highly non-linear diffusion problems on two-dimensional unstructured meshes is examined. The three multigrid methods differ mainly in the manner in which the nonlinearities of the governing equations are handled. These comprise a non-linear full approximation storage (FAS) multigrid method which is used to solve the non-linear equations directly, a linear multigrid method which is used to solve the linear system arising from a Newton linearization of the non-linear system, and a hybrid scheme which is based on a non-linear FAS multigrid scheme, but employs a linear solver on each level as a smoother. Results indicate that all methods are equally effective at converging the non-linear residual in a given number of grid sweeps, but that the linear solver is more efficient in cpu time due to the lower cost of linear versus non-linear grid sweeps.
Fast, Nonlinear, Fully Probabilistic Inversion of Large Geophysical Problems
NASA Astrophysics Data System (ADS)
Curtis, A.; Shahraeeni, M.; Trampert, J.; Meier, U.; Cho, G.
2010-12-01
Almost all Geophysical inverse problems are in reality nonlinear. Fully nonlinear inversion including non-approximated physics, and solving for probability distribution functions (pdfs) that describe the solution uncertainty, generally requires sampling-based Monte-Carlo style methods that are computationally intractable in most large problems. In order to solve such problems, physical relationships are usually linearized leading to efficiently-solved, (possibly iterated) linear inverse problems. However, it is well known that linearization can lead to erroneous solutions, and in particular to overly optimistic uncertainty estimates. What is needed across many Geophysical disciplines is a method to invert large inverse problems (or potentially tens of thousands of small inverse problems) fully probabilistically and without linearization. This talk shows how very large nonlinear inverse problems can be solved fully probabilistically and incorporating any available prior information using mixture density networks (driven by neural network banks), provided the problem can be decomposed into many small inverse problems. In this talk I will explain the methodology, compare multi-dimensional pdf inversion results to full Monte Carlo solutions, and illustrate the method with two applications: first, inverting surface wave group and phase velocities for a fully-probabilistic global tomography model of the Earth's crust and mantle, and second inverting industrial 3D seismic data for petrophysical properties throughout and around a subsurface hydrocarbon reservoir. The latter problem is typically decomposed into 10^4 to 10^5 individual inverse problems, each solved fully probabilistically and without linearization. The results in both cases are sufficiently close to the Monte Carlo solution to exhibit realistic uncertainty, multimodality and bias. This provides far greater confidence in the results, and in decisions made on their basis.
ERIC Educational Resources Information Center
Kapur, Manu
2018-01-01
The goal of this paper is to isolate the preparatory effects of problem-generation from solution generation in problem-posing contexts, and their underlying mechanisms on learning from instruction. Using a randomized-controlled design, students were assigned to one of two conditions: (a) problem-posing with solution generation, where they…
ERIC Educational Resources Information Center
Ehrman, Sheryl H.; Castellanos, Patricia; Dwivedi, Vivek; Diemer, R. Bertrum
2007-01-01
A particle technology design problem incorporating population balance modeling was developed and assigned to senior and first-year graduate students in a Particle Science and Technology course. The problem focused on particle collection, with a pipeline agglomerator, Cyclone, and baghouse comprising the collection system. The problem was developed…
ERIC Educational Resources Information Center
Tang, Hui; Kirk, John; Pienta, Norbert J.
2014-01-01
This paper includes two experiments, one investigating complexity factors in stoichiometry word problems, and the other identifying students' problem-solving protocols by using eye-tracking technology. The word problems used in this study had five different complexity factors, which were randomly assigned by a Web-based tool that we developed. The…
Exploiting Elementary Landscapes for TSP, Vehicle Routing and Scheduling
2015-09-03
Traveling Salesman Problem (TSP) and Graph Coloring are elementary. Problems such as MAX-kSAT are a superposition of k elementary landscapes. This...search space. Problems such as the Traveling Salesman Problem (TSP), Graph Coloring, the Frequency Assignment Problem, as well as Min-Cut and Max-Cut...echoing our earlier results on the Traveling Salesman Problem. Using two locally optimal solutions as “parent” solutions, we have developed a
DOE Office of Scientific and Technical Information (OSTI.GOV)
Addona, Davide, E-mail: d.addona@campus.unimib.it
2015-08-15
We obtain weighted uniform estimates for the gradient of the solutions to a class of linear parabolic Cauchy problems with unbounded coefficients. Such estimates are then used to prove existence and uniqueness of the mild solution to a semi-linear backward parabolic Cauchy problem, where the differential equation is the Hamilton–Jacobi–Bellman equation of a suitable optimal control problem. Via backward stochastic differential equations, we show that the mild solution is indeed the value function of the controlled equation and that the feedback law is verified.
NASA Astrophysics Data System (ADS)
Xu, Jiuping; Li, Jun
2002-09-01
In this paper, a class of stochastic multiple-objective programming problems with one quadratic objective function, several linear objective functions and linear constraints is introduced. The model is transformed into a deterministic multiple-objective nonlinear programming model by introducing the expectations of the random variables. The reference direction approach is used to deal with the linear objectives and results in a linear parametric optimization formula with a single linear objective function. This objective function is combined with the quadratic function using weighted sums. The quadratic problem is transformed into a linear (parametric) complementarity problem, the basic formula for the proposed approach. The sufficient and necessary conditions for (properly, weakly) efficient solutions and some construction characteristics of (weakly) efficient solution sets are obtained. An interactive algorithm is proposed based on reference directions and weighted sums. By varying the parameter vector on the right-hand side of the model, the decision maker (DM) can freely search the efficient frontier with the model. An extended portfolio selection model is formed when liquidity is considered as another objective to be optimized besides expectation and risk. The interactive approach is illustrated with a practical example.
Capacity-constrained traffic assignment in networks with residual queues
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lam, W.H.K.; Zhang, Y.
2000-04-01
This paper proposes a capacity-constrained traffic assignment model for strategic transport planning in which the steady-state user equilibrium principle is extended to road networks with residual queues. Therefore, the road-exit capacity and the queuing effects can be incorporated into the strategic transport model for traffic forecasting. The proposed model is applicable to congested networks, particularly when the traffic demand exceeds the capacity of the network during the peak period. An efficient solution method is proposed for solving the steady-state traffic assignment problem with residual queues. Then a simple numerical example is employed to demonstrate the application of the proposed model and solution method, while an example of a medium-sized arterial highway network in Sioux Falls, South Dakota, is used to test the applicability of the proposed solution to real problems.
A Markov Random Field Framework for Protein Side-Chain Resonance Assignment
NASA Astrophysics Data System (ADS)
Zeng, Jianyang; Zhou, Pei; Donald, Bruce Randall
Nuclear magnetic resonance (NMR) spectroscopy plays a critical role in structural genomics, and serves as a primary tool for determining protein structures, dynamics and interactions in physiologically-relevant solution conditions. The current speed of protein structure determination via NMR is limited by the lengthy time required in resonance assignment, which maps spectral peaks to specific atoms and residues in the primary sequence. Although numerous algorithms have been developed to address the backbone resonance assignment problem [68,2,10,37,14,64,1,31,60], little work has been done to automate side-chain resonance assignment [43, 48, 5]. Most previous attempts in assigning side-chain resonances depend on a set of NMR experiments that record through-bond interactions with side-chain protons for each residue. Unfortunately, these NMR experiments have low sensitivity and limited performance on large proteins, which makes it difficult to obtain enough side-chain resonance assignments. On the other hand, it is essential to obtain almost all of the side-chain resonance assignments as a prerequisite for high-resolution structure determination. To overcome this deficiency, we present a novel side-chain resonance assignment algorithm based on alternative NMR experiments measuring through-space interactions between protons in the protein, which also provide crucial distance restraints and are normally required in high-resolution structure determination. We cast the side-chain resonance assignment problem into a Markov Random Field (MRF) framework, and extend and apply combinatorial protein design algorithms to compute the optimal solution that best interprets the NMR data. Our MRF framework captures the contact map information of the protein derived from NMR spectra, and exploits the structural information available from the backbone conformations determined by orientational restraints and a set of discretized side-chain conformations (i.e., rotamers). A Hausdorff-based computation is employed in the scoring function to evaluate the probability of side-chain resonance assignments to generate the observed NMR spectra. The complexity of the assignment problem is first reduced by using a dead-end elimination (DEE) algorithm, which prunes side-chain resonance assignments that are provably not part of the optimal solution. Then an A* search algorithm is used to find a set of optimal side-chain resonance assignments that best fit the NMR data. We have tested our algorithm on NMR data for five proteins, including the FF Domain 2 of human transcription elongation factor CA150 (FF2), the B1 domain of Protein G (GB1), human ubiquitin, the ubiquitin-binding zinc finger domain of the human Y-family DNA polymerase Eta (pol η UBZ), and the human Set2-Rpb1 interacting domain (hSRI). Our algorithm assigns resonances for more than 90% of the protons in the proteins, and achieves about 80% correct side-chain resonance assignments. The final structures computed using distance restraints resulting from the set of assigned side-chain resonances have backbone RMSD 0.5 - 1.4 Å and all-heavy-atom RMSD 1.0 - 2.2 Å from the reference structures that were determined by X-ray crystallography or traditional NMR approaches. These results demonstrate that our algorithm can be successfully applied to automate side-chain resonance assignment and high-quality protein structure determination. 
Since our algorithm does not require any specific NMR experiments for measuring the through-bond interactions with side-chain protons, it can save a significant amount of both experimental cost and spectrometer time, and hence accelerate the NMR structure determination process.
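For illustration, a Goldstein-style dead-end elimination test, which is one common form of the DEE pruning mentioned above, can be sketched as follows. The energy tables and the tiny example are schematic stand-ins, not actual resonance-assignment scores from the paper.

```python
def goldstein_dee_prune(self_E, pair_E, positions, rotamers):
    """Return the set of (position, candidate) pairs pruned by a
    Goldstein-style dead-end elimination criterion.

    self_E[(i, r)]       : self energy of candidate r at position i
    pair_E[(i, r, j, s)] : pairwise energy between candidates r@i and s@j
    Candidate r at i is pruned if some competitor t at i satisfies
        self_E[i,r] - self_E[i,t]
          + sum_{j != i} min_s ( pair_E[i,r,j,s] - pair_E[i,t,j,s] ) > 0,
    i.e. t always gives a lower total energy than r.
    """
    pruned = set()
    for i in positions:
        for r in rotamers[i]:
            for t in rotamers[i]:
                if t == r:
                    continue
                bound = self_E[(i, r)] - self_E[(i, t)]
                for j in positions:
                    if j == i:
                        continue
                    bound += min(pair_E[(i, r, j, s)] - pair_E[(i, t, j, s)]
                                 for s in rotamers[j])
                if bound > 0:
                    pruned.add((i, r))
                    break
    return pruned

# Tiny schematic example: two positions, two candidates each
positions = [0, 1]
rotamers = {0: ["a", "b"], 1: ["c", "d"]}
self_E = {(0, "a"): 0.0, (0, "b"): 5.0, (1, "c"): 0.0, (1, "d"): 1.0}
pair_E = {(0, "a", 1, "c"): 0.0, (0, "a", 1, "d"): 0.5,
          (0, "b", 1, "c"): 0.0, (0, "b", 1, "d"): 0.5,
          (1, "c", 0, "a"): 0.0, (1, "c", 0, "b"): 0.0,
          (1, "d", 0, "a"): 0.5, (1, "d", 0, "b"): 0.5}
print(goldstein_dee_prune(self_E, pair_E, positions, rotamers))  # -> {(0, 'b'), (1, 'd')}
```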
Linear decomposition approach for a class of nonconvex programming problems.
Shen, Peiping; Wang, Chunfeng
2017-01-01
This paper presents a linear decomposition approach for a class of nonconvex programming problems by dividing the input space into polynomially many grids. It shows that under certain assumptions the original problem can be transformed and decomposed into a polynomial number of equivalent linear programming subproblems. Based on solving a series of linear programming subproblems corresponding to those grid points, we can obtain a near-optimal solution of the original problem. Compared to existing results in the literature, the proposed algorithm does not require the assumptions of quasi-concavity and differentiability of the objective function, and it differs significantly from them, giving an interesting approach to solving the problem with a reduced running time.
How did you guess? Or, what do multiple-choice questions measure?
Cox, K R
1976-06-05
Multiple-choice questions classified as requiring problem-solving skills have been interpreted as measuring problem-solving skills within students, with the implicit hypothesis that questions needing an increasingly complex intellectual process should present increasing difficulty to the student. This hypothesis was tested in a 150-question paper taken by 721 students in seven Australian medical schools. No correlation was observed between difficulty and assigned process. Consequently, the question-answering process was explored with a group of final-year students. Anecdotal recall by students gave heavy weight to knowledge rather than problem solving in answering these questions. Assignment of the 150 questions to the classification by three teachers and six students showed their congruence to be a little above random probability.
ERIC Educational Resources Information Center
Benjamin, Carl; And Others
Presented are student performance objectives, a student progress chart, and assignment sheets with objective and diagnostic measures for the stated performance objectives in College Algebra I. Topics covered include: sets; vocabulary; linear equations; inequalities; real numbers; operations; factoring; fractions; formulas; ratio, proportion, and…
NASA Astrophysics Data System (ADS)
Basin, M.; Maldonado, J. J.; Zendejo, O.
2016-07-01
This paper proposes new mean-square filter and parameter estimator design for linear stochastic systems with unknown parameters over linear observations, where unknown parameters are considered as combinations of Gaussian and Poisson white noises. The problem is treated by reducing the original problem to a filtering problem for an extended state vector that includes parameters as additional states, modelled as combinations of independent Gaussian and Poisson processes. The solution to this filtering problem is based on the mean-square filtering equations for incompletely polynomial states confused with Gaussian and Poisson noises over linear observations. The resulting mean-square filter serves as an identifier for the unknown parameters. Finally, a simulation example shows effectiveness of the proposed mean-square filter and parameter estimator.
NASA Astrophysics Data System (ADS)
Yang, Peng; Peng, Yongfei; Ye, Bin; Miao, Lixin
2017-09-01
This article explores the integrated optimization problem of location assignment and sequencing in multi-shuttle automated storage/retrieval systems under the modified 2n-command cycle pattern. The decision of storage and retrieval (S/R) location assignment and S/R request sequencing are jointly considered. An integer quadratic programming model is formulated to describe this integrated optimization problem. The optimal travel cycles for multi-shuttle S/R machines can be obtained to process S/R requests in the storage and retrieval request order lists by solving the model. The small-sized instances are optimally solved using CPLEX. For large-sized problems, two tabu search algorithms are proposed, in which the first come, first served and nearest neighbour are used to generate initial solutions. Various numerical experiments are conducted to examine the heuristics' performance and the sensitivity of algorithm parameters. Furthermore, the experimental results are analysed from the viewpoint of practical application, and a parameter list for applying the proposed heuristics is recommended under different real-life scenarios.
Design of adaptive load mitigating materials usingnonlinear stress wave tailoring
2016-02-24
for granular material use). • Prof. Trudy Kriven (UIUC, Materials Science) is an expert in ceramic and geopolymer fabrication. • Prof. John... [Figure A5.1: Schematic diagram showing the 1D chain of spherical elements in contact with (a) a uniform linear medium and (b) a composite linear...] ...each material point to consisting of one of the given material constituents, we allow each material point to be assigned a composite material that is
Semilinear programming: applications and implementation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mohan, S.
Semilinear programming is a method of solving optimization problems with linear constraints where the non-negativity restrictions on the variables are dropped and the objective function coefficients can take on different values depending on whether the variable is positive or negative. The simplex method for linear programming is modified in this thesis to solve general semilinear and piecewise linear programs efficiently without having to transform them into equivalent standard linear programs. Several models in widely different areas of optimization such as production smoothing, facility location, goal programming and L1 estimation are presented first to demonstrate the compact formulation that arises when such problems are formulated as semilinear programs. A code SLP is constructed using the semilinear programming techniques. Problems in aggregate planning and L1 estimation are solved using SLP and as equivalent linear programs using a linear programming simplex code. Comparisons of CPU times and numbers of iterations indicate SLP to be far superior. The semilinear programming techniques are extended to piecewise linear programming in the implementation of the code PLP. Piecewise linear models in aggregate planning are solved using PLP and as equivalent standard linear programs using a simple upper bounded linear programming code SUBLP.
Computer Power. Part 2: Electrical Power Problems and Their Amelioration.
ERIC Educational Resources Information Center
Price, Bennett J.
1989-01-01
Describes electrical power problems that affect computer users, including spikes, sags, outages, noise, frequency variations, and static electricity. Ways in which these problems may be diagnosed and cured are discussed. Sidebars consider transformers; power distribution units; surge currents/linear and non-linear loads; and sizing the power…
Inverse Modelling Problems in Linear Algebra Undergraduate Courses
ERIC Educational Resources Information Center
Martinez-Luaces, Victor E.
2013-01-01
This paper will offer an analysis from a theoretical point of view of mathematical modelling, applications and inverse problems of both causation and specification types. Inverse modelling problems give the opportunity to establish connections between theory and practice and to show this fact, a simple linear algebra example in two different…
Zörnig, Peter
2015-08-01
We present integer programming models for some variants of the farthest string problem. The number of variables and constraints is substantially less than that of the integer linear programming models known in the literature. Moreover, the solution of the linear programming-relaxation contains only a small proportion of noninteger values, which considerably simplifies the rounding process. Numerical tests have shown excellent results, especially when a small set of long sequences is given.
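For context, a generic (non-compact) integer linear programming formulation of the farthest string problem can be sketched with PuLP as below, presumably of the kind the paper's smaller models improve upon; the helper name and toy instance are assumptions, not the authors' formulation.

```python
import pulp

def farthest_string(strings, alphabet):
    """Generic ILP for the farthest string problem: find a string t maximizing
    the minimum Hamming distance to all equal-length input strings."""
    L = len(strings[0])
    prob = pulp.LpProblem("farthest_string", pulp.LpMaximize)
    # x[j][a] = 1 if position j of t is letter a
    x = pulp.LpVariable.dicts("x", (range(L), alphabet), cat="Binary")
    z = pulp.LpVariable("z", lowBound=0)
    prob += z                                   # objective: maximize the minimum distance
    for j in range(L):
        prob += pulp.lpSum(x[j][a] for a in alphabet) == 1   # one letter per position
    for s in strings:
        # Hamming distance from t to s = L - number of agreeing positions
        dist = L - pulp.lpSum(x[j][s[j]] for j in range(L))
        prob += z <= dist
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    t = "".join(a for j in range(L) for a in alphabet if pulp.value(x[j][a]) > 0.5)
    return t, pulp.value(z)

print(farthest_string(["ACGT", "ACGA", "TCGT"], "ACGT"))
```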
Aspect-object alignment with Integer Linear Programming in opinion mining.
Zhao, Yanyan; Qin, Bing; Liu, Ting; Yang, Wei
2015-01-01
Target extraction is an important task in opinion mining. In this task, a complete target consists of an aspect and its corresponding object. However, previous work has always simply regarded the aspect as the target itself and has ignored the important "object" element. Thus, these studies have addressed incomplete targets, which are of limited use for practical applications. This paper proposes a novel and important sentiment analysis task, termed aspect-object alignment, to solve the "object neglect" problem. The objective of this task is to obtain the correct corresponding object for each aspect. We design a two-step framework for this task. We first provide an aspect-object alignment classifier that incorporates three sets of features, namely, the basic, relational, and special target features. However, the objects that are assigned to aspects in a sentence often contradict each other and possess many complicated features that are difficult to incorporate into a classifier. To resolve these conflicts, we impose two types of constraints in the second step: intra-sentence constraints and inter-sentence constraints. These constraints are encoded as linear formulations, and Integer Linear Programming (ILP) is used as an inference procedure to obtain a final global decision that is consistent with the constraints. Experiments on a corpus in the camera domain demonstrate that the three feature sets used in the aspect-object alignment classifier are effective in improving its performance. Moreover, the classifier with ILP inference performs better than the classifier without it, thereby illustrating that the two types of constraints that we impose are beneficial.
Hurtaud, C; Faucon, F; Couvreur, S; Peyraud, J-L
2010-04-01
The aim of this experiment was to compare the effects of increasing amounts of extruded linseed in dairy cow diet on milk fat yield, milk fatty acid (FA) composition, milk fat globule size, and butter properties. Thirty-six Prim'Holstein cows at 104 d in milk were sorted into 3 groups by milk production and milk fat globule size. Three diets were assigned: a total mixed ration (control) consisting of corn silage (70%) and concentrate (30%), or a supplemented ration based on the control ration but where part of the concentrate energy was replaced on a dry matter basis by 2.1% (LIN1) or 4.3% (LIN2) extruded linseed. The increased amounts of extruded linseed linearly decreased milk fat content and milk fat globule size and linearly increased the percentage of milk unsaturated FA, specifically alpha-linolenic acid and trans FA. Extruded linseed had no significant effect on butter color or on the sensory properties of butters, with only butter texture in the mouth improved. The LIN2 treatment induced a net improvement of milk nutritional properties but also created problems with transforming the cream into butter. The butters obtained were highly spreadable and melt-in-the-mouth, with no pronounced deficiency in taste. The LIN1 treatment appeared to offer a good tradeoff of improved milk FA profile and little effect on butter-making while still offering butters with improved functional properties. Copyright (c) 2010 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
Sixth SIAM conference on applied linear algebra: Final program and abstracts. Final technical report
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
1997-12-31
Linear algebra plays a central role in mathematics and applications. The analysis and solution of problems from an amazingly wide variety of disciplines depend on the theory and computational techniques of linear algebra. In turn, the diversity of disciplines depending on linear algebra also serves to focus and shape its development. Some problems have special properties (numerical, structural) that can be exploited. Some are simply so large that conventional approaches are impractical. New computer architectures motivate new algorithms, and fresh ways to look at old ones. The pervasive nature of linear algebra in analyzing and solving problems means that people from a wide spectrum--universities, industrial and government laboratories, financial institutions, and many others--share an interest in current developments in linear algebra. This conference aims to bring them together for their mutual benefit. Abstracts of papers presented are included.
Maximization of Learning Speed Due to Neuronal Redundancy in Reinforcement Learning
NASA Astrophysics Data System (ADS)
Takiyama, Ken
2016-11-01
Adaptable neural activity contributes to the flexibility of human behavior, which is optimized in situations such as motor learning and decision making. Although learning signals in motor learning and decision making are low-dimensional, neural activity, which is very high dimensional, must be modified to achieve optimal performance based on the low-dimensional signal, resulting in a severe credit-assignment problem. Despite this problem, the human brain contains a vast number of neurons, leaving an open question: what is the functional significance of the huge number of neurons? Here, I address this question by analyzing a redundant neural network with a reinforcement-learning algorithm in which the numbers of neurons and output units are N and M, respectively. Because many combinations of neural activity can generate the same output under the condition of N ≫ M, I refer to the index N - M as neuronal redundancy. Although greater neuronal redundancy makes the credit-assignment problem more severe, I demonstrate that a greater degree of neuronal redundancy facilitates learning speed. Thus, in an apparent contradiction of the credit-assignment problem, I propose the hypothesis that a functional role of a huge number of neurons or a huge degree of neuronal redundancy is to facilitate learning speed.
ERIC Educational Resources Information Center
Sweet, Colleen
2008-01-01
In this article, the author presents the "Rags to Riches" design project she introduced to her students. She assigned each of her students one item from an array of thrift store goods, which included old scarves, sweaters, jackets, and even evening gowns. The design problem was to imagine what a clothing tag might look like if the assigned item…
ELECTRICAL AND ELECTRONIC INDUSTRIAL CONTROL. D-C MAGNETIC MOTOR CONTROL, UNIT 7, ASSIGNMENTS.
ERIC Educational Resources Information Center
SUTTON, MACK C.
THIS GUIDE IS FOR INDIVIDUAL STUDENT USE IN STUDYING DIRECT CURRENT MAGNETIC MOTOR CONTROL IN ELECTRICAL-ELECTRONIC PROGRAMS. IT WAS DEVELOPED BY AN INSTRUCTIONAL MATERIALS SPECIALIST AND ADVISERS. EACH OF THE 15 ASSIGNMENT SHEETS PROVIDES THE LESSON SUBJECT, PURPOSE, INTRODUCTORY INFORMATION, STUDY REFERENCES, AND PROBLEMS. SOME OF THE LESSONS…
Lights, Camera, Action! Learning about Management with Student-Produced Video Assignments
ERIC Educational Resources Information Center
Schultz, Patrick L.; Quinn, Andrew S.
2014-01-01
In this article, we present a proposal for fostering learning in the management classroom through the use of student-produced video assignments. We describe the potential for video technology to create active learning environments focused on problem solving, authentic and direct experiences, and interaction and collaboration to promote student…
Turnaround Time and Market Capacity in Contract Cheating
ERIC Educational Resources Information Center
Wallace, Melisa J.; Newton, Philip M.
2014-01-01
Contract cheating is the process whereby students auction off the opportunity for others to complete assignments for them. It is an apparently widespread yet under-researched problem. One suggested strategy to prevent contract cheating is to shorten the turnaround time between the release of assignment details and the submission date, thus making…
"Yes, a T-Shirt!": Assessing Visual Composition in the "Writing" Class
ERIC Educational Resources Information Center
Odell, Lee; Katz, Susan M.
2009-01-01
Computer technology is expanding our profession's conception of composing, allowing visual information to play a substantial role in an increasing variety of composition assignments. This expansion, however, creates a major problem: How does one assess student work on these assignments? Current work in assessment provides only partial answers to…
Newton's method: A link between continuous and discrete solutions of nonlinear problems
NASA Technical Reports Server (NTRS)
Thurston, G. A.
1980-01-01
Newton's method for nonlinear mechanics problems replaces the governing nonlinear equations by an iterative sequence of linear equations. When the linear equations are linear differential equations, the equations are usually solved by numerical methods. The iterative sequence in Newton's method can exhibit poor convergence properties when the nonlinear problem has multiple solutions for a fixed set of parameters, unless the iterative sequences are aimed at solving for each solution separately. The theory of the linear differential operators is often a better guide for solution strategies in applying Newton's method than the theory of linear algebra associated with the numerical analogs of the differential operators. In fact, the theory for the differential operators can suggest the choice of numerical linear operators. In this paper the method of variation of parameters from the theory of linear ordinary differential equations is examined in detail in the context of Newton's method to demonstrate how it might be used as a guide for numerical solutions.
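A minimal numerical sketch of the iterative sequence Newton's method produces, replacing F(x) = 0 by repeated linear solves J(x_k) Δx = -F(x_k); the example system and starting points are made up, and this is not Thurston's variation-of-parameters formulation.

```python
# Newton's method: replace a nonlinear system F(x) = 0 by a sequence of
# linear solves J(x_k) dx = -F(x_k).  Generic sketch with numpy.
import numpy as np

def F(x):
    # Example nonlinear system with multiple solutions.
    return np.array([x[0]**2 + x[1]**2 - 4.0,   # circle of radius 2
                     x[0] - x[1]])              # line y = x

def J(x):
    # Jacobian of F, evaluated analytically.
    return np.array([[2.0 * x[0], 2.0 * x[1]],
                     [1.0, -1.0]])

def newton(x0, tol=1e-12, max_iter=50):
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        dx = np.linalg.solve(J(x), -F(x))   # the linear problem at each step
        x = x + dx
        if np.linalg.norm(dx) < tol:
            break
    return x

# Different starting points converge to different solutions,
# which is why the iteration must be "aimed" at one solution at a time.
print(newton([1.0, 1.0]))    # -> approximately [ sqrt(2),  sqrt(2)]
print(newton([-1.0, -1.0]))  # -> approximately [-sqrt(2), -sqrt(2)]
```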
Performance and limitations of p-version finite element method for problems containing singularities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wong, K.K.; Surana, K.S.
1996-10-01
In this paper, the authors investigate the performance of p-version Least Squares Finite Element Formulation (LSFEF) for a hyperbolic system of equations describing a one-dimensional radial flow of an upper-convected Maxwell fluid. This problem has an r² singularity in stress and an r⁻¹ singularity in velocity at r = 0. By carefully controlling the inner radius r_j, Deborah number De and Reynolds number Re, this problem can be used to simulate the following four classes of problems: (a) smooth linear problems, (b) smooth non-linear problems, (c) singular linear problems and (d) singular non-linear problems. They demonstrate that in cases (a) and (b) the p-version method, in particular p-version LSFEF, is meritorious. However, for cases (c) and (d) p-version LSFEF, even with extreme mesh refinement and very high p-levels, either produces wrong solutions, or results in the failure of the iterative solution procedure. Even though in the numerical studies they have considered p-version LSFEF for the radial flow of the upper-convected Maxwell fluid, the findings and conclusions are equally valid for other smooth and singular problems as well, regardless of the formulation strategy chosen and element approximation functions employed.
Solving multiconstraint assignment problems using learning automata.
Horn, Geir; Oommen, B John
2010-02-01
This paper considers the NP-hard problem of object assignment with respect to multiple constraints: assigning a set of elements (or objects) into mutually exclusive classes (or groups), where the elements which are "similar" to each other are hopefully located in the same class. The literature reports solutions in which the similarity constraint consists of a single index that is inappropriate for the type of multiconstraint problems considered here and where the constraints could simultaneously be contradictory. This feature, where we permit possibly contradictory constraints, distinguishes this paper from the state of the art. Indeed, we are aware of no learning automata (or other heuristic) solutions which solve this problem in its most general setting. Such a scenario is illustrated with the static mapping problem, which consists of distributing the processes of a parallel application onto a set of computing nodes. This is a classical and yet very important problem within the areas of parallel computing, grid computing, and cloud computing. We have developed four learning-automata (LA)-based algorithms to solve this problem: First, a fixed-structure stochastic automata algorithm is presented, where the processes try to form pairs to go onto the same node. This algorithm solves the problem, although it requires some centralized coordination. As it is desirable to avoid centralized control, we subsequently present three different variable-structure stochastic automata (VSSA) algorithms, which have superior partitioning properties in certain settings, although they forfeit some of the scalability features of the fixed-structure algorithm. All three VSSA algorithms model the processes as automata having first the hosting nodes as possible actions; second, the processes as possible actions; and, third, attempting to estimate the process communication digraph prior to probabilistically mapping the processes. This paper, which, we believe, comprehensively reports the pioneering LA solutions to this problem, unequivocally demonstrates that LA can play an important role in solving complex combinatorial and integer optimization problems.
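For readers unfamiliar with variable-structure automata, the following sketch shows a generic linear reward-inaction (L_RI) scheme assigning processes to nodes; the reward function and data are hypothetical, and this is not a reproduction of the four algorithms in the paper.

```python
# Sketch of a variable-structure learning automaton (linear reward-inaction)
# that maps processes to compute nodes.  This is a generic L_RI scheme, not
# the specific algorithms of the paper; the reward function is hypothetical.
import numpy as np

rng = np.random.default_rng(0)
n_processes, n_nodes = 6, 3
lam = 0.1  # learning rate of the reward-inaction update

# One action-probability vector per process; actions are the hosting nodes.
prob = np.full((n_processes, n_nodes), 1.0 / n_nodes)

# Hypothetical pairwise "communication volume" between processes; placing
# heavily communicating processes on the same node is rewarded.  A real
# mapping problem would also penalise load imbalance between nodes.
comm = rng.random((n_processes, n_processes))
comm = (comm + comm.T) / 2

def reward(assignment):
    same_node = assignment[:, None] == assignment[None, :]
    return comm[same_node].sum() / comm.sum()  # in [0, 1]

for _ in range(2000):
    # Each automaton picks a node according to its current probabilities.
    assignment = np.array([rng.choice(n_nodes, p=prob[i])
                           for i in range(n_processes)])
    if rng.random() < reward(assignment):          # stochastic reward signal
        for i, a in enumerate(assignment):          # reward-inaction update
            prob[i] *= (1.0 - lam)
            prob[i, a] += lam

print(prob.round(2))
```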
ERIC Educational Resources Information Center
Gale, David; And Others
Four units make up the contents of this document. The first examines applications of finite mathematics to business and economics. The user is expected to learn the method of optimization in optimal assignment problems. The second module presents applications of difference equations to economics and social sciences, and shows how to: 1) interpret…
An Effective Evolutionary Approach for Bicriteria Shortest Path Routing Problems
NASA Astrophysics Data System (ADS)
Lin, Lin; Gen, Mitsuo
The routing problem is one of the important research issues in the communication network field. In this paper, we consider a bicriteria shortest path routing (bSPR) model dedicated to calculating nondominated paths for (1) the minimum total cost and (2) the minimum transmission delay. To solve this bSPR problem, we propose a new multiobjective genetic algorithm (moGA) with: (1) an efficient chromosome representation using the priority-based encoding method; (2) a new operator for auto-tuning the GA parameters, which adaptively regulates exploration and exploitation based on the change in the average fitness of parents and offspring at each generation; and (3) an interactive adaptive-weight fitness assignment mechanism that assigns weights to each objective and combines the weighted objectives into a single objective function. Numerical experiments with network design problems of various scales show the effectiveness and efficiency of our approach in comparison with recent research.
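A sketch of how a priority-based chromosome can be decoded into a path, which is the encoding idea mentioned in point (1); the network, priorities, and helper name decode_path are illustrative only, and the rest of the GA machinery is omitted.

```python
# Decoding a priority-based chromosome into a path (sketch).  The GA itself
# (selection, crossover, adaptive parameter tuning, weighted fitness) is
# omitted; the network and priorities here are hypothetical.
def decode_path(priority, adjacency, source, target):
    """priority[v] is the gene for node v; higher priority is chosen first."""
    path, node = [source], source
    while node != target:
        candidates = [v for v in adjacency[node] if v not in path]
        if not candidates:          # dead end: chromosome decodes to no path
            return None
        node = max(candidates, key=lambda v: priority[v])
        path.append(node)
    return path

adjacency = {0: [1, 2], 1: [2, 3], 2: [3, 4], 3: [4], 4: []}
priority = {0: 5, 1: 1, 2: 4, 3: 2, 4: 3}
print(decode_path(priority, adjacency, source=0, target=4))  # [0, 2, 4]
```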
The unassigned distance geometry problem
Duxbury, P. M.; Granlund, L.; Gujarathi, S. R.; ...
2015-11-19
Studies of distance geometry problems (DGP) have focused on cases where the vertices at the ends of all or most of the given distances are known or assigned, which we call assigned distance geometry problems (aDGPs). In this contribution we consider the unassigned distance geometry problem (uDGP) where the vertices associated with a given distance are unknown, so the graph structure has to be discovered. uDGPs arise when attempting to find the atomic structure of molecules and nanoparticles using X-ray or neutron diffraction data from non-crystalline materials. Rigidity theory provides a useful foundation for both aDGPs and uDGPs, though it is restricted to generic realizations of graphs, and key results are summarized. Conditions for unique realization are discussed for aDGP and uDGP cases, build-up algorithms for both cases are described and experimental results for uDGP are presented.
Characterizing the Fundamental Intellectual Steps Required in the Solution of Conceptual Problems
NASA Astrophysics Data System (ADS)
Stewart, John
2010-02-01
At some level, the performance of a science class must depend on what is taught, the information content of the materials and assignments of the course. The introductory calculus-based electricity and magnetism class at the University of Arkansas is examined using a catalog of the basic reasoning steps involved in the solution of problems assigned in the class. This catalog was developed by sampling popular physics textbooks for conceptual problems. The solution to each conceptual problem was decomposed into its fundamental reasoning steps. These fundamental steps are then used to quantify the distribution of conceptual content within the course. Using this characterization technique, an exceptionally detailed picture of the information flow and structure of the class can be produced. The intellectual structure of published conceptual inventories is compared with the information presented in the class and the dependence of conceptual performance on the details of coverage extracted.
FINITE DIFFERENCE THEORY, LINEAR ALGEBRA, APPLIED MATHEMATICS, APPROXIMATION (MATHEMATICS), BOUNDARY VALUE PROBLEMS, COMPUTATIONS, HYPERBOLAS, MATHEMATICAL MODELS, NUMERICAL ANALYSIS, PARTIAL DIFFERENTIAL EQUATIONS, STABILITY.
NASA Technical Reports Server (NTRS)
Lee, Y. M.
1971-01-01
Using a linearized theory of a thermally and mechanically interacting mixture of a linear elastic solid and a viscous fluid, we derive a fundamental relation in an integral form called a reciprocity relation. This reciprocity relation relates the solution of one initial-boundary value problem with a given set of initial and boundary data to the solution of a second initial-boundary value problem corresponding to different initial and boundary data for a given interacting mixture. From this general integral relation, reciprocity relations are derived for a heat-conducting linear elastic solid, and for a heat-conducting viscous fluid. An initial-boundary value problem is posed and solved for the mixture of a linear elastic solid and a viscous fluid. With the aid of the Laplace transform and contour integration, a real integral representation for the displacement of the solid constituent is obtained as one of the principal results of the analysis.
A heterogeneous fleet vehicle routing model for solving the LPG distribution problem: A case study
NASA Astrophysics Data System (ADS)
Onut, S.; Kamber, M. R.; Altay, G.
2014-03-01
The Vehicle Routing Problem (VRP) is an important management problem in the field of distribution and logistics. In VRPs, routes from a distribution point to geographically distributed points are designed with minimum cost while considering customer demands. Each point should be visited only once, by one vehicle on one route. The total demand on a route should not exceed the capacity of the vehicle assigned to that route. VRP variants arise from real-life constraints related to vehicle types, number of depots, transportation conditions, time periods, etc. The heterogeneous fleet vehicle routing problem is a VRP variant in which vehicles have different capacities and costs. There are two types of vehicles in our problem. This study uses real-world data obtained from a company operating in the LPG sector in Turkey. An optimization model is established for planning daily routes and assigning vehicles. The model is solved with GAMS and an optimal solution is found in a reasonable time.
Interlocked Problem Posing and Children's Problem Posing Performance in Free Structured Situations
ERIC Educational Resources Information Center
Cankoy, Osman
2014-01-01
The aim of this study is to explore the mathematical problem posing performance of students in free structured situations. Two classes of fifth grade students (N = 30) were randomly assigned to experimental and control groups. The categories of the problems posed in free structured situations by the 2 groups of students were studied through…
ERIC Educational Resources Information Center
Rohrer, Doug; Dedrick, Robert F.; Burgess, Kaleena
2014-01-01
Most mathematics assignments consist of a group of problems requiring the same strategy. For example, a lesson on the quadratic formula is typically followed by a block of problems requiring students to use the quadratic formula, which means that students know the appropriate strategy before they read each problem. In an alternative approach,…
NASA Technical Reports Server (NTRS)
Rosatino, S. A.; Westbrook, R. M.
1979-01-01
Miniature, individually crystal-controlled RF transmitters located in EMG pressure sensors simplify multichannel EMG telemetry for electronic gait monitoring. The transmitters, which are assigned operating frequencies within the 174-216 MHz band, have a linear frequency response from 20 to 2000 Hz and operate over a range of 15 m.
Control problem for a system of linear loaded differential equations
NASA Astrophysics Data System (ADS)
Barseghyan, V. R.; Barseghyan, T. V.
2018-04-01
The problem of control and optimal control for a system of linear loaded differential equations is considered. Necessary and sufficient conditions for complete controllability and conditions for the existence of a program control and the corresponding motion are formulated. The explicit form of control action for the control problem is constructed and a method for solving the problem of optimal control is proposed.
ORACLS: A system for linear-quadratic-Gaussian control law design
NASA Technical Reports Server (NTRS)
Armstrong, E. S.
1978-01-01
A modern control theory design package (ORACLS) for constructing controllers and optimal filters for systems modeled by linear time-invariant differential or difference equations is described. Numerical linear-algebra procedures are used to implement the linear-quadratic-Gaussian (LQG) methodology of modern control theory. Algorithms are included for computing eigensystems of real matrices, the relative stability of a matrix, factored forms for nonnegative definite matrices, the solutions and least squares approximations to the solutions of certain linear matrix algebraic equations, the controllability properties of a linear time-invariant system, and the steady state covariance matrix of an open-loop stable system forced by white noise. Subroutines are provided for solving both the continuous and discrete optimal linear regulator problems with noise free measurements and the sampled-data optimal linear regulator problem. For measurement noise, duality theory and the optimal regulator algorithms are used to solve the continuous and discrete Kalman-Bucy filter problems. Subroutines are also included which give control laws causing the output of a system to track the output of a prescribed model.
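ORACLS itself is a Fortran package; as a rough modern analogue of its regulator computation, the following sketch solves a discrete-time optimal linear regulator problem with scipy's algebraic Riccati solver (the plant and weights are made up).

```python
# Discrete-time optimal linear regulator: u = -K x minimizing
# sum(x'Qx + u'Ru).  ORACLS is a Fortran package; this numpy/scipy snippet
# only illustrates the underlying Riccati computation.
import numpy as np
from scipy.linalg import solve_discrete_are

A = np.array([[1.0, 1.0],
              [0.0, 1.0]])      # double integrator, unit sampling time
B = np.array([[0.5],
              [1.0]])
Q = np.eye(2)                   # state weighting
R = np.array([[1.0]])           # control weighting

P = solve_discrete_are(A, B, Q, R)                   # steady-state Riccati solution
K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)    # optimal feedback gain

print("gain K =", K)
print("closed-loop eigenvalues:", np.linalg.eigvals(A - B @ K))
```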
EZLP: An Interactive Computer Program for Solving Linear Programming Problems. Final Report.
ERIC Educational Resources Information Center
Jarvis, John J.; And Others
Designed for student use in solving linear programming problems, the interactive computer program described (EZLP) permits the student to input the linear programming model in exactly the same manner in which it would be written on paper. This report includes a brief review of the development of EZLP; narrative descriptions of program features,…
Zeb, Salman; Yousaf, Muhammad
2017-01-01
In this article, we present a QR updating procedure as a solution approach for linear least squares problem with equality constraints. We reduce the constrained problem to unconstrained linear least squares and partition it into a small subproblem. The QR factorization of the subproblem is calculated and then we apply updating techniques to its upper triangular factor R to obtain its solution. We carry out the error analysis of the proposed algorithm to show that it is backward stable. We also illustrate the implementation and accuracy of the proposed algorithm by providing some numerical experiments with particular emphasis on dense problems.
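As a point of reference for the problem being solved, here is the textbook null-space method for equality-constrained least squares via QR; it is not the updating procedure proposed in the article, and the data are made up.

```python
# Equality-constrained least squares: minimize ||A x - b|| subject to C x = d.
# This is the textbook null-space method via QR, shown only as a reference
# point; it is not the updating procedure proposed in the article.
import numpy as np

def lse_nullspace(A, b, C, d):
    p, n = C.shape
    Q, R = np.linalg.qr(C.T, mode="complete")   # C.T = Q [R; 0]
    Q1, Q2 = Q[:, :p], Q[:, p:]                 # range and null space of C.T
    y1 = np.linalg.solve(R[:p, :].T, d)         # particular part enforcing C x = d
    # Remaining freedom lies in the null space: x = Q1 y1 + Q2 y2.
    y2, *_ = np.linalg.lstsq(A @ Q2, b - A @ Q1 @ y1, rcond=None)
    return Q1 @ y1 + Q2 @ y2

A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [1.0, 1.0, 0.0],
              [1.0, 1.0, 1.0]])
b = np.array([1.0, 2.0, 3.0, 4.0])
C = np.array([[1.0, 1.0, 1.0]])   # single equality constraint: sum(x) = 1
d = np.array([1.0])

x = lse_nullspace(A, b, C, d)
print(x, "constraint residual:", C @ x - d)
```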
Fuzzy self-learning control for magnetic servo system
NASA Technical Reports Server (NTRS)
Tarn, J. H.; Kuo, L. T.; Juang, K. Y.; Lin, C. E.
1994-01-01
It is known that an effective control system is the key condition for successful implementation of high-performance magnetic servo systems. Major issues to design such control systems are nonlinearity; unmodeled dynamics, such as secondary effects for copper resistance, stray fields, and saturation; and that disturbance rejection for the load effect reacts directly on the servo system without transmission elements. One typical approach to design control systems under these conditions is a special type of nonlinear feedback called gain scheduling. It accommodates linear regulators whose parameters are changed as a function of operating conditions in a preprogrammed way. In this paper, an on-line learning fuzzy control strategy is proposed. To inherit the wealth of linear control design, the relations between linear feedback and fuzzy logic controllers have been established. The exercise of engineering axioms of linear control design is thus transformed into tuning of appropriate fuzzy parameters. Furthermore, fuzzy logic control brings the domain of candidate control laws from linear into nonlinear, and brings new prospects into design of the local controllers. On the other hand, a self-learning scheme is utilized to automatically tune the fuzzy rule base. It is based on network learning infrastructure; statistical approximation to assign credit; animal learning method to update the reinforcement map with a fast learning rate; and temporal difference predictive scheme to optimize the control laws. Different from supervised and statistical unsupervised learning schemes, the proposed method learns on-line from past experience and information from the process and forms a rule base of an FLC system from randomly assigned initial control rules.
Krylov subspace methods - Theory, algorithms, and applications
NASA Technical Reports Server (NTRS)
Saad, Youcef
1990-01-01
Projection methods based on Krylov subspaces for solving various types of scientific problems are reviewed. The main idea of this class of methods when applied to a linear system Ax = b, is to generate in some manner an approximate solution to the original problem from the so-called Krylov subspace span. Thus, the original problem of size N is approximated by one of dimension m, typically much smaller than N. Krylov subspace methods have been very successful in solving linear systems and eigenvalue problems and are now becoming popular for solving nonlinear equations. The main ideas in Krylov subspace methods are shown, and their use in solving linear systems, eigenvalue problems, parabolic partial differential equations, Liapunov matrix equations, and nonlinear systems of equations is discussed.
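The basic construction behind these methods is the Arnoldi process, sketched below for a generic dense matrix; the choice of dimension m and the random test matrix are illustrative only.

```python
# Arnoldi iteration: build an orthonormal basis V of the Krylov subspace
# span{b, Ab, A^2 b, ...} and the small Hessenberg matrix H satisfying
# A V_m = V_{m+1} H (up to round-off).  Generic numpy sketch.
import numpy as np

def arnoldi(A, b, m):
    n = len(b)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = b / np.linalg.norm(b)
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):                 # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-14:                # exact invariant subspace found
            return V[:, :j + 1], H[:j + 1, :j]
        V[:, j + 1] = w / H[j + 1, j]
    return V, H

rng = np.random.default_rng(1)
A = rng.random((50, 50))
b = rng.random(50)
V, H = arnoldi(A, b, m=10)
# The Ritz values (eigenvalues of the small H) approximate eigenvalues of A.
print(np.sort(np.linalg.eigvals(H[:10, :10]).real)[-3:])
```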
Protection of Workers and Third Parties during the Construction of Linear Structures
NASA Astrophysics Data System (ADS)
Vlčková, Jitka; Venkrbec, Václav; Henková, Svatava; Chromý, Adam
2017-12-01
The minimization of risk in the workplace through a focus on occupational health and safety (OHS) is one of the primary objectives for every construction project. The most serious accidents in the construction industry occur during work on earthworks and linear structures. The character of such structures places them among those posing the greatest threat to the public (referred to as “third parties”). They can be characterized as large structures whose construction may involve the building site extending in a narrow lane alongside previously constructed objects currently in use by the public. Linear structures are often directly connected to existing objects or buildings, making it impossible to guard the whole construction site. However, many OHS problems related to linear structures can be prevented during the design stage. The aim of this article is to introduce a new methodology which has been implemented into a computer program that deals with safety measures at construction sites where work is performed on linear structures. Based on existing experience with the design of such structures and their execution and supervision by safety coordinators, the basic types of linear structures, their location in the terrain, the conditions present during their execution and other marginal conditions and influences were modelled. Basic safety information has been assigned to this elementary information, which is strictly necessary for the construction process. The safety provisions can be grouped according to type, e.g. technical, organizational and other necessary documentation, or into sets of provisions concerning areas such as construction site safety, transport safety, earthworks safety, etc. The selection of the given provisions takes place using multiple criteria. The aim of creating this program is to provide a practical tool for designers, contractors and construction companies. The model can contribute to the sufficient awareness of these participants about technical and organizational provisions that can help them to meet workplace safety requirements. The software for the selection of safety provisions also contains a module that can calculate necessary cost estimates using a calculation formula chosen by the user. All software data conform to European standards harmonized for the Czech Republic.
NASA Astrophysics Data System (ADS)
Stewart, John
2015-04-01
The amount of time spent on out-of-class activities such as working homework, reading, and studying for examinations is presented for 10 years of an introductory, calculus-based physics class at a large public university. While the class underwent significant change in the 10 years studied, the amount of time invested by students in weeks not containing an in-semester examination was constant and did not vary with the length of the reading or homework assignments. The amount of time spent preparing for examinations did change as the course was modified. The time spent on class assignments, both reading and homework, did not scale linearly with the length of the assignment. The time invested in both reading and homework per length of the assignment decreased as the assignments became longer. The class average time invested in examination preparation did change with the average performance on previous examinations in the same class, with more time spent in preparation for lower previous examination scores (R² = 0.70).
NASA Astrophysics Data System (ADS)
Kusumawati, Rosita; Subekti, Retno
2017-04-01
The fuzzy bi-objective linear programming (FBOLP) model is a bi-objective linear programming model over fuzzy numbers, where the coefficients of the equations are fuzzy numbers. This model is proposed to solve the portfolio selection problem, generating an asset portfolio with the lowest risk and the highest expected return. The FBOLP model with normal fuzzy numbers for the risk and expected return of stocks is transformed into a linear programming (LP) model using a magnitude ranking function.
An Introduction to Multilinear Formula Score Theory. Measurement Series 84-4.
ERIC Educational Resources Information Center
Levine, Michael V.
Formula score theory (FST) associates each multiple choice test with a linear operator and expresses all of the real functions of item response theory as linear combinations of the operator's eigenfunctions. Hard measurement problems can then often be reformulated as easier, standard mathematical problems. For example, the problem of estimating…
ERIC Educational Resources Information Center
Webster-Stratton, Carolyn; And Others
1988-01-01
Assigned parents of 114 conduct-problem young children to either individually administered videotape modeling treatment, group discussion videotape modeling treatment, group discussion treatment, or waiting-list control. Compared with controls, all three treatment groups of mothers reported significantly fewer child behavior problems, more…
Interpersonal Problem-Solving Skills Training in the Treatment of Self-Poisoning Patients.
ERIC Educational Resources Information Center
McLeavey, B. C.; And Others
1994-01-01
Evaluated the effectiveness of interpersonal problem-solving skills training (IPSST) for the treatment of self-poisoning patients. Subjects were assigned randomly either to IPSST or to a control treatment. Although both treatments reduced the number of presenting problems, the IPSST was more effective as determined by other outcome measures. (RJM)
The Effect of Contextual and Conceptual Rewording on Mathematical Problem-Solving Performance
ERIC Educational Resources Information Center
Haghverdi, Majid; Wiest, Lynda R.
2016-01-01
This study shows how separate and combined contextual and conceptual problem rewording can positively influence student performance in solving mathematical word problems. Participants included 80 seventh-grade Iranian students randomly assigned in groups of 20 to three experimental groups involving three types of rewording and a control group. All…
Student-Created Homework Problems Based on YouTube Videos
ERIC Educational Resources Information Center
Liberatore, Matthew W.; Marr, David W. M.; Herring, Andrew M.; Way, J. Douglas
2013-01-01
Inspired by YouTube videos, students created homework problems as part of a class project. The project has been successful at different parts of the semester and demonstrated learning of course concepts. These new problems were implemented both in class and as part of homework assignments without significant changes. Examples from a material and…
A New Algorithm to Create Balanced Teams Promoting More Diversity
ERIC Educational Resources Information Center
Dias, Teresa Galvão; Borges, José
2017-01-01
The problem of assigning students to teams can be described as maximising their profiles diversity within teams while minimising the differences among teams. This problem is commonly known as the maximally diverse grouping problem and it is usually formulated as maximising the sum of the pairwise distances among students within teams. We propose…
Effect of a "Look-Ahead" Problem on Undergraduate Engineering Students' Concept Comprehension
ERIC Educational Resources Information Center
Goodman, Kevin; Davis, Julian; McDonald, Thomas
2016-01-01
In an effort to motivate undergraduate engineering students to prepare for class by reviewing material before lectures, a "Look-Ahead" problem was utilized. Students from two undergraduate engineering courses, Statics and Electronic Circuits, were assigned problems from course material that had not yet been covered in class. These…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bernal, Andrés; Patiny, Luc; Castillo, Andrés M.
2015-02-21
Nuclear magnetic resonance (NMR) assignment of small molecules is presented as a typical example of a combinatorial optimization problem in chemical physics. Three strategies that help improve the efficiency of solution search by the branch and bound method are presented: 1. reduction of the size of the solution space by resort to a condensed structure formula, wherein symmetric nuclei are grouped together; 2. partitioning of the solution space based on symmetry, that becomes the basis for an efficient branching procedure; and 3. a criterion of selection of input restrictions that leads to increased gaps between branches and thus faster pruning of non-viable solutions. Although the examples chosen to illustrate this work focus on small-molecule NMR assignment, the results are generic and might help in solving other combinatorial optimization problems.
Global crop production forecasting: An analysis of the data system problems and their solutions
NASA Technical Reports Server (NTRS)
Neiers, J.; Graf, H.
1978-01-01
Data related problems in the acquisition and use of satellite data necessary for operational forecasting of global crop production are considered for the purpose of establishing a measurable baseline. For data acquisition the world was divided into 37 crop regions in 22 countries. These regions represent approximately 95 percent of the total world production of the selected crops of interest, i.e., wheat, corn, soybeans, and rice. Targets were assigned to each region. Limited time periods during which data could be taken (windows) were assigned to each target. Each target was assigned to a cloud region. The DSDS was used to measure the success of obtaining data for each target during the specified windows for the regional cloud conditions and the specific alternatives being analyzed. The results of this study suggest several approaches for an operational system that will perform satisfactorily with two LANDSAT type satellites.
NASA Astrophysics Data System (ADS)
Ferreira, Maria Teodora; Follmann, Rosangela; Domingues, Margarete O.; Macau, Elbert E. N.; Kiss, István Z.
2017-08-01
Phase synchronization may emerge from mutually interacting non-linear oscillators, even under weak coupling, when phase differences are bounded, while amplitudes remain uncorrelated. However, the detection of this phenomenon can be a challenging problem to tackle. In this work, we apply the Discrete Complex Wavelet Approach (DCWA) for phase assignment, considering signals from coupled chaotic systems and experimental data. The DCWA is based on the Dual-Tree Complex Wavelet Transform (DT-CWT), which is a discrete transformation. Due to its multi-scale properties in the context of phase characterization, it is possible to obtain very good results from scalar time series, even with non-phase-coherent chaotic systems without state space reconstruction or pre-processing. The method correctly predicts the phase synchronization for a chemical experiment with three locally coupled, non-phase-coherent chaotic processes. The impact of different time-scales is demonstrated on the synchronization process that outlines the advantages of DCWA for analysis of experimental data.
A systematic linear space approach to solving partially described inverse eigenvalue problems
NASA Astrophysics Data System (ADS)
Hu, Sau-Lon James; Li, Haujun
2008-06-01
Most applications of the inverse eigenvalue problem (IEP), which concerns the reconstruction of a matrix from prescribed spectral data, are associated with special classes of structured matrices. Solving the IEP requires one to satisfy both the spectral constraint and the structural constraint. If the spectral constraint consists of only one or few prescribed eigenpairs, this kind of inverse problem has been referred to as the partially described inverse eigenvalue problem (PDIEP). This paper develops an efficient, general and systematic approach to solve the PDIEP. Basically, the approach, applicable to various structured matrices, converts the PDIEP into an ordinary inverse problem that is formulated as a set of simultaneous linear equations. While solving simultaneous linear equations for model parameters, the singular value decomposition method is applied. Because of the conversion to an ordinary inverse problem, other constraints associated with the model parameters can be easily incorporated into the solution procedure. The detailed derivation and numerical examples to implement the newly developed approach to symmetric Toeplitz and quadratic pencil (including mass, damping and stiffness matrices of a linear dynamic system) PDIEPs are presented. Excellent numerical results for both kinds of problem are achieved under the situations that have either unique or infinitely many solutions.
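A small sketch of the paper's central idea for one structured class: with a single prescribed eigenpair (λ, v), the spectral constraint T v = λ v is linear in the entries of a symmetric Toeplitz matrix T, so the PDIEP reduces to a linear least-squares/SVD solve; the eigenpair below is made up.

```python
# PDIEP sketch: recover a symmetric Toeplitz matrix T (entries t_0..t_{n-1})
# from one prescribed eigenpair (lam, v).  Because (T v)_i = sum_j t_|i-j| v_j,
# the spectral constraint T v = lam * v is linear in the unknown entries, so
# it can be solved with ordinary least squares / SVD.  The eigenpair here is
# made up for illustration.
import numpy as np

def toeplitz_from_eigenpair(lam, v):
    n = len(v)
    M = np.zeros((n, n))            # M @ t = T v, with t = (t_0, ..., t_{n-1})
    for i in range(n):
        for j in range(n):
            M[i, abs(i - j)] += v[j]
    t, *_ = np.linalg.lstsq(M, lam * v, rcond=None)   # SVD-based solve
    # Rebuild the full symmetric Toeplitz matrix from its first column/row.
    return np.array([[t[abs(i - j)] for j in range(n)] for i in range(n)])

rng = np.random.default_rng(2)
v = rng.random(5)
v /= np.linalg.norm(v)
lam = 3.0
T = toeplitz_from_eigenpair(lam, v)
print("residual of spectral constraint:", np.linalg.norm(T @ v - lam * v))
```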
Storage assignment optimization in a multi-tier shuttle warehousing system
NASA Astrophysics Data System (ADS)
Wang, Yanyan; Mou, Shandong; Wu, Yaohua
2016-03-01
The current mathematical models for the storage assignment problem are generally established based on the traveling salesman problem (TSP), which has been widely applied in the conventional automated storage and retrieval system (AS/RS). However, the previous mathematical models in conventional AS/RS do not match multi-tier shuttle warehousing systems (MSWS) because the characteristics of parallel retrieval in multiple tiers and progressive vertical movement destroy the foundation of TSP. In this study, a two-stage open queuing network model in which shuttles and a lift are regarded as servers at different stages is proposed to analyze system performance in terms of shuttle waiting period (SWP) and lift idle period (LIP) during transaction cycle time. A mean arrival time difference matrix for pairwise stock keeping units (SKUs) is presented to determine the mean waiting time and queue length to optimize the storage assignment problem on the basis of SKU correlation. The decomposition method is applied to analyze the interactions among outbound task time, SWP, and LIP. The ant colony clustering algorithm is designed to determine storage partitions using clustering items. In addition, goods are assigned for storage according to the rearranging permutation and the combination of storage partitions in a 2D plane. This combination is derived based on the analysis results of the queuing network model and on three basic principles. The storage assignment method and its entire optimization algorithm as applied in a MSWS are verified through a practical engineering project conducted in the tobacco industry. The application results show that the total SWP and LIP can be reduced effectively to improve the utilization rates of all devices and to increase the throughput of the distribution center.
Tenderfooting: Tackling the Problems of Freshman Writers.
ERIC Educational Resources Information Center
Hobbs, Valerie; Rex-Kerish, Lesley
1986-01-01
University of California writing instructors must teach poorly prepared freshmen how to survive English classes and how to adapt the skills they learn to the rest of their university writing assignments. Reading, thinking, organizing, and stylistic problems are discussed. (MLW)
Propagation of Disturbances in Traffic Flow
DOT National Transportation Integrated Search
1977-09-01
The system-optimized static traffic-assignment problem in a freeway corridor network is the problem of choosing a distribution of vehicles in the network to minimize average travel time. It is of interest to know how sensitive the optimal steady-stat...
NASA Technical Reports Server (NTRS)
Arneson, Heather M.; Dousse, Nicholas; Langbort, Cedric
2014-01-01
We consider control design for positive compartmental systems in which each compartment's outflow rate is described by a concave function of the amount of material in the compartment. We address the problem of determining the routing of material between compartments to satisfy time-varying state constraints while ensuring that material reaches its intended destination over a finite time horizon. We give sufficient conditions for the existence of a time-varying state-dependent routing strategy which ensures that the closed-loop system satisfies basic network properties of positivity, conservation and interconnection while ensuring that capacity constraints are satisfied, when possible, or adjusted if a solution cannot be found. These conditions are formulated as a linear programming problem. Instances of this linear programming problem can be solved iteratively to generate a solution to the finite horizon routing problem. Results are given for the application of this control design method to an example problem. Key words: linear programming; control of networks; positive systems; controller constraints and structure.
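A toy stand-in for the kind of linear program involved: a min-cost routing problem with flow conservation and capacity bounds, solved with scipy.optimize.linprog; the network, costs, and capacities are hypothetical and this is not the authors' time-varying formulation.

```python
# A generic routing LP (min-cost flow with capacity bounds) solved with
# scipy.optimize.linprog.  This is only a toy stand-in for the time-varying
# routing formulation described in the abstract.
import numpy as np
from scipy.optimize import linprog

# Network: source 0 -> {1, 2} -> sink 3, edges listed as (tail, head).
edges = [(0, 1), (0, 2), (1, 3), (2, 3)]
cost = np.array([1.0, 2.0, 1.0, 1.0])       # per-unit routing cost
capacity = [(0.0, 3.0)] * len(edges)        # per-edge capacity bounds
supply = 4.0                                # material injected at the source

# Flow conservation: A_eq @ f = b_eq at every node (+supply at source, -supply at sink).
n_nodes = 4
A_eq = np.zeros((n_nodes, len(edges)))
for k, (u, v) in enumerate(edges):
    A_eq[u, k] += 1.0    # flow leaving u
    A_eq[v, k] -= 1.0    # flow entering v
b_eq = np.array([supply, 0.0, 0.0, -supply])

res = linprog(cost, A_eq=A_eq, b_eq=b_eq, bounds=capacity)
print(res.status, res.x)   # status 0 = optimal; x gives the flow on each edge
```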
Final Report: Design of adaptive load mitigating materials using nonlinear stress wave tailoring
2016-02-26
Prof. Trudy Kriven (UIUC, Materials Science) is an expert in ceramic and geopolymer fabrication. Figure A5.1: Schematic diagram showing the 1D chain of spherical elements in contact with (a) a uniform linear medium and (b) a composite linear medium. Rather than restricting each material point to one of the given material constituents, we allow each material point to be assigned a composite material.
Pedagogy of the logic model: teaching undergraduates to work together to change their communities.
Zimmerman, Lindsey; Kamal, Zohra; Kim, Hannah
2013-01-01
Undergraduate community psychology courses can empower students to address challenging problems in their local communities. Creating a logic model is an experiential way to learn course concepts by "doing." Throughout the semester, students work with peers to define a problem, develop an intervention, and plan an evaluation focused on an issue of concern to them. This report provides an overview of how to organize a community psychology course around the creation of a logic model in order for students to develop this applied skill. Two undergraduate student authors report on their experience with the logic model assignment, describing the community problem they chose to address, what they learned from the assignment, what they found challenging, and what they are doing now in their communities based on what they learned.
Text Summarization Model based on Facility Location Problem
NASA Astrophysics Data System (ADS)
Takamura, Hiroya; Okumura, Manabu
We propose a novel multi-document generic summarization model based on the budgeted median problem, which is a facility location problem. The summarization method based on our model is an extractive method, which selects sentences from the given document cluster and generates a summary. Each sentence in the document cluster will be assigned to one of the selected sentences, where the former sentence is supposed to be represented by the latter. Our method selects sentences to generate a summary that yields a good sentence assignment and hence covers the whole content of the document cluster. An advantage of this method is that it can incorporate asymmetric relations between sentences such as textual entailment. Through experiments, we showed that the proposed method yields good summaries on the dataset of DUC'04.
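A greedy sketch in the spirit of this facility-location view: each sentence is assigned to the most similar selected sentence, and sentences are selected under a word budget; the similarity measure and greedy rule are simplifications, not the paper's exact budgeted-median model.

```python
# Greedy sketch of budget-constrained sentence selection in the spirit of the
# facility-location view of summarization: every sentence must be "assigned"
# to (represented by) some selected sentence.  The similarity measure and the
# greedy rule here are simplifications, not the paper's exact model.
def word_overlap(a, b):
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / max(1, len(wa | wb))

def greedy_summary(sentences, budget_words):
    selected, remaining_budget = [], budget_words

    def coverage(sel):
        # Each sentence is assigned to its most similar selected sentence.
        return sum(max((word_overlap(s, t) for t in sel), default=0.0)
                   for s in sentences)

    while True:
        best, best_gain = None, 0.0
        for s in sentences:
            if s in selected or len(s.split()) > remaining_budget:
                continue
            gain = coverage(selected + [s]) - coverage(selected)
            if gain > best_gain:
                best, best_gain = s, gain
        if best is None:
            return selected
        selected.append(best)
        remaining_budget -= len(best.split())

docs = ["the cat sat on the mat",
        "a cat was sitting on a mat",
        "stock prices fell sharply today",
        "markets saw prices fall today"]
print(greedy_summary(docs, budget_words=12))
```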
Grouper: A Compact, Streamable Triangle Mesh Data Structure.
Luffel, Mark; Gurung, Topraj; Lindstrom, Peter; Rossignac, Jarek
2013-05-08
We present Grouper: an all-in-one compact file format, random-access data structure, and streamable representation for large triangle meshes. Similarly to the recently published SQuad representation, Grouper represents the geometry and connectivity of a mesh by grouping vertices and triangles into fixed-size records, most of which store two adjacent triangles and a shared vertex. Unlike SQuad, however, Grouper interleaves geometry with connectivity and uses a new connectivity representation to ensure that vertices and triangles can be stored in a coherent order that enables memory-efficient sequential stream processing. We present a linear-time construction algorithm that allows streaming out Grouper meshes using a small memory footprint while preserving the initial ordering of vertices. As part of this construction, we show how the problem of assigning vertices and triangles to groups reduces to a well-known NP-hard optimization problem, and present a simple yet effective heuristic solution that performs well in practice. Our array-based Grouper representation also doubles as a triangle mesh data structure that allows direct access to vertices and triangles. Storing only about two integer references per triangle, Grouper answers both incidence and adjacency queries in amortized constant time. Our compact representation enables data-parallel processing on multicore computers, instant partitioning and fast transmission for distributed processing, as well as efficient out-of-core access.
Navigating complex decision spaces: Problems and paradigms in sequential choice
Walsh, Matthew M.; Anderson, John R.
2015-01-01
To behave adaptively, we must learn from the consequences of our actions. Doing so is difficult when the consequences of an action follow a delay. This introduces the problem of temporal credit assignment. When feedback follows a sequence of decisions, how should the individual assign credit to the intermediate actions that comprise the sequence? Research in reinforcement learning provides two general solutions to this problem: model-free reinforcement learning and model-based reinforcement learning. In this review, we examine connections between stimulus-response and cognitive learning theories, habitual and goal-directed control, and model-free and model-based reinforcement learning. We then consider a range of problems related to temporal credit assignment. These include second-order conditioning and secondary reinforcers, latent learning and detour behavior, partially observable Markov decision processes, actions with distributed outcomes, and hierarchical learning. We ask whether humans and animals, when faced with these problems, behave in a manner consistent with reinforcement learning techniques. Throughout, we seek to identify neural substrates of model-free and model-based reinforcement learning. The former class of techniques is understood in terms of the neurotransmitter dopamine and its effects in the basal ganglia. The latter is understood in terms of a distributed network of regions including the prefrontal cortex, medial temporal lobes, cerebellum, and basal ganglia. Not only do reinforcement learning techniques have a natural interpretation in terms of human and animal behavior, but they also provide a useful framework for understanding neural reward valuation and action selection. PMID:23834192
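A minimal tabular Q-learning example on a made-up chain task illustrates how model-free temporal-difference updates propagate credit for a delayed reward back to earlier actions.

```python
# Tabular Q-learning on a short chain with a delayed reward: a minimal
# illustration of model-free temporal credit assignment.  The task is made
# up; the +1 reward arrives only after the final action of a correct sequence.
import numpy as np

rng = np.random.default_rng(0)
GOAL = 4                              # states 0..3 are decisions, 4 is the goal
Q = np.zeros((GOAL + 1, 2))
alpha, gamma = 0.1, 0.9

def step(s, a):
    # Action 1 moves one step toward the goal; action 0 abandons the episode.
    if a == 1:
        return s + 1, (1.0 if s + 1 == GOAL else 0.0), s + 1 == GOAL
    return s, 0.0, True

for _ in range(5000):
    s, done = 0, False
    while not done:
        a = int(rng.integers(2))      # random behaviour policy (off-policy)
        s_next, r, done = step(s, a)
        # TD update: credit for the delayed reward propagates backwards.
        Q[s, a] += alpha * (r + gamma * (0.0 if done else Q[s_next].max()) - Q[s, a])
        s = s_next

print(Q.round(2))   # Q[s, 1] approaches gamma**(3 - s); Q[s, 0] stays near 0
```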
Molinski, Tadeusz F.; Reynolds, Kirk A.; Morinaka, Brandon I.
2012-01-01
The absolute stereostructures of the components of symplocin A (3), a new N,N-dimethyl-terminated peptide from the Bahamian cyanobacterium, Symploca sp., were assigned from spectroscopic analysis, including MS and 2D NMR and Marfey’s analysis. The complete absolute configuration of symplocin A, including the unexpected D-configurations of the terminal N,N-dimethylisoleucine and valic acid residues, were assigned by chiral-phase HPLC of the corresponding 2-naphthacyl esters, a highly sensitive, complementary strategy for assignment of N-blocked peptide residues where Marfey’s method is ineffectual, or other methods fall short. Symplocin A exhibited potent activity as an inhibitor of cathepsin E (IC50 300 pM). PMID:22360587
Xiaoqiu Zuo; Urs Buehlmann; R. Edward Thomas
2004-01-01
Solving the least-cost lumber grade mix problem allows dimension mills to minimize the cost of dimension part production. This problem, due to its economic importance, has attracted much attention from researchers and industry in the past. Most solutions used linear programming models and assumed that a simple linear relationship existed between lumber grade mix and...
Secrets to Writing Great Papers. The Study Smart Series.
ERIC Educational Resources Information Center
Kesselman-Turkel, Judi; Peterson, Franklynn
This book explains how to work with ideas to hone them into words, providing techniques and exercises for brainstorming, choosing the right approach, working with an unknown or boring assigned topic, and selecting the best point of view. It presents 10 steps, noting related problems: (1) "Decide on Size" (no specific length is assigned);…
Integrating Global Learning into a Psychology Course Using an Online Platform
ERIC Educational Resources Information Center
Forden, Carie L.; Carrillo, Amy M.
2014-01-01
There is a demand for the integration of global learning/diversity across the curriculum. A series of cross-cultural assignments was created to facilitate global learning in two social psychology classes, one in Egypt, and one in the USA. In these assignments, students collected data and applied course concepts to real-life problems, then…
Persistence in Expatriate Academic Assignments in the United Arab Emirates: A Case Study
ERIC Educational Resources Information Center
Ryan, Gerard D.
2012-01-01
This study explored factors that influenced persistence in expatriate academic assignments in the United Arab Emirates (UAE). Specifically, the problem that was addressed was an investigation of the reasons why some expatriate academics declared their intent to leave an academic position within one year of arrival while others choose to extend…
SAN DIEGO (May 22, 2018) Sailors assigned to Coastal Riverine Squadron (CRS) 3 operate a Mark VI patrol boat in waters off San Diego during a final evaluation problem conducted by Coastal Riverine Group (CRG) 1.
Gamification for Non-Majors Mathematics: An Innovative Assignment Model
ERIC Educational Resources Information Center
Leong, Siow Hoo; Tang, Howe Eng
2017-01-01
The most important ingredient of pedagogy for teaching non-majors is getting their engagement. This paper proposes using gamification to engage non-majors. An innovative game termed Cover the Hungarian's Zeros is designed to tackle a common weakness of non-majors in mathematics when solving the assignment problem using the Hungarian Method.…
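Outside the game setting, the optimization the Hungarian Method performs can be reproduced with a library routine; the sketch below uses scipy.optimize.linear_sum_assignment on a made-up cost matrix.

```python
# The assignment problem that the "Cover the Hungarian's Zeros" game targets
# can be checked against a library implementation of the same idea:
# scipy.optimize.linear_sum_assignment (a Hungarian-style algorithm).
# The cost matrix below is made up for illustration.
import numpy as np
from scipy.optimize import linear_sum_assignment

cost = np.array([[4, 1, 3],
                 [2, 0, 5],
                 [3, 2, 2]])

rows, cols = linear_sum_assignment(cost)
print(list(zip(rows, cols)))          # optimal worker -> task pairing
print("total cost:", cost[rows, cols].sum())
```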
ERIC Educational Resources Information Center
Petko, Dominik; Egger, Nives; Cantieni, Andrea
2017-01-01
The study examines the use of weblogs in teacher education internships and its impact on student stress levels, self-efficacy, and reflective abilities. One hundred and seventy-six student teachers were randomly assigned to five groups. Four groups used weblogs (a) with emotion-focused or with problem-focused writing assignments in combination (b)…
ERIC Educational Resources Information Center
Rooks, Ronica N.; Ford, Cassandra
2013-01-01
This personal reflection describes our experiences with incorporating the scholarship of teaching and learning and problem-based techniques to facilitate undergraduate student learning and their professional development in the health sciences. We created a family health history assignment to discuss key concepts in our courses, such as health…
ERIC Educational Resources Information Center
Whalen, D. Joel
2015-01-01
This article, the second of a two-part series, features 11 teaching innovations presented at the 2014 Association for Business Communication annual conference. These 11 assignments included leadership and other-focused communication--detecting communication style, adaptive communication, personality type, delivering feedback, problem solving, and…
ERIC Educational Resources Information Center
Donaldson, Morgaen L.
2013-01-01
Purpose: How principals hire, assign, evaluate, and provide growth opportunities to teachers likely have major ramifications for teacher effectiveness and student learning. This article reports on the barriers principals encountered when carrying out these functions and variations in the degree to which they identified obstacles and problem-solved…
Deformed Palmprint Matching Based on Stable Regions.
Wu, Xiangqian; Zhao, Qiushi
2015-12-01
Palmprint recognition (PR) is an effective technology for personal recognition. A main problem, which deteriorates the performance of PR, is the deformations of palmprint images. This problem becomes more severe on contactless occasions, in which images are acquired without any guiding mechanisms, and hence critically limits the applications of PR. To solve the deformation problems, in this paper, a model for non-linearly deformed palmprint matching is derived by approximating non-linear deformed palmprint images with piecewise-linear deformed stable regions. Based on this model, a novel approach for deformed palmprint matching, named key point-based block growing (KPBG), is proposed. In KPBG, an iterative M-estimator sample consensus algorithm based on scale invariant feature transform features is devised to compute piecewise-linear transformations to approximate the non-linear deformations of palmprints, and then, the stable regions complying with the linear transformations are decided using a block growing algorithm. Palmprint feature extraction and matching are performed over these stable regions to compute matching scores for decision. Experiments on several public palmprint databases show that the proposed models and the KPBG approach can effectively solve the deformation problem in palmprint verification and outperform the state-of-the-art methods.
Transference interpretations in dynamic psychotherapy: do they really yield sustained effects?
Høglend, Per; Bøgwald, Kjell-Petter; Amlo, Svein; Marble, Alice; Ulberg, Randi; Sjaastad, Mary Cosgrove; Sørbye, Oystein; Heyerdahl, Oscar; Johansson, Paul
2008-06-01
Transference interpretation has remained a core ingredient in the psychodynamic tradition, despite limited empirical evidence for its effectiveness. In this study, the authors examined long-term effects of transference interpretations. This was a randomized controlled clinical trial, dismantling design, plus follow-up evaluations 1 year and 3 years after treatment termination. One hundred outpatients seeking psychotherapy for depression, anxiety, personality disorders, and interpersonal problems were referred to the study therapists. Patients were randomly assigned to receive weekly sessions of dynamic psychotherapy for 1 year with or without transference interpretations. Five full sessions from each therapy were rated in order to document treatment fidelity. Outcome variables were the Psychodynamic Functioning Scales (clinician rated) and the Inventory of Interpersonal Problems (self-report). Rating on the Quality of Object Relations Scale (lifelong pattern) and presence of a personality disorder were postulated moderators of treatment effects. Change over time was assessed using linear mixed models. Despite an absence of differential treatment efficacy, both treatments demonstrated significant improvement during treatment and also after treatment termination. However, patients with a lifelong pattern of poor object relations profited more from 1 year of therapy with transference interpretations than from therapy without transference interpretations. This effect was sustained throughout the 4-year study period. The goal of transference interpretation is sustained improvement of the patient's relationships outside of therapy. Transference interpretation seems to be especially important for patients with long-standing, more severe interpersonal problems.
Feature and Region Selection for Visual Learning.
Zhao, Ji; Wang, Liantao; Cabral, Ricardo; De la Torre, Fernando
2016-03-01
Visual learning problems, such as object classification and action recognition, are typically approached using extensions of the popular bag-of-words (BoWs) model. Despite its great success, it is unclear what visual features the BoW model is learning. Which regions in the image or video are used to discriminate among classes? Which are the most discriminative visual words? Answering these questions is fundamental for understanding existing BoW models and inspiring better models for visual recognition. To answer these questions, this paper presents a method for feature selection and region selection in the visual BoW model. This allows for an intermediate visualization of the features and regions that are important for visual learning. The main idea is to assign latent weights to the features or regions, and jointly optimize these latent variables with the parameters of a classifier (e.g., support vector machine). There are four main benefits of our approach: 1) our approach accommodates non-linear additive kernels, such as the popular χ² and intersection kernel; 2) our approach is able to handle both regions in images and spatio-temporal regions in videos in a unified way; 3) the feature selection problem is convex, and both problems can be solved using a scalable reduced gradient method; and 4) we point out strong connections with multiple kernel learning and multiple instance learning approaches. Experimental results in the PASCAL VOC 2007, MSR Action Dataset II and YouTube illustrate the benefits of our approach.
A Stochastic Inversion Method for Potential Field Data: Ant Colony Optimization
NASA Astrophysics Data System (ADS)
Liu, Shuang; Hu, Xiangyun; Liu, Tianyou
2014-07-01
Simulating natural ants' foraging behavior, the ant colony optimization (ACO) algorithm performs excellently in combinatorial optimization problems, for example the traveling salesman problem and the quadratic assignment problem. However, the ACO is seldom used to invert gravitational and magnetic data. On the basis of the continuous and multi-dimensional objective function for potential field data optimization inversion, we present the node partition strategy ACO (NP-ACO) algorithm for inversion of model variables of fixed shape and recovery of physical property distributions of complicated shape models. We divide the continuous variables into discrete nodes and ants directionally tour the nodes by use of transition probabilities. We update the pheromone trails by use of a Gaussian mapping between the objective function value and the quantity of pheromone. This can analyze the search results in real time and improve the rate of convergence and the precision of inversion. Traditional mappings, including the ant-cycle system, weaken the differences between ant individuals and lead to premature convergence. We tested our method by use of synthetic data and real data from scenarios involving gravity and magnetic anomalies. The inverted model variables and recovered physical property distributions were in good agreement with the true values. The ACO algorithm for binary representation imaging and full imaging can recover sharper physical property distributions than traditional linear inversion methods. The ACO has good optimization capability and some excellent characteristics, for example robustness, parallel implementation, and portability, compared with other stochastic metaheuristics.
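A hedged sketch of the node-partition idea: each continuous model variable is discretized into nodes, ants select one node per variable with pheromone-proportional probabilities, and pheromone is reinforced through a Gaussian mapping of the misfit. The function name, the specific Gaussian mapping, and all parameter values are illustrative assumptions, not the paper's NP-ACO implementation.

```python
import numpy as np

def np_aco(objective, bounds, n_nodes=50, n_ants=30, n_iter=100,
           rho=0.1, rng=np.random.default_rng(0)):
    """Minimal node-partition ACO sketch for minimizing a continuous
    misfit: discretize each variable into n_nodes candidate values,
    let ants pick nodes by pheromone-proportional probability, and
    deposit pheromone via a Gaussian mapping (better fits deposit more)."""
    dim = len(bounds)
    nodes = np.array([np.linspace(lo, hi, n_nodes) for lo, hi in bounds])
    tau = np.ones((dim, n_nodes))                 # pheromone trails
    best_x, best_f = None, np.inf
    for _ in range(n_iter):
        for _ in range(n_ants):
            idx = [rng.choice(n_nodes, p=tau[k] / tau[k].sum())
                   for k in range(dim)]
            x = nodes[np.arange(dim), idx]
            f = objective(x)
            if f < best_f:
                best_x, best_f = x, f
            # Gaussian mapping: pheromone increment decays with misfit
            dtau = np.exp(-0.5 * ((f - best_f) / (abs(best_f) + 1e-12)) ** 2)
            tau[np.arange(dim), idx] += dtau
        tau *= (1.0 - rho)                        # evaporation
    return best_x, best_f

# Example: invert two parameters of a synthetic quadratic misfit.
x_hat, f_hat = np_aco(lambda x: (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2,
                      bounds=[(-5, 5), (-5, 5)])
```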
1981-06-15
SACLANTCEN SR-50: A résumé of stochastic, time-varying, linear system theory. The order in which systems are concatenated is unimportant; these results are exactly analogous to the results of time-invariant linear system theory.
Connected Component Model for Multi-Object Tracking.
He, Zhenyu; Li, Xin; You, Xinge; Tao, Dacheng; Tang, Yuan Yan
2016-08-01
In multi-object tracking, it is critical to explore the data associations by exploiting the temporal information from a sequence of frames rather than the information from the adjacent two frames. Since straightforwardly obtaining data associations from multi-frames is an NP-hard multi-dimensional assignment (MDA) problem, most existing methods solve this MDA problem by either developing complicated approximate algorithms, or simplifying MDA as a 2D assignment problem based upon the information extracted only from adjacent frames. In this paper, we show that the relation between associations of two observations is an equivalence relation in the data association problem, based on the spatio-temporal constraint that the trajectories of different objects must be disjoint. Therefore, the MDA problem can be equivalently divided into independent subproblems by equivalence partitioning. In contrast to existing works for solving the MDA problem, we develop a connected component model (CCM) by exploiting the constraints of the data association and the equivalence relation on the constraints. Based upon CCM, we can efficiently obtain the global solution of the MDA problem for multi-object tracking by optimizing a sequence of independent data association subproblems. Experiments on challenging public data sets demonstrate that our algorithm outperforms the state-of-the-art approaches.
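The equivalence-partitioning step can be illustrated with a standard union-find (disjoint-set) structure: observations whose association is not excluded by the spatio-temporal constraints are linked, and each resulting connected component becomes an independent assignment subproblem. This is a generic sketch of the idea, not the authors' CCM code; the data layout is assumed.

```python
class UnionFind:
    """Disjoint-set structure used to partition observations into
    independent data-association subproblems (connected components)."""
    def __init__(self, n):
        self.parent = list(range(n))

    def find(self, i):
        while self.parent[i] != i:
            self.parent[i] = self.parent[self.parent[i]]  # path halving
            i = self.parent[i]
        return i

    def union(self, i, j):
        ri, rj = self.find(i), self.find(j)
        if ri != rj:
            self.parent[ri] = rj

def split_into_components(n_obs, candidate_links):
    """candidate_links: pairs (i, j) of observations whose association is
    not ruled out by the spatio-temporal constraints. Returns a dict
    mapping each component root to its observation indices; each
    component can then be solved as a separate, smaller assignment."""
    uf = UnionFind(n_obs)
    for i, j in candidate_links:
        uf.union(i, j)
    components = {}
    for k in range(n_obs):
        components.setdefault(uf.find(k), []).append(k)
    return components
```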
Novel approaches for road congestion mitigation.
DOT National Transportation Integrated Search
2012-07-02
Transportation planning is usually aiming to solve two problems: the traffic assignment and the toll pricing problems. The latter one utilizes information from the first one, in order to find the optimal set of tolls that is the set of tolls that lea...
Novel approaches for road congestion minimization.
DOT National Transportation Integrated Search
2012-07-01
Transportation planning is usually aiming to solve two problems: the traffic assignment and the toll pricing problems. The latter one utilizes information from the first one, in order to find the optimal set of tolls that is the set of tolls that lea...
Nonlinear rescaling of control values simplifies fuzzy control
NASA Technical Reports Server (NTRS)
Vanlangingham, H.; Tsoukkas, A.; Kreinovich, V.; Quintana, C.
1993-01-01
Traditional control theory is well-developed mainly for linear control situations. In non-linear cases there is no general method of generating a good control, so we have to rely on the ability of the experts (operators) to control them. If we want to automate their control, we must acquire their knowledge and translate it into a precise control strategy. The experts' knowledge is usually represented in non-numeric terms, namely, in terms of uncertain statements of the type 'if the obstacle is straight ahead, the distance to it is small, and the velocity of the car is medium, press the brakes hard'. Fuzzy control is a methodology that translates such statements into precise formulas for control. The necessary first step of this strategy consists of assigning membership functions to all the terms that the expert uses in his rules (in our sample phrase these words are 'small', 'medium', and 'hard'). The appropriate choice of a membership function can drastically improve the quality of a fuzzy control. In the simplest cases, we can take the functions whose domains have equally spaced endpoints. Because of that, many software packages for fuzzy control are based on this choice of membership functions. This choice is not very efficient in more complicated cases. Therefore, methods have been developed that use neural networks or genetic algorithms to 'tune' membership functions. But this tuning takes lots of time (for example, several thousand iterations are typical for neural networks). In some cases there are evident physical reasons why equally spaced domains do not work: e.g., if the control variable u is always positive (i.e., if we control temperature in a reactor), then negative values (that are generated by equal spacing) simply make no sense. In this case it sounds reasonable to choose another scale u' = f(u) to represent u, so that equal spacing will work fine for u'. In the present paper we formulate the problem of finding the best rescaling function, solve this problem, and show (on a real-life example) that after an optimal rescaling, the un-tuned fuzzy control can be as good as the best state-of-the-art traditional non-linear controls.
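A small sketch of the rescaling idea for a strictly positive control variable: triangular membership functions are spaced equally on the rescaled axis u' = f(u), so no term covers physically meaningless negative values. The choice f = log, the function names, and the term count are illustrative assumptions, not the paper's optimal rescaling.

```python
import numpy as np

def triangular(x, left, center, right):
    """Standard triangular membership function."""
    return np.clip(np.minimum((x - left) / (center - left + 1e-12),
                              (right - x) / (right - center + 1e-12)), 0.0, 1.0)

def rescaled_memberships(u, u_min, u_max, n_terms=5, f=np.log):
    """Equally spaced triangular terms on the rescaled axis u' = f(u).
    For a strictly positive control (e.g., reactor temperature input),
    f = log keeps every term inside the physically meaningful range."""
    up = f(u)
    centers = np.linspace(f(u_min), f(u_max), n_terms)
    step = centers[1] - centers[0]
    return np.array([triangular(up, c - step, c, c + step) for c in centers])

# Degrees of membership of u = 3.0 in five terms over (0.1, 100).
mu = rescaled_memberships(3.0, 0.1, 100.0)
```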
Digital program for solving the linear stochastic optimal control and estimation problem
NASA Technical Reports Server (NTRS)
Geyser, L. C.; Lehtinen, B.
1975-01-01
A computer program is described which solves the linear stochastic optimal control and estimation (LSOCE) problem by using a time-domain formulation. The LSOCE problem is defined as that of designing controls for a linear time-invariant system which is disturbed by white noise in such a way as to minimize a performance index which is quadratic in state and control variables. The LSOCE problem and solution are outlined; brief descriptions are given of the solution algorithms, and complete descriptions of each subroutine, including usage information and digital listings, are provided. A test case is included, as well as information on the IBM 7090-7094 DCS time and storage requirements.
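For readers wanting a modern equivalent of the LSOCE design, the sketch below solves the two continuous-time algebraic Riccati equations of the textbook LQG problem with SciPy. It follows the standard regulator/estimator split, not the program described above; all matrices are supplied by the caller.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

def lqg_gains(A, B, C, Q, R, W, V):
    """Hedged sketch of a linear stochastic optimal control and
    estimation (LQG) design: plant dx/dt = Ax + Bu + w with process
    noise covariance W, measurements y = Cx + v with noise covariance V,
    quadratic cost on x'Qx + u'Ru. Returns the state-feedback gain K
    (u = -K x_hat) and the steady-state Kalman gain L."""
    P = solve_continuous_are(A, B, Q, R)        # control Riccati equation
    K = np.linalg.solve(R, B.T @ P)
    S = solve_continuous_are(A.T, C.T, W, V)    # filter Riccati equation (dual)
    L = S @ C.T @ np.linalg.inv(V)
    return K, L
```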
NASA Astrophysics Data System (ADS)
Prasad, S.; Bruce, L. M.
2007-04-01
There is a growing interest in using multiple sources for automatic target recognition (ATR) applications. One approach is to take multiple, independent observations of a phenomenon and perform a feature level or a decision level fusion for ATR. This paper proposes a method to utilize these types of multi-source fusion techniques to exploit hyperspectral data when only a small number of training pixels are available. Conventional hyperspectral image based ATR techniques project the high dimensional reflectance signature onto a lower dimensional subspace using techniques such as Principal Components Analysis (PCA), Fisher's linear discriminant analysis (LDA), subspace LDA and stepwise LDA. While some of these techniques attempt to solve the curse of dimensionality, or small sample size problem, these are not necessarily optimal projections. In this paper, we present a divide and conquer approach to address the small sample size problem. The hyperspectral space is partitioned into contiguous subspaces such that the discriminative information within each subspace is maximized, and the statistical dependence between subspaces is minimized. We then treat each subspace as a separate source in a multi-source multi-classifier setup and test various decision fusion schemes to determine their efficacy. Unlike previous approaches which use correlation between variables for band grouping, we study the efficacy of higher order statistical information (using average mutual information) for a bottom up band grouping. We also propose a confidence measure based decision fusion technique, where the weights associated with various classifiers are based on their confidence in recognizing the training data. To this end, training accuracies of all classifiers are used for weight assignment in the fusion process of test pixels. The proposed methods are tested using hyperspectral data with known ground truth, such that the efficacy can be quantitatively measured in terms of target recognition accuracies.
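A minimal sketch of the confidence-weighted decision fusion step described above: each subspace classifier's posterior is weighted by its training accuracy before the final class decision. The array shapes and the simple normalization are assumptions for illustration, not the authors' exact scheme.

```python
import numpy as np

def confidence_weighted_fusion(class_probs, train_accuracies):
    """class_probs: list of (n_pixels x n_classes) posterior arrays,
    one per subspace classifier; train_accuracies: each classifier's
    accuracy on the training data, used as its fusion weight."""
    w = np.asarray(train_accuracies, dtype=float)
    w = w / w.sum()                                   # normalize weights
    fused = sum(wi * p for wi, p in zip(w, class_probs))
    return fused.argmax(axis=1)                       # fused class labels
```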
NASA Astrophysics Data System (ADS)
Amsallem, David; Tezaur, Radek; Farhat, Charbel
2016-12-01
A comprehensive approach for real-time computations using a database of parametric, linear, projection-based reduced-order models (ROMs) based on arbitrary underlying meshes is proposed. In the offline phase of this approach, the parameter space is sampled and linear ROMs defined by linear reduced operators are pre-computed at the sampled parameter points and stored. Then, these operators and associated ROMs are transformed into counterparts that satisfy a certain notion of consistency. In the online phase of this approach, a linear ROM is constructed in real-time at a queried but unsampled parameter point by interpolating the pre-computed linear reduced operators on matrix manifolds and therefore computing an interpolated linear ROM. The proposed overall model reduction framework is illustrated with two applications: a parametric inverse acoustic scattering problem associated with a mockup submarine, and a parametric flutter prediction problem associated with a wing-tank system. The second application is implemented on a mobile device, illustrating the capability of the proposed computational framework to operate in real-time.
On the Inefficiency of Equilibria in Linear Bottleneck Congestion Games
NASA Astrophysics Data System (ADS)
de Keijzer, Bart; Schäfer, Guido; Telelis, Orestis A.
We study the inefficiency of equilibrium outcomes in bottleneck congestion games. These games model situations in which strategic players compete for a limited number of facilities. Each player allocates his weight to a (feasible) subset of the facilities with the goal to minimize the maximum (weight-dependent) latency that he experiences on any of these facilities. We derive upper and (asymptotically) matching lower bounds on the (strong) price of anarchy of linear bottleneck congestion games for a natural load balancing social cost objective (i.e., minimize the maximum latency of a facility). We restrict our studies to linear latency functions. Linear bottleneck congestion games still constitute a rich class of games and generalize, for example, load balancing games with identical or uniformly related machines with or without restricted assignments.
NASA Astrophysics Data System (ADS)
Courdurier, M.; Monard, F.; Osses, A.; Romero, F.
2015-09-01
In medical single-photon emission computed tomography (SPECT) imaging, we seek to simultaneously obtain the internal radioactive sources and the attenuation map using not only ballistic measurements but also first-order scattering measurements and assuming a very specific scattering regime. The problem is modeled using the radiative transfer equation by means of an explicit non-linear operator that gives the ballistic and scattering measurements as a function of the radioactive source and attenuation distributions. First, by differentiating this non-linear operator we obtain a linearized inverse problem. Then, under regularity hypothesis for the source distribution and attenuation map and considering small attenuations, we rigorously prove that the linear operator is invertible and we compute its inverse explicitly. This allows proof of local uniqueness for the non-linear inverse problem. Finally, using the previous inversion result for the linear operator, we propose a new type of iterative algorithm for simultaneous source and attenuation recovery for SPECT based on the Neumann series and a Newton-Raphson algorithm.
NASA Technical Reports Server (NTRS)
Belcastro, Christine M.
1998-01-01
Robust control system analysis and design is based on an uncertainty description, called a linear fractional transformation (LFT), which separates the uncertain (or varying) part of the system from the nominal system. These models are also useful in the design of gain-scheduled control systems based on Linear Parameter Varying (LPV) methods. Low-order LFT models are difficult to form for problems involving nonlinear parameter variations. This paper presents a numerical computational method for constructing an LFT model for a given LPV model. The method is developed for multivariate polynomial problems, and uses simple matrix computations to obtain an exact low-order LFT representation of the given LPV system without the use of model reduction. Although the method is developed for multivariate polynomial problems, multivariate rational problems can also be solved using this method by reformulating the rational problem into a polynomial form.
Pre-Service Teacher Scientific Behavior: Comparative Study of Paired Science Project Assignments
ERIC Educational Resources Information Center
Bulunuz, Mizrap; Tapan Broutin, Menekse Seden; Bulunuz, Nermin
2016-01-01
Problem Statement: University students usually lack the skills to rigorously define a multi-dimensional real-life problem and its limitations in an explicit, clear and testable way, which prevents them from forming a reliable method, obtaining relevant results and making balanced judgments to solve a problem. Purpose of the Study: The study…
A Randomized Trial of Brief Interventions for Problem and Pathological Gamblers
ERIC Educational Resources Information Center
Petry, Nancy M.; Weinstock, Jeremiah; Ledgerwood, David M.; Morasco, Benjamin
2008-01-01
Limited research exists regarding methods for reducing problem gambling. Problem gamblers (N = 180) were randomly assigned to assessment only control, 10 min of brief advice, 1 session of motivational enhancement therapy (MET), or 1 session of MET plus 3 sessions of cognitive-behavioral therapy. Gambling was assessed at baseline, at 6 weeks, and…
Can Short Duration Visual Cues Influence Students' Reasoning and Eye Movements in Physics Problems?
ERIC Educational Resources Information Center
Madsen, Adrian; Rouinfar, Amy; Larson, Adam M.; Loschky, Lester C.; Rebello, N. Sanjay
2013-01-01
We investigate the effects of visual cueing on students' eye movements and reasoning on introductory physics problems with diagrams. Participants in our study were randomly assigned to either the cued or noncued conditions, which differed by whether the participants saw conceptual physics problems overlaid with dynamic visual cues. Students in the…
Writing for Business: A Graduate-Level Course in Problem-Solving
ERIC Educational Resources Information Center
Seifert, Christine
2009-01-01
This paper details an assignment sequence that requires graduate students in an applied communication program to identify problems that clients may not be aware of. Good writing and good problem-solving are "inextricably linked to [a student's] ability to frame an issue, gather, and analyze information, and to structure a helpful response" (Musso,…
DOT National Transportation Integrated Search
2010-03-01
Urban transportation networks, consisting of numerous links and nodes, experience traffic incidents such as accidents and road maintenance work. A typical consequence of incidents is congestion which results in long queues and causes high travel time...
Some insights on hard quadratic assignment problem instances
NASA Astrophysics Data System (ADS)
Hussin, Mohamed Saifullah
2017-11-01
Since the formal introduction of metaheuristics, a huge number of Quadratic Assignment Problem (QAP) instances have been introduced. Those instances, however, are loosely structured, which makes it difficult to perform any systematic analysis. The QAPLIB, for example, is a library that contains a huge number of QAP benchmark instances of different size and structure, but with very limited availability for every instance type. This prevents researchers from performing organized studies on those instances, such as parameter tuning and testing. In this paper, we discuss several hard instances that have been introduced over the years, and algorithms that have been used for solving them.
Linear decentralized systems with special structure. [for twin lift helicopters
NASA Technical Reports Server (NTRS)
Martin, C. F.
1982-01-01
Certain fundamental structures associated with linear systems having internal symmetries are outlined. It is shown that the theory of finite-dimensional algebras and their representations are closely related to such systems. It is also demonstrated that certain problems in the decentralized control of symmetric systems are equivalent to long-standing problems of linear systems theory. Even though the structure imposed arose in considering the problems of twin-lift helicopters, any large system composed of several identical intercoupled control systems can be modeled by a linear system that satisfies the constraints imposed. Internal symmetry can be exploited to yield new system-theoretic invariants and a better understanding of the way in which the underlying structure affects overall system performance.
Finite Element Based Structural Damage Detection Using Artificial Boundary Conditions
2007-09-01
Frequency sensitivities are the basis for a linear approximation to compute the change in the natural frequencies with respect to the variables under consideration. The general problem statement for a non-linear constrained optimization problem is to minimize an objective function f(x) subject to constraints.
Fiber optic configurations for local area networks
NASA Technical Reports Server (NTRS)
Nassehi, M. M.; Tobagi, F. A.; Marhic, M. E.
1985-01-01
A number of fiber optic configurations for a new class of demand assignment multiple-access local area networks requiring a physical ordering among stations are proposed. In such networks, the data transmission and linear-ordering functions may be distinguished and be provided by separate data and control subnetworks. The configurations proposed for the data subnetwork are based on the linear, star, and tree topologies. To provide the linear-ordering function, the control subnetwork must always have a linear unidirectional bus structure. Due to the reciprocity and excess loss of optical couplers, the number of stations that can be accommodated on a linear fiber optic bus is severely limited. Two techniques are proposed to overcome this limitation. For each of the data and control subnetwork configurations, the maximum number of stations as a function of the power margin, for both reciprocal and nonreciprocal couplers, is computed.
Solving optimization problems on computational grids.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wright, S. J.; Mathematics and Computer Science
2001-05-01
Multiprocessor computing platforms, which have become more and more widely available since the mid-1980s, are now heavily used by organizations that need to solve very demanding computational problems. Parallel computing is now central to the culture of many research communities. Novel parallel approaches were developed for global optimization, network optimization, and direct-search methods for nonlinear optimization. Activity was particularly widespread in parallel branch-and-bound approaches for various problems in combinatorial and network optimization. As the cost of personal computers and low-end workstations has continued to fall, while the speed and capacity of processors and networks have increased dramatically, 'cluster' platforms have become popular in many settings. A somewhat different type of parallel computing platform known as a computational grid (alternatively, metacomputer) has arisen in comparatively recent times. Broadly speaking, this term refers not to a multiprocessor with identical processing nodes but rather to a heterogeneous collection of devices that are widely distributed, possibly around the globe. The advantage of such platforms is obvious: they have the potential to deliver enormous computing power. Just as obviously, however, the complexity of grids makes them very difficult to use. The Condor team, headed by Miron Livny at the University of Wisconsin, was among the pioneers in providing infrastructure for grid computations. More recently, the Globus project has developed technologies to support computations on geographically distributed platforms consisting of high-end computers, storage and visualization devices, and other scientific instruments. In 1997, we started the metaneos project as a collaborative effort between optimization specialists and the Condor and Globus groups. Our aim was to address complex, difficult optimization problems in several areas, designing and implementing the algorithms and the software infrastructure needed to solve these problems on computational grids. This article describes some of the results we have obtained during the first three years of the metaneos project. Our efforts have led to development of the runtime support library MW for implementing algorithms with master-worker control structure on Condor platforms. This work is discussed here, along with work on algorithms and codes for integer linear programming, the quadratic assignment problem, and stochastic linear programming. Our experiences in the metaneos project have shown that cheap, powerful computational grids can be used to tackle large optimization problems of various types. In an industrial or commercial setting, the results demonstrate that one may not have to buy powerful computational servers to solve many of the large problems arising in areas such as scheduling, portfolio optimization, or logistics; the idle time on employee workstations (or, at worst, an investment in a modest cluster of PCs) may do the job. For the optimization research community, our results motivate further work on parallel, grid-enabled algorithms for solving very large problems of other types. The fact that very large problems can be solved cheaply allows researchers to better understand issues of 'practical' complexity and of the role of heuristics.
NASA Technical Reports Server (NTRS)
Wendel, Thomas R.; Boland, Joseph R.; Hahne, David E.
1991-01-01
Flight-control laws are developed for a wind-tunnel aircraft model flying at a high angle of attack by using a synthesis technique called direct eigenstructure assignment. The method employs flight guidelines and control-power constraints to develop the control laws, and gain schedules and nonlinear feedback compensation provide a framework for considering the nonlinear nature of the attack angle. Linear and nonlinear evaluations show that the control laws are effective, a conclusion that is further confirmed by a scale model used for free-flight testing.
Understanding Solubility through Excel Spreadsheets
NASA Astrophysics Data System (ADS)
Brown, Pamela
2001-02-01
This article describes assignments related to the solubility of inorganic salts that can be given in an introductory general chemistry course. Le Châtelier's principle, solubility, unit conversion, and thermodynamics are tied together to calculate heats of solution by two methods: heats of formation and an application of the van't Hoff equation. These assignments address the need for math, graphing, and computer skills in the chemical technology program by developing skill in the use of Microsoft Excel to prepare spreadsheets and graphs and to perform linear and nonlinear curve-fitting. Background information on the value of understanding and predicting solubility is provided.
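The van't Hoff route to the heat of solution amounts to a linear fit of ln K against 1/T with slope -ΔH/R. The short sketch below mirrors the spreadsheet exercise in NumPy, with made-up solubility-product values purely to show the arithmetic.

```python
import numpy as np

R = 8.314                                          # J mol^-1 K^-1
T = np.array([283.15, 298.15, 313.15, 328.15])     # K (hypothetical temperatures)
Ksp = np.array([1.1e-5, 2.0e-5, 3.4e-5, 5.5e-5])   # hypothetical solubility products

# van't Hoff: ln K is approximately linear in 1/T with slope -dH/R.
slope, intercept = np.polyfit(1.0 / T, np.log(Ksp), 1)
delta_H = -slope * R                               # J mol^-1 (positive = endothermic)
print(f"Estimated heat of solution: {delta_H / 1000:.1f} kJ/mol")
```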
Solving the Credit Assignment Problem With the Prefrontal Cortex
Stolyarova, Alexandra
2018-01-01
In naturalistic multi-cue and multi-step learning tasks, where outcomes of behavior are delayed in time, discovering which choices are responsible for rewards can present a challenge, known as the credit assignment problem. In this review, I summarize recent work that highlighted a critical role for the prefrontal cortex (PFC) in assigning credit where it is due in tasks where only a few of the multitude of cues or choices are relevant to the final outcome of behavior. Collectively, these investigations have provided compelling support for specialized roles of the orbitofrontal (OFC), anterior cingulate (ACC), and dorsolateral prefrontal (dlPFC) cortices in contingent learning. However, recent work has similarly revealed shared contributions and emphasized rich and heterogeneous response properties of neurons in these brain regions. Such functional overlap is not surprising given the complexity of reciprocal projections spanning the PFC. In the concluding section, I overview the evidence suggesting that the OFC, ACC and dlPFC communicate extensively, sharing the information about presented options, executed decisions and received rewards, which enables them to assign credit for outcomes to choices on which they are contingent. This account suggests that lesion or inactivation/inhibition experiments targeting a localized PFC subregion will be insufficient to gain a fine-grained understanding of credit assignment during learning and instead poses refined questions for future research, shifting the focus from focal manipulations to experimental techniques targeting cortico-cortical projections. PMID:29636659
NASA Technical Reports Server (NTRS)
Voorhies, Coerte V.
1993-01-01
The problem of estimating a steady fluid velocity field near the top of Earth's core which induces the secular variation (SV) indicated by models of the observed geomagnetic field is examined in the source-free mantle/frozen-flux core (SFM/FFC) approximation. This inverse problem is non-linear because solutions of the forward problem are deterministically chaotic. The SFM/FFC approximation is inexact, and neither the models nor the observations they represent are either complete or perfect. A method is developed for solving the non-linear inverse motional induction problem posed by the hypothesis of (piecewise, statistically) steady core surface flow and the supposition of a complete initial geomagnetic condition. The method features iterative solution of the weighted, linearized least-squares problem and admits optional biases favoring surficially geostrophic flow and/or spatially simple flow. Two types of weights are advanced: radial field weights for fitting the evolution of the broad-scale portion of the radial field component near Earth's surface implied by the models, and generalized weights for fitting the evolution of the broad-scale portion of the scalar potential specified by the models.
Eigensensitivity analysis of rotating clamped uniform beams with the asymptotic numerical method
NASA Astrophysics Data System (ADS)
Bekhoucha, F.; Rechak, S.; Cadou, J. M.
2016-12-01
In this paper, free vibrations of a rotating clamped Euler-Bernoulli beam with uniform cross section are studied using a continuation method, namely the asymptotic numerical method. The governing equations of motion are derived using Lagrange's method. The kinetic and strain energy expressions are derived via the Rayleigh-Ritz method using a set of hybrid variables and based on a linear deflection assumption. The derived equations are transformed into two eigenvalue problems: the first is a linear gyroscopic eigenvalue problem that captures the coupled lagging and stretch motions through gyroscopic terms, while the second is a standard eigenvalue problem corresponding to the flapping motion. These two eigenvalue problems are transformed into two functionals treated by the continuation method, the asymptotic numerical method. A new method is proposed for the solution of the linear gyroscopic system, based on an augmented system that transforms the original problem into a standard form with real symmetric matrices. By using techniques to resolve these singular problems with the continuation method, evolution curves of the natural frequencies against dimensionless angular velocity are determined. At high angular velocity, some singular points, due to the linear elastic assumption, are computed. Numerical tests of convergence are conducted and the obtained results are compared to the exact values. Results obtained by continuation are compared to those computed with the discrete eigenvalue problem.
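As a point of reference for the gyroscopic eigenvalue problem, the sketch below uses the usual first-order (state-space) linearization of (λ²M + λG + K)q = 0 and a dense eigensolver. This is a generic alternative to the augmented-system and continuation machinery of the paper; the matrices are supplied by the caller.

```python
import numpy as np
from scipy.linalg import eig

def gyroscopic_frequencies(M, G, K):
    """Solve (lam^2 M + lam G + K) q = 0 by the standard companion
    linearization. Returns the sorted nonzero |Im(lam)| values; for an
    undamped gyroscopic system each natural frequency appears twice
    (once for +i*omega, once for -i*omega)."""
    n = M.shape[0]
    A = np.block([[np.zeros((n, n)), np.eye(n)],
                  [-np.linalg.solve(M, K), -np.linalg.solve(M, G)]])
    lam = eig(A, right=False)
    freqs = np.sort(np.abs(lam.imag))
    return freqs[freqs > 1e-10]
```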
ERIC Educational Resources Information Center
Weissberg, Michael W.
In an effort to improve the writing performance of non-native English-speaking students in a college preparatory composition course, a project was undertaken to reduce problems of self-esteem caused by communication apprehension through a speech assignment involving critical thinking and peer reviews. To evaluate the effect of the assignment, the…
2013-12-01
The top trading cycle is popular in market design because it can be Pareto efficient and strategy-proof when making assignments. Results are reported for three groups: Platoons 1 and 2, Platoons 3 and 4, and Platoons 5 and 6.
2007-06-01
Introduces ASC-U's approach for solving the dynamic UAV allocation problem; allocation decisions must be made in order to respond to the dynamic environment faced. (Figures include an assignments dynamics example and ASC-U dynamic cueing.)
Model of load distribution for earth observation satellite
NASA Astrophysics Data System (ADS)
Tu, Shumin; Du, Min; Li, Wei
2017-03-01
For a system with multiple types of EOS (Earth Observing Satellites), it is a vital issue in the astronautics field to ensure that each type of payload carried by the group of EOS can be used efficiently and reasonably. Currently, most research on the configuration of satellites and payloads focuses on scheduling for launched satellites. However, the assignment of payloads to un-launched satellites, which is as crucial as the scheduling of tasks, has received little attention. Moreover, current models of satellite resource scheduling lack generality. Borrowing the idea of role-based access control (RBAC) from information systems, this paper puts forward a model based on role mining in RBAC to improve the generality and foresight of satellite-payload assignment methods. In this way, the satellite-payload assignment can be mapped onto the role-mining problem. A novel method, based on the idea of biclique combination in graph theory and evolutionary algorithms in intelligent computing, is introduced to address the role-mining problem of satellite-payload assignment. Simulation experiments are performed to verify the novel method. Finally, the work of this paper is concluded.
de Graaf, Nastasja M; Cohen-Kettenis, Peggy T; Carmichael, Polly; de Vries, Annelou L C; Dhondt, Karlien; Laridaen, Jolien; Pauli, Dagmar; Ball, Juliane; Steensma, Thomas D
2018-07-01
Adolescents seeking professional help with their gender identity development often present with psychological difficulties. Existing literature on the psychological functioning of gender diverse young people is limited and mostly bound to national chart reviews. This study examined the prevalence of psychological functioning and peer relationship problems in adolescents across four European specialist gender services (The Netherlands, Belgium, the UK, and Switzerland), using the Child Behavioural Checklist (CBCL) and the Youth Self-Report (YSR). Differences in psychological functioning and peer relationships were found in gender diverse adolescents across Europe. Overall, emotional and behavioural problems and peer relationship problems were most prevalent in adolescents from the UK, followed by Switzerland and Belgium. The fewest behavioural and emotional problems and peer relationship problems were reported by adolescents from The Netherlands. Across the four clinics, a similar pattern of gender differences was found. Birth-assigned girls showed more behavioural problems and externalising problems in the clinical range, as reported by their parents. According to self-report, internalising problems in the clinical range were more prevalent in adolescent birth-assigned boys. More research is needed to gain a better understanding of the differences in clinical presentations in gender diverse adolescents and to investigate what contextual factors may contribute to them.
A New Pattern of Getting Nasty Number in Graphical Method
NASA Astrophysics Data System (ADS)
Sumathi, P.; Indhumathi, N.
2018-04-01
This paper proposes a new technique for obtaining nasty numbers using the graphical method in linear programming, and it has been proved for various linear programming problems. Some characterisations of nasty numbers are also discussed.
NASA Astrophysics Data System (ADS)
Pradanti, Paskalia; Hartono
2018-03-01
Determination of the insulin injection dose in diabetes mellitus treatment can be considered an optimal control problem. This article aims to simulate optimal blood glucose control for a patient with diabetes mellitus. The blood glucose regulation of a diabetic patient is represented by Ackerman's Linear Model. The problem is then solved using the dynamic programming method. The desired blood glucose level is obtained by minimizing a performance index in Lagrange form. The results show that dynamic programming based on Ackerman's Linear Model solves the problem quite well.
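A compact sketch of the dynamic-programming solution for a discrete-time linear-quadratic problem, which is the structure underlying the approach above; the two-state matrices in the example are illustrative placeholders, not the parameters of Ackerman's model.

```python
import numpy as np

def finite_horizon_lqr(A, B, Q, R, QT, N):
    """Backward Riccati recursion (dynamic programming) for
    x_{k+1} = A x_k + B u_k with stage cost x'Qx + u'Ru and terminal
    cost x'QT x. Returns time-varying gains with u_k = -gains[k] @ x_k."""
    P = QT
    gains = []
    for _ in range(N):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
        gains.append(K)
    gains.reverse()                 # gains[k] applies at step k
    return gains

# Example with an illustrative 2-state model (glucose and insulin
# deviations from basal levels) and a scalar insulin input.
A = np.array([[0.95, -0.10], [0.05, 0.90]])
B = np.array([[0.0], [0.10]])
gains = finite_horizon_lqr(A, B, np.diag([1.0, 0.0]), np.array([[0.1]]),
                           np.diag([1.0, 0.0]), N=48)
```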
Program for the solution of multipoint boundary value problems of quasilinear differential equations
NASA Technical Reports Server (NTRS)
1973-01-01
Linear equations are solved by a method of superposition of solutions of a sequence of initial value problems. For nonlinear equations and/or boundary conditions, the solution is iterative, and in each iteration a problem like the linear case is solved. A simple Taylor series expansion is used for the linearization of both nonlinear equations and nonlinear boundary conditions. The perturbation method of solution is used in preference to quasilinearization because of programming ease and smaller storage requirements; experiments indicate that the desired convergence properties exist, although no proof of convergence is given.
The Vertical Linear Fractional Initialization Problem
NASA Technical Reports Server (NTRS)
Lorenzo, Carl F.; Hartley, Tom T.
1999-01-01
This paper presents a solution to the initialization problem for a system of linear fractional-order differential equations. The scalar problem is considered first, and solutions are obtained both generally and for a specific initialization. Next the vector fractional order differential equation is considered. In this case, the solution is obtained in the form of matrix F-functions. Some control implications of the vector case are discussed. The suggested method of problem solution is shown via an example.
NASA Astrophysics Data System (ADS)
Fukuda, Jun'ichi; Johnson, Kaj M.
2010-06-01
We present a unified theoretical framework and solution method for probabilistic, Bayesian inversions of crustal deformation data. The inversions involve multiple data sets with unknown relative weights, model parameters that are related linearly or non-linearly through theoretic models to observations, prior information on model parameters and regularization priors to stabilize underdetermined problems. To efficiently handle non-linear inversions in which some of the model parameters are linearly related to the observations, this method combines both analytical least-squares solutions and a Monte Carlo sampling technique. In this method, model parameters that are linearly and non-linearly related to observations, relative weights of multiple data sets and relative weights of prior information and regularization priors are determined in a unified Bayesian framework. In this paper, we define the mixed linear-non-linear inverse problem, outline the theoretical basis for the method, provide a step-by-step algorithm for the inversion, validate the inversion method using synthetic data and apply the method to two real data sets. We apply the method to inversions of multiple geodetic data sets with unknown relative data weights for interseismic fault slip and locking depth. We also apply the method to the problem of estimating the spatial distribution of coseismic slip on faults with unknown fault geometry, relative data weights and smoothing regularization weight.
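A stripped-down sketch of the mixed strategy: the non-linear parameters are explored with a Metropolis sampler while the linearly related parameters are eliminated analytically by least squares at every proposal. Priors, regularization, and the unknown relative data weights of the full method are omitted; the function names and step size are assumptions.

```python
import numpy as np

def mixed_linear_nonlinear_sampler(d, G_of_theta, theta0, sigma_d,
                                   n_samples=5000, step=0.05,
                                   rng=np.random.default_rng(1)):
    """Sample non-linear parameters theta (e.g., locking depth) while the
    linear parameters m are solved analytically from d ~ G(theta) m."""
    def neg_log_like(theta):
        G = G_of_theta(theta)
        m, *_ = np.linalg.lstsq(G, d, rcond=None)   # analytic linear step
        r = d - G @ m
        return 0.5 * np.sum((r / sigma_d) ** 2), m

    theta = np.asarray(theta0, dtype=float)
    nll, m = neg_log_like(theta)
    samples = []
    for _ in range(n_samples):
        prop = theta + step * rng.standard_normal(theta.shape)
        nll_p, m_p = neg_log_like(prop)
        if np.log(rng.uniform()) < nll - nll_p:     # Metropolis acceptance
            theta, nll, m = prop, nll_p, m_p
        samples.append((theta.copy(), m.copy()))
    return samples
```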
A test of geographic assignment using isotope tracers in feathers of known origin
Wunder, Michael B.; Kester, C.L.; Knopf, F.L.; Rye, R.O.
2005-01-01
We used feathers of known origin collected from across the breeding range of a migratory shorebird to test the use of isotope tracers for assigning breeding origins. We analyzed δD, δ¹³C, and δ¹⁵N in feathers from 75 mountain plover (Charadrius montanus) chicks sampled in 2001 and from 119 chicks sampled in 2002. We estimated parameters for continuous-response inverse regression models and for discrete-response Bayesian probability models from data for each year independently. We evaluated model predictions with both the training data and by using the alternate year as an independent test dataset. Our results provide weak support for modeling latitude and isotope values as monotonic functions of one another, especially when data are pooled over known sources of variation such as sample year or location. We were unable to make even qualitative statements, such as north versus south, about the likely origin of birds using both δD and δ¹³C in inverse regression models; results were no better than random assignment. Probability models provided better results and a more natural framework for the problem. Correct assignment rates were highest when considering all three isotopes in the probability framework, but the use of even a single isotope was better than random assignment. The method appears relatively robust to temporal effects and is most sensitive to the isotope discrimination gradients over which samples are taken. We offer that the problem of using isotope tracers to infer geographic origin is best framed as one of assignment, rather than prediction.
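The probability-model framing can be sketched as a simple Bayes classifier with independent Gaussian likelihoods per isotope and per candidate region. The region statistics and prior are assumptions a user would estimate from known-origin feathers; this is not the authors' model specification.

```python
import numpy as np

def assign_origin(feather, region_means, region_sds, priors=None):
    """feather: isotope values (e.g., dD, d13C, d15N) for one bird;
    region_means / region_sds: per-region mean and SD of those isotopes.
    Returns posterior probabilities over candidate regions."""
    regions = list(region_means)
    if priors is None:
        priors = {r: 1.0 / len(regions) for r in regions}
    log_post = {}
    for r in regions:
        mu = np.asarray(region_means[r], float)
        sd = np.asarray(region_sds[r], float)
        loglik = -0.5 * np.sum(((np.asarray(feather) - mu) / sd) ** 2
                               + np.log(2 * np.pi * sd ** 2))
        log_post[r] = np.log(priors[r]) + loglik
    mx = max(log_post.values())
    z = sum(np.exp(v - mx) for v in log_post.values())
    return {r: float(np.exp(v - mx) / z) for r, v in log_post.items()}
```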
NASA Technical Reports Server (NTRS)
Young, Katherine C.; Sobieszczanski-Sobieski, Jaroslaw
1988-01-01
This project has two objectives. The first is to determine whether linear programming techniques can improve performance when handling design optimization problems with a large number of design variables and constraints relative to the feasible directions algorithm. The second purpose is to determine whether using the Kreisselmeier-Steinhauser (KS) function to replace the constraints with one constraint will reduce the cost of total optimization. Comparisons are made using solutions obtained with linear and non-linear methods. The results indicate that there is no cost saving using the linear method or in using the KS function to replace constraints.
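The Kreisselmeier-Steinhauser aggregation referred to above replaces the constraint set with one smooth envelope function. A small overflow-safe implementation is sketched below; the draw-down parameter value is an arbitrary choice, and this is not the study's own code.

```python
import numpy as np

def ks_aggregate(g, rho=50.0):
    """Kreisselmeier-Steinhauser aggregation of constraint values g_j
    (feasible when g_j <= 0) into a single smooth constraint; rho
    controls how tightly the KS value envelops max(g)."""
    g = np.asarray(g, dtype=float)
    gmax = g.max()
    return gmax + np.log(np.sum(np.exp(rho * (g - gmax)))) / rho

# The KS value slightly overestimates the maximum constraint value.
print(ks_aggregate([-0.2, 0.01, -1.3]))   # a bit above 0.01
```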
Analysis and control of hourglass instabilities in underintegrated linear and nonlinear elasticity
NASA Technical Reports Server (NTRS)
Jacquotte, Olivier P.; Oden, J. Tinsley
1994-01-01
Methods are described to identify and correct a bad finite element approximation of the governing operator obtained when under-integration is used in numerical code for several model problems: the Poisson problem, the linear elasticity problem, and for problems in the nonlinear theory of elasticity. For each of these problems, the reason for the occurrence of instabilities is given, a way to control or eliminate them is presented, and theorems of existence, uniqueness, and convergence for the given methods are established. Finally, numerical results are included which illustrate the theory.
Aircraft flight test trajectory control
NASA Technical Reports Server (NTRS)
Menon, P. K. A.; Walker, R. A.
1988-01-01
Two design techniques for linear flight test trajectory controllers (FTTCs) are described: Eigenstructure assignment and the minimum error excitation technique. The two techniques are used to design FTTCs for an F-15 aircraft model for eight different maneuvers at thirty different flight conditions. An evaluation of the FTTCs is presented.
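The eigenvalue-placement core of eigenstructure assignment can be illustrated with SciPy's pole-placement routine on a small two-state, two-input model. The matrices and desired poles below are invented for illustration and are unrelated to the F-15 model; full eigenstructure assignment additionally shapes the closed-loop eigenvectors, which the multi-input case leaves freedom for.

```python
import numpy as np
from scipy.signal import place_poles

# Illustrative two-state, two-input linear model (not the F-15 model).
A = np.array([[0.0, 1.0], [-2.0, -0.5]])
B = np.array([[0.0, 1.0], [1.0, 0.0]])
desired = np.array([-3.0, -4.0])

K = place_poles(A, B, desired).gain_matrix      # state feedback u = -K x
print(np.linalg.eigvals(A - B @ K))             # approximately [-3, -4]
```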
Assessment of Student Memo Assignments in Management Science
ERIC Educational Resources Information Center
Williams, Julie Ann Stuart; Stanny, Claudia J.; Reid, Randall C.; Hill, Christopher J.; Rosa, Katie Martin
2015-01-01
Frequently in Management Science courses, instructors focus primarily on teaching students the mathematics of linear programming models. However, the ability to discuss mathematical expressions in business terms is an important professional skill. The authors present an analysis of student abilities to discuss management science concepts through…
NASA Astrophysics Data System (ADS)
Ogorodnikov, Yuri; Khachay, Michael; Pljonkin, Anton
2018-04-01
We describe the possibility of employing a special case of the 3-SAT problem, stemming from the well-known integer factorization problem, for quantum cryptography. It is known that for every instance of our 3-SAT setting the given 3-CNF is satisfiable by a unique truth assignment, and the goal is to find this assignment. Since the complexity status of the factorization problem is still undefined, the development of approximation algorithms and heuristics attracts the interest of numerous researchers. One promising approach to the construction of approximation techniques is based on a real-valued relaxation of the given 3-CNF, followed by minimization of an appropriate differentiable loss function and subsequent rounding of the fractional minimizer obtained. Algorithms developed this way differ by the rounding scheme applied in their final stage. We propose a new rounding scheme based on Bayesian learning. The article shows that the proposed method can be used to assess security in quantum key distribution systems; in quantum key distribution, Shannon's rules are applied, and the factorization problem is paramount when decrypting secret keys.
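A toy sketch of the relaxation-plus-rounding pipeline described above, using gradient descent on a product-form clause loss and plain probabilistic rounding in place of the paper's Bayesian rounding scheme; the clause encoding, loss, and step sizes are assumptions.

```python
import numpy as np

def relax_and_round(clauses, n_vars, n_steps=2000, lr=0.1,
                    rng=np.random.default_rng(0)):
    """Relax each Boolean variable to x_i in [0, 1]. A clause such as
    (1, -2, 3) means x1 OR (NOT x2) OR x3 and contributes the product
    of (1 - literal value) to the differentiable loss; the loss is zero
    exactly when every clause has a satisfied literal."""
    x = rng.uniform(0.2, 0.8, n_vars)
    for _ in range(n_steps):
        grad = np.zeros(n_vars)
        for clause in clauses:
            vals = np.array([x[abs(l) - 1] if l > 0 else 1 - x[abs(l) - 1]
                             for l in clause])
            miss = 1.0 - vals                  # per-literal "unsatisfiedness"
            for idx, l in enumerate(clause):
                i = abs(l) - 1
                other = np.prod(np.delete(miss, idx))
                grad[i] += -other if l > 0 else other
            # d/dx of prod(miss): -prod(others) for positive literals,
            # +prod(others) for negated ones.
        x = np.clip(x - lr * grad, 0.0, 1.0)
    return rng.uniform(size=n_vars) < x        # probabilistic rounding
```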
NASA Astrophysics Data System (ADS)
Zhao, Liang; Huang, Shoudong; Dissanayake, Gamini
2018-07-01
This paper presents a novel hierarchical approach to solving structure-from-motion (SFM) problems. The algorithm begins with small local reconstructions based on nonlinear bundle adjustment (BA). These are then joined in a hierarchical manner using a strategy that requires solving a linear least squares optimization problem followed by a nonlinear transform. The algorithm can handle ordered monocular and stereo image sequences. Two stereo images or three monocular images are adequate for building each initial reconstruction. The bulk of the computation involves solving a linear least squares problem and, therefore, the proposed algorithm avoids three major issues associated with most of the nonlinear optimization algorithms currently used for SFM: the need for a reasonably accurate initial estimate, the need for iterations, and the possibility of being trapped in a local minimum. Also, by summarizing all the original observations into the small local reconstructions with associated information matrices, the proposed Linear SFM manages to preserve all the information contained in the observations. The paper also demonstrates that the proposed problem formulation results in a sparse structure that leads to an efficient numerical implementation. The experimental results using publicly available datasets show that the proposed algorithm yields solutions that are very close to those obtained using a global BA starting with an accurate initial estimate. The C/C++ source code of the proposed algorithm is publicly available at https://github.com/LiangZhaoPKUImperial/LinearSFM.
Lauth, G W; Heubeck, B G; Mackowiak, K
2006-06-01
Observation studies of students with attention-deficit hyperactivity disorder (ADHD) problems in natural classroom situations are costly and relatively rare. The study enquired how teacher ratings are anchored in actual student classroom behaviours, and how the behaviour of children with ADHD problems differs from their classmates. The authors attempted to broaden the usual focus on disruptive and inattentive behaviours to elucidate the role of various on-task behaviours, as well as considering differences between classroom contexts. DSM-III-R criteria were used in conjunction with a teacher rating scale to select a sample of 55 students with ADHD problems, and 55 matched controls from a population of 569 primary school students. Students were observed in their natural classrooms using the Munich Observation of Attention Inventory (MAI; Helmke, 1988). Correlations between teacher reports and observation codes were computed, and systematic differences between students with ADHD problems and controls in different classroom contexts were examined using a generalized linear mixed model (GLMM). Global teacher reports showed moderate to strong correlations with observed student behaviours. Expected on-task behaviour demonstrated the strongest relationship (r>-.70) with teacher reports. As hypothesized, the children with ADHD were more disruptive and inattentive than their matched peers. They were also less often inconspicuous on-task as expected by their teachers. However, their behaviour was assigned to two other on-task categories more often than their peers, and this raised their total on-task behaviour to over 66%. Situational differences were found for all codes as well, which mostly affected all students in a similar way, not just students with ADHD. ADHD related behaviours are pervasive across the classroom situations coded. Teachers appear to distinguish between desirable and undesirable on-task behaviours. Nevertheless, assisting students with ADHD problems requires shaping both. Future studies need to include more differentiated codes for various types of on-task behaviours and also need to code the lesson context concurrently.
Schmidt, Robert; Geisler, Sandra; Spreckelsen, Cord
2013-01-07
Elective patient admission and assignment planning is an important task of the strategic and operational management of a hospital and early on became a central topic of clinical operations research. The management of hospital beds is an important subtask. Various approaches have been proposed, involving the computation of efficient assignments with regard to the patients' condition, the necessity of the treatment, and the patients' preferences. However, these approaches are mostly based on static, unadaptable estimates of the length of stay and, thus, do not take into account the uncertainty of the patient's recovery. Furthermore, the effect of aggregated bed capacities have not been investigated in this context. Computer supported bed management, combining an adaptable length of stay estimation with the treatment of shared resources (aggregated bed capacities) has not yet been sufficiently investigated. The aim of our work is: 1) to define a cost function for patient admission taking into account adaptable length of stay estimations and aggregated resources, 2) to define a mathematical program formally modeling the assignment problem and an architecture for decision support, 3) to investigate four algorithmic methodologies addressing the assignment problem and one base-line approach, and 4) to evaluate these methodologies w.r.t. cost outcome, performance, and dismissal ratio. The expected free ward capacity is calculated based on individual length of stay estimates, introducing Bernoulli distributed random variables for the ward occupation states and approximating the probability densities. The assignment problem is represented as a binary integer program. Four strategies for solving the problem are applied and compared: an exact approach, using the mixed integer programming solver SCIP; and three heuristic strategies, namely the longest expected processing time, the shortest expected processing time, and random choice. A baseline approach serves to compare these optimization strategies with a simple model of the status quo. All the approaches are evaluated by a realistic discrete event simulation: the outcomes are the ratio of successful assignments and dismissals, the computation time, and the model's cost factors. A discrete event simulation of 226,000 cases shows a reduction of the dismissal rate compared to the baseline by more than 30 percentage points (from a mean dismissal ratio of 74.7% to 40.06% comparing the status quo with the optimization strategies). Each of the optimization strategies leads to an improved assignment. The exact approach has only a marginal advantage over the heuristic strategies in the model's cost factors (≤3%). Moreover,this marginal advantage was only achieved at the price of a computational time fifty times that of the heuristic models (an average computing time of 141 s using the exact method, vs. 2.6 s for the heuristic strategy). In terms of its performance and the quality of its solution, the heuristic strategy RAND is the preferred method for bed assignment in the case of shared resources. Future research is needed to investigate whether an equally marked improvement can be achieved in a large scale clinical application study, ideally one comprising all the departments involved in admission and assignment planning.
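The three ordering heuristics compared in the study (longest expected processing time, shortest expected processing time, random) can be sketched as a greedy admission loop. The data structures and the "most free beds first" placement rule below are simplifying assumptions, not the paper's binary integer program or its discrete event simulation model.

```python
import random

def assign_patients(patients, wards, order="LPT", seed=0):
    """patients: list of dicts {"id", "los"} with expected length of stay;
    wards: dict mapping ward name to expected free bed capacity.
    Sort patients by LPT, SPT, or randomly, then place each greedily."""
    rng = random.Random(seed)
    queue = list(patients)
    if order == "LPT":
        queue.sort(key=lambda p: p["los"], reverse=True)
    elif order == "SPT":
        queue.sort(key=lambda p: p["los"])
    else:                                   # RAND
        rng.shuffle(queue)

    free = dict(wards)
    plan, dismissed = {}, []
    for p in queue:
        ward = max(free, key=free.get)      # ward with most expected capacity
        if free[ward] >= 1:
            plan[p["id"]] = ward
            free[ward] -= 1
        else:
            dismissed.append(p["id"])       # no capacity: admission deferred
    return plan, dismissed
```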
Lim, Wee Loon; Wibowo, Antoni; Desa, Mohammad Ishak; Haron, Habibollah
2016-01-01
The quadratic assignment problem (QAP) is an NP-hard combinatorial optimization problem with a wide variety of applications. Biogeography-based optimization (BBO), a relatively new optimization technique based on the biogeography concept, uses the idea of migration strategy of species to derive algorithm for solving optimization problems. It has been shown that BBO provides performance on a par with other optimization methods. A classical BBO algorithm employs the mutation operator as its diversification strategy. However, this process will often ruin the quality of solutions in QAP. In this paper, we propose a hybrid technique to overcome the weakness of classical BBO algorithm to solve QAP, by replacing the mutation operator with a tabu search procedure. Our experiments using the benchmark instances from QAPLIB show that the proposed hybrid method is able to find good solutions for them within reasonable computational times. Out of 61 benchmark instances tested, the proposed method is able to obtain the best known solutions for 57 of them. PMID:26819585
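A minimal swap-neighbourhood tabu search for the QAP is sketched below to make the hybrid's local-search ingredient concrete; the tenure, iteration budget, and aspiration rule are illustrative choices, not the authors' configuration.

```python
import random

def qap_cost(perm, flow, dist):
    """Cost of assigning facility i to location perm[i]."""
    n = len(perm)
    return sum(flow[i][j] * dist[perm[i]][perm[j]]
               for i in range(n) for j in range(n))

def tabu_search_qap(flow, dist, n_iter=2000, tenure=10, seed=0):
    """Simple tabu search over pairwise swaps of the assignment."""
    rng = random.Random(seed)
    n = len(flow)
    cur = list(range(n))
    rng.shuffle(cur)
    best, best_cost = cur[:], qap_cost(cur, flow, dist)
    tabu = {}                                    # (i, j) -> iteration until tabu
    for it in range(n_iter):
        move, move_cost = None, float("inf")
        for i in range(n - 1):
            for j in range(i + 1, n):
                cand = cur[:]
                cand[i], cand[j] = cand[j], cand[i]
                c = qap_cost(cand, flow, dist)
                tabu_active = tabu.get((i, j), -1) >= it
                # aspiration: accept a tabu move if it improves the best
                if (not tabu_active or c < best_cost) and c < move_cost:
                    move, move_cost = (i, j), c
        if move is None:
            break
        i, j = move
        cur[i], cur[j] = cur[j], cur[i]
        tabu[(i, j)] = it + tenure
        if move_cost < best_cost:
            best, best_cost = cur[:], move_cost
    return best, best_cost
```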
Normal Unenhanced Raman Spectra of CO and CH4 adsorbed on cobalt(poly)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marzouk, H.A.; Bradley, E.B.; Arunkumar, K.A.
Normal Unenhanced Raman Spectra (NURS) of low-polarizability CO molecules were observed for the first time on cobalt at R.T. and residual gas pressure. We assign five bands observed between 2030-2130 cm^-1 to linear chemisorbed CO species, while those observed between 1840-2010 cm^-1 have been ascribed to the 2-fold chemisorbed species. The three bands observed between 1740-1830 cm^-1 we believe are due to the 3-fold species. The corresponding fourteen Co-C stretches were observed and assigned. A model based upon electron backdonation is proposed for each of the three structures. NURS were also observed at R.T. for physisorbed CH4 and assignments are made to the four frequencies of CH4.
NASA Astrophysics Data System (ADS)
Liu, Chuang; Ye, Dong; Shi, Keke; Sun, Zhaowei
2017-07-01
A novel improved mixed H2/H∞ control technique combined with pole assignment theory is presented to achieve attitude stabilization and vibration suppression simultaneously for flexible spacecraft. The flexible spacecraft dynamics are described and transformed into the corresponding state space form. Based on a linear matrix inequality (LMI) scheme and pole assignment theory, the improved mixed H2/H∞ controller does not require the two Lyapunov variables involved in the H2 and H∞ performance conditions to be equal, which reduces conservatism compared with the traditional mixed H2/H∞ controller. Moreover, it eliminates the coupling of the Lyapunov matrix variables and the system matrices by introducing a slack variable that provides additional degrees of freedom. Several simulations are performed to demonstrate the effectiveness and feasibility of the proposed method.
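The pole-assignment ingredient mentioned above can be illustrated in isolation; the following minimal sketch places closed-loop poles for a small, invented state-space model with scipy.signal.place_poles and is not the paper's LMI-based mixed H2/H∞ synthesis.

```python
# Pole placement only (not the full mixed H2/H-infinity design).
import numpy as np
from scipy.signal import place_poles

A = np.array([[0.0, 1.0],
              [2.0, -0.5]])          # assumed open-loop dynamics (unstable)
B = np.array([[0.0],
              [1.0]])
desired = [-2.0, -3.0]               # assumed target closed-loop poles

res = place_poles(A, B, desired)
K = res.gain_matrix                  # state feedback u = -K x
print("closed-loop eigenvalues:", np.linalg.eigvals(A - B @ K))
```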
Nonexpansiveness of a linearized augmented Lagrangian operator for hierarchical convex optimization
NASA Astrophysics Data System (ADS)
Yamagishi, Masao; Yamada, Isao
2017-04-01
Hierarchical convex optimization concerns two-stage optimization problems: the first stage problem is a convex optimization; the second stage problem is the minimization of a convex function over the solution set of the first stage problem. For the hierarchical convex optimization, the hybrid steepest descent method (HSDM) can be applied, where the solution set of the first stage problem must be expressed as the fixed point set of a certain nonexpansive operator. In this paper, we propose a nonexpansive operator that yields a computationally efficient update when it is plugged into the HSDM. The proposed operator is inspired by the update of the linearized augmented Lagrangian method. It is applicable to characterize the solution set of recent sophisticated convex optimization problems found in the context of inverse problems, where the sum of multiple proximable convex functions involving linear operators must be minimized to incorporate preferable properties into the minimizers. For such a problem formulation, there has not yet been reported any nonexpansive operator that yields an update free from the inversions of linear operators in cases where it is utilized in the HSDM. Unlike previously known nonexpansive operators, the proposed operator yields an inversion-free update in such cases. As an application of the proposed operator plugged into the HSDM, we also present, in the context of the so-called superiorization, an algorithmic solution to a convex optimization problem over the generalized convex feasible set where the intersection of the hard constraints is not necessarily simple.
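A minimal sketch of the HSDM iteration that such operators are designed to plug into, using an ordinary box projection as the nonexpansive operator rather than the paper's proposed operator: a smooth convex function is minimized over the fixed point set of T with diminishing step sizes.

```python
import numpy as np

target = np.array([2.0, -3.0])      # unconstrained minimizer of f

def T(x):
    return np.clip(x, -1.0, 1.0)    # box projection: nonexpansive, Fix(T) = the box

def grad_f(x):
    return x - target               # gradient of f(x) = 0.5 * ||x - target||^2

x = np.zeros(2)
for k in range(1, 500):
    lam = 1.0 / k                   # diminishing HSDM step sizes
    y = T(x)
    x = y - lam * grad_f(y)         # HSDM update: gradient step applied after T
print(T(x))                         # ~[1, -1]: the box point closest to target
```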
Numerical methods on some structured matrix algebra problems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jessup, E.R.
1996-06-01
This proposal concerned the design, analysis, and implementation of serial and parallel algorithms for certain structured matrix algebra problems. It emphasized large order problems and so focused on methods that can be implemented efficiently on distributed-memory MIMD multiprocessors. Such machines supply the computing power and extensive memory demanded by the large order problems. We proposed to examine three classes of matrix algebra problems: the symmetric and nonsymmetric eigenvalue problems (especially the tridiagonal cases) and the solution of linear systems with specially structured coefficient matrices. As all of these are of practical interest, a major goal of this work was to translate our research in linear algebra into useful tools for use by the computational scientists interested in these and related applications. Thus, in addition to software specific to the linear algebra problems, we proposed to produce a programming paradigm and library to aid in the design and implementation of programs for distributed-memory MIMD computers. We now report on our progress on each of the problems and on the programming tools.
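As a small, present-day illustration of one of the structured problems named in this proposal, the sketch below solves a symmetric tridiagonal eigenvalue problem with SciPy's dedicated routine and checks it against the known analytic spectrum; it is not the project's distributed-memory software.

```python
import numpy as np
from scipy.linalg import eigh_tridiagonal

n = 1000
d = 2.0 * np.ones(n)        # main diagonal
e = -1.0 * np.ones(n - 1)   # off-diagonal (classic 1-D Laplacian stencil)

w, v = eigh_tridiagonal(d, e)
# Analytic eigenvalues of this matrix: 2 - 2*cos(k*pi/(n+1)), k = 1..n
k = np.arange(1, n + 1)
print(np.max(np.abs(w - (2.0 - 2.0 * np.cos(k * np.pi / (n + 1))))))
```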
Online Problem Solving for Adolescent Brain Injury: A Randomized Trial of 2 Approaches.
Wade, Shari L; Taylor, Hudson Gerry; Yeates, Keith Owen; Kirkwood, Michael; Zang, Huaiyu; McNally, Kelly; Stacin, Terry; Zhang, Nanhua
Adolescent traumatic brain injury (TBI) contributes to deficits in executive functioning and behavior, but few evidence-based treatments exist. We conducted a randomized clinical trial comparing Teen Online Problem Solving with Family (TOPS-Family) with Teen Online Problem Solving with Teen Only (TOPS-TO) or the access to Internet Resources Comparison (IRC) group. Children, aged 11 to 18 years, who sustained a complicated mild-to-severe TBI in the previous 18 months were randomly assigned to the TOPS-Family (49), TOPS-TO (51), or IRC group (52). Parent and self-report measures of externalizing behaviors and executive functioning were completed before treatment and 6 months later. Treatment effects were examined using linear regression models, adjusting for baseline symptom levels. Age, maternal education, and family stresses were examined as moderators. The TOPS-Family group had lower levels of parent-reported executive dysfunction at follow-up than the TOPS-TO group, and differences between the TOPS-Family and IRC groups approached significance. Maternal education moderated improvements in parent-reported externalizing behaviors, with less educated parents in the TOPS-Family group reporting fewer symptoms. On the self-report Behavior Rating Inventory of Executive Functions, treatment efficacy varied with the level of parental stresses. The TOPS-Family group reported greater improvements at low stress levels, whereas the TOPS-TO group reported greater improvement at high-stress levels. The TOPS-TO group did not have significantly lower symptoms than the IRC group on any comparison. Findings support the efficacy of online family problem solving to address executive dysfunction and improve externalizing behaviors among youth with TBI from less advantaged households. Treatment with the teen alone may be indicated in high-stress families.
Tachibana, Yoshiyuki; Fukushima, Ai; Saito, Hitomi; Yoneyama, Satoshi; Ushida, Kazuo; Yoneyama, Susumu; Kawashima, Ryuta
2012-01-01
We propose a new play activity intervention program for mothers and children. Our interdisciplinary program integrates four fields of child-related sciences: neuroscience, preschool pedagogy, developmental psychology, and child and maternal psychiatry. To determine the effect of this intervention on child and mother psychosocial problems related to parenting stress and on the children's cognitive abilities, we performed a cluster randomized controlled trial. Participants were 238 pairs of mothers and typically developing preschool children (ages 4-6 years old) from Wakakusa kindergarten in Japan. The pairs were asked to play at home for about 10 min a day, 5 days a week for 3 months. Participants were randomly assigned to the intervention or control group by class unit. The Parenting Stress Index (PSI) (for mothers), the Goodenough Draw-a-Man intelligence test (DAM), and the new S-S intelligence test (NS-SIT) (for children) were administered prior to and 3 months after the intervention period. Pre-post changes in test scores were compared between the groups using a linear mixed-effects model analysis. The primary outcomes were the Total score on the child domain of the PSI (for child psychosocial problems related to parenting stress), Total score on the parent domain of the PSI (for maternal psychosocial problems related to parenting stress), and the score on the DAM (for child cognitive abilities). The results of the PSI suggested that the program may reduce parenting stress. The results of the cognitive tests suggested that the program may improve the children's fluid intelligence, working memory, and processing speed. Our intervention program may ameliorate the children's psychosocial problems related to parenting stress and increase their cognitive abilities. UMIN Clinical Trials Registry UMIN000002265.
The Transitory Phase to the Attainment of Self-Regulatory Skill in Mathematical Problem Solving
ERIC Educational Resources Information Center
Lazakidou, G.; Paraskeva, F.; Retalis, S.
2007-01-01
Three phases of development of self-regulatory skill in the domain of mathematical problem solving were designed to examine students' behaviour and the effects on their problem solving ability. Forty-eight Grade 4 students (10 year olds) participated in this pilot study. The students were randomly assigned to one of three groups, each representing…
Effect of Computer-Presented Organizational/Memory Aids on Problem Solving Behavior.
ERIC Educational Resources Information Center
Steinberg, Esther R.; And Others
This research studied the effects of computer-presented organizational/memory aids on problem solving behavior. The aids were either matrix or verbal charts shown on the display screen next to the problem. The 104 college student subjects were randomly assigned to one of the four conditions: type of chart (matrix or verbal chart) and use of charts…
The Roles of Women in the Army and Their Impact on Military Operations and Organizations.
ERIC Educational Resources Information Center
Batts, John H.; And Others
Problems inherent in the expanded utilization of female soldiers in the U.S. Army are numerous. Attitudes of a wide sample of Army personnel, men and women, enlisted and officer, were surveyed pertaining to those problems. Some problems such as uniforms, billeting, assignments, and training are obvious and with proper planning can and will be…
ERIC Educational Resources Information Center
Swanson, H. Lee; Lussier, Cathy; Orosco, Michael
2013-01-01
This study investigated the role of strategy instruction and cognitive abilities on word problem solving accuracy in children with math difficulties (MD). Elementary school children (N = 120) with and without MD were randomly assigned to 1 of 4 conditions: general-heuristic (e.g., underline question sentence), visual-schematic presentation…
ERIC Educational Resources Information Center
Angeli, Charoula; Valanides, Nicos
2013-01-01
The present study investigated the problem-solving performance of 101 university students and their interactions with a computer modeling tool in order to solve a complex problem. Based on their performance on the hidden figures test, students were assigned to three groups of field-dependent (FD), field-mixed (FM), and field-independent (FI)…
ERIC Educational Resources Information Center
Fraser, T. M.; Pityn, P. J.
This book contains 12 case histories, each based on a real-life problem, that show how a manager can use common sense, knowledge, and interpersonal skills to solve problems in human performance at work. Each case study describes a worker's problem and provides background information and an assignment; solutions are suggested. The following cases…
Wang, Jiaxi; Gronalt, Manfred; Sun, Yan
2017-01-01
Due to its environmentally sustainable and energy-saving characteristics, railway transportation nowadays plays a fundamental role in delivering passengers and goods. Having emerged in the area of transportation planning, the crew (workforce) sizing problem and the crew scheduling problem have attracted great attention from the railway industry and the scientific community. In this paper, we aim to solve the two problems by proposing a novel two-stage optimization approach in the context of the electric multiple units (EMU) depot shunting driver assignment problem. Given a predefined depot shunting schedule, the first stage of the approach focuses on determining an optimal size of shunting drivers. The second stage is formulated as a bi-objective optimization model, in which we comprehensively consider the objectives of minimizing the total walking distance and maximizing the workload balance. We then combine the normalized normal constraint method with a modified Pareto filter algorithm to obtain Pareto solutions for the bi-objective optimization problem. Furthermore, we conduct a series of numerical experiments to demonstrate the proposed approach. Based on the computational results, the regression analysis yields a driver size predictor and the sensitivity analysis gives some interesting insights that are useful for decision makers.
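A minimal sketch of the Pareto-filter ingredient only (not the paper's normalized normal constraint method or its full two-stage model): keep the non-dominated candidates among invented (walking distance, workload imbalance) pairs, both to be minimized.

```python
import numpy as np

def pareto_filter(points):
    # Keep points not dominated by any other point (smaller is better in every objective).
    pts = np.asarray(points, dtype=float)
    keep = []
    for i, p in enumerate(pts):
        dominated = np.any(np.all(pts <= p, axis=1) & np.any(pts < p, axis=1))
        if not dominated:
            keep.append(i)
    return pts[keep]

candidates = [(120.0, 0.40), (100.0, 0.55), (150.0, 0.30), (110.0, 0.45), (100.0, 0.60)]
print(pareto_filter(candidates))   # the last candidate is dominated and dropped
```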
NASA Technical Reports Server (NTRS)
Muravyov, Alexander A.
1999-01-01
In this paper, a method for obtaining nonlinear stiffness coefficients in modal coordinates for geometrically nonlinear finite-element models is developed. The method requires application of a finite-element program with a geometrically nonlinear static capability. The MSC/NASTRAN code is employed for this purpose. The equations of motion of a MDOF system are formulated in modal coordinates. A set of linear eigenvectors is used to approximate the solution of the nonlinear problem. The random vibration problem of the MDOF nonlinear system is then considered. The solutions obtained by application of two different versions of a stochastic linearization technique are compared with linear and exact (analytical) solutions in terms of root-mean-square (RMS) displacements and strains for a beam structure.
Infrared laser spectroscopy of the linear C13 carbon cluster
NASA Technical Reports Server (NTRS)
Giesen, T. F.; Van Orden, A.; Hwang, H. J.; Fellers, R. S.; Provencal, R. A.; Saykally, R. J.
1994-01-01
The infrared absorption spectrum of a linear, 13-atom carbon cluster (C13) has been observed by using a supersonic cluster beam-diode laser spectrometer. Seventy-six rovibrational transitions were measured near 1809 wave numbers and assigned to an antisymmetric stretching fundamental in the 1 sigma g+ ground state of C13. This definitive structural characterization of a carbon cluster in the intermediate size range between C10 and C20 is in apparent conflict with theoretical calculations, which predict that clusters of this size should exist as planar monocyclic rings.
Perdiguero-Alonso, Diana; Montero, Francisco E; Kostadinova, Aneta; Raga, Juan Antonio; Barrett, John
2008-10-01
Due to the complexity of host-parasite relationships, discrimination between fish populations using parasites as biological tags is difficult. This study introduces, to our knowledge for the first time, random forests (RF) as a new modelling technique in the application of parasite community data as biological markers for population assignment of fish. This novel approach is applied to a dataset with a complex structure comprising 763 parasite infracommunities in population samples of Atlantic cod, Gadus morhua, from the spawning/feeding areas in five regions in the North East Atlantic (Baltic, Celtic, Irish and North seas and Icelandic waters). The learning behaviour of RF is evaluated in comparison with two other algorithms applied to class assignment problems, the linear discriminant function analysis (LDA) and artificial neural networks (ANN). The three algorithms are used to develop predictive models applying three cross-validation procedures in a series of experiments (252 models in total). The comparative approach to RF, LDA and ANN algorithms applied to the same datasets demonstrates the competitive potential of RF for developing predictive models since RF exhibited better accuracy of prediction and outperformed LDA and ANN in the assignment of fish to their regions of sampling using parasite community data. The comparative analyses and the validation experiment with a 'blind' sample confirmed that RF models performed more effectively with a large and diverse training set and a large number of variables. The discrimination results obtained for a migratory fish species with largely overlapping parasite communities reflects the high potential of RF for developing predictive models using data that are both complex and noisy, and indicates that it is a promising tool for parasite tag studies. Our results suggest that parasite community data can be used successfully to discriminate individual cod from the five different regions of the North East Atlantic studied using RF.
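A hedged sketch of the kind of model comparison described above, using synthetic data in place of the parasite community dataset: cross-validated accuracy of a random forest versus linear discriminant analysis with scikit-learn.

```python
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical stand-in for parasite community data: 5 "regions", noisy features.
X, y = make_classification(n_samples=600, n_features=30, n_informative=10,
                           n_classes=5, flip_y=0.05, random_state=0)

for name, clf in [("RF", RandomForestClassifier(n_estimators=300, random_state=0)),
                  ("LDA", LinearDiscriminantAnalysis())]:
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name}: mean CV accuracy = {acc:.3f}")
```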
Sparse Substring Pattern Set Discovery Using Linear Programming Boosting
NASA Astrophysics Data System (ADS)
Kashihara, Kazuaki; Hatano, Kohei; Bannai, Hideo; Takeda, Masayuki
In this paper, we consider finding a small set of substring patterns which classifies the given documents well. We formulate the problem as a 1-norm soft margin optimization problem where each dimension corresponds to a substring pattern. We then solve this problem using LPBoost and an optimal substring discovery algorithm. Since the problem is a linear program, the resulting solution is likely to be sparse, which is useful for feature selection. We evaluate the proposed method on real data such as movie reviews.
Domain decomposition in time for PDE-constrained optimization
Barker, Andrew T.; Stoll, Martin
2015-08-28
Here, PDE-constrained optimization problems have a wide range of applications, but they lead to very large and ill-conditioned linear systems, especially if the problems are time dependent. In this paper we outline an approach for dealing with such problems by decomposing them in time and applying an additive Schwarz preconditioner in time, so that we can take advantage of parallel computers to deal with the very large linear systems. We then illustrate the performance of our method on a variety of problems.
Pegg, Elise C; Gill, Harinderjit S
2016-09-06
A new software tool to assign the material properties of bone to an ABAQUS finite element mesh was created and compared with Bonemat, a similar tool originally designed to work with Ansys finite element models. Our software tool (py_bonemat_abaqus) was written in Python, which is the chosen scripting language for ABAQUS. The purpose of this study was to compare the software packages in terms of the material assignment calculation and processing speed. Three element types were compared (linear hexahedral (C3D8), linear tetrahedral (C3D4) and quadratic tetrahedral elements (C3D10)), both individually and as part of a mesh. Comparisons were made using a CT scan of a hemi-pelvis as a test case. A small difference, of -0.05 kPa on average, was found between Bonemat version 3.1 (the current version) and our Python package. Errors were found in the previous release of Bonemat (version 3.0 downloaded from www.biomedtown.org) during calculation of the quadratic tetrahedron Jacobian, and conversion of the apparent density to modulus when integrating over the Young's modulus field. These issues caused up to 2 GPa error in the modulus assignment. For these reasons, we recommend users upgrade to the most recent release of Bonemat. Processing speeds were assessed for the three different element types. Our Python package took significantly longer (110 s on average) to perform the calculations compared with the Bonemat software (10 s). Nevertheless, the workflow advantages of the package and added functionality make 'py_bonemat_abaqus' a useful tool for ABAQUS users.
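The kind of calculation these tools automate can be sketched as follows; the calibration constants are placeholders, not Bonemat's or py_bonemat_abaqus's actual values, and the integration is reduced to an average over an element's sample points.

```python
import numpy as np

def hu_to_density(hu, a=0.0, b=0.001):          # rho = a + b * HU (assumed calibration)
    return a + b * np.asarray(hu, dtype=float)

def density_to_modulus(rho, c=6850.0, p=1.49):   # E = c * rho**p (assumed power law), MPa
    return c * np.maximum(rho, 0.0) ** p

def element_modulus(hu_samples):
    # Convert each sample to modulus, then integrate (here: average) over the element,
    # i.e. integrate over the Young's modulus field as the newer Bonemat release does.
    return float(np.mean(density_to_modulus(hu_to_density(hu_samples))))

print(element_modulus([300, 450, 500, 620]))     # modulus assigned to one element, MPa
```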
NASA Technical Reports Server (NTRS)
Mielke, R. R.; Tung, L. J.; Carraway, P. I., III
1984-01-01
The feasibility of using reduced order models and reduced order observers with eigenvalue/eigenvector assignment procedures is investigated. A review of spectral assignment synthesis procedures is presented. Then, a reduced order model which retains essential system characteristics is formulated. A constant state feedback matrix which assigns desired closed loop eigenvalues and approximates specified closed loop eigenvectors is calculated for the reduced order model. It is shown that the eigenvalue and eigenvector assignments made in the reduced order system are retained when the feedback matrix is implemented about the full order system. In addition, those modes and associated eigenvectors which are not included in the reduced order model remain unchanged in the closed loop full order system. The full state feedback design is then implemented by using a reduced order observer. It is shown that the eigenvalue and eigenvector assignments of the closed loop full order system remain unchanged when a reduced order observer is used. The design procedure is illustrated by an actual design problem.
NASA Technical Reports Server (NTRS)
Mielke, R. R.; Tung, L. J.; Carraway, P. I., III
1985-01-01
The feasibility of using reduced order models and reduced order observers with eigenvalue/eigenvector assignment procedures is investigated. A review of spectral assignment synthesis procedures is presented. Then, a reduced order model which retains essential system characteristics is formulated. A constant state feedback matrix which assigns desired closed loop eigenvalues and approximates specified closed loop eigenvectors is calculated for the reduced order model. It is shown that the eigenvalue and eigenvector assignments made in the reduced order system are retained when the feedback matrix is implemented about the full order system. In addition, those modes and associated eigenvectors which are not included in the reduced order model remain unchanged in the closed loop full order system. The full state feedback design is then implemented by using a reduced order observer. It is shown that the eigenvalue and eigenvector assignments of the closed loop full order system remain unchanged when a reduced order observer is used. The design procedure is illustrated by an actual design problem.
High Order Finite Difference Methods, Multidimensional Linear Problems and Curvilinear Coordinates
NASA Technical Reports Server (NTRS)
Nordstrom, Jan; Carpenter, Mark H.
1999-01-01
Boundary and interface conditions are derived for high order finite difference methods applied to multidimensional linear problems in curvilinear coordinates. The boundary and interface conditions lead to conservative schemes and strict and strong stability provided that certain metric conditions are met.
A high-accuracy optical linear algebra processor for finite element applications
NASA Technical Reports Server (NTRS)
Casasent, D.; Taylor, B. K.
1984-01-01
Optical linear processors are computationally efficient computers for solving matrix-matrix and matrix-vector oriented problems. Optical system errors limit their dynamic range to 30-40 dB, which limits their accuracy to 9-12 bits. Large problems, such as the finite element problem in structural mechanics (with tens or hundreds of thousands of variables), which can exploit the speed of optical processors, require the 32 bit accuracy obtainable from digital machines. To obtain this required 32 bit accuracy with an optical processor, the data can be digitally encoded, thereby reducing the dynamic range requirements of the optical system (i.e., decreasing the effect of optical errors on the data) while providing increased accuracy. This report describes a new digitally encoded optical linear algebra processor architecture for solving finite element and banded matrix-vector problems. A linear static plate bending case study is described which quantifies the processor requirements. Multiplication by digital convolution is explained, and the digitally encoded optical processor architecture is advanced.
Marshall, David C; Baker, Robert G V
2002-01-01
The expansion of gambling industries worldwide is intertwined with the growing government dependence on gambling revenue for fiscal assignments. In Australia, electronic gaming machines (EGMs) have dominated recent gambling industry growth. As EGMs have proliferated, growing recognition has emerged that EGM distribution closely reflects levels of socioeconomic disadvantage. More machines are located in less advantaged regions. This paper analyses time-series socioeconomic distributions of EGMs in Melbourne, Australia, an immature EGM market, and then compares the findings with the mature market in Sydney. Similar findings in both cities suggest that market assignment of EGMs transcends differences in historical and legislative environments. This indicates that similar underlying structures are evident in both markets. Modelling the spatial structures of gambling markets provides an opportunity to identify regions most at risk of gambling related problems. Subsequently, policies can be formulated which ensure fiscal revenue from gambling can be better targeted towards regions likely to be most afflicted by excessive gambling-related problems.
On the stabilizability of multivariable systems by minimum order compensation
NASA Technical Reports Server (NTRS)
Byrnes, C. I.; Anderson, B. D. O.
1983-01-01
In this paper, a derivation is provided of the necessary condition, mp equal to or greater than n, for stabilizability by constant gain feedback of the generic degree n, p x m system. This follows from another of the main results, which asserts that generic stabilizability is equivalent to generic solvability of a deadbeat control problem, provided mp equal to or less than n. Taken together, these conclusions make it possible to make some sharp statements concerning minimum order stabilization. The techniques are primarily drawn from decision algebra and classical algebraic geometry and have additional consequences for problems of stabilizability and pole-assignability. Among these are the decidability (by a Sturm test) of the equivalence of generic pole-assignability and generic stabilizability, the semi-algebraic nature of the minimum order, q, of a stabilizing compensator, and the nonexistence of formulae involving rational operations and extraction of square roots for pole-assigning gains when they exist, answering in the negative a question raised by Anderson, Bose, and Jury (1975).
Design of a Software Configuration for Real-Time Multimedia Group Communication; HNUMTP
NASA Astrophysics Data System (ADS)
Park, Gil-Cheol
This paper designs and implements a multi-session, multi-channel transport protocol for real-time multimedia group communication. The designed protocol has two main features. First, it addresses the synchronization problem that is characteristic of multimedia communication by using a multi-channel scheme. In conventional multimedia communication, each media stream is assigned a single channel; this paper reduces the time the receiving side spends waiting for data to synchronize by assigning more than one channel to data-intensive streams such as video. The resulting inter-media synchronization problem is solved by sending the temporal/spatial relationships among the data over an additional control channel. Second, the protocol performs integrated session management. Each session is an independent group communication unit that supports a collaborative working environment; the participants within a session communicate independently, while the session manager handles all communication among groups and lets media sources connected to the network operate efficiently.
Support of NASA quality requirements by defense contract administration services regions
NASA Technical Reports Server (NTRS)
Farrar, Hiram D.
1966-01-01
Defense Contract Administration Services Regions (DCASR) quality assurance personnel performing under NASA Letters of Delegation must work closely with the assigned technical representative of the NASA centers. It is realized that technical personnel from the NASA Centers cannot make on-site visits as frequently as they would like to. However, DCASR quality assurance personnel would know the assigned NASA technical representative and should contact him when problems arise. The technical representative is the expert on the hardware and should be consulted on any problem area. It is important that the DCASR quality assurance personnel recommend to the delegating NASA Center any new or improved methods of which they may be aware which would assist in achieving the desired quality and reliability in NASA hardware. NASA expects assignment of competent personnel in the Quality Assurance functional area and is not only buying the individual's technical skill, but also his experience. Suggestions by field personnel can many times upgrade the quality of the hardware.
Prediction of aquatic toxicity mode of action using linear discriminant and random forest models
The ability to determine the mode of action (MOA) for a diverse group of chemicals is a critical part of ecological risk assessment and chemical regulation. However, existing MOA assignment approaches in ecotoxicology have been limited to a relatively few MOAs, have high uncertai...
ERIC Educational Resources Information Center
Baird, Michael J.
2004-01-01
A real-life analytical assignment is presented to students, who had to examine an air conditioning coolant solution for metal contamination using atomic absorption spectroscopy (AAS). This hands-on access to a real problem exposed the undergraduate students to the mechanism of AAS and promoted participation in a simulated industrial activity.
2006-12-01
APPROACH: As mentioned previously, ASCU does not use simulation in the traditional manner. Instead, it uses simulation to transition and capture the state... 0 otherwise (by a heuristic discussed below). Let c_ja = the reward for a UAV with sensor package j being assigned to mission area a from the
Using Problem-Based Learning with Large Groups.
ERIC Educational Resources Information Center
Buzzelli, Andrew R.
1994-01-01
At the Pennsylvania College of Optometry, a core course in pediatric optometry was revised to use a problem-centered approach and implemented with a class of 147 students. Students were assigned specific roles to distribute work evenly. A survey found students responded positively to this approach. (MSE)
Method for protein structure alignment
Blankenbecler, Richard; Ohlsson, Mattias; Peterson, Carsten; Ringner, Markus
2005-02-22
This invention provides a method for protein structure alignment. More particularly, the present invention provides a method for identification, classification and prediction of protein structures. The present invention involves two key ingredients. First, an energy or cost function formulation of the problem simultaneously in terms of binary (Potts) assignment variables and real-valued atomic coordinates. Second, a minimization of the energy or cost function by an iterative method, where in each iteration (1) a mean field method is employed for the assignment variables and (2) exact rotation and/or translation of atomic coordinates is performed, weighted with the corresponding assignment variables.
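One common way to write such a cost function, shown here only as a hedged sketch and not necessarily the patent's exact functional, couples binary (Potts) assignment variables to the real-valued rotation and translation:

```latex
E(S, R, t) = \sum_{i,j} S_{ij}\,\lVert x_i - (R\,y_j + t)\rVert^2 \;-\; \lambda \sum_{i,j} S_{ij},
\qquad S_{ij}\in\{0,1\},\quad \sum_j S_{ij}\le 1,\quad \sum_i S_{ij}\le 1 .
```

In the iterative minimization described above, the mean field step relaxes the assignment variables S_{ij} to soft values, while the rotation/translation step updates R and t exactly, weighted by the current assignments.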
A comparison of Heuristic method and Llewellyn’s rules for identification of redundant constraints
NASA Astrophysics Data System (ADS)
Estiningsih, Y.; Farikhin; Tjahjana, R. H.
2018-03-01
An important technique in linear programming is the modelling and solving of practical optimization problems. Redundant constraints are considered for their effects on general linear programming problems. Identifying and removing redundant constraints avoids the unnecessary calculations associated with solving the corresponding linear programming problem. Many methods have been proposed for the identification of redundant constraints. This paper presents a comparison of a heuristic method and Llewellyn's rules for the identification of redundant constraints.
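Neither Llewellyn's rules nor the paper's heuristic is reproduced here; the sketch below shows one standard LP-based redundancy test for context: a constraint is redundant if its left-hand side cannot exceed its right-hand side under the remaining constraints.

```python
import numpy as np
from scipy.optimize import linprog

A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])     # x + y <= 5 is redundant given x <= 2 and y <= 2
b = np.array([2.0, 2.0, 5.0])

def is_redundant(i, A, b):
    # Maximize A[i] @ x subject to the other constraints (and x >= 0);
    # if the maximum cannot exceed b[i], constraint i is redundant.
    mask = np.arange(len(b)) != i
    res = linprog(c=-A[i], A_ub=A[mask], b_ub=b[mask], bounds=[(0, None)] * A.shape[1])
    return res.status == 0 and -res.fun <= b[i] + 1e-9

print([is_redundant(i, A, b) for i in range(len(b))])   # expect [False, False, True]
```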
Discrete Methods and their Applications
1993-02-03
problem of finding all near-optimal solutions to a linear program. In paper [18], we give a brief and elementary proof of a result of Hoffman [1952] about... relies only on linear programming duality; second, we obtain geometric and algebraic representations of the bounds that are determined explicitly in... same. We have studied the problem of finding the minimum n such that a given unit interval graph is an n-graph. A linear time algorithm to compute
An improved error bound for linear complementarity problems for B-matrices.
Gao, Lei; Li, Chaoqian
2017-01-01
A new error bound for the linear complementarity problem when the matrix involved is a B-matrix is presented, which improves the corresponding result in (Li et al. in Electron. J. Linear Algebra 31(1):476-484, 2016). In addition, some sufficient conditions such that the new bound is sharper than that in (García-Esnaola and Peña in Appl. Math. Lett. 22(7):1071-1075, 2009) are provided.
INFORMS Section on Location Analysis Dissertation Award Submission
DOE Office of Scientific and Technical Information (OSTI.GOV)
Waddell, Lucas
This research effort can be summarized by two main thrusts, each of which has a chapter of the dissertation dedicated to it. First, I pose a novel polyhedral approach for identifying polynomially solvable instances of the QAP based on an application of the reformulation-linearization technique (RLT), a general procedure for constructing mixed 0-1 linear reformulations of 0-1 programs. The feasible region to the continuous relaxation of the level-1 RLT form is a polytope having a highly specialized structure. Every binary solution to the QAP is associated with an extreme point of this polytope, and the objective function value is preserved at each such point. However, there exist extreme points that do not correspond to binary solutions. The key insight is a previously unnoticed and unexpected relationship between the polyhedral structure of the continuous relaxation of the level-1 RLT representation and various classes of readily solvable instances. Specifically, we show that a variety of apparently unrelated solvable cases of the QAP can all be categorized in the following sense: each such case has an objective function which ensures that an optimal solution to the continuous relaxation of the level-1 RLT form occurs at a binary extreme point. Interestingly, there exist instances that are solvable by the level-1 RLT form which do not satisfy the conditions of these cases, so that the level-1 form theoretically identifies a richer family of solvable instances. Second, I focus on instances of the QAP known in the literature as linearizable. An instance of the QAP is defined to be linearizable if and only if the problem can be equivalently written as a linear assignment problem that preserves the objective function value at all feasible solutions. I provide an entirely new polyhedral-based perspective on the concept of linearizability by showing that an instance of the QAP is linearizable if and only if a relaxed version of the continuous relaxation of the level-1 RLT form is bounded. I also show that the level-1 RLT form can identify a richer family of solvable instances than those deemed linearizable by demonstrating that the continuous relaxation of the level-1 RLT form can have an optimal binary solution for instances that are not linearizable. As a byproduct, I use this theoretical framework to explicitly characterize, in closed form, the dimensions of the level-1 RLT form and various other problem relaxations.
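The linearizability definition used above can be written out explicitly; this is the standard formulation and matches the prose definition in the text:

```latex
\text{A QAP instance with quadratic costs } q_{ijkl} \text{ is linearizable} \iff
\exists\, c_{ij} \ \text{such that}\quad
\sum_{i,j,k,l} q_{ijkl}\, x_{ij}\, x_{kl} \;=\; \sum_{i,j} c_{ij}\, x_{ij}
\quad\text{for every permutation matrix } X = (x_{ij}).
```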
On optimal control of linear systems in the presence of multiplicative noise
NASA Technical Reports Server (NTRS)
Joshi, S. M.
1976-01-01
This correspondence considers the problem of optimal regulator design for discrete time linear systems subjected to white state-dependent and control-dependent noise in addition to additive white noise in the input and the observations. A pseudo-deterministic problem is first defined in which multiplicative and additive input disturbances are present, but noise-free measurements of the complete state vector are available. This problem is solved via discrete dynamic programming. Next is formulated the problem in which the number of measurements is less than that of the state variables and the measurements are contaminated with state-dependent noise. The inseparability of control and estimation is brought into focus, and an 'enforced separation' solution is obtained via heuristic reasoning in which the control gains are shown to be the same as those in the pseudo-deterministic problem. An optimal linear state estimator is given in order to implement the controller.
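A minimal sketch of the deterministic ingredient underlying such designs (a finite-horizon discrete-time LQR via the backward Riccati recursion), omitting the multiplicative-noise and estimation terms that are the paper's actual subject; the system matrices are invented.

```python
import numpy as np

A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])
N = 20

P = Q.copy()
gains = []
for _ in range(N):
    # K minimizes the stage cost-to-go; the control law is u_k = -K x_k.
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    P = Q + A.T @ P @ (A - B @ K)    # Riccati recursion, stepping backward in time
    gains.append(K)
print(gains[-1])                     # gain far from the terminal time
```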
Simple Test Functions in Meshless Local Petrov-Galerkin Methods
NASA Technical Reports Server (NTRS)
Raju, Ivatury S.
2016-01-01
Two meshless local Petrov-Galerkin (MLPG) methods based on two different trial functions but that use a simple linear test function were developed for beam and column problems. These methods used generalized moving least squares (GMLS) and radial basis (RB) interpolation functions as trial functions. These two methods were tested on various patch test problems. Both methods passed the patch tests successfully. Then the methods were applied to various beam vibration problems and problems involving Euler and Beck's columns. Both methods yielded accurate solutions for all problems studied. The simple linear test function offers considerable savings in computing efforts as the domain integrals involved in the weak form are avoided. The two methods based on this simple linear test function method produced accurate results for frequencies and buckling loads. Of the two methods studied, the method with radial basis trial functions is very attractive as the method is simple, accurate, and robust.
NASA Technical Reports Server (NTRS)
Stamnes, K.; Lie-Svendsen, O.; Rees, M. H.
1991-01-01
The linear Boltzmann equation can be cast in a form mathematically identical to the radiation-transport equation. A multigroup procedure is used to reduce the energy (or velocity) dependence of the transport equation to a series of one-speed problems. Each of these one-speed problems is equivalent to the monochromatic radiative-transfer problem, and existing software is used to solve this problem in slab geometry. The numerical code conserves particles in elastic collisions. Generic examples are provided to illustrate the applicability of this approach. Although this formalism can, in principle, be applied to a variety of test particle or linearized gas dynamics problems, it is particularly well-suited to study the thermalization of suprathermal particles interacting with a background medium when the thermal motion of the background cannot be ignored. Extensions of the formalism to include external forces and spherical geometry are also feasible.
Distributed Method to Optimal Profile Descent
NASA Astrophysics Data System (ADS)
Kim, Geun I.
Current ground automation tools for Optimal Profile Descent (OPD) procedures utilize path stretching and speed profile changes to maintain proper merging and spacing requirements in high-traffic terminal areas. However, low predictability of an aircraft's vertical profile and path deviations during descent add uncertainty to computing the estimated time of arrival, key information that enables the ground control center to manage airspace traffic effectively. This paper uses an OPD procedure based on a constant flight path angle to increase the predictability of the vertical profile and defines an OPD optimization problem that uses both path stretching and speed profile changes while largely maintaining the original OPD procedure. This problem minimizes the cumulative cost of performing OPD procedures for a group of aircraft by assigning a time cost function to each aircraft and a separation cost function to each pair of aircraft. The OPD optimization problem is then solved in a decentralized manner using dual decomposition techniques under an inter-aircraft ADS-B mechanism. This method divides the optimization problem into more manageable sub-problems, which are then distributed to the group of aircraft. Each aircraft solves its assigned sub-problem and communicates the solutions to the other aircraft in an iterative process until an optimal solution is achieved, thus decentralizing the computation of the optimization problem.
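A toy sketch of the dual-decomposition idea described above, with two aircraft, quadratic time costs and a single separation constraint (all numbers invented): each aircraft solves its own sub-problem for the current multiplier, which is then updated by projected subgradient ascent.

```python
pref = [100.0, 101.0]   # preferred arrival times (s), assumed
sep = 3.0               # required separation t2 - t1 >= sep (s), assumed

lam = 0.0
for _ in range(200):
    # Local sub-problems: each aircraft minimizes (t_i - pref_i)^2 plus its
    # share of the coupling term lam * (t1 + sep - t2).
    t1 = pref[0] - lam / 2.0      # argmin of (t1 - pref[0])**2 + lam * t1
    t2 = pref[1] + lam / 2.0      # argmin of (t2 - pref[1])**2 - lam * t2
    # Projected subgradient ascent on the multiplier of t1 + sep - t2 <= 0.
    lam = max(0.0, lam + 0.1 * (t1 + sep - t2))
print(t1, t2, t2 - t1)            # separation converges to the required 3 s
```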
Convex central configurations for the n-body problem
NASA Astrophysics Data System (ADS)
Xia, Zhihong
We give a simple proof of a classical result of MacMillan and Bartky (Trans. Amer. Math. Soc. 34 (1932) 838) which states that, for any four positive masses and any assigned order, there is a convex planar central configuration. Moreover, we show that the central configurations we find correspond to local minima of the potential function with fixed moment of inertia. This allows us to show that there are at least six local minimum central configurations for the planar four-body problem. We also show that for any assigned order of five masses, there is at least one convex spatial central configuration of local minimum type. Our method also applies to some other cases.
Planning Nurses in Maternity Care: a Stochastic Assignment Problem
NASA Astrophysics Data System (ADS)
Phillipson, Frank
2015-05-01
With 23 percent of all births taking place at home, the Netherlands has the highest rate of home births in the world. Even if the birth does not take place at home, it is not unusual for the mother and child to be out of the hospital within a few hours after the baby is born. The explanation for both is the very well organised maternity care system. However, getting the right maternity care nurse available on time introduces a complex planning issue that can be recognized as a Stochastic Assignment Problem. In this paper, an expert rule based approach is combined with scenario analysis to support the planner of the maternity care agency in his work.
A new neural network model for solving random interval linear programming problems.
Arjmandzadeh, Ziba; Safi, Mohammadreza; Nazemi, Alireza
2017-05-01
This paper presents a neural network model for solving random interval linear programming problems. The original problem involving random interval variable coefficients is first transformed into an equivalent convex second order cone programming problem. A neural network model is then constructed for solving the obtained convex second order cone problem. Employing a Lyapunov function approach, it is also shown that the proposed neural network model is stable in the sense of Lyapunov and is globally convergent to an exact satisfactory solution of the original problem. Several illustrative examples are solved in support of this technique.
NASA Astrophysics Data System (ADS)
Greenough, J. A.; Rider, W. J.
2004-05-01
A numerical study is undertaken comparing a fifth-order version of the weighted essentially non-oscillatory (WENO5) method to a modern piecewise-linear, second-order version of Godunov's method (PLMDE) for the compressible Euler equations. A series of one-dimensional test problems is examined, beginning with classical linear problems and ending with complex shock interactions. The problems considered are: (1) linear advection of a Gaussian pulse in density, (2) Sod's shock tube problem, (3) the "peak" shock tube problem, (4) a version of the Shu and Osher shock entropy wave interaction and (5) the Woodward and Colella interacting shock wave problem. For each problem and method, run times, density error norms and convergence rates are reported, as produced from a common code test-bed. The linear problem exhibits the advertised convergence rate for both methods as well as the expected large disparity in overall error levels; WENO5 has the smaller errors and an enormous advantage in overall efficiency (in accuracy per unit CPU time). For the nonlinear problems with discontinuities, however, we generally see first-order self-convergence of error, measured against an exact solution or, when an analytic solution is not available, against a converged solution generated on an extremely fine grid. The overall comparison of error levels shows some variation from problem to problem. For Sod's shock tube, PLMDE has nearly half the error, while on the peak problem the errors are nearly the same. For the interacting blast wave problem the two methods again produce a similar level of error, with a slight edge for PLMDE. On the other hand, for the Shu-Osher problem, the errors are similar on the coarser grids but favor WENO5 by a factor of nearly 1.5 on the finer grids used. In all cases, holding mesh resolution constant, PLMDE is less costly in terms of CPU time by approximately a factor of 6. If the CPU cost is taken as fixed, that is, run times are equal for both numerical methods, then PLMDE uniformly produces lower errors than WENO5 for the fixed computation cost on the test problems considered here.
PLA realizations for VLSI state machines
NASA Technical Reports Server (NTRS)
Gopalakrishnan, S.; Whitaker, S.; Maki, G.; Liu, K.
1990-01-01
A major problem associated with state assignment procedures for VLSI controllers is obtaining an assignment that produces minimal or near minimal logic. The key item in Programmable Logic Array (PLA) area minimization is the number of unique product terms required by the design equations. This paper presents a state assignment algorithm for minimizing the number of product terms required to implement a finite state machine using a PLA. Partition algebra with predecessor state information is used to derive a near optimal state assignment. A maximum bound on the number of product terms required can be obtained by inspecting the predecessor state information. The state assignment algorithm presented is much simpler than existing procedures and leads to the same number of product terms or less. An area-efficient PLA structure implemented in a 1.0 micron CMOS process is presented along with a summary of the performance for a controller implemented using this design procedure.
Global Optimization of Emergency Evacuation Assignments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Han, Lee; Yuan, Fang; Chin, Shih-Miao
2006-01-01
Conventional emergency evacuation plans often assign evacuees to fixed routes or destinations based mainly on geographic proximity. Such approaches can be inefficient if the roads are congested, blocked, or otherwise dangerous because of the emergency. By not constraining evacuees to prespecified destinations, a one-destination evacuation approach provides flexibility in the optimization process. We present a framework for the simultaneous optimization of evacuation-traffic distribution and assignment. Based on the one-destination evacuation concept, we can obtain the optimal destination and route assignment by solving a one-destination traffic-assignment problem on a modified network representation. In a county-wide, large-scale evacuation case study, the one-destination model yields substantial improvement over the conventional approach, with the overall evacuation time reduced by more than 60 percent. More importantly, emergency planners can easily implement this framework by instructing evacuees to go to destinations that the one-destination optimization process selects.
Solving Fuzzy Optimization Problem Using Hybrid Ls-Sa Method
NASA Astrophysics Data System (ADS)
Vasant, Pandian
2011-06-01
Fuzzy optimization has been one of the most prominent topics within the broad area of computational intelligence. It is especially relevant in the field of fuzzy non-linear programming, and its applications and practical realizations can be seen in all kinds of real-world problems. In this paper, a large-scale non-linear fuzzy programming problem is solved by a hybrid of the optimization techniques Line Search (LS), Simulated Annealing (SA) and Pattern Search (PS). An industrial production planning problem with a cubic objective function, 8 decision variables and 29 constraints has been solved successfully using the LS-SA-PS hybrid optimization technique. The computational results for the objective function with respect to the vagueness factor and level of satisfaction are provided in the form of 2D and 3D plots. The outcome is very promising and strongly suggests that the hybrid LS-SA-PS algorithm is efficient and productive in solving large-scale non-linear fuzzy programming problems.
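Only the simulated-annealing ingredient is sketched below (via SciPy's dual_annealing on an invented small objective), not the paper's LS-SA-PS hybrid or its 8-variable production planning model.

```python
import numpy as np
from scipy.optimize import dual_annealing

def objective(x):
    # Smooth bowl plus small oscillations, so local search alone can get stuck.
    return np.sum((x - 1.0) ** 2) + 0.1 * np.sum(np.sin(5.0 * x) ** 2)

bounds = [(-5.0, 5.0)] * 4
res = dual_annealing(objective, bounds)
print(res.x, res.fun)
```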
Particle swarm optimization - Genetic algorithm (PSOGA) on linear transportation problem
NASA Astrophysics Data System (ADS)
Rahmalia, Dinita
2017-08-01
The Linear Transportation Problem (LTP) is a constrained optimization problem in which we want to minimize cost subject to the balance between total supply and total demand. Exact methods such as the northwest corner, Vogel, Russell, and minimal cost methods have been applied to approach the optimal solution. In this paper, we use a heuristic, Particle Swarm Optimization (PSO), for solving the linear transportation problem with any number of decision variables. In addition, we combine the mutation operator of the Genetic Algorithm (GA) with PSO to improve the optimal solution. This method is called Particle Swarm Optimization - Genetic Algorithm (PSOGA). The simulations show that PSOGA can improve the optimal solution obtained by PSO.
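For reference, the underlying linear transportation problem itself can be solved exactly as an LP; the sketch below does so with SciPy on an invented balanced instance and is the exact baseline that heuristics such as PSOGA approximate, not the PSOGA method.

```python
import numpy as np
from scipy.optimize import linprog

cost = np.array([[4.0, 6.0, 8.0],
                 [5.0, 3.0, 7.0]])          # unit shipping costs (assumed)
supply = np.array([30.0, 40.0])
demand = np.array([20.0, 25.0, 25.0])       # balanced: total supply equals total demand

m, n = cost.shape
A_eq = np.zeros((m + n, m * n))
for i in range(m):
    A_eq[i, i * n:(i + 1) * n] = 1.0        # each source ships exactly its supply
for j in range(n):
    A_eq[m + j, j::n] = 1.0                 # each sink receives exactly its demand
b_eq = np.concatenate([supply, demand])

res = linprog(cost.ravel(), A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * (m * n))
print(res.x.reshape(m, n), res.fun)
```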
Anomaly General Circulation Models.
NASA Astrophysics Data System (ADS)
Navarra, Antonio
The feasibility of the anomaly model is assessed using barotropic and baroclinic models. In the barotropic case, both a stationary and a time-dependent model have been formulated and constructed, whereas only the stationary, linear case is considered in the baroclinic case. Results from the barotropic model indicate that a relation between the stationary solution and the time-averaged non-linear solution exists. The stationary linear baroclinic solution can therefore be considered with some confidence. The linear baroclinic anomaly model poses a formidable mathematical problem because it is necessary to solve a gigantic linear system to obtain the solution. A new method for finding the solution of large linear systems, based on a projection onto the Krylov subspace, is shown to be successful when applied to the linearized baroclinic anomaly model. The scheme consists of projecting the original linear system onto the Krylov subspace, thereby reducing the dimensionality of the matrix to be inverted to obtain the solution. With an appropriate setting of the damping parameters, the iterative Krylov method reaches a solution even using a Krylov subspace ten times smaller than the original space of the problem. This generality allows the treatment of the important problem of linear waves in the atmosphere. A larger class (non-zonally symmetric) of basic states can now be treated for the baroclinic primitive equations. These problems lead to large unsymmetric linear systems of order 10000 and more, which can now be successfully tackled by the Krylov method. The (R7) linear anomaly model is used to investigate extensively the linear response to equatorial and mid-latitude prescribed heating. The results indicate that the solution is deeply affected by the presence of the stationary waves in the basic state. The instability of the asymmetric flows, first pointed out by Simmons et al. (1983), is active also in the baroclinic case. However, the presence of baroclinic processes modifies the dominant response. The most sensitive areas are identified; they correspond to north Japan, the Pole and Greenland regions. A limited set of higher resolution (R15) experiments indicates that this situation is still present and enhanced at higher resolution. The linear anomaly model is also applied to a realistic case. (Abstract shortened with permission of author.)
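The Krylov-subspace idea described above can be illustrated with an off-the-shelf solver; the sketch below applies restarted GMRES to an invented sparse unsymmetric system of order 10000 and is not the dissertation's own implementation.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import gmres

n = 10000
# Assumed test matrix: unsymmetric and diagonally dominant, so GMRES converges easily.
A = sp.diags([-1.0, 4.0, -2.0], offsets=[-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

x, info = gmres(A, b, restart=30)     # restart keeps the Krylov basis small
print(info, np.linalg.norm(A @ x - b))   # info == 0 means converged to default tolerance
```

Restarting keeps the Krylov basis much smaller than the problem dimension, the same memory-versus-convergence trade-off reflected in the abstract's use of a subspace ten times smaller than the original space.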