Sample records for redundancy allocation problem

  1. Redundancy allocation problem for k-out-of-n systems with a choice of redundancy strategies

    NASA Astrophysics Data System (ADS)

    Aghaei, Mahsa; Zeinal Hamadani, Ali; Abouei Ardakan, Mostafa

    2017-03-01

    Using redundant components is a common method for increasing the reliability of a system, and the associated optimization problem is called the redundancy allocation problem (RAP). Some RAP studies have focused on k-out-of-n systems. However, all of these studies assumed a predetermined active or standby strategy for each subsystem. In this paper, for the first time, we propose a k-out-of-n system with a choice of redundancy strategies. Specifically, a k-out-of-n series-parallel system is considered in which the redundancy strategy can be chosen for each subsystem. In other words, in the proposed model, the redundancy strategy is an additional decision variable, and an exact method based on integer programming is used to obtain the optimal solution of the problem. As the optimization of the RAP belongs to the NP-hard class of problems, a modified version of the genetic algorithm (GA) is also developed. The exact method and the proposed GA are applied to a well-known test problem, and the results demonstrate the efficiency of the new approach compared with previous studies.
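
    A minimal Python sketch of the two subsystem reliability models that make the strategy choice meaningful, assuming identical components with exponential lifetimes and perfect switching in the cold-standby case; the failure rate, mission time and (n, k) values below are illustrative, not taken from the paper:

    ```python
    from math import comb, exp, factorial

    def r_active(n, k, r):
        """Reliability of a k-out-of-n subsystem with hot (active) redundancy
        and identical components of reliability r."""
        return sum(comb(n, j) * r**j * (1 - r)**(n - j) for j in range(k, n + 1))

    def r_cold_standby(n, k, lam, t):
        """k-out-of-n subsystem with cold standby and perfect switching: k units
        run at a time, failures arrive as a Poisson process of rate k*lam, and
        the subsystem survives while at most n-k units have failed."""
        return sum(exp(-k * lam * t) * (k * lam * t)**j / factorial(j)
                   for j in range(n - k + 1))

    # Treat the redundancy strategy as a decision variable: for each subsystem,
    # keep whichever strategy gives the higher reliability at mission time t.
    t, lam = 100.0, 0.004          # hypothetical mission time and failure rate
    r = exp(-lam * t)              # component reliability for the active case
    for n, k in [(3, 2), (5, 3)]:
        ra, rs = r_active(n, k, r), r_cold_standby(n, k, lam, t)
        print(n, k, round(ra, 4), round(rs, 4), "standby" if rs > ra else "active")
    ```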

  2. A GA based penalty function technique for solving constrained redundancy allocation problem of series system with interval valued reliability of components

    NASA Astrophysics Data System (ADS)

    Gupta, R. K.; Bhunia, A. K.; Roy, D.

    2009-10-01

    In this paper, we consider the problem of constrained redundancy allocation for a series system with interval-valued reliability of components. To maximize the overall system reliability under limited resource constraints, the problem is formulated as an unconstrained integer programming problem with interval coefficients by the penalty function technique and solved by an advanced GA for integer variables with an interval fitness function, tournament selection, uniform crossover, uniform mutation, and elitism. As a special case, the corresponding problem has been solved when the lower and upper bounds of the interval-valued reliabilities of the components coincide. The model has been illustrated with some numerical examples, and the results of the series redundancy allocation problem with fixed component reliabilities have been compared with existing results available in the literature. Finally, sensitivity analyses have been shown graphically to study the stability of the developed GA with respect to different GA parameters.
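
    A toy sketch of the penalty-function idea under stated assumptions: interval reliabilities are propagated through a series system of parallel stages, intervals are ranked by their centre value, and any cost-constraint violation is penalised inside the GA fitness; the component data, budget and GA settings are invented for illustration:

    ```python
    import random

    # Interval-valued component reliabilities [lower, upper] and per-unit costs
    # (illustrative numbers, not taken from the paper).
    comp = [((0.70, 0.80), 2.0), ((0.75, 0.85), 3.0), ((0.80, 0.90), 1.5)]
    BUDGET, BIG = 18.0, 1e3

    def fitness(n):
        """Interval system reliability of a series system with n[i] parallel
        units in stage i, minus a penalty when the cost constraint is violated."""
        lo = hi = 1.0
        cost = 0.0
        for ((rl, ru), c), ni in zip(comp, n):
            lo *= 1.0 - (1.0 - rl) ** ni
            hi *= 1.0 - (1.0 - ru) ** ni
            cost += c * ni
        penalty = BIG * max(0.0, cost - BUDGET)   # penalty function technique
        return 0.5 * (lo + hi) - penalty          # rank intervals by their centre

    def evolve(pop_size=30, gens=200, nmax=6):
        """Tiny integer GA with tournament selection, uniform crossover,
        uniform mutation and elitism."""
        pop = [[random.randint(1, nmax) for _ in comp] for _ in range(pop_size)]
        for _ in range(gens):
            nxt = [max(pop, key=fitness)]                        # elitism
            while len(nxt) < pop_size:
                a, b = (max(random.sample(pop, 2), key=fitness) for _ in range(2))
                child = [x if random.random() < 0.5 else y for x, y in zip(a, b)]
                if random.random() < 0.2:                        # uniform mutation
                    child[random.randrange(len(child))] = random.randint(1, nmax)
                nxt.append(child)
            pop = nxt
        return max(pop, key=fitness)

    print(evolve())
    ```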

  3. Quadratic Programming for Allocating Control Effort

    NASA Technical Reports Server (NTRS)

    Singh, Gurkirpal

    2005-01-01

    A computer program calculates an optimal allocation of control effort in a system that includes redundant control actuators. The program implements an iterative (but otherwise single-stage) algorithm of the quadratic-programming type. In general, in the quadratic-programming problem, one seeks the values of a set of variables that minimize a quadratic cost function, subject to a set of linear equality and inequality constraints. In this program, the cost function combines control effort (typically quantified in terms of energy or fuel consumed) and control residuals (differences between commanded and sensed values of variables to be controlled). In comparison with prior control-allocation software, this program offers approximately equal accuracy but much greater computational efficiency. In addition, this program offers flexibility, robustness to actuation failures, and a capability for selective enforcement of control requirements. The computational efficiency of this program makes it suitable for such complex, real-time applications as controlling redundant aircraft actuators or redundant spacecraft thrusters. The program is written in the C language for execution in a UNIX operating system.
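
    A hedged sketch of the kind of quadratic cost such an allocator minimizes, trading control residual against control effort under box limits on each effector; the solver below is a generic projected-gradient loop on made-up numbers, not the iterative single-stage algorithm of the program itself:

    ```python
    import numpy as np

    def allocate(B, d, u_min, u_max, lam=1e-3, iters=2000):
        """Minimize ||B u - d||^2 + lam*||u||^2 subject to u_min <= u <= u_max
        (residual vs. effort trade-off) by projected gradient descent."""
        n = B.shape[1]
        H = B.T @ B + lam * np.eye(n)          # Hessian of the quadratic cost
        step = 1.0 / np.linalg.norm(H, 2)      # safe step: 1 / largest eigenvalue
        u = np.zeros(n)
        for _ in range(iters):
            grad = H @ u - B.T @ d
            u = np.clip(u - step * grad, u_min, u_max)   # project onto actuator limits
        return u

    # Four redundant effectors producing three commanded moments (made-up numbers).
    B = np.array([[1.0, 0.5, -0.3, 0.0],
                  [0.0, 1.0,  0.4, -0.6],
                  [0.2, 0.0,  1.0,  0.8]])
    d = np.array([0.7, -0.4, 0.9])
    u = allocate(B, d, u_min=-np.ones(4), u_max=np.ones(4))
    print(u, B @ u)
    ```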

  4. A multiprocessor computer simulation model employing a feedback scheduler/allocator for memory space and bandwidth matching and TMR processing

    NASA Technical Reports Server (NTRS)

    Bradley, D. B.; Irwin, J. D.

    1974-01-01

    A computer simulation model for a multiprocessor computer is developed that is useful for studying the problem of matching a multiprocessor's memory space, memory bandwidth, and number and speed of processors with aggregate job-set characteristics. The model assumes an input workload of a set of recurrent jobs. The model includes a feedback scheduler/allocator which attempts to improve system performance through higher memory bandwidth utilization by matching individual job requirements for space and bandwidth with space availability and estimates of bandwidth availability at the times of memory allocation. The simulation model includes provisions for specifying precedence relations among the jobs in a job set, and provisions for specifying execution of TMR (Triple Modular Redundant) and SIMPLEX (non-redundant) jobs.

  5. A hybrid Jaya algorithm for reliability-redundancy allocation problems

    NASA Astrophysics Data System (ADS)

    Ghavidel, Sahand; Azizivahed, Ali; Li, Li

    2018-04-01

    This article proposes an efficient improved hybrid Jaya algorithm based on time-varying acceleration coefficients (TVACs) and the learning phase introduced in teaching-learning-based optimization (TLBO), named the LJaya-TVAC algorithm, for solving various types of nonlinear mixed-integer reliability-redundancy allocation problems (RRAPs) and standard real-parameter test functions. RRAPs include series, series-parallel, complex (bridge) and overspeed protection systems. The search power of the proposed LJaya-TVAC algorithm for finding the optimal solutions is first tested on the standard real-parameter unimodal and multi-modal functions with dimensions of 30-100, and then tested on various types of nonlinear mixed-integer RRAPs. The results are compared with the original Jaya algorithm and the best results reported in the recent literature. The optimal results obtained with the proposed LJaya-TVAC algorithm provide evidence for its better and acceptable optimization performance compared to the original Jaya algorithm and other reported optimal results.
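
    For reference, a sketch of the basic Jaya update that the LJaya-TVAC algorithm builds on; the time-varying acceleration coefficients and the TLBO-style learner phase described in the article are omitted, and the sphere function stands in for the RRAP objectives:

    ```python
    import numpy as np

    def jaya(f, dim=30, pop=20, lo=-100.0, hi=100.0, iters=500, seed=0):
        """Basic Jaya: move each solution toward the best and away from the
        worst, x' = x + r1*(best - |x|) - r2*(worst - |x|), keeping improvements."""
        rng = np.random.default_rng(seed)
        X = rng.uniform(lo, hi, (pop, dim))
        F = np.apply_along_axis(f, 1, X)
        for _ in range(iters):
            best, worst = X[F.argmin()], X[F.argmax()]
            r1, r2 = rng.random((pop, dim)), rng.random((pop, dim))
            Xn = np.clip(X + r1 * (best - np.abs(X)) - r2 * (worst - np.abs(X)), lo, hi)
            Fn = np.apply_along_axis(f, 1, Xn)
            better = Fn < F                      # greedy replacement
            X[better], F[better] = Xn[better], Fn[better]
        return X[F.argmin()], F.min()

    x, fx = jaya(lambda v: float(np.sum(v * v)))   # sphere test function
    print(fx)
    ```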

  6. A Framework for Optimal Control Allocation with Structural Load Constraints

    NASA Technical Reports Server (NTRS)

    Frost, Susan A.; Taylor, Brian R.; Jutte, Christine V.; Burken, John J.; Trinh, Khanh V.; Bodson, Marc

    2010-01-01

    Conventional aircraft generally employ mixing algorithms or lookup tables to determine control surface deflections needed to achieve moments commanded by the flight control system. Control allocation is the problem of converting desired moments into control effector commands. Next generation aircraft may have many multipurpose, redundant control surfaces, adding considerable complexity to the control allocation problem. These issues can be addressed with optimal control allocation. Most optimal control allocation algorithms have control surface position and rate constraints. However, these constraints are insufficient to ensure that the aircraft's structural load limits will not be exceeded by commanded surface deflections. In this paper, a framework is proposed to enable a flight control system with optimal control allocation to incorporate real-time structural load feedback and structural load constraints. A proof-of-concept demonstration of the framework in a simulation of a generic transport aircraft is presented.

  7. Optimum allocation of redundancy among subsystems connected in series. Ph.D. Thesis - Case Western Reserve Univ., Sep. 1970

    NASA Technical Reports Server (NTRS)

    Bien, D. D.

    1973-01-01

    This analysis considers the optimum allocation of redundancy in a system of serially connected subsystems in which each subsystem is of the k-out-of-n type. Redundancy is optimally allocated when: (1) reliability is maximized for given costs; or (2) costs are minimized for given reliability. Several techniques are presented for achieving optimum allocation and their relative merits are discussed. Approximate solutions in closed form were attainable only for the special case of series-parallel systems and the efficacy of these approximations is discussed.
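
    A small exhaustive-search sketch of objective (1), maximizing the reliability of serially connected k-out-of-n subsystems for a given cost budget; the stage data and budget are invented, and realistic instances need the thesis's optimization techniques rather than enumeration:

    ```python
    from itertools import product
    from math import comb

    def subsystem_rel(n, k, r):
        """k-out-of-n subsystem reliability with identical components of reliability r."""
        return sum(comb(n, j) * r**j * (1 - r)**(n - j) for j in range(k, n + 1))

    # Three serially connected subsystems: (k, component reliability, unit cost).
    stages = [(1, 0.90, 2.0), (2, 0.85, 3.0), (1, 0.95, 4.0)]
    BUDGET = 30.0

    best = None
    for ns in product(range(1, 6), range(2, 7), range(1, 6)):  # candidate n per stage
        cost = sum(c * n for (_, _, c), n in zip(stages, ns))
        if cost > BUDGET:
            continue
        rel = 1.0
        for (k, r, _), n in zip(stages, ns):
            rel *= subsystem_rel(n, k, r)
        if best is None or rel > best[0]:
            best = (rel, ns, cost)

    print(best)   # highest system reliability achievable within the budget
    ```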

  8. A novel artificial fish swarm algorithm for solving large-scale reliability-redundancy application problem.

    PubMed

    He, Qiang; Hu, Xiangtao; Ren, Hong; Zhang, Hongqi

    2015-11-01

    A novel artificial fish swarm algorithm (NAFSA) is proposed for solving the large-scale reliability-redundancy allocation problem (RAP). In NAFSA, the social behaviors of the fish swarm are classified in three ways: foraging behavior, reproductive behavior, and random behavior. Two position-updating strategies are designed for the foraging behavior, and selection and crossover operators are applied to define the reproductive ability of an artificial fish. For the random behavior, which is essentially a mutation strategy, the basic cloud generator is used as the mutation operator. Finally, numerical results for four benchmark problems and a large-scale RAP are reported and compared. NAFSA shows good performance in terms of computational accuracy and computational efficiency for the large-scale RAP. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.

  9. A new way to improve the robustness of complex communication networks by allocating redundancy links

    NASA Astrophysics Data System (ADS)

    Shi, Chunhui; Peng, Yunfeng; Zhuo, Yue; Tang, Jieying; Long, Keping

    2012-03-01

    We investigate the robustness of complex communication networks when allocating redundant links. The protecting key nodes (PKN) strategy is proposed to improve the robustness of complex communication networks against intentional attack. Our numerical simulations show that allocating a few redundant links among key nodes using the PKN strategy significantly increases the robustness of scale-free complex networks. We have also theoretically proved and demonstrated the effectiveness of the PKN strategy. We expect that our work will help achieve a better understanding of communication networks.

  10. Consideration of plant behaviour in optimal servo-compensator design

    NASA Astrophysics Data System (ADS)

    Moase, W. H.; Manzie, C.

    2016-07-01

    Where the most prevalent optimal servo-compensator formulations penalise the behaviour of an error system, this paper considers the problem of additionally penalising the actual states and inputs of the plant. Doing so has the advantage of enabling the penalty function to better resemble an economic cost. This is especially true of problems where control effort needs to be sensibly allocated across weakly redundant inputs or where one wishes to use penalties to soft-constrain certain states or inputs. It is shown that, although the resulting cost function grows unbounded as its horizon approaches infinity, it is possible to formulate an equivalent optimisation problem with a bounded cost. The resulting optimisation problem is similar to those in earlier studies but has an additional 'correction term' in the cost function, and a set of equality constraints that arise when there are redundant inputs. A numerical approach to solve the resulting optimisation problem is presented, followed by simulations on a micro-macro positioner that illustrate the benefits of the proposed servo-compensator design approach.

  11. An overview of the artificial intelligence and expert systems component of RICIS

    NASA Technical Reports Server (NTRS)

    Feagin, Terry

    1987-01-01

    Artificial intelligence and expert systems are an important component of the RICIS (Research Institute for Computing and Information Systems) research program. For space applications, a number of problem areas that should be able to make good use of these tools include: resource allocation and management, control and monitoring, environmental control and life support, power distribution, communications scheduling, orbit and attitude maintenance, redundancy management, intelligent man-machine interfaces, and fault detection, isolation, and recovery.

  12. Design of safety-oriented control allocation strategies for overactuated electric vehicles

    NASA Astrophysics Data System (ADS)

    de Castro, Ricardo; Tanelli, Mara; Esteves Araújo, Rui; Savaresi, Sergio M.

    2014-08-01

    The new vehicle platforms for electric vehicles (EVs) that are becoming available are characterised by actuator redundancy, which makes it possible to jointly optimise different aspects of the vehicle motion. To do this, high-level control objectives are first specified and solved with appropriate control strategies. Then, the resulting virtual control action must be translated into actual actuator commands by a control allocation layer that takes care of computing the forces to be applied at the wheels. This step, in general, is quite demanding as far as computational complexity is considered. In this work, a safety-oriented approach to this problem is proposed. Specifically, a four-wheel steer EV with four in-wheel motors is considered, and the high-level motion controller is designed within a sliding mode framework with conditional integrators. For distributing the forces among the tyres, two control allocation approaches are investigated. The first, based on the extension of the cascading generalised inverse method, is computationally efficient but shows some limitations in dealing with unfeasible force values. To solve the problem, a second allocation algorithm is proposed, which relies on the linearisation of the tyre-road friction constraints. Extensive tests, carried out in the CarSim simulation environment, demonstrate the effectiveness of the proposed approach.

  13. Optimal bit allocation for hybrid scalable/multiple-description video transmission over wireless channels

    NASA Astrophysics Data System (ADS)

    Jubran, Mohammad K.; Bansal, Manu; Kondi, Lisimachos P.

    2006-01-01

    In this paper, we consider the problem of optimal bit allocation for wireless video transmission over fading channels. We use a newly developed hybrid scalable/multiple-description codec that combines the functionality of both scalable and multiple-description codecs. It produces a base layer and multiple-description enhancement layers. Any of the enhancement layers can be decoded (in a non-hierarchical manner) with the base layer to improve the reconstructed video quality. Two different channel coding schemes (Rate-Compatible Punctured Convolutional (RCPC)/Cyclic Redundancy Check (CRC) coding, and product code Reed-Solomon (RS)+RCPC/CRC coding) are used for unequal error protection of the layered bitstream. Optimal allocation of the bitrate between source and channel coding is performed for discrete sets of source coding rates and channel coding rates. Experimental results are presented for a wide range of channel conditions. Also, comparisons with classical scalable coding show the effectiveness of using hybrid scalable/multiple-description coding for wireless transmission.

  14. Linear Quadratic Tracking Design for a Generic Transport Aircraft with Structural Load Constraints

    NASA Technical Reports Server (NTRS)

    Burken, John J.; Frost, Susan A.; Taylor, Brian R.

    2011-01-01

    When designing control laws for systems with constraints added to the tracking performance, control allocation methods can be utilized. Control allocation methods are used when there are more command inputs than controlled variables. Constraints that require allocators include surface saturation limits, structural load limits, drag reduction constraints, and actuator failures. Most transport aircraft have many actuated surfaces compared to the three controlled variables (such as angle of attack, roll rate, and sideslip angle). To distribute the control effort among the redundant set of actuators, either a fixed-mixer approach or online control allocation techniques can be utilized. The benefit of an online allocator is that constraints can be considered in the design, whereas the fixed mixer cannot. However, an online control allocator has the disadvantage of not guaranteeing a surface schedule, which can then produce ill-defined loads on the aircraft. The load uncertainty and complexity have prevented some controller designs from using advanced allocation techniques. This paper considers actuator redundancy management for a class of over-actuated systems with real-time structural load limits, using linear quadratic tracking applied to the generic transport model. A roll maneuver example with an artificial load limit constraint is shown and compared to the same maneuver with no load limitation.

  15. A PC program to optimize system configuration for desired reliability at minimum cost

    NASA Technical Reports Server (NTRS)

    Hills, Steven W.; Siahpush, Ali S.

    1994-01-01

    High reliability is desired in all engineered systems. One way to improve system reliability is to use redundant components. When redundant components are used, the problem becomes one of allocating them to achieve the best reliability without exceeding other design constraints such as cost, weight, or volume. Systems with few components can be optimized by simply examining every possible combination but the number of combinations for most systems is prohibitive. A computerized iteration of the process is possible but anything short of a super computer requires too much time to be practical. Many researchers have derived mathematical formulations for calculating the optimum configuration directly. However, most of the derivations are based on continuous functions whereas the real system is composed of discrete entities. Therefore, these techniques are approximations of the true optimum solution. This paper describes a computer program that will determine the optimum configuration of a system of multiple redundancy of both standard and optional components. The algorithm is a pair-wise comparative progression technique which can derive the true optimum by calculating only a small fraction of the total number of combinations. A designer can quickly analyze a system with this program on a personal computer.

  16. Study of Fuze Structure and Reliability Design Based on the Direct Search Method

    NASA Astrophysics Data System (ADS)

    Lin, Zhang; Ning, Wang

    2017-03-01

    Redundant design is one of the important methods to improve the reliability of a system, but mutual coupling of multiple factors is often involved in the design. In this study, the Direct Search Method is introduced into the optimization of the redundancy configuration, in which reliability, cost, structural weight, and other factors can be taken into account simultaneously, and the redundancy allocation and reliability design of a critical aircraft system are computed. The results show that this method is convenient and workable, applicable to the redundancy configuration and optimization of various designs upon appropriate modifications, and of good practical value.

  17. An Efficient, Lossless Database for Storing and Transmitting Medical Images

    NASA Technical Reports Server (NTRS)

    Fenstermacher, Marc J.

    1998-01-01

    This research aimed at creating new compression methods based on the central idea of Set Redundancy Compression (SRC). Set Redundancy refers to the common information that exists in a set of similar images. SRC compression methods take advantage of this common information and can achieve improved compression of similar images by reducing their Set Redundancy. The current research resulted in the development of three new lossless SRC compression methods: MARS (Median-Aided Region Sorting), MAZE (Max-Aided Zero Elimination) and MaxGBA (Max-Guided Bit Allocation).

  18. Computationally efficient control allocation

    NASA Technical Reports Server (NTRS)

    Durham, Wayne (Inventor)

    2001-01-01

    A computationally efficient method for calculating near-optimal solutions to the three-objective, linear control allocation problem is disclosed. The control allocation problem is that of distributing the effort of redundant control effectors to achieve some desired set of objectives. The problem is deemed linear if control effectiveness is affine with respect to the individual control effectors. The optimal solution is that which exploits the collective maximum capability of the effectors within their individual physical limits. Computational efficiency is measured by the number of floating-point operations required for solution. The method presented returned optimal solutions in more than 90% of the cases examined; non-optimal solutions returned by the method were typically much less than 1% different from optimal, and the errors tended to become smaller than 0.01% as the number of controls was increased. The magnitude of the errors returned by the present method was much smaller than those that resulted from either pseudoinverse or cascaded generalized inverse solutions. The computational complexity of the method presented varied linearly with increasing numbers of controls; it ran from 5.5 to seven times faster than did the minimum-norm solution (the pseudoinverse), and at about the same rate as did the cascaded generalized inverse solution. The computational requirements of the method presented were much better than those of previously described facet-searching methods, which increase in proportion to the square of the number of controls.
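
    For context, a sketch of the cascaded generalized inverse baseline mentioned above (not the patented facet-based method): allocate with the pseudoinverse, pin any saturated effectors at their limits, and re-solve for the remaining effectors; the effectiveness matrix and limits are made up:

    ```python
    import numpy as np

    def cascaded_generalized_inverse(B, d, u_min, u_max):
        """Pseudoinverse allocation with successive clipping of saturated effectors."""
        n = B.shape[1]
        u = np.zeros(n)
        free = np.ones(n, dtype=bool)
        for _ in range(n):
            resid = d - B[:, ~free] @ u[~free]            # moment left for free effectors
            u[free] = np.linalg.pinv(B[:, free]) @ resid  # minimum-norm sub-solution
            sat = free & ((u < u_min) | (u > u_max))
            if not sat.any():
                break
            u[sat] = np.clip(u[sat], u_min[sat], u_max[sat])
            free &= ~sat
            if not free.any():
                break
        return u

    B = np.array([[1.0, 0.6, -0.4, 0.2],
                  [0.0, 1.0,  0.5, -0.7],
                  [0.3, 0.0,  1.0,  0.9]])
    d = np.array([1.2, -0.8, 1.5])
    lim = np.ones(4)
    print(cascaded_generalized_inverse(B, d, -lim, lim))
    ```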

  19. A comparison of Heuristic method and Llewellyn’s rules for identification of redundant constraints

    NASA Astrophysics Data System (ADS)

    Estiningsih, Y.; Farikhin; Tjahjana, R. H.

    2018-03-01

    An important technique in linear programming is the modelling and solving of practical optimization problems. Redundant constraints are considered for their effects on general linear programming problems. Identifying and removing redundant constraints avoids all the calculations associated with them when solving the linear programming problem. Many methods have been proposed for the identification of redundant constraints. This paper presents a comparison of a heuristic method and Llewellyn's rules for the identification of redundant constraints.

  20. The Problem of Empirical Redundancy of Constructs in Organizational Research: An Empirical Investigation

    ERIC Educational Resources Information Center

    Le, Huy; Schmidt, Frank L.; Harter, James K.; Lauver, Kristy J.

    2010-01-01

    Construct empirical redundancy may be a major problem in organizational research today. In this paper, we explain and empirically illustrate a method for investigating this potential problem. We applied the method to examine the empirical redundancy of job satisfaction (JS) and organizational commitment (OC), two well-established organizational…

  1. Harmony search algorithm: application to the redundancy optimization problem

    NASA Astrophysics Data System (ADS)

    Nahas, Nabil; Thien-My, Dao

    2010-09-01

    The redundancy optimization problem is a well-known NP-hard problem which involves the selection of elements and redundancy levels to maximize system performance, given different system-level constraints. This article presents an efficient algorithm based on the harmony search algorithm (HSA) to solve this optimization problem. The HSA is a new nature-inspired algorithm which mimics the improvisation process of music players. Two kinds of problems are considered in testing the proposed algorithm, with the first limited to the binary series-parallel system, where the problem consists of a selection of elements and redundancy levels used to maximize the system reliability given various system-level constraints; the second problem for its part concerns the multi-state series-parallel systems with performance levels ranging from perfect operation to complete failure, and in which identical redundant elements are included in order to achieve a desirable level of availability. Numerical results for test problems from previous research are reported and compared. The results of HSA showed that this algorithm could provide very good solutions when compared to those obtained through other approaches.
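
    A compact sketch of the core HSA improvisation loop (harmony memory consideration, pitch adjustment, random selection); it is shown on a continuous toy function, whereas the article applies the algorithm to discrete element and redundancy-level choices:

    ```python
    import random

    def harmony_search(f, bounds, hms=10, hmcr=0.9, par=0.3, iters=2000, seed=1):
        """Improvise a new harmony from memory (prob. hmcr, optionally pitch-adjusted
        with prob. par) or at random, and replace the worst stored harmony if better."""
        random.seed(seed)
        hm = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(hms)]
        for _ in range(iters):
            new = []
            for j, (lo, hi) in enumerate(bounds):
                if random.random() < hmcr:
                    x = random.choice(hm)[j]                  # memory consideration
                    if random.random() < par:                 # pitch adjustment
                        x += random.uniform(-1, 1) * 0.01 * (hi - lo)
                    x = min(max(x, lo), hi)
                else:
                    x = random.uniform(lo, hi)                # random selection
                new.append(x)
            worst = max(hm, key=f)
            if f(new) < f(worst):
                hm[hm.index(worst)] = new
        return min(hm, key=f)

    print(harmony_search(lambda v: sum(x * x for x in v), [(-5, 5)] * 4))
    ```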

  2. Redundant interferometric calibration as a complex optimization problem

    NASA Astrophysics Data System (ADS)

    Grobler, T. L.; Bernardi, G.; Kenyon, J. S.; Parsons, A. R.; Smirnov, O. M.

    2018-05-01

    Observations of the redshifted 21 cm line from the epoch of reionization have recently motivated the construction of low-frequency radio arrays with highly redundant configurations. These configurations provide an alternative calibration strategy - `redundant calibration' - and boost sensitivity on specific spatial scales. In this paper, we formulate calibration of redundant interferometric arrays as a complex optimization problem. We solve this optimization problem via the Levenberg-Marquardt algorithm. This calibration approach is more robust to initial conditions than current algorithms and, by leveraging an approximate matrix inversion, allows for further optimization and an efficient implementation (`redundant STEFCAL'). We also investigated using the preconditioned conjugate gradient method as an alternative to the approximate matrix inverse, but found that its computational performance is not competitive with respect to `redundant STEFCAL'. The efficient implementation of this new algorithm is made publicly available.

  3. Boeing flight deck design philosophy

    NASA Technical Reports Server (NTRS)

    Stoll, Harty

    1990-01-01

    Information relative to Boeing flight deck design philosophy is given in viewgraph form. Flight deck design rules, design considerations, functions allocated to the crew, redundancy and automation concerns, and examples of accident data that were reviewed are listed.

  4. Cartesian control of redundant robots

    NASA Technical Reports Server (NTRS)

    Colbaugh, R.; Glass, K.

    1989-01-01

    A Cartesian-space position/force controller is presented for redundant robots. The proposed control structure partitions the control problem into a nonredundant position/force trajectory tracking problem and a redundant mapping problem between the Cartesian control input F ∈ R^m and the robot actuator torque T ∈ R^n (for redundant robots, m < n). The underdetermined nature of the F → T map is exploited so that the robot redundancy is utilized to improve the dynamic response of the robot. This dynamically optimal F → T map is implemented locally (in time) so that it is computationally efficient for on-line control; however, it is shown that the map possesses globally optimal characteristics. Additionally, it is demonstrated that the dynamically optimal F → T map can be modified so that the robot redundancy is used to simultaneously improve the dynamic response and realize any specified kinematic performance objective (e.g., manipulability maximization or obstacle avoidance). Computer simulation results are given for a four-degree-of-freedom planar redundant robot under Cartesian control, and demonstrate that position/force trajectory tracking and effective redundancy utilization can be achieved simultaneously with the proposed controller.

  5. Reinforcement Learning with Orthonormal Basis Adaptation Based on Activity-Oriented Index Allocation

    NASA Astrophysics Data System (ADS)

    Satoh, Hideki

    An orthonormal basis adaptation method for function approximation was developed and applied to reinforcement learning with multi-dimensional continuous state space. First, a basis used for linear function approximation of a control function is set to an orthonormal basis. Next, basis elements with small activities are replaced with other candidate elements as learning progresses. As this replacement is repeated, the number of basis elements with large activities increases. Example chaos control problems for multiple logistic maps were solved, demonstrating that the method for adapting an orthonormal basis can modify a basis while holding the orthonormality in accordance with changes in the environment to improve the performance of reinforcement learning and to eliminate the adverse effects of redundant noisy states.

  6. Input relegation control for gross motion of a kinematically redundant manipulator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Unseren, M.A.

    1992-10-01

    This report proposes a method for resolving the kinematic redundancy of a serial link manipulator moving in a three-dimensional workspace. The underspecified problem of solving for the joint velocities based on the classical kinematic velocity model is transformed into a well-specified problem. This is accomplished by augmenting the original model with additional equations which relate a new vector variable quantifying the redundant degrees of freedom (DOF) to the joint velocities. The resulting augmented system yields a well-specified solution for the joint velocities. Methods for selecting the redundant DOF quantifying variable and the transformation matrix relating it to the joint velocities are presented so as to obtain a minimum Euclidean norm solution for the joint velocities. The approach is also applied to the problem of resolving the kinematic redundancy at the acceleration level. Upon resolving the kinematic redundancy, a rigid body dynamical model governing the gross motion of the manipulator is derived. A control architecture is suggested which, according to the model, decouples the Cartesian space DOF and the redundant DOF.
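
    For comparison, a generic pseudoinverse sketch of kinematic redundancy resolution: minimum Euclidean norm joint rates plus an optional self-motion term in the null space of the Jacobian. This is not the report's augmented-equation formulation, and the Jacobian below is invented:

    ```python
    import numpy as np

    def resolve_rates(J, xdot, z=None):
        """Minimum-norm joint velocities for a redundant manipulator, optionally
        augmented with a null-space (self-motion) term that leaves the end
        effector motion unchanged."""
        J_pinv = np.linalg.pinv(J)
        qdot = J_pinv @ xdot                       # minimum-norm particular solution
        if z is not None:
            N = np.eye(J.shape[1]) - J_pinv @ J    # projector onto the null space of J
            qdot = qdot + N @ z                    # uses the redundant DOF
        return qdot

    # 3 task-space rates, 4 joints => one redundant degree of freedom (made-up Jacobian).
    J = np.array([[1.0, 0.2, 0.0, 0.4],
                  [0.0, 1.0, 0.3, 0.1],
                  [0.5, 0.0, 1.0, 0.2]])
    xdot = np.array([0.1, -0.2, 0.05])
    print(resolve_rates(J, xdot))
    print(J @ resolve_rates(J, xdot, z=np.array([0.0, 0.0, 0.0, 1.0])))  # task rates unchanged
    ```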

  7. The rid-redundant procedure in C-Prolog

    NASA Technical Reports Server (NTRS)

    Chen, Huo-Yan; Wah, Benjamin W.

    1987-01-01

    C-Prolog can conveniently be used for logical inferences on knowledge bases. However, as with many search methods using backward chaining, a large number of redundant computations may be produced in recursive calls. To overcome this problem, the 'rid-redundant' procedure was designed to eliminate all redundant computations when running multi-recursive procedures. Experimental results obtained for C-Prolog on the VAX 11/780 computer show that there is an order of magnitude improvement in the running time and solvable problem size.

  9. Inhibition and Language Pragmatic View in Redundant Data Problem Solving

    ERIC Educational Resources Information Center

    Setti, Annalisa; Caramelli, Nicoletta

    2007-01-01

    The present study concerns redundant data problems, defined as problems in which irrelevant data is provided. This type of problem provides a misleading context [Pascual-Leone, J. (1987). Organismic process for neo-Piagetian theories: A dialectical causal account of cognitive development. "International Journal of Psychology," 22, 531-570] similar…

  10. Handbook: Design of automated redundancy verification

    NASA Technical Reports Server (NTRS)

    Ford, F. A.; Hasslinger, T. W.; Moreno, F. J.

    1971-01-01

    The use of the handbook is discussed and the design progress is reviewed. A description of the problem is presented, and examples are given to illustrate the necessity for redundancy verification, along with the types of situations to which it is typically applied. Reusable space vehicles, such as the space shuttle, are recognized as being significant in the development of the automated redundancy verification problem.

  11. Characterization and control of self-motions in redundant manipulators

    NASA Technical Reports Server (NTRS)

    Burdick, J.; Seraji, Homayoun

    1989-01-01

    The presence of redundant degrees of freedom in a manipulator structure leads to a physical phenomenon known as a self-motion, which is a continuous motion of the manipulator joints that leaves the end-effector motionless. In the first part of the paper, a global manifold mapping reformulation of manipulator kinematics is reviewed, and the inverse kinematic solution for redundant manipulators is developed in terms of self-motion manifolds. Global characterizations of the self-motion manifolds in terms of their number, geometry, homotopy class, and null space are reviewed using examples. Much previous work in redundant manipulator control has been concerned with the redundancy resolution problem, in which methods are developed to determine, or resolve, the motion of the joints in order to achieve end-effector trajectory control while optimizing additional objective functions. Redundancy resolution problems can be equivalently posed as the control of self-motions. Alternatives for redundancy resolution are briefly discussed.

  12. Redundancy-Aware Topic Modeling for Patient Record Notes

    PubMed Central

    Cohen, Raphael; Aviram, Iddo; Elhadad, Michael; Elhadad, Noémie

    2014-01-01

    The clinical notes in a given patient record contain much redundancy, in large part due to clinicians’ documentation habit of copying from previous notes in the record and pasting into a new note. Previous work has shown that this redundancy has a negative impact on the quality of text mining and topic modeling in particular. In this paper we describe a novel variant of Latent Dirichlet Allocation (LDA) topic modeling, Red-LDA, which takes into account the inherent redundancy of patient records when modeling content of clinical notes. To assess the value of Red-LDA, we experiment with three baselines and our novel redundancy-aware topic modeling method: given a large collection of patient records, (i) apply vanilla LDA to all documents in all input records; (ii) identify and remove all redundancy by choosing a single representative document for each record as input to LDA; (iii) identify and remove all redundant paragraphs in each record, leaving partial, non-redundant documents as input to LDA; and (iv) apply Red-LDA to all documents in all input records. Both quantitative evaluation carried out through log-likelihood on held-out data and topic coherence of produced topics and qualitative assessment of topics carried out by physicians show that Red-LDA produces superior models to all three baseline strategies. This research contributes to the emerging field of understanding the characteristics of the electronic health record and how to account for them in the framework of data mining. The code for the two redundancy-elimination baselines and Red-LDA is made publicly available to the community. PMID:24551060
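
    A toy sketch in the spirit of baseline (iii), dropping paragraphs already seen earlier in the record before topic modeling; real clinical copy-paste is rarely verbatim, so the paper's redundancy detection is more tolerant than this exact-match filter:

    ```python
    def deduplicate_record(notes):
        """Keep only paragraphs not seen earlier in the same (chronologically
        ordered) patient record, leaving partial, non-redundant documents."""
        seen = set()
        cleaned = []
        for note in notes:
            kept = []
            for para in note.split("\n\n"):
                key = " ".join(para.split()).lower()    # normalise whitespace/case
                if key and key not in seen:
                    seen.add(key)
                    kept.append(para)
            cleaned.append("\n\n".join(kept))
        return cleaned

    record = [
        "Pt admitted with chest pain.\n\nHistory of hypertension.",
        "History of hypertension.\n\nStarted on beta blocker.",   # first paragraph copied forward
    ]
    print(deduplicate_record(record))
    ```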

  13. Redundancy-aware topic modeling for patient record notes.

    PubMed

    Cohen, Raphael; Aviram, Iddo; Elhadad, Michael; Elhadad, Noémie

    2014-01-01

    The clinical notes in a given patient record contain much redundancy, in large part due to clinicians' documentation habit of copying from previous notes in the record and pasting into a new note. Previous work has shown that this redundancy has a negative impact on the quality of text mining and topic modeling in particular. In this paper we describe a novel variant of Latent Dirichlet Allocation (LDA) topic modeling, Red-LDA, which takes into account the inherent redundancy of patient records when modeling content of clinical notes. To assess the value of Red-LDA, we experiment with three baselines and our novel redundancy-aware topic modeling method: given a large collection of patient records, (i) apply vanilla LDA to all documents in all input records; (ii) identify and remove all redundancy by choosing a single representative document for each record as input to LDA; (iii) identify and remove all redundant paragraphs in each record, leaving partial, non-redundant documents as input to LDA; and (iv) apply Red-LDA to all documents in all input records. Both quantitative evaluation carried out through log-likelihood on held-out data and topic coherence of produced topics and qualitative assessment of topics carried out by physicians show that Red-LDA produces superior models to all three baseline strategies. This research contributes to the emerging field of understanding the characteristics of the electronic health record and how to account for them in the framework of data mining. The code for the two redundancy-elimination baselines and Red-LDA is made publicly available to the community.

  14. A unique role of endogenous visual-spatial attention in rapid processing of multiple targets

    PubMed Central

    Guzman, Emmanuel; Grabowecky, Marcia; Palafox, German; Suzuki, Satoru

    2012-01-01

    Visual spatial attention can be exogenously captured by a salient stimulus or can be endogenously allocated by voluntary effort. Whether these two attention modes serve distinctive functions is debated, but for processing of single targets the literature suggests superiority of exogenous attention (it is faster acting and serves more functions). We report that endogenous attention uniquely contributes to processing of multiple targets. For speeded visual discrimination, response times are faster for multiple redundant targets than for single targets due to probability summation and/or signal integration. This redundancy gain was unaffected when attention was exogenously diverted from the targets, but was completely eliminated when attention was endogenously diverted. This was not due to weaker manipulation of exogenous attention because our exogenous and endogenous cues similarly affected overall response times. Thus, whereas exogenous attention is superior for processing single targets, endogenous attention plays a unique role in allocating resources crucial for rapid concurrent processing of multiple targets. PMID:21517209

  15. Mines Systems Safety Improvement Using an Integrated Event Tree and Fault Tree Analysis

    NASA Astrophysics Data System (ADS)

    Kumar, Ranjan; Ghosh, Achyuta Krishna

    2017-04-01

    Mine systems such as the ventilation system, strata support system, and flame-proof safety equipment are exposed to dynamic operational conditions such as stress, humidity, dust, and temperature, and safety improvement of such systems is preferably done during the planning and design stage. However, the existing safety analysis methods do not handle the accident initiation and progression of mine systems explicitly. To bridge this gap, this paper presents an integrated Event Tree (ET) and Fault Tree (FT) approach for safety analysis and improvement of mine systems design. This approach includes ET and FT modeling coupled with a redundancy allocation technique. In this method, a concept of top hazard probability is introduced for identifying system failure probability, and redundancy is allocated to the system either at the component or system level. A case study on mine methane explosion safety with two initiating events is performed. The results demonstrate that the presented method can reveal the accident scenarios and improve the safety of complex mine systems simultaneously.
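
    A minimal sketch of how redundancy allocation changes a fault-tree top-event probability, assuming independent basic events combined through AND/OR gates; the scenario and probabilities are illustrative, not the paper's case study:

    ```python
    def or_gate(*p):
        """Probability that at least one of several independent events occurs."""
        q = 1.0
        for pi in p:
            q *= 1.0 - pi
        return 1.0 - q

    def and_gate(*p):
        """Probability that all independent events occur."""
        q = 1.0
        for pi in p:
            q *= pi
        return q

    # Toy methane-explosion style tree: an ignition source is present AND the
    # ventilation function fails on demand (made-up probabilities).
    p_ignition = or_gate(0.006, 0.004)     # spark OR frictional ignition source
    p_fan = 0.05                           # a single ventilation train failing
    top_single_fan = and_gate(p_ignition, p_fan)
    top_redundant_fans = and_gate(p_ignition, and_gate(p_fan, p_fan))  # add a redundant fan
    print(top_single_fan, top_redundant_fans)
    ```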

  16. Maximization of Learning Speed Due to Neuronal Redundancy in Reinforcement Learning

    NASA Astrophysics Data System (ADS)

    Takiyama, Ken

    2016-11-01

    Adaptable neural activity contributes to the flexibility of human behavior, which is optimized in situations such as motor learning and decision making. Although learning signals in motor learning and decision making are low-dimensional, neural activity, which is very high dimensional, must be modified to achieve optimal performance based on the low-dimensional signal, resulting in a severe credit-assignment problem. Despite this problem, the human brain contains a vast number of neurons, leaving an open question: what is the functional significance of the huge number of neurons? Here, I address this question by analyzing a redundant neural network with a reinforcement-learning algorithm in which the numbers of neurons and output units are N and M, respectively. Because many combinations of neural activity can generate the same output under the condition of N ≫ M, I refer to the index N - M as neuronal redundancy. Although greater neuronal redundancy makes the credit-assignment problem more severe, I demonstrate that a greater degree of neuronal redundancy facilitates learning speed. Thus, in an apparent contradiction of the credit-assignment problem, I propose the hypothesis that a functional role of a huge number of neurons or a huge degree of neuronal redundancy is to facilitate learning speed.

  17. The bliss (not the problem) of motor abundance (not redundancy).

    PubMed

    Latash, Mark L

    2012-03-01

    Motor control is an area of natural science exploring how the nervous system interacts with other body parts and the environment to produce purposeful, coordinated actions. A central problem of motor control-the problem of motor redundancy-was formulated by Nikolai Bernstein as the problem of elimination of redundant degrees-of-freedom. Traditionally, this problem has been addressed using optimization methods based on a variety of cost functions. This review draws attention to a body of recent findings suggesting that the problem has been formulated incorrectly. An alternative view has been suggested as the principle of abundance, which considers the apparently redundant degrees-of-freedom as useful and even vital for many aspects of motor behavior. Over the past 10 years, dozens of publications have provided support for this view based on the ideas of synergic control, computational apparatus of the uncontrolled manifold hypothesis, and the equilibrium-point (referent configuration) hypothesis. In particular, large amounts of "good variance"-variance in the space of elements that has no effect on the overall performance-have been documented across a variety of natural actions. "Good variance" helps an abundant system to deal with secondary tasks and unexpected perturbations; its amount shows adaptive modulation across a variety of conditions. These data support the view that there is no problem of motor redundancy; there is bliss of motor abundance.

  18. Control allocation for gimballed/fixed thrusters

    NASA Astrophysics Data System (ADS)

    Servidia, Pablo A.

    2010-02-01

    Some overactuated control systems use a control distribution law between the controller and the set of actuators, usually called a control allocator. Beyond the control allocator, the configuration of actuators may be designed to be able to operate after a single point of failure, for system optimization and/or decentralization objectives. For some types of actuators, control allocation is used even without redundancy, a good example being the design and operation of thruster configurations. In fact, as the thruster mass flow direction and magnitude can only be changed within certain limits, this must be considered in the feedback implementation. In this work, the thruster configuration design is considered in the fixed (F), single-gimbal (SG) and double-gimbal (DG) thruster cases. The minimum number of thrusters for each case is obtained, and for the resulting configurations a specific control allocation is proposed using a nonlinear programming algorithm, under nominal and single-point-of-failure conditions.

  19. Optimal Bi-Objective Redundancy Allocation for Systems Reliability and Risk Management.

    PubMed

    Govindan, Kannan; Jafarian, Ahmad; Azbari, Mostafa E; Choi, Tsan-Ming

    2016-08-01

    In the big data era, systems reliability is critical to effective systems risk management. In this paper, a novel multiobjective approach, with hybridization of a known algorithm called NSGA-II and an adaptive population-based simulated annealing (APBSA) method is developed to solve the systems reliability optimization problems. In the first step, to create a good algorithm, we use a coevolutionary strategy. Since the proposed algorithm is very sensitive to parameter values, the response surface method is employed to estimate the appropriate parameters of the algorithm. Moreover, to examine the performance of our proposed approach, several test problems are generated, and the proposed hybrid algorithm and other commonly known approaches (i.e., MOGA, NRGA, and NSGA-II) are compared with respect to four performance measures: 1) mean ideal distance; 2) diversification metric; 3) percentage of domination; and 4) data envelopment analysis. The computational studies have shown that the proposed algorithm is an effective approach for systems reliability and risk management.

  20. Redundant Disk Arrays in Transaction Processing Systems. Ph.D. Thesis, 1993

    NASA Technical Reports Server (NTRS)

    Mourad, Antoine Nagib

    1994-01-01

    We address various issues dealing with the use of disk arrays in transaction processing environments. We look at the problem of transaction undo recovery and propose a scheme for using the redundancy in disk arrays to support undo recovery. The scheme uses twin page storage for the parity information in the array. It speeds up transaction processing by eliminating the need for undo logging for most transactions. The use of redundant arrays of distributed disks to provide recovery from disasters as well as temporary site failures and disk crashes is also studied. We investigate the problem of assigning the sites of a distributed storage system to redundant arrays in such a way that the cost of maintaining the redundant parity information is minimized. Heuristic algorithms for solving the site partitioning problem are proposed and their performance is evaluated using simulation. We also develop a heuristic for which an upper bound on the deviation from the optimal solution can be established.

  1. Base reaction optimization of redundant manipulators for space applications

    NASA Technical Reports Server (NTRS)

    Chung, C. L.; Desa, S.; Desilva, C. W.

    1988-01-01

    One of the problems associated with redundant manipulators proposed for space applications is that the reactions transmitted to the base of the manipulator as a result of its motion can have undesirable effects on the dynamic behavior of the supporting space structure. It is therefore necessary to minimize the magnitudes of the forces and moments transmitted to the base. It is shown that kinematic redundancy can be used to solve the dynamic problem of minimizing the magnitude of the base reactions. The methodology described is applied to a four-degree-of-freedom spatial manipulator with one redundant degree of freedom.

  2. Site partitioning for distributed redundant disk arrays

    NASA Technical Reports Server (NTRS)

    Mourad, Antoine N.; Fuchs, W. K.; Saab, Daniel G.

    1992-01-01

    Distributed redundant disk arrays can be used in a distributed computing system or database system to provide recovery in the presence of temporary and permanent failures of single sites. In this paper, we look at the problem of partitioning the sites into redundant arrays in such a way that the communication costs for maintaining the parity information are minimized. We show that the partitioning problem is NP-complete and we propose two heuristic algorithms for finding approximate solutions.
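
    An illustrative greedy heuristic for the partitioning problem (not one of the paper's two algorithms): seed an array with an unassigned site, then fill it with the unassigned sites that add the least pairwise communication cost; the site coordinates are random stand-ins:

    ```python
    import numpy as np

    def greedy_partition(cost, group_size):
        """Greedily build redundant arrays of a fixed size so that the pairwise
        communication cost within each array stays small."""
        n = cost.shape[0]
        unassigned = set(range(n))
        groups = []
        while unassigned:
            group = [min(unassigned)]
            unassigned.remove(group[0])
            while len(group) < group_size and unassigned:
                nxt = min(unassigned, key=lambda s: sum(cost[s, g] for g in group))
                group.append(nxt)
                unassigned.remove(nxt)
            groups.append(group)
        total = sum(cost[a, b] for g in groups for a in g for b in g if a < b)
        return groups, total

    rng = np.random.default_rng(0)
    pts = rng.random((8, 2))                       # 8 sites at random positions
    cost = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    print(greedy_partition(cost, group_size=4))
    ```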

  3. Benefit of adaptive FEC in shared backup path protected elastic optical network.

    PubMed

    Guo, Hong; Dai, Hua; Wang, Chao; Li, Yongcheng; Bose, Sanjay K; Shen, Gangxiang

    2015-07-27

    We apply an adaptive forward error correction (FEC) allocation strategy to an Elastic Optical Network (EON) operated with shared backup path protection (SBPP). To maximize the protected network capacity that can be carried, an Integer Linear Programming (ILP) model and a spectrum window plane (SWP)-based heuristic algorithm are developed. Simulation results show that the FEC coding overhead required by the adaptive FEC scheme is significantly lower than that needed by a fixed FEC allocation strategy, resulting in higher network capacity for the adaptive strategy. The adaptive FEC allocation strategy can also significantly outperform the fixed FEC allocation strategy both in terms of the spare capacity redundancy and the average FEC coding overhead needed per optical channel. The proposed heuristic algorithm is efficient and not only performs close to the ILP model but also does much better than the shortest-path algorithm.

  4. Contrarian behavior in a complex adaptive system

    NASA Astrophysics Data System (ADS)

    Liang, Y.; An, K. N.; Yang, G.; Huang, J. P.

    2013-01-01

    Contrarian behavior is a kind of self-organization in complex adaptive systems (CASs). Here we report the existence of a transition point in a model resource-allocation CAS with contrarian behavior by using human experiments, computer simulations, and theoretical analysis. The resource ratio and system predictability serve as the tuning parameter and order parameter, respectively. The transition point helps to reveal the positive or negative role of contrarian behavior. This finding is in contrast to the common belief that contrarian behavior always has a positive role in resource allocation, say, stabilizing resource allocation by shrinking the redundancy or the lack of resources. It is further shown that resource allocation can be optimized at the transition point by adding an appropriate size of contrarians. This work is also expected to be of value to some other fields ranging from management and social science to ecology and evolution.

  5. Spot the difference: Operational event sequence diagrams as a formal method for work allocation in the development of single-pilot operations for commercial aircraft.

    PubMed

    Harris, Don; Stanton, Neville A; Starr, Alison

    2015-01-01

    Function Allocation methods are important for the appropriate allocation of tasks between humans and automated systems. It is proposed that Operational Event Sequence Diagrams (OESDs) provide a simple yet rigorous basis upon which allocation of work can be assessed. This is illustrated with respect to a design concept for a passenger aircraft flown by just a single pilot where the objective is to replace or supplement functions normally undertaken by the second pilot with advanced automation. A scenario-based analysis (take off) was used in which there would normally be considerable demands and interactions with the second pilot. The OESD analyses indicate those tasks that would be suitable for allocation to automated assistance on the flight deck and those tasks that are now redundant in this new configuration (something that other formal Function Allocation approaches cannot identify). Furthermore, OESDs are demonstrated to be an easy to apply and flexible approach to the allocation of function in prospective systems. OESDs provide a simple yet rigorous basis upon which allocation of work can be assessed. The technique can deal with the flexible, dynamic allocation of work and the deletion of functions no longer required. This is illustrated using a novel design concept for a single-crew commercial aircraft.

  6. Tuning the Tin Ear: In Search of Fiscal Congruency

    ERIC Educational Resources Information Center

    Dragona, Anthony N.

    2011-01-01

    For 70 years, the Union City School District used a line-item budget system, a top down approach that was regarded by many as "deaf, dumb, and blind." This antiquated central administration process gave school leaders and staff little, if any, input into the distribution of resources for their schools, resulting in a redundancy of allocations,…

  7. Reliability of Fault Tolerant Control Systems. Part 2

    NASA Technical Reports Server (NTRS)

    Wu, N. Eva

    2000-01-01

    This paper reports Part II of a two part effort that is intended to delineate the relationship between reliability and fault tolerant control in a quantitative manner. Reliability properties peculiar to fault-tolerant control systems are emphasized, such as the presence of analytic redundancy in high proportion, the dependence of failures on control performance, and high risks associated with decisions in redundancy management due to multiple sources of uncertainties and sometimes large processing requirements. As a consequence, coverage of failures through redundancy management can be severely limited. The paper proposes to formulate the fault tolerant control problem as an optimization problem that maximizes coverage of failures through redundancy management. Coverage modeling is attempted in a way that captures its dependence on the control performance and on the diagnostic resolution. Under the proposed redundancy management policy, it is shown that an enhanced overall system reliability can be achieved with a control law of a superior robustness, with an estimator of a higher resolution, and with a control performance requirement of a lesser stringency.

  8. Redundancy relations and robust failure detection

    NASA Technical Reports Server (NTRS)

    Chow, E. Y.; Lou, X. C.; Verghese, G. C.; Willsky, A. S.

    1984-01-01

    All failure detection methods are based on the use of redundancy, that is, on (possibly dynamic) relations among the measured variables. Consequently, the robustness of the failure detection process depends to a great degree on the reliability of the redundancy relations, given the inevitable presence of model uncertainties. The problem of determining redundancy relations that are optimally robust, in a sense which includes the major issues of importance in practical failure detection, is addressed. A significant amount of intuition concerning the geometry of robust failure detection is provided.
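
    A small parity-space sketch of a static redundancy relation: rows spanning the left null space of the measurement matrix give relations that vanish for consistent measurements and become nonzero under a sensor fault; the sensor geometry is invented for illustration:

    ```python
    import numpy as np

    def parity_matrix(H, tol=1e-10):
        """Rows spanning the left null space of H (y = H x): each row v is a
        redundancy relation v @ y = 0 for fault- and noise-free measurements."""
        U, s, _ = np.linalg.svd(H)
        rank = int(np.sum(s > tol))
        return U[:, rank:].T

    # Four sensors measuring two physical quantities (illustrative geometry).
    H = np.array([[1.0, 0.0],
                  [0.0, 1.0],
                  [1.0, 1.0],
                  [1.0, -1.0]])
    V = parity_matrix(H)
    x = np.array([0.3, -0.7])
    y = H @ x
    y_fault = y.copy()
    y_fault[2] += 0.5                         # bias fault on the third sensor
    print(np.round(V @ y, 6))                 # ~0: measurements consistent
    print(np.round(V @ y_fault, 6))           # nonzero residual flags the fault
    ```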

  9. Redundant drive current imbalance problem of the Automatic Radiator Inspection Device (ARID)

    NASA Technical Reports Server (NTRS)

    Latino, Carl D.

    1992-01-01

    The Automatic Radiator Inspection Device (ARID) is a 4 Degree of Freedom (DOF) robot with redundant drive motors at each joint. The device is intended to automate the labor intensive task of space shuttle radiator inspection. For safety and redundancy, each joint is driven by two independent motor systems. Motors driving the same joint, however, draw vastly different currents. The concern was that the robot joints could be subjected to undue stress. It was the objective of this summer's project to determine the cause of this current imbalance. In addition it was to determine, in a quantitative manner, what was the cause, how serious the problem was in terms of damage or undue wear to the robot and find solutions if possible. It was concluded that most problems could be resolved with a better motor control design. This document discusses problems encountered and possible solutions.

  10. Study of techniques for redundancy verification without disrupting systems, phases 1-3

    NASA Technical Reports Server (NTRS)

    1970-01-01

    The problem of verifying the operational integrity of redundant equipment and the impact of a requirement for verification on such equipment are considered. Redundant circuits are examined and the characteristics which determine adaptability to verification are identified. Mutually exclusive and exhaustive categories for verification approaches are established. The range of applicability of these techniques is defined in terms of signal characteristics and redundancy features. Verification approaches are discussed and a methodology for the design of redundancy verification is developed. A case study is presented which involves the design of a verification system for a hypothetical communications system. Design criteria for redundant equipment are presented. Recommendations for the development of technological areas pertinent to the goal of increased verification capabilities are given.

  11. Site Partitioning for Redundant Arrays of Distributed Disks

    NASA Technical Reports Server (NTRS)

    Mourad, Antoine N.; Fuchs, W. Kent; Saab, Daniel G.

    1996-01-01

    Redundant arrays of distributed disks (RADD) can be used in a distributed computing system or database system to provide recovery in the presence of disk crashes and temporary and permanent failures of single sites. In this paper, we look at the problem of partitioning the sites of a distributed storage system into redundant arrays in such a way that the communication costs for maintaining the parity information are minimized. We show that the partitioning problem is NP-hard. We then propose and evaluate several heuristic algorithms for finding approximate solutions. Simulation results show that significant reduction in remote parity update costs can be achieved by optimizing the site partitioning scheme.

  12. Failure detection system design methodology. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Chow, E. Y.

    1980-01-01

    The design of a failure detection and identification system consists of designing a robust residual generation process and a high-performance decision-making process. The designs of these two processes are examined separately. Residual generation is based on analytical redundancy. Redundancy relations that are insensitive to modelling errors and noise effects are important for designing robust residual generation processes. The characterization of the concept of analytical redundancy in terms of a generalized parity space provides a framework in which a systematic approach to the determination of robust redundancy relations is developed. The Bayesian approach is adopted for the design of high-performance decision processes. The FDI decision problem is formulated as a Bayes sequential decision problem. Since the optimal decision rule is incomputable, a methodology for designing suboptimal rules is proposed. A numerical algorithm is developed to facilitate the design and performance evaluation of suboptimal rules.

  13. Decentralized control

    NASA Technical Reports Server (NTRS)

    Steffen, Chris

    1990-01-01

    An overview is presented of the time-delay problem and the reliability problem that arise in trying to perform robotic construction operations at a remote space location. The effects of the time delay upon the control system design are itemized. A high-level overview is given of a decentralized method of control which is expected to perform better than the centralized approach in solving the time-delay problem. The lower-level, decentralized, autonomous Troter Move-Bar algorithm is also presented (Troters are coordinated independent robots). The solution of the reliability problem is connected to adding redundancy to the system. One method of adding redundancy is given.

  14. Management of redundancy in flight control systems using optimal decision theory

    NASA Technical Reports Server (NTRS)

    1981-01-01

    The problem of using redundancy that exists between dissimilar systems in aircraft flight control is addressed; that is, using the redundancy that exists between a rate gyro and an accelerometer, devices that have dissimilar outputs which are related only through the dynamics of the aircraft motion. Management of this type of redundancy requires advanced logic so that the system can monitor failure status and can reconfigure itself in the event of one or more failures. An optimal decision theory is developed in tutorial fashion for the management of sensor redundancy, and the theory is applied to two aircraft examples. The first example is the space shuttle and the second is a highly maneuvering, high-performance aircraft, the F8-C. The examples illustrate the redundancy management design process and the performance of the presented algorithms in failure detection and control law reconfiguration.

  15. Task-space separation principle: a force-field approach to motion planning for redundant manipulators.

    PubMed

    Tommasino, Paolo; Campolo, Domenico

    2017-02-03

    In this work, we address human-like motor planning in redundant manipulators. Specifically, we want to capture postural synergies such as Donders' law, experimentally observed in humans during kinematically redundant tasks, and infer a minimal set of parameters to implement similar postural synergies in a kinematic model. For the model itself, although the focus of this paper is to solve redundancy by implementing postural strategies derived from experimental data, we also want to ensure that such postural control strategies do not interfere with other possible forms of motion control (in the task-space), i.e. solving the posture/movement problem. The redundancy problem is framed as a constrained optimization problem, traditionally solved via the method of Lagrange multipliers. The posture/movement problem can be tackled via the separation principle which, derived from experimental evidence, posits that the brain processes static torques (i.e. posture-dependent, such as gravitational torques) separately from dynamic torques (i.e. velocity-dependent). The separation principle has traditionally been applied at a joint torque level. Our main contribution is to apply the separation principle to Lagrange multipliers, which act as task-space force fields, leading to a task-space separation principle. In this way, we can separate postural control (implementing Donders' law) from various types of task-space movement planners. As an example, the proposed framework is applied to the (redundant) task of pointing with the human wrist. Nonlinear inverse optimization (NIO) is used to fit the model parameters and to capture motor strategies displayed by six human subjects during pointing tasks. The novelty of our NIO approach is that (i) the fitted motor strategy, rather than raw data, is used to filter and down-sample human behaviours; (ii) our framework is used to efficiently simulate model behaviour iteratively, until it converges towards the experimental human strategies.

  16. Intra-axiom redundancies in SNOMED CT.

    PubMed

    Dentler, Kathrin; Cornet, Ronald

    2015-09-01

    Intra-axiom redundancies are elements of concept definitions that are redundant as they are entailed by other elements of the concept definition. While such redundancies are harmless from a logical point of view, they make concept definitions hard to maintain, and they might lead to content-related problems when concepts evolve. The objective of this study is to develop a fully automated method to detect intra-axiom redundancies in OWL 2 EL and apply it to SNOMED Clinical Terms (SNOMED CT). We developed a software program in which we implemented, adapted and extended readily existing rules for redundancy elimination. With this, we analysed the occurrence of redundancy in 11 releases of SNOMED CT (January 2009 to January 2014). We used the ELK reasoner to classify SNOMED CT, and Pellet for explanation of equivalence. We analysed the completeness and soundness of the results by an in-depth examination of the identified redundant elements in the July 2012 release of SNOMED CT. To determine if concepts with redundant elements lead to maintenance issues, we analysed a small sample of solved redundancies. Analyses showed that the number of redundantly defined concepts in SNOMED CT is consistently around 35,000. In the July 2012 version of SNOMED CT, 35,010 (12%) of the 296,433 concepts contained redundant elements in their definitions. The results of applying our method are sound and complete with respect to our evaluation. Analysis of solved redundancies suggests that redundancies in concept definitions lead to inadequate maintenance of SNOMED CT. Our analysis revealed that redundant elements are continuously introduced and removed, and that redundant elements may be overlooked when concept definitions are corrected. Applying our redundancy detection method to remove intra-axiom redundancies from the stated form of SNOMED CT and to point knowledge modellers to newly introduced redundancies can support creating and maintaining a redundancy-free version of SNOMED CT. Copyright © 2014 Elsevier B.V. All rights reserved.

  17. Sensitivity Analysis of Linear Programming and Quadratic Programming Algorithms for Control Allocation

    NASA Technical Reports Server (NTRS)

    Frost, Susan A.; Bodson, Marc; Acosta, Diana M.

    2009-01-01

    The Next Generation (NextGen) transport aircraft configurations being investigated as part of the NASA Aeronautics Subsonic Fixed Wing Project have more control surfaces, or control effectors, than existing transport aircraft configurations. Conventional flight control is achieved through two symmetric elevators, two antisymmetric ailerons, and a rudder. The five effectors, reduced to three command variables, produce moments along the three main axes of the aircraft and enable the pilot to control the attitude and flight path of the aircraft. The NextGen aircraft will have additional redundant control effectors to control the three moments, creating a situation where the aircraft is over-actuated and where a simple relationship does not exist anymore between the required effector deflections and the desired moments. NextGen flight controllers will incorporate control allocation algorithms to determine the optimal effector commands and attain the desired moments, taking into account the effector limits. Approaches to solving the problem using linear programming and quadratic programming algorithms have been proposed and tested. It is of great interest to understand their relative advantages and disadvantages and how design parameters may affect their properties. In this paper, we investigate the sensitivity of the effector commands with respect to the desired moments and show on some examples that the solutions provided using the l2 norm of quadratic programming are less sensitive than those using the l1 norm of linear programming.
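
    To make the l1-versus-l2 comparison concrete, the following sketch (an illustration under invented data, not the paper's implementation) allocates a desired three-axis moment over five effectors with a least-squares (l2) solution and with a linear program that minimizes the l1 residual. The effectiveness matrix B, the effector limits, and the perturbation probe are all made-up values.

    ```python
    # Minimal l2 vs l1 control-allocation sketch (assumed, illustrative data).
    import numpy as np
    from scipy.optimize import linprog

    rng = np.random.default_rng(0)
    B = rng.normal(size=(3, 5))          # 3 moments, 5 effectors (hypothetical)
    u_lim = 0.5                          # symmetric effector limits
    m_des = np.array([0.2, -0.1, 0.05])  # desired roll/pitch/yaw moments

    # l2 allocation: minimum-norm least-squares solution, then clip to limits.
    u_l2 = np.clip(np.linalg.pinv(B) @ m_des, -u_lim, u_lim)

    # l1 allocation: minimize sum |B u - m_des| subject to effector limits,
    # written as an LP with slack variables s >= |B u - m_des|.
    n, k = 5, 3
    c = np.concatenate([np.zeros(n), np.ones(k)])
    A_ub = np.block([[B, -np.eye(k)], [-B, -np.eye(k)]])
    b_ub = np.concatenate([m_des, -m_des])
    bounds = [(-u_lim, u_lim)] * n + [(0, None)] * k
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    u_l1 = res.x[:n]

    # Sensitivity probe: perturb the desired moment slightly and compare how
    # much each allocation changes.
    m_pert = m_des + 1e-3
    du_l2 = np.clip(np.linalg.pinv(B) @ m_pert, -u_lim, u_lim) - u_l2
    res_p = linprog(c, A_ub=A_ub, b_ub=np.concatenate([m_pert, -m_pert]),
                    bounds=bounds, method="highs")
    du_l1 = res_p.x[:n] - u_l1
    print(np.linalg.norm(du_l2), np.linalg.norm(du_l1))
    ```

    Re-solving after the small perturbation illustrates the sensitivity issue discussed in the abstract: the l2 command tends to vary smoothly, while the l1 command can jump between vertices of the feasible set.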

  18. Optimally robust redundancy relations for failure detection in uncertain systems

    NASA Technical Reports Server (NTRS)

    Lou, X.-C.; Willsky, A. S.; Verghese, G. C.

    1986-01-01

    All failure detection methods are based, either explicitly or implicitly, on the use of redundancy, i.e. on (possibly dynamic) relations among the measured variables. The robustness of the failure detection process consequently depends to a great degree on the reliability of the redundancy relations, which in turn is affected by the inevitable presence of model uncertainties. In this paper the problem of determining redundancy relations that are optimally robust is addressed in a sense that includes several major issues of importance in practical failure detection and that provides a significant amount of intuition concerning the geometry of robust failure detection. A procedure is given involving the construction of a single matrix and its singular value decomposition for the determination of a complete sequence of redundancy relations, ordered in terms of their level of robustness. This procedure also provides the basis for comparing levels of robustness in redundancy provided by different sets of sensors.
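
    The sketch below is a much-simplified illustration of the underlying parity-space idea, not the authors' optimality criterion or matrix construction: candidate redundancy relations are taken as left singular vectors of the stacked observation matrix of a nominal model, and each candidate is scored by how nearly it still annihilates a perturbed model. The matrices A and C and the perturbation level are invented for illustration.

    ```python
    # Ranking candidate parity (redundancy) relations by robustness: a toy sketch.
    import numpy as np

    def obs_stack(A, C, p):
        """Stack [C; CA; ...; CA^p]."""
        rows, M = [C], C
        for _ in range(p):
            M = M @ A
            rows.append(M)
        return np.vstack(rows)

    rng = np.random.default_rng(1)
    A = np.array([[0.9, 0.1], [0.0, 0.8]])               # nominal dynamics (illustrative)
    C = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])   # 3 sensors, 2 states
    p = 1
    S0 = obs_stack(A, C, p)                               # nominal stack, shape (6, 2)
    S1 = obs_stack(A + 0.05 * rng.normal(size=A.shape), C, p)  # perturbed stack

    # Left singular vectors of S0 span candidate parity directions; small
    # singular values correspond to (near-)exact redundancy relations.
    U, s, _ = np.linalg.svd(S0)
    residual_nominal = np.r_[s, np.zeros(U.shape[0] - len(s))]
    residual_perturbed = np.linalg.norm(U.T @ S1, axis=1)

    # Order candidates by their residual on the perturbed model: the most robust
    # relations are those that stay close to the null space under uncertainty.
    for i in np.argsort(residual_perturbed):
        print(f"direction {i}: nominal {residual_nominal[i]:.3f}, "
              f"perturbed {residual_perturbed[i]:.3f}")
    ```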

  19. Finite-Horizon $H_\infty$ Consensus for Multiagent Systems With Redundant Channels via an Observer-Type Event-Triggered Scheme.

    PubMed

    Xu, Wenying; Wang, Zidong; Ho, Daniel W C

    2018-05-01

    This paper is concerned with the finite-horizon consensus problem for a class of discrete time-varying multiagent systems with external disturbances and missing measurements. To improve the communication reliability, redundant channels are introduced and the corresponding protocol is constructed for the information transmission over redundant channels. An event-triggered scheme is adopted to determine whether the information of agents should be transmitted to their neighbors. Subsequently, an observer-type event-triggered control protocol is proposed based on the latest received neighbors' information. The purpose of the addressed problem is to design a time-varying controller based on the observed information to achieve the consensus performance in a finite horizon. By utilizing a constrained recursive Riccati difference equation approach, some sufficient conditions are obtained to guarantee the consensus performance, and the controller parameters are also designed. Finally, a numerical example is provided to demonstrate the desired reliability of redundant channels and the effectiveness of the event-triggered control protocol.

  20. Redundant disk arrays: Reliable, parallel secondary storage. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Gibson, Garth Alan

    1990-01-01

    During the past decade, advances in processor and memory technology have given rise to increases in computational performance that far outstrip increases in the performance of secondary storage technology. Coupled with emerging small-disk technology, disk arrays provide the cost, volume, and capacity of current disk subsystems and, by leveraging parallelism, many times their performance. Unfortunately, arrays of small disks may have much higher failure rates than the single large disks they replace. Redundant arrays of inexpensive disks (RAID) use simple redundancy schemes to provide high data reliability. The data encoding, performance, and reliability of redundant disk arrays are investigated. Organizing redundant data into a disk array is treated as a coding problem. Among the alternatives examined, codes as simple as parity are shown to effectively correct single, self-identifying disk failures.
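
    As a toy illustration of the parity encoding mentioned above (a sketch, not the thesis's implementation): the parity block is the bytewise XOR of the data blocks, so any single failed, self-identifying block can be rebuilt from the survivors.

    ```python
    # N+1 parity on one stripe of invented data: encode, fail one disk, rebuild.
    import os
    from functools import reduce

    def xor_blocks(blocks):
        return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

    data = [os.urandom(16) for _ in range(4)]   # 4 data disks, one stripe each
    parity = xor_blocks(data)                   # the "+1" parity disk

    # Simulate a single, self-identifying failure of disk 2 and rebuild it
    # from the remaining data blocks plus the parity block.
    failed = 2
    survivors = [blk for i, blk in enumerate(data) if i != failed] + [parity]
    rebuilt = xor_blocks(survivors)
    assert rebuilt == data[failed]
    ```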

  1. Necessity of creating digital tools to ensure efficiency of technical means

    NASA Astrophysics Data System (ADS)

    Rakov, V. I.; Zakharova, O. V.

    2018-05-01

    The authors assess the problems of keeping technical objects functioning. The article notes that the increasing complexity of automation systems may cause the required redundant resource to grow in proportion to the number of components and relationships in the system, and may require that redundant resource to change constantly, which can make implementing traditional redundancy structures (standby systems, fault tolerance, high availability) unnecessarily costly. It proposes the idea of creating digital tools to ensure the efficiency of technical facilities.

  2. Research on allocation efficiency of the daisy chain allocation algorithm

    NASA Astrophysics Data System (ADS)

    Shi, Jingping; Zhang, Weiguo

    2013-03-01

    With the improvement of aircraft reliability, maneuverability and survivability, the number of control effectors has increased considerably. How to distribute the three-axis moments among the control surfaces reasonably becomes an important problem. The daisy chain method is simple and easy to carry out in the design of the allocation system, but it cannot solve the allocation problem over the entire attainable moment subset. For the lateral-directional allocation problem, the allocation efficiency of the daisy chain can be directly measured by the area of its subset of attainable moments. Because of the non-linear allocation characteristic, the subset of attainable moments of the daisy-chain method is a complex non-convex polygon, and it is difficult to compute directly. By analyzing the two-dimensional allocation problem with a "micro-element" idea, a numerical calculation algorithm is proposed to compute the area of the non-convex polygon. In order to improve the allocation efficiency of the algorithm, a genetic algorithm with the allocation efficiency chosen as the fitness function is proposed to find the best pseudo-inverse matrix.
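
    For readers unfamiliar with the daisy-chain scheme itself, the following minimal sketch (with invented effectiveness values and limits, not the paper's lateral-directional model) shows the mechanism: the single-axis demand is sent to the first effector group, and only the unmet remainder is passed down the chain.

    ```python
    # Daisy-chain allocation of a single-axis moment demand (illustrative numbers).
    import numpy as np

    def daisy_chain(moment_demand, groups):
        """groups: list of (effectiveness b, lower limit, upper limit)."""
        commands, remaining = [], moment_demand
        for b, lo, hi in groups:
            u = np.clip(remaining / b, lo, hi)   # try to absorb what is left
            commands.append(u)
            remaining -= b * u                   # pass the unmet part onward
        return commands, remaining

    # Ailerons first, then spoilers, then differential tail (hypothetical chain).
    groups = [(1.0, -0.3, 0.3), (0.6, -0.4, 0.4), (0.4, -0.25, 0.25)]
    cmds, unmet = daisy_chain(0.75, groups)
    print(cmds, unmet)   # any nonzero remainder is demand the chain cannot meet
    ```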

  3. Space and time in the context of equilibrium-point theory.

    PubMed

    Feldman, Anatol G

    2011-05-01

    Advances to the equilibrium-point (EP) theory and solutions to several classical problems of action and perception are suggested and discussed. Among them are (1) the posture-movement problem of how movements away from a stable posture can be made without evoking resistance of posture-stabilizing mechanisms resulting from intrinsic muscle and reflex properties; (2) the problem of kinesthesia, or why our sense of limb position is fairly accurate despite ambiguous positional information delivered by proprioceptive and cutaneous signals; (3) the redundancy problems in the control of multiple muscles and degrees of freedom. Central to the EP hypothesis is the notion that there are specific neural structures that represent spatial frames of reference (FRs) selected by the brain in a task-specific way from a set of available FRs. The brain is also able to translate and/or rotate the selected FRs by modifying their major attributes (origin, metrics, and orientation) and thus substantially influence, in a feed-forward manner, action and perception. The brain does not directly solve redundancy problems: it only limits the amount of redundancy by predetermining where, in spatial coordinates, a task-specific action should emerge, and allows all motor elements, including the environment, to interact to deliver a unique action, thus solving the redundancy problem (natural selection of action). The EP theory predicts the existence of specific neurons associated with the control of different attributes of FRs and explains the role of mirror neurons in the inferior frontal gyrus and place cells in the hippocampus. WIREs Cogn Sci 2011, 2, 287-304. DOI: 10.1002/wcs.108. Copyright © 2010 John Wiley & Sons, Ltd.

  4. A trust-based sensor allocation algorithm in cooperative space search problems

    NASA Astrophysics Data System (ADS)

    Shen, Dan; Chen, Genshe; Pham, Khanh; Blasch, Erik

    2011-06-01

    Sensor allocation is an important and challenging problem within the field of multi-agent systems. The sensor allocation problem involves deciding how to assign a number of targets or cells to a set of agents according to some allocation protocol. Generally, in order to make efficient allocations, we need to design mechanisms that consider both the task performers' costs for the service and the associated probability of success (POS). In our problem, the costs are the sensor resources used, and the POS is the target tracking performance. Usually, POS may be perceived differently by different agents because they typically have different standards or means of evaluating the performance of their counterparts (other sensors in the search and tracking problem). Given this, we turn to the notion of trust to capture such subjective perceptions. In our approach, we develop a trust model to construct a novel mechanism that motivates sensor agents to limit their greediness or selfishness. Then we model the sensor allocation optimization problem as a trust-in-the-loop negotiation game and solve it using a sub-game perfect equilibrium. Numerical simulations are performed to demonstrate the trust-based sensor allocation algorithm in cooperative space situation awareness (SSA) search problems.

  5. Application of redundancy in the Saturn 5 guidance and control system

    NASA Technical Reports Server (NTRS)

    Moore, F. B.; White, J. B.

    1976-01-01

    The Saturn launch vehicle's guidance and control system is so complex that the reliability of a simplex system is not adequate to fulfill mission requirements. Thus, to achieve the desired reliability, redundancy encompassing a wide range of types and levels was employed. At one extreme, the lowest level, basic components (resistors, capacitors, relays, etc.) are employed in series, parallel, or quadruplex arrangements to ensure continued system operation in the presence of possible failure conditions. At the other extreme, the highest level, complete subsystem duplication is provided so that a backup subsystem can be employed in case the primary system malfunctions. In between these two extremes, many other redundancy schemes and techniques are employed at various levels. Basic redundancy concepts are covered to gain insight into the advantages obtained with various techniques. Points and methods of application of these techniques are included. The theoretical gain in reliability resulting from redundancy is assessed and compared to a simplex system. Problems and limitations encountered in the practical application of redundancy are discussed, as well as techniques for verifying proper operation of the redundant channels. As background for the redundancy application discussion, a basic description of the guidance and control system is included.

  6. Ecological network analysis for a virtual water network.

    PubMed

    Fang, Delin; Chen, Bin

    2015-06-02

    The notion of virtual water flows provides important indicators for characterizing water consumption and allocation between different sectors via product transactions. However, the configuration of the virtual water network (VWN) still needs further investigation to identify the water interdependency among different sectors as well as the network efficiency and stability in a socio-economic system. Ecological network analysis is chosen as a useful tool to examine the structure and function of the VWN and the interactions among its sectors. A balance analysis of efficiency and redundancy is also conducted to describe the robustness (RVWN) of the VWN. Then, network control analysis and network utility analysis are performed to investigate the dominant sectors and pathways for virtual water circulation and the mutual relationships between pairwise sectors. A case study of the Heihe River Basin in China shows that the balance between efficiency and redundancy is situated on the left side of the robustness curve, with less efficiency and higher redundancy. The forestation, herding and fishing sectors and the industrial sectors are found to be the main controllers. The network tends to be more mutualistic and synergic, though some competitive relationships that weaken the virtual water circulation still exist.

  7. A group-based tasks allocation algorithm for the optimization of long leave opportunities in academic departments

    NASA Astrophysics Data System (ADS)

    Eyono Obono, S. D.; Basak, Sujit Kumar

    2011-12-01

    The general formulation of the assignment problem consists in the optimal allocation of a given set of tasks to a workforce. This problem is covered by existing literature for different domains such as distributed databases, distributed systems, transportation, packets radio networks, IT outsourcing, and teaching allocation. This paper presents a new version of the assignment problem for the allocation of academic tasks to staff members in departments with long leave opportunities. It presents the description of a workload allocation scheme and its algorithm, for the allocation of an equitable number of tasks in academic departments where long leaves are necessary.

  8. On modeling human reliability in space flights - Redundancy and recovery operations

    NASA Astrophysics Data System (ADS)

    Aarset, M.; Wright, J. F.

    The reliability of humans is of paramount importance to the safety of space flight systems. This paper describes why 'back-up' operators might not be the best solution, and in some cases, might even degrade system reliability. The problem associated with human redundancy calls for special treatment in reliability analyses. The concept of Standby Redundancy is adopted, and psychological and mathematical models are introduced to improve the way such problems can be estimated and handled. In the past, human reliability has practically been neglected in most reliability analyses, and, when included, the humans have been modeled as a component and treated numerically the way technical components are. This approach is not wrong in itself, but it may lead to systematic errors if too simple analogies from the technical domain are used in the modeling of human behavior. In this paper redundancy in a man-machine system will be addressed. It will be shown how simplification from the technical domain, when applied to human components of a system, may give non-conservative estimates of system reliability.

  9. PRACTICAL: Planning and Resource Allocation in C2-Domains With Time Critical Algorithms (PRACTICAL: Planning en Allocatie in C2-Domeinen Met Tijdkritische Algoritmen)

    DTIC Science & Technology

    1993-02-01

    ... the (re)planning framework, incorporating the demonstrators CALIGULA and ALLOCATOR for resource allocation and scheduling, respectively. In the Command... demonstrator CALIGULA for the problem of allocating frequencies to a radio link network. The problems in the domain of scheduling are dealt with.

  10. Vehicle routing problem and capacitated vehicle routing problem frameworks in fund allocation problem

    NASA Astrophysics Data System (ADS)

    Mamat, Nur Jumaadzan Zaleha; Jaaman, Saiful Hafizah; Ahmad, Rokiah@Rozita

    2016-11-01

    Two new methods adopted from methods commonly used in the field of transportation and logistics are proposed to solve a specific issue of investment allocation problem. Vehicle routing problem and capacitated vehicle routing methods are applied to optimize the fund allocation of a portfolio of investment assets. This is done by determining the sequence of the assets. As a result, total investment risk is minimized by this sequence.

  11. Comment on ``Steady-state properties of a totally asymmetric exclusion process with periodic structure''

    NASA Astrophysics Data System (ADS)

    Jiang, Rui; Hu, Mao-Bin; Wu, Qing-Song

    2008-07-01

    Lakatos [Phys. Rev. E 71, 011103 (2005)] studied a totally asymmetric exclusion process that contains periodically varying movement rates and presented a cluster mean-field theory for the problem. We show that their cluster mean-field theory leads to redundant equations. We present a mean-field analysis in which there is no redundant equation.

  12. Research on air and missile defense task allocation based on extended contract net protocol

    NASA Astrophysics Data System (ADS)

    Zhang, Yunzhi; Wang, Gang

    2017-10-01

    Against the background of distributed cooperative engagement for air and missile defense, the problem of allocating interception tasks among multiple weapon units with multiple targets under networked conditions is analyzed. Firstly, a mathematical model of task allocation is established by combat task decomposition. Secondly, an initial assignment based on auction contracts and an adjustment scheme based on swap contracts are introduced into the task allocation. Finally, simulation of a typical scenario shows that the model can be used to solve the task allocation problem in a complex combat environment.

  13. Minimizing the Total Service Time of Discrete Dynamic Berth Allocation Problem by an Iterated Greedy Heuristic

    PubMed Central

    2014-01-01

    Berth allocation is the forefront operation performed when ships arrive at a port and is a critical task in container port optimization. Minimizing the time ships spend at berths constitutes an important objective of berth allocation problems. This study focuses on the discrete dynamic berth allocation problem (discrete DBAP), which aims to minimize total service time, and proposes an iterated greedy (IG) algorithm to solve it. The proposed IG algorithm is tested on three benchmark problem sets. Experimental results show that the proposed IG algorithm can obtain optimal solutions for all test instances of the first and second problem sets and outperforms the best-known solutions for 35 out of 90 test instances of the third problem set. PMID:25295295
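
    A small sketch of the iterated greedy idea for the discrete DBAP is given below. It assumes handling times independent of the assigned berth and all berths free at time zero, and uses invented ship data rather than the benchmark instances, so it illustrates the destroy-and-rebuild loop rather than reproducing the paper's exact algorithm.

    ```python
    # Iterated greedy (IG) sketch for a toy discrete dynamic berth allocation problem.
    import random

    ships = [  # (arrival time, handling time) -- invented data
        (0, 5), (1, 3), (2, 6), (4, 2), (5, 4), (6, 3), (7, 5), (9, 2)]
    N_BERTHS = 2

    def total_service_time(assignment):
        """assignment: one ship-index sequence per berth; returns total flow time."""
        total = 0
        for seq in assignment:
            t = 0
            for i in seq:
                arrival, handling = ships[i]
                t = max(t, arrival) + handling
                total += t - arrival          # waiting + handling for this ship
        return total

    def greedy_insert(assignment, ship_idx):
        """Insert one ship at the berth/position giving the smallest cost increase."""
        best = None
        for b in range(N_BERTHS):
            for pos in range(len(assignment[b]) + 1):
                trial = [list(s) for s in assignment]
                trial[b].insert(pos, ship_idx)
                cost = total_service_time(trial)
                if best is None or cost < best[0]:
                    best = (cost, trial)
        return best[1]

    def iterated_greedy(iterations=200, destroy=3, seed=0):
        random.seed(seed)
        sol = [[] for _ in range(N_BERTHS)]
        for i in sorted(range(len(ships)), key=lambda i: ships[i][0]):
            sol = greedy_insert(sol, i)       # initial greedy solution
        best, best_cost = sol, total_service_time(sol)
        for _ in range(iterations):
            removed = random.sample(range(len(ships)), destroy)   # destruction
            partial = [[i for i in seq if i not in removed] for seq in best]
            for i in removed:
                partial = greedy_insert(partial, i)               # reconstruction
            cost = total_service_time(partial)
            if cost < best_cost:
                best, best_cost = partial, cost
        return best, best_cost

    print(iterated_greedy())
    ```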

  14. Good vibrations: tactile feedback in support of attention allocation and human-automation coordination in event-driven domains.

    PubMed

    Sklar, A E; Sarter, N B

    1999-12-01

    Observed breakdowns in human-machine communication can be explained, in part, by the nature of current automation feedback, which relies heavily on focal visual attention. Such feedback is not well suited for capturing attention in case of unexpected changes and events or for supporting the parallel processing of large amounts of data in complex domains. As suggested by multiple-resource theory, one possible solution to this problem is to distribute information across various sensory modalities. A simulator study was conducted to compare the effectiveness of visual, tactile, and redundant visual and tactile cues for indicating unexpected changes in the status of an automated cockpit system. Both tactile conditions resulted in higher detection rates for, and faster response times to, uncommanded mode transitions. Tactile feedback did not interfere with, nor was its effectiveness affected by, the performance of concurrent visual tasks. The observed improvement in task-sharing performance indicates that the introduction of tactile feedback is a promising avenue toward better supporting human-machine communication in event-driven, information-rich domains.

  15. Dynamic resource allocation in a hierarchical multiprocessor system: A preliminary study

    NASA Technical Reports Server (NTRS)

    Ngai, Tin-Fook

    1986-01-01

    An integrated system approach to dynamic resource allocation is proposed. Some of the problems in dynamic resource allocation and the relationship of these problems to system structures are examined. A general dynamic resource allocation scheme is presented. A hierarchical system architecture which dynamically maps between processor structure and programs at multiple levels of instantiations is described. Simulation experiments were conducted to study dynamic resource allocation on the proposed system. Preliminary evaluation based on simple dynamic resource allocation algorithms indicates that with the proposed system approach, the complexity of dynamic resource management could be significantly reduced while achieving reasonably effective dynamic resource allocation.

  16. Models of resource allocation optimization when solving the control problems in organizational systems

    NASA Astrophysics Data System (ADS)

    Menshikh, V.; Samorokovskiy, A.; Avsentev, O.

    2018-03-01

    A mathematical model is presented for optimizing the allocation of resources to reduce the time needed for management decisions, together with algorithms for solving the general resource allocation problem. The optimization problem of choosing resources in organizational systems so as to reduce the total execution time of a job is solved. This is a complex three-level combinatorial problem, which requires solving several specific subproblems: estimating the duration of each action as a function of the number of performers in the group that carries it out; estimating the total execution time of all actions as a function of the group compositions; and finding a distribution of the available performers among groups that minimizes the total execution time of all actions. In addition, algorithms to solve the general problem of resource allocation are proposed.

  17. A Method for Exploiting Redundancy to Accommodate Actuator Limits in Multivariable Systems

    NASA Technical Reports Server (NTRS)

    Litt, Jonathan; Roulette, Greg

    1995-01-01

    This paper introduces a new method for accommodating actuator saturation in a multivariable system with actuator redundancy. Actuator saturation can cause significant deterioration in control system performance because unmet demand may result in sluggish transients and oscillations in response to setpoint changes. To help compensate for this problem, a technique has been developed which takes advantage of redundancy in multivariable systems to redistribute the unmet control demand over the remaining useful effectors. This method is not a redesign procedure, rather it modifies commands to the unlimited effectors to compensate for those which are limited, thereby exploiting the built-in redundancy. The original commands are modified by the increments due to unmet demand, but when a saturated effector comes off its limit, the incremental commands disappear and the original unmodified controller remains intact. This scheme provides a smooth transition between saturated and unsaturated modes as it divides up the unmet requirement over any available actuators. This way, if there is sufficiently redundant control authority, performance can be maintained.
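
    The following is a hedged sketch of the general idea only, not the authors' control scheme: when some effectors saturate, the moment demand they cannot deliver is redistributed to the still-unsaturated effectors through the pseudo-inverse of their columns of an effectiveness matrix B. All matrices, limits, and commands are invented for illustration.

    ```python
    # Redistribute unmet demand from saturated effectors to the remaining ones.
    import numpy as np

    def redistribute(B, u_cmd, u_min, u_max, max_iter=5):
        u = np.clip(u_cmd, u_min, u_max)
        for _ in range(max_iter):
            unmet = B @ u_cmd - B @ u                # moment lost to the limits
            free = (u > u_min) & (u < u_max)         # effectors with margin left
            if not np.any(free) or np.allclose(unmet, 0):
                break
            du = np.zeros_like(u)
            du[free] = np.linalg.pinv(B[:, free]) @ unmet
            u = np.clip(u + du, u_min, u_max)        # may saturate more; iterate
        return u

    B = np.array([[1.0, 0.8, 0.5, 0.3],
                  [0.2, -0.6, 0.9, -0.4]])           # 2 axes, 4 effectors (made up)
    u_cmd = np.array([1.5, -0.2, 0.4, 0.1])          # controller's raw commands
    u_min, u_max = -np.ones(4), np.ones(4)           # effector limits
    u = redistribute(B, u_cmd, u_min, u_max)
    print(u, B @ u, B @ np.clip(u_cmd, u_min, u_max))
    ```

    When the saturated effector later comes off its limit, the increments vanish and the original commands pass through unchanged, mirroring the smooth-transition property described above.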

  18. Location-allocation models and new solution methodologies in telecommunication networks

    NASA Astrophysics Data System (ADS)

    Dinu, S.; Ciucur, V.

    2016-08-01

    When designing a telecommunications network topology, three types of interdependent decisions are combined: location, allocation and routing, which are expressed by the following design considerations: how many interconnection devices - consolidation points/concentrators should be used and where should they be located; how to allocate terminal nodes to concentrators; how should the voice, video or data traffic be routed and what transmission links (capacitated or not) should be built into the network. Including these three components of the decision into a single model generates a problem whose complexity makes it difficult to solve. A first method to address the overall problem is the sequential one, whereby the first step deals with the location-allocation problem and based on this solution the subsequent sub-problem (routing the network traffic) shall be solved. The issue of location and allocation in a telecommunications network, called "The capacitated concentrator location- allocation - CCLA problem" is based on one of the general location models on a network in which clients/demand nodes are the terminals and facilities are the concentrators. Like in a location model, each client node has a demand traffic, which must be served, and the facilities can serve these demands within their capacity limit. In this study, the CCLA problem is modeled as a single-source capacitated location-allocation model whose optimization objective is to determine the minimum network cost consisting of fixed costs for establishing the locations of concentrators, costs for operating concentrators and costs for allocating terminals to concentrators. The problem is known as a difficult combinatorial optimization problem for which powerful algorithms are required. Our approach proposes a Fuzzy Genetic Algorithm combined with a local search procedure to calculate the optimal values of the location and allocation variables. To confirm the efficiency of the proposed algorithm with respect to the quality of solutions, significant size test problems were considered: up to 100 terminal nodes and 50 concentrators on a 100 × 100 square grid. The performance of this hybrid intelligent algorithm was evaluated by measuring the quality of its solutions with respect to the following statistics: the standard deviation and the ratio of the best solution obtained.

  19. Toward an Integrated Design, Inspection and Redundancy Research Program.

    DTIC Science & Technology

    1984-01-01

    ... develop physical models and generic tools for analyzing the effects of redundancy, reserve strength, and residual strength on the system behavior of marine... probabilistic analyses to be applicable to real-world problems, this program needs to provide the deterministic physical models and generic tools upon...

  20. Biomass and nutrient allocation strategies in a desert ecosystem in the Hexi Corridor, northwest China.

    PubMed

    Zhang, Ke; Su, YongZhong; Yang, Rong

    2017-07-01

    The allocation of biomass and nutrients in plants is a crucial factor in understanding how plant structure and dynamics respond to different environmental conditions. In this study, we present a comprehensive scaling analysis of data from a desert ecosystem to determine the biomass and nutrient (carbon (C), nitrogen (N), and phosphorus (P)) allocation strategies of desert plants from 40 sites in the Hexi Corridor. We found that the biomass and levels of C, N, and P storage were higher in shoots than in roots. Root biomass and nutrient storage were concentrated at a soil depth of 0-30 cm. Scaling relationships of biomass, C storage, and P storage between shoots and roots were isometric, but that of N storage was allometric. Results of a redundancy analysis (RDA) showed that soil nutrient densities were the primary factors influencing biomass and nutrient allocation, accounting for 94.5% of the explained proportion. However, mean annual precipitation was the primary factor influencing the root biomass/shoot biomass (R/S) ratio. Furthermore, Pearson's correlations and regression analyses demonstrated that although the biomass and nutrients associated with functional traits primarily depended on soil conditions, mean annual precipitation and mean annual temperature had greater effects on root biomass and nutrient storage.

  1. Mismatch and resolution in compressive imaging

    NASA Astrophysics Data System (ADS)

    Fannjiang, Albert; Liao, Wenjing

    2011-09-01

    Highly coherent sensing matrices arise in the discretization of continuum problems such as radar and medical imaging when the grid spacing is below the Rayleigh threshold, as well as in the use of highly coherent, redundant dictionaries as sparsifying operators. Algorithms (BOMP, BLOOMP) based on techniques of band exclusion and local optimization are proposed to enhance Orthogonal Matching Pursuit (OMP) and deal with such coherent sensing matrices. BOMP and BLOOMP have provable performance guarantees for reconstructing sparse, widely separated objects independent of the redundancy, and have a sparsity constraint and computational cost similar to OMP's. A numerical study demonstrates the effectiveness of BLOOMP for compressed sensing with highly coherent, redundant sensing matrices.
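
    The sketch below illustrates the band-exclusion idea on top of plain OMP (the BOMP flavor, in simplified form); the oversampled cosine dictionary, band radius, and sparse signal are illustrative assumptions, not the paper's experiments.

    ```python
    # Orthogonal Matching Pursuit with band exclusion on a coherent dictionary.
    import numpy as np

    def bomp(A, y, sparsity, band=2):
        """Greedy OMP that skips columns within `band` of already-chosen ones."""
        m, n = A.shape
        support, residual = [], y.copy()
        for _ in range(sparsity):
            corr = np.abs(A.T @ residual)
            for j in support:                       # band exclusion
                lo, hi = max(0, j - band), min(n, j + band + 1)
                corr[lo:hi] = 0.0
            support.append(int(np.argmax(corr)))
            # Least-squares fit on the current support, then update the residual.
            x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
            residual = y - A[:, support] @ x_s
        x = np.zeros(n)
        x[support] = x_s
        return x, support

    n, m = 200, 60
    t = np.linspace(0, 1, m)
    freqs = np.linspace(0, 20, n)                 # finely spaced -> coherent columns
    A = np.cos(2 * np.pi * np.outer(t, freqs))
    A /= np.linalg.norm(A, axis=0)
    x_true = np.zeros(n); x_true[[30, 120]] = [1.0, -0.7]   # widely separated objects
    y = A @ x_true
    x_hat, supp = bomp(A, y, sparsity=2, band=5)
    print(sorted(supp))                           # expected: indices near 30 and 120
    ```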

  2. A game-theoretical pricing mechanism for multiuser rate allocation for video over WiMAX

    NASA Astrophysics Data System (ADS)

    Chen, Chao-An; Lo, Chi-Wen; Lin, Chia-Wen; Chen, Yung-Chang

    2010-07-01

    In multiuser rate allocation in a wireless network, strategic users can bias the rate allocation by misrepresenting their bandwidth demands to a base station, leading to an unfair allocation. Game-theoretical approaches have been proposed to address the unfair allocation problems caused by the strategic users. However, existing approaches rely on a time-consuming iterative negotiation process. In addition, they cannot completely prevent unfair allocations caused by inconsistent strategic behaviors. To address these problems, we propose a Search Based Pricing Mechanism to reduce the communication time and to capture a user's strategic behavior. Our simulation results show that the proposed method significantly reduces the communication time and converges stably to an optimal allocation.

  3. Stochastic Optimization For Water Resources Allocation

    NASA Astrophysics Data System (ADS)

    Yamout, G.; Hatfield, K.

    2003-12-01

    For more than 40 years, water resources allocation problems have been addressed using deterministic mathematical optimization. When data uncertainties exist, these methods could lead to solutions that are sub-optimal or even infeasible. While optimization models have been proposed for water resources decision-making under uncertainty, no attempts have been made to address the uncertainties in water allocation problems in an integrated approach. This paper presents an Integrated Dynamic, Multi-stage, Feedback-controlled, Linear, Stochastic, and Distributed parameter optimization approach to solve a problem of water resources allocation. It attempts to capture (1) the conflict caused by competing objectives, (2) environmental degradation produced by resource consumption, and finally (3) the uncertainty and risk generated by the inherently random nature of state and decision parameters involved in such a problem. A theoretical system is defined throughout its different elements. These elements consisting mainly of water resource components and end-users are described in terms of quantity, quality, and present and future associated risks and uncertainties. Models are identified, modified, and interfaced together to constitute an integrated water allocation optimization framework. This effort is a novel approach to confront the water allocation optimization problem while accounting for uncertainties associated with all its elements; thus resulting in a solution that correctly reflects the physical problem in hand.

  4. An improved risk-explicit interval linear programming model for pollution load allocation for watershed management.

    PubMed

    Xia, Bisheng; Qian, Xin; Yao, Hong

    2017-11-01

    Although the risk-explicit interval linear programming (REILP) model has solved the problem of having interval solutions, it has an equity problem, which can lead to unbalanced allocation between different decision variables. Therefore, an improved REILP model is proposed. This model adds an equity objective function and three constraint conditions to overcome this equity problem. In this case, pollution reduction is in proportion to pollutant load, which supports balanced development between different regional economies. The model is used to solve the problem of pollution load allocation in a small transboundary watershed. Compared with the original REILP model result, our model achieves equity between the upstream and downstream pollutant loads; it also avoids placing the greatest pollution reduction on the sources nearest to the control section. The model provides a better solution to the problem of pollution load allocation than previous versions.

  5. A robust optimisation approach to the problem of supplier selection and allocation in outsourcing

    NASA Astrophysics Data System (ADS)

    Fu, Yelin; Keung Lai, Kin; Liang, Liang

    2016-03-01

    We formulate the supplier selection and allocation problem in outsourcing under an uncertain environment as a stochastic programming problem. Both the decision-maker's attitude towards risk and the penalty parameters for demand deviation are considered in the objective function. A service level agreement, upper bound for each selected supplier's allocation and the number of selected suppliers are considered as constraints. A novel robust optimisation approach is employed to solve this problem under different economic situations. Illustrative examples are presented with managerial implications highlighted to support decision-making.

  6. Analog Processor To Solve Optimization Problems

    NASA Technical Reports Server (NTRS)

    Duong, Tuan A.; Eberhardt, Silvio P.; Thakoor, Anil P.

    1993-01-01

    Proposed analog processor solves "traveling-salesman" problem, considered paradigm of global-optimization problems involving routing or allocation of resources. Includes electronic neural network and auxiliary circuitry based partly on concepts described in "Neural-Network Processor Would Allocate Resources" (NPO-17781) and "Neural Network Solves 'Traveling-Salesman' Problem" (NPO-17807). Processor based on highly parallel computing solves problem in significantly less time.

  7. Reducing codon redundancy and screening effort of combinatorial protein libraries created by saturation mutagenesis.

    PubMed

    Kille, Sabrina; Acevedo-Rocha, Carlos G; Parra, Loreto P; Zhang, Zhi-Gang; Opperman, Diederik J; Reetz, Manfred T; Acevedo, Juan Pablo

    2013-02-15

    Saturation mutagenesis probes define sections of the vast protein sequence space. However, even if randomization is limited this way, the combinatorial numbers problem is severe. Because diversity is created at the codon level, codon redundancy is a crucial factor determining the necessary effort for library screening. Additionally, due to the probabilistic nature of the sampling process, oversampling is required to ensure library completeness as well as a high probability of encountering all unique variants. Our trick employs a special mixture of three primers, creating a degeneracy of 22 unique codons coding for the 20 canonical amino acids. Therefore, codon redundancy and the subsequent screening effort are significantly reduced, and a balanced distribution of codons per amino acid is achieved, as demonstrated for a library of cyclohexanone monooxygenase. We show that this strategy is suitable for any saturation mutagenesis methodology to generate less-redundant libraries.

  8. Development a heuristic method to locate and allocate the medical centers to minimize the earthquake relief operation time.

    PubMed

    Aghamohammadi, Hossein; Saadi Mesgari, Mohammad; Molaei, Damoon; Aghamohammadi, Hasan

    2013-01-01

    Location-allocation is a combinatorial optimization problem and is non-deterministic polynomial-time hard (NP-hard). Because of this complexity, exact solution methods must give way to heuristic or metaheuristic ones. Locating medical centers and allocating earthquake casualties to them is highly important in earthquake disaster management, since a proper method reduces the duration of the relief operation and consequently decreases the number of fatalities. This paper presents the development of a heuristic method based on two nested genetic algorithms to optimize this location-allocation problem using the capabilities of a Geographic Information System (GIS). In the proposed method, the outer genetic algorithm is applied to the location part of the problem and the inner genetic algorithm is used to optimize the resource allocation. The final outcome of the implemented method includes the spatial locations of the new medical centers that are required. The method also calculates how many of the injured at each demand point should be taken to each of the existing and new medical centers. The results showed the high performance of the designed structure in solving a capacitated location-allocation problem that may arise in a disaster situation when injured people have to be taken to medical centers within a reasonable time.

  9. Artificial Intelligence Techniques for the Berth Allocation and Container Stacking Problems in Container Terminals

    NASA Astrophysics Data System (ADS)

    Salido, Miguel A.; Rodriguez-Molins, Mario; Barber, Federico

    The Container Stacking Problem and the Berth Allocation Problem are two important and clearly related problems in maritime container terminal management. Terminal operators normally demand that all containers to be loaded onto an incoming vessel be ready and easily accessible in the terminal before the vessel's arrival. Similarly, customers (i.e., vessel owners) expect prompt berthing of their vessels upon arrival. In this paper, we present an artificial-intelligence-based integrated system to relate these problems. Firstly, we develop a metaheuristic algorithm for berth allocation which generates an optimized order of vessels to be served according to existing berth constraints. Secondly, we develop a domain-oriented heuristic planner for calculating the number of reshuffles needed to place containers appropriately for a given berth ordering of vessels. By combining these optimized solutions, terminal operators can be assisted in deciding on the most appropriate solution in each particular case.

  10. Multiple Interacting Risk Factors: On Methods for Allocating Risk Factor Interactions.

    PubMed

    Price, Bertram; MacNicoll, Michael

    2015-05-01

    A persistent problem in health risk analysis where it is known that a disease may occur as a consequence of multiple risk factors with interactions is allocating the total risk of the disease among the individual risk factors. This problem, referred to here as risk apportionment, arises in various venues, including: (i) public health management, (ii) government programs for compensating injured individuals, and (iii) litigation. Two methods have been described in the risk analysis and epidemiology literature for allocating total risk among individual risk factors. One method uses weights to allocate interactions among the individual risk factors. The other method is based on risk accounting axioms and finding an optimal and unique allocation that satisfies the axioms using a procedure borrowed from game theory. Where relative risk or attributable risk is the risk measure, we find that the game-theory-determined allocation is the same as the allocation where risk factor interactions are apportioned to individual risk factors using equal weights. Therefore, the apportionment problem becomes one of selecting a meaningful set of weights for allocating interactions among the individual risk factors. Equal weights and weights proportional to the risks of the individual risk factors are discussed. © 2015 Society for Risk Analysis.
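
    As a small illustration of the apportionment idea with hypothetical numbers: taking excess relative risk (RR - 1) as the "worth" of each subset of two risk factors, the Shapley value splits the interaction term equally between the factors involved, matching the equal-weights allocation discussed above.

    ```python
    # Shapley-value apportionment of excess relative risk for two interacting factors.
    from itertools import permutations

    # Excess relative risk for each combination of risk factors (hypothetical values).
    excess = {frozenset(): 0.0,
              frozenset({"A"}): 0.8,
              frozenset({"B"}): 0.5,
              frozenset({"A", "B"}): 2.0}   # 2.0 > 0.8 + 0.5 -> positive interaction

    def shapley(players, worth):
        value = {p: 0.0 for p in players}
        orders = list(permutations(players))
        for order in orders:
            coalition = set()
            for p in order:
                before = worth[frozenset(coalition)]
                coalition.add(p)
                value[p] += (worth[frozenset(coalition)] - before) / len(orders)
        return value

    alloc = shapley(["A", "B"], excess)
    interaction = (excess[frozenset({"A", "B"})]
                   - excess[frozenset({"A"})] - excess[frozenset({"B"})])
    print(alloc)                                         # {'A': 1.15, 'B': 0.85}
    print(excess[frozenset({"A"})] + interaction / 2,    # equal-weight split gives
          excess[frozenset({"B"})] + interaction / 2)    # the same allocation
    ```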

  11. Rate Adaptive Based Resource Allocation with Proportional Fairness Constraints in OFDMA Systems

    PubMed Central

    Yin, Zhendong; Zhuang, Shufeng; Wu, Zhilu; Ma, Bo

    2015-01-01

    Orthogonal frequency division multiple access (OFDMA), which is widely used in wireless sensor networks, allows different users to obtain different subcarriers according to their subchannel gains. Therefore, how to assign subcarriers and power to different users to achieve a high system sum rate is an important research area in OFDMA systems. In this paper, the focus of study is on rate adaptive (RA) based resource allocation with proportional fairness constraints. Since the resource allocation is an NP-hard and non-convex optimization problem, a new efficient resource allocation algorithm, ACO-SPA, is proposed, which combines ant colony optimization (ACO) and suboptimal power allocation (SPA). To reduce the computational complexity, the optimization problem of resource allocation in OFDMA systems is separated into two steps. For the first one, the ant colony optimization algorithm is performed to solve the subcarrier allocation. Then, the suboptimal power allocation algorithm is developed with strict proportional fairness, based on the principle that the sums of power and the reciprocal of channel-to-noise ratio for each user in different subchannels are equal. Extensive simulation results are presented in support. In contrast with root-finding and linear methods, the proposed method provides better performance in solving the proportional resource allocation problem in OFDMA systems. PMID:26426016

  12. Fund allocation using capacitated vehicle routing problem

    NASA Astrophysics Data System (ADS)

    Mamat, Nur Jumaadzan Zaleha; Jaaman, Saiful Hafizah; Ahmad, Rokiah Rozita; Darus, Maslina

    2014-09-01

    In investment fund allocation, it is unwise for an investor to distribute his fund across several assets simultaneously, for economic reasons. One solution is to allocate the fund to one asset at a time, in a sequence that will either maximize returns or minimize risks depending on the investor's objective. The vehicle routing problem (VRP) provides an avenue for addressing this issue. VRP answers the question of how to efficiently use an available fleet of vehicles to meet a given service demand, subject to a set of operational requirements. This paper proposes using the capacitated vehicle routing problem (CVRP) to optimize investment fund allocation, employing data on selected stocks in the FTSE Bursa Malaysia. Results suggest that CVRP can be applied to solve the investment fund allocation problem and increase the investor's profit.
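
    A toy sketch of how a CVRP heuristic could order fund allocation is shown below: each asset has a required amount (its "demand"), each tranche of the fund is a "vehicle" with limited capacity, and the "distance" between assets is an illustrative dissimilarity (e.g. a risk measure). All numbers and the nearest-neighbour construction are assumptions, not the paper's data or method.

    ```python
    # Nearest-neighbour CVRP construction applied to sequencing fund allocation.
    import numpy as np

    rng = np.random.default_rng(3)
    amounts = np.array([30, 20, 25, 15, 35, 10])      # required investment per asset
    capacity = 60                                     # fund available per tranche
    risk = rng.uniform(0.1, 1.0, size=(7, 7))         # node 0 = "depot" (cash position)
    risk = (risk + risk.T) / 2
    np.fill_diagonal(risk, 0.0)

    def nearest_neighbour_cvrp(amounts, capacity, risk):
        unserved = set(range(1, len(amounts) + 1))    # asset nodes 1..n
        routes = []
        while unserved:
            route, load, here = [], 0, 0              # start each tranche at cash
            while True:
                feasible = [j for j in unserved if load + amounts[j - 1] <= capacity]
                if not feasible:
                    break
                nxt = min(feasible, key=lambda j: risk[here, j])
                route.append(nxt)
                load += amounts[nxt - 1]
                unserved.remove(nxt)
                here = nxt
            routes.append(route)
        return routes

    print(nearest_neighbour_cvrp(amounts, capacity, risk))   # allocation sequences
    ```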

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hao, He; Sun, Yannan; Carroll, Thomas E.

    We propose a coordination algorithm for cooperative power allocation among a collection of commercial buildings within a campus. We introduce thermal and power models of a typical commercial building Heating, Ventilation, and Air Conditioning (HVAC) system, and utilize model predictive control to characterize their power flexibility. The power allocation problem is formulated as a cooperative game using the Nash Bargaining Solution (NBS) concept, in which buildings collectively maximize the product of their utilities subject to their local flexibility constraints and a total power limit set by the campus coordinator. To solve the optimal allocation problem, a distributed protocol is designed using dual decomposition of the Nash bargaining problem. Numerical simulations are performed to demonstrate the efficacy of our proposed allocation method.
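
    The following is a minimal sketch of the dual-decomposition idea described above, not the paper's building models or protocol: each building locally maximizes the log of its own (hypothetical) utility minus a price on power, and the coordinator adjusts the price by a subgradient step until the total allocation approximately meets the campus power limit.

    ```python
    # Dual decomposition of a Nash-bargaining-style power allocation (toy models).
    import numpy as np

    a = np.array([0.8, 1.2, 0.5, 1.0])        # hypothetical comfort coefficients
    p_min, p_max = 5.0, 60.0                  # per-building flexibility (kW)
    P_total = 120.0                           # campus power limit (kW)
    grid = np.linspace(p_min, p_max, 500)     # each building searches this grid

    def building_response(a_i, price):
        """Local problem: maximize log(utility) - price * power over the grid."""
        objective = np.log(np.log1p(a_i * grid)) - price * grid
        return grid[np.argmax(objective)]

    price, step = 0.01, 1e-4
    for _ in range(2000):                     # coordinator's subgradient loop
        p = np.array([building_response(ai, price) for ai in a])
        price = max(0.0, price + step * (p.sum() - P_total))

    print(p, p.sum())                         # total close to the 120 kW limit
    ```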

  14. Tutorial: Performance and reliability in redundant disk arrays

    NASA Technical Reports Server (NTRS)

    Gibson, Garth A.

    1993-01-01

    A disk array is a collection of physically small magnetic disks that is packaged as a single unit but operates in parallel. Disk arrays capitalize on the availability of small-diameter disks from a price-competitive market to provide the cost, volume, and capacity of current disk systems but many times their performance. Unfortunately, relative to current disk systems, the larger number of components in disk arrays leads to higher rates of failure. To tolerate failures, redundant disk arrays devote a fraction of their capacity to an encoding of their information. This redundant information enables the contents of a failed disk to be recovered from the contents of non-failed disks. The simplest and least expensive encoding for this redundancy, known as N+1 parity is highlighted. In addition to compensating for the higher failure rates of disk arrays, redundancy allows highly reliable secondary storage systems to be built much more cost-effectively than is now achieved in conventional duplicated disks. Disk arrays that combine redundancy with the parallelism of many small-diameter disks are often called Redundant Arrays of Inexpensive Disks (RAID). This combination promises improvements to both the performance and the reliability of secondary storage. For example, IBM's premier disk product, the IBM 3390, is compared to a redundant disk array constructed of 84 IBM 0661 3 1/2-inch disks. The redundant disk array has comparable or superior values for each of the metrics given and appears likely to cost less. In the first section of this tutorial, I explain how disk arrays exploit the emergence of high performance, small magnetic disks to provide cost-effective disk parallelism that combats the access and transfer gap problems. The flexibility of disk-array configurations benefits manufacturer and consumer alike. In contrast, I describe in this tutorial's second half how parallelism, achieved through increasing numbers of components, causes overall failure rates to rise. Redundant disk arrays overcome this threat to data reliability by ensuring that data remains available during and after component failures.

  15. Multi-Robot Coalitions Formation with Deadlines: Complexity Analysis and Solutions

    PubMed Central

    2017-01-01

    Multi-robot task allocation is one of the main problems to address in order to design a multi-robot system, especially when robots form coalitions that must carry out tasks before a deadline. Many factors affect the performance of these systems; among them, this paper focuses on the physical interference effect, produced when two or more robots want to access the same point simultaneously. To the best of our knowledge, this paper presents the first formal description of multi-robot task allocation that includes a model of interference. Thanks to this description, the complexity of the allocation problem is analyzed. Moreover, the main contribution of this paper is to provide the conditions under which the optimal solution of the aforementioned allocation problem can be obtained by solving an integer linear program. The optimal results are compared to previous allocation algorithms already proposed by the first two authors of this paper and to a new method proposed in this paper. The results obtained show that the new task allocation algorithms reach more than 80% of the median of the optimal solution, outperforming previous auction algorithms with a large reduction in execution time. PMID:28118384

  16. Multi-Robot Coalitions Formation with Deadlines: Complexity Analysis and Solutions.

    PubMed

    Guerrero, Jose; Oliver, Gabriel; Valero, Oscar

    2017-01-01

    Multi-robot task allocation is one of the main problems to address in designing a multi-robot system, especially when robots form coalitions that must carry out tasks before a deadline. Many factors affect the performance of these systems; among them, this paper focuses on the physical interference effect, produced when two or more robots want to access the same point simultaneously. To the best of our knowledge, this paper presents the first formal description of multi-robot task allocation that includes a model of interference. Thanks to this description, the complexity of the allocation problem is analyzed. Moreover, the main contribution of this paper is to provide the conditions under which the optimal solution of the aforementioned allocation problem can be obtained by solving an integer linear program. The optimal results are compared to previous allocation algorithms already proposed by the first two authors of this paper and to a new method proposed in this paper. The results show that the new task allocation algorithms reach more than 80% of the median of the optimal solution, outperforming previous auction algorithms with a large reduction in execution time.
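
    The paper's formulation is an integer linear program that includes an interference model; that exact model is not reproduced in this abstract. Purely as a hedged illustration of the kind of search the allocation problem entails, the sketch below enumerates coalition assignments of robots to tasks and keeps the best assignment that meets all deadlines, using a made-up completion-time function with a crude interference penalty.

        from itertools import product

        def completion_time(task, coalition_size):
            """Hypothetical model: work shared among robots, plus a penalty that
            grows with coalition size as a stand-in for physical interference."""
            if coalition_size == 0:
                return float("inf")
            return task["work"] / coalition_size + 0.5 * (coalition_size - 1)

        def best_allocation(tasks, num_robots):
            """Brute-force search over all robot-to-task assignments (tiny instances only)."""
            best, best_cost = None, float("inf")
            for assignment in product(range(len(tasks)), repeat=num_robots):
                sizes = [assignment.count(k) for k in range(len(tasks))]
                times = [completion_time(t, s) for t, s in zip(tasks, sizes)]
                if all(time <= t["deadline"] for time, t in zip(times, tasks)):
                    cost = sum(times)
                    if cost < best_cost:
                        best, best_cost = assignment, cost
            return best, best_cost

        tasks = [{"work": 4.0, "deadline": 3.0}, {"work": 2.0, "deadline": 2.5}]
        print(best_allocation(tasks, num_robots=3))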

  17. Derivation of three closed loop kinematic velocity models using normalized quaternion feedback for an autonomous redundant manipulator with application to inverse kinematics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Unseren, M.A.

    1993-04-01

    The report discusses the orientation tracking control problem for a kinematically redundant, autonomous manipulator moving in a three dimensional workspace. The orientation error is derived using the normalized quaternion error method of Ickes, the Luh, Walker, and Paul error method, and a method suggested here utilizing the Rodrigues parameters, all of which are expressed in terms of normalized quaternions. The analytical time derivatives of the orientation errors are determined. The latter, along with the translational velocity error, form a closed loop kinematic velocity model of the manipulator using normalized quaternion and translational position feedback. An analysis of the singularities associated with expressing the models in a form suitable for solving the inverse kinematics problem is given. Two redundancy resolution algorithms originally developed using an open loop kinematic velocity model of the manipulator are extended to properly take into account the orientation tracking control problem. This report furnishes the necessary mathematical framework required prior to experimental implementation of the orientation tracking control schemes on the seven-axis CESARm research manipulator or on the seven-axis Robotics Research K1207i dexterous manipulator, the latter of which is to be delivered to the Oak Ridge National Laboratory in 1993.
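
    The report derives several quaternion-based orientation errors; none of its exact expressions are reproduced in this abstract. As a generic sketch of one common convention (the error quaternion as the Hamilton product of the desired orientation and the conjugate of the current one), the following is an assumption-laden illustration rather than the report's formulation.

        import numpy as np

        def quat_conj(q):
            """Conjugate of a unit quaternion q = [w, x, y, z]."""
            w, x, y, z = q
            return np.array([w, -x, -y, -z])

        def quat_mul(q1, q2):
            """Hamilton product q1 * q2, both in [w, x, y, z] order."""
            w1, x1, y1, z1 = q1
            w2, x2, y2, z2 = q2
            return np.array([
                w1*w2 - x1*x2 - y1*y2 - z1*z2,
                w1*x2 + x1*w2 + y1*z2 - z1*y2,
                w1*y2 - x1*z2 + y1*w2 + z1*x2,
                w1*z2 + x1*y2 - y1*x2 + z1*w2,
            ])

        def orientation_error(q_desired, q_current):
            """Error quaternion; its vector part is a common 3-component orientation error signal."""
            q_err = quat_mul(q_desired, quat_conj(q_current))
            return q_err[1:]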

  18. Optimal Management of Redundant Control Authority for Fault Tolerance

    NASA Technical Reports Server (NTRS)

    Wu, N. Eva; Ju, Jianhong

    2000-01-01

    This paper is intended to demonstrate the feasibility of a solution to a fault tolerant control problem. It explains, through a numerical example, the design and the operation of a novel scheme for fault tolerant control. The fundamental principle of the scheme was formalized in [5] based on the notion of normalized nonspecificity. The novelty lies with the use of a reliability criterion for redundancy management, and therefore leads to a high overall system reliability.

  19. A Novel Sensor Selection and Power Allocation Algorithm for Multiple-Target Tracking in an LPI Radar Network

    PubMed Central

    She, Ji; Wang, Fei; Zhou, Jianjiang

    2016-01-01

    Radar networks are proven to have numerous advantages over traditional monostatic and bistatic radar. With recent developments, radar networks have become an attractive platform due to their low probability of intercept (LPI) performance for target tracking. In this paper, a joint sensor selection and power allocation algorithm for multiple-target tracking in a radar network based on LPI is proposed. It is found that this algorithm can minimize the total transmitted power of a radar network on the basis of a predetermined mutual information (MI) threshold between the target impulse response and the reflected signal. The MI is required by the radar network system to estimate target parameters, and it can be calculated predictively with the estimation of target state. The optimization problem of sensor selection and power allocation, which contains two variables, is non-convex and it can be solved by separating power allocation problem from sensor selection problem. To be specific, the optimization problem of power allocation can be solved by using the bisection method for each sensor selection scheme. Also, the optimization problem of sensor selection can be solved by a lower complexity algorithm based on the allocated powers. According to the simulation results, it can be found that the proposed algorithm can effectively reduce the total transmitted power of a radar network, which can be conducive to improving LPI performance. PMID:28009819
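
    The power-allocation step described above (for a fixed sensor selection, find the smallest transmit power whose predicted mutual information just reaches the threshold) reduces to a one-dimensional bisection when MI is nondecreasing in power. The sketch below is a generic illustration; mi_of_power is a hypothetical stand-in for the paper's predictive MI computation.

        import math

        def min_power_by_bisection(mi_of_power, mi_threshold, p_lo=0.0, p_hi=1.0, tol=1e-6):
            """Smallest power in [p_lo, p_hi] with mi_of_power(p) >= mi_threshold,
            assuming mi_of_power is nondecreasing in p."""
            if mi_of_power(p_hi) < mi_threshold:
                raise ValueError("threshold not reachable within the power budget")
            while p_hi - p_lo > tol:
                mid = 0.5 * (p_lo + p_hi)
                if mi_of_power(mid) >= mi_threshold:
                    p_hi = mid
                else:
                    p_lo = mid
            return p_hi

        # toy monotone MI model (hypothetical): MI = log2(1 + gain * power)
        p_min = min_power_by_bisection(lambda p: math.log2(1 + 8.0 * p), mi_threshold=2.0)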

  20. The Natural-CCD Algorithm, a Novel Method to Solve the Inverse Kinematics of Hyper-redundant and Soft Robots.

    PubMed

    Martín, Andrés; Barrientos, Antonio; Del Cerro, Jaime

    2018-03-22

    This article presents a new method to solve the inverse kinematics problem of hyper-redundant and soft manipulators. From an engineering perspective, this kind of robot is an underdetermined system. Such robots therefore exhibit an infinite number of solutions to the inverse kinematics problem, and choosing the best one can be a great challenge. A new algorithm based on cyclic coordinate descent (CCD) and named natural-CCD is proposed to solve this issue. It takes its name from the fact that it generates very harmonious robot movements and trajectories that also appear in nature, such as the golden spiral. In addition, it has been applied to perform continuous trajectories, to develop whole-body movements, to analyze motion planning in complex environments, and to study fault tolerance, for both prismatic and rotational joints. The proposed algorithm is very simple, precise, and computationally efficient. It works for robots in either two or three spatial dimensions and handles a large number of degrees of freedom. Because of this, it aims to break down barriers between discrete hyper-redundant and continuum soft robots.
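
    The natural-CCD variant itself is not detailed in this abstract. As a point of reference, the sketch below is the plain cyclic coordinate descent baseline it builds on, applied to a planar serial arm with rotational joints; link lengths, iteration counts, and tolerances are illustrative assumptions.

        import numpy as np

        def forward(thetas, lengths):
            """Positions of the base, every joint, and the end effector of a planar chain."""
            pts, angle = [np.zeros(2)], 0.0
            for t, l in zip(thetas, lengths):
                angle += t
                pts.append(pts[-1] + l * np.array([np.cos(angle), np.sin(angle)]))
            return pts

        def ccd_ik(target, lengths, iters=200, tol=1e-4):
            """Classic CCD: sweep joints from tip to base, each time rotating the joint
            so that the joint-to-effector direction points at the target."""
            target = np.asarray(target, dtype=float)
            thetas = np.zeros(len(lengths))
            for _ in range(iters):
                pts = forward(thetas, lengths)
                if np.linalg.norm(pts[-1] - target) < tol:
                    break
                for i in reversed(range(len(thetas))):
                    pts = forward(thetas, lengths)
                    to_end = pts[-1] - pts[i]
                    to_tgt = target - pts[i]
                    thetas[i] += np.arctan2(to_tgt[1], to_tgt[0]) - np.arctan2(to_end[1], to_end[0])
            return thetas

        thetas = ccd_ik(target=(3.0, 2.0), lengths=[1.0] * 6)  # 6-link hyper-redundant toy arm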

  1. Oblivious image watermarking combined with JPEG compression

    NASA Astrophysics Data System (ADS)

    Chen, Qing; Maitre, Henri; Pesquet-Popescu, Beatrice

    2003-06-01

    For most data hiding applications, the main source of concern is the effect of lossy compression on hidden information. The objective of watermarking is fundamentally in conflict with that of lossy compression. The latter attempts to remove all irrelevant and redundant information from a signal, while the former uses the irrelevant information to mask the presence of hidden data. Compression of a watermarked image can significantly affect the retrieval of the watermark. Past investigations of this problem have relied heavily on simulation. It is desirable not only to measure the effect of compression on the embedded watermark, but also to control the embedding process so that it survives lossy compression. In this paper, we focus on oblivious watermarking by assuming that the watermarked image inevitably undergoes JPEG compression prior to watermark extraction. We propose an image-adaptive watermarking scheme where the watermarking algorithm and the JPEG compression standard are jointly considered. Watermark embedding takes into consideration the JPEG compression quality factor and exploits an HVS model to adaptively attain a proper trade-off among transparency, hiding data rate, and robustness to JPEG compression. The scheme estimates the image-dependent payload under JPEG compression to achieve the watermarking bit allocation in a determinate way, while maintaining consistent watermark retrieval performance.

  2. Reconfigurable tree architectures using subtree oriented fault tolerance

    NASA Technical Reports Server (NTRS)

    Lowrie, Matthew B.

    1987-01-01

    An approach to the design of reconfigurable tree architectures is presented in which spare processors are allocated at the leaves. The approach is unique in that spares are associated with subtrees and sharing of spares between these subtrees can occur. The Subtree Oriented Fault Tolerance (SOFT) approach is more reliable than previous approaches capable of tolerating link and switch failures, for both single-chip and multichip tree implementations, while reducing redundancy in terms of both spare processors and links. The VLSI layout is O(n) for binary trees and is directly extensible to N-ary trees and to fault tolerance through performance degradation.

  3. An overview of adaptive model theory: solving the problems of redundancy, resources, and nonlinear interactions in human movement control.

    PubMed

    Neilson, Peter D; Neilson, Megan D

    2005-09-01

    Adaptive model theory (AMT) is a computational theory that addresses the difficult control problem posed by the musculoskeletal system in interaction with the environment. It proposes that the nervous system creates motor maps and task-dependent synergies to solve the problems of redundancy and limited central resources. These lead to the adaptive formation of task-dependent feedback/feedforward controllers able to generate stable, noninteractive control and render nonlinear interactions unobservable in sensory-motor relationships. AMT offers a unified account of how the nervous system might achieve these solutions by forming internal models. This is presented as the design of a simulator consisting of neural adaptive filters based on cerebellar circuitry. It incorporates a new network module that adaptively models (in real time) nonlinear relationships between inputs with changing and uncertain spectral and amplitude probability density functions as is the case for sensory and motor signals.

  4. Proposing integrated Shannon's entropy-inverse data envelopment analysis methods for resource allocation problem under a fuzzy environment

    NASA Astrophysics Data System (ADS)

    Çakır, Süleyman

    2017-10-01

    In this study, a two-phase methodology for resource allocation problems under a fuzzy environment is proposed. In the first phase, the imprecise Shannon's entropy method and the acceptability index are suggested, for the first time in the literature, to select the input and output variables to be used in the data envelopment analysis (DEA) application. In the second phase, an interval inverse DEA model is executed for resource allocation in the short run. In an effort to exemplify the practicality of the proposed fuzzy model, a real case application has been conducted involving 16 cement firms listed in Borsa Istanbul. The results of the case application indicated that the proposed hybrid model is a viable procedure to handle input-output selection and resource allocation problems under fuzzy conditions. The presented methodology can also lend itself to different applications such as multi-criteria decision-making problems.
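
    The crisp Shannon's entropy weighting that the imprecise (interval) version generalizes can be sketched briefly; the decision matrix below is invented, and the interval extension and acceptability index used in the paper are not shown.

        import numpy as np

        def entropy_weights(X):
            """Shannon entropy objective weights for a decision matrix X
            (rows = alternatives, columns = criteria, all entries positive)."""
            P = X / X.sum(axis=0)                    # column-wise normalisation
            k = 1.0 / np.log(X.shape[0])
            E = -k * (P * np.log(P)).sum(axis=0)     # entropy of each criterion
            d = 1.0 - E                              # degree of diversification
            return d / d.sum()

        X = np.array([[3.0, 200.0, 0.7],
                      [4.5, 180.0, 0.6],
                      [2.5, 220.0, 0.9]])
        print(entropy_weights(X))                    # weights sum to 1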

  5. Use of computer modeling to investigate a dynamic interaction problem in the Skylab TACS quad-valve package

    NASA Technical Reports Server (NTRS)

    Hesser, R. J.; Gershman, R.

    1975-01-01

    A valve opening-response problem encountered during development of a control valve for the Skylab thruster attitude control system (TACS) is described. The problem involved effects of dynamic interaction among valves in the quad-redundant valve package. Also described is a detailed computer simulation of the quad-valve package which was helpful in resolving the problem.

  6. 77 FR 32183 - Transmission Planning and Cost Allocation by Transmission Owning and Operating Public Utilities

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-05-31

    ... it would not wait for systemic problems to undermine transmission planning before action is taken... that the development of transmission facilities can involve long lead times and complex problems... rather than allowing the problems in transmission planning and cost allocation to continue or to increase...

  7. Some Reliability Issues in Very Large Databases.

    ERIC Educational Resources Information Center

    Lynch, Clifford A.

    1988-01-01

    Describes the unique reliability problems of very large databases that necessitate specialized techniques for hardware problem management. The discussion covers the use of controlled partial redundancy to improve reliability, issues in operating systems and database management systems design, and the impact of disk technology on very large…

  8. Protecting Against Faults in JPL Spacecraft

    NASA Technical Reports Server (NTRS)

    Morgan, Paula

    2007-01-01

    A paper discusses techniques for protecting against faults in spacecraft designed and operated by NASA's Jet Propulsion Laboratory (JPL). The paper addresses, more specifically, fault-protection requirements and techniques common to most JPL spacecraft (in contradistinction to unique, mission-specific techniques), standard practices in the implementation of these techniques, and fault-protection software architectures. Common requirements include those to protect onboard command, data-processing, and control computers; protect against loss of Earth/spacecraft radio communication; maintain safe temperatures; and recover from power overloads. The paper describes fault-protection techniques as part of a fault-management strategy that also includes functional redundancy, redundant hardware, and autonomous monitoring of (1) the operational and health statuses of spacecraft components, (2) temperatures inside and outside the spacecraft, and (3) allocation of power. The strategy also provides for preprogrammed automated responses to anomalous conditions. In addition, the software running in almost every JPL spacecraft incorporates a general-purpose "Safe Mode" response algorithm that configures the spacecraft in a lower-power state that is safe and predictable, thereby facilitating diagnosis of more complex faults by a team of human experts on Earth.

  9. Vector quantization for efficient coding of upper subbands

    NASA Technical Reports Server (NTRS)

    Zeng, W. J.; Huang, Y. F.

    1994-01-01

    This paper examines the application of vector quantization (VQ) to exploit both intra-band and inter-band redundancy in subband coding. The focus here is on the exploitation of inter-band dependency. It is shown that VQ is particularly suitable and effective for coding the upper subbands. Three subband decomposition-based VQ coding schemes are proposed here to exploit the inter-band dependency by making full use of the extra flexibility of the VQ approach over scalar quantization. A quadtree-based variable rate VQ (VRVQ) scheme, which takes full advantage of the intra-band and inter-band redundancy, is first proposed. Then, a more easily implementable alternative based on an efficient block-based edge estimation technique is employed to overcome the implementational barriers of the first scheme. Finally, a predictive VQ scheme formulated in the context of finite-state VQ is proposed to further exploit the dependency among different subbands. A VRVQ scheme proposed elsewhere is extended to provide an efficient bit allocation procedure. Simulation results show that these three hybrid techniques have advantages, in terms of peak signal-to-noise ratio (PSNR) and complexity, over other existing subband-VQ approaches.
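
    None of the three proposed schemes is reproduced here; as a baseline point of reference, the sketch below trains a plain VQ codebook for subband blocks with Lloyd/k-means iterations and encodes vectors by nearest-codeword search, with codebook size and training data as placeholders.

        import numpy as np

        def train_codebook(vectors, k, iters=50, seed=0):
            """Plain k-means (Lloyd) codebook; vectors is an (n, d) array of training blocks."""
            rng = np.random.default_rng(seed)
            codebook = vectors[rng.choice(len(vectors), size=k, replace=False)].astype(float)
            for _ in range(iters):
                d2 = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
                labels = d2.argmin(axis=1)
                for j in range(k):
                    members = vectors[labels == j]
                    if len(members):
                        codebook[j] = members.mean(axis=0)
            return codebook

        def encode(vectors, codebook):
            """Index of the nearest codeword for each input vector."""
            d2 = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
            return d2.argmin(axis=1)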

  10. Covariance versus correlation in capacitated vehicle routing problem-investment fund allocation problem

    NASA Astrophysics Data System (ADS)

    Mamat, Nur Jumaadzan Zaleha; Jaaman, Saiful Hafizah; Ahmad, Rokiah@Rozita

    2017-04-01

    The Capacitated Vehicle Routing Problem-Investment Fund Allocation Problem (CVRP-IFAP) provides investors with a sequence of assets into which to allocate their funds. To minimize the total risk of investment in CVRP-IFAP, covariance values measure the risk between two assets. Another measure of risk is the correlation between returns. Correlation values can be used to diversify the risk of investment loss in order to optimize expected return against a certain level of risk. This study compares the total risk obtained from CVRP-IFAP when using covariance values and when using correlation values. Results show that CVRP-IFAP with covariance values yields lower total risk and a significantly better measure of risk.
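
    To make the comparison concrete (with invented return data, not the paper's), the snippet below computes the two pairwise risk measures a CVRP-IFAP-style formulation can use, covariance and correlation of asset returns, and evaluates the total risk of an equal-weight allocation under each measure.

        import numpy as np

        # hypothetical periodic returns for three assets (rows = periods, columns = assets)
        returns = np.array([[ 0.02,  0.01, -0.01],
                            [ 0.03,  0.00,  0.02],
                            [-0.01,  0.02,  0.01],
                            [ 0.04, -0.02,  0.00]])

        cov = np.cov(returns, rowvar=False)        # covariance between asset pairs
        corr = np.corrcoef(returns, rowvar=False)  # scale-free correlation between asset pairs

        w = np.full(returns.shape[1], 1.0 / returns.shape[1])  # equal-weight allocation
        print(w @ cov @ w, w @ corr @ w)           # total pairwise risk under each measure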

  11. Objective past of a quantum universe: Redundant records of consistent histories

    NASA Astrophysics Data System (ADS)

    Riedel, C. Jess; Zurek, Wojciech H.; Zwolak, Michael

    2016-03-01

    Motivated by the advances of quantum Darwinism and recognizing the role played by redundancy in identifying the small subset of quantum states with resilience characteristic of objective classical reality, we explore the implications of redundant records for consistent histories. The consistent histories formalism is a tool for describing sequences of events taking place in an evolving closed quantum system. A set of histories is consistent when one can reason about them using Boolean logic, i.e., when probabilities of sequences of events that define histories are additive. However, the vast majority of the sets of histories that are merely consistent are flagrantly nonclassical in other respects. This embarras de richesses (known as the set selection problem) suggests that one must go beyond consistency to identify how the classical past arises in our quantum universe. The key intuition we follow is that the records of events that define the familiar objective past are inscribed in many distinct systems, e.g., subsystems of the environment, and are accessible locally in space and time to observers. We identify histories that are not just consistent but redundantly consistent using the partial-trace condition introduced by Finkelstein as a bridge between histories and decoherence. The existence of redundant records is a sufficient condition for redundant consistency. It selects, from the multitude of the alternative sets of consistent histories, a small subset endowed with redundant records characteristic of the objective classical past. The information about an objective history of the past is then simultaneously within reach of many, who can independently reconstruct it and arrive at compatible conclusions in the present.

  12. Antenna Allocation in MIMO Radar with Widely Separated Antennas for Multi-Target Detection

    PubMed Central

    Gao, Hao; Wang, Jian; Jiang, Chunxiao; Zhang, Xudong

    2014-01-01

    In this paper, we explore a new resource called multi-target diversity to optimize the performance of multiple input multiple output (MIMO) radar with widely separated antennas for detecting multiple targets. In particular, we allocate antennas of the MIMO radar to probe different targets simultaneously in a flexible manner based on the performance metric of relative entropy. Two antenna allocation schemes are proposed. In the first scheme, each antenna is allocated to illuminate a proper target over the entire illumination time, so that the detection performance of each target is guaranteed. The problem is formulated as a minimum makespan scheduling problem in the combinatorial optimization framework. Antenna allocation is implemented through a branch-and-bound algorithm and an enhanced factor 2 algorithm. In the second scheme, called antenna-time allocation, each antenna is allocated to illuminate different targets with different illumination time. Both antenna allocation and time allocation are optimized based on illumination probabilities. Over a large range of transmitted power, target fluctuations and target numbers, both of the proposed antenna allocation schemes outperform the scheme without antenna allocation. Moreover, the antenna-time allocation scheme achieves a more robust detection performance than branch-and-bound algorithm and the enhanced factor 2 algorithm when the target number changes. PMID:25350505

  13. Antenna allocation in MIMO radar with widely separated antennas for multi-target detection.

    PubMed

    Gao, Hao; Wang, Jian; Jiang, Chunxiao; Zhang, Xudong

    2014-10-27

    In this paper, we explore a new resource called multi-target diversity to optimize the performance of multiple input multiple output (MIMO) radar with widely separated antennas for detecting multiple targets. In particular, we allocate antennas of the MIMO radar to probe different targets simultaneously in a flexible manner based on the performance metric of relative entropy. Two antenna allocation schemes are proposed. In the first scheme, each antenna is allocated to illuminate a proper target over the entire illumination time, so that the detection performance of each target is guaranteed. The problem is formulated as a minimum makespan scheduling problem in the combinatorial optimization framework. Antenna allocation is implemented through a branch-and-bound algorithm and an enhanced factor 2 algorithm. In the second scheme, called antenna-time allocation, each antenna is allocated to illuminate different targets with different illumination time. Both antenna allocation and time allocation are optimized based on illumination probabilities. Over a large range of transmitted power, target fluctuations and target numbers, both of the proposed antenna allocation schemes outperform the scheme without antenna allocation. Moreover, the antenna-time allocation scheme achieves a more robust detection performance than branch-and-bound algorithm and the enhanced factor 2 algorithm when the target number changes.
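
    The first scheme above casts antenna allocation as a minimum-makespan scheduling problem and solves it with branch-and-bound and an enhanced factor 2 algorithm; neither is reproduced here. For orientation only, the classical list-scheduling heuristic behind such factor-2-style guarantees looks like the sketch below, with the per-antenna illumination "loads" invented.

        import heapq

        def greedy_makespan(loads, num_targets):
            """List scheduling in longest-processing-time order: each load goes to the
            currently least-loaded target; stays within a small constant factor of optimal."""
            heap = [(0.0, t) for t in range(num_targets)]       # (accumulated load, target)
            assignment = {t: [] for t in range(num_targets)}
            for i, load in sorted(enumerate(loads), key=lambda x: -x[1]):
                total, t = heapq.heappop(heap)
                assignment[t].append(i)
                heapq.heappush(heap, (total + load, t))
            makespan = max(total for total, _ in heap)
            return assignment, makespan

        print(greedy_makespan([5.0, 3.0, 3.0, 2.0, 2.0], num_targets=2))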

  14. Economic aspects of spectrum management

    NASA Technical Reports Server (NTRS)

    Stibolt, R. D.

    1979-01-01

    Problems associated with the allocation of the radio frequency spectrum are addressed. It is observed that the current method very likely does not allocate the resource to those most valuing its use. Economic criteria by which the effectiveness of resource allocation schemes can be judged are set forth, and some thoughts on traditional objections to implementation of market characteristics into frequency allocation are offered. The problem of dividing orbit and spectrum between two satellite services sharing the same band but having significantly different system characteristics is discussed. The problem is compounded by the likelihood that one service will commence operation much sooner than the other. Some alternative schemes are offered that, within proper international constraints, could achieve a desired flexibility in the division of orbit and frequency between the two services domestically over the next several years.

  15. Learning automata-based solutions to the nonlinear fractional knapsack problem with applications to optimal resource allocation.

    PubMed

    Granmo, Ole-Christoffer; Oommen, B John; Myrer, Svein Arild; Olsen, Morten Goodwin

    2007-02-01

    This paper considers the nonlinear fractional knapsack problem and demonstrates how its solution can be effectively applied to two resource allocation problems dealing with the World Wide Web. The novel solution involves a "team" of deterministic learning automata (LA). The first real-life problem relates to resource allocation in web monitoring so as to "optimize" information discovery when the polling capacity is constrained. The disadvantages of the currently reported solutions are explained in this paper. The second problem concerns allocating limited sampling resources in a "real-time" manner with the purpose of estimating multiple binomial proportions. This is the scenario encountered when the user has to evaluate multiple web sites by accessing a limited number of web pages, and the proportions of interest are the fraction of each web site that is successfully validated by an HTML validator. Using the general LA paradigm to tackle both of the real-life problems, the proposed scheme improves a current solution in an online manner through a series of informed guesses that move toward the optimal solution. At the heart of the scheme, a team of deterministic LA performs a controlled random walk on a discretized solution space. Comprehensive experimental results demonstrate that the discretization resolution determines the precision of the scheme, and that for a given precision, the current solution (to both problems) is consistently improved until a nearly optimal solution is found--even for switching environments. Thus, the scheme, while being novel to the entire field of LA, also efficiently handles a class of resource allocation problems previously not addressed in the literature.

  16. Direct Adaptive Control of Systems with Actuator Failures: State of the Art and Continuing Challenges

    NASA Technical Reports Server (NTRS)

    Tao, Gang; Joshi, Suresh M.

    2008-01-01

    In this paper, the problem of controlling systems with failures and faults is introduced, and an overview of recent work on direct adaptive control for compensation of uncertain actuator failures is presented. Actuator failures may be characterized by some unknown system inputs being stuck at some unknown (fixed or varying) values at unknown time instants, that cannot be influenced by the control signals. The key task of adaptive compensation is to design the control signals in such a manner that the remaining actuators can automatically and seamlessly take over for the failed ones, and achieve desired stability and asymptotic tracking. A certain degree of redundancy is necessary to accomplish failure compensation. The objective of adaptive control design is to effectively use the available actuation redundancy to handle failures without the knowledge of the failure patterns, parameters, and time of occurrence. This is a challenging problem because failures introduce large uncertainties in the dynamic structure of the system, in addition to parametric uncertainties and unknown disturbances. The paper addresses some theoretical issues in adaptive actuator failure compensation: actuator failure modeling, redundant actuation requirements, plant-model matching, error system dynamics, adaptation laws, and stability, tracking, and performance analysis. Adaptive control designs can be shown to effectively handle uncertain actuator failures without explicit failure detection. Some open technical challenges and research problems in this important research area are discussed.

  17. A survey and taxonomy on energy efficient resource allocation techniques for cloud computing systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hameed, Abdul; Khoshkbarforoushha, Alireza; Ranjan, Rajiv

    In a cloud computing paradigm, energy efficient allocation of different virtualized ICT resources (servers, storage disks, and networks, and the like) is a complex problem due to the presence of heterogeneous application (e.g., content delivery networks, MapReduce, web applications, and the like) workloads having contentious allocation requirements in terms of ICT resource capacities (e.g., network bandwidth, processing speed, response time, etc.). Several recent papers have tried to address the issue of improving energy efficiency in allocating cloud resources to applications with varying degree of success. However, to the best of our knowledge there is no published literature on this subject that clearly articulates the research problem and provides research taxonomy for succinct classification of existing techniques. Hence, the main aim of this paper is to identify open challenges associated with energy efficient resource allocation. In this regard, the study, first, outlines the problem and existing hardware and software-based techniques available for this purpose. Furthermore, available techniques already presented in the literature are summarized based on the energy-efficient research dimension taxonomy. The advantages and disadvantages of the existing techniques are comprehensively analyzed against the proposed research dimension taxonomy namely: resource adaption policy, objective function, allocation method, allocation operation, and interoperability.

  18. Resource allocation in shared spectrum access communications for operators with diverse service requirements

    NASA Astrophysics Data System (ADS)

    Kibria, Mirza Golam; Villardi, Gabriel Porto; Ishizu, Kentaro; Kojima, Fumihide; Yano, Hiroyuki

    2016-12-01

    In this paper, we study inter-operator spectrum sharing and intra-operator resource allocation in shared spectrum access communication systems and propose efficient dynamic solutions to address both inter-operator and intra-operator resource allocation optimization problems. For inter-operator spectrum sharing, we present two competent approaches, namely subcarrier gain-based sharing and fragmentation-based sharing, which carry out fair and flexible allocation of the available shareable spectrum among the operators subject to certain well-defined sharing rules, traffic demands, and channel propagation characteristics. The subcarrier gain-based spectrum sharing scheme has been found to be more efficient in terms of achieved throughput. However, the fragmentation-based sharing is more attractive in terms of computational complexity. For intra-operator resource allocation, we consider the resource allocation problem with users' dissimilar service requirements, where the operator simultaneously supports users with delay-constrained and non-delay-constrained service requirements. This optimization problem is a mixed-integer non-linear programming problem and non-convex, which is computationally very expensive, and the complexity grows exponentially with the number of integer variables. We propose a less complex and efficient suboptimal solution based on exact linearization, linear approximation, and convexification techniques for the non-linear and/or non-convex objective functions and constraints. Extensive simulation performance analysis has been carried out and validates the efficiency of the proposed solution.

  19. A stereo remote sensing feature selection method based on artificial bee colony algorithm

    NASA Astrophysics Data System (ADS)

    Yan, Yiming; Liu, Pigang; Zhang, Ye; Su, Nan; Tian, Shu; Gao, Fengjiao; Shen, Yi

    2014-05-01

    To improve the efficiency of stereo information for remote sensing classification, a stereo remote sensing feature selection method based on the artificial bee colony algorithm is proposed in this paper. Remote sensing stereo information can be described by a digital surface model (DSM) and an optical image, which contain information on the three-dimensional structure and the optical characteristics, respectively. Firstly, the three-dimensional structure characteristic can be analyzed by 3D-Zernike descriptors (3DZD). However, different parameters of 3DZD describe different complexities of the three-dimensional structure, and they need to be optimally selected for the various objects on the ground. Secondly, the features representing the optical characteristics also need to be optimized. If not properly handled, a stereo feature vector composed of 3DZD and image features would contain a lot of redundant information, and the redundant information may not improve the classification accuracy and may even cause adverse effects. To reduce information redundancy while maintaining or improving the classification accuracy, an optimization framework for this stereo feature selection problem is created, and the artificial bee colony algorithm is introduced to solve this optimization problem. Experimental results show that the proposed method can effectively improve both the computational efficiency and the classification accuracy.

  20. A novel profit-allocation strategy for SDN enterprises

    NASA Astrophysics Data System (ADS)

    Hu, Wei; Hou, Ye; Tian, Longwei; Li, Yuan

    2017-01-01

    Aiming to solve the problem that existing profit-allocation strategies for supply and demand network (SDN) enterprises ignore risk factors and generate low satisfaction, a novel profit-allocation model based on cooperative game theory and TOPSIS is proposed. This new model avoids the defects of single-profit-allocation models by introducing risk factors, compromise coefficients, and high negotiation points. By measuring the Euclidean distance between the ideal solution vector and the negative ideal solution vector, the satisfaction problem of every node in the SDN is resolved and disorder in the allocation is avoided. Finally, the rationality and effectiveness of the proposed model are verified using a numerical example.
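
    The TOPSIS step the abstract refers to, ranking each candidate by its Euclidean distances to an ideal and a negative-ideal solution, can be sketched generically as follows; the decision matrix, criterion weights, and benefit/cost flags are illustrative placeholders rather than the paper's data.

        import numpy as np

        def topsis(X, weights, benefit):
            """Closeness of each alternative (row of X) to the ideal solution.
            benefit[j] is True when larger values of criterion j are better."""
            R = X / np.sqrt((X ** 2).sum(axis=0))        # vector normalisation
            V = R * weights                              # weighted normalised matrix
            ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
            anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
            d_pos = np.sqrt(((V - ideal) ** 2).sum(axis=1))
            d_neg = np.sqrt(((V - anti) ** 2).sum(axis=1))
            return d_neg / (d_pos + d_neg)               # larger = closer to ideal

        X = np.array([[0.30, 0.8, 5.0],
                      [0.25, 0.9, 4.0],
                      [0.40, 0.7, 6.0]])
        scores = topsis(X, weights=np.array([0.5, 0.3, 0.2]),
                        benefit=np.array([True, True, False]))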

  1. Optimal allocation model of construction land based on two-level system optimization theory

    NASA Astrophysics Data System (ADS)

    Liu, Min; Liu, Yanfang; Xia, Yuping; Lei, Qihong

    2007-06-01

    The allocation of construction land is an important task in land-use planning. Whether the implementation of planning decisions succeeds usually depends on a reasonable and scientific distribution method. Considering the structure of the land-use planning system and the planning process in China, the task is in essence a multi-level, multi-objective decision problem. In particular, decomposing planned quantities is a two-level system optimization problem: an optimal resource allocation decision between a decision-maker at the upper level and a number of parallel decision-makers at the lower level. According to the characteristics of the decision-making process in a two-level decision-making system, this paper develops an optimal allocation model of construction land based on two-level linear programming. In order to verify the rationality and the validity of our model, the Baoan district of Shenzhen City has been taken as a test case. With the assistance of the allocation model, construction land is allocated to the ten townships of Baoan district. The result obtained from our model is compared to that of the traditional method, and the results show that our model is reasonable and usable. In the end, the paper points out the shortcomings of the model and further research directions.

  2. A survey and taxonomy on energy efficient resource allocation techniques for cloud computing systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hameed, Abdul; Khoshkbarforoushha, Alireza; Ranjan, Rajiv

    In a cloud computing paradigm, energy efficient allocation of different virtualized ICT resources (servers, storage disks, and networks, and the like) is a complex problem due to the presence of heterogeneous application (e.g., content delivery networks, MapReduce, web applications, and the like) workloads having contentious allocation requirements in terms of ICT resource capacities (e.g., network bandwidth, processing speed, response time, etc.). Several recent papers have tried to address the issue of improving energy efficiency in allocating cloud resources to applications with varying degree of success. However, to the best of our knowledge there is no published literature on this subject that clearly articulates the research problem and provides research taxonomy for succinct classification of existing techniques. Hence, the main aim of this paper is to identify open challenges associated with energy efficient resource allocation. In this regard, the study, first, outlines the problem and existing hardware and software-based techniques available for this purpose. Furthermore, available techniques already presented in the literature are summarized based on the energy-efficient research dimension taxonomy. The advantages and disadvantages of the existing techniques are comprehensively analyzed against the proposed research dimension taxonomy namely: resource adaption policy, objective function, allocation method, allocation operation, and interoperability.

  3. Data compression strategies for ptychographic diffraction imaging

    NASA Astrophysics Data System (ADS)

    Loetgering, Lars; Rose, Max; Treffer, David; Vartanyants, Ivan A.; Rosenhahn, Axel; Wilhein, Thomas

    2017-12-01

    Ptychography is a computational imaging method for solving inverse scattering problems. To date, the high amount of redundancy present in ptychographic data sets requires computer memory that is orders of magnitude larger than the retrieved information. Here, we propose and compare data compression strategies that significantly reduce the amount of data required for wavefield inversion. Information metrics are used to measure the amount of data redundancy present in ptychographic data. Experimental results demonstrate the technique to be memory efficient and stable in the presence of systematic errors such as partial coherence and noise.

  4. Design of joint source/channel coders

    NASA Technical Reports Server (NTRS)

    1991-01-01

    The need to transmit large amounts of data over a band limited channel has led to the development of various data compression schemes. Many of these schemes function by attempting to remove redundancy from the data stream. An unwanted side effect of this approach is to make the information transfer process more vulnerable to channel noise. Efforts at protecting against errors involve the reinsertion of redundancy and an increase in bandwidth requirements. The papers presented within this document attempt to deal with these problems from a number of different approaches.

  5. Some dynamic resource allocation problems in wireless networks

    NASA Astrophysics Data System (ADS)

    Berry, Randall

    2001-07-01

    We consider dynamic resource allocation problems that arise in wireless networking. Specifically, transmission scheduling problems are studied in cases where a user can dynamically allocate communication resources such as transmission rate and power based on current channel knowledge as well as traffic variations. We assume that arriving data is stored in a transmission buffer, and investigate the trade-off between average transmission power and average buffer delay. A general characterization of this trade-off is given, and the behavior of this trade-off in the regime of asymptotically large buffer delays is explored. An extension to a more general utility-based quality of service definition is also discussed.

  6. Objective past of a quantum universe: Redundant records of consistent histories

    DOE PAGES

    Riedel, C. Jess; Zurek, Wojciech H.; Zwolak, Michael

    2016-03-21

    Motivated by the advances of quantum Darwinism and recognizing the role played by redundancy in identifying the small subset of quantum states with resilience characteristic of objective classical reality, we explore the implications of redundant records for consistent histories. The consistent histories formalism is a tool for describing sequences of events taking place in an evolving closed quantum system. A set of histories is consistent when one can reason about them using Boolean logic, i.e., when probabilities of sequences of events that define histories are additive. However, the vast majority of the sets of histories that are merely consistent are flagrantly nonclassical in other respects. This embarras de richesses (known as the set selection problem) suggests that one must go beyond consistency to identify how the classical past arises in our quantum universe. The key intuition we follow is that the records of events that define the familiar objective past are inscribed in many distinct systems, e.g., subsystems of the environment, and are accessible locally in space and time to observers. We identify histories that are not just consistent but redundantly consistent using the partial-trace condition introduced by Finkelstein as a bridge between histories and decoherence. The existence of redundant records is a sufficient condition for redundant consistency. It selects, from the multitude of the alternative sets of consistent histories, a small subset endowed with redundant records characteristic of the objective classical past. Furthermore, the information about an objective history of the past is then simultaneously within reach of many, who can independently reconstruct it and arrive at compatible conclusions in the present.

  7. Objective past of a quantum universe: Redundant records of consistent histories

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Riedel, C. Jess; Zurek, Wojciech H.; Zwolak, Michael

    Motivated by the advances of quantum Darwinism and recognizing the role played by redundancy in identifying the small subset of quantum states with resilience characteristic of objective classical reality, we explore the implications of redundant records for consistent histories. The consistent histories formalism is a tool for describing sequences of events taking place in an evolving closed quantum system. A set of histories is consistent when one can reason about them using Boolean logic, i.e., when probabilities of sequences of events that define histories are additive. However, the vast majority of the sets of histories that are merely consistent are flagrantly nonclassical in other respects. This embarras de richesses (known as the set selection problem) suggests that one must go beyond consistency to identify how the classical past arises in our quantum universe. The key intuition we follow is that the records of events that define the familiar objective past are inscribed in many distinct systems, e.g., subsystems of the environment, and are accessible locally in space and time to observers. We identify histories that are not just consistent but redundantly consistent using the partial-trace condition introduced by Finkelstein as a bridge between histories and decoherence. The existence of redundant records is a sufficient condition for redundant consistency. It selects, from the multitude of the alternative sets of consistent histories, a small subset endowed with redundant records characteristic of the objective classical past. Furthermore, the information about an objective history of the past is then simultaneously within reach of many, who can independently reconstruct it and arrive at compatible conclusions in the present.

  8. The Seven Deadly Sins of World University Ranking: A Summary from Several Papers

    ERIC Educational Resources Information Center

    Soh, Kaycheng

    2017-01-01

    World university rankings use the weight-and-sum approach to process data. Although this seems to pass the common sense test, it has statistical problems. In recent years, seven such problems have been uncovered: spurious precision, weight discrepancies, assumed mutual compensation, indicator redundancy, inter-system discrepancy, negligence of…

  9. Trading a Problem-solving Task

    NASA Astrophysics Data System (ADS)

    Matsubara, Shigeo

    This paper focuses on a task allocation problem, especially cases where the task is to find a solution to a search problem or a constraint satisfaction problem. If the search problem is hard to solve, a contractor may fail to find a solution. Here, the more computational resources, such as CPU time, the contractor invests in solving the search problem, the more likely a solution is to be found. This brings about a new problem: a contractee has to find an appropriate quality level of task achievement as well as an efficient allocation of the task among contractors. For example, if the contractee asks the contractor to find a solution with certainty, the payment from the contractee to the contractor may exceed the contractee's benefit from obtaining a solution, which discourages the contractee from trading a task. However, solving this problem is difficult because the contractee cannot ascertain the contractor's problem-solving ability, such as the amount of available resources and knowledge (e.g. algorithms, heuristics), or monitor what amount of resources is actually invested in solving the allocated task. To solve this problem, we propose a task allocation mechanism that is able to choose an appropriate quality level of task achievement, and prove that this mechanism guarantees that each contractor reveals its true information. Moreover, we show by computer simulation that our mechanism can increase the contractee's utility compared with a simple auction mechanism.

  10. Resource allocation for wildland fire suppression planning using a stochastic program

    Treesearch

    Alex Taylor Masarie

    2011-01-01

    Resource allocation for wildland fire suppression problems, referred to here as Fire-S problems, have been studied for over a century. Not only have the many variants of the base Fire-S problem made it such a durable one to study, but advances in suppression technology and our ever-expanding knowledge of and experience with wildland fire behavior have required almost...

  11. Titan probe technology assessment and technology development plan study

    NASA Technical Reports Server (NTRS)

    Castro, A. J.

    1980-01-01

    The need for technology advances to accomplish the Titan probe mission was determined by defining mission conditions and requirements and evaluating the technology impact on the baseline probe configuration. Mission characteristics found to be technology drivers include (1) ten years dormant life in space vacuum; (2) unknown surface conditions, various sample materials, and a surface temperature; and (3) mission constraints of the Saturn Orbiter Dual Probe mission regarding weight allocation. The following areas were identified for further development: surface sample acquisition system; battery powered system; nonmetallic materials; magnetic bubble memory devices, and the landing system. Preentry science, reliability, and weight reduction and redundancy must also be considered.

  12. Space Tug avionics definition study. Volume 1: Executive summary

    NASA Technical Reports Server (NTRS)

    1975-01-01

    A top down approach was used to identify, compile, and develop avionics functional requirements for all flight and ground operational phases. Such requirements as safety mission critical functions and criteria, minimum redundancy levels, software memory sizing, power for tug and payload, data transfer between payload, tug, shuttle, and ground were established. Those functional requirements that related to avionics support of a particular function were compiled together under that support function heading. This unique approach provided both organizational efficiency and traceability back to the applicable operational phase and event. Each functional requirement was then allocated to the appropriate subsystems and its particular characteristics were quantified.

  13. On redundant variables in Lagrangian mechanics, with applications to perturbation theory and KS regularization. [Kustaanheimo-Stiefel two body problem

    NASA Technical Reports Server (NTRS)

    Broucke, R.; Lass, H.

    1975-01-01

    It is shown that it is possible to make a change of variables in a Lagrangian in such a way that the number of variables is increased. The Euler-Lagrange equations in the redundant variables are obtained in the standard way (without the use of Lagrange multipliers). These equations are not independent but they are all valid and consistent. In some cases they are simpler than if the minimum number of variables are used. The redundant variables are supposed to be related to each other by several constraints (not necessarily holonomic), but these constraints are not used in the derivation of the equations of motion. The method is illustrated with the well known Kustaanheimo-Stiefel regularization. Some interesting applications to perturbation theory are also described.
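
    Schematically, and in generic notation rather than the report's own, the construction is: re-express the Lagrangian through a larger set of variables and write the ordinary Euler-Lagrange equations in those variables; the resulting equations are dependent but all valid.

        % Redundant change of variables: q = f(u), with dim u > dim q.
        \tilde{L}(u,\dot{u},t) = L\!\left(f(u),\,\frac{\partial f}{\partial u}\,\dot{u},\,t\right),
        \qquad
        \frac{d}{dt}\,\frac{\partial \tilde{L}}{\partial \dot{u}_k}
          - \frac{\partial \tilde{L}}{\partial u_k} = 0,
        \qquad k = 1,\dots,\dim u .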

  14. Nonredundant sparse feature extraction using autoencoders with receptive fields clustering.

    PubMed

    Ayinde, Babajide O; Zurada, Jacek M

    2017-09-01

    This paper proposes new techniques for data representation in the context of deep learning using agglomerative clustering. Existing autoencoder-based data representation techniques tend to produce a number of encoding and decoding receptive fields of layered autoencoders that are duplicative, thereby leading to extraction of similar features, thus resulting in filtering redundancy. We propose a way to address this problem and show that such redundancy can be eliminated. This yields smaller networks and produces unique receptive fields that extract distinct features. It is also shown that autoencoders with nonnegativity constraints on weights are capable of extracting fewer redundant features than conventional sparse autoencoders. The concept is illustrated using conventional sparse autoencoder and nonnegativity-constrained autoencoders with MNIST digits recognition, NORB normalized-uniform object data and Yale face dataset. Copyright © 2017 Elsevier Ltd. All rights reserved.

  15. Comprehensive reliability allocation method for CNC lathes based on cubic transformed functions of failure mode and effects analysis

    NASA Astrophysics Data System (ADS)

    Yang, Zhou; Zhu, Yunpeng; Ren, Hongrui; Zhang, Yimin

    2015-03-01

    Reliability allocation for computerized numerical controlled (CNC) lathes is very important in industry. Traditional allocation methods only focus on high-failure-rate components rather than moderate-failure-rate components, which is not applicable in some conditions. Aiming at solving the problem of reliability allocation for CNC lathes, a comprehensive reliability allocation method based on cubic transformed functions of failure mode and effects analysis (FMEA) is presented. Firstly, conventional reliability allocation methods are introduced. Then the limitations of directly combining the comprehensive allocation method with the exponential transformed FMEA method are investigated. Subsequently, a cubic transformed function is established in order to overcome these limitations. Properties of the new transformed function are discussed by considering the failure severity and the failure occurrence. Designers can choose appropriate transform amplitudes according to their requirements. Finally, a CNC lathe and a spindle system are used as an example to verify the new allocation method. Seven criteria are considered to compare the results of the new method with traditional methods. The allocation results indicate that the new method is more flexible than traditional methods. By employing the new cubic transformed function, the method covers a wider range of problems in CNC reliability allocation without losing the advantages of traditional methods.

  16. Scheduling language and algorithm development study. Volume 1, phase 2: Design considerations for a scheduling and resource allocation system

    NASA Technical Reports Server (NTRS)

    Morrell, R. A.; Odoherty, R. J.; Ramsey, H. R.; Reynolds, C. C.; Willoughby, J. K.; Working, R. D.

    1975-01-01

    Data and analyses related to a variety of algorithms for solving typical large-scale scheduling and resource allocation problems are presented. The capabilities and deficiencies of various alternative problem solving strategies are discussed from the viewpoint of computer system design.

  17. A heuristic approach to handle capacitated facility location problem evaluated using clustering internal evaluation

    NASA Astrophysics Data System (ADS)

    Sutanto, G. R.; Kim, S.; Kim, D.; Sutanto, H.

    2018-03-01

    One of the problems in dealing with the capacitated facility location problem (CFLP) arises because of the mismatch between the capacities of facilities and the number of customers that need to be served. A facility with small capacity may leave some customers uncovered. These customers need to be re-allocated to another facility that still has available capacity. Therefore, an approach is proposed to handle CFLP by using the k-means clustering algorithm to handle customer allocation; whether customer re-allocation is needed is then decided by the overall average distance between customers and the facilities. This new approach is benchmarked against the existing approach by Liao and Guo, which also uses the k-means clustering algorithm as the base idea to decide the facility locations and customer allocation. Both approaches are benchmarked using three clustering internal evaluation methods based on connectedness, compactness, and separation factors.
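
    A minimal sketch of the capacity-driven re-allocation idea (not the exact procedure of either approach compared in the paper): assign each customer to its nearest facility, then push overflow customers from over-capacity facilities to the nearest facility that still has room. Coordinates and capacities are invented.

        import numpy as np

        def assign_with_reallocation(customers, facilities, capacities):
            """Nearest-facility assignment followed by greedy re-allocation of overflow."""
            m = len(facilities)
            dist = np.linalg.norm(customers[:, None, :] - facilities[None, :, :], axis=2)
            assign = dist.argmin(axis=1)
            load = np.bincount(assign, minlength=m)
            for j in range(m):
                while load[j] > capacities[j]:
                    members = np.where(assign == j)[0]
                    worst = members[dist[members, j].argmax()]   # farthest customer of facility j
                    moved = False
                    for k in np.argsort(dist[worst]):            # nearest alternatives first
                        if k != j and load[k] < capacities[k]:
                            assign[worst], load[j], load[k] = k, load[j] - 1, load[k] + 1
                            moved = True
                            break
                    if not moved:                                # no spare capacity anywhere
                        break
            return assign

        customers = np.random.default_rng(1).uniform(0, 10, size=(12, 2))
        facilities = np.array([[2.0, 2.0], [8.0, 8.0]])
        print(assign_with_reallocation(customers, facilities, capacities=np.array([7, 7])))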

  18. Cross-layer Joint Relay Selection and Power Allocation Scheme for Cooperative Relaying System

    NASA Astrophysics Data System (ADS)

    Zhi, Hui; He, Mengmeng; Wang, Feiyue; Huang, Ziju

    2018-03-01

    A novel cross-layer joint relay selection and power allocation (CL-JRSPA) scheme spanning the physical layer and the data-link layer is proposed for cooperative relaying systems in this paper. Our goal is to find the optimal relay selection and power allocation scheme that maximizes the system achievable rate while satisfying a total transmit power constraint in the physical layer and a statistical delay quality-of-service (QoS) demand in the data-link layer. Using the concept of effective capacity (EC), the goal can be formulated as an optimal joint relay selection and power allocation (JRSPA) problem that maximizes the EC subject to the total transmit power limitation. We first solve the optimal power allocation (PA) problem with a Lagrange multiplier approach, and then solve the optimal relay selection (RS) problem. Simulation results demonstrate that the CL-JRSPA scheme achieves larger EC than other schemes while satisfying the delay QoS demand. In addition, the proposed CL-JRSPA scheme achieves the maximal EC when the relay is located approximately halfway between source and destination, and the EC becomes smaller as the QoS exponent becomes larger.

  19. Active and Passive Hydrologic Tomographic Surveys:A Revolution in Hydrology (Invited)

    NASA Astrophysics Data System (ADS)

    Yeh, T. J.

    2013-12-01

    Mathematical forward or inverse problems of flow through geological media always have unique solutions if the necessary conditions are given. Unique mathematical solutions to forward or inverse modeling of field problems are, however, always uncertain (an infinite number of possibilities) for many reasons. These include non-representativeness of the governing equations, inaccurate necessary conditions, multi-scale heterogeneity, scale discrepancies between observation and model, noise, and others. Conditional stochastic approaches, which derive the unbiased solution and quantify the solution uncertainty, are therefore most appropriate for forward and inverse modeling of hydrological processes. Conditioning on non-redundant data sets reduces uncertainty. In this presentation, we explain non-redundant data sets in cross-hole aquifer tests, and demonstrate that active hydraulic tomographic survey (using man-made excitations) is a cost-effective approach to collect the same type of, but non-redundant, data sets for reducing uncertainty in the inverse modeling. We subsequently show that including flux measurements (a piece of non-redundant data) collected in the same well setup as in hydraulic tomography improves the estimated hydraulic conductivity field. We finally conclude with examples and propositions regarding how to collect and analyze data intelligently by exploiting natural recurrent events (river stage fluctuations, earthquakes, lightning, etc.) as energy sources for basin-scale passive tomographic surveys. The development of information fusion technologies that integrate traditional point measurements and active/passive hydrogeophysical tomographic surveys, as well as advances in sensor, computing, and information technologies, may ultimately advance our capability of characterizing groundwater basins to achieve resolution far beyond the feat of current science and technology.

  20. Multi-Agent Coordination Techniques for Naval Tactical Combat Resources Management

    DTIC Science & Technology

    2008-07-01

    This report addresses naval tactical combat resource coordination and cooperation problems; the combat resource allocation planning problem is treated in the companion report [2]. The report focuses on the resource coordination problem, while allocation algorithms are discussed in the companion report [2].

  1. Resource Allocation in Healthcare: Implications of Models of Medicine as a Profession

    PubMed Central

    Kluge, Eike-Henner W.

    2007-01-01

    For decades, the problem of how to allocate healthcare resources in a just and equitable fashion has been the subject of concerted discussion and analysis, yet the issue has stubbornly resisted resolution. This article suggests that a major reason for this is that the discussion has focused exclusively on the nature and status of the material resources, and that the nature and role of the medical profession have been entirely ignored. Because physicians are gatekeepers to healthcare resources, their role in allocation is central from a process perspective. This article identifies 3 distinct interpretations of the nature of medicine, shows how each mandates a different method of allocation, and argues that unless an appropriate model of medicine is developed that acknowledges the valid points contained in each of the 3 approaches, the allocation problem will remain unsolvable. PMID:17435657

  2. A mathematical modeling approach to resource allocation for railroad-highway crossing safety upgrades.

    PubMed

    Konur, Dinçer; Golias, Mihalis M; Darks, Brandon

    2013-03-01

    State Departments of Transportation (S-DOTs) periodically allocate budgets for safety upgrades at railroad-highway crossings. Efficient resource allocation is crucial for reducing accidents at railroad-highway crossings and increasing railroad as well as highway transportation safety. While S-DOTs are not restricted to a specific method, sorting-type procedures are recommended by the Federal Railroad Administration (FRA) of the United States Department of Transportation for this resource allocation problem. In this study, a generic mathematical model is proposed for the resource allocation problem for railroad-highway crossing safety upgrades. The proposed approach is compared to sorting-based methods for safety upgrades of public at-grade railroad-highway crossings in Tennessee. The comparison shows that the proposed mathematical modeling approach is more efficient than sorting methods in reducing accidents and severity. Copyright © 2012 Elsevier Ltd. All rights reserved.

  3. System Engineering for J-2X Development: The Simpler, the Better

    NASA Technical Reports Server (NTRS)

    Kelly, William M.; Greasley, Paul; Greene, William D.; Ackerman, Peter

    2008-01-01

    The Ares I and Ares V vehicles will utilize the J-2X rocket engine, developed for NASA by the Pratt and Whitney Rocketdyne Company (PWR), as the upper stage engine (USE). The J-2X is an improved, higher-power version of the original J-2 engine used for Apollo. System Engineering (SE) facilitates direct and open discussions of issues and problems. This simple idea is often overlooked in large, complex engineering development programs. Definition and distribution of requirements from the engine level to the component level is controlled by Allocation Reports, which break down numerical design objectives (weight, reliability, etc.) into quantitative goals for each component area. Linked databases of design and verification requirements help eliminate redundancy and potential mistakes inherent in separated systems. Another tool, the Architecture Design Description (ADD), is used to control the J-2X system architecture and effectively communicate configuration changes to those involved in the design process. But the proof of an effective process is in successful program accomplishment. SE is the methodology being used to meet the challenge of completing J-2X engine certification 2 years ahead of any engine program ever developed at PWR. This paper describes the simple, better SE tools and techniques used to achieve this success.

  4. Fault tolerance issues in nanoelectronics

    NASA Astrophysics Data System (ADS)

    Spagocci, S. M.

    The astonishing success story of microelectronics cannot go on indefinitely. In fact, once devices reach the few-atom scale (nanoelectronics), transient quantum effects are expected to impair their behaviour. Fault tolerant techniques will then be required. The aim of this thesis is to investigate the problem of transient errors in nanoelectronic devices. Transient error rates for a selection of nanoelectronic gates, based upon quantum cellular automata and single electron devices, in which the electrostatic interaction between electrons is used to create Boolean circuits, are estimated. On the basis of such results, various fault tolerant solutions are proposed, for both logic and memory nanochips. As for logic chips, traditional techniques are found to be unsuitable. A new technique, in which the voting approach of triple modular redundancy (TMR) is extended by cascading TMR units composed of nanogate clusters, is proposed and generalised to other voting approaches. For memory chips, an error correcting code approach is found to be suitable. Various codes are considered and a lookup table approach is proposed for encoding and decoding. We are then able to give estimations for the redundancy level to be provided on nanochips, so as to make their mean time between failures acceptable. It is found that, for logic chips, space redundancies up to a few tens are required, if mean times between failures have to be of the order of a few years. Space redundancy can also be traded for time redundancy. As for memory chips, mean times between failures of the order of a few years are found to imply both space and time redundancies of the order of ten.
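
    The TMR idea referred to above lends itself to a quick reliability calculation. The sketch below is a hedged illustration rather than the thesis' cascaded scheme: it compares a plain chain of modules with one where each module is replicated by classic TMR with a perfect voter; the gate reliability, gates per module, and chain length are assumed figures.

```python
# Reliability arithmetic for triple modular redundancy (TMR), assuming
# independent failures and a perfect voter. All numbers are illustrative.
def module_reliability(gate_rel: float, gates_per_module: int) -> float:
    return gate_rel ** gates_per_module           # every gate in a module must work

def tmr(r: float) -> float:
    return 3 * r**2 - 2 * r**3                    # at least 2 of 3 replicas correct

def chain(r_stage: float, n_stages: int) -> float:
    return r_stage ** n_stages                    # stages in series

r_mod = module_reliability(0.9999, 100)
print("plain chain of 1000 modules :", chain(r_mod, 1000))
print("TMR-protected chain of 1000 :", chain(tmr(r_mod), 1000))
```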

  5. Multi-robot task allocation based on two dimensional artificial fish swarm algorithm

    NASA Astrophysics Data System (ADS)

    Zheng, Taixiong; Li, Xueqin; Yang, Liangyi

    2007-12-01

    The problem of task allocation for multiple robots is to allocate relatively more tasks to relatively fewer robots so as to minimize the processing time of these tasks. In order to obtain an optimal multi-robot task allocation scheme, a two-dimensional artificial fish swarm algorithm based approach is proposed in this paper. In this approach, the normal artificial fish is extended to a two-dimensional artificial fish, in which each vector of the primary artificial fish is extended to an m-dimensional vector. Thus, each vector can express a group of tasks. By redefining the distance between artificial fish and the center of artificial fish, the behavior of the two-dimensional fish is designed and a task allocation algorithm based on the two-dimensional artificial fish swarm algorithm is put forward. Finally, the proposed algorithm is applied to the problem of multi-robot task allocation and compared with GA- and SA-based algorithms. Simulation and comparison results show that the proposed algorithm is effective.

  6. Resource allocation using ANN in LTE

    NASA Astrophysics Data System (ADS)

    Yigit, Tuncay; Ersoy, Mevlut

    2017-07-01

    LTE is the 4th generation wireless network technology, which provides flexible bandwidth, higher data speeds and lower delay. Difficulties may be experienced as the number of users in LTE increases. The objective of this study is to provide a faster solution to the resource allocation problems which might arise upon an increase in the number of users. A fast and effective solution has been obtained by making use of an Artificial Neural Network. As a result, fast artificial intelligence methods may be used in resource allocation problems during operation.

  7. Design of fuel cell powered data centers for sufficient reliability and availability

    NASA Astrophysics Data System (ADS)

    Ritchie, Alexa J.; Brouwer, Jacob

    2018-04-01

    It is challenging to design a sufficiently reliable fuel cell electrical system for use in data centers, which require 99.9999% uptime. Such a system could lower emissions and increase data center efficiency, but the reliability and availability of such a system must be analyzed and understood. Currently, extensive backup equipment is used to ensure electricity availability. The proposed design alternative uses multiple fuel cell systems each supporting a small number of servers to eliminate backup power equipment provided the fuel cell design has sufficient reliability and availability. Potential system designs are explored for the entire data center and for individual fuel cells. Reliability block diagram analysis of the fuel cell systems was accomplished to understand the reliability of the systems without repair or redundant technologies. From this analysis, it was apparent that redundant components would be necessary. A program was written in MATLAB to show that the desired system reliability could be achieved by a combination of parallel components, regardless of the number of additional components needed. Having shown that the desired reliability was achievable through some combination of components, a dynamic programming analysis was undertaken to assess the ideal allocation of parallel components.
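
    The "some combination of parallel components reaches the target" argument above can be made concrete with a few lines of arithmetic. The sketch below is a greedy stand-in for the dynamic-programming allocation mentioned in the abstract, written in Python rather than the authors' MATLAB, with invented subsystem reliabilities and availability target: it keeps adding the single spare that most improves system reliability until the series-of-parallel system meets the target.

```python
# Greedy allocation of parallel spares to series subsystems (illustrative only).
def subsystem_rel(r: float, n: int) -> float:
    return 1.0 - (1.0 - r) ** n                   # n identical units in parallel

def allocate_spares(base_rels, target):
    counts = [1] * len(base_rels)
    def system():
        p = 1.0
        for r, n in zip(base_rels, counts):
            p *= subsystem_rel(r, n)
        return p
    while system() < target:
        gains = []
        for i in range(len(counts)):              # try one extra spare in each slot
            counts[i] += 1
            gains.append((system(), i))
            counts[i] -= 1
        _, best_i = max(gains)                    # keep the most beneficial spare
        counts[best_i] += 1
    return counts, system()

# e.g. three series subsystems and a "six nines" availability target
print(allocate_spares([0.98, 0.95, 0.99], target=0.999999))
```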

  8. Ultrareliable fault-tolerant control systems

    NASA Technical Reports Server (NTRS)

    Webster, L. D.; Slykhouse, R. A.; Booth, L. A., Jr.; Carson, T. M.; Davis, G. J.; Howard, J. C.

    1984-01-01

    It is demonstrated that fault-tolerant computer systems, such as those on the Shuttles, based on redundant, independent operation are a viable alternative in fault-tolerant system designs. The ultrareliable fault-tolerant control system (UFTCS) was developed and tested in laboratory simulations of a UH-1H helicopter. UFTCS includes asymptotically stable independent control elements in a parallel, cross-linked system environment. Static redundancy provides the fault tolerance. Polling is performed among the computers, with the results allowing for time-delay channel variations within tight bounds. When compared with the laboratory and actual flight data for the helicopter, the probability of a fault in the first 10 hr of flight, given quintuple computer redundancy, was found to be 1 in 290 billion. Two weeks of untended Space Station operations would experience a fault probability of 1 in 24 million. Techniques for avoiding channel divergence problems are identified.

  9. Motion control of musculoskeletal systems with redundancy.

    PubMed

    Park, Hyunjoo; Durand, Dominique M

    2008-12-01

    Motion control of musculoskeletal systems for functional electrical stimulation (FES) is a challenging problem due to the inherent complexity of the systems. These include being highly nonlinear, strongly coupled, time-varying, time-delayed, and redundant. The redundancy in particular makes it difficult to find an inverse model of the system for control purposes. We have developed a control system for multiple input multiple output (MIMO) redundant musculoskeletal systems with little prior information. The proposed method separates the steady-state properties from the dynamic properties. The dynamic control uses a steady-state inverse model and is implemented with both a PID controller for disturbance rejection and an artificial neural network (ANN) feedforward controller for fast trajectory tracking. A mechanism to control the sum of the muscle excitation levels is also included. To test the performance of the proposed control system, a two degree of freedom ankle-subtalar joint model with eight muscles was used. The simulation results show that separation of steady-state and dynamic control allow small output tracking errors for different reference trajectories such as pseudo-step, sinusoidal and filtered random signals. The proposed control method also demonstrated robustness against system parameter and controller parameter variations. A possible application of this control algorithm is FES control using multiple contact cuff electrodes where mathematical modeling is not feasible and the redundancy makes the control of dynamic movement difficult.

  10. Repetitive Bibliographical Information in Relational Databases.

    ERIC Educational Resources Information Center

    Brooks, Terrence A.

    1988-01-01

    Proposes a solution to the problem of loading repetitive bibliographic information in a microcomputer-based relational database management system. The alternative design described is based on a representational redundancy design and normalization theory. (12 references) (Author/CLB)

  11. Analytical redundancy and the design of robust failure detection systems

    NASA Technical Reports Server (NTRS)

    Chow, E. Y.; Willsky, A. S.

    1984-01-01

    The Failure Detection and Identification (FDI) process is viewed as consisting of two stages: residual generation and decision making. It is argued that a robust FDI system can be achieved by designing a robust residual generation process. Analytical redundancy, the basis for residual generation, is characterized in terms of a parity space. Using the concept of parity relations, residuals can be generated in a number of ways and the design of a robust residual generation process can be formulated as a minimax optimization problem. An example is included to illustrate this design methodology. Previously announced in STAR as N83-20653
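
    To make the parity-space notion concrete, the toy sketch below (a static illustration under an assumed sensor geometry, not the paper's robust minimax design) builds parity vectors v with v·C = 0 for a redundant sensor set y = Cx, so the residual v·y stays near zero for healthy measurements regardless of the state and becomes nonzero under a sensor fault.

```python
# Static parity-space residual: v.T @ C = 0, residual r = v.T @ y.
import numpy as np

C = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0],
              [1.0, -1.0]])          # 4 redundant sensors measuring 2 states

# parity space = left null space of C (null space of C.T), from the SVD
_, _, Vt = np.linalg.svd(C.T)
V = Vt[2:]                            # two rows spanning the parity space

x = np.array([0.3, -1.2])             # arbitrary true state
y_ok = C @ x
y_fault = y_ok.copy()
y_fault[2] += 0.5                      # bias fault on sensor 3

print("healthy residual:", np.round(V @ y_ok, 6))
print("faulty residual :", V @ y_fault)
```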

  12. Distributed control topologies for deep space formation flying spacecraft

    NASA Technical Reports Server (NTRS)

    Hadaegh, F. Y.; Smith, R. S.

    2002-01-01

    A formation of satellites flying in deep space can be specified in terms of the relative satellite positions and absolute satellite orientations. The redundancy in the relative position specification generates a family of control topologies with equivalent stability and reference tracking performance, one of which can be implemented without requiring communication between the spacecraft. A relative position design formulation is inherently unobservable, and a methodology for circumventing this problem is presented. Additional redundancy in the control actuation space can be exploited for feed-forward control of the formation centroid's location in space, or for minimization of total fuel consumption.

  13. Towards an Efficient Flooding Scheme Exploiting 2-Hop Backward Information in MANETs

    NASA Astrophysics Data System (ADS)

    Le, Trong Duc; Choo, Hyunseung

    Flooding is an indispensable operation for providing control or routing functionalities to mobile ad hoc networks (MANETs). Previously, many flooding schemes have been studied with the intention of curtailing the problems of severe redundancies, contention, and collisions in traditional implementations. A recent approach with relatively high efficiency is 1HI by Liu et al., which uses only 1-hop neighbor information. The scheme achieves local optimality in terms of the number of retransmission nodes with time complexity Θ(n log n), where n is the number of neighbors of a node; however, this method tends to make many redundant transmissions. In this paper, we present a novel flooding algorithm, 2HBI (2-hop backward information), that efficiently reduces the number of retransmission nodes and solves the broadcast storm problem in ad hoc networks using our proposed concept, “2-hop backward information.” The most significant feature of the proposed algorithm is that it does not require any extra communication overhead other than the exchange of 1-hop HELLO messages but maintains high deliverability. Comprehensive computer simulations show that the proposed scheme significantly reduces redundant transmissions in 1HI and in pure flooding, up to 38% and 91%, respectively; accordingly it alleviates contention and collisions in networks.

  14. A Framework for Mining Actionable Navigation Patterns from In-Store RFID Datasets via Indoor Mapping

    PubMed Central

    Shen, Bin; Zheng, Qiuhua; Li, Xingsen; Xu, Libo

    2015-01-01

    With the quick development of RFID technology and the decreasing prices of RFID devices, RFID is becoming widely used in various intelligent services. Especially in the retail application domain, RFID is increasingly adopted to capture the shopping tracks and behavior of in-store customers. To further enhance the potential of this promising application, in this paper, we propose a unified framework for RFID-based path analytics, which uses both in-store shopping paths and RFID-based purchasing data to mine actionable navigation patterns. Four modules of this framework are discussed, which are: (1) mapping from the physical space to the cyber space, (2) data preprocessing, (3) pattern mining and (4) knowledge understanding and utilization. In the data preprocessing module, the critical problem of how to capture the mainstream shopping path sequences while wiping out unnecessary redundant and repeated details is addressed in detail. To solve this problem, two types of redundant patterns, i.e., loop repeat pattern and palindrome-contained pattern are recognized and the corresponding processing algorithms are proposed. The experimental results show that the redundant pattern filtering functions are effective and scalable. Overall, this work builds a bridge between indoor positioning and advanced data mining technologies, and provides a feasible way to study customers’ shopping behaviors via multi-source RFID data. PMID:25751076

  15. Delay test generation for synchronous sequential circuits

    NASA Astrophysics Data System (ADS)

    Devadas, Srinivas

    1989-05-01

    We address the problem of generating tests for delay faults in non-scan synchronous sequential circuits. Delay test generation for sequential circuits is a considerably more difficult problem than delay testing of combinational circuits and has received much less attention. In this paper, we present a method for generating test sequences to detect delay faults in sequential circuits using the stuck-at fault sequential test generator STALLION. The method is complete in that it will generate a delay test sequence for a targeted fault given sufficient CPU time, if such a sequence exists. We term faults for which no delay test sequence exists, under our test methodology, sequentially delay redundant. We describe means of eliminating sequential delay redundancies in logic circuits. We present a partial-scan methodology for enhancing the testability of difficult-to-test or untestable sequential circuits, wherein a small number of flip-flops are selected and made controllable/observable. The selection process guarantees the elimination of all sequential delay redundancies. We show that an intimate relationship exists between state assignment and delay testability of a sequential machine. We describe a state assignment algorithm for the synthesis of sequential machines with maximal delay fault testability. Preliminary experimental results using the test generation, partial-scan and synthesis algorithms are presented.

  16. Adaptive Control Allocation for Fault Tolerant Overactuated Autonomous Vehicles

    DTIC Science & Technology

    2007-11-01

    Casavola, A.; Garone, E. (2007). Adaptive Control Allocation for Fault Tolerant Overactuated Autonomous Vehicles. Control allocation problem (CAP): given a virtual input v(t

  17. Econophysics of a ranked demand and supply resource allocation problem

    NASA Astrophysics Data System (ADS)

    Priel, Avner; Tamir, Boaz

    2018-01-01

    We present a two-sided resource allocation problem, between demands and supplies, where both parties are ranked; for example, in Big Data problems where a set of different computational tasks is divided between a set of computers, each with its own resources, or between employees and employers where both parties are ranked, the employees by their fitness and the employers by their package benefits. The allocation process can be viewed as a repeated game where in each iteration the strategy is decided by a meta-rule, based on the ranks of both parties and the results of the previous games. We show the existence of a phase transition between an absorbing state, where all demands are satisfied, and an active one where part of the demands are always left unsatisfied. The phase transition is governed by the ratio between supplies and demands. In a job allocation problem we find a positive correlation between the rank of the workers and the rank of the factories; higher-ranked workers are usually allocated to higher-ranked factories. These all suggest global emergent properties stemming from local variables. To demonstrate the global versus local relations, we introduce a local inertial force that increases the rank of employees in proportion to their persistence time in the same factory. We show that such a local force induces nontrivial global effects, mostly to the benefit of the lower-ranked employees.

  18. Hello! Is Anybody Out There?

    ERIC Educational Resources Information Center

    Caprio, M. W.

    1997-01-01

    Discusses problems such as redundancy caused by isolation of community colleges. Suggested means for overcoming institutional isolation and improving communication among community colleges include hosting conferences, accessing the Internet, instituting faculty exchange programs, offering faculty membership in professional societies, allowing…

  19. Reliability analysis of multicellular system architectures for low-cost satellites

    NASA Astrophysics Data System (ADS)

    Erlank, A. O.; Bridges, C. P.

    2018-06-01

    Multicellular system architectures are proposed as a solution to the problem of low reliability currently seen amongst small, low cost satellites. In a multicellular architecture, a set of independent k-out-of-n systems mimic the cells of a biological organism. In order to be beneficial, a multicellular architecture must provide more reliability per unit of overhead than traditional forms of redundancy. The overheads include power consumption, volume and mass. This paper describes the derivation of an analytical model for predicting a multicellular system's lifetime. The performance of such architectures is compared against that of several common forms of redundancy and proven to be beneficial under certain circumstances. In addition, the problem of peripheral interfaces and cross-strapping is investigated using a purpose-developed, multicellular simulation environment. Finally, two case studies are presented based on a prototype cell implementation, which demonstrate the feasibility of the proposed architecture.
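
    Since the architecture above rests on independent k-out-of-n cell groups, the small sketch below computes the textbook k-out-of-n reliability from a binomial sum; the cell count, threshold, and per-cell reliability are illustrative numbers, not values from the paper's analytical lifetime model.

```python
# Reliability of a k-out-of-n group of independent, identical cells.
from math import comb

def k_out_of_n(k: int, n: int, r: float) -> float:
    """Probability that at least k of n cells (each with reliability r) work."""
    return sum(comb(n, i) * r**i * (1 - r)**(n - i) for i in range(k, n + 1))

# e.g. a 3-out-of-5 multicellular subsystem built from modest 0.9-reliability cells
print(k_out_of_n(3, 5, 0.9))   # about 0.9914
```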

  20. Winnowing sequences from a database search.

    PubMed

    Berman, P; Zhang, Z; Wolf, Y I; Koonin, E V; Miller, W

    2000-01-01

    In database searches for sequence similarity, matches to a distinct sequence region (e.g., protein domain) are frequently obscured by numerous matches to another region of the same sequence. In order to cope with this problem, algorithms are developed to discard redundant matches. One model for this problem begins with a list of intervals, each with an associated score; each interval gives the range of positions in the query sequence that align to a database sequence, and the score is that of the alignment. If interval I is contained in interval J, and I's score is less than J's, then I is said to be dominated by J. The problem is then to identify each interval that is dominated by at least K other intervals, where K is a given level of "tolerable redundancy." An algorithm is developed to solve the problem in O(N log N) time and O(N*) space, where N is the number of intervals and N* is a precisely defined value that never exceeds N and is frequently much smaller. This criterion for discarding database hits has been implemented in the Blast program, as illustrated herein with examples. Several variations and extensions of this approach are also described.
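
    The dominance rule described above is easy to state in code. The sketch below is a straightforward quadratic-time illustration (the paper's own algorithm runs in O(N log N)) with made-up intervals and tolerance K.

```python
# Winnowing database hits: keep intervals dominated by fewer than K others.
def winnow(intervals, K):
    """intervals: list of (start, end, score). Interval I is dominated by J if
    J contains I and J's score is higher. Return intervals NOT dominated by >= K others."""
    kept = []
    for i, (s1, e1, sc1) in enumerate(intervals):
        dominators = sum(
            1
            for j, (s2, e2, sc2) in enumerate(intervals)
            if j != i and s2 <= s1 and e1 <= e2 and sc1 < sc2
        )
        if dominators < K:
            kept.append((s1, e1, sc1))
    return kept

hits = [(10, 90, 50), (20, 80, 30), (25, 70, 20), (100, 150, 40)]
print(winnow(hits, K=1))   # drops (20, 80, 30) and (25, 70, 20); keeps the rest
```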

  1. Distortion outage minimization in Nakagami fading using limited feedback

    NASA Astrophysics Data System (ADS)

    Wang, Chih-Hong; Dey, Subhrakanti

    2011-12-01

    We focus on a decentralized estimation problem via a clustered wireless sensor network measuring a random Gaussian source where the clusterheads amplify and forward their received signals (from the intra-cluster sensors) over orthogonal independent stationary Nakagami fading channels to a remote fusion center that reconstructs an estimate of the original source. The objective of this paper is to design clusterhead transmit power allocation policies to minimize the distortion outage probability at the fusion center, subject to an expected sum transmit power constraint. In the case when full channel state information (CSI) is available at the clusterhead transmitters, the optimization problem can be shown to be convex and is solved exactly. When only rate-limited channel feedback is available, we design a number of computationally efficient sub-optimal power allocation algorithms to solve the associated non-convex optimization problem. We also derive an approximation for the diversity order of the distortion outage probability in the limit when the average transmission power goes to infinity. Numerical results illustrate that the sub-optimal power allocation algorithms perform very well and can close the outage probability gap between the constant power allocation (no CSI) and full CSI-based optimal power allocation with only 3-4 bits of channel feedback.

  2. Analysis and Research on the Optimal Allocation of Regional Water Resources

    NASA Astrophysics Data System (ADS)

    rui-chao, Xi; yu-jie, Gu

    2018-06-01

    Starting from the basic concept of optimal allocation of water resources, and taking the allocation of water resources in Tianjin as an example, the present situation of water resources in Tianjin is analyzed, and a multi-objective optimal allocation model of water resources is used to optimize the allocation of water resources. We use LINGO to solve the model, obtain the optimal allocation plan that meets economic and social benefit objectives, and put forward relevant policies and regulations, so as to provide a theoretical basis for alleviating and solving the problem of water shortage.

  3. Nash Social Welfare in Multiagent Resource Allocation

    NASA Astrophysics Data System (ADS)

    Ramezani, Sara; Endriss, Ulle

    We study different aspects of the multiagent resource allocation problem when the objective is to find an allocation that maximizes Nash social welfare, the product of the utilities of the individual agents. The Nash solution is an important welfare criterion that combines efficiency and fairness considerations. We show that the problem of finding an optimal outcome is NP-hard for a number of different languages for representing agent preferences; we establish new results regarding convergence to Nash-optimal outcomes in a distributed negotiation framework; and we design and test algorithms similar to those applied in combinatorial auctions for computing such an outcome directly.
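
    As a concrete, hedged illustration of the Nash social welfare objective (and of why the NP-hardness result mentioned above bites), the toy sketch below brute-forces all allocations of a few indivisible items to two agents with additive utilities and keeps the one maximizing the product of agent utilities; the utility matrix is invented.

```python
# Brute-force Nash social welfare maximization for a toy allocation instance.
from itertools import product
import math

def max_nash_welfare(utilities):
    """utilities[a][i] = value of item i to agent a (additive preferences)."""
    n_agents, n_items = len(utilities), len(utilities[0])
    best = (-1.0, None)
    for assignment in product(range(n_agents), repeat=n_items):
        totals = [0.0] * n_agents
        for item, agent in enumerate(assignment):
            totals[agent] += utilities[agent][item]
        best = max(best, (math.prod(totals), assignment))
    return best   # (Nash welfare, item -> agent assignment)

print(max_nash_welfare([[6, 1, 3], [2, 5, 4]]))
```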

  4. Joint Transmitter and Receiver Power Allocation under Minimax MSE Criterion with Perfect and Imperfect CSI for MC-CDMA Transmissions

    NASA Astrophysics Data System (ADS)

    Kotchasarn, Chirawat; Saengudomlert, Poompat

    We investigate the problem of joint transmitter and receiver power allocation with the minimax mean square error (MSE) criterion for uplink transmissions in a multi-carrier code division multiple access (MC-CDMA) system. The objective of power allocation is to minimize the maximum MSE among all users each of which has limited transmit power. This problem is a nonlinear optimization problem. Using the Lagrange multiplier method, we derive the Karush-Kuhn-Tucker (KKT) conditions which are necessary for a power allocation to be optimal. Numerical results indicate that, compared to the minimum total MSE criterion, the minimax MSE criterion yields a higher total MSE but provides a fairer treatment across the users. The advantages of the minimax MSE criterion are more evident when we consider the bit error rate (BER) estimates. Numerical results show that the minimax MSE criterion yields a lower maximum BER and a lower average BER. We also observe that, with the minimax MSE criterion, some users do not transmit at full power. For comparison, with the minimum total MSE criterion, all users transmit at full power. In addition, we investigate robust joint transmitter and receiver power allocation where the channel state information (CSI) is not perfect. The CSI error is assumed to be unknown but bounded by a deterministic value. This problem is formulated as a semidefinite programming (SDP) problem with bilinear matrix inequality (BMI) constraints. Numerical results show that, with imperfect CSI, the minimax MSE criterion also outperforms the minimum total MSE criterion in terms of the maximum and average BERs.

  5. Notice of Violation of IEEE Publication PrinciplesJoint Redundant Residue Number Systems and Module Isolation for Mitigating Single Event Multiple Bit Upsets in Datapath

    NASA Astrophysics Data System (ADS)

    Li, Lei; Hu, Jianhao

    2010-12-01

    Notice of Violation of IEEE Publication Principles"Joint Redundant Residue Number Systems and Module Isolation for Mitigating Single Event Multiple Bit Upsets in Datapath"by Lei Li and Jianhao Hu,in the IEEE Transactions on Nuclear Science, vol.57, no.6, Dec. 2010, pp. 3779-3786After careful and considered review of the content and authorship of this paper by a duly constituted expert committee, this paper has been found to be in violation of IEEE's Publication Principles.This paper contains substantial duplication of original text from the paper cited below. The original text was copied without attribution (including appropriate references to the original author(s) and/or paper title) and without permission.Due to the nature of this violation, reasonable effort should be made to remove all past references to this paper, and future references should be made to the following articles:"Multiple Error Detection and Correction Based on Redundant Residue Number Systems"by Vik Tor Goh and M.U. Siddiqi,in the IEEE Transactions on Communications, vol.56, no.3, March 2008, pp.325-330"A Coding Theory Approach to Error Control in Redundant Residue Number Systems. I: Theory and Single Error Correction"by H. Krishna, K-Y. Lin, and J-D. Sun, in the IEEE Transactions on Circuits and Systems II: Analog and Digital Signal Processing, vol.39, no.1, Jan 1992, pp.8-17In this paper, we propose a joint scheme which combines redundant residue number systems (RRNS) with module isolation (MI) for mitigating single event multiple bit upsets (SEMBUs) in datapath. The proposed hardening scheme employs redundant residues to improve the fault tolerance for datapath and module spacings to guarantee that SEMBUs caused by charge sharing do not propagate among the operation channels of different moduli. The features of RRNS, such as independence, parallel and error correction, are exploited to establish the radiation hardening architecture for the datapath in radiation environments. In the proposed scheme, all of the residues can be processed independently, and most of the soft errors in datapath can be corrected with the redundant relationship of the residues at correction module, which is allocated at the end of the datapath. In the back-end implementation, module isolation technique is used to improve the soft error rate performance for RRNS by physically separating the operation channels of different moduli. The case studies show at least an order of magnitude decrease on the soft error rate (SER) as compared to the NonRHBD designs, and demonstrate that RRNS+MI can reduce the SER from 10-12 to 10-17 when the processing steps of datapath are 106. The proposed scheme can even achieve less area and latency overheads than that without radiation hardening, since RRNS can reduce the operational complexity in datapath.

  6. Ant Colony Optimization Algorithm for Centralized Dynamic Channel Allocation in Multi-Cell OFDMA Systems

    NASA Astrophysics Data System (ADS)

    Kim, Hyo-Su; Kim, Dong-Hoi

    The dynamic channel allocation (DCA) scheme in multi-cell systems causes a serious inter-cell interference (ICI) problem for some existing calls when channels for new calls are allocated. Such a problem can be addressed by an advanced centralized DCA design that is able to minimize ICI. Thus, in this paper, a centralized DCA is developed for the downlink of multi-cell orthogonal frequency division multiple access (OFDMA) systems with full spectral reuse. However, in practice, as the search space of channel assignment for a centralized DCA scheme in multi-cell systems grows exponentially with the number of required calls, channels, and cells, it becomes an NP-hard problem and is currently too complicated to find an optimum channel allocation. In this paper, we propose an ant colony optimization (ACO) based DCA scheme using a low-complexity ACO algorithm, a kind of heuristic algorithm, in order to solve the aforementioned problem. Simulation results demonstrate significant performance improvements compared to the existing schemes in terms of the grade of service (GoS) performance and the forced termination probability of existing calls, without degrading the system performance of the average throughput.

  7. Steps Toward Optimal Competitive Scheduling

    NASA Technical Reports Server (NTRS)

    Frank, Jeremy; Crawford, James; Khatib, Lina; Brafman, Ronen

    2006-01-01

    This paper is concerned with the problem of allocating a unit capacity resource to multiple users within a pre-defined time period. The resource is indivisible, so that at most one user can use it at each time instance. However, different users may use it at different times. The users have independent, selfish preferences for when and for how long they are allocated this resource. Thus, they value different resource access durations differently, and they value different time slots differently. We seek an optimal allocation schedule for this resource. This problem arises in many institutional settings where, e.g., different departments, agencies, or personnel compete for a single resource. We are particularly motivated by the problem of scheduling NASA's Deep Space Satellite Network (DSN) among different users within NASA. Access to DSN is needed for transmitting data from various space missions to Earth. Each mission has different needs for DSN time, depending on satellite and planetary orbits. Typically, the DSN is over-subscribed, in that not all missions will be allocated as much time as they want. This leads to various inefficiencies - missions spend much time and resource lobbying for their time, often exaggerating their needs. NASA, on the other hand, would like to make optimal use of this resource, ensuring that the good for NASA is maximized. This raises the thorny problem of how to measure the utility to NASA of each allocation. In the typical case, it is difficult for the central agency, NASA in our case, to assess the value of each interval to each user - this is really only known to the users who understand their needs. Thus, our problem is more precisely formulated as follows: find an allocation schedule for the resource that maximizes the sum of users' preferences, when the preference values are private information of the users. We bypass this problem by making the assumption that one can assign money to customers. This assumption is reasonable; a committee is usually in charge of deciding the priority of each mission competing for access to the DSN within a time period while scheduling. Instead, we can assume that the committee assigns a budget to each mission.

  8. A unified motion planning approach for redundant and non-redundant manipulators with actuator constraints. Ph.D. Thesis Final Report

    NASA Technical Reports Server (NTRS)

    Chung, Ching-Luan

    1990-01-01

    The term trajectory planning has been used to refer to the process of determining the time history of joint trajectory of each joint variable corresponding to a specified trajectory of the end effector. The trajectory planning problem was solved as a purely kinematic problem. The drawback is that there is no guarantee that the actuators can deliver the effort necessary to track the planned trajectory. To overcome this limitation, a motion planning approach which addresses the kinematics, dynamics, and feedback control of a manipulator in a unified framework was developed. Actuator constraints are taken into account explicitly and a priori in the synthesis of the feedback control law. Therefore the result of applying the motion planning approach described is not only the determination of the entire set of joint trajectories but also a complete specification of the feedback control strategy which would yield these joint trajectories without violating actuator constraints. The effectiveness of the unified motion planning approach is demonstrated on two problems which are of practical interest in manipulator robotics.

  9. Using Neural Networks for Sensor Validation

    NASA Technical Reports Server (NTRS)

    Mattern, Duane L.; Jaw, Link C.; Guo, Ten-Huei; Graham, Ronald; McCoy, William

    1998-01-01

    This paper presents the results of applying two different types of neural networks in two different approaches to the sensor validation problem. The first approach uses a functional approximation neural network as part of a nonlinear observer in a model-based approach to analytical redundancy. The second approach uses an auto-associative neural network to perform nonlinear principal component analysis on a set of redundant sensors to provide an estimate for a single failed sensor. The approaches are demonstrated using a nonlinear simulation of a turbofan engine. The fault detection and sensor estimation results are presented and the training of the auto-associative neural network to provide sensor estimates is discussed.

  10. Performance impact of mutation operators of a subpopulation-based genetic algorithm for multi-robot task allocation problems.

    PubMed

    Liu, Chun; Kroll, Andreas

    2016-01-01

    Multi-robot task allocation determines the task sequence and distribution for a group of robots in multi-robot systems; it is a constrained combinatorial optimization problem and is more complex in the case of cooperative tasks because they introduce additional spatial and temporal constraints. To solve multi-robot task allocation problems with cooperative tasks efficiently, a subpopulation-based genetic algorithm, a crossover-free genetic algorithm employing mutation operators and elitism selection in each subpopulation, is developed in this paper. Moreover, the impact of mutation operators (swap, insertion, inversion, displacement, and their various combinations) is analyzed when solving several industrial plant inspection problems. The experimental results show that: (1) the proposed genetic algorithm can obtain better solutions than the tested binary tournament genetic algorithm with partially mapped crossover; (2) inversion mutation performs better than the other tested mutation operators when solving problems without cooperative tasks, and the swap-inversion combination performs better than the other tested mutation operators/combinations when solving problems with cooperative tasks. As it is difficult to produce all desired effects with a single mutation operator, using multiple mutation operators (including both inversion and swap) is suggested when solving similar combinatorial optimization problems.
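
    For readers unfamiliar with the four mutation operators compared above, the sketch below shows minimal versions of swap, insertion, inversion, and displacement acting on a task-sequence chromosome; the surrounding GA machinery (subpopulations, elitism selection, fitness evaluation) is deliberately omitted, and the task sequence is an arbitrary example.

```python
# Minimal permutation mutation operators (in-place) on a task-sequence chromosome.
import random

def swap(seq):
    i, j = random.sample(range(len(seq)), 2)
    seq[i], seq[j] = seq[j], seq[i]                   # exchange two genes

def insertion(seq):
    i, j = random.sample(range(len(seq)), 2)
    seq.insert(j, seq.pop(i))                         # move one gene elsewhere

def inversion(seq):
    i, j = sorted(random.sample(range(len(seq)), 2))
    seq[i:j + 1] = reversed(seq[i:j + 1])             # reverse a segment

def displacement(seq):
    i, j = sorted(random.sample(range(len(seq)), 2))
    block = [seq.pop(i) for _ in range(j - i + 1)]    # cut a segment...
    k = random.randrange(len(seq) + 1)
    seq[k:k] = block                                  # ...and reinsert it elsewhere

random.seed(1)
tasks = list("ABCDEFGH")
for op in (swap, insertion, inversion, displacement):
    s = tasks.copy()
    op(s)
    print(op.__name__, "".join(s))
```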

  11. Design and Implementation of USAF Avionics Integration Support Facilities

    DTIC Science & Technology

    1981-12-01

    Routing indicator (no activity); Allocate Node; allocation of resources: R = allocation rule, Res = resource type number ... problems, and the integration and testing of the ECS. The purpose of this investigation is to establish a standard software development system ... corrections to equipment problems; compensation for equipment degradation; new developments. This approach is intended to centralize essential

  12. Multi-Objective Optimization for Trustworthy Tactical Networks: A Survey and Insights

    DTIC Science & Technology

    2013-06-01

    ... problems: using repeated cooperative games [12], hedonic games [25], and nontransferable utility cooperative games [27]. It should be noted that trust ... examined an optimal task allocation problem in a distributed computing system where program modules need to be allocated to different processors to

  13. Self-Coexistence among IEEE 802.22 Networks: Distributed Allocation of Power and Channel

    PubMed Central

    Sakin, Sayef Azad; Alamri, Atif; Tran, Nguyen H.

    2017-01-01

    Ensuring self-coexistence among IEEE 802.22 networks is a challenging problem owing to opportunistic access of incumbent-free radio resources by users in co-located networks. In this study, we propose a fully-distributed non-cooperative approach to ensure self-coexistence in downlink channels of IEEE 802.22 networks. We formulate the self-coexistence problem as a mixed-integer non-linear optimization problem for maximizing the network data rate, which is an NP-hard one. This work explores a sub-optimal solution by dividing the optimization problem into downlink channel allocation and power assignment sub-problems. Considering fairness, quality of service and minimum interference for customer-premises-equipment, we also develop a greedy algorithm for channel allocation and a non-cooperative game-theoretic framework for near-optimal power allocation. The base stations of networks are treated as players in a game, where they try to increase spectrum utilization by controlling power and reaching a Nash equilibrium point. We further develop a utility function for the game to increase the data rate by minimizing the transmission power and, subsequently, the interference from neighboring networks. A theoretical proof of the uniqueness and existence of the Nash equilibrium has been presented. Performance improvements in terms of data-rate with a degree of fairness compared to a cooperative branch-and-bound-based algorithm and a non-cooperative greedy approach have been shown through simulation studies. PMID:29215591

  14. Self-Coexistence among IEEE 802.22 Networks: Distributed Allocation of Power and Channel.

    PubMed

    Sakin, Sayef Azad; Razzaque, Md Abdur; Hassan, Mohammad Mehedi; Alamri, Atif; Tran, Nguyen H; Fortino, Giancarlo

    2017-12-07

    Ensuring self-coexistence among IEEE 802.22 networks is a challenging problem owing to opportunistic access of incumbent-free radio resources by users in co-located networks. In this study, we propose a fully-distributed non-cooperative approach to ensure self-coexistence in downlink channels of IEEE 802.22 networks. We formulate the self-coexistence problem as a mixed-integer non-linear optimization problem for maximizing the network data rate, which is an NP-hard one. This work explores a sub-optimal solution by dividing the optimization problem into downlink channel allocation and power assignment sub-problems. Considering fairness, quality of service and minimum interference for customer-premises-equipment, we also develop a greedy algorithm for channel allocation and a non-cooperative game-theoretic framework for near-optimal power allocation. The base stations of networks are treated as players in a game, where they try to increase spectrum utilization by controlling power and reaching a Nash equilibrium point. We further develop a utility function for the game to increase the data rate by minimizing the transmission power and, subsequently, the interference from neighboring networks. A theoretical proof of the uniqueness and existence of the Nash equilibrium has been presented. Performance improvements in terms of data-rate with a degree of fairness compared to a cooperative branch-and-bound-based algorithm and a non-cooperative greedy approach have been shown through simulation studies.

  15. S-EMG signal compression based on domain transformation and spectral shape dynamic bit allocation

    PubMed Central

    2014-01-01

    Background Surface electromyographic (S-EMG) signal processing has been emerging in the past few years due to its non-invasive assessment of muscle function and structure and because of the fast growing rate of digital technology, which brings about new solutions and applications. Factors such as sampling rate, quantization word length, number of channels and experiment duration can lead to a potentially large volume of data, so efficient transmission and/or storage of S-EMG signals is an active research issue and is the aim of this work. Methods This paper presents an algorithm for the data compression of surface electromyographic (S-EMG) signals recorded during an isometric contraction protocol and during dynamic experimental protocols such as cycling. The proposed algorithm is based on the discrete wavelet transform to perform spectral decomposition and de-correlation, on a dynamic bit allocation procedure to code the wavelet-transformed coefficients, and on entropy coding to minimize the remaining redundancy and to pack all data. The bit allocation scheme is based on mathematically decreasing spectral shape models, which assign a shorter digital word length to high-frequency wavelet-transformed coefficients. Four bit allocation spectral shape methods were implemented and compared: decreasing exponential spectral shape, decreasing linear spectral shape, decreasing square-root spectral shape and rotated hyperbolic tangent spectral shape. Results The proposed method is demonstrated and evaluated for an isometric protocol and for a dynamic protocol using a real S-EMG signal data bank. Objective performance evaluation metrics are presented. In addition, comparisons with other encoders proposed in the scientific literature are shown. Conclusions The decreasing bit allocation shape applied to the quantized wavelet coefficients, combined with arithmetic coding, is an efficient procedure. Performance comparisons of the proposed S-EMG data compression algorithm with established techniques found in the scientific literature have shown promising results. PMID:24571620
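
    As a hedged illustration of the decreasing-spectral-shape idea above, the sketch below allocates quantization bits to wavelet subbands along a decreasing exponential curve; the subband count, bit range, and decay rate are invented parameters, not the paper's calibrated models.

```python
# Decreasing exponential bit allocation across wavelet subbands (illustrative).
import numpy as np

def exponential_bit_allocation(n_subbands, max_bits=16, min_bits=2, decay=0.4):
    k = np.arange(n_subbands)                         # 0 = lowest-frequency subband
    bits = min_bits + (max_bits - min_bits) * np.exp(-decay * k)
    return np.round(bits).astype(int)                 # shorter words at high frequency

print(exponential_bit_allocation(8))   # e.g. [16 11  8  6  5  4  3  3]
```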

  16. Integration and Task Allocation: Evidence from Patient Care*

    PubMed Central

    David, Guy; Rawley, Evan; Polsky, Daniel

    2013-01-01

    Using the universe of patient transitions from inpatient hospital care to skilled nursing facilities and home health care in 2005, we show how integration eliminates task misallocation problems between organizations. We find that vertical integration allows hospitals to shift patient recovery tasks downstream to lower-cost organizations by discharging patients earlier (and in poorer health) and increasing post-hospitalization service intensity. While integration facilitates a shift in the allocation of tasks and resources, health outcomes either improved or were unaffected by integration on average. The evidence suggests that integration solves coordination problems that arise in market exchange through improvements in the allocation of tasks across care settings. PMID:24415893

  17. Resource allocation for multichannel broadcasting visible light communication

    NASA Astrophysics Data System (ADS)

    Le, Nam-Tuan; Jang, Yeong Min

    2015-11-01

    Visible light communication (VLC), which offers the possibility of using light sources for both illumination and data communications simultaneously, will be a promising technique to incorporate with lighting applications. However, some challenges remain, especially coverage, because of the field-of-view limitation. In this paper, we focus on this issue by suggesting a resource allocation scheme for a VLC broadcasting system. By using frame synchronization and a network calculus QoS approximation, as well as diversity technology, the proposed VLC architecture and QoS resource allocation for the multichannel-broadcasting MAC (medium access control) protocol can solve the coverage limitation problem and the link switching problem of exhibition services.

  18. Internalizing symptoms and conduct problems: Redundant, incremental, or interactive risk factors for adolescent substance use during the first year of high school?

    PubMed

    Khoddam, Rubin; Jackson, Nicholas J; Leventhal, Adam M

    2016-12-01

    The complex interplay of externalizing and internalizing problems in substance use risk is not well understood. This study tested whether the relationship of conduct problems and several internalizing disorders with future substance use is redundant, incremental, or interactive in adolescents. Two semiannual waves of data from the Happiness and Health Study were used, which included 3383 adolescents (M age = 14.1 years; 53% female) in Los Angeles who were beginning high school at baseline. Logistic regression models tested the likelihood of past six-month alcohol, tobacco, marijuana, and any substance use at follow-up conditional on baseline conduct problems, symptoms of one of several internalizing disorders (i.e., Social Phobia and Major Depressive, Generalized Anxiety, Panic, and Obsessive-Compulsive Disorder), and their interaction, adjusting for baseline use and other covariates. Conduct problems were a robust and consistent risk factor for each substance use outcome at follow-up. When adjusting for the internalizing-conduct comorbidity, depressive symptoms were the only internalizing problem whose risk for alcohol, tobacco, and any substance use was incremental to conduct problems. With the exception of social phobia, antagonistic interactive relationships between each internalizing disorder and conduct problems were found when predicting any substance use; internalizing symptoms were a more robust risk factor for substance use in teens with low (vs. high) conduct problems. Although internalizing and externalizing problems both generally increase risk of substance use, a closer look reveals important nuances in these risk pathways, particularly among teens with comorbid externalizing and internalizing problems. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  19. Efficient Pricing Technique for Resource Allocation Problem in Downlink OFDM Cognitive Radio Networks

    NASA Astrophysics Data System (ADS)

    Abdulghafoor, O. B.; Shaat, M. M. R.; Ismail, M.; Nordin, R.; Yuwono, T.; Alwahedy, O. N. A.

    2017-05-01

    In this paper, the problem of resource allocation in OFDM-based downlink cognitive radio (CR) networks is addressed. The purpose of this research is to decrease the computational complexity of the resource allocation algorithm for the downlink CR network while respecting the interference constraint of the primary network. This objective is achieved by adopting a pricing scheme to develop a power allocation algorithm with the following concerns: (i) reducing the complexity of the proposed algorithm and (ii) providing firm control of the interference introduced to primary users (PUs). The performance of the proposed algorithm is tested for OFDM CRNs. The simulation results show that the performance of the proposed algorithm approaches that of the optimal algorithm at a lower computational complexity, i.e., O(N log N), which makes the proposed algorithm suitable for more practical applications.

  20. Prediction of lysine glutarylation sites by maximum relevance minimum redundancy feature selection.

    PubMed

    Ju, Zhe; He, Jian-Jun

    2018-06-01

    Lysine glutarylation is a new type of protein acylation modification in both prokaryotes and eukaryotes. To better understand the molecular mechanism of glutarylation, it is important to identify glutarylated substrates and their corresponding glutarylation sites accurately. In this study, a novel bioinformatics tool named GlutPred is developed to predict glutarylation sites by using multiple feature extraction and maximum relevance minimum redundancy feature selection. On the one hand, amino acid factors, binary encoding, and the composition of k-spaced amino acid pairs features are incorporated to encode glutarylation sites, and the maximum relevance minimum redundancy method and the incremental feature selection algorithm are adopted to remove redundant features. On the other hand, a biased support vector machine algorithm is used to handle the imbalance problem in the glutarylation sites training dataset. As illustrated by 10-fold cross-validation, GlutPred achieves a satisfactory performance with a sensitivity of 64.80%, a specificity of 76.60%, an accuracy of 74.90% and a Matthews correlation coefficient of 0.3194. Feature analysis shows that some k-spaced amino acid pair features play the most important roles in the prediction of glutarylation sites. The conclusions derived from this study might provide some clues for understanding the molecular mechanisms of glutarylation. Copyright © 2018 Elsevier Inc. All rights reserved.
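
    The greedy relevance-minus-redundancy loop at the core of mRMR is short enough to sketch. The self-contained toy below (random discrete data, a hypothetical label rule, and histogram-based mutual information, not the GlutPred pipeline) selects features whose mutual information with the label is high while their mutual information with already-selected features is low; on this synthetic data it is expected to pick features 2 and 7 early.

```python
# Greedy maximum-relevance minimum-redundancy (mRMR) selection on discrete features.
import numpy as np

def mutual_info(x, y):
    """MI (in nats) between two discrete 1-D integer arrays, from a joint histogram."""
    joint = np.zeros((x.max() + 1, y.max() + 1))
    for a, b in zip(x, y):
        joint[a, b] += 1
    joint /= joint.sum()
    px, py = joint.sum(1, keepdims=True), joint.sum(0, keepdims=True)
    nz = joint > 0
    return float((joint[nz] * np.log(joint[nz] / (px @ py)[nz])).sum())

def mrmr(X, y, n_select):
    remaining, selected = list(range(X.shape[1])), []
    while len(selected) < n_select:
        def score(f):
            relevance = mutual_info(X[:, f], y)
            redundancy = (np.mean([mutual_info(X[:, f], X[:, s]) for s in selected])
                          if selected else 0.0)
            return relevance - redundancy
        best = max(remaining, key=score)
        selected.append(best)
        remaining.remove(best)
    return selected

rng = np.random.default_rng(0)
X = rng.integers(0, 3, size=(200, 10))
y = ((X[:, 2] + X[:, 7]) > 2).astype(int)   # label depends only on features 2 and 7
print(mrmr(X, y, n_select=3))
```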

  1. Operations Research Applied to Libraries.

    ERIC Educational Resources Information Center

    Nussbaum, Harvey

    This report discusses the seven basic types of problems found in business and industry as they apply to library institutions. These seven basic types are: (1) inventory problems, (2) allocation problems, (3) queueing problems, (4) sequencing problems, (5) routing problems, (6) replacement problems and (7) search problems. (MF)

  2. Distributed Multi-Cell Resource Allocation with Price Based ICI Coordination in Downlink OFDMA Networks

    NASA Astrophysics Data System (ADS)

    Lv, Gangming; Zhu, Shihua; Hui, Hui

    Multi-cell resource allocation under minimum rate request for each user in OFDMA networks is addressed in this paper. Based on Lagrange dual decomposition theory, the joint multi-cell resource allocation problem is decomposed and modeled as a limited-cooperative game, and a distributed multi-cell resource allocation algorithm is thus proposed. Analysis and simulation results show that, compared with non-cooperative iterative water-filling algorithm, the proposed algorithm can remarkably reduce the ICI level and improve overall system performances.

  3. The Conference Proceedings of the 1997 Air Transport Research Group (ATRG) of the WCTR Society. Volume 3

    NASA Technical Reports Server (NTRS)

    Oum, T. H.; Bowen, B. D.

    1997-01-01

    This paper covers topics such as: Safety and Air Fares; International Airline Safety; Multi-fare Seat Allocation Problem; Dynamic Allocation of Airline Seat Inventory; Seat Allocation on Flights with Two Fares; Effects of Intercontinental Alliances; Domestic Airline Mergers; Simulating the Effects of Airline Deregulation on Frequency Choice; and Firm Size Inequality and Market Power.

  4. Summary of Research 1997 Department of Systems Management.

    DTIC Science & Technology

    1999-01-01

    formulation and execution; impacts of budget allocation, reallocation, and reduction; implementation of Defense Resource Management Systems; and the...flexible structure that can be applied to a wide range of resource allocation problems. PUBLICATIONS: Dolk, D., Murphy, M., and Thomas, G...policies, procedures, and rationale in determining recruiting resource allocation decisions. The methodology relies on a review of the literature

  5. Evolutionary profiles derived from the QR factorization of multiple structural alignments gives an economy of information.

    PubMed

    O'Donoghue, Patrick; Luthey-Schulten, Zaida

    2005-02-25

    We present a new algorithm, based on the multidimensional QR factorization, to remove redundancy from a multiple structural alignment by choosing representative protein structures that best preserve the phylogenetic tree topology of the homologous group. The classical QR factorization with pivoting, developed as a fast numerical solution to eigenvalue and linear least-squares problems of the form Ax=b, was designed to re-order the columns of A by increasing linear dependence. Removing the most linear dependent columns from A leads to the formation of a minimal basis set which well spans the phase space of the problem at hand. By recasting the problem of redundancy in multiple structural alignments into this framework, in which the matrix A now describes the multiple alignment, we adapted the QR factorization to produce a minimal basis set of protein structures which best spans the evolutionary (phase) space. The non-redundant and representative profiles obtained from this procedure, termed evolutionary profiles, are shown in initial results to outperform well-tested profiles in homology detection searches over a large sequence database. A measure of structural similarity between homologous proteins, Q(H), is presented. By properly accounting for the effect and presence of gaps, a phylogenetic tree computed using this metric is shown to be congruent with the maximum-likelihood sequence-based phylogeny. The results indicate that evolutionary information is indeed recoverable from the comparative analysis of protein structure alone. Applications of the QR ordering and this structural similarity metric to analyze the evolution of structure among key, universally distributed proteins involved in translation, and to the selection of representatives from an ensemble of NMR structures are also discussed.
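
    The column-pivoted QR selection can be illustrated with a short sketch; the matrix shape and the cutoff k are placeholders, not values from the study.

        import numpy as np
        from scipy.linalg import qr

        # A: one column per aligned structure (rows encode the alignment); shape is illustrative
        A = np.random.rand(200, 30)

        # QR with column pivoting orders the columns by decreasing linear independence,
        # so the first k pivot indices give a minimal, least-redundant set of representatives.
        Q, R, piv = qr(A, pivoting=True)
        k = 10
        representatives = piv[:k]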

  6. Methods for Allocating Highway Costs

    DOT National Transportation Integrated Search

    1981-04-01

    Microeconomic theory and other concepts related to pricing are reviewed and applied to the problem of designing highway user charges. In view of the emphasis in the Congressional request for the Highway Cost Allocation Study on setting charges in acc...

  7. Asynchronous Incremental Stochastic Dual Descent Algorithm for Network Resource Allocation

    NASA Astrophysics Data System (ADS)

    Bedi, Amrit Singh; Rajawat, Ketan

    2018-05-01

    Stochastic network optimization problems entail finding resource allocation policies that are optimum on average but must be designed in an online fashion. Such problems are ubiquitous in communication networks, where resources such as energy and bandwidth are divided among nodes to satisfy certain long-term objectives. This paper proposes an asynchronous incremental dual descent resource allocation algorithm that utilizes delayed stochastic gradients for carrying out its updates. The proposed algorithm is well-suited to heterogeneous networks as it allows the computationally-challenged or energy-starved nodes to, at times, postpone the updates. The asymptotic analysis of the proposed algorithm is carried out, establishing dual convergence under both constant and diminishing step sizes. It is also shown that with constant step size, the proposed resource allocation policy is asymptotically near-optimal. An application involving multi-cell coordinated beamforming is detailed, demonstrating the usefulness of the proposed algorithm.
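
    A toy illustration of dual updates driven by delayed stochastic gradients is sketched below; the single coupling constraint, the random delay model, and all names are assumptions for illustration rather than the paper's algorithm.

        import random

        def delayed_dual_ascent(sample_slack, steps=500, step_size=0.05, max_delay=3):
            """Projected dual ascent on one coupling constraint, where each stochastic
            (sub)gradient may be applied several iterations after it was computed.
            sample_slack(lam) returns a noisy estimate of (resource use - budget) at lam."""
            lam, buffer = 0.0, []
            for _ in range(steps):
                buffer.append(sample_slack(lam))                 # gradient computed now ...
                delay = random.randint(0, max_delay)
                while len(buffer) > delay:                       # ... applied up to max_delay steps late
                    lam = max(0.0, lam + step_size * buffer.pop(0))
            return lam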

  8. Decision-theoretic methodology for reliability and risk allocation in nuclear power plants

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cho, N.Z.; Papazoglou, I.A.; Bari, R.A.

    1985-01-01

    This paper describes a methodology for allocating reliability and risk to various reactor systems, subsystems, components, operations, and structures in a consistent manner, based on a set of global safety criteria which are not rigid. The problem is formulated as a multiattribute decision analysis paradigm; the multiobjective optimization, which is performed on a PRA model and reliability cost functions, serves as the guiding principle for reliability and risk allocation. The concept of noninferiority is used in the multiobjective optimization problem. Finding the noninferior solution set is the main theme of the current approach. The assessment of the decision maker's preferences could then be performed more easily on the noninferior solution set. Some results of the methodology applications to a nontrivial risk model are provided and several outstanding issues such as generic allocation and preference assessment are discussed.

  9. Arithmetic and algebraic problem solving and resource allocation: the distinct impact of fluid and numerical intelligence.

    PubMed

    Dix, Annika; van der Meer, Elke

    2015-04-01

    This study investigates cognitive resource allocation dependent on fluid and numerical intelligence in arithmetic/algebraic tasks varying in difficulty. Sixty-six 11th grade students participated in a mathematical verification paradigm, while pupil dilation as a measure of resource allocation was collected. Students with high fluid intelligence solved the tasks faster and more accurately than those with average fluid intelligence, as did students with high compared to average numerical intelligence. However, fluid intelligence sped up response times only in students with average but not high numerical intelligence. Further, high fluid but not numerical intelligence led to greater task-related pupil dilation. We assume that fluid intelligence serves as a domain-general resource that helps to tackle problems for which domain-specific knowledge (numerical intelligence) is missing. The allocation of this resource can be measured by pupil dilation. Copyright © 2014 Society for Psychophysiological Research.

  10. Methods and circuitry for reconfigurable SEU/SET tolerance

    NASA Technical Reports Server (NTRS)

    Shuler, Jr., Robert L. (Inventor)

    2010-01-01

    A device is disclosed in one embodiment that has multiple identical sets of programmable functional elements, programmable routing resources, and majority voters that correct errors. The voters accept a mode input for a redundancy mode and a split mode. In the redundancy mode, the programmable functional elements are identical and are programmed identically so the voters produce an output corresponding to the majority of inputs that agree. In a split mode, each voter selects a particular programmable functional element output as the output of the voter. Therefore, in the split mode, the programmable functional elements can perform different functions, operate independently, and/or be connected together to process different parts of the same problem.
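
    The voter behavior in the two modes can be illustrated with a small sketch; the function and its signature are hypothetical, not taken from the patent.

        def vote(inputs, mode="redundancy", select=0):
            """Majority voter over replicated functional-element outputs.
            'redundancy' mode returns the value the majority of copies agree on,
            masking a single upset; 'split' mode passes through one selected copy
            so the copies can operate independently."""
            if mode == "split":
                return inputs[select]
            return max(set(inputs), key=inputs.count)

        assert vote([1, 1, 0]) == 1                        # one corrupted copy is outvoted
        assert vote([3, 7, 9], mode="split", select=1) == 7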

  11. Dead pixel replacement in LWIR microgrid polarimeters.

    PubMed

    Ratliff, Bradley M; Tyo, J Scott; Boger, James K; Black, Wiley T; Bowers, David L; Fetrow, Matthew P

    2007-06-11

    LWIR imaging arrays are often affected by nonresponsive pixels, or "dead pixels." These dead pixels can severely degrade the quality of imagery and often have to be replaced before subsequent image processing and display of the imagery data. For LWIR arrays that are integrated with arrays of micropolarizers, the problem of dead pixels is amplified. Conventional dead pixel replacement (DPR) strategies cannot be employed since neighboring pixels are of different polarizations. In this paper we present two DPR schemes. The first is a modified nearest-neighbor replacement method. The second is a method based on redundancy in the polarization measurements. We find that the redundancy-based DPR scheme provides an order-of-magnitude better performance for typical LWIR polarimetric data.
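
    A simplified version of same-polarization nearest-neighbor replacement for a 2x2 microgrid is sketched below; the averaging rule and the neighbor offsets are illustrative assumptions, not the authors' exact scheme.

        import numpy as np

        def replace_dead_pixels(img, dead_mask):
            """In a 2x2 microgrid, pixels two rows or two columns away share the same
            micropolarizer orientation, so dead pixels are filled from those neighbors."""
            out = img.astype(float).copy()
            rows, cols = img.shape
            offsets = [(-2, 0), (2, 0), (0, -2), (0, 2)]
            for r, c in zip(*np.nonzero(dead_mask)):
                vals = [out[r + dr, c + dc] for dr, dc in offsets
                        if 0 <= r + dr < rows and 0 <= c + dc < cols
                        and not dead_mask[r + dr, c + dc]]
                if vals:
                    out[r, c] = np.mean(vals)
            return out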

  12. A methodology for testing fault-tolerant software

    NASA Technical Reports Server (NTRS)

    Andrews, D. M.; Mahmood, A.; Mccluskey, E. J.

    1985-01-01

    A methodology for testing fault tolerant software is presented. There are problems associated with testing fault tolerant software because many errors are masked or corrected by voters, limiters, or automatic channel synchronization. This methodology illustrates how the same strategies used for testing fault tolerant hardware can be applied to testing fault tolerant software. For example, one strategy used in testing fault tolerant hardware is to disable the redundancy during testing. A similar testing strategy is proposed for software, namely, to move the major emphasis of testing earlier in the development cycle (before the redundancy is in place), thus reducing the possibility that undetected errors will be masked when limiters and voters are added.

  13. Sequence-Based Prediction of RNA-Binding Proteins Using Random Forest with Minimum Redundancy Maximum Relevance Feature Selection.

    PubMed

    Ma, Xin; Guo, Jing; Sun, Xiao

    2015-01-01

    The prediction of RNA-binding proteins is one of the most challenging problems in computational biology. Although some studies have investigated this problem, the accuracy of prediction is still not sufficient. In this study, a highly accurate method was developed to predict RNA-binding proteins from amino acid sequences using random forests with the minimum redundancy maximum relevance (mRMR) method, followed by incremental feature selection (IFS). We incorporated conjoint triad features and three novel features: binding propensity (BP), nonbinding propensity (NBP), and evolutionary information combined with physicochemical properties (EIPP). The results showed that these novel features have important roles in improving the performance of the predictor. Using the mRMR-IFS method, our predictor achieved the best performance (86.62% accuracy and 0.737 Matthews correlation coefficient). The high prediction accuracy and successful prediction performance suggest that our method can be a useful approach to identify RNA-binding proteins from sequence information.
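
    The incremental feature selection step can be sketched as follows, assuming the features have already been ranked (e.g., by mRMR) and a scikit-learn random forest as the classifier; the parameter values are placeholders.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import cross_val_score

        def incremental_feature_selection(X, y, ranked_features, cv=10):
            """Grow the feature set one ranked feature at a time and keep the prefix
            with the best cross-validated accuracy."""
            best_score, best_k = -np.inf, 0
            for k in range(1, len(ranked_features) + 1):
                cols = list(ranked_features[:k])
                model = RandomForestClassifier(n_estimators=200, random_state=0)
                score = cross_val_score(model, X[:, cols], y, cv=cv).mean()
                if score > best_score:
                    best_score, best_k = score, k
            return list(ranked_features[:best_k]), best_score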

  14. A Review of Function Allocation and En Route Separation Assurance

    NASA Technical Reports Server (NTRS)

    Lewis, Timothy A.; Aweiss, Arwa S.; Guerreiro, Nelson M.; Daiker, Ronald J.

    2016-01-01

    Today's air traffic control system has reached a limit to the number of aircraft that can be safely managed at the same time. This air traffic capacity bottleneck is a critical problem along the path to modernization for air transportation. The design of the next separation assurance system to address this problem is a cornerstone of air traffic management research today. This report reviews recent work by NASA and others in the areas of function allocation and en route separation assurance. This includes: separation assurance algorithms and technology prototypes; concepts of operations and designs for advanced separation assurance systems; and specific investigations into air-ground and human-automation function allocation.

  15. A note on resource allocation scheduling with group technology and learning effects on a single machine

    NASA Astrophysics Data System (ADS)

    Lu, Yuan-Yuan; Wang, Ji-Bo; Ji, Ping; He, Hongyu

    2017-09-01

    In this article, single-machine group scheduling with learning effects and convex resource allocation is studied. The goal is to find the optimal job schedule, the optimal group schedule, and the resource allocations of jobs and groups. For the problem of minimizing the makespan subject to limited resource availability, it is proved that the problem can be solved in polynomial time under the condition that the setup times of groups are independent. For general setup times of groups, a heuristic algorithm and a branch-and-bound algorithm are proposed. Computational experiments show that the heuristic algorithm is fairly accurate in obtaining near-optimal solutions.

  16. Divergence in plant and microbial allocation strategies explains continental patterns in microbial allocation and biogeochemical fluxes.

    PubMed

    Averill, Colin

    2014-10-01

    Allocation trade-offs shape ecological and biogeochemical phenomena at local to global scale. Plant allocation strategies drive major changes in ecosystem carbon cycling. Microbial allocation to enzymes that decompose carbon vs. organic nutrients may similarly affect ecosystem carbon cycling. Current solutions to this allocation problem prioritise stoichiometric tradeoffs implemented in plant ecology. These solutions may not maximise microbial growth and fitness under all conditions, because organic nutrients are also a significant carbon resource for microbes. I created multiple allocation frameworks and simulated microbial growth using a microbial explicit biogeochemical model. I demonstrate that prioritising stoichiometric trade-offs does not optimise microbial allocation, while exploiting organic nutrients as carbon resources does. Analysis of continental-scale enzyme data supports the allocation patterns predicted by this framework, and modelling suggests large deviations in soil C loss based on which strategy is implemented. Therefore, understanding microbial allocation strategies will likely improve our understanding of carbon cycling and climate. © 2014 John Wiley & Sons Ltd/CNRS.

  17. Decision rules for allocation of finances to health systems strengthening

    PubMed Central

    Morton, Alec; Thomas, Ranjeeta; Smith, Peter C.

    2017-01-01

    A key dilemma in global health is how to allocate funds between disease-specific “vertical projects” on the one hand and “horizontal programmes” which aim to strengthen the entire health system on the other. While economic evaluation provides a way of approaching the prioritisation of vertical projects, it provides less guidance on how to prioritise between horizontal and vertical spending. We approach this problem by formulating a mathematical program which captures the complementary benefits of funding both vertical projects and horizontal programmes. We show that our solution to this math program has an appealing intuitive structure. We illustrate our model by computationally solving two specialised versions of this problem, with illustrations based on the problem of allocating funding for infectious diseases in sub-Saharan Africa. We conclude by reflecting on how such a model may be developed in the future and used to guide empirical data collection and theory development. PMID:27394006

  18. Efficient Simulation Budget Allocation for Selecting an Optimal Subset

    NASA Technical Reports Server (NTRS)

    Chen, Chun-Hung; He, Donghai; Fu, Michael; Lee, Loo Hay

    2008-01-01

    We consider a class of the subset selection problem in ranking and selection. The objective is to identify the top m out of k designs based on simulated output. Traditional procedures are conservative and inefficient. Using the optimal computing budget allocation framework, we formulate the problem as that of maximizing the probability of correctly selecting all of the top-m designs subject to a constraint on the total number of samples available. For an approximation of this correct selection probability, we derive an asymptotically optimal allocation and propose an easy-to-implement heuristic sequential allocation procedure. Numerical experiments indicate that the resulting allocations are superior to other methods in the literature that we tested, and the relative efficiency increases for larger problems. In addition, preliminary numerical results indicate that the proposed new procedure has the potential to enhance computational efficiency for simulation optimization.
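
    For orientation, the classic OCBA allocation rule for selecting the single best design can be sketched as below; the subset-selection variant in the paper generalizes this rule, and the numbers here are purely illustrative.

        import numpy as np

        def ocba_allocation(means, stds, budget):
            """Split a simulation budget among k designs (best = largest mean).
            Non-best designs get shares proportional to (std_i / gap_i)^2; the best
            design gets std_b * sqrt(sum_i (share_i / std_i)^2)."""
            means, stds = np.asarray(means, float), np.asarray(stds, float)
            b = int(np.argmax(means))
            gaps = means[b] - means
            share = np.zeros_like(means)
            others = np.arange(len(means)) != b
            share[others] = (stds[others] / np.maximum(gaps[others], 1e-9)) ** 2
            share[b] = stds[b] * np.sqrt(np.sum((share[others] / stds[others]) ** 2))
            return np.rint(budget * share / share.sum()).astype(int)

        print(ocba_allocation(means=[1.0, 1.2, 0.8, 1.4], stds=[0.3, 0.3, 0.2, 0.4], budget=1000))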

  19. Distributed Optimal Consensus Over Resource Allocation Network and Its Application to Dynamical Economic Dispatch.

    PubMed

    Li, Chaojie; Yu, Xinghuo; Huang, Tingwen; He, Xing

    2018-06-01

    The resource allocation problem is studied and reformulated by a distributed interior point method via a -logarithmic barrier. By the facilitation of the graph Laplacian, a fully distributed continuous-time multiagent system is developed for solving the problem. Specifically, to avoid the high singularity of the -logarithmic barrier at the boundary, an adaptive parameter switching strategy is introduced into this dynamical multiagent system. The convergence rate of the distributed algorithm is obtained. Moreover, a novel distributed primal-dual dynamical multiagent system is designed in a smart grid scenario to seek the saddle point of dynamical economic dispatch, which coincides with the optimal solution. The dual decomposition technique is applied to transform the optimization problem into easily solvable resource allocation subproblems with local inequality constraints. The good performance of the new dynamical systems is verified by a numerical example and by IEEE six-bus test system-based simulations, respectively.

  20. LCD blink effect: acoustic specific or widespread phenomenon

    NASA Astrophysics Data System (ADS)

    Wright, Graham; Bosko, Floyd; Allen, Steve; Thomas, John T.

    2000-08-01

    The problem of 'flicker' of data presented on large area LCDs is deemed a major issue by the military display user community. Although the problem is certainly applicable to the display of certain types of acoustics data, there are those who would expand it to cover data types which do not suffer from the effect. This paper describes in some detail the mechanisms within the LCD that cause the problem and extrapolates scenarios where compensation is required and where it would be redundant or detrimental. Solutions which might be used to eliminate the effect are also described.

  1. Experiments in structural dynamics and control using a grid

    NASA Technical Reports Server (NTRS)

    Montgomery, R. C.

    1985-01-01

    Future spacecraft are being conceived that are highly flexible and of extreme size. The two features of flexibility and size pose new problems in control system design. Since large scale structures are not testable in ground based facilities, the decision on component placement must be made prior to full-scale tests on the spacecraft. Control law research is directed at solving problems of inadequate modelling knowledge prior to operation required to achieve peak performance. Another crucial problem addressed is accommodating failures in systems with smart components that are physically distributed on highly flexible structures. Parameter adaptive control is a method of promise that provides on-orbit tuning of the control system to improve performance by upgrading the mathematical model of the spacecraft during operation. Two specific questions are answered in this work. They are: What limits does on-line parameter identification with realistic sensors and actuators place on the ultimate achievable performance of a system in the highly flexible environment? Also, how well must the mathematical model used in on-board analytic redundancy be known and what are the reasonable expectations for advanced redundancy management schemes in the highly flexible and distributed component environment?

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Unseren, M.A.

    The report discusses the orientation tracking control problem for a kinematically redundant, autonomous manipulator moving in a three dimensional workspace. The orientation error is derived using the normalized quaternion error method of Ickes, the Luh, Walker, and Paul error method, and a method suggested here utilizing the Rodrigues parameters, all of which are expressed in terms of normalized quaternions. The analytical time derivatives of the orientation errors are determined. The latter, along with the translational velocity error, form a closed-loop kinematic velocity model of the manipulator using normalized quaternion and translational position feedback. An analysis of the singularities associated with expressing the models in a form suitable for solving the inverse kinematics problem is given. Two redundancy resolution algorithms originally developed using an open loop kinematic velocity model of the manipulator are extended to properly take into account the orientation tracking control problem. This report furnishes the necessary mathematical framework required prior to experimental implementation of the orientation tracking control schemes on the seven axis CESARm research manipulator or on the seven-axis Robotics Research K1207i dexterous manipulator, the latter of which is to be delivered to the Oak Ridge National Laboratory in 1993.

  3. Impact of Co-Site Interference on L/C-Band Spectrum for UAS Control and Non-Payload Communications

    NASA Technical Reports Server (NTRS)

    Kerczewski, Robert J.; Bishop, William D.; Hoder, Douglas J.; Shalkhauser, Kurt A.; Wilson, Jeffrey D.

    2015-01-01

    In order to provide for the safe integration of unmanned aircraft systems into the National Airspace System, the control and non-payload communications (CNPC) link connecting the ground-based pilot with the unmanned aircraft must be highly reliable. A specific requirement is that it must operate using aviation safety radiofrequency spectrum. The 2012 World Radiocommunication Conference (WRC-12) provided a potentially suitable allocation for LOS CNPC spectrum in C-Band at 5030-5091 MHz which, when combined with a previous allocation in L-Band (960-1164 MHz), may satisfy the LOS spectrum requirement and provide for high reliability through dual-band redundancy. However, the L-Band spectrum hosts a number of aeronautical navigation systems which require high-power transmitters on-board the aircraft. These high-power transmitters co-located with sensitive CNPC receivers operating in the same frequency band have the potential to create co-site interference, reducing the performance of the CNPC receivers and ultimately reducing the usability of the L-Band for CNPC. This paper examines the potential for co-site interference, as highlighted in recent flight tests, and discusses the impact on the UAS CNPC spectrum availability and requirements for further testing and analysis.

  4. Concurrent airline fleet allocation and aircraft design with profit modeling for multiple airlines

    NASA Astrophysics Data System (ADS)

    Govindaraju, Parithi

    A "System of Systems" (SoS) approach is particularly beneficial in analyzing complex large scale systems comprised of numerous independent systems -- each capable of independent operations in their own right -- that when brought in conjunction offer capabilities and performance beyond the constituents of the individual systems. The variable resource allocation problem is a type of SoS problem, which includes the allocation of "yet-to-be-designed" systems in addition to existing resources and systems. The methodology presented here expands upon earlier work that demonstrated a decomposition approach that sought to simultaneously design a new aircraft and allocate this new aircraft along with existing aircraft in an effort to meet passenger demand at minimum fleet level operating cost for a single airline. The result of this describes important characteristics of the new aircraft. The ticket price model developed and implemented here enables analysis of the system using profit maximization studies instead of cost minimization. A multiobjective problem formulation has been implemented to determine characteristics of a new aircraft that maximizes the profit of multiple airlines to recognize the fact that aircraft manufacturers sell their aircraft to multiple customers and seldom design aircraft customized to a single airline's operations. The route network characteristics of two simple airlines serve as the example problem for the initial studies. The resulting problem formulation is a mixed-integer nonlinear programming problem, which is typically difficult to solve. A sequential decomposition strategy is applied as a solution methodology by segregating the allocation (integer programming) and aircraft design (non-linear programming) subspaces. After solving a simple problem considering two airlines, the decomposition approach is then applied to two larger airline route networks representing actual airline operations in the year 2005. The decomposition strategy serves as a promising technique for future detailed analyses. Results from the profit maximization studies favor a smaller aircraft in terms of passenger capacity due to its higher yield generation capability on shorter routes while results from the cost minimization studies favor a larger aircraft due to its lower direct operating cost per seat mile.

  5. Decomposition method for zonal resource allocation problems in telecommunication networks

    NASA Astrophysics Data System (ADS)

    Konnov, I. V.; Kashuba, A. Yu

    2016-11-01

    We consider problems of optimal resource allocation in telecommunication networks. We first give an optimization formulation for the case where the network manager aims to distribute a homogeneous resource (bandwidth) among the users of one region with quadratic charge and fee functions, and we present simple and efficient solution methods. Next, we consider a more general problem for a provider of a wireless communication network divided into zones (clusters) with common capacity constraints. We obtain a convex quadratic optimization problem involving capacity and balance constraints. By using the dual Lagrangian method with respect to the capacity constraint, we reduce the initial problem to a single-dimensional optimization problem; each evaluation of its cost function, however, requires the independent solution of zonal problems, which coincide with the single-region problem above. Some results of computational experiments confirm the applicability of the new methods.
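
    A minimal sketch of the single-region building block is given below, assuming a concave quadratic utility and a shared capacity constraint; dualizing the capacity constraint gives a one-dimensional search over the multiplier, with parameter names chosen for illustration.

        import numpy as np

        def allocate_with_capacity(a, b, capacity, iters=60):
            """Maximize sum_i (a_i * x_i - 0.5 * b_i * x_i**2) subject to
            sum_i x_i <= capacity and x_i >= 0.  The KKT conditions give
            x_i(lam) = max(0, (a_i - lam) / b_i); bisection on lam restores feasibility."""
            a, b = np.asarray(a, float), np.asarray(b, float)
            x = lambda lam: np.maximum((a - lam) / b, 0.0)
            if x(0.0).sum() <= capacity:                    # capacity constraint inactive
                return x(0.0)
            lo, hi = 0.0, a.max()
            for _ in range(iters):
                mid = 0.5 * (lo + hi)
                lo, hi = (mid, hi) if x(mid).sum() > capacity else (lo, mid)
            return x(hi)

        print(allocate_with_capacity(a=[4.0, 3.0, 5.0], b=[1.0, 2.0, 1.5], capacity=3.0))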

  6. Multicriterion problem of allocation of resources in the heterogeneous distributed information processing systems

    NASA Astrophysics Data System (ADS)

    Antamoshkin, O. A.; Kilochitskaya, T. R.; Ontuzheva, G. A.; Stupina, A. A.; Tynchenko, V. S.

    2018-05-01

    This study reviews the problem of allocation of resources in the heterogeneous distributed information processing systems, which may be formalized in the form of a multicriterion multi-index problem with the linear constraints of the transport type. The algorithms for solution of this problem suggest a search for the entire set of Pareto-optimal solutions. For some classes of hierarchical systems, it is possible to significantly speed up the procedure of verification of a system of linear algebraic inequalities for consistency due to the reducibility of them to the stream models or the application of other solution schemes (for strongly connected structures) that take into account the specifics of the hierarchies under consideration.

  7. A Solution Method of Scheduling Problem with Worker Allocation by a Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Osawa, Akira; Ida, Kenichi

    In the scheduling problem with worker allocation (SPWA) proposed by Iima et al., every worker has the same skill level on each machine. In the real world, however, each worker has a different skill level for each machine. For that reason, we propose a new model of SPWA in which each worker has a different skill level for each machine. To solve the problem, we propose a new GA for SPWA consisting of three new procedures: shortening of idle time, repairing infeasible solutions into feasible ones, and a new selection method for the GA. The effectiveness of the proposed algorithm is clarified by numerical experiments using benchmark problems for job-shop scheduling.
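
    As a rough illustration of how skill levels can enter the chromosome and the fitness, a toy evolutionary skeleton is sketched below (a job order plus a worker choice per job, tournament selection, and swap/reassignment mutation). It is not the authors' algorithm; the fitness (total completion time with skill-scaled durations) and all names are assumptions.

        import random

        def evolve_schedule(jobs, workers, skill, proc_time, pop=40, gens=200):
            """skill[w][j]: skill level of worker w on the machine processing job j."""
            def random_ind():
                return (random.sample(range(jobs), jobs),
                        [random.randrange(workers) for _ in range(jobs)])

            def fitness(ind):                        # total completion time (lower is better)
                order, assign = ind
                t, total = 0.0, 0.0
                for j in order:
                    t += proc_time[j] / skill[assign[j]][j]
                    total += t
                return total

            def mutate(ind):
                order, assign = list(ind[0]), list(ind[1])
                i, k = random.sample(range(jobs), 2)
                order[i], order[k] = order[k], order[i]          # swap two jobs
                assign[random.randrange(jobs)] = random.randrange(workers)
                return order, assign

            population = [random_ind() for _ in range(pop)]
            for _ in range(gens):
                parents = [min(random.sample(population, 3), key=fitness) for _ in range(pop)]
                population = [mutate(p) for p in parents]
            return min(population, key=fitness)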

  8. Optimal allocation of trend following strategies

    NASA Astrophysics Data System (ADS)

    Grebenkov, Denis S.; Serror, Jeremy

    2015-09-01

    We consider a portfolio allocation problem for trend following (TF) strategies on multiple correlated assets. Under simplifying assumptions of a Gaussian market and linear TF strategies, we derive analytical formulas for the mean and variance of the portfolio return. We then construct the optimal portfolio that maximizes risk-adjusted return by accounting for inter-asset correlations. The dynamic allocation problem for n assets is shown to be equivalent to the classical static allocation problem for n^2 virtual assets that include lead-lag corrections in the positions of TF strategies. The respective roles of asset auto-correlations and inter-asset correlations are investigated in depth for the two-asset case and a sector model. In contrast to the principle of diversification, which suggests treating uncorrelated assets, we show that inter-asset correlations allow one to estimate apparent trends more reliably and to adjust the TF positions more efficiently. If properly accounted for, inter-asset correlations are not deteriorative but beneficial for portfolio management and can open new profit opportunities for trend followers. These concepts are illustrated using daily returns of three highly correlated futures markets: the E-mini S&P 500, the Euro Stoxx 50 index, and the US 10-year T-note futures.

  9. Graph theoretical stable allocation as a tool for reproduction of control by human operators

    NASA Astrophysics Data System (ADS)

    van Nooijen, Ronald; Ertsen, Maurits; Kolechkina, Alla

    2016-04-01

    During the design of central control algorithms for existing water resource systems under manual control it is important to consider the interaction with parts of the system that remain under manual control and to compare the proposed new system with the existing manual methods. In graph theory the "stable allocation" problem has good solution algorithms and allows for formulation of flow distribution problems in terms of priorities. As a test case for the use of this approach we used the algorithm to derive water allocation rules for the Gezira Scheme, an irrigation system located between the Blue and White Niles south of Khartoum. In 1925, Gezira started with 300,000 acres; currently it covers close to two million acres.

  10. Control Reconfiguration of Command and Control Systems

    DTIC Science & Technology

    2007-01-01

    decision errors and control action delays upon entering a state. These two undesirable effects can be intertwined. To quantify their individual impact... Effect of... Study of the Effect of Supervisory Control on a Redundant Database Unit (Metzler and Wu, Report to AFRL 2005)... Problem

  11. Optimal Power Allocation for Downstream xDSL With Per-Modem Total Power Constraints: Broadcast Channel Optimal Spectrum Balancing (BC-OSB)

    NASA Astrophysics Data System (ADS)

    Le Nir, Vincent; Moonen, Marc; Verlinden, Jan; Guenach, Mamoun

    2009-02-01

    Recently, the duality between Multiple Input Multiple Output (MIMO) Multiple Access Channels (MAC) and MIMO Broadcast Channels (BC) has been established under a total power constraint. The same set of rates for MAC can be achieved in BC exploiting the MAC-BC duality formulas while preserving the total power constraint. In this paper, we describe the BC optimal power allocation applying this duality in a downstream x-Digital Subscriber Lines (xDSL) context under a total power constraint for all modems over all tones. Then, a new algorithm called BC-Optimal Spectrum Balancing (BC-OSB) is devised for a more realistic power allocation under per-modem total power constraints. The capacity region of the primal BC problem under per-modem total power constraints is found by the dual optimization problem for the BC under per-modem total power constraints, which can be rewritten as a dual optimization problem in the MAC by means of a precoder matrix based on the Lagrange multipliers. We show that the duality gap between the two problems is zero. The multi-user power allocation problem has been solved for interference channels and MAC using the OSB algorithm. In this paper we solve the problem of multi-user power allocation for the BC case using the OSB algorithm as well, and we derive a computationally efficient algorithm that will be referred to as BC-OSB. Simulation results are provided for two VDSL2 scenarios: the first one with Differential-Mode (DM) transmission only and the second one with both DM and Phantom-Mode (PM) transmissions.

  12. Kinematically Optimal Robust Control of Redundant Manipulators

    NASA Astrophysics Data System (ADS)

    Galicki, M.

    2017-12-01

    This work deals with the problem of robust optimal task space trajectory tracking subject to finite-time convergence. Kinematic and dynamic equations of a redundant manipulator are assumed to be uncertain. Moreover, globally unbounded disturbances are allowed to act on the manipulator when tracking the trajectory by the end-effector. Furthermore, the movement is to be accomplished in such a way as to minimize both the manipulator torques and their oscillations, thus eliminating potential robot vibrations. Based on a suitably defined task space non-singular terminal sliding vector variable and the Lyapunov stability theory, we derive a class of chattering-free robust kinematically optimal controllers, based on the estimation of the transpose Jacobian, which seem to be effective in counteracting both uncertain kinematics and dynamics, unbounded disturbances, and (possible) kinematic and/or algorithmic singularities met on the robot trajectory. The numerical simulations carried out for a redundant manipulator of a SCARA type consisting of three revolute kinematic pairs and operating in a two-dimensional task space illustrate the performance of the proposed controllers as well as comparisons with other well known control schemes.

  13. Kinematics, controls, and path planning results for a redundant manipulator

    NASA Technical Reports Server (NTRS)

    Gretz, Bruce; Tilley, Scott W.

    1989-01-01

    The inverse kinematics solution, a modal position control algorithm, and path planning results for a 7 degree of freedom manipulator are presented. The redundant arm consists of two links with shoulder and elbow joints and a spherical wrist. The inverse kinematics problem for tip position is solved and the redundant joint is identified. It is also shown that a locus of tip positions exists in which there are kinematic limitations on self-motion. A computationally simple modal position control algorithm has been developed which guarantees a nearly constant closed-loop dynamic response throughout the workspace. If all closed-loop poles are assigned to the same location, the algorithm can be implemented with very little computation. To further reduce the required computation, the modal gains are updated only at discrete time intervals. Criteria are developed for the frequency of these updates. For commanding manipulator movements, a 5th-order spline which minimizes jerk provides a smooth tip-space path. Schemes for deriving a corresponding joint-space trajectory are discussed. Modifying the trajectory to avoid joint torque saturation when a tip payload is added is also considered. Simulation results are presented.
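
    One common realization of a jerk-minimizing 5th-order profile is the quintic polynomial with zero boundary velocity and acceleration, sketched below; the actual spline used in the paper may differ, and the move parameters are illustrative.

        import numpy as np

        def minimum_jerk(x0, xf, T, n=100):
            """Quintic point-to-point profile x(t) = x0 + (xf - x0) * (10 s^3 - 15 s^4 + 6 s^5),
            s = t / T, which has zero velocity and acceleration at both ends and
            minimizes the integral of squared jerk."""
            t = np.linspace(0.0, T, n)
            s = t / T
            return t, x0 + (xf - x0) * (10 * s**3 - 15 * s**4 + 6 * s**5)

        t, x = minimum_jerk(x0=0.0, xf=0.3, T=2.0)      # e.g. a 0.3 m tip move over 2 s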

  14. Evolution of shuttle avionics redundancy management/fault tolerance

    NASA Technical Reports Server (NTRS)

    Boykin, J. C.; Thibodeau, J. R.; Schneider, H. E.

    1985-01-01

    The challenge of providing redundancy management (RM) and fault tolerance to meet the Shuttle Program requirements of fail operational/fail safe for the avionics systems was complicated by the critical program constraints of weight, cost, and schedule. The basic and sometimes false effectiveness of less-than-pure RM designs is addressed. The evolution of the multiple input selection filter (the heart of the RM function) is discussed with emphasis on the subtle interactions of the flight control system that were found to be potentially catastrophic. Several other general RM development problems are discussed, with particular emphasis on the inertial measurement unit RM, indicative of the complexity of managing that three string system and its critical interfaces with the guidance and control systems.

  15. A robust H∞-tracking design for uncertain Takagi-Sugeno fuzzy systems with unknown premise variables using descriptor redundancy approach

    NASA Astrophysics Data System (ADS)

    Hassan Asemani, Mohammad; Johari Majd, Vahid

    2015-12-01

    This paper addresses a robust H∞ fuzzy observer-based tracking design problem for uncertain Takagi-Sugeno fuzzy systems with external disturbances. To have a practical observer-based controller, the premise variables of the system are assumed to be not measurable in general, which leads to a more complex design process. The tracker is synthesised based on a fuzzy Lyapunov function approach and non-parallel distributed compensation (non-PDC) scheme. Using the descriptor redundancy approach, the robust stability conditions are derived in the form of strict linear matrix inequalities (LMIs) even in the presence of uncertainties in the system, input, and output matrices simultaneously. Numerical simulations are provided to show the effectiveness of the proposed method.

  16. Land use allocation model considering climate change impact

    NASA Astrophysics Data System (ADS)

    Lee, D. K.; Yoon, E. J.; Song, Y. I.

    2017-12-01

    In Korea, climate change adaptation plans are being developed for each administrative district based on impact assessments constructed in various fields. These climate change impact assessments are superimposed on the actual space, which causes problems in land use allocation because the spatial distributions of individual impacts may differ from each other. This implies that trade-offs between climate change impacts can occur depending on the composition of land use. Moreover, the actual space is complexly intertwined with various factors such as required area, legal regulations, and socioeconomic values, so land use allocation in consideration of climate change can be a very difficult problem to solve (Liu et al. 2012; Porta et al. 2013). Optimization techniques can generate sufficiently good alternatives for land use allocation at the strategic level if the fitness function relating impact to land use composition can be derived. It has also been noted that a land use optimization model is more effective than a scenario-based prediction model in achieving the objectives of problem solving (Zhang et al. 2014). Therefore, in this study, we developed a quantitative tool, MOGA (Multi Objective Genetic Algorithm), which can generate comprehensive land use allocations considering various climate change impacts, and we apply it to Gangwon-do in Korea. Genetic algorithms (GAs) are the most popular optimization technique for addressing multiple objectives in land use allocation. They also allow for immediate feedback to stakeholders because a number of experiments with different parameter values can be run. It is expected that land use decision makers and planners can formulate a detailed spatial plan or perform additional analysis based on the results of the optimization model. Acknowledgments: This work was supported by the Korea Ministry of Environment (MOE) as "Climate Change Correspondence Program (Project number: 2014001310006)"

  17. The kidney allocation score: methodological problems, moral concerns and unintended consequences.

    PubMed

    Hippen, B

    2009-07-01

    The growing disparity between the demand for and supply of kidneys for transplantation has generated interest in alternative systems of allocating kidneys from deceased donors. This personal viewpoint focuses attention on the Kidney Allocation Score (KAS) proposal promulgated by the UNOS/OPTN Kidney Committee. I identify several methodological and moral flaws in the proposed system, concluding that any iteration of the KAS proposal should be met with more skepticism than sanguinity.

  18. Financial Resource Allocation in Higher Education

    ERIC Educational Resources Information Center

    Ušpuriene, Ana; Sakalauskas, Leonidas; Dumskis, Valerijonas

    2017-01-01

    The paper considers a problem of financial resource allocation in a higher education institution. The basic financial management instruments and the multi-stage cost minimization model created are described, with financial instruments incorporated into the constraints. Both societal and institutional factors that determine the costs of educating students are…

  19. Interactive Land-Use Optimization Using Laguerre Voronoi Diagram with Dynamic Generating Point Allocation

    NASA Astrophysics Data System (ADS)

    Chaidee, S.; Pakawanwong, P.; Suppakitpaisarn, V.; Teerasawat, P.

    2017-09-01

    In this work, we devise an efficient method for the land-use optimization problem based on the Laguerre Voronoi diagram. Previous Voronoi diagram-based methods are more efficient and more suitable for interactive design than discrete optimization-based methods but, in many cases, their outputs do not satisfy area constraints. To cope with this problem, we propose a force-directed graph drawing algorithm, which automatically allocates generating points of the Voronoi diagram to appropriate positions. Then, we construct a Laguerre Voronoi diagram based on these generating points, use linear programs to adjust each cell, and reconstruct the diagram based on the adjustment. We apply the proposed method to the practical case study of Chiang Mai University's allocated land for a mixed-use complex. For this case study, compared to another Voronoi diagram-based method, we decrease the land allocation error by 62.557%. Although our computation time is larger than that of the previous Voronoi diagram-based method, it is still suitable for interactive design.

  20. Statistical mechanics of competitive resource allocation using agent-based models

    NASA Astrophysics Data System (ADS)

    Chakraborti, Anirban; Challet, Damien; Chatterjee, Arnab; Marsili, Matteo; Zhang, Yi-Cheng; Chakrabarti, Bikas K.

    2015-01-01

    Demand outstrips available resources in most situations, which gives rise to competition, interaction and learning. In this article, we review a broad spectrum of multi-agent models of competition (El Farol Bar problem, Minority Game, Kolkata Paise Restaurant problem, Stable marriage problem, Parking space problem and others) and the methods used to understand them analytically. We emphasize the power of concepts and tools from statistical mechanics to understand and explain fully collective phenomena such as phase transitions and long memory, and the mapping between agent heterogeneity and physical disorder. As these methods can be applied to any large-scale model of competitive resource allocation made up of heterogeneous adaptive agents with non-linear interactions, they provide a prospective unifying paradigm for many scientific disciplines.

  1. Strategic planning for disaster recovery with stochastic last mile distribution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bent, Russell Whitford; Van Hentenryck, Pascal; Coffrin, Carleton

    2010-01-01

    This paper considers the single commodity allocation problem (SCAP) for disaster recovery, a fundamental problem faced by all populated areas. SCAPs are complex stochastic optimization problems that combine resource allocation, warehouse routing, and parallel fleet routing. Moreover, these problems must be solved under tight runtime constraints to be practical in real-world disaster situations. This paper formalizes the specification of SCAPs and introduces a novel multi-stage hybrid-optimization algorithm that utilizes the strengths of mixed integer programming, constraint programming, and large neighborhood search. The algorithm was validated on hurricane disaster scenarios generated by Los Alamos National Laboratory using state-of-the-art disaster simulation tools and is deployed to aid federal organizations in the US.

  2. Three-level global resource allocation model for hiv control: A hierarchical decision system approach.

    PubMed

    Kassa, Semu Mitiku

    2018-02-01

    Funds from various global organizations, such as The Global Fund, The World Bank, etc., are not directly distributed to the targeted risk groups. Especially in so-called third-world countries, the major part of the funding for HIV prevention programs comes from these global funding organizations. The allocations of these funds usually pass through several levels of decision making bodies that have their own specific parameters to control and specific objectives to achieve. However, these decisions are made mostly in a heuristic manner, and this may lead to a non-optimal allocation of the scarce resources. In this paper, a hierarchical mathematical optimization model is proposed to solve such a problem. Combining existing epidemiological models with the kinds of interventions in practice, a 3-level hierarchical decision making model for optimally allocating such resources has been developed and analyzed. When the impact of antiretroviral therapy (ART) is included in the model, it is shown that the objective function of the lower level decision making structure is a non-convex minimization problem in the allocation variables even if all the production functions for the intervention programs are assumed to be linear.

  3. Accurate and diverse recommendations via eliminating redundant correlations

    NASA Astrophysics Data System (ADS)

    Zhou, Tao; Su, Ri-Qi; Liu, Run-Ran; Jiang, Luo-Luo; Wang, Bing-Hong; Zhang, Yi-Cheng

    2009-12-01

    In this paper, based on a weighted projection of a bipartite user-object network, we introduce a personalized recommendation algorithm, called network-based inference (NBI), which has higher accuracy than the classical algorithm, namely collaborative filtering. In NBI, the correlation resulting from a specific attribute may be repeatedly counted in the cumulative recommendations from different objects. By considering the higher order correlations, we design an improved algorithm that can, to some extent, eliminate the redundant correlations. We test our algorithm on two benchmark data sets, MovieLens and Netflix. Compared with NBI, the algorithmic accuracy, measured by the ranking score, can be further improved by 23 per cent for MovieLens and 22 per cent for Netflix. The present algorithm can even outperform the Latent Dirichlet Allocation algorithm, which requires much longer computational time. Furthermore, most previous studies considered the algorithmic accuracy only; in this paper, we argue that the diversity and popularity, as two significant criteria of algorithmic performance, should also be taken into account. With more or less the same accuracy, an algorithm giving higher diversity and lower popularity is more favorable. Numerical results show that the present algorithm can outperform the standard one simultaneously in all five adopted metrics: lower ranking score and higher precision for accuracy, larger Hamming distance and lower intra-similarity for diversity, as well as smaller average degree for popularity.
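
    The two-step resource spreading behind network-based inference can be sketched as follows; the toy adjacency matrix is illustrative, and the higher-order redundancy-elimination correction introduced in the paper is not included.

        import numpy as np

        def nbi_scores(A):
            """Network-based inference on a user-object bipartite adjacency matrix A
            (users x objects, entries 0/1).  Resource spreads objects -> users -> objects;
            for each user, uncollected objects are ranked by descending score."""
            k_user = A.sum(axis=1, keepdims=True).astype(float)
            k_obj = A.sum(axis=0, keepdims=True).astype(float)
            k_user[k_user == 0] = 1.0
            k_obj[k_obj == 0] = 1.0
            W = (A / k_user).T @ (A / k_obj)     # object-to-object transfer matrix
            return A @ W.T

        A = np.array([[1, 1, 0, 0],
                      [1, 0, 1, 0],
                      [0, 1, 1, 1]])
        scores = nbi_scores(A)                   # rank the zero entries of each row of A by score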

  4. A Novel Joint Problem of Routing, Scheduling, and Variable-Width Channel Allocation in WMNs

    PubMed Central

    Liu, Wan-Yu; Chou, Chun-Hung

    2014-01-01

    This paper investigates a novel joint problem of routing, scheduling, and channel allocation for single-radio multichannel wireless mesh networks in which multiple channel widths can be adjusted dynamically through a new software technology so that more concurrent transmissions and suppressed overlapping channel interference can be achieved. Although previous works have studied this joint problem, their linear programming models for the problem did not incorporate some delicate constraints. As a result, this paper first constructs a linear programming model with more practical concerns and then proposes a simulated annealing approach with a novel encoding mechanism, in which the configurations of multiple time slots are devised to characterize the dynamic transmission process. Experimental results show that our approach can find the same or similar solutions as the optimal solutions for smaller-scale problems and can efficiently find good-quality solutions for a variety of larger-scale problems. PMID:24982990

  5. Approaches to Resource Allocation

    ERIC Educational Resources Information Center

    Dressel, Paul; Simon, Lou Anna Kimsey

    1976-01-01

    Various budgeting patterns and strategies are currently in use, each with its own particular strengths and weaknesses. Neither cost-benefit analysis nor cost-effectiveness analysis offers any better solution to the allocation problem than do the unsupported contentions of departments or the historical unit costs. An operable model that performs…

  6. The misnomer of attention-deficit hyperactivity disorder.

    PubMed

    Wasserman, Theodore; Wasserman, Lori Drucker

    2015-01-01

    We propose that attention-deficit disorder represents an inefficiency of an integrated system designed to allocate working memory to designated tasks rather than the absence or dysfunction of a particular form of attention. A significant portion of this inefficiency in the allocation of working memory represents poor engagement of the reward circuit with distinct circuits of learning and performance that control instrumental conditioning (learning). Efficient attention requires the interaction of these circuits. For a significant percentage of individuals who present with attention-deficit disorder, the problem lies in the engagement, or lack thereof, of the motivational and reward circuit, as opposed to problems or disorders of attention traditionally defined as difficulties with orienting, focusing, and sustaining. We demonstrate that there is an integrated system of working-memory allocation that responds by recruiting relevant aspects of both cortex and subcortex to the demands of the task being encountered. In this model, attention is viewed as a gating function determined by novelty, fight-or-flight response, and reward history/valence affecting motivation. We view the traditional models of attention not as describing specific types of attention per se, but as describing the behavioral output of this integrated orienting and engagement system designed to allocate working memory to task-specific stimuli.

  7. Optimal Computing Budget Allocation for Particle Swarm Optimization in Stochastic Optimization.

    PubMed

    Zhang, Si; Xu, Jie; Lee, Loo Hay; Chew, Ek Peng; Wong, Wai Peng; Chen, Chun-Hung

    2017-04-01

    Particle Swarm Optimization (PSO) is a popular metaheuristic for deterministic optimization. Originated in the interpretations of the movement of individuals in a bird flock or fish school, PSO introduces the concept of personal best and global best to simulate the pattern of searching for food by flocking and successfully translate the natural phenomena to the optimization of complex functions. Many real-life applications of PSO cope with stochastic problems. To solve a stochastic problem using PSO, a straightforward approach is to equally allocate computational effort among all particles and obtain the same number of samples of fitness values. This is not an efficient use of computational budget and leaves considerable room for improvement. This paper proposes a seamless integration of the concept of optimal computing budget allocation (OCBA) into PSO to improve the computational efficiency of PSO for stochastic optimization problems. We derive an asymptotically optimal allocation rule to intelligently determine the number of samples for all particles such that the PSO algorithm can efficiently select the personal best and global best when there is stochastic estimation noise in fitness values. We also propose an easy-to-implement sequential procedure. Numerical tests show that our new approach can obtain much better results using the same amount of computational effort.

  8. A generalized fuzzy credibility-constrained linear fractional programming approach for optimal irrigation water allocation under uncertainty

    NASA Astrophysics Data System (ADS)

    Zhang, Chenglong; Guo, Ping

    2017-10-01

    Vague and fuzzy parametric information is a challenging issue in irrigation water management problems. In response to this problem, a generalized fuzzy credibility-constrained linear fractional programming (GFCCFP) model is developed for optimal irrigation water allocation under uncertainty. The model is derived by integrating generalized fuzzy credibility-constrained programming (GFCCP) into a linear fractional programming (LFP) optimization framework. Therefore, it can solve ratio optimization problems associated with fuzzy parameters, and examine the variation of results under different credibility levels and weight coefficients of possibility and necessity. It has advantages in: (1) balancing the economic and resources objectives directly; (2) analyzing system efficiency; (3) generating more flexible decision solutions by giving different credibility levels and weight coefficients of possibility and necessity; and (4) supporting in-depth analysis of the interrelationships among system efficiency, credibility level and weight coefficient. The model is applied to a case study of irrigation water allocation in the middle reaches of the Heihe River Basin, northwest China, and optimal irrigation water allocation solutions from the GFCCFP model are obtained. Moreover, factorial analysis of the two parameters (i.e. λ and γ) indicates that the weight coefficient is the main factor for system efficiency compared with the credibility level. These results can support reasonable irrigation water resources management and agricultural production.

  9. Optimal Computing Budget Allocation for Particle Swarm Optimization in Stochastic Optimization

    PubMed Central

    Zhang, Si; Xu, Jie; Lee, Loo Hay; Chew, Ek Peng; Chen, Chun-Hung

    2017-01-01

    Particle Swarm Optimization (PSO) is a popular metaheuristic for deterministic optimization. Originated in the interpretations of the movement of individuals in a bird flock or fish school, PSO introduces the concept of personal best and global best to simulate the pattern of searching for food by flocking and successfully translate the natural phenomena to the optimization of complex functions. Many real-life applications of PSO cope with stochastic problems. To solve a stochastic problem using PSO, a straightforward approach is to equally allocate computational effort among all particles and obtain the same number of samples of fitness values. This is not an efficient use of computational budget and leaves considerable room for improvement. This paper proposes a seamless integration of the concept of optimal computing budget allocation (OCBA) into PSO to improve the computational efficiency of PSO for stochastic optimization problems. We derive an asymptotically optimal allocation rule to intelligently determine the number of samples for all particles such that the PSO algorithm can efficiently select the personal best and global best when there is stochastic estimation noise in fitness values. We also propose an easy-to-implement sequential procedure. Numerical tests show that our new approach can obtain much better results using the same amount of computational effort. PMID:29170617

  10. Simulation of Constrained Musculoskeletal Systems in Task Space.

    PubMed

    Stanev, Dimitar; Moustakas, Konstantinos

    2018-02-01

    This paper proposes an operational task space formalization of constrained musculoskeletal systems, motivated by its promising results in the field of robotics. The change of representation requires different algorithms for solving the inverse and forward dynamics simulation in the task space domain. We propose an extension to the direct marker control and an adaptation of the computed muscle control algorithms for solving the inverse kinematics and muscle redundancy problems, respectively. Experimental evaluation demonstrates that this framework is not only successful in dealing with the inverse dynamics problem, but also provides an intuitive way of studying and designing simulations, facilitating assessment prior to any experimental data collection. The incorporation of constraints in the derivation unveils an important extension of this framework toward addressing systems that use absolute coordinates and topologies that contain closed kinematic chains. Task space projection reveals a more intuitive encoding of the motion planning problem, allows for better correspondence between observed and estimated variables, provides the means to effectively study the role of kinematic redundancy, and most importantly, offers an abstract point of view and control, which can be advantageous toward further integration with high level models of the precommand level. Task-based approaches could be adopted in the design of simulation related to the study of constrained musculoskeletal systems.

  11. Cognitive Load Theory and the Acquisition of Complex Cognitive Skills in the Elderly: Towards an Integrative Framework.

    ERIC Educational Resources Information Center

    Van Gerven, Pascal W. M.; Paas, Fred G. W. C.; Van Merrienboer, Jeroen J. G.; Schmidt, Henk G.

    2000-01-01

    Cognitive load (CL) theory suggests minimizing extraneous CL and maximizing germane CL in order not to overload working memory. Instructional design for older adults should therefore include goal-free problems, worked examples, and different modalities and avoid splitting attention and including redundant information. (SK)

  12. On the optimal use of fictitious time in variation of parameters methods with application to BG14

    NASA Technical Reports Server (NTRS)

    Gottlieb, Robert G.

    1991-01-01

    The optimal way to use fictitious time in variation of parameter methods is presented. Setting fictitious time to zero at the end of each step is shown to cure the instability associated with some types of problems. Only some parameters are reinitialized, thereby retaining redundant information.

  13. Dynamic task allocation for a man-machine symbiotic system

    NASA Technical Reports Server (NTRS)

    Parker, L. E.; Pin, F. G.

    1987-01-01

    This report presents a methodological approach to the dynamic allocation of tasks in a man-machine symbiotic system in the context of dexterous manipulation and teleoperation. This report addresses a symbiotic system containing two symbiotic partners which work toward controlling a single manipulator arm for the execution of a series of sequential manipulation tasks. It is proposed that an automated task allocator use knowledge about the constraints/criteria of the problem, the available resources, the tasks to be performed, and the environment to dynamically allocate task recommendations for the man and the machine. The presentation of the methodology includes discussions concerning the interaction of the knowledge areas, the flow of control, the necessary communication links, and the replanning of the task allocation. Examples of task allocation are presented to illustrate the results of this methodology.

  14. Optimal resource allocation for defense of targets based on differing measures of attractiveness.

    PubMed

    Bier, Vicki M; Haphuriwat, Naraphorn; Menoyo, Jaime; Zimmerman, Rae; Culpen, Alison M

    2008-06-01

    This article describes the results of applying a rigorous computational model to the problem of the optimal defensive resource allocation among potential terrorist targets. In particular, our study explores how the optimal budget allocation depends on the cost effectiveness of security investments, the defender's valuations of the various targets, and the extent of the defender's uncertainty about the attacker's target valuations. We use expected property damage, expected fatalities, and two metrics of critical infrastructure (airports and bridges) as our measures of target attractiveness. Our results show that the cost effectiveness of security investment has a large impact on the optimal budget allocation. Also, different measures of target attractiveness yield different optimal budget allocations, emphasizing the importance of developing more realistic terrorist objective functions for use in budget allocation decisions for homeland security.

  15. Optimum Allocation of Water to the Cultivation Farms Using Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Saeidian, B.; Saadi Mesgari, M.; Ghodousi, M.

    2015-12-01

    The water scarcity crisis in the world, and specifically in Iran, requires the proper management of this valuable resource. According to the official reports, around 90 percent of the water in Iran is used for agriculture. Therefore, the adequate management and usage of water in this sector can help significantly to overcome the above crisis. The most important aspect of agricultural water management is related to irrigation planning, which is basically an allocation problem. The proper allocation of water to the farms is not a simple and trivial problem, because of the limited amount of available water, the effect of different parameters, the nonlinear characteristics of the objective function, and the wideness of the solution space. Usually, to solve such complex problems, a meta-heuristic method such as a genetic algorithm could be a good candidate. In this paper, a Genetic Algorithm (GA) is used for the allocation of different amounts of water to a number of farms. In this model, the amount of water transferable through the level-one canals in one period of irrigation is specified. In addition, the amount of water required by each farm is calculated using crop type, stage of crop development, and other parameters. Using these, the water production function of each farm is determined. Then, using the water production function, farm areas, and the revenue and cost of each crop type, the objective function is calculated. This objective function is used by the GA for the allocation of water to the farms. The objective function is defined such that the economic profit extracted from all farms is maximized. Moreover, the limitation related to the amount of available water is considered as a constraint. In general, the total amount of allocated water should be less than the finally available water (the water transferred through the level-one canals). Because of the intensive scarcity of water, the deficit irrigation method is considered. In this method, the planning is on the basis of the optimum and limited allocation of water, and not on the basis of each crop's water requirement. According to the available literature, in conditions of water scarcity, the implementation of a deficit irrigation strategy results in higher economic income. The main difference between this research and others is the allocation of water to the farms, whereas most similar studies concentrate on the allocation of water to different water consumption sectors (such as agriculture, industry, etc.), networks and crops. Using the GA for the optimization of the water allocation, proper solutions were generated that maximize the total economic income in the entire study area. In addition, although the search space was considerably wide, the results of the implementation showed an adequate convergence speed. The repeatability test of the algorithm also proved that the algorithm is reasonably stable. In general, the GA can be considered an efficient and trustworthy method for such irrigation planning problems. By optimum allocation of the water to the farms with different areas and crop types, and considering the deficit irrigation method, the general income of the entire area can be improved substantially.
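
    To make the shape of such a formulation concrete, here is a minimal GA sketch in the same spirit: a fixed water volume is split across a handful of farms, income follows an assumed concave (square-root) water-production function, and the supply limit is enforced with a penalty term. The farm count, production coefficients and GA settings are all hypothetical, not the study's data or its exact operators.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    N_FARMS, W_TOTAL = 6, 100.0                      # hypothetical farms and available water (units)
    a = rng.uniform(5, 15, N_FARMS)                  # assumed diminishing-returns production coefficients
    area = rng.uniform(1, 4, N_FARMS)                # hypothetical farm areas (ha)

    def profit(alloc):
        """Total income from a concave (sqrt) water-production function,
        heavily penalised when the allocation exceeds the available water."""
        income = np.sum(area * a * np.sqrt(alloc))
        penalty = 1e3 * max(0.0, alloc.sum() - W_TOTAL)
        return income - penalty

    def ga(pop_size=60, gens=200, mut=0.2):
        pop = rng.uniform(0, W_TOTAL / N_FARMS, (pop_size, N_FARMS))
        for _ in range(gens):
            fit = np.array([profit(ind) for ind in pop])
            # tournament selection
            idx = rng.integers(0, pop_size, (pop_size, 2))
            parents = pop[np.where(fit[idx[:, 0]] > fit[idx[:, 1]], idx[:, 0], idx[:, 1])]
            # uniform crossover with the neighbouring parent
            mask = rng.random((pop_size, N_FARMS)) < 0.5
            children = np.where(mask, parents, np.roll(parents, 1, axis=0))
            # Gaussian mutation, clipped to keep allocations non-negative
            jitter = rng.normal(0, 2.0, children.shape)
            children = np.clip(np.where(rng.random(children.shape) < mut,
                                        children + jitter, children), 0, None)
            children[0] = pop[np.argmax(fit)]        # elitism: keep the best individual
            pop = children
        best = pop[np.argmax([profit(ind) for ind in pop])]
        return best, profit(best)

    alloc, val = ga()
    print(np.round(alloc, 1), round(val, 1))
    ```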

  16. Joint sparse reconstruction of multi-contrast MRI images with graph based redundant wavelet transform.

    PubMed

    Lai, Zongying; Zhang, Xinlin; Guo, Di; Du, Xiaofeng; Yang, Yonggui; Guo, Gang; Chen, Zhong; Qu, Xiaobo

    2018-05-03

    Multi-contrast images in magnetic resonance imaging (MRI) provide abundant contrast information reflecting the characteristics of the internal tissues of human bodies, and thus have been widely utilized in clinical diagnosis. However, long acquisition time limits the application of multi-contrast MRI. One efficient way to accelerate data acquisition is to under-sample the k-space data and then reconstruct images with a sparsity constraint. However, image quality is compromised at high acceleration factors if the images are reconstructed individually. We aim to improve the images with a jointly sparse reconstruction and a graph-based redundant wavelet transform (GBRWT). First, a sparsifying transform, GBRWT, is trained to reflect the similarity of tissue structures in multi-contrast images. Second, joint multi-contrast image reconstruction is formulated as an ℓ2,1-norm optimization problem under GBRWT representations. Third, the optimization problem is numerically solved using a derived alternating direction method. Experimental results on synthetic and in vivo MRI data demonstrate that the proposed joint reconstruction method can achieve lower reconstruction errors and better preserve image structures than the compared joint reconstruction methods. Besides, the proposed method outperforms single image reconstruction with a joint sparsity constraint of multi-contrast images. The proposed method explores the joint sparsity of multi-contrast MRI images under the graph-based redundant wavelet transform and realizes joint sparse reconstruction of multi-contrast images. Experiments demonstrate that the proposed method outperforms the compared joint reconstruction methods as well as individual reconstructions. With this high quality image reconstruction method, it is possible to achieve high acceleration factors by exploring the complementary information provided by multi-contrast MRI.
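
    To illustrate what the ℓ2,1 coupling does, the sketch below implements the row-wise group soft-thresholding operator (the proximal map of the ℓ2,1 norm) and uses it inside a plain proximal-gradient loop on a toy joint-recovery problem with two "contrasts" sharing one sparse support. The random sensing matrix, sizes and regularization weight are invented for illustration; the paper's GBRWT sparsifying transform and its alternating-direction solver are not reproduced here.

    ```python
    import numpy as np

    def prox_l21(X, tau):
        """Row-wise group soft-thresholding: the proximal operator of tau*||X||_{2,1}.
        Each row (one coefficient across all contrasts) is shrunk jointly, which is
        what couples the multi-contrast images in the reconstruction."""
        norms = np.linalg.norm(X, axis=1, keepdims=True)
        return np.maximum(1 - tau / np.maximum(norms, 1e-12), 0.0) * X

    # toy joint recovery: two "contrasts" sharing the same sparse support (hypothetical sizes)
    rng = np.random.default_rng(2)
    n, m, k = 200, 80, 2
    A = rng.normal(size=(m, n)) / np.sqrt(m)       # shared under-sampling operator (stand-in)
    X_true = np.zeros((n, k))
    support = rng.choice(n, 8, replace=False)
    X_true[support] = rng.normal(size=(8, k))
    Y = A @ X_true + 0.01 * rng.normal(size=(m, k))

    X = np.zeros((n, k))
    L = np.linalg.norm(A, 2) ** 2                  # Lipschitz constant of the data-fit gradient
    LAM = 0.1                                      # regularization weight (hypothetical)
    for _ in range(300):                           # plain proximal-gradient (ISTA-style) loop
        X = prox_l21(X - (A.T @ (A @ X - Y)) / L, LAM / L)

    print("recovered support:", np.sort(np.flatnonzero(np.linalg.norm(X, axis=1) > 0.05)))
    print("true support     :", np.sort(support))
    ```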

  17. Moving force identification based on redundant concatenated dictionary and weighted l1-norm regularization

    NASA Astrophysics Data System (ADS)

    Pan, Chu-Dong; Yu, Ling; Liu, Huan-Lin; Chen, Ze-Peng; Luo, Wen-Feng

    2018-01-01

    Moving force identification (MFI) is an important inverse problem in the field of bridge structural health monitoring (SHM). Reasonable signal structures of moving forces are rarely considered in the existing MFI methods. Interaction forces are complex because they contain both slowly-varying harmonic and impact signals due to bridge vibration and bumps on a bridge deck, respectively. Therefore, the interaction forces are usually hard to express completely and sparsely using a single basis function set. Based on a redundant concatenated dictionary and a weighted l1-norm regularization method, a hybrid method is proposed for MFI in this study. The redundant dictionary consists of both trigonometric functions and rectangular functions used for matching the harmonic and impact signal features of unknown moving forces. The weighted l1-norm regularization method is introduced for the formulation of the MFI equation, so that the signal features of moving forces can be accurately extracted. The fast iterative shrinkage-thresholding algorithm (FISTA) is used for solving the MFI problem. The optimal regularization parameter is appropriately chosen by the Bayesian information criterion (BIC) method. In order to assess the accuracy and the feasibility of the proposed method, a simply-supported beam bridge subjected to a moving force is taken as an example for numerical simulations. Finally, a series of experimental studies on MFI of a steel beam are performed in the laboratory. Both numerical and experimental results show that the proposed method can accurately identify the moving forces with strong robustness, and it has a better performance than the Tikhonov regularization method. Some related issues are discussed as well.
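
    The core numerical ingredients, a concatenated (redundant) dictionary and a FISTA solver for a weighted l1-regularized least-squares problem, can be sketched as follows. The dictionary here is simply a few slowly-varying cosine atoms stacked next to an identity block for impulses, with uniform weights and made-up sizes; it is only meant to show the structure of the computation, not the paper's bridge model, rectangular-function atoms or BIC-tuned regularization.

    ```python
    import numpy as np

    def fista(A, y, w, n_iter=300):
        """FISTA for min_x 0.5*||y - A x||^2 + ||w * x||_1 (weighted soft-thresholding)."""
        L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
        x = np.zeros(A.shape[1]); z = x.copy(); t = 1.0
        for _ in range(n_iter):
            grad = A.T @ (A @ z - y)
            u = z - grad / L
            x_new = np.sign(u) * np.maximum(np.abs(u) - w / L, 0.0)   # prox of weighted l1
            t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
            z = x_new + ((t - 1) / t_new) * (x_new - x)               # momentum step
            x, t = x_new, t_new
        return x

    # hypothetical redundant dictionary: smooth cosines (harmonic part) + identity (impulsive part)
    n = 128
    tgrid = np.arange(n)
    cosines = np.cos(np.outer(tgrid, np.arange(1, 9)) * np.pi / n)    # 8 slowly-varying atoms
    A = np.hstack([cosines, np.eye(n)])
    truth = np.zeros(A.shape[1]); truth[2] = 1.0; truth[8 + 40] = 0.8  # one harmonic + one impulse
    y = A @ truth + 0.01 * np.random.default_rng(1).normal(size=n)
    w = np.full(A.shape[1], 0.05)                                      # uniform weights for the sketch
    print(np.flatnonzero(np.abs(fista(A, y, w)) > 0.1))
    ```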

  18. An intelligent allocation algorithm for parallel processing

    NASA Technical Reports Server (NTRS)

    Carroll, Chester C.; Homaifar, Abdollah; Ananthram, Kishan G.

    1988-01-01

    The problem of allocating nodes of a program graph to processors in a parallel processing architecture is considered. The algorithm is based on critical path analysis, some allocation heuristics, and the execution granularity of nodes in a program graph. These factors, and the structure of the interprocessor communication network, influence the allocation. To achieve realistic estimates of the execution durations of allocations, the algorithm considers the fact that nodes in a program graph have to communicate through varying numbers of tokens. Coarse and fine granularities have been implemented, with interprocessor token-communication durations varying from zero up to values comparable to the execution durations of individual nodes. The effect of communication network structures on allocation is demonstrated by performing allocations for crossbar (non-blocking) and star (blocking) networks. The algorithm assumes the availability of as many processors as it needs for the optimal allocation of any program graph. Hence, the focus of allocation has been on varying token-communication durations rather than varying the number of processors. The algorithm always utilizes as many processors as necessary for the optimal allocation of any program graph, depending upon granularity and characteristics of the interprocessor communication network.
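
    A stripped-down version of this style of allocation, list scheduling driven by critical-path (bottom-level) priorities with a fixed token-communication delay between different processors, is sketched below. The five-node program graph, durations, communication cost and two-processor setup are invented for illustration, and the heuristics are far simpler than the algorithm described above.

    ```python
    import heapq
    from functools import lru_cache

    # hypothetical program graph: node -> (execution time, successors)
    graph = {"a": (2, ["c", "d"]), "b": (3, ["d"]), "c": (2, ["e"]),
             "d": (4, ["e"]), "e": (1, [])}
    COMM = 1          # token-communication delay when producer and consumer differ
    N_PROC = 2

    @lru_cache(maxsize=None)
    def bottom_level(node):
        """Critical-path length from this node downward -- the scheduling priority."""
        dur, succ = graph[node]
        return dur + max((bottom_level(s) for s in succ), default=0)

    preds = {n: [] for n in graph}
    for n, (_, succ) in graph.items():
        for s in succ:
            preds[s].append(n)

    ready = [(-bottom_level(n), n) for n in graph if not preds[n]]
    heapq.heapify(ready)
    proc_free = [0] * N_PROC
    finish, placed = {}, {}

    def earliest_start(node, p):
        # wait for the processor and for every predecessor's token to arrive
        return max([proc_free[p]] + [finish[q] + (0 if placed[q] == p else COMM)
                                     for q in preds[node]])

    while ready:
        _, node = heapq.heappop(ready)
        p = min(range(N_PROC), key=lambda q: earliest_start(node, q))
        start = earliest_start(node, p)
        finish[node], placed[node] = start + graph[node][0], p
        proc_free[p] = finish[node]
        for s in graph[node][1]:
            if all(q in finish for q in preds[s]):       # all tokens produced -> ready
                heapq.heappush(ready, (-bottom_level(s), s))

    print("placement:", placed, " makespan:", max(finish.values()))
    ```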

  19. General form of a cooperative gradual maximal covering location problem

    NASA Astrophysics Data System (ADS)

    Bagherinejad, Jafar; Bashiri, Mahdi; Nikzad, Hamideh

    2018-07-01

    Cooperative and gradual covering are two new methods for developing covering location models. In this paper, a cooperative maximal covering location-allocation model is developed (CMCLAP). In addition, both cooperative and gradual covering concepts are applied to the maximal covering location simultaneously (CGMCLP). Then, we develop an integrated form of a cooperative gradual maximal covering location problem, which is called a general CGMCLP. By setting the model parameters, the proposed general model can easily be transformed into other existing models, facilitating general comparisons. The proposed models are developed without allocation for physical signals and with allocation for non-physical signals in discrete location space. Comparison of the previously introduced gradual maximal covering location problem (GMCLP) and cooperative maximal covering location problem (CMCLP) models with our proposed CGMCLP model in similar data sets shows that the proposed model can cover more demands and acts more efficiently. Sensitivity analyses are performed to show the effect of related parameters and the model's validity. Simulated annealing (SA) and a tabu search (TS) are proposed as solution algorithms for the developed models for large-sized instances. The results show that the proposed algorithms are efficient solution approaches, considering solution quality and running time.

  20. Information flows in hierarchical networks and the capability of organizations to successfully respond to failures, crises, and disasters

    NASA Astrophysics Data System (ADS)

    Helbing, Dirk; Ammoser, Hendrik; Kühnert, Christian

    2006-04-01

    In this paper we discuss the problem of information losses in organizations and how they depend on the organization network structure. Hierarchical networks are an optimal organization structure only when the failure rate of nodes or links is negligible. Otherwise, redundant information links are important to reduce the risk of information losses and the related costs. However, as redundant information links are expensive, the optimal organization structure is not a fully connected one. It rather depends on the failure rate. We suggest that sidelinks and temporary, adaptive shortcuts can improve the information flows considerably by generating small-world effects. This calls for modified organization structures to cope with today's challenges of businesses and administrations, in particular, to successfully respond to crises or disasters.

  1. Applying Cost-Sensitive Extreme Learning Machine and Dissimilarity Integration to Gene Expression Data Classification.

    PubMed

    Liu, Yanqiu; Lu, Huijuan; Yan, Ke; Xia, Haixia; An, Chunlin

    2016-01-01

    Embedding cost-sensitive factors into the classifiers increases the classification stability and reduces the classification costs for classifying high-scale, redundant, and imbalanced datasets, such as the gene expression data. In this study, we extend our previous work, that is, Dissimilar ELM (D-ELM), by introducing misclassification costs into the classifier. We name the proposed algorithm as the cost-sensitive D-ELM (CS-D-ELM). Furthermore, we embed rejection cost into the CS-D-ELM to increase the classification stability of the proposed algorithm. Experimental results show that the rejection cost embedded CS-D-ELM algorithm effectively reduces the average and overall cost of the classification process, while the classification accuracy still remains competitive. The proposed method can be extended to classification problems of other redundant and imbalanced data.

  2. On-Line Allocation Of Robot Resources To Task Plans

    NASA Astrophysics Data System (ADS)

    Lyons, Damian M.

    1989-02-01

    In this paper, I present an approach to representing plans that make on-line decisions about resource allocation. An on-line decision is the evaluation of a conditional expression involving sensory information as the plan is being executed. I use a plan representation called RS that has been especially designed for the domain of robot programming, and in particular, for the problem of on-line decisions. The resource allocation example is based on the robot assembly cell architecture outlined by Venkataraman and Lyons. I begin by setting forth a definition of on-line decision making and some arguments as to why this form of decision making is important and useful. To set the context for the resource allocation example, I take some care in categorizing the types of on-line decision making and the approaches adopted by other workers so far. In particular, I justify a plan-based approach to the study of on-line decision making. From that, the focus shifts to one type of decision making: on-line allocation of robot resources to task plans. Robot resources are the physical manipulators (grippers, wrists, arms, feeders, etc.) that are available to carry out the task. I formulate the assembly cell architecture of Venkataraman and Lyons as an RS plan schema, and show how the on-line allocation specified in that architecture can be implemented. Finally, I show how considering the on-line allocation of logical resources, that is, a physical resource plus some model information, can be used as a non-traditional approach to some problems in robot task planning.

  3. Alternative mathematical programming formulations for FSS synthesis

    NASA Technical Reports Server (NTRS)

    Reilly, C. H.; Mount-Campbell, C. A.; Gonsalvez, D. J. A.; Levis, C. A.

    1986-01-01

    A variety of mathematical programming models and two solution strategies are suggested for the problem of allocating orbital positions to (synthesizing) satellites in the Fixed Satellite Service. Mixed integer programming and almost linear programming formulations are presented in detail for each of two objectives: (1) positioning satellites as closely as possible to specified desired locations, and (2) minimizing the total length of the geostationary arc allocated to the satellites whose positions are to be determined. Computational results for mixed integer and almost linear programming models, with the objective of positioning satellites as closely as possible to their desired locations, are reported for three six-administration test problems and a thirteen-administration test problem.

  4. A centre-free approach for resource allocation with lower bounds

    NASA Astrophysics Data System (ADS)

    Obando, Germán; Quijano, Nicanor; Rakoto-Ravalontsalama, Naly

    2017-09-01

    Since the complexity and scale of systems are continuously increasing, there is a growing interest in developing distributed algorithms that are capable of addressing information constraints, especially for solving optimisation and decision-making problems. In this paper, we propose a novel method to solve distributed resource allocation problems that include lower bound constraints. The optimisation process is carried out by a set of agents that use a communication network to coordinate their decisions. Convergence and optimality of the method are guaranteed under some mild assumptions related to the convexity of the problem and the connectivity of the underlying graph. Finally, we compare our approach with other techniques reported in the literature, and we present some engineering applications.
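
    A minimal sketch of the centre-free flavour of such methods: agents on a ring exchange resource with neighbours in proportion to the differences in their local marginal costs, which conserves the total resource and drives the marginal costs to agreement. The quadratic costs, ring graph and step size are hypothetical, and the paper's handling of lower-bound constraints is not reproduced here.

    ```python
    import numpy as np

    # Quadratic local costs f_i(x) = 0.5*a_i*(x - c_i)^2; minimise the sum subject to sum(x) = R.
    a = np.array([1.0, 2.0, 0.5, 1.5, 1.0])
    c = np.array([2.0, 1.0, 4.0, 2.5, 3.0])
    R = 10.0
    n = len(a)

    x = np.full(n, R / n)                      # feasible start: equal split, sum(x) = R
    neighbours = [((i - 1) % n, (i + 1) % n) for i in range(n)]   # ring communication graph
    STEP = 0.1

    for _ in range(500):
        grad = a * (x - c)                     # local marginal costs
        new_x = x.copy()
        for i, (l, r) in enumerate(neighbours):
            # exchange resource with neighbours in proportion to marginal-cost differences;
            # every pairwise transfer is antisymmetric, so sum(x) = R is preserved exactly
            new_x[i] += STEP * ((grad[l] - grad[i]) + (grad[r] - grad[i]))
        x = new_x

    print(np.round(x, 3), "sum =", round(x.sum(), 3),
          "marginal costs:", np.round(a * (x - c), 3))
    ```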

  5. Social problem-solving plus psychoeducation for adults with personality disorder: pragmatic randomised controlled trial.

    PubMed

    Huband, Nick; McMurran, Mary; Evans, Chris; Duggan, Conor

    2007-04-01

    Social problem-solving therapy may be relevant in the treatment of personality disorder, although assessments of its effectiveness are uncommon. To determine the effectiveness of a problem-solving intervention for adults with personality disorder in the community under conditions resembling routine clinical practice. Participants were randomly allocated to brief psychoeducation plus 16 problem-solving group sessions (n=87) or to waiting-list control (n=89). Primary outcome was comparison of scores on the Social Problem Solving Inventory and the Social Functioning Questionnaire between intervention and control arms at the conclusion of treatment, on average at 24 weeks after randomisation. In intention-to-treat analysis, those allocated to intervention showed significantly better problem-solving skills (P<0.001), higher overall social functioning (P=0.031) and lower anger expression (P=0.039) compared with controls. No significant differences were found on use of services during the intervention period. Problem-solving plus psychoeducation has potential as a preliminary intervention for adults with personality disorder.

  6. Problems, Perplexities, and Politics of Program Evaluation.

    ERIC Educational Resources Information Center

    Schneider, Gail Thierbach

    All educational evaluations share common problems, perplexities, and political considerations. Logistic problems include incomplete definition of purpose, unclear timelines and personnel allocations, inadequate support services, and the lack of a program plan. Interpersonal dynamics and conflicts plus unwieldy committee structures are perplexities…

  7. Playing Games with Optimal Competitive Scheduling

    NASA Technical Reports Server (NTRS)

    Frank, Jeremy; Crawford, James; Khatib, Lina; Brafman, Ronen

    2005-01-01

    This paper is concerned with the problem of allocating a unit capacity resource to multiple users within a pre-defined time period. The resource is indivisible, so that at most one user can use it at each time instance. However, different users may use it at different times. The users have independent, selfish preferences for when and for how long they are allocated this resource. Thus, they value different resource access durations differently, and they value different time slots differently. We seek an optimal allocation schedule for this resource.

  8. Solving the optimal attention allocation problem in manual control

    NASA Technical Reports Server (NTRS)

    Kleinman, D. L.

    1976-01-01

    Within the context of the optimal control model of human response, analytic expressions for the gradients of closed-loop performance metrics with respect to human operator attention allocation are derived. These derivatives serve as the basis for a gradient algorithm that determines the optimal attention that a human should allocate among several display indicators in a steady-state manual control task. The human modeling techniques are applied to study the hover control task for a CH-46 VTOL flight-tested by NASA.
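
    The flavour of such a gradient search can be sketched with a toy surrogate: attention fractions across three displays must sum to one, the cost charged to each display is assumed to grow as its noise level divided by the attention it receives, and a projected (numerical) gradient step keeps the iterate on the simplex. The surrogate cost and constants are invented; the optimal-control-model performance metric and its analytic gradients are not reproduced.

    ```python
    import numpy as np

    def cost(f, noise=np.array([1.0, 0.6, 1.4])):
        """Hypothetical surrogate: the penalty for each display scales as its
        observation-noise level divided by the attention fraction it receives."""
        return np.sum(noise / np.maximum(f, 1e-6))

    def project_simplex(v):
        """Euclidean projection onto {f >= 0, sum f = 1} (sorting-based algorithm)."""
        u = np.sort(v)[::-1]
        css = np.cumsum(u)
        rho = np.nonzero(u + (1 - css) / (np.arange(len(v)) + 1) > 0)[0][-1]
        theta = (1 - css[rho]) / (rho + 1)
        return np.maximum(v + theta, 0)

    f = np.full(3, 1 / 3)                                  # start with equal attention
    for _ in range(500):
        g = np.array([(cost(f + 1e-5 * e) - cost(f)) / 1e-5 for e in np.eye(3)])  # numerical gradient
        f = project_simplex(f - 0.01 * g)                  # projected gradient step
    print(np.round(f, 3), round(cost(f), 3))
    ```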

  9. Annual Review of Progress in Applied Computational Electromagnetics (6th), Held in Monterey, California on March 19-22, 1990

    DTIC Science & Technology

    1990-01-01

    the six fields will have two million cell locations. The table below shows the total allocation of 392 chips across fields and banks. To allow for...future growth, we allocate 16 wires for addressing both the rows and columns. [table of 4-Mbit chip allocation: locations, bytes and bits (millions), chips per...] sources apt to appear in most problems. If material parameters change during a run, then time must be allocated to read these constants into their

  10. Mars Sample Return mission utilizing in-situ propellant production

    NASA Technical Reports Server (NTRS)

    Zubrin, Robert; Price, Steve

    1995-01-01

    This report presents the results of a study examining the potential of in-situ propellant production (ISPP) on Mars to aid in achieving a low cost Mars Sample Return (MSR) mission. Two versions of such a mission were examined: a baseline version employing a dual string spacecraft, and a light weight version employing single string architecture with selective redundancy. Both systems employed light weight avionics currently being developed by Lockheed Martin, Jet Propulsion Lab and elsewhere in the aerospace community, both used a new concept for a simple, light weight parachuteless sample return capsule, both used a slightly modified version of the Mars Surveyor lander currently under development at Lockheed Martin for flight in 1998, and both used a combination of the Sabatier-electrolysis and reverse water gas shift ISPP systems to produce methane/oxygen propellant on Mars by combining a small quantity of imported hydrogen with the Martian CO2 atmosphere. It was found that the baseline mission could be launched on a Delta 7925 and return a 0.5 kg sample with 82 percent mission launch margin, over and above subsystem-allocated contingency masses. The lightweight version could be launched on a Mid-Lite vehicle and return a 0.25 kg sample with 11 percent launch margin, over and above subsystem contingency mass allocations.

  11. Mobility and Position Error Analysis of a Complex Planar Mechanism with Redundant Constraints

    NASA Astrophysics Data System (ADS)

    Sun, Qipeng; Li, Gangyan

    2018-03-01

    Nowadays, mechanisms with redundant constraints have been created and have attracted much attention for their merits. The role of redundant constraints in a mechanical system is analyzed in this paper. An analysis method for planar linkages with a repetitive structure is proposed to obtain the number and type of constraints. According to the differences in applications and constraint characteristics, the redundant constraints are divided into theoretical planar redundant constraints and space-planar redundant constraints. A calculation formula for the number of redundant constraints and a method for judging their type are derived. A complex mechanism with redundant constraints is then analyzed to assess the influence of redundant constraints on mechanical performance. Combining theoretical derivation and simulation, an analysis method is put forward for the position error of complex mechanisms with redundant constraints. This points the way toward eliminating or reducing the influence of redundant constraints.

  12. Optimality versus stability in water resource allocation.

    PubMed

    Read, Laura; Madani, Kaveh; Inanloo, Bahareh

    2014-01-15

    Water allocation is a growing concern in a developing world where limited resources like fresh water are in greater demand by more parties. Negotiations over allocations often involve multiple groups with disparate social, economic, and political status and needs, who are seeking a management solution for a wide range of demands. Optimization techniques for identifying the Pareto-optimal (social planner solution) to multi-criteria multi-participant problems are commonly implemented, although often reaching agreement for this solution is difficult. In negotiations with multiple decision makers, parties who base decisions on individual rationality may find the social planner solution to be unfair, thus creating a need to evaluate the willingness to cooperate and practicality of a cooperative allocation solution, i.e., the solution's stability. This paper suggests seeking solutions for multi-participant resource allocation problems through an economics-based power index allocation method. This method can inform on allocation schemes that quantify a party's willingness to participate in a negotiation rather than opt for no agreement. Through comparison of the suggested method with a range of distance-based multi-criteria decision making rules, namely, least squares, MAXIMIN, MINIMAX, and compromise programming, this paper shows that optimality and stability can produce different allocation solutions. The mismatch between the socially-optimal alternative and the most stable alternative can potentially result in parties leaving the negotiation as they may be too dissatisfied with their resource share. This finding has important policy implications as it justifies why stakeholders may not accept the socially optimal solution in practice, and underscores the necessity of considering stability where it may be more appropriate to give up an unstable Pareto-optimal solution for an inferior stable one. The authors suggest assessing the stability of an allocation solution as an additional component to an analysis that seeks to distribute water in a negotiated process. Copyright © 2013 Elsevier Ltd. All rights reserved.
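
    For a flavour of how the distance-based rules mentioned above can disagree, the snippet below scores four hypothetical allocation schemes by each party's normalized dissatisfaction and applies least squares, MINIMAX and an L_p compromise-programming criterion. The numbers are invented, and the power-index method itself is not implemented here.

    ```python
    import numpy as np

    # hypothetical: rows = candidate allocation schemes, cols = each party's normalised
    # dissatisfaction with that scheme (0 = ideal, 1 = worst acceptable)
    D = np.array([[0.10, 0.45, 0.60],
                  [0.30, 0.30, 0.30],
                  [0.05, 0.20, 0.80],
                  [0.25, 0.35, 0.25]])

    least_squares = np.argmin((D ** 2).sum(axis=1))            # minimise summed squared distance to ideal
    minimax       = np.argmin(D.max(axis=1))                   # minimise the worst-off party's dissatisfaction
    p = 3
    compromise    = np.argmin((D ** p).sum(axis=1) ** (1 / p)) # compromise programming, L_p metric

    print("least squares picks scheme", least_squares)
    print("MINIMAX picks scheme      ", minimax)
    print("compromise (p=3) picks    ", compromise)
    ```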

  13. From ideals to deals-The effect of impartiality experience on stakeholder behavior.

    PubMed

    Halko, Marja-Liisa; Miettinen, Topi

    2017-01-01

    In this paper, we study a two-party pie-sharing problem in the presence of asymmetries in the stakeholders' private endowments. Both the two stakeholders and third-party arbitrators may influence the outcome. We consider Nash-demand negotiations, where the two stakeholders place demands and share the pie accordingly if demands are compatible, and elicit dictatorial allocations from the stakeholders and the arbitrators. The Nash demands by stakeholders are strategic; the dictatorial allocations by stakeholders and arbitrators are non-strategic. We are interested in the influence of the past arbitrator experience on stakeholder allocations and demands and the past stakeholder experience on third-party arbitration allocations. We find that the ex-arbitrators' stakeholder allocations differ more from the impartial ideal than the stakeholder allocations by those without arbitration experience. In contrast with previous findings, the arbitration outcomes do not depend on the asymmetries in the previous stakeholder roles.

  14. From ideals to deals—The effect of impartiality experience on stakeholder behavior

    PubMed Central

    2017-01-01

    In this paper, we study a two-party pie-sharing problem in the presence of asymmetries in the stakeholders' private endowments. Both the two stakeholders and third-party arbitrators may influence the outcome. We consider Nash-demand negotiations, where the two stakeholders place demands and share the pie accordingly if demands are compatible, and elicit dictatorial allocations from the stakeholders and the arbitrators. The Nash demands by stakeholders are strategic; the dictatorial allocations by stakeholders and arbitrators are non-strategic. We are interested in the influence of the past arbitrator experience on stakeholder allocations and demands and the past stakeholder experience on third-party arbitration allocations. We find that the ex-arbitrators' stakeholder allocations differ more from the impartial ideal than the stakeholder allocations by those without arbitration experience. In contrast with previous findings, the arbitration outcomes do not depend on the asymmetries in the previous stakeholder roles. PMID:28786988

  15. Robust Inversion and Data Compression in Control Allocation

    NASA Technical Reports Server (NTRS)

    Hodel, A. Scottedward

    2000-01-01

    We present an off-line computational method for control allocation design. The control allocation function δ = F(ẑ)τ + δ₀(ẑ), mapping commanded body-frame torques to actuator commands, is implicitly specified by the trim condition δ₀(ẑ) and by a robust pseudo-inverse problem ‖I − G(z)F(ẑ)‖ < ε(z), where G(z) is a system Jacobian evaluated at operating point z, ẑ is an estimate of z, and ε(z) < 1 is a specified error tolerance. The allocation function F(ẑ) = Σ_i ψ_i(ẑ) F_i is computed using a heuristic technique for selecting wavelet basis functions ψ_i and a constrained least-squares criterion for selecting the allocation matrices F_i. The method is applied to entry trajectory control allocation for a reusable launch vehicle (X-33).
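
    A bare-bones version of the allocation computation, ignoring the wavelet parameterization and simply forming a minimum-norm right pseudo-inverse of one hypothetical 3×6 Jacobian so that the inversion residual ‖I − GF‖ is (numerically) zero, looks like this. The Jacobian, trim vector and commanded torque are made up, and the X-33 specifics are not modelled.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    G = rng.normal(size=(3, 6))            # hypothetical Jacobian: 3 torques, 6 effectors

    # Minimum-norm right pseudo-inverse: F = G^T (G G^T)^{-1}, so G F = I exactly here
    F = G.T @ np.linalg.inv(G @ G.T)
    print(np.linalg.norm(np.eye(3) - G @ F, 2))      # robust-inversion residual, ~0 for this toy case

    # Allocation at a commanded torque: actuator command delta = F tau + delta_0
    tau_cmd = np.array([0.2, -0.1, 0.05])
    delta_0 = np.zeros(6)                            # trim commands (hypothetical)
    delta = F @ tau_cmd + delta_0
    print(np.round(delta, 3), np.round(G @ delta, 3))  # achieved torque matches the command
    ```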

  16. Optimization, Monotonicity and the Determination of Nash Equilibria — An Algorithmic Analysis

    NASA Astrophysics Data System (ADS)

    Lozovanu, D.; Pickl, S. W.; Weber, G.-W.

    2004-08-01

    This paper is concerned with the optimization of a nonlinear time-discrete model exploiting the special structure of the underlying cost game and the property of inverse matrices. The costs are interlinked by a system of linear inequalities. It is shown that, if the players cooperate, i.e., minimize the sum of all the costs, they achieve a Nash equilibrium. In order to determine Nash equilibria, the simplex method can be applied with respect to the dual problem. An introduction into the TEM model and its relationship to an economic Joint Implementation program is given. The equivalence problem is presented. The construction of the emission cost game and the allocation problem is explained. The assumption of inverse monotony for the matrices leads to a new result in the area of such allocation problems. A generalization of such problems is presented.

  17. Optimized maritime emergency resource allocation under dynamic demand.

    PubMed

    Zhang, Wenfen; Yan, Xinping; Yang, Jiaqi

    2017-01-01

    Emergency resources are important for people evacuation and property rescue when an accident occurs. Relief efforts can be promoted by a reasonable emergency resource allocation schedule prepared in advance. As the marine environment is complicated and changeable, the place, type and severity of a maritime accident are uncertain and stochastic, bringing about dynamic demand for emergency resources. Considering dynamic demand, making a reasonable emergency resource allocation schedule is challenging. The key problem is to determine the optimal stock of emergency resources for supplier centers so as to improve relief efforts. This paper studies dynamic demand, which is defined as a set. Then a maritime emergency resource allocation model with uncertain data is presented. Afterwards, a robust approach is developed and used to make sure that the resource allocation schedule performs well under dynamic demand. Finally, a case study shows that the proposed methodology is feasible for maritime emergency resource allocation. The findings could help emergency managers to schedule emergency resource allocation more flexibly in terms of dynamic demand.

  18. Optimized maritime emergency resource allocation under dynamic demand

    PubMed Central

    Yan, Xinping; Yang, Jiaqi

    2017-01-01

    Emergency resources are important for people evacuation and property rescue when an accident occurs. Relief efforts can be promoted by a reasonable emergency resource allocation schedule prepared in advance. As the marine environment is complicated and changeable, the place, type and severity of a maritime accident are uncertain and stochastic, bringing about dynamic demand for emergency resources. Considering dynamic demand, making a reasonable emergency resource allocation schedule is challenging. The key problem is to determine the optimal stock of emergency resources for supplier centers so as to improve relief efforts. This paper studies dynamic demand, which is defined as a set. Then a maritime emergency resource allocation model with uncertain data is presented. Afterwards, a robust approach is developed and used to make sure that the resource allocation schedule performs well under dynamic demand. Finally, a case study shows that the proposed methodology is feasible for maritime emergency resource allocation. The findings could help emergency managers to schedule emergency resource allocation more flexibly in terms of dynamic demand. PMID:29240792

  19. Titrating versus targeting home care services to frail elderly clients: an application of agency theory and cost-benefit analysis to home care policy.

    PubMed

    Weissert, William; Chernew, Michael; Hirth, Richard

    2003-02-01

    The article summarizes the shortcomings of current home care targeting policy, provides a conceptual framework for understanding the sources of its problems, and proposes an alternative resource allocation method. Methods required for different aspects of the study included synthesis of the published literature, regression analysis of risk predictors, and comparison of actual resource allocations with simulated budgets. Problems of imperfect agency ranging from unclear goals and inappropriate incentives to lack of information about the marginal effectiveness of home care could be mitigated with an improved budgeting method that combines client selection and resource allocation. No program can produce its best outcome performance when its goals are unclear and its technology is unstandardized. Titration of care would reallocate resources to maximize marginal benefit for marginal cost.

  20. A QoS Aware Resource Allocation Strategy for 3D A/V Streaming in OFDMA Based Wireless Systems

    PubMed Central

    Chung, Young-uk; Choi, Yong-Hoon; Park, Suwon; Lee, Hyukjoon

    2014-01-01

    Three-dimensional (3D) video is expected to be a “killer app” for OFDMA-based broadband wireless systems. The main limitation of 3D video streaming over a wireless system is the shortage of radio resources due to the large size of the 3D traffic. This paper presents a novel resource allocation strategy to address this problem. In the paper, the video-plus-depth 3D traffic type is considered. The proposed resource allocation strategy focuses on the relationship between 2D video and the depth map, handling them with different priorities. It is formulated as an optimization problem and is solved using a suboptimal heuristic algorithm. Numerical results show that the proposed scheme provides a better quality of service compared to conventional schemes. PMID:25250377

  1. Irrigation water allocation optimization using multi-objective evolutionary algorithm (MOEA) - a review

    NASA Astrophysics Data System (ADS)

    Fanuel, Ibrahim Mwita; Mushi, Allen; Kajunguri, Damian

    2018-03-01

    This paper analyzes more than 40 papers with a restricted area of application of the Multi-Objective Genetic Algorithm, Non-Dominated Sorting Genetic Algorithm-II and Multi-Objective Differential Evolution (MODE) to solve multi-objective problems in agricultural water management. The paper focuses on different application aspects, which include water allocation, irrigation planning, cropping pattern and allocation of available land. The performance and results of these techniques are discussed. The review finds that there is potential to use MODE to analyze the multi-objective problem; its application is all the more significant given its advantage of being a simpler and more powerful technique than other Evolutionary Algorithms. The paper concludes with a hopeful new trend of research that demands effective use of MODE: the inclusion of benefits derived from farm byproducts and production costs into the model.

  2. A Multi-layer Dynamic Model for Coordination Based Group Decision Making in Water Resource Allocation and Scheduling

    NASA Astrophysics Data System (ADS)

    Huang, Wei; Zhang, Xingnan; Li, Chenming; Wang, Jianying

    Management of group decision-making is an important issue in water resource management development. In order to overcome the lack of effective communication and cooperation in existing decision-making models, this paper proposes a multi-layer dynamic model for coordination in group decision making for water resource allocation and scheduling. By introducing a scheme-recognized cooperative satisfaction index and a scheme-adjusted rationality index, the proposed model can solve the problem of poor convergence of the multi-round decision-making process in water resource allocation and scheduling. Furthermore, the problem of coordinating the limited-resources-based group decision-making process can be solved based on the effectiveness of distance-based group conflict resolution. The simulation results show that the proposed model has better convergence than the existing models.

  3. Comparison of lossless compression techniques for prepress color images

    NASA Astrophysics Data System (ADS)

    Van Assche, Steven; Denecker, Koen N.; Philips, Wilfried R.; Lemahieu, Ignace L.

    1998-12-01

    In the pre-press industry color images have both a high spatial and a high color resolution. Such images require a considerable amount of storage space and impose long transmission times. Data compression is desired to reduce these storage and transmission problems. Because of the high quality requirements in the pre-press industry only lossless compression is acceptable. Most existing lossless compression schemes operate on gray-scale images. In this case the color components of color images must be compressed independently. However, higher compression ratios can be achieved by exploiting inter-color redundancies. In this paper we present a comparison of three state-of-the-art lossless compression techniques which exploit such color redundancies: IEP (Inter- color Error Prediction) and a KLT-based technique, which are both linear color decorrelation techniques, and Interframe CALIC, which uses a non-linear approach to color decorrelation. It is shown that these techniques are able to exploit color redundancies and that color decorrelation can be done effectively and efficiently. The linear color decorrelators provide a considerable coding gain (about 2 bpp) on some typical prepress images. The non-linear interframe CALIC predictor does not yield better results, but the full interframe CALIC technique does.

  4. Modeling and simulation of dynamic ant colony's labor division for task allocation of UAV swarm

    NASA Astrophysics Data System (ADS)

    Wu, Husheng; Li, Hao; Xiao, Renbin; Liu, Jie

    2018-02-01

    The problem of unmanned aerial vehicle (UAV) task allocation not only has the intrinsic attributes of complexity, such as being highly nonlinear, dynamic, highly adversarial and multi-modal, but also has broad practicability in various multi-agent systems, which has made it increasingly attractive in recent years. In this paper, based on the classic fixed response threshold model (FRTM), under the idea of "problem centered + evolutionary solution" and in a bottom-up way, new dynamic environmental stimulus, response threshold and transition probability are designed, and a dynamic ant colony's labor division (DACLD) model is proposed. DACLD allows a swarm of agents with a relatively low level of intelligence to perform complex tasks, and has the characteristics of a distributed framework, multiple tasks with execution order, multi-state, adaptive response threshold and multi-individual response. With the proposed model, numerical simulations are performed to illustrate the effectiveness of the distributed task allocation scheme in two situations of UAV swarm combat (dynamic task allocation with a certain number of enemy targets and task re-allocation due to unexpected threats). Results show that our model can obtain the heterogeneous UAVs' real-time positions and states simultaneously, and has a high degree of self-organization, flexibility and real-time response to dynamic environments.
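
    The fixed response threshold mechanism that the model builds on can be sketched in a few lines: idle agents engage a task with probability s^n / (s^n + θ^n), busy agents occasionally quit, and each task's stimulus grows with unmet demand and falls with the number of agents working on it. The thresholds, rates and swarm size below are arbitrary, and the paper's dynamic stimulus, task ordering and multi-state extensions are not reproduced.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    N_AGENTS, N_TASKS, STEPS = 20, 3, 50
    theta = rng.uniform(0.2, 0.8, (N_AGENTS, N_TASKS))   # per-agent response thresholds
    stimulus = np.full(N_TASKS, 0.5)
    DELTA, ALPHA, N_EXP = 0.15, 0.6, 2                    # stimulus growth, work efficiency, steepness

    engaged = np.full(N_AGENTS, -1)                       # -1 means idle
    for _ in range(STEPS):
        # idle agents engage task j with probability s_j^n / (s_j^n + theta_ij^n)
        for i in np.flatnonzero(engaged == -1):
            j = rng.integers(N_TASKS)                     # each idle agent samples one task demand
            p = stimulus[j] ** N_EXP / (stimulus[j] ** N_EXP + theta[i, j] ** N_EXP)
            if rng.random() < p:
                engaged[i] = j
        # busy agents may quit with a small fixed probability
        engaged[(engaged >= 0) & (rng.random(N_AGENTS) < 0.1)] = -1
        # stimulus grows with unmet demand and is reduced by the agents working on the task
        workers = np.array([(engaged == j).sum() for j in range(N_TASKS)])
        stimulus = np.clip(stimulus + DELTA - ALPHA * workers / N_AGENTS, 0, None)

    print("workers per task:", [(engaged == j).sum() for j in range(N_TASKS)])
    ```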

  5. Optimal allocation of resources for suppressing epidemic spreading on networks

    NASA Astrophysics Data System (ADS)

    Chen, Hanshuang; Li, Guofeng; Zhang, Haifeng; Hou, Zhonghuai

    2017-07-01

    Efficient allocation of limited medical resources is crucial for controlling epidemic spreading on networks. Based on the susceptible-infected-susceptible model, we solve the optimization problem of how best to allocate the limited resources so as to minimize prevalence, provided that the curing rate of each node is positively correlated with its medical resource. By quenched mean-field theory and heterogeneous mean-field (HMF) theory, we prove that an epidemic outbreak will be suppressed to the greatest extent if the curing rate of each node is directly proportional to its degree, under which the effective infection rate λ has a maximal threshold λ_c^opt = 1/⟨k⟩, where ⟨k⟩ is the average degree of the underlying network. For a weak infection region (λ ≳ λ_c^opt), we combine perturbation theory with the Lagrange multiplier method (LMM) to derive the analytical expression of the optimal allocation of the curing rates and the corresponding minimized prevalence. For a general infection region (λ > λ_c^opt), the high-dimensional optimization problem is converted into numerically solving low-dimensional nonlinear equations by the HMF theory and LMM. Counterintuitively, in the strong infection region the low-degree nodes should be allocated more medical resources than the high-degree nodes to minimize prevalence. Finally, we use simulated annealing to validate the theoretical results.
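
    The degree-proportional rule and the 1/⟨k⟩ threshold can be checked numerically with a small quenched mean-field sketch: for a given curing-rate vector μ, an outbreak occurs once the largest eigenvalue of βA − diag(μ) turns positive, so the threshold β is found by bisection and compared for a uniform versus a degree-proportional split of the same curing budget. The Erdős–Rényi graph, budget and sizes are arbitrary test values.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    N, P = 200, 0.05
    A = (rng.random((N, N)) < P).astype(float)
    A = np.triu(A, 1); A = A + A.T                      # Erdos-Renyi adjacency matrix
    k = A.sum(axis=1)                                   # node degrees
    BUDGET = N * 1.0                                    # total curing resource

    def threshold(mu):
        """Largest beta with Lambda_max(beta*A - diag(mu)) <= 0 (quenched mean-field outbreak condition)."""
        lo, hi = 0.0, 1.0
        for _ in range(40):                             # bisection on the spectral condition
            mid = (lo + hi) / 2
            lam = np.linalg.eigvalsh(mid * A - np.diag(mu)).max()
            lo, hi = (mid, hi) if lam <= 0 else (lo, mid)
        return lo

    mu_uniform = np.full(N, BUDGET / N)
    mu_degree = BUDGET * k / k.sum()                    # curing rate proportional to degree
    print("uniform allocation threshold  :", round(threshold(mu_uniform), 4))
    print("degree-prop allocation thresh :", round(threshold(mu_degree), 4))
    print("HMF prediction 1/<k>          :", round(1 / k.mean(), 4))
    ```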

  6. Superframe Duration Allocation Schemes to Improve the Throughput of Cluster-Tree Wireless Sensor Networks

    PubMed Central

    Leão, Erico; Montez, Carlos; Moraes, Ricardo; Portugal, Paulo; Vasques, Francisco

    2017-01-01

    The use of Wireless Sensor Network (WSN) technologies is an attractive option to support wide-scale monitoring applications, such as the ones that can be found in precision agriculture, environmental monitoring and industrial automation. The IEEE 802.15.4/ZigBee cluster-tree topology is a suitable topology to build wide-scale WSNs. Despite some of its known advantages, including timing synchronisation and duty-cycle operation, cluster-tree networks may suffer from severe network congestion problems due to the convergecast pattern of its communication traffic. Therefore, the careful adjustment of transmission opportunities (superframe durations) allocated to the cluster-heads is an important research issue. This paper proposes a set of proportional Superframe Duration Allocation (SDA) schemes, based on well-defined protocol and timing models, and on the message load imposed by child nodes (Load-SDA scheme), or by number of descendant nodes (Nodes-SDA scheme) of each cluster-head. The underlying reasoning is to adequately allocate transmission opportunities (superframe durations) and parametrize buffer sizes, in order to improve the network throughput and avoid typical problems, such as: network congestion, high end-to-end communication delays and discarded messages due to buffer overflows. Simulation assessments show how proposed allocation schemes may clearly improve the operation of wide-scale cluster-tree networks. PMID:28134822
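
    A toy version of the proportional idea (in the spirit of the Load-SDA scheme) is sketched below: each cluster-head gets a share of the parent's beacon interval proportional to its message load, and the share is then rounded down to the nearest IEEE 802.15.4 superframe order. The loads, beacon order and cluster names are hypothetical, and the scheme's buffer dimensioning is omitted.

    ```python
    import math

    BASE = 15.36          # aBaseSuperframeDuration: 960 symbols ~= 15.36 ms on the 2.4 GHz PHY
    BEACON_INTERVAL = BASE * 2 ** 8                         # parent beacon interval, BO = 8 (hypothetical)
    loads = {"CH1": 120, "CH2": 60, "CH3": 30, "CH4": 30}   # messages per interval (hypothetical)

    total = sum(loads.values())
    plan = {}
    for ch, load in loads.items():
        share = BEACON_INTERVAL * load / total              # proportional share of the beacon interval
        # largest superframe order SO whose duration BASE*2^SO still fits the share
        so = max(0, int(math.floor(math.log2(share / BASE) + 1e-9)))
        plan[ch] = (so, BASE * 2 ** so)

    used = sum(sd for _, sd in plan.values())
    print(plan, f"used {used:.1f} of {BEACON_INTERVAL:.1f} ms")
    ```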

  7. Allocation of Playing Time within Team Sports--A Problem for Discussion

    ERIC Educational Resources Information Center

    Lorentzen, Torbjørn

    2017-01-01

    The background of the article is the recurrent discussion about allocation of playing time in team sports involving children and young athletes. The objective is to analyse "why" playing time is a topic for discussion among parents, coaches and athletes. The following question is addressed: Under which condition is it "fair" to…

  8. Research versus Advocacy in the Allocation of Resources: Problems, Causes, Solutions.

    ERIC Educational Resources Information Center

    Menolascino, Frank J.; Stark, Jack A.

    1990-01-01

    This commentary on EC 231 901 discusses whether resource allocations and service policies for mentally retarded individuals should be based upon purported findings of scientific theory or the purported needs of service systems. The paper calls for improved research utilization and understanding of what makes a social movement work. (JDD)

  9. The Cost Structure of Higher Education: Implications for Governmental Policy in Steady State.

    ERIC Educational Resources Information Center

    Lyell, Edward H.

    The historical pattern of resource allocation in American higher education as exemplified by public colleges in Colorado was examined. The reliance upon average cost information in making resource allocation decisions was critiqued for the special problems that arise from student enrollment decline or steady state. A model of resource allocation…

  10. Distributive Decisions in Education: Goals, Trade-Offs, and Feasibility Constraints

    ERIC Educational Resources Information Center

    Shores, Kenneth; Loeb, Susanna

    2016-01-01

    Educators, policymakers, and citizens face questions of how to allocate scarce resources in the pursuit of competing goals for children and youth. Our goal in this article is to provide decision-makers with a framework for considering allocative problems in education, explicitly highlighting the implications of relevant feasibility constraints. We…

  11. Implementation and Incentive of Education Policies: Experience from the High School Admissions Quota Allocation Policy

    ERIC Educational Resources Information Center

    Shiyue, Wang

    2017-01-01

    Improving administrative efficiency is the core problem in administrative governance. This case study of quota allocation policy implementation in City A reveals that a set of education policy implementation and incentive mechanisms revolving around responsibility contracts and target evaluations has already taken shape, to guarantee effective…

  12. Carbon allocation and morphology of cherrybark oak seedlings and sprouts under three light regimes

    Treesearch

    Brian Roy Lockhart; Emile S. Gardiner; John D. Hodges; Andrew W. Ezell

    2008-01-01

    Continued problems in regenerating oak forests have led to a need for more basic information on oak seedling biology. In the present study, carbon allocation and morphology were compared between cherrybark oak (Quercus pagoda Raf.) seedlings and sprouts at I-Lag grown in full, 47%, and 20% sunlight....

  13. Evaluation of input output efficiency of oil field considering undesirable output —A case study of sandstone reservoir in Xinjiang oilfield

    NASA Astrophysics Data System (ADS)

    Zhang, Shuying; Wu, Xuquan; Li, Deshan; Xu, Yadong; Song, Shulin

    2017-06-01

    Based on the input and output data of the sandstone reservoir in the Xinjiang oilfield, the SBM-Undesirable model is used to study the technical efficiency of each block. Results show that the SBM-Undesirable model evaluates efficiency while avoiding the defects caused by the radial and angular assumptions of the traditional DEA model, improving the accuracy of the efficiency evaluation. By analyzing the projection of the oil blocks, we find that each block suffers the negative external effects of input redundancy, output deficiency and undesirable output, and that there are large differences in production efficiency among the blocks. The way to improve the input-output efficiency of the oilfield is to optimize the allocation of resources, reduce the undesirable output and increase the expected output.

  14. Guidance, Navigation, and Control System Design in a Mass Reduction Exercise

    NASA Technical Reports Server (NTRS)

    Crain, Timothy; Begly, Michael; Jackson, Mark; Broome, Joel

    2008-01-01

    Early Orion GN&C system designs optimized for robustness, simplicity, and utilization of commercially available components. During the System Definition Review (SDR), all subsystems on Orion were asked to re-optimize with component mass and steady state power as primary design metrics. The objective was to create a mass reserve in the Orion point of departure vehicle design prior to beginning the PDR analysis cycle. The Orion GN&C subsystem team transitioned from a philosophy of absolute 2 fault tolerance for crew safety and 1 fault tolerance for mission success to an approach of 1 fault tolerance for crew safety and risk based redundancy to meet probability allocations of loss of mission and loss of crew. This paper will discuss the analyses, rationale, and end results of this activity regarding Orion navigation sensor hardware, control effectors, and trajectory design.

  15. CLON: Overlay Networks and Gossip Protocols for Cloud Environments

    NASA Astrophysics Data System (ADS)

    Matos, Miguel; Sousa, António; Pereira, José; Oliveira, Rui; Deliot, Eric; Murray, Paul

    Although epidemic or gossip-based multicast is a robust and scalable approach to reliable data dissemination, its inherent redundancy results in high resource consumption on both links and nodes. This problem is aggravated in settings that have costlier or resource constrained links as happens in Cloud Computing infrastructures composed by several interconnected data centers across the globe.

  16. Cosmic ray astroparticle physics: current status and future perspectives

    NASA Astrophysics Data System (ADS)

    Donato, Fiorenza

    2017-02-01

    The data we are receiving from galactic cosmic rays are reaching an unprecedented precision, over very wide energy ranges. Nevertheless, many problems are still open, while new ones seem to appear when data happen to be redundant. We will discuss some paths to possible progress in the theoretical modeling and experimental exploration of the galactic cosmic radiation.

  17. Single event upset (SEU) testing at JPL

    NASA Technical Reports Server (NTRS)

    Coss, James R.

    1987-01-01

    It is believed that the increase in SEUs with more modern devices may have serious consequences for future space missions. The physics behind an SEU is discussed as well as SEU test philosophy and equipment, and testing results. It is concluded that the problem may be ameliorated by careful device selection and the use of redundancy or error correction.

  18. Optimal water resource allocation modelling in the Lowveld of Zimbabwe

    NASA Astrophysics Data System (ADS)

    Mhiribidi, Delight; Nobert, Joel; Gumindoga, Webster; Rwasoka, Donald T.

    2018-05-01

    The management and allocation of water from multi-reservoir systems is complex and thus requires dynamic modelling systems to achieve optimality. A multi-reservoir system in the Southern Lowveld of Zimbabwe is used for irrigation of sugarcane estates that produce sugar for both local and export consumption. The system is burdened with water allocation problems, made worse by the decommissioning of dams. Thus the aim of this research was to develop an operating policy model for the Lowveld multi-reservoir system. The Mann-Kendall trend and Wilcoxon signed-rank tests were used to assess the variability of historic monthly rainfall and dam inflows for the period 1899-2015. The WEAP model was set up to evaluate the water allocation system of the catchment and come up with a reference scenario for the 2015/2016 hydrologic year. A Stochastic Dynamic Programming approach was used for the optimisation of the multi-reservoir releases. Results showed no significant trend in the rainfall but a significantly decreasing trend in inflows (p < 0.05). The water allocation model (WEAP) showed significant deficits (~40 %) in irrigation water allocation in the reference scenario. The optimal rule curves for all twelve months for each reservoir were obtained and are considered a proper guideline for solving multi-reservoir management problems within the catchment. The rule curves are effective tools in guiding decision makers in the release of water without emptying the reservoirs while at the same time satisfying the demands, based on the inflow, initial storage and end-of-month storage.

  19. Learners misperceive the benefits of redundant text in multimedia learning.

    PubMed

    Fenesi, Barbara; Kim, Joseph A

    2014-01-01

    Research on metacognition has consistently demonstrated that learners fail to endorse instructional designs that produce benefits to memory, and often prefer designs that actually impair comprehension. Unlike previous studies in which learners were only exposed to a single multimedia design, the current study used a within-subjects approach to examine whether exposure to both redundant text and non-redundant text multimedia presentations improved learners' metacognitive judgments about presentation styles that promote better understanding. A redundant text multimedia presentation containing narration paired with verbatim on-screen text (Redundant) was contrasted with two non-redundant text multimedia presentations: (1) narration paired with images and minimal text (Complementary) or (2) narration paired with minimal text (Sparse). Learners watched presentation pairs of either Redundant + Complementary, or Redundant + Sparse. Results demonstrate that Complementary and Sparse presentations produced highest overall performance on the final comprehension assessment, but the Redundant presentation produced highest perceived understanding and engagement ratings. These findings suggest that learners misperceive the benefits of redundant text, even after direct exposure to a non-redundant, effective presentation.

  20. Nanoscale molecular communication networks: a game-theoretic perspective

    NASA Astrophysics Data System (ADS)

    Jiang, Chunxiao; Chen, Yan; Ray Liu, K. J.

    2015-12-01

    Currently, communication between nanomachines is an important topic for the development of novel devices. To implement a nanocommunication system, diffusion-based molecular communication is considered a promising bio-inspired approach. Various technical issues about molecular communications, including channel capacity, noise and interference, and modulation and coding, have been studied in the literature, while the resource allocation problem among multiple nanomachines has not been well investigated; this is an important issue since all the nanomachines share the same propagation medium. Considering the limited computation capability of nanomachines and the expensive information exchange cost among them, in this paper we propose a game-theoretic framework for distributed resource allocation in nanoscale molecular communication systems. We first analyze the inter-symbol and inter-user interference, as well as bit error rate performance, in the molecular communication system. Based on the interference analysis, we formulate the resource allocation problem as a non-cooperative molecule emission control game, where the Nash equilibrium is found and proved to be unique. In order to improve the system efficiency while guaranteeing fairness, we further model the resource allocation problem using a cooperative game based on the Nash bargaining solution, which is proved to be proportionally fair. Simulation results show that the Nash bargaining solution can effectively ensure fairness among multiple nanomachines while achieving comparable social welfare performance with the centralized scheme.

  1. Can self-care health books affect amount of contact with the primary health care team? A randomized controlled trial in general practice.

    PubMed

    Platts, Amanda; Mitton, Rosly; Boniface, David; Friedli, Karin

    2005-09-01

    To investigate the effects of two differently styled self-care health books in general practice on the frequency and duration of patients' consultations and their views of the books. Random allocation of patients to either a descriptive or a decision-tree based self-care health book, or a no-book control condition. Three- and 12-month follow-up by postal questionnaire and monitoring of consultations. A large general practice in the South East of England. A total of 1967 volunteer, adult patients who attended the practice in 2001 participated. Demographics; health problems; use of health services; use and perceptions of the trial book; frequency and duration of consultations. Response rates to postal questionnaires at 3 and 12 months were 80% and 74%. In all, 48% consulted their allocated book, compared with 25% who consulted any healthcare book in the control group. Those reporting health problems were more likely to have consulted their allocated book; 60% reported that the allocated book made them more likely to deal with a problem themselves and 40% reported themselves less likely to consult the practice. However, there were no differences in consultation rates or durations of consultations between the three groups. Handing out self-care health books may provide qualitative benefits for patients but is unlikely to reduce attendance at the GP practice.

  2. Routing design and fleet allocation optimization of freeway service patrol: Improved results using genetic algorithm

    NASA Astrophysics Data System (ADS)

    Sun, Xiuqiao; Wang, Jian

    2018-07-01

    Freeway service patrol (FSP) is considered to be an effective method for incident management and can help transportation agency decision-makers alter existing route coverage and fleet allocation. This paper investigates the FSP problem of patrol routing design and fleet allocation, with the objective of minimizing the overall average incident response time. While the simulated annealing (SA) algorithm and its improvements have been applied to solve this problem, they often become trapped in local optima. Moreover, the issue of search efficiency remains to be further addressed. In this paper, we employ the genetic algorithm (GA) and SA to solve the FSP problem. To maintain population diversity and avoid premature convergence, a niche strategy is incorporated into the traditional genetic algorithm. We also employ an elitist strategy to speed up convergence. Numerical experiments have been conducted with the help of the Sioux Falls network. Results show that the GA slightly outperforms the dual-based greedy (DBG) algorithm, the very large-scale neighborhood searching (VLNS) algorithm, the SA algorithm and the scenario algorithm.
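
    A compact illustration of the elitist genetic-algorithm machinery described above, applied to a toy fleet-allocation encoding. The route set, incident rates, surrogate fitness, and GA parameters are invented, and the paper's niche strategy and network model are not reproduced here.

        import random

        # Toy GA with elitism: allocate a fleet of patrol vehicles among routes.
        N_ROUTES, FLEET = 6, 10
        incident_rate = [4.0, 2.5, 6.0, 1.0, 3.5, 5.0]   # assumed incidents/hour per route

        def fitness(alloc):
            # Surrogate for average response time: more vehicles on a route -> faster.
            return -sum(r / (v + 1) for r, v in zip(incident_rate, alloc))

        def random_alloc():
            alloc = [0] * N_ROUTES
            for _ in range(FLEET):
                alloc[random.randrange(N_ROUTES)] += 1
            return alloc

        def crossover(a, b):
            cut = random.randrange(1, N_ROUTES)
            child = a[:cut] + b[cut:]
            # Repair so the child still uses exactly FLEET vehicles.
            while sum(child) > FLEET:
                i = random.choice([i for i, v in enumerate(child) if v > 0])
                child[i] -= 1
            while sum(child) < FLEET:
                child[random.randrange(N_ROUTES)] += 1
            return child

        def mutate(alloc, rate=0.2):
            if random.random() < rate:
                i = random.choice([i for i, v in enumerate(alloc) if v > 0])
                alloc[i] -= 1
                alloc[random.randrange(N_ROUTES)] += 1
            return alloc

        pop = [random_alloc() for _ in range(40)]
        for _ in range(200):
            pop.sort(key=fitness, reverse=True)
            elite = pop[:4]                            # elitist strategy
            children = [mutate(crossover(*random.sample(pop[:20], 2)))
                        for _ in range(len(pop) - len(elite))]
            pop = elite + children

        print(max(pop, key=fitness))                   # best allocation found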

  3. Marginal Contribution-Based Distributed Subchannel Allocation in Small Cell Networks.

    PubMed

    Shah, Shashi; Kittipiyakul, Somsak; Lim, Yuto; Tan, Yasuo

    2018-05-10

    The paper presents a game theoretic solution for distributed subchannel allocation problem in small cell networks (SCNs) analyzed under the physical interference model. The objective is to find a distributed solution that maximizes the welfare of the SCNs, defined as the total system capacity. Although the problem can be addressed through best-response (BR) dynamics, the existence of a steady-state solution, i.e., a pure strategy Nash equilibrium (NE), cannot be guaranteed. Potential games (PGs) ensure convergence to a pure strategy NE when players rationally play according to some specified learning rules. However, such a performance guarantee comes at the expense of complete knowledge of the SCNs. To overcome such requirements, properties of PGs are exploited for scalable implementations, where we utilize the concept of marginal contribution (MC) as a tool to design learning rules of players’ utility and propose the marginal contribution-based best-response (MCBR) algorithm of low computational complexity for the distributed subchannel allocation problem. Finally, we validate and evaluate the proposed scheme through simulations for various performance metrics.
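
    For intuition, the sketch below runs plain best-response dynamics for distributed subchannel selection with an SINR-like proxy utility; as the abstract notes, such dynamics need not converge in general, so the loop is capped. The gains, noise level, and utility function are invented and do not implement the paper's marginal-contribution design.

        import random

        random.seed(1)
        N_CELLS, N_CHANNELS = 5, 3
        # cross_gain[i][j]: channel gain from cell j's transmitter towards cell i's user
        cross_gain = [[1.0 if i == j else random.uniform(0.05, 0.3)
                       for j in range(N_CELLS)] for i in range(N_CELLS)]
        noise = 0.1

        def utility(i, choice):
            """SINR-like proxy for cell i given everyone's subchannel choice."""
            interference = sum(cross_gain[i][j] for j in range(N_CELLS)
                               if j != i and choice[j] == choice[i])
            return cross_gain[i][i] / (noise + interference)

        choice = [random.randrange(N_CHANNELS) for _ in range(N_CELLS)]
        for _ in range(100):                      # cap: BR dynamics may fail to settle
            changed = False
            for i in range(N_CELLS):
                best = max(range(N_CHANNELS),
                           key=lambda c: utility(i, choice[:i] + [c] + choice[i + 1:]))
                if best != choice[i]:
                    choice[i], changed = best, True
            if not changed:                       # pure-strategy equilibrium reached
                break

        print("subchannel choices:", choice)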

  4. Development of a conceptual integrated traffic safety problem identification database

    DOT National Transportation Integrated Search

    1999-12-01

    The project conceptualized a traffic safety risk management information system and statistical database for improved problem-driver identification, countermeasure development, and resource allocation. The California Department of Motor Vehicles Drive...

  5. New Mathematical Strategy Using Branch and Bound Method

    NASA Astrophysics Data System (ADS)

    Tarray, Tanveer Ahmad; Bhat, Muzafar Rasool

    In this paper, the problem of optimal allocation in stratified random sampling is considered in the presence of nonresponse. The problem is formulated as a nonlinear programming problem (NLPP) and is solved using the Branch and Bound method. The results are also obtained using LINGO.

  6. Scenario-based modeling for multiple allocation hub location problem under disruption risk: multiple cuts Benders decomposition approach

    NASA Astrophysics Data System (ADS)

    Yahyaei, Mohsen; Bashiri, Mahdi

    2017-12-01

    The hub location problem arises in a variety of domains such as transportation and telecommunication systems. In many real-world situations, hub facilities are subject to disruption. This paper deals with the multiple allocation hub location problem in the presence of facilities failure. To model the problem, a two-stage stochastic formulation is developed. In the proposed model, the number of scenarios grows exponentially with the number of facilities. To alleviate this issue, two approaches are applied simultaneously. The first approach is to apply sample average approximation (SAA) to approximate the two-stage stochastic problem via sampling. Then, by applying the multiple cuts Benders decomposition approach, computational performance is enhanced. Numerical studies show the effective performance of the SAA in terms of optimality gap for small problem instances with numerous scenarios. Moreover, the performance of multi-cut Benders decomposition is assessed through comparison with the classic version, and the computational results reveal the superiority of the multi-cut approach regarding the computational time and number of iterations.

  7. Allocation and management issues in multiple-transaction open access transmission networks

    NASA Astrophysics Data System (ADS)

    Tao, Shu

    This thesis focuses on some key issues related to allocation and management by the independent grid operator (IGO) of unbundled services in multiple-transaction open access transmission networks. The three unbundled services addressed in the thesis are transmission real power losses, reactive power support requirements from generation sources, and transmission congestion management. We develop a general framework that explicitly represents multiple transactions undertaken simultaneously in the transmission grid. This framework serves as the basis for formulating the various problems treated in the thesis. We use this comprehensive framework to develop a physical-flow-based mechanism to allocate the total transmission losses to each transaction using the system. An important property of the allocation scheme is its capability to effectively deal with counter flows that arise in the presence of specific transactions. Using the loss allocation results as the basis, we construct the equivalent loss compensation concept and apply it to develop flexible and effective procedures for compensating losses in multiple-transaction networks. We present a new physical-flow-based mechanism for allocating the reactive power support requirements provided by generators in multiple-transaction networks. The allocatable reactive support requirements are formulated as the sum of two specific components: the voltage magnitude variation component and the voltage angle variation component. The formulation utilizes the multiple-transaction framework and makes use of certain simplifying approximations. It leads to a natural allocation as a function of the amount of each transaction. The physical interpretation of each allocation as a sensitivity of the reactive output of a generator is discussed. We propose a congestion management allocation scheme for multiple-transaction networks. The proposed scheme determines the allocation of congestion among the transactions on a physical-flow basis. It also includes a congestion relief scheme that removes the congestion attributed to each transaction on the network in a least-cost manner to the IGO and determines the appropriate transmission charges to each transaction for its transmission usage. The thesis provides a compendium of problems that are natural extensions of the research results reported here and appear to be good candidates for future work.

  8. Behavioral Economic Measures of Alcohol Reward Value as Problem Severity Indicators in College Students

    PubMed Central

    Skidmore, Jessica R.; Murphy, James G.; Martens, Matthew P.

    2014-01-01

    The aims of the current study were to examine the associations among behavioral economic measures of alcohol value derived from three distinct measurement approaches, and to evaluate their respective relations with traditional indicators of alcohol problem severity in college drinkers. Five behavioral economic metrics were derived from hypothetical demand curves that quantify reward value by plotting consumption and expenditures as a function of price, another metric measured proportional behavioral allocation and enjoyment related to alcohol versus other activities, and a final metric measured relative discretionary expenditures on alcohol. The sample included 207 heavy drinking college students (53% female) who were recruited through an on-campus health center or university courses. Factor analysis revealed that the alcohol valuation construct comprises two factors: one factor that reflects participants’ levels of alcohol price sensitivity (demand persistence), and a second factor that reflects participants’ maximum consumption and monetary and behavioral allocation towards alcohol (amplitude of demand). The demand persistence and behavioral allocation metrics demonstrated the strongest and most consistent multivariate relations with alcohol-related problems, even when controlling for other well-established predictors. The results suggest that behavioral economic indices of reward value show meaningful relations with alcohol problem severity in young adults. Despite the presence of some gender differences, these measures appear to be useful problem indicators for men and women. PMID:24749779

  9. Improving Search Properties in Genetic Programming

    NASA Technical Reports Server (NTRS)

    Janikow, Cezary Z.; DeWeese, Scott

    1997-01-01

    With advancing computer processing capabilities, practical computer applications are mostly limited by the amount of human programming required to accomplish a specific task. This necessary human participation creates many problems, such as dramatically increased cost. To alleviate the problem, computers must become more autonomous. In other words, computers must be capable of programming/reprogramming themselves to adapt to changing environments/tasks/demands/domains. Evolutionary computation offers potential means, but it must be advanced beyond its current practical limitations. Evolutionary algorithms model nature. They maintain a population of structures representing potential solutions to the problem at hand. These structures undergo a simulated evolution by means of mutation, crossover, and a Darwinian selective pressure. Genetic programming (GP) is the most promising example of an evolutionary algorithm. In GP, the structures that evolve are trees, which is a dramatic departure from previously used representations such as strings in genetic algorithms. The space of potential trees is defined by means of their elements: functions, which label internal nodes, and terminals, which label leaves. By attaching semantic interpretation to those elements, trees can be interpreted as computer programs (given an interpreter), evolved architectures, etc. JSC has begun exploring GP as a potential tool for its long-term project on evolving dextrous robotic capabilities. Last year we identified representation redundancies as the primary source of inefficiency in GP. Subsequently, we proposed a method to use problem constraints to reduce those redundancies, effectively reducing GP complexity. This method was implemented afterwards at the University of Missouri. This summer, we have evaluated the payoff from using problem constraints to reduce search complexity on two classes of problems: learning boolean functions and solving the forward kinematics problem. We have also developed and implemented methods to use additional problem heuristics to fine-tune the searchable space, and to use typing information to further reduce the search space. Additional improvements have been proposed, but they are yet to be explored and implemented.

  10. Optimal assignment of workers to supporting services in a hospital

    NASA Astrophysics Data System (ADS)

    Sawik, Bartosz; Mikulik, Jerzy

    2008-01-01

    Supporting services play an important role in health care institutions such as hospitals. This paper presents an application of an operations research model for the optimal allocation of workers among supporting services in a public hospital. The services include logistics, inventory management, financial management, operations management, medical analysis, etc. The optimality criterion of the problem is to minimize the operating costs of supporting services subject to some specific constraints. The constraints represent specific conditions for resource allocation in a hospital. The overall problem is formulated as an integer program known in the literature as the assignment problem, where the decision variables represent the assignment of people to various jobs. The results of some computational experiments modeled on real data from a selected Polish hospital are reported.
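
    The assignment-problem formulation mentioned above can be solved exactly with a Hungarian-type solver. A minimal sketch follows, using an invented cost matrix rather than the hospital's data or its full constraint set.

        import numpy as np
        from scipy.optimize import linear_sum_assignment

        # Assign each worker to one supporting-service job at minimum total cost.
        cost = np.array([
            [9.0, 2.0, 7.0, 8.0],   # worker 0: cost of doing job 0..3
            [6.0, 4.0, 3.0, 7.0],
            [5.0, 8.0, 1.0, 8.0],
            [7.0, 6.0, 9.0, 4.0],
        ])

        workers, jobs = linear_sum_assignment(cost)   # Hungarian-type solver
        for w, j in zip(workers, jobs):
            print(f"worker {w} -> job {j} (cost {cost[w, j]})")
        print("total cost:", cost[workers, jobs].sum())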

  11. Pilot interaction with automated airborne decision making systems

    NASA Technical Reports Server (NTRS)

    Rouse, W. B.; Chu, Y. Y.; Greenstein, J. S.; Walden, R. S.

    1976-01-01

    An investigation was made of interaction between a human pilot and automated on-board decision making systems. Research was initiated on the topic of pilot problem solving in automated and semi-automated flight management systems and attempts were made to develop a model of human decision making in a multi-task situation. A study was made of allocation of responsibility between human and computer, and discussed were various pilot performance parameters with varying degrees of automation. Optimal allocation of responsibility between human and computer was considered and some theoretical results found in the literature were presented. The pilot as a problem solver was discussed. Finally the design of displays, controls, procedures, and computer aids for problem solving tasks in automated and semi-automated systems was considered.

  12. Hybrid Resource Allocation Scheme with Proportional Fairness in OFDMA-Based Cognitive Radio Systems

    NASA Astrophysics Data System (ADS)

    Li, Li; Xu, Changqing; Fan, Pingzhi; He, Jian

    In this paper, the resource allocation problem for proportional fairness in hybrid Cognitive Radio (CR) systems is studied. In OFDMA-based CR systems, traditional resource allocation algorithms cannot guarantee proportional rates among CR users (CRUs) in each OFDM symbol because the number of available subchannels might be smaller than the number of CRUs in some OFDM symbols. To deal with this time-varying nature of the available spectrum resource, a hybrid CR scheme in which CRUs are allowed to use subchannels in both spectrum holes and primary user (PU) bands is adopted, and a resource allocation algorithm is proposed to guarantee proportional rates among CRUs with no undue interference to PUs.

  13. Some practical problems in implementing randomization.

    PubMed

    Downs, Matt; Tucker, Kathryn; Christ-Schmidt, Heidi; Wittes, Janet

    2010-06-01

    While often theoretically simple, implementing randomization to treatment in a masked, but confirmable, fashion can prove difficult in practice. At least three categories of problems occur in randomization: (1) bad judgment in the choice of method, (2) design and programming errors in implementing the method, and (3) human error during the conduct of the trial. This article focuses on these latter two types of errors, dealing operationally with what can go wrong after trial designers have selected the allocation method. We offer several case studies and corresponding recommendations for lessening the frequency of problems in allocating treatment or for mitigating the consequences of errors. Recommendations include: (1) reviewing the randomization schedule before starting a trial, (2) being especially cautious of systems that use on-demand random number generators, (3) drafting unambiguous randomization specifications, (4) performing thorough testing before entering a randomization system into production, (5) maintaining a dataset that captures the values investigators used to randomize participants, thereby allowing the process of treatment allocation to be reproduced and verified, (6) resisting the urge to correct errors that occur in individual treatment assignments, (7) preventing inadvertent unmasking to treatment assignments in kit allocations, and (8) checking a sample of study drug kits to allow detection of errors in drug packaging and labeling. Although we performed a literature search of documented randomization errors, the examples that we provide and the resultant recommendations are based largely on our own experience in industry-sponsored clinical trials. We do not know how representative our experience is or how common errors of the type we have seen occur. Our experience underscores the importance of verifying the integrity of the treatment allocation process before and during a trial. Clinical Trials 2010; 7: 235-245. http://ctj.sagepub.com.

  14. Cooperative vehicle routing problem: an opportunity for cost saving

    NASA Astrophysics Data System (ADS)

    Zibaei, Sedighe; Hafezalkotob, Ashkan; Ghashami, Seyed Sajad

    2016-09-01

    In this paper, a novel methodology is proposed to solve a cooperative multi-depot vehicle routing problem. We establish a mathematical model for the multi-owner VRP in which each owner (i.e. player) manages one or more depots. The basic idea consists of offering an option whereby owners cooperatively manage the VRP to save costs. We present cooperative game theory techniques for allocating the cost savings obtained from various coalitions of owners. The methodology is illustrated with a numerical example in which different coalitions of the players are evaluated along with the results of cooperation and cost saving allocation methods.
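
    One standard cooperative-game allocation that can be used for such cost-saving splits is the Shapley value. The sketch below computes it for three hypothetical depot owners with made-up standalone and coalition routing costs; the paper's instances and its particular allocation methods are not reproduced.

        from itertools import combinations
        from math import factorial

        players = ("A", "B", "C")
        cost = {                     # routing cost of serving each coalition (illustrative)
            (): 0.0,
            ("A",): 100.0, ("B",): 120.0, ("C",): 90.0,
            ("A", "B"): 190.0, ("A", "C"): 160.0, ("B", "C"): 180.0,
            ("A", "B", "C"): 250.0,
        }

        def coalition_cost(members):
            return cost[tuple(sorted(members))]

        n = len(players)
        shapley = {}
        for p in players:
            others = [q for q in players if q != p]
            value = 0.0
            for k in range(n):
                for s in combinations(others, k):
                    weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                    value += weight * (coalition_cost(set(s) | {p}) - coalition_cost(s))
            shapley[p] = value

        print(shapley)                       # cost shares per owner
        print(sum(shapley.values()))         # equals the grand-coalition cost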

  15. The veil of ignorance and health resource allocation.

    PubMed

    Soto, Carlos

    2012-08-01

    Some authors view the veil of ignorance as a preferred method for allocating resources because it imposes impartiality by stripping deliberators of knowledge of their personal identity. Using some prominent examples of such reasoning in the health care sector, I will argue for the following claims. First, choice behind a veil of ignorance often fails to provide clear guidance regarding resource allocation. Second, regardless of whether definite results could be derived from the veil, these results do not in themselves have important moral standing. This is partly because the veil does not determine which features are morally relevant for a given distributive problem. Third, even when we have settled the question of what features to count, choice behind a veil of ignorance arguably fails to take persons seriously. Ultimately, we do not need the veil to solve distributive problems, and we have good reason to appeal to some other distributive model.

  16. A Decision Model for Supporting Task Allocation Processes in Global Software Development

    NASA Astrophysics Data System (ADS)

    Lamersdorf, Ansgar; Münch, Jürgen; Rombach, Dieter

    Today, software-intensive systems are increasingly being developed in a globally distributed way. However, besides its benefit, global development also bears a set of risks and problems. One critical factor for successful project management of distributed software development is the allocation of tasks to sites, as this is assumed to have a major influence on the benefits and risks. We introduce a model that aims at improving management processes in globally distributed projects by giving decision support for task allocation that systematically regards multiple criteria. The criteria and causal relationships were identified in a literature study and refined in a qualitative interview study. The model uses existing approaches from distributed systems and statistical modeling. The article gives an overview of the problem and related work, introduces the empirical and theoretical foundations of the model, and shows the use of the model in an example scenario.

  17. Online Job Allocation with Hard Allocation Ratio Requirement (Author’s Manuscript)

    DTIC Science & Technology

    2016-04-14

    where each job can only be served by a subset of servers. Such a problem exists in many emerging Internet services, such as YouTube, Netflix, etc. For example, in the case of YouTube, each video is replicated only in a small number of servers, and each server can only serve a limited number of streams simultaneously. When a user accesses YouTube and makes a request to watch a video, this request needs to be allocated to one of the servers that...

  18. The Allocation of Federal Expenditures Among States

    NASA Technical Reports Server (NTRS)

    Lee, Maw Lin

    1967-01-01

    This study explores factors associated with the allocation of federal expenditures by states and examines the implications of these expenditures on the state-by-state distribution of incomes. The allocation of federal expenditures is functionally oriented toward the objectives for which various government programs are set up. The geographical distribution of federal expenditures, therefore, was historically considered to be a problem incidental to government activity. Because of this, relatively little attention was given to the question of why some states receive more federal allocation than others. In addition, the implications of this pattern of allocation among the several states have not been intensively investigated.

  19. There is no silver bullet: the value of diversification in planning invasive species surveillance

    Treesearch

    Denys Yemshanov; Frank H. Koch; Bo Lu; D. Barry Lyons; Jeffrey P. Prestemon; Taylor Scarr; Klaus Koehler

    2014-01-01

    In this study we demonstrate how the notion of diversification can be used in broad-scale resource allocation for surveillance of invasive species. We consider the problem of short-term surveillance for an invasive species in a geographical environment. We find the optimal allocation of surveillance resources among multiple geographical subdivisions via application of a...

  20. Parallel Logic Programming Architecture

    DTIC Science & Technology

    1990-04-01

    Section 3.1. A STATIC ALLOCATION SCHEME (SAS). Methods that have been used for decomposing distributed problems in artificial intelligence... multiple agents, knowledge organization and allocation, and cooperative parallel execution. These difficulties are common to distributed artificial... for the following reasons. First, intelligent backtracking requires much more bookkeeping and is therefore more costly during consult-time and during...

  1. Optimal allocation of invasive species surveillance with the maximum expected coverage concept

    Treesearch

    Denys Yemshanov; Robert G. Haight; Frank H. Koch; Bo Lu; Robert Venette; D. Barry Lyons; Taylor Scarr; Krista Ryall; Brian. Leung

    2015-01-01

    We address the problem of geographically allocating scarce survey resources to detect pests in their pathways of introduction given information about their likelihood of movement between origins and destinations. We introduce a model for selecting destination sites for survey that departs from the aim of reducing propagule pressure (PP) in pest destinations and instead...

  2. COOPERATIVE ROUTING FOR DYNAMIC AERIAL LAYER NETWORKS

    DTIC Science & Technology

    2018-03-01

    ...information accumulation at the physical layer, and study the cooperative routing and resource allocation problems associated with such SU networks... interference power constraint is studied. In [Shi2012Joint], an optimal power and sub-carrier allocation strategy to maximize SUs’ throughput subject to...

  3. The Birth and Death of Redundancy in Decoherence and Quantum Darwinism

    NASA Astrophysics Data System (ADS)

    Riedel, Charles; Zurek, Wojciech; Zwolak, Michael

    2012-02-01

    Understanding the quantum-classical transition and the identification of a preferred classical domain through quantum Darwinism is based on recognizing high-redundancy states as both ubiquitous and exceptional. They are produced ubiquitously during decoherence, as has been demonstrated by the recent identification of very general conditions under which high-redundancy states develop. They are exceptional in that high-redundancy states occupy a very narrow corner of the global Hilbert space; states selected at random are overwhelmingly likely to exhibit zero redundancy. In this letter, we examine the conditions and time scales for the transition from high-redundancy states to zero-redundancy states in many-body dynamics. We identify sufficient conditions for the development of redundancy from product states and show that the destruction of redundancy can be accomplished even with highly constrained interactions.

  4. The Effects of Race Conditions When Implementing Single-Source Redundant Clock Trees in Triple Modular Redundant Synchronous Architectures

    NASA Technical Reports Server (NTRS)

    Berg, Melanie D.; Kim, Hak S.; Phan, Anthony M.; Seidleck, Christina M.; Label, Kenneth A.; Pellish, Jonathan A.; Campola, Michael J.

    2016-01-01

    We present the challenges that arise when using redundant clock domains due to their time-skew. Radiation data show that a singular clock domain provides an improved triple modular redundant (TMR) scheme over redundant clocks.

  5. Redundancy and Replication Help Make Your Systems Stress-Free

    ERIC Educational Resources Information Center

    Mitchell, Erik

    2011-01-01

    In mid-April, Amazon EC2 services had a small problem. Apparently, a large swath of its cloud computing environment had such substantial trouble that a number of customers had server issues. A number of high-profile sites, including Reddit, Evite, and Foursquare, went down when Amazon experienced issues in their US East 1a region (Justinb 2011).…

  6. An Intelligent Web-Based System for Diagnosing Student Learning Problems Using Concept Maps

    ERIC Educational Resources Information Center

    Acharya, Anal; Sinha, Devadatta

    2017-01-01

    The aim of this article is to propose a method for the development of concept maps in a web-based environment for identifying concepts a student is deficient in after learning using traditional methods. The Direct Hashing and Pruning algorithm was used to construct the concept map. Redundancies within the concept map were removed to generate a learning sequence.…

  7. Markov Chains For Testing Redundant Software

    NASA Technical Reports Server (NTRS)

    White, Allan L.; Sjogren, Jon A.

    1990-01-01

    Preliminary design developed for validation experiment that addresses problems unique to assuring extremely high quality of multiple-version programs in process-control software. Approach takes into account inertia of controlled system in sense it takes more than one failure of control program to cause controlled system to fail. Verification procedure consists of two steps: experimentation (numerical simulation) and computation, with Markov model for each step.

  8. Revising an Engineering Design Rubric: A Case Study Illustrating Principles and Practices to Ensure Technical Quality of Rubrics

    ERIC Educational Resources Information Center

    Goldberg, Gail Lynn

    2014-01-01

    This article provides a detailed account of a rubric revision process to address seven common problems to which rubrics are prone: lack of consistency and parallelism; the presence of "orphan" and "widow" words and phrases; redundancy in descriptors; inconsistency in the focus of qualifiers; limited routes to partial credit;…

  9. Restoring Redundancy to the Wilkinson Microwave Anisotropy Probe Propulsion System

    NASA Technical Reports Server (NTRS)

    O'Donnell, James R., Jr.; Davis, Gary T.; Ward, David K.

    2004-01-01

    The Wilkinson Microwave Anisotropy Probe is a follow-on to the Differential Microwave Radiometer instrument on the Cosmic Background Explorer. Attitude control system engineers discovered sixteen months before launch that configuration changes after the critical design review had resulted in a significant migration of the spacecraft's center of mass. As a result, the spacecraft no longer had a viable backup control mode in the event of a failure of the negative pitch-axis thruster. A tiger team was formed and identified potential solutions to this problem, such as adding thruster-plume shields to redirect thruster torque, adding or removing mass from the spacecraft, adding an additional thruster, moving thrusters, bending thruster nozzles or propellant tubing, or accepting the loss of redundancy. The project considered the impacts on mass, cost, fuel budget, and schedule for each solution, and decided to bend the propellant tubing of the two roll-control thrusters to allow the pair to be used for backup control in the negative pitch axis. This paper discusses the problem and the potential solutions, and documents the hardware and software changes and verification performed. Flight data are presented to show the on-orbit performance of the propulsion system and lessons learned are described.

  10. Fault-tolerance of a neural network solving the traveling salesman problem

    NASA Technical Reports Server (NTRS)

    Protzel, P.; Palumbo, D.; Arras, M.

    1989-01-01

    This study presents the results of a fault-injection experiment that simulates a neural network solving the Traveling Salesman Problem (TSP). The network is based on a modified version of Hopfield's and Tank's original method. We define a performance characteristic for the TSP that allows an overall assessment of the solution quality for different city distributions and problem sizes. Five different 10-, 20-, and 30-city cases are used for the injection of up to 13 simultaneous stuck-at-0 and stuck-at-1 faults. The results of more than 4000 simulation runs show the extreme fault-tolerance of the network, especially with respect to stuck-at-0 faults. One possible explanation for the overall surprising result is the redundancy of the problem representation.

  11. Foundation for Problem-Based Gaming

    ERIC Educational Resources Information Center

    Kiili, Kristian

    2007-01-01

    Educational games may offer a viable strategy for developing students' problem-solving skills. However, the state of art of educational game research does not provide an account for that. Thus, the aim of this research is to develop an empirically allocated model about problem-based gaming that can be utilised to design pedagogically meaningful…

  12. Energy-Efficient Cognitive Radio Sensor Networks: Parametric and Convex Transformations

    PubMed Central

    Naeem, Muhammad; Illanko, Kandasamy; Karmokar, Ashok; Anpalagan, Alagan; Jaseemuddin, Muhammad

    2013-01-01

    Designing energy-efficient cognitive radio sensor networks is important to intelligently use battery energy and to maximize the sensor network life. In this paper, the problem of determining the power allocation that maximizes the energy efficiency of cognitive radio-based wireless sensor networks is formulated as a constrained optimization problem, where the objective function is the ratio of network throughput and network power. The proposed constrained optimization problem belongs to a class of nonlinear fractional programming problems. The Charnes-Cooper transformation is used to transform the nonlinear fractional problem into an equivalent concave optimization problem. The structure of the power allocation policy for the transformed concave problem is found to be of a water-filling type. The problem is also transformed into a parametric form for which an ε-optimal iterative solution exists. The convergence of the iterative algorithms is proven, and numerical solutions are presented. The iterative solutions are compared with the optimal solution obtained from the transformed concave problem, and the effects of different system parameters (interference threshold level, the number of primary users and secondary sensor nodes) on the performance of the proposed algorithms are investigated. PMID:23966194
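
    The parametric form referred to above is usually solved with Dinkelbach-style iterations: repeatedly maximize rate(p) - lambda * power(p) and update lambda with the resulting ratio. The sketch below does this for a single link by grid search; the gain, noise, circuit power, and power cap are assumptions, and the paper's multi-user, interference-constrained problem is not reproduced.

        import numpy as np

        # Dinkelbach-style iteration for energy efficiency = rate / total power.
        gain, noise, p_circuit, p_max = 2.0, 1.0, 0.5, 4.0
        powers = np.linspace(0.0, p_max, 4001)

        def rate(p):
            return np.log2(1.0 + gain * p / noise)

        def total_power(p):
            return p_circuit + p

        lam = 0.0                          # current estimate of the optimal ratio
        for _ in range(50):
            obj = rate(powers) - lam * total_power(powers)   # parametric subproblem
            p_star = powers[np.argmax(obj)]
            new_lam = rate(p_star) / total_power(p_star)
            if abs(new_lam - lam) < 1e-9:
                break
            lam = new_lam

        print(f"optimal power ~ {p_star:.3f} W, energy efficiency ~ {lam:.3f} bit/J")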

  13. Investigation of Optimal Control Allocation for Gust Load Alleviation in Flight Control

    NASA Technical Reports Server (NTRS)

    Frost, Susan A.; Taylor, Brian R.; Bodson, Marc

    2012-01-01

    Advances in sensors and avionics computation power suggest real-time structural load measurements could be used in flight control systems for improved safety and performance. A conventional transport flight control system determines the moments necessary to meet the pilot's command, while rejecting disturbances and maintaining stability of the aircraft. Control allocation is the problem of converting these desired moments into control effector commands. In this paper, a framework is proposed to incorporate real-time structural load feedback and structural load constraints in the control allocator. Constrained optimal control allocation can be used to achieve desired moments without exceeding specified limits on monitored load points. Minimization of structural loads by the control allocator is used to alleviate gust loads. The framework to incorporate structural loads in the flight control system and an optimal control allocation algorithm will be described and then demonstrated on a nonlinear simulation of a generic transport aircraft with flight dynamics and static structural loads.
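
    A minimal sketch of constrained control allocation in this spirit: solve for effector commands within position limits so that the effector-to-moment map best matches the commanded moments. Bounded least squares is used here as a stand-in for the paper's optimal allocation algorithm, and the effectiveness matrix, limits, and command are invented; structural load constraints could be appended as further bounded quantities.

        import numpy as np
        from scipy.optimize import lsq_linear

        B = np.array([                 # moments (roll, pitch, yaw) per unit deflection
            [ 1.0,  1.0, 0.2, -0.2],
            [ 0.4, -0.4, 1.0,  1.0],
            [ 0.1, -0.1, 0.3, -0.3],
        ])
        m_cmd = np.array([0.5, -0.3, 0.05])        # commanded moments
        lower = np.full(4, -0.4)                   # effector position limits (rad)
        upper = np.full(4,  0.4)

        result = lsq_linear(B, m_cmd, bounds=(lower, upper))
        u = result.x
        print("deflections:", np.round(u, 3))
        print("achieved moments:", np.round(B @ u, 3))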

  14. Frequency allocations for a new satellite service - Digital audio broadcasting

    NASA Technical Reports Server (NTRS)

    Reinhart, Edward E.

    1992-01-01

    The allocation in the range 500-3000 MHz for digital audio broadcasting (DAB) is described in terms of key issues such as the transmission-system architectures. Attention is given to the optimal amount of spectrum for allocation and the technological considerations relevant to downlink bands for satellite and terrestrial transmissions. Proposals for DAB allocations are compared, and reference is made to factors impinging on the provision of ground/satellite feeder links. The allocation proposals describe the implementation of 50-60-MHz bandwidths for broadcasting in the ranges near 800 MHz, below 1525 MHz, near 2350 MHz, and near 2600 MHz. Three specific proposals are examined in terms of characteristics such as service areas, coverage/beam, channels/satellite beam, and FCC license status. Several problems are identified, including existing services crowded with systems, the need for new bands in the 1000-3000-MHz range, and variations from country to country in the nature and intensity of implementations of existing allocations.

  15. Power allocation for SWIPT in K-user interference channels using game theory

    NASA Astrophysics Data System (ADS)

    Wen, Zhigang; Liu, Ying; Liu, Xiaoqing; Li, Shan; Chen, Xianya

    2018-12-01

    A simultaneous wireless information and power transfer system in multi-user interference channels is considered. In this system, each transmitter sends one data stream to its targeted receiver, which causes interference to the other receivers. Since all transmitter-receiver links want to maximize their own average transmission rate, a power allocation problem under transmit power constraints and energy-harvesting constraints is developed. To solve this problem, we propose a game theory framework. Then, we convert the game into a variational inequality problem by establishing the connection between game theory and variational inequalities, and solve the variational inequality problem. Through theoretical analysis, the existence and uniqueness of the Nash equilibrium are both guaranteed by the theory of variational inequalities. A distributed iterative alternating optimization water-filling algorithm is derived, which is proved to converge. Numerical results show that the proposed algorithm reaches fast convergence and achieves a higher sum rate than the unaided scheme.
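
    The water-filling structure mentioned above can be illustrated in the single-user case: pour a power budget over parallel channels until a common "water level" is reached, found here by bisection. The channel gains, noise, and budget are illustrative, and the paper's multi-user game and energy-harvesting constraints are not reproduced.

        import numpy as np

        gains = np.array([1.2, 0.8, 0.3, 2.0])   # channel gains (illustrative)
        noise = 1.0
        budget = 4.0
        floor = noise / gains                     # "floor" of each channel

        def allocate(water_level):
            return np.maximum(water_level - floor, 0.0)

        lo, hi = floor.min(), floor.max() + budget   # bisection on the water level
        for _ in range(100):
            mid = 0.5 * (lo + hi)
            if allocate(mid).sum() > budget:
                hi = mid
            else:
                lo = mid

        power = allocate(0.5 * (lo + hi))
        print("power per channel:", np.round(power, 3), "sum:", round(power.sum(), 3))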

  16. Fair Resource Allocation to Health Research: Priority Topics for Bioethics Scholarship.

    PubMed

    Pratt, Bridget; Hyder, Adnan A

    2017-07-01

    This article draws attention to the limited amount of scholarship on what constitutes fairness and equity in resource allocation to health research by individual funders. It identifies three key decisions of ethical significance about resource allocation that research funders make regularly and calls for prioritizing scholarship on those topics - namely, how health resources should be fairly apportioned amongst public health and health care delivery versus health research, how health research resources should be fairly allocated between health problems experienced domestically versus other health problems typically experienced by disadvantaged populations outside the funder's country, and how domestic and non-domestic health research funding should be further apportioned to different areas, e.g. types of research and recipients. These three topics should be priorities for bioethics research because their outcomes have a substantial bearing on the achievement of health justice. The proposed agenda aims to move discussion on the ethics of health research funding beyond its current focus on the mismatch between worldwide basic and clinical research investment and the global burden of disease. Individual funders' decision-making on whether and to what extent to allocate resources to non-domestic health research, health systems research, research on the social determinants of health, capacity development, and recipients in certain countries should also be the focus of ethical scrutiny. © 2017 John Wiley & Sons Ltd.

  17. Design for Warehouse with Product Flow Type Allocation using Linear Programming: A Case Study in a Textile Industry

    NASA Astrophysics Data System (ADS)

    Khannan, M. S. A.; Nafisah, L.; Palupi, D. L.

    2018-03-01

    Sari Warna Co. Ltd, a company engaged in the textile industry, is experiencing problems in the allocation and placement of goods in its warehouse. To date, the company has not implemented product flow type allocation or product placement for individual products, resulting in a high total material handling cost. Therefore, this study aimed to determine the allocation and placement of goods in the warehouse corresponding to product flow type with minimal total material handling cost. This is a quantitative study based on storage and warehousing theory; it uses the mathematical optimization model of Heragu (2005), with the software LINGO 11.0 used to solve the model. The results obtained from this study are the proportions of space for each functional area: the cross-docking area at 0.0734, the reserve area at 0.1894, and the forward area at 0.7372. The allocation to product flow type 1 is 5 products, to product flow type 2 is 9 products, to product flow type 3 is 2 products, and to product flow type 4 is 6 products. The optimal total material handling cost using this mathematical model is Rp43.079.510, while it is Rp49.869.728 using the company's existing method, a saving of Rp6.790.218 in total material handling cost. Thus, all of the products can be allocated in accordance with their product flow type with minimal total material handling cost.
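
    As a rough illustration of the kind of linear program involved, the sketch below splits each product's flow across functional areas at minimum handling cost subject to area capacities. The costs, flows, and capacities are invented, and this is not the Heragu (2005) model used in the study.

        import numpy as np
        from scipy.optimize import linprog

        areas = ["cross-dock", "reserve", "forward"]
        products = ["P1", "P2", "P3"]
        handling_cost = np.array([           # cost per unit, product x area
            [1.0, 3.0, 2.0],
            [2.5, 1.5, 2.0],
            [2.0, 2.5, 1.0],
        ])
        flow = np.array([120.0, 80.0, 100.0])        # units of each product to place
        capacity = np.array([60.0, 140.0, 180.0])    # units each area can absorb

        n_p, n_a = len(products), len(areas)
        c = handling_cost.ravel()                    # variables x[p, a], flattened row-major

        # Equality: each product's flow is fully allocated across areas.
        A_eq = np.zeros((n_p, n_p * n_a))
        for p in range(n_p):
            A_eq[p, p * n_a:(p + 1) * n_a] = 1.0
        # Inequality: total allocated to an area cannot exceed its capacity.
        A_ub = np.zeros((n_a, n_p * n_a))
        for a in range(n_a):
            A_ub[a, a::n_a] = 1.0

        res = linprog(c, A_ub=A_ub, b_ub=capacity, A_eq=A_eq, b_eq=flow,
                      bounds=(0, None))
        print(res.x.reshape(n_p, n_a).round(1))      # allocation per product and area
        print("total handling cost:", round(res.fun, 1))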

  18. Control system of the inspection robots group applying auctions and multi-criteria analysis for task allocation

    NASA Astrophysics Data System (ADS)

    Panfil, Wawrzyniec; Moczulski, Wojciech

    2017-10-01

    This paper presents a control system for a group of mobile robots intended for carrying out inspection missions. The main research problem was to define a control system that facilitates cooperation among the robots so that the committed inspection tasks are realized. Many well-known control systems use auctions for task allocation, where the subject of an auction is a task to be allocated. It seems that, in the case of missions characterized by a much larger number of tasks than robots, it is better if robots (instead of tasks) are the subjects of auctions. The second identified problem concerns one-sided robot-to-task fitness evaluation. Simultaneous assessment of the robot-to-task fitness and of the task's attractiveness for the robot should positively affect the overall effectiveness of the multi-robot system performance. The elaborated system allows tasks to be assigned to robots using various methods for evaluating the fitness between robots and tasks, and using several task allocation methods. A multi-criteria analysis method is proposed, composed of two assessments: the robot's competitive position for a task among the other robots, and the task's attractiveness for a robot among the other tasks. Furthermore, task allocation methods applying this multi-criteria analysis are proposed. The verification of both the elaborated system and the proposed task allocation methods was carried out with the help of simulated experiments. The object under test was a group of inspection mobile robots forming a virtual counterpart of a real mobile-robot group.

  19. Neural Network Solves "Traveling-Salesman" Problem

    NASA Technical Reports Server (NTRS)

    Thakoor, Anilkumar P.; Moopenn, Alexander W.

    1990-01-01

    Experimental electronic neural network solves "traveling-salesman" problem. Plans round trip of minimum distance among N cities, visiting every city once and only once (without backtracking). This problem is paradigm of many problems of global optimization (e.g., routing or allocation of resources) occurring in industry, business, and government. Applied to large number of cities (or resources), circuits of this kind expected to solve problem faster and more cheaply.

  20. Shared and Disorder-Specific Prefrontal Abnormalities in Boys with Pure Attention-Deficit/Hyperactivity Disorder Compared to Boys with Pure CD during Interference Inhibition and Attention Allocation

    ERIC Educational Resources Information Center

    Rubia, Katya; Halari, Rozmin; Smith, Anna B.; Mohammad, Majeed; Scott, Stephen; Brammer, Michael J.

    2009-01-01

    Background: Inhibitory and attention deficits have been suggested to be shared problems of disruptive behaviour disorders. Patients with attention deficit hyperactivity disorder (ADHD) and patients with conduct disorder (CD) show deficits in tasks of attention allocation and interference inhibition. However, functional magnetic resonance imaging…

  1. Nonprofit Decision Making and Resource Allocation: The Importance of Membership Preferences, Community Needs, and Interorganizational Ties.

    ERIC Educational Resources Information Center

    Markham, William T.; Johnson, Margaret A.; Bonjean, Charles M.

    1999-01-01

    Results of a study of community service organizations (n=12) and their communities indicate that distribution of volunteer funds and time was unrelated to community needs as measured by objective indicators. The most important determinants of resource allocation are members' perceptions of the severity of problems and their willingness to work in…

  2. Auction Mechanism to Allocate Air Traffic Control Slots

    NASA Technical Reports Server (NTRS)

    Raffarin, Marianne

    2003-01-01

    This article deals with an auction mechanism for airspace slots as a means of solving the European airspace congestion problem. A disequilibrium between Air Traffic Control (ATC) services supply and ATC services demand is at the origin of almost one fourth of delays in the air transport industry in Europe. In order to tackle this congestion problem, we suggest modifying both the pricing and the allocation of ATC services by setting up an auction mechanism. The objects of the auction will be the right for airlines to cross a part of the airspace, and then to benefit from ATC services over a period corresponding to the time necessary for the crossing. Allocation and payment rules have to be defined according to the objectives of this auction. The auctioneer is the public authority in charge of ATC services, whose aim is to obtain an efficient allocation. Therefore, the social value will be maximized. Another objective is to internalize congestion costs. To that end, we apply the principle of the Clarke-Groves auction mechanism: each winner has to pay the externalities imposed on other bidders. The complex context of ATC leads to a specific design for this auction.
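
    The Clarke-Groves principle can be illustrated directly: compute the value-maximizing slot assignment, then charge each winner the loss its presence imposes on the other bidders. The toy example below uses invented airlines, slots, and bids and a brute-force assignment search; it is not the article's actual auction design.

        from itertools import permutations

        airlines = ["AF", "BA", "LH"]
        slots = ["S1", "S2"]
        bid = {                         # declared value of each slot to each airline
            "AF": {"S1": 9.0, "S2": 4.0},
            "BA": {"S1": 7.0, "S2": 6.0},
            "LH": {"S1": 5.0, "S2": 5.0},
        }

        def best_assignment(active):
            """Exhaustively assign distinct slots to (a subset of) active airlines."""
            best_val, best_map = 0.0, {}
            for k in range(min(len(slots), len(active)) + 1):
                for chosen in permutations(active, k):
                    for ordering in permutations(slots, k):
                        val = sum(bid[a][s] for a, s in zip(chosen, ordering))
                        if val > best_val:
                            best_val, best_map = val, dict(zip(chosen, ordering))
            return best_val, best_map

        total, assignment = best_assignment(airlines)
        for a in airlines:
            others = [b for b in airlines if b != a]
            without_a, _ = best_assignment(others)
            others_value_with_a = total - bid[a].get(assignment.get(a, ""), 0.0)
            payment = without_a - others_value_with_a      # Clarke pivot payment
            print(a, "gets", assignment.get(a, "nothing"), "pays", round(payment, 2))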

  3. GIS and Game Theory for Water Resource Management

    NASA Astrophysics Data System (ADS)

    Ganjali, N.; Guney, C.

    2017-11-01

    In this study, aspects of game theory and its application to water resources management, combined with GIS techniques, are detailed. First, each term is explained and its advantages and limitations are discussed. Then, the nature of the combinations between each pair and the literature on previous studies are given. Several cases were investigated and the results were examined in order to conclude on the applicability of combining GIS, game theory and water resources management. It is concluded that game theory has been used in relatively few areas of water management, such as cost/benefit allocation among users, water allocation among trans-boundary users of water resources, water quality management, groundwater management, analysis of water policies, fair allocation of water resources development costs, and some other narrow fields. Decision-making in environmental projects requires consideration of trade-offs between socio-political, environmental, and economic impacts and is often complicated by various stakeholder views. Most of the literature on water allocation and conflict problems uses traditional optimization models to identify the most efficient scheme, while game theory, as an optimization method, combined with GIS provides a beneficial platform for agent-based models to be used in solving water resources management problems in further studies.

  4. Adjacency Matrix-Based Transmit Power Allocation Strategies in Wireless Sensor Networks

    PubMed Central

    Consolini, Luca; Medagliani, Paolo; Ferrari, Gianluigi

    2009-01-01

    In this paper, we present an innovative transmit power control scheme, based on optimization theory, for wireless sensor networks (WSNs) which use carrier sense multiple access (CSMA) with collision avoidance (CA) as the medium access control (MAC) protocol. In particular, we focus on schemes where several remote nodes send data directly to a common access point (AP). Under the assumption of finite overall network transmit power and low traffic load, we derive the optimal transmit power allocation strategy that minimizes the packet error rate (PER) at the AP. This approach is based on modeling the CSMA/CA MAC protocol through a finite state machine and takes into account the network adjacency matrix, which depends on the transmit power distribution and determines the network connectivity. It will then be shown that the transmit power allocation problem reduces to a convex constrained minimization problem. Our results show that, under the assumption of low traffic load, the power allocation strategy which guarantees minimal delay requires the maximization of network connectivity, which can be equivalently interpreted as the maximization of the number of non-zero entries of the adjacency matrix. The obtained theoretical results are confirmed by simulations for unslotted Zigbee WSNs. PMID:22346705

  5. Application of a COTS Resource Optimization Framework to the SSN Sensor Tasking Domain - Part I: Problem Definition

    NASA Astrophysics Data System (ADS)

    Tran, T.

    With the onset of the SmallSat era, the RSO catalog is expected to see continuing growth in the near future. This presents a significant challenge to the current sensor tasking of the SSN. The Air Force is in need of a sensor tasking system that is robust, efficient, scalable, and able to respond in real-time to interruptive events that can change the tracking requirements of the RSOs. Furthermore, the system must be capable of using processed data from heterogeneous sensors to improve tasking efficiency. The SSN sensor tasking can be regarded as an economic problem of supply and demand: the amount of tracking data needed by each RSO represents the demand side while the SSN sensor tasking represents the supply side. As the number of RSOs to be tracked grows, demand exceeds supply. The decision-maker is faced with the problem of how to allocate resources in the most efficient manner. Braxton recently developed a framework called Multi-Objective Resource Optimization using Genetic Algorithm (MOROUGA) as one of its modern COTS software products. This optimization framework took advantage of the maturing technology of evolutionary computation in the last 15 years. This framework was applied successfully to address the resource allocation of an AFSCN-like problem. In any resource allocation problem, there are five key elements: (1) the resource pool, (2) the tasks using the resources, (3) a set of constraints on the tasks and the resources, (4) the objective functions to be optimized, and (5) the demand levied on the resources. In this paper we explain in detail how the design features of this optimization framework are directly applicable to address the SSN sensor tasking domain. We also discuss our validation effort as well as present the result of the AFSCN resource allocation domain using a prototype based on this optimization framework.

  6. Resource Allocation Algorithms for the Next Generation Cellular Networks

    NASA Astrophysics Data System (ADS)

    Amzallag, David; Raz, Danny

    This chapter describes recent results addressing resource allocation problems in the context of current and future cellular technologies. We present models that capture several fundamental aspects of planning and operating these networks, and develop new approximation algorithms providing provable good solutions for the corresponding optimization problems. We mainly focus on two families of problems: cell planning and cell selection. Cell planning deals with choosing a network of base stations that can provide the required coverage of the service area with respect to the traffic requirements, available capacities, interference, and the desired QoS. Cell selection is the process of determining the cell(s) that provide service to each mobile station. Optimizing these processes is an important step towards maximizing the utilization of current and future cellular networks.

  7. Who gets how much: funding formulas in federal public health programs.

    PubMed

    Buehler, James W; Holtgrave, David R

    2007-01-01

    Federal public health programs use a mix of formula-based and competitive methods to allocate funds among states and other constituent jurisdictions. Characteristics of formula-based allocations used by a convenience sample of four programs, three from the Centers for Disease Control and Prevention and one from the Health Resources and Services Administration, are described to illustrate formula-based allocation methods in public health. Data sources in these public health formulas include population counts and funding proportions based on historical precedent. None include factors that adjust allocations based on variations in the availability of local resources or the cost of delivering services. Formula-funded activities are supplemented by programs that target specific prevention needs or encourage development of innovative methods to address emerging problems, using set-aside funds. A public health finance research agenda should address ways to improve the fit between funding allocation formulas and program objectives.

  8. Extensive cargo identification reveals distinct biological roles of the 12 importin pathways.

    PubMed

    Kimura, Makoto; Morinaka, Yuriko; Imai, Kenichiro; Kose, Shingo; Horton, Paul; Imamoto, Naoko

    2017-01-24

    Vast numbers of proteins are transported into and out of the nuclei by approximately 20 species of importin-β family nucleocytoplasmic transport receptors. However, the significance of the multiple parallel transport pathways that the receptors constitute is poorly understood because only limited numbers of cargo proteins have been reported. Here, we identified cargo proteins specific to the 12 species of human import receptors with a high-throughput method that employs stable isotope labeling with amino acids in cell culture, an in vitro reconstituted transport system, and quantitative mass spectrometry. The identified cargoes illuminated the manner of cargo allocation to the receptors. The redundancies of the receptors vary widely depending on the cargo protein. Cargoes of the same receptor are functionally related to one another, and the predominant protein groups in the cargo cohorts differ among the receptors. Thus, the receptors are linked to distinct biological processes by the nature of their cargoes.

  9. Coordinated path-following and direct yaw-moment control of autonomous electric vehicles with sideslip angle estimation

    NASA Astrophysics Data System (ADS)

    Guo, Jinghua; Luo, Yugong; Li, Keqiang; Dai, Yifan

    2018-05-01

    This paper presents a novel coordinated path-following system (PFS) and direct yaw-moment control (DYC) for autonomous electric vehicles via a hierarchical control technique. In the high-level control law design, a new fuzzy factor is introduced based on the magnitude of the longitudinal velocity of the vehicle, and a linear time-varying (LTV) model predictive controller (MPC) is proposed to compute the wheel steering angle and external yaw moment. Then, a pseudo-inverse (PI) low-level control allocation law is designed to track the desired external yaw moment and manage the redundant tire actuators. Furthermore, the vehicle sideslip angle is estimated by data fusion of low-cost GPS and INS, obtained by integrating the modified INS signals with the GPS signals as initial values. Finally, the effectiveness of the proposed control system is validated by simulation and experimental tests.
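
    A minimal numpy sketch of the pseudo-inverse allocation idea: a desired total longitudinal force and external yaw moment from a high-level controller is mapped to four redundant wheel torques through the Moore-Penrose pseudo-inverse, which returns the minimum-norm torque set. The effectiveness matrix, wheel radius, track width, and commanded values below are invented and are not the paper's vehicle model.

```python
import numpy as np

# Illustrative effectiveness matrix B: maps four wheel torques to the
# generalized efforts [total longitudinal force; external yaw moment].
# Wheel radius and track half-width are made-up numbers.
r_w, half_track = 0.3, 0.8
B = np.array([
    [1 / r_w, 1 / r_w, 1 / r_w, 1 / r_w],                                   # longitudinal force
    [-half_track / r_w, half_track / r_w, -half_track / r_w, half_track / r_w],  # yaw moment
])

v_des = np.array([2000.0, 350.0])   # desired [N, N*m] from the high-level controller

# Pseudo-inverse allocation: minimum-norm torque vector that reproduces v_des.
u = np.linalg.pinv(B) @ v_des
print("wheel torques [N*m]:", np.round(u, 1))
print("achieved efforts:", np.round(B @ u, 1))
```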

  10. Global biomass production potentials exceed expected future demand without the need for cropland expansion

    PubMed Central

    Mauser, Wolfram; Klepper, Gernot; Zabel, Florian; Delzeit, Ruth; Hank, Tobias; Putzenlechner, Birgitta; Calzadilla, Alvaro

    2015-01-01

    Global biomass demand is expected to roughly double between 2005 and 2050. Current studies suggest that agricultural intensification through optimally managed crops on today's cropland alone is insufficient to satisfy future demand. In practice though, improving crop growth management through better technology and knowledge almost inevitably goes along with (1) improving farm management with increased cropping intensity and more annual harvests where feasible and (2) an economically more efficient spatial allocation of crops which maximizes farmers' profit. By explicitly considering these two factors we show that, without expansion of cropland, today's global biomass potentials substantially exceed previous estimates and even 2050s' demands. We attribute a 39% increase in estimated global production potentials to increasing cropping intensities and a 30% increase to the spatial reallocation of crops to their profit-maximizing locations. The additional potentials would make cropland expansion redundant. Their geographic distribution points at possible hotspots for future intensification. PMID:26558436

  11. Global biomass production potentials exceed expected future demand without the need for cropland expansion.

    PubMed

    Mauser, Wolfram; Klepper, Gernot; Zabel, Florian; Delzeit, Ruth; Hank, Tobias; Putzenlechner, Birgitta; Calzadilla, Alvaro

    2015-11-12

    Global biomass demand is expected to roughly double between 2005 and 2050. Current studies suggest that agricultural intensification through optimally managed crops on today's cropland alone is insufficient to satisfy future demand. In practice though, improving crop growth management through better technology and knowledge almost inevitably goes along with (1) improving farm management with increased cropping intensity and more annual harvests where feasible and (2) an economically more efficient spatial allocation of crops which maximizes farmers' profit. By explicitly considering these two factors we show that, without expansion of cropland, today's global biomass potentials substantially exceed previous estimates and even 2050s' demands. We attribute a 39% increase in estimated global production potentials to increasing cropping intensities and a 30% increase to the spatial reallocation of crops to their profit-maximizing locations. The additional potentials would make cropland expansion redundant. Their geographic distribution points at possible hotspots for future intensification.

  12. Accelerating execution of the integrated TIGER series Monte Carlo radiation transport codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, L.M.; Hochstedler, R.D.

    1997-02-01

    Execution of the integrated TIGER series (ITS) of coupled electron/photon Monte Carlo radiation transport codes has been accelerated by modifying the FORTRAN source code for more efficient computation. Each member code of ITS was benchmarked and profiled with a specific test case that directed the acceleration effort toward the most computationally intensive subroutines. Techniques for accelerating these subroutines included replacing linear search algorithms with binary versions, replacing the pseudo-random number generator, reducing program memory allocation, and proofing the input files for geometrical redundancies. All techniques produced identical or statistically similar results to the original code. Final benchmark timing of the accelerated code resulted in speed-up factors of 2.00 for TIGER (the one-dimensional slab geometry code), 1.74 for CYLTRAN (the two-dimensional cylindrical geometry code), and 1.90 for ACCEPT (the arbitrary three-dimensional geometry code).
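
    One of the techniques listed is replacing linear searches with binary versions. The generic Python comparison below (not the ITS FORTRAN code) shows the pattern for locating the interval of a sorted grid that contains a value; the grid values are arbitrary.

```python
import bisect

# Locating the interval of a sorted grid that contains x: the linear scan is
# O(n), the binary search is O(log n). Both return the same index.
grid = [0.0, 0.1, 0.5, 1.0, 2.0, 5.0, 10.0]

def interval_linear(grid, x):
    for i in range(len(grid) - 1):
        if grid[i] <= x < grid[i + 1]:
            return i
    raise ValueError("x outside grid")

def interval_binary(grid, x):
    i = bisect.bisect_right(grid, x) - 1
    if 0 <= i < len(grid) - 1:
        return i
    raise ValueError("x outside grid")

assert interval_linear(grid, 1.7) == interval_binary(grid, 1.7) == 3
```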

  13. Introduction of Service Systems Implementation

    NASA Astrophysics Data System (ADS)

    Demirkan, Haluk; Spohrer, James C.; Krishna, Vikas

    Service systems can range from an individual to a firm to an entire nation. They can also be nested and composed of other service systems. They are configurations of people, information, technology and organizations to co-create value between a service customer and a provider (Maglio et al. 2006; Spohrer et al. 2007). While these configurations can take many, potentially infinite, forms, they can be optimized for the subject service to eliminate unnecessary costs in the form of redundancies, over-allocation, etc. So what is an ideal configuration that a provider and a customer might strive to achieve? As much as it would be nice to have a formula for such configurations, the experiences that result from engagement are very different for each value co-creation configuration. The variances and dynamism of customer-provider engagements result in potentially infinite types and numbers of configurations in today's global economy.

  14. Optimal traffic resource allocation and management.

    DOT National Transportation Integrated Search

    2010-05-01

    "In this paper, we address the problem of determining the patrol routes of state troopers for maximum coverage of : highway spots with high frequencies of crashes (hot spots). We develop a mixed integer linear programming model : for this problem und...

  15. Lower-extremity musculoskeletal geometry affects the calculation of patellofemoral forces in vertical jumping and weightlifting.

    PubMed

    Cleather, D I; Bull, A M J

    2010-01-01

    The calculation of the patellofemoral joint contact force using three-dimensional (3D) modelling techniques requires a description of the musculoskeletal geometry of the lower limb. In this study, the influence of the complexity of the muscle model was studied by considering two different muscle models, the Delp and Horsman models. Both models were used to calculate the patellofemoral force during standing, vertical jumping, and Olympic-style weightlifting. The patellofemoral forces predicted by the Horsman model were markedly lower than those predicted by the Delp model in all activities and represented more realistic values when compared with previous work. This was found to be a result of a lower level of redundancy in the Delp model, which forced a higher level of muscular activation in order to allow a viable solution. The higher level of complexity in the Horsman model resulted in a greater degree of redundancy and consequently lower activation and patellofemoral forces. The results of this work demonstrate that a well-posed muscle model must have an adequate degree of complexity to create a sufficient independence, variability, and number of moment arms in order to ensure adequate redundancy of the force-sharing problem such that muscle forces are not overstated.

  16. Redundancy Analysis of Capacitance Data of a Coplanar Electrode Array for Fast and Stable Imaging Processing

    PubMed Central

    Wen, Yintang; Zhang, Zhenda; Zhang, Yuyan; Sun, Dongtao

    2017-01-01

    A coplanar electrode array sensor is established for the imaging of composite-material adhesive-layer defect detection. The sensor is based on the capacitive edge effect, which leads to capacitance data that are considerably weak and susceptible to environmental noise. The inverse problem of coplanar array electrical capacitance tomography (C-ECT) is ill-conditioned, so a small error in the capacitance data can seriously affect the quality of the reconstructed images. In order to achieve a stable image reconstruction process, a redundancy analysis method for capacitance data is proposed. The proposed method is based on contribution rate and anti-interference capability. According to the redundancy analysis, the capacitance data are divided into valid and invalid data. When the image is reconstructed from the valid data, the sensitivity matrix needs to be changed accordingly. In order to evaluate the effectiveness of the sensitivity map, singular value decomposition (SVD) is used. Finally, the two-dimensional (2D) and three-dimensional (3D) images are reconstructed by the Tikhonov regularization method. Compared with images reconstructed from the raw capacitance data, the stability of the image reconstruction process is improved and the quality of the reconstructed images is not degraded. As a result, much of the invalid data need not be collected, and the data acquisition time can also be reduced. PMID:29295537
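
    The reconstruction step named in the abstract is standard Tikhonov regularization, with SVD available to inspect conditioning. The numpy sketch below illustrates both on a random, ill-conditioned stand-in sensitivity matrix; the matrix, noise level, and regularization weight are invented and do not represent the paper's C-ECT sensor.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in ill-conditioned sensitivity matrix S (pixels -> capacitance data)
# and a synthetic "true" image; both are random placeholders.
m_meas, n_pix = 28, 100
S = rng.standard_normal((m_meas, n_pix)) @ np.diag(np.logspace(0, -4, n_pix))
x_true = rng.random(n_pix)
c = S @ x_true + 1e-4 * rng.standard_normal(m_meas)   # noisy capacitance data

# Conditioning check via SVD of the sensitivity matrix.
sigma = np.linalg.svd(S, compute_uv=False)
print("condition number:", sigma[0] / sigma[-1])

# Tikhonov regularization: x = (S^T S + alpha * I)^(-1) S^T c
alpha = 1e-3
x_rec = np.linalg.solve(S.T @ S + alpha * np.eye(n_pix), S.T @ c)
print("relative reconstruction error:",
      np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true))
```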

  17. Scheduling Jobs with Variable Job Processing Times on Unrelated Parallel Machines

    PubMed Central

    Zhang, Guang-Qian; Wang, Jian-Jun; Liu, Ya-Jing

    2014-01-01

    Scheduling problems with variable job processing times on m unrelated parallel machines are considered, where the processing time of a job is a function of its position in a sequence, its starting time, and its resource allocation. The objective is to determine the optimal resource allocation and the optimal schedule to minimize a total cost function that depends on the total completion (waiting) time, the total machine load, the total absolute differences in completion (waiting) times on all machines, and the total resource cost. If the number of machines is a given constant, we propose a polynomial-time algorithm to solve the problem. PMID:24982933

  18. Dynamism in a Semiconductor Industrial Machine Allocation Problem using a Hybrid of the Bio-inspired and Musical-Harmony Approach

    NASA Astrophysics Data System (ADS)

    Kalsom Yusof, Umi; Nor Akmal Khalid, Mohd

    2015-05-01

    Semiconductor industries need to constantly adjust to the rapid pace of change in the market. Most manufactured products usually have a very short life cycle. These scenarios imply the need to improve the efficiency of capacity planning, an important aspect of the machine allocation plan known for its complexity. Various studies have been performed to balance productivity and flexibility in the flexible manufacturing system (FMS). Many approaches have been developed by researchers to determine a suitable balance between exploration (global improvement) and exploitation (local improvement). However, not much work has focused on the machine allocation problem while considering the effects of machine breakdowns. This paper develops a model to minimize the effect of machine breakdowns, thus increasing productivity. The objectives are to minimize system unbalance and makespan as well as to increase throughput while satisfying technological constraints such as machine time availability. To examine the effectiveness of the proposed model, experiments on throughput, system unbalance, and makespan were performed on real industrial datasets using a hybrid of a genetic algorithm and harmony search. The aim is to obtain a feasible solution to the domain problem.

  19. Spectrum Sharing Based on a Bertrand Game in Cognitive Radio Sensor Networks

    PubMed Central

    Zeng, Biqing; Zhang, Chi; Hu, Pianpian; Wang, Shengyu

    2017-01-01

    In the study of power control and allocation based on pricing, the utility of secondary users is usually studied from the perspective of the signal-to-noise ratio. Studying secondary-user utility from the perspective of communication demand can not only help secondary users meet their maximum communication needs but also maximize the utilization of spectrum resources; however, research in this area is lacking. From the viewpoint of meeting the demand of network communication, this paper therefore designs a two-stage model to solve the spectrum leasing and allocation problem in cognitive radio sensor networks (CRSNs). In the first stage, the secondary base station collects the secondary network communication requirements and rents spectrum resources from several primary base stations, with a Bertrand game used to model the transaction behavior of the primary base stations and the secondary base station. In the second stage, the subcarrier and power allocation problem of the secondary base station is defined as a nonlinear programming problem and solved based on Nash bargaining. The simulation results show that the proposed model can satisfy the communication requirements of each user in a fair and efficient way compared to other spectrum sharing schemes. PMID:28067850

  20. The Effects of Visual-Verbal Redundancy and Recaps on Television News Learning.

    ERIC Educational Resources Information Center

    Son, Jinok; Davie, William

    A study examined the effects of visual-verbal redundancy and recaps on learning from television news. Two factors were used: redundancy between the visual and audio channels, and the presence or absence of a recap. Manipulation of these factors created four conditions: (1) redundant pictures and words plus recap, (2) redundant pictures and words…

  1. Space Station Freedom resource allocation accommodation of technology payload requirements

    NASA Technical Reports Server (NTRS)

    Avery, Don E.; Collier, Lisa D.; Gartrell, Charles F.

    1990-01-01

    An overview of the Office of Aeronautics, Exploration, and Technology (OAET) Space Station Freedom Technology Payload Development Program is provided, and the OAET Station resource requirements are reviewed. The requirements are contrasted with currently proposed resource allocations, and a discussion of the issues and conclusions is provided. It is concluded that an overall 20 percent resource allocation is appropriate to support OAET's technology development program, that some resources are inadequate even at the 20 percent level, and that bartering resources among U.S. users and international partners and increasing the level of automation may be viable solutions to the resource constraint problem.

  2. Causes of Indoor Air Quality Problems in Schools: Summary of Scientific Research. Revised Edition.

    ERIC Educational Resources Information Center

    Bayer, Charlene W.; Crow, Sidney A.; Fischer, John

    Understanding the primary causes of indoor air quality (IAQ) problems and how controllable factors--proper heating, ventilation and air-conditioning (HVAC) system design, allocation of adequate outdoor air, proper filtration, effective humidity control, and routine maintenance--can avert problems may help all building owners, operators, and…

  3. Algorithm Optimally Allocates Actuation of a Spacecraft

    NASA Technical Reports Server (NTRS)

    Motaghedi, Shi

    2007-01-01

    A report presents an algorithm that solves the following problem: Allocate the force and/or torque to be exerted by each thruster and reaction-wheel assembly on a spacecraft for best performance, defined as minimizing the error between (1) the total force and torque commanded by the spacecraft control system and (2) the total of forces and torques actually exerted by all the thrusters and reaction wheels. The algorithm incorporates the matrix vector relationship between (1) the total applied force and torque and (2) the individual actuator force and torque values. It takes account of such constraints as lower and upper limits on the force or torque that can be applied by a given actuator. The algorithm divides the aforementioned problem into two optimization problems that it solves sequentially. These problems are of a type, known in the art as semi-definite programming problems, that involve linear matrix inequalities. The algorithm incorporates, as sub-algorithms, prior algorithms that solve such optimization problems very efficiently. The algorithm affords the additional advantage that the solution requires the minimum rate of consumption of fuel for the given best performance.
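
    For intuition, the snippet below poses a much-simplified version of the allocation step as a bounded least-squares problem: pick actuator commands within lower/upper limits that minimize the residual between the commanded and achieved force/torque. The mapping matrix, limits, and commands are invented, and this is not the report's semi-definite programming formulation, which additionally accounts for fuel consumption.

```python
import numpy as np
from scipy.optimize import lsq_linear

# Simplified stand-in for the allocation step: choose actuator efforts u that
# best reproduce a commanded force/torque c = A @ u, subject to per-actuator
# lower/upper limits. A, c, and the limits are illustrative only.
A = np.array([
    [1.0, 0.0, -1.0, 0.0, 0.3],
    [0.0, 1.0, 0.0, -1.0, 0.3],
    [0.2, -0.2, 0.2, -0.2, 1.0],
])                                  # maps 5 actuators to 3 commanded axes
c = np.array([0.8, -0.4, 0.5])      # commanded force/torque components

res = lsq_linear(A, c, bounds=(np.zeros(5), np.full(5, 1.0)))
print("actuator commands:", np.round(res.x, 3))
print("residual (command - achieved):", np.round(c - A @ res.x, 4))
```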

  4. Proposal for massively parallel data storage system

    NASA Technical Reports Server (NTRS)

    Mansuripur, M.

    1992-01-01

    An architecture for integrating large numbers of data storage units (drives) to form a distributed mass storage system is proposed. The network of interconnected units consists of nodes and links. At each node there resides a controller board, a data storage unit and, possibly, a local/remote user-terminal. The links (twisted-pair wires, coax cables, or fiber-optic channels) provide the communications backbone of the network. There is no central controller for the system as a whole; all decisions regarding allocation of resources, routing of messages and data-blocks, creation and distribution of redundant data-blocks throughout the system (for protection against possible failures), frequency of backup operations, etc., are made locally at individual nodes. The system can handle as many user-terminals as there are nodes in the network. Various users compete for resources by sending their requests to the local controller-board and receiving allocations of time and storage space. In principle, each user can have access to the entire system, and all drives can be running in parallel to service the requests for one or more users. The system is expandable up to a maximum number of nodes, determined by the number of routing-buffers built into the controller boards. Additional drives, controller-boards, user-terminals, and links can be simply plugged into an existing system in order to expand its capacity.

  5. Micropulsed Plasma Thrusters for Attitude Control of a Low-Earth-Orbiting CubeSat

    NASA Technical Reports Server (NTRS)

    Gatsonis, Nikolaos A.; Lu, Ye; Blandino, John; Demetriou, Michael A.; Paschalidis, Nicholas

    2016-01-01

    This study presents a 3-Unit CubeSat design with commercial-off-the-shelf hardware, Teflon-fueled micropulsed plasma thrusters, and an attitude determination and control approach. The micropulsed plasma thruster is sized by the impulse bit and pulse frequency required for continuous compensation of expected maximum disturbance torques at altitudes between 400 and 1000 km, as well as to perform stabilization of up to 20 deg /s and slew maneuvers of up to 180 deg. The study involves realistic power constraints anticipated on the 3-Unit CubeSat. Attitude estimation is implemented using the q method for static attitude determination of the quaternion using pairs of the spacecraft-sun and magnetic-field vectors. The quaternion estimate and the gyroscope measurements are used with an extended Kalman filter to obtain the attitude estimates. Proportional-derivative control algorithms use the static attitude estimates in order to calculate the torque required to compensate for the disturbance torques and to achieve specified stabilization and slewing maneuvers or combinations. The controller includes a thruster-allocation method, which determines the optimal utilization of the available thrusters and introduces redundancy in case of failure. Simulation results are presented for a 3-Unit CubeSat under detumbling, pointing, and pointing and spinning scenarios, as well as comparisons between the thruster-allocation and the paired-firing methods under thruster failure.

  6. Amplification, Redundancy, and Quantum Chernoff Information

    NASA Astrophysics Data System (ADS)

    Zwolak, Michael; Riedel, C. Jess; Zurek, Wojciech H.

    2014-04-01

    Amplification was regarded, since the early days of quantum theory, as a mysterious ingredient that endows quantum microstates with macroscopic consequences, key to the "collapse of the wave packet," and a way to avoid embarrassing problems exemplified by Schrödinger's cat. Such a bridge between the quantum microworld and the classical world of our experience was postulated ad hoc in the Copenhagen interpretation. Quantum Darwinism views amplification as replication, in many copies, of the information about quantum states. We show that such amplification is a natural consequence of a broad class of models of decoherence, including the photon environment we use to obtain most of our information. This leads to objective reality via the presence of robust and widely accessible records of selected quantum states. The resulting redundancy (the number of copies deposited in the environment) follows from the quantum Chernoff information that quantifies the information transmitted by a typical elementary subsystem of the environment.

  7. Self Healing Percolation

    NASA Astrophysics Data System (ADS)

    Scala, Antonio

    2015-03-01

    We introduce the concept of self-healing in the field of complex network modelling; in particular, self-healing capabilities are implemented through distributed communication protocols that exploit redundant links to recover the connectivity of the system. Self-healing is crucial in implementing the next generation of smart grids, as it allows a high quality of service to be ensured for users. We then map our self-healing procedure onto a percolation problem and analyse the interplay between redundancies and topology in improving the resilience of networked infrastructures to multiple failures. We find exact results both for planar lattices and for random lattices, hinting at the role of duality in the design of resilient networks. Finally, we introduce a cavity method approach to study the recovery of connectivity after damage in self-healing networks. CNR-PNR National Project ``Crisis-Lab,'' EU HOME/2013/CIPS/AG/4000005013 project CI2C and EU FET project MULTIPLEX nr.317532.

  8. Longitudinal Analysis of New Information Types in Clinical Notes

    PubMed Central

    Zhang, Rui; Pakhomov, Serguei; Melton, Genevieve B.

    2014-01-01

    It is increasingly recognized that redundant information in clinical notes within electronic health record (EHR) systems is ubiquitous, significant, and may negatively impact the secondary use of these notes for research and patient care. We investigated several automated methods to identify redundant versus relevant new information in clinical reports. These methods may provide a valuable approach to extract clinically pertinent information and further improve the accuracy of clinical information extraction systems. In this study, we used UMLS semantic types to extract several types of new information, including problems, medications, and laboratory information. Automatically identified new information highly correlated with manual reference standard annotations. Methods to identify different types of new information can potentially help to build up more robust information extraction systems for clinical researchers as well as aid clinicians and researchers in navigating clinical notes more effectively and quickly identify information pertaining to changes in health states. PMID:25717418

  9. Study on Data Clustering and Intelligent Decision Algorithm of Indoor Localization

    NASA Astrophysics Data System (ADS)

    Liu, Zexi

    2018-01-01

    Indoor positioning technology gives human beings the ability of positional perception in architectural space, but it suffers from limited single-network coverage and from redundancy in the location data. This article therefore studies data clustering and intelligent decision algorithms for indoor localization. It designs the basic scheme of multi-source indoor positioning technology and analyzes fingerprint localization based on distance measurement together with position and orientation integration from inertial devices. By optimizing the clustering of massive indoor location data, data normalization pretreatment, multi-dimensional controllable clustering centers and multi-factor clustering are realized, and the redundancy of the location data is reduced. In addition, a path-planning approach based on neural network inference and decision is proposed, with a sparse-data input layer, a dynamic feedback hidden layer and an output layer; the low-dimensional results improve intelligent navigation path planning.

  10. Optimizing conjunctive use of surface water and groundwater resources with stochastic dynamic programming

    NASA Astrophysics Data System (ADS)

    Davidsen, Claus; Liu, Suxia; Mo, Xingguo; Rosbjerg, Dan; Bauer-Gottwein, Peter

    2014-05-01

    Optimal management of conjunctive use of surface water and groundwater has been attempted with different algorithms in the literature. In this study, a hydro-economic modelling approach to optimize conjunctive use of scarce surface water and groundwater resources under uncertainty is presented. A stochastic dynamic programming (SDP) approach is used to minimize the basin-wide total costs arising from water allocations and water curtailments. Dynamic allocation problems with inclusion of groundwater resources proved to be more complex to solve with SDP than pure surface water allocation problems due to head-dependent pumping costs. These dynamic pumping costs strongly affect the total costs and can lead to non-convexity of the future cost function. The water user groups (agriculture, industry, domestic) are characterized by inelastic demands and fixed water allocation and water supply curtailment costs. As in traditional SDP approaches, one step-ahead sub-problems are solved to find the optimal management at any time knowing the inflow scenario and reservoir/aquifer storage levels. These non-linear sub-problems are solved using a genetic algorithm (GA) that minimizes the sum of the immediate and future costs for given surface water reservoir and groundwater aquifer end storages. The immediate cost is found by solving a simple linear allocation sub-problem, and the future costs are assessed by interpolation in the total cost matrix from the following time step. Total costs for all stages, reservoir states, and inflow scenarios are used as future costs to drive a forward moving simulation under uncertain water availability. The use of a GA to solve the sub-problems is computationally more costly than a traditional SDP approach with linearly interpolated future costs. However, in a two-reservoir system the future cost function would have to be represented by a set of planes, and strict convexity in both the surface water and groundwater dimension cannot be maintained. The optimization framework based on the GA is still computationally feasible and represents a clean and customizable method. The method has been applied to the Ziya River basin, China. The basin is located on the North China Plain and is subject to severe water scarcity, which includes surface water droughts and groundwater over-pumping. The head-dependent groundwater pumping costs will enable assessment of the long-term effects of increased electricity prices on the groundwater pumping. The coupled optimization framework is used to assess realistic alternative development scenarios for the basin. In particular the potential for using electricity pricing policies to reach sustainable groundwater pumping is investigated.
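
    To make the backward recursion concrete, the sketch below solves a heavily simplified version: a single reservoir with discretized storage, two equally likely inflow scenarios, a fixed demand, and a curtailment cost, with the expected immediate-plus-future cost minimized over release decisions at each stage. All numbers are invented, and the sketch omits the groundwater aquifer, head-dependent pumping costs, and the GA used for the sub-problems in the study.

```python
import numpy as np

# Heavily simplified stochastic dynamic program: one reservoir, discrete
# storage states, two equally likely inflow scenarios, a fixed demand, and a
# curtailment cost. All numbers are illustrative placeholders.
storages = np.arange(0, 11)          # feasible storage states (volume units)
inflows, probs = [2, 5], [0.5, 0.5]
demand, curtail_cost = 4, 10.0       # cost per unit of unmet demand
T = 12                               # planning periods

future = np.zeros(len(storages))     # terminal future cost = 0
for t in reversed(range(T)):
    new_future = np.zeros_like(future)
    for i, s in enumerate(storages):
        exp_cost = 0.0
        for q, p in zip(inflows, probs):
            best = np.inf
            for release in range(0, s + q + 1):           # candidate decisions
                end = min(s + q - release, storages[-1])  # spill above capacity
                cost = curtail_cost * max(demand - release, 0) + future[end]
                best = min(best, cost)
            exp_cost += p * best                          # expectation over inflows
        new_future[i] = exp_cost
    future = new_future

print("expected total cost by initial storage:", np.round(future, 1))
```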

  11. Taxonomies of Organizational Change: Literature Review and Analysis

    DTIC Science & Technology

    1978-09-01

    ... operational terms presented a significant problem. The redundancy and circularity in discussions of variable groups reflects this dilemma. ... Behavioral event and structured interview protocols to be used to collect data from internal Army OE change agent and client subjects are presented with a ... TABLE 22: Data Collection Method Proposed for Each Intervention Variable ... ABSTRACT: This report presents a taxonomy and data collection method

  12. Multicolinearity and Indicator Redundancy Problem in World University Rankings: An Example Using Times Higher Education World University Ranking 2013-2014 Data

    ERIC Educational Resources Information Center

    Kaycheng, Soh

    2015-01-01

    World university ranking systems used the weight-and-sum approach to combined indicator scores into overall scores on which the universities are then ranked. This approach assumes that the indicators all independently contribute to the overall score in the specified proportions. In reality, this assumption is doubtful as the indicators tend to…

  13. Kinematic functions for redundancy resolution using configuration control

    NASA Technical Reports Server (NTRS)

    Seraji, Homayoun (Inventor)

    1994-01-01

    The invention fulfills new goals for redundancy resolution based on manipulator dynamics and end-effector characteristics. These goals are accomplished by employing the recently developed configuration control approach. Redundancy resolution is achieved by controlling the joint inertia matrix or the end-effector mass matrix, which affect the inertial torques, or by reducing the joint torques due to gravity loading and payload. The manipulator mechanical advantage and velocity ratio are also used as performance measures to be improved by proper utilization of redundancy. Furthermore, end-effector compliance, sensitivity, and impulsive force at impact are introduced as redundancy resolution criteria. The new goals for redundancy resolution allow a more efficient utilization of the redundant joints based on the desired task requirements.

  14. Enhancements and Algorithms for Avionic Information Processing System Design Methodology.

    DTIC Science & Technology

    1982-06-16

    programming algorithm is enhanced by incorporating task precedence constraints and hardware failures. Stochastic network methods are used to analyze...allocations in the presence of random fluctuations. Graph theoretic methods are used to analyze hardware designs, and new designs are constructed with...There, spatial dynamic programming (SDP) was used to solve a static, deterministic software allocation problem. Under the current contract the SDP

  15. Spatio-Temporal Super-Resolution Reconstruction of Remote-Sensing Images Based on Adaptive Multi-Scale Detail Enhancement

    PubMed Central

    Zhu, Hong; Tang, Xinming; Xie, Junfeng; Song, Weidong; Mo, Fan; Gao, Xiaoming

    2018-01-01

    There are many problems in existing reconstruction-based super-resolution algorithms, such as the lack of texture-feature representation and of high-frequency details. Multi-scale detail enhancement can produce more texture information and high-frequency information. Therefore, super-resolution reconstruction of remote-sensing images based on adaptive multi-scale detail enhancement (AMDE-SR) is proposed in this paper. First, the information entropy of each remote-sensing image is calculated, and the image with the maximum entropy value is regarded as the reference image. Subsequently, spatio-temporal remote-sensing images are processed using phase normalization, which is to reduce the time phase difference of image data and enhance the complementarity of information. The multi-scale image information is then decomposed using the L0 gradient minimization model, and the non-redundant information is processed by difference calculation and expanding non-redundant layers and the redundant layer by the iterative back-projection (IBP) technique. The different-scale non-redundant information is adaptive-weighted and fused using cross-entropy. Finally, a nonlinear texture-detail-enhancement function is built to improve the scope of small details, and the peak signal-to-noise ratio (PSNR) is used as an iterative constraint. Ultimately, high-resolution remote-sensing images with abundant texture information are obtained by iterative optimization. Real results show an average gain in entropy of up to 0.42 dB for an up-scaling of 2 and a significant promotion gain in enhancement measure evaluation for an up-scaling of 2. The experimental results show that the performance of the AMED-SR method is better than existing super-resolution reconstruction methods in terms of visual and accuracy improvements. PMID:29414893
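
    One component named in the abstract is the iterative back-projection (IBP) technique. The sketch below shows the basic IBP loop on synthetic data with a simple average-pooling observation model: the low-resolution residual is repeatedly upsampled and added back to the high-resolution estimate. The images, the factor-of-two model, and the iteration count are illustrative only and are not the AMDE-SR pipeline.

```python
import numpy as np

rng = np.random.default_rng(1)

def downsample(img, f=2):
    """Average-pool by factor f (a stand-in observation model)."""
    h, w = img.shape
    return img.reshape(h // f, f, w // f, f).mean(axis=(1, 3))

def upsample(img, f=2):
    """Pixel replication back to the high-resolution grid."""
    return np.kron(img, np.ones((f, f)))

# Synthetic high-resolution "truth" and its simulated low-resolution observation.
hr_true = rng.random((16, 16))
lr_obs = downsample(hr_true)

# Iterative back-projection: refine the HR estimate by projecting the
# low-resolution residual back onto the high-resolution grid.
hr_est = upsample(lr_obs)
for _ in range(20):
    residual = lr_obs - downsample(hr_est)
    hr_est = hr_est + upsample(residual)

print("LR consistency error:", np.abs(downsample(hr_est) - lr_obs).max())
```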

  16. Spatio-Temporal Super-Resolution Reconstruction of Remote-Sensing Images Based on Adaptive Multi-Scale Detail Enhancement.

    PubMed

    Zhu, Hong; Tang, Xinming; Xie, Junfeng; Song, Weidong; Mo, Fan; Gao, Xiaoming

    2018-02-07

    There are many problems in existing reconstruction-based super-resolution algorithms, such as the lack of texture-feature representation and of high-frequency details. Multi-scale detail enhancement can produce more texture information and high-frequency information. Therefore, super-resolution reconstruction of remote-sensing images based on adaptive multi-scale detail enhancement (AMDE-SR) is proposed in this paper. First, the information entropy of each remote-sensing image is calculated, and the image with the maximum entropy value is regarded as the reference image. Subsequently, spatio-temporal remote-sensing images are processed using phase normalization, which is to reduce the time phase difference of image data and enhance the complementarity of information. The multi-scale image information is then decomposed using the L ₀ gradient minimization model, and the non-redundant information is processed by difference calculation and expanding non-redundant layers and the redundant layer by the iterative back-projection (IBP) technique. The different-scale non-redundant information is adaptive-weighted and fused using cross-entropy. Finally, a nonlinear texture-detail-enhancement function is built to improve the scope of small details, and the peak signal-to-noise ratio (PSNR) is used as an iterative constraint. Ultimately, high-resolution remote-sensing images with abundant texture information are obtained by iterative optimization. Real results show an average gain in entropy of up to 0.42 dB for an up-scaling of 2 and a significant promotion gain in enhancement measure evaluation for an up-scaling of 2. The experimental results show that the performance of the AMED-SR method is better than existing super-resolution reconstruction methods in terms of visual and accuracy improvements.

  17. Overcoming redundancies in bedside nursing assessments by validating a parsimonious meta-tool: findings from a methodological exercise study.

    PubMed

    Palese, Alvisa; Marini, Eva; Guarnier, Annamaria; Barelli, Paolo; Zambiasi, Paola; Allegrini, Elisabetta; Bazoli, Letizia; Casson, Paola; Marin, Meri; Padovan, Marisa; Picogna, Michele; Taddia, Patrizia; Chiari, Paolo; Salmaso, Daniele; Marognolli, Oliva; Canzan, Federica; Ambrosi, Elisa; Saiani, Luisa; Grassetti, Luca

    2016-10-01

    There is growing interest in validating tools aimed at supporting the clinical decision-making process and research. However, an increased bureaucratization of clinical practice and redundancies in the measures collected have been reported by clinicians. Redundancies in clinical assessments negatively affect both patients and nurses. The aim was to validate a meta-tool measuring the risks/problems currently estimated by multiple tools used in daily practice. A secondary analysis of a database was performed, using cross-validation and longitudinal study designs. In total, 1464 patients admitted to 12 medical units in 2012 were assessed at admission with the Brass, Barthel, Conley and Braden tools. Pertinent outcomes such as the occurrence of post-discharge need for resources and functional decline at discharge, as well as falls and pressure sores, were measured. Explorative factor analysis of each tool, inter-tool correlations and a conceptual evaluation of the redundant/similar items across tools were performed. The validation of the meta-tool was then performed through explorative factor analysis, confirmatory factor analysis and structural equation modelling to establish the ability of the meta-tool to predict the outcomes estimated by the original tools. High correlations between the tools emerged (r from 0.428 to 0.867), with a common variance from 18.3% to 75.1%. Through a conceptual evaluation and explorative factor analysis, the items were reduced from 42 to 20, and the three factors that emerged were confirmed by confirmatory factor analysis. According to the structural equation model results, two of the three emerging factors predicted the outcomes. From the initial 42 items, the meta-tool is composed of 20 items capable of predicting the outcomes as the original tools do. © 2016 John Wiley & Sons, Ltd.

  18. Reconfigurable Fault Tolerance for FPGAs

    NASA Technical Reports Server (NTRS)

    Shuler, Robert, Jr.

    2010-01-01

    The invention allows a field-programmable gate array (FPGA) or similar device to be efficiently reconfigured in whole or in part to provide higher capacity, non-redundant operation. The redundant device consists of functional units such as adders or multipliers, configuration memory for the functional units, a programmable routing method, configuration memory for the routing method, and various other features such as block RAM, I/O (random access memory, input/output) capability, dedicated carry logic, etc. The redundant device has three identical sets of functional units and routing resources and majority voters that correct errors. The configuration memory may or may not be redundant, depending on need. For example, SRAM-based FPGAs will need some type of radiation-tolerant configuration memory, or they will need triple-redundant configuration memory. Flash or anti-fuse devices will generally not need redundant configuration memory. Some means of loading and verifying the configuration memory is also required. These are all components of the pre-existing redundant FPGA. This innovation modifies the voter to accept a MODE input, which specifies whether ordinary voting is to occur, or if redundancy is to be split. Generally, additional routing resources will also be required to pass data between sections of the device created by splitting the redundancy. In redundancy mode, the voters produce an output corresponding to the two inputs that agree, in the usual fashion. In the split mode, the voters select just one input and convey this to the output, ignoring the other inputs. In a dual-redundant system (as opposed to triple-redundant), instead of a voter, there is some means to latch or gate a state update only when both inputs agree. In this case, the invention would require modification of the latch or gate so that it would operate normally in redundant mode, and would separately latch or gate the inputs in non-redundant mode.
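
    The voter behavior described above can be sketched in a few lines of Python (a behavioral model only, not HDL and not the invention's circuit): in redundant mode a bitwise 2-of-3 majority vote corrects a single faulty lane, while in split mode the voter simply forwards one selected lane so the three lanes can carry independent, non-redundant computations. The mode and select naming is illustrative.

```python
# Behavioral sketch of a voter with a mode input: "redundant" majority-votes
# the three lanes; "split" passes one selected lane through unchanged.
def voter(a, b, c, mode="redundant", select=0):
    if mode == "redundant":
        return (a & b) | (a & c) | (b & c)    # bitwise 2-of-3 majority
    return (a, b, c)[select]                  # split mode: forward one lane

assert voter(0b1010, 0b1010, 0b0011) == 0b1010                       # lane c error corrected
assert voter(0b1010, 0b1111, 0b0011, mode="split", select=2) == 0b0011
```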

  19. Quantum Darwinism: Entanglement, branches, and the emergent classicality of redundantly stored quantum information

    NASA Astrophysics Data System (ADS)

    Blume-Kohout, Robin; Zurek, Wojciech H.

    2006-06-01

    We lay a comprehensive foundation for the study of redundant information storage in decoherence processes. Redundancy has been proposed as a prerequisite for objectivity, the defining property of classical objects. We consider two ensembles of states for a model universe consisting of one system and many environments: the first consisting of arbitrary states, and the second consisting of “singly branching” states consistent with a simple decoherence model. Typical states from the random ensemble do not store information about the system redundantly, but information stored in branching states has a redundancy proportional to the environment’s size. We compute the specific redundancy for a wide range of model universes, and fit the results to a simple first-principles theory. Our results show that the presence of redundancy divides information about the system into three parts: classical (redundant); purely quantum; and the borderline, undifferentiated or “nonredundant,” information.

  20. An efficient approach for inverse kinematics and redundancy resolution scheme of hyper-redundant manipulators

    NASA Astrophysics Data System (ADS)

    Chembuly, V. V. M. J. Satish; Voruganti, Hari Kumar

    2018-04-01

    Hyper-redundant manipulators have a larger number of degrees of freedom (DOF) than required to perform a given task. The additional DOF give such manipulators the flexibility to work in highly cluttered environments and in constrained workspaces. Inverse kinematics (IK) of hyper-redundant manipulators is complicated by the large number of DOF, and these manipulators have multiple IK solutions. The redundancy gives a choice of selecting the best solution out of the multiple solutions based on certain criteria such as obstacle avoidance, singularity avoidance, joint limit avoidance and joint torque minimization. This paper focuses on the IK solution and redundancy resolution of hyper-redundant manipulators using a classical optimization approach. Joint positions are computed by optimizing various criteria for serial hyper-redundant manipulators while traversing different paths in the workspace. Several cases are addressed using this scheme to obtain the inverse kinematic solution while optimizing criteria like obstacle avoidance and joint limit avoidance.
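
    A minimal example of the optimization-based approach, assuming a planar 4-link arm (redundant for a 2-D point task): scipy.optimize.minimize searches for joint angles that satisfy the forward-kinematics equality constraint while minimizing deviation from mid-range angles, a simple joint-limit-avoidance criterion. The link lengths, joint limits, and target below are invented and the criterion is one of several the paper considers.

```python
import numpy as np
from scipy.optimize import minimize

# Planar 4-link arm (redundant for a 2-D point-reaching task).
L = np.array([1.0, 0.8, 0.6, 0.4])                    # link lengths (illustrative)
lower, upper = np.full(4, -np.pi / 2), np.full(4, np.pi / 2)
target = np.array([1.5, 1.0])

def fk(q):
    """Forward kinematics: end-effector position for relative joint angles q."""
    angles = np.cumsum(q)
    return np.array([np.sum(L * np.cos(angles)), np.sum(L * np.sin(angles))])

mid = (lower + upper) / 2
objective = lambda q: np.sum((q - mid) ** 2)          # joint-limit-avoidance criterion
constraints = {"type": "eq", "fun": lambda q: fk(q) - target}

res = minimize(objective, x0=np.full(4, 0.1), method="SLSQP",
               bounds=list(zip(lower, upper)), constraints=constraints)
print("joint angles:", np.round(res.x, 3))
print("end-effector:", np.round(fk(res.x), 3), "target:", target)
```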

  1. A review of alternative approaches to healthcare resource allocation.

    PubMed

    Petrou, S; Wolstenholme, J

    2000-07-01

    The resources available for healthcare are limited compared with demand, if not need, and all healthcare systems, regardless of their financing and organisation, employ mechanisms to ration or prioritise finite healthcare resources. This paper reviews alternative approaches that can be used to allocate healthcare resources. It discusses the problems encountered when allocating healthcare resources according to free market principles. It then proceeds to discuss the advantages and disadvantages of alternative resource allocation approaches that can be applied to public health systems. These include: (i) approaches based on the concept of meeting the needs of the population to maximising its capacity to benefit from interventions; (ii) economic approaches that identify the most efficient allocation of resources with the view of maximising health benefits or other measures of social welfare; (iii) approaches that seek to ration healthcare by age; and (iv) approaches that resolve resource allocation disputes through debate and bargaining. At present, there appears to be no consensus about the relative importance of the potentially conflicting principles that can be used to guide resource allocation decisions. It is concluded that whatever shape tomorrow's health service takes, the requirement to make equitable and efficient use of finite healthcare resources will remain.

  2. Allocating responsibility for environmental risks: A comparative analysis of examples from water governance.

    PubMed

    Doorn, Neelke

    2017-03-01

    The focus of the present study is on the allocation of responsibilities for addressing environmental risks in transboundary water governance. Effective environmental management in transboundary situations requires coordinated and cooperative action among diverse individuals and organizations. Currently, little insight exists on how to foster collective action such that individuals and organizations take the responsibility to address transboundary environmental risks. On the basis of 4 cases of transboundary water governance, it will be shown how certain allocation principles are more likely to encourage cooperative action. The main lesson from these case studies is that the allocation of responsibilities should be seen as a risk distribution problem, including considerations of effectiveness, efficiency, and fairness. Integr Environ Assess Manag 2017;13:371-375. © 2016 SETAC.

  3. Optimal Resource Allocation under Fair QoS in Multi-tier Server Systems

    NASA Astrophysics Data System (ADS)

    Akai, Hirokazu; Ushio, Toshimitsu; Hayashi, Naoki

    Recent developments in network technology have made multi-tier server systems possible, where several tiers perform functionally different processing requested by clients. It is an important issue to allocate the resources of the systems to clients dynamically based on their current requests. On the other hand, Q-RAM has been proposed for resource allocation in real-time systems. In such server systems, it is important that the execution results of all applications requested by clients achieve the same QoS (quality of service) level. In this paper, we extend Q-RAM to multi-tier server systems and propose a method for optimal resource allocation with fairness of the QoS levels of clients' requests. We also consider the problem of assigning physical machines to be put to sleep in each tier so that the energy consumption is minimized.

  4. 77 FR 5862 - Self-Regulatory Organizations; The NASDAQ Stock Market LLC; Notice of Filing and Immediate...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-02-06

    ... subscribers only): Per 64kb increase above $200/month. 128kb T1 base. Option 1, 2, or 3 with Fee for Option 1.../month. redundancy), dual hubs (one for redundancy), and dual router (one for redundancy). Option 3: Dual T1 lines (one for $2,500/month. redundancy), dual hubs (one for redundancy), and dual routers (one...

  5. Community-aware task allocation for social networked multiagent systems.

    PubMed

    Wang, Wanyuan; Jiang, Yichuan

    2014-09-01

    In this paper, we propose a novel community-aware task allocation model for social networked multiagent systems (SN-MASs), where each agent's cooperation domain is constrained to its community and each agent can negotiate only with its intracommunity member agents. Under such community-aware scenarios, we prove that it remains NP-hard to maximize the system's overall profit. To solve this problem effectively, we present a heuristic algorithm that is composed of three phases: 1) task selection: select the desirable task to be allocated preferentially; 2) allocation to community: allocate the selected task to communities based on a significant-task-first heuristic; and 3) allocation to agent: negotiate resources for the selected task based on a nonoverlap agent-first and breadth-first resource negotiation mechanism. Through theoretical analyses and experiments, the advantages of our presented heuristic algorithm and community-aware task allocation model are validated. 1) Our presented heuristic algorithm performs very closely to the benchmark exponential brute-force optimal algorithm and the network flow-based greedy algorithm in terms of system overall profit in small-scale applications. Moreover, in large-scale applications, the presented heuristic algorithm achieves approximately the same overall system profit but significantly reduces the computational load compared with the greedy algorithm. 2) Our presented community-aware task allocation model reduces the system communication cost compared with the previous global-aware task allocation model and greatly improves the system's overall profit compared with the previous local neighbor-aware task allocation model.

  6. Optimization of Land Use Suitability for Agriculture Using Integrated Geospatial Model and Genetic Algorithms

    NASA Astrophysics Data System (ADS)

    Mansor, S. B.; Pormanafi, S.; Mahmud, A. R. B.; Pirasteh, S.

    2012-08-01

    In this study, a geospatial model for land use allocation was developed from the view of simulating the biological autonomous adaptability to the environment and the infrastructural preference. The model was developed based on a multi-agent genetic algorithm and was customized to accommodate the constraints set for the study area, namely resource saving and environmental friendliness. The model was then applied to solve the practical multi-objective spatial optimization allocation problems of land use in the core region of the Menderjan Basin in Iran. The first task was to study the dominant crops and the economic suitability evaluation of the land. The second task was to determine the fitness function for the genetic algorithm. The third objective was to optimize the land use map using economic benefits. The results indicate that the proposed model has much better performance for solving complex multi-objective spatial optimization allocation problems and that it is a promising method for generating land use alternatives for further consideration in spatial decision-making.

  7. Methodologies for optimal resource allocation to the national space program and new space utilizations. Volume 1: Technical description

    NASA Technical Reports Server (NTRS)

    1971-01-01

    The optimal allocation of resources to the national space program over an extended time period requires the solution of a large combinatorial problem in which the program elements are interdependent. The computer model uses an accelerated search technique to solve this problem. The model contains a large number of options selectable by the user to provide flexible input and a broad range of output for use in sensitivity analyses of all entering elements. Examples of these options are budget smoothing under varied appropriation levels, entry of inflation and discount effects, and probabilistic output which provides quantified degrees of certainty that program costs will remain within planned budget. Criteria and related analytic procedures were established for identifying potential new space program directions. Used in combination with the optimal resource allocation model, new space applications can be analyzed in realistic perspective, including the advantage gain from existing space program plant and on-going programs such as the space transportation system.

  8. Studies in integrated line-and packet-switched computer communication systems

    NASA Astrophysics Data System (ADS)

    Maglaris, B. S.

    1980-06-01

    The problem of efficiently allocating the bandwidth of a trunk to both types of traffic is handled for various system and traffic models. A performance analysis is carried out both for variable and fixed frame schemes. It is shown that variable frame schemes, adjusting the frame length according to the traffic variations, offer better trunk utilization at the cost of the additional hardware and software complexity needed because of the lack of synchronization. An optimization study on the fixed frame schemes follows. The problem of dynamically allocating the fixed frame to both types of traffic is formulated as a Markovian Decision process. It is shown that the movable boundary scheme, suggested for commercial implementations of integrated multiplexors, offers optimal or near optimal performance and simplicity of implementation. Finally, the behavior of the movable boundary integrated scheme is studied for tandem link connections. Under the assumptions made for the line-switched traffic, the forward allocation technique is found to offer the best alternative among different path set-up strategies.

  9. A supplier selection and order allocation problem with stochastic demands

    NASA Astrophysics Data System (ADS)

    Zhou, Yun; Zhao, Lei; Zhao, Xiaobo; Jiang, Jianhua

    2011-08-01

    We consider a system comprising a retailer and a set of candidate suppliers that operates within a finite planning horizon of multiple periods. The retailer replenishes its inventory from the suppliers and satisfies stochastic customer demands. At the beginning of each period, the retailer makes decisions on the replenishment quantity, supplier selection and order allocation among the selected suppliers. An optimisation problem is formulated to minimise the total expected system cost, which includes an outer level stochastic dynamic program for the optimal replenishment quantity and an inner level integer program for supplier selection and order allocation with a given replenishment quantity. For the inner level subproblem, we develop a polynomial algorithm to obtain optimal decisions. For the outer level subproblem, we propose an efficient heuristic for the system with integer-valued inventory, based on the structural properties of the system with real-valued inventory. We investigate the efficiency of the proposed solution approach, as well as the impact of parameters on the optimal replenishment decision with numerical experiments.

  10. Risk-Based Sampling: I Don't Want to Weight in Vain.

    PubMed

    Powell, Mark R

    2015-12-01

    Recently, there has been considerable interest in developing risk-based sampling for food safety and animal and plant health for efficient allocation of inspection and surveillance resources. The problem of risk-based sampling allocation presents a challenge similar to financial portfolio analysis. Markowitz (1952) laid the foundation for modern portfolio theory based on mean-variance optimization. However, a persistent challenge in implementing portfolio optimization is the problem of estimation error, leading to false "optimal" portfolios and unstable asset weights. In some cases, portfolio diversification based on simple heuristics (e.g., equal allocation) has better out-of-sample performance than complex portfolio optimization methods due to estimation uncertainty. Even for portfolios with a modest number of assets, the estimation window required for true optimization may imply an implausibly long stationary period. The implications for risk-based sampling are illustrated by a simple simulation model of lot inspection for a small, heterogeneous group of producers. © 2015 Society for Risk Analysis.
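
    The mean-variance framing in the abstract can be illustrated in a few lines of numpy: closed-form minimum-variance weights versus the simple equal-allocation heuristic the abstract mentions. The covariance matrix below is invented, and a real inspection-allocation problem would add the estimation error and constraints the article discusses.

```python
import numpy as np

# Illustrative covariance matrix of risk estimates for four producers/lots
# (made-up numbers). The minimum-variance weights subject to summing to 1
# have the closed form w = C^{-1} 1 / (1^T C^{-1} 1).
C = np.array([
    [0.040, 0.006, 0.004, 0.002],
    [0.006, 0.090, 0.010, 0.003],
    [0.004, 0.010, 0.060, 0.005],
    [0.002, 0.003, 0.005, 0.120],
])
ones = np.ones(4)
w_opt = np.linalg.solve(C, ones)
w_opt /= ones @ w_opt
w_equal = ones / 4                     # simple heuristic: equal allocation

for name, w in [("min-variance", w_opt), ("equal", w_equal)]:
    print(f"{name:13s} weights={np.round(w, 3)}  variance={w @ C @ w:.4f}")
```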

  11. On the Fair Division of Multiple Stochastic Pies to Multiple Agents within the Nash Bargaining Solution

    PubMed Central

    Karmperis, Athanasios C.; Aravossis, Konstantinos; Tatsiopoulos, Ilias P.; Sotirchos, Anastasios

    2012-01-01

    The fair division of a surplus is one of the most widely examined problems. This paper focuses on bargaining problems with fixed disagreement payoffs where risk-neutral agents have reached an agreement that is the Nash-bargaining solution (NBS). We consider a stochastic environment, in which the overall return consists of multiple pies with uncertain sizes, and we examine how these pies can be allocated with fairness among agents. Specifically, fairness is based on Aristotle’s maxim: “equals should be treated equally and unequals unequally, in proportion to the relevant inequality”. In this context, fairness is achieved when all the individual stochastic surplus shares allocated to agents are distributed in proportion to the NBS. We introduce a novel algorithm, which can be used to compute the ratio of each pie that should be allocated to each agent, in order to ensure fairness within a symmetric or asymmetric NBS. PMID:23024752
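
    A toy illustration of the proportionality idea only (not the authors' algorithm, which treats the stochastic pie sizes in more detail): each pie's expected size is split among agents in proportion to hypothetical NBS shares, so every individual share is distributed in proportion to the NBS.

```python
# Hypothetical NBS shares and expected pie sizes; figures are invented.
nbs_share = {"agent1": 0.5, "agent2": 0.3, "agent3": 0.2}   # sums to 1
expected_pies = {"pie_A": 120.0, "pie_B": 45.0, "pie_C": 80.0}

# Allocate each pie in proportion to the NBS shares.
for pie, size in expected_pies.items():
    split = {agent: round(share * size, 1) for agent, share in nbs_share.items()}
    print(pie, split)
```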

  12. System of systems design: Evaluating aircraft in a fleet context using reliability and non-deterministic approaches

    NASA Astrophysics Data System (ADS)

    Frommer, Joshua B.

    This work develops and implements a solution framework that allows for an integrated solution to a resource allocation system-of-systems problem associated with designing vehicles for integration into an existing fleet to extend that fleet's capability while improving efficiency. Typically, aircraft design focuses on using a specific design mission while a fleet perspective would provide a broader capability. Aspects of design for both the vehicles and missions may be, for simplicity, deterministic in nature or, in a model that reflects actual conditions, uncertain. Toward this end, the set of tasks or goals for the to-be-planned system-of-systems will be modeled more accurately with non-deterministic values, and the designed platforms will be evaluated using reliability analysis. The reliability, defined as the probability of a platform or set of platforms to complete possible missions, will contribute to the fitness of the overall system. The framework includes building surrogate models for metrics such as capability and cost, and includes the ideas of reliability in the overall system-level design space. The concurrent design and allocation system-of-systems problem is a multi-objective mixed integer nonlinear programming (MINLP) problem. This study considered two system-of-systems problems that seek to simultaneously design new aircraft and allocate these aircraft into a fleet to provide a desired capability. The Coast Guard's Integrated Deepwater System program inspired the first problem, which consists of a suite of search-and-find missions for aircraft based on descriptions from the National Search and Rescue Manual. The second represents suppression of enemy air defense operations similar to those carried out by the U.S. Air Force, proposed as part of the Department of Defense Network Centric Warfare structure, and depicted in MILSTD-3013. The two problems seem similar, with long surveillance segments, but because of the complex nature of aircraft design, the analysis of the vehicle for high-speed attack combined with a long loiter period is considerably different from that for quick cruise to an area combined with a low speed search. However, the framework developed to solve this class of system-of-systems problem handles both scenarios and leads to a solution type for this kind of problem. On the vehicle-level of the problem, different technology can have an impact on the fleet-level. One such technology is Morphing, the ability to change shape, which is an ideal candidate technology for missions with dissimilar segments, such as the aforementioned two. A framework, using surrogate models based on optimally-sized aircraft, and using probabilistic parameters to define a concept of operations, is investigated; this has provided insight into the setup of the optimization problem, the use of the reliability metric, and the measurement of fleet level impacts of morphing aircraft. The research consisted of four phases. The two initial phases built and defined the framework to solve system-of-systems problem; these investigations used the search-and-find scenario as the example application. The first phase included the design of fixed-geometry and morphing aircraft for a range of missions and evaluated the aircraft capability using non-deterministic mission parameters. The second phase introduced the idea of multiple aircraft in a fleet, but only considered a fleet consisting of one aircraft type. 
The third phase incorporated the simultaneous design of a new vehicle and allocation into a fleet for the search-and-find scenario; in this phase, multiple types of aircraft are considered. The fourth phase repeated the simultaneous new aircraft design and fleet allocation for the SEAD scenario to show that the approach is not specific to the search-and-find scenario. The framework presented in this work appears to be a viable approach for concurrently designing and allocating constituents in a system, specifically aircraft in a fleet. The research also shows that new technology impact can be assessed at the fleet level using conceptual design principles.

  13. Redundancy in electronic health record corpora: analysis, impact on text mining performance and mitigation strategies.

    PubMed

    Cohen, Raphael; Elhadad, Michael; Elhadad, Noémie

    2013-01-16

    The increasing availability of Electronic Health Record (EHR) data and specifically free-text patient notes presents opportunities for phenotype extraction. Text-mining methods in particular can help disease modeling by mapping named-entity mentions to terminologies and clustering semantically related terms. EHR corpora, however, exhibit specific statistical and linguistic characteristics when compared with corpora in the biomedical literature domain. We focus on copy-and-paste redundancy: clinicians typically copy and paste information from previous notes when documenting a current patient encounter. Thus, within a longitudinal patient record, one expects to observe heavy redundancy. In this paper, we ask three research questions: (i) How can redundancy be quantified in large-scale text corpora? (ii) Conventional wisdom is that larger corpora yield better results in text mining. But how does the observed EHR redundancy affect text mining? Does such redundancy introduce a bias that distorts learned models? Or does the redundancy introduce benefits by highlighting stable and important subsets of the corpus? (iii) How can one mitigate the impact of redundancy on text mining? We analyze a large-scale EHR corpus and quantify redundancy both in terms of word and semantic concept repetition. We observe redundancy levels of about 30% and non-standard distribution of both words and concepts. We measure the impact of redundancy on two standard text-mining applications: collocation identification and topic modeling. We compare the results of these methods on synthetic data with controlled levels of redundancy and observe significant performance variation. Finally, we compare two mitigation strategies to avoid redundancy-induced bias: (i) a baseline strategy, keeping only the last note for each patient in the corpus; (ii) removing redundant notes with an efficient fingerprinting-based algorithm. For text mining, preprocessing the EHR corpus with fingerprinting yields significantly better results. Before applying text-mining techniques, one must pay careful attention to the structure of the analyzed corpora. While the importance of data cleaning has been known for low-level text characteristics (e.g., encoding and spelling), high-level and difficult-to-quantify corpus characteristics, such as naturally occurring redundancy, can also hurt text mining. Fingerprinting enables text-mining techniques to leverage available data in the EHR corpus, while avoiding the bias introduced by redundancy.
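
    A hedged sketch of fingerprinting-style redundancy removal (not the authors' exact algorithm): hash word n-grams per note and drop any note whose fingerprint overlap with an already kept note exceeds a threshold; the toy notes and threshold are illustrative.

        def fingerprints(text, n=5):
            """Hash the note's word n-grams into a fingerprint set."""
            words = text.lower().split()
            return {hash(" ".join(words[i:i + n])) for i in range(max(len(words) - n + 1, 1))}

        def deduplicate(notes, threshold=0.5):
            """Keep a note only if its overlap with every previously kept note stays below threshold."""
            kept, kept_fps = [], []
            for note in notes:                       # notes assumed to be in chronological order
                fp = fingerprints(note)
                redundant = any(len(fp & old) / max(len(fp), 1) > threshold for old in kept_fps)
                if not redundant:
                    kept.append(note)
                    kept_fps.append(fp)
            return kept

        notes = ["pt stable on metformin follow up in 3 months",
                 "pt stable on metformin follow up in 3 months new complaint of cough",
                 "acute visit for cough started azithromycin"]
        print(len(deduplicate(notes)), "of", len(notes), "notes kept")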

  14. Redundant Asynchronous Microprocessor System

    NASA Technical Reports Server (NTRS)

    Meyer, G.; Johnston, J. O.; Dunn, W. R.

    1985-01-01

    Fault-tolerant computer structure called RAMPS (for redundant asynchronous microprocessor system) has simplicity of static redundancy but offers intermittent-fault handling ability of complex, dynamically redundant systems. New structure useful wherever several microprocessors are employed for control - in aircraft, industrial processes, robotics, and automatic machining, for example.

  15. Dynamic programming methods for concurrent design and dynamic allocation of vehicles embedded in a system-of-systems

    NASA Astrophysics Data System (ADS)

    Nusawardhana

    2007-12-01

    Recent developments indicate a changing perspective on how systems or vehicles should be designed. Such a transition comes from the way decision makers in defense related agencies address complex problems. Complex problems are now often posed in terms of the capabilities desired, rather than in terms of requirements for a single system. As a result, the way to provide a set of capabilities is through a collection of several individual, independent systems. This collection of individual independent systems is often referred to as a "System of Systems" (SoS). Because of the independent nature of the constituent systems in an SoS, approaches to design an SoS, and more specifically, approaches to design a new system as a member of an SoS, will likely be different than the traditional design approaches for complex, monolithic (meaning the constituent parts have no ability for independent operation) systems. Because a system of systems evolves over time, this simultaneous system design and resource allocation problem should be investigated in a dynamic context. Such dynamic optimization problems are similar to conventional control problems. However, this research considers problems which not only seek optimizing policies but also seek the proper system or vehicle to operate under these policies. This thesis presents a framework and a set of analytical tools to solve a class of SoS problems that involves the simultaneous design of a new system and allocation of the new system along with existing systems. Such a class of problems belongs to the problems of concurrent design and control of a new system with solutions consisting of both optimal system design and optimal control strategy. Rigorous mathematical arguments show that the proposed framework solves the concurrent design and control problems. Many results exist for dynamic optimization problems of linear systems. In contrast, results on optimal nonlinear dynamic optimization problems are rare. The proposed framework is equipped with the set of analytical tools to solve several cases of nonlinear optimal control problems: continuous- and discrete-time nonlinear problems with applications on both optimal regulation and tracking. These tools are useful when mathematical descriptions of dynamic systems are available. In the absence of such a mathematical model, it is often necessary to derive a solution based on computer simulation. For this case, a set of parameterized decisions may constitute a solution. This thesis presents a method to adjust these parameters based on the principle of simultaneous perturbation stochastic approximation using continuous measurements. The set of tools developed here mostly employs the methods of exact dynamic programming. However, due to the complexity of SoS problems, this research also develops suboptimal solution approaches, collectively recognized as approximate dynamic programming solutions, for large scale problems. The thesis presents, explores, and solves problems from the airline industry, in which a new aircraft is to be designed and allocated along with an existing fleet of aircraft. Because the life cycle of an aircraft is on the order of 10 to 20 years, this problem is to be addressed dynamically so that the new aircraft design is the best design for the fleet over a given time horizon.
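
    A brief sketch of simultaneous perturbation stochastic approximation (SPSA), the simulation-based parameter-tuning idea mentioned above; the quadratic noisy "simulation" and the gain sequences are illustrative stand-ins for a real SoS simulator.

        import random

        def simulate(theta):
            """Placeholder noisy simulation: squared distance from an unknown optimum plus noise."""
            return sum((t - o) ** 2 for t, o in zip(theta, (1.0, -2.0))) + random.gauss(0, 0.01)

        theta = [0.0, 0.0]                                       # decision parameters to tune
        for k in range(1, 501):
            a_k, c_k = 0.1 / k ** 0.602, 0.1 / k ** 0.101        # standard SPSA gain sequences
            delta = [random.choice((-1.0, 1.0)) for _ in theta]  # simultaneous +/-1 perturbation
            plus = simulate([t + c_k * dk for t, dk in zip(theta, delta)])
            minus = simulate([t - c_k * dk for t, dk in zip(theta, delta)])
            ghat = [(plus - minus) / (2 * c_k * dk) for dk in delta]
            theta = [t - a_k * g for t, g in zip(theta, ghat)]

        print("estimated optimal parameters:", [round(t, 2) for t in theta])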

  16. Model for the design of distributed data bases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ram, S.

    This research focuses on developing a model to solve the File Allocation Problem (FAP). The model integrates two major design issues, namely Concurrency Control and Data Distribution. The central node locking mechanism is incorporated in developing a nonlinear integer programming model. Two solution algorithms are proposed, one of which was implemented in FORTRAN V. The allocation of data bases and programs is examined using this heuristic. Several decision rules were also formulated based on the results of the heuristic. A second, more comprehensive heuristic was proposed, based on the knapsack problem. The development and implementation of this algorithm has been left as a topic for future research.

  17. A Survey of the Problem of Unbalanced High School Educational Resource Allocation within the County Region in Gansu Province--Using Seven High Schools in Three Counties as an Example

    ERIC Educational Resources Information Center

    Kai, Liu; Gaofu, Du

    2015-01-01

    The imbalance in allocating high school educational resources within the county region has expanded the imbalances in local high school educational development. This has caused "diseconomies of scale" in high schools, aggravated the "expansion impulse" in building model high schools, limited the growth of effective demand by…

  18. Cellular trade-offs and optimal resource allocation during cyanobacterial diurnal growth

    PubMed Central

    Knoop, Henning; Bockmayr, Alexander; Steuer, Ralf

    2017-01-01

    Cyanobacteria are an integral part of Earth’s biogeochemical cycles and a promising resource for the synthesis of renewable bioproducts from atmospheric CO2. Growth and metabolism of cyanobacteria are inherently tied to the diurnal rhythm of light availability. As yet, however, insight into the stoichiometric and energetic constraints of cyanobacterial diurnal growth is limited. Here, we develop a computational framework to investigate the optimal allocation of cellular resources during diurnal phototrophic growth using a genome-scale metabolic reconstruction of the cyanobacterium Synechococcus elongatus PCC 7942. We formulate phototrophic growth as an autocatalytic process and solve the resulting time-dependent resource allocation problem using constraint-based analysis. Based on a narrow and well-defined set of parameters, our approach results in an ab initio prediction of growth properties over a full diurnal cycle. The computational model allows us to study the optimality of metabolite partitioning during diurnal growth. The cyclic pattern of glycogen accumulation, an emergent property of the model, has timing characteristics that are in qualitative agreement with experimental findings. The approach presented here provides insight into the time-dependent resource allocation problem of phototrophic diurnal growth and may serve as a general framework to assess the optimality of metabolic strategies that evolved in phototrophic organisms under diurnal conditions. PMID:28720699

  19. A new proposal for greenhouse gas emissions responsibility allocation: best available technologies approach.

    PubMed

    Berzosa, Álvaro; Barandica, Jesús M; Fernández-Sánchez, Gonzalo

    2014-01-01

    In recent years, several methodologies have been developed for the quantification of greenhouse gas (GHG) emissions. However, determining who is responsible for these emissions is also quite challenging. The most common approach is to assign emissions to the producer (based on the Kyoto Protocol), but proposals also exist for allocating them to the consumer (based on an ecological footprint perspective) and for a hybrid approach called shared responsibility. In this study, the existing proposals and standards regarding the allocation of GHG emissions responsibilities are analyzed, focusing on their main advantages and problems. A new model of shared responsibility that overcomes some of the existing problems is also proposed. This model is based on applying the best available technologies (BATs). This new approach allocates the responsibility between the producers and the final consumers based on the real capacity of each agent to reduce emissions. The proposed approach is demonstrated using a simple case study of a 4-step life cycle of ammonium nitrate (AN) fertilizer production. The proposed model has the characteristics that the standards and publications for assignment of GHG emissions responsibilities demand. This study presents a new way to assign responsibilities that pushes all the actors in the production chain, including consumers, to reduce pollution. © 2013 SETAC.

  20. Power-efficient distributed resource allocation under goodput QoS constraints for heterogeneous networks

    NASA Astrophysics Data System (ADS)

    Andreotti, Riccardo; Del Fiorentino, Paolo; Giannetti, Filippo; Lottici, Vincenzo

    2016-12-01

    This work proposes a distributed resource allocation (RA) algorithm for packet bit-interleaved coded OFDM transmissions in the uplink of heterogeneous networks (HetNets), characterized by small cells deployed over a macrocell area and sharing the same band. Every user allocates its transmission resources, i.e., bits per active subcarrier, coding rate, and power per subcarrier, to minimize the power consumption while both guaranteeing a target quality of service (QoS) and accounting for the interference inflicted by other users transmitting over the same band. The QoS consists of the number of information bits delivered in error-free packets per unit of time, or goodput (GP), estimated at the transmitter by resorting to an efficient effective SNR mapping technique. First, the RA problem is solved in the point-to-point case, thus deriving an approximate yet accurate closed-form expression for the power allocation (PA). Then, the interference-limited HetNet case is examined, where the RA problem is described as a non-cooperative game, providing a solution in terms of generalized Nash equilibrium. Thanks to the closed-form of the PA, the solution analysis is based on the best response concept. Hence, sufficient conditions for existence and uniqueness of the solution are analytically derived, along with a distributed algorithm capable of reaching the game equilibrium.
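
    A hedged sketch of a best-response style distributed power update (a Foschini-Miljanic target-SINR iteration), standing in for the paper's closed-form goodput-constrained allocation; the channel gains, noise level, and SINR targets are invented.

        import numpy as np

        G = np.array([[1.0, 0.10, 0.05],      # G[i, j]: gain from user j's transmitter to user i's receiver
                      [0.08, 0.9, 0.07],
                      [0.06, 0.12, 1.1]])
        noise = 1e-2
        target_sinr = np.array([2.0, 1.5, 2.5])
        p = np.ones(3) * 0.1                   # initial powers

        for _ in range(50):                    # each user reacts only to measured interference
            interference = G @ p - np.diag(G) * p + noise
            p = target_sinr * interference / np.diag(G)

        sinr = np.diag(G) * p / (G @ p - np.diag(G) * p + noise)
        print("powers:", np.round(p, 3), "achieved SINRs:", np.round(sinr, 2))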

  1. The MAVEN Magnetic Field Investigation

    NASA Technical Reports Server (NTRS)

    Connerney, J. E. P.; Espley, J.; Lawton, P.; Murphy, S.; Odom, J.; Oliversen, R.; Sheppard, D.

    2014-01-01

    The MAVEN magnetic field investigation is part of a comprehensive particles and fields subsystem that will measure the magnetic and electric fields and plasma environment of Mars and its interaction with the solar wind. The magnetic field instrumentation consists of two independent tri-axial fluxgate magnetometer sensors, remotely mounted at the outer extremity of the two solar arrays on small extensions ("boomlets"). The sensors are controlled by independent and functionally identical electronics assemblies that are integrated within the particles and fields subsystem and draw their power from redundant power supplies within that system. Each magnetometer measures the ambient vector magnetic field over a wide dynamic range (to 65,536 nT per axis) with a quantization uncertainty of 0.008 nT in the most sensitive dynamic range and an accuracy of better than 0.05%. Both magnetometers sample the ambient magnetic field at an intrinsic sample rate of 32 vector samples per second. Telemetry is transferred from each magnetometer to the particles and fields package once per second and subsequently passed to the spacecraft after some reformatting. The magnetic field data volume may be reduced by averaging and decimation, when necessary to meet telemetry allocations, and application of data compression, utilizing a lossless 8-bit differencing scheme. The MAVEN magnetic field experiment may be reconfigured in flight to meet unanticipated needs and is fully hardware redundant. A spacecraft magnetic control program was implemented to provide a magnetically clean environment for the magnetic sensors and the MAVEN mission plan provides for occasional spacecraft maneuvers - multiple rotations about the spacecraft x and z axes - to characterize spacecraft fields and/or instrument offsets in flight.

  2. The MAVEN Magnetic Field Investigation

    NASA Astrophysics Data System (ADS)

    Connerney, J. E. P.; Espley, J.; Lawton, P.; Murphy, S.; Odom, J.; Oliversen, R.; Sheppard, D.

    2015-12-01

    The MAVEN magnetic field investigation is part of a comprehensive particles and fields subsystem that will measure the magnetic and electric fields and plasma environment of Mars and its interaction with the solar wind. The magnetic field instrumentation consists of two independent tri-axial fluxgate magnetometer sensors, remotely mounted at the outer extremity of the two solar arrays on small extensions ("boomlets"). The sensors are controlled by independent and functionally identical electronics assemblies that are integrated within the particles and fields subsystem and draw their power from redundant power supplies within that system. Each magnetometer measures the ambient vector magnetic field over a wide dynamic range (to 65,536 nT per axis) with a resolution of 0.008 nT in the most sensitive dynamic range and an accuracy of better than 0.05 %. Both magnetometers sample the ambient magnetic field at an intrinsic sample rate of 32 vector samples per second. Telemetry is transferred from each magnetometer to the particles and fields package once per second and subsequently passed to the spacecraft after some reformatting. The magnetic field data volume may be reduced by averaging and decimation, when necessary to meet telemetry allocations, and application of data compression, utilizing a lossless 8-bit differencing scheme. The MAVEN magnetic field experiment may be reconfigured in flight to meet unanticipated needs and is fully hardware redundant. A spacecraft magnetic control program was implemented to provide a magnetically clean environment for the magnetic sensors and the MAVEN mission plan provides for occasional spacecraft maneuvers—multiple rotations about the spacecraft x and z axes—to characterize spacecraft fields and/or instrument offsets in flight.

  3. JPRS Report China

    DTIC Science & Technology

    1988-12-14

    and the trend among students to "go into business" are some of the problems that have constantly bedeviled China’s academic community. After... should be able to come up with some of the needed funds through their own business endeavors and horizontal links, among other things. Improving... proper arrangements for the placement of redundant workers. They should assign them to production and business activities or paid services run by the

  4. Designing a Forcenet Information Topology

    DTIC Science & Technology

    2004-12-01

    and limitations, but one consistent problem is that by tagging data the amount of bits needed to represent the data grows proportionately. As a... (Frank Morning, Jr., "Smallsats Grow Up," Aviation Week & Space Technology, December 8, 2003)... (such as emails home or... Below are a few important points concerning a huge field of work. a. Dissimilar Redundancy and Reconstitution: A growing concern within the DoD is

  5. Optimal Power Allocation for CC-HARQ-based Cognitive Radio with Statistical CSI in Nakagami Slow Fading Channels

    NASA Astrophysics Data System (ADS)

    Xu, Ding; Li, Qun

    2017-01-01

    This paper addresses the power allocation problem for cognitive radio (CR) based on hybrid-automatic-repeat-request (HARQ) with chase combining (CC) in Nakagami-m slow fading channels. We assume that, instead of the perfect instantaneous channel state information (CSI), only the statistical CSI is available at the secondary user (SU) transmitter. The aim is to minimize the SU outage probability under the primary user (PU) interference outage constraint. Using the Lagrange multiplier method, an iterative and recursive algorithm is derived to obtain the optimal power allocation for each transmission round. Extensive numerical results are presented to illustrate the performance of the proposed algorithm.

  6. Providing appropriate services to individuals in the community: a preliminary case-mix model for allocating personal care services.

    PubMed

    Phillips, Charles D; Dyer, James; Janousek, Vit; Halperin, Lisa; Hawes, Catherine

    2008-01-01

    Personal care services are often provided to clients in community settings through highly discretionary processes. Such processes provide little guidance for caseworkers concerning how public resources should be allocated. The results of such processes almost guarantee that individuals with very similar needs will receive very different levels of care resources. Such disparities in treatment open the door to inequity and ineffectiveness. One way to address this problem is through case-mix classification systems that allocate hours of care according to client needs. This paper outlines the preliminary steps taken by one state in its movement toward such a system.

  7. Restoring Redundancy to the MAP Propulsion System

    NASA Technical Reports Server (NTRS)

    ODonnell, James R., Jr.; Davis, Gary T.; Ward, David K.; Bauer, F. (Technical Monitor)

    2002-01-01

    The Microwave Anisotropy Probe is a follow-on to the Differential Microwave Radiometer instrument on the Cosmic Background Explorer. Sixteen months before launch, it was discovered that from the time of the critical design review, configuration changes had resulted in a significant migration of the spacecraft's center of mass. As a result, the spacecraft no longer had a viable backup control mode in the event of a failure of the negative pitch axis thruster. Potential solutions to this problem were identified, such as adding thruster plume shields to redirect thruster torque, adding mass to, or removing it from, the spacecraft, adding an additional thruster, moving thrusters, bending thrusters (either nozzles or propellant tubing), or accepting the loss of redundancy for the thruster. The impacts of each solution, including effects on the mass, cost, and fuel budgets, as well as schedule, were considered, and it was decided to bend the thruster propellant tubing of the two roll control thrusters, allowing that pair to be used for back-up control in the negative pitch axis. This paper discusses the problem and the potential solutions, and documents the hardware and software changes that needed to be made to implement the chosen solution. Flight data is presented to show the propulsion system on-orbit performance.

  8. Resource Economics

    NASA Astrophysics Data System (ADS)

    Conrad, Jon M.

    2000-01-01

    Resource Economics is a text for students with a background in calculus, intermediate microeconomics, and a familiarity with the spreadsheet software Excel. The book covers basic concepts, shows how to set up spreadsheets to solve dynamic allocation problems, and presents economic models for fisheries, forestry, nonrenewable resources, stock pollutants, option value, and sustainable development. Within the text, numerical examples are posed and solved using Excel's Solver. These problems help make concepts operational, develop economic intuition, and serve as a bridge to the study of real-world problems of resource management. Through these examples and additional exercises at the end of Chapters 1 to 8, students can make dynamic models operational, develop their economic intuition, and learn how to set up spreadsheets for the simulation and optimization of resource and environmental systems. The book is unique in its use of spreadsheet software (Excel) to solve dynamic allocation problems. Conrad is co-author of a previous book for the Press on the subject for graduate students. The approach is extremely student-friendly, giving students the tools to apply research results to actual environmental issues.

  9. Improving Histopathology Laboratory Productivity: Process Consultancy and A3 Problem Solving.

    PubMed

    Yörükoğlu, Kutsal; Özer, Erdener; Alptekin, Birsen; Öcal, Cem

    2017-01-01

    The ISO 17020 quality program has been run in our pathology laboratory for four years to establish an action plan for correction and prevention of identified errors. In this study, we aimed to evaluate the errors that we could not identify through ISO 17020 and/or solve by means of process consulting. Process consulting is carefully intervening in a group or team to help it to accomplish its goals. The A3 problem solving process was run under the leadership of a 'workflow, IT and consultancy manager'. An action team was established consisting of technical staff. A root cause analysis was applied for target conditions, and the 6-S method was implemented for solution proposals. Applicable proposals were activated and the results were rated by six-sigma analysis. Non-applicable proposals were reported to the laboratory administrator. Mislabelling was the most frequently complained-about issue, triggering all pre-analytical errors. There were 21 non-value added steps grouped into 8 main targets on the fishbone diagram (transporting, recording, moving, individual, waiting, over-processing, over-transaction and errors). Unnecessary redundant requests, missing slides, archiving issues, redundant activities, and mislabelling errors were proposed to be solved by improving visibility and fixing spaghetti problems. Spatial re-organization, organizational marking, re-defining some operations, and labeling activities raised the six-sigma score from 24% to 68% for all phases. Operational transactions such as implementation of a pathology laboratory system were suggested for long-term improvement. Laboratory management is a complex process. Quality control is an effective method to improve productivity. Systematic checking in a quality program may not always find and/or solve the problems. External observation may reveal crucial indicators about system failures, providing very simple solutions.

  10. Dual redundant core memory systems

    NASA Technical Reports Server (NTRS)

    Hull, F. E.

    1972-01-01

    Electronic memory system consisting of series redundant drive switch circuits, triple redundant majority voted memory timing functions, and two data registers to provide functional dual redundancy is described. Signal flow through the circuits is illustrated and the sequence of events that occurs within the memory system is explained.

  11. Exploration of joint redundancy but not task space variability facilitates supervised motor learning.

    PubMed

    Singh, Puneet; Jana, Sumitash; Ghosal, Ashitava; Murthy, Aditya

    2016-12-13

    The number of joints and muscles in a human arm is more than what is required for reaching to a desired point in 3D space. Although previous studies have emphasized how such redundancy and the associated flexibility may play an important role in path planning, control of noise, and optimization of motion, whether and how redundancy might promote motor learning has not been investigated. In this work, we quantify redundancy space and investigate its significance and effect on motor learning. We propose that a larger redundancy space leads to faster learning across subjects. We observed this pattern in subjects learning novel kinematics (visuomotor adaptation) and dynamics (force-field adaptation). Interestingly, we also observed differences in the redundancy space between the dominant hand and nondominant hand that explained differences in the learning of dynamics. Taken together, these results provide support for the hypothesis that redundancy aids in motor learning and that the redundant component of motor variability is not noise.

  12. Exploration of joint redundancy but not task space variability facilitates supervised motor learning

    PubMed Central

    Singh, Puneet; Jana, Sumitash; Ghosal, Ashitava; Murthy, Aditya

    2016-01-01

    The number of joints and muscles in a human arm is more than what is required for reaching to a desired point in 3D space. Although previous studies have emphasized how such redundancy and the associated flexibility may play an important role in path planning, control of noise, and optimization of motion, whether and how redundancy might promote motor learning has not been investigated. In this work, we quantify redundancy space and investigate its significance and effect on motor learning. We propose that a larger redundancy space leads to faster learning across subjects. We observed this pattern in subjects learning novel kinematics (visuomotor adaptation) and dynamics (force-field adaptation). Interestingly, we also observed differences in the redundancy space between the dominant hand and nondominant hand that explained differences in the learning of dynamics. Taken together, these results provide support for the hypothesis that redundancy aids in motor learning and that the redundant component of motor variability is not noise. PMID:27911808

  13. Maximization of Learning Speed in the Motor Cortex Due to Neuronal Redundancy

    PubMed Central

    Takiyama, Ken; Okada, Masato

    2012-01-01

    Many redundancies play functional roles in motor control and motor learning. For example, kinematic and muscle redundancies contribute to stabilizing posture and impedance control, respectively. Another redundancy is the number of neurons themselves; there are overwhelmingly more neurons than muscles, and many combinations of neural activation can generate identical muscle activity. The functional roles of this neuronal redundancy remain unknown. Analysis of a redundant neural network model makes it possible to investigate these functional roles while varying the number of model neurons and holding constant the number of output units. Our analysis reveals that learning speed reaches its maximum value if and only if the model includes sufficient neuronal redundancy. This analytical result does not depend on whether the distribution of the preferred direction is uniform or skewed bimodal, both of which have been reported in neurophysiological studies. Neuronal redundancy maximizes learning speed, even if the neural network model includes recurrent connections, a nonlinear activation function, or nonlinear muscle units. Furthermore, our results do not rely on the shape of the generalization function. The results of this study suggest that one of the functional roles of neuronal redundancy is to maximize learning speed. PMID:22253586

  14. Stochastic Averaging for Constrained Optimization With Application to Online Resource Allocation

    NASA Astrophysics Data System (ADS)

    Chen, Tianyi; Mokhtari, Aryan; Wang, Xin; Ribeiro, Alejandro; Giannakis, Georgios B.

    2017-06-01

    Existing approaches to resource allocation for today's stochastic networks struggle to meet fast convergence and tolerable delay requirements. The present paper leverages online learning advances to facilitate stochastic resource allocation tasks. By recognizing the central role of Lagrange multipliers, the underlying constrained optimization problem is formulated as a machine learning task involving both training and operational modes, with the goal of learning the sought multipliers in a fast and efficient manner. To this end, an order-optimal offline learning approach is developed first for batch training, and it is then generalized to the online setting with a procedure termed learn-and-adapt. The novel resource allocation protocol combines the benefits of stochastic approximation and statistical learning to obtain low-complexity online updates with learning errors close to the statistical accuracy limits, while still preserving adaptation performance, which in the stochastic network optimization context guarantees queue stability. Analysis and simulated tests demonstrate that the proposed data-driven approach improves the delay and convergence performance of existing resource allocation schemes.
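
    A minimal sketch of the multiplier-learning idea, assuming a toy concave per-slot utility: a Lagrange multiplier is updated from observed constraint violations while the per-slot allocation maximises the instantaneous Lagrangian; the utility form, budget, and step size are illustrative, not the paper's protocol.

        import random

        mu = 0.05            # dual step size
        budget = 1.0         # long-term average resource budget per slot
        lam = 0.0            # Lagrange multiplier estimate
        used, T = 0.0, 10000

        for t in range(T):
            price = random.uniform(0.5, 2.0)                 # random per-slot utility weight
            # Primal step: maximise price*sqrt(x) - lam*x  =>  x = (price / (2*lam))**2.
            x = (price / (2 * lam)) ** 2 if lam > 1e-6 else 4.0
            x = min(x, 4.0)                                  # per-slot allocation cap
            used += x
            lam = max(lam + mu * (x - budget), 0.0)          # dual (multiplier) update from violation

        print("multiplier:", round(lam, 3), "average usage:", round(used / T, 3))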

  15. Outcome based state budget allocation for diabetes prevention programs using multi-criteria optimization with robust weights.

    PubMed

    Mehrotra, Sanjay; Kim, Kibaek

    2011-12-01

    We consider the problem of outcomes-based budget allocations to chronic disease prevention programs across the United States (US) to achieve greater geographical healthcare equity. We use the Diabetes Prevention and Control Programs (DPCP) by the Centers for Disease Control and Prevention (CDC) as an example. We present a multi-criteria robust weighted sum model for such multi-criteria decision making in a group decision setting. Principal component analysis and an inverse linear programming technique are presented and used to study the actual 2009 budget allocation by the CDC. Our results show that the CDC budget allocation process for the DPCPs is not likely model based. In our empirical study, the relative weights for different prevalence and comorbidity factors and the corresponding budgets obtained under different weight regions are discussed. Parametric analysis suggests that money should be allocated to states to promote diabetes education and to increase patient-healthcare provider interactions to reduce disparity across the US.

  16. Shifting orders among suppliers considering risk, price and transportation cost

    NASA Astrophysics Data System (ADS)

    Revitasari, C.; Pujawan, I. N.

    2018-04-01

    Supplier order allocation is an important supply chain decision for an enterprise. It is related to the supplier’s function as a provider of raw materials and other supporting materials that will be used in the production process. Most work on order allocation has been based on costs and other supply chain performance measures, but very few studies take risks into consideration. In this paper we address the problem of order allocation of a single commodity sourced from multiple suppliers, considering supply risks in addition to minimizing transportation costs. The supply chain risk was investigated and a procedure was proposed in the risk mitigation phase in the form of a risk profile. The objective of including the risk profile in order allocation is to shift product flow from relatively risky suppliers to relatively less risky ones. The proposed procedure is applied to a sugar company. The result suggests that order allocations should be maximized to suppliers that have relatively low risk and minimized to suppliers that have relatively larger risks.

  17. Opportunistic Capacity-Based Resource Allocation for Chunk-Based Multi-Carrier Cognitive Radio Sensor Networks

    PubMed Central

    Huang, Jie; Zeng, Xiaoping; Jian, Xin; Tan, Xiaoheng; Zhang, Qi

    2017-01-01

    The spectrum allocation for cognitive radio sensor networks (CRSNs) has received considerable research attention under the assumption that the spectrum environment is static. However, in practice, the spectrum environment varies over time due to primary user/secondary user (PU/SU) activity and mobility, resulting in time-varied spectrum resources. This paper studies resource allocation for chunk-based multi-carrier CRSNs with time-varied spectrum resources. We present a novel opportunistic capacity model through a continuous time semi-Markov chain (CTSMC) to describe the time-varied spectrum resources of chunks and, based on this, a joint power and chunk allocation model by considering the opportunistically available capacity of chunks is proposed. To reduce the computational complexity, we split this model into two sub-problems and solve them via the Lagrangian dual method. Simulation results illustrate that the proposed opportunistic capacity-based resource allocation algorithm can achieve better performance compared with traditional algorithms when the spectrum environment is time-varied. PMID:28106803

  18. Which patients do I treat? An experimental study with economists and physicians

    PubMed Central

    2012-01-01

    This experiment investigates decisions made by prospective economists and physicians in an allocation problem which can be framed either medically or neutrally. The potential recipients differ with respect to their minimum needs as well as to how much they benefit from a treatment. We classify the allocators as either 'selfish', 'Rawlsian', or 'maximizing the number of recipients'. Economists tend to maximize their own payoff, whereas the physicians' choices are more in line with maximizing the number of recipients and with Rawlsianism. Regarding the framing, we observe that professional norms surface more clearly in familiar settings. Finally, we scrutinize how the probability of being served and the allocated quantity depend on a recipient's characteristics as well as on the allocator type. JEL Classification: A13, I19, C91, C72 PMID:22827912

  19. The Effects of Race Conditions when Implementing Single-Source Redundant Clock Trees in Triple Modular Redundant Synchronous Architectures

    NASA Technical Reports Server (NTRS)

    Berg, Melanie D.; Label, Kenneth A.; Pellish, Jonathan

    2016-01-01

    We present the challenges that arise when using redundant clock domains due to their clock-skew. Heavy-ion radiation data show that a singular clock domain (DTMR) provides an improved TMR methodology for SRAM-based FPGAs over redundant clocks.

  20. Intersensory Redundancy Enhances Memory in Bobwhite Quail Embryos

    ERIC Educational Resources Information Center

    Lickliter, Robert; Bahrick, Lorraine E.; Honeycutt, Hunter

    2004-01-01

    Information presented concurrently and redundantly to 2 or more senses (intersensory redundancy) has been shown to recruit attention and promote perceptual learning of amodal stimulus properties in animal embryos and human infants. This study examined whether the facilitative effect of intersensory redundancy also extends to the domain of memory.…

  1. Fusion Prevents the Redundant Signals Effect: Evidence from Stereoscopically Presented Stimuli

    ERIC Educational Resources Information Center

    Schroter, Hannes; Fiedler, Anja; Miller, Jeff; Ulrich, Rolf

    2011-01-01

    In a simple reaction time (RT) experiment, visual stimuli were stereoscopically presented either to one eye (single stimulation) or to both eyes (redundant stimulation), with brightness matched for single and redundant stimulations. Redundant stimulation resulted in two separate percepts when noncorresponding retinal areas were stimulated, whereas…

  2. Computer-Aided Group Problem Solving for Unified Life Cycle Engineering (ULCE)

    DTIC Science & Technology

    1989-02-01

    defining the problem, generating alternative solutions, evaluating alternatives, selecting alternatives, and implementing the solution. Systems...specialist in group dynamics, assists the group in formulating the problem and selecting a model framework. The analyst provides the group with computer...allocating resources, evaluating and selecting options, making judgments explicit, and analyzing dynamic systems. c. University of Rhode Island Drs. Geoffery

  3. Generating Data Flow Programs from Nonprocedural Specifications.

    DTIC Science & Technology

    1983-03-01

    With the I-structures, Gajski points out, it is difficult to know ahead of time the optimal memory allocation scheme to partition large arrays. Memory contention problems may occur for frequently accessed elements stored in the same memory module. Gajski observes that these are the same problem which

  4. Capacity improvement using simulation optimization approaches: A case study in the thermotechnology industry

    NASA Astrophysics Data System (ADS)

    Yelkenci Köse, Simge; Demir, Leyla; Tunalı, Semra; Türsel Eliiyi, Deniz

    2015-02-01

    In manufacturing systems, optimal buffer allocation has a considerable impact on capacity improvement. This study presents a simulation optimization procedure to solve the buffer allocation problem in a heat exchanger production plant so as to improve the capacity of the system. For optimization, three metaheuristic-based search algorithms, i.e. a binary-genetic algorithm (B-GA), a binary-simulated annealing algorithm (B-SA) and a binary-tabu search algorithm (B-TS), are proposed. These algorithms are integrated with the simulation model of the production line. The simulation model, which captures the stochastic and dynamic nature of the production line, is used as an evaluation function for the proposed metaheuristics. The experimental study with benchmark problem instances from the literature and the real-life problem show that the proposed B-TS algorithm outperforms B-GA and B-SA in terms of solution quality.
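
    A sketch of a binary-encoded simulated annealing search for buffer placement; the throughput() function below is only a placeholder for the discrete-event simulation model the authors use as the evaluator, and the line size and buffer budget are hypothetical.

        import math, random

        N_SLOTS, TOTAL_BUFFERS = 8, 4          # hypothetical line with 8 candidate buffer slots

        def throughput(alloc):
            # Placeholder evaluator: rewards spreading buffers away from one end of the line.
            spread = sum(i * b for i, b in enumerate(alloc))
            return sum(alloc) * 10 - abs(spread - TOTAL_BUFFERS * (N_SLOTS - 1) / 2)

        def random_alloc():
            slots = random.sample(range(N_SLOTS), TOTAL_BUFFERS)
            return [1 if i in slots else 0 for i in range(N_SLOTS)]

        def neighbour(alloc):
            a = alloc[:]
            i = random.choice([k for k, v in enumerate(a) if v == 1])
            j = random.choice([k for k, v in enumerate(a) if v == 0])
            a[i], a[j] = 0, 1                   # move one buffer so the total stays fixed
            return a

        current = random_alloc()
        best, best_val, temp = current, throughput(current), 5.0
        while temp > 0.01:
            cand = neighbour(current)
            delta = throughput(cand) - throughput(current)
            if delta >= 0 or random.random() < math.exp(delta / temp):
                current = cand
                if throughput(current) > best_val:
                    best, best_val = current, throughput(current)
            temp *= 0.95
        print("best allocation:", best, "estimated throughput:", round(best_val, 2))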

  5. Pricing Resources in LTE Networks through Multiobjective Optimization

    PubMed Central

    Lai, Yung-Liang; Jiang, Jehn-Ruey

    2014-01-01

    The LTE technology offers versatile mobile services that use different numbers of resources. This enables operators to provide subscribers or users with differential quality of service (QoS) to boost their satisfaction. On one hand, LTE operators need to price the resources high for maximizing their profits. On the other hand, pricing also needs to consider user satisfaction with allocated resources and prices to avoid “user churn,” which means subscribers will unsubscribe services due to dissatisfaction with allocated resources or prices. In this paper, we study the pricing resources with profits and satisfaction optimization (PRPSO) problem in the LTE networks, considering the operator profit and subscribers' satisfaction at the same time. The problem is modelled as nonlinear multiobjective optimization with two optimal objectives: (1) maximizing operator profit and (2) maximizing user satisfaction. We propose to solve the problem based on the framework of the NSGA-II. Simulations are conducted for evaluating the proposed solution. PMID:24526889

  6. Pricing resources in LTE networks through multiobjective optimization.

    PubMed

    Lai, Yung-Liang; Jiang, Jehn-Ruey

    2014-01-01

    The LTE technology offers versatile mobile services that use different numbers of resources. This enables operators to provide subscribers or users with differential quality of service (QoS) to boost their satisfaction. On one hand, LTE operators need to price the resources high for maximizing their profits. On the other hand, pricing also needs to consider user satisfaction with allocated resources and prices to avoid "user churn," which means subscribers will unsubscribe services due to dissatisfaction with allocated resources or prices. In this paper, we study the pricing resources with profits and satisfaction optimization (PRPSO) problem in the LTE networks, considering the operator profit and subscribers' satisfaction at the same time. The problem is modelled as nonlinear multiobjective optimization with two optimal objectives: (1) maximizing operator profit and (2) maximizing user satisfaction. We propose to solve the problem based on the framework of the NSGA-II. Simulations are conducted for evaluating the proposed solution.

  7. An analysis of choice making in the assessment of young children with severe behavior problems.

    PubMed

    Harding, J W; Wacker, D P; Berg, W K; Cooper, L J; Asmus, J; Mlela, K; Muller, J

    1999-01-01

    We examined how positive and negative reinforcement influenced time allocation, occurrence of problem behavior, and completion of parent instructions during a concurrent choice assessment with 2 preschool-aged children who displayed severe problem behavior in their homes. The children were given a series of concurrent choice options that varied availability of parent attention, access to preferred toys, and presentation of parent instructions. The results showed that both children consistently allocated their time to choice areas that included parent attention when no instructions were presented. When parent attention choice areas included the presentation of instructions, the children displayed differential patterns of behavior that appeared to be influenced by the presence or absence of preferred toys. The results extended previous applications of reinforcer assessment procedures by analyzing the relative influence of both positive and negative reinforcement within a concurrent-operants paradigm.

  8. Optimal Resource Allocation for NOMA-TDMA Scheme with α-Fairness in Industrial Internet of Things.

    PubMed

    Sun, Yanjing; Guo, Yiyu; Li, Song; Wu, Dapeng; Wang, Bin

    2018-05-15

    In this paper, a joint non-orthogonal multiple access and time division multiple access (NOMA-TDMA) scheme is proposed for the Industrial Internet of Things (IIoT), which allows multiple sensors to transmit in the same time-frequency resource block using NOMA. The user scheduling, time slot allocation, and power control are jointly optimized in order to maximize the system α-fair utility under a transmit power constraint and a minimum rate constraint. The optimization problem is nonconvex because of the fractional objective function and the nonconvex constraints. To deal with the original problem, we first convert the objective function in the optimization problem into a difference of two convex functions (D.C.) form, and then propose a NOMA-TDMA-DC algorithm to exploit the global optimum. Numerical results show that the NOMA-TDMA scheme significantly outperforms the traditional orthogonal multiple access scheme in terms of both spectral efficiency and user fairness.
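
    For reference, a small sketch of the α-fair utility family referenced above; α = 0 recovers sum-rate, α → 1 gives proportional fairness, and large α approaches max-min fairness. The per-sensor rates are hypothetical.

        import math

        def alpha_fair_utility(rates, alpha):
            """Standard alpha-fair utility: sum of log(r) at alpha=1, else sum of r^(1-alpha)/(1-alpha)."""
            if abs(alpha - 1.0) < 1e-9:
                return sum(math.log(r) for r in rates)
            return sum(r ** (1 - alpha) / (1 - alpha) for r in rates)

        rates = [2.0, 1.0, 0.5]                      # per-sensor rates (e.g. Mbps), illustrative
        for a in (0.0, 1.0, 2.0):
            print(f"alpha={a}: utility={alpha_fair_utility(rates, a):.3f}")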

  9. Game theoretic wireless resource allocation for H.264 MGS video transmission over cognitive radio networks

    NASA Astrophysics Data System (ADS)

    Fragkoulis, Alexandros; Kondi, Lisimachos P.; Parsopoulos, Konstantinos E.

    2015-03-01

    We propose a method for the fair and efficient allocation of wireless resources over a cognitive radio system network to transmit multiple scalable video streams to multiple users. The method exploits the dynamic architecture of the Scalable Video Coding extension of the H.264 standard, along with the diversity that OFDMA networks provide. We use a game-theoretic Nash Bargaining Solution (NBS) framework to ensure that each user receives the minimum video quality requirements, while maintaining fairness over the cognitive radio system. An optimization problem is formulated, where the objective is the maximization of the Nash product while minimizing the waste of resources. The problem is solved by using a Swarm Intelligence optimizer, namely Particle Swarm Optimization. Due to the high dimensionality of the problem, we also introduce a dimension-reduction technique. Our experimental results demonstrate the fairness imposed by the employed NBS framework.
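
    A toy particle swarm optimisation (PSO) sketch maximising a Nash product over a shared-resource split, as a low-dimensional stand-in for the paper's allocation problem; the minimum-quality levels and PSO coefficients are assumptions.

        import random

        D = 3                        # number of users sharing the resource
        d = [0.1, 0.15, 0.05]        # minimum quality requirements (disagreement points), assumed

        def nash_product(x):
            s = sum(x) or 1e-9
            shares = [xi / s for xi in x]            # normalise a candidate split to sum to 1
            prod = 1.0
            for sh, di in zip(shares, d):
                if sh <= di:
                    return -1.0                      # infeasible: below a minimum requirement
                prod *= (sh - di)
            return prod

        n_particles, iters = 20, 200
        pos = [[random.random() for _ in range(D)] for _ in range(n_particles)]
        vel = [[0.0] * D for _ in range(n_particles)]
        pbest = [p[:] for p in pos]
        pbest_val = [nash_product(p) for p in pos]
        g = max(range(n_particles), key=lambda i: pbest_val[i])
        gbest, gbest_val = pbest[g][:], pbest_val[g]

        for _ in range(iters):
            for i in range(n_particles):
                for k in range(D):
                    r1, r2 = random.random(), random.random()
                    vel[i][k] = (0.7 * vel[i][k] + 1.5 * r1 * (pbest[i][k] - pos[i][k])
                                 + 1.5 * r2 * (gbest[k] - pos[i][k]))
                    pos[i][k] = min(max(pos[i][k] + vel[i][k], 1e-6), 1.0)
                val = nash_product(pos[i])
                if val > pbest_val[i]:
                    pbest[i], pbest_val[i] = pos[i][:], val
                    if val > gbest_val:
                        gbest, gbest_val = pos[i][:], val

        total = sum(gbest)
        print("best split:", [round(x / total, 3) for x in gbest], "Nash product:", round(gbest_val, 5))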

  10. Choosing non-redundant representative subsets of protein sequence data sets using submodular optimization.

    PubMed

    Libbrecht, Maxwell W; Bilmes, Jeffrey A; Noble, William Stafford

    2018-04-01

    Selecting a non-redundant representative subset of sequences is a common step in many bioinformatics workflows, such as the creation of non-redundant training sets for sequence and structural models or selection of "operational taxonomic units" from metagenomics data. Previous methods for this task, such as CD-HIT, PISCES, and UCLUST, apply a heuristic threshold-based algorithm that has no theoretical guarantees. We propose a new approach based on submodular optimization. Submodular optimization, a discrete analogue to continuous convex optimization, has been used with great success for other representative set selection problems. We demonstrate that the submodular optimization approach results in representative protein sequence subsets with greater structural diversity than sets chosen by existing methods, using as a gold standard the SCOPe library of protein domain structures. In this setting, submodular optimization consistently yields protein sequence subsets that include more SCOPe domain families than sets of the same size selected by competing approaches. We also show how the optimization framework allows us to design a mixture objective function that performs well for both large and small representative sets. The framework we describe is the best possible in polynomial time (under some assumptions), and it is flexible and intuitive because it applies a suite of generic methods to optimize one of a variety of objective functions. © 2018 Wiley Periodicals, Inc.
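
    A hedged sketch of greedy maximisation of a facility-location objective, one common submodular choice for representative-subset selection (the paper explores several objectives); the sequences and k-mer Jaccard similarity below are toy stand-ins.

        def kmers(seq, k=3):
            return {seq[i:i + k] for i in range(len(seq) - k + 1)}

        def similarity(a, b):
            ka, kb = kmers(a), kmers(b)
            return len(ka & kb) / len(ka | kb) if ka | kb else 0.0

        def facility_location(selected, sequences, sim):
            # Each sequence is "covered" by its most similar selected representative.
            return sum(max((sim[(s, t)] for t in selected), default=0.0) for s in sequences)

        def greedy_select(sequences, budget):
            sim = {(a, b): similarity(a, b) for a in sequences for b in sequences}
            selected = []
            while len(selected) < budget:
                best = max((s for s in sequences if s not in selected),
                           key=lambda s: facility_location(selected + [s], sequences, sim))
                selected.append(best)
            return selected

        seqs = ["MKVLAATGL", "MKVLAATGI", "GGSGGSGGS", "MKVIAATGL", "PLDDEAQRK"]
        print(greedy_select(seqs, budget=2))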

  11. A unified flight control methodology for a compound rotorcraft in fundamental and aerobatic maneuvering flight

    NASA Astrophysics Data System (ADS)

    Thorsen, Adam

    This study investigates a novel approach to flight control for a compound rotorcraft in a variety of maneuvers ranging from fundamental to aerobatic in nature. Fundamental maneuvers are a class of maneuvers with design significance that are useful for testing and tuning flight control systems along with uncovering control law deficiencies. Aerobatic maneuvers are a class of aggressive and complex maneuvers with more operational significance. The process culminating in a unified approach to flight control includes various control allocation studies for redundant controls in trim and maneuvering flight, an efficient methodology to simulate non-piloted maneuvers with varying degrees of complexity, and the setup of an unconventional control inceptor configuration along with the use of a flight simulator to gather pilot feedback in order to improve the unified control architecture. A flight path generation algorithm was developed to calculate control inceptor commands required for a rotorcraft in aerobatic maneuvers. This generalized algorithm was tailored to generate flight paths through optimization methods in order to satisfy target terminal position coordinates or to minimize the total time of a particular maneuver. Six aerobatic maneuvers were developed drawing inspiration from air combat maneuvers of fighter jet aircraft: Pitch-Back Turn (PBT), Combat Ascent Turn (CAT), Combat Descent Turn (CDT), Weaving Pull-up (WPU), Combat Break Turn (CBT), and Zoom and Boom (ZAB). These aerobatic maneuvers were simulated at moderate to high advance ratios while fundamental maneuvers of the compound including level accelerations/decelerations, climbs, descents, and turns were investigated across the entire flight envelope to evaluate controller performance. The unified control system was developed to allow controls to seamlessly transition between manual and automatic allocations while ensuring that the axis of control for a particular inceptor remained constant with flight regime. An energy management system was developed in order to manage performance limits (namely power required) to promote carefree maneuvering and alleviate pilot workload. This system features limits on pilot commands and has additional logic for preserving control margins and limiting maximum speed in a dive. Nonlinear dynamic inversion (NLDI) is the framework of the unified controller, which incorporates primary and redundant controls. The inner loop of the NLDI controller regulates bank angle, pitch attitude, and yaw rate, while the outer loop command structure is varied (three modes). One version uses an outer loop that commands velocities in the longitudinal and vertical axes (velocity mode), another commands longitudinal acceleration and vertical speed (acceleration mode), and the third commands longitudinal acceleration and transitions from velocity to acceleration command in the vertical axis (aerobatic mode). The flight envelope is discretized into low, cruise, and high speed flight regimes. The unified outer loop primary control effectors for the longitudinal and vertical axes (collective pitch, pitch attitude, and propeller pitch) vary depending on flight regime. A weighted pseudoinverse is used to phase either the collective or propeller pitch in/out of a redundant control role. The controllers were evaluated in Penn State's Rotorcraft Flight Simulator retaining the cyclic stick for vertical and lateral axis control along with pedal inceptors for yaw axis control. 
A throttle inceptor was used in place of the pilot's traditional left hand inceptor for longitudinal axis control. Ultimately, a simple rigid body model of the aircraft was sufficient to design a controller with favorable performance and stability characteristics. This unified flight control system promoted a low enough pilot workload so that an untrained pilot (the author) was able to pilot maneuvers of varying complexity with ease. The framework of this unified system is general enough to be applied to any rotorcraft with redundant controls. Minimum power propeller thrust shares ranged from 50%-90% in high speed flight, while lift shares at high speeds tended towards 60% wing and 40% main rotor.
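
    A minimal sketch of the weighted pseudoinverse allocation mentioned above, where the weight matrix phases an effector in or out of the redundant role; the effectiveness matrix, command, and weights are illustrative, not values from the study.

        import numpy as np

        B = np.array([[1.0, 0.8, 0.3],     # rows: controlled axes (e.g. vertical, longitudinal)
                      [0.1, 0.2, 1.0]])    # cols: effectors (e.g. collective, pitch attitude, prop pitch)
        v_cmd = np.array([0.5, 1.0])       # commanded pseudo-accelerations

        def weighted_pinv_alloc(B, v, w):
            """u = W^-1 B^T (B W^-1 B^T)^-1 v minimises u^T W u subject to B u = v."""
            W_inv = np.diag(1.0 / np.asarray(w))
            return W_inv @ B.T @ np.linalg.solve(B @ W_inv @ B.T, v)

        for w in ([1.0, 1.0, 1.0],         # effectors share the load evenly
                  [1.0, 1.0, 100.0]):      # heavily penalise (phase out) the third effector
            u = weighted_pinv_alloc(B, v_cmd, w)
            print("weights", w, "-> controls", np.round(u, 3), "check B@u", np.round(B @ u, 3))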

  12. Datasets for supplier selection and order allocation with green criteria, all-unit quantity discounts and varying number of suppliers.

    PubMed

    Hamdan, Sadeque; Cheaitou, Ali

    2017-08-01

    This data article provides detailed optimization input and output datasets and optimization code for the published research work titled "Dynamic green supplier selection and order allocation with quantity discounts and varying supplier availability" (Hamdan and Cheaitou, 2017, In press) [1]. Researchers may use these datasets as a baseline for future comparison and extensive analysis of the green supplier selection and order allocation problem with all-unit quantity discounts and a varying number of suppliers. More particularly, the datasets presented in this article allow researchers to generate the exact optimization outputs obtained by the authors of Hamdan and Cheaitou (2017, In press) [1] using the provided optimization code and then to use them for comparison with the outputs of other techniques or methodologies such as heuristic approaches. Moreover, this article includes the randomly generated optimization input data and the related outputs that are used as input data for the statistical analysis presented in Hamdan and Cheaitou (2017, In press) [1], in which two different approaches for ranking potential suppliers are compared. This article also provides the time analysis data used in Hamdan and Cheaitou (2017, In press) [1] to study the effect of the problem size on the computation time, as well as an additional time analysis dataset. The input data for the time study are generated randomly with varying problem sizes and are then used by the optimization problem to obtain the corresponding optimal outputs as well as the corresponding computation time.

  13. Placing invasive species management in a spatiotemporal context.

    PubMed

    Baker, Christopher M; Bode, Michael

    2016-04-01

    Invasive species are a worldwide issue, both ecologically and economically. A large body of work focuses on various aspects of invasive species control, including how to allocate control efforts to eradicate an invasive population as cost effectively as possible. There is a diverse range of invasive species management problems, and past mathematical analyses generally focus on isolated examples, making it hard to identify and understand parallels between the different contexts. In this study, we use a single spatiotemporal model to tackle the problem of allocating control effort both when suppressing an island invasive species and in long-term spatial suppression projects. Using feral cat suppression as an illustrative example, we identify the optimal resource allocation for island and mainland suppression projects. Our results demonstrate how using a single model to solve different problems reveals similar characteristics of the solutions in different scenarios. As well as illustrating the insights offered by linking problems through a spatiotemporal model, we also derive novel and practically applicable results for our case studies. For temporal suppression projects on islands, we find that lengthy projects are more cost effective and that rapid control projects are only economically cost effective when population growth rates are high or diminishing returns on control effort are low. When suppressing invasive species around conservation assets (e.g., national parks or exclusion fences), we find that the size of buffer zones should depend on the ratio of the species' growth rate to its spread rate.

  14. Resource allocation processes at multilateral organizations working in global health

    PubMed Central

    Chi, Y-Ling; Bump, Jesse B

    2018-01-01

    International institutions provide well over US$10 billion in development assistance for health (DAH) annually, and between 1990 and 2014 DAH disbursements totaled $458 billion; but how do they decide who gets what, and for what purpose? In this article, we explore how allocation decisions were made by the nine convening agencies of the Equitable Access Initiative. We provide clear, plain language descriptions of the complete process from resource mobilization to allocation for the nine multilateral agencies with prominent agendas in global health. Then, through a comparative analysis, we illuminate the choices and strategies employed in the nine international institutions. We find that resource allocation in all reviewed institutions follows a similar pattern, which we categorize in a framework of five steps: strategy definition, resource mobilization, eligibility of countries, support type and funds allocation. All the reviewed institutions generate resource allocation decisions through well-structured and fairly complex processes. Variations in those processes seem to reflect differences in institutional principles and goals. However, these processes have serious shortcomings. Technical problems include inadequate flexibility to account for or meet country needs. Although aid effectiveness and value for money are commonly referenced, we find that neither performance nor impact is a major criterion for allocating resources. We found very little formal consideration of the incentives generated by allocation choices. Political issues include non-transparent influence on allocation processes by donors and bureaucrats, and the common practice of earmarking funds to bypass the normal allocation process entirely. Ethical deficiencies include low accountability and transparency at international institutions, and limited participation by affected citizens or their representatives. We find that recipient countries have low influence on allocation processes themselves, although within these processes they have some influence in relatively narrow areas. PMID:29415239

  15. The Inertial Upper Stage - Flight experience and capabilities

    NASA Astrophysics Data System (ADS)

    Kuhns, Randall H.; Maricich, Peter L.; Bangsund, Edward L.; Friske, Stephen A.; Hallman, Wayne P.; Goldstein, Allen E.

    1993-10-01

    The Inertial Upper Stage (IUS) is a two-stage rocket designed to place a variety of payloads in high Earth orbit or on interplanetary trajectories; to date it has been boosted, together with its payloads, from the Earth's surface to low-altitude park orbits by the USAF Titan launcher and the NASA Space Shuttle. This paper discusses IUS redundancy and presents data on the value of the IUS's redundant design and the past uses of the vehicle's redundant capability to achieve mission success. The value of the IUS's redundancy has been confirmed on several flights. The paper presents block diagrams of the IUS redundancy architecture and of the redundancy hardware switching and commands.

  16. Multi-finger prehension: control of a redundant mechanical system.

    PubMed

    Latash, Mark L; Zatsiorsky, Vladimir M

    2009-01-01

    The human hand has been a fascinating object of study for researchers in both biomechanics and motor control. Studies of human prehension have contributed significantly to the progress in addressing the famous problem of motor redundancy. After a brief review of the hand mechanics, we present results of recent studies that support a general view that the apparently redundant design of the hand is not a source of computational problems but a rich apparatus that allows performing a variety of tasks in a reliable and flexible way (the principle of abundance). Multi-digit synergies have been analyzed at two levels of a hypothetical hierarchy involved in the control of prehensile actions. At the upper level, forces and moments produced by the thumb and virtual finger (an imagined finger with a mechanical action equal to the combined mechanical action of all four fingers of the hand) co-vary to stabilize the gripping action and the orientation of the hand-held object. These results support the principle of superposition suggested earlier in robotics with respect to the control of artificial grippers. At the lower level of the hierarchy, forces and moments produced by individual fingers co-vary to stabilize the magnitude and direction of the force vector and the moment of force produced by the virtual finger. Adjustments to changes in task constraints (such as, for example, friction under individual digits) may be local and synergic. The latter reflect multi-digit prehension synergies and may be analyzed with the so-called chain effects: Sequences of relatively straightforward cause-effect links directly related to mechanical constraints leading to non-trivial strong co-variation between pairs of elemental variables. Analysis of grip force adjustments during motion of hand-held objects suggests that the central nervous system adjusts to gravitational and inertial loads differently. The human hand is a gold mine for researchers interested in the control of natural human movements.

  17. The biological component of the life support system for a Martian expedition.

    PubMed

    Sychev, V N; Levinskikh, M A; Shepelev, Ye Ya

    2003-01-01

    Ground-based experiments at RF SSC-IBMP RAS (State Science Center of the Russian Federation--Institute of Biomedical Problems of the Russian Academy of Sciences) were aimed at overall studies of a human-unicellular algae-mineralization LSS (life support system) model. The system was 15 m3 in volume. It contained 45 L of algal suspension with a dry substance density of 10-12 g per liter; water volume, including the algal suspension, was 59 L. More sophisticated model systems with partial substitution of unicellular algae with higher plants (crop area of 15 m2) were tested in three experiments from 1.5 to 2 months in duration. The experiments demonstrated that an LSS employing unicellular algae performs not only a macrofunction (regeneration of atmosphere and water) but also carries some other functions (purification of the atmosphere, formation of microbial cenosis, etc.) providing an adequate human environment. It is also important that the functional reliability of the algal regenerative subsystem is secured by a huge number of cells able, in the event of the death of part of the population, to recover the population size, and hence the functionality of the LSS autotrophic component, in the shortest possible time. For a long period of time a Martian crew will be detached from Earth's biosphere, and for this reason the LSS of their vehicle must be highly reliable, robust and redundant. One of the approaches to LSS redundancy is installation of two systems with different but equally efficient regeneration technologies, i.e. physical-chemical and biological. At best, these two systems should operate in parallel, sharing the function of regeneration of the human environment. In case of failure or a sharp deterioration in the performance of one system, the other will, by way of redundancy, increase its throughput to make up for the loss. This LSS design will enable simultaneous handling of a number of critical problems including adequate satisfaction of human environmental needs. © 2003 COSPAR. Published by Elsevier Science Ltd. All rights reserved.

  18. The effect of a redundant color code on an overlearned identification task

    NASA Technical Reports Server (NTRS)

    Obrien, Kevin

    1992-01-01

    The possibility of finding redundancy gains with overlearned tasks was examined using a paradigm varying familiarity with the stimulus set. Redundant coding in a multidimensional stimulus was demonstrated to result in increased identification accuracy and decreased latency of identification when compared to stimuli varying on only one dimension. The advantages attributable to redundant coding are referred to as redundancy gain and were found for a variety of stimulus dimension combinations, including the use of hue or color as one of the dimensions. Factors that have affected redundancy gain include the discriminability of the levels of one stimulus dimension and the level of stimulus-to-response association. The results demonstrated that response time is in part a function of familiarity, but no effect of redundant color coding was demonstrated. Implications of research on coding in identification tasks for display design are discussed.

  19. Method and system for redundancy management of distributed and recoverable digital control system

    NASA Technical Reports Server (NTRS)

    Stange, Kent (Inventor); Hess, Richard (Inventor); Kelley, Gerald B (Inventor); Rogers, Randy (Inventor)

    2012-01-01

    A method and system for redundancy management is provided for a distributed and recoverable digital control system. The method uses unique redundancy management techniques to achieve recovery and restoration of redundant elements to full operation in an asynchronous environment. The system includes a first computing unit comprising a pair of redundant computational lanes for generating redundant control commands. One or more internal monitors detect data errors in the control commands, and provide a recovery trigger to the first computing unit. A second redundant computing unit provides the same features as the first computing unit. A first actuator control unit is configured to provide blending and monitoring of the control commands from the first and second computing units, and to provide a recovery trigger to each of the first and second computing units. A second actuator control unit provides the same features as the first actuator control unit.
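
    The record above describes the architecture only at a high level; a toy sketch of the kind of cross-lane monitoring, blending, and recovery triggering it implies might look like the following (the tolerance, interfaces, and fallback policy are invented for illustration and are not the patented method):

        from dataclasses import dataclass

        @dataclass
        class LaneOutput:
            command: float      # control command computed by one computational lane
            valid: bool = True  # result of the lane's own self-monitoring

        def monitor_and_blend(lane_a, lane_b, tolerance=0.05):
            # Compare redundant lane commands; return (output command, recovery_trigger).
            if lane_a.valid and lane_b.valid:
                if abs(lane_a.command - lane_b.command) <= tolerance:
                    return 0.5 * (lane_a.command + lane_b.command), False   # blend
                return lane_a.command, True     # miscompare: keep flying, request recovery
            if lane_a.valid:
                return lane_a.command, True
            if lane_b.valid:
                return lane_b.command, True
            return 0.0, True                    # both lanes invalid: safe value, recovery

        print(monitor_and_blend(LaneOutput(1.00), LaneOutput(1.02)))  # agreement -> blended
        print(monitor_and_blend(LaneOutput(1.00), LaneOutput(1.50)))  # miscompare -> trigger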

  20. Efficient Allocation of Resources for Defense of Spatially Distributed Networks Using Agent-Based Simulation.

    PubMed

    Kroshl, William M; Sarkani, Shahram; Mazzuchi, Thomas A

    2015-09-01

    This article presents ongoing research on the efficient allocation of defense resources to minimize the damage inflicted on a spatially distributed physical network, such as a pipeline, water system, or power distribution system, by an active adversary. It recognizes the fundamental difference between preparing for natural disasters, such as hurricanes, earthquakes, or even accidental system failures, and the problem of allocating resources to defend against an opponent who is aware of, and anticipating, the defender's efforts to mitigate the threat. Our approach is to utilize a combination of integer programming and agent-based modeling to allocate the defensive resources. We conceptualize the problem as a Stackelberg "leader-follower" game where the defender first places his assets to defend key areas of the network, and the attacker then seeks to inflict the maximum damage possible within the constraints of resources and network structure. The criticality of arcs in the network is estimated by a deterministic network interdiction formulation, which then informs an evolutionary agent-based simulation. The evolutionary agent-based simulation is used to determine the allocation of resources for attackers and defenders that results in evolutionarily stable strategies, where actions by either side alone cannot increase its share of victories. We demonstrate these techniques on an example network, comparing the evolutionary agent-based results to a more traditional, probabilistic risk analysis (PRA) approach. Our results show that the agent-based approach results in a greater percentage of defender victories than does the PRA-based approach. © 2015 Society for Risk Analysis.
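
    A back-of-the-envelope illustration of the Stackelberg leader-follower structure described above, done by brute-force enumeration over a tiny arc set (the arc values and budgets are invented; the study itself uses a network interdiction formulation and an evolutionary agent-based simulation, not enumeration):

        from itertools import combinations

        # Hypothetical arcs of a distribution network and the damage an attack on each causes.
        arc_value = {"a": 10.0, "b": 7.0, "c": 5.0, "d": 2.0}
        defend_budget, attack_budget = 2, 2   # arcs each side can cover

        def worst_case_damage(defended):
            # Follower (attacker) hits the most valuable undefended arcs.
            losses = sorted((v for a, v in arc_value.items() if a not in defended), reverse=True)
            return sum(losses[:attack_budget])

        # Leader (defender) moves first and minimizes the attacker's best response.
        best_defense = min(combinations(arc_value, defend_budget), key=worst_case_damage)
        print("defend:", best_defense, "worst-case damage:", worst_case_damage(best_defense))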

  1. Irrelevance in Problem Solving

    NASA Technical Reports Server (NTRS)

    Levy, Alon Y.

    1992-01-01

    The notion of irrelevance underlies many different works in AI, such as detecting redundant facts, creating abstraction hierarchies and reformulation and modeling physical devices. However, in order to design problem solvers that exploit the notion of irrelevance, either by automatically detecting irrelevance or by being given knowledge about irrelevance, a formal treatment of the notion is required. In this paper we present a general framework for analyzing irrelevance. We discuss several properties of irrelevance and show how they vary in a space of definitions outlined by the framework. We show how irrelevance claims can be used to justify the creation of abstractions thereby suggesting a new view on the work on abstraction.

  2. Introducing "biophysical redundancy": the global status and past evolution of unused water, land and productivity resources for food production

    NASA Astrophysics Data System (ADS)

    Fader, Marianela

    2017-04-01

    Countries have different resilience to sudden and long-term changes in food demand and supply. An important part of this resilience is the degree of biophysical redundancy, defined as the potential food production of 'spare land', available water resources (i.e., not already used for human activities), as well as production increases through yield gap closure on cultivated areas and potential agricultural areas. The presentation will show the results of a recently published paper [1] on the evolution of biophysical redundancy for agricultural production at country level, from 1992 to 2012. Results indicate that in 2012, the biophysical redundancy of 75 (48) countries, mainly in North Africa, Western Europe, the Middle East and Asia, was insufficient to produce the caloric nutritional needs for at least 50% (25%) of their population during a year. Biophysical redundancy has decreased in the last two decades in 102 out of 155 countries, 11 of these went from high to limited redundancy, and nine of these from limited to very low redundancy. Although the variability of the drivers of change across different countries is high, improvements in yield and population growth have a clear impact on the decreases of redundancy towards the very low redundancy category. We took a more detailed look at countries classified as 'Low Income Economies (LIEs)' since they are particularly vulnerable to domestic or external food supply changes, due to their limited capacity to offset for food supply decreases with higher purchasing power on the international market. Currently, nine LIEs have limited or very low biophysical redundancy. Many of these showed a decrease in redundancy over the last two decades, which is not always linked with improvements in per capita food availability.

  3. Past and Present Biophysical Redundancy of Countries as a Buffer to Changes in Food Supply

    NASA Technical Reports Server (NTRS)

    Fader, Marianela; Rulli, Maria Cristina; Carr, Joel; Dell' Angelo, Jampel; D' Odorico, Paolo; Gephart, Jessica A.; Kummu, Matti; Magliocca, Nicholas; Porkka, Miina; Prell, Christina

    2016-01-01

    Spatially diverse trends in population growth, climate change, industrialization, urbanization and economic development are expected to change future food supply and demand. These changes may affect the suitability of land for food production, implying elevated risks especially for resource constrained, food-importing countries. We present the evolution of biophysical redundancy for agricultural production at country level, from 1992 to 2012. Biophysical redundancy, defined as unused biotic and abiotic environmental resources, is represented by the potential food production of 'spare land', available water resources (i.e., not already used for human activities), as well as production increases through yield gap closure on cultivated areas and potential agricultural areas. In 2012, the biophysical redundancy of 75 (48) countries, mainly in North Africa, Western Europe, the Middle East and Asia, was insufficient to produce the caloric nutritional needs for at least 50% (25%) of their population during a year. Biophysical redundancy has decreased in the last two decades in 102 out of 155 countries, 11 of these went from high to limited redundancy, and nine of these from limited to very low redundancy. Although the variability of the drivers of change across different countries is high, improvements in yield and population growth have a clear impact on the decreases of redundancy towards the very low redundancy category. We took a more detailed look at countries classified as 'Low Income Economies (LIEs)' since they are particularly vulnerable to domestic or external food supply changes, due to their limited capacity to offset for food supply decreases with higher purchasing power on the international market. Currently, nine LIEs have limited or very low biophysical redundancy. Many of these showed a decrease in redundancy over the last two decades, which is not always linked with improvements in per capita food availability.

  4. Concurrent Reinforcement Schedules for Problem Behavior and Appropriate Behavior: Experimental Applications of the Matching Law

    ERIC Educational Resources Information Center

    Borrero, Carrie S. W.; Vollmer, Timothy R.; Borrero, John C.; Bourret, Jason C.; Sloman, Kimberly N.; Samaha, Andrew L.; Dallery, Jesse

    2010-01-01

    This study evaluated how children who exhibited functionally equivalent problem and appropriate behavior allocate responding to experimentally arranged reinforcer rates. Relative reinforcer rates were arranged on concurrent variable-interval schedules and effects on relative response rates were interpreted using the generalized matching equation.…
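
    For reference, the generalized matching equation referred to above, in its standard form from the matching-law literature (symbols follow common usage and are not necessarily the article's notation):

        \log\left(\frac{B_1}{B_2}\right) = a \, \log\left(\frac{R_1}{R_2}\right) + \log b

    where B_1 and B_2 are the response rates allocated to the two alternatives (e.g., problem behavior and appropriate behavior), R_1 and R_2 are the obtained reinforcer rates, a is the sensitivity parameter, and b is the bias parameter.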

  5. 30 CFR 1220.011 - Schedule of allowable direct and allocable joint costs and credits.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... engineering design problems related to equipment or facilities required for NPSL operations. (4) The cost of any contract service related to research and development is specifically excluded, as are contract services calling for feasibility studies not directly related to specific engineering design problems or...

  6. Inventory slack routing application in emergency logistics and relief distributions.

    PubMed

    Yang, Xianfeng; Hao, Wei; Lu, Yang

    2018-01-01

    Various natural and manmade disasters during the last decades have highlighted the need to further improve governmental preparedness for emergency events, and a relief supplies distribution problem named the Inventory Slack Routing Problem (ISRP) has received increasing attention. In an ISRP, inventory slack is defined as the duration between the relief arrival time and the estimated inventory stock-out time. Hence, a larger inventory slack grants more response time in the face of various factors (e.g., traffic congestion) that may lead to delivery lateness. In this study, the relief distribution problem is formulated as an optimization model that maximizes the minimum slack among all dispensing sites. To solve this problem efficiently, we propose a two-stage approach that tackles the vehicle routing and relief allocation sub-problems. By analyzing the inter-relations between these two sub-problems, a new objective function considering both delivery durations and dispensing rates of demand sites is applied in the first stage to design the vehicle routes. A hierarchical routing approach and a sweep approach are also proposed in this stage. Given the vehicle routing plan, the relief allocation can easily be solved in the second stage. A numerical experiment with a comparison against a multi-vehicle Traveling Salesman Problem (TSP) demonstrates the need for the ISRP and the capability of the proposed solution approaches.
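
    A compact way to state the max-min objective described above (the notation is ours, introduced only for illustration): letting t_i^arr denote the relief arrival time at dispensing site i and t_i^out its estimated stock-out time,

        \max \; \min_i \; s_i, \qquad s_i = t_i^{\mathrm{out}} - t_i^{\mathrm{arr}},

    subject to the vehicle-routing and relief-availability constraints; the first stage fixes the routes (which determine the arrival times), and the second stage allocates relief quantities (which, together with the dispensing rates, determine the stock-out times).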

  7. Inventory slack routing application in emergency logistics and relief distributions

    PubMed Central

    Yang, Xianfeng; Lu, Yang

    2018-01-01

    Various natural and manmade disasters during the last decades have highlighted the need to further improve governmental preparedness for emergency events, and a relief supplies distribution problem named the Inventory Slack Routing Problem (ISRP) has received increasing attention. In an ISRP, inventory slack is defined as the duration between the relief arrival time and the estimated inventory stock-out time. Hence, a larger inventory slack grants more response time in the face of various factors (e.g., traffic congestion) that may lead to delivery lateness. In this study, the relief distribution problem is formulated as an optimization model that maximizes the minimum slack among all dispensing sites. To solve this problem efficiently, we propose a two-stage approach that tackles the vehicle routing and relief allocation sub-problems. By analyzing the inter-relations between these two sub-problems, a new objective function considering both delivery durations and dispensing rates of demand sites is applied in the first stage to design the vehicle routes. A hierarchical routing approach and a sweep approach are also proposed in this stage. Given the vehicle routing plan, the relief allocation can easily be solved in the second stage. A numerical experiment with a comparison against a multi-vehicle Traveling Salesman Problem (TSP) demonstrates the need for the ISRP and the capability of the proposed solution approaches. PMID:29902196

  8. Interface Message Processors for the ARPA Computer Network

    DTIC Science & Technology

    1976-07-01

    and then clear the location) as its primitive locking facility (i.e., as the necessary multiprocessor lock equivalent to Dijkstra semaphores )[37]. To...of the extra storage required for the redundant copies. There is the problem of maintaining synchronization of multiple copy data bases in the presence...through any of the data base sites. I Update synchronization . Races between conflicting, "concurrent" update requests are resolved in a manner that j

  9. Application of Network and Decision Theory to Routing Problems.

    DTIC Science & Technology

    1982-03-01

    special thanks to Major Hal Carter, faculty member, for his help in getting the authors to understand one of the underlying algorithms in the methodology...61 26. General Methodology Flowchart .......... .. 64 27. Least Cost/Time Path Algorithm Flowchart . . 65 28. Possible Redundant Arc of Time...minimum time to travel. This was neces- sary because: 1. The DTN designers did not have a procedure to do so. 2. The various network algorithms to

  10. Balancing Detection and Eradication for Control of Epidemics: Sudden Oak Death in Mixed-Species Stands

    PubMed Central

    Ndeffo Mbah, Martial L.; Gilligan, Christopher A.

    2010-01-01

    Culling of infected individuals is a widely used measure for the control of several plant and animal pathogens but culling first requires detection of often cryptically-infected hosts. In this paper, we address the problem of how to allocate resources between detection and culling when the budget for disease management is limited. The results are generic but we motivate the problem for the control of a botanical epidemic in a natural ecosystem: sudden oak death in mixed evergreen forests in coastal California, in which species composition is generally dominated by a spreader species (bay laurel) and a second host species (coast live oak) that is an epidemiological dead-end in that it does not transmit infection but which is frequently a target for preservation. Using a combination of an epidemiological model for two host species with a common pathogen together with optimal control theory we address the problem of how to balance the allocation of resources for detection and epidemic control in order to preserve both host species in the ecosystem. Contrary to simple expectations our results show that an intermediate level of detection is optimal. Low levels of detection, characteristic of low effort expended on searching and detection of diseased trees, and high detection levels, exemplified by the deployment of large amounts of resources to identify diseased trees, fail to bring the epidemic under control. Importantly, we show that a slight change in the balance between the resources allocated to detection and those allocated to control may lead to drastic inefficiencies in control strategies. The results hold when quarantine is introduced to reduce the ingress of infected material into the region of interest. PMID:20856850

  11. Towards an understanding of the molecular regulation of carbon allocation in diatoms: the interaction of energy and carbon allocation.

    PubMed

    Wagner, Heiko; Jakob, Torsten; Fanesi, Andrea; Wilhelm, Christian

    2017-09-05

    In microalgae, photosynthesis-driven CO2 assimilation delivers cell building blocks that are used in different biosynthetic pathways. Little is known about how the cell regulates the subsequent carbon allocation to, for example, cell growth or storage. However, knowledge about these regulatory mechanisms is of high biotechnological and ecological importance. In diatoms, the situation becomes even more complex because, as a consequence of their secondary endosymbiotic origin, the compartmentation of the pathways for the primary metabolic routes is different from green algae. Therefore, the mechanisms to manipulate the carbon allocation pattern cannot be adopted from the green lineage. This review describes the general pathways of cellular energy distribution from light absorption towards the final allocation of carbon into macromolecules and summarizes the current knowledge of diatom-specific allocation patterns. We further describe the (limited) knowledge of regulatory mechanisms of carbon partitioning between lipids, carbohydrates and proteins in diatoms. We present solutions to overcome the problems that hinder the identification of regulatory elements of carbon metabolism. This article is part of the themed issue 'The peculiar carbon metabolism in diatoms'. © 2017 The Author(s).

  12. Market Model for Resource Allocation in Emerging Sensor Networks with Reinforcement Learning

    PubMed Central

    Zhang, Yue; Song, Bin; Zhang, Ying; Du, Xiaojiang; Guizani, Mohsen

    2016-01-01

    Emerging sensor networks (ESNs) are an inevitable trend with the development of the Internet of Things (IoT), and are intended to connect almost every intelligent device. It is therefore critical to study resource allocation in such an environment, due to efficiency concerns, especially when resources are limited. By viewing ESNs as multi-agent environments, we model them with an agent-based modelling (ABM) method and deal with resource allocation problems with market models, after describing users' patterns. Reinforcement learning methods are introduced to estimate users' patterns and verify the outcomes in our market models. Experimental results show the efficiency of our methods, which are also capable of guiding topology management. PMID:27916841

  13. Methods for increasing noise immunity of radio electronic systems with redundancy

    NASA Astrophysics Data System (ADS)

    Orlov, P. E.; Medvedev, A. V.; Sharafutdinov, V. R.; Gazizov, T. R.; Ubaichin, A. V.

    2018-05-01

    The idea of increasing the noise immunity of radioelectronic systems with redundancy is presented. Specific technical solutions based on this idea of modal redundancy are described. An estimation of noise immunity improvement was performed by the example of implementation of modal redundancy with the broad-side electromagnetic coupling for a printed circuit board of the digital signal processing unit for an autonomous navigation system of a spacecraft. It is shown that the implementation of modal redundancy can provide an attenuation coefficient for the interference signal up to 12 dB.

  14. Plastid Uridine Salvage Activity Is Required for Photoassimilate Allocation and Partitioning in Arabidopsis

    PubMed Central

    Chen, Mingjie; Thelen, Jay J.

    2011-01-01

    Nucleotides are synthesized from de novo and salvage pathways. To characterize the uridine salvage pathway, two genes, UKL1 and UKL2, that tentatively encode uridine kinase (UK) and uracil phosphoribosyltransferase (UPRT) bifunctional enzymes were studied in Arabidopsis thaliana. T-DNA insertions in UKL1 and UKL2 reduced transcript expression and increased plant tolerance to toxic analogs 5-fluorouridine and 5-fluorouracil. Enzyme activity assays using purified recombinant proteins indicated that UKL1 and UKL2 have UK but not UPRT activity. Subcellular localization using a C-terminal enhanced yellow fluorescent protein fusion indicated that UKL1 and UKL2 localize to plastids. The ukl2 mutant shows reduced transient leaf starch during the day. External application of orotate rescued this phenotype in ukl2, indicating pyrimidine pools are limiting for starch synthesis in ukl2. Intermediates for lignin synthesis were upregulated, and there was increased lignin and reduced cellulose content in the ukl2 mutant. Levels of ATP, ADP, ADP-glucose, UTP, UDP, and UDP-glucose were altered in a light-dependent manner. Seed composition of the ukl1 and ukl2 mutants included lower oil and higher protein compared with the wild type. Unlike single gene mutants, the ukl1 ukl2 double mutant has severe developmental defects and reduced biomass accumulation, indicating these enzymes catalyze redundant reactions. These findings point to crucial roles played by uridine salvage for photoassimilate allocation and partitioning. PMID:21828290

  15. Prospective Analysis of Behavioral Economic Predictors of Stable Moderation Drinking Among Problem Drinkers Attempting Natural Recovery

    PubMed Central

    Tucker, Jalie A.; Cheong, JeeWon; Chandler, Susan D.; Lambert, Brice H.; Pietrzak, Brittney; Kwok, Heather; Davies, Susan L.

    2016-01-01

    Background As interventions have expanded beyond clinical treatment to include brief interventions for persons with less severe alcohol problems, predicting who can achieve stable moderation drinking has gained importance. Recent behavioral economic (BE) research on natural recovery has shown that active problem drinkers who allocate their monetary expenditures on alcohol and saving for the future over longer time horizons tend to have better subsequent recovery outcomes, including maintenance of stable moderation drinking. The present study compared the predictive utility of this money-based “Alcohol-Savings Discretionary Expenditure” (ASDE) index with multiple BE analogue measures of behavioral impulsivity and self-control, which have seldom been investigated together, to predict outcomes of natural recovery attempts. Methods Community-dwelling problem drinkers, enrolled shortly after stopping abusive drinking without treatment, were followed prospectively for up to a year (N = 175 [75.4% male], M age = 50.65 years). They completed baseline assessments of pre-resolution drinking practices and problems; analogue behavioral choice tasks (Delay Discounting, Melioration-Maximization, and Alcohol Purchase Tasks); and a Timeline Followback interview including expenditures on alcohol compared to voluntary savings (ASDE index) during the pre-resolution year. Results Multinomial logistic regression models showed that, among the BE measures, only the ASDE index predicted stable moderation drinking compared to stable abstinence or unstable resolutions involving relapse. As hypothesized, stable moderation was associated with more balanced pre-resolution allocations to drinking and savings (OR = 1.77, 95% CI = 1.02 ∼ 3.08, p < .05), suggesting it is associated with longer term behavior regulation processes than abstinence. Conclusions The ASDE's unique predictive utility may rest on its comprehensive representation of contextual elements to support this patterning of behavioral allocation. Stable low risk drinking, but not abstinence, requires such regulatory processes. PMID:27775161

  16. Flow of Funds Modeling for Localized Financial Markets: An Application of Spatial Price and Allocation Activity Analysis Models.

    DTIC Science & Technology

    1981-01-01

    on modeling the managerial aspects of the firm. The second has been the application to economic theory led by ...individual portfolio optimization problems which were embedded in a larger global optimization problem. In the global problem, portfolios were linked by market ...demand quantities or be given by linear demand relationships. As in~ the source markets , the model

  17. Efficient Computing Budget Allocation for Finding Simplest Good Designs

    PubMed Central

    Jia, Qing-Shan; Zhou, Enlu; Chen, Chun-Hung

    2012-01-01

    In many applications some designs are easier to implement, require less training data and shorter training time, and consume less storage than the others. Such designs are called simple designs, and are usually preferred over complex ones when they all have good performance. Despite the abundant existing studies on how to find good designs in simulation-based optimization (SBO), there exist few studies on finding simplest good designs. We consider this important problem in this paper, and make the following contributions. First, we provide lower bounds for the probabilities of correctly selecting the m simplest designs with top performance, and selecting the best m such simplest good designs, respectively. Second, we develop two efficient computing budget allocation methods to find m simplest good designs and to find the best m such designs, respectively; and show their asymptotic optimalities. Third, we compare the performance of the two methods with equal allocations over 6 academic examples and a smoke detection problem in wireless sensor networks. We hope that this work brings insight to finding the simplest good designs in general. PMID:23687404

  18. A market-based optimization approach to sensor and resource management

    NASA Astrophysics Data System (ADS)

    Schrage, Dan; Farnham, Christopher; Gonsalves, Paul G.

    2006-05-01

    Dynamic resource allocation for sensor management is a problem that demands solutions beyond traditional approaches to optimization. Market-based optimization applies solutions from economic theory, particularly game theory, to the resource allocation problem by creating an artificial market for sensor information and computational resources. Intelligent agents are the buyers and sellers in this market, and they represent all the elements of the sensor network, from sensors to sensor platforms to computational resources. These agents interact based on a negotiation mechanism that determines their bidding strategies. This negotiation mechanism and the agents' bidding strategies are based on game theory, and they are designed so that the aggregate result of the multi-agent negotiation process is a market in competitive equilibrium, which guarantees an optimal allocation of resources throughout the sensor network. This paper makes two contributions to the field of market-based optimization: First, we develop a market protocol to handle heterogeneous goods in a dynamic setting. Second, we develop arbitrage agents to improve the efficiency in the market in light of its dynamic nature.

  19. SLA-based optimisation of virtualised resource for multi-tier web applications in cloud data centres

    NASA Astrophysics Data System (ADS)

    Bi, Jing; Yuan, Haitao; Tie, Ming; Tan, Wei

    2015-10-01

    Dynamic virtualised resource allocation is the key to quality of service assurance for multi-tier web application services in cloud data centre. In this paper, we develop a self-management architecture of cloud data centres with virtualisation mechanism for multi-tier web application services. Based on this architecture, we establish a flexible hybrid queueing model to determine the amount of virtual machines for each tier of virtualised application service environments. Besides, we propose a non-linear constrained optimisation problem with restrictions defined in service level agreement. Furthermore, we develop a heuristic mixed optimisation algorithm to maximise the profit of cloud infrastructure providers, and to meet performance requirements from different clients as well. Finally, we compare the effectiveness of our dynamic allocation strategy with two other allocation strategies. The simulation results show that the proposed resource allocation method is efficient in improving the overall performance and reducing the resource energy cost.

  20. Research on Multirobot Pursuit Task Allocation Algorithm Based on Emotional Cooperation Factor

    PubMed Central

    Fang, Baofu; Chen, Lu; Wang, Hao; Dai, Shuanglu; Zhong, Qiubo

    2014-01-01

    Multirobot task allocation is a hot issue in the field of robot research. A new emotional model is used with the self-interested robot, providing a new way to measure a self-interested robot's individual willingness to cooperate in the multirobot task allocation problem. An emotional cooperation factor is introduced into the self-interested robot; it is updated based on emotional attenuation and external stimuli. A multirobot pursuit task allocation algorithm based on this emotional cooperation factor is then proposed. Combined with a two-step auction algorithm, it recruits team leaders and team collaborators, sets up pursuit teams, and finally uses certain strategies to complete the pursuit task. To verify the effectiveness of this algorithm, comparison experiments were performed against the instantaneous greedy optimal auction algorithm; the results show that the total pursuit time and total team revenue can be optimized by using this algorithm. PMID:25152925
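
    One illustrative reading of the update rule sketched above (attenuation plus external stimuli); the decay rate, stimulus values, and joining threshold below are invented and are not the paper's parameters:

        def update_emotional_factor(e, stimulus, decay=0.9, e_min=0.0, e_max=1.0):
            # Emotional cooperation factor: attenuates each step, is boosted by stimuli.
            return min(e_max, max(e_min, decay * e + stimulus))

        # A self-interested robot only joins a pursuit team when its factor is high enough.
        e, threshold = 0.2, 0.5
        for stimulus in (0.0, 0.3, 0.0, 0.4, 0.0):
            e = update_emotional_factor(e, stimulus)
            print(round(e, 3), "joins team" if e >= threshold else "declines")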

  1. Research on multirobot pursuit task allocation algorithm based on emotional cooperation factor.

    PubMed

    Fang, Baofu; Chen, Lu; Wang, Hao; Dai, Shuanglu; Zhong, Qiubo

    2014-01-01

    Multirobot task allocation is a hot issue in the field of robot research. A new emotional model is used with the self-interested robot, providing a new way to measure a self-interested robot's individual willingness to cooperate in the multirobot task allocation problem. An emotional cooperation factor is introduced into the self-interested robot; it is updated based on emotional attenuation and external stimuli. A multirobot pursuit task allocation algorithm based on this emotional cooperation factor is then proposed. Combined with a two-step auction algorithm, it recruits team leaders and team collaborators, sets up pursuit teams, and finally uses certain strategies to complete the pursuit task. To verify the effectiveness of this algorithm, comparison experiments were performed against the instantaneous greedy optimal auction algorithm; the results show that the total pursuit time and total team revenue can be optimized by using this algorithm.

  2. Optimal Sensor Allocation for Fault Detection and Isolation

    NASA Technical Reports Server (NTRS)

    Azam, Mohammad; Pattipati, Krishna; Patterson-Hine, Ann

    2004-01-01

    Automatic fault diagnostic schemes rely on various types of sensors (e.g., temperature, pressure, vibration, etc) to measure the system parameters. Efficacy of a diagnostic scheme is largely dependent on the amount and quality of information available from these sensors. The reliability of sensors, as well as the weight, volume, power, and cost constraints, often makes it impractical to monitor a large number of system parameters. An optimized sensor allocation that maximizes the fault diagnosibility, subject to specified weight, volume, power, and cost constraints is required. Use of optimal sensor allocation strategies during the design phase can ensure better diagnostics at a reduced cost for a system incorporating a high degree of built-in testing. In this paper, we propose an approach that employs multiple fault diagnosis (MFD) and optimization techniques for optimal sensor placement for fault detection and isolation (FDI) in complex systems. Keywords: sensor allocation, multiple fault diagnosis, Lagrangian relaxation, approximate belief revision, multidimensional knapsack problem.
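
    The keywords above frame sensor allocation as a multidimensional knapsack problem; a small greedy sketch of that reading is shown below (the sensor data, diagnosability scores, and budgets are hypothetical, and the paper's own approach relies on Lagrangian relaxation rather than this heuristic):

        # Each candidate sensor: (diagnosability gain, weight, power, cost) -- values invented.
        sensors = {
            "temp_1":  (0.30, 0.2, 1.0, 50.0),
            "vib_1":   (0.45, 0.5, 2.0, 120.0),
            "press_1": (0.25, 0.1, 0.5, 30.0),
            "temp_2":  (0.20, 0.2, 1.0, 50.0),
        }
        budget = {"weight": 0.8, "power": 3.0, "cost": 180.0}

        def greedy_selection(sensors, budget):
            # Pick sensors by gain per aggregate resource use while every budget still holds.
            remaining = dict(budget)
            chosen = []
            ranked = sorted(sensors.items(),
                            key=lambda kv: kv[1][0] / (kv[1][1] + kv[1][2] + kv[1][3]),
                            reverse=True)
            for name, (gain, w, p, c) in ranked:
                if w <= remaining["weight"] and p <= remaining["power"] and c <= remaining["cost"]:
                    chosen.append(name)
                    remaining["weight"] -= w
                    remaining["power"] -= p
                    remaining["cost"] -= c
            return chosen

        print(greedy_selection(sensors, budget))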

  3. From honeybees to Internet servers: biomimicry for distributed management of Internet hosting centers.

    PubMed

    Nakrani, Sunil; Tovey, Craig

    2007-12-01

    An Internet hosting center hosts services on its server ensemble. The center must allocate servers dynamically amongst services to maximize revenue earned from hosting fees. The finite server ensemble, unpredictable request arrival behavior and server reallocation cost make server allocation optimization difficult. Server allocation closely resembles honeybee forager allocation amongst flower patches to optimize nectar influx. The resemblance inspires a honeybee biomimetic algorithm. This paper describes details of the honeybee self-organizing model in terms of information flow and feedback, analyzes the homology between the two problems and derives the resulting biomimetic algorithm for hosting centers. The algorithm is assessed for effectiveness and adaptiveness by comparative testing against benchmark and conventional algorithms. Computational results indicate that the new algorithm is highly adaptive to widely varying external environments and quite competitive against benchmark assessment algorithms. Other swarm intelligence applications are briefly surveyed, and some general speculations are offered regarding their various degrees of success.

  4. Anthropic Correction of Information Estimates and Its Application to Neural Coding

    PubMed Central

    Gastpar, Michael C.; Gill, Patrick R.; Huth, Alexander G.; Theunissen, Frédéric E.

    2015-01-01

    Information theory has been used as an organizing principle in neuroscience for several decades. Estimates of the mutual information (MI) between signals acquired in neurophysiological experiments are believed to yield insights into the structure of the underlying information processing architectures. With the pervasive availability of recordings from many neurons, several information and redundancy measures have been proposed in the recent literature. A typical scenario is that only a small number of stimuli can be tested, while ample response data may be available for each of the tested stimuli. The resulting asymmetric information estimation problem is considered. It is shown that the direct plug-in information estimate has a negative bias. An anthropic correction is introduced that has a positive bias. These two complementary estimators and their combinations are natural candidates for information estimation in neuroscience. Tail and variance bounds are given for both estimates. The proposed information estimates are applied to the analysis of neural discrimination and redundancy in the avian auditory system. PMID:26900172
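
    For concreteness, the direct plug-in MI estimate discussed above can be written down in a few lines (a generic implementation from a joint stimulus-response count table; the bias analysis and the anthropic correction themselves are not reproduced here):

        import numpy as np

        def plugin_mutual_information(counts):
            # Direct plug-in estimate of I(S;R) in bits from a stimulus-by-response count
            # table: empirical probabilities are substituted into the MI formula.
            counts = np.asarray(counts, dtype=float)
            p = counts / counts.sum()
            ps = p.sum(axis=1, keepdims=True)   # empirical stimulus marginal
            pr = p.sum(axis=0, keepdims=True)   # empirical response marginal
            nz = p > 0
            return float((p[nz] * np.log2(p[nz] / (ps @ pr)[nz])).sum())

        # Toy table: 3 tested stimuli (rows), responses binned into 3 categories (columns).
        counts = [[30,  5,  5],
                  [ 5, 30,  5],
                  [ 5,  5, 30]]
        print(plugin_mutual_information(counts))   # plug-in estimate for this sample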

  5. Anthropic Correction of Information Estimates and Its Application to Neural Coding.

    PubMed

    Gastpar, Michael C; Gill, Patrick R; Huth, Alexander G; Theunissen, Frédéric E

    2010-02-01

    Information theory has been used as an organizing principle in neuroscience for several decades. Estimates of the mutual information (MI) between signals acquired in neurophysiological experiments are believed to yield insights into the structure of the underlying information processing architectures. With the pervasive availability of recordings from many neurons, several information and redundancy measures have been proposed in the recent literature. A typical scenario is that only a small number of stimuli can be tested, while ample response data may be available for each of the tested stimuli. The resulting asymmetric information estimation problem is considered. It is shown that the direct plug-in information estimate has a negative bias. An anthropic correction is introduced that has a positive bias. These two complementary estimators and their combinations are natural candidates for information estimation in neuroscience. Tail and variance bounds are given for both estimates. The proposed information estimates are applied to the analysis of neural discrimination and redundancy in the avian auditory system.

  6. Documentation for the machine-readable character coded version of the SKYMAP catalogue

    NASA Technical Reports Server (NTRS)

    Warren, W. H., Jr.

    1981-01-01

    The SKYMAP catalogue is a compilation of astronomical data prepared primarily for purposes of attitude guidance for satellites. In addition to the SKYMAP Master Catalogue data base, a software package of data base management and utility programs is available. The tape version of the SKYMAP Catalogue, as received by the Astronomical Data Center (ADC), contains logical records consisting of a combination of binary and EBCDIC data. Certain character coded data in each record are redundant in that the same data are present in binary form. In order to facilitate wider use of all SKYMAP data by the astronomical community, a formatted (character) version was prepared by eliminating all redundant character data and converting all binary data to character form. The character version of the catalogue is described. The document is intended to fully describe the formatted tape so that users can process the data without problems or guesswork; it should be distributed with any character version of the catalogue.

  7. A hybrid data compression approach for online backup service

    NASA Astrophysics Data System (ADS)

    Wang, Hua; Zhou, Ke; Qin, MingKang

    2009-08-01

    With the popularity of SaaS (Software as a Service), backup services have become a hot topic in storage applications. Because of the large number of backup users, reducing the massive data load is a key problem for system designers. Data compression provides a good solution. Traditional data compression applications adopt a single method, which has limitations in some respects: data stream compression can only realize intra-file compression, de-duplication only eliminates inter-file redundant data, and the resulting compression efficiency cannot meet the needs of backup service software. This paper proposes a novel hybrid compression approach with two levels: global compression and block compression. The former eliminates redundant inter-file copies across different users, while the latter adopts data stream compression technology to remove intra-file redundancy. Several compression algorithms were adopted to measure the compression ratio and CPU time, and the suitability of different algorithms in particular situations is also analyzed. The performance analysis shows that great improvement is achieved through the hybrid compression policy.
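
    A minimal sketch of the two-level idea described above, using content hashing for cross-user de-duplication at the global level and zlib stream compression at the block level (the chunk size and interfaces are invented; the paper's actual policy and algorithms are not reproduced here):

        import hashlib
        import zlib

        class HybridBackupStore:
            # Global level: identical chunks (by SHA-256) are stored once across all users.
            # Block level: each unique chunk is additionally stream-compressed with zlib.

            def __init__(self, chunk_size=4096):
                self.chunk_size = chunk_size
                self.chunks = {}          # digest -> compressed chunk bytes

            def put(self, data):
                # Store a backup stream; return the list of chunk digests (the "recipe").
                recipe = []
                for i in range(0, len(data), self.chunk_size):
                    chunk = data[i:i + self.chunk_size]
                    digest = hashlib.sha256(chunk).hexdigest()
                    if digest not in self.chunks:                    # global de-duplication
                        self.chunks[digest] = zlib.compress(chunk)   # intra-chunk compression
                    recipe.append(digest)
                return recipe

            def get(self, recipe):
                return b"".join(zlib.decompress(self.chunks[d]) for d in recipe)

        store = HybridBackupStore()
        payload = b"backup payload " * 1000
        recipe = store.put(payload)
        assert store.get(recipe) == payload
        print(len(payload), "bytes ->", sum(len(c) for c in store.chunks.values()), "stored bytes")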

  8. Synthesis of atmospheric turbulence point spread functions by sparse and redundant representations

    NASA Astrophysics Data System (ADS)

    Hunt, Bobby R.; Iler, Amber L.; Bailey, Christopher A.; Rucci, Michael A.

    2018-02-01

    Atmospheric turbulence is a fundamental problem in imaging through long slant ranges, horizontal-range paths, or uplooking astronomical cases through the atmosphere. An essential characterization of atmospheric turbulence is the point spread function (PSF). Turbulence images can be simulated to study basic questions, such as image quality and image restoration, by synthesizing PSFs of desired properties. In this paper, we report on a method to synthesize PSFs of atmospheric turbulence. The method uses recent developments in sparse and redundant representations. From a training set of measured atmospheric PSFs, we construct a dictionary of "basis functions" that characterize the atmospheric turbulence PSFs. A PSF can be synthesized from this dictionary by a properly weighted combination of dictionary elements. We disclose an algorithm to synthesize PSFs from the dictionary. The algorithm can synthesize PSFs in three orders of magnitude less computing time than conventional wave optics propagation methods. The resulting PSFs are also shown to be statistically representative of the turbulence conditions that were used to construct the dictionary.
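
    A toy illustration of synthesizing a PSF as a weighted combination of dictionary atoms, as described above (the dictionary here is random and the weights come from a plain least-squares fit; the paper builds its dictionary from measured turbulence PSFs and uses sparse-coding machinery that this sketch does not reproduce):

        import numpy as np

        rng = np.random.default_rng(1)

        # Hypothetical dictionary: each column is one "atom" PSF, flattened from a 16x16 patch.
        patch, n_atoms = 16, 32
        D = np.abs(rng.normal(size=(patch * patch, n_atoms)))
        D /= D.sum(axis=0)                      # normalize each atom to unit energy

        # A target PSF measured under some turbulence condition (here: a random mixture).
        true_w = np.maximum(rng.normal(size=n_atoms), 0.0)
        target = D @ true_w

        # Fit weights and synthesize a new PSF from the dictionary.
        w, *_ = np.linalg.lstsq(D, target, rcond=None)
        psf = np.clip((D @ w).reshape(patch, patch), 0.0, None)
        psf /= psf.sum()                        # a PSF integrates to one

        print("reconstruction error:", np.linalg.norm(D @ w - target))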

  9. Leaf non-structural carbohydrate allocation and C:N:P stoichiometry in response to light acclimation in seedlings of two subtropical shade-tolerant tree species.

    PubMed

    Xie, Hongtao; Yu, Mukui; Cheng, Xiangrong

    2018-03-01

    Light availability greatly affects plant growth and development. In shaded environments, plants must respond to reduced light intensity to ensure a regular rate of photosynthesis to maintain the dynamic balance of nutrients, such as leaf non-structural carbohydrates (NSCs), carbon (C), nitrogen (N) and phosphorus (P). To improve our understanding of the nutrient utilization strategies of understory shade-tolerant plants, we compared the variations in leaf NSCs, C, N and P in response to heterogeneous controlled light conditions between two subtropical evergreen broadleaf shade-tolerant species, Elaeocarpus sylvestris (E. sylvestris) and Illicium henryi (I. henryi). Light intensity treatments were applied at five levels (100%, 52%, 33%, 15% and 6% full sunlight) for 30 weeks to identify the effects of reduced light intensity on leaf NSC allocation patterns and leaf C:N:P stoichiometry characteristics. We found that leaf soluble sugar, starch and NSC concentrations in E. sylvestris showed decreasing trends with reduced light intensity, whereas I. henryi presented slightly increasing trends from 100% to 15% full sunlight and then significant decreases at extremely low light intensity (6% full sunlight). The soluble sugar/starch ratio of E. sylvestris decreased with decreasing light intensity, whereas that of I. henryi remained stable. Moreover, both species exhibited increasing trends in leaf N and P concentrations but limited leaf N:P and C:P ratio fluctuations with decreasing light intensity, revealing their adaptive strategies for poor light environments and their growth strategies under ideal light environments. There were highly significant correlations between leaf NSC variables and C:N:P stoichiometric variables in both species, revealing a trade-off in photosynthesis production between leaf NSC and carbon allocation. Thus, shade-tolerant plants readjusted their allocation of leaf NSCs, C, N and P in response to light acclimation. Redundancy analysis showed that leaf morphological features of both E. sylvestris and I. henryi affected their corresponding leaf nutrient traits. These results improve our understanding of the dynamic balance between leaf NSCs and leaf C, N and P components in the nutritional metabolism of shade-tolerant plants. Two species of understory shade-tolerant plants responded differently to varying light intensities in terms of leaf non-structural carbohydrate allocation and the utilization of carbon, nitrogen and phosphorus to balance nutritional metabolism and adapt to environmental stress. Copyright © 2018 Elsevier Masson SAS. All rights reserved.

  10. Allocating and Reallocating Financial Resources in an Environment of Fiscal Stress. Topical Paper No. 24. Selected Proceedings of the Annual Conference on Higher Education (9th, Center for the Study of Higher Education, University of Arizona, Tucson, May 1984).

    ERIC Educational Resources Information Center

    Wilson, Robert A., Ed.

    Resource allocation and reallocation strategies for colleges who have financial problems are considered in three articles based on presentations to a national conference at the University of Arizona. In "Reallocation Strategies," James A. Hyatt discusses factors that shape institutional responses to reallocation and elements that should…

  11. Joint optimization of regional water-power systems

    NASA Astrophysics Data System (ADS)

    Pereira-Cardenal, Silvio J.; Mo, Birger; Gjelsvik, Anders; Riegels, Niels D.; Arnbjerg-Nielsen, Karsten; Bauer-Gottwein, Peter

    2016-06-01

    Energy and water resources systems are tightly coupled; energy is needed to deliver water and water is needed to extract or produce energy. Growing pressure on these resources has raised concerns about their long-term management and highlights the need to develop integrated solutions. A method for joint optimization of water and electric power systems was developed in order to identify methodologies to assess the broader interactions between water and energy systems. The proposed method is to include water users and power producers into an economic optimization problem that minimizes the cost of power production and maximizes the benefits of water allocation, subject to constraints from the power and hydrological systems. The method was tested on the Iberian Peninsula using simplified models of the seven major river basins and the power market. The optimization problem was successfully solved using stochastic dual dynamic programming. The results showed that current water allocation to hydropower producers in basins with high irrigation productivity, and to irrigation users in basins with high hydropower productivity was sub-optimal. Optimal allocation was achieved by managing reservoirs in very distinct ways, according to the local inflow, storage capacity, hydropower productivity, and irrigation demand and productivity. This highlights the importance of appropriately representing the water users' spatial distribution and marginal benefits and costs when allocating water resources optimally. The method can handle further spatial disaggregation and can be extended to include other aspects of the water-energy nexus.
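
    A stripped-down, single-period caricature of the allocation trade-off described above (one reservoir, one irrigation district, one hydropower plant; all prices, demands, and conversion factors are invented, and the actual study solves a multi-reservoir stochastic problem with stochastic dual dynamic programming):

        from scipy.optimize import linprog

        # Decision variables: x = [water to irrigation, water through turbines], in hm^3.
        irr_benefit = 0.05      # EUR per m^3 of irrigation water (hypothetical)
        energy_price = 50.0     # EUR per MWh (hypothetical)
        energy_rate = 0.25      # MWh generated per 1000 m^3 of turbined water (hypothetical)

        # Benefit per hm^3 of each use (linprog minimizes, so negate).
        c = [-irr_benefit * 1e6, -energy_price * energy_rate * 1000]

        available_water = 120.0      # hm^3 available this period
        irrigation_demand = 80.0     # hm^3
        turbine_capacity = 100.0     # hm^3 that can be routed through the turbines

        res = linprog(c,
                      A_ub=[[1.0, 1.0]], b_ub=[available_water],
                      bounds=[(0.0, irrigation_demand), (0.0, turbine_capacity)],
                      method="highs")
        print("irrigation, hydropower releases (hm^3):", res.x, " benefit (EUR):", -res.fun)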

  12. Reconceptualizing perceptual load as a rate problem: The role of time in the allocation of selective attention.

    PubMed

    Li, Zhi; Xin, Keyun; Li, Wei; Li, Yanzhe

    2018-04-30

    In the literature about allocation of selective attention, a widely studied question is when will attention be allocated to information that is clearly irrelevant to the task at hand. The present study, by using convergent evidence, demonstrated that there is a trade-off between quantity of information present in a display and the time allowed to process it. Specifically, whether or not there is interference from irrelevant distractors depends not only on the amount of information present, but also on the amount of time allowed to process that information. When processing time is calibrated to the amount of information present, irrelevant distractors can be selectively ignored successfully. These results suggest that the perceptual load in the load theory of selective attention (i.e., Lavie, 2005) should be thought about as a dynamic rate problem rather than a static capacity limitation. The authors thus propose that rather than conceiving of perceptual load as a quantity of information, they should consider it as a quantity of information per unit of time. In other words, it is the relationship between the quantity of information in the task and the time for processing the information that determines the allocation of selective attention. Thus, the present findings extended load theory, allowing it to explain findings that were previously considered as counter evidence of load theory. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  13. Influence of the Redundant Verification and the Non-Redundant Verification on the Hydraulic Tomography

    NASA Astrophysics Data System (ADS)

    Wei, T. B.; Chen, Y. L.; Lin, H. R.; Huang, S. Y.; Yeh, T. C. J.; Wen, J. C.

    2016-12-01

    In groundwater studies, hydraulic tomography (HT) is widely used to estimate the heterogeneous spatial distribution of aquifer hydraulic properties by inverting data from field-site pumping tests, and such inversions have shown that most field-site aquifers have heterogeneous spatial distributions of hydrogeological parameters. Huang et al. [2011] applied a non-redundant verification analysis, in which the pumping wells change location while the observation wells remain fixed, to both the inverse and the forward analyses of a steady-state model, demonstrating the feasibility of estimating the heterogeneous spatial distribution of hydraulic properties of a field-site aquifer. The existing literature, however, covers only steady-state, non-redundant verification with changing pumping well locations and fixed observation wells; the remaining combinations, namely pumping wells fixed or changing in location with observation wells fixed (redundant verification) or changing (non-redundant verification), have not yet been studied to explore their influence on the hydraulic tomography method. In this study, both the redundant and the non-redundant verification methods were applied to the forward analysis to examine their influence on hydraulic tomography under transient conditions. The methods were applied to an actual case at the NYUST campus site to demonstrate the effectiveness of hydraulic tomography and to confirm the feasibility of the inverse and forward analyses. Keywords: Hydraulic Tomography, Redundant Verification, Heterogeneous, Inverse, Forward

  14. Redundant Design in Interdependent Networks

    PubMed Central

    2016-01-01

    Modern infrastructure networks are often coupled together and thus can be modeled as interdependent networks. Overload and interdependence effects make interdependent networks more fragile when suffering from attacks. Existing research has primarily concentrated on the cascading failure process of interdependent networks without load, or the robustness of an isolated network with load. Only limited research has been done on the cascading failure process caused by overload in interdependent networks. Redundant design is a primary approach to enhance the reliability and robustness of a system. In this paper, we propose two redundancy measures, node back-up and dependency redundancy, and the experimental results indicate that both measures are effective and low-cost. Two detailed redundant-design models are introduced based on the non-linear load-capacity model. Based on the attributes and historical failure distribution of nodes, we introduce three static selection strategies (Random-based, Degree-based, and Initial load-based) and a dynamic strategy, HFD (historical failure distribution), to identify which nodes should be backed up with priority. In addition, we consider the cost and efficiency of different redundancy proportions to determine the best proportion with maximal enhancement and minimal cost. Experiments on interdependent networks demonstrate that the combination of HFD and dependency redundancy is an effective and preferred measure to implement redundant design on interdependent networks. The results suggest that the redundant design proposed in this paper can permit construction of highly robust interactive networked systems. PMID:27764174
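
    To make the selection strategies concrete, the sketch below (plain Python, with a made-up toy edge list and a hypothetical helper name, not code from the paper) illustrates the Degree-based static strategy: rank nodes by degree and back up the highest-degree ones first within a redundancy budget.

        # Illustrative sketch of a Degree-based backup selection strategy.
        def degree_based_backups(edges, budget):
            """Return the 'budget' highest-degree nodes as backup candidates."""
            degree = {}
            for u, v in edges:
                degree[u] = degree.get(u, 0) + 1
                degree[v] = degree.get(v, 0) + 1
            ranked = sorted(degree, key=degree.get, reverse=True)
            return ranked[:budget]

        if __name__ == "__main__":
            # Toy coupled-network edge list; node names are purely illustrative.
            edges = [("a", "b"), ("a", "c"), ("a", "d"), ("b", "c"), ("d", "e")]
            print(degree_based_backups(edges, budget=2))   # ['a', 'b']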

  15. Effects of Response Task and Accessory Stimuli on Redundancy Gain: Tests of the Hemispheric Coactivation Model

    ERIC Educational Resources Information Center

    Miller, Jeff; Van Nes, Fenna

    2007-01-01

    Two experiments tested predictions of the hemispheric coactivation model for redundancy gain (J. O. Miller, 2004). Simple reaction time was measured in divided attention tasks with visual stimuli presented to the left or right of fixation or redundantly to both sides. Experiment 1 tested the prediction that redundancy gain--the decrease in…

  16. Analysis of functional redundancies within the Arabidopsis TCP transcription factor family.

    PubMed

    Danisman, Selahattin; van Dijk, Aalt D J; Bimbo, Andrea; van der Wal, Froukje; Hennig, Lars; de Folter, Stefan; Angenent, Gerco C; Immink, Richard G H

    2013-12-01

    Analyses of the functions of TEOSINTE-LIKE1, CYCLOIDEA, and PROLIFERATING CELL FACTOR1 (TCP) transcription factors have been hampered by functional redundancy between its individual members. In general, putative functionally redundant genes are predicted based on sequence similarity and confirmed by genetic analysis. In the TCP family, however, identification is impeded by relatively low overall sequence similarity. In a search for functionally redundant TCP pairs that control Arabidopsis leaf development, this work performed an integrative bioinformatics analysis, combining protein sequence similarities, gene expression data, and results of pair-wise protein-protein interaction studies for the 24 members of the Arabidopsis TCP transcription factor family. For this, the work completed any lacking gene expression and protein-protein interaction data experimentally and then performed a comprehensive prediction of potential functional redundant TCP pairs. Subsequently, redundant functions could be confirmed for selected predicted TCP pairs by genetic and molecular analyses. It is demonstrated that the previously uncharacterized class I TCP19 gene plays a role in the control of leaf senescence in a redundant fashion with TCP20. Altogether, this work shows the power of combining classical genetic and molecular approaches with bioinformatics predictions to unravel functional redundancies in the TCP transcription factor family.

  17. Analysis of functional redundancies within the Arabidopsis TCP transcription factor family

    PubMed Central

    Danisman, Selahattin; de Folter, Stefan; Immink, Richard G. H.

    2013-01-01

    Analyses of the functions of TEOSINTE-LIKE1, CYCLOIDEA, and PROLIFERATING CELL FACTOR1 (TCP) transcription factors have been hampered by functional redundancy between its individual members. In general, putative functionally redundant genes are predicted based on sequence similarity and confirmed by genetic analysis. In the TCP family, however, identification is impeded by relatively low overall sequence similarity. In a search for functionally redundant TCP pairs that control Arabidopsis leaf development, this work performed an integrative bioinformatics analysis, combining protein sequence similarities, gene expression data, and results of pair-wise protein–protein interaction studies for the 24 members of the Arabidopsis TCP transcription factor family. For this, the work completed any lacking gene expression and protein–protein interaction data experimentally and then performed a comprehensive prediction of potential functional redundant TCP pairs. Subsequently, redundant functions could be confirmed for selected predicted TCP pairs by genetic and molecular analyses. It is demonstrated that the previously uncharacterized class I TCP19 gene plays a role in the control of leaf senescence in a redundant fashion with TCP20. Altogether, this work shows the power of combining classical genetic and molecular approaches with bioinformatics predictions to unravel functional redundancies in the TCP transcription factor family. PMID:24129704

  18. Experimental evaluation of certification trails using abstract data type validation

    NASA Technical Reports Server (NTRS)

    Wilson, Dwight S.; Sullivan, Gregory F.; Masson, Gerald M.

    1993-01-01

    Certification trails are a recently introduced and promising approach to fault-detection and fault-tolerance. Recent experimental work reveals many cases in which a certification-trail approach allows for significantly faster program execution time than a basic time-redundancy approach. Algorithms for answer-validation of abstract data types allow a certification trail approach to be used for a wide variety of problems. An attempt to assess the performance of algorithms utilizing certification trails on abstract data types is reported. Specifically, this method was applied to the following problems: heapsort, Huffman tree, shortest path, and skyline. Previous results used certification trails specific to a particular problem and implementation. The approach allows certification trails to be localized to 'data structure modules,' making the use of this technique transparent to the user of such modules.
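
    A minimal sketch of the certification-trail idea, applied here to sorting rather than to the paper's data-structure modules (function names are illustrative, not from the original work): the primary run emits its answer together with a trail, and a much simpler checker validates the answer against that trail.

        # Primary computation emits (answer, trail); a cheap checker validates both.
        def sort_with_trail(xs):
            """Primary computation: return (sorted values, permutation trail)."""
            trail = sorted(range(len(xs)), key=lambda i: xs[i])
            return [xs[i] for i in trail], trail

        def check_with_trail(xs, answer, trail):
            """Second phase: trail is a permutation, answer matches it,
            and the answer is in non-decreasing order."""
            if sorted(trail) != list(range(len(xs))):
                return False
            if answer != [xs[i] for i in trail]:
                return False
            return all(answer[i] <= answer[i + 1] for i in range(len(answer) - 1))

        if __name__ == "__main__":
            data = [5, 1, 4, 1, 3]
            out, trail = sort_with_trail(data)
            print(out, check_with_trail(data, out, trail))  # [1, 1, 3, 4, 5] True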

  19. Optimal plant nitrogen use improves model representation of vegetation response to elevated CO2

    NASA Astrophysics Data System (ADS)

    Caldararu, Silvia; Kern, Melanie; Engel, Jan; Zaehle, Sönke

    2017-04-01

    Existing global vegetation models often cannot accurately represent observed ecosystem behaviour under transient conditions such as elevated atmospheric CO2, a problem that can be attributed to an inflexibility in model representation of plant responses. Plant optimality concepts have been proposed as a solution to this problem as they offer a way to represent plastic plant responses in complex models. Here we present a novel, next generation vegetation model which includes optimal nitrogen allocation to and within the canopy as well as optimal biomass allocation between above- and belowground components in response to nutrient and water availability. The underlying hypothesis is that plants adjust their use of nitrogen in response to environmental conditions and nutrient availability in order to maximise biomass growth. We show that for two FACE (Free Air CO2 enrichment) experiments, the Duke forest and Oak Ridge forest sites, the model can better predict vegetation responses over the duration of the experiment when optimal processes are included. Specifically, under elevated CO2 conditions, the model predicts a lower optimal leaf N concentration as well as increased biomass allocation to fine roots, which, combined with a redistribution of leaf N between the Rubisco and chlorophyll components, leads to a continued NPP response under high CO2, where models with a fixed canopy stoichiometry predict a quick onset of N limitation.

  20. Redundant operation of counter modules

    NASA Technical Reports Server (NTRS)

    Nagano, S. (Inventor)

    1980-01-01

    A technique for the redundant operation of counter modules is described. Redundant operation is maintained by detecting the zero state of each counter and clearing the other to that state, thus periodically resynchronizing the counters, and obtaining an output from both counters through AC coupled diode-OR gates. Redundant operation of counter flip flops is maintained in a similar manner, and synchronous operation of redundant squarewave clock generators of the feedback type is effected by connecting together the feedback inputs of the squarewave generators through a coupling resistor, and obtaining an output from both generators through AC coupled diode-OR gates.

  1. Applications of dynamic scheduling technique to space related problems: Some case studies

    NASA Astrophysics Data System (ADS)

    Nakasuka, Shinichi; Ninomiya, Tetsujiro

    1994-10-01

    The paper discusses the applications of 'Dynamic Scheduling' technique, which has been invented for the scheduling of Flexible Manufacturing System, to two space related scheduling problems: operation scheduling of a future space transportation system, and resource allocation in a space system with limited resources such as space station or space shuttle.

  2. Using Probabilistic Information in Solving Resource Allocation Problems for a Decentralized Firm

    DTIC Science & Technology

    1978-09-01

    deterministic equivalent form of HIQ’s problem (5) by an approach similar to the one used in stochastic programming with simple recourse. See Ziemba [38] or, in...(1964). 38. Ziemba, W.T., "Stochastic Programs with Simple Recourse," Technical Report 72-15, Stanford University, Department of Operations Research

  3. Behavioral Family Intervention for Children with Developmental Disabilities and Behavioral Problems

    ERIC Educational Resources Information Center

    Roberts, Clare; Mazzucchelli, Trevor; Studman, Lisa; Sanders, Matthew R.

    2006-01-01

    The outcomes of a randomized clinical trial of a new behavioral family intervention, Stepping Stones Triple P, for preschoolers with developmental and behavior problems are presented. Forty-eight children with developmental disabilities participated, 27 randomly allocated to an intervention group and 20 to a wait-list control group. Parents…

  4. Foster Placement Disruptions Associated with Problem Behavior: Mitigating a Threshold Effect

    ERIC Educational Resources Information Center

    Fisher, Philip A.; Stoolmiller, Mike; Mannering, Anne M.; Takahashi, Aiko; Chamberlain, Patricia

    2011-01-01

    Objective: Placement disruptions have adverse effects on foster children. Identifying reliable predictors of placement disruptions might assist in the allocation of services to prevent disruptions. There were two objectives in this study: (a) to replicate a prior finding that the number of daily child problem behaviors at entry into a new foster…

  5. The Quiet Revolution in Land Use Control.

    ERIC Educational Resources Information Center

    Bosselman, Fred; Callies, David

    The Council on Environmental Quality commissioned this report on the innovative land use laws of several states to learn how some of the most complex land use issues and problems of re-allocating responsibilities between state and local governments are being addressed. Many of the laws analyzed are designed to deal with problems that are treated…

  6. Sharing the cost of river basin adaptation portfolios to climate change: Insights from social justice and cooperative game theory

    NASA Astrophysics Data System (ADS)

    Girard, Corentin; Rinaudo, Jean-Daniel; Pulido-Velazquez, Manuel

    2016-10-01

    The adaptation of water resource systems to the potential impacts of climate change requires mixed portfolios of supply and demand adaptation measures. The issue is not only to select efficient, robust, and flexible adaptation portfolios but also to find equitable strategies of cost allocation among the stakeholders. Our work addresses such cost allocation problems by applying two different theoretical approaches: social justice and cooperative game theory in a real case study. First of all, a cost-effective portfolio of adaptation measures at the basin scale is selected using a least-cost optimization model. Cost allocation solutions are then defined based on economic rationality concepts from cooperative game theory (the Core). Second, interviews are conducted to characterize stakeholders' perceptions of social justice principles associated with the definition of alternative cost allocation rules. The comparison of the cost allocation scenarios leads to contrasted insights in order to inform the decision-making process at the river basin scale and potentially reap the efficiency gains from cooperation in the design of river basin adaptation portfolios.
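
    As a toy illustration of the game-theoretic side (hypothetical costs and user names, not the study's data), the sketch below checks whether a proposed cost allocation lies in the Core of a cooperative cost game, i.e. the grand-coalition cost is split exactly and no coalition pays more than its stand-alone cost.

        # Check core membership of a cost allocation for a small cooperative game.
        from itertools import combinations

        def in_core(players, cost, allocation, tol=1e-9):
            grand = frozenset(players)
            if abs(sum(allocation[p] for p in players) - cost[grand]) > tol:
                return False          # allocation must exactly cover the total cost
            for r in range(1, len(players)):
                for coalition in combinations(players, r):
                    s = frozenset(coalition)
                    if sum(allocation[p] for p in s) > cost[s] + tol:
                        return False  # coalition would be better off acting alone
            return True

        if __name__ == "__main__":
            players = ["agriculture", "urban", "hydropower"]      # hypothetical users
            cost = {frozenset(["agriculture"]): 60, frozenset(["urban"]): 50,
                    frozenset(["hydropower"]): 40,
                    frozenset(["agriculture", "urban"]): 90,
                    frozenset(["agriculture", "hydropower"]): 80,
                    frozenset(["urban", "hydropower"]): 70,
                    frozenset(["agriculture", "urban", "hydropower"]): 100}
            allocation = {"agriculture": 45, "urban": 35, "hydropower": 20}
            print(in_core(players, cost, allocation))             # True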

  7. A machine learning approach for efficient uncertainty quantification using multiscale methods

    NASA Astrophysics Data System (ADS)

    Chan, Shing; Elsheikh, Ahmed H.

    2018-02-01

    Several multiscale methods account for sub-grid scale features using coarse scale basis functions. For example, in the Multiscale Finite Volume method the coarse scale basis functions are obtained by solving a set of local problems over dual-grid cells. We introduce a data-driven approach for the estimation of these coarse scale basis functions. Specifically, we employ a neural network predictor fitted using a set of solution samples from which it learns to generate subsequent basis functions at a lower computational cost than solving the local problems. The computational advantage of this approach is realized for uncertainty quantification tasks where a large number of realizations has to be evaluated. We attribute the ability to learn these basis functions to the modularity of the local problems and the redundancy of the permeability patches between samples. The proposed method is evaluated on elliptic problems yielding very promising results.
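
    A rough sketch of the surrogate idea under stated assumptions (synthetic patches and targets standing in for the real permeability fields and local-problem solutions, and scikit-learn's MLPRegressor as the neural network): fit the predictor on solved samples, then reuse it in place of further local solves.

        # Learn a map from a local permeability patch to coarse basis-function values.
        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(0)
        n_samples, patch_size, n_basis_dofs = 500, 25, 16

        # Stand-ins for the real training pairs: permeability patches (inputs) and
        # basis-function values from local solves (targets).
        X = rng.lognormal(size=(n_samples, patch_size))
        true_map = rng.normal(size=(patch_size, n_basis_dofs))
        Y = np.tanh(X @ true_map)          # synthetic surrogate target

        model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
        model.fit(X[:400], Y[:400])

        # Once trained, predicting a basis function is far cheaper than a local solve.
        print("held-out R^2:", model.score(X[400:], Y[400:]))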

  8. 'Batman excision' of ventral skin in hypospadias repair, clue to aesthetic repair (point of technique).

    PubMed

    Hoebeke, P B; De Kuyper, P; Van Laecke, E

    2002-11-01

    In the hypospadiac penis the ventral skin is poorly developed, while dorsal skin is redundant. The classical Byars' flaps are a way to use the excess dorsal skin to cover the penile shaft. The appearance after Byars' flaps however is not natural. We use a more natural looking skin allocation with superior aesthetic results. The clue in this reconstruction is an inverted triangle shaped excision of ventral skin expanding over the edges of the hooded prepuce (which makes it look like Batman). After excision of the ventral skin it is possible to close the penile skin in the midline, thus mimicking the natural raphe. In case of preputial reconstruction the excised ventral skin makes the prepuce look more natural. The trend of further refining aesthetic appearance of the hypospadiac penis often neglects the penile skin reconstruction. A technique is presented by which the total penile appearances after surgery ameliorates due to better skin reconstruction.

  9. Security scheme in IMDD-OFDM-PON system with the chaotic pilot interval and scrambling

    NASA Astrophysics Data System (ADS)

    Chen, Qianghua; Bi, Meihua; Fu, Xiaosong; Lu, Yang; Zeng, Ran; Yang, Guowei; Yang, Xuelin; Xiao, Shilin

    2018-01-01

    In this paper, a random chaotic pilot interval and permutations scheme without any requirement of redundant sideband information is firstly proposed for the physical layer security-enhanced intensity modulation direct detection orthogonal frequency division multiplexing passive optical network (IMDD-OFDM-PON) system. With the help of the position feature of inserting the pilot, a simple logistic chaos map is used to generate the random pilot interval and scramble the chaotic subcarrier allocation of each column pilot data for improving the physical layer confidentiality. Due to the dynamic chaotic permutations of pilot data, the enhanced key space of ∼10^3303 is achieved in OFDM-PON. Moreover, the transmission experiment of 10-Gb/s 16-QAM encrypted OFDM data is successfully demonstrated over 20-km single-mode fiber, which indicates that the proposed scheme not only improves the system security, but also can achieve the same performance as in the common IMDD-OFDM-PON system without encryption scheme.
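
    The sketch below is only a schematic of how a logistic map can drive both choices (all parameters are hypothetical, not the paper's): transmitter and receiver regenerate the same pilot interval and pilot-scrambling permutation from a shared initial condition and control parameter.

        # Derive a pilot interval and a scrambling permutation from a logistic map.
        def logistic_sequence(x0, r, n):
            xs, x = [], x0
            for _ in range(n):
                x = r * x * (1.0 - x)
                xs.append(x)
            return xs

        def chaotic_permutation(x0, r, n):
            # Rank the chaotic samples; the ranking order is the scrambling pattern.
            xs = logistic_sequence(x0, r, n)
            return sorted(range(n), key=lambda i: xs[i])

        if __name__ == "__main__":
            key = (0.3137, 3.99)                      # shared secret (x0, r), made up
            pilot_interval = 4 + int(logistic_sequence(*key, 1)[0] * 8)   # 4..11
            perm = chaotic_permutation(*key, 16)      # scramble 16 pilot subcarriers
            print(pilot_interval, perm)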

  10. Reliability and performance experience with flat-plate photovoltaic modules

    NASA Technical Reports Server (NTRS)

    Ross, R. G., Jr.

    1982-01-01

    Statistical models developed to define the most likely sources of photovoltaic (PV) array failures and the optimum method of allowing for the defects in order to achieve a 20 yr lifetime with acceptable performance degradation are summarized. Significant parameters were the cost of energy, annual power output, initial cost, replacement cost, rate of module replacement, the discount rate, and the plant lifetime. Acceptable degradation allocations were calculated to be 0.0001 cell failures/yr, 0.005 module failures/yr, 0.05 power loss/yr, a 0.01 rate of power loss/yr, and a 25 yr module wear-out length. Circuit redundancy techniques were determined to offset cell failures using fault tolerant designs such as series/parallel and bypass diode arrangements. Screening processes have been devised to eliminate cells that will crack in operation, and multiple electrical contacts at each cell compensate for the cells which escape the screening test and then crack when installed. The 20 yr array lifetime is expected to be achieved in the near-term.

  11. Entropy-Based Bounds On Redundancies Of Huffman Codes

    NASA Technical Reports Server (NTRS)

    Smyth, Padhraic J.

    1992-01-01

    Report presents extension of theory of redundancy of binary prefix code of Huffman type which includes derivation of variety of bounds expressed in terms of entropy of source and size of alphabet. Recent developments yielded bounds on redundancy of Huffman code in terms of probabilities of various components in source alphabet. In practice, redundancies of optimal prefix codes often closer to 0 than to 1.
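
    A small worked example of the quantity being bounded (a generic Huffman construction, not the report's derivation): the redundancy of a binary Huffman code is its expected codeword length minus the source entropy, and for typical sources it is indeed much closer to 0 than to 1 bit per symbol.

        # Build an optimal binary prefix code and compute its redundancy.
        import heapq, math

        def huffman_lengths(probs):
            """Return codeword lengths of an optimal binary prefix (Huffman) code."""
            heap = [(p, [i]) for i, p in enumerate(probs)]
            lengths = [0] * len(probs)
            heapq.heapify(heap)
            while len(heap) > 1:
                p1, s1 = heapq.heappop(heap)
                p2, s2 = heapq.heappop(heap)
                for i in s1 + s2:
                    lengths[i] += 1               # merged symbols gain one bit
                heapq.heappush(heap, (p1 + p2, s1 + s2))
            return lengths

        probs = [0.5, 0.25, 0.15, 0.1]
        lengths = huffman_lengths(probs)
        avg_len = sum(p * l for p, l in zip(probs, lengths))
        entropy = -sum(p * math.log2(p) for p in probs)
        print("redundancy (bits/symbol):", avg_len - entropy)   # ~0.007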

  12. Past and present biophysical redundancy of countries as a buffer to changes in food supply

    NASA Astrophysics Data System (ADS)

    Fader, Marianela; Rulli, Maria Cristina; Carr, Joel; Dell'Angelo, Jampel; D'Odorico, Paolo; Gephart, Jessica A.; Kummu, Matti; Magliocca, Nicholas; Porkka, Miina; Prell, Christina; Puma, Michael J.; Ratajczak, Zak; Seekell, David A.; Suweis, Samir; Tavoni, Alessandro

    2016-05-01

    Spatially diverse trends in population growth, climate change, industrialization, urbanization and economic development are expected to change future food supply and demand. These changes may affect the suitability of land for food production, implying elevated risks especially for resource-constrained, food-importing countries. We present the evolution of biophysical redundancy for agricultural production at country level, from 1992 to 2012. Biophysical redundancy, defined as unused biotic and abiotic environmental resources, is represented by the potential food production of ‘spare land’, available water resources (i.e., not already used for human activities), as well as production increases through yield gap closure on cultivated areas and potential agricultural areas. In 2012, the biophysical redundancy of 75 (48) countries, mainly in North Africa, Western Europe, the Middle East and Asia, was insufficient to produce the caloric nutritional needs for at least 50% (25%) of their population during a year. Biophysical redundancy has decreased in the last two decades in 102 out of 155 countries, 11 of these went from high to limited redundancy, and nine of these from limited to very low redundancy. Although the variability of the drivers of change across different countries is high, improvements in yield and population growth have a clear impact on the decreases of redundancy towards the very low redundancy category. We took a more detailed look at countries classified as ‘Low Income Economies (LIEs)’ since they are particularly vulnerable to domestic or external food supply changes, due to their limited capacity to offset for food supply decreases with higher purchasing power on the international market. Currently, nine LIEs have limited or very low biophysical redundancy. Many of these showed a decrease in redundancy over the last two decades, which is not always linked with improvements in per capita food availability.

  13. Quantifying electrical impacts on redundant wire insertion in 7nm unidirectional designs

    NASA Astrophysics Data System (ADS)

    Mohyeldin, Ahmed; Schroeder, Uwe Paul; Srinivasan, Ramya; Narisetty, Haritez; Malik, Shobhit; Madhavan, Sriram

    2017-04-01

    In nano-meter scale Integrated Circuits, via failures due to random defects are a well-known yield detractor, and via redundancy insertion is a common method to help enhance semiconductor yield. For the case of Self Aligned Double Patterning (SADP), which might require unidirectional design layers as in the case of some advanced technology nodes, the conventional methods of inserting redundant vias no longer work. This is because adding redundant vias conventionally requires adding metal shapes in the non-preferred direction, which would violate the SADP design constraints in that case. Therefore, such metal layers fabricated using unidirectional SADP require an alternative method for providing the needed redundancy. This paper proposes a post-layout Design for Manufacturability (DFM) redundancy insertion method tailored for the design requirements introduced by unidirectional metal layers. The proposed method adds redundant wires in the preferred direction, after searching for nearby vacant routing tracks, in order to provide redundant paths for electrical signals. This method opportunistically adds robustness against failures due to silicon defects without impacting area or incurring new design rule violations. Implementation details of this redundancy insertion method are explained in this paper. One known challenge with similar DFM layout fixing methods is the possible introduction of undesired electrical impact, causing other unintentional failures in design functionality. In this paper, a study is presented to quantify the electrical impacts of such a redundancy insertion scheme and to examine whether that electrical impact can be tolerated. The paper shows results to evaluate DFM insertion rates and the corresponding electrical impact for a given design utilization and maximum inserted wire length. Parasitic extraction and static timing analysis results are presented. A typical digital design implemented using GLOBALFOUNDRIES 7nm technology is used for demonstration. The provided results can help evaluate such an extensive DFM insertion method from an electrical standpoint. Furthermore, the results could provide guidance on how to implement the proposed method of adding electrical redundancy such that intolerable electrical impacts can be avoided.

  14. Draft Software Metrics Panels Final Report. Papers Presented at the 30 June 1980 Meeting on Software Metrics, Washington DC.

    DTIC Science & Technology

    1980-06-01

    measuring program understanding. Shneiderman, Mayer, McKay, and Heller [24] found that flowcharts are redundant and have a potential negative effect on...dictionaries of program variables are superior to macro flowcharts as an aid to understanding program control and data structures. Chrysler [5], using no...procedures as do beginners. Also, guaranteeing that groups of beginning programmers have equal ability is not trivial. The problem with material

  15. A Fully Redundant On-Line Mass Spectrometer System Used to Monitor Cryogenic Fuel Leaks on the Space Shuttle

    NASA Technical Reports Server (NTRS)

    Griffin, T. P.; Naylor, G. R.; Haskell, W. D.; Breznik, G. S.; Mizell, C. A.; Steinrock, Todd (Technical Monitor)

    2001-01-01

    This paper presents an on-line mass spectrometer designed to monitor for cryogenic leaks on the Space Shuttle. The topics include: 1) Hazardous Gas Detection Lab; 2) LASRE Test Support; 3) Background; 4) Location of Systems; 5) Sample Lines for Gas Detection; 6) Problems with Current Systems; 7) Requirements for New System (Nitrogen and Helium Background); and 8) HGDS 2000. This paper is in viewgraph form.

  16. Optimally Robust Redundancy Relations for Failure Detection in Uncertain Systems,

    DTIC Science & Technology

    1983-04-01

    particular applications. While the general methods provide the basis for what in principle should be a widely applicable failure detection methodology...modifications to this result which overcome them at no fundamental increase in complexity. 4.1 Scaling A critical problem with the criteria of the preceding...criterion which takes scaling into account (45). As in (38), we can multiply the C_i by positive scalars to take into account unequal weightings on

  17. A Rate-Based Congestion Control Algorithm for the SURAP 4 Packet Radio Architecture (SRNTN-72)

    DTIC Science & Technology

    1990-01-01

    factor, one packet, as at connection initialization. However, these TCP enhancements do not solve the fairness problem. The slow start algorithm ma...and signal interference (including jamming) and by the delays demanded by the link-layer protocols in the absence of contention for resources at the...values. This role would be redundant if bits-per-second rations used measurements of packet duration to determine how fast to decrease, at the expense of

  18. Conformational Modeling of Continuum Structures in Robotics and Structural Biology: A Review

    PubMed Central

    Chirikjian, G. S.

    2016-01-01

    Hyper-redundant (or snakelike) manipulators have many more degrees of freedom than are required to position and orient an object in space. They have been employed in a variety of applications ranging from search-and-rescue to minimally invasive surgical procedures, and recently they even have been proposed as solutions to problems in maintaining civil infrastructure and the repair of satellites. The kinematic and dynamic properties of snakelike robots are captured naturally using a continuum backbone curve equipped with a naturally evolving set of reference frames, stiffness properties, and mass density. When the snakelike robot has a continuum architecture, the backbone curve corresponds with the physical device itself. Interestingly, these same modeling ideas can be used to describe conformational shapes of DNA molecules and filamentous protein structures in solution and in cells. This paper reviews several classes of snakelike robots: (1) hyper-redundant manipulators guided by backbone curves; (2) flexible steerable needles; and (3) concentric tube continuum robots. It is then shown how the same mathematical modeling methods used in these robotics contexts can be used to model molecules such as DNA. All of these problems are treated in the context of a common mathematical framework based on the differential geometry of curves, continuum mechanics, and variational calculus. Both coordinate-dependent Euler-Lagrange formulations and coordinate-free Euler-Poincaré approaches are reviewed. PMID:27030786

  19. Conformational Modeling of Continuum Structures in Robotics and Structural Biology: A Review.

    PubMed

    Chirikjian, G S

    Hyper-redundant (or snakelike) manipulators have many more degrees of freedom than are required to position and orient an object in space. They have been employed in a variety of applications ranging from search-and-rescue to minimally invasive surgical procedures, and recently they even have been proposed as solutions to problems in maintaining civil infrastructure and the repair of satellites. The kinematic and dynamic properties of snakelike robots are captured naturally using a continuum backbone curve equipped with a naturally evolving set of reference frames, stiffness properties, and mass density. When the snakelike robot has a continuum architecture, the backbone curve corresponds with the physical device itself. Interestingly, these same modeling ideas can be used to describe conformational shapes of DNA molecules and filamentous protein structures in solution and in cells. This paper reviews several classes of snakelike robots: (1) hyper-redundant manipulators guided by backbone curves; (2) flexible steerable needles; and (3) concentric tube continuum robots. It is then shown how the same mathematical modeling methods used in these robotics contexts can be used to model molecules such as DNA. All of these problems are treated in the context of a common mathematical framework based on the differential geometry of curves, continuum mechanics, and variational calculus. Both coordinate-dependent Euler-Lagrange formulations and coordinate-free Euler-Poincaré approaches are reviewed.

  20. A Fault Tolerance Mechanism for On-Road Sensor Networks

    PubMed Central

    Feng, Lei; Guo, Shaoyong; Sun, Jialu; Yu, Peng; Li, Wenjing

    2016-01-01

    On-Road Sensor Networks (ORSNs) play an important role in capturing traffic flow data for predicting short-term traffic patterns, driving assistance and self-driving vehicles. However, this kind of network is prone to large-scale communication failure if a few sensors physically fail. In this paper, to ensure that the network works normally, an effective fault-tolerance mechanism for ORSNs, which mainly consists of backup on-road sensor deployment, redundant cluster head deployment and an adaptive failure detection and recovery method, is proposed. Firstly, based on the N − x principle and the sensors' failure rate, this paper formulates the backup sensor deployment problem as a two-objective optimization, which captures the trade-off between cost and fault resumption. To further improve network resilience, this paper introduces a redundant cluster head deployment model according to the coverage constraint. Then a common solving method combining integer-continuing and sequential quadratic programming is explored to determine the optimal locations in these two deployment problems. Moreover, an Adaptive Detection and Resume (ADR) protocol is designed to recover the system communication through route and cluster adjustment if there is a backup on-road sensor mismatch. The final experiments show that our proposed mechanism can achieve an average 90% recovery rate and reduce the average number of failed sensors by at most 35.7%. PMID:27918483

  1. Synthesis of compact patterns for NMR relaxation decay in intelligent "electronic tongue" for analyzing heavy oil composition

    NASA Astrophysics Data System (ADS)

    Lapshenkov, E. M.; Volkov, V. Y.; Kulagin, V. P.

    2018-05-01

    The article addresses the problem of creating patterns from the NMR sensor signal for subsequent recognition by an artificial neural network in the intelligent "electronic tongue" device. The specific problem considered is removing redundant data from the spin-spin relaxation signal pattern that is used as a source of information in analyzing the composition of oil and petroleum products. A method is proposed that makes it possible to remove redundant data from the relaxation decay pattern without introducing additional distortion. The method is based on combining adjacent relaxation decay curve intervals whose increments are below the noise level until the increment of the combined interval rises above the noise level. The relaxation decay curve samples located inside the combined intervals are then removed from the pattern. The method was tested on heavy-oil NMR signal patterns recorded with the Carr-Purcell-Meiboom-Gill (CPMG) sequence; the sequence parameters were a 100 μs interval between 180° pulses and a 0.4 s measurement duration. As a result, the proposed method reduced the number of samples by a factor of 15 (from 4000 to 270), with a maximum detected root mean square (RMS) error of 0.00239 (equivalent to a signal-to-noise ratio of 418).
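
    A schematic version of the described reduction (synthetic decay and noise level, not the measured heavy-oil data): walk along the decay and keep a new sample only once the change since the last kept sample exceeds the noise level, so intervals with sub-noise increments are merged.

        # Merge decay samples whose cumulative change stays below the noise level.
        import math

        def compress_decay(times, values, noise_level):
            kept_t, kept_v = [times[0]], [values[0]]
            for t, v in zip(times[1:], values[1:]):
                if abs(v - kept_v[-1]) >= noise_level:
                    kept_t.append(t)
                    kept_v.append(v)
            return kept_t, kept_v

        if __name__ == "__main__":
            # Synthetic CPMG-like decay: sample every 100 us over 0.4 s, T2 = 60 ms.
            dt, n, t2, noise = 100e-6, 4000, 0.060, 0.002
            times = [i * dt for i in range(n)]
            values = [math.exp(-t / t2) for t in times]
            ct, cv = compress_decay(times, values, noise)
            print(len(times), "->", len(ct), "samples")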

  2. Detecting clinically relevant new information in clinical notes across specialties and settings.

    PubMed

    Zhang, Rui; Pakhomov, Serguei V S; Arsoniadis, Elliot G; Lee, Janet T; Wang, Yan; Melton, Genevieve B

    2017-07-05

    Automated methods for identifying clinically relevant new versus redundant information in electronic health record (EHR) clinical notes are useful for clinicians and researchers involved in patient care and clinical research, respectively. We evaluated methods to automatically identify clinically relevant new information in clinical notes, and compared the quantity of redundant information across specialties and clinical settings. Statistical language models augmented with semantic similarity measures were evaluated as a means to detect and quantify clinically relevant new and redundant information over longitudinal clinical notes for a given patient. A corpus of 591 progress notes over 40 inpatient admissions was annotated for new information longitudinally by physicians to generate a reference standard. Note redundancy between various specialties was evaluated on 71,021 outpatient notes and 64,695 inpatient notes from 500 solid organ transplant patients (April 2015 through August 2015). Our best method achieved a best performance of 0.87 recall, 0.62 precision, and 0.72 F-measure. Addition of semantic similarity metrics compared to baseline improved recall but otherwise resulted in similar performance. While outpatient and inpatient notes had relatively similar levels of high redundancy (61% and 68%, respectively), redundancy differed by author specialty with mean redundancy of 75%, 66%, 57%, and 55% observed in pediatric, internal medicine, psychiatry and surgical notes, respectively. Automated techniques with statistical language models for detecting redundant versus clinically relevant new information in clinical notes do not improve with the addition of semantic similarity measures. While levels of redundancy seem relatively similar in the inpatient and ambulatory settings within Fairview Health Services, clinical note redundancy appears to vary significantly with different medical specialties.
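
    As a simplified illustration only (TF-IDF cosine similarity with made-up sentences and an arbitrary 0.5 threshold, not the statistical language models evaluated in the study), the sketch below flags sentences in a new note as redundant when they closely match content already present in earlier notes.

        # Flag redundant sentences in a new note against earlier notes.
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.metrics.pairwise import cosine_similarity

        prior_notes = [
            "Patient admitted with community acquired pneumonia, started on ceftriaxone.",
            "Pneumonia improving, continue ceftriaxone, monitor renal function.",
        ]
        new_note = [
            "Pneumonia improving, continue ceftriaxone.",                       # largely redundant
            "New onset atrial fibrillation overnight, started rate control.",   # new information
        ]

        vectorizer = TfidfVectorizer().fit(prior_notes + new_note)
        prior_vecs = vectorizer.transform(prior_notes)
        for sentence in new_note:
            sim = cosine_similarity(vectorizer.transform([sentence]), prior_vecs).max()
            label = "redundant" if sim > 0.5 else "new information"
            print(f"{sim:.2f}  {label}: {sentence}")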

  3. Water resources planning and management : A stochastic dual dynamic programming approach

    NASA Astrophysics Data System (ADS)

    Goor, Q.; Pinte, D.; Tilmant, A.

    2008-12-01

    Allocating water between different users and uses, including the environment, is one of the most challenging tasks facing water resources managers and has always been at the heart of Integrated Water Resources Management (IWRM). As water scarcity is expected to increase over time, allocation decisions among the different uses will have to be found taking into account the complex interactions between water and the economy. Hydro-economic optimization models can capture those interactions while prescribing efficient allocation policies. Many hydro-economic models found in the literature are formulated as large-scale nonlinear optimization problems (NLP), seeking to maximize net benefits from the system operation while meeting operational and/or institutional constraints, and describing the main hydrological processes. However, those models rarely incorporate the uncertainty inherent to the availability of water, essentially because of the computational difficulties associated with stochastic formulations. The purpose of this presentation is to present a stochastic programming model that can identify economically efficient allocation policies in large-scale multipurpose multireservoir systems. The model is based on stochastic dual dynamic programming (SDDP), an extension of traditional SDP that is not affected by the curse of dimensionality. SDDP identifies efficient allocation policies while considering the hydrologic uncertainty. The objective function includes the net benefits from the hydropower and irrigation sectors, as well as penalties for not meeting operational and/or institutional constraints. To be able to implement the efficient decomposition scheme that removes the computational burden, the one-stage SDDP problem has to be a linear program. Recent developments improve the representation of the non-linear and mildly non-convex hydropower function through a convex hull approximation of the true hydropower function. This model is illustrated on a cascade of 14 reservoirs in the Nile river basin.
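
    To illustrate why the one-stage problem must stay linear, the sketch below sets up a stylized single-stage allocation LP of the kind SDDP solves at each stage (all numbers are hypothetical), with the future-cost function represented by Benders-type cuts on end-of-stage storage.

        # Stylized one-stage allocation LP with future-cost cuts on storage.
        from scipy.optimize import linprog

        s0, inflow = 50.0, 30.0            # initial storage and stage inflow (hm3)
        p_hydro, p_irr = 2.0, 3.0          # marginal benefits per unit released/diverted
        r_max, demand, s_max = 40.0, 25.0, 120.0
        cuts = [(60.0, 0.8), (35.0, 0.3)]  # future-cost cuts: theta >= a - b * storage

        # Decision vector x = [release, irrigation, end storage, theta]
        c = [-p_hydro, -p_irr, 0.0, 1.0]                  # minimize -benefits + theta
        A_eq = [[1.0, 1.0, 1.0, 0.0]]                     # water balance
        b_eq = [s0 + inflow]
        A_ub = [[0.0, 0.0, -b, -1.0] for (a, b) in cuts]  # a - b*s <= theta
        b_ub = [-a for (a, b) in cuts]
        bounds = [(0, r_max), (0, demand), (0, s_max), (0, None)]

        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
        release, irrigation, storage, theta = res.x
        print(f"release={release:.1f} irrigation={irrigation:.1f} "
              f"carry-over={storage:.1f} future cost~{theta:.1f}")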

  4. Reliable Collection of Real-Time Patient Physiologic Data from less Reliable Networks: a "Monitor of Monitors" System (MoMs).

    PubMed

    Hu, Peter F; Yang, Shiming; Li, Hsiao-Chi; Stansbury, Lynn G; Yang, Fan; Hagegeorge, George; Miller, Catriona; Rock, Peter; Stein, Deborah M; Mackenzie, Colin F

    2017-01-01

    Research and practice based on automated electronic patient monitoring and data collection systems is significantly limited by system down time. We asked whether a triple-redundant Monitor of Monitors System (MoMs) to collect and summarize key information from system-wide data sources could achieve high fault tolerance, early diagnosis of system failure, and improved data collection rates. In our Level I trauma center, patient vital signs (VS) monitors were networked to collect real time patient physiologic data streams from 94 bed units in our various resuscitation, operating, and critical care units. To minimize the impact of server collection failure, three BedMaster® VS servers were used in parallel to collect data from all bed units. To locate and diagnose system failures, we summarized critical information from high throughput datastreams in real-time in a dashboard viewer, and compared the phases before and after MoMs deployment to evaluate data collection performance in terms of availability time, active collection rates, and gap duration, occurrence, and categories. Single-server collection rates in the 3-month period before MoMs deployment ranged from 27.8% to 40.5%, with a combined collection rate of 79.1%. Reasons for gaps included collection server failure, software instability, individual bed setting inconsistency, and monitor servicing. In the 6-month post-MoMs deployment period, average collection rates were 99.9%. A triple redundant patient data collection system with real-time diagnostic information summarization and representation improved the reliability of massive clinical data collection to nearly 100% in a Level I trauma center. Such a data collection framework may also increase the automation level of hospital-wide information aggregation for optimal allocation of health care resources.

  5. Development of an automatic subsea blowout preventer stack control system using PLC based SCADA.

    PubMed

    Cai, Baoping; Liu, Yonghong; Liu, Zengkai; Wang, Fei; Tian, Xiaojie; Zhang, Yanzhen

    2012-01-01

    An extremely reliable remote control system for subsea blowout preventer stack is developed based on the off-the-shelf triple modular redundancy system. To meet a high reliability requirement, various redundancy techniques such as controller redundancy, bus redundancy and network redundancy are used to design the system hardware architecture. The control logic, human-machine interface graphical design and redundant databases are developed by using the off-the-shelf software. A series of experiments were performed in laboratory to test the subsea blowout preventer stack control system. The results showed that the tested subsea blowout preventer functions could be executed successfully. For the faults of programmable logic controllers, discrete input groups and analog input groups, the control system could give correct alarms in the human-machine interface. Copyright © 2011 ISA. Published by Elsevier Ltd. All rights reserved.

  6. Redundant information encoding in primary motor cortex during natural and prosthetic motor control.

    PubMed

    So, Kelvin; Ganguly, Karunesh; Jimenez, Jessica; Gastpar, Michael C; Carmena, Jose M

    2012-06-01

    Redundant encoding of information facilitates reliable distributed information processing. To explore this hypothesis in the motor system, we applied concepts from information theory to quantify the redundancy of movement-related information encoded in the macaque primary motor cortex (M1) during natural and neuroprosthetic control. Two macaque monkeys were trained to perform a delay center-out reaching task controlling a computer cursor under natural arm movement (manual control, 'MC'), and using a brain-machine interface (BMI) via volitional control of neural ensemble activity (brain control, 'BC'). During MC, we found neurons in contralateral M1 to contain higher and more redundant information about target direction than ipsilateral M1 neurons, consistent with the laterality of movement control. During BC, we found that the M1 neurons directly incorporated into the BMI ('direct' neurons) contained the highest and most redundant target information compared to neurons that were not incorporated into the BMI ('indirect' neurons). This effect was even more significant when comparing to M1 neurons of the opposite hemisphere. Interestingly, when we retrained the BMI to use ipsilateral M1 activity, we found that these neurons were more redundant and contained higher information than contralateral M1 neurons, even though ensembles from this hemisphere were previously less redundant during natural arm movement. These results indicate that ensembles most associated to movement contain highest redundancy and information encoding, which suggests a role for redundancy in proficient natural and prosthetic motor control.
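
    A toy calculation of one common redundancy measure (synthetic responses, and a simplification relative to the paper's analysis): the redundancy of two units about the target is the sum of their individual mutual informations minus the information carried by their joint response.

        # Estimate redundancy R = I(N1;T) + I(N2;T) - I(N1,N2;T) from discrete counts.
        import numpy as np

        rng = np.random.default_rng(1)
        n_trials = 20000
        target = rng.integers(0, 8, n_trials)         # 8 reach directions
        signal = target // 4                          # coarse feature both units encode
        n1 = np.where(rng.random(n_trials) < 0.9, signal, 1 - signal)
        n2 = np.where(rng.random(n_trials) < 0.9, signal, 1 - signal)

        def mutual_information(x, y):
            joint = np.zeros((x.max() + 1, y.max() + 1))
            np.add.at(joint, (x, y), 1)
            joint /= joint.sum()
            px, py = joint.sum(1, keepdims=True), joint.sum(0, keepdims=True)
            nz = joint > 0
            return float((joint[nz] * np.log2(joint[nz] / (px @ py)[nz])).sum())

        pair = n1 * 2 + n2                            # joint response code
        redundancy = (mutual_information(n1, target) + mutual_information(n2, target)
                      - mutual_information(pair, target))
        print(f"redundancy: {redundancy:.3f} bits")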

  7. Preliminary design of a redundant strapped down inertial navigation unit using two-degree-of-freedom tuned-gimbal gyroscopes

    NASA Technical Reports Server (NTRS)

    1976-01-01

    This redundant strapdown INS preliminary design study demonstrates the practicality of a skewed sensor system configuration by means of: (1) devising a practical system mechanization utilizing proven strapdown instruments, (2) thoroughly analyzing the skewed sensor redundancy management concept to determine optimum geometry, data processing requirements, and realistic reliability estimates, and (3) implementing the redundant computers into a low-cost, maintainable configuration.

  8. The role of redundant information in cultural transmission and cultural stabilization.

    PubMed

    Acerbi, Alberto; Tennie, Claudio

    2016-02-01

    Redundant copying has been proposed as a manner to achieve the high-fidelity necessary to pass on and preserve complex traits in human cultural transmission. There are at least 2 ways to define redundant copying. One refers to the possibility of copying repeatedly the same trait over time, and another to the ability to exploit multiple layers of information pointing to the same trait during a single copying event. Using an individual-based model, we explore how redundant copying (defined as in the latter way) helps to achieve successful transmission. The authors show that increasing redundant copying increases the likelihood of accurately transmitting a behavior more than either augmenting the number of copying occasions across time or boosting the general accuracy of social learning. They also investigate how different cost functions, deriving, for example, from the need to invest more energy in cognitive processing, impact the evolution of redundant copying. The authors show that populations converge either to high-fitness/high-costs states (with high redundant copying and complex culturally transmitted behaviors; resembling human culture) or to low-fitness/low-costs states (with low redundant copying and simple transmitted behaviors; resembling social learning forms typical of nonhuman animals). This outcome may help to explain why cumulative culture is rare in the animal kingdom. (c) 2016 APA, all rights reserved).

  9. Market-oriented Programming Using Small-world Networks for Controlling Building Environments

    NASA Astrophysics Data System (ADS)

    Shigei, Noritaka; Miyajima, Hiromi; Osako, Tsukasa

    The market model, which is one of the economic activity models, is modeled as an agent system, and applying the model to the resource allocation problem has been studied. For air-conditioning control of buildings, which is one of the resource allocation problems, an effective method based on the agent system using auction has been proposed as an alternative to the traditional PID controller. This method has been regarded as a form of decentralized control; however, its decentralization is not perfect, and its performance is not sufficient. In this paper, we first propose a perfectly decentralized agent model and show its performance. Second, in order to improve the model, we propose an agent model based on the small-world model. The effectiveness of the proposed model is shown by simulation.

  10. Joint Transmit Antenna Selection and Power Allocation for ISDF Relaying Mobile-to-Mobile Sensor Networks

    PubMed Central

    Xu, Lingwei; Zhang, Hao; Gulliver, T. Aaron

    2016-01-01

    The outage probability (OP) performance of multiple-relay incremental-selective decode-and-forward (ISDF) relaying mobile-to-mobile (M2M) sensor networks with transmit antenna selection (TAS) over N-Nakagami fading channels is investigated. Exact closed-form OP expressions for both optimal and suboptimal TAS schemes are derived. The power allocation problem is formulated to determine the optimal division of transmit power between the broadcast and relay phases. The OP performance under different conditions is evaluated via numerical simulation to verify the analysis. These results show that the optimal TAS scheme has better OP performance than the suboptimal scheme. Further, the power allocation parameter has a significant influence on the OP performance. PMID:26907282

  11. Improving Learning Performance Through Rational Resource Allocation

    NASA Technical Reports Server (NTRS)

    Gratch, J.; Chien, S.; DeJong, G.

    1994-01-01

    This article shows how rational analysis can be used to minimize learning cost for a general class of statistical learning problems. We discuss the factors that influence learning cost and show that the problem of efficient learning can be cast as a resource optimization problem. Solutions found in this way can be significantly more efficient than the best solutions that do not account for these factors. We introduce a heuristic learning algorithm that approximately solves this optimization problem and document its performance improvements on synthetic and real-world problems.

  12. Resource allocation processes at multilateral organizations working in global health.

    PubMed

    Chi, Y-Ling; Bump, Jesse B

    2018-02-01

    International institutions provide well over US$10 billion in development assistance for health (DAH) annually and between 1990 and 2014, DAH disbursements totaled $458 billion but how do they decide who gets what, and for what purpose? In this article, we explore how allocation decisions were made by the nine convening agencies of the Equitable Access Initiative. We provide clear, plain language descriptions of the complete process from resource mobilization to allocation for the nine multilateral agencies with prominent agendas in global health. Then, through a comparative analysis we illuminate the choices and strategies employed in the nine international institutions. We find that resource allocation in all reviewed institutions follow a similar pattern, which we categorized in a framework of five steps: strategy definition, resource mobilization, eligibility of countries, support type and funds allocation. All the reviewed institutions generate resource allocation decisions through well-structured and fairly complex processes. Variations in those processes seem to reflect differences in institutional principles and goals. However, these processes have serious shortcomings. Technical problems include inadequate flexibility to account for or meet country needs. Although aid effectiveness and value for money are commonly referenced, we find that neither performance nor impact is a major criterion for allocating resources. We found very little formal consideration of the incentives generated by allocation choices. Political issues include non-transparent influence on allocation processes by donors and bureaucrats, and the common practice of earmarking funds to bypass the normal allocation process entirely. Ethical deficiencies include low accountability and transparency at international institutions, and limited participation by affected citizens or their representatives. We find that recipient countries have low influence on allocation processes themselves, although within these processes they have some influence in relatively narrow areas. © The Author(s) 2018. Published by Oxford University Press in association with The London School of Hygiene and Tropical Medicine.

  13. The Hong Kong/AAO/Strasbourg Hα (HASH) Planetary Nebula Database

    NASA Astrophysics Data System (ADS)

    Bojičić, Ivan S.; Parker, Quentin A.; Frew, David J.

    2017-10-01

    The Hong Kong/AAO/Strasbourg Hα (HASH) planetary nebula database is an online research platform providing free and easy access to the largest and most comprehensive catalogue of known Galactic PNe and a repository of observational data (imaging and spectroscopy) for these and related astronomical objects. The main motivation for creating this system is to resolve some long-standing problems in the field, e.g., problems with mimics and dubious and/or misidentifications, errors in observational data, and the consolidation of widely scattered data-sets. The facility gives researchers quick and easy access to archived and new observational data and allows the creation and sharing of non-redundant PN samples and catalogues.

  14. Multimedia transmission in MC-CDMA using adaptive subcarrier power allocation and CFO compensation

    NASA Astrophysics Data System (ADS)

    Chitra, S.; Kumaratharan, N.

    2018-02-01

    Multicarrier code division multiple access (MC-CDMA) system is one of the most effective techniques in fourth-generation (4G) wireless technology, due to its high data rate, high spectral efficiency and resistance to multipath fading. However, MC-CDMA systems are greatly deteriorated by carrier frequency offset (CFO) which is due to Doppler shift and oscillator instabilities. It leads to loss of orthogonality among the subcarriers and causes intercarrier interference (ICI). Water filling algorithm (WFA) is an efficient resource allocation algorithm to solve the power utilisation problems among the subcarriers in time-dispersive channels. The conventional WFA fails to consider the effect of CFO. To perform subcarrier power allocation with reduced CFO and to improve the capacity of MC-CDMA system, residual CFO compensated adaptive subcarrier power allocation algorithm is proposed in this paper. The proposed technique allocates power only to subcarriers with high channel to noise power ratio. The performance of the proposed method is evaluated using random binary data and image as source inputs. Simulation results depict that the bit error rate performance and ICI reduction capability of the proposed modified WFA offered superior performance in both power allocation and image compression for high-quality multimedia transmission in the presence of CFO and imperfect channel state information conditions.
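
    For reference, a generic water-filling sketch (textbook form with made-up channel gains, not the residual-CFO-compensated algorithm proposed in the paper): power is poured over subcarriers according to their channel-to-noise ratios, and the poorest subcarriers receive none.

        # Textbook water-filling over per-subcarrier channel-to-noise ratios.
        import numpy as np

        def water_filling(cnr, total_power, iters=60):
            """Return powers p_i = max(0, mu - 1/cnr_i) approximately summing to the budget."""
            lo, hi = 0.0, total_power + 1.0 / cnr.min()
            for _ in range(iters):                   # bisection on the water level mu
                mu = 0.5 * (lo + hi)
                p = np.maximum(0.0, mu - 1.0 / cnr)
                lo, hi = (mu, hi) if p.sum() < total_power else (lo, mu)
            return np.maximum(0.0, 0.5 * (lo + hi) - 1.0 / cnr)

        rng = np.random.default_rng(7)
        cnr = rng.exponential(1.0, size=8)           # per-subcarrier channel-to-noise ratios
        p = water_filling(cnr, total_power=4.0)
        print(np.round(p, 3), "unused subcarriers:", int((p == 0).sum()))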

  15. Simultaneous Liver-Kidney Allocation Policy: A Proposal to Optimize Appropriate Utilization of Scarce Resources.

    PubMed

    Formica, R N; Aeder, M; Boyle, G; Kucheryavaya, A; Stewart, D; Hirose, R; Mulligan, D

    2016-03-01

    The introduction of the Mayo End-Stage Liver Disease score into the Organ Procurement and Transplantation Network (OPTN) deceased donor liver allocation policy in 2002 has led to a significant increase in the number of simultaneous liver-kidney transplants in the United States. Despite multiple attempts, clinical science has not been able to reliably predict which liver candidates with renal insufficiency will recover renal function or need a concurrent kidney transplant. The problem facing the transplant community is that currently there are almost no medical criteria for candidacy for simultaneous liver-kidney allocation in the United States, and this lack of standardized rules and medical eligibility criteria for kidney allocation with a liver is counter to OPTN's Final Rule. Moreover, almost 50% of simultaneous liver-kidney organs come from a donor with a kidney donor profile index of ≤0.35. The kidneys from these donors could otherwise be allocated to pediatric recipients, young adults or prior organ donors. This paper presents the new OPTN and United Network for Organ Sharing simultaneous liver-kidney allocation policy, provides the supporting evidence and explains the rationale on which the policy was based. © Copyright 2015 The American Society of Transplantation and the American Society of Transplant Surgeons.

  16. Capacity planning for waste management systems: an interval fuzzy robust dynamic programming approach.

    PubMed

    Nie, Xianghui; Huang, Guo H; Li, Yongping

    2009-11-01

    This study integrates the concepts of interval numbers and fuzzy sets into dynamic programming-based optimization analysis as a means of accounting for system uncertainty. The developed interval fuzzy robust dynamic programming (IFRDP) model improves upon previous interval dynamic programming methods: it allows highly uncertain information to be communicated effectively into the optimization process by introducing the concept of a fuzzy boundary interval and by providing an interval-parameter fuzzy robust programming method for the embedded linear programming problem, thereby enhancing the robustness of both the optimization process and its solutions. The modeling approach is applied to a hypothetical problem for the planning of waste-flow allocation and treatment/disposal facility expansion within a municipal solid waste (MSW) management system. Interval solutions for capacity expansion of waste management facilities and the associated waste-flow allocation are generated and interpreted to provide useful decision alternatives. The results indicate that robust and useful solutions can be obtained, and that the proposed IFRDP approach is applicable to practical problems involving highly complex and uncertain information.
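
    The embedded interval-parameter linear program is the part of the formulation that is easiest to make concrete. The minimal Python sketch below shows only the common bounding idea of solving an optimistic and a pessimistic crisp submodel when cost coefficients and right-hand sides are known as intervals; it is not the authors' IFRDP model, and the two-facility waste-flow numbers, the helper name interval_lp_bounds and the use of scipy.optimize.linprog are illustrative assumptions.

      import numpy as np
      from scipy.optimize import linprog

      def interval_lp_bounds(c_lo, c_hi, A, b_lo, b_hi):
          """Bracket the optimum of  min c.x  s.t.  A x <= b, x >= 0
          when c and b are only known as intervals [c_lo, c_hi] and [b_lo, b_hi]."""
          # Optimistic submodel: lowest costs, loosest right-hand sides -> lower bound.
          opt = linprog(c_lo, A_ub=A, b_ub=b_hi, method="highs")
          # Pessimistic submodel: highest costs, tightest right-hand sides -> upper bound.
          pess = linprog(c_hi, A_ub=A, b_ub=b_lo, method="highs")
          return opt.fun, pess.fun

      # Toy waste-flow example (hypothetical numbers): two treatment facilities with
      # interval capacities and an interval waste demand that must be covered.
      A = np.array([[ 1.0,  0.0],    # flow to facility 1 <= its capacity
                    [ 0.0,  1.0],    # flow to facility 2 <= its capacity
                    [-1.0, -1.0]])   # total flow >= demand, written as <=
      f_lo, f_hi = interval_lp_bounds(
          c_lo=[3.0, 1.5], c_hi=[3.5, 2.0],   # interval unit treatment costs
          A=A,
          b_lo=[40.0, 60.0, -90.0],           # tight: small capacities, high demand
          b_hi=[50.0, 70.0, -80.0])           # loose: large capacities, low demand
      print("total cost lies in [%.1f, %.1f]" % (f_lo, f_hi))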

  17. Factor structure and psychometric properties of the Fertility Problem Inventory–Short Form

    PubMed Central

    Zurlo, Maria Clelia; Cattaneo Della Volta, Maria Franscesca; Vallone, Federica

    2017-01-01

    The study analyses the factor structure and psychometric properties of the Italian version of the Fertility Problem Inventory–Short Form. A sample of 206 infertile couples completed the Italian version of the Fertility Problem Inventory (46 items) together with demographics, the State Anxiety Scale of the State-Trait Anxiety Inventory (Form Y), the Edinburgh Depression Scale and the Dyadic Adjustment Scale, used to assess convergent and discriminant validity. Confirmatory factor analysis was unsatisfactory (comparative fit index = 0.87; Tucker-Lewis index = 0.83; root mean square error of approximation = 0.17), and Cronbach's α (0.95) suggested item redundancy. An exploratory factor analysis was carried out after deleting cross-loading items, and Mokken scale analysis was applied to verify item homogeneity within the reduced subscales of the questionnaire. The Fertility Problem Inventory–Short Form consists of 27 items tapping four meaningful and reliable factors. Convergent and discriminant validity were confirmed. Findings indicate that the Fertility Problem Inventory–Short Form is a valid and reliable measure for assessing infertility-related stress dimensions. PMID:29379625
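
    For reference, the statistic behind the item-redundancy argument is Cronbach's α = k/(k−1) · (1 − Σσ²_item / σ²_scale) for k items; values very close to 1 on a long scale often indicate overlapping items. The short Python sketch below computes it from a respondents-by-items score matrix; the example responses are invented for illustration and are unrelated to the study's data.

      import numpy as np

      def cronbach_alpha(scores):
          """Cronbach's alpha for an (n_respondents x n_items) score matrix."""
          scores = np.asarray(scores, dtype=float)
          k = scores.shape[1]
          item_vars = scores.var(axis=0, ddof=1)        # variance of each item
          total_var = scores.sum(axis=1).var(ddof=1)    # variance of the summed scale
          return k / (k - 1) * (1 - item_vars.sum() / total_var)

      # Hypothetical 5-point Likert responses: 6 participants x 4 items
      responses = [[4, 5, 4, 5],
                   [2, 2, 3, 2],
                   [5, 4, 5, 5],
                   [3, 3, 2, 3],
                   [1, 2, 1, 2],
                   [4, 4, 5, 4]]
      print(round(cronbach_alpha(responses), 3))  # values close to 1 may indicate redundant items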

  18. A Comparison of Emotions Elicited in Fair and Unfair Situations between Children with and without Behaviour Problems

    ERIC Educational Resources Information Center

    Averill-Roper, Gillian; Ricklidge, Julia J.

    2006-01-01

    This study compared emotions, assessed during fair and unfair situations, between children (aged 8 to 11) with and without behaviour problems, controlling for SES, depression, anxiety, IQ and educational achievement in order to study the relationship between emotional responses and subclinical antisocial behaviours. Group allocation was determined…

  19. A note on the modelling of circular smallholder migration.

    PubMed

    Bigsten, A

    1988-01-01

    "It is argued that circular migration [in Africa] should be seen as an optimization problem, where the household allocates its labour resources across activities, including work which requires migration, so as to maximize the joint family utility function. The migration problem is illustrated in a simple diagram, which makes it possible to analyse economic aspects of migration." excerpt

  20. Enhanced Specification and Verification for Timed Planning

    DTIC Science & Technology

    2009-02-28

    Scheduling Problem. The job-shop scheduling problem (JSSP) is a generic resource allocation problem in which common resources (“machines”) are required... interleaving of all processes P_i with the non-delay and mutual-exclusion constraints: JSSP ≙ |||_{0 < i ≤ n} P_i, where mutual-exclusion(JSSP)... For every complete... execution of JSSP (which terminates), its associated schedule S is a feasible schedule. An optimal schedule is a trace of JSSP with the minimum ending...
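
    To make the JSSP formulation in this record concrete, the Python sketch below builds a feasible (not necessarily optimal) schedule with a simple greedy dispatcher that enforces the two constraints named above: a job's operations run in order, and each machine runs one operation at a time (mutual exclusion). The two-job instance and the helper greedy_schedule are illustrative assumptions, not taken from the report.

      # A job is an ordered list of (machine, duration) operations; each machine can
      # run one operation at a time, and a job's operations run in their given order.
      Job = list[tuple[str, int]]

      def greedy_schedule(jobs: dict[str, Job]):
          """Build a feasible (not necessarily optimal) job-shop schedule by always
          dispatching the ready operation that can start earliest."""
          job_ready = {j: 0 for j in jobs}     # time at which each job's next op may start
          next_op = {j: 0 for j in jobs}       # index of the next operation per job
          machine_free = {}                    # time at which each machine becomes free
          schedule = []                        # (start, end, job, machine) records

          while any(next_op[j] < len(ops) for j, ops in jobs.items()):
              best = None
              for j, ops in jobs.items():
                  if next_op[j] >= len(ops):
                      continue                 # job already finished
                  machine, dur = ops[next_op[j]]
                  start = max(job_ready[j], machine_free.get(machine, 0))
                  if best is None or start < best[0]:
                      best = (start, j, machine, dur)
              start, j, machine, dur = best
              end = start + dur
              schedule.append((start, end, j, machine))
              job_ready[j] = end               # precedence within the job
              machine_free[machine] = end      # mutual exclusion on the machine
              next_op[j] += 1

          makespan = max(end for _, end, _, _ in schedule)
          return schedule, makespan

      # Tiny instance: two jobs sharing machines M1 and M2.
      jobs = {"J1": [("M1", 3), ("M2", 2)],
              "J2": [("M2", 2), ("M1", 4)]}
      sched, makespan = greedy_schedule(jobs)
      print(sched, "makespan:", makespan)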
