NASA Technical Reports Server (NTRS)
Tuey, R. C.
1972-01-01
Computer solutions of linear programming problems are outlined. Information covers vector spaces, convex sets, and matrix algebra elements for solving simultaneous linear equations. Dual problems, reduced cost analysis, ranges, and error analysis are illustrated.
NASA Technical Reports Server (NTRS)
Lawson, C. L.; Krogh, F. T.; Gold, S. S.; Kincaid, D. R.; Sullivan, J.; Williams, E.; Hanson, R. J.; Haskell, K.; Dongarra, J.; Moler, C. B.
1982-01-01
The Basic Linear Algebra Subprograms (BLAS) library is a collection of 38 FORTRAN-callable routines for performing basic operations of numerical linear algebra. The BLAS library is a portable and efficient source of basic operations for designers of programs involving linear algebraic computations. The BLAS library is supplied in portable FORTRAN and in Assembler code versions for IBM 370, UNIVAC 1100, and CDC 6000 series computers.
Sparse linear programming subprogram
Hanson, R.J.; Hiebert, K.L.
1981-12-01
This report describes a subprogram, SPLP(), for solving linear programming problems. The package of subprogram units comprising SPLP() is written in Fortran 77. The subprogram SPLP() is intended for problems involving at most a few thousand constraints and variables. The subprograms are written to take advantage of sparsity in the constraint matrix. A very general problem statement is accepted by SPLP(). It allows upper, lower, or no bounds on the variables. Both the primal and dual solutions are returned as output parameters. The package has many optional features. Among them is the ability to save partial results and then use them to continue the computation at a later time.
NASA Technical Reports Server (NTRS)
Klumpp, A. R.; Lawson, C. L.
1988-01-01
Routines provided for common scalar, vector, matrix, and quaternion operations. Computer program extends Ada programming language to include linear-algebra capabilities similar to those of HAL/S programming language. Designed for such avionics applications as software for Space Station.
NASA Technical Reports Server (NTRS)
Ferencz, Donald C.; Viterna, Larry A.
1991-01-01
ALPS is a computer program which can be used to solve general linear program (optimization) problems. ALPS was designed for those who have minimal linear programming (LP) knowledge, and features a menu-driven scheme to guide the user through the process of creating and solving LP formulations. Once created, the problems can be edited and stored in standard DOS ASCII files to provide portability to various word processors or even other linear programming packages. Unlike many math-oriented LP solvers, ALPS contains an LP parser that reads through the LP formulation and reports several types of errors to the user. ALPS provides a large amount of solution data which is often useful in problem solving. In addition to pure linear programs, ALPS can solve integer, mixed-integer, and binary problems. Pure linear programs are solved with the revised simplex method. Integer or mixed-integer programs are solved initially with the revised simplex method and then completed using the branch-and-bound technique. Binary programs are solved with the method of implicit enumeration. This manual describes how to use ALPS to create, edit, and solve linear programming problems. Instructions for installing ALPS on a PC-compatible computer are included in the appendices, along with a general introduction to linear programming. A programmer's guide is also included for assistance in modifying and maintaining the program.
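The two-stage approach the abstract describes for integer problems (solve the LP relaxation with simplex, then enforce integrality by branch-and-bound) can be sketched generically. This is not ALPS code: it is a minimal illustration that uses `scipy.optimize.linprog` as the LP-relaxation solver and a made-up two-variable example.

```python
import math
import numpy as np
from scipy.optimize import linprog

def branch_and_bound(c, A_ub, b_ub, bounds, tol=1e-6):
    """Minimize c@x s.t. A_ub@x <= b_ub with all variables integer:
    solve LP relaxations and branch on the first fractional variable."""
    best_x, best_val = None, math.inf
    stack = [list(bounds)]
    while stack:
        bnds = stack.pop()
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bnds)
        if not res.success or res.fun >= best_val:
            continue                          # infeasible or dominated node
        frac = [i for i, v in enumerate(res.x)
                if abs(v - round(v)) > tol]
        if not frac:                          # integral: new incumbent
            best_x, best_val = np.round(res.x), res.fun
            continue
        i, v = frac[0], res.x[frac[0]]
        lo, hi = bnds[i]                      # branch: x_i <= floor(v)
        left, right = list(bnds), list(bnds)  # or x_i >= ceil(v)
        left[i] = (lo, math.floor(v))
        right[i] = (math.ceil(v), hi)
        stack += [left, right]
    return best_x, best_val

# maximize 5x + 4y  s.t.  6x + 4y <= 24, x + 2y <= 6, x, y >= 0 integer
x, val = branch_and_bound(np.array([-5.0, -4.0]),
                          np.array([[6.0, 4.0], [1.0, 2.0]]),
                          np.array([24.0, 6.0]),
                          [(0, None), (0, None)])
```

The relaxation optimum here is fractional (x = 3, y = 1.5), so the search branches until it certifies the integer optimum x = 4, y = 0.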
Linear Programming across the Curriculum
ERIC Educational Resources Information Center
Yoder, S. Elizabeth; Kurz, M. Elizabeth
2015-01-01
Linear programming (LP) is taught in different departments across college campuses with engineering and management curricula. Modeling an LP problem is taught in every linear programming class. As faculty teaching in Engineering and Management departments, the depth to which teachers should expect students to master this particular type of…
Linear Programming Applied to a Simple Circuit.
ERIC Educational Resources Information Center
Boyd, J. N.; Raychowdhury, P. N.
1980-01-01
Discusses what is meant by a linear program and states and illustrates two of the theorems upon which the methods of linear programming rest. This description is intended as an introduction to linear programming for physics students. (HM)
ALPS - A LINEAR PROGRAM SOLVER
NASA Technical Reports Server (NTRS)
Viterna, L. A.
1994-01-01
Linear programming is a widely-used engineering and management tool. Scheduling, resource allocation, and production planning are all well-known applications of linear programs (LP's). Most LP's are too large to be solved by hand, so over the decades many computer codes for solving LP's have been developed. ALPS, A Linear Program Solver, is a full-featured LP analysis program. ALPS can solve plain linear programs as well as more complicated mixed integer and pure integer programs. ALPS also contains an efficient solution technique for pure binary (0-1 integer) programs. One of the many weaknesses of LP solvers is the lack of interaction with the user. ALPS is a menu-driven program with no special commands or keywords to learn. In addition, ALPS contains a full-screen editor to enter and maintain the LP formulation. These formulations can be written to and read from plain ASCII files for portability. For those less experienced in LP formulation, ALPS contains a problem "parser" which checks the formulation for errors. ALPS creates fully formatted, readable reports that can be sent to a printer or output file. ALPS is written entirely in IBM's APL2/PC product, Version 1.01. The APL2 workspace containing all the ALPS code can be run on any APL2/PC system (AT or 386). On a 32-bit system, this configuration can take advantage of all extended memory. The user can also examine and modify the ALPS code. The APL2 workspace has also been "packed" to be run on any DOS system (without APL2) as a stand-alone "EXE" file, but has limited memory capacity on a 640K system. A numeric coprocessor (80X87) is optional but recommended. The standard distribution medium for ALPS is a 5.25 inch 360K MS-DOS format diskette. IBM, IBM PC and IBM APL2 are registered trademarks of International Business Machines Corporation. MS-DOS is a registered trademark of Microsoft Corporation.
On the linear programming bound for linear Lee codes.
Astola, Helena; Tabus, Ioan
2016-01-01
Based on an invariance-type property of the Lee-compositions of a linear Lee code, additional equality constraints can be introduced to the linear programming problem of linear Lee codes. In this paper, we formulate this property in terms of an action of the multiplicative group of the field [Formula: see text] on the set of Lee-compositions. We show some useful properties of certain sums of Lee-numbers, which are the eigenvalues of the Lee association scheme, appearing in the linear programming problem of linear Lee codes. Using the additional equality constraints, we formulate the linear programming problem of linear Lee codes in a very compact form, leading to fast execution and allowing the bounds to be computed efficiently for large parameter values of the linear codes.
Computer Program For Linear Algebra
NASA Technical Reports Server (NTRS)
Krogh, F. T.; Hanson, R. J.
1987-01-01
Collection of routines provided for basic vector operations. Basic Linear Algebra Subprogram (BLAS) library is collection of FORTRAN-callable routines employing standard techniques to perform basic operations of numerical linear algebra.
Investigating Integer Restrictions in Linear Programming
ERIC Educational Resources Information Center
Edwards, Thomas G.; Chelst, Kenneth R.; Principato, Angela M.; Wilhelm, Thad L.
2015-01-01
Linear programming (LP) is an application of graphing linear systems that appears in many Algebra 2 textbooks. Although not explicitly mentioned in the Common Core State Standards for Mathematics, linear programming blends seamlessly into modeling with mathematics, the fourth Standard for Mathematical Practice (CCSSI 2010, p. 7). In solving a…
Breadboard linear array scan imager program
NASA Technical Reports Server (NTRS)
1975-01-01
The performance of large-scale-integration photodiode arrays was evaluated in a linear array scan imaging system breadboard for application to multispectral remote sensing of the earth's resources. Objectives, approach, implementation, and test results of the program are presented.
Portfolio optimization using fuzzy linear programming
NASA Astrophysics Data System (ADS)
Pandit, Purnima K.
2013-09-01
Portfolio Optimization (PO) is a problem in finance in which an investor tries to maximize return and minimize risk by carefully choosing different assets. Expected return and risk are the most important parameters with regard to optimal portfolios. In its simple form, PO can be modeled as a quadratic programming problem, which can be put into an equivalent linear form. PO problems with fuzzy parameters can be solved as multi-objective fuzzy linear programming problems. In this paper we give the solution to such problems with an illustrative example.
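One standard way to put the portfolio problem into linear form, as the abstract alludes to, is the Konno-Yamazaki mean-absolute-deviation (MAD) model: variance is replaced by the mean absolute deviation of portfolio return, which is linearized with one auxiliary variable per scenario. The sketch below uses `scipy.optimize.linprog` and invented scenario returns; it is not the fuzzy formulation of the paper.

```python
import numpy as np
from scipy.optimize import linprog

def mad_portfolio(R, target):
    """Konno-Yamazaki MAD model: minimize the mean absolute deviation
    of portfolio return subject to an expected-return floor and full
    investment. R is a T x n matrix of scenario returns. linprog's
    default bounds give w >= 0 (no short sales) and deviations d >= 0."""
    T, n = R.shape
    mu = R.mean(axis=0)
    D = R - mu                                  # centered scenarios
    # variables: weights w (n), deviations d (T); objective mean(d)
    c = np.concatenate([np.zeros(n), np.ones(T) / T])
    # d_t >= |D_t @ w| encoded as two blocks of <= 0 rows,
    # plus the return floor -mu @ w <= -target
    A_ub = np.vstack([np.hstack([D, -np.eye(T)]),
                      np.hstack([-D, -np.eye(T)]),
                      np.hstack([-mu, np.zeros(T)])[None, :]])
    b_ub = np.concatenate([np.zeros(2 * T), [-target]])
    A_eq = np.hstack([np.ones(n), np.zeros(T)])[None, :]   # sum(w) = 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0])
    return res.x[:n], res.fun

R = np.array([[ 0.020, 0.000],      # made-up scenario returns
              [-0.010, 0.020],
              [ 0.030, 0.010]])
w, mad = mad_portfolio(R, target=0.01)
```

The returned weights are fully invested, long-only, and meet the return target while minimizing the linearized risk measure.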
Generalised Assignment Matrix Methodology in Linear Programming
ERIC Educational Resources Information Center
Jerome, Lawrence
2012-01-01
Discrete Mathematics instructors and students have long been struggling with various labelling and scanning algorithms for solving many important problems. This paper shows how to solve a wide variety of Discrete Mathematics and OR problems using assignment matrices and linear programming, specifically using Excel Solvers although the same…
MLREG, stepwise multiple linear regression program
Carder, J.H.
1981-09-01
This program is written in FORTRAN for an IBM computer and performs multiple linear regressions according to a stepwise procedure. The program transforms and combines old variables into new variables, and prints input and transformed data, sums, raw sums of squares, residual sum of squares, means and standard deviations, correlation coefficients, regression results at each step, ANOVA at each step, and predicted response results at each step. This package contains an EXEC used to execute the program, sample input data and output listing, source listing, documentation, and card decks containing the EXEC, sample input, and FORTRAN source.
Linear programming computational experience with onyx
Atrek, E.
1994-12-31
ONYX is a linear programming software package based on an efficient variation of the gradient projection method. When fully configured, it is intended for application to industrial size problems. While the computational experience is limited at the time of this abstract, the technique is found to be robust and competitive with existing methodology in terms of both accuracy and speed. An overview of the approach is presented together with a description of program capabilities, followed by a discussion of up-to-date computational experience with the program. Conclusions include advantages of the approach and envisioned future developments.
A program for identification of linear systems
NASA Technical Reports Server (NTRS)
Buell, J.; Kalaba, R.; Ruspini, E.; Yakush, A.
1971-01-01
A program has been written for the identification of parameters in certain linear systems. These systems appear in biomedical problems, particularly in compartmental models of pharmacokinetics. The method presented here assumes that some of the state variables are regularly modified by jump conditions. This simulates administration of drugs following some prescribed drug regime. Parameters are identified by a least-square fit of the linear differential system to a set of experimental observations. The method is especially suited when the interval of observation of the system is very long.
Lincoln Near-Earth Asteroid Program (LINEAR)
NASA Astrophysics Data System (ADS)
Stokes, Grant H.; Evans, Jenifer B.; Viggh, Herbert E. M.; Shelly, Frank C.; Pearce, Eric C.
2000-11-01
The Lincoln Near-Earth Asteroid Research (LINEAR) program has applied electro-optical technology developed for Air Force Space Surveillance applications to the problem of discovering near-Earth asteroids (NEAs) and comets. This application is natural due to the commonality between the surveillance of the sky for man-made satellites and the search for near-Earth objects (NEOs). Both require the efficient search of broad swaths of sky to detect faint, moving objects. Currently, the Air Force Ground-based Electro-Optic Deep Space Surveillance (GEODSS) systems, which operate as part of the worldwide U.S. space surveillance network, are being upgraded to state-of-the-art charge-coupled device (CCD) detectors. These detectors are based on recent advances made by MIT Lincoln Laboratory in the fabrication of large format, highly sensitive CCDs. In addition, state-of-the-art data processing algorithms have been developed to employ the new detectors for search operations. In order to address stressing space surveillance requirements, the Lincoln CCDs have a unique combination of features, including large format, high quantum efficiency, frame transfer, high readout rate, and low noise, not found on any commercially available CCD. Systems development for the GEODSS upgrades has been accomplished at the Lincoln Laboratory Experimental Test Site (ETS) located near Socorro, New Mexico, over the past several years. Starting in 1996, the Air Force funded a small effort to demonstrate the effectiveness of the CCD and broad area search technology when applied to the problem of finding asteroids and comets. This program evolved into the current LINEAR program, which is jointly funded by the Air Force Office of Scientific Research and NASA. LINEAR, which started full operations in March of 1998, has discovered through September of 1999, 257 NEAs (of 797 known to date), 11 unusual objects (of 44 known), and 32 comets. Currently, LINEAR is contributing ∼70% of the worldwide NEA
Quantum Algorithm for Linear Programming Problems
NASA Astrophysics Data System (ADS)
Joag, Pramod; Mehendale, Dhananjay
The quantum algorithm (PRL 103, 150502, 2009) solves a system of linear equations with exponential speedup over existing classical algorithms. We show that the above algorithm can be readily adopted in iterative algorithms for solving linear programming (LP) problems. The first iterative algorithm that we suggest for LP problems follows from duality theory. It consists of finding a nonnegative solution of the equations for the duality condition, for the constraints imposed by the given primal problem, and for the constraints imposed by its corresponding dual problem. This is called the problem of nonnegative least squares, or simply the NNLS problem. We use a well-known method for solving the NNLS problem due to Lawson and Hanson. This algorithm essentially consists of solving, in each iterative step, a new system of linear equations. The other iterative algorithms that can be used are those based on interior point methods. The same technique can be adopted for solving network flow problems, as these can be readily formulated as LP problems. The suggested quantum algorithm can solve LP problems and network flow problems of very large size, involving millions of variables.
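The NNLS reformulation described above (stack primal feasibility, dual feasibility, and the strong-duality condition into one system and ask for a nonnegative solution) can be illustrated classically. The sketch below uses `scipy.optimize.nnls`, which implements the Lawson-Hanson active-set method, on a tiny made-up LP in standard form; the quantum speedup is of course not reproduced.

```python
import numpy as np
from scipy.optimize import nnls

def lp_via_nnls(c, A, b):
    """Solve min c@x s.t. A@x = b, x >= 0 by stacking primal
    feasibility (A x = b), dual feasibility (A^T y + s = c, s >= 0,
    with free y split as y_plus - y_minus) and strong duality
    (c@x = b@y) into one system M z = q, z >= 0, solved as
    nonnegative least squares (Lawson-Hanson). A residual near zero
    certifies that z[:n] is primal optimal."""
    m, n = A.shape
    M = np.zeros((m + n + 1, 2 * n + 2 * m))
    q = np.concatenate([b, c, [0.0]])
    M[:m, :n] = A                          # A x = b
    M[m:m + n, n:2 * n] = np.eye(n)        # dual slacks s
    M[m:m + n, 2 * n:2 * n + m] = A.T      # + A^T y_plus
    M[m:m + n, 2 * n + m:] = -A.T          # - A^T y_minus
    M[-1, :n] = c                          # c@x - b@y = 0
    M[-1, 2 * n:2 * n + m] = -b
    M[-1, 2 * n + m:] = b
    z, resid = nnls(M, q)
    return z[:n], resid

# min x1 + 2*x2  s.t.  x1 + x2 = 3, x >= 0   (optimum: x = [3, 0])
x, resid = lp_via_nnls(np.array([1.0, 2.0]),
                       np.array([[1.0, 1.0]]),
                       np.array([3.0]))
```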
Optimized groundwater containment using linear programming
Quinn, J.J.; Johnson, R.L.; Durham, L.A.
1998-07-01
Groundwater extraction systems are typically installed to contain contaminant plumes. These systems are expensive to install and maintain. A traditional approach to designing such a wellfield is to use a series of trial-and-error simulations to test the effects of various well locations and pump rates. However, optimal locations and pump rates of extraction wells are difficult to determine when the objectives of the potential pumping scheme and the site hydrogeology are considered. This paper describes a case study of an application of linear programming theory to determine optimal well placement and pump rates. Calculations were conducted by using ModMan to link a calibrated MODFLOW flow model with LINDO, a linear programming package. Past activities at the site under study included disposal of contaminants in pits. Several groundwater plumes have been identified, and others may be present. The area of concern is bordered on three sides by a wetland, which receives a portion of its input water budget as groundwater discharge from the disposal area. The objective function of the optimization was to minimize the rate of groundwater extraction while preventing discharge to the marsh across a user-specified boundary. In this manner, the optimization routine selects well locations and pump rates to produce a groundwater divide along this boundary.
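Well-placement problems like the one described are commonly posed with a response matrix: each entry gives the change in hydraulic gradient at a control point per unit pumping at a candidate well, precomputed from flow-model runs. The sketch below is not the ModMan/MODFLOW/LINDO setup from the study; it is a generic `scipy.optimize.linprog` version with invented numbers.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical response matrix: r[i, j] is the inward hydraulic
# gradient induced at control point i per unit pump rate at candidate
# well j (in practice pre-computed from a calibrated flow model), and
# g_min[i] is the gradient required there to prevent discharge.
r = np.array([[0.8, 0.2, 0.1],
              [0.3, 0.9, 0.2],
              [0.1, 0.4, 0.7]])
g_min = np.array([1.0, 1.2, 0.9])
q_max = 5.0                              # per-well pumping capacity

# minimize total extraction sum_j q_j subject to containment
res = linprog(c=np.ones(3),
              A_ub=-r, b_ub=-g_min,      # r @ q >= g_min, flipped to <=
              bounds=[(0, q_max)] * 3)
q_opt = res.x                            # optimal pump rates
```

The objective mirrors the study's: extract as little groundwater as possible while holding the gradient condition at every control point along the user-specified boundary.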
A Linear Programming Model for Assigning Students to Attendance Centers.
ERIC Educational Resources Information Center
Ontjes, Robert L.
A linear programming model and procedures for optimal assignment of students to attendance centers are presented. An example of the use of linear programming for the assignment of students to attendance centers in a particular school district is given. (CK)
Measuring Astronomical Distances with Linear Programming
NASA Astrophysics Data System (ADS)
Narain, Akshar
2015-05-01
A few years ago it was suggested that the distance to celestial bodies could be computed by tracking their position over about 24 hours and then solving a regression problem. One only needed to use inexpensive telescopes, cameras, and astrometry tools, and the experiment could be done from one's backyard. However, it is not obvious to an amateur what the regression problem is and how to solve it. This paper identifies that problem and shows how to solve it with linear programming. It also takes into account the body's celestial latitude to improve the method's accuracy. The new method is validated with both simulated and actual data, computing distances to asteroids to within 1% of the correct values. It can be used as a new tutorial for amateurs to see how consumer-grade astrophotography and free astrometry and optimization tools come together to solve an important problem. It can also be used as a tool in crowdsourced detection of dangerous asteroids.
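A regression that is naturally solved with linear programming, and a plausible choice for noisy amateur astrometry, is least-absolute-deviations (L1) regression: the sum of absolute residuals is linearized with one auxiliary variable per observation. This is a hedged sketch of the general technique; the exact regression used in the paper may differ.

```python
import numpy as np
from scipy.optimize import linprog

def lad_fit(X, y):
    """Least-absolute-deviations regression as an LP: minimize
    sum_i e_i subject to e_i >= |y_i - X_i @ beta|, with beta free
    and e >= 0."""
    m, n = X.shape
    c = np.concatenate([np.zeros(n), np.ones(m)])
    #  X beta - e <= y  and  -X beta - e <= -y  together give
    #  |y - X beta| <= e elementwise
    A_ub = np.vstack([np.hstack([X, -np.eye(m)]),
                      np.hstack([-X, -np.eye(m)])])
    b_ub = np.concatenate([y, -y])
    bounds = [(None, None)] * n + [(0, None)] * m
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
    return res.x[:n]

# a line through exact points plus one gross outlier
x = np.arange(10.0)
X = np.column_stack([np.ones(10), x])
y = 2.0 + 3.0 * x
y[4] += 50.0                      # outlier the L1 fit should ignore
beta = lad_fit(X, y)              # beta ≈ [2, 3]
```

Unlike least squares, the L1 fit passes exactly through the nine clean points and discards the outlier, which is why it suits measurement sets with occasional bad detections.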
Robust Control Design via Linear Programming
NASA Technical Reports Server (NTRS)
Keel, L. H.; Bhattacharyya, S. P.
1998-01-01
This paper deals with the problem of synthesizing or designing a feedback controller of fixed dynamic order. The closed loop specifications considered here are given in terms of a target performance vector representing a desired set of closed loop transfer functions connecting various signals. In general these point targets are unattainable with a fixed order controller. By enlarging the target from a fixed point set to an interval set, the solvability conditions with a fixed order controller are relaxed and a solution is more easily obtained. Results from the parametric robust control literature can be used to design the interval target family so that the performance deterioration is acceptable, even when plant uncertainty is present. It is shown that it is possible to devise a computationally simple linear programming approach that attempts to meet the desired closed loop specifications.
Multiobjective power dispatch using fuzzy linear programming
Yang, H.T.; Huang, C.M.; Lee, H.M.; Huang, C.L.
1995-12-31
This paper presents a new fuzzy linear programming (FLP) approach to the multiobjective power dispatch problem, taking into account fuel cost and the environmental impact of NO{sub x} emission. The FLP technique first optimizes each objective separately. To further offer the best compromise solution from the non-inferior domain obtained by the FLP-based operator, a preference index of distance membership function is used to help the power system operator adjust the generation levels in the most economic manner while also minimizing impact on the environment. The effectiveness of the proposed approach has been demonstrated on a 10-bus, 5-generator system. Numerical results reveal that FLP is a promising and efficient approach for dealing with the multiobjective nature of the power dispatch problem.
Genetic Programming Transforms in Linear Regression Situations
NASA Astrophysics Data System (ADS)
Castillo, Flor; Kordon, Arthur; Villa, Carlos
The chapter summarizes the use of Genetic Programming (GP) in Multiple Linear Regression (MLR) to address multicollinearity and Lack of Fit (LOF). The basis of the proposed method is applying appropriate input transforms (model respecification) that deal with these issues while preserving the information content of the original variables. The transforms are selected from symbolic regression models with optimal trade-off between accuracy of prediction and expressional complexity, generated by multiobjective Pareto-front GP. The chapter includes a comparative study of the GP-generated transforms with Ridge Regression, a variant of ordinary Multiple Linear Regression, which has been a useful and commonly employed approach for reducing multicollinearity. The advantages of GP-generated model respecification are clearly defined and demonstrated. Some recommendations for transforms selection are given as well. The application benefits of the proposed approach are illustrated with a real industrial application in one of the broadest empirical modeling areas in manufacturing - robust inferential sensors. The chapter contributes to increasing the awareness of the potential of GP in statistical model building by MLR.
User's manual for LINEAR, a FORTRAN program to derive linear aircraft models
NASA Technical Reports Server (NTRS)
Duke, Eugene L.; Patterson, Brian P.; Antoniewicz, Robert F.
1987-01-01
This report documents a FORTRAN program that provides a powerful and flexible tool for the linearization of aircraft models. The program LINEAR numerically determines a linear system model using nonlinear equations of motion and a user-supplied nonlinear aerodynamic model. The system model determined by LINEAR consists of matrices for both state and observation equations. The program has been designed to allow easy selection and definition of the state, control, and observation variables to be used in a particular model.
User's manual for interactive LINEAR: A FORTRAN program to derive linear aircraft models
NASA Technical Reports Server (NTRS)
Antoniewicz, Robert F.; Duke, Eugene L.; Patterson, Brian P.
1988-01-01
An interactive FORTRAN program that provides the user with a powerful and flexible tool for the linearization of aircraft aerodynamic models is documented in this report. The program LINEAR numerically determines a linear system model using nonlinear equations of motion and a user-supplied linear or nonlinear aerodynamic model. The nonlinear equations of motion used are six-degree-of-freedom equations with stationary atmosphere and flat, nonrotating earth assumptions. The system model determined by LINEAR consists of matrices for both the state and observation equations. The program has been designed to allow easy selection and definition of the state, control, and observation variables to be used in a particular model.
Optimization Research of Generation Investment Based on Linear Programming Model
NASA Astrophysics Data System (ADS)
Wu, Juan; Ge, Xueqian
Linear programming is an important branch of operational research and a mathematical method to assist people in carrying out scientific management. GAMS is an advanced simulation and optimization modeling language that combines complex mathematical programming, such as linear programming (LP), nonlinear programming (NLP), and mixed-integer programming (MIP), with system simulation. In this paper, based on a linear programming model, the optimized investment decision-making for generation is simulated and analyzed. Finally, the optimal installed capacity of power plants and the final total cost are obtained, which provides a rational decision-making basis for optimized investments.
An Intuitive Approach in Teaching Linear Programming in High School.
ERIC Educational Resources Information Center
Ulep, Soledad A.
1990-01-01
Discusses solving inequality problems involving linear programming. Describes the usual and alternative approaches. Presents an intuitive approach for finding a feasible solution by maximizing the objective function. (YP)
Comparison of open-source linear programming solvers.
Gearhart, Jared Lee; Adair, Kristin Lynn; Durfee, Justin David.; Jones, Katherine A.; Martin, Nathaniel; Detry, Richard Joseph
2013-10-01
When developing linear programming models, issues such as budget limitations, customer requirements, or licensing may preclude the use of commercial linear programming solvers. In such cases, one option is to use an open-source linear programming solver. A survey of linear programming tools was conducted to identify potential open-source solvers. From this survey, four open-source solvers were tested using a collection of linear programming test problems and the results were compared to IBM ILOG CPLEX Optimizer (CPLEX) [1], an industry standard. The solvers considered were: COIN-OR Linear Programming (CLP) [2], [3], GNU Linear Programming Kit (GLPK) [4], lp_solve [5] and Modular In-core Nonlinear Optimization System (MINOS) [6]. As no open-source solver outperforms CPLEX, this study demonstrates the power of commercial linear programming software. CLP was found to be the top performing open-source solver considered in terms of capability and speed. GLPK also performed well but cannot match the speed of CLP or CPLEX. lp_solve and MINOS were considerably slower and encountered issues when solving several test problems.
Linear System of Equations, Matrix Inversion, and Linear Programming Using MS Excel
ERIC Educational Resources Information Center
El-Gebeily, M.; Yushau, B.
2008-01-01
In this note, we demonstrate with illustrations two different ways that MS Excel can be used to solve Linear Systems of Equation, Linear Programming Problems, and Matrix Inversion Problems. The advantage of using MS Excel is its availability and transparency (the user is responsible for most of the details of how a problem is solved). Further, we…
Linear programming model for optimum resource allocation in rural systems
Devadas, V.
1997-07-01
The article presents a model for optimum resource allocation in a rural system. Using linear programming, the objective function of the model maximizes the revenue of the rural system, and optimum resource allocation is made subject to a number of energy- and nonenergy-related constraints relevant to the rural system. The model also quantifies the major yields as well as the by-products of different sectors of the rural economic system.
EZLP: An Interactive Computer Program for Solving Linear Programming Problems. Final Report.
ERIC Educational Resources Information Center
Jarvis, John J.; And Others
Designed for student use in solving linear programming problems, the interactive computer program described (EZLP) permits the student to input the linear programming model in exactly the same manner in which it would be written on paper. This report includes a brief review of the development of EZLP; narrative descriptions of program features,…
Linear Programming for Vocational Education Planning. Interim Report.
ERIC Educational Resources Information Center
Young, Robert C.; And Others
The purpose of the paper is to define for potential users of vocational education management information systems a quantitative analysis technique and its utilization to facilitate more effective planning of vocational education programs. Defining linear programming (LP) as a management technique used to solve complex resource allocation problems…
Planning Student Flow with Linear Programming: A Tunisian Case Study.
ERIC Educational Resources Information Center
Bezeau, Lawrence
A student flow model in linear programming format, designed to plan the movement of students into secondary and university programs in Tunisia, is described. The purpose of the plan is to determine a sufficient number of graduating students that would flow back into the system as teachers or move into the labor market to meet fixed manpower…
A linear-programming approach to temporal reasoning
Jonsson, P.; Baeckstroem, C.
1996-12-31
We present a new formalism, Horn Disjunctive Linear Relations (Horn DLRs), for reasoning about temporal constraints. We prove that deciding satisfiability of sets of Horn DLRs is polynomial by exhibiting an algorithm based upon linear programming. Furthermore, we prove that most other approaches to tractable temporal constraint reasoning can be encoded as Horn DLRs, including the ORD-Horn algebra and most methods for purely quantitative reasoning.
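The purely quantitative core of such temporal reasoning can be shown with simple difference constraints x_j - x_i <= c between time points: satisfiability of their conjunction is exactly LP feasibility, checked here with a zero objective via `scipy.optimize.linprog`. Horn DLRs are more general; this sketch covers only the conjunctive fragment.

```python
import numpy as np
from scipy.optimize import linprog

def stp_satisfiable(n, constraints):
    """Satisfiability of a conjunction of difference constraints
    x_j - x_i <= c over n time points, decided as LP feasibility
    (zero objective: any feasible point witnesses satisfiability).
    constraints is a list of (i, j, c) triples."""
    A = np.zeros((len(constraints), n))
    b = np.zeros(len(constraints))
    for row, (i, j, c) in enumerate(constraints):
        A[row, j] = 1.0
        A[row, i] = -1.0
        b[row] = c
    res = linprog(np.zeros(n), A_ub=A, b_ub=b,
                  bounds=[(None, None)] * n)   # time points are free
    return bool(res.success)

# x1 at least 2 before x2, x2 at least 3 before x0, x0 at most 4
# after x1: the constraint cycle sums to -1, so this is unsatisfiable...
unsat = stp_satisfiable(3, [(2, 1, -2), (0, 2, -3), (1, 0, 4)])
# ...but relaxing the last bound to 6 makes it satisfiable.
sat = stp_satisfiable(3, [(2, 1, -2), (0, 2, -3), (1, 0, 6)])
```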
Short-term case mix management with linear programming.
Hughes, W L; Soliman, S Y
1985-01-01
One short-term economic incentive created by a prospective payment system based on diagnosis-related groups (DRGs) is for hospital managers to optimally and efficiently use the hospital's current mix of services to maximize net contribution. DRGs provide a managerial definition of the hospital's product by determining the number of patients discharged within each of the 467 groupings. Thus, the DRG case mix can be thought of as the hospital's product mix. As in major industry, linear programming models may prove useful in determining the hospital's financially optimal case mix. This article provides a framework for applying the linear programming concept to case mix planning in the hospital setting. It also presents an illustration and interpretation of a linear programming model that provides information about the short-term optimal case mix.
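A minimal version of such a case-mix model, with invented margins, resource coefficients, and capacities (not data from the article), maximizes total contribution subject to bed-day and nursing-hour limits and demand ceilings per DRG group:

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative short-term case-mix problem: choose the number of
# cases per DRG group to maximize total net contribution, subject to
# bed-day and nursing-hour capacity and demand ceilings.
margin    = np.array([900.0, 1500.0, 600.0])   # contribution per case
bed_days  = np.array([3.0, 7.0, 2.0])          # resource use per case
nurse_hrs = np.array([10.0, 30.0, 5.0])
demand    = [40, 25, 60]                       # max cases per group

res = linprog(-margin,                         # maximize => minimize -margin
              A_ub=np.vstack([bed_days, nurse_hrs]),
              b_ub=[300.0, 1200.0],            # capacities
              bounds=[(0, d) for d in demand])
case_mix, total_margin = res.x, -res.fun
```

The dual values of the capacity rows (available from the solver) are what give the short-term managerial reading: the marginal contribution of one more bed-day or nursing hour.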
From Parity and Payoff Games to Linear Programming
NASA Astrophysics Data System (ADS)
Schewe, Sven
This paper establishes a surprising reduction from parity and mean payoff games to linear programming problems. While such a connection is trivial for solitary games, it is surprising for two player games, because the players have opposing objectives, whose natural translations into an optimisation problem are minimisation and maximisation, respectively. Our reduction to linear programming circumvents the need for concurrent minimisation and maximisation by replacing one of them, the maximisation, by approximation. The resulting optimisation problem can be translated to a linear programme by a simple space transformation, which is inexpensive in the unit cost model, but results in an exponential growth of the coefficients. The discovered connection opens up unexpected applications - like μ-calculus model checking - of linear programming in the unit cost model, and thus turns the intriguing academic problem of finding a polynomial time algorithm for linear programming in this model of computation (and subsequently a strongly polynomial algorithm) into a problem of paramount practical importance: All advancements in this area can immediately be applied to accelerate solving parity and payoff games, or to improve their complexity analysis.
Synthesizing Dynamic Programming Algorithms from Linear Temporal Logic Formulae
NASA Technical Reports Server (NTRS)
Rosu, Grigore; Havelund, Klaus
2001-01-01
The problem of testing a linear temporal logic (LTL) formula on a finite execution trace of events, generated by an executing program, occurs naturally in runtime analysis of software. We present an algorithm which takes an LTL formula and generates an efficient dynamic programming algorithm. The generated algorithm tests whether the LTL formula is satisfied by a finite trace of events given as input. The generated algorithm runs in linear time, its constant depending on the size of the LTL formula. The memory needed is constant, also depending on the size of the formula.
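The underlying backward dynamic programming idea (evaluate every subformula bottom-up at each event, keeping only the truth values for the current and the next suffix, so memory is constant in the trace length) can be sketched directly. Note the paper generates a specialized algorithm per formula; the generic interpreter below is a simplified stand-in, assuming one common finite-trace semantics for Next, Always, Eventually, and Until.

```python
# Finite-trace LTL evaluation by backward dynamic programming: one
# pass over the trace, storing per subformula only its truth value
# at the current suffix and at the next suffix.

def eval_ltl(formula, trace):
    """formula: nested tuples such as ('until', ('atom', 'p'),
    ('atom', 'q')); trace: non-empty list of sets of atomic
    propositions (the observed events)."""
    def subformulas(f):                    # children before parents
        out = []
        def walk(g):
            for child in g[1:]:
                if isinstance(child, tuple):
                    walk(child)
            out.append(g)
        walk(f)
        return out

    subs = subformulas(formula)
    nxt = {}                               # truth at the next suffix
    for event in reversed(trace):
        now = {}
        for f in subs:
            op = f[0]
            if op == 'atom':
                now[f] = f[1] in event
            elif op == 'not':
                now[f] = not now[f[1]]
            elif op == 'and':
                now[f] = now[f[1]] and now[f[2]]
            elif op == 'next':             # false past the end of trace
                now[f] = nxt.get(f[1], False)
            elif op == 'always':
                now[f] = now[f[1]] and nxt.get(f, True)
            elif op == 'eventually':
                now[f] = now[f[1]] or nxt.get(f, False)
            elif op == 'until':            # f[1] U f[2]
                now[f] = now[f[2]] or (now[f[1]] and nxt.get(f, False))
        nxt = now
    return nxt[formula]

trace = [{'p'}, {'p', 'q'}, {'q'}]
```

Each event is visited once and each visit does constant work per subformula, matching the linear-time, constant-memory behavior the abstract describes.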
Linear combination reading program for capture gamma rays
Tanner, Allan B.
1971-01-01
This program computes a weighting function, Qj, which gives a scalar output value of unity when applied to the spectrum of a desired element and a minimum value (considering statistics) when applied to spectra of materials not containing the desired element. Intermediate values are obtained for materials containing the desired element, in proportion to the amount of the element they contain. The program is written in the BASIC language in a format specific to the Hewlett-Packard 2000A Time-Sharing System, and is an adaptation of an earlier program for linear combination reading for X-ray fluorescence analysis (Tanner and Brinkerhoff, 1971). Following the program is a sample run from a study of the application of the linear combination technique to capture-gamma-ray analysis for calcium (report in preparation).
Diet planning for humans using mixed-integer linear programming.
Sklan, D; Dariel, I
1993-07-01
Human diet planning is generally carried out by selecting the food items or groups of food items to be used in the diet and then calculating the composition. If nutrient quantities do not reach the desired nutritional requirements, foods are exchanged or quantities altered and the composition recalculated. Iterations are repeated until a suitable diet is obtained. This procedure is cumbersome and slow and often leads to compromises in composition of the final diets. A computerized model, planning diets for humans at minimum cost while supplying all nutritional requirements, maintaining nutrient relationships and preserving eating practices is presented. This is based on a mixed-integer linear-programming algorithm. Linear equations were prepared for each nutritional requirement. To produce linear equations for relationships between nutrients, linear transformations were performed. Logical definitions for interactions such as the frequency of use of foods, relationships between exchange groups and the energy content of different meals were defined, and linear equations for these associations were written. Food items generally eaten in whole units were defined as integers. The use of this program is demonstrated for planning diets using a large selection of basic foods and for clinical situations where nutritional intervention is desirable. The system presented begins from a definition of the nutritional requirements and then plans the foods accordingly, and at minimum cost. This provides an accurate, efficient and versatile method of diet formulation.
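The structure of such a diet model (integer servings, a linear cost objective, and linear nutrient lower bounds) can be illustrated on a toy instance. All foods, prices, nutrient contents, and requirements below are invented numbers, and the exhaustive search merely stands in for the mixed-integer linear-programming solver used in the paper.

```python
# Toy diet model: choose integer servings of each food to minimize cost
# subject to nutrient lower bounds. Brute force replaces a real MILP solver.
from itertools import product

foods = {                    # cost per serving, (energy kcal, protein g)
    "bread":  (0.30, (80, 3)),
    "milk":   (0.50, (60, 4)),
    "cheese": (1.20, (110, 7)),
}
requirements = (500, 30)     # minimum energy and protein per day
max_servings = 8             # integer upper bound per food

best = None
for servings in product(range(max_servings + 1), repeat=len(foods)):
    cost = sum(q * foods[f][0] for q, f in zip(servings, foods))
    intake = [sum(q * foods[f][1][k] for q, f in zip(servings, foods))
              for k in range(len(requirements))]
    if all(i >= r for i, r in zip(intake, requirements)) and \
       (best is None or cost < best[0]):
        best = (cost, dict(zip(foods, servings)))

print(best)
```

A real model like the one in the abstract adds linear equations for nutrient relationships, meal structure, and food-frequency logic, which is exactly what makes a MILP solver necessary.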
Train repathing in emergencies based on fuzzy linear programming.
Meng, Xuelei; Cui, Bingmou
2014-01-01
Train pathing is the problem of assigning train trips to sets of rail segments, such as rail tracks and links. This paper focuses on the train pathing problem of determining the paths of train trips in emergencies. We analyze the influencing factors of train pathing, such as transfer cost, running cost, and the cost of adverse social effects. Taking the segment and station capacity constraints into account, we build a fuzzy linear programming model for the train pathing problem. We design fuzzy membership functions to describe the fuzzy coefficients, and introduce contraction-expansion factors to contract or expand the value ranges of the fuzzy coefficients, coping with the uncertainty of those ranges. We propose a method based on triangular fuzzy coefficients that transforms the train pathing model (a fuzzy linear programming model) into a determinate linear model. An emergency scenario is constructed based on real data from the Beijing-Shanghai Railway; solving the model confirms its validity and the efficiency of the algorithm.
LFSPMC: Linear feature selection program using the probability of misclassification
NASA Technical Reports Server (NTRS)
Guseman, L. F., Jr.; Marion, B. P.
1975-01-01
The computational procedure and associated computer program for a linear feature selection technique are presented. The technique assumes that: a finite number, m, of classes exists; each class is described by an n-dimensional multivariate normal density function of its measurement vectors; the mean vector and covariance matrix for each density function are known (or can be estimated); and the a priori probability for each class is known. The technique produces a single linear combination of the original measurements which minimizes the one-dimensional probability of misclassification defined by the transformed densities.
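In the two-class, equal-covariance, equal-priors special case, the one-dimensional probability of misclassification for a given linear combination has a closed form: after projecting onto direction b, each class is a 1-D normal and the error is Phi(-|m1 - m2| / (2*sigma)). The sketch below uses invented numbers to illustrate how the criterion scores candidate directions; the program itself handles the general m-class case with estimated densities.

```python
# Toy version of the selection criterion: score a projection direction b by
# the resulting 1-D misclassification probability (two classes, shared
# covariance, equal priors). All numbers are invented for illustration.
import math

def phi(z):                      # standard normal CDF
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def misclassification(b, mean1, mean2, cov):
    m1 = sum(bi * mi for bi, mi in zip(b, mean1))
    m2 = sum(bi * mi for bi, mi in zip(b, mean2))
    var = sum(b[i] * cov[i][j] * b[j]
              for i in range(len(b)) for j in range(len(b)))
    return phi(-abs(m1 - m2) / (2.0 * math.sqrt(var)))

mean1, mean2 = (0.0, 0.0), (3.0, 1.0)
cov = [[1.0, 0.0], [0.0, 4.0]]   # feature 2 is noisy
# Projecting onto the noisy feature alone is much worse than onto feature 1.
print(misclassification((1.0, 0.0), mean1, mean2, cov))  # ~0.067
print(misclassification((0.0, 1.0), mean1, mean2, cov))  # ~0.401
```

The technique in the report searches over directions b to minimize exactly this kind of transformed-density error.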
Planning under uncertainty solving large-scale stochastic linear programs
Infanger, G. (Dept. of Operations Research; Technische Univ. Vienna, Inst. fuer Energiewirtschaft)
1992-12-01
For many practical problems, solutions obtained from deterministic models are unsatisfactory because they fail to hedge against certain contingencies that may occur in the future. Stochastic models address this shortcoming, but until recently seemed intractable due to their size. Recent advances both in solution algorithms and in computer technology now allow us to solve important and general classes of practical stochastic problems. We show how large-scale stochastic linear programs can be efficiently solved by combining classical decomposition and Monte Carlo (importance) sampling techniques. We discuss the methodology for solving two-stage stochastic linear programs with recourse, present numerical results of large problems with numerous stochastic parameters, show how to efficiently implement the methodology on a parallel multi-computer, and derive the theory for solving a general class of multi-stage problems with dependency of the stochastic parameters within a stage and between different stages.
Coordination of directional overcurrent relay timing using linear programming
Urdaneta, A.J.; Restrepo, H.; Marquez, S.; Sanchez, J.
1996-01-01
A successive linear programming methodology is presented to treat more effectively those applications where a local structure change is performed to a system already in operation, and where the modification of the settings of already existent relays is not desirable. The dimension of the optimization problems to be solved is substantially reduced, and a sequence of small linear programming problems is stated and solved in terms of the time dial settings, until a feasible solution is reached. With the proposed technique, the number of relays of the original system to be reset is reduced substantially. It is found that there is a trade-off between the number of relays to be reset and the optimality of the settings of the relays.
Multiobjective fuzzy stochastic linear programming problems with inexact probability distribution
NASA Astrophysics Data System (ADS)
Hamadameen, Abdulqader Othman; Zainuddin, Zaitul Marlizawati
2014-06-01
This study deals with multiobjective fuzzy stochastic linear programming problems with an uncertain probability distribution, defined as fuzzy assertions by ambiguous experts. The problem formulation is presented, along with two solution strategies: a fuzzy transformation via a ranking function, and a stochastic transformation in which the α-cut technique and linguistic hedges are applied to the uncertain probability distribution. An extension of Sen's method is employed to find a compromise solution, supported by an illustrative numerical example.
A cost-aggregating integer linear program for motif finding.
Kingsford, Carl; Zaslavsky, Elena; Singh, Mona
2011-12-01
In the motif finding problem one seeks a set of mutually similar substrings within a collection of biological sequences. This is an important and widely-studied problem, as such shared motifs in DNA often correspond to regulatory elements. We study a combinatorial framework where the goal is to find substrings of a given length such that the sum of their pairwise distances is minimized. We describe a novel integer linear program for the problem, which uses the fact that distances between substrings come from a limited set of possibilities allowing for aggregate consideration of sequence position pairs with the same distances. We show how to tighten its linear programming relaxation by adding an exponential set of constraints and give an efficient separation algorithm that can find violated constraints, thereby showing that the tightened linear program can still be solved in polynomial time. We apply our approach to find optimal solutions for the motif finding problem and show that it is effective in practice in uncovering known transcription factor binding sites.
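The combinatorial objective itself is easy to state in code: pick one length-k substring per sequence so that the sum of pairwise distances is minimized. The brute-force sketch below (with Hamming distance and a tiny made-up input) illustrates the objective only; the paper's integer linear program is what makes the problem tractable at realistic sizes by aggregating position pairs with equal distances.

```python
# Brute-force illustration of the motif-finding objective: minimize the sum
# of pairwise Hamming distances over one length-k substring per sequence.
from itertools import product

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def best_motif(seqs, k):
    # All length-k substrings of each sequence are candidate picks.
    choices = [[s[i:i + k] for i in range(len(s) - k + 1)] for s in seqs]
    best = None
    for picked in product(*choices):
        cost = sum(hamming(picked[i], picked[j])
                   for i in range(len(picked))
                   for j in range(i + 1, len(picked)))
        if best is None or cost < best[0]:
            best = (cost, picked)
    return best

seqs = ["ACGTACG", "CCACGTT", "TTACGAC"]
cost, motifs = best_motif(seqs, 4)
print(cost, motifs)
```

Exhaustive search is exponential in the number of sequences, which is precisely why the ILP relaxation and its separation algorithm matter in practice.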
MAGDM linear-programming models with distinct uncertain preference structures.
Xu, Zeshui S; Chen, Jian
2008-10-01
Group decision making with preference information on alternatives is an interesting and important research topic which has been receiving more and more attention in recent years. The purpose of this paper is to investigate multiple-attribute group decision-making (MAGDM) problems with distinct uncertain preference structures. We develop some linear-programming models for dealing with the MAGDM problems, where the information about attribute weights is incomplete, and the decision makers have their preferences on alternatives. The provided preference information can be represented in the following three distinct uncertain preference structures: 1) interval utility values; 2) interval fuzzy preference relations; and 3) interval multiplicative preference relations. We first establish some linear-programming models based on decision matrix and each of the distinct uncertain preference structures and, then, develop some linear-programming models to integrate all three structures of subjective uncertain preference information provided by the decision makers and the objective information depicted in the decision matrix. Furthermore, we propose a simple and straightforward approach in ranking and selecting the given alternatives. It is worth pointing out that the developed models can also be used to deal with the situations where the three distinct uncertain preference structures are reduced to the traditional ones, i.e., utility values, fuzzy preference relations, and multiplicative preference relations. Finally, we use a practical example to illustrate in detail the calculation process of the developed approach.
Using linear programming to minimize the cost of nurse personnel.
Matthews, Charles H
2005-01-01
Nursing personnel costs make up a major portion of most hospital budgets. This report evaluates and optimizes the utility of the nurse personnel at the Internal Medicine Outpatient Clinic of Wake Forest University Baptist Medical Center. Linear programming (LP) was employed to determine the effective combination of nurses that would allow all weekly clinic tasks to be covered at the lowest possible cost to the department. Linear programming is a standard feature of common spreadsheet software: the operator establishes the variables to be optimized and then enters a series of constraints, each of which affects the ultimate outcome. The application is therefore able to quantify and stratify the nurses necessary to execute the tasks. A sensitivity analysis can then be performed to assess just how sensitive the outcome is to adding or deleting a nurse to or from the payroll. The nurse employee cost structure in this study consisted of five certified nurse assistants (CNA), three licensed practical nurses (LPN), and five registered nurses (RN). The LP revealed that the outpatient clinic should staff four RNs, three LPNs, and four CNAs with 95 percent confidence of covering nurse demand on the floor. This combination of nurses would enable the clinic to: 1. Reduce annual staffing costs by 16 percent; 2. Force each level of nurse to be optimally productive by focusing on tasks specific to their expertise; 3. Assign accountability more efficiently as the nurses adhere to their specific duties; and 4. Ultimately provide a competitive advantage to the clinic as it relates to nurse employee and patient satisfaction. Linear programming can be used to solve capacity problems for just about any staffing situation, provided the model is indeed linear. PMID:18972976
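The shape of such a staffing model is simple: integer counts per nurse type, a linear cost objective, and linear coverage constraints. The sketch below uses entirely invented wages, capabilities, and demands (the report's actual data and spreadsheet LP are not reproduced), and enumerates small integer counts instead of calling a solver.

```python
# Toy staffing model: choose integer counts of RNs, LPNs and CNAs to
# minimize weekly cost while covering required hours of each task class.
# All wages, capability figures and demands are invented for illustration.
from itertools import product

weekly_cost = {"RN": 1800, "LPN": 1200, "CNA": 900}
# Hours of each task class one nurse of each type can cover per week.
can_do = {                       # (clinical, routine) hours
    "RN":  (40, 0),              # assume RNs work clinical tasks only
    "LPN": (20, 20),
    "CNA": (0, 40),              # assume CNAs work routine tasks only
}
demand = (170, 260)              # weekly clinical and routine hours needed

best = None
for rn, lpn, cna in product(range(8), repeat=3):
    counts = {"RN": rn, "LPN": lpn, "CNA": cna}
    supply = [sum(counts[t] * can_do[t][k] for t in counts) for k in (0, 1)]
    if all(s >= d for s, d in zip(supply, demand)):
        cost = sum(counts[t] * weekly_cost[t] for t in counts)
        if best is None or cost < best[0]:
            best = (cost, counts)

print(best)   # (12900, {'RN': 1, 'LPN': 7, 'CNA': 3})
```

The sensitivity analysis mentioned in the report corresponds to re-running such a model with one count forced up or down and comparing the optimal costs.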
LTSTAR- SUPERSONIC WING NON-LINEAR AERODYNAMICS PROGRAM
NASA Technical Reports Server (NTRS)
Carlson, H. W.
1994-01-01
The Supersonic Wing Nonlinear Aerodynamics computer program, LTSTAR, was developed to provide for the estimation of the nonlinear aerodynamic characteristics of a wing at supersonic speeds. This corrected linearized-theory method accounts for nonlinearities in the variation of basic pressure loadings with local surface slopes, predicts the degree of attainment of theoretical leading-edge thrust forces, and provides an estimate of detached leading-edge vortex loadings that result when the theoretical thrust forces are not fully realized. Comparisons of LTSTAR computations with experimental results show significant improvements in detailed wing pressure distributions, particularly for large angles of attack and for regions of the wing where the flow is highly three-dimensional. The program provides generally improved predictions of the wing overall force and moment coefficients. LTSTAR could be useful in design studies aimed at aerodynamic performance optimization and for providing more realistic trade-off information for selection of wing planform geometry and airfoil section parameters. Input to the LTSTAR program includes wing planform data, freestream conditions, wing camber, wing thickness, scaling options, and output options. Output includes pressure coefficients along each chord, section normal and axial force coefficients, and the spanwise distribution of section force coefficients. With the chordwise distributions and section coefficients at each angle of attack, three sets of polars are output. The first set is for linearized theory with and without full leading-edge thrust, the second set includes nonlinear corrections, and the third includes estimates of attainable leading-edge thrust and vortex increments along with the nonlinear corrections. The LTSTAR program is written in FORTRAN IV for batch execution and has been implemented on a CDC 6000 series computer with a central memory requirement of approximately 150K (octal) of 60 bit words. The LTSTAR
A scalable parallel algorithm for multiple objective linear programs
NASA Technical Reports Server (NTRS)
Wiecek, Malgorzata M.; Zhang, Hong
1994-01-01
This paper presents an ADBASE-based parallel algorithm for solving multiple objective linear programs (MOLP's). Job balance, speedup and scalability are of primary interest in evaluating efficiency of the new algorithm. Implementation results on Intel iPSC/2 and Paragon multiprocessors show that the algorithm significantly speeds up the process of solving MOLP's, which is understood as generating all or some efficient extreme points and unbounded efficient edges. The algorithm gives especially good results for large and very large problems. Motivation and justification for solving such large MOLP's are also included.
A recurrent neural network for solving bilevel linear programming problem.
He, Xing; Li, Chuandong; Huang, Tingwen; Li, Chaojie; Huang, Junjian
2014-04-01
In this brief, based on the method of penalty functions, a recurrent neural network (NN) modeled by means of a differential inclusion is proposed for solving the bilevel linear programming problem (BLPP). Compared with the existing NNs for BLPP, the model has the least number of state variables and simple structure. Using nonsmooth analysis, the theory of differential inclusions, and Lyapunov-like method, the equilibrium point sequence of the proposed NNs can approximately converge to an optimal solution of BLPP under certain conditions. Finally, the numerical simulations of a supply chain distribution model have shown excellent performance of the proposed recurrent NNs.
Fuzzy Linear Programming and its Application in Home Textile Firm
NASA Astrophysics Data System (ADS)
Vasant, P.; Ganesan, T.; Elamvazuthi, I.
2011-06-01
In this paper, a new fuzzy linear programming (FLP) methodology using a specific membership function, named the modified logistic membership function, is proposed. The modified logistic membership function is first formulated, and its flexibility in taking up vagueness in parameters is established by an analytical approach. The developed FLP methodology provides confidence in its application to a real-life industrial production planning problem. This approach to solving industrial production planning problems allows feedback among the decision maker, the implementer and the analyst.
Analysing seismic-source mechanisms by linear-programming methods.
Julian, B.R.
1986-01-01
Linear-programming methods are powerful and efficient tools for objectively analysing seismic focal mechanisms and are applicable to a wide range of problems, including tsunami warning and nuclear explosion identification. The source mechanism is represented as a point in the 6-D space of moment-tensor components. The present method can easily be extended to fit observed seismic-wave amplitudes (either signed or absolute) subject to polarity constraints, and to assess the range of mechanisms consistent with a set of measured amplitudes. -from Author
Direct linear programming solver in C for structural applications
NASA Astrophysics Data System (ADS)
Damkilde, L.; Hoyer, O.; Krenk, S.
1994-08-01
An optimization problem can be characterized by an objective function, which is maximized, and restrictions, which limit the variation of the variables. A subclass of optimization is Linear Programming (LP), where both the objective function and the restrictions are linear functions of the variables. The traditional solution methods for LP problems are based on the simplex method, and it is customary to allow only non-negative variables. Compared to other optimization routines, LP solvers are more robust: the optimum is reached in a finite number of steps and is not sensitive to the starting point. For structural applications many optimization problems can be linearized and solved by LP routines. However, the structural variables are not always non-negative, and this requires a reformulation in which a variable x is substituted by the difference of two non-negative variables, x+ and x-. The transformation doubles the number of variables; in a computer implementation the memory allocation doubles, and for a typical problem the execution time at least doubles. This paper describes an LP solver written in C which can handle a combination of non-negative variables and unlimited variables. The LP solver also allows restart, which may reduce the computational costs if the solution to a similar LP problem is known a priori. The algorithm is based on the simplex method, and differs only in the logical choices. Application of the new LP solver gives both a more direct problem formulation and a more efficient program.
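The classical substitution that the paper's solver avoids is mechanical: each free variable x contributes two columns, one for x+ and a negated one for x-. A minimal sketch of the data transformation (Python used for brevity, not the paper's C implementation; all problem data invented):

```python
# Classical free-variable split x = xp - xm, xp, xm >= 0, which doubles the
# columns for each free variable. `c` is the cost row, `A` the constraint
# matrix (list of rows); `free` flags variables unrestricted in sign.

def split_free_variables(c, A, free):
    c2, cols = [], []
    for j, cj in enumerate(c):
        c2.append(cj)
        cols.append([row[j] for row in A])
        if free[j]:                  # append the negated column for xm_j
            c2.append(-cj)
            cols.append([-row[j] for row in A])
    A2 = [[col[i] for col in cols] for i in range(len(A))]
    return c2, A2

def recover(x2, free):
    """Map a solution of the transformed problem back to the original."""
    x, k = [], 0
    for is_free in free:
        if is_free:
            x.append(x2[k] - x2[k + 1])
            k += 2
        else:
            x.append(x2[k])
            k += 1
    return x

c = [3.0, -1.0]                      # x0 >= 0, x1 free
A = [[1.0, 2.0], [0.0, 1.0]]
free = [False, True]
c2, A2 = split_free_variables(c, A, free)
print(c2)        # [3.0, -1.0, 1.0]
print(A2)        # [[1.0, 2.0, -2.0], [0.0, 1.0, -1.0]]
print(recover([2.0, 0.0, 5.0], free))   # [2.0, -5.0]
```

The paper's contribution is a simplex variant whose pivoting logic handles unlimited variables directly, so this doubling of storage and work never occurs.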
Technology Transfer Automated Retrieval System (TEKTRAN)
A stochastic/linear program Excel workbook was developed consisting of two worksheets illustrating linear and stochastic program approaches. Both approaches used the Excel Solver add-in. A published linear program problem served as an example for the ingredients, nutrients and costs and as a benchma...
Microgrid Reliability Modeling and Battery Scheduling Using Stochastic Linear Programming
Cardoso, Goncalo; Stadler, Michael; Siddiqui, Afzal; Marnay, Chris; DeForest, Nicholas; Barbosa-Povoa, Ana; Ferrao, Paulo
2013-05-23
This paper describes the introduction of stochastic linear programming into Operations DER-CAM, a tool used to obtain optimal operating schedules for a given microgrid under local economic and environmental conditions. This application follows previous work on optimal scheduling of a lithium-iron-phosphate battery given the output uncertainty of a 1 MW molten carbonate fuel cell. Both are in the Santa Rita Jail microgrid, located in Dublin, California. This fuel cell has proven unreliable, partially justifying the consideration of storage options. Several stochastic DER-CAM runs are executed to compare different scenarios to values obtained by a deterministic approach. Results indicate that using a stochastic approach provides a conservative yet more lucrative battery schedule. Given fuel cell outages, lower expected energy bills result, with potential savings exceeding 6 percent.
Solution of the multiple dosing problem using linear programming.
Hacisalihzade, S S; Mansour, M
1985-07-01
A system-theoretical approach to drug concentration-time data analysis is introduced after a discussion of some relevant concepts as they are used in system theory. The merits of this approach are demonstrated on the multiple dosing problem. It is shown that dosage minimization without stringent constraints does not result in the desired therapeutic effect. In a different optimization, the discrepancy between the actual and the desired time-histories of the relevant substance's plasma concentration is minimized. It is shown that both of these optimizations can be reduced to linear programming problems which are easily solvable with today's computers. These methods are demonstrated in a case study of dopaminergic substitution in Parkinson's disease, where computer simulations show them to yield excellent results. Finally, the limits of this approach are also discussed.
Split diversity in constrained conservation prioritization using integer linear programming
Chernomor, Olga; Minh, Bui Quang; Forest, Félix; Klaere, Steffen; Ingram, Travis; Henzinger, Monika; von Haeseler, Arndt
2015-01-01
Phylogenetic diversity (PD) is a measure of biodiversity based on the evolutionary history of species. Here, we discuss several optimization problems related to the use of PD, and the more general measure split diversity (SD), in conservation prioritization. Depending on the conservation goal and the information available about species, one can construct optimization routines that incorporate various conservation constraints. We demonstrate how this information can be used to select sets of species for conservation action. Specifically, we discuss the use of species' geographic distributions, the choice of candidates under economic pressure, and the use of predator–prey interactions between the species in a community to define viability constraints. Despite such optimization problems falling into the area of NP hard problems, it is possible to solve them in a reasonable amount of time using integer programming. We apply integer linear programming to a variety of models for conservation prioritization that incorporate the SD measure. We exemplarily show the results for two data sets: the Cape region of South Africa and a Caribbean coral reef community. Finally, we provide user-friendly software at http://www.cibiv.at/software/pda. PMID:25893087
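The core phylogenetic-diversity optimization can be shown on a toy instance: choose k taxa that maximize the total branch length of the subtree spanning them. The tiny rooted tree and branch lengths below are invented, and exhaustive search over subsets stands in for the integer linear programs the paper applies to real conservation data.

```python
# Brute-force PD maximization on a toy rooted tree. Each edge is given as
# (branch length, set of taxa below the edge); under the rooted convention,
# an edge counts toward PD(S) when some chosen taxon lies below it.
from itertools import combinations

taxa = {"A", "B", "C", "D"}
edges = [
    (1.0, {"A"}), (1.0, {"B"}), (2.0, {"C"}), (3.0, {"D"}),
    (1.5, {"A", "B"}), (0.5, {"A", "B", "C"}),
]

def pd(subset):
    return sum(length for length, below in edges if below & subset)

k = 2
best = max(combinations(sorted(taxa), k), key=lambda s: pd(set(s)))
print(best, pd(set(best)))   # ('A', 'D') 6.0
```

The paper's models add exactly the constraints this sketch omits: budgets, geographic distributions, and predator-prey viability conditions, which is what pushes the problem into ILP territory.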
The Physics Program at the International Linear Collider
NASA Astrophysics Data System (ADS)
Strube, Jan; International Linear Collider Physics; Detector study groups Team
2016-03-01
The precise exploration of all aspects of the Higgs sector is one of the key goals for future colliders at the Energy Frontier. The International Linear Collider (ILC) provides the capability for model-independent measurements of all relevant couplings of the Higgs boson to fermions and gauge bosons, including direct measurements of the Top Yukawa coupling as well as of the Higgs self-coupling. In addition, it has a discovery potential for physics beyond the Standard Model that is complementary to the LHC. This contribution will review the highlights of ILC physics in the context of a 20-year-long program. This program covers different collision energies up to 500 GeV with various beam polarizations, each contributing important aspects to the exploration of this new sector of particle physics. Beyond this initial scope of the ILC, we will also discuss the prospects of a 1 TeV upgrade, which offers complementary capabilities for the measurement of double Higgs production and the Higgs self-coupling and increases the reach of direct and indirect searches. This work is presented on behalf of the groups contributing to ILC physics and detector studies in Asia, Europe and the US.
NASA Astrophysics Data System (ADS)
Vasant, P.; Ganesan, T.; Elamvazuthi, I.
2012-11-01
Fairly reasonable results were obtained for non-linear engineering problems using optimization techniques such as neural networks, genetic algorithms, and fuzzy logic independently in the past. Increasingly, hybrid techniques are being used to solve non-linear problems and obtain better output. This paper discusses the use of a neuro-genetic hybrid technique to optimize geological structure mapping, known as seismic survey. It involves the minimization of an objective function subject to geophysical and operational constraints. In this work, the optimization was initially performed using genetic programming, followed by a hybrid neuro-genetic programming approach. Comparative studies and analysis were then carried out on the optimized results. The results indicate that the hybrid neuro-genetic technique produced better results than the stand-alone genetic programming method.
Monthly pan evaporation modeling using linear genetic programming
NASA Astrophysics Data System (ADS)
Guven, Aytac; Kisi, Ozgur
2013-10-01
This study compares the accuracy of linear genetic programming (LGP), fuzzy genetic (FG), adaptive neuro-fuzzy inference system (ANFIS), artificial neural networks (ANN) and Stephens-Stewart (SS) methods in modeling pan evaporations. Monthly climatic data including solar radiation, air temperature, relative humidity, wind speed and pan evaporation from the Antalya and Mersin stations in Turkey are used in the study. The study is composed of two parts. The first part focuses on the comparison of LGP models with the FG, ANFIS, ANN and SS models in estimating pan evaporations of the Antalya and Mersin stations, separately. From the comparison results, the LGP models are found to be better than the other models. The second part focuses on comparing the LGP models with the other models in estimating pan evaporations of the Mersin Station using both stations' inputs. The results indicate that the LGP models have better accuracy than the FG, ANFIS, ANN and SS models. It is seen that pan evaporations can be successfully estimated by the LGP method.
Accurate construction of consensus genetic maps via integer linear programming.
Wu, Yonghui; Close, Timothy J; Lonardi, Stefano
2011-01-01
We study the problem of merging genetic maps, when the individual genetic maps are given as directed acyclic graphs. The computational problem is to build a consensus map, which is a directed graph that includes and is consistent with all (or, the vast majority of) the markers in the input maps. However, when markers in the individual maps have ordering conflicts, the resulting consensus map will contain cycles. Here, we formulate the problem of resolving cycles in the context of a parsimonious paradigm that takes into account two types of errors that may be present in the input maps, namely, local reshuffles and global displacements. The resulting combinatorial optimization problem is, in turn, expressed as an integer linear program. A fast approximation algorithm is proposed, and an additional speedup heuristic is developed. Our algorithms were implemented in a software tool named MERGEMAP which is freely available for academic use. An extensive set of experiments shows that MERGEMAP consistently outperforms JOINMAP, which is the most popular tool currently available for this task, both in terms of accuracy and running time. MERGEMAP is available for download at http://www.cs.ucr.edu/~yonghui/mgmap.html. PMID:20479505
Flow discharge prediction in compound channels using linear genetic programming
NASA Astrophysics Data System (ADS)
Azamathulla, H. Md.; Zahiri, A.
2012-08-01
Flow discharge determination in rivers is one of the key elements in mathematical modelling in the design of river engineering projects. Because of the inundation of floodplains and sudden changes in river geometry, flow resistance equations are not applicable for compound channels. Therefore, many approaches have been developed for modification of flow discharge computations. Most of these methods have satisfactory results only in laboratory flumes. Due to their ability to model complex phenomena, artificial intelligence methods have recently been employed for wide applications in various fields of water engineering. Linear genetic programming (LGP), a branch of artificial intelligence methods, is able to optimise the model structure and its components and to derive an explicit equation based on the variables of the phenomena. In this paper, a precise dimensionless equation has been derived for prediction of flood discharge using LGP. The proposed model was developed using 394 published stage-discharge data sets compiled from laboratory and field studies of 30 compound channels. The results indicate that the LGP model has a better performance than the existing models.
Learning oncogenetic networks by reducing to mixed integer linear programming.
Shahrabi Farahani, Hossein; Lagergren, Jens
2013-01-01
Cancer can result from the accumulation of different types of genetic mutations, such as copy number aberrations. The data from tumors are cross-sectional and do not contain the temporal order of the genetic events. Finding the order in which the genetic events have occurred and the progression pathways is of vital importance in understanding the disease. In order to model cancer progression, we propose Progression Networks, a special case of Bayesian networks that are tailored to model disease progression. Progression networks have similarities with Conjunctive Bayesian Networks (CBNs) [1], a variation of Bayesian networks also proposed for modeling disease progression. We also describe a learning algorithm for learning Bayesian networks in general and progression networks in particular. We reduce the hard problem of learning the Bayesian and progression networks to Mixed Integer Linear Programming (MILP). MILP is a Non-deterministic Polynomial-time complete (NP-complete) problem for which very good heuristics exist. We tested our algorithm on synthetic and real cytogenetic data from renal cell carcinoma. We also compared our learned progression networks with the networks proposed in earlier publications. The software is available on the website https://bitbucket.org/farahani/diprog.
Mixed integer linear programming for maximum-parsimony phylogeny inference.
Sridhar, Srinath; Lam, Fumei; Blelloch, Guy E; Ravi, R; Schwartz, Russell
2008-01-01
Reconstruction of phylogenetic trees is a fundamental problem in computational biology. While excellent heuristic methods are available for many variants of this problem, new advances in phylogeny inference will be required if we are to continue making effective use of the rapidly growing stores of variation data now being gathered. In this paper, we present two integer linear programming (ILP) formulations to find the most parsimonious phylogenetic tree from a set of binary variation data. One method uses a flow-based formulation that can produce exponential numbers of variables and constraints in the worst case. The method has, however, proven extremely efficient in practice on datasets that are well beyond the reach of the available provably efficient methods, solving several large mtDNA and Y-chromosome instances within a few seconds and giving provably optimal results in times competitive with fast heuristics that cannot guarantee optimality. An alternative formulation establishes that the problem can be solved with a polynomial-sized ILP. We further present a web server developed based on the exponential-sized ILP that performs fast maximum parsimony inferences and serves as a front end to a database of precomputed phylogenies spanning the human genome.
A Mixed Integer Linear Program for Airport Departure Scheduling
NASA Technical Reports Server (NTRS)
Gupta, Gautam; Jung, Yoon Chul
2009-01-01
Aircraft departing from an airport are subject to numerous constraints while scheduling departure times. These constraints include wake-separation constraints for successive departures, miles-in-trail separation for aircraft bound for the same departure fixes, and time-window or prioritization constraints for individual flights. Besides these, emissions as well as increased fuel consumption due to inefficient scheduling need to be included. Addressing all the above constraints in a single framework while allowing for resequencing of the aircraft using runway queues is critical to the implementation of the Next Generation Air Transport System (NextGen) concepts. Prior work on airport departure scheduling has addressed some of the above. However, existing methods use pre-determined runway queues, and schedule aircraft from these departure queues. The source of such pre-determined queues is not explicit, and could potentially be a subjective controller input. Determining runway queues and scheduling within the same framework would potentially result in better scheduling. This paper presents a mixed integer linear program (MILP) for the departure-scheduling problem. The program takes as input the incoming sequence of aircraft for departure from a runway, along with their earliest departure times and an optional prioritization scheme based on time-window of departure for each aircraft. The program then assigns these aircraft to the available departure queues and schedules departure times, explicitly considering wake separation and departure fix restrictions to minimize total delay for all aircraft. The approach is generalized and can be used in a variety of situations, and allows for aircraft prioritization based on operational as well as environmental considerations. We present the MILP in the paper, along with benefits over the first-come-first-serve (FCFS) scheme for numerous randomized problems based on real-world settings. The MILP results in substantially reduced
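The ingredients the abstract describes can be illustrated with a minimal sketch. This is not the paper's MILP: the wake-separation matrix and flight data below are hypothetical, and brute-force enumeration over departure orders stands in for the MILP's queue assignment, which is only tractable here because the example is tiny.

```python
# Hedged sketch: total departure delay for a given runway order, given
# earliest departure times and a pairwise wake-separation matrix (seconds).
# Values are illustrative, not from the paper.
from itertools import permutations

SEP = {('H', 'S'): 120, ('H', 'H'): 90, ('S', 'H'): 60, ('S', 'S'): 60}

def total_delay(order, earliest, wclass):
    """Sum of (actual - earliest) departure times for a fixed order."""
    t, prev, delay = None, None, 0
    for i in order:
        t = earliest[i] if prev is None else max(
            earliest[i], t + SEP[(wclass[prev], wclass[i])])
        delay += t - earliest[i]
        prev = i
    return delay

earliest = [0, 10, 20]          # seconds after the hour
wclass   = ['H', 'S', 'S']      # heavy, small, small
fcfs = total_delay((0, 1, 2), earliest, wclass)
best = min(total_delay(p, earliest, wclass) for p in permutations(range(3)))
# resequencing (here, departing the heavy last) reduces total delay
```

Even in this three-aircraft toy, first-come-first-serve incurs 270 s of delay while the best resequencing incurs 180 s, mirroring the benefit the paper reports for the MILP over FCFS.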
Development and validation of a general purpose linearization program for rigid aircraft models
NASA Technical Reports Server (NTRS)
Duke, E. L.; Antoniewicz, R. F.
1985-01-01
A FORTRAN program that provides the user with a powerful and flexible tool for the linearization of aircraft models is discussed. The program LINEAR numerically determines a linear systems model using nonlinear equations of motion and a user-supplied, nonlinear aerodynamic model. The system model determined by LINEAR consists of matrices for both the state and observation equations. The program has been designed to allow easy selection and definition of the state, control, and observation variables to be used in a particular model. Also included in the report is a comparison of linear and nonlinear models for a high-performance aircraft.
NASA Technical Reports Server (NTRS)
Frost, Susan A.; Bodson, Marc; Acosta, Diana M.
2009-01-01
The Next Generation (NextGen) transport aircraft configurations being investigated as part of the NASA Aeronautics Subsonic Fixed Wing Project have more control surfaces, or control effectors, than existing transport aircraft configurations. Conventional flight control is achieved through two symmetric elevators, two antisymmetric ailerons, and a rudder. The five effectors, reduced to three command variables, produce moments along the three main axes of the aircraft and enable the pilot to control the attitude and flight path of the aircraft. The NextGen aircraft will have additional redundant control effectors to control the three moments, creating a situation where the aircraft is over-actuated and where a simple relationship no longer exists between the required effector deflections and the desired moments. NextGen flight controllers will incorporate control allocation algorithms to determine the optimal effector commands and attain the desired moments, taking into account the effector limits. Approaches to solving the problem using linear programming and quadratic programming algorithms have been proposed and tested. It is of great interest to understand their relative advantages and disadvantages and how design parameters may affect their properties. In this paper, we investigate the sensitivity of the effector commands with respect to the desired moments and show on some examples that the solutions provided using the l2 norm of quadratic programming are less sensitive than those using the l1 norm of linear programming.
A linear programming model for reducing system peak through customer load control programs
Kurucz, C.N.; Brandt, D.; Sim, S.
1996-11-01
A Linear Programming (LP) model was developed to optimize the amount of system peak load reduction through scheduling of control periods in commercial/industrial and residential load control programs at Florida Power and Light Company. The LP model can be used to determine both long- and short-term control scheduling strategies and to plan the number of customers who should be enrolled in each program. Results of applying the model to a forecasted late-1990s summer peak day load shape are presented. It is concluded that LP solutions provide a relatively inexpensive and powerful approach to planning and scheduling load control. Also, it is not necessary to model completely general scheduling of control periods in order to obtain near-best solutions to peak load reduction.
Two Computer Programs for the Statistical Evaluation of a Weighted Linear Composite.
ERIC Educational Resources Information Center
Sands, William A.
1978-01-01
Two computer programs (one batch, one interactive) are designed to provide statistics for a weighted linear combination of several component variables. Both programs provide mean, variance, standard deviation, and a validity coefficient. (Author/JKS)
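The statistics such a program reports are straightforward to sketch. The function below is a hypothetical reconstruction, not Sands' code: it forms the weighted composite, then returns its mean, variance, standard deviation, and validity coefficient (taken here to be the Pearson correlation with an external criterion, a common reading of the term).

```python
# Sketch of the statistics reported for a weighted linear composite
# y_i = sum_j w_j * x_ij.  Names and the validity-coefficient definition
# are illustrative assumptions, not taken from the original programs.
import math

def composite_stats(rows, weights, criterion):
    """Mean, variance, std dev of the composite, plus its validity
    coefficient (Pearson r with an external criterion)."""
    comp = [sum(w * x for w, x in zip(weights, row)) for row in rows]
    n = len(comp)
    mean = sum(comp) / n
    var = sum((c - mean) ** 2 for c in comp) / n
    std = math.sqrt(var)
    mc = sum(criterion) / n
    cov = sum((c - mean) * (y - mc) for c, y in zip(comp, criterion)) / n
    sy = math.sqrt(sum((y - mc) ** 2 for y in criterion) / n)
    r = cov / (std * sy)          # validity coefficient
    return mean, var, std, r

rows = [(1.0, 2.0), (2.0, 1.0), (3.0, 4.0), (4.0, 3.0)]
weights = (0.6, 0.4)
criterion = [1.0, 2.0, 3.0, 4.0]
mean, var, std, r = composite_stats(rows, weights, criterion)
```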
ERIC Educational Resources Information Center
Schmitt, M. A.; And Others
1994-01-01
Compares traditional manure application planning techniques calculated to meet agronomic nutrient needs on a field-by-field basis with plans developed using computer-assisted linear programming optimization methods. Linear programming provided the most economical and environmentally sound manure application strategy. (Contains 15 references.) (MDH)
A linear circuit analysis program with stiff systems capability
NASA Technical Reports Server (NTRS)
Cook, C. H.; Bavuso, S. J.
1973-01-01
Several existing network analysis programs have been modified and combined to employ a variable topological approach to circuit translation. Efficient numerical integration techniques are used for transient analysis.
NASA Technical Reports Server (NTRS)
Fleming, P.
1985-01-01
A design technique is proposed for linear regulators in which a feedback controller of fixed structure is chosen to minimize an integral quadratic objective function subject to the satisfaction of integral quadratic constraint functions. Application of a non-linear programming algorithm to this mathematically tractable formulation results in an efficient and useful computer-aided design tool. Particular attention is paid to computational efficiency and various recommendations are made. Two design examples illustrate the flexibility of the approach and highlight the special insight afforded to the designer.
Zhao, Yingfeng; Liu, Sanyang
2016-01-01
We present a practical branch-and-bound algorithm for globally solving the generalized linear multiplicative programming problem with multiplicative constraints. To solve the problem, a relaxation programming problem, which is equivalent to a linear program, is constructed by utilizing a new two-phase relaxation technique. In the algorithm, lower and upper bounds are simultaneously obtained by solving linear relaxation programming problems. Global convergence has been proved, and results on some sample examples and a small random experiment show that the proposed algorithm is feasible and efficient. PMID:27547676
How Relevant Is Linear, Dichotomous Reasoning to Ongoing Program Evaluation?
ERIC Educational Resources Information Center
Nguyen, Tuan D.
1978-01-01
Criticizes Strasser and Deniston's post-planned evaluation (TM 504 253) because of their: (1) emphasis on evaluation research; (2) imposition of experimental rigor; (3) inapplicability to human service projects; (4) inattention to congruity between the program and its environment; (5) distinct characteristics of program evaluation; and (6)…
A Linear Programming Solution to the Faculty Assignment Problem
ERIC Educational Resources Information Center
Breslaw, Jon A.
1976-01-01
Investigates the problem of assigning faculty to courses at a university. A program is developed that is both efficient, in that integer programming is not required, and effective, in that it facilitates interaction by administration in determining the optimal solution. The results of some empirical tests are also reported. (Author)
DOE facilities programs and systems interaction with linear and non-linear techniques
Lin, C.W.
1991-01-01
This book presents the proceedings of a symposium on DOE facilities programs held at the 1991 Pressure Vessels and Piping Conference. Topics include: seismic response analysis at DOE reactors, the reactor cooling system at the Savannah River Site, structural analysis of the P reactor at the Savannah River Site, and dynamic analysis of a postulated hydrogen burn in a waste storage tank.
NASA Technical Reports Server (NTRS)
1979-01-01
The computer program Linear SCIDNT which evaluates rotorcraft stability and control coefficients from flight or wind tunnel test data is described. It implements the maximum likelihood method to maximize the likelihood function of the parameters based on measured input/output time histories. Linear SCIDNT may be applied to systems modeled by linear constant-coefficient differential equations. This restriction in scope allows the application of several analytical results which simplify the computation and improve its efficiency over the general nonlinear case.
On-Off Minimum-Time Control With Limited Fuel Usage: Global Optima Via Linear Programming
DRIESSEN,BRIAN
1999-09-01
A method for finding a global optimum to the on-off minimum-time control problem with limited fuel usage is presented. Each control can take on only three possible values: maximum, zero, or minimum. The simplex method naturally yields such a solution for the reformulation presented herein because it always produces an extreme-point solution to the linear program. Numerical examples for the benchmark linear flexible system are presented.
User's manual for interfacing a leading edge, vortex rollup program with two linear panel methods
NASA Technical Reports Server (NTRS)
Desilva, B. M. E.; Medan, R. T.
1979-01-01
Sufficient instructions are provided for interfacing the Mangler-Smith, leading edge vortex rollup program with a vortex lattice (POTFAN) method and an advanced higher order, singularity linear analysis for computing the vortex effects for simple canard wing combinations.
The Effect of Data Scaling on Dual Prices and Sensitivity Analysis in Linear Programs
ERIC Educational Resources Information Center
Adlakha, V. G.; Vemuganti, R. R.
2007-01-01
In many practical situations scaling the data is necessary to solve linear programs. This note explores the relationships in translating the sensitivity analysis between the original and the scaled problems.
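The note's central relationship can be seen with a minimal numeric sketch, assuming a one-variable LP of my own construction (not an example from the paper): for max c·x subject to a·x ≤ b, the shadow (dual) price of the constraint is c/a, so scaling the constraint row by k divides the reported dual price by k even though the optimal solution is unchanged.

```python
# Toy illustration of how row scaling changes a dual price.
# For  max c*x  s.t.  a*x <= b, x >= 0  (c, a, b > 0), the optimum is
# x = b/a, and the dual price is the derivative of the optimal value
# with respect to the right-hand side b.
def shadow_price(c, a, b, eps=1e-6):
    """Estimate the dual price by perturbing the right-hand side."""
    obj = lambda rhs: c * (rhs / a)   # optimal value of the 1-variable LP
    return (obj(b + eps) - obj(b)) / eps

p  = shadow_price(c=3.0, a=2.0, b=10.0)     # original row:  2x <= 10
pk = shadow_price(c=3.0, a=10.0, b=50.0)    # same row scaled by k = 5
# p = 1.5, pk = 0.3: scaling the row by 5 divides the dual price by 5
```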
The MARX Modulator Development Program for the International Linear Collider
Leyh, G.E.; /SLAC
2006-06-12
The ILC Marx Modulator Development Program at SLAC is working towards developing a full-scale ILC Marx "Reference Design" modulator prototype, with the goal of significantly reducing the size and cost of the ILC modulator while improving overall modulator efficiency and availability. The ILC Reference Design prototype will provide a proof-of-concept model to industry in advance of Phase II SBIR funding, and also allow operation of the new 10 MW L-Band Klystron prototypes immediately upon their arrival at SLAC.
Users manual for linear Time-Varying Helicopter Simulation (Program TVHIS)
NASA Technical Reports Server (NTRS)
Burns, M. R.
1979-01-01
A linear time-varying helicopter simulation program (TVHIS) is described. The program is designed as a realistic yet efficient helicopter simulation. It is based on a linear time-varying helicopter model which includes rotor, actuator, and sensor models, as well as a simulation of flight computer logic. The TVHIS can generate a mean trajectory simulation along a nominal trajectory, or propagate covariance of helicopter states, including rigid-body, turbulence, control command, controller states, and rigid-body state estimates.
Large Scale Non-Linear Programming for PDE Constrained Optimization
VAN BLOEMEN WAANDERS, BART G.; BARTLETT, ROSCOE A.; LONG, KEVIN R.; BOGGS, PAUL T.; SALINGER, ANDREW G.
2002-10-01
Three years of large-scale PDE-constrained optimization research and development are summarized in this report. We have developed an optimization framework for 3 levels of SAND optimization and developed a powerful PDE prototyping tool. The optimization algorithms have been interfaced and tested on CVD problems using a chemically reacting fluid flow simulator resulting in an order of magnitude reduction in compute time over a black box method. Sandia's simulation environment is reviewed by characterizing each discipline and identifying a possible target level of optimization. Because SAND algorithms are difficult to test on actual production codes, a symbolic simulator (Sundance) was developed and interfaced with a reduced-space sequential quadratic programming framework (rSQP++) to provide a PDE prototyping environment. The power of Sundance/rSQP++ is demonstrated by applying optimization to a series of different PDE-based problems. In addition, we show the merits of SAND methods by comparing seven levels of optimization for a source-inversion problem using Sundance and rSQP++. Algorithmic results are discussed for hierarchical control methods. The design of an interior point quadratic programming solver is presented.
FSILP: fuzzy-stochastic-interval linear programming for supporting municipal solid waste management.
Li, Pu; Chen, Bing
2011-04-01
Although many studies on municipal solid waste (MSW) management were conducted under uncertain conditions in which fuzzy, stochastic, and interval information coexist, conventional linear programming approaches that integrate the fuzzy method with the other two were inefficient. In this study, a fuzzy-stochastic-interval linear programming (FSILP) method is developed by integrating Nguyen's method with conventional linear programming to support municipal solid waste management. Nguyen's method was used to convert the fuzzy and fuzzy-stochastic linear programming problems into conventional linear programs, by measuring the attainment values of fuzzy numbers and/or fuzzy random variables, as well as the superiority and inferiority between triangular fuzzy numbers/triangular fuzzy-stochastic variables. The developed method can effectively tackle uncertainties described in terms of probability density functions, fuzzy membership functions, and discrete intervals. Moreover, the method improves upon the conventional interval fuzzy programming and two-stage stochastic programming approaches, achieving comparable capabilities with fewer constraints and significantly reduced computation time. The developed model was applied to a case study of a municipal solid waste management system in a city. The results indicated that reasonable solutions had been generated. The solution can help quantify the relationship between changes in system cost and the uncertainties, which could support further analysis of tradeoffs between waste management cost and system failure risk.
The Computer Program LIAR for Beam Dynamics Calculations in Linear Accelerators
Assmann, R.W.; Adolphsen, C.; Bane, K.; Raubenheimer, T.O.; Siemann, R.H.; Thompson, K.; /SLAC
2011-08-26
Linear accelerators are the central components of the proposed next generation of linear colliders. They need to provide acceleration of up to 750 GeV per beam while maintaining very small normalized emittances. Standard simulation programs, mainly developed for storage rings, do not meet the specific requirements of high-energy linear accelerators. We present a new program, LIAR ("LInear Accelerator Research code"), that includes wakefield effects, a 6D coupled beam description, specific optimization algorithms, and other advanced features. Its modular structure allows it to be used and extended easily for different purposes. The program is available for UNIX workstations and Windows PCs. It can be applied to a broad range of accelerators. We present examples of simulations for the SLC and NLC.
Linearized Programming of Memristors for Artificial Neuro-Sensor Signal Processing.
Yang, Changju; Kim, Hyongsuk
2016-01-01
A linearized programming method for memristor-based neural weights is proposed. The memristor is known as an ideal element for implementing a neural synapse due to its embedded functions of analog memory and analog multiplication. Its resistance variation under a voltage input is generally a nonlinear function of time. Linearizing the memristance variation with time is very important for the ease of memristor programming. In this paper, a method utilizing an anti-serial architecture for linear programming is proposed. The anti-serial architecture is composed of two memristors with opposite polarities. It linearizes the variation of memristance due to the complementary actions of the two memristors. For programming a memristor, an additional memristor with opposite polarity is employed. The linearization effect of weight programming in an anti-serial architecture is investigated, and the memristor bridge synapse, which is built with two sets of the anti-serial memristor architecture, is taken as an application example of the proposed method. Simulations are performed with memristors of both the linear drift model and a nonlinear model. PMID:27548186
Microsoft Excel Sensitivity Analysis for Linear and Stochastic Program Feed Formulation
Technology Transfer Automated Retrieval System (TEKTRAN)
Sensitivity analysis is a part of mathematical programming solutions and is used in making nutritional and economic decisions for a given feed formulation problem. The terms, shadow price and reduced cost, are familiar linear program (LP) terms to feed formulators. Because of the nonlinear nature of...
NASA Technical Reports Server (NTRS)
Bowman, L. M.
1984-01-01
An interactive steady-state frequency response computer program with graphics is documented. Single or multiple forces may be applied to the structure, using a modal superposition approach to calculate response. The method can be applied to linear, proportionally damped structures in which the damping may be viscous or structural. The theoretical approach and program organization are described. Example problems, user instructions, and a sample interactive session are given to demonstrate the program's capability in solving a variety of problems.
Object matching using a locally affine invariant and linear programming techniques.
Li, Hongsheng; Huang, Xiaolei; He, Lei
2013-02-01
In this paper, we introduce a new matching method based on a novel locally affine-invariant geometric constraint and linear programming techniques. To model and solve the matching problem in a linear programming formulation, all geometric constraints should be able to be exactly or approximately reformulated into a linear form. This is a major difficulty for this kind of matching algorithm. We propose a novel locally affine-invariant constraint which can be exactly linearized and requires far fewer auxiliary variables than other linear programming-based methods do. The key idea behind it is that each point in the template point set can be exactly represented by an affine combination of its neighboring points, whose weights can be solved easily by least squares. Errors of reconstructing each matched point using such weights are used to penalize the disagreement of geometric relationships between the template points and the matched points. The resulting overall objective function can be solved efficiently by linear programming techniques. Our experimental results on both rigid and nonrigid object matching show the effectiveness of the proposed algorithm.
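The key idea, representing a point exactly as an affine combination of its neighbors, can be sketched in a few lines. The example below is our own (points and neighbor choice are hypothetical, and with three neighbors in 2D the affine weights are unique, so a small exact solve replaces the paper's least squares):

```python
# Sketch of the locally affine-invariant representation: find weights w
# with sum(w) = 1 such that p = w1*n1 + w2*n2 + w3*n3.  These weights are
# preserved by any affine map of the whole point set.
def solve3(A, b):
    """Solve a 3x3 linear system by Gauss-Jordan elimination with pivoting."""
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(3):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [mr - f * mc for mr, mc in zip(M[r], M[col])]
    return [M[i][3] / M[i][i] for i in range(3)]

def affine_weights(p, n1, n2, n3):
    """Affine-combination weights of p with respect to three neighbors."""
    A = [[n1[0], n2[0], n3[0]],
         [n1[1], n2[1], n3[1]],
         [1.0,   1.0,   1.0]]           # the sum-to-one (affine) constraint
    return solve3(A, [p[0], p[1], 1.0])

# the centroid-like point (1,1) of this triangle gets weights (1/3, 1/3, 1/3)
w = affine_weights((1.0, 1.0), (0.0, 0.0), (3.0, 0.0), (0.0, 3.0))
```

In the matching formulation, the reconstruction error of each matched point under these fixed weights is what enters the linear objective.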
Ren, Jingzheng; Dong, Liang; Sun, Lu; Goodsite, Michael Evan; Tan, Shiyu; Dong, Lichun
2015-01-01
The aim of this work was to develop a model for optimizing the life cycle cost of biofuel supply chain under uncertainties. Multiple agriculture zones, multiple transportation modes for the transport of grain and biofuel, multiple biofuel plants, and multiple market centers were considered in this model, and the price of the resources, the yield of grain and the market demands were regarded as interval numbers instead of constants. An interval linear programming was developed, and a method for solving interval linear programming was presented. An illustrative case was studied by the proposed model, and the results showed that the proposed model is feasible for designing biofuel supply chain under uncertainties. PMID:25827247
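The interval-programming idea, optimizing against interval-valued rather than constant data, can be shown with a deliberately tiny example of our own (not the paper's supply-chain model): with an interval unit cost and an interval demand, the optimal cost of a one-variable LP itself ranges over an interval obtained from the extreme scenarios.

```python
# Toy interval-LP illustration:  min c*x  s.t.  x >= d, x >= 0,
# with interval cost c in [c_lo, c_hi] and interval demand d in [d_lo, d_hi].
# The optimal cost is c*d, so its range comes from the extreme cases.
def interval_min_cost(c_lo, c_hi, d_lo, d_hi):
    """Best- and worst-case optimal cost over the interval data."""
    # best case: cheapest cost with lowest demand;
    # worst case: dearest cost with highest demand
    return (c_lo * d_lo, c_hi * d_hi)

lo, hi = interval_min_cost(2.0, 3.0, 100.0, 120.0)
# the decision maker sees an optimal-cost interval rather than a point value
```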
Minicomputer linear programming analysis yields options for gasoline-blending decisions
Arnold, V.E.
1984-02-13
Neither a large mainframe computer nor extensive mathematics background is now necessary to take advantage of linear programs in evaluating gasoline blending options. A minicomputer can handle the task. This article presents a general algorithm for performing linear programming (LP) analysis by the simplex method on a Radio Shack TRS-80 Model I or III (Level Basic) minicomputer with 16K of random access memory (RAM). Application of this general algorithm to gasoline blending studies is presented in this article by an outline of steps necessary for data input and evaluation of several cases to decide between various investment options.
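The article's simplex routine is too long to reproduce here; as a minimal stand-in, the sketch below solves a tiny two-stock blending LP (hypothetical profits, octane numbers, and volume limit) by enumerating constraint-intersection vertices, which for two variables visits the same extreme points the simplex method would.

```python
# Hedged sketch: a 2-variable LP solver by vertex enumeration, applied to a
# toy gasoline-blending problem.  All numbers are illustrative assumptions.
from itertools import combinations

def solve_2var_lp(c, A, b):
    """Maximize c.x subject to A x <= b, x >= 0 (exactly 2 variables)."""
    rows = [list(r) + [rhs] for r, rhs in zip(A, b)]
    rows += [[-1.0, 0.0, 0.0], [0.0, -1.0, 0.0]]   # axes: x1 >= 0, x2 >= 0
    best = None
    for (a1, b1, r1), (a2, b2, r2) in combinations(rows, 2):
        det = a1 * b2 - a2 * b1
        if abs(det) < 1e-12:
            continue                                # parallel constraints
        x = ((r1 * b2 - r2 * b1) / det, (a1 * r2 - a2 * r1) / det)
        # keep the vertex only if it satisfies every constraint
        if all(ar * x[0] + br * x[1] <= rr + 1e-9 for ar, br, rr in rows):
            val = c[0] * x[0] + c[1] * x[1]
            if best is None or val > best[0]:
                best = (val, x)
    return best

profit = (3.0, 2.0)              # $/bbl for blend stocks 1 and 2
A = [[1.0, 1.0],                 # volume:  x1 + x2 <= 100 bbl
     [3.0, -3.0]]                # octane:  92*x1 + 98*x2 >= 95*(x1 + x2)
b = [100.0, 0.0]
val, x = solve_2var_lp(profit, A, b)
# the octane constraint forces an equal split: x = (50, 50), profit 250
```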
A novel recurrent neural network with finite-time convergence for linear programming.
Liu, Qingshan; Cao, Jinde; Chen, Guanrong
2010-11-01
In this letter, a novel recurrent neural network based on the gradient method is proposed for solving linear programming problems. Finite-time convergence of the proposed neural network is proved by using the Lyapunov method. Compared with the existing neural networks for linear programming, the proposed neural network is globally convergent to exact optimal solutions in finite time, which is remarkable and rare in the literature of neural networks for optimization. Some numerical examples are given to show the effectiveness and excellent performance of the new recurrent neural network.
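A discrete-time flavor of such gradient-based neural dynamics can be sketched briefly. This is not the letter's network: it is a penalty-gradient iteration of our own, projected onto the nonnegative orthant, which only approaches the exact optimum as the penalty weight grows (the letter's contribution is precisely a dynamics with exact finite-time convergence).

```python
# Hedged sketch: projected penalty-gradient dynamics for the LP
#   min c.x  s.t.  A x = b, x >= 0,
# iterating  x <- max(0, x - lr * grad(c.x + (K/2)*||Ax - b||^2)).
def lp_gradient_flow(c, A, b, K=1000.0, lr=5e-4, steps=10000):
    n = len(c)
    x = [1.0 / n] * n
    for _ in range(steps):
        # residual of the equality constraints
        res = [sum(A[i][j] * x[j] for j in range(n)) - b[i]
               for i in range(len(b))]
        # gradient of the penalized objective
        g = [c[j] + K * sum(A[i][j] * res[i] for i in range(len(b)))
             for j in range(n)]
        x = [max(0.0, xj - lr * gj) for xj, gj in zip(x, g)]  # projection
    return x

# min x1 + 2*x2  s.t.  x1 + x2 = 1, x >= 0  has the exact optimum (1, 0)
x = lp_gradient_flow(c=[1.0, 2.0], A=[[1.0, 1.0]], b=[1.0])
```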
A New Bound for the Ratio Between the 2-Matching Problem and Its Linear Programming Relaxation
Boyd, Sylvia; Carr, Robert
1999-07-28
Consider the 2-matching problem defined on the complete graph, with edge costs which satisfy the triangle inequality. We prove that the value of a minimum cost 2-matching is bounded above by 4/3 times the value of its linear programming relaxation, the fractional 2-matching problem. This lends credibility to a long-standing conjecture that the optimal value for the traveling salesman problem is bounded above by 4/3 times the value of its linear programming relaxation, the subtour elimination problem.
NASA Technical Reports Server (NTRS)
Mitchell, C. E.; Eckert, K.
1979-01-01
A program for predicting the linear stability of liquid propellant rocket engines is presented. The underlying model assumptions and analytical steps necessary for understanding the program and its input and output are also given. The rocket engine is modeled as a right circular cylinder with a concentrated combustion zone at the injector, a nozzle, finite mean flow, and a combustion response described by either an acoustic admittance or the sensitive time-lag theory. The resulting partial differential equations are combined into two governing integral equations by the use of the Green's function method. These equations are solved using a successive-approximation technique for the small-amplitude (linear) case. The computational method used as well as the various user options available are discussed. Finally, a flow diagram, sample input and output for a typical application, and a complete program listing for program MODULE are presented.
Moryakov, A. V.; Pylyov, S. S.
2012-12-15
This paper presents the formulation of the problem and the methodical approach for solving large systems of linear differential equations describing nonstationary processes with the use of CUDA technology; this approach is implemented in the ANGEL program. Results for a test problem on transport of radioactive products over loops of a nuclear power plant are given. The possibilities for the use of the ANGEL program for solving various problems that simulate arbitrary nonstationary processes are discussed.
Iterative generation of higher-order nets in polynomial time using linear programming.
Roy, A; Mukhopadhyay, S
1997-01-01
This paper presents an algorithm for constructing and training a class of higher-order perceptrons for classification problems. The method uses linear programming models to construct and train the net. Its polynomial time complexity is proven and computational results are provided for several well-known problems. In all cases, very small nets were created compared to those reported in other computational studies.
Land Use and Soil Erosion. A National Linear Programming Model. Technical Bulletin Number 1742.
ERIC Educational Resources Information Center
Huang, Wen-Yuan; And Others
This technical bulletin documents a model, the Natural Resource Linear Programming (NRLP) model, capable of measuring the effects of land use restrictions imposed as conservation measures. The primary use for the model is to examine the government expenditures required to compensate farmers for retiring potentially erodible private cropland. The…
ERIC Educational Resources Information Center
Findorff, Irene K.
This document summarizes the results of a project at Tulane University that was designed to adapt, test, and evaluate a computerized information and menu planning system utilizing linear programing techniques for use in school lunch food service operations. The objectives of the menu planning were to formulate menu items into a palatable,…
POLARCALC: A program for calculating the linear-polarization factor using an area detector
Molodenskii, D. S.; Sul’yanov, S. N.
2015-05-15
A graphical interface program has been developed to determine the linear-polarization factor of a monochromatic X-ray beam when analyzing scattering from an amorphous object. An area coordinate detector is used in measurements. The change in intensity over the azimuthal angle at a constant diffraction angle is interpolated by a theoretical cosine dependence, which contains the polarization factor.
Linear circuit analysis program for IBM 1620 Monitor 2, 1311/1443 data processing system /CIRCS/
NASA Technical Reports Server (NTRS)
Hatfield, J.
1967-01-01
CIRCS is a modification of the IBSNAP Circuit Analysis Program for use on smaller systems. It retains the basic dc analysis, transient analysis, and FORTRAN 2 formats. It can be used on the IBM 1620/1311 Monitor I Mod 5 system and solves a linear network containing 15 nodes and 45 branches.
Secret Message Decryption: Group Consulting Projects Using Matrices and Linear Programming
ERIC Educational Resources Information Center
Gurski, Katharine F.
2009-01-01
We describe two short group projects for finite mathematics students that incorporate matrices and linear programming into fictional consulting requests presented as a letter to the students. The students are required to use mathematics to decrypt secret messages in one project involving matrix multiplication and inversion. The second project…
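Decryption by matrix multiplication and inversion, as in the first project, is conventionally illustrated with a Hill cipher. The article's actual messages and key are not given, so the 2×2 key below is purely illustrative; it is chosen so its determinant is invertible mod 26.

```python
import numpy as np

# 2x2 Hill cipher over the 26-letter alphabet: encryption multiplies
# letter-pair vectors by a key matrix mod 26; decryption uses the
# matrix inverse mod 26. Assumes even-length uppercase text.
KEY = np.array([[3, 3],
                [2, 5]])  # det = 9, gcd(9, 26) = 1, so the key is invertible

def mod_inv(a, m=26):
    return pow(int(a), -1, m)     # modular inverse (Python 3.8+)

def inv_key(K, m=26):
    det = int(round(np.linalg.det(K))) % m
    adj = np.array([[ K[1, 1], -K[0, 1]],
                    [-K[1, 0],  K[0, 0]]])
    return (mod_inv(det, m) * adj) % m

def crypt(text, K):
    nums = [ord(c) - 65 for c in text]
    out = []
    for i in range(0, len(nums), 2):
        out.extend((K @ np.array(nums[i:i + 2])) % 26)
    return "".join(chr(int(n) + 65) for n in out)

cipher = crypt("HELP", KEY)            # encrypt with the key
plain = crypt(cipher, inv_key(KEY))    # decrypt with its inverse mod 26
```

The same `crypt` routine serves for both directions because decryption is just multiplication by the inverse key.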
ERIC Educational Resources Information Center
Dyehouse, Melissa; Bennett, Deborah; Harbor, Jon; Childress, Amy; Dark, Melissa
2009-01-01
Logic models are based on linear relationships between program resources, activities, and outcomes, and have been used widely to support both program development and evaluation. While useful in describing some programs, the linear nature of the logic model makes it difficult to capture the complex relationships within larger, multifaceted…
NASA Technical Reports Server (NTRS)
Cooke, C. H.
1975-01-01
STICAP (Stiff Circuit Analysis Program) is a FORTRAN 4 computer program written for the CDC-6400-6600 computer series and the SCOPE 3.0 operating system. It provides the circuit analyst with a tool for automatically computing the transient and frequency responses of large linear time-invariant networks, both stiff and nonstiff (the algorithms and numerical integration techniques are described). The circuit-description and user-program input language is engineer oriented, simplifying the task of using the program. Engineering theories underlying STICAP are examined. A user's manual is included which explains user interaction with the program and gives results of typical circuit design applications. The program structure is also depicted from a systems programmer's viewpoint, and flow charts and other software documentation are given.
NASA Technical Reports Server (NTRS)
Snow, L. S.; Kuhn, A. E.
1975-01-01
Previous error analyses conducted by the Guidance and Dynamics Branch of NASA have used the Guidance Analysis Program (GAP) as the trajectory simulation tool. Plans are made to conduct all future error analyses using the Space Vehicle Dynamics Simulation (SVDS) program. A study was conducted to compare the inertial measurement unit (IMU) error simulations of the two programs. Results of the GAP/SVDS comparison are presented and problem areas encountered while attempting to simulate IMU errors, vehicle performance uncertainties and environmental uncertainties using SVDS are defined. An evaluation of the SVDS linear error analysis capability is also included.
STAR adaptation of QR algorithm. [program for solving over-determined systems of linear equations]
NASA Technical Reports Server (NTRS)
Shah, S. N.
1981-01-01
The QR algorithm, used on a serial computer and executed on the Control Data Corporation 6000 Computer, was adapted to execute efficiently on the Control Data STAR-100 computer. How the scalar program was adapted for the STAR-100 and why these adaptations yielded an efficient STAR program is described. Program listings of the old scalar version and the vectorized SL/1 version are presented in the appendices. Execution times for the two versions, applied to the same system of linear equations, are compared.
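The serial computation being vectorized, the least-squares solution of an over-determined system via QR factorization, can be sketched as follows; the problem sizes and SL/1 vector code of the report are not reproduced, and the random test matrix is illustrative.

```python
import numpy as np

# Least-squares solution of an over-determined system A x ~= b via QR:
# A = Q R  =>  x = R^{-1} (Q^T b), with R small and upper triangular.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 3))       # 20 equations, 3 unknowns
x_true = np.array([1.0, -2.0, 0.5])
b = A @ x_true                         # consistent right-hand side

Q, R = np.linalg.qr(A)                 # reduced QR: Q is 20x3, R is 3x3
x_qr = np.linalg.solve(R, Q.T @ b)     # back-substitution on the 3x3 system
```

The QR route avoids forming the normal equations AᵀA, whose condition number is the square of A's.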
User's Guide to the Weighted-Multiple-Linear Regression Program (WREG version 1.0)
Eng, Ken; Chen, Yin-Yu; Kiang, Julie E.
2009-01-01
Streamflow is not measured at every location in a stream network. Yet hydrologists, State and local agencies, and the general public still seek to know streamflow characteristics, such as mean annual flow or flood flows with different exceedance probabilities, at ungaged basins. The goals of this guide are to introduce and familiarize the user with the weighted multiple-linear regression (WREG) program, and to also provide the theoretical background for program features. The program is intended to be used to develop a regional estimation equation for streamflow characteristics that can be applied at an ungaged basin, or to improve the corresponding estimate at continuous-record streamflow gages with short records. The regional estimation equation results from a multiple-linear regression that relates the observable basin characteristics, such as drainage area, to streamflow characteristics.
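The weighted regression at the heart of WREG can be sketched with the weighted normal equations; the basin characteristics, weights, and record-length weighting schemes used by the actual program are not reproduced here, and the small data set below is invented for illustration.

```python
import numpy as np

# Weighted multiple-linear regression: beta minimizes
#   sum_i w_i (y_i - X_i beta)^2,
# solved via the weighted normal equations (X^T W X) beta = X^T W y.
X = np.column_stack([np.ones(5), [1.0, 2.0, 3.0, 4.0, 5.0]])  # intercept + one regressor
y = np.array([2.1, 3.9, 6.2, 7.8, 10.1])
w = np.array([1.0, 1.0, 2.0, 2.0, 4.0])   # e.g. sites with longer records weighted more

W = np.diag(w)
beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
```

With unit weights this reduces to ordinary least squares; WREG's contribution is the principled construction of W from record lengths and cross-correlations.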
Genetic programming as an analytical tool for non-linear dielectric spectroscopy.
Woodward, A M; Gilbert, R J; Kell, D B
1999-05-01
By modelling the non-linear effects of membranous enzymes on an applied oscillating electromagnetic field using supervised multivariate analysis methods, Non-Linear Dielectric Spectroscopy (NLDS) has previously been shown to produce quantitative information that is indicative of the metabolic state of various organisms. The use of Genetic Programming (GP) for the multivariate analysis of NLDS data recorded from yeast fermentations is discussed, and GPs are compared with previous results using Partial Least Squares (PLS) and Artificial Neural Nets (NN). GP considerably outperforms these methods, both in terms of the precision of the predictions and their interpretability.
Digital program for solving the linear stochastic optimal control and estimation problem
NASA Technical Reports Server (NTRS)
Geyser, L. C.; Lehtinen, B.
1975-01-01
A computer program is described which solves the linear stochastic optimal control and estimation (LSOCE) problem by using a time-domain formulation. The LSOCE problem is defined as that of designing controls for a linear time-invariant system which is disturbed by white noise in such a way as to minimize a performance index which is quadratic in state and control variables. The LSOCE problem and solution are outlined; brief descriptions are given of the solution algorithms, and complete descriptions of each subroutine, including usage information and digital listings, are provided. A test case is included, as well as information on the IBM 7090-7094 DCS time and storage requirements.
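The regulator half of the LSOCE problem can be sketched with a modern library call; the program's own algorithms and the estimator (Kalman filter) side are not reproduced. The double-integrator plant and unit weights below are a textbook example, not a case from the report.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# LQ regulator for dx/dt = A x + B u minimizing the quadratic index
# J = integral of (x'Qx + u'Ru) dt: solve the algebraic Riccati equation
# for P, then the optimal feedback is u = -K x with K = R^{-1} B' P.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])      # double integrator
B = np.array([[0.0],
              [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])

P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)          # optimal state-feedback gain
closed_loop = A - B @ K
```

For this plant the Riccati solution gives K = [1, √3], and the closed loop is stable.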
An application of a linear programing technique to nonlinear minimax problems
NASA Technical Reports Server (NTRS)
Schiess, J. R.
1973-01-01
A differential correction technique for solving nonlinear minimax problems is presented. The basis of the technique is a linear programing algorithm which solves the linear minimax problem. By linearizing the original nonlinear equations about a nominal solution, both nonlinear approximation and estimation problems using the minimax norm may be solved iteratively. Some consideration is also given to improving convergence and to the treatment of problems with more than one measured quantity. A sample problem is treated with this technique and with the least-squares differential correction method to illustrate the properties of the minimax solution. The results indicate that for the sample approximation problem, the minimax technique provides better estimates than the least-squares method if a sufficient amount of data is used. For the sample estimation problem, the minimax estimates are better if the mathematical model is incomplete.
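The linear minimax subproblem that anchors the technique, minimizing the largest residual of a linear system, is itself an LP and can be sketched directly; the nonlinear differential-correction loop of the paper is not reproduced, and the small line-fitting data set is illustrative.

```python
import numpy as np
from scipy.optimize import linprog

# Linear minimax (Chebyshev) fit: minimize t subject to
#   -t <= (A x - b)_i <= t  for every residual. Variables are [x..., t].
A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])      # fit y ~= c0 + c1*s at s = 0, 1, 2, 3
b = np.array([0.0, 1.1, 1.9, 3.0])

m, n = A.shape
c = np.zeros(n + 1); c[-1] = 1.0          # objective: minimize t
A_ub = np.vstack([np.hstack([ A, -np.ones((m, 1))]),
                  np.hstack([-A, -np.ones((m, 1))])])
b_ub = np.concatenate([b, -b])
res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(None, None)] * n + [(0, None)])
x_minimax, t_opt = res.x[:n], res.x[-1]
```

At the optimum, t equals the largest absolute residual, the quantity the minimax norm controls.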
LDRD final report on massively-parallel linear programming : the parPCx system.
Parekh, Ojas; Phillips, Cynthia Ann; Boman, Erik Gunnar
2005-02-01
This report summarizes the research and development performed from October 2002 to September 2004 at Sandia National Laboratories under the Laboratory-Directed Research and Development (LDRD) project ''Massively-Parallel Linear Programming''. We developed a linear programming (LP) solver designed to use a large number of processors. LP is the optimization of a linear objective function subject to linear constraints. Companies and universities have expended huge efforts over decades to produce fast, stable serial LP solvers. Previous parallel codes run on shared-memory systems and have little or no distribution of the constraint matrix. We have seen no reports of general LP solver runs on large numbers of processors. Our parallel LP code is based on an efficient serial implementation of Mehrotra's interior-point predictor-corrector algorithm (PCx). The computational core of this algorithm is the assembly and solution of a sparse linear system. We have substantially rewritten the PCx code and based it on Trilinos, the parallel linear algebra library developed at Sandia. Our interior-point method can use either direct or iterative solvers for the linear system. To achieve a good parallel data distribution of the constraint matrix, we use a (pre-release) version of a hypergraph partitioner from the Zoltan partitioning library. We describe the design and implementation of our new LP solver called parPCx and give preliminary computational results. We summarize a number of issues related to efficient parallel solution of LPs with interior-point methods, including data distribution, numerical stability, and solving the core linear system using both direct and iterative methods. We describe a number of applications of LP specific to US Department of Energy mission areas, and we summarize our efforts to integrate parPCx (and parallel LP solvers in general) into Sandia's massively-parallel integer programming solver PICO (Parallel Integer and Combinatorial Optimizer).
Refining and end use study of coal liquids II - linear programming analysis
Lowe, C.; Tam, S.
1995-12-31
A DOE-funded study is underway to determine the optimum refinery processing schemes for producing transportation fuels that will meet CAAA regulations from direct and indirect coal liquids. The study consists of three major parts: pilot plant testing of critical upgrading processes, linear programming analysis of different processing schemes, and engine emission testing of final products. Currently, fractions of a direct coal liquid produced from bituminous coal are being tested in a sequence of pilot plant upgrading processes. This work is discussed in a separate paper. The linear programming model, which is the subject of this paper, has been completed for the petroleum refinery and is being modified to handle coal liquids based on the pilot plant test results. Preliminary coal liquid evaluation studies indicate that, if a refinery expansion scenario is adopted, then the marginal value of the coal liquid (over the base petroleum crude) is $3-4/bbl.
Linear programming analysis of VA/Q distributions: limits on central moments.
Kapitan, K S; Wagner, P D
1986-05-01
Linear programming examines the boundaries of infinite sets. We used this method with the multiple inert gas elimination technique to examine the central moments and arterial blood gases of the infinite family of ventilation-perfusion (VA/Q) distributions that are compatible with a measured inert gas retention set. A linear program was applied with Monte Carlo error simulation to theoretical retention data, and 95% confidence intervals were constructed for the first three moments (mean, dispersion, and skew) and for the arterial PO2 and PCO2 of all compatible blood flow distributions. Six typical cases were studied. The results demonstrate narrow confidence intervals for both the lower moments and the predicted arterial blood gases of all test cases, which widen as moment number or error increases. We conclude that the blood gas composition and basic structure of all compatible VA/Q distributions are tightly constrained and that even subtle changes in this structure, as may occur experimentally, can be identified.
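The bounding idea, using LPs to find the extreme values of a moment over all distributions consistent with linear measurements, can be sketched on a toy problem. The grid, "retention" kernel, and reference distribution below are invented stand-ins, not the MIGET equations of the paper.

```python
import numpy as np
from scipy.optimize import linprog

# LP bounds on the mean of an unknown discrete distribution p over a
# fixed grid, constrained only by one linear measurement r = k . p
# (a stand-in for the inert-gas retention equations), sum(p) = 1, p >= 0.
grid = np.linspace(0.0, 4.0, 9)
p_true = np.exp(-(grid - 1.0) ** 2)
p_true /= p_true.sum()                     # a "true" distribution for reference
retention = 1.0 / (1.0 + grid)             # toy retention kernel

A_eq = np.vstack([np.ones_like(grid), retention])
b_eq = np.array([1.0, retention @ p_true])  # measurements generated from p_true

lo = linprog( grid, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * grid.size)
hi = linprog(-grid, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * grid.size)
mean_lo, mean_hi = lo.fun, -hi.fun          # extreme means over all compatible p
```

Every distribution compatible with the measurements has its mean inside [mean_lo, mean_hi]; adding measurement rows tightens the bracket, which is the paper's confidence-interval machinery in miniature.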
Annular precision linear shaped charge flight termination system for the ODES program
Vigil, M.G.; Marchi, D.L.
1994-06-01
The development of an Annular Precision Linear Shaped Charge (APLSC) Flight Termination System (FTS) for the Operation and Deployment Experiment Simulator (ODES) program is discussed and presented in this report. The Precision Linear Shaped Charge (PLSC) concept was recently developed at Sandia. The APLSC component is designed to produce a copper jet that cuts four-inch-diameter holes in each of two spherical tanks, one containing fuel and the other an oxidizer that are hypergolic when mixed, to terminate the ODES vehicle flight if necessary. The FTS includes two detonators, six Mild Detonating Fuse (MDF) transfer lines, a detonator block, a detonation transfer manifold, and the APLSC component. PLSCs have previously been designed as ring components in which the jet penetration axis points either directly away from or toward the center of the ring assembly; typically, such components are designed to cut metal cylinders from the outside inward or from the inside outward. The ODES program, however, requires an annular linear shaped charge. The LESCA (Linear Shaped Charge Analysis) code was used to design this 65 grain/foot APLSC, and comparisons of the analytical predictions with experimental data are presented. Jet penetration data are presented to assess the maximum depth and reproducibility of the penetration. Data are also presented for full-scale tests that included all FTS components and were conducted with nominal 19-inch-diameter spherical tanks.
Beynon, R J
1985-01-01
Software for non-linear curve fitting has been written in BASIC to execute on the British Broadcasting Corporation Microcomputer. The program uses the direct search algorithm Pattern-search, a robust algorithm that has the additional advantage of needing specification of the function without inclusion of the partial derivatives. Although less efficient than gradient methods, the program can be readily configured to solve low-dimensional optimization problems that are normally encountered in life sciences. In writing the software, emphasis has been placed upon the 'user interface' and making the most efficient use of the facilities provided by the minimal configuration of this system.
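The direct-search strategy the program relies on can be sketched in a few lines; this is a minimal exploratory-move variant of Hooke-Jeeves pattern search (the full algorithm also makes pattern moves), written in Python rather than BBC BASIC, with an invented quadratic objective.

```python
# Minimal Hooke-Jeeves-style pattern search: derivative-free minimization
# by exploratory moves along each coordinate axis, halving the step size
# whenever no axis move improves the objective.
def pattern_search(f, x0, step=1.0, tol=1e-6, max_iter=10_000):
    x = list(x0)
    fx = f(x)
    for _ in range(max_iter):
        improved = False
        for i in range(len(x)):
            for delta in (step, -step):       # probe +step, then -step
                trial = list(x)
                trial[i] += delta
                ft = f(trial)
                if ft < fx:
                    x, fx, improved = trial, ft, True
                    break
        if not improved:
            step /= 2.0                       # refine the mesh
            if step < tol:
                break
    return x, fx

# Example: a curve-fitting-style quadratic objective, no derivatives needed.
xmin, fmin = pattern_search(lambda v: (v[0] - 3.0) ** 2 + 10.0 * (v[1] + 1.0) ** 2,
                            [0.0, 0.0])
```

As the abstract notes, the appeal is that only function values are required, so any least-squares criterion can be plugged in without supplying partial derivatives.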
A new one-layer neural network for linear and quadratic programming.
Gao, Xingbao; Liao, Li-Zhi
2010-06-01
In this paper, we present a new neural network for solving linear and quadratic programming problems in real time by introducing some new vectors. The proposed neural network is stable in the sense of Lyapunov and can converge to an exact optimal solution of the original problem when the objective function is convex on the set defined by the equality constraints. Compared with existing one-layer neural networks for quadratic programming problems, the proposed network has the fewest neurons and requires weaker stability conditions. The validity and transient behavior of the proposed neural network are demonstrated by simulation results.
NASA Astrophysics Data System (ADS)
Takabe, Satoshi; Hukushima, Koji
2014-04-01
The typical behavior of the linear programming (LP) problem is studied as a relaxation of the minimum vertex cover problem, which is a type of integer programming (IP) problem. To deal with LP and IP using statistical mechanics, a lattice-gas model on the Erdös-Rényi random graphs is analyzed by a replica method. It is found that the LP optimal solution is typically equal to that given by IP below the critical average degree c*=e in the thermodynamic limit. The critical threshold for LP = IP extends the previous result c = 1, and coincides with the replica symmetry-breaking threshold of the IP.
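The LP-versus-IP gap under study can be seen on the smallest interesting instance. The triangle below is a standard illustration of a case where LP ≠ IP (the relaxation is half-integral), the situation that, per the paper, becomes atypical below average degree c* = e; the replica analysis itself is not reproduced.

```python
import numpy as np
from itertools import product
from scipy.optimize import linprog

# Minimum vertex cover on a triangle: the IP optimum is 2, while the LP
# relaxation (x_v in [0,1], x_u + x_v >= 1 per edge) attains 1.5 at the
# half-integral point (1/2, 1/2, 1/2) -- a classic LP != IP instance.
edges = [(0, 1), (0, 2), (1, 2)]
n = 3

A_ub = np.zeros((len(edges), n))
for k, (u, v) in enumerate(edges):
    A_ub[k, u] = A_ub[k, v] = -1.0          # encode -(x_u + x_v) <= -1
res = linprog(np.ones(n), A_ub=A_ub, b_ub=-np.ones(len(edges)),
              bounds=[(0, 1)] * n)
lp_opt = res.fun

# Brute-force integer program over all 0/1 assignments.
ip_opt = min(sum(x) for x in product((0, 1), repeat=n)
             if all(x[u] + x[v] >= 1 for u, v in edges))
```

On sparse random graphs below the critical degree, odd structures like this triangle are rare enough that the LP and IP optima typically coincide.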
A linear programming approach to characterizing norm bounded uncertainty from experimental data
NASA Technical Reports Server (NTRS)
Scheid, R. E.; Bayard, D. S.; Yam, Y.
1991-01-01
The linear programming spectral overbounding and factorization (LPSOF) algorithm, an algorithm for finding a minimum phase transfer function of specified order whose magnitude tightly overbounds a specified nonparametric function of frequency, is introduced. This method has direct application to transforming nonparametric uncertainty bounds (available from system identification experiments) into parametric representations required for modern robust control design software (i.e., a minimum-phase transfer function multiplied by a norm-bounded perturbation).
NASA Technical Reports Server (NTRS)
Lehtinen, B.; Geyser, L. C.
1984-01-01
AESOP is a computer program for use in designing feedback controls and state estimators for linear multivariable systems. AESOP is meant to be used in an interactive manner. Each design task that the program performs is assigned a "function" number. The user accesses these functions either (1) by inputting a list of desired function numbers or (2) by inputting a single function number. In the latter case the choice of the function will in general depend on the results obtained by the previously executed function. The most important of the AESOP functions are those that design linear quadratic regulators and Kalman filters. The user interacts with the program when using these design functions by inputting design weighting parameters and by viewing graphic displays of designed system responses. Supporting functions are provided that obtain system transient and frequency responses, transfer functions, and covariance matrices. The program can also compute open-loop system information such as stability (eigenvalues), eigenvectors, controllability, and observability. The program is written in ANSI-66 FORTRAN for use on an IBM 3033 using TSS 370. Descriptions of all subroutines and results of two test cases are included in the appendixes.
Stable computation of search directions for near-degenerate linear programming problems
Hough, P.D.
1997-03-01
In this paper, we examine stability issues that arise when computing search directions ({delta}x, {delta}y, {delta}s) for a primal-dual path-following interior-point method for linear programming. The dual step {delta}y can be obtained by solving a weighted least-squares problem for which the weight matrix becomes extremely ill-conditioned near the boundary of the feasible region. Hough and Vavasis proposed using a type of complete orthogonal decomposition (the COD algorithm) to solve such a problem and presented stability results. The work presented here addresses the stable computation of the primal step {delta}x and the change in the dual slacks {delta}s. These directions can be obtained in a straightforward manner, but near-degeneracy in the linear programming instance introduces ill-conditioning which can cause numerical problems in this approach. We therefore propose a new method of computing {delta}x and {delta}s. More specifically, this paper describes an orthogonal projection algorithm that extends the COD method. Unlike other algorithms, this method is stable for interior-point methods without assuming nondegeneracy in the linear programming instance; it is thus more general than other algorithms on near-degenerate problems.
The Linear Programming to evaluate the performance of Oral Health in Primary Care
Colussi, Claudia Flemming; Calvo, Maria Cristina Marino; de Freitas, Sergio Fernando Torres
2013-01-01
ABSTRACT Objective To show the use of Linear Programming to evaluate the performance of Oral Health in Primary Care. Methods This study used data from 19 municipalities of the state of Santa Catarina that participated in the 2009 state evaluation and have more than 50,000 inhabitants. A total of 40 indicators were evaluated, calculated using Microsoft Excel 2007 and converted to the interval [0, 1] in ascending order (one indicating the best situation and zero the worst). Applying the Linear Programming technique, the municipalities were assessed and compared among themselves according to a performance curve named the “quality estimated frontier”. Municipalities included in the frontier were classified as excellent. Indicators were gathered into synthetic indicators. Results The majority of municipalities not included in the quality frontier (values different from 1.0) had values lower than 0.5, indicating poor performance. The model applied to the municipalities of Santa Catarina assessed municipal management and local priorities rather than goals imposed by pre-defined parameters. In the final analysis, three municipalities were included in the “perceived quality frontier”. Conclusion The Linear Programming technique made it possible to identify gaps that must be addressed by city managers to enhance the actions taken. It also enabled observation of each municipality's performance and comparison of results among similar municipalities. PMID:23579751
IESIP - AN IMPROVED EXPLORATORY SEARCH TECHNIQUE FOR PURE INTEGER LINEAR PROGRAMMING PROBLEMS
NASA Technical Reports Server (NTRS)
Fogle, F. R.
1994-01-01
IESIP, an Improved Exploratory Search Technique for Pure Integer Linear Programming Problems, addresses the problem of optimizing an objective function of one or more variables subject to a set of confining functions or constraints by a method called discrete optimization or integer programming. Integer programming is based on a specific form of the general linear programming problem in which all variables in the objective function and all variables in the constraints are integers. While more difficult, integer programming is required for accuracy when modeling systems with small numbers of components such as the distribution of goods, machine scheduling, and production scheduling. IESIP establishes a new methodology for solving pure integer programming problems by utilizing a modified version of the univariate exploratory move developed by Robert Hooke and T.A. Jeeves. IESIP also takes some of its technique from the greedy procedure and the idea of unit neighborhoods. A rounding scheme uses the continuous solution found by traditional methods (simplex or other suitable technique) and creates a feasible integer starting point. The Hooke and Jeeves exploratory search is modified to accommodate integers and constraints and is then employed to determine an optimal integer solution from the feasible starting solution. The user-friendly IESIP allows for rapid solution of problems up to 10 variables in size (limited by DOS allocation). Sample problems compare IESIP solutions with the traditional branch-and-bound approach. IESIP is written in Borland's TURBO Pascal for IBM PC series computers and compatibles running DOS. Source code and an executable are provided. The main memory requirement for execution is 25K. This program is available on a 5.25 inch 360K MS DOS format diskette. IESIP was developed in 1990. IBM is a trademark of International Business Machines. TURBO Pascal is registered by Borland International.
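The round-then-explore idea can be sketched on a two-variable integer LP. This is a simplified stand-in for IESIP, rounding the LP-relaxation optimum to a feasible integer start and then accepting improving unit-neighborhood moves, not the program's actual modified Hooke-Jeeves procedure; the instance below is invented.

```python
import numpy as np
from itertools import product
from scipy.optimize import linprog

# Maximize 5x + 4y s.t. 6x + 4y <= 24, x + 2y <= 6, x, y >= 0 integer.
c = np.array([5.0, 4.0])
A_ub = np.array([[6.0, 4.0], [1.0, 2.0]])
b_ub = np.array([24.0, 6.0])

def feasible(x):
    return np.all(x >= 0) and np.all(A_ub @ x <= b_ub + 1e-9)

# Step 1: LP relaxation, then round down to a feasible integer start.
relax = linprog(-c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 2)
x = np.floor(relax.x + 1e-9)              # here: (3, 1.5) -> (3, 1)

# Step 2: accept any feasible unit-neighborhood move that improves c.x.
improved = True
while improved:
    improved = False
    for move in product((-1, 0, 1), repeat=2):
        trial = x + np.array(move)
        if feasible(trial) and c @ trial > c @ x:
            x, improved = trial, True
            break
```

Here the search climbs from the rounded point (3, 1), value 19, to the true integer optimum (4, 0), value 20, which rounding alone would have missed.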
A new gradient-based neural network for solving linear and quadratic programming problems.
Leung, Y; Chen, K Z; Jiao, Y C; Gao, X B; Leung, K S
2001-01-01
A new gradient-based neural network is constructed on the basis of duality theory, optimization theory, convex analysis, Lyapunov stability theory, and the LaSalle invariance principle to solve linear and quadratic programming problems. In particular, a new function F(x, y) is introduced into the energy function E(x, y) such that E(x, y) is convex and differentiable, and the resulting network is more efficient. This network involves all the relevant necessary and sufficient optimality conditions for convex quadratic programming problems. For linear programming and quadratic programming (QP) problems with a unique solution or an infinite number of solutions, we have proven strictly that, for any initial point, every trajectory of the neural network converges to an optimal solution of the QP problem and its dual. The proposed network differs from existing networks that use the penalty or Lagrange methods, and the inequality constraints are handled properly. The simulation results show that the proposed neural network is feasible and efficient.
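The continuous-time dynamics of such networks have a simple discrete-time analog: follow the negative gradient of the quadratic objective and project back onto the feasible set. The sketch below is that generic projected-gradient scheme on a tiny nonnegatively constrained QP, not the specific energy function E(x, y) of the paper; the problem data are invented.

```python
import numpy as np

# Discrete-time analog of a gradient-based network for the QP
#   min (1/2) x'Qx + c'x  subject to  x >= 0:
# gradient step, then projection onto the nonnegative orthant.
Q = np.array([[2.0, 0.0], [0.0, 2.0]])
c = np.array([-2.0, 2.0])

x = np.zeros(2)
lr = 0.1                                    # step size (network "time step")
for _ in range(500):
    x = np.maximum(x - lr * (Q @ x + c), 0.0)
```

The unconstrained minimizer is (1, -1); the projection pins the second coordinate at its bound, so the trajectory settles at the constrained optimum (1, 0), which satisfies the KKT conditions.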
SLFP: A stochastic linear fractional programming approach for sustainable waste management
Zhu, H.; Huang, G.H.
2011-12-15
Highlights: A new fractional programming (SLFP) method is developed for waste management. SLFP can solve ratio optimization problems associated with random inputs. A case study of waste flow allocation demonstrates its applicability. SLFP helps compare objectives of two aspects and reflect system efficiency. This study supports in-depth analysis of tradeoffs among multiple system criteria.
Abstract: A stochastic linear fractional programming (SLFP) approach is developed for supporting sustainable municipal solid waste management under uncertainty. The SLFP method can solve ratio optimization problems associated with random information, where chance-constrained programming is integrated into a linear fractional programming framework. It has advantages in: (1) comparing objectives of two aspects, (2) reflecting system efficiency, (3) dealing with uncertainty expressed as probability distributions, and (4) providing optimal-ratio solutions under different system-reliability conditions. The method is applied to a case study of waste flow allocation within a municipal solid waste (MSW) management system. The obtained solutions are useful for identifying sustainable MSW management schemes with maximized system efficiency under various constraint-violation risks. The results indicate that SLFP can support in-depth analysis of the interrelationships among system efficiency, system cost and system-failure risk.
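The deterministic core of linear fractional programming, maximizing a ratio of affine functions over a polyhedron, is classically reduced to an ordinary LP by the Charnes-Cooper transformation, which is one standard way to solve the subproblems in frameworks like SLFP. The tiny instance below is invented for illustration; the paper's chance-constrained machinery is not reproduced.

```python
import numpy as np
from scipy.optimize import linprog

# Charnes-Cooper transformation: the linear fractional program
#   max (c'x + alpha) / (d'x + beta)  s.t.  Ax <= b, x >= 0, d'x + beta > 0
# becomes an LP in (y, t) with y = t*x and t = 1/(d'x + beta):
#   max c'y + alpha*t  s.t.  Ay - bt <= 0,  d'y + beta*t = 1,  y, t >= 0.
c, alpha = np.array([1.0, 2.0]), 1.0
d, beta = np.array([1.0, 1.0]), 2.0
A, b = np.array([[1.0, 1.0]]), np.array([4.0])

obj = -np.concatenate([c, [alpha]])                 # linprog minimizes
A_ub = np.hstack([A, -b[:, None]])                  # A y - b t <= 0
A_eq = np.array([np.concatenate([d, [beta]])])      # d.y + beta t = 1
res = linprog(obj, A_ub=A_ub, b_ub=np.zeros(1),
              A_eq=A_eq, b_eq=np.ones(1), bounds=[(0, None)] * 3)
y, t = res.x[:2], res.x[2]
x_opt = y / t                                       # recover the original variables
ratio = (c @ x_opt + alpha) / (d @ x_opt + beta)
```

For these data the optimal ratio is 1.5, attained at x = (0, 4); the transformation is exact whenever the denominator stays positive on the feasible set.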
Zörnig, Peter
2015-08-01
We present integer programming models for some variants of the farthest string problem. The number of variables and constraints is substantially less than that of the integer linear programming models known in the literature. Moreover, the solution of the linear programming-relaxation contains only a small proportion of noninteger values, which considerably simplifies the rounding process. Numerical tests have shown excellent results, especially when a small set of long sequences is given.
A strictly improving linear programming algorithm based on a series of Phase 1 problems
Leichner, S.A.; Dantzig, G.B.; Davis, J.W.
1992-04-01
When used on degenerate problems, the simplex method often takes a number of degenerate steps at a particular vertex before moving to the next. In theory (although rarely in practice), the simplex method can actually cycle at such a degenerate point. Instead of trying to modify the simplex method to avoid degenerate steps, we have developed a new linear programming algorithm that is completely impervious to degeneracy. This new method solves the Phase II problem of finding an optimal solution by solving a series of Phase I feasibility problems. Strict improvement is attained at each iteration in the Phase I algorithm, and the Phase II sequence of feasibility problems has linear convergence in the number of Phase I problems. When tested on the 30 smallest NETLIB linear programming test problems, the computational results for the new Phase II algorithm were over 15% faster than the simplex method; on some problems, it was almost two times faster, and on one problem it was four times faster.
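The building block of the method, a Phase I feasibility problem, can be sketched directly: minimize the sum of nonnegative artificial variables, and a zero optimum certifies a feasible point. The small system below is invented; the paper's strictly improving Phase II sequence is not reproduced.

```python
import numpy as np
from scipy.optimize import linprog

# Phase I: test feasibility of {Ax = b, x >= 0} by minimizing the sum of
# nonnegative artificial variables a in  Ax + a = b  (rows scaled so b >= 0).
# An optimal value of zero means a is driven out and x alone satisfies Ax = b.
A = np.array([[1.0,  1.0, 1.0],
              [1.0, -1.0, 0.0]])
b = np.array([4.0, 1.0])

m, n = A.shape
obj = np.concatenate([np.zeros(n), np.ones(m)])     # minimize sum of artificials
A_eq = np.hstack([A, np.eye(m)])                    # [A | I] [x; a] = b
res = linprog(obj, A_eq=A_eq, b_eq=b, bounds=[(0, None)] * (n + m))
x_feasible = res.x[:n]
```

Starting at x = 0, a = b is always feasible for this auxiliary LP, which is why a sequence of such problems can drive a Phase II method without ever stalling on degeneracy.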
Modified Cholesky factorizations in interior-point algorithms for linear programming.
Wright, S.; Mathematics and Computer Science
1999-01-01
We investigate a modified Cholesky algorithm typical of those used in most interior-point codes for linear programming. Cholesky-based interior-point codes are popular for three reasons: their implementation requires only minimal changes to standard sparse Cholesky algorithms (allowing us to take full advantage of software written by specialists in that area); they tend to be more efficient than competing approaches that use alternative factorizations; and they perform robustly on most practical problems, yielding good interior-point steps even when the coefficient matrix of the main linear system to be solved for the step components is ill conditioned. We investigate this surprisingly robust performance by using analytical tools from matrix perturbation theory and error analysis, illustrating our results with computational experiments. Finally, we point out the potential limitations of this approach.
Coelho, Clarimar José; Galvão, Roberto K H; de Araújo, Mário César U; Pimentel, Maria Fernanda; da Silva, Edvan Cirino
2003-01-01
A novel strategy for the optimization of wavelet transforms with respect to the statistics of the data set in multivariate calibration problems is proposed. The optimization follows a linear semi-infinite programming formulation, which does not display local maxima problems and can be reproducibly solved with modest computational effort. After the optimization, a variable selection algorithm is employed to choose a subset of wavelet coefficients with minimal collinearity. The selection allows the building of a calibration model by direct multiple linear regression on the wavelet coefficients. In an illustrative application involving the simultaneous determination of Mn, Mo, Cr, Ni, and Fe in steel samples by ICP-AES, the proposed strategy yielded more accurate predictions than PCR, PLS, and nonoptimized wavelet regression. PMID:12767151
NASA Astrophysics Data System (ADS)
Han, Jeongwoo
Decision-making under uncertainty is particularly challenging in the case of multi-disciplinary, multilevel system optimization problems. Subsystem interactions cause strong couplings, which may be amplified by uncertainty. Thus, effective coordination strategies can be particularly beneficial. Analytical target cascading (ATC) is a deterministic optimization method for multilevel hierarchical systems, which was recently extended to probabilistic design. Solving the optimization problem requires propagation of uncertainty, namely, evaluating or estimating output distributions given random input variables. This uncertainty propagation can be a challenging and computationally expensive task for nonlinear functions, but is relatively easy for linear ones. In order to overcome the difficulty in uncertainty propagation, this dissertation introduces the use of Sequential Linear Programming (SLP) for solving ATC problems, and specifically extends this use for Probabilistic Analytical Target Cascading (PATC) problems. A new coordination strategy is proposed for ATC and PATC, which coordinates linking variables among subproblems using sequential linearizations. By linearizing and solving a hierarchy of problems successively, the algorithm takes advantage of the simplicity and ease of uncertainty propagation for a linear system. Linearity of subproblems is maintained using an L-infinity norm to measure deviations between targets and responses. A subproblem suspension strategy is used to temporarily suspend inclusion of subproblems that do not need significant redesign, based on trust region and target value step size. A global convergence proof of the SLP-based coordination strategy is derived. Experiments with test problems show that, relative to standard ATC and PATC coordination, the number of subproblem evaluations is reduced considerably while maintaining accuracy. To demonstrate the applicability of the proposed strategies to problems of practical complexity, a hybrid
A Non-linear Temperature-Time Program for Non-isothermal Kinetic Measurements
NASA Astrophysics Data System (ADS)
Sohn, Hong Yong
2016-04-01
A new temperature-time program for non-isothermal measurements of chemical reaction rates has been developed. The major advantages of the proposed temperature-time function are twofold. First, it improves the analysis of kinetic information in the high-temperature range of the measurement over the conventional linear temperature program, by slowing the rate of temperature increase at high temperatures. Second, it greatly facilitates data analysis by providing a closed-form solution of the temperature integral, which eliminates the need for approximate evaluation of that integral and gives a convenient way to obtain the kinetic parameters. The procedures for applying the new temperature-time program to the analysis of experimental data are demonstrated in terms of the determination of the kinetic parameters based on the selection of a suitable conversion function in the rate equation, as well as the direct determination of activation energy at different conversion extents without the need for a conversion function. The rate analysis based on the new temperature program is robust and does not appear to be sensitive to errors in experimental measurements.
Zheng, Yuanjie; Hunter, Allan A; Wu, Jue; Wang, Hongzhi; Gao, Jianbin; Maguire, Maureen G; Gee, James C
2011-01-01
In this paper, we address the problem of landmark-matching-based retinal image registration. Two major contributions distinguish our registration algorithm from many previous methods. One is a novel landmark-matching formulation which enables not only a joint estimation of the correspondences and transformation model but also optimization with linear programming. The other contribution lies in the introduction of a reinforced self-similarities descriptor for characterizing the local appearance of landmarks. Theoretical analysis and a series of preliminary experimental results show both the effectiveness of our optimization scheme and the high differentiating ability of our features.
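A much-reduced version of LP-based landmark matching: once descriptor distances are fixed, finding a one-to-one correspondence is a linear assignment problem, whose LP relaxation has integral optima. The sketch below (the paper's joint correspondence-plus-transformation estimation is richer than this) uses scipy's assignment solver on a hypothetical distance matrix.

```python
# Landmark correspondence as minimum-cost linear assignment: the LP
# relaxation of the assignment polytope is integral, so the Hungarian
# solver returns the LP optimum. Cost matrix values are hypothetical.
import numpy as np
from scipy.optimize import linear_sum_assignment

# cost[i, j] = descriptor dissimilarity between landmark i in image A
# and landmark j in image B (illustrative numbers).
cost = np.array([[0.1, 0.9, 0.8],
                 [0.7, 0.2, 0.9],
                 [0.8, 0.6, 0.3]])

rows, cols = linear_sum_assignment(cost)       # minimum-cost matching
matching = dict(zip(rows.tolist(), cols.tolist()))
total_cost = cost[rows, cols].sum()
```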
A FORTRAN program for the analysis of linear continuous and sample-data systems
NASA Technical Reports Server (NTRS)
Edwards, J. W.
1976-01-01
A FORTRAN digital computer program which performs the general analysis of linearized control systems is described. State variable techniques are used to analyze continuous, discrete, and sampled data systems. Analysis options include the calculation of system eigenvalues, transfer functions, root loci, root contours, frequency responses, power spectra, and transient responses for open- and closed-loop systems. A flexible data input format allows the user to define systems in a variety of representations. Data may be entered by inputing explicit data matrices or matrices constructed in user written subroutines, by specifying transfer function block diagrams, or by using a combination of these methods.
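Two of the analysis options the report lists (eigenvalues and frequency response) reduce to standard state-variable computations. A numpy sketch on a hypothetical second-order plant, not the original FORTRAN code:

```python
# State-variable analysis sketch: eigenvalues (poles) and the transfer
# function G(s) = C (sI - A)^(-1) B for x' = Ax + Bu, y = Cx.
# The system matrices are illustrative.
import numpy as np

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])

eigs = np.sort(np.linalg.eigvals(A).real)      # system eigenvalues
stable = bool(np.all(eigs < 0.0))              # open-loop stability check

def transfer(s):
    """Transfer function G(s) = C (sI - A)^(-1) B evaluated at s."""
    return (C @ np.linalg.solve(s * np.eye(2) - A, B))[0, 0]

dc_gain = transfer(0.0)                        # steady-state (DC) gain
```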
An improved multiple linear regression and data analysis computer program package
NASA Technical Reports Server (NTRS)
Sidik, S. M.
1972-01-01
NEWRAP, an improved version of a previous multiple linear regression program called RAPIER, CREDUC, and CRSPLT, allows for a complete regression analysis including cross plots of the independent and dependent variables, correlation coefficients, regression coefficients, analysis of variance tables, t-statistics and their probability levels, rejection of independent variables, plots of residuals against the independent and dependent variables, and a canonical reduction of quadratic response functions useful in optimum seeking experimentation. A major improvement over RAPIER is that all regression calculations are done in double precision arithmetic.
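The core of such a regression package (coefficients, residual variance, t-statistics, all in double precision) can be sketched in a few lines of numpy on synthetic data; this is an illustration of the computation, not NEWRAP itself:

```python
# Multiple linear regression with t-statistics on synthetic data.
import numpy as np

rng = np.random.default_rng(0)
n = 50
X = np.column_stack([np.ones(n), rng.normal(size=n), rng.normal(size=n)])
beta_true = np.array([2.0, -1.0, 0.5])
y = X @ beta_true + 0.1 * rng.normal(size=n)

beta, *_ = np.linalg.lstsq(X, y, rcond=None)   # regression coefficients
resid = y - X @ beta
dof = n - X.shape[1]
sigma2 = resid @ resid / dof                   # residual variance
cov = sigma2 * np.linalg.inv(X.T @ X)          # coefficient covariance
t_stats = beta / np.sqrt(np.diag(cov))         # t-statistics per coefficient
```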
Cooper, G F
1986-04-01
Bayes' formula has been applied extensively in computer-based medical diagnostic systems. One assumption that is often made in the application of the formula is that the findings in a case are conditionally independent. This assumption is often invalid and leads to inaccurate posterior probability assignments to the diagnostic hypotheses. This paper discusses a method for using causal knowledge to structure findings according to their probabilistic dependencies. An inference procedure is discussed which propagates probabilities within a network of causally related findings in order to calculate posterior probabilities of diagnostic hypotheses. A linear programming technique is described that bounds the values of the propagated probabilities subject to known probabilistic constraints.
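The conditional-independence assumption the abstract questions is exactly the naive-Bayes posterior computation; a minimal sketch with hypothetical priors and likelihoods (not Cooper's causal-network procedure):

```python
# Bayes' formula under conditional independence of findings given the
# hypothesis: P(D | f1, f2) proportional to P(D) * P(f1|D) * P(f2|D).
# All probabilities below are illustrative.
import math

priors = {"disease_A": 0.3, "disease_B": 0.7}
likelihoods = {
    "disease_A": {"fever": 0.8, "cough": 0.6},
    "disease_B": {"fever": 0.1, "cough": 0.4},
}
findings = ["fever", "cough"]

joint = {h: priors[h] * math.prod(likelihoods[h][f] for f in findings)
         for h in priors}
evidence = sum(joint.values())
posterior = {h: joint[h] / evidence for h in joint}
```

When the findings are in fact dependent, these posteriors are miscalibrated, which is the failure mode the paper's linear programming bounds address.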
Sun, Wei; Huang, Guo H.; Lv, Ying; Li, Gongchen
2012-06-15
Highlights: • Inexact piecewise-linearization-based fuzzy flexible programming is proposed. • It is the first application to waste management under multiple complexities. • It tackles nonlinear economies-of-scale effects in interval-parameter constraints. • It estimates costs more accurately than the linear-regression-based model. • Uncertainties are decreased and more satisfactory interval solutions are obtained. - Abstract: To tackle nonlinear economies-of-scale (EOS) effects in interval-parameter constraints for a representative waste management problem, an inexact piecewise-linearization-based fuzzy flexible programming (IPFP) model is developed. In IPFP, interval parameters for waste amounts and transportation/operation costs can be quantified; aspiration levels for net system costs, as well as tolerance intervals for both capacities of waste treatment facilities and waste generation rates can be reflected; and the nonlinear EOS effects transformed from objective function to constraints can be approximated. An interactive algorithm is proposed for solving the IPFP model, which in nature is an interval-parameter mixed-integer quadratically constrained programming model. To demonstrate the IPFP's advantages, two alternative models are developed to compare their performances. One is a conventional linear-regression-based inexact fuzzy programming model (IPFP2) and the other is an IPFP model with all right-hand sides of fuzzy constraints being the corresponding interval numbers (IPFP3). The comparison results between IPFP and IPFP2 indicate that the optimized waste amounts would have similar patterns in both models. However, when dealing with EOS effects in constraints, the IPFP2 may underestimate the net system costs while the IPFP can estimate the costs more accurately. The comparison results between IPFP and IPFP3 indicate that their solutions would be significantly different. The decreased system uncertainties in IPFP's solutions demonstrate its effectiveness for providing more satisfactory interval solutions than IPFP3. Following its first application to waste management, the IPFP can be potentially applied to other environmental problems under multiple complexities.
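Piecewise linearization keeps a nonlinear cost inside an LP. Note that true EOS costs are concave and, when minimized, need the mixed-integer machinery the abstract mentions; for a convex cost the technique reduces to plain LP via tangent (epigraph) cuts, which is the simpler case sketched below with illustrative numbers:

```python
# Piecewise linearization of a CONVEX cost c(x) = 0.5 x^2 by tangent
# cuts t >= c(xk) + c'(xk)(x - xk); minimizing t over the cuts is an LP.
# (Concave EOS costs need binaries/SOS2 on top of this idea.)
import numpy as np
from scipy.optimize import linprog

xk = np.array([0.0, 2.0, 4.0, 6.0])       # linearization sample points
slopes = xk                               # c'(x) = x
intercepts = 0.5 * xk**2 - xk * xk        # c(xk) - c'(xk) * xk

# Variables (x, t): minimize t subject to each cut and demand x >= 4.
c = [0.0, 1.0]
A_ub = [[s, -1.0] for s in slopes] + [[-1.0, 0.0]]
b_ub = [-b for b in intercepts] + [-4.0]
res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, None), (0, None)], method="highs")
x_opt, cost_opt = res.x[0], res.fun       # exact here since 4 is a sample point
```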
Consideration in selecting crops for the human-rated life support system: a Linear Programming model
NASA Technical Reports Server (NTRS)
Wheeler, E. F.; Kossowski, J.; Goto, E.; Langhans, R. W.; White, G.; Albright, L. D.; Wilcox, D.; Henninger, D. L. (Principal Investigator)
1996-01-01
A Linear Programming model has been constructed which aids in selecting appropriate crops for CELSS (Controlled Environment Life Support System) food production. A team of Controlled Environment Agriculture (CEA) faculty, staff, graduate students and invited experts, representing more than a dozen disciplines, provided a wide range of expertise in developing the model and the crop production program. The model incorporates nutritional content and controlled-environment-based production yields of carefully chosen crops into a framework where a crop mix can be constructed to suit the astronauts' needs. The crew's nutritional requirements can be adequately satisfied with only a few crops (assuming vitamin and mineral supplements are provided), but this will not be satisfactory from a culinary standpoint. The model is flexible enough that taste- and variety-driven food choices can be built into it.
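The model described is a diet-problem LP. A minimal sketch with scipy: choose kilograms per day of each crop to minimize growing area subject to nutrient minimums. Every coefficient below is a hypothetical placeholder, not CELSS data:

```python
# Diet-style crop-selection LP: minimize growing area subject to
# nutrient minimums. All coefficients are illustrative placeholders.
from scipy.optimize import linprog

crops = ["wheat", "potato", "soybean"]
area = [2.0, 0.6, 2.5]            # m^2-day per kg grown (hypothetical)
kcal = [3300, 770, 4460]          # kcal per kg (approximate)
protein = [120, 20, 360]          # g protein per kg (approximate)

# Minimize total area s.t. kcal >= 2800 and protein >= 60 per crew-day.
res = linprog(c=area,
              A_ub=[[-k for k in kcal], [-p for p in protein]],
              b_ub=[-2800, -60],
              bounds=[(0, None)] * 3, method="highs")
mix = dict(zip(crops, res.x))     # kg/day of each crop in the optimal mix
```

Culinary-variety preferences would enter as additional constraints (e.g., minimum amounts per crop), exactly the flexibility the abstract describes.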
Dinan, T.M.
1984-01-01
The objectives of this study were to: (1) determine how energy efficiency affects the resale value of homes; (2) use this information concerning the implicit price of energy efficiency to estimate the resale value of fuel saving investments; and (3) incorporate these resale values into the investment decision process and determine the efficient investment mix for a household planning to own a given home for three alternative time periods. Two models were used to accomplish these objectives. A hedonic price model was used to determine the impact of energy efficiency on housing prices. The hedonic technique is a method used to attach implicit prices to characteristics that are not themselves bought and sold in markets, but are components of market goods. The hedonic model in this study provided an estimate of the implicit price paid for an increase in energy efficiency in homes on the Des-Moines housing market. In order to determine how the length of time the home is to be owned affects the optimal investment mix, a linear programming model was used to determine the cost minimizing investment mix for a baseline house under the assumption that it would be owned for 6, 20, and 50 years, alternatively. The results of the hedonic technique revealed that a premium is paid for energy efficient homes in Des Moines. The results of the linear programming model reveal that the optimal fuel saving investment mix for a home is sensitive to the time the home is to be owned.
Warid, Warid; Hizam, Hashim; Mariun, Norman; Abdul-Wahab, Noor Izzri
2016-01-01
This paper proposes a new formulation for the multi-objective optimal power flow (MOOPF) problem for meshed power networks considering distributed generation. An efficacious multi-objective fuzzy linear programming optimization (MFLP) algorithm is proposed to solve the aforementioned problem with and without considering the distributed generation (DG) effect. A variant combination of objectives is considered for simultaneous optimization, including power loss, voltage stability, and shunt capacitors MVAR reserve. Fuzzy membership functions for these objectives are designed with extreme targets, whereas the inequality constraints are treated as hard constraints. The multi-objective fuzzy optimal power flow (OPF) formulation was converted into a crisp OPF in a successive linear programming (SLP) framework and solved using an efficient interior point method (IPM). To test the efficacy of the proposed approach, simulations are performed on the IEEE 30-bus and IEEE 118-bus test systems. The MFLP optimization is solved for several optimization cases. The obtained results are compared with those presented in the literature. A unique solution with a high satisfaction for the assigned targets is gained. Results demonstrate the effectiveness of the proposed MFLP technique in terms of solution optimality and rapid convergence. Moreover, the results indicate that using the optimal DG location with the MFLP algorithm provides the solution with the highest quality. PMID:26954783
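The fuzzy-to-crisp conversion step underlying such formulations is the classic max-min (Zimmermann-style) trick: maximize a satisfaction level lambda subject to every membership being at least lambda. A tiny sketch with hypothetical linear memberships, not the paper's OPF model:

```python
# Max-min crisp equivalent of a fuzzy LP. Variables: x1, x2, lambda.
# Fuzzy goal 1: profit 3x1 + 2x2 "around >= 26", tolerance 10
#   => lambda <= (3x1 + 2x2 - 16) / 10
# Fuzzy goal 2: emissions 2x1 + x2 "around <= 8", tolerance 6
#   => lambda <= (14 - 2x1 - x2) / 6
# Hard constraint: x1 + x2 <= 10. All numbers are illustrative.
from scipy.optimize import linprog

c = [0.0, 0.0, -1.0]              # maximize lambda
A_ub = [[-3.0, -2.0, 10.0],       # 10*lam - 3x1 - 2x2 <= -16
        [2.0, 1.0, 6.0],          # 6*lam + 2x1 + x2 <= 14
        [1.0, 1.0, 0.0]]          # x1 + x2 <= 10
b_ub = [-16.0, 14.0, 10.0]
res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, None), (0, None), (0.0, 1.0)], method="highs")
x1, x2, lam = res.x               # optimum balances the two memberships
```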
Glocker, Ben; Paragios, Nikos; Komodakis, Nikos; Tziritas, Georgios; Navab, Nassir
2007-01-01
In this paper we propose a novel non-rigid volume registration based on discrete labeling and linear programming. The proposed framework reformulates registration as a minimal path extraction in a weighted graph. The space of solutions is represented using a set of labels which are assigned to predefined displacements. The graph topology corresponds to a regular grid superimposed onto the volume. Links between neighboring control points introduce smoothness, while links between the graph nodes and the labels (end-nodes) measure the cost induced in the objective function through the selection of a particular deformation for a given control point once projected to the entire volume domain. Higher-order polynomials are used to express the volume deformation from those of the control points. Efficient linear programming that can guarantee the optimal solution up to a (user-defined) bound is used to recover the optimal registration parameters. Therefore, the method is gradient free, can encode various similarity metrics (through simple changes to the graph construction), can guarantee a globally sub-optimal solution, and is computationally tractable. Experimental validation using simulated data with known deformation, as well as manually segmented data, demonstrates the strong potential of our approach. PMID:17633717
A wavelet-linear genetic programming model for sodium (Na+) concentration forecasting in rivers
NASA Astrophysics Data System (ADS)
Ravansalar, Masoud; Rajaee, Taher; Zounemat-Kermani, Mohammad
2016-06-01
The prediction of water quality parameters in water resources such as rivers is an important issue for better management of irrigation systems and water supplies. In this respect, this study proposes a new hybrid wavelet-linear genetic programming (WLGP) model for prediction of monthly sodium (Na+) concentration. The 23-year monthly data used in this study were measured from the Asi River at the Demirköprü gauging station located in Antakya, Turkey. First, the measured discharge (Q) and Na+ datasets are decomposed into several sub-series using the discrete wavelet transform (DWT). Then, these new sub-series are supplied to the linear genetic programming (LGP) model as input patterns to predict monthly Na+ one month ahead. The results of the new proposed WLGP model are compared with LGP, WANN and ANN models. Comparison of the models demonstrates the superiority of the WLGP model over the LGP, WANN and ANN models, such that the Nash-Sutcliffe efficiencies (NSE) for the WLGP, WANN, LGP and ANN models were 0.984, 0.904, 0.484 and 0.351, respectively. The results even point to the superiority of the single LGP model over the ANN model. The capability of the proposed WLGP model to predict Na+ peak values is also presented.
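The DWT preprocessing step can be sketched with one Haar level in numpy: the signal splits into an approximation (trend) and a detail (fluctuation) sub-series, and the split is exactly invertible. The paper's wavelet choice and decomposition depth may differ; this is only the mechanism:

```python
# One level of a Haar discrete wavelet transform and its exact inverse.
import numpy as np

def haar_dwt(x):
    """Split an even-length signal into approximation and detail."""
    x = np.asarray(x, dtype=float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return approx, detail

def haar_idwt(approx, detail):
    """Invert one Haar level exactly."""
    out = np.empty(2 * len(approx))
    out[0::2] = (approx + detail) / np.sqrt(2.0)
    out[1::2] = (approx - detail) / np.sqrt(2.0)
    return out

signal = np.array([4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0])
a, d = haar_dwt(signal)                 # sub-series fed to the model
reconstructed = haar_idwt(a, d)         # lossless reconstruction
```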
Averaging and Linear Programming in Some Singularly Perturbed Problems of Optimal Control
Gaitsgory, Vladimir; Rossomakhine, Sergey
2015-04-15
The paper aims at the development of an apparatus for analysis and construction of near-optimal solutions of singularly perturbed (SP) optimal control problems (that is, problems of optimal control of SP systems) considered on the infinite time horizon. We mostly focus on problems with time-discounting criteria, but a possible extension of the results to periodic optimization problems is discussed as well. Our consideration is based on earlier results on averaging of SP control systems and on linear programming formulations of optimal control problems. The idea that we exploit is to first asymptotically approximate a given problem of optimal control of the SP system by a certain averaged optimal control problem, then reformulate this averaged problem as an infinite-dimensional linear programming (LP) problem, and then approximate the latter by semi-infinite LP problems. We show that the optimal solutions of these semi-infinite LP problems and their duals (which can be found with the help of a modification of available LP software) allow one to construct near-optimal controls of the SP system. We demonstrate the construction with two numerical examples.
Fan, Yurui; Huang, Guohe; Veawab, Amornvadee
2012-01-01
In this study, a generalized fuzzy linear programming (GFLP) method was developed to deal with uncertainties expressed as fuzzy sets that exist in the constraints and objective function. A stepwise interactive algorithm (SIA) was advanced to solve the GFLP model and generate solutions expressed as fuzzy sets. To demonstrate its application, the developed GFLP method was applied to a regional sulfur dioxide (SO2) control planning model to identify effective SO2 mitigation policies with a minimized system performance cost under uncertainty. The results obtained represent the amount of SO2 allocated to different control measures from different sources. Compared with the conventional interval-parameter linear programming (ILP) approach, the solutions obtained through GFLP were expressed as fuzzy sets, which can provide intervals for the decision variables and objective function, as well as related possibilities. Therefore, the decision makers can make a tradeoff between model stability and plausibility based on solutions obtained through GFLP and then identify desired policies for SO2-emission control under uncertainty.
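The interval-parameter LP (ILP) baseline the abstract compares against can be sketched as a two-step procedure: solve the optimistic and pessimistic bound models to bracket the minimized cost. This is a simplified illustration with hypothetical SO2-abatement numbers, not the paper's SIA algorithm:

```python
# Two-step interval LP sketch: bracket the optimal cost by solving the
# model at the optimistic and pessimistic parameter bounds.
# All coefficients are hypothetical.
from scipy.optimize import linprog

# x = tonnes of SO2 removed by [scrubbing, low-sulfur fuel switching]
cost_lo, cost_hi = [200.0, 150.0], [260.0, 210.0]   # $/tonne intervals
req_lo, req_hi = 80.0, 100.0                        # required removal interval

def solve(cost, req):
    res = linprog(c=cost, A_ub=[[-1.0, -1.0]], b_ub=[-req],
                  bounds=[(0, 60), (0, 70)], method="highs")
    return res.fun

f_lower = solve(cost_lo, req_lo)   # optimistic bound model
f_upper = solve(cost_hi, req_hi)   # pessimistic bound model
# The true minimized cost lies in [f_lower, f_upper].
```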
Warid, Warid; Hizam, Hashim; Mariun, Norman; Abdul-Wahab, Noor Izzri
2016-01-01
This paper proposes a new formulation for the multi-objective optimal power flow (MOOPF) problem for meshed power networks considering distributed generation. An efficacious multi-objective fuzzy linear programming optimization (MFLP) algorithm is proposed to solve the aforementioned problem with and without considering the distributed generation (DG) effect. A variant combination of objectives is considered for simultaneous optimization, including power loss, voltage stability, and shunt capacitors MVAR reserve. Fuzzy membership functions for these objectives are designed with extreme targets, whereas the inequality constraints are treated as hard constraints. The multi-objective fuzzy optimal power flow (OPF) formulation was converted into a crisp OPF in a successive linear programming (SLP) framework and solved using an efficient interior point method (IPM). To test the efficacy of the proposed approach, simulations are performed on the IEEE 30-busand IEEE 118-bus test systems. The MFLP optimization is solved for several optimization cases. The obtained results are compared with those presented in the literature. A unique solution with a high satisfaction for the assigned targets is gained. Results demonstrate the effectiveness of the proposed MFLP technique in terms of solution optimality and rapid convergence. Moreover, the results indicate that using the optimal DG location with the MFLP algorithm provides the solution with the highest quality.
Radial-interval linear programming for environmental management under varied protection levels.
Tan, Qian; Huang, Guo H; Cai, Yanpeng
2010-09-01
In this study, a radial-interval linear programming (RILP) approach was developed for supporting waste management under uncertainty. RILP improves interval-parameter linear programming and its extensions in terms of input reasonableness and output robustness. From the perspective of modeling inputs, RILP can tackle highly uncertain information at the bounds of interval parameters by introducing the concept of fluctuation radius. Regarding modeling outputs, RILP allows controlling the degree of conservatism associated with interval solutions and is capable of quantifying the corresponding system risks and benefits. This facilitates reflecting the interactive relationship between system feasibility and parameter uncertainty. A computationally tractable algorithm was provided to solve RILP. Then, a long-term waste management case was studied to demonstrate the applicability of the developed methodology. A series of interval solutions obtained under varied protection levels were compared, helping gain insights into the interactions among protection level, violation risk, and system cost. Potential waste allocation alternatives could be generated from these interval solutions, which would be screened in real-world practice according to various projected system conditions as well as decision-makers' willingness to pay and risk tolerance levels. Sensitivity analysis further revealed the significant impact of the fluctuation radii of interval parameters on the system. The results indicated that RILP is applicable to a wide spectrum of environmental management problems that are subject to compound uncertainties.
Djukanovic, M.; Babic, B.; Milosevic, B.; Sobajic, D.J.; Pao, Y.H.
1996-05-01
In this paper the blending/transloading facilities are modeled using interactive fuzzy linear programming (FLP), in order to allow the decision-maker to address the uncertainty of input information within fuel scheduling optimization. An interactive decision-making process is formulated in which the decision-maker can learn to recognize good solutions by considering all possibilities of fuzziness. The application of the fuzzy formulation is accompanied by a careful examination of the definition of fuzziness, the appropriateness of the membership function, and the interpretation of results. The proposed concept provides a decision support system with integration-oriented features, whereby the decision-maker can learn to recognize the relative importance of factors in the specific domain of the optimal fuel scheduling (OFS) problem. The formulation of a fuzzy linear programming problem to obtain a reasonable nonfuzzy solution under consideration of the ambiguity of parameters, represented by fuzzy numbers, is introduced. An additional advantage of the FLP formulation is its ability to deal with multi-objective problems.
Tonkin, Matthew J.; Tiedeman, Claire R.; Ely, D. Matthew; Hill, Mary C.
2007-01-01
The OPR-PPR program calculates the Observation-Prediction (OPR) and Parameter-Prediction (PPR) statistics that can be used to evaluate the relative importance of various kinds of data to simulated predictions. The data considered fall into three categories: (1) existing observations, (2) potential observations, and (3) potential information about parameters. The first two are addressed by the OPR statistic; the third is addressed by the PPR statistic. The statistics are based on linear theory and measure the leverage of the data, which depends on the location, the type, and possibly the time of the data being considered. For example, in a ground-water system the type of data might be a head measurement at a particular location and time. As a measure of leverage, the statistics do not take into account the value of the measurement. As linear measures, the OPR and PPR statistics require minimal computational effort once sensitivities have been calculated. Sensitivities need to be calculated for only one set of parameter values; commonly these are the values estimated through model calibration. OPR-PPR can calculate the OPR and PPR statistics for any mathematical model that produces the necessary OPR-PPR input files. In this report, OPR-PPR capabilities are presented in the context of using the ground-water model MODFLOW-2000 and the universal inverse program UCODE_2005. The method used to calculate the OPR and PPR statistics is based on the linear equation for prediction standard deviation. Using sensitivities and other information, OPR-PPR calculates (a) the percent increase in the prediction standard deviation that results when one or more existing observations are omitted from the calibration data set; (b) the percent decrease in the prediction standard deviation that results when one or more potential observations are added to the calibration data set; or (c) the percent decrease in the prediction standard deviation that results when potential information on one
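The OPR statistic described above follows directly from first-order (linear) theory: compute the prediction standard deviation from the sensitivity matrices, drop an observation, and report the percent increase. A numpy sketch with synthetic sensitivities (not OPR-PPR's input files or its MODFLOW/UCODE workflow):

```python
# Linear-theory sketch of the OPR statistic: percent change in prediction
# standard deviation when one observation is omitted.
import numpy as np

rng = np.random.default_rng(1)
J = rng.normal(size=(8, 3))          # d(simulated obs)/d(parameters), synthetic
s = rng.normal(size=3)               # d(prediction)/d(parameters), synthetic

def pred_sd(Jmat):
    """Prediction standard deviation from first-order theory."""
    return float(np.sqrt(s @ np.linalg.inv(Jmat.T @ Jmat) @ s))

base = pred_sd(J)
# OPR(i): percent increase in prediction sd when observation i is omitted.
opr = np.array([100.0 * (pred_sd(np.delete(J, i, axis=0)) - base) / base
                for i in range(J.shape[0])])
```

Because removing a row can only shrink the information matrix J'J, every OPR value is nonnegative: omitting data never reduces the predicted uncertainty.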
ERIC Educational Resources Information Center
Nakhanu, Shikuku Beatrice; Musasia, Amadalo Maurice
2015-01-01
The topic Linear Programming is included in the compulsory Kenyan secondary school mathematics curriculum at form four. The topic provides skills for determining best outcomes in a given mathematical model involving some linear relationship. This technique has found application in business, economics as well as various engineering fields. Yet many…
Linear ground-water flow, flood-wave response program for programmable calculators
Kernodle, John Michael
1978-01-01
Two programs are documented which solve a discretized analytical equation derived to determine head changes at a point in a one-dimensional ground-water flow system. The programs, written for programmable calculators, are in widely divergent but commonly encountered languages and serve to illustrate the adaptability of the linear model to use in situations where access to true computers is not possible or economical. The analytical method assumes a semi-infinite aquifer which is uniform in thickness and hydrologic characteristics, bounded on one side by an impermeable barrier and on the other parallel side by a fully penetrating stream in complete hydraulic connection with the aquifer. Ground-water heads may be calculated for points along a line which is perpendicular to the impermeable barrier and the fully penetrating stream. Head changes at the observation point are dependent on (1) the distance between that point and the impermeable barrier, (2) the distance between the line of stress (the stream) and the impermeable barrier, (3) aquifer diffusivity, (4) time, and (5) head changes along the line of stress. The primary application of the programs is to determine aquifer diffusivity by the flood-wave response technique. (Woodard-USGS)
NASA Technical Reports Server (NTRS)
Young, Katherine C.; Sobieszczanski-Sobieski, Jaroslaw
1988-01-01
This project has two objectives. The first is to determine whether, relative to the feasible-directions algorithm, linear programming techniques improve performance on design optimization problems with a large number of design variables and constraints. The second is to determine whether using the Kreisselmeier-Steinhauser (KS) function to replace the constraints with a single constraint reduces the total cost of optimization. Comparisons are made using solutions obtained with linear and nonlinear methods. The results indicate that there is no cost saving in using the linear method or in using the KS function to replace constraints.
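The KS function mentioned above aggregates many constraints g_j(x) <= 0 into one smooth, conservative envelope. A minimal sketch of the standard shifted form (the shift by the maximum is a common numerical-stability device; the draw-down factor rho = 50 is an illustrative choice, not a value from the report):

```python
import numpy as np

def ks_aggregate(g, rho=50.0):
    """Kreisselmeier-Steinhauser envelope of constraint values g_j.
    Returns a smooth upper bound on max(g) that tightens as rho grows."""
    g = np.asarray(g, dtype=float)
    gmax = g.max()  # shift before exponentiating to avoid overflow
    return gmax + np.log(np.exp(rho * (g - gmax)).sum()) / rho

# The KS value never undercuts the true maximum, and the gap is at most ln(n)/rho.
g = [-0.5, -0.1, -0.3]
ks = ks_aggregate(g, rho=50.0)
```

Replacing the constraint set with the single condition ks_aggregate(g(x)) <= 0 is what allows an optimizer to handle one smooth constraint instead of many.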
Use of linear programming to calculate dwell times for the design of petal tools.
Santiago-Alvarado, Agustin; González-García, Jorge; Castañeda-Roldan, Cuauhtémoc; Cordero-Dávila, Alberto; Vera-Díaz, Erika; Robledo-Sánchez, Carlos Ignacio
2007-07-20
Two constraints in the design of a petal tool are that the angles defining it must all be positive and that the wear must never exceed the desired wear. The first constraint is equivalent to requiring positive dwell times of a small solid tool. In view of the foregoing, we present a design of petal tools that are used to generate conic surfaces from their nearest spheres and that correct the profile of a surface being polished. We study the optimal angular sizes of a petal tool, which are found after using linear programming to calculate the optimal dwell times of a set of complete annular tools placed in different zones of the glass surface. We report numerical results for the designed petal tools.
Approximating high-dimensional dynamics by barycentric coordinates with linear programming
Hirata, Yoshito; Aihara, Kazuyuki; Suzuki, Hideyuki; Shiro, Masanori; Takahashi, Nozomu; Mas, Paloma
2015-01-15
The increasing development of novel methods and techniques facilitates the measurement of high-dimensional time series but challenges our ability to model and predict them accurately. A general mathematical model requires many parameters, which are difficult to fit from the relatively short high-dimensional time series typically observed. Here, we propose a novel method to accurately model a high-dimensional time series. Our method extends barycentric coordinates to high-dimensional phase space by employing linear programming and treating the approximation errors explicitly. The extension helps to produce free-running time-series predictions that preserve typical topological, dynamical, and/or geometric characteristics of the underlying attractors more accurately than the widely used radial basis function model. The method can be broadly applied, from helping to improve weather forecasting, to creating electronic instruments that sound more natural, to comprehensively understanding complex biological data.
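The core building block described above, expressing a query point as a convex combination of neighboring points while letting an LP minimize the approximation error, can be sketched as follows. This is an illustrative formulation (L1 error, scipy's `linprog`), not the authors' exact setup:

```python
import numpy as np
from scipy.optimize import linprog

def barycentric_weights(x, neighbors):
    """Approximate x as a convex combination of neighbor points,
    minimizing the L1 approximation error via linear programming.
    `neighbors` is a (k, d) array of k points in d dimensions."""
    X = np.asarray(neighbors, dtype=float).T        # (d, k)
    x = np.asarray(x, dtype=float)
    d, k = X.shape
    # variables: w_1..w_k (weights), e_1..e_d (per-coordinate error bounds)
    c = np.r_[np.zeros(k), np.ones(d)]
    A_ub = np.block([[X, -np.eye(d)], [-X, -np.eye(d)]])   # |Xw - x| <= e
    b_ub = np.r_[x, -x]
    A_eq = np.r_[np.ones(k), np.zeros(d)][None, :]         # weights sum to 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * (k + d))
    return res.x[:k], res.fun      # weights and total L1 error

# A point inside the triangle of its neighbors is represented exactly (zero error).
w, err = barycentric_weights([0.5, 0.5], [[0, 0], [1, 0], [0, 1]])
```

Predictions then follow by applying the same weights to the neighbors' successor states.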
NASA Technical Reports Server (NTRS)
Armstrong, E. S.
1975-01-01
A digital computer program (ORACLS) for implementing the optimal regulator theory approach to the design of controllers for linear time-invariant systems is described. The user-oriented program employs the latest numerical techniques and is applicable to both the digital and continuous control problems.
Transonic-Small-Disturbance and Linear Analyses for the Active Aeroelastic Wing Program
NASA Technical Reports Server (NTRS)
Wiesman, Carol D.; Silva, Walter A.; Spain, Charles V.; Heeg, Jennifer
2005-01-01
Analysis serves many roles in the Active Aeroelastic Wing (AAW) program. It has been employed to ensure safe testing of both a flight vehicle and wind tunnel model, has formulated models for control law design, has provided comparison data for validation of experimental methods and has addressed several analytical research topics. Aeroelastic analyses using mathematical models of both the flight vehicle and the wind tunnel model configurations have been conducted. Static aeroelastic characterizations of the flight vehicle and wind tunnel model have been produced in the transonic regime and at low supersonic Mach numbers. The flight vehicle has been analyzed using linear aerodynamic theory and transonic small disturbance theory. Analyses of the wind-tunnel model were performed using only linear methods. Research efforts conducted through these analyses include defining regions of the test space where transonic effects play an important role and investigating transonic similarity. A comparison of these aeroelastic analyses for the AAW flight vehicle is presented in this paper. Results from a study of transonic similarity are also presented. Data sets from these analyses include pressure distributions, stability and control derivatives, control surface effectiveness, and vehicle deflections.
PAPR reduction in FBMC using an ACE-based linear programming optimization
NASA Astrophysics Data System (ADS)
van der Neut, Nuan; Maharaj, Bodhaswar TJ; de Lange, Frederick; González, Gustavo J.; Gregorio, Fernando; Cousseau, Juan
2014-12-01
This paper presents four novel techniques for peak-to-average power ratio (PAPR) reduction in filter bank multicarrier (FBMC) modulation systems. As the main contribution, the approach extends current active constellation extension (ACE) PAPR reduction methods, as used in orthogonal frequency division multiplexing (OFDM), to an FBMC implementation. The four techniques fall into two groups: linear programming optimization ACE-based techniques and smart gradient-project (SGP) ACE techniques. The linear programming (LP)-based techniques compensate for the symbol overlaps by utilizing a frame-based approach and provide a theoretical upper bound on the achievable performance of the overlapping ACE techniques. The overlapping ACE techniques, on the other hand, can handle symbol-by-symbol processing. Furthermore, as a result of FBMC properties, the proposed techniques do not require side-information transmission. The PAPR performance of the techniques is shown to match, or in some cases improve on, current PAPR techniques for FBMC. Initial analysis of the computational complexity of the SGP techniques indicates that the complexity issues with PAPR reduction in FBMC implementations can be addressed. The out-of-band interference introduced by the techniques is investigated, and it is shown that the interference can be compensated for whilst still maintaining decent PAPR performance. Additional results are provided through a study of the PAPR reduction of the proposed techniques at a fixed clipping probability. The bit error rate (BER) degradation is investigated to ensure that the trade-off in terms of BER degradation is not too severe. As illustrated by exhaustive simulations, the SGP ACE-based techniques proposed are ideal candidates for practical implementation in systems employing the low-complexity polyphase implementation of FBMC modulators. The methods are shown to offer significant PAPR reduction and increase the feasibility of FBMC as
Tetens, Inge; Dejgård Jensen, Jørgen; Smed, Sinne; Gabrijelčič Blenkuš, Mojca; Rayner, Mike; Darmon, Nicole; Robertson, Aileen
2016-01-01
Background Food-Based Dietary Guidelines (FBDGs) are developed to promote healthier eating patterns, but increasing food prices may make healthy eating less affordable. The aim of this study was to design a range of cost-minimized, nutritionally adequate, health-promoting food baskets (FBs) that help prevent both micronutrient inadequacy and diet-related non-communicable diseases at the lowest cost. Methods Average prices for 312 foods were collected within the Greater Copenhagen area. The cost and nutrient content of five different cost-minimized FBs for a family of four were calculated per day using linear programming. The FBs were defined using five different constraints: cultural acceptability (CA), or dietary guidelines (DG), or nutrient recommendations (N), or cultural acceptability and nutrient recommendations (CAN), or dietary guidelines and nutrient recommendations (DGN). The variety and number of foods in each of the resulting five baskets were increased by limiting the relative share of individual foods. Results The one-day version of N contained only 12 foods at the minimum cost of DKK 27 (€ 3.6). The CA, DG, and DGN cost about twice this, and the CAN cost ~DKK 81 (€ 10.8). The baskets with the greater variety of foods contained from 70 (CAN) to 134 (DGN) foods and cost between DKK 60 (€ 8.1, N) and DKK 125 (€ 16.8, DGN). Ensuring that the food baskets cover both dietary guidelines and nutrient recommendations doubled the cost, while adding cultural acceptability (CAN) tripled it. Conclusion Use of linear programming facilitates the generation of low-cost food baskets that are nutritionally adequate, health promoting, and culturally acceptable. PMID:27760131
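The cost-minimized basket formulation above is the classical diet LP: minimize price subject to nutrient floors. A toy sketch with three invented foods and two invented nutrient constraints (not the 312-food Copenhagen data set used in the study):

```python
from scipy.optimize import linprog

# Illustrative prices and nutrient contents per 100 g serving.
costs   = [0.8, 2.5, 0.4]       # DKK per 100 g: oats, fish, potatoes
energy  = [389, 120, 77]        # kcal per 100 g
protein = [16.9, 22.0, 2.0]     # g per 100 g

# linprog minimizes and takes "<=" rows, so ">= floor" becomes "-(...) <= -floor".
A_ub = [[-e for e in energy], [-p for p in protein]]
b_ub = [-2000, -60]             # at least 2000 kcal and 60 g protein per day
res = linprog(costs, A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, 8)] * 3)   # cap each food at 800 g/day
min_cost = res.fun              # minimum daily basket cost
```

The study's variety constraints correspond to tightening the upper bounds so no single food dominates the basket.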
NASA Technical Reports Server (NTRS)
Wei, Peng; Sridhar, Banavar; Chen, Neil Yi-Nan; Sun, Dengfent
2012-01-01
A class of strategies has been proposed to reduce contrail formation in the United States airspace. A 3D grid based on weather data and aircraft cruising altitude levels is adjusted to avoid areas of persistent contrail potential while taking fuel efficiency into consideration. In this paper, the authors introduce a contrail avoidance strategy on the 3D grid that considers additional operationally feasible constraints from an air traffic controller's perspective. First, shifting too many aircraft to the same cruising level would make the miles-in-trail at that level smaller than the safety separation threshold, and the high density of aircraft at one cruising level may exceed the traffic controller's workload limit; therefore, the new model restricts the total number of aircraft at each level. Second, the aircraft count variation between successive intervals cannot be too drastic, since the workload of managing climbing/descending aircraft is much larger than that of managing cruising aircraft. The contrail reduction is formulated as an integer programming problem, and the problem is shown to have the property of total unimodularity. Solving the corresponding relaxed linear program with the simplex method therefore provides an optimal and integral solution. Simulation results are provided to illustrate the methodology.
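The total-unimodularity argument above can be seen on a toy instance: assigning flights to capacity-limited cruising levels has a transportation-problem constraint matrix, so the plain LP relaxation already returns 0/1 values. The costs and capacities below are invented for illustration, not the paper's weather-derived grid:

```python
import numpy as np
from scipy.optimize import linprog

# 4 flights, 3 cruising levels; cost[i][j] penalizes contrail/fuel impact
# of flight i at level j (invented numbers). Each level holds at most 2 aircraft.
cost = np.array([[3, 1, 4],
                 [2, 5, 1],
                 [4, 2, 2],
                 [1, 3, 5]], dtype=float)
cap = [2, 2, 2]
n, m = cost.shape
c = cost.ravel()                         # x[i*m + j] = 1 if flight i flies level j

A_eq = np.zeros((n, n * m)); b_eq = np.ones(n)
for i in range(n):                       # each flight takes exactly one level
    A_eq[i, i * m:(i + 1) * m] = 1
A_ub = np.zeros((m, n * m)); b_ub = np.array(cap, dtype=float)
for j in range(m):                       # per-level aircraft-count cap
    A_ub[j, j::m] = 1

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=(0, 1))
# Transportation structure is totally unimodular, so the simplex vertex
# solution is already integral -- no branch-and-bound needed.
x = res.x.reshape(n, m)
```

This is exactly why the paper can solve the relaxed LP with the simplex method and still obtain an integral assignment.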
A two-stage sequential linear programming approach to IMRT dose optimization
Zhang, Hao H; Meyer, Robert R; Wu, Jianzhou; Naqvi, Shahid A; Shi, Leyuan; D’Souza, Warren D
2010-01-01
The conventional IMRT planning process involves two stages: the first consists of fast but approximate idealized pencil-beam dose calculations and dose optimization, and the second consists of discretization of the intensity maps followed by intensity map segmentation and a more accurate final dose calculation corresponding to physical beam apertures. Consequently, there can be differences between the presumed dose distribution from pencil-beam calculations and optimization and the more accurately computed dose distribution from beam segments, which takes collimator-specific effects into account. IMRT optimization is computationally expensive, which has led to the use of heuristic approaches (e.g., simulated annealing and genetic algorithms) that do not take a global view of the solution space. We modify the traditional two-stage IMRT optimization process by augmenting the second stage with accurate Monte Carlo-based kernel-superposition dose calculations corresponding to beam apertures, combined with an exact mathematical-programming-based sequential optimization approach that uses linear programming (SLP). Our approach was tested on three challenging clinical test cases with multileaf collimator constraints corresponding to two vendors. We compared our approach to the conventional IMRT planning approach, a direct-aperture approach, and a segment-weight optimization approach. Our results in all three cases indicate that the SLP approach outperformed the other approaches, achieving superior critical-structure sparing. Convergence of our approach is also demonstrated. Finally, our approach has been integrated with a commercial treatment planning system and may be utilized clinically. PMID:20071764
NASA Astrophysics Data System (ADS)
Takabe, Satoshi; Hukushima, Koji
2016-05-01
Typical behavior of the linear programming (LP) problem is studied as a relaxation of the minimum vertex cover (min-VC), a type of integer programming (IP) problem. A lattice-gas model on the Erdös-Rényi random graphs of α-uniform hyperedges is proposed to express both the LP and IP problems of the min-VC in the common statistical mechanical model with a one-parameter family. Statistical mechanical analyses reveal for α=2 that the LP optimal solution is typically equal to that given by the IP below the critical average degree c=e in the thermodynamic limit. The critical threshold for good accuracy of the relaxation extends the mathematical result c=1 and coincides with the replica symmetry-breaking threshold of the IP. The LP relaxation for the minimum hitting sets with α≥3, minimum vertex covers on α-uniform random graphs, is also studied. Analytic and numerical results strongly suggest that the LP relaxation fails to estimate optimal values above the critical average degree c=e/(α-1) where the replica symmetry is broken. PMID:27301006
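The LP relaxation of minimum vertex cover studied above is easy to state concretely: minimize the sum of vertex variables subject to x_u + x_v >= 1 on every edge, with 0 <= x <= 1. A small sketch showing both a tight instance and the classic half-integral failure mode (this toy does not reproduce the paper's random-graph analysis):

```python
import numpy as np
from scipy.optimize import linprog

def vc_lp_relaxation(n, edges):
    """LP relaxation of minimum vertex cover on n vertices:
    minimize sum(x) s.t. x_u + x_v >= 1 per edge, 0 <= x <= 1."""
    A_ub = np.zeros((len(edges), n))
    for k, (u, v) in enumerate(edges):
        A_ub[k, u] = A_ub[k, v] = -1.0     # -(x_u + x_v) <= -1
    res = linprog(np.ones(n), A_ub=A_ub, b_ub=-np.ones(len(edges)),
                  bounds=(0, 1))
    return res.fun, res.x

# On a 3-vertex path the relaxation is tight (value 1, the middle vertex)...
lp_path, _ = vc_lp_relaxation(3, [(0, 1), (1, 2)])
# ...but on a triangle every x_i = 1/2 is optimal, giving 3/2 < 2,
# the true integer optimum -- the relaxation underestimates the IP value.
lp_tri, x_tri = vc_lp_relaxation(3, [(0, 1), (1, 2), (0, 2)])
```

Odd cycles are the smallest structures on which this gap appears, which is the kind of relaxation failure the statistical-mechanical analysis characterizes at scale.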
Linear Mode Photon Counting LADAR Camera Development for the Ultra-Sensitive Detector Program
NASA Astrophysics Data System (ADS)
Jack, M.; Bailey, S.; Edwards, J.; Burkholder, R.; Liu, K.; Asbrock, J.; Randall, V.; Chapman, G.; Riker, J.
Advanced LADAR receivers enable high-accuracy identification of targets at ranges beyond standard EO/IR sensors. Increased sensitivity of these receivers will enable reductions in laser power, hence more affordable, smaller sensors as well as much longer detection ranges. Raytheon has made a recent breakthrough in LADAR architecture by combining very low noise (~30 electron) front-end amplifiers with moderate-gain (>60) avalanche photodiodes. This combination enables detection of laser pulse returns containing from as few as one photon up to thousands of photons. Because a lower APD gain is utilized, the sensor operation differs dramatically from traditional Geiger-mode APD LADARs. Linear-mode photon counting LADAR offers advantages including determination of intensity as well as time of arrival, nanosecond recovery times, and discrimination between radiation events and signals. In our talk we will review the basic amplifier and APD component performance, the front-end architecture, the demonstration of single-photon detection using a simple 4 x 4 SCA, and the design of a fully integrated photon counting camera under development in support of the Ultra-Sensitive Detector (USD) program sponsored by the Air Force Research Laboratory at Kirtland AFB, NM. Work Supported in Part by AFRL - Contract # FA8632-05-C-2454, Dr. Jim Riker, Program Manager.
Aspect-Object Alignment with Integer Linear Programming in Opinion Mining
Zhao, Yanyan; Qin, Bing; Liu, Ting; Yang, Wei
2015-01-01
Target extraction is an important task in opinion mining. In this task, a complete target consists of an aspect and its corresponding object. However, previous work has always simply regarded the aspect as the target itself and has ignored the important "object" element. Thus, these studies have addressed incomplete targets, which are of limited use for practical applications. This paper proposes a novel and important sentiment analysis task, termed aspect-object alignment, to solve the "object neglect" problem. The objective of this task is to obtain the correct corresponding object for each aspect. We design a two-step framework for this task. We first provide an aspect-object alignment classifier that incorporates three sets of features, namely, the basic, relational, and special target features. However, the objects that are assigned to aspects in a sentence often contradict each other and possess many complicated features that are difficult to incorporate into a classifier. To resolve these conflicts, we impose two types of constraints in the second step: intra-sentence constraints and inter-sentence constraints. These constraints are encoded as linear formulations, and Integer Linear Programming (ILP) is used as an inference procedure to obtain a final global decision that is consistent with the constraints. Experiments on a corpus in the camera domain demonstrate that the three feature sets used in the aspect-object alignment classifier are effective in improving its performance. Moreover, the classifier with ILP inference performs better than the classifier without it, thereby illustrating that the two types of constraints that we impose are beneficial. PMID:26000635
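The ILP inference step described above, accepting classifier scores but forcing a globally consistent assignment, can be sketched on a toy instance. The scores and the single intra-sentence agreement constraint below are invented for illustration; they are not the paper's feature set or corpus:

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# 3 aspects, 2 candidate objects; score[a][o] = classifier confidence that
# aspect a aligns to object o (invented numbers). y[a*n_o + o] is binary.
score = np.array([[0.9, 0.2],
                  [0.4, 0.6],
                  [0.1, 0.8]])
n_a, n_o = score.shape
c = -score.ravel()                      # milp minimizes; negate to maximize

cons = []
for a in range(n_a):                    # each aspect gets exactly one object
    row = np.zeros(n_a * n_o); row[a * n_o:(a + 1) * n_o] = 1
    cons.append(LinearConstraint(row, 1, 1))
for o in range(n_o):                    # toy consistency: aspects 0 and 1 agree
    row = np.zeros(n_a * n_o)
    row[0 * n_o + o] = 1; row[1 * n_o + o] = -1
    cons.append(LinearConstraint(row, 0, 0))

res = milp(c, constraints=cons, integrality=np.ones(n_a * n_o),
           bounds=Bounds(0, 1))
y = res.x.reshape(n_a, n_o)             # the globally consistent alignment
```

Without the agreement constraint the best assignment scores 2.3; with it, the ILP trades local confidence for consistency and settles on 2.1, which is the kind of correction the paper's inference step performs.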
Chen, Vivian Yi-Ju; Yang, Tse-Chuan
2012-08-01
An increasing interest in exploring spatial non-stationarity has generated several specialized analytic software programs; however, few of these programs can be integrated natively into a well-developed statistical environment such as SAS. We not only developed a set of SAS macro programs to fill this gap, but also expanded geographically weighted generalized linear modeling (GWGLM) by integrating the strengths of SAS into the GWGLM framework. Three features distinguish our work. First, the macro programs of this study provide more kernel weighting functions than the existing programs. Second, with our code, users are able to better specify the bandwidth selection process compared to the capabilities of existing programs. Third, the development of the macro programs is fully embedded in the SAS environment, providing great potential for future exploration of complicated spatially varying coefficient models in other disciplines. We provide three empirical examples to illustrate the use of the SAS macro programs and to demonstrate the advantages explained above.
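The kernel weighting functions mentioned above are the core ingredient of geographically weighted models: observations are down-weighted by distance from the regression point. Two of the common choices, sketched in Python for concreteness (the abstract does not list which kernels the SAS macros implement):

```python
import numpy as np

def gaussian_kernel(d, h):
    """Gaussian distance-decay weight; d = distance, h = bandwidth."""
    return np.exp(-0.5 * (d / h) ** 2)

def bisquare_kernel(d, h):
    """Bisquare weight: smooth decay to exactly zero beyond the bandwidth."""
    w = (1 - (d / h) ** 2) ** 2
    return np.where(d < h, w, 0.0)
```

A local regression at each point then solves a weighted GLM with these weights, and bandwidth selection (e.g., by cross-validation) tunes h.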
A linear programming model to optimize diets in environmental policy scenarios.
Moraes, L E; Wilen, J E; Robinson, P H; Fadel, J G
2012-03-01
The objective was to develop a linear programming model to formulate diets for dairy cattle when environmental policies are present and to examine the effects of these policies on diet formulation and on dairy cattle nitrogen and mineral excretions as well as methane emissions. The model was developed as a minimum-cost diet model. Two types of environmental policies were examined: a tax and a constraint on methane emissions. A tax was incorporated to simulate a greenhouse gas emissions tax policy, with prices of carbon credits in the current carbon markets attributed to the methane production variable. Three independent runs were made, using carbon dioxide equivalent prices of $5, $17, and $250/t. A constraint was incorporated into the model to simulate the second type of environmental policy, reducing methane emissions by predetermined amounts. The linear programming formulation of this second alternative enabled the calculation of the marginal costs of reducing methane emissions. Methane emission and manure production by dairy cows were calculated according to published equations, and nitrogen and mineral excretions were calculated by mass conservation laws. Results were compared with the values generated by a base least-cost model. Current prices of the carbon credit market did not appear onerous enough to have a substantive incentive effect in reducing methane emissions and altering diet costs of our hypothetical dairy herd. However, when emissions of methane were assumed to be reduced by 5, 10, and 13.5% from the base model, total diet costs increased by 5, 19.1, and 48.5%, respectively. Either these increased costs would be passed on to the consumer or dairy producers would go out of business. Nitrogen and potassium excretions were increased by 16.5 and 16.7% with a 13.5% reduction in methane emissions from the base model. Imposing methane restrictions would further increase the demand for grains and other human-edible crops, which is not a progressive
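The marginal cost of reducing methane that the LP formulation yields is simply the dual (shadow) price of the emission constraint. A two-feed toy sketch with invented prices, energy requirement, and emission factors (not the paper's published equations or data):

```python
from scipy.optimize import linprog

cost    = [0.20, 0.35]     # $/kg: forage, grain (invented)
energy  = [2.0, 3.2]       # Mcal/kg
methane = [30.0, 18.0]     # g CH4 per kg of feed

def ration(ch4_cap):
    """Least-cost ration meeting an energy floor under a methane cap."""
    res = linprog(cost,
                  A_ub=[[-e for e in energy], methane],  # energy floor, CH4 cap
                  b_ub=[-60.0, ch4_cap],
                  bounds=[(0, None)] * 2)
    # dual value of the methane row = objective change per gram of allowed CH4
    return res.fun, res.ineqlin.marginals[1]

base_cost, _ = ration(1e9)             # cap so loose it never binds
tight_cost, shadow = ration(500.0)     # binding cap: cost rises, dual is nonzero
```

Here tightening the cap shifts the ration from cheap, methane-heavy forage toward grain, and the shadow price quantifies exactly the marginal abatement cost the authors report.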
Dyehouse, Melissa; Bennett, Deborah; Harbor, Jon; Childress, Amy; Dark, Melissa
2009-08-01
Logic models are based on linear relationships between program resources, activities, and outcomes, and have been used widely to support both program development and evaluation. While useful in describing some programs, the linear nature of the logic model makes it difficult to capture the complex relationships within larger, multifaceted programs. Causal loop diagrams based on a systems thinking approach can better capture a multidimensional, layered program model while providing a more complete understanding of the relationship between program elements, which enables evaluators to examine influences and dependencies between and within program components. Few studies describe how to conceptualize and apply systems models for educational program evaluation. The goal of this paper is to use our NSF-funded, Interdisciplinary GK-12 project: Bringing Authentic Problem Solving in STEM to Rural Middle Schools to illustrate a systems thinking approach to model a complex educational program to aid in evaluation. GK-12 pairs eight teachers with eight STEM doctoral fellows per program year to implement curricula in middle schools. We demonstrate how systems thinking provides added value by modeling the participant groups, instruments, outcomes, and other factors in ways that enhance the interpretation of quantitative and qualitative data. Limitations of the model include added complexity. Implications include better understanding of interactions and outcomes and analyses reflecting interacting or conflicting variables.
NASA Astrophysics Data System (ADS)
Dubey, Dipti; Chandra, Suresh; Mehra, Aparna
2015-05-01
In this paper, we study multi-objective flexible linear programming (MOFLP) problems (or fuzzy multi-objective linear programming problems) in the heterogeneous bipolar framework. Bipolarity allows us to distinguish between negative and positive preferences: negative preferences denote what is unacceptable, while positive preferences are less restrictive and express what is desirable. This viewpoint enables us to handle fuzzy sets representing constraints and objective functions separately and to combine them in distinct ways. A solution concept of Pareto optimality for MOFLP problems is defined, and an approach is proposed to single out such a solution with the highest possible degree of feasibility.
Linear genetic programming application for successive-station monthly streamflow prediction
NASA Astrophysics Data System (ADS)
Danandeh Mehr, Ali; Kahya, Ercan; Yerdelen, Cahit
2014-09-01
In recent decades, artificial intelligence (AI) techniques have emerged as a branch of computer science for modeling a wide range of hydrological phenomena, and a number of studies have compared these techniques in search of approaches that are more accurate and more broadly applicable. In this study, we examined the ability of the linear genetic programming (LGP) technique to model the successive-station monthly streamflow process as an applied alternative for streamflow prediction. A comparative efficiency study between LGP and three different artificial neural network algorithms, namely feed-forward back propagation (FFBP), generalized regression neural networks (GRNN), and radial basis function (RBF) networks, is also presented. To this end, we first put forward six different successive-station monthly streamflow prediction scenarios trained with LGP and FFBP using the field data recorded at two gauging stations on the Çoruh River, Turkey. Based on Nash-Sutcliffe and root mean squared error measures, we then compared the efficiency of these techniques and selected the best prediction scenario. Finally, the GRNN and RBF algorithms were used to restructure the selected scenario and were compared with the corresponding FFBP and LGP models. Our results indicate the promising role of LGP for successive-station monthly streamflow prediction, providing more accurate results than all of the ANN algorithms. We found an explicit LGP-based expression, evolved using only the basic arithmetic functions, to be the best prediction model for the river; it uses the records of both the target and upstream stations.
An improved exploratory search technique for pure integer linear programming problems
NASA Technical Reports Server (NTRS)
Fogle, F. R.
1990-01-01
The development is documented of a heuristic method for the solution of pure integer linear programming problems. The procedure draws its methodology from the ideas of Hooke and Jeeves type 1 and 2 exploratory searches, greedy procedures, and neighborhood searches. It uses an efficient rounding method to obtain its first feasible integer point from the optimal continuous solution obtained via the simplex method. Since this method is based entirely on simple addition or subtraction of one to each variable of a point in n-space and the subsequent comparison of candidate solutions to a given set of constraints, it facilitates significant complexity improvements over existing techniques. It also obtains the same optimal solution found by the branch-and-bound technique in 44 of 45 small to moderate size test problems. Two example problems are worked in detail to show the inner workings of the method. Furthermore, using an established weighted scheme for comparing computational effort involved in an algorithm, a comparison of this algorithm is made to the more established and rigorous branch-and-bound method. A computer implementation of the procedure, in PC compatible Pascal, is also presented and discussed.
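The core of the procedure described above, namely rounding the continuous simplex optimum to a feasible integer point and then probing ±1 moves on each variable, can be sketched as follows. This is an illustrative reconstruction, not Fogle's Pascal implementation; the tiny problem instance and the greedy acceptance rule are assumptions.

```python
import numpy as np

def exploratory_search(c, A, b, x0, max_iter=1000):
    """Greedy +/-1 neighborhood search for  max c.x  s.t.  A x <= b, x >= 0 integer.

    x0 must be a feasible integer starting point (e.g. obtained by rounding
    the continuous simplex optimum toward feasibility)."""
    x = np.asarray(x0, dtype=int)

    def feasible(y):
        return np.all(y >= 0) and np.all(A @ y <= b)

    assert feasible(x), "starting point must be feasible"
    for _ in range(max_iter):
        best, best_val = None, c @ x
        # probe adding/subtracting one on each coordinate in turn
        for i in range(len(x)):
            for step in (1, -1):
                y = x.copy()
                y[i] += step
                if feasible(y) and c @ y > best_val:
                    best, best_val = y, c @ y
        if best is None:          # no improving +/-1 move: local optimum
            return x
        x = best
    return x

# Tiny instance:  max 3x1 + 2x2  s.t.  x1 + x2 <= 4,  x1 <= 3
c = np.array([3, 2])
A = np.array([[1, 1], [1, 0]])
b = np.array([4, 3])
print(exploratory_search(c, A, b, [0, 0]))  # -> [3 1]
```

As in the paper's method, the search is a local heuristic: it matches the branch-and-bound optimum on many instances but carries no general optimality guarantee.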
Mitsos, Alexander; Melas, Ioannis N; Morris, Melody K; Saez-Rodriguez, Julio; Lauffenburger, Douglas A; Alexopoulos, Leonidas G
2012-01-01
Modeling of signal transduction pathways plays a major role in understanding cell function and predicting cellular response. Mathematical formalisms based on logic are relatively simple but can describe how signals propagate from one protein to the next, and have led to the construction of models that simulate a cell's response to environmental or other perturbations. Constrained fuzzy logic was recently introduced to train models against cell-specific data, yielding quantitative pathway models of specific cellular behavior. There are two major issues in this pathway optimization: (i) excessive CPU time requirements, and (ii) a loosely constrained optimization problem due to the lack of data relative to large signaling pathways. Herein, we address both issues: the former by reformulating the pathway optimization as a regular nonlinear optimization problem, and the latter with enhanced algorithms that pre- and post-process the signaling network to remove parts that cannot be identified given the experimental conditions. As a case study, we tackle the construction of cell-type-specific pathways in normal and transformed hepatocytes using medium- and large-scale functional phosphoproteomic datasets. The proposed non-linear programming (NLP) formulation allows for fast optimization of signaling topologies by combining the versatile nature of logic modeling with state-of-the-art optimization algorithms.
Cho, J H; Ahn, K H; Chung, W J; Gwon, E M
2003-01-01
A waste load allocation model using linear programming has been developed for economic water quality management. A modified QUAL2E model was used for water quality calculations, and transfer coefficients were derived from the calculated water quality. The allocation model was applied to the heavily polluted Gyungan River, located in South Korea. For water quality management of the river, two scenarios were proposed: Scenario 1 minimises the total waste load reduction in the river basin, while Scenario 2 minimises waste load reduction while considering regional equity. The waste loads to be reduced at each sub-basin and WWTP were determined so as to meet the water quality goal of the river. Application results indicate that advanced treatment is required for most of the existing WWTPs in the river basin, and that construction of new WWTPs and capacity expansion of existing plants are necessary. Distribution characteristics of pollution sources and pollutant loads in the river basin were analysed using Arc/View GIS. PMID:15137169
Efficient linear programming algorithm to generate the densest lattice sphere packings.
Marcotte, Étienne; Torquato, Salvatore
2013-06-01
Finding the densest sphere packing in d-dimensional Euclidean space R^d is an outstanding fundamental problem with relevance in many fields, including the ground states of molecular systems, colloidal crystal structures, coding theory, discrete geometry, number theory, and biological systems. Numerically generating the densest sphere packings becomes very challenging in high dimensions due to an exponentially increasing number of possible sphere contacts and sphere configurations, even for the restricted problem of finding the densest lattice sphere packings. In this paper we apply the Torquato-Jiao packing algorithm, which is a method based on solving a sequence of linear programs, to robustly reproduce the densest known lattice sphere packings for dimensions 2 through 19. We show that the TJ algorithm is appreciably more efficient at solving these problems than previously published methods. Indeed, in some dimensions, the former procedure can be as much as three orders of magnitude faster at finding the optimal solutions than earlier ones. We also study the suboptimal local density-maxima solutions (inherent structures or "extreme" lattices) to gain insight about the nature of the topography of the "density" landscape. PMID:23848802
[Elaboration, by linear programming, of new products from cereals and legumes].
Ballesteros, M N; Yépiz, G M; Grijalva, M I; Ramos, E; Valencia, M E
1984-03-01
The differing contents of essential amino acids in cereals and legumes bring about an overall increase in protein quality when these foods are consumed together. This study describes a least-cost formulation method, using linear programming, for preparing products based on cereals and legumes. The mixture was formulated under different constraints: a nutritional constraint, namely a given amino acid pattern, and a technological feasibility constraint, which depends on the type of product to be elaborated. From the formulation based on wheat, chick-pea, sorghum, and soybean flours, three products were developed: bread, tortillas and cookies; of these, bread was selected for further evaluation. The product was evaluated chemically by proximate composition analysis, and amino acids were determined by HPLC. Biological evaluation was performed by the PER and RPV methods, giving a PER of 1.69 for the developed bread and 0.68 for the control bread. The RPV of the developed product was 64.31% of that of lactalbumin, versus 23% for the control bread, which represents an increase of 41%. The sensory evaluation results did not indicate significant differences in taste, texture, color or overall acceptability of the developed bread as compared to the control.
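A least-cost formulation of this kind is a classic diet-style linear program. The sketch below, using `scipy.optimize.linprog`, is a minimal illustration only; the flour costs, protein contents, and the 16 g/100 g protein target are invented numbers, not the paper's data, and a real formulation would constrain each essential amino acid separately.

```python
from scipy.optimize import linprog

# Hypothetical data (NOT the paper's): cost per kg and protein per 100 g
# for three flours -- wheat, chick-pea, soybean.
cost    = [0.40, 0.90, 1.20]
protein = [10.0, 20.0, 36.0]

# Decision variables: fraction of each flour in the blend.
# Minimize blend cost subject to: fractions sum to 1, protein >= 16 g/100 g.
res = linprog(
    c=cost,
    A_ub=[[-p for p in protein]],   # -protein.x <= -16  <=>  protein.x >= 16
    b_ub=[-16.0],
    A_eq=[[1.0, 1.0, 1.0]],
    b_eq=[1.0],
    bounds=[(0.0, 1.0)] * 3,
    method="highs",
)
print(res.x.round(3), round(res.fun, 3))  # cheapest blend meeting the target
```

With these illustrative numbers the optimum blends the cheapest flour with the protein-richest one, which is exactly the trade-off the LP formalizes.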
Christodoulou, Manolis A; Kontogeorgou, Chrysa
2008-10-01
In recent years there has been a great effort to convert the existing air traffic control system into a novel system known as Free Flight. Free Flight is based on the concept that increasing international airspace capacity will grant more freedom to individual pilots during the en-route flight phase, thereby giving them the opportunity to alter flight paths in real time. Under the current system, pilots must request, and then receive, permission from air traffic controllers to alter flight paths. Understandably, the new system allows pilots to gain the upper hand in air traffic. At the same time, however, this freedom increases pilot responsibility: pilots face a new challenge in avoiding traffic that shares congested airspace. To ensure safety, an accurate system able to predict and prevent conflicts among aircraft is essential. Certain flight maneuvers exist to prevent flight disturbances or collisions, and these are grouped into the following categories: vertical, lateral and airspeed. This work focuses on airspeed maneuvers and introduces a new idea for the control of Free Flight, in three dimensions, using neural networks trained with examples prepared through non-linear programming.
Automatic design of synthetic gene circuits through mixed integer non-linear programming.
Huynh, Linh; Kececioglu, John; Köppe, Matthias; Tagkopoulos, Ilias
2012-01-01
Automatic design of synthetic gene circuits poses a significant challenge to synthetic biology, primarily due to the complexity of biological systems and the lack of rigorous optimization methods that can cope with the combinatorial explosion as the number of biological parts increases. Current optimization methods for synthetic gene design rely on heuristic algorithms that are usually not deterministic, deliver sub-optimal solutions, and provide no guarantees on convergence or error bounds. Here, we introduce an optimization framework for the problem of part selection in synthetic gene circuits that is based on mixed integer non-linear programming (MINLP), a deterministic method that finds the globally optimal solution and guarantees convergence in finite time. Given a synthetic gene circuit, a library of characterized parts, and user-defined constraints, our method can find the optimal selection of parts that satisfies the constraints and best approximates the objective function given by the user. We evaluated the proposed method in the design of three synthetic circuits (a toggle switch, a transcriptional cascade, and a band detector), with both experimentally constructed and synthetic promoter libraries. Scalability and robustness analysis shows that the proposed framework scales well with the library size and the solution space. The work described here is a step towards a unifying, realistic framework for the automated design of biological circuits. PMID:22536398
Earthquake mechanisms from linear-programming inversion of seismic-wave amplitude ratios
Julian, B.R.; Foulger, G.R.
1996-01-01
The amplitudes of radiated seismic waves contain far more information about earthquake source mechanisms than do first-motion polarities, but amplitudes are severely distorted by the effects of heterogeneity in the Earth. This distortion can be reduced greatly by using the ratios of amplitudes of appropriately chosen seismic phases, rather than simple amplitudes, but existing methods for inverting amplitude ratios are severely nonlinear and require computationally intensive searching methods to ensure that solutions are globally optimal. Searching methods are particularly costly if general (moment tensor) mechanisms are allowed. Efficient linear-programming methods, which do not suffer from these problems, have previously been applied to inverting polarities and wave amplitudes. We extend these methods to amplitude ratios, in which formulation an inequality constraint on an amplitude ratio takes the same mathematical form as a polarity observation. Three-component digital data for an earthquake at the Hengill-Grensdalur geothermal area in southwestern Iceland illustrate the power of the method. Polarities of P, SH, and SV waves, unusually well distributed on the focal sphere, cannot distinguish between diverse mechanisms, including a double couple. Amplitude ratios, on the other hand, clearly rule out the double-couple solution and require a large explosive isotropic component.
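The key observation is that a polarity observation becomes a linear inequality on the six independent moment-tensor components, which turns the inversion into a linear-programming feasibility problem. A minimal sketch with synthetic data follows; the excitation coefficients are random placeholders standing in for real Green's-function kernels, so this illustrates only the mathematical structure, not the seismological content.

```python
import numpy as np
from scipy.optimize import linprog

# Each observation contributes a row g of excitation coefficients such that
# the predicted amplitude is g . m, where m holds the 6 independent
# moment-tensor components.  A polarity observation then reads
# sign_i * (g_i . m) > 0 -- a linear inequality in m.
rng = np.random.default_rng(0)
G = rng.normal(size=(8, 6))                          # placeholder coefficients
m_true = np.array([1.0, -1.0, 0.0, 0.5, 0.0, 0.0])   # synthetic source
signs = np.sign(G @ m_true)                          # synthetic "observed" polarities

# Feasibility LP: find any m with sign_i * (g_i . m) >= 1
# (the margin 1 is arbitrary because the overall scale of m is unconstrained).
res = linprog(
    c=np.zeros(6),
    A_ub=-(signs[:, None] * G),
    b_ub=-np.ones(len(G)),
    bounds=[(None, None)] * 6,
    method="highs",
)
print(res.status)  # 0: a mechanism consistent with every polarity exists
```

An amplitude-ratio observation would add a pair of inequalities of exactly the same linear form, which is why the authors' extension fits the existing LP machinery.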
Poos, Alexandra M; Maicher, André; Dieckmann, Anna K; Oswald, Marcus; Eils, Roland; Kupiec, Martin; Luke, Brian; König, Rainer
2016-06-01
Understanding telomere length maintenance mechanisms is central in cancer biology, as their dysregulation is one of the hallmarks of immortalization of cancer cells. Important for this well-balanced control is the transcriptional regulation of the telomerase genes. We integrated Mixed Integer Linear Programming models into a comparative machine-learning-based approach to identify regulatory interactions that best explain the discrepancy in telomerase transcript levels between yeast mutants with deleted regulators showing aberrant telomere length and mutants with normal telomere length. We uncover novel regulators of telomerase expression, several of which affect histone levels or modifications. In particular, our results point to the transcription factors Sum1, Hst1 and Srb2 as being important for the regulation of EST1 transcription, and we validated the effect of Sum1 experimentally. We compiled our machine-learning method into a user-friendly R package that can be applied straightforwardly to similar problems integrating gene-regulator binding information and expression profiles of samples from, e.g., different phenotypes, diseases or treatments. PMID:26908654
Triple/quadruple patterning layout decomposition via novel linear programming and iterative rounding
NASA Astrophysics Data System (ADS)
Lin, Yibo; Xu, Xiaoqing; Yu, Bei; Baldick, Ross; Pan, David Z.
2016-03-01
As feature size of the semiconductor technology scales down to 10 nm and beyond, multiple patterning lithography (MPL) has become one of the most practical candidates for lithography, along with other emerging technologies such as extreme ultraviolet lithography (EUVL), e-beam lithography (EBL) and directed self-assembly (DSA). Due to the delay of EUVL and EBL, triple and even quadruple patterning are considered to be used for lower metal and contact layers with tight pitches. In the process of MPL, layout decomposition is the key design stage, where a layout is split into various parts and each part is manufactured through a separate mask. For metal layers, stitching may be allowed to resolve conflicts, while it is forbidden for contact and via layers. In this paper, we focus on the application of layout decomposition where stitching is not allowed, such as for contact and via layers. We propose a linear programming and iterative rounding (LPIR) solving technique to reduce the number of non-integers in the LP relaxation problem. Experimental results show that the proposed algorithms can provide high-quality decomposition solutions efficiently while introducing as few conflicts as possible.
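A generic LP-with-iterative-rounding loop of the kind described, solving the relaxation, fixing variables that come out fractional, and re-solving, can be sketched as below. This is a simplified illustration of the general idea, not the authors' LPIR algorithm for layout decomposition; the tiny 0/1 instance is invented.

```python
import numpy as np
from scipy.optimize import linprog

def lp_iterative_rounding(c, A_ub, b_ub, n, tol=1e-6, max_rounds=50):
    """Heuristic 0/1 solve of  min c.x  s.t.  A_ub x <= b_ub:
    repeatedly solve the LP relaxation and pin down fractional variables."""
    bounds = [(0.0, 1.0)] * n
    x = np.zeros(n)
    for _ in range(max_rounds):
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
        x = res.x
        frac = [i for i in range(n) if tol < x[i] < 1.0 - tol]
        if not frac:                        # relaxation came out integral
            return np.rint(x).astype(int)
        # fix the fractional variable closest to an integer, then re-solve
        i = min(frac, key=lambda j: min(x[j], 1.0 - x[j]))
        v = float(np.rint(x[i]))
        bounds[i] = (v, v)
    return np.rint(x).astype(int)

# Tiny instance:  max x1 + x2 + x3  s.t.  x1 + x2 <= 1.5,  x2 + x3 <= 1.5
c = np.array([-1.0, -1.0, -1.0])
A = np.array([[1.0, 1.0, 0.0], [0.0, 1.0, 1.0]])
b = np.array([1.5, 1.5])
x = lp_iterative_rounding(c, A, b, 3)
print(x)  # -> [1 0 1]
```

The paper's contribution lies in structuring the LP so that few variables are fractional in the first place; the loop above only shows why reducing the non-integer count matters, since each fractional variable costs another LP solve.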
Integer Linear Programming for Constrained Multi-Aspect Committee Review Assignment.
Karimzadehgan, Maryam; Zhai, Chengxiang
2012-07-01
Automatic review assignment can significantly improve the productivity of many people such as conference organizers, journal editors and grant administrators. A general setup of the review assignment problem involves assigning a set of reviewers on a committee to a set of documents to be reviewed under the constraint of review quota so that the reviewers assigned to a document can collectively cover multiple topic aspects of the document. No previous work has addressed such a setup of committee review assignments while also considering matching multiple aspects of topics and expertise. In this paper, we tackle the problem of committee review assignment with multi-aspect expertise matching by casting it as an integer linear programming problem. The proposed algorithm can naturally accommodate any probabilistic or deterministic method for modeling multiple aspects to automate committee review assignments. Evaluation using a multi-aspect review assignment test set constructed using ACM SIGIR publications shows that the proposed algorithm is effective and efficient for committee review assignments based on multi-aspect expertise matching.
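For the quota-constrained matching at the heart of such a formulation, the LP relaxation already suffices when the constraint matrix has transportation (network-flow) structure, because its optimal vertices are integral. The sketch below is a stripped-down illustration with hypothetical affinity scores and quotas; the paper's full model, with multi-aspect coverage constraints, would in general need a true ILP solver.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical affinity scores: rows are 3 reviewers, columns are 2 papers.
score = np.array([[0.9, 0.1],
                  [0.6, 0.7],
                  [0.2, 0.8]])
R, P = score.shape
k, q = 2, 2   # reviewers required per paper; max papers per reviewer

# Variables x[r, p] flattened row-major; maximize total affinity.
c = -score.ravel()

A_eq = np.zeros((P, R * P))            # each paper gets exactly k reviewers
for p in range(P):
    A_eq[p, p::P] = 1.0
b_eq = np.full(P, float(k))

A_ub = np.zeros((R, R * P))            # each reviewer takes at most q papers
for r in range(R):
    A_ub[r, r * P:(r + 1) * P] = 1.0
b_ub = np.full(R, float(q))

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0.0, 1.0)] * (R * P), method="highs")
x = np.rint(res.x).reshape(R, P).astype(int)
print(x)          # 0/1 assignment matrix
print(-res.fun)   # total affinity of the optimal assignment
```

Because the constraint matrix here is totally unimodular, rounding is a no-op: the LP optimum is already 0/1.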
A Mixed Integer Linear Program for Solving a Multiple Route Taxi Scheduling Problem
NASA Technical Reports Server (NTRS)
Montoya, Justin Vincent; Wood, Zachary Paul; Rathinam, Sivakumar; Malik, Waqar Ahmad
2010-01-01
Aircraft movements on taxiways at busy airports often create bottlenecks. This paper introduces a mixed integer linear program to solve a Multiple Route Aircraft Taxi Scheduling Problem. The outputs of the model are optimal taxi schedules, which include routing decisions for taxiing aircraft. The model extends an existing single-route formulation to include routing decisions. An efficient comparison framework is used to evaluate the multi-route formulation against the single-route formulation. The multi-route model is exercised for east-side airport surface traffic at Dallas/Fort Worth International Airport to determine whether any arrival taxi time savings can be achieved by allowing arrivals to have two taxi routes: a route that crosses an active departure runway and a perimeter route that avoids the crossing. Results indicate that the multi-route formulation yields reduced arrival taxi times over the single-route formulation only when the perimeter taxiway is used. In conditions where the departure aircraft are given an optimal and fixed takeoff sequence, cumulative arrival taxi time savings in the multi-route formulation can be as high as 3.6 hours more than in the single-route formulation. If the departure sequence is not optimal, the multi-route formulation yields smaller taxi time savings over the single-route formulation, but the average arrival taxi time is significantly decreased.
Lefkoff, L.J.; Gorelick, S.M.
1987-01-01
This FORTRAN-77 program helps solve a variety of aquifer management problems involving the control of groundwater hydraulics. It is intended for use with any standard mathematical programming package that accepts Mathematical Programming System input format. The program creates the input files to be used by the optimization program; these files contain all the hydrologic information and management objectives needed to solve the management problem. Used in conjunction with a mathematical programming code, the program identifies the pumping or recharge strategy that achieves a user's management objective while maintaining groundwater hydraulic conditions within desired limits. The objective may be linear or quadratic, and may involve the minimization of pumping and recharge rates or of variable pumping costs. The problem may contain constraints on groundwater heads, gradients, and velocities for a complex, transient hydrologic system. Linear superposition of solutions to the transient, two-dimensional groundwater flow equation is used in conjunction with the response matrix optimization method. A unit stress is applied at each decision well, and transient responses at all control locations are computed using a modified version of the U.S. Geological Survey two-dimensional aquifer simulation model. The program also computes discounted cost coefficients for the objective function and accounts for transient aquifer conditions.
NASA Technical Reports Server (NTRS)
Geyser, L. C.
1978-01-01
A digital computer program, DYGABCD, was developed that generates linearized, dynamic models of simulated turbofan and turbojet engines. DYGABCD is based on an earlier computer program, DYNGEN, which is capable of calculating simulated nonlinear steady-state and transient performance of one- and two-spool turbojet engines or two- and three-spool turbofan engines. Most control design techniques require linear system descriptions. For multiple-input/multiple-output systems such as turbine engines, state-space matrix descriptions of the system are often desirable. DYGABCD computes the state-space matrices commonly referred to as the A, B, C, and D matrices required for a linear system description. The report discusses the analytical approach and provides a user's manual, FORTRAN listings, and a sample case.
Technology Transfer Automated Retrieval System (TEKTRAN)
Ready-to-use therapeutic food (RUTF) is the standard of care for children suffering from noncomplicated severe acute malnutrition (SAM). The objective was to develop a comprehensive linear programming (LP) tool to create novel RUTF formulations for Ethiopia. A systematic approach that surveyed inter...
Modeling the distribution of ciliate protozoa in the reticulo-rumen using linear programming.
Hook, S E; Dijkstra, J; Wright, A-D G; McBride, B W; France, J
2012-01-01
The flow of ciliate protozoa from the reticulo-rumen is significantly less than expected given the total density of rumen protozoa present. To maintain their numbers in the reticulo-rumen, protozoa can be selectively retained through association with feed particles and the rumen wall. Few mathematical models have been designed to model rumen protozoa in both the free-living and attached phases, and the data used in the models were acquired using classical techniques. It has therefore become necessary to provide an updated model that more accurately represents these microorganisms and incorporates the recent literature on distribution, sequestration, and generation times. This paper represents a novel approach to synthesizing experimental data on rumen microorganisms in a quantitative and structured manner. The development of a linear programming model of rumen protozoa in an approximate steady state will be described and applied to data from healthy ruminants consuming commonly fed diets. In the model, protozoa associated with the liquid phase and protozoa attached to particulate matter or sequestered against the rumen wall are distinguished. Growth, passage, death, and transfer of protozoa between both pools are represented. The results from the model application using the contrasting diets of increased forage content versus increased starch content indicate that the majority of rumen protozoa, 63 to 90%, are found in the attached phase, either attached to feed particles or sequestered on the rumen wall. A slightly greater proportion of protozoa are found in the attached phase in animals fed a hay diet compared with a starch diet. This suggests that experimental protocols that only sample protozoa from the rumen fluid could be significantly underestimating the size of the protozoal population of the rumen. Further data are required on the distribution of ciliate protozoa in the rumen of healthy animals to improve model development, but the model described herein
Integrating Genomics and Proteomics Data to Predict Drug Effects Using Binary Linear Programming
Ji, Zhiwei; Su, Jing; Liu, Chenglin; Wang, Hongyan; Huang, Deshuang; Zhou, Xiaobo
2014-01-01
The Library of Integrated Network-Based Cellular Signatures (LINCS) project aims to create a network-based understanding of biology by cataloging changes in gene expression and signal transduction that occur when cells are exposed to a variety of perturbations. It is helpful for understanding cell pathways and facilitating drug discovery. Here, we developed a novel approach to infer cell-specific pathways and identify a compound's effects using gene expression and phosphoproteomics data under treatments with different compounds. Gene expression data were employed to infer potential targets of compounds and create a generic pathway map. Binary linear programming (BLP) was then developed to optimize the generic pathway topology based on the mid-stage signaling response of phosphorylation. To demonstrate effectiveness of this approach, we built a generic pathway map for the MCF7 breast cancer cell line and inferred the cell-specific pathways by BLP. The first group of 11 compounds was utilized to optimize the generic pathways, and then 4 compounds were used to identify effects based on the inferred cell-specific pathways. Cross-validation indicated that the cell-specific pathways reliably predicted a compound's effects. Finally, we applied BLP to re-optimize the cell-specific pathways to predict the effects of 4 compounds (trichostatin A, MS-275, staurosporine, and digoxigenin) according to compound-induced topological alterations. Trichostatin A and MS-275 (both HDAC inhibitors) inhibited the downstream pathway of HDAC1 and caused cell growth arrest via activation of p53 and p21; the effects of digoxigenin were totally opposite. Staurosporine blocked the cell cycle via p53 and p21, but also promoted cell growth via activated HDAC1 and its downstream pathway. Our approach was also applied to the PC3 prostate cancer cell line, and the cross-validation analysis showed very good accuracy in predicting effects of 4 compounds. In summary, our computational model can be
Mitsos, Alexander; Melas, Ioannis N; Siminelakis, Paraskeuas; Chairakaki, Aikaterini D; Saez-Rodriguez, Julio; Alexopoulos, Leonidas G
2009-12-01
Understanding the mechanisms of cell function and drug action is a major endeavor in the pharmaceutical industry. Drug effects are governed by the intrinsic properties of the drug (i.e., selectivity and potency) and the specific signaling transduction network of the host (i.e., normal vs. diseased cells). Here, we describe an unbiased, phosphoproteomic-based approach to identify drug effects by monitoring drug-induced topology alterations. With our proposed method, drug effects are investigated under diverse stimulations of the signaling network. Starting with a generic pathway made of logical gates, we build a cell-type-specific map by constraining it to fit 13 key phosphoprotein signals under 55 experimental conditions. Fitting is performed via an Integer Linear Program (ILP) formulation and solved by standard ILP solvers, a procedure that drastically outperforms previous fitting schemes. Then, knowing the cell's topology, we monitor the same key phosphoprotein signals under the presence of drug and re-optimize the specific map to reveal drug-induced topology alterations. To prove our case, we construct a topology for the hepatocytic cell line HepG2 and evaluate the effects of 4 drugs: 3 selective inhibitors of the Epidermal Growth Factor Receptor (EGFR) and a non-selective drug. We confirm effects easily predictable from the drugs' main target (i.e., EGFR inhibitors block the EGFR pathway) but we also uncover unanticipated effects due to either drug promiscuity or the cell's specific topology. An interesting finding is that the selective EGFR inhibitor Gefitinib inhibits signaling downstream of the Interleukin-1alpha (IL1alpha) pathway, an effect that cannot be extracted from binding-affinity-based approaches. Our method represents an unbiased approach to identify drug effects on small- to medium-size pathways and is scalable to larger topologies with any type of signaling intervention (small molecules, RNAi, etc). The method can reveal drug effects on
Assembling genes from predicted exons in linear time with dynamic programming.
Guigó, R
1998-01-01
In a number of programs for gene structure prediction in higher eukaryotic genomic sequences, exon prediction is decoupled from gene assembly: a large pool of candidate exons is predicted and scored from features located in the query DNA sequence, and candidate genes are assembled from such a pool as sequences of nonoverlapping frame-compatible exons. Genes are scored as a function of the scores of the assembled exons, and the highest-scoring candidate gene is assumed to be the most likely gene encoded by the query DNA sequence. Considering additive gene scoring functions, currently available algorithms to determine such a highest-scoring candidate gene run in time proportional to the square of the number of predicted exons. Here, we present an algorithm whose running time grows only linearly with the size of the set of predicted exons. Polynomial algorithms rely on the fact that, while scanning the set of predicted exons, the highest-scoring gene ending in a given exon can be obtained by appending the exon to the highest scoring among the highest-scoring genes ending at each compatible preceding exon. The algorithm here relies on the simple fact that this highest-scoring gene can be stored and updated. This requires scanning the set of predicted exons simultaneously by increasing acceptor and donor position. On the other hand, the algorithm described here does not assume an underlying gene structure model. Indeed, the definition of valid gene structures is externally defined in the so-called Gene Model. The Gene Model specifies simply which gene features are allowed immediately upstream of which other gene features in valid gene structures. This allows for great flexibility in formulating the gene identification problem. In particular it allows for multiple-gene two-strand predictions and for considering gene features other than coding exons (such as promoter elements) in valid gene structures.
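The linear-time idea, keeping a running "best chain ending before here" while sweeping exon boundaries in coordinate order, can be sketched as below. This is a simplified reconstruction using non-overlap as the only compatibility rule; the full method also checks frame compatibility and consults an external Gene Model, both omitted here.

```python
def assemble(exons):
    """Best-scoring chain of non-overlapping exons, one pass over boundaries.

    Each exon is (start, end, score); a successor must start after its
    predecessor ends.  'best' always holds the top score among chains whose
    last exon ends before the exon currently being opened, so each exon is
    examined exactly once after the initial sorts (which disappear when the
    exon list already arrives ordered by acceptor/donor position).
    """
    n = len(exons)
    by_start = sorted(range(n), key=lambda i: exons[i][0])
    by_end = sorted(range(n), key=lambda i: exons[i][1])
    chain = [0.0] * n        # chain[i]: best chain score ending with exon i
    best = 0.0               # best score among already-closed exons
    j = 0
    for i in by_start:
        start, _, score = exons[i]
        while j < n and exons[by_end[j]][1] < start:   # retire closed exons
            best = max(best, chain[by_end[j]])
            j += 1
        chain[i] = best + score
    return max(chain, default=0.0)

# (start, end, score) -- illustrative predicted exons
exons = [(1, 10, 2.0), (5, 20, 3.5), (12, 30, 1.5), (25, 40, 2.0)]
print(assemble(exons))  # -> 5.5  (exon 5-20 followed by exon 25-40)
```

The contrast with the quadratic scheme is that no exon ever re-examines all compatible predecessors: the single running maximum substitutes for that inner loop.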
Sixth SIAM conference on applied linear algebra: Final program and abstracts. Final technical report
1997-12-31
Linear algebra plays a central role in mathematics and applications. The analysis and solution of problems from an amazingly wide variety of disciplines depend on the theory and computational techniques of linear algebra. In turn, the diversity of disciplines depending on linear algebra also serves to focus and shape its development. Some problems have special properties (numerical, structural) that can be exploited. Some are simply so large that conventional approaches are impractical. New computer architectures motivate new algorithms, and fresh ways to look at old ones. The pervasive nature of linear algebra in analyzing and solving problems means that people from a wide spectrum--universities, industrial and government laboratories, financial institutions, and many others--share an interest in current developments in linear algebra. This conference aims to bring them together for their mutual benefit. Abstracts of papers presented are included.
NASA Technical Reports Server (NTRS)
Utku, S.
1969-01-01
A general purpose digital computer program for the in-core solution of linear equilibrium problems of structural mechanics is documented. The program requires minimum input for the description of the problem. The solution is obtained by means of the displacement method and the finite element technique. Almost any geometry and structure may be handled because of the availability of linear, triangular, quadrilateral, tetrahedral, hexahedral, conical, triangular torus, and quadrilateral torus elements. The assumption of piecewise linear deflection distribution ensures monotonic convergence of the deflections from the stiffer side with decreasing mesh size. The stresses are provided by best-fit strain tensors, in the least-squares sense, at the mesh points where the deflections are given. The selection of local coordinate systems, whenever necessary, is automatic. Core memory is used efficiently by means of dynamic memory allocation, an optional mesh-point relabelling scheme, and imposition of the boundary conditions during assembly.
NASA Technical Reports Server (NTRS)
Dieudonne, J. E.
1978-01-01
A numerical technique was developed which generates linear perturbation models from nonlinear aircraft vehicle simulations. The technique is very general and can be applied to simulations of any system that is described by nonlinear differential equations. The computer program used to generate these models is discussed, with emphasis placed on generation of the Jacobian matrices, calculation of the coefficients needed for solving the perturbation model, and generation of the solution of the linear differential equations. An example application of the technique to a nonlinear model of the NASA terminal configured vehicle is included.
Li, Y P; Huang, G H
2006-11-01
In this study, an interval-parameter two-stage mixed integer linear programming (ITMILP) model is developed for supporting long-term planning of waste management activities in the City of Regina. In the ITMILP, both two-stage stochastic programming and interval linear programming are introduced into a general mixed integer linear programming framework. Uncertainties expressed as not only probability density functions but also discrete intervals can be reflected. The model can help tackle the dynamic, interactive and uncertain characteristics of the solid waste management system in the City, and can address issues concerning plans for cost-effective waste diversion and landfill prolongation. Three scenarios are considered based on different waste management policies. The results indicate that reasonable solutions have been generated. They are valuable for supporting the adjustment or justification of the existing waste flow allocation patterns, the long-term capacity planning of the City's waste management system, and the formulation of local policies and regulations regarding waste generation and management. PMID:16678336
DRIESSEN,BRIAN; SADEGH,NADER
2000-04-25
This work presents a method of finding near global optima to the minimum-time trajectory generation problem for systems that would be linear if it were not for the presence of Coulomb friction. The required final state of the system is assumed to be maintainable by the system, and the input bounds are assumed to be large enough that they can overcome the maximum static Coulomb friction force. Unlike previous work for generating minimum-time trajectories for nonredundant robotic manipulators for which the path in joint space is already specified, this work represents, to the best of the authors' knowledge, the first approach for generating near global optima for minimum-time problems involving a nonlinear class of dynamic systems. The optima generated are near global optima rather than exactly global optima because of a discrete-time approximation of the system (which is usually used anyway to simulate such a system numerically). The method closely resembles previous methods for generating minimum-time trajectories for linear systems, where the core operation is the solution of a Phase I linear programming problem. For the nonlinear systems considered herein, the core operation is instead the solution of a mixed integer linear programming problem.
NASA Technical Reports Server (NTRS)
Arneson, Heather M.; Dousse, Nicholas; Langbort, Cedric
2014-01-01
We consider control design for positive compartmental systems in which each compartment's outflow rate is described by a concave function of the amount of material in the compartment. We address the problem of determining the routing of material between compartments to satisfy time-varying state constraints while ensuring that material reaches its intended destination over a finite time horizon. We give sufficient conditions for the existence of a time-varying state-dependent routing strategy which ensures that the closed-loop system satisfies basic network properties of positivity, conservation and interconnection while ensuring that capacity constraints are satisfied, when possible, or adjusted if a solution cannot be found. These conditions are formulated as a linear programming problem. Instances of this linear programming problem can be solved iteratively to generate a solution to the finite horizon routing problem. Results are given for the application of this control design method to an example problem. Key words: linear programming; control of networks; positive systems; controller constraints and structure.
Non-Linear Editing for the Smaller College-Level Production Program, Rev. 2.0.
ERIC Educational Resources Information Center
Tetzlaff, David
This paper focuses on a specific topic and contention: Non-linear editing earns its place in a liberal arts setting because it is a superior tool to teach the concepts of how moving picture discourse is constructed through editing. The paper first points out that most students at small liberal arts colleges are not going to wind up working…
Yang, X.
1998-12-31
Modeling ground motions from multi-shot, delay-fired mining blasts is important to the understanding of their source characteristics such as spectrum modulation. MineSeis is a MATLAB® (a computer language) Graphical User Interface (GUI) program developed for the effective modeling of these multi-shot mining explosions. The program provides a convenient and interactive tool for modeling studies. Multi-shot, delay-fired mining blasts are modeled as the time-delayed linear superposition of identical single shot sources in the program. These single shots are in turn modeled as the combination of an isotropic explosion source and a spall source. Mueller and Murphy's (1971) model for underground nuclear explosions is used as the explosion source model. A modification of Anandakrishnan et al.'s (1997) spall model is developed as the spall source model. Delays both due to the delay-firing and due to the single-shot location differences are taken into account in calculating the time delays of the superposition. Both synthetic and observed single-shot seismograms can be used to construct the superpositions. The program uses MATLAB GUI for input and output to facilitate user interaction with the program. With user provided source and path parameters, the program calculates and displays the source time functions, the single shot synthetic seismograms and the superimposed synthetic seismograms. In addition, the program provides tools so that the user can manipulate the results, such as filtering, zooming and creating hard copies.
AESOP: A computer-aided design program for linear multivariable control systems
NASA Technical Reports Server (NTRS)
Lehtinen, B.; Geyser, L. C.
1982-01-01
An interactive computer program (AESOP), which solves quadratic optimal control and filter design problems, is discussed. The program can also be used to perform system analysis calculations, such as transient and frequency responses, controllability, and observability, in support of the control and filter design computations.
Ahlfeld, D.P.; Dougherty, D.E.
1994-11-01
MODLP is a computational tool that may help design capture zones for controlling the movement of contaminated groundwater. It creates and solves linear optimization programs that contain constraints on hydraulic head or head differences in a groundwater system. The groundwater domain is represented by the USGS MODFLOW groundwater flow simulation model. This document describes the general structure of the computer program, MODLP, the types of constraints that may be imposed, detailed input instructions, interpretation of the output, and the interaction with the MODFLOW simulation kernel.
NASA Technical Reports Server (NTRS)
Carlson, Harry W.
1985-01-01
The purpose here is to show how two linearized theory computer programs in combination may be used for the design of low speed wing flap systems capable of high levels of aerodynamic efficiency. A fundamental premise of the study is that high levels of aerodynamic performance for flap systems can be achieved only if the flow about the wing remains predominantly attached. Based on this premise, a wing design program is used to provide idealized attached flow camber surfaces from which candidate flap systems may be derived, and, in a following step, a wing evaluation program is used to provide estimates of the aerodynamic performance of the candidate systems. Design strategies and techniques that may be employed are illustrated through a series of examples. Applicability of the numerical methods to the analysis of a representative flap system (although not a system designed by the process described here) is demonstrated in a comparison with experimental data.
Communications oriented programming of parallel iterative solutions of sparse linear systems
NASA Technical Reports Server (NTRS)
Patrick, M. L.; Pratt, T. W.
1986-01-01
Parallel algorithms are developed for a class of scientific computational problems by partitioning the problems into smaller problems which may be solved concurrently. The effectiveness of the resulting parallel solutions is determined by the amount and frequency of communication and synchronization and the extent to which communication can be overlapped with computation. Three different parallel algorithms for solving the same class of problems are presented, and their effectiveness is analyzed from this point of view. The algorithms are programmed using a new programming environment. Run-time statistics and experience obtained from the execution of these programs assist in measuring the effectiveness of these algorithms.
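The partitioning idea described in this abstract can be illustrated with a toy block-Jacobi sweep. The sketch below is not from the paper: the blocks stand in for the per-processor partitions (here executed sequentially), and all names are illustrative. Because a Jacobi sweep reads only the previous iterate, the row blocks could in principle be updated concurrently, with communication limited to exchanging the shared iterate between sweeps.

```python
import numpy as np

# Toy sketch of partitioned iterative solution of a linear system:
# each "worker" owns one block of rows and updates it from the shared
# previous iterate (Jacobi), so blocks are independent within a sweep.
def jacobi_partitioned(A, b, parts, iters=100):
    x = np.zeros(len(b))
    D = np.diag(A)                       # diagonal entries
    for _ in range(iters):
        x_new = x.copy()
        for rows in parts:               # one block per worker
            for i in rows:
                sigma = A[i] @ x - D[i] * x[i]   # off-diagonal sum
                x_new[i] = (b[i] - sigma) / D[i]
        x = x_new                        # "communication" step
    return x
```

For a diagonally dominant matrix the sweep converges regardless of how the rows are partitioned, which is what makes the communication pattern, rather than the numerics, the interesting design variable.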
Maia, Julio Daniel Carvalho; Urquiza Carvalho, Gabriel Aires; Mangueira, Carlos Peixoto; Santana, Sidney Ramos; Cabral, Lucidio Anjos Formiga; Rocha, Gerd B
2012-09-11
In this study, we present some modifications in the semiempirical quantum chemistry MOPAC2009 code that accelerate single-point energy calculations (1SCF) of medium-size (up to 2500 atoms) molecular systems using GPU coprocessors and multithreaded shared-memory CPUs. Our modifications consisted of using a combination of highly optimized linear algebra libraries for both CPU (LAPACK and BLAS from Intel MKL) and GPU (MAGMA and CUBLAS) to hasten time-consuming parts of MOPAC such as the pseudodiagonalization, full diagonalization, and density matrix assembling. We have shown that it is possible to obtain large speedups just by using CPU serial linear algebra libraries in the MOPAC code. As a special case, we show a speedup of up to 14 times for a methanol simulation box containing 2400 atoms and 4800 basis functions, with even greater gains in performance when using multithreaded CPUs (2.1 times in relation to the single-threaded CPU code using linear algebra libraries) and GPUs (3.8 times). This degree of acceleration opens new perspectives for modeling larger structures which appear in inorganic chemistry (such as zeolites and MOFs), biochemistry (such as polysaccharides, small proteins, and DNA fragments), and materials science (such as nanotubes and fullerenes). In addition, we believe that this parallel (GPU-GPU) MOPAC code will make it feasible to use semiempirical methods in lengthy molecular simulations using both hybrid QM/MM and QM/QM potentials.
NASA Astrophysics Data System (ADS)
Yamamoto, Akira; Yokoya, Kaoru
2015-02-01
An overview of linear collider programs is given. The history and technical challenges are described and the pioneering electron-positron linear collider, the SLC, is first introduced. For future energy frontier linear collider projects, the International Linear Collider (ILC) and the Compact Linear Collider (CLIC) are introduced and their technical features are discussed. The ILC is based on superconducting RF technology and the CLIC is based on two-beam acceleration technology. The ILC collaboration completed the Technical Design Report in 2013, and has come to the stage of "Design to Reality." The CLIC collaboration published the Conceptual Design Report in 2012, and the key technology demonstration is in progress. The prospects for further advanced acceleration technology are briefly discussed for possible long-term future linear colliders.
Linear program relaxation of sparse nonnegative recovery in compressive sensing microarrays.
Qin, Linxia; Xiu, Naihua; Kong, Lingchen; Li, Yu
2012-01-01
Compressive sensing microarrays (CSM) are DNA-based sensors that operate using group testing and compressive sensing principles. Mathematically, one can cast the CSM as sparse nonnegative recovery (SNR), which is to find the sparsest solutions subject to an underdetermined system of linear equations and a nonnegativity restriction. In this paper, we discuss the l₁ relaxation of the SNR. By defining nonnegative restricted isometry/orthogonality constants, we give a nonnegative restricted property condition which guarantees that the SNR and the l₁ relaxation share a common unique solution. In addition, we show that any solution to the SNR must be one of the extreme points of the underlying feasible set. PMID:23251229
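The l₁ relaxation discussed here is itself a linear program, since the l₁ norm of a nonnegative vector is just the sum of its entries. A minimal sketch, assuming SciPy's standard `linprog` solver and a toy matrix rather than anything from the paper:

```python
import numpy as np
from scipy.optimize import linprog

# Sketch of the l1 relaxation of sparse nonnegative recovery:
# minimize sum(x) subject to A x = b, x >= 0.
def snr_l1(A, b):
    n = A.shape[1]
    res = linprog(c=np.ones(n), A_eq=A, b_eq=b,
                  bounds=[(0, None)] * n, method="highs")
    return res.x

# Tiny underdetermined example: b is twice column 0 of A, so the
# sparsest nonnegative solution is (2, 0, 0), and the LP recovers it.
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
b = np.array([2.0, 0.0])
x = snr_l1(A, b)
```

The restricted isometry/orthogonality conditions in the paper characterize when this LP solution coincides with the true sparsest solution; the example above is merely one instance where it does.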
Numerical Scheme for Viability Computation Using Randomized Technique with Linear Programming
Djeridane, Badis
2008-06-12
We deal with the problem of computing viability sets for nonlinear continuous or hybrid systems. Our main objective is to beat the curse of dimensionality; that is, we want to avoid the exponential growth of the required computational resources with respect to the dimension of the system. We propose a randomized approach to viability computation: we avoid gridding the state space and use random extraction of points instead, and the viable-set test is formulated as a classical feasibility problem. The algorithm was applied successfully to linear and nonlinear examples. We provide a comparison of our results with those of other methods.
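The sampling idea can be caricatured in a few lines. This is only a schematic stand-in: the `feasible` predicate below is a toy linear constraint, not the authors' linear-programming feasibility test, and all names are hypothetical.

```python
import random

# Schematic of the randomized approach: instead of gridding the state
# space, draw random sample points and keep those passing a feasibility
# test (here a toy constraint standing in for the viability condition).
def sample_viable(feasible, sampler, n=1000, seed=0):
    rng = random.Random(seed)            # fixed seed for repeatability
    points = [sampler(rng) for _ in range(n)]
    return [p for p in points if feasible(p)]

# example: "viable" set = {(x, y) in [0,1]^2 : x + y <= 1}
viable = sample_viable(lambda p: p[0] + p[1] <= 1.0,
                       lambda rng: (rng.random(), rng.random()))
```

The cost of such a scheme grows with the number of samples and the per-sample feasibility test, not exponentially with the state dimension, which is the point of the abstract.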
NASA Technical Reports Server (NTRS)
Maskew, B.
1982-01-01
VSAERO is a computer program used to predict the nonlinear aerodynamic characteristics of arbitrary three-dimensional configurations in subsonic flow. Nonlinear effects of vortex separation and vortex surface interaction are treated in an iterative wake-shape calculation procedure, while the effects of viscosity are treated in an iterative loop coupling potential-flow and integral boundary-layer calculations. The program employs a surface singularity panel method using quadrilateral panels on which doublet and source singularities are distributed in a piecewise constant form. This user's manual provides a brief overview of the mathematical model, instructions for configuration modeling and a description of the input and output data. A listing of a sample case is included.
NASA Technical Reports Server (NTRS)
Gupta, K. K.; Akyuz, F. A.; Heer, E.
1972-01-01
This program, an extension of the linear equilibrium problem solver ELAS, is an updated and extended version of its earlier form (written in FORTRAN II for the IBM 7094 computer). A synchronized material property concept utilizing incremental time steps and the finite element matrix displacement approach has been adopted for the current analysis. A special option enables employment of constant time steps in the logarithmic scale, thereby reducing computational efforts resulting from accumulative material memory effects. A wide variety of structures with elastic or viscoelastic material properties can be analyzed by VISCEL. The program is written in FORTRAN V for the Univac 1108 computer operating under the EXEC 8 system. Dynamic storage allocation is automatically effected by the program, and the user may request up to 195K core memory in a 260K Univac 1108/EXEC 8 machine. The physical program VISCEL, consisting of about 7200 instructions, has four distinct links (segments), and the compiled program occupies a maximum of about 11700 words decimal of core storage.
Joustra, P E; de Wit, J; Struben, V M D; Overbeek, B J H; Fockens, P; Elkhuizen, S G
2010-03-01
To reduce the access times of an endoscopy department, we developed an iterative combination of Discrete Event simulation and Integer Linear Programming. We developed the method in the Endoscopy Department of the Academic Medical Center in Amsterdam and compared different scenarios to reduce the access times for the department. The results show that by a more effective allocation of the current capacity, all procedure types will meet their corresponding performance targets in contrast to the current situation. This improvement can be accomplished without requiring additional equipment and staff. Currently, our recommendations are implemented.
Computer program provides linear sampled- data analysis for high order systems
NASA Technical Reports Server (NTRS)
Bunn, D. B.; Kimball, R. B.
1967-01-01
The computer program performs transformations in the order S- to W- to Z-plane, allowing the arithmetic to be completed in the W-plane. The method is based on a direct transformation from the S-plane to the W-plane. The W-plane poles and zeros are then transformed into Z-plane poles and zeros using the bilinear transformation algorithm.
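The plane-to-plane mappings involved can be sketched for a single pole or zero. This is an illustrative reconstruction, not the original program: it uses the standard exact S-to-Z mapping z = e^{sT} and the standard bilinear pair w = (z − 1)/(z + 1), z = (1 + w)/(1 − w) for the W-plane round trip.

```python
import cmath

# Sketch of mapping a single S-plane pole/zero through the W-plane and
# back to the Z-plane with the bilinear transformation.
def s_to_z_via_w(s, T):
    z = cmath.exp(s * T)        # exact S-to-Z mapping for one pole/zero
    w = (z - 1) / (z + 1)       # Z-to-W bilinear map (arithmetic done here)
    return (1 + w) / (1 - w)    # W-to-Z bilinear map recovers z
```

Because the bilinear pair is an exact inverse, the round trip reproduces e^{sT}; the practical benefit of the W-plane is that frequency-domain arithmetic there resembles familiar S-plane analysis.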
ERIC Educational Resources Information Center
Biomedical Interdisciplinary Curriculum Project, Berkeley, CA.
This student text presents instructional materials for a unit of mathematics within the Biomedical Interdisciplinary Curriculum Project (BICP), a two-year interdisciplinary precollege curriculum aimed at preparing high school students for entry into college and vocational programs leading to a career in the health field. Lessons concentrate on…
ERIC Educational Resources Information Center
Biomedical Interdisciplinary Curriculum Project, Berkeley, CA.
This instructor's manual presents lesson plans for a unit of mathematics within the Biomedical Interdisciplinary Curriculum Project (BICP), a two-year interdisciplinary precollege curriculum aimed at preparing high school students for entry into college and vocational programs leading to a career in the health field. Lessons concentrate on…
ERIC Educational Resources Information Center
Bennett, Susan V.; Calderone, Cynthia; Dedrick, Robert F.; Gunn, AnnMarie Alberton
2015-01-01
In this mixed method research, we examined the effects of a reading and singing software program (RSSP) as a reading intervention on struggling readers' reading achievement, as measured by the Florida Comprehensive Assessment Test, the high-stakes state test administered in the state of Florida, at one elementary school. Our team defined struggling…
NASA Astrophysics Data System (ADS)
Ellis, J. H.; McBean, E. A.; Farquhar, G. J.
In Part I of this work, a deterministic model for development of acid rain abatement strategies was extended to a stochastic form through the incorporation of uncertainty in the transfer coefficients which describe long-range pollutant transport and transformation. The two extreme cases of (i) complete dependence between transfer coefficients (i.e. colinearity) and (ii) complete independence (noncolinearity) were developed. In this work, a more realistic 'middle ground' between these two extremes is investigated. This approach and its associated transfer coefficient covariance structure involve limited colinearity. A simplified linear version of the limited colinearity optimization model is employed. An application is presented which shows that a central three-state, one-receptor sub-system ('sub-airshed') in eastern North America plays a dominant role with respect to determining overall system performance characteristics. Nonlinear, nonlinear-nonseparable, and multiobjective extensions of the stochastic model are discussed.
NASA Astrophysics Data System (ADS)
Cui, Liang; Li, Yongping; Huang, Guohe
2016-06-01
A double-sided fuzzy chance-constrained fractional programming (DFCFP) method is developed for planning water resources management under uncertainty. In DFCFP, the system marginal benefit per unit of input under uncertainty can also be balanced. The DFCFP is applied to a real case of water resources management in the Zhangweinan River Basin, China. The results show that the amounts of water allocated to the two cities (Anyang and Handan) would differ under minimum and maximum reliability degrees. It was found that the marginal benefit of the system solved by DFCFP is larger than the system benefit under the minimum and maximum reliability degrees, which not only improves overall economic efficiency but also remedies water deficiency. Compared with the traditional double-sided fuzzy chance-constrained programming (DFCP) method, the benefits obtained from DFCFP are significantly higher, and the DFCFP has advantages in water conservation.
NASA Technical Reports Server (NTRS)
Hauser, F. D.; Szollosi, G. D.; Lakin, W. S.
1972-01-01
COEBRA, the Computerized Optimization of Elastic Booster Autopilots, is an autopilot design program. The bulk of the design criteria is presented in the form of minimum allowed gain/phase stability margins. COEBRA has two optimization phases: (1) a phase to maximize stability margins; and (2) a phase to optimize structural bending moment load relief capability in the presence of minimum requirements on gain/phase stability margins.
Zhang, Xiaoling; Huang, Kai; Zou, Rui; Liu, Yong; Yu, Yajuan
2013-01-01
The conflict of water environment protection and economic development has brought severe water pollution and restricted the sustainable development in the watershed. A risk explicit interval linear programming (REILP) method was used to solve integrated watershed environmental-economic optimization problem. Interval linear programming (ILP) and REILP models for uncertainty-based environmental economic optimization at the watershed scale were developed for the management of Lake Fuxian watershed, China. Scenario analysis was introduced into model solution process to ensure the practicality and operability of optimization schemes. Decision makers' preferences for risk levels can be expressed through inputting different discrete aspiration level values into the REILP model in three periods under two scenarios. Through balancing the optimal system returns and corresponding system risks, decision makers can develop an efficient industrial restructuring scheme based directly on the window of “low risk and high return efficiency” in the trade-off curve. The representative schemes at the turning points of two scenarios were interpreted and compared to identify a preferable planning alternative, which has the relatively low risks and nearly maximum benefits. This study provides new insights and proposes a tool, which was REILP, for decision makers to develop an effectively environmental economic optimization scheme in integrated watershed management. PMID:24191144
Application of fuzzy goal programming approach to multi-objective linear fractional inventory model
NASA Astrophysics Data System (ADS)
Dutta, D.; Kumar, Pavan
2015-09-01
In this paper, we propose a model and solution approach for a multi-item inventory problem without shortages. The proposed model is formulated as a fractional multi-objective optimisation problem along with three constraints: budget constraint, space constraint and budgetary constraint on ordering cost of each item. The proposed inventory model becomes a multiple criteria decision-making (MCDM) problem in fuzzy environment. This model is solved by multi-objective fuzzy goal programming (MOFGP) approach. A numerical example is given to illustrate the proposed model.
Experimental program to build a multimegawatt lasertron for super linear colliders
Garwin, E.L.; Herrmannsfeldt, W.B.; Sinclair, C.; Weaver, J.N.; Welch, J.J.; Wilson, P.B.
1985-04-01
A lasertron (a microwave "triode" with an RF output cavity and an RF modulated laser to illuminate a photocathode) is a possible high power RF amplifier for TeV linear colliders. As the first step toward building a 35 MW, S-band lasertron for a proof of principle demonstration, a 400 kV dc diode is being designed with a GaAs photocathode, a drift-tube and a collector. After some cathode life tests are made in the diode, an RF output cavity will replace the drift tube and a mode-locked, frequency-doubled, Nd:YAG laser, modulated to produce a 1 µs-long comb of 60 ps pulses at a 2856 MHz rate, will be used to illuminate the photocathode to make an RF power source out of the device. This paper discusses the plans for the project and includes some results of numerical simulation studies of the lasertron as well as some of the ultra-high vacuum and mechanical design requirements for incorporating a photocathode.
Knapp, Bettina; Kaderali, Lars
2013-01-01
Perturbation experiments, for example using RNA interference (RNAi), offer an attractive way to elucidate gene function in a high throughput fashion. The placement of hit genes in their functional context and the inference of underlying networks from such data, however, are challenging tasks. One of the problems in network inference is the exponential number of possible network topologies for a given number of genes. Here, we introduce a novel mathematical approach to address this question. We formulate network inference as a linear optimization problem, which can be solved efficiently even for large-scale systems. We use simulated data to evaluate our approach, and show improved performance in particular on larger networks over state-of-the-art methods. We achieve increased sensitivity and specificity, as well as a significant reduction in computing time. Furthermore, we show superior performance on noisy data. We then apply our approach to study the intracellular signaling of human primary naïve CD4(+) T-cells, as well as ErbB signaling in trastuzumab resistant breast cancer cells. In both cases, our approach recovers known interactions and points to additional relevant processes. In ErbB signaling, our results predict an important role of negative and positive feedback in controlling the cell cycle progression.
Current Status of the Next Linear Collider X-Band Klystron Development Program
Caryotakis, G.; Haase, A.A.; Jongewaard, E.N.; Pearson, C.; Sprehn, D.W.; /SLAC
2005-05-09
Klystrons capable of driving accelerator sections in the Next Linear Collider (NLC) have been developed at SLAC during the last decade. In addition to fourteen 50 MW solenoid-focused devices and a 50 MW Periodic Permanent Magnet focused (PPM) klystron, a 500 kV 75 MW PPM klystron was tested in 1999 to 80 MW with 3 µs pulses, but very low duty. Subsequent 75 MW prototypes aimed for low-cost manufacture by employing reusable focusing structures external to the vacuum, similar to a solenoid electromagnet. During the PPM klystron development, several partners (CPI, EEV and Toshiba) have participated by constructing partial or complete PPM klystrons. After early failures during testing of the first two devices, SLAC has recently tested this design (XP3-3) to the full NLC specifications of 75 MW, 1.6 µs pulse length, and 120 Hz. This 14.4 kW average power operation came with an efficiency of 50%. The XP3-3 average and peak output power, together with the focusing method, arguably makes it the most advanced high power klystron ever built anywhere in the world. Design considerations and test results for these latest prototypes will be presented.
Ament, D; Ho, J; Loute, E; Remmelswaal, M
1980-06-01
Nested decomposition of linear programs is the result of a multilevel, hierarchical application of the Dantzig-Wolfe decomposition principle. The general structure is called lower block-triangular, and permits direct accounting of long-term effects of investment, service life, etc. LIFT, an algorithm for solving lower block triangular linear programs, is based on state-of-the-art modular LP software. The algorithmic and software aspects of LIFT are outlined, and computational results are presented. 5 figures, 6 tables. (RWR)
IKE: An interactive klystron evaluation program for SLAC linear collider klystron performance
Kleban, S.D.; Koontz, R.F.; Vlieks, A.E.
1987-03-01
When the new 65 MW klystrons for the SLC were planned, a computer-based interlock and data recording system was implemented as part of the general electronics upgrade. Significant klystron operating parameters are interlocked and displayed in the SLC central control room through the VAX control computer. A program titled "IKE" has been written to record klystron operating data each day, store the data in a database, and provide various sorted operating and statistical information to klystron engineers and maintenance personnel in the form of terminal listings, bar graphs, and special printed reports. This paper gives an overview of the IKE system, describes its use as a klystron maintenance tool, and explains why it is valuable to klystron engineers.
Catanzaro, Daniele; Shackney, Stanley E; Schaffer, Alejandro A; Schwartz, Russell
2016-01-01
Ductal Carcinoma In Situ (DCIS) is a precursor lesion of Invasive Ductal Carcinoma (IDC) of the breast. Investigating its temporal progression could provide fundamental new insights for the development of better diagnostic tools to predict which cases of DCIS will progress to IDC. We investigate the problem of reconstructing a plausible progression from single-cell sampled data of an individual with synchronous DCIS and IDC. Specifically, by using a number of assumptions derived from the observation of cellular atypia occurring in IDC, we design a possible predictive model using integer linear programming (ILP). Computational experiments carried out on a preexisting data set of 13 patients with simultaneous DCIS and IDC show that the corresponding predicted progression models are classifiable into categories having specific evolutionary characteristics. The approach provides new insights into mechanisms of clonal progression in breast cancers and helps illustrate the power of the ILP approach for similar problems in reconstructing tumor evolution scenarios under complex sets of constraints.
NASA Astrophysics Data System (ADS)
Noor-E-Alam, Md.; Doucette, John
2015-08-01
Grid-based location problems (GBLPs) can be used to solve location problems in business, engineering, resource exploitation, and even in the field of medical sciences. To solve these decision problems, an integer linear programming (ILP) model is designed and developed to provide the optimal solution for GBLPs considering fixed-cost criteria. Preliminary results show that the ILP model is efficient in solving small- to moderate-sized problems. However, this ILP model becomes intractable for large-scale instances. Therefore, a decomposition heuristic is proposed to solve these large-scale GBLPs, which significantly reduces solution runtimes. To benchmark the proposed heuristic, results are compared with the exact solution via ILP. The experimental results show that the proposed method significantly outperforms the exact method in runtime with minimal (and in most cases, no) loss of optimality.
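A minimal sketch of what a fixed-cost GBLP formulation optimizes: open a subset of candidate grid cells, paying a fixed cost per opened cell plus a service (distance) cost for every demand cell. This is not the authors' model; the grid coordinates, costs, and the exhaustive subset search standing in for an ILP solver are invented.

```python
# Toy fixed-cost location problem on a grid (invented numbers), solved by
# exhaustive search; an ILP solver plays this role at realistic scale.
from itertools import chain, combinations

sites = {(0, 0): 5.0, (2, 1): 4.0, (4, 4): 6.0}   # candidate cell -> fixed opening cost
demands = [(0, 1), (1, 1), (3, 3), (4, 3)]        # demand cells to be served

def dist(a, b):
    return abs(a[0] - b[0]) + abs(a[1] - b[1])    # Manhattan distance on the grid

def total_cost(opened):
    if not opened:
        return float("inf")                       # every demand must be served
    fixed = sum(sites[s] for s in opened)
    service = sum(min(dist(d, s) for s in opened) for d in demands)
    return fixed + service

subsets = chain.from_iterable(
    combinations(sites, r) for r in range(1, len(sites) + 1)
)
best = min(subsets, key=total_cost)
print(best, total_cost(best))  # -> ((2, 1),) 14.0
```

Opening only the central cell wins here: its fixed cost is offset by short service distances, exactly the trade-off the ILP objective encodes.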
NASA Technical Reports Server (NTRS)
Houts, R. C.; Burlage, D. W.
1972-01-01
A time domain technique is developed to design finite-duration impulse response digital filters using linear programming. Two related applications of this technique in data transmission systems are considered. The first is the design of pulse shaping digital filters to generate or detect signaling waveforms transmitted over bandlimited channels that are assumed to have ideal low pass or bandpass characteristics. The second is the design of digital filters to be used as preset equalizers in cascade with channels that have known impulse response characteristics. Example designs are presented which illustrate that excellent waveforms can be generated with frequency-sampling filters and the ease with which digital transversal filters can be designed for preset equalization.
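The time-domain design criterion described here, minimizing the worst-case deviation of the filter output from a desired waveform, is what makes the problem a linear program. Below is a hedged sketch with invented numbers: a coarse grid search over four filter taps stands in for the LP solver, and the channel and target pulse are hypothetical.

```python
# Toy time-domain FIR design under a Chebyshev (worst-case) criterion:
# pick taps so the filter, cascaded with a known channel, best matches a
# desired pulse. In full generality this is an LP; here a grid search
# over tap values illustrates the objective (all numbers invented).
from itertools import product

desired = [0.0, 0.5, 1.0, 0.5, 0.0]   # target pulse shape
channel = [1.0, 0.4]                   # known channel impulse response

def convolve(taps, h):
    out = [0.0] * (len(taps) + len(h) - 1)
    for i, t in enumerate(taps):
        for j, v in enumerate(h):
            out[i + j] += t * v
    return out

grid = [k / 10 for k in range(-10, 11)]  # tap values in [-1, 1], step 0.1

def worst_err(taps):
    y = convolve(taps, channel)
    return max(abs(a - b) for a, b in zip(y, desired))

best = min(product(grid, repeat=4), key=worst_err)
print(best, round(worst_err(best), 3))
```

The minimax error found here is limited by the grid step; the LP formulation in the paper finds the exact continuous optimum for much longer filters.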
Wood, Scott T.; Dean, Brian C.; Dean, Delphine
2013-01-01
This paper presents a novel computer vision algorithm to analyze 3D stacks of confocal images of fluorescently stained single cells. The goal of the algorithm is to create representative in silico model structures that can be imported into finite element analysis software for mechanical characterization. Segmentation of cell and nucleus boundaries is accomplished via standard thresholding methods. Using novel linear programming methods, a representative actin stress fiber network is generated by computing a linear superposition of fibers having minimum discrepancy compared with an experimental 3D confocal image. Qualitative validation is performed through analysis of seven 3D confocal image stacks of adherent vascular smooth muscle cells (VSMCs) grown in 2D culture. The presented method is able to automatically generate 3D geometries of the cell's boundary, nucleus, and representative F-actin network based on standard cell microscopy data. These geometries can be used for direct importation and implementation in structural finite element models for analysis of the mechanics of a single cell to potentially speed discoveries in the fields of regenerative medicine, mechanobiology, and drug discovery. PMID:23395283
NASA Astrophysics Data System (ADS)
Jamali, A.; Khaleghi, E.; Gholaminezhad, I.; Nariman-zadeh, N.
2016-05-01
In this paper, a new multi-objective genetic programming (GP) approach with a diversity-preserving mechanism and a real-number alteration operator is presented and successfully used for Pareto optimal modelling of some complex non-linear systems using input-output data. In this study, two different input-output data-sets, one from a non-linear mathematical model and one from an explosive cutting process, are considered separately in three-objective optimisation processes. The conflicting objective functions considered for such Pareto optimisations are the training error (TE), the prediction error (PE), and the tree length (TL), a measure of the complexity of the GP models. Such three-objective optimisation implementations lead to a set of non-dominated choices of GP-type models for both cases, representing the trade-offs among those objective functions. The optimal Pareto fronts of such GP models therefore exhibit the trade-offs among the corresponding conflicting objectives and thus provide different non-dominated optimal choices of GP-type models. Moreover, the results show that no significant improvement in TE and PE occurs once the TL of the corresponding GP model exceeds certain values.
Cabrera, V E
2010-01-01
The purpose of the study was 2-fold: 1) to propose a novel modeling framework using Markovian linear programming to optimize dairy farmer-defined goals under different decision schemes and 2) to illustrate the model with a practical application testing diets for entire lactations. A dairy herd population was represented by cow state variables defined by parity (1 to 15), month in lactation (1 to 24), and pregnancy status (0 = nonpregnant, 1 to 9 = months of pregnancy). A database of 326,000 lactations of Holsteins from AgSource Dairy Herd Improvement service (http://agsource.crinet.com/page249/DHI) was used to parameterize reproduction, mortality, and involuntary culling. The problem was set up as a Markovian linear program model containing 5,580 decision variables and 8,731 constraints. The model optimized the net revenue of the steady state dairy herd population having 2 options in each state: keeping or replacing an animal. Five diets were studied to assess economic, environmental, and herd structural outcomes. Diets varied in proportions of alfalfa silage (38 to 98% of dry matter), high-moisture ear corn (0 to 42% of dry matter), and soybean meal (0 to 18% of dry matter) within and between lactations, which determined dry matter intake, milk production, and N excretion. Diet ingredient compositions ranged from one of high concentrates to alfalfa silage only. Hence, the model identified the maximum net revenue that included the value of nutrient excretion and the cost of manure disposal associated with the optimal policy. Outcomes related to optimal solutions included the herd population structure, the replacement policy, and the amount of N excreted under each diet experiment. The problem was solved using the Excel Risk Solver Platform with the Standard LP/Quadratic Engine. Consistent replacement policies were to (1) keep pregnant cows, (2) keep primiparous cows longer than multiparous cows, and (3) decrease replacement rates when milk and feed prices are favorable.
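The keep-or-replace decision at the heart of the model can be miniaturized as follows. This is a hypothetical 3-state chain with invented economics; the paper's Markovian LP optimizes the same kind of steady-state net revenue over thousands of states, whereas here all eight policies are simply enumerated.

```python
# Much-reduced sketch of the keep-or-replace policy problem (invented
# numbers): enumerate all policies over a tiny "month in lactation" chain
# and keep the one maximizing long-run net revenue per month.
from itertools import product

keep_revenue = {1: 300.0, 2: 220.0, 3: 90.0}  # net revenue/month if cow is kept
replacement_cost = 150.0
fresh_cow_state = 1

def steady_revenue(policy, months=120):
    state, total = fresh_cow_state, 0.0
    for _ in range(months):
        if policy[state] == "replace":
            total -= replacement_cost
            state = fresh_cow_state           # a fresh cow enters the state
        total += keep_revenue[state]
        state = min(state + 1, 3)             # cows age one stage per month
    return total / months

policies = (dict(zip([1, 2, 3], p))
            for p in product(["keep", "replace"], repeat=3))
best = max(policies, key=steady_revenue)
print(best)  # replace only late-lactation cows
```

Even this toy version reproduces the qualitative finding in the abstract: it pays to keep early- and mid-lactation cows and replace only low-producing, late-stage ones.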
Diffendorfer, James E.; Richards, Paul M.; Dalrymple, George H.; DeAngelis, Donald L.
2001-01-01
We present the application of Linear Programming for estimating biomass fluxes in ecosystem and food web models. We use the herpetological assemblage of the Everglades as an example. We developed food web structures for three common Everglades freshwater habitat types: marsh, prairie, and upland. We obtained a first estimate of the fluxes using field data, literature estimates, and professional judgment. Linear programming was used to obtain a consistent and better estimate of the set of fluxes, while maintaining mass balance and minimizing deviations from point estimates. The results support the view that the Everglades is a spatially heterogeneous system, with changing patterns of energy flux, species composition, and biomasses across the habitat types. We show that a food web/ecosystem perspective, combined with Linear Programming, is a robust method for describing food webs and ecosystems that requires minimal data, produces useful post-solution analyses, and generates hypotheses regarding the structure of energy flow in the system.
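The flux-balancing idea, adjusting field point estimates as little as possible while enforcing mass balance, can be sketched on a toy food web. The compartments, flux estimates, and the grid search standing in for the LP are all invented for illustration.

```python
# Toy version of LP-based flux estimation (invented numbers): minimize the
# L1 deviation from point estimates subject to mass balance at the
# herbivore compartment (inflow = sum of outflows).
from itertools import product

# Point estimates (e.g., g/m^2/yr) from field data and literature.
estimates = {"plants_to_herb": 10.0, "herb_to_pred": 4.0, "herb_respiration": 3.0}

def deviation(f):
    return sum(abs(f[k] - estimates[k]) for k in estimates)

grid = [x / 2 for x in range(0, 41)]  # candidate flux values 0.0 .. 20.0
names = list(estimates)
feasible = (
    dict(zip(names, vals))
    for vals in product(grid, repeat=3)
    # Mass balance: plants_to_herb = herb_to_pred + herb_respiration.
    if abs(vals[0] - (vals[1] + vals[2])) < 1e-9
)
best = min(feasible, key=deviation)
print(best, deviation(best))
```

The point estimates are inconsistent (10 in, 7 out), and the search settles the imbalance with the smallest total adjustment, which is exactly the role the LP objective plays for the full Everglades web.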
NASA Astrophysics Data System (ADS)
Bostan, Mohamad; Hadi Afshar, Mohamad; Khadem, Majed
2015-04-01
This article proposes a hybrid linear programming (LP-LP) methodology for the simultaneous optimal design and operation of groundwater utilization systems. The proposed model is an extension of an earlier LP-LP model proposed by the authors for the optimal operation of a set of existing wells. The proposed model can be used to optimally determine the number, configuration and pumping rates of the operational wells out of potential wells with fixed locations to minimize the total cost of utilizing a two-dimensional confined aquifer under steady-state flow conditions. The model is able to take into account the well installation, piping and pump installation costs in addition to the operational costs, including the cost of energy and maintenance. The solution to the problem is defined by well locations and their pumping rates, minimizing the total cost while satisfying a downstream demand, lower/upper bound on the pumping rates, and lower/upper bound on the water level drawdown at the wells. A discretized version of the differential equation governing the flow is first embedded into the model formulation as a set of additional constraints. The resulting mixed-integer highly constrained nonlinear optimization problem is then decomposed into two subproblems with different sets of decision variables, one with a piezometric head and the other with the operational well locations and the corresponding pumping rates. The binary variables representing the well locations are approximated by a continuous variable leading to two LP subproblems. Having started with a random value for all decision variables, the two subproblems are solved iteratively until convergence is achieved. The performance and ability of the proposed method are tested against a hypothetical problem from the literature and the results are presented and compared with those obtained using a mixed-integer nonlinear programming method. The results show the efficiency and effectiveness of the proposed method for
Simic, Vladimir; Dimitrijevic, Branka
2015-02-01
An interval linear programming approach is used to formulate and comprehensively test a model for optimal long-term planning of vehicle recycling in the Republic of Serbia. The proposed model is applied to a numerical case study: a 4-year planning horizon (2013-2016) is considered, three legislative cases and three scrap metal price trends are analysed, and the availability of final destinations for sorted waste flows is explored. The potential and applicability of the developed model are fully illustrated. Detailed insights on the profitability and eco-efficiency of the projected, contemporarily equipped vehicle recycling factory are presented. The influences of the ordinance on the management of end-of-life vehicles in the Republic of Serbia on decisions about procuring vehicle hulks, sorting generated material fractions, allocating sorted waste and allocating sorted metals are thoroughly examined. The validity of the waste management strategy for the period 2010-2019 is tested. The formulated model can create optimal plans for procuring vehicle hulks, sorting generated material fractions, allocating sorted waste flows and allocating sorted metals. The obtained results are valuable for supporting the construction and/or modernisation of a vehicle recycling system in the Republic of Serbia.
Lee, Dongyul; Lee, Chaewoo
2014-01-01
The advancement of wideband wireless networks supports real-time services such as IPTV and live video streaming. However, because of the shared nature of the wireless medium, efficient resource allocation has been studied to achieve a high level of acceptability and proliferation of wireless multimedia. Scalable video coding (SVC) with adaptive modulation and coding (AMC) provides an excellent solution for wireless video streaming. By assigning different modulation and coding schemes (MCSs) to video layers, SVC can provide good video quality to users in good channel conditions and basic video quality to users in bad channel conditions. For optimal resource allocation, a key issue in applying SVC to wireless multicast service is how to assign MCSs and time resources to each SVC layer under heterogeneous channel conditions. We formulate this problem with integer linear programming (ILP) and provide numerical results to show the performance in an 802.16m environment. The results show that our methodology enhances overall system throughput compared to an existing algorithm.
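The layer-to-MCS assignment trade-off described above can be sketched at toy scale. All rates, decoding-reach fractions, layer sizes, and the time budget below are invented, and exhaustive enumeration stands in for the ILP; the point is only to show the objective: a robust MCS reaches more users but consumes more airtime per bit.

```python
# Toy SVC layer-to-MCS assignment (invented numbers): maximize delivered
# utility subject to a shared airtime budget. An ILP solver replaces the
# enumeration at realistic scale.
from itertools import product

mcs_rate = {"QPSK": 1.0, "16QAM": 2.0, "64QAM": 3.0}   # relative bits/symbol
# Fraction of multicast users able to decode each MCS (better MCS -> fewer).
mcs_reach = {"QPSK": 1.0, "16QAM": 0.7, "64QAM": 0.4}
layer_bits = {"base": 2.0, "enh1": 1.0, "enh2": 1.0}   # bits required per layer
time_budget = 2.5                                       # total symbol time

layers = ["base", "enh1", "enh2"]

def utility(assign):
    t = sum(layer_bits[l] / mcs_rate[m] for l, m in assign.items())
    if t > time_budget:
        return -1.0                                     # infeasible assignment
    reach, score = 1.0, 0.0
    for layer in layers:                                # a user needs all lower layers
        reach = min(reach, mcs_reach[assign[layer]])
        score += reach * layer_bits[layer]
    return score

best = max((dict(zip(layers, ms)) for ms in product(mcs_rate, repeat=3)),
           key=utility)
print(best)
```

Note the structure of the optimum: the base layer gets a moderately robust MCS so the airtime budget still covers the enhancement layers, the balancing act the ILP performs over many layers and channel classes.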
Ryan, Jason C; Banerjee, Ashis Gopal; Cummings, Mary L; Roy, Nicholas
2014-06-01
Planning operations across a number of domains can be considered as resource allocation problems with timing constraints. An unexplored instance of such a problem domain is the aircraft carrier flight deck, where, in current operations, replanning is done without the aid of any computerized decision support. Rather, veteran operators employ a set of experience-based heuristics to quickly generate new operating schedules. These expert user heuristics are neither codified nor evaluated by the United States Navy; they have grown solely from the convergent experiences of supervisory staff. As unmanned aerial vehicles (UAVs) are introduced in the aircraft carrier domain, these heuristics may require alterations due to differing capabilities. The inclusion of UAVs also allows for new opportunities for on-line planning and control, providing an alternative to the current heuristic-based replanning methodology. To investigate these issues formally, we have developed a decision support system for flight deck operations that utilizes a conventional integer linear program-based planning algorithm. In this system, a human operator sets both the goals and constraints for the algorithm, which then returns a proposed schedule for operator approval. As a part of validating this system, the performance of this collaborative human-automation planner was compared with that of the expert user heuristics over a set of test scenarios. The resulting analysis shows that human heuristics often outperform the plans produced by an optimization algorithm, but are also often more conservative.
ERIC Educational Resources Information Center
Bessler, William Carl
This paper presents the procedures, results, and conclusions of a study designed to determine the effectiveness of an electronic student response system in teaching biology to the non-major. Nine group-paced linear programs were used. Subjects were 664 college students divided into treatment and control groups. The effectiveness of the response…
ERIC Educational Resources Information Center
Pogany, Peter P.
The study applied the conventional linear transportation program to the student assignment problem and investigated methods of measuring the achieved level of desegregation. Existing measures of desegregation were analyzed, and two new indexes were developed for use in the present model and probably for other system analytical models designed to…
NASA Astrophysics Data System (ADS)
Bayati, Mohsen; Borgs, Christian; Chayes, Jennifer; Zecchina, Riccardo
2008-06-01
We consider the general problem of finding the minimum weight b-matching on arbitrary graphs. We prove that, whenever the linear programming relaxation of the problem has no fractional solutions, the cavity or belief propagation equations converge to the correct solution for both synchronous and asynchronous updating.
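For b = 1 the object in question is an ordinary minimum-weight perfect matching, which a tiny example makes concrete. The weights are invented, and brute force over permutations stands in for the LP relaxation or the belief propagation iteration the abstract analyzes.

```python
# Minimum-weight perfect matching (b-matching with b = 1) on a tiny
# bipartite graph with invented weights, found by exhaustive search.
from itertools import permutations

weights = [
    [4.0, 1.0, 3.0],   # weights[i][j]: cost of matching left node i to right node j
    [2.0, 0.0, 5.0],
    [3.0, 2.0, 2.0],
]

def matching_weight(perm):
    # perm[i] = right node matched to left node i.
    return sum(weights[i][j] for i, j in enumerate(perm))

best = min(permutations(range(3)), key=matching_weight)
print(best, matching_weight(best))  # -> (1, 0, 2) 5.0
```

On instances like this one, where the LP relaxation has an integral optimum, the theorem guarantees belief propagation would converge to this same matching.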
Ureba, A.; Salguero, F. J.; Barbeiro, A. R.; Jimenez-Ortega, E.; Baeza, J. A.; Leal, A.; Miras, H.; Linares, R.; Perucha, M.
2014-08-15
Purpose: The authors present a hybrid direct multileaf collimator (MLC) aperture optimization model exclusively based on sequencing of patient imaging data to be implemented on a Monte Carlo treatment planning system (MC-TPS) to allow the explicit radiation transport simulation of advanced radiotherapy treatments with optimal results in efficient times for clinical practice. Methods: The planning system (called CARMEN) is a full MC-TPS, controlled through a MATLAB interface, which is based on the sequencing of a novel map, called a “biophysical” map, which is generated from enhanced image data of patients to achieve a set of segments actually deliverable. In order to reduce the required computation time, the conventional fluence map has been replaced by the biophysical map, which is sequenced to provide direct apertures that will later be weighted by means of an optimization algorithm based on linear programming. A ray-casting algorithm throughout the patient CT assembles information about the found structures, the mass thickness crossed, as well as PET values. Data are recorded to generate a biophysical map for each gantry angle. These maps are the input files for a home-made sequencer developed to take into account the interactions of photons and electrons with the MLC. For each linac (Axesse of Elekta and Primus of Siemens) and energy beam studied (6, 9, 12, 15 MeV and 6 MV), phase space files were simulated with the EGSnrc/BEAMnrc code. The dose calculation in patient was carried out with the BEAMDOSE code. This code is a modified version of EGSnrc/DOSXYZnrc able to calculate the beamlet dose in order to combine them with different weights during the optimization process. Results: Three complex radiotherapy treatments were selected to check the reliability of CARMEN in situations where the MC calculation can offer an added value: A head-and-neck case (Case I) with three targets delineated on PET/CT images and a demanding dose-escalation; a partial breast
Zhang, Huiling; Huang, Qingsheng; Bei, Zhendong; Wei, Yanjie; Floudas, Christodoulos A
2016-03-01
In this article, we present COMSAT, a hybrid framework for residue contact prediction of transmembrane (TM) proteins, integrating a support vector machine (SVM) method and a mixed integer linear programming (MILP) method. COMSAT consists of two modules: COMSAT_SVM which is trained mainly on position-specific scoring matrix features, and COMSAT_MILP which is an ab initio method based on optimization models. Contacts predicted by the SVM model are ranked by SVM confidence scores, and a threshold is trained to improve the reliability of the predicted contacts. For TM proteins with no contacts above the threshold, COMSAT_MILP is used. The proposed hybrid contact prediction scheme was tested on two independent TM protein sets based on the contact definition of 14 Å between Cα-Cα atoms. First, using a rigorous leave-one-protein-out cross validation on the training set of 90 TM proteins, an accuracy of 66.8%, a coverage of 12.3%, a specificity of 99.3% and a Matthews' correlation coefficient (MCC) of 0.184 were obtained for residue pairs that are at least six amino acids apart. Second, when tested on a test set of 87 TM proteins, the proposed method showed a prediction accuracy of 64.5%, a coverage of 5.3%, a specificity of 99.4% and a MCC of 0.106. COMSAT shows satisfactory results when compared with 12 other state-of-the-art predictors, and is more robust in terms of prediction accuracy as the length and complexity of TM protein increase. COMSAT is freely accessible at http://hpcc.siat.ac.cn/COMSAT/. PMID:26756402
NASA Astrophysics Data System (ADS)
Liu, Yong; Qin, Xiaosheng; Guo, Huaicheng; Zhou, Feng; Wang, Jinfeng; Lv, Xiaojian; Mao, Guozhu
2007-12-01
Lake areas in urban fringes are under increasing urbanization pressure. Consequently, the conflict between rapid urban development and the maintenance of water bodies in such areas urgently needs to be addressed. An inexact chance-constrained linear programming (ICCLP) model for optimal land-use management of lake areas in urban fringes was developed. The ICCLP model was based on land-use suitability assessment and land evaluation. The maximum net economic benefit (NEB) was selected as the objective of land-use allocation. The total environmental capacity (TEC) of water systems and the public financial investment (PFI) at different probability levels were considered key constraints. Other constraints included in the model were land-use suitability, governmental requirements on the ratios of various land-use types, and technical constraints. A case study implementing the system was performed for the lake area of Hanyang at the urban fringe of Wuhan, central China, based on our previous study on land-use suitability assessment. The Hanyang lake area is under significant urbanization pressure. A 15-year optimal model for land-use allocation is proposed during 2006 to 2020 to better protect the water system and to gain the maximum benefits of development. Sixteen constraints were set for the optimal model. The model results indicated that NEB was between $1.48 × 10^9 and $8.76 × 10^9 or between $3.98 × 10^9 and $16.7 × 10^9, depending on the different urban-expansion patterns and land demands. The changes in total developed area and the land-use structure were analyzed under different probabilities (q_i) of TEC. Changes in q_i resulted in different urban expansion patterns and demands on land, which were the direct result of the constraints imposed by TEC and PFI. The ICCLP model might help local authorities better understand and address complex land-use systems and develop optimal land-use management strategies that better balance urban expansion and grassland
Simic, Vladimir
2015-01-01
End-of-life vehicles (ELVs) are vehicles that have reached the end of their useful lives and are no longer registered or licensed for use. The ELV recycling problem has become very serious in the last decade and more and more efforts are made in order to reduce the impact of ELVs on the environment. This paper proposes the fuzzy risk explicit interval linear programming model for ELV recycling planning in the EU. It has advantages in reflecting uncertainties presented in terms of intervals in the ELV recycling systems and fuzziness in decision makers' preferences. The formulated model has been applied to a numerical study in which different decision maker types and several ELV types under two EU ELV Directive legislative cases were examined. This study is conducted in order to examine the influences of the decision maker type, the α-cut level, the EU ELV Directive and the ELV type on decisions about vehicle hulks procuring, storing unprocessed hulks, sorting generated material fractions, allocating sorted waste flows and allocating sorted metals. Decision maker type can influence quantity of vehicle hulks kept in storages. The EU ELV Directive and decision maker type have no influence on which vehicle hulk type is kept in the storage. Vehicle hulk type, the EU ELV Directive and decision maker type do not influence the creation of metal allocation plans, since each isolated metal has its regular destination. The valid EU ELV Directive eco-efficiency quotas can be reached even when advanced thermal treatment plants are excluded from the ELV recycling process. The introduction of the stringent eco-efficiency quotas will significantly reduce the quantities of land-filled waste fractions regardless of the type of decision makers who will manage vehicle recycling system. In order to reach these stringent quotas, significant quantities of sorted waste need to be processed in advanced thermal treatment plants. Proposed model can serve as the support for the European
NASA Astrophysics Data System (ADS)
Cuevas Vivas, Gabriel Francisco
A methodology to optimize enrichment distributions in Light Water Reactor (LWR) fuel assemblies is developed and tested. The optimization technique employed is the linear programming revised simplex method, and the fuel assembly's performance is evaluated with a neutron transport code that is also utilized in the calculation of sensitivity coefficients. The enrichment distribution optimization procedure begins from a single-value (flat) enrichment distribution and proceeds until a target maximum local power peaking factor is achieved. The optimum rod enrichment distribution, with 1.00 for the maximum local power peaking factor and with each rod having its own enrichment, is calculated at an intermediate stage of the analysis. Later, the best locations and values for a reduced number of rod enrichments are obtained as a function of a target maximum local power peaking factor by applying sensitivity-to-change techniques. Finally, a shuffling process that assigns individual rod enrichments among the enrichment groups is performed. The relative rod power distribution is then slightly modified and the rod grouping redefined until the optimum configuration is attained. To verify the accuracy of the relative rod power distribution, a full computation with the neutron transport code using the optimum enrichment distribution is carried out. The results are compared and tested for assembly designs loaded with fresh Low Enriched Uranium (LEU) and plutonium Mixed OXide (MOX) fuels. MOX isotopics for both reactor-grade and weapons-grade plutonium were utilized to demonstrate the wide range of applicability of the optimization technique. The features of the assembly designs used for evaluation purposes included burnable absorbers and internal water regions, and were prepared to resemble the configurations of modern assemblies utilized in commercial Boiling Water Reactors (BWRs) and Pressurized Water Reactors (PWRs). In some cases, a net improvement in the relative rod power distribution or
Liu, Yong; Qin, Xiaosheng; Guo, Huaicheng; Zhou, Feng; Wang, Jinfeng; Lv, Xiaojian; Mao, Guozhu
2007-12-01
Lake areas in urban fringes are under increasing urbanization pressure. Consequently, the conflict between rapid urban development and the maintenance of water bodies in such areas urgently needs to be addressed. An inexact chance-constrained linear programming (ICCLP) model for optimal land-use management of lake areas in urban fringes was developed. The ICCLP model was based on land-use suitability assessment and land evaluation. The maximum net economic benefit (NEB) was selected as the objective of land-use allocation. The total environmental capacity (TEC) of water systems and the public financial investment (PFI) at different probability levels were considered key constraints. Other constraints included in the model were land-use suitability, governmental requirements on the ratios of various land-use types, and technical constraints. A case study implementing the system was performed for the lake area of Hanyang at the urban fringe of Wuhan, central China, based on our previous study on land-use suitability assessment. The Hanyang lake area is under significant urbanization pressure. A 15-year optimal model for land-use allocation over 2006 to 2020 is proposed to better protect the water system and to gain the maximum benefits of development. Sixteen constraints were set for the optimal model. The model results indicated that NEB was between $1.48 × 10⁹ and $8.76 × 10⁹ or between $3.98 × 10⁹ and $16.7 × 10⁹, depending on the different urban-expansion patterns and land demands. The changes in total developed area and the land-use structure were analyzed under different probabilities (q_i) of TEC. Changes in q_i resulted in different urban-expansion patterns and demands on land, which were the direct result of the constraints imposed by TEC and PFI. The ICCLP model might help local authorities better understand and address complex land-use systems and develop optimal land-use management strategies that better balance urban expansion and
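The chance-constrained element of such a model can be illustrated with a minimal sketch: a single normally distributed capacity constraint is replaced by its deterministic equivalent at probability level q, and the resulting LP is solved. All coefficients below are invented for illustration, not drawn from the Hanyang case.

```python
# Hedged sketch: reduce one chance constraint Pr(a.x <= b) >= q, with b
# normally distributed, to its deterministic equivalent and solve the LP.
import numpy as np
from scipy.optimize import linprog
from scipy.stats import norm

c = np.array([4.0, 3.0])       # net benefit per unit area of land uses 1, 2 (assumed)
a = np.array([2.0, 1.0])       # pollutant load per unit area (assumed)
mu_b, sigma_b = 10.0, 1.5      # mean/std of environmental capacity (assumed)

benefits = []
for q in (0.5, 0.9, 0.99):
    # Deterministic equivalent: a.x <= mu_b + sigma_b * Phi^{-1}(1 - q)
    b_det = mu_b + sigma_b * norm.ppf(1.0 - q)
    res = linprog(-c, A_ub=[a], b_ub=[b_det], bounds=[(0, 4), (0, 4)])
    benefits.append(-res.fun)
```

As q rises, the equivalent capacity shrinks and the attainable benefit falls, which mirrors the trade-off the ICCLP model exposes between reliability level and NEB.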
NASA Technical Reports Server (NTRS)
Walker, K. P.
1981-01-01
Results of a 20-month research and development program for nonlinear structural modeling with advanced time-temperature constitutive relationships are reported. The program included: (1) the evaluation of a number of viscoplastic constitutive models in the published literature; (2) incorporation of three of the most appropriate constitutive models into the MARC nonlinear finite element program; (3) calibration of the three constitutive models against experimental data using Hastelloy-X material; and (4) application of the most appropriate constitutive model to a three dimensional finite element analysis of a cylindrical combustor liner louver test specimen to establish the capability of the viscoplastic model to predict component structural response.
NASA Astrophysics Data System (ADS)
Guo, P.; Huang, G. H.; Li, Y. P.
2010-01-01
In this study, an inexact fuzzy-chance-constrained two-stage mixed-integer linear programming (IFCTIP) approach is developed for flood diversion planning under multiple uncertainties. A concept of the distribution with fuzzy boundary interval probability is defined to address multiple uncertainties expressed as an integration of intervals, fuzzy sets and probability distributions. IFCTIP integrates inexact programming, two-stage stochastic programming, integer programming and fuzzy-stochastic programming within a general optimization framework. IFCTIP incorporates pre-regulated water-diversion policies directly into its optimization process to analyze various policy scenarios; each scenario has a different economic penalty when the promised targets are violated. More importantly, it can facilitate dynamic programming for decisions of capacity-expansion planning under fuzzy-stochastic conditions. IFCTIP is applied to a flood management system. Solutions from IFCTIP provide desired flood diversion plans with a minimized system cost and a maximized safety level. The results indicate that reasonable solutions are generated for objective function values and decision variables, and thus a number of decision alternatives can be generated under different levels of flood flows.
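The two-stage structure underlying such models can be sketched, under strong simplifications (one first-stage diversion capacity, three discrete flow scenarios, invented costs), as the deterministic equivalent LP: commit to a capacity now, pay a recourse penalty on the excess flow in each scenario.

```python
# Hedged sketch of a two-stage stochastic LP for flood diversion.
# All flows, probabilities, and costs are illustrative placeholders.
import numpy as np
from scipy.optimize import linprog

flows = np.array([60.0, 100.0, 160.0])   # scenario flood flows (assumed)
probs = np.array([0.5, 0.3, 0.2])        # scenario probabilities (assumed)
c_cap, c_pen = 1.0, 3.0                  # capacity cost < penalty cost (assumed)

# Variables: [x, y_1, y_2, y_3]; minimize c_cap*x + sum_s probs_s*c_pen*y_s
cost = np.concatenate(([c_cap], c_pen * probs))
# Recourse constraints y_s >= flows_s - x  ->  -x - y_s <= -flows_s
A_ub = np.hstack([-np.ones((3, 1)), -np.eye(3)])
res = linprog(cost, A_ub=A_ub, b_ub=-flows, bounds=[(0, None)] * 4)
x_opt = res.x[0]   # first-stage diversion capacity
```

The optimal first-stage decision hedges against the scenarios rather than planning for the mean flow, which is the essential behavior the IFCTIP framework generalizes under fuzzy and interval uncertainty.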
Chen, Wen; Schuster, Gary B
2012-01-18
Nanometer-scale arrays of conducting polymers were prepared on scaffolds of self-assembling DNA modules. A series of DNA oligomers was prepared, each containing six 2,5-bis(2-thienyl)pyrrole (SNS) monomer units linked covalently to N4 atoms of alternating cytosines placed between leading and trailing 12-nucleobase recognition sequences. These DNA modules were encoded so the recognition sequences would uniquely associate through Watson-Crick assembly to form closed-cycle or linear arrays of aligned SNS monomers. The melting behavior and electrophoretic migration of these assemblies showed cooperative formation of multicomponent arrays containing two to five DNA modules (i.e., 12-30 SNS monomers). The treatment of these arrays with horseradish peroxidase and H₂O₂ resulted in oxidative polymerization of the SNS monomers with concomitant ligation of the DNA modules. The resulting cyclic and linear arrays exhibited chemical and optical properties typical of conducting thiophene-like polymers, with a red-end absorption beyond 1250 nm. AFM images of the cyclic array containing 18 SNS units revealed highly regular 10 nm diameter objects. PMID:22242713
NASA Technical Reports Server (NTRS)
Bielawa, R. L.
1976-01-01
The differential equations of motion for the lateral and torsional deformations of a nonlinearly twisted rotor blade in steady flight conditions together with those additional aeroelastic features germane to composite bearingless rotors are derived. The differential equations are formulated in terms of uncoupled (zero pitch and twist) vibratory modes with exact coupling effects due to finite, time variable blade pitch and, to second order, twist. Also presented are derivations of the fully coupled inertia and aerodynamic load distributions, automatic pitch change coupling effects, structural redundancy characteristics of the composite bearingless rotor flexbeam - torque tube system in bending and torsion, and a description of the linearized equations appropriate for eigensolution analyses. Three appendixes are included presenting material appropriate to the digital computer program implementation of the analysis, program G400.
Holm, Åsa; Larsson, Torbjörn; Tedgren, Åsa Carlsson
2013-08-15
Purpose: Recent research has shown that the optimization model hitherto used in high-dose-rate (HDR) brachytherapy corresponds weakly to the dosimetric indices used to evaluate the quality of a dose distribution. Although alternative models that explicitly include such dosimetric indices have been presented, the inclusion of the dosimetric indices explicitly yields intractable models. The purpose of this paper is to develop a model for optimizing dosimetric indices that is easier to solve than those proposed earlier. Methods: In this paper, the authors present an alternative approach for optimizing dose distributions for HDR brachytherapy where dosimetric indices are taken into account through surrogates based on the conditional value-at-risk concept. This yields a linear optimization model that is easy to solve, and has the advantage that the constraints are easy to interpret and modify to obtain satisfactory dose distributions. Results: The authors show by experimental comparisons, carried out retrospectively for a set of prostate cancer patients, that their proposed model corresponds well with constraining dosimetric indices. All modifications of the parameters in the authors' model yield the expected result. The dose distributions generated are also comparable to those generated by the standard model with respect to the dosimetric indices that are used for evaluating quality. Conclusions: The authors' new model is a viable surrogate to optimizing dosimetric indices and quickly and easily yields high quality dose distributions.
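What makes conditional value-at-risk (CVaR) surrogates tractable is that CVaR admits a linear-programming representation (the Rockafellar-Uryasev formulation). A minimal sketch, with a toy dose sample rather than patient data: minimizing zeta + 1/((1-alpha)n) * sum_i s_i with s_i >= d_i - zeta, s_i >= 0 recovers the mean of the worst (1-alpha) tail of the doses.

```python
# Hedged sketch: CVaR of a dose sample via its LP representation.
# Doses are a toy vector, not clinical data.
import numpy as np
from scipy.optimize import linprog

d = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0])
alpha, n = 0.8, len(d)

# Variables: [zeta, s_1..s_n]
cost = np.concatenate(([1.0], np.full(n, 1.0 / ((1 - alpha) * n))))
# s_i >= d_i - zeta  ->  -zeta - s_i <= -d_i
A_ub = np.hstack([-np.ones((n, 1)), -np.eye(n)])
res = linprog(cost, A_ub=A_ub, b_ub=-d,
              bounds=[(None, None)] + [(0, None)] * n)
cvar = res.fun   # tail mean of the worst 20% of doses
```

Because the representation is linear, a constraint like "CVaR of organ-at-risk doses below a limit" drops straight into an LP over dwell times, which is the tractability gain the paper exploits.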
Wang, Hsiao-Fan; Hsu, Hsin-Wei
2010-11-01
With the urgency of global warming, green supply chain management, logistics in particular, has drawn the attention of researchers. Although there are closed-loop green logistics models in the literature, most of them do not consider the uncertain environment in general terms. In this study, a generalized model is proposed where the uncertainty is expressed by fuzzy numbers. An interval programming model is derived from the defined means and the mean square imprecision index, obtained from the integrated information of all the level cuts of the fuzzy numbers. The resolution of the interval program is based on the decision maker's (DM's) preference. The resulting solution provides useful information on the expected solutions under a confidence level containing a degree of risk. The results suggest that the more optimistic the DM is, the better the resulting solution; however, a higher risk of violating the resource constraints is also present. By defining this probable risk, a solution procedure was developed, with numerical illustrations. This provides the DM with a trade-off mechanism between logistics cost and risk. PMID:20547439
Sidorin, Anatoly
2010-01-05
In linear accelerators the particles are accelerated by electrostatic fields, inductive fields, or oscillating Radio Frequency (RF) fields. Accordingly, linear accelerators are divided into three large groups: electrostatic, induction and RF accelerators. An overview of the different types of accelerators is given. The stability of longitudinal and transverse motion in RF linear accelerators is briefly discussed, and the methods of beam focusing in linacs are described.
Kang, Bongmun; Yoon, Ho-Sung
2015-02-01
Recently, microalgae have been considered as a renewable energy source for fuel production because their production is nonseasonal and may take place on nonarable land. Despite all of these advantages, microalgal oil production is significantly affected by environmental factors. Furthermore, large variability remains an important problem in the measurement of algal productivity and in compositional analysis, especially of the total lipid content. Thus, there is considerable interest in accurate determination of total lipid content during the biotechnological process. For these reasons, various high-throughput technologies have been suggested for accurate measurement of the total lipids contained in microorganisms, especially oleaginous microalgae. In addition, more advanced technologies have been employed to quantify the total lipids of microalgae without a pretreatment. However, these methods have difficulty measuring the total lipid content of wet-form microalgae obtained from large-scale production. In the present study, thermal analysis performed with a two-step linear temperature program was applied to measure the heat evolved in the temperature range from 310 to 351 °C for Nostoc sp. KNUA003 obtained from large-scale cultivation. We then examined the relationship between the heat evolved in 310-351 °C (HE) and the total lipid content of the wet Nostoc cells cultivated in a raceway. As a result, a linear relationship was determined between the HE value and the total lipid content of Nostoc sp. KNUA003; in particular, the linear correlation between the HE value and the total lipid content of the tested microorganism was 98%. Based on this relationship, the total lipid content converted from the heat evolved of wet Nostoc sp. KNUA003 could be used for monitoring its lipid induction in large-scale cultivation.
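The calibration idea behind such a conversion can be sketched with an ordinary least-squares fit; the (HE, lipid) pairs below are synthetic placeholders, not the study's measurements.

```python
# Hedged sketch: fit a linear HE-to-lipid calibration and report R^2.
# All data points are invented for illustration.
import numpy as np

he = np.array([12.0, 15.5, 18.1, 21.4, 24.9, 28.2])     # heat evolved, assumed units
lipid = np.array([8.1, 10.3, 12.0, 14.2, 16.4, 18.7])   # total lipid %, assumed

slope, intercept = np.polyfit(he, lipid, 1)
pred = slope * he + intercept
r2 = 1 - np.sum((lipid - pred) ** 2) / np.sum((lipid - lipid.mean()) ** 2)
# With a calibration like this, a new HE measurement converts directly:
# lipid_est = slope * HE + intercept
```

A coefficient of determination near 1, as reported in the abstract, is what justifies using HE alone to monitor lipid induction in wet biomass.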
ERIC Educational Resources Information Center
Walkiewicz, T. A.; Newby, N. D., Jr.
1972-01-01
A discussion of linear collisions between two or three objects is related to a junior-level course in analytical mechanics. The theoretical discussion uses a geometrical approach that treats elastic and inelastic collisions from a unified point of view. Experiments with a linear air track are described. (Author/TS)
Yang, X.
1998-04-01
Large scale (up to 5 kt) chemical blasts are routinely conducted by the mining and quarrying industries around the world to remove overburden or to fragment rock. Because of their ability to trigger the future International Monitoring System (IMS) of the Comprehensive Test Ban Treaty (CTBT), these blasts are monitored and studied by verification seismologists for the purpose of discriminating them from possible clandestine nuclear tests. One important component of these studies is the modeling of ground motions from these blasts with theoretical and empirical source models. The modeling exercises provide physical bases for regional discriminants and help to explain the observed signal characteristics. The program MineSeis has been developed to implement synthetic seismogram modeling of multi-shot blast sources as the linear superposition of single-shot sources. The single-shot source used in the modeling is the spherical explosion plus spall model described here: Mueller and Murphy's (1971) model is used as the spherical explosion model, and a modification of Anandakrishnan et al.'s (1997) spall model is developed for the spall component. The program is implemented with the MATLAB® Graphical User Interface (GUI), providing the user with easy, interactive control of the calculation.
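The superposition idea can be sketched in a few lines: a multi-shot (ripple-fired) blast seismogram is the sum of time-delayed copies of a single-shot waveform. The single-shot pulse below is a toy damped sinusoid standing in for the Mueller-Murphy-plus-spall source, and the firing delays are invented.

```python
# Hedged sketch of linear superposition of delayed single-shot sources.
# The pulse and delays are illustrative, not MineSeis's source models.
import numpy as np

dt, n = 0.001, 4000
t = np.arange(n) * dt
single = np.exp(-8.0 * t) * np.sin(2 * np.pi * 10.0 * t)  # toy single-shot pulse

delays = np.array([0.0, 0.025, 0.050, 0.075])  # inter-shot firing delays, s (assumed)
multi = np.zeros(n)
for d in delays:
    k = int(round(d / dt))
    multi[k:] += single[: n - k]    # shift-and-add superposition
```

Regularly spaced delays impose spectral modulation (scalloping) on the multi-shot record, which is one of the signal characteristics such modeling helps explain.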
Borbulevych, Oleg Y.; Plumley, Joshua A.; Martin, Roger I.; Merz, Kenneth M. Jr; Westerhoff, Lance M.
2014-05-01
Semiempirical quantum-chemical X-ray macromolecular refinement using the program DivCon integrated with PHENIX is described. Macromolecular crystallographic refinement relies on sometimes dubious stereochemical restraints and rudimentary energy functionals to ensure the correct geometry of the model of the macromolecule and any covalently bound ligand(s). The ligand stereochemical restraint file (CIF) requires a priori understanding of the ligand geometry within the active site, and creation of the CIF is often an error-prone process owing to the great variety of potential ligand chemistry and structure. Stereochemical restraints have been replaced with more robust functionals through the integration of the linear-scaling, semiempirical quantum-mechanics (SE-QM) program DivCon with the PHENIX X-ray refinement engine. The PHENIX/DivCon package has been thoroughly validated on a population of 50 protein–ligand Protein Data Bank (PDB) structures with a range of resolutions and chemistry. The PDB structures used for the validation were originally refined utilizing various refinement packages and were published within the past five years. PHENIX/DivCon does not utilize CIF(s), link restraints and other parameters for refinement and hence it does not make as many a priori assumptions about the model. Across the entire population, the method results in reasonable ligand geometries and low ligand strains, even when the original refinement exhibited difficulties, indicating that PHENIX/DivCon is applicable to both single-structure and high-throughput crystallography.
Dibari, Filippo; Diop, El Hadji I; Collins, Steven; Seal, Andrew
2012-05-01
According to the United Nations (UN), 25 million children <5 y of age are currently affected by severe acute malnutrition and need to be treated using special nutritional products such as ready-to-use therapeutic foods (RUTF). Improved formulations are in demand, but a standardized approach for RUTF design has not yet been described. A method relying on linear programming (LP) analysis was developed and piloted in the design of a RUTF prototype for the treatment of wasting in East African children and adults. The LP objective function and decision variables consisted of the lowest formulation price and the weights of the chosen commodities (soy, sorghum, maize, oil, and sugar), respectively. The LP constraints were based on current UN recommendations for the macronutrient content of therapeutic food and included palatability, texture, and maximum food ingredient weight criteria. Nonlinear constraints for nutrient ratios were converted to linear equations to allow their use in LP. The formulation was considered accurate if laboratory results confirmed an energy density difference <10% and a protein or lipid difference <5 g · 100 g⁻¹ compared to the LP formulation estimates. With this test prototype, the differences were 7%, and 2.3 and -1.0 g · 100 g⁻¹, respectively, and the formulation accuracy was considered good. LP can contribute to the design of ready-to-use foods (therapeutic, supplementary, or complementary), targeting different forms of malnutrition, while using commodities that are cheaper, regionally available, and meet local cultural preferences. However, as with all prototype feeding products for medical use, composition analysis, safety, acceptability, and clinical effectiveness trials must be conducted to validate the formulation.
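The LP step described above can be sketched as a small diet-style problem: minimize formulation price over ingredient weights subject to macronutrient bounds. Prices and compositions below are rough invented placeholders, not the study's commodity data, and only two macronutrient constraints are shown.

```python
# Hedged sketch: least-cost 100 g formulation under protein and fat bounds.
# All prices and compositions are illustrative assumptions.
import numpy as np
from scipy.optimize import linprog

# columns: soy, sorghum, maize, oil, sugar
price = np.array([0.90, 0.30, 0.25, 1.20, 0.50])          # $ per 100 g, assumed
protein = np.array([36.0, 11.0, 9.0, 0.0, 0.0]) / 100.0   # g protein per g, assumed
fat = np.array([20.0, 3.0, 4.0, 100.0, 0.0]) / 100.0      # g fat per g, assumed

# 13-16 g protein and 26-36 g fat per 100 g product (illustrative bounds)
A_ub = np.vstack([-protein, protein, -fat, fat])
b_ub = np.array([-13.0, 16.0, -26.0, 36.0])
A_eq = [np.ones(5)]                      # ingredient weights sum to 100 g
res = linprog(price, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[100.0],
              bounds=[(0, None)] * 5)
```

The real model adds palatability, texture, maximum-weight, and linearized nutrient-ratio constraints in the same pattern: each is one more row of A_ub.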
Christofilos, N.C.; Polk, I.J.
1959-02-17
Improvements in linear particle accelerators are described. A drift tube system for a linear ion accelerator reduces gap capacity between adjacent drift tube ends. This is accomplished by reducing the ratio of the diameter of the drift tube to the diameter of the resonant cavity. Concentration of magnetic field intensity at the longitudinal midpoint of the external surface of each drift tube is reduced by increasing the external drift tube diameter at the longitudinal center region.
Santika, Otte; Fahmida, Umi; Ferguson, Elaine L
2009-01-01
Effective population-specific, food-based complementary feeding recommendations (CFR) are required to combat micronutrient deficiencies. To facilitate their formulation, a modeling approach was recently developed. However, it has not yet been used in practice. This study therefore aimed to use this approach to develop CFR for 9- to 11-mo-old Indonesian infants and to identify nutrients that will likely remain low in their diets. The CFR were developed using a 4-phase approach based on linear and goal programming. Model parameters were defined using dietary data collected in a cross-sectional survey of 9- to 11-mo-old infants (n = 100) living in the Bogor District, West-Java, Indonesia and a market survey of 3 local markets. Results showed theoretical iron requirements could not be achieved using local food sources (highest level achievable, 63% of recommendations) and adequate levels of iron, niacin, zinc, and calcium were difficult to achieve. Fortified foods, meatballs, chicken liver, eggs, tempe-tofu, banana, and spinach were the best local food sources to improve dietary quality. The final CFR were: breast-feed on demand, provide 3 meals/d, of which 1 is a fortified infant cereal; > or = 5 servings/wk of tempe/tofu; > or = 3 servings/wk of animal-source foods, of which 2 servings/wk are chicken liver; vegetables, daily; snacks, 2 times/d, including > or = 2 servings/wk of banana; and > or = 4 servings/wk of fortified-biscuits. Results showed that the approach can be used to objectively formulate population-specific CFR and identify key problem nutrients to strengthen nutrition program planning and policy decisions. Before recommending these CFR, their long-term acceptability, affordability, and effectiveness should be assessed.
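The goal-programming flavor of such approaches can be sketched as follows: deviation variables capture shortfalls from nutrient targets, and their sum is minimized subject to serving caps. Foods, composition values, targets, and caps below are invented placeholders, not the Bogor survey data.

```python
# Hedged sketch: goal programming with shortfall variables for two nutrients.
# All food data are illustrative assumptions.
import numpy as np
from scipy.optimize import linprog

# columns: fortified cereal, chicken liver, tempe, banana (per serving)
iron = np.array([4.0, 9.0, 2.0, 0.3])     # mg iron per serving, assumed
zinc = np.array([2.0, 2.5, 1.5, 0.2])     # mg zinc per serving, assumed
target = np.array([8.0, 4.0])             # daily iron and zinc targets, assumed
max_srv = np.array([2.0, 1.0, 2.0, 2.0])  # palatability caps on servings, assumed

# Variables: 4 serving counts + 2 shortfalls (d_iron, d_zinc)
c = np.concatenate([np.zeros(4), [1.0, 1.0]])   # minimize total shortfall
# Goal constraints: iron.x + d_iron >= target_iron (same for zinc)
A_ub = np.vstack([np.concatenate([-iron, [-1.0, 0.0]]),
                  np.concatenate([-zinc, [0.0, -1.0]])])
res = linprog(c, A_ub=A_ub, b_ub=-target,
              bounds=[(0, s) for s in max_srv] + [(0, None)] * 2)
shortfall = res.x[4:]
```

A strictly positive optimal shortfall identifies a "problem nutrient" that local foods cannot supply, which is exactly how the study flags iron as unachievable from local sources.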
Zhang, Liping; Zhang, Shiwen; Huang, Yajie; Cao, Meng; Huang, Yuanfang; Zhang, Hongyan
2016-01-01
Understanding abandoned mine land (AML) changes during land reclamation is crucial for reusing damaged land resources and formulating sound ecological restoration policies. This study combines the linear programming (LP) model and the CLUE-S model to simulate land-use dynamics in the Mentougou District (Beijing, China) from 2007 to 2020 under three reclamation scenarios, that is, the planning scenario based on the general land-use plan in study area (scenario 1), maximal comprehensive benefits (scenario 2), and maximal ecosystem service value (scenario 3). Nine landscape-scale graph metrics were then selected to describe the landscape characteristics. The results show that the coupled model presented can simulate the dynamics of AML effectively and the spatially explicit transformations of AML were different. New cultivated land dominates in scenario 1, while construction land and forest land account for major percentages in scenarios 2 and 3, respectively. Scenario 3 has an advantage in most of the selected indices as the patches combined most closely. To conclude, reclaiming AML by transformation into more forest can reduce the variability and maintain the stability of the landscape ecological system in study area. These findings contribute to better mapping AML dynamics and providing policy support for the management of AML. PMID:27023575
Koffi-Tessio, E.N.
1982-01-01
This study examines the interrelationship between the energy sector and the production of three agricultural crops (sugar, macadamia nut, and coffee) by small growers on the Big Island of Hawaii. Specifically, it attempts: to explore the patterns of energy use in agriculture; to determine the relative efficiency of fuel use by farm size among the three crops; and to investigate the impacts of higher energy costs on farmers' net revenues under three output-price and three energy-cost scenarios. To meet these objectives, a linear-programming model was developed. The objective function was to maximize net revenues subject to resource-availability, production, marketing, and non-negativity constraints. The major conclusions are: higher energy costs have not significantly reduced farmers' net revenues, but they do have a differential impact depending on the output price and resource endowments of each crop grower; and farmers face many constraints that do not permit factor substitution. For policy formulation, it was observed that policy makers are overly concerned with the problems facing growers at the macro level, without considering their constraints at the micro level. These micro factors play a dominant role in resource allocation. They must, therefore, be incorporated into a comprehensive energy and agricultural policy at the county and state levels.
NASA Technical Reports Server (NTRS)
Magnus, A. E.; Epton, M. A.
1981-01-01
Panel aerodynamics (PAN AIR) is a system of computer programs designed to analyze subsonic and supersonic inviscid flows about arbitrary configurations. A panel method is a program which solves a linear partial differential equation by approximating the configuration surface by a set of panels. An overview of the theory of potential flow in general, and of PAN AIR in particular, is given along with detailed mathematical formulations. Fluid dynamics, the Navier-Stokes equation, and the theory of panel methods are also discussed.
NASA Astrophysics Data System (ADS)
Trzaskuś-Żak, Beata; Żak, Andrzej
2013-09-01
This paper presents a method of binary linear programming for the selection of customers to whom a rebate will be offered. In return for the rebate, the customer undertakes to pay its debt to the mine by the deadline specified. In this way, the company is expected to achieve the required rate of collection of receivables. This, of course, comes at the expense of reduced revenue, which can be made up for by increased sales. Customer selection was done in order to keep the overall cost to the mine of the offered rebates as low as possible, i.e. to minimise K_cR = Σ_{j=1..n} k_j x_j, where: K_cR - total cost of rebates granted by the mine; k_j - cost of granting the rebate to customer j; x_j - binary decision variables; j = 1, …, n - the particular customers. The calculations were performed with the Solver tool (Excel). The cost of a rebate was calculated from the formula k_j = ΔP_j - K_k(j), where: ΔP_j - difference in revenues from customer j; K_k(j) - cost of the so-called trade credit with regard to customer j. The cost of the trade credit depends on: r - interest rate on the bank loan, %; t_s - collection time for the receivable in days (e.g. t_1 = 30, t_2 = 45, …, t_12 = 360); N_s - value of the receivable at collection date t_s. In its general form, the binary linear programming model for managing receivables by granting rebates minimises the objective function Σ_j k_j x_j subject to restrictions requiring that the timely payments collected reach at least the assumed minimum share q of all receivables in each month, and that x_j ∈ {0, 1}, where: Nt_ji - value of the timely payments of customer j in month i of the period analysed; Nn_ji - value of the overdue receivables of customer j in month i of the period analysed; q - the assumed minimum percentage of timely payments collected; N_i - summarised value of all receivables in month i; m - the number of months in the period analysed. The general model was applied to the example of the operating Mine X. Furthermore, the study has been extended through the presentation of a binary
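The selection model can be sketched with a mixed-integer solver in place of the Excel Solver, and with a single aggregate collection constraint standing in for the monthly ones; all figures are invented, not the Mine X data.

```python
# Hedged sketch: choose rebate recipients x_j in {0,1} minimizing total
# rebate cost while timely collections reach the target share q.
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

k = np.array([3.0, 5.0, 2.0, 4.0])       # rebate cost per customer, assumed
nt = np.array([40.0, 70.0, 25.0, 50.0])  # receivables paid on time if rebated, assumed
total = 300.0                            # total receivables, assumed
q = 0.30                                 # required timely-collection share, assumed

# Minimize k.x subject to nt.x >= q*total, x binary
con = LinearConstraint(nt, lb=q * total)
res = milp(c=k, constraints=con, integrality=np.ones(4), bounds=Bounds(0, 1))
chosen = res.x.round().astype(int)
```

The binary restriction matters: a fractional "half rebate" is not meaningful here, and the optimal subset can differ from what a rounded LP solution would suggest.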
ERIC Educational Resources Information Center
Li, Yuan H.; Yang, Yu N.; Tompkins, Leroy J.; Modarresi, Shahpar
2005-01-01
The statistical technique, "Zero-One Linear Programming," that has successfully been used to create multiple tests with similar characteristics (e.g., item difficulties, test information and test specifications) in the area of educational measurement, was deemed to be a suitable method for creating multiple sets of matched samples to be used as…
LINEAR - DERIVATION AND DEFINITION OF A LINEAR AIRCRAFT MODEL
NASA Technical Reports Server (NTRS)
Duke, E. L.
1994-01-01
The Derivation and Definition of a Linear Model program, LINEAR, provides the user with a powerful and flexible tool for the linearization of aircraft aerodynamic models. LINEAR was developed to provide a standard, documented, and verified tool to derive linear models for aircraft stability analysis and control law design. Linear system models define the aircraft system in the neighborhood of an analysis point and are determined by the linearization of the nonlinear equations defining vehicle dynamics and sensors. LINEAR numerically determines a linear system model using nonlinear equations of motion and a user-supplied linear or nonlinear aerodynamic model. The nonlinear equations of motion used are six-degree-of-freedom equations with stationary atmosphere and flat, nonrotating earth assumptions. LINEAR is capable of extracting linearized engine effects, such as net thrust, torque, and gyroscopic effects, and of including these effects in the linear system model. The point at which this linear model is defined is determined either by completely specifying the state and control variables, or by specifying an analysis point on a trajectory and directing the program to determine the control variables and the remaining state variables. The system model determined by LINEAR consists of matrices for both the state and observation equations. The program has been designed to provide easy selection of state, control, and observation variables to be used in a particular model. Thus, the order of the system model is completely under user control. Further, the program provides the flexibility of allowing alternate formulations of both the state and observation equations. Data describing the aircraft and the test case are input to the program through a terminal or formatted data files. All data can be modified interactively from case to case. The aerodynamic model can be defined in two ways: a set of nondimensional stability and control derivatives for the flight point of
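The core numerical step a tool of this kind performs can be sketched with central finite differences on a toy two-state system; the dynamics below are invented stand-ins, not the program's six-degree-of-freedom equations.

```python
# Hedged sketch: numerically linearize xdot = f(x, u) about an analysis
# point to obtain state-space matrices A and B. Toy dynamics only.
import numpy as np

def f(x, u):
    # toy longitudinal-style dynamics: states [speed, pitch rate]
    return np.array([-0.02 * x[0] ** 2 + 0.5 * u[0],
                     -1.5 * x[1] + 0.1 * x[0]])

def linearize(f, x0, u0, eps=1e-6):
    n, m = len(x0), len(u0)
    A, B = np.zeros((n, n)), np.zeros((n, m))
    for j in range(n):                       # A = df/dx at (x0, u0)
        dx = np.zeros(n); dx[j] = eps
        A[:, j] = (f(x0 + dx, u0) - f(x0 - dx, u0)) / (2 * eps)
    for j in range(m):                       # B = df/du at (x0, u0)
        du = np.zeros(m); du[j] = eps
        B[:, j] = (f(x0, u0 + du) - f(x0, u0 - du)) / (2 * eps)
    return A, B

A, B = linearize(f, np.array([10.0, 0.0]), np.array([0.0]))
```

The resulting (A, B) pair describes the system only in the neighborhood of the analysis point, which is why tools like LINEAR let the user pick that point either explicitly or from a trajectory.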
Dantzig, G.B.
1992-10-01
Analogous to gunners firing trial shots to bracket a target in order to adjust direction and distance, we demonstrate that it is sometimes faster not to apply an algorithm directly, but to roughly solve several perturbations of the problem and then combine these rough approximations to get an exact solution. To find a feasible solution to an m-equation linear program with a convexity constraint, the von Neumann Algorithm generates a sequence of approximate solutions which converge very slowly to the right-hand side b^0. However, it can be redirected so that in the first few iterations it is guaranteed to move rapidly towards the neighborhood of one of m + 1 perturbed right-hand sides b̂^i, then redirected in turn to the next b̂^i. Once within the neighborhood of each b̂^i, a weighted sum of the approximate solutions x̄^i yields the exact solution of the unperturbed problem, where the weights are found by solving a system of m + 1 equations in m + 1 unknowns. It is assumed an r > 0 is given for which the problem is feasible for all right-hand sides b whose distance ‖b − b^0‖₂ ≤ r. The feasible solution is found in fewer than 4(m + 1)^3/r^2 iterations. The work per iteration is δmn + 2m + n + 9 multiplications plus δmn + m + n + 9 additions or comparisons, where δ is the density of nonzero coefficients in the matrix.
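The von Neumann feasibility step the abstract alludes to can be read as a Frank-Wolfe-style update over the simplex; the sketch below is that illustrative reading, not Dantzig's code, and all names and data are invented. It assumes b lies in the convex hull of the columns of P.

```python
# Illustrative Frank-Wolfe-style reading of the von Neumann feasibility
# step (not Dantzig's code; names are invented): find x >= 0 with
# sum(x) = 1 and P x = b by greedily shrinking the residual ||P x - b||.
# Assumes b lies in the convex hull of the columns of P.

def von_neumann(P, b, iters=2000):
    m, n = len(P), len(P[0])
    x = [1.0] + [0.0] * (n - 1)                   # start at a simplex vertex
    for _ in range(iters):
        u = [sum(P[i][j] * x[j] for j in range(n)) for i in range(m)]
        r = [u[i] - b[i] for i in range(m)]       # residual P x - b
        # choose the column most opposed to the residual direction
        s = min(range(n), key=lambda j: sum(r[i] * P[i][j] for i in range(m)))
        d = [P[i][s] - u[i] for i in range(m)]    # step toward that column
        dd = sum(di * di for di in d)
        if dd == 0.0:
            break
        # exact line search for the step length, clamped to [0, 1]
        lam = max(0.0, min(1.0, -sum(r[i] * d[i] for i in range(m)) / dd))
        x = [(1.0 - lam) * xj for xj in x]
        x[s] += lam
    return x

P = [[0.0, 1.0, 0.0],                             # columns span the hull
     [0.0, 0.0, 1.0]]
b = [0.25, 0.25]                                  # strictly inside the hull
x = von_neumann(P, b)
residual = sum((sum(P[i][j] * x[j] for j in range(3)) - b[i]) ** 2
               for i in range(2)) ** 0.5
```

The convergence is slow in general, which is exactly the behavior the perturbation-and-recombination scheme of the abstract is designed to work around.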
Linearly Adjustable International Portfolios
NASA Astrophysics Data System (ADS)
Fonseca, R. J.; Kuhn, D.; Rustem, B.
2010-09-01
We present an approach to multi-stage international portfolio optimization based on the imposition of a linear structure on the recourse decisions. Multiperiod decision problems are traditionally formulated as stochastic programs. Scenario tree based solutions however can become intractable as the number of stages increases. By restricting the space of decision policies to linear rules, we obtain a conservative tractable approximation to the original problem. Local asset prices and foreign exchange rates are modelled separately, which allows for a direct measure of their impact on the final portfolio value.
NASA Astrophysics Data System (ADS)
Trzaskuś-Żak, Beata; Żak, Andrzej
2013-09-01
This paper presents a method of binary linear programming for the selection of customers to whom a rebate will be offered. In return for the rebate, the customer undertakes to pay its debt to the mine by the deadline specified. In this way, the company is expected to achieve the required rate of collection of receivables. This, of course, will be at the expense of reduced revenue, which can be made up for by increased sales. Customer selection was done in order to keep the overall cost to the mine of the offered rebates as low as possible: KcR = k1x1 + k2x2 + … + knxn, where: KcR - total cost of rebates granted by the mine; kj - cost of granting the rebate to a jth customer; xj - decision variables; j = 1, …, n - particular customers. The calculations were performed with the Solver tool (Excel programme). The cost of rebates was calculated from the formula: kj = ΔPj - Kk(j), where: ΔPj - difference in revenues from customer j; Kk(j) - cost of the so-called trade credit with regard to customer j. The cost of the trade credit depends on: r - interest rate on the bank loan, %; ts - collection time for the receivable in days (e.g. t1 = 30, t2 = 45, …, t12 = 360); Ns - value of the receivable at collection date ts. This paper presents the general model of linear binary programming for managing receivables by granting rebates. The model, in its general form, minimises the objective function KcR subject to the restrictions that the timely payments Ntji plus the overdue receivables Nnji·xj recovered through rebates reach at least the share q of all receivables Ni in every month i = 1, …, m, and that xj ∈ {0,1}, where: Ntji - value of the timely payments of a customer j in an ith month of the period analysed; Nnji - value of the overdue receivables of a customer j in an ith month of the period analysed; q - the assumed minimum percentage of timely payments collected; Ni - summarised value of all receivables in the month i; m - the number of months in the period analysed. The general model was used for application to the example of the operating Mine X. Furthermore, the study has been extended through the presentation of a binary
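The binary selection model described in the abstract can be reproduced in miniature. All figures below are invented, and a brute-force search over {0,1} vectors stands in for Excel's Solver:

```python
# Brute-force miniature of the binary rebate model (all figures invented;
# exhaustive search stands in for Excel's Solver): pick x_j in {0,1}
# minimizing total rebate cost sum_j k_j*x_j while at least a fraction q
# of each month's receivables is collected on time.
from itertools import product

k  = [120.0, 80.0, 200.0, 50.0]                      # rebate cost per customer
Nt = [[300, 280], [150, 160], [400, 390], [90, 100]] # timely payments, by month
Nn = [[100, 120], [250, 240], [50, 60], [60, 40]]    # overdue amounts, by month
q  = 0.80                                            # required collection rate
months = range(2)

def feasible(x):
    # A choice x is feasible if, in every month, timely payments plus the
    # overdue amounts recovered via rebates meet the q share of the total.
    for i in months:
        collected = sum(Nt[j][i] + x[j] * Nn[j][i] for j in range(len(k)))
        total = sum(Nt[j][i] + Nn[j][i] for j in range(len(k)))
        if collected < q * total:
            return False
    return True

best = min((x for x in product((0, 1), repeat=len(k)) if feasible(x)),
           key=lambda x: sum(kj * xj for kj, xj in zip(k, x)))
```

Enumeration is only viable for a handful of customers; for realistic instance sizes a mixed-integer solver plays the role Solver plays in the paper.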
Colgate, S.A.
1958-05-27
An improvement is presented in linear accelerators for charged particles with respect to the stable focusing of the particle beam. The improvement consists of providing a radial electric field transverse to the accelerating electric fields and introducing the beam of particles into the field at an angle. The result of the foregoing is a beam which spirals about the axis of the acceleration path. The electric fields and the angular motion of the particles combine to provide a stable and focused particle beam.
NASA Technical Reports Server (NTRS)
2006-01-01
[figure removed for brevity, see original site] Context image for PIA03667 Linear Clouds
These clouds are located near the edge of the south polar region. The cloud tops are the puffy white features in the bottom half of the image.
Image information: VIS instrument. Latitude -80.1N, Longitude 52.1E. 17 meter/pixel resolution.
Note: this THEMIS visual image has not been radiometrically or geometrically calibrated for this preliminary release. An empirical correction has been performed to remove instrumental effects. A linear shift has been applied in the cross-track and down-track direction to approximate spacecraft and planetary motion. Fully calibrated and geometrically projected images will be released through the Planetary Data System in accordance with Project policies at a later time.
NASA's Jet Propulsion Laboratory manages the 2001 Mars Odyssey mission for NASA's Office of Space Science, Washington, D.C. The Thermal Emission Imaging System (THEMIS) was developed by Arizona State University, Tempe, in collaboration with Raytheon Santa Barbara Remote Sensing. The THEMIS investigation is led by Dr. Philip Christensen at Arizona State University. Lockheed Martin Astronautics, Denver, is the prime contractor for the Odyssey project, and developed and built the orbiter. Mission operations are conducted jointly from Lockheed Martin and from JPL, a division of the California Institute of Technology in Pasadena.
NASA Astrophysics Data System (ADS)
Czopek, Kazimierz; Trzaskuś-Żak, Beata
2013-06-01
The paper presents an example of a theoretical linear programming model in the management of mine receivables. To this end, an economic production model of linear programming was applied to optimising the revenue of the mine. The amount of product sold by the mine to individual customers was assumed as the decisive variable, and the product price was the parameter of the objective function. As for boundaries, upper receivable limits were assumed for each of the adopted receivable collection cycles. The sequence of collection cycles, and the receivable values assigned to them, were adopted according to the growing probability of overdue and uncollectible receivables. Two receivables-management optimisation cases were analysed, in which the objective function was to maximise the sales value (revenue) of the Mine. The first case studied in the model involves application of a discount to reduce the product price, in a mine whose production output is not being used to capacity. To improve cash flow, the mine offers its customers a reduced price and increased purchasing up to the mine's capacity in exchange for shortened receivable collection times. Fixed and variable-cost accounting is applied to determine the relevant price reduction. In the other case analysed, the mine sells as much as its current output allows, but despite that is still forced to reduce the price of its products. Application of a discount in this case (reducing the product price) inevitably involves shortened receivable collection times and reduced costs of financing trade credit.
ERIC Educational Resources Information Center
Community College Journal, 1996
1996-01-01
Includes a collection of eight short articles describing model community college programs. Discusses a literacy program, a mobile computer classroom, a support program for at-risk students, a timber-harvesting program, a multimedia presentation on successful women graduates, a career center, a collaboration with NASA, and an Israeli engineering…
Weston, R J; Weston, M E; Bollinger, R O
1984-01-01
A non-linear curve-fitting program using a modified Hoerl's function on the Hewlett-Packard HP-97 and Texas Instruments TI-59 programmable calculators for the determination of Phadezym IgE PRIST (IgE) results is described. Excellent correlation between the reference serum concentrations and the curve-fit concentration results was obtained. The equation used in the curve fit is ln y = A + B ln x + Cx^D, where A, B, C and an accuracy-of-fit term R are calculated by the program. The value of D must be specified by the user before the curve fit is performed.
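Because D is fixed by the user, the model ln y = A + B ln x + Cx^D is linear in A, B and C, so ordinary least squares recovers them. The sketch below follows that reading with made-up data; it is not the HP-97/TI-59 program itself:

```python
# Minimal sketch of the fitting step (invented data, not the calculator
# program): with D fixed, ln y = A + B*ln(x) + C*x**D is linear in
# (A, B, C), so we solve the 3x3 normal equations directly.
from math import log, exp

def fit_hoerl(xs, ys, D):
    # Design rows [1, ln x, x**D] and targets ln y.
    rows = [[1.0, log(x), x ** D] for x in xs]
    t = [log(y) for y in ys]
    # Normal equations M a = v for a = (A, B, C).
    M = [[sum(r[i] * r[j] for r in rows) for j in range(3)] for i in range(3)]
    v = [sum(r[i] * ti for r, ti in zip(rows, t)) for i in range(3)]
    # Gaussian elimination with partial pivoting.
    for col in range(3):
        p = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[p] = M[p], M[col]
        v[col], v[p] = v[p], v[col]
        for r in range(col + 1, 3):
            f = M[r][col] / M[col][col]
            for c in range(col, 3):
                M[r][c] -= f * M[col][c]
            v[r] -= f * v[col]
    a = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):                       # back substitution
        a[i] = (v[i] - sum(M[i][j] * a[j] for j in range(i + 1, 3))) / M[i][i]
    return a                                  # (A, B, C)

A, B, C, D = 0.5, 1.2, -0.03, 1.0             # hypothetical true coefficients
xs = [1.0, 2.0, 4.0, 8.0, 16.0, 32.0]
ys = [exp(A + B * log(x) + C * x ** D) for x in xs]
A_fit, B_fit, C_fit = fit_hoerl(xs, ys, D)
```

With noise-free synthetic data the fit reproduces the generating coefficients, which is a convenient sanity check on any reimplementation of the calculator routine.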
Chen, Wen; Schuster, Gary B
2013-03-20
A series of linear and cyclic, sequence controlled, DNA-conjoined copolymers of aniline (ANi) and 2,5-bis(2-thienyl)pyrrole (SNS) were synthesized. In one approach, linear copolymers were prepared from complementary DNA oligomers containing covalently attached SNS and ANi monomers. Hybridization of the oligomers aligns the monomers in the major groove of the DNA. Treatment of the SNS- and ANi-containing duplexes with horseradish peroxidase (HRP) and H2O2 causes rapid and efficient polymerization. In this way, linear copolymers (SNS)4(ANi)6 and (ANi)2(SNS)2(ANi)2(SNS)2(ANi)2 were prepared and analyzed. A second approach to the preparation of linear and cyclic copolymers of ANi and SNS employed a DNA encoded module strategy. In this approach, single-stranded DNA oligomers composed of a central region containing (SNS)6 or (ANi)5 covalently attached monomer blocks and flanking 5'- and 3'-single-strand DNA recognition sequences were combined in buffer solution. Self-assembly of these oligomers by Watson-Crick base pairing of the recognition sequences creates linear or cyclic arrays of SNS and ANi monomer blocks. Treatment of these arrays with HRP/H2O2 causes rapid and efficient polymerization to form copolymers having patterns such as cyclic BBA and linear ABA, where B stands for an (SNS)6 block and A stands for an (ANi)5 block. These DNA-conjoined copolymers were characterized by melting temperature analysis, circular dichroism spectroscopy, native and denaturing polyacrylamide gel electrophoresis, and UV-visible-near-IR optical spectroscopy. The optical spectra of these copolymers are typical of those of conducting polymers and are uniquely dependent on the specific order of monomers in the copolymer. PMID:23448549
NASA Astrophysics Data System (ADS)
Metternicht, Graciela; Blanco, Paula; del Valle, Hector; Laterra, Pedro; Hardtke, Leonardo; Bouza, Pablo
2015-04-01
Wildlife is part of the Patagonian rangelands sheep farming environment, with the potential of providing extra revenue to livestock owners. As sheep farming became less profitable, farmers and ranchers could focus on sustainable wildlife harvesting. It has been argued that sustainable wildlife harvesting is ecologically one of the most rational forms of land use because of its potential to provide multiple products of high value, while reducing pressure on ecosystems. The guanaco (Lama guanicoe) is the most conspicuous wild ungulate of Patagonia. Guanaco fibre, meat, pelts and hides are economically valuable and have the potential to be used within the present Patagonian context of production systems. Guanaco populations in South America, including Patagonia, have experienced a sustained decline. Causes for this decline are related to habitat alteration, competition for forage with sheep, and lack of reasonable management plans to develop livelihoods for ranchers. In this study we propose an approach to explicitly determine optimal stocking rates based on trade-offs between guanaco density and livestock grazing intensity on rangelands. The focus of our research is on finding optimal sheep stocking rates at paddock level, to ensure the highest production outputs while: a) meeting requirements of sustainable conservation of guanacos over their minimum viable population; b) maximizing soil carbon sequestration, and c) minimizing soil erosion. In this way, determination of optimal stocking rate in rangelands becomes a multi-objective optimization problem that can be addressed using a Fuzzy Multi-Objective Linear Programming (MOLP) approach. Basically, this approach converts multi-objective problems into single-objective optimizations, by introducing a set of objective weights. Objectives are represented using fuzzy set theory and fuzzy memberships, enabling each objective function to adopt a value between 0 and 1. Each objective function indicates the satisfaction of
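The weighted-sum scalarization with fuzzy memberships that the abstract outlines can be illustrated on a one-variable toy problem. Every function and number below is invented for illustration and does not come from the study:

```python
# Invented one-variable illustration of fuzzy weighted-sum scalarization:
# each objective is mapped to a [0, 1] membership, then a weighted sum is
# maximized over candidate sheep stocking rates s.

def membership(value, worst, best):
    # Linear fuzzy membership: 0 at the worst level, 1 at the best level.
    return max(0.0, min(1.0, (value - worst) / (best - worst)))

def score(s, weights):
    wool    = membership(10 * s,     worst=0.0,  best=60.0)  # production, up
    carbon  = membership(50 - 3 * s, worst=20.0, best=50.0)  # sequestration, up
    erosion = membership(2 * s,      worst=20.0, best=0.0)   # soil loss, down
    w1, w2, w3 = weights
    return w1 * wool + w2 * carbon + w3 * erosion

rates = [i * 0.5 for i in range(21)]                 # 0 .. 10 sheep per ha
best_rate = max(rates, key=lambda s: score(s, (0.6, 0.2, 0.2)))
```

The clamped memberships are what let conflicting objectives in different units share one dimensionless scale; the weights then encode the management priorities.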
Miller, Naomi J.; Perrin, Tess E.; Royer, Michael P.; Wilkerson, Andrea M.; Beeson, Tracy A.
2014-05-20
Although lensed troffers are numerous, there are many other types of optical systems as well. This report looked at the performance of three linear (T8) LED lamps chosen primarily based on their luminous intensity distributions (narrow, medium, and wide beam angles) as well as a benchmark fluorescent lamp in five different troffer types. Also included are the results of a subjective evaluation. Results show that linear (T8) LED lamps can improve luminaire efficiency in K12-lensed and parabolic-louvered troffers, effect little change in volumetric and high-performance diffuse-lensed type luminaires, but reduce efficiency in recessed indirect troffers. These changes can be accompanied by visual appearance and visual comfort consequences, especially when LED lamps with clear lenses and narrow distributions are installed. Linear (T8) LED lamps with diffuse apertures exhibited wider beam angles, performed more similarly to fluorescent lamps, and received better ratings from observers. Guidance is provided on which luminaires are the best candidates for retrofitting with linear (T8) LED lamps.
A Structural Connection between Linear and 0-1 Integer Linear Formulations
ERIC Educational Resources Information Center
Adlakha, V.; Kowalski, K.
2007-01-01
The connection between linear and 0-1 integer linear formulations has attracted the attention of many researchers. The main reason triggering this interest has been an availability of efficient computer programs for solving pure linear problems including the transportation problem. Also the optimality of linear problems is easily verifiable…
NASA Astrophysics Data System (ADS)
Ranjbaran, A.; Phipps, M. E.
1994-04-01
A finite element program for the nonlinear stress analysis of two-dimensional problems is introduced. Both metallic and reinforced concrete structures are considered. In the case of metals, plasticity is taken into account. For reinforced concrete structures, cracking of concrete in tension, plasticity and crushing of concrete in compression, and plasticity of reinforcement are accounted for. A new and unified model for embedding reinforcement in concrete elements is proposed. The proposed model is quite general in the sense that it can be used both for two- and three-dimensional problems. The theoretical basis of the program is presented. The accuracy, efficiency and robustness of the program and its implementation are verified through the analysis of two-dimensional problems.
Positrons for linear colliders
Ecklund, S.
1987-11-01
The requirements of a positron source for a linear collider are briefly reviewed, followed by methods of positron production and production of photons by electromagnetic cascade showers. Cross sections for the electromagnetic cascade shower processes of positron-electron pair production and Compton scattering are compared. A program used for Monte Carlo analysis of electromagnetic cascades is briefly discussed, and positron distributions obtained from several runs of the program are discussed. Photons from synchrotron radiation and from channeling are also mentioned briefly, as well as positron collection, transverse focusing techniques, and longitudinal capture. Computer ray tracing is then briefly discussed, followed by space-charge effects and thermal heating and stress due to showers. (LEW)
NASA Technical Reports Server (NTRS)
Magnus, Alfred E.; Epton, Michael A.
1981-01-01
An outline of the derivation of the differential equation governing linear subsonic and supersonic potential flow is given. The use of Green's Theorem to obtain an integral equation over the boundary surface is discussed. The engineering techniques incorporated in the PAN AIR (Panel Aerodynamics) program (a discretization method which solves the integral equation for arbitrary first order boundary conditions) are then discussed in detail. Items discussed include the construction of the compressibility transformations, splining techniques, imposition of the boundary conditions, influence coefficient computation (including the concept of the finite part of an integral), computation of pressure coefficients, and computation of forces and moments.
Rees, J.R.
1989-10-01
In April 1989, the first Z zero particle was observed at the Stanford Linear Collider (SLC). The SLC collides high-energy beams of electrons and positrons into each other. In a break with tradition, the SLC aims two linear beams at each other. Strong motives impelled the Stanford team to choose the route of innovation. One reason is that linear colliders promise to be less expensive to build and operate than storage ring colliders. An equally powerful motive was the desire to build a Z zero factory, a facility at which the Z zero particle can be studied in detail. More than 200 Z zero particles have been detected at the SLC, and more continue to be churned out regularly. It is in measuring the properties of the Z zero that the SLC has a seminal contribution to make. One of the primary goals of the SLC experimental program is to determine the mass of the Z zero as precisely as possible. In the end, the SLC's greatest significance will be in having proved a new accelerator technology. 7 figs.
Computing Linear Mathematical Models Of Aircraft
NASA Technical Reports Server (NTRS)
Duke, Eugene L.; Antoniewicz, Robert F.; Krambeer, Keith D.
1991-01-01
Derivation and Definition of Linear Aircraft Model (LINEAR) computer program provides user with powerful, flexible, standard, documented, and verified software tool for linearization of mathematical models of aerodynamics of aircraft. Intended to drive linear analysis of stability and design of control laws for aircraft. Capable of both extracting such linearized engine effects as net thrust, torque, and gyroscopic effects, and including these effects in linear model of system. Designed to provide easy selection of state, control, and observation variables used in particular model. Also provides flexibility of allowing alternate formulations of both state and observation equations. Written in FORTRAN.
Ozsoy, Oyku Eren; Can, Tolga
2013-01-01
Inference of topology of signaling networks from perturbation experiments is a challenging problem. Recently, the inference problem has been formulated as a reference network editing problem, and it has been shown that finding the minimum number of edit operations on a reference network to comply with perturbation experiments is an NP-complete problem. In this paper, we propose an integer linear programming (ILP) model for reconstruction of signaling networks from RNAi data and a reference network. The ILP model guarantees the optimal solution; however, it is practical only for small signaling networks of size 10-15 genes due to computational complexity. To scale for large signaling networks, we propose a divide and conquer-based heuristic, in which a given reference network is divided into smaller subnetworks that are solved separately and the solutions are merged together to form the solution for the large network. We validate our proposed approach on real and synthetic data sets, and comparison with the state of the art shows that our proposed approach is able to scale better for large networks while attaining similar or better biological accuracy.
NASA Technical Reports Server (NTRS)
Purdon, David J.; Baruah, Pranab K.; Bussoletti, John E.; Epton, Michael A.; Massena, William A.; Nelson, Franklin D.; Tsurusaki, Kiyoharu
1990-01-01
The Maintenance Document Version 3.0 is a guide to the PAN AIR software system, a system which computes the subsonic or supersonic linear potential flow about a body of nearly arbitrary shape, using a higher order panel method. The document describes the overall system and each program module of the system. Sufficient detail is given for program maintenance, updating, and modification. It is assumed that the reader is familiar with programming and CRAY computer systems. The PAN AIR system was written in FORTRAN 4 language except for a few CAL language subroutines which exist in the PAN AIR library. Structured programming techniques were used to provide code documentation and maintainability. The operating systems accommodated are COS 1.11, COS 1.12, COS 1.13, and COS 1.14 on the CRAY 1S, 1M, and X-MP computing systems. The system is comprised of a data base management system, a program library, an execution control module, and nine separate FORTRAN technical modules. Each module calculates part of the posed PAN AIR problem. The data base manager is used to communicate between modules and within modules. The technical modules must be run in a prescribed fashion for each PAN AIR problem. In order to ease the problem of supplying the many JCL cards required to execute the modules, a set of CRAY procedures (PAPROCS) was created to automatically supply most of the JCL cards. Most of this document has not changed for Version 3.0. It now, however, strictly applies only to PAN AIR version 3.0. The major changes are: (1) additional sections covering the new FDP module (which calculates streamlines and offbody points); (2) a complete rewrite of the section on the MAG module; and (3) strict applicability to CRAY computing systems.
NASA Technical Reports Server (NTRS)
Baruah, P. K.; Bussoletti, J. E.; Chiang, D. T.; Massena, W. A.; Nelson, F. D.; Furdon, D. J.; Tsurusaki, K.
1981-01-01
The Maintenance Document is a guide to the PAN AIR software system, a system which computes the subsonic or supersonic linear potential flow about a body of nearly arbitrary shape, using a higher order panel method. The document describes the over-all system and each program module of the system. Sufficient detail is given for program maintenance, updating and modification. It is assumed that the reader is familiar with programming and CDC (Control Data Corporation) computer systems. The PAN AIR system was written in FORTRAN 4 language except for a few COMPASS language subroutines which exist in the PAN AIR library. Structured programming techniques were used to provide code documentation and maintainability. The operating systems accommodated are NOS 1.2, NOS/BE and SCOPE 2.1.3 on the CDC 6600, 7600 and Cyber 175 computing systems. The system is comprised of a data management system, a program library, an execution control module and nine separate FORTRAN technical modules. Each module calculates part of the posed PAN AIR problem. The data base manager is used to communicate between modules and within modules. The technical modules must be run in a prescribed fashion for each PAN AIR problem. In order to ease the problem of supplying the many JCL cards required to execute the modules, a separate module called MEC (Module Execution Control) was created to automatically supply most of the JCL cards. In addition to the MEC generated JCL, there is an additional set of user supplied JCL cards to initiate the JCL sequence stored on the system.
Granato, Gregory E.
2006-01-01
The Kendall-Theil Robust Line software (KTRLine-version 1.0) is a Visual Basic program that may be used with the Microsoft Windows operating system to calculate parameters for robust, nonparametric estimates of linear-regression coefficients between two continuous variables. The KTRLine software was developed by the U.S. Geological Survey, in cooperation with the Federal Highway Administration, for use in stochastic data modeling with local, regional, and national hydrologic data sets to develop planning-level estimates of potential effects of highway runoff on the quality of receiving waters. The Kendall-Theil robust line was selected because this robust nonparametric method is resistant to the effects of outliers and nonnormality in residuals that commonly characterize hydrologic data sets. The slope of the line is calculated as the median of all possible pairwise slopes between points. The intercept is calculated so that the line will run through the median of input data. A single-line model or a multisegment model may be specified. The program was developed to provide regression equations with an error component for stochastic data generation because nonparametric multisegment regression tools are not available with the software that is commonly used to develop regression models. The Kendall-Theil robust line is a median line and, therefore, may underestimate total mass, volume, or loads unless the error component or a bias correction factor is incorporated into the estimate. Regression statistics such as the median error, the median absolute deviation, the prediction error sum of squares, the root mean square error, the confidence interval for the slope, and the bias correction factor for median estimates are calculated by use of nonparametric methods. These statistics, however, may be used to formulate estimates of mass, volume, or total loads. The program is used to read a two- or three-column tab-delimited input file with variable names in the first row and
Gebrehiwot, Tesfay Gebregzabher; San Sebastian, Miguel; Edin, Kerstin; Goicolea, Isabel
2015-01-01
Background In 2003, the Ethiopian Ministry of Health established the Health Extension Program (HEP), with the goal of improving access to health care and health promotion activities in rural areas of the country. This paper aims to assess the association of the HEP with improved utilization of maternal health services in Northern Ethiopia using institution-based retrospective data. Methods Average quarterly total attendances for antenatal care (ANC), delivery care (DC) and post-natal care (PNC) at health posts and health care centres were studied from 2002 to 2012. Regression analysis was applied to two models to assess whether trends were statistically significant. One model was used to estimate the level and trend changes associated with the immediate period of intervention, while changes related to the post-intervention period were estimated by the other. Results The total number of consultations for ANC, DC and PNC increased constantly, particularly after the late-intervention period. Increases were higher for ANC and PNC at health post level and for DC at health centres. A positive statistically significant upward trend was found for DC and PNC in all facilities (p<0.01). The positive trend was also present in ANC at health centres (p = 0.04), but not at health posts. Conclusion Our findings revealed an increase in the use of antenatal, delivery and post-natal care after the introduction of the HEP. We are aware that other factors, which we could not control for, might explain that increase. The figures for DC and PNC are however low, and more needs to be done in order to increase access to the health care system as well as the demand for these services by the population. Strengthening of the health information system in the region also needs to be prioritized. PMID:26218074
Systems of Linear Equations on a Spreadsheet.
ERIC Educational Resources Information Center
Bosch, William W.; Strickland, Jeff
1998-01-01
The Optimizer in Quattro Pro and the Solver in Excel software programs make solving linear and nonlinear optimization problems feasible for business mathematics students. Proposes ways in which the Optimizer or Solver can be coaxed into solving systems of linear equations. (ASK)
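The coaxing amounts to asking the optimizer to drive the sum of squared residuals of Ax = b to zero. The sketch below illustrates the idea with a hypothetical 2x2 system, with plain gradient descent standing in for the spreadsheet's numeric engine (this is not Quattro Pro or Excel code):

```python
# The spreadsheet trick in miniature (invented system; gradient descent
# stands in for Solver/Optimizer): minimize the sum of squared residuals
# of A x = b, whose minimum of zero is the solution of the system.

A = [[2.0, 1.0],     # 2x + y = 5
     [1.0, -1.0]]    #  x - y = 1
b = [5.0, 1.0]

x = [0.0, 0.0]                        # initial guess, like blank cells
for _ in range(5000):
    r = [sum(A[i][j] * x[j] for j in range(2)) - b[i] for i in range(2)]
    # gradient of sum(r_i^2) with respect to each unknown
    grad = [2 * sum(r[i] * A[i][j] for i in range(2)) for j in range(2)]
    x = [x[j] - 0.1 * grad[j] for j in range(2)]
```

For a nonsingular system the residual objective has a unique zero, so the optimizer's answer coincides with the algebraic solution, which is why the Solver approach works in the classroom setting the article describes.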
QUANTUM OPTICS. Universal linear optics.
Carolan, Jacques; Harrold, Christopher; Sparrow, Chris; Martín-López, Enrique; Russell, Nicholas J; Silverstone, Joshua W; Shadbolt, Peter J; Matsuda, Nobuyuki; Oguma, Manabu; Itoh, Mikitaka; Marshall, Graham D; Thompson, Mark G; Matthews, Jonathan C F; Hashimoto, Toshikazu; O'Brien, Jeremy L; Laing, Anthony
2015-08-14
Linear optics underpins fundamental tests of quantum mechanics and quantum technologies. We demonstrate a single reprogrammable optical circuit that is sufficient to implement all possible linear optical protocols up to the size of that circuit. Our six-mode universal system consists of a cascade of 15 Mach-Zehnder interferometers with 30 thermo-optic phase shifters integrated into a single photonic chip that is electrically and optically interfaced for arbitrary setting of all phase shifters, input of up to six photons, and their measurement with a 12-single-photon detector system. We programmed this system to implement heralded quantum logic and entangling gates, boson sampling with verification tests, and six-dimensional complex Hadamards. We implemented 100 Haar random unitaries with an average fidelity of 0.999 ± 0.001. Our system can be rapidly reprogrammed to implement these and any other linear optical protocol, pointing the way to applications across fundamental science and quantum technologies. PMID:26160375
Electrothermal linear actuator
NASA Technical Reports Server (NTRS)
Derr, L. J.; Tobias, R. A.
1969-01-01
Converting electric power into powerful linear thrust without generation of magnetic fields is accomplished with an electrothermal linear actuator. When heated by an energized filament, a stack of bimetallic washers expands and drives the end of the shaft upward.
Linear Programming Problems for Generalized Uncertainty
ERIC Educational Resources Information Center
Thipwiwatpotjana, Phantipa
2010-01-01
Uncertainty occurs when more than one realization can represent a piece of information. This dissertation concerns only discrete realizations of an uncertainty. Different interpretations of an uncertainty and their relationships are addressed when the uncertainty is not a probability of each realization. A well known model that can handle…
Optimized reservoir management: Mixed linear programming
Currie, J.C.; Novotnak, J.F.; Aasboee, B.T.; Kennedy, C.J.
1997-12-01
The Ekofisk field and surrounding Phillips Norway Group fields, also referred to as the greater Ekofisk area fields, are in the southern part of the Norwegian sector of the North Sea. Oil and gas separation and transportation facilities are centrally located on the Ekofisk complex at Ekofisk field. The Ekofisk 2 redevelopment project is designed to replace the oil-/gas-production and -processing capabilities of the existing Ekofisk complex. This requirement grew out of the high operating and maintenance expenses associated with the existing facilities. Other factors of significance were the effects of seafloor subsidence and changing safety regulations. A significant aspect of the Ekofisk field has been reservoir compaction that has resulted in seabed subsidence over the areal extent of the reservoir. After 25 years of production, the cumulative subsidence in the center of the field is more than 21 ft. The redevelopment project addresses the economic, maintenance, and safety factors and maintains the economic viability of Ekofisk and surrounding fields.
Measuring Astronomical Distances with Linear Programming
ERIC Educational Resources Information Center
Narain, Akshar
2015-01-01
A few years ago it was suggested that the distance to celestial bodies could be computed by tracking their position over about 24 hours and then solving a regression problem. One only needed to use inexpensive telescopes, cameras, and astrometry tools, and the experiment could be done from one's backyard. However, it is not obvious to an amateur…
NASA Technical Reports Server (NTRS)
Sidwell, Kenneth W.; Baruah, Pranab K.; Bussoletti, John E.; Medan, Richard T.; Conner, R. S.; Purdon, David J.
1990-01-01
A comprehensive description of user problem definition for the PAN AIR (Panel Aerodynamics) system is given. PAN AIR solves the 3-D linear integral equations of subsonic and supersonic flow. Influence coefficient methods are used which employ source and doublet panels as boundary surfaces. Both analysis and design boundary conditions can be used. This User's Manual describes the information needed to use the PAN AIR system. The structure and organization of PAN AIR are described, including the job control and module execution control languages for execution of the program system. The engineering input data are described, including the mathematical and physical modeling requirements. This manual strictly applies only to PAN AIR Version 3.0. The major revisions include: (1) inputs and guidelines for the new FDP module (which calculates streamlines and offbody points); (2) nine new class 1 and class 2 boundary conditions to cover commonly used modeling practices, in particular the vorticity-matching Kutta condition; (3) use of the CRAY Solid-state Storage Device (SSD); and (4) incorporation of errata and typos together with additional explanations and guidelines.
NASA Technical Reports Server (NTRS)
Epton, Michael A.; Magnus, Alfred E.
1990-01-01
An outline of the derivation of the differential equation governing linear subsonic and supersonic potential flow is given. The use of Green's Theorem to obtain an integral equation over the boundary surface is discussed. The engineering techniques incorporated in the Panel Aerodynamics (PAN AIR) program (a discretization method which solves the integral equation for arbitrary first order boundary conditions) are then discussed in detail. Items discussed include the construction of the compressibility transformation, splining techniques, imposition of the boundary conditions, influence coefficient computation (including the concept of the finite part of an integral), computation of pressure coefficients, and computation of forces and moments. Principal revisions to version 3.0 are the following: (1) appendices H and K more fully describe the Aerodynamic Influence Coefficient (AIC) construction; (2) appendix L now provides a complete description of the AIC solution process; (3) appendix P is new and discusses the theory for the new FDP module (which calculates streamlines and offbody points); and (4) numerous small corrections and revisions reflecting the MAG module rewrite.
Wiedemann, H.
1981-11-01
Since no linear colliders have been built yet, it is difficult to know at what energy the linear cost scaling of linear colliders drops below the quadratic scaling of storage rings. There is, however, no doubt that a linear collider facility for a center-of-mass energy above, say, 500 GeV is significantly cheaper than an equivalent storage ring. In order to make the linear collider principle feasible at very high energies, a number of problems have to be solved. These problems are of two kinds: those related to the feasibility of the principle, and those associated with minimizing the cost of constructing and operating such a facility. This lecture series describes the problems and possible solutions. Since the real test of a principle requires the construction of a prototype, I will in the last chapter describe the SLC project at the Stanford Linear Accelerator Center.
NASA Technical Reports Server (NTRS)
Holloway, Sidney E., III (Inventor); Crossley, Edward A., Jr. (Inventor); Jones, Irby W. (Inventor); Miller, James B. (Inventor); Davis, C. Calvin (Inventor); Behun, Vaughn D. (Inventor); Goodrich, Lewis R., Sr. (Inventor)
1992-01-01
A linear mass actuator includes an upper housing and a lower housing connectable to each other and having a central passageway passing axially through a mass that is linearly movable in the central passageway. Rollers mounted in the upper and lower housings in frictional engagement with the mass translate the mass linearly in the central passageway and drive motors operatively coupled to the roller means, for rotating the rollers and driving the mass axially in the central passageway.
Linear phase compressive filter
McEwan, Thomas E.
1995-01-01
A phase-linear filter for soliton suppression is in the form of a laddered series of stages of non-commensurate low-pass filters, each low-pass filter having a series-coupled inductance (L) and a reverse-biased, voltage-dependent varactor diode to ground, which acts as a variable capacitance (C). L and C values are set to levels which correspond to a linear or conventional phase-linear filter. Inductance is mapped directly from that of an equivalent nonlinear transmission line, and capacitance is mapped from the linear case using a large-signal equivalent of a nonlinear transmission line.
Linear phase compressive filter
McEwan, T.E.
1995-06-06
A phase-linear filter for soliton suppression is in the form of a laddered series of stages of non-commensurate low-pass filters, each low-pass filter having a series-coupled inductance (L) and a reverse-biased, voltage-dependent varactor diode to ground, which acts as a variable capacitance (C). L and C values are set to levels which correspond to a linear or conventional phase-linear filter. Inductance is mapped directly from that of an equivalent nonlinear transmission line, and capacitance is mapped from the linear case using a large-signal equivalent of a nonlinear transmission line. 2 figs.
Fault tolerant linear actuator
Tesar, Delbert
2004-09-14
In varying embodiments, the fault tolerant linear actuator of the present invention is a new and improved linear actuator with fault tolerance and positional control that may incorporate velocity summing, force summing, or a combination of the two. In one embodiment, the invention offers a velocity summing arrangement with a differential gear between two prime movers driving a cage, which then drives a linear spindle screw transmission. Other embodiments feature two prime movers driving separate linear spindle screw transmissions, one internal and one external, in a totally concentric and compact integrated module.
Runtime Analysis of Linear Temporal Logic Specifications
NASA Technical Reports Server (NTRS)
Giannakopoulou, Dimitra; Havelund, Klaus
2001-01-01
This report presents an approach to checking a running program against its Linear Temporal Logic (LTL) specifications. LTL is a widely used logic for expressing properties of programs viewed as sets of executions. Our approach consists of translating LTL formulae to finite-state automata, which are used as observers of the program behavior. The translation algorithm we propose modifies standard LTL to Büchi automata conversion techniques to generate automata that check finite program traces. The algorithm has been implemented in a tool, which has been integrated with the generic JPaX framework for runtime analysis of Java programs.
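The finite-trace observer idea can be illustrated with a toy hand-written automaton for a single "until" formula; this is a conceptual sketch, not the JPaX translation algorithm:

```python
class UntilObserver:
    """Finite-state observer for 'p U q' evaluated on a finite trace:
    q must eventually hold, and p must hold at every step before it."""
    def __init__(self, p, q):
        self.p, self.q = p, q
        self.verdict = None          # None = still undecided
    def step(self, state):
        if self.verdict is not None:
            return self.verdict
        if self.q(state):
            self.verdict = True      # q observed: formula satisfied
        elif not self.p(state):
            self.verdict = False     # p violated before q: formula failed
        return self.verdict
    def end(self):
        # On a finite trace, an undecided 'until' counts as a failure.
        return bool(self.verdict)

# Feed the observer a finite program trace, one state at a time.
obs = UntilObserver(lambda s: s == "busy", lambda s: s == "done")
for state in ["busy", "busy", "done"]:
    obs.step(state)
assert obs.end()  # the trace satisfies  busy U done
```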
Linearly polarized fiber amplifier
Kliner, Dahv A.; Koplow, Jeffery P.
2004-11-30
Optically pumped rare-earth-doped polarizing fibers exhibit significantly higher gain for one linear polarization state than for the orthogonal state. Such a fiber can be used to construct a single-polarization fiber laser, amplifier, or amplified-spontaneous-emission (ASE) source without the need for additional optical components to obtain stable, linearly polarized operation.
Richter, B.
1985-12-01
A report is given on the goals and progress of the SLAC Linear Collider. The status of the machine and the detectors are discussed and an overview is given of the physics which can be done at this new facility. Some ideas on how (and why) large linear colliders of the future should be built are given.
NASA Technical Reports Server (NTRS)
Clancy, John P.
1988-01-01
The object of the invention is to provide a mechanical force actuator which is lightweight and manipulatable and utilizes linear motion for push or pull forces while maintaining a constant overall length. The mechanical force producing mechanism comprises a linear actuator mechanism and a linear motion shaft mounted parallel to one another. The linear motion shaft is connected to a stationary or fixed housing and to a movable housing, where the movable housing is mechanically actuated through the actuator mechanism by either manual means or motor means. The housings are adapted to releasably receive a variety of jaw or pulling elements adapted for clamping or prying action. The stationary housing is adapted to be pivotally mounted to permit an angular position of the housing to allow the tool to adapt to skewed interfaces. The actuator mechanism is operated by a gear train to obtain linear motion of the actuator mechanism.
Linear models: permutation methods
Cade, B.S.; Everitt, B.S.; Howell, D.C.
2005-01-01
Permutation tests (see Permutation Based Inference) for the linear model have applications in behavioral studies when traditional parametric assumptions about the error term in a linear model are not tenable. Improved validity of Type I error rates can be achieved with properly constructed permutation tests. Perhaps more importantly, increased statistical power, improved robustness to effects of outliers, and detection of alternative distributional differences can be achieved by coupling permutation inference with alternative linear model estimators. For example, it is well known that estimates of the mean in a linear model are extremely sensitive to even a single outlying value of the dependent variable compared to estimates of the median [7, 19]. Traditionally, linear modeling focused on estimating changes in the center of distributions (means or medians). However, quantile regression allows distributional changes to be estimated in all or any selected part of a distribution of responses, providing a more complete statistical picture that has relevance to many biological questions [6]...
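A minimal permutation test for a regression slope, in the spirit described above; this NumPy sketch uses an illustrative statistic and permutation scheme, not the chapter's specific constructions:

```python
import numpy as np

def perm_test_slope(x, y, n_perm=2000, seed=0):
    """Permutation p-value for the slope of y ~ x: shuffle y to build
    the null distribution of the OLS slope estimate."""
    rng = np.random.default_rng(seed)
    def slope(y_):
        return np.polyfit(x, y_, 1)[0]
    obs = slope(y)
    null = np.array([slope(rng.permutation(y)) for _ in range(n_perm)])
    # Two-sided p-value with the +1 correction for the observed statistic.
    return (1 + np.sum(np.abs(null) >= abs(obs))) / (n_perm + 1)
```

Because inference rests on shuffling rather than normality of the errors, the same scheme works unchanged with robust estimators (e.g. a median-based slope) substituted for OLS.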
Hlaing, Lwin Mar; Fahmida, Umi; Htet, Min Kyaw; Utomo, Budi; Firmansyah, Agus; Ferguson, Elaine L
2016-07-01
Poor feeding practices result in inadequate nutrient intakes in young children in developing countries. To improve practices, local food-based complementary feeding recommendations (CFR) are needed. This cross-sectional survey aimed to describe current food consumption patterns of 12-23-month-old Myanmar children (n 106) from Ayeyarwady region in order to identify nutrient requirements that are difficult to achieve using local foods and to formulate affordable and realistic CFR to improve dietary adequacy. Weekly food consumption patterns were assessed using a 12-h weighed dietary record, single 24-h recall and a 5-d food record. Food costs were estimated by market surveys. CFR were formulated by linear programming analysis using WHO Optifood software and evaluated among mothers (n 20) using trial of improved practices (TIP). Findings showed that Ca, Zn, niacin, folate and Fe were 'problem nutrients': nutrients that did not achieve 100 % recommended nutrient intake even when the diet was optimised. Chicken liver, anchovy and roselle leaves were locally available nutrient-dense foods that would fill these nutrient gaps. The final set of six CFR would ensure dietary adequacy for five of twelve nutrients at a minimal cost of 271 kyats/d (based on the exchange rate of 900 kyats/USD at the time of data collection: 3rd quarter of 2012), but inadequacies remained for niacin, folate, thiamin, Fe, Zn, Ca and vitamin B6. TIP showed that mothers believed liver and vegetables would cause worms and diarrhoea, but these beliefs could be overcome to successfully promote liver consumption. Therefore, an acceptable set of CFR were developed to improve the dietary practices of 12-23-month-old Myanmar children using locally available foods. Alternative interventions such as fortification, however, are still needed to ensure dietary adequacy of all nutrients.
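The linear programming formulation behind such an analysis can be sketched as a toy diet problem: minimize food cost subject to nutrient-intake constraints. The foods, nutrient values, and costs below are illustrative assumptions, not the study's data or Optifood's model:

```python
import numpy as np
from scipy.optimize import linprog

# Rows: nutrients (hypothetical units per serving); columns: three foods.
nutrients_per_serving = np.array([
    [9.0, 2.0, 1.0],   # iron
    [2.0, 1.0, 4.0],   # calcium
])
min_required = np.array([9.0, 6.0])             # daily requirement, illustrative
cost_per_serving = np.array([50.0, 10.0, 15.0])  # kyats/serving, illustrative

res = linprog(c=cost_per_serving,
              A_ub=-nutrients_per_serving,  # encode >= constraints as <=
              b_ub=-min_required,
              bounds=[(0, 3)] * 3,          # at most 3 servings of each food
              method="highs")
servings = res.x  # cheapest servings meeting all nutrient constraints
```

Nutrients whose constraints cannot be met even at the serving bounds would surface as infeasibility, the LP analogue of the "problem nutrients" identified in the survey.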
NASA Technical Reports Server (NTRS)
Vinson, John
1998-01-01
In July of 1999 two linear aerospike rocket engines will power the first flight of NASA's X-33 advanced technology demonstrator. A successful X-33 flight test program will validate the aerospike nozzle concept, a key technical feature of Lockheed Martin's VentureStar(trademark) reusable launch vehicle. The aerospike received serious consideration for NASA's current space shuttle, but was eventually rejected in 1969 in favor of high chamber pressure bell engines, in part because of perceived technical risk. The aerospike engine (discussed below) has several performance advantages over conventional bell engines. However, these performance advantages are difficult to validate by ground test. The space shuttle, a multibillion dollar program intended to provide all of NASA's future space lift, could not afford the gamble of choosing a potentially superior though unproven aerospike engine over a conventional bell engine. The X-33 demonstrator provides an opportunity to prove the aerospike's performance advantage in flight before committing to an operational vehicle.
Aircraft engine mathematical model - linear system approach
NASA Astrophysics Data System (ADS)
Rotaru, Constantin; Roateşi, Simona; Cîrciu, Ionicǎ
2016-06-01
This paper examines a simplified mathematical model of the aircraft engine, based on the theory of linear and nonlinear systems. The dynamics of the engine were represented by a linear, time-variant model near a nominal operating point within a finite time interval. The linearized equations were expressed in a matrix form suitable for incorporation in the MAPLE program solver. The behavior of the engine was described in terms of the variation of the rotational speed following a deflection of the throttle. The engine inlet parameters can cover a wide range of altitude and Mach numbers.
NASA Technical Reports Server (NTRS)
Studer, P. A. (Inventor)
1983-01-01
A linear magnetic bearing system having electromagnetic vernier flux paths in shunt relation with permanent magnets, so that the vernier flux does not traverse the permanent magnet, is described. Novelty is believed to reside in providing a linear magnetic bearing having electromagnetic flux paths that bypass high reluctance permanent magnets. Particular novelty is believed to reside in providing a linear magnetic bearing with a pair of axially spaced elements having electromagnets for establishing vernier x and y axis control. The magnetic bearing system has possible use in connection with a long life reciprocating cryogenic refrigerator that may be used on the space shuttle.
BLAS- BASIC LINEAR ALGEBRA SUBPROGRAMS
NASA Technical Reports Server (NTRS)
Krogh, F. T.
1994-01-01
The Basic Linear Algebra Subprogram (BLAS) library is a collection of FORTRAN callable routines for employing standard techniques in performing the basic operations of numerical linear algebra. The BLAS library was developed to provide a portable and efficient source of basic operations for designers of programs involving linear algebraic computations. The subprograms available in the library cover the operations of dot product, multiplication of a scalar and a vector, vector plus a scalar times a vector, Givens transformation, modified Givens transformation, copy, swap, Euclidean norm, sum of magnitudes, and location of the largest magnitude element. Since these subprograms are to be used in an ANSI FORTRAN context, the cases of single precision, double precision, and complex data are provided for. All of the subprograms have been thoroughly tested and produce consistent results even when transported from machine to machine. BLAS contains Assembler versions and FORTRAN test code for any of the following compilers: Lahey F77L, Microsoft FORTRAN, or IBM Professional FORTRAN. It requires the Microsoft Macro Assembler and a math co-processor. The PC implementation allows individual arrays of over 64K. The BLAS library was developed in 1979. The PC version was made available in 1986 and updated in 1988.
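The operations listed map directly onto short NumPy equivalents; the sketch below is illustrative, not the library's FORTRAN interfaces:

```python
import numpy as np

# Plain-NumPy sketches of a few of the BLAS level-1 operations named above.
def axpy(a, x, y):
    """y := a*x + y  (vector plus a scalar times a vector)."""
    return a * x + y

def givens(a, b):
    """Givens rotation: (c, s, r) with [[c, s], [-s, c]] @ [a, b] = [r, 0]."""
    r = np.hypot(a, b)
    return (1.0, 0.0, 0.0) if r == 0 else (a / r, b / r, r)

x = np.array([3.0, 4.0])
print(np.dot(x, x))          # dot product          -> 25.0
print(np.linalg.norm(x))     # Euclidean norm       -> 5.0
print(np.sum(np.abs(x)))     # sum of magnitudes    -> 7.0
print(np.argmax(np.abs(x)))  # index of largest-magnitude element -> 1
```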
Isolated linear blaschkoid psoriasis.
Nasimi, M; Abedini, R; Azizpour, A; Nikoo, A
2016-10-01
Linear psoriasis (LPs) is considered a rare clinical presentation of psoriasis, which is characterized by linear erythematous and scaly lesions along the lines of Blaschko. We report the case of a 20-year-old man who presented with asymptomatic linear and S-shaped erythematous, scaly plaques on right side of his trunk. The plaques were arranged along the lines of Blaschko with a sharp demarcation at the midline. Histological examination of a skin biopsy confirmed the diagnosis of psoriasis. Topical calcipotriol and betamethasone dipropionate ointments were prescribed for 2 months. A good clinical improvement was achieved, with reduction in lesion thickness and scaling. In patients with linear erythematous and scaly plaques along the lines of Blaschko, the diagnosis of LPs should be kept in mind, especially in patients with asymptomatic lesions of late onset. PMID:27663156
NASA Technical Reports Server (NTRS)
Laughlin, Darren
1995-01-01
Inertial linear actuators developed to suppress residual accelerations of nominally stationary or steadily moving platforms. Function like long-stroke version of voice coil in conventional loudspeaker, with superimposed linear variable-differential transformer. Basic concept also applicable to suppression of vibrations of terrestrial platforms. For example, laboratory table equipped with such actuators plus suitable vibration sensors and control circuits made to vibrate much less in presence of seismic, vehicular, and other environmental vibrational disturbances.
Shetty, Shricharith; Rao, Raghavendra; Kudva, R Ranjini; Subramanian, Kumudhini
2016-01-01
Alopecia areata (AA) over scalp is known to present in various shapes and extents of hair loss. Typically it presents as circumscribed patches of alopecia with underlying skin remaining normal. We describe a rare variant of AA presenting in linear band-like form. Only four cases of linear alopecia have been reported in medical literature till today, all four being diagnosed as lupus erythematosus profundus. PMID:27625568
Multiple linear regression analysis
NASA Technical Reports Server (NTRS)
Edwards, T. R.
1980-01-01
Program rapidly selects best-suited set of coefficients. User supplies only vectors of independent and dependent data and specifies confidence level required. Program uses stepwise statistical procedure for relating minimal set of variables to set of observations; final regression contains only most statistically significant coefficients. Program is written in FORTRAN IV for batch execution and has been implemented on NOVA 1200.
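A stepwise procedure of the kind described can be sketched as greedy forward selection on the residual sum of squares; this is a simplified stand-in for the program's statistical criterion, which uses user-specified confidence levels rather than a fixed tolerance:

```python
import numpy as np

def forward_stepwise(X, y, tol=1e-6, max_terms=None):
    """Greedy forward selection for multiple linear regression: repeatedly
    add the column whose inclusion most reduces the residual sum of squares."""
    n, p = X.shape
    chosen, remaining = [], list(range(p))
    best_rss = float(np.sum((y - y.mean()) ** 2))
    while remaining and (max_terms is None or len(chosen) < max_terms):
        rss = {}
        for j in remaining:
            cols = np.column_stack([np.ones(n)] + [X[:, k] for k in chosen + [j]])
            beta, *_ = np.linalg.lstsq(cols, y, rcond=None)
            rss[j] = float(np.sum((y - cols @ beta) ** 2))
        j_best = min(rss, key=rss.get)
        if best_rss - rss[j_best] <= tol:
            break                     # no candidate improves the fit enough
        chosen.append(j_best)
        remaining.remove(j_best)
        best_rss = rss[j_best]
    return chosen
```

The final model contains only the selected columns, mirroring the idea of retaining only the most statistically significant coefficients.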
Superconducting linear actuator
NASA Technical Reports Server (NTRS)
Johnson, Bruce; Hockney, Richard
1993-01-01
Special actuators are needed to control the orientation of large structures in space-based precision pointing systems. Electromagnetic actuators that presently exist are too large in size and their bandwidth is too low. Hydraulic fluid actuation also presents problems for many space-based applications. Hydraulic oil can escape in space and contaminate the environment around the spacecraft. A research study was performed that selected an electrically-powered linear actuator that can be used to control the orientation of a large pointed structure. This research surveyed available products, analyzed the capabilities of conventional linear actuators, and designed a first-cut candidate superconducting linear actuator. The study first examined theoretical capabilities of electrical actuators and determined their problems with respect to the application and then determined if any presently available actuators or any modifications to available actuator designs would meet the required performance. The best actuator was then selected based on available design, modified design, or new design for this application. The last task was to proceed with a conceptual design. No commercially-available linear actuator or modification capable of meeting the specifications was found. A conventional moving-coil dc linear actuator would meet the specification, but the back-iron for this actuator would weigh approximately 12,000 lbs. A superconducting field coil, however, eliminates the need for back iron, resulting in an actuator weight of approximately 1000 lbs.
Designing linear systolic arrays
Kumar, V.K.P.; Tsai, Y.C. (Dept. of Electrical Engineering)
1989-12-01
The authors develop a simple mapping technique to design linear systolic arrays. The basic idea of the technique is to map the computations of a certain class of two-dimensional systolic arrays onto one-dimensional arrays. Using this technique, systolic algorithms are derived for problems such as matrix multiplication and transitive closure on linearly connected arrays of PEs with constant I/O bandwidth. Compared to known designs in the literature, the technique leads to modular systolic arrays with constant hardware in each PE, few control lines, lexicographic data input/output, and improved delay time. The unidirectional flow of control and data in this design assures implementation of the linear array in the known fault models of wafer scale integration.
NASA Technical Reports Server (NTRS)
Leviton, Douglas B. (Inventor)
1993-01-01
A Linear Motion Encoding device for measuring the linear motion of a moving object is disclosed, in which a light source is mounted on the moving object and a position-sensitive detector, such as an array photodetector, is mounted on a nearby stationary object. The light source emits a light beam directed toward the array photodetector such that a light spot is created on the array. An analog-to-digital converter connected to the array photodetector is used for reading the position of the spot on the array photodetector. A microprocessor and memory are connected to the analog-to-digital converter to hold and manipulate the data it provides on the position of the spot and to compute the linear displacement of the moving object from those data.
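The displacement computation performed by the microprocessor can be sketched as a centroid calculation over the array photodetector; the pixel pitch below is a hypothetical parameter, not a value from the patent:

```python
import numpy as np

PIXEL_PITCH_MM = 0.01  # assumed spacing of detector elements (hypothetical)

def spot_position(intensities):
    """Sub-pixel spot location on the array via an intensity centroid."""
    i = np.asarray(intensities, dtype=float)
    return float(np.sum(np.arange(i.size) * i) / np.sum(i))

def displacement_mm(frame_now, frame_ref):
    """Linear displacement of the moving object between two detector readouts."""
    return (spot_position(frame_now) - spot_position(frame_ref)) * PIXEL_PITCH_MM
```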
Linear stochastic optimal control and estimation
NASA Technical Reports Server (NTRS)
Geyser, L. C.; Lehtinen, F. K. B.
1976-01-01
Digital program has been written to solve the LSOCE problem by using a time-domain formulation. LSOCE problem is defined as that of designing controls for linear time-invariant system which is disturbed by white noise in such a way as to minimize quadratic performance index.
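The steady-state piece of such a design, the optimal state feedback minimizing a quadratic performance index, can be sketched with SciPy's continuous-time Riccati solver; the double-integrator plant and weights are illustrative, not the program's formulation:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Double-integrator plant disturbed by white noise; quadratic cost x'Qx + u'Ru.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])

P = solve_continuous_are(A, B, Q, R)   # steady-state Riccati solution
K = np.linalg.solve(R, B.T @ P)        # optimal state-feedback gain u = -K x
closed_loop = A - B @ K                # resulting closed-loop dynamics
```

For this plant the gain has the known closed form K = [1, sqrt(3)], and the closed loop is stable.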
Improved Electrohydraulic Linear Actuators
NASA Technical Reports Server (NTRS)
Hamtil, James
2004-01-01
A product line of improved electrohydraulic linear actuators has been developed. These actuators are designed especially for use in actuating valves in rocket-engine test facilities. They are also adaptable to many industrial uses, such as steam turbines, process control valves, dampers, motion control, etc. The advantageous features of the improved electrohydraulic linear actuators are best described with respect to shortcomings of prior electrohydraulic linear actuators that the improved ones are intended to supplant. The flow of hydraulic fluid to the two ports of the actuator cylinder is controlled by a servo valve that is controlled by a signal from a servo amplifier that, in turn, receives an analog position-command signal (a current having a value between 4 and 20 mA) from a supervisory control system of the facility. As the position command changes, the servo valve shifts, causing a greater flow of hydraulic fluid to one side of the cylinder and thereby causing the actuator piston to move to extend or retract a piston rod from the actuator body. A linear variable differential transformer (LVDT) directly linked to the piston provides a position-feedback signal, which is compared with the position-command signal in the servo amplifier. When the position-feedback and position-command signals match, the servo valve moves to its null position, in which it holds the actuator piston at a steady position.
Duck, F
2010-01-01
The propagation of acoustic waves is a fundamentally non-linear process, and only waves with infinitesimally small amplitudes may be described by linear expressions. In practice, all ultrasound propagation is associated with a progressive distortion in the acoustic waveform and the generation of frequency harmonics. At the frequencies and amplitudes used for medical diagnostic scanning, the waveform distortion can result in the formation of acoustic shocks, excess deposition of energy, and acoustic saturation. These effects occur most strongly when ultrasound propagates within liquids with comparatively low acoustic attenuation, such as water, amniotic fluid, or urine. Attenuation by soft tissues limits but does not extinguish these non-linear effects. Harmonics may be used to create tissue harmonic images. These offer improvements over conventional B-mode images in spatial resolution and, more significantly, in the suppression of acoustic clutter and side-lobe artefacts. The quantity B/A has promise as a parameter for tissue characterization, but methods for imaging B/A have shown only limited success. Standard methods for the prediction of tissue in-situ exposure from acoustic measurements in water, whether for regulatory purposes, for safety assessment, or for planning therapeutic regimes, may be in error because of unaccounted non-linear losses. Biological effects mechanisms are altered by finite-amplitude effects. PMID:20349813
NASA Technical Reports Server (NTRS)
Chandler, J. A. (Inventor)
1985-01-01
The linear motion valve is described. The valve spool employs magnetically permeable rings, spaced apart axially, which engage a sealing assembly having magnetically permeable pole pieces in magnetic relationship with a magnet. The gap between the ring and the pole pieces is sealed with a ferrofluid. Depletion of the ferrofluid is minimized.
Resistors Improve Ramp Linearity
NASA Technical Reports Server (NTRS)
Kleinberg, L. L.
1982-01-01
Simple modification to bootstrap ramp generator gives more linear output over longer sweep times. New circuit adds just two resistors, one of which is adjustable. Modification cancels nonlinearities due to variations in load on charging capacitor and due to changes in charging current as the voltage across capacitor increases.
ERIC Educational Resources Information Center
Dobbs, David E.
2013-01-01
A direct method is given for solving first-order linear recurrences with constant coefficients. The limiting value of that solution is studied as "n to infinity." This classroom note could serve as enrichment material for the typical introductory course on discrete mathematics that follows a calculus course.
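The closed form produced by such a direct method is easy to state in code; a sketch of the solution and its limiting value for the recurrence x_{k+1} = a·x_k + b:

```python
def solve_recurrence(a, b, x0, n):
    """Closed form of x_{k+1} = a*x_k + b (a != 1):
    x_n = a**n * (x0 - L) + L, with fixed point L = b / (1 - a).
    For |a| < 1, x_n converges to L as n grows."""
    L = b / (1 - a)
    return a ** n * (x0 - L) + L
```

Substituting the formula back into the recurrence confirms it: a·x_n + b = a^{n+1}(x0 − L) + aL + b = x_{n+1}, since L = aL + b.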
PC Basic Linear Algebra Subroutines
1992-03-09
PC-BLAS is a highly optimized version of the Basic Linear Algebra Subprograms (BLAS), a standardized set of thirty-eight routines that perform low-level operations on vectors of numbers in single- and double-precision real and complex arithmetic. Routines are included to find the index of the largest component of a vector, apply a Givens or modified Givens rotation, multiply a vector by a constant, determine the Euclidean length, perform a dot product, swap and copy vectors, and find the norm of a vector. The BLAS have been carefully written to minimize numerical problems such as loss of precision and underflow and are designed so that the computation is independent of the interface with the calling program. This independence is achieved through judicious use of Assembly language macros. Interfaces are provided for Lahey Fortran 77, Microsoft Fortran 77, and Ryan-McFarland IBM Professional Fortran.
NASA Technical Reports Server (NTRS)
Goldowsky, Michael P. (Inventor)
1987-01-01
A reciprocating linear motor is formed with a pair of ring-shaped permanent magnets having opposite radial polarizations, held axially apart by a nonmagnetic yoke, which serves as an axially displaceable armature assembly. A pair of annularly wound coils having axial lengths which differ from the axial lengths of the permanent magnets are serially coupled together in mutual opposition and positioned with an outer cylindrical core in axial symmetry about the armature assembly. One embodiment includes a second pair of annularly wound coils serially coupled together in mutual opposition and an inner cylindrical core positioned in axial symmetry inside the armature radially opposite to the first pair of coils. Application of a potential difference across a serial connection of the two pairs of coils creates a current flow perpendicular to the magnetic field created by the armature magnets, thereby causing limited linear displacement of the magnets relative to the coils.
General linear chirplet transform
NASA Astrophysics Data System (ADS)
Yu, Gang; Zhou, Yiqi
2016-03-01
Time-frequency (TF) analysis (TFA) is an effective tool for characterizing the time-varying features of a signal and has drawn much attention over a fairly long period. With the development of TFA, many advanced methods have been proposed that provide more precise TF results; however, they inevitably introduce some restrictions. In this paper, we introduce a novel TFA method, termed the general linear chirplet transform (GLCT), which overcomes some limitations of current TFA methods. In numerical and experimental validations, comparison with current TFA methods demonstrates several advantages of GLCT: it characterizes well multi-component signals with distinct non-linear features, is independent of the mathematical model and the initial TFA method, allows reconstruction of the component of interest, and is insensitive to noise.
NASA Technical Reports Server (NTRS)
Collins, Earl R., Jr.; Curry, Kenneth C.
1990-01-01
Electrically charged helices attract or repel each other. Proposed electrostatic linear actuator made with intertwined dual helices, which holds charge-bearing surfaces. Dual-helix configuration provides relatively large unbroken facing charged surfaces (relatively large electrostatic force) within small volume. Inner helix slides axially in outer helix in response to voltages applied to conductors. Spiral form also makes components more rigid. Actuator conceived to have few moving parts and to be operable after long intervals of inactivity.
Buttram, M.T.; Ginn, J.W.
1988-06-21
A linear induction accelerator includes a plurality of adder cavities arranged in a series and provided in a structure which is evacuated so that a vacuum inductance is provided between each adder cavity and the structure. An energy storage system for the adder cavities includes a pulsed current source and a respective plurality of bipolar converting networks connected thereto. The bipolar high-voltage, high-repetition-rate square pulse train sets and resets the cavities. 4 figs.
Relativistic Linear Restoring Force
ERIC Educational Resources Information Center
Clark, D.; Franklin, J.; Mann, N.
2012-01-01
We consider two different forms for a relativistic version of a linear restoring force. The pair comes from taking Hooke's law to be the force appearing on the right-hand side of the relativistic expressions dp/dt or dp/dτ. Either formulation recovers Hooke's law in the non-relativistic limit. In addition to these two forces, we…
Combustion powered linear actuator
Fischer, Gary J.
2007-09-04
The present invention provides robotic vehicles having wheeled and hopping mobilities that are capable of traversing (e.g. by hopping over) obstacles that are large in size relative to the robot and are capable of operation in unpredictable terrain over long range. The present invention further provides combustion powered linear actuators, which can include latching mechanisms to facilitate pressurized fueling of the actuators, as can be used to provide wheeled vehicles with a hopping mobility.
Representation of linear orders.
Taylor, D A; Kim, J O; Sudevan, P
1984-01-01
Two binary classification tasks were used to explore the associative structure of linear orders. In Experiment 1, college students classified English letters as targets or nontargets, the targets being consecutive letters of the alphabet. The time to reject nontargets was a decreasing function of the distance from the target set, suggesting response interference mediated by automatic associations from the target to the nontarget letters. The way in which this interference effect depended on the placement of the boundaries between the target and nontarget sets revealed the relative strengths of individual interletter associations. In Experiment 2, students were assigned novel linear orders composed of letterlike symbols and asked to classify pairs of symbols as being adjacent or nonadjacent in the assigned sequence. Reaction time was found to be a joint function of the distance between any pair of symbols and the relative positions of those symbols within the sequence. The effects of both distance and position decreased systematically over 6 days of practice with a particular order, beginning at a level typical of unfamiliar orders and converging on a level characteristic of familiar orders such as letters and digits. These results provide an empirical unification of two previously disparate sets of findings in the literature on linear orders, those concerning familiar and unfamiliar orders, and the systematic transition between the two patterns of results suggests the gradual integration of a new associative structure.
NASA Astrophysics Data System (ADS)
Uhlmann, Armin
2016-03-01
This is an introduction to antilinear operators. Following Wigner, the term antilinear is used, as is standard in physics; mathematicians prefer to say conjugate linear. By restricting to finite-dimensional complex-linear spaces, the exposition becomes elementary in the functional-analytic sense. Nevertheless it shows the striking differences from the linear case. The basics of antilinearity are explained in sects. 2, 3, 4, 7 and in sect. 1.2: spectrum, canonical Hermitian form, antilinear rank one and two operators, the Hermitian adjoint, classification of antilinear normal operators, (skew) conjugations, involutions, and acq-lines, the antilinear counterparts of 1-parameter operator groups. Applications include the representation of the Lagrangian Grassmannian by conjugations and its covering by acq-lines, as well as results on equivalence relations. After recalling elementary Tomita-Takesaki theory, antilinear maps associated to a vector of a two-partite quantum system are defined. Because they allow modular objects to be written as twisted products of pairs of such maps, they open some new ways to express EPR and teleportation tasks. The appendix presents a look at the rich structure of antilinear operator spaces.
Linearized Kernel Dictionary Learning
NASA Astrophysics Data System (ADS)
Golts, Alona; Elad, Michael
2016-06-01
In this paper we present a new approach to incorporating kernels into dictionary learning. The kernel K-SVD algorithm (KKSVD), which has been introduced recently, shows an improvement in classification performance relative to its linear counterpart, K-SVD. However, this algorithm requires the storage and handling of a very large kernel matrix, which leads to high computational cost, while also limiting its use to setups with a small number of training examples. We address these problems by combining two ideas: first, we approximate the kernel matrix using a cleverly sampled subset of its columns using the Nyström method; second, as we wish to avoid using this matrix altogether, we decompose it by SVD to form new "virtual samples," on which any linear dictionary learning can be employed. Our method, termed "Linearized Kernel Dictionary Learning" (LKDL), can be seamlessly applied as a pre-processing stage on top of any efficient off-the-shelf dictionary learning scheme, effectively "kernelizing" it. We demonstrate the effectiveness of our method on several tasks of both supervised and unsupervised classification and show the efficiency of the proposed scheme, its easy integration and its performance-boosting properties.
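The Nyström column-sampling step can be sketched as follows. This is an illustrative toy (a hypothetical rank-one linear kernel, not the authors' code), where a single sampled column already reconstructs the kernel matrix exactly via K ≈ C W⁻¹ Cᵀ:

```python
# Illustrative sketch (not the authors' code): Nystrom approximation of a
# kernel matrix K from a sampled subset of its columns.  For the toy
# linear kernel k(x, y) = x * y, K has rank one, so one sampled column
# reconstructs K exactly: K ~= C W^{-1} C^T.
xs = [1.0, 2.0, 3.0, 4.0]

def kernel(x, y):
    return x * y  # toy linear kernel; any PSD kernel could be swapped in

n = len(xs)
K = [[kernel(xs[i], xs[j]) for j in range(n)] for i in range(n)]

# Sample one landmark point (index 0): C is the sampled column and W the
# (here 1x1) kernel matrix among the landmarks.
C = [kernel(xs[i], xs[0]) for i in range(n)]
W_inv = 1.0 / kernel(xs[0], xs[0])

K_approx = [[C[i] * W_inv * C[j] for j in range(n)] for i in range(n)]

err = max(abs(K[i][j] - K_approx[i][j]) for i in range(n) for j in range(n))
print(err)  # 0.0 for a rank-one kernel
```

For higher-rank kernels, more columns are sampled and W⁻¹ becomes a small matrix (pseudo-)inverse, at which point the approximation is no longer exact.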
Ultrasonic linear measurement system
NASA Technical Reports Server (NTRS)
Marshall, Scot H. (Inventor)
1991-01-01
An ultrasonic linear measurement system uses the travel time of surface waves along the perimeter of a three-dimensional curvilinear body to determine the perimeter of the curvilinear body. The system can also be used piece-wise to measure distances along plane surfaces. The system can be used to measure perimeters where use of laser light, optical means or steel tape would be extremely difficult, time consuming or impossible. It can also be used to determine discontinuities in surfaces of known perimeter or dimension.
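The underlying principle is simple arithmetic: the perimeter equals the surface-wave speed multiplied by the one-circuit travel time. A minimal sketch, with an assumed, illustrative wave speed:

```python
# Hypothetical illustration of the measurement principle: a surface wave
# launched around a closed body returns after one full circuit, so
# perimeter = wave_speed * travel_time.  The speed value is illustrative.
def perimeter_from_travel_time(wave_speed_m_s, travel_time_s):
    """Perimeter of the closed surface path traversed by the wave."""
    return wave_speed_m_s * travel_time_s

# e.g. a surface wave at ~3000 m/s returning after 200 microseconds
p = perimeter_from_travel_time(3000.0, 200e-6)
print(p)  # 0.6 (metres)
```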
[Congenital linear nevus sebaceus].
Linnemann, Anders; Bygum, Anette; Fenger-Grøn, Jesper
2011-09-01
An unusual case of nevus sebaceous is described. Nevus sebaceous is a congenital epidermal hamartoma of the skin and the predilection site is the head or neck. In this case the nevus followed the lines of Blaschko along the back of the left lower extremity. The linear lesion seemed papulovesicular which caused suspicion of incontinentia pigmenti or infection, and the boy received antimicrobial treatment until a biopsy revealed the correct diagnosis. We wish to emphasize this clinical picture to spare the patient and relatives from unnecessary tests, treatment and concern. PMID:21893006
NASA Technical Reports Server (NTRS)
Perkins, Gerald S. (Inventor)
1980-01-01
A linear actuator which can apply high forces is described, which includes a reciprocating rod having a threaded portion engaged by a nut that is directly coupled to the rotor of an electric motor. The nut is connected to the rotor in a manner that minimizes loading on the rotor, by the use of a coupling that transmits torque to the nut but permits it to shift axially and radially with respect to the rotor. The nut has a threaded hydrostatic bearing for engaging the threaded rod portion, with an oil-carrying groove in the nut being interrupted.
A Proposed Method for Solving Fuzzy System of Linear Equations
Kargar, Reza; Allahviranloo, Tofigh; Rostami-Malkhalifeh, Mohsen; Jahanshaloo, Gholam Reza
2014-01-01
This paper proposes a new method for solving fuzzy system of linear equations with crisp coefficients matrix and fuzzy or interval right hand side. Some conditions for the existence of a fuzzy or interval solution of m × n linear system are derived and also a practical algorithm is introduced in detail. The method is based on linear programming problem. Finally the applicability of the proposed method is illustrated by some numerical examples. PMID:25215332
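A much-simplified sketch of the interval case (an assumption for illustration, not the paper's linear-programming formulation): for a crisp matrix whose inverse is entrywise nonnegative, the solution is monotone in the right-hand side, so the solution interval is obtained by solving at the two endpoint right-hand sides:

```python
# Simplified sketch (illustrative assumption, not the paper's LP method):
# a crisp 2x2 system A x = b whose right-hand side is an interval vector
# [b_lo, b_hi].  When A^{-1} is entrywise nonnegative, x = A^{-1} b is
# monotone in b, so the solution interval comes from the endpoint RHSs.
def solve2x2(A, b):
    """Cramer's rule for a 2x2 linear system."""
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    x0 = (b[0] * A[1][1] - A[0][1] * b[1]) / det
    x1 = (A[0][0] * b[1] - b[0] * A[1][0]) / det
    return [x0, x1]

A = [[2.0, -1.0], [-1.0, 2.0]]   # A^{-1} = (1/3) [[2, 1], [1, 2]] >= 0
b_lo, b_hi = [1.0, 1.0], [2.0, 3.0]

x_lo = solve2x2(A, b_lo)  # lower endpoint of the solution interval
x_hi = solve2x2(A, b_hi)  # upper endpoint
print(x_lo, x_hi)  # [1.0, 1.0] and [7/3, 8/3]
```

The general case, where the inverse has mixed signs, is exactly what drives the paper's linear-programming formulation.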
Accumulative Equating Error after a Chain of Linear Equatings
ERIC Educational Resources Information Center
Guo, Hongwen
2010-01-01
After many equatings have been conducted in a testing program, equating errors can accumulate to a degree that is not negligible compared to the standard error of measurement. In this paper, the author investigates the asymptotic accumulative standard error of equating (ASEE) for linear equating methods, including chained linear, Tucker, and…
Analysis of linear trade models and relation to scale economies.
Gomory, R E; Baumol, W J
1997-09-01
We discuss linear Ricardo models with a range of parameters. We show that the exact boundary of the region of equilibria of these models is obtained by solving a simple integer programming problem. We show that there is also an exact correspondence between many of the equilibria resulting from families of linear models and the multiple equilibria of economies of scale models.
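At toy scale, such an integer program can be solved by exhaustive enumeration over 0-1 assignments; the coefficients below are hypothetical, not taken from the Gomory-Baumol models:

```python
import itertools

# Toy illustration: the extreme point of a region described by a small
# 0-1 integer program, found by exhaustive enumeration.  The objective
# and constraint coefficients are hypothetical.
values = [4.0, 3.0, 5.0]    # objective coefficients
weights = [2.0, 1.0, 3.0]   # single resource constraint
capacity = 4.0

best_val, best_x = max(
    (sum(v * xi for v, xi in zip(values, x)), x)
    for x in itertools.product((0, 1), repeat=3)
    if sum(w * xi for w, xi in zip(weights, x)) <= capacity
)
print(best_val, best_x)  # 8.0 (0, 1, 1)
```

Real instances use branch-and-bound or cutting planes, but the enumeration makes the "simple integer programming problem" concrete.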
Implementing Linear Algebra Related Algorithms on the TI-92+ Calculator.
ERIC Educational Resources Information Center
Alexopoulos, John; Abraham, Paul
2001-01-01
Demonstrates a less utilized feature of the TI-92+: its natural and powerful programming language. Shows how to implement several linear algebra related algorithms including the Gram-Schmidt process, Least Squares Approximations, Wronskians, Cholesky Decompositions, and Generalized Linear Least Square Approximations with QR Decompositions.…
NASA Astrophysics Data System (ADS)
Birx, Daniel
1992-03-01
Among the family of particle accelerators, the induction linear accelerator is the best suited for the acceleration of high-current electron beams. Because the electromagnetic radiation used to accelerate the electron beam is not stored in the cavities but is supplied by transmission lines during the beam pulse, it is possible to utilize very low-Q (typically < 10) structures and very large beam pipes. This combination increases the beam-breakup-limited maximum currents to the order of kiloamperes. The micropulse lengths of these machines are measured in tens of nanoseconds, and duty factors as high as 10^-4 have been achieved. Until recently the major problem with these machines has been associated with the pulse power drive. Beam currents of kiloamperes and accelerating potentials of megavolts require peak power drives of gigawatts, since no energy is stored in the structure. The marriage of linear accelerator technology and nonlinear magnetic compressors has produced some unique capabilities. It now appears possible to produce electron beams with average currents measured in amperes, peak currents in kiloamperes, and gradients exceeding 1 MeV/meter, with power efficiencies approaching 50%. The nonlinear magnetic compression technology has replaced the spark gap drivers used on earlier accelerators with state-of-the-art all-solid-state SCR-commutated compression chains. The reliability of these machines is now approaching 10^10-shot MTBF. In the following paper we will briefly review the historical development of induction linear accelerators and then discuss the design considerations.
Pseudo Linear Gyro Calibration
NASA Technical Reports Server (NTRS)
Harman, Richard; Bar-Itzhack, Itzhack Y.
2003-01-01
Previous high fidelity onboard attitude algorithms estimated only the spacecraft attitude and gyro bias. The desire to promote spacecraft and ground autonomy and improvements in onboard computing power has spurred development of more sophisticated calibration algorithms. Namely, there is a desire to provide for sensor calibration through calibration parameter estimation onboard the spacecraft as well as autonomous estimation on the ground. Gyro calibration is a particularly challenging area of research. There are a variety of gyro devices available for any prospective mission ranging from inexpensive low fidelity gyros with potentially unstable scale factors to much more expensive extremely stable high fidelity units. Much research has been devoted to designing dedicated estimators such as particular Extended Kalman Filter (EKF) algorithms or Square Root Information Filters. This paper builds upon previous attitude, rate, and specialized gyro parameter estimation work performed with Pseudo Linear Kalman Filter (PSELIKA). The PSELIKA advantage is the use of the standard linear Kalman Filter algorithm. A PSELIKA algorithm for an orthogonal gyro set which includes estimates of attitude, rate, gyro misalignments, gyro scale factors, and gyro bias is developed and tested using simulated and flight data. The measurements PSELIKA uses include gyro and quaternion tracker data.
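The standard linear Kalman filter that PSELIKA builds on can be illustrated in miniature. The sketch below estimates a single constant gyro bias from noisy rate residuals; all numbers are hypothetical, and PSELIKA's actual state also includes attitude, rate, misalignments, and scale factors:

```python
import random

# Minimal standard linear Kalman filter (the building block PSELIKA
# relies on), estimating one constant gyro bias from noisy rate
# residuals.  Bias and noise values are illustrative.
random.seed(0)
true_bias = 0.05      # rad/s, hypothetical
meas_sigma = 0.2      # measurement noise standard deviation

x, P = 0.0, 1.0       # initial bias estimate and its variance
R = meas_sigma ** 2   # measurement noise variance; state is constant

for _ in range(2000):
    z = true_bias + random.gauss(0.0, meas_sigma)  # noisy rate residual
    K = P / (P + R)        # Kalman gain
    x = x + K * (z - x)    # measurement update
    P = (1.0 - K) * P      # covariance update
print(x, P)  # estimate converges toward true_bias as P shrinks
```

For a constant state this filter reduces to recursive averaging, which is why the estimate tightens as more residuals arrive.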
Solutions of The Fully Fuzzy Linear System
NASA Astrophysics Data System (ADS)
Mikaeilvand, Nasser; Allahviranloo, Tofigh
2009-05-01
As can be seen from the definition of extended operations on fuzzy numbers, subtraction and division of fuzzy numbers are not the inverse operations of addition and multiplication, respectively. Hence, for solving equations or systems of equations, we must use methods that avoid inverse operators. In this paper, we propose a novel method to find the nonzero solutions of fully fuzzy linear systems (FFLS). The system's parameters are split into two groups, non-positive and non-negative, by solving one multi-objective linear program (MOLP), and the embedding method is employed to transform the n×n FFLS into a 2n×2n parametric linear system, thereby transforming operations on fuzzy numbers into operations on functions. Finally, numerical examples are used to illustrate this approach.
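The embedding step can be sketched for crisp interval endpoints (a Friedman-style construction is assumed here for illustration; the paper's MOLP sign-splitting and parametric machinery are not reproduced):

```python
# Illustrative sketch of the embedding method: split A into nonnegative
# and nonpositive parts and assemble the 2n x 2n crisp system
#   [[A+, A-], [A-, A+]] (x_lower, x_upper) = (b_lower, b_upper).
# The matrix and right-hand sides below are hypothetical.
def gauss_solve(M, v):
    """Plain Gaussian elimination with partial pivoting."""
    n = len(M)
    M = [row[:] + [v[i]] for i, row in enumerate(M)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            M[r] = [a - f * b for a, b in zip(M[r], M[c])]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        s = sum(M[r][j] * x[j] for j in range(r + 1, n))
        x[r] = (M[r][n] - s) / M[r][r]
    return x

A = [[2.0, -1.0], [1.0, 3.0]]
b_lower, b_upper = [-1.0, 7.0], [2.0, 11.0]
n = len(A)

Ap = [[max(a, 0.0) for a in row] for row in A]   # nonnegative part
Am = [[min(a, 0.0) for a in row] for row in A]   # nonpositive part
S = [Ap[i] + Am[i] for i in range(n)] + [Am[i] + Ap[i] for i in range(n)]

sol = gauss_solve(S, b_lower + b_upper)
x_lower, x_upper = sol[:n], sol[n:]
print(x_lower, x_upper)  # recovers x_lower ~ [1, 2], x_upper ~ [2, 3]
```

The doubling from n×n fuzzy to 2n×2n crisp is exactly the transformation the abstract refers to.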
NASA Astrophysics Data System (ADS)
2001-05-01
Third Nucleus Observed with the VLT. Summary: New images from the VLT show that one of the two nuclei of Comet LINEAR (C/2001 A2), now about 100 million km from the Earth, has just split into at least two pieces. The three fragments are now moving through space in nearly parallel orbits while they slowly drift apart. This comet will pass through its perihelion (nearest point to the Sun) on May 25, 2001, at a distance of about 116 million kilometres. It has brightened considerably due to the splitting of its "dirty snowball" nucleus and can now be seen with the unaided eye by observers in the southern hemisphere as a faint object in the southern constellation of Lepus (The Hare). PR Photo 18a/01: Three nuclei of Comet LINEAR. PR Photo 18b/01: The break-up of Comet LINEAR (false-colour). Caption: ESO PR Photo 18a/01 shows the three nuclei of Comet LINEAR (C/2001 A2). It is a reproduction of a 1-min exposure in red light, obtained in the early evening of May 16, 2001, with the 8.2-m VLT YEPUN (UT4) telescope at Paranal. ESO PR Photo 18b/01 shows the same image, but in a false-colour rendering for more clarity. The cometary fragment "B" (right) has split into "B1" and "B2" (separation about 1 arcsec, or 500 km), while fragment "A" (upper left) is considerably fainter. Comet LINEAR was discovered on January 3, 2001, and designated by the International Astronomical Union (IAU) as C/2001 A2 (see IAU Circular 7564). Six weeks ago, it was suddenly observed to brighten (IAUC 7605). Amateurs all over the world saw the comparatively faint comet reaching naked-eye magnitude, and soon thereafter, observations with professional telescopes indicated
Applications of Goal Programming to Education.
ERIC Educational Resources Information Center
Van Dusseldorp, Ralph A.; And Others
This paper discusses goal programming, a computer-based operations research technique that is basically a modification and extension of linear programming. The authors first discuss the similarities and differences between goal programming and linear programming, then describe the limitations of goal programming and its possible applications for…
Vuori, Kaarina; Strandén, Ismo; Sevón-Aimonen, Marja-Liisa; Mäntysaari, Esa A
2006-01-01
A method based on Taylor series expansion for estimation of location parameters and variance components of non-linear mixed effects models was considered. An attractive property of the method is that it leads to an easily implemented algorithm. Estimation of non-linear mixed effects models can then be done by common methods for linear mixed effects models, and thus existing programs can be used after small modifications. The applicability of this algorithm in animal breeding was studied by simulation using a Gompertz-function growth model in pigs. Two growth data sets were analyzed: a full set containing observations from the entire growing period, and a truncated time-trajectory set containing animals slaughtered prematurely, which is common in pig breeding. The results from the 50 simulation replicates with the full data set indicate that the linearization approach was capable of estimating the original parameters satisfactorily. However, estimation of the parameters related to adult weight becomes unstable in the case of a truncated data set.
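The linearization idea can be sketched as a first-order Taylor expansion of a Gompertz curve in one of its parameters; the analytic sensitivity below is the coefficient a linearized mixed model would attach to the parameter deviation. All parameter values are hypothetical:

```python
import math

# Illustrative sketch of the linearization step: Taylor expansion of a
# Gompertz growth curve W(t) = A * exp(-b * exp(-k * t)) in its rate
# parameter k.  Parameter values (kg, shape, 1/day, days) are hypothetical.
def gompertz(t, A, b, k):
    return A * math.exp(-b * math.exp(-k * t))

A, b, k0, t = 120.0, 4.0, 0.02, 100.0

# Analytic sensitivity dW/dk at k0: the regression coefficient a
# linearized mixed-effects model would use for the deviation (k - k0).
dW_dk = gompertz(t, A, b, k0) * b * t * math.exp(-k0 * t)

# Check against a forward finite difference.
dk = 1e-7
finite_diff = (gompertz(t, A, b, k0 + dk) - gompertz(t, A, b, k0)) / dk
print(dW_dk, finite_diff)  # the two derivatives agree closely
```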
Singular linear-quadratic control problem for systems with linear delay
Sesekin, A. N.
2013-12-18
A singular linear-quadratic optimization problem on the trajectories of non-autonomous linear differential equations with linear delay is considered. The peculiarity of this problem is that it has no solution in the class of integrable controls. To ensure the existence of solutions, the class of controls must be expanded to include controls with impulse components. Dynamical systems with linear delay arise, for example, in describing the motion of a pantograph along the current collector in electric traction, in biology, and elsewhere. It should be noted that singularity of the quality criterion occurs quite commonly in practical problems, so the study of such problems is certainly important. For the problem under discussion, an optimal programmed control containing impulse components at the initial and final moments of time is constructed under certain assumptions on the functional and the right-hand side of the control system.
Linear stochastic optimal control and estimation problem
NASA Technical Reports Server (NTRS)
Geyser, L. C.; Lehtinen, F. K. B.
1980-01-01
Problem involves design of controls for linear time-invariant system disturbed by white noise. Solution is Kalman filter coupled through set of optimal regulator gains to produce desired control signal. Key to solution is solving matrix Riccati differential equation. LSOCE effectively solves problem for wide range of practical applications. Program is written in FORTRAN IV for batch execution and has been implemented on IBM 360.
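The heart of the computation, the Riccati equation, can be illustrated in the scalar case (illustrative coefficients, not LSOCE itself): integrating the Riccati differential equation to steady state reproduces the algebraic root.

```python
import math

# Scalar illustration of the key computation in a linear-quadratic
# regulator/estimator: the steady state of the Riccati differential
# equation  dp/dt = 2*a*p - (b**2/r)*p**2 + q, with illustrative
# coefficients a = -1, b = 1, q = r = 1.
a, b, q, r = -1.0, 1.0, 1.0, 1.0

p, dt = 0.0, 1e-3
for _ in range(20000):               # Euler integration to steady state
    p += dt * (2.0 * a * p - (b * b / r) * p * p + q)

p_algebraic = -1.0 + math.sqrt(2.0)  # positive root of -2*p - p**2 + 1 = 0
print(p, p_algebraic)                # both ~0.41421
```

The matrix version LSOCE solves has the same structure, with the scalar quadratic replaced by a matrix quadratic and the root chosen to be positive semidefinite.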
World lays groundwork for future linear collider
Feder, Toni
2010-07-15
With the Large Hadron Collider at CERN finally working, the particle-physics community can now afford to divide its attention between achieving LHC results and preparing for the next machine on its wish list, an electron-positron linear collider. The preparations involve developing and deciding on the technology for such a machine, the mode of its governance, and how to balance regional and global particle- and accelerator-physics programs.
2006-11-17
Software that simulates and inverts electromagnetic field data for subsurface electrical properties (electrical conductivity) of geological media. The software treats data produced by time-harmonic source-field excitation arising from the following antenna geometries: loops and grounded bipoles, as well as point electric and magnetic dipoles. The inversion process is carried out using a non-linear conjugate gradient optimization scheme, which minimizes the misfit between field data and model data using a least-squares criterion. The software is an upgrade of the code NLCGCS_MP ver 1.0. The upgrade includes the following components: incorporation of new 1D field sourcing routines to more accurately simulate the 3D electromagnetic field for arbitrary geological media, and treatment of generalized finite-length transmitting antenna geometry (antennas with vertical and horizontal component directions). In addition, the software has been upgraded to treat transverse anisotropy in electrical conductivity.
Linear response at criticality
NASA Astrophysics Data System (ADS)
Svenkeson, Adam; Bologna, Mauro; Grigolini, Paolo
2012-10-01
We study a set of cooperatively interacting units at criticality, and we prove with analytical and numerical arguments that they generate the same renewal non-Poisson intermittency as that produced by blinking quantum dots, thereby giving a stronger support to the results of earlier investigation. By analyzing how this out-of-equilibrium system responds to harmonic perturbations, we find that the response can be described only using a new form of linear response theory that accounts for aging and the nonergodic behavior of the underlying process. We connect the undamped response of the system at criticality to the decaying response predicted by the recently established nonergodic fluctuation-dissipation theorem for dichotomous processes using information about the second moment of the fluctuations. We demonstrate that over a wide range of perturbation frequencies the response of the cooperative system is greatest when at criticality.
Van Atta, C.M.; Beringer, R.; Smith, L.
1959-01-01
A linear accelerator of heavy ions is described. The basic contributions of the invention consist of a method and apparatus for obtaining high energy particles of an element with an increased charge-to-mass ratio. The method comprises the steps of ionizing the atoms of an element, accelerating the resultant ions to an energy substantially equal to one Mev per nucleon, stripping orbital electrons from the accelerated ions by passing the ions through a curtain of elemental vapor disposed transversely of the path of the ions to provide a second charge-to-mass ratio, and finally accelerating the resultant stripped ions to a final energy of at least ten Mev per nucleon.
NASA Technical Reports Server (NTRS)
Holloway, Sidney E., III
1994-01-01
This paper describes the mechanical design, analysis, fabrication, testing, and lessons learned by developing a uniquely designed spaceflight-like actuator. The linear proof mass actuator (LPMA) was designed to attach to both a large space structure and a ground test model without modification. Previous designs lacked the power to perform in a terrestrial environment while other designs failed to produce the desired accelerations or frequency range for spaceflight applications. Thus, the design for a unique actuator was conceived and developed at NASA Langley Research Center. The basic design consists of four large mechanical parts (mass, upper housing, lower housing, and center support) and numerous smaller supporting components including an accelerometer, encoder, and four drive motors. Fabrication personnel were included early in the design phase of the LPMA as part of an integrated manufacturing process to alleviate potential difficulties in machining an already challenging design. Operating testing of the LPMA demonstrated that the actuator is capable of various types of load functions.
NASA Technical Reports Server (NTRS)
Holloway, S. E., III
1995-01-01
This paper describes the mechanical design, analysis, fabrication, testing, and lessons learned by developing a uniquely designed spaceflight-like actuator. The Linear Proof Mass Actuator (LPMA) was designed to attach to both a large space structure and a ground test model without modification. Previous designs lacked the power to perform in a terrestrial environment while other designs failed to produce the desired accelerations or frequency range for spaceflight applications. Thus, the design for a unique actuator was conceived and developed at NASA Langley Research Center. The basic design consists of four large mechanical parts (Mass, Upper Housing, Lower Housing, and Center Support) and numerous smaller supporting components including an accelerometer, encoder, and four drive motors. Fabrication personnel were included early in the design phase of the LPMA as part of an integrated manufacturing process to alleviate potential difficulties in machining an already challenging design. Operational testing of the LPMA demonstrated that the actuator is capable of various types of load functions.
Linear Motor Free Piston Compressor
NASA Astrophysics Data System (ADS)
Bloomfield, David P.
1995-02-01
A Linear Motor Free Piston Compressor (LMFPC), a free piston pressure recovery system for fuel cell powerplants, was developed. The LMFPC consists of a reciprocating compressor and a reciprocating expander which are separated by a piston. In the past, energy-efficient turbochargers have been used to pressurize large (over 50 kW) fuel cell powerplants by recovering pressure energy from the powerplant exhaust. A free piston compressor allows pressurizing 3-5 kW sized fuel cell powerplants. The motivation for pressurizing PEM fuel cell powerplants is to improve fuel cell performance. Pressurization of direct methanol fuel cells will be required if PEM membranes are to be used. Direct methanol oxidation anode catalysts require high temperatures to operate at reasonable power densities. The elevated temperatures above 80 C will cause high water loss from conventional PEM membranes unless pressurization is employed. Because pressurization is an energy intensive process, recovery of the pressure energy is required to permit high efficiency in fuel cell powerplants. A complete LMFPC which can pressurize a 3 kW fuel cell stack was built. This unit is one of several that were constructed during the course of the program.
Generalized Linear Covariance Analysis
NASA Astrophysics Data System (ADS)
Markley, F. Landis; Carpenter, J. Russell
2009-01-01
This paper presents a comprehensive approach to filter modeling for generalized covariance analysis of both batch least-squares and sequential estimators. We review and extend in two directions the results of prior work that allowed for partitioning of the state space into "solve-for" and "consider" parameters, accounted for differences between the formal values and the true values of the measurement noise, process noise, and a priori solve-for and consider covariances, and explicitly partitioned the errors into subspaces containing only the influence of the measurement noise, process noise, and a priori solve-for and consider covariances. In this work, we explicitly add sensitivity analysis to this prior work, and relax an implicit assumption that the batch estimator's epoch time occurs prior to the definitive span. We also apply the method to an integrated orbit and attitude problem, in which gyro and accelerometer errors, though not estimated, influence the orbit determination performance. We illustrate our results using two graphical presentations, which we call the "variance sandpile" and the "sensitivity mosaic," and we compare the linear covariance results to confidence intervals associated with ensemble statistics from a Monte Carlo analysis.
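The closing comparison can be illustrated with a toy linear map: the variance predicted by linear covariance analysis versus ensemble statistics from a Monte Carlo run (all values are illustrative):

```python
import random

# Toy version of comparing linear covariance results with Monte Carlo
# ensemble statistics: for y = a*x with x ~ N(0, sigma^2), linear
# covariance analysis predicts var(y) = a^2 * sigma^2 exactly.
random.seed(1)
a, sigma = 3.0, 0.5

var_lincov = a * a * sigma * sigma   # linear covariance prediction: 2.25

N = 200_000
ys = [a * random.gauss(0.0, sigma) for _ in range(N)]
mean = sum(ys) / N
var_mc = sum((y - mean) ** 2 for y in ys) / (N - 1)
print(var_lincov, var_mc)            # Monte Carlo agrees to within ~1%
```

For a truly linear system the two agree up to sampling error; in the paper's nonlinear orbit/attitude setting, the comparison probes how well the linearization holds.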
Generalized Linear Covariance Analysis
NASA Technical Reports Server (NTRS)
Carpenter, James R.; Markley, F. Landis
2014-01-01
This talk presents a comprehensive approach to filter modeling for generalized covariance analysis of both batch least-squares and sequential estimators. We review and extend in two directions the results of prior work that allowed for partitioning of the state space into "solve-for" and "consider" parameters, accounted for differences between the formal values and the true values of the measurement noise, process noise, and a priori solve-for and consider covariances, and explicitly partitioned the errors into subspaces containing only the influence of the measurement noise, process noise, and solve-for and consider covariances. In this work, we explicitly add sensitivity analysis to this prior work, and relax an implicit assumption that the batch estimator's epoch time occurs prior to the definitive span. We also apply the method to an integrated orbit and attitude problem, in which gyro and accelerometer errors, though not estimated, influence the orbit determination performance. We illustrate our results using two graphical presentations, which we call the "variance sandpile" and the "sensitivity mosaic," and we compare the linear covariance results to confidence intervals associated with ensemble statistics from a Monte Carlo analysis.
Kliman, Gerald B.; Brynsvold, Glen V.; Jahns, Thomas M.
1989-01-01
A winding and method of winding for a submersible linear pump for pumping liquid sodium is disclosed. The pump includes a stator having a central cylindrical duct preferably vertically aligned. The central vertical duct is surrounded by a system of coils in slots. These slots are interleaved with magnetic flux conducting elements, these magnetic flux conducting elements forming a continuous magnetic field conduction path along the stator. The central duct has placed therein a cylindrical magnetic conducting core, this core having a cylindrical diameter less than the diameter of the cylindrical duct. The core once placed to the duct defines a cylindrical interstitial pumping volume of the pump. This cylindrical interstitial pumping volume preferably defines an inlet at the bottom of the pump, and an outlet at the top of the pump. Pump operation occurs by static windings in the outer stator sequentially conveying toroidal fields from the pump inlet at the bottom of the pump to the pump outlet at the top of the pump. The winding apparatus and method of winding disclosed uses multiple slots per pole per phase with parallel winding legs on each phase equal to or less than the number of slots per pole per phase. The slot sequence per pole per phase is chosen to equalize the variations in flux density of the pump sodium as it passes into the pump at the pump inlet with little or no flux and acquires magnetic flux in passage through the pump to the pump outlet.
Kliman, G.B.; Brynsvold, G.V.; Jahns, T.M.
1989-08-22
A winding and method of winding for a submersible linear pump for pumping liquid sodium are disclosed. The pump includes a stator having a central cylindrical duct, preferably vertically aligned. The central vertical duct is surrounded by a system of coils in slots. These slots are interleaved with magnetic flux conducting elements, these magnetic flux conducting elements forming a continuous magnetic field conduction path along the stator. The central duct has placed therein a cylindrical magnetic conducting core, this core having a cylindrical diameter less than the diameter of the cylindrical duct. The core, once placed in the duct, defines a cylindrical interstitial pumping volume of the pump. This cylindrical interstitial pumping volume preferably defines an inlet at the bottom of the pump, and an outlet at the top of the pump. Pump operation occurs by static windings in the outer stator sequentially conveying toroidal fields from the pump inlet at the bottom of the pump to the pump outlet at the top of the pump. The winding apparatus and method of winding disclosed use multiple slots per pole per phase, with parallel winding legs on each phase equal to or less than the number of slots per pole per phase. The slot sequence per pole per phase is chosen to equalize the variations in flux density of the pumped sodium as it passes into the pump at the pump inlet with little or no flux and acquires magnetic flux in passage through the pump to the pump outlet. 4 figs.
Meisner, John W.; Moore, Robert M.; Bienvenue, Louis L.
1985-03-19
Electromagnetic linear induction pump for liquid metal which includes a unitary pump duct. The duct comprises two substantially flat parallel spaced-apart wall members, one being located above the other and two parallel opposing side members interconnecting the wall members. Located within the duct are a plurality of web members interconnecting the wall members and extending parallel to the side members whereby the wall members, side members and web members define a plurality of fluid passageways, each of the fluid passageways having substantially the same cross-sectional flow area. Attached to an outer surface of each side member is an electrically conductive end bar for the passage of an induced current therethrough. A multi-phase, electrical stator is located adjacent each of the wall members. The duct, stators, and end bars are enclosed in a housing which is provided with an inlet and outlet in fluid communication with opposite ends of the fluid passageways in the pump duct. In accordance with a preferred embodiment, the inlet and outlet includes a transition means which provides for a transition from a round cross-sectional flow path to a substantially rectangular cross-sectional flow path defined by the pump duct.
Generalized Linear Covariance Analysis
NASA Technical Reports Server (NTRS)
Carpenter, J. Russell; Markley, F. Landis
2008-01-01
We review and extend in two directions the results of prior work on generalized covariance analysis methods. This prior work allowed for partitioning of the state space into "solve-for" and "consider" parameters, allowed for differences between the formal values and the true values of the measurement noise, process noise, and a priori solve-for and consider covariances, and explicitly partitioned the errors into subspaces containing only the influence of the measurement noise, process noise, and a priori solve-for and consider covariances. In this work, we explicitly add sensitivity analysis to this prior work, and relax an implicit assumption that the batch estimator's anchor time occurs prior to the definitive span. We also apply the method to an integrated orbit and attitude problem, in which gyro and accelerometer errors, though not estimated, influence the orbit determination performance. We illustrate our results using two graphical presentations, which we call the "variance sandpile" and the "sensitivity mosaic," and we compare the linear covariance results to confidence intervals associated with ensemble statistics from a Monte Carlo analysis.
NASA Astrophysics Data System (ADS)
Theofilis, Vassilios
2011-01-01
This article reviews linear instability analysis of flows over or through complex two-dimensional (2D) and 3D geometries. In the three decades since it first appeared in the literature, global instability analysis, based on the solution of the multidimensional eigenvalue and/or initial value problem, is continuously broadening both in scope and in depth. To date it has dealt successfully with a wide range of applications arising in aerospace engineering, physiological flows, food processing, and nuclear-reactor safety. In recent years, nonmodal analysis has complemented the more traditional modal approach and increased knowledge of flow instability physics. Recent highlights delivered by the application of either modal or nonmodal global analysis are briefly discussed. A conscious effort is made to demystify both the tools currently utilized and the jargon employed to describe them, demonstrating the simplicity of the analysis. Hopefully this will provide new impulses for the creation of next-generation algorithms capable of coping with the main open research areas in which step-change progress can be expected by the application of the theory: instability analysis of fully inhomogeneous, 3D flows and control thereof.
Berkeley Proton Linear Accelerator
DOE R&D Accomplishments Database
Alvarez, L. W.; Bradner, H.; Franck, J.; Gordon, H.; Gow, J. D.; Marshall, L. C.; Oppenheimer, F. F.; Panofsky, W. K. H.; Richman, C.; Woodyard, J. R.
1953-10-13
A linear accelerator, which increases the energy of protons from a 4 Mev Van de Graaff injector, to a final energy of 31.5 Mev, has been constructed. The accelerator consists of a cavity 40 feet long and 39 inches in diameter, excited at resonance in a longitudinal electric mode with a radio-frequency power of about 2.2 x 10^6 watts peak at 202.5 mc. Acceleration is made possible by the introduction of 46 axial "drift tubes" into the cavity, which is designed such that the particles traverse the distance between the centers of successive tubes in one cycle of the r.f. power. The protons are longitudinally stable as in the synchrotron, and are stabilized transversely by the action of converging fields produced by focusing grids. The electrical cavity is constructed like an inverted airplane fuselage and is supported in a vacuum tank. Power is supplied by 9 high powered oscillators fed from a pulse generator of the artificial transmission line type.
Improved Equivalent Linearization Implementations Using Nonlinear Stiffness Evaluation
NASA Technical Reports Server (NTRS)
Rizzi, Stephen A.; Muravyov, Alexander A.
2001-01-01
This report documents two new implementations of equivalent linearization for solving geometrically nonlinear random vibration problems of complicated structures. The implementations are given the acronym ELSTEP, for "Equivalent Linearization using a STiffness Evaluation Procedure." Both implementations of ELSTEP are fundamentally the same in that they use a novel nonlinear stiffness evaluation procedure to numerically compute otherwise inaccessible nonlinear stiffness terms from commercial finite element programs. The commercial finite element program MSC/NASTRAN (NASTRAN) was chosen as the core of ELSTEP. The FORTRAN implementation calculates the nonlinear stiffness terms and performs the equivalent linearization analysis outside of NASTRAN. The Direct Matrix Abstraction Program (DMAP) implementation performs these operations within NASTRAN. Both provide nearly identical results. Within each implementation, two error minimization approaches for the equivalent linearization procedure are available - force and strain energy error minimization. Sample results for a simply supported rectangular plate are included to illustrate the analysis procedure.
Simulation of a medical linear accelerator for teaching purposes.
Anderson, Rhys; Lamey, Michael; MacPherson, Miller; Carlone, Marco
2015-01-01
Simulation software for medical linear accelerators that can be used in a teaching environment was developed. The components of linear accelerators were modeled to first order accuracy using analytical expressions taken from the literature. The expressions used constants that were empirically set such that realistic response could be expected. These expressions were programmed in a MATLAB environment with a graphical user interface in order to produce an environment similar to that of linear accelerator service mode. The program was evaluated in a systematic fashion, where parameters affecting the clinical properties of medical linear accelerator beams were adjusted independently, and the effects on beam energy and dose rate recorded. These results confirmed that beam tuning adjustments could be simulated in a simple environment. Further, adjustment of service parameters over a large range was possible, and this allows the demonstration of linear accelerator physics in an environment accessible to both medical physicists and linear accelerator service engineers. In conclusion, a software tool, named SIMAC, was developed to improve the teaching of linear accelerator physics in a simulated environment. SIMAC performed in a similar manner to medical linear accelerators. The authors hope that this tool will be valuable as a teaching tool for medical physicists and linear accelerator service engineers.
A Methodology and Linear Model for System Planning and Evaluation.
ERIC Educational Resources Information Center
Meyer, Richard W.
1982-01-01
The two-phase effort at Clemson University to design a comprehensive library automation program is reported. Phase one was based on a version of IBM's business system planning methodology, and the second was based on a linear model designed to compare existing program systems to the phase one design. (MLW)
Linear Algebraic Method for Non-Linear Map Analysis
Yu, L.; Nash, B.
2009-05-04
We present a newly developed method to analyze some non-linear dynamics problems, such as the Henon map, using a matrix analysis method from linear algebra. Choosing the Henon map as an example, we analyze the spectral structure, the tune-amplitude dependence, the variation of tune and amplitude during the particle motion, etc., using the method of Jordan decomposition, which is widely used in conventional linear algebra.
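The linear-algebra viewpoint can be illustrated with a short sketch: linearize a Henon-type map at a fixed point and read the local dynamics off the Jacobian's eigen-decomposition. This is a hedged illustration, not the paper's method: it uses the classic dissipative Hénon map with textbook parameters (a = 1.4, b = 0.3) rather than the symplectic accelerator variant, and plain eigenvalues rather than the full Jordan-decomposition machinery.

```python
import numpy as np

# Classic Henon map; a and b are illustrative textbook values,
# not parameters taken from the paper.
a, b = 1.4, 0.3

def henon(x, y):
    return 1.0 - a * x**2 + y, b * x

# The fixed point solves x = 1 - a*x^2 + b*x (take the + root)
xs = (b - 1.0 + np.sqrt((1.0 - b)**2 + 4.0 * a)) / (2.0 * a)
ys = b * xs

# Jacobian of the map evaluated at the fixed point
J = np.array([[-2.0 * a * xs, 1.0],
              [b,             0.0]])

# Eigen-decomposition: |lambda| > 1 flags a locally unstable direction
evals, evecs = np.linalg.eig(J)
print("fixed point:", (xs, ys))
print("eigenvalues:", evals)
```

For the area-preserving accelerator version of the map, the eigenvalues at a stable fixed point lie on the unit circle and their phase gives the tune; here, in the dissipative case, their product equals det J = -b.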
NASA Technical Reports Server (NTRS)
Medan, R. T. (Editor); Magnus, A. E.; Sidwell, K. W.; Epton, M. A.
1981-01-01
Numerous applications of the PAN AIR computer program system are presented. PAN AIR is a user-oriented tool for analyzing and/or designing aerodynamic configurations in subsonic or supersonic flow using a technique generally referred to as a higher order panel method. Problems solved include simple wings in subsonic and supersonic flow, a wing-body in supersonic flow, a wing with deflected flap in subsonic flow, design of two-dimensional and three-dimensional wings, an axisymmetric nacelle in supersonic flow, and a wing-canard-tail-nacelle-fuselage combination in supersonic flow.
Linear Collider Physics Resource Book Snowmass 2001
Ronan, M.T.
2001-06-01
The American particle physics community can look forward to a well-conceived and vital program of experimentation for the next ten years, using both colliders and fixed target beams to study a wide variety of pressing questions. Beyond 2010, these programs will be reaching the end of their expected lives. The CERN LHC will provide an experimental program of the first importance. But beyond the LHC, the American community needs a coherent plan. The Snowmass 2001 Workshop and the deliberations of the HEPAP subpanel offer a rare opportunity to engage the full community in planning our future for the next decade or more. A major accelerator project requires a decade from the beginning of an engineering design to the receipt of the first data. So it is now time to decide whether to begin a new accelerator project that will operate in the years soon after 2010. We believe that the world high-energy physics community needs such a project. With the great promise of discovery in physics at the next energy scale, and with the opportunity for the uncovering of profound insights, we cannot allow our field to contract to a single experimental program at a single laboratory in the world. We believe that an e+e- linear collider is an excellent choice for the next major project in high-energy physics. Applying experimental techniques very different from those used at hadron colliders, an e+e- linear collider will allow us to build on the discoveries made at the Tevatron and the LHC, and to add a level of precision and clarity that will be necessary to understand the physics of the next energy scale. It is not necessary to anticipate specific results from the hadron collider programs to argue for constructing an e+e- linear collider; in any scenario that is now discussed, physics will benefit from the new information that e+e- experiments can provide. This last point merits further emphasis. If a new accelerator could be designed and
Progress in the Next Linear Collider Design
NASA Astrophysics Data System (ADS)
Raubenheimer, T. O.
2001-07-01
An electron/positron linear collider with a center-of-mass energy between 0.5 and 1 TeV would be an important complement to the physics program of the LHC. The Next Linear Collider (NLC) is being designed by a US collaboration (FNAL, LBNL, LLNL, and SLAC) which is working closely with the Japanese collaboration that is designing the Japanese Linear Collider (JLC). The NLC main linacs are based on normal conducting 11 GHz rf. This paper will discuss the technical difficulties encountered as well as the many changes that have been made to the NLC design over the last year. These changes include improvements to the X-band rf system as well as modifications to the injector and the beam delivery system. They are based on new conceptual solutions as well as results from the R&D programs which have exceeded initial specifications. The net effect has been to reduce the length of the collider from about 32 km to 25 km and to reduce the number of klystrons and modulators by a factor of two. Together these lead to significant cost savings.
NASA Technical Reports Server (NTRS)
Holloway, Sidney E., III; Crossley, Edward A.; Miller, James B.; Jones, Irby W.; Davis, C. Calvin; Behun, Vaughn D.; Goodrich, Lewis R., Sr.
1995-01-01
Linear proof-mass actuator (LPMA) is friction-driven linear mass actuator capable of applying controlled force to structure in outer space to damp out oscillations. Capable of high accelerations and provides smooth, bidirectional travel of mass. Design eliminates gears and belts. LPMA strong enough to be used terrestrially where linear actuators needed to excite or damp out oscillations. High flexibility designed into LPMA by varying size of motors, mass, and length of stroke, and by modifying control software.
Linear collider development at SLAC
Irwin, J.
1993-08-01
Linear collider R&D at SLAC comprises work on the present Stanford Linear Collider (SLC) and work toward the next linear collider (NLC). Recent SLC developments are summarized. NLC studies are divided into hardware-based and theoretical. We report on the status of the NLC Test Accelerator (NLCTA) and the final focus test beam (FFTB), describe plans for ASSET, an installation to measure accelerator structure wakefields, and mention IR design developments. Finally we review recent NLC theoretical studies, ending with the author's view of next linear collider parameter sets.
Quantization of general linear electrodynamics
Rivera, Sergio; Schuller, Frederic P.
2011-03-15
General linear electrodynamics allow for an arbitrary linear constitutive relation between the field strength 2-form and induction 2-form density if crucial hyperbolicity and energy conditions are satisfied, which render the theory predictive and physically interpretable. Taking into account the higher-order polynomial dispersion relation and associated causal structure of general linear electrodynamics, we carefully develop its Hamiltonian formulation from first principles. Canonical quantization of the resulting constrained system then results in a quantum vacuum which is sensitive to the constitutive tensor of the classical theory. As an application we calculate the Casimir effect in a birefringent linear optical medium.
Linear equality constraints in the general linear mixed model.
Edwards, L J; Stewart, P W; Muller, K E; Helms, R W
2001-12-01
Scientists may wish to analyze correlated outcome data with constraints among the responses. For example, piecewise linear regression in a longitudinal data analysis can require use of a general linear mixed model combined with linear parameter constraints. Although well developed for standard univariate models, there are no general results that allow a data analyst to specify a mixed model equation in conjunction with a set of constraints on the parameters. We resolve the difficulty by precisely describing conditions that allow specifying linear parameter constraints that insure the validity of estimates and tests in a general linear mixed model. The recommended approach requires only straightforward and noniterative calculations to implement. We illustrate the convenience and advantages of the methods with a comparison of cognitive developmental patterns in a study of individuals from infancy to early adulthood for children from low-income families.
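As a much-simplified illustration of imposing linear parameter constraints on an estimator, one can solve a bordered KKT system for equality-constrained ordinary least squares. This is only a sketch of the general idea, not the paper's mixed-model machinery: the design matrix, the constraint C b = d, and the coefficient values below are all invented for the example.

```python
import numpy as np

# Synthetic regression problem: minimize ||y - X b||^2 subject to C b = d.
rng = np.random.default_rng(2)
n, p = 50, 3
X = rng.normal(size=(n, p))
y = X @ np.array([1.0, 2.0, 3.0]) + 0.1 * rng.normal(size=n)

C = np.array([[1.0, -1.0, 0.0]])   # constraint: b0 = b1
d = np.array([0.0])

# Bordered (KKT) normal equations:
#   [[X^T X, C^T], [C, 0]] [b; lam] = [X^T y; d]
K = np.block([[X.T @ X, C.T],
              [C, np.zeros((1, 1))]])
rhs = np.concatenate([X.T @ y, d])
sol = np.linalg.solve(K, rhs)
b = sol[:p]                        # constrained coefficient estimate
print("constrained estimate:", b, " C b =", C @ b)
```

The same bordered-system idea is what makes piecewise linear regression work: a continuity requirement at a knot is exactly a linear constraint C b = d on the segment coefficients.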
Linear Algebra and Image Processing
ERIC Educational Resources Information Center
Allali, Mohamed
2010-01-01
We use the computing technology digital image processing (DIP) to enhance the teaching of linear algebra so as to make the course more visual and interesting. Certainly, this visual approach by using technology to link linear algebra to DIP is interesting and unexpected to both students as well as many faculty. (Contains 2 tables and 11 figures.)
Linear algebra and image processing
NASA Astrophysics Data System (ADS)
Allali, Mohamed
2010-09-01
We use the computing technology digital image processing (DIP) to enhance the teaching of linear algebra so as to make the course more visual and interesting. Certainly, this visual approach by using technology to link linear algebra to DIP is interesting and unexpected to both students as well as many faculty.
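A typical classroom exercise linking linear algebra to DIP (offered here as an assumed example, not necessarily one from the article) is low-rank image compression via the singular value decomposition; the "image" below is synthetic so the sketch stays self-contained.

```python
import numpy as np

# Synthetic 64x64 "image": a smooth gradient plus a bright block,
# standing in for a real photograph; the SVD idea is the same.
n = 64
img = np.fromfunction(lambda i, j: i + j, (n, n), dtype=float)
img[16:32, 16:32] += 50.0

# SVD: img = U @ diag(s) @ Vt, singular values in decreasing order
U, s, Vt = np.linalg.svd(img, full_matrices=False)

# Keep only the k largest singular values -> best rank-k approximation
k = 5
approx = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

rel_err = np.linalg.norm(img - approx) / np.linalg.norm(img)
print(f"rank-{k} relative error: {rel_err:.2e}")
```

Storing the rank-k factors takes k(2n + 1) numbers instead of n^2, which is the compression students can see directly when the approximated image is displayed.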
Passive linearization of nonlinear resonances
NASA Astrophysics Data System (ADS)
Habib, G.; Grappasonni, C.; Kerschen, G.
2016-07-01
The objective of this paper is to demonstrate that the addition of properly tuned nonlinearities to a nonlinear system can increase the range over which a specific resonance responds linearly. Specifically, we seek to enforce two important properties of linear systems, namely, the force-displacement proportionality and the invariance of resonance frequencies. Numerical simulations and experiments are used to validate the theoretical findings.
Spatial Processes in Linear Ordering
ERIC Educational Resources Information Center
von Hecker, Ulrich; Klauer, Karl Christoph; Wolf, Lukas; Fazilat-Pour, Masoud
2016-01-01
Memory performance in linear order reasoning tasks (A > B, B > C, C > D, etc.) shows quicker, and more accurate responses to queries on wider (AD) than narrower (AB) pairs on a hypothetical linear mental model (A -- B -- C -- D). While indicative of an analogue representation, research so far did not provide positive evidence for spatial…
Suppressing Electron Cloud in Future Linear Colliders
Pivi, M.; Kirby, R. E.; Raubenheimer, T. O.; Le Pimpec, F. (PSI, Villigen)
2005-05-27
Any accelerator circulating positively charged beams can suffer from a build-up of an electron cloud (EC) in the beam pipe. The cloud develops through ionization of residual gases, synchrotron radiation and secondary electron emission and, when severe, can cause instability, emittance blow-up or loss of the circulating beam. The electron cloud is potentially a luminosity limiting effect for both the Large Hadron Collider (LHC) and the International Linear Collider (ILC). For the ILC positron damping ring, the development of the electron cloud must be suppressed. This paper discusses the state-of-the-art of the ongoing SLAC and international R&D program to study potential remedies.
Linear and nonlinear oscillations in Classical Mechanics
NASA Astrophysics Data System (ADS)
Cruz, Enrique; Martinez, Juan L.; Camacho, Edgar
1997-04-01
The theory of small oscillations is very important in many areas of physics and other sciences due to the simple form of the equations and the easy interpretation of the results. In this work we show three examples of mechanical systems and, using the Lagrangian formulation, we study the linear regime by making approximations to Lagrange's equations; for the analysis of the nonlinear behavior of the systems we use the Hamiltonian formulation. We use the program MATHEMATICA for the whole analysis. MATHEMATICA is useful because many students can approach the analysis and simulations using modern tools such as its symbolic and numerical computational packages.
Ensemble control of linear systems with parameter uncertainties
NASA Astrophysics Data System (ADS)
Kou, Kit Ian; Liu, Yang; Zhang, Dandan; Tu, Yanshuai
2016-07-01
In this paper, we study the optimal control problem for a class of four-dimensional linear systems based on quaternionic and Fourier analysis. When the control is unconstrained, the optimal ensemble controller for these linear ensemble control systems is given in terms of prolate spheroidal wave functions. For the constrained convex optimisation problem of such systems, quadratic programming is presented to obtain the optimal control laws. Simulations are given to verify the effectiveness of the proposed theory.
Successive linear optimization approach to the dynamic traffic assignment problem
Ho, J.K.
1980-11-01
A dynamic model for the optimal control of traffic flow over a network is considered. The model, which treats congestion explicitly in the flow equations, gives rise to nonlinear, nonconvex mathematical programming problems. It has been shown for a piecewise linear version of this model that a global optimum is contained in the set of optimal solutions of a certain linear program. A sufficient condition for optimality is presented which implies that a global optimum can be obtained by successively optimizing at most N + 1 objective functions for the linear program, where N is the number of time periods in the planning horizon. Computational results are reported to indicate the efficiency of this approach.
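The building block of the approach, solving one linear program, can be sketched on a toy static network. Everything below is invented for illustration (two links, their costs, capacities, and a single demand), and `scipy.optimize.linprog` merely stands in for whatever LP solver the paper used.

```python
import numpy as np
from scipy.optimize import linprog

# Toy static analogue of one LP step: route 10 units of demand over
# two parallel links with capacities 6 and 8 and unit costs 1 and 2.
c = np.array([1.0, 2.0])           # per-unit travel cost on each link
A_eq = np.array([[1.0, 1.0]])      # flow conservation: x1 + x2 = demand
b_eq = np.array([10.0])
bounds = [(0.0, 6.0), (0.0, 8.0)]  # link capacity bounds

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
print("optimal flows:", res.x, " total cost:", res.fun)
```

The successive-optimization idea of the paper would re-solve such an LP with a sequence of objective vectors, one per time period in the planning horizon.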
Computer modeling of batteries from non-linear circuit elements
NASA Technical Reports Server (NTRS)
Waaben, S.; Federico, J.; Moskowitz, I.
1983-01-01
A simple non-linear circuit model for battery behavior is given. It is based on time-dependent features of the well-known PIN charge-storage diode, whose behavior is described by equations similar to those associated with electrochemical cells. The circuit simulation computer program ADVICE was used to predict non-linear response from a topological description of the battery analog built from ADVICE components. By a reasonable choice of one set of parameters, the circuit accurately simulates a wide spectrum of measured non-linear battery responses to within a few millivolts.
Linearization algorithms for line transfer
Scott, H.A.
1990-11-06
Complete linearization is a very powerful technique for solving multi-line transfer problems that can be used efficiently with a variety of transfer formalisms. The linearization algorithm we describe is computationally very similar to ETLA, but allows an effective treatment of strongly-interacting lines. This algorithm has been implemented (in several codes) with two different transfer formalisms in all three one-dimensional geometries. We also describe a variation of the algorithm that handles saturable laser transport. Finally, we present a combination of linearization with a local approximate operator formalism, which has been implemented in two dimensions and is being developed in three dimensions. 11 refs.
Precision magnetic suspension linear bearing
NASA Technical Reports Server (NTRS)
Trumper, David L.; Queen, Michael A.
1992-01-01
We have presented the design and analyzed the electromechanics of a linear motor suitable for independently controlling two suspension degrees of freedom. This motor, at least on paper, meets the requirements for driving an X-Y stage of 10 kg mass with about 4 m/sq sec acceleration, with travel of several hundred millimeters in X and Y, and with reasonable power dissipation. A conceptual design for such a stage is presented. The theoretical feasibility of linear and planar bearings using single or multiple magnetic suspension linear motors is demonstrated.
Characterizations of linear sufficient statistics
NASA Technical Reports Server (NTRS)
Peters, B. C., Jr.; Redner, R.; Decell, H. P., Jr.
1976-01-01
A necessary and sufficient condition is developed such that there exists a continous linear sufficient statistic T for a dominated collection of totally finite measures defined on the Borel field generated by the open sets of a Banach space X. In particular, corollary necessary and sufficient conditions are given so that there exists a rank K linear sufficient statistic T for any finite collection of probability measures having n-variate normal densities. In this case a simple calculation, involving only the population means and covariances, determines the smallest integer K for which there exists a rank K linear sufficient statistic T (as well as an associated statistic T itself).
Practical Session: Simple Linear Regression
NASA Astrophysics Data System (ADS)
Clausel, M.; Grégoire, G.
2014-12-01
Two exercises are proposed to illustrate simple linear regression. The first one is based on Galton's famous data set on heredity. We use the lm R command and obtain coefficient estimates, the residual standard error, R2, residuals, etc. In the second example, devoted to data on the vapor tension of mercury, we fit a simple linear regression, predict values, and anticipate multiple linear regression. This practical session is an excerpt from practical exercises proposed by A. Dalalyan at ENPC (see Exercises 1 and 2 of http://certis.enpc.fr/~dalalyan/Download/TP_ENPC_4.pdf).
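The first exercise's fit can be reproduced in outline without R: ordinary least squares for y = b0 + b1 x via numpy. The data below are synthetic heights with invented coefficients, not Galton's actual table.

```python
import numpy as np

# Synthetic stand-in for Galton's data: child height regressed on
# parent height; true slope 0.65 and intercept 25 are invented.
rng = np.random.default_rng(0)
x = rng.uniform(60.0, 75.0, size=200)             # "parent" heights
y = 25.0 + 0.65 * x + rng.normal(0.0, 2.0, 200)   # "child" heights + noise

X = np.column_stack([np.ones_like(x), x])         # design matrix [1, x]
beta, *_ = np.linalg.lstsq(X, y, rcond=None)      # [b0, b1]

resid = y - X @ beta
sigma2 = resid @ resid / (len(y) - 2)             # residual variance
r2 = 1.0 - (resid @ resid) / np.sum((y - y.mean())**2)
print("coefficients:", beta, " R^2:", round(r2, 3))
```

These are the same quantities R's lm() summary reports: the coefficient vector, the residual standard error (sqrt of sigma2 here), and R-squared.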
Linear Bregman algorithm implemented in parallel GPU
NASA Astrophysics Data System (ADS)
Li, Pengyan; Ke, Jue; Sui, Dong; Wei, Ping
2015-08-01
At present, most compressed sensing (CS) algorithms have poor convergence speed and are thus difficult to run on a PC. To deal with this issue, we use a parallel GPU to implement a widely used compressed sensing algorithm, the linear Bregman algorithm. The linear iterative Bregman algorithm is a reconstruction algorithm proposed by Osher and Cai. Compared with other CS reconstruction algorithms, the linear Bregman algorithm involves only vector and matrix multiplication and a thresholding operation, and is simpler and more efficient to program. We use C as the development language and adopt CUDA (Compute Unified Device Architecture) as the parallel computing architecture. In this paper, we compare the parallel Bregman algorithm with a traditional CPU implementation of the Bregman algorithm. In addition, we also compare the parallel Bregman algorithm with other CS reconstruction algorithms, such as the OMP and TwIST algorithms. Compared with these two algorithms, the results show that the parallel Bregman algorithm needs less time and is thus more convenient for real-time object reconstruction, which is important given the fast-growing demand for information technology.
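A minimal CPU sketch of the linearized Bregman iteration shows why it parallelizes well: the loop is nothing but matrix-vector products plus an elementwise soft threshold, exactly the operations a GPU handles efficiently. Problem size, mu, and the step size below are illustrative choices, not values from the paper.

```python
import numpy as np

# Sparse-recovery test problem: recover a 3-sparse signal from
# 30 random measurements of a length-60 vector.
rng = np.random.default_rng(1)
m, n = 30, 60
A = rng.normal(size=(m, n)) / np.sqrt(m)    # sensing matrix
x_true = np.zeros(n)
x_true[[5, 17, 42]] = [5.0, -4.0, 6.0]      # sparse signal
b = A @ x_true                              # measurements

mu = 5.0                                    # soft-threshold level (assumed)
delta = 1.0 / np.linalg.norm(A, 2) ** 2     # step size from spectral norm

def shrink(v, t):
    """Elementwise soft thresholding, the only nonlinearity in the loop."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

x = np.zeros(n)
v = np.zeros(n)
for _ in range(5000):
    v += A.T @ (b - A @ x)                  # accumulate residual correlations
    x = delta * shrink(v, mu)               # thresholded update keeps x sparse

rel_res = np.linalg.norm(A @ x - b) / np.linalg.norm(b)
print("relative residual:", rel_res)
```

On a GPU, the two matrix-vector products and the thresholding map directly onto one kernel launch each, which is the structural point the paper exploits.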
Overdetermined Systems of Linear Equations.
ERIC Educational Resources Information Center
Williams, Gareth
1990-01-01
Explored is an overdetermined system of linear equations to find an appropriate least squares solution. A geometrical interpretation of this solution is given. Included is a least squares point discussion. (KR)
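The least squares solution and its geometric interpretation can be shown in a few lines: the fitted residual is orthogonal to the column space of A, i.e. A^T r = 0. The small system below is invented for the illustration.

```python
import numpy as np

# Overdetermined system: 4 equations, 2 unknowns (no exact solution).
A = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0],
              [1.0, 4.0]])
b = np.array([1.1, 1.9, 3.2, 3.8])

# Least squares solution minimizes ||A x - b||_2
x, *_ = np.linalg.lstsq(A, b, rcond=None)

# Geometric check: the residual is orthogonal to every column of A,
# so A x is the orthogonal projection of b onto the column space.
r = b - A @ x
print("solution:", x)
print("A^T r:", A.T @ r)
```

Equivalently, x satisfies the normal equations A^T A x = A^T b, which is the "least squares point" discussion in the article.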
Acoustic emission linear pulse holography
Collins, H.D.; Busse, L.J.; Lemon, D.K.
1983-10-25
This device relates to the concept of and means for performing Acoustic Emission Linear Pulse Holography, which combines the advantages of linear holographic imaging and Acoustic Emission into a single non-destructive inspection system. This unique system produces a chronological, linear holographic image of a flaw by utilizing the acoustic energy emitted during crack growth. The innovation is the concept of utilizing the crack-generated acoustic emission energy to generate a chronological series of images of a growing crack by applying linear, pulse holographic processing to the acoustic emission data. The process is implemented by placing on a structure an array of piezoelectric sensors (typically 16 or 32 of them) near the defect location. A reference sensor is placed between the defect and the array.
Optimal piecewise locally linear modeling
NASA Astrophysics Data System (ADS)
Harris, Chris J.; Hong, Xia; Feng, M.
1999-03-01
Associative memory networks such as radial basis function, neurofuzzy, and fuzzy logic networks used for modelling nonlinear processes suffer from the curse of dimensionality (COD), in that as the input dimension increases, the parameterization, computation cost, training data requirements, etc., increase exponentially. Here a new algorithm is introduced for the construction of Delaunay input-space-partitioned optimal piecewise locally linear models to overcome the COD and to generate locally linear models directly amenable to linear control and estimation algorithms. The training of the model is configured as a new mixture-of-experts network with a new fast decision rule derived using convex set theory. A very fast simulated reannealing (VFSR) algorithm is utilized to search for a globally optimal solution of the Delaunay input space partition. A benchmark nonlinear time series is used to demonstrate the new approach.
SLC: The first linear collider
NASA Astrophysics Data System (ADS)
Phinney, Nan
The Stanford Linear Collider (SLC) was built in the 1980s at the Stanford Linear Accelerator Center (SLAC) in California. Like LEP, it was designed to study the properties of the Z boson at a center-of-mass energy of about 91 GeV. The SLC was also a prototype for an entirely new approach to electron-positron colliders. The development of a new technology was motivated by the fact that in an electron storage ring, the electrons radiate synchrotron radiation as they are bent around the ring. To avoid excessive energy loss from this radiation, the circumference of the ring has to increase as the square of the desired energy, making very high energy rings prohibitively large and expensive. With a linear accelerator, the electrons do not need to bend and the tunnel length only grows linearly with energy...
Spacetime metric from linear electrodynamics
NASA Astrophysics Data System (ADS)
Obukhov, Yuri N.; Hehl, Friedrich W.
1999-07-01
The Maxwell equations are formulated on an arbitrary (1+3)-dimensional manifold. Then, imposing a (constrained) linear constitutive relation between electromagnetic field (E,B) and excitation (D,ℌ), we derive the metric of spacetime therefrom.
Linear Back-Drive Differentials
NASA Technical Reports Server (NTRS)
Waydo, Peter
2003-01-01
Linear back-drive differentials have been proposed as alternatives to conventional gear differentials for applications in which there is only limited rotational motion (e.g., oscillation). The finite nature of the rotation makes it possible to optimize a linear back-drive differential in ways that would not be possible for gear differentials or other differentials that are required to be capable of unlimited rotation. As a result, relative to gear differentials, linear back-drive differentials could be more compact and less massive, could contain fewer complex parts, and could be less sensitive to variations in the viscosities of lubricants. Linear back-drive differentials would operate according to established principles of power ball screws and linear-motion drives, but would utilize these principles in an innovative way. One major characteristic of such mechanisms that would be exploited in linear back-drive differentials is the possibility of designing them to drive or back-drive with similar efficiency and energy input: in other words, such a mechanism can be designed so that a rotating screw can drive a nut linearly or the linear motion of the nut can cause the screw to rotate. A linear back-drive differential (see figure) would include two collinear shafts connected to two parts that are intended to engage in limited opposing rotations. The linear back-drive differential would also include a nut that would be free to translate along its axis but not to rotate. The inner surface of the nut would be right-hand threaded at one end and left-hand threaded at the opposite end to engage corresponding right- and left-handed threads on the shafts. A rotation and torque introduced into the system via one shaft would drive the nut in linear motion. The nut, in turn, would back-drive the other shaft, creating a reaction torque. Balls would reduce friction, making it possible for the shaft/nut coupling on each side to operate with 90 percent efficiency.
A path-following interior-point algorithm for linear and quadratic problems
Wright, S.J.
1993-12-01
We describe an algorithm for the monotone linear complementarity problem that converges from any positive, not necessarily feasible, starting point and exhibits polynomial complexity if some additional assumptions are made on the starting point. If the problem has a strictly complementary solution, the method converges subquadratically. We show that the algorithm and its convergence extend readily to the mixed monotone linear complementarity problem and, hence, to all the usual formulations of the linear programming and convex quadratic programming problems.
Polarized Electrons for Linear Colliders
NASA Astrophysics Data System (ADS)
Clendenin, J. E.; Brachmann, A.; Garwin, E. L.; Kirby, R. E.; Luh, D.-A.; Maruyama, T.; Prescott, C. Y.; Sheppard, J. C.; Turner, J.; Prepost, R.
2005-08-01
Future electron-positron linear colliders require a highly polarized electron beam with a pulse structure that depends primarily on whether the acceleration utilizes warm or superconducting RF structures. The International Linear Collider (ILC) will use cold structures for the main linac. It is shown that a DC-biased polarized photoelectron source such as successfully used for the SLC can meet the charge requirements for the ILC micropulse with a polarization approaching 90%.
Polarized Electrons for Linear Colliders
Clendenin, J.
2004-11-19
Future electron-positron linear colliders require a highly polarized electron beam with a pulse structure that depends primarily on whether the acceleration utilizes warm or superconducting rf structures. The International Linear Collider (ILC) will use cold structures for the main linac. It is shown that a dc-biased polarized photoelectron source such as successfully used for the SLC can meet the charge requirements for the ILC micropulse with a polarization approaching 90%.
Linear superposition in nonlinear equations.
Khare, Avinash; Sukhatme, Uday
2002-06-17
Several nonlinear systems such as the Korteweg-de Vries (KdV) and modified KdV equations and the λφ⁴ theory possess periodic traveling wave solutions involving Jacobi elliptic functions. We show that suitable linear combinations of these known periodic solutions yield many additional solutions with different periods and velocities. This linear superposition procedure works by virtue of some remarkable new identities involving elliptic functions. PMID:12059300
BMDO photovoltaics program overview
NASA Technical Reports Server (NTRS)
Caveny, Leonard H.; Allen, Douglas M.
1994-01-01
This is an overview of the Ballistic Missile Defense Organization (BMDO) Photovoltaic Program. Areas discussed are: (1) BMDO Advanced Solar Array program; (2) Brilliant Eyes type satellites; (3) Electric propulsion; (4) Contractor Solar arrays; (5) Ioffe concentrator and cell development; (6) Entech linear mini-dome concentrator; and (7) Flight test update/plans.
Automating linear accelerator quality assurance
Eckhause, Tobias; Thorwarth, Ryan; Moran, Jean M.; Al-Hallaq, Hania; Farrey, Karl; Ritter, Timothy; DeMarco, John; Pawlicki, Todd; Kim, Gwe-Ya; Popple, Richard; Sharma, Vijeshwar; Park, SungYong; Perez, Mario; Booth, Jeremy T.
2015-10-15
Purpose: The purpose of this study was 2-fold. One purpose was to develop an automated, streamlined quality assurance (QA) program for use by multiple centers. The second purpose was to evaluate machine performance over time for multiple centers using linear accelerator (Linac) log files and electronic portal images. The authors sought to evaluate variations in Linac performance to establish a reference for other centers. Methods: The authors developed analytical software tools for a QA program using both log files and electronic portal imaging device (EPID) measurements. The first tool is a general analysis tool which can read and visually represent data in the log file. This tool, which can be used to automatically analyze patient treatment or QA log files, examines the files for Linac deviations which exceed thresholds. The second set of tools consists of a test suite of QA fields, a standard phantom, and software to collect information from the log files on deviations from the expected values. The test suite was designed to focus on the mechanical tests of the Linac, including jaw, MLC, and collimator positions during static, IMRT, and volumetric modulated arc therapy delivery. A consortium of eight institutions delivered the test suite at monthly or weekly intervals on each Linac using a standard phantom. The behavior of various components was analyzed for eight TrueBeam Linacs. Results: For the EPID and trajectory log file analysis, all observed deviations which exceeded established thresholds for Linac behavior resulted in a beam hold-off. In the absence of an interlock-triggering event, the maximum observed log file deviations between the expected and actual component positions (such as MLC leaves) varied from less than 1% to 26% of published tolerance thresholds. The maximum and standard deviations of the variations due to gantry sag, collimator angle, jaw position, and MLC positions are presented. Gantry sag among Linacs was 0.336 ± 0.072 mm.
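A minimal sketch (not the consortium's software) of the log-file check described above: compare logged component positions with expected ones and report the worst deviation as a percentage of a published tolerance. The positions and the 1 mm tolerance are invented for illustration.

```python
def max_deviation_pct(expected, actual, tolerance_mm):
    # Worst absolute deviation between logged and planned positions,
    # expressed as a percentage of the tolerance threshold.
    worst = max(abs(a - e) for a, e in zip(actual, expected))
    return 100.0 * worst / tolerance_mm

expected = [10.0, 12.5, 15.0, 17.5]     # planned MLC leaf positions (mm)
actual   = [10.02, 12.48, 15.26, 17.5]  # positions from the trajectory log
pct = max_deviation_pct(expected, actual, tolerance_mm=1.0)  # 26.0
```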
Estimating population trends with a linear model
Bart, J.; Collins, B.; Morrison, R.I.G.
2003-01-01
We describe a simple and robust method for estimating trends in population size. The method may be used with Breeding Bird Survey data, aerial surveys, point counts, or any other program of repeated surveys at permanent locations. Surveys need not be made at each location during each survey period. The method differs from most existing methods in being design based, rather than model based. The only assumptions are that the nominal sampling plan is followed and that sample size is large enough for use of the t-distribution. Simulations based on two bird data sets from natural populations showed that the point estimate produced by the linear model was essentially unbiased even when counts varied substantially and 25% of the complete data set was missing. The estimating-equation approach, often used to analyze Breeding Bird Survey data, performed similarly on one data set but had substantial bias on the second data set, in which counts were highly variable. The advantages of the linear model are its simplicity, flexibility, and that it is self-weighting. A user-friendly computer program to carry out the calculations is available from the senior author.
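The trend calculation can be illustrated with a toy version: an ordinary least-squares fit of counts against years with missing visits simply skipped. This is a sketch, not the authors' exact design-based estimator, and the data are invented.

```python
def trend_slope(years, counts):
    # Least-squares slope of count vs. year; None marks a missed survey.
    pts = [(y, c) for y, c in zip(years, counts) if c is not None]
    n = len(pts)
    my = sum(y for y, _ in pts) / n
    mc = sum(c for _, c in pts) / n
    sxy = sum((y - my) * (c - mc) for y, c in pts)
    sxx = sum((y - my) ** 2 for y, _ in pts)
    return sxy / sxx            # mean change in count per year

years  = [2000, 2001, 2002, 2003, 2004]
counts = [20, 22, None, 26, 28]          # one survey period missing
slope = trend_slope(years, counts)       # 2.0 birds/year
```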
An algorithm for linearizing convex extremal problems
Gorskaya, Elena S
2010-06-09
This paper suggests a method of approximating the solution of minimization problems for convex functions of several variables under convex constraints. The main idea of this approach is the approximation of a convex function by a piecewise linear function, which results in replacing the problem of convex programming by a linear programming problem. To carry out such an approximation, the epigraph of a convex function is approximated by the projection of a polytope of greater dimension. In the first part of the paper, the problem is considered for functions of one variable. In this case, an algorithm for approximating the epigraph of a convex function by a polygon is presented; it is shown that this algorithm is optimal with respect to the number of vertices of the polygon, and exact bounds for this number are obtained. After this, using an induction procedure, the algorithm is generalized to certain classes of functions of several variables. Applying the suggested method, polynomial algorithms for an approximate calculation of the L_p-norm of a matrix and of the minimum of the entropy function on a polytope are obtained. Bibliography: 19 titles.
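The one-variable idea can be sketched in a few lines: replace a convex f by the maximum of a few supporting tangent lines, the piecewise-linear surrogate that a linear program would then minimize. The choice f(x) = x² and the touch points are illustrative, not taken from the paper.

```python
def tangent(f, df, x0):
    # Supporting line of a convex f at x0 (lies below f everywhere).
    return lambda x: f(x0) + df(x0) * (x - x0)

f  = lambda x: x * x
df = lambda x: 2 * x
lines = [tangent(f, df, x0) for x0 in (-1.0, -0.25, 0.25, 1.0)]
surrogate = lambda x: max(L(x) for L in lines)   # piecewise-linear minorant

# The surrogate never exceeds f and agrees with it at the touch points;
# the worst gap on a grid over [-1, 1] bounds the approximation error.
gap = max(f(x / 50) - surrogate(x / 50) for x in range(-50, 51))
```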
An algorithm for linearizing convex extremal problems
NASA Astrophysics Data System (ADS)
Gorskaya, Elena S.
2010-06-01
This paper suggests a method of approximating the solution of minimization problems for convex functions of several variables under convex constraints. The main idea of this approach is the approximation of a convex function by a piecewise linear function, which results in replacing the problem of convex programming by a linear programming problem. To carry out such an approximation, the epigraph of a convex function is approximated by the projection of a polytope of greater dimension. In the first part of the paper, the problem is considered for functions of one variable. In this case, an algorithm for approximating the epigraph of a convex function by a polygon is presented; it is shown that this algorithm is optimal with respect to the number of vertices of the polygon, and exact bounds for this number are obtained. After this, using an induction procedure, the algorithm is generalized to certain classes of functions of several variables. Applying the suggested method, polynomial algorithms for an approximate calculation of the L_p-norm of a matrix and of the minimum of the entropy function on a polytope are obtained. Bibliography: 19 titles.
Progress report on the SLAC Linear Collider
Kozanecki, W.
1987-11-01
In this paper we report on the status of the SLAC Linear Collider (SLC), the prototype of a new generation of colliding beam accelerators. This novel type of machine holds the potential of extending electron-positron colliding beam studies to center-of-mass (c.m.) energies far in excess of what is economically achievable with colliding beam storage rings. If the technical challenges posed by linear colliders are solvable at a reasonable cost, this new approach would provide an attractive alternative to electron-positron rings, where, because of rapidly rising synchrotron radiation losses, the cost and size of the ring increase with the square of the c.m. energy. In addition to its role as a test vehicle for the linear collider principle, the SLC aims at providing an abundant source of Z^0 decays to high energy physics experiments. Accordingly, two major detectors, the upgraded Mark II, now installed on the SLC beam line, and the state-of-the-art SLD, currently under construction, are preparing to probe the Standard Model at the Z^0 pole. The SLC project was originally funded in 1983. Since the completion of construction, we have been commissioning the machine to bring it up to a performance level adequate for starting the high energy physics program. In the remainder of this paper, we will discuss the status, problems and performance of the major subsystems of the SLC. We will conclude with a brief outline of the physics program, and of the planned enhancements to the capabilities of the machine. 26 refs., 7 figs.
Transformation matrices between non-linear and linear differential equations
NASA Technical Reports Server (NTRS)
Sartain, R. L.
1983-01-01
In the linearization of systems of non-linear differential equations, consideration was given to those systems which can be exactly transformed into the second-order linear differential equation Y'' - AY' - BY = 0, where Y, Y', and Y'' are n x 1 vectors and A and B are constant n x n matrices of real numbers. A 2n x 2n matrix was used to transform this matrix equation into the first-order matrix equation X' = MX. Specifically, the matrix M and the conditions which will diagonalize or triangularize M were studied. Transformation matrices P and P^-1 were used to accomplish this diagonalization or triangularization and to return to the solution of the second-order matrix differential equation system from the first-order system.
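The reduction to first order can be checked concretely in the scalar case (n = 1), where M is the familiar companion matrix for X = [Y; Y']; the coefficients below are illustrative.

```python
# y'' = a*y' + b*y  becomes  X' = M X  with X = [y, y'] and
# M = [[0, 1], [b, a]] (the n = 1 case of [[0, I], [B, A]]).
a, b = 1.0, 6.0
M = [[0.0, 1.0],
     [b,   a]]

# If lam solves the characteristic equation lam**2 = a*lam + b,
# then [1, lam] is an eigenvector of M with eigenvalue lam.
lam = 3.0                 # roots of lam**2 - lam - 6 are 3 and -2
v = [1.0, lam]
Mv = [M[0][0] * v[0] + M[0][1] * v[1],
      M[1][0] * v[0] + M[1][1] * v[1]]
```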
A nanoscale linear-to-linear motion converter of graphene.
Dai, Chunchun; Guo, Zhengrong; Zhang, Hongwei; Chang, Tienchong
2016-08-14
Motion conversion plays an irreplaceable role in a variety of machinery. Although many macroscopic motion converters have been widely used, it remains a challenge to convert motion at the nanoscale. Here we propose a nanoscale linear-to-linear motion converter, made of a flake-substrate system of graphene, which can convert the out-of-plane motion of the substrate into the in-plane motion of the flake. The curvature gradient induced van der Waals potential gradient between the flake and the substrate provides the driving force to achieve motion conversion. The proposed motion converter may have general implications for the design of nanomachinery and nanosensors.
Henry, J.J.
1961-09-01
A linear count-rate meter is designed to provide a highly linear output while receiving counting rates from one cycle per second to 100,000 cycles per second. Input pulses enter a linear discriminator and then are fed to a trigger circuit which produces positive pulses of uniform width and amplitude. The trigger circuit is connected to a one-shot multivibrator. The multivibrator output pulses have a selected width. Feedback means are provided for preventing transistor saturation in the multivibrator which improves the rise and decay times of the output pulses. The multivibrator is connected to a diode-switched, constant current metering circuit. A selected constant current is switched to an averaging circuit for each pulse received, and for a time determined by the received pulse width. The average output meter current is proportional to the product of the counting rate, the constant current, and the multivibrator output pulse width.
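The closing proportionality can be checked with a quick calculation; the counting rate, switched current, and pulse width below are illustrative values only.

```python
def average_meter_current(rate_hz, i_const_amps, pulse_width_s):
    # Average output current = counting rate x constant current x pulse width.
    return rate_hz * i_const_amps * pulse_width_s

# 10 kHz counting rate, 1 mA switched current, 5 us pulses -> 50 uA average
i_avg = average_meter_current(10_000, 1e-3, 5e-6)
```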
Belos Block Linear Solvers Package
2004-03-01
Belos is an extensible and interoperable framework for large-scale, iterative methods for solving systems of linear equations with multiple right-hand sides. The motivation for this framework is to provide a generic interface to a collection of algorithms for solving large-scale linear systems. Belos is interoperable because both the matrix and vectors are considered to be opaque objects: only knowledge of the matrix and vectors via elementary operations is necessary. An implementation of Belos is accomplished via the use of interfaces. One of the goals of Belos is to allow the user flexibility in specifying the data representation for the matrix and vectors and so leverage any existing software investment. The algorithms that will be included in the package are Krylov-based linear solvers, like Block GMRES (Generalized Minimal RESidual) and Block CG (Conjugate Gradient).
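The "opaque object" idea can be sketched with a toy Krylov solver that touches the matrix only through a matvec callable, so any data representation works. This is plain conjugate gradients in pure Python, not Belos's block algorithms or its actual C++ interfaces.

```python
def cg(matvec, b, tol=1e-10, maxit=200):
    # Conjugate gradients for SPD systems; the matrix is visible only
    # through the elementary operation matvec(v) -> A @ v.
    n = len(b)
    x = [0.0] * n
    r = b[:]                       # residual b - A*0
    p = r[:]
    rs = sum(ri * ri for ri in r)
    for _ in range(maxit):
        Ap = matvec(p)
        alpha = rs / sum(pi * ai for pi, ai in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * ai for ri, ai in zip(r, Ap)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol * tol:
            break
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x

# SPD matrix [[4, 1], [1, 3]] exposed only as a matvec; two right-hand sides.
A = lambda v: [4 * v[0] + 1 * v[1], 1 * v[0] + 3 * v[1]]
solutions = [cg(A, b) for b in ([1.0, 2.0], [0.0, 1.0])]
```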
Permafrost Hazards and Linear Infrastructure
NASA Astrophysics Data System (ADS)
Stanilovskaya, Julia; Sergeev, Dmitry
2014-05-01
The international experience of planning, constructing, and operating linear infrastructure in the permafrost zone is directly tied to permafrost hazard assessment. That procedure should also consider the factors of climate impact and infrastructure protection. The current hotspots of global climate change are polar and mountain areas. Temperatures rise, precipitation and land-ice conditions change, and early springs occur more often. Large linear infrastructure objects cross territories with different permafrost conditions, which are sensitive to changes in air temperature, hydrology, and snow accumulation connected to climatic dynamics. Among the most extensive linear structures built on permafrost worldwide are the Trans-Alaska Pipeline (USA), the Alaska Highway (Canada), the Qinghai-Xizang Railway (China), and the Eastern Siberia - Pacific Ocean Oil Pipeline (Russia). These are currently being influenced by regional climate change and permafrost impacts, which may act differently from place to place. Thermokarst is deemed the most dangerous process for linear engineering structures. Its formation and development depend on the linear structure type: road or pipeline, elevated or buried. Zonal climate and geocryological conditions are also of determining importance here. The projects are of different ages, and some of them were implemented under different climatic conditions. The effects of permafrost thawing have been recorded every year since then. The exploration and transportation companies of different countries protect their linear infrastructure from permafrost degradation in different ways. The highways in Alaska are in good condition owing to governmental expenses on annual reconstruction. The Chara-China Railroad in Russia is in substandard condition due to intensive permafrost response. Standards for engineering and construction should be reviewed and updated to account for permafrost hazards.
General Purpose Unfolding Program with Linear and Nonlinear Regularizations.
1987-05-07
Version 00 The interpretation of several physical measurements requires the unfolding or deconvolution of the solution of Fredholm integral equations of the first kind. Examples include neutron spectroscopy with activation detectors, moderating spheres, or proton recoil measurements. LOUHI82 is designed to be applicable to a large number of physical problems and to be extended to incorporate other unfolding methods.
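The linear-regularization step common to such unfolding codes can be sketched generically: discretize the first-kind Fredholm equation g = Kf and solve the Tikhonov-damped normal equations (KᵀK + λI)f = Kᵀg. This is not LOUHI82's actual algorithm, and the 2x2 smearing kernel below is invented.

```python
def tikhonov_unfold(K, g, lam):
    # Solve (K^T K + lam*I) f = K^T g directly for the 2x2 case.
    n00 = K[0][0] ** 2 + K[1][0] ** 2 + lam
    n01 = K[0][0] * K[0][1] + K[1][0] * K[1][1]
    n11 = K[0][1] ** 2 + K[1][1] ** 2 + lam
    r0 = K[0][0] * g[0] + K[1][0] * g[1]     # K^T g
    r1 = K[0][1] * g[0] + K[1][1] * g[1]
    det = n00 * n11 - n01 * n01
    return [(n11 * r0 - n01 * r1) / det, (n00 * r1 - n01 * r0) / det]

K = [[1.0, 0.5],
     [0.5, 1.0]]                  # detector response kernel (illustrative)
f_true = [2.0, 1.0]               # "true" spectrum
g = [K[0][0] * f_true[0] + K[0][1] * f_true[1],
     K[1][0] * f_true[0] + K[1][1] * f_true[1]]   # measured readings
f_est = tikhonov_unfold(K, g, lam=1e-8)           # ~ [2.0, 1.0]
```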
Combinatorial therapy discovery using mixed integer linear programming
Pang, Kaifang; Wan, Ying-Wooi; Choi, William T.; Donehower, Lawrence A.; Sun, Jingchun; Pant, Dhruv; Liu, Zhandong
2014-01-01
Motivation: Combinatorial therapies play increasingly important roles in combating complex diseases. Owing to the huge cost associated with experimental methods in identifying optimal drug combinations, computational approaches can provide a guide to limit the search space and reduce cost. However, few computational approaches have been developed for this purpose, and thus there is a great need of new algorithms for drug combination prediction. Results: Here we proposed to formulate the optimal combinatorial therapy problem into two complementary mathematical algorithms, Balanced Target Set Cover (BTSC) and Minimum Off-Target Set Cover (MOTSC). Given a disease gene set, BTSC seeks a balanced solution that maximizes the coverage on the disease genes and minimizes the off-target hits at the same time. MOTSC seeks a full coverage on the disease gene set while minimizing the off-target set. Through simulation, both BTSC and MOTSC demonstrated a much faster running time over exhaustive search with the same accuracy. When applied to real disease gene sets, our algorithms not only identified known drug combinations, but also predicted novel drug combinations that are worth further testing. In addition, we developed a web-based tool to allow users to iteratively search for optimal drug combinations given a user-defined gene set. Availability: Our tool is freely available for noncommercial use at http://www.drug.liuzlab.org/. Contact: zhandong.liu@bcm.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:24463180
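The MOTSC objective can be illustrated with a toy brute-force solver (the kind of exhaustive search the authors compare against): among drug subsets that fully cover the disease genes, pick the one hitting the fewest off-target genes. Drug and gene names are invented.

```python
from itertools import combinations

def motsc(disease, drug_targets):
    # Exhaustive search: require full coverage of the disease gene set,
    # minimize the number of off-target genes hit.
    best = None
    drugs = sorted(drug_targets)
    for k in range(1, len(drugs) + 1):
        for combo in combinations(drugs, k):
            hit = set().union(*(drug_targets[d] for d in combo))
            if disease <= hit:                  # full coverage required
                off = len(hit - disease)        # off-target genes hit
                if best is None or off < best[1]:
                    best = (combo, off)
    return best

disease = {"g1", "g2", "g3"}
drug_targets = {
    "A": {"g1", "g2", "x1", "x2"},
    "B": {"g3", "x3"},
    "C": {"g1", "g2", "g3", "x1", "x2", "x3", "x4"},
}
combo, off_target = motsc(disease, drug_targets)
```

Here drug C alone covers every disease gene but hits four off-target genes, so the search prefers the pair (A, B) with only three.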
Vanilla technicolor at linear colliders
NASA Astrophysics Data System (ADS)
Frandsen, Mads T.; Järvinen, Matti; Sannino, Francesco
2011-08-01
We analyze the reach of linear colliders for models of dynamical electroweak symmetry breaking. We show that linear colliders can efficiently test the compositeness scale, identified with the mass of the new spin-one resonances, up to the maximum energy in the center of mass of the colliding leptons. In particular, we analyze the Drell-Yan processes involving spin-one intermediate heavy bosons decaying either leptonically or into two standard model gauge bosons. We also analyze light Higgs production in association with a standard model gauge boson, likewise stemming from an intermediate spin-one heavy vector.
Characterizations of linear sufficient statistics
NASA Technical Reports Server (NTRS)
Peters, B. C., Jr.; Reoner, R.; Decell, H. P., Jr.
1977-01-01
Conditions are established under which a surjective bounded linear operator T from a Banach space X to a Banach space Y is a sufficient statistic for a dominated family of probability measures defined on the Borel sets of X. These results were applied to characterize linear sufficient statistics for families of the exponential type, including as special cases the Wishart and multivariate normal distributions. The latter result was used to establish precisely which procedures for sampling from a normal population had the property that the sample mean was a sufficient statistic.
Linear Corrugating - Final Technical Report
Lloyd Chapman
2000-05-23
Linear Corrugating is a process for the manufacture of corrugated containers in which the flutes of the corrugated medium are oriented in the Machine Direction (MD) of the several layers of paper used. Conversely, in the conventional corrugating process the flutes are oriented at right angles to the MD, in the Cross Machine Direction (CD). Paper is stronger in MD than in CD. Therefore, boxes made using the Linear Corrugating process are significantly stronger, in the prime strength criterion, the Box Compression Test (BCT), than boxes made conventionally. This means that, using Linear Corrugating, boxes can be manufactured to a BCT equaling that of conventional boxes while containing 30% less fiber. The corrugated container industry is a large part of the U.S. economy, producing over 40 million tons annually. For such a large industry, the potential savings of Linear Corrugating are enormous. The grant for this project covered three phases in the development of the Linear Corrugating process: (1) production and evaluation of corrugated boxes on commercial equipment to verify that boxes so manufactured would have enhanced BCT as proposed in the application; (2) production and evaluation of corrugated boxes made on laboratory equipment using combined board from (1) above but having dual manufacturer's joints (glue joints) - this box manufacturing method (Dual Joint) is proposed to overcome box perimeter limitations of the Linear Corrugating process; (3) design, construction, operation, and evaluation of an engineering prototype machine (the Former) to form flutes in corrugating medium in the MD of the paper - this operation is the central requirement of the Linear Corrugating process. Items (1) and (2) were successfully completed, showing the predicted BCT increases for the Linear Corrugated boxes and significant strength improvement in the Dual Joint boxes. The Former was constructed and operated successfully using kraft linerboard as the forming medium. It was found that tensile strength and stretch
Nonlinear optimization with linear constraints using a projection method
NASA Technical Reports Server (NTRS)
Fox, T.
1982-01-01
Nonlinear optimization problems that are encountered in science and industry are examined. A method of projecting the gradient vector onto a set of linear constraints is developed, and a program that uses this method is presented. The algorithm that generates this projection matrix is based on the Gram-Schmidt method and overcomes some of the objections to the Rosen projection method.
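The projection step can be sketched as follows: orthonormalize the constraint normals by classical Gram-Schmidt, then subtract from the gradient its components along them, leaving a search direction that keeps the linear constraints satisfied. This is a generic sketch, not the program described in the abstract.

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def project_gradient(grad, normals):
    # Gram-Schmidt orthonormalization of the constraint normals.
    basis = []
    for a in normals:
        w = a[:]
        for q in basis:
            c = dot(w, q)
            w = [wi - c * qi for wi, qi in zip(w, q)]
        norm = dot(w, w) ** 0.5
        if norm > 1e-12:                 # skip linearly dependent normals
            basis.append([wi / norm for wi in w])
    # Remove the gradient's components along the normals.
    p = grad[:]
    for q in basis:
        c = dot(p, q)
        p = [pi - c * qi for pi, qi in zip(p, q)]
    return p

# One constraint x + y + z = const, with normal (1, 1, 1):
p = project_gradient([1.0, 2.0, 3.0], [[1.0, 1.0, 1.0]])
```

The projected direction p is orthogonal to every constraint normal, so a step along it stays on the constraint surface.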
Simulated Analysis of Linear Reversible Enzyme Inhibition with SCILAB
ERIC Educational Resources Information Center
Antuch, Manuel; Ramos, Yaquelin; Álvarez, Rubén
2014-01-01
SCILAB is a lesser-known program (than MATLAB) for numeric simulations and has the advantage of being free software. A challenging software-based activity to analyze the most common linear reversible inhibition types with SCILAB is described. Students establish typical values for the concentration of enzyme, substrate, and inhibitor to simulate…
Using Cognitive Tutor Software in Learning Linear Algebra Word Concept
ERIC Educational Resources Information Center
Yang, Kai-Ju
2015-01-01
This paper reports on a study of twelve 10th grade students using Cognitive Tutor, a math software program, to learn linear algebra word concept. The study's purpose was to examine whether students' mathematics performance as it is related to using Cognitive Tutor provided evidence to support Koedinger's (2002) four instructional principles used…
Efficient solution procedures for systems with local non-linearities
NASA Astrophysics Data System (ADS)
Ibrahimbegovic, Adnan; Wilson, Edward L.
1992-06-01
This paper presents several methods for enhancing computational efficiency in both static and dynamic analysis of structural systems with localized nonlinear behavior. A significant reduction of computational effort with respect to brute-force nonlinear analysis is achieved in all cases at insignificant (or no) loss of accuracy. The presented methodologies are easily incorporated into a standard computer program for linear analysis.
Spiral: Automated Computing for Linear Transforms
NASA Astrophysics Data System (ADS)
Püschel, Markus
2010-09-01
Writing fast software has become extraordinarily difficult. For optimal performance, programs and their underlying algorithms have to be adapted to take full advantage of the platform's parallelism, memory hierarchy, and available instruction set. To make things worse, the best implementations are often platform-dependent and platforms are constantly evolving, which quickly renders libraries obsolete. We present Spiral, a domain-specific program generation system for important functionality used in signal processing and communication including linear transforms, filters, and other functions. Spiral completely replaces the human programmer. For a desired function, Spiral generates alternative algorithms, optimizes them, compiles them into programs, and intelligently searches for the best match to the computing platform. The main idea behind Spiral is a mathematical, declarative, domain-specific framework to represent algorithms and the use of rewriting systems to generate and optimize algorithms at a high level of abstraction. Experimental results show that the code generated by Spiral competes with, and sometimes outperforms, the best available human-written code.
New directions in linear accelerators
Jameson, R.A.
1984-01-01
Current work on linear particle accelerators is placed in historical and physics contexts, and applications driving the state of the art are discussed. Future needs and the ways they may force development are outlined in terms of exciting R and D challenges presented to today's accelerator designers. 23 references, 7 figures.
Linear electric field mass spectrometry
McComas, D.J.; Nordholt, J.E.
1991-03-29
A mass spectrometer is described having low weight and a low power requirement, for use in space. It can be used to analyze the ionized particles in the region of the spacecraft on which it is mounted. High mass resolution measurements are made by timing ions moving through a gridless, cylindrically symmetric, linear electric field.
Linear electric field mass spectrometry
McComas, D.J.; Nordholt, J.E.
1992-12-01
A mass spectrometer and methods for mass spectrometry are described. The apparatus is compact and of low weight and has a low power requirement, making it suitable for use on a space satellite and as a portable detector for the presence of substances. High mass resolution measurements are made by timing ions moving through a gridless cylindrically symmetric linear electric field. 8 figs.
Linear electric field mass spectrometry
McComas, David J.; Nordholt, Jane E.
1992-01-01
A mass spectrometer and methods for mass spectrometry. The apparatus is compact and of low weight and has a low power requirement, making it suitable for use on a space satellite and as a portable detector for the presence of substances. High mass resolution measurements are made by timing ions moving through a gridless cylindrically symmetric linear electric field.
Linearization of Conservative Nonlinear Oscillators
ERIC Educational Resources Information Center
Belendez, A.; Alvarez, M. L.; Fernandez, E.; Pascual, I.
2009-01-01
A linearization method of the nonlinear differential equation for conservative nonlinear oscillators is analysed and discussed. This scheme is based on the Chebyshev series expansion of the restoring force which allows us to obtain a frequency-amplitude relation which is valid not only for small but also for large amplitudes and, sometimes, for…
Feedback Systems for Linear Colliders
1999-04-12
Feedback systems are essential for stable operation of a linear collider, providing a cost-effective method for relaxing tight tolerances. In the Stanford Linear Collider (SLC), feedback controls beam parameters such as trajectory, energy, and intensity throughout the accelerator. A novel dithering optimization system which adjusts final focus parameters to maximize luminosity contributed to achieving record performance in the 1997-98 run. Performance limitations of the steering feedback have been investigated, and improvements have been made. For the Next Linear Collider (NLC), extensive feedback systems are planned as an integral part of the design. Feedback requirements for JLC (the Japanese Linear Collider) are essentially identical to NLC; some of the TESLA requirements are similar but there are significant differences. For NLC, algorithms which incorporate improvements upon the SLC implementation are being prototyped. Specialized systems for the damping rings, rf and interaction point will operate at high bandwidth and fast response. To correct for the motion of individual bunches within a train, both feedforward and feedback systems are planned. SLC experience has shown that feedback systems are an invaluable operational tool for decoupling systems, allowing precision tuning, and providing pulse-to-pulse diagnostics. Feedback systems for the NLC will incorporate the key SLC features and the benefits of advancing technologies.
Parameterized Linear Longitudinal Airship Model
NASA Technical Reports Server (NTRS)
Kulczycki, Eric; Elfes, Alberto; Bayard, David; Quadrelli, Marco; Johnson, Joseph
2010-01-01
A parameterized linear mathematical model of the longitudinal dynamics of an airship is undergoing development. This model is intended to be used in designing control systems for future airships that would operate in the atmospheres of Earth and remote planets. Heretofore, the development of linearized models of the longitudinal dynamics of airships has been costly in that it has been necessary to perform extensive flight testing and to use system-identification techniques to construct models that fit the flight-test data. The present model is a generic one that can be relatively easily specialized to approximate the dynamics of specific airships at specific operating points, without need for further system identification, and with significantly less flight testing. The approach taken in the present development is to merge the linearized dynamical equations of an airship with techniques for estimation of aircraft stability derivatives, and to thereby make it possible to construct a linearized dynamical model of the longitudinal dynamics of a specific airship from geometric and aerodynamic data pertaining to that airship. (It is also planned to develop a model of the lateral dynamics by use of the same methods.) All of the aerodynamic data needed to construct the model of a specific airship can be obtained from wind-tunnel testing and computational fluid dynamics.
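The intended form of such a model can be sketched as a standard longitudinal state-space system x' = Ax with state [u, w, q, θ] (axial velocity, vertical velocity, pitch rate, pitch angle). The stability-derivative values below are placeholders for illustration, not data for any real airship.

```python
# Hypothetical stability derivatives (placeholders, level-flight trim):
X_u, X_w = -0.02, 0.01
Z_u, Z_w = -0.10, -0.30
M_u, M_w, M_q = 0.001, -0.005, -0.08
g = 9.81                        # gravity, m/s^2

# Linearized longitudinal dynamics x' = A x, x = [u, w, q, theta]:
A = [[X_u, X_w, 0.0, -g],       # u'     (gravity tilts in via theta)
     [Z_u, Z_w, 0.0, 0.0],      # w'
     [M_u, M_w, M_q, 0.0],      # q'     (pitching moment equation)
     [0.0, 0.0, 1.0, 0.0]]      # theta' = q
```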
Linear or Exponential Number Lines
ERIC Educational Resources Information Center
Stafford, Pat
2011-01-01
Having decided to spend some time looking at one's understanding of numbers, the author was inspired by "Alex's Adventures in Numberland," by Alex Bellos to look at one's innate appreciation of number. Bellos quotes research studies suggesting that an individual's natural appreciation of numbers is more likely to be exponential rather than linear,…
NASA Astrophysics Data System (ADS)
Rincon, F.; Schekochihin, A. A.; Cowley, S. C.
2015-02-01
Slow dynamical changes in magnetic-field strength and invariance of the particles' magnetic moments generate ubiquitous pressure anisotropies in weakly collisional, magnetized astrophysical plasmas. This renders them unstable to fast, small-scale mirror and firehose instabilities, which are capable of exerting feedback on the macroscale dynamics of the system. By way of a new asymptotic theory of the early non-linear evolution of the mirror instability in a plasma subject to slow shearing or compression, we show that the instability does not saturate quasi-linearly at a steady, low-amplitude level. Instead, the trapping of particles in small-scale mirrors leads to non-linear secular growth of magnetic perturbations, δB/B ∝ t^{2/3}. Our theory explains recent collisionless simulation results, provides a prediction of the mirror evolution in weakly collisional plasmas and establishes a foundation for a theory of non-linear mirror dynamics with trapping, valid up to δB/B = O(1).
Phycotoxicity of linear alkylbenzene sulfonate
Chawla, G.; Viswanathan, P.N.; Devi, S.
1988-04-01
Dose- and time-dependent effects of linear alkylbenzene sulfonate, a major component of synthetic detergents, on the blue-green alga Nostoc muscorum were studied under laboratory conditions. Toxicity was evident, at doses above 0.001%, from the decrease in biomass, heterocyst number, and protein content, and from pathomorphological alterations.
Comparison of Tracking Codes for the International Linear Collider
Latina, A.; Schulte, D.; Smith, J.C.; Poirier, F.; Walker, N.J.; Lebrun, P.; Ranjan, K.; Kubo, K.; Tenenbaum, Peter Gregory; Eliasson, P.; /Uppsala U.
2008-01-23
In an effort to compare beam dynamics and create a "benchmark" for Dispersion Free Steering (DFS), a comparison was made between different International Linear Collider (ILC) simulation programs while performing DFS. This study consisted of three parts. Firstly, a simple betatron oscillation was tracked through each code. Secondly, a set of component misalignments and corrector settings generated from one program was read into the others to confirm similar emittance dilution. Thirdly, given the same set of component misalignments, DFS was performed independently in each program and the resulting emittance dilution was compared. Performance was found to agree exceptionally well in all three studies.
High average power linear induction accelerator development
Bayless, J.R.; Adler, R.J.
1987-07-01
There is increasing interest in linear induction accelerators (LIAs) for applications including free electron lasers, high power microwave generators and other types of radiation sources. Lawrence Livermore National Laboratory has developed LIA technology in combination with magnetic pulse compression techniques to achieve very impressive performance levels. In this paper we briefly discuss the LIA concept and describe our development program. Our goals are to improve the reliability and reduce the cost of LIA systems. An accelerator is presently under construction to demonstrate these improvements at an energy of 1.6 MeV with 2-kA, 65-ns beam pulses at an average beam power of approximately 30 kW. The unique features of this system are a low cost accelerator design and an SCR-switched, magnetically compressed, pulse power system. 4 refs., 7 figs.
Vanguard industrial linear accelerator rapid product development
NASA Astrophysics Data System (ADS)
Harroun, Jim
1994-07-01
Siemens' ability to take the Vanguard(TM) Industrial Linear Accelerator from the development stage to the marketplace in less than two years is described. Emphasis is on the development process, from the business plan through the shipment of the first commercial sale. Included are discussions on the evolution of the marketing specifications, with emphasis on imaging system requirements, as well as flexibility for expansion into other markets. Requirements used to create the engineering specifications, how they were incorporated into the design, and lessons learned from the demonstration system are covered. Some real-life examples of unanticipated problems are presented, as well as how they were resolved, including some discussion of the special problems encountered in developing a user interface and a training program for an international customer.
Distributed control using linear momentum exchange devices
NASA Technical Reports Server (NTRS)
Sharkey, J. P.; Waites, Henry; Doane, G. B., III
1987-01-01
MSFC has successfully employed the Vibrational Control of Space Structures (VCOSS) Linear Momentum Exchange Devices (LMEDs), an outgrowth of the Air Force Wright Aeronautical Laboratory (AFWAL) program, in a distributed control experiment. The control experiment was conducted in MSFC's Ground Facility for Large Space Structures Control Verification (GF/LSSCV). The GF/LSSCV's test article was well suited for this experiment in that the LMEDs could be judiciously placed on the ASTROMAST. The LMED placements were such that vibrational mode information could be extracted from the accelerometers on the LMEDs. The LMED accelerometer information was processed by the control algorithms so that the LMED masses could be accelerated to produce forces which would damp the vibrational modes of interest. Experimental results are presented showing the LMEDs' capabilities.
Slope Estimation in Noisy Piecewise Linear Functions
Ingle, Atul; Bucklew, James; Sethares, William; Varghese, Tomy
2014-01-01
This paper discusses the development of a slope estimation algorithm called MAPSlope for piecewise linear data that is corrupted by Gaussian noise. The number and locations of slope change points (also known as breakpoints) are assumed to be unknown a priori, though it is assumed that the possible range of slope values lies within known bounds. A stochastic hidden Markov model that is general enough to encompass real-world sources of piecewise linear data is used to model the transitions between slope values, and the problem of slope estimation is addressed using a Bayesian maximum a posteriori approach. The set of possible slope values is discretized, enabling the design of a dynamic programming algorithm for posterior density maximization. Numerical simulations are used to justify the choice of a reasonable number of quantization levels and to analyze the mean squared error performance of the proposed algorithm. An alternating maximization algorithm is proposed for estimation of unknown model parameters, and a convergence result for the method is provided. Finally, results using data from political science, finance and medical imaging applications are presented to demonstrate the practical utility of this procedure. PMID:25419020
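The dynamic-programming step described above can be sketched as follows. This is a generic Viterbi-style maximizer over quantized slope levels, not the authors' MAPSlope implementation; the Gaussian noise level and the fixed slope-switching penalty are illustrative assumptions standing in for the paper's hidden-Markov-model parameters.

```python
import numpy as np

def map_slopes(y, slopes, sigma=0.1, switch_penalty=5.0):
    """Viterbi-style dynamic program: assign one quantized slope to each
    unit step of y, trading Gaussian data fit against a fixed penalty
    for every slope change (breakpoint)."""
    dy = np.diff(y)                       # observed per-step increments
    k = len(slopes)
    # negative log-likelihood of each candidate slope at each step
    cost = 0.5 * ((dy[:, None] - slopes[None, :]) / sigma) ** 2
    dp = cost[0].copy()
    back = np.zeros((len(dy), k), dtype=int)
    for t in range(1, len(dy)):
        # stay on the same slope for free, or pay a penalty to switch
        total = dp[:, None] + switch_penalty * (1.0 - np.eye(k))
        back[t] = np.argmin(total, axis=0)
        dp = total[back[t], np.arange(k)] + cost[t]
    # backtrack the maximum a posteriori slope sequence
    path = np.empty(len(dy), dtype=int)
    path[-1] = int(np.argmin(dp))
    for t in range(len(dy) - 1, 0, -1):
        path[t - 1] = back[t, path[t]]
    return slopes[path]
```

On a clean ramp-up/ramp-down signal this recovers the two slope segments and the single breakpoint exactly.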
Frequency scaling of linear super-colliders
Mondelli, A.; Chernin, D.; Drobot, A.; Reiser, M.; Granatstein, V.
1986-06-01
The development of electron-positron linear colliders in the TeV energy range will be facilitated by the development of high-power rf sources at frequencies above 2856 MHz. Present S-band technology, represented by the SLC, would require a length in excess of 50 km per linac to accelerate particles to energies above 1 TeV. By raising the rf driving frequency, the rf breakdown limit is increased, thereby allowing the length of the accelerators to be reduced. Currently available rf power sources set the realizable gradient limit in an rf linac at frequencies above S-band. This paper presents a model for the frequency scaling of linear colliders, with luminosity scaled in proportion to the square of the center-of-mass energy. Since wakefield effects are the dominant deleterious effect, a separate single-bunch simulation model is described which calculates the evolution of the beam bunch with specified wakefields, including the effects of using programmed phase positioning and Landau damping. The results presented here have been obtained for a SLAC structure, scaled in proportion to wavelength.
NASA Technical Reports Server (NTRS)
Vranish, John
2009-01-01
T-slide linear actuators use gear bearing differential epicyclical transmissions (GBDETs) to directly drive a linear rack, which, in turn, performs the actuation. Conventional systems use a rotary power source in conjunction with a nut and screw to provide linear motion. Non-back-drive properties of GBDETs make the new actuator more direct and simpler. Versions of this approach will serve as a long-stroke, ultra-precision position actuator for NASA science instruments, and as a rugged linear actuator for NASA deployment duties. The T slide can operate effectively in the presence of side forces and torques. Versions of the actuator can perform ultra-precision positioning. A basic T-slide actuator is a long-stroke, rack-and-pinion linear actuator that typically consists of a T-slide, several idlers, a transmission to drive the slide (powered by an electric motor) and a housing that holds the entire assembly. The actuator is driven by gear action on its top surface, and is guided and constrained by gear-bearing idlers on its other two parallel surfaces. The geometry, implemented with gear-bearing technology, is particularly effective. An electric motor operating through a GBDET can directly drive the T slide against large loads, as a rack-and-pinion linear actuator, with no brake and no danger of back driving. The actuator drives the slide into position and stops. The slide holds position with power off and no brake, regardless of load. With the T-slide configuration, this GBDET has an entire T-gear surface on which to operate. The GB idlers coupling the other two T-slide parallel surfaces to their housing counterpart surfaces provide constraints in five degrees of freedom and rolling friction in the direction of actuation. Multiple GB idlers provide roller-bearing strength sufficient to support efficient, rolling-friction movement, even in the presence of large resisting forces. T-slide actuators can be controlled using the combination of an off
Intercultural Programs Program Evaluation.
ERIC Educational Resources Information Center
Jones, Mary Lynne
The report evaluates the programs of the Des Moines (Iowa) Public Schools' Office of Intercultural Programs' services. The programs are designed to provide educational equity and serve as a resource for students, parents, community, and staff in a variety of areas, including: a voluntary transfer program; paired and magnet schools; extended day…
Superstructure of linear duplex DNA.
Vollenweider, H J; Koller, T; Parello, J; Sogo, J M
1976-01-01
The superstructure of a covalently closed circular DNA (of bacteriophage PM 2) was compared by electron microscopy with that of a linear duplex DNA (of bacteriophage T7) as ionic strength and benzyldimethylalkylammonium chloride concentration were varied. In parallel studies the sedimentation behavior of these DNAs was studied by analytical ultracentrifugation, although for technical reasons these experiments had to be done without benzyldimethylalkylammonium chloride. Combining the information from the two methods, one has to conclude that with increasing ionic strength the linear duplex T7 DNA spontaneously forms a structure similar to the superhelical structure of closed circular PM 2 DNA. The superstructure is destroyed under premelting conditions and in the presence of an excess of ethidium bromide. PMID:1069302
Linear regression in astronomy. II
NASA Technical Reports Server (NTRS)
Feigelson, Eric D.; Babu, Gutti J.
1992-01-01
A wide variety of least-squares linear regression procedures used in observational astronomy, particularly investigations of the cosmic distance scale, are presented and discussed. The classes of linear models considered are (1) unweighted regression lines, with bootstrap and jackknife resampling; (2) regression solutions when measurement error, in one or both variables, dominates the scatter; (3) methods to apply a calibration line to new data; (4) truncated regression models, which apply to flux-limited data sets; and (5) censored regression models, which apply when nondetections are present. For the calibration problem we develop two new procedures: a formula for the intercept offset between two parallel data sets, which propagates slope errors from one regression to the other; and a generalization of the Working-Hotelling confidence bands to nonstandard least-squares lines. They can provide improved error analysis for Faber-Jackson, Tully-Fisher, and similar cosmic distance scale relations.
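Class (1) above, an unweighted regression line with bootstrap resampling, can be sketched as follows. The function and parameter names are illustrative, not from the paper: pairs (x, y) are resampled with replacement and the line refit on each replicate, the spread of the refit slopes giving a distribution-free standard error.

```python
import numpy as np

def bootstrap_slope(x, y, n_boot=2000, rng=None):
    """Unweighted least-squares slope with a bootstrap standard error:
    resample (x, y) pairs with replacement and refit each replicate."""
    rng = np.random.default_rng(rng)
    slope = np.polyfit(x, y, 1)[0]        # ordinary least-squares fit
    n = len(x)
    reps = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, n)       # resample indices, with replacement
        reps[b] = np.polyfit(x[idx], y[idx], 1)[0]
    return slope, reps.std(ddof=1)
```

The jackknife variant mentioned in the abstract differs only in leaving out one point per replicate instead of resampling.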
Spatial processes in linear ordering.
von Hecker, Ulrich; Klauer, Karl Christoph; Wolf, Lukas; Fazilat-Pour, Masoud
2016-07-01
Memory performance in linear order reasoning tasks (A > B, B > C, C > D, etc.) shows quicker and more accurate responses to queries on wider (AD) than narrower (AB) pairs on a hypothetical linear mental model (A - B - C - D). While indicative of an analogue representation, research so far has not provided positive evidence for spatial processes in the construction of such models. In a series of 7 experiments we report such evidence. Participants respond quicker when the dominant element in a pair is presented on the left (or top) rather than on the right (or bottom). The left-anchoring tendency reverses in a sample with a Farsi background (reading/writing from right to left). Alternative explanations and confounds are tested. A theoretical model is proposed that integrates basic assumptions about acquired reading/writing habits as a scaffold for spatial simulation, and primacy/dominance representation within such spatial simulations. (PsycINFO Database Record) PMID:26641448
Linear inflation from quartic potential
NASA Astrophysics Data System (ADS)
Kannike, Kristjan; Racioppi, Antonio; Raidal, Martti
2016-01-01
We show that if the inflaton has a non-minimal coupling to gravity and the Planck scale is dynamically generated, the results of Coleman-Weinberg inflation are confined in between two attractor solutions: quadratic inflation, which is ruled out by the recent measurements, and linear inflation which, instead, is in the experimental allowed region. The minimal scenario has only one free parameter — the inflaton's non-minimal coupling to gravity — that determines all physical parameters such as the tensor-to-scalar ratio and the reheating temperature of the Universe. Should the more precise future measurements of inflationary parameters point towards linear inflation, further interest in scale-invariant scenarios would be motivated.
Positive fractional linear electrical circuits
NASA Astrophysics Data System (ADS)
Kaczorek, Tadeusz
2013-10-01
Positive fractional linear systems and electrical circuits are addressed. New classes of fractional asymptotically stable and unstable electrical circuits are introduced. The Caputo and Riemann-Liouville definitions of fractional derivatives are used in the analysis of positive electrical circuits composed of resistors, capacitors, coils, and voltage (current) sources. Positive fractional electrical circuits, and especially various types of unstable circuits, are analyzed. Some open problems are formulated.
Segmented rail linear induction motor
Cowan, Jr., Maynard; Marder, Barry M.
1996-01-01
A segmented rail linear induction motor has a segmented rail consisting of a plurality of nonferrous electrically conductive segments aligned along a guideway. The motor further includes a carriage including at least one pair of opposed coils fastened to the carriage for moving the carriage. A power source applies an electric current to the coils to induce currents in the conductive surfaces to repel the coils from adjacent edges of the conductive surfaces.
Logistic systems with linear feedback
NASA Astrophysics Data System (ADS)
Son, Leonid; Shulgin, Dmitry; Ogluzdina, Olga
2016-08-01
A wide variety of systems may be described by a specific dependence, known as the logistic curve or S-curve, between an internal characteristic and an external parameter. Linear feedback between these two quantities may also be suggested for a wide set of systems. In the present paper, we suggest a bifurcation behavior for systems with both features, and discuss it for two cases: the Ising magnet in an external field and the development of a manufacturing enterprise.
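For the Ising-in-a-field case, the suggested bifurcation can be sketched numerically; tanh is used here as the S-curve, an assumption for illustration. Self-consistent states m = S(h + k·m) under linear feedback of gain k are counted by locating sign changes: for tanh at h = 0, one state below the threshold k = 1 splits into three above it.

```python
import numpy as np

def n_states(k, h=0.0, grid=20000):
    """Count self-consistent states m = S(h + k*m) of a logistic
    (here tanh) response with linear feedback gain k, by locating
    sign changes of f(m) = tanh(h + k*m) - m on a fine grid."""
    m = np.linspace(-1.5, 1.5, grid)   # even point count avoids m = 0 exactly
    f = np.tanh(h + k * m) - m
    return int(np.sum(np.sign(f[:-1]) != np.sign(f[1:])))
```

This is the same self-consistency structure as the mean-field magnetization equation m = tanh(β(h + Jm)).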
Precision linear ramp function generator
Jatko, W. Bruce; McNeilly, David R.; Thacker, Louis H.
1986-01-01
A ramp function generator is provided which produces a precise linear ramp unction which is repeatable and highly stable. A derivative feedback loop is used to stabilize the output of an integrator in the forward loop and control the ramp rate. The ramp may be started from a selected baseline voltage level and the desired ramp rate is selected by applying an appropriate constant voltage to the input of the integrator.
Cast dielectric composite linear accelerator
Sanders, David M.; Sampayan, Stephen; Slenes, Kirk; Stoller, H. M.
2009-11-10
A linear accelerator having cast dielectric composite layers integrally formed with conductor electrodes in a solventless fabrication process, with the cast dielectric composite preferably having a nanoparticle filler in an organic polymer such as a thermosetting resin. By incorporating this cast dielectric composite the dielectric constant of critical insulating layers of the transmission lines of the accelerator are increased while simultaneously maintaining high dielectric strengths for the accelerator.
Wilson, P.B.
1987-05-01
The next generation of linear collider after the SLC (Stanford Linear Collider) will probably have an energy in the range 300 GeV-1 TeV per linac. A number of exotic accelerating schemes, such as laser and plasma acceleration, have been proposed for linear colliders of the far future. However, the technology which is most mature and which could lead to a collider in the above energy range in the relatively near future is the rf-driven linac, in which externally produced rf is fed into a more or less conventional metallic accelerating structure. Two basic technologies have been proposed for producing the required high peak rf power: discrete microwave power sources, and various two-beam acceleration schemes in which the rf is produced by a high current driving beam running parallel to the main accelerator. The current status of experimental and analytic work on both the discrete source and the two-beam methods for producing rf is discussed. The implications of beam-beam related effects (luminosity, disruption and beamstrahlung) for the design of rf-driven colliders are also considered.
NASA Astrophysics Data System (ADS)
Ranjan, Kirti; Solyak, Nikolay; Tenenbaum, Peter
2005-04-01
Recently the particle physics community chose a single technology for the new accelerator, opening the way for the world community to unite and concentrate resources on the design of an International Linear Collider (ILC) using superconducting technology. One of the key operational issues in the design of the ILC will be the preservation of the small beam emittances during passage through the main linear accelerator (linac). Sources of emittance dilution include incoherent misalignments of the quadrupole magnets and rf-structure misalignments. In this work, the study of emittance dilution for the 500-GeV center-of-mass-energy main linac of the superconducting linear accelerator design, based on an adaptation of the TESLA TDR design, is performed using the LIAR simulation program. Based on the tolerances of the present design, the effects of two important beam-based steering algorithms, Flat Steering and Dispersion Free Steering, are compared with respect to emittance dilution in the main linac. We also investigated the effect of various misalignments on the emittance dilution for these two steering algorithms.
Linear time near-optimal planning in the blocks world
Slaney, J.; Thiebaux, S.
1996-12-31
This paper reports an analysis of near-optimal Blocks World planning. Various methods are clarified, and their time complexity is shown to be linear in the number of blocks, which improves their known complexity bounds. The speed of the implemented programs (ten thousand blocks are handled in a second) enables us to make empirical observations on large problems. These suggest that the above methods have very close average performance ratios, and yield a rough upper bound on those ratios well below the worst case of 2. Further, they lead to the conjecture that in the limit the simplest linear time algorithm could be just as good on average as the optimal one.
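The simplest of the near-optimal methods analyzed (often called "unstack-stack": put every misplaced block on the table, then build the goal towers bottom-up, using at most twice the optimal number of moves) can be sketched as follows. This sketch uses naive quadratic scans for brevity; the point of the paper is that with careful bookkeeping the same strategy runs in time linear in the number of blocks.

```python
def us_plan(on, goal):
    """Near-optimal Blocks World planning ("unstack-stack").  Both states
    map block -> support ('table' or another block).  Returns a list of
    (block, destination) moves transforming `on` into `goal`."""
    on = dict(on)
    moves = []

    def in_place(b):
        # b is finally placed iff its whole supporting chain matches the goal
        return on[b] == goal[b] and (on[b] == 'table' or in_place(on[b]))

    def clear(b):
        return all(s != b for s in on.values())

    # Phase 1: move every misplaced clear block to the table, repeatedly.
    changed = True
    while changed:
        changed = False
        for b in list(on):
            if on[b] != 'table' and clear(b) and not in_place(b):
                on[b] = 'table'
                moves.append((b, 'table'))
                changed = True

    # Phase 2: build the goal towers bottom-up.
    changed = True
    while changed:
        changed = False
        for b in list(on):
            if (not in_place(b) and clear(b)
                    and (goal[b] == 'table'
                         or (in_place(goal[b]) and clear(goal[b])))):
                on[b] = goal[b]
                moves.append((b, goal[b]))
                changed = True
    return moves
```

Each block is moved at most twice (once to the table, once into place), which is the source of the worst-case performance ratio of 2 mentioned above.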
Fractional non-linear modelling of ultracapacitors
NASA Astrophysics Data System (ADS)
Bertrand, Nicolas; Sabatier, Jocelyn; Briat, Olivier; Vinassa, Jean-Michel
2010-05-01
In this paper, it is demonstrated that an ultracapacitor exhibits a non-linear behaviour in relation to the operating voltage. A set of fractional order linear systems resulting from a frequency analysis of the ultracapacitor at various operating points is first obtained. Then, a non-linear model is deduced from the linear system set, so that its Taylor linearization around the operating points considered in the frequency analysis reproduces the linear system set. The resulting non-linear model is validated on a Hybrid Electric Vehicle (HEV) application.
ELDIN NAFEE, SHERIF SALAH
2013-07-24
Version 00. Calculation of the decay heat is of great importance for the design of shielding for discharged fuel, the design and transport of fuel-storage flasks, and the management of the resulting radioactive waste. These calculations are relevant to safety and have large economic and legislative consequences. In the HEATKAU code, a new approach has been proposed to evaluate the decay heat power after a fission burst of a fissile nuclide for short cooling times. The method is based on the numerical solution of the coupled linear differential equations that describe the decay and build-up of the minor fission product (MFP) nuclides. HEATKAU is written entirely in the MATLAB programming environment. The MATLAB data can be stored in a standard, fast, easy-access, platform-independent binary format which is easy to visualize.
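The numerical core described, coupled linear ODEs for decay and build-up whose solution is summed into a decay power, can be sketched for a hypothetical two-member chain. HEATKAU itself is MATLAB; this Python sketch and its decay constants and energies are invented for illustration, not taken from the code.

```python
import numpy as np

# Hypothetical two-member chain: parent (lam[0]) -> daughter (lam[1]) -> stable.
lam = np.array([np.log(2) / 30.0, np.log(2) / 5.0])   # decay constants, 1/s
energy = np.array([1.0, 0.5])                          # MeV released per decay
A = np.array([[-lam[0], 0.0],
              [ lam[0], -lam[1]]])   # coupled linear ODEs: dN/dt = A @ N

def decay_heat(n0, t):
    """Solve dN/dt = A N by eigendecomposition and return the decay
    power P(t) = sum_i lambda_i * E_i * N_i(t) (MeV/s)."""
    w, v = np.linalg.eig(A)
    c = np.linalg.solve(v, n0)            # expansion of n0 in eigenvectors
    n_t = (v * np.exp(w * t)) @ c         # N(t) = V diag(e^{wt}) V^{-1} n0
    return float(np.sum(lam * energy * n_t.real))
```

For a pure parent sample this reproduces the classical two-member Bateman solution to machine precision.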
Nonferromagnetic linear variable differential transformer
Ellis, James F.; Walstrom, Peter L.
1977-06-14
A nonferromagnetic linear variable differential transformer for accurately measuring mechanical displacements in the presence of high magnetic fields is provided. The device utilizes a movable primary coil inside a fixed secondary coil that consists of two series-opposed windings. Operation is such that the secondary output voltage is maintained in phase (depending on polarity) with the primary voltage. The transducer is well-suited to long cable runs and is useful for measuring small displacements in the presence of high or alternating magnetic fields.
Linear readout of object manifolds
NASA Astrophysics Data System (ADS)
Chung, SueYeon; Lee, Daniel D.; Sompolinsky, Haim
2016-06-01
Objects are represented in sensory systems by continuous manifolds due to sensitivity of neuronal responses to changes in physical features such as location, orientation, and intensity. What makes certain sensory representations better suited for invariant decoding of objects by downstream networks? We present a theory that characterizes the ability of a linear readout network, the perceptron, to classify objects from variable neural responses. We show how the readout perceptron capacity depends on the dimensionality, size, and shape of the object manifolds in its input neural representation.
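A minimal sketch of the linear-readout idea: sample points from two toy "object manifolds" (here Gaussian jitter around two prototypes, an assumption for illustration, not the paper's manifold model) and check that a plain perceptron finds a hyperplane classifying every sample by object identity.

```python
import numpy as np

def perceptron(X, y, epochs=100):
    """Plain perceptron learning rule: find w with sign(X @ w) == y
    when the two labelled point sets are linearly separable."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        errors = 0
        for xi, yi in zip(X, y):
            if yi * (xi @ w) <= 0:       # misclassified: nudge w toward xi
                w += yi * xi
                errors += 1
        if errors == 0:                  # converged: all samples separated
            break
    return w

# Two toy "object manifolds": variable responses to each object,
# modelled as jitter around a prototype, labelled +1 / -1.
rng = np.random.default_rng(0)
A = rng.normal([3.0, 3.0], 0.5, (50, 2))
B = rng.normal([-3.0, -3.0], 0.5, (50, 2))
X = np.vstack([A, B])
y = np.concatenate([np.ones(50), -np.ones(50)])
w = perceptron(X, y)
```

The paper's question is how the capacity of such a readout scales with the dimensionality, size, and shape of the manifolds; this sketch only shows the separable base case.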
A linear Fick's law calorimeter
NASA Astrophysics Data System (ADS)
Alpert, Seymour S.; Bryant, Pat D.; Woodside, William F.
1982-10-01
A small animal calorimeter is described that is based on the direct application of Fick's law. Heat flow is channeled through a circular disk of magnesium and the temperature difference between the inside and outside surface of the disk is detected by means of solid-state temperature transducers. The device is calibrated using a light-weight electrical resistive source and is shown to be linear in its response and to have an e-folding time of 4.8 min. A rat was introduced into the calorimeter and its heat energy expenditure rate was observed in both the sedated and unsedated states.
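The measurement principle is one-dimensional steady conduction: the power crossing the disk is proportional to the measured temperature difference. A sketch with illustrative numbers follows; the disk geometry and calibration constants are assumptions, not the paper's values.

```python
# Steady conduction through the disk ("Fick's law" form for heat):
# P = k * A * dT / d.  All numbers below are illustrative.
K_MG = 156.0      # W/(m*K), thermal conductivity of magnesium
AREA = 5.0e-3     # m^2, disk area
THICK = 2.0e-3    # m, disk thickness

def power_from_dT(dT):
    """Heat flow (W) inferred from the inside-outside temperature drop (K)."""
    return K_MG * AREA * dT / THICK
```

In practice the proportionality constant is fixed by the electrical calibration described in the abstract rather than from material data, which also absorbs any departure from ideal one-dimensional flow.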
A terabyte linear tape recorder
NASA Technical Reports Server (NTRS)
Webber, John C.
1994-01-01
A plan has been formulated and selected for a NASA Phase 2 SBIR award for using the VLBA tape recorder for recording general data. The VLBA tape recorder is a high-speed, high-density linear tape recorder developed for Very Long Baseline Interferometry (VLBI) which is presently capable of recording at rates up to 2 Gbit/sec and holding up to 1 Terabyte of data on one tape, using a special interface and not employing error correction. A general-purpose interface and error correction will be added so that the recorder can be used in other high-speed, high-capacity applications.
Fast feedback for linear colliders
Hendrickson, L.; Adolphsen, C.; Allison, S.; Gromme, T.; Grossberg, P.; Himel, T.; Krauter, K.; MacKenzie, R.; Minty, M.; Sass, R.
1995-05-01
A fast feedback system provides beam stabilization for the SLC. As the SLC is in some sense a prototype for future linear colliders, this system may be a prototype for future feedbacks. The SLC provides a good base of experience for feedback requirements and capabilities as well as a testing ground for performance characteristics. The feedback system controls a wide variety of machine parameters throughout the SLC and associated experiments, including regulation of beam position, angle, energy, intensity and timing parameters. The design and applications of the system are described, in addition to results of recent performance studies.
Acoustic emission linear pulse holography
Collins, H. D.; Busse, L. J.; Lemon, D. K.
1985-07-30
Defects in a structure are imaged as they propagate, using their emitted acoustic energy as a monitored source. Short bursts of acoustic energy propagate through the structure to a discrete element receiver array. A reference timing transducer located between the array and the inspection zone initiates a series of time-of-flight measurements. A resulting series of time-of-flight measurements are then treated as aperture data and are transferred to a computer for reconstruction of a synthetic linear holographic image. The images can be displayed and stored as a record of defect growth.
Linearized Bekenstein varying α models
NASA Astrophysics Data System (ADS)
Avelino, P. P.; Martins, C. J.; Oliveira, J. C.
2004-10-01
We study the simplest class of Bekenstein-type, varying α models, in which the two available free functions (potential and gauge kinetic function) are Taylor-expanded up to linear order. Any realistic model of this type reduces to a model in this class for a certain time interval around the present day. Nevertheless, we show that no such model is consistent with all existing observational results. We discuss possible implications of these findings, and, in particular, clarify the ambiguous statement (often found in the literature) that “the Webb results are inconsistent with Oklo.”
Elementary principles of linear accelerators
NASA Astrophysics Data System (ADS)
Loew, G. A.; Talman, R.
1983-09-01
A short chronology of important milestones in the field of linear accelerators is presented. Proton linacs are first discussed and elementary concepts such as transit time, shunt impedance, and Q are introduced. Critical issues such as phase stability and transverse forces are addressed. An elementary discussion of waveguide accelerating structures is also provided. Finally, electron accelerators are addressed. Taking SLAC as an example, various topics are discussed such as structure design, choice of parameters, frequency optimization, beam current, emittance, bunch length, and beam loading. Recent developments and future challenges are mentioned briefly.
Simultaneous Determination of Cobalt, Copper, and Nickel by Multivariate Linear Regression.
ERIC Educational Resources Information Center
Dado, Greg; Rosenthal, Jeffrey
1990-01-01
Presented is an experiment in which the concentrations of three metal ions in a solution are simultaneously determined by ultraviolet-visible spectroscopy. The availability of the computer program used to analyze the data statistically by multivariate linear regression is noted. (KR)
ERIC Educational Resources Information Center
Montiel, Mariana; Bhatti, Uzma
2010-01-01
This article presents an overview of some issues that were confronted when delivering an online second Linear Algebra course (assuming a previous Introductory Linear Algebra course) to graduate students enrolled in a Secondary Mathematics Education program. The focus is on performance in one particular aspect of the course: "change of basis" and…
Analysis of linear trade models and relation to scale economies
Gomory, Ralph E.; Baumol, William J.
1997-01-01
We discuss linear Ricardo models with a range of parameters. We show that the exact boundary of the region of equilibria of these models is obtained by solving a simple integer programming problem. We show that there is also an exact correspondence between many of the equilibria resulting from families of linear models and the multiple equilibria of economies of scale models. PMID:11038573
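The linear-programming machinery underlying such models can be illustrated with a generic two-country, two-good Ricardian allocation; this is not the authors' model, and the productivities, prices, and labor endowments below are invented. Labor is allocated to maximize the value of world output, and the optimum exhibits full specialization along comparative advantage.

```python
import numpy as np
from scipy.optimize import linprog

# Unit labor requirements a[c, g]: country c needs a[c, g] units of
# labor per unit of good g.  Illustrative Ricardo-style data.
a = np.array([[1.0, 2.0],     # country 0: good at cloth
              [2.0, 1.0]])    # country 1: good at wine
p = np.array([1.0, 1.0])      # world prices of the two goods
L = np.array([10.0, 10.0])    # labor endowments

# Variables x[c, g] = labor of country c on good g; output = x / a.
# Maximize sum(p * x / a)  <=>  minimize its negative.
c = -(p / a).ravel()
A_eq = np.array([[1.0, 1.0, 0.0, 0.0],    # labor adds up per country
                 [0.0, 0.0, 1.0, 1.0]])
res = linprog(c, A_eq=A_eq, b_eq=L, bounds=[(0, None)] * 4)
x = res.x.reshape(2, 2)
```

With these numbers each country devotes all labor to its comparative-advantage good; the paper's result concerns how the set of such equilibria varies as the parameters range over families of linear models.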
Challenges in future linear colliders
Swapan Chattopadhyay; Kaoru Yokoya
2002-09-02
For decades, electron-positron colliders have been complementing proton-proton colliders. But the circular LEP, the largest e-e+ collider, represented an energy limit beyond which energy losses to synchrotron radiation necessitate moving to e-e+ linear colliders (LCs), thereby raising new challenges for accelerator builders. Japanese-American, German, and European collaborations have presented options for the Future Linear Collider (FLC). Key accelerator issues for any FLC option are the achievement of high enough energy and luminosity. Damping rings, taking advantage of the phenomenon of synchrotron radiation, have been developed as the means for decreasing beam size, which is crucial for ensuring a sufficiently high rate of particle-particle collisions. Related challenges are alignment and stability in an environment where even minute ground motion can disrupt performance, and the ability to monitor beam size. The technical challenges exist within a wider context of socioeconomic and political challenges, likely necessitating continued development of international collaboration among parties involved in accelerator-based physics.
Linear Response for Intermittent Maps
NASA Astrophysics Data System (ADS)
Baladi, Viviane; Todd, Mike
2016-11-01
We consider the one-parameter family α ↦ T_α (α ∈ [0, 1)) of Pomeau-Manneville type interval maps, T_α(x) = x(1 + 2^α x^α) for x ∈ [0, 1/2) and T_α(x) = 2x − 1 for x ∈ [1/2, 1], with the associated absolutely continuous invariant probability measure μ_α. For α ∈ (0, 1), Sarig and Gouëzel proved that the system mixes only polynomially with rate n^(1−1/α) (in particular, there is no spectral gap). We show that for any ψ ∈ L^q, the map α ↦ ∫_0^1 ψ dμ_α is differentiable on [0, 1 − 1/q), and we give a (linear response) formula for the value of the derivative. This is the first time a linear response formula for the SRB measure has been obtained in the setting of slowly mixing dynamics. Our argument shows how cone techniques can be used in this context. For α ≥ 1/2 we need the n^(−1/α) decorrelation obtained by Gouëzel under additional conditions.
Repair of overheating linear accelerator
Barkley, Walter; Baldwin, William; Bennett, Gloria; Bitteker, Leo; Borden, Michael; Casados, Jeff; Fitzgerald, Daniel; Gorman, Fred; Johnson, Kenneth; Kurennoy, Sergey; Martinez, Alberto; O’Hara, James; Perez, Edward; Roller, Brandon; Rybarcyk, Lawrence; Stark, Peter; Stockton, Jerry
2004-01-01
Los Alamos Neutron Science Center (LANSCE) is a proton accelerator that produces high-energy particle beams for experiments. These beams include neutrons and protons for diverse uses including radiography, isotope production, small-feature study, lattice vibrations, and materials science. The Drift Tube Linear Accelerator (DTL) is the first portion of a half-mile-long linear section of the accelerator that raises the beam energy from 750 keV to 100 MeV. In its 31st year of operation (2003), the DTL experienced serious issues. The first problem was the inability to maintain resonant frequency at full power. The second was an increased occurrence of over-temperature failures of cooling hoses. These shortcomings led to an investigation during the 2003 yearly preventive maintenance shutdown that showed evidence of excessive heating: discolored interior tank walls and copper oxide deposition in the cooling circuits. Since the overheating was suspected to be caused by compromised heat transfer, improving heat transfer became the focus of the repair effort. Investigations revealed copper oxide flow inhibition and iron oxide scale build-up. Acid cleaning was implemented with careful attention to protection of the base metal, selection of components to clean, and minimization of exposure times. The effort has been very successful in bringing the accelerator through a complete eight-month run cycle, allowing an impressive array of scientific experiments to be completed this year (2003-2004). This paper describes the systems, the investigation and analysis, the repair, the return to production, and conclusions.
Non-linear Growth Models in Mplus and SAS
Grimm, Kevin J.; Ram, Nilam
2013-01-01
Non-linear growth curves, i.e., growth curves that follow a specified non-linear function in time, enable researchers to model complex developmental patterns with parameters that are easily interpretable. In this paper we describe how a variety of sigmoid curves can be fit using the Mplus structural modeling program and the non-linear mixed-effects modeling procedure NLMIXED in SAS. Using longitudinal achievement data collected as part of a study examining the effects of preschool instruction on academic gain, we illustrate the procedures for fitting growth models based on the logistic, Gompertz, and Richards functions. Brief notes regarding the practical benefits, limitations, and choices faced in the fitting and estimation of such models are included. PMID:23882134
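The Mplus and SAS NLMIXED syntax from the paper is not reproduced here, but the core idea of fitting a sigmoid growth function can be sketched in Python with `scipy.optimize.curve_fit`; the data below are synthetic, not the study's preschool achievement data, and the parameter names are illustrative:

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, upper, rate, midpoint):
    """Logistic growth curve: approaches the asymptote `upper`,
    with growth rate `rate`, reaching upper/2 at time `midpoint`."""
    return upper / (1.0 + np.exp(-rate * (t - midpoint)))

# Synthetic longitudinal scores for illustration only.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 50)
y = logistic(t, upper=100.0, rate=1.2, midpoint=4.0) + rng.normal(0.0, 1.0, t.size)

# Nonlinear least-squares fit; p0 is the starting guess for the parameters.
params, _ = curve_fit(logistic, t, y, p0=[90.0, 1.0, 5.0])
upper, rate, midpoint = params
```

Gompertz or Richards curves can be fit the same way by swapping in the corresponding function; the mixed-effects (per-child random parameters) structure the paper discusses requires NLMIXED or Mplus rather than this fixed-effects sketch.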
Scale and Rotation Invariant Matching Using Linearly Augmented Trees.
Jiang, Hao; Tian, Tai-Peng; Sclaroff, Stan
2015-12-01
We propose a novel linearly augmented tree method for efficient scale and rotation invariant object matching. The proposed method enforces pairwise matching consistency defined on trees, and high-order constraints on all the sites of a template. The pairwise constraints admit arbitrary metrics, while the high-order constraints use L1 norms and can therefore be linearized. Such a linearly augmented tree formulation introduces hyperedges and loops into the basic tree structure. However, unlike a general loopy graph, its special structure allows us to relax and decompose the optimization into a sequence of tree matching problems that are efficiently solvable by dynamic programming. The proposed method also works on continuous scale and rotation parameters; we can match at arbitrarily large scales with the same efficiency. Our experiments on ground-truth data and a variety of real images and videos show that the proposed method is efficient, accurate, and reliable. PMID:26539858
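The decomposition described above relies on a classic fact: matching with unary and pairwise costs on a tree is exactly solvable by bottom-up dynamic programming. A minimal sketch of that tree DP (unary plus pairwise costs only, without the paper's high-order L1 augmentation or scale/rotation parameters; all data structures here are illustrative):

```python
def tree_match(children, unary, pairwise):
    """Minimize sum of unary[node][label] plus pairwise[(parent, child)]
    [parent_label][child_label] over all labelings of a rooted tree.

    children: dict mapping node -> list of child nodes (node 0 is the root)
    unary:    dict mapping node -> list of per-label costs
    pairwise: dict mapping (parent, child) -> 2-D cost table
    Returns (optimal total cost, labeling dict node -> label).
    """
    best = {}    # best[node][label]: optimal cost of the subtree at node
    choice = {}  # choice[(node, child)][label]: argmin child label

    def solve(node):
        for c in children.get(node, []):
            solve(c)
        best[node] = []
        for lab in range(len(unary[node])):
            cost = unary[node][lab]
            for c in children.get(node, []):
                cand = [pairwise[(node, c)][lab][cl] + best[c][cl]
                        for cl in range(len(unary[c]))]
                k = min(range(len(cand)), key=cand.__getitem__)
                choice.setdefault((node, c), {})[lab] = k
                cost += cand[k]
            best[node].append(cost)

    solve(0)
    # Backtrack from the best root label down the tree.
    root_lab = min(range(len(best[0])), key=best[0].__getitem__)
    labeling, stack = {0: root_lab}, [0]
    while stack:
        node = stack.pop()
        for c in children.get(node, []):
            labeling[c] = choice[(node, c)][labeling[node]]
            stack.append(c)
    return best[0][root_lab], labeling
```

The paper's contribution is handling the extra hyperedges and loops that the augmented formulation adds on top of this tree backbone; each relaxed subproblem in their decomposition has the form solved above.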
Heterogeneous Acceleration for Linear Algebra in Multi-coprocessor Environments
Luszczek, Piotr R; Tomov, Stanimire Z; Dongarra, Jack J
2015-01-01
We present an efficient and scalable programming model for the development of linear algebra software in heterogeneous multi-coprocessor environments. The model incorporates some of the current best design and implementation practices for the heterogeneous acceleration of dense linear algebra (DLA). Examples are given for the algorithms at the basis of solving linear systems: the LU, QR, and Cholesky factorizations. To generate the extreme level of parallelism needed for the efficient use of coprocessors, the algorithms of interest are redesigned and then split into well-chosen computational tasks. Task execution is scheduled over the computational components of a hybrid system of multi-core CPUs and coprocessors using a light-weight runtime system. The use of lightweight runtime systems keeps scheduling overhead low while enabling the expression of parallelism through otherwise sequential code. This simplifies the development effort and allows exploration of the unique strengths of the various hardware components.
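As an illustration of the split-into-tasks idea (a plain NumPy sketch, not the hybrid CPU/coprocessor runtime the paper describes), a blocked Cholesky factorization decomposes naturally into POTRF, TRSM, and SYRK/GEMM tile tasks; a lightweight runtime would dispatch each loop body below as soon as its input tiles are ready, rather than running them sequentially. The block size is assumed to divide the matrix order for simplicity:

```python
import numpy as np

def blocked_cholesky(A, bs):
    """Lower-triangular blocked Cholesky of a symmetric positive-definite A.
    Each loop body is one 'task' (POTRF / TRSM / SYRK-GEMM) that a task-based
    runtime could schedule by data dependences. Assumes bs divides A's order."""
    A = A.copy()
    n = A.shape[0]
    for k in range(0, n, bs):
        kk = slice(k, k + bs)
        # POTRF task: factor the diagonal tile.
        A[kk, kk] = np.linalg.cholesky(A[kk, kk])
        for i in range(k + bs, n, bs):
            ii = slice(i, i + bs)
            # TRSM task: A_ik <- A_ik * L_kk^{-T}.
            A[ii, kk] = np.linalg.solve(A[kk, kk], A[ii, kk].T).T
        for i in range(k + bs, n, bs):
            ii = slice(i, i + bs)
            for j in range(k + bs, i + bs, bs):
                jj = slice(j, j + bs)
                # SYRK/GEMM task: trailing-submatrix update.
                A[ii, jj] -= A[ii, kk] @ A[jj, kk].T
    return np.tril(A)
```

The dependence structure is what makes the task graph rich: every TRSM of step k depends on that step's POTRF, every update tile depends on two TRSM outputs, and the next POTRF depends only on the updates touching its tile, so many tasks from different k can run concurrently.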
Arena, G; Rizzarelli, E; Sammartano, S; Rigano, C
1979-01-01
A non-linear least-squares computer program has been written for the refinement of the parameters involved in potentiometric acid-base titrations. The program ACBA (ACid-BAse titrations) is applicable under quite general conditions to solutions containing one or more acids or bases. The refinement method used gives the program several advantages over previously described programs.
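ACBA itself is not reproduced here, but the refinement idea, adjusting an equilibrium constant so a titration model reproduces measured pH values in the least-squares sense, can be sketched with `scipy.optimize.least_squares` for a single monoprotic weak acid. Water autoionization is neglected, and the data and parameter names are illustrative, not from the paper:

```python
import numpy as np
from scipy.optimize import least_squares

def model_ph(log_ka, conc):
    """pH of a monoprotic weak acid HA at total concentration `conc`,
    from [H+]^2 + Ka*[H+] - Ka*conc = 0 (water autoionization neglected)."""
    ka = 10.0 ** log_ka
    h = (-ka + np.sqrt(ka * ka + 4.0 * ka * conc)) / 2.0
    return -np.log10(h)

# Synthetic "titration" points generated from pKa = 4.76 plus noise.
conc = np.array([0.001, 0.005, 0.01, 0.05, 0.1])
rng = np.random.default_rng(1)
ph_obs = model_ph(-4.76, conc) + rng.normal(0.0, 0.01, conc.size)

# Refine log10(Ka) by minimizing the pH residuals.
fit = least_squares(lambda p: model_ph(p[0], conc) - ph_obs, x0=[-4.0])
pka = -fit.x[0]
```

The full program handles multiple acids/bases and titrant volumes simultaneously; this sketch only shows the residual-minimization core shared by such refinement codes.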
International linear collider reference design report
Aarons, G.
2007-06-22
The International Linear Collider will give physicists a new cosmic doorway to explore energy regimes beyond the reach of today's accelerators. A proposed electron-positron collider, the ILC will complement the Large Hadron Collider, a proton-proton collider at the European Center for Nuclear Research (CERN) in Geneva, Switzerland, together unlocking some of the deepest mysteries in the universe. With LHC discoveries pointing the way, the ILC, a true precision machine, will provide the missing pieces of the puzzle. Consisting of two linear accelerators that face each other, the ILC will hurl some 10 billion electrons and their anti-particles, positrons, toward each other at nearly the speed of light. Superconducting accelerator cavities operating at temperatures near absolute zero give the particles more and more energy until they smash in a blazing crossfire at the center of the machine. Stretching approximately 35 kilometers in length, the beams collide 14,000 times every second at extremely high energies of 500 billion electron volts (500 GeV). Each spectacular collision creates an array of new particles that could answer some of the most fundamental questions of all time. The current baseline design allows for an upgrade to a 50-kilometer, 1 trillion-electron-volt (1 TeV) machine during the second stage of the project. This reference design provides the first detailed technical snapshot of the proposed future electron-positron collider, defining in detail the technical parameters and components that make up each section of the 31-kilometer-long accelerator. The report will guide the development of the worldwide R&D program, motivate international industrial studies, and serve as the basis for the final engineering design needed to make an official project proposal later this decade.