NASA Technical Reports Server (NTRS)
Tuey, R. C.
1972-01-01
Computer solutions of linear programming problems are outlined. Information covers vector spaces, convex sets, and matrix algebra elements for solving simultaneous linear equations. Dual problems, reduced cost analysis, ranges, and error analysis are illustrated.
NASA Technical Reports Server (NTRS)
Lawson, C. L.; Krogh, F. T.; Gold, S. S.; Kincaid, D. R.; Sullivan, J.; Williams, E.; Hanson, R. J.; Haskell, K.; Dongarra, J.; Moler, C. B.
1982-01-01
The Basic Linear Algebra Subprograms (BLAS) library is a collection of 38 FORTRAN-callable routines for performing basic operations of numerical linear algebra. The BLAS library is a portable and efficient source of basic operations for designers of programs involving linear algebraic computations. It is supplied in portable FORTRAN and in Assembler-code versions for IBM 370, UNIVAC 1100, and CDC 6000 series computers.
Sparse linear programming subprogram
Hanson, R.J.; Hiebert, K.L.
1981-12-01
This report describes a subprogram, SPLP(), for solving linear programming problems. The package of subprogram units comprising SPLP() is written in Fortran 77. The subprogram SPLP() is intended for problems involving at most a few thousand constraints and variables. The subprograms are written to take advantage of sparsity in the constraint matrix. A very general problem statement is accepted by SPLP(). It allows upper, lower, or no bounds on the variables. Both the primal and dual solutions are returned as output parameters. The package has many optional features. Among them is the ability to save partial results and then use them to continue the computation at a later time.
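SPLP() itself is a Fortran 77 package, but the shape of a call it accepts (upper, lower, or no bounds on variables; a sparse constraint matrix; primal and dual output) can be sketched with SciPy's HiGHS-backed `linprog`. The tiny problem and all numbers below are illustrative, not taken from the report.

```python
# Sketch of an SPLP()-style problem statement using SciPy's linprog.
import numpy as np
from scipy.optimize import linprog
from scipy.sparse import csr_matrix

c = np.array([1.0, 2.0])                 # minimize c @ x
A_ub = csr_matrix([[-1.0, -1.0]])        # sparse constraint matrix: x0 + x1 >= 1
b_ub = np.array([-1.0])
bounds = [(0, None), (0, 3.0)]           # lower, upper, or no bound per variable

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
print(res.x)                  # primal solution
print(res.ineqlin.marginals)  # dual values for the inequality constraints
```

As with SPLP(), both the primal solution and the dual values are returned from a single call.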
NASA Technical Reports Server (NTRS)
Klumpp, A. R.; Lawson, C. L.
1988-01-01
Routines are provided for common scalar, vector, matrix, and quaternion operations. The computer program extends the Ada programming language to include linear-algebra capabilities similar to those of the HAL/S programming language. It is designed for such avionics applications as software for Space Station.
NASA Technical Reports Server (NTRS)
Ferencz, Donald C.; Viterna, Larry A.
1991-01-01
ALPS is a computer program which can be used to solve general linear programming (optimization) problems. ALPS was designed for those who have minimal linear programming (LP) knowledge and features a menu-driven scheme to guide the user through the process of creating and solving LP formulations. Once created, the problems can be edited and stored in standard DOS ASCII files to provide portability to various word processors or even other linear programming packages. Unlike many math-oriented LP solvers, ALPS contains an LP parser that reads through the LP formulation and reports several types of errors to the user. ALPS provides a large amount of solution data which is often useful in problem solving. In addition to pure linear programs, ALPS can solve integer, mixed-integer, and binary problems. Pure linear programs are solved with the revised simplex method. Integer or mixed-integer programs are solved initially with the revised simplex method and then completed using the branch-and-bound technique. Binary programs are solved with the method of implicit enumeration. This manual describes how to use ALPS to create, edit, and solve linear programming problems. Instructions for installing ALPS on a PC-compatible computer are included in the appendices, along with a general introduction to linear programming. A programmer's guide is also included for assistance in modifying and maintaining the program.
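The solve path described above (revised simplex on the LP relaxation, then branch-and-bound for integer variables) can be sketched with SciPy's HiGHS solver standing in for ALPS's own routines. The toy model below is illustrative, not from the manual.

```python
# LP relaxation first, then the same model with integrality enforced,
# mirroring ALPS's simplex-then-branch-and-bound workflow.
import numpy as np
from scipy.optimize import linprog

# maximize 5x + 4y  s.t.  2x + 3y <= 12,  2x + y <= 6,  x, y >= 0
c = np.array([-5.0, -4.0])               # linprog minimizes, so negate
A_ub = np.array([[2.0, 3.0], [2.0, 1.0]])
b_ub = np.array([12.0, 6.0])

relaxed = linprog(c, A_ub=A_ub, b_ub=b_ub, method="highs")
integer = linprog(c, A_ub=A_ub, b_ub=b_ub, method="highs",
                  integrality=[1, 1])    # 1 marks a variable as integer
print(relaxed.x)   # fractional optimum of the relaxation
print(integer.x)   # integer optimum after branch-and-bound
```

The relaxation is fractional here, so the integer solve genuinely changes the answer, which is exactly the case branch-and-bound exists for.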
Linear Programming across the Curriculum
ERIC Educational Resources Information Center
Yoder, S. Elizabeth; Kurz, M. Elizabeth
2015-01-01
Linear programming (LP) is taught in different departments across college campuses with engineering and management curricula. Modeling an LP problem is taught in every linear programming class. As faculty teaching in Engineering and Management departments, the depth to which teachers should expect students to master this particular type of…
ALPS - A LINEAR PROGRAM SOLVER
NASA Technical Reports Server (NTRS)
Viterna, L. A.
1994-01-01
Linear programming is a widely-used engineering and management tool. Scheduling, resource allocation, and production planning are all well-known applications of linear programs (LP's). Most LP's are too large to be solved by hand, so over the decades many computer codes for solving LP's have been developed. ALPS, A Linear Program Solver, is a full-featured LP analysis program. ALPS can solve plain linear programs as well as more complicated mixed integer and pure integer programs. ALPS also contains an efficient solution technique for pure binary (0-1 integer) programs. One of the many weaknesses of LP solvers is the lack of interaction with the user. ALPS is a menu-driven program with no special commands or keywords to learn. In addition, ALPS contains a full-screen editor to enter and maintain the LP formulation. These formulations can be written to and read from plain ASCII files for portability. For those less experienced in LP formulation, ALPS contains a problem "parser" which checks the formulation for errors. ALPS creates fully formatted, readable reports that can be sent to a printer or output file. ALPS is written entirely in IBM's APL2/PC product, Version 1.01. The APL2 workspace containing all the ALPS code can be run on any APL2/PC system (AT or 386). On a 32-bit system, this configuration can take advantage of all extended memory. The user can also examine and modify the ALPS code. The APL2 workspace has also been "packed" to be run on any DOS system (without APL2) as a stand-alone "EXE" file, but has limited memory capacity on a 640K system. A numeric coprocessor (80X87) is optional but recommended. The standard distribution medium for ALPS is a 5.25 inch 360K MS-DOS format diskette. IBM, IBM PC and IBM APL2 are registered trademarks of International Business Machines Corporation. MS-DOS is a registered trademark of Microsoft Corporation.
On the linear programming bound for linear Lee codes.
Astola, Helena; Tabus, Ioan
2016-01-01
Based on an invariance-type property of the Lee-compositions of a linear Lee code, additional equality constraints can be introduced into the linear programming problem of linear Lee codes. In this paper, we formulate this property in terms of an action of the multiplicative group of the field [Formula: see text] on the set of Lee-compositions. We show some useful properties of certain sums of Lee-numbers, which are the eigenvalues of the Lee association scheme, appearing in the linear programming problem of linear Lee codes. Using the additional equality constraints, we formulate the linear programming problem of linear Lee codes in a very compact form, leading to a fast execution, which allows the bounds to be computed efficiently for linear codes with large parameter values.
Gadgets, approximation, and linear programming
Trevisan, L.; Sudan, M.; Sorkin, G.B.; Williamson, D.P.
1996-12-31
We present a linear-programming based method for finding "gadgets," i.e., combinatorial structures reducing constraints of one optimization problem to constraints of another. A key step in this method is a simple observation which limits the search space to a finite one. Using this new method we present a number of new, computer-constructed gadgets for several different reductions. The method also answers a previously posed question on how to prove the optimality of gadgets: we show how LP duality gives such proofs. The new gadgets improve hardness results for MAX CUT and MAX DICUT, showing that approximating these problems to within factors of 60/61 and 44/45, respectively, is NP-hard. We also use the gadgets to obtain an improved approximation algorithm for MAX 3SAT which guarantees an approximation ratio of .801. This improves upon the previous best bound of .7704.
Investigating Integer Restrictions in Linear Programming
ERIC Educational Resources Information Center
Edwards, Thomas G.; Chelst, Kenneth R.; Principato, Angela M.; Wilhelm, Thad L.
2015-01-01
Linear programming (LP) is an application of graphing linear systems that appears in many Algebra 2 textbooks. Although not explicitly mentioned in the Common Core State Standards for Mathematics, linear programming blends seamlessly into modeling with mathematics, the fourth Standard for Mathematical Practice (CCSSI 2010, p. 7). In solving a…
Computer Program For Linear Algebra
NASA Technical Reports Server (NTRS)
Krogh, F. T.; Hanson, R. J.
1987-01-01
A collection of routines is provided for basic vector operations. The Basic Linear Algebra Subprogram (BLAS) library is a collection of FORTRAN-callable routines that employ standard techniques to perform the basic operations of numerical linear algebra.
Timetabling an Academic Department with Linear Programming.
ERIC Educational Resources Information Center
Bezeau, Lawrence M.
This paper describes an approach to faculty timetabling and course scheduling that uses computerized linear programming. After reviewing the literature on linear programming, the paper discusses the process whereby a timetable was created for a department at the University of New Brunswick. Faculty were surveyed with respect to course offerings…
Solution Methods for Stochastic Dynamic Linear Programs.
1980-12-01
Linear Programming, IIASA, Laxenburg, Austria, June 2-6, 1980. [2] Aghili, P., R. H. Cramer, and H. W. Thompson, "On the applicability of two-stage..." Laxenburg, Austria, May 1978. [52] Propoi, A., and V. Krivonozhko, "The simplex method for dynamic linear programs," RR-78-14, IIASA, Vienna, Austria.
The Use of Linear Programming for Prediction.
ERIC Educational Resources Information Center
Schnittjer, Carl J.
The purpose of the study was to develop a linear programming model to be used for prediction, test the accuracy of the predictions, and compare the accuracy with that produced by curvilinear multiple regression analysis. (Author)
A neural network for bounded linear programming
Culioli, J.C.; Protopopescu, V.; Britton, C.; Ericson, N.
1989-01-01
The purpose of this paper is to describe a neural network implementation of an algorithm recently designed at ORNL to solve the Transportation and the Assignment Problems, and, more generally, any explicitly bounded linear program. 9 refs.
Breadboard linear array scan imager program
NASA Technical Reports Server (NTRS)
1975-01-01
The performance of large-scale-integration photodiode arrays was evaluated in a linear-array-scan imaging-system breadboard for application to multispectral remote sensing of the Earth's resources. Objectives, approach, implementation, and test results of the program are presented.
Fuzzy linear programming for bulb production
NASA Astrophysics Data System (ADS)
Siregar, I.; Suantio, H.; Hanifiah, Y.; Muchtar, M. A.; Nasution, T. H.
2017-01-01
The research was conducted at a bulb company that faces high market demand. As demand has increased, the company's production has been unable to fulfill it because production planning is not optimal. Bulb production planning is studied with the aim of enabling the company to fulfill market demand with the limited resources available. From the data, it is known that the company cannot meet market demand for the Type A and Type B bulbs, while the Type C bulb is produced in excess of market demand. Using fuzzy linear programming, an optimal production plan that meets market demand is obtained. The simplex method is carried out using the software LINGO 13. Applying fuzzy linear programming increases profit by 7.39% compared with ordinary linear programming.
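The abstract does not give the company's data, but the usual way a fuzzy LP of this kind is solved is Zimmermann's max-lambda formulation: the fuzzy objective aspiration and fuzzy resource limits become membership constraints, and an ordinary LP maximizes the overall satisfaction level lambda. The numbers below are invented for illustration.

```python
# Minimal sketch of Zimmermann's max-lambda fuzzy LP (illustrative data).
import numpy as np
from scipy.optimize import linprog

# Crisp core: maximize profit 3x1 + 2x2 subject to 2x1 + x2 <= b,
# with a fuzzy resource b that may stretch from 10 up to 10 + 2.
z0, z1 = 20.0, 26.0   # aspiration interval for the profit objective
b, p = 10.0, 2.0      # resource limit and its allowed stretch

# Variables: x1, x2, lam.  Maximize lam subject to:
#   3x1 + 2x2 >= z0 + lam*(z1 - z0)   (objective membership)
#   2x1 +  x2 <= b + (1 - lam)*p      (fuzzy resource membership)
#   0 <= lam <= 1,  x >= 0
c = np.array([0.0, 0.0, -1.0])          # maximize lam
A_ub = np.array([
    [-3.0, -2.0, z1 - z0],              # -(3x1 + 2x2) + lam*(z1-z0) <= -z0
    [ 2.0,  1.0, p],                    # 2x1 + x2 + lam*p <= b + p
])
b_ub = np.array([-z0, b + p])
res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, None), (0, None), (0, 1)], method="highs")
print(res.x)   # x1, x2, and the attained satisfaction level lambda
```

The attained lambda between 0 and 1 quantifies how far both the profit aspiration and the stretched resource limit are simultaneously satisfied.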
Portfolio optimization using fuzzy linear programming
NASA Astrophysics Data System (ADS)
Pandit, Purnima K.
2013-09-01
Portfolio Optimization (PO) is a problem in finance in which an investor tries to maximize return and minimize risk by carefully choosing different assets. Expected return and risk are the most important parameters with regard to optimal portfolios. In its simple form, PO can be modeled as a quadratic programming problem, which can be put into an equivalent linear form. PO problems with fuzzy parameters can be solved as multi-objective fuzzy linear programming problems. In this paper we give the solution to such problems with an illustrative example.
Menu-Driven Solver Of Linear-Programming Problems
NASA Technical Reports Server (NTRS)
Viterna, L. A.; Ferencz, D.
1992-01-01
Program assists inexperienced user in formulating linear-programming problems. A Linear Program Solver (ALPS) computer program is full-featured LP analysis program. Solves plain linear-programming problems as well as more-complicated mixed-integer and pure-integer programs. Also contains efficient technique for solution of purely binary linear-programming problems. Written entirely in IBM's APL2/PC software, Version 1.01. Packed program contains licensed material, property of IBM (copyright 1988, all rights reserved).
Generalised Assignment Matrix Methodology in Linear Programming
ERIC Educational Resources Information Center
Jerome, Lawrence
2012-01-01
Discrete Mathematics instructors and students have long been struggling with various labelling and scanning algorithms for solving many important problems. This paper shows how to solve a wide variety of Discrete Mathematics and OR problems using assignment matrices and linear programming, specifically using Excel Solvers although the same…
Spline smoothing of histograms by linear programming
NASA Technical Reports Server (NTRS)
Bennett, J. O.
1972-01-01
An algorithm is presented for obtaining a function that approximates the frequency distribution from a sample of size n. To obtain the approximating function, a histogram is made from the data. Next, Euclidean-space approximations to the graph of the histogram, using central B-splines as basis elements, are obtained by linear programming. The approximating function has area one and is nonnegative.
Linear programming computational experience with onyx
Atrek, E.
1994-12-31
ONYX is a linear programming software package based on an efficient variation of the gradient projection method. When fully configured, it is intended for application to industrial size problems. While the computational experience is limited at the time of this abstract, the technique is found to be robust and competitive with existing methodology in terms of both accuracy and speed. An overview of the approach is presented together with a description of program capabilities, followed by a discussion of up-to-date computational experience with the program. Conclusions include advantages of the approach and envisioned future developments.
Controller design approach based on linear programming.
Tanaka, Ryo; Shibasaki, Hiroki; Ogawa, Hiromitsu; Murakami, Takahiro; Ishida, Yoshihisa
2013-11-01
This study explains and demonstrates a design method for a control system with a load disturbance observer. Observer gains are determined by linear programming (LP) in terms of the Routh-Hurwitz stability criterion and the final-value theorem. In addition, the control model has a feedback structure, and feedback gains are determined by a linear quadratic regulator. The simulation results confirmed that, compared with the conventional method, the output estimated by our proposed method converges to a reference input faster when a load disturbance is added to the control system. We also confirmed the effectiveness of the proposed method by performing an experiment with a DC motor.
A LINEAR PROGRAMMING MODEL OF THE GASEOUS-DIFFUSION ISOTOPE-SEPARATION PROCESS
Descriptors: isotope separation; gaseous diffusion separation; linear programming; nuclear reactors; reactor fuels; uranium; purification
Evolving evolutionary algorithms using linear genetic programming.
Oltean, Mihai
2005-01-01
A new model for evolving Evolutionary Algorithms is proposed in this paper. The model is based on the Linear Genetic Programming (LGP) technique. Every LGP chromosome encodes an EA which is used for solving a particular problem. Several Evolutionary Algorithms for function optimization, the Traveling Salesman Problem, and the Quadratic Assignment Problem are evolved by using the considered model. Numerical experiments show that the evolved Evolutionary Algorithms perform similarly to, and sometimes even better than, standard approaches on several well-known benchmark problems.
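An LGP chromosome of the kind described above is a linear list of register-machine instructions. A minimal interpreter for that representation can be sketched as follows; the opcode set and register layout are illustrative, not the paper's system.

```python
# Minimal register-machine interpreter for an LGP-style chromosome:
# each gene is a tuple (op, dst, src1, src2) over a bank of registers.
import operator

OPS = {"add": operator.add, "sub": operator.sub, "mul": operator.mul}

def run(chromosome, inputs, n_regs=4):
    regs = [0.0] * n_regs
    regs[:len(inputs)] = inputs          # load inputs into the low registers
    for op, dst, s1, s2 in chromosome:
        regs[dst] = OPS[op](regs[s1], regs[s2])
    return regs[0]                       # register 0 holds the output

# This chromosome encodes r0 = (x + y) * x for inputs (x, y):
prog = [("add", 2, 0, 1), ("mul", 0, 2, 0)]
print(run(prog, [3.0, 4.0]))   # 21.0
```

Mutation and crossover then act on the instruction list itself, which is what makes the representation convenient for evolving algorithms.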
Neural network models for Linear Programming
Culioli, J.C.; Protopopescu, V.; Britton, C.; Ericson, N.
1989-01-01
The purpose of this paper is to present a neural network that solves the general Linear Programming (LP) problem. In the first part, we recall Hopfield and Tank's circuit for LP and show that although it converges to stable states, it does not, in general, yield admissible solutions. This is due to the penalization treatment of the constraints. In the second part, we propose an approach based on Lagrange multipliers that converges to primal and dual admissible solutions. We also show that the duality gap (measuring the optimality) can be rendered, in principle, as small as needed. 11 refs.
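Multiplier-based dynamics of the kind contrasted above with penalty circuits can be sketched as a discrete projected primal-dual iteration. The augmented (penalty-stabilized) term below is a common numerical stabilization, not necessarily the paper's exact circuit, and the data are illustrative.

```python
# Projected primal-dual iteration for  min c@x  s.t.  A@x = b, x >= 0.
import numpy as np

c = np.array([1.0, 2.0, 0.0])
A = np.array([[1.0, 1.0, 1.0]])   # x1 + x2 + x3 = 2; optimum is x = (0, 0, 2)
b = np.array([2.0])

x = np.zeros(3)
lam = np.zeros(1)
eta, rho = 0.05, 1.0              # step size and stabilizing penalty weight
for _ in range(20000):
    s = A @ x - b                                   # constraint violation
    grad = c + A.T @ (lam + rho * s)                # augmented Lagrangian grad
    x = np.maximum(x - eta * grad, 0.0)             # primal descent, projected
    lam = lam + eta * (A @ x - b)                   # dual (multiplier) ascent
print(x, lam)
```

Unlike a pure penalty circuit, the multiplier update drives the constraint violation itself to zero, so the limit point is primal and dual admissible.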
New approaches to linear and nonlinear programming
Murray, W.; Saunders, M.A.
1990-03-01
During the last twelve months, research has concentrated on barrier- function methods for linear programming (LP) and quadratic programming (QP). Some ground-work for the application of barrier methods to nonlinearly constrained problems has also begun. In our previous progress report we drew attention to the difficulty of developing robust implementations of barrier methods for LP. We have continued to refine both the primal algorithm and the dual algorithm. We still do not claim that the barrier algorithms are as robust as the simplex method; however, the dual algorithm has solved all the problems in our extensive test set. We have also gained some experience with using the algorithms to solve aircrew scheduling problems.
Matching by linear programming and successive convexification.
Jiang, Hao; Drew, Mark S; Li, Ze-Nian
2007-06-01
We present a novel convex programming scheme to solve matching problems, focusing on the challenging problem of matching in a large search range and with cluttered background. Matching is formulated as metric labeling with L1 regularization terms, for which we propose a novel linear programming relaxation method and an efficient successive convexification implementation. The unique feature of the proposed relaxation scheme is that a much smaller set of basis labels is used to represent the original label space. This greatly reduces the size of the search space. A successive convexification scheme solves the labeling problem in a coarse-to-fine manner. Importantly, the original cost function is reconvexified at each stage, in the new focus region only, and the focus region is updated so as to refine the search result. This makes the method well-suited for large label set matching. Experiments demonstrate successful applications of the proposed matching scheme in object detection, motion estimation, and tracking.
Quantum Algorithm for Linear Programming Problems
NASA Astrophysics Data System (ADS)
Joag, Pramod; Mehendale, Dhananjay
The quantum algorithm (PRL 103, 150502, 2009) solves a system of linear equations with exponential speedup over existing classical algorithms. We show that the above algorithm can be readily adopted in iterative algorithms for solving linear programming (LP) problems. The first iterative algorithm that we suggest for LP follows from duality theory. It consists of finding a nonnegative solution of the equations for the duality condition, for the constraints imposed by the given primal problem, and for the constraints imposed by its corresponding dual problem. This is the problem of nonnegative least squares, or simply the NNLS problem. We use a well-known method for solving the NNLS problem due to Lawson and Hanson. This algorithm essentially consists of solving a new system of linear equations in each iterative step. The other iterative algorithms that can be used are those based on interior-point methods. The same technique can be adopted for solving network flow problems, as these can be readily formulated as LP problems. The suggested quantum algorithm can solve LP problems and network flow problems of very large size, involving millions of variables.
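The classical Lawson-Hanson NNLS routine the authors build on is available in SciPy; a toy consistent system illustrates the building block (the data are invented for illustration).

```python
# scipy.optimize.nnls implements the Lawson-Hanson active-set method:
# min ||A @ x - b||_2  subject to  x >= 0.
import numpy as np
from scipy.optimize import nnls

A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [0.0, 1.0]])
b = np.array([1.0, 2.0, 1.0])   # consistent: A @ [1, 1] == b exactly

x, residual = nnls(A, b)
print(x, residual)   # nonnegative solution and the residual norm
```

In the proposed scheme, the system solved at each such step would instead be handed to the quantum linear-system solver.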
Consensus contact prediction by linear programming.
Gao, Xin; Bu, Dongbo; Li, Shuai Cheng; Li, Ming; Xu, Jinbo
2007-01-01
Protein inter-residue contacts are of great use for protein structure determination or prediction. Recent CASP events have shown that a few accurately predicted contacts can help improve both computational efficiency and prediction accuracy of ab initio folding methods. This paper develops an integer linear programming (ILP) method for consensus-based contact prediction. In contrast to the simple "majority voting" method assuming that all the individual servers are equal and independent, our method evaluates their correlations using the maximum likelihood method and constructs some latent independent servers using the principal component analysis technique. Then, we use an integer linear programming model to assign weights to these latent servers in order to maximize the deviation between the correct contacts and incorrect ones; our consensus prediction server is the weighted combination of these latent servers. In addition to the consensus information, our method also uses server-independent correlated mutation (CM) as one of the prediction features. Experimental results demonstrate that our contact prediction server performs better than the "majority voting" method. The accuracy of our method for the top L/5 contacts on CASP7 targets is 73.41%, which is much higher than previously reported studies. On the 16 free modeling (FM) targets, our method achieves an accuracy of 37.21%.
Optimized groundwater containment using linear programming
Quinn, J.J.; Johnson, R.L.; Durham, L.A.
1998-07-01
Groundwater extraction systems are typically installed to contain contaminant plumes. These systems are expensive to install and maintain. A traditional approach to designing such a wellfield is to use a series of trial-and-error simulations to test the effects of various well locations and pump rates. However, optimal locations and pump rates of extraction wells are difficult to determine when the objectives of the potential pumping scheme and the site hydrogeology are considered. This paper describes a case study of an application of linear programming theory to determine optimal well placement and pump rates. Calculations were conducted by using ModMan to link a calibrated MODFLOW flow model with LINDO, a linear programming package. Past activities at the site under study included disposal of contaminants in pits. Several groundwater plumes have been identified, and others may be present. The area of concern is bordered on three sides by a wetland, which receives a portion of its input water budget as groundwater discharge from the disposal area. The objective function of the optimization was to minimize the rate of groundwater extraction while preventing discharge to the marsh across a user-specified boundary. In this manner, the optimization routine selects well locations and pump rates to produce a groundwater divide along this boundary.
An Algorithm for Linearly Constrained Nonlinear Programming Problems.
1980-01-01
AN ALGORITHM FOR LINEARLY CONSTRAINED NONLINEAR PROGRAMMING PROBLEMS, Mokhtar S. Bazaraa and Jamie J. Goode. In this paper an algorithm for solving a linearly... distance programming, as in the works of Bazaraa and Goode [2], and Wolfe [16], can be used for solving this problem. Special methods that take advantage of... Pacific Journal of Mathematics, Volume 16, pp. 1-3, 1966. [2] M. S. Bazaraa and J. J. Goode, "An Algorithm for Finding the Shortest Element of a
Robust Control Design via Linear Programming
NASA Technical Reports Server (NTRS)
Keel, L. H.; Bhattacharyya, S. P.
1998-01-01
This paper deals with the problem of synthesizing or designing a feedback controller of fixed dynamic order. The closed loop specifications considered here are given in terms of a target performance vector representing a desired set of closed loop transfer functions connecting various signals. In general these point targets are unattainable with a fixed order controller. By enlarging the target from a fixed point set to an interval set the solvability conditions with a fixed order controller are relaxed and a solution is more easily enabled. Results from the parametric robust control literature can be used to design the interval target family so that the performance deterioration is acceptable, even when plant uncertainty is present. It is shown that it is possible to devise a computationally simple linear programming approach that attempts to meet the desired closed loop specifications.
Ensemble segmentation using efficient integer linear programming.
Alush, Amir; Goldberger, Jacob
2012-10-01
We present a method for combining several segmentations of an image into a single one that in some sense is the average segmentation in order to achieve a more reliable and accurate segmentation result. The goal is to find a point in the "space of segmentations" which is close to all the individual segmentations. We present an algorithm for segmentation averaging. The image is first oversegmented into superpixels. Next, each segmentation is projected onto the superpixel map. An instance of the EM algorithm combined with integer linear programming is applied on the set of binary merging decisions of neighboring superpixels to obtain the average segmentation. Apart from segmentation averaging, the algorithm also reports the reliability of each segmentation. The performance of the proposed algorithm is demonstrated on manually annotated images from the Berkeley segmentation data set and on the results of automatic segmentation algorithms.
Linear programming for learning in neural networks
NASA Astrophysics Data System (ADS)
Raghavan, Raghu
1991-08-01
The authors have previously proposed a network of probabilistic cellular automata (PCAs) as part of an image recognition system designed to integrate model-based and data-driven approaches in a connectionist framework. The PCA arises from some natural requirements on the system which include incorporation of prior knowledge such as in inference rules, locality of inferences, and full parallelism. This network has been applied to recognize objects in both synthetic and in real data. This approach achieves recognition through the short-, rather than the long-time behavior of the dynamics of the PCA. In this paper, some methods are developed for learning the connection strengths by solving linear inequalities: the figures of merit are tendencies or directions of movement of the dynamical system. These 'dynamical' figures of merit result in inequality constraints on the connection strengths which are solved by linear (LP) or quadratic programs (QP). An algorithm is described for processing a large number of samples to determine weights for the PCA. The work may be regarded as either pointing out another application for constrained optimization, or as pointing out the need to extend the perceptron and similar methods for learning. The extension is needed because the neural network operates on a different principle from that for which the perceptron method was devised.
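Learning weights from inequality constraints, as described above, reduces to an LP feasibility problem (zero objective). The margin constraints below are toy stand-ins for the "direction of movement" inequalities on connection strengths, not the paper's actual data.

```python
# LP feasibility sketch: find w with G @ w >= h and bounded entries.
import numpy as np
from scipy.optimize import linprog

G = np.array([[ 1.0,  1.0],
              [ 1.0, -1.0],
              [-0.5,  2.0]])
h = np.array([1.0, 0.2, 0.5])

# Zero objective: any feasible point will do.  G @ w >= h  <=>  -G @ w <= -h.
res = linprog(np.zeros(2), A_ub=-G, b_ub=-h,
              bounds=[(-10, 10)] * 2, method="highs")
print(res.status, res.x)   # status 0 means a feasible w was found
```

A quadratic objective on w would turn the same constraint set into the QP variant mentioned in the abstract.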
User's manual for LINEAR, a FORTRAN program to derive linear aircraft models
NASA Technical Reports Server (NTRS)
Duke, Eugene L.; Patterson, Brian P.; Antoniewicz, Robert F.
1987-01-01
This report documents a FORTRAN program that provides a powerful and flexible tool for the linearization of aircraft models. The program LINEAR numerically determines a linear system model using nonlinear equations of motion and a user-supplied nonlinear aerodynamic model. The system model determined by LINEAR consists of matrices for both state and observation equations. The program has been designed to allow easy selection and definition of the state, control, and observation variables to be used in a particular model.
Optimized remedial groundwater extraction using linear programming
Quinn, J.J.
1995-12-31
Groundwater extraction systems are typically installed to remediate contaminant plumes or prevent further spread of contamination. These systems are expensive to install and maintain. A traditional approach to designing such a wellfield uses a series of trial-and-error simulations to test the effects of various well locations and pump rates. However, the optimal locations and pump rates of extraction wells are difficult to determine when objectives related to the site hydrogeology and potential pumping scheme are considered. This paper describes a case study of an application of linear programming theory to determine optimal well placement and pump rates. The objectives of the pumping scheme were to contain contaminant migration and reduce contaminant concentrations while minimizing the total amount of water pumped and treated. Past site activities at the area under study included disposal of contaminants in pits. Several groundwater plumes have been identified, and others may be present. The area of concern is bordered on three sides by a wetland, which receives a portion of its input budget as groundwater discharge from the pits. Optimization of the containment pumping scheme was intended to meet three goals: (1) prevent discharge of contaminated groundwater to the wetland, (2) minimize the total water pumped and treated (cost benefit), and (3) avoid dewatering of the wetland (cost and ecological benefits). Possible well locations were placed at known source areas. To constrain the problem, the optimization program was instructed to prevent any flow toward the wetland along a user-specified border. In this manner, the optimization routine selects well locations and pump rates so that a groundwater divide is produced along this boundary.
Linear programming models for cost reimbursement.
Diehr, G; Tamura, H
1989-01-01
Tamura, Lauer, and Sanborn (1985) reported a multiple regression approach to the problem of determining a cost reimbursement (rate-setting) formula for facilities providing long-term care (nursing homes). In this article we propose an alternative approach to this problem, using an absolute-error criterion instead of the least-squares criterion used in regression, with a variety of side constraints incorporated in the derivation of the formula. The mathematical tool for implementation of this approach is linear programming (LP). The article begins with a discussion of the desirable characteristics of a rate-setting formula. The development of a formula with these properties can be easily achieved, in terms of modeling as well as computation, using LP. Specifically, LP provides an efficient computational algorithm to minimize absolute error deviation, thus protecting rates from the effects of unusual observations in the data base. LP also offers modeling flexibility to impose a variety of policy controls. These features are not readily available if a least-squares criterion is used. Examples based on actual data are used to illustrate alternative LP models for rate setting. PMID:2759871
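The absolute-error criterion at the heart of this approach is the standard least-absolute-deviation (L1) regression LP: each residual is split into nonnegative parts whose sum is minimized. The data below are illustrative (with one gross outlier), not the article's reimbursement data, and no policy side constraints are included.

```python
# L1 (least-absolute-deviation) line fit as a linear program.
import numpy as np
from scipy.optimize import linprog

t = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.0, 3.0, 5.0, 7.0, 100.0])   # first four lie on y = 1 + 2t

n = len(t)
# Variables: [a, b, u_1..u_n, v_1..v_n], with a + b*t_i + u_i - v_i = y_i,
# u, v >= 0, so u_i + v_i = |residual_i| at the optimum; minimize their sum.
c = np.concatenate([[0.0, 0.0], np.ones(2 * n)])
A_eq = np.hstack([np.ones((n, 1)), t[:, None], np.eye(n), -np.eye(n)])
bounds = [(None, None), (None, None)] + [(0, None)] * (2 * n)

res = linprog(c, A_eq=A_eq, b_eq=y, bounds=bounds, method="highs")
intercept, slope = res.x[0], res.x[1]
print(intercept, slope)   # the L1 fit ignores the outlier: about 1.0 and 2.0
```

Least squares would be pulled hard toward the outlier here, which is exactly the robustness property the article exploits for rate setting; the policy controls it describes would enter as additional linear side constraints on a and b.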
User's manual for interactive LINEAR: A FORTRAN program to derive linear aircraft models
NASA Technical Reports Server (NTRS)
Antoniewicz, Robert F.; Duke, Eugene L.; Patterson, Brian P.
1988-01-01
An interactive FORTRAN program that provides the user with a powerful and flexible tool for the linearization of aircraft aerodynamic models is documented in this report. The program LINEAR numerically determines a linear system model using nonlinear equations of motion and a user-supplied linear or nonlinear aerodynamic model. The nonlinear equations of motion used are six-degree-of-freedom equations with stationary atmosphere and flat, nonrotating earth assumptions. The system model determined by LINEAR consists of matrices for both the state and observation equations. The program has been designed to allow easy selection and definition of the state, control, and observation variables to be used in a particular model.
An Intuitive Approach in Teaching Linear Programming in High School.
ERIC Educational Resources Information Center
Ulep, Soledad A.
1990-01-01
Discusses solving inequality problems involving linear programming. Describes the usual and alternative approaches. Presents an intuitive approach for finding a feasible solution by maximizing the objective function. (YP)
Comparison of open-source linear programming solvers.
Gearhart, Jared Lee; Adair, Kristin Lynn; Durfee, Justin David.; Jones, Katherine A.; Martin, Nathaniel; Detry, Richard Joseph
2013-10-01
When developing linear programming models, issues such as budget limitations, customer requirements, or licensing may preclude the use of commercial linear programming solvers. In such cases, one option is to use an open-source linear programming solver. A survey of linear programming tools was conducted to identify potential open-source solvers. From this survey, four open-source solvers were tested using a collection of linear programming test problems and the results were compared to IBM ILOG CPLEX Optimizer (CPLEX) [1], an industry standard. The solvers considered were: COIN-OR Linear Programming (CLP) [2], [3], GNU Linear Programming Kit (GLPK) [4], lp_solve [5] and Modular In-core Nonlinear Optimization System (MINOS) [6]. As no open-source solver outperforms CPLEX, this study demonstrates the power of commercial linear programming software. CLP was found to be the top performing open-source solver considered in terms of capability and speed. GLPK also performed well but cannot match the speed of CLP or CPLEX. lp_solve and MINOS were considerably slower and encountered issues when solving several test problems.
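For readers who want to reproduce this kind of comparison on a small scale, a linear program can be posed to an open-source solver in a few lines. The sketch below is illustrative only: it uses SciPy's `linprog`, whose default backend is the open-source HiGHS solver (a newer solver than those surveyed, which are called through their own APIs).

```python
# Minimal LP: minimize -x - 2y  subject to  x + y <= 4, x <= 2, x, y >= 0.
# SciPy's linprog dispatches to the open-source HiGHS solver by default.
from scipy.optimize import linprog

res = linprog(c=[-1, -2],
              A_ub=[[1, 1], [1, 0]],
              b_ub=[4, 2],
              bounds=[(0, None), (0, None)])
print(res.x, res.fun)  # optimum at x = 0, y = 4, objective value -8
```

The same model, written once, can be fed to several solvers in turn to compare wall-clock times, which is essentially the experimental design of the survey above.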
Symmetry Groups for Linear Programming Relaxations of Orthogonal Array Problems
2015-03-26
Master's thesis by David M. Arquette, Second Lieutenant, USAF, March 2015. Report AFIT-ENC-MS-15-M-003. This work of the U.S. Government is not subject to copyright protection in the United States. Approved for public release; distribution unlimited.
Linear Fresnel lens photovoltaic concentrator program
Kull, J.; Maraschin, R.; Rafinejad, D.; Spencer, R.; Sutton, G.
1983-08-01
This report describes Acurex Corporation's design of a linear Fresnel lens photovoltaic concentrator panel. The panel consists of four concentrator modules in an integrated structure. Each module is 10 ft long and has a 39.85 in. aperture. The solar cell's active width is 0.90 in., and the cell-lens edge spacing is 23.39 in. There are 58 cells per module. A prototype panel was built and tested. Test results showed a peak electrical efficiency of 10.5% at the operating conditions of 800 W/m² insolation and 90 °F coolant temperature. The prototype exhibits the manufacturing and assembly concepts developed.
Stochastic Optimal Control and Linear Programming Approach
Buckdahn, R.; Goreac, D.; Quincampoix, M.
2011-04-15
We study a classical stochastic optimal control problem with constraints and discounted payoff in an infinite horizon setting. The main result of the present paper lies in the fact that this optimal control problem is shown to have the same value as a linear optimization problem stated on some appropriate space of probability measures. This enables one to derive a dual formulation that appears to be strongly connected to the notion of (viscosity sub)solution to a suitable Hamilton-Jacobi-Bellman equation. We also discuss the relation to long-time average problems.
Linear System of Equations, Matrix Inversion, and Linear Programming Using MS Excel
ERIC Educational Resources Information Center
El-Gebeily, M.; Yushau, B.
2008-01-01
In this note, we demonstrate with illustrations two different ways that MS Excel can be used to solve linear systems of equations, linear programming problems, and matrix inversion problems. The advantage of using MS Excel is its availability and transparency (the user is responsible for most of the details of how a problem is solved). Further, we…
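Two of the three tasks the note carries out in Excel can be sketched in a few lines outside the spreadsheet; the fragment below uses NumPy as a stand-in, where `numpy.linalg.inv` and the `@` operator correspond to Excel's MINVERSE and MMULT worksheet functions.

```python
# Solve A x = b and invert A, mirroring Excel's MINVERSE/MMULT workflow.
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([3.0, 5.0])

x = np.linalg.solve(A, b)   # direct solve (preferred numerically)
A_inv = np.linalg.inv(A)    # explicit inverse (Excel: MINVERSE)
print(x)                    # [0.8 1.4]
print(A_inv @ b)            # same solution recovered via the inverse
```

As in the spreadsheet setting, the direct solve and the inverse-then-multiply route give the same answer; the direct solve is the numerically safer habit.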
EZLP: An Interactive Computer Program for Solving Linear Programming Problems. Final Report.
ERIC Educational Resources Information Center
Jarvis, John J.; And Others
Designed for student use in solving linear programming problems, the interactive computer program described (EZLP) permits the student to input the linear programming model in exactly the same manner in which it would be written on paper. This report includes a brief review of the development of EZLP; narrative descriptions of program features,…
Linear Programming and Its Application to Pattern Recognition Problems
NASA Technical Reports Server (NTRS)
Omalley, M. J.
1973-01-01
Linear programming and linear-programming-like techniques as applied to pattern recognition problems are discussed. Three relatively recent research articles on such applications are summarized. The main results of each paper are described, indicating the theoretical tools needed to obtain them. A synopsis of the author's comments is presented with regard to the applicability or non-applicability of his methods to particular problems, including computational results wherever given.
The RANDOM computer program: A linear congruential random number generator
NASA Technical Reports Server (NTRS)
Miles, R. F., Jr.
1986-01-01
The RANDOM Computer Program is a FORTRAN program for generating random number sequences and testing linear congruential random number generators (LCGs). The linear congruential form of random number generator is discussed, and the selection of parameters of an LCG for a microcomputer is described. This document describes the following: (1) the RANDOM Computer Program; (2) RANDOM.MOD, the computer code needed to implement an LCG in a FORTRAN program; and (3) the RANCYCLE and ARITH Computer Programs, which provide computational assistance in the selection of parameters for an LCG. The RANDOM, RANCYCLE, and ARITH Computer Programs are written in Microsoft FORTRAN for the IBM PC microcomputer and its compatibles. With only minor modifications, the RANDOM Computer Program and its LCG can be run on most microcomputers or mainframe computers.
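The linear congruential form the report analyzes, x(n+1) = (a·x(n) + c) mod m, is short enough to sketch directly. The parameters below are the widely published Park-Miller "minimal standard" (a = 16807, m = 2³¹ − 1, c = 0), not necessarily those the report's RANCYCLE and ARITH programs would select.

```python
# A minimal linear congruential generator (LCG): x_{n+1} = (a*x_n + c) mod m.
# Parameters are the Park-Miller "minimal standard", given here only as a
# well-known example of an LCG parameter choice.
def lcg(seed, a=16807, c=0, m=2**31 - 1):
    """Yield an endless stream of pseudorandom integers in [0, m)."""
    x = seed
    while True:
        x = (a * x + c) % m
        yield x

gen = lcg(seed=1)
first_three = [next(gen) for _ in range(3)]
print(first_three)  # [16807, 282475249, 1622650073]
```

Testing an LCG, as the report does, then amounts to checking properties of such a stream: full period, spectral behavior, and absence of short cycles for the chosen (a, c, m).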
Linear Programming for Vocational Education Planning. Interim Report.
ERIC Educational Resources Information Center
Young, Robert C.; And Others
The purpose of the paper is to define for potential users of vocational education management information systems a quantitative analysis technique and its utilization to facilitate more effective planning of vocational education programs. Defining linear programming (LP) as a management technique used to solve complex resource allocation problems…
Planning Student Flow with Linear Programming: A Tunisian Case Study.
ERIC Educational Resources Information Center
Bezeau, Lawrence
A student flow model in linear programming format, designed to plan the movement of students into secondary and university programs in Tunisia, is described. The purpose of the plan is to determine a sufficient number of graduating students that would flow back into the system as teachers or move into the labor market to meet fixed manpower…
Hierarchical Multiobjective Linear Programming Problems with Fuzzy Domination Structures
NASA Astrophysics Data System (ADS)
Yano, Hitoshi
2010-10-01
In this paper, we focus on hierarchical multiobjective linear programming problems with fuzzy domination structures, where multiple decision makers in a hierarchical organization have their own multiple objective linear functions together with common linear constraints. After introducing decision powers and a solution concept based on the α-level set for the fuzzy convex cone Λ, which reflects a fuzzy domination structure, we propose a fuzzy approach to obtain a satisfactory solution that reflects not only the hierarchical relationships between the multiple decision makers but also their own preferences for their membership functions. In the proposed method, instead of the Pareto optimality concept, a generalized Λ̃α-extreme point concept is introduced. To obtain a satisfactory solution from among the generalized Λ̃α-extreme point set, an interactive algorithm based on linear programming is proposed, and the interactive process is demonstrated by means of an illustrative numerical example.
An Instructional Note on Linear Programming--A Pedagogically Sound Approach.
ERIC Educational Resources Information Center
Mitchell, Richard
1998-01-01
Discusses the place of linear programming in college curricula and the advantages of using linear-programming software. Lists important characteristics of computer software used in linear programming for more effective teaching and learning. (ASK)
A linear-programming approach to temporal reasoning
Jonsson, P.; Baeckstroem, C.
1996-12-31
We present a new formalism, Horn Disjunctive Linear Relations (Horn DLRs), for reasoning about temporal constraints. We prove that deciding satisfiability of sets of Horn DLRs is polynomial by exhibiting an algorithm based upon linear programming. Furthermore, we prove that most other approaches to tractable temporal constraint reasoning can be encoded as Horn DLRs, including the ORD-Horn algebra and most methods for purely quantitative reasoning.
Convergence of linear programming using a Hopfield net
Lu, Shin-yee; Berryman, J.G.
1990-11-01
Hopfield nets are interconnected networks of simple analog processors. Such networks have been applied to a variety of optimization problems, including linear programming problems. We revised the energy function used in a Hopfield net so that the network can be implemented on a digital computer to solve linear programming problems. We also proved that the revised discrete Hopfield net converges, and gave the conditions of convergence. The approach is tested on two large and sparse linear programming problems. In both cases we could not reach the optimal solutions, but solutions with 1% error can be attained in less than 3 minutes of CPU time on a Sun SPARCstation. The optimal solutions can be obtained by the simplex method, but require five times more CPU time. 10 refs., 1 tab.
From Parity and Payoff Games to Linear Programming
NASA Astrophysics Data System (ADS)
Schewe, Sven
This paper establishes a surprising reduction from parity and mean payoff games to linear programming problems. While such a connection is trivial for solitary games, it is surprising for two-player games, because the players have opposing objectives, whose natural translations into an optimisation problem are minimisation and maximisation, respectively. Our reduction to linear programming circumvents the need for concurrent minimisation and maximisation by replacing one of them, the maximisation, by approximation. The resulting optimisation problem can be translated to a linear programme by a simple space transformation, which is inexpensive in the unit cost model but results in an exponential growth of the coefficients. The discovered connection opens up unexpected applications of linear programming in the unit cost model, such as μ-calculus model checking, and thus turns the intriguing academic problem of finding a polynomial time algorithm for linear programming in this model of computation (and subsequently a strongly polynomial algorithm) into a problem of paramount practical importance: all advancements in this area can immediately be applied to accelerate solving parity and payoff games, or to improve their complexity analysis.
Synthesizing Dynamic Programming Algorithms from Linear Temporal Logic Formulae
NASA Technical Reports Server (NTRS)
Rosu, Grigore; Havelund, Klaus
2001-01-01
The problem of testing a linear temporal logic (LTL) formula on a finite execution trace of events, generated by an executing program, occurs naturally in runtime analysis of software. We present an algorithm which takes an LTL formula and generates an efficient dynamic programming algorithm. The generated algorithm tests whether the LTL formula is satisfied by a finite trace of events given as input. The generated algorithm runs in linear time, its constant depending on the size of the LTL formula. The memory needed is constant, also depending on the size of the formula.
The Separation of Uranium Isotopes by Gaseous Diffusion: A Linear Programming Model
Descriptors: uranium isotope separation; gaseous diffusion separation; linear programming; mathematical models; gas flow; nuclear reactors; operations research
Linear combination reading program for capture gamma rays
Tanner, Allan B.
1971-01-01
This program computes a weighting function, Qj, which gives a scalar output value of unity when applied to the spectrum of a desired element and a minimum value (considering statistics) when applied to spectra of materials not containing the desired element. Intermediate values are obtained for materials containing the desired element, in proportion to the amount of the element they contain. The program is written in the BASIC language in a format specific to the Hewlett-Packard 2000A Time-Sharing System, and is an adaptation of an earlier program for linear combination reading for X-ray fluorescence analysis (Tanner and Brinkerhoff, 1971). Following the program is a sample run from a study of the application of the linear combination technique to capture-gamma-ray analysis for calcium (report in preparation).
A Partitioning and Bounded Variable Algorithm for Linear Programming
ERIC Educational Resources Information Center
Sheskin, Theodore J.
2006-01-01
An interesting new partitioning and bounded variable algorithm (PBVA) is proposed for solving linear programming problems. The PBVA is a variant of the simplex algorithm which uses a modified form of the simplex method followed by the dual simplex method for bounded variables. In contrast to the two-phase method and the big M method, the PBVA does…
Interior-Point Methods for Linear Programming: A Review
ERIC Educational Resources Information Center
Singh, J. N.; Singh, D.
2002-01-01
The paper reviews some recent advances in interior-point methods for linear programming and indicates directions in which future progress can be made. Most of the interior-point methods belong to any of three categories: affine-scaling methods, potential reduction methods and central path methods. These methods are discussed together with…
On the Feasibility of a Generalized Linear Program
1989-03-01
generalized linear program by applying the same algorithm to a "phase-one" problem, without requiring that the initial basic feasible solution to the latter be non-degenerate.
Train repathing in emergencies based on fuzzy linear programming.
Meng, Xuelei; Cui, Bingmou
2014-01-01
Train pathing is a typical problem of assigning train trips to sets of rail segments, such as rail tracks and links. This paper focuses on the train pathing problem, determining the paths of train trips in emergencies. We analyze the influencing factors of train pathing, such as transfer cost, running cost, and social adverse effect cost. With overall consideration of the segment and station capacity constraints, we build a fuzzy linear programming model to solve the train pathing problem. We design the fuzzy membership function to describe the fuzzy coefficients. Furthermore, contraction-expansion factors are introduced to contract or expand the value ranges of the fuzzy coefficients, coping with the uncertainty of those value ranges. We propose a method based on triangular fuzzy coefficients and transform the train pathing model (a fuzzy linear programming model) into a determinate linear model to solve the fuzzy linear programming problem. An emergency scenario is constructed based on real data from the Beijing-Shanghai Railway. The model was solved, and the computational results demonstrate the validity of the model and the efficiency of the algorithm.
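One common way to turn a triangular fuzzy coefficient (l, m, u) into a crisp value is the centroid (l + m + u)/3, after which the fuzzy linear program becomes an ordinary linear program in the crisp coefficients. The sketch below illustrates only this defuzzification step, with hypothetical path costs; the paper's own transformation to a determinate linear model may differ in detail.

```python
# Centroid defuzzification of triangular fuzzy numbers (l, m, u).
def centroid(tfn):
    """Return the centroid of a triangular fuzzy number (l, m, u)."""
    l, m, u = tfn
    return (l + m + u) / 3.0

# Hypothetical fuzzy running costs for two candidate train paths.
fuzzy_costs = {"path_A": (8.0, 10.0, 15.0), "path_B": (9.0, 11.0, 16.0)}
crisp = {p: centroid(c) for p, c in fuzzy_costs.items()}
print(crisp)  # {'path_A': 11.0, 'path_B': 12.0}
```

The crisp costs can then serve as objective coefficients in any standard LP solver, which is the general shape of the fuzzy-to-determinate reduction the abstract describes.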
Train Repathing in Emergencies Based on Fuzzy Linear Programming
Cui, Bingmou
2014-01-01
PMID:25121128
LFSPMC: Linear feature selection program using the probability of misclassification
NASA Technical Reports Server (NTRS)
Guseman, L. F., Jr.; Marion, B. P.
1975-01-01
The computational procedure and associated computer program for a linear feature selection technique are presented. The technique assumes that: a finite number, m, of classes exists; each class is described by an n-dimensional multivariate normal density function of its measurement vectors; the mean vector and covariance matrix for each density function are known (or can be estimated); and the a priori probability for each class is known. The technique produces a single linear combination of the original measurements which minimizes the one-dimensional probability of misclassification defined by the transformed densities.
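For the two-class case, a single linear combination of this kind can be sketched with Fisher's linear discriminant; under the equal-covariance multivariate normal assumption stated above, this direction also minimizes the one-dimensional probability of misclassification. The data below are synthetic, not from the report.

```python
# Fisher's linear discriminant for two Gaussian classes: the projection
# direction is w = Sw^{-1} (mu2 - mu1), with Sw the within-class scatter.
import numpy as np

rng = np.random.default_rng(0)
X1 = rng.normal([0.0, 0.0], 1.0, size=(200, 2))   # class 1 samples
X2 = rng.normal([3.0, 3.0], 1.0, size=(200, 2))   # class 2 samples

Sw = np.cov(X1.T) + np.cov(X2.T)                  # within-class scatter
w = np.linalg.solve(Sw, X2.mean(axis=0) - X1.mean(axis=0))

# Projecting onto w reduces the 2-D problem to one dimension, in which
# the two class means are well separated.
print((X1 @ w).mean() < (X2 @ w).mean())  # True
```

The report's LFSPMC procedure differs in that it optimizes the 1-D misclassification probability directly for m classes; the Fisher direction is only the familiar special case.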
ERIC Educational Resources Information Center
Matzke, Orville R.
The purpose of this study was to formulate a linear programming model to simulate a foundation type support program and to apply this model to a state support program for the public elementary and secondary school districts in the State of Iowa. The model was successful in producing optimal solutions to five objective functions proposed for…
Linear programming model to develop geodiversity map using utility theory
NASA Astrophysics Data System (ADS)
Sepehr, Adel
2015-04-01
In this article, the classification and mapping of geodiversity based on a quantitative methodology was accomplished using linear programming, the central idea being that geosites and geomorphosites, as main indicators of geodiversity, can be evaluated by utility theory. A linear programming method was applied to geodiversity mapping over Khorasan Razavi province, located in northeastern Iran. The main criteria for distinguishing geodiversity potential in the studied area were rock type (lithology), fault position (tectonic processes), karst areas (dynamic processes), the frequency of Aeolian landforms, and surface river forms. These parameters were investigated using thematic maps, including geology, topography, and geomorphology maps at scales of 1:100,000, 1:50,000, and 1:250,000; imagery data from SPOT and ETM+ (Landsat 7); and direct field operations. The geological thematic layer was simplified from the original map using a practical lithologic criterion based on a primary genetic classification of rocks into metamorphic, igneous, and sedimentary. The geomorphology map was produced using a 30 m DEM extracted from ASTER data, geology maps, and Google Earth images. The geology map shows tectonic status, and the geomorphology map indicates dynamic processes and landforms (karst, Aeolian, and river). Then, following utility theory, we proposed a linear program to classify the degree of geodiversity in the studied area based on geological and morphological parameters. The algorithm consisted of a linear function maximizing geodiversity subject to certain constraints in the form of linear equations. The results indicate three classes of geodiversity potential: low, medium, and high. Geodiversity potential is highest in the karstic areas and the Aeolian landscape. The utility theory used in the research also reduced the uncertainty of the evaluations.
Efficient numerical methods for entropy-linear programming problems
NASA Astrophysics Data System (ADS)
Gasnikov, A. V.; Gasnikova, E. B.; Nesterov, Yu. E.; Chernov, A. V.
2016-04-01
Entropy-linear programming (ELP) problems arise in various applications. They are usually written as the maximization of entropy (minimization of minus entropy) under affine constraints. In this work, new numerical methods for solving ELP problems are proposed. Sharp estimates for the convergence rates of the proposed methods are established. The approach described applies to a broader class of minimization problems for strongly convex functionals with affine constraints.
Finding Stable Orientations of Assemblies with Linear Programming
1993-06-01
Finding Stable Orientations of Assemblies with Linear Programming. David Baraff, Raju Mattikalli, Bruno Repetto, and Pradeep Khosla. The Robotics Institute, Carnegie Mellon University, report CMU-RI-TR-93… (AD-A266 990). Related reference: Mattikalli, Repetto, and Baraff, "Stability of assemblies," in International Conference on Intelligent Robots and Systems, IEEE/RSJ, July 1993 (to appear).
Multiobjective fuzzy stochastic linear programming problems with inexact probability distribution
Hamadameen, Abdulqader Othman; Zainuddin, Zaitul Marlizawati
2014-06-19
This study deals with multiobjective fuzzy stochastic linear programming problems with an uncertain probability distribution, defined as fuzzy assertions by ambiguous experts. The problem formulation is presented, and two solution strategies are given: a fuzzy transformation via a ranking function, and a stochastic transformation in which the α-cut technique and linguistic hedges are applied to the uncertain probability distribution. A development of Sen's method is employed to find a compromise solution, supported by an illustrative numerical example.
Multiobjective fuzzy stochastic linear programming problems with inexact probability distribution
NASA Astrophysics Data System (ADS)
Hamadameen, Abdulqader Othman; Zainuddin, Zaitul Marlizawati
2014-06-01
Solving linear integer programming problems by a novel neural model.
Cavalieri, S
1999-02-01
The paper deals with integer linear programming problems. As is well known, these are extremely complex problems, even when the number of integer variables is quite low. The literature provides examples of various methods to solve such problems, some of which are of a heuristic nature. This paper proposes an alternative strategy based on the Hopfield neural network. The advantage of the strategy essentially lies in the fact that hardware implementation of the neural model allows the time required to obtain a solution to be independent of the size of the problem to be solved. The paper presents a particular class of integer linear programming problems, including well-known problems such as the Travelling Salesman Problem and the Set Covering Problem. After a brief description of this class of problems, it is demonstrated that the original Hopfield model is incapable of supplying valid solutions. This is attributed to the presence of constant bias currents in the dynamics of the neural model. A demonstration of this is given, and then a novel neural model is presented which continues to be based on the same architecture as the Hopfield model but introduces modifications thanks to which the integer linear programming problems presented can be solved. Some numerical examples and concluding remarks highlight the solving capacity of the novel neural model.
MAGDM linear-programming models with distinct uncertain preference structures.
Xu, Zeshui S; Chen, Jian
2008-10-01
Group decision making with preference information on alternatives is an interesting and important research topic which has been receiving more and more attention in recent years. The purpose of this paper is to investigate multiple-attribute group decision-making (MAGDM) problems with distinct uncertain preference structures. We develop some linear-programming models for dealing with the MAGDM problems, where the information about attribute weights is incomplete, and the decision makers have their preferences on alternatives. The provided preference information can be represented in the following three distinct uncertain preference structures: 1) interval utility values; 2) interval fuzzy preference relations; and 3) interval multiplicative preference relations. We first establish some linear-programming models based on decision matrix and each of the distinct uncertain preference structures and, then, develop some linear-programming models to integrate all three structures of subjective uncertain preference information provided by the decision makers and the objective information depicted in the decision matrix. Furthermore, we propose a simple and straightforward approach in ranking and selecting the given alternatives. It is worth pointing out that the developed models can also be used to deal with the situations where the three distinct uncertain preference structures are reduced to the traditional ones, i.e., utility values, fuzzy preference relations, and multiplicative preference relations. Finally, we use a practical example to illustrate in detail the calculation process of the developed approach.
An algorithm for the solution of dynamic linear programs
NASA Technical Reports Server (NTRS)
Psiaki, Mark L.
1989-01-01
The algorithm's objective is to efficiently solve Dynamic Linear Programs (DLP) by taking advantage of their special staircase structure. This algorithm constitutes a stepping stone to an improved algorithm for solving Dynamic Quadratic Programs, which, in turn, would make the nonlinear programming method of Successive Quadratic Programs more practical for solving trajectory optimization problems. The ultimate goal is to bring trajectory optimization solution speeds into the realm of real-time control. The algorithm exploits the staircase nature of the large constraint matrix of the equality-constrained DLPs encountered when solving inequality-constrained DLPs by an active set approach. A numerically stable, staircase QL factorization of the staircase constraint matrix is carried out starting from its last rows and columns. The resulting recursion is like the time-varying Riccati equation from multi-stage LQR theory. The resulting factorization increases the efficiency of all of the typical LP solution operations over that of a dense matrix LP code. At the same time numerical stability is ensured. The algorithm also takes advantage of dynamic programming ideas about the cost-to-go by relaxing active pseudo constraints in a backwards sweeping process. This further decreases the cost per update of the LP rank-1 updating procedure, although it may result in more changes of the active set than if pseudo constraints were relaxed in a non-stagewise fashion. The usual stability of closed-loop Linear/Quadratic optimally-controlled systems, if it carries over to strictly linear cost functions, implies that the saving due to reduced factor update effort may outweigh the cost of an increased number of updates. An aerospace example is presented in which a ground-to-ground rocket's distance is maximized. This example demonstrates the applicability of this class of algorithms to aerospace guidance. It also sheds light on the efficacy of the proposed pseudo constraint relaxation.
A recurrent neural network for solving bilevel linear programming problem.
He, Xing; Li, Chuandong; Huang, Tingwen; Li, Chaojie; Huang, Junjian
2014-04-01
In this brief, based on the method of penalty functions, a recurrent neural network (NN) modeled by means of a differential inclusion is proposed for solving the bilevel linear programming problem (BLPP). Compared with the existing NNs for BLPP, the model has the least number of state variables and simple structure. Using nonsmooth analysis, the theory of differential inclusions, and Lyapunov-like method, the equilibrium point sequence of the proposed NNs can approximately converge to an optimal solution of BLPP under certain conditions. Finally, the numerical simulations of a supply chain distribution model have shown excellent performance of the proposed recurrent NNs.
A scalable parallel algorithm for multiple objective linear programs
NASA Technical Reports Server (NTRS)
Wiecek, Malgorzata M.; Zhang, Hong
1994-01-01
This paper presents an ADBASE-based parallel algorithm for solving multiple objective linear programs (MOLP's). Job balance, speedup, and scalability are of primary interest in evaluating the efficiency of the new algorithm. Implementation results on Intel iPSC/2 and Paragon multiprocessors show that the algorithm significantly speeds up the process of solving MOLP's, which is understood as generating all or some efficient extreme points and unbounded efficient edges. The algorithm gives especially good results for large and very large problems. Motivation and justification for solving such large MOLP's are also included.
Extracting Embedded Generalized Networks from Linear Programming Problems.
1984-09-01
Extracting Embedded Generalized Networks from Linear Programming Problems, by Gerald G. Brown, Richard D. McBride, and R. Kevin Wood. Naval Postgraduate School, Monterey, California 93943, and University of Southern California, Los Angeles.
A linear programming approach for optimal contrast-tone mapping.
Wu, Xiaolin
2011-05-01
This paper proposes a novel algorithmic approach of image enhancement via optimal contrast-tone mapping. In a fundamental departure from the current practice of histogram equalization for contrast enhancement, the proposed approach maximizes expected contrast gain subject to an upper limit on tone distortion and optionally to other constraints that suppress artifacts. The underlying contrast-tone optimization problem can be solved efficiently by linear programming. This new constrained optimization approach for image enhancement is general, and the user can add and fine tune the constraints to achieve desired visual effects. Experimental results demonstrate clearly superior performance of the new approach over histogram equalization and its variants.
Mining Knowledge from Multiple Criteria Linear Programming Models
NASA Astrophysics Data System (ADS)
Zhang, Peng; Zhu, Xingquan; Li, Aihua; Zhang, Lingling; Shi, Yong
As a promising data mining tool, Multiple Criteria Linear Programming (MCLP) has been widely used in business intelligence. However, a possible limitation of MCLP is that it generates unexplainable black-box models which can only tell us results without reasons. To overcome this shortcoming, in this paper we propose a Knowledge Mining strategy which mines black-box MCLP models to obtain explainable and understandable knowledge. Different from the traditional Data Mining strategy, which focuses on mining knowledge from data, this Knowledge Mining strategy provides a new vision of mining knowledge from black-box models, which can be taken as a special topic of "Intelligent Knowledge Management".
An Algorithm for Solving Interval Linear Programming Problems
1974-11-01
The problem is "regularized" à la Charnes-Cooper so that infeasibility is detected at the optimal solution if that is the case. If I(x*(v)) = 0 then x*(v) is an ... (Charnes and Cooper [3]) may be used to compute the new inverse. Theorem 2: The algorithm described above terminates in a finite number of steps. ... REFERENCES: [1] A. Ben-Israel and A. Charnes, "An Explicit Solution of a Special Class of Linear Programming Problems", Operations ...
Fuzzy Linear Programming and its Application in Home Textile Firm
NASA Astrophysics Data System (ADS)
Vasant, P.; Ganesan, T.; Elamvazuthi, I.
2011-06-01
In this paper, a new fuzzy linear programming (FLP) based methodology using a specific membership function, named the modified logistic membership function, is proposed. The modified logistic membership function is first formulated, and its flexibility in accommodating vagueness in parameters is established by an analytical approach. The developed FLP methodology provides confidence in applying it to a real-life industrial production planning problem. This approach to solving industrial production planning problems allows feedback among the decision maker, the implementer and the analyst.
LCAP2 (Linear Controls Analysis Program). Volume 3. Source Code Description.
1983-11-15
The computer program LCAP2 (Linear Controls Analysis Program) provides the analyst with the capability to numerically perform classical linear control analysis techniques such as transfer function manipulation, transfer function evaluation, frequency response, root locus, time response and sampled-data analysis.
LCAP2 (Linear Controls Analysis Program). Volume 2. Interactive LCAP2 User’s Guide.
1983-11-15
The computer program LCAP2 (Linear Controls Analysis Program) provides the analyst with the capability to numerically perform classical linear control analysis techniques such as transfer function manipulation, transfer function evaluation, frequency response, root locus, time response and sampled-data analysis.
LCAP2 (Linear Control Analysis Program). Volume 1. Batch LCAP2 User’s Guide.
1983-11-15
The computer program LCAP2 (Linear Controls Analysis Program) provides the analyst with the capability to numerically perform classical linear control analysis techniques such as transfer function manipulation, transfer function evaluation, frequency response, root locus, time response and sampled-data analysis.
APPLICATION OF LINEAR PROGRAMMING TO FACILITY MAINTENANCE PROBLEMS IN THE NAVY SHORE ESTABLISHMENT.
(*LINEAR PROGRAMMING), (*NAVAL SHORE FACILITIES, MAINTENANCE), (*MAINTENANCE, COSTS), MATHEMATICAL MODELS, MANAGEMENT PLANNING AND CONTROL, MANPOWER, FEASIBILITY STUDIES, OPTIMIZATION, MANAGEMENT ENGINEERING.
Longitudinal force distribution using quadratically constrained linear programming
NASA Astrophysics Data System (ADS)
Klomp, M.
2011-12-01
In this paper, a new method is presented for the optimisation of force distribution for combined traction/braking and cornering. In order to provide a general, simple and flexible problem formulation, the optimisation is addressed as a quadratically constrained linear programming (QCLP) problem. Apart from fast numerical solutions, different driveline configurations can be included in the QCLP problem in a very straightforward fashion. The optimisation of the distribution of the individual wheel forces using the quasi-steady-state assumption is known to be useful for the study of the influence of particular driveline configurations on the combined lateral and longitudinal grip envelope of a particular vehicle-driveline configuration. The addition of the QCLP problem formulation makes another powerful tool available to the vehicle dynamics analyst to perform such studies.
Microgrid Reliability Modeling and Battery Scheduling Using Stochastic Linear Programming
Cardoso, Goncalo; Stadler, Michael; Siddiqui, Afzal; Marnay, Chris; DeForest, Nicholas; Barbosa-Povoa, Ana; Ferrao, Paulo
2013-05-23
This paper describes the introduction of stochastic linear programming into Operations DER-CAM, a tool used to obtain optimal operating schedules for a given microgrid under local economic and environmental conditions. This application follows previous work on optimal scheduling of a lithium-iron-phosphate battery given the output uncertainty of a 1 MW molten carbonate fuel cell. Both are in the Santa Rita Jail microgrid, located in Dublin, California. This fuel cell has proven unreliable, partially justifying the consideration of storage options. Several stochastic DER-CAM runs are executed to compare different scenarios to values obtained by a deterministic approach. Results indicate that using a stochastic approach provides a conservative yet more lucrative battery schedule, with lower expected energy bills given fuel cell outages and potential savings exceeding 6 percent.
Linear programming phase unwrapping for dual-wavelength digital holography.
Wang, Zhaomin; Jiao, Jiannan; Qu, Weijuan; Yang, Fang; Li, Hongru; Tian, Ailing; Asundi, Anand
2017-01-20
A linear programming phase unwrapping method in dual-wavelength digital holography is proposed and verified experimentally. The proposed method uses the square of height difference as a convergence standard and theoretically gives the boundary condition in a searching process. A simulation was performed by unwrapping step structures at different levels of Gaussian noise. As a result, our method is capable of recovering the discontinuities accurately. It is robust and straightforward. In the experiment, a microelectromechanical systems sample and a cylindrical lens were measured separately. The testing results were in good agreement with true values. Moreover, the proposed method is applicable not only in digital holography but also in other dual-wavelength interferometric techniques.
Manipulating multiqudit entanglement witnesses by using linear programming
Jafarizadeh, M. A.; Najarbashi, G.; Habibian, H.
2007-05-15
A class of entanglement witnesses (EWs) called reduction-type entanglement witnesses is introduced, which can detect some multipartite entangled states, including positive partial transpose ones, in a Hilbert space of dimension d1 ⊗ d2 ⊗ ... ⊗ dn. In fact the feasible regions of these EWs turn out to be convex polygons, and hence their manipulation reduces to linear programming, which can be solved exactly by using the simplex method. The decomposability and nondecomposability of these EWs are studied, and it is shown that they have a close connection with eigenvalues and optimality of EWs. Also, using the Jamiolkowski isomorphism, the corresponding possible positive maps, including the generalized reduction maps of Hall [Phys. Rev. A 72, 022311 (2005)], are obtained.
Towards lexicographic multi-objective linear programming using grossone methodology
NASA Astrophysics Data System (ADS)
Cococcioni, Marco; Pappalardo, Massimo; Sergeyev, Yaroslav D.
2016-10-01
Lexicographic Multi-Objective Linear Programming (LMOLP) problems can be solved in two ways: preemptive and nonpreemptive. The preemptive approach requires the solution of a series of LP problems, with changing constraints (each time the next objective is added, a new constraint appears). The nonpreemptive approach is based on a scalarization of the multiple objectives into a single-objective linear function by a weighted combination of the given objectives. It requires the specification of a set of weights, which is not straightforward and can be time consuming. In this work we present both mathematical and software ingredients necessary to solve LMOLP problems using a recently introduced computational methodology (allowing one to work numerically with infinities and infinitesimals) based on the concept of grossone. The ultimate goal of such an attempt is an implementation of a simplex-like algorithm, able to solve the original LMOLP problem by solving only one single-objective problem and without the need to specify finite weights. The expected advantages are therefore obvious.
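The preemptive approach described above can be sketched as a sequence of LPs, each stage freezing the objective just optimized at its optimal value via an added equality constraint. This is a toy illustration with invented data, not the grossone-based method of the paper:

```python
import numpy as np
from scipy.optimize import linprog

# Preemptive lexicographic LP sketch: toy feasible region x1 + x2 <= 4,
# x >= 0; first maximize x1, then maximize x2 subject to x1 staying optimal.
A_ub = np.array([[1.0, 1.0]]); b_ub = np.array([4.0])
objectives = [np.array([-1.0, 0.0]),     # 1st priority: maximize x1
              np.array([0.0, -1.0])]     # 2nd priority: maximize x2

A_eq, b_eq = np.empty((0, 2)), np.empty(0)
x = None
for c in objectives:
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  A_eq=A_eq if len(b_eq) else None,
                  b_eq=b_eq if len(b_eq) else None,
                  bounds=[(0, None)] * 2)
    x = res.x
    # Freeze this objective at its optimum before moving to the next one.
    A_eq = np.vstack([A_eq, c]); b_eq = np.append(b_eq, res.fun)

print(x)
```

The growing constraint set is exactly the "each time the next objective is added, a new constraint appears" pattern the abstract mentions; the grossone methodology is an alternative that avoids this sequence entirely.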
Split diversity in constrained conservation prioritization using integer linear programming.
Chernomor, Olga; Minh, Bui Quang; Forest, Félix; Klaere, Steffen; Ingram, Travis; Henzinger, Monika; von Haeseler, Arndt
2015-01-01
Phylogenetic diversity (PD) is a measure of biodiversity based on the evolutionary history of species. Here, we discuss several optimization problems related to the use of PD, and the more general measure split diversity (SD), in conservation prioritization. Depending on the conservation goal and the information available about species, one can construct optimization routines that incorporate various conservation constraints. We demonstrate how this information can be used to select sets of species for conservation action. Specifically, we discuss the use of species' geographic distributions, the choice of candidates under economic pressure, and the use of predator-prey interactions between the species in a community to define viability constraints. Although such optimization problems fall into the class of NP-hard problems, it is possible to solve them in a reasonable amount of time using integer programming. We apply integer linear programming to a variety of models for conservation prioritization that incorporate the SD measure. We illustrate the results for two data sets: the Cape region of South Africa and a Caribbean coral reef community. Finally, we provide user-friendly software at http://www.cibiv.at/software/pda.
NASA Astrophysics Data System (ADS)
Vasant, P.; Ganesan, T.; Elamvazuthi, I.
2012-11-01
Fairly reasonable results were obtained for non-linear engineering problems using optimization techniques such as neural networks, genetic algorithms, and fuzzy logic independently in the past. Increasingly, hybrid techniques are being used to solve non-linear problems to obtain better output. This paper discusses the use of a neuro-genetic hybrid technique to optimize geological structure mapping, known as seismic survey. It involves the minimization of an objective function subject to geophysical and operational constraints. In this work, the optimization was initially performed using genetic programming, followed by a hybrid neuro-genetic programming approach. Comparative studies and analysis were then carried out on the optimized results. The results indicate that the hybrid neuro-genetic technique produced better results compared to the stand-alone genetic programming method.
Mixed integer linear programming for maximum-parsimony phylogeny inference.
Sridhar, Srinath; Lam, Fumei; Blelloch, Guy E; Ravi, R; Schwartz, Russell
2008-01-01
Reconstruction of phylogenetic trees is a fundamental problem in computational biology. While excellent heuristic methods are available for many variants of this problem, new advances in phylogeny inference will be required if we are to be able to continue to make effective use of the rapidly growing stores of variation data now being gathered. In this paper, we present two integer linear programming (ILP) formulations to find the most parsimonious phylogenetic tree from a set of binary variation data. One method uses a flow-based formulation that can produce exponential numbers of variables and constraints in the worst case. The method has, however, proven extremely efficient in practice on datasets that are well beyond the reach of the available provably efficient methods, solving several large mtDNA and Y-chromosome instances within a few seconds and giving provably optimal results in times competitive with fast heuristics that cannot guarantee optimality. An alternative formulation establishes that the problem can be solved with a polynomial-sized ILP. We further present a web server developed based on the exponential-sized ILP that performs fast maximum parsimony inferences and serves as a front end to a database of precomputed phylogenies spanning the human genome.
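As a toy illustration of the ILP machinery such formulations rely on (not the paper's actual flow-based parsimony model), a tiny minimum set-cover instance can be solved exactly with `scipy.optimize.milp`:

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# Minimum set cover as a binary ILP: columns are candidate sets, rows are
# elements that must each be covered at least once; minimize sets chosen.
A = np.array([[1, 0, 1],
              [1, 1, 0],
              [0, 1, 1]])
c = np.ones(3)                                  # cost of picking each set
cover = LinearConstraint(A, lb=1, ub=np.inf)    # every element covered
res = milp(c=c, constraints=cover,
           integrality=np.ones(3),              # all variables integer
           bounds=Bounds(0, 1))                 # binary variables
print(res.x, res.fun)
```

The same pattern (binary variables, linear constraints, exact branch-and-cut solve) scales to the parsimony formulations described in the abstract when handed to an industrial MILP solver.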
Accurate construction of consensus genetic maps via integer linear programming.
Wu, Yonghui; Close, Timothy J; Lonardi, Stefano
2011-01-01
We study the problem of merging genetic maps, when the individual genetic maps are given as directed acyclic graphs. The computational problem is to build a consensus map, which is a directed graph that includes and is consistent with all (or, the vast majority of) the markers in the input maps. However, when markers in the individual maps have ordering conflicts, the resulting consensus map will contain cycles. Here, we formulate the problem of resolving cycles in the context of a parsimonious paradigm that takes into account two types of errors that may be present in the input maps, namely, local reshuffles and global displacements. The resulting combinatorial optimization problem is, in turn, expressed as an integer linear program. A fast approximation algorithm is proposed, and an additional speedup heuristic is developed. Our algorithms were implemented in a software tool named MERGEMAP which is freely available for academic use. An extensive set of experiments shows that MERGEMAP consistently outperforms JOINMAP, which is the most popular tool currently available for this task, both in terms of accuracy and running time. MERGEMAP is available for download at http://www.cs.ucr.edu/~yonghui/mgmap.html.
Maximum likelihood pedigree reconstruction using integer linear programming.
Cussens, James; Bartlett, Mark; Jones, Elinor M; Sheehan, Nuala A
2013-01-01
Large population biobanks of unrelated individuals have been highly successful in detecting common genetic variants affecting diseases of public health concern. However, they lack the statistical power to detect more modest gene-gene and gene-environment interaction effects or the effects of rare variants for which related individuals are ideally required. In reality, most large population studies will undoubtedly contain sets of undeclared relatives, or pedigrees. Although a crude measure of relatedness might sometimes suffice, having a good estimate of the true pedigree would be much more informative if this could be obtained efficiently. Relatives are more likely to share longer haplotypes around disease susceptibility loci and are hence biologically more informative for rare variants than unrelated cases and controls. Distant relatives are arguably more useful for detecting variants with small effects because they are less likely to share masking environmental effects. Moreover, the identification of relatives enables appropriate adjustments of statistical analyses that typically assume unrelatedness. We propose to exploit an integer linear programming optimisation approach to pedigree learning, which is adapted to find valid pedigrees by imposing appropriate constraints. Our method is not restricted to small pedigrees and is guaranteed to return a maximum likelihood pedigree. With additional constraints, we can also search for multiple high-probability pedigrees and thus account for the inherent uncertainty in any particular pedigree reconstruction. The true pedigree is found very quickly by comparison with other methods when all individuals are observed. Extensions to more complex problems seem feasible.
Dense image registration through MRFs and efficient linear programming.
Glocker, Ben; Komodakis, Nikos; Tziritas, Georgios; Navab, Nassir; Paragios, Nikos
2008-12-01
In this paper, we introduce a novel and efficient approach to dense image registration, which does not require a derivative of the employed cost function. In such a context, the registration problem is formulated using a discrete Markov random field objective function. First, towards dimensionality reduction on the variables we assume that the dense deformation field can be expressed using a small number of control points (registration grid) and an interpolation strategy. Then, the registration cost is expressed using a discrete sum over image costs (using an arbitrary similarity measure) projected on the control points, and a smoothness term that penalizes local deviations on the deformation field according to a neighborhood system on the grid. Towards a discrete approach, the search space is quantized resulting in a fully discrete model. In order to account for large deformations and produce results on a high resolution level, a multi-scale incremental approach is considered where the optimal solution is iteratively updated. This is done through successive morphings of the source towards the target image. Efficient linear programming using the primal dual principles is considered to recover the lowest potential of the cost function. Very promising results using synthetic data with known deformations and real data demonstrate the potentials of our approach.
Learning oncogenetic networks by reducing to mixed integer linear programming.
Shahrabi Farahani, Hossein; Lagergren, Jens
2013-01-01
Cancer can result from the accumulation of different types of genetic mutations, such as copy number aberrations. The data from tumors are cross-sectional and do not contain the temporal order of the genetic events. Finding the order in which the genetic events have occurred and the progression pathways is of vital importance in understanding the disease. In order to model cancer progression, we propose Progression Networks, a special case of Bayesian networks that are tailored to model disease progression. Progression networks have similarities with Conjunctive Bayesian Networks (CBNs) [1], a variation of Bayesian networks also proposed for modeling disease progression. We also describe a learning algorithm for learning Bayesian networks in general and progression networks in particular. We reduce the hard problem of learning Bayesian and progression networks to Mixed Integer Linear Programming (MILP). MILP is an NP-complete problem for which very good heuristics exist. We tested our algorithm on synthetic and real cytogenetic data from renal cell carcinoma. We also compared our learned progression networks with the networks proposed in earlier publications. The software is available on the website https://bitbucket.org/farahani/diprog.
Very Low-Cost Nutritious Diet Plans Designed by Linear Programming.
ERIC Educational Resources Information Center
Foytik, Jerry
1981-01-01
Provides procedural details of Linear Programming, developed by the U.S. Department of Agriculture to devise a dietary guide for consumers that minimizes food costs without sacrificing nutritional quality. Compares Linear Programming with the Thrifty Food Plan, which has been a basis for allocating coupons under the Food Stamp Program. (CS)
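The underlying diet problem can be sketched in a few lines; the foods, prices, and nutrient requirements below are invented for illustration and are not USDA data:

```python
from scipy.optimize import linprog

# Classic diet problem: minimize cost subject to nutrient minimums.
#              bread  milk  cheese        (hypothetical foods)
cost        = [0.50, 0.80, 1.50]          # $ per unit (made-up prices)
protein     = [4.0,  8.0,  7.0]           # g per unit
calories    = [90.0, 120.0, 110.0]        # kcal per unit

# Require at least 50 g protein and 2000 kcal. linprog uses <= constraints,
# so negate the nutrient rows to express the >= requirements.
res = linprog(c=cost,
              A_ub=[[-p for p in protein], [-k for k in calories]],
              b_ub=[-50.0, -2000.0],
              bounds=[(0, None)] * 3)
print(res.x, res.fun)
```

With these toy numbers the cheapest calorie source dominates, which is also why real diet-plan LPs add palatability and variety constraints on top of the pure nutrient minimums.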
NASA Astrophysics Data System (ADS)
Zhadan, V. G.
2016-07-01
The linear semidefinite programming problem is considered. The dual affine scaling method in which all current iterations belong to the feasible set is proposed for its solution. Moreover, the boundaries of the feasible set may be reached. This method is a generalization of a version of the affine scaling method that was earlier developed for linear programs to the case of semidefinite programming.
NASA Technical Reports Server (NTRS)
Pilkey, W. D.; Chen, Y. H.
1974-01-01
An indirect synthesis method is used in the efficient optimal design of multi-degree of freedom, multi-design element, nonlinear, transient systems. A limiting performance analysis which requires linear programming for a kinematically linear system is presented. The system is selected using system identification methods such that the designed system responds as closely as possible to the limiting performance. The efficiency is a result of the method avoiding the repetitive systems analyses accompanying other numerical optimization methods.
Development and validation of a general purpose linearization program for rigid aircraft models
NASA Technical Reports Server (NTRS)
Duke, E. L.; Antoniewicz, R. F.
1985-01-01
A FORTRAN program that provides the user with a powerful and flexible tool for the linearization of aircraft models is discussed. The program LINEAR numerically determines a linear systems model using nonlinear equations of motion and a user-supplied, nonlinear aerodynamic model. The system model determined by LINEAR consists of matrices for both the state and observation equations. The program has been designed to allow easy selection and definition of the state, control, and observation variables to be used in a particular model. Also, included in the report is a comparison of linear and nonlinear models for a high performance aircraft.
A User’s Manual for Interactive Linear Control Programs on IBM/3033.
1982-12-01
There existed a need for an interactive program that would provide the user assistance in solving applications of linear control theory. The linear control program (LINCON) and its user's guide satisfy this need. A series of ten interactive programs are presented which permit the user to carry out analysis, design and simulation of a broad class of linear control problems. LINCON consists of two groups: matrix manipulation, transfer function and ...
AN INTRODUCTION TO THE APPLICATION OF DYNAMIC PROGRAMMING TO LINEAR CONTROL SYSTEMS
DYNAMIC PROGRAMMING APPLIED TO OPTIMIZE LINEAR CONTROL SYSTEMS WITH QUADRATIC PERFORMANCE MEASURES. MATHEMATICAL METHODS WHICH MAY BE APPLIED TO SPACE VEHICLE AND RELATED GUIDANCE AND CONTROL PROBLEMS.
NASA Technical Reports Server (NTRS)
Frost, Susan A.; Bodson, Marc; Acosta, Diana M.
2009-01-01
The Next Generation (NextGen) transport aircraft configurations being investigated as part of the NASA Aeronautics Subsonic Fixed Wing Project have more control surfaces, or control effectors, than existing transport aircraft configurations. Conventional flight control is achieved through two symmetric elevators, two antisymmetric ailerons, and a rudder. The five effectors, reduced to three command variables, produce moments along the three main axes of the aircraft and enable the pilot to control the attitude and flight path of the aircraft. The NextGen aircraft will have additional redundant control effectors to control the three moments, creating a situation where the aircraft is over-actuated and where a simple relationship no longer exists between the required effector deflections and the desired moments. NextGen flight controllers will incorporate control allocation algorithms to determine the optimal effector commands and attain the desired moments, taking into account the effector limits. Approaches to solving the problem using linear programming and quadratic programming algorithms have been proposed and tested. It is of great interest to understand their relative advantages and disadvantages and how design parameters may affect their properties. In this paper, we investigate the sensitivity of the effector commands with respect to the desired moments and show on some examples that the solutions provided using the l2 norm of quadratic programming are less sensitive than those using the l1 norm of linear programming.
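A minimal sketch of the two allocation formulations being compared, on a toy over-actuated system (the effectiveness matrix `B`, desired moments, and dimensions are invented, and effector limits are omitted for brevity):

```python
import numpy as np
from scipy.optimize import linprog

# Toy control allocation: 3 moments produced by 5 effectors via B u = m.
rng = np.random.default_rng(1)
B = rng.standard_normal((3, 5))     # effectiveness matrix (made-up)
m = np.array([1.0, -0.5, 0.25])     # desired moments

# l2 allocation: minimum-norm solution of B u = m (pseudoinverse / QP).
u_l2 = np.linalg.pinv(B) @ m

# l1 allocation: min ||u||_1 s.t. B u = m, via the split u = u+ - u-.
n = B.shape[1]
res = linprog(c=np.ones(2 * n),
              A_eq=np.hstack([B, -B]), b_eq=m,
              bounds=[(0, None)] * (2 * n))
u_l1 = res.x[:n] - res.x[n:]
print(np.abs(u_l1).sum(), np.linalg.norm(u_l2))
```

The l1 solution typically concentrates effort on a few effectors (a basic LP vertex), while the l2 solution spreads it across all of them, which is one intuition behind the sensitivity difference the paper studies.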
SUBOPT: A CAD program for suboptimal linear regulators
NASA Technical Reports Server (NTRS)
Fleming, P. J.
1985-01-01
An interactive software package which provides design solutions for both standard linear quadratic regulator (LQR) and suboptimal linear regulator problems is described. Intended for time-invariant continuous systems, the package is easily modified to include sampled-data systems. LQR designs are obtained by established techniques while the large class of suboptimal problems containing controller and/or performance index options is solved using a robust gradient minimization technique. Numerical examples demonstrate features of the package and recent developments are described.
ERIC Educational Resources Information Center
Schmitt, M. A.; And Others
1994-01-01
Compares traditional manure application planning techniques calculated to meet agronomic nutrient needs on a field-by-field basis with plans developed using computer-assisted linear programming optimization methods. Linear programming provided the most economical and environmentally sound manure application strategy. (Contains 15 references.) (MDH)
Fundamental solution of the problem of linear programming and method of its determination
NASA Technical Reports Server (NTRS)
Petrunin, S. V.
1978-01-01
The idea of a fundamental solution to a problem in linear programming is introduced. A method of determining the fundamental solution and of applying this method to the solution of a problem in linear programming is proposed. Numerical examples are cited.
A linear circuit analysis program with stiff systems capability
NASA Technical Reports Server (NTRS)
Cook, C. H.; Bavuso, S. J.
1973-01-01
Several existing network analysis programs have been modified and combined to employ a variable topological approach to circuit translation. Efficient numerical integration techniques are used for transient analysis.
Zhao, Yingfeng; Liu, Sanyang
2016-01-01
We present a practical branch-and-bound algorithm for globally solving the generalized linear multiplicative programming problem with multiplicative constraints. To solve the problem, a relaxation problem equivalent to a linear program is proposed by utilizing a new two-phase relaxation technique. In the algorithm, lower and upper bounds are simultaneously obtained by solving a series of linear relaxation problems. Global convergence has been proved, and results on some sample examples and a small random experiment show that the proposed algorithm is feasible and efficient.
How Relevant Is Linear, Dichotomous Reasoning to Ongoing Program Evaluation?
ERIC Educational Resources Information Center
Nguyen, Tuan D.
1978-01-01
Criticizes Strasser and Deniston's post-planned evaluation (TM 504 253) because of their: (1) emphasis on evaluation research; (2) imposition of experimental rigor; (3) inapplicability to human service projects; (4) inattention to congruity between the program and its environment; (5) distinct characteristics of program evaluation; and (6)…
NASA Technical Reports Server (NTRS)
Fleming, P.
1985-01-01
A design technique is proposed for linear regulators in which a feedback controller of fixed structure is chosen to minimize an integral quadratic objective function subject to the satisfaction of integral quadratic constraint functions. Application of a non-linear programming algorithm to this mathematically tractable formulation results in an efficient and useful computer-aided design tool. Particular attention is paid to computational efficiency and various recommendations are made. Two design examples illustrate the flexibility of the approach and highlight the special insight afforded to the designer.
Programmable calculator program for linear somatic cell scores to estimate mastitis yield losses.
Kirk, J H
1984-02-01
A programmable calculator program calculates loss of milk yield in dairy cows based on linear somatic cell count scores. The program displays the distribution of the herd by lactation number and linear score for present and optimal goal situations. Loss of yield is in pounds and dollars by cow and herd. The program estimates optimal milk production and numbers of fewer cows at the goal for mastitis infection.
LINOPT: A FORTRAN Routine for Solving Linear Programming Problems,
1981-10-09
... block /XXXLP/, which must accordingly be a common block in the calling program. ROUNDOFF CONTROL: In the program there are three input variables which can be used to control roundoff error accumulations. EPS is a tolerance used in checking constraint violations. H is also used to zero out ...
Application of linear programming techniques for controlling linear dynamic plants in real time
NASA Astrophysics Data System (ADS)
Gabasov, R.; Kirillova, F. M.; Ha, Vo Thi Thanh
2016-03-01
The problem of controlling a linear dynamic plant in real time given its nondeterministic model and imperfect measurements of the inputs and outputs is considered. The concepts of current distributions of the initial state and disturbance parameters are introduced. A method for implementing a disclosable control loop using the separation principle is described. The optimal control problem under uncertainty conditions is reduced to problems of optimal observation, optimal identification, and optimal control of the deterministic system. To extend the domain where a solution to the optimal control problem under uncertainty exists, a two-stage optimal control method is proposed. Results are illustrated using a dynamic plant of the fourth order.
A linear programming approach for placement of applicants to academic programs.
Kassa, Biniyam Asmare
2013-01-01
This paper reports a linear programming approach for placement of applicants to study programs, developed and implemented at the College of Business & Economics, Bahir Dar University, Bahir Dar, Ethiopia. The approach is estimated to significantly streamline the placement decision process at the college by reducing the required man-hours as well as the time it takes to announce placement decisions. Compared to the previous manual system, where only one or two placement criteria were considered, the new approach allows the college's management to easily incorporate additional placement criteria, if needed. Comparison of our approach against manually constructed placement decisions based on actual data for the 2012/13 academic year suggested that about 93 percent of the placements from our model concur with the actual placement decisions. For the remaining 7 percent of placements, however, the actual placements made by the manual system display inconsistencies of decisions judged against the very criteria intended to guide placement decisions by the college's program management office. Overall, the new approach proves to be a significant improvement over the manual system in terms of efficiency of the placement process and the quality of placement decisions.
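With one seat per program and invented applicant scores, the placement idea reduces to the classic assignment problem, itself a linear program with an integral optimum:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Toy placement: score each applicant (rows) against each program (cols)
# with made-up criteria scores, then pick the assignment maximizing the
# total score. Real instances would have capacities > 1 per program.
score = np.array([[9.0, 4.0, 2.0],
                  [6.0, 8.0, 3.0],
                  [5.0, 7.0, 9.0]])
rows, cols = linear_sum_assignment(score, maximize=True)
total = score[rows, cols].sum()
print(list(zip(rows, cols)), total)
```

Program capacities larger than one are handled in the same spirit by duplicating program columns or by writing the transportation-problem LP directly.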
The Use of Linear (Goal) Programming in the Construction of a Test Blueprint.
ERIC Educational Resources Information Center
Busch, John Christian; Taylor, Raymond G.
The use of a variation of linear programming (goal programming) to develop a test blueprint with multiple specification requirements is the topic of this paper. The computer program STORM was used to develop a table of test specifications for a test that would measure achievement in introductory statistics in the subdomains of frequency…
Bruhn, Peter; Geyer-Schulz, Andreas
2002-01-01
In this paper, we introduce genetic programming over context-free languages with linear constraints for combinatorial optimization, apply this method to several variants of the multidimensional knapsack problem, and discuss its performance relative to Michalewicz's genetic algorithm with penalty functions. With respect to Michalewicz's approach, we demonstrate that genetic programming over context-free languages with linear constraints improves convergence. A final result is that genetic programming over context-free languages with linear constraints is ideally suited to modeling complementarities between items in a knapsack problem: The more complementarities in the problem, the stronger the performance in comparison to its competitors.
NASA Astrophysics Data System (ADS)
Indarsih, Indrati, Ch. Rini
2016-02-01
In this paper, we define the variance of fuzzy random variables through alpha levels. We present a theorem which establishes that the variance of a fuzzy random variable is a fuzzy number. We consider a multi-objective linear programming (MOLP) problem whose objective function coefficients are fuzzy random variables and solve it by a variance approach. The approach transforms the MOLP with fuzzy random objective function coefficients into an MOLP with fuzzy objective function coefficients. By weighted methods, we obtain a linear program with fuzzy coefficients, which we solve by the simplex method for fuzzy linear programming.
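The weighted method mentioned above can be sketched by collapsing two linear objectives into one; fuzzy coefficients are replaced here by crisp stand-ins and all data are toy values, not the paper's formulation:

```python
import numpy as np
from scipy.optimize import linprog

# Weighted-sum scalarization of a two-objective LP: maximize 3x1 + x2 and
# x1 + 2x2 (negated for linprog's minimization) over x1 + x2 <= 10, x >= 0.
c1 = np.array([-3.0, -1.0])
c2 = np.array([-1.0, -2.0])
w = np.array([0.6, 0.4])               # decision-maker's weights (invented)
A_ub = np.array([[1.0, 1.0]]); b_ub = [10.0]

res = linprog(w[0] * c1 + w[1] * c2,
              A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 2)
print(res.x)
```

Sweeping the weights traces out different efficient vertices of the feasible region, which is how weighted methods expose the trade-off between the objectives.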
User's manual for interfacing a leading edge, vortex rollup program with two linear panel methods
NASA Technical Reports Server (NTRS)
Desilva, B. M. E.; Medan, R. T.
1979-01-01
Sufficient instructions are provided for interfacing the Mangler-Smith leading-edge vortex rollup program with a vortex lattice (POTFAN) method and an advanced higher-order singularity linear analysis for computing the vortex effects for simple canard wing combinations.
NASA Technical Reports Server (NTRS)
1979-01-01
The computer program Linear SCIDNT which evaluates rotorcraft stability and control coefficients from flight or wind tunnel test data is described. It implements the maximum likelihood method to maximize the likelihood function of the parameters based on measured input/output time histories. Linear SCIDNT may be applied to systems modeled by linear constant-coefficient differential equations. This restriction in scope allows the application of several analytical results which simplify the computation and improve its efficiency over the general nonlinear case.
Optimizing the Teaching-Learning Process Through a Linear Programming Model--Stage Increment Model.
ERIC Educational Resources Information Center
Belgard, Maria R.; Min, Leo Yoon-Gee
An operations research method to optimize the teaching-learning process is introduced in this paper. In particular, a linear programing model is proposed which, unlike dynamic or control theory models, allows the computer to react to the responses of a learner in seconds or less. To satisfy the assumptions of linearity, the seemingly complicated…
FSILP: fuzzy-stochastic-interval linear programming for supporting municipal solid waste management.
Li, Pu; Chen, Bing
2011-04-01
Although many studies on municipal solid waste (MSW) management have been conducted under coexisting fuzzy, stochastic, and interval uncertainties, conventional linear programming approaches that integrate the fuzzy method with the other two have been inefficient. In this study, a fuzzy-stochastic-interval linear programming (FSILP) method is developed by integrating Nguyen's method with conventional linear programming to support municipal solid waste management. Nguyen's method is used to convert fuzzy and fuzzy-stochastic linear programming problems into conventional linear programs by measuring the attainment values of fuzzy numbers and/or fuzzy random variables, as well as the superiority and inferiority between triangular fuzzy numbers and triangular fuzzy-stochastic variables. The developed method can effectively tackle uncertainties described in terms of probability density functions, fuzzy membership functions, and discrete intervals. Moreover, the method improves upon the conventional interval fuzzy programming and two-stage stochastic programming approaches, achieving its results with fewer constraints and significantly reduced computation time. The developed model was applied to a case study of a municipal solid waste management system in a city. The results indicated that reasonable solutions were generated. The solution can help quantify the relationship between changes in system cost and the uncertainties, which could support further analysis of tradeoffs between waste management cost and system failure risk.
Linearized Programming of Memristors for Artificial Neuro-Sensor Signal Processing.
Yang, Changju; Kim, Hyongsuk
2016-08-19
A linearized programming method for memristor-based neural weights is proposed. The memristor is known as an ideal element for implementing a neural synapse due to its embedded functions of analog memory and analog multiplication. Its resistance variation under a voltage input is generally a nonlinear function of time, so linearizing the memristance variation over time is important for ease of memristor programming. In this paper, a method utilizing an anti-serial architecture for linear programming is proposed. The anti-serial architecture is composed of two memristors with opposite polarities; it linearizes the variation of memristance through the complementary actions of the two memristors. To program a memristor, an additional memristor with opposite polarity is employed. The linearization effect of weight programming in an anti-serial architecture is investigated, and a memristor bridge synapse built with two sets of the anti-serial memristor architecture is taken as an application example of the proposed method. Simulations are performed with memristors of both the linear drift model and a nonlinear model.
The Computer Program LIAR for Beam Dynamics Calculations in Linear Accelerators
Assmann, R.W.; Adolphsen, C.; Bane, K.; Raubenheimer, T.O.; Siemann, R.H.; Thompson, K.; /SLAC
2011-08-26
Linear accelerators are the central components of the proposed next generation of linear colliders. They need to provide acceleration of up to 750 GeV per beam while maintaining very small normalized emittances. Standard simulation programs, developed mainly for storage rings, do not meet the specific requirements of high-energy linear accelerators. We present a new program, LIAR ('LInear Accelerator Research code'), that includes wakefield effects, a 6D coupled beam description, specific optimization algorithms, and other advanced features. Its modular structure allows it to be used and extended easily for different purposes. The program is available for UNIX workstations and Windows PCs and can be applied to a broad range of accelerators. We present examples of simulations for the SLC and NLC.
Microsoft Excel Sensitivity Analysis for Linear and Stochastic Program Feed Formulation
Technology Transfer Automated Retrieval System (TEKTRAN)
Sensitivity analysis is a part of mathematical programming solutions and is used in making nutritional and economic decisions for a given feed formulation problem. The terms, shadow price and reduced cost, are familiar linear program (LP) terms to feed formulators. Because of the nonlinear nature of...
Object matching using a locally affine invariant and linear programming techniques.
Li, Hongsheng; Huang, Xiaolei; He, Lei
2013-02-01
In this paper, we introduce a new matching method based on a novel locally affine-invariant geometric constraint and linear programming techniques. To model and solve the matching problem in a linear programming formulation, all geometric constraints must be exactly or approximately reformulated into a linear form; this is a major difficulty for this kind of matching algorithm. We propose a novel locally affine-invariant constraint which can be exactly linearized and requires far fewer auxiliary variables than other linear programming-based methods do. The key idea behind it is that each point in the template point set can be exactly represented by an affine combination of its neighboring points, whose weights can be solved for easily by least squares. The errors of reconstructing each matched point using these weights are used to penalize the disagreement of geometric relationships between the template points and the matched points. The resulting overall objective function can be solved efficiently by linear programming techniques. Our experimental results on both rigid and nonrigid object matching show the effectiveness of the proposed algorithm.
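The key idea above, representing a template point as an affine combination of its neighbors with least-squares weights, can be sketched in a few lines. The 2-D coordinates below are invented, and the sum-to-one (affine) condition is imposed by appending it as a heavily weighted extra equation to the least-squares system.

```python
import numpy as np

p = np.array([1.0, 2.0])          # a template point
N = np.array([[0.0, 0.0],         # its neighboring points (rows)
              [3.0, 0.0],
              [0.0, 4.0],
              [2.0, 3.0]])

# Solve  N.T @ w ~= p  subject to  sum(w) == 1  by appending the affine
# constraint as an extra, heavily weighted equation.
A = np.vstack([N.T, np.ones(len(N)) * 1e6])
b = np.concatenate([p, [1e6]])
w, *_ = np.linalg.lstsq(A, b, rcond=None)

recon = N.T @ w                   # reconstruction of p from its neighbors
```

Reconstruction errors of this form, computed for candidate matches, are what the paper's objective penalizes; because the weights are affine, they are unchanged when the whole neighborhood undergoes an affine transformation.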
Optimal control of a satellite-robot system using direct collocation with non-linear programming
NASA Astrophysics Data System (ADS)
Coverstone-Carroll, V. L.; Wilkey, N. M.
1995-08-01
The non-holonomic behavior of a satellite-robot system is used to develop the system's equations of motion. The resulting non-linear differential equations are transformed into a non-linear programming problem using direct collocation. The link rates of the robot are minimized along optimal reorientations. Optimal solutions to several maneuvers are obtained and the results are interpreted to gain an understanding of the satellite-robot dynamics.
Accommodation of practical constraints by a linear programming jet select. [for Space Shuttle
NASA Technical Reports Server (NTRS)
Bergmann, E.; Weiler, P.
1983-01-01
An experimental spacecraft control system will be incorporated into the Space Shuttle flight software and exercised during a forthcoming mission to evaluate its performance and handling qualities. The control system incorporates a 'phase space' control law to generate rate change requests and a linear programming jet select to compute jet firings. Posed as a linear programming problem, jet selection must represent the rate change request as a linear combination of jet acceleration vectors, where the coefficients are the jet firing times, while minimizing the fuel expended in satisfying that request. This problem is solved in real time using a revised Simplex algorithm. In order to implement the jet selection algorithm in the Shuttle flight control computer, it was modified to accommodate certain practical features of the Shuttle such as limited computer throughput, lengthy firing times, and a large number of control jets. To the authors' knowledge, this is the first such application of linear programming. It was made possible by careful consideration of the jet selection problem in terms of the properties of linear programming and the Simplex algorithm. These modifications to the jet select algorithm may be useful for the design of reaction-controlled spacecraft.
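The jet-selection LP described above has a compact form: find nonnegative firing times whose weighted jet acceleration vectors sum to the requested rate change, at minimum fuel. The toy below uses invented jet data and assumes fuel is proportional to firing time; it is a sketch of the formulation, not the Shuttle implementation.

```python
import numpy as np
from scipy.optimize import linprog

jets = np.array([[1.0, 0.0, -1.0, 0.3],     # rows: rotation axes
                 [0.2, 1.0,  0.5, -1.0]])   # columns: one jet each
fuel_rate = np.array([1.0, 1.0, 1.2, 0.8])  # fuel per second of firing
rate_request = np.array([0.5, 0.7])         # commanded rate change

res = linprog(c=fuel_rate,                  # total fuel to minimize
              A_eq=jets, b_eq=rate_request,
              bounds=(0, None))             # firing times are nonnegative
t = res.x                                   # computed jet firing times
```

A simplex-type solver is natural here because a basic optimal solution fires at most as many jets as there are controlled axes.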
A New Bound for the Ratio Between the 2-Matching Problem and Its Linear Programming Relaxation
Boyd, Sylvia; Carr, Robert
1999-07-28
Consider the 2-matching problem defined on the complete graph, with edge costs which satisfy the triangle inequality. We prove that the value of a minimum cost 2-matching is bounded above by 4/3 times the value of its linear programming relaxation, the fractional 2-matching problem. This lends credibility to a long-standing conjecture that the optimal value for the traveling salesman problem is bounded above by 4/3 times the value of its linear programming relaxation, the subtour elimination problem.
A novel recurrent neural network with finite-time convergence for linear programming.
Liu, Qingshan; Cao, Jinde; Chen, Guanrong
2010-11-01
In this letter, a novel recurrent neural network based on the gradient method is proposed for solving linear programming problems. Finite-time convergence of the proposed neural network is proved by using the Lyapunov method. Compared with the existing neural networks for linear programming, the proposed neural network is globally convergent to exact optimal solutions in finite time, which is remarkable and rare in the literature of neural networks for optimization. Some numerical examples are given to show the effectiveness and excellent performance of the new recurrent neural network.
Ren, Jingzheng; Dong, Liang; Sun, Lu; Goodsite, Michael Evan; Tan, Shiyu; Dong, Lichun
2015-01-01
The aim of this work was to develop a model for optimizing the life cycle cost of biofuel supply chain under uncertainties. Multiple agriculture zones, multiple transportation modes for the transport of grain and biofuel, multiple biofuel plants, and multiple market centers were considered in this model, and the price of the resources, the yield of grain and the market demands were regarded as interval numbers instead of constants. An interval linear programming was developed, and a method for solving interval linear programming was presented. An illustrative case was studied by the proposed model, and the results showed that the proposed model is feasible for designing biofuel supply chain under uncertainties.
Fault detection and initial state verification by linear programming for a class of Petri nets
NASA Technical Reports Server (NTRS)
Rachell, Traxon; Meyer, David G.
1992-01-01
The authors present an algorithmic approach to determining when the marking of an LSMG (live safe marked graph) or LSFC (live safe free choice) net is in the set of live safe markings M. Once the marking of a net has been determined to be in M, a later determination that the marking is not in M indicates a fault. It is shown how linear programming can be used to determine whether m is an element of M. The worst-case computational complexity of each algorithm is bounded by the number of linear programs that must be solved.
2007-11-02
scarce resources (Bazaraa vii). The modeling capabilities linear programming provides have made it a success in many fields of study. Since the… Planning and Programming of Facility Construction Projects. 12 May 1994. Bazaraa, Mokhtar S., John J. Jarvis, and Hanif D. Sherali. Linear Programming…
ERIC Educational Resources Information Center
Dyehouse, Melissa; Bennett, Deborah; Harbor, Jon; Childress, Amy; Dark, Melissa
2009-01-01
Logic models are based on linear relationships between program resources, activities, and outcomes, and have been used widely to support both program development and evaluation. While useful in describing some programs, the linear nature of the logic model makes it difficult to capture the complex relationships within larger, multifaceted…
Detection and discovery of near-earth asteroids by the linear program
NASA Astrophysics Data System (ADS)
Stokes, G.; Evans, J.
The Lincoln Near-Earth Asteroid Research (LINEAR) program, which applies space surveillance technology developed for the United States Air Force to discovering asteroids, has been operating for 5 years. During that time LINEAR has provided almost 65% of the worldwide discovery stream and has now discovered 50% of all known asteroids including near-Earth asteroids whose orbital parameters could allow them to pass close to the Earth. In addition, LINEAR has become the leading ground-based discoverer of comets, with more than one hundred comets now named "LINEAR." Generally, LINEAR discovers comets when they are far away from the Sun on their inbound trajectory, thus allowing observation of the heating process commonly missed previously when comets were discovered closer to the Sun. This paper provides an update to recent enhancements of the LINEAR system, details the productivity of the program, and highlights some of the more interesting objects discovered. This work was sponsored by the National Aeronautics and Space Administration under Air Force Contract F19628-00-C-2002. "Opinions, interpretations, conclusions, and recommendations are those of the author and are not necessarily endorsed by the United States Government."
NASA Technical Reports Server (NTRS)
Cooke, C. H.
1975-01-01
STICAP (Stiff Circuit Analysis Program) is a FORTRAN 4 computer program written for the CDC-6400-6600 computer series and SCOPE 3.0 operating system. It provides the circuit analyst a tool for automatically computing the transient responses and frequency responses of large linear time invariant networks, both stiff and nonstiff (algorithms and numerical integration techniques are described). The circuit description and user's program input language is engineer-oriented, making simple the task of using the program. Engineering theories underlying STICAP are examined. A user's manual is included which explains user interaction with the program and gives results of typical circuit design applications. Also, the program structure from a systems programmer's viewpoint is depicted and flow charts and other software documentation are given.
ERIC Educational Resources Information Center
Findorff, Irene K.
This document summarizes the results of a project at Tulane University that was designed to adapt, test, and evaluate a computerized information and menu planning system utilizing linear programing techniques for use in school lunch food service operations. The objectives of the menu planning were to formulate menu items into a palatable,…
An Interactive Method to Solve Infeasibility in Linear Programming Test Assembling Models
ERIC Educational Resources Information Center
Huitzing, Hiddo A.
2004-01-01
In optimal assembly of tests from item banks, linear programming (LP) models have proved to be very useful. Assembly by hand has become nearly impossible, but these LP techniques are able to find the best solutions, given the demands and needs of the test to be assembled and the specifics of the item bank from which it is assembled. However,…
ERIC Educational Resources Information Center
Huitzing, Hiddo A.
2004-01-01
This article shows how set covering with item sampling (SCIS) methods can be used in the analysis and preanalysis of linear programming models for test assembly (LPTA). LPTA models can construct tests, fulfilling a set of constraints set by the test assembler. Sometimes, no solution to the LPTA model exists. The model is then said to be…
Visual, Algebraic and Mixed Strategies in Visually Presented Linear Programming Problems.
ERIC Educational Resources Information Center
Shama, Gilli; Dreyfus, Tommy
1994-01-01
Identified and classified solution strategies of (n=49) 10th-grade students who were presented with linear programming problems in a predominantly visual setting in the form of a computerized game. Visual strategies were developed more frequently than either algebraic or mixed strategies. Appendix includes questionnaires. (Contains 11 references.)…
Chen, W Y C; Dress, A W M; Yu, W Q
2007-09-01
Here, the reliability of a recent approach that uses parameterised linear programming for detecting community structures in networks has been investigated. Using a one-parameter family of objective functions, a number of 'perturbation experiments' document that our approach works rather well. A real-life network and a family of benchmark networks are also analysed.
Secret Message Decryption: Group Consulting Projects Using Matrices and Linear Programming
ERIC Educational Resources Information Center
Gurski, Katharine F.
2009-01-01
We describe two short group projects for finite mathematics students that incorporate matrices and linear programming into fictional consulting requests presented as a letter to the students. The students are required to use mathematics to decrypt secret messages in one project involving matrix multiplication and inversion. The second project…
Maximizing Profits for a Commercial Salmon Rearing Facility Using Linear Programming.
A linear programming model of a commercial salmon rearing facility is formulated. A scheme is provided for facility expansion at an optimum rate… maximizing profit to the grower. The variables are the number of fish started in each year and the number of fresh water ponds and salt water pens to…
The fastclime Package for Linear Programming and Large-Scale Precision Matrix Estimation in R.
Pang, Haotian; Liu, Han; Vanderbei, Robert
2014-02-01
We develop an R package fastclime for solving a family of regularized linear programming (LP) problems. Our package efficiently implements the parametric simplex algorithm, which provides a scalable and sophisticated tool for solving large-scale linear programs. As an illustrative example, one use of our LP solver is to implement an important sparse precision matrix estimation method called CLIME (Constrained L1 Minimization Estimator). Compared with existing packages for this problem such as clime and flare, our package has three advantages: (1) it efficiently calculates the full piecewise-linear regularization path; (2) it provides an accurate dual certificate as stopping criterion; (3) it is completely coded in C and is highly portable. This package is designed to be useful to statisticians and machine learning researchers for solving a wide range of problems.
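The CLIME subproblem that fastclime targets is itself an ordinary LP, one per column of the precision matrix: minimize the l1 norm of the column subject to an l-infinity constraint on the residual. A tiny sketch with an invented 2x2 covariance (fastclime itself is an R package and uses a parametric simplex; here scipy's generic LP solver stands in):

```python
import numpy as np
from scipy.optimize import linprog

S = np.array([[1.0, 0.3],
              [0.3, 1.0]])     # sample covariance matrix
e = np.array([1.0, 0.0])       # first standard basis vector
lam = 0.1                      # tuning parameter

p = len(e)
# Split b = u - v with u, v >= 0 so that sum(u) + sum(v) = ||b||_1.
c = np.ones(2 * p)
A = np.hstack([S, -S])
A_ub = np.vstack([A, -A])      # encodes |S @ b - e| <= lam elementwise
b_ub = np.concatenate([e + lam, lam - e])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0, None))
beta = res.x[:p] - res.x[p:]   # estimated precision-matrix column
```

Solving this LP for each basis vector e_i and varying lam along a path is exactly the family of problems the parametric simplex algorithm handles in one sweep.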
User's Guide to the Weighted-Multiple-Linear Regression Program (WREG version 1.0)
Eng, Ken; Chen, Yin-Yu; Kiang, Julie.E.
2009-01-01
Streamflow is not measured at every location in a stream network. Yet hydrologists, State and local agencies, and the general public still seek to know streamflow characteristics, such as mean annual flow or flood flows with different exceedance probabilities, at ungaged basins. The goals of this guide are to introduce and familiarize the user with the weighted multiple-linear regression (WREG) program, and to also provide the theoretical background for program features. The program is intended to be used to develop a regional estimation equation for streamflow characteristics that can be applied at an ungaged basin, or to improve the corresponding estimate at continuous-record streamflow gages with short records. The regional estimation equation results from a multiple-linear regression that relates the observable basin characteristics, such as drainage area, to streamflow characteristics.
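The core computation behind such a program is weighted least squares: gages with longer or more reliable records get more influence on the regional equation. A miniature in that spirit, with invented numbers (WREG's actual weighting also accounts for cross-correlation between gages):

```python
import numpy as np

X = np.array([[1.0, 2.1],       # columns: intercept, log drainage area
              [1.0, 2.9],
              [1.0, 3.4],
              [1.0, 4.0]])
y = np.array([1.1, 1.6, 1.9, 2.3])      # log streamflow characteristic
w = np.array([40.0, 10.0, 25.0, 55.0])  # gage record lengths as weights

W = np.diag(w)
# Solve the weighted normal equations (X^T W X) b = X^T W y.
b = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
pred = X @ b                    # fitted values at the gaged basins
```

The fitted coefficients b define the regional estimation equation, which can then be evaluated at an ungaged basin's characteristics.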
Digital program for solving the linear stochastic optimal control and estimation problem
NASA Technical Reports Server (NTRS)
Geyser, L. C.; Lehtinen, B.
1975-01-01
A computer program is described which solves the linear stochastic optimal control and estimation (LSOCE) problem by using a time-domain formulation. The LSOCE problem is defined as that of designing controls for a linear time-invariant system which is disturbed by white noise in such a way as to minimize a performance index which is quadratic in state and control variables. The LSOCE problem and solution are outlined; brief descriptions are given of the solution algorithms, and complete descriptions of each subroutine, including usage information and digital listings, are provided. A test case is included, as well as information on the IBM 7090-7094 DCS time and storage requirements.
Ab initio synthesis of linearly compensated zoom lenses by evolutionary programming.
Pal, Sourav; Hazra, Lakshminarayan
2011-04-01
An approach for ab initio synthesis of the thin lens structure of linearly compensated zoom lenses is reported. This method uses evolutionary programming that explores the available configuration space formed by powers of the individual components, the intercomponent separations, and the relative movement parameters of the moving components. Useful thin lens structures of optically and linearly compensated zoom lens systems are obtained by suitable formulation of the merit function of optimization. This paper reports our investigations on three-component zoom lens structures. Illustrative numerical results are presented.
NASA Astrophysics Data System (ADS)
Zimmermann, Karl-Heinz; Achtziger, Wolfgang
2001-09-01
The size of a systolic array synthesized from a uniform recurrence equation, whose computations are mapped by a linear function to the processors, matches the problem size. In practice, however, there exist several limiting factors on the array size. There are two dual schemes available to derive arrays of smaller size from large-size systolic arrays based on the partitioning of the large-size arrays into subarrays. In LSGP, the subarrays are clustered one-to-one into the processors of a small-size array, while in LPGS, the subarrays are serially assigned to a reduced-size array. In this paper, we propose a common methodology for both LSGP and LPGS based on polyhedral partitionings of large-size k-dimensional systolic arrays which are synthesized from n-dimensional uniform recurrences by linear mappings for allocation and timing. In particular, we address the optimization problem of finding optimal piecewise linear timing functions for small-size arrays. These are mappings composed of linear timing functions for the computations of the subarrays. We study a continuous approximation of this problem by passing from piecewise linear to piecewise quasi-linear timing functions. The resultant problem formulation is then a quadratic programming problem which can be solved by standard algorithms for nonlinear optimization problems.
An application of a linear programing technique to nonlinear minimax problems
NASA Technical Reports Server (NTRS)
Schiess, J. R.
1973-01-01
A differential correction technique for solving nonlinear minimax problems is presented. The basis of the technique is a linear programing algorithm which solves the linear minimax problem. By linearizing the original nonlinear equations about a nominal solution, both nonlinear approximation and estimation problems using the minimax norm may be solved iteratively. Some consideration is also given to improving convergence and to the treatment of problems with more than one measured quantity. A sample problem is treated with this technique and with the least-squares differential correction method to illustrate the properties of the minimax solution. The results indicate that for the sample approximation problem, the minimax technique provides better estimates than the least-squares method if a sufficient amount of data is used. For the sample estimation problem, the minimax estimates are better if the mathematical model is incomplete.
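The linear minimax subproblem at the heart of the technique can be written as an LP by introducing a bound variable t on every residual. A straight-line Chebyshev fit to four invented data points:

```python
import numpy as np
from scipy.optimize import linprog

x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([0.1, 0.9, 2.2, 2.8])
n = len(x)

# Variables (a, b, t): minimize t subject to -t <= a*x_i + b - y_i <= t.
c = np.array([0.0, 0.0, 1.0])
A_up = np.column_stack([x, np.ones(n), -np.ones(n)])    #  a*x + b - t <= y
A_lo = np.column_stack([-x, -np.ones(n), -np.ones(n)])  # -(a*x + b) - t <= -y
res = linprog(c, A_ub=np.vstack([A_up, A_lo]),
              b_ub=np.concatenate([y, -y]),
              bounds=[(None, None), (None, None), (0, None)])
a, b, t = res.x   # best-fit slope, intercept, and maximum absolute error
```

For a nonlinear model, the differential-correction idea is to linearize about the current estimate, solve this LP for a correction, and iterate.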
A computer program for linear nonparametric and parametric identification of biological data.
Werness, S A; Anderson, D J
1984-01-01
A computer program package for parametric and nonparametric linear system identification of both static and dynamic biological data, written for an LSI-11 minicomputer with 28K of memory, is described. The program has 11 possible commands, including an instructional help command. A user can perform nonparametric spectral analysis and estimation of autocorrelation and partial autocorrelation functions of univariate data, and estimate nonparametrically the transfer function and possibly an associated noise series of bivariate data. In addition, the commands provide the user with the means to derive a parametric autoregressive moving average model for univariate data, to derive a parametric transfer function and noise model for bivariate data, and to perform several model evaluation tests such as pole-zero cancellation and examination of residual whiteness and uncorrelatedness with the input. The program, consisting of a main program and driver subroutine as well as six overlay segments, may be run interactively or automatically.
LDRD final report on massively-parallel linear programming : the parPCx system.
Parekh, Ojas; Phillips, Cynthia Ann; Boman, Erik Gunnar
2005-02-01
This report summarizes the research and development performed from October 2002 to September 2004 at Sandia National Laboratories under the Laboratory-Directed Research and Development (LDRD) project ''Massively-Parallel Linear Programming''. We developed a linear programming (LP) solver designed to use a large number of processors. LP is the optimization of a linear objective function subject to linear constraints. Companies and universities have expended huge efforts over decades to produce fast, stable serial LP solvers. Previous parallel codes run on shared-memory systems and have little or no distribution of the constraint matrix. We have seen no reports of general LP solver runs on large numbers of processors. Our parallel LP code is based on an efficient serial implementation of Mehrotra's interior-point predictor-corrector algorithm (PCx). The computational core of this algorithm is the assembly and solution of a sparse linear system. We have substantially rewritten the PCx code and based it on Trilinos, the parallel linear algebra library developed at Sandia. Our interior-point method can use either direct or iterative solvers for the linear system. To achieve a good parallel data distribution of the constraint matrix, we use a (pre-release) version of a hypergraph partitioner from the Zoltan partitioning library. We describe the design and implementation of our new LP solver, called parPCx, and give preliminary computational results. We summarize a number of issues related to efficient parallel solution of LPs with interior-point methods, including data distribution, numerical stability, and solving the core linear system using both direct and iterative methods. We describe a number of applications of LP specific to US Department of Energy mission areas, and we summarize our efforts to integrate parPCx (and parallel LP solvers in general) into Sandia's massively-parallel integer programming solver PICO (Parallel Integer and Combinatorial Optimizer).
Refining and end use study of coal liquids II - linear programming analysis
Lowe, C.; Tam, S.
1995-12-31
A DOE-funded study is underway to determine the optimum refinery processing schemes for producing transportation fuels that will meet CAAA regulations from direct and indirect coal liquids. The study consists of three major parts: pilot plant testing of critical upgrading processes, linear programming analysis of different processing schemes, and engine emission testing of final products. Currently, fractions of a direct coal liquid produced from bituminous coal are being tested in a sequence of pilot plant upgrading processes; this work is discussed in a separate paper. The linear programming model, which is the subject of this paper, has been completed for the petroleum refinery and is being modified to handle coal liquids based on the pilot plant test results. Preliminary coal liquid evaluation studies indicate that, if a refinery expansion scenario is adopted, the marginal value of the coal liquid (over the base petroleum crude) is $3-4/bbl.
A new one-layer neural network for linear and quadratic programming.
Gao, Xingbao; Liao, Li-Zhi
2010-06-01
In this paper, we present a new neural network for solving linear and quadratic programming problems in real time by introducing some new vectors. The proposed neural network is stable in the sense of Lyapunov and can converge to an exact optimal solution of the original problem when the objective function is convex on the set defined by the equality constraints. Compared with existing one-layer neural networks for quadratic programming problems, the proposed neural network has the fewest neurons and requires weak stability conditions. The validity and transient behavior of the proposed neural network are demonstrated by some simulation results.
A new neural network model for solving random interval linear programming problems.
Arjmandzadeh, Ziba; Safi, Mohammadreza; Nazemi, Alireza
2017-05-01
This paper presents a neural network model for solving random interval linear programming problems. The original problem involving random interval variable coefficients is first transformed into an equivalent convex second order cone programming problem. A neural network model is then constructed for solving the obtained convex second order cone problem. Employing Lyapunov function approach, it is also shown that the proposed neural network model is stable in the sense of Lyapunov and it is globally convergent to an exact satisfactory solution of the original problem. Several illustrative examples are solved in support of this technique.
NASA Technical Reports Server (NTRS)
Lehtinen, B.; Geyser, L. C.
1984-01-01
AESOP is a computer program for use in designing feedback controls and state estimators for linear multivariable systems. AESOP is meant to be used in an interactive manner. Each design task that the program performs is assigned a "function" number. The user accesses these functions either (1) by inputting a list of desired function numbers or (2) by inputting a single function number. In the latter case the choice of the function will in general depend on the results obtained by the previously executed function. The most important of the AESOP functions are those that design linear quadratic regulators and Kalman filters. The user interacts with the program when using these design functions by inputting design weighting parameters and by viewing graphic displays of designed system responses. Supporting functions are provided that obtain system transient and frequency responses, transfer functions, and covariance matrices. The program can also compute open-loop system information such as stability (eigenvalues), eigenvectors, controllability, and observability. The program is written in ANSI-66 FORTRAN for use on an IBM 3033 using TSS 370. Descriptions of all subroutines and results of two test cases are included in the appendixes.
Romeijn, H Edwin; Ahuja, Ravindra K; Dempsey, James F; Kumar, Arvind; Li, Jonathan G
2003-11-07
We present a novel linear programming (LP) based approach for efficiently solving the intensity modulated radiation therapy (IMRT) fluence-map optimization (FMO) problem to global optimality. Our model overcomes the apparent limitations of a linear-programming approach by approximating any convex objective function by a piecewise linear convex function. This approach allows us to retain the flexibility offered by general convex objective functions, while allowing us to formulate the FMO problem as an LP problem. In addition, a novel type of partial-volume constraint that bounds the tail averages of the differential dose-volume histograms of structures is imposed while retaining linearity as an alternative approach to improve dose homogeneity in the target volumes, and to attempt to spare as many critical structures as possible. The goal of this work is to develop a very rapid global optimization approach that finds high quality dose distributions. Implementation of this model has demonstrated excellent results. We found globally optimal solutions for eight 7-beam head-and-neck cases in less than 3 min of computational time on a single processor personal computer without the use of partial-volume constraints. Adding such constraints increased the running times by a factor of 2-3, but improved the sparing of critical structures. All cases demonstrated excellent target coverage (> 95%), target homogeneity (< 10% overdosing and < 7% underdosing) and organ sparing using at least one of the two models.
Annular precision linear shaped charge flight termination system for the ODES program
Vigil, M.G.; Marchi, D.L.
1994-06-01
The work for the development of an Annular Precision Linear Shaped Charge (APLSC) Flight Termination System (FTS) for the Operation and Deployment Experiment Simulator (ODES) program is discussed and presented in this report. The Precision Linear Shaped Charge (PLSC) concept was recently developed at Sandia. The APLSC component is designed to produce a copper jet that cuts four-inch-diameter holes in each of two spherical tanks, one containing fuel and the other an oxidizer that are hypergolic when mixed, to terminate the ODES vehicle flight if necessary. The FTS includes two detonators, six Mild Detonating Fuse (MDF) transfer lines, a detonator block, a detonation transfer manifold, and the APLSC component. PLSCs have previously been designed in ring components where the jet penetrating axis is directed either away from or toward the center of the ring assembly. Typically, these PLSC components are designed to cut metal cylinders from the outside inward or from the inside outward. The ODES program requires an annular linear shaped charge. The LESCA (Linear Shaped Charge Analysis) code was used to design this 65 grain/foot APLSC, and data comparing analytical predictions with experimental results are presented. Jet penetration data are presented to assess the maximum depth and reproducibility of the penetration. Data are also presented for full-scale tests, including all FTS components, conducted with nominal 19-inch-diameter spherical tanks.
The solution of the optimization problem of small energy complexes using linear programming methods
NASA Astrophysics Data System (ADS)
Ivanin, O. A.; Director, L. B.
2016-11-01
Linear programming methods were used for solving the optimization problem of schemes and operation modes of distributed generation energy complexes. Applicability conditions of the simplex method, as applied to energy complexes that include renewable energy installations (solar, wind), diesel generators, and energy storage, are considered. Decomposition algorithms for various schemes of energy complexes are analyzed. The results of optimization calculations for energy complexes, operated autonomously and as part of a distribution grid, are presented.
Optimized Waterspace Management and Scheduling Using Mixed-Integer Linear Programming
2016-01-01
TECHNICAL REPORT NSWC PCD TR 2015-003: Optimized Waterspace Management and Scheduling Using Mixed-Integer Linear Programming. ...effects on optimization quality. ... The use of autonomous systems to perform increasingly... constraints required for the mathematical formulation of the MCM scheduling problem pertaining to the survey constraints and logistics management.
A linear programming approach to characterizing norm bounded uncertainty from experimental data
NASA Technical Reports Server (NTRS)
Scheid, R. E.; Bayard, D. S.; Yam, Y.
1991-01-01
The linear programming spectral overbounding and factorization (LPSOF) algorithm, an algorithm for finding a minimum phase transfer function of specified order whose magnitude tightly overbounds a specified nonparametric function of frequency, is introduced. This method has direct application to transforming nonparametric uncertainty bounds (available from system identification experiments) into parametric representations required for modern robust control design software (i.e., a minimum-phase transfer function multiplied by a norm-bounded perturbation).
Katz, Josh M; Winter, Carl K; Buttrey, Samuel E; Fadel, James G
2012-03-01
Western and guideline based diets were compared to determine if dietary improvements resulting from following dietary guidelines reduce acrylamide intake. Acrylamide forms in heat treated foods and is a human neurotoxin and animal carcinogen. Acrylamide intake from the Western diet was estimated with probabilistic techniques using teenage (13-19 years) National Health and Nutrition Examination Survey (NHANES) food consumption estimates combined with FDA data on the levels of acrylamide in a large number of foods. Guideline based diets were derived from NHANES data using linear programming techniques to comport to recommendations from the Dietary Guidelines for Americans, 2005. Whereas the guideline based diets were more properly balanced and rich in consumption of fruits, vegetables, and other dietary components than the Western diets, acrylamide intake (mean±SE) was significantly greater (P<0.001) from consumption of the guideline based diets (0.508±0.003 μg/kg/day) than from consumption of the Western diets (0.441±0.003 μg/kg/day). Guideline based diets contained less acrylamide contributed by French fries and potato chips than Western diets. Overall acrylamide intake, however, was higher in guideline based diets as a result of more frequent breakfast cereal intake. This is believed to be the first example of a risk assessment that combines probabilistic techniques with linear programming and results demonstrate that linear programming techniques can be used to model specific diets for the assessment of toxicological and nutritional dietary components.
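The linear-programming side of such a diet analysis can be illustrated with a toy problem; the foods, nutrient requirements, and acrylamide numbers below are entirely hypothetical, not the paper's data:

```python
from scipy.optimize import linprog

# Toy diet LP (all numbers hypothetical): choose servings of two foods to
# minimize acrylamide intake while meeting nutrient minimums.
# x = [cereal, vegetables]; per-serving acrylamide (ug), calories, protein (g).
acrylamide = [1.0, 0.2]
# linprog minimizes c @ x subject to A_ub @ x <= b_ub, so "at least"
# constraints are written with negated coefficients.
A_ub = [[-200, -100],   # calories >= 600
        [-5,   -3]]     # protein  >= 15
b_ub = [-600, -15]
res = linprog(acrylamide, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 2)
print(res.x, res.fun)  # -> [0., 6.] servings, 1.2 ug acrylamide
```

The same pattern scales to the paper's setting by adding one variable per food and one row per Dietary Guidelines constraint.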
Asymptotic behavior of trajectories associated with the exponential penalty in linear programming
Cominetti, R.
1994-12-31
We consider the exponential penalty function f(x, r) = c'x + r Σ_i exp[(A_i x - b_i)/r] associated with a linear program of the form min {c'x : Ax ≤ b}. We show that for r close to 0, the unique unconstrained minimizer x(r) of f(·, r) admits an asymptotic expansion of the form x(r) = x* + r d* + η(r), where x* is a particular optimal solution of the linear program and the error term η(r) has an exponentially fast decay. Using duality theory we exhibit an associated dual trajectory λ(r) which converges exponentially fast to a particular dual optimal solution. We then study the asymptotic behavior of the solutions of the steepest-descent differential equation u̇(t) = -∇_x f(u(t), r(t)), u(t_0) = u_0, showing that, under suitable conditions on the rate of decrease of r(t), u(t) converges towards an optimal solution ū of the linear program. In particular, if r(t) decays slowly we find that ū = x*.
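The penalty trajectory x(r) can be observed numerically. A small sketch with a made-up two-variable LP (not the paper's example), for which the expansion x(r) = x* + r d* happens to hold exactly:

```python
import numpy as np
from scipy.optimize import minimize

# Toy LP:  min 2*x1 + x2  s.t.  x1 >= 1, x2 >= 1, written as Ax <= b with
# A = [[-1, 0], [0, -1]], b = [-1, -1]; the optimum is x* = (1, 1).
c = np.array([2.0, 1.0])
A = np.array([[-1.0, 0.0], [0.0, -1.0]])
b = np.array([-1.0, -1.0])

def f(x, r):
    # exponential penalty: f(x, r) = c'x + r * sum_i exp((A_i x - b_i)/r)
    return c @ x + r * np.sum(np.exp((A @ x - b) / r))

for r in (0.5, 0.1, 0.02):
    x_r = minimize(lambda x: f(x, r), np.array([1.2, 1.2])).x
    print(r, x_r)
# For this separable toy problem the stationarity conditions give
# x1(r) = 1 - r*ln 2 and x2(r) = 1 exactly, so the unconstrained
# minimizers approach x* = (1, 1) linearly in r from the infeasible side.
```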
Stable computation of search directions for near-degenerate linear programming problems
Hough, P.D.
1997-03-01
In this paper, we examine stability issues that arise when computing search directions (Δx, Δy, Δs) for a primal-dual path-following interior point method for linear programming. The dual step Δy can be obtained by solving a weighted least-squares problem for which the weight matrix becomes extremely ill-conditioned near the boundary of the feasible region. Hough and Vavasis proposed using a type of complete orthogonal decomposition (the COD algorithm) to solve such a problem and presented stability results. The work presented here addresses the stable computation of the primal step Δx and the change in the dual slacks Δs. These directions can be obtained in a straightforward manner, but near-degeneracy in the linear programming instance introduces ill-conditioning which can cause numerical problems in this approach. Therefore, we propose a new method of computing Δx and Δs. More specifically, this paper describes an orthogonal projection algorithm that extends the COD method. Unlike other algorithms, this method is stable for interior point methods without assuming nondegeneracy in the linear programming instance. Thus, it is more general than other algorithms on near-degenerate problems.
IESIP - AN IMPROVED EXPLORATORY SEARCH TECHNIQUE FOR PURE INTEGER LINEAR PROGRAMMING PROBLEMS
NASA Technical Reports Server (NTRS)
Fogle, F. R.
1994-01-01
IESIP, an Improved Exploratory Search Technique for Pure Integer Linear Programming Problems, addresses the problem of optimizing an objective function of one or more variables subject to a set of confining functions or constraints by a method called discrete optimization or integer programming. Integer programming is based on a specific form of the general linear programming problem in which all variables in the objective function and all variables in the constraints are integers. While more difficult, integer programming is required for accuracy when modeling systems with small numbers of components such as the distribution of goods, machine scheduling, and production scheduling. IESIP establishes a new methodology for solving pure integer programming problems by utilizing a modified version of the univariate exploratory move developed by Robert Hooke and T.A. Jeeves. IESIP also takes some of its technique from the greedy procedure and the idea of unit neighborhoods. A rounding scheme uses the continuous solution found by traditional methods (simplex or other suitable technique) and creates a feasible integer starting point. The Hooke and Jeeves exploratory search is modified to accommodate integers and constraints and is then employed to determine an optimal integer solution from the feasible starting solution. The user-friendly IESIP allows for rapid solution of problems up to 10 variables in size (limited by DOS allocation). Sample problems compare IESIP solutions with the traditional branch-and-bound approach. IESIP is written in Borland's TURBO Pascal for IBM PC series computers and compatibles running DOS. Source code and an executable are provided. The main memory requirement for execution is 25K. This program is available on a 5.25 inch 360K MS DOS format diskette. IESIP was developed in 1990. IBM is a trademark of International Business Machines. TURBO Pascal is registered by Borland International.
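A hedged sketch of the overall strategy described above, not Fogle's exact IESIP procedure: round a continuous LP solution to a feasible integer point, then make Hooke-and-Jeeves-style unit exploratory moves while they stay feasible and improve the objective (the problem data are made up, and greedy unit moves do not guarantee optimality in general):

```python
import numpy as np

def feasible(x, A, b):
    # feasibility check for A @ x <= b, x >= 0 (small tolerance for rounding)
    return np.all(A @ x <= b + 1e-9) and np.all(x >= 0)

def integer_explore(c, A, b, x0):
    x = np.floor(x0)                       # conservative rounding step
    assert feasible(x, A, b), "rounded start must be feasible"
    improved = True
    while improved:                        # exploratory phase: unit moves
        improved = False
        for i in range(len(x)):
            for step in (1.0, -1.0):
                trial = x.copy()
                trial[i] += step
                if feasible(trial, A, b) and c @ trial > c @ x:
                    x, improved = trial, True
    return x

# maximize 3*x1 + 2*x2  s.t.  x1 + x2 <= 4,  x1 <= 2.5,  x >= 0;
# the continuous optimum is (2.5, 1.5).
c = np.array([3.0, 2.0])
A = np.array([[1.0, 1.0], [1.0, 0.0]])
b = np.array([4.0, 2.5])
x_int = integer_explore(c, A, b, np.array([2.5, 1.5]))
print(x_int)  # -> [2. 2.], which is the integer optimum for this instance
```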
A new gradient-based neural network for solving linear and quadratic programming problems.
Leung, Y; Chen, K Z; Jiao, Y C; Gao, X B; Leung, K S
2001-01-01
A new gradient-based neural network is constructed on the basis of duality theory, optimization theory, convex analysis, Lyapunov stability theory, and the LaSalle invariance principle to solve linear and quadratic programming problems. In particular, a new function F(x, y) is introduced into the energy function E(x, y) such that E(x, y) is convex and differentiable, and the resulting network is more efficient. This network involves all the relevant necessary and sufficient optimality conditions for convex quadratic programming problems. For linear programming and quadratic programming (QP) problems with a unique solution or infinitely many solutions, we have proven strictly that for any initial point, every trajectory of the neural network converges to an optimal solution of the QP and its dual problem. The proposed network differs from the existing networks which use the penalty method or the Lagrange method, and the inequality constraints are properly handled. The simulation results show that the proposed neural network is feasible and efficient.
SLFP: A stochastic linear fractional programming approach for sustainable waste management
Zhu, H.; Huang, G.H.
2011-12-15
Highlights: > A new fractional programming (SLFP) method is developed for waste management. > SLFP can solve ratio optimization problems associated with random inputs. > A case study of waste flow allocation demonstrates its applicability. > SLFP helps compare objectives of two aspects and reflect system efficiency. > This study supports in-depth analysis of tradeoffs among multiple system criteria. - Abstract: A stochastic linear fractional programming (SLFP) approach is developed for supporting sustainable municipal solid waste management under uncertainty. The SLFP method can solve ratio optimization problems associated with random information, where chance-constrained programming is integrated into a linear fractional programming framework. It has advantages in: (1) comparing objectives of two aspects, (2) reflecting system efficiency, (3) dealing with uncertainty expressed as probability distributions, and (4) providing optimal-ratio solutions under different system-reliability conditions. The method is applied to a case study of waste flow allocation within a municipal solid waste (MSW) management system. The obtained solutions are useful for identifying sustainable MSW management schemes with maximized system efficiency under various constraint-violation risks. The results indicate that SLFP can support in-depth analysis of the interrelationships among system efficiency, system cost and system-failure risk.
Automated design and optimization of flexible booster autopilots via linear programming, volume 1
NASA Technical Reports Server (NTRS)
Hauser, F. D.
1972-01-01
A nonlinear programming technique was developed for the automated design and optimization of autopilots for large flexible launch vehicles. This technique, which resulted in the COEBRA program, uses the iterative application of linear programming. The method deals directly with the three main requirements of booster autopilot design: to provide (1) good response to guidance commands; (2) response to external disturbances (e.g. wind) to minimize structural bending moment loads and trajectory dispersions; and (3) stability with specified tolerances on the vehicle and flight control system parameters. The method is applicable to very high order systems (30th and greater per flight condition). Examples are provided that demonstrate the successful application of the employed algorithm to the design of autopilots for both single and multiple flight conditions.
Zörnig, Peter
2015-08-01
We present integer programming models for some variants of the farthest string problem. The number of variables and constraints is substantially less than that of the integer linear programming models known in the literature. Moreover, the solution of the linear programming-relaxation contains only a small proportion of noninteger values, which considerably simplifies the rounding process. Numerical tests have shown excellent results, especially when a small set of long sequences is given.
Automatic tracking of linear features on SPOT images using dynamic programming
NASA Astrophysics Data System (ADS)
Bonnefon, Regis; Dherete, Pierre; Desachy, Jacky
1999-12-01
Detection of geographic elements on images is important in the perspective of adding new elements to geographic databases, which are sometimes old, so that some elements are not represented. Our goal is to look for linear features like roads, rivers or railways on SPOT images with a resolution of 10 meters. Several methods allow this detection to be realized and may be classified in three categories: (1) Detection operators: the best known is the DUDA Road Operator, which determines the belonging degree of a pixel to a linear feature from several 5 X 5 filters. Results are often unsatisfactory. There is also the Infinite Size Exponential Filter (ISEF), a derivative filter that allows edge, valley or roof profiles to be found on the image. It can be used as additional information for other methods. (2) Structural tracking: from a starting point, an analysis in several directions is performed to determine the best next point (features may be: homogeneity of radiometry, contrast with environment, ...). From this new point and with an updated direction, the process goes on. The difficulty with these methods is handling occlusions (bridges, tunnels, dense vegetation, ...). (3) Dynamic programming: the F* algorithm and snakes are the best known. They allow a path with a minimal cost to be found in a search window. Occlusions are not a problem, but two or more points near the sought linear feature must be known to define the window. The method described below is a mixture of structural tracking and dynamic programming (F* algorithm).
A strictly improving linear programming algorithm based on a series of Phase 1 problems
Leichner, S.A.; Dantzig, G.B.; Davis, J.W.
1992-04-01
When used on degenerate problems, the simplex method often takes a number of degenerate steps at a particular vertex before moving to the next. In theory (although rarely in practice), the simplex method can actually cycle at such a degenerate point. Instead of trying to modify the simplex method to avoid degenerate steps, we have developed a new linear programming algorithm that is completely impervious to degeneracy. This new method solves the Phase II problem of finding an optimal solution by solving a series of Phase I feasibility problems. Strict improvement is attained at each iteration in the Phase I algorithm, and the Phase II sequence of feasibility problems has linear convergence in the number of Phase I problems. When tested on the 30 smallest NETLIB linear programming test problems, the computational results for the new Phase II algorithm were over 15% faster than the simplex method; on some problems, it was almost two times faster, and on one problem it was four times faster.
Modified Cholesky factorizations in interior-point algorithms for linear programming.
Wright, S.; Mathematics and Computer Science
1999-01-01
We investigate a modified Cholesky algorithm typical of those used in most interior-point codes for linear programming. Cholesky-based interior-point codes are popular for three reasons: their implementation requires only minimal changes to standard sparse Cholesky algorithms (allowing us to take full advantage of software written by specialists in that area); they tend to be more efficient than competing approaches that use alternative factorizations; and they perform robustly on most practical problems, yielding good interior-point steps even when the coefficient matrix of the main linear system to be solved for the step components is ill conditioned. We investigate this surprisingly robust performance by using analytical tools from matrix perturbation theory and error analysis, illustrating our results with computational experiments. Finally, we point out the potential limitations of this approach.
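The "minimal change" involved can be sketched as follows. This is a generic illustration of the idea (when a pivot becomes tiny or negative near degeneracy, substitute a very large number so the factorization proceeds, which effectively zeroes the corresponding step component); it is not the exact rule of any particular interior-point code:

```python
import numpy as np

def modified_cholesky(A, tol=1e-30, big=1e128):
    # Standard left-looking Cholesky with one modification: a pivot below
    # `tol` is replaced by `big` instead of aborting the factorization.
    n = A.shape[0]
    L = np.zeros_like(A)
    for j in range(n):
        d = A[j, j] - L[j, :j] @ L[j, :j]
        if d < tol:
            d = big          # the modification: huge pivot instead of failure
        L[j, j] = np.sqrt(d)
        for i in range(j + 1, n):
            L[i, j] = (A[i, j] - L[i, :j] @ L[j, :j]) / L[j, j]
    return L

A = np.array([[4.0, 2.0], [2.0, 1.0]])   # singular: the second pivot is 0
L = modified_cholesky(A)
print(L)  # factorization proceeds; L[1, 1] is sqrt(big) rather than 0
```

When such an L is used in a triangular solve, the huge diagonal entry makes the corresponding solution component essentially zero, which is why these factorizations still yield usable interior-point steps.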
Optimisation of substrate blends in anaerobic co-digestion using adaptive linear programming.
García-Gen, Santiago; Rodríguez, Jorge; Lema, Juan M
2014-12-01
Anaerobic co-digestion of multiple substrates has the potential to enhance biogas productivity by making use of the complementary characteristics of different substrates. A blending strategy based on a linear programming optimisation method is proposed aiming at maximising COD conversion into methane, but simultaneously maintaining a digestate and biogas quality. The method incorporates experimental and heuristic information to define the objective function and the linear restrictions. The active constraints are continuously adapted (by relaxing the restriction boundaries) such that further optimisations in terms of methane productivity can be achieved. The feasibility of the blends calculated with this methodology was previously tested and accurately predicted with an ADM1-based co-digestion model. This was validated in a continuously operated pilot plant, treating for several months different mixtures of glycerine, gelatine and pig manure at organic loading rates from 1.50 to 4.93 gCOD/Ld and hydraulic retention times between 32 and 40 days at mesophilic conditions.
On Implicit Active Constraints in Linear Semi-Infinite Programs with Unbounded Coefficients
Goberna, M. A.; Lancho, G. A.; Todorov, M. I.; Vera de Serio, V. N.
2011-04-15
The concept of implicit active constraints at a given point provides useful local information about the solution set of linear semi-infinite systems and about the optimal set in linear semi-infinite programming provided the set of gradient vectors of the constraints is bounded, commonly under the additional assumption that there exists some strong Slater point. This paper shows that the mentioned global boundedness condition can be replaced by a weaker local condition (LUB) based on locally active constraints (active in a ball of small radius whose center is some nominal point), providing geometric information about the solution set and Karush-Kuhn-Tucker type conditions for the optimal solution to be strongly unique. The preservation of the latter property under sufficiently small perturbations of all the data is also analyzed, giving a characterization of its stability with respect to these perturbations in terms of the strong Slater condition, the so-called Extended-Nuernberger condition, and the LUB condition.
NASA Astrophysics Data System (ADS)
Bicocchi, R.; Melacci, P. T.; Bucciarelli, T.
1984-06-01
The design of a sidelobe-reduction network for coherent high-resolution radars using Barker codes and the results of an analytical investigation of its performance are presented and illustrated graphically. Compression is achieved by a matched filter followed by a weighting network, designed using linear programming, whose implementation can adapt to different operating modes. It is found that the network gives significant increases in sensitivity and resolution while limiting mismatching losses to about 0.2 dB. A typical digital implementation requires only 66 devices for 10-bit input and a sampling interval of 150 nsec.
An improved multiple linear regression and data analysis computer program package
NASA Technical Reports Server (NTRS)
Sidik, S. M.
1972-01-01
NEWRAP (an improved version of a previous multiple linear regression program called RAPIER), together with CREDUC and CRSPLT, allows for a complete regression analysis including cross plots of the independent and dependent variables, correlation coefficients, regression coefficients, analysis of variance tables, t-statistics and their probability levels, rejection of independent variables, plots of residuals against the independent and dependent variables, and a canonical reduction of quadratic response functions useful in optimum seeking experimentation. A major improvement over RAPIER is that all regression calculations are done in double precision arithmetic.
Zheng, Yuanjie; Hunter, Allan A; Wu, Jue; Wang, Hongzhi; Gao, Jianbin; Maguire, Maureen G; Gee, James C
2011-01-01
In this paper, we address the problem of landmark matching based retinal image registration. Two major contributions render our registration algorithm distinguished from many previous methods. One is a novel landmark-matching formulation which enables not only a joint estimation of the correspondences and transformation model but also the optimization with linear programming. The other contribution lies in the introduction of a reinforced self-similarities descriptor in characterizing the local appearance of landmarks. Theoretical analysis and a series of preliminary experimental results show both the effectiveness of our optimization scheme and the high differentiating ability of our features.
Observations on the linear programming formulation of the single reflector design problem.
Canavesi, Cristina; Cassarly, William J; Rolland, Jannick P
2012-02-13
We implemented the linear programming approach proposed by Oliker and by Wang to solve the single reflector problem for a point source and a far-field target. The algorithm was shown to produce solutions that aim the input rays at the intersections between neighboring reflectors. This feature makes it possible to obtain the same reflector with a low number of rays - of the order of the number of targets - as with a high number of rays, greatly reducing the computation complexity of the problem.
NASA Technical Reports Server (NTRS)
Fleming, P.
1983-01-01
A design technique is proposed for linear regulators in which a feedback controller of fixed structure is chosen to minimize an integral quadratic objective function subject to the satisfaction of integral quadratic constraint functions. Application of a nonlinear programming algorithm to this mathematically tractable formulation results in an efficient and useful computer aided design tool. Particular attention is paid to computational efficiency and various recommendations are made. Two design examples illustrate the flexibility of the approach and highlight the special insight afforded to the designer. One concerns helicopter longitudinal dynamics and the other the flight dynamics of an aerodynamically unstable aircraft.
Sun Wei; Huang, Guo H.; Lv Ying; Li Gongchen
2012-06-15
Highlights: > Inexact piecewise-linearization-based fuzzy flexible programming is proposed. > It is the first application to waste management under multiple complexities. > It tackles nonlinear economies-of-scale effects in interval-parameter constraints. > It estimates costs more accurately than the linear-regression-based model. > Uncertainties are decreased and more satisfactory interval solutions are obtained. - Abstract: To tackle nonlinear economies-of-scale (EOS) effects in interval-parameter constraints for a representative waste management problem, an inexact piecewise-linearization-based fuzzy flexible programming (IPFP) model is developed. In IPFP, interval parameters for waste amounts and transportation/operation costs can be quantified; aspiration levels for net system costs, as well as tolerance intervals for both capacities of waste treatment facilities and waste generation rates, can be reflected; and the nonlinear EOS effects transformed from the objective function to the constraints can be approximated. An interactive algorithm is proposed for solving the IPFP model, which in nature is an interval-parameter mixed-integer quadratically constrained programming model. To demonstrate the IPFP's advantages, two alternative models are developed to compare their performances. One is a conventional linear-regression-based inexact fuzzy programming model (IPFP2) and the other is an IPFP model with all right-hand sides of fuzzy constraints being the corresponding interval numbers (IPFP3). The comparison results between IPFP and IPFP2 indicate that the optimized waste amounts would have similar patterns in both models. However, when dealing with EOS effects in constraints, IPFP2 may underestimate the net system costs while IPFP can estimate the costs more accurately. The comparison results between IPFP and IPFP3 indicate
Huang, Hao; Zhang, Guifu; Zhao, Kun; ...
2016-10-20
A hybrid method of combining linear programming (LP) and physical constraints is developed to estimate specific differential phase (KDP) and to improve rain estimation. Moreover, the hybrid KDP estimator and the existing estimators of LP, least squares fitting, and a self-consistent relation of polarimetric radar variables are evaluated and compared using simulated data. Our simulation results indicate the new estimator's superiority, particularly in regions where backscattering phase (δhv) dominates. Further, a quantitative comparison between auto-weather-station rain-gauge observations and KDP-based radar rain estimates for a Meiyu event also demonstrates the superiority of the hybrid KDP estimator over existing methods.
A FORTRAN program for the analysis of linear continuous and sample-data systems
NASA Technical Reports Server (NTRS)
Edwards, J. W.
1976-01-01
A FORTRAN digital computer program which performs the general analysis of linearized control systems is described. State variable techniques are used to analyze continuous, discrete, and sampled data systems. Analysis options include the calculation of system eigenvalues, transfer functions, root loci, root contours, frequency responses, power spectra, and transient responses for open- and closed-loop systems. A flexible data input format allows the user to define systems in a variety of representations. Data may be entered by inputing explicit data matrices or matrices constructed in user written subroutines, by specifying transfer function block diagrams, or by using a combination of these methods.
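For readers without access to the FORTRAN program itself, the same kinds of analyses it performs (eigenvalues, transfer functions, transient responses of a state-space model) can be sketched with scipy; the second-order system below is an arbitrary illustrative example, not one of the program's test cases:

```python
import numpy as np
from scipy import signal

# State-space model  xdot = A x + B u,  y = C x + D u.
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

print(np.linalg.eigvals(A))            # system eigenvalues: -1 and -2 (stable)
sys = signal.StateSpace(A, B, C, D)
num, den = signal.ss2tf(A, B, C, D)    # transfer function 1/(s^2 + 3*s + 2)
t, y = signal.step(sys)                # open-loop step (transient) response
print(y[-1])                           # settles near the DC gain of 0.5
```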
Murray, W.; Saunders, M.A.
1990-03-01
During the last twelve months, research has concentrated on barrier-function methods for linear programming (LP) and quadratic programming (QP). Some ground-work for the application of barrier methods to nonlinearly constrained problems has also begun. In our previous progress report we drew attention to the difficulty of developing robust implementations of barrier methods for LP. We have continued to refine both the primal algorithm and the dual algorithm. We still do not claim that the barrier algorithms are as robust as the simplex method; however, the dual algorithm has solved all the problems in our extensive test set. We have also gained some experience with using the algorithms to solve aircrew scheduling problems.
Consideration in selecting crops for the human-rated life support system: a linear programming model
NASA Astrophysics Data System (ADS)
Wheeler, E. F.; Kossowski, J.; Goto, E.; Langhans, R. W.; White, G.; Albright, L. D.; Wilcox, D.
A Linear Programming model has been constructed which aids in selecting appropriate crops for CELSS (Controlled Environment Life Support System) food production. A team of Controlled Environment Agriculture (CEA) faculty, staff, graduate students and invited experts representing more than a dozen disciplines, provided a wide range of expertise in developing the model and the crop production program. The model incorporates nutritional content and controlled-environment based production yields of carefully chosen crops into a framework where a crop mix can be constructed to suit the astronauts' needs. The crew's nutritional requirements can be adequately satisfied with only a few crops (assuming vitamin mineral supplements are provided) but this will not be satisfactory from a culinary standpoint. This model is flexible enough that taste and variety driven food choices can be built into the model.
Consideration in selecting crops for the human-rated life support system: a Linear Programming model
NASA Technical Reports Server (NTRS)
Wheeler, E. F.; Kossowski, J.; Goto, E.; Langhans, R. W.; White, G.; Albright, L. D.; Wilcox, D.; Henninger, D. L. (Principal Investigator)
1996-01-01
A Linear Programming model has been constructed which aids in selecting appropriate crops for CELSS (Controlled Environment Life Support System) food production. A team of Controlled Environment Agriculture (CEA) faculty, staff, graduate students and invited experts representing more than a dozen disciplines provided a wide range of expertise in developing the model and the crop production program. The model incorporates nutritional content and controlled-environment-based production yields of carefully chosen crops into a framework where a crop mix can be constructed to suit the astronauts' needs. The crew's nutritional requirements can be adequately satisfied with only a few crops (assuming vitamin and mineral supplements are provided), but this will not be satisfactory from a culinary standpoint. The model is flexible enough that taste- and variety-driven food choices can be built into it.
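A toy version of such a crop-selection LP can be written with SciPy's `linprog`; all crop names, yields, and requirements below are invented placeholders, not data from the CELSS model:

```python
import numpy as np
from scipy.optimize import linprog

# Minimize growing area while meeting daily nutrient needs.
# Columns: wheat, potato, soybean (all numbers illustrative).
nutrients = np.array([[5.0, 8.0, 4.0],     # kcal produced per m^2 per day
                      [0.2, 0.1, 0.4]])    # g protein per m^2 per day
need = np.array([2800.0, 60.0])            # per-person daily requirement
area_limit = 700.0                         # available growing area, m^2

# Constraints: nutrients @ x >= need (flipped to <=), total area <= limit.
A_ub = np.vstack([-nutrients, np.ones((1, 3))])
b_ub = np.concatenate([-need, [area_limit]])
res = linprog(c=np.ones(3), A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, None)] * 3, method="highs")
print(res.status, np.round(res.x, 1), round(res.fun, 1))
```

The solution allocates area mostly to the energy-dense and protein-dense crops; culinary-variety constraints would be added as extra rows or bounds.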
NASA Astrophysics Data System (ADS)
Veera Raghavan, Srikant
Semidefinite programming (SDP) is a relatively modern subfield of convex optimization which has been applied to many problems in the reduced density matrix (RDM) formulation of electronic structure. SDPs deal with the minimization (or maximization) of linear objective functions of matrices, subject to linear equality and inequality constraints and positivity constraints on the eigenvalues of the matrices. Energies of chemical systems can be expressed as linear functions of RDMs, whose eigenvalues are electron occupation numbers or their products, which are expected to be non-negative. Therefore, it is perhaps not surprising that SDPs fit rather naturally into the RDM framework in electronic structure. This dissertation presents SDP applications to two electronic structure theories. The first part of this dissertation (chaps. 1-3) reformulates Hartree-Fock theory in terms of SDPs in order to obtain upper and lower bounds to global Hartree-Fock energies. The upper and lower bounds on the energies are frequently equal, thereby providing a first-ever certificate of global optimality for many Hartree-Fock solutions. The SDP approach provides an alternative to the conventional self-consistent field method of obtaining Hartree-Fock energies and densities, with the added benefit of global optimality or a rigorous lower bound. Applications are made to the potential energy curves of (H4)2, N2, C2, CN, Cr2 and NO2. Energies of the first-row transition elements are also calculated. In chapter 4, the effect of using the Hartree-Fock solutions that we calculate as references for coupled cluster singles and doubles calculations is presented for some of the above molecules. The second part of this dissertation (chap. 5) presents an SDP approach to electronic structure methods which scale linearly with system size. Linear scaling electronic structure methods are essential in order to make calculations on large systems feasible. Among these methods the so-called density matrix based ones seek to
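The connection the dissertation exploits, an energy linear in a density matrix subject to eigenvalue positivity, can be illustrated with a toy one-particle case whose SDP optimum is known analytically: the minimum of trace(h·γ) over PSD γ with unit trace is the lowest eigenvalue of h. The Hamiltonian below is a random symmetric matrix, not a chemical system:

```python
import numpy as np

rng = np.random.default_rng(2)
h = rng.normal(size=(4, 4))
h = (h + h.T) / 2                    # toy one-particle "Hamiltonian"

# SDP: minimize trace(h @ gamma) over {gamma PSD, trace(gamma) = 1}.
# The analytic optimum is the projector onto the lowest eigenvector of h.
evals, evecs = np.linalg.eigh(h)
gamma = np.outer(evecs[:, 0], evecs[:, 0])
energy = np.trace(h @ gamma)

print(np.isclose(energy, evals[0]),
      np.all(np.linalg.eigvalsh(gamma) >= -1e-12))
```

A general-purpose SDP solver would recover the same γ without diagonalizing h; the point here is only that the objective is linear in γ and the constraint set is a spectrahedron.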
Glocker, Ben; Paragios, Nikos; Komodakis, Nikos; Tziritas, Georgios; Navab, Nassir
2007-01-01
In this paper we propose a novel non-rigid volume registration method based on discrete labeling and linear programming. The proposed framework reformulates registration as a minimal path extraction in a weighted graph. The space of solutions is represented using a set of labels which are assigned to predefined displacements. The graph topology corresponds to a regular grid superimposed onto the volume. Links between neighboring control points introduce smoothness, while links between the graph nodes and the labels (end-nodes) measure the cost induced in the objective function through the selection of a particular deformation for a given control point once projected to the entire volume domain. Higher-order polynomials are used to express the volume deformation from those of the control points. Efficient linear programming that can guarantee the optimal solution up to a (user-defined) bound is used to recover the optimal registration parameters. Therefore, the method is gradient-free, can encode various similarity metrics (through simple changes to the graph construction), can guarantee a globally sub-optimal solution, and is computationally tractable. Experimental validation using simulated data with known deformation, as well as manually segmented data, demonstrates the strong potential of our approach.
Uncovering signal transduction networks from high-throughput data by integer linear programming.
Zhao, Xing-Ming; Wang, Rui-Sheng; Chen, Luonan; Aihara, Kazuyuki
2008-05-01
Signal transduction is an important process that transmits signals from the outside of a cell to the inside to mediate sophisticated biological responses. Effective computational models that take advantage of high-throughput genomic and proteomic data are needed to understand the essential mechanisms underlying the signaling pathways. In this article, we propose a novel method for uncovering signal transduction networks (STNs) by integrating protein interaction with gene expression data. Specifically, we formulate the STN identification problem as an integer linear programming (ILP) model, which can be solved by a relaxed linear programming algorithm and is flexible for handling various prior information without any restriction on the network structures. The numerical results on yeast MAPK signaling pathways demonstrate that the proposed ILP model is able to uncover STNs or pathways in an efficient and accurate manner. In particular, the prediction results are found to be in high agreement with current biological knowledge and available information in the literature. In addition, the proposed model is simple to interpret and easy to implement even for a large-scale system.
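The relax-and-round strategy (solve the ILP's LP relaxation, then round) can be illustrated on a standard combinatorial example, minimum vertex cover on a small cycle, rather than the STN model itself; the graph and rounding threshold are illustrative:

```python
import numpy as np
from scipy.optimize import linprog

# Minimum vertex cover on a 4-cycle as an ILP, relaxed to an LP.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
n = 4
A = np.zeros((len(edges), n))
for k, (i, j) in enumerate(edges):
    A[k, i] = A[k, j] = -1.0         # x_i + x_j >= 1  ->  -x_i - x_j <= -1
b = -np.ones(len(edges))

res = linprog(c=np.ones(n), A_ub=A, b_ub=b,
              bounds=[(0, 1)] * n, method="highs")
cover = res.x >= 0.5                  # deterministic rounding of the relaxation
print(round(res.fun, 2), int(cover.sum()))
```

For vertex cover the relaxation is half-integral, so thresholding at 0.5 always yields a feasible cover; the ILP in the paper uses the same relax-then-recover idea on a different constraint system.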
Fan, Yurui; Huang, Guohe; Veawab, Amornvadee
2012-01-01
In this study, a generalized fuzzy linear programming (GFLP) method was developed to deal with uncertainties expressed as fuzzy sets that exist in the constraints and objective function. A stepwise interactive algorithm (SIA) was proposed to solve the GFLP model and generate solutions expressed as fuzzy sets. To demonstrate its application, the developed GFLP method was applied to a regional sulfur dioxide (SO2) control planning model to identify effective SO2 mitigation policies with a minimized system performance cost under uncertainty. The results represent the amount of SO2 allocated to different control measures from different sources. Compared with the conventional interval-parameter linear programming (ILP) approach, the solutions obtained through GFLP were expressed as fuzzy sets, which can provide intervals for the decision variables and objective function, as well as the related possibilities. Therefore, the decision makers can trade off model stability against plausibility based on the solutions obtained through GFLP and then identify desired policies for SO2-emission control under uncertainty.
Radial-interval linear programming for environmental management under varied protection levels.
Tan, Qian; Huang, Guo H; Cai, Yanpeng
2010-09-01
In this study, a radial-interval linear programming (RILP) approach was developed for supporting waste management under uncertainty. RILP improved interval-parameter linear programming and its extensions in terms of input reasonableness and output robustness. From the perspective of modeling inputs, RILP could tackle highly uncertain information at the bounds of interval parameters through introducing the concept of a fluctuation radius. Regarding modeling outputs, RILP allows controlling the degree of conservatism associated with interval solutions and is capable of quantifying the corresponding system risks and benefits. This could facilitate reflecting the interactive relationship between the feasibility of the system and the uncertainty of its parameters. A computationally tractable algorithm was provided to solve RILP. Then, a long-term waste management case was studied to demonstrate the applicability of the developed methodology. A series of interval solutions obtained under varied protection levels were compared, helping gain insights into the interactions among protection level, violation risk, and system cost. Potential waste allocation alternatives could be generated from these interval solutions, which would be screened in real-world practice according to various projected system conditions as well as decision-makers' willingness to pay and risk tolerance levels. Sensitivity analysis further revealed the significant impact of the fluctuation radii of interval parameters on the system. The results indicated that RILP is applicable to a wide spectrum of environmental management problems that are subject to compound uncertainties.
A wavelet-linear genetic programming model for sodium (Na+) concentration forecasting in rivers
NASA Astrophysics Data System (ADS)
Ravansalar, Masoud; Rajaee, Taher; Zounemat-Kermani, Mohammad
2016-06-01
The prediction of water quality parameters in water resources such as rivers is an important issue for the better management of irrigation systems and water supplies. In this respect, this study proposes a new hybrid wavelet-linear genetic programming (WLGP) model for the prediction of monthly sodium (Na+) concentration. The 23-year monthly data used in this study were measured from the Asi River at the Demirköprü gauging station located in Antakya, Turkey. First, the measured discharge (Q) and Na+ datasets are decomposed into several sub-series using the discrete wavelet transform (DWT). Then, these new sub-series are fed to the linear genetic programming (LGP) model as input patterns to predict monthly Na+ one month ahead. The results of the new proposed WLGP model are compared with LGP, WANN and ANN models. The comparison demonstrates the superiority of the WLGP model over the LGP, WANN and ANN models: the Nash-Sutcliffe efficiencies (NSE) for the WLGP, WANN, LGP and ANN models were 0.984, 0.904, 0.484 and 0.351, respectively. The results even point to the superiority of the single LGP model over the ANN model. Finally, the capability of the proposed WLGP model to predict Na+ peak values is also presented in this study.
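The DWT preprocessing step can be sketched with a hand-written one-level Haar transform; the study's actual wavelet choice and decomposition depth are not reproduced here, and the series is synthetic:

```python
import numpy as np

# One-level Haar DWT: split a toy monthly series into approximation and
# detail sub-series, the kind of inputs the hybrid model is fed.
def haar_dwt(x):
    x = np.asarray(x, dtype=float)
    if len(x) % 2:                   # pad odd-length series by repetition
        x = np.append(x, x[-1])
    approx = (x[0::2] + x[1::2]) / np.sqrt(2)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)
    return approx, detail

# Synthetic stand-in for a monthly Na+ series: seasonality plus trend.
series = np.sin(np.linspace(0, 6, 24)) + 0.1 * np.arange(24)
a, d = haar_dwt(series)

# Perfect-reconstruction check: invert the transform.
rec = np.empty(24)
rec[0::2] = (a + d) / np.sqrt(2)
rec[1::2] = (a - d) / np.sqrt(2)
print(np.allclose(rec, series))
```

In the hybrid scheme, each sub-series (and further levels of the approximation) becomes an input pattern to the downstream regression model.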
Briend, André; Darmon, Nicole; Ferguson, Elaine; Erhardt, Juergen G
2003-01-01
During the complementary feeding period, children require a nutrient-dense diet to meet their high nutritional requirements. International interest exists in the promotion of affordable, nutritionally adequate complementary feeding diets based on locally available foods. In this context, two questions are often asked: 1) is it possible to design a diet suitable for the complementary feeding period using locally available food? and 2) if this is possible, what is the lowest-cost, nutritionally adequate diet available? These questions are usually answered using a "trial and error" approach. However, a more efficient and rigorous technique, based on linear programming, is also available. It has become more readily accessible with the advent of powerful personal computers. The purpose of this review, therefore, is to inform pediatricians and public health professionals about this tool. In this review, the basic principles of linear programming are briefly examined and some practical applications for formulating sound food-based nutritional recommendations in different contexts are explained. This review should facilitate the adoption of this technique by international health professionals.
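A minimal least-cost diet LP of the kind the review describes looks like this with SciPy; all foods, nutrient values, prices, and requirements are invented for illustration:

```python
from scipy.optimize import linprog

# Least-cost complementary-feeding diet sketch (all data invented).
# Columns: porridge, beans, egg (per 100 g serving).
nutrients = [[110, 120, 150],        # energy, kcal
             [2.5, 8.0, 12.0],       # protein, g
             [0.5, 2.5, 1.2]]        # iron, mg
need = [600, 12, 5]                  # daily requirement (illustrative)
cost = [0.05, 0.10, 0.30]            # currency units per serving

# Meet every nutrient requirement (>=, flipped to <=) at minimum cost,
# with at most 10 servings of any single food.
res = linprog(c=cost,
              A_ub=[[-v for v in row] for row in nutrients],
              b_ub=[-r for r in need],
              bounds=[(0, 10)] * 3, method="highs")
print(res.status, [round(v, 2) for v in res.x], round(res.fun, 2))
```

The same template extends to real food-composition tables by adding one column per local food and one row per nutrient constraint.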
Chen, Ruoying; Zhang, Zhiwang; Wu, Di; Zhang, Peng; Zhang, Xinyang; Wang, Yong; Shi, Yong
2011-01-21
Protein-protein interactions are fundamentally important in many biological processes, and there is a pressing need to understand the principles of protein-protein interactions. Mutagenesis studies have found that only a small fraction of surface residues, known as hot spots, are responsible for the physical binding in protein complexes. However, revealing hot spots by mutagenesis experiments is usually time-consuming and expensive. In order to complement the experimental efforts, we propose a new computational approach in this paper to predict hot spots. Our method, Rough Set-based Multiple Criteria Linear Programming (RS-MCLP), integrates rough sets theory and multiple criteria linear programming to choose dominant features and computationally predict hot spots. Our approach is benchmarked on a dataset of 904 alanine-mutated residues, and the results show that our RS-MCLP method performs better than other methods, e.g., MCLP, Decision Tree, Bayes Net, and the existing HotSprint database. In addition, we reveal several biological insights based on our analysis. We find that four features (the change of accessible surface area, percentage of the change of accessible surface area, size of a residue, and atomic contacts) are critical in predicting hot spots. Furthermore, we find that three residues (Tyr, Trp, and Phe) are abundant in hot spots through analyzing the distribution of amino acids.
Warid, Warid; Hizam, Hashim; Mariun, Norman; Abdul-Wahab, Noor Izzri
2016-01-01
This paper proposes a new formulation for the multi-objective optimal power flow (MOOPF) problem for meshed power networks considering distributed generation. An efficacious multi-objective fuzzy linear programming optimization (MFLP) algorithm is proposed to solve the aforementioned problem with and without considering the distributed generation (DG) effect. A variant combination of objectives is considered for simultaneous optimization, including power loss, voltage stability, and shunt capacitors MVAR reserve. Fuzzy membership functions for these objectives are designed with extreme targets, whereas the inequality constraints are treated as hard constraints. The multi-objective fuzzy optimal power flow (OPF) formulation was converted into a crisp OPF in a successive linear programming (SLP) framework and solved using an efficient interior point method (IPM). To test the efficacy of the proposed approach, simulations are performed on the IEEE 30-bus and IEEE 118-bus test systems. The MFLP optimization is solved for several optimization cases. The obtained results are compared with those presented in the literature. A unique solution with a high satisfaction for the assigned targets is gained. Results demonstrate the effectiveness of the proposed MFLP technique in terms of solution optimality and rapid convergence. Moreover, the results indicate that using the optimal DG location with the MFLP algorithm provides the solution with the highest quality. PMID:26954783
Averaging and Linear Programming in Some Singularly Perturbed Problems of Optimal Control
Gaitsgory, Vladimir; Rossomakhine, Sergey
2015-04-15
The paper aims at the development of an apparatus for the analysis and construction of near-optimal solutions of singularly perturbed (SP) optimal control problems (that is, problems of optimal control of SP systems) considered on the infinite time horizon. We mostly focus on problems with time-discounting criteria, but a possible extension of the results to periodic optimization problems is discussed as well. Our consideration is based on earlier results on averaging of SP control systems and on linear programming formulations of optimal control problems. The idea that we exploit is to first asymptotically approximate a given problem of optimal control of the SP system by a certain averaged optimal control problem, then reformulate this averaged problem as an infinite-dimensional linear programming (LP) problem, and then approximate the latter by semi-infinite LP problems. We show that the optimal solutions of these semi-infinite LP problems and their duals (which can be found with the help of a modification of available LP software) allow one to construct near-optimal controls of the SP system. We demonstrate the construction with two numerical examples.
Djukanovic, M.; Babic, B.; Milosevic, B.; Sobajic, D.J.; Pao, Y.H.
1996-05-01
In this paper the blending/transloading facilities are modeled using interactive fuzzy linear programming (FLP), in order to allow the decision-maker to deal with the uncertainty of input information within fuel scheduling optimization. An interactive decision-making process is formulated in which the decision-maker can learn to recognize good solutions by considering all possibilities of fuzziness. The application of the fuzzy formulation is accompanied by a careful examination of the definition of fuzziness, the appropriateness of the membership function, and the interpretation of results. The proposed concept provides a decision support system with integration-oriented features, whereby the decision-maker can learn to recognize the relative importance of factors in the specific domain of the optimal fuel scheduling (OFS) problem. The formulation of a fuzzy linear programming problem to obtain a reasonable nonfuzzy solution under consideration of the ambiguity of parameters, represented by fuzzy numbers, is introduced. An additional advantage of the FLP formulation is its ability to deal with multi-objective problems.
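One standard way to turn a fuzzy LP into a crisp one is to maximize a common satisfaction level λ under linear membership functions (the Zimmermann approach, used here as a stand-in for the paper's formulation); all numbers are invented:

```python
from scipy.optimize import linprog

# Fuzzy goal: cost "about <= 100, definitely <= 120", so membership is
# lambda = 1 at cost <= 100, lambda = 0 at cost = 120, linear in between.
# Crisp equivalent: maximize lambda subject to
#   30*x1 + 20*x2 <= 120 - 20*lambda, and the hard demand x1 + x2 >= 5.5.
# Variables: [x1, x2, lam].
c = [0.0, 0.0, -1.0]                      # maximize lambda
A_ub = [[30.0, 20.0, 20.0],               # 30x1 + 20x2 + 20*lam <= 120
        [-1.0, -1.0, 0.0]]                # x1 + x2 >= 5.5
b_ub = [120.0, -5.5]
res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, None), (0, None), (0, 1)], method="highs")
x1, x2, lam = res.x
print(round(lam, 2), round(30 * x1 + 20 * x2, 1))
```

The optimal λ reports how well the fuzzy cost goal can be met while the hard constraint is respected; here the cheapest feasible plan costs 110, giving λ = 0.5.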
ERIC Educational Resources Information Center
Nakhanu, Shikuku Beatrice; Musasia, Amadalo Maurice
2015-01-01
The topic Linear Programming is included in the compulsory Kenyan secondary school mathematics curriculum at form four. The topic provides skills for determining best outcomes in a given mathematical model involving linear relationships. This technique has found application in business and economics, as well as various engineering fields. Yet many…
Tonkin, Matthew J.; Tiedeman, Claire R.; Ely, D. Matthew; Hill, Mary C.
2007-01-01
The OPR-PPR program calculates the Observation-Prediction (OPR) and Parameter-Prediction (PPR) statistics that can be used to evaluate the relative importance of various kinds of data to simulated predictions. The data considered fall into three categories: (1) existing observations, (2) potential observations, and (3) potential information about parameters. The first two are addressed by the OPR statistic; the third is addressed by the PPR statistic. The statistics are based on linear theory and measure the leverage of the data, which depends on the location, the type, and possibly the time of the data being considered. For example, in a ground-water system the type of data might be a head measurement at a particular location and time. As a measure of leverage, the statistics do not take into account the value of the measurement. As linear measures, the OPR and PPR statistics require minimal computational effort once sensitivities have been calculated. Sensitivities need to be calculated for only one set of parameter values; commonly these are the values estimated through model calibration. OPR-PPR can calculate the OPR and PPR statistics for any mathematical model that produces the necessary OPR-PPR input files. In this report, OPR-PPR capabilities are presented in the context of using the ground-water model MODFLOW-2000 and the universal inverse program UCODE_2005. The method used to calculate the OPR and PPR statistics is based on the linear equation for prediction standard deviation. Using sensitivities and other information, OPR-PPR calculates (a) the percent increase in the prediction standard deviation that results when one or more existing observations are omitted from the calibration data set; (b) the percent decrease in the prediction standard deviation that results when one or more potential observations are added to the calibration data set; or (c) the percent decrease in the prediction standard deviation that results when potential information on one
Matthew J. Tonkin; Claire R. Tiedeman; D. Matthew Ely; and Mary C. Hill
2007-08-16
Sun, Wei; Huang, Guo H; Lv, Ying; Li, Gongchen
2012-06-01
To tackle nonlinear economies-of-scale (EOS) effects in interval-parameter constraints for a representative waste management problem, an inexact piecewise-linearization-based fuzzy flexible programming (IPFP) model is developed. In IPFP, interval parameters for waste amounts and transportation/operation costs can be quantified; aspiration levels for net system costs, as well as tolerance intervals for both capacities of waste treatment facilities and waste generation rates, can be reflected; and the nonlinear EOS effects transformed from the objective function to the constraints can be approximated. An interactive algorithm is proposed for solving the IPFP model, which in nature is an interval-parameter mixed-integer quadratically constrained programming model. To demonstrate the IPFP's advantages, two alternative models are developed to compare their performances. One is a conventional linear-regression-based inexact fuzzy programming model (IPFP2) and the other is an IPFP model with all right-hand sides of fuzzy constraints being the corresponding interval numbers (IPFP3). The comparison results between IPFP and IPFP2 indicate that the optimized waste amounts would have similar patterns in both models. However, when dealing with EOS effects in constraints, IPFP2 may underestimate the net system costs while IPFP can estimate the costs more accurately. The comparison results between IPFP and IPFP3 indicate that their solutions would be significantly different. The decreased system uncertainties in IPFP's solutions demonstrate its effectiveness for providing more satisfactory interval solutions than IPFP3. Following its first application to waste management, the IPFP can be potentially applied to other environmental problems under multiple complexities.
Linear ground-water flow, flood-wave response program for programmable calculators
Kernodle, John Michael
1978-01-01
Two programs are documented which solve a discretized analytical equation derived to determine head changes at a point in a one-dimensional ground-water flow system. The programs, written for programmable calculators, are in widely divergent but commonly encountered languages and serve to illustrate the adaptability of the linear model to use in situations where access to true computers is not possible or economical. The analytical method assumes a semi-infinite aquifer which is uniform in thickness and hydrologic characteristics, bounded on one side by an impermeable barrier and on the other parallel side by a fully penetrating stream in complete hydraulic connection with the aquifer. Ground-water heads may be calculated for points along a line which is perpendicular to the impermeable barrier and the fully penetrating stream. Head changes at the observation point are dependent on (1) the distance between that point and the impermeable barrier, (2) the distance between the line of stress (the stream) and the impermeable barrier, (3) aquifer diffusivity, (4) time, and (5) head changes along the line of stress. The primary application of the programs is to determine aquifer diffusivity by the flood-wave response technique. (Woodard-USGS)
NASA Technical Reports Server (NTRS)
Young, Katherine C.; Sobieszczanski-Sobieski, Jaroslaw
1988-01-01
This project has two objectives. The first is to determine whether linear programming techniques can improve performance, relative to the feasible directions algorithm, when handling design optimization problems with a large number of design variables and constraints. The second is to determine whether using the Kreisselmeier-Steinhauser (KS) function to replace the constraints with one constraint will reduce the cost of total optimization. Comparisons are made using solutions obtained with linear and non-linear methods. The results indicate that there is no cost saving using the linear method or in using the KS function to replace constraints.
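The Kreisselmeier-Steinhauser function mentioned above aggregates many constraints g_i(x) ≤ 0 into one smooth, conservative envelope; a minimal sketch follows (the ρ value is a typical choice, not the study's):

```python
import numpy as np

# KS aggregation: KS(g) = max(g) + log(sum exp(rho*(g - max(g)))) / rho.
# It bounds max(g) from above, and tightens toward it as rho grows.
def ks(g, rho=50.0):
    g = np.asarray(g, dtype=float)
    gmax = g.max()                   # shift for numerical stability
    return gmax + np.log(np.sum(np.exp(rho * (g - gmax)))) / rho

# Three constraint values at some design point (illustrative).
g = np.array([-0.5, -0.1, 0.02])
print(round(ks(g), 4), g.max())
```

Because KS(g) ≥ max(g) always, and KS(g) ≤ max(g) + ln(m)/ρ for m constraints, enforcing KS(g) ≤ 0 is a smooth conservative replacement for enforcing every g_i ≤ 0.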
NASA Technical Reports Server (NTRS)
Armstrong, E. S.
1975-01-01
A digital computer program (ORACLS) for implementing the optimal regulator theory approach to the design of controllers for linear time-invariant systems is described. The user-oriented program employs the latest numerical techniques and is applicable to both the digital and continuous control problems.
Neji, Radhouène; Besbes, Ahmed; Komodakis, Nikos; Deux, Jean-François; Maatouk, Mezri; Rahmouni, Alain; Bassez, Guillaume; Fleury, Gilles; Paragios, Nikos
2009-01-01
In this paper, we present a manifold clustering method for the classification of fibers obtained from diffusion tensor images (DTI) of the human skeletal muscle. Using a linear programming formulation of prototype-based clustering, we propose a novel fiber classification algorithm over manifolds that circumvents the necessity to embed the data in low-dimensional spaces and determines the number of clusters automatically. Furthermore, we propose the use of angular Hilbertian metrics between multivariate normal distributions to define a family of distances between tensors that we generalize to fibers. These metrics are used to approximate the geodesic distances over the fiber manifold. We also discuss the case where only geodesic distances to a reduced set of landmark fibers are available. The experimental validation of the method is done using a sizeable, manually annotated dataset of DTI of the calf muscle for healthy and diseased subjects.
A minimax technique for time-domain design of preset digital equalizers using linear programming
NASA Technical Reports Server (NTRS)
Vaughn, G. L.; Houts, R. C.
1975-01-01
A linear programming technique is presented for the design of a preset finite-impulse response (FIR) digital filter to equalize the intersymbol interference (ISI) present in a baseband channel with known impulse response. A minimax technique is used which minimizes the maximum absolute error between the actual received waveform and a specified raised-cosine waveform. Transversal and frequency-sampling FIR digital filters are compared as to the accuracy of the approximation, the resultant ISI and the transmitted energy required. The transversal designs typically have slightly better waveform accuracy for a given distortion; however, the frequency-sampling equalizer uses fewer multipliers and requires less transmitted energy. A restricted transversal design is shown to use the least number of multipliers at the cost of a significant increase in energy and loss of waveform accuracy at the receiver.
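The minimax design reduces to a linear program via the standard Chebyshev-approximation trick: introduce a bound t on the absolute error at every sample and minimize t. The channel, tap count, and target below are invented, and the target is a delayed unit pulse rather than a raised-cosine waveform:

```python
import numpy as np
from scipy.optimize import linprog

h = np.array([1.0, 0.5, 0.2])        # toy channel impulse response
n_taps, n_out = 5, 7                 # FIR equalizer length; output samples

# Convolution matrix: (H @ w) is the equalized response.
H = np.zeros((n_out, n_taps))
for i in range(n_out):
    for j in range(n_taps):
        if 0 <= i - j < len(h):
            H[i, j] = h[i - j]
d = np.zeros(n_out)
d[2] = 1.0                           # desired: delayed unit pulse

# Variables [w, t]: minimize t subject to -t <= (H @ w - d)_i <= t.
A_ub = np.vstack([np.hstack([H, -np.ones((n_out, 1))]),
                  np.hstack([-H, -np.ones((n_out, 1))])])
b_ub = np.concatenate([d, -d])
c = np.zeros(n_taps + 1)
c[-1] = 1.0
res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(None, None)] * n_taps + [(0, None)], method="highs")
w, t = res.x[:-1], res.x[-1]
print(round(t, 4))
```

The optimal t is exactly the minimax error achievable with this tap count; adding taps or allowing more delay drives it down further.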
Approximating high-dimensional dynamics by barycentric coordinates with linear programming.
Hirata, Yoshito; Shiro, Masanori; Takahashi, Nozomu; Aihara, Kazuyuki; Suzuki, Hideyuki; Mas, Paloma
2015-01-01
The increasing development of novel measurement methods and techniques facilitates the collection of high-dimensional time series but challenges our ability to model and predict them accurately. A general mathematical model requires many parameters, which are difficult to fit from the relatively short high-dimensional time series observed. Here, we propose a novel method to accurately model a high-dimensional time series. Our method extends barycentric coordinates to high-dimensional phase space by employing linear programming, while representing the approximation errors explicitly. The extension helps to produce free-running time-series predictions that preserve typical topological, dynamical, and/or geometric characteristics of the underlying attractors more accurately than the widely used radial basis function model. The method can be broadly applied, from helping to improve weather forecasting, to creating electronic instruments that sound more natural, and to comprehensively understanding complex biological data.
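The core construction, finding nonnegative weights that sum to one and reproduce a query state from stored states while handling approximation error explicitly, can be sketched as an LP (synthetic data, toy dimensions; not the authors' full prediction pipeline):

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
past = rng.normal(size=(6, 10))      # 6 stored states, 10-dimensional
query = 0.3 * past[0] + 0.7 * past[3]  # lies in their convex hull

m, dim = past.shape
# Variables [w (m), e (dim)]: minimize sum(e)
# subject to |past.T @ w - query| <= e (componentwise), sum(w) = 1, w >= 0.
A_ub = np.vstack([np.hstack([past.T, -np.eye(dim)]),
                  np.hstack([-past.T, -np.eye(dim)])])
b_ub = np.concatenate([query, -query])
A_eq = np.hstack([np.ones((1, m)), np.zeros((1, dim))])
res = linprog(np.concatenate([np.zeros(m), np.ones(dim)]),
              A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
              bounds=[(0, None)] * (m + dim), method="highs")
w = res.x[:m]
print(np.round(w, 2))
```

A prediction then follows by applying the recovered weights to the stored states' successors; here the LP recovers the generating weights with zero error because the query is exactly in the hull.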
Optimization Of Irrigation Area of Ukai Right Bank Main Canal-A Linear Programming Approach
NASA Astrophysics Data System (ADS)
Bhuvandas, Nishi; Mirajkar, A. B.; Timbadiya, P. V.; Patel, P. L.
2010-11-01
This paper presents a Linear Programming (LP) model for obtaining the optimized cropping area in the command of the Ukai reservoir. The objective is to maximize the sum of the relative yields from all crops in the irrigated area for specific levels of water availability, such as 100%, 90%, 80% and 70%. The present study aims to obtain the optimal allocation of irrigation water depending upon the availability of water from the source. The net revenue from agricultural production is maximized for the available irrigation water, taking into account sets of constraints on crop area, cropping pattern and water requirement. The model is applied to a part of the Ukai reservoir system, namely the Ukai Right Bank Main Canal (URBMC), in Gujarat state, India.
Decomposition and (importance) sampling techniques for multi-stage stochastic linear programs
Infanger, G.
1993-11-01
The difficulty of solving large-scale multi-stage stochastic linear programs arises from the sheer number of scenarios associated with numerous stochastic parameters. The number of scenarios grows exponentially with the number of stages, and problems easily get out of hand even for very moderate numbers of stochastic parameters per stage. Our method combines dual (Benders) decomposition with Monte Carlo sampling techniques. We employ importance sampling to efficiently obtain accurate estimates of the expected future costs and of the gradients and right-hand sides of cuts. The method enables us to solve practical large-scale problems with many stages and numerous stochastic parameters per stage. We discuss the theory of sharing and adjusting cuts between different scenarios in a stage. We derive probabilistic lower and upper bounds, where we use importance path sampling for the upper-bound estimation. Initial numerical results are promising.
Boundary detection by linear programming with application to lung fields segmentation
NASA Astrophysics Data System (ADS)
Ibragimov, Bulat; Likar, Boštjan; Pernuš, Franjo
2011-03-01
Medical image segmentation is typically used to locate boundaries of anatomical structures in images acquired by different modalities. As segmentation is of utmost importance for quantitative measurements and analysis of anatomical structures, tracking anatomical changes over time, building anatomical atlases, and visualization of medical images, a large number of methods has been developed and tested on a wide range of applications in the past. Deformable or parametric shape models are a class of methods that have been widely used for segmentation. A drawback of deformable model approaches is that they require initialization near the final solution. In this paper, we present a segmentation algorithm that incorporates prior knowledge and is composed of two steps. First, reference points on the boundary of an anatomical structure are found by linear programming incorporating prior knowledge. Second, paths between reference points, representing boundary segments, are searched for by optimal control. The segmentation method has been applied to chest radiographs from the publicly available SCR database.
A primary shift rotation nurse scheduling using zero-one linear goal programming.
Huarng, F
1999-01-01
In this study, the author discusses the effect of nurse shift schedules on circadian rhythm and some important ergonomics criteria. The author also reviews and compares different nurse shift scheduling methods via the criteria of flexibility, fairness, continuity in shift assignments, nurses' preferences, and ergonomics principles. In this article, a primary shift rotation system is proposed to provide better continuity in shift assignments to satisfy nurses' preferences. The primary shift rotation system is modeled as a zero-one linear goal programming (LGP) problem. To generate the shift assignment for a unit with 13 nurses, the zero-one LGP model takes less than 3 minutes on average, whereas the head nurses spend approximately 2 to 3 hours on shift scheduling. This study reports the process of implementing the primary shift rotation system.
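A zero-one goal program of the kind described above minimizes deviations from soft "goals" subject to hard coverage constraints. A minimal toy sketch, solved by exhaustive search rather than an LGP solver; the nurses, coverage level, and workload goal are hypothetical, not the paper's model.

```python
# Tiny 0-1 goal-programming toy: 3 nurses, 3 days, exactly 2 nurses on duty
# each day (hard constraint), goal of 2 working days per nurse (soft goal
# whose absolute deviations are minimized).
from itertools import product

NURSES, DAYS, COVERAGE, GOAL = 3, 3, 2, 2

best_dev, best_roster = None, None
for bits in product((0, 1), repeat=NURSES * DAYS):
    roster = [bits[n * DAYS:(n + 1) * DAYS] for n in range(NURSES)]
    # Hard constraint: exactly COVERAGE nurses on duty each day.
    if any(sum(roster[n][d] for n in range(NURSES)) != COVERAGE
           for d in range(DAYS)):
        continue
    # Goal deviations |workdays - GOAL|: the quantity an LGP model minimizes.
    dev = sum(abs(sum(row) - GOAL) for row in roster)
    if best_dev is None or dev < best_dev:
        best_dev, best_roster = dev, roster

print(best_dev, best_roster)
```

At realistic sizes (13 nurses, 28 days) enumeration is hopeless, which is why the paper formulates the roster as a zero-one LGP and hands it to a solver.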
Approximating high-dimensional dynamics by barycentric coordinates with linear programming
Hirata, Yoshito; Aihara, Kazuyuki; Suzuki, Hideyuki; Shiro, Masanori; Takahashi, Nozomu; Mas, Paloma
2015-01-15
The increasing development of novel measurement methods and techniques facilitates the acquisition of high-dimensional time series but challenges our ability to model and predict them accurately. A general mathematical model requires many parameters, which are difficult to fit from the relatively short high-dimensional time series typically observed. Here, we propose a novel method for accurately modeling a high-dimensional time series. Our method extends barycentric coordinates to high-dimensional phase space by employing linear programming and allowing for the approximation errors explicitly. The extension helps to produce free-running time-series predictions that preserve typical topological, dynamical, and/or geometric characteristics of the underlying attractors more accurately than the widely used radial basis function model. The method can be broadly applied, from improving weather forecasting, to creating electronic instruments that sound more natural, to comprehensively understanding complex biological data.
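The basic building block of the approach above is expressing a point as barycentric coordinates, i.e. the convex weights of simplex vertices that reproduce it. A minimal 2-D sketch, where a triangle and Cramer's rule stand in for the paper's high-dimensional LP formulation; the coordinates are invented for illustration.

```python
# Barycentric coordinates of a point p inside triangle (a, b, c) in 2-D.
def barycentric_2d(p, a, b, c):
    """Weights (wa, wb, wc), summing to 1, with wa*a + wb*b + wc*c == p."""
    det = (b[1] - c[1]) * (a[0] - c[0]) + (c[0] - b[0]) * (a[1] - c[1])
    wa = ((b[1] - c[1]) * (p[0] - c[0]) + (c[0] - b[0]) * (p[1] - c[1])) / det
    wb = ((c[1] - a[1]) * (p[0] - c[0]) + (a[0] - c[0]) * (p[1] - c[1])) / det
    return wa, wb, 1.0 - wa - wb

w = barycentric_2d((0.25, 0.25), (0.0, 0.0), (1.0, 0.0), (0.0, 1.0))
print(w)
```

In the paper's setting the "simplex" vertices are nearby observed states in phase space and the weights come from an LP with explicit approximation-error terms, rather than from an exact linear solve as here.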
Huang, Hao; Zhang, Guifu; Zhao, Kun; Giangrande, Scott E.
2016-10-20
A hybrid method combining linear programming (LP) and physical constraints is developed to estimate specific differential phase (K_{DP}) and to improve rain estimation. Moreover, the hybrid K_{DP} estimator and the existing estimators based on LP, least-squares fitting, and a self-consistent relation of polarimetric radar variables are evaluated and compared using simulated data. Our simulation results indicate the new estimator's superiority, particularly in regions where the backscattering phase (δ_{hv}) dominates. Further, a quantitative comparison between auto-weather-station rain-gauge observations and K_{DP}-based radar rain estimates for a Meiyu event also demonstrates the superiority of the hybrid K_{DP} estimator over existing methods.
Linear programming method for computing the gamut of object color solid.
Li, Changjun; Luo, M Ronnier; Cho, Maeng-Sub; Kim, Jin-Seo
2010-05-01
Recently there has been great interest in establishing the color gamut of the object color solid, i.e., the optimum colors. The optimum colors are widely used for quantifying the quality of light sources and evaluating reproduction devices. An enumeration method was developed by Martinez-Verdu et al. [J. Opt. Soc. Am. A 24, 1501 (2007)] for finding the optimum colors. However, that method was found to be too time-consuming. In this paper, a linear programming approach is proposed. The proposed method is simpler and faster and has the advantage of preserving the characteristics of the true boundary. A comparison of the present method with that of Martinez-Verdu et al. is also given.
A Linear Programming Economic Analysis of Lake Quality Improvements Using Phosphorus Buffer Curves
NASA Astrophysics Data System (ADS)
Ogg, Clayton W.; Pionke, Harry B.; Heimlich, Ralph E.
1983-02-01
A linear programming model is used to evaluate the economic feasibility of reducing phosphorus loads from cropland to levels that are expected to adequately alter the trophic conditions of a water supply reservoir. The model employs phosphorus buffer curves for distributing phosphorus losses between runoff and eroded soil. Phosphorus pollution reductions are estimated for conservation activities according to the amount of erosion control and phosphorus fertility status. The planning model is intended to provide the best available estimates of pollution control attainable with given budget outlays, as well as to allocate pollution control funds efficiently among watersheds. It also contains sufficient detail to suggest practices for each local soil that are consistent with water quality plans.
Baran, Richard; Northen, Trent R
2013-10-15
Untargeted metabolite profiling using liquid chromatography and mass spectrometry coupled via electrospray ionization is a powerful tool for the discovery of novel natural products, metabolic capabilities, and biomarkers. However, elucidating the identities of uncharacterized metabolites from spectral features remains challenging. A critical step in the metabolite identification workflow is the assignment of redundant spectral features (adducts, fragments, multimers) and the calculation of the underlying chemical formula. Experts can inspect the data using computational tools that solve partial problems (e.g., chemical formula calculation for individual ions) to disambiguate alternative solutions and provide reliable results. However, manual curation is tedious and not readily scalable or standardized. Here we describe RAMSI, an automated procedure for robust automated mass spectra interpretation and chemical formula calculation using mixed-integer linear programming optimization. Chemical rules among related ions are expressed as linear constraints, and both the spectra interpretation and the chemical formula calculation are performed in a single optimization step. This approach is unbiased in that it does not require predefined sets of neutral losses, and positive- and negative-polarity spectra can be combined in a single optimization. The procedure was evaluated with 30 experimental mass spectra and was found to effectively identify the protonated or deprotonated molecule ([M + H](+) or [M - H](-)) while being robust to the presence of background ions. RAMSI provides a much-needed standardized tool for interpreting ions for subsequent identification in untargeted metabolomics workflows.
PAPR reduction in FBMC using an ACE-based linear programming optimization
NASA Astrophysics Data System (ADS)
van der Neut, Nuan; Maharaj, Bodhaswar TJ; de Lange, Frederick; González, Gustavo J.; Gregorio, Fernando; Cousseau, Juan
2014-12-01
This paper presents four novel techniques for peak-to-average power ratio (PAPR) reduction in filter bank multicarrier (FBMC) modulation systems. As its main contribution, the approach extends current active constellation extension (ACE) PAPR-reduction methods, as used in orthogonal frequency division multiplexing (OFDM), to an FBMC implementation. The four techniques fall into two categories: linear programming (LP) optimization ACE-based techniques and smart gradient-project (SGP) ACE techniques. The LP-based techniques compensate for the symbol overlaps by utilizing a frame-based approach and provide a theoretical upper bound on achievable performance for the overlapping ACE techniques. The overlapping ACE techniques, on the other hand, can handle symbol-by-symbol processing. Furthermore, as a result of FBMC properties, the proposed techniques do not require side-information transmission. The PAPR performance of the techniques is shown to match, or in some cases improve on, current PAPR techniques for FBMC. Initial analysis of the computational complexity of the SGP techniques indicates that the complexity issues with PAPR reduction in FBMC implementations can be addressed. The out-of-band interference introduced by the techniques is investigated, and it is shown that the interference can be compensated for whilst still maintaining decent PAPR performance. Additional results are provided by studying the PAPR reduction of the proposed techniques at a fixed clipping probability. The bit error rate (BER) degradation is investigated to ensure that the trade-off in terms of BER degradation is not too severe. As illustrated by exhaustive simulations, the proposed SGP ACE-based techniques are ideal candidates for practical implementation in systems employing the low-complexity polyphase implementation of FBMC modulators. The methods are shown to offer significant PAPR reduction and increase the feasibility of FBMC as
Yu, Guan; Liu, Yufeng; Thung, Kim-Han; Shen, Dinggang
2014-01-01
Accurately identifying mild cognitive impairment (MCI) individuals who will progress to Alzheimer's disease (AD) is very important for making early interventions. Many classification methods focus on integrating multiple imaging modalities such as magnetic resonance imaging (MRI) and fluorodeoxyglucose positron emission tomography (FDG-PET). However, the main challenge for MCI classification using multiple imaging modalities is the large amount of missing data: for example, in the Alzheimer's Disease Neuroimaging Initiative (ADNI) study, almost half of the subjects do not have PET images. In this paper, we propose a new and flexible binary classification method, namely Multi-task Linear Programming Discriminant (MLPD) analysis, for incomplete multi-source feature learning. Specifically, we decompose the classification problem into different classification tasks, i.e., one for each combination of available data sources. To solve all the classification tasks jointly, the proposed MLPD method links them together by constraining them to achieve a similar estimated mean difference between the two classes (under classification) for the shared features. Compared with the state-of-the-art incomplete Multi-Source Feature (iMSF) learning method, which constrains different classification tasks to choose a common feature subset for the shared features, MLPD can flexibly and adaptively choose different feature subsets for different classification tasks. Furthermore, MLPD can be efficiently implemented by linear programming. To validate MLPD, we perform experiments on the ADNI baseline dataset with incomplete MRI and PET images from 167 progressive MCI (pMCI) subjects and 226 stable MCI (sMCI) subjects. We further compared our method with the iMSF method (using incomplete MRI and PET images) and also the single-task classification method (using only MRI or only subjects with both MRI and PET images
Tetens, Inge; Dejgård Jensen, Jørgen; Smed, Sinne; Gabrijelčič Blenkuš, Mojca; Rayner, Mike; Darmon, Nicole; Robertson, Aileen
2016-01-01
Background: Food-Based Dietary Guidelines (FBDGs) are developed to promote healthier eating patterns, but increasing food prices may make healthy eating less affordable. The aim of this study was to design a range of cost-minimized, nutritionally adequate, health-promoting food baskets (FBs) that help prevent both micronutrient inadequacy and diet-related non-communicable diseases at the lowest cost. Methods: Average prices for 312 foods were collected within the Greater Copenhagen area. The cost and nutrient content of five different cost-minimized FBs for a family of four were calculated per day using linear programming. The FBs were defined using five different constraint sets: cultural acceptability (CA), dietary guidelines (DG), nutrient recommendations (N), cultural acceptability plus nutrient recommendations (CAN), or dietary guidelines plus nutrient recommendations (DGN). The variety and number of foods in each of the resulting five baskets were increased by limiting the relative share of individual foods. Results: The one-day version of N contained only 12 foods at a minimum cost of DKK 27 (€3.6). The CA, DG, and DGN cost about twice this, and the CAN cost ~DKK 81 (€10.8). The baskets with the greater variety of foods contained from 70 (CAN) to 134 (DGN) foods and cost between DKK 60 (€8.1, N) and DKK 125 (€16.8, DGN). Ensuring that the food baskets cover both dietary guidelines and nutrient recommendations doubled the cost, while cultural acceptability (CAN) tripled it. Conclusion: Use of linear programming facilitates the generation of low-cost food baskets that are nutritionally adequate, health promoting, and culturally acceptable. PMID:27760131
ERIC Educational Resources Information Center
Beane, Donald G.
Sixty-five students in two classes in high school geometry were assigned by a stratified random procedure, on the basis of the Henmon-Nelson Test of Mental Ability, to four experimental groups--two using a linear or a branching type program exclusively, and two switching program type midway through the experiment. A third class, taught by the same…
Takabe, Satoshi; Hukushima, Koji
2016-05-01
Typical behavior of the linear programming (LP) problem is studied as a relaxation of the minimum vertex cover (min-VC), a type of integer programming (IP) problem. A lattice-gas model on the Erdős-Rényi random graphs of α-uniform hyperedges is proposed to express both the LP and IP problems of the min-VC in a common statistical mechanical model with a one-parameter family. Statistical mechanical analyses reveal for α=2 that the LP optimal solution is typically equal to that given by the IP below the critical average degree c=e in the thermodynamic limit. The critical threshold for good accuracy of the relaxation extends the mathematical result c=1 and coincides with the replica symmetry-breaking threshold of the IP. The LP relaxation for the minimum hitting sets with α≥3, minimum vertex covers on α-uniform random graphs, is also studied. Analytic and numerical results strongly suggest that the LP relaxation fails to estimate optimal values above the critical average degree c=e/(α-1), where the replica symmetry is broken.
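The LP-versus-IP gap discussed above can be seen in miniature on a 5-cycle. The vertex-cover LP is known to be half-integral, so its optimum can be found here by brute force over the values {0, 1/2, 1} instead of an LP solver; the graph is a toy example, not the random ensembles studied in the paper.

```python
# Minimum vertex cover on the odd cycle C5: the IP optimum is 3,
# but the LP relaxation achieves 2.5 by setting every vertex to 1/2.
from itertools import product

edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]   # 5-cycle
n = 5

def best(values):
    """Minimum total weight over assignments covering every edge."""
    feasible = (x for x in product(values, repeat=n)
                if all(x[u] + x[v] >= 1 for u, v in edges))
    return min(sum(x) for x in feasible)

ip_opt = best((0, 1))        # integer program: true minimum vertex cover
lp_opt = best((0, 0.5, 1))   # LP relaxation, exploiting half-integrality
print(ip_opt, lp_opt)
```

Summing the edge constraints gives 2·Σx ≥ 5, so 2.5 is indeed the LP optimum: the relaxation underestimates the integral answer, the phenomenon whose typical-case threshold the paper characterizes.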
NASA Technical Reports Server (NTRS)
Wei, Peng; Sridhar, Banavar; Chen, Neil Yi-Nan; Sun, Dengfeng
2012-01-01
A class of strategies has been proposed to reduce contrail formation in United States airspace, in which a 3D grid based on weather data and aircraft cruising altitudes is adjusted to avoid persistent contrail potential areas while accounting for fuel efficiency. In this paper, the authors introduce a contrail avoidance strategy on the 3D grid that considers additional operationally feasible constraints from an air traffic controller's perspective. First, shifting too many aircraft to the same cruising level would make the miles-in-trail at that level smaller than the safety separation threshold; moreover, a high density of aircraft at one cruising level may exceed the controller's workload limit. Therefore, the new model restricts the total number of aircraft at each level. Second, the aircraft count cannot vary too drastically between successive intervals, since the workload of managing climbing/descending aircraft is much larger than that of managing cruising aircraft. The contrail reduction is formulated as an integer programming problem, which is shown to have the property of total unimodularity. Solving the corresponding relaxed linear program with the simplex method therefore provides an optimal and integral solution. Simulation results are provided to illustrate the methodology.
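The relax-and-recover-integrality property used above can be shown in miniature. A sketch under assumed data (a 3x3 assignment problem, not the paper's contrail formulation): its constraint matrix is totally unimodular, so a simplex-type LP solver returns a 0/1 vertex even though integrality is never imposed.

```python
# LP relaxation of a 3x3 assignment problem (invented cost matrix).
# Total unimodularity guarantees the optimal vertex is integral.
from scipy.optimize import linprog

n = 3
cost = [4, 1, 3,
        2, 0, 5,
        3, 2, 2]

A_eq, b_eq = [], []
for i in range(n):                       # each row assigned exactly once
    A_eq.append([1 if k // n == i else 0 for k in range(n * n)])
    b_eq.append(1)
for j in range(n):                       # each column assigned exactly once
    A_eq.append([1 if k % n == j else 0 for k in range(n * n)])
    b_eq.append(1)

res = linprog(cost, A_eq=A_eq, b_eq=b_eq, bounds=[(0, 1)] * (n * n))
print(res.fun, [round(v, 6) for v in res.x])
```

Every component of the optimal `res.x` comes out 0 or 1, which is exactly why the paper can solve its relaxed LP with the simplex method and still obtain an integral assignment of aircraft.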
Projection-free parallel quadratic programming for linear model predictive control
NASA Astrophysics Data System (ADS)
Di Cairano, S.; Brand, M.; Bortoff, S. A.
2013-08-01
A key component in enabling the application of model predictive control (MPC) in fields such as automotive, aerospace, and factory automation is the availability of low-complexity, fast optimisation algorithms to solve the MPC finite-horizon optimal control problem in architectures with reduced computational capabilities. In this paper, we introduce a projection-free iterative optimisation algorithm and discuss its application to linear MPC. The algorithm, originally developed by Brand for non-negative quadratic programs, is based on a multiplicative update rule and is shown to converge to a fixed point which is the optimum. An acceleration technique based on a projection-free line search is also introduced to speed up convergence to the optimum. The algorithm is applied to MPC through the dual of the quadratic program (QP) formulated from the MPC finite-time optimal control problem. We discuss how termination conditions with a guaranteed degree of suboptimality can be enforced, and how the algorithm's performance can be optimised by pre-computing the matrices in a parametric form. We show computational results of the algorithm in three common case studies and compare them with the results obtained by other available free and commercial QP solvers.
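A multiplicative update for non-negative QPs of the kind mentioned above can be sketched in a few lines. This is the classic Sha-Saul-Lee-style update for minimizing 0.5·x'Qx + c'x subject to x ≥ 0, shown here on invented 2x2 data; it illustrates the idea, not the authors' exact algorithm.

```python
import math

# Non-negative QP:  minimize 0.5*x'Qx + c'x  subject to x >= 0.
Q = [[2.0, -1.0], [-1.0, 2.0]]   # symmetric positive definite (toy data)
c = [-1.0, -1.0]

Qp = [[max(q, 0.0) for q in row] for row in Q]    # positive part of Q
Qm = [[max(-q, 0.0) for q in row] for row in Q]   # negative part of Q

x = [0.5, 2.0]                   # any strictly positive starting point
for _ in range(1000):
    a = [sum(Qp[i][j] * x[j] for j in range(2)) for i in range(2)]
    b = [sum(Qm[i][j] * x[j] for j in range(2)) for i in range(2)]
    # Multiplicative update: preserves non-negativity with no projection step.
    x = [x[i] * (-c[i] + math.sqrt(c[i] ** 2 + 4 * a[i] * b[i])) / (2 * a[i])
         for i in range(2)]

print(x)   # converges toward the constrained optimum [1, 1]
```

Because each iterate stays non-negative by construction, no projection onto the feasible set is ever needed, which is the property the paper exploits on the dual QP of the MPC problem.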
Chen, Vivian Yi-Ju; Yang, Tse-Chuan
2012-08-01
An increasing interest in exploring spatial non-stationarity has generated several specialized analytic software programs; however, few of these programs can be integrated natively into a well-developed statistical environment such as SAS. We not only developed a set of SAS macro programs to fill this gap, but also expanded the geographically weighted generalized linear modeling (GWGLM) by integrating the strengths of SAS into the GWGLM framework. Three features distinguish our work. First, the macro programs of this study provide more kernel weighting functions than the existing programs. Second, with our codes the users are able to better specify the bandwidth selection process compared to the capabilities of existing programs. Third, the development of the macro programs is fully embedded in the SAS environment, providing great potential for future exploration of complicated spatially varying coefficient models in other disciplines. We provided three empirical examples to illustrate the use of the SAS macro programs and demonstrated the advantages explained above.
Aspect-Object Alignment with Integer Linear Programming in Opinion Mining
Zhao, Yanyan; Qin, Bing; Liu, Ting; Yang, Wei
2015-01-01
Target extraction is an important task in opinion mining. In this task, a complete target consists of an aspect and its corresponding object. However, previous work has always simply regarded the aspect as the target itself and has ignored the important "object" element. Thus, these studies have addressed incomplete targets, which are of limited use for practical applications. This paper proposes a novel and important sentiment analysis task, termed aspect-object alignment, to solve the "object neglect" problem. The objective of this task is to obtain the correct corresponding object for each aspect. We design a two-step framework for this task. We first provide an aspect-object alignment classifier that incorporates three sets of features, namely, the basic, relational, and special target features. However, the objects that are assigned to aspects in a sentence often contradict each other and possess many complicated features that are difficult to incorporate into a classifier. To resolve these conflicts, we impose two types of constraints in the second step: intra-sentence constraints and inter-sentence constraints. These constraints are encoded as linear formulations, and Integer Linear Programming (ILP) is used as an inference procedure to obtain a final global decision that is consistent with the constraints. Experiments on a corpus in the camera domain demonstrate that the three feature sets used in the aspect-object alignment classifier are effective in improving its performance. Moreover, the classifier with ILP inference performs better than the classifier without it, thereby illustrating that the two types of constraints that we impose are beneficial. PMID:26000635
Dyehouse, Melissa; Bennett, Deborah; Harbor, Jon; Childress, Amy; Dark, Melissa
2009-08-01
Logic models are based on linear relationships between program resources, activities, and outcomes, and have been used widely to support both program development and evaluation. While useful in describing some programs, the linear nature of the logic model makes it difficult to capture the complex relationships within larger, multifaceted programs. Causal loop diagrams based on a systems thinking approach can better capture a multidimensional, layered program model while providing a more complete understanding of the relationship between program elements, which enables evaluators to examine influences and dependencies between and within program components. Few studies describe how to conceptualize and apply systems models for educational program evaluation. The goal of this paper is to use our NSF-funded, Interdisciplinary GK-12 project: Bringing Authentic Problem Solving in STEM to Rural Middle Schools to illustrate a systems thinking approach to model a complex educational program to aid in evaluation. GK-12 pairs eight teachers with eight STEM doctoral fellows per program year to implement curricula in middle schools. We demonstrate how systems thinking provides added value by modeling the participant groups, instruments, outcomes, and other factors in ways that enhance the interpretation of quantitative and qualitative data. Limitations of the model include added complexity. Implications include better understanding of interactions and outcomes and analyses reflecting interacting or conflicting variables.
A linear programming model to optimize diets in environmental policy scenarios.
Moraes, L E; Wilen, J E; Robinson, P H; Fadel, J G
2012-03-01
The objective was to develop a linear programming model to formulate diets for dairy cattle when environmental policies are present, and to examine the effects of these policies on diet formulation and on dairy cattle nitrogen and mineral excretions as well as methane emissions. The model was developed as a minimum-cost diet model. Two types of environmental policies were examined: a tax and a constraint on methane emissions. A tax was incorporated to simulate a greenhouse gas emissions tax policy, with prices of carbon credits in current carbon markets attributed to the methane production variable; three independent runs were made, using carbon dioxide equivalent prices of $5, $17, and $250/t. A constraint was incorporated into the model to simulate the second type of environmental policy, reducing methane emissions by predetermined amounts. The linear programming formulation of this second alternative enabled the calculation of marginal costs of reducing methane emissions. Methane emission and manure production by dairy cows were calculated according to published equations, and nitrogen and mineral excretions were calculated by mass conservation laws. Results were compared with the values generated by a base least-cost model. Current carbon credit prices did not appear onerous enough to provide a substantive incentive to reduce methane emissions or alter diet costs of our hypothetical dairy herd. However, when emissions of methane were assumed to be reduced by 5, 10, and 13.5% from the base model, total diet costs increased by 5, 19.1, and 48.5%, respectively. Either these increased costs would be passed on to the consumer or dairy producers would go out of business. Nitrogen and potassium excretions were increased by 16.5 and 16.7% with a 13.5% reduction in methane emissions from the base model. Imposing methane restrictions would further increase the demand for grains and other human-edible crops, which is not a progressive
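The constrained-emissions scenario above can be sketched as a least-cost diet LP with a methane cap appended. Illustrative sketch only: the feed names, prices, nutrient contents, emission factors, and requirements below are invented, not the paper's data or equations.

```python
# Toy least-cost dairy diet LP, then re-solved with a 10% methane cap.
from scipy.optimize import linprog

cost    = [0.12, 0.10, 0.30]     # $/kg DM: corn, alfalfa, soymeal (assumed)
energy  = [1.9, 1.3, 2.0]        # Mcal/kg DM (assumed)
protein = [0.09, 0.18, 0.45]     # kg CP/kg DM (assumed)
methane = [12.0, 18.0, 10.0]     # g CH4/kg DM (assumed)

A_req  = [[-e for e in energy], [-p for p in protein]]  # >= as negated <=
b_req  = [-35.0, -3.0]           # daily needs: 35 Mcal energy, 3 kg protein
bounds = [(0, None)] * 3

base = linprog(cost, A_ub=A_req, b_ub=b_req, bounds=bounds)
base_ch4 = sum(m * x for m, x in zip(methane, base.x))

# Append a constraint forcing a 10% methane reduction from the base diet.
capped = linprog(cost, A_ub=A_req + [methane], b_ub=b_req + [0.9 * base_ch4],
                 bounds=bounds)
print(base.fun, capped.fun)      # diet cost rises once emissions are capped
```

The dual value of the methane row in the capped solve corresponds to the marginal cost of abatement that the paper reports.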
NASA Technical Reports Server (NTRS)
Heidergott, K. W.
1979-01-01
The computer program known as QR is described. Classical control systems analysis and synthesis (root locus, time response, and frequency response) can be performed using this program. Programming details of the QR program are presented.
Identification of human gene structure using linear discriminant functions and dynamic programming
Solovyev, V.V.; Salamov, A.A.; Lawrence, C.B.
1995-12-31
Development of advanced techniques to identify gene structure is one of the main challenges of the Human Genome Project. Discriminant analysis was applied to the construction of recognition functions for various components of gene structure. Linear discriminant functions for splice site, 5′-coding, internal exon, and 3′-coding region recognition have been developed. A gene structure prediction system, FGENE, has been developed based on the exon recognition functions. We compute a graph of mutual compatibility of different exons and present gene structure models as paths of this directed acyclic graph. For optimal model selection we apply a variant of a dynamic programming algorithm to search for the path in the graph with the maximal value of the corresponding discriminant functions. Prediction by FGENE for 185 complete human gene sequences has 81% exact exon recognition accuracy and 91% accuracy at the level of individual exon nucleotides, with a correlation coefficient (C) of 0.90. Testing FGENE on 35 genes not used in the development of the discriminant functions shows 71% accuracy of exact exon prediction and 89% at the nucleotide level (C=0.86). FGENE compares very favorably with the other programs currently used to predict protein-coding regions. Analysis of uncharacterized human sequences based on our methods for splice site (HSPL, RNASPL), internal exon (HEXON), all exon types (FEXH), human (FGENEH) and bacterial (CDSB) gene structure prediction, and recognition of human and bacterial sequences (HBR) (to test a library for E. coli contamination) is available through the University of Houston, the Weizmann Institute of Science network server, and a WWW page of the Human Genome Center at Baylor College of Medicine.
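The dynamic-programming step described above amounts to finding the highest-scoring chain of mutually compatible exons. A schematic sketch with invented coordinates and discriminant scores: exons are compatible when they do not overlap and preserve order, and the best "gene model" is the maximum-score path through the resulting DAG.

```python
# Each candidate exon is (start, end, discriminant score); scores invented.
exons = [(0, 100, 2.1), (50, 160, 1.0), (150, 300, 3.0),
         (310, 400, 1.5), (320, 500, 2.2)]
exons.sort(key=lambda e: e[1])            # process in order of end coordinate

best = [0.0] * len(exons)                 # best chain score ending at exon i
for i, (si, ei, sc) in enumerate(exons):
    # Best predecessor chain among exons ending strictly before this start.
    prev = max((best[j] for j, (_, ej, _) in enumerate(exons[:i]) if ej < si),
               default=0.0)
    best[i] = prev + sc

print(max(best))
```

A real system like FGENE scores exons with the linear discriminant functions and adds splice-site compatibility rules, but the chain-selection recurrence has this shape.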
Lefkoff, L.J.; Gorelick, S.M.
1987-01-01
This report describes a FORTRAN-77 computer program that helps solve a variety of aquifer management problems involving the control of groundwater hydraulics. It is intended for use with any standard mathematical programming package that uses Mathematical Programming System input format. The computer program creates the input files to be used by the optimization program. These files contain all the hydrologic information and management objectives needed to solve the management problem. Used in conjunction with a mathematical programming code, the computer program identifies the pumping or recharge strategy that achieves a user's management objective while maintaining groundwater hydraulic conditions within desired limits. The objective may be linear or quadratic, and may involve the minimization of pumping and recharge rates or of variable pumping costs. The problem may contain constraints on groundwater heads, gradients, and velocities for a complex, transient hydrologic system. Linear superposition of solutions to the transient, two-dimensional groundwater flow equation is used by the computer program in conjunction with the response-matrix optimization method. A unit stress is applied at each decision well, and transient responses at all control locations are computed using a modified version of the U.S. Geological Survey two-dimensional aquifer simulation model. The program also computes discounted cost coefficients for the objective function and accounts for transient aquifer conditions. (Author's abstract)
Christodoulou, Manolis A; Kontogeorgou, Chrysa
2008-10-01
In recent years there has been a great effort to convert the existing Air Traffic Control system into a novel system known as Free Flight. Free Flight is based on the concept that increasing international airspace capacity will grant more freedom to individual pilots during the enroute flight phase, thereby giving them the opportunity to alter flight paths in real time. Under the current system, pilots must request, and then receive, permission from air traffic controllers to alter flight paths. Understandably, the new system allows pilots to gain the upper hand in air traffic. At the same time, however, this freedom increases pilot responsibility. Pilots face a new challenge in avoiding the other traffic that shares congested airspace. In order to ensure safety, an accurate system, able to predict and prevent conflicts among aircraft, is essential. Certain flight maneuvers exist to prevent flight disturbances or collisions, and these are graded in the following categories: vertical, lateral and airspeed. This work focuses on airspeed maneuvers and introduces a new idea for the control of Free Flight, in three dimensions, using neural networks trained with examples prepared through non-linear programming.
Efficient linear programming algorithm to generate the densest lattice sphere packings.
Marcotte, Étienne; Torquato, Salvatore
2013-06-01
Finding the densest sphere packing in d-dimensional Euclidean space R^d is an outstanding fundamental problem with relevance in many fields, including the ground states of molecular systems, colloidal crystal structures, coding theory, discrete geometry, number theory, and biological systems. Numerically generating the densest sphere packings becomes very challenging in high dimensions due to an exponentially increasing number of possible sphere contacts and sphere configurations, even for the restricted problem of finding the densest lattice sphere packings. In this paper we apply the Torquato-Jiao packing algorithm, which is a method based on solving a sequence of linear programs, to robustly reproduce the densest known lattice sphere packings for dimensions 2 through 19. We show that the TJ algorithm is appreciably more efficient at solving these problems than previously published methods. Indeed, in some dimensions, the former procedure can be as much as three orders of magnitude faster at finding the optimal solutions than earlier ones. We also study the suboptimal local density-maxima solutions (inherent structures or "extreme" lattices) to gain insight about the nature of the topography of the "density" landscape.
Lu, Zhao; Sun, Jing; Butts, Kenneth
2014-05-01
Support vector regression for approximating nonlinear dynamic systems is more delicate than the approximation of indicator functions in support vector classification, particularly for systems that involve multitudes of time scales in their sampled data. The kernel used for support vector learning determines the class of functions from which a support vector machine can draw its solution, and the choice of kernel significantly influences the performance of a support vector machine. In this paper, to bridge the gap between wavelet multiresolution analysis and kernel learning, the closed-form orthogonal wavelet is exploited to construct new multiscale asymmetric orthogonal wavelet kernels for linear programming support vector learning. The closed-form multiscale orthogonal wavelet kernel provides a systematic framework to implement multiscale kernel learning via dyadic dilations and also enables us to represent complex nonlinear dynamics effectively. To demonstrate the superiority of the proposed multiscale wavelet kernel in identifying complex nonlinear dynamic systems, two case studies are presented that aim at building parallel models on benchmark datasets. The development of parallel models that address the long-term/mid-term prediction issue is more intricate and challenging than the identification of series-parallel models where only one-step ahead prediction is required. Simulation results illustrate the effectiveness of the proposed multiscale kernel learning.
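A translation-invariant wavelet kernel of the general kind discussed above can be sketched as a product of a mother wavelet over input coordinates, summed over dyadic dilations. This is a generic Morlet-style construction for illustration only, not the paper's closed-form orthogonal wavelet kernel:

```python
import math

# Multiscale wavelet product kernel (illustrative sketch; the mother
# wavelet h(u) = cos(1.75 u) * exp(-u^2 / 2) and the dilation set are
# assumptions, not the paper's construction).
#   K_a(x, z) = prod_i h((x_i - z_i) / a)
# Multiscale learning sums such kernels over dyadic dilations a = 2^j.

def h(u):
    """Morlet-style mother wavelet."""
    return math.cos(1.75 * u) * math.exp(-u * u / 2.0)

def wavelet_kernel(x, z, a):
    """Single-scale product kernel at dilation a."""
    k = 1.0
    for xi, zi in zip(x, z):
        k *= h((xi - zi) / a)
    return k

def multiscale_kernel(x, z, scales=(1.0, 2.0, 4.0)):
    """Sum of kernels over (dyadic) dilations."""
    return sum(wavelet_kernel(x, z, a) for a in scales)

x = [0.0, 1.0]
print(round(wavelet_kernel(x, x, 1.0), 6))  # h(0)^2 = 1.0
```

In a linear programming support vector machine, such a kernel matrix enters the constraints, and the 1-norm of the coefficient vector is minimized, which keeps the training problem itself an LP.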
An improved exploratory search technique for pure integer linear programming problems
NASA Technical Reports Server (NTRS)
Fogle, F. R.
1990-01-01
The development of a heuristic method for the solution of pure integer linear programming problems is documented. The procedure draws its methodology from the ideas of Hooke and Jeeves type 1 and 2 exploratory searches, greedy procedures, and neighborhood searches. It uses an efficient rounding method to obtain its first feasible integer point from the optimal continuous solution obtained via the simplex method. Since this method is based entirely on simple addition or subtraction of one to each variable of a point in n-space and the subsequent comparison of candidate solutions against a given set of constraints, it offers significant complexity improvements over existing techniques. It also obtains the same optimal solution found by the branch-and-bound technique in 44 of 45 small to moderate-size test problems. Two example problems are worked in detail to show the inner workings of the method. Furthermore, using an established weighting scheme for comparing the computational effort involved in an algorithm, the algorithm is compared to the more established and rigorous branch-and-bound method. A computer implementation of the procedure, in PC-compatible Pascal, is also presented and discussed.
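The core move of the heuristic — perturbing each coordinate of the current integer point by plus or minus one and keeping feasible improvements — can be sketched as below. The problem instance is illustrative, not taken from the report, and the steepest-ascent acceptance rule is one plausible variant of the exploratory search:

```python
# Exploratory +/-1 neighborhood search for a pure integer LP (sketch).
# Starting from an integer point (in practice, a rounding of the
# continuous simplex optimum), repeatedly examine all +/-1 moves in
# each coordinate and step to the best feasible improving neighbor.

def is_feasible(x, A, b):
    """Check A x <= b and x >= 0 componentwise."""
    return all(xi >= 0 for xi in x) and all(
        sum(a * xi for a, xi in zip(row, x)) <= bi for row, bi in zip(A, b))

def explore(x, c, A, b):
    """Maximize c.x over feasible integer points by steepest +/-1 moves."""
    obj = lambda v: sum(ci * vi for ci, vi in zip(c, v))
    best = list(x)
    while True:
        neighbors = []
        for i in range(len(best)):
            for step in (1, -1):
                cand = list(best)
                cand[i] += step
                if is_feasible(cand, A, b):
                    neighbors.append(cand)
        top = max(neighbors, key=obj, default=None)
        if top is None or obj(top) <= obj(best):
            return best                      # no improving neighbor: stop
        best = top

# Illustrative problem: max 3x + 2y subject to x + y <= 4, x, y >= 0.
c, A, b = [3, 2], [[1, 1]], [4]
x0 = [0, 0]                  # crude starting integer point
print(explore(x0, c, A, b))  # -> [4, 0], objective 12
```

Like the documented method, this only ever adds or subtracts one from a coordinate and re-checks the constraints; unlike branch-and-bound, it carries no optimality certificate, so it can stop at a local optimum on less friendly instances.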
Morris, Melody K.; Saez-Rodriguez, Julio; Lauffenburger, Douglas A.; Alexopoulos, Leonidas G.
2012-01-01
Modeling of signal transduction pathways plays a major role in understanding cells' function and predicting cellular response. Mathematical formalisms based on a logic formalism are relatively simple but can describe how signals propagate from one protein to the next, and have led to the construction of models that simulate the cell's response to environmental or other perturbations. Constrained fuzzy logic was recently introduced to train models against cell-specific data, resulting in quantitative pathway models of specific cellular behavior. There are two major issues in this pathway optimization: i) excessive CPU time requirements and ii) a loosely constrained optimization problem due to lack of data with respect to large signaling pathways. Herein, we address both issues: the former by reformulating the pathway optimization as a regular nonlinear optimization problem, and the latter by enhanced algorithms to pre/post-process the signaling network to remove parts that cannot be identified given the experimental conditions. As a case study, we tackle the construction of cell-type-specific pathways in normal and transformed hepatocytes using medium- and large-scale functional phosphoproteomic datasets. The proposed Non-Linear Programming (NLP) formulation allows for fast optimization of signaling topologies by combining the versatile nature of logic modeling with state-of-the-art optimization algorithms. PMID:23226239
Boundary Control of Linear Uncertain 1-D Parabolic PDE Using Approximate Dynamic Programming.
Talaei, Behzad; Jagannathan, Sarangapani; Singler, John
2017-03-02
This paper develops a near optimal boundary control method for distributed parameter systems governed by uncertain linear 1-D parabolic partial differential equations (PDEs) by using approximate dynamic programming. A quadratic surface integral is proposed to express the optimal cost functional for the infinite-dimensional state space. Accordingly, the Hamilton-Jacobi-Bellman (HJB) equation is formulated in the infinite-dimensional domain without using any model reduction. Subsequently, a neural network identifier is developed to estimate the unknown spatially varying coefficient in the PDE dynamics. A novel tuning law is proposed to guarantee the boundedness of the identifier approximation error in the PDE domain. A radial basis network (RBN) is subsequently proposed to generate an approximate solution for the optimal surface kernel function online. A tuning law for near-optimal RBN weights is created such that the HJB equation error is minimized while the dynamics are identified and the closed-loop system remains stable. Ultimate boundedness (UB) of the closed-loop system is verified using Lyapunov theory. The performance of the proposed controller is successfully confirmed by simulation on an unstable diffusion-reaction process.
A Mixed Integer Linear Program for Solving a Multiple Route Taxi Scheduling Problem
NASA Technical Reports Server (NTRS)
Montoya, Justin Vincent; Wood, Zachary Paul; Rathinam, Sivakumar; Malik, Waqar Ahmad
2010-01-01
Aircraft movements on taxiways at busy airports often create bottlenecks. This paper introduces a mixed integer linear program to solve a Multiple Route Aircraft Taxi Scheduling Problem. The outputs of the model are in the form of optimal taxi schedules, which include routing decisions for taxiing aircraft. The model extends an existing single route formulation to include routing decisions. An efficient comparison framework compares the multi-route formulation and the single route formulation. The multi-route model is exercised for east side airport surface traffic at Dallas/Fort Worth International Airport to determine whether any arrival taxi time savings can be achieved by allowing arrivals to have two taxi routes: a route that crosses an active departure runway and a perimeter route that avoids the crossing. Results indicate that the multi-route formulation yields reduced arrival taxi times over the single route formulation only when a perimeter taxiway is used. In conditions where the departure aircraft are given an optimal and fixed takeoff sequence, cumulative arrival taxi time savings in the multi-route formulation can be as high as 3.6 hours more than the single route formulation. If the departure sequence is not optimal, the multi-route formulation yields smaller taxi time savings over the single route formulation, but the average arrival taxi time is significantly decreased.
Triple/quadruple patterning layout decomposition via novel linear programming and iterative rounding
NASA Astrophysics Data System (ADS)
Lin, Yibo; Xu, Xiaoqing; Yu, Bei; Baldick, Ross; Pan, David Z.
2016-03-01
As the feature size of the semiconductor technology scales down to 10 nm and beyond, multiple patterning lithography (MPL) has become one of the most practical candidates for lithography, along with other emerging technologies such as extreme ultraviolet lithography (EUVL), e-beam lithography (EBL) and directed self-assembly (DSA). Due to the delay of EUVL and EBL, triple and even quadruple patterning are considered to be used for lower metal and contact layers with tight pitches. In the process of MPL, layout decomposition is the key design stage, where a layout is split into various parts and each part is manufactured through a separate mask. For metal layers, stitching may be allowed to resolve conflicts, while it is forbidden for contact and via layers. In this paper, we focus on the application of layout decomposition where stitching is not allowed, such as for contact and via layers. We propose a linear programming and iterative rounding (LPIR) solving technique to reduce the number of non-integers in the LP relaxation problem. Experimental results show that the proposed algorithms can provide high quality decomposition solutions efficiently while introducing as few conflicts as possible.
Integer Linear Programming for Constrained Multi-Aspect Committee Review Assignment.
Karimzadehgan, Maryam; Zhai, Chengxiang
2012-07-01
Automatic review assignment can significantly improve the productivity of many people such as conference organizers, journal editors and grant administrators. A general setup of the review assignment problem involves assigning a set of reviewers on a committee to a set of documents to be reviewed under the constraint of review quota so that the reviewers assigned to a document can collectively cover multiple topic aspects of the document. No previous work has addressed such a setup of committee review assignments while also considering matching multiple aspects of topics and expertise. In this paper, we tackle the problem of committee review assignment with multi-aspect expertise matching by casting it as an integer linear programming problem. The proposed algorithm can naturally accommodate any probabilistic or deterministic method for modeling multiple aspects to automate committee review assignments. Evaluation using a multi-aspect review assignment test set constructed using ACM SIGIR publications shows that the proposed algorithm is effective and efficient for committee review assignments based on multi-aspect expertise matching.
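On a toy instance, the committee assignment formulation can be sketched directly: a choice of reviewer subset per paper, a per-reviewer quota constraint, and a coverage objective. Exhaustive search stands in for the ILP solver here, and the reviewer-paper score matrix is hypothetical:

```python
# Committee review assignment as a tiny 0/1 program (illustrative sketch).
# Each paper needs k reviewers; each reviewer has a review quota. We
# maximize total aspect-coverage score. A real system would hand this
# formulation to an ILP solver; on this toy instance, exhaustive search
# over reviewer subsets stands in for it.

from itertools import combinations, product

def assign(scores, k, quota):
    """scores[r][p]: coverage of paper p's aspects by reviewer r."""
    n_rev, n_pap = len(scores), len(scores[0])
    options = [list(combinations(range(n_rev), k)) for _ in range(n_pap)]
    best, best_val = None, -1.0
    for choice in product(*options):          # one reviewer subset per paper
        load = [0] * n_rev
        for revs in choice:
            for r in revs:
                load[r] += 1
        if any(l > quota for l in load):      # quota constraint
            continue
        val = sum(scores[r][p] for p, revs in enumerate(choice) for r in revs)
        if val > best_val:
            best, best_val = choice, val
    return best, best_val

# Hypothetical: 3 reviewers, 2 papers, 2 reviewers per paper, quota 2 each.
scores = [[0.9, 0.1],
          [0.5, 0.6],
          [0.2, 0.8]]
choice, value = assign(scores, k=2, quota=2)
print(choice, value)  # best: reviewers (0,1) on paper 0, (1,2) on paper 1
```

In the full ILP, each (reviewer, paper) pair becomes a binary variable, quotas and per-paper reviewer counts become linear constraints, and the aspect-coverage scores from any probabilistic topic model supply the objective coefficients.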
Quality evaluation of millet-soy blended extrudates formulated through linear programming.
Balasubramanian, S; Singh, K K; Patil, R T; Onkar, Kolhe K
2012-08-01
Whole pearl millet, finger millet and decorticated soy bean blended (millet-soy) extrudate formulations were designed using a linear programming (LP) model to minimize the total cost of the finished product. The LP-formulated composite flour was extruded through a twin screw food extruder at different feed rates (6.5-13.5 kg/h) and screw speeds (200-350 rpm), with constant feed moisture (14% wb), barrel temperature (120 °C) and cutter speed (15 rpm). The physical, functional, textural and pasting characteristics of the extrudates were examined and their responses were studied. Expansion index (2.31) and sectional expansion index (5.39) were found to be maximum for the feed rate and screw speed combination of 9.5 kg/h and 250 rpm. However, density (0.25 × 10^-3 g/mm^3) was maximum for the 9.5 kg/h and 300 rpm combination. Maximum color change (10.32) was found for 9.5 kg/h feed rate and 200 rpm screw speed. The lowest hardness was obtained for the samples extruded at the lowest feed rate (6.5 kg/h) for all screw speeds, and at a feed rate of 9.5 kg/h for 300-350 rpm screw speeds. Peak viscosity decreased with screw speed at the 9.5 kg/h feed rate.
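The least-cost blend formulation behind such LP models can be illustrated with a two-ingredient sketch; the nutrient values, costs, and protein requirement below are hypothetical, not the paper's data. With two variables, the LP optimum lies at a vertex of the feasible region, so the candidate vertices can be checked directly instead of calling a solver:

```python
# Least-cost blend formulation (illustrative sketch, hypothetical numbers).
# Choose fractions x of millet flour and y of soy flour to minimize cost
# subject to a minimum protein content, x + y = 1, and x, y >= 0.

def blend_cost(protein, cost, min_protein):
    """protein, cost: per-unit values for (millet, soy). Returns (x, y)."""
    pm, ps = protein
    cm, cs = cost
    # Candidate vertices of {x + y = 1, pm*x + ps*y >= min_protein, x,y >= 0}:
    # pure millet, pure soy, and the point where protein is exactly binding.
    candidates = [(1.0, 0.0), (0.0, 1.0)]
    if ps != pm:
        x = (ps - min_protein) / (ps - pm)   # solves pm*x + ps*(1-x) = min
        if 0.0 <= x <= 1.0:
            candidates.append((x, 1.0 - x))
    feasible = [(x, y) for x, y in candidates
                if pm * x + ps * y >= min_protein - 1e-9]
    return min(feasible, key=lambda v: cm * v[0] + cs * v[1])

# Millet: 11% protein at 20/kg; soy: 40% protein at 55/kg; require >= 18%.
x, y = blend_cost((11.0, 40.0), (20.0, 55.0), 18.0)
print(round(x, 3), round(y, 3))  # 0.759 0.241
```

The cheapest blend sits exactly where the protein constraint binds, which is the typical behavior of least-cost formulation LPs: at the optimum, enough constraints are active to pin down a vertex.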
Earthquake mechanisms from linear-programming inversion of seismic-wave amplitude ratios
Julian, B.R.; Foulger, G.R.
1996-01-01
The amplitudes of radiated seismic waves contain far more information about earthquake source mechanisms than do first-motion polarities, but amplitudes are severely distorted by the effects of heterogeneity in the Earth. This distortion can be reduced greatly by using the ratios of amplitudes of appropriately chosen seismic phases, rather than simple amplitudes, but existing methods for inverting amplitude ratios are severely nonlinear and require computationally intensive searching methods to ensure that solutions are globally optimal. Searching methods are particularly costly if general (moment tensor) mechanisms are allowed. Efficient linear-programming methods, which do not suffer from these problems, have previously been applied to inverting polarities and wave amplitudes. We extend these methods to amplitude ratios, for which an inequality constraint on an amplitude ratio takes the same mathematical form as a polarity observation. Three-component digital data for an earthquake at the Hengill-Grensdalur geothermal area in southwestern Iceland illustrate the power of the method. Polarities of P, SH, and SV waves, unusually well distributed on the focal sphere, cannot distinguish between diverse mechanisms, including a double couple. Amplitude ratios, on the other hand, clearly rule out the double-couple solution and require a large explosive isotropic component.
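The key observation — that an amplitude-ratio bound becomes a linear inequality in the moment-tensor components, of the same form as a polarity constraint — can be sketched as follows. The Green's-function coefficients and bounds here are hypothetical:

```python
# Amplitude-ratio observations as linear constraints (illustrative sketch).
# A predicted amplitude is linear in the moment-tensor components m:
# A = g . m, where g collects Green's-function coefficients. An observed
# ratio A1/A2 within [lo, hi] (taking A2 > 0) then gives two linear rows:
#   g1 . m - hi * (g2 . m) <= 0   and   -(g1 . m - lo * (g2 . m)) <= 0,
# the same mathematical form as a polarity constraint on g . m.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def ratio_constraints(g1, g2, lo, hi):
    """Return coefficient rows c, each meaning dot(c, m) <= 0."""
    upper = [a - hi * b for a, b in zip(g1, g2)]       # ratio <= hi
    lower = [-(a - lo * b) for a, b in zip(g1, g2)]    # ratio >= lo
    return [upper, lower]

# Hypothetical 3-component example with ratio bounds [0.5, 2.0].
g1, g2 = [1.0, 0.0, 1.0], [0.0, 1.0, 1.0]
rows = ratio_constraints(g1, g2, 0.5, 2.0)
m = [1.0, 1.0, 0.0]   # predicts A1 = 1, A2 = 1, ratio 1 in [0.5, 2]
print([dot(r, m) <= 0.0 for r in rows])  # [True, True]
```

Because every observation reduces to a linear inequality in m, the whole inversion stays inside the linear-programming framework, avoiding the global search that nonlinear ratio fitting would require.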
Automatic Design of Synthetic Gene Circuits through Mixed Integer Non-linear Programming
Huynh, Linh; Kececioglu, John; Köppe, Matthias; Tagkopoulos, Ilias
2012-01-01
Automatic design of synthetic gene circuits poses a significant challenge to synthetic biology, primarily due to the complexity of biological systems, and the lack of rigorous optimization methods that can cope with the combinatorial explosion as the number of biological parts increases. Current optimization methods for synthetic gene design rely on heuristic algorithms that are usually not deterministic, deliver sub-optimal solutions, and provide no guaranties on convergence or error bounds. Here, we introduce an optimization framework for the problem of part selection in synthetic gene circuits that is based on mixed integer non-linear programming (MINLP), which is a deterministic method that finds the globally optimal solution and guarantees convergence in finite time. Given a synthetic gene circuit, a library of characterized parts, and user-defined constraints, our method can find the optimal selection of parts that satisfy the constraints and best approximates the objective function given by the user. We evaluated the proposed method in the design of three synthetic circuits (a toggle switch, a transcriptional cascade, and a band detector), with both experimentally constructed and synthetic promoter libraries. Scalability and robustness analysis shows that the proposed framework scales well with the library size and the solution space. The work described here is a step towards a unifying, realistic framework for the automated design of biological circuits. PMID:22536398
Poos, Alexandra M; Maicher, André; Dieckmann, Anna K; Oswald, Marcus; Eils, Roland; Kupiec, Martin; Luke, Brian; König, Rainer
2016-06-02
Understanding telomere length maintenance mechanisms is central to cancer biology, as their dysregulation is one of the hallmarks of immortalization of cancer cells. Important for this well-balanced control is the transcriptional regulation of the telomerase genes. We integrated Mixed Integer Linear Programming models into a comparative machine learning based approach to identify regulatory interactions that best explain the discrepancy of telomerase transcript levels in yeast mutants with deleted regulators showing aberrant telomere length, when compared to mutants with normal telomere length. We uncover novel regulators of telomerase expression, several of which affect histone levels or modifications. In particular, our results point to the transcription factors Sum1, Hst1 and Srb2 as being important for the regulation of EST1 transcription, and we validated the effect of Sum1 experimentally. We compiled our machine learning method into a user-friendly package for R, which can be applied straightforwardly to similar problems that integrate gene-regulator binding information and expression profiles of samples of, e.g., different phenotypes, diseases or treatments.
2014-01-01
Background A linear programming (LP) model was proposed to create de-identified data sets that maximally include spatial detail (e.g., geocodes such as ZIP or postal codes, census blocks, and locations on maps) while complying with the HIPAA Privacy Rule’s Expert Determination method, i.e., ensuring that the risk of re-identification is very small. The LP model determines the transition probability from an original location of a patient to a new randomized location. However, it has a limitation for the cases of areas with a small population (e.g., median of 10 people in a ZIP code). Methods We extend the previous LP model to accommodate the cases of a smaller population in some locations, while creating de-identified patient spatial data sets which ensure the risk of re-identification is very small. Results Our LP model was applied to a data set of 11,740 postal codes in the City of Ottawa, Canada. On this data set we demonstrated the limitations of the previous LP model, in that it produces improbable results, and showed how our extensions to deal with small areas allows the de-identification of the whole data set. Conclusions The LP model described in this study can be used to de-identify geospatial information for areas with small populations with minimal distortion to postal codes. Our LP model can be extended to include other information, such as age and gender. PMID:24885457
Method of expanding hyperspheres - an interior algorithm for linear programming problems
Chandrupatla, T.
1994-12-31
A new interior algorithm using some properties of hyperspheres is proposed for the solution of linear programming problems with inequality constraints: maximize c^T x subject to Ax <= b, where c and the rows of A are normalized in the Euclidean sense such that ||c|| = sqrt(c^T c) = 1 and ||a_i|| = sqrt(a_i a_i^T) = 1 for i = 1 to m. The feasible region is the polytope bounded by the constraint planes. We start from an interior point and pass a plane normal to c until it touches a constraint plane. A sphere is then expanded so that it keeps contact with the previously touched planes, and the expansion proceeds until it touches another plane. The procedure is continued until the sphere touches the c-plane and n constraint planes. We then move to the center of the sphere and repeat the process. The interior maximum is reached when the radius of the expanded sphere is less than a critical value, say epsilon. Problems of direction finding, determination of the incoming constraint, sphere jamming, and evaluation of the initial feasible point are discussed.
Optimization of HDR brachytherapy dose distributions using linear programming with penalty costs
Alterovitz, Ron; Lessard, Etienne; Pouliot, Jean; Hsu, I-Chow Joe; O'Brien, James F.; Goldberg, Ken
2006-11-15
Prostate cancer is increasingly treated with high-dose-rate (HDR) brachytherapy, a type of radiotherapy in which a radioactive source is guided through catheters temporarily implanted in the prostate. Clinicians must set dwell times for the source inside the catheters so the resulting dose distribution minimizes deviation from dose prescriptions that conform to patient-specific anatomy. The primary contribution of this paper is to take the well-established dwell times optimization problem defined by Inverse Planning by Simulated Annealing (IPSA) developed at UCSF and exactly formulate it as a linear programming (LP) problem. Because LP problems can be solved exactly and deterministically, this formulation provides strong performance guarantees: one can rapidly find the dwell times solution that globally minimizes IPSA's objective function for any patient case and clinical criteria parameters. For a sample of 20 prostates with volume ranging from 23 to 103 cc, the new LP method optimized dwell times in less than 15 s per case on a standard PC. The dwell times solutions currently being obtained clinically using simulated annealing (SA), a probabilistic method, were quantitatively compared to the mathematically optimal solutions obtained using the LP method. The LP method resulted in significantly improved objective function values compared to SA (P = 1.54×10^-7), but none of the dosimetric indices indicated a statistically significant difference (P<0.01). The results indicate that solutions generated by the current version of IPSA are clinically equivalent to the mathematically optimal solutions.
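The penalty structure that keeps such an objective expressible as a linear program can be sketched as follows; the weights and prescribed dose window are hypothetical, not IPSA's clinical parameters:

```python
# Piecewise-linear dose penalty (illustrative sketch, hypothetical weights).
# Penalties growing linearly below a minimum or above a maximum prescribed
# dose keep the objective LP-expressible: in the LP, each max(0, .) term
# becomes an auxiliary variable u with constraints u >= 0 and
# u >= (its argument), and u is minimized in the objective.

def penalty(dose, d_min, d_max, w_under, w_over):
    """Linear penalty for one dose point relative to the window [d_min, d_max]."""
    return (w_under * max(0.0, d_min - dose)
            + w_over * max(0.0, dose - d_max))

def total_penalty(doses, d_min, d_max, w_under=1.0, w_over=2.0):
    """Sum of per-point penalties over all dose calculation points."""
    return sum(penalty(d, d_min, d_max, w_under, w_over) for d in doses)

# Prescribed window [100, 110] (arbitrary units); overdose weighted 2x.
doses = [95.0, 105.0, 118.0]
print(total_penalty(doses, 100.0, 110.0))  # 5*1 + 0 + 8*2 = 21.0
```

Because dose at each calculation point is itself a linear function of the dwell times, substituting that linear expression into the penalty terms yields a complete LP, which is what allows the global, deterministic solution reported above.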
Gustafson, Eric J; Roberts, L Jay; Leefers, Larry A
2006-12-01
Forest management planners require analytical tools to assess the effects of alternative strategies on the sometimes disparate benefits from forests such as timber production and wildlife habitat. We assessed the spatial patterns of alternative management strategies by linking two models that were developed for different purposes. We used a linear programming model (Spectrum) to optimize timber harvest schedules, then a simulation model (HARVEST) to project those schedules in a spatially explicit way and produce maps from which the spatial pattern of habitat could be calculated. We demonstrated the power of this approach by evaluating alternative plans developed for a national forest plan revision in Wisconsin, USA. The amount of forest interior habitat was inversely related to the amount of timber cut, and increased under the alternatives compared to the current plan. The amount of edge habitat was positively related to the amount of timber cut, and increased under all alternatives. The amount of mature northern hardwood interior and edge habitat increased for all alternatives, but mature pine habitat area varied. Mature age classes of all forest types increased, and young classes decreased under all alternatives. The average size of patches (defined by age class) generally decreased. These results are consistent with the design goals of each of the alternatives, but reveal that the spatial differences among the alternatives are modest. These complementary models are valuable for quantifying and comparing the spatial effects of alternative management strategies.
NASA Astrophysics Data System (ADS)
Vorwerk, Kristoffer; Kennings, Andrew; Anjos, Miguel
2008-06-01
In VLSI layout, floorplanning refers to the task of placing macrocells on a chip without overlap while minimizing design objectives such as timing, congestion, and wire length. Experienced VLSI designers have traditionally been able to produce more efficient floorplans than automated methods. However, with the increasing complexity of modern circuits, manual design flows have become infeasible. An efficient top-down strategy for overlap removal which repairs overlaps in floorplans produced by placement algorithms or rough floorplanning methodologies is presented in this article. The algorithmic framework proposed incorporates a novel geometric shifting technique coupled with topological constraint graphs and linear programming within a top-down flow. The effectiveness of this framework is quantified across a broad range of floorplans produced by multiple tools. The method succeeds in producing valid placements in almost all cases; moreover, compared with leading methods, it requires only one-fifth of the run-time and produces placements with 4-13% less wire length and up to 43% less cell movement.
TERA high gradient test program of RF cavities for medical linear accelerators
NASA Astrophysics Data System (ADS)
Degiovanni, A.; Amaldi, U.; Bonomi, R.; Garlasché, M.; Garonna, A.; Verdú-Andrés, S.; Wegner, R.
2011-11-01
The scientific community and the medical industries are putting a considerable effort into the design of compact, reliable and cheap accelerators for hadrontherapy. Up to now only circular accelerators are used to deliver beams with energies suitable for the treatment of deep seated tumors. The TERA Foundation has proposed and designed a hadrontherapy facility based on the cyclinac concept: a high gradient linear accelerator placed downstream of a cyclotron used as an injector. The overall length of the linac, and therefore its final cost, is almost inversely proportional to the average accelerating gradient achieved in the linac. TERA, in collaboration with the CLIC RF group, has started a high gradient test program. The main goal is to study the high gradient behavior of prototype cavities and to determine the appropriate linac operating frequency, considering important issues such as machine reliability and availability of distributed power sources. A preliminary test of a 3 GHz cavity was carried out at the beginning of 2010, giving encouraging results. Further investigations are planned before the end of 2011. A set of 5.7 GHz cavities is under production and will be tested in the near future. The construction and test of a multi-cell structure is also foreseen.
Zheng, Jialin; Zhuang, Wei; Yan, Nian; Kou, Gang; Peng, Hui; McNally, Clancy; Erichsen, David; Cheloha, Abby; Herek, Shelley; Shi, Chris
2004-01-01
The ability to identify neuronal damage in the dendritic arbor during HIV-1-associated dementia (HAD) is crucial for designing specific therapies for the treatment of HAD. To study this process, we utilized a computer-based image analysis method to quantitatively assess HIV-1 viral protein gp120 and glutamate-mediated individual neuronal damage in cultured cortical neurons. Changes in the number of neurites, arbors, branch nodes, cell body area, and average arbor lengths were determined and a database was formed (http://dm.ist.unomaha.edu/database.htm). We further proposed a two-class model of multiple criteria linear programming (MCLP) to classify such HIV-1-mediated neuronal dendritic and synaptic damage. Given certain classes, including treatments with brain-derived neurotrophic factor (BDNF), glutamate, gp120 or non-treatment controls from our in vitro experimental systems, we used the two-class MCLP model to determine the data patterns between classes in order to gain insight about neuronal dendritic damage. This knowledge can be applied in principle to the design and study of specific therapies for the prevention or reversal of neuronal damage associated with HAD. Finally, the MCLP method was compared with a well-known artificial neural network algorithm to test for the relative potential of different data mining applications in HAD research.
Cho, J H; Ahn, K H; Chung, W J; Gwon, E M
2003-01-01
A waste load allocation model using linear programming has been developed for economic water quality management. A modified Qual2e model was used for water quality calculations, and transfer coefficients were derived from the calculated water quality. The allocation model was applied to the heavily polluted Gyungan River in South Korea. For water quality management of the river, two scenarios were proposed: Scenario 1 minimises the total waste load reduction in the river basin, while Scenario 2 minimises waste load reduction while considering regional equity. The waste loads to be reduced at each sub-basin and WWTP were determined so as to meet the water quality goal of the river. Application results of the allocation model indicate that advanced treatment is required for most of the existing WWTPs in the river basin, and that construction of new WWTPs and capacity expansion of existing plants are necessary. Distribution characteristics of pollution sources and pollutant loads in the river basin were analysed using Arc/View GIS.
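The LP structure behind such a waste load allocation model can be sketched in a few lines. The following is an illustrative toy, not the paper's model: the transfer coefficients, required improvement, and per-basin limits are invented, and `scipy.optimize.linprog` stands in for whatever solver the authors used.

```python
# Toy waste-load-allocation LP: minimize total load reduction x_i over three
# sub-basins, subject to a water-quality constraint t @ x >= required, where
# t holds (assumed) transfer coefficients from the water-quality model.
import numpy as np
from scipy.optimize import linprog

t = np.array([0.8, 0.5, 0.3])              # assumed transfer coefficients
required = 4.0                             # assumed concentration improvement needed
max_reduction = np.array([5.0, 6.0, 8.0])  # assumed reducible load per sub-basin

# linprog minimizes c @ x subject to A_ub @ x <= b_ub and variable bounds;
# the >= quality constraint is negated into <= form.
res = linprog(
    c=np.ones(3),                          # minimize total reduction
    A_ub=[-t],
    b_ub=[-required],
    bounds=list(zip(np.zeros(3), max_reduction)),
)
print(res.x, res.fun)
```

The solver concentrates the reduction where the transfer coefficient is largest, which is exactly the economic logic the abstract describes.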
Optimal Reservoir Operation for Hydropower Generation using Non-linear Programming Model
NASA Astrophysics Data System (ADS)
Arunkumar, R.; Jothiprakash, V.
2012-05-01
Hydropower generation is one of the vital components of reservoir operation, especially for a large multi-purpose reservoir. Deriving optimal operational rules for such a reservoir, serving purposes like irrigation, hydropower and flood control, is complex because of the large dimension of the problem, and the complexity increases further when hydropower production is not merely incidental. Optimizing the operation of a reservoir serving various purposes therefore requires a systematic study. In the present study, the operations of one such large multi-purpose reservoir, the Koyna reservoir, are optimized for maximum hydropower production, subject to the condition of satisfying the irrigation demands, using a non-linear programming model. Hydropower production from the reservoir is analysed for three different dependable inflow conditions, representing wet, normal and dry years. For each dependable inflow condition, various scenarios based on release constraints have been analyzed and the results compared. The annual power production, combined monthly power production from all the powerhouses, end-of-month storage levels, evaporation losses and surplus are discussed. The scenarios show that more hydropower can be generated under the various dependable inflow conditions if the restrictions on releases are slightly relaxed. The study shows that the Koyna dam has the potential to generate more hydropower.
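A minimal sketch of this kind of non-linear reservoir model, with entirely invented numbers (three months, a power proxy of release times storage as a stand-in for head, and `scipy.optimize.minimize` rather than the authors' solver):

```python
# Toy reservoir-operation NLP: maximize a hydropower proxy sum(r_t * s_t),
# where s_t is start-of-month storage and r_t the release, subject to mass
# balance, storage limits, and minimum irrigation releases d_t.
import numpy as np
from scipy.optimize import minimize

inflow = np.array([30.0, 20.0, 10.0])   # assumed monthly inflows
d = np.array([5.0, 5.0, 5.0])           # assumed irrigation demands (min release)
s0, s_max = 50.0, 100.0                 # assumed initial storage and capacity

def storages(r):
    # storage trajectory [s0, s1, s2, s3] under mass balance
    s = [s0]
    for t in range(3):
        s.append(s[-1] + inflow[t] - r[t])
    return np.array(s)

def neg_power(r):
    s = storages(r)
    return -np.sum(r * s[:3])           # power proxy: release times head (storage)

cons = [
    {"type": "ineq", "fun": lambda r: storages(r)[1:]},          # s_t >= 0
    {"type": "ineq", "fun": lambda r: s_max - storages(r)[1:]},  # s_t <= s_max
]
res = minimize(neg_power, x0=d.copy(), bounds=[(d[t], 40.0) for t in range(3)],
               constraints=cons, method="SLSQP")
print(res.x, -res.fun)
```

The objective is bilinear in releases and storages, which is the non-linearity that rules out plain LP for this class of problem.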
Integer Linear Programming for Constrained Multi-Aspect Committee Review Assignment
Karimzadehgan, Maryam; Zhai, ChengXiang
2011-01-01
Automatic review assignment can significantly improve the productivity of many people such as conference organizers, journal editors and grant administrators. A general setup of the review assignment problem involves assigning a set of reviewers on a committee to a set of documents to be reviewed under the constraint of review quota so that the reviewers assigned to a document can collectively cover multiple topic aspects of the document. No previous work has addressed such a setup of committee review assignments while also considering matching multiple aspects of topics and expertise. In this paper, we tackle the problem of committee review assignment with multi-aspect expertise matching by casting it as an integer linear programming problem. The proposed algorithm can naturally accommodate any probabilistic or deterministic method for modeling multiple aspects to automate committee review assignments. Evaluation using a multi-aspect review assignment test set constructed using ACM SIGIR publications shows that the proposed algorithm is effective and efficient for committee review assignments based on multi-aspect expertise matching. PMID:22711970
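The core ILP here can be illustrated on a tiny instance. The match scores and quotas below are invented, and `scipy.optimize.milp` is used in place of whatever solver the authors employed:

```python
# Toy review-assignment ILP: maximize total reviewer-paper topical match,
# with each paper getting exactly 2 reviewers and each reviewer at most 2
# papers. Binary variable x[r, p] = 1 if reviewer r is assigned paper p.
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

score = np.array([[0.9, 0.1, 0.4],     # assumed match[reviewer, paper]
                  [0.2, 0.8, 0.5],
                  [0.6, 0.7, 0.3]])
R, P = score.shape
c = -score.ravel()                     # milp minimizes, so negate the scores

A_paper = np.zeros((P, R * P))         # each paper: exactly 2 reviewers
for p in range(P):
    A_paper[p, p::P] = 1
A_rev = np.zeros((R, R * P))           # each reviewer: at most 2 papers
for r in range(R):
    A_rev[r, r * P:(r + 1) * P] = 1

res = milp(c,
           constraints=[LinearConstraint(A_paper, 2, 2),
                        LinearConstraint(A_rev, 0, 2)],
           integrality=np.ones(R * P), bounds=Bounds(0, 1))
assignment = res.x.reshape(R, P).round().astype(int)
```

The quota rows are exactly the "review quota" constraints the abstract mentions; the multi-aspect coverage terms would enter through the objective coefficients.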
ERIC Educational Resources Information Center
Mills, James W.; And Others
1973-01-01
The study reported here tested an application of the linear programming model at the Reading Clinic of Drew University. Results, while not conclusive, indicate that this approach yields greater gains in speed scores than a traditional approach for this population. (Author)
Maillot, Matthieu; Ferguson, Elaine L; Drewnowski, Adam; Darmon, Nicole
2008-06-01
Nutrient profiling ranks foods based on their nutrient content, and such profiles may help identify foods with a good nutritional quality for their price. This hypothesis was tested using diet modeling with linear programming. Analyses were undertaken using food intake data from the nationally representative French INCA (enquête Individuelle et Nationale sur les Consommations Alimentaires) survey and its associated food composition and price database. For each food, a nutrient profile score was defined as the ratio between the previously published nutrient density score (NDS) and the limited nutrient score (LIM); a nutritional-quality-for-price indicator was developed and calculated from the relationship between each food's NDS:LIM and energy cost (in euro/100 kcal). We developed linear programming models to design diets that fulfilled increasing levels of nutritional constraints at a minimal cost. The median NDS:LIM values of foods selected in modeled diets increased as the levels of nutritional constraints increased (P = 0.005). In addition, the proportion of foods with a good nutritional-quality-for-price indicator was higher (P < 0.0001) among foods selected (81%) than among foods not selected (39%) in modeled diets. This agreement between the linear programming and the nutrient profiling approaches indicates that nutrient profiling can help identify foods of good nutritional quality for their price. Linear programming is a useful tool for testing nutrient profiling systems and validating the concept of nutrient profiling.
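The diet models in this line of work are instances of the classic diet LP. A miniature version with three invented foods (the nutrient values, prices, and constraint levels are all made up for illustration, not taken from the INCA database):

```python
# Classic diet-problem LP: choose food quantities x (in 100 g units)
# minimizing cost while meeting nutrient floors and a calorie ceiling.
import numpy as np
from scipy.optimize import linprog

cost = np.array([0.30, 0.80, 0.50])        # euro per 100 g: grain, fish, vegetable
protein = np.array([3.0, 20.0, 2.0])       # g per 100 g (assumed)
iron = np.array([1.0, 1.0, 3.0])           # mg per 100 g (assumed)
kcal = np.array([350.0, 120.0, 40.0])

A_ub = np.vstack([-protein, -iron, kcal])  # nutrient floors become <= after negation
b_ub = np.array([-50.0, -14.0, 2500.0])    # >=50 g protein, >=14 mg iron, <=2500 kcal

res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=(0, None))
print(res.x, res.fun)
```

Tightening the nutrient floors and re-solving is the "increasing levels of nutritional constraints" experiment the abstract describes; the foods surviving in the optimal basket are those with a favorable nutrient-to-price ratio.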
Technology Transfer Automated Retrieval System (TEKTRAN)
Ready-to-use therapeutic food (RUTF) is the standard of care for children suffering from noncomplicated severe acute malnutrition (SAM). The objective was to develop a comprehensive linear programming (LP) tool to create novel RUTF formulations for Ethiopia. A systematic approach that surveyed inter...
CAMPBELL, PHILIP L.
1999-08-01
This report presents an implementation of the Berlekamp-Massey linear feedback shift-register (LFSR) synthesis algorithm in the C programming language. Two pseudo-code versions of the algorithm are given, the operation of LFSRs is explained, a C version of the pseudo-code is presented, and the output of the code, when run on two input samples, is shown.
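The algorithm itself is compact enough to sketch. Below is a standard GF(2) Berlekamp-Massey synthesis in Python (the report's implementation is in C); it returns the length L and the connection polynomial of the shortest LFSR generating a bit sequence:

```python
# GF(2) Berlekamp-Massey: find the shortest LFSR generating bit sequence s.
# Returns (L, c) where c[0..L] are the connection polynomial coefficients,
# i.e. s[n] = c[1]*s[n-1] ^ ... ^ c[L]*s[n-L] for n >= L.
def berlekamp_massey(s):
    n = len(s)
    c = [1] + [0] * n        # current connection polynomial C(x)
    b = [1] + [0] * n        # previous C(x) before the last length change
    L, m = 0, 1              # current LFSR length, steps since last change
    for i in range(n):
        # discrepancy: s[i] + sum_{j=1..L} c[j]*s[i-j]  (mod 2)
        d = s[i]
        for j in range(1, L + 1):
            d ^= c[j] & s[i - j]
        if d == 0:
            m += 1
        elif 2 * L <= i:     # length change: C(x) += x^m * B(x), update L and B
            t = c[:]
            for j in range(n + 1 - m):
                c[j + m] ^= b[j]
            L, b, m = i + 1 - L, t, 1
        else:                # no length change: C(x) += x^m * B(x)
            for j in range(n + 1 - m):
                c[j + m] ^= b[j]
            m += 1
    return L, c[:L + 1]
```

For example, the sequence 1,0,0,1,1,1,0 (one period of the maximal-length register with C(x) = 1 + x + x^3) synthesizes to L = 3 with taps [1, 1, 0, 1].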
NASA Technical Reports Server (NTRS)
Geyser, L. C.
1978-01-01
A digital computer program, DYGABCD, was developed that generates linearized, dynamic models of simulated turbofan and turbojet engines. DYGABCD is based on an earlier computer program, DYNGEN, that is capable of calculating simulated nonlinear steady-state and transient performance of one- and two-spool turbojet engines or two- and three-spool turbofan engines. Most control design techniques require linear system descriptions. For multiple-input/multiple-output systems such as turbine engines, state space matrix descriptions of the system are often desirable. DYGABCD computes the state space matrices commonly referred to as the A, B, C, and D matrices required for a linear system description. The report discusses the analytical approach and provides a user's manual, FORTRAN listings, and a sample case.
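The numerical idea behind extracting A and B matrices from a nonlinear simulation can be sketched generically: perturb each state and input about an operating point and difference the dynamics. The toy two-state plant below is a stand-in, not the DYNGEN engine model:

```python
# Generic state-space extraction by central finite differences:
# given xdot = f(x, u), estimate A = df/dx and B = df/du at (x0, u0).
import numpy as np

def f(x, u):                             # placeholder nonlinear plant (assumed)
    return np.array([-x[0] ** 2 + u[0], x[0] - 0.5 * x[1]])

def linearize(f, x0, u0, eps=1e-6):
    n, m = len(x0), len(u0)
    A = np.zeros((n, n))
    B = np.zeros((n, m))
    for j in range(n):                   # column j of A: perturb state j
        dx = np.zeros(n); dx[j] = eps
        A[:, j] = (f(x0 + dx, u0) - f(x0 - dx, u0)) / (2 * eps)
    for j in range(m):                   # column j of B: perturb input j
        du = np.zeros(m); du[j] = eps
        B[:, j] = (f(x0, u0 + du) - f(x0, u0 - du)) / (2 * eps)
    return A, B

A, B = linearize(f, np.array([1.0, 0.0]), np.array([1.0]))
```

C and D follow the same pattern applied to the output map y = g(x, u).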
NASA Astrophysics Data System (ADS)
Chang, L.; Chen, Y.; Pan, C.
2009-12-01
Surface water resources are strongly influenced by hydrological conditions, and relying on surface water alone as a supply may carry a higher shortage risk than before because of climate change driven by global warming. Conjunctive use of surface and subsurface water is one of the most effective water resource practices for increasing water supply reliability with minimal cost and environmental impact. This paper therefore presents a novel stepwise optimization model for managing the conjunctive use of surface and subsurface water resources. At each time step, a two-level decomposition approach divides the nonlinear optimal conjunctive use problem into a linear surface water subproblem and a nonlinear groundwater subproblem. Because of this decomposition, a hybrid framework is used to implement the conjunctive use model, combining a Genetic Algorithm (GA), an Artificial Neural Network (ANN), and Linear Programming (LP). GA and LP are used, respectively, to determine the optimal pumping quantities and the reservoir allocation, while the ANN is used for the groundwater simulation. The ANN simulates the groundwater response and greatly reduces the computational load for unconfined aquifers, unlike the conventional "response matrix method" or "embedding method". Because of the very high performance of LP, applying it to the linear surface water subproblem significantly decreases the computational burden of the entire model. Four cases are demonstrated: Case #1 is a pure surface water case and the others are conjunctive use cases. In Case #2, "surface water supply first" is the supply principle between surface water and groundwater. In Cases #3 and #4, the "Index Balance" theory is the supply principle, with different operation curves used in each case. The case result
Integrating Genomics and Proteomics Data to Predict Drug Effects Using Binary Linear Programming
Ji, Zhiwei; Su, Jing; Liu, Chenglin; Wang, Hongyan; Huang, Deshuang; Zhou, Xiaobo
2014-01-01
The Library of Integrated Network-Based Cellular Signatures (LINCS) project aims to create a network-based understanding of biology by cataloging changes in gene expression and signal transduction that occur when cells are exposed to a variety of perturbations. It is helpful for understanding cell pathways and facilitating drug discovery. Here, we developed a novel approach to infer cell-specific pathways and identify a compound's effects using gene expression and phosphoproteomics data under treatments with different compounds. Gene expression data were employed to infer potential targets of compounds and create a generic pathway map. Binary linear programming (BLP) was then developed to optimize the generic pathway topology based on the mid-stage signaling response of phosphorylation. To demonstrate effectiveness of this approach, we built a generic pathway map for the MCF7 breast cancer cell line and inferred the cell-specific pathways by BLP. The first group of 11 compounds was utilized to optimize the generic pathways, and then 4 compounds were used to identify effects based on the inferred cell-specific pathways. Cross-validation indicated that the cell-specific pathways reliably predicted a compound's effects. Finally, we applied BLP to re-optimize the cell-specific pathways to predict the effects of 4 compounds (trichostatin A, MS-275, staurosporine, and digoxigenin) according to compound-induced topological alterations. Trichostatin A and MS-275 (both HDAC inhibitors) inhibited the downstream pathway of HDAC1 and caused cell growth arrest via activation of p53 and p21; the effects of digoxigenin were totally opposite. Staurosporine blocked the cell cycle via p53 and p21, but also promoted cell growth via activated HDAC1 and its downstream pathway. Our approach was also applied to the PC3 prostate cancer cell line, and the cross-validation analysis showed very good accuracy in predicting effects of 4 compounds. In summary, our computational model can be
Modeling the distribution of ciliate protozoa in the reticulo-rumen using linear programming.
Hook, S E; Dijkstra, J; Wright, A-D G; McBride, B W; France, J
2012-01-01
The flow of ciliate protozoa from the reticulo-rumen is significantly less than expected given the total density of rumen protozoa present. To maintain their numbers in the reticulo-rumen, protozoa can be selectively retained through association with feed particles and the rumen wall. Few mathematical models have been designed to model rumen protozoa in both the free-living and attached phases, and the data used in the models were acquired using classical techniques. It has therefore become necessary to provide an updated model that more accurately represents these microorganisms and incorporates the recent literature on distribution, sequestration, and generation times. This paper represents a novel approach to synthesizing experimental data on rumen microorganisms in a quantitative and structured manner. The development of a linear programming model of rumen protozoa in an approximate steady state will be described and applied to data from healthy ruminants consuming commonly fed diets. In the model, protozoa associated with the liquid phase and protozoa attached to particulate matter or sequestered against the rumen wall are distinguished. Growth, passage, death, and transfer of protozoa between both pools are represented. The results from the model application using the contrasting diets of increased forage content versus increased starch content indicate that the majority of rumen protozoa, 63 to 90%, are found in the attached phase, either attached to feed particles or sequestered on the rumen wall. A slightly greater proportion of protozoa are found in the attached phase in animals fed a hay diet compared with a starch diet. This suggests that experimental protocols that only sample protozoa from the rumen fluid could be significantly underestimating the size of the protozoal population of the rumen. Further data are required on the distribution of ciliate protozoa in the rumen of healthy animals to improve model development, but the model described herein
Mitsos, Alexander; Melas, Ioannis N; Siminelakis, Paraskeuas; Chairakaki, Aikaterini D; Saez-Rodriguez, Julio; Alexopoulos, Leonidas G
2009-12-01
Understanding the mechanisms of cell function and drug action is a major endeavor in the pharmaceutical industry. Drug effects are governed by the intrinsic properties of the drug (i.e., selectivity and potency) and the specific signaling transduction network of the host (i.e., normal vs. diseased cells). Here, we describe an unbiased, phosphoproteomic-based approach to identify drug effects by monitoring drug-induced topology alterations. With our proposed method, drug effects are investigated under diverse stimulations of the signaling network. Starting with a generic pathway made of logical gates, we build a cell-type specific map by constraining it to fit 13 key phosphoprotein signals under 55 experimental conditions. Fitting is performed via an Integer Linear Program (ILP) formulation and solution by standard ILP solvers; a procedure that drastically outperforms previous fitting schemes. Then, knowing the cell's topology, we monitor the same key phosphoprotein signals under the presence of drug and we re-optimize the specific map to reveal drug-induced topology alterations. To prove our case, we build a topology for the hepatocytic cell line HepG2 and evaluate the effects of 4 drugs: 3 selective inhibitors of the Epidermal Growth Factor Receptor (EGFR) and a non-selective drug. We confirm effects easily predictable from the drugs' main target (i.e., EGFR inhibitors block the EGFR pathway) but we also uncover unanticipated effects due to either drug promiscuity or the cell's specific topology. An interesting finding is that the selective EGFR inhibitor Gefitinib inhibits signaling downstream of the Interleukin-1alpha (IL1alpha) pathway; an effect that cannot be extracted from binding affinity-based approaches. Our method represents an unbiased approach to identify drug effects on small- to medium-size pathways, and is scalable to larger topologies with any type of signaling interventions (small molecules, RNAi, etc.). The method can reveal drug effects on
NASA Technical Reports Server (NTRS)
Utku, S.
1969-01-01
A general purpose digital computer program for the in-core solution of linear equilibrium problems of structural mechanics is documented. The program requires minimum input for the description of the problem. The solution is obtained by means of the displacement method and the finite element technique. Almost any geometry and structure may be handled because of the availability of linear, triangular, quadrilateral, tetrahedral, hexahedral, conical, triangular torus, and quadrilateral torus elements. The assumption of piecewise linear deflection distribution ensures monotonic convergence of the deflections from the stiffer side with decreasing mesh size. The stresses are provided by best-fit strain tensors, in the least-squares sense, at the mesh points where the deflections are given. The selection of local coordinate systems whenever necessary is automatic. The core memory is used by means of dynamic memory allocation, an optional mesh-point relabelling scheme and imposition of the boundary conditions during the assembly time.
Sixth SIAM conference on applied linear algebra: Final program and abstracts. Final technical report
1997-12-31
Linear algebra plays a central role in mathematics and applications. The analysis and solution of problems from an amazingly wide variety of disciplines depend on the theory and computational techniques of linear algebra. In turn, the diversity of disciplines depending on linear algebra also serves to focus and shape its development. Some problems have special properties (numerical, structural) that can be exploited. Some are simply so large that conventional approaches are impractical. New computer architectures motivate new algorithms, and fresh ways to look at old ones. The pervasive nature of linear algebra in analyzing and solving problems means that people from a wide spectrum--universities, industrial and government laboratories, financial institutions, and many others--share an interest in current developments in linear algebra. This conference aims to bring them together for their mutual benefit. Abstracts of papers presented are included.
Li, Y P; Huang, G H
2006-11-01
In this study, an interval-parameter two-stage mixed integer linear programming (ITMILP) model is developed for supporting long-term planning of waste management activities in the City of Regina. In the ITMILP, both two-stage stochastic programming and interval linear programming are introduced into a general mixed integer linear programming framework. Uncertainties expressed as not only probability density functions but also discrete intervals can be reflected. The model can help tackle the dynamic, interactive and uncertain characteristics of the solid waste management system in the City, and can address issues concerning plans for cost-effective waste diversion and landfill prolongation. Three scenarios are considered based on different waste management policies. The results indicate that reasonable solutions have been generated. They are valuable for supporting the adjustment or justification of the existing waste flow allocation patterns, the long-term capacity planning of the City's waste management system, and the formulation of local policies and regulations regarding waste generation and management.
Wu, Z; Zhang, Y
2008-01-01
The double digestion problem for DNA restriction mapping has been proved NP-complete and becomes intractable as the number of DNA fragments grows large. Several approaches to the problem have been tested and proved effective only for small problems. In this paper, we formulate the problem as a mixed-integer linear program (MIP), following Waterman (1995) in a slightly different form. With this formulation, and using state-of-the-art integer programming techniques, we can solve randomly generated problems whose search space sizes are many orders of magnitude larger than previously reported test sizes.
Fernandes, L.; Friedlander, A.; Guedes, M.; Judice, J.
2001-07-01
This paper addresses a General Linear Complementarity Problem (GLCP) that has found applications in global optimization. It is shown that a solution of the GLCP can be computed by finding a stationary point of a differentiable function over a set defined by simple bounds on the variables. The application of this result to the solution of bilinear programs and LCPs is discussed. Some computational evidence of its usefulness is included in the last part of the paper.
NASA Technical Reports Server (NTRS)
Arneson, Heather M.; Dousse, Nicholas; Langbort, Cedric
2014-01-01
We consider control design for positive compartmental systems in which each compartment's outflow rate is described by a concave function of the amount of material in the compartment. We address the problem of determining the routing of material between compartments to satisfy time-varying state constraints while ensuring that material reaches its intended destination over a finite time horizon. We give sufficient conditions for the existence of a time-varying state-dependent routing strategy which ensures that the closed-loop system satisfies basic network properties of positivity, conservation and interconnection while ensuring that capacity constraints are satisfied, when possible, or adjusted if a solution cannot be found. These conditions are formulated as a linear programming problem. Instances of this linear programming problem can be solved iteratively to generate a solution to the finite horizon routing problem. Results are given for the application of this control design method to an example problem. Key words: linear programming; control of networks; positive systems; controller constraints and structure.
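Routing material under capacity constraints reduces, at each solve, to a network-flow LP. The sketch below is a generic illustration with an invented 3-node graph and capacities, not the paper's time-varying formulation:

```python
# Toy capacity-constrained routing LP: send 10 units from node 0 to node 2
# over edges (0->1), (0->2), (1->2) at minimum cost, respecting edge capacities.
import numpy as np
from scipy.optimize import linprog

edge_cost = np.array([1.0, 3.0, 1.0])   # assumed per-unit edge costs
capacity = np.array([6.0, 10.0, 10.0])  # assumed edge capacities

# Node-edge incidence rows: flow conservation at each node.
A_eq = np.array([[ 1,  1,  0],          # node 0 sends 10
                 [-1,  0,  1],          # node 1 is balanced
                 [ 0, -1, -1]])         # node 2 receives 10
b_eq = np.array([10.0, 0.0, -10.0])

res = linprog(edge_cost, A_eq=A_eq, b_eq=b_eq,
              bounds=list(zip(np.zeros(3), capacity)))
print(res.x, res.fun)
```

The cheap two-hop path saturates at its capacity of 6, with the remaining 4 units routed directly; re-solving such an LP at each step is the iterative scheme the abstract describes.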
Yang, X.
1998-12-31
Modeling ground motions from multi-shot, delay-fired mining blasts is important to the understanding of their source characteristics such as spectrum modulation. MineSeis is a MATLAB® (a computer language) Graphical User Interface (GUI) program developed for the effective modeling of these multi-shot mining explosions. The program provides a convenient and interactive tool for modeling studies. Multi-shot, delay-fired mining blasts are modeled as the time-delayed linear superposition of identical single shot sources in the program. These single shots are in turn modeled as the combination of an isotropic explosion source and a spall source. Mueller and Murphy's (1971) model for underground nuclear explosions is used as the explosion source model. A modification of Anandakrishnan et al.'s (1997) spall model is developed as the spall source model. Delays both due to the delay-firing and due to the single-shot location differences are taken into account in calculating the time delays of the superposition. Both synthetic and observed single-shot seismograms can be used to construct the superpositions. The program uses MATLAB GUI for input and output to facilitate user interaction with the program. With user provided source and path parameters, the program calculates and displays the source time functions, the single shot synthetic seismograms and the superimposed synthetic seismograms. In addition, the program provides tools so that the user can manipulate the results, such as filtering, zooming and creating hard copies.
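The time-delayed linear superposition at the heart of this model is easy to sketch. The wavelet and the 25 ms firing delays below are invented stand-ins (MineSeis itself is MATLAB; Python is used here for illustration):

```python
# Delay-fired blast as a time-delayed linear superposition of identical
# single-shot seismograms: shift each copy by its firing delay and sum.
import numpy as np

dt = 0.001                                                # sample interval (s)
t = np.arange(0, 1.0, dt)
single = np.exp(-t / 0.05) * np.sin(2 * np.pi * 20 * t)   # toy single-shot wavelet

delays = np.array([0.0, 0.025, 0.050, 0.075])             # assumed firing delays (s)
blast = np.zeros_like(single)
for d in delays:
    k = int(round(d / dt))                                # shift in samples
    blast[k:] += single[:len(single) - k]                 # superpose shifted copy
```

The regular delay spacing is what produces the spectral modulation (comb-like scalloping) mentioned in the abstract; location-dependent path delays would simply add to each shot's shift.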
Ahlfeld, D.P.; Dougherty, D.E.
1994-11-01
MODLP is a computational tool that may help design capture zones for controlling the movement of contaminated groundwater. It creates and solves linear optimization programs that contain constraints on hydraulic head or head differences in a groundwater system. The groundwater domain is represented by the USGS MODFLOW groundwater flow simulation model. This document describes the general structure of the computer program, MODLP, the types of constraints that may be imposed, detailed input instructions, interpretation of the output, and the interaction with the MODFLOW simulation kernel.
AESOP: A computer-aided design program for linear multivariable control systems
NASA Technical Reports Server (NTRS)
Lehtinen, B.; Geyser, L. C.
1982-01-01
An interactive computer program (AESOP) that solves quadratic optimal control and filter design problems is described. The program can also be used to perform system analysis calculations, such as transient and frequency responses, controllability, and observability, in support of the control and filter design computations.
NASA Technical Reports Server (NTRS)
Carlson, Harry W.
1985-01-01
The purpose here is to show how two linearized theory computer programs in combination may be used for the design of low speed wing flap systems capable of high levels of aerodynamic efficiency. A fundamental premise of the study is that high levels of aerodynamic performance for flap systems can be achieved only if the flow about the wing remains predominantly attached. Based on this premise, a wing design program is used to provide idealized attached flow camber surfaces from which candidate flap systems may be derived, and, in a following step, a wing evaluation program is used to provide estimates of the aerodynamic performance of the candidate systems. Design strategies and techniques that may be employed are illustrated through a series of examples. Applicability of the numerical methods to the analysis of a representative flap system (although not a system designed by the process described here) is demonstrated in a comparison with experimental data.
Communications oriented programming of parallel iterative solutions of sparse linear systems
NASA Technical Reports Server (NTRS)
Patrick, M. L.; Pratt, T. W.
1986-01-01
Parallel algorithms are developed for a class of scientific computational problems by partitioning the problems into smaller problems which may be solved concurrently. The effectiveness of the resulting parallel solutions is determined by the amount and frequency of communication and synchronization and the extent to which communication can be overlapped with computation. Three different parallel algorithms for solving the same class of problems are presented, and their effectiveness is analyzed from this point of view. The algorithms are programmed using a new programming environment. Run-time statistics and experience obtained from the execution of these programs assist in measuring the effectiveness of these algorithms.
Maia, Julio Daniel Carvalho; Urquiza Carvalho, Gabriel Aires; Mangueira, Carlos Peixoto; Santana, Sidney Ramos; Cabral, Lucidio Anjos Formiga; Rocha, Gerd B
2012-09-11
In this study, we present some modifications in the semiempirical quantum chemistry MOPAC2009 code that accelerate single-point energy calculations (1SCF) of medium-size (up to 2500 atoms) molecular systems using GPU coprocessors and multithreaded shared-memory CPUs. Our modifications consisted of using a combination of highly optimized linear algebra libraries for both CPU (LAPACK and BLAS from Intel MKL) and GPU (MAGMA and CUBLAS) to hasten time-consuming parts of MOPAC such as the pseudodiagonalization, full diagonalization, and density matrix assembling. We have shown that it is possible to obtain large speedups just by using CPU serial linear algebra libraries in the MOPAC code. As a special case, we show a speedup of up to 14 times for a methanol simulation box containing 2400 atoms and 4800 basis functions, with even greater gains in performance when using multithreaded CPUs (2.1 times in relation to the single-threaded CPU code using linear algebra libraries) and GPUs (3.8 times). This degree of acceleration opens new perspectives for modeling larger structures which appear in inorganic chemistry (such as zeolites and MOFs), biochemistry (such as polysaccharides, small proteins, and DNA fragments), and materials science (such as nanotubes and fullerenes). In addition, we believe that this parallel (GPU-GPU) MOPAC code will make it feasible to use semiempirical methods in lengthy molecular simulations using both hybrid QM/MM and QM/QM potentials.
NASA Astrophysics Data System (ADS)
Qin, Chunbin; Zhang, Huaguang; Luo, Yanhong
2014-05-01
In this paper, a novel theoretic formulation based on adaptive dynamic programming (ADP) is developed to solve online the optimal tracking problem of the continuous-time linear system with unknown dynamics. First, the original system dynamics and the reference trajectory dynamics are transformed into an augmented system. Then, under the same performance index with the original system dynamics, an augmented algebraic Riccati equation is derived. Furthermore, the solutions for the optimal control problem of the augmented system are proven to be equal to the standard solutions for the optimal tracking problem of the original system dynamics. Moreover, a new online algorithm based on the ADP technique is presented to solve the optimal tracking problem of the linear system with unknown system dynamics. Finally, simulation results are given to verify the effectiveness of the theoretic results.
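The algebraic Riccati equation central to this formulation can be solved directly when the dynamics are known, which is the baseline the ADP method approximates online. A minimal regulator sketch with toy double-integrator numbers (not from the paper):

```python
# Solve the continuous-time algebraic Riccati equation
#   A'P + P A - P B R^{-1} B' P + Q = 0
# and form the optimal state-feedback gain u = -K x.
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0], [0.0, 0.0]])   # toy double integrator (assumed)
B = np.array([[0.0], [1.0]])
Q = np.eye(2)                            # assumed state and input weights
R = np.array([[1.0]])

P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)          # optimal gain
```

For the tracking problem in the abstract, the same equation is posed on the augmented system of plant plus reference dynamics; the ADP iteration recovers P without ever forming A explicitly.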
NASA Technical Reports Server (NTRS)
Maskew, B.
1982-01-01
VSAERO is a computer program used to predict the nonlinear aerodynamic characteristics of arbitrary three-dimensional configurations in subsonic flow. Nonlinear effects of vortex separation and vortex surface interaction are treated in an iterative wake-shape calculation procedure, while the effects of viscosity are treated in an iterative loop coupling potential-flow and integral boundary-layer calculations. The program employs a surface singularity panel method using quadrilateral panels on which doublet and source singularities are distributed in a piecewise constant form. This user's manual provides a brief overview of the mathematical model, instructions for configuration modeling and a description of the input and output data. A listing of a sample case is included.
Numerical Scheme for Viability Computation Using Randomized Technique with Linear Programming
Djeridane, Badis
2008-06-12
We deal with the problem of computing viability sets for nonlinear continuous or hybrid systems. Our main objective is to beat the curse of dimensionality; that is, we want to avoid the exponential growth of the required computational resources with respect to the dimension of the system. We propose a randomized approach to viability computation: we avoid gridding the state space and use random extraction of points instead, and the viable-set membership test is formulated as a classical feasibility problem. The algorithm was applied successfully to linear and nonlinear examples, and we compare our results with those of other methods.
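A minimal sketch of the randomized idea, under assumed dynamics that are not taken from the paper: sample random states instead of gridding, and decide one-step viability of each sample with an LP feasibility test (can some admissible control keep the Euler-step successor inside the constraint box?). The double integrator, step size, and bounds are all hypothetical.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)

# Hypothetical double integrator x' = Ax + Bu with bounded control.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
h = 0.1                        # Euler step used by the one-step test
u_lo, u_hi = -1.0, 1.0         # admissible control interval
box = 1.0                      # state constraint set K = [-box, box]^2

def one_step_viable(x):
    """LP feasibility test: does some admissible u keep x + h(Ax + Bu) in K?"""
    drift = x + h * (A @ x)    # successor part independent of u
    G = h * B                  # effect of u on the successor
    A_ub = np.vstack([G, -G])  # encodes -box <= drift + G u <= box
    b_ub = np.concatenate([box - drift, box + drift])
    res = linprog(c=[0.0], A_ub=A_ub, b_ub=b_ub,
                  bounds=[(u_lo, u_hi)], method="highs")
    return res.status == 0     # status 0 means a feasible u exists

# Random extraction of points instead of gridding the state space.
samples = rng.uniform(-box, box, size=(500, 2))
fraction = np.mean([one_step_viable(x) for x in samples])
```

The full method iterates such tests over a horizon; here one Euler step suffices to show how the test reduces to an LP, so the cost per sample is independent of any grid resolution.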
Joustra, P E; de Wit, J; Struben, V M D; Overbeek, B J H; Fockens, P; Elkhuizen, S G
2010-03-01
To reduce the access times of an endoscopy department, we developed an iterative combination of Discrete Event Simulation and Integer Linear Programming. We developed the method in the Endoscopy Department of the Academic Medical Center in Amsterdam and compared different scenarios for reducing the department's access times. The results show that with a more effective allocation of the current capacity, all procedure types will meet their corresponding performance targets, in contrast to the current situation. This improvement can be accomplished without additional equipment or staff. Our recommendations are currently being implemented.
Mathur, Rinku; Adlakha, Neeru
2014-06-01
Phylogenetic trees give information about the vertical relationships of ancestors and descendants, whereas phylogenetic networks are used to visualize the horizontal relationships among different organisms. In order to predict reticulate events, there is a need to construct phylogenetic networks. Here, a Linear Programming (LP) model has been developed for the construction of a phylogenetic network. The model is validated using data sets of chloroplast 16S rRNA sequences of photosynthetic organisms and of Influenza A/H5N1 viruses. The results obtained are in agreement with those obtained by earlier researchers.
Safikhani, Zhaleh; Sadeghi, Mehdi; Pezeshk, Hamid; Eslahchi, Changiz
2013-01-01
Recent advances in sequencing technologies have provided a handful of RNA-seq datasets for transcriptome analysis. However, reconstruction of full-length isoforms and estimation of the expression level of transcripts at low cost are challenging tasks. We propose a novel de novo method named SSP that incorporates interval integer linear programming to resolve alternatively spliced isoforms and reconstruct the whole transcriptome from short reads. Experimental results show that SSP is fast and precise in determining different alternatively spliced isoforms along with the estimation of reconstructed transcript abundances. The SSP software package is available at http://www.bioinf.cs.ipm.ir/software/ssp.
Liu, Zhongyi; Sun, Wenyu; Tian, Fangbao
2009-10-15
This paper proposes an infeasible interior-point algorithm with full Newton steps for linear programming, which is an extension of the work of Roos (SIAM J. Optim. 16(4):1110-1136, 2006). The main iteration of the algorithm consists of a feasibility step and several centrality steps. We introduce a kernel function in the algorithm to induce the feasibility step. For the parameter p ∈ [0,1], polynomial complexity can be proved, and the result coincides with the best known result for infeasible interior-point methods, that is, O(n log(n/ε)).
ERIC Educational Resources Information Center
Bennett, Susan V.; Calderone, Cynthia; Dedrick, Robert F.; Gunn, AnnMarie Alberton
2015-01-01
In this mixed-method research, we examined the effects of a reading and singing software program (RSSP) as a reading intervention on struggling readers' reading achievement, as measured by the Florida Comprehensive Assessment Test, the high-stakes state test administered in the state of Florida, at one elementary school. Our team defined struggling…
Leroy, Jef L; García-Guerra, Armando; García, Raquel; Dominguez, Clara; Rivera, Juan; Neufeld, Lynnette M
2008-04-01
The goal of this study was to evaluate the impact of Mexico's conditional cash transfer program, Oportunidades, on the growth of children <24 mo of age living in urban areas. Beneficiary families received cash transfers, a fortified food (targeted to pregnant and lactating women, children 6-23 mo, and children with low weight 2-4 y), and curative health services, among other benefits. Program benefits were conditional on preventative health care utilization and attendance of health and nutrition education sessions. We estimated the impact of the program after 2 y of operation in a panel of 432 children <24 mo of age at baseline (2002). We used difference-in-difference propensity score matching, which takes into account nonrandom program participation and the effects of unobserved fixed characteristics on outcomes. All models controlled for child age, sex, baseline anthropometry, and maternal height. Anthropometric Z-scores were calculated using the new WHO growth reference standards. There was no overall association between program participation and growth in children 6 to 24 mo of age. Children in intervention families younger than 6 mo of age at baseline grew 1.5 cm (P < 0.05) more than children in comparison families, corresponding to 0.41 height-for-age Z-scores (HAZ) (P < 0.05). They also gained an additional 0.76 kg (P < 0.01) or 0.47 weight-for-height Z-scores (P < 0.05). Children living in the poorest intervention households tended (0.05 < P < 0.10) to be taller than comparison children (0.9 cm, 0.27 HAZ). Oportunidades, with its strong nutrition component, is an effective tool to improve the growth of infants in poor urban households.
Linear Regression Modeling of Selected Analytes from the Balad Air Sampling Program
2012-04-05
The Spearman correlation coefficient option in the IBM SPSS® Statistics V20 program was used when comparing two variables (weather and analyte). … The positive Spearman correlation coefficient (0.598) indicates that the analyte concentration of benzo[a]pyrene increased during the four sampling … The negative Spearman correlation coefficient (-0.318) indicates that the analyte concentration of cadmium decreased over the four
NASA Technical Reports Server (NTRS)
Hauser, F. D.; Szollosi, G. D.; Lakin, W. S.
1972-01-01
COEBRA, the Computerized Optimization of Elastic Booster Autopilots, is an autopilot design program. The bulk of the design criteria is presented in the form of minimum allowed gain/phase stability margins. COEBRA has two optimization phases: (1) a phase to maximize stability margins; and (2) a phase to optimize structural bending moment load relief capability in the presence of minimum requirements on gain/phase stability margins.
Dalla-Favera, Natalia; Hamacek, Josef; Borkovec, Michal; Jeannerat, Damien; Gumy, Frédéric; Bünzli, Jean-Claude G; Ercolani, Gianfranco; Piguet, Claude
2008-01-01
The contribution of the solvation energies to the assembly of polynuclear helicates reduces the free energy of intermetallic repulsion, DeltaE(MM), in the condensed phase to such an extent that stable D(3)-symmetrical tetranuclear lanthanide-containing triple-stranded helicates [Ln(4)(L4)(3)](12+) are quantitatively produced at millimolar concentrations, despite the twelve positive charges borne by these complexes. A detailed modelling of the formation constants using statistical factors, adapted to self-assembly processes involving intra- and intermolecular connections, provides a set of five microscopic parameters, which can be successfully used for rationalizing the stepwise generation of linear bi-, tri- and tetranuclear analogues. Photophysical studies of [Eu(4)(L4)(3)](12+) confirm the existence of two different binding sites producing differentiated metal-centred emission at low temperature, which transforms into single-site luminescence at room temperature because of intramolecular energy funnelling processes.
Zhang, Xiaoling; Huang, Kai; Zou, Rui; Liu, Yong; Yu, Yajuan
2013-01-01
The conflict between water environment protection and economic development has brought severe water pollution and restricted sustainable development in the watershed. A risk explicit interval linear programming (REILP) method was used to solve an integrated watershed environmental-economic optimization problem. Interval linear programming (ILP) and REILP models for uncertainty-based environmental-economic optimization at the watershed scale were developed for the management of Lake Fuxian watershed, China. Scenario analysis was introduced into the model solution process to ensure the practicality and operability of the optimization schemes. Decision makers' preferences for risk levels can be expressed by inputting different discrete aspiration level values into the REILP model in three periods under two scenarios. By balancing the optimal system returns and the corresponding system risks, decision makers can develop an efficient industrial restructuring scheme based directly on the window of "low risk and high return efficiency" in the trade-off curve. The representative schemes at the turning points of the two scenarios were interpreted and compared to identify a preferable planning alternative, which has relatively low risks and nearly maximum benefits. This study provides new insights and proposes a tool, the REILP model, for decision makers to develop an effective environmental-economic optimization scheme in integrated watershed management.
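The interval-programming machinery behind such models can be illustrated on a toy problem. In the sketch below (all coefficients invented, not from the Lake Fuxian study), unit returns are known only as intervals, so the ILP step solves two ordinary LPs, one with the optimistic and one with the pessimistic coefficients, to bracket the optimal system return; REILP then trades part of that return against constraint-violation risk.

```python
import numpy as np
from scipy.optimize import linprog

# Toy two-industry example (all numbers illustrative): maximize an interval
# net return [c_lo, c_hi] . x subject to a pollutant-load cap and capacities.
c_lo = np.array([3.0, 5.0])      # pessimistic unit returns
c_hi = np.array([4.0, 6.0])      # optimistic unit returns
A_ub = np.array([[2.0, 3.0]])    # pollutant load per unit output
b_ub = np.array([12.0])          # allowable watershed load
bounds = [(0.0, 4.0), (0.0, 3.0)]  # production capacities

def solve(c):
    # linprog minimizes, so negate the return coefficients to maximize.
    res = linprog(-c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return -res.fun, res.x

f_best, x_best = solve(c_hi)     # upper bound of the interval optimum
f_worst, x_worst = solve(c_lo)   # lower bound of the interval optimum
# An ILP solution reports the interval [f_worst, f_best]; REILP then
# trades system return against the risk of violating interval constraints.
```

Any candidate restructuring scheme with a return inside [f_worst, f_best] can then be ranked by the risk level it implies, which is the trade-off curve the abstract refers to.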
Ament, D; Ho, J; Loute, E; Remmelswaal, M
1980-06-01
Nested decomposition of linear programs is the result of a multilevel, hierarchical application of the Dantzig-Wolfe decomposition principle. The general structure is called lower block-triangular, and permits direct accounting of long-term effects of investment, service life, etc. LIFT, an algorithm for solving lower block-triangular linear programs, is based on state-of-the-art modular LP software. The algorithmic and software aspects of LIFT are outlined, and computational results are presented. 5 figures, 6 tables. (RWR)
CAD of control systems: Application of nonlinear programming to a linear quadratic formulation
NASA Technical Reports Server (NTRS)
Fleming, P.
1983-01-01
The familiar suboptimal regulator design approach is recast as a constrained optimization problem and incorporated in a Computer Aided Design (CAD) package where both the design objective and the constraints are quadratic cost functions. This formulation permits the separate consideration of, for example, model-following errors, sensitivity measures, and control energy as objectives to be minimized or limits to be observed. Efficient techniques for computing the interrelated cost functions and their gradients are utilized in conjunction with a nonlinear programming algorithm. The effectiveness of the approach and the degree of insight into the problem which it affords are illustrated in a helicopter regulator design example.
A Decomposition Method and Its Application to Block Angular Linear Programs.
1981-01-01
Theorem 2.2: If the f_i are strongly convex, then the f_i* are finite everywhere and are Lipschitz continuously differentiable. Hence g is also finite everywhere and is Lipschitz continuously differentiable. … The derivative g′ of g at the point y is given by g′(y) = a − Σ_i A_i x_i(y), where … the dual problem (2.5) and the quadratic programming problems (3.2). Recall that g(y) is a Lipschitz continuously differentiable function, so to solve …
Experimental program to build a multimegawatt lasertron for super linear colliders
Garwin, E.L.; Herrmannsfeldt, W.B.; Sinclair, C.; Weaver, J.N.; Welch, J.J.; Wilson, P.B.
1985-04-01
A lasertron (a microwave "triode" with an RF output cavity and an RF-modulated laser to illuminate a photocathode) is a possible high-power RF amplifier for TeV linear colliders. As the first step toward building a 35 MW, S-band lasertron for a proof-of-principle demonstration, a 400 kV dc diode is being designed with a GaAs photocathode, a drift tube, and a collector. After some cathode life tests are made in the diode, an RF output cavity will replace the drift tube, and a mode-locked, frequency-doubled Nd:YAG laser, modulated to produce a 1 µs-long comb of 60 ps pulses at a 2856 MHz rate, will be used to illuminate the photocathode to make an RF power source out of the device. This paper discusses the plans for the project and includes some results of numerical simulation studies of the lasertron as well as some of the ultra-high-vacuum and mechanical design requirements for incorporating a photocathode.
Knapp, Bettina; Kaderali, Lars
2013-01-01
Perturbation experiments, for example using RNA interference (RNAi), offer an attractive way to elucidate gene function in a high-throughput fashion. The placement of hit genes in their functional context and the inference of underlying networks from such data, however, are challenging tasks. One of the problems in network inference is the exponential number of possible network topologies for a given number of genes. Here, we introduce a novel mathematical approach to address this question. We formulate network inference as a linear optimization problem, which can be solved efficiently even for large-scale systems. We use simulated data to evaluate our approach, and show improved performance, in particular on larger networks, over state-of-the-art methods. We achieve increased sensitivity and specificity, as well as a significant reduction in computing time. Furthermore, we show superior performance on noisy data. We then apply our approach to study the intracellular signaling of human primary naïve CD4(+) T-cells, as well as ErbB signaling in trastuzumab-resistant breast cancer cells. In both cases, our approach recovers known interactions and points to additional relevant processes. In ErbB signaling, our results predict an important role of negative and positive feedback in controlling cell cycle progression.
Current Status of the Next Linear Collider X-Band Klystron Development Program
Caryotakis, G.; Haase, A.A.; Jongewaard, E.N.; Pearson, C.; Sprehn, D.W.; /SLAC
2005-05-09
Klystrons capable of driving accelerator sections in the Next Linear Collider (NLC) have been developed at SLAC during the last decade. In addition to fourteen 50 MW solenoid-focused devices and a 50 MW Periodic Permanent Magnet focused (PPM) klystron, a 500 kV 75 MW PPM klystron was tested in 1999 to 80 MW with 3 µs pulses, but very low duty. Subsequent 75 MW prototypes aimed for low-cost manufacture by employing reusable focusing structures external to the vacuum, similar to a solenoid electromagnet. During the PPM klystron development, several partners (CPI, EEV and Toshiba) have participated by constructing partial or complete PPM klystrons. After early failures during testing of the first two devices, SLAC has recently tested this design (XP3-3) to the full NLC specifications of 75 MW, 1.6 µs pulse length, and 120 Hz. This 14.4 kW average power operation came with an efficiency of 50%. The XP3-3 average and peak output power, together with the focusing method, arguably makes it the most advanced high-power klystron ever built anywhere in the world. Design considerations and test results for these latest prototypes will be presented.
IKE: An interactive klystron evaluation program for SLAC linear collider klystron performance
Kleban, S.D.; Koontz, R.F.; Vlieks, A.E.
1987-03-01
When the new 65 MW klystrons for the SLC were planned, a computer-based interlock and data recording system was implemented in the general electronics upgrade. Significant klystron operating parameters are interlocked and displayed in the SLC central control room through the VAX control computer. A program titled "IKE" has been written to record klystron operating data each day, store the data in a database, and provide various sorted operating and statistical information to klystron engineers and maintenance personnel in the form of terminal listings, bar graphs, and special printed reports. This paper gives an overview of the IKE system, describes its use as a klystron maintenance tool, and explains why it is valuable to klystron engineers.
Blumberg, Leonid M; Desmet, Gert
2016-12-09
The mixing rate (Rϕ) is the temporal rate of increase in the solvent strength in gradient LC. The optimal Rϕ (Rϕ,Opt) is the one at which a required peak capacity of a gradient LC analysis is obtained in the shortest time. A balanced mixing program is one in which, for better separation of early-eluting solutes, the mixing ramp is preceded by a balanced isocratic hold whose duration depends on Rϕ. The improvement in the separation of the earlier eluites due to the balanced programming has been evaluated. The value of Rϕ,Opt depends on the solvent composition range covered by the mixing ramp and on the column pressure conditions. The Rϕ,Opt for a column operating at maximum instrumental pressure is different from the Rϕ,Opt for a column operating below the instrumental pressure limit. On the other hand, it has been shown that the difference in the Rϕ,Opt values under different conditions is not very large, so that a single default Rϕ previously recommended for gradient analyses without the isocratic hold also yields a good approximation to the shortest analysis time for all conditions in the balanced analyses. With or without the initial balanced isocratic hold, the recommended default Rϕ is about 5%/t0 (a 5% increase in the solvent strength per t0-long increment in time) for small-molecule samples, and about an order of magnitude slower (0.5%/t0) for protein samples. A discussion illustrating the use of the optimization criteria employed here for techniques other than LSS gradient LC is included.
Johnson, Glen D; Mesler, Kristine; Kacica, Marilyn A
2017-02-06
Objective: The objective is to estimate community needs with respect to risky adolescent sexual behavior in a way that is risk-adjusted for multiple community factors. Methods: Generalized linear mixed modeling was applied to estimate teen pregnancy and sexually transmitted disease (STD) incidence by postal ZIP code in New York State, in a way that adjusts for other community covariables and residual spatial autocorrelation. A community needs index was then obtained by summing the risk-adjusted estimates of pregnancy and STD cases. Results: Poisson regression with a spatial random effect was chosen among competing modeling approaches. Both the risk-adjusted caseloads and rates were computed for ZIP codes, which allowed risk-based prioritization to help guide funding decisions for a comprehensive adolescent pregnancy prevention program. Conclusions: This approach provides quantitative evidence of community needs with respect to risky adolescent sexual behavior, while adjusting for other community-level variables and stabilizing estimates in areas with small populations. Therefore, it was well accepted by the affected groups and proved valuable for program planning. This methodology may also prove valuable for follow-up program evaluation. Current research is directed towards further improving the statistical modeling approach and applying it to different health and behavioral outcomes, along with different predictor variables.
Catanzaro, Daniele; Shackney, Stanley E; Schaffer, Alejandro A; Schwartz, Russell
2016-01-01
Ductal Carcinoma In Situ (DCIS) is a precursor lesion of Invasive Ductal Carcinoma (IDC) of the breast. Investigating its temporal progression could provide fundamental new insights for the development of better diagnostic tools to predict which cases of DCIS will progress to IDC. We investigate the problem of reconstructing a plausible progression from single-cell sampled data of an individual with synchronous DCIS and IDC. Specifically, by using a number of assumptions derived from the observation of cellular atypia occurring in IDC, we design a possible predictive model using integer linear programming (ILP). Computational experiments carried out on a preexisting data set of 13 patients with simultaneous DCIS and IDC show that the corresponding predicted progression models are classifiable into categories having specific evolutionary characteristics. The approach provides new insights into mechanisms of clonal progression in breast cancers and helps illustrate the power of the ILP approach for similar problems in reconstructing tumor evolution scenarios under complex sets of constraints.
NASA Technical Reports Server (NTRS)
Houts, R. C.; Burlage, D. W.
1972-01-01
A time domain technique is developed to design finite-duration impulse response digital filters using linear programming. Two related applications of this technique in data transmission systems are considered. The first is the design of pulse shaping digital filters to generate or detect signaling waveforms transmitted over bandlimited channels that are assumed to have ideal low pass or bandpass characteristics. The second is the design of digital filters to be used as preset equalizers in cascade with channels that have known impulse response characteristics. Example designs are presented which illustrate that excellent waveforms can be generated with frequency-sampling filters and the ease with which digital transversal filters can be designed for preset equalization.
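The linear-programming formulation behind such filter designs can be sketched as follows (the filter length, band edges, and grid density are illustrative choices, not taken from the report): for a linear-phase FIR filter the amplitude response is linear in the coefficients, so the minimax (equiripple) approximation of an ideal lowpass response becomes an LP in the coefficients plus one ripple variable.

```python
import numpy as np
from scipy.optimize import linprog

# Minimax design of a length-(2M+1) linear-phase (type I) lowpass FIR filter.
# The amplitude A(w) = sum_k a_k cos(k w) is linear in the coefficients a_k,
# so minimizing the peak approximation error delta is a linear program.
M = 10                                   # half-order -> 21 taps
wp, ws = 0.3 * np.pi, 0.45 * np.pi       # passband / stopband edges
w = np.concatenate([np.linspace(0, wp, 80),      # passband grid
                    np.linspace(ws, np.pi, 80)]) # stopband grid
D = (w <= wp).astype(float)              # desired: 1 in passband, 0 in stopband

C = np.cos(np.outer(w, np.arange(M + 1)))        # A(w) = C @ a
# Variables z = [a_0..a_M, delta]; constraints |C a - D| <= delta.
A_ub = np.block([[C, -np.ones((len(w), 1))],
                 [-C, -np.ones((len(w), 1))]])
b_ub = np.concatenate([D, -D])
c = np.zeros(M + 2); c[-1] = 1.0         # minimize the ripple delta
res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(None, None)] * (M + 2), method="highs")
a, delta = res.x[:-1], res.x[-1]
h = np.concatenate([a[:0:-1] / 2, [a[0]], a[1:] / 2])  # symmetric taps
```

Equalizer design in the time domain works the same way: the convolution of the channel impulse response with the filter taps is linear in the taps, so peak intersymbol interference can be minimized by an analogous LP.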
NASA Astrophysics Data System (ADS)
Noor-E-Alam, Md.; Doucette, John
2015-08-01
Grid-based location problems (GBLPs) can be used to solve location problems in business, engineering, resource exploitation, and even in the field of medical sciences. To solve these decision problems, an integer linear programming (ILP) model is designed and developed to provide the optimal solution for GBLPs considering fixed-cost criteria. Preliminary results show that the ILP model is efficient in solving small to moderate-sized problems. However, the ILP model becomes intractable when solving large-scale instances. Therefore, a decomposition heuristic is proposed to solve these large-scale GBLPs, which demonstrates a significant reduction in solution runtimes. To benchmark the proposed heuristic, results are compared with the exact solution via ILP. The experimental results show that the proposed method significantly outperforms the exact method in runtime with minimal (and in most cases, no) loss of optimality.
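A minimal grid-based location ILP of this flavor can be written with SciPy's MILP interface (the grid cells, fixed costs, and distances below are invented for illustration, not from the article): binary y_j opens a candidate site, binary x_ij assigns demand cell i to site j, with linking constraints x_ij ≤ y_j.

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# Toy uncapacitated facility-location instance on a grid (numbers invented).
demand = np.array([[0, 0], [0, 4], [4, 0], [4, 4]])   # demand cells
sites = np.array([[0, 0], [4, 4], [2, 2]])            # candidate facility cells
fixed = np.array([5.0, 5.0, 6.0])                     # fixed opening costs
d = np.linalg.norm(demand[:, None, :] - sites[None, :, :], axis=2)

nI, nJ = d.shape
n = nJ + nI * nJ                  # variables: y_j, then x_ij (row-major)
c = np.concatenate([fixed, d.ravel()])

cons = []
for i in range(nI):               # each demand cell assigned exactly once
    a = np.zeros(n)
    a[nJ + i * nJ: nJ + (i + 1) * nJ] = 1.0
    cons.append(LinearConstraint(a, 1.0, 1.0))
for i in range(nI):               # linking: x_ij <= y_j
    for j in range(nJ):
        a = np.zeros(n)
        a[nJ + i * nJ + j] = 1.0
        a[j] = -1.0
        cons.append(LinearConstraint(a, -np.inf, 0.0))

res = milp(c=c, constraints=cons, integrality=np.ones(n), bounds=Bounds(0, 1))
y = np.round(res.x[:nJ])          # which candidate sites to open
```

On large grids this exact model blows up in the number of x_ij variables, which is exactly why the article resorts to a decomposition heuristic.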
Wood, Scott T; Dean, Brian C; Dean, Delphine
2013-04-01
This paper presents a novel computer vision algorithm to analyze 3D stacks of confocal images of fluorescently stained single cells. The goal of the algorithm is to create representative in silico model structures that can be imported into finite element analysis software for mechanical characterization. Segmentation of cell and nucleus boundaries is accomplished via standard thresholding methods. Using novel linear programming methods, a representative actin stress fiber network is generated by computing a linear superposition of fibers having minimum discrepancy compared with an experimental 3D confocal image. Qualitative validation is performed through analysis of seven 3D confocal image stacks of adherent vascular smooth muscle cells (VSMCs) grown in 2D culture. The presented method is able to automatically generate 3D geometries of the cell's boundary, nucleus, and representative F-actin network based on standard cell microscopy data. These geometries can be used for direct importation and implementation in structural finite element models for analysis of the mechanics of a single cell to potentially speed discoveries in the fields of regenerative medicine, mechanobiology, and drug discovery.
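The fiber-selection step can be sketched in miniature (the dictionary and image here are synthetic; the real method works on 3D confocal stacks): given candidate fiber templates rasterized as columns of F, nonnegative weights w minimizing the L1 discrepancy ||F w - y||_1 against the observed image y are found by a linear program with one auxiliary variable per pixel.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)

# Synthetic stand-in for the fiber dictionary and the observed image.
m, k = 50, 8                     # pixels, candidate fibers
F = rng.random((m, k))           # hypothetical rasterized fiber templates
w_true = np.array([2.0, 0, 0, 1.0, 0, 0, 0, 0.5])
y = F @ w_true                   # noiseless synthetic observation

# Variables z = [w, t]; minimize sum(t) subject to -t <= F w - y <= t.
c = np.concatenate([np.zeros(k), np.ones(m)])
A_ub = np.block([[F, -np.eye(m)],
                 [-F, -np.eye(m)]])
b_ub = np.concatenate([y, -y])
res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, None)] * (k + m), method="highs")
w = res.x[:k]                    # recovered fiber weights
```

The selected fibers (nonzero weights) then define the geometry exported to the finite element model.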
NASA Astrophysics Data System (ADS)
Jamali, A.; Khaleghi, E.; Gholaminezhad, I.; Nariman-zadeh, N.
2016-05-01
In this paper, a new multi-objective genetic programming (GP) method with a diversity-preserving mechanism and a real-number alteration operator is presented and successfully used for Pareto optimal modelling of some complex non-linear systems using input-output data. In this study, two different input-output data-sets, of a non-linear mathematical model and of an explosive cutting process, are considered separately in three-objective optimisation processes. The pertinent conflicting objective functions considered for such Pareto optimisations are the training error (TE), the prediction error (PE), and the length of the tree (the complexity of the network) (TL) of the GP models. Such three-objective optimisation implementations lead to some non-dominated choices of GP-type models for both cases, representing the trade-offs among those objective functions. Therefore, the optimal Pareto fronts of such GP models exhibit the trade-off among the corresponding conflicting objectives and, thus, provide different non-dominated optimal choices of GP-type models. Moreover, the results show that no significant improvement in TE and PE may occur when the TL of the corresponding GP model exceeds certain values.
NASA Astrophysics Data System (ADS)
Sharqawy, Mostafa H.
2016-12-01
Pore network models (PNM) of Berea and Fontainebleau sandstones were constructed using nonlinear programming (NLP) and optimization methods. The constructed PNMs are considered a digital representation of the rock samples, based on matching the macroscopic properties of the porous media, and were used to conduct fluid transport simulations including single- and two-phase flow. The PNMs consisted of cubic networks of randomly distributed pore and throat sizes with various connectivity levels. The networks were optimized such that the upper and lower bounds of the pore sizes are determined using the capillary tube bundle model and the Nelder-Mead method instead of guessing them, which reduces the optimization computational time significantly. An open-source PNM framework was employed to conduct transport and percolation simulations such as invasion percolation and Darcian flow. The PNM model was subsequently used to compute the macroscopic properties: porosity, absolute permeability, specific surface area, breakthrough capillary pressure, and primary drainage curve. The pore networks were optimized to allow the simulation results for the macroscopic properties to be in excellent agreement with the experimental measurements. This study demonstrates that nonlinear programming and optimization methods provide a promising approach for pore network modeling when computed tomography imaging may not be readily available.
Diffendorfer, James E.; Richards, Paul M.; Dalrymple, George H.; DeAngelis, Donald L.
2001-01-01
We present the application of Linear Programming for estimating biomass fluxes in ecosystem and food web models. We use the herpetological assemblage of the Everglades as an example. We developed food web structures for three common Everglades freshwater habitat types: marsh, prairie, and upland. We obtained a first estimate of the fluxes using field data, literature estimates, and professional judgment. Linear programming was used to obtain a consistent and better estimate of the set of fluxes, while maintaining mass balance and minimizing deviations from point estimates. The results support the view that the Everglades is a spatially heterogeneous system, with changing patterns of energy flux, species composition, and biomasses across the habitat types. We show that a food web/ecosystem perspective, combined with Linear Programming, is a robust method for describing food webs and ecosystems that requires minimal data, produces useful post-solution analyses, and generates hypotheses regarding the structure of energy flow in the system.
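The flux-balancing step described above can be sketched with a toy three-compartment web (numbers invented, not Everglades data): field point estimates of the fluxes are generally inconsistent with steady-state mass balance, and an LP finds the closest consistent flux set in the L1 sense by minimizing the total absolute deviation subject to the balance equations.

```python
import numpy as np
from scipy.optimize import linprog

# Toy web (illustrative): producer -> herbivore -> predator, with respiration
# losses. Fluxes: f = [prod->herb, herb->pred, herb->resp, pred->resp].
f0 = np.array([10.0, 4.0, 5.0, 4.5])       # inconsistent field point estimates

# Steady-state mass balance: inflow - outflow = 0 at each compartment.
A_bal = np.array([[1.0, -1.0, -1.0, 0.0],  # herbivore
                  [0.0, 1.0, 0.0, -1.0]])  # predator

n = len(f0)
# Variables z = [f, t]; minimize sum(t) with t >= |f - f0| (L1 deviation).
c = np.concatenate([np.zeros(n), np.ones(n)])
A_ub = np.block([[np.eye(n), -np.eye(n)],
                 [-np.eye(n), -np.eye(n)]])
b_ub = np.concatenate([f0, -f0])
A_eq = np.hstack([A_bal, np.zeros((2, n))])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=np.zeros(2),
              bounds=[(0, None)] * (2 * n), method="highs")
f = res.x[:n]                              # balanced flux estimates
```

The LP's shadow prices and reduced costs then supply the post-solution analyses mentioned in the abstract, indicating which point estimates constrain the balanced solution most strongly.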
Cabrera, V E
2010-01-01
The purpose of the study was 2-fold: 1) to propose a novel modeling framework using Markovian linear programming to optimize dairy farmer-defined goals under different decision schemes and 2) to illustrate the model with a practical application testing diets for entire lactations. A dairy herd population was represented by cow state variables defined by parity (1 to 15), month in lactation (1 to 24), and pregnancy status (0 nonpregnant and 1 to 9 mo of pregnancy). A database of 326,000 lactations of Holsteins from AgSource Dairy Herd Improvement service (http://agsource.crinet.com/page249/DHI) was used to parameterize reproduction, mortality, and involuntary culling. The problem was set up as a Markovian linear program model containing 5,580 decision variables and 8,731 constraints. The model optimized the net revenue of the steady state dairy herd population having 2 options in each state: keeping or replacing an animal. Five diets were studied to assess economic, environmental, and herd structural outcomes. Diets varied in proportions of alfalfa silage (38 to 98% of dry matter), high-moisture ear corn (0 to 42% of dry matter), and soybean meal (0 to 18% of dry matter) within and between lactations, which determined dry matter intake, milk production, and N excretion. Diet ingredient compositions ranged from one of high concentrates to alfalfa silage only. Hence, the model identified the maximum net revenue that included the value of nutrient excretion and the cost of manure disposal associated with the optimal policy. Outcomes related to optimal solutions included the herd population structure, the replacement policy, and the amount of N excreted under each diet experiment. The problem was solved using the Excel Risk Solver Platform with the Standard LP/Quadratic Engine. Consistent replacement policies were to (1) keep pregnant cows, (2) keep primiparous cows longer than multiparous cows, and (3) decrease replacement rates when milk and feed prices are favorable
NASA Astrophysics Data System (ADS)
Bostan, Mohamad; Hadi Afshar, Mohamad; Khadem, Majed
2015-04-01
This article proposes a hybrid linear programming (LP-LP) methodology for the simultaneous optimal design and operation of groundwater utilization systems. The proposed model is an extension of an earlier LP-LP model proposed by the authors for the optimal operation of a set of existing wells. The proposed model can be used to optimally determine the number, configuration and pumping rates of the operational wells out of potential wells with fixed locations to minimize the total cost of utilizing a two-dimensional confined aquifer under steady-state flow conditions. The model is able to take into account the well installation, piping and pump installation costs in addition to the operational costs, including the cost of energy and maintenance. The solution to the problem is defined by well locations and their pumping rates, minimizing the total cost while satisfying a downstream demand, lower/upper bound on the pumping rates, and lower/upper bound on the water level drawdown at the wells. A discretized version of the differential equation governing the flow is first embedded into the model formulation as a set of additional constraints. The resulting mixed-integer highly constrained nonlinear optimization problem is then decomposed into two subproblems with different sets of decision variables, one with a piezometric head and the other with the operational well locations and the corresponding pumping rates. The binary variables representing the well locations are approximated by a continuous variable leading to two LP subproblems. Having started with a random value for all decision variables, the two subproblems are solved iteratively until convergence is achieved. The performance and ability of the proposed method are tested against a hypothetical problem from the literature and the results are presented and compared with those obtained using a mixed-integer nonlinear programming method. The results show the efficiency and effectiveness of the proposed method for
NASA Astrophysics Data System (ADS)
Hung, Linda; Huang, Chen; Shin, Ilgyou; Ho, Gregory S.; Lignères, Vincent L.; Carter, Emily A.
2010-12-01
Orbital-free density functional theory (OFDFT) is a first principles quantum mechanics method to find the ground-state energy of a system by variationally minimizing with respect to the electron density. No orbitals are used in the evaluation of the kinetic energy (unlike Kohn-Sham DFT), and the method scales nearly linearly with the size of the system. The PRinceton Orbital-Free Electronic Structure Software (PROFESS) uses OFDFT to model materials from the atomic scale to the mesoscale. This new version of PROFESS allows the study of larger systems with two significant changes: PROFESS is now parallelized, and the ion-electron and ion-ion terms scale quasilinearly, instead of quadratically as in PROFESS v1 (L. Hung and E.A. Carter, Chem. Phys. Lett. 475 (2009) 163). At the start of a run, PROFESS reads the various input files that describe the geometry of the system (ion positions and cell dimensions), the type of elements (defined by electron-ion pseudopotentials), the actions you want it to perform (minimize with respect to electron density and/or ion positions and/or cell lattice vectors), and the various options for the computation (such as which functionals you want it to use). Based on these inputs, PROFESS sets up a computation and performs the appropriate optimizations. Energies, forces, stresses, material geometries, and electron density configurations are some of the values that can be output throughout the optimization.

New version program summary
Program Title: PROFESS
Catalogue identifier: AEBN_v2_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEBN_v2_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 68 721
No. of bytes in distributed program, including test data, etc.: 1 708 547
Distribution format: tar.gz
Programming language: Fortran 90
Computer:
Ryan, Jason C; Banerjee, Ashis Gopal; Cummings, Mary L; Roy, Nicholas
2014-06-01
Planning operations across a number of domains can be considered as resource allocation problems with timing constraints. An unexplored instance of such a problem domain is the aircraft carrier flight deck, where, in current operations, replanning is done without the aid of any computerized decision support. Rather, veteran operators employ a set of experience-based heuristics to quickly generate new operating schedules. These expert user heuristics are neither codified nor evaluated by the United States Navy; they have grown solely from the convergent experiences of supervisory staff. As unmanned aerial vehicles (UAVs) are introduced in the aircraft carrier domain, these heuristics may require alterations due to differing capabilities. The inclusion of UAVs also allows for new opportunities for on-line planning and control, providing an alternative to the current heuristic-based replanning methodology. To investigate these issues formally, we have developed a decision support system for flight deck operations that utilizes a conventional integer linear program-based planning algorithm. In this system, a human operator sets both the goals and constraints for the algorithm, which then returns a proposed schedule for operator approval. As a part of validating this system, the performance of this collaborative human-automation planner was compared with that of the expert user heuristics over a set of test scenarios. The resulting analysis shows that human heuristics often outperform the plans produced by an optimization algorithm, but are also often more conservative.
Simic, Vladimir; Dimitrijevic, Branka
2015-02-01
An interval linear programming approach is used to formulate and comprehensively test a model for optimal long-term planning of vehicle recycling in the Republic of Serbia. The proposed model is applied to a numerical case study: a 4-year planning horizon (2013-2016) is considered, three legislative cases and three scrap metal price trends are analysed, and the availability of final destinations for sorted waste flows is explored. The potential and applicability of the developed model are fully illustrated. Detailed insights into the profitability and eco-efficiency of the projected contemporary equipped vehicle recycling factory are presented. The influence of the ordinance on the management of end-of-life vehicles in the Republic of Serbia on decisions about procuring vehicle hulks, sorting generated material fractions, allocating sorted waste and allocating sorted metals is thoroughly examined. The validity of the waste management strategy for the period 2010-2019 is tested. The formulated model can create optimal plans for procuring vehicle hulks, sorting generated material fractions, allocating sorted waste flows and allocating sorted metals. The obtained results are valuable for supporting the construction and/or modernisation of a vehicle recycling system in the Republic of Serbia.
Rosenblum, Michael; Liu, Han; Yen, En-Hsu
2014-01-01
We propose new, optimal methods for analyzing randomized trials, when it is suspected that treatment effects may differ in two predefined subpopulations. Such subpopulations could be defined by a biomarker or risk factor measured at baseline. The goal is to simultaneously learn which subpopulations benefit from an experimental treatment, while providing strong control of the familywise Type I error rate. We formalize this as a multiple testing problem and show it is computationally infeasible to solve using existing techniques. Our solution involves a novel approach, in which we first transform the original multiple testing problem into a large, sparse linear program. We then solve this problem using advanced optimization techniques. This general method can solve a variety of multiple testing problems and decision theory problems related to optimal trial design, for which no solution was previously available. In particular, we construct new multiple testing procedures that satisfy minimax and Bayes optimality criteria. For a given optimality criterion, our new approach yields the optimal tradeoff between power to detect an effect in the overall population versus power to detect effects in subpopulations. We demonstrate our approach in examples motivated by two randomized trials of new treatments for HIV.
NASA Astrophysics Data System (ADS)
Ezenwaji, Emma E.; Anyadike, Raymond N. C.; Igu, Nnaemeka I.
2014-03-01
Recent studies in water supply in Enugu urban area have observed that there is a persistent water supply shortage relative to demand. One of the strategies for achieving a good water supply under the circumstance is through efficient water allocation to consumers. The existing allocation system by the Enugu State Water Corporation is not achieving the desired goal, because it is not based on any scientific criteria. In this study, we have employed the linear programming modelling technique to optimise the allocation of 35,000,000 L of water produced daily by the State Water Corporation and supplied to the four sectors of the town. The result shows that the model allocated 27,470,000 L to the residential sector, 3,360,000 L to commercial, 3,120,000 L to industrial and 882,000 L to the public institutions sector, leaving a balance of 168,000 L to be utilised in emergency situations. This allocation pattern departs sharply from the present management technique adopted by the corporation. It is then suggested that for urban water supply to be sustainable in the town, the corporation should rely on this technique for water supply.
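A minimal sketch of this kind of allocation LP, using `scipy.optimize.linprog`: maximise a weighted service objective subject to the 35,000,000 L daily supply. The sector weights and lower/upper bounds here are hypothetical, not the corporation's actual criteria, so the resulting split differs from the figures quoted above.

```python
from scipy.optimize import linprog

supply = 35_000_000                       # litres produced per day
# hypothetical priority weights: residential, commercial, industrial, public
weights = [1.0, 0.8, 0.7, 0.9]
lo = [20_000_000, 3_000_000, 3_000_000, 800_000]   # minimum demands (L)
hi = [28_000_000, 5_000_000, 4_000_000, 1_500_000] # capacity caps (L)

# linprog minimises, so negate the weights to maximise weighted service
res = linprog(c=[-w for w in weights],
              A_ub=[[1, 1, 1, 1]], b_ub=[supply],
              bounds=list(zip(lo, hi)))
alloc = res.x
# the solver fills the highest-weight sectors first, within their bounds
```

With these toy numbers the optimum gives residential its 28,000,000 L cap, holds commercial and industrial at their minimums, and sends the remainder to public institutions, exhausting the supply.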
Lee, Dongyul; Lee, Chaewoo
2014-01-01
The advancement of wideband wireless networks supports real-time services such as IPTV and live video streaming. However, because of the sharing nature of the wireless medium, efficient resource allocation has been studied to achieve a high level of acceptability and proliferation of wireless multimedia. Scalable video coding (SVC) with adaptive modulation and coding (AMC) provides an excellent solution for wireless video streaming. By assigning different modulation and coding schemes (MCSs) to video layers, SVC can provide good video quality to users in good channel conditions and also basic video quality to users in bad channel conditions. For optimal resource allocation, a key issue in applying SVC in the wireless multicast service is how to assign MCSs and the time resources to each SVC layer in the heterogeneous channel condition. We formulate this problem with integer linear programming (ILP) and provide numerical results to show the performance in an IEEE 802.16m environment. The result shows that our methodology enhances the overall system throughput compared to an existing algorithm.
Non-Linear Control Allocation Using Piecewise Linear Functions
2003-08-01
A novel method is presented for the solution of the non-linear control allocation problem. Historically, control allocation has been performed by ... linear control allocation problem to be cast as a piecewise linear program. The piecewise linear program is ultimately cast as a mixed-integer linear ... piecewise linear control allocation method is shown to be markedly improved when compared to the performance of a more traditional control allocation approach that assumes linearity.
Torquato, S; Jiao, Y
2010-12-01
We have formulated the problem of generating dense packings of nonoverlapping, nontiling nonspherical particles within an adaptive fundamental cell subject to periodic boundary conditions as an optimization problem called the adaptive-shrinking cell (ASC) formulation [S. Torquato and Y. Jiao, Phys. Rev. E 80, 041104 (2009)]. Because the objective function and impenetrability constraints can be exactly linearized for sphere packings with a size distribution in d-dimensional Euclidean space R(d), it is most suitable and natural to solve the corresponding ASC optimization problem using sequential-linear-programming (SLP) techniques. We implement an SLP solution to produce robustly a wide spectrum of jammed sphere packings in R(d) for d=2, 3, 4, 5, and 6 with a diversity of disorder and densities up to the respective maximal densities. A novel feature of this deterministic algorithm is that it can produce a broad range of inherent structures (locally maximally dense and mechanically stable packings), besides the usual disordered ones (such as the maximally random jammed state), with very small computational cost compared to that of the best known packing algorithms by tuning the radius of the influence sphere. For example, in three dimensions, we show that it can produce with high probability a variety of strictly jammed packings with a packing density anywhere in the wide range [0.6, 0.7408...], where π/√18 = 0.7408... corresponds to the density of the densest packing. We also apply the algorithm to generate various disordered packings as well as the maximally dense packings for d=2, 4, 5, and 6. Our jammed sphere packings are characterized and compared to the corresponding packings generated by the well-known Lubachevsky-Stillinger (LS) molecular-dynamics packing algorithm. Compared to the LS procedure, our SLP protocol is able to ensure that the final packings are truly jammed, produces disordered jammed packings with anomalously low densities, and is appreciably
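The sequential-linear-programming idea, minimise a linearisation of the objective inside a trust region and shrink the region when a step fails to improve, can be sketched on a two-variable quadratic. This is a generic SLP illustration, not the ASC packing code; the box-constrained LP subproblem here has the closed-form solution used below.

```python
import numpy as np

def f(p):
    # toy nonlinear objective with minimum at (2, -1)
    return (p[0] - 2.0) ** 2 + (p[1] + 1.0) ** 2

def grad(p):
    return np.array([2.0 * (p[0] - 2.0), 2.0 * (p[1] + 1.0)])

p = np.zeros(2)          # starting point
delta = 1.0              # trust-region half-width
while delta > 1e-8:
    # the LP  min g.d  s.t. |d_i| <= delta  solves to d_i = -delta*sign(g_i)
    d = -delta * np.sign(grad(p))
    if f(p + d) < f(p):
        p = p + d        # accept the linearised step
    else:
        delta *= 0.5     # reject: shrink the trust region and retry
# p converges to (2, -1)
```

The accept/shrink loop is the essential SLP control logic; in the ASC formulation the linearised subproblem additionally carries the (exactly linear) impenetrability constraints and cell degrees of freedom.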
ERIC Educational Resources Information Center
Bessler, William Carl
This paper presents the procedures, results, and conclusions of a study designed to determine the effectiveness of an electronic student response system in teaching biology to the non-major. Nine group-paced linear programs were used. Subjects were 664 college students divided into treatment and control groups. The effectiveness of the response…
Ureba, A.; Salguero, F. J.; Barbeiro, A. R.; Jimenez-Ortega, E.; Baeza, J. A.; Leal, A.; Miras, H.; Linares, R.; Perucha, M.
2014-08-15
Purpose: The authors present a hybrid direct multileaf collimator (MLC) aperture optimization model exclusively based on sequencing of patient imaging data to be implemented on a Monte Carlo treatment planning system (MC-TPS) to allow the explicit radiation transport simulation of advanced radiotherapy treatments with optimal results in efficient times for clinical practice. Methods: The planning system (called CARMEN) is a full MC-TPS, controlled through a MATLAB interface, which is based on the sequencing of a novel map, called the “biophysical” map, which is generated from enhanced image data of patients to achieve a set of segments actually deliverable. In order to reduce the required computation time, the conventional fluence map has been replaced by the biophysical map which is sequenced to provide direct apertures that will later be weighted by means of an optimization algorithm based on linear programming. A ray-casting algorithm throughout the patient CT assembles information about the found structures, the mass thickness crossed, as well as PET values. Data are recorded to generate a biophysical map for each gantry angle. These maps are the input files for a home-made sequencer developed to take into account the interactions of photons and electrons with the MLC. For each linac (Axesse of Elekta and Primus of Siemens) and energy beam studied (6, 9, 12, 15 MeV and 6 MV), phase space files were simulated with the EGSnrc/BEAMnrc code. The dose calculation in patient was carried out with the BEAMDOSE code. This code is a modified version of EGSnrc/DOSXYZnrc able to calculate the beamlet dose in order to combine them with different weights during the optimization process. Results: Three complex radiotherapy treatments were selected to check the reliability of CARMEN in situations where the MC calculation can offer an added value: A head-and-neck case (Case I) with three targets delineated on PET/CT images and a demanding dose-escalation; a partial breast
Zhang, Huiling; Huang, Qingsheng; Bei, Zhendong; Wei, Yanjie; Floudas, Christodoulos A
2016-03-01
In this article, we present COMSAT, a hybrid framework for residue contact prediction of transmembrane (TM) proteins, integrating a support vector machine (SVM) method and a mixed integer linear programming (MILP) method. COMSAT consists of two modules: COMSAT_SVM which is trained mainly on position-specific scoring matrix features, and COMSAT_MILP which is an ab initio method based on optimization models. Contacts predicted by the SVM model are ranked by SVM confidence scores, and a threshold is trained to improve the reliability of the predicted contacts. For TM proteins with no contacts above the threshold, COMSAT_MILP is used. The proposed hybrid contact prediction scheme was tested on two independent TM protein sets based on the contact definition of 14 Å between Cα-Cα atoms. First, using a rigorous leave-one-protein-out cross validation on the training set of 90 TM proteins, an accuracy of 66.8%, a coverage of 12.3%, a specificity of 99.3% and a Matthews' correlation coefficient (MCC) of 0.184 were obtained for residue pairs that are at least six amino acids apart. Second, when tested on a test set of 87 TM proteins, the proposed method showed a prediction accuracy of 64.5%, a coverage of 5.3%, a specificity of 99.4% and an MCC of 0.106. COMSAT shows satisfactory results when compared with 12 other state-of-the-art predictors, and is more robust in terms of prediction accuracy as the length and complexity of TM protein increase. COMSAT is freely accessible at http://hpcc.siat.ac.cn/COMSAT/.
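For reference, the quality measures quoted above can be computed from a confusion matrix. Note that in the contact-prediction literature "accuracy" conventionally means the precision of the predicted contacts and "coverage" means recall; that convention is assumed here, and the counts below are made up.

```python
import math

def metrics(tp, fp, tn, fn):
    """Contact-prediction quality measures from confusion-matrix counts."""
    accuracy = tp / (tp + fp)        # precision of predicted contacts
    coverage = tp / (tp + fn)        # recall of true contacts
    specificity = tn / (tn + fp)
    # Matthews' correlation coefficient
    mcc = (tp * tn - fp * fn) / math.sqrt(
        (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return accuracy, coverage, specificity, mcc

acc, cov, spec, mcc = metrics(tp=50, fp=25, tn=900, fn=25)
```

With these invented counts the formulas give a precision and recall of 2/3 each, a high specificity (true non-contacts dominate, as they do in real contact maps), and an MCC between the two.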
Melas, Ioannis N; Samaga, Regina; Alexopoulos, Leonidas G; Klamt, Steffen
2013-01-01
Cross-referencing experimental data with our current knowledge of signaling network topologies is one central goal of mathematical modeling of cellular signal transduction networks. We present a new methodology for data-driven interrogation and training of signaling networks. While most published methods for signaling network inference operate on Bayesian, Boolean, or ODE models, our approach uses integer linear programming (ILP) on interaction graphs to encode constraints on the qualitative behavior of the nodes. These constraints are posed by the network topology and their formulation as ILP allows us to predict the possible qualitative changes (up, down, no effect) of the activation levels of the nodes for a given stimulus. We provide four basic operations to detect and remove inconsistencies between measurements and predicted behavior: (i) find a topology-consistent explanation for responses of signaling nodes measured in a stimulus-response experiment (if none exists, find the closest explanation); (ii) determine a minimal set of nodes that need to be corrected to make an inconsistent scenario consistent; (iii) determine the optimal subgraph of the given network topology which can best reflect measurements from a set of experimental scenarios; (iv) find possibly missing edges that would improve the consistency of the graph with respect to a set of experimental scenarios the most. We demonstrate the applicability of the proposed approach by interrogating a manually curated interaction graph model of EGFR/ErbB signaling against a library of high-throughput phosphoproteomic data measured in primary hepatocytes. Our methods detect interactions that are likely to be inactive in hepatocytes and provide suggestions for new interactions that, if included, would significantly improve the goodness of fit. Our framework is highly flexible and the underlying model requires only easily accessible biological knowledge. All related algorithms were implemented in a freely
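The core qualitative reasoning, propagating signs along an interaction graph and flagging nodes whose measured response contradicts the prediction, can be sketched without an ILP solver for a path-shaped toy network. The network, paths, and measurements below are invented; the real method optimises over all consistent explanations via ILP rather than following fixed paths.

```python
# Signed interaction graph: +1 activation, -1 inhibition (hypothetical).
edges = {('EGF', 'ERK'): +1, ('ERK', 'TF'): +1, ('PTEN', 'TF'): -1}

def predict(stimulus_sign, path):
    """Qualitative effect propagated along a path: product of edge signs."""
    s = stimulus_sign
    for u, v in zip(path, path[1:]):
        s *= edges[(u, v)]
    return s

# Hypothetical stimulus-response data after stimulating EGF (+1).
measured = {'ERK': +1, 'TF': -1}
paths = {'ERK': ['EGF', 'ERK'], 'TF': ['EGF', 'ERK', 'TF']}

# Operation (i) in miniature: flag nodes inconsistent with the topology.
inconsistent = [n for n, p in paths.items() if predict(+1, p) != measured[n]]
```

Here the topology predicts TF goes up with EGF stimulation, but the measurement says down, so TF is flagged; the ILP formulation generalises this check to whole networks with branches and feedback.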
Simic, Vladimir
2015-01-01
End-of-life vehicles (ELVs) are vehicles that have reached the end of their useful lives and are no longer registered or licensed for use. The ELV recycling problem has become very serious in the last decade and more and more efforts are made in order to reduce the impact of ELVs on the environment. This paper proposes the fuzzy risk explicit interval linear programming model for ELV recycling planning in the EU. It has advantages in reflecting uncertainties presented in terms of intervals in the ELV recycling systems and fuzziness in decision makers' preferences. The formulated model has been applied to a numerical study in which different decision maker types and several ELV types under two EU ELV Directive legislative cases were examined. This study is conducted in order to examine the influences of the decision maker type, the α-cut level, the EU ELV Directive and the ELV type on decisions about vehicle hulks procuring, storing unprocessed hulks, sorting generated material fractions, allocating sorted waste flows and allocating sorted metals. Decision maker type can influence quantity of vehicle hulks kept in storages. The EU ELV Directive and decision maker type have no influence on which vehicle hulk type is kept in the storage. Vehicle hulk type, the EU ELV Directive and decision maker type do not influence the creation of metal allocation plans, since each isolated metal has its regular destination. The valid EU ELV Directive eco-efficiency quotas can be reached even when advanced thermal treatment plants are excluded from the ELV recycling process. The introduction of the stringent eco-efficiency quotas will significantly reduce the quantities of land-filled waste fractions regardless of the type of decision makers who will manage vehicle recycling system. In order to reach these stringent quotas, significant quantities of sorted waste need to be processed in advanced thermal treatment plants. Proposed model can serve as the support for the European
NASA Astrophysics Data System (ADS)
Hilbert, Bryan
2012-10-01
These observations will be used to monitor the signal non-linearity of the IR channel, as well as to update the IR channel non-linearity calibration reference file. The non-linearity behavior of each pixel in the detector will be investigated through the use of full frame and subarray flat fields, while the photometric behavior of point sources will be studied using observations of 47 Tuc. This is a continuation of the Cycle 19 non-linearity monitor, program 12696.
NASA Astrophysics Data System (ADS)
Hilbert, Bryan
2013-10-01
These observations will be used to monitor the signal non-linearity of the IR channel, as well as to update the IR channel non-linearity calibration reference file. The non-linearity behavior of each pixel in the detector will be investigated through the use of full frame and subarray flat fields, while the photometric behavior of point sources will be studied using observations of 47 Tuc. This is a continuation of the Cycle 20 non-linearity monitor, program 13079.
NASA Technical Reports Server (NTRS)
Walker, K. P.
1981-01-01
Results of a 20-month research and development program for nonlinear structural modeling with advanced time-temperature constitutive relationships are reported. The program included: (1) the evaluation of a number of viscoplastic constitutive models in the published literature; (2) incorporation of three of the most appropriate constitutive models into the MARC nonlinear finite element program; (3) calibration of the three constitutive models against experimental data using Hastelloy-X material; and (4) application of the most appropriate constitutive model to a three dimensional finite element analysis of a cylindrical combustor liner louver test specimen to establish the capability of the viscoplastic model to predict component structural response.
1980-05-31
Multiconstraint Zero-One Knapsack Problem," The Journal of the Operational Research Society, Vol. 30, 1979, pp. 369-378. [41] Kepler, C. ... programming. Shih [40] has written on a branch and bound method; Kepler and Blackman [41] have demonstrated the use of dynamic programming in the selection of ... Portfolio Selection Model," IEEE Transactions on Engineering Management, Vol. EM-26, No. 1, 1979, pp. 2-7. [40] Shih, Wei, "A Branch and
NASA Technical Reports Server (NTRS)
Bielawa, R. L.
1976-01-01
The differential equations of motion for the lateral and torsional deformations of a nonlinearly twisted rotor blade in steady flight conditions together with those additional aeroelastic features germane to composite bearingless rotors are derived. The differential equations are formulated in terms of uncoupled (zero pitch and twist) vibratory modes with exact coupling effects due to finite, time variable blade pitch and, to second order, twist. Also presented are derivations of the fully coupled inertia and aerodynamic load distributions, automatic pitch change coupling effects, structural redundancy characteristics of the composite bearingless rotor flexbeam - torque tube system in bending and torsion, and a description of the linearized equations appropriate for eigensolution analyses. Three appendixes are included presenting material appropriate to the digital computer program implementation of the analysis, program G400.
Maillot, Matthieu; Drewnowski, Adam
2011-02-01
The 2010 Dietary Guidelines Advisory Committee has recommended that no more than 5-15% of total dietary energy should be derived from solid fats and added sugars (SoFAS). The guideline was based on USDA food pattern modeling analyses that met the Dietary Reference Intake recommendations and Dietary Guidelines and followed typical American eating habits. This study recreated food intake patterns for 6 of the same gender-age groups by using USDA data sources and a mathematical optimization technique known as linear programming. The analytic process identified food consumption patterns based on 128 food categories that met the nutritional goals for 9 vitamins, 9 minerals, 8 macronutrients, and dietary fiber and minimized deviation from typical American eating habits. Linear programming Model 1 created gender- and age-specific food patterns that corresponded to energy needs for each group. Model 2 created food patterns that were iso-caloric with diets observed for that group in the 2001-2002 NHANES. The optimized food patterns were evaluated with respect to MyPyramid servings goals, energy density [kcal/g (1 kcal = 4.18 kJ)], and energy cost (US$/2000 kcal). The optimized food patterns had more servings of vegetables and fruit, lower energy density, and higher cost compared with the observed diets. All nutrient goals were met. In contrast to the much lower USDA estimates, the 2 models placed SoFAS allowances at between 17 and 33% of total energy, depending on energy needs.
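A linear-programming food-pattern model of this kind reduces, in miniature, to minimising cost (or deviation from observed intakes) subject to linear nutrient floors. The foods, prices, and nutrient requirements below are invented, and `scipy.optimize.linprog` stands in for the USDA analytic machinery.

```python
from scipy.optimize import linprog

# Hypothetical foods A, B, C: cost and nutrient content per serving.
cost = [0.5, 0.3, 0.8]        # dollars per serving
protein = [10, 2, 15]         # grams per serving
fiber = [1, 5, 2]             # grams per serving
need_protein, need_fiber = 50, 25

# Minimise cost subject to nutrient floors; ">=" rows are negated
# to match linprog's  A_ub @ x <= b_ub  convention.
res = linprog(c=cost,
              A_ub=[[-p for p in protein], [-f for f in fiber]],
              b_ub=[-need_protein, -need_fiber],
              bounds=[(0, None)] * 3)
servings = res.x
```

The real models use 128 food categories and 27 nutrient constraints, but the structure, a linear objective over non-negative food quantities with linear nutrient rows, is exactly this.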
Wang, Hsiao-Fan; Hsu, Hsin-Wei
2010-11-01
With the urgency of global warming, green supply chain management, logistics in particular, has drawn the attention of researchers. Although there are closed-loop green logistics models in the literature, most of them do not consider the uncertain environment in general terms. In this study, a generalized model is proposed where the uncertainty is expressed by fuzzy numbers. An interval programming model is proposed by the defined means and mean square imprecision index obtained from the integrated information of all the level cuts of fuzzy numbers. The resolution for interval programming is based on the decision maker (DM)'s preference. The resulting solution provides useful information on the expected solutions under a confidence level containing a degree of risk. The results suggest that the more optimistic the DM is, the better is the resulting solution. However, a higher risk of violation of the resource constraints is also present. By defining this probable risk, a solution procedure was developed with numerical illustrations. This provides the DM with a trade-off mechanism between logistics cost and risk.
NASA Technical Reports Server (NTRS)
Rybicki, G. B.
1985-01-01
The linear instability of line-driven stellar winds was reanalyzed to take proper account of the dynamical effect of scattered radiation. It is found that: (1) the drag effect of the mean scattered radiation does greatly reduce the contribution of scattering lines to the instability at the very base of the wind, but the instability growth rate associated with such lines rapidly increases as the flow moves outward from the base, reaching more than 50% of the growth rate for pure absorption lines within a stellar radius of the surface, and eventually reaching 80% of that rate at large radii; (2) perturbations in the scattered radiation field may be important for the propagation of wind disturbances, but they have little effect on the wind instability; and (3) the contribution of strongly shadowed lines to the wind instability is often reduced compared to that of unshadowed lines, but their overall effect is not one of damping in the outer parts of the wind. It is concluded that, even when all scattering effects are taken into account, the bulk of the flow in a line-driven stellar wind is still highly unstable.
Holm, Åsa; Larsson, Torbjörn; Tedgren, Åsa Carlsson
2013-08-15
Purpose: Recent research has shown that the optimization model hitherto used in high-dose-rate (HDR) brachytherapy corresponds weakly to the dosimetric indices used to evaluate the quality of a dose distribution. Although alternative models that explicitly include such dosimetric indices have been presented, the inclusion of the dosimetric indices explicitly yields intractable models. The purpose of this paper is to develop a model for optimizing dosimetric indices that is easier to solve than those proposed earlier. Methods: In this paper, the authors present an alternative approach for optimizing dose distributions for HDR brachytherapy where dosimetric indices are taken into account through surrogates based on the conditional value-at-risk concept. This yields a linear optimization model that is easy to solve, and has the advantage that the constraints are easy to interpret and modify to obtain satisfactory dose distributions. Results: The authors show by experimental comparisons, carried out retrospectively for a set of prostate cancer patients, that their proposed model corresponds well with constraining dosimetric indices. All modifications of the parameters in the authors' model yield the expected result. The dose distributions generated are also comparable to those generated by the standard model with respect to the dosimetric indices that are used for evaluating quality. Conclusions: The authors' new model is a viable surrogate to optimizing dosimetric indices and quickly and easily yields high quality dose distributions.
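The conditional value-at-risk surrogate rests on the Rockafellar-Uryasev linearisation: the CVaR of a loss sample is the optimal value of a small LP with an auxiliary threshold variable t and overshoot variables u_i. The sketch below, with invented "dose" samples, checks that the LP value equals the mean of the worst 20% of the samples.

```python
import numpy as np
from scipy.optimize import linprog

losses = np.arange(1.0, 11.0)     # 10 hypothetical dose/loss samples
alpha = 0.8                       # CVaR over the worst 20% of samples
n = len(losses)
w = 1.0 / ((1.0 - alpha) * n)

# variables z = [t, u_1..u_n]; minimise  t + w * sum(u_i)
c = np.concatenate(([1.0], np.full(n, w)))
# u_i >= losses_i - t   rewritten as   -t - u_i <= -losses_i
A = np.hstack((-np.ones((n, 1)), -np.eye(n)))
b = -losses
bounds = [(None, None)] + [(0, None)] * n   # t free, u_i >= 0

res = linprog(c, A_ub=A, b_ub=b, bounds=bounds)
# optimal value = mean of the worst 20% of losses = (9 + 10) / 2 = 9.5
```

Because the max(·, 0) terms become linear constraints, a planning model can bound the CVaR of dose over a structure with ordinary LP rows, which is exactly the tractability advantage the abstract describes.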
NASA Astrophysics Data System (ADS)
Sidorin, Anatoly
2010-01-01
In linear accelerators the particles are accelerated by either electrostatic fields or oscillating Radio Frequency (RF) fields. Accordingly the linear accelerators are divided in three large groups: electrostatic, induction and RF accelerators. Overview of the different types of accelerators is given. Stability of longitudinal and transverse motion in the RF linear accelerators is briefly discussed. The methods of beam focusing in linacs are described.
Yang, X.
1998-04-01
Large-scale (up to 5 kt) chemical blasts are routinely conducted by the mining and quarrying industries around the world to remove overburden or to fragment rocks. Because of their ability to trigger the future International Monitoring System (IMS) of the Comprehensive Test Ban Treaty (CTBT), these blasts are monitored and studied by verification seismologists for the purpose of discriminating them from possible clandestine nuclear tests. One important component of these studies is the modeling of ground motions from these blasts with theoretical and empirical source models. The modeling exercises provide physical bases for regional discriminants and help to explain the observed signal characteristics. The program MineSeis has been developed to implement synthetic seismogram modeling of multi-shot blast sources through the linear superposition of single-shot sources. The single-shot sources used in the modeling combine a spherical explosion with a spall component. Mueller and Murphy's (1971) model is used as the spherical explosion model, and a modification of Anandakrishnan et al.'s (1997) spall model is developed for the spall component. The program is implemented with the MATLAB® Graphical User Interface (GUI), providing the user with easy, interactive control of the calculation.
1985-04-01
[Front matter, figure, and table-of-contents residue removed.] The report presents the detailed design of a linear resonance cryocooler (compressor and expander) for a Stirling-cycle application, including compressor motor force versus rotor axial position and the compressor P-V diagram. The limited test program demonstrated the application of linear motor drive technology to a Stirling-cycle cryocooler design.
Dibari, Filippo; Diop, El Hadji I; Collins, Steven; Seal, Andrew
2012-05-01
According to the United Nations (UN), 25 million children <5 y of age are currently affected by severe acute malnutrition and need to be treated using special nutritional products such as ready-to-use therapeutic foods (RUTF). Improved formulations are in demand, but a standardized approach for RUTF design has not yet been described. A method relying on linear programming (LP) analysis was developed and piloted in the design of a RUTF prototype for the treatment of wasting in East African children and adults. The LP objective function and decision variables consisted of the lowest formulation price and the weights of the chosen commodities (soy, sorghum, maize, oil, and sugar), respectively. The LP constraints were based on current UN recommendations for the macronutrient content of therapeutic food and included palatability, texture, and maximum food ingredient weight criteria. Nonlinear constraints for nutrient ratios were converted to linear equations to allow their use in LP. The formulation was considered accurate if laboratory results confirmed an energy density difference <10% and a protein or lipid difference <5 g per 100 g compared to the LP formulation estimates. With this test prototype, the differences were 7%, and 2.3 and -1.0 g per 100 g, respectively, and the formulation accuracy was considered good. LP can contribute to the design of ready-to-use foods (therapeutic, supplementary, or complementary), targeting different forms of malnutrition, while using commodities that are cheaper, regionally available, and meet local cultural preferences. However, as with all prototype feeding products for medical use, composition analysis, safety, acceptability, and clinical effectiveness trials must be conducted to validate the formulation.
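The least-cost formulation step described above can be sketched generically with scipy's LP solver. All prices, nutrient densities, and targets below are invented placeholders, not the study's data:

```python
import numpy as np
from scipy.optimize import linprog

ingredients = ["soy", "sorghum", "maize", "oil", "sugar"]
price = np.array([0.20, 0.05, 0.06, 0.15, 0.08])    # cost per gram (hypothetical)

# Nutrient content per gram of ingredient (hypothetical values):
# rows: energy (kcal), protein (g), lipid (g).
nutrients = np.array([
    [4.50, 3.40, 3.60, 9.00, 4.00],
    [0.36, 0.11, 0.09, 0.00, 0.00],
    [0.20, 0.03, 0.04, 1.00, 0.00],
])

# Illustrative targets per 100 g of product: >= 520 kcal, >= 13 g protein,
# >= 26 g lipid; ingredient weights must sum to exactly 100 g.
targets = np.array([520.0, 13.0, 26.0])
A_ub = -nutrients                 # linprog enforces A_ub @ x <= b_ub
b_ub = -targets
A_eq = np.ones((1, len(ingredients)))

res = linprog(price, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[100.0],
              bounds=[(0, 100)] * len(ingredients))
print(dict(zip(ingredients, res.x.round(2))), res.fun)
```

The real model adds palatability, texture, and maximum-weight criteria as further linear rows of the same form.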
Oz, U; Orhan, K; Abe, N
2011-01-01
Objective The aim of this study was to compare the linear and angular measurements made on two-dimensional (2D) conventional cephalometric images and three-dimensional (3D) cone beam CT (CBCT) generated cephalograms derived from a 3D volumetric rendering program. Methods Pre-treatment cephalometric digital radiographs of 11 patients and their corresponding CBCT images were randomly selected. The digital cephalometric radiographs were traced using Vista Dent OC (GAC International, Inc., Bohemia, NY) and by hand. CBCT and Maxilim® (Medicim, Sint-Niklass, Belgium) software were used to generate cephalograms from the CBCT data set that were then linked to the 3D hard-tissue surface representations. In total, 16 cephalometric landmarks were identified and 18 widely used measurements (11 linear and 7 angular) were performed by 2 independent observers. Intraobserver reliability was assessed by calculating intraclass correlation coefficients (ICCs); interobserver reliability was assessed with Student's t-test and analysis of variance (ANOVA). Mann–Whitney U-tests and Kruskal–Wallis H tests were also used to compare the three methods (P < 0.05). Results The results demonstrated no statistically significant difference between interobserver analyses for CBCT-generated cephalograms (P < 0.05), except for Gonion-Menton (Go-Me) and Condylion-Gnathion (Co-Gn). Intraobserver examinations showed low ICCs, which was an indication of poor reproducibility for Go-Me and Sella-Nasion (S-N) in CBCT-generated cephalograms and poor reproducibility for Articulare-Gonion (Ar-Go) in the 2D hand tracing method (P < 0.05). No statistical significance was found for Vista Dent OC measurements (P > 0.05). Conclusions Measurements from in vivo CBCT-generated cephalograms from Maxilim® software were found to be similar to conventional images. Thus, owing to the higher radiation exposure, CBCT examinations should only be used when the inherent 3D information could improve the outcome of treatment.
Borbulevych, Oleg Y; Plumley, Joshua A; Martin, Roger I; Merz, Kenneth M; Westerhoff, Lance M
2014-05-01
Macromolecular crystallographic refinement relies on sometimes dubious stereochemical restraints and rudimentary energy functionals to ensure the correct geometry of the model of the macromolecule and any covalently bound ligand(s). The ligand stereochemical restraint file (CIF) requires a priori understanding of the ligand geometry within the active site, and creation of the CIF is often an error-prone process owing to the great variety of potential ligand chemistry and structure. Stereochemical restraints have been replaced with more robust functionals through the integration of the linear-scaling, semiempirical quantum-mechanics (SE-QM) program DivCon with the PHENIX X-ray refinement engine. The PHENIX/DivCon package has been thoroughly validated on a population of 50 protein-ligand Protein Data Bank (PDB) structures with a range of resolutions and chemistry. The PDB structures used for the validation were originally refined utilizing various refinement packages and were published within the past five years. PHENIX/DivCon does not utilize CIF(s), link restraints and other parameters for refinement and hence it does not make as many a priori assumptions about the model. Across the entire population, the method results in reasonable ligand geometries and low ligand strains, even when the original refinement exhibited difficulties, indicating that PHENIX/DivCon is applicable to both single-structure and high-throughput crystallography.
Zhang, Liping; Zhang, Shiwen; Huang, Yajie; Cao, Meng; Huang, Yuanfang; Zhang, Hongyan
2016-01-01
Understanding abandoned mine land (AML) changes during land reclamation is crucial for reusing damaged land resources and formulating sound ecological restoration policies. This study combines the linear programming (LP) model and the CLUE-S model to simulate land-use dynamics in the Mentougou District (Beijing, China) from 2007 to 2020 under three reclamation scenarios, that is, the planning scenario based on the general land-use plan in the study area (scenario 1), maximal comprehensive benefits (scenario 2), and maximal ecosystem service value (scenario 3). Nine landscape-scale graph metrics were then selected to describe the landscape characteristics. The results show that the coupled model presented can simulate the dynamics of AML effectively and the spatially explicit transformations of AML were different. New cultivated land dominates in scenario 1, while construction land and forest land account for major percentages in scenarios 2 and 3, respectively. Scenario 3 has an advantage in most of the selected indices as the patches combined most closely. To conclude, reclaiming AML by transformation into more forest can reduce the variability and maintain the stability of the landscape ecological system in the study area. These findings contribute to better mapping AML dynamics and providing policy support for the management of AML. PMID:27023575
Koffi-Tessio, E.N.
1982-01-01
This study examines the interrelationship between the energy sector and the production of three agricultural crops (sugar, macadamia nut, and coffee) by small growers on the Big Island of Hawaii. Specifically, it attempts: to explore the patterns of energy use in agriculture; to determine the relative efficiency of fuel use by farm size among the three crops; and to investigate the impacts of higher energy costs on farmers' net revenues under three output-price and three energy-cost scenarios. To meet these objectives, a linear-programming model was developed. The objective function was to maximize net revenues subject to resource availability, production, marketing, and non-negativity constraints. The major conclusions emerging are: higher energy costs have not significantly impacted on farmers' net revenues, but do have a differential impact depending on the output price and resource endowments of each crop grower; farmers are faced with many constraints that do not permit factor substitution. For policy formulation, it was observed that policy makers are overly concerned with the problems facing growers at the macro level, without considering their constraints at the micro level. These micro factors play a dominant role in resource allocation. They must, therefore, be incorporated into a comprehensive energy and agricultural policy at the county and state level.
Christofilos, N.C.; Polk, I.J.
1959-02-17
Improvements in linear particle accelerators are described. A drift tube system for a linear ion accelerator reduces gap capacity between adjacent drift tube ends. This is accomplished by reducing the ratio of the diameter of the drift tube to the diameter of the resonant cavity. Concentration of magnetic field intensity at the longitudinal midpoint of the external surface of each drift tube is reduced by increasing the external drift tube diameter at the longitudinal center region.
NASA Technical Reports Server (NTRS)
Magnus, A. E.; Epton, M. A.
1981-01-01
Panel aerodynamics (PAN AIR) is a system of computer programs designed to analyze subsonic and supersonic inviscid flows about arbitrary configurations. A panel method is a program which solves a linear partial differential equation by approximating the configuration surface by a set of panels. An overview of the theory of potential flow in general and PAN AIR in particular is given along with detailed mathematical formulations. Fluid dynamics, the Navier-Stokes equation, and the theory of panel methods are also discussed.
ERIC Educational Resources Information Center
Li, Yuan H.; Yang, Yu N.; Tompkins, Leroy J.; Modarresi, Shahpar
2005-01-01
The statistical technique, "Zero-One Linear Programming," that has successfully been used to create multiple tests with similar characteristics (e.g., item difficulties, test information and test specifications) in the area of educational measurement, was deemed to be a suitable method for creating multiple sets of matched samples to be…
NASA Astrophysics Data System (ADS)
Trzaskuś-Żak, Beata; Żak, Andrzej
2013-09-01
This paper presents a method of binary linear programming for the selection of customers to whom a rebate will be offered. In return for the rebate, the customer undertakes payment of its debt to the mine by the deadline specified. In this way, the company is expected to achieve the required rate of collection of receivables. This, of course, will be at the expense of reduced revenue, which can be made up for by increased sales. Customer selection was done in order to keep the overall cost to the mine of the offered rebates as low as possible: minimise KcR = k1·x1 + k2·x2 + … + kn·xn, where: KcR - total cost of rebates granted by the mine; kj - cost of granting the rebate to a jth customer; xj - decision variables; j = 1, …, n - particular customers. The calculations were performed with the Solver tool (Excel programme). The cost of rebates was calculated from the formula: kj = ΔPj - Kk(j), where: ΔPj - difference in revenues from customer j; Kk(j) - cost of the so-called trade credit with regard to customer j. The cost of the trade credit was calculated from the formula: Kk(j) = Σs (r/100)·(ts/360)·Ns, where: r - interest rate on the bank loan, %; ts - collection time for the receivable in days (e.g. t1 = 30, t2 = 45, …, t12 = 360); Ns - value of the receivable at collection date ts. This paper presents the general model of linear binary programming for managing receivables by granting rebates. The model, in its general form, aims at minimising the objective function KcR = Σj kj·xj, with the restrictions Σj (Ntji + xj·Nnji) ≥ q·Ni for i = 1, …, m, and xj ∈ {0, 1}, where: Ntji - value of the timely payments of a customer j in an ith month of the period analysed; Nnji - value of the overdue receivables of a customer j in an ith month of the period analysed; q - the assumed minimum percentage of timely payments collected; Ni - summarised value of all receivables in the month i; m - the number of months in the period analysed. The general model was used for application to the example of the operating Mine X. Furthermore, the study has been extended through the presentation of a binary
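A minimal executable sketch of such a 0-1 selection model, with invented numbers but notation echoing the text (kj, Nt, Nn, q), might look as follows:

```python
# Choose which customers get a rebate so that total rebate cost is minimised
# while collected timely payments reach a target fraction q of all receivables.
# All figures are invented for illustration.
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

k = np.array([120.0, 80.0, 200.0, 60.0])     # cost of granting a rebate to customer j
Nt = np.array([500.0, 300.0, 900.0, 200.0])  # timely payments of customer j
Nn = np.array([400.0, 600.0, 100.0, 500.0])  # overdue receivables of customer j
q = 0.80                                     # required share of timely collections
N_total = (Nt + Nn).sum()

# Constraint: sum_j (Nt_j + x_j * Nn_j) >= q * N_total
#   <=>  sum_j Nn_j * x_j >= q * N_total - sum_j Nt_j
con = LinearConstraint(Nn, lb=q * N_total - Nt.sum(), ub=np.inf)

res = milp(c=k, constraints=[con],
           integrality=np.ones_like(k),      # x_j in {0, 1}
           bounds=Bounds(0, 1))
chosen = res.x.round().astype(int)
print(chosen, res.fun)
```

With these numbers the solver offers rebates to customers 2 and 4, the cheapest combination whose overdue receivables cover the collection target.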
Zaghian, Maryam; Cao, Wenhua; Liu, Wei; Kardar, Laleh; Randeniya, Sharmalee; Mohan, Radhe; Lim, Gino
2017-03-01
Robust optimization of intensity-modulated proton therapy (IMPT) takes uncertainties into account during spot weight optimization and leads to dose distributions that are resilient to uncertainties. Previous studies demonstrated benefits of linear programming (LP) for IMPT in terms of delivery efficiency by considerably reducing the number of spots required for the same quality of plans. However, a reduction in the number of spots may lead to loss of robustness. The purpose of this study was to evaluate and compare the performance in terms of plan quality and robustness of two robust optimization approaches using LP and nonlinear programming (NLP) models. The so-called "worst case dose" and "minmax" robust optimization approaches and the conventional planning target volume (PTV)-based optimization approach were applied to designing IMPT plans for five patients: two with prostate cancer, one with skull-base cancer, and two with head and neck cancer. For each approach, both LP and NLP models were used. Thus, for each case, six sets of IMPT plans were generated and assessed: LP-PTV-based, NLP-PTV-based, LP-worst case dose, NLP-worst case dose, LP-minmax, and NLP-minmax. The four robust optimization methods behaved differently from patient to patient, and no method emerged as superior to the others in terms of nominal plan quality and robustness against uncertainties. The plans generated using LP-based robust optimization were more robust regarding patient setup and range uncertainties than were those generated using NLP-based robust optimization for the prostate cancer patients. However, the robustness of plans generated using NLP-based methods was superior for the skull-base and head and neck cancer patients. Overall, LP-based methods were suitable for the less challenging cancer cases in which all uncertainty scenarios were able to satisfy tight dose constraints, while NLP performed better in more difficult cases in which most uncertainty scenarios were hard to meet.
Stott, A W; Lloyd, J; Humphry, R W; Gunn, G J
2003-05-30
We combined epidemiological and economic concepts and modelling techniques to integrate animal health into whole-farm business management. This allowed us to assess the relative contribution that disease prevention could make to whole-farm income and to the variability in farm income (risk). It also allowed us to assess disease losses in the context of a farm business rather than as a disease outbreak in isolation. A linear program ("MOTAD") establishes the combination of decision maker's activities that minimise risk for a given level of income within farm-business constraints. The MOTAD model was applied to farm-management decision making in Scottish cow-calf herds and was linked to an epidemiological model of bovine viral diarrhoea (BVD). When BVD was considered in isolation (i.e. without taking into account risk), the minimum expected total cost of BVD (sum of output losses plus expenditure on prevention) was similar whether the herd was susceptible to BVD or of unknown BVD-status at the outset. However, the expected total cost of BVD fell in response to increasing expenditure on prevention in 'susceptible' herds. This relationship was not apparent in herds of unknown BVD-status. As a consequence of this difference, 'susceptible' herds were better able to use investment in BVD biosecurity as a means to increase farm income at minimum risk than herds of unknown BVD-status. 'Susceptible' herds therefore were able to achieve high income targets with less-intensive production than herds of unknown BVD-status. This suggested that maintaining a cow-calf herd free of BVD contributes to farm income and risk management indirectly through its effect on the management of the whole farm. It follows that measurement of the economic impact of BVD requires a whole-farm perspective that includes a consideration of risk. Because farmers generally are considered to be risk averse, this means that the least-cost disease-control option might not always be the preferred option.
LINEAR - DERIVATION AND DEFINITION OF A LINEAR AIRCRAFT MODEL
NASA Technical Reports Server (NTRS)
Duke, E. L.
1994-01-01
The Derivation and Definition of a Linear Model program, LINEAR, provides the user with a powerful and flexible tool for the linearization of aircraft aerodynamic models. LINEAR was developed to provide a standard, documented, and verified tool to derive linear models for aircraft stability analysis and control law design. Linear system models define the aircraft system in the neighborhood of an analysis point and are determined by the linearization of the nonlinear equations defining vehicle dynamics and sensors. LINEAR numerically determines a linear system model using nonlinear equations of motion and a user supplied linear or nonlinear aerodynamic model. The nonlinear equations of motion used are six-degree-of-freedom equations with stationary atmosphere and flat, nonrotating earth assumptions. LINEAR is capable of extracting both linearized engine effects, such as net thrust, torque, and gyroscopic effects, and including these effects in the linear system model. The point at which this linear model is defined is determined either by completely specifying the state and control variables, or by specifying an analysis point on a trajectory and directing the program to determine the control variables and the remaining state variables. The system model determined by LINEAR consists of matrices for both the state and observation equations. The program has been designed to provide easy selection of state, control, and observation variables to be used in a particular model. Thus, the order of the system model is completely under user control. Further, the program provides the flexibility of allowing alternate formulations of both the state and observation equations. Data describing the aircraft and the test case are input to the program through a terminal or formatted data files. All data can be modified interactively from case to case. The aerodynamic model can be defined in two ways: a set of nondimensional stability and control derivatives for the flight point of
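The numerical linearization step at the heart of such a tool can be illustrated on a toy system: given nonlinear dynamics xdot = f(x, u), form the state-space matrices A = df/dx and B = df/du by central finite differences about a trim point. The pendulum-like dynamics below are a stand-in for illustration, not the program's aircraft equations of motion:

```python
import numpy as np

def f(x, u):
    # Toy dynamics: x = [angle, rate], u = [torque].
    return np.array([x[1], -9.81 * np.sin(x[0]) - 0.1 * x[1] + u[0]])

def linearize(f, x0, u0, eps=1e-6):
    # Central-difference Jacobians of f with respect to state and control.
    n, m = len(x0), len(u0)
    A = np.zeros((n, n))
    B = np.zeros((n, m))
    for i in range(n):
        d = np.zeros(n); d[i] = eps
        A[:, i] = (f(x0 + d, u0) - f(x0 - d, u0)) / (2 * eps)
    for i in range(m):
        d = np.zeros(m); d[i] = eps
        B[:, i] = (f(x0, u0 + d) - f(x0, u0 - d)) / (2 * eps)
    return A, B

# Linearize about the hanging equilibrium (zero state, zero torque).
A, B = linearize(f, np.zeros(2), np.zeros(1))
print(A, B)
```

Near the trim point the nonlinear system then behaves like xdot ≈ A·(x - x0) + B·(u - u0), which is the form used for stability analysis and control design.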
Colgate, S.A.
1958-05-27
An improvement is presented in linear accelerators for charged particles with respect to the stable focusing of the particle beam. The improvement consists of providing a radial electric field transverse to the accelerating electric fields and introducing the beam of particles into the field at an angle. The result of the foregoing is to achieve a beam which spirals about the axis of the acceleration path. The combination of the electric fields and angular motion of the particles cooperate to provide a stable and focused particle beam.
NASA Astrophysics Data System (ADS)
Czopek, Kazimierz; Trzaskuś-Żak, Beata
2013-06-01
The paper presents an example of a theoretical linear programming model in the management of mine receivables. To this end, an economic production model of linear programming was applied to optimising the revenue of the mine. The amount of product sold by the mine to individual customers was assumed as the decision variable, and the product price was the parameter of the objective function. As constraints, upper limits on receivables were assumed for each of the adopted receivable collection cycles. The sequence of collection cycles, and the receivable values assigned to them, were adopted according to the growing probability of overdue and uncollectible receivables. Two receivables-management optimisation cases were analysed, in which the objective function was to maximise the sales value (revenue) of the Mine. The first case studied in the model involves application of a discount to reduce the product price, in a mine whose production output is not being used to capacity. To improve cash flow, the mine offers its customers a reduced price and increased purchasing up to the mine's capacity in exchange for shortened receivable collection times. Fixed and variable-cost accounting is applied to determine the relevant price reduction. In the other case analysed, the mine sells as much as its current output allows, but despite that is still forced to reduce the price of its products. Application of a discount in this case (reducing the product price) inevitably involves shortened receivable collection times and reduced costs of financing trade credit.
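The revenue-maximising production model described here reduces to a small LP: decide how much to sell to each customer at a (possibly discounted) price, subject to capacity and per-customer limits. The sketch below uses invented prices, demands, and capacity:

```python
import numpy as np
from scipy.optimize import linprog

price = np.array([52.0, 50.0, 48.0])      # unit price per customer (hypothetical)
demand = np.array([400.0, 600.0, 800.0])  # upper sales limit per customer
capacity = 1500.0                         # total mine output

# linprog minimises, so negate prices to maximise revenue.
res = linprog(-price,
              A_ub=np.ones((1, 3)), b_ub=[capacity],
              bounds=list(zip(np.zeros(3), demand)))
revenue = -res.fun
print(res.x, revenue)
```

The solver fills demand for the highest-priced customers first until capacity binds, which is the behaviour the discount analysis in the abstract trades off against faster receivable collection.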
NASA Technical Reports Server (NTRS)
2006-01-01
[figure removed for brevity, see original site] Context image for PIA03667 Linear Clouds
These clouds are located near the edge of the south polar region. The cloud tops are the puffy white features in the bottom half of the image.
Image information: VIS instrument. Latitude -80.1N, Longitude 52.1E. 17 meter/pixel resolution.
Note: this THEMIS visual image has not been radiometrically or geometrically calibrated for this preliminary release. An empirical correction has been performed to remove instrumental effects. A linear shift has been applied in the cross-track and down-track direction to approximate spacecraft and planetary motion. Fully calibrated and geometrically projected images will be released through the Planetary Data System in accordance with Project policies at a later time.
NASA's Jet Propulsion Laboratory manages the 2001 Mars Odyssey mission for NASA's Office of Space Science, Washington, D.C. The Thermal Emission Imaging System (THEMIS) was developed by Arizona State University, Tempe, in collaboration with Raytheon Santa Barbara Remote Sensing. The THEMIS investigation is led by Dr. Philip Christensen at Arizona State University. Lockheed Martin Astronautics, Denver, is the prime contractor for the Odyssey project, and developed and built the orbiter. Mission operations are conducted jointly from Lockheed Martin and from JPL, a division of the California Institute of Technology in Pasadena.
NASA Astrophysics Data System (ADS)
Aragón, J. L.; Vázquez Polo, G.; Gómez, A.
A computational algorithm for the generation of quasiperiodic tiles based on the cut-and-project method is presented. The algorithm is capable of projecting any type of lattice embedded in any Euclidean space onto any subspace, making it possible to generate quasiperiodic tiles with any desired symmetry. The simplex method of linear programming and the Moore-Penrose generalized inverse are used to construct the cut (strip) in the higher-dimensional space which is to be projected.
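The cut-and-project construction can be demonstrated in its simplest setting: projecting Z² onto a line of golden-ratio slope yields the quasiperiodic Fibonacci chain of long (L) and short (S) intervals. This is a generic illustration of the method, not the paper's algorithm:

```python
import numpy as np

phi = (1 + np.sqrt(5)) / 2
norm = np.sqrt(phi**2 + 1)
e_par = np.array([phi, 1.0]) / norm     # physical (parallel) direction
e_perp = np.array([-1.0, phi]) / norm   # internal (orthogonal) direction

# Acceptance window: the orthogonal projection of the unit square.
corners = [np.array(c) for c in ((0, 0), (1, 0), (0, 1), (1, 1))]
w = [e_perp @ c for c in corners]
lo, hi = min(w), max(w)

# Keep lattice points whose orthogonal component falls in the window (the
# "strip"), then order them along the physical direction.
accepted = sorted(e_par @ np.array((i, j))
                  for i in range(-20, 21) for j in range(-20, 21)
                  if lo <= e_perp @ np.array((i, j)) < hi)

gaps = np.diff(accepted)          # two interval lengths appear, with ratio phi
kinds = "".join("L" if g > gaps.min() * 1.2 else "S" for g in gaps)
print(kinds[:20])
```

The resulting L/S word is non-periodic: no two short intervals are ever adjacent, the signature of the Fibonacci quasicrystal.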
Can linear superiorization be useful for linear optimization problems?
NASA Astrophysics Data System (ADS)
Censor, Yair
2017-04-01
Linear superiorization (LinSup) considers linear programming problems but instead of attempting to solve them with linear optimization methods it employs perturbation resilient feasibility-seeking algorithms and steers them toward reduced (not necessarily minimal) target function values. The two questions that we set out to explore experimentally are: (i) does LinSup provide a feasible point whose linear target function value is lower than that obtained by running the same feasibility-seeking algorithm without superiorization under identical conditions? (ii) How does LinSup fare in comparison with the Simplex method for solving linear programming problems? Based on our computational experiments presented here, the answers to these two questions are: ‘yes’ and ‘very well’, respectively.
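A toy version of the LinSup loop pairs a plain feasibility-seeking method (sequential projections onto the half-spaces of Ax <= b) with shrinking perturbation steps along -c that steer the iterates toward lower target values. The problem data below are made up for the demo:

```python
import numpy as np

A = np.array([[1.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])  # x+y <= 4, x >= 0, y >= 0
b = np.array([4.0, 0.0, 0.0])
c = np.array([1.0, 2.0])                               # linear target to reduce

def project_halfspaces(x):
    # One sweep of sequential orthogonal projections onto violated half-spaces.
    for a_i, b_i in zip(A, b):
        viol = a_i @ x - b_i
        if viol > 0:
            x = x - viol * a_i / (a_i @ a_i)
    return x

x = np.array([3.0, 3.0])
for k in range(200):
    x = x - 0.9**k * c / np.linalg.norm(c)  # superiorization perturbation
    x = project_halfspaces(x)               # feasibility-seeking step

feasible = np.all(A @ x <= b + 1e-9)
print(x, c @ x, feasible)
```

Because the perturbations are summable, they do not destroy the convergence of the feasibility-seeking algorithm; they merely bias it toward a feasible point with a reduced (not necessarily minimal) value of c @ x.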
NASA Astrophysics Data System (ADS)
Metternicht, Graciela; Blanco, Paula; del Valle, Hector; Laterra, Pedro; Hardtke, Leonardo; Bouza, Pablo
2015-04-01
Wildlife is part of the Patagonian rangelands sheep farming environment, with the potential of providing extra revenue to livestock owners. As sheep farming became less profitable, farmers and ranchers could focus on sustainable wildlife harvesting. It has been argued that sustainable wildlife harvesting is ecologically one of the most rational forms of land use because of its potential to provide multiple products of high value, while reducing pressure on ecosystems. The guanaco (Lama guanicoe) is the most conspicuous wild ungulate of Patagonia. Guanaco fibre, meat, pelts and hides are economically valuable and have the potential to be used within the present Patagonian context of production systems. Guanaco populations in South America, including Patagonia, have experienced a sustained decline. Causes for this decline are related to habitat alteration, competition for forage with sheep, and lack of reasonable management plans to develop livelihoods for ranchers. In this study we propose an approach to explicitly determine optimal stocking rates based on trade-offs between guanaco density and livestock grazing intensity on rangelands. The focus of our research is on finding optimal sheep stocking rates at paddock level, to ensure the highest production outputs while: a) meeting requirements of sustainable conservation of guanacos over their minimum viable population; b) maximizing soil carbon sequestration, and c) minimizing soil erosion. In this way, determination of optimal stocking rate in rangelands becomes a multi-objective optimization problem that can be addressed using a Fuzzy Multi-Objective Linear Programming (MOLP) approach. Basically, this approach converts multi-objective problems into single-objective optimizations, by introducing a set of objective weights. Objectives are represented using fuzzy set theory and fuzzy memberships, enabling each objective function to adopt a value between 0 and 1. Each objective function indicates the satisfaction of
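The max-min scalarization that fuzzy MOLP relies on can be written as a single LP: each objective gets a linear membership in [0, 1] between its worst and best attainable values, and one LP maximizes the smallest membership. The one-variable example below is purely illustrative, not the study's model:

```python
import numpy as np
from scipy.optimize import linprog

# Decision variable: sheep stocking rate s in [0, 1] (fraction of a maximum).
# Two conflicting objectives with linear memberships already scaled to [0, 1]:
#   production membership    mu1(s) = s
#   conservation membership  mu2(s) = 1 - s
# Max-min LP over variables z = [s, lam]: maximize lam s.t. mu_i(s) >= lam.
c = [0.0, -1.0]                      # linprog minimizes, so use -lam
A_ub = [[-1.0, 1.0],                 # lam - s <= 0   (mu1 >= lam)
        [1.0, 1.0]]                  # s + lam <= 1   (mu2 >= lam)
res = linprog(c, A_ub=A_ub, b_ub=[0.0, 1.0], bounds=[(0, 1), (0, 1)])
s_opt, lam_opt = res.x
print(s_opt, lam_opt)
```

Weighting the objectives differently simply rescales the membership rows, shifting the compromise stocking rate away from the balanced solution.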
Miller, Naomi J.; Perrin, Tess E.; Royer, Michael P.; Wilkerson, Andrea M.; Beeson, Tracy A.
2014-05-20
Although lensed troffers are numerous, there are many other types of optical systems as well. This report looked at the performance of three linear (T8) LED lamps chosen primarily based on their luminous intensity distributions (narrow, medium, and wide beam angles) as well as a benchmark fluorescent lamp in five different troffer types. Also included are the results of a subjective evaluation. Results show that linear (T8) LED lamps can improve luminaire efficiency in K12-lensed and parabolic-louvered troffers, effect little change in volumetric and high-performance diffuse-lensed type luminaires, but reduce efficiency in recessed indirect troffers. These changes can be accompanied by visual appearance and visual comfort consequences, especially when LED lamps with clear lenses and narrow distributions are installed. Linear (T8) LED lamps with diffuse apertures exhibited wider beam angles, performed more similarly to fluorescent lamps, and received better ratings from observers. Guidance is provided on which luminaires are the best candidates for retrofitting with linear (T8) LED lamps.
A Structural Connection between Linear and 0-1 Integer Linear Formulations
ERIC Educational Resources Information Center
Adlakha, V.; Kowalski, K.
2007-01-01
The connection between linear and 0-1 integer linear formulations has attracted the attention of many researchers. The main reason triggering this interest has been the availability of efficient computer programs for solving pure linear problems, including the transportation problem. Also, the optimality of linear problems is easily verifiable…
NASA Technical Reports Server (NTRS)
Magnus, Alfred E.; Epton, Michael A.
1981-01-01
An outline of the derivation of the differential equation governing linear subsonic and supersonic potential flow is given. The use of Green's Theorem to obtain an integral equation over the boundary surface is discussed. The engineering techniques incorporated in the PAN AIR (Panel Aerodynamics) program (a discretization method which solves the integral equation for arbitrary first order boundary conditions) are then discussed in detail. Items discussed include the construction of the compressibility transformations, splining techniques, imposition of the boundary conditions, influence coefficient computation (including the concept of the finite part of an integral), computation of pressure coefficients, and computation of forces and moments.
Positrons for linear colliders
Ecklund, S.
1987-11-01
The requirements of a positron source for a linear collider are briefly reviewed, followed by methods of positron production and production of photons by electromagnetic cascade showers. Cross sections for the electromagnetic cascade shower processes of positron-electron pair production and Compton scattering are compared. A program used for Monte Carlo analysis of electromagnetic cascades is briefly discussed, and positron distributions obtained from several runs of the program are discussed. Photons from synchrotron radiation and from channeling are also mentioned briefly, as well as positron collection, transverse focusing techniques, and longitudinal capture. Computer ray tracing is then briefly discussed, followed by space-charge effects and thermal heating and stress due to showers. (LEW)
Quasi-linear Dialectica Extraction
NASA Astrophysics Data System (ADS)
Trifonov, Trifon
Gödel's functional interpretation [1] can be used to extract programs from non-constructive proofs. Though correct by construction, the obtained terms can be computationally inefficient. One reason for slow execution is the re-evaluation of equal subterms due to the use of substitution during the extraction process. In the present paper we define a variant of the interpretation, which avoids subterm repetition and achieves an almost linear bound on the size of extracted programs.
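A toy size count (not the paper's extraction algorithm) illustrates why substitution causes the blow-up the authors avoid: substituting a term t for both occurrences of x in (x + x), n times over, yields an exponentially large term tree, while a let-binding that shares the subterm grows linearly:

```python
# Illustrative only: repeated substitution of t for both occurrences of
# x in (x + x) versus a let-binding that shares the common subterm.

def tree_size(n):
    # Plain substitution: the whole term is copied into each occurrence.
    size = 1
    for _ in range(n):
        size = 2 * size + 1          # new "+" node over two full copies
    return size                      # = 2**(n + 1) - 1

def dag_size(n):
    # Sharing: both occurrences point at one shared subterm, so each
    # substitution step adds only a single "+" node.
    return n + 1

print(tree_size(20), dag_size(20))   # exponential vs. linear growth
```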
Ozsoy, Oyku Eren; Can, Tolga
2013-01-01
Inference of topology of signaling networks from perturbation experiments is a challenging problem. Recently, the inference problem has been formulated as a reference network editing problem, and it has been shown that finding the minimum number of edit operations on a reference network to comply with perturbation experiments is an NP-complete problem. In this paper, we propose an integer linear optimization (ILP) model for reconstruction of signaling networks from RNAi data and a reference network. The ILP model guarantees the optimal solution; however, it is practical only for small signaling networks of 10-15 genes due to computational complexity. To scale to large signaling networks, we propose a divide-and-conquer heuristic, in which a given reference network is divided into smaller subnetworks that are solved separately and the solutions are merged together to form the solution for the large network. We validate our proposed approach on real and synthetic data sets, and comparison with the state of the art shows that our proposed approach is able to scale better for large networks while attaining similar or better biological accuracy.
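The minimum-edit objective can be made concrete, well below the scale where the ILP or the heuristic is needed, by brute force on a toy reference network. The edge names and "perturbation" observations here are hypothetical, and reachability stands in for the paper's compliance conditions:

```python
from itertools import combinations, permutations

def reachable(edges, s, t):
    # Simple DFS reachability over a set of directed edges.
    seen, stack = {s}, [s]
    while stack:
        u = stack.pop()
        for a, b in edges:
            if a == u and b not in seen:
                seen.add(b)
                stack.append(b)
    return t in seen

def min_edits(reference, constraints, nodes):
    # Smallest set of edge insertions/deletions (symmetric difference)
    # making every (source, target, should_reach) observation hold.
    universe = list(permutations(nodes, 2))
    for k in range(len(universe) + 1):
        for flips in combinations(universe, k):
            edited = set(reference) ^ set(flips)
            if all(reachable(edited, s, t) == want
                   for s, t, want in constraints):
                return k, flips

ref = {("A", "B"), ("B", "C"), ("C", "D")}
obs = [("A", "C", True), ("A", "D", False)]   # hypothetical observations
k, flips = min_edits(ref, obs, "ABCD")
print(k, flips)   # a single deletion of C -> D suffices
```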
NASA Technical Reports Server (NTRS)
Baruah, P. K.; Bussoletti, J. E.; Chiang, D. T.; Massena, W. A.; Nelson, F. D.; Furdon, D. J.; Tsurusaki, K.
1981-01-01
The Maintenance Document is a guide to the PAN AIR software system, a system which computes the subsonic or supersonic linear potential flow about a body of nearly arbitrary shape, using a higher order panel method. The document describes the over-all system and each program module of the system. Sufficient detail is given for program maintenance, updating and modification. It is assumed that the reader is familiar with programming and CDC (Control Data Corporation) computer systems. The PAN AIR system was written in FORTRAN 4 language except for a few COMPASS language subroutines which exist in the PAN AIR library. Structured programming techniques were used to provide code documentation and maintainability. The operating systems accommodated are NOS 1.2, NOS/BE and SCOPE 2.1.3 on the CDC 6600, 7600 and Cyber 175 computing systems. The system is comprised of a data management system, a program library, an execution control module and nine separate FORTRAN technical modules. Each module calculates part of the posed PAN AIR problem. The data base manager is used to communicate between modules and within modules. The technical modules must be run in a prescribed fashion for each PAN AIR problem. In order to ease the problem of supplying the many JCL cards required to execute the modules, a separate module called MEC (Module Execution Control) was created to automatically supply most of the JCL cards. In addition to the MEC generated JCL, there is an additional set of user supplied JCL cards to initiate the JCL sequence stored on the system.
NASA Technical Reports Server (NTRS)
Purdon, David J.; Baruah, Pranab K.; Bussoletti, John E.; Epton, Michael A.; Massena, William A.; Nelson, Franklin D.; Tsurusaki, Kiyoharu
1990-01-01
The Maintenance Document Version 3.0 is a guide to the PAN AIR software system, a system which computes the subsonic or supersonic linear potential flow about a body of nearly arbitrary shape, using a higher order panel method. The document describes the overall system and each program module of the system. Sufficient detail is given for program maintenance, updating, and modification. It is assumed that the reader is familiar with programming and CRAY computer systems. The PAN AIR system was written in FORTRAN 4 language except for a few CAL language subroutines which exist in the PAN AIR library. Structured programming techniques were used to provide code documentation and maintainability. The operating systems accommodated are COS 1.11, COS 1.12, COS 1.13, and COS 1.14 on the CRAY 1S, 1M, and X-MP computing systems. The system is comprised of a data base management system, a program library, an execution control module, and nine separate FORTRAN technical modules. Each module calculates part of the posed PAN AIR problem. The data base manager is used to communicate between modules and within modules. The technical modules must be run in a prescribed fashion for each PAN AIR problem. In order to ease the problem of supplying the many JCL cards required to execute the modules, a set of CRAY procedures (PAPROCS) was created to automatically supply most of the JCL cards. Most of this document has not changed for Version 3.0. It now, however, strictly applies only to PAN AIR version 3.0. The major changes are: (1) additional sections covering the new FDP module (which calculates streamlines and offbody points); (2) a complete rewrite of the section on the MAG module; and (3) strict applicability to CRAY computing systems.
Granato, Gregory E.
2006-01-01
The Kendall-Theil Robust Line software (KTRLine-version 1.0) is a Visual Basic program that may be used with the Microsoft Windows operating system to calculate parameters for robust, nonparametric estimates of linear-regression coefficients between two continuous variables. The KTRLine software was developed by the U.S. Geological Survey, in cooperation with the Federal Highway Administration, for use in stochastic data modeling with local, regional, and national hydrologic data sets to develop planning-level estimates of potential effects of highway runoff on the quality of receiving waters. The Kendall-Theil robust line was selected because this robust nonparametric method is resistant to the effects of outliers and nonnormality in residuals that commonly characterize hydrologic data sets. The slope of the line is calculated as the median of all possible pairwise slopes between points. The intercept is calculated so that the line will run through the median of input data. A single-line model or a multisegment model may be specified. The program was developed to provide regression equations with an error component for stochastic data generation because nonparametric multisegment regression tools are not available with the software that is commonly used to develop regression models. The Kendall-Theil robust line is a median line and, therefore, may underestimate total mass, volume, or loads unless the error component or a bias correction factor is incorporated into the estimate. Regression statistics such as the median error, the median absolute deviation, the prediction error sum of squares, the root mean square error, the confidence interval for the slope, and the bias correction factor for median estimates are calculated by use of nonparametric methods. These statistics, however, may be used to formulate estimates of mass, volume, or total loads. The program is used to read a two- or three-column tab-delimited input file with variable names in the first row and
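The slope and intercept rules quoted above are short to state in code. A minimal sketch (not the KTRLine program itself, which adds multisegment models, bias correction, and error statistics):

```python
from itertools import combinations
from statistics import median

def kendall_theil_line(x, y):
    # Slope: median of all possible pairwise slopes (equal-x pairs skipped).
    slopes = [(y[j] - y[i]) / (x[j] - x[i])
              for i, j in combinations(range(len(x)), 2) if x[i] != x[j]]
    slope = median(slopes)
    # Intercept: chosen so the line runs through the medians of the data.
    return median(y) - slope * median(x), slope

# A single extreme outlier barely moves the fit:
b0, b1 = kendall_theil_line([1, 2, 3, 4, 5], [2, 4, 6, 8, 100])
print(b0, b1)   # close to the underlying line y = 2x despite (5, 100)
```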
The linear separability problem: some testing methods.
Elizondo, D
2006-03-01
The notion of linear separability is used widely in machine learning research. Learning algorithms that use this concept include neural networks (single layer perceptron and recursive deterministic perceptron) and kernel machines (support vector machines). This paper presents an overview of several of the methods for testing linear separability between two classes. The methods are divided into four groups: those based on linear programming, those based on computational geometry, one based on neural networks, and one based on quadratic programming. The Fisher linear discriminant method is also presented. A section on the quantification of the complexity of classification problems is included.
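The linear-programming group of tests reduces to a feasibility check: two classes are linearly separable iff some hyperplane satisfies y_i(w·x_i + b) ≥ 1 for all points. A sketch of one such formulation using SciPy (an assumption; the paper surveys several LP variants, and this is only one of them):

```python
import numpy as np
from scipy.optimize import linprog

def linearly_separable(X, y):
    # Feasibility LP: does some (w, b) satisfy y_i * (w . x_i + b) >= 1 ?
    X = np.asarray(X, dtype=float)
    y = np.asarray(y, dtype=float)
    n, d = X.shape
    A_ub = -y[:, None] * np.hstack([X, np.ones((n, 1))])  # -y_i * (x_i, 1)
    res = linprog(c=np.zeros(d + 1), A_ub=A_ub, b_ub=-np.ones(n),
                  bounds=[(None, None)] * (d + 1))
    return res.status == 0           # 0 = a feasible (optimal) point found

print(linearly_separable([[0, 0], [0, 1], [1, 0], [1, 1]],
                         [-1, -1, -1, 1]))   # AND labels
print(linearly_separable([[0, 0], [0, 1], [1, 0], [1, 1]],
                         [-1, 1, 1, -1]))    # XOR labels
```

The zero objective makes this a pure feasibility problem; with a margin of 1 on the right-hand side, any strictly separable finite point set admits a feasible scaling of (w, b).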
Wu, Liejun; Duan, Xiaojuan; Liu, Chuanyu; Zhang, Guangxiang; Li, Qing X
2016-07-01
The current theory of programmed-temperature gas chromatography considers that solutes are focused completely by the stationary phase at the column head, and does not explicitly recognize the different effects of initial temperature (To) and heating rate (rT) on the retention time or temperature of a homologue series. In the present study, n-alkanes, 1-alkenes, 1-alkyl alcohols, alkyl benzenes, and fatty acid methyl ester standards were used as model chemicals and were separated on two nonpolar columns, one moderately polar column and one polar column. Effects of To and rT on the retention of non-stationary-phase-focusing solutes can be explicitly described with isothermal and cubic-equation models, respectively. When the solutes were in the stationary-phase focusing state, the single-retention behavior of solutes was observed: it is simple, depends on rT only, and is well described by the cubic-equation model, which was visualized through four sequential slope analyses. These observed dual- and single-retention behaviors of solutes were validated by various experimental data, physical properties, and computational simulation.
Gebrehiwot, Tesfay Gebregzabher; San Sebastian, Miguel; Edin, Kerstin; Goicolea, Isabel
2015-01-01
Background In 2003, the Ethiopian Ministry of Health established the Health Extension Program (HEP), with the goal of improving access to health care and health promotion activities in rural areas of the country. This paper aims to assess the association of the HEP with improved utilization of maternal health services in Northern Ethiopia using institution-based retrospective data. Methods Average quarterly total attendances for antenatal care (ANC), delivery care (DC) and post-natal care (PNC) at health posts and health care centres were studied from 2002 to 2012. Regression analysis was applied to two models to assess whether trends were statistically significant. One model was used to estimate the level and trend changes associated with the immediate period of intervention, while changes related to the post-intervention period were estimated by the other. Results The total number of consultations for ANC, DC and PNC increased constantly, particularly after the late-intervention period. Increases were higher for ANC and PNC at health post level and for DC at health centres. A positive statistically significant upward trend was found for DC and PNC in all facilities (p<0.01). The positive trend was also present in ANC at health centres (p = 0.04), but not at health posts. Conclusion Our findings revealed an increase in the use of antenatal, delivery and post-natal care after the introduction of the HEP. Other factors that we could not control for may, however, explain part of this increase. The figures for DC and PNC are nevertheless low, and more needs to be done to increase both access to the health care system and the demand for these services by the population. Strengthening of the health information system in the region also needs to be prioritized. PMID:26218074
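The two-model trend analysis is essentially segmented (interrupted time-series) regression: a level-change term and a trend-change term added at the intervention point. A minimal sketch with invented quarterly numbers, not the study's data:

```python
import numpy as np

# Segmented regression: intercept, pre-intervention trend, level change
# at the intervention quarter t0, and trend change afterwards.
t = np.arange(20, dtype=float)
t0 = 10.0
post = (t >= t0).astype(float)
# Noiseless synthetic series with known coefficients (illustrative only):
y = 5.0 + 1.0 * t + 3.0 * post + 0.5 * (t - t0) * post
X = np.column_stack([np.ones_like(t), t, post, (t - t0) * post])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta)   # recovers base level, trend, level change, trend change
```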
Hoshiba, Hiroshi; Setoguchi, Kouji; Watanabe, Toshio; Kinoshita, Akihiro; Mizoshita, Kazunori; Sugimoto, Yoshikazu; Takasuga, Akiko
2013-07-01
The c.1326T>G single nucleotide polymorphism (SNP) in the NCAPG gene, which leads to an amino acid change of Ile442 to Met442, was previously identified as a candidate causative variation for a bovine carcass weight quantitative trait loci (QTL) on chromosome 6, which was associated with linear skeletal measurement gains and daily body weight gain at puberty. Recently, we identified the stature quantitative trait nucleotides (QTNs) in the PLAG1-CHCHD7 intergenic region as the causative variations for another carcass weight QTL on chromosome 14. This study aimed to compare the effects of the two QTL on growth and carcass traits using 768 Japanese Black steers from a progeny testing program and to determine whether a genetic interaction was present between them. The FJX_250879 SNP representing the stature QTL was associated with linear skeletal measurements and average daily body weight gain at early and late periods during adolescence. A genetic interaction between FJX_250879 and NCAPG c.1326T>G was detected only for body and rump lengths. Both were associated with increased carcass weight and Longissimus muscle area, and NCAPG c.1326T>G was also associated with reduced subcutaneous fat thickness and increased carcass yield estimate. These results will provide useful information to improve carcass weight in Japanese Black cattle.
Design And Analysis Of Linear Control Systems
NASA Technical Reports Server (NTRS)
Jamison, John W.
1991-01-01
Package of five computer programs developed to assist in design and analysis of linear control systems by use of root-locus and frequency-response methods. Package written in FORTRAN (BODE, TPEAK) and BASIC (LOCUS, KTUNE, and POLYROOT).
Linear induction accelerator parameter options
Birx, D.L.; Caporaso, G.J.; Reginato, L.L.
1986-04-21
The principal undertaking of the Beam Research Program over the past decade has been the investigation of propagating intense self-focused beams. Recently, the major activity of the program has shifted toward the investigation of converting high quality electron beams directly to laser radiation. During the early years of the program, accelerator development was directed toward the generation of very high current (>10 kA), high energy beams (>50 MeV). In its new mission, the program has shifted the emphasis toward the production of lower current beams (>3 kA) with high brightness (>10^6 A/(rad-cm)^2) at very high average power levels. In efforts to produce these intense beams, the state of the art of linear induction accelerators (LIA) has been advanced to the point of satisfying not only the current requirements but also future national needs.
Linear Programming Problems for Generalized Uncertainty
ERIC Educational Resources Information Center
Thipwiwatpotjana, Phantipa
2010-01-01
Uncertainty occurs when there is more than one realization that can represent a piece of information. This dissertation concerns only discrete realizations of an uncertainty. Different interpretations of an uncertainty and their relationships are addressed when the uncertainty is not a probability of each realization. A well known model that can handle…
Dynamic Pricing Criteria in Linear Programming
1988-07-01
…zero by the basic variable ratio test in (2.15); j_r indexes the corresponding variable. Define X_j = B^{-1} A_j as the representation of the incoming column … exclude any columns for which θ_j = 0. Unfortunately, this would require the representation X_j of the columns A_j in terms of the current basis. … Figure 4-11 (ctd.). Results for (3.6) on Scaled Problems.
FORTRAN Based Linear Programming for Microcomputers.
1982-12-01
Scheduling Outpatient Services: A Linear Programming Approach
1990-07-05
…reimbursement. Additionally, several high-volume operating room (OR) procedures (e.g., tubal ligations) were selected which accounted for a … To account for the costs associated with the inpatient portion of a group package (e.g., tubal ligation), the average length of …
Primal Barrier Methods for Linear Programming
1989-06-01
…associated researchers and students were very helpful. I would like to thank Prof. George B. Dantzig for serving on my doctoral committee, for two most … solution with x_a = 0 can be found during Phase I, indicates an empty feasible region. When w > 0, we say that ALP has a composite objective function. This … choice if it were not for the fact that it has clear performance disadvantages compared to using the composite objective function. Generally speaking …
The Parallel of Decomposition of Linear Programs
1989-11-01
Optimized reservoir management: Mixed linear programming
Currie, J.C.; Novotnak, J.F.; Aasboee, B.T.; Kennedy, C.J.
1997-12-01
The Ekofisk field and surrounding Phillips Norway Group fields, also referred to as the greater Ekofisk area fields, are in the southern part of the Norwegian sector of the North Sea. Oil and gas separation and transportation facilities are centrally located on the Ekofisk complex at Ekofisk field. The Ekofisk 2 redevelopment project is designed to replace the oil-/gas-production and -processing capabilities of the existing Ekofisk complex. This requirement grew out of the high operating and maintenance expenses associated with the existing facilities. Other factors of significance were the effects of seafloor subsidence and changing safety regulations. A significant aspect of the Ekofisk field has been reservoir compaction that has resulted in seabed subsidence over the areal extent of the reservoir. After 25 years of production, the cumulative subsidence in the center of the field is more than 21 ft. The redevelopment project addresses the economic, maintenance, and safety factors and maintains the economic viability of Ekofisk and surrounding fields.
Measuring Astronomical Distances with Linear Programming
ERIC Educational Resources Information Center
Narain, Akshar
2015-01-01
A few years ago it was suggested that the distance to celestial bodies could be computed by tracking their position over about 24 hours and then solving a regression problem. One only needed to use inexpensive telescopes, cameras, and astrometry tools, and the experiment could be done from one's backyard. However, it is not obvious to an amateur…
NASA Astrophysics Data System (ADS)
Young, T.
This book is intended to be used as a textbook in a one-semester course at a variety of levels. Because of its self-study features, it may also be used by practicing electronic engineers as a formal and thorough introduction to the subject. The distinction between linear and digital integrated circuits is discussed, taking into account digital and linear signal characteristics, linear and digital integrated circuit characteristics, the definitions for linear and digital circuits, applications of digital and linear integrated circuits, aspects of fabrication, packaging, and classification and numbering. Operational amplifiers are considered along with linear integrated circuit (LIC) power requirements and power supplies, voltage and current regulators, linear amplifiers, linear integrated circuit oscillators, wave-shaping circuits, active filters, D/A and A/D converters, demodulators, comparators, instrument amplifiers, current difference amplifiers, analog circuits and devices, and aspects of troubleshooting.
… equipment? How is safety ensured? What is this equipment used for? A linear accelerator (LINAC) is the … Therapy (SBRT). How does the equipment work? The linear accelerator uses microwave technology (similar …
NASA Technical Reports Server (NTRS)
Sidwell, Kenneth W.; Baruah, Pranab K.; Bussoletti, John E.; Medan, Richard T.; Conner, R. S.; Purdon, David J.
1990-01-01
A comprehensive description of user problem definition for the PAN AIR (Panel Aerodynamics) system is given. PAN AIR solves the 3-D linear integral equations of subsonic and supersonic flow. Influence coefficient methods are used which employ source and doublet panels as boundary surfaces. Both analysis and design boundary conditions can be used. This User's Manual describes the information needed to use the PAN AIR system. The structure and organization of PAN AIR are described, including the job control and module execution control languages for execution of the program system. The engineering input data are described, including the mathematical and physical modeling requirements. Version 3.0 strictly applies only to PAN AIR version 3.0. The major revisions include: (1) inputs and guidelines for the new FDP module (which calculates streamlines and offbody points); (2) nine new class 1 and class 2 boundary conditions to cover commonly used modeling practices, in particular the vorticity-matching Kutta condition; (3) use of the CRAY Solid-state Storage Device (SSD); and (4) incorporation of errata and typo corrections together with additional explanations and guidelines.
NASA Technical Reports Server (NTRS)
Epton, Michael A.; Magnus, Alfred E.
1990-01-01
An outline of the derivation of the differential equation governing linear subsonic and supersonic potential flow is given. The use of Green's Theorem to obtain an integral equation over the boundary surface is discussed. The engineering techniques incorporated in the Panel Aerodynamics (PAN AIR) program (a discretization method which solves the integral equation for arbitrary first order boundary conditions) are then discussed in detail. Items discussed include the construction of the compressibility transformation, splining techniques, imposition of the boundary conditions, influence coefficient computation (including the concept of the finite part of an integral), computation of pressure coefficients, and computation of forces and moments. Principal revisions to version 3.0 are the following: (1) appendices H and K more fully describe the Aerodynamic Influence Coefficient (AIC) construction; (2) appendix L now provides a complete description of the AIC solution process; (3) appendix P is new and discusses the theory for the new FDP module (which calculates streamlines and offbody points); and (4) numerous small corrections and revisions reflecting the MAG module rewrite.
Runtime Analysis of Linear Temporal Logic Specifications
NASA Technical Reports Server (NTRS)
Giannakopoulou, Dimitra; Havelund, Klaus
2001-01-01
This report presents an approach to checking a running program against its Linear Temporal Logic (LTL) specifications. LTL is a widely used logic for expressing properties of programs viewed as sets of executions. Our approach consists of translating LTL formulae to finite-state automata, which are used as observers of the program behavior. The translation algorithm we propose modifies standard LTL to B chi automata conversion techniques to generate automata that check finite program traces. The algorithm has been implemented in a tool, which has been integrated with the generic JPaX framework for runtime analysis of Java programs.
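For a concrete sense of an observer automaton over finite traces, here is a hand-written two-state monitor for the response property G(p -> F q). The tool described above generates such automata from arbitrary LTL formulae; this sketch hard-codes a single property and is only illustrative:

```python
def monitor_response(trace, p="p", q="q"):
    # Two-state observer for G(p -> F q) over a finite trace:
    # state 0 = no pending obligation, state 1 = a p awaits a later q.
    state = 0
    for event in trace:          # each event is a set of propositions
        if q in event:
            state = 0            # any open obligation is discharged
        elif p in event:
            state = 1            # a new obligation is opened
    return state == 0            # accept iff nothing is left pending

print(monitor_response([{"p"}, {"x"}, {"q"}]))   # p is answered by q
print(monitor_response([{"q"}, {"p"}]))          # the final p never is
```

The finite-trace twist mentioned in the abstract shows up in the acceptance rule: at the end of the trace, an undischarged obligation counts as a violation rather than remaining open.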
Wiedemann, H.
1981-11-01
Since no linear colliders have been built yet, it is difficult to know at what energy the linear cost scaling of linear colliders drops below the quadratic scaling of storage rings. There is, however, no doubt that a linear collider facility for a center-of-mass energy above, say, 500 GeV is significantly cheaper than an equivalent storage ring. In order to make the linear collider principle feasible at very high energies a number of problems have to be solved. These problems are of two kinds: those related to the feasibility of the principle itself, and those associated with minimizing the cost of constructing and operating such a facility. This lecture series describes the problems and possible solutions. Since the real test of a principle requires the construction of a prototype, I describe in the last chapter the SLC project at the Stanford Linear Accelerator Center.
NASA Technical Reports Server (NTRS)
Holloway, Sidney E., III (Inventor); Crossley, Edward A., Jr. (Inventor); Jones, Irby W. (Inventor); Miller, James B. (Inventor); Davis, C. Calvin (Inventor); Behun, Vaughn D. (Inventor); Goodrich, Lewis R., Sr. (Inventor)
1992-01-01
A linear mass actuator includes an upper housing and a lower housing connectable to each other and having a central passageway passing axially through a mass that is linearly movable in the central passageway. Rollers mounted in the upper and lower housings in frictional engagement with the mass translate it linearly in the central passageway, and drive motors operatively coupled to the rollers rotate them to drive the mass axially in the central passageway.
Linear accelerator for radioisotope production
Hansborough, L.D.; Hamm, R.W.; Stovall, J.E.
1982-02-01
A 200- to 500-µA source of 70- to 90-MeV protons would be a valuable asset to the nuclear medicine program. A linear accelerator (linac) can achieve this performance, and it can be extended to even higher energies and currents. Variable energy and current options are available. A 70-MeV linac is described, based on recent innovations in linear accelerator technology; it would be 27.3 m long and cost approximately $6 million. By operating the radio-frequency (rf) power system at the level necessary to produce a 500-µA beam current, the cost of power deposited in the radioisotope-production target is comparable with existing cyclotrons. If the rf power system is operated at full power, the same accelerator is capable of producing an 1140-µA beam, and the cost per beam watt on the target is less than half that of comparable cyclotrons.
Hlaing, Lwin Mar; Fahmida, Umi; Htet, Min Kyaw; Utomo, Budi; Firmansyah, Agus; Ferguson, Elaine L
2016-07-01
Poor feeding practices result in inadequate nutrient intakes in young children in developing countries. To improve practices, local food-based complementary feeding recommendations (CFR) are needed. This cross-sectional survey aimed to describe current food consumption patterns of 12-23-month-old Myanmar children (n 106) from Ayeyarwady region in order to identify nutrient requirements that are difficult to achieve using local foods and to formulate affordable and realistic CFR to improve dietary adequacy. Weekly food consumption patterns were assessed using a 12-h weighed dietary record, single 24-h recall and a 5-d food record. Food costs were estimated by market surveys. CFR were formulated by linear programming analysis using WHO Optifood software and evaluated among mothers (n 20) using trial of improved practices (TIP). Findings showed that Ca, Zn, niacin, folate and Fe were 'problem nutrients': nutrients that did not achieve 100 % recommended nutrient intake even when the diet was optimised. Chicken liver, anchovy and roselle leaves were locally available nutrient-dense foods that would fill these nutrient gaps. The final set of six CFR would ensure dietary adequacy for five of twelve nutrients at a minimal cost of 271 kyats/d (based on the exchange rate of 900 kyats/USD at the time of data collection: 3rd quarter of 2012), but inadequacies remained for niacin, folate, thiamin, Fe, Zn, Ca and vitamin B6. TIP showed that mothers believed liver and vegetables would cause worms and diarrhoea, but these beliefs could be overcome to successfully promote liver consumption. Therefore, an acceptable set of CFR were developed to improve the dietary practices of 12-23-month-old Myanmar children using locally available foods. Alternative interventions such as fortification, however, are still needed to ensure dietary adequacy of all nutrients.
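The optimisation step that Optifood performs can be sketched as a classic diet LP: minimise cost subject to nutrient-adequacy constraints and portion limits. Every cost, nutrient content, and requirement below is invented for illustration and does not come from the survey:

```python
from scipy.optimize import linprog

# A classic diet LP in the spirit of the linear programming analysis.
foods = ["chicken liver", "anchovy", "roselle leaves"]
cost = [30.0, 20.0, 5.0]        # kyats per portion (hypothetical)
iron = [0.9, 0.3, 0.2]          # mg per portion (hypothetical)
zinc = [0.4, 0.2, 0.05]         # mg per portion (hypothetical)
# Minimise cost subject to iron >= 5 mg and zinc >= 2 mg,
# with at most 10 portions of any single food.
res = linprog(c=cost,
              A_ub=[[-v for v in iron], [-v for v in zinc]],
              b_ub=[-5.0, -2.0],
              bounds=[(0, 10)] * 3)
print(dict(zip(foods, res.x.round(3))), round(res.fun, 2))
```

A "problem nutrient" in the abstract's sense corresponds to a requirement that remains unmet (the LP is infeasible, or the constraint cannot reach 100% of the recommended intake) even at the portion bounds.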
Fault tolerant linear actuator
Tesar, Delbert
2004-09-14
In varying embodiments, the fault tolerant linear actuator of the present invention is a new and improved linear actuator with fault tolerance and positional control that may incorporate velocity summing, force summing, or a combination of the two. In one embodiment, the invention offers a velocity summing arrangement with a differential gear between two prime movers driving a cage, which then drives a linear spindle screw transmission. Other embodiments feature two prime movers driving separate linear spindle screw transmissions, one internal and one external, in a totally concentric and compact integrated module.
Linear phase compressive filter
McEwan, Thomas E.
1995-01-01
A phase linear filter for soliton suppression is in the form of a laddered series of stages of non-commensurate low pass filters with each low pass filter having a series coupled inductance (L) and a reverse biased, voltage dependent varactor diode, to ground which acts as a variable capacitance (C). L and C values are set to levels which correspond to a linear or conventional phase linear filter. Inductance is mapped directly from that of an equivalent nonlinear transmission line and capacitance is mapped from the linear case using a large signal equivalent of a nonlinear transmission line.
Chen, Qingwen; Narayanan, Kumaran
2015-01-01
Recombineering is a powerful genetic engineering technique based on homologous recombination that can be used to accurately modify DNA independent of its sequence or size. One novel application of recombineering is the assembly of linear BACs in E. coli that can replicate autonomously as linear plasmids. A short telomeric sequence from phage N15 is inserted into a circular BAC and is subsequently cut and rejoined by the phage protelomerase enzyme to generate a linear BAC with terminal hairpin telomeres. Telomere-capped linear BACs are protected against exonuclease attack both in vitro and in vivo in E. coli cells and can replicate stably. Here we describe step-by-step protocols to linearize any BAC clone by recombineering, including inserting and screening for the presence of the N15 telomeric sequence, linearizing BACs in vivo in E. coli, extracting linear BACs, and verifying the presence of hairpin telomere structures. Linear BACs may be useful for functional expression of genomic loci in cells, for maintenance of linear viral genomes in their natural conformation, and for constructing innovative artificial chromosome structures for applications in mammalian and plant cells.
Richter, B.
1985-12-01
A report is given on the goals and progress of the SLAC Linear Collider. The status of the machine and the detectors are discussed and an overview is given of the physics which can be done at this new facility. Some ideas on how (and why) large linear colliders of the future should be built are given.
Linear Equations: Equivalence = Success
ERIC Educational Resources Information Center
Baratta, Wendy
2011-01-01
The ability to solve linear equations sets students up for success in many areas of mathematics and other disciplines requiring formula manipulations. There are many reasons why solving linear equations is a challenging skill for students to master. One major barrier for students is the inability to interpret the equals sign as anything other than…
Alfonso, R; Belinchon, I
2001-01-01
Linear eruptions are sometimes associated with systemic diseases and they may also be induced by various drugs. Paradoxically, such acquired inflammatory skin diseases tend to follow the system of Blaschko's lines. We describe a case of unilateral linear drug eruption caused by ibuprofen, which later became bilateral and generalized.
Linearization of Robot Manipulators
NASA Technical Reports Server (NTRS)
Kreutz, Kenneth
1987-01-01
Four nonlinear control schemes equivalent. Report discusses theory of nonlinear feedback control of robot manipulator, emphasis on control schemes making manipulator input and output behave like decoupled linear system. Approach, called "exact external linearization," contributes efforts to control end-effector trajectories, positions, and orientations.
Linear models: permutation methods
Cade, B.S.; Everitt, B.S.; Howell, D.C.
2005-01-01
Permutation tests (see Permutation Based Inference) for the linear model have applications in behavioral studies when traditional parametric assumptions about the error term in a linear model are not tenable. Improved validity of Type I error rates can be achieved with properly constructed permutation tests. Perhaps more importantly, increased statistical power, improved robustness to effects of outliers, and detection of alternative distributional differences can be achieved by coupling permutation inference with alternative linear model estimators. For example, it is well known that estimates of the mean in a linear model are extremely sensitive to even a single outlying value of the dependent variable compared to estimates of the median [7, 19]. Traditionally, linear modeling focused on estimating changes in the center of distributions (means or medians). However, quantile regression allows distributional changes to be estimated in all or any selected part of a distribution of responses, providing a more complete statistical picture that has relevance to many biological questions [6]...
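The core idea, building the null distribution of a linear-model estimate by shuffling the response, can be sketched in a few lines. This is a minimal illustration of a permutation test for a regression slope, not the estimators or data discussed in the abstract:

```python
import random
import statistics

def ols_slope(x, y):
    """Ordinary least-squares slope of y on x."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    return sxy / sxx

def permutation_pvalue(x, y, n_perm=2000, seed=0):
    """Two-sided permutation p-value for the null of zero slope:
    shuffling y destroys any x-y association, so the shuffled slopes
    form the null distribution without parametric assumptions on the
    error term."""
    rng = random.Random(seed)
    observed = abs(ols_slope(x, y))
    y_perm = list(y)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(y_perm)
        if abs(ols_slope(x, y_perm)) >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)  # add-one correction keeps p > 0

# a strong linear trend should yield a very small p-value
x = list(range(20))
noise = [0.1, -0.2, 0.3, -0.1] * 5
y = [2.0 * xi + e for xi, e in zip(x, noise)]
p = permutation_pvalue(x, y)
```

The same resampling scheme extends to other estimators (medians, quantile regression) simply by replacing `ols_slope`, which is precisely the coupling of permutation inference with alternative estimators the abstract describes.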
NASA Technical Reports Server (NTRS)
Clancy, John P.
1988-01-01
The object of the invention is to provide a mechanical force actuator which is lightweight and manipulatable and utilizes linear motion for push or pull forces while maintaining a constant overall length. The mechanical force producing mechanism comprises a linear actuator mechanism and a linear motion shaft mounted parallel to one another. The linear motion shaft is connected to a stationary or fixed housing and to a movable housing, where the movable housing is mechanically actuated through the actuator mechanism by either manual means or motor means. The housings are adapted to releasably receive a variety of jaw or pulling elements adapted for clamping or prying action. The stationary housing is adapted to be pivotally mounted to permit an angular position of the housing to allow the tool to adapt to skewed interfaces. The actuator mechanism is operated by a gear train to obtain linear motion.
NASA Technical Reports Server (NTRS)
Vinson, John
1998-01-01
In July of 1999 two linear aerospike rocket engines will power the first flight of NASA's X-33 advanced technology demonstrator. A successful X-33 flight test program will validate the aerospike nozzle concept, a key technical feature of Lockheed Martin's VentureStar(trademark) reusable launch vehicle. The aerospike received serious consideration for NASA's current space shuttle, but was eventually rejected in 1969 in favor of high chamber pressure bell engines, in part because of perceived technical risk. The aerospike engine (discussed below) has several performance advantages over conventional bell engines. However, these performance advantages are difficult to validate by ground test. The space shuttle, a multibillion dollar program intended to provide all of NASA's future space lift, could not afford the gamble of choosing a potentially superior though unproven aerospike engine over a conventional bell engine. The X-33 demonstrator provides an opportunity to prove the aerospike's performance advantage in flight before committing to an operational vehicle.
Aircraft engine mathematical model - linear system approach
NASA Astrophysics Data System (ADS)
Rotaru, Constantin; Roateşi, Simona; Cîrciu, Ionicǎ
2016-06-01
This paper examines a simplified mathematical model of the aircraft engine, based on the theory of linear and nonlinear systems. The dynamics of the engine were represented by a linear, time-variant model near a nominal operating point within a finite time interval. The linearized equations were expressed in matrix form, suitable for incorporation in the MAPLE program solver. The behavior of the engine was described in terms of the variation of the rotational speed following a deflection of the throttle. The engine inlet parameters can cover a wide range of altitudes and Mach numbers.
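The kind of model described here, engine dynamics linearized about a nominal operating point, can be illustrated with a scalar sketch. The coefficients below are invented placeholders for illustration, not values from the paper:

```python
def simulate_linear_engine(a=-2.0, b=500.0, du=0.1, dt=0.001, t_end=5.0):
    """Euler integration of a scalar linearized model
        d(dn)/dt = a*dn + b*du
    where dn is the rotational-speed deviation from the nominal
    operating point and du is the throttle deflection. The values of
    a and b are illustrative, not engine data."""
    dn = 0.0
    for _ in range(int(t_end / dt)):
        dn += dt * (a * dn + b * du)
    return dn

# the deviation settles near the steady state -b/a * du
dn_final = simulate_linear_engine()
```

With a stable pole (a < 0), a throttle step drives the speed deviation exponentially toward -b/a times the deflection, which is the qualitative behavior a linearized engine model is meant to capture.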
Linear ubiquitination in immunity.
Shimizu, Yutaka; Taraborrelli, Lucia; Walczak, Henning
2015-07-01
Linear ubiquitination is a post-translational protein modification recently discovered to be crucial for innate and adaptive immune signaling. The function of linear ubiquitin chains is regulated at multiple levels: generation, recognition, and removal. These chains are generated by the linear ubiquitin chain assembly complex (LUBAC), the only known ubiquitin E3 capable of forming the linear ubiquitin linkage de novo. LUBAC is not only relevant for activation of nuclear factor-κB (NF-κB) and mitogen-activated protein kinases (MAPKs) in various signaling pathways, but importantly, it also regulates cell death downstream of immune receptors capable of inducing this response. Recognition of the linear ubiquitin linkage is specifically mediated by certain ubiquitin receptors, which is crucial for translation into the intended signaling outputs. LUBAC deficiency results in attenuated gene activation and increased cell death, causing pathologic conditions in both mice and humans. Removal of ubiquitin chains is mediated by deubiquitinases (DUBs). Two of them, OTULIN and CYLD, are constitutively associated with LUBAC. Here, we review the current knowledge on linear ubiquitination in immune signaling pathways and the biochemical mechanisms as to how linear polyubiquitin exerts its functions distinctly from those of other ubiquitin linkage types.
1979-12-01
Optimal Linear Control. C.A. Harvey, M.G. Safonov, G. Stein, J.C. Doyle. Honeywell Systems & Research Center, 2600 Ridgway Parkway, Minneapolis. Characterizations of optimal linear controls have been derived, from which guides for selecting the structure of the control system and the weights in
NASA Technical Reports Server (NTRS)
Studer, P. A. (Inventor)
1983-01-01
A linear magnetic bearing system having electromagnetic vernier flux paths in shunt relation with permanent magnets, so that the vernier flux does not traverse the permanent magnet, is described. Novelty is believed to reside in providing a linear magnetic bearing having electromagnetic flux paths that bypass high reluctance permanent magnets. Particular novelty is believed to reside in providing a linear magnetic bearing with a pair of axially spaced elements having electromagnets for establishing vernier x and y axis control. The magnetic bearing system has possible use in connection with a long life reciprocating cryogenic refrigerator that may be used on the space shuttle.
BLAS- BASIC LINEAR ALGEBRA SUBPROGRAMS
NASA Technical Reports Server (NTRS)
Krogh, F. T.
1994-01-01
The Basic Linear Algebra Subprogram (BLAS) library is a collection of FORTRAN callable routines for employing standard techniques in performing the basic operations of numerical linear algebra. The BLAS library was developed to provide a portable and efficient source of basic operations for designers of programs involving linear algebraic computations. The subprograms available in the library cover the operations of dot product, multiplication of a scalar and a vector, vector plus a scalar times a vector, Givens transformation, modified Givens transformation, copy, swap, Euclidean norm, sum of magnitudes, and location of the largest magnitude element. Since these subprograms are to be used in an ANSI FORTRAN context, the cases of single precision, double precision, and complex data are provided for. All of the subprograms have been thoroughly tested and produce consistent results even when transported from machine to machine. BLAS contains Assembler versions and FORTRAN test code for any of the following compilers: Lahey F77L, Microsoft FORTRAN, or IBM Professional FORTRAN. It requires the Microsoft Macro Assembler and a math co-processor. The PC implementation allows individual arrays of over 64K. The BLAS library was developed in 1979. The PC version was made available in 1986 and updated in 1988.
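The operations listed above are simple to state. Pure-Python sketches of a few of the Level 1 routines (illustrative translations of what the FORTRAN subprograms compute, not the library itself) look like:

```python
import math

def ddot(x, y):
    """Dot product (cf. BLAS DDOT)."""
    return sum(xi * yi for xi, yi in zip(x, y))

def daxpy(a, x, y):
    """Vector plus a scalar times a vector, a*x + y (cf. BLAS DAXPY)."""
    return [a * xi + yi for xi, yi in zip(x, y)]

def dnrm2(x):
    """Euclidean norm (cf. BLAS DNRM2)."""
    return math.sqrt(sum(xi * xi for xi in x))

def dasum(x):
    """Sum of magnitudes (cf. BLAS DASUM)."""
    return sum(abs(xi) for xi in x)

def idamax(x):
    """Location of the largest-magnitude element (cf. BLAS IDAMAX)."""
    return max(range(len(x)), key=lambda i: abs(x[i]))
```

The value of the real library lies not in these one-liners but in their portability, consistent semantics across machines, and machine-tuned Assembler implementations, which is exactly what motivated collecting them into a standard subprogram set.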
Multiple linear regression analysis
NASA Technical Reports Server (NTRS)
Edwards, T. R.
1980-01-01
Program rapidly selects best-suited set of coefficients. User supplies only vectors of independent and dependent data and specifies confidence level required. Program uses stepwise statistical procedure for relating minimal set of variables to set of observations; final regression contains only most statistically significant coefficients. Program is written in FORTRAN IV for batch execution and has been implemented on NOVA 1200.
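The flavor of such a stepwise procedure can be sketched with a greedy stagewise variant: at each step, regress the current residuals on each unused predictor and keep the one that most reduces the residual sum of squares. This is an illustration of the selection idea only, not the program's statistical significance tests:

```python
import statistics

def simple_fit(x, y):
    """Least-squares intercept and slope of y on x."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    sxx = sum((xi - mx) ** 2 for xi in x)
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sxx
    return my - b * mx, b

def stagewise_select(X, y, n_keep=2):
    """Greedily pick predictors: each round, fit the residuals against
    every unused predictor and keep the best-fitting one."""
    resid = list(y)
    unused = set(range(len(X)))
    chosen = []
    for _ in range(n_keep):
        best = None
        for j in unused:
            a, b = simple_fit(X[j], resid)
            rss = sum((r - (a + b * xj)) ** 2 for r, xj in zip(resid, X[j]))
            if best is None or rss < best[0]:
                best = (rss, j, a, b)
        _, j, a, b = best
        chosen.append(j)
        unused.discard(j)
        resid = [r - (a + b * xj) for r, xj in zip(resid, X[j])]
    return chosen

# y depends on predictors 0 and 2 but not on the noise column 1
x0 = list(range(10))
x1 = [1, 0] * 5
x2 = [v * v for v in x0]
y = [3 * a + 5 * b for a, b in zip(x0, x2)]
chosen = stagewise_select([x0, x1, x2], y, n_keep=2)
```

A production stepwise routine would instead refit the full multiple regression at each step and apply an F-test entry criterion, which is the "most statistically significant coefficients" behavior the abstract describes.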
NASA Technical Reports Server (NTRS)
Laughlin, Darren
1995-01-01
Inertial linear actuators developed to suppress residual accelerations of nominally stationary or steadily moving platforms. Function like long-stroke version of voice coil in conventional loudspeaker, with superimposed linear variable-differential transformer. Basic concept also applicable to suppression of vibrations of terrestrial platforms. For example, laboratory table equipped with such actuators plus suitable vibration sensors and control circuits made to vibrate much less in presence of seismic, vehicular, and other environmental vibrational disturbances.
NASA Technical Reports Server (NTRS)
Callier, Frank M.; Desoer, Charles A.
1991-01-01
The aim of this book is to provide a systematic and rigorous access to the main topics of linear state-space system theory in both the continuous-time case and the discrete-time case; and the I/O description of linear systems. The main thrusts of the work are the analysis of system descriptions and derivations of their properties, LQ-optimal control, state feedback and state estimation, and MIMO unity-feedback systems.
NASA Astrophysics Data System (ADS)
Morel, Danielle; Levy, William B.
2006-03-01
Information processing in the brain is metabolically expensive and energy usage by the different components of the nervous system is not well understood. In a continuing effort to explore the costs and constraints of information processing at the single neuron level, dendritic processes are being studied. More specifically, the role of various ion channel conductances is explored in terms of integrating dendritic excitatory synaptic input. Biophysical simulations of dendritic behavior show that the complexity of voltage-dependent, non-linear dendritic conductances can produce simplicity in the form of linear synaptic integration. Over increasing levels of synaptic activity, it is shown that two types of voltage-dependent conductances produce linearization over a limited range. This range is determined by the parameters defining the ion channel and the 'passive' properties of the dendrite. A persistent sodium and a transient A-type potassium channel were considered at steady-state transmembrane potentials in the vicinity of and hyperpolarized to the threshold for action potential initiation. The persistent sodium is seen to amplify and linearize the synaptic input over a short range of low synaptic activity. In contrast, the A-type potassium channel has a broader linearization range but tends to operate at higher levels of synaptic bombardment. Given equivalent 'passive' dendritic properties, the persistent sodium is found to be less costly than the A-type potassium in linearizing synaptic input.
Linear stochastic optimal control and estimation
NASA Technical Reports Server (NTRS)
Geyser, L. C.; Lehtinen, F. K. B.
1976-01-01
Digital program has been written to solve the LSOCE problem by using a time-domain formulation. LSOCE problem is defined as that of designing controls for linear time-invariant system which is disturbed by white noise in such a way as to minimize quadratic performance index.
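The quadratic-performance-index minimization at the heart of the LQ problem can be illustrated in the simpler deterministic, discrete-time, scalar case by iterating the Riccati equation. This is a sketch of the general idea, not the program's continuous-time stochastic formulation:

```python
def dlqr_scalar(a, b, q, r, iters=200):
    """Iterate the scalar discrete-time Riccati equation
        p <- q + a*p*a - (a*p*b)**2 / (r + b*p*b)
    to its fixed point, then return the optimal feedback gain k for
    the control law u = -k*x minimizing sum(q*x**2 + r*u**2)."""
    p = q
    for _ in range(iters):
        k = (b * p * a) / (r + b * p * b)
        p = q + a * p * a - a * p * b * k
    return (b * p * a) / (r + b * p * b)

# an unstable plant (a > 1) is stabilized: |a - b*k| < 1
k = dlqr_scalar(a=1.1, b=1.0, q=1.0, r=1.0)
```

In the stochastic LSOCE setting the same gain appears, with the white-noise disturbance handled by combining this regulator with a state estimator (the separation principle).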
Superconducting linear actuator
NASA Technical Reports Server (NTRS)
Johnson, Bruce; Hockney, Richard
1993-01-01
Special actuators are needed to control the orientation of large structures in space-based precision pointing systems. Electromagnetic actuators that presently exist are too large in size and their bandwidth is too low. Hydraulic fluid actuation also presents problems for many space-based applications. Hydraulic oil can escape in space and contaminate the environment around the spacecraft. A research study was performed that selected an electrically-powered linear actuator that can be used to control the orientation of a large pointed structure. This research surveyed available products, analyzed the capabilities of conventional linear actuators, and designed a first-cut candidate superconducting linear actuator. The study first examined theoretical capabilities of electrical actuators and determined their problems with respect to the application and then determined if any presently available actuators or any modifications to available actuator designs would meet the required performance. The best actuator was then selected based on available design, modified design, or new design for this application. The last task was to proceed with a conceptual design. No commercially-available linear actuator or modification capable of meeting the specifications was found. A conventional moving-coil dc linear actuator would meet the specification, but the back-iron for this actuator would weigh approximately 12,000 lbs. A superconducting field coil, however, eliminates the need for back iron, resulting in an actuator weight of approximately 1000 lbs.
A sequential linear optimization approach for controller design
NASA Technical Reports Server (NTRS)
Horta, L. G.; Juang, J.-N.; Junkins, J. L.
1985-01-01
A linear optimization approach with a simple real arithmetic algorithm is presented for reliable controller design and vibration suppression of flexible structures. Using first order sensitivity of the system eigenvalues with respect to the design parameters in conjunction with a continuation procedure, the method converts a nonlinear optimization problem into a maximization problem with linear inequality constraints. The method of linear programming is then applied to solve the converted linear optimization problem. The general efficiency of the linear programming approach allows the method to handle structural optimization problems with a large number of inequality constraints on the design vector. The method is demonstrated using a truss beam finite element model for the optimal sizing and placement of active/passive-structural members for damping augmentation. Results using both the sequential linear optimization approach and nonlinear optimization are presented and compared. The insensitivity to initial conditions of the linear optimization approach is also demonstrated.
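The final step above, solving a linear program with inequality constraints, can be illustrated for two design variables by brute-force vertex enumeration. This shows what an LP solves, not the simplex code a structural design tool would actually use:

```python
from itertools import combinations

def solve_lp_2d(c, A, b):
    """Maximize c.x subject to A x <= b for a two-variable LP by
    enumerating vertices of the feasible polygon: every candidate
    optimum lies at the intersection of two active constraints."""
    eps = 1e-9
    best = None
    for (a1, b1), (a2, b2) in combinations(zip(A, b), 2):
        det = a1[0] * a2[1] - a1[1] * a2[0]
        if abs(det) < eps:
            continue  # parallel constraint boundaries, no vertex
        x = (b1 * a2[1] - a1[1] * b2) / det   # Cramer's rule
        y = (a1[0] * b2 - b1 * a2[0]) / det
        if all(ai[0] * x + ai[1] * y <= bi + eps for ai, bi in zip(A, b)):
            val = c[0] * x + c[1] * y
            if best is None or val > best[0]:
                best = (val, (x, y))
    return best

# maximize 3x + 2y  s.t.  x + y <= 4, x <= 3, x >= 0, y >= 0
A = [[1, 1], [1, 0], [-1, 0], [0, -1]]
b = [4, 3, 0, 0]
val, (x, y) = solve_lp_2d([3, 2], A, b)
```

Enumeration scales combinatorially, which is why the paper relies on the simplex-style efficiency of linear programming when the design vector carries a large number of inequality constraints.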
NASA Technical Reports Server (NTRS)
Leviton, Douglas B. (Inventor)
1993-01-01
A Linear Motion Encoding device for measuring the linear motion of a moving object is disclosed in which a light source is mounted on the moving object and a position sensitive detector such as an array photodetector is mounted on a nearby stationary object. The light source emits a light beam directed towards the array photodetector such that a light spot is created on the array. An analog-to-digital converter, connected to the array photodetector, is used for reading the position of the spot on the array photodetector. A microprocessor and memory are connected to the analog-to-digital converter to hold and manipulate data provided by the analog-to-digital converter on the position of the spot and to compute the linear displacement of the moving object based upon the data from the analog-to-digital converter.
NASA Technical Reports Server (NTRS)
Chandler, J. A. (Inventor)
1985-01-01
The linear motion valve is described. The valve spool employs magnetically permeable rings, spaced apart axially, which engage a sealing assembly having magnetically permeable pole pieces in magnetic relationship with a magnet. The gap between the ring and the pole pieces is sealed with a ferrofluid. Depletion of the ferrofluid is minimized.
ERIC Educational Resources Information Center
Dobbs, David E.
2013-01-01
A direct method is given for solving first-order linear recurrences with constant coefficients. The limiting value of that solution is studied as "n to infinity." This classroom note could serve as enrichment material for the typical introductory course on discrete mathematics that follows a calculus course.
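The standard closed form for such a recurrence, and its limit as n grows, can be checked numerically. This sketch uses the textbook formula, not the note's own derivation:

```python
def recurrence_closed_form(a0, r, c, n):
    """Closed form of a_{k+1} = r*a_k + c (constant coefficients):
    a_n = r**n * a0 + c * (r**n - 1) / (r - 1), valid for r != 1."""
    return r ** n * a0 + c * (r ** n - 1) / (r - 1)

def recurrence_iterate(a0, r, c, n):
    """Direct iteration, for checking the closed form."""
    a = a0
    for _ in range(n):
        a = r * a + c
    return a

# for |r| < 1 the sequence tends to the fixed point c / (1 - r)
limit = 3.0 / (1.0 - 0.5)
```

The fixed-point limit c/(1 - r) for |r| < 1 is exactly the "limiting value as n to infinity" studied in the note.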
Improved Electrohydraulic Linear Actuators
NASA Technical Reports Server (NTRS)
Hamtil, James
2004-01-01
A product line of improved electrohydraulic linear actuators has been developed. These actuators are designed especially for use in actuating valves in rocket-engine test facilities. They are also adaptable to many industrial uses, such as steam turbines, process control valves, dampers, motion control, etc. The advantageous features of the improved electrohydraulic linear actuators are best described with respect to shortcomings of prior electrohydraulic linear actuators that the improved ones are intended to supplant. The flow of hydraulic fluid to the two ports of the actuator cylinder is controlled by a servo valve that is controlled by a signal from a servo amplifier that, in turn, receives an analog position-command signal (a current having a value between 4 and 20 mA) from a supervisory control system of the facility. As the position command changes, the servo valve shifts, causing a greater flow of hydraulic fluid to one side of the cylinder and thereby causing the actuator piston to move to extend or retract a piston rod from the actuator body. A linear variable differential transformer (LVDT) directly linked to the piston provides a position-feedback signal, which is compared with the position-command signal in the servo amplifier. When the position-feedback and position-command signals match, the servo valve moves to its null position, in which it holds the actuator piston at a steady position.
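The servo loop described above can be sketched as a simple discrete-time simulation with a proportional position loop. Gains, units, and time constants are illustrative placeholders, not facility values:

```python
def simulate_actuator(command=10.0, kp=5.0, dt=0.001, t_end=2.0):
    """One feedback cycle per time step: the servo amplifier compares
    the LVDT position feedback with the command, the servo valve
    passes fluid flow roughly proportional to the error (near null),
    and the piston integrates the net flow into position."""
    position = 0.0
    for _ in range(int(t_end / dt)):
        error = command - position   # amplifier: command minus feedback
        flow = kp * error            # valve: flow proportional to error
        position += flow * dt        # piston integrates the flow
    return position

final_position = simulate_actuator()
```

When the feedback matches the command the error, and hence the commanded flow, goes to zero, which is the null-position holding behavior the abstract describes.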
Singular linear quadratic control problem for systems with linear and constant delay
NASA Astrophysics Data System (ADS)
Sesekin, A. N.; Andreeva, I. Yu.; Shlyakhov, A. S.
2016-12-01
This article is devoted to the singular linear-quadratic optimization problem on the trajectories of a linear non-autonomous system of differential equations with linear and constant delay. It should be noted that such a problem may have no solution in the class of integrable controls, so to ensure the existence of a solution the class of controls must be expanded to include impulse components. For the problem under consideration, we have built a program control containing impulse components at the initial and final moments of time. This is done under certain assumptions on the functional and the right-hand side of the control system.
Applications of Goal Programming to Education.
ERIC Educational Resources Information Center
Van Dusseldorp, Ralph A.; And Others
This paper discusses goal programming, a computer-based operations research technique that is basically a modification and extension of linear programming. The authors first discuss the similarities and differences between goal programming and linear programming, then describe the limitations of goal programming and its possible applications for…
Nalbantoğlu, Ö Ufuk
2014-01-01
Independent scoring of the aligned sections to determine the quality of biological sequence alignments enables recursive definitions of the overall alignment score. This property is not only biologically meaningful but it also provides the opportunity to find the optimal alignments using dynamic programming-based algorithms. Dynamic programming is an efficient problem-solving technique for a class of problems that can be solved by dividing them into overlapping subproblems. Pairwise sequence alignment techniques such as the Needleman-Wunsch and Smith-Waterman algorithms are applications of dynamic programming to pairwise sequence alignment problems. These algorithms offer polynomial time and space solutions. In this chapter, we introduce the basic dynamic programming solutions for global, semi-global, and local alignment problems. Algorithmic improvements offering quadratic-time and linear-space programs and approximate solutions with space-reduction and seeding heuristics are discussed. Finally, we briefly introduce the application of these techniques to multiple sequence alignment.
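The global-alignment recursion the chapter introduces can be sketched directly. This is a minimal Needleman-Wunsch score computation with simple match/mismatch/gap scores (chosen here for illustration), without the traceback that recovers the alignment itself:

```python
def needleman_wunsch(s, t, match=1, mismatch=-1, gap=-2):
    """Global alignment score by dynamic programming: F[i][j] is the
    best score aligning the prefixes s[:i] and t[:j]."""
    n, m = len(s), len(t)
    F = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        F[i][0] = i * gap            # s prefix against all gaps
    for j in range(1, m + 1):
        F[0][j] = j * gap            # t prefix against all gaps
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub = match if s[i - 1] == t[j - 1] else mismatch
            F[i][j] = max(F[i - 1][j - 1] + sub,  # align s[i-1], t[j-1]
                          F[i - 1][j] + gap,      # gap in t
                          F[i][j - 1] + gap)      # gap in s
    return F[n][m]

score = needleman_wunsch("GATTACA", "GCATGCU")
```

The overlapping-subproblem structure is visible in the recursion: each cell depends only on its three neighbors, giving the quadratic time and space the chapter cites; the linear-space refinement keeps only two rows of F at a time.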
Finite solutions of fully fuzzy linear system
NASA Astrophysics Data System (ADS)
Malkawi, Ghassan; Ahmad, Nazihah; Ibrahim, Haslinda
2014-12-01
The solution of a Fully Fuzzy Linear System (FFLS) is normally categorized as unique, finite, or infinitely many solutions. However, in the case of more than one solution, the finite or alternative solution is not detected when linear programming is considered. Therefore this paper aims to provide a method, using a min-max system and an absolute system, that introduces a new concept for the consistency of an FFLS, called the finite solution of the FFLS, in which the FFLS has more than one solution but not infinitely many.
NASA Technical Reports Server (NTRS)
Goldowsky, Michael P. (Inventor)
1987-01-01
A reciprocating linear motor is formed with a pair of ring-shaped permanent magnets having opposite radial polarizations, held axially apart by a nonmagnetic yoke, which serves as an axially displaceable armature assembly. A pair of annularly wound coils having axial lengths which differ from the axial lengths of the permanent magnets are serially coupled together in mutual opposition and positioned with an outer cylindrical core in axial symmetry about the armature assembly. One embodiment includes a second pair of annularly wound coils serially coupled together in mutual opposition and an inner cylindrical core positioned in axial symmetry inside the armature radially opposite to the first pair of coils. Application of a potential difference across a serial connection of the two pairs of coils creates a current flow perpendicular to the magnetic field created by the armature magnets, thereby causing limited linear displacement of the magnets relative to the coils.
Relativistic Linear Restoring Force
ERIC Educational Resources Information Center
Clark, D.; Franklin, J.; Mann, N.
2012-01-01
We consider two different forms for a relativistic version of a linear restoring force. The pair comes from taking Hooke's law to be the force appearing on the right-hand side of the relativistic expressions: d"p"/d"t" or d"p"/d["tau"]. Either formulation recovers Hooke's law in the non-relativistic limit. In addition to these two forces, we…
Combustion powered linear actuator
Fischer, Gary J.
2007-09-04
The present invention provides robotic vehicles having wheeled and hopping mobilities that are capable of traversing (e.g. by hopping over) obstacles that are large in size relative to the robot and are capable of operation in unpredictable terrain over long range. The present invention further provides combustion powered linear actuators, which can include latching mechanisms to facilitate pressurized fueling of the actuators, as can be used to provide wheeled vehicles with a hopping mobility.
Buttram, M.T.; Ginn, J.W.
1988-06-21
A linear induction accelerator includes a plurality of adder cavities arranged in a series and provided in a structure which is evacuated so that a vacuum inductance is provided between each adder cavity and the structure. An energy storage system for the adder cavities includes a pulsed current source and a respective plurality of bipolar converting networks connected thereto. The bipolar high-voltage, high-repetition-rate square pulse train sets and resets the cavities. 4 figs.
Wideband Linear Phase Modulator
NASA Technical Reports Server (NTRS)
Mysoor, Narayan R.; Mueller, Robert O.
1994-01-01
Phase modulator for transmission in X band provides large phase deviation that remains nearly linear with voltage over relatively wide range. Operates with low loss over wide frequency band and with stable characteristics over wide temperature range. Phase modulator contains two varactor-diode phase shifters coupled via circulators. Separate drive circuit applies modulating voltages to varactor diodes. Modulation voltages vary in accordance with input to drive circuit.
A proposed method for solving fuzzy system of linear equations.
Kargar, Reza; Allahviranloo, Tofigh; Rostami-Malkhalifeh, Mohsen; Jahanshaloo, Gholam Reza
2014-01-01
This paper proposes a new method for solving a fuzzy system of linear equations with a crisp coefficient matrix and a fuzzy or interval right-hand side. Some conditions for the existence of a fuzzy or interval solution of an m × n linear system are derived and a practical algorithm is introduced in detail. The method is based on a linear programming problem. Finally, the applicability of the proposed method is illustrated by some numerical examples.
Accumulative Equating Error after a Chain of Linear Equatings
ERIC Educational Resources Information Center
Guo, Hongwen
2010-01-01
After many equatings have been conducted in a testing program, equating errors can accumulate to a degree that is not negligible compared to the standard error of measurement. In this paper, the author investigates the asymptotic accumulative standard error of equating (ASEE) for linear equating methods, including chained linear, Tucker, and…
Implementing Linear Algebra Related Algorithms on the TI-92+ Calculator.
ERIC Educational Resources Information Center
Alexopoulos, John; Abraham, Paul
2001-01-01
Demonstrates a less utilized feature of the TI-92+: its natural and powerful programming language. Shows how to implement several linear algebra related algorithms including the Gram-Schmidt process, Least Squares Approximations, Wronskians, Cholesky Decompositions, and Generalized Linear Least Square Approximations with QR Decompositions.…
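One of the algorithms mentioned, the Gram-Schmidt process, is easily expressed in any language with loops and lists. A Python sketch (rather than TI-92+ calculator code) of the classical version:

```python
import math

def gram_schmidt(vectors):
    """Classical Gram-Schmidt: orthonormalize a list of vectors.
    Each vector is projected off the previously accepted basis
    vectors and the remainder is normalized."""
    basis = []
    for v in vectors:
        w = list(v)
        for u in basis:
            dot = sum(wi * ui for wi, ui in zip(w, u))
            w = [wi - dot * ui for wi, ui in zip(w, u)]
        norm = math.sqrt(sum(wi * wi for wi in w))
        if norm > 1e-12:  # skip linearly dependent inputs
            basis.append([wi / norm for wi in w])
    return basis

Q = gram_schmidt([[1.0, 1.0, 0.0], [1.0, 0.0, 1.0]])
```

In floating point the modified Gram-Schmidt variant (subtracting each projection as it is computed, as done above vector by vector) is the numerically preferable form, a point worth making alongside the least-squares and QR topics the article lists.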
Ultrasonic linear measurement system
NASA Technical Reports Server (NTRS)
Marshall, Scot H. (Inventor)
1991-01-01
An ultrasonic linear measurement system uses the travel time of surface waves along the perimeter of a three-dimensional curvilinear body to determine the perimeter of the curvilinear body. The system can also be used piece-wise to measure distances along plane surfaces. The system can be used to measure perimeters where use of laser light, optical means or steel tape would be extremely difficult, time consuming or impossible. It can also be used to determine discontinuities in surfaces of known perimeter or dimension.
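The measurement principle reduces to distance = speed × time for the surface wave. A trivial sketch with illustrative numbers (the wave speed must in practice be calibrated for the material):

```python
def perimeter_from_travel_time(travel_time_s, wave_speed_m_s):
    """Perimeter of a closed surface path from the travel time of a
    surface wave around it: distance = speed * time. The wave speed
    here is an illustrative value, not a calibrated constant."""
    return wave_speed_m_s * travel_time_s

# e.g. a surface wave at 3000 m/s taking 500 microseconds to circle
# the body implies a 1.5 m perimeter
p = perimeter_from_travel_time(500e-6, 3000.0)
```

A discontinuity check works the same way in reverse: a travel time that disagrees with the known perimeter at the calibrated speed flags a surface anomaly.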
NASA Technical Reports Server (NTRS)
Perkins, Gerald S. (Inventor)
1980-01-01
A linear actuator which can apply high forces is described, which includes a reciprocating rod having a threaded portion engaged by a nut that is directly coupled to the rotor of an electric motor. The nut is connected to the rotor in a manner that minimizes loading on the rotor, by the use of a coupling that transmits torque to the nut but permits it to shift axially and radially with respect to the rotor. The nut has a threaded hydrostatic bearing for engaging the threaded rod portion, with an oil-carrying groove in the nut being interrupted.
NASA Astrophysics Data System (ADS)
de Jong, Roelof
2005-07-01
This program incorporates a number of tests to analyse the count-rate-dependent non-linearity seen in NICMOS spectro-photometric observations. In visit 1 we will observe a few fields with stars of a range in luminosity in NGC1850 with NICMOS in NIC1 in F090M, F110W and F160W and NIC2 F110W, F160W, and F180W. We will repeat the observations with the flatfield lamp on, creating artificially high count rates, allowing tests of NICMOS linearity as a function of count rate. To assess the effect of charge trapping and persistence, we first take darks {so there is not too much charge already trapped}, then take exposures with the lamp off, exposures with the lamp on, and repeat at the end with the lamp off. Finally, we continue with taking darks during occultation. In visit 2 we will observe spectro-photometric standard P041C using the G096 and G141 grisms in NIC3, and repeat the lamp off/on/off test to artificially create a high background. In visits 3 & 4 we repeat photometry measurements of faint standard stars SNAP-2 and WD1657+343, on which the NICMOS non-linearity was originally discovered using grism observations. These measurements are repeated because previous photometry was obtained with too short exposure times, hence substantially affected by charge-trapping non-linearity. Measurements will be made with NIC1. Visit 5 forms the persistence test of the program. The bright star GL-390 {used in a previous persistence test} will illuminate the 3 NICMOS detectors in turn for a fixed time, saturating the center many times, after which a series of darks will be taken to measure the persistence {i.e. trapped electrons and the decay time of the traps}. To determine the wavelength dependence of the trap chance, exposures of the bright star in different filters will be taken, as well as one in the G096 grism with NIC3. Most exposures will be 128s long, but two exposures in the 3rd orbit will be 3x longer, to separate the effects of count rate versus total counts of the trap
NASA Astrophysics Data System (ADS)
Birx, Daniel
1992-03-01
Among the family of particle accelerators, the induction linear accelerator is the best suited for the acceleration of high-current electron beams. Because the electromagnetic radiation used to accelerate the electron beam is not stored in the cavities but is supplied by transmission lines during the beam pulse, it is possible to utilize very low Q (typically < 10) structures and very large beam pipes. This combination increases the beam-breakup-limited maximum currents to the order of kiloamperes. The micropulse lengths of these machines are measured in tens of nanoseconds, and duty factors as high as 10^-4 have been achieved. Until recently the major problem with these machines has been associated with the pulsed power drive. Beam currents of kiloamperes and accelerating potentials of megavolts require peak power drives of gigawatts, since no energy is stored in the structure. The marriage of linear accelerator technology and nonlinear magnetic compressors has produced some unique capabilities. It now appears possible to produce electron beams with average currents measured in amperes, peak currents in kiloamperes, and gradients exceeding 1 MeV/meter, with power efficiencies approaching 50%. The nonlinear magnetic compression technology has replaced the spark-gap drivers used on earlier accelerators with state-of-the-art all-solid-state SCR-commutated compression chains. The reliability of these machines is now approaching 10^10-shot MTBF. In the following paper we will briefly review the historical development of induction linear accelerators and then discuss the design considerations.
Linearly Forced Isotropic Turbulence
NASA Technical Reports Server (NTRS)
Lundgren, T. S.
2003-01-01
Stationary isotropic turbulence is often studied numerically by adding a forcing term to the Navier-Stokes equation. This is usually done for the purpose of achieving higher Reynolds number and longer statistics than is possible for isotropic decaying turbulence. It is generally accepted that forcing the Navier-Stokes equation at low wave number does not influence the small scale statistics of the flow provided that there is wide separation between the largest and smallest scales. It will be shown, however, that the spectral width of the forcing has a noticeable effect on inertial range statistics. A case will be made here for using a broader form of forcing in order to compare computed isotropic stationary turbulence with (decaying) grid turbulence. It is shown that using a forcing function which is directly proportional to the velocity has physical meaning and gives results which are closer to both homogeneous and non-homogeneous turbulence. Section 1 presents a four part series of motivations for linear forcing. Section 2 puts linear forcing to a numerical test with a pseudospectral computation.
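The linear forcing advocated above can be illustrated in one dimension. The following is a minimal sketch (the paper's computation is a 3-D pseudospectral Navier-Stokes run; this toy uses the viscous Burgers equation, and all numerical values are assumptions for illustration): adding a term A*u to the right-hand side injects energy at rate 2*A*E, and shock dissipation grows faster than the injection, so the energy settles to a statistically steady level instead of decaying.

```python
import numpy as np

# Viscous Burgers equation with linear forcing:  u_t + u u_x = nu*u_xx + A*u,
# integrated pseudospectrally with a 2/3-rule dealiased convolution.
N, L, nu, A = 256, 2.0 * np.pi, 0.02, 0.2
dx = L / N
x = np.arange(N) * dx
k = 2.0 * np.pi * np.fft.rfftfreq(N, d=dx)      # wavenumbers 0 .. N/2
dealias = np.arange(k.size) < N // 3            # 2/3-rule mask

def rhs(uh):
    """Spectral right-hand side: -d/dx(u^2/2) + nu*u_xx + A*u."""
    u = np.fft.irfft(uh, n=N)
    conv = np.fft.rfft(0.5 * u * u) * dealias
    return -1j * k * conv - nu * k**2 * uh + A * uh

uh = np.fft.rfft(np.sin(x) + 0.1 * np.cos(3.0 * x))
dt = 5.0e-4
for _ in range(20000):                          # integrate to t = 10
    k1 = rhs(uh)
    k2 = rhs(uh + dt * k1)                      # Heun / RK2 step
    uh = uh + 0.5 * dt * (k1 + k2)

u = np.fft.irfft(uh, n=N)
E = 0.5 * np.mean(u * u)                        # kinetic energy per unit length
print(f"stationary energy level ~ {E:.3f}")
```

Because the forcing is proportional to the velocity itself, no forcing scale is imposed by hand; the balance between injection 2*A*E and (shock) dissipation selects the stationary level.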
The report discusses a two-person max-min problem in which the maximizing player moves first and the minimizing player has perfect information of the...The joint constraints as well as the objective function are assumed to be linear. For this problem it is shown that the familiar inequality min max ≥ max min is reversed due to the influence of the joint constraints. The problem is characterized as a nonconvex program and a method of
Linear stochastic optimal control and estimation problem
NASA Technical Reports Server (NTRS)
Geyser, L. C.; Lehtinen, F. K. B.
1980-01-01
Problem involves design of controls for linear time-invariant system disturbed by white noise. Solution is Kalman filter coupled through set of optimal regulator gains to produce desired control signal. Key to solution is solving matrix Riccati differential equation. LSOCE effectively solves problem for wide range of practical applications. Program is written in FORTRAN IV for batch execution and has been implemented on IBM 360.
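A sketch of the computation such a program automates (illustrative matrices, not LSOCE's test case, and using the steady-state limit of the matrix Riccati equation): for a linear time-invariant plant disturbed by noise, the optimal regulator gains follow from the algebraic Riccati equation A'P + PA - PBR⁻¹B'P + Q = 0, with control u = -Kx and K = R⁻¹B'P. The dual of the same equation yields the steady-state Kalman filter gain.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0],
              [-2.0, -0.5]])       # plant dynamics (assumed example values)
B = np.array([[0.0],
              [1.0]])              # control input matrix
Q = np.diag([10.0, 1.0])          # state weighting
R = np.array([[0.1]])             # control weighting

P = solve_continuous_are(A, B, Q, R)   # steady-state Riccati solution
K = np.linalg.solve(R, B.T @ P)        # optimal regulator gain, u = -K x

# Residual of the algebraic Riccati equation should be ~ 0:
resid = A.T @ P + P @ A - P @ B @ np.linalg.solve(R, B.T @ P) + Q
print("gain K =", K)
print("Riccati residual norm =", np.linalg.norm(resid))
```

The closed-loop matrix A - BK is guaranteed stable for this weighting choice, which is the property the regulator design relies on.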
NASA Astrophysics Data System (ADS)
2001-05-01
Third Nucleus Observed with the VLT. Summary: New images from the VLT show that one of the two nuclei of Comet LINEAR (C/2001 A2), now about 100 million km from the Earth, has just split into at least two pieces. The three fragments are now moving through space in nearly parallel orbits while they slowly drift apart. This comet will pass through its perihelion (nearest point to the Sun) on May 25, 2001, at a distance of about 116 million kilometres. It has brightened considerably due to the splitting of its "dirty snowball" nucleus and can now be seen with the unaided eye by observers in the southern hemisphere as a faint object in the southern constellation of Lepus (The Hare). PR Photo 18a/01: Three nuclei of Comet LINEAR. PR Photo 18b/01: The break-up of Comet LINEAR (false-colour). Comet LINEAR splits and brightens. Caption: ESO PR Photo 18a/01 shows the three nuclei of Comet LINEAR (C/2001 A2). It is a reproduction of a 1-min exposure in red light, obtained in the early evening of May 16, 2001, with the 8.2-m VLT YEPUN (UT4) telescope at Paranal. ESO PR Photo 18b/01 shows the same image, but in a false-colour rendering for more clarity. The cometary fragment "B" (right) has split into "B1" and "B2" (separation about 1 arcsec, or 500 km), while fragment "A" (upper left) is considerably fainter. Technical information about these photos is available below. Comet LINEAR was discovered on January 3, 2001, and designated by the International Astronomical Union (IAU) as C/2001 A2 (see IAU Circular 7564). Six weeks ago, it was suddenly observed to brighten (IAUC 7605). Amateurs all over the world saw the comparatively faint comet reaching naked-eye magnitude and soon thereafter, observations with professional telescopes indicated
Vuori, Kaarina; Strandén, Ismo; Sevón-Aimonen, Marja-Liisa; Mäntysaari, Esa A
2006-01-01
A method based on Taylor series expansion for estimation of location parameters and variance components of non-linear mixed effects models was considered. An attractive property of the method is the opportunity for an easily implemented algorithm. Estimation of non-linear mixed effects models can be done by common methods for linear mixed effects models, and thus existing programs can be used after small modifications. The applicability of this algorithm in animal breeding was studied with simulation using a Gompertz function growth model in pigs. Two growth data sets were analyzed: a full set containing observations from the entire growing period, and a truncated time trajectory set containing animals slaughtered prematurely, which is common in pig breeding. The results from the 50 simulation replicates with full data set indicate that the linearization approach was capable of estimating the original parameters satisfactorily. However, estimation of the parameters related to adult weight becomes unstable in the case of a truncated data set.
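The linearization device can be shown in miniature. The sketch below is a deliberately stripped-down, hypothetical example (a single fixed-effect growth curve with no random effects, pedigree, or variance components, and assumed parameter values): expand the Gompertz mean w(t) = a·exp(-b·exp(-k·t)) to first order in (a, b, k) and iterate linear least squares on the Jacobian, which is the step that lets linear mixed-model machinery fit the non-linear model.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(0.0, 251.0, 5.0)                    # age in days
a0, b0, k0 = 300.0, 4.0, 0.02                     # "true" growth parameters
y = a0 * np.exp(-b0 * np.exp(-k0 * t)) + rng.normal(0.0, 3.0, t.size)

def gompertz(t, a, b, k):
    return a * np.exp(-b * np.exp(-k * t))

def jacobian(t, a, b, k):
    """First-order Taylor coefficients of the Gompertz curve in (a, b, k)."""
    e = np.exp(-k * t)
    w = a * np.exp(-b * e)
    return np.column_stack([w / a,                # dw/da
                            -w * e,               # dw/db
                            w * b * t * e])       # dw/dk

theta = np.array([280.0, 3.5, 0.025])             # starting values
for _ in range(25):                               # Gauss-Newton iterations
    a, b, k = theta
    r = y - gompertz(t, a, b, k)                  # residuals at current estimate
    J = jacobian(t, a, b, k)
    step, *_ = np.linalg.lstsq(J, r, rcond=None)  # linearized LS update
    theta = theta + step

print("estimates (a, b, k):", theta)
```

Each iteration is an ordinary linear least-squares fit, so in the mixed-model setting the same update can be delegated to existing linear mixed-effects software, as the abstract notes.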
Singular linear-quadratic control problem for systems with linear delay
Sesekin, A. N.
2013-12-18
A singular linear-quadratic optimization problem on the trajectories of non-autonomous linear differential equations with linear delay is considered. The peculiarity of this problem is that it has no solution in the class of integrable controls. To ensure the existence of solutions, the class of controls must be expanded to include controls with impulse components. Dynamical systems with linear delay are used to describe the motion of a pantograph current collector in electric traction, processes in biology, etc. It should be noted that a singular quality criterion occurs quite commonly in practical problems, and therefore the study of such problems is certainly important. For the problem under discussion, an optimal program control containing impulse components at the initial and final moments of time is constructed under certain assumptions on the functional and the right-hand side of the control system.
NASA Astrophysics Data System (ADS)
Hagedorn, P.
The mathematical pendulum is used to provide a survey of free and forced oscillations in damped and undamped systems. This simple model is employed to present illustrations for and comparisons between the various approximation schemes. A summary of the Liapunov stability theory is provided. The first and the second method of Liapunov are explained for autonomous as well as for nonautonomous systems. Here, a basic familiarity with the theory of linear oscillations is assumed. La Salle's theorem about the stability of invariant domains is explained in terms of illustrative examples. Self-excited oscillations are examined, taking into account such oscillations in mechanical and electrical systems, analytical approximation methods for the computation of self-excited oscillations, analytical criteria for the existence of limit cycles, forced oscillations in self-excited systems, and self-excited oscillations in systems with several degrees of freedom. Attention is given to Hamiltonian systems and an introduction to the theory of optimal control is provided.
NASA Astrophysics Data System (ADS)
Revenaugh, Justin
Elastic waves propagating in simple media manifest a surprisingly rich collection of phenomena. Although some can't withstand the complexities of Earth's structure, the majority only grow more interesting and more important as remote sensing probes for seismologists studying the planet's interior. To fully mine the information carried to the surface by seismic waves, seismologists must produce accurate models of the waves. Great strides have been made in this regard. Problems that were entirely intractable a decade ago are now routinely solved on inexpensive workstations. The mathematical representations of waves coded into algorithms have grown vastly more sophisticated and are troubled by many fewer approximations, enforced symmetries, and limitations. They are far from straightforward, and seismologists using them need a firm grasp on wave propagation in simple media. Linear Elastic Waves, by applied mathematician John G. Harris, responds to this need.
Van Atta, C.M.; Beringer, R.; Smith, L.
1959-01-01
A linear accelerator of heavy ions is described. The basic contributions of the invention consist of a method and apparatus for obtaining high energy particles of an element with an increased charge-to-mass ratio. The method comprises the steps of ionizing the atoms of an element, accelerating the resultant ions to an energy substantially equal to one MeV per nucleon, stripping orbital electrons from the accelerated ions by passing the ions through a curtain of elemental vapor disposed transversely of the path of the ions to provide a second charge-to-mass ratio, and finally accelerating the resultant stripped ions to a final energy of at least ten MeV per nucleon.
NASA Technical Reports Server (NTRS)
Holloway, Sidney E., III
1994-01-01
This paper describes the mechanical design, analysis, fabrication, testing, and lessons learned by developing a uniquely designed spaceflight-like actuator. The linear proof mass actuator (LPMA) was designed to attach to both a large space structure and a ground test model without modification. Previous designs lacked the power to perform in a terrestrial environment while other designs failed to produce the desired accelerations or frequency range for spaceflight applications. Thus, the design for a unique actuator was conceived and developed at NASA Langley Research Center. The basic design consists of four large mechanical parts (mass, upper housing, lower housing, and center support) and numerous smaller supporting components including an accelerometer, encoder, and four drive motors. Fabrication personnel were included early in the design phase of the LPMA as part of an integrated manufacturing process to alleviate potential difficulties in machining an already challenging design. Operational testing of the LPMA demonstrated that the actuator is capable of various types of load functions.
NASA Technical Reports Server (NTRS)
Holloway, S. E., III
1995-01-01
This paper describes the mechanical design, analysis, fabrication, testing, and lessons learned by developing a uniquely designed spaceflight-like actuator. The Linear Proof Mass Actuator (LPMA) was designed to attach to both a large space structure and a ground test model without modification. Previous designs lacked the power to perform in a terrestrial environment while other designs failed to produce the desired accelerations or frequency range for spaceflight applications. Thus, the design for a unique actuator was conceived and developed at NASA Langley Research Center. The basic design consists of four large mechanical parts (Mass, Upper Housing, Lower Housing, and Center Support) and numerous smaller supporting components including an accelerometer, encoder, and four drive motors. Fabrication personnel were included early in the design phase of the LPMA as part of an integrated manufacturing process to alleviate potential difficulties in machining an already challenging design. Operational testing of the LPMA demonstrated that the actuator is capable of various types of load functions.
Newman, Gregory A.; Commer, Michael
2006-11-17
Software that simulates and inverts electromagnetic field data for subsurface electrical properties (electrical conductivity) of geological media. The software treats data produced by a time-harmonic source field excitation arising from the following antenna geometries: loops and grounded bipoles, as well as point electric and magnetic dipoles. The inversion process is carried out using a non-linear conjugate gradient optimization scheme, which minimizes the misfit between field data and model data using a least-squares criterion. The software is an upgrade from the code NLCGCS_MP ver 1.0. The upgrade includes the following components: incorporation of new 1D field sourcing routines to more accurately simulate the 3D electromagnetic field for arbitrary geological media, and treatment of generalized finite-length transmitting-antenna geometries (antennas with vertical and horizontal component directions). In addition, the software has been upgraded to treat transverse anisotropy in electrical conductivity.
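A generic sketch of the non-linear conjugate-gradient (Polak-Ribiere) scheme such inversion codes use, minimizing a least-squares misfit phi(m) = 0.5·||G(m) - d||². For brevity the forward model G here is a stand-in random matrix; a real EM inversion would replace G@m and G.T@r with forward and adjoint field simulations, and the line search would be approximate rather than exact.

```python
import numpy as np

rng = np.random.default_rng(0)
G = rng.normal(size=(40, 20))          # stand-in linear forward operator
m_true = rng.normal(size=20)
d = G @ m_true                         # noise-free synthetic data

m = np.zeros(20)
g = G.T @ (G @ m - d)                  # gradient of the misfit
p = -g                                 # first direction: steepest descent
for _ in range(200):
    Gp = G @ p
    alpha = -(g @ p) / (Gp @ Gp)       # exact line search (quadratic misfit)
    m = m + alpha * p
    g_new = G.T @ (G @ m - d)
    beta = max(0.0, g_new @ (g_new - g) / (g @ g))  # Polak-Ribiere, restarted
    p = -g_new + beta * p
    g = g_new
    if np.linalg.norm(g) < 1e-10:
        break

misfit = 0.5 * np.linalg.norm(G @ m - d) ** 2
print("final misfit:", misfit)
```

The restart rule max(0, beta) guards against uphill directions when the problem is genuinely non-linear, which is why the Polak-Ribiere variant is a common choice for geophysical inversion.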
NASA Astrophysics Data System (ADS)
Pierce, Alan
This chapter deals with the physical and mathematical aspects of sound when the disturbances are, in some sense, small. Acoustics is usually concerned with small-amplitude phenomena, and consequently a linear description is usually applicable. Disturbances are governed by the properties of the medium in which they occur, and the governing equations are the equations of continuum mechanics, which apply equally to gases, liquids, and solids. These include the mass, momentum, and energy equations, as well as thermodynamic principles. The viscosity and thermal conduction enter into the versions of these equations that apply to fluids. Fluids of typical great interest are air and sea water, and consequently this chapter includes a summary of their relevant acoustic properties. The foundation is also laid for the consideration of acoustic waves in elastic solids, suspensions, bubbly liquids, and porous media.
Improved Equivalent Linearization Implementations Using Nonlinear Stiffness Evaluation
NASA Technical Reports Server (NTRS)
Rizzi, Stephen A.; Muravyov, Alexander A.
2001-01-01
This report documents two new implementations of equivalent linearization for solving geometrically nonlinear random vibration problems of complicated structures. The implementations are given the acronym ELSTEP, for "Equivalent Linearization using a STiffness Evaluation Procedure." Both implementations of ELSTEP are fundamentally the same in that they use a novel nonlinear stiffness evaluation procedure to numerically compute otherwise inaccessible nonlinear stiffness terms from commercial finite element programs. The commercial finite element program MSC/NASTRAN (NASTRAN) was chosen as the core of ELSTEP. The FORTRAN implementation calculates the nonlinear stiffness terms and performs the equivalent linearization analysis outside of NASTRAN. The Direct Matrix Abstraction Program (DMAP) implementation performs these operations within NASTRAN. Both provide nearly identical results. Within each implementation, two error minimization approaches for the equivalent linearization procedure are available - force and strain energy error minimization. Sample results for a simply supported rectangular plate are included to illustrate the analysis procedure.
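The force-error-minimization variant of equivalent linearization can be seen in a much simpler setting than ELSTEP's finite-element one. The sketch below (assumed parameter values, single degree of freedom) treats a Duffing oscillator x'' + c·x' + k·x + g·x³ = w(t) under white noise of two-sided PSD S0: Gaussian closure replaces the cubic term with an equivalent stiffness k_eq = k + 3·g·E[x²], while the linear system gives E[x²] = π·S0/(c·k_eq), so the two relations are iterated to a fixed point.

```python
import numpy as np

c, k, g, S0 = 0.2, 1.0, 0.5, 0.05   # damping, stiffness, cubic coeff, noise PSD

k_eq = k                             # start from the underlying linear stiffness
for _ in range(100):
    var = np.pi * S0 / (c * k_eq)    # stationary variance of the linear system
    k_eq = k + 3.0 * g * var         # force-error-minimizing equivalent stiffness

print(f"equivalent stiffness k_eq = {k_eq:.4f}, E[x^2] = {var:.4f}")
```

ELSTEP's contribution is obtaining the nonlinear stiffness terms (the analogue of g here) numerically from a commercial finite element code, where they are otherwise inaccessible; the fixed-point structure of the equivalent linearization itself is the same.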
A Methodology and Linear Model for System Planning and Evaluation.
ERIC Educational Resources Information Center
Meyer, Richard W.
1982-01-01
The two-phase effort at Clemson University to design a comprehensive library automation program is reported. Phase one was based on a version of IBM's business system planning methodology, and the second was based on a linear model designed to compare existing program systems to the phase one design. (MLW)
Software For Integer Programming
NASA Technical Reports Server (NTRS)
Fogle, F. R.
1992-01-01
Improved Exploratory Search Technique for Pure Integer Linear Programming Problems (IESIP) program optimizes objective function of variables subject to confining functions or constraints, using discrete optimization or integer programming. Enables rapid solution of problems up to 10 variables in size. Integer programming required for accuracy in modeling systems containing small number of components, distribution of goods, scheduling operations on machine tools, and scheduling production in general. Written in Borland's TURBO Pascal.
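At the small problem sizes quoted (up to 10 bounded variables), pure integer linear programs can even be illustrated by discrete exhaustive search. The toy below is not IESIP's exploratory search algorithm, just the class of problem it solves, with made-up coefficients: maximize 3x + 4y subject to x + 2y ≤ 8, 3x + y ≤ 9, x, y non-negative integers.

```python
import itertools

best, best_x = None, None
for x, y in itertools.product(range(10), range(10)):   # bounded integer grid
    if x + 2 * y <= 8 and 3 * x + y <= 9:              # feasibility check
        value = 3 * x + 4 * y                          # objective function
        if best is None or value > best:
            best, best_x = value, (x, y)

print("optimum", best, "at", best_x)
```

Note that rounding the continuous LP optimum would not necessarily give this answer, which is the accuracy point the abstract makes about modeling systems with small numbers of components.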
Simulation of a medical linear accelerator for teaching purposes.
Anderson, Rhys; Lamey, Michael; MacPherson, Miller; Carlone, Marco
2015-05-08
Simulation software for medical linear accelerators that can be used in a teaching environment was developed. The components of linear accelerators were modeled to first-order accuracy using analytical expressions taken from the literature. The expressions used constants that were empirically set such that a realistic response could be expected. These expressions were programmed in a MATLAB environment with a graphical user interface in order to produce an environment similar to that of linear accelerator service mode. The program was evaluated in a systematic fashion, where parameters affecting the clinical properties of medical linear accelerator beams were adjusted independently, and the effects on beam energy and dose rate recorded. These results confirmed that beam tuning adjustments could be simulated in a simple environment. Further, adjustment of service parameters over a large range was possible, and this allows the demonstration of linear accelerator physics in an environment accessible to both medical physicists and linear accelerator service engineers. In conclusion, a software tool, named SIMAC, was developed to improve the teaching of linear accelerator physics in a simulated environment. SIMAC performed in a similar manner to medical linear accelerators. The authors hope that this tool will be valuable as a teaching tool for medical physicists and linear accelerator service engineers.
NASA Technical Reports Server (NTRS)
Medan, R. T. (Editor); Magnus, A. E.; Sidwell, K. W.; Epton, M. A.
1981-01-01
Numerous applications of the PAN AIR computer program system are presented. PAN AIR is a user-oriented tool for analyzing and/or designing aerodynamic configurations in subsonic or supersonic flow using a technique generally referred to as a higher order panel method. Problems solved include simple wings in subsonic and supersonic flow, a wing-body in supersonic flow, a wing with a deflected flap in subsonic flow, design of two-dimensional and three-dimensional wings, an axisymmetric nacelle in supersonic flow, and a wing-canard-tail-nacelle-fuselage combination in supersonic flow.
Non-linear sequencing and cognizant failure
NASA Astrophysics Data System (ADS)
Gat, Erann
1999-01-01
Spacecraft are traditionally commanded using linear sequences of time-based commands. Linear sequences work fairly well, but they are difficult and expensive to generate, and are usually not capable of responding to contingencies. Any anomalous behavior while executing a linear sequence generally results in the spacecraft entering a safe mode. Critical sequences like orbit insertions which must be able to respond to faults without going into safe mode are particularly difficult to design and verify. The effort needed to generate command sequences can be reduced by extending the vocabulary of sequences to include more sophisticated control constructs. The simplest extensions are conditionals and loops. Adding these constructs would make a sequencing language look more or less like a traditional programming language or scripting language, and would come with all the difficulties associated with such a language. In particular, verifying the correctness of a sequence would be tantamount to verifying the correctness of a program, which is undecidable in general. We describe an extended vocabulary for non-linear sequencing based on the architectural notion of cognizant failure. A cognizant failure architecture is divided into components whose contract is to either achieve (or maintain) a certain condition, or report that they have failed to do so. Cognizant failure is an easier condition to verify than correctness, but it can provide high confidence in the safety of the spacecraft. Because cognizant failure inherently implies some kind of representation of the intent of an action, the system can respond to contingencies in more robust and general ways. We will describe an implemented non-linear sequencing system that is being flown on the NASA New Millennium Deep Space 1 Mission as part of the Remote Agent Experiment.
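The cognizant-failure contract described above can be sketched schematically (hypothetical code, not the Remote Agent implementation): each step either achieves its condition or reports that it failed, and the executive responds with a contingency in place rather than dropping into safe mode.

```python
ACHIEVED, FAILED = "achieved", "failed"

def achieve(action, check):
    """Run an action, then report honestly whether its condition now holds."""
    action()
    return ACHIEVED if check() else FAILED

def run_sequence(steps):
    """steps: list of (name, action, check, contingency_or_None) tuples."""
    log = []
    for name, action, check, contingency in steps:
        status = achieve(action, check)
        if status == FAILED and contingency is not None:
            contingency()                        # respond in place ...
            status = achieve(action, check)      # ... and retry once
        log.append((name, status))
        if status == FAILED:
            break                                # unrecoverable: stop safely
    return log

# Toy "spacecraft": a valve that opens only after a power cycle.
state = {"valve_open": False, "cycled": False}
def open_valve():  state["valve_open"] = state["cycled"]
def power_cycle(): state["cycled"] = True

log = run_sequence([("valve-open", open_valve,
                     lambda: state["valve_open"], power_cycle)])
print(log)
```

Because each step carries its goal condition explicitly, verifying the sequence reduces to verifying that failures are detected and reported, which is the weaker (and decidable in practice) property the abstract contrasts with full program correctness.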
Kliman, G.B.; Brynsvold, G.V.; Jahns, T.M.
1989-08-22
A winding and method of winding for a submersible linear pump for pumping liquid sodium are disclosed. The pump includes a stator having a central cylindrical duct preferably vertically aligned. The central vertical duct is surrounded by a system of coils in slots. These slots are interleaved with magnetic flux conducting elements, these magnetic flux conducting elements forming a continuous magnetic field conduction path along the stator. The central duct has placed therein a cylindrical magnetic conducting core, this core having a cylindrical diameter less than the diameter of the cylindrical duct. The core once placed to the duct defines a cylindrical interstitial pumping volume of the pump. This cylindrical interstitial pumping volume preferably defines an inlet at the bottom of the pump, and an outlet at the top of the pump. Pump operation occurs by static windings in the outer stator sequentially conveying toroidal fields from the pump inlet at the bottom of the pump to the pump outlet at the top of the pump. The winding apparatus and method of winding disclosed uses multiple slots per pole per phase with parallel winding legs on each phase equal to or less than the number of slots per pole per phase. The slot sequence per pole per phase is chosen to equalize the variations in flux density of the pump sodium as it passes into the pump at the pump inlet with little or no flux and acquires magnetic flux in passage through the pump to the pump outlet. 4 figs.
Kliman, Gerald B.; Brynsvold, Glen V.; Jahns, Thomas M.
1989-01-01
A winding and method of winding for a submersible linear pump for pumping liquid sodium is disclosed. The pump includes a stator having a central cylindrical duct preferably vertically aligned. The central vertical duct is surrounded by a system of coils in slots. These slots are interleaved with magnetic flux conducting elements, these magnetic flux conducting elements forming a continuous magnetic field conduction path along the stator. The central duct has placed therein a cylindrical magnetic conducting core, this core having a cylindrical diameter less than the diameter of the cylindrical duct. The core once placed to the duct defines a cylindrical interstitial pumping volume of the pump. This cylindrical interstitial pumping volume preferably defines an inlet at the bottom of the pump, and an outlet at the top of the pump. Pump operation occurs by static windings in the outer stator sequentially conveying toroidal fields from the pump inlet at the bottom of the pump to the pump outlet at the top of the pump. The winding apparatus and method of winding disclosed uses multiple slots per pole per phase with parallel winding legs on each phase equal to or less than the number of slots per pole per phase. The slot sequence per pole per phase is chosen to equalize the variations in flux density of the pump sodium as it passes into the pump at the pump inlet with little or no flux and acquires magnetic flux in passage through the pump to the pump outlet.
NASA Astrophysics Data System (ADS)
Nassisi, V.; Delle Side, D.
2017-02-01
Nowadays, the employment and development of fast current pulses require sophisticated systems to perform measurements. Rogowski coils are used to diagnose cylindrical shaped beams; therefore, they are designed and built with a toroidal structure. Recently, to perform experiments of radiofrequency biophysical stresses, flat transmission lines have been developed. Therefore, in this work we developed a linear Rogowski coil to detect current pulses inside flat conductors. The system is first approached by means of transmission line theory. We found that, if the pulse width to be diagnosed is comparable with the propagation time of the signal in the detector, it is necessary to impose a uniform current as input pulse, or to use short coils. We further analysed the effect of the resistance of the coil and the influence of its magnetic properties. As a result, the device we developed is able to record pulses lasting for some hundreds of nanoseconds, depending on the inductance, load impedance, and resistance of the coil. Furthermore, its response is characterized by a sub-nanosecond rise time (~100 ps). The attenuation coefficient depends mainly on the turn number of the coil, while the fidelity of the response depends both on the magnetic core characteristics and on the current distribution along the plane conductors.
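A numerical sketch of the basic Rogowski relation underlying such measurements (generic assumed values of M, RC, and the pulse shape, not the coil in the paper): the coil output is v(t) = M·dI/dt, and the current is recovered by integration; a passive RC integrator recovers I up to a droop with time constant RC, which is why the pulse length matters relative to the coil and integrator time constants.

```python
import numpy as np

M  = 1.0e-8          # mutual inductance, H (assumed)
RC = 1.0e-4          # integrator time constant, s (assumed)
dt = 1.0e-9
t = np.arange(0.0, 2.0e-6, dt)

# Trapezoidal 1 kA current pulse: 100 ns rise, flat top, 100 ns fall.
I = np.interp(t, [0, 1e-7, 9e-7, 1e-6, 2e-6], [0, 1e3, 1e3, 0, 0])

v = M * np.gradient(I, dt)            # ideal coil output, v = M dI/dt

# Leaky RC integration of the coil voltage (explicit Euler):
vout = np.zeros_like(t)
for n in range(1, t.size):
    vout[n] = vout[n - 1] + dt * (v[n - 1] - vout[n - 1]) / RC

I_rec = (RC / M) * vout               # recovered current
mid = np.argmin(np.abs(t - 5.0e-7))   # index at the middle of the flat top
print(f"recovered flat-top current ~ {I_rec[mid]:.0f} A")
```

With RC much longer than the pulse the droop is small (here well under 1% over the flat top); shrinking RC toward the pulse length makes the droop, and hence the fidelity limit discussed in the abstract, immediately visible.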
Meisner, John W.; Moore, Robert M.; Bienvenue, Louis L.
1985-03-19
Electromagnetic linear induction pump for liquid metal which includes a unitary pump duct. The duct comprises two substantially flat parallel spaced-apart wall members, one being located above the other and two parallel opposing side members interconnecting the wall members. Located within the duct are a plurality of web members interconnecting the wall members and extending parallel to the side members whereby the wall members, side members and web members define a plurality of fluid passageways, each of the fluid passageways having substantially the same cross-sectional flow area. Attached to an outer surface of each side member is an electrically conductive end bar for the passage of an induced current therethrough. A multi-phase, electrical stator is located adjacent each of the wall members. The duct, stators, and end bars are enclosed in a housing which is provided with an inlet and outlet in fluid communication with opposite ends of the fluid passageways in the pump duct. In accordance with a preferred embodiment, the inlet and outlet includes a transition means which provides for a transition from a round cross-sectional flow path to a substantially rectangular cross-sectional flow path defined by the pump duct.
Generalized Linear Covariance Analysis
NASA Technical Reports Server (NTRS)
Carpenter, James R.; Markley, F. Landis
2014-01-01
This talk presents a comprehensive approach to filter modeling for generalized covariance analysis of both batch least-squares and sequential estimators. We review and extend in two directions the results of prior work that allowed for partitioning of the state space into "solve-for" and "consider" parameters, accounted for differences between the formal values and the true values of the measurement noise, process noise, and a priori solve-for and consider covariances, and explicitly partitioned the errors into subspaces containing only the influence of the measurement noise, process noise, and solve-for and consider covariances. In this work, we explicitly add sensitivity analysis to this prior work, and relax an implicit assumption that the batch estimator's epoch time occurs prior to the definitive span. We also apply the method to an integrated orbit and attitude problem, in which gyro and accelerometer errors, though not estimated, influence the orbit determination performance. We illustrate our results using two graphical presentations, which we call the "variance sandpile" and the "sensitivity mosaic," and we compare the linear covariance results to confidence intervals associated with ensemble statistics from a Monte Carlo analysis.
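The closing cross-check, comparing linear covariance results with Monte Carlo ensemble statistics, can be shown in miniature (an illustrative two-state linear system with assumed matrices, not the paper's orbit/attitude problem): propagate the covariance linearly via P ← F·P·Fᵀ + Q and compare against the sample covariance of an ensemble driven by the same dynamics and noise; they should agree to within sampling error.

```python
import numpy as np

rng = np.random.default_rng(7)
F = np.array([[1.0, 0.1],
              [0.0, 1.0]])                 # state transition matrix
Q = np.diag([1e-4, 1e-3])                  # process noise covariance
P = np.eye(2) * 0.01                       # initial error covariance
Lq = np.linalg.cholesky(Q)                 # noise shaping factor

n = 20000
X = rng.multivariate_normal([0.0, 0.0], P, size=n)   # ensemble of initial errors
for _ in range(50):
    P = F @ P @ F.T + Q                              # linear covariance analysis
    X = X @ F.T + rng.normal(size=(n, 2)) @ Lq.T     # Monte Carlo propagation

P_mc = np.cov(X.T)                                   # ensemble statistics
print("linear P:\n", P)
print("Monte Carlo P:\n", P_mc)
```

For a truly linear system the two agree by construction; in the nonlinear orbit/attitude setting the comparison instead measures how far linear covariance analysis can be trusted.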
Zweig, George
2015-08-01
An active, three-dimensional, short-wavelength model of cochlear mechanics is derived from an older, one-dimensional, long-wavelength model containing time-delay forces. Remarkably, the long-wavelength model with nonlocal temporal interactions behaves like a short-wavelength model with instantaneous interactions. The cochlear oscillators are driven both by the pressure and its time derivative, the latter presumably a proxy for forces contributed by outer hair cells. The admittance in the short-wavelength region is used to find an integral representation of the transfer function valid for all wavelengths. There are only two free parameters: the pole position in the complex frequency plane of the admittance, and the slope of the transfer-function phase at low frequencies. The new model predicts a dip in amplitude and a corresponding rapid drop in phase, past the peak of the traveling wave. Linear models may be compared by their wavelengths, and if they have the same dimension, by the singularity structure of their admittances.
Berkeley Proton Linear Accelerator
DOE R&D Accomplishments Database
Alvarez, L. W.; Bradner, H.; Franck, J.; Gordon, H.; Gow, J. D.; Marshall, L. C.; Oppenheimer, F. F.; Panofsky, W. K. H.; Richman, C.; Woodyard, J. R.
1953-10-13
A linear accelerator, which increases the energy of protons from a 4 Mev Van de Graaff injector, to a final energy of 31.5 Mev, has been constructed. The accelerator consists of a cavity 40 feet long and 39 inches in diameter, excited at resonance in a longitudinal electric mode with a radio-frequency power of about 2.2 x 10^6 watts peak at 202.5 mc. Acceleration is made possible by the introduction of 46 axial "drift tubes" into the cavity, which is designed such that the particles traverse the distance between the centers of successive tubes in one cycle of the r.f. power. The protons are longitudinally stable as in the synchrotron, and are stabilized transversely by the action of converging fields produced by focusing grids. The electrical cavity is constructed like an inverted airplane fuselage and is supported in a vacuum tank. Power is supplied by 9 high powered oscillators fed from a pulse generator of the artificial transmission line type.
Nassisi, V; Delle Side, D
2017-02-01
Nowadays, the employment and development of fast current pulses require sophisticated systems to perform measurements. Rogowski coils are used to diagnose cylindrical shaped beams; therefore, they are designed and built with a toroidal structure. Recently, to perform experiments of radiofrequency biophysical stresses, flat transmission lines have been developed. Therefore, in this work we developed a linear Rogowski coil to detect current pulses inside flat conductors. The system is first approached by means of transmission line theory. We found that, if the pulse width to be diagnosed is comparable with the propagation time of the signal in the detector, it is necessary to impose a uniform current as input pulse, or to use short coils. We further analysed the effect of the resistance of the coil and the influence of its magnetic properties. As a result, the device we developed is able to record pulses lasting for some hundreds of nanoseconds, depending on the inductance, load impedance, and resistance of the coil. Furthermore, its response is characterized by a sub-nanosecond rise time (∼100 ps). The attenuation coefficient depends mainly on the turn number of the coil, while the fidelity of the response depends both on the magnetic core characteristics and on the current distribution along the plane conductors.
Linear optical properties of solids within the full-potential linearized augmented planewave method
NASA Astrophysics Data System (ADS)
Ambrosch-Draxl, Claudia; Sofo, Jorge O.
2006-07-01
We present a scheme for the calculation of linear optical properties by the all-electron full-potential linearized augmented planewave (LAPW) method. A summary of the theoretical background for the derivation of the dielectric tensor within the random-phase approximation is provided. The momentum matrix elements are evaluated in detail for the LAPW basis, and the interband as well as the intraband contributions to the dielectric tensor are given. As an example the formalism is applied to aluminum. The program is available as a module within the WIEN2k code.
Linearly convergent inexact proximal point algorithm for minimization. Revision 1
Zhu, C.
1993-08-01
In this paper, we propose a linearly convergent inexact PPA for minimization, where the inner loop stops when the relative reduction of the residue (defined as the objective value minus the optimal value) of the inner loop subproblem meets a preassigned constant. This inner loop stopping criterion can be achieved in a fixed number of iterations if the inner loop algorithm has a linear rate on the regularized subproblems. Therefore the algorithm is able to avoid the computationally expensive process of solving the inner loop subproblems exactly or asymptotically accurately, a process required by most of the other linearly convergent PPAs. As applications of this inexact PPA, we develop linearly convergent iteration schemes for minimizing functions with singular Hessian matrices, and for solving hemiquadratic extended linear-quadratic programming problems. We also prove that Correa-Lemaréchal's "implementable form" of PPA converges linearly under mild conditions.
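As an illustration of the stopping rule described above, the following is a minimal sketch (with an invented objective; the function names and the fixed inner-iteration budget are assumptions for illustration, not the paper's algorithm parameters) of an inexact PPA in which each regularized subproblem is solved by a fixed number of gradient steps rather than exactly:

```python
import numpy as np

# Hypothetical objective with a singular Hessian: f(x) = (x0 + x1 - 2)^2.
# Its minimizers form a whole line, but the proximal regularization makes
# each inner subproblem strongly convex.
def f(x):
    return (x[0] + x[1] - 2.0) ** 2

def grad_f(x):
    g = 2.0 * (x[0] + x[1] - 2.0)
    return np.array([g, g])

def inexact_ppa(x0, lam=1.0, outer=30, inner=10, step=0.2):
    """Inexact PPA: each subproblem min f(x) + ||x - x_k||^2 / (2*lam)
    is solved by a fixed number of gradient steps (the inexact inner
    loop), instead of to full accuracy."""
    x = np.asarray(x0, dtype=float)
    for _ in range(outer):
        xk = x.copy()
        for _ in range(inner):            # inner loop: fixed iteration budget
            g = grad_f(x) + (x - xk) / lam
            x = x - step * g
    return x

x = inexact_ppa([3.0, 0.0])
print(f(x))  # objective value close to 0
```

Each outer iteration contracts the residue of the regularized subproblem by a fixed factor, so a constant inner budget suffices, which is the computational point made in the abstract.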
A Guide to Programed Instruction.
ERIC Educational Resources Information Center
Lysaught, Jerome P.; Williams, Clarence M.
This is a guide to the preparation of instructional programs, designed for the use of teachers and training specialists. The text briefly covers the areas of learning theory, selection of objectives, linear and branching programs, development of a program, program review, evaluation, applications, and implications. Concepts are illustrated with…
Linear Collider Physics Resource Book Snowmass 2001
Ronan , M.T.
2001-06-01
The American particle physics community can look forward to a well-conceived and vital program of experimentation for the next ten years, using both colliders and fixed target beams to study a wide variety of pressing questions. Beyond 2010, these programs will be reaching the end of their expected lives. The CERN LHC will provide an experimental program of the first importance. But beyond the LHC, the American community needs a coherent plan. The Snowmass 2001 Workshop and the deliberations of the HEPAP subpanel offer a rare opportunity to engage the full community in planning our future for the next decade or more. A major accelerator project requires a decade from the beginning of an engineering design to the receipt of the first data. So it is now time to decide whether to begin a new accelerator project that will operate in the years soon after 2010. We believe that the world high-energy physics community needs such a project. With the great promise of discovery in physics at the next energy scale, and with the opportunity for the uncovering of profound insights, we cannot allow our field to contract to a single experimental program at a single laboratory in the world. We believe that an e⁺e⁻ linear collider is an excellent choice for the next major project in high-energy physics. Applying experimental techniques very different from those used at hadron colliders, an e⁺e⁻ linear collider will allow us to build on the discoveries made at the Tevatron and the LHC, and to add a level of precision and clarity that will be necessary to understand the physics of the next energy scale. It is not necessary to anticipate specific results from the hadron collider programs to argue for constructing an e⁺e⁻ linear collider; in any scenario that is now discussed, physics will benefit from the new information that e⁺e⁻ experiments can provide. This last point merits further emphasis. If a new accelerator could be designed and
Linear Algebraic Method for Non-Linear Map Analysis
Yu,L.; Nash, B.
2009-05-04
We present a newly developed method to analyze some non-linear dynamics problems, such as the Hénon map, using a matrix analysis method from linear algebra. Choosing the Hénon map as an example, we analyze the spectral structure, the tune-amplitude dependence, the variation of tune and amplitude during the particle motion, etc., using the method of Jordan decomposition, which is widely used in conventional linear algebra.
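The linearization step behind such a spectral analysis can be sketched as follows (a hedged illustration using the classic dissipative Hénon map and NumPy's eigendecomposition; this is not the authors' code and omits the full Jordan-decomposition machinery):

```python
import numpy as np

# Classic Henon map: x' = 1 - a*x^2 + y, y' = b*x. We linearize about its
# fixed point and examine the spectrum of the Jacobian, in the spirit of
# the matrix-analysis approach described in the abstract.
a, b = 1.4, 0.3

# Fixed point: x* = 1 - a x*^2 + b x*  =>  a x*^2 + (1 - b) x* - 1 = 0
xstar = (-(1 - b) + np.sqrt((1 - b) ** 2 + 4 * a)) / (2 * a)
ystar = b * xstar

# Jacobian of the map evaluated at the fixed point
J = np.array([[-2 * a * xstar, 1.0],
              [b,              0.0]])

eigvals, eigvecs = np.linalg.eig(J)
print("fixed point:", xstar, ystar)
print("eigenvalues:", eigvals)  # both real, one |lambda| > 1: a saddle
```

For an area-preserving (accelerator-style) map the eigenvalues would instead lie on the unit circle, and their phase gives the linear tune; the eigenvector basis plays the role of the decomposition used in the paper.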
NASA Technical Reports Server (NTRS)
Holloway, Sidney E., III; Crossley, Edward A.; Miller, James B.; Jones, Irby W.; Davis, C. Calvin; Behun, Vaughn D.; Goodrich, Lewis R., Sr.
1995-01-01
Linear proof-mass actuator (LPMA) is friction-driven linear mass actuator capable of applying controlled force to structure in outer space to damp out oscillations. Capable of high accelerations and provides smooth, bidirectional travel of mass. Design eliminates gears and belts. LPMA strong enough to be used terrestrially where linear actuators needed to excite or damp out oscillations. High flexibility designed into LPMA by varying size of motors, mass, and length of stroke, and by modifying control software.
Performance of the SLAC Linear Collider klystrons
Allen, M.A.; Fowkes, W.R.; Koontz, R.F.; Schwarz, H.D.; Seeman, J.T.; Vlieks, A.E.
1987-01-01
There are now 200 new, high power 5045 klystrons installed on the two-mile Stanford Linear Accelerator. Peak power per klystron averages over 63 MW. Average energy contribution is above 240 MeV per station. Electron beam energy has been measured as high as 53 GeV. Energy instability due to klystron malfunction is less than 0.2%. The installed klystrons have logged over one million operating hours with close to 20,000 klystron hours cumulative operating time between failures. Data are being accumulated on klystron operation and failure modes with failure signatures starting to become apparent. To date, no wholesale failure modes have surfaced that would impair the SLAC Linear Collider (SLC) program.
Plasma detachment in linear devices
NASA Astrophysics Data System (ADS)
Ohno, N.
2017-03-01
Plasma detachment research in linear devices, sometimes called divertor plasma simulators, is reviewed. Pioneering works exploring the concept of plasma detachment were conducted in linear devices. Linear devices have contributed greatly to the basic understanding of plasma detachment such as volume plasma recombination processes, detached plasma structure associated with particle and energy transport, and other related issues including enhancement of convective plasma transport, dynamic response of plasma detachment, plasma flow reversal, and magnetic field effect. The importance of plasma detachment research using linear devices will be highlighted aimed at the design of future DEMO.
Quantization of general linear electrodynamics
Rivera, Sergio; Schuller, Frederic P.
2011-03-15
General linear electrodynamics allow for an arbitrary linear constitutive relation between the field strength 2-form and induction 2-form density if crucial hyperbolicity and energy conditions are satisfied, which render the theory predictive and physically interpretable. Taking into account the higher-order polynomial dispersion relation and associated causal structure of general linear electrodynamics, we carefully develop its Hamiltonian formulation from first principles. Canonical quantization of the resulting constrained system then results in a quantum vacuum which is sensitive to the constitutive tensor of the classical theory. As an application we calculate the Casimir effect in a birefringent linear optical medium.
BMDO photovoltaics program overview
NASA Technical Reports Server (NTRS)
Caveny, Leonard H.; Allen, Douglas M.
1994-01-01
This is an overview of the Ballistic Missile Defense Organization (BMDO) Photovoltaic Program. Areas discussed are: (1) BMDO advanced Solar Array program; (2) Brilliant Eyes type satellites; (3) Electric propulsion; (4) Contractor Solar arrays; (5) Ioffe Concentrator and Cell development; (6) Entech linear mini-dome concentrator; and (7) Flight test update/plans.
2011-04-01
Comparison of Performance Effectiveness of Linear Control Algorithms Developed for a Simplified Ground Vehicle Suspension System, by Ross Brown, Motile Robotics, Inc., research contractor at U.S…
Suppressing Electron Cloud in Future Linear Colliders
Pivi, M; Kirby, R.E.; Raubenheimer, T.O.; Le Pimpec, F.; /PSI, Villigen
2005-05-27
Any accelerator circulating positively charged beams can suffer from a build-up of an electron cloud (EC) in the beam pipe. The cloud develops through ionization of residual gases, synchrotron radiation and secondary electron emission and, when severe, can cause instability, emittance blow-up or loss of the circulating beam. The electron cloud is potentially a luminosity limiting effect for both the Large Hadron Collider (LHC) and the International Linear Collider (ILC). For the ILC positron damping ring, the development of the electron cloud must be suppressed. This paper discusses the state-of-the-art of the ongoing SLAC and international R&D program to study potential remedies.
Linear algebra and image processing
NASA Astrophysics Data System (ADS)
Allali, Mohamed
2010-09-01
We use the computing technology digital image processing (DIP) to enhance the teaching of linear algebra so as to make the course more visual and interesting. Certainly, this visual approach by using technology to link linear algebra to DIP is interesting and unexpected to both students as well as many faculty.
Linear Algebra and Image Processing
ERIC Educational Resources Information Center
Allali, Mohamed
2010-01-01
We use the computing technology digital image processing (DIP) to enhance the teaching of linear algebra so as to make the course more visual and interesting. Certainly, this visual approach by using technology to link linear algebra to DIP is interesting and unexpected to both students as well as many faculty. (Contains 2 tables and 11 figures.)
Spatial Processes in Linear Ordering
ERIC Educational Resources Information Center
von Hecker, Ulrich; Klauer, Karl Christoph; Wolf, Lukas; Fazilat-Pour, Masoud
2016-01-01
Memory performance in linear order reasoning tasks (A > B, B > C, C > D, etc.) shows quicker, and more accurate responses to queries on wider (AD) than narrower (AB) pairs on a hypothetical linear mental model (A -- B -- C -- D). While indicative of an analogue representation, research so far did not provide positive evidence for spatial…
NASA Technical Reports Server (NTRS)
Johnson, Bruce G.; Gerver, Michael J.; Hawkey, Timothy J.; Fenn, Ralph C.
1993-01-01
Improved linear actuator comprises air slide and linear electric motor. Unit exhibits low friction, low backlash, and more nearly even acceleration. Used in machinery in which positions, velocities, and accelerations must be carefully controlled and/or vibrations must be suppressed.
Computer modeling of batteries from non-linear circuit elements
NASA Technical Reports Server (NTRS)
Waaben, S.; Federico, J.; Moskowitz, I.
1983-01-01
A simple non-linear circuit model for battery behavior is given. It is based on time-dependent features of the well-known PIN charge-storage diode, whose behavior is described by equations similar to those associated with electrochemical cells. The circuit simulation computer program ADVICE was used to predict non-linear response from a topological description of the battery analog built from ADVICE components. By a reasonable choice of one set of parameters, the circuit accurately simulates a wide spectrum of measured non-linear battery responses to within a few millivolts.
Linear control design for guaranteed stability of uncertain linear systems
NASA Technical Reports Server (NTRS)
Yedavalli, R. K.
1986-01-01
In this paper, a linear control design algorithm based on recently developed elemental perturbation bounds is presented for a simple second-order linear uncertain system satisfying matching conditions. The proposed method is compared with the Guaranteed Cost Control (GCC), Multistep Guaranteed Cost Control (MGCC), and Matching Condition (MC) methods and is shown to give guaranteed stability with smaller control gains than some of the existing methods for the example considered.
Practical Session: Simple Linear Regression
NASA Astrophysics Data System (ADS)
Clausel, M.; Grégoire, G.
2014-12-01
Two exercises are proposed to illustrate simple linear regression. The first one is based on the famous Galton data set on heredity. We use the lm R command and get coefficient estimates, the standard error of the error, R2, residuals… In the second example, devoted to data related to the vapor tension of mercury, we fit a simple linear regression, predict values, and anticipate multiple linear regression. This practical session is an excerpt from practical exercises proposed by A. Dalalyan at ENPC (see Exercises 1 and 2 of http://certis.enpc.fr/~dalalyan/Download/TP_ENPC_4.pdf).
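The same quantities the session reads off from R's lm() can be sketched in Python on synthetic data (the heights below are simulated with a known slope, not Galton's actual measurements):

```python
import numpy as np

# Minimal OLS sketch: fit y = b0 + b1*x and recover coefficient
# estimates, the residual standard error, and R^2.
rng = np.random.default_rng(0)
x = rng.uniform(60, 75, size=200)             # e.g. parent heights (in)
y = 20.0 + 0.7 * x + rng.normal(0, 2, 200)    # child heights, known truth

X = np.column_stack([np.ones_like(x), x])     # design matrix [1, x]
beta, *_ = np.linalg.lstsq(X, y, rcond=None)  # least-squares coefficients

resid = y - X @ beta
n, p = X.shape
sigma = np.sqrt(resid @ resid / (n - p))      # residual standard error
r2 = 1 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))
print(beta, sigma, r2)
```

With 200 observations the slope estimate lands close to the true 0.7 and sigma close to the simulated noise level of 2, mirroring the coefficient table lm() prints.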
Linear feature selection with applications
NASA Technical Reports Server (NTRS)
Decell, H. P., Jr.; Guseman, L. F., Jr. (Principal Investigator)
1979-01-01
Several ways in which feature selection techniques were used in LACIE are discussed. In all cases, the methods require some a priori information and assumptions; in most, the classification procedure (Bayes optimal) was chosen in advance. The transformations used for dimensionality reduction are linear, that is, the variables in feature space are always linear combinations of the original measurements. Several numerically tractable criteria developed for LACIE, which provide information about the probability of misclassification, are discussed. Recent results on linear feature selection techniques are included. Their use in LACIE is discussed. Related open questions are mentioned.
Manipulator control by exact linearization
NASA Technical Reports Server (NTRS)
Kruetz, K.
1987-01-01
Comments on the application to rigid link manipulators of geometric control theory, resolved acceleration control, operational space control, and nonlinear decoupling theory are given, and the essential unity of these techniques for externally linearizing and decoupling end effector dynamics is discussed. Exploiting the fact that the mass matrix of a rigid link manipulator is positive definite, a consequence of rigid link manipulators belonging to the class of natural physical systems, it is shown that a necessary and sufficient condition for a locally externally linearizing and output decoupling feedback law to exist is that the end effector Jacobian matrix be nonsingular. Furthermore, this linearizing feedback is easy to produce.
Precision magnetic suspension linear bearing
NASA Technical Reports Server (NTRS)
Trumper, David L.; Queen, Michael A.
1992-01-01
We have presented the design and analyzed the electromechanics of a linear motor suitable for independently controlling two suspension degrees of freedom. This motor, at least on paper, meets the requirements for driving an X-Y stage of 10 kg mass with about 4 m/s² acceleration, with travel of several hundred millimeters in X and Y, and with reasonable power dissipation. A conceptual design for such a stage is presented. The theoretical feasibility of linear and planar bearings using single or multiple magnetic suspension linear motors is demonstrated.
Linear Bregman algorithm implemented in parallel GPU
NASA Astrophysics Data System (ADS)
Li, Pengyan; Ke, Jue; Sui, Dong; Wei, Ping
2015-08-01
At present, most compressed sensing (CS) algorithms have poor converging speed and are thus difficult to run on a PC. To deal with this issue, we use a parallel GPU to implement a broadly used compressed sensing algorithm, the linear Bregman algorithm. The linear iterative Bregman algorithm is a reconstruction algorithm proposed by Osher and Cai. Compared with other CS reconstruction algorithms, the linear Bregman algorithm involves only vector and matrix multiplication and a thresholding operation, and is simpler and more efficient to program. We use C as the development language and adopt CUDA (Compute Unified Device Architecture) as the parallel computing architecture. In this paper, we compare the parallel Bregman algorithm with a traditional CPU implementation of the Bregman algorithm. In addition, we also compare the parallel Bregman algorithm with other CS reconstruction algorithms, such as the OMP and TwIST algorithms. Compared with these two algorithms, the results of this paper show that the parallel Bregman algorithm needs less time and thus is more convenient for real-time object reconstruction, which is important given the fast-growing demand for information technology.
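A serial sketch of the linearized Bregman iteration (CPU NumPy, not the paper's CUDA implementation; the problem sizes and parameters are invented for illustration) shows why it maps well to a GPU: each pass is just matrix-vector products plus an elementwise soft-threshold:

```python
import numpy as np

# Linearized Bregman iteration for basis pursuit: min ||x||_1 s.t. Ax = b.
def shrink(v, mu):
    """Elementwise soft-thresholding."""
    return np.sign(v) * np.maximum(np.abs(v) - mu, 0.0)

def linearized_bregman(A, b, mu=5.0, delta=None, iters=3000):
    if delta is None:
        delta = 1.0 / np.linalg.norm(A, 2) ** 2   # conservative step size
    v = np.zeros(A.shape[1])
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        v += A.T @ (b - A @ x)     # matrix-vector products only
        x = delta * shrink(v, mu)  # plus one thresholding pass
    return x

rng = np.random.default_rng(1)
n, m, k = 200, 80, 5               # signal size, measurements, sparsity
A = rng.normal(size=(m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = 3 * rng.normal(size=k)
b = A @ x_true

x_rec = linearized_bregman(A, b)
print(np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true))
```

Both the matvec and the threshold are embarrassingly parallel over components, which is exactly the structure a CUDA port exploits.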
Estimating population trends with a linear model
Bart, J.; Collins, B.; Morrison, R.I.G.
2003-01-01
We describe a simple and robust method for estimating trends in population size. The method may be used with Breeding Bird Survey data, aerial surveys, point counts, or any other program of repeated surveys at permanent locations. Surveys need not be made at each location during each survey period. The method differs from most existing methods in being design based, rather than model based. The only assumptions are that the nominal sampling plan is followed and that sample size is large enough for use of the t-distribution. Simulations based on two bird data sets from natural populations showed that the point estimate produced by the linear model was essentially unbiased even when counts varied substantially and 25% of the complete data set was missing. The estimating-equation approach, often used to analyze Breeding Bird Survey data, performed similarly on one data set but had substantial bias on the second data set, in which counts were highly variable. The advantages of the linear model are its simplicity, flexibility, and that it is self-weighting. A user-friendly computer program to carry out the calculations is available from the senior author.
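A minimal simulation in the spirit of the method (illustrative only: the counts, sites, and trend are invented, and this is ordinary least squares on yearly means with a t-based interval, not the authors' exact design-based estimator or their program):

```python
import numpy as np

# Simulated repeated surveys at permanent locations with a known trend.
rng = np.random.default_rng(2)
years = np.arange(2000.0, 2011.0)            # 11 survey years
n_sites = 40
site_effects = rng.normal(0, 5, n_sites)     # permanent-location differences
true_trend = -1.5                            # simulated decline per year

counts = (100 + site_effects[:, None]
          + true_trend * (years - years[0])
          + rng.normal(0, 3, (n_sites, years.size)))

mean_counts = counts.mean(axis=0)            # aggregate counts per year
slope, intercept = np.polyfit(years, mean_counts, 1)

resid = mean_counts - (slope * years + intercept)
se = np.sqrt(resid @ resid / (years.size - 2)
             / ((years - years.mean()) ** 2).sum())
half = 2.262 * se                            # t quantile 0.975, df = 9
print(f"trend = {slope:.2f} +/- {half:.2f} per year")
```

Because the same locations are surveyed each year, the constant site effects drop out of the slope, which is part of why a simple linear fit is hard to beat in this setting.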
Automating linear accelerator quality assurance
Eckhause, Tobias; Thorwarth, Ryan; Moran, Jean M.; Al-Hallaq, Hania; Farrey, Karl; Ritter, Timothy; DeMarco, John; Pawlicki, Todd; Kim, Gwe-Ya; Popple, Richard; Sharma, Vijeshwar; Park, SungYong; Perez, Mario; Booth, Jeremy T.
2015-10-15
Purpose: The purpose of this study was twofold. One purpose was to develop an automated, streamlined quality assurance (QA) program for use by multiple centers. The second purpose was to evaluate machine performance over time for multiple centers using linear accelerator (Linac) log files and electronic portal images. The authors sought to evaluate variations in Linac performance to establish a reference for other centers. Methods: The authors developed analytical software tools for a QA program using both log files and electronic portal imaging device (EPID) measurements. The first tool is a general analysis tool which can read and visually represent data in the log file. This tool, which can be used to automatically analyze patient treatment or QA log files, examines the files for Linac deviations which exceed thresholds. The second set of tools consists of a test suite of QA fields, a standard phantom, and software to collect information from the log files on deviations from the expected values. The test suite was designed to focus on the mechanical tests of the Linac to include jaw, MLC, and collimator positions during static, IMRT, and volumetric modulated arc therapy delivery. A consortium of eight institutions delivered the test suite at monthly or weekly intervals on each Linac using a standard phantom. The behavior of various components was analyzed for eight TrueBeam Linacs. Results: For the EPID and trajectory log file analysis, all observed deviations which exceeded established thresholds for Linac behavior resulted in a beam hold off. In the absence of an interlock-triggering event, the maximum observed log file deviations between the expected and actual component positions (such as MLC leaves) varied from less than 1% to 26% of published tolerance thresholds. The maximum and standard deviations of the variations due to gantry sag, collimator angle, jaw position, and MLC positions are presented. Gantry sag among Linacs was 0.336 ± 0.072 mm. The
Linearity Testing of Photovoltaic Cells
Pinegar, S.; Nalley, D.; Emery, K.
2006-01-01
Photovoltaic devices are rated in terms of their power output or efficiency with respect to a specific spectrum, total irradiance, and temperature. In order to rate photovoltaic devices, a reference detector whose response is linear with total irradiance is needed. This report documents a procedure to determine if a detector is linear over the irradiance range of interest. Testing the short-circuit current versus the total irradiance is done by illuminating a reference cell candidate with two lamps that are fitted with programmable filter wheels. The purpose is to reject nonlinear samples, as determined by national and international standards, from being used as primary reference cells. A calibrated linear reference cell tested by the two-lamp method yields a linear result.
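The acceptance logic behind such a two-lamp superposition test can be sketched as follows (the current values and the 0.5% tolerance are illustrative assumptions, not figures from the standards): a cell is linear if its short-circuit current under both lamps equals the sum of its currents under each lamp alone, within tolerance.

```python
# Superposition check: for a linear detector, I(A+B) = I(A) + I(B).
def superposition_error(i_a, i_b, i_both):
    """Relative departure from additivity of the short-circuit current."""
    return abs(i_both - (i_a + i_b)) / (i_a + i_b)

# Hypothetical measurements in amperes: lamp A, lamp B, both lamps.
i_a, i_b, i_both = 0.1002, 0.1498, 0.2493
err = superposition_error(i_a, i_b, i_both)
print(f"{err:.3%}")  # accept as linear if below, say, 0.5%
```

The programmable filter wheels mentioned in the abstract allow this check to be repeated at many irradiance levels across the range of interest.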
Linear immunoglobulin A bullous dermatosis.
Fortuna, Giulio; Marinkovich, M Peter
2012-01-01
Linear immunoglobulin A (IgA) bullous dermatosis, also known as linear IgA disease, is an autoimmune mucocutaneous disorder characterized by subepithelial bullae, with IgA autoantibodies directed against several different antigens in the basement membrane zone. Its immunopathologic characteristic resides in the presence of a continuous linear IgA deposit along the basement membrane zone, which is clearly visible on direct immunofluorescence. This disorder shows different clinical features and distribution when adult-onset of linear IgA disease is compared with childhood-onset. Diagnosis is achieved via clinical, histopathologic, and immunopathologic examinations. Two common therapies are dapsone and sulfapyridine, which reduce the inflammatory response and achieve disease remission in a variable period of time.
Acoustic emission linear pulse holography
Collins, H.D.; Busse, L.J.; Lemon, D.K.
1983-10-25
This device relates to the concept of and means for performing Acoustic Emission Linear Pulse Holography, which combines the advantages of linear holographic imaging and Acoustic Emission into a single non-destructive inspection system. This unique system produces a chronological, linear holographic image of a flaw by utilizing the acoustic energy emitted during crack growth. The innovation is the concept of utilizing the crack-generated acoustic emission energy to generate a chronological series of images of a growing crack by applying linear, pulse holographic processing to the acoustic emission data. The process is implemented by placing on a structure an array of piezoelectric sensors (typically 16 or 32 of them) near the defect location. A reference sensor is placed between the defect and the array.
Origin of nonsaturating linear magnetoresistivity
NASA Astrophysics Data System (ADS)
Kisslinger, Ferdinand; Ott, Christian; Weber, Heiko B.
2017-01-01
The observation of nonsaturating classical linear magnetoresistivity has been an enigmatic phenomenon in solid-state physics. We present a study of a two-dimensional ohmic conductor, including the local Hall effect and a self-consistent consideration of the environment. An equivalent-circuit scheme delivers a simple and convincing argument why the magnetoresistivity is linear in a strong magnetic field, provided that the current and the biasing electric field are misaligned by a nonlocal mechanism. A finite-element model of a two-dimensional conductor is suited to display the situations that create such deviating currents. Besides edge effects next to electrodes, charge carrier density fluctuations efficiently generate this effect. However, mobility fluctuations, which have frequently been related to linear magnetoresistivity, are barely relevant. Despite its rare observation, linear magnetoresistivity is rather the rule than the exception in a regime of low charge carrier densities, misaligned current pathways, and strong magnetic fields.
Linear Back-Drive Differentials
NASA Technical Reports Server (NTRS)
Waydo, Peter
2003-01-01
Linear back-drive differentials have been proposed as alternatives to conventional gear differentials for applications in which there is only limited rotational motion (e.g., oscillation). The finite nature of the rotation makes it possible to optimize a linear back-drive differential in ways that would not be possible for gear differentials or other differentials that are required to be capable of unlimited rotation. As a result, relative to gear differentials, linear back-drive differentials could be more compact and less massive, could contain fewer complex parts, and could be less sensitive to variations in the viscosities of lubricants. Linear back-drive differentials would operate according to established principles of power ball screws and linear-motion drives, but would utilize these principles in an innovative way. One major characteristic of such mechanisms that would be exploited in linear back-drive differentials is the possibility of designing them to drive or back-drive with similar efficiency and energy input: in other words, such a mechanism can be designed so that a rotating screw can drive a nut linearly or the linear motion of the nut can cause the screw to rotate. A linear back-drive differential (see figure) would include two collinear shafts connected to two parts that are intended to engage in limited opposing rotations. The linear back-drive differential would also include a nut that would be free to translate along its axis but not to rotate. The inner surface of the nut would be right-hand threaded at one end and left-hand threaded at the opposite end to engage corresponding right- and left-handed threads on the shafts. A rotation and torque introduced into the system via one shaft would drive the nut in linear motion. The nut, in turn, would back-drive the other shaft, creating a reaction torque. Balls would reduce friction, making it possible for the shaft/nut coupling on each side to operate with 90 percent efficiency.
Positron sources for Linear Colliders
Gai Wei; Liu Wanming
2009-09-02
Positron beams have many applications and there are many different concepts for positron sources. In this paper, only positron source techniques for linear colliders are covered. In order to achieve high luminosity, a linear collider positron source should have a high beam current, high beam energy, small emittance and, for some applications, a high degree of beam polarization. There are several different schemes presently being developed around the globe. Both the differences between these schemes and their common technical challenges are discussed.
Estimating linear temporal trends from aggregated environmental monitoring data
Erickson, Richard A.; Gray, Brian R.; Eager, Eric A.
2017-01-01
Trend estimates are often used as part of environmental monitoring programs. These trends inform managers (e.g., are desired species increasing or undesired species decreasing?). Data collected from environmental monitoring programs are often aggregated (i.e., averaged), which confounds sampling and process variation. State-space models allow sampling variation and process variation to be separated. We used simulated time-series to compare linear trend estimates from three state-space models, a simple linear regression model, and an autoregressive model. We also compared the performance of these five models in estimating trends from a long-term monitoring program. We specifically estimated trends for two species of fish and four species of aquatic vegetation from the Upper Mississippi River system. We found that the simple linear regression had the best performance of all the models because it was best able to recover parameters and had consistent numerical convergence. Conversely, the simple linear regression did the worst job of estimating populations in a given year. The state-space models did not estimate trends well, but estimated population sizes best when the models converged. We found that a simple linear regression performed better than more complex autoregressive and state-space models when used to analyze aggregated environmental monitoring data.
Linear isotherm determination from linear gradient elution experiments.
Pfister, David; Steinebach, Fabian; Morbidelli, Massimo
2015-01-02
A procedure to estimate equilibrium adsorption parameters as a function of the modifier concentration in linear gradient elution chromatography is proposed and its reliability is investigated by comparison with experimental data. Over the past decades, analytical solutions of the so-called equilibrium model under linear gradient elution conditions were derived assuming that proteins and modifier molecules access the same fraction of the pore size distribution of the porous particles. The approach developed in this work accounts for the size exclusion effect, resulting in different exclusions for proteins and modifier. A new analytical solution was derived by applying perturbation theory for differential equations, and the first-order approximate solution is presented in this work. Eventually, a turnkey and reliable procedure to efficiently estimate isotherm parameters as a function of modifier concentration from linear gradient elution experiments is proposed.
Transformation matrices between non-linear and linear differential equations
NASA Technical Reports Server (NTRS)
Sartain, R. L.
1983-01-01
In the linearization of systems of non-linear differential equations, those systems which can be exactly transformed into the second-order linear differential equation Y'' - AY' - BY = 0, where Y, Y', and Y'' are n x 1 vectors and A and B are constant n x n matrices of real numbers, were considered. A 2n x 2n matrix was used to transform the above matrix equation into the first-order matrix equation X' = MX. Specifically, the matrix M and the conditions which will diagonalize or triangularize M were studied. Transformation matrices P and P⁻¹ were used to accomplish this diagonalization or triangularization and to return to the solution of the second-order matrix differential equation system from the first-order system.
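The reduction described above can be sketched numerically: with X = [Y; Y'], the equation Y'' - AY' - BY = 0 becomes X' = MX with M = [[0, I], [B, A]] in block form. (A and B below are arbitrary 2 x 2 examples chosen so that M has distinct eigenvalues; this is an illustration, not the paper's code.)

```python
import numpy as np

n = 2
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
B = np.array([[2.0, 0.0], [0.0, 3.0]])

# Companion-style block matrix: X1' = X2 and X2' = B X1 + A X2,
# i.e. Y' = Y' and Y'' = A Y' + B Y.
M = np.block([[np.zeros((n, n)), np.eye(n)],
              [B,                A]])

# With distinct eigenvalues, P^-1 M P is diagonal and the first-order
# system decouples, which is the role of P and P^-1 in the abstract.
eigvals, P = np.linalg.eig(M)
D = np.linalg.inv(P) @ M @ P
print(np.allclose(D, np.diag(eigvals)))  # True: M is diagonalizable here
```

When M has repeated eigenvalues with defective eigenspaces, the same similarity transform only triangularizes M (Jordan form), which is the other case the paper treats.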
Determining Optimal Allocation of Naval Obstetric Resources with Linear Programming
2013-12-01
A sensitivity analysis simulates a sudden change in population within the catchment area of an MTF. For example, if an aircraft carrier with a crew size… catchment area is assumed to be 2,000. To simulate the effect of said volume increase in female beneficiaries, we increased the LP model demand by 10… NHCP operational control include: NHCP; BHC 13 Area Camp Pendleton; BHC 21 Area Camp Del Mar; BHC 31 Area Edson Range; BHC 52 Area
Solving Staircase Linear Programs by the Simplex Method. 2. Pricing.
1979-11-01
Optimized wavelet domain watermark embedding strategy using linear programming
NASA Astrophysics Data System (ADS)
Pereira, Shelby; Voloshynovskiy, Sviatoslav V.; Pun, Thierry
2000-04-01
Invisible digital watermarks have been proposed as a method for discouraging illicit copying and distribution of copyrighted material. In recent years it has been recognized that embedding information in a transform domain leads to more robust watermarks. In particular, several approaches based on the wavelet transform have been proposed to address the problem of image watermarking. The advantage of the wavelet transform relative to the DFT or DCT is that it allows for localized watermarking of the image. A major difficulty, however, with watermarking in any transform domain is that constraints on the allowable distortion at any pixel are specified in the spatial domain. To insert an invisible watermark, the current trend has been to model the human visual system and specify a masking function that yields the allowable distortion for any pixel. This complex function combines contrast, luminance, color, texture, and edges. The watermark is then inserted in the transform domain and the inverse transform is computed. Finally, the watermark is adjusted to satisfy the constraints on the pixel distortions. This method is highly suboptimal, however, since it leads to irreversible losses at the embedding stage: the watermark is adjusted in the spatial domain with no regard for the consequences in the transform domain.
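The coupling between transform-domain embedding and spatial-domain constraints can be posed as a linear program, which is the general idea the title suggests. The toy sketch below is an assumption-laden illustration, not the paper's actual formulation: it maximizes total embedding strength of transform coefficients c while keeping the spatial-domain distortion T^T c within a per-pixel visibility mask m; the random orthogonal transform and uniform mask are stand-ins.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n = 8
# Stand-in orthogonal transform (rows = basis vectors); a wavelet transform
# would play this role in the paper's setting.
T, _ = np.linalg.qr(rng.standard_normal((n, n)))
m = np.full(n, 1.0)  # per-pixel allowable distortion (visibility mask)

# LP: maximize sum(c) subject to -m <= T^T c <= m.
A_ub = np.vstack([T.T, -T.T])
b_ub = np.concatenate([m, m])
res = linprog(c=-np.ones(n), A_ub=A_ub, b_ub=b_ub,
              bounds=[(None, None)] * n)
assert res.success
spatial = T.T @ res.x                     # spatial-domain watermark
assert np.all(np.abs(spatial) <= m + 1e-8)  # mask constraints respected
```

Solving in this joint form avoids the lossy post-hoc spatial adjustment the abstract criticizes, since the mask constraints are enforced inside the optimization itself.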
Using Cognitive Tutor Software in Learning Linear Algebra Word Concept
ERIC Educational Resources Information Center
Yang, Kai-Ju
2015-01-01
This paper reports on a study of twelve 10th grade students using Cognitive Tutor, a math software program, to learn linear algebra word concepts. The study's purpose was to examine whether students' mathematics performance as related to using Cognitive Tutor provided evidence to support Koedinger's (2002) four instructional principles used…
Nonlinear optimization with linear constraints using a projection method
NASA Technical Reports Server (NTRS)
Fox, T.
1982-01-01
Nonlinear optimization problems that are encountered in science and industry are examined. A method of projecting the gradient vector onto a set of linear constraints is developed, and a program that uses this method is presented. The algorithm that generates the projection matrix is based on the Gram-Schmidt method and overcomes some of the objections to the Rosen projection method.
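The projection idea can be sketched briefly. This is a minimal illustration of gradient projection using classical Gram-Schmidt, under the usual formulation for equality constraints A x = b; it is not the report's program, and the example matrices are arbitrary.

```python
import numpy as np

def gram_schmidt_rows(A, tol=1e-12):
    """Orthonormalize the rows of A by classical Gram-Schmidt,
    dropping linearly dependent rows."""
    basis = []
    for row in A:
        v = row.astype(float).copy()
        for q in basis:
            v -= (q @ v) * q        # remove component along q
        norm = np.linalg.norm(v)
        if norm > tol:
            basis.append(v / norm)
    return np.array(basis)

def project_gradient(g, A):
    """Project g onto the null space of A, i.e. onto the
    directions that keep A x = b satisfied."""
    Q = gram_schmidt_rows(A)
    return g - Q.T @ (Q @ g)

A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])
g = np.array([1.0, 2.0, 3.0])
gp = project_gradient(g, A)
assert np.allclose(A @ gp, 0.0)  # projected direction stays feasible
```

Moving along -gp decreases the objective to first order while remaining on the constraint surface, which is the core of any projected-gradient scheme.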
Linear and Branching Formats in Culture Assimilator Training
ERIC Educational Resources Information Center
Malpass, Roy S.; Salancik, Gerald R.
1977-01-01
Defines the "branching format" of training materials as materials not requiring an absolute judgement of the appropriateness of alternatives, and the "linear format" as materials requiring an independent evaluation of each alternative. Tests these contrasting formats for effectiveness in cross-cultural training programs. Available from: International…
Linear titration plots for polyfunctional weak acids and bases.
Midgley, D; McCallum, C
1976-04-01
Procedures are derived for obtaining the equivalence volumes in the potentiometric titrations of polyfunctional weak acids and weak bases by a linear titration plot method. The effect of errors in the equilibrium constants on the accuracy is considered. A Fortran program is available to do the calculations.
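A linear titration plot of the Gran type illustrates the idea of extracting an equivalence volume from a straight-line fit. The sketch below is a hedged stand-in for the paper's Fortran program, using the simplest case (strong acid titrated with strong base) with illustrative concentrations: the Gran function G(V) = (V0 + V)[H+] is linear in titrant volume V before equivalence, and extrapolating the fitted line to G = 0 gives the equivalence volume Ve.

```python
import numpy as np

V0, Ca, Cb = 50.0, 0.010, 0.020       # mL, mol/L (illustrative values)
Ve_true = V0 * Ca / Cb                # equivalence volume: 25 mL

V = np.linspace(2.0, 20.0, 10)        # titrant volumes before equivalence
H = (Ca * V0 - Cb * V) / (V0 + V)     # [H+], neglecting water autoprotolysis
G = (V0 + V) * H                      # Gran function: linear in V

slope, intercept = np.polyfit(V, G, 1)  # least-squares straight line
Ve_est = -intercept / slope             # extrapolate to G = 0
assert abs(Ve_est - Ve_true) < 1e-6
```

In practice G is computed from measured pH as (V0 + V) * 10**(-pH), and the linear fit also makes the error propagation from the equilibrium constants, discussed in the abstract, straightforward to analyze.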