Zhao, Yingfeng; Liu, Sanyang
2016-01-01
We present a practical branch and bound algorithm for globally solving the generalized linear multiplicative programming problem with multiplicative constraints. To solve the problem, a relaxation programming problem which is equivalent to a linear program is proposed by utilizing a new two-phase relaxation technique. In the algorithm, lower and upper bounds are simultaneously obtained by solving some linear relaxation programming problems. Global convergence has been proved, and results for some sample examples and a small random experiment show that the proposed algorithm is feasible and efficient.
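As a rough, self-contained illustration of the linear-relaxation bounding idea (not the authors' specific two-phase relaxation; the box bounds and single bilinear term below are made-up example data), the following Python sketch uses SciPy's linprog to obtain a lower bound on a product term from a McCormick-style relaxation, and an upper bound by evaluating the true product at the relaxation's solution:

```python
import numpy as np
from scipy.optimize import linprog

xl, xu = 1.0, 3.0          # bounds on x (made-up data)
yl, yu = 2.0, 5.0          # bounds on y

# Variables z = [x, y, w]; w stands in for the product x*y.
# McCormick underestimators, written as A_ub @ z <= b_ub:
#   xl*y + yl*x - w <= xl*yl   and   xu*y + yu*x - w <= xu*yu
A_ub = np.array([[yl, xl, -1.0],
                 [yu, xu, -1.0]])
b_ub = np.array([xl * yl, xu * yu])
res = linprog(c=[0.0, 0.0, 1.0], A_ub=A_ub, b_ub=b_ub,
              bounds=[(xl, xu), (yl, yu), (None, None)], method="highs")

lower_bound = res.fun                  # bound from the linear relaxation
upper_bound = res.x[0] * res.x[1]      # true product evaluated at the relaxation point
print("lower bound:", lower_bound, "upper bound:", upper_bound)
```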
NASA Technical Reports Server (NTRS)
Ferencz, Donald C.; Viterna, Larry A.
1991-01-01
ALPS is a computer program which can be used to solve general linear programming (optimization) problems. ALPS was designed for those who have minimal linear programming (LP) knowledge and features a menu-driven scheme to guide the user through the process of creating and solving LP formulations. Once created, the problems can be edited and stored in standard DOS ASCII files to provide portability to various word processors or even other linear programming packages. Unlike many math-oriented LP solvers, ALPS contains an LP parser that reads through the LP formulation and reports several types of errors to the user. ALPS provides a large amount of solution data which is often useful in problem solving. In addition to pure linear programs, ALPS can solve for integer, mixed integer, and binary type problems. Pure linear programs are solved with the revised simplex method. Integer or mixed integer programs are solved initially with the revised simplex, and then completed using the branch-and-bound technique. Binary programs are solved with the method of implicit enumeration. This manual describes how to use ALPS to create, edit, and solve linear programming problems. Instructions for installing ALPS on a PC compatible computer are included in the appendices along with a general introduction to linear programming. A programmer's guide is also included for assistance in modifying and maintaining the program.
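A minimal sketch of the LP-relaxation-plus-branch-and-bound step described above, assuming SciPy's HiGHS-based linprog as the LP solver and a made-up two-variable integer program (this is not the ALPS code itself):

```python
import math
import numpy as np
from scipy.optimize import linprog

# maximize 5*x1 + 4*x2  s.t.  6*x1 + 4*x2 <= 24,  x1 + 2*x2 <= 6,  x integer >= 0
c = np.array([-5.0, -4.0])            # linprog minimizes, so negate the objective
A = np.array([[6.0, 4.0], [1.0, 2.0]])
b = np.array([24.0, 6.0])

best = {"obj": math.inf, "x": None}

def branch_and_bound(lo, hi):
    res = linprog(c, A_ub=A, b_ub=b, bounds=list(zip(lo, hi)), method="highs")
    if not res.success or res.fun >= best["obj"]:
        return                        # infeasible node, or pruned by the LP bound
    frac = [i for i, v in enumerate(res.x) if abs(v - round(v)) > 1e-6]
    if not frac:                      # relaxation is integral: new incumbent
        best["obj"], best["x"] = res.fun, np.round(res.x)
        return
    i, v = frac[0], res.x[frac[0]]    # branch on the first fractional variable
    branch_and_bound(lo, hi[:i] + [math.floor(v)] + hi[i + 1:])
    branch_and_bound(lo[:i] + [math.ceil(v)] + lo[i + 1:], hi)

branch_and_bound([0.0, 0.0], [math.inf, math.inf])
print("integer optimum:", best["x"], "objective value:", -best["obj"])
```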
Computer-aided linear-circuit design.
NASA Technical Reports Server (NTRS)
Penfield, P.
1971-01-01
Usually computer-aided design (CAD) refers to programs that analyze circuits conceived by the circuit designer. Among the services such programs should perform are direct network synthesis, analysis, optimization of network parameters, formatting, storage of miscellaneous data, and related calculations. The program should be embedded in a general-purpose conversational language such as BASIC, JOSS, or APL. Such a program is MARTHA, a general-purpose linear-circuit analyzer embedded in APL.
Semilinear programming: applications and implementation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mohan, S.
Semilinear programming is a method of solving optimization problems with linear constraints where the non-negativity restrictions on the variables are dropped and the objective function coefficients can take on different values depending on whether the variable is positive or negative. The simplex method for linear programming is modified in this thesis to solve general semilinear and piecewise linear programs efficiently without having to transform them into equivalent standard linear programs. Several models in widely different areas of optimization such as production smoothing, facility location, goal programming and L1 estimation are presented first to demonstrate the compact formulation that arises when such problems are formulated as semilinear programs. A code SLP is constructed using the semilinear programming techniques. Problems in aggregate planning and L1 estimation are solved using SLP and equivalent linear programs using a linear programming simplex code. Comparisons of CPU times and numbers of iterations indicate SLP to be far superior. The semilinear programming techniques are extended to piecewise linear programming in the implementation of the code PLP. Piecewise linear models in aggregate planning are solved using PLP and equivalent standard linear programs using a simple upper bounded linear programming code SUBLP.
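The positive/negative splitting idea behind semilinear programming can be illustrated with the L1 (least absolute deviations) estimation example mentioned above; the sketch below, with synthetic data, poses it as an ordinary LP rather than using the specialized SLP code:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
X = np.column_stack([np.ones(30), rng.normal(size=30)])        # intercept + slope
y = X @ np.array([1.0, 2.0]) + rng.laplace(scale=0.3, size=30)

n, p = X.shape
# minimize sum(e_plus + e_minus)  subject to  X b + e_plus - e_minus = y,
# with b free and e_plus, e_minus >= 0 (positive/negative parts of the residual)
c = np.concatenate([np.zeros(p), np.ones(2 * n)])
A_eq = np.hstack([X, np.eye(n), -np.eye(n)])
bounds = [(None, None)] * p + [(0, None)] * (2 * n)
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=bounds, method="highs")
print("L1 regression coefficients:", res.x[:p])
```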
Development and validation of a general purpose linearization program for rigid aircraft models
NASA Technical Reports Server (NTRS)
Duke, E. L.; Antoniewicz, R. F.
1985-01-01
A FORTRAN program that provides the user with a powerful and flexible tool for the linearization of aircraft models is discussed. The program LINEAR numerically determines a linear systems model using nonlinear equations of motion and a user-supplied, nonlinear aerodynamic model. The system model determined by LINEAR consists of matrices for both the state and observation equations. The program has been designed to allow easy selection and definition of the state, control, and observation variables to be used in a particular model. Also, included in the report is a comparison of linear and nonlinear models for a high performance aircraft.
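A generic central-difference linearization sketch in Python (not the LINEAR program; the pendulum-like dynamics below merely stand in for a user-supplied nonlinear model) shows how the state and control matrices can be extracted numerically:

```python
import numpy as np

def f(x, u):
    # toy nonlinear dynamics (damped pendulum with torque input), used only as a
    # stand-in for a user-supplied aerodynamic / equations-of-motion model
    theta, omega = x
    return np.array([omega, -9.81 * np.sin(theta) - 0.1 * omega + u[0]])

def linearize(f, x0, u0, eps=1e-6):
    n, m = len(x0), len(u0)
    A = np.zeros((n, n)); B = np.zeros((n, m))
    for j in range(n):
        dx = np.zeros(n); dx[j] = eps
        A[:, j] = (f(x0 + dx, u0) - f(x0 - dx, u0)) / (2 * eps)
    for j in range(m):
        du = np.zeros(m); du[j] = eps
        B[:, j] = (f(x0, u0 + du) - f(x0, u0 - du)) / (2 * eps)
    return A, B

A, B = linearize(f, x0=np.array([0.0, 0.0]), u0=np.array([0.0]))
print("A =", A, "\nB =", B)
```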
NASA Technical Reports Server (NTRS)
Egebrecht, R. A.; Thorbjornsen, A. R.
1967-01-01
Digital computer programs determine steady-state performance characteristics of active and passive linear circuits. The ac analysis program solves the basic circuit parameters. The compiler program solves these circuit parameters and in addition provides a more versatile program by allowing the user to perform mathematical and logical operations.
NASA Astrophysics Data System (ADS)
Tian, Wenli; Cao, Chengxuan
2017-03-01
A generalized interval fuzzy mixed integer programming model is proposed for the multimodal freight transportation problem under uncertainty, in which the optimal mode of transport and the optimal amount of each type of freight transported through each path need to be decided. For practical purposes, three mathematical methods, i.e. the interval ranking method, fuzzy linear programming method and linear weighted summation method, are applied to obtain equivalents of constraints and parameters, and then a fuzzy expected value model is presented. A heuristic algorithm based on a greedy criterion and the linear relaxation algorithm are designed to solve the model.
TI-59 Programs for Multiple Regression.
1980-05-01
The general linear hypothesis model of full rank [Graybill, 1961] can be written as Y = Xβ + ε, ε ~ N(0, σ²I), where Y is the n×1 vector of observations, X is the n×k design matrix, β is the k×1 coefficient vector, and ε is the n×1 error vector ... a "reduced model" solution, and confidence intervals for linear functions of the coefficients can be obtained using (X'X) and s², based on the t ... PROGRAM DESCRIPTION: for the general linear hypothesis model Y = Xβ + ε, the program calculates
A comparison of Heuristic method and Llewellyn’s rules for identification of redundant constraints
NASA Astrophysics Data System (ADS)
Estiningsih, Y.; Farikhin; Tjahjana, R. H.
2018-03-01
Important techniques in linear programming include modelling and solving practical optimization problems. Redundant constraints are considered for their effects on general linear programming problems. Identifying and removing redundant constraints avoids the associated calculations when solving a linear programming problem. Many methods have been proposed for identifying redundant constraints. This paper presents a comparison of the Heuristic method and Llewellyn's rules for identification of redundant constraints.
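For concreteness, a generic LP-based redundancy test (this is neither the heuristic method nor Llewellyn's rules compared in the paper, and the constraint data are made up) can be sketched as follows: constraint a_i·x <= b_i is redundant if maximizing a_i·x over the remaining constraints cannot exceed b_i.

```python
import numpy as np
from scipy.optimize import linprog

A = np.array([[1.0, 1.0],
              [1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
b = np.array([10.0, 6.0, 6.0, 14.0])      # the last constraint is redundant here

for i in range(len(b)):
    keep = [k for k in range(len(b)) if k != i]
    res = linprog(-A[i], A_ub=A[keep], b_ub=b[keep],
                  bounds=[(0, None), (0, None)], method="highs")
    redundant = res.success and -res.fun <= b[i] + 1e-9
    print(f"constraint {i}: redundant = {redundant}")
```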
Evaluation of a Nonlinear Finite Element Program - ABAQUS.
1983-03-15
anisotropic properties. * MATEXP - Linearly elastic thermal expansions with isotropic, orthotropic and anisotropic properties. * MATELG - Linearly ... elastic materials for general sections (options available for beam and shell elements). * MATEXG - Linearly elastic thermal expansions for general ... decomposition of a matrix. * Q-R algorithm * Vector normalization, etc. Obviously, by consolidating all the utility subroutines in a library, ABAQUS has
On the Feasibility of a Generalized Linear Program
1989-03-01
generalized linear program by applying the same algorithm to a "phase-one" problem without requiring that the initial basic feasible solution to the latter be non-degenerate.
Linearized Programming of Memristors for Artificial Neuro-Sensor Signal Processing
Yang, Changju; Kim, Hyongsuk
2016-01-01
A linearized programming method of memristor-based neural weights is proposed. The memristor is known as an ideal element for implementing a neural synapse due to its embedded functions of analog memory and analog multiplication. Its resistance variation with a voltage input is generally a nonlinear function of time. Linearization of the memristance variation with time is very important for ease of memristor programming. In this paper, a method utilizing an anti-serial architecture for linear programming is proposed. The anti-serial architecture is composed of two memristors with opposite polarities. It linearizes the variation of memristance due to the complementary actions of the two memristors. For programming a memristor, an additional memristor with opposite polarity is employed. The linearization effect of weight programming with an anti-serial architecture is investigated, and a memristor bridge synapse, built with two sets of the anti-serial memristor architecture, is taken as an application example of the proposed method. Simulations are performed with memristors of both the linear drift model and a nonlinear model. PMID:27548186
NASA Technical Reports Server (NTRS)
Utku, S.
1969-01-01
A general purpose digital computer program for the in-core solution of linear equilibrium problems of structural mechanics is documented. The program requires minimum input for the description of the problem. The solution is obtained by means of the displacement method and the finite element technique. Almost any geometry and structure may be handled because of the availability of linear, triangular, quadrilateral, tetrahedral, hexahedral, conical, triangular torus, and quadrilateral torus elements. The assumption of piecewise linear deflection distribution ensures monotonic convergence of the deflections from the stiffer side with decreasing mesh size. The stresses are provided by the best-fit strain tensors in the least-squares sense at the mesh points where the deflections are given. The selection of local coordinate systems whenever necessary is automatic. The core memory is used by means of dynamic memory allocation, an optional mesh-point relabelling scheme and imposition of the boundary conditions during the assembly time.
Trinker, Horst
2011-10-28
We study the distribution of triples of codewords of codes and ordered codes. Schrijver [A. Schrijver, New code upper bounds from the Terwilliger algebra and semidefinite programming, IEEE Trans. Inform. Theory 51 (8) (2005) 2859-2866] used the triple distribution of a code to establish a bound on the number of codewords based on semidefinite programming. In the first part of this work, we generalize this approach for ordered codes. In the second part, we consider linear codes and linear ordered codes and present a MacWilliams-type identity for the triple distribution of their dual code. Based on the non-negativity of this linear transform, we establish a linear programming bound and conclude with a table of parameters for which this bound yields better results than the standard linear programming bound.
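As a point of reference, the standard (Delsarte) linear programming bound that the abstract compares against can be sketched in a few lines of Python; the parameters below are toy values and the Krawtchouk-based formulation is the textbook one, not the triple-distribution semidefinite program:

```python
import numpy as np
from math import comb
from scipy.optimize import linprog

def krawtchouk(n, k, i):
    # binary Krawtchouk polynomial K_k(i)
    return sum((-1) ** j * comb(i, j) * comb(n - i, k - j) for j in range(k + 1))

def lp_bound(n, d):
    # maximize 1 + sum_{i=d..n} A_i  subject to  A_i >= 0  and, for k = 1..n,
    # K_k(0) + sum_{i=d..n} A_i * K_k(i) >= 0   (Krawtchouk-transform constraints)
    idx = list(range(d, n + 1))
    c = -np.ones(len(idx))
    A_ub = np.array([[-krawtchouk(n, k, i) for i in idx] for k in range(1, n + 1)],
                    dtype=float)
    b_ub = np.array([float(comb(n, k)) for k in range(1, n + 1)])   # K_k(0) = C(n, k)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * len(idx),
                  method="highs")
    return 1.0 - res.fun

print("LP upper bound on A(7, 3):", lp_bound(7, 3))   # the Hamming code gives A(7, 3) = 16
```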
Implementing general quantum measurements on linear optical and solid-state qubits
NASA Astrophysics Data System (ADS)
Ota, Yukihiro; Ashhab, Sahel; Nori, Franco
2013-03-01
We show a systematic construction for implementing general measurements on a single qubit, including both strong (or projection) and weak measurements. We mainly focus on linear optical qubits. The present approach is composed of simple and feasible elements, i.e., beam splitters, wave plates, and polarizing beam splitters. We show how the parameters characterizing the measurement operators are controlled by the linear optical elements. We also propose a method for the implementation of general measurements in solid-state qubits. Furthermore, we show an interesting application of the general measurements, i.e., entanglement amplification. YO is partially supported by the SPDR Program, RIKEN. SA and FN acknowledge ARO, NSF grant No. 0726909, JSPS-RFBR contract No. 12-02-92100, Grant-in-Aid for Scientific Research (S), MEXT Kakenhi on Quantum Cybernetics, and the JSPS via its FIRST program.
ERIC Educational Resources Information Center
Leff, H. Stephen; Turner, Ralph R.
This report focuses on the use of linear programming models to address the issues of how vocational rehabilitation (VR) resources should be allocated in order to maximize program efficiency within given resource constraints. A general introduction to linear programming models is first presented that describes the major types of models available,…
AN EVALUATION OF HEURISTICS FOR THRESHOLD-FUNCTION TEST-SYNTHESIS,
Linear programming offers the most attractive procedure for testing and obtaining optimal threshold gate realizations for functions generated in...The design of the experiments may be of general interest to students of automatic problem solving; the results should be of interest in threshold logic and linear programming. (Author)
The microcomputer scientific software series 2: general linear model--regression.
Harold M. Rauscher
1983-01-01
The general linear model regression (GLMR) program provides the microcomputer user with a sophisticated regression analysis capability. The output provides a regression ANOVA table, estimators of the regression model coefficients, their confidence intervals, confidence intervals around the predicted Y-values, residuals for plotting, a check for multicollinearity, a...
NASA Technical Reports Server (NTRS)
1979-01-01
The computer program Linear SCIDNT which evaluates rotorcraft stability and control coefficients from flight or wind tunnel test data is described. It implements the maximum likelihood method to maximize the likelihood function of the parameters based on measured input/output time histories. Linear SCIDNT may be applied to systems modeled by linear constant-coefficient differential equations. This restriction in scope allows the application of several analytical results which simplify the computation and improve its efficiency over the general nonlinear case.
Interior point techniques for LP and NLP
DOE Office of Scientific and Technical Information (OSTI.GOV)
Evtushenko, Y.
By using surjective mapping the initial constrained optimization problem is transformed to a problem in a new space with only equality constraints. For the numerical solution of the latter problem we use the generalized gradient-projection method and Newton's method. After inverse transformation to the initial space we obtain the family of numerical methods for solving optimization problems with equality and inequality constraints. In the linear programming case after some simplification we obtain Dikin's algorithm, affine scaling algorithm and generalized primal dual interior point linear programming algorithm.
Implementing Linear Algebra Related Algorithms on the TI-92+ Calculator.
ERIC Educational Resources Information Center
Alexopoulos, John; Abraham, Paul
2001-01-01
Demonstrates a less utilized feature of the TI-92+: its natural and powerful programming language. Shows how to implement several linear algebra related algorithms including the Gram-Schmidt process, Least Squares Approximations, Wronskians, Cholesky Decompositions, and Generalized Linear Least Square Approximations with QR Decompositions.…
No-signaling quantum key distribution: solution by linear programming
NASA Astrophysics Data System (ADS)
Hwang, Won-Young; Bae, Joonwoo; Killoran, Nathan
2015-02-01
We outline a straightforward approach for obtaining a secret key rate using only no-signaling constraints and linear programming. Assuming an individual attack, we consider all possible joint probabilities. Initially, we study only the case where Eve has binary outcomes, and we impose constraints due to the no-signaling principle and given measurement outcomes. Within the remaining space of joint probabilities, by using linear programming, we get a bound on the probability of Eve correctly guessing Bob's bit. We then make use of an inequality that relates this guessing probability to the mutual information between Bob and a more general Eve, who is not binary-restricted. Putting our computed bound together with the Csiszár-Körner formula, we obtain a positive key generation rate. The optimal value of this rate agrees with known results, but was calculated in a more straightforward way, offering the potential of generalization to different scenarios.
The Use of Shrinkage Techniques in the Estimation of Attrition Rates for Large Scale Manpower Models
1988-07-27
autoregressive model combined with a linear program that solves for the coefficients using MAD. But this success has diminished with time (Rowe ... 'Harrison-Stevens Forecasting and the Multiprocess Dynamic Linear Model', The American Statistician, v. 40, pp. 129-135, 1986. 8. Box, G. E. P. and ... 1950. 40. McCullagh, P. and Nelder, J., Generalized Linear Models, Chapman and Hall, 1983. 41. McKenzie, E., General Exponential Smoothing and the
High profile students’ growth of mathematical understanding in solving linear programming problems
NASA Astrophysics Data System (ADS)
Utomo; Kusmayadi, TA; Pramudya, I.
2018-04-01
Linear programming has an important role in human life. Linear programming is learned at the senior high school and college levels, and the material is applied in economics, transportation, the military and other fields. Therefore, mastering linear programming is useful preparation for life. This research describes the growth of mathematical understanding in solving linear programming problems based on the Pirie-Kieren model of the growth of understanding. Thus, this research used a qualitative approach. The subjects were students of grade XI in Salatiga city. The subjects of this study were two students who had high profiles. The researcher generally chose the subjects based on the growth of understanding from a test result in the classroom; the mark from the prerequisite material was ≥ 75. Both of the subjects were interviewed by the researcher to learn the students’ growth of mathematical understanding in solving linear programming problems. The finding of this research showed that the subjects often folded back to the primitive knowing level before moving forward to the next level. This happened because the subjects’ primitive understanding was not comprehensive.
MSC products for the simulation of tire behavior
NASA Technical Reports Server (NTRS)
Muskivitch, John C.
1995-01-01
The modeling of tires and the simulation of tire behavior are complex problems. The MacNeal-Schwendler Corporation (MSC) has a number of finite element analysis products that can be used to address the complexities of tire modeling and simulation. While there are many similarities between the products, each product has a number of capabilities that uniquely enable it to be used for a specific aspect of tire behavior. This paper discusses the following programs: (1) MSC/NASTRAN - general purpose finite element program for linear and nonlinear static and dynamic analysis; (2) MSC/ABAQUS - nonlinear statics and dynamics finite element program; (3) MSC/PATRAN AFEA (Advanced Finite Element Analysis) - general purpose finite element program with a subset of linear and nonlinear static and dynamic analysis capabilities with an integrated version of MSC/PATRAN for pre- and post-processing; and (4) MSC/DYTRAN - nonlinear explicit transient dynamics finite element program.
NASA Astrophysics Data System (ADS)
Zhang, Chenglong; Guo, Ping
2017-10-01
The vague and fuzzy parametric information is a challenging issue in irrigation water management problems. In response to this problem, a generalized fuzzy credibility-constrained linear fractional programming (GFCCFP) model is developed for optimal irrigation water allocation under uncertainty. The model can be derived from integrating generalized fuzzy credibility-constrained programming (GFCCP) into a linear fractional programming (LFP) optimization framework. Therefore, it can solve ratio optimization problems associated with fuzzy parameters, and examine the variation of results under different credibility levels and weight coefficients of possibility and necessity. It has advantages in: (1) balancing the economic and resources objectives directly; (2) analyzing system efficiency; (3) generating more flexible decision solutions by giving different credibility levels and weight coefficients of possibility and necessity; and (4) supporting in-depth analysis of the interrelationships among system efficiency, credibility level and weight coefficient. The model is applied to a case study of irrigation water allocation in the middle reaches of the Heihe River Basin, northwest China. Optimal irrigation water allocation solutions from the GFCCFP model can thus be obtained. Moreover, factorial analysis on the two parameters (i.e. λ and γ) indicates that the weight coefficient is a main factor compared with credibility level for system efficiency. These results can be effective for supporting reasonable irrigation water resources management and agricultural production.
Fan, Yurui; Huang, Guohe; Veawab, Amornvadee
2012-01-01
In this study, a generalized fuzzy linear programming (GFLP) method was developed to deal with uncertainties expressed as fuzzy sets that exist in the constraints and objective function. A stepwise interactive algorithm (SIA) was advanced to solve GFLP model and generate solutions expressed as fuzzy sets. To demonstrate its application, the developed GFLP method was applied to a regional sulfur dioxide (SO2) control planning model to identify effective SO2 mitigation polices with a minimized system performance cost under uncertainty. The results were obtained to represent the amount of SO2 allocated to different control measures from different sources. Compared with the conventional interval-parameter linear programming (ILP) approach, the solutions obtained through GFLP were expressed as fuzzy sets, which can provide intervals for the decision variables and objective function, as well as related possibilities. Therefore, the decision makers can make a tradeoff between model stability and the plausibility based on solutions obtained through GFLP and then identify desired policies for SO2-emission control under uncertainty.
Chen, Vivian Yi-Ju; Yang, Tse-Chuan
2012-08-01
An increasing interest in exploring spatial non-stationarity has generated several specialized analytic software programs; however, few of these programs can be integrated natively into a well-developed statistical environment such as SAS. We not only developed a set of SAS macro programs to fill this gap, but also expanded the geographically weighted generalized linear modeling (GWGLM) by integrating the strengths of SAS into the GWGLM framework. Three features distinguish our work. First, the macro programs of this study provide more kernel weighting functions than the existing programs. Second, with our codes the users are able to better specify the bandwidth selection process compared to the capabilities of existing programs. Third, the development of the macro programs is fully embedded in the SAS environment, providing great potential for future exploration of complicated spatially varying coefficient models in other disciplines. We provided three empirical examples to illustrate the use of the SAS macro programs and demonstrated the advantages explained above. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
A primer for biomedical scientists on how to execute model II linear regression analysis.
Ludbrook, John
2012-04-01
1. There are two very different ways of executing linear regression analysis. One is Model I, when the x-values are fixed by the experimenter. The other is Model II, in which the x-values are free to vary and are subject to error. 2. I have received numerous complaints from biomedical scientists that they have great difficulty in executing Model II linear regression analysis. This may explain the results of a Google Scholar search, which showed that the authors of articles in journals of physiology, pharmacology and biochemistry rarely use Model II regression analysis. 3. I repeat my previous arguments in favour of using least products linear regression analysis for Model II regressions. I review three methods for executing ordinary least products (OLP) and weighted least products (WLP) regression analysis: (i) scientific calculator and/or computer spreadsheet; (ii) specific purpose computer programs; and (iii) general purpose computer programs. 4. Using a scientific calculator and/or computer spreadsheet, it is easy to obtain correct values for OLP slope and intercept, but the corresponding 95% confidence intervals (CI) are inaccurate. 5. Using specific purpose computer programs, the freeware computer program smatr gives the correct OLP regression coefficients and obtains 95% CI by bootstrapping. In addition, smatr can be used to compare the slopes of OLP lines. 6. When using general purpose computer programs, I recommend the commercial programs systat and Statistica for those who regularly undertake linear regression analysis and I give step-by-step instructions in the Supplementary Information as to how to use loss functions. © 2011 The Author. Clinical and Experimental Pharmacology and Physiology. © 2011 Blackwell Publishing Asia Pty Ltd.
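A minimal numpy sketch of ordinary least products (geometric-mean) regression with a bootstrap confidence interval for the slope, using synthetic data; this follows the standard OLP formulas rather than the smatr, systat or Statistica procedures discussed above:

```python
import numpy as np

rng = np.random.default_rng(1)
x_true = rng.normal(10, 2, size=50)
x = x_true + rng.normal(0, 0.5, size=50)            # x measured with error (Model II)
y = 3.0 + 1.5 * x_true + rng.normal(0, 1.0, size=50)

def olp(x, y):
    r = np.corrcoef(x, y)[0, 1]
    slope = np.sign(r) * np.std(y, ddof=1) / np.std(x, ddof=1)
    return slope, np.mean(y) - slope * np.mean(x)

slope, intercept = olp(x, y)
boot = []
for _ in range(2000):                               # bootstrap CI for the slope
    idx = rng.integers(0, len(x), len(x))
    boot.append(olp(x[idx], y[idx])[0])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"OLP slope {slope:.3f} (95% CI {lo:.3f} to {hi:.3f}), intercept {intercept:.3f}")
```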
A sequential linear optimization approach for controller design
NASA Technical Reports Server (NTRS)
Horta, L. G.; Juang, J.-N.; Junkins, J. L.
1985-01-01
A linear optimization approach with a simple real arithmetic algorithm is presented for reliable controller design and vibration suppression of flexible structures. Using first order sensitivity of the system eigenvalues with respect to the design parameters in conjunction with a continuation procedure, the method converts a nonlinear optimization problem into a maximization problem with linear inequality constraints. The method of linear programming is then applied to solve the converted linear optimization problem. The general efficiency of the linear programming approach allows the method to handle structural optimization problems with a large number of inequality constraints on the design vector. The method is demonstrated using a truss beam finite element model for the optimal sizing and placement of active/passive-structural members for damping augmentation. Results using both the sequential linear optimization approach and nonlinear optimization are presented and compared. The insensitivity to initial conditions of the linear optimization approach is also demonstrated.
LDRD final report on massively-parallel linear programming : the parPCx system.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Parekh, Ojas; Phillips, Cynthia Ann; Boman, Erik Gunnar
2005-02-01
This report summarizes the research and development performed from October 2002 to September 2004 at Sandia National Laboratories under the Laboratory-Directed Research and Development (LDRD) project ''Massively-Parallel Linear Programming''. We developed a linear programming (LP) solver designed to use a large number of processors. LP is the optimization of a linear objective function subject to linear constraints. Companies and universities have expended huge efforts over decades to produce fast, stable serial LP solvers. Previous parallel codes run on shared-memory systems and have little or no distribution of the constraint matrix. We have seen no reports of general LP solver runs on large numbers of processors. Our parallel LP code is based on an efficient serial implementation of Mehrotra's interior-point predictor-corrector algorithm (PCx). The computational core of this algorithm is the assembly and solution of a sparse linear system. We have substantially rewritten the PCx code and based it on Trilinos, the parallel linear algebra library developed at Sandia. Our interior-point method can use either direct or iterative solvers for the linear system. To achieve a good parallel data distribution of the constraint matrix, we use a (pre-release) version of a hypergraph partitioner from the Zoltan partitioning library. We describe the design and implementation of our new LP solver called parPCx and give preliminary computational results. We summarize a number of issues related to efficient parallel solution of LPs with interior-point methods including data distribution, numerical stability, and solving the core linear system using both direct and iterative methods. We describe a number of applications of LP specific to US Department of Energy mission areas and we summarize our efforts to integrate parPCx (and parallel LP solvers in general) into Sandia's massively-parallel integer programming solver PICO (Parallel Integer and Combinatorial Optimizer). We conclude with directions for long-term future algorithmic research and for near-term development that could improve the performance of parPCx.
Shen, Peiping; Zhang, Tongli; Wang, Chunfeng
2017-01-01
This article presents a new approximation algorithm for globally solving a class of generalized fractional programming problems (P) whose objective functions are defined as an appropriate composition of ratios of affine functions. To solve this problem, the algorithm solves an equivalent optimization problem (Q) via an exploration of a suitably defined nonuniform grid. The main work of the algorithm involves checking the feasibility of linear programs associated with the interesting grid points. It is proved that the proposed algorithm is a fully polynomial time approximation scheme as the ratio terms are fixed in the objective function to problem (P), based on the computational complexity result. In contrast to existing results in literature, the algorithm does not require the assumptions on quasi-concavity or low-rank of the objective function to problem (P). Numerical results are given to illustrate the feasibility and effectiveness of the proposed algorithm.
NASA Technical Reports Server (NTRS)
Middleton, W. D.; Lundry, J. L.
1976-01-01
An integrated system of computer programs was developed for the design and analysis of supersonic configurations. The system uses linearized theory methods for the calculation of surface pressures and supersonic area rule concepts in combination with linearized theory for calculation of aerodynamic force coefficients. Interactive graphics are optional at the user's request. Schematics of the program structure and the individual overlays and subroutines are described.
NASA Technical Reports Server (NTRS)
Middleton, W. D.; Lundry, J. L.
1975-01-01
An integrated system of computer programs has been developed for the design and analysis of supersonic configurations. The system uses linearized theory methods for the calculation of surface pressures and supersonic area rule concepts in combination with linearized theory for calculation of aerodynamic force coefficients. Interactive graphics are optional at the user's request. This part presents a general description of the system and describes the theoretical methods used.
NASA Technical Reports Server (NTRS)
Dieudonne, J. E.
1978-01-01
A numerical technique was developed which generates linear perturbation models from nonlinear aircraft vehicle simulations. The technique is very general and can be applied to simulations of any system that is described by nonlinear differential equations. The computer program used to generate these models is discussed, with emphasis placed on generation of the Jacobian matrices, calculation of the coefficients needed for solving the perturbation model, and generation of the solution of the linear differential equations. An example application of the technique to a nonlinear model of the NASA terminal configured vehicle is included.
Low-rank regularization for learning gene expression programs.
Ye, Guibo; Tang, Mengfan; Cai, Jian-Feng; Nie, Qing; Xie, Xiaohui
2013-01-01
Learning gene expression programs directly from a set of observations is challenging due to the complexity of gene regulation, high noise of experimental measurements, and insufficient number of experimental measurements. Imposing additional constraints with strong and biologically motivated regularizations is critical in developing reliable and effective algorithms for inferring gene expression programs. Here we propose a new form of regularization that constrains the number of independent connectivity patterns between regulators and targets, motivated by the modular design of gene regulatory programs and the belief that the total number of independent regulatory modules should be small. We formulate a multi-target linear regression framework to incorporate this type of regularization, in which the number of independent connectivity patterns is expressed as the rank of the connectivity matrix between regulators and targets. We then generalize the linear framework to nonlinear cases, and prove that the generalized low-rank regularization model is still convex. Efficient algorithms are derived to solve both the linear and nonlinear low-rank regularized problems. Finally, we test the algorithms on three gene expression datasets, and show that the low-rank regularization improves the accuracy of gene expression prediction in these three datasets.
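A hedged sketch of the low-rank idea on synthetic data: nuclear-norm regularized multi-target linear regression solved by proximal gradient with singular-value soft-thresholding (a generic solver, not necessarily the authors' algorithm; all sizes and the penalty weight are made up):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, q, r = 200, 30, 20, 3
W_true = rng.normal(size=(p, r)) @ rng.normal(size=(r, q))   # rank-r connectivity
X = rng.normal(size=(n, p))
Y = X @ W_true + 0.1 * rng.normal(size=(n, q))

lam = 5.0
L = np.linalg.norm(X, 2) ** 2            # Lipschitz constant of the gradient
W = np.zeros((p, q))
for _ in range(300):
    G = X.T @ (X @ W - Y)                # gradient of 0.5*||XW - Y||_F^2
    U, s, Vt = np.linalg.svd(W - G / L, full_matrices=False)
    W = U @ np.diag(np.maximum(s - lam / L, 0.0)) @ Vt   # prox of (lam/L)*nuclear norm
print("estimated rank of the connectivity matrix:", np.linalg.matrix_rank(W, tol=1e-6))
```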
A binary linear programming formulation of the graph edit distance.
Justice, Derek; Hero, Alfred
2006-08-01
A binary linear programming formulation of the graph edit distance for unweighted, undirected graphs with vertex attributes is derived and applied to a graph recognition problem. A general formulation for editing graphs is used to derive a graph edit distance that is proven to be a metric, provided the cost function for individual edit operations is a metric. Then, a binary linear program is developed for computing this graph edit distance, and polynomial time methods for determining upper and lower bounds on the solution of the binary program are derived by applying solution methods for standard linear programming and the assignment problem. A recognition problem of comparing a sample input graph to a database of known prototype graphs in the context of a chemical information system is presented as an application of the new method. The costs associated with various edit operations are chosen by using a minimum normalized variance criterion applied to pairwise distances between nearest neighbors in the database of prototypes. The new metric is shown to perform quite well in comparison to existing metrics when applied to a database of chemical graphs.
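The assignment-problem machinery mentioned above for bounding the solution can be illustrated with a common vertex-assignment construction (a generic lower-bound device, not the full binary linear program; graphs, attributes and costs below are made up):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

attrs_g = np.array([[0.0], [1.0], [2.5]])      # vertex attributes of graph G
attrs_h = np.array([[0.1], [2.4]])             # vertex attributes of graph H
ins_del, BIG = 1.0, 1e9                        # insertion/deletion cost, "forbidden" cost

n, m = len(attrs_g), len(attrs_h)
C = np.zeros((n + m, n + m))
C[:n, :m] = np.abs(attrs_g - attrs_h.T)                          # substitution costs
C[:n, m:] = np.where(np.eye(n, dtype=bool), ins_del, BIG)        # deletions from G
C[n:, :m] = np.where(np.eye(m, dtype=bool), ins_del, BIG)        # insertions into H
rows, cols = linear_sum_assignment(C)                            # dummy-dummy block stays 0
print("assignment-based vertex edit cost:", C[rows, cols].sum())
```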
NASA Technical Reports Server (NTRS)
Magnus, A. E.; Epton, M. A.
1981-01-01
Panel aerodynamics (PAN AIR) is a system of computer programs designed to analyze subsonic and supersonic inviscid flows about arbitrary configurations. A panel method is a program which solves a linear partial differential equation by approximating the configuration surface by a set of panels. An overview of the theory of potential flow in general and PAN AIR in particular is given along with detailed mathematical formulations. Fluid dynamics, the Navier-Stokes equation, and the theory of panel methods are also discussed.
An Ada Linear-Algebra Software Package Modeled After HAL/S
NASA Technical Reports Server (NTRS)
Klumpp, Allan R.; Lawson, Charles L.
1990-01-01
New avionics software written more easily. Software package extends Ada programming language to include linear-algebra capabilities similar to those of HAL/S programming language. Designed for such avionics applications as Space Station flight software. In addition to built-in functions of HAL/S, package incorporates quaternion functions used in Space Shuttle and Galileo projects and routines from LINPAK solving systems of equations involving general square matrices. Contains two generic programs: one for floating-point computations and one for integer computations. Written on IBM/AT personal computer running under PC DOS, v.3.1.
Software For Integer Programming
NASA Technical Reports Server (NTRS)
Fogle, F. R.
1992-01-01
Improved Exploratory Search Technique for Pure Integer Linear Programming Problems (IESIP) program optimizes objective function of variables subject to confining functions or constraints, using discrete optimization or integer programming. Enables rapid solution of problems up to 10 variables in size. Integer programming required for accuracy in modeling systems containing small number of components, distribution of goods, scheduling operations on machine tools, and scheduling production in general. Written in Borland's TURBO Pascal.
NASA Technical Reports Server (NTRS)
Muravyov, Alexander A.; Turner, Travis L.; Robinson, Jay H.; Rizzi, Stephen A.
1999-01-01
In this paper, the problem of random vibration of geometrically nonlinear MDOF structures is considered. The solutions obtained by application of two different versions of a stochastic linearization method are compared with exact (F-P-K) solutions. The formulation of a relatively new version of the stochastic linearization method (energy-based version) is generalized to the MDOF system case. Also, a new method for determination of nonlinear stiffness coefficients for MDOF structures is demonstrated. This method in combination with the equivalent linearization technique is implemented in a new computer program. Results in terms of root-mean-square (RMS) displacements obtained by using the new program and an existing in-house code are compared for two examples of beam-like structures.
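A classical single-degree-of-freedom illustration of equivalent (stochastic) linearization, assuming a Gaussian response and a Duffing-type cubic stiffness under white noise; this is not the paper's MDOF energy-based formulation, and all parameter values are examples:

```python
import numpy as np

m, c, k, eps = 1.0, 0.2, 1.0, 0.5      # mass, damping, linear and cubic stiffness
S0 = 0.05                              # two-sided white-noise spectral density

# For the linear system m*x'' + c*x' + k_eq*x = w(t), E[x^2] = pi*S0/(c*k_eq).
# Equivalent linearization replaces eps*x^3 by (3*eps*E[x^2])*x (Gaussian closure).
sigma2 = np.pi * S0 / (c * k)          # start from the purely linear variance
for _ in range(100):
    k_eq = k + 3.0 * eps * sigma2
    sigma2 = np.pi * S0 / (c * k_eq)

print(f"equivalent stiffness {k_eq:.4f}, RMS displacement {np.sqrt(sigma2):.4f}")
```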
Finite element modelling of non-linear magnetic circuits using Cosmic NASTRAN
NASA Technical Reports Server (NTRS)
Sheerer, T. J.
1986-01-01
The general purpose Finite Element Program COSMIC NASTRAN currently has the ability to model magnetic circuits with constant permeabilities. An approach was developed which, through small modifications to the program, allows modelling of non-linear magnetic devices including soft magnetic materials, permanent magnets and coils. Use of the NASTRAN code resulted in output which can be used for subsequent mechanical analysis using a variation of the same computer model. Test problems were found to produce theoretically verifiable results.
GLOBAL SOLUTIONS TO FOLDED CONCAVE PENALIZED NONCONVEX LEARNING
Liu, Hongcheng; Yao, Tao; Li, Runze
2015-01-01
This paper is concerned with solving nonconvex learning problems with a folded concave penalty. Although their global solutions entail desirable statistical properties, there is a lack of optimization techniques that guarantee global optimality in a general setting. In this paper, we show that a class of nonconvex learning problems are equivalent to general quadratic programs. This equivalence facilitates us in developing mixed integer linear programming reformulations, which admit finite algorithms that find a provably global optimal solution. We refer to this reformulation-based technique as the mixed integer programming-based global optimization (MIPGO). To our knowledge, this is the first global optimization scheme with a theoretical guarantee for folded concave penalized nonconvex learning with the SCAD penalty (Fan and Li, 2001) and the MCP penalty (Zhang, 2010). Numerical results indicate a significant outperformance of MIPGO over the state-of-the-art solution scheme, local linear approximation, and other alternative solution techniques in literature in terms of solution quality. PMID:27141126
Impact of Linear Programming on Computer Development.
1985-06-01
soon see. It all really began when Dal Hitchcock, an advisor to General Rawlings, the Air Comptroller, and Marshall Wood, an expert on military ... unifying principles. Of course, I thought first to try to adapt the Leontief Input-Output Model. But Marshall and I also talked about certain ... still with the Ford Motor Company. I told him about my presentation to General Rawlings on the possibility of a "program Integrator" for planning and
Method for nonlinear exponential regression analysis
NASA Technical Reports Server (NTRS)
Junkin, B. G.
1972-01-01
Two computer programs developed according to two general types of exponential models for conducting nonlinear exponential regression analysis are described. Least squares procedure is used in which the nonlinear problem is linearized by expanding in a Taylor series. Program is written in FORTRAN 5 for the Univac 1108 computer.
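The Taylor-series linearization of a nonlinear exponential model can be sketched as a Gauss-Newton iteration in Python (synthetic data, single-exponential model; the original FORTRAN 5 programs are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0, 4, 40)
y = 2.0 * np.exp(-0.7 * t) + 0.02 * rng.normal(size=t.size)

a, b = 1.0, -0.5                        # starting guesses for y = a*exp(b*t)
for _ in range(50):
    yhat = a * np.exp(b * t)
    J = np.column_stack([np.exp(b * t), a * t * np.exp(b * t)])  # d(yhat)/d(a, b)
    delta, *_ = np.linalg.lstsq(J, y - yhat, rcond=None)         # linearized step
    a, b = a + delta[0], b + delta[1]
    if np.linalg.norm(delta) < 1e-10:
        break
print(f"fitted a = {a:.4f}, b = {b:.4f}")
```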
AN ADA LINEAR ALGEBRA PACKAGE MODELED AFTER HAL/S
NASA Technical Reports Server (NTRS)
Klumpp, A. R.
1994-01-01
This package extends the Ada programming language to include linear algebra capabilities similar to those of the HAL/S programming language. The package is designed for avionics applications such as Space Station flight software. In addition to the HAL/S built-in functions, the package incorporates the quaternion functions used in the Shuttle and Galileo projects, and routines from LINPAK that solve systems of equations involving general square matrices. Language conventions in this package follow those of HAL/S to the maximum extent practical and minimize the effort required for writing new avionics software and translating existent software into Ada. Valid numeric types in this package include scalar, vector, matrix, and quaternion declarations. (Quaternions are four-component vectors used in representing motion between two coordinate frames). Single precision and double precision floating point arithmetic is available in addition to the standard double precision integer manipulation. Infix operators are used instead of function calls to define dot products, cross products, quaternion products, and mixed scalar-vector, scalar-matrix, and vector-matrix products. The package contains two generic programs: one for floating point, and one for integer. The actual component type is passed as a formal parameter to the generic linear algebra package. The procedures for solving systems of linear equations defined by general matrices include GEFA, GECO, GESL, and GIDI. The HAL/S functions include ABVAL, UNIT, TRACE, DET, INVERSE, TRANSPOSE, GET, PUT, FETCH, PLACE, and IDENTITY. This package is written in Ada (Version 1.2) for batch execution and is machine independent. The linear algebra software depends on nothing outside the Ada language except for a call to a square root function for floating point scalars (such as SQRT in the DEC VAX MATHLIB library). This program was developed in 1989, and is a copyrighted work with all copyright vested in NASA.
Computer Series, 65. Bits and Pieces, 26.
ERIC Educational Resources Information Center
Moore, John W., Ed.
1985-01-01
Describes: (1) a microcomputer-based system for filing test questions and assembling examinations; (2) microcomputer use in practical and simulated experiments of gamma rays scattering by outer shell electrons; (3) an interactive, screen-oriented, general linear regression program; and (4) graphics drill and game programs for benzene synthesis.…
PIFCGT: A PIF autopilot design program for general aviation aircraft
NASA Technical Reports Server (NTRS)
Broussard, J. R.
1983-01-01
This report documents the PIFCGT computer program. Written in FORTRAN, PIFCGT is a computer design aid for determining Proportional-Integral-Filter (PIF) control laws for aircraft autopilots implemented with a Command Generator Tracker (CGT). The program uses Linear-Quadratic-Regulator synthesis algorithms to determine feedback gains, and includes software to solve the feedforward matrix equation which is useful in determining the command generator tracker feedforward gains. The program accepts aerodynamic stability derivatives and computes the corresponding aerodynamic linear model. The nine autopilot modes that can be designed include four maneuver modes (ROLL SEL, PITCH SEL, HDG SEL, ALT SEL), four final approach modes (APR GS, APR LOCI, APR LOCR, APR LOCP), and a BETA HOLD mode. The program has been compiled and executed on a CDC computer.
NASA Technical Reports Server (NTRS)
Hinnant, Howard E.; Hodges, Dewey H.
1987-01-01
The General Rotorcraft Aeromechanical Stability Program (GRASP) was developed to analyse the steady-state and linearized dynamic behavior of rotorcraft in hovering and axial flight conditions. Because of the nature of problems GRASP was created to solve, the geometrically nonlinear behavior of beams is one area in which the program must perform well in order to be of any value. Numerical results obtained from GRASP are compared to both static and dynamic experimental data obtained for a cantilever beam undergoing large displacements and rotations caused by deformations. The correlation is excellent in all cases.
The microcomputer scientific software series 3: general linear model--analysis of variance.
Harold M. Rauscher
1985-01-01
A BASIC language set of programs, designed for use on microcomputers, is presented. This set of programs will perform the analysis of variance for any statistical model describing either balanced or unbalanced designs. The program computes and displays the degrees of freedom, Type I sum of squares, and the mean square for the overall model, the error, and each factor...
A statistical package for computing time and frequency domain analysis
NASA Technical Reports Server (NTRS)
Brownlow, J.
1978-01-01
The spectrum analysis (SPA) program is a general purpose digital computer program designed to aid in data analysis. The program does time and frequency domain statistical analyses as well as some preanalysis data preparation. The capabilities of the SPA program include linear trend removal and/or digital filtering of data, plotting and/or listing of both filtered and unfiltered data, time domain statistical characterization of data, and frequency domain statistical characterization of data.
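Two of the SPA-style steps named above, linear trend removal followed by a frequency-domain characterization, can be sketched with SciPy (the sampling rate and test signal are made up):

```python
import numpy as np
from scipy.signal import detrend, welch

fs = 100.0                                   # sampling rate, Hz (example value)
t = np.arange(0, 20, 1 / fs)
x = 0.3 * t + np.sin(2 * np.pi * 5 * t) \
    + 0.2 * np.random.default_rng(0).normal(size=t.size)

x_detrended = detrend(x, type="linear")      # remove the linear trend
f, Pxx = welch(x_detrended, fs=fs, nperseg=512)   # Welch power spectral density
print("dominant frequency (Hz):", f[np.argmax(Pxx)])
```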
Generalizations of Tikhonov's regularized method of least squares to non-Euclidean vector norms
NASA Astrophysics Data System (ADS)
Volkov, V. V.; Erokhin, V. I.; Kakaev, V. V.; Onufrei, A. Yu.
2017-09-01
Tikhonov's regularized method of least squares and its generalizations to non-Euclidean norms, including polyhedral, are considered. The regularized method of least squares is reduced to mathematical programming problems obtained by "instrumental" generalizations of the Tikhonov lemma on the minimal (in a certain norm) solution of a system of linear algebraic equations with respect to an unknown matrix. Further studies are needed for problems concerning the development of methods and algorithms for solving reduced mathematical programming problems in which the objective functions and admissible domains are constructed using polyhedral vector norms.
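For the classical Euclidean case discussed above, Tikhonov's regularized least squares reduces to the regularized normal equations; a minimal numpy sketch on synthetic data (the polyhedral-norm generalizations are not shown):

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.normal(size=(50, 20))
b = A @ rng.normal(size=20) + 0.1 * rng.normal(size=50)

lam = 0.5                                # regularization parameter (example value)
# x = argmin ||Ax - b||^2 + lam*||x||^2  via  (A'A + lam*I) x = A'b
x = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ b)
print("regularized solution norm:", np.linalg.norm(x))
```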
NASA Astrophysics Data System (ADS)
Al-Kuhali, K.; Hussain M., I.; Zain Z., M.; Mullenix, P.
2015-05-01
Aim: This paper contributes to the flat panel display industry in terms of aggregate production planning. Methodology: To minimize the total production cost of LCD manufacturing, a linear programming model was applied. The decision variables are general production costs, additional costs incurred for overtime production, additional costs incurred for subcontracting, inventory carrying costs, backorder costs and adjustments for changes incurred within labour levels. The model was developed for a manufacturer having several product types, up to a maximum of N, over a total planning horizon of T periods. Results: An industrial case study based in Malaysia is presented to test and validate the developed linear programming model for aggregate production planning. Conclusion: The model is suitable under stable environmental conditions. Overall, it can be recommended to adapt the proven linear programming model to production planning in the Malaysian flat panel display industry.
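A toy version of such an aggregate production planning LP (made-up costs, capacities and demands; only regular production, overtime and inventory are modelled, without backorders, subcontracting or labour-level changes) can be written with SciPy's linprog:

```python
import numpy as np
from scipy.optimize import linprog

demand = np.array([120.0, 150.0])
# variables: [P1, P2, O1, O2, I1, I2]  (regular production, overtime, ending inventory)
c = np.array([10.0, 10.0, 15.0, 15.0, 2.0, 2.0])        # unit costs
# inventory balance: P_t + O_t + I_{t-1} - I_t = d_t   (with I_0 = 0)
A_eq = np.array([[1, 0, 1, 0, -1, 0],
                 [0, 1, 0, 1, 1, -1]], dtype=float)
bounds = [(0, 100), (0, 100), (0, 40), (0, 40), (0, None), (0, None)]
res = linprog(c, A_eq=A_eq, b_eq=demand, bounds=bounds, method="highs")
print("minimum total cost:", res.fun, "plan:", res.x)
```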
A general numerical analysis program for the superconducting quasiparticle mixer
NASA Technical Reports Server (NTRS)
Hicks, R. G.; Feldman, M. J.; Kerr, A. R.
1986-01-01
A user-oriented computer program SISCAP (SIS Computer Analysis Program) for analyzing SIS mixers is described. The program allows arbitrary impedance terminations to be specified at all LO harmonics and sideband frequencies. It is therefore able to treat a much more general class of SIS mixers than the widely used three-frequency analysis, for which the harmonics are assumed to be short-circuited. An additional program, GETCHI, provides the necessary input data to program SISCAP. The SISCAP program performs a nonlinear analysis to determine the SIS junction voltage waveform produced by the local oscillator. The quantum theory of mixing is used in its most general form, treating the large signal properties of the mixer in the time domain. A small signal linear analysis is then used to find the conversion loss and port impedances. The noise analysis includes thermal noise from the termination resistances and shot noise from the periodic LO current. Quantum noise is not considered. Many aspects of the program have been adequately verified and found accurate.
Solving large mixed linear models using preconditioned conjugate gradient iteration.
Strandén, I; Lidauer, M
1999-12-01
Continuous evaluation of dairy cattle with a random regression test-day model requires a fast solving method and algorithm. A new computing technique feasible in Jacobi and conjugate gradient based iterative methods using iteration on data is presented. In the new computing technique, the calculations in multiplication of a vector by a matrix were reorganized into three steps instead of the commonly used two steps. The three-step method was implemented in a general mixed linear model program that used preconditioned conjugate gradient iteration. Performance of this program in comparison to other general solving programs was assessed via estimation of breeding values using univariate, multivariate, and random regression test-day models. Central processing unit time per iteration with the new three-step technique was, at best, one-third that needed with the old technique. Performance was best with the test-day model, which was the largest and most complex model used. The new program did well in comparison to other general software. Programs keeping the mixed model equations in random access memory required at least 20 and 435% more time to solve the univariate and multivariate animal models, respectively. Computations of the second best iteration on data took approximately three and five times longer for the animal and test-day models, respectively, than did the new program. Good performance was due to fast computing time per iteration and quick convergence to the final solutions. Use of preconditioned conjugate gradient based methods in solving large breeding value problems is supported by our findings.
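A generic Jacobi-preconditioned conjugate gradient iteration (on a random symmetric positive-definite system, not on mixed-model equations or iteration-on-data storage) can be sketched as follows:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 200
M = rng.normal(size=(n, n))
A = M @ M.T + n * np.eye(n)              # symmetric positive-definite test matrix
b = rng.normal(size=n)

x = np.zeros(n)
d_inv = 1.0 / np.diag(A)                 # Jacobi (diagonal) preconditioner
r = b - A @ x
z = d_inv * r
p = z.copy()
for it in range(1000):
    Ap = A @ p
    alpha = (r @ z) / (p @ Ap)
    x += alpha * p
    r_new = r - alpha * Ap
    if np.linalg.norm(r_new) < 1e-10:    # converged
        break
    z_new = d_inv * r_new
    beta = (r_new @ z_new) / (r @ z)
    p = z_new + beta * p
    r, z = r_new, z_new
print("iterations:", it + 1, "residual:", np.linalg.norm(b - A @ x))
```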
NASA Technical Reports Server (NTRS)
Lehtinen, B.; Geyser, L. C.
1984-01-01
AESOP is a computer program for use in designing feedback controls and state estimators for linear multivariable systems. AESOP is meant to be used in an interactive manner. Each design task that the program performs is assigned a "function" number. The user accesses these functions either (1) by inputting a list of desired function numbers or (2) by inputting a single function number. In the latter case the choice of the function will in general depend on the results obtained by the previously executed function. The most important of the AESOP functions are those that design linear quadratic regulators and Kalman filters. The user interacts with the program when using these design functions by inputting design weighting parameters and by viewing graphic displays of designed system responses. Supporting functions are provided that obtain system transient and frequency responses, transfer functions, and covariance matrices. The program can also compute open-loop system information such as stability (eigenvalues), eigenvectors, controllability, and observability. The program is written in ANSI-66 FORTRAN for use on an IBM 3033 using TSS 370. Descriptions of all subroutines and results of two test cases are included in the appendixes.
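The core linear-quadratic-regulator design step can be sketched in Python via the continuous algebraic Riccati equation; the two-state plant and weights below are an arbitrary example, not an AESOP test case:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0], [-2.0, -0.5]])
B = np.array([[0.0], [1.0]])
Q = np.diag([10.0, 1.0])                 # state weighting (design parameter)
R = np.array([[0.1]])                    # control weighting (design parameter)

P = solve_continuous_are(A, B, Q, R)     # solve the Riccati equation
K = np.linalg.solve(R, B.T @ P)          # optimal feedback u = -K x
print("LQR gain:", K)
print("closed-loop eigenvalues:", np.linalg.eigvals(A - B @ K))
```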
Applied Multiple Linear Regression: A General Research Strategy
ERIC Educational Resources Information Center
Smith, Brandon B.
1969-01-01
Illustrates some of the basic concepts and procedures for using regression analysis in experimental design, analysis of variance, analysis of covariance, and curvilinear regression. Applications to evaluation of instruction and vocational education programs are illustrated. (GR)
The checkpoint ordering problem
Hungerländer, P.
2017-01-01
We suggest a new variant of a row layout problem: Find an ordering of n departments with given lengths such that the total weighted sum of their distances to a given checkpoint is minimized. The Checkpoint Ordering Problem (COP) is both of theoretical and practical interest. It has several applications and is conceptually related to some well-studied combinatorial optimization problems, namely the Single-Row Facility Layout Problem, the Linear Ordering Problem and a variant of parallel machine scheduling. In this paper we study the complexity of the (COP) and its special cases. The general version of the (COP) with an arbitrary but fixed number of checkpoints is NP-hard in the weak sense. We propose both a dynamic programming algorithm and an integer linear programming approach for the (COP). Our computational experiments indicate that the (COP) is hard to solve in practice. While the run time of the dynamic programming algorithm strongly depends on the length of the departments, the integer linear programming approach is able to solve instances with up to 25 departments to optimality. PMID:29170574
A FORTRAN program for the analysis of linear continuous and sample-data systems
NASA Technical Reports Server (NTRS)
Edwards, J. W.
1976-01-01
A FORTRAN digital computer program which performs the general analysis of linearized control systems is described. State variable techniques are used to analyze continuous, discrete, and sampled data systems. Analysis options include the calculation of system eigenvalues, transfer functions, root loci, root contours, frequency responses, power spectra, and transient responses for open- and closed-loop systems. A flexible data input format allows the user to define systems in a variety of representations. Data may be entered by inputing explicit data matrices or matrices constructed in user written subroutines, by specifying transfer function block diagrams, or by using a combination of these methods.
An accelerated proximal augmented Lagrangian method and its application in compressive sensing.
Sun, Min; Liu, Jing
2017-01-01
As a first-order method, the augmented Lagrangian method (ALM) is a benchmark solver for linearly constrained convex programming, and in practice some semi-definite proximal terms are often added to its primal variable's subproblem to make it more implementable. In this paper, we propose an accelerated PALM with indefinite proximal regularization (PALM-IPR) for convex programming with linear constraints, which generalizes the proximal terms from semi-definite to indefinite. Under mild assumptions, we establish the worst-case [Formula: see text] convergence rate of PALM-IPR in a non-ergodic sense. Finally, numerical results show that our new method is feasible and efficient for solving compressive sensing.
Automating approximate Bayesian computation by local linear regression.
Thornton, Kevin R
2009-07-07
In several biological contexts, parameter inference often relies on computationally-intensive techniques. "Approximate Bayesian Computation", or ABC, methods based on summary statistics have become increasingly popular. A particular flavor of ABC based on using a linear regression to approximate the posterior distribution of the parameters, conditional on the summary statistics, is computationally appealing, yet no standalone tool exists to automate the procedure. Here, I describe a program to implement the method. The software package ABCreg implements the local linear-regression approach to ABC. The advantages are: 1. The code is standalone, and fully-documented. 2. The program will automatically process multiple data sets, and create unique output files for each (which may be processed immediately in R), facilitating the testing of inference procedures on simulated data, or the analysis of multiple data sets. 3. The program implements two different transformation methods for the regression step. 4. Analysis options are controlled on the command line by the user, and the program is designed to output warnings for cases where the regression fails. 5. The program does not depend on any particular simulation machinery (coalescent, forward-time, etc.), and therefore is a general tool for processing the results from any simulation. 6. The code is open-source, and modular. Examples of applying the software to empirical data from Drosophila melanogaster, and testing the procedure on simulated data, are shown. In practice, ABCreg simplifies implementing ABC based on local linear regression.
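The regression-adjustment step that ABCreg automates can be sketched in a few lines; the following NumPy illustration (not ABCreg's own code, and omitting its transformation options and weighting) accepts the simulations whose summaries are closest to the observed ones, regresses parameters on summaries, and shifts the accepted draws to the observed summaries.

```python
import numpy as np

def abc_local_linear(params, summaries, observed, accept_frac=0.01):
    """Rejection ABC followed by a local linear-regression adjustment.

    params:    (n_sims, n_params) simulated parameter draws
    summaries: (n_sims, n_stats) summary statistics of each simulation
    observed:  (n_stats,) observed summary statistics
    """
    # 1. Keep the simulations closest to the observed summaries (Euclidean distance)
    d = np.linalg.norm(summaries - observed, axis=1)
    keep = d <= np.quantile(d, accept_frac)
    S, theta = summaries[keep], params[keep]

    # 2. Ordinary least squares: theta ~ intercept + slopes * (S - observed)
    X = np.column_stack([np.ones(S.shape[0]), S - observed])
    coef, *_ = np.linalg.lstsq(X, theta, rcond=None)   # coef[0] = intercept, coef[1:] = slopes

    # 3. Adjust: move each accepted draw to the observed summaries along the fitted plane
    adjusted = theta - (S - observed) @ coef[1:]
    return adjusted  # approximate posterior sample

# Toy usage with a hypothetical simulator output
rng = np.random.default_rng(0)
theta_sim = rng.normal(0.0, 1.0, size=(20000, 1))
summ_sim = theta_sim + rng.normal(0.0, 0.5, size=(20000, 1))
post = abc_local_linear(theta_sim, summ_sim, observed=np.array([0.8]))
print(post.mean(), post.std())
```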
Brown, Angus M
2006-04-01
The objective of this present study was to demonstrate a method for fitting complex electrophysiological data with multiple functions using the SOLVER add-in of the ubiquitous spreadsheet Microsoft Excel. SOLVER minimizes the sum of the squared differences between the data to be fit and the function(s) describing the data using an iterative generalized reduced gradient method. While it is a straightforward procedure to fit data with linear functions, and we have previously demonstrated a method of non-linear regression analysis of experimental data based upon a single function, it is more complex to fit data with multiple functions, usually requiring specialized expensive computer software. In this paper we describe an easily understood program for fitting experimentally acquired data, in this case the stimulus-evoked compound action potential from the mouse optic nerve, with multiple Gaussian functions. The program is flexible and can be applied to describe data with a wide variety of user-input functions.
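A spreadsheet-free sketch of the same idea, fitting a sum of Gaussians by least squares, is shown below with SciPy's curve_fit; the number of components, starting values, and synthetic trace are arbitrary illustrations rather than the paper's optic-nerve data.

```python
import numpy as np
from scipy.optimize import curve_fit

def sum_of_gaussians(t, *p):
    """Sum of Gaussian components; p = (amp1, mu1, sigma1, amp2, mu2, sigma2, ...)."""
    y = np.zeros_like(t, dtype=float)
    for a, mu, sigma in zip(p[0::3], p[1::3], p[2::3]):
        y += a * np.exp(-0.5 * ((t - mu) / sigma) ** 2)
    return y

# Synthetic two-component trace with noise (illustrative only)
t = np.linspace(0.0, 10.0, 500)
rng = np.random.default_rng(1)
y = sum_of_gaussians(t, 1.0, 3.0, 0.5, 0.6, 5.0, 0.8) + rng.normal(0.0, 0.02, t.size)

# Initial guesses for two components: amplitude, centre, width
p0 = [0.8, 2.5, 0.6, 0.5, 5.5, 1.0]
popt, pcov = curve_fit(sum_of_gaussians, t, y, p0=p0)
print("Fitted parameters:", np.round(popt, 3))
```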
Discrete Time McKean–Vlasov Control Problem: A Dynamic Programming Approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pham, Huyên, E-mail: pham@math.univ-paris-diderot.fr; Wei, Xiaoli, E-mail: tyswxl@gmail.com
We consider the stochastic optimal control problem of nonlinear mean-field systems in discrete time. We reformulate the problem into a deterministic control problem with marginal distribution as controlled state variable, and prove that dynamic programming principle holds in its general form. We apply our method for solving explicitly the mean-variance portfolio selection and the multivariate linear-quadratic McKean–Vlasov control problem.
Performance Analysis and Design Synthesis (PADS) computer program. Volume 3: User manual
NASA Technical Reports Server (NTRS)
1972-01-01
The two-fold purpose of the Performance Analysis and Design Synthesis (PADS) computer program is discussed. The program can size launch vehicles in conjunction with calculus-of-variations optimal trajectories and can also be used as a general purpose branched trajectory optimization program. For trajectory optimization alone or with sizing, PADS has two trajectory modules. The first trajectory module uses the method of steepest descent. The second module uses the method of quasi-linearization, which requires a starting solution from the first trajectory module.
Space Trajectories Error Analysis (STEAP) Programs. Volume 1: Analytic manual, update
NASA Technical Reports Server (NTRS)
1971-01-01
Manual revisions are presented for the modified and expanded STEAP series. The STEAP 2 is composed of three independent but related programs: NOMAL for the generation of n-body nominal trajectories performing a number of deterministic guidance events; ERRAN for the linear error analysis and generalized covariance analysis along specific targeted trajectories; and SIMUL for testing the mathematical models used in the navigation and guidance process. The analytic manual provides general problem description, formulation, and solution and the detailed analysis of subroutines. The programmers' manual gives descriptions of the overall structure of the programs as well as the computational flow and analysis of the individual subroutines. The user's manual provides information on the input and output quantities of the programs. These are updates to N69-36472 and N69-36473.
The Impact of New Technology on Accounting Education.
ERIC Educational Resources Information Center
Shaoul, Jean
The introduction of computers in the Department of Accounting and Finance at Manchester University is described. General background outlining the increasing need for microcomputers in the accounting curriculum (including financial modelling tools and decision support systems such as linear programming, statistical packages, and simulation) is…
Analyzing longitudinal data with the linear mixed models procedure in SPSS.
West, Brady T
2009-09-01
Many applied researchers analyzing longitudinal data share a common misconception: that specialized statistical software is necessary to fit hierarchical linear models (also known as linear mixed models [LMMs], or multilevel models) to longitudinal data sets. Although several specialized statistical software programs of high quality are available that allow researchers to fit these models to longitudinal data sets (e.g., HLM), rapid advances in general purpose statistical software packages have recently enabled analysts to fit these same models when using preferred packages that also enable other more common analyses. One of these general purpose statistical packages is SPSS, which includes a very flexible and powerful procedure for fitting LMMs to longitudinal data sets with continuous outcomes. This article aims to present readers with a practical discussion of how to analyze longitudinal data using the LMMs procedure in the SPSS statistical software package.
NASA Technical Reports Server (NTRS)
Phillips, K.
1976-01-01
A mathematical model for job scheduling in a specified context is presented. The model uses both linear programming and combinatorial methods. While designed with a view toward optimization of scheduling of facility and plant operations at the Deep Space Communications Complex, the context is sufficiently general to be widely applicable. The general scheduling problem including options for scheduling objectives is discussed and fundamental parameters identified. Mathematical algorithms for partitioning problems germane to scheduling are presented.
Optimized Waterspace Management and Scheduling Using Mixed-Integer Linear Programming
2016-01-01
Complete [30]. Proposition 4.1 satisfies the first criterion. For the second criterion, we will use the Traveling Salesman Problem (TSP), which has been... [Reference fragments: A branch and cut algorithm for the symmetric generalized traveling salesman problem, Operations Research 45 (1997) 378–394; [33] J. Silberholz, B. Golden, The generalized traveling salesman problem: A new genetic algorithm approach, Extended Horizons: Advances in Computing, Optimization, and...]
Computer programs for the solution of systems of linear algebraic equations
NASA Technical Reports Server (NTRS)
Sequi, W. T.
1973-01-01
FORTRAN subprograms for the solution of systems of linear algebraic equations are described, listed, and evaluated in this report. Procedures considered are direct solution, iteration, and matrix inversion. Both incore methods and those which utilize auxiliary data storage devices are considered. Some of the subroutines evaluated require the entire coefficient matrix to be in core, whereas others account for banding or sparseness of the system. General recommendations relative to equation solving are made, and on the basis of tests, specific subprograms are recommended.
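As a present-day counterpart to the banded-storage routines evaluated in such reports, the short sketch below solves a tridiagonal system with SciPy's banded solver; the matrix values are arbitrary, and the diagonal-wise storage layout is the point being illustrated.

```python
import numpy as np
from scipy.linalg import solve_banded

# Tridiagonal system A x = b with A stored by diagonals (1 super-, 1 sub-diagonal)
n = 5
main = 4.0 * np.ones(n)
off = -1.0 * np.ones(n - 1)

# Banded storage: row 0 = superdiagonal (padded on the left),
#                 row 1 = main diagonal, row 2 = subdiagonal (padded on the right)
ab = np.zeros((3, n))
ab[0, 1:] = off
ab[1, :] = main
ab[2, :-1] = off

b = np.arange(1.0, n + 1.0)
x = solve_banded((1, 1), ab, b)

# Check against a dense solve
A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
print(np.allclose(A @ x, b))
```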
NASA Astrophysics Data System (ADS)
Ryan, D. P.; Roth, G. S.
1982-04-01
Complete documentation of the 15 programs and 11 data files of the EPA Atomic Absorption Instrument Automation System is presented. The system incorporates the following major features: (1) multipoint calibration using first, second, or third degree regression or linear interpolation, (2) timely quality control assessments for spiked samples, duplicates, laboratory control standards, reagent blanks, and instrument check standards, (3) reagent blank subtraction, and (4) plotting of calibration curves and raw data peaks. The programs of this system are written in Data General Extended BASIC, Revision 4.3, as enhanced for multi-user, real-time data acquisition. They run in a Data General Nova 840 minicomputer under the operating system RDOS, Revision 6.2. There is a functional description, a symbol definitions table, a functional flowchart, a program listing, and a symbol cross reference table for each program. The structure of every data file is also detailed.
Huang, Kuo-Ling; Mehrotra, Sanjay
2016-11-08
We present a homogeneous algorithm equipped with a modified potential function for the monotone complementarity problem. We show that this potential function is reduced by at least a constant amount if a scaled Lipschitz condition (SLC) is satisfied. A practical algorithm based on this potential function is implemented in a software package named iOptimize. The implementation in iOptimize maintains global linear and polynomial time convergence properties, while achieving practical performance. It either successfully solves the problem, or concludes that the SLC is not satisfied. When compared with the mature software package MOSEK (barrier solver version 6.0.0.106), iOptimize solves convex quadratic programming problems, convex quadratically constrained quadratic programming problems, and general convex programming problems in fewer iterations. Moreover, several problems for which MOSEK fails are solved to optimality. In addition, we also find that iOptimize detects infeasibility more reliably than the general nonlinear solvers Ipopt (version 3.9.2) and Knitro (version 8.0).
NASA Technical Reports Server (NTRS)
Gupta, K. K.; Akyuz, F. A.; Heer, E.
1972-01-01
This program, an extension of the linear equilibrium problem solver ELAS, is an updated and extended version of its earlier form (written in FORTRAN 2 for the IBM 7094 computer). A synchronized material property concept utilizing incremental time steps and the finite element matrix displacement approach has been adopted for the current analysis. A special option enables employment of constant time steps in the logarithmic scale, thereby reducing computational efforts resulting from accumulative material memory effects. A wide variety of structures with elastic or viscoelastic material properties can be analyzed by VISCEL. The program is written in FORTRAN 5 language for the Univac 1108 computer operating under the EXEC 8 system. Dynamic storage allocation is automatically effected by the program, and the user may request up to 195K core memory in a 260K Univac 1108/EXEC 8 machine. The physical program VISCEL, consisting of about 7200 instructions, has four distinct links (segments), and the compiled program occupies a maximum of about 11700 words decimal of core storage.
Solution of the Generalized Noah's Ark Problem.
Billionnet, Alain
2013-01-01
The phylogenetic diversity (PD) of a set of species is a measure of the evolutionary distance among the species in the collection, based on a phylogenetic tree. Such a tree is composed of a root, internal nodes, and leaves that correspond to the set of taxa under study. With each edge of the tree is associated a non-negative branch length (evolutionary distance). If a particular survival probability is associated with each taxon, the PD measure becomes the expected PD measure. In the Noah's Ark Problem (NAP) introduced by Weitzman (1998), these survival probabilities can be increased at some cost. The problem is to determine how best to allocate a limited amount of resources to maximize the expected PD of the considered species. It is easy to formulate the NAP as a (difficult) nonlinear 0-1 programming problem. The aim of this article is to show that a general version of the NAP (GNAP) can be solved simply and efficiently with any set of edge weights and any set of survival probabilities by using standard mixed-integer linear programming software. The crucial point to move from a nonlinear program in binary variables to a mixed-integer linear program is to approximate the logarithmic function by the lower envelope of a set of tangents to the curve. Solving the obtained mixed-integer linear program provides not only a near-optimal solution but also an upper bound on the value of the optimal solution. We also applied this approach to a generalization of the nature reserve problem (GNRP) that consists of selecting a set of regions to be conserved so that the expected PD of the set of species present in these regions is maximized. In this case, the survival probabilities of different taxa are not independent of each other. Computational results are presented to illustrate the potential of the approach. Near-optimal solutions with hypothetical phylogenetic trees comprising about 4000 taxa are obtained in a few seconds or minutes of computing time for the GNAP, and in about 30 min for the GNRP. In all the cases the average guarantee varies from 0% to 1.20%.
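The linearization device mentioned in the abstract, replacing the logarithm by the lower envelope of tangents, can be written explicitly; the tangent points t_1, ..., t_m below are whatever grid the modeler chooses, and w is the auxiliary variable that stands in for ln z in the maximized objective (our notation, not necessarily the article's):

```latex
\[
\ln z \;\approx\; \min_{1 \le i \le m}\Big\{ \ln t_i + \frac{z - t_i}{t_i} \Big\} \;\ge\; \ln z,
\qquad\text{enforced in the MILP by}\qquad
w \;\le\; \ln t_i + \frac{z - t_i}{t_i}, \quad i = 1,\dots,m .
\]
```

Because the tangents overestimate the concave logarithm, the optimal value of the resulting mixed-integer linear program is an upper bound on the true optimum, which is exactly the guarantee quoted above.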
Decision Support Systems: Theory.
1976-01-01
Kotler, P., "Toward an Explicit Model for Media Selection," J. Advertising Res. 4, 14-41, Mar. 1964. Kriebel, C. H., "MIS Technology - A View of... Research Study of Sales Response to Advertising," Opns. Res. 5, 370-381, 1957. Von Bertalanffy, Ludwig, General Systems Theory. New York: George... Zangwill, W. I., "Media Selection by Decision Programming," J. Advertising Res. 5, 30-36, Sept. 1964. Zeleny, M., Linear Multiobjective Programming
Menu-Driven Solver Of Linear-Programming Problems
NASA Technical Reports Server (NTRS)
Viterna, L. A.; Ferencz, D.
1992-01-01
Program assists inexperienced user in formulating linear-programming problems. A Linear Program Solver (ALPS) computer program is full-featured LP analysis program. Solves plain linear-programming problems as well as more-complicated mixed-integer and pure-integer programs. Also contains efficient technique for solution of purely binary linear-programming problems. Written entirely in IBM's APL2/PC software, Version 1.01. Packed program contains licensed material, property of IBM (copyright 1988, all rights reserved).
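For readers who want to experiment with the same class of problems outside APL2, the sketch below solves a tiny mixed-integer example with SciPy (version 1.9 or later for scipy.optimize.milp); the coefficients are arbitrary, and this is of course not ALPS itself.

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# Maximize 2x + 3y subject to 4x + 5y <= 12, x >= 0, y >= 0, with y integer.
# milp minimizes, so the objective is negated.
c = np.array([-2.0, -3.0])
constraints = LinearConstraint(np.array([[4.0, 5.0]]), lb=[-np.inf], ub=[12.0])
integrality = np.array([0, 1])              # x continuous, y restricted to integers
bounds = Bounds(lb=np.zeros(2), ub=np.full(2, np.inf))

res = milp(c, constraints=constraints, integrality=integrality, bounds=bounds)
print(res.x, -res.fun)                      # -> roughly [0.5, 2.0] and objective 7.0
```

Here the LP relaxation is fractional in y, so the integer restriction genuinely changes the answer, which is the situation branch-and-bound style solvers such as ALPS are built for.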
NASA Astrophysics Data System (ADS)
Balac, Stéphane; Fernandez, Arnaud
2016-02-01
The computer program SPIP is aimed at solving the Generalized Non-Linear Schrödinger equation (GNLSE), involved in optics e.g. in the modelling of light-wave propagation in an optical fibre, by the Interaction Picture method, a new efficient alternative method to the Symmetric Split-Step method. In the SPIP program a dedicated costless adaptive step-size control based on the use of a 4th order embedded Runge-Kutta method is implemented in order to speed up the resolution.
An Evaluation of Nutrition Education Program for Low-Income Youth
ERIC Educational Resources Information Center
Kemirembe, Olive M. K.; Radhakrishna, Rama B.; Gurgevich, Elise; Yoder, Edgar P.; Ingram, Patreese D.
2011-01-01
A quasi-experimental design consisting of pretest, posttest, and delayed posttest comparison control group was used. Nutrition knowledge and behaviors were measured at pretest (time 1) posttest (time 2) and delayed posttest (time 3). General Linear Model (GLM) repeated measure ANCOVA results showed that youth who received nutrition education…
A Problem on Optimal Transportation
ERIC Educational Resources Information Center
Cechlarova, Katarina
2005-01-01
Mathematical optimization problems are not typical in the classical curriculum of mathematics. In this paper we show how several generalizations of an easy problem on optimal transportation were solved by gifted secondary school pupils in a correspondence mathematical seminar, how they can be used in university courses of linear programming and…
Linear Programming Problems for Generalized Uncertainty
ERIC Educational Resources Information Center
Thipwiwatpotjana, Phantipa
2010-01-01
Uncertainty occurs when there is more than one realization that can represent a piece of information. This dissertation concerns merely discrete realizations of an uncertainty. Different interpretations of an uncertainty and their relationships are addressed when the uncertainty is not a probability of each realization. A well known model that can handle…
ERIC Educational Resources Information Center
Moody, John Charles
Assessed were the effects of linear and modified linear programed materials on the achievement of slow learners in tenth grade Biological Sciences Curriculum Study (BSCS) Special Materials biology. Two hundred and six students were randomly placed into four programed materials formats: linear programed materials, modified linear program with…
An algorithm for control system design via parameter optimization. M.S. Thesis
NASA Technical Reports Server (NTRS)
Sinha, P. K.
1972-01-01
An algorithm for design via parameter optimization has been developed for linear-time-invariant control systems based on the model reference adaptive control concept. A cost functional is defined to evaluate the system response relative to nominal, which involves in general the error between the system and nominal response, its derivatives and the control signals. A program for the practical implementation of this algorithm has been developed, with the computational scheme for the evaluation of the performance index based on Lyapunov's theorem for stability of linear invariant systems.
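The Lyapunov-based evaluation of the performance index mentioned above can be made concrete: for a stable linear system x' = Ax with cost J = ∫ x'Qx dt, J equals x0'Px0, where A'P + PA + Q = 0. The sketch below uses arbitrary example matrices (not the thesis code) and cross-checks the result by direct integration.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov
from scipy.integrate import solve_ivp, trapezoid

A = np.array([[0.0, 1.0],
              [-3.0, -2.0]])      # stable example system
Q = np.diag([1.0, 0.5])
x0 = np.array([1.0, -1.0])

# Solve A'P + P A = -Q, then J = x0' P x0
P = solve_continuous_lyapunov(A.T, -Q)
J_lyap = x0 @ P @ x0

# Cross-check by integrating x'Qx along the simulated trajectory
sol = solve_ivp(lambda t, x: A @ x, (0.0, 40.0), x0, dense_output=True, rtol=1e-9, atol=1e-12)
ts = np.linspace(0.0, 40.0, 4000)
xs = sol.sol(ts)
J_num = trapezoid(np.einsum('it,ij,jt->t', xs, Q, xs), ts)
print(J_lyap, J_num)   # the two values should agree closely
```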
Optimal design of neural stimulation current waveforms.
Halpern, Mark
2009-01-01
This paper contains results on the design of electrical signals for delivering charge through electrodes to achieve neural stimulation. A generalization of the usual constant current stimulation phase to a stepped current waveform is presented. The electrode current design is then formulated as the calculation of the current step sizes to minimize the peak electrode voltage while delivering a specified charge in a given number of time steps. This design problem can be formulated as a finite linear program, or alternatively by using techniques for discrete-time linear system design.
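To make the linear-programming formulation concrete, the toy sketch below assumes a simple access-resistance-plus-capacitance electrode model (an assumption made here for illustration; the paper's electrode model and constraint set are not reproduced): nonnegative current steps must deliver a target charge while the peak electrode voltage is minimized.

```python
import numpy as np
from scipy.optimize import linprog

# Toy electrode model parameters (assumed for illustration)
N, dt = 10, 1e-4          # number of current steps, step duration [s]
R, C = 1.0e3, 1.0e-7      # access resistance [ohm], double-layer capacitance [F]
Q_target = 1.0e-7         # charge to deliver [C]

# Decision variables: x = [i_1, ..., i_N, v_max]
c = np.zeros(N + 1)
c[-1] = 1.0               # minimize the peak voltage v_max

# Voltage during step t: R*i_t + (dt/C)*sum_{s<=t} i_s  <=  v_max
A_ub = np.zeros((N, N + 1))
for t in range(N):
    A_ub[t, :t + 1] = dt / C          # accumulated-charge term
    A_ub[t, t] += R                   # resistive term of the current step
    A_ub[t, -1] = -1.0
b_ub = np.zeros(N)

# Deliver exactly the target charge
A_eq = np.concatenate([np.full(N, dt), [0.0]])[None, :]
b_eq = [Q_target]

bounds = [(0.0, None)] * N + [(None, None)]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print("peak voltage [V]:", res.x[-1])
print("current steps [uA]:", np.round(res.x[:N] * 1e6, 2))
```

With this toy model the optimizer typically returns current steps that decrease over time, which is the qualitative behaviour a stepped waveform is intended to exploit.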
Lyubetsky, Vassily; Gershgorin, Roman; Gorbunov, Konstantin
2017-12-06
Chromosome structure is a very limited model of the genome including the information about its chromosomes such as their linear or circular organization, the order of genes on them, and the DNA strand encoding a gene. Gene lengths, nucleotide composition, and intergenic regions are ignored. Although highly incomplete, such structure can be used in many cases, e.g., to reconstruct phylogeny and evolutionary events, to identify gene synteny, regulatory elements and promoters (considering highly conserved elements), etc. Three problems are considered; all assume unequal gene content and the presence of gene paralogs. The distance problem is to determine the minimum number of operations required to transform one chromosome structure into another and the corresponding transformation itself including the identification of paralogs in two structures. We use the DCJ model which is one of the most studied combinatorial rearrangement models. Double-, sesqui-, and single-operations as well as deletion and insertion of a chromosome region are considered in the model; the single ones comprise cut and join. In the reconstruction problem, a phylogenetic tree with chromosome structures in the leaves is given. It is necessary to assign the structures to inner nodes of the tree to minimize the sum of distances between terminal structures of each edge and to identify the mutual paralogs in a fairly large set of structures. A linear algorithm is known for the distance problem without paralogs, while the presence of paralogs makes it NP-hard. If paralogs are allowed but the insertion and deletion operations are missing (and special constraints are imposed), the reduction of the distance problem to integer linear programming is known. Apparently, the reconstruction problem is NP-hard even in the absence of paralogs. The problem of contigs is to find the optimal arrangements for each given set of contigs, which also includes the mutual identification of paralogs. We proved that these problems can be reduced to integer linear programming formulations, which allows an algorithm to redefine the problems to implement a very special case of the integer linear programming tool. The results were tested on synthetic and biological samples. Three well-known problems were reduced to a very special case of integer linear programming, which is a new method of their solutions. Integer linear programming is clearly among the main computational methods and, as generally accepted, is fast on average; in particular, computation systems specifically targeted at it are available. The challenges are to reduce the size of the corresponding integer linear programming formulations and to incorporate a more detailed biological concept in our model of the reconstruction.
A simple approach to optimal control of invasive species.
Hastings, Alan; Hall, Richard J; Taylor, Caz M
2006-12-01
The problem of invasive species and their control is one of the most pressing applied issues in ecology today. We developed simple approaches based on linear programming for determining the optimal removal strategies of different stage or age classes for control of invasive species that are still in a density-independent phase of growth. We illustrate the application of this method to the specific example of invasive Spartina alterniflora in Willapa Bay, WA. For all such systems, linear programming shows in general that the optimal strategy in any time step is to prioritize removal of a single age or stage class. The optimal strategy adjusts which class is the focus of control through time and can be much more cost effective than prioritizing removal of the same stage class each year.
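A one-step toy version of such a removal LP is sketched below; the stage-structured projection matrix, costs, and budget are invented for illustration and are not the Spartina parameters. Note how the LP solution concentrates all effort on a single stage class, mirroring the qualitative conclusion stated above.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical stage-structured (Leslie-type) projection matrix and current abundances
L = np.array([[0.0, 1.5, 4.0],      # fecundities
              [0.6, 0.0, 0.0],      # survival/transition probabilities
              [0.0, 0.7, 0.9]])
n = np.array([500.0, 300.0, 200.0])     # individuals per stage class
cost = np.array([1.0, 3.0, 8.0])        # removal cost per individual in each stage
budget = 900.0

# Choose removals r (0 <= r <= n, cost'r <= budget) to minimize next year's total
# population 1'L(n - r); equivalently, maximize (1'L) r.
w = L.sum(axis=0)                        # next-year contribution of one individual per stage
res = linprog(c=-w, A_ub=cost[None, :], b_ub=[budget],
              bounds=list(zip(np.zeros(3), n)))
print("removals per stage:", np.round(res.x, 1))
print("population next year:", w @ (n - res.x))
```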
Evaluating a Policing Strategy Intended to Disrupt an Illicit Street-Level Drug Market
ERIC Educational Resources Information Center
Corsaro, Nicholas; Brunson, Rod K.; McGarrell, Edmund F.
2010-01-01
The authors examined a strategic policing initiative that was implemented in a high crime Nashville, Tennessee neighborhood by utilizing a mixed-methodological evaluation approach in order to provide (a) a descriptive process assessment of program fidelity; (b) an interrupted time-series analysis relying upon generalized linear models; (c)…
Application of a Non-linear Program to the Establishment of a Hub and Spoke System in Africa
2010-06-01
barriers, religious fanaticism, and hostage taking (King, 1994). Perhaps the greatest impact on the history of Africa and its relations with the United......areas of commerce, light manufacturing, and the tourism industry (United States Department of State, 2009). Generally speaking, Kenya maintains a
1982-12-21
and W. T. ZIEMBA (1981). Introduction to concave and generalized concave functions. In Generalized Concavity in Optimization and Economics (S. Schaible and W. T. Ziemba, eds.), pp. 21-50. Academic Press, New York. BANK, B., J. GUDDAT, D. KLATTE, B. KUMMER, and K. TAMMER (1982). Non-Linear
Program Development to Study Faired Towlines.
1980-02-01
"Automobiles under Crash Loading", IIT Research Institute, October 1977. [19] Antman, S. S., "Ordinary Differential Equations of Non-Linear Elas...1976. [20] Green, A. E. and N. Laws, "A General Theory of Rods", Proceedings of the Royal Society (London) A293, pp. 145-155, 1966. [21] Antman, S. S
Generalized Fluid System Simulation Program (GFSSP) - Version 6
NASA Technical Reports Server (NTRS)
Majumdar, Alok; LeClair, Andre; Moore, Ric; Schallhorn, Paul
2015-01-01
The Generalized Fluid System Simulation Program (GFSSP) is a finite-volume based general-purpose computer program for analyzing steady state and time-dependent flow rates, pressures, temperatures, and concentrations in a complex flow network. The program is capable of modeling real fluids with phase changes, compressibility, mixture thermodynamics, conjugate heat transfer between solid and fluid, fluid transients, pumps, compressors, flow control valves and external body forces such as gravity and centrifugal force. The thermo-fluid system to be analyzed is discretized into nodes, branches, and conductors. The scalar properties such as pressure, temperature, and concentrations are calculated at nodes. Mass flow rates and heat transfer rates are computed in branches and conductors. The graphical user interface allows users to build their models using the 'point, drag, and click' method; the users can also run their models and post-process the results in the same environment. The integrated fluid library supplies thermodynamic and thermo-physical properties of 36 fluids, and 24 different resistance/source options are provided for modeling momentum sources or sinks in the branches. Users can introduce new physics, non-linear and time-dependent boundary conditions through user subroutines.
NASA Astrophysics Data System (ADS)
Bezruczko, N.; Stanley, T.; Battle, M.; Latty, C.
2016-11-01
Despite broad sweeping pronouncements by international research organizations that social sciences are being integrated into global research programs, little attention has been directed toward obstacles blocking productive collaborations. In particular, social sciences routinely implement nonlinear, ordinal measures, which fundamentally inhibit integration with overarching scientific paradigms. The widely promoted general linear model in contemporary social science methods is largely based on untransformed scores and ratings, which are neither objective nor linear. This issue has historically separated physical and social sciences, which this report now asserts is unnecessary. In this research, nonlinear, subjective caregiver ratings of confidence to care for children supported by complex, medical technologies were transformed to an objective scale defined by logits (N=70). Transparent linear units from this transformation provided foundational insights into measurement properties of a social-humanistic caregiving construct, which clarified physical and social caregiver implications. Parameterized items and ratings were also subjected to multivariate hierarchical analysis, then decomposed to demonstrate theoretical coherence (R2 >.50), which provided further support for convergence of mathematical parameterization, physical expectations, and a social-humanistic construct. These results present substantial support for improving integration of social sciences with contemporary scientific research programs by emphasizing construction of common variables with objective, linear units.
Computer user's manual for a generalized curve fit and plotting program
NASA Technical Reports Server (NTRS)
Schlagheck, R. A.; Beadle, B. D., II; Dolerhie, B. D., Jr.; Owen, J. W.
1973-01-01
A FORTRAN coded program has been developed for generating plotted output graphs on 8-1/2 by 11-inch paper. The program is designed to be used by engineers, scientists, and non-programming personnel on any IBM 1130 system that includes a 1627 plotter. The program has been written to provide a fast and efficient method of displaying plotted data without having to develop any additional software. Various output options are available to the program user for displaying data in four different types of formatted plots. These options include discrete linear, continuous, and histogram graphical outputs. The manual contains information about the use and operation of this program. A mathematical description of the least squares goodness of fit test is presented. A program listing is also included.
An Instructional Note on Linear Programming--A Pedagogically Sound Approach.
ERIC Educational Resources Information Center
Mitchell, Richard
1998-01-01
Discusses the place of linear programming in college curricula and the advantages of using linear-programming software. Lists important characteristics of computer software used in linear programming for more effective teaching and learning. (ASK)
Liu, Jing; Duan, Yongrui; Sun, Min
2017-01-01
This paper introduces a symmetric version of the generalized alternating direction method of multipliers for two-block separable convex programming with linear equality constraints, which inherits the advantages of the classical alternating direction method of multipliers (ADMM), and which extends the feasible set of the relaxation factor α of the generalized ADMM to the infinite interval [Formula: see text]. Under the conditions that the objective function is convex and the solution set is nonempty, we establish the convergence results of the proposed method, including the global convergence, the worst-case [Formula: see text] convergence rate in both the ergodic and the non-ergodic senses, where k denotes the iteration counter. Numerical experiments to decode a sparse signal arising in compressed sensing are included to illustrate the efficiency of the new method.
On 2- and 3-person games on polyhedral sets
DOE Office of Scientific and Technical Information (OSTI.GOV)
Belenky, A.S.
1994-12-31
Special classes of 3 person games are considered where the sets of players' allowable strategies are polyhedral and the payoff functions are defined as maxima, on a polyhedral set, of certain kind of sums of linear and bilinear functions. Necessary and sufficient conditions, which are easy to verify, for a Nash point in these games are established, and a finite method, based on these conditions, for calculating Nash points is proposed. It is shown that the game serves as a generalization of a model for a problem of waste products evacuation from a territory. The method makes it possible to reduce calculation of a Nash point to solving some linear and quadratic programming problems formulated on the basis of the original 3-person game. A class of 2-person games on connected polyhedral sets is considered, with the payoff function being a sum of two linear functions and one bilinear function. Necessary and sufficient conditions are established for the min-max, the max-min, and for a certain equilibrium. It is shown that the corresponding points can be calculated from auxiliary linear programming problems formulated on the basis of the master game.
IESIP - AN IMPROVED EXPLORATORY SEARCH TECHNIQUE FOR PURE INTEGER LINEAR PROGRAMMING PROBLEMS
NASA Technical Reports Server (NTRS)
Fogle, F. R.
1994-01-01
IESIP, an Improved Exploratory Search Technique for Pure Integer Linear Programming Problems, addresses the problem of optimizing an objective function of one or more variables subject to a set of confining functions or constraints by a method called discrete optimization or integer programming. Integer programming is based on a specific form of the general linear programming problem in which all variables in the objective function and all variables in the constraints are integers. While more difficult, integer programming is required for accuracy when modeling systems with small numbers of components such as the distribution of goods, machine scheduling, and production scheduling. IESIP establishes a new methodology for solving pure integer programming problems by utilizing a modified version of the univariate exploratory move developed by Robert Hooke and T.A. Jeeves. IESIP also takes some of its technique from the greedy procedure and the idea of unit neighborhoods. A rounding scheme uses the continuous solution found by traditional methods (simplex or other suitable technique) and creates a feasible integer starting point. The Hooke and Jeeves exploratory search is modified to accommodate integers and constraints and is then employed to determine an optimal integer solution from the feasible starting solution. The user-friendly IESIP allows for rapid solution of problems up to 10 variables in size (limited by DOS allocation). Sample problems compare IESIP solutions with the traditional branch-and-bound approach. IESIP is written in Borland's TURBO Pascal for IBM PC series computers and compatibles running DOS. Source code and an executable are provided. The main memory requirement for execution is 25K. This program is available on a 5.25 inch 360K MS DOS format diskette. IESIP was developed in 1990. IBM is a trademark of International Business Machines. TURBO Pascal is registered by Borland International.
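A stripped-down Python sketch of the rounding-plus-exploratory-search idea is given below; IESIP itself is TURBO Pascal, and the feasibility repair, neighbourhood moves, and example problem here are simplified illustrations rather than the IESIP algorithm verbatim.

```python
import numpy as np

def integer_exploratory_search(obj, feasible, x_cont, max_iter=1000):
    """Round a continuous LP solution, then make Hooke-Jeeves-style +/-1 unit moves.

    obj:      function to maximize over integer points
    feasible: function returning True if an integer point satisfies all constraints
    x_cont:   continuous (relaxed) solution used as the starting point
    """
    x = np.round(x_cont).astype(int)
    # If rounding broke feasibility, back off the largest coordinate toward zero
    # (this simple repair assumes shrinking toward the origin restores feasibility)
    while not feasible(x):
        j = int(np.argmax(np.abs(x)))
        x[j] -= np.sign(x[j])

    for _ in range(max_iter):
        improved = False
        for j in range(len(x)):                 # univariate exploratory moves
            for step in (+1, -1):
                trial = x.copy()
                trial[j] += step
                if feasible(trial) and obj(trial) > obj(x):
                    x, improved = trial, True
        if not improved:                        # no neighbouring integer point is better
            break
    return x

# Example: maximize 5x + 4y s.t. 6x + 4y <= 24, x + 2y <= 6, x, y >= 0 integer
obj = lambda v: 5 * v[0] + 4 * v[1]
feas = lambda v: (v >= 0).all() and 6 * v[0] + 4 * v[1] <= 24 and v[0] + 2 * v[1] <= 6
x_lp = np.array([3.0, 1.5])                     # LP-relaxation optimum of this example
print(integer_exploratory_search(obj, feas, x_lp))
# -> [2 2], a unit-move local optimum; the global integer optimum here is [4 0],
#    which is why IESIP-style heuristics are benchmarked against branch-and-bound.
```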
Gstat: a program for geostatistical modelling, prediction and simulation
NASA Astrophysics Data System (ADS)
Pebesma, Edzer J.; Wesseling, Cees G.
1998-01-01
Gstat is a computer program for variogram modelling, and geostatistical prediction and simulation. It provides a generic implementation of the multivariable linear model with trends modelled as a linear function of coordinate polynomials or of user-defined base functions, and independent or dependent, geostatistically modelled, residuals. Simulation in gstat comprises conditional or unconditional (multi-) Gaussian sequential simulation of point values or block averages, or (multi-) indicator sequential simulation. Besides many of the popular options found in other geostatistical software packages, gstat offers the unique combination of (i) an interactive user interface for modelling variograms and generalized covariances (residual variograms), that uses the device-independent plotting program gnuplot for graphical display, (ii) support for several ascii and binary data and map file formats for input and output, (iii) a concise, intuitive and flexible command language, (iv) user customization of program defaults, (v) no built-in limits, and (vi) free, portable ANSI-C source code. This paper describes the class of problems gstat can solve, and addresses aspects of efficiency and implementation, managing geostatistical projects, and relevant technical details.
ERIC Educational Resources Information Center
Dissemination and Assessment Center for Bilingual Education, Austin, TX.
This is one of a series of student booklets designed for use in a bilingual mathematics program in grades 6-8. The general format is to present each page in both Spanish and English. The mathematical topics in this booklet include liquid, dry, linear, weight, and time measures. (MK)
Profile-Likelihood Approach for Estimating Generalized Linear Mixed Models with Factor Structures
ERIC Educational Resources Information Center
Jeon, Minjeong; Rabe-Hesketh, Sophia
2012-01-01
In this article, the authors suggest a profile-likelihood approach for estimating complex models by maximum likelihood (ML) using standard software and minimal programming. The method works whenever setting some of the parameters of the model to known constants turns the model into a standard model. An important class of models that can be…
Development of Robotics Applications in a Solid Propellant Mixing Laboratory
1988-06-01
implementation of robotic hardware and software into a laboratory environment requires a carefully structured series of phases which examines, in...strategy. The general methodology utilized in this project is discussed in Appendix A. The proposed laboratory robotics development program was structured... [Outline fragments: Accessibility; Potential modifications; Safety precautions; (e) Robot Transport: Slider mechanisms, Linear tracks, Gantry configuration, Mobility; (f)...]
Monolithic ceramic analysis using the SCARE program
NASA Technical Reports Server (NTRS)
Manderscheid, Jane M.
1988-01-01
The Structural Ceramics Analysis and Reliability Evaluation (SCARE) computer program calculates the fast fracture reliability of monolithic ceramic components. The code is a post-processor to the MSC/NASTRAN general purpose finite element program. The SCARE program automatically accepts the MSC/NASTRAN output necessary to compute reliability. This includes element stresses, temperatures, volumes, and areas. The SCARE program computes two-parameter Weibull strength distributions from input fracture data for both volume and surface flaws. The distributions can then be used to calculate the reliability of geometrically complex components subjected to multiaxial stress states. Several fracture criteria and flaw types are available for selection by the user, including out-of-plane crack extension theories. The theoretical basis for the reliability calculations was proposed by Batdorf. These models combine linear elastic fracture mechanics (LEFM) with Weibull statistics to provide a mechanistic failure criterion. Other fracture theories included in SCARE are the normal stress averaging technique and the principle of independent action. The objective of this presentation is to summarize these theories, including their limitations and advantages, and to provide a general description of the SCARE program, along with example problems.
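The reliability bookkeeping at the heart of such codes can be summarized compactly: with a two-parameter Weibull volume-flaw model, each element contributes a risk-of-rupture term and the element survival probabilities multiply. The sketch below is a schematic of that calculation for uniaxial element stresses, without the Batdorf multiaxial treatment, and is not the SCARE code; all numbers are hypothetical.

```python
import numpy as np

def weibull_fast_fracture_reliability(stresses, volumes, m, sigma_0):
    """Two-parameter Weibull volume-flaw survival probability for a discretized part.

    stresses: element stresses (only tensile stresses contribute to failure risk)
    volumes:  element volumes (units consistent with sigma_0's unit-volume normalization)
    m:        Weibull modulus
    sigma_0:  Weibull scale parameter referenced to unit volume
    """
    sigma = np.clip(np.asarray(stresses, dtype=float), 0.0, None)
    risk_of_rupture = np.sum(np.asarray(volumes, dtype=float) * (sigma / sigma_0) ** m)
    return np.exp(-risk_of_rupture)

# Hypothetical element data, e.g. taken from a finite element post-processing step
stress = [180.0, 220.0, 90.0, -40.0]     # MPa; the compressive element contributes nothing
volume = [120.0, 80.0, 200.0, 150.0]     # mm^3
print(weibull_fast_fracture_reliability(stress, volume, m=10.0, sigma_0=400.0))
```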
Tools for Basic Statistical Analysis
NASA Technical Reports Server (NTRS)
Luz, Paul L.
2005-01-01
Statistical Analysis Toolset is a collection of eight Microsoft Excel spreadsheet programs, each of which performs calculations pertaining to an aspect of statistical analysis. These programs present input and output data in user-friendly, menu-driven formats, with automatic execution. The following types of calculations are performed: Descriptive statistics are computed for a set of data x(i) (i = 1, 2, 3 . . . ) entered by the user. Normal Distribution Estimates will calculate the statistical value that corresponds to cumulative probability values, given a sample mean and standard deviation of the normal distribution. Normal Distribution from two Data Points will extend and generate a cumulative normal distribution for the user, given two data points and their associated probability values. Two programs perform two-way analysis of variance (ANOVA) with no replication or generalized ANOVA for two factors with four levels and three repetitions. Linear Regression-ANOVA will fit data to the linear equation y=f(x) and will do an ANOVA to check its significance.
NASA Technical Reports Server (NTRS)
Nakazawa, Shohei
1991-01-01
Formulations and algorithms implemented in the MHOST finite element program are discussed. The code uses a novel concept of the mixed iterative solution technique for the efficient 3-D computations of turbine engine hot section components. The general framework of variational formulation and solution algorithms are discussed which were derived from the mixed three field Hu-Washizu principle. This formulation enables the use of nodal interpolation for coordinates, displacements, strains, and stresses. Algorithmic description of the mixed iterative method includes variations for the quasi static, transient dynamic and buckling analyses. The global-local analysis procedure referred to as the subelement refinement is developed in the framework of the mixed iterative solution, of which the detail is presented. The numerically integrated isoparametric elements implemented in the framework is discussed. Methods to filter certain parts of strain and project the element discontinuous quantities to the nodes are developed for a family of linear elements. Integration algorithms are described for linear and nonlinear equations included in MHOST program.
Gritti, Fabrice
2016-11-18
A new class of gradient liquid chromatography (GLC) is proposed and its performance is analyzed from a theoretical viewpoint. During the course of such gradients, both the solvent strength and the column temperature are simultaneously changed in time and space. The solvent and temperature gradients propagate along the chromatographic column at their own independent linear velocities. This class of gradient is called combined solvent- and temperature-programmed gradient liquid chromatography (CST-GLC). The general expressions of the retention time, retention factor, and of the temporal peak width of the analytes at elution in CST-GLC are derived for linear solvent strength (LSS) retention models, modified van't Hoff retention behavior, linear and non-distorted solvent gradients, and for linear temperature gradients. In these conditions, the theory predicts that CST-GLC is equivalent to a unique and apparent dynamic solvent gradient. The apparent solvent gradient steepness is the sum of the solvent and temperature steepness. The apparent solvent linear velocity is the reciprocal of the steepness-averaged sum of the reciprocal of the actual solvent and temperature linear velocities. The advantage of CST-GLC over conventional GLC is demonstrated for the resolution of protein digests (peptide mapping) when applying smooth, retained, and linear acetonitrile gradients in combination with a linear temperature gradient (from 20°C to 90°C) using 300μm×150mm capillary columns packed with sub-2 μm particles. The benefit of CST-GLC is demonstrated when the temperature gradient propagates at the same velocity as the chromatographic speed. The experimental proof-of-concept for the realization of temperature ramps propagating at a finite and constant linear velocity is also briefly described. Copyright © 2016 Elsevier B.V. All rights reserved.
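One compact way to write the two relations stated in the abstract, with b_s and b_T the solvent and temperature gradient steepnesses and u_s and u_T their propagation velocities (the symbols are ours, not necessarily the paper's), is:

```latex
\[
b_{\mathrm{app}} = b_{s} + b_{T},
\qquad
\frac{1}{u_{\mathrm{app}}}
  \;=\; \frac{b_{s}}{b_{s}+b_{T}}\,\frac{1}{u_{s}}
  \;+\; \frac{b_{T}}{b_{s}+b_{T}}\,\frac{1}{u_{T}} .
\]
```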
Robust Control Design via Linear Programming
NASA Technical Reports Server (NTRS)
Keel, L. H.; Bhattacharyya, S. P.
1998-01-01
This paper deals with the problem of synthesizing or designing a feedback controller of fixed dynamic order. The closed loop specifications considered here are given in terms of a target performance vector representing a desired set of closed loop transfer functions connecting various signals. In general these point targets are unattainable with a fixed order controller. By enlarging the target from a fixed point set to an interval set the solvability conditions with a fixed order controller are relaxed and a solution is more easily enabled. Results from the parametric robust control literature can be used to design the interval target family so that the performance deterioration is acceptable, even when plant uncertainty is present. It is shown that it is possible to devise a computationally simple linear programming approach that attempts to meet the desired closed loop specifications.
Algorithm 937: MINRES-QLP for Symmetric and Hermitian Linear Equations and Least-Squares Problems.
Choi, Sou-Cheng T; Saunders, Michael A
2014-02-01
We describe algorithm MINRES-QLP and its FORTRAN 90 implementation for solving symmetric or Hermitian linear systems or least-squares problems. If the system is singular, MINRES-QLP computes the unique minimum-length solution (also known as the pseudoinverse solution), which generally eludes MINRES. In all cases, it overcomes a potential instability in the original MINRES algorithm. A positive-definite pre-conditioner may be supplied. Our FORTRAN 90 implementation illustrates a design pattern that allows users to make problem data known to the solver but hidden and secure from other program units. In particular, we circumvent the need for reverse communication. Example test programs input and solve real or complex problems specified in Matrix Market format. While we focus here on a FORTRAN 90 implementation, we also provide and maintain MATLAB versions of MINRES and MINRES-QLP.
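For quick experimentation from Python, SciPy ships the classic MINRES (not MINRES-QLP, so the minimum-length behaviour on singular systems is not guaranteed); the toy symmetric indefinite system below is an arbitrary illustration.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import minres

# Symmetric indefinite tridiagonal test matrix and a simple right-hand side
n = 200
A = diags([-1.0, 1.5, -1.0], offsets=[-1, 0, 1], shape=(n, n))
b = np.ones(n)

x, info = minres(A, b)                 # info == 0 means the iteration converged
print(info, np.linalg.norm(A @ x - b))
```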
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gearhart, Jared Lee; Adair, Kristin Lynn; Durfee, Justin David.
When developing linear programming models, issues such as budget limitations, customer requirements, or licensing may preclude the use of commercial linear programming solvers. In such cases, one option is to use an open-source linear programming solver. A survey of linear programming tools was conducted to identify potential open-source solvers. From this survey, four open-source solvers were tested using a collection of linear programming test problems and the results were compared to IBM ILOG CPLEX Optimizer (CPLEX) [1], an industry standard. The solvers considered were: COIN-OR Linear Programming (CLP) [2], [3], GNU Linear Programming Kit (GLPK) [4], lp_solve [5] and Modular In-core Nonlinear Optimization System (MINOS) [6]. As no open-source solver outperforms CPLEX, this study demonstrates the power of commercial linear programming software. CLP was found to be the top performing open-source solver considered in terms of capability and speed. GLPK also performed well but cannot match the speed of CLP or CPLEX. lp_solve and MINOS were considerably slower and encountered issues when solving several test problems.
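As a small illustration of driving one of the open-source code paths above from Python, the sketch below builds a tiny LP with PuLP and hands it to the bundled COIN-OR CBC solver, whose LP relaxations are solved by CLP; the model coefficients are arbitrary.

```python
from pulp import LpProblem, LpVariable, LpMaximize, value, PULP_CBC_CMD

# A tiny product-mix LP with made-up coefficients
prob = LpProblem("product_mix", LpMaximize)
x = LpVariable("x", lowBound=0)
y = LpVariable("y", lowBound=0)

prob += 3 * x + 5 * y, "profit"        # objective
prob += 2 * x + y <= 14                # machine-hours constraint (illustrative)
prob += x + 3 * y <= 15                # material constraint (illustrative)

prob.solve(PULP_CBC_CMD(msg=False))    # COIN-OR CBC/CLP backend bundled with PuLP
print(value(x), value(y), value(prob.objective))
```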
2012-01-01
Background The Poisson-Boltzmann (PB) equation and its linear approximation have been widely used to describe biomolecular electrostatics. Generalized Born (GB) models offer a convenient computational approximation for the more fundamental approach based on the Poisson-Boltzmann equation, and allow estimation of pairwise contributions to electrostatic effects in the molecular context. Results We have implemented in a single program most common analyses of the electrostatic properties of proteins. The program first computes generalized Born radii via a surface integral, and then it uses generalized Born radii (using a finite radius test particle) to perform electrostatic analyses. In particular the output of the program entails, depending on the user's requirements: 1) the generalized Born radius of each atom; 2) the electrostatic solvation free energy; 3) the electrostatic forces on each atom (currently in a developmental stage); 4) the pH-dependent properties (total charge and pH-dependent free energy of folding in the pH range -2 to 18); 5) the pKa of all ionizable groups; 6) the electrostatic potential at the surface of the molecule; 7) the electrostatic potential in a volume surrounding the molecule. Conclusions Although at the expense of limited flexibility, the program provides most common analyses while requiring only a single input file in PQR format. The results obtained are comparable to those obtained using state-of-the-art Poisson-Boltzmann solvers. A Linux executable with example input and output files is provided as supplementary material. PMID:22536964
NASA Technical Reports Server (NTRS)
Gupta, K. K.
1997-01-01
A multidisciplinary, finite element-based, highly graphics-oriented, linear and nonlinear analysis capability that includes such disciplines as structures, heat transfer, linear aerodynamics, computational fluid dynamics, and controls engineering has been achieved by integrating several new modules in the original STARS (STructural Analysis RoutineS) computer program. Each individual analysis module is general-purpose in nature and is effectively integrated to yield aeroelastic and aeroservoelastic solutions of complex engineering problems. Examples of advanced NASA Dryden Flight Research Center projects analyzed by the code in recent years include the X-29A, F-18 High Alpha Research Vehicle/Thrust Vectoring Control System, B-52/Pegasus Generic Hypersonics, National AeroSpace Plane (NASP), SR-71/Hypersonic Launch Vehicle, and High Speed Civil Transport (HSCT) projects. Extensive graphics capabilities exist for convenient model development and postprocessing of analysis results. The program is written in modular form in standard FORTRAN language to run on a variety of computers, such as the IBM RISC/6000, SGI, DEC, Cray, and personal computer; associated graphics codes use OpenGL and IBM/graPHIGS language for color depiction. This program is available from COSMIC, the NASA agency for distribution of computer programs.
The Association Between Health Program Participation and Employee Retention.
Mitchell, Rebecca J; Ozminkowski, Ronald J; Hartley, Stephen K
2016-09-01
Using health plan membership as a proxy for employee retention, the objective of this study was to examine whether use of health promotion programs was associated with employee retention. Propensity score weighted generalized linear regression models were used to estimate the association between telephonic programs or health risk surveys and retention. Analyses were conducted with six study samples based on type of program participation. Retention rates were highest for employees with either telephonic program activity or health risk surveys and lowest for employees who did not participate in any interventions. Participants ranged from 71% more likely to 5% less likely to remain with their employers compared with nonparticipants, depending on the sample used in analyses. Using health promotion programs in combination with health risk surveys may lead to improvements in employee retention.
User's manual for LINEAR, a FORTRAN program to derive linear aircraft models
NASA Technical Reports Server (NTRS)
Duke, Eugene L.; Patterson, Brian P.; Antoniewicz, Robert F.
1987-01-01
This report documents a FORTRAN program that provides a powerful and flexible tool for the linearization of aircraft models. The program LINEAR numerically determines a linear system model using nonlinear equations of motion and a user-supplied nonlinear aerodynamic model. The system model determined by LINEAR consists of matrices for both state and observation equations. The program has been designed to allow easy selection and definition of the state, control, and observation variables to be used in a particular model.
LQR-Based Optimal Distributed Cooperative Design for Linear Discrete-Time Multiagent Systems.
Zhang, Huaguang; Feng, Tao; Liang, Hongjing; Luo, Yanhong
2017-03-01
In this paper, a novel linear quadratic regulator (LQR)-based optimal distributed cooperative design method is developed for synchronization control of general linear discrete-time multiagent systems on a fixed, directed graph. Sufficient conditions are derived for synchronization, which restrict the graph eigenvalues into a bounded circular region in the complex plane. The synchronizing speed issue is also considered, and it turns out that the synchronizing region reduces as the synchronizing speed becomes faster. To obtain more desirable synchronizing capacity, the weighting matrices are selected by sufficiently utilizing the guaranteed gain margin of the optimal regulators. Based on the developed LQR-based cooperative design framework, an approximate dynamic programming technique is successfully introduced to overcome the (partially or completely) model-free cooperative design for linear multiagent systems. Finally, two numerical examples are given to illustrate the effectiveness of the proposed design methods.
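The single-agent building block of that design, a discrete-time LQR gain obtained from the algebraic Riccati equation, can be sketched as follows; the matrices are arbitrary examples, and the graph-dependent coupling gains of the multiagent scheme are not reproduced here.

```python
import numpy as np
from scipy.linalg import solve_discrete_are

# Example agent dynamics x_{k+1} = A x_k + B u_k and LQR weights (arbitrary values)
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.005],
              [0.1]])
Q = np.diag([1.0, 0.1])
R = np.array([[0.5]])

# Discrete algebraic Riccati equation and the optimal feedback gain of u = -K x
P = solve_discrete_are(A, B, Q, R)
K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)

print("K =", K)
print("closed-loop eigenvalues:", np.linalg.eigvals(A - B @ K))
```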
Housing first for homeless persons with active addiction: are we overreaching?
Kertesz, Stefan G; Crouch, Kimberly; Milby, Jesse B; Cusimano, Robert E; Schumacher, Joseph E
2009-06-01
More than 350 communities in the United States have committed to ending chronic homelessness. One nationally prominent approach, Housing First, offers early access to permanent housing without requiring completion of treatment or, for clients with addiction, proof of sobriety. This article reviews studies of Housing First and more traditional rehabilitative (e.g., "linear") recovery interventions, focusing on the outcomes obtained by both approaches for homeless individuals with addictive disorders. According to reviews of comparative trials and case series reports, Housing First reports document excellent housing retention, despite the limited amount of data pertaining to homeless clients with active and severe addiction. Several linear programs cite reductions in addiction severity but have shortcomings in long-term housing success and retention. This article suggests that the current research data are not sufficient to identify an optimal housing and rehabilitation approach for an important homeless subgroup. The research regarding Housing First and linear approaches can be strengthened in several ways, and policymakers should be cautious about generalizing the results of available Housing First studies to persons with active addiction when they enter housing programs.
SIMD Optimization of Linear Expressions for Programmable Graphics Hardware
Bajaj, Chandrajit; Ihm, Insung; Min, Jungki; Oh, Jinsang
2009-01-01
The increased programmability of graphics hardware allows efficient graphical processing unit (GPU) implementations of a wide range of general computations on commodity PCs. An important factor in such implementations is how to fully exploit the SIMD computing capacities offered by modern graphics processors. Linear expressions in the form of ȳ = Ax̄ + b̄, where A is a matrix, and x̄, ȳ and b̄ are vectors, constitute one of the most basic operations in many scientific computations. In this paper, we propose a SIMD code optimization technique that enables efficient shader codes to be generated for evaluating linear expressions. It is shown that performance can be improved considerably by efficiently packing arithmetic operations into four-wide SIMD instructions through reordering of the operations in linear expressions. We demonstrate that the presented technique can be used effectively for programming both vertex and pixel shaders for a variety of mathematical applications, including integrating differential equations and solving a sparse linear system of equations using iterative methods. PMID:19946569
Corominas, Albert; Fossas, Enric
2015-01-01
We assume a monopolistic market for a non-durable non-renewable resource such as crude oil, phosphates or fossil water. Stating the problem of obtaining optimal policies on extraction and pricing of the resource as a non-linear program allows general conclusions to be drawn under diverse assumptions about the demand curve, discount rates and length of the planning horizon. We compare the results with some common beliefs about the pace of exhaustion of this kind of resources.
ERIC Educational Resources Information Center
Williams, Daniel G.
Planners in multicounty rural areas can use the Rural Development, Activity Analysis Planning (RDAAP) model to try to influence the optimal growth of their areas among different general economic goals. The model implies that best industries for rural areas have: high proportion of imported inputs; low transportation costs; high value added/output…
1983-11-01
spectrum of the linear stability theory has multiple roots with zero real parts. Then the general forms of the amplitude equations may be found for given... [Contents fragments: "Dynamical Generation of Eastern Boundary Currents" (George Veronis); "Amplitude Equations" (Edward...); "...Associated Countercurrent" (Benoit Cushman-Roisin); "Turbulently Generated Eastern Boundary Currents" (Roger L. Hughes)]
Nathaniel E. Seavy; Suhel Quader; John D. Alexander; C. John Ralph
2005-01-01
The success of avian monitoring programs to effectively guide management decisions requires that studies be efficiently designed and data be properly analyzed. A complicating factor is that point count surveys often generate data with non-normal distributional properties. In this paper we review methods of dealing with deviations from normal assumptions, and we focus...
On the linear programming bound for linear Lee codes.
Astola, Helena; Tabus, Ioan
2016-01-01
Based on an invariance-type property of the Lee-compositions of a linear Lee code, additional equality constraints can be introduced to the linear programming problem of linear Lee codes. In this paper, we formulate this property in terms of an action of the multiplicative group of the field [Formula: see text] on the set of Lee-compositions. We show some useful properties of certain sums of Lee-numbers, which are the eigenvalues of the Lee association scheme, appearing in the linear programming problem of linear Lee codes. Using the additional equality constraints, we formulate the linear programming problem of linear Lee codes in a very compact form, leading to a fast execution, which allows the bounds to be computed efficiently for large parameter values of the linear codes.
Projective-Dual Method for Solving Systems of Linear Equations with Nonnegative Variables
NASA Astrophysics Data System (ADS)
Ganin, B. V.; Golikov, A. I.; Evtushenko, Yu. G.
2018-02-01
In order to solve an underdetermined system of linear equations with nonnegative variables, the projection of a given point onto its solutions set is sought. The dual of this problem—the problem of unconstrained maximization of a piecewise-quadratic function—is solved by Newton's method. The problem of unconstrained optimization dual of the regularized problem of finding the projection onto the solution set of the system is considered. A connection of duality theory and Newton's method with some known algorithms of projecting onto a standard simplex is shown. On the example of taking into account the specifics of the constraints of the transport linear programming problem, the possibility to increase the efficiency of calculating the generalized Hessian matrix is demonstrated. Some examples of numerical calculations using MATLAB are presented.
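For readers who want to experiment with the primal problem described above, the following is a minimal sketch (not the authors' projective-dual Newton method): it projects an assumed point p onto the set {x : Ax = b, x >= 0} directly with SciPy's SLSQP solver; the matrix A, right-hand side b, and p are random stand-ins.

```python
# A minimal sketch (not the paper's dual Newton method): project a point p onto
# {x : A x = b, x >= 0} by minimizing ||x - p||^2 directly with SciPy's SLSQP.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 8))        # underdetermined system (3 equations, 8 unknowns)
x_feas = rng.random(8)                 # build b from a known nonnegative point
b = A @ x_feas
p = rng.standard_normal(8)             # point to be projected

res = minimize(
    fun=lambda x: 0.5 * np.sum((x - p) ** 2),
    x0=np.maximum(p, 0.0),
    jac=lambda x: x - p,
    constraints=[{"type": "eq", "fun": lambda x: A @ x - b}],
    bounds=[(0.0, None)] * 8,
    method="SLSQP",
)
print("projection:", np.round(res.x, 4))
print("residual  :", np.linalg.norm(A @ res.x - b))
```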
CSOLNP: Numerical Optimization Engine for Solving Non-linearly Constrained Problems.
Zahery, Mahsa; Maes, Hermine H; Neale, Michael C
2017-08-01
We introduce the optimizer CSOLNP, which is a C++ implementation of the R package RSOLNP (Ghalanos & Theussl, 2012, Rsolnp: General non-linear optimization using augmented Lagrange multiplier method. R package version, 1) alongside some improvements. CSOLNP solves non-linearly constrained optimization problems using a Sequential Quadratic Programming (SQP) algorithm. CSOLNP, NPSOL (a very popular implementation of the SQP method in FORTRAN (Gill et al., 1986, User's guide for NPSOL (version 4.0): A Fortran package for nonlinear programming (No. SOL-86-2). Stanford, CA: Stanford University Systems Optimization Laboratory)), and SLSQP (another SQP implementation available as part of the NLOPT collection (Johnson, 2014, The NLopt nonlinear-optimization package. Retrieved from http://ab-initio.mit.edu/nlopt)) are three optimizers available in the OpenMx package. These optimizers are compared in terms of runtimes, final objective values, and memory consumption. A Monte Carlo analysis of the performance of the optimizers was performed on ordinal and continuous models with five variables and one or two factors. While the relative difference between the objective values is less than 0.5%, CSOLNP is in general faster than NPSOL and SLSQP for ordinal analysis. As for continuous data, none of the optimizers performs consistently faster than the others. In terms of memory usage, we used Valgrind's heap profiler tool, called Massif, on one-factor threshold models. CSOLNP and NPSOL consume the same amount of memory, while SLSQP uses 71 MB more memory than the other two optimizers.
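As a rough illustration of the kind of non-linearly constrained problem these SQP optimizers target, the sketch below solves a small toy problem with SciPy's SLSQP routine; the objective, constraints, and starting point are invented for the example and have nothing to do with the OpenMx models in the study.

```python
# Hedged illustration: a small nonlinearly constrained problem of the kind SQP
# methods handle, solved here with SciPy's SLSQP (not the OpenMx optimizers).
import numpy as np
from scipy.optimize import minimize

# minimize (x-1)^2 + (y-2.5)^2  subject to  x*y >= 1  and  x + y <= 4
objective = lambda v: (v[0] - 1.0) ** 2 + (v[1] - 2.5) ** 2
cons = [
    {"type": "ineq", "fun": lambda v: v[0] * v[1] - 1.0},   # x*y - 1 >= 0
    {"type": "ineq", "fun": lambda v: 4.0 - v[0] - v[1]},   # 4 - x - y >= 0
]
res = minimize(objective, x0=[2.0, 2.0], method="SLSQP", constraints=cons)
print(res.x, res.fun)
```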
Marchese Robinson, Richard L; Palczewska, Anna; Palczewski, Jan; Kidley, Nathan
2017-08-28
The ability to interpret the predictions made by quantitative structure-activity relationships (QSARs) offers a number of advantages. While QSARs built using nonlinear modeling approaches, such as the popular Random Forest algorithm, might sometimes be more predictive than those built using linear modeling approaches, their predictions have been perceived as difficult to interpret. However, a growing number of approaches have been proposed for interpreting nonlinear QSAR models in general and Random Forest in particular. In the current work, we compare the performance of Random Forest to those of two widely used linear modeling approaches: linear Support Vector Machines (SVMs) (or Support Vector Regression (SVR)) and partial least-squares (PLS). We compare their performance in terms of their predictivity as well as the chemical interpretability of the predictions using novel scoring schemes for assessing heat map images of substructural contributions. We critically assess different approaches for interpreting Random Forest models as well as for obtaining predictions from the forest. We assess the models on a large number of widely employed public-domain benchmark data sets corresponding to regression and binary classification problems of relevance to hit identification and toxicology. We conclude that Random Forest typically yields comparable or possibly better predictive performance than the linear modeling approaches and that its predictions may also be interpreted in a chemically and biologically meaningful way. In contrast to earlier work looking at interpretation of nonlinear QSAR models, we directly compare two methodologically distinct approaches for interpreting Random Forest models. The approaches for interpreting Random Forest assessed in our article were implemented using open-source programs that we have made available to the community. These programs are the rfFC package ( https://r-forge.r-project.org/R/?group_id=1725 ) for the R statistical programming language and the Python program HeatMapWrapper [ https://doi.org/10.5281/zenodo.495163 ] for heat map generation.
Approximating high-dimensional dynamics by barycentric coordinates with linear programming
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hirata, Yoshito, E-mail: yoshito@sat.t.u-tokyo.ac.jp; Aihara, Kazuyuki; Suzuki, Hideyuki
The increasing development of novel methods and techniques facilitates the measurement of high-dimensional time series but challenges our ability for accurate modeling and predictions. The use of a general mathematical model requires the inclusion of many parameters, which are difficult to be fitted for relatively short high-dimensional time series observed. Here, we propose a novel method to accurately model a high-dimensional time series. Our method extends the barycentric coordinates to high-dimensional phase space by employing linear programming, and allowing the approximation errors explicitly. The extension helps to produce free-running time-series predictions that preserve typical topological, dynamical, and/or geometric characteristics of the underlying attractors more accurately than the radial basis function model that is widely used. The method can be broadly applied, from helping to improve weather forecasting, to creating electronic instruments that sound more natural, and to comprehensively understanding complex biological data.
Approximating high-dimensional dynamics by barycentric coordinates with linear programming.
Hirata, Yoshito; Shiro, Masanori; Takahashi, Nozomu; Aihara, Kazuyuki; Suzuki, Hideyuki; Mas, Paloma
2015-01-01
The increasing development of novel methods and techniques facilitates the measurement of high-dimensional time series but challenges our ability for accurate modeling and predictions. The use of a general mathematical model requires the inclusion of many parameters, which are difficult to be fitted for relatively short high-dimensional time series observed. Here, we propose a novel method to accurately model a high-dimensional time series. Our method extends the barycentric coordinates to high-dimensional phase space by employing linear programming, and allowing the approximation errors explicitly. The extension helps to produce free-running time-series predictions that preserve typical topological, dynamical, and/or geometric characteristics of the underlying attractors more accurately than the radial basis function model that is widely used. The method can be broadly applied, from helping to improve weather forecasting, to creating electronic instruments that sound more natural, and to comprehensively understanding complex biological data.
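A minimal sketch of the underlying idea, under assumed notation: approximate a query point as a convex (barycentric) combination of library points while allowing an explicit L1 approximation error, and solve the resulting linear program with SciPy's linprog. The data are random stand-ins, and this is not the authors' full forecasting pipeline.

```python
# Sketch: barycentric weights with explicit L1 error, posed as a linear program.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
d, m = 5, 12                       # dimension, number of library points
X = rng.standard_normal((d, m))    # library points as columns
q = rng.standard_normal(d)         # query point to approximate

# variables z = [w (m weights), e (d error bounds)]; minimize sum(e)
c = np.concatenate([np.zeros(m), np.ones(d)])
A_ub = np.block([[X, -np.eye(d)], [-X, -np.eye(d)]])   #  Xw - q <= e, -(Xw - q) <= e
b_ub = np.concatenate([q, -q])
A_eq = np.concatenate([np.ones(m), np.zeros(d)])[None, :]   # weights sum to 1
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
              bounds=[(0, None)] * (m + d), method="highs")
w = res.x[:m]
print("weights sum:", w.sum(), " L1 error:", res.fun)
```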
Algorithm 937: MINRES-QLP for Symmetric and Hermitian Linear Equations and Least-Squares Problems
Choi, Sou-Cheng T.; Saunders, Michael A.
2014-01-01
We describe algorithm MINRES-QLP and its FORTRAN 90 implementation for solving symmetric or Hermitian linear systems or least-squares problems. If the system is singular, MINRES-QLP computes the unique minimum-length solution (also known as the pseudoinverse solution), which generally eludes MINRES. In all cases, it overcomes a potential instability in the original MINRES algorithm. A positive-definite pre-conditioner may be supplied. Our FORTRAN 90 implementation illustrates a design pattern that allows users to make problem data known to the solver but hidden and secure from other program units. In particular, we circumvent the need for reverse communication. Example test programs input and solve real or complex problems specified in Matrix Market format. While we focus here on a FORTRAN 90 implementation, we also provide and maintain MATLAB versions of MINRES and MINRES-QLP. PMID:25328255
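The sketch below is a hedged, modern stand-in: it solves a small symmetric indefinite system with SciPy's original MINRES routine (MINRES-QLP itself is the FORTRAN 90/MATLAB software described above), using an invented 3x3 matrix.

```python
# Hedged example: solving a symmetric indefinite system with SciPy's MINRES.
import numpy as np
from scipy.sparse.linalg import minres

A = np.array([[4.0, 1.0, 0.0],
              [1.0, -2.0, 1.0],
              [0.0, 1.0, 3.0]])     # symmetric, indefinite
b = np.array([1.0, 2.0, 3.0])
x, info = minres(A, b)
print(x, info)                      # info == 0 indicates convergence
```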
Neji, Radhouène; Besbes, Ahmed; Komodakis, Nikos; Deux, Jean-François; Maatouk, Mezri; Rahmouni, Alain; Bassez, Guillaume; Fleury, Gilles; Paragios, Nikos
2009-01-01
In this paper, we present a manifold clustering method for the classification of fibers obtained from diffusion tensor images (DTI) of the human skeletal muscle. Using a linear programming formulation of prototype-based clustering, we propose a novel fiber classification algorithm over manifolds that circumvents the necessity to embed the data in low dimensional spaces and automatically determines the number of clusters. Furthermore, we propose the use of angular Hilbertian metrics between multivariate normal distributions to define a family of distances between tensors that we generalize to fibers. These metrics are used to approximate the geodesic distances over the fiber manifold. We also discuss the case where only geodesic distances to a reduced set of landmark fibers are available. The experimental validation of the method is done using a manually annotated significant dataset of DTI of the calf muscle for healthy and diseased subjects.
Making a Difference in Science Education: The Impact of Undergraduate Research Programs
Eagan, M. Kevin; Hurtado, Sylvia; Chang, Mitchell J.; Garcia, Gina A.; Herrera, Felisha A.; Garibay, Juan C.
2014-01-01
To increase the numbers of underrepresented racial minority students in science, technology, engineering, and mathematics (STEM), federal and private agencies have allocated significant funding to undergraduate research programs, which have been shown to increase students' intentions of enrolling in graduate or professional school. Analyzing a longitudinal sample of 4,152 aspiring STEM majors who completed the 2004 Freshman Survey and 2008 College Senior Survey, this study utilizes multinomial hierarchical generalized linear modeling (HGLM) and propensity score matching techniques to examine how participation in undergraduate research affects STEM students' intentions to enroll in STEM and non-STEM graduate and professional programs. Findings indicate that participation in an undergraduate research program significantly improved students' probability of indicating plans to enroll in a STEM graduate program. PMID:25190821
Fatigue life estimation program for Part 23 airplanes, 'AFS.FOR'
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kaul, S.K.
1993-12-31
The purpose of this paper is to introduce to the general aviation industry a computer program which estimates the safe fatigue life of any Federal Aviation Regulation (FAR) Part 23 airplane. The algorithm uses the methodology (Miner's Linear Cumulative Damage Theory) and the various data presented in the Federal Aviation Administration (FAA) Report No. AFS-120-73-2, dated May 1973. The program is written in FORTRAN 77 language and is executable on a desk top personal computer. The program prompts the user for the input data needed and provides a variety of options for its intended use. The program is envisaged to be released through issuance of a FAA report, which will contain the appropriate comments, instructions, warnings and limitations.
Engdahl, Bo; Tambs, Kristian; Borchgrevink, Hans M; Hoffman, Howard J
2005-01-01
This study aims to describe the association between otoacoustic emissions (OAEs) and pure-tone hearing thresholds (PTTs) in an unscreened adult population (N =6415), to determine the efficiency by which TEOAEs and DPOAEs can identify ears with elevated PTTs, and to investigate whether a combination of DPOAE and TEOAE responses improves this performance. Associations were examined by linear regression analysis and ANOVA. Test performance was assessed by receiver operator characteristic (ROC) curves. The relation between OAEs and PTTs appeared curvilinear with a moderate degree of non-linearity. Combining DPOAEs and TEOAEs improved performance. Test performance depended on the cut-off thresholds defining elevated PTTs with optimal values between 25 and 45 dB HL, depending on frequency and type of OAE measure. The unique constitution of the present large sample, which reflects the general adult population, makes these results applicable to population-based studies and screening programs.
EZLP: An Interactive Computer Program for Solving Linear Programming Problems. Final Report.
ERIC Educational Resources Information Center
Jarvis, John J.; And Others
Designed for student use in solving linear programming problems, the interactive computer program described (EZLP) permits the student to input the linear programming model in exactly the same manner in which it would be written on paper. This report includes a brief review of the development of EZLP; narrative descriptions of program features,…
VENVAL : a plywood mill cost accounting program
Henry Spelter
1991-01-01
This report documents a package of computer programs called VENVAL. These programs prepare plywood mill data for a linear programming (LP) model that, in turn, calculates the optimum mix of products to make, given a set of technologies and market prices. (The software to solve a linear program is not provided and must be obtained separately.) Linear programming finds...
Ruikes, Franca G H; Zuidema, Sytse U; Akkermans, Reinier P; Assendelft, Willem J J; Schers, Henk J; Koopmans, Raymond T C M
2016-01-01
The increasing number of community-dwelling frail elderly people poses a challenge to general practice. We evaluated the effectiveness of a general practitioner-led extensive, multicomponent program integrating cure, care, and welfare for the prevention of functional decline. We performed a cluster controlled trial in 12 general practices in Nijmegen, the Netherlands. Community-dwelling frail elderly people aged ≥70 years were identified with the EASY-Care two-step older persons screening instrument. In 6 general practices, 287 frail elderly received care according to the CareWell primary care program. This consisted of proactive care planning, case management, medication reviews, and multidisciplinary team meetings with a general practitioner, practice and/or community nurse, elderly care physician, and social worker. In another 6 general practices, 249 participants received care as usual. The primary outcome was independence in functioning during (instrumental) activities of daily living (Katz-15 index). Secondary outcomes were quality of life [EuroQol (EQ5D+C) instrument], mental health and health-related social functioning (36-item RAND Short Form survey instrument), institutionalization, hospitalization, and mortality. Outcomes were assessed at baseline and at 12 months, and were analyzed with linear mixed-model analyses. A total of 204 participants (71.1%) in the intervention group and 165 participants (66.3%) in the control group completed the study. No differences between groups regarding independence in functioning and secondary outcomes were found. We found no evidence for the effectiveness of a multifaceted integrated care program in the prevention of adverse outcomes in community-dwelling frail elderly people. Large-scale implementation of this program is not advocated. © Copyright 2016 by the American Board of Family Medicine.
Ranking Forestry Investments With Parametric Linear Programming
Paul A. Murphy
1976-01-01
Parametric linear programming is introduced as a technique for ranking forestry investments under multiple constraints; it combines the advantages of simple ranking and linear programming as capital budgeting tools.
Investigating Integer Restrictions in Linear Programming
ERIC Educational Resources Information Center
Edwards, Thomas G.; Chelst, Kenneth R.; Principato, Angela M.; Wilhelm, Thad L.
2015-01-01
Linear programming (LP) is an application of graphing linear systems that appears in many Algebra 2 textbooks. Although not explicitly mentioned in the Common Core State Standards for Mathematics, linear programming blends seamlessly into modeling with mathematics, the fourth Standard for Mathematical Practice (CCSSI 2010, p. 7). In solving a…
ERIC Educational Resources Information Center
Dissemination and Assessment Center for Bilingual Education, Austin, TX.
This is one of a series of student booklets designed for use in a bilingual mathematics program in grades 6-8. The general format is to present each page in both Spanish and English. The mathematical topics in this booklet include measurement, perimeter, and area. (MK)
Forced vibration analysis of rotating cyclic structures in NASTRAN
NASA Technical Reports Server (NTRS)
Elchuri, V.; Gallo, A. M.; Skalski, S. C.
1981-01-01
A new capability was added to the general purpose finite element program NASTRAN Level 17.7 to conduct forced vibration analysis of tuned cyclic structures rotating about their axis of symmetry. The effects of Coriolis and centripetal accelerations together with those due to linear acceleration of the axis of rotation were included. The theoretical, user's, programmer's and demonstration manuals for this new capability are presented.
User's manual for interactive LINEAR: A FORTRAN program to derive linear aircraft models
NASA Technical Reports Server (NTRS)
Antoniewicz, Robert F.; Duke, Eugene L.; Patterson, Brian P.
1988-01-01
An interactive FORTRAN program that provides the user with a powerful and flexible tool for the linearization of aircraft aerodynamic models is documented in this report. The program LINEAR numerically determines a linear system model using nonlinear equations of motion and a user-supplied linear or nonlinear aerodynamic model. The nonlinear equations of motion used are six-degree-of-freedom equations with stationary atmosphere and flat, nonrotating earth assumptions. The system model determined by LINEAR consists of matrices for both the state and observation equations. The program has been designed to allow easy selection and definition of the state, control, and observation variables to be used in a particular model.
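A minimal sketch of the numerical linearization idea (not NASA's LINEAR program itself): central finite differences give the state matrix A = ∂f/∂x and control matrix B = ∂f/∂u about a reference point; the dynamics function below is a made-up short-period-like model.

```python
# Sketch of numerical linearization via central differences (illustrative only).
import numpy as np

def f(x, u):
    # hypothetical nonlinear short-period-like dynamics, x = [alpha, q], u = [elevator]
    alpha, q = x
    return np.array([q - 0.6 * alpha + 0.05 * alpha**2 + 0.1 * u[0],
                     -4.0 * alpha - 0.9 * q - 2.5 * u[0]])

def jacobians(f, x0, u0, eps=1e-6):
    n, m = len(x0), len(u0)
    A = np.zeros((n, n)); B = np.zeros((n, m))
    for j in range(n):
        dx = np.zeros(n); dx[j] = eps
        A[:, j] = (f(x0 + dx, u0) - f(x0 - dx, u0)) / (2 * eps)
    for j in range(m):
        du = np.zeros(m); du[j] = eps
        B[:, j] = (f(x0, u0 + du) - f(x0, u0 - du)) / (2 * eps)
    return A, B

A, B = jacobians(f, np.zeros(2), np.zeros(1))
print(A); print(B)
```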
A computer graphics display and data compression technique
NASA Technical Reports Server (NTRS)
Teague, M. J.; Meyer, H. G.; Levenson, L. (Editor)
1974-01-01
The computer program discussed is intended for the graphical presentation of a general dependent variable X that is a function of two independent variables, U and V. The required input to the program is the variation of the dependent variable with one of the independent variables for various fixed values of the other. The computer program is named CRP, and the output is provided by the SD 4060 plotter. Program CRP is an extremely flexible program that offers the user a wide variety of options. The dependent variable may be presented in either a linear or a logarithmic manner. Automatic centering of the plot is provided in the ordinate direction, and the abscissa is scaled automatically for a logarithmic plot. A description of the carpet plot technique is given along with the coordinate system used in the program. Various aspects of the program logic are discussed and detailed documentation of the data card format is presented.
ADS: A FORTRAN program for automated design synthesis: Version 1.10
NASA Technical Reports Server (NTRS)
Vanderplaats, G. N.
1985-01-01
A new general-purpose optimization program for engineering design is described. ADS (Automated Design Synthesis - Version 1.10) is a FORTRAN program for solution of nonlinear constrained optimization problems. The program is segmented into three levels: strategy, optimizer, and one-dimensional search. At each level, several options are available so that a total of over 100 possible combinations can be created. Examples of available strategies are sequential unconstrained minimization, the Augmented Lagrange Multiplier method, and Sequential Linear Programming. Available optimizers include variable metric methods and the Method of Feasible Directions as examples, and one-dimensional search options include polynomial interpolation and the Golden Section method as examples. Emphasis is placed on ease of use of the program. All information is transferred via a single parameter list. Default values are provided for all internal program parameters such as convergence criteria, and the user is given a simple means to over-ride these, if desired.
Split diversity in constrained conservation prioritization using integer linear programming.
Chernomor, Olga; Minh, Bui Quang; Forest, Félix; Klaere, Steffen; Ingram, Travis; Henzinger, Monika; von Haeseler, Arndt
2015-01-01
Phylogenetic diversity (PD) is a measure of biodiversity based on the evolutionary history of species. Here, we discuss several optimization problems related to the use of PD, and the more general measure split diversity (SD), in conservation prioritization. Depending on the conservation goal and the information available about species, one can construct optimization routines that incorporate various conservation constraints. We demonstrate how this information can be used to select sets of species for conservation action. Specifically, we discuss the use of species' geographic distributions, the choice of candidates under economic pressure, and the use of predator-prey interactions between the species in a community to define viability constraints. Despite such optimization problems falling into the class of NP-hard problems, it is possible to solve them in a reasonable amount of time using integer programming. We apply integer linear programming to a variety of models for conservation prioritization that incorporate the SD measure. We show exemplary results for two data sets: the Cape region of South Africa and a Caribbean coral reef community. Finally, we provide user-friendly software at http://www.cibiv.at/software/pda.
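As a toy sketch only: selecting species under a budget with binary decision variables is the simplest integer-linear-programming ingredient of such prioritization; the scores, costs, and budget below are invented, and a real PD/SD objective would couple species through a phylogeny with additional variables and constraints. SciPy's milp is used here rather than the authors' software.

```python
# Toy budget-constrained species selection as an integer linear program.
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

score = np.array([4.0, 2.5, 6.0, 3.0, 5.5])   # hypothetical diversity scores
cost = np.array([3.0, 1.0, 4.0, 2.0, 3.5])    # hypothetical conservation costs
budget = 7.0

res = milp(
    c=-score,                                           # maximize => minimize negative
    constraints=LinearConstraint(cost[None, :], -np.inf, budget),
    integrality=np.ones(5),                             # all variables integer (0/1)
    bounds=Bounds(0, 1),
)
print("selected species:", np.flatnonzero(res.x > 0.5), "total score:", -res.fun)
```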
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burchett, Deon L.; Chen, Richard Li-Yang; Phillips, Cynthia A.
This report summarizes the work performed under the project Next-Generation Algorithms for Assessing Infrastructure Vulnerability and Optimizing System Resilience. The goal of the project was to improve mathematical programming-based optimization technology for infrastructure protection. In general, the owner of a network wishes to design a network that can perform well when certain transportation channels are inhibited (e.g. destroyed) by an adversary. These are typically bi-level problems where the owner designs a system, an adversary optimally attacks it, and then the owner can recover by optimally using the remaining network. This project funded three years of Deon Burchett's graduate research. Deon's graduate advisor, Professor Jean-Philippe Richard, and his Sandia advisors, Richard Chen and Cynthia Phillips, supported Deon on other funds or volunteer time. This report is, therefore, essentially a replication of the Ph.D. dissertation it funded [12] in a format required for project documentation. The thesis includes some general polyhedral research. This is the study of the structure of the feasible region of mathematical programs, such as integer programs. For example, an integer program optimizes a linear objective function subject to linear constraints, and (nonlinear) integrality constraints on the variables. The feasible region without the integrality constraints is a convex polygon. Careful study of additional valid constraints can significantly improve computational performance. Here is the abstract from the dissertation: We perform a polyhedral study of a multi-commodity generalization of variable upper bound flow models. In particular, we establish some relations between facets of single- and multi-commodity models. We then introduce a new family of inequalities, which generalizes traditional flow cover inequalities to the multi-commodity context. We present encouraging numerical results. We also consider the directed edge-failure resilient network design problem (DRNDP). This problem entails the design of a directed multi-commodity flow network that is capable of fulfilling a specified percentage of demands in the event that any G arcs are destroyed, where G is a constant parameter. We present a formulation of DRNDP and solve it in a branch-column-cut framework. We present computational results.
Optimization Research of Generation Investment Based on Linear Programming Model
NASA Astrophysics Data System (ADS)
Wu, Juan; Ge, Xueqian
Linear programming is an important branch of operational research and a mathematical method that assists people in carrying out scientific management. GAMS is an advanced simulation and optimization modeling language that combines complex mathematical programming formulations, such as linear programming (LP), nonlinear programming (NLP), and mixed-integer programming (MIP), with system simulation. In this paper, based on a linear programming model, the optimized investment decision-making of generation is simulated and analyzed. Finally, the optimal installed capacity of power plants and the final total cost are obtained, which provides a rational decision-making basis for optimized investments.
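A hedged stand-in for the kind of model described (the paper uses GAMS, not SciPy): choose installed capacities of two plant types to cover an assumed peak demand at minimum cost, with invented cost coefficients and build limits.

```python
# Toy generation investment LP: meet peak demand at minimum total cost.
import numpy as np
from scipy.optimize import linprog

cost = np.array([1.2, 0.8])          # hypothetical cost per MW for two technologies
peak_demand = 500.0                  # MW that installed capacity must cover
A_ub = np.array([[-1.0, -1.0]])      # -(x1 + x2) <= -demand  <=>  x1 + x2 >= demand
b_ub = np.array([-peak_demand])
bounds = [(0, 400), (0, 300)]        # per-technology build limits

res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
print("installed capacity (MW):", res.x, "total cost:", res.fun)
```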
Portfolio optimization by using linear programing models based on genetic algorithm
NASA Astrophysics Data System (ADS)
Sukono; Hidayat, Y.; Lesmana, E.; Putra, A. S.; Napitupulu, H.; Supian, S.
2018-01-01
In this paper, we discuss investment portfolio optimization using a linear programming model based on genetic algorithms. It is assumed that portfolio risk is measured by the absolute standard deviation, and that each investor has a risk tolerance for the investment portfolio. To solve the investment portfolio optimization problem, the problem is formulated as a linear programming model, and the optimum solution of the linear program is then determined by using a genetic algorithm. As a numerical illustration, we analyze some of the stocks traded on the capital market in Indonesia. Based on the analysis, it is shown that portfolio optimization performed with the genetic algorithm approach produces a more efficient portfolio than portfolio optimization performed with a linear programming algorithm approach. Therefore, genetic algorithms can be considered as an alternative for determining the optimal investment portfolio, particularly under linear programming models.
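A minimal sketch of a mean-absolute-deviation portfolio formulated as a linear program (the paper solves a related linear model with a genetic algorithm rather than an LP solver); the return scenarios and target return below are random stand-ins.

```python
# Sketch: mean-absolute-deviation portfolio LP with a minimum-return constraint.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(2)
T, n = 60, 4
R = 0.01 * rng.standard_normal((T, n)) + 0.005        # hypothetical return scenarios
mu = R.mean(axis=0)
target = 0.004

# variables z = [w (n weights), d (T deviation bounds)]; minimize mean of d
c = np.concatenate([np.zeros(n), np.ones(T) / T])
D = R - mu                                             # centered returns
A_ub = np.block([[D, -np.eye(T)],
                 [-D, -np.eye(T)],
                 [-mu[None, :], np.zeros((1, T))]])    # -mu·w <= -target
b_ub = np.concatenate([np.zeros(2 * T), [-target]])
A_eq = np.concatenate([np.ones(n), np.zeros(T)])[None, :]   # weights sum to 1
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
              bounds=[(0, None)] * (n + T), method="highs")
print("weights:", np.round(res.x[:n], 3), "MAD:", res.fun)
```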
Quadratic Programming for Allocating Control Effort
NASA Technical Reports Server (NTRS)
Singh, Gurkirpal
2005-01-01
A computer program calculates an optimal allocation of control effort in a system that includes redundant control actuators. The program implements an iterative (but otherwise single-stage) algorithm of the quadratic-programming type. In general, in the quadratic-programming problem, one seeks the values of a set of variables that minimize a quadratic cost function, subject to a set of linear equality and inequality constraints. In this program, the cost function combines control effort (typically quantified in terms of energy or fuel consumed) and control residuals (differences between commanded and sensed values of variables to be controlled). In comparison with prior control-allocation software, this program offers approximately equal accuracy but much greater computational efficiency. In addition, this program offers flexibility, robustness to actuation failures, and a capability for selective enforcement of control requirements. The computational efficiency of this program makes it suitable for such complex, real-time applications as controlling redundant aircraft actuators or redundant spacecraft thrusters. The program is written in the C language for execution in a UNIX operating system.
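A hedged sketch of the allocation idea, not the flight software: trade off the control residual ||Bu - d||^2 against effort ||u||^2 under actuator limits, posed as a bounded least-squares problem (a special case of quadratic programming); the effectiveness matrix, command, and limits are invented.

```python
# Toy control allocation as bounded least squares (a special case of QP).
import numpy as np
from scipy.optimize import lsq_linear

B = np.array([[1.0, 0.8, -0.5, 0.3],     # hypothetical effectiveness of 4 actuators
              [0.2, -0.6, 0.9, 0.4],
              [0.0, 0.3, 0.2, -0.7]])    # on 3 controlled axes
d = np.array([0.4, -0.2, 0.1])           # commanded moments
lam = 0.1                                # effort weight

A = np.vstack([B, np.sqrt(lam) * np.eye(4)])   # stack residual and effort terms
b = np.concatenate([d, np.zeros(4)])
res = lsq_linear(A, b, bounds=(-0.5, 0.5))     # symmetric actuator limits
print("allocation:", np.round(res.x, 3), "residual:", np.linalg.norm(B @ res.x - d))
```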
VASP- VARIABLE DIMENSION AUTOMATIC SYNTHESIS PROGRAM
NASA Technical Reports Server (NTRS)
White, J. S.
1994-01-01
VASP is a variable dimension Fortran version of the Automatic Synthesis Program, ASP. The program is used to implement Kalman filtering and control theory. Basically, it consists of 31 subprograms for solving most modern control problems in linear, time-variant (or time-invariant) control systems. These subprograms include operations of matrix algebra, computation of the exponential of a matrix and its convolution integral, and the solution of the matrix Riccati equation. The user calls these subprograms by means of a FORTRAN main program, and so can easily obtain solutions to most general problems of extremization of a quadratic functional of the state of the linear dynamical system. Particularly, these problems include the synthesis of the Kalman filter gains and the optimal feedback gains for minimization of a quadratic performance index. VASP, as an outgrowth of the Automatic Synthesis Program, has the following improvements: more versatile programming language; more convenient input/output format; some new subprograms which consolidate certain groups of statements that are often repeated; and variable dimensioning. The pertinent difference between the two programs is that VASP has variable dimensioning and more efficient storage. The documentation for the VASP program contains a VASP dictionary and example problems. The dictionary contains a description of each subroutine and instructions on its use. The example problems include dynamic response, optimal control gain, solution of the sampled data matrix Riccati equation, matrix decomposition, and a pseudo-inverse of a matrix. This program is written in FORTRAN IV and has been implemented on the IBM 360. The VASP program was developed in 1971.
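A hedged modern equivalent of one VASP task, assuming a toy discrete-time double-integrator model: solve the discrete matrix Riccati equation with SciPy and form the optimal feedback gain.

```python
# Sketch: discrete Riccati solution and optimal feedback gain for a toy model.
import numpy as np
from scipy.linalg import solve_discrete_are

dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.5 * dt**2], [dt]])
Q = np.diag([1.0, 0.1])         # state weighting
R = np.array([[0.01]])          # control weighting

P = solve_discrete_are(A, B, Q, R)
K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)   # u = -K x
print("feedback gain K:", K)
```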
A reduced successive quadratic programming strategy for errors-in-variables estimation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tjoa, I.-B.; Biegler, L. T.; Carnegie-Mellon Univ.
Parameter estimation problems in process engineering represent a special class of nonlinear optimization problems, because the maximum likelihood structure of the objective function can be exploited. Within this class, the errors in variables method (EVM) is particularly interesting. Here we seek a weighted least-squares fit to the measurements with an underdetermined process model. Thus, both the number of variables and degrees of freedom available for optimization increase linearly with the number of data sets. Large optimization problems of this type can be particularly challenging and expensive to solve because, for general-purpose nonlinear programming (NLP) algorithms, the computational effort increases at least quadratically with problem size. In this study we develop a tailored NLP strategy for EVM problems. The method is based on a reduced Hessian approach to successive quadratic programming (SQP), but with the decomposition performed separately for each data set. This leads to the elimination of all variables but the model parameters, which are determined by a QP coordination step. In this way the computational effort remains linear in the number of data sets. Moreover, unlike previous approaches to the EVM problem, global and superlinear properties of the SQP algorithm apply naturally. Also, the method directly incorporates inequality constraints on the model parameters (although not on the fitted variables). This approach is demonstrated on five example problems with up to 102 degrees of freedom. Compared to general-purpose NLP algorithms, large improvements in computational performance are observed.
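A hedged miniature of the errors-in-variables idea (not the reduced-SQP algorithm above): fit a straight line when both x and y are measured with error by optimizing jointly over the model parameters and the "true" x values, so the number of unknowns grows with the amount of data, mirroring the structure described above; all data and noise levels are invented.

```python
# Sketch: errors-in-variables line fit, optimizing parameters and true x jointly.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(3)
x_true = np.linspace(0, 10, 15)
y_true = 1.0 + 2.0 * x_true
x_meas = x_true + 0.3 * rng.standard_normal(15)   # both coordinates carry noise
y_meas = y_true + 0.3 * rng.standard_normal(15)

def residuals(p):
    theta0, theta1, x_hat = p[0], p[1], p[2:]
    return np.concatenate([(x_meas - x_hat) / 0.3,
                           (y_meas - (theta0 + theta1 * x_hat)) / 0.3])

p0 = np.concatenate([[0.0, 1.0], x_meas])         # parameters plus fitted x values
sol = least_squares(residuals, p0)
print("theta:", sol.x[:2])
```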
NASA Technical Reports Server (NTRS)
Muraca, R. J.; Stephens, M. V.; Dagenhart, J. R.
1975-01-01
A general analysis capable of predicting performance characteristics of cross-wind axis turbines was developed, including the effects of airfoil geometry, support struts, blade aspect ratio, windmill solidity, blade interference and curved flow. The results were compared with available wind tunnel results for a catenary blade shape. A theoretical performance curve for an aerodynamically efficient straight blade configuration was also presented. In addition, a linearized analytical solution applicable for straight configurations was developed. A listing of the computer program developed for numerical solutions of the general performance equations is included in the appendix.
Kapinos, Kandice A; Caloyeras, John P; Liu, Hangsheng; Mattke, Soeren
2015-12-01
This article aims to test whether a workplace wellness program reduces health care cost for higher risk employees or employees with greater participation. The program effect on costs was estimated using a generalized linear model with a log-link function using a difference-in-difference framework with a propensity score matched sample of employees using claims and program data from a large US firm from 2003 to 2011. The program targeting higher risk employees did not yield cost savings. Employees participating in five or more sessions aimed at encouraging more healthful living had about $20 lower per member per month costs relative to matched comparisons (P = 0.002). Our results add to the growing evidence base that workplace wellness programs aimed at primary prevention do not reduce health care cost, with the exception of those employees who choose to participate more actively.
MM Algorithms for Geometric and Signomial Programming
Lange, Kenneth; Zhou, Hua
2013-01-01
This paper derives new algorithms for signomial programming, a generalization of geometric programming. The algorithms are based on a generic principle for optimization called the MM algorithm. In this setting, one can apply the geometric-arithmetic mean inequality and a supporting hyperplane inequality to create a surrogate function with parameters separated. Thus, unconstrained signomial programming reduces to a sequence of one-dimensional minimization problems. Simple examples demonstrate that the MM algorithm derived can converge to a boundary point or to one point of a continuum of minimum points. Conditions under which the minimum point is unique or occurs in the interior of parameter space are proved for geometric programming. Convergence to an interior point occurs at a linear rate. Finally, the MM framework easily accommodates equality and inequality constraints of signomial type. For the most important special case, constrained quadratic programming, the MM algorithm involves very simple updates. PMID:24634545
MM Algorithms for Geometric and Signomial Programming.
Lange, Kenneth; Zhou, Hua
2014-02-01
This paper derives new algorithms for signomial programming, a generalization of geometric programming. The algorithms are based on a generic principle for optimization called the MM algorithm. In this setting, one can apply the geometric-arithmetic mean inequality and a supporting hyperplane inequality to create a surrogate function with parameters separated. Thus, unconstrained signomial programming reduces to a sequence of one-dimensional minimization problems. Simple examples demonstrate that the MM algorithm derived can converge to a boundary point or to one point of a continuum of minimum points. Conditions under which the minimum point is unique or occurs in the interior of parameter space are proved for geometric programming. Convergence to an interior point occurs at a linear rate. Finally, the MM framework easily accommodates equality and inequality constraints of signomial type. For the most important special case, constrained quadratic programming, the MM algorithm involves very simple updates.
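The following is not the signomial algorithm of the paper, just a textbook illustration of the MM principle it builds on: each absolute-value term is majorized by a quadratic at the current iterate, and minimizing the surrogate gives a simple reweighting update that converges to the median.

```python
# Illustrative MM (majorize-minimize) iteration for least absolute deviations in 1-D.
import numpy as np

a = np.array([1.0, 2.0, 4.0, 7.0, 9.0])
x = a.mean()                                     # starting point
for _ in range(200):
    w = 1.0 / np.maximum(np.abs(x - a), 1e-12)   # weights from the quadratic majorizer
    x = np.sum(w * a) / np.sum(w)                # minimizer of the surrogate
print("MM estimate:", x, " median:", np.median(a))
```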
Very Low-Cost Nutritious Diet Plans Designed by Linear Programming.
ERIC Educational Resources Information Center
Foytik, Jerry
1981-01-01
Provides procedural details of Linear Programming, developed by the U.S. Department of Agriculture to devise a dietary guide for consumers that minimizes food costs without sacrificing nutritional quality. Compares Linear Programming with the Thrifty Food Plan, which has been a basis for allocating coupons under the Food Stamp Program. (CS)
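A hedged miniature of the diet problem behind such plans: minimize food cost subject to nutrient floors. The foods, nutrient contents, and requirements below are illustrative inventions, not USDA data.

```python
# Toy diet problem: minimize cost while meeting minimum nutrient requirements.
import numpy as np
from scipy.optimize import linprog

cost = np.array([0.30, 0.90, 0.45])          # $ per serving: grain, beans, milk
# rows: protein (g), calories (kcal) per serving
nutrients = np.array([[3.0, 8.0, 8.0],
                      [120.0, 110.0, 150.0]])
minimums = np.array([50.0, 2000.0])          # daily requirements

# nutrients @ x >= minimums  <=>  -nutrients @ x <= -minimums
res = linprog(cost, A_ub=-nutrients, b_ub=-minimums,
              bounds=[(0, None)] * 3, method="highs")
print("servings:", np.round(res.x, 2), "daily cost: $%.2f" % res.fun)
```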
NASA Astrophysics Data System (ADS)
Kusumawati, Rosita; Subekti, Retno
2017-04-01
A fuzzy bi-objective linear programming (FBOLP) model is a bi-objective linear programming model over fuzzy numbers, in which the coefficients of the equations are fuzzy numbers. This model is proposed to solve the portfolio selection problem, which generates an asset portfolio with the lowest risk and the highest expected return. The FBOLP model with normal fuzzy numbers for the risk and expected return of stocks is transformed into a linear programming (LP) model using a magnitude ranking function.
Generalized Ultrametric Semilattices of Linear Signals
2014-01-23
Combinatorial approaches to gene recognition.
Roytberg, M A; Astakhova, T V; Gelfand, M S
1997-01-01
Recognition of genes via exon assembly approaches leads naturally to the use of dynamic programming. We consider the general graph-theoretical formulation of the exon assembly problem and analyze in detail some specific variants: multicriterial optimization in the case of non-linear gene-scoring functions; context-dependent schemes for scoring exons and related procedures for exon filtering; and highly specific recognition of arbitrary gene segments, oligonucleotide probes and polymerase chain reaction (PCR) primers.
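A toy sketch of the exon-assembly flavour of dynamic programming (not the authors' gene-recognition system): pick a highest-scoring chain of non-overlapping candidate exons; the intervals and scores are made up.

```python
# Weighted interval chaining by dynamic programming, as a stand-in for exon assembly.
from bisect import bisect_right

exons = [(1, 50, 4.0), (40, 120, 6.5), (130, 200, 3.0), (60, 140, 5.0), (210, 300, 2.5)]
exons.sort(key=lambda e: e[1])                 # sort candidate exons by end position
ends = [e[1] for e in exons]

best = [0.0] * (len(exons) + 1)
for i, (start, end, score) in enumerate(exons, 1):
    j = bisect_right(ends, start, 0, i - 1)    # last exon ending at or before this start
    best[i] = max(best[i - 1], best[j] + score)
print("best chain score:", best[-1])
```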
NASA Astrophysics Data System (ADS)
Stoitsov, M. V.; Schunck, N.; Kortelainen, M.; Michel, N.; Nam, H.; Olsen, E.; Sarich, J.; Wild, S.
2013-06-01
We describe the new version 2.00d of the code HFBTHO that solves the nuclear Skyrme-Hartree-Fock (HF) or Skyrme-Hartree-Fock-Bogoliubov (HFB) problem by using the cylindrical transformed deformed harmonic oscillator basis. In the new version, we have implemented the following features: (i) the modified Broyden method for non-linear problems, (ii) optional breaking of reflection symmetry, (iii) calculation of axial multipole moments, (iv) finite temperature formalism for the HFB method, (v) linear constraint method based on the approximation of the Random Phase Approximation (RPA) matrix for multi-constraint calculations, (vi) blocking of quasi-particles in the Equal Filling Approximation (EFA), (vii) framework for generalized energy density with arbitrary density-dependences, and (viii) shared memory parallelism via OpenMP pragmas. Program summaryProgram title: HFBTHO v2.00d Catalog identifier: ADUI_v2_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/ADUI_v2_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GNU General Public License version 3 No. of lines in distributed program, including test data, etc.: 167228 No. of bytes in distributed program, including test data, etc.: 2672156 Distribution format: tar.gz Programming language: FORTRAN-95. Computer: Intel Pentium-III, Intel Xeon, AMD-Athlon, AMD-Opteron, Cray XT5, Cray XE6. Operating system: UNIX, LINUX, WindowsXP. RAM: 200 Mwords Word size: 8 bits Classification: 17.22. Does the new version supercede the previous version?: Yes Catalog identifier of previous version: ADUI_v1_0 Journal reference of previous version: Comput. Phys. Comm. 167 (2005) 43 Nature of problem: The solution of self-consistent mean-field equations for weakly-bound paired nuclei requires a correct description of the asymptotic properties of nuclear quasi-particle wave functions. In the present implementation, this is achieved by using the single-particle wave functions of the transformed harmonic oscillator, which allows for an accurate description of deformation effects and pairing correlations in nuclei arbitrarily close to the particle drip lines. Solution method: The program uses the axial Transformed Harmonic Oscillator (THO) single- particle basis to expand quasi-particle wave functions. It iteratively diagonalizes the Hartree-Fock-Bogoliubov Hamiltonian based on generalized Skyrme-like energy densities and zero-range pairing interactions until a self-consistent solution is found. A previous version of the program was presented in: M.V. Stoitsov, J. Dobaczewski, W. Nazarewicz, P. Ring, Comput. Phys. Commun. 167 (2005) 43-63. Reasons for new version: Version 2.00d of HFBTHO provides a number of new options such as the optional breaking of reflection symmetry, the calculation of axial multipole moments, the finite temperature formalism for the HFB method, optimized multi-constraint calculations, the treatment of odd-even and odd-odd nuclei in the blocking approximation, and the framework for generalized energy density with arbitrary density-dependences. It is also the first version of HFBTHO to contain threading capabilities. 
Summary of revisions: The modified Broyden method has been implemented, Optional breaking of reflection symmetry has been implemented, The calculation of all axial multipole moments up to λ=8 has been implemented, The finite temperature formalism for the HFB method has been implemented, The linear constraint method based on the approximation of the Random Phase Approximation (RPA) matrix for multi-constraint calculations has been implemented, The blocking of quasi-particles in the Equal Filling Approximation (EFA) has been implemented, The framework for generalized energy density functionals with arbitrary density-dependence has been implemented, Shared memory parallelism via OpenMP pragmas has been implemented. Restrictions: Axial- and time-reversal symmetries are assumed. Unusual features: The user must have access to the LAPACK subroutines DSYEVD, DSYTRF and DSYTRI, and their dependences, which compute eigenvalues and eigenfunctions of real symmetric matrices, the LAPACK subroutines DGETRI and DGETRF, which invert arbitrary real matrices, and the BLAS routines DCOPY, DSCAL, DGEMM and DGEMV for double-precision linear algebra (or provide another set of subroutines that can perform such tasks). The BLAS and LAPACK subroutines can be obtained from the Netlib Repository at the University of Tennessee, Knoxville: http://netlib2.cs.utk.edu/. Running time: Highly variable, as it depends on the nucleus, size of the basis, requested accuracy, requested configuration, compiler and libraries, and hardware architecture. An order of magnitude would be a few seconds for ground-state configurations in small bases N≈8-12, to a few minutes in very deformed configuration of a heavy nucleus with a large basis N>20.
ERIC Educational Resources Information Center
Dyehouse, Melissa; Bennett, Deborah; Harbor, Jon; Childress, Amy; Dark, Melissa
2009-01-01
Logic models are based on linear relationships between program resources, activities, and outcomes, and have been used widely to support both program development and evaluation. While useful in describing some programs, the linear nature of the logic model makes it difficult to capture the complex relationships within larger, multifaceted…
NASA Technical Reports Server (NTRS)
Klumpp, A. R.
1994-01-01
Ten families of subprograms are bundled together for the General-Purpose Ada Packages. The families bring to Ada many features from HAL/S, PL/I, FORTRAN, and other languages. These families are: string subprograms (INDEX, TRIM, LOAD, etc.); scalar subprograms (MAX, MIN, REM, etc.); array subprograms (MAX, MIN, PROD, SUM, GET, and PUT); numerical subprograms (EXP, CUBIC, etc.); service subprograms (DATE_TIME function, etc.); Linear Algebra II; Runge-Kutta integrators; and three text I/O families of packages. In two cases, a family consists of a single non-generic package. In all other cases, a family comprises a generic package and its instances for a selected group of scalar types. All generic packages are designed to be easily instantiated for the types declared in the user facility. The linear algebra package is LINRAG2. This package includes subprograms supplementing those in NPO-17985, An Ada Linear Algebra Package Modeled After HAL/S (LINRAG). Please note that LINRAG2 cannot be compiled without LINRAG. Most packages have widespread applicability, although some are oriented for avionics applications. All are designed to facilitate writing new software in Ada. Several of the packages use conventions introduced by other programming languages. A package of string subprograms is based on HAL/S (a language designed for the avionics software in the Space Shuttle) and PL/I. Packages of scalar and array subprograms are taken from HAL/S or generalized current Ada subprograms. A package of Runge-Kutta integrators is patterned after a built-in MAC (MIT Algebraic Compiler) integrator. Those packages modeled after HAL/S make it easy to translate existing HAL/S software to Ada. The General-Purpose Ada Packages program source code is available on two 360K 5.25" MS-DOS format diskettes. The software was developed using VAX Ada v1.5 under DEC VMS v4.5. It should be portable to any validated Ada compiler and it should execute either interactively or in batch. The largest package requires 205K of main memory on a DEC VAX running VMS. The software was developed in 1989, and is a copyrighted work with all copyright vested in NASA.
NASA Astrophysics Data System (ADS)
Mert, Bayram Ali; Dag, Ahmet
2017-12-01
In this study, a practical and educational geostatistical program (JeoStat) was first developed, and an example analysis of porosity parameter distribution using oilfield data is then presented. With this program, two- or three-dimensional variogram analysis can be performed by using normal, log-normal or indicator transformed data. In these analyses, JeoStat offers seven commonly used theoretical variogram models (Spherical, Gaussian, Exponential, Linear, Generalized Linear, Hole Effect and Paddington Mix) to the users. These theoretical models can be easily and quickly fitted to experimental models using a mouse. JeoStat uses the ordinary kriging interpolation technique for computation of point or block estimates, and it also uses cross-validation techniques for validation of the fitted theoretical model. All the results obtained by the analysis, as well as all the graphics such as histograms, variograms and kriging estimation maps, can be saved to the hard drive, including digitised graphics and maps. The numerical values of any point in a map can be monitored using a mouse and text boxes. The program is available to students, researchers, consultants and corporations of any size free of charge. The JeoStat software package and source code are available at: http://www.jeostat.com/JeoStat_2017.0.rar.
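A minimal sketch of one theoretical model JeoStat fits, the spherical variogram, with assumed nugget, sill, and range values (these parameter names follow standard geostatistical usage, not JeoStat's internals).

```python
# Spherical variogram model gamma(h) with nugget c0, sill c0 + c, and range a.
import numpy as np

def spherical_variogram(h, c0=0.1, c=0.9, a=500.0):
    h = np.asarray(h, dtype=float)
    inside = c0 + c * (1.5 * h / a - 0.5 * (h / a) ** 3)   # value for 0 < h <= a
    return np.where(h <= a, np.where(h > 0, inside, 0.0), c0 + c)

print(spherical_variogram([0.0, 100.0, 500.0, 800.0]))
```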
Integer Linear Programming for Constrained Multi-Aspect Committee Review Assignment
Karimzadehgan, Maryam; Zhai, ChengXiang
2011-01-01
Automatic review assignment can significantly improve the productivity of many people such as conference organizers, journal editors and grant administrators. A general setup of the review assignment problem involves assigning a set of reviewers on a committee to a set of documents to be reviewed under the constraint of review quota so that the reviewers assigned to a document can collectively cover multiple topic aspects of the document. No previous work has addressed such a setup of committee review assignments while also considering matching multiple aspects of topics and expertise. In this paper, we tackle the problem of committee review assignment with multi-aspect expertise matching by casting it as an integer linear programming problem. The proposed algorithm can naturally accommodate any probabilistic or deterministic method for modeling multiple aspects to automate committee review assignments. Evaluation using a multi-aspect review assignment test set constructed using ACM SIGIR publications shows that the proposed algorithm is effective and efficient for committee review assignments based on multi-aspect expertise matching. PMID:22711970
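A hedged, much-simplified relative of the committee assignment problem: with one reviewer per paper and no quotas or aspect constraints, the matching reduces to a linear assignment problem, solved here with SciPy; the relevance scores are invented.

```python
# One-reviewer-per-paper matching as a linear assignment problem.
import numpy as np
from scipy.optimize import linear_sum_assignment

relevance = np.array([[0.9, 0.2, 0.4],      # reviewers x papers
                      [0.3, 0.8, 0.5],
                      [0.6, 0.4, 0.7]])
rows, cols = linear_sum_assignment(-relevance)   # negate to maximize total relevance
for r, p in zip(rows, cols):
    print(f"reviewer {r} -> paper {p} (relevance {relevance[r, p]})")
```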
A linear programming approach to max-sum problem: a review.
Werner, Tomás
2007-07-01
The max-sum labeling problem, defined as maximizing a sum of binary (i.e., pairwise) functions of discrete variables, is a general NP-hard optimization problem with many applications, such as computing the MAP configuration of a Markov random field. We review a not widely known approach to the problem, developed by Ukrainian researchers Schlesinger et al. in 1976, and show how it contributes to recent results, most importantly, those on the convex combination of trees and tree-reweighted max-product. In particular, we review Schlesinger et al.'s upper bound on the max-sum criterion, its minimization by equivalent transformations, its relation to the constraint satisfaction problem, the fact that this minimization is dual to a linear programming relaxation of the original problem, and the three kinds of consistency necessary for optimality of the upper bound. We revisit problems with Boolean variables and supermodular problems. We describe two algorithms for decreasing the upper bound. We present an example application for structural image analysis.
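A hedged toy version of the linear programming relaxation reviewed above: the standard "local polytope" relaxation of a two-variable, two-label max-sum problem, solved with SciPy's linprog. For a single edge this relaxation is exact; the scores are invented.

```python
# Local-polytope LP relaxation of a two-variable, two-label max-sum problem.
import numpy as np
from scipy.optimize import linprog

theta1 = np.array([1.0, 0.0])            # unary scores for variable 1
theta2 = np.array([0.0, 2.0])            # unary scores for variable 2
theta12 = np.array([0.0, 1.0, 3.0, 0.0]) # pairwise scores for (0,0),(0,1),(1,0),(1,1)

c = -np.concatenate([theta1, theta2, theta12])        # maximize => minimize -score
A_eq = np.array([
    [1, 1, 0, 0, 0, 0, 0, 0],     # mu1 sums to 1
    [0, 0, 1, 1, 0, 0, 0, 0],     # mu2 sums to 1
    [-1, 0, 0, 0, 1, 1, 0, 0],    # pairwise marginal matches mu1(0)
    [0, -1, 0, 0, 0, 0, 1, 1],    # pairwise marginal matches mu1(1)
    [0, 0, -1, 0, 1, 0, 1, 0],    # pairwise marginal matches mu2(0)
    [0, 0, 0, -1, 0, 1, 0, 1],    # pairwise marginal matches mu2(1)
])
b_eq = np.array([1, 1, 0, 0, 0, 0])
res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * 8, method="highs")
print("max-sum value:", -res.fun, "unary marginals:", np.round(res.x[:4], 2))
```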
Improvements in aircraft extraction programs
NASA Technical Reports Server (NTRS)
Balakrishnan, A. V.; Maine, R. E.
1976-01-01
Flight data from an F-8 Corsair and a Cessna 172 were analyzed to demonstrate specific improvements in the LRC parameter extraction computer program. The Cramer-Rao bounds were shown to provide a satisfactory relative measure of goodness of parameter estimates. They were not used as an absolute measure because of an inherent uncertainty within a multiplicative factor, traced in turn to the uncertainty in the noise bandwidth in the statistical theory of parameter estimation. The measure was also derived on an entirely nonstatistical basis, thereby also yielding an interpretation of the significance of off-diagonal terms in the dispersion matrix. The distinction between linear and non-linear coefficients was shown to be important in its implications for a recommended order of parameter iteration. Techniques for improving convergence in general were developed and tested on flight data. In particular, an easily implemented modification incorporating a gradient search was shown to improve initial estimates and thus remove a common cause of lack of convergence.
Linear Programming across the Curriculum
ERIC Educational Resources Information Center
Yoder, S. Elizabeth; Kurz, M. Elizabeth
2015-01-01
Linear programming (LP) is taught in different departments across college campuses with engineering and management curricula. Modeling an LP problem is taught in every linear programming class. As faculty teaching in Engineering and Management departments, the depth to which teachers should expect students to master this particular type of…
Fundamental solution of the problem of linear programming and method of its determination
NASA Technical Reports Server (NTRS)
Petrunin, S. V.
1978-01-01
The idea of a fundamental solution to a problem in linear programming is introduced. A method of determining the fundamental solution and of applying this method to the solution of a problem in linear programming is proposed. Numerical examples are cited.
A Sawmill Manager Adapts To Change With Linear Programming
George F. Dutrow; James E. Granskog
1973-01-01
Linear programming provides guidelines for increasing sawmill capacity and flexibility and for determining stumpage-purchasing strategy. The operator of a medium-sized sawmill implemented improvements suggested by linear programming analysis; results indicate a 45 percent increase in revenue and a 36 percent hike in volume processed.
Leadership, Burnout, and Job Satisfaction in Outpatient Drug-Free Treatment Programs
Broome, Kirk M.; Knight, Danica K.; Edwards, Jennifer R.; Flynn, Patrick M.
2009-01-01
Counselors are a critical component of substance abuse treatment programming, but their working experiences are not yet well understood. As treatment-improvement efforts focus increasingly on these individuals, their perceptions of program leadership, emotional burnout, and job satisfaction and related attitudes take on greater significance. This study explores counselor views and the impact of organizational context using data from a nationwide set of 94 outpatient drug-free (ODF) treatment programs in a hierarchical linear model (HLM) analysis. Results show counselors hold generally positive opinions of program director leadership and job satisfaction, and have low levels of burnout, but they also have important variations in their ratings. Higher counselor caseloads were related to poorer ratings, and leadership behaviors predicted both satisfaction and burnout. These findings add further evidence that treatment providers should also address the workplace environment for staff as part of quality-improvement efforts. PMID:19339143
Leadership, burnout, and job satisfaction in outpatient drug-free treatment programs.
Broome, Kirk M; Knight, Danica K; Edwards, Jennifer R; Flynn, Patrick M
2009-09-01
Counselors are a critical component of substance abuse treatment programming, but their working experiences are not yet well understood. As treatment improvement efforts focus increasingly on these individuals, their perceptions of program leadership, emotional burnout, and job satisfaction and related attitudes take on greater significance. This study explores counselor views and the impact of organizational context using data from a nationwide set of 94 outpatient drug-free treatment programs in a hierarchical linear model analysis. Results show counselors hold generally positive opinions of program director leadership and job satisfaction and have low levels of burnout, but they also have important variations in their ratings. Higher counselor caseloads were related to poorer ratings, and leadership behaviors predicted both satisfaction and burnout. These findings add further evidence that treatment providers should also address the workplace environment for staff as part of quality improvement efforts.
Dynamic System Coupler Program (DYSCO 4.1). Volume 1. Theoretical Manual
1989-01-01
present analysis is as follows: 1. Triplet X, Y, Z represents an inertia frame, R. The R system coordinates are the rotor shaft axes when there is...small perturbation analysis. 2.5 3-D MODAL STRUCTURE - CFM3 A three-dimensional structure is represented as a linear combination of orthogonal modes...Include rotor blade damage modeling, eigen analysis development, general time history solution development, frequency domain solution development
Timetabling an Academic Department with Linear Programming.
ERIC Educational Resources Information Center
Bezeau, Lawrence M.
This paper describes an approach to faculty timetabling and course scheduling that uses computerized linear programming. After reviewing the literature on linear programming, the paper discusses the process whereby a timetable was created for a department at the University of New Brunswick. Faculty were surveyed with respect to course offerings…
ERIC Educational Resources Information Center
Schmitt, M. A.; And Others
1994-01-01
Compares traditional manure application planning techniques calculated to meet agronomic nutrient needs on a field-by-field basis with plans developed using computer-assisted linear programming optimization methods. Linear programming provided the most economical and environmentally sound manure application strategy. (Contains 15 references.) (MDH)
AESOP- INTERACTIVE DESIGN OF LINEAR QUADRATIC REGULATORS AND KALMAN FILTERS
NASA Technical Reports Server (NTRS)
Lehtinen, B.
1994-01-01
AESOP was developed to solve a number of problems associated with the design of controls and state estimators for linear time-invariant systems. The systems considered are modeled in state-variable form by a set of linear differential and algebraic equations with constant coefficients. Two key problems solved by AESOP are the linear quadratic regulator (LQR) design problem and the steady-state Kalman filter design problem. AESOP is designed to be used in an interactive manner. The user can solve design problems and analyze the solutions in a single interactive session. Both numerical and graphical information are available to the user during the session. The AESOP program is structured around a list of predefined functions. Each function performs a single computation associated with control, estimation, or system response determination. AESOP contains over sixty functions and permits the easy inclusion of user defined functions. The user accesses these functions either by inputting a list of desired functions in the order they are to be performed, or by specifying a single function to be performed. The latter case is used when the choice of function and function order depends on the results of previous functions. The available AESOP functions are divided into several general areas including: 1) program control, 2) matrix input and revision, 3) matrix formation, 4) open-loop system analysis, 5) frequency response, 6) transient response, 7) transient function zeros, 8) LQR and Kalman filter design, 9) eigenvalues and eigenvectors, 10) covariances, and 11) user-defined functions. The most important functions are those that design linear quadratic regulators and Kalman filters. The user interacts with AESOP when using these functions by inputting design weighting parameters and by viewing displays of designed system response. Support functions obtain system transient and frequency responses, transfer functions, and covariance matrices. AESOP can also provide the user with open-loop system information including stability, controllability, and observability. The AESOP program is written in FORTRAN IV for interactive execution and has been implemented on an IBM 3033 computer using TSS 370. As currently configured, AESOP has a central memory requirement of approximately 2 Megs of 8 bit bytes. Memory requirements can be reduced by redimensioning arrays in the AESOP program. Graphical output requires adaptation of the AESOP plot routines to whatever device is available. The AESOP program was developed in 1984.
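AESOP itself is an interactive FORTRAN package; as a rough modern parallel to its steady-state Kalman filter design step (a sketch on a made-up two-state plant, not AESOP's routines), the design can be expressed with scipy's discrete algebraic Riccati solver.

```python
# AESOP is an interactive FORTRAN package; this is only a rough modern parallel
# to its steady-state Kalman filter design step, using scipy's discrete
# algebraic Riccati equation solver on a made-up two-state plant.
import numpy as np
from scipy.linalg import solve_discrete_are

A = np.array([[1.0, 0.1], [0.0, 0.9]])    # state transition
C = np.array([[1.0, 0.0]])                # measurement matrix
Q = 0.01 * np.eye(2)                      # process noise covariance
R = np.array([[0.1]])                     # measurement noise covariance

# The filter Riccati equation is the dual of the control one: pass A^T and C^T.
P = solve_discrete_are(A.T, C.T, Q, R)
K = P @ C.T @ np.linalg.inv(C @ P @ C.T + R)   # steady-state Kalman gain
print("steady-state gain K =", K.ravel())
```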
Applications of Goal Programming to Education.
ERIC Educational Resources Information Center
Van Dusseldorp, Ralph A.; And Others
This paper discusses goal programming, a computer-based operations research technique that is basically a modification and extension of linear programming. The authors first discuss the similarities and differences between goal programming and linear programming, then describe the limitations of goal programming and its possible applications for…
Rowan, L.C.; Trautwein, C.M.; Purdy, T.L.
1990-01-01
This study was undertaken as part of the Conterminous U.S. Mineral Assessment Program (CUSMAP). The purpose of the study was to map linear features on Landsat Multispectral Scanner (MSS) images and a proprietary side-looking airborne radar (SLAR) image mosaic and to determine the spatial relationship between these linear features and the locations of metallic mineral occurrences. The results show a close spatial association of linear features with metallic mineral occurrences in parts of the quadrangle, but in other areas the association is less well defined. Linear features are defined as distinct linear and slightly curvilinear elements mappable on MSS and SLAR images. The features generally represent linear segments of streams, ridges, and terminations of topographic features; however, they may also represent tonal patterns that are related to variations in lithology and vegetation. Most linear features in the Butte quadrangle probably represent underlying structural elements, such as fractures (with and without displacement), dikes, and alignment of fold axes. However, in areas underlain by sedimentary rocks, some of the linear features may reflect bedding traces. This report describes the geologic setting of the Butte quadrangle, the procedures used in mapping and analyzing the linear features, and the results of the study. Relationships of these features to placer and non-metal deposits were not analyzed in this study and are not discussed in this report.
NASA Technical Reports Server (NTRS)
Klumpp, A. R.; Lawson, C. L.
1988-01-01
Routines provided for common scalar, vector, matrix, and quaternion operations. Computer program extends Ada programming language to include linear-algebra capabilities similar to HAL/S programming language. Designed for such avionics applications as software for Space Station.
Rodríguez Rodríguez, Luis Pablo
2009-01-01
The prescription of physical activity for patients with type 2 diabetes mellitus and with metabolic syndrome has proven scientific evidence, but no generally accepted program for this activity exists. Its methodology, modality, intensity, frequency, and duration are designed here. The first results are presented for type 2 diabetes patients, and among them those who also meet criteria for metabolic syndrome, by means of evaluation of HbA1c, total cholesterol, triglycerides, and BMI. It is emphasized that a pronounced and linear decrease in HbA1c is reached with the individualized program of cardiovascular physical activity.
Recursive partitioned inversion of large (1500 x 1500) symmetric matrices
NASA Technical Reports Server (NTRS)
Putney, B. H.; Brownd, J. E.; Gomez, R. A.
1976-01-01
A recursive algorithm was designed to invert large, dense, symmetric, positive definite matrices using small amounts of computer core, i.e., a small fraction of the core needed to store the complete matrix. The described algorithm is a generalized Gaussian elimination technique. Other algorithms are also discussed for the Cholesky decomposition and step inversion techniques. The purpose of the inversion algorithm is to solve large linear systems of normal equations generated by working geodetic problems. The algorithm was incorporated into a computer program called SOLVE. In the past the SOLVE program has been used in obtaining solutions published as the Goddard earth models.
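The report's recursive partitioned algorithm is a generalized Gaussian elimination built for limited core memory; as a minimal present-day sketch of the same class of problem (symmetric positive definite normal equations), a Cholesky-based solve with scipy looks like this. Matrix sizes and data are made up.

```python
# Minimal sketch (not the SOLVE program itself): solving a symmetric positive
# definite normal-equation system N x = b via Cholesky factorization, the same
# class of problem the recursive partitioned algorithm targets.
import numpy as np
from scipy.linalg import cho_factor, cho_solve

rng = np.random.default_rng(0)
A = rng.standard_normal((200, 50))          # design matrix of a toy geodetic fit
N = A.T @ A                                 # normal matrix (symmetric positive definite)
b = A.T @ rng.standard_normal(200)          # right-hand side

c, low = cho_factor(N)                      # N = L L^T, stored compactly
x = cho_solve((c, low), b)                  # solve N x = b without forming N^{-1}

print(np.allclose(N @ x, b))
```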
NASA Technical Reports Server (NTRS)
Witkop, D. L.; Dale, B. J.; Gellin, S.
1991-01-01
The programming aspects of SFENES are described in the User's Manual. The information presented is provided for the installation programmer. It is sufficient to fully describe the general program logic and required peripheral storage. All element generated data is stored externally to reduce required memory allocation. A separate section is devoted to the description of these files thereby permitting the optimization of Input/Output (I/O) time through efficient buffer descriptions. Individual subroutine descriptions are presented along with the complete Fortran source listings. A short description of the major control, computation, and I/O phases is included to aid in obtaining an overall familiarity with the program's components. Finally, a discussion of the suggested overlay structure which allows the program to execute with a reasonable amount of memory allocation is presented.
Combinatorial optimization games
DOE Office of Scientific and Technical Information (OSTI.GOV)
Deng, X.; Ibaraki, Toshihide; Nagamochi, Hiroshi
1997-06-01
We introduce a general integer programming formulation for a class of combinatorial optimization games, which immediately allows us to improve the algorithmic result for finding imputations in the core (an important solution concept in cooperative game theory) of the network flow game on simple networks by Kalai and Zemel. An interesting result is a general theorem that the core for this class of games is nonempty if and only if a related linear program has an integer optimal solution. We study the properties for this mathematical condition to hold for several interesting problems, and apply them to resolve algorithmic and complexity issues for their cores along the following lines: decide whether the core is empty; if the core is nonempty, find an imputation in the core; given an imputation x, test whether x is in the core. We also explore the properties of totally balanced games in this succinct formulation of cooperative games.
NASA Technical Reports Server (NTRS)
Tuey, R. C.
1972-01-01
Computer solutions of linear programming problems are outlined. Information covers vector spaces, convex sets, and matrix algebra elements for solving simultaneous linear equations. Dual problems, reduced cost analysis, ranges, and error analysis are illustrated.
Neural control of magnetic suspension systems
NASA Technical Reports Server (NTRS)
Gray, W. Steven
1993-01-01
The purpose of this research program is to design, build and test (in cooperation with NASA personnel from the NASA Langley Research Center) neural controllers for two different small air-gap magnetic suspension systems. The general objective of the program is to study neural network architectures for the purpose of control in an experimental setting and to demonstrate the feasibility of the concept. The specific objectives of the research program are: (1) to demonstrate through simulation and experimentation the feasibility of using neural controllers to stabilize a nonlinear magnetic suspension system; (2) to investigate through simulation and experimentation the performance of neural controllers designs under various types of parametric and nonparametric uncertainty; (3) to investigate through simulation and experimentation various types of neural architectures for real-time control with respect to performance and complexity; and (4) to benchmark in an experimental setting the performance of neural controllers against other types of existing linear and nonlinear compensator designs. To date, the first one-dimensional, small air-gap magnetic suspension system has been built, tested and delivered to the NASA Langley Research Center. The device is currently being stabilized with a digital linear phase-lead controller. The neural controller hardware is under construction. Two different neural network paradigms are under consideration, one based on hidden layer feedforward networks trained via back propagation and one based on using Gaussian radial basis functions trained by analytical methods related to stability conditions. Some advanced nonlinear control algorithms using feedback linearization and sliding mode control are in simulation studies.
Diegelmann, Mona; Jansen, Carl-Philipp; Wahl, Hans-Werner; Schilling, Oliver K; Schnabel, Eva-Luisa; Hauer, Klaus
2018-06-01
Physical activity (PA) may counteract depressive symptoms in nursing home (NH) residents considering biological, psychological, and person-environment transactional pathways. Empirical results, however, have remained inconsistent. Addressing potential shortcomings of previous research, we examined the effect of a whole-ecology PA intervention program on NH residents' depressive symptoms using generalized linear mixed-models (GLMMs). We used longitudinal data from residents of two German NHs who were included without any pre-selection regarding physical and mental functioning (n = 163, M age = 83.1, 53-100 years; 72% female) and assessed on four occasions each three months apart. Residents willing to participate received a 12-week PA training program. Afterwards, the training was implemented in weekly activity schedules by NH staff. We ran GLMMs to account for the highly skewed depressive symptoms outcome measure (12-item Geriatric Depression Scale-Residential) by using gamma distribution. Exercising (n = 78) and non-exercising residents (n = 85) showed a comparable level of depressive symptoms at pretest. For exercising residents, depressive symptoms stabilized between pretest, posttest, and follow-up, whereas an increase was observed for non-exercising residents. The intervention group's stabilization in depressive symptoms was maintained at follow-up, whereas depressive symptoms increased further for non-exercising residents. Implementing an innovative PA intervention appears to be a promising approach to prevent the increase of NH residents' depressive symptoms. At the data-analytical level, GLMMs seem to be a promising tool for intervention research at large, because all longitudinally available data points and non-normality of outcome data can be considered.
Object matching using a locally affine invariant and linear programming techniques.
Li, Hongsheng; Huang, Xiaolei; He, Lei
2013-02-01
In this paper, we introduce a new matching method based on a novel locally affine-invariant geometric constraint and linear programming techniques. To model and solve the matching problem in a linear programming formulation, all geometric constraints should be able to be exactly or approximately reformulated into a linear form. This is a major difficulty for this kind of matching algorithm. We propose a novel locally affine-invariant constraint which can be exactly linearized and requires a lot fewer auxiliary variables than other linear programming-based methods do. The key idea behind it is that each point in the template point set can be exactly represented by an affine combination of its neighboring points, whose weights can be solved easily by least squares. Errors of reconstructing each matched point using such weights are used to penalize the disagreement of geometric relationships between the template points and the matched points. The resulting overall objective function can be solved efficiently by linear programming techniques. Our experimental results on both rigid and nonrigid object matching show the effectiveness of the proposed algorithm.
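The key idea above, representing each template point as an affine combination of its neighbors via least squares, can be sketched in a few lines; the helper name affine_weights and the toy points are illustrative, not taken from the paper.

```python
# Sketch of the locally affine-invariant idea described above (illustrative only):
# each template point is reconstructed as an affine combination of its neighbors,
# and the same weights are later used to penalize distorted matches.
import numpy as np

def affine_weights(point, neighbors):
    """Least-squares weights w with sum(w) = 1 such that neighbors.T @ w ~= point."""
    k = neighbors.shape[0]
    # Stack the affine (sum-to-one) constraint as an extra equation.
    A = np.vstack([neighbors.T, np.ones((1, k))])
    b = np.append(point, 1.0)
    w, *_ = np.linalg.lstsq(A, b, rcond=None)
    return w

template = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
w = affine_weights(template[0], template[1:])
recon = template[1:].T @ w
print(w, np.linalg.norm(recon - template[0]))   # reconstruction error ~ 0
```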
Klein-Weyl's program and the ontology of gauge and quantum systems
NASA Astrophysics Data System (ADS)
Catren, Gabriel
2018-02-01
We distinguish two orientations in Weyl's analysis of the fundamental role played by the notion of symmetry in physics, namely an orientation inspired by Klein's Erlangen program and a phenomenological-transcendental orientation. By privileging the former to the detriment of the latter, we sketch a group(oid)-theoretical program, which we call the Klein-Weyl program, for the interpretation of both gauge theories and quantum mechanics in a single conceptual framework. This program is based on Weyl's notion of a "structure-endowed entity" equipped with a "group of automorphisms". First, we analyze what Weyl calls the "problem of relativity" in the frameworks provided by special relativity, general relativity, and Yang-Mills theories. We argue that both general relativity and Yang-Mills theories can be understood in terms of a localization of Klein's Erlangen program: while the latter describes the group-theoretical automorphisms of a single structure (such as homogeneous geometries), local gauge symmetries and the corresponding gauge fields (Ehresmann connections) can be naturally understood in terms of the groupoid-theoretical isomorphisms in a family of identical structures. Second, we argue that quantum mechanics can be understood in terms of a linearization of Klein's Erlangen program. This stance leads us to an interpretation of the fact that quantum numbers are "indices characterizing representations of groups" ((Weyl, 1931a), p.xxi) in terms of a correspondence between the ontological categories of identity and determinateness.
Generalized graphs and unitary irrational central charge in the superconformal master equation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Halpern, M.B.; Obers, N.A.
1991-12-01
For each magic basis of Lie g, it is known that the Virasoro master equation on affine g contains a generalized graph theory of conformal level-families. In this paper, it is found that the superconformal master equation on affine g×SO(dim g) similarly contains a generalized graph theory of superconformal level-families for each magic basis of g. The superconformal level-families satisfy linear equations on the generalized graphs, and the first exact unitary irrational solutions of the superconformal master equation are obtained on the sine-area graphs of g = SU(n), including the simplest unitary irrational central charges c = 6nx/(nx + 8 sin^2(rsπ/n)) yet observed in the program.
The generalized quadratic knapsack problem. A neuronal network approach.
Talaván, Pedro M; Yáñez, Javier
2006-05-01
The solution of an optimization problem through the continuous Hopfield network (CHN) is based on some energy or Lyapunov function, which decreases as the system evolves until a local minimum value is attained. A new energy function is proposed in this paper so that any 0-1 linear-constraint program with a quadratic objective function can be solved. This problem, denoted as the generalized quadratic knapsack problem (GQKP), includes as particular cases well-known problems such as the traveling salesman problem (TSP) and the quadratic assignment problem (QAP). This new energy function generalizes those proposed by other authors. Through this energy function, any GQKP can be solved with an appropriate parameter setting procedure, which is detailed in this paper. As a particular case, and in order to test this generalized energy function, some computational experiments solving the traveling salesman problem are also included.
An algorithm for the numerical solution of linear differential games
DOE Office of Scientific and Technical Information (OSTI.GOV)
Polovinkin, E S; Ivanov, G E; Balashov, M V
2001-10-31
A numerical algorithm for the construction of stable Krasovskii bridges, Pontryagin alternating sets, and also of piecewise program strategies solving two-person linear differential (pursuit or evasion) games on a fixed time interval is developed on the basis of a general theory. The aim of the first player (the pursuer) is to hit a prescribed target (terminal) set by the phase vector of the control system at the prescribed time. The aim of the second player (the evader) is the opposite. A description of numerical algorithms used in the solution of differential games of the type under consideration is presented and estimates of the errors resulting from the approximation of the game sets by polyhedra are presented.
Adoption of robotics in a general surgery residency program: at what cost?
Mehaffey, J Hunter; Michaels, Alex D; Mullen, Matthew G; Yount, Kenan W; Meneveau, Max O; Smith, Philip W; Friel, Charles M; Schirmer, Bruce D
2017-06-01
Robotic technology is increasingly being utilized by general surgeons. However, the impact of introducing robotics to surgical residency has not been examined. This study aims to assess the financial costs and training impact of introducing robotics at an academic general surgery residency program. All patients who underwent laparoscopic or robotic cholecystectomy, ventral hernia repair (VHR), and inguinal hernia repair (IHR) at our institution from 2011-2015 were identified. The effect of robotic surgery on laparoscopic case volume was assessed with linear regression analysis. Resident participation, operative time, hospital costs, and patient charges were also evaluated. We identified 2260 laparoscopic and 139 robotic operations. As the volume of robotic cases increased, the number of laparoscopic cases steadily decreased. Residents participated in all laparoscopic cases and 70% of robotic cases but operated from the robot console in only 21% of cases. Mean operative time was increased for robotic cholecystectomy (+22%), IHR (+55%), and VHR (+61%). Financial analysis revealed higher median hospital costs per case for robotic cholecystectomy (+$411), IHR (+$887), and VHR (+$1124) as well as substantial associated fixed costs. Introduction of robotic surgery had considerable negative impact on laparoscopic case volume and significantly decreased resident participation. Increased operative time and hospital costs are substantial. An institution must be cognizant of these effects when considering implementing robotics in departments with a general surgery residency program. Copyright © 2017 Elsevier Inc. All rights reserved.
A generalized Poisson and Poisson-Boltzmann solver for electrostatic environments.
Fisicaro, G; Genovese, L; Andreussi, O; Marzari, N; Goedecker, S
2016-01-07
The computational study of chemical reactions in complex, wet environments is critical for applications in many fields. It is often essential to study chemical reactions in the presence of applied electrochemical potentials, taking into account the non-trivial electrostatic screening coming from the solvent and the electrolytes. As a consequence, the electrostatic potential has to be found by solving the generalized Poisson and the Poisson-Boltzmann equations for neutral and ionic solutions, respectively. In the present work, solvers for both problems have been developed. A preconditioned conjugate gradient method has been implemented for the solution of the generalized Poisson equation and the linear regime of the Poisson-Boltzmann, allowing to solve iteratively the minimization problem with some ten iterations of the ordinary Poisson equation solver. In addition, a self-consistent procedure enables us to solve the non-linear Poisson-Boltzmann problem. Both solvers exhibit very high accuracy and parallel efficiency and allow for the treatment of periodic, free, and slab boundary conditions. The solver has been integrated into the BigDFT and Quantum-ESPRESSO electronic-structure packages and will be released as an independent program, suitable for integration in other codes.
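The production solver described above lives inside BigDFT and Quantum-ESPRESSO; as a generic, assumption-laden sketch of the numerical core (a Jacobi-preconditioned conjugate gradient on a toy 1-D finite-difference Poisson matrix, not the actual parallel, boundary-condition-aware implementation):

```python
# Generic preconditioned conjugate gradient sketch (not the BigDFT/Quantum-ESPRESSO
# solver): Jacobi-preconditioned CG applied to a 1-D finite-difference Poisson matrix.
import numpy as np

def pcg(A, b, M_inv_diag, tol=1e-10, max_iter=500):
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv_diag * r          # apply diagonal (Jacobi) preconditioner
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv_diag * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

n = 100
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # 1-D Laplacian (SPD)
b = np.ones(n)
x = pcg(A, b, M_inv_diag=1.0 / np.diag(A))
print(np.linalg.norm(A @ x - b))
```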
Yock, Adam D; Rao, Arvind; Dong, Lei; Beadle, Beth M; Garden, Adam S; Kudchadker, Rajat J; Court, Laurence E
2014-05-01
The purpose of this work was to develop and evaluate the accuracy of several predictive models of variation in tumor volume throughout the course of radiation therapy. Nineteen patients with oropharyngeal cancers were imaged daily with CT-on-rails for image-guided alignment per an institutional protocol. The daily volumes of 35 tumors in these 19 patients were determined and used to generate (1) a linear model in which tumor volume changed at a constant rate, (2) a general linear model that utilized the power fit relationship between the daily and initial tumor volumes, and (3) a functional general linear model that identified and exploited the primary modes of variation between time series describing the changing tumor volumes. Primary and nodal tumor volumes were examined separately. The accuracy of these models in predicting daily tumor volumes was compared with that of static and linear reference models using leave-one-out cross-validation. In predicting the daily volume of primary tumors, the general linear model and the functional general linear model were more accurate than the static reference model by 9.9% (range: -11.6%-23.8%) and 14.6% (range: -7.3%-27.5%), respectively, and were more accurate than the linear reference model by 14.2% (range: -6.8%-40.3%) and 13.1% (range: -1.5%-52.5%), respectively. In predicting the daily volume of nodal tumors, only the 14.4% (range: -11.1%-20.5%) improvement in accuracy of the functional general linear model compared to the static reference model was statistically significant. A general linear model and a functional general linear model trained on data from a small population of patients can predict the primary tumor volume throughout the course of radiation therapy with greater accuracy than standard reference models. These more accurate models may increase the prognostic value of information about the tumor garnered from pretreatment computed tomography images and facilitate improved treatment management.
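The exact model forms in the study are not reproduced here; as an illustrative sketch of a power-fit relation between daily and initial tumor volume (the assumed form V_day = a * V_initial^b and all data are synthetic):

```python
# Illustrative sketch of a power-fit relation between daily and initial tumor
# volume (the exact model in the study is not reproduced): fit
# V_day ~ a * V_initial**b on log scales with ordinary least squares.
import numpy as np

rng = np.random.default_rng(1)
v_initial = rng.uniform(5.0, 40.0, size=35)                    # cm^3, synthetic
v_day10 = 0.8 * v_initial**0.95 * rng.lognormal(0, 0.05, 35)   # synthetic "observed" volumes

b, log_a = np.polyfit(np.log(v_initial), np.log(v_day10), 1)
a = np.exp(log_a)
print(f"fitted V_day10 ~= {a:.2f} * V_initial^{b:.2f}")

v_pred = a * v_initial**b    # predicted day-10 volumes
```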
Methods, Software and Tools for Three Numerical Applications. Final report
DOE Office of Scientific and Technical Information (OSTI.GOV)
E. R. Jessup
2000-03-01
This is a report of the results of the authors' work supported by DOE contract DE-FG03-97ER25325. They proposed to study three numerical problems. They are: (1) the extension of the PMESC parallel programming library; (2) the development of algorithms and software for certain generalized eigenvalue and singular value (SVD) problems; and (3) the application of techniques of linear algebra to an information retrieval technique known as latent semantic indexing (LSI).
Structural optimization: Status and promise
NASA Astrophysics Data System (ADS)
Kamat, Manohar P.
Chapters contained in this book include fundamental concepts of optimum design, mathematical programming methods for constrained optimization, function approximations, approximate reanalysis methods, dual mathematical programming methods for constrained optimization, a generalized optimality criteria method, and a tutorial and survey of multicriteria optimization in engineering. Also included are chapters on the compromise decision support problem and the adaptive linear programming algorithm, sensitivity analyses of discrete and distributed systems, the design sensitivity analysis of nonlinear structures, optimization by decomposition, mixed elements in shape sensitivity analysis of structures based on local criteria, and optimization of stiffened cylindrical shells subjected to destabilizing loads. Other chapters are on applications to fixed-wing aircraft and spacecraft, integrated optimum structural and control design, modeling concurrency in the design of composite structures, and tools for structural optimization. (No individual items are abstracted in this volume)
Multidisciplinary analysis of actively controlled large flexible spacecraft
NASA Technical Reports Server (NTRS)
Cooper, Paul A.; Young, John W.; Sutter, Thomas R.
1986-01-01
The Control of Flexible Structures (COFS) program has supported the development of an analysis capability at the Langley Research Center called the Integrated Multidisciplinary Analysis Tool (IMAT) which provides an efficient data storage and transfer capability among commercial computer codes to aid in the dynamic analysis of actively controlled structures. IMAT is a system of computer programs which transfers Computer-Aided-Design (CAD) configurations, structural finite element models, material property and stress information, structural and rigid-body dynamic model information, and linear system matrices for control law formulation among various commercial applications programs through a common database. Although general in its formulation, IMAT was developed specifically to aid in the evaluation of the structures. A description of the IMAT system and results of an application of the system are given.
LINEAR - DERIVATION AND DEFINITION OF A LINEAR AIRCRAFT MODEL
NASA Technical Reports Server (NTRS)
Duke, E. L.
1994-01-01
The Derivation and Definition of a Linear Model program, LINEAR, provides the user with a powerful and flexible tool for the linearization of aircraft aerodynamic models. LINEAR was developed to provide a standard, documented, and verified tool to derive linear models for aircraft stability analysis and control law design. Linear system models define the aircraft system in the neighborhood of an analysis point and are determined by the linearization of the nonlinear equations defining vehicle dynamics and sensors. LINEAR numerically determines a linear system model using nonlinear equations of motion and a user supplied linear or nonlinear aerodynamic model. The nonlinear equations of motion used are six-degree-of-freedom equations with stationary atmosphere and flat, nonrotating earth assumptions. LINEAR is capable of extracting both linearized engine effects, such as net thrust, torque, and gyroscopic effects and including these effects in the linear system model. The point at which this linear model is defined is determined either by completely specifying the state and control variables, or by specifying an analysis point on a trajectory and directing the program to determine the control variables and the remaining state variables. The system model determined by LINEAR consists of matrices for both the state and observation equations. The program has been designed to provide easy selection of state, control, and observation variables to be used in a particular model. Thus, the order of the system model is completely under user control. Further, the program provides the flexibility of allowing alternate formulations of both the state and observation equations. Data describing the aircraft and the test case is input to the program through a terminal or formatted data files. All data can be modified interactively from case to case. The aerodynamic model can be defined in two ways: a set of nondimensional stability and control derivatives for the flight point of interest, or a full non-linear aerodynamic model as used in simulations. LINEAR is written in FORTRAN and has been implemented on a DEC VAX computer operating under VMS with a virtual memory requirement of approximately 296K of 8 bit bytes. Both an interactive and batch version are included. LINEAR was developed in 1988.
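LINEAR itself is FORTRAN; the central numerical idea, linearizing nonlinear dynamics x_dot = f(x, u) about an analysis point to obtain A and B matrices, can be sketched by central differences. The toy pitch-axis dynamics below stand in for a full aircraft model and are not from the program.

```python
# Minimal sketch of the numerical linearization idea (not the LINEAR program itself):
# form A = df/dx and B = df/du about an analysis point by central differences.
import numpy as np

def linearize(f, x0, u0, eps=1e-6):
    n, m = len(x0), len(u0)
    A = np.zeros((n, n))
    B = np.zeros((n, m))
    for i in range(n):
        dx = np.zeros(n); dx[i] = eps
        A[:, i] = (f(x0 + dx, u0) - f(x0 - dx, u0)) / (2 * eps)
    for j in range(m):
        du = np.zeros(m); du[j] = eps
        B[:, j] = (f(x0, u0 + du) - f(x0, u0 - du)) / (2 * eps)
    return A, B

# Toy pitch-axis dynamics standing in for a full aircraft aerodynamic model.
def f(x, u):
    alpha, q = x
    return np.array([q - 0.5 * np.sin(alpha) + 0.1 * u[0],
                     -2.0 * alpha - 0.3 * q + 1.5 * u[0]])

A, B = linearize(f, x0=np.array([0.05, 0.0]), u0=np.array([0.0]))
print(A, B, sep="\n")
```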
Wang, Hsiao-Fan; Hsu, Hsin-Wei
2010-11-01
With the urgency of global warming, green supply chain management, logistics in particular, has drawn the attention of researchers. Although there are closed-loop green logistics models in the literature, most of them do not consider the uncertain environment in general terms. In this study, a generalized model is proposed where the uncertainty is expressed by fuzzy numbers. An interval programming model is proposed by the defined means and mean square imprecision index obtained from the integrated information of all the level cuts of fuzzy numbers. The resolution for interval programming is based on the decision maker (DM)'s preference. The resulting solution provides useful information on the expected solutions under a confidence level containing a degree of risk. The results suggest that the more optimistic the DM is, the better is the resulting solution. However, a higher risk of violation of the resource constraints is also present. By defining this probable risk, a solution procedure was developed with numerical illustrations. This provides a DM trade-off mechanism between logistic cost and the risk. Copyright 2010 Elsevier Ltd. All rights reserved.
Regression modeling of ground-water flow
Cooley, R.L.; Naff, R.L.
1985-01-01
Nonlinear multiple regression methods are developed to model and analyze groundwater flow systems. Complete descriptions of regression methodology as applied to groundwater flow models allow scientists and engineers engaged in flow modeling to apply the methods to a wide range of problems. Organization of the text proceeds from an introduction that discusses the general topic of groundwater flow modeling, to a review of basic statistics necessary to properly apply regression techniques, and then to the main topic: exposition and use of linear and nonlinear regression to model groundwater flow. Statistical procedures are given to analyze and use the regression models. A number of exercises and answers are included to exercise the student on nearly all the methods that are presented for modeling and statistical analysis. Three computer programs implement the more complex methods. These three are a general two-dimensional, steady-state regression model for flow in an anisotropic, heterogeneous porous medium, a program to calculate a measure of model nonlinearity with respect to the regression parameters, and a program to analyze model errors in computed dependent variables such as hydraulic head. (USGS)
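The text's methodology is implemented in dedicated USGS programs; as a small illustrative stand-in for nonlinear regression of this kind (the exponential "drawdown" model and all data below are made up):

```python
# A small stand-in for the nonlinear regression methodology described above
# (not the USGS programs themselves): nonlinear least squares on a made-up
# exponential drawdown-style model h(t) = h0 * exp(-k * t) + c.
import numpy as np
from scipy.optimize import curve_fit

def model(t, h0, k, c):
    return h0 * np.exp(-k * t) + c

rng = np.random.default_rng(4)
t = np.linspace(0.0, 10.0, 40)
h_obs = model(t, 5.0, 0.4, 1.0) + rng.normal(0, 0.05, t.size)   # synthetic observations

params, cov = curve_fit(model, t, h_obs, p0=[1.0, 0.1, 0.0])
print("estimates:", params)
print("standard errors:", np.sqrt(np.diag(cov)))
```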
Item Purification in Differential Item Functioning Using Generalized Linear Mixed Models
ERIC Educational Resources Information Center
Liu, Qian
2011-01-01
For this dissertation, four item purification procedures were implemented onto the generalized linear mixed model for differential item functioning (DIF) analysis, and the performance of these item purification procedures was investigated through a series of simulations. Among the four procedures, forward and generalized linear mixed model (GLMM)…
FSILP: fuzzy-stochastic-interval linear programming for supporting municipal solid waste management.
Li, Pu; Chen, Bing
2011-04-01
Although many studies on municipal solid waste management (MSW management) were conducted under uncertain conditions of fuzzy, stochastic, and interval coexistence, conventional linear programming solutions that integrate the fuzzy method with the other two were inefficient. In this study, a fuzzy-stochastic-interval linear programming (FSILP) method is developed by integrating Nguyen's method with conventional linear programming for supporting municipal solid waste management. Nguyen's method was used to convert the fuzzy and fuzzy-stochastic linear programming problems into conventional linear programs, by measuring the attainment values of fuzzy numbers and/or fuzzy random variables, as well as superiority and inferiority between triangular fuzzy numbers/triangular fuzzy-stochastic variables. The developed method can effectively tackle uncertainties described in terms of probability density functions, fuzzy membership functions, and discrete intervals. Moreover, the method also improves upon the conventional interval fuzzy programming and two-stage stochastic programming approaches, with advantageous capabilities that are achieved with fewer constraints and significantly reduced computation time. The developed model was applied to a case study of a municipal solid waste management system in a city. The results indicated that reasonable solutions had been generated. The solution can help quantify the relationship between the change of system cost and the uncertainties, which could support further analysis of tradeoffs between the waste management cost and the system failure risk. Copyright © 2010 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Indarsih, Indrati, Ch. Rini
2016-02-01
In this paper, we define the variance of fuzzy random variables through alpha levels. We present a theorem which shows that the variance of a fuzzy random variable is a fuzzy number. We consider a multi-objective linear programming (MOLP) problem with fuzzy random objective function coefficients and solve it by a variance approach. The approach transforms the MOLP with fuzzy random objective function coefficients into an MOLP with fuzzy objective function coefficients. By the weighted method, we obtain a linear program with fuzzy coefficients, which we solve by the simplex method for fuzzy linear programming.
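The fuzzy/variance machinery above is not reproduced here; as a sketch of only the final weighted-sum step (with made-up, already defuzzified coefficients and constraints), the multi-objective LP collapses to a single LP that scipy can solve.

```python
# Sketch of the weighted-sum step only (the fuzzy/variance transformation is not
# reproduced): after defuzzifying the objective coefficients, e.g. by centroids,
# the multi-objective LP collapses to one LP.
import numpy as np
from scipy.optimize import linprog

c1 = np.array([3.0, 2.0])          # defuzzified coefficients of objective 1 (maximize)
c2 = np.array([1.0, 4.0])          # defuzzified coefficients of objective 2 (maximize)
weights = np.array([0.6, 0.4])     # decision maker's weights

c = -(weights[0] * c1 + weights[1] * c2)     # linprog minimizes, so negate
A_ub = np.array([[1.0, 1.0], [2.0, 1.0]])
b_ub = np.array([10.0, 15.0])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)], method="highs")
print(res.x, -res.fun)
```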
Bruhn, Peter; Geyer-Schulz, Andreas
2002-01-01
In this paper, we introduce genetic programming over context-free languages with linear constraints for combinatorial optimization, apply this method to several variants of the multidimensional knapsack problem, and discuss its performance relative to Michalewicz's genetic algorithm with penalty functions. With respect to Michalewicz's approach, we demonstrate that genetic programming over context-free languages with linear constraints improves convergence. A final result is that genetic programming over context-free languages with linear constraints is ideally suited to modeling complementarities between items in a knapsack problem: The more complementarities in the problem, the stronger the performance in comparison to its competitors.
NASA Technical Reports Server (NTRS)
Lawson, C. L.; Krogh, F. T.; Gold, S. S.; Kincaid, D. R.; Sullivan, J.; Williams, E.; Hanson, R. J.; Haskell, K.; Dongarra, J.; Moler, C. B.
1982-01-01
The Basic Linear Algebra Subprograms (BLAS) library is a collection of 38 FORTRAN-callable routines for performing basic operations of numerical linear algebra. The BLAS library is a portable and efficient source of basic operations for designers of programs involving linear algebraic computations. The BLAS library is supplied in portable FORTRAN and Assembler code versions for IBM 370, UNIVAC 1100, and CDC 6000 series computers.
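The original routines are FORTRAN; a few Level-1 operations of the same kind (dot product, axpy, Euclidean norm) can be reached from Python through scipy's BLAS wrappers, shown here only as a convenience sketch, not the 1982 library itself.

```python
# The BLAS routines described above are FORTRAN; a few Level-1 operations of the
# same kind reached from Python through scipy's wrappers (a convenience sketch,
# not the original library).
import numpy as np
from scipy.linalg import blas

x = np.array([1.0, 2.0, 3.0])
y = np.array([4.0, 5.0, 6.0])

dot = blas.ddot(x, y)          # x . y
y2 = blas.daxpy(x, y, a=2.0)   # 2*x + y
nrm = blas.dnrm2(x)            # Euclidean norm of x

print(dot, y2, nrm)
```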
Smoothed Residual Plots for Generalized Linear Models. Technical Report #450.
ERIC Educational Resources Information Center
Brant, Rollin
Methods for examining the viability of assumptions underlying generalized linear models are considered. By appealing to the likelihood, a natural generalization of the raw residual plot for normal theory models is derived and is applied to investigating potential misspecification of the linear predictor. A smooth version of the plot is also…
Zörnig, Peter
2015-08-01
We present integer programming models for some variants of the farthest string problem. The number of variables and constraints is substantially less than that of the integer linear programming models known in the literature. Moreover, the solution of the linear programming-relaxation contains only a small proportion of noninteger values, which considerably simplifies the rounding process. Numerical tests have shown excellent results, especially when a small set of long sequences is given.
A Block-LU Update for Large-Scale Linear Programming
1990-01-01
linear programming problems. Results are given from runs on the Cray Y-MP. 1. Introduction. We wish to use the simplex method [Dan63] to solve the...standard linear program: minimize c^T x subject to Ax = b, l ≤ x ≤ u, where A is an m by n matrix and c, x, l, u, and b are of appropriate dimension. The simplex...the identity matrix. The basis is used to solve for the search direction y and the dual variables π in the following linear systems: B_k y = a_q (1.2) and
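The block-LU update itself is not reproduced here; as a sketch of the two basis solves it is designed to accelerate (B_k y = a_q for the search direction and B_k^T π = c_B for the dual variables), done with a plain dense LU factorization on made-up data:

```python
# Not the block-LU update itself, only the two basis solves it accelerates:
# B_k y = a_q (search direction) and B_k^T pi = c_B (dual variables), done here
# with a dense LU factorization on a toy basis matrix.
import numpy as np
from scipy.linalg import lu_factor, lu_solve

rng = np.random.default_rng(2)
m = 5
B = rng.standard_normal((m, m)) + m * np.eye(m)   # toy basis matrix B_k
a_q = rng.standard_normal(m)                      # entering column
c_B = rng.standard_normal(m)                      # basic objective coefficients

lu, piv = lu_factor(B)
y = lu_solve((lu, piv), a_q)                 # search direction
pi = lu_solve((lu, piv), c_B, trans=1)       # dual variables from B^T pi = c_B

print(np.allclose(B @ y, a_q), np.allclose(B.T @ pi, c_B))
```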
Computer studies of baroclinic flow. [Atmospheric General Circulation Experiment
NASA Technical Reports Server (NTRS)
Gall, R.
1985-01-01
Programs necessary for computing the transition curve on the regime diagram for the Atmospheric General Circulation Experiment (AGCE) were completed and used to determine the regime diagram for the rotating annulus and some axisymmetric flows for one possible AGCE configuration. The effect of geometrical constraints on the size of eddies developing from a basic state is being examined. In the AGCE, the geometric constraint should be the width of the shear zone or the baroclinic zone. Linear and nonlinear models are to be used to examine both barotropic and baroclinic flows. The results should help explain the scale selection mechanism of baroclinic eddies in the atmosphere, in experimental models such as the AGCE, and in the multiple vortex phenomenon in tornadoes.
Synthesis of stiffened shells of revolution
NASA Technical Reports Server (NTRS)
Thornton, W. A.
1974-01-01
Computer programs for the synthesis of shells of various configurations were developed. The conditions considered are: (1) uniform shells (mainly cones) using a membrane buckling analysis, (2) completely uniform shells (cones, spheres, toroidal segments) using linear bending prebuckling analysis, and (3) revision of the second design process to reduce the number of design variables to about 30 by considering piecewise uniform designs. A perturbation formula was derived and this allows exact derivatives of the general buckling load to be computed with little additional computer time.
Linear System of Equations, Matrix Inversion, and Linear Programming Using MS Excel
ERIC Educational Resources Information Center
El-Gebeily, M.; Yushau, B.
2008-01-01
In this note, we demonstrate with illustrations two different ways that MS Excel can be used to solve Linear Systems of Equations, Linear Programming Problems, and Matrix Inversion Problems. The advantage of using MS Excel is its availability and transparency (the user is responsible for most of the details of how a problem is solved). Further, we…
Inelastic strain analogy for piecewise linear computation of creep residues in built-up structures
NASA Technical Reports Server (NTRS)
Jenkins, Jerald M.
1987-01-01
An analogy between inelastic strains caused by temperature and those caused by creep is presented in terms of isotropic elasticity. It is shown how the theoretical aspects can be blended with existing finite-element computer programs to exact a piecewise linear solution. The creep effect is determined by using the thermal stress computational approach, if appropriate alterations are made to the thermal expansion of the individual elements. The overall transient solution is achieved by consecutive piecewise linear iterations. The total residue caused by creep is obtained by accumulating creep residues for each iteration and then resubmitting the total residues for each element as an equivalent input. A typical creep law is tested for incremental time convergence. The results indicate that the approach is practical, with a valid indication of the extent of creep after approximately 20 hr of incremental time. The general analogy between body forces and inelastic strain gradients is discussed with respect to how an inelastic problem can be worked as an elastic problem.
Housing First for Homeless Persons with Active Addiction: Are We Overreaching?
Kertesz, Stefan G; Crouch, Kimberly; Milby, Jesse B; Cusimano, Robert E; Schumacher, Joseph E
2009-01-01
Context More than 350 communities in the United States have committed to ending chronic homelessness. One nationally prominent approach, Housing First, offers early access to permanent housing without requiring completion of treatment or, for clients with addiction, proof of sobriety. Methods This article reviews studies of Housing First and more traditional rehabilitative (e.g., “linear”) recovery interventions, focusing on the outcomes obtained by both approaches for homeless individuals with addictive disorders. Findings According to reviews of comparative trials and case series reports, Housing First reports document excellent housing retention, despite the limited amount of data pertaining to homeless clients with active and severe addiction. Several linear programs cite reductions in addiction severity but have shortcomings in long-term housing success and retention. Conclusions This article suggests that the current research data are not sufficient to identify an optimal housing and rehabilitation approach for an important homeless subgroup. The research regarding Housing First and linear approaches can be strengthened in several ways, and policymakers should be cautious about generalizing the results of available Housing First studies to persons with active addiction when they enter housing programs. PMID:19523126
NASA Astrophysics Data System (ADS)
Chun, Tae Yoon; Lee, Jae Young; Park, Jin Bae; Choi, Yoon Ho
2018-06-01
In this paper, we propose two multirate generalised policy iteration (GPI) algorithms applied to discrete-time linear quadratic regulation problems. The proposed algorithms are extensions of the existing GPI algorithm that consists of the approximate policy evaluation and policy improvement steps. The two proposed schemes, named heuristic dynamic programming (HDP) and dual HDP (DHP), based on multirate GPI, use multi-step estimation (M-step Bellman equation) at the approximate policy evaluation step for estimating the value function and its gradient called costate, respectively. Then, we show that these two methods with the same update horizon can be considered equivalent in the iteration domain. Furthermore, monotonically increasing and decreasing convergences, so called value iteration (VI)-mode and policy iteration (PI)-mode convergences, are proved to hold for the proposed multirate GPIs. Further, general convergence properties in terms of eigenvalues are also studied. The data-driven online implementation methods for the proposed HDP and DHP are demonstrated and finally, we present the results of numerical simulations performed to verify the effectiveness of the proposed methods.
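The data-driven multirate HDP/DHP algorithms above are not reproduced here; as a model-based sketch of the same discrete-time LQR problem, a value iteration on the Riccati equation (alternating one-sweep policy evaluation with policy improvement) shows the fixed point such iterations converge to. The system matrices below are made up.

```python
# Model-based value iteration on the discrete-time Riccati equation (a sketch of
# the LQR fixed point, not the paper's data-driven multirate GPI algorithms).
import numpy as np

A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
Q = np.eye(2)
R = np.array([[1.0]])

P = np.zeros((2, 2))
for _ in range(500):
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)   # policy improvement
    P = Q + A.T @ P @ (A - B @ K)                       # one policy-evaluation sweep

print("gain K =", K)
print("cost matrix P =", P)
```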
Approximate labeling via graph cuts based on linear programming.
Komodakis, Nikos; Tziritas, Georgios
2007-08-01
A new framework is presented for both understanding and developing graph-cut-based combinatorial algorithms suitable for the approximate optimization of a very wide class of Markov Random Fields (MRFs) that are frequently encountered in computer vision. The proposed framework utilizes tools from the duality theory of linear programming in order to provide an alternative and more general view of state-of-the-art techniques like the α-expansion algorithm, which is included merely as a special case. Moreover, contrary to α-expansion, the derived algorithms generate solutions with guaranteed optimality properties for a much wider class of problems, for example, even for MRFs with nonmetric potentials. In addition, they are capable of providing per-instance suboptimality bounds in all occasions, including discrete MRFs with an arbitrary potential function. These bounds prove to be very tight in practice (that is, very close to 1), which means that the resulting solutions are almost optimal. Our algorithms' effectiveness is demonstrated by presenting experimental results on a variety of low-level vision tasks, such as stereo matching, image restoration, image completion, and optical flow estimation, as well as on synthetic problems.
Non-linear programming in shakedown analysis with plasticity and friction
NASA Astrophysics Data System (ADS)
Spagnoli, A.; Terzano, M.; Barber, J. R.; Klarbring, A.
2017-07-01
Complete frictional contacts, when subjected to cyclic loading, may sometimes develop a favourable situation where slip ceases after a few cycles, an occurrence commonly known as frictional shakedown. Its resemblance to shakedown in plasticity has prompted scholars to apply direct methods, derived from the classical theorems of limit analysis, in order to assess a safe limit to the external loads applied on the system. In circumstances where zones of plastic deformation develop in the material (e.g., because of the large stress concentrations near the sharp edges of a complete contact), it is reasonable to expect an effect of mutual interaction of frictional slip and plastic strains on the load limit below which the global behaviour is non dissipative, i.e., both slip and plastic strains go to zero after some dissipative load cycles. In this paper, shakedown of general two-dimensional discrete systems, involving both friction and plasticity, is discussed and the shakedown limit load is calculated using a non-linear programming algorithm based on the static theorem of limit analysis. An illustrative example related to an elastic-plastic solid containing a frictional crack is provided.
Donovan, Michael; Khan, Asaduzzaman; Johnston, Venerina
2017-03-01
Introduction The aim of this study is to determine whether a workplace-based early intervention injury prevention program reduces work-related musculoskeletal compensation outcomes in poultry meat processing workers. Methods A poultry meatworks in Queensland, Australia implemented an onsite early intervention which included immediate reporting and triage, reassurance, multidisciplinary participatory consultation, workplace modification and onsite physiotherapy. Secondary pre-post analyses of the meatworks' compensation data over 4 years were performed, with the intervention commencing 2 years into the study period. Outcome measures included rate of claims, costs per claim and work days absent at an individual claim level. Where possible, similar analyses were performed on data for Queensland's poultry meat processing industry (excluding the meatworks used in this study). Results At the intervention meatworks, in the post intervention period an 18 % reduction in claims per 1 million working hours (p = 0.017) was observed. Generalized linear modelling revealed a significant reduction in average costs per claim of $831 (OR 0.74; 95 % CI 0.59-0.93; p = 0.009). Median days absent was reduced by 37 % (p = 0.024). For the poultry meat processing industry over the same period, generalized linear modelling revealed no significant change in average costs per claim (OR 1.02; 95 % CI 0.76-1.36; p = 0.91). Median days absent was unchanged (p = 0.93). Conclusion The introduction of an onsite, workplace-based early intervention injury prevention program demonstrated positive effects on compensation outcomes for work-related musculoskeletal disorders in poultry meat processing workers. Prospective studies are needed to confirm the findings of the present study.
Can Linear Superiorization Be Useful for Linear Optimization Problems?
Censor, Yair
2017-01-01
Linear superiorization considers linear programming problems but instead of attempting to solve them with linear optimization methods it employs perturbation resilient feasibility-seeking algorithms and steers them toward reduced (not necessarily minimal) target function values. The two questions that we set out to explore experimentally are (i) Does linear superiorization provide a feasible point whose linear target function value is lower than that obtained by running the same feasibility-seeking algorithm without superiorization under identical conditions? and (ii) How does linear superiorization fare in comparison with the Simplex method for solving linear programming problems? Based on our computational experiments presented here, the answers to these two questions are: “yes” and “very well”, respectively. PMID:29335660
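A compact sketch of the idea (not the authors' code): a cyclic-projection feasibility-seeking loop for Ax ≤ b interleaved with shrinking perturbations along -c that steer toward lower target values. The step schedule, projection order, and problem data below are arbitrary illustrative choices.

```python
# Compact sketch of the linear superiorization idea (not the authors' code):
# cyclic projections onto the half-spaces of Ax <= b, interleaved with shrinking
# perturbations along -c that steer toward lower target values c^T x.
import numpy as np

def superiorized_feasibility(A, b, c, x0, sweeps=200, alpha=0.99):
    x = x0.astype(float)
    beta = 1.0
    for _ in range(sweeps):
        x -= beta * c / np.linalg.norm(c)      # superiorization (target-reducing) step
        beta *= alpha                          # shrinking perturbations
        for a_i, b_i in zip(A, b):             # one sweep of half-space projections
            viol = a_i @ x - b_i
            if viol > 0:
                x -= viol / (a_i @ a_i) * a_i
    return x

A = np.array([[-1.0, 0.0], [0.0, -1.0], [1.0, 1.0]])   # x >= 0, x1 + x2 <= 4
b = np.array([0.0, 0.0, 4.0])
c = np.array([1.0, 2.0])                               # target function to reduce

x = superiorized_feasibility(A, b, c, x0=np.array([3.0, 3.0]))
print(x, c @ x, A @ x <= b + 1e-6)
```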
Linear Programming and Its Application to Pattern Recognition Problems
NASA Technical Reports Server (NTRS)
Omalley, M. J.
1973-01-01
Linear programming and linear-programming-like techniques as applied to pattern recognition problems are discussed. Three relatively recent research articles on such applications are summarized. The main results of each paper are described, indicating the theoretical tools needed to obtain them. A synopsis of the author's comments is presented with regard to the applicability or non-applicability of his methods to particular problems, including computational results wherever given.
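One classic linear-programming formulation of the kind such articles survey (illustrative, not taken from the summarized papers) is to seek a separating hyperplane by minimizing total slack; with made-up two-class data:

```python
# Illustrative LP formulation of linear separability (not from the summarized
# papers): find a hyperplane w.x + b separating two classes by minimizing total slack.
import numpy as np
from scipy.optimize import linprog

X = np.array([[2.0, 2.0], [3.0, 3.0], [-2.0, -1.0], [-3.0, -2.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])
n, d = X.shape

# Variables: [w (d), b, s (n)]; minimize sum(s) s.t. y_i (w.x_i + b) + s_i >= 1, s >= 0.
c = np.concatenate([np.zeros(d + 1), np.ones(n)])
A_ub = np.hstack([-(y[:, None] * X), -y[:, None], -np.eye(n)])
b_ub = -np.ones(n)
bounds = [(None, None)] * (d + 1) + [(0, None)] * n

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
w, b, s = res.x[:d], res.x[d], res.x[d + 1:]
print(w, b, s.sum())   # zero total slack means the classes are linearly separable
```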
Wagner, Justin P; Chen, David C; Donahue, Timothy R; Quach, Chi; Hines, O Joe; Hiatt, Jonathan R; Tillou, Areti
2014-01-01
To satisfy trainees' operative competency requirements while improving feedback validity and timeliness using a mobile Web-based platform. The Southern Illinois University Operative Performance Rating Scale (OPRS) was embedded into a website formatted for mobile devices. From March 2013 to February 2014, faculty members were instructed to complete the OPRS form while providing verbal feedback to the operating resident at the conclusion of each procedure. Submitted data were compiled automatically within a secure Web-based spreadsheet. Conventional end-of-rotation performance (CERP) evaluations filed 2006 to 2013 and OPRS performance scores were compared by year of training using serial and independent-samples t tests. The mean CERP scores and OPRS overall resident operative performance scores were directly compared using a linear regression model. OPRS mobile site analytics were reviewed using a Web-based reporting program. Large university-based general surgery residency program. General Surgery faculty used the mobile Web OPRS system to rate resident performance. Residents and the program director reviewed evaluations semiannually. Over the study period, 18 faculty members and 37 residents logged 176 operations using the mobile OPRS system. There were 334 total OPRS website visits. Median time to complete an evaluation was 45 minutes from the end of the operation, and faculty spent an average of 134 seconds on the site to enter 1 assessment. In the 38,506 CERP evaluations reviewed, mean performance scores showed a positive linear trend of 2% change per year of training (p = 0.001). OPRS overall resident operative performance scores showed a significant linear (p = 0.001), quadratic (p = 0.001), and cubic (p = 0.003) trend of change per year of clinical training, reflecting the resident operative experience in our training program. Differences between postgraduate year-1 and postgraduate year-5 overall performance scores were greater with the OPRS (mean = 0.96, CI: 0.55-1.38) than with CERP measures (mean = 0.37, CI: 0.34-0.41). Additionally, there were consistent increases in each of the OPRS subcategories. In contrast to CERPs, the OPRS fully satisfies the Accreditation Council for Graduate Medical Education and American Board of Surgery operative assessment requirements. The mobile Web platform provides a convenient interface, broad accessibility, automatic data compilation, and compatibility with common database and statistical software. Our mobile OPRS system encourages candid feedback dialog and generates a comprehensive review of individual and group-wide operative proficiency in real time. Copyright © 2014 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.
Multi-Constraint Multi-Variable Optimization of Source-Driven Nuclear Systems
NASA Astrophysics Data System (ADS)
Watkins, Edward Francis
1995-01-01
A novel approach to the search for optimal designs of source-driven nuclear systems is investigated. Such systems include radiation shields, fusion reactor blankets and various neutron spectrum-shaping assemblies. The novel approach involves the replacement of the steepest-descents optimization algorithm incorporated in the code SWAN by a significantly more general and efficient sequential quadratic programming optimization algorithm provided by the code NPSOL. The resulting SWAN/NPSOL code system can be applied to more general, multi-variable, multi-constraint shield optimization problems. The constraints it accounts for may include simple bounds on variables, linear constraints, and smooth nonlinear constraints. It may also be applied to unconstrained, bound-constrained and linearly constrained optimization. The shield optimization capabilities of the SWAN/NPSOL code system are tested and verified in a variety of optimization problems: dose minimization at constant cost, cost minimization at constant dose, and multiple-nonlinear-constraint optimization. The replacement of the optimization part of SWAN with NPSOL is found feasible and leads to a very substantial improvement in the complexity of optimization problems which can be efficiently handled.
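NPSOL is a FORTRAN library; as a sketch of the same class of problem (simple bounds, a linear constraint, a smooth nonlinear constraint) posed to a different sequential quadratic programming routine, scipy's SLSQP, with an entirely made-up objective standing in for a shield cost functional:

```python
# Sketch of the SQP problem class only (not SWAN/NPSOL): bounds, a linear
# constraint, and a smooth nonlinear constraint passed to scipy's SLSQP.
import numpy as np
from scipy.optimize import minimize

def objective(x):                     # made-up stand-in for a shield "cost" functional
    return x[0]**2 + 2.0 * x[1]**2 + x[2]**2

constraints = [
    {"type": "ineq", "fun": lambda x: 5.0 - (x[0] + x[1] + x[2])},      # linear
    {"type": "ineq", "fun": lambda x: x[0] * x[1] - 0.5},               # smooth nonlinear
]
bounds = [(0.1, 4.0)] * 3             # simple bounds on the design variables

res = minimize(objective, x0=np.array([1.0, 1.0, 1.0]),
               method="SLSQP", bounds=bounds, constraints=constraints)
print(res.x, res.fun)
```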
Development of an integrated aeroservoelastic analysis program and correlation with test data
NASA Technical Reports Server (NTRS)
Gupta, K. K.; Brenner, M. J.; Voelker, L. S.
1991-01-01
The details and results of the general-purpose finite element STructural Analysis RoutineS (STARS) program, used to perform a complete linear aeroelastic and aeroservoelastic analysis, are presented. The earlier version of the STARS computer program enabled effective finite element modeling as well as static, vibration, buckling, and dynamic response analysis of damped and undamped systems, including those with pre-stressed and spinning structures. Additions to the STARS program include aeroelastic modeling for flutter and divergence solutions, and hybrid control system augmentation for aeroservoelastic analysis. Numerical results of the X-29A aircraft pertaining to vibration, flutter-divergence, and open- and closed-loop aeroservoelastic controls analysis are compared to ground vibration, wind-tunnel, and flight-test results. The open- and closed-loop aeroservoelastic control analyses are based on a hybrid formulation representing the interaction of structural, aerodynamic, and flight-control dynamics.
Two Computer Programs for the Statistical Evaluation of a Weighted Linear Composite.
ERIC Educational Resources Information Center
Sands, William A.
1978-01-01
Two computer programs (one batch, one interactive) are designed to provide statistics for a weighted linear combination of several component variables. Both programs provide mean, variance, standard deviation, and a validity coefficient. (Author/JKS)
The RANDOM computer program: A linear congruential random number generator
NASA Technical Reports Server (NTRS)
Miles, R. F., Jr.
1986-01-01
The RANDOM Computer Program is a FORTRAN program for generating random number sequences and testing linear congruential random number generators (LCGs). The linear congruential form of random number generator is discussed, and the selection of parameters of an LCG for a microcomputer is described. This document describes the following: (1) The RANDOM Computer Program; (2) RANDOM.MOD, the computer code needed to implement an LCG in a FORTRAN program; and (3) The RANCYCLE and the ARITH Computer Programs that provide computational assistance in the selection of parameters for an LCG. The RANDOM, RANCYCLE, and ARITH Computer Programs are written in Microsoft FORTRAN for the IBM PC microcomputer and its compatibles. With only minor modifications, the RANDOM Computer Program and its LCG can be run on most microcomputers or mainframe computers.
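For readers unfamiliar with the recurrence behind such generators, a minimal sketch follows. It implements the standard LCG update x_{n+1} = (a*x_n + c) mod m in Python rather than FORTRAN; the parameters are a common textbook (Numerical Recipes) choice for a 32-bit modulus and are not necessarily those selected in the report.

```python
# Minimal linear congruential generator: x_{n+1} = (a*x_n + c) mod m.
# The parameters below are the well-known Numerical Recipes choice for a
# 32-bit modulus; they are illustrative, not the report's selection.
def lcg(seed, a=1664525, c=1013904223, m=2**32):
    x = seed
    while True:
        x = (a * x + c) % m
        yield x / m          # scale to the unit interval [0, 1)

gen = lcg(seed=12345)
print([round(next(gen), 6) for _ in range(5)])
```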
Ranking Surgical Residency Programs: Reputation Survey or Outcomes Measures?
Wilson, Adam B; Torbeck, Laura J; Dunnington, Gary L
2015-01-01
The release of general surgery residency program rankings by Doximity and U.S. News & World Report accentuates the need to define and establish measurable standards of program quality. This study evaluated the extent to which program rankings based solely on peer nominations correlated with familiar program outcomes measures. Publicly available data were collected for all 254 general surgery residency programs. To generate a rudimentary outcomes-based program ranking, surgery programs were rank-ordered according to an average percentile rank that was calculated using board pass rates and the prevalence of alumni publications. A Kendall τ-b rank correlation computed the association between program rankings based on reputation alone and those derived from outcomes measures, to validate whether reputation was a reasonable surrogate for globally judging program quality. For the 218 programs with complete data eligible for analysis, the mean board pass rate was 72% with a standard deviation of 14%. A total of 60 programs were placed in the 75th percentile or above for the number of publications authored by program alumni. The correlational analysis reported a significant correlation of 0.428, indicating only a moderate association between programs ranked by outcomes measures and those ranked according to reputation. Seventeen programs that were ranked in the top 30 according to reputation were also ranked in the top 30 based on outcomes measures. This study suggests that reputation alone does not fully capture a representative snapshot of a program's quality. Rather, the use of multiple quantifiable indicators and attributes unique to programs ought to be given more consideration when assigning ranks to denote program quality. It is advised that the interpretation and subsequent use of program rankings be met with caution until further studies can rigorously demonstrate best practices for awarding program standings. Copyright © 2015 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.
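As a concrete illustration of the statistic used above, the following hedged sketch computes Kendall's τ-b between two short, made-up rank lists with scipy; the numbers are illustrative and unrelated to the study's data.

```python
# Hypothetical comparison of two program rankings with Kendall's tau-b,
# in the spirit of the study; the rank vectors are invented for illustration.
from scipy.stats import kendalltau

reputation_rank = [1, 2, 3, 4, 5, 6, 7, 8]
outcomes_rank   = [2, 1, 5, 3, 4, 8, 6, 7]

tau, p_value = kendalltau(reputation_rank, outcomes_rank)  # tau-b by default
print(f"tau-b = {tau:.3f}, p = {p_value:.3f}")
```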
Accommodation of practical constraints by a linear programming jet select. [for Space Shuttle
NASA Technical Reports Server (NTRS)
Bergmann, E.; Weiler, P.
1983-01-01
An experimental spacecraft control system will be incorporated into the Space Shuttle flight software and exercised during a forthcoming mission to evaluate its performance and handling qualities. The control system incorporates a 'phase space' control law to generate rate change requests and a linear programming jet select to compute jet firings. Posed as a linear programming problem, jet selection must represent the rate change request as a linear combination of jet acceleration vectors where the coefficients are the jet firing times, while minimizing the fuel expended in satisfying that request. This problem is solved in real time using a revised Simplex algorithm. In order to implement the jet selection algorithm in the Shuttle flight control computer, it was modified to accommodate certain practical features of the Shuttle such as limited computer throughput, lengthy firing times, and a large number of control jets. To the authors' knowledge, this is the first such application of linear programming. It was made possible by careful consideration of the jet selection problem in terms of the properties of linear programming and the Simplex algorithm. These modifications to the jet select algorithm may be useful for the design of reaction controlled spacecraft.
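The LP formulation described above, expressing a commanded rate change as a nonnegative combination of jet acceleration vectors while minimizing fuel, can be sketched as follows. The acceleration columns and fuel rates are hypothetical, and scipy's HiGHS-based linprog stands in for the flight code's revised Simplex implementation.

```python
# Toy jet-selection LP: express a commanded rate change as a nonnegative
# combination of jet acceleration vectors while minimizing fuel use.
# Acceleration vectors and fuel rates are made up for illustration.
import numpy as np
from scipy.optimize import linprog

A = np.array([[ 1.0, -1.0,  0.2,  0.0],    # columns: per-jet angular
              [ 0.0,  0.3, -1.0,  1.0],    # acceleration vectors (rad/s^2)
              [ 0.5,  0.5,  0.5, -1.0]])
rate_request = np.array([0.4, -0.2, 0.1])  # commanded rate change (rad/s)
fuel_rate = np.array([1.0, 1.0, 0.8, 1.2]) # fuel burned per second of firing

res = linprog(c=fuel_rate, A_eq=A, b_eq=rate_request,
              bounds=[(0, None)] * 4, method="highs")
print(np.round(res.x, 3))  # jet firing times (s), if the request is attainable
```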
Aksu, Yaman; Miller, David J; Kesidis, George; Yang, Qing X
2010-05-01
Feature selection for classification in high-dimensional spaces can improve generalization, reduce classifier complexity, and identify important, discriminating feature "markers." For support vector machine (SVM) classification, a widely used technique is recursive feature elimination (RFE). We demonstrate that RFE is not consistent with margin maximization, central to the SVM learning approach. We thus propose explicit margin-based feature elimination (MFE) for SVMs and demonstrate both improved margin and improved generalization, compared with RFE. Moreover, for the case of a nonlinear kernel, we show that RFE assumes that the squared weight vector 2-norm is strictly decreasing as features are eliminated. We demonstrate this is not true for the Gaussian kernel and, consequently, RFE may give poor results in this case. MFE for nonlinear kernels gives better margin and generalization. We also present an extension which achieves further margin gains, by optimizing only two degrees of freedom--the hyperplane's intercept and its squared 2-norm--with the weight vector orientation fixed. We finally introduce an extension that allows margin slackness. We compare against several alternatives, including RFE and a linear programming method that embeds feature selection within the classifier design. On high-dimensional gene microarray data sets, University of California at Irvine (UCI) repository data sets, and Alzheimer's disease brain image data, MFE methods give promising results.
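Margin-based feature elimination (MFE) itself is not part of standard libraries, but the RFE baseline the study critiques is; a hedged sketch of that baseline with a linear SVM on synthetic high-dimensional data is shown below.

```python
# RFE baseline (the method the paper compares against), on synthetic
# high-dimensional data; MFE itself is not reproduced here.
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE
from sklearn.svm import SVC

X, y = make_classification(n_samples=100, n_features=200,
                           n_informative=10, random_state=0)
rfe = RFE(SVC(kernel="linear", C=1.0), n_features_to_select=10, step=10)
rfe.fit(X, y)
print("selected feature indices:", list(rfe.get_support(indices=True)))
```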
Holmes, George M; Pink, George H; Friedman, Sarah A
2013-01-01
To compare the financial performance of rural hospitals with Medicare payment provisions to those paid under prospective payment and to estimate the financial consequences of elimination of the Critical Access Hospital (CAH) program. Financial data for 2004-2010 were collected from the Healthcare Cost Reporting Information System (HCRIS) for rural hospitals. HCRIS data were used to calculate measures of the profitability, liquidity, capital structure, and financial strength of rural hospitals. Linear mixed models accounted for the method of Medicare reimbursement, time trends, hospital, and market characteristics. Simulations were used to estimate profitability of CAHs if they reverted to prospective payment. CAHs generally had lower unadjusted financial performance than other types of rural hospitals, but after adjustment for hospital characteristics, CAHs had generally higher financial performance. Special payment provisions by Medicare to rural hospitals are important determinants of financial performance. In particular, the financial condition of CAHs would be worse if they were paid under prospective payment. © 2012 National Rural Health Association.
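To make the modeling approach concrete, here is a rough sketch of a random-intercept linear mixed model of hospital profitability fitted with statsmodels; the variable names (margin, cah, year) and the simulated panel are hypothetical, not the HCRIS data or the study's full specification.

```python
# Rough sketch of a linear mixed model with a random intercept per hospital,
# in the spirit of the study's panel analysis; data and names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_hosp, n_years = 30, 7
df = pd.DataFrame({
    "hospital": np.repeat(np.arange(n_hosp), n_years),
    "year": np.tile(np.arange(2004, 2004 + n_years), n_hosp),
    "cah": np.repeat(rng.integers(0, 2, n_hosp), n_years),   # CAH indicator
})
hosp_effect = np.repeat(rng.normal(0, 1.5, n_hosp), n_years)
df["margin"] = (2.0 + 1.0 * df["cah"] + 0.1 * (df["year"] - 2004)
                + hosp_effect + rng.normal(0, 1.0, len(df)))

model = smf.mixedlm("margin ~ cah + year", df, groups=df["hospital"])
result = model.fit()
print(result.params)
```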
Generalized hydrodynamics and non-equilibrium steady states in integrable many-body quantum systems
NASA Astrophysics Data System (ADS)
Vasseur, Romain; Bulchandani, Vir; Karrasch, Christoph; Moore, Joel
The long-time dynamics of thermalizing many-body quantum systems can typically be described in terms of a conventional hydrodynamics picture that results from the decay of all but a few slow modes associated with standard conservation laws (such as particle number, energy, or momentum). However, hydrodynamics is expected to fail for integrable systems that are characterized by an infinite number of conservation laws, leading to unconventional transport properties and to complex non-equilibrium states beyond the traditional dogma of statistical mechanics. In this talk, I will describe recent attempts to understand such stationary states far from equilibrium using a generalized hydrodynamics picture. I will discuss the consistency of "Bethe-Boltzmann" kinetic equations with linear response Drude weights and with density-matrix renormalization group calculations. This work was supported by the Department of Energy through the Quantum Materials program (R. V.), NSF DMR-1206515, AFOSR MURI and a Simons Investigatorship (J. E. M.), DFG through the Emmy Noether program KA 3360/2-1 (C. K.).
Archer, Kristin R.; Devin, Clinton J.; Vanston, Susan W.; Koyama, Tatsuki; Phillips, Sharon; George, Steven Z.; McGirt, Matthew J.; Spengler, Dan M.; Aaronson, Oran S.; Cheng, Joseph S.; Wegener, Stephen T.
2015-01-01
The purpose of this study was to determine the efficacy of a cognitive-behavioral based physical therapy (CBPT) program for improving outcomes in patients following lumbar spine surgery. A randomized controlled trial was conducted in 86 adults undergoing a laminectomy with or without arthrodesis for a lumbar degenerative condition. Patients were screened preoperatively for high fear of movement using the Tampa Scale for Kinesiophobia. Randomization to either CBPT or an Education program occurred at 6 weeks after surgery. Assessments were completed pre-treatment, post-treatment and at 3 month follow-up. The primary outcomes were pain and disability measured by the Brief Pain Inventory and Oswestry Disability Index. Secondary outcomes included general health (SF-12) and performance-based tests (5-Chair Stand, Timed Up and Go, 10 Meter Walk). Multivariable linear regression analyses found that CBPT participants had significantly greater decreases in pain and disability and increases in general health and physical performance compared to the Education group at 3 month follow-up. Results suggest a targeted CBPT program may result in significant and clinically meaningful improvement in postoperative outcomes. CBPT has the potential to be an evidence-based program that clinicians can recommend for patients at-risk for poor recovery following spine surgery. PMID:26476267
Timber management planning with timber ram and goal programming
Richard C. Field
1978-01-01
By using goal programming to enhance the linear programming of Timber RAM, multiple decision criteria were incorporated in the timber management planning of a National Forest in the southeastern United States. Combining linear and goal programming capitalizes on the advantages of the two techniques and produces operationally feasible solutions. This enhancement may...
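To illustrate the idea of combining linear programming with goals, the hedged sketch below sets up a two-activity goal program that penalizes a volume shortfall and a budget overrun through deviation variables; all coefficients are invented and unrelated to Timber RAM.

```python
# Minimal goal-programming sketch: two hypothetical activities, a volume goal
# (penalize shortfall) and a budget goal (penalize overrun). Illustrative only.
import numpy as np
from scipy.optimize import linprog

# Decision vector z = [x1, x2, under_vol, over_vol, under_bud, over_bud]
A_eq = np.array([
    [3, 2,  1, -1,  0,  0],   # 3*x1 + 2*x2 + under_vol - over_vol = 120
    [5, 4,  0,  0,  1, -1],   # 5*x1 + 4*x2 + under_bud - over_bud = 200
])
b_eq = np.array([120, 200])
c = np.array([0, 0, 1, 0, 0, 1])   # penalize volume shortfall and budget overrun

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * 6, method="highs")
x1, x2, *dev = res.x
print(f"x1={x1:.1f}, x2={x2:.1f}, deviations={np.round(dev, 2)}")
```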
NASA Technical Reports Server (NTRS)
Correia, Manning J.; Luke, Brian L.; McGrath, Braden J.; Clark, John B.; Rupert, Angus H.
1996-01-01
While considerable attention has been given to visual-vestibular interaction (VVI) during angular motion of the head, as might occur during an aircraft spin, much less attention has been given to VVI during linear motion of the head. Such interaction might occur, for example, while viewing a stationary or moving display during vertical take-off and landing operations. Research into linear VVI, particularly during prolonged periods of linear acceleration, has been hampered by the unavailability of a programmable translator capable of large excursions. We collaborated with Otis Elevator Co. and used their research tower and elevator, whose motion could be digitally programmed, to vertically translate human subjects over a distance of 92.3 meters with a peak linear acceleration of 2 meters/sec(exp 2). During pulsatile or sinusoidal translation, the subjects viewed moving stripes (optokinetic stimulus) or a fixed point source (light-emitting diode, LED, display), respectively. It was generally found that: the direction of linear acceleration relative to the cardinal head axes and the direction of the slow component of optokinetic nystagmus (OKN) determined the extent of VVI during concomitant stripe motion and linear acceleration; acceleration along the z head axis (A(sub z)) produced the largest VVI, particularly when the slow component of OKN was in the same direction as the eye movements produced by the linear acceleration; and eye movements produced by linear acceleration were suppressed by viewing a fixed target at frequencies below 10 Hz, but above this frequency the suppression produced by VVI was removed. Finally, as demonstrated in non-human primates, vergence of the eyes appears to modulate the vertical eye movement response to linear acceleration in humans.
Automatic classification of artifactual ICA-components for artifact removal in EEG signals.
Winkler, Irene; Haufe, Stefan; Tangermann, Michael
2011-08-02
Artifacts contained in EEG recordings hamper both the visual interpretation by experts and the algorithmic processing and analysis (e.g. for Brain-Computer Interfaces (BCI) or for Mental State Monitoring). While hand-optimized selection of source components derived from Independent Component Analysis (ICA) to clean EEG data is widespread, the field could greatly profit from automated solutions based on Machine Learning methods. Existing ICA-based removal strategies depend on explicit recordings of an individual's artifacts or have not been shown to reliably identify muscle artifacts. We propose an automatic method for the classification of general artifactual source components. They are estimated by TDSEP, an ICA method that takes temporal correlations into account. The linear classifier is based on an optimized feature subset determined by a Linear Programming Machine (LPM). The subset is composed of features from the frequency, spatial and temporal domains. A subject-independent classifier was trained on 640 TDSEP components (reaction time (RT) study, n = 12) that were hand labeled by experts as artifactual or brain sources and tested on 1080 new components of RT data of the same study. Generalization was tested on new data from two studies (auditory Event Related Potential (ERP) paradigm, n = 18; motor imagery BCI paradigm, n = 80) that used data with different channel setups and from new subjects. Based on six features only, the optimized linear classifier performed on a par with the inter-expert disagreement (<10% Mean Squared Error (MSE)) on the RT data. On data of the auditory ERP study, the same pre-calculated classifier generalized well and achieved 15% MSE. On data of the motor imagery paradigm, we demonstrate that the discriminant information used for BCI is preserved when removing up to 60% of the most artifactual source components. We propose a universal and efficient classifier of ICA components for the subject-independent removal of artifacts from EEG data. Based on linear methods, it is applicable for different electrode placements and supports the introspection of results. Trained on expert ratings of large data sets, it is not restricted to the detection of eye and muscle artifacts. Its performance and generalization ability are demonstrated on data of different EEG studies.
ERIC Educational Resources Information Center
Matzke, Orville R.
The purpose of this study was to formulate a linear programming model to simulate a foundation type support program and to apply this model to a state support program for the public elementary and secondary school districts in the State of Iowa. The model was successful in producing optimal solutions to five objective functions proposed for…
Generalized Multilevel Structural Equation Modeling
ERIC Educational Resources Information Center
Rabe-Hesketh, Sophia; Skrondal, Anders; Pickles, Andrew
2004-01-01
A unifying framework for generalized multilevel structural equation modeling is introduced. The models in the framework, called generalized linear latent and mixed models (GLLAMM), combine features of generalized linear mixed models (GLMM) and structural equation models (SEM) and consist of a response model and a structural model for the latent…
Development of an energy storage tank model
NASA Astrophysics Data System (ADS)
Buckley, Robert Christopher
A linearized, one-dimensional finite difference model employing an implicit finite difference method for energy storage tanks is developed, programmed with MATLAB, and demonstrated for different applications. A set of nodal energy equations is developed by considering the energy interactions on a small control volume. The general method of solving these equations is described as are other features of the simulation program. Two modeling applications are presented: the first using a hot water storage tank with a solar collector and an absorption chiller to cool a building in the summer, the second using a molten salt storage system with a solar collector and steam power plant to generate electricity. Recommendations for further study as well as all of the source code generated in the project are also provided.
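The kind of linearized, one-dimensional implicit scheme described above can be sketched compactly. The snippet below advances a backward-Euler diffusion model of a stratified tank on a uniform grid, written in Python rather than MATLAB; the geometry, properties, and boundary handling are simplified placeholders, not the thesis model.

```python
# Minimal sketch of a linearized 1-D implicit (backward Euler) finite-difference
# tank model with insulated ends; properties and geometry are placeholders.
import numpy as np

n, dz, dt = 20, 0.1, 10.0        # nodes, node height (m), time step (s)
alpha = 1.4e-7                   # thermal diffusivity of water (m^2/s)
r = alpha * dt / dz**2

# Build the implicit system A T_new = T_old for pure diffusion.
A = np.zeros((n, n))
for i in range(n):
    A[i, i] = 1 + 2 * r
    if i > 0:
        A[i, i - 1] = -r
    if i < n - 1:
        A[i, i + 1] = -r
A[0, 0] = A[-1, -1] = 1 + r      # zero-flux (insulated) boundaries

T = np.full(n, 20.0)             # initial tank temperature (C)
T[-1] = 60.0                     # hot water charged at the top node
for _ in range(360):             # march one hour forward in time
    T = np.linalg.solve(A, T)
print(np.round(T, 2))
```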
PROTEUS two-dimensional Navier-Stokes computer code, version 1.0. Volume 2: User's guide
NASA Technical Reports Server (NTRS)
Towne, Charles E.; Schwab, John R.; Benson, Thomas J.; Suresh, Ambady
1990-01-01
A new computer code was developed to solve the two-dimensional or axisymmetric, Reynolds averaged, unsteady compressible Navier-Stokes equations in strong conservation law form. The thin-layer or Euler equations may also be solved. Turbulence is modeled using an algebraic eddy viscosity model. The objective was to develop a code for aerospace applications that is easy to use and easy to modify. Code readability, modularity, and documentation were emphasized. The equations are written in nonorthogonal body-fitted coordinates, and solved by marching in time using a fully-coupled alternating direction-implicit procedure with generalized first- or second-order time differencing. All terms are linearized using second-order Taylor series. The boundary conditions are treated implicitly, and may be steady, unsteady, or spatially periodic. Simple Cartesian or polar grids may be generated internally by the program. More complex geometries require an externally generated computational coordinate system. The documentation is divided into three volumes. Volume 2 is the User's Guide, and describes the program's general features, the input and output, the procedure for setting up initial conditions, the computer resource requirements, the diagnostic messages that may be generated, the job control language used to run the program, and several test cases.
NASA Astrophysics Data System (ADS)
Ebrahimnejad, Ali
2015-08-01
There are several methods, in the literature, for solving fuzzy variable linear programming problems (fuzzy linear programming in which the right-hand-side vectors and decision variables are represented by trapezoidal fuzzy numbers). In this paper, the shortcomings of some existing methods are pointed out and to overcome these shortcomings a new method based on the bounded dual simplex method is proposed to determine the fuzzy optimal solution of that kind of fuzzy variable linear programming problems in which some or all variables are restricted to lie within lower and upper bounds. To illustrate the proposed method, an application example is solved and the obtained results are given. The advantages of the proposed method over existing methods are discussed. Also, one application of this algorithm in solving bounded transportation problems with fuzzy supplies and demands is dealt with. The proposed method is easy to understand and to apply for determining the fuzzy optimal solution of bounded fuzzy variable linear programming problems occurring in real-life situations.
Doubly robust estimation of generalized partial linear models for longitudinal data with dropouts.
Lin, Huiming; Fu, Bo; Qin, Guoyou; Zhu, Zhongyi
2017-12-01
We develop a doubly robust estimation of generalized partial linear models for longitudinal data with dropouts. Our method extends the highly efficient aggregate unbiased estimating function approach proposed in Qu et al. (2010) to a doubly robust one in the sense that under missing at random (MAR), our estimator is consistent when either the linear conditional mean condition is satisfied or a model for the dropout process is correctly specified. We begin with a generalized linear model for the marginal mean, and then move forward to a generalized partial linear model, allowing for nonparametric covariate effect by using the regression spline smoothing approximation. We establish the asymptotic theory for the proposed method and use simulation studies to compare its finite sample performance with that of Qu's method, the complete-case generalized estimating equation (GEE) and the inverse-probability weighted GEE. The proposed method is finally illustrated using data from a longitudinal cohort study. © 2017, The International Biometric Society.
Guo, P; Huang, G H
2009-01-01
In this study, an inexact fuzzy chance-constrained two-stage mixed-integer linear programming (IFCTIP) approach is proposed for supporting long-term planning of waste-management systems under multiple uncertainties in the City of Regina, Canada. The method improves upon the existing inexact two-stage programming and mixed-integer linear programming techniques by incorporating uncertainties expressed as multiple uncertainties of intervals and dual probability distributions within a general optimization framework. The developed method can provide an effective linkage between the predefined environmental policies and the associated economic implications. Four special characteristics of the proposed method make it unique compared with other optimization techniques that deal with uncertainties. Firstly, it provides a linkage to predefined policies that have to be respected when a modeling effort is undertaken; secondly, it is useful for tackling uncertainties presented as intervals, probabilities, fuzzy sets and their incorporation; thirdly, it facilitates dynamic analysis for decisions of facility-expansion planning and waste-flow allocation within a multi-facility, multi-period, multi-level, and multi-option context; fourthly, the penalties are exercised with recourse against any infeasibility, which permits in-depth analyses of various policy scenarios that are associated with different levels of economic consequences when the promised solid waste-generation rates are violated. In a companion paper, the developed method is applied to a real case for the long-term planning of waste management in the City of Regina, Canada.
Schuna, John M; Lauersdorf, Rebekah L; Behrens, Timothy K; Liguori, Gary; Liebert, Mina L
2013-02-01
After-school programs may provide valuable opportunities for children to accumulate healthful physical activity (PA). This study assessed the PA of third-, fourth-, and fifth-grade children in the Keep It Moving! (KIM) after-school PA program, which was implemented in an ethnically diverse and low socioeconomic status school district in Colorado Springs, Colorado. The PA of KIM participating children (N = 116) at 4 elementary schools was objectively assessed using ActiGraph accelerometers and the System for Observing Fitness Instruction Time (SOFIT). Linear mixed-effects models or generalized linear mixed-effects models were used to compare time spent in sedentary (SED) behaviors, light PA (LPA), moderate PA (MPA), vigorous PA (VPA), and moderate-to-vigorous PA (MVPA) between genders and weight status classifications during KIM sessions. Children accumulated 7.6 minutes of SED time, 26.9 minutes of LPA, and 22.2 minutes of MVPA during KIM sessions. Boys accumulated less SED time (p < .05) and LPA (p = .04) than girls, but accumulated more MPA (p = .04), VPA (p = .03), and MVPA (p = .03). Overweight/obese children accumulated more LPA (p = .04) and less VPA (p < .05) than nonoverweight children. SOFIT data indicated that children spent a considerable proportion of KIM sessions being very active (12.4%), walking (36.0%), or standing (40.3%). The KIM program provides opportunities for disadvantaged children to accumulate substantial amounts of MVPA (>20 minutes per session) in an effort to meet current PA guidelines. © 2013, American School Health Association.
Co-doped sodium chloride crystals exposed to different irradiation temperature
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ortiz-Morales, A.; Cruz-Zaragoza, E.; Furetta, C.
2013-07-03
Monocrystals of NaCl:XCl₂:MnCl₂ (X = Ca, Cd) at four different concentrations have been analyzed. The crystals were exposed to different irradiation temperatures: room temperature (RT), solid water (SW), dry ice (DI) and liquid nitrogen (LN). The samples were irradiated with photons from ⁶⁰Co irradiators. The co-doped sodium chloride crystals show a complex glow-curve structure that can be related to different distributions of traps. The linearity of the response was analyzed with the F(D) index. The F(D) value was less than unity, indicating a sub-linear TL response as a function of dose. The glow curves were deconvoluted using the CGCD program based on first-, second- and general-order kinetics.
Attitude dynamics simulation subroutines for systems of hinge-connected rigid bodies
NASA Technical Reports Server (NTRS)
Fleischer, G. E.; Likins, P. W.
1974-01-01
Several computer subroutines are designed to provide the solution to minimum-dimension sets of discrete-coordinate equations of motion for systems consisting of an arbitrary number of hinge-connected rigid bodies assembled in a tree topology. In particular, these routines may be applied to: (1) the case of completely unrestricted hinge rotations, (2) the totally linearized case (all system rotations are small), and (3) the mixed, or partially linearized, case. The use of the programs in each case is demonstrated using a five-body spacecraft and attitude control system configuration. The ability of the subroutines to accommodate prescribed motions of system bodies is also demonstrated. Complete listings and user instructions are included for these routines (written in FORTRAN V) which are intended as multi- and general-purpose tools in the simulation of spacecraft and other complex electromechanical systems.
Optimum testing of multiple hypotheses in quantum detection theory
NASA Technical Reports Server (NTRS)
Yuen, H. P.; Kennedy, R. S.; Lax, M.
1975-01-01
The problem of specifying the optimum quantum detector in multiple hypotheses testing is considered for application to optical communications. The quantum digital detection problem is formulated as a linear programming problem on an infinite-dimensional space. A necessary and sufficient condition is derived by the application of a general duality theorem specifying the optimum detector in terms of a set of linear operator equations and inequalities. Existence of the optimum quantum detector is also established. The optimality of commuting detection operators is discussed in some examples. The structure and performance of the optimal receiver are derived for the quantum detection of narrow-band coherent orthogonal and simplex signals. It is shown that modal photon counting is asymptotically optimum in the limit of a large signaling alphabet and that the capacity goes to infinity in the absence of a bandwidth limitation.
Laschober, Tanja C.; de Tormes Eby, Lillian Turner
2013-01-01
The main goals of the current study were to investigate whether there are linear or curvilinear relationships between substance use disorder counselors’ job performance and actual turnover after 1 year utilizing four indicators of job performance and three turnover statuses (voluntary, involuntary, and no turnover as the reference group). Using longitudinal data from 440 matched counselor-clinical supervisor dyads, results indicate that overall, counselors with lower job performance are more likely to turn over voluntarily and involuntarily than not to turn over. Further, one of the job performance measures shows a significant curvilinear effect. We conclude that the negative consequences often assumed to be “caused” by counselor turnover may be overstated because those who leave both voluntarily and involuntarily demonstrate generally lower performance than those who remain employed at their treatment program. PMID:22527711
Mills, Stacia; Wolitzky-Taylor, Kate; Xiao, Anna Q; Bourque, Marie Claire; Rojas, Sandra M Peynado; Bhattacharya, Debanjana; Simpson, Annabelle K; Maye, Aleea; Lo, Pachida; Clark, Aaron; Lim, Russell; Lu, Francis G
2016-10-01
The authors assessed whether a 1-h didactic session on the DSM-5 Cultural Formulation Interview (CFI) improves cultural competence of general psychiatry residents. Psychiatry residents at six residency programs completed demographics and pre-intervention questionnaires, were exposed to a 1-h session on the CFI, and completed a post-intervention questionnaire. Repeated measures ANCOVA compared pre- to post-intervention change. Linear regression assessed whether previous cultural experience predicted post-intervention scores. Mean scores on the questionnaire significantly changed from pre- to post-intervention (p < 0.001). Previous cultural experience did not predict post-intervention scores. Psychiatry residents' cultural competence scores improved with a 1-h session on the CFI but with notable limitations.
Anderson, D.R.
1975-01-01
Optimal exploitation strategies were studied for an animal population in a Markovian (stochastic, serially correlated) environment. This is a general case and encompasses a number of important special cases as simplifications. Extensive empirical data on the Mallard (Anas platyrhynchos) were used as an example of the general theory. The number of small ponds on the central breeding grounds was used as an index to the state of the environment. A general mathematical model was formulated to provide a synthesis of the existing literature, estimates of parameters developed from an analysis of data, and hypotheses regarding the specific effect of exploitation on total survival. The literature and analysis of data were inconclusive concerning the effect of exploitation on survival. Therefore, two hypotheses were explored: (1) exploitation mortality represents a largely additive form of mortality, and (2) exploitation mortality is compensatory with other forms of mortality, at least to some threshold level. Models incorporating these two hypotheses were formulated as stochastic dynamic programming models and optimal exploitation strategies were derived numerically on a digital computer. Optimal exploitation strategies were found to exist under rather general conditions. Direct feedback control was an integral component in the optimal decision-making process. Optimal exploitation was found to be substantially different depending upon the hypothesis regarding the effect of exploitation on the population. If we assume that exploitation is largely an additive force of mortality in Mallards, then optimal exploitation decisions are a convex function of the size of the breeding population and a linear or slightly concave function of the environmental conditions. Under the hypothesis of compensatory mortality forces, optimal exploitation decisions are approximately linearly related to the size of the Mallard breeding population. Dynamic programming is suggested as a very general formulation for realistic solutions to the general optimal exploitation problem. The concepts of state vectors and stage transformations are completely general. Populations can be modeled stochastically and the objective function can include extra-biological factors. The optimal level of exploitation in year t must be based on the observed size of the population and the state of the environment in year t unless the dynamics of the population, the state of the environment, and the result of the exploitation decisions are completely deterministic. Exploitation based on an average harvest, or harvest rate, or designed to maintain a constant breeding population size is inefficient.
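The structure of such a stochastic dynamic program can be sketched with a small backward-induction example: a discretized population state, a stochastic growth multiplier, and an immediate reward equal to the harvest. All numbers are illustrative and are not the Mallard parameter estimates.

```python
# Schematic backward-induction dynamic program for a harvest decision, in the
# spirit of the exploitation models described; all parameters are illustrative.
import numpy as np

states = np.linspace(0, 100, 21)           # discretized population size
harvest_rates = np.linspace(0.0, 0.5, 11)  # candidate exploitation rates
growth = np.array([0.9, 1.1, 1.3])         # stochastic growth multipliers
probs = np.array([0.3, 0.4, 0.3])          # their probabilities
horizon = 10

V = np.zeros(len(states))                  # terminal value
policy = np.zeros((horizon, len(states)))
for t in reversed(range(horizon)):
    V_next = V.copy()
    for i, n in enumerate(states):
        best = -np.inf
        for h in harvest_rates:
            survivors = n * (1 - h)
            nxt = np.clip(survivors * growth, states[0], states[-1])
            # immediate harvest reward plus expected future value (interpolated)
            value = n * h + probs @ np.interp(nxt, states, V_next)
            if value > best:
                best, policy[t, i] = value, h
        V[i] = best
print(policy[0])                           # first-year harvest rate by state
```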
Can linear superiorization be useful for linear optimization problems?
NASA Astrophysics Data System (ADS)
Censor, Yair
2017-04-01
Linear superiorization (LinSup) considers linear programming problems but instead of attempting to solve them with linear optimization methods it employs perturbation resilient feasibility-seeking algorithms and steers them toward reduced (not necessarily minimal) target function values. The two questions that we set out to explore experimentally are: (i) does LinSup provide a feasible point whose linear target function value is lower than that obtained by running the same feasibility-seeking algorithm without superiorization under identical conditions? (ii) How does LinSup fare in comparison with the Simplex method for solving linear programming problems? Based on our computational experiments presented here, the answers to these two questions are: ‘yes’ and ‘very well’, respectively.
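A highly simplified sketch of the superiorization idea follows: a cyclic-projection feasibility-seeking sweep over half-spaces, perturbed between sweeps by shrinking steps along the negative target gradient. The constraint data and step schedule are invented for illustration and do not reproduce the paper's experiments.

```python
# Simplified linear-superiorization sketch: projections onto the half-spaces
# a_i.x <= b_i (feasibility seeking), perturbed by shrinking steps along -c
# to reduce the linear target c.x. Data and schedule are illustrative only.
import numpy as np

A = np.array([[-1.0, 0.0], [0.0, -1.0], [1.0, 1.0]])  # constraints A x <= b
b = np.array([0.0, 0.0, 4.0])                          # x >= 0, x1 + x2 <= 4
c = np.array([1.0, 2.0])                               # linear target to reduce

x = np.array([3.0, 3.0])
beta, shrink = 1.0, 0.9
for _ in range(200):
    x = x - beta * c / np.linalg.norm(c)               # superiorization step
    beta *= shrink
    for a_i, b_i in zip(A, b):                          # feasibility sweep
        viol = a_i @ x - b_i
        if viol > 0:
            x = x - viol * a_i / (a_i @ a_i)            # project onto half-space
print(np.round(x, 3), round(c @ x, 3))
```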
Archer, Kristin R; Devin, Clinton J; Vanston, Susan W; Koyama, Tatsuki; Phillips, Sharon E; George, Steven Z; McGirt, Matthew J; Spengler, Dan M; Aaronson, Oran S; Cheng, Joseph S; Wegener, Stephen T
2016-01-01
The purpose of this study was to determine the efficacy of a cognitive-behavioral-based physical therapy (CBPT) program for improving outcomes in patients after lumbar spine surgery. A randomized controlled trial was conducted on 86 adults undergoing a laminectomy with or without arthrodesis for a lumbar degenerative condition. Patients were screened preoperatively for high fear of movement using the Tampa Scale for Kinesiophobia. Randomization to either CBPT or an education program occurred at 6 weeks after surgery. Assessments were completed pretreatment, posttreatment and at 3-month follow-up. The primary outcomes were pain and disability measured by the Brief Pain Inventory and Oswestry Disability Index. Secondary outcomes included general health (SF-12) and performance-based tests (5-Chair Stand, Timed Up and Go, 10-Meter Walk). Multivariable linear regression analyses found that CBPT participants had significantly greater decreases in pain and disability and increases in general health and physical performance compared with the education group at the 3-month follow-up. Results suggest a targeted CBPT program may result in significant and clinically meaningful improvement in postoperative outcomes. CBPT has the potential to be an evidence-based program that clinicians can recommend for patients at risk for poor recovery after spine surgery. This study investigated a targeted cognitive-behavioral-based physical therapy program for patients after lumbar spine surgery. Findings lend support to the hypothesis that incorporating cognitive-behavioral strategies into postoperative physical therapy may address psychosocial risk factors and improve pain, disability, general health, and physical performance outcomes. Copyright © 2016 American Pain Society. Published by Elsevier Inc. All rights reserved.
Portfolio optimization using fuzzy linear programming
NASA Astrophysics Data System (ADS)
Pandit, Purnima K.
2013-09-01
Portfolio Optimization (PO) is a problem in Finance, in which an investor tries to maximize return and minimize risk by carefully choosing different assets. Expected return and risk are the most important parameters with regard to optimal portfolios. In its simple form, PO can be modeled as a quadratic programming problem, which can be put into an equivalent linear form. PO problems with fuzzy parameters can be solved as multi-objective fuzzy linear programming problems. In this paper we give the solution to such problems with an illustrative example.
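For orientation, a crisp (non-fuzzy) linear-programming sketch of a small portfolio problem is shown below: maximize expected return subject to a full-investment constraint and per-asset caps. The returns and caps are hypothetical, and the fuzzy multi-objective extension described in the paper is not reproduced.

```python
# Crisp LP sketch of a small portfolio: maximize expected return subject to a
# full-investment budget and per-asset caps. Returns and caps are hypothetical.
import numpy as np
from scipy.optimize import linprog

expected_return = np.array([0.08, 0.12, 0.05, 0.10])   # per asset
caps = [(0.0, 0.4)] * 4                                  # at most 40% per asset

res = linprog(
    c=-expected_return,                 # linprog minimizes, so negate
    A_eq=np.ones((1, 4)), b_eq=[1.0],   # weights sum to 1
    bounds=caps, method="highs",
)
print(np.round(res.x, 3), round(-res.fun, 4))  # optimal weights, expected return
```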
Users manual for linear Time-Varying Helicopter Simulation (Program TVHIS)
NASA Technical Reports Server (NTRS)
Burns, M. R.
1979-01-01
A linear time-varying helicopter simulation program (TVHIS) is described. The program is designed as a realistic yet efficient helicopter simulation. It is based on a linear time-varying helicopter model which includes rotor, actuator, and sensor models, as well as a simulation of flight computer logic. The TVHIS can generate a mean trajectory simulation along a nominal trajectory, or propagate covariance of helicopter states, including rigid-body, turbulence, control command, controller states, and rigid-body state estimates.
A set-covering based heuristic algorithm for the periodic vehicle routing problem.
Cacchiani, V; Hemmelmayr, V C; Tricoire, F
2014-01-30
We present a hybrid optimization algorithm for mixed-integer linear programming, embedding both heuristic and exact components. In order to validate it we use the periodic vehicle routing problem (PVRP) as a case study. This problem consists of determining a set of minimum cost routes for each day of a given planning horizon, with the constraints that each customer must be visited a required number of times (chosen among a set of valid day combinations), must receive every time the required quantity of product, and that the number of routes per day (each respecting the capacity of the vehicle) does not exceed the total number of available vehicles. This is a generalization of the well-known vehicle routing problem (VRP). Our algorithm is based on the linear programming (LP) relaxation of a set-covering-like integer linear programming formulation of the problem, with additional constraints. The LP-relaxation is solved by column generation, where columns are generated heuristically by an iterated local search algorithm. The whole solution method takes advantage of the LP-solution and applies techniques of fixing and releasing of the columns as a local search, making use of a tabu list to avoid cycling. We show the results of the proposed algorithm on benchmark instances from the literature and compare them to the state-of-the-art algorithms, showing the effectiveness of our approach in producing good quality solutions. In addition, we report the results on realistic instances of the PVRP introduced in Pacheco et al. (2011) [24] and on benchmark instances of the periodic traveling salesman problem (PTSP), showing the efficacy of the proposed algorithm on these as well. Finally, we report the new best known solutions found for all the tested problems.
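At the core of the approach is a set-covering-like LP relaxation; a tiny hedged instance is sketched below, with made-up columns and costs. The column-generation, fixing/releasing, and tabu components of the paper are not reproduced.

```python
# Tiny set-covering LP relaxation: columns are candidate routes/day-combinations
# with costs, rows are customers that must each be covered at least once.
# The instance is invented for illustration.
import numpy as np
from scipy.optimize import linprog

# cover[i, j] = 1 if column j covers customer i
cover = np.array([[1, 0, 1, 0, 1],
                  [1, 1, 0, 0, 0],
                  [0, 1, 1, 1, 0],
                  [0, 0, 0, 1, 1]])
cost = np.array([4.0, 3.0, 5.0, 4.0, 2.0])

res = linprog(cost,
              A_ub=-cover, b_ub=-np.ones(4),     # cover each customer >= 1
              bounds=[(0, 1)] * 5, method="highs")
print(np.round(res.x, 3), res.fun)               # fractional column selection
```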
Wildhaber, M.L.; Holan, S.H.; Bryan, J.L.; Gladish, D.W.; Ellersieck, M.
2011-01-01
In 2003, the US Army Corps of Engineers initiated the Pallid Sturgeon Population Assessment Program (PSPAP) to monitor pallid sturgeon and the fish community of the Missouri River. The power analysis of PSPAP presented here was conducted to guide sampling design and effort decisions. The PSPAP sampling design has a nested structure with multiple gear subsamples within a river bend. Power analyses were based on a normal linear mixed model, using a mixed cell means approach, with variance estimates from the original data. It was found that, at current effort levels, at least 20 years of monitoring for pallid sturgeon and 10 years for shovelnose sturgeon are needed to detect a 5% annual decline. Modified bootstrap simulations suggest power estimates from the original data are conservative due to excessive zero fish counts. In general, the approach presented is applicable to a wide array of animal monitoring programs.
NASA Technical Reports Server (NTRS)
Collins, L.; Saunders, D.
1986-01-01
User information for program PROFILE, an aerodynamics design utility for refining, plotting, and tabulating airfoil profiles is provided. The theory and implementation details for two of the more complex options are also presented. These are the REFINE option, for smoothing curvature in selected regions while retaining or seeking some specified thickness ratio, and the OPTIMIZE option, which seeks a specified curvature distribution. REFINE uses linear techniques to manipulate ordinates via the central difference approximation to second derivatives, while OPTIMIZE works directly with curvature using nonlinear least squares techniques. Use of programs QPLOT and BPLOT is also described, since all of the plots provided by PROFILE (airfoil coordinates, curvature distributions) are achieved via the general purpose QPLOT utility. BPLOT illustrates (again, via QPLOT) the shape functions used by two of PROFILE's options. The programs were designed and implemented for the Applied Aerodynamics Branch at NASA Ames Research Center, Moffett Field, California, and written in FORTRAN and run on a VAX-11/780 under VMS.
NASA Technical Reports Server (NTRS)
Hairr, John W.; Huang, Jui-Ten; Ingram, J. Edward; Shah, Bharat M.
1992-01-01
The ISPAN Program (Interactive Stiffened Panel Analysis) is an interactive design tool that is intended to provide a means of performing simple and self-contained preliminary analysis of aircraft primary structures made of composite materials. The program combines a series of modules with the finite element code DIAL as its backbone. Four ISPAN modules were developed and are documented. These include: (1) flat stiffened panel; (2) curved stiffened panel; (3) flat tubular panel; and (4) curved geodesic panel. Users are instructed to input geometric and material properties, load information and types of analysis (linear, bifurcation buckling, or post-buckling) interactively. Using this information, the program will generate a finite element mesh and perform the analysis. The output, in the form of summary tables of stress or margins of safety, contour plots of loads or stress, and deflected-shape plots, may be generated and used to evaluate a specific design.
Estimation of group means when adjusting for covariates in generalized linear models.
Qu, Yongming; Luo, Junxiang
2015-01-01
Generalized linear models are commonly used to analyze categorical data such as binary, count, and ordinal outcomes. Adjusting for important prognostic factors or baseline covariates in generalized linear models may improve the estimation efficiency. The model-based mean for a treatment group produced by most software packages estimates the response at the mean covariate, not the mean response for this treatment group for the studied population. Although this is not an issue for linear models, the model-based group mean estimates in generalized linear models could be seriously biased for the true group means. We propose a new method to estimate the group mean consistently with the corresponding variance estimation. Simulation showed the proposed method produces an unbiased estimator for the group means and provided the correct coverage probability. The proposed method was applied to analyze hypoglycemia data from clinical trials in diabetes. Copyright © 2014 John Wiley & Sons, Ltd.
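The distinction the paper addresses can be made concrete with a short, hedged example: in a logistic model, the response predicted at the mean covariate is generally not equal to the mean of the predicted responses across the sample. The data below are simulated for illustration, and the code does not implement the authors' proposed estimator.

```python
# Illustration of the issue: prediction at the mean covariate vs. the mean of
# predictions over the sample, for a logistic (generalized linear) model.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 500
x = rng.normal(0, 2, n)                          # baseline covariate
treat = rng.integers(0, 2, n)                    # treatment indicator
logit = -0.5 + 1.2 * treat + 0.8 * x
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = sm.add_constant(np.column_stack([treat, x]))
fit = sm.GLM(y, X, family=sm.families.Binomial()).fit()

# "Model-based" mean for the treated group evaluated at the mean covariate:
at_mean_x = fit.predict([[1, 1, x.mean()]])[0]
# Marginal (population-averaged) mean: average prediction over observed x:
marginal = fit.predict(np.column_stack([np.ones(n), np.ones(n), x])).mean()
print(round(at_mean_x, 3), round(marginal, 3))   # these generally differ
```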
Linear Programming for Vocational Education Planning. Interim Report.
ERIC Educational Resources Information Center
Young, Robert C.; And Others
The purpose of the paper is to define for potential users of vocational education management information systems a quantitative analysis technique and its utilization to facilitate more effective planning of vocational education programs. Defining linear programming (LP) as a management technique used to solve complex resource allocation problems…
Mathematical Modeling of Intestinal Iron Absorption Using Genetic Programming
Colins, Andrea; Gerdtzen, Ziomara P.; Nuñez, Marco T.; Salgado, J. Cristian
2017-01-01
Iron is a trace metal, key for the development of living organisms. Its absorption process is complex and highly regulated at the transcriptional, translational and systemic levels. Recently, the internalization of the DMT1 transporter has been proposed as an additional regulatory mechanism at the intestinal level, associated to the mucosal block phenomenon. The short-term effect of iron exposure in apical uptake and initial absorption rates was studied in Caco-2 cells at different apical iron concentrations, using both an experimental approach and a mathematical modeling framework. This is the first report of short-term studies for this system. A non-linear behavior in the apical uptake dynamics was observed, which does not follow the classic saturation dynamics of traditional biochemical models. We propose a method for developing mathematical models for complex systems, based on a genetic programming algorithm. The algorithm is aimed at obtaining models with a high predictive capacity, and considers an additional parameter fitting stage and an additional Jackknife stage for estimating the generalization error. We developed a model for the iron uptake system with a higher predictive capacity than classic biochemical models. This was observed both with the apical uptake dataset used for generating the model and with an independent initial rates dataset used to test the predictive capacity of the model. The model obtained is a function of time and the initial apical iron concentration, with a linear component that captures the global tendency of the system, and a non-linear component that can be associated to the movement of DMT1 transporters. The model presented in this paper allows the detailed analysis, interpretation of experimental data, and identification of key relevant components for this complex biological process. This general method holds great potential for application to the elucidation of biological mechanisms and their key components in other complex systems. PMID:28072870
Probabilistic dual heuristic programming-based adaptive critic
NASA Astrophysics Data System (ADS)
Herzallah, Randa
2010-02-01
Adaptive critic (AC) methods have common roots as generalisations of dynamic programming for neural reinforcement learning approaches. Since they approximate the dynamic programming solutions, they are potentially suitable for learning in noisy, non-linear and non-stationary environments. In this study, a novel probabilistic dual heuristic programming (DHP)-based AC controller is proposed. Distinct to current approaches, the proposed probabilistic (DHP) AC method takes uncertainties of forward model and inverse controller into consideration. Therefore, it is suitable for deterministic and stochastic control problems characterised by functional uncertainty. Theoretical development of the proposed method is validated by analytically evaluating the correct value of the cost function which satisfies the Bellman equation in a linear quadratic control problem. The target value of the probabilistic critic network is then calculated and shown to be equal to the analytically derived correct value. Full derivation of the Riccati solution for this non-standard stochastic linear quadratic control problem is also provided. Moreover, the performance of the proposed probabilistic controller is demonstrated on linear and non-linear control examples.
Contextual Fraction as a Measure of Contextuality.
Abramsky, Samson; Barbosa, Rui Soares; Mansfield, Shane
2017-08-04
We consider the contextual fraction as a quantitative measure of contextuality of empirical models, i.e., tables of probabilities of measurement outcomes in an experimental scenario. It provides a general way to compare the degree of contextuality across measurement scenarios; it bears a precise relationship to violations of Bell inequalities; its value, and a witnessing inequality, can be computed using linear programing; it is monotonic with respect to the "free" operations of a resource theory for contextuality; and it measures quantifiable advantages in informatic tasks, such as games and a form of measurement-based quantum computing.
FORTRAN plotting subroutines for the space plasma laboratory
NASA Technical Reports Server (NTRS)
Williams, R.
1983-01-01
The computer program known as PLOTRW was custom made to satisfy some of the graphics requirements for the data collected in the Space Plasma Laboratory at the Johnson Space Center (JSC). The general requirements for the program were as follows: (1) all subroutines shall be callable through a FORTRAN source program; (2) all graphs shall fill one page and be properly labeled; (3) there shall be options for linear axes and logarithmic axes; (4) each axis shall have tick marks equally spaced with numeric values printed at the beginning tick mark and at the last tick mark; and (5) there shall be three options for plotting. These are: (1) point plot, (2) line plot and (3) point-line plot. The subroutines were written in FORTRAN IV for the LSI-11 Digital Equipment Corporation (DEC) computer. The program is now operational and can be run on any TEKTRONIX graphics terminal that uses a DEC Real-Time-11 (RT-11) operating system.
NASA Astrophysics Data System (ADS)
Kassa, Semu Mitiku; Tsegay, Teklay Hailay
2017-08-01
Tri-level optimization problems are optimization problems with three nested hierarchical structures, where in most cases conflicting objectives are set at each level of hierarchy. Such problems are common in management, engineering designs and in decision making situations in general, and are known to be strongly NP-hard. Existing solution methods lack universality in solving these types of problems. In this paper, we investigate a tri-level programming problem with quadratic fractional objective functions at each of the three levels. A solution algorithm has been proposed by applying fuzzy goal programming approach and by reformulating the fractional constraints to equivalent but non-fractional non-linear constraints. Based on the transformed formulation, an iterative procedure is developed that can yield a satisfactory solution to the tri-level problem. The numerical results on various illustrative examples demonstrated that the proposed algorithm is very much promising and it can also be used to solve larger-sized as well as n-level problems of similar structure.
ERIC Educational Resources Information Center
Smith, Karan B.
1996-01-01
Presents activities which highlight major concepts of linear programming. Demonstrates how technology allows students to solve linear programming problems using exploration prior to learning algorithmic methods. (DDR)
Hospital costs associated with surgical site infections in general and vascular surgery patients.
Boltz, Melissa M; Hollenbeak, Christopher S; Julian, Kathleen G; Ortenzi, Gail; Dillon, Peter W
2011-11-01
Although much has been written about excess cost and duration of stay (DOS) associated with surgical site infections (SSIs) after cardiothoracic surgery, less has been reported after vascular and general surgery. We used data from the National Surgical Quality Improvement Program (NSQIP) to estimate the total cost and DOS associated with SSIs in patients undergoing general and vascular surgery. Using standard NSQIP practices, data were collected on patients undergoing general and vascular surgery at a single academic center between 2007 and 2009 and were merged with fully loaded operating costs obtained from the hospital accounting database. Logistic regression was used to determine which patient and preoperative variables influenced the occurrence of SSIs. After adjusting for patient characteristics, costs and DOS were fit to linear regression models to determine the effect of SSIs. Of the 2,250 general and vascular surgery patients sampled, SSIs were observed in 186 inpatients. Predisposing factors of SSIs were male sex, insulin-dependent diabetes, steroid use, wound classification, and operative time (P < .05). After adjusting for those characteristics, the total excess cost and DOS attributable to SSIs were $10,497 (P < .0001) and 4.3 days (P < .0001), respectively. SSIs complicating general and vascular surgical procedures share many risk factors with SSIs after cardiothoracic surgery. Although the excess costs and DOS associated with SSIs after general and vascular surgery are somewhat less, they still represent substantial financial and opportunity costs to hospitals and suggest, along with the implications for patient care, a continuing need for cost-effective quality improvement and programs of infection prevention. Copyright © 2011 Mosby, Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yavari, M., E-mail: yavari@iaukashan.ac.ir
2016-06-15
We generalize the results of Nesterenko [13, 14] and Gogilidze and Surovtsev [15] for DNA structures. Using the generalized Hamiltonian formalism, we investigate solutions of the equilibrium shape equations for the linear free energy model.
Measuring Efficiency of Secondary Healthcare Providers in Slovenia
Blatnik, Patricia; Bojnec, Štefan; Tušak, Matej
2017-01-01
The chief aim of this study was to analyze the efficiency of secondary healthcare providers, focusing on Slovene general hospitals. We intended to present a complete picture of the technical, allocative, and cost (economic) efficiency of general hospitals. Methods: We researched the aspects of efficiency with two econometric methods. First, we calculated the necessary efficiency quotients with stochastic frontier analysis (SFA), realized by econometric estimation of stochastic frontier functions; then, with data envelopment analysis (DEA), we calculated the quotients based on the linear programming method. Results: The two chosen methods produced two different conclusions. The SFA method identified Celje General Hospital as the most efficient general hospital, whereas the DEA method identified Brežice General Hospital as the most efficient. Conclusion: Our results are a useful tool that can help managers, payers, and designers of healthcare policy better understand how general hospitals operate. The participants can accordingly decide with less difficulty on any further business operations of general hospitals, having the best practices of general hospitals at their disposal. PMID:28730180
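To show the linear program underlying one standard DEA formulation (the input-oriented CCR model), a hedged sketch for a single evaluated unit follows; the hospital inputs and outputs are invented for illustration and are not the Slovene data, and the SFA side of the study is not reproduced.

```python
# Sketch of the LP behind an input-oriented CCR DEA efficiency score for one
# decision-making unit ("DMU 0"); all data are invented for illustration.
import numpy as np
from scipy.optimize import linprog

inputs  = np.array([[100., 50.], [120., 40.], [ 80., 60.], [ 90., 55.]])  # e.g. beds, staff
outputs = np.array([[300., 20.], [280., 25.], [310., 15.], [250., 30.]])  # e.g. cases, procedures
o = 0                                    # unit being evaluated

n_dmu, n_in = inputs.shape
n_out = outputs.shape[1]
# Decision vector w = [u (output weights), v (input weights)]
c = np.concatenate([-outputs[o], np.zeros(n_in)])             # maximize u.y_o
A_eq = np.concatenate([np.zeros(n_out), inputs[o]])[None, :]  # v.x_o = 1
A_ub = np.hstack([outputs, -inputs])                          # u.y_j - v.x_j <= 0
res = linprog(c, A_ub=A_ub, b_ub=np.zeros(n_dmu),
              A_eq=A_eq, b_eq=[1.0],
              bounds=[(0, None)] * (n_out + n_in), method="highs")
print("efficiency of DMU 0:", round(-res.fun, 3))
```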
Koda, Shin-ichi
2015-05-28
It has been shown by some existing studies that, in special cases, some linear dynamical systems defined on a dendritic network are equivalent to those defined on a set of one-dimensional networks, and this transformation to the simpler picture, which we call linear chain (LC) decomposition, has a significant advantage in understanding properties of dendrimers. In this paper, we expand the class of LC decomposable systems with some generalizations. In addition, we propose two general sufficient conditions for LC decomposability with a procedure to systematically realize the LC decomposition. Some examples of LC decomposable linear dynamical systems are also presented with their graphs. The generalization of the LC decomposition is implemented in the following three aspects: (i) the type of linear operators; (ii) the shape of dendritic networks on which linear operators are defined; and (iii) the type of symmetry operations representing the symmetry of the systems. In the generalization (iii), symmetry groups that represent the symmetry of dendritic systems are defined. The LC decomposition is realized by changing the basis of a linear operator defined on a dendritic network into bases of irreducible representations of the symmetry group. The achievement of this paper makes it easier to utilize the LC decomposition in various cases. This may lead to a further understanding of the relation between structure and functions of dendrimers in future studies.
NASA Technical Reports Server (NTRS)
Pototzky, Anthony S; Murphy, Patrick C.
2014-01-01
Improving aerodynamic models for adverse loss-of-control conditions in flight is an area being researched under the NASA Aviation Safety Program. Aerodynamic models appropriate for loss-of-control conditions require a more general mathematical representation to predict nonlinear unsteady behaviors. As more general aerodynamic models that include nonlinear higher-order effects are studied, measurements that confound aerodynamic and structural responses become increasingly probable. In this study an initial step is taken toward including structural flexibility in the analysis of rigid-body forced-oscillation testing, accounting for dynamic rig, sting, and balance flexibility. Because of the significant testing required and the associated costs of a general study, it makes sense to capitalize on low-cost analytical methods where possible, especially where they can account for structural flexibility. This paper provides an initial look at using linear lifting surface theory applied to rigid-body aircraft roll forced-oscillation tests.
Neural networks for feedback feedforward nonlinear control systems.
Parisini, T; Zoppoli, R
1994-01-01
This paper deals with the problem of designing feedback feedforward control strategies to drive the state of a dynamic system (in general, nonlinear) so as to track any desired trajectory joining the points of given compact sets, while minimizing a certain cost function (in general, nonquadratic). Due to the generality of the problem, conventional methods are difficult to apply. Thus, an approximate solution is sought by constraining control strategies to take on the structure of multilayer feedforward neural networks. After discussing the approximation properties of neural control strategies, a particular neural architecture is presented, which is based on what has been called the "linear-structure preserving principle". The original functional problem is then reduced to a nonlinear programming one, and backpropagation is applied to derive the optimal values of the synaptic weights. Recursive equations to compute the gradient components are presented, which generalize the classical adjoint system equations of N-stage optimal control theory. Simulation results related to nonlinear nonquadratic problems show the effectiveness of the proposed method.
NASA Technical Reports Server (NTRS)
Pilkey, W. D.; Chen, Y. H.
1974-01-01
An indirect synthesis method is used in the efficient optimal design of multi-degree of freedom, multi-design element, nonlinear, transient systems. A limiting performance analysis which requires linear programming for a kinematically linear system is presented. The system is selected using system identification methods such that the designed system responds as closely as possible to the limiting performance. The efficiency is a result of the method avoiding the repetitive systems analyses accompanying other numerical optimization methods.
Learning oncogenetic networks by reducing to mixed integer linear programming.
Shahrabi Farahani, Hossein; Lagergren, Jens
2013-01-01
Cancer can be a result of accumulation of different types of genetic mutations such as copy number aberrations. The data from tumors are cross-sectional and do not contain the temporal order of the genetic events. Finding the order in which the genetic events have occurred and the progression pathways is of vital importance in understanding the disease. In order to model cancer progression, we propose Progression Networks, a special case of Bayesian networks, that are tailored to model disease progression. Progression networks have similarities with Conjunctive Bayesian Networks (CBNs) [1], a variation of Bayesian networks also proposed for modeling disease progression. We also describe a learning algorithm for learning Bayesian networks in general and progression networks in particular. We reduce the hard problem of learning the Bayesian and progression networks to Mixed Integer Linear Programming (MILP). MILP is a Non-deterministic Polynomial-time complete (NP-complete) problem for which very good heuristics exist. We tested our algorithm on synthetic and real cytogenetic data from renal cell carcinoma. We also compared our learned progression networks with the networks proposed in earlier publications. The software is available on the website https://bitbucket.org/farahani/diprog.
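The flavor of such a reduction can be seen in the toy 0-1 program below, which selects parent edges to maximize a score subject to a bound on the number of parents per node, solved with SciPy's MILP interface. The scores, the in-degree limit, and the absence of acyclicity constraints are all simplifications for illustration; the paper's actual encoding of progression networks is considerably richer.

    # Hedged sketch: a toy structure-learning MILP (invented scores, no
    # acyclicity constraints), solved with scipy.optimize.milp.
    import numpy as np
    from scipy.optimize import milp, LinearConstraint, Bounds

    n = 4
    rng = np.random.default_rng(0)
    score = rng.uniform(-1.0, 1.0, size=(n, n))      # hypothetical benefit of edge u -> v

    c = -score.flatten()                             # milp minimizes, so negate the scores
    ub = np.ones(n * n)
    ub[[u * n + u for u in range(n)]] = 0.0          # forbid self-loops

    A = np.zeros((n, n * n))
    for v in range(n):
        for u in range(n):
            A[v, u * n + v] = 1.0                    # counts the in-degree of node v
    indegree = LinearConstraint(A, lb=0, ub=2)       # at most 2 parents per node

    res = milp(c, constraints=indegree, integrality=np.ones(n * n),
               bounds=Bounds(np.zeros(n * n), ub))
    edges = [(u, v) for u in range(n) for v in range(n) if res.x[u * n + v] > 0.5]
    print("selected edges:", edges)
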
Earthquake mechanisms from linear-programming inversion of seismic-wave amplitude ratios
Julian, B.R.; Foulger, G.R.
1996-01-01
The amplitudes of radiated seismic waves contain far more information about earthquake source mechanisms than do first-motion polarities, but amplitudes are severely distorted by the effects of heterogeneity in the Earth. This distortion can be reduced greatly by using the ratios of amplitudes of appropriately chosen seismic phases, rather than simple amplitudes, but existing methods for inverting amplitude ratios are severely nonlinear and require computationally intensive searching methods to ensure that solutions are globally optimal. Searching methods are particularly costly if general (moment tensor) mechanisms are allowed. Efficient linear-programming methods, which do not suffer from these problems, have previously been applied to inverting polarities and wave amplitudes. We extend these methods to amplitude ratios, in which the formulation of an inequality constraint for an amplitude ratio takes the same mathematical form as that for a polarity observation. Three-component digital data for an earthquake at the Hengill-Grensdalur geothermal area in southwestern Iceland illustrate the power of the method. Polarities of P, SH, and SV waves, unusually well distributed on the focal sphere, cannot distinguish between diverse mechanisms, including a double couple. Amplitude ratios, on the other hand, clearly rule out the double-couple solution and require a large explosive isotropic component.
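A minimal sketch of the underlying idea, assuming each polarity or amplitude-ratio observation can be written as a linear inequality on the six moment-tensor components, is given below: an LP searches for a mechanism satisfying all inequalities with maximum margin. The constraint rows are synthetic (generated from a hypothetical true mechanism), and the margin-maximization objective is illustrative rather than the authors' exact formulation.

    # Hedged sketch: screening moment-tensor mechanisms against linear
    # inequality constraints a_k . m >= 0 with a single linear program.
    import numpy as np
    from scipy.optimize import linprog

    rng = np.random.default_rng(1)
    m_true = rng.normal(size=6)                     # hypothetical "true" mechanism
    A_obs = rng.normal(size=(12, 6))                # one row per synthetic observation
    A_obs *= np.sign(A_obs @ m_true)[:, None]       # flip signs so the constraints are consistent

    # variables z = [m_1..m_6, s]; maximize s subject to a_k . m >= s and |m_i| <= 1
    c = np.zeros(7); c[-1] = -1.0
    A_ub = np.hstack([-A_obs, np.ones((A_obs.shape[0], 1))])   # -a_k . m + s <= 0
    b_ub = np.zeros(A_obs.shape[0])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(-1, 1)] * 6 + [(0, None)], method="highs")
    print("margin:", round(res.x[-1], 4), "mechanism (up to scale):", np.round(res.x[:6], 3))
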
Application of Semi-Definite Programming for Many-Fermion Systems
NASA Astrophysics Data System (ADS)
Zhao, Zhengji; Braams, Bastiaan; Fukuda, Mituhiro; Overton, Michael
2003-03-01
The ground state energy and other important observables of a many-fermion system with one- and two-body interactions only can all be obtained from the first order and second order Reduced Density Matrices (RDM's) of the system. Using these density matrices and a family of associated representability conditions one may obtain an approximation method for electronic structure theory that is in the mathematical form of Semi-Definite Programming (SDP): minimize a linear matrix functional over a space of positive semidefinite matrices subject to linear constraints. The representability conditions are some known necessary conditions, starting with the well-known P, Q, and G conditions [Claude Garrod and Jerome K. Percus, Reduction of the N-Particle Variational Problem, J. Math. Phys. 5 (1964) 1756-1776]. The RDM method with SDP has great potential advantages over the wave function method when the particle number N is large. The dimension of the full configuration space increases exponentially with N, but in the RDM method with SDP the dimension of the objective matrix (which includes RDM's) increases only polynomially with N. We will report on the effect of adding the generalized three-index conditions proposed in [R. M. Erdahl, Representability, Int. J. Quantum Chem. 13 (1978) 697-718].
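The generic shape of such a semidefinite program, a linear functional of a positive semidefinite matrix minimized under linear constraints, can be written down in a few lines with CVXPY, as sketched below on a tiny random symmetric matrix. With the single constraint trace(X) = 1 the optimum equals the smallest eigenvalue of C, which provides an easy correctness check; the actual RDM programs add the P, Q, G (and three-index) representability conditions and operate on far larger blocks.

    # Hedged sketch: a minimal SDP of the stated form, solved with CVXPY.
    import numpy as np
    import cvxpy as cp

    rng = np.random.default_rng(7)
    A = rng.normal(size=(4, 4))
    C = (A + A.T) / 2                      # random symmetric matrix standing in for the cost operator

    X = cp.Variable((4, 4), PSD=True)      # the positive semidefinite matrix variable
    prob = cp.Problem(cp.Minimize(cp.trace(C @ X)), [cp.trace(X) == 1])
    prob.solve()
    print(prob.value, np.linalg.eigvalsh(C).min())   # the two values should agree
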
ERIC Educational Resources Information Center
Nakhanu, Shikuku Beatrice; Musasia, Amadalo Maurice
2015-01-01
The topic Linear Programming is included in the compulsory Kenyan secondary school mathematics curriculum at form four. The topic provides skills for determining best outcomes in a given mathematical model involving some linear relationship. This technique has found application in business, economics as well as various engineering fields. Yet many…
Slope Estimation in Noisy Piecewise Linear Functions
Ingle, Atul; Bucklew, James; Sethares, William; Varghese, Tomy
2014-01-01
This paper discusses the development of a slope estimation algorithm called MAPSlope for piecewise linear data that is corrupted by Gaussian noise. The number and locations of slope change points (also known as breakpoints) are assumed to be unknown a priori though it is assumed that the possible range of slope values lies within known bounds. A stochastic hidden Markov model that is general enough to encompass real world sources of piecewise linear data is used to model the transitions between slope values and the problem of slope estimation is addressed using a Bayesian maximum a posteriori approach. The set of possible slope values is discretized, enabling the design of a dynamic programming algorithm for posterior density maximization. Numerical simulations are used to justify choice of a reasonable number of quantization levels and also to analyze mean squared error performance of the proposed algorithm. An alternating maximization algorithm is proposed for estimation of unknown model parameters and a convergence result for the method is provided. Finally, results using data from political science, finance and medical imaging applications are presented to demonstrate the practical utility of this procedure. PMID:25419020
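A toy version of the dynamic-programming step is sketched below: a Viterbi-style pass over a discretized set of slope states applied to the first differences of the noisy samples, with a fixed penalty for changing slope. The signal, noise level, slope grid, and switching penalty are all invented, and the paper's hidden Markov model and MAP estimator are more elaborate than this stand-in.

    # Hedged sketch: Viterbi-style MAP slope tracking on first differences.
    import numpy as np

    rng = np.random.default_rng(2)
    t = np.arange(200)
    true = np.where(t < 100, 0.5 * t, 50 - 0.8 * (t - 100))   # two linear segments
    sigma = 1.5
    z = true + rng.normal(scale=sigma, size=t.size)           # noisy samples

    slopes = np.linspace(-1.5, 1.5, 31)        # discretized slope states
    switch_penalty = 3.0                       # cost charged whenever the slope state changes
    d = np.diff(z)                             # first differences ~ slope + noise
    sigma_d = sigma * np.sqrt(2.0)
    emis = (d[:, None] - slopes[None, :]) ** 2 / (2 * sigma_d ** 2)   # -log Gaussian, up to a constant

    n_states = slopes.size
    cost = emis[0].copy()
    back = np.zeros((d.size, n_states), dtype=int)
    for k in range(1, d.size):
        trans = cost[:, None] + switch_penalty * (np.arange(n_states)[:, None]
                                                  != np.arange(n_states)[None, :])
        back[k] = np.argmin(trans, axis=0)
        cost = trans[back[k], np.arange(n_states)] + emis[k]

    path = [int(np.argmin(cost))]
    for k in range(d.size - 1, 0, -1):
        path.append(back[k][path[-1]])
    slope_hat = slopes[np.array(path[::-1])]
    print("largest slope change detected near sample",
          int(np.argmax(np.abs(np.diff(slope_hat)))) + 1)
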
NASA Technical Reports Server (NTRS)
Silva, Walter A.
1993-01-01
A methodology for modeling nonlinear unsteady aerodynamic responses, for subsequent use in aeroservoelastic analysis and design, using the Volterra-Wiener theory of nonlinear systems is presented. The methodology is extended to predict nonlinear unsteady aerodynamic responses of arbitrary frequency. The Volterra-Wiener theory uses multidimensional convolution integrals to predict the response of nonlinear systems to arbitrary inputs. The CAP-TSD (Computational Aeroelasticity Program - Transonic Small Disturbance) code is used to generate linear and nonlinear unit impulse responses that correspond to each of the integrals for a rectangular wing with a NACA 0012 section with pitch and plunge degrees of freedom. The computed kernels then are used to predict linear and nonlinear unsteady aerodynamic responses via convolution and compared to responses obtained using the CAP-TSD code directly. The results indicate that the approach can be used to predict linear unsteady aerodynamic responses exactly for any input amplitude or frequency at a significant cost savings. Convolution of the nonlinear terms results in nonlinear unsteady aerodynamic responses that compare reasonably well with those computed using the CAP-TSD code directly but at significant computational cost savings.
SYSTEMS ANALYSIS, * WATER SUPPLIES, MATHEMATICAL MODELS, OPTIMIZATION, ECONOMICS, LINEAR PROGRAMMING, HYDROLOGY, REGIONS, ALLOCATIONS, RESTRAINT, RIVERS, EVAPORATION, LAKES, UTAH, SALVAGE, MINES(EXCAVATIONS).
BIODEGRADATION PROBABILITY PROGRAM (BIODEG)
The Biodegradation Probability Program (BIODEG) calculates the probability that a chemical under aerobic conditions with mixed cultures of microorganisms will biodegrade rapidly or slowly. It uses fragment constants developed using multiple linear and non-linear regressions and d...
The Use of Linear Programming for Prediction.
ERIC Educational Resources Information Center
Schnittjer, Carl J.
The purpose of the study was to develop a linear programming model to be used for prediction, test the accuracy of the predictions, and compare the accuracy with that produced by curvilinear multiple regression analysis. (Author)
Lederman, Regina P; Chan, Wenyaw; Roberts-Gray, Cynthia
2004-01-01
In this study, the authors compared differences in sexual risk attitudes and intentions for three groups of youth (experimental program, n = 90; attention control, n = 80; and nonparticipant control, n = 634) aged 12-14 years. Two student groups participated with their parents in programs focused on strengthening family interaction and prevention of sexual risks, HIV, and adolescent pregnancy. Surveys assessed students' attitudes and intentions regarding early sexual and other health-risk behaviors, family interactions, and perceived parental disapproval of risk behaviors. The authors used general linear modeling to compare results. The experimental prevention program differentiated the total scores of the 3 groups (p < .05). A similar result was obtained for student intentions to avoid sex (p < .01). Pairwise comparisons showed the experimental program group scored higher than the nonparticipant group on total scores (p < .01) and on students' intention to avoid sex (p < .01). The results suggest this novel educational program involving both parents and students offers a promising approach to HIV and teen pregnancy prevention.
Shek, Daniel T L; Ma, Cecilia M S
2011-02-03
The Tier 1 Program of the Project P.A.T.H.S. (Positive Adolescent Training through Holistic Social Programs) is a positive youth development program implemented in school settings utilizing a curricular-based approach. In the third year of the Full Implementation Phase, 19 experimental schools (n = 3,006 students) and 24 control schools (n = 3,727 students) participated in a randomized group trial. Analyses based on linear mixed models via SPSS showed that participants in the experimental schools displayed better positive youth development than did participants in the control schools based on different indicators derived from the Chinese Positive Youth Development Scale, including positive self-identity, prosocial behavior, and general positive youth development attributes. Differences between experimental and control participants were also found when students who joined the Tier 1 Program and perceived the program to be beneficial were employed as participants of the experimental schools. The present findings strongly suggest that the Project P.A.T.H.S. is making an important positive impact for junior secondary school students in Hong Kong.
SOCR Analyses - an Instructional Java Web-based Statistical Analysis Toolkit.
Chu, Annie; Cui, Jenny; Dinov, Ivo D
2009-03-01
The Statistical Online Computational Resource (SOCR) designs web-based tools for educational use in a variety of undergraduate courses (Dinov 2006). Several studies have demonstrated that these resources significantly improve students' motivation and learning experiences (Dinov et al. 2008). SOCR Analyses is a new component that concentrates on data modeling and analysis using parametric and non-parametric techniques supported with graphical model diagnostics. Currently implemented analyses include commonly used models in undergraduate statistics courses like linear models (Simple Linear Regression, Multiple Linear Regression, One-Way and Two-Way ANOVA). In addition, we implemented tests for sample comparisons, such as the t-test in the parametric category, and the Wilcoxon rank sum test, Kruskal-Wallis test, and Friedman's test in the non-parametric category. SOCR Analyses also includes several hypothesis test models, such as contingency tables, Friedman's test, and Fisher's exact test. The code itself is open source (http://socr.googlecode.com/), hoping to contribute to the efforts of the statistical computing community. The code includes functionality for each specific analysis model and it has general utilities that can be applied in various statistical computing tasks. For example, concrete methods with API (Application Programming Interface) have been implemented in statistical summary, least square solutions of general linear models, rank calculations, etc. HTML interfaces, tutorials, source code, activities, and data are freely available via the web (www.SOCR.ucla.edu). Code examples for developers and demos for educators are provided on the SOCR Wiki website. In this article, the pedagogical utilization of the SOCR Analyses is discussed, as well as the underlying design framework. As the SOCR project is on-going and more functions and tools are being added to it, these resources are constantly improved. The reader is strongly encouraged to check the SOCR site for the most updated information and newly added models.
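For reference, the computation behind a simple linear regression of the kind SOCR Analyses exposes reduces to an ordinary least-squares solve; the short NumPy sketch below shows that calculation on made-up data (the toolkit itself is Java, so this is only the underlying arithmetic).

    # Hedged sketch: simple linear regression via ordinary least squares.
    import numpy as np

    rng = np.random.default_rng(3)
    x = np.linspace(0, 10, 50)
    y = 2.0 + 0.7 * x + rng.normal(scale=0.5, size=x.size)    # invented data

    X = np.column_stack([np.ones_like(x), x])                 # design matrix [1, x]
    beta, rss, rank, sv = np.linalg.lstsq(X, y, rcond=None)   # least-squares coefficients
    y_hat = X @ beta
    r2 = 1 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
    print(f"intercept = {beta[0]:.3f}, slope = {beta[1]:.3f}, R^2 = {r2:.3f}")
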
Synthesizing Dynamic Programming Algorithms from Linear Temporal Logic Formulae
NASA Technical Reports Server (NTRS)
Rosu, Grigore; Havelund, Klaus
2001-01-01
The problem of testing a linear temporal logic (LTL) formula on a finite execution trace of events, generated by an executing program, occurs naturally in runtime analysis of software. We present an algorithm which takes an LTL formula and generates an efficient dynamic programming algorithm. The generated algorithm tests whether the LTL formula is satisfied by a finite trace of events given as input. The generated algorithm runs in linear time, its constant depending on the size of the LTL formula. The memory needed is constant, also depending on the size of the formula.
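The idea can be seen in miniature by hand-writing the backward pass for one fixed formula, G(p -> F q), over a finite trace, as sketched below. The algorithms described in the abstract are generated automatically from an arbitrary LTL formula, whereas this sketch fixes the formula and simply carries the subformula truth values of the next position backward, using constant memory in the trace length.

    # Hedged sketch: checking G(p -> F q) on a finite trace with one backward pass.
    def holds_G_p_implies_F_q(trace):
        """trace is a list of event dicts, e.g. {'p': True, 'q': False}."""
        f_q_next = False   # truth of F q at position i+1
        g_next = True      # truth of G(p -> F q) at position i+1 (vacuously true past the end)
        for ev in reversed(trace):
            f_q_here = ev.get('q', False) or f_q_next
            g_here = ((not ev.get('p', False)) or f_q_here) and g_next
            f_q_next, g_next = f_q_here, g_here
        return g_next

    print(holds_G_p_implies_F_q([{'p': True}, {}, {'q': True}, {'p': True}, {'q': True}]))  # True
    print(holds_G_p_implies_F_q([{'p': True}, {}]))  # False: the p is never followed by q
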
On the stability and instantaneous velocity of grasped frictionless objects
NASA Technical Reports Server (NTRS)
Trinkle, Jeffrey C.
1992-01-01
A quantitative test for form closure valid for any number of contact points is formulated as a linear program, the optimal objective value of which provides a measure of how far a grasp is from losing form closure. Another contribution of the study is the formulation of a linear program whose solution yields the same information as the classical approach. The benefit of the formulation is that explicit testing of all possible combinations of contact interactions can be avoided by the algorithm used to solve the linear program.
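A sketch in the spirit of that formulation is given below: with the contact wrenches collected as the columns of a matrix W, a linear program maximizes a margin d under W lambda = 0, lambda_i >= d, and a normalization on the sum of the lambda_i, so that d* > 0 indicates form closure and its size indicates how far the grasp is from losing it. The planar, frictionless contact wrenches used here are invented, and the exact normalization in the paper may differ.

    # Hedged sketch: a quantitative form-closure test as a linear program.
    import numpy as np
    from scipy.optimize import linprog

    # hypothetical planar wrenches [fx, fy, torque] of four frictionless contacts
    W = np.array([[ 1.0, -1.0,  0.0,  0.0],
                  [ 0.0,  0.0,  1.0, -1.0],
                  [ 0.2,  0.2, -0.3, -0.3]])
    nc = W.shape[1]

    # variables z = [lambda_1..lambda_nc, d]; maximize d
    c = np.zeros(nc + 1); c[-1] = -1.0
    A_eq = np.hstack([W, np.zeros((W.shape[0], 1))]); b_eq = np.zeros(W.shape[0])
    A_ub = np.hstack([-np.eye(nc), np.ones((nc, 1))])               # d <= lambda_i
    A_ub = np.vstack([A_ub, np.concatenate([np.ones(nc), [0.0]])])  # sum lambda <= nc (normalization)
    b_ub = np.concatenate([np.zeros(nc), [float(nc)]])

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * (nc + 1), method="highs")
    print("form-closure margin d* =", round(res.x[-1], 3))   # positive here, so form closure holds
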
A novel recurrent neural network with finite-time convergence for linear programming.
Liu, Qingshan; Cao, Jinde; Chen, Guanrong
2010-11-01
In this letter, a novel recurrent neural network based on the gradient method is proposed for solving linear programming problems. Finite-time convergence of the proposed neural network is proved by using the Lyapunov method. Compared with the existing neural networks for linear programming, the proposed neural network is globally convergent to exact optimal solutions in finite time, which is remarkable and rare in the literature of neural networks for optimization. Some numerical examples are given to show the effectiveness and excellent performance of the new recurrent neural network.
General Rotorcraft Aeromechanical Stability Program (GRASP): Theory manual
NASA Technical Reports Server (NTRS)
Hodges, Dewey H.; Hopkins, A. Stewart; Kunz, Donald L.; Hinnant, Howard E.
1990-01-01
The general rotorcraft aeromechanical stability program (GRASP) was developed to calculate aeroelastic stability for rotorcraft in hovering flight, vertical flight, and ground contact conditions. GRASP is described in terms of its capabilities and its philosophy of modeling. The equations of motion that govern the physical system are described, as well as the analytical approximations used to derive them. The equations include the kinematical equation, the element equations, and the constraint equations. In addition, the solution procedures used by GRASP are described. GRASP is capable of treating the nonlinear static and linearized dynamic behavior of structures represented by arbitrary collections of rigid-body and beam elements. These elements may be connected in an arbitrary fashion, and are permitted to have large relative motions. The main limitation of this analysis is that periodic coefficient effects are not treated, restricting rotorcraft flight conditions to hover, axial flight, and ground contact. Instead of following the methods employed in other rotorcraft programs, GRASP is designed to be a hybrid of the finite-element method and the multibody methods used in spacecraft analysis. GRASP differs from traditional finite-element programs by allowing multiple levels of substructure in which the substructures can move and/or rotate relative to others with no small-angle approximations. This capability facilitates the modeling of rotorcraft structures, including the rotating/nonrotating interface and the details of the blade/root kinematics for various types. GRASP differs from traditional multibody programs by considering aeroelastic effects, including inflow dynamics (simple unsteady aerodynamics) and nonlinear aerodynamic coefficients.
From master slave interferometry to complex master slave interferometry: theoretical work
NASA Astrophysics Data System (ADS)
Rivet, Sylvain; Bradu, Adrian; Maria, Michael; Feuchter, Thomas; Leick, Lasse; Podoleanu, Adrian
2018-03-01
A general theoretical framework is described to establish the advantages and drawbacks of two novel Fourier Domain Optical Coherence Tomography (OCT) methods, denoted Master/Slave Interferometry (MSI) and its extension, Complex Master/Slave Interferometry (CMSI). Instead of linearizing the digital data representing the channeled spectrum before a Fourier transform can be applied to it (as in standard OCT methods), the channeled spectrum is decomposed on a basis of local oscillations. This removes the need for the generally time-consuming linearization step before any calculation of the depth profile in the range of interest. In this model two functions, g and h, are introduced. The function g describes the modulation chirp of the channeled spectrum signal due to nonlinearities in the decoding process from wavenumber to time. The function h describes the dispersion in the interferometer. The utilization of these two functions brings two major improvements to previous implementations of the MSI method. The paper details the steps to obtain the functions g and h, and casts the CMSI in a matrix formulation that makes the method easy to implement in LabVIEW using parallel programming on multiple cores.
Demographic and obstetric factors affecting women's sexual functioning during pregnancy.
Abouzari-Gazafroodi, Kobra; Najafi, Fatemeh; Kazemnejad, Ehsan; Rahnama, Parvin; Montazeri, Ali
2015-08-19
Sexual desire and the frequency of sexual relationships during pregnancy remain challenging topics. This study aimed to assess factors that affect women's sexual functioning during pregnancy. This was a cross-sectional study carried out at prenatal care clinics of public health services in Iran. An author-designed structured questionnaire including items on socio-demographic characteristics, obstetric history, the current pregnancy, and women's sexual functioning during pregnancy was used to collect data. A generalized linear model was used to identify factors that affect women's sexual functioning during pregnancy. In all, 518 pregnant women participated in the study. The mean age of participants was 26.4 years (SD = 4.7). Overall, 309 women (59.7%) scored less than the mean on sexual functioning. The results obtained from the generalized linear model demonstrated that lower education, unwanted pregnancy, earlier stage of pregnancy, older age, and longer duration of marriage were the most important factors contributing to disturbed sexual functioning among couples. The findings suggest that sexual function during pregnancy might be disturbed by several factors. Indeed, issues concerning sexual relationships should be included as part of prenatal care and reproductive health programs for every woman.
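For readers who want to see what such an analysis looks like in code, the sketch below fits a generalized linear model with statsmodels on a fabricated data frame; the variable names, distributions, and coefficients are stand-ins and are not taken from the study.

    # Hedged sketch: a generalized linear model fit on invented data.
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(4)
    n = 200
    df = pd.DataFrame({
        "age": rng.integers(18, 40, n),
        "education_years": rng.integers(6, 18, n),
        "gestation_week": rng.integers(6, 40, n),
    })
    # an invented continuous "functioning score" driven by the invented predictors
    df["score"] = (20 - 0.2 * df["age"] + 0.4 * df["education_years"]
                   - 0.1 * df["gestation_week"] + rng.normal(0, 2, n))

    model = smf.glm("score ~ age + education_years + gestation_week",
                    data=df, family=sm.families.Gaussian()).fit()
    print(model.summary())
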
Three-dimensional vibration analysis of a uniform beam with offset inertial masses at the ends
NASA Technical Reports Server (NTRS)
Robertson, D. K.
1985-01-01
Analysis of a flexible beam with displaced end-located inertial masses is presented. The resulting three-dimensional mode shape is shown to consist of two one-plane bending modes and one torsional mode. These three components of the mode shapes are shown to be linear combinations of trigonometric and hyperbolic sine and cosine functions. Boundary conditions are derived to obtain nonlinear algebraic equations through kinematic coupling of the general solutions of the three governing partial differential equations. A method of solution which takes these boundary conditions into account is also presented. A computer program has been written to obtain unique solutions to the resulting nonlinear algebraic equations. This program, which calculates natural frequencies and three-dimensional mode shapes for any number of modes, is presented and discussed.
Large-scale linear programs in planning and prediction.
DOT National Transportation Integrated Search
2017-06-01
Large-scale linear programs are at the core of many traffic-related optimization problems in both planning and prediction. Moreover, many of these involve significant uncertainty, and hence are modeled using either chance constraints, or robust optim...
Womack, James C; Mardirossian, Narbe; Head-Gordon, Martin; Skylaris, Chris-Kriton
2016-11-28
Accurate and computationally efficient exchange-correlation functionals are critical to the successful application of linear-scaling density functional theory (DFT). Local and semi-local functionals of the density are naturally compatible with linear-scaling approaches, having a general form which assumes the locality of electronic interactions and which can be efficiently evaluated by numerical quadrature. Presently, the most sophisticated and flexible semi-local functionals are members of the meta-generalized-gradient approximation (meta-GGA) family, and depend upon the kinetic energy density, τ, in addition to the charge density and its gradient. In order to extend the theoretical and computational advantages of τ-dependent meta-GGA functionals to large-scale DFT calculations on thousands of atoms, we have implemented support for τ-dependent meta-GGA functionals in the ONETEP program. In this paper we lay out the theoretical innovations necessary to implement τ-dependent meta-GGA functionals within ONETEP's linear-scaling formalism. We present expressions for the gradient of the τ-dependent exchange-correlation energy, necessary for direct energy minimization. We also derive the forms of the τ-dependent exchange-correlation potential and kinetic energy density in terms of the strictly localized, self-consistently optimized orbitals used by ONETEP. To validate the numerical accuracy of our self-consistent meta-GGA implementation, we performed calculations using the B97M-V and PKZB meta-GGAs on a variety of small molecules. Using only a minimal basis set of self-consistently optimized local orbitals, we obtain energies in excellent agreement with large basis set calculations performed using other codes. Finally, to establish the linear-scaling computational cost and applicability of our approach to large-scale calculations, we present the outcome of self-consistent meta-GGA calculations on amyloid fibrils of increasing size, up to tens of thousands of atoms.
Hospital costs associated with smoking in veterans undergoing general surgery.
Kamath, Aparna S; Vaughan Sarrazin, Mary; Vander Weg, Mark W; Cai, Xueya; Cullen, Joseph; Katz, David A
2012-06-01
Approximately 30% of patients undergoing elective general surgery smoke cigarettes. The association between smoking status and hospital costs in general surgery patients is unknown. The objectives of this study were to compare total inpatient costs in current smokers, former smokers, and never smokers undergoing general surgical procedures in Veterans Affairs (VA) hospitals; and to determine whether the relationship between smoking and cost is mediated by postoperative complications. Patients undergoing general surgery during the period of October 1, 2005 to September 30, 2006 were identified in the VA Surgical Quality Improvement Program (VASQIP) data set. Inpatient costs were extracted from the VA Decision Support System (DSS). Relative surgical costs (incurred during index hospitalization and within 30 days of operation) for current and former smokers relative to never smokers, and possible mediators of the association between smoking status and cost were estimated using generalized linear regression models. Models were adjusted for preoperative and operative variables, accounting for clustering of costs at the hospital level. Of the 14,853 general surgical patients, 34% were current smokers, 39% were former smokers, and 27% were never smokers. After controlling for patient covariates, current smokers had significantly higher costs compared with never smokers: relative cost was 1.04 (95% CI 1.00 to 1.07; p = 0.04); relative costs for former smokers did not differ significantly from those of never smokers: 1.02 (95% CI 0.99 to 1.06; p = 0.14). The relationship between smoking and hospital costs for current smokers was partially mediated by postoperative respiratory complications. These findings complement emerging evidence recommending effective smoking cessation programs in general surgical patients and provide an estimate of the potential savings that could be accrued during the preoperative period. Published by Elsevier Inc.
Fuzzy linear model for production optimization of mining systems with multiple entities
NASA Astrophysics Data System (ADS)
Vujic, Slobodan; Benovic, Tomo; Miljanovic, Igor; Hudej, Marjan; Milutinovic, Aleksandar; Pavlovic, Petar
2011-12-01
Planning and production optimization within mining systems comprising multiple mines or several work sites (entities) by using fuzzy linear programming (LP) was studied. LP is one of the most commonly used operations research methods in mining engineering. After an introductory review of the properties and limitations of applying LP, short reviews of the general settings of deterministic and fuzzy LP models are presented. For the purpose of comparative analysis, the application of both LP models is presented using the example of the Bauxite Basin Niksic with five mines. The assessment shows that LP is an efficient mathematical modeling tool in production planning and in solving many other single-criteria optimization problems of mining engineering. After comparing the advantages and deficiencies of the deterministic and fuzzy LP models, the conclusion presents the benefits of the fuzzy LP model while also noting that finding an optimal production plan requires an overall analysis that encompasses both LP modeling approaches.
The Use of Efficient Broadcast Protocols in Asynchronous Distributed Systems. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Schmuck, Frank Bernhard
1988-01-01
Reliable broadcast protocols are important tools in distributed and fault-tolerant programming. They are useful for sharing information and for maintaining replicated data in a distributed system. However, a wide range of such protocols has been proposed. These protocols differ in their fault tolerance and delivery ordering characteristics. There is a tradeoff between the cost of a broadcast protocol and how much ordering it provides. It is, therefore, desirable to employ protocols that support only a low degree of ordering whenever possible. This dissertation presents techniques for deciding how strongly ordered a protocol is necessary to solve a given application problem. It is shown that there are two distinct classes of application problems: problems that can be solved with efficient, asynchronous protocols, and problems that require global ordering. The concept of a linearization function that maps partially ordered sets of events to totally ordered histories is introduced. It is shown how to construct an asynchronous implementation that solves a given problem when a linearization function for it can be found. It is proved that in general the question of whether a problem has an asynchronous solution is undecidable. Hence there exists no general algorithm that would automatically construct a suitable linearization function for a given problem. Therefore, an important subclass of problems that have certain commutativity properties are considered. Techniques for constructing asynchronous implementations for this class are presented. These techniques are useful for constructing efficient asynchronous implementations for a broad range of practical problems.
Planning Student Flow with Linear Programming: A Tunisian Case Study.
ERIC Educational Resources Information Center
Bezeau, Lawrence
A student flow model in linear programming format, designed to plan the movement of students into secondary and university programs in Tunisia, is described. The purpose of the plan is to determine a sufficient number of graduating students that would flow back into the system as teachers or move into the labor market to meet fixed manpower…
Molenaar, Dylan; Tuerlinckx, Francis; van der Maas, Han L J
2015-01-01
A generalized linear modeling framework to the analysis of responses and response times is outlined. In this framework, referred to as bivariate generalized linear item response theory (B-GLIRT), separate generalized linear measurement models are specified for the responses and the response times that are subsequently linked by cross-relations. The cross-relations can take various forms. Here, we focus on cross-relations with a linear or interaction term for ability tests, and cross-relations with a curvilinear term for personality tests. In addition, we discuss how popular existing models from the psychometric literature are special cases in the B-GLIRT framework depending on restrictions in the cross-relation. This allows us to compare existing models conceptually and empirically. We discuss various extensions of the traditional models motivated by practical problems. We also illustrate the applicability of our approach using various real data examples, including data on personality and cognitive ability.
Linear decomposition approach for a class of nonconvex programming problems.
Shen, Peiping; Wang, Chunfeng
2017-01-01
This paper presents a linear decomposition approach for a class of nonconvex programming problems by dividing the input space into polynomially many grids. It shows that under certain assumptions the original problem can be transformed and decomposed into a polynomial number of equivalent linear programming subproblems. Based on solving a series of linear programming subproblems corresponding to those grid points, we can obtain a near-optimal solution of the original problem. Compared to existing results in the literature, the proposed algorithm does not require the assumptions of quasi-concavity and differentiability of the objective function, and it differs significantly from them, giving an interesting approach to solving the problem with a reduced running time.
Full three-body problem in effective-field-theory models of gravity
NASA Astrophysics Data System (ADS)
Battista, Emmanuele; Esposito, Giampiero
2014-10-01
Recent work in the literature has studied the restricted three-body problem within the framework of effective-field-theory models of gravity. This paper extends such a program by considering the full three-body problem, when the Newtonian potential is replaced by a more general central potential which depends on the mutual separations of the three bodies. The general form of the equations of motion is written down, and they are studied when the interaction potential reduces to the quantum-corrected central potential considered recently in the literature. A recursive algorithm is found for solving the associated variational equations, which describe small departures from given periodic solutions of the equations of motion. Our scheme involves repeated application of a 2×2 matrix of first-order linear differential operators.
Parlesak, Alexandr; Geelhoed, Diederike; Robertson, Aileen
2014-06-01
Chronic undernutrition is prevalent in Mozambique, where children suffer from stunting, vitamin A deficiency, anemia, and other nutrition-related disorders. Complete diet formulation products (CDFPs) are increasingly promoted to prevent chronic undernutrition. The aim was to use linear programming to investigate whether diet diversification using local foods should be prioritized in order to reduce the prevalence of chronic undernutrition. Market prices of local foods were collected in Tete City, Mozambique. Linear programming was applied to calculate the cheapest possible fully nutritious food baskets (FNFB) by stepwise addition of micronutrient-dense local foods. Only the top quintile of Mozambican households, using average expenditure data, could afford the FNFB that was designed using linear programming from a spectrum of local standard foods. The addition of beef heart or liver, dried fish, and fresh moringa leaves before applying linear programming decreased the price by a factor of up to 2.6. As a result, the top three quintiles could afford the FNFB optimized using both the diversification strategy and linear programming. CDFPs, when added to the baskets, were unable to overcome the micronutrient gaps without greatly exceeding recommended energy intakes, due to their high ratio of energy to micronutrient density. Dietary diversification strategies using local, low-cost, nutrient-dense foods can meet all micronutrient recommendations and overcome all micronutrient gaps. The success of linear programming in identifying a low-cost FNFB depends entirely on the investigators' ability to select appropriate micronutrient-dense foods. CDFPs added to food baskets are unable to overcome micronutrient gaps without greatly exceeding recommended energy intake.
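The core calculation is the classic least-cost diet linear program, sketched below with SciPy; the foods, prices, nutrient contents, and requirements are invented placeholders rather than the Tete City market data or the study's full nutrient set.

    # Hedged sketch: a least-cost "fully nutritious" basket as a linear program.
    import numpy as np
    from scipy.optimize import linprog

    foods = ["maize meal", "beans", "dried fish", "moringa leaves", "oil"]
    cost = np.array([0.30, 0.80, 2.50, 0.60, 1.20])       # price per 100 g (invented)

    # rows: energy (kcal), protein (g), iron (mg), vitamin A (ug RE) per 100 g (invented)
    nutrients = np.array([[360, 340, 300,  60, 880],
                          [  9,  22,  60,   9,   0],
                          [  2,   5,   8,   4,   0],
                          [  0,   1,  50, 750,   0]], dtype=float)
    requirement = np.array([2100, 50, 18, 600], dtype=float)   # invented daily targets

    # minimize cost . x  subject to  nutrients . x >= requirement, x >= 0
    res = linprog(cost, A_ub=-nutrients, b_ub=-requirement,
                  bounds=[(0, None)] * len(foods), method="highs")
    for food, amount in zip(foods, res.x):
        print(f"{food}: {100 * amount:.0f} g")
    print(f"daily basket cost: {res.fun:.2f}")
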
Improved Equivalent Linearization Implementations Using Nonlinear Stiffness Evaluation
NASA Technical Reports Server (NTRS)
Rizzi, Stephen A.; Muravyov, Alexander A.
2001-01-01
This report documents two new implementations of equivalent linearization for solving geometrically nonlinear random vibration problems of complicated structures. The implementations are given the acronym ELSTEP, for "Equivalent Linearization using a STiffness Evaluation Procedure." Both implementations of ELSTEP are fundamentally the same in that they use a novel nonlinear stiffness evaluation procedure to numerically compute otherwise inaccessible nonlinear stiffness terms from commercial finite element programs. The commercial finite element program MSC/NASTRAN (NASTRAN) was chosen as the core of ELSTEP. The FORTRAN implementation calculates the nonlinear stiffness terms and performs the equivalent linearization analysis outside of NASTRAN. The Direct Matrix Abstraction Program (DMAP) implementation performs these operations within NASTRAN. Both provide nearly identical results. Within each implementation, two error minimization approaches for the equivalent linearization procedure are available - force and strain energy error minimization. Sample results for a simply supported rectangular plate are included to illustrate the analysis procedure.
Linear combination reading program for capture gamma rays
Tanner, Allan B.
1971-01-01
This program computes a weighting function, Qj, which gives a scalar output value of unity when applied to the spectrum of a desired element and a minimum value (considering statistics) when applied to spectra of materials not containing the desired element. Intermediate values are obtained for materials containing the desired element, in proportion to the amount of the element they contain. The program is written in the BASIC language in a format specific to the Hewlett-Packard 2000A Time-Sharing System, and is an adaptation of an earlier program for linear combination reading for X-ray fluorescence analysis (Tanner and Brinkerhoff, 1971). Following the program is a sample run from a study of the application of the linear combination technique to capture-gamma-ray analysis for calcium (report in preparation).
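One simple way to compute a weighting vector with the stated property, unit response to the desired element's spectrum and near-zero response to interfering spectra, is a minimum-norm least-squares solve, sketched below on synthetic Gaussian peaks. This ignores the counting-statistics weighting mentioned in the abstract, and the original BASIC program's numerical scheme may differ.

    # Hedged sketch: a linear-combination weighting vector Q from synthetic spectra.
    import numpy as np

    channels = np.arange(256)
    def peak(center, width=6.0, height=1.0):
        return height * np.exp(-0.5 * ((channels - center) / width) ** 2)

    s_target = peak(120) + peak(180, height=0.5)     # spectrum of the desired element
    s_bg1 = peak(60) + peak(140, height=0.8)         # interfering material 1
    s_bg2 = peak(90) + peak(200, height=0.6)         # interfering material 2

    # want Q . s_target = 1 and Q . s_bg = 0: stack the conditions and solve
    S = np.vstack([s_target, s_bg1, s_bg2])          # rows are spectra
    rhs = np.array([1.0, 0.0, 0.0])
    Q, *_ = np.linalg.lstsq(S, rhs, rcond=None)      # minimum-norm solution of S Q = rhs

    for name, spec in [("target", s_target), ("background 1", s_bg1), ("background 2", s_bg2)]:
        print(f"Q applied to {name}: {Q @ spec:.3f}")
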
Linear discrete systems with memory: a generalization of the Langmuir model
NASA Astrophysics Data System (ADS)
Băleanu, Dumitru; Nigmatullin, Raoul R.
2013-10-01
In this manuscript we analyze a general solution of the linear nonlocal Langmuir model within time scale calculus. Several generalizations of the Langmuir model are presented together with their exact corresponding solutions. The physical meaning of the proposed models is investigated and their corresponding geometries are reported.
Primal-dual techniques for online algorithms and mechanisms
NASA Astrophysics Data System (ADS)
Liaghat, Vahid
An offline algorithm is one that knows the entire input in advance. An online algorithm, however, processes its input in a serial fashion. In contrast to offline algorithms, an online algorithm works in a local fashion and has to make irrevocable decisions without having the entire input. Online algorithms are often not optimal since their irrevocable decisions may turn out to be inefficient after receiving the rest of the input. For a given online problem, the goal is to design algorithms which are competitive against the offline optimal solutions. In a classical offline scenario, it is common to see a dual analysis of problems that can be formulated as a linear or convex program. Primal-dual and dual-fitting techniques have been successfully applied to many such problems. Unfortunately, the usual tricks fall short in an online setting since an online algorithm should make decisions without knowing even the whole program. In this thesis, we study the competitive analysis of fundamental problems in the literature such as different variants of online matching and online Steiner connectivity, via online dual techniques. Although there are many generic tools for solving an optimization problem in the offline paradigm, in comparison, much less is known about tackling online problems. The main focus of this work is to design generic techniques for solving integral linear optimization problems where the solution space is restricted via a set of linear constraints. A general family of such problems is online packing/covering problems. Our work shows that for several seemingly unrelated problems, primal-dual techniques can be successfully applied as a unifying approach for analyzing these problems. We believe this leads to generic algorithmic frameworks for solving online problems. In the first part of the thesis, we show the effectiveness of our techniques in the stochastic settings and their applications in Bayesian mechanism design. In particular, we introduce new techniques for solving a fundamental linear optimization problem, namely, the stochastic generalized assignment problem (GAP). This packing problem generalizes various problems such as online matching, ad allocation, bin packing, etc. We furthermore show applications of such results in mechanism design by introducing Prophet Secretary, a novel Bayesian model for online auctions. In the second part of the thesis, we focus on the covering problems. We develop the framework of "Disk Painting" for a general class of network design problems that can be characterized by proper functions. This class generalizes the node-weighted and edge-weighted variants of several well-known Steiner connectivity problems. We furthermore design a generic technique for solving the prize-collecting variants of these problems when there exists a dual analysis for the non-prize-collecting counterparts. Hence, we solve the online prize-collecting variants of several network design problems for the first time. Finally, we focus on designing techniques for online problems with mixed packing/covering constraints. We initiate the study of degree-bounded graph optimization problems in the online setting by designing an online algorithm with a tight competitive ratio for the degree-bounded Steiner forest problem. We hope these techniques establish a starting point for the analysis of the important class of online degree-bounded optimization on graphs.
Evaluating forest management policies by parametric linear programing
Daniel I. Navon; Richard J. McConnen
1967-01-01
An analytical and simulation technique, parametric linear programing explores alternative conditions and devises an optimal management plan for each condition. Its application in solving policy-decision problems in the management of forest lands is illustrated in an example.
Guevara, V R
2004-02-01
A nonlinear programming optimization model was developed to maximize margin over feed cost in broiler feed formulation and is described in this paper. The model identifies the optimal feed mix that maximizes profit margin. Optimum metabolizable energy level and performance were found by using Excel Solver nonlinear programming. Data from an energy density study with broilers were fitted to quadratic equations to express weight gain, feed consumption, and the objective function income over feed cost in terms of energy density. Nutrient:energy ratio constraints were transformed into equivalent linear constraints. National Research Council nutrient requirements and feeding program were used for examining changes in variables. The nonlinear programming feed formulation method was used to illustrate the effects of changes in different variables on the optimum energy density, performance, and profitability and was compared with conventional linear programming. To demonstrate the capabilities of the model, I determined the impact of variation in prices. Prices for broiler, corn, fish meal, and soybean meal were increased and decreased by 25%. Formulations were identical in all other respects. Energy density, margin, and diet cost changed compared with conventional linear programming formulation. This study suggests that nonlinear programming can be more useful than conventional linear programming to optimize performance response to energy density in broiler feed formulation because an energy level does not need to be set.
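The structure of the optimization, quadratic response curves for gain and intake, a diet cost that rises with energy density, and a single decision variable, can be reproduced in a few lines, as sketched below with SciPy in place of Excel Solver; every coefficient and price is an invented placeholder for the paper's fitted equations.

    # Hedged sketch: maximizing margin over feed cost as a function of energy density.
    from scipy.optimize import minimize_scalar

    def gain_kg(e):            # invented quadratic response of weight gain (kg) to energy (kcal/kg)
        x = e - 2800.0
        return 1.0 + 1.0e-3 * x - 2.0e-6 * x ** 2

    def feed_kg(e):            # invented response of feed intake (kg) to energy
        x = e - 2800.0
        return 4.8 - 1.5e-3 * x

    def feed_cost_per_kg(e):   # invented diet cost, rising with energy density
        x = e - 2800.0
        return 0.18 + 1.2e-4 * x

    broiler_price = 1.10       # invented price per kg live weight

    def margin(e):
        return broiler_price * gain_kg(e) - feed_cost_per_kg(e) * feed_kg(e)

    res = minimize_scalar(lambda e: -margin(e), bounds=(2800, 3300), method="bounded")
    print(f"optimal energy density ~ {res.x:.0f} kcal/kg, margin = {margin(res.x):.3f}")
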
NASA Astrophysics Data System (ADS)
Philipp, Stephanie B.
Increasing retention of students in science, technology, engineering, or mathematics (STEM) programs of study is a priority for many colleges and universities. This study examines an undergraduate teaching assistant (UTA) program implemented in a general chemistry course for STEM majors to provide peer learning assistance to entry-level students. This study measured the content knowledge growth of UTAs compared to traditional graduate teaching assistants (GTAs) over the semester, and described the development of peer learning assistance skills of the UTAs as an outcome of semester-long training and support from both science education and STEM faculty. Impact of the UTA program on final exam grades, persistence of students to enroll in the next chemistry course required by their intended major, and STEM identity of students was estimated. The study sample comprised 284 students in 14 general chemistry recitation sections led by six UTAs and 310 students in 15 general chemistry recitation sections led by three traditional GTAs for comparison. Results suggested that both UTAs and GTAs made significant learning gains in general chemistry content knowledge, and there was no significant difference in content knowledge between UTA and GTA groups. Student evaluations, researcher observations, and chemistry faculty comments confirm UTAs were using the learning strategies discussed in the semester-long training program. UTA-led students rated their TAs significantly higher in teaching quality and student care and encouragement, which correlated with stronger STEM recognition by those students. The results of hierarchical linear model (HLM) analysis showed little variance in final exam grades explained by section-level variables; most variance was explained by student-level variables: mathematics ACT score, college GPA, and intention to enroll in the next general chemistry course. Students having higher college GPAs were helped more by having a UTA. Results from the logistic regression of the persistence outcome showed that students were three times more likely to persist to CHEM 202 if they had a UTA in CHEM 201. Other positive predictors of retention included strong college grades and strong ACT math scores. Coupled with the HLM finding that UTAs were more effective at helping students with higher college GPAs achieve higher grades, the stronger persistence of UTA-led students showed that the UTA program is an effective program for retention of introductory-level students in STEM majors.
NASA Astrophysics Data System (ADS)
Domnisoru, L.; Modiga, A.; Gasparotti, C.
2016-08-01
At the ship design stage, the first step of the hull structural assessment is the longitudinal strength analysis, with head-wave equivalent loads prescribed by the ships' classification societies' rules. This paper presents an enhancement of the longitudinal strength analysis, considering the general case of oblique quasi-static equivalent waves, based on our own non-linear iterative procedure and in-house program. The numerical approach is developed for mono-hull ships, without restrictions on the non-linearities of the 3D hull offset lines, and involves three interlinked iterative cycles on the floating, pitch-trim, and roll-trim equilibrium conditions. Besides the ship-wave equilibrium parameters, the wave-induced loads on the ship's girder are obtained. As a numerical study case we have considered a large LPG (liquefied petroleum gas) carrier. The numerical results for the large LPG carrier are compared with the statistical design values from several ships' classification societies' rules. This study makes it possible to identify the oblique-wave conditions that induce the maximum loads in the large LPG ship's girder. The numerical results point out that the non-linear iterative approach is necessary for the computation of the extreme loads induced by oblique waves, ensuring better accuracy of the large LPG ship's longitudinal strength assessment.
NASA Technical Reports Server (NTRS)
Young, Katherine C.; Sobieszczanski-Sobieski, Jaroslaw
1988-01-01
This project has two objectives. The first is to determine whether linear programming techniques can improve performance when handling design optimization problems with a large number of design variables and constraints relative to the feasible directions algorithm. The second purpose is to determine whether using the Kreisselmeier-Steinhauser (KS) function to replace the constraints with one constraint will reduce the cost of total optimization. Comparisons are made using solutions obtained with linear and non-linear methods. The results indicate that there is no cost saving using the linear method or in using the KS function to replace constraints.
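For context, the Kreisselmeier-Steinhauser function folds a vector of constraints g_i(x) <= 0 into the single smooth, conservative constraint KS(g) = (1/rho) ln(sum_i exp(rho g_i)) <= 0, which approaches max_i g_i from above as rho grows; the short sketch below just evaluates that formula on arbitrary constraint values.

    # Hedged sketch: the KS constraint-aggregation function.
    import numpy as np
    from scipy.special import logsumexp

    def ks(g, rho=50.0):
        """Smooth, conservative aggregate of the constraint vector g."""
        return logsumexp(rho * np.asarray(g)) / rho   # >= max(g), -> max(g) as rho increases

    g = np.array([-0.30, -0.05, -0.60, -0.02])        # four satisfied constraints (arbitrary values)
    print(max(g), ks(g, rho=10.0), ks(g, rho=100.0))  # KS tightens toward max(g) as rho grows
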
Adaptation of MSC/NASTRAN to a supercomputer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gloudeman, J.F.; Hodge, J.C.
1982-01-01
MSC/NASTRAN is a large-scale general purpose digital computer program which solves a wide variety of engineering analysis problems by the finite element method. The program capabilities include static and dynamic structural analysis (linear and nonlinear), heat transfer, acoustics, electromagnetism and other types of field problems. It is used worldwide by large and small companies in such diverse fields as automotive, aerospace, civil engineering, shipbuilding, offshore oil, industrial equipment, chemical engineering, biomedical research, optics and government research. The paper presents the significant aspects of the adaptation of MSC/NASTRAN to the Cray-1. First, the general architecture and predominant functional use of MSC/NASTRAN are discussed to help explain the imperatives and the challenges of this undertaking. The key characteristics of the Cray-1 which influenced the decision to undertake this effort are then reviewed to help identify performance targets. An overview of the MSC/NASTRAN adaptation effort is then given to help define the scope of the project. Finally, some measures of MSC/NASTRAN's operational performance on the Cray-1 are given, along with a few guidelines to help avoid improper interpretation. 17 references.
A General Tool for Evaluating High-Contrast Coronagraphic Telescope Performance Error Budgets
NASA Technical Reports Server (NTRS)
Marchen, Luis F.; Shaklan, Stuart B.
2009-01-01
This paper describes a general purpose Coronagraph Performance Error Budget (CPEB) tool that we have developed under the NASA Exoplanet Exploration Program. The CPEB automates many of the key steps required to evaluate the scattered starlight contrast in the dark hole of a space-based coronagraph. It operates in 3 steps: first, a CodeV or Zemax prescription is converted into a MACOS optical prescription. Second, a Matlab program calls ray-trace code that generates linear beam-walk and aberration sensitivity matrices for motions of the optical elements and line-of-sight pointing, with and without controlled coarse and fine-steering mirrors. Third, the sensitivity matrices are imported by macros into Excel 2007 where the error budget is created. Once created, the user specifies the quality of each optic from a predefined set of PSDs. The spreadsheet creates a nominal set of thermal and jitter motions and combines them with the sensitivity matrices to generate an error budget for the system. The user can easily modify the motion allocations to perform trade studies.
FPGA Coprocessor for Accelerated Classification of Images
NASA Technical Reports Server (NTRS)
Pingree, Paula J.; Scharenbroich, Lucas J.; Werne, Thomas A.
2008-01-01
An effort related to that described in the preceding article focuses on developing a spaceborne processing platform for fast and accurate onboard classification of image data, a critical part of modern satellite image processing. The approach again has been to exploit the versatility of recently developed hybrid Virtex-4FX field-programmable gate arrays (FPGAs) to run diverse science applications on embedded processors while taking advantage of the reconfigurable hardware resources of the FPGAs. In this case, the FPGA serves as a coprocessor that implements legacy C-language support-vector-machine (SVM) image-classification algorithms to detect and identify natural phenomena such as flooding, volcanic eruptions, and sea-ice break-up. The FPGA provides hardware acceleration for greater onboard processing capability than previously demonstrated in software. The original C-language program, demonstrated on an imaging instrument aboard the Earth Observing-1 (EO-1) satellite, implements a linear-kernel SVM algorithm for classifying parts of the images as snow, water, ice, land, cloud, or unclassified. Current onboard processors, such as on EO-1, have limited computing power, extremely limited active storage capability, and are no longer considered state-of-the-art. Using commercially available software that translates C-language programs into hardware description language (HDL) files, the legacy C-language program, and two newly formulated programs for a more capable expanded-linear-kernel and a more accurate polynomial-kernel SVM algorithm, have been implemented in the Virtex-4FX FPGA. In tests, the FPGA implementations have exhibited significant speedups over conventional software implementations running on general-purpose hardware.
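At inference time a linear-kernel SVM of the kind described reduces to one inner product per class, which is what makes it attractive for FPGA acceleration. A hedged sketch of one-vs-rest linear SVM inference (the weight vectors, bias terms, and pixel features below are hypothetical, not the EO-1 model):

```python
import numpy as np

def classify_pixels(X, W, b, labels):
    """One-vs-rest linear-kernel SVM inference: pick the class whose
    decision value w_c . x + b_c is largest for each pixel feature vector."""
    scores = X @ W.T + b            # shape (n_pixels, n_classes)
    return [labels[i] for i in scores.argmax(axis=1)]

# Hypothetical trained parameters for a 3-band feature vector and 3 classes.
labels = ["snow", "water", "land"]
W = np.array([[0.9, -0.1, 0.2],
              [-0.3, 0.8, -0.2],
              [0.1, 0.1, 0.7]])
b = np.array([-0.5, 0.0, 0.1])
X = np.array([[0.95, 0.10, 0.20],   # bright pixel
              [0.05, 0.90, 0.10]])  # water-like pixel
print(classify_pixels(X, W, b, labels))
```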
High Tc superconducting materials and devices
NASA Technical Reports Server (NTRS)
Haertling, Gene H.
1990-01-01
The high Tc Y1Ba2Cu3O(7-x) ceramic materials, initially developed in 1987, are now being extensively investigated for a variety of engineering applications. The superconductor applications which are presently identified as of most interest to NASA-LaRC are low-noise, low thermal conductivity grounding links; large-area linear Meissner-effect bearings; and sensitive, low-noise sensors and leads. Devices designed for these applications require the development of a number of processing and fabrication technologies. Included among the technologies most specific to the present needs are tapecasting, melt texturing, magnetic field grain alignment, superconductor/polymer composite fabrication, thin film MOD (metal-organic decomposition) processing, screen printing of thick films, and photolithography of thin films. The overall objective of the program was to establish a high Tc superconductivity laboratory capability at NASA-LaRC and demonstrate this capability by fabricating superconducting 123 material via bulk and thin film processes. Specific objectives include: order equipment and set up laboratory; prepare 1 kg batches of 123 material via oxide raw material; construct tapecaster and tapecaster 123 material; fabricate 123 grounding link; fabricate 123 composite for Meissner linear bearing; develop 123 thin film processes (nitrates, acetates); establish Tc and Jc measurement capability; and set up a commercial use of space program in superconductivity at LaRC. In general, most of the objectives of the program were met. Finally, efforts to implement a commercial use of space program in superconductivity at LaRC were completed and at least two industrial companies have indicated their interest in participating.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dritz, K.W.; Boyle, J.M.
This paper addresses the problem of measuring and analyzing the performance of fine-grained parallel programs running on shared-memory multiprocessors. Such processors use locking (either directly in the application program, or indirectly in a subroutine library or the operating system) to serialize accesses to global variables. Given sufficiently high rates of locking, the chief factor preventing linear speedup (besides lack of adequate inherent parallelism in the application) is lock contention - the blocking of processes that are trying to acquire a lock currently held by another process. We show how a high-resolution, low-overhead clock may be used to measure both lock contention and lack of parallel work. Several ways of presenting the results are covered, culminating in a method for calculating, in a single multiprocessing run, both the speedup actually achieved and the speedup lost to contention for each lock and to lack of parallel work. The speedup losses are reported in the same units, ''processor-equivalents,'' as the speedup achieved. Both are obtained without having to perform the usual one-process comparison run. We also chronicle a variety of experiments motivated by actual results obtained with our measurement method. The insights into program performance that we gained from these experiments helped us to refine the parts of our programs concerned with communication and synchronization. Ultimately these improvements reduced lock contention to a negligible amount and yielded nearly linear speedup in applications not limited by lack of parallel work. We describe two generally applicable strategies (''code motion out of critical regions'' and ''critical-region fissioning'') for reducing lock contention and one (''lock/variable fusion'') applicable only on certain architectures.
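The first strategy named above, code motion out of critical regions, simply moves work that does not touch shared state outside the lock so other threads block for less time. An illustrative Python sketch of the before/after shape of such a change (the paper's programs were not Python; the parsing function and data are assumed):

```python
import threading

lock = threading.Lock()
totals = {}

def expensive_parse(raw_record):
    # Stand-in for per-record work that touches no shared state.
    return sum(int(tok) for tok in raw_record.split())

def update_contended(key, raw_record):
    # Before code motion: parsing is done while holding the lock.
    with lock:
        value = expensive_parse(raw_record)
        totals[key] = totals.get(key, 0) + value

def update_after_code_motion(key, raw_record):
    # After code motion: only the shared-dictionary update stays critical.
    value = expensive_parse(raw_record)
    with lock:
        totals[key] = totals.get(key, 0) + value
```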
An approximate generalized linear model with random effects for informative missing data.
Follmann, D; Wu, M
1995-03-01
This paper develops a class of models to deal with missing data from longitudinal studies. We assume that separate models for the primary response and missingness (e.g., number of missed visits) are linked by a common random parameter. Such models have been developed in the econometrics (Heckman, 1979, Econometrica 47, 153-161) and biostatistics (Wu and Carroll, 1988, Biometrics 44, 175-188) literature for a Gaussian primary response. We allow the primary response, conditional on the random parameter, to follow a generalized linear model and approximate the generalized linear model by conditioning on the data that describes missingness. The resultant approximation is a mixed generalized linear model with possibly heterogeneous random effects. An example is given to illustrate the approximate approach, and simulations are performed to critique the adequacy of the approximation for repeated binary data.
NASA Technical Reports Server (NTRS)
Fleming, P.
1985-01-01
A design technique is proposed for linear regulators in which a feedback controller of fixed structure is chosen to minimize an integral quadratic objective function subject to the satisfaction of integral quadratic constraint functions. Application of a non-linear programming algorithm to this mathematically tractable formulation results in an efficient and useful computer-aided design tool. Particular attention is paid to computational efficiency and various recommendations are made. Two design examples illustrate the flexibility of the approach and highlight the special insight afforded to the designer.
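For a scalar plant dx/dt = a x + b u with fixed-structure feedback u = -k x, both the integral quadratic objective and an integral quadratic control-energy constraint have closed forms, so the design reduces to a small nonlinear program. A minimal sketch of that formulation (not the paper's algorithm; plant data, weights, and the energy budget are assumed):

```python
import numpy as np
from scipy.optimize import minimize

a, b, q, r, x0 = 1.0, 1.0, 1.0, 0.1, 1.0   # assumed plant, weights, initial state
u_energy_max = 2.0                          # assumed bound on integral of u^2

def cost(k):
    # J = integral of (q x^2 + r u^2) dt for the stable closed loop (b*k > a).
    k = k[0]
    return (q + r * k**2) * x0**2 / (2.0 * (b * k - a))

def u_energy(k):
    # Integral quadratic constraint function: integral of u^2 dt.
    k = k[0]
    return k**2 * x0**2 / (2.0 * (b * k - a))

res = minimize(cost, x0=[3.0], method="SLSQP",
               constraints=[{"type": "ineq", "fun": lambda k: u_energy_max - u_energy(k)},
                            {"type": "ineq", "fun": lambda k: b * k[0] - a - 1e-3}])  # stability
print(res.x, cost(res.x), u_energy(res.x))   # the energy constraint is active at the optimum
```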
Review: Optimization methods for groundwater modeling and management
NASA Astrophysics Data System (ADS)
Yeh, William W.-G.
2015-09-01
Optimization methods have been used in groundwater modeling as well as for the planning and management of groundwater systems. This paper reviews and evaluates the various optimization methods that have been used for solving the inverse problem of parameter identification (estimation), experimental design, and groundwater planning and management. Various model selection criteria are discussed, as well as criteria used for model discrimination. The inverse problem of parameter identification concerns the optimal determination of model parameters using water-level observations. In general, the optimal experimental design seeks to find sampling strategies for the purpose of estimating the unknown model parameters. A typical objective of optimal conjunctive-use planning of surface water and groundwater is to minimize the operational costs of meeting water demand. The optimization methods include mathematical programming techniques such as linear programming, quadratic programming, dynamic programming, stochastic programming, nonlinear programming, and the global search algorithms such as genetic algorithms, simulated annealing, and tabu search. Emphasis is placed on groundwater flow problems as opposed to contaminant transport problems. A typical two-dimensional groundwater flow problem is used to explain the basic formulations and algorithms that have been used to solve the formulated optimization problems.
Resultant as the determinant of a Koszul complex
NASA Astrophysics Data System (ADS)
Anokhina, A. S.; Morozov, A. Yu.; Shakirov, Sh. R.
2009-09-01
The determinant is a very important characteristic of a linear map between vector spaces. Two generalizations of linear maps are intensively used in modern theory: linear complexes (nilpotent chains of linear maps) and nonlinear maps. The determinant of a complex and the resultant are then the corresponding generalizations of the determinant of a linear map. It turns out that these two quantities are related: the resultant of a nonlinear map is the determinant of the corresponding Koszul complex. We give an elementary introduction into these notions and relations, which will definitely play a role in the future development of theoretical physics.
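As a concrete instance of the resultant discussed here, for two quadratic polynomials the resultant can be written as the determinant of the 4x4 Sylvester matrix (a standard textbook form, not taken from the paper):

```latex
\[
\operatorname{Res}(f,g)=\det\begin{pmatrix}
a_2 & a_1 & a_0 & 0\\
0   & a_2 & a_1 & a_0\\
b_2 & b_1 & b_0 & 0\\
0   & b_2 & b_1 & b_0
\end{pmatrix},
\qquad
f(x)=a_2x^2+a_1x+a_0,\quad
g(x)=b_2x^2+b_1x+b_0,
\]
which vanishes exactly when $f$ and $g$ share a common root.
```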
Probabilistic Structural Analysis Program
NASA Technical Reports Server (NTRS)
Pai, Shantaram S.; Chamis, Christos C.; Murthy, Pappu L. N.; Stefko, George L.; Riha, David S.; Thacker, Ben H.; Nagpal, Vinod K.; Mital, Subodh K.
2010-01-01
NASA/NESSUS 6.2c is a general-purpose, probabilistic analysis program that computes probability of failure and probabilistic sensitivity measures of engineered systems. Because NASA/NESSUS uses highly computationally efficient and accurate analysis techniques, probabilistic solutions can be obtained even for extremely large and complex models. Once the probabilistic response is quantified, the results can be used to support risk-informed decisions regarding reliability for safety-critical and one-of-a-kind systems, as well as for maintaining a level of quality while reducing manufacturing costs for larger-quantity products. NASA/NESSUS has been successfully applied to a diverse range of problems in aerospace, gas turbine engines, biomechanics, pipelines, defense, weaponry, and infrastructure. This program combines state-of-the-art probabilistic algorithms with general-purpose structural analysis and lifting methods to compute the probabilistic response and reliability of engineered structures. Uncertainties in load, material properties, geometry, boundary conditions, and initial conditions can be simulated. The structural analysis methods include non-linear finite-element methods, heat-transfer analysis, polymer/ceramic matrix composite analysis, monolithic (conventional metallic) materials life-prediction methodologies, boundary element methods, and user-written subroutines. Several probabilistic algorithms are available such as the advanced mean value method and the adaptive importance sampling method. NASA/NESSUS 6.2c is structured in a modular format with 15 elements.
Zink, V; Štípková, M; Lassen, J
2011-10-01
The aim of this study was to estimate genetic parameters for fertility traits and linear type traits in the Czech Holstein dairy cattle population. Phenotypic data regarding 12 linear type traits, measured in first lactation, and 3 fertility traits, measured in each of first and second lactation, were collected from 2005 to 2009 in the progeny testing program of the Czech-Moravian Breeders Corporation. The number of animals for each linear type trait was 59,467, except for locomotion, where 53,436 animals were recorded. The 3-generation pedigree file included 164,125 animals. (Co)variance components were estimated using AI-REML in a series of bivariate analyses, which were implemented via the DMU package. Fertility traits included days from calving to first service (CF1), days open (DO1), and days from first to last service (FL1) in first lactation, and days from calving to first service (CF2), days open (DO2), and days from first to last service (FL2) in second lactation. The number of animals with fertility data varied between traits and ranged from 18,915 to 58,686. All heritability estimates for reproduction traits were low, ranging from 0.02 to 0.04. Heritability estimates for linear type traits ranged from 0.03 for locomotion to 0.39 for stature. Estimated genetic correlations between fertility traits and linear type traits were generally neutral or positive, whereas genetic correlations between body condition score and CF1, DO1, FL1, CF2 and DO2 were mostly negative, with the greatest correlation between BCS and CF2 (-0.51). Genetic correlations with locomotion were greatest for CF1 and CF2 (-0.34 for both). Results of this study show that cows that are genetically extreme for angularity, stature, and body depth tend to perform poorly for fertility traits. At the same time, cows that are genetically predisposed for low body condition score or high locomotion score are generally inferior in fertility. Copyright © 2011 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
A New Pattern of Getting Nasty Number in Graphical Method
NASA Astrophysics Data System (ADS)
Sumathi, P.; Indhumathi, N.
2018-04-01
This paper proposes a new technique for obtaining nasty numbers using the graphical method for linear programming problems, and the technique is demonstrated on various linear programming problems. Some characterisations of nasty numbers are also discussed.
NASA Astrophysics Data System (ADS)
Pradanti, Paskalia; Hartono
2018-03-01
Determination of the insulin injection dose in diabetes mellitus treatment can be considered as an optimal control problem. This article aims to simulate optimal blood glucose control for a patient with diabetes mellitus. The blood glucose regulation of a diabetic patient is represented by Ackerman's Linear Model. The problem is then solved using the dynamic programming method. The desired blood glucose level is obtained by minimizing a performance index in Lagrange form. The results show that dynamic programming based on Ackerman's Linear Model solves the problem quite well.
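Dynamic programming for a linear model with a quadratic (Lagrange-form) performance index reduces to a backward Riccati recursion for the feedback gains. A brief sketch of that recursion under assumed discretised coefficients (illustrative numbers only, not Ackerman's published parameters):

```python
import numpy as np

def finite_horizon_lqr(A, B, Q, R, N):
    """Backward dynamic-programming recursion for x_{k+1} = A x_k + B u_k,
    minimising sum_k (x'Qx + u'Ru). Returns the time-varying gains K_k."""
    P = Q.copy()
    gains = []
    for _ in range(N):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
        gains.append(K)
    return gains[::-1]

# Assumed discretised two-state glucose/hormone model (hypothetical values).
A = np.array([[0.95, -0.05],
              [0.02,  0.90]])
B = np.array([[0.0],
              [0.10]])
Q = np.diag([1.0, 0.0])   # penalise glucose deviation from the desired level
R = np.array([[0.1]])     # penalise insulin injection effort
K = finite_horizon_lqr(A, B, Q, R, N=50)
print(K[0])               # gain to apply at the first time step (u_k = -K_k x_k)
```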
Bisimulation equivalence of differential-algebraic systems
NASA Astrophysics Data System (ADS)
Megawati, Noorma Yulia; Schaft, Arjan van der
2018-01-01
In this paper, the notion of bisimulation relation for linear input-state-output systems is extended to general linear differential-algebraic (DAE) systems. Geometric control theory is used to derive a linear-algebraic characterisation of bisimulation relations, and an algorithm for computing the maximal bisimulation relation between two linear DAE systems. The general definition is specialised to the case where the matrix pencil sE - A is regular. Furthermore, by developing a one-sided version of bisimulation, characterisations of simulation and abstraction are obtained.
SPAR reference manual. [for stress analysis
NASA Technical Reports Server (NTRS)
Whetstone, W. D.
1974-01-01
SPAR is a system of related programs which may be operated either in batch or demand (teletype) mode. Information exchange between programs is automatically accomplished through one or more direct access libraries, known collectively as the data complex. Card input is command-oriented, in free-field form. Capabilities available in the first production release of the system are fully documented, and include linear stress analysis, linear bifurcation buckling analysis, and linear vibrational analysis.
Using Linear and Quadratic Functions to Teach Number Patterns in Secondary School
ERIC Educational Resources Information Center
Kenan, Kok Xiao-Feng
2017-01-01
This paper outlines an approach to definitively find the general term in a number pattern, of either a linear or quadratic form, by using the general equation of a linear or quadratic function. This approach is governed by four principles: (1) identifying the position of the term (input) and the term itself (output); (2) recognising that each…
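The idea can be made concrete: for a quadratic pattern T(n) = an^2 + bn + c, the first three terms determine a, b and c through a small linear solve. A sketch of that calculation (my own illustration, not the paper's worked example):

```python
import numpy as np

def quadratic_general_term(t1, t2, t3):
    """Fit T(n) = a*n^2 + b*n + c through the terms at positions n = 1, 2, 3."""
    M = np.array([[1, 1, 1],
                  [4, 2, 1],
                  [9, 3, 1]], dtype=float)
    a, b, c = np.linalg.solve(M, np.array([t1, t2, t3], dtype=float))
    return a, b, c

# Pattern 3, 8, 15, 24, ... : recover the general term n^2 + 2n from three terms.
a, b, c = quadratic_general_term(3, 8, 15)
print(a, b, c)                                  # approximately 1, 2, 0
print([a * n**2 + b * n + c for n in range(1, 6)])
```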
ERIC Educational Resources Information Center
British Standards Institution, London (England).
To promote interchangeability of teaching machines and programs, so that the user is not so limited in his choice of programs, the British Standards Institute has offered a standard. Part I of the standard deals with linear teaching machines and programs that make use of the roll or sheet methods of presentation. Requirements cover: spools,…
Frequency assignments for HFDF receivers in a search and rescue network
NASA Astrophysics Data System (ADS)
Johnson, Krista E.
1990-03-01
This thesis applies a multiobjective linear programming approach to the problem of assigning frequencies to high frequency direction finding (HFDF) receivers in a search-and-rescue network in order to maximize the expected number of geolocations of vessels in distress. The problem is formulated as a multiobjective integer linear programming problem. The integrality of the solutions is guaranteed by the total unimodularity of the A-matrix. Two approaches are taken to solve the multiobjective linear programming problem: (1) the multiobjective simplex method as implemented in ADBASE; and (2) an iterative approach. In this approach, the individual objective functions are weighted and combined into a single additive objective function. The resulting single objective problem is expressed as a network programming problem and solved using SAS NETFLOW. The process is then repeated with different weightings for the objective functions. The solutions obtained from the multiobjective linear programs are evaluated using a FORTRAN program to determine which solution provides the greatest expected number of geolocations. This solution is then compared to the sample mean and standard deviation for the expected number of geolocations resulting from 10,000 random frequency assignments for the network.
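The iterative weighted-sum step described here, combining the objectives into a single additive objective and re-solving for different weights, can be sketched with any LP solver. A toy illustration of the weight sweep (SciPy is used purely for illustration; the thesis used ADBASE and SAS NETFLOW, and the feasible region below is assumed):

```python
import numpy as np
from scipy.optimize import linprog

# Two maximisation objectives c1.x and c2.x over a small assumed feasible region.
c1 = np.array([3.0, 1.0])
c2 = np.array([1.0, 2.0])
A_ub = np.array([[1.0, 1.0], [2.0, 1.0]])
b_ub = np.array([4.0, 6.0])

for w in (0.2, 0.5, 0.8):                      # sweep the objective weights
    c = -(w * c1 + (1.0 - w) * c2)             # linprog minimises, so negate
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
    print(w, res.x, c1 @ res.x, c2 @ res.x)    # each weighting yields an efficient solution
```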
Meta-analysis in Stata using gllamm.
Bagos, Pantelis G
2015-12-01
There are several user-written programs for performing meta-analysis in Stata (Stata Statistical Software: College Station, TX: Stata Corp LP). These include metan, metareg, mvmeta, and glst. However, there are several cases for which these programs do not suffice. For instance, there is no software for performing univariate meta-analysis with correlated estimates, for multilevel or hierarchical meta-analysis, or for meta-analysis of longitudinal data. In this work, we show with practical applications that many disparate models, including but not limited to the ones mentioned earlier, can be fitted using gllamm. The software is very versatile and can handle a wide variety of models with applications in a wide range of disciplines. The method presented here takes advantage of these modeling capabilities and makes use of appropriate transformations, based on the Cholesky decomposition of the inverse of the covariance matrix, known as generalized least squares, in order to handle correlated data. The models described earlier can be thought of as special instances of a general linear mixed-model formulation, but to the author's knowledge, a general exposition in order to incorporate all the available models for meta-analysis as special cases and the instructions to fit them in Stata has not been presented so far. Source code is available at http:www.compgen.org/tools/gllamm. Copyright © 2015 John Wiley & Sons, Ltd.
Fixing extensions to general relativity in the nonlinear regime
NASA Astrophysics Data System (ADS)
Cayuso, Juan; Ortiz, Néstor; Lehner, Luis
2017-10-01
The question of what gravitational theory could supersede General Relativity has been central in theoretical physics for decades. Many disparate alternatives have been proposed, motivated by cosmology, quantum gravity and phenomenological angles, and have been subjected to tests derived from cosmological, solar system and pulsar observations, typically restricted to linearized regimes. Gravitational waves from compact binaries provide new opportunities to probe these theories in the strongly gravitating/highly dynamical regimes. To this end, however, a reliable understanding of the dynamics in such a regime is required. Unfortunately, most of these theories fail to define well posed initial value problems, which at face value prevents them from meeting such a challenge. In this work, we introduce a consistent program able to remedy this situation. This program is inspired by the approach to "fixing" viscous relativistic hydrodynamics introduced by Israel and Stewart in the late 70's. We illustrate how to implement this approach to control undesirable effects of higher order derivatives in gravity theories and argue how the modified system still captures the true dynamics of the putative underlying theories in 3+1 dimensions. We sketch the implementation of this idea in a couple of effective theories of gravity, one in the context of Noncommutative Geometry, and one in the context of Chern-Simons modified General Relativity.
Online learning control using adaptive critic designs with sparse kernel machines.
Xu, Xin; Hou, Zhongsheng; Lian, Chuanqiang; He, Haibo
2013-05-01
In the past decade, adaptive critic designs (ACDs), including heuristic dynamic programming (HDP), dual heuristic programming (DHP), and their action-dependent ones, have been widely studied to realize online learning control of dynamical systems. However, because neural networks with manually designed features are commonly used to deal with continuous state and action spaces, the generalization capability and learning efficiency of previous ACDs still need to be improved. In this paper, a novel framework of ACDs with sparse kernel machines is presented by integrating kernel methods into the critic of ACDs. To improve the generalization capability as well as the computational efficiency of kernel machines, a sparsification method based on the approximately linear dependence analysis is used. Using the sparse kernel machines, two kernel-based ACD algorithms, that is, kernel HDP (KHDP) and kernel DHP (KDHP), are proposed and their performance is analyzed both theoretically and empirically. Because of the representation learning and generalization capability of sparse kernel machines, KHDP and KDHP can obtain much better performance than previous HDP and DHP with manually designed neural networks. Simulation and experimental results of two nonlinear control problems, that is, a continuous-action inverted pendulum problem and a ball and plate control problem, demonstrate the effectiveness of the proposed kernel ACD methods.
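The approximate linear dependence (ALD) analysis used for sparsification admits a new sample into the kernel dictionary only if its feature-space image cannot be well approximated by the current dictionary. A hedged sketch of that test with an RBF kernel (the threshold, kernel width, and data are assumed, not taken from the paper):

```python
import numpy as np

def rbf(x, y, gamma=1.0):
    return np.exp(-gamma * np.sum((x - y) ** 2))

def ald_sparsify(samples, nu=0.1):
    """Build a sparse dictionary: admit a sample only if the squared residual
    of projecting its kernel feature onto the dictionary exceeds nu."""
    dictionary = [samples[0]]
    for x in samples[1:]:
        K = np.array([[rbf(a, b) for b in dictionary] for a in dictionary])
        k = np.array([rbf(a, x) for a in dictionary])
        coeffs = np.linalg.solve(K + 1e-8 * np.eye(len(dictionary)), k)
        delta = rbf(x, x) - k @ coeffs          # ALD residual
        if delta > nu:
            dictionary.append(x)
    return dictionary

samples = [np.array([t, np.sin(t)]) for t in np.linspace(0, 3, 30)]
print(len(ald_sparsify(samples)))               # far fewer than 30 samples retained
```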
Quantum corrections to the generalized Proca theory via a matter field
NASA Astrophysics Data System (ADS)
Amado, André; Haghani, Zahra; Mohammadi, Azadeh; Shahidi, Shahab
2017-09-01
We study the quantum corrections to the generalized Proca theory via matter loops. We consider two types of interactions, linear and nonlinear in the vector field. Calculating the one-loop correction to the vector field propagator, three- and four-point functions, we show that the non-linear interactions are harmless, although they renormalize the theory. The linear matter-vector field interactions introduce ghost degrees of freedom to the generalized Proca theory. Treating the theory as an effective theory, we calculate the energy scale up to which the theory remains healthy.
NASA Technical Reports Server (NTRS)
Geyser, L. C.
1978-01-01
A digital computer program, DYGABCD, was developed that generates linearized, dynamic models of simulated turbofan and turbojet engines. DYGABCD is based on an earlier computer program, DYNGEN, that is capable of calculating simulated nonlinear steady-state and transient performance of one- and two-spool turbojet engines or two- and three-spool turbofan engines. Most control design techniques require linear system descriptions. For multiple-input/multiple-output systems such as turbine engines, state space matrix descriptions of the system are often desirable. DYGABCD computes the state space matrices commonly referred to as the A, B, C, and D matrices required for a linear system description. The report discusses the analytical approach and provides a users manual, FORTRAN listings, and a sample case.
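The A, B, C, D matrices of such a linearised state-space description can, in general, be approximated from any nonlinear simulation by perturbing states and inputs about an operating point. A generic finite-difference sketch of that idea (DYGABCD's own procedure is not reproduced here; the toy "engine" dynamics are assumed):

```python
import numpy as np

def linearize(f, g, x0, u0, eps=1e-6):
    """Return A, B, C, D for dx/dt = f(x, u), y = g(x, u) about (x0, u0)
    using central finite differences."""
    def jac(fun, z0, other, wrt_first):
        base = np.atleast_1d(fun(z0, other) if wrt_first else fun(other, z0))
        J = np.zeros((base.size, z0.size))
        for i in range(z0.size):
            dz = np.zeros_like(z0)
            dz[i] = eps
            hi = fun(z0 + dz, other) if wrt_first else fun(other, z0 + dz)
            lo = fun(z0 - dz, other) if wrt_first else fun(other, z0 - dz)
            J[:, i] = (np.atleast_1d(hi) - np.atleast_1d(lo)) / (2 * eps)
        return J
    A = jac(f, x0, u0, True);  B = jac(f, u0, x0, False)
    C = jac(g, x0, u0, True);  D = jac(g, u0, x0, False)
    return A, B, C, D

# Toy nonlinear model: two states, one input, one output (illustrative only).
f = lambda x, u: np.array([-x[0] + 0.1 * x[0] * x[1] + u[0], -2.0 * x[1] + u[0] ** 2])
g = lambda x, u: np.array([x[0] + 0.5 * x[1]])
A, B, C, D = linearize(f, g, np.array([1.0, 0.5]), np.array([0.2]))
print(A, B, C, D, sep="\n")
```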
Automatic Classification of Artifactual ICA-Components for Artifact Removal in EEG Signals
2011-01-01
Background Artifacts contained in EEG recordings hamper both the visual interpretation by experts and the algorithmic processing and analysis (e.g. for Brain-Computer Interfaces (BCI) or for Mental State Monitoring). While hand-optimized selection of source components derived from Independent Component Analysis (ICA) to clean EEG data is widespread, the field could greatly profit from automated solutions based on Machine Learning methods. Existing ICA-based removal strategies depend on explicit recordings of an individual's artifacts or have not been shown to reliably identify muscle artifacts. Methods We propose an automatic method for the classification of general artifactual source components. They are estimated by TDSEP, an ICA method that takes temporal correlations into account. The linear classifier is based on an optimized feature subset determined by a Linear Programming Machine (LPM). The subset is composed of features from the frequency, spatial, and temporal domains. A subject-independent classifier was trained on 640 TDSEP components (reaction time (RT) study, n = 12) that were hand labeled by experts as artifactual or brain sources and tested on 1080 new components of RT data of the same study. Generalization was tested on new data from two studies (auditory Event Related Potential (ERP) paradigm, n = 18; motor imagery BCI paradigm, n = 80) that used data with different channel setups and from new subjects. Results Based on six features only, the optimized linear classifier performed on level with the inter-expert disagreement (<10% Mean Squared Error (MSE)) on the RT data. On data of the auditory ERP study, the same pre-calculated classifier generalized well and achieved 15% MSE. On data of the motor imagery paradigm, we demonstrate that the discriminant information used for BCI is preserved when removing up to 60% of the most artifactual source components. Conclusions We propose a universal and efficient classifier of ICA components for the subject-independent removal of artifacts from EEG data. Based on linear methods, it is applicable for different electrode placements and supports the introspection of results. Trained on expert ratings of large data sets, it is not restricted to the detection of eye- and muscle artifacts. Its performance and generalization ability is demonstrated on data of different EEG studies. PMID:21810266
NASA Technical Reports Server (NTRS)
Huang, L. C. P.; Cook, R. A.
1973-01-01
Models utilizing various sub-sets of the six degrees of freedom are used in trajectory simulation. A 3-D model with only linear degrees of freedom is especially attractive, since the coefficients for the angular degrees of freedom are the most difficult to determine and the angular equations are the most time consuming for the computer to evaluate. A computer program is developed that uses three separate subsections to predict trajectories. A launch rail subsection is used until the rocket has left its launcher. The program then switches to a special 3-D section which computes motions in two linear and one angular degrees of freedom. When the rocket trims out, the program switches to the standard, three linear degrees of freedom model.
NASA Astrophysics Data System (ADS)
Vasant, P.; Ganesan, T.; Elamvazuthi, I.
2012-11-01
Fairly reasonable results were obtained in the past for non-linear engineering problems using optimization techniques such as neural networks, genetic algorithms, and fuzzy logic independently. Increasingly, hybrid techniques are being used to solve non-linear problems to obtain better output. This paper discusses the use of a neuro-genetic hybrid technique to optimize geological structure mapping, which is known as seismic survey. It involves the minimization of an objective function subject to geophysical and operational constraints. In this work, the optimization was initially performed using genetic programming, followed by a hybrid neuro-genetic programming approach. Comparative studies and analysis were then carried out on the optimized results. The results indicate that the hybrid neuro-genetic technique produced better results compared to the stand-alone genetic programming method.
Aircraft model prototypes which have specified handling-quality time histories
NASA Technical Reports Server (NTRS)
Johnson, S. H.
1978-01-01
Several techniques for obtaining linear constant-coefficient airplane models from specified handling-quality time histories are discussed. The pseudodata method solves the basic problem, yields specified eigenvalues, and accommodates state-variable transfer-function zero suppression. The algebraic equations to be solved are bilinear, at worst. The disadvantages are reduced generality and no assurance that the resulting model will be airplane like in detail. The method is fully illustrated for a fourth-order stability-axis small motion model with three lateral handling quality time histories specified. The FORTRAN program which obtains and verifies the model is included and fully documented.
Understanding Solubility through Excel Spreadsheets
NASA Astrophysics Data System (ADS)
Brown, Pamela
2001-02-01
This article describes assignments related to the solubility of inorganic salts that can be given in an introductory general chemistry course. Le Châtelier's principle, solubility, unit conversion, and thermodynamics are tied together to calculate heats of solution by two methods: heats of formation and an application of the van't Hoff equation. These assignments address the need for math, graphing, and computer skills in the chemical technology program by developing skill in the use of Microsoft Excel to prepare spreadsheets and graphs and to perform linear and nonlinear curve-fitting. Background information on the value of understanding and predicting solubility is provided.
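The van't Hoff route to the heat of solution mentioned here amounts to a linear fit of the logarithm of the solubility product against reciprocal temperature; written out (standard thermodynamics, not the article's spreadsheet itself):

```latex
\[
\ln K_{sp} \;=\; -\frac{\Delta H^{\circ}_{\mathrm{soln}}}{R}\,\frac{1}{T} \;+\; \frac{\Delta S^{\circ}_{\mathrm{soln}}}{R},
\]
so a plot of $\ln K_{sp}$ versus $1/T$ is approximately linear with slope
$-\Delta H^{\circ}_{\mathrm{soln}}/R$ and intercept $\Delta S^{\circ}_{\mathrm{soln}}/R$.
```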
Multicolor optical polarimetry of reddened stars in the small Magellanic cloud
NASA Technical Reports Server (NTRS)
Magalhaes, Antonio M.; Coyne, G. V.; Piirola, Valero; Rodrigues, C. V.
1989-01-01
First results of an on-going program to determine the wavelength dependence of the interstellar optical polarization of reddened stars in the Small Magellanic Cloud (SMC) are presented. IUE observations of reddened stars in the SMC (Bouchet et al. 1985) generally show marked differences in the extinction law as compared to both the Galaxy and the Large Magellanic Cloud. The aim here is to determine the wavelength dependence of the optical linear polarization in the direction of several such stars in the SMC in order to further constrain the dust composition and size distribution in that galaxy.
A General Accelerated Degradation Model Based on the Wiener Process.
Liu, Le; Li, Xiaoyang; Sun, Fuqiang; Wang, Ning
2016-12-06
Accelerated degradation testing (ADT) is an efficient tool to conduct material service reliability and safety evaluations by analyzing performance degradation data. Traditional stochastic process models are mainly for linear or linearization degradation paths. However, those methods are not applicable for the situations where the degradation processes cannot be linearized. Hence, in this paper, a general ADT model based on the Wiener process is proposed to solve the problem for accelerated degradation data analysis. The general model can consider the unit-to-unit variation and temporal variation of the degradation process, and is suitable for both linear and nonlinear ADT analyses with single or multiple acceleration variables. The statistical inference is given to estimate the unknown parameters in both constant stress and step stress ADT. The simulation example and two real applications demonstrate that the proposed method can yield reliable lifetime evaluation results compared with the existing linear and time-scale transformation Wiener processes in both linear and nonlinear ADT analyses.
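A common way to write this kind of Wiener-process degradation model uses a monotone time-scale transformation to capture nonlinear paths (a generic textbook form; the paper's exact parametrisation may differ):

```latex
\[
X(t) \;=\; x_0 \;+\; \mu\,\Lambda(t;\theta) \;+\; \sigma_B\,B\!\left(\Lambda(t;\theta)\right),
\]
where $\mu$ is the drift (possibly a function of the acceleration stress),
$\sigma_B$ the diffusion coefficient, $B(\cdot)$ standard Brownian motion, and
$\Lambda(t;\theta)$ a monotone time-scale transformation; $\Lambda(t;\theta)=t$
recovers a linear degradation path.
```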
Generating AN Optimum Treatment Plan for External Beam Radiation Therapy.
NASA Astrophysics Data System (ADS)
Kabus, Irwin
1990-01-01
The application of linear programming to the generation of an optimum external beam radiation treatment plan is investigated. MPSX, an IBM linear programming software package, was used. All data originated from the CAT scan of an actual patient who was treated for a pancreatic malignant tumor before this study began. An examination of several alternatives for representing the cross section of the patient showed that it was sufficient to use a set of strategically placed points in the vital organs and tumor and a grid of points spaced about one half inch apart for the healthy tissue. Optimum treatment plans were generated from objective functions representing various treatment philosophies. The optimum plans were based on allowing for 216 external radiation beams, which accounted for wedges of any size. A beam reduction scheme then reduced the number of beams in the optimum plan to a number small enough for implementation. Regardless of the objective function, the linear programming treatment plan preserved about 95% of the patient's right kidney vs. 59% for the plan the hospital actually administered to the patient. The clinician on the case found most of the linear programming treatment plans to be superior to the hospital plan. An investigation was made, using parametric linear programming, concerning any possible benefits derived from generating treatment plans based on objective functions made up of convex combinations of two objective functions; however, this proved to have only limited value. This study also found, through dual variable analysis, that there was no benefit gained from relaxing some of the constraints on the healthy regions of the anatomy. This conclusion was supported by the clinician. Finally several schemes were found that, under certain conditions, can further reduce the number of beams in the final linear programming treatment plan.
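The core of such a formulation is linear because dose is additive in beam weights: with a dose-deposition matrix giving the dose at each anatomy point per unit beam weight, healthy-tissue dose is minimised subject to prescription constraints at tumour points. A toy sketch of that structure (a random matrix stands in for the patient geometry; this is not the MPSX model, and the 60-unit prescription is assumed):

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n_beams, n_tumour, n_healthy = 12, 8, 20
D_t = rng.uniform(0.5, 1.0, (n_tumour, n_beams))   # dose per unit beam weight at tumour points
D_h = rng.uniform(0.0, 0.3, (n_healthy, n_beams))  # dose per unit beam weight at healthy points

c = D_h.sum(axis=0)                    # minimise total dose delivered to healthy tissue
A_ub = -D_t                            # enforce D_t @ w >= 60 (assumed prescription dose);
b_ub = -60.0 * np.ones(n_tumour)       # upper bounds on critical organs would enter the same way

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * n_beams)
print(res.x.round(2))                  # optimal beam weights; many are zero, which aids beam reduction
```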
A Flash X-Ray Facility for the Naval Postgraduate School
1985-06-01
ionizing radiation. NPS has had active programs with a Van de Graaff generator, a reactor, radioactive sources, X-ray machines and a linear electron accelerator ... interaction of radiation with matter and with coherent radiation. Currently the most active program is at the linear electron accelerator, which over twenty years has produced some 75 theses. The flash X-ray machine was obtained to expand and complement the capabilities of the linear electron accelerator.
Discrete Methods and their Applications
1993-02-03
problem of finding all near-optimal solutions to a linear program. In paper [18], we give a brief and elementary proof of a result of Hoffman (1952) about ... relies only on linear programming duality; second, we obtain geometric and algebraic representations of the bounds that are determined explicitly in ... same. We have studied the problem of finding the minimum n such that a given unit interval graph is an n-graph. A linear-time algorithm to compute
1992-12-01
desirable. In this study, the proposed model consists of a thick-walled, highly deformable elastic tube in which the blood flow is described by linearized ... presented a mechanical model consisting of linearized Navier-Stokes and finite elasticity equations to predict blood pooling under acceleration stress ... linear multielement model of the cardiovascular system which can calculate blood pressures and flows at any point in the cardiovascular system. It
ERIC Educational Resources Information Center
Xu, Xueli; von Davier, Matthias
2008-01-01
The general diagnostic model (GDM) utilizes located latent classes for modeling a multidimensional proficiency variable. In this paper, the GDM is extended by employing a log-linear model for multiple populations that assumes constraints on parameters across multiple groups. This constrained model is compared to log-linear models that assume…
ERIC Educational Resources Information Center
KANTASEWI, NIPHON
The purpose of the study was to compare the effectiveness of (1) lecture presentations, (2) linear program use in class with and without discussion, and (3) linear programs used outside of class with in-class problems or discussion. The 126 college students enrolled in a bacteriology course were randomly assigned to three groups. In a succeeding…
ERIC Educational Resources Information Center
Nowak, Christoph; Heinrichs, Nina
2008-01-01
A meta-analysis encompassing all studies evaluating the impact of the Triple P-Positive Parenting Program on parent and child outcome measures was conducted in an effort to identify variables that moderate the program's effectiveness. Hierarchical linear models (HLM) with three levels of data were employed to analyze effect sizes. The results (N =…
User's manual for interfacing a leading edge, vortex rollup program with two linear panel methods
NASA Technical Reports Server (NTRS)
Desilva, B. M. E.; Medan, R. T.
1979-01-01
Sufficient instructions are provided for interfacing the Mangler-Smith, leading edge vortex rollup program with a vortex lattice (POTFAN) method and an advanced higher order, singularity linear analysis for computing the vortex effects for simple canard wing combinations.
Diffendorfer, James E.; Richards, Paul M.; Dalrymple, George H.; DeAngelis, Donald L.
2001-01-01
We present the application of Linear Programming for estimating biomass fluxes in ecosystem and food web models. We use the herpetological assemblage of the Everglades as an example. We developed food web structures for three common Everglades freshwater habitat types: marsh, prairie, and upland. We obtained a first estimate of the fluxes using field data, literature estimates, and professional judgment. Linear programming was used to obtain a consistent and better estimate of the set of fluxes, while maintaining mass balance and minimizing deviations from point estimates. The results support the view that the Everglades is a spatially heterogeneous system, with changing patterns of energy flux, species composition, and biomasses across the habitat types. We show that a food web/ecosystem perspective, combined with Linear Programming, is a robust method for describing food webs and ecosystems that requires minimal data, produces useful post-solution analyses, and generates hypotheses regarding the structure of energy flow in the system.
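Minimising deviations from point estimates while enforcing mass balance is a classic absolute-value LP trick: each flux gets a pair of non-negative deviation variables. A toy three-flux sketch of that formulation (the Everglades web itself is not reproduced; the point estimates and the single balance equation are assumed):

```python
import numpy as np
from scipy.optimize import linprog

f0 = np.array([10.0, 4.0, 7.0])        # assumed field point estimates of three fluxes
# Mass balance for one compartment: inflow f1 must equal the outflows f2 + f3.
A_balance = np.array([[1.0, -1.0, -1.0]])

n = f0.size
# Decision vector: [f (n), d_plus (n), d_minus (n)]; minimise total deviation.
c = np.concatenate([np.zeros(n), np.ones(n), np.ones(n)])
A_eq = np.vstack([
    np.hstack([A_balance, np.zeros((1, 2 * n))]),       # mass balance on the fluxes
    np.hstack([np.eye(n), -np.eye(n), np.eye(n)]),      # f - d_plus + d_minus = f0
])
b_eq = np.concatenate([[0.0], f0])

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * (3 * n))
print(res.x[:n])        # balanced fluxes closest (in the L1 sense) to the point estimates
```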
NASA Technical Reports Server (NTRS)
Arneson, Heather M.; Dousse, Nicholas; Langbort, Cedric
2014-01-01
We consider control design for positive compartmental systems in which each compartment's outflow rate is described by a concave function of the amount of material in the compartment. We address the problem of determining the routing of material between compartments to satisfy time-varying state constraints while ensuring that material reaches its intended destination over a finite time horizon. We give sufficient conditions for the existence of a time-varying state-dependent routing strategy which ensures that the closed-loop system satisfies basic network properties of positivity, conservation and interconnection while ensuring that capacity constraints are satisfied, when possible, or adjusted if a solution cannot be found. These conditions are formulated as a linear programming problem. Instances of this linear programming problem can be solved iteratively to generate a solution to the finite horizon routing problem. Results are given for the application of this control design method to an example problem. Key words: linear programming; control of networks; positive systems; controller constraints and structure.
Train repathing in emergencies based on fuzzy linear programming.
Meng, Xuelei; Cui, Bingmou
2014-01-01
Train pathing is the problem of assigning train trips to sets of rail segments, such as rail tracks and links. This paper focuses on the train pathing problem of determining the paths of train trips in emergencies. We analyze the influencing factors of train pathing, such as transferring cost, running cost, and social adverse effect cost. With overall consideration of the segment and station capability constraints, we build a fuzzy linear programming model to solve the train pathing problem. We design the fuzzy membership function to describe the fuzzy coefficients. Furthermore, contraction-expansion factors are introduced to contract or expand the value ranges of the fuzzy coefficients, coping with the uncertainty of the value range of the fuzzy coefficients. We propose a method based on triangular fuzzy coefficients and transform the train pathing model (a fuzzy linear programming model) into a determinate linear model to solve the fuzzy linear programming problem. An emergency scenario is constructed based on real data of the Beijing-Shanghai Railway. The model was solved and the computational results demonstrate the validity of the model and the efficiency of the algorithm.
Computer Program For Linear Algebra
NASA Technical Reports Server (NTRS)
Krogh, F. T.; Hanson, R. J.
1987-01-01
A collection of routines is provided for basic vector operations. The Basic Linear Algebra Subprogram (BLAS) library is a collection of FORTRAN-callable routines that employ standard techniques to perform the basic operations of numerical linear algebra.
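The operations such a library standardises are the same ones exposed today through, for example, SciPy's low-level BLAS wrappers. A brief illustration of the Level-1 AXPY and Level-3 GEMM operations (shown only to make the operations concrete, not as part of the original FORTRAN library):

```python
import numpy as np
from scipy.linalg import blas

x = np.array([1.0, 2.0, 3.0])
y = np.array([10.0, 10.0, 10.0])

# Level-1 AXPY: y <- a*x + y
y = blas.daxpy(x, y, a=2.0)
print(y)                                   # [12. 14. 16.]

# Level-3 GEMM: C <- alpha * A @ B
A = np.array([[1.0, 2.0], [3.0, 4.0]], order="F")
B = np.array([[5.0, 6.0], [7.0, 8.0]], order="F")
print(blas.dgemm(alpha=1.0, a=A, b=B))     # same result as A @ B
```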
PGOPHER: A program for simulating rotational, vibrational and electronic spectra
NASA Astrophysics Data System (ADS)
Western, Colin M.
2017-01-01
The PGOPHER program is a general purpose program for simulating and fitting molecular spectra, particularly the rotational structure. The current version can handle linear molecules, symmetric tops and asymmetric tops and many possible transitions, both allowed and forbidden, including multiphoton and Raman spectra in addition to the common electric dipole absorptions. Many different interactions can be included in the calculation, including those arising from electron and nuclear spin, and external electric and magnetic fields. Multiple states and interactions between them can also be accounted for, limited only by available memory. Fitting of experimental data can be to line positions (in many common formats), intensities or band contours and the parameters determined can be level populations as well as rotational constants. PGOPHER is provided with a powerful and flexible graphical user interface to simplify many of the tasks required in simulating, understanding and fitting molecular spectra, including Fortrat diagrams and energy level plots in addition to overlaying experimental and simulated spectra. The program is open source, and can be compiled with open source tools. This paper provides a formal description of the operation of version 9.1.
Automatic blocking of nested loops
NASA Technical Reports Server (NTRS)
Schreiber, Robert; Dongarra, Jack J.
1990-01-01
Blocked algorithms have much better properties of data locality and therefore can be much more efficient than ordinary algorithms when a memory hierarchy is involved. On the other hand, they are very difficult to write and to tune for particular machines. The reorganization of nested loops through the use of known program transformations is considered in order to create blocked algorithms automatically. The program transformations used are strip mining, loop interchange, and a variant of loop skewing in which invertible linear transformations (with integer coordinates) of the loop indices are allowed. Some problems are solved concerning the optimal application of these transformations. It is shown, in a very general setting, how to choose a nearly optimal set of transformed indices. It is then shown, in one particular but rather frequently occurring situation, how to choose an optimal set of block sizes.
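The combined effect of strip mining and loop interchange is easiest to see in a blocked matrix multiply: the index loops are strip-mined into block loops, and the element work is moved inside so each block of the result is finished while its operands are in fast memory. An illustrative hand-written sketch (the paper's transformations are applied automatically by a compiler, and the block size here is assumed):

```python
import numpy as np

def blocked_matmul(A, B, bs=32):
    """Blocked (tiled) matrix multiply: strip-mine the i, j, k loops into
    blocks of size bs and interchange so each C block is accumulated locally."""
    n, m = A.shape
    m2, p = B.shape
    assert m == m2
    C = np.zeros((n, p))
    for i0 in range(0, n, bs):            # block loops produced by strip mining
        for j0 in range(0, p, bs):
            for k0 in range(0, m, bs):
                # element-level work, interchanged to sit inside the block loops
                C[i0:i0+bs, j0:j0+bs] += A[i0:i0+bs, k0:k0+bs] @ B[k0:k0+bs, j0:j0+bs]
    return C

A = np.random.rand(100, 80)
B = np.random.rand(80, 60)
print(np.allclose(blocked_matmul(A, B), A @ B))   # True
```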
NASA Technical Reports Server (NTRS)
Kenney, G. P.
1975-01-01
This report presents the results of the sensor performance evaluation of the 13.9 GHz radiometer/scatterometer, which was part of the earth resources experiment package on Skylab. Findings are presented in the areas of housekeeping parameters, antenna gain and scanning performance, dynamic range, linearity, precision, resolution, stability, integration time, and transmitter output. Supplementary analyses covering performance anomalies, data stream peculiarities, aircraft sensor data comparisons, scatterometer saturation characteristics, and RF heating effects are reported. Results of the evaluation show that instrument performance was generally as expected, but capability degradations were observed to result from three major anomalies. Conclusions are drawn from the evaluation results, and recommendations for improving the effectiveness of a future program are offered. An addendum describes the special evaluation techniques developed and applied in the sensor performance evaluation tasks.
Program Flow Analyzer. Volume 3
1984-08-01
metrics are defined using these basic terms. Of interest is another measure for the size of the program, called the volume: V = N × log2(n). The unit of ... correlated to actual data and most useful for test. The formula describing difficulty may be expressed as: D = (n1/2) × (N2/n2) = 1/L. Difficulty, then, is the ... linearly independent program paths through any program graph. A maximal set of these linearly independent paths, called a "basis set," can always be found
NASA Technical Reports Server (NTRS)
Bowman, L. M.
1984-01-01
An interactive steady-state frequency response computer program with graphics is documented. Single or multiple forces may be applied to the structure, using a modal superposition approach to calculate the response. The method can be applied to linear, proportionally damped structures in which the damping may be viscous or structural. The theoretical approach and program organization are described. Example problems, user instructions, and a sample interactive session are given to demonstrate the program's capability in solving a variety of problems.
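Modal superposition builds the steady-state frequency response from a handful of modes; for viscous modal damping the receptance between two degrees of freedom is a sum over modes. A brief sketch of that sum (the generic textbook formula, not the program's listing; the modal data below are assumed):

```python
import numpy as np

def receptance(omega, freqs, zetas, phi, p, q):
    """Frequency response H_pq(omega) by modal superposition, assuming
    mass-normalised mode shapes phi[:, r], natural frequencies freqs[r]
    in rad/s, and viscous modal damping ratios zetas[r]."""
    num = phi[p, :] * phi[q, :]
    den = freqs**2 - omega**2 + 2j * zetas * freqs * omega
    return np.sum(num / den)

# Assumed 2-DOF modal data (illustrative numbers only).
freqs = np.array([10.0, 25.0])           # natural frequencies, rad/s
zetas = np.array([0.02, 0.05])           # modal damping ratios
phi = np.array([[0.8, 0.6],
                [0.6, -0.8]])            # columns are mode shapes
for w in np.linspace(1.0, 40.0, 5):
    print(w, abs(receptance(w, freqs, zetas, phi, 0, 1)))
```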
FASOR - A second generation shell of revolution code
NASA Technical Reports Server (NTRS)
Cohen, G. A.
1978-01-01
An integrated computer program entitled Field Analysis of Shells of Revolution (FASOR) currently under development for NASA is described. When completed, this code will treat prebuckling, buckling, initial postbuckling and vibrations under axisymmetric static loads as well as linear response and bifurcation under asymmetric static loads. Although these modes of response are treated by existing programs, FASOR extends the class of problems treated to include general anisotropy and transverse shear deformations of stiffened laminated shells. At the same time, a primary goal is to develop a program which is free of the usual problems of modeling, numerical convergence and ill-conditioning, laborious problem setup, limitations on problem size and interpretation of output. The field method is briefly described, the shell differential equations are cast in a suitable form for solution by this method and essential aspects of the input format are presented. Numerical results are given for both unstiffened and stiffened anisotropic cylindrical shells and compared with previously published analytical solutions.
NASA Astrophysics Data System (ADS)
Shorikov, A. F.
2016-12-01
In this article we consider a discrete-time dynamical system consisting of a set of controllable objects (a region and the municipalities forming it). The dynamics of each of these objects is described by corresponding linear or nonlinear discrete-time recurrent vector relations, and its control system consists of two levels: a basic level (control level I), which is the dominating level, and an auxiliary level (control level II), which is the subordinate level. The two levels have different functioning criteria and are united by information and control connections defined in advance. In this article we study the problem of optimizing the guaranteed result for program control of the final state of a regional social and economic system in the presence of risk vectors. For this problem we propose a mathematical model in the form of a two-level hierarchical minimax program control problem for the final state of this system with incomplete information, and a general scheme for solving it.
The impact of an online disease management program on medical costs among health plan members.
Schwartz, Steven M; Day, Brian; Wildenhaus, Kevin; Silberman, Anna; Wang, Chun; Silberman, Jordan
2010-01-01
This study evaluated the economic impact of an online disease management program within a broader population health management strategy. A retrospective, quasi-experimental, cohort design evaluated program participants and a matched cohort of nonparticipants on 2003-2007 claims data in a mixed model. The study was conducted through Highmark Inc, Blue Cross Blue Shield, covering 4.8 million members in five regions of Pennsylvania. Overall, 413 online self-management program participants were compared with a matched cohort of 360 nonparticipants. The costs and claims data were measured per person per calendar year. Total payments were aggregated from inpatient, outpatient, professional services, and pharmacy payments. The costs of the online program were estimated on a per-participant basis. All dollars were adjusted to 2008 values. The online intervention, implemented in 2006, was a commercially available, tailored program for chronic condition self management, nested within the Blues on Call(SM) condition management strategy. General linear modeling (with covariate adjustment) was used. Data trends were also explored using second-order polynomial regressions. Health care costs per person per year were $757 less than predicted for participants relative to matched nonparticipants, yielding a return on investment of $9.89 for every dollar spent on the program. This online intervention showed a favorable and cost-effective impact on health care cost.
Generalised Assignment Matrix Methodology in Linear Programming
ERIC Educational Resources Information Center
Jerome, Lawrence
2012-01-01
Discrete Mathematics instructors and students have long been struggling with various labelling and scanning algorithms for solving many important problems. This paper shows how to solve a wide variety of Discrete Mathematics and OR problems using assignment matrices and linear programming, specifically using Excel Solvers although the same…
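An assignment-matrix model of the kind described is a linear program whose constraint matrix is totally unimodular, so an ordinary LP solver returns 0-1 solutions automatically. Outside a spreadsheet the same formulation takes only a few lines (SciPy is used purely to illustrate the formulation, not the article's Excel workbook; the cost matrix is assumed):

```python
import numpy as np
from scipy.optimize import linprog

cost = np.array([[4.0, 2.0, 8.0],        # assumed cost of assigning worker i to task j
                 [4.0, 3.0, 7.0],
                 [3.0, 1.0, 6.0]])
n = cost.shape[0]

c = cost.ravel()                          # variables x_ij laid out row-major
A_eq = np.zeros((2 * n, n * n))
for i in range(n):
    A_eq[i, i*n:(i+1)*n] = 1.0            # each worker does exactly one task
    A_eq[n + i, i::n] = 1.0               # each task gets exactly one worker
b_eq = np.ones(2 * n)

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, 1)] * (n * n))
print(res.x.reshape(n, n).round())        # a 0-1 assignment matrix, thanks to total unimodularity
print(res.fun)                            # minimum total cost
```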
NASA Technical Reports Server (NTRS)
Mitchell, C. E.; Eckert, K.
1979-01-01
A program for predicting the linear stability of liquid propellant rocket engines is presented. The underlying model assumptions and analytical steps necessary for understanding the program and its input and output are also given. The rocket engine is modeled as a right circular cylinder with an injector with a concentrated combustion zone, a nozzle, finite mean flow, and an acoustic admittance, or the sensitive time lag theory. The resulting partial differential equations are combined into two governing integral equations by the use of the Green's function method. These equations are solved using a successive approximation technique for the small amplitude (linear) case. The computational method used as well as the various user options available are discussed. Finally, a flow diagram, sample input and output for a typical application and a complete program listing for program MODULE are presented.
The fastclime Package for Linear Programming and Large-Scale Precision Matrix Estimation in R.
Pang, Haotian; Liu, Han; Vanderbei, Robert
2014-02-01
We develop an R package fastclime for solving a family of regularized linear programming (LP) problems. Our package efficiently implements the parametric simplex algorithm, which provides a scalable and sophisticated tool for solving large-scale linear programs. As an illustrative example, one use of our LP solver is to implement an important sparse precision matrix estimation method called CLIME (Constrained L 1 Minimization Estimator). Compared with existing packages for this problem such as clime and flare, our package has three advantages: (1) it efficiently calculates the full piecewise-linear regularization path; (2) it provides an accurate dual certificate as stopping criterion; (3) it is completely coded in C and is highly portable. This package is designed to be useful to statisticians and machine learning researchers for solving a wide range of problems.
On the LHC sensitivity for non-thermalised hidden sectors
NASA Astrophysics Data System (ADS)
Kahlhoefer, Felix
2018-04-01
We show under rather general assumptions that hidden sectors that never reach thermal equilibrium in the early Universe are also inaccessible for the LHC. In other words, any particle that can be produced at the LHC must either have been in thermal equilibrium with the Standard Model at some point or must be produced via the decays of another hidden sector particle that has been in thermal equilibrium. To reach this conclusion, we parametrise the cross section connecting the Standard Model to the hidden sector in a very general way and use methods from linear programming to calculate the largest possible number of LHC events compatible with the requirement of non-thermalisation. We find that even the HL-LHC cannot possibly produce more than a few events with energy above 10 GeV involving states from a non-thermalised hidden sector.
PAN AIR modeling studies. [higher order panel method for aircraft design
NASA Technical Reports Server (NTRS)
Towne, M. C.; Strande, S. M.; Erickson, L. L.; Kroo, I. M.; Enomoto, F. Y.; Carmichael, R. L.; Mcpherson, K. F.
1983-01-01
PAN AIR is a computer program that predicts subsonic or supersonic linear potential flow about arbitrary configurations. The code's versatility and generality afford numerous possibilities for modeling flow problems. Although this generality provides great flexibility, it also means that studies are required to establish the dos and don'ts of modeling. The purpose of this paper is to describe and evaluate a variety of methods for modeling flows with PAN AIR. The areas discussed are effects of panel density, internal flow modeling, forebody modeling in subsonic flow, propeller slipstream modeling, effect of wake length, wing-tail-wake interaction, effect of trailing-edge paneling on the Kutta condition, well- and ill-posed boundary-value problems, and induced-drag calculations. These nine topics address problems that are of practical interest to the users of PAN AIR.
Linear Goal Programming as a Military Decision Aid.
1988-04-01
James F. Major, USAF. Report dated April 1988, 64 pages. Excerpts: ...air warfare, advanced armour warfare, the potential for space warfare, and many other advances have expanded the breadth of weapons employed to the... ...written by A. Charnes and W. W. Cooper, Management Models and Industrial Applications of Linear Programming, in 1961.(3:5) Since this time linear...
Training hospital managers for strategic planning and management: a prospective study.
Terzic-Supic, Zorica; Bjegovic-Mikanovic, Vesna; Vukovic, Dejana; Santric-Milicevic, Milena; Marinkovic, Jelena; Vasic, Vladimir; Laaser, Ulrich
2015-02-26
Training is the systematic acquisition of skills, rules, concepts, or attitudes and is one of the most important components in any organization's strategy. There is increasing demand for formal and informal training programs especially for physicians in leadership positions. This study determined the learning outcomes after a specific training program for hospital management teams. The study was conducted during 2006 and 2007 at the Centre School of Public Health and Management, Faculty of Medicine, University of Belgrade and included 107 participants involved in the management in 20 Serbian general hospitals. The management teams were multidisciplinary, consisting of five members on average: the director of the general hospital, the deputy directors, the head nurse, and the chiefs of support services. The managers attended a training program, which comprised four modules addressing specific topics. Three reviewers independently evaluated the level of management skills at the beginning and 12 months after the training program. Principal component analysis and subsequent stepwise multiple linear regression analysis were performed to determine predictors of learning outcomes. The quality of the SWOT (strengths, weaknesses, opportunities and threats) analyses performed by the trainees improved with differences between 0.35 and 0.49 on a Likert scale (p < 0.001). Principal component analysis explained 81% of the variance affecting their quality of strategic planning. Following the training program, the external environment, strategic positioning, and quality of care were predictors of learning outcomes. The four regression models used showed that the training program had positive effects (p < 0.001) on the ability to formulate a Strategic Plan comprising the hospital mission, vision, strategic objectives, and action plan. This study provided evidence that training for strategic planning and management enhanced the strategic decision-making of hospital management teams, which is a requirement for hospitals in an increasingly competitive, complex and challenging context. For the first time, half of state general hospitals involved in team training have formulated the development of an official strategic plan. The positive effects of the formal training program justify additional investment in future education and training.
Stochastic Dynamic Mixed-Integer Programming (SD-MIP)
2015-05-05
...stochastic linear programming (SLP) problems. By using a combination of ideas from cutting plane theory of deterministic MIP (especially disjunctive... ...developed to date. b) As part of this project, we have also developed tools for very large scale Stochastic Linear Programming (SLP). There are... ...several reasons for this. First, SLP models continue to challenge many of the fastest computers to date, and many applications within the DoD (e.g...
Notes on the Design of Two Supercavitating Hydrofoils
1975-07-01
Contents excerpt: foil section characteristics and definitions for the Tulin two-term, Levi-Civita, and Larock and Street two-term (three-parameter) programs and their inputs; linearized two... Nomenclature excerpt (symbol, description, dimensions): A1, A2, angle distribution multipliers in the Levi-Civita program (radians); AR, aspect ratio; CL, lift coefficient; ...angle of attack (radians); B, constant angle in the Levi-Civita program (radians); linearized angle of attack superposed (degrees); C, Wu's 1955 program parameter.
Distributed lag effects and vulnerable groups of floods on bacillary dysentery in Huaihua, China
Liu, Zhi-Dong; Li, Jing; Zhang, Ying; Ding, Guo-Yong; Xu, Xin; Gao, Lu; Liu, Xue-Na; Liu, Qi-Yong; Jiang, Bao-Fa
2016-01-01
Understanding the potential links between floods and bacillary dysentery in China is important for developing appropriate intervention programs after floods. This study aimed to explore the distributed lag effects of floods on bacillary dysentery and to identify the vulnerable groups in Huaihua, China. Weekly numbers of bacillary dysentery cases from 2005–2011 were obtained for the flood season. Flood data and meteorological data over the same period were obtained from the China Meteorological Data Sharing Service System. To examine the distributed lag effects, a generalized linear mixed model combined with a distributed lag non-linear model was developed to assess the relationship between floods and bacillary dysentery. A total of 3,709 cases of bacillary dysentery were notified over the study period. The effects of floods on bacillary dysentery continued for approximately 3 weeks, with a cumulative risk ratio of 1.52 (95% CI: 1.08–2.12). The risks of bacillary dysentery were higher in females, farmers and people aged 15–64 years. This study suggests that floods increased the risk of bacillary dysentery, with effects lasting about 3 weeks, especially for the vulnerable groups identified. Public health programs should be implemented to prevent and control the potential risk of bacillary dysentery after floods. PMID:27427387
Zhang, Liping; Zhang, Shiwen; Huang, Yajie; Cao, Meng; Huang, Yuanfang; Zhang, Hongyan
2016-01-01
Understanding abandoned mine land (AML) changes during land reclamation is crucial for reusing damaged land resources and formulating sound ecological restoration policies. This study combines the linear programming (LP) model and the CLUE-S model to simulate land-use dynamics in the Mentougou District (Beijing, China) from 2007 to 2020 under three reclamation scenarios, that is, the planning scenario based on the general land-use plan in study area (scenario 1), maximal comprehensive benefits (scenario 2), and maximal ecosystem service value (scenario 3). Nine landscape-scale graph metrics were then selected to describe the landscape characteristics. The results show that the coupled model presented can simulate the dynamics of AML effectively and the spatially explicit transformations of AML were different. New cultivated land dominates in scenario 1, while construction land and forest land account for major percentages in scenarios 2 and 3, respectively. Scenario 3 has an advantage in most of the selected indices as the patches combined most closely. To conclude, reclaiming AML by transformation into more forest can reduce the variability and maintain the stability of the landscape ecological system in study area. These findings contribute to better mapping AML dynamics and providing policy support for the management of AML. PMID:27023575
Theory of mode coupling in spin torque oscillators coupled to a thermal bath of magnons
NASA Astrophysics Data System (ADS)
Zhou, Yan; Zhang, Shulei; Li, Dong; Heinonen, Olle
Recently, numerous experimental investigations have shown that the dynamics of a single spin torque oscillator (STO) exhibits complex behavior stemming from interactions between two or more modes of the oscillator. Examples are the observed mode-hopping and mode coexistence. There has been some initial work indicating how the theory for a single-mode (macro-spin) spin torque oscillator should be generalized to include several modes and the interactions between them. In this work, we rigorously derive such a theory starting with the generalized Landau-Lifshitz-Gilbert equation in the presence of the current-driven spin transfer torques. We will first show, in general, how a linear mode coupling arises through the coupling of the system to a thermal bath of magnons, which implies that the manifold of orbits and fixed points may shift with temperature. We then apply our theory to two experimentally interesting systems: 1) an STO patterned into a nano-pillar with circular or elliptical cross-sections and 2) a nano-contact STO. For both cases, we found that in order to get mode coupling, it would be necessary to have either a finite in-plane component of the external field or an Oersted field. We will also discuss the temperature dependence of the linear mode coupling. Y. Zhou acknowledges support from the Seed Funding Program for Basic Research from the University of Hong Kong, and the University Grants Committee of Hong Kong (Contract No. AoE/P-04/08).
Model-based optimal design of experiments - semidefinite and nonlinear programming formulations
Duarte, Belmiro P.M.; Wong, Weng Kee; Oliveira, Nuno M.C.
2015-01-01
We use mathematical programming tools, such as Semidefinite Programming (SDP) and Nonlinear Programming (NLP)-based formulations to find optimal designs for models used in chemistry and chemical engineering. In particular, we employ local design-based setups in linear models and a Bayesian setup in nonlinear models to find optimal designs. In the latter case, Gaussian Quadrature Formulas (GQFs) are used to evaluate the optimality criterion averaged over the prior distribution for the model parameters. Mathematical programming techniques are then applied to solve the optimization problems. Because such methods require the design space be discretized, we also evaluate the impact of the discretization scheme on the generated design. We demonstrate the techniques for finding D–, A– and E–optimal designs using design problems in biochemical engineering and show the method can also be directly applied to tackle additional issues, such as heteroscedasticity in the model. Our results show that the NLP formulation produces highly efficient D–optimal designs but is computationally less efficient than that required for the SDP formulation. The efficiencies of the generated designs from the two methods are generally very close and so we recommend the SDP formulation in practice. PMID:26949279
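As a toy illustration of the NLP route described above, the sketch below computes a D-optimal design on a discretized design space by maximizing the log-determinant of the information matrix with SciPy's SLSQP solver; the quadratic response model and candidate grid are assumptions for illustration, not the authors' formulations.

```python
# Sketch of an NLP formulation for a D-optimal design on a discretized design space
# (illustrative only; not the authors' SDP/NLP code). Assumed model: y = b0 + b1*x + b2*x^2.
import numpy as np
from scipy.optimize import minimize

grid = np.linspace(-1.0, 1.0, 21)                           # assumed candidate design points
F = np.column_stack([np.ones_like(grid), grid, grid**2])    # regression vectors f(x)

def neg_logdet(w):
    M = F.T @ (w[:, None] * F)                  # information matrix sum_i w_i f_i f_i^T
    sign, logdet = np.linalg.slogdet(M + 1e-10 * np.eye(3))
    return -logdet

n = len(grid)
w0 = np.full(n, 1.0 / n)
res = minimize(neg_logdet, w0, method="SLSQP",
               bounds=[(0.0, 1.0)] * n,
               constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}])
support = res.x > 1e-3
print("support points:", grid[support], "weights:", res.x[support].round(3))
```

For this quadratic model the optimizer should recover the textbook answer: equal weights at the two endpoints and the midpoint of the interval.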
Vicente de Sousa, Odete; Soares Guerra, Rita; Sousa, Ana Sofia; Pais Henriques, Bebiana; Pereira Monteiro, Anabela; Amaral, Teresa Freitas
2017-09-01
This study aims to evaluate the impact of oral nutritional supplementation (ONS) and a psychomotor rehabilitation program on the nutritional and functional status of community-dwelling patients with Alzheimer's disease (AD). A 21-day prospective randomized controlled trial was conducted, and a third intervention group performed a psychomotor rehabilitation program. Patients were followed up for 180 days. The mean (standard deviation) Mini Nutritional Assessment (MNA) score increased both in the nutritional supplementation group (NSG; n = 25), 0.4 (0.8), and in the nutritional supplementation psychomotor rehabilitation program group (NSPRG; n = 11), 1.5 (1.0), versus -0.1 (1.1) in the control group (CG; n = 43), P < .05. Further improvements in MNA at the 90-day follow-up were observed in the NSG, 1.3 (1.2), and the NSPRG, 1.6 (1.0), versus 0.3 (1.7) in the CG (P < .05). General linear model analysis showed that the improvement in MNA score (ΔMNA) in the NSG and NSPRG after the intervention, at 21 days and 90 days, was independent of the MNA and Mini-Mental State Examination scores at baseline (Ps > .05). ONS and a psychomotor rehabilitation program have a positive impact on the long-term nutritional and functional status of patients with AD.
Riding and handling qualities of light aircraft: A review and analysis
NASA Technical Reports Server (NTRS)
Smetana, F. O.; Summery, D. C.; Johnson, W. D.
1972-01-01
Design procedures and supporting data necessary for configuring light aircraft to obtain desired responses to pilot commands and gusts are presented. The procedures employ specializations of modern military and jet transport practice where these provide an improvement over earlier practice. General criteria for riding and handling qualities are discussed in terms of the airframe dynamics. Methods available in the literature for calculating the coefficients required for a linearized analysis of the airframe dynamics are reviewed in detail. The review also treats the relation of spin and stall to airframe geometry. Root locus analysis is used to indicate the sensitivity of airframe dynamics to variations in individual stability derivatives and to variations in geometric parameters. Computer programs are given for finding the frequencies, damping ratios, and time constants of all rigid body modes and for generating time histories of aircraft motions in response to control inputs. Appendices are included presenting the derivation of the linearized equations of motion; the stability derivatives; the transfer functions; approximate solutions for the frequency, damping ratio, and time constants; an indication of methods to be used when linear analysis is inadequate; sample calculations; and an explanation of the use of root locus diagrams and Bode plots.
Li, Chuan; Li, Lin; Zhang, Jie; Alexov, Emil
2012-01-01
The Gauss-Seidel method is a standard iterative numerical method widely used to solve a system of equations and, in general, is more efficient than other iterative methods, such as the Jacobi method. However, the standard implementation of the Gauss-Seidel method restricts its use in parallel computing because it requires updated neighboring values (i.e., from the current iteration) as soon as they are available. Here we report an efficient and exact (assumption-free) method to parallelize the iterations and to reduce the computational time as a linear or nearly linear function of the number of CPUs. In contrast to other existing solutions, our method does not require any assumptions and is equally applicable to linear and nonlinear equations. The approach is implemented in the DelPhi program, a finite difference Poisson-Boltzmann equation solver for modeling electrostatics in molecular biology. This development makes the iterative procedure for obtaining the electrostatic potential distribution in the parallelized DelPhi severalfold faster than in the serial code. Further, we demonstrate the advantages of the new parallelized DelPhi by computing the electrostatic potential and the corresponding energies of large supramolecular structures. PMID:22674480
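For context, the serial baseline that such parallelization efforts start from is the textbook Gauss-Seidel sweep sketched below; this is the generic method only, not DelPhi's exact parallel scheme, and the test system is invented.

```python
# Textbook serial Gauss-Seidel iteration for A x = b (illustration of the baseline only;
# the exact parallelization described in the abstract is not reproduced here).
import numpy as np

def gauss_seidel(A, b, tol=1e-10, max_iter=10_000):
    n = len(b)
    x = np.zeros(n)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            # Uses already-updated values x[:i]; this in-place dependence is what
            # complicates a naive parallel implementation.
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            x[i] = (b[i] - s) / A[i, i]
        if np.linalg.norm(x - x_old, ord=np.inf) < tol:
            break
    return x

# Diagonally dominant test system (assumed example)
A = np.array([[4.0, -1.0, 0.0], [-1.0, 4.0, -1.0], [0.0, -1.0, 4.0]])
b = np.array([1.0, 2.0, 3.0])
print(gauss_seidel(A, b), np.linalg.solve(A, b))
```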
Liu, Lan; Jiang, Tao
2007-01-01
With the launch of the international HapMap project, the haplotype inference problem has recently attracted a great deal of attention in the computational biology community. In this paper, we study the question of how to efficiently infer haplotypes from genotypes of individuals related by a pedigree without mating loops, assuming that the hereditary process was free of mutations (i.e. the Mendelian law of inheritance) and recombinants. We model the haplotype inference problem as a system of linear equations as in [10] and present an (optimal) linear-time (i.e. O(mn) time) algorithm to generate a particular solution (a particular solution of any linear system is an assignment of numerical values to the variables in the system which satisfies the equations in the system) to the haplotype inference problem, where m is the number of loci (or markers) in a genotype and n is the number of individuals in the pedigree. Moreover, the algorithm also provides a general solution (a general solution of any linear system is given by the span of a basis of the solution space of its associated homogeneous system, offset from the origin by a vector, namely by any particular solution; a general solution for ZRHC is very useful in practice because it allows the end user to efficiently enumerate all solutions for ZRHC and perform tasks such as random sampling) in O(mn²) time, which is optimal because the size of a general solution could be as large as Θ(mn²). The key ingredients of our construction are (i) a fast consistency checking procedure for the system of linear equations introduced in [10], based on a careful investigation of the relationship between the equations, and (ii) a novel linear-time method for solving linear equations without invoking the Gaussian elimination method. Although such a fast method for solving equations is not known for general systems of linear equations, we take advantage of the underlying loop-free pedigree graph and some special properties of the linear equations.
Work Related Stress, Burnout, Job Satisfaction and General Health of Nurses
Khamisa, Natasha; Oldenburg, Brian; Peltzer, Karl; Ilic, Dragan
2015-01-01
Gaps in research focusing on work related stress, burnout, job satisfaction and general health of nurses are evident within developing contexts like South Africa. This study identified the relationship between work related stress, burnout, job satisfaction and general health of nurses. A total of 1200 nurses from four hospitals were invited to participate in this cross-sectional study (75% response rate). Participants completed five questionnaires, and multiple linear regression analysis was used to determine significant relationships between variables. Staff issues are best associated with burnout as well as job satisfaction. Burnout explained the highest amount of variance in the mental health of nurses. These are known to compromise productivity and performance, as well as affect the quality of patient care. Issues such as security risks in the workplace affect job satisfaction and the health of nurses. Although this is more salient to developing contexts, it is important for developing strategies and intervention programs aimed at improving nurse- and patient-related outcomes. PMID:25588157
NASA Astrophysics Data System (ADS)
Jacobs, Verne
Dynamical descriptions for the propagation of quantized electromagnetic fields, in the presence of environmental interactions, are systematically and self-consistently developed in the complimentary Schrödinger and Heisenberg pictures. An open-systems (non-equilibrium) quantum-electrodynamics description is thereby provided for electromagnetic-field propagation in general non-local and non-stationary dispersive and absorbing optical media, including a fundamental microscopic treatment of decoherence and relaxation processes due to environmental collisional and electromagnetic interactions. Particular interest is centered on entangled states and other non-classical states of electromagnetic fields, which may be created by non-linear electromagnetic interactions and detected by the measurement of various electromagnetic-field correlation functions. Accordingly, we present dynamical descriptions based on general forms of electromagnetic-field correlation functions involving both the electric-field and the magnetic-field components of the electromagnetic field, which are treated on an equal footing. Work supported by the Office of Naval Research through the Basic Research Program at The Naval Research Laboratory.
On the classical and quantum integrability of systems of resonant oscillators
NASA Astrophysics Data System (ADS)
Marino, Massimo
2017-01-01
We study in this paper systems of harmonic oscillators with resonant frequencies. For these systems we present general procedures for the construction of sets of functionally independent constants of motion, which can be used for the definition of generalized actionangle variables, in accordance with the general description of degenerate integrable systems which was presented by Nekhoroshev in a seminal paper in 1972. We then apply to these classical integrable systems the procedure of quantization which has been proposed to the author by Nekhoroshev during his last years of activity at Milan University. This procedure is based on the construction of linear operators by means of the symmetrization of the classical constants of motion mentioned above. For 3 oscillators with resonance 1: 1: 2, by using a computer program we have discovered an exceptional integrable system, which cannot be obtained with the standard methods based on the obvious symmetries of the Hamiltonian function. In this exceptional case, quantum integrability can be realized only by means of a modification of the symmetrization procedure.
Microwave and Electron Beam Computer Programs
1988-06-01
...Research (ONR). SCRIBE was adapted by MRC from the Stanford Linear Accelerator Center Beam Trajectory Program, EGUN. ...achieved with SCRIBE. It is a version of the Stanford Linear Accelerator (SLAC) code EGUN (Ref. 8), extensively modified by MRC for research on...
Interior-Point Methods for Linear Programming: A Review
ERIC Educational Resources Information Center
Singh, J. N.; Singh, D.
2002-01-01
The paper reviews some recent advances in interior-point methods for linear programming and indicates directions in which future progress can be made. Most of the interior-point methods belong to any of three categories: affine-scaling methods, potential reduction methods and central path methods. These methods are discussed together with…
ALPS - A LINEAR PROGRAM SOLVER
NASA Technical Reports Server (NTRS)
Viterna, L. A.
1994-01-01
Linear programming is a widely-used engineering and management tool. Scheduling, resource allocation, and production planning are all well-known applications of linear programs (LP's). Most LP's are too large to be solved by hand, so over the decades many computer codes for solving LP's have been developed. ALPS, A Linear Program Solver, is a full-featured LP analysis program. ALPS can solve plain linear programs as well as more complicated mixed integer and pure integer programs. ALPS also contains an efficient solution technique for pure binary (0-1 integer) programs. One of the many weaknesses of LP solvers is the lack of interaction with the user. ALPS is a menu-driven program with no special commands or keywords to learn. In addition, ALPS contains a full-screen editor to enter and maintain the LP formulation. These formulations can be written to and read from plain ASCII files for portability. For those less experienced in LP formulation, ALPS contains a problem "parser" which checks the formulation for errors. ALPS creates fully formatted, readable reports that can be sent to a printer or output file. ALPS is written entirely in IBM's APL2/PC product, Version 1.01. The APL2 workspace containing all the ALPS code can be run on any APL2/PC system (AT or 386). On a 32-bit system, this configuration can take advantage of all extended memory. The user can also examine and modify the ALPS code. The APL2 workspace has also been "packed" to be run on any DOS system (without APL2) as a stand-alone "EXE" file, but has limited memory capacity on a 640K system. A numeric coprocessor (80X87) is optional but recommended. The standard distribution medium for ALPS is a 5.25 inch 360K MS-DOS format diskette. IBM, IBM PC and IBM APL2 are registered trademarks of International Business Machines Corporation. MS-DOS is a registered trademark of Microsoft Corporation.
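For readers who want to experiment with the same class of problems today, the sketch below sets up a small integer program of the kind described above using SciPy's milp routine; it is a generic illustration with made-up coefficients, not ALPS itself, and it does not reproduce the revised-simplex, branch-and-bound, or implicit-enumeration code.

```python
# Small integer program of the kind ALPS handles (generic SciPy sketch, not ALPS itself).
# maximize 3x + 2y  subject to  x + y <= 4.5,  2x + y <= 7,  x, y >= 0 and integer.
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

c = np.array([-3.0, -2.0])                       # milp minimizes, so negate to maximize
cons = LinearConstraint(np.array([[1.0, 1.0],
                                  [2.0, 1.0]]), ub=np.array([4.5, 7.0]))
res = milp(c,
           constraints=cons,
           integrality=np.ones(2),               # both variables required to be integer
           bounds=Bounds(lb=np.zeros(2), ub=np.full(2, np.inf)))
print("x, y =", res.x, "objective =", -res.fun)  # expect x = 3, y = 1, objective 11
```

Here the LP relaxation has a fractional optimum, so the integer restriction actually changes the answer, which is the situation branch-and-bound is designed to handle.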
NASA Astrophysics Data System (ADS)
Wu, Bofeng; Huang, Chao-Guang
2018-04-01
The 1 /r expansion in the distance to the source is applied to the linearized f (R ) gravity, and its multipole expansion in the radiation field with irreducible Cartesian tensors is presented. Then, the energy, momentum, and angular momentum in the gravitational waves are provided for linearized f (R ) gravity. All of these results have two parts, which are associated with the tensor part and the scalar part in the multipole expansion of linearized f (R ) gravity, respectively. The former is the same as that in General Relativity, and the latter, as the correction to the result in General Relativity, is caused by the massive scalar degree of freedom and plays an important role in distinguishing General Relativity and f (R ) gravity.
Runtime Analysis of Linear Temporal Logic Specifications
NASA Technical Reports Server (NTRS)
Giannakopoulou, Dimitra; Havelund, Klaus
2001-01-01
This report presents an approach to checking a running program against its Linear Temporal Logic (LTL) specifications. LTL is a widely used logic for expressing properties of programs viewed as sets of executions. Our approach consists of translating LTL formulae to finite-state automata, which are used as observers of the program behavior. The translation algorithm we propose modifies standard LTL to Büchi automata conversion techniques to generate automata that check finite program traces. The algorithm has been implemented in a tool, which has been integrated with the generic JPaX framework for runtime analysis of Java programs.
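As a toy illustration of running a finite-trace observer alongside a program, the sketch below hand-codes a monitor for the response property G(p -> F q), "every request is eventually acknowledged", on a finite trace; it is not the report's LTL-to-automaton translation, and the event names are invented.

```python
# Minimal finite-trace observer for the response property G(p -> F q)
# ("every request p is eventually followed by an acknowledgement q").
# Hand-written two-state monitor, not the LTL-to-automaton translation in the report.

def check_response(trace, p="request", q="ack"):
    pending = False                 # True while some p has not yet been matched by a q
    for event in trace:
        if event == q:
            pending = False
        elif event == p:
            pending = True
    # On a finite trace the property fails iff a request is still pending at the end.
    return not pending

print(check_response(["request", "work", "ack", "request", "ack"]))   # True
print(check_response(["request", "work"]))                            # False
```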
Generalized Bezout's Theorem and its applications in coding theory
NASA Technical Reports Server (NTRS)
Berg, Gene A.; Feng, Gui-Liang; Rao, T. R. N.
1996-01-01
This paper presents a generalized Bezout theorem which can be used to determine a tighter lower bound of the number of distinct points of intersection of two or more curves for a large class of plane curves. A new approach to determine a lower bound on the minimum distance (and also the generalized Hamming weights) for algebraic-geometric codes defined from a class of plane curves is introduced, based on the generalized Bezout theorem. Examples of more efficient linear codes are constructed using the generalized Bezout theorem and the new approach. For d = 4, the linear codes constructed by the new construction are better than or equal to the known linear codes. For d greater than 5, these new codes are better than the known codes. The Klein code over GF(2(sup 3)) is also constructed.
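For reference, the classical theorem being generalized can be stated as follows (the standard textbook bound, not the paper's strengthened version):

```latex
% Classical Bezout bound: two projective plane curves over an algebraically closed field
% with no common component meet in exactly deg(F) * deg(G) points counted with
% multiplicity; without multiplicities this count is an upper bound.
\[
  F, G \in k[x,y,z] \text{ homogeneous},\ \gcd(F,G) = 1
  \;\Longrightarrow\;
  \#\{\, P \in \mathbb{P}^2(\overline{k}) : F(P) = G(P) = 0 \,\} \;\le\; \deg F \cdot \deg G .
\]
```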
Wang, Guanghui; Wu, Wells W; Zeng, Weihua; Chou, Chung-Lin; Shen, Rong-Fong
2006-05-01
A critical step in protein biomarker discovery is the ability to contrast proteomes, a process referred generally as quantitative proteomics. While stable-isotope labeling (e.g., ICAT, 18O- or 15N-labeling, or AQUA) remains the core technology used in mass spectrometry-based proteomic quantification, increasing efforts have been directed to the label-free approach that relies on direct comparison of peptide peak areas between LC-MS runs. This latter approach is attractive to investigators for its simplicity as well as cost effectiveness. In the present study, the reproducibility and linearity of using a label-free approach to highly complex proteomes were evaluated. Various amounts of proteins from different proteomes were subjected to repeated LC-MS analyses using an ion trap or Fourier transform mass spectrometer. Highly reproducible data were obtained between replicated runs, as evidenced by nearly ideal Pearson's correlation coefficients (for ion's peak areas or retention time) and average peak area ratios. In general, more than 50% and nearly 90% of the peptide ion ratios deviated less than 10% and 20%, respectively, from the average in duplicate runs. In addition, the multiplicity ratios of the amounts of proteins used correlated nicely with the observed averaged ratios of peak areas calculated from detected peptides. Furthermore, the removal of abundant proteins from the samples led to an improvement in reproducibility and linearity. A computer program has been written to automate the processing of data sets from experiments with groups of multiple samples for statistical analysis. Algorithms for outlier-resistant mean estimation and for adjusting statistical significance threshold in multiplicity of testing were incorporated to minimize the rate of false positives. The program was applied to quantify changes in proteomes of parental and p53-deficient HCT-116 human cells and found to yield reproducible results. Overall, this study demonstrates an alternative approach that allows global quantification of differentially expressed proteins in complex proteomes. The utility of this method to biomarker discovery is likely to synergize with future improvements in the detecting sensitivity of mass spectrometers.
NASA Astrophysics Data System (ADS)
Haig, Jodie A.; Lambert, Gwladys I.; Sumpton, Wayne D.; Mayer, David G.; Werry, Jonathan M.
2018-01-01
Understanding shark habitat use is vital for informing better ecological management of coastal areas and shark populations. The Queensland Shark Control Program (QSCP) operates over ∼1800 km of Queensland coastline. Between 1996 and 2012, catch, total length and sex were recorded from most of the 1992 bull shark (Carcharhinus leucas) caught on drum lines and gill-nets as part of the QSCP (sex and length was not successfully recorded for all individuals). Gear was set at multiple sites within ten locations. Analysis of monthly catch data resulted in a zero-inflated dataset for the 17 years of records. Five models were trialled for suitability of standardising the bull shark catch per unit effort (CPUE) using available habitat and environmental data. Three separate models for presence-absence and presence-only were run and outputs combined using a delta-lognormal framework for generalized linear and generalized additive models. The delta-lognormal generalized linear model approach resulted in best fit to explain patterns in CPUE. Greater CPUE occurred on drum lines, and greater numbers of bull sharks were caught on both gear types in summer months, with tropical sites, and sites with greater adjacent wetland habitats catching consistently more bull sharks compared to sub-tropical sites. The CPUE data did not support a hypothesis of population decline indicative of coastal overfishing. However, the total length of sharks declined slightly through time for those caught in the tropics; subtropical catches were dominated by females and a large proportion of all bull sharks caught were smaller than the size-at-maturity reported for this species. These factors suggest that growth and sex overfishing of Queensland bull shark populations may be occurring but are not yet detectable in the available data. The data highlight available coastal wetlands, river size, length of coastline and distance to the 50 m depth contour are important for consideration in future whole of lifecycle bull shark management. As concerns for shark populations grow, there is an increasing requirement to collate available data from control programs, fisheries, ecological and research datasets to identify sustainable management options and enable informed stock assessments of bull shark and other threatened shark species.
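The delta-lognormal standardization described above splits the analysis into a presence/absence model and a model for the positive catches; a minimal Python sketch of that structure is given below, using simulated data, invented covariate names, and statsmodels rather than the software used in the study.

```python
# Minimal delta-lognormal CPUE sketch (generic; not the QSCP analysis).
# Part 1: binomial GLM for presence/absence. Part 2: linear model on log(CPUE | CPUE > 0).
# Standardized index = P(presence) * E[CPUE | presence] (lognormal bias correction omitted).
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 500
df = pd.DataFrame({
    "cpue": np.where(rng.random(n) < 0.6, rng.lognormal(0.0, 0.5, n), 0.0),
    "gear": rng.choice(["drumline", "gillnet"], n),
    "season": rng.choice(["summer", "winter"], n),
})
df["present"] = (df["cpue"] > 0).astype(int)

presence = smf.glm("present ~ gear + season", data=df,
                   family=sm.families.Binomial()).fit()
positive = smf.ols("np.log(cpue) ~ gear + season",
                   data=df[df["cpue"] > 0]).fit()

new = pd.DataFrame({"gear": ["drumline"], "season": ["summer"]})
index = presence.predict(new)[0] * np.exp(positive.predict(new)[0])
print("standardized CPUE index (drumline, summer):", round(index, 3))
```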
A program for identification of linear systems
NASA Technical Reports Server (NTRS)
Buell, J.; Kalaba, R.; Ruspini, E.; Yakush, A.
1971-01-01
A program has been written for the identification of parameters in certain linear systems. These systems appear in biomedical problems, particularly in compartmental models of pharmacokinetics. The method presented here assumes that some of the state variables are regularly modified by jump conditions. This simulates administration of drugs following some prescribed drug regime. Parameters are identified by a least-square fit of the linear differential system to a set of experimental observations. The method is especially suited when the interval of observation of the system is very long.
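A minimal modern analogue of this kind of identification, fitting a single rate constant of a one-compartment model with dose "jumps" by least squares, might look like the sketch below; the model, dosing schedule, and data are invented for illustration, and the original program's formulation is not reproduced.

```python
# Sketch: identify the elimination rate k of a one-compartment model dx/dt = -k x
# with instantaneous dosing (jump conditions) by a least-squares fit to observations.
# Model, dosing times, and data are invented for illustration.
import numpy as np
from scipy.optimize import least_squares

dose_times = np.array([0.0, 8.0, 16.0])     # drug given at these times (jump of +1.0 unit)
t_obs = np.linspace(0.5, 24.0, 30)

def concentration(k, t):
    # Superpose exponential decays from each dose administered before time t.
    c = np.zeros_like(t)
    for td in dose_times:
        c += np.where(t >= td, np.exp(-k * (t - td)), 0.0)
    return c

k_true = 0.25
rng = np.random.default_rng(2)
y_obs = concentration(k_true, t_obs) + 0.02 * rng.normal(size=t_obs.size)

fit = least_squares(lambda k: concentration(k[0], t_obs) - y_obs, x0=[0.1])
print("estimated k:", fit.x[0])
```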
Robust Neighboring Optimal Guidance for the Advanced Launch System
NASA Technical Reports Server (NTRS)
Hull, David G.
1993-01-01
In recent years, optimization has become an engineering tool through the availability of numerous successful nonlinear programming codes. Optimal control problems are converted into parameter optimization (nonlinear programming) problems by assuming the control to be piecewise linear, making the unknowns the nodes or junction points of the linear control segments. Once the optimal piecewise linear (suboptimal) control is known, a guidance law for operating near the suboptimal path is the neighboring optimal piecewise linear control (neighboring suboptimal control). Research conducted under this grant has been directed toward the investigation of neighboring suboptimal control as a guidance scheme for an advanced launch system.
Estimating linear temporal trends from aggregated environmental monitoring data
Erickson, Richard A.; Gray, Brian R.; Eager, Eric A.
2017-01-01
Trend estimates are often used as part of environmental monitoring programs. These trends inform managers (e.g., are desired species increasing or undesired species decreasing?). Data collected from environmental monitoring programs is often aggregated (i.e., averaged), which confounds sampling and process variation. State-space models allow sampling variation and process variations to be separated. We used simulated time-series to compare linear trend estimations from three state-space models, a simple linear regression model, and an auto-regressive model. We also compared the performance of these five models to estimate trends from a long term monitoring program. We specifically estimated trends for two species of fish and four species of aquatic vegetation from the Upper Mississippi River system. We found that the simple linear regression had the best performance of all the given models because it was best able to recover parameters and had consistent numerical convergence. Conversely, the simple linear regression did the worst job estimating populations in a given year. The state-space models did not estimate trends well, but estimated population sizes best when the models converged. We found that a simple linear regression performed better than more complex autoregression and state-space models when used to analyze aggregated environmental monitoring data.
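The simple linear regression that performed best in this comparison amounts to regressing the aggregated annual values on year; a minimal sketch with simulated data follows.

```python
# Minimal linear trend estimate from aggregated (site-averaged) annual values.
# Data are simulated for illustration; this is the simple-regression baseline,
# not the state-space or autoregressive models discussed in the abstract.
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(3)
years = np.arange(2000, 2018)
abundance = 50 + 1.2 * (years - years[0]) + rng.normal(0, 4, size=years.size)

fit = linregress(years, abundance)
print(f"trend: {fit.slope:.2f} per year (SE {fit.stderr:.2f}, p = {fit.pvalue:.3g})")
```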
Measured and predicted structural behavior of the HiMAT tailored composite wing
NASA Technical Reports Server (NTRS)
Nelson, Lawrence H.
1987-01-01
A series of load tests was conducted on the HiMAT tailored composite wing. Coupon tests were also run on a series of unbalanced laminates, including the ply configuration of the wing, the purpose of which was to compare the measured and predicted behavior of unbalanced laminates, including - in the case of the wing - a comparison between the behavior of the full scale structure and coupon tests. Both linear and nonlinear finite element (NASTRAN) analyses were carried out on the wing. Both linear and nonlinear point-stress analyses were performed on the coupons. All test articles were instrumented with strain gages, and wing deflections measured. The leading and trailing edges were found to have no effect on the response of the wing to applied loads. A decrease in the stiffness of the wing box was evident over the 27-test program. The measured load-strain behavior of the wing was found to be linear, in contrast to coupon tests of the same laminate, which were nonlinear. A linear NASTRAN analysis of the wing generally correlated more favorably with measurements than did a nonlinear analysis. An examination of the predicted deflections in the wing root region revealed an anomalous behavior of the structural model that cannot be explained. Both hysteresis and creep appear to be less significant in the wing tests than in the corresponding laminate coupon tests.
Fault detection and initial state verification by linear programming for a class of Petri nets
NASA Technical Reports Server (NTRS)
Rachell, Traxon; Meyer, David G.
1992-01-01
The authors present an algorithmic approach to determining when the marking of a LSMG (live safe marked graph) or a LSFC (live safe free choice) net is in the set of live safe markings M. Hence, once the marking of a net is determined to be in M, then if at some time thereafter the marking of this net is determined not to be in M, this indicates a fault. It is shown how linear programming can be used to determine if m is an element of M. The worst-case computational complexity of each algorithm is bounded by the number of linear programs necessary to compute.
Two algorithms for neural-network design and training with application to channel equalization.
Sweatman, C Z; Mulgrew, B; Gibson, G J
1998-01-01
We describe two algorithms for designing and training neural-network classifiers. The first, the linear programming slab algorithm (LPSA), is motivated by the problem of reconstructing digital signals corrupted by passage through a dispersive channel and by additive noise. It constructs a multilayer perceptron (MLP) to separate two disjoint sets by using linear programming methods to identify network parameters. The second, the perceptron learning slab algorithm (PLSA), avoids the computational costs of linear programming by using an error-correction approach to identify parameters. Both algorithms operate in highly constrained parameter spaces and are able to exploit symmetry in the classification problem. Using these algorithms, we develop a number of procedures for the adaptive equalization of a complex linear 4-quadrature amplitude modulation (QAM) channel, and compare their performance in a simulation study. Results are given for both stationary and time-varying channels, the latter based on the COST 207 GSM propagation model.
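The linear-programming step in an approach like the LPSA, finding a hyperplane that separates two disjoint point sets, can be sketched as the feasibility LP below; this is a generic illustration with synthetic data, not the published algorithm.

```python
# Generic LP for separating two disjoint point sets with a hyperplane w.x + b
# (illustrates the kind of subproblem LPSA solves; not the published algorithm).
# Feasibility form: w.x_i + b >= 1 for class +1, w.x_j + b <= -1 for class -1.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(4)
A_pts = rng.normal(loc=[+2, +2], scale=0.5, size=(20, 2))   # class +1
B_pts = rng.normal(loc=[-2, -2], scale=0.5, size=(20, 2))   # class -1

# Variables z = [w1, w2, b]; constraints written as A_ub @ z <= b_ub.
A_ub = np.vstack([np.hstack([-A_pts, -np.ones((len(A_pts), 1))]),   # -(w.x + b) <= -1
                  np.hstack([ B_pts,  np.ones((len(B_pts), 1))])])  #   w.x + b  <= -1
b_ub = -np.ones(len(A_pts) + len(B_pts))
res = linprog(c=np.zeros(3), A_ub=A_ub, b_ub=b_ub, bounds=(None, None))
w, b = res.x[:2], res.x[2]
print("separating hyperplane:", w, b, "feasible:", res.success)
```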
A Few New 2+1-Dimensional Nonlinear Dynamics and the Representation of Riemann Curvature Tensors
NASA Astrophysics Data System (ADS)
Wang, Yan; Zhang, Yufeng; Zhang, Xiangzhi
2016-09-01
We first introduce a linear stationary equation with a quadratic operator in ∂x and ∂y; a linear evolution equation is then given by N-order polynomials of eigenfunctions. As applications, taking N=2, we derive a (2+1)-dimensional generalized linear heat equation with two constant parameters, associated with a symmetric space. Taking N=3, a pair of generalized Kadomtsev-Petviashvili equations with the same eigenvalues as in the case N=2 is generated. Similarly, a second-order flow associated with a homogeneous space is derived from the integrability condition of the two linear equations, which is a (2+1)-dimensional hyperbolic equation. When N=3, the third flow associated with the homogeneous space is generated, which is a pair of new generalized Kadomtsev-Petviashvili equations. Finally, as an application of a Hermitian symmetric space, we establish a pair of spectral problems to obtain a new (2+1)-dimensional generalized Schrödinger equation, which is expressed in terms of the Riemann curvature tensors.
Linear programming computational experience with onyx
DOE Office of Scientific and Technical Information (OSTI.GOV)
Atrek, E.
1994-12-31
ONYX is a linear programming software package based on an efficient variation of the gradient projection method. When fully configured, it is intended for application to industrial size problems. While the computational experience is limited at the time of this abstract, the technique is found to be robust and competitive with existing methodology in terms of both accuracy and speed. An overview of the approach is presented together with a description of program capabilities, followed by a discussion of up-to-date computational experience with the program. Conclusions include advantages of the approach and envisioned future developments.
STAR adaptation of QR algorithm. [program for solving over-determined systems of linear equations
NASA Technical Reports Server (NTRS)
Shah, S. N.
1981-01-01
The QR algorithm used on a serial computer and executed on the Control Data Corporation 6000 Computer was adapted to execute efficiently on the Control Data STAR-100 computer. How the scalar program was adapted for the STAR-100 and why these adaptations yielded an efficient STAR program is described. Program listings of the old scalar version and the vectorized SL/1 version are presented in the appendices. Execution times for the two versions, applied to the same system of linear equations, are compared.
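The underlying computation, solving an over-determined system in the least-squares sense via a QR factorization, can be sketched in a few lines; this is a generic NumPy/SciPy illustration, not the CDC 6000 or STAR-100 code.

```python
# Least-squares solution of an over-determined system A x ~= b via QR
# (generic illustration of the algorithm, not the STAR-100 program itself).
import numpy as np
from scipy.linalg import solve_triangular

rng = np.random.default_rng(5)
A = rng.normal(size=(100, 4))          # 100 equations, 4 unknowns
x_true = np.array([1.0, -2.0, 0.5, 3.0])
b = A @ x_true + 0.01 * rng.normal(size=100)

Q, R = np.linalg.qr(A)                 # reduced QR: A = Q R with Q 100x4, R 4x4
x = solve_triangular(R, Q.T @ b)       # back-substitution on R x = Q^T b
print(x, np.allclose(x, np.linalg.lstsq(A, b, rcond=None)[0]))
```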
Control of Distributed Parameter Systems
1990-08-01
...variant of the general Lotka-Volterra model for interspecific competition. The variant described the emergence of one subpopulation from another as a... ...A unified approximation framework for parameter estimation in general linear PDE models has been completed... ...unified approximation framework for parameter estimation in general linear PDE models. This framework has provided the theoretical basis for a number of...
Optimal Facility Location Tool for Logistics Battle Command (LBC)
2015-08-01
Contents excerpt: Appendix B, VBA Code; Appendix C, Story... Excerpts: ...should city planners have located emergency service facilities so that all households (the demand) had equal access to coverage?" The critical... ...programming language called Visual Basic for Applications (VBA). CPLEX is a commercial solver for linear, integer, and mixed integer linear programming problems...
Fitting program for linear regressions according to Mahon (1996)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Trappitsch, Reto G.
2018-01-09
This program takes the user's input data and fits a linear regression to it using the prescription presented by Mahon (1996). Compared with the commonly used York fit, this method uses the correct prescription for propagating measurement errors. The software should facilitate proper fitting of such measurements through a simple interface.
USDA-ARS?s Scientific Manuscript database
Ready-to-use therapeutic food (RUTF) is the standard of care for children suffering from noncomplicated severe acute malnutrition (SAM). The objective was to develop a comprehensive linear programming (LP) tool to create novel RUTF formulations for Ethiopia. A systematic approach that surveyed inter...
Secret Message Decryption: Group Consulting Projects Using Matrices and Linear Programming
ERIC Educational Resources Information Center
Gurski, Katharine F.
2009-01-01
We describe two short group projects for finite mathematics students that incorporate matrices and linear programming into fictional consulting requests presented as a letter to the students. The students are required to use mathematics to decrypt secret messages in one project involving matrix multiplication and inversion. The second project…
LCPT: a program for finding linear canonical transformations. [In MACSYMA
DOE Office of Scientific and Technical Information (OSTI.GOV)
Char, B.W.; McNamara, B.
This article describes a MACSYMA program to compute symbolically a canonical linear transformation between coordinate systems. The difficulties in implementation of this canonical small physics problem are also discussed, along with the implications that may be drawn from such difficulties about widespread MACSYMA usage by the community of computational/theoretical physicists.
ERIC Educational Resources Information Center
Mills, James W.; And Others
1973-01-01
The Study reported here tested an application of the Linear Programming Model at the Reading Clinic of Drew University. Results, while not conclusive, indicate that this approach yields greater gains in speed scores than a traditional approach for this population. (Author)
Visual, Algebraic and Mixed Strategies in Visually Presented Linear Programming Problems.
ERIC Educational Resources Information Center
Shama, Gilli; Dreyfus, Tommy
1994-01-01
Identified and classified solution strategies of (n=49) 10th-grade students who were presented with linear programming problems in a predominantly visual setting in the form of a computerized game. Visual strategies were developed more frequently than either algebraic or mixed strategies. Appendix includes questionnaires. (Contains 11 references.)…
A model for managing sources of groundwater pollution
Gorelick, Steven M.
1982-01-01
The waste disposal capacity of a groundwater system can be maximized while maintaining water quality at specified locations by using a groundwater pollutant source management model that is based upon linear programming and numerical simulation. The decision variables of the management model are solute waste disposal rates at various facilities distributed over space. A concentration response matrix is used in the management model to describe transient solute transport and is developed using the U.S. Geological Survey solute transport simulation model. The management model was applied to a complex hypothetical groundwater system. Large-scale management models were formulated as dual linear programming problems to reduce numerical difficulties and computation time. The linear programming problems were solved using an available, numerically stable code. Optimal solutions to problems with successively longer management time horizons indicated that disposal schedules at some sites are relatively independent of the number of disposal periods. Optimal waste disposal schedules exhibited pulsing rather than constant disposal rates. Sensitivity analysis using parametric linear programming showed that a sharp reduction in total waste disposal potential occurs if disposal rates at any site are increased beyond their optimal values.
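Stripped to its structure, the management model is an ordinary LP: maximize total disposal subject to response-matrix predictions of concentration staying below standards. The toy sketch below uses invented numbers; in the real model the response coefficients come from the solute transport simulation.

```python
# Toy version of the LP structure described above: maximize total waste disposal subject
# to simulated concentration responses staying below water-quality limits.
# The response matrix, limits, and capacities are invented for illustration.
import numpy as np
from scipy.optimize import linprog

# R[i, j] = concentration increase at observation well i per unit disposal rate at site j
R = np.array([[0.8, 0.1, 0.3],
              [0.2, 0.9, 0.4],
              [0.1, 0.2, 0.7]])
c_limit = np.array([10.0, 12.0, 8.0])        # water-quality standards at the wells
site_cap = 15.0                               # per-site disposal capacity

res = linprog(c=-np.ones(3),                  # maximize total disposal (negate for linprog)
              A_ub=R, b_ub=c_limit,
              bounds=(0, site_cap))
print("optimal disposal rates:", res.x.round(2), "total:", round(-res.fun, 2))
```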
NASA Technical Reports Server (NTRS)
Rankin, C. C.
1988-01-01
A consistent linearization is provided for the element-dependent corotational formulation, providing the proper first and second variation of the strain energy. As a result, the warping problem that has plagued flat elements has been overcome, with beneficial effects carried over to linear solutions. True Newton quadratic convergence has been restored to the Structural Analysis of General Shells (STAGS) code for conservative loading using the full corotational implementation. Some implications for general finite element analysis are discussed, including what effect the automatic frame invariance provided by this work might have on the development of new, improved elements.
Pei, Soo-Chang; Ding, Jian-Jiun
2005-03-01
Prolate spheroidal wave functions (PSWFs) are known to be useful for analyzing the properties of the finite-extension Fourier transform (fi-FT). We extend the theory of PSWFs for the finite-extension fractional Fourier transform, the finite-extension linear canonical transform, and the finite-extension offset linear canonical transform. These finite transforms are more flexible than the fi-FT and can model much more generalized optical systems. We also illustrate how to use the generalized prolate spheroidal functions we derive to analyze the energy-preservation ratio, the self-imaging phenomenon, and the resonance phenomenon of the finite-sized one-stage or multiple-stage optical systems.
Ghadie, Mohamed A; Japkowicz, Nathalie; Perkins, Theodore J
2015-08-15
Stem cell differentiation is largely guided by master transcriptional regulators, but it also depends on the expression of other types of genes, such as cell cycle genes, signaling genes, metabolic genes, trafficking genes, etc. Traditional approaches to understanding gene expression patterns across multiple conditions, such as principal components analysis or K-means clustering, can group cell types based on gene expression, but they do so without knowledge of the differentiation hierarchy. Hierarchical clustering can organize cell types into a tree, but in general this tree is different from the differentiation hierarchy itself. Given the differentiation hierarchy and gene expression data at each node, we construct a weighted Euclidean distance metric such that the minimum spanning tree with respect to that metric is precisely the given differentiation hierarchy. We provide a set of linear constraints that are provably sufficient for the desired construction and a linear programming approach to identify sparse sets of weights, effectively identifying genes that are most relevant for discriminating different parts of the tree. We apply our method to microarray gene expression data describing 38 cell types in the hematopoiesis hierarchy, constructing a weighted Euclidean metric that uses just 175 genes. However, we find that there are many alternative sets of weights that satisfy the linear constraints. Thus, in the style of random-forest training, we also construct metrics based on random subsets of the genes and compare them to the metric of 175 genes. We then report on the selected genes and their biological functions. Our approach offers a new way to identify genes that may have important roles in stem cell differentiation. tperkins@ohri.ca Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
Linearized self-consistent quasiparticle GW method: Application to semiconductors and simple metals
NASA Astrophysics Data System (ADS)
Kutepov, A. L.; Oudovenko, V. S.; Kotliar, G.
2017-10-01
We present a code implementing the linearized quasiparticle self-consistent GW method (LQSGW) in the LAPW basis. Our approach is based on the linearization of the self-energy around zero frequency, which distinguishes it from the existing implementations of the QSGW method. The linearization allows us to use Matsubara frequencies instead of working on the real axis. This results in efficiency gains by switching to the imaginary time representation in the same way as in the space time method. The all-electron LAPW basis set eliminates the need for pseudopotentials. We discuss the advantages of our approach, such as its N^3 scaling with the system size N, as well as its shortcomings. We apply our approach to study the electronic properties of selected semiconductors, insulators, and simple metals and show that our code produces results very close to the previously published QSGW data. Our implementation is a good platform for further many-body diagrammatic resummations such as the vertex-corrected GW approach and the GW+DMFT method.
Program Files doi: http://dx.doi.org/10.17632/cpchkfty4w.1
Licensing provisions: GNU General Public License
Programming language: Fortran 90
External routines/libraries: BLAS, LAPACK, MPI (optional)
Nature of problem: Direct implementation of the GW method scales as N^4 with the system size, which quickly becomes prohibitively time consuming even on modern computers.
Solution method: We implemented the GW approach using a method that switches between real-space and momentum-space representations. Some operations are faster in real space, whereas others are more computationally efficient in reciprocal space. This makes our approach scale as N^3.
Restrictions: The limiting factor is usually the memory available in a computer. Using 10 GB/core of memory allows us to study systems with up to 15 atoms per unit cell.
Nutrient density score of typical Indonesian foods and dietary formulation using linear programming.
Jati, Ignasius Radix A P; Vadivel, Vellingiri; Nöhr, Donatus; Biesalski, Hans Konrad
2012-12-01
The present research aimed to analyse the nutrient density (ND), nutrient adequacy score (NAS) and energy density (ED) of Indonesian foods and to formulate a balanced diet using linear programming. Data on typical Indonesian diets were obtained from the Indonesian Socio-Economic Survey 2008. ND was investigated for 122 Indonesian foods. NAS was calculated for single nutrients such as Fe, Zn and vitamin A. Correlation analysis was performed between ND and ED, as well as between monthly expenditure class and food consumption pattern in Indonesia. Linear programming calculations were performed using the software POM-QM for Windows version 3. Republic of Indonesia, 2008. Public households (n 68 800). Vegetables had the highest ND of the food groups, followed by animal-based foods, fruits and staple foods. Based on NAS, the top ten food items for each food group were identified. Most of the staple foods had high ED and contributed towards daily energy fulfillment, followed by animal-based foods, vegetables and fruits. Commodities with high ND tended to have low ED. Linear programming could be used to formulate a balanced diet. In contrast to staple foods, purchases of fruit, vegetables and animal-based foods increased with the rise of monthly expenditure. People should select food items based on ND and NAS to alleviate micronutrient deficiencies in Indonesia. Dietary formulation calculated using linear programming to achieve RDA levels for micronutrients could be recommended for different age groups of the Indonesian population.
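The diet-formulation step can be pictured as a textbook cost-minimizing linear program: choose food quantities that meet nutrient targets at minimum cost. The sketch below uses scipy.optimize.linprog with invented costs, nutrient contents and targets purely for illustration; it is not the POM-QM model or the survey data used in the study.

import numpy as np
from scipy.optimize import linprog

# Illustrative food data (per 100 g): cost and three nutrients.
foods = ["rice", "tempeh", "spinach", "egg"]
cost = np.array([0.05, 0.20, 0.10, 0.30])          # arbitrary currency units
nutrients = np.array([                              # rows: energy (kcal), iron (mg), vitamin A (RE)
    [130, 190,  23, 155],
    [0.4, 2.7, 2.7, 1.2],
    [0,    0, 469, 160],
])
rda = np.array([1800, 10, 400])                     # illustrative daily targets

# Minimize cost subject to meeting each nutrient target (linprog uses <=, so negate).
res = linprog(c=cost, A_ub=-nutrients, b_ub=-rda,
              bounds=[(0, 10)] * len(foods), method="highs")
print(dict(zip(foods, np.round(res.x, 2))), "cost:", round(res.fun, 2))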
Maillot, Matthieu; Ferguson, Elaine L; Drewnowski, Adam; Darmon, Nicole
2008-06-01
Nutrient profiling ranks foods based on their nutrient content; such profiles may help identify foods with a good nutritional quality for their price. This hypothesis was tested using diet modeling with linear programming. Analyses were undertaken using food intake data from the nationally representative French INCA (enquête Individuelle et Nationale sur les Consommations Alimentaires) survey and its associated food composition and price database. For each food, a nutrient profile score was defined as the ratio between the previously published nutrient density score (NDS) and the limited nutrient score (LIM); a nutritional quality for price indicator was developed and calculated from the relationship between its NDS:LIM and energy cost (in euro/100 kcal). We developed linear programming models to design diets that fulfilled increasing levels of nutritional constraints at a minimal cost. The median NDS:LIM values of foods selected in modeled diets increased as the levels of nutritional constraints increased (P = 0.005). In addition, the proportion of foods with a good nutritional quality for price indicator was higher (P < 0.0001) among foods selected (81%) than among foods not selected (39%) in modeled diets. This agreement between the linear programming and the nutrient profiling approaches indicates that nutrient profiling can help identify foods of good nutritional quality for their price. Linear programming is a useful tool for testing nutrient profiling systems and validating the concept of nutrient profiling.
De Carvalho, Irene Stuart Torrié; Granfeldt, Yvonne; Dejmek, Petr; Håkansson, Andreas
2015-03-01
Linear programming has been used extensively as a tool for nutritional recommendations. Extending the methodology to food formulation presents new challenges, since not all combinations of nutritious ingredients will produce an acceptable food. Furthermore, it would help in implementation and in ensuring the feasibility of the suggested recommendations. To extend the previously used linear programming methodology from diet optimization to food formulation using consistency constraints. In addition, to exemplify usability using the case of a porridge mix formulation for emergency situations in rural Mozambique. The linear programming method was extended with a consistency constraint based on previously published empirical studies on swelling of starch in soft porridges. The new method was exemplified using the formulation of a nutritious, minimum-cost porridge mix for children aged 1 to 2 years for use as a complete relief food, based primarily on local ingredients, in rural Mozambique. A nutritious porridge fulfilling the consistency constraints was found; however, the minimum cost was unfeasible with local ingredients only. This illustrates the challenges in formulating nutritious yet economically feasible foods from local ingredients. The high cost was caused by the high cost of mineral-rich foods. A nutritious, low-cost porridge that fulfills the consistency constraints was obtained by including supplements of zinc and calcium salts as ingredients. The optimizations were successful in fulfilling all constraints and provided a feasible porridge, showing that the extended constrained linear programming methodology provides a systematic tool for designing nutritious foods.
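The methodological extension described here amounts to adding one more linear row to the diet LP: a consistency (viscosity-related) constraint expressed through an ingredient property such as starch content. The fragment below is a hedged sketch of that idea; the ingredient list, nutrient values, the starch proxy and its cap are all illustrative assumptions, not the empirical swelling relationship used by the authors.

import numpy as np
from scipy.optimize import linprog

# Illustrative ingredients (per 100 g): cost, energy, protein, zinc, starch.
cost    = np.array([0.04, 0.12, 0.25, 0.50])        # maize, bean, dried fish, zinc premix
energy  = np.array([360, 340, 300, 0])              # kcal
protein = np.array([9, 22, 60, 0])                  # g
zinc    = np.array([2, 3, 4, 800])                  # mg
starch  = np.array([70, 45, 0, 0])                  # g, stand-in driver of porridge viscosity

targets = {"energy": 400, "protein": 12, "zinc": 4} # per serving, illustrative
max_starch = 30                                     # stand-in consistency limit per serving

A_ub = np.vstack([-energy, -protein, -zinc, starch])        # nutrients >= targets, starch <= cap
b_ub = np.array([-targets["energy"], -targets["protein"], -targets["zinc"], max_starch])

res = linprog(c=cost, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 2)] * 4, method="highs")
print("amounts (100 g units):", np.round(res.x, 2), "cost:", round(res.fun, 3))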
Fiber Optic Control System Integration program: for optical flight control system development
NASA Astrophysics Data System (ADS)
Weaver, Thomas L.; Seal, Daniel W.
1994-10-01
Hardware and software were developed for optical feedback links in the flight control system of an F/A-18 aircraft. Developments included passive optical sensors and optoelectronics to operate the sensors. Sensors with different methods of operation were obtained from different manufacturers and integrated with common optoelectronics. The sensors were the following: Air Data Temperature; Air Data Pressure; and Leading Edge Flap, Nose Wheel Steering, Trailing Edge Flap, Pitch Stick, Rudder, Rudder Pedal, Stabilator, and Engine Power Lever Control Position. The sensors were built for a variety of aircraft locations and harsh environments. The sensors and optoelectronics were as similar as practical to a production system. The integrated system was installed by NASA for flight testing. Wavelength Division Multiplexing proved successful as a system design philosophy. Some sensors appeared to be better choices for aircraft applications than others, with digital sensors generally being better than analog sensors, and rotary sensors generally being better than linear sensors. The most successful sensor approaches were selected for use in a follow-on program in which the sensors will not just be flown on the aircraft and their performance recorded; but, the optical sensors will be used in closing flight control loops.
Parameter estimation in spiking neural networks: a reverse-engineering approach.
Rostro-Gonzalez, H; Cessac, B; Vieville, T
2012-04-01
This paper presents a reverse-engineering approach for parameter estimation in spiking neural networks (SNNs). We consider the deterministic evolution of a time-discretized network with spiking neurons, where synaptic transmission has delays, modeled as a neural network of the generalized integrate-and-fire type. Our approach aims at by-passing the fact that parameter estimation in SNNs with delays is a non-deterministic polynomial-time hard problem. Here, the problem has been reformulated as a linear programming (LP) problem so that the solution can be obtained in polynomial time. Besides, the LP formulation makes explicit the fact that the reverse engineering of a neural network can be performed from the observation of the spike times. Furthermore, we point out how the LP adjustment mechanism is local to each neuron and has the same structure as a 'Hebbian' rule. Finally, we present a generalization of this approach to the design of input-output (I/O) transformations as a practical method to 'program' a spiking network, i.e. find a set of parameters allowing us to exactly reproduce the network output, given an input. Numerical verifications and illustrations are provided.
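The LP reformulation can be sketched directly: if the membrane potential is linear in the unknown delayed weights, then each observed spike imposes a "potential above threshold" inequality and each silence a "potential below threshold" inequality, so any feasible weight vector reproduces the raster. The Python sketch below illustrates this for one neuron of a toy generalized integrate-and-fire network without a leak term; the raster generation, threshold, margin and L1 objective are simplifying assumptions, not the authors' exact formulation.

import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
T, N, D = 80, 3, 2                       # time steps, neurons, synaptic delays
theta, eps = 1.0, 1e-6                   # threshold and strict-inequality margin

# Generate a toy raster from a ground-truth delayed network so the LP is feasible.
w_true = rng.normal(0.0, 0.8, size=(N, N, D))
spikes = np.zeros((T, N))
spikes[:D] = (rng.random((D, N)) < 0.5)
for t in range(D, T):
    hist = spikes[t - 1 - np.arange(D)]              # hist[d, j] = spike of j at lag d+1
    V = np.einsum('ijd,dj->i', w_true, hist)
    spikes[t] = (V >= theta).astype(float)

# Estimate the incoming weights of neuron i alone from the observed spike times.
i = 0
rows, rhs = [], []
for t in range(D, T):
    x = spikes[t - 1 - np.arange(D)].T.reshape(-1)   # (j, d)-ordered regressor
    if spikes[t, i]:
        rows.append(-x); rhs.append(-theta)          # spike observed: V_i(t) >= theta
    else:
        rows.append(x); rhs.append(theta - eps)      # no spike: V_i(t) <= theta - eps

n = N * D
res = linprog(c=np.ones(2 * n),                      # minimize sum |w| with w = p - m
              A_ub=np.hstack([np.array(rows), -np.array(rows)]),
              b_ub=np.array(rhs),
              bounds=[(0, None)] * (2 * n), method="highs")
print("feasible:", res.success)
if res.success:
    # Any feasible point reproduces neuron 0's spike train; it need not equal w_true.
    print("recovered row-0 weights:", np.round((res.x[:n] - res.x[n:]).reshape(N, D), 2))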
Krylov Subspace Methods for Complex Non-Hermitian Linear Systems. Thesis
NASA Technical Reports Server (NTRS)
Freund, Roland W.
1991-01-01
We consider Krylov subspace methods for the solution of large sparse linear systems Ax = b with complex non-Hermitian coefficient matrices. Such linear systems arise in important applications, such as inverse scattering, numerical solution of time-dependent Schrodinger equations, underwater acoustics, eddy current computations, numerical computations in quantum chromodynamics, and numerical conformal mapping. Typically, the resulting coefficient matrices A exhibit special structures, such as complex symmetry, or they are shifted Hermitian matrices. In this paper, we first describe a Krylov subspace approach with iterates defined by a quasi-minimal residual property, the QMR method, for solving general complex non-Hermitian linear systems. Then, we study special Krylov subspace methods designed for the two families of complex symmetric and shifted Hermitian linear systems, respectively. We also include some results concerning the obvious approach to general complex linear systems by solving equivalent real linear systems for the real and imaginary parts of x. Finally, numerical experiments for linear systems arising from the complex Helmholtz equation are reported.
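As a small illustration of the setting, the fragment below assembles a toy complex non-Hermitian (tridiagonal, diagonally dominant) system and solves it with a Krylov method from SciPy. GMRES is used only as a readily available stand-in for the QMR-type iterations studied in the report; the matrix and right-hand side are invented for the example.

import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import gmres

# Toy complex non-Hermitian tridiagonal system, made diagonally dominant so that
# an unpreconditioned Krylov solver converges quickly in this illustration.
n = 500
A = sp.diags([np.ones(n - 1), (4 + 2j) * np.ones(n), -np.ones(n - 1)],
             offsets=[-1, 0, 1], format="csr")
b = np.ones(n, dtype=complex)

# GMRES builds a Krylov subspace for general complex non-Hermitian systems and
# serves here as a stand-in for the QMR-type methods discussed in the report.
x, info = gmres(A, b)
print("exit flag:", info, "  ||Ax - b|| =", np.linalg.norm(A @ x - b))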
Users manual for flight control design programs
NASA Technical Reports Server (NTRS)
Nalbandian, J. Y.
1975-01-01
Computer programs for the design of analog and digital flight control systems are documented. The program DIGADAPT uses linear-quadratic-gaussian synthesis algorithms in the design of command response controllers and state estimators, and it applies covariance propagation analysis to the selection of sampling intervals for digital systems. Program SCHED executes correlation and regression analyses for the development of gain and trim schedules to be used in open-loop explicit-adaptive control laws. A linear-time-varying simulation of aircraft motions is provided by the program TVHIS, which includes guidance and control logic, as well as models for control actuator dynamics. The programs are coded in FORTRAN and are compiled and executed on both IBM and CDC computers.
Transportation optimization with fuzzy trapezoidal numbers based on possibility theory.
He, Dayi; Li, Ran; Huang, Qi; Lei, Ping
2014-01-01
In this paper, a parametric method is introduced to solve the fuzzy transportation problem. Considering that the parameters of the transportation problem are subject to uncertainty, this paper develops a generalized fuzzy transportation problem with fuzzy supply, demand and cost. For simplicity, these parameters are assumed to be fuzzy trapezoidal numbers. Based on possibility theory and consistent with decision-makers' subjectiveness and practical requirements, the fuzzy transportation problem is transformed into a crisp linear transportation problem by defuzzifying the fuzzy constraints and objectives with the fractile and modality approaches. Finally, a numerical example is provided to exemplify the application of fuzzy transportation programming and to verify the validity of the proposed methods.
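Once the fuzzy data have been defuzzified, what remains is a crisp transportation LP with the usual supply and demand equalities. The sketch below uses a simple centroid of each trapezoidal cost as a stand-in for the paper's fractile/modality defuzzification and solves the resulting problem with scipy.optimize.linprog; all numbers are illustrative.

import numpy as np
from scipy.optimize import linprog

# Trapezoidal fuzzy unit costs (a1, a2, a3, a4) for 2 sources x 3 destinations.
fuzzy_cost = np.array([
    [(3, 4, 5, 6), (6, 7, 8, 10), (2, 3, 3, 4)],
    [(4, 5, 6, 7), (3, 4, 5, 6),  (7, 8, 9, 11)],
], dtype=float)

# Simple centroid defuzzification (a stand-in for the fractile/modality approach).
cost = fuzzy_cost.mean(axis=2)

supply = np.array([40.0, 60.0])      # crisp representative supplies
demand = np.array([30.0, 30.0, 40.0])

m, n = cost.shape
# Equality constraints: row sums equal supplies, column sums equal demands.
A_eq = np.zeros((m + n, m * n))
for i in range(m):
    A_eq[i, i * n:(i + 1) * n] = 1
for j in range(n):
    A_eq[m + j, j::n] = 1
b_eq = np.concatenate([supply, demand])

res = linprog(c=cost.ravel(), A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * (m * n), method="highs")
print(res.x.reshape(m, n), "total cost:", round(res.fun, 2))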
Wu, Jibo
2016-01-01
In this article, a generalized difference-based ridge estimator is proposed for the vector parameter in a partial linear model when the errors are dependent. It is supposed that some additional linear constraints may hold on the whole parameter space. The estimator's mean-squared error matrix is compared with that of the generalized restricted difference-based estimator. Finally, the performance of the new estimator is illustrated by a simulation study and a numerical example.
On Generalizations of Cochran’s Theorem and Projection Matrices.
1980-08-01
A diffusion model of protected population on bilocal habitat with generalized resource
NASA Astrophysics Data System (ADS)
Vasilyev, Maxim D.; Trofimtsev, Yuri I.; Vasilyeva, Natalya V.
2017-11-01
A model of population distribution in a two-dimensional area divided by an ecological barrier, i.e. the boundaries of a natural reserve, is considered. The distribution of the population is governed by diffusion, directed migrations and the areal resource. An exchange of specimens occurs between the two parts of the habitat. The mathematical model is presented in the form of a boundary value problem for a system of non-linear parabolic equations with variable parameters of diffusion and growth function. Splitting of the space variables, the sweep method and simple iteration methods were used for the numerical solution of the system. A set of programs was coded in Python. Numerical simulation results for the two-dimensional unsteady non-linear problem are analyzed in detail. The influence of migration flow coefficients and of the natural birth/death ratio functions on the distributions of population densities is investigated. The results of the research make it possible to describe the conditions for the stable and sustainable existence of populations in a bilocal habitat containing protected and non-protected zones.
NASA Astrophysics Data System (ADS)
Gorelick, Steven M.; Voss, Clifford I.; Gill, Philip E.; Murray, Walter; Saunders, Michael A.; Wright, Margaret H.
1984-04-01
A simulation-management methodology is demonstrated for the rehabilitation of aquifers that have been subjected to chemical contamination. Finite element groundwater flow and contaminant transport simulation are combined with nonlinear optimization. The model is capable of determining well locations plus pumping and injection rates for groundwater quality control. Examples demonstrate linear or nonlinear objective functions subject to linear and nonlinear simulation and water management constraints. Restrictions can be placed on hydraulic heads, stresses, and gradients, in addition to contaminant concentrations and fluxes. These restrictions can be distributed over space and time. Three design strategies are demonstrated for an aquifer that is polluted by a constant contaminant source: they are pumping for contaminant removal, water injection for in-ground dilution, and a pumping, treatment, and injection cycle. A transient model designs either contaminant plume interception or in-ground dilution so that water quality standards are met. The method is not limited to these cases. It is generally applicable to the optimization of many types of distributed parameter systems.
Correlates and Predictors of Psychological Distress Among Older Asian Immigrants in California.
Chang, Miya; Moon, Ailee
2016-01-01
Psychological distress occurs frequently in older minority immigrants because many have limited social resources and undergo a difficult process related to immigration and acculturation. Despite a rapid increase in the number of Asian immigrants, relatively little research has focused on subgroup mental health comparisons. This study examines the prevalence of psychological distress and its relationship with socio-demographic factors and health care utilization among older Asian immigrants. Weighted data from Asian immigrants 65 and older from 5 countries (n = 1,028) who participated in the California Health Interview Survey (CHIS) were analyzed descriptively and in multiple linear regressions. The prevalence of psychological distress varied significantly across the 5 ethnic groups, from Filipinos (4.83%) to Chinese (1.64%). General health status, cognitive and physical impairment, and health care utilization are all associated (p < .05) with psychological distress in multiple linear regressions. These findings are similar to those from previous studies. The findings reinforce the need to develop more culturally effective mental health services and outreach programs.
Wagner, Tyler; Irwin, Brian J.; James R. Bence,; Daniel B. Hayes,
2016-01-01
Monitoring to detect temporal trends in biological and habitat indices is a critical component of fisheries management. Thus, it is important that management objectives are linked to monitoring objectives. This linkage requires a definition of what constitutes a management-relevant “temporal trend.” It is also important to develop expectations for the amount of time required to detect a trend (i.e., statistical power) and for choosing an appropriate statistical model for analysis. We provide an overview of temporal trends commonly encountered in fisheries management, review published studies that evaluated statistical power of long-term trend detection, and illustrate dynamic linear models in a Bayesian context, as an additional analytical approach focused on shorter term change. We show that monitoring programs generally have low statistical power for detecting linear temporal trends and argue that often management should be focused on different definitions of trends, some of which can be better addressed by alternative analytical approaches.
Equicontrollability and the model following problem
NASA Technical Reports Server (NTRS)
Curran, R. T.
1971-01-01
Equicontrollability and its application to the linear time-invariant model-following problem are discussed. The problem is presented in the form of two systems, the plant and the model. The requirement is to find a controller to apply to the plant so that the resultant compensated plant behaves, in an input-output sense, the same as the model. All systems are assumed to be linear and time-invariant. The basic approach is to find suitable equicontrollable realizations of the plant and model and to utilize feedback so as to produce a controller of minimal state dimension. The concept of equicontrollability is a generalization of control canonical (phase variable) form applied to multivariable systems. It allows one to visualize clearly the effects of feedback and to pinpoint the parameters of a multivariable system which are invariant under feedback. The basic contributions are the development of equicontrollable form; solution of the model-following problem in an entirely algorithmic way, suitable for computer programming; and resolution of questions on system decoupling.
Evaluation of a transfinite element numerical solution method for nonlinear heat transfer problems
NASA Technical Reports Server (NTRS)
Cerro, J. A.; Scotti, S. J.
1991-01-01
Laplace transform techniques have been widely used to solve linear, transient field problems. A transform-based algorithm enables calculation of the response at selected times of interest without the need for stepping in time as required by conventional time integration schemes. The elimination of time stepping can substantially reduce computer time when transform techniques are implemented in a numerical finite element program. The coupling of transform techniques with spatial discretization techniques such as the finite element method has resulted in what are known as transfinite element methods. Recently attempts have been made to extend the transfinite element method to solve nonlinear, transient field problems. This paper examines the theoretical basis and numerical implementation of one such algorithm, applied to nonlinear heat transfer problems. The problem is linearized and solved by requiring a numerical iteration at selected times of interest. While shown to be acceptable for weakly nonlinear problems, this algorithm is ineffective as a general nonlinear solution method.
NASA Technical Reports Server (NTRS)
Tiffany, Sherwood H.; Adams, William M., Jr.
1988-01-01
The approximation of unsteady generalized aerodynamic forces in the equations of motion of a flexible aircraft is discussed. Two methods of formulating these approximations are extended to include the same flexibility in constraining the approximations and the same methodology in optimizing nonlinear parameters as another currently used extended least-squares method. Optimal selection of nonlinear parameters is made in each of the three methods by use of the same nonlinear, nongradient optimizer. The objective of the nonlinear optimization is to obtain rational approximations to the unsteady aerodynamics whose state-space realization is of lower order than that required when no optimization of the nonlinear terms is performed. The free linear parameters are determined using the least-squares matrix techniques of a Lagrange multiplier formulation of an objective function which incorporates selected linear equality constraints. State-space mathematical models resulting from the different approaches are described, and results are presented that show comparative evaluations from application of each of the extended methods to a numerical example.
Generalized massive optimal data compression
NASA Astrophysics Data System (ADS)
Alsing, Justin; Wandelt, Benjamin
2018-05-01
In this paper, we provide a general procedure for optimally compressing N data down to n summary statistics, where n is equal to the number of parameters of interest. We show that compression to the score function - the gradient of the log-likelihood with respect to the parameters - yields n compressed statistics that are optimal in the sense that they preserve the Fisher information content of the data. Our method generalizes earlier work on linear Karhunen-Loève compression for Gaussian data whilst recovering both lossless linear compression and quadratic estimation as special cases when they are optimal. We give a unified treatment that also includes the general non-Gaussian case as long as mild regularity conditions are satisfied, producing optimal non-linear summary statistics when appropriate. As a worked example, we derive explicitly the n optimal compressed statistics for Gaussian data in the general case where both the mean and covariance depend on the parameters.
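For the Gaussian case with a parameter-dependent mean, the score compression reduces to projecting the data residual onto the mean derivatives through the inverse covariance. The snippet below is a small numerical illustration of that formula for a two-parameter linear mean model; the data, covariance and fiducial parameters are invented for the example.

import numpy as np

# Gaussian data with parameter-dependent mean mu(theta) = theta0 + theta1 * x and
# fixed covariance C: the score at a fiducial theta compresses N data points to
# n = 2 summaries while preserving the Fisher information.
rng = np.random.default_rng(2)
N = 200
x = np.linspace(0, 1, N)
C = 0.1 * np.eye(N)                       # known noise covariance (illustrative)
theta_fid = np.array([1.0, 2.0])          # fiducial expansion point

def mu(theta):
    return theta[0] + theta[1] * x

dmu = np.stack([np.ones(N), x])           # derivatives of mu w.r.t. the 2 parameters
Cinv = np.linalg.inv(C)

d = mu(np.array([1.1, 1.8])) + rng.multivariate_normal(np.zeros(N), C)

# Compressed statistics t = dmu @ Cinv @ (d - mu_fid); Fisher matrix for a check.
t = dmu @ Cinv @ (d - mu(theta_fid))
F = dmu @ Cinv @ dmu.T
print("summaries t:", np.round(t, 2))
print("quasi-MLE shift:", np.round(np.linalg.solve(F, t), 3))   # ~ theta - theta_fid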
Bayerstadler, Andreas; Benstetter, Franz; Heumann, Christian; Winter, Fabian
2014-09-01
Predictive Modeling (PM) techniques are gaining importance in the worldwide health insurance business. Modern PM methods are used for customer relationship management, risk evaluation or medical management. This article illustrates a PM approach that enables the economic potential of (cost-) effective disease management programs (DMPs) to be fully exploited by optimized candidate selection as an example of successful data-driven business management. The approach is based on a Generalized Linear Model (GLM) that is easy to apply for health insurance companies. By means of a small portfolio from an emerging country, we show that our GLM approach is stable compared to more sophisticated regression techniques in spite of the difficult data environment. Additionally, we demonstrate for this example of a setting that our model can compete with the expensive solutions offered by professional PM vendors and outperforms non-predictive standard approaches for DMP selection commonly used in the market.
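A GLM-based candidate selection of this kind can be sketched in a few lines with statsmodels: fit a Gamma regression with a log link to (here, simulated) cost data and rank members by predicted cost. The covariates, simulated portfolio and family/link choice below are assumptions for illustration, not the authors' actual model specification.

import numpy as np
import statsmodels.api as sm

# Hypothetical portfolio: predict next-year claim cost from age, chronic-condition
# flag and prior cost, then rank members for DMP enrolment by predicted cost.
rng = np.random.default_rng(3)
n = 500
age = rng.integers(20, 80, n)
chronic = rng.binomial(1, 0.3, n)
prior = rng.gamma(2.0, 500.0, n)
mu = np.exp(4 + 0.02 * age + 0.8 * chronic + 0.0004 * prior)
y = rng.gamma(2.0, mu / 2.0)                        # simulated next-year cost

X = sm.add_constant(np.column_stack([age, chronic, prior]))
glm = sm.GLM(y, X, family=sm.families.Gamma(link=sm.families.links.Log()))
fit = glm.fit()

ranking = np.argsort(fit.predict(X))[::-1]          # highest predicted cost first
print(fit.params.round(4), "top candidates:", ranking[:5])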
Hendriksen, Ingrid J.M.; Snoijer, Mirjam; de Kok, Brenda P.H.; van Vilsteren, Jeroen; Hofstetter, Hedwig
2016-01-01
Objective: Evaluation of the effectiveness of a workplace health promotion program on employees’ vitality, health, and work-related outcomes, and exploring the influence of organizational support and the supervisors’ role on these outcomes. Methods: The 5-month intervention included activities at management, team, and individual level targeting self-management to perform healthy behaviors: a kick-off session, vitality training sessions, workshops, individual coaching, and intervision. Outcome measures were collected using questionnaires, health checks, and sickness absence data at baseline, after the intervention and at 10 months follow-up. For analysis linear and generalized mixed models were used. Results: Vitality, work performance, sickness absence, and self-management significantly improved. Good organizational support and involved supervisors were significantly associated with lower sickness absence. Conclusions: Including all organizational levels and focusing on increasing self-management provided promising results for improving vitality, health, and work-related outcomes. PMID:27136605
Hendriksen, Ingrid J M; Snoijer, Mirjam; de Kok, Brenda P H; van Vilsteren, Jeroen; Hofstetter, Hedwig
2016-06-01
Evaluation of the effectiveness of a workplace health promotion program on employees' vitality, health, and work-related outcomes, and exploring the influence of organizational support and the supervisors' role on these outcomes. The 5-month intervention included activities at management, team, and individual level targeting self-management to perform healthy behaviors: a kick-off session, vitality training sessions, workshops, individual coaching, and intervision. Outcome measures were collected using questionnaires, health checks, and sickness absence data at baseline, after the intervention and at 10 months follow-up. For analysis linear and generalized mixed models were used. Vitality, work performance, sickness absence, and self-management significantly improved. Good organizational support and involved supervisors were significantly associated with lower sickness absence. Including all organizational levels and focusing on increasing self-management provided promising results for improving vitality, health, and work-related outcomes.
An optimization model for energy generation and distribution in a dynamic facility
NASA Technical Reports Server (NTRS)
Lansing, F. L.
1981-01-01
An analytical model is described that uses linear programming for the optimum generation and distribution of energy among competing energy resources under different economic criteria. The model, which will be used as a general engineering tool in the analysis of the Deep Space Network ground facility, considers several essential decisions for better design and operation. The decisions sought for the particular energy application include: the optimum time to build an assembly of elements, the inclusion of a storage medium of some type, and the size or capacity of the elements that will minimize the total life-cycle cost over a given number of years. The model, which is structured in multiple time divisions, employs the decomposition principle for large-size matrices, the branch-and-bound method in mixed-integer programming, and the revised simplex technique for efficient and economic computer use.
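The mixed-integer structure mentioned here (continuous capacities plus a yes/no build decision coupled through a big-M constraint) can be sketched with SciPy's MILP interface, which itself relies on branch and bound. The numbers, the single demand constraint and the variable layout below are invented for illustration, and scipy.optimize.milp requires a recent SciPy release.

import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# Variables: x0 = generator capacity (kW), x1 = storage capacity (kWh),
# x2 = 1 if the storage unit is built at all (binary), 0 otherwise.
capex = np.array([120.0, 40.0, 5000.0])      # illustrative life-cycle cost coefficients

# Meet a peak demand of 300 kW with generation plus storage discharge, and allow
# storage capacity only if the build decision x2 is 1 (big-M linking constraint).
bigM = 1000.0
A = np.array([
    [-1.0, -0.5, 0.0],      # -(x0 + 0.5*x1) <= -300   (demand coverage)
    [0.0,   1.0, -bigM],    # x1 - M*x2 <= 0           (storage only if built)
])
cons = LinearConstraint(A, ub=np.array([-300.0, 0.0]))

res = milp(c=capex, constraints=cons,
           integrality=np.array([0, 0, 1]),             # x2 is an integer (0/1) variable
           bounds=Bounds(lb=[0, 0, 0], ub=[np.inf, np.inf, 1]))
print(res.x, "cost:", round(res.fun, 1))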
Non-Linear Approach in Kinesiology Should Be Preferred to the Linear--A Case of Basketball.
Trninić, Marko; Jeličić, Mario; Papić, Vladan
2015-07-01
In kinesiology, medicine, biology and psychology, where research focuses on dynamical self-organized systems, complex connections exist between variables. The non-linear nature of complex systems has been discussed and explained using the example of non-linear anthropometric predictors of performance in basketball. Previous studies interpreted relations between anthropometric features and measures of effectiveness in basketball by (a) using linear correlation models, and by (b) including all basketball athletes in the same sample of participants regardless of their playing position. In this paper the significance and character of linear and non-linear relations between simple anthropometric predictors (AP) and performance criteria consisting of situation-related measures of effectiveness (SE) in basketball were determined and evaluated. The sample of participants consisted of top-level junior basketball players divided into three groups according to their playing time (8 minutes and more per game) and playing position: guards (N = 42), forwards (N = 26) and centers (N = 40). Linear (general model) and non-linear (general model) regression models were calculated simultaneously and separately for each group. The conclusion is viable: non-linear regressions are frequently superior to linear correlations when interpreting actual association logic among research variables.
NASA Astrophysics Data System (ADS)
Ikelle, Luc T.; Osen, Are; Amundsen, Lasse; Shen, Yunqing
2004-12-01
The classical linear solutions to the problem of multiple attenuation, like predictive deconvolution, τ-p filtering, or F-K filtering, are generally fast, stable, and robust compared to non-linear solutions, which are generally either iterative or in the form of a series with an infinite number of terms. These qualities have made the linear solutions more attractive to seismic data-processing practitioners. However, most linear solutions, including predictive deconvolution or F-K filtering, contain severe assumptions about the model of the subsurface and the class of free-surface multiples they can attenuate. These assumptions limit their usefulness. In a recent paper, we described an exception to this assertion for OBS data. We showed in that paper that a linear and non-iterative solution to the problem of attenuating free-surface multiples which is as accurate as iterative non-linear solutions can be constructed for OBS data. We here present a similar linear and non-iterative solution for attenuating free-surface multiples in towed-streamer data. For most practical purposes, this linear solution is as accurate as the non-linear ones.
NASA Astrophysics Data System (ADS)
Kaplan, Melike; Hosseini, Kamyar; Samadani, Farzan; Raza, Nauman
2018-07-01
A wide range of problems in different fields of the applied sciences, especially non-linear optics, is described by non-linear Schrödinger's equations (NLSEs). In the present paper, a specific type of NLSE known as the cubic-quintic non-linear Schrödinger's equation including an anti-cubic term has been studied. The generalized Kudryashov method, along with a symbolic computation package, has been employed to carry out this objective. As a consequence, a series of optical soliton solutions have formally been retrieved. It is corroborated that the generalized form of the Kudryashov method is a direct, effectual, and reliable technique to deal with various types of non-linear Schrödinger's equations.
Linear and nonlinear dynamic analysis by boundary element method. Ph.D. Thesis, 1986 Final Report
NASA Technical Reports Server (NTRS)
Ahmad, Shahid
1991-01-01
An advanced implementation of the direct boundary element method (BEM) applicable to free-vibration, periodic (steady-state) vibration and linear and nonlinear transient dynamic problems involving two and three-dimensional isotropic solids of arbitrary shape is presented. Interior, exterior, and half-space problems can all be solved by the present formulation. For the free-vibration analysis, a new real variable BEM formulation is presented which solves the free-vibration problem in the form of algebraic equations (formed from the static kernels) and needs only surface discretization. In the area of time-domain transient analysis, the BEM is well suited because it gives an implicit formulation. Although the integral formulations are elegant, because of the complexity of the formulation it has never been implemented in exact form. In the present work, linear and nonlinear time domain transient analysis for three-dimensional solids has been implemented in a general and complete manner. The formulation and implementation of the nonlinear, transient, dynamic analysis presented here is the first ever in the field of boundary element analysis. Almost all the existing formulation of BEM in dynamics use the constant variation of the variables in space and time which is very unrealistic for engineering problems and, in some cases, it leads to unacceptably inaccurate results. In the present work, linear and quadratic isoparametric boundary elements are used for discretization of geometry and functional variations in space. In addition, higher order variations in time are used. These methods of analysis are applicable to piecewise-homogeneous materials, such that not only problems of the layered media and the soil-structure interaction can be analyzed but also a large problem can be solved by the usual sub-structuring technique. The analyses have been incorporated in a versatile, general-purpose computer program. Some numerical problems are solved and, through comparisons with available analytical and numerical results, the stability and high accuracy of these dynamic analysis techniques are established.
SOCR Analyses – an Instructional Java Web-based Statistical Analysis Toolkit
Chu, Annie; Cui, Jenny; Dinov, Ivo D.
2011-01-01
The Statistical Online Computational Resource (SOCR) designs web-based tools for educational use in a variety of undergraduate courses (Dinov 2006). Several studies have demonstrated that these resources significantly improve students' motivation and learning experiences (Dinov et al. 2008). SOCR Analyses is a new component that concentrates on data modeling and analysis using parametric and non-parametric techniques supported with graphical model diagnostics. Currently implemented analyses include commonly used models in undergraduate statistics courses like linear models (Simple Linear Regression, Multiple Linear Regression, One-Way and Two-Way ANOVA). In addition, we implemented tests for sample comparisons, such as t-test in the parametric category; and Wilcoxon rank sum test, Kruskal-Wallis test, Friedman's test, in the non-parametric category. SOCR Analyses also include several hypothesis test models, such as Contingency tables, Friedman's test and Fisher's exact test. The code itself is open source (http://socr.googlecode.com/), hoping to contribute to the efforts of the statistical computing community. The code includes functionality for each specific analysis model and it has general utilities that can be applied in various statistical computing tasks. For example, concrete methods with API (Application Programming Interface) have been implemented in statistical summary, least square solutions of general linear models, rank calculations, etc. HTML interfaces, tutorials, source code, activities, and data are freely available via the web (www.SOCR.ucla.edu). Code examples for developers and demos for educators are provided on the SOCR Wiki website. In this article, the pedagogical utilization of the SOCR Analyses is discussed, as well as the underlying design framework. As the SOCR project is on-going and more functions and tools are being added to it, these resources are constantly improved. The reader is strongly encouraged to check the SOCR site for most updated information and newly added models. PMID:21546994
The Next Linear Collider Program
Web pages of the Next Linear Collider program listing Linear Collider International Study Group (ISG) meetings, including the Eleventh ISG meeting at KEK (December 16-19, 2003), the Tenth ISG meeting at SLAC (June 2003), and the Ninth ISG meeting at KEK (December 10-13).
Modelling of Asphalt Concrete Stiffness in the Linear Viscoelastic Region
NASA Astrophysics Data System (ADS)
Mazurek, Grzegorz; Iwański, Marek
2017-10-01
Stiffness modulus is a fundamental parameter used in the modelling of the viscoelastic behaviour of bituminous mixtures. On the basis of the master curve in the linear viscoelasticity range, the mechanical properties of asphalt concrete at different loading times and temperatures can be predicted. This paper discusses the construction of master curves using rheological mathematical models, i.e. the sigmoidal function model (MEPDG), the fractional model, and Bahia and co-workers' model, in comparison to the results from mechanistic rheological models, i.e. the generalized Huet-Sayegh model, the generalized Maxwell model and the Burgers model. For the purposes of this analysis, the reference asphalt concrete mix (denoted as AC16W) intended for the binder course layer and for traffic category KR3 (5×10^5
NASA Technical Reports Server (NTRS)
Tseng, K.; Morino, L.
1975-01-01
A general formulation is presented for the analysis of steady and unsteady, subsonic and supersonic aerodynamics for complex aircraft configurations. The theoretical formulation, the numerical procedure, the description of the program SOUSSA (steady, oscillatory and unsteady, subsonic and supersonic aerodynamics) and numerical results are included. In particular, generalized forces for fully unsteady (complex frequency) aerodynamics for a wing-body configuration, AGARD wing-tail interference in both subsonic and supersonic flows as well as flutter analysis results are included. The theoretical formulation is based upon an integral equation, which includes completely arbitrary motion. Steady and oscillatory aerodynamic flows are considered. Here small-amplitude, fully transient response in the time domain is considered. This yields the aerodynamic transfer function (Laplace transform of the fully unsteady operator) for frequency domain analysis. This is particularly convenient for the linear systems analysis of the whole aircraft.
Modal forced vibration analysis of aerodynamically excited turbosystems
NASA Technical Reports Server (NTRS)
Elchuri, V.
1985-01-01
Theoretical aspects of a new capability to determine the vibratory response of turbosystems subjected to aerodynamic excitation are presented. Turbosystems such as advanced turbopropellers with highly swept blades, and axial-flow compressors and turbines can be analyzed using this capability. The capability has been developed and implemented in the April 1984 release of the general purpose finite element program NASTRAN. The dynamic response problem is addressed in terms of the normal modal coordinates of these tuned rotating cyclic structures. Both rigid and flexible hubs/disks are considered. Coriolis and centripetal accelerations, as well as differential stiffness effects are included. Generally non-uniform steady inflow fields and uniform flow fields arbitrarily inclined at small angles with respect to the axis of rotation of the turbosystem are considered sources of aerodynamic excitation. The spatial non-uniformities are considered to be small deviations from a principally uniform inflow. Subsonic and supersonic relative inflows are addressed, with provision for linearly interpolating transonic airloads.
Eric J. Gustafson; L. Jay Roberts; Larry A. Leefers
2006-01-01
Forest management planners require analytical tools to assess the effects of alternative strategies on the sometimes disparate benefits from forests such as timber production and wildlife habitat. We assessed the spatial patterns of alternative management strategies by linking two models that were developed for different purposes. We used a linear programming model (...
A Partitioning and Bounded Variable Algorithm for Linear Programming
ERIC Educational Resources Information Center
Sheskin, Theodore J.
2006-01-01
An interesting new partitioning and bounded variable algorithm (PBVA) is proposed for solving linear programming problems. The PBVA is a variant of the simplex algorithm which uses a modified form of the simplex method followed by the dual simplex method for bounded variables. In contrast to the two-phase method and the big M method, the PBVA does…
Airborne Tactical Crossload Planner
2017-12-01
set out in the Airborne Standard Operating Procedure (ASOP). Subject terms: crossload, airborne, optimization, integer linear programming. ...they land to their respective sub-mission locations. In this thesis, we formulate and implement an integer linear program called the Tactical... to meet any desired crossload objectives. We demonstrate TCP with two real-world tactical problems from recent airborne operations: one by the
Radar Resource Management in a Dense Target Environment
2014-03-01
problem faced by networked MFRs. While relaxing our assumptions concerning information gain presents numerous challenges worth exploring, future research... Glossary fragments: linear programming; MFR, multifunction phased array radar; MILP, mixed integer linear programming; NATO, North Atlantic Treaty Organization; PDF, probability... Multifunction phased array radars (MFRs) are capable of performing various tasks in rapid succession. The performance of target search
Linear circuit analysis program for IBM 1620 Monitor 2, 1311/1443 data processing system /CIRCS/
NASA Technical Reports Server (NTRS)
Hatfield, J.
1967-01-01
CIRCS is a modification of the IBSNAP Circuit Analysis Program for use on smaller systems. This data processing system retains the basic dc analysis, transient analysis, and FORTRAN 2 formats. It can be used on the IBM 1620/1311 Monitor I Mod 5 system, and solves a linear network containing 15 nodes and 45 branches.
Ren, Jingzheng; Dong, Liang; Sun, Lu; Goodsite, Michael Evan; Tan, Shiyu; Dong, Lichun
2015-01-01
The aim of this work was to develop a model for optimizing the life cycle cost of biofuel supply chain under uncertainties. Multiple agriculture zones, multiple transportation modes for the transport of grain and biofuel, multiple biofuel plants, and multiple market centers were considered in this model, and the price of the resources, the yield of grain and the market demands were regarded as interval numbers instead of constants. An interval linear programming was developed, and a method for solving interval linear programming was presented. An illustrative case was studied by the proposed model, and the results showed that the proposed model is feasible for designing biofuel supply chain under uncertainties. Copyright © 2015 Elsevier Ltd. All rights reserved.
LFSPMC: Linear feature selection program using the probability of misclassification
NASA Technical Reports Server (NTRS)
Guseman, L. F., Jr.; Marion, B. P.
1975-01-01
The computational procedure and associated computer program for a linear feature selection technique are presented. The technique assumes that: a finite number, m, of classes exists; each class is described by an n-dimensional multivariate normal density function of its measurement vectors; the mean vector and covariance matrix for each density function are known (or can be estimated); and the a priori probability for each class is known. The technique produces a single linear combination of the original measurements which minimizes the one-dimensional probability of misclassification defined by the transformed densities.
SLFP: a stochastic linear fractional programming approach for sustainable waste management.
Zhu, H; Huang, G H
2011-12-01
A stochastic linear fractional programming (SLFP) approach is developed for supporting sustainable municipal solid waste management under uncertainty. The SLFP method can solve ratio optimization problems associated with random information, where chance-constrained programming is integrated into a linear fractional programming framework. It has advantages in: (1) comparing objectives of two aspects, (2) reflecting system efficiency, (3) dealing with uncertainty expressed as probability distributions, and (4) providing optimal-ratio solutions under different system-reliability conditions. The method is applied to a case study of waste flow allocation within a municipal solid waste (MSW) management system. The obtained solutions are useful for identifying sustainable MSW management schemes with maximized system efficiency under various constraint-violation risks. The results indicate that SLFP can support in-depth analysis of the interrelationships among system efficiency, system cost and system-failure risk. Copyright © 2011 Elsevier Ltd. All rights reserved.
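The core trick in linear fractional programming is the Charnes-Cooper change of variables, which turns the ratio objective into an ordinary LP; in SLFP the chance constraints would first be converted to deterministic linear constraints before this step. The sketch below shows only that deterministic ratio-to-LP transformation on invented data, not the full stochastic waste-management model.

import numpy as np
from scipy.optimize import linprog

# Linear fractional program:  maximize (c.x + c0) / (d.x + d0)
# subject to A x <= b, x >= 0, with d.x + d0 > 0 on the feasible set.
c, c0 = np.array([3.0, 1.0]), 0.0        # e.g. waste diverted per unit of each scheme
d, d0 = np.array([2.0, 1.0]), 10.0       # e.g. variable cost per scheme plus a fixed cost
A = np.array([[1.0, 2.0], [3.0, 1.0]])
b = np.array([20.0, 30.0])

# Charnes-Cooper transformation: y = t*x with t = 1/(d.x + d0) turns the ratio
# objective into a linear one:  max c.y + c0*t  s.t.  A y - b t <= 0, d.y + d0*t = 1.
n = len(c)
A_ub = np.hstack([A, -b[:, None]])
res = linprog(c=-np.append(c, c0),                     # linprog minimizes, so negate
              A_ub=A_ub, b_ub=np.zeros(len(b)),
              A_eq=np.append(d, d0)[None, :], b_eq=[1.0],
              bounds=[(0, None)] * (n + 1), method="highs")
y, t = res.x[:n], res.x[n]
print("x* =", np.round(y / t, 3), "optimal ratio =", round(-res.fun, 4))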
Recent Changes in Pgopher: a General Purpose Program for Simulating Rotational Structure
NASA Astrophysics Data System (ADS)
Western, Colin
2010-06-01
Key features of the PGOPHER program include the simulation and fitting of the rotational structure of linear molecules and symmetric and asymmetric tops, including effects due to unpaired electrons and nuclear spin. The program is written to be as general as possible, and can handle many effects such as multiple interacting states, predissociation and multiphoton transitions. It is designed to be easy to use, with a flexible graphical user interface. PGOPHER has been released as an open source program, and can be freely downloaded from the website at http://pgopher.chm.bris.ac.uk. Recent additions include a mode which allows the calculation of vibrational energy levels starting from a harmonic model and the multidimensional Franck-Condon factors required to calculate intensities of vibronic transitions. PGOPHER takes account of both the displacement along normal co-ordinates and mixing between modes (the Duschinsky effect). l matrices produced from ab initio programs can be directly read by PGOPHER or the mode displacements and mixing can be fit to observed spectra. In addition the effects of external electric and/or magnetic fields can now be calculated, including plots of energy level against electric field suitable for predicting Stark deceleration, focussing and trapping of molecules. The figure shows a typical plot, showing the electric field tuning of the M = 0 levels of 202, 111 and 110 levels of (NO)_2. Other new features include fits to combination differences, simulation of the Doppler split peak typical of Fourier transform microwave spectroscopy, specifying a nuclear spin temperature independent of rotational temperature and interactive adjustment of parameter values with the mouse in addition to typing values.
Three-dimensional modeling of flexible pavements : research implementation plan.
DOT National Transportation Integrated Search
2006-02-14
Many asphalt pavement analysis programs are based on linear elastic models. A linear viscoelastic model would be superior to linear elastic models for analyzing the response of asphalt concrete pavements to loads. There is a need to devel...
Final Report---Optimization Under Nonconvexity and Uncertainty: Algorithms and Software
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jeff Linderoth
2011-11-06
The goal of this work was to develop new algorithmic techniques for solving large-scale numerical optimization problems, focusing on problem classes that have proven to be among the most challenging for practitioners: those involving uncertainty and those involving nonconvexity. This research advanced the state of the art in solving mixed integer linear programs containing symmetry, mixed integer nonlinear programs, and stochastic optimization problems. The focus of the work done in the continuation was on Mixed Integer Nonlinear Programs (MINLPs) and Mixed Integer Linear Programs (MILPs), especially those containing a great deal of symmetry.
Sung, Ki Wol; Kang, Hye Seung; Nam, Ji Ran; Park, Mi Kyung; Park, Ji Hyeon
2018-04-01
This study aimed to estimate the effects of a health mentoring program on fasting blood sugar, total cholesterol, triglyceride, physical activity, self care behavior and social support changes among community-dwelling vulnerable elderly individuals with diabetes. A non-equivalent control group pre-post-test design was used. Participants were 70 community-dwelling vulnerable elderly individuals with diabetes. They were assigned to the experimental (n=30) or comparative (n=30) or control group (n=28). The experimental group participated in the health mentoring program, while the comparative group participated in health education program, the control group did not participate in any program. Data analyses involved a chi-square test, Fisher's exact test, a generalized linear model, and the Bonferroni correction, using SPSS 23.0. Compared to the control group, the experimental and comparative groups showed a significant decrease in fasting blood sugar, total cholesterol, and triglyceride. Compared to the comparative and control groups, the experimental group showed significant improvement in self care behavior. However, there were no statistical differences in physical activity or social support among the three groups. These findings indicate that the health mentoring program is an effective intervention for community-dwelling vulnerable elderly individuals with diabetes. This program can be used as an efficient strategy for diabetes self-management within this population. © 2018 Korean Society of Nursing Science.
Solution Methods for Stochastic Dynamic Linear Programs.
1980-12-01
NASA Astrophysics Data System (ADS)
Wu, Dongmei; Wang, Zhongcheng
2006-03-01
According to Mickens [R.E. Mickens, Comments on a Generalized Galerkin's method for non-linear oscillators, J. Sound Vib. 118 (1987) 563], the general HB (harmonic balance) method is an approximation to the convergent Fourier series representation of the periodic solution of a nonlinear oscillator and not an approximation to an expansion in terms of a small parameter. Consequently, for a nonlinear undamped Duffing equation with a driving force Bcos(ωx), to find a periodic solution when the fundamental frequency is identical to ω, the corresponding Fourier series can be written as ỹ(x) = ∑_{n=1}^{m} a_n cos[(2n-1)ωx]. How to calculate the coefficients of the Fourier series efficiently with a computer program is still an open problem. For the HB method, by substituting the approximation ỹ(x) into the force equation, expanding the resulting expression into a trigonometric series, and then letting the coefficients of the resulting lowest-order harmonic be zero, one can obtain approximate coefficients of the approximation ỹ(x) [R.E. Mickens, Comments on a Generalized Galerkin's method for non-linear oscillators, J. Sound Vib. 118 (1987) 563]. But for nonlinear differential equations such as the Duffing equation, it is very difficult to construct higher-order analytical approximations, because the HB method requires solving a set of algebraic equations for a large number of unknowns with very complex nonlinearities. To overcome the difficulty, forty years ago, Urabe derived a computational method for the Duffing equation based on the Galerkin procedure [M. Urabe, A. Reiter, Numerical computation of nonlinear forced oscillations by Galerkin's procedure, J. Math. Anal. Appl. 14 (1966) 107-140]. Dooren obtained an approximate solution of the Duffing oscillator with a special set of parameters by using Urabe's method [R. van Dooren, Stabilization of Cowell's classic finite difference method for numerical integration, J. Comput. Phys. 16 (1974) 186-192]. In this paper, in the frame of the general HB method, we present a new iteration algorithm to calculate the coefficients of the Fourier series. With this new method, the iteration procedure starts with a_1cos(ωx)+b_1sin(ωx), and the accuracy may be improved gradually as new coefficients a_2, a_3, … are produced automatically in a one-by-one manner. At every stage of the calculation, we need only to solve a cubic equation. Using this new algorithm, we develop a Mathematica program, which demonstrates the following main advantages over the previous HB method: (1) it avoids solving a set of associated nonlinear equations; (2) it is easier to implement in a computer program, and produces a highly accurate solution with an analytical expression efficiently. It is interesting to find that, generally, for a given set of parameters, a nonlinear Duffing equation can have three independent oscillation modes. For some sets of the parameters, it can have two modes with complex displacement and one with real displacement. But in other cases, it can have three modes, all of them having real displacement. Therefore, we can divide the parameters into two classes, according to the solution property: those for which there is only one mode with real displacement, and those for which there are three modes with real displacement. This program should be useful for studying the dynamically periodic behavior of a Duffing oscillator and can provide an approximate analytical solution with high accuracy for testing the error behavior of newly developed numerical methods over a wide range of parameters.
Program summary
Title of program: AnalyDuffing.nb
Catalogue identifier: ADWR_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADWR_v1_0
Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland
Licensing provisions: none
Computer for which the program is designed and others on which it has been tested: the program has been designed for a microcomputer and has been tested on the microcomputer. Computers: IBM PC
Operating systems under which the program has been tested: Windows XP
Programming language used: Software Mathematica 4.2, 5.0 and 5.1
No. of lines in distributed program, including test data, etc.: 23 663
No. of bytes in distributed program, including test data, etc.: 152 321
Distribution format: tar.gz
Memory required to execute with typical data: 51 712 bytes
No. of processors used: 1
Has the code been vectorized?: no
Peripherals used: none
Program Library subprograms used: none
Nature of physical problem: To find an approximate solution with analytical expressions for the undamped nonlinear Duffing equation with a periodic driving force when the fundamental frequency is identical to that of the driving force.
Method of solution: In the frame of the general HB method, a new iteration algorithm is used to calculate the coefficients of the Fourier series, giving an approximate analytical solution with high accuracy efficiently.
Restrictions on the complexity of the problem: For problems with a large driving frequency, the convergence may be a little slow, because more iterations are needed.
Typical running time: several seconds
Unusual features of the program: For an undamped Duffing equation, it can provide all the solutions or oscillation modes with real displacement for any parameters of interest, to the required accuracy, efficiently. The program can be used to study the dynamically periodic behavior of a nonlinear oscillator, and can provide a high-accuracy approximate analytical solution for developing high-accuracy numerical methods.
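The "only a cubic equation at each stage" remark can be illustrated with the lowest-order harmonic balance step: substituting a one-term ansatz into the undamped driven Duffing equation and balancing the fundamental harmonic leaves a single cubic in the amplitude. The fragment below carries out that first step for an invented parameter set; it is a sketch of the general idea, not the authors' full iteration or their Mathematica program.

import numpy as np

# Lowest-order harmonic balance for the undamped, driven Duffing equation
#   y'' + c1*y + c3*y**3 = B*cos(w*x),
# with the one-term ansatz y(x) = a*cos(w*x).  Balancing the cos(w*x) terms
# (using cos^3 = (3*cos + cos(3.))/4) gives a single cubic equation in a:
#   (3/4)*c3*a**3 + (c1 - w**2)*a - B = 0.
c1, c3, B, w = 1.0, 1.0, 0.5, 1.2        # illustrative parameters

coeffs = [0.75 * c3, 0.0, c1 - w ** 2, -B]
roots = np.roots(coeffs)
real_modes = roots[np.abs(roots.imag) < 1e-9].real

# Each real root is a candidate oscillation mode; depending on the parameters
# there may be one or three real-amplitude modes, as the abstract describes.
print("first-harmonic amplitudes:", np.round(np.sort(real_modes), 4))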
Lefkoff, L.J.; Gorelick, S.M.
1987-01-01
This FORTRAN-77 computer program helps solve a variety of aquifer management problems involving the control of groundwater hydraulics. It is intended for use with any standard mathematical programming package that uses Mathematical Programming System input format. The computer program creates the input files to be used by the optimization program. These files contain all the hydrologic information and management objectives needed to solve the management problem. Used in conjunction with a mathematical programming code, the computer program identifies the pumping or recharge strategy that achieves a user's management objective while maintaining groundwater hydraulic conditions within desired limits. The objective may be linear or quadratic, and may involve the minimization of pumping and recharge rates or of variable pumping costs. The problem may contain constraints on groundwater heads, gradients, and velocities for a complex, transient hydrologic system. Linear superposition of solutions to the transient, two-dimensional groundwater flow equation is used by the computer program in conjunction with the response matrix optimization method. A unit stress is applied at each decision well and transient responses at all control locations are computed using a modified version of the U.S. Geological Survey two-dimensional aquifer simulation model. The program also computes discounted cost coefficients for the objective function and accounts for transient aquifer conditions. (Author's abstract)
NASA Technical Reports Server (NTRS)
Cooke, C. H.
1975-01-01
STICAP (Stiff Circuit Analysis Program) is a FORTRAN 4 computer program written for the CDC-6400-6600 computer series and SCOPE 3.0 operating system. It provides the circuit analyst a tool for automatically computing the transient responses and frequency responses of large linear time invariant networks, both stiff and nonstiff (algorithms and numerical integration techniques are described). The circuit description and user's program input language is engineer-oriented, making simple the task of using the program. Engineering theories underlying STICAP are examined. A user's manual is included which explains user interaction with the program and gives results of typical circuit design applications. Also, the program structure from a systems programmer's viewpoint is depicted and flow charts and other software documentation are given.
Huppert, Theodore J
2016-01-01
Functional near-infrared spectroscopy (fNIRS) is a noninvasive neuroimaging technique that uses low levels of light to measure changes in cerebral blood oxygenation levels. In the majority of NIRS functional brain studies, analysis of this data is based on a statistical comparison of hemodynamic levels between a baseline and task or between multiple task conditions by means of a linear regression model: the so-called general linear model. Although these methods are similar to their implementation in other fields, particularly for functional magnetic resonance imaging, the specific application of these methods in fNIRS research differs in several key ways related to the sources of noise and artifacts unique to fNIRS. In this brief communication, we discuss the application of linear regression models in fNIRS and the modifications needed to generalize these models in order to deal with structured (colored) noise due to systemic physiology and noise heteroscedasticity due to motion artifacts. The objective of this work is to present an overview of these noise properties in the context of the linear model as it applies to fNIRS data. This work is aimed at explaining these mathematical issues to the general fNIRS experimental researcher but is not intended to be a complete mathematical treatment of these concepts.
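A minimal numeric sketch of the two fixes discussed above, applied to synthetic data rather than real fNIRS recordings: AR(1) prewhitening for serially correlated ("colored") noise, followed by a simple Huber-type reweighting to down-weight heavy-tailed, motion-artifact-like samples. The design matrix, AR order and tuning constants are illustrative choices, not the authors' pipeline.

import numpy as np

rng = np.random.default_rng(0)
n = 400
X = np.column_stack([np.ones(n), np.sin(2*np.pi*np.arange(n)/60)])  # toy design
e = np.zeros(n)
for t in range(1, n):                       # AR(1) noise with rho = 0.8
    e[t] = 0.8*e[t-1] + rng.normal(scale=0.5)
y = X @ np.array([1.0, 2.0]) + e

beta = np.linalg.lstsq(X, y, rcond=None)[0]            # ordinary least squares
r = y - X @ beta
rho = np.dot(r[1:], r[:-1]) / np.dot(r[:-1], r[:-1])    # estimate AR(1) coefficient

Xw, yw = X[1:] - rho*X[:-1], y[1:] - rho*y[:-1]         # prewhiten design and data
for _ in range(5):                                      # IRLS with Huber weights
    rw = yw - Xw @ beta
    s = 1.4826*np.median(np.abs(rw)) + 1e-12            # robust scale estimate
    w = np.minimum(1.0, 1.345*s/np.maximum(np.abs(rw), 1e-12))
    sw = np.sqrt(w)
    beta = np.linalg.lstsq(Xw*sw[:, None], yw*sw, rcond=None)[0]
print(beta)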
Linear systems with structure group and their feedback invariants
NASA Technical Reports Server (NTRS)
Martin, C.; Hermann, R.
1977-01-01
A general method described by Hermann and Martin (1976) for the study of the feedback invariants of linear systems is considered. It is shown that this method, which makes use of ideas of topology and algebraic geometry, is very useful in the investigation of feedback problems for which the classical methods are not suitable. The transfer function as a curve in the Grassmanian is examined. The general concepts studied in the context of specific systems and applications are organized in terms of the theory of Lie groups and algebraic geometry. Attention is given to linear systems which have a structure group, linear mechanical systems, and feedback invariants. The investigation shows that Lie group techniques are powerful and useful tools for analysis of the feedback structure of linear systems.
Confidence Intervals for Assessing Heterogeneity in Generalized Linear Mixed Models
ERIC Educational Resources Information Center
Wagler, Amy E.
2014-01-01
Generalized linear mixed models are frequently applied to data with clustered categorical outcomes. The effect of clustering on the response is often difficult to practically assess partly because it is reported on a scale on which comparisons with regression parameters are difficult to make. This article proposes confidence intervals for…
Estimation of Complex Generalized Linear Mixed Models for Measurement and Growth
ERIC Educational Resources Information Center
Jeon, Minjeong
2012-01-01
Maximum likelihood (ML) estimation of generalized linear mixed models (GLMMs) is technically challenging because of the intractable likelihoods that involve high dimensional integrations over random effects. The problem is magnified when the random effects have a crossed design and thus the data cannot be reduced to small independent clusters. A…
A General Linear Model (GLM) was used to evaluate the deviation of predicted values from expected values for a complex environmental model. For this demonstration, we used the default level interface of the Regional Mercury Cycling Model (R-MCM) to simulate epilimnetic total mer...
Modeling containment of large wildfires using generalized linear mixed-model analysis
Mark Finney; Isaac C. Grenfell; Charles W. McHugh
2009-01-01
Billions of dollars are spent annually in the United States to contain large wildland fires, but the factors contributing to suppression success remain poorly understood. We used a regression model (generalized linear mixed-model) to model containment probability of individual fires, assuming that containment was a repeated-measures problem (fixed effect) and...
Posterior propriety for hierarchical models with log-likelihoods that have norm bounds
Michalak, Sarah E.; Morris, Carl N.
2015-07-17
Statisticians often use improper priors to express ignorance or to provide good frequency properties, requiring that posterior propriety be verified. Our paper addresses generalized linear mixed models, GLMMs, when Level I parameters have Normal distributions, with many commonly-used hyperpriors. It provides easy-to-verify sufficient posterior propriety conditions based on dimensions, matrix ranks, and exponentiated norm bounds, ENBs, for the Level I likelihood. Since many familiar likelihoods have ENBs, which is often verifiable via log-concavity and MLE finiteness, our novel use of ENBs permits unification of posterior propriety results and posterior MGF/moment results for many useful Level I distributions, including those commonly used with multilevel generalized linear models, e.g., GLMMs and hierarchical generalized linear models, HGLMs. Furthermore, those who need to verify existence of posterior distributions or of posterior MGFs/moments for a multilevel generalized linear model given a proper or improper multivariate F prior as in Section 1 should find the required results in Sections 1 and 2 and Theorem 3 (GLMMs), Theorem 4 (HGLMs), or Theorem 5 (posterior MGFs/moments).
NASA Technical Reports Server (NTRS)
Wiggins, R. A.
1972-01-01
The discrete general linear inverse problem reduces to a set of m equations in n unknowns. There is generally no unique solution, but we can find k linear combinations of parameters for which restraints are determined. The parameter combinations are given by the eigenvectors of the coefficient matrix. The number k is determined by the ratio of the standard deviations of the observations to the allowable standard deviations in the resulting solution. Various linear combinations of the eigenvectors can be used to determine parameter resolution and information distribution among the observations. Thus we can determine where information comes from among the observations and exactly how it constrains the set of possible models. The application of such analyses to surface-wave and free-oscillation observations indicates that (1) phase, group, and amplitude observations for any particular mode provide basically the same type of information about the model; (2) observations of overtones can enhance the resolution considerably; and (3) the degree of resolution has generally been overestimated for many model determinations made from surface waves.
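A minimal sketch of the idea above, using a truncated singular value decomposition as the eigenvector analysis of a linear inverse problem G m = d and inspecting the model resolution matrix R = V_k V_k^T. The matrix G, data d and truncation cutoff are random placeholders, not values from the paper.

import numpy as np

rng = np.random.default_rng(1)
G = rng.normal(size=(8, 5))        # m equations, n unknowns
d = rng.normal(size=8)

U, s, Vt = np.linalg.svd(G, full_matrices=False)
k = np.sum(s > 0.1*s[0])           # keep combinations above a noise-based cutoff
m_est = Vt[:k].T @ ((U[:, :k].T @ d) / s[:k])   # truncated-SVD solution
R = Vt[:k].T @ Vt[:k]              # resolution: how well each parameter is resolved
print(k, np.diag(R))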
Huang, Jian; Zhang, Cun-Hui
2013-01-01
The ℓ1-penalized method, or the Lasso, has emerged as an important tool for the analysis of large data sets. Many important results have been obtained for the Lasso in linear regression which have led to a deeper understanding of high-dimensional statistical problems. In this article, we consider a class of weighted ℓ1-penalized estimators for convex loss functions of a general form, including the generalized linear models. We study the estimation, prediction, selection and sparsity properties of the weighted ℓ1-penalized estimator in sparse, high-dimensional settings where the number of predictors p can be much larger than the sample size n. Adaptive Lasso is considered as a special case. A multistage method is developed to approximate concave regularized estimation by applying an adaptive Lasso recursively. We provide prediction and estimation oracle inequalities for single- and multi-stage estimators, a general selection consistency theorem, and an upper bound for the dimension of the Lasso estimator. Important models including the linear regression, logistic regression and log-linear models are used throughout to illustrate the applications of the general results. PMID:24348100
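A minimal sketch of a two-stage (adaptive) weighted ℓ1 fit on synthetic data, using the column-rescaling trick: putting weights w_j on |beta_j| is equivalent to an ordinary Lasso on X[:, j]/w_j. The data, penalty levels and the small constant in the weights are illustrative choices, not the estimators or tuning rules analyzed in the paper.

import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p = 100, 50
X = rng.normal(size=(n, p))
beta_true = np.zeros(p); beta_true[:3] = [3.0, -2.0, 1.5]
y = X @ beta_true + rng.normal(scale=1.0, size=n)

stage1 = Lasso(alpha=0.1).fit(X, y)                    # initial Lasso fit
w = 1.0 / (np.abs(stage1.coef_) + 1e-3)                # adaptive weights
stage2 = Lasso(alpha=0.1).fit(X / w, y)                # weighted L1 via rescaling
beta_hat = stage2.coef_ / w                            # undo the rescaling
print(np.nonzero(np.abs(beta_hat) > 1e-8)[0])          # selected predictors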
Capisizu, Ana; Aurelian, Sorina; Zamfirescu, Andreea; Omer, Ioana; Haras, Monica; Ciobotaru, Camelia; Onose, Liliana; Spircu, Tiberiu; Onose, Gelu
2015-01-01
To assess the impact of socio-demographic and comorbidity factors, and of quantified depressive symptoms, on disability in inpatients. Observational cross-sectional study including 80 elderly patients (16 men, 64 women; mean age 72.48 years; standard deviation 9.95 years) admitted to the Geriatrics Clinic of "St. Luca" Hospital, Bucharest, between May and July 2012. We used the Functional Independence Measure (FIM), the Geriatric Depression Scale (GDS) and an array of socio-demographic and poly-pathology parameters. Statistical analysis included Wilcoxon and Kruskal-Wallis tests for ordinal variables, linear bivariate correlations, general linear model analysis, and ANOVA. FIM scores were negatively correlated with age (R=-0.301; 95% CI: -0.439 to -0.163; p=0.007); GDS scores had a statistically significant negative correlation with FIM scores (R=-0.322; 95% CI: -0.324 to -0.052; p=0.004). A general linear model including other variables (gender, age, provenance, marital status, living conditions, education and number of chronic illnesses) as factors found living conditions (p=0.027) and the combination of marital status and gender (p=0.004) to significantly influence FIM scores. ANOVA showed significant differences in FIM scores stratified by the number of chronic diseases (p=0.035). Our study objectified the negative impact of depression on functional status; interestingly, education had no influence on FIM scores; living conditions and the combination of marital status and gender had an important impact: patients with living spouses showed better functional scores than the divorced or widowed; the number of chronic diseases also affected FIM scores, which were lower in patients with significant polypathology. These findings should be considered when designing geriatric rehabilitation programs, especially for home care, including skilled care.
User document for computer programs for ring-stiffened shells of revolution
NASA Technical Reports Server (NTRS)
Cohen, G. A.
1973-01-01
A user manual and related program documentation is presented for six compatible computer programs for structural analysis of axisymmetric shell structures. The programs apply to a common structural model but analyze different modes of structural response. In particular, they are: (1) Linear static response under asymmetric loads; (2) Buckling of linear states under asymmetric loads; (3) Nonlinear static response under axisymmetric loads; (4) Buckling of nonlinear states under axisymmetric loads; (5) Imperfection sensitivity of buckling modes under axisymmetric loads; and (6) Vibrations about nonlinear states under axisymmetric loads. These programs treat branched shells of revolution with an arbitrary arrangement of a large number of open branches but with at most one closed branch.
Alternative mathematical programming formulations for FSS synthesis
NASA Technical Reports Server (NTRS)
Reilly, C. H.; Mount-Campbell, C. A.; Gonsalvez, D. J. A.; Levis, C. A.
1986-01-01
A variety of mathematical programming models and two solution strategies are suggested for the problem of allocating orbital positions to (synthesizing) satellites in the Fixed Satellite Service. Mixed integer programming and almost linear programming formulations are presented in detail for each of two objectives: (1) positioning satellites as closely as possible to specified desired locations, and (2) minimizing the total length of the geostationary arc allocated to the satellites whose positions are to be determined. Computational results for mixed integer and almost linear programming models, with the objective of positioning satellites as closely as possible to their desired locations, are reported for three six-administration test problems and a thirteen-administration test problem.
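A toy sketch of objective (1), positioning satellites as closely as possible to desired locations: if the east-to-west ordering of the satellites is fixed (a simplifying assumption made here, so no integer variables are needed), minimizing the sum of |x_i - d_i| under a minimum angular spacing linearizes into an ordinary LP with auxiliary deviation variables u_i. The desired longitudes and spacing below are invented, and this is not the papers' formulation in detail.

import numpy as np
from scipy.optimize import linprog

d = np.array([10.0, 12.0, 13.0, 20.0])   # desired longitudes (deg)
sep = 3.0                                 # required minimum spacing (deg)
n = len(d)
# variable vector z = [x_1..x_n, u_1..u_n]; minimize sum of deviations u_i
c = np.concatenate([np.zeros(n), np.ones(n)])

A_ub, b_ub = [], []
for i in range(n):                        # x_i - d_i <= u_i  and  d_i - x_i <= u_i
    row = np.zeros(2*n); row[i] = 1; row[n+i] = -1
    A_ub.append(row); b_ub.append(d[i])
    row = np.zeros(2*n); row[i] = -1; row[n+i] = -1
    A_ub.append(row); b_ub.append(-d[i])
for i in range(n-1):                      # enforce x_{i+1} >= x_i + sep
    row = np.zeros(2*n); row[i] = 1; row[i+1] = -1
    A_ub.append(row); b_ub.append(-sep)

res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              bounds=[(None, None)]*n + [(0, None)]*n)
print(res.x[:n])                          # allotted orbital positions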
NASA Astrophysics Data System (ADS)
Bičák, Jiří; Schmidt, Josef
2016-01-01
The question of the uniqueness of energy-momentum tensors in the linearized general relativity and in the linear massive gravity is analyzed without using variational techniques. We start from a natural ansatz for the form of the tensor (for example, that it is a linear combination of the terms quadratic in the first derivatives), and require it to be conserved as a consequence of field equations. In the case of the linear gravity in a general gauge we find a four-parametric system of conserved second-rank tensors which contains a unique symmetric tensor. This turns out to be the linearized Landau-Lifshitz pseudotensor employed often in full general relativity. We elucidate the relation of the four-parametric system to the expression proposed recently by Butcher et al. "on physical grounds" in harmonic gauge, and we show that the results coincide in the case of high-frequency waves in vacuum after a suitable averaging. In the massive gravity we show how one can arrive at the expression which coincides with the "generalized linear symmetric Landau-Lifshitz" tensor. However, there exists another uniquely given simpler symmetric tensor which can be obtained by adding the divergence of a suitable superpotential to the canonical energy-momentum tensor following from the Fierz-Pauli action. In contrast to the symmetric tensor derived by the Belinfante procedure which involves the second derivatives of the field variables, this expression contains only the field and its first derivatives. It is simpler than the generalized Landau-Lifshitz tensor but both yield the same total quantities since they differ by the divergence of a superpotential. We also discuss the role of the gauge conditions in the proofs of the uniqueness. In the Appendix, the symbolic tensor manipulation software cadabra is briefly described. It is very effective in obtaining various results which would otherwise require lengthy calculations.
ERIC Educational Resources Information Center
Findorff, Irene K.
This document summarizes the results of a project at Tulane University that was designed to adapt, test, and evaluate a computerized information and menu planning system utilizing linear programing techniques for use in school lunch food service operations. The objectives of the menu planning were to formulate menu items into a palatable,…
Spline smoothing of histograms by linear programming
NASA Technical Reports Server (NTRS)
Bennett, J. O.
1972-01-01
An algorithm is presented for obtaining an approximating function to the frequency distribution from a sample of size n. To obtain the approximating function, a histogram is made from the data. Next, Euclidean-space approximations to the graph of the histogram, using central B-splines as basis elements, are obtained by linear programming. The approximating function has area one and is nonnegative.
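A minimal sketch of the same idea (not the 1972 code, and using clamped cubic rather than central B-splines): fit a nonnegative combination of B-splines to histogram heights by linear programming, minimizing the maximum deviation subject to the unit-area condition sum_j c_j * integral(B_j) = 1. The sample, knot placement and basis degree are illustrative assumptions.

import numpy as np
from scipy.interpolate import BSpline
from scipy.optimize import linprog

rng = np.random.default_rng(0)
sample = rng.normal(size=500)
heights, edges = np.histogram(sample, bins=20, density=True)
x = 0.5*(edges[:-1] + edges[1:])                       # bin centres

k = 3                                                  # cubic B-splines
t = np.concatenate([[edges[0]]*k,
                    np.linspace(edges[0], edges[-1], 8),
                    [edges[-1]]*k])                    # clamped knot vector
nb = len(t) - k - 1
basis = [BSpline(t, np.eye(nb)[j], k) for j in range(nb)]
B = np.column_stack([b(x) for b in basis])             # design matrix at centres
xg = np.linspace(edges[0], edges[-1], 4001)
areas = np.array([np.mean(b(xg)) for b in basis])*(edges[-1] - edges[0])

# variables [c_1..c_nb, e]: minimize e with |B c - heights| <= e, c >= 0,
# and areas . c = 1 (unit area)
obj = np.concatenate([np.zeros(nb), [1.0]])
A_ub = np.block([[ B, -np.ones((len(x), 1))],
                 [-B, -np.ones((len(x), 1))]])
b_ub = np.concatenate([heights, -heights])
res = linprog(obj, A_ub=A_ub, b_ub=b_ub,
              A_eq=np.append(areas, 0.0)[None, :], b_eq=[1.0],
              bounds=[(0, None)]*(nb + 1))
print(res.status, np.round(res.x[:nb], 3))             # status 0 means solved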
Patricia K. Lebow; Henry Spelter; Peter J. Ince
2003-01-01
This report provides documentation and user information for FPL-PELPS, a personal computer price endogenous linear programming system for economic modeling. Originally developed to model the North American pulp and paper industry, FPL-PELPS follows its predecessors in allowing the modeling of any appropriate sector to predict consumption, production and capacity by...
2013-01-01
Background: "Mental health for everyone" is a school program for mental health literacy and prevention aimed at secondary schools (13–15 yrs). The main aim was to investigate whether mental health literacy could be improved by a 3-day universal education programme by: a) improving naming of symptom profiles of mental disorder, b) reducing prejudiced beliefs, and c) improving knowledge about where to seek help for mental health problems. A secondary aim was to investigate whether adolescent sex and age influenced the above-mentioned variables. A third aim was to investigate whether prejudiced beliefs influenced knowledge about available help. Method: This non-randomized cluster controlled trial included 1070 adolescents (53.9% boys, mean age 14 yrs) from three schools in a Norwegian town. One school (n = 520) received the intervention, and two schools (n = 550) formed the control group. Pre-test and follow-up were three months apart. Linear mixed models and generalized estimating equations models were employed for analysis. Results: Mental health literacy improved contingent on the intervention, and there was a shift towards suggesting primary health care as a place to seek help. Those with more prejudiced beliefs did not suggest places to seek help for mental health problems. Generally, girls and older adolescents recognized symptom profiles better and had lower levels of prejudiced beliefs. Conclusions: A low-cost general school program may improve mental health literacy in adolescents. Gender-specific programs and attention to the age and maturity of the students should be considered when mental health literacy programmes are designed and tried out. Prejudice should be addressed before imparting information about mental health issues. PMID:24053381
Fogolari, Federico; Corazza, Alessandra; Esposito, Gennaro
2015-04-05
The generalized Born model in the Onufriev, Bashford, and Case (Onufriev et al., Proteins: Struct Funct Genet 2004, 55, 383) implementation has emerged as one of the best compromises between accuracy and speed of computation. For simulations of nucleic acids, however, a number of issues should be addressed: (1) the generalized Born model is based on a linear model, and the linearization of the reference Poisson-Boltzmann equation may be questioned for highly charged systems such as nucleic acids; (2) although much attention has been given to potentials, solvation forces could be much less sensitive to linearization than the potentials; and (3) the accuracy of the Onufriev-Bashford-Case (OBC) model for nucleic acids depends on fine tuning of parameters. Here, we show that the linearization of the Poisson-Boltzmann equation has mild effects on computed forces, and that with optimal choice of the OBC model parameters, solvation forces, essential for molecular dynamics simulations, agree well with those computed using the reference Poisson-Boltzmann model. © 2015 Wiley Periodicals, Inc.
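For orientation, a minimal sketch of the generalized Born polarization energy in the standard Still pairwise form, evaluated for a handful of point charges with pre-computed effective Born radii; the OBC recipe for computing those radii, and all charges, positions, radii and dielectric constants below, are illustrative assumptions rather than the parameterization studied in the paper.

import numpy as np

def gb_energy(q, pos, R, eps_in=1.0, eps_out=78.5):
    # Still-form GB polarization energy; 332.06 converts e^2/Angstrom to kcal/mol
    pref = -0.5*(1.0/eps_in - 1.0/eps_out)*332.06
    E = 0.0
    for i in range(len(q)):
        for j in range(len(q)):
            r2 = np.sum((pos[i] - pos[j])**2)
            f = np.sqrt(r2 + R[i]*R[j]*np.exp(-r2/(4.0*R[i]*R[j])))
            E += pref*q[i]*q[j]/f
    return E

q = np.array([-1.0, 0.5, 0.5])
pos = np.array([[0.0, 0.0, 0.0], [3.0, 0.0, 0.0], [0.0, 3.0, 0.0]])
R = np.array([1.5, 1.2, 1.2])          # effective Born radii (Angstrom)
print(gb_energy(q, pos, R))            # polarization energy in kcal/mol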
Rovniak, Liza S; Sallis, James F; Kraschnewski, Jennifer L; Sciamanna, Christopher N; Kiser, Elizabeth J; Ray, Chester A; Chinchilli, Vernon M; Ding, Ding; Matthews, Stephen A; Bopp, Melissa; George, Daniel R; Hovell, Melbourne F
2013-08-14
High rates of physical inactivity compromise the health status of populations globally. Social networks have been shown to influence physical activity (PA), but little is known about how best to engineer social networks to sustain PA. To improve procedures for building networks that shape PA as a normative behavior, there is a need for more specific hypotheses about how social variables influence PA. There is also a need to integrate concepts from network science with ecological concepts that often guide the design of in-person and electronically-mediated interventions. Therefore, this paper: (1) proposes a conceptual model that integrates principles from network science and ecology across in-person and electronically-mediated intervention modes; and (2) illustrates the application of this model to the design and evaluation of a social network intervention for PA. A conceptual model for engineering social networks was developed based on a scoping literature review of modifiable social influences on PA. The model guided the design of a cluster randomized controlled trial in which 308 sedentary adults were randomly assigned to three groups: WalkLink+: prompted and provided feedback on participants' online and in-person social-network interactions to expand networks for PA, plus provided evidence-based online walking program and weekly walking tips; WalkLink: evidence-based online walking program and weekly tips only; Minimal Treatment Control: weekly tips only. The effects of these treatment conditions were assessed at baseline, post-program, and 6-month follow-up. The primary outcome was accelerometer-measured PA. Secondary outcomes included objectively-measured aerobic fitness, body mass index, waist circumference, blood pressure, and neighborhood walkability; and self-reported measures of the physical environment, social network environment, and social network interactions. The differential effects of the three treatment conditions on primary and secondary outcomes will be analyzed using general linear modeling (GLM), or generalized linear modeling if the assumptions for GLM cannot be met. Results will contribute to greater understanding of how to conceptualize and implement social networks to support long-term PA. Establishing social networks for PA across multiple life settings could contribute to cultural norms that sustain active living. ClinicalTrials.gov NCT01142804.
NASA Technical Reports Server (NTRS)
Deng, Xiaomin; Newman, James C., Jr.
1997-01-01
ZIP2DL is a two-dimensional, elastic-plastic finite element program for stress analysis and crack growth simulations, developed for the NASA Langley Research Center. It has many of the salient features of the ZIP2D program. For example, ZIP2DL contains five material models (linearly elastic, elastic-perfectly plastic, power-law hardening, linear hardening, and multi-linear hardening models), and it can simulate mixed-mode crack growth for prescribed crack growth paths under plane stress, plane strain, and mixed states of stress. Further, as an extension of ZIP2D, it also includes a number of new capabilities. The large-deformation kinematics in ZIP2DL allow it to handle elastic problems with large strains and large rotations, and elastic-plastic problems with small strains and large rotations. Loading conditions in terms of surface traction, concentrated load, and nodal displacement can be applied with a default linear time dependence, or they can be programmed according to a user-defined time dependence through a user subroutine. The restart capability of ZIP2DL makes it possible to stop the execution of the program at any time, analyze the results and/or modify execution options, and then resume and continue the execution of the program. This report includes three sections: a theoretical manual section, a user manual section, and an example manual section. In the theoretical section, the mathematics behind the various aspects of the program are concisely outlined. In the user manual section, a line-by-line explanation of the input data is given. In the example manual section, three types of examples are presented to demonstrate the accuracy and illustrate the use of this program.
Asymptotic aspect of derivations in Banach algebras.
Roh, Jaiok; Chang, Ick-Soon
2017-01-01
We prove that every approximate linear left derivation on a semisimple Banach algebra is continuous. Also, we consider linear derivations on Banach algebras and we first study the conditions for a linear derivation on a Banach algebra. Then we examine the functional inequalities related to a linear derivation and their stability. We finally take central linear derivations with radical ranges on semiprime Banach algebras and a continuous linear generalized left derivation on a semisimple Banach algebra.
NASA Technical Reports Server (NTRS)
1980-01-01
Programs exploring and demonstrating new technologies in general aviation propulsion are considered. These programs are the quiet, clean, general aviation turbofan (QCGAT) program; the general aviation turbine engine (GATE) study program; the general aviation propeller technology program; and the advanced rotary, diesel, and reciprocating engine programs.
Decision Support System for Reservoir Management and Operation in Africa
NASA Astrophysics Data System (ADS)
Navar, D. A.
2016-12-01
Africa is currently experiencing a surge in dam construction for flood control, water supply and hydropower production, but ineffective reservoir management has caused problems in the region, such as water shortages, flooding and loss of potential hydropower generation. Our research aims to remedy ineffective reservoir management by developing a novel Decision Support System (DSS) to equip water managers with a technical planning tool based on the state of the art in hydrological sciences. The DSS incorporates a climate forecast model, a hydraulic model of the watershed, and an optimization model to effectively plan the operation of a system of cascaded large-scale reservoirs for hydropower production, while treating water supply and flood control as constraints. Our team will use the newly constructed hydropower plants in the Omo Gibe basin of Ethiopia as the test case. Using the basic HIDROTERM software developed in Brazil, the General Algebraic Modeling System (GAMS) utilizes a combination of linear programming (LP) and non-linear programming (NLP) in conjunction with real-time hydrologic and energy demand data to optimize the monthly and daily operations of the reservoir system. We compare the DSS model results with the current reservoir operating policy used by the water managers of that region. We also hope the DSS will eliminate the current dangers associated with the mismanagement of large-scale water resources projects in Africa.
Prediction of LDEF ionizing radiation environment
NASA Astrophysics Data System (ADS)
Watts, John W.; Parnell, T. A.; Derrickson, James H.; Armstrong, T. W.; Benton, E. V.
1992-01-01
The Long Duration Exposure Facility (LDEF) spacecraft flew in a 28.5 deg inclination circular orbit with an altitude in the range from 172 to 258.5 nautical miles. For this orbital altitude and inclination, two components contribute most of the penetrating charged-particle radiation encountered: the galactic cosmic rays and the geomagnetically trapped Van Allen protons. Where shielding is less than 1.0 g/sq cm, geomagnetically trapped electrons make a significant contribution. The 'Vette' models, together with the associated magnetic field models, were used to obtain the trapped electron and proton fluences. The mission proton doses were obtained from the fluence using the Burrell proton dose program. For the electron and bremsstrahlung dose we used the Marshall Space Flight Center (MSFC) electron dose program. The predicted doses were in general agreement with those measured with on-board thermoluminescent detector (TLD) dosimeters. The NRL package of programs, Cosmic Ray Effects on MicroElectronics (CREME), was used to calculate the linear energy transfer (LET) spectrum due to galactic cosmic rays (GCR) and trapped protons for comparison with LDEF measurements.
The role of predictive uncertainty in the operational management of reservoirs
NASA Astrophysics Data System (ADS)
Todini, E.
2014-09-01
The present work deals with the operational management of multi-purpose reservoirs, whose optimisation-based rules are derived, in the planning phase, via deterministic (linear and nonlinear programming, dynamic programming, etc.) or via stochastic (generally stochastic dynamic programming) approaches. In operation, the resulting deterministic or stochastic optimised operating rules are then triggered based on inflow predictions. In order to fully benefit from predictions, one must avoid using them as direct inputs to the reservoirs, but rather assess the "predictive knowledge" in terms of a predictive probability density to be operationally used in the decision making process for the estimation of expected benefits and/or expected losses. Using a theoretical and extremely simplified case, it will be shown why directly using model forecasts instead of the full predictive density leads to less robust reservoir management decisions. Moreover, the effectiveness and the tangible benefits for using the entire predictive probability density instead of the model predicted values will be demonstrated on the basis of the Lake Como management system, operational since 1997, as well as on the basis of a case study on the lake of Aswan.
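A toy numeric illustration of the point above (not the Lake Como or Aswan case studies): with an asymmetric loss in which spilling or flooding is much costlier than releasing too much water, the release that minimizes expected loss over the full predictive density differs from the one obtained by plugging in the model's point forecast. The predictive distribution, loss coefficients and release grid are all invented.

import numpy as np

rng = np.random.default_rng(0)
inflow = rng.lognormal(mean=3.0, sigma=0.5, size=20000)   # predictive samples
point_forecast = np.exp(3.0)                              # "model output" value

def loss(release, inflow):
    shortfall = np.maximum(inflow - release, 0.0)   # water that must be spilled
    surplus = np.maximum(release - inflow, 0.0)     # benefit lost by over-releasing
    return 10.0*shortfall + 1.0*surplus             # asymmetric penalty

grid = np.linspace(5.0, 80.0, 400)
exp_loss = [np.mean(loss(r, inflow)) for r in grid]
best_full = grid[int(np.argmin(exp_loss))]
best_point = grid[int(np.argmin([loss(r, point_forecast) for r in grid]))]
print(best_point, best_full)      # the two decisions differ markedly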
Implementing Learning Assistants and Tutorials in the Laboratory Environment
NASA Astrophysics Data System (ADS)
Stewart, John; Henderson, Rachel; Miller, Paul
2016-03-01
This talk describes the results of a novel implementation of a Learning Assistant (LA) program where the LAs facilitated the presentation of the Tutorials in Introductory Physics as part of an otherwise traditional laboratory. LAs received both general training in the teaching of science and specific training in the presentation of the Tutorials. The LAs acted as the lead laboratory instructor for one hour each lab. The program required very little interaction from the lecturer. The program showed a substantial increase in learning gains on the Force and Motion Conceptual Inventory in the first semester course, but weaker improvement of learning gains on the Conceptual Survey of Electricity and Magnetism in the second semester course. Multiple linear regression showed that gender, student ability, and whether the student was on-sequence were significant regressors. The instructor was a substantial random effect (SD = 0.10), but the teaching assistant (SD = 0.00) and learning assistant (SD = 0.01) were much weaker random effects on the normalized gain. The instructor standing (tenure-track, teaching faculty, or adjunct) was a weakly significant regressor (p < 0.05).
Pease, J M; Morselli, M F
1987-01-01
This paper deals with a computer program adapted to a statistical method for analyzing an unlimited quantity of binary recorded data of an independent circular variable (e.g. wind direction) and a linear variable (e.g. maple sap flow volume). Circular variables cannot be statistically analyzed with linear methods unless they have been transformed. The program calculates a critical quantity, the acrophase angle (PHI, φ0). The technique is adapted from original mathematics [1] and is written in Fortran 77 for easier conversion between computer networks. Correlation analysis or regression can be performed after running the program; because of the circular nature of the independent variable, the regression becomes periodic regression. The technique was tested on a file of approximately 4050 data pairs.
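A minimal sketch of periodic regression on a circular predictor (a generic cosinor-style fit, not the Fortran 77 program itself): fit y = M + A*cos(theta - phi0) by ordinary least squares on cos(theta) and sin(theta), then recover the acrophase phi0 from the two coefficients. The synthetic data and true parameter values are illustrative.

import numpy as np

rng = np.random.default_rng(0)
theta = rng.uniform(0, 2*np.pi, size=300)              # e.g. wind direction (rad)
y = 5.0 + 2.0*np.cos(theta - 1.0) + rng.normal(scale=0.5, size=300)  # e.g. sap flow

X = np.column_stack([np.ones_like(theta), np.cos(theta), np.sin(theta)])
b0, bc, bs = np.linalg.lstsq(X, y, rcond=None)[0]
amplitude = np.hypot(bc, bs)                           # A
acrophase = np.arctan2(bs, bc)                         # phi0, in radians
print(b0, amplitude, acrophase)                        # ~5.0, ~2.0, ~1.0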
Preliminary SPE Phase II Far Field Ground Motion Estimates
DOE Office of Scientific and Technical Information (OSTI.GOV)
Steedman, David W.
2014-03-06
Phase II of the Source Physics Experiment (SPE) program will be conducted in alluvium. Several candidate sites were identified. These include the existing large diameter borehole U1e. One criterion for acceptance is the expected far-field ground motion. In June 2013 we were requested to estimate peak response 2 km from the borehole due to the largest planned SPE Phase II experiment: a contained 50-ton event. The cube-root scaled range for this event is 5423 m/kT^(1/3). The generally accepted first-order estimate of ground motions from an explosive event is to refer to the standard database for explosive events (Perrett and Bass, 1975). This reference is a compilation and analysis of ground motion data from numerous nuclear and chemical explosive events from the Nevada National Security Site (formerly the Nevada Test Site, or NTS) and other locations. The data were compiled and analyzed for various geologic settings including dry alluvium, which we believe is an accurate descriptor for the SPE Phase II setting. The Perrett and Bass plots of peak velocity and peak yield-scaled displacement, both vs. yield-scaled range, are provided here. Their analysis of both variables resulted in bi-linear fits: a close-in non-linear regime and a more distant linear regime.
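A quick arithmetic check of the cube-root-scaled range quoted above, assuming a 2 km range and a 50-ton (0.05 kT) yield; the small difference from the quoted 5423 m/kT^(1/3) presumably reflects rounding of the range or yield in the source.

range_m = 2000.0                         # assumed standoff distance (m)
yield_kt = 0.05                          # 50 tons expressed in kilotons
print(range_m / yield_kt**(1.0/3.0))     # ~5429 m/kT^(1/3)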
Nonlinear programming extensions to rational function approximations of unsteady aerodynamics
NASA Technical Reports Server (NTRS)
Tiffany, Sherwood H.; Adams, William M., Jr.
1987-01-01
This paper deals with approximating unsteady generalized aerodynamic forces in the equations of motion of a flexible aircraft. Two methods of formulating these approximations are extended to include both the same flexibility in constraining them and the same methodology in optimizing nonlinear parameters as another currently used 'extended least-squares' method. Optimal selection of 'nonlinear' parameters is made in each of the three methods by use of the same nonlinear (nongradient) optimizer. The objective of the nonlinear optimization is to obtain rational approximations to the unsteady aerodynamics whose state-space realization is of lower order than that required when no optimization of the nonlinear terms is performed. The free 'linear' parameters are determined using least-squares matrix techniques on a Lagrange multiplier formulation of an objective function which incorporates selected linear equality constraints. State-space mathematical models resulting from the different approaches are described, and results are presented which show comparative evaluations from application of each of the extended methods to a numerical example. The results obtained for the example problem show a significant (up to 63 percent) reduction in the number of differential equations used to represent the unsteady aerodynamic forces in linear time-invariant equations of motion as compared to a conventional method in which nonlinear terms are not optimized.
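A minimal sketch of the linear-parameter step mentioned above: an equality-constrained linear least-squares fit solved through the Lagrange-multiplier (KKT) system. Here A, b and the constraint C x = d are random stand-ins for the tabular aerodynamic data and the selected equality constraints; this is a generic formulation, not the paper's specific one.

import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(40, 6))             # least-squares data matrix
b = rng.normal(size=40)
C = np.array([[1.0, 0, 0, 0, 0, 0],      # e.g. pin one coefficient to a value
              [0, 1.0, -1.0, 0, 0, 0]])  # e.g. tie two coefficients together
d = np.array([0.5, 0.0])

n, m = A.shape[1], C.shape[0]
# stationarity of ||Ax - b||^2 + lambda^T (Cx - d) plus the constraint itself
KKT = np.block([[2*A.T @ A, C.T],
                [C,         np.zeros((m, m))]])
rhs = np.concatenate([2*A.T @ b, d])
x = np.linalg.solve(KKT, rhs)[:n]
print(x, C @ x)                          # constraints hold to machine precision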
General linear methods and friends: Toward efficient solutions of multiphysics problems
NASA Astrophysics Data System (ADS)
Sandu, Adrian
2017-07-01
Time dependent multiphysics partial differential equations are of great practical importance as they model diverse phenomena that appear in mechanical and chemical engineering, aeronautics, astrophysics, meteorology and oceanography, financial modeling, environmental sciences, etc. There is no single best time discretization for the complex multiphysics systems of practical interest. We discuss "multimethod" approaches that combine different time steps and discretizations using the rigorous frameworks provided by Partitioned General Linear Methods and Generalized-Structure Additive Runge-Kutta Methods.
An Application to the Prediction of LOD Change Based on General Regression Neural Network
NASA Astrophysics Data System (ADS)
Zhang, X. H.; Wang, Q. J.; Zhu, J. J.; Zhang, H.
2011-07-01
Traditional prediction of the LOD (length of day) change was based on linear models, such as the least squares model and the autoregressive technique. Due to the complex non-linear features of the LOD variation, the performance of linear model predictors is not fully satisfactory. This paper applies a non-linear neural network, the general regression neural network (GRNN), to forecast the LOD change, and the results are analyzed and compared with those obtained with the back propagation neural network and other models. The comparison shows that the performance of the GRNN model in the prediction of the LOD change is efficient and feasible.
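A minimal sketch of a general regression neural network, i.e. Nadaraya-Watson kernel regression, used to predict the next value of a series from lagged values. The synthetic series standing in for the LOD data, the lag depth and the smoothing parameter sigma are illustrative assumptions, not the paper's configuration.

import numpy as np

def grnn_predict(X_train, y_train, x_new, sigma=0.5):
    # Gaussian-kernel-weighted average of the training targets
    d2 = np.sum((X_train - x_new)**2, axis=1)
    w = np.exp(-d2 / (2.0*sigma**2))
    return np.dot(w, y_train) / (np.sum(w) + 1e-12)

rng = np.random.default_rng(0)
series = np.sin(np.arange(300)*0.1) + 0.05*rng.normal(size=300)  # stand-in series
lags = 5
X = np.array([series[i:i+lags] for i in range(len(series)-lags)])
y = series[lags:]
x_last = series[-lags:]
print(grnn_predict(X[:-1], y[:-1], x_last))      # one-step-ahead prediction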
Health-related quality-of-life in low-income, uninsured men with prostate cancer.
Krupski, Tracey L; Fink, Arlene; Kwan, Lorna; Maliski, Sally; Connor, Sarah E; Clerkin, Barbara; Litwin, Mark S
2005-05-01
The objective was to describe health-related quality-of-life (HRQOL) in low-income men with prostate cancer. Subjects were drawn from a statewide public assistance prostate cancer program. Telephone and mail surveys included the RAND 12-item Health Survey and UCLA Prostate Cancer Index Short Form and were compared with normative age-matched men without cancer from the general population reported on in the literature. Of 286 eligible men, 233 (81%) agreed to participate and completed the necessary items. The sample consisted of 51% Hispanics, 23% non-Hispanic whites, and 17% African Americans. The low-income men had worse scores in every domain of prostate-specific and general HRQOL than had the age-matched general population controls. The degree of disparity indicated substantial clinical differences in almost every domain of physical and emotional functioning between the sample group and the control group. Linear regression modeling determined that among the low-income men, Hispanic race, and income level were predictive of worse physical functioning, whereas only comorbidities predicted mental health. Low-income patients with prostate cancer appear to have quality-of-life profiles that are meaningfully worse than age-matched men from the general population without cancer reported on in the literature.
Program for the solution of multipoint boundary value problems of quasilinear differential equations
NASA Technical Reports Server (NTRS)
1973-01-01
Linear equations are solved by a method of superposition of solutions of a sequence of initial value problems. For nonlinear equations and/or boundary conditions, the solution is iterative, and in each iteration a problem like the linear case is solved. A simple Taylor series expansion is used for the linearization of both nonlinear equations and nonlinear boundary conditions. The perturbation method of solution is used in preference to quasilinearization because of programming ease and smaller storage requirements; experiments indicate that the desired convergence properties exist, although no proof of convergence is given.
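A minimal sketch of the superposition idea for a linear two-point boundary value problem y'' = q(x)*y + r(x), y(a) = alpha, y(b) = beta: integrate one particular and one homogeneous initial-value problem and take the linear combination that hits the right-hand boundary value. The specific q, r and boundary data are illustrative, and the multipoint and nonlinear machinery of the program is not reproduced here.

import numpy as np
from scipy.integrate import solve_ivp

a, b, alpha, beta = 0.0, 1.0, 1.0, 2.0
q = lambda x: 1.0 + x
r = lambda x: np.sin(np.pi*x)

def rhs(x, Y):        # particular problem, Y = [y, y']
    return [Y[1], q(x)*Y[0] + r(x)]

def rhs_hom(x, Y):    # homogeneous problem
    return [Y[1], q(x)*Y[0]]

sol_p = solve_ivp(rhs, (a, b), [alpha, 0.0], dense_output=True, rtol=1e-8)
sol_h = solve_ivp(rhs_hom, (a, b), [0.0, 1.0], dense_output=True, rtol=1e-8)
c = (beta - sol_p.y[0, -1]) / sol_h.y[0, -1]   # match the boundary value at b

x = np.linspace(a, b, 5)
y = sol_p.sol(x)[0] + c*sol_h.sol(x)[0]
print(y)              # y(0) = alpha and y(1) = beta up to tolerance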
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hutchinson, S.A.; Shadid, J.N.; Tuminaro, R.S.
1995-10-01
Aztec is an iterative library that greatly simplifies the parallelization process when solving the linear systems of equations Ax = b where A is a user supplied n x n sparse matrix, b is a user supplied vector of length n and x is a vector of length n to be computed. Aztec is intended as a software tool for users who want to avoid cumbersome parallel programming details but who have large sparse linear systems which require an efficiently utilized parallel processing system. A collection of data transformation tools are provided that allow for easy creation of distributed sparse unstructured matrices for parallel solution. Once the distributed matrix is created, computation can be performed on any of the parallel machines running Aztec: nCUBE 2, IBM SP2 and Intel Paragon, MPI platforms as well as standard serial and vector platforms. Aztec includes a number of Krylov iterative methods such as conjugate gradient (CG), generalized minimum residual (GMRES) and stabilized biconjugate gradient (BICGSTAB) to solve systems of equations. These Krylov methods are used in conjunction with various preconditioners such as polynomial or domain decomposition methods using LU or incomplete LU factorizations within subdomains. Although the matrix A can be general, the package has been designed for matrices arising from the approximation of partial differential equations (PDEs). In particular, the Aztec package is oriented toward systems arising from PDE applications.
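A minimal serial sketch of the Krylov plus incomplete-LU combination described above, using SciPy's solvers rather than Aztec itself: build a sparse PDE-like matrix, wrap an ILU factorization as a preconditioner, and solve with GMRES. The matrix size and ILU settings are illustrative choices.

import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 50                                           # 2-D Poisson-type test matrix
I = sp.identity(n)
T = sp.diags([-1, 4, -1], [-1, 0, 1], shape=(n, n))
A = (sp.kron(I, T) + sp.kron(sp.diags([-1, -1], [-1, 1], shape=(n, n)), I)).tocsc()
b = np.ones(A.shape[0])

ilu = spla.spilu(A, drop_tol=1e-4, fill_factor=10)      # incomplete LU factors
M = spla.LinearOperator(A.shape, ilu.solve)             # preconditioner operator
x, info = spla.gmres(A, b, M=M, restart=30)
print(info, np.linalg.norm(A @ x - b))                  # info == 0 on success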