Sample records for linear programming based

  1. Portfolio optimization by using linear programming models based on genetic algorithm

    NASA Astrophysics Data System (ADS)

    Sukono; Hidayat, Y.; Lesmana, E.; Putra, A. S.; Napitupulu, H.; Supian, S.

    2018-01-01

    In this paper, we discuss investment portfolio optimization using a linear programming model based on genetic algorithms. It is assumed that portfolio risk is measured by absolute standard deviation and that each investor has a risk tolerance for the investment portfolio. The optimization problem is formulated as a linear programming model, and the optimum solution is then determined using a genetic algorithm. As a numerical illustration, we analyze several stocks traded on the Indonesian capital market. The analysis shows that portfolio optimization by the genetic algorithm approach produces a more efficient portfolio than optimization by a standard linear programming algorithm. Genetic algorithms can therefore be considered an alternative for investment portfolio optimization, particularly with linear programming models.
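
    As an illustration of the kind of model this record describes, below is a minimal sketch of a mean-absolute-deviation portfolio LP solved with SciPy's linprog. The scenario returns and the risk tolerance are invented, and a genetic algorithm could be substituted for the LP solver as the abstract proposes; this is not the authors' actual formulation.

```python
# Hypothetical sketch: mean-absolute-deviation (MAD) portfolio LP solved with
# scipy's linprog. Scenario returns and the risk tolerance tau are made up.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
T, n = 60, 4                           # scenarios (months), assets
R = rng.normal(0.01, 0.05, (T, n))     # hypothetical monthly returns
mu = R.mean(axis=0)
D = R - mu                             # deviations from the mean
tau = 0.03                             # investor's risk tolerance on MAD

# decision vector z = [x_1..x_n (weights), y_1..y_T (|deviation| helpers)]
c = np.concatenate([-mu, np.zeros(T)])           # maximize expected return

A_ub, b_ub = [], []
for t in range(T):                               # y_t >= +/- D[t] @ x
    A_ub.append(np.concatenate([ D[t], -np.eye(T)[t]])); b_ub.append(0.0)
    A_ub.append(np.concatenate([-D[t], -np.eye(T)[t]])); b_ub.append(0.0)
A_ub.append(np.concatenate([np.zeros(n), np.ones(T) / T]))   # MAD <= tau
b_ub.append(tau)

A_eq = [np.concatenate([np.ones(n), np.zeros(T)])]           # weights sum to 1
b_eq = [1.0]

res = linprog(c, A_ub=np.array(A_ub), b_ub=b_ub,
              A_eq=np.array(A_eq), b_eq=b_eq,
              bounds=[(0, None)] * (n + T), method="highs")
print("weights:", np.round(res.x[:n], 3), "expected return:", -res.fun)
```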

  2. Optimization Research of Generation Investment Based on Linear Programming Model

    NASA Astrophysics Data System (ADS)

    Wu, Juan; Ge, Xueqian

    Linear programming is an important branch of operational research and a mathematical method that supports scientific management. GAMS is an advanced simulation and optimization modeling language that combines complex mathematical programming, such as linear programming (LP), nonlinear programming (NLP), and mixed-integer programming (MIP), with system simulation. In this paper, optimal generation investment decision-making is simulated and analyzed based on a linear programming model. The optimal installed capacity of power plants and the final total cost are obtained, providing a rational basis for optimized investment decisions.

  3. PubMed

    Trinker, Horst

    2011-10-28

    We study the distribution of triples of codewords of codes and ordered codes. Schrijver [A. Schrijver, New code upper bounds from the Terwilliger algebra and semidefinite programming, IEEE Trans. Inform. Theory 51 (8) (2005) 2859-2866] used the triple distribution of a code to establish a bound on the number of codewords based on semidefinite programming. In the first part of this work, we generalize this approach for ordered codes. In the second part, we consider linear codes and linear ordered codes and present a MacWilliams-type identity for the triple distribution of their dual code. Based on the non-negativity of this linear transform, we establish a linear programming bound and conclude with a table of parameters for which this bound yields better results than the standard linear programming bound.
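
    For context, the "standard linear programming bound" this record improves on can be written down directly. The sketch below sets up the classical Delsarte LP bound for binary codes with SciPy's linprog; the parameters n and d are arbitrary examples, and the Krawtchouk-based constraints follow the textbook formulation rather than the paper's triple-distribution refinement.

```python
# Sketch of the classical (Delsarte) linear programming bound for binary codes.
import numpy as np
from math import comb
from scipy.optimize import linprog

def krawtchouk(n, k, i):
    return sum((-1) ** j * comb(i, j) * comb(n - i, k - j) for j in range(k + 1))

def lp_bound(n, d):
    idx = list(range(d, n + 1))              # distances that may occur
    c = -np.ones(len(idx))                   # maximize 1 + sum A_i
    # Delsarte inequalities: sum_i A_i K_k(i) >= -C(n,k)  for k = 1..n
    A_ub = np.array([[-krawtchouk(n, k, i) for i in idx]
                     for k in range(1, n + 1)], dtype=float)
    b_ub = np.array([comb(n, k) for k in range(1, n + 1)], dtype=float)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * len(idx),
                  method="highs")
    return 1.0 - res.fun                     # upper bound on the code size

print(lp_bound(8, 4))    # prints an LP upper bound on A(8, 4)
```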

  4. A Comparison of Linear and Systems Thinking Approaches for Program Evaluation Illustrated Using the Indiana Interdisciplinary GK-12

    ERIC Educational Resources Information Center

    Dyehouse, Melissa; Bennett, Deborah; Harbor, Jon; Childress, Amy; Dark, Melissa

    2009-01-01

    Logic models are based on linear relationships between program resources, activities, and outcomes, and have been used widely to support both program development and evaluation. While useful in describing some programs, the linear nature of the logic model makes it difficult to capture the complex relationships within larger, multifaceted…

  5. Applications of Goal Programming to Education.

    ERIC Educational Resources Information Center

    Van Dusseldorp, Ralph A.; And Others

    This paper discusses goal programming, a computer-based operations research technique that is basically a modification and extension of linear programming. The authors first discuss the similarities and differences between goal programming and linear programming, then describe the limitations of goal programming and its possible applications for…

  6. On the linear programming bound for linear Lee codes.

    PubMed

    Astola, Helena; Tabus, Ioan

    2016-01-01

    Based on an invariance-type property of the Lee-compositions of a linear Lee code, additional equality constraints can be introduced into the linear programming problem of linear Lee codes. In this paper, we formulate this property in terms of an action of the multiplicative group of the field [Formula: see text] on the set of Lee-compositions. We show some useful properties of certain sums of Lee-numbers, which are the eigenvalues of the Lee association scheme, appearing in the linear programming problem of linear Lee codes. Using the additional equality constraints, we formulate the linear programming problem of linear Lee codes in a very compact form, leading to fast execution, which allows the bounds to be computed efficiently for large parameter values of the linear codes.

  7. Object matching using a locally affine invariant and linear programming techniques.

    PubMed

    Li, Hongsheng; Huang, Xiaolei; He, Lei

    2013-02-01

    In this paper, we introduce a new matching method based on a novel locally affine-invariant geometric constraint and linear programming techniques. To model and solve the matching problem in a linear programming formulation, all geometric constraints should be able to be reformulated, exactly or approximately, into a linear form; this is a major difficulty for this kind of matching algorithm. We propose a novel locally affine-invariant constraint which can be exactly linearized and requires far fewer auxiliary variables than other linear programming based methods. The key idea behind it is that each point in the template point set can be exactly represented by an affine combination of its neighboring points, whose weights can be solved easily by least squares. The errors of reconstructing each matched point using these weights are used to penalize the disagreement of geometric relationships between the template points and the matched points. The resulting overall objective function can be solved efficiently by linear programming techniques. Our experimental results on both rigid and nonrigid object matching show the effectiveness of the proposed algorithm.
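
    The core step of the abstract, representing a template point as an affine combination of its neighbors and solving for the weights by least squares, can be sketched in a few lines. The 2-D points below are synthetic placeholders, not data from the paper.

```python
# Minimal sketch: affine reconstruction weights for a template point, solved
# by least squares. Points and neighbours are synthetic 2-D examples.
import numpy as np

def affine_weights(p, neighbours):
    """Weights w with sum(w) = 1 and neighbours.T @ w ~= p (least squares)."""
    k = neighbours.shape[0]
    A = np.vstack([neighbours.T, np.ones((1, k))])   # append affine constraint
    b = np.append(p, 1.0)
    w, *_ = np.linalg.lstsq(A, b, rcond=None)
    return w

p = np.array([1.0, 2.0])                              # template point
Q = np.array([[0.0, 0.0], [3.0, 1.0], [1.0, 4.0]])    # its neighbours
w = affine_weights(p, Q)
print("weights:", w, "reconstruction:", Q.T @ w)

# During matching, the same weights applied to the matched neighbours give a
# predicted location; the reconstruction error penalizes geometric distortion.
print("reconstruction error:", np.linalg.norm(Q.T @ w - p))
```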

  8. High profile students’ growth of mathematical understanding in solving linear programming problems

    NASA Astrophysics Data System (ADS)

    Utomo; Kusmayadi, TA; Pramudya, I.

    2018-04-01

    Linear programming plays an important role in everyday life. It is taught at the senior high school and college levels and is applied in economics, transportation, the military and other fields, so mastering it is useful preparation for life. This research describes the growth of mathematical understanding in solving linear programming problems based on the Pirie-Kieren model of the growth of understanding, using a qualitative approach. The subjects were two grade XI students with high profiles in Salatiga city, chosen on the basis of the growth of understanding shown in a classroom test; their marks on the prerequisite material were ≥ 75. Both subjects were interviewed to examine their growth of mathematical understanding in solving linear programming problems. The findings show that the subjects often folded back to the primitive knowing level before moving forward to the next level, because their primitive understanding was not comprehensive.

  9. Linearized Programming of Memristors for Artificial Neuro-Sensor Signal Processing

    PubMed Central

    Yang, Changju; Kim, Hyongsuk

    2016-01-01

    A linearized programming method for memristor-based neural weights is proposed. The memristor is regarded as an ideal element for implementing a neural synapse because of its embedded functions of analog memory and analog multiplication. Its resistance variation under a voltage input is generally a nonlinear function of time, and linearizing the memristance variation over time greatly eases memristor programming. In this paper, a method utilizing an anti-serial architecture for linear programming of the weights is proposed. The anti-serial architecture is composed of two memristors with opposite polarities; it linearizes the variation of memristance through the complementary actions of the two devices. To program a memristor, an additional memristor with opposite polarity is employed. The linearization effect of weight programming with an anti-serial architecture is investigated, and the memristor bridge synapse, built with two sets of anti-serial memristor architecture, is taken as an application example of the proposed method. Simulations are performed with memristors of both the linear drift model and a nonlinear model. PMID:27548186

  10. Linearized Programming of Memristors for Artificial Neuro-Sensor Signal Processing.

    PubMed

    Yang, Changju; Kim, Hyongsuk

    2016-08-19

    A linearized programming method for memristor-based neural weights is proposed. The memristor is regarded as an ideal element for implementing a neural synapse because of its embedded functions of analog memory and analog multiplication. Its resistance variation under a voltage input is generally a nonlinear function of time, and linearizing the memristance variation over time greatly eases memristor programming. In this paper, a method utilizing an anti-serial architecture for linear programming of the weights is proposed. The anti-serial architecture is composed of two memristors with opposite polarities; it linearizes the variation of memristance through the complementary actions of the two devices. To program a memristor, an additional memristor with opposite polarity is employed. The linearization effect of weight programming with an anti-serial architecture is investigated, and the memristor bridge synapse, built with two sets of anti-serial memristor architecture, is taken as an application example of the proposed method. Simulations are performed with memristors of both the linear drift model and a nonlinear model.
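
    A small, hedged simulation of the linearization argument in these two records is given below, using the ideal linear-drift memristor model with made-up device parameters. In the anti-serial pair the two state variables change in opposite directions, so the series resistance, and hence the programming current, stays constant and each memristance drifts linearly in time. This only illustrates the principle; it is not the authors' bridge-synapse circuit.

```python
# Hypothetical sketch: linear-drift memristor model, single device vs. an
# anti-serial pair under a constant programming voltage. Parameters are
# illustrative only. In the pair, dw1/dt = -dw2/dt, so the series resistance
# is constant and each state variable drifts linearly in time.
import numpy as np

R_on, R_off, k = 100.0, 16e3, 1e6      # ohms, ohms, drift constant (made up)
V, dt, steps = 1.0, 1e-6, 5000

def memristance(w):                     # w in [0, 1]
    return R_on * w + R_off * (1.0 - w)

# single memristor: nonlinear-in-time programming
w = 0.1
single = []
for _ in range(steps):
    i = V / memristance(w)
    w = np.clip(w + k * i * dt, 0.0, 1.0)
    single.append(w)

# anti-serial pair (opposite polarities): w1 rises while w2 falls
w1, w2 = 0.1, 0.9
pair = []
for _ in range(steps):
    i = V / (memristance(w1) + memristance(w2))   # constant: w1 + w2 is fixed
    w1 = np.clip(w1 + k * i * dt, 0.0, 1.0)
    w2 = np.clip(w2 - k * i * dt, 0.0, 1.0)
    pair.append(w1)

print("single-device state (start/mid/end):", single[0], single[steps // 2], single[-1])
print("anti-serial state   (start/mid/end):", pair[0], pair[steps // 2], pair[-1])
```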

  11. Optimal blood glucose control in diabetes mellitus treatment using dynamic programming based on Ackerman’s linear model

    NASA Astrophysics Data System (ADS)

    Pradanti, Paskalia; Hartono

    2018-03-01

    Determination of the insulin injection dose in diabetes mellitus treatment can be considered an optimal control problem. This article aims to simulate optimal blood glucose control for a patient with diabetes mellitus. The blood glucose regulation of the diabetic patient is represented by Ackerman's linear model, and the problem is then solved using a dynamic programming method. The desired blood glucose level is obtained by minimizing a performance index in Lagrange form. The results show that dynamic programming based on Ackerman's linear model solves the problem well.
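
    As a rough sketch of the dynamic programming step, the code below runs a finite-horizon discrete-time LQR (a quadratic Lagrange-type performance index) on a generic two-state linear glucose-insulin model. The system matrices are hypothetical placeholders, not Ackerman's identified parameters.

```python
# Hedged sketch: finite-horizon dynamic programming (discrete-time LQR) for a
# generic two-state linear model x = [glucose deviation, insulin deviation].
# The matrices below are assumptions for illustration only.
import numpy as np

A = np.array([[0.97, -0.20],      # state transition (assumed)
              [0.05,  0.90]])
B = np.array([[0.0],              # insulin infusion enters the second state
              [0.1]])
Q = np.diag([1.0, 0.01])          # penalize glucose deviation most
R = np.array([[0.1]])             # penalize injected dose
N = 50                            # horizon (sampling periods)

# Backward Riccati recursion (the dynamic-programming step)
P = Q.copy()
gains = []
for _ in range(N):
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    P = Q + A.T @ P @ A - A.T @ P @ B @ K
    gains.append(K)
gains.reverse()                   # gains[k] is the feedback gain at step k

# Forward simulation from an elevated glucose level
x = np.array([2.0, 0.0])
for k in range(N):
    u = -gains[k] @ x             # optimal insulin adjustment
    x = A @ x + B @ u
print("final state deviation:", x)
```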

  12. A duality approach for solving bounded linear programming problems with fuzzy variables based on ranking functions and its application in bounded transportation problems

    NASA Astrophysics Data System (ADS)

    Ebrahimnejad, Ali

    2015-08-01

    Several methods exist in the literature for solving fuzzy variable linear programming problems (fuzzy linear programs in which the right-hand-side vectors and decision variables are represented by trapezoidal fuzzy numbers). In this paper, the shortcomings of some existing methods are pointed out, and to overcome them a new method based on the bounded dual simplex method is proposed for determining the fuzzy optimal solution of fuzzy variable linear programming problems in which some or all variables are restricted to lie within lower and upper bounds. To illustrate the proposed method, an application example is solved and the results are given, and the advantages of the proposed method over existing methods are discussed. An application of the algorithm to bounded transportation problems with fuzzy supplies and demands is also presented. The proposed method is easy to understand and apply for determining the fuzzy optimal solution of bounded fuzzy variable linear programming problems occurring in real-life situations.

  13. Nutrient density score of typical Indonesian foods and dietary formulation using linear programming.

    PubMed

    Jati, Ignasius Radix A P; Vadivel, Vellingiri; Nöhr, Donatus; Biesalski, Hans Konrad

    2012-12-01

    The present research aimed to analyse the nutrient density (ND), nutrient adequacy score (NAS) and energy density (ED) of Indonesian foods and to formulate a balanced diet using linear programming. Data on typical Indonesian diets were obtained from the Indonesian Socio-Economic Survey 2008. ND was investigated for 122 Indonesian foods. NAS was calculated for single nutrients such as Fe, Zn and vitamin A. Correlation analysis was performed between ND and ED, as well as between monthly expenditure class and food consumption pattern in Indonesia. Linear programming calculations were performed using the software POM-QM for Windows version 3. The setting was the Republic of Indonesia, 2008, with public households (n 68,800) as the study population. Vegetables had the highest ND of the food groups, followed by animal-based foods, fruits and staple foods. Based on NAS, the top ten food items for each food group were identified. Most of the staple foods had high ED and contributed towards daily energy fulfillment, followed by animal-based foods, vegetables and fruits. Commodities with high ND tended to have low ED. Linear programming could be used to formulate a balanced diet. In contrast to staple foods, purchases of fruit, vegetables and animal-based foods increased with rising monthly expenditure. People should select food items based on ND and NAS to alleviate micronutrient deficiencies in Indonesia. Dietary formulation calculated using linear programming to achieve RDA levels for micronutrients could be recommended for different age groups of the Indonesian population.

  14. Probabilistic dual heuristic programming-based adaptive critic

    NASA Astrophysics Data System (ADS)

    Herzallah, Randa

    2010-02-01

    Adaptive critic (AC) methods have common roots as generalisations of dynamic programming for neural reinforcement learning approaches. Since they approximate the dynamic programming solutions, they are potentially suitable for learning in noisy, non-linear and non-stationary environments. In this study, a novel probabilistic dual heuristic programming (DHP)-based AC controller is proposed. Distinct to current approaches, the proposed probabilistic (DHP) AC method takes uncertainties of forward model and inverse controller into consideration. Therefore, it is suitable for deterministic and stochastic control problems characterised by functional uncertainty. Theoretical development of the proposed method is validated by analytically evaluating the correct value of the cost function which satisfies the Bellman equation in a linear quadratic control problem. The target value of the probabilistic critic network is then calculated and shown to be equal to the analytically derived correct value. Full derivation of the Riccati solution for this non-standard stochastic linear quadratic control problem is also provided. Moreover, the performance of the proposed probabilistic controller is demonstrated on linear and non-linear control examples.

  15. Train repathing in emergencies based on fuzzy linear programming.

    PubMed

    Meng, Xuelei; Cui, Bingmou

    2014-01-01

    Train pathing is a typical problem of assigning train trips to sets of rail segments, such as rail tracks and links. This paper focuses on the train pathing problem of determining the paths of train trips in emergencies. We analyze the factors influencing train pathing, such as transfer cost, running cost, and the cost of adverse social effects. With overall consideration of segment and station capacity constraints, we build a fuzzy linear programming model to solve the train pathing problem. We design fuzzy membership functions to describe the fuzzy coefficients, and contraction-expansion factors are introduced to contract or expand the value ranges of the fuzzy coefficients, coping with the uncertainty of those ranges. We propose a method based on triangular fuzzy coefficients and transform the fuzzy linear programming model for train pathing into a deterministic linear model. An emergency scenario is constructed based on real data from the Beijing-Shanghai Railway; the model is solved, and the computational results demonstrate the validity of the model and the efficiency of the algorithm.

  16. Problem Based Learning Technique and Its Effect on Acquisition of Linear Programming Skills by Secondary School Students in Kenya

    ERIC Educational Resources Information Center

    Nakhanu, Shikuku Beatrice; Musasia, Amadalo Maurice

    2015-01-01

    The topic Linear Programming is included in the compulsory Kenyan secondary school mathematics curriculum at form four. The topic provides skills for determining best outcomes in a given mathematical model involving some linear relationship. This technique has found application in business, economics as well as various engineering fields. Yet many…

  17. Can Linear Superiorization Be Useful for Linear Optimization Problems?

    PubMed Central

    Censor, Yair

    2017-01-01

    Linear superiorization considers linear programming problems but instead of attempting to solve them with linear optimization methods it employs perturbation resilient feasibility-seeking algorithms and steers them toward reduced (not necessarily minimal) target function values. The two questions that we set out to explore experimentally are (i) Does linear superiorization provide a feasible point whose linear target function value is lower than that obtained by running the same feasibility-seeking algorithm without superiorization under identical conditions? and (ii) How does linear superiorization fare in comparison with the Simplex method for solving linear programming problems? Based on our computational experiments presented here, the answers to these two questions are: “yes” and “very well”, respectively. PMID:29335660
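
    The linear superiorization recipe described here can be sketched generically: a sequential-projection feasibility-seeking sweep over the constraints A x <= b, optionally interleaved with summable perturbation steps in the direction -c. The random instance and step parameters below are illustrative only, not the paper's experimental setup.

```python
# Hedged sketch of linear superiorization: sequential projections onto the
# half-spaces {x : A[i] @ x <= b[i]} (feasibility seeking), with optional
# summable perturbations that nudge the iterate in the direction -c.
import numpy as np

rng = np.random.default_rng(1)
m, n = 40, 10
A = rng.normal(size=(m, n))
b = rng.uniform(1.0, 2.0, size=m)
c = rng.normal(size=n)
c_unit = c / np.linalg.norm(c)

def project_halfspace(x, a, beta):
    """Orthogonal projection of x onto {y : a @ y <= beta}."""
    viol = a @ x - beta
    return x - (viol / (a @ a)) * a if viol > 0 else x

def feasibility_sweeps(x, sweeps, superiorize, alpha=0.995):
    ell = 0
    for _ in range(sweeps):
        if superiorize:                   # superiorization step: reduce c @ x
            x = x - (alpha ** ell) * c_unit
            ell += 1
        for i in range(m):                # one sweep of sequential projections
            x = project_halfspace(x, A[i], b[i])
    return x

x0 = 5.0 * rng.normal(size=n)             # a common, infeasible starting point
x_plain = feasibility_sweeps(x0.copy(), 2000, superiorize=False)
x_sup = feasibility_sweeps(x0.copy(), 2000, superiorize=True)
print("max violation (plain, superiorized):",
      np.max(A @ x_plain - b), np.max(A @ x_sup - b))
print("target value c @ x (plain, superiorized):", c @ x_plain, c @ x_sup)
```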

  18. Users manual for linear Time-Varying Helicopter Simulation (Program TVHIS)

    NASA Technical Reports Server (NTRS)

    Burns, M. R.

    1979-01-01

    A linear time-varying helicopter simulation program (TVHIS) is described. The program is designed as a realistic yet efficient helicopter simulation. It is based on a linear time-varying helicopter model which includes rotor, actuator, and sensor models, as well as a simulation of flight computer logic. The TVHIS can generate a mean trajectory simulation along a nominal trajectory, or propagate covariance of helicopter states, including rigid-body, turbulence, control command, controller states, and rigid-body state estimates.

  19. Mass Optimization of Battery/Supercapacitors Hybrid Systems Based on a Linear Programming Approach

    NASA Astrophysics Data System (ADS)

    Fleury, Benoit; Labbe, Julien

    2014-08-01

    The objective of this paper is to show that, on a specific launcher-type mission profile, a 40% gain of mass is expected using a battery/supercapacitors active hybridization instead of a single battery solution. This result is based on the use of a linear programming optimization approach to perform the mass optimization of the hybrid power supply solution.

  20. SCI Identification (SCIDNT) program user's guide. [maximum likelihood method for linear rotorcraft models

    NASA Technical Reports Server (NTRS)

    1979-01-01

    The computer program Linear SCIDNT which evaluates rotorcraft stability and control coefficients from flight or wind tunnel test data is described. It implements the maximum likelihood method to maximize the likelihood function of the parameters based on measured input/output time histories. Linear SCIDNT may be applied to systems modeled by linear constant-coefficient differential equations. This restriction in scope allows the application of several analytical results which simplify the computation and improve its efficiency over the general nonlinear case.

  1. A novel recurrent neural network with finite-time convergence for linear programming.

    PubMed

    Liu, Qingshan; Cao, Jinde; Chen, Guanrong

    2010-11-01

    In this letter, a novel recurrent neural network based on the gradient method is proposed for solving linear programming problems. Finite-time convergence of the proposed neural network is proved by using the Lyapunov method. Compared with the existing neural networks for linear programming, the proposed neural network is globally convergent to exact optimal solutions in finite time, which is remarkable and rare in the literature of neural networks for optimization. Some numerical examples are given to show the effectiveness and excellent performance of the new recurrent neural network.

  2. Can linear superiorization be useful for linear optimization problems?

    NASA Astrophysics Data System (ADS)

    Censor, Yair

    2017-04-01

    Linear superiorization (LinSup) considers linear programming problems but instead of attempting to solve them with linear optimization methods it employs perturbation resilient feasibility-seeking algorithms and steers them toward reduced (not necessarily minimal) target function values. The two questions that we set out to explore experimentally are: (i) does LinSup provide a feasible point whose linear target function value is lower than that obtained by running the same feasibility-seeking algorithm without superiorization under identical conditions? (ii) How does LinSup fare in comparison with the Simplex method for solving linear programming problems? Based on our computational experiments presented here, the answers to these two questions are: ‘yes’ and ‘very well’, respectively.

  3. [Study on the 3D mathematical model of the muscle groups applied to human mandible by a linear programming method].

    PubMed

    Wang, Dongmei; Yu, Liniu; Zhou, Xianlian; Wang, Chengtao

    2004-02-01

    Four types of 3D mathematical models of the muscle groups acting on the human mandible have been developed. One is based on electromyography (EMG) and the others are based on linear programming with different objective functions. Each model contains 26 muscle forces and two joint forces, allowing simulation of static bite forces and concomitant joint reaction forces for various bite point locations and mandibular positions. In this paper, an image-processing method was used to measure the positions and directions of the muscle forces on a 3D CAD model built from CT data. The MATLAB Optimization Toolbox is applied to solve the three models based on linear programming. Results show that the model with an objective function requiring a minimum sum of the muscle tensions is reasonable and agrees very well with normal physiological activity.
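
    A minimal sketch of the "minimum sum of muscle tensions" linear program is shown below: non-negative muscle forces are sought that satisfy the static equilibrium equations. The equilibrium matrix and load vector are random placeholders rather than anatomical data, and joint forces are not modeled separately here.

```python
# Hedged sketch of the minimum-sum-of-tensions LP: find non-negative muscle
# forces satisfying static force/moment equilibrium. Data are placeholders.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(2)
n_muscles, n_eq = 26, 6                     # 26 muscle forces, 6 equilibrium equations
A_eq = rng.normal(size=(n_eq, n_muscles))   # unit force/moment contributions (assumed)
b_eq = rng.normal(size=n_eq)                # external bite load to balance (assumed)

c = np.ones(n_muscles)                      # minimize the sum of muscle tensions
res = linprog(c, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, 1000.0)] * n_muscles, method="highs")
print("feasible:", res.success)
print("muscle forces (N):", np.round(res.x, 1))
```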

  4. Three-dimensional modeling of flexible pavements : research implementation plan.

    DOT National Transportation Integrated Search

    2006-02-14

    Many asphalt pavement analysis programs are based on linear elastic models. A linear viscoelastic model would be superior to linear elastic models for analyzing the response of asphalt concrete pavements to loads. There is a need to devel...

  5. Automating approximate Bayesian computation by local linear regression.

    PubMed

    Thornton, Kevin R

    2009-07-07

    In several biological contexts, parameter inference often relies on computationally intensive techniques. "Approximate Bayesian Computation", or ABC, methods based on summary statistics have become increasingly popular. A particular flavor of ABC, based on using a linear regression to approximate the posterior distribution of the parameters conditional on the summary statistics, is computationally appealing, yet no standalone tool exists to automate the procedure. Here, I describe a program to implement the method. The software package ABCreg implements the local linear-regression approach to ABC. The advantages are: 1. The code is standalone and fully documented. 2. The program will automatically process multiple data sets and create unique output files for each (which may be processed immediately in R), facilitating the testing of inference procedures on simulated data or the analysis of multiple data sets. 3. The program implements two different transformation methods for the regression step. 4. Analysis options are controlled on the command line by the user, and the program is designed to output warnings for cases where the regression fails. 5. The program does not depend on any particular simulation machinery (coalescent, forward-time, etc.) and is therefore a general tool for processing the results from any simulation. 6. The code is open-source and modular. Examples of applying the software to empirical data from Drosophila melanogaster, and of testing the procedure on simulated data, are shown. In practice, ABCreg simplifies implementing ABC based on local linear regression.
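
    The regression-adjustment step that ABCreg automates can be sketched on a toy example: simulate from the prior, keep the draws whose summary statistics are closest to the observed ones, then fit a local linear regression of the parameter on the summaries and shift each accepted draw to its fitted value at the observed summaries. The normal-mean toy model below is illustrative and unrelated to the package's own examples.

```python
# Hedged sketch of ABC with local linear regression adjustment on a toy model.
import numpy as np

rng = np.random.default_rng(3)
data_obs = rng.normal(2.0, 1.0, size=50)
s_obs = np.array([data_obs.mean(), data_obs.std()])

n_sims, keep = 20000, 500
theta = rng.uniform(-5.0, 5.0, size=n_sims)          # draws from a flat prior
sims = rng.normal(theta[:, None], 1.0, size=(n_sims, 50))
S = np.column_stack([sims.mean(axis=1), sims.std(axis=1)])

dist = np.linalg.norm((S - s_obs) / S.std(axis=0), axis=1)
idx = np.argsort(dist)[:keep]                         # rejection step

# Local linear regression: theta ~ a + (S - s_obs) @ b on the accepted draws,
# then adjust each accepted theta to its fitted value at S = s_obs.
X = np.column_stack([np.ones(keep), S[idx] - s_obs])
coef, *_ = np.linalg.lstsq(X, theta[idx], rcond=None)
theta_adj = theta[idx] - (S[idx] - s_obs) @ coef[1:]

print("posterior mean (rejection only):", theta[idx].mean())
print("posterior mean (regression adjusted):", theta_adj.mean())
```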

  6. Comparison of acrylamide intake from Western and guideline based diets using probabilistic techniques and linear programming.

    PubMed

    Katz, Josh M; Winter, Carl K; Buttrey, Samuel E; Fadel, James G

    2012-03-01

    Western and guideline based diets were compared to determine if dietary improvements resulting from following dietary guidelines reduce acrylamide intake. Acrylamide forms in heat treated foods and is a human neurotoxin and animal carcinogen. Acrylamide intake from the Western diet was estimated with probabilistic techniques using teenage (13-19 years) National Health and Nutrition Examination Survey (NHANES) food consumption estimates combined with FDA data on the levels of acrylamide in a large number of foods. Guideline based diets were derived from NHANES data using linear programming techniques to comport to recommendations from the Dietary Guidelines for Americans, 2005. Whereas the guideline based diets were more properly balanced and rich in consumption of fruits, vegetables, and other dietary components than the Western diets, acrylamide intake (mean±SE) was significantly greater (P<0.001) from consumption of the guideline based diets (0.508±0.003 μg/kg/day) than from consumption of the Western diets (0.441±0.003 μg/kg/day). Guideline based diets contained less acrylamide contributed by French fries and potato chips than Western diets. Overall acrylamide intake, however, was higher in guideline based diets as a result of more frequent breakfast cereal intake. This is believed to be the first example of a risk assessment that combines probabilistic techniques with linear programming and results demonstrate that linear programming techniques can be used to model specific diets for the assessment of toxicological and nutritional dietary components. Copyright © 2011 Elsevier Ltd. All rights reserved.

  7. Novel methods for Solving Economic Dispatch of Security-Constrained Unit Commitment Based on Linear Programming

    NASA Astrophysics Data System (ADS)

    Guo, Sangang

    2017-09-01

    There are two stages in solving security-constrained unit commitment (SCUC) problems within a Lagrangian framework: one is to obtain feasible unit states (UC), the other is the economic dispatch (ED) of power for each unit. An accurate solution of the ED is the more important for enhancing the efficiency of the SCUC solution once the feasible unit states are fixed. Two novel methods, named the Convex Combinatorial Coefficient Method and the Power Increment Method, are proposed for solving the ED as linear programming problems obtained by piecewise linear approximation of the nonlinear convex fuel cost functions. Numerical testing results show that the methods are effective and efficient.
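
    The piecewise-linear idea is easy to show as a single LP, in the spirit of (though not identical to) the paper's Power Increment Method: each unit's output above its minimum is split into segments with non-decreasing incremental costs, and the LP fills the cheapest segments first. The unit data below are invented.

```python
# Hedged sketch of economic dispatch with piecewise-linear fuel cost curves.
import numpy as np
from scipy.optimize import linprog

demand = 400.0
p_min = np.array([50.0, 40.0, 30.0])                  # MW
seg_width = np.array([[40, 40, 40],                   # MW per segment, 3 units x 3 segs
                      [50, 50, 50],
                      [60, 60, 60]], dtype=float)
seg_cost = np.array([[20.0, 24.0, 30.0],              # $/MWh, nondecreasing per unit
                     [18.0, 22.0, 28.0],
                     [25.0, 27.0, 33.0]])
base_cost = 500.0                                     # fixed cost at p_min (lump sum)

n_units, n_segs = seg_width.shape
c = seg_cost.ravel()                                  # variables: segment increments
A_eq = np.ones((1, n_units * n_segs))                 # increments must cover the
b_eq = np.array([demand - p_min.sum()])               # demand above sum of p_min
bounds = [(0.0, w) for w in seg_width.ravel()]

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
increments = res.x.reshape(n_units, n_segs)
outputs = p_min + increments.sum(axis=1)
print("unit outputs (MW):", outputs, "total cost:", base_cost + res.fun)
```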

  8. Linear decomposition approach for a class of nonconvex programming problems.

    PubMed

    Shen, Peiping; Wang, Chunfeng

    2017-01-01

    This paper presents a linear decomposition approach for a class of nonconvex programming problems by dividing the input space into polynomially many grids. It shows that under certain assumptions the original problem can be transformed and decomposed into a polynomial number of equivalent linear programming subproblems. By solving a series of linear programming subproblems corresponding to those grid points, we can obtain a near-optimal solution of the original problem. Compared to existing results in the literature, the proposed algorithm does not require the assumptions of quasi-concavity and differentiability of the objective function, and it differs significantly from them, giving an interesting approach to solving the problem with reduced running time.

  9. Generating AN Optimum Treatment Plan for External Beam Radiation Therapy.

    NASA Astrophysics Data System (ADS)

    Kabus, Irwin

    1990-01-01

    The application of linear programming to the generation of an optimum external beam radiation treatment plan is investigated. MPSX, an IBM linear programming software package was used. All data originated from the CAT scan of an actual patient who was treated for a pancreatic malignant tumor before this study began. An examination of several alternatives for representing the cross section of the patient showed that it was sufficient to use a set of strategically placed points in the vital organs and tumor and a grid of points spaced about one half inch apart for the healthy tissue. Optimum treatment plans were generated from objective functions representing various treatment philosophies. The optimum plans were based on allowing for 216 external radiation beams which accounted for wedges of any size. A beam reduction scheme then reduced the number of beams in the optimum plan to a number of beams small enough for implementation. Regardless of the objective function, the linear programming treatment plan preserved about 95% of the patient's right kidney vs. 59% for the plan the hospital actually administered to the patient. The clinician, on the case, found most of the linear programming treatment plans to be superior to the hospital plan. An investigation was made, using parametric linear programming, concerning any possible benefits derived from generating treatment plans based on objective functions made up of convex combinations of two objective functions, however, this proved to have only limited value. This study also found, through dual variable analysis, that there was no benefit gained from relaxing some of the constraints on the healthy regions of the anatomy. This conclusion was supported by the clinician. Finally several schemes were found that, under certain conditions, can further reduce the number of beams in the final linear programming treatment plan.

  10. A generalized interval fuzzy mixed integer programming model for a multimodal transportation problem under uncertainty

    NASA Astrophysics Data System (ADS)

    Tian, Wenli; Cao, Chengxuan

    2017-03-01

    A generalized interval fuzzy mixed integer programming model is proposed for the multimodal freight transportation problem under uncertainty, in which the optimal mode of transport and the optimal amount of each type of freight transported through each path need to be decided. For practical purposes, three mathematical methods, i.e. the interval ranking method, fuzzy linear programming method and linear weighted summation method, are applied to obtain equivalents of constraints and parameters, and then a fuzzy expected value model is presented. A heuristic algorithm based on a greedy criterion and the linear relaxation algorithm are designed to solve the model.

  11. Linear programming model to develop geodiversity map using utility theory

    NASA Astrophysics Data System (ADS)

    Sepehr, Adel

    2015-04-01

    In this article, the classification and mapping of geodiversity was accomplished using a quantitative methodology based on linear programming, the central idea being that geosites and geomorphosites, as the main indicators of geodiversity, can be evaluated by utility theory. A linear programming method was applied for geodiversity mapping over Khorasan-Razavi province, located in northeastern Iran. The main criteria for distinguishing geodiversity potential in the study area were rock type (lithology), fault position (tectonic processes), karst areas (dynamic processes), the frequency of aeolian landforms, and surface river forms. These parameters were investigated using thematic maps, including geology, topography and geomorphology maps at scales of 1:100,000, 1:50,000 and 1:250,000, imagery data including SPOT and ETM+ (Landsat 7), and direct field operations. The geological thematic layer was simplified from the original map using a practical lithologic criterion based on a primary genetic classification of rocks into metamorphic, igneous and sedimentary. The geomorphology map was produced using a 30 m DEM extracted from ASTER data, the geology map and Google Earth images. The geology map shows the tectonic status, and the geomorphology map indicates dynamic processes and landforms (karst, aeolian and river). Then, according to utility theory algorithms, we proposed a linear program to classify the degree of geodiversity in the study area based on geology/morphology parameters. The algorithm consisted of a linear function maximizing geodiversity subject to constraints in the form of linear equations. The results of this research indicate three classes of geodiversity potential: low, medium and high. The geodiversity potential shows favourable conditions in the karstic areas and the aeolian landscape. The utility theory used in the research also decreased the uncertainty of the evaluations.

  12. Linear programming computational experience with onyx

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Atrek, E.

    1994-12-31

    ONYX is a linear programming software package based on an efficient variation of the gradient projection method. When fully configured, it is intended for application to industrial size problems. While the computational experience is limited at the time of this abstract, the technique is found to be robust and competitive with existing methodology in terms of both accuracy and speed. An overview of the approach is presented together with a description of program capabilities, followed by a discussion of up-to-date computational experience with the program. Conclusions include advantages of the approach and envisioned future developments.

  13. From diets to foods: using linear programming to formulate a nutritious, minimum-cost porridge mix for children aged 1 to 2 years.

    PubMed

    De Carvalho, Irene Stuart Torrié; Granfeldt, Yvonne; Dejmek, Petr; Håkansson, Andreas

    2015-03-01

    Linear programming has been used extensively as a tool for nutritional recommendations. Extending the methodology to food formulation presents new challenges, since not all combinations of nutritious ingredients will produce an acceptable food; doing so would also help in implementation and in ensuring the feasibility of the suggested recommendations. The objective was to extend the previously used linear programming methodology from diet optimization to food formulation using consistency constraints, and to exemplify its usability with the case of a porridge mix formulation for emergency situations in rural Mozambique. The linear programming method was extended with a consistency constraint based on previously published empirical studies of starch swelling in soft porridges. The new method was exemplified by formulating a nutritious, minimum-cost porridge mix for children aged 1 to 2 years for use as a complete relief food, based primarily on local ingredients, in rural Mozambique. A nutritious porridge fulfilling the consistency constraints was found; however, the minimum cost was unfeasible with local ingredients only, which illustrates the challenge of formulating nutritious yet economically feasible foods from local ingredients. The high cost was caused by the high cost of mineral-rich foods. A nutritious, low-cost porridge that fulfills the consistency constraints was obtained by including supplements of zinc and calcium salts as ingredients. The optimizations were successful in fulfilling all constraints and provided a feasible porridge, showing that the extended constrained linear programming methodology provides a systematic tool for designing nutritious foods.
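
    A toy version of the extended formulation is sketched below: a minimum-cost ingredient mix subject to nutrient minima plus one additional linear "consistency" constraint capping total starch, standing in for the swelling-based constraint of the paper. All ingredient data are invented for illustration.

```python
# Hedged sketch: minimum-cost porridge mix with nutrient minima and a linear
# consistency (starch) cap. Ingredient data are invented placeholders.
import numpy as np
from scipy.optimize import linprog

ingredients = ["maize flour", "groundnut", "dried fish", "sugar", "oil"]
cost = np.array([0.04, 0.15, 0.30, 0.06, 0.12])        # cost per 10 g (assumed)

# rows: energy (kcal), protein (g), iron (mg), starch (g) per 10 g of ingredient
nutrients = np.array([
    [36.0, 57.0, 30.0, 39.0, 88.0],    # energy
    [0.9,  2.6,  6.2,  0.0,  0.0],     # protein
    [0.2,  0.5,  0.8,  0.0,  0.0],     # iron
    [7.0,  1.5,  0.0,  0.0,  0.0],     # starch (drives porridge thickness)
])
need = np.array([200.0, 6.0, 2.0])     # per-meal targets for energy, protein, iron
starch_max = 25.0                      # consistency constraint: starch <= 25 g

# minimize cost subject to nutrient minima and the starch (consistency) maximum
A_ub = np.vstack([-nutrients[:3], nutrients[3:4]])
b_ub = np.concatenate([-need, [starch_max]])
res = linprog(cost, A_ub=A_ub, b_ub=b_ub,
              bounds=[(0.0, 20.0)] * len(ingredients), method="highs")
print(dict(zip(ingredients, np.round(res.x, 2))), "cost:", round(res.fun, 3))
```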

  14. DYGABCD: A program for calculating linear A, B, C, and D matrices from a nonlinear dynamic engine simulation

    NASA Technical Reports Server (NTRS)

    Geyser, L. C.

    1978-01-01

    A digital computer program, DYGABCD, was developed that generates linearized, dynamic models of simulated turbofan and turbojet engines. DYGABCD is based on an earlier computer program, DYNGEN, that is capable of calculating simulated nonlinear steady-state and transient performance of one- and two-spool turbojet engines or two- and three-spool turbofan engines. Most control design techniques require linear system descriptions. For multiple-input/multiple-output systems such as turbine engines, state space matrix descriptions of the system are often desirable. DYGABCD computes the state space matrices commonly referred to as the A, B, C, and D matrices required for a linear system description. The report discusses the analytical approach and provides a users manual, FORTRAN listings, and a sample case.

  15. Refining and end use study of coal liquids II - linear programming analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lowe, C.; Tam, S.

    1995-12-31

    A DOE-funded study is underway to determine the optimum refinery processing schemes for producing transportation fuels that will meet CAAA regulations from direct and indirect coal liquids. The study consists of three major parts: pilot plant testing of critical upgrading processes, linear programming analysis of different processing schemes, and engine emission testing of final products. Currently, fractions of a direct coal liquid produced from bituminous coal are being tested in a sequence of pilot plant upgrading processes; this work is discussed in a separate paper. The linear programming model, which is the subject of this paper, has been completed for the petroleum refinery and is being modified to handle coal liquids based on the pilot plant test results. Preliminary coal liquid evaluation studies indicate that, if a refinery expansion scenario is adopted, the marginal value of the coal liquid (over the base petroleum crude) is $3-4/bbl.

  16. A novel approach based on preference-based index for interval bilevel linear programming problem.

    PubMed

    Ren, Aihong; Wang, Yuping; Xue, Xingsi

    2017-01-01

    This paper proposes a new methodology for solving the interval bilevel linear programming problem in which all coefficients of both objective functions and constraints are considered as interval numbers. In order to keep as much uncertainty of the original constraint region as possible, the original problem is first converted into an interval bilevel programming problem with interval coefficients in both objective functions only through normal variation of interval number and chance-constrained programming. With the consideration of different preferences of different decision makers, the concept of the preference level that the interval objective function is preferred to a target interval is defined based on the preference-based index. Then a preference-based deterministic bilevel programming problem is constructed in terms of the preference level and the order relation [Formula: see text]. Furthermore, the concept of a preference δ-optimal solution is given. Subsequently, the constructed deterministic nonlinear bilevel problem is solved with the help of estimation of distribution algorithm. Finally, several numerical examples are provided to demonstrate the effectiveness of the proposed approach.

  17. Fast intersection detection algorithm for PC-based robot off-line programming

    NASA Astrophysics Data System (ADS)

    Fedrowitz, Christian H.

    1994-11-01

    This paper presents a method for fast and reliable collision detection in complex production cells. The algorithm is part of the PC-based robot off-line programming system of the University of Siegen (Ropsus). The method is based on a solid model managed by a simplified constructive solid geometry (CSG) model. The collision detection problem is divided into two steps. In the first step the complexity of the problem is reduced in linear time. In the second step the remaining solids are tested for intersection. For this the simplex algorithm, known from linear optimization, is used: it computes a point common to two convex polyhedra, and the polyhedra intersect if such a point exists. With the simplified geometrical model of Ropsus the algorithm also runs in linear time, so in conjunction with the first step the resulting collision detection algorithm requires linear time overall. Moreover, it computes the resulting intersection polyhedron using the dual transformation.

  18. TI-59 Programs for Multiple Regression.

    DTIC Science & Technology

    1980-05-01

    The general linear hypothesis model of full rank [Graybill, 1961] can be written as Y = Xβ + ε, ε ~ N(0, σ²I), where Y is the n×1 vector of observations, X is the n×k design matrix, β is the k×1 vector of coefficients and ε is the n×1 error vector. ... A "reduced model" solution, and confidence intervals for linear functions of the coefficients, can be obtained using (X'X) and σ², based on the t... PROGRAM DESCRIPTION: for the general linear hypothesis model Y = Xβ + ε, calculates...

  19. Teaching Machines and Programmed Instruction.

    ERIC Educational Resources Information Center

    Kay, Harry; And Others

    The various devices used in programed instruction range from the simple linear programed book to branching and skip branching programs, adaptive teaching machines, and even complex computer based systems. In order to provide a background for the would-be programer, the essential principles of each of these devices is outlined. Different ideas of…

  20. QuBiLS-MIDAS: a parallel free-software for molecular descriptors computation based on multilinear algebraic maps.

    PubMed

    García-Jacas, César R; Marrero-Ponce, Yovani; Acevedo-Martínez, Liesner; Barigye, Stephen J; Valdés-Martiní, José R; Contreras-Torres, Ernesto

    2014-07-05

    The present report introduces the QuBiLS-MIDAS software, belonging to the ToMoCoMD-CARDD suite, for the calculation of three-dimensional molecular descriptors (MDs) based on two-linear (bilinear), three-linear, and four-linear (multilinear or N-linear) algebraic forms. It is thus unique software that computes these tensor-based indices. These descriptors establish relations for two, three, and four atoms by using several (dis-)similarity metrics or multi-metrics, matrix transformations, cutoffs, local calculations and aggregation operators. The theoretical background of these N-linear indices is also presented. The QuBiLS-MIDAS software was developed in the Java programming language and employs the Chemistry Development Kit library for the manipulation of chemical structures and the calculation of atomic properties. The software is composed of a user-friendly desktop interface and an Abstract Programming Interface library; the former was created to simplify the configuration of the different options of the MDs, whereas the library was designed to allow its easy integration into other chemoinformatics software. The program provides functionalities for data cleaning tasks and for batch processing of the molecular indices. In addition, it offers parallel calculation of the MDs through the use of all available processors in current computers. Complexity studies of the main algorithms demonstrate that they were implemented efficiently with respect to a trivial implementation. Lastly, the performance tests reveal that the software scales well as the number of processors is increased. The QuBiLS-MIDAS software therefore constitutes a useful application for the computation of molecular indices based on N-linear algebraic maps and can be used freely for chemoinformatics studies. Copyright © 2014 Wiley Periodicals, Inc.

  1. LDRD final report on massively-parallel linear programming : the parPCx system.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Parekh, Ojas; Phillips, Cynthia Ann; Boman, Erik Gunnar

    2005-02-01

    This report summarizes the research and development performed from October 2002 to September 2004 at Sandia National Laboratories under the Laboratory-Directed Research and Development (LDRD) project ''Massively-Parallel Linear Programming''. We developed a linear programming (LP) solver designed to use a large number of processors. LP is the optimization of a linear objective function subject to linear constraints. Companies and universities have expended huge efforts over decades to produce fast, stable serial LP solvers. Previous parallel codes run on shared-memory systems and have little or no distribution of the constraint matrix. We have seen no reports of general LP solver runs on large numbers of processors. Our parallel LP code is based on an efficient serial implementation of Mehrotra's interior-point predictor-corrector algorithm (PCx). The computational core of this algorithm is the assembly and solution of a sparse linear system. We have substantially rewritten the PCx code and based it on Trilinos, the parallel linear algebra library developed at Sandia. Our interior-point method can use either direct or iterative solvers for the linear system. To achieve a good parallel data distribution of the constraint matrix, we use a (pre-release) version of a hypergraph partitioner from the Zoltan partitioning library. We describe the design and implementation of our new LP solver called parPCx and give preliminary computational results. We summarize a number of issues related to efficient parallel solution of LPs with interior-point methods, including data distribution, numerical stability, and solving the core linear system using both direct and iterative methods. We describe a number of applications of LP specific to US Department of Energy mission areas and we summarize our efforts to integrate parPCx (and parallel LP solvers in general) into Sandia's massively-parallel integer programming solver PICO (Parallel Integer and Combinatorial Optimizer). We conclude with directions for long-term future algorithmic research and for near-term development that could improve the performance of parPCx.

  2. PCR-based detection of a rare linear DNA in cell culture.

    PubMed

    Saveliev, Sergei V.

    2002-11-11

    The described method allows for detection of rare linear DNA fragments generated during genomic deletions. The predicted limit of the detection is one DNA molecule per 10(7) or more cells. The method is based on anchor PCR and involves gel separation of the linear DNA fragment and chromosomal DNA before amplification. The detailed chemical structure of the ends of the linear DNA can be defined with the use of additional PCR-based protocols. The method was applied to study the short-lived linear DNA generated during programmed genomic deletions in a ciliate. It can be useful in studies of spontaneous DNA deletions in cell culture or for tracking intracellular modifications at the ends of transfected DNA during gene therapy trials.

  3. PCR-based detection of a rare linear DNA in cell culture

    PubMed Central

    2002-01-01

    The described method allows for detection of rare linear DNA fragments generated during genomic deletions. The predicted limit of the detection is one DNA molecule per 10⁷ or more cells. The method is based on anchor PCR and involves gel separation of the linear DNA fragment and chromosomal DNA before amplification. The detailed chemical structure of the ends of the linear DNA can be defined with the use of additional PCR-based protocols. The method was applied to study the short-lived linear DNA generated during programmed genomic deletions in a ciliate. It can be useful in studies of spontaneous DNA deletions in cell culture or for tracking intracellular modifications at the ends of transfected DNA during gene therapy trials. PMID:12734566

  4. Waste management under multiple complexities: Inexact piecewise-linearization-based fuzzy flexible programming

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sun, Wei; Huang, Guo H.

    2012-06-15

    Highlights: ► Inexact piecewise-linearization-based fuzzy flexible programming is proposed. ► It is the first application to waste management under multiple complexities. ► It tackles nonlinear economies-of-scale effects in interval-parameter constraints. ► It estimates costs more accurately than the linear-regression-based model. ► Uncertainties are decreased and more satisfactory interval solutions are obtained. - Abstract: To tackle nonlinear economies-of-scale (EOS) effects in interval-parameter constraints for a representative waste management problem, an inexact piecewise-linearization-based fuzzy flexible programming (IPFP) model is developed. In IPFP, interval parameters for waste amounts and transportation/operation costs can be quantified; aspiration levels for net system costs, as well as tolerance intervals for both capacities of waste treatment facilities and waste generation rates, can be reflected; and the nonlinear EOS effects transformed from the objective function to the constraints can be approximated. An interactive algorithm is proposed for solving the IPFP model, which in nature is an interval-parameter mixed-integer quadratically constrained programming model. To demonstrate the IPFP's advantages, two alternative models are developed to compare their performances. One is a conventional linear-regression-based inexact fuzzy programming model (IPFP2) and the other is an IPFP model with all right-hand sides of the fuzzy constraints being the corresponding interval numbers (IPFP3). The comparison between IPFP and IPFP2 indicates that the optimized waste amounts have similar patterns in both models; however, when dealing with EOS effects in constraints, IPFP2 may underestimate the net system costs while IPFP estimates the costs more accurately. The comparison between IPFP and IPFP3 indicates that their solutions differ significantly, and the decreased system uncertainties in IPFP's solutions demonstrate its effectiveness in providing more satisfactory interval solutions than IPFP3. Following this first application to waste management, the IPFP can potentially be applied to other environmental problems under multiple complexities.

  5. Life cycle cost optimization of biofuel supply chains under uncertainties based on interval linear programming.

    PubMed

    Ren, Jingzheng; Dong, Liang; Sun, Lu; Goodsite, Michael Evan; Tan, Shiyu; Dong, Lichun

    2015-01-01

    The aim of this work was to develop a model for optimizing the life cycle cost of biofuel supply chain under uncertainties. Multiple agriculture zones, multiple transportation modes for the transport of grain and biofuel, multiple biofuel plants, and multiple market centers were considered in this model, and the price of the resources, the yield of grain and the market demands were regarded as interval numbers instead of constants. An interval linear programming was developed, and a method for solving interval linear programming was presented. An illustrative case was studied by the proposed model, and the results showed that the proposed model is feasible for designing biofuel supply chain under uncertainties. Copyright © 2015 Elsevier Ltd. All rights reserved.

  6. Linearly Adjustable International Portfolios

    NASA Astrophysics Data System (ADS)

    Fonseca, R. J.; Kuhn, D.; Rustem, B.

    2010-09-01

    We present an approach to multi-stage international portfolio optimization based on the imposition of a linear structure on the recourse decisions. Multiperiod decision problems are traditionally formulated as stochastic programs. Scenario tree based solutions however can become intractable as the number of stages increases. By restricting the space of decision policies to linear rules, we obtain a conservative tractable approximation to the original problem. Local asset prices and foreign exchange rates are modelled separately, which allows for a direct measure of their impact on the final portfolio value.

  7. Two algorithms for neural-network design and training with application to channel equalization.

    PubMed

    Sweatman, C Z; Mulgrew, B; Gibson, G J

    1998-01-01

    We describe two algorithms for designing and training neural-network classifiers. The first, the linear programming slab algorithm (LPSA), is motivated by the problem of reconstructing digital signals corrupted by passage through a dispersive channel and by additive noise. It constructs a multilayer perceptron (MLP) to separate two disjoint sets by using linear programming methods to identify network parameters. The second, the perceptron learning slab algorithm (PLSA), avoids the computational costs of linear programming by using an error-correction approach to identify parameters. Both algorithms operate in highly constrained parameter spaces and are able to exploit symmetry in the classification problem. Using these algorithms, we develop a number of procedures for the adaptive equalization of a complex linear 4-quadrature amplitude modulation (QAM) channel, and compare their performance in a simulation study. Results are given for both stationary and time-varying channels, the latter based on the COST 207 GSM propagation model.

  8. Computing Gröbner and Involutive Bases for Linear Systems of Difference Equations

    NASA Astrophysics Data System (ADS)

    Yanovich, Denis

    2018-02-01

    The problem of computing involutive bases and Gröbner bases for linear systems of difference equations is solved, and its importance for physical and mathematical problems is discussed. The algorithm and the issues concerning its implementation in C are presented, and calculation times are compared with those of competing programs. The paper ends with considerations on the parallel version of this implementation and its scalability.

  9. Menu-Driven Solver Of Linear-Programming Problems

    NASA Technical Reports Server (NTRS)

    Viterna, L. A.; Ferencz, D.

    1992-01-01

    This program assists the inexperienced user in formulating linear-programming problems. A Linear Program Solver (ALPS) is a full-featured LP analysis program. It solves plain linear-programming problems as well as more complicated mixed-integer and pure-integer programs, and it also contains an efficient technique for the solution of purely binary linear-programming problems. Written entirely in IBM's APL2/PC software, Version 1.01. The packed program contains licensed material, property of IBM (copyright 1988, all rights reserved).

  10. A Value-Added Approach to Selecting the Best Master of Business Administration (MBA) Program

    ERIC Educational Resources Information Center

    Fisher, Dorothy M.; Kiang, Melody; Fisher, Steven A.

    2007-01-01

    Although numerous studies rank master of business administration (MBA) programs, prospective students' selection of the best MBA program is a formidable task. In this study, the authors used a linear-programming-based model called data envelopment analysis (DEA) to evaluate MBA programs. The DEA model connects costs to benefits to evaluate the…

  11. Semilinear programming: applications and implementation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mohan, S.

    Semilinear programming is a method of solving optimization problems with linear constraints where the non-negativity restrictions on the variables are dropped and the objective function coefficients can take on different values depending on whether the variable is positive or negative. The simplex method for linear programming is modified in this thesis to solve general semilinear and piecewise linear programs efficiently without having to transform them into equivalent standard linear programs. Several models in widely different areas of optimization, such as production smoothing, facility location, goal programming and L1 estimation, are presented first to demonstrate the compact formulation that arises when such problems are formulated as semilinear programs. A code SLP is constructed using the semilinear programming techniques. Problems in aggregate planning and L1 estimation are solved using SLP and as equivalent linear programs using a linear programming simplex code. Comparisons of CPU times and numbers of iterations indicate SLP to be far superior. The semilinear programming techniques are extended to piecewise linear programming in the implementation of the code PLP. Piecewise linear models in aggregate planning are solved using PLP and as equivalent standard linear programs using a simple upper bounded linear programming code SUBLP.
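
    A hedged sketch of the kind of problem semilinear programming targets: an objective whose coefficient depends on the sign of a free quantity. When the positive-side and negative-side costs make the objective convex, the problem can also be written as an ordinary LP by splitting each signed quantity into positive and negative parts, as in the L1-estimation example below (data invented; SciPy's linprog is used here as a generic simplex-style solver, not the thesis's SLP code).

    ```python
    import numpy as np
    from scipy.optimize import linprog

    # L1 (least-absolute-deviation) regression, one of the applications cited in the
    # abstract, written as an LP by splitting each residual r_i = r_plus_i - r_minus_i.
    rng = np.random.default_rng(1)
    A = np.column_stack([np.ones(30), rng.uniform(0, 10, 30)])   # intercept + slope
    y = A @ np.array([2.0, 0.7]) + rng.laplace(scale=0.5, size=30)
    n, p = A.shape

    # Variables: [beta (p, free), r_plus (n), r_minus (n)]; minimise sum(r_plus + r_minus).
    c = np.hstack([np.zeros(p), np.ones(2 * n)])
    # Residual identity: A beta + r_plus - r_minus = y
    A_eq = np.hstack([A, np.eye(n), -np.eye(n)])
    bounds = [(None, None)] * p + [(0, None)] * (2 * n)
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=bounds, method="highs")
    print("L1 fit coefficients:", res.x[:p], "sum of absolute residuals:", res.fun)
    ```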

  12. A New Stochastic Equivalent Linearization Implementation for Prediction of Geometrically Nonlinear Vibrations

    NASA Technical Reports Server (NTRS)

    Muravyov, Alexander A.; Turner, Travis L.; Robinson, Jay H.; Rizzi, Stephen A.

    1999-01-01

    In this paper, the problem of random vibration of geometrically nonlinear MDOF structures is considered. The solutions obtained by application of two different versions of a stochastic linearization method are compared with exact (F-P-K) solutions. The formulation of a relatively new version of the stochastic linearization method (energy-based version) is generalized to the MDOF system case. Also, a new method for determination of nonlinear stiffness coefficients for MDOF structures is demonstrated. This method in combination with the equivalent linearization technique is implemented in a new computer program. Results in terms of root-mean-square (RMS) displacements obtained by using the new program and an existing in-house code are compared for two examples of beam-like structures.

  13. The Effects of Linear and Modified Linear Programed Materials on the Achievement of Slow Learners in Tenth Grade BSCS Special Materials Biology.

    ERIC Educational Resources Information Center

    Moody, John Charles

    Assessed were the effects of linear and modified linear programed materials on the achievement of slow learners in tenth grade Biological Sciences Curriculum Study (BSCS) Special Materials biology. Two hundred and six students were randomly placed into four programed materials formats: linear programed materials, modified linear program with…

  14. A model for managing sources of groundwater pollution

    USGS Publications Warehouse

    Gorelick, Steven M.

    1982-01-01

    The waste disposal capacity of a groundwater system can be maximized while maintaining water quality at specified locations by using a groundwater pollutant source management model that is based upon linear programming and numerical simulation. The decision variables of the management model are solute waste disposal rates at various facilities distributed over space. A concentration response matrix is used in the management model to describe transient solute transport and is developed using the U.S. Geological Survey solute transport simulation model. The management model was applied to a complex hypothetical groundwater system. Large-scale management models were formulated as dual linear programming problems to reduce numerical difficulties and computation time. Linear programming problems were solved using a numerically stable, available code. Optimal solutions to problems with successively longer management time horizons indicated that disposal schedules at some sites are relatively independent of the number of disposal periods. Optimal waste disposal schedules exhibited pulsing rather than constant disposal rates. Sensitivity analysis using parametric linear programming showed that a sharp reduction in total waste disposal potential occurs if disposal rates at any site are increased beyond their optimal values.
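
    A toy version of the response-matrix formulation described above, with invented numbers: maximise total waste disposal subject to concentration limits at observation wells, where a hypothetical response matrix maps unit disposal rates to steady concentration increases.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    # Hypothetical concentration response matrix R: R[i, j] is the concentration
    # increase (mg/L) at observation well i per unit waste disposal rate at site j.
    R = np.array([[0.8, 0.2, 0.1],
                  [0.3, 0.9, 0.2],
                  [0.1, 0.3, 0.7]])
    c_max = np.array([10.0, 12.0, 8.0])      # water-quality limits at the wells (mg/L)
    q_cap = np.array([15.0, 15.0, 15.0])     # physical capacity of each disposal site

    # Maximise total disposal  <=>  minimise -(q_1 + q_2 + q_3) subject to R q <= c_max.
    res = linprog(c=-np.ones(3), A_ub=R, b_ub=c_max,
                  bounds=list(zip(np.zeros(3), q_cap)), method="highs")
    print("optimal disposal rates:", res.x, "total disposal:", -res.fun)
    ```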

  15. Fuzzy Linear Programming and its Application in Home Textile Firm

    NASA Astrophysics Data System (ADS)

    Vasant, P.; Ganesan, T.; Elamvazuthi, I.

    2011-06-01

    In this paper, a new fuzzy linear programming (FLP) based methodology using a specific membership function, named the modified logistic membership function, is proposed. The modified logistic membership function is first formulated, and its flexibility in capturing vagueness in the parameters is established by an analytical approach. The developed FLP methodology provides confidence in its application to a real-life industrial production planning problem. This approach to solving industrial production planning problems allows feedback among the decision maker, the implementer and the analyst.

  16. Analyses of Multishaft Rotor-Bearing Response

    NASA Technical Reports Server (NTRS)

    Nelson, H. D.; Meacham, W. L.

    1985-01-01

    Method works for linear and nonlinear systems. Finite-element-based computer program developed to analyze free and forced response of multishaft rotor-bearing systems. Acronym, ARDS, denotes Analysis of Rotor Dynamic Systems. Systems with nonlinear interconnection or support bearings or both analyzed by numerically integrating a reduced set of coupled system equations. Linear systems analyzed in closed form for steady excitations and treated as equivalent to nonlinear systems for transient excitation. ARDS is FORTRAN program developed on an Amdahl 470 (similar to IBM 370).

  17. Structural overview and learner control in hypermedia instructional programs

    NASA Astrophysics Data System (ADS)

    Burke, Patricia Anne

    1998-09-01

    This study examined the effects of a structural overview and learner control in a computer-based program on the achievement, attitudes, time in program and linearity of path of fifth-grade students. Four versions of a computer-based instructional program about the Sun and planets were created in a 2 x 2 factorial design. The program consisted of ten sections, one for each planet and one for the Sun. Two structural overview conditions (structural overview, no structural overview) were crossed with two control conditions (learner control, program control). Subjects in the structural overview condition chose the order in which they would learn about the planets from among three options: ordered by distance from the Sun, ordered by size, or ordered by temperature. Subjects in the learner control condition were able to move freely among screens within a section and to choose their next section after finishing the previous one. In contrast, those in the program control condition advanced through the program in a prescribed linear manner. A 2 x 2 ANOVA yielded no significant differences in posttest scores for either independent variable or for their interaction. The structural overview was most likely not effective because subjects spent only a small percentage of their total time on the structural overview screens and they were not required to act upon the information in those screens. Learner control over content sequencing may not have been effective because most learner-control subjects chose the same overall sequence of instruction (i.e., distance from the Sun) prescribed for program-control subjects. Learner-control subjects chose to view an average of 40 more screens than the fixed number of 160 screens in the program-control version. However, program-control subjects spent significantly more time per screen than learner-control subjects, and the total time in program did not differ significantly between the two groups. Learner-control subjects receiving the structural overview deviated from the linear path significantly more often than subjects who did not have the structural overview, but deviation from the linear path was not associated with higher posttest scores.

  18. Computation of non-monotonic Lyapunov functions for continuous-time systems

    NASA Astrophysics Data System (ADS)

    Li, Huijuan; Liu, AnPing

    2017-09-01

    In this paper, we propose two methods to compute non-monotonic Lyapunov functions for continuous-time systems which are asymptotically stable. The first method is to solve a linear optimization problem on a compact and bounded set. The proposed linear programming based algorithm delivers a CPA1

  19. Sequential computation of elementary modes and minimal cut sets in genome-scale metabolic networks using alternate integer linear programming

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Song, Hyun-Seob; Goldberg, Noam; Mahajan, Ashutosh

    Elementary (flux) modes (EMs) have served as a valuable tool for investigating structural and functional properties of metabolic networks. Identification of the full set of EMs in genome-scale networks remains challenging due to combinatorial explosion of EMs in complex networks. Often, however, only a small subset of relevant EMs needs to be known, for which optimization-based sequential computation is a useful alternative. Most of the currently available methods along this line are based on the iterative use of mixed integer linear programming (MILP), the effectiveness of which significantly deteriorates as the number of iterations builds up. To alleviate the computational burden associated with the MILP implementation, we here present a novel optimization algorithm termed alternate integer linear programming (AILP). Results: Our algorithm was designed to iteratively solve a pair of integer programming (IP) and linear programming (LP) problems to compute EMs in a sequential manner. In each step, the IP identifies a minimal subset of reactions, the deletion of which disables all previously identified EMs. Thus, a subsequent LP solution subject to this reaction deletion constraint becomes a distinct EM. In cases where no feasible LP solution is available, IP-derived reaction deletion sets represent minimal cut sets (MCSs). Despite the additional computation of MCSs, AILP achieved significant time reduction in computing EMs by orders of magnitude. The proposed AILP algorithm not only offers a computational advantage in the EM analysis of genome-scale networks, but also improves the understanding of the linkage between EMs and MCSs.

  20. An Instructional Note on Linear Programming--A Pedagogically Sound Approach.

    ERIC Educational Resources Information Center

    Mitchell, Richard

    1998-01-01

    Discusses the place of linear programming in college curricula and the advantages of using linear-programming software. Lists important characteristics of computer software used in linear programming for more effective teaching and learning. (ASK)

  1. Controller design approach based on linear programming.

    PubMed

    Tanaka, Ryo; Shibasaki, Hiroki; Ogawa, Hiromitsu; Murakami, Takahiro; Ishida, Yoshihisa

    2013-11-01

    This study explains and demonstrates the design method for a control system with a load disturbance observer. Observer gains are determined by linear programming (LP) in terms of the Routh-Hurwitz stability criterion and the final-value theorem. In addition, the control model has a feedback structure, and feedback gains are determined to be the linear quadratic regulator. The simulation results confirmed that compared with the conventional method, the output estimated by our proposed method converges to a reference input faster when a load disturbance is added to a control system. In addition, we also confirmed the effectiveness of the proposed method by performing an experiment with a DC motor. © 2013 ISA. Published by ISA. All rights reserved.

  2. Exact and heuristic algorithms for Space Information Flow.

    PubMed

    Uwitonze, Alfred; Huang, Jiaqing; Ye, Yuanqing; Cheng, Wenqing; Li, Zongpeng

    2018-01-01

    Space Information Flow (SIF) is a new promising research area that studies network coding in geometric space, such as Euclidean space. The design of algorithms that compute optimal SIF solutions remains one of the key open problems in SIF. This work proposes the first exact SIF algorithm and a heuristic SIF algorithm that compute min-cost multicast network coding for N (N ≥ 3) given terminal nodes in 2-D Euclidean space. Furthermore, we find that the Butterfly network in Euclidean space is the second example, besides the Pentagram network, where SIF is strictly better than the Euclidean Steiner minimal tree. The exact algorithm design is based on two key techniques: Delaunay triangulation and linear programming. The Delaunay triangulation technique helps find practically good candidate relay nodes, after which a min-cost multicast linear programming model is solved over the terminal nodes and the candidate relay nodes to compute the optimal multicast network topology, including the optimal relay nodes selected by linear programming from all the candidates and the flow rates on the connection links. The heuristic algorithm design is also based on Delaunay triangulation and linear programming techniques. The exact algorithm achieves the optimal SIF solution with exponential computational complexity, while the heuristic algorithm achieves a sub-optimal SIF solution with polynomial computational complexity. We prove the correctness of the exact SIF algorithm. The simulation results show the effectiveness of the heuristic SIF algorithm.
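
    A hedged sketch of the candidate-relay-generation step only: compute a Delaunay triangulation of the terminal nodes and propose relay candidates from it. Using triangle centroids as candidates is an illustrative choice, not necessarily the rule used in the paper; the subsequent min-cost multicast LP over terminals and candidates is omitted.

    ```python
    import numpy as np
    from scipy.spatial import Delaunay

    # Terminal nodes in 2-D Euclidean space (illustrative coordinates).
    terminals = np.array([[0.0, 0.0], [4.0, 0.0], [2.0, 3.5], [5.0, 3.0], [1.0, 4.5]])

    tri = Delaunay(terminals)
    # One simple way to propose candidate relay nodes (an illustrative choice, not
    # necessarily the paper's rule): the centroid of every Delaunay triangle.
    candidates = terminals[tri.simplices].mean(axis=1)

    print("Delaunay triangles (vertex indices):\n", tri.simplices)
    print("candidate relay nodes:\n", candidates)
    # A min-cost multicast LP over terminals + candidates would then select which
    # relays are actually used and the flow rates on the links.
    ```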

  3. Integer Linear Programming in Computational Biology

    NASA Astrophysics Data System (ADS)

    Althaus, Ernst; Klau, Gunnar W.; Kohlbacher, Oliver; Lenhof, Hans-Peter; Reinert, Knut

    Computational molecular biology (bioinformatics) is a young research field that is rich in NP-hard optimization problems. The problem instances encountered are often huge and comprise thousands of variables. Since their introduction into the field of bioinformatics in 1997, integer linear programming (ILP) techniques have been successfully applied to many optimization problems. These approaches have added much momentum to development and progress in related areas. In particular, ILP-based approaches have become a standard optimization technique in bioinformatics. In this review, we present applications of ILP-based techniques developed by members and former members of Kurt Mehlhorn’s group. These techniques were introduced to bioinformatics in a series of papers and popularized by demonstration of their effectiveness and potential.

  4. Safety Verification of a Fault Tolerant Reconfigurable Autonomous Goal-Based Robotic Control System

    NASA Technical Reports Server (NTRS)

    Braman, Julia M. B.; Murray, Richard M; Wagner, David A.

    2007-01-01

    Fault tolerance and safety verification of control systems are essential for the success of autonomous robotic systems. A control architecture called Mission Data System (MDS), developed at the Jet Propulsion Laboratory, takes a goal-based control approach. In this paper, a method for converting goal network control programs into linear hybrid systems is developed. The linear hybrid system can then be verified for safety in the presence of failures using existing symbolic model checkers. An example task is simulated in MDS and successfully verified using HyTech, a symbolic model checking software for linear hybrid systems.

  5. Prediction of atmospheric degradation data for POPs by gene expression programming.

    PubMed

    Luan, F; Si, H Z; Liu, H T; Wen, Y Y; Zhang, X Y

    2008-01-01

    Quantitative structure-activity relationship models for the prediction of the mean and the maximum atmospheric degradation half-life values of persistent organic pollutants were developed based on the linear heuristic method (HM) and non-linear gene expression programming (GEP). Molecular descriptors, calculated from the structures alone, were used to represent the characteristics of the compounds. HM was used both to pre-select the whole descriptor sets and to build the linear model. GEP yielded satisfactory prediction results: the square of the correlation coefficient r² was 0.80 and 0.81 for the mean and maximum half-life values of the test set, and the root mean square errors were 0.448 and 0.426, respectively. The results of this work indicate that the GEP is a very promising tool for non-linear approximations.

  6. Minimum time acceleration of aircraft turbofan engines by using an algorithm based on nonlinear programming

    NASA Technical Reports Server (NTRS)

    Teren, F.

    1977-01-01

    Minimum time accelerations of aircraft turbofan engines are presented. The calculation of these accelerations was made by using a piecewise linear engine model, and an algorithm based on nonlinear programming. Use of this model and algorithm allows such trajectories to be readily calculated on a digital computer with a minimal expenditure of computer time.

  7. Simulated Analysis of Linear Reversible Enzyme Inhibition with SCILAB

    ERIC Educational Resources Information Center

    Antuch, Manuel; Ramos, Yaquelin; Álvarez, Rubén

    2014-01-01

    SCILAB is a lesser-known program (than MATLAB) for numeric simulations and has the advantage of being free software. A challenging software-based activity to analyze the most common linear reversible inhibition types with SCILAB is described. Students establish typical values for the concentration of enzyme, substrate, and inhibitor to simulate…

  8. LQR-Based Optimal Distributed Cooperative Design for Linear Discrete-Time Multiagent Systems.

    PubMed

    Zhang, Huaguang; Feng, Tao; Liang, Hongjing; Luo, Yanhong

    2017-03-01

    In this paper, a novel linear quadratic regulator (LQR)-based optimal distributed cooperative design method is developed for synchronization control of general linear discrete-time multiagent systems on a fixed, directed graph. Sufficient conditions are derived for synchronization, which restrict the graph eigenvalues into a bounded circular region in the complex plane. The synchronizing speed issue is also considered, and it turns out that the synchronizing region reduces as the synchronizing speed becomes faster. To obtain more desirable synchronizing capacity, the weighting matrices are selected by sufficiently utilizing the guaranteed gain margin of the optimal regulators. Based on the developed LQR-based cooperative design framework, an approximate dynamic programming technique is successfully introduced to overcome the (partially or completely) model-free cooperative design for linear multiagent systems. Finally, two numerical examples are given to illustrate the effectiveness of the proposed design methods.
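
    The distributed design above is built from local LQR gains. As a minimal, hedged illustration of the single-agent ingredient only (illustrative A, B, Q, R; no graph or synchronizing-region analysis), the sketch below computes a discrete-time LQR gain from the discrete algebraic Riccati equation.

    ```python
    import numpy as np
    from scipy.linalg import solve_discrete_are

    # Illustrative single-agent dynamics x_{k+1} = A x_k + B u_k.
    A = np.array([[1.0, 1.0],
                  [0.0, 1.0]])
    B = np.array([[0.0],
                  [1.0]])
    Q = np.eye(2)            # state weighting
    R = np.array([[1.0]])    # input weighting

    # Discrete-time LQR: solve the discrete algebraic Riccati equation, then
    # K = (R + B'PB)^{-1} B'PA, giving the feedback law u_k = -K x_k.
    P = solve_discrete_are(A, B, Q, R)
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    print("LQR gain K =", K)
    print("closed-loop eigenvalues:", np.linalg.eigvals(A - B @ K))
    ```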

  9. Nutrient profiling can help identify foods of good nutritional quality for their price: a validation study with linear programming.

    PubMed

    Maillot, Matthieu; Ferguson, Elaine L; Drewnowski, Adam; Darmon, Nicole

    2008-06-01

    Nutrient profiling ranks foods based on their nutrient content; such profiles may help identify foods with a good nutritional quality for their price. This hypothesis was tested using diet modeling with linear programming. Analyses were undertaken using food intake data from the nationally representative French INCA (enquête Individuelle et Nationale sur les Consommations Alimentaires) survey and its associated food composition and price database. For each food, a nutrient profile score was defined as the ratio between the previously published nutrient density score (NDS) and the limited nutrient score (LIM); a nutritional quality for price indicator was developed and calculated from the relationship between its NDS:LIM and energy cost (in euro/100 kcal). We developed linear programming models to design diets that fulfilled increasing levels of nutritional constraints at a minimal cost. The median NDS:LIM values of foods selected in modeled diets increased as the levels of nutritional constraints increased (P = 0.005). In addition, the proportion of foods with a good nutritional quality for price indicator was higher (P < 0.0001) among foods selected (81%) than among foods not selected (39%) in modeled diets. This agreement between the linear programming and the nutrient profiling approaches indicates that nutrient profiling can help identify foods of good nutritional quality for their price. Linear programming is a useful tool for testing nutrient profiling systems and validating the concept of nutrient profiling.
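
    A toy version of the kind of diet LP used in the study: minimise cost subject to nutrient constraints. The foods, prices and nutrient contents below are invented for illustration and bear no relation to the INCA database.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    # Toy data: 4 foods, their price per serving and nutrient content per serving.
    foods   = ["lentils", "yogurt", "bread", "orange"]
    price   = np.array([0.30, 0.55, 0.25, 0.45])          # euro per serving
    protein = np.array([9.0, 5.0, 3.0, 1.0])              # g per serving
    fiber   = np.array([8.0, 0.0, 2.0, 3.0])              # g per serving
    energy  = np.array([115., 80., 70., 60.])             # kcal per serving

    # Minimise cost subject to minimum protein/fiber and a maximum energy intake.
    A_ub = np.vstack([-protein, -fiber, energy])           # >= constraints flipped to <=
    b_ub = np.array([-50.0, -25.0, 2200.0])
    res = linprog(price, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, 10)] * len(foods), method="highs")
    for f, x in zip(foods, res.x):
        print(f"{f}: {x:.2f} servings")
    print("daily cost:", round(res.fun, 2), "euro")
    ```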

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gearhart, Jared Lee; Adair, Kristin Lynn; Durfee, Justin David.

    When developing linear programming models, issues such as budget limitations, customer requirements, or licensing may preclude the use of commercial linear programming solvers. In such cases, one option is to use an open-source linear programming solver. A survey of linear programming tools was conducted to identify potential open-source solvers. From this survey, four open-source solvers were tested using a collection of linear programming test problems and the results were compared to IBM ILOG CPLEX Optimizer (CPLEX) [1], an industry standard. The solvers considered were: COIN-OR Linear Programming (CLP) [2], [3], GNU Linear Programming Kit (GLPK) [4], lp_solve [5] and Modular In-core Nonlinear Optimization System (MINOS) [6]. As no open-source solver outperforms CPLEX, this study demonstrates the power of commercial linear programming software. CLP was found to be the top performing open-source solver considered in terms of capability and speed. GLPK also performed well but cannot match the speed of CLP or CPLEX. lp_solve and MINOS were considerably slower and encountered issues when solving several test problems.

  11. A risk explicit interval linear programming model for uncertainty-based environmental economic optimization in the Lake Fuxian watershed, China.

    PubMed

    Zhang, Xiaoling; Huang, Kai; Zou, Rui; Liu, Yong; Yu, Yajuan

    2013-01-01

    The conflict between water environment protection and economic development has brought severe water pollution and restricted sustainable development in the watershed. A risk explicit interval linear programming (REILP) method was used to solve an integrated watershed environmental-economic optimization problem. Interval linear programming (ILP) and REILP models for uncertainty-based environmental economic optimization at the watershed scale were developed for the management of the Lake Fuxian watershed, China. Scenario analysis was introduced into the model solution process to ensure the practicality and operability of optimization schemes. Decision makers' preferences for risk levels can be expressed by inputting different discrete aspiration level values into the REILP model in three periods under two scenarios. By balancing the optimal system returns and corresponding system risks, decision makers can develop an efficient industrial restructuring scheme based directly on the window of "low risk and high return efficiency" in the trade-off curve. The representative schemes at the turning points of the two scenarios were interpreted and compared to identify a preferable planning alternative, which has relatively low risks and nearly maximum benefits. This study provides new insights and proposes a tool, REILP, for decision makers to develop an effective environmental economic optimization scheme in integrated watershed management.

  12. A Risk Explicit Interval Linear Programming Model for Uncertainty-Based Environmental Economic Optimization in the Lake Fuxian Watershed, China

    PubMed Central

    Zou, Rui; Liu, Yong; Yu, Yajuan

    2013-01-01

    The conflict between water environment protection and economic development has brought severe water pollution and restricted sustainable development in the watershed. A risk explicit interval linear programming (REILP) method was used to solve an integrated watershed environmental-economic optimization problem. Interval linear programming (ILP) and REILP models for uncertainty-based environmental economic optimization at the watershed scale were developed for the management of the Lake Fuxian watershed, China. Scenario analysis was introduced into the model solution process to ensure the practicality and operability of optimization schemes. Decision makers' preferences for risk levels can be expressed by inputting different discrete aspiration level values into the REILP model in three periods under two scenarios. By balancing the optimal system returns and corresponding system risks, decision makers can develop an efficient industrial restructuring scheme based directly on the window of “low risk and high return efficiency” in the trade-off curve. The representative schemes at the turning points of the two scenarios were interpreted and compared to identify a preferable planning alternative, which has relatively low risks and nearly maximum benefits. This study provides new insights and proposes a tool, REILP, for decision makers to develop an effective environmental economic optimization scheme in integrated watershed management. PMID:24191144

  13. A novel heuristic for optimization aggregate production problem: Evidence from flat panel display in Malaysia

    NASA Astrophysics Data System (ADS)

    Al-Kuhali, K.; Hussain M., I.; Zain Z., M.; Mullenix, P.

    2015-05-01

    Aim: This paper contributes to the flat panel display industry in terms of aggregate production planning. Methodology: To minimize the total production cost of LCD manufacturing, linear programming was applied. The decision variables are general production costs, additional costs incurred for overtime production, additional costs incurred for subcontracting, inventory carrying costs, backorder costs and adjustments for changes in labour levels. The model was developed for a manufacturer having several product types, up to a maximum of N, over a total planning horizon of T periods. Results: An industrial case study based in Malaysia is presented to test and validate the developed linear programming model for aggregate production planning. Conclusion: The developed model is a good fit under stable environmental conditions. Overall, the proven linear programming model can be recommended for adaptation to production planning in the Malaysian flat panel display industry.
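
    A hedged sketch of a single-product aggregate production planning LP with the decision-variable families named in the abstract (regular production, overtime, inventory, backorders); the demands, costs and capacities are invented, and the multi-product structure of the paper's model is not reproduced.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    T = 3
    demand = np.array([900., 1200., 1000.])        # units per period (illustrative)
    c_reg, c_ot, c_inv, c_back = 10., 15., 2., 8.  # unit costs (illustrative)
    cap_reg, cap_ot = 1000., 200.                  # per-period capacities

    # Variables per period t: [P_t (regular), O_t (overtime), I_t (inventory), B_t (backorder)]
    nvar = 4 * T
    idx = lambda t, k: 4 * t + k                   # k: 0=P, 1=O, 2=I, 3=B

    c = np.zeros(nvar)
    for t in range(T):
        c[idx(t, 0)], c[idx(t, 1)], c[idx(t, 2)], c[idx(t, 3)] = c_reg, c_ot, c_inv, c_back

    # Flow balance: P_t + O_t + I_{t-1} - B_{t-1} - I_t + B_t = demand_t  (I_{-1} = B_{-1} = 0)
    A_eq = np.zeros((T, nvar)); b_eq = demand.copy()
    for t in range(T):
        A_eq[t, idx(t, 0)] = A_eq[t, idx(t, 1)] = 1.0
        A_eq[t, idx(t, 2)] = -1.0
        A_eq[t, idx(t, 3)] = 1.0
        if t > 0:
            A_eq[t, idx(t - 1, 2)] = 1.0
            A_eq[t, idx(t - 1, 3)] = -1.0

    bounds = []
    for t in range(T):
        b_back = (0, 0) if t == T - 1 else (0, None)   # require all demand met by end of horizon
        bounds += [(0, cap_reg), (0, cap_ot), (0, None), b_back]

    res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
    print("plan [P, O, I, B] per period:\n", res.x.reshape(T, 4).round(1))
    print("total cost:", res.fun)
    ```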

  14. A steady and oscillatory kernel function method for interfering surfaces in subsonic, transonic and supersonic flow. [prediction analysis techniques for airfoils

    NASA Technical Reports Server (NTRS)

    Cunningham, A. M., Jr.

    1976-01-01

    The theory, results and user instructions for an aerodynamic computer program are presented. The theory is based on linear lifting surface theory, and the method is the kernel function. The program is applicable to multiple interfering surfaces which may be coplanar or noncoplanar. Local linearization was used to treat nonuniform flow problems without shocks. For cases with imbedded shocks, the appropriate boundary conditions were added to account for the flow discontinuities. The data describing nonuniform flow fields must be input from some other source such as an experiment or a finite difference solution. The results are in the form of small linear perturbations about nonlinear flow fields. The method was applied to a wide variety of problems for which it is demonstrated to be significantly superior to the uniform flow method. Program user instructions are given for easy access.

  15. A scalable parallel algorithm for multiple objective linear programs

    NASA Technical Reports Server (NTRS)

    Wiecek, Malgorzata M.; Zhang, Hong

    1994-01-01

    This paper presents an ADBASE-based parallel algorithm for solving multiple objective linear programs (MOLP's). Job balance, speedup and scalability are of primary interest in evaluating efficiency of the new algorithm. Implementation results on Intel iPSC/2 and Paragon multiprocessors show that the algorithm significantly speeds up the process of solving MOLP's, which is understood as generating all or some efficient extreme points and unbounded efficient edges. The algorithm gives especially good results for large and very large problems. Motivation and justification for solving such large MOLP's are also included.

  16. Program LRCDM2: Improved aerodynamic prediction program for supersonic canard-tail missiles with axisymmetric bodies

    NASA Technical Reports Server (NTRS)

    Dillenius, Marnix F. E.

    1985-01-01

    Program LRCDM2 was developed for supersonic missiles with axisymmetric bodies and up to two finned sections. Predicted are pressure distributions and loads acting on a complete configuration including effects of body separated flow vorticity and fin-edge vortices. The computer program is based on supersonic panelling and line singularity methods coupled with vortex tracking theory. Effects of afterbody shed vorticity on the afterbody and tail-fin pressure distributions can be optionally treated by companion program BDYSHD. Preliminary versions of combined shock expansion/linear theory and Newtonian/linear theory have been implemented as optional pressure calculation methods to extend the Mach number and angle-of-attack ranges of applicability into the nonlinear supersonic flow regime. Comparisons between program results and experimental data are given for a triform tail-finned configuration and for a canard controlled configuration with a long afterbody for Mach numbers up to 2.5. Initial tests of the nonlinear/linear theory approaches show good agreement for pressures acting on a rectangular wing and a delta wing with attached shocks for Mach numbers up to 4.6 and angles of attack up to 20 degrees.

  17. ALPS: A Linear Program Solver

    NASA Technical Reports Server (NTRS)

    Ferencz, Donald C.; Viterna, Larry A.

    1991-01-01

    ALPS is a computer program which can be used to solve general linear program (optimization) problems. ALPS was designed for those who have minimal linear programming (LP) knowledge and features a menu-driven scheme to guide the user through the process of creating and solving LP formulations. Once created, the problems can be edited and stored in standard DOS ASCII files to provide portability to various word processors or even other linear programming packages. Unlike many math-oriented LP solvers, ALPS contains an LP parser that reads through the LP formulation and reports several types of errors to the user. ALPS provides a large amount of solution data which is often useful in problem solving. In addition to pure linear programs, ALPS can solve for integer, mixed integer, and binary type problems. Pure linear programs are solved with the revised simplex method. Integer or mixed integer programs are solved initially with the revised simplex method and then completed using the branch-and-bound technique. Binary programs are solved with the method of implicit enumeration. This manual describes how to use ALPS to create, edit, and solve linear programming problems. Instructions for installing ALPS on a PC compatible computer are included in the appendices along with a general introduction to linear programming. A programmers guide is also included for assistance in modifying and maintaining the program.
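
    ALPS itself is an APL2/PC program; as a hedged illustration of the problem classes it handles (continuous, integer and binary variables solved by an LP relaxation plus branch and bound), the sketch below poses a small mixed-integer LP with SciPy's milp, a different solver that covers the same model types. Data are invented.

    ```python
    import numpy as np
    from scipy.optimize import milp, LinearConstraint, Bounds

    # Maximise 3x + 2y + 4z with x continuous, y integer, z binary (illustrative).
    c = -np.array([3.0, 2.0, 4.0])             # milp minimises, so negate the objective
    A = np.array([[1.0, 1.0, 1.0],
                  [2.0, 0.0, 1.0]])
    constraints = LinearConstraint(A, ub=[10.0, 8.0])
    integrality = np.array([0, 1, 1])          # 0 = continuous, 1 = integer
    bounds = Bounds(lb=[0, 0, 0], ub=[np.inf, np.inf, 1])   # z in {0, 1}

    res = milp(c=c, constraints=constraints, integrality=integrality, bounds=bounds)
    print("x, y, z =", res.x, " objective =", -res.fun)
    ```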

  18. GLOBAL SOLUTIONS TO FOLDED CONCAVE PENALIZED NONCONVEX LEARNING

    PubMed Central

    Liu, Hongcheng; Yao, Tao; Li, Runze

    2015-01-01

    This paper is concerned with solving nonconvex learning problems with a folded concave penalty. Although their global solutions entail desirable statistical properties, optimization techniques that guarantee global optimality in a general setting have been lacking. In this paper, we show that a class of nonconvex learning problems are equivalent to general quadratic programs. This equivalence allows us to develop mixed integer linear programming reformulations, which admit finite algorithms that find a provably global optimal solution. We refer to this reformulation-based technique as the mixed integer programming-based global optimization (MIPGO). To our knowledge, this is the first global optimization scheme with a theoretical guarantee for folded concave penalized nonconvex learning with the SCAD penalty (Fan and Li, 2001) and the MCP penalty (Zhang, 2010). Numerical results indicate a significant outperformance of MIPGO over the state-of-the-art solution scheme, local linear approximation, and other alternative solution techniques in the literature in terms of solution quality. PMID:27141126

  19. The YAV-8B simulation and modeling. Volume 2: Program listing

    NASA Technical Reports Server (NTRS)

    1983-01-01

    Detailed mathematical models of varying complexity representative of the YAV-8B aircraft are defined and documented. These models are used in parameter estimation and in linear analysis computer programs while investigating YAV-8B aircraft handling qualities. Both a six degree of freedom nonlinear model and a linearized three degree of freedom longitudinal and lateral directional model were developed. The nonlinear model is based on the mathematical model used on the MCAIR YAV-8B manned flight simulator. This simulator model has undergone periodic updating based on the results of approximately 360 YAV-8B flights and 8000 hours of wind tunnel testing. Qualified YAV-8B flight test pilots have commented that the handling qualities characteristics of the simulator are quite representative of the real aircraft. These comments are validated herein by comparing data from both static and dynamic flight test maneuvers to the corresponding results obtained using the nonlinear program.

  20. Nonlinear-programming mathematical modeling of coal blending for power plant

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tang Longhua; Zhou Junhu; Yao Qiang

    At present most blending work is guided by experience or by linear programming (LP), which cannot properly reflect the complicated characteristics of coal. Experimental and theoretical research shows that most coal blend properties cannot always be expressed as a linear function of the properties of the individual coals in the blend. The authors introduce nonlinear functions and processes (including neural networks and fuzzy mathematics), based on experiments conducted by the authors and other researchers, to quantitatively describe the complex coal blend parameters. Finally, nonlinear-programming (NLP) mathematical modeling of coal blending is introduced and utilized in the Hangzhou Coal Blending Center. Predictions based on the new method differed from those based on LP modeling. The authors conclude that it is very important to introduce NLP modeling, instead of LP modeling, into the work of coal blending.

  1. User's manual for LINEAR, a FORTRAN program to derive linear aircraft models

    NASA Technical Reports Server (NTRS)

    Duke, Eugene L.; Patterson, Brian P.; Antoniewicz, Robert F.

    1987-01-01

    This report documents a FORTRAN program that provides a powerful and flexible tool for the linearization of aircraft models. The program LINEAR numerically determines a linear system model using nonlinear equations of motion and a user-supplied nonlinear aerodynamic model. The system model determined by LINEAR consists of matrices for both state and observation equations. The program has been designed to allow easy selection and definition of the state, control, and observation variables to be used in a particular model.
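
    LINEAR itself is a FORTRAN code working with full six-degree-of-freedom aircraft equations; the sketch below only illustrates the underlying operation, numerically extracting state and control matrices A and B from a user-supplied nonlinear model by central differences, with a toy pendulum model standing in for aircraft dynamics.

    ```python
    import numpy as np

    def f(x, u):
        """Toy nonlinear model (damped pendulum with torque input), standing in for
        the user-supplied nonlinear equations of motion and aerodynamic model."""
        theta, omega = x
        return np.array([omega, -9.81 * np.sin(theta) - 0.1 * omega + u[0]])

    def linearize(f, x0, u0, eps=1e-6):
        """Central-difference Jacobians A = df/dx and B = df/du at the trim point (x0, u0)."""
        n, m = len(x0), len(u0)
        A = np.zeros((n, n)); B = np.zeros((n, m))
        for i in range(n):
            dx = np.zeros(n); dx[i] = eps
            A[:, i] = (f(x0 + dx, u0) - f(x0 - dx, u0)) / (2 * eps)
        for j in range(m):
            du = np.zeros(m); du[j] = eps
            B[:, j] = (f(x0, u0 + du) - f(x0, u0 - du)) / (2 * eps)
        return A, B

    A, B = linearize(f, x0=np.array([0.0, 0.0]), u0=np.array([0.0]))
    print("A =\n", A, "\nB =\n", B)
    ```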

  2. A dose-response curve for biodosimetry from a 6 MV electron linear accelerator

    PubMed Central

    Lemos-Pinto, M.M.P.; Cadena, M.; Santos, N.; Fernandes, T.S.; Borges, E.; Amaral, A.

    2015-01-01

    Biological dosimetry (biodosimetry) is based on the investigation of radiation-induced biological effects (biomarkers), mainly dicentric chromosomes, in order to correlate them with radiation dose. To interpret the dicentric score in terms of absorbed dose, a calibration curve is needed. Each curve should be constructed with respect to basic physical parameters, such as the type of ionizing radiation characterized by low or high linear energy transfer (LET) and dose rate. This study was designed to obtain dose calibration curves by scoring of dicentric chromosomes in peripheral blood lymphocytes irradiated in vitro with a 6 MV electron linear accelerator (Mevatron M, Siemens, USA). Two software programs, CABAS (Chromosomal Aberration Calculation Software) and Dose Estimate, were used to generate the curve. The two software programs are discussed; the results obtained were compared with each other and with other published low LET radiation curves. Both software programs resulted in identical linear and quadratic terms for the curve presented here, which was in good agreement with published curves for similar radiation quality and dose rates. PMID:26445334
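
    Dicentric calibration curves for low-LET radiation are conventionally of linear-quadratic form, Y = C + alpha*D + beta*D^2. The sketch below fits that curve shape to invented yield data by ordinary least squares purely for illustration; the dedicated programs named in the abstract fit by Poisson maximum likelihood and also propagate uncertainties.

    ```python
    import numpy as np

    # Illustrative dicentric yields Y (dicentrics per cell) at doses D (Gy).
    D = np.array([0.0, 0.25, 0.5, 1.0, 2.0, 3.0, 4.0])
    Y = np.array([0.001, 0.01, 0.03, 0.09, 0.27, 0.55, 0.92])

    # Linear-quadratic calibration curve Y = C + alpha*D + beta*D^2 (least-squares fit
    # used here only for illustration; production tools use Poisson maximum likelihood).
    beta, alpha, C = np.polyfit(D, Y, deg=2)
    print(f"Y = {C:.4f} + {alpha:.4f}*D + {beta:.4f}*D^2")

    # Dose estimation: invert the quadratic for an observed yield Y_obs.
    Y_obs = 0.20
    disc = alpha**2 + 4 * beta * (Y_obs - C)
    D_est = (-alpha + np.sqrt(disc)) / (2 * beta)
    print("estimated dose for Y_obs = 0.20:", round(D_est, 2), "Gy")
    ```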

  3. Estimating population trends with a linear model

    USGS Publications Warehouse

    Bart, Jonathan; Collins, Brian D.; Morrison, R.I.G.

    2003-01-01

    We describe a simple and robust method for estimating trends in population size. The method may be used with Breeding Bird Survey data, aerial surveys, point counts, or any other program of repeated surveys at permanent locations. Surveys need not be made at each location during each survey period. The method differs from most existing methods in being design based, rather than model based. The only assumptions are that the nominal sampling plan is followed and that sample size is large enough for use of the t-distribution. Simulations based on two bird data sets from natural populations showed that the point estimate produced by the linear model was essentially unbiased even when counts varied substantially and 25% of the complete data set was missing. The estimating-equation approach, often used to analyze Breeding Bird Survey data, performed similarly on one data set but had substantial bias on the second data set, in which counts were highly variable. The advantages of the linear model are its simplicity, flexibility, and that it is self-weighting. A user-friendly computer program to carry out the calculations is available from the senior author.

  4. A user's guide for V174, a program using a finite difference method to analyze transonic flow over oscillating wings

    NASA Technical Reports Server (NTRS)

    Butler, T. D.; Weatherill, W. H.; Sebastian, J. D.; Ehlers, F. E.

    1977-01-01

    The design and usage of a pilot program using a finite difference method for calculating the pressure distributions over harmonically oscillating wings in transonic flow are discussed. The procedure used is based on separating the velocity potential into steady and unsteady parts and linearizing the resulting unsteady differential equation for small disturbances. The steady velocity potential, which must be obtained from some other program, is required for input. The unsteady differential equation is linear, complex in form with spatially varying coefficients. Because sinusoidal motion is assumed, time is not a variable. The numerical solution is obtained through a finite difference formulation and a line relaxation solution method.

  5. A users guide for A344: A program using a finite difference method to analyze transonic flow over oscillating airfoils

    NASA Technical Reports Server (NTRS)

    Weatherill, W. H.; Ehlers, F. E.

    1979-01-01

    The design and usage of a pilot program for calculating the pressure distributions over harmonically oscillating airfoils in transonic flow are described. The procedure used is based on separating the velocity potential into steady and unsteady parts and linearizing the resulting unsteady differential equations for small disturbances. The steady velocity potential, which must be obtained from some other program, was required for input. The unsteady equation, as solved, is linear with spatially varying coefficients. Since sinusoidal motion was assumed, time was not a variable. The numerical solution was obtained through a finite difference formulation and either a line relaxation or an out of core direct solution method.

  6. 3-D inelastic analysis methods for hot section components (base program). [turbine blades, turbine vanes, and combustor liners

    NASA Technical Reports Server (NTRS)

    Wilson, R. B.; Bak, M. J.; Nakazawa, S.; Banerjee, P. K.

    1984-01-01

    A 3-D inelastic analysis methods program consists of a series of computer codes embodying a progression of mathematical models (mechanics of materials, special finite element, boundary element) for streamlined analysis of combustor liners, turbine blades, and turbine vanes. These models address the effects of high temperatures and thermal/mechanical loadings on the local (stress/strain) and global (dynamics, buckling) structural behavior of the three selected components. These models are used to solve 3-D inelastic problems using linear approximations in the sense that stresses/strains and temperatures in generic modeling regions are linear functions of the spatial coordinates, and solution increments for load, temperature and/or time are extrapolated linearly from previous information. Three linear formulation computer codes, referred to as MOMM (Mechanics of Materials Model), MHOST (MARC-Hot Section Technology), and BEST (Boundary Element Stress Technology), were developed and are described.

  7. A class of stochastic optimization problems with one quadratic & several linear objective functions and extended portfolio selection model

    NASA Astrophysics Data System (ADS)

    Xu, Jiuping; Li, Jun

    2002-09-01

    In this paper a class of stochastic multiple-objective programming problems with one quadratic, several linear objective functions and linear constraints has been introduced. The former model is transformed into a deterministic multiple-objective nonlinear programming model by means of the introduction of random variables' expectation. The reference direction approach is used to deal with linear objectives and results in a linear parametric optimization formula with a single linear objective function. This objective function is combined with the quadratic function using the weighted sums. The quadratic problem is transformed into a linear (parametric) complementary problem, the basic formula for the proposed approach. The sufficient and necessary conditions for (properly, weakly) efficient solutions and some construction characteristics of (weakly) efficient solution sets are obtained. An interactive algorithm is proposed based on reference direction and weighted sums. Varying the parameter vector on the right-hand side of the model, the DM can freely search the efficient frontier with the model. An extended portfolio selection model is formed when liquidity is considered as another objective to be optimized besides expectation and risk. The interactive approach is illustrated with a practical example.

  8. Optimal Assembly of Psychological and Educational Tests.

    ERIC Educational Resources Information Center

    van der Linden, Wim J.

    1998-01-01

    Reviews optimal test-assembly literature and introduces the contributions to this special issue. Discusses four approaches to computerized test assembly: (1) heuristic-based test assembly; (2) 0-1 linear programming; (3) network-flow programming; and (4) an optimal design approach. Contains a bibliography of 90 sources on test assembly.…

  9. Computer Series, 65. Bits and Pieces, 26.

    ERIC Educational Resources Information Center

    Moore, John W., Ed.

    1985-01-01

    Describes: (1) a microcomputer-based system for filing test questions and assembling examinations; (2) microcomputer use in practical and simulated experiments of gamma rays scattering by outer shell electrons; (3) an interactive, screen-oriented, general linear regression program; and (4) graphics drill and game programs for benzene synthesis.…

  10. Towards lexicographic multi-objective linear programming using grossone methodology

    NASA Astrophysics Data System (ADS)

    Cococcioni, Marco; Pappalardo, Massimo; Sergeyev, Yaroslav D.

    2016-10-01

    Lexicographic Multi-Objective Linear Programming (LMOLP) problems can be solved in two ways: preemptive and nonpreemptive. The preemptive approach requires the solution of a series of LP problems, with changing constraints (each time the next objective is added, a new constraint appears). The nonpreemptive approach is based on a scalarization of the multiple objectives into a single-objective linear function by a weighted combination of the given objectives. It requires the specification of a set of weights, which is not straightforward and can be time consuming. In this work we present both mathematical and software ingredients necessary to solve LMOLP problems using a recently introduced computational methodology (allowing one to work numerically with infinities and infinitesimals) based on the concept of grossone. The ultimate goal of such an attempt is an implementation of a simplex-like algorithm, able to solve the original LMOLP problem by solving only one single-objective problem and without the need to specify finite weights. The expected advantages are therefore obvious.
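
    For contrast with the grossone-based approach, the classical preemptive scheme mentioned above can be sketched directly: solve the objectives in priority order, freezing each achieved optimum as an equality constraint before optimising the next. The feasible region and objectives below are invented.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    # Feasible region: x >= 0, x1 + x2 <= 10, x1 <= 7 (illustrative).
    A_ub = np.array([[1.0, 1.0],
                     [1.0, 0.0]])
    b_ub = np.array([10.0, 7.0])

    # Lexicographic objectives, highest priority first (both to be maximised).
    objectives = [np.array([1.0, 0.0]),      # 1st: maximise x1
                  np.array([0.0, 1.0])]      # 2nd: maximise x2

    A_eq, b_eq = np.empty((0, 2)), np.empty(0)
    for k, obj in enumerate(objectives):
        res = linprog(-obj, A_ub=A_ub, b_ub=b_ub,
                      A_eq=A_eq if len(b_eq) else None,
                      b_eq=b_eq if len(b_eq) else None, method="highs")
        best = -res.fun
        print(f"objective {k + 1}: optimum = {best}")
        # Freeze the achieved level of this objective before optimising the next one.
        A_eq = np.vstack([A_eq, obj])
        b_eq = np.append(b_eq, best)

    print("lexicographic optimum:", res.x)
    ```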

  11. Optimisation of substrate blends in anaerobic co-digestion using adaptive linear programming.

    PubMed

    García-Gen, Santiago; Rodríguez, Jorge; Lema, Juan M

    2014-12-01

    Anaerobic co-digestion of multiple substrates has the potential to enhance biogas productivity by making use of the complementary characteristics of different substrates. A blending strategy based on a linear programming optimisation method is proposed aiming at maximising COD conversion into methane, but simultaneously maintaining a digestate and biogas quality. The method incorporates experimental and heuristic information to define the objective function and the linear restrictions. The active constraints are continuously adapted (by relaxing the restriction boundaries) such that further optimisations in terms of methane productivity can be achieved. The feasibility of the blends calculated with this methodology was previously tested and accurately predicted with an ADM1-based co-digestion model. This was validated in a continuously operated pilot plant, treating for several months different mixtures of glycerine, gelatine and pig manure at organic loading rates from 1.50 to 4.93 gCOD/Ld and hydraulic retention times between 32 and 40 days at mesophilic conditions. Copyright © 2014 Elsevier Ltd. All rights reserved.

  12. Symposium on electron linear accelerators in honor of Richard B. Neal's 80th birthday: Proceedings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Siemann, R.H.

    The papers presented at the conference are: (1) the construction of SLAC and the role of R.B. Neal; (2) symposium speech; (3) lessons learned from the SLC; (4) alternate approaches to future electron-positron linear colliders; (5) the NLC technical program; (6) advanced electron linacs; (7) medical uses of linear accelerators; (8) linac-based, intense, coherent X-ray source using self-amplified spontaneous emission. Selected papers have been indexed separately for inclusion in the Energy Science and Technology Database.

  13. Robust Finite-Dimensional LQG (Linear Quadric Gaussian)-Based Controllers for a Class of Distributed Parameter Systems.

    DTIC Science & Technology

    1988-06-01

    …considered in this research are assumed to be linear quadratic Gaussian (LQG) based controllers. This research first uses a direct approach to…

  14. Compositional Verification with Abstraction, Learning, and SAT Solving

    DTIC Science & Technology

    2015-05-01

    The tool supports propositional logic, linear arithmetic, and bit-vectors (currently via bit-blasting). The front-end is based on an existing tool called UFO [8], which converts C programs to the Horn-SMT format; the encoding uses only the theory of Linear Rational Arithmetic. All experiments were carried out on an Intel Core 2 Quad…

  15. Nonlinear optimization with linear constraints using a projection method

    NASA Technical Reports Server (NTRS)

    Fox, T.

    1982-01-01

    Nonlinear optimization problems that are encountered in science and industry are examined. A method of projecting the gradient vector onto a set of linear constraints is developed, and a program that uses this method is presented. The algorithm that generates this projection matrix is based on the Gram-Schmidt method and overcomes some of the objections to the Rosen projection method.
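
    A hedged sketch of gradient projection for linear equality constraints: the projector onto the null space of A is formed in closed form as P = I − Aᵀ(AAᵀ)⁻¹A (equivalent, for full-rank A, to what a Gram-Schmidt construction produces) and applied inside a simple projected steepest-descent loop on an illustrative quadratic objective.

    ```python
    import numpy as np

    # Linear equality constraints A x = b (illustrative, full row rank).
    A = np.array([[1.0, 1.0, 1.0]])          # x1 + x2 + x3 = 1
    b = np.array([1.0])

    def grad_f(x):
        """Gradient of an illustrative objective f(x) = ||x - t||^2 / 2."""
        return x - np.array([0.2, 0.5, 0.9])

    # Projector onto the null space of A: P = I - A'(AA')^{-1}A.
    P = np.eye(A.shape[1]) - A.T @ np.linalg.solve(A @ A.T, A)

    x = np.array([1.0, 0.0, 0.0])            # feasible starting point (A x = b)
    for _ in range(100):
        d = -P @ grad_f(x)                   # projected steepest-descent direction
        if np.linalg.norm(d) < 1e-10:
            break
        x = x + 0.1 * d                      # step stays in the constraint plane (up to round-off)
    print("constrained minimiser:", x.round(4), "residual A x - b:", A @ x - b)
    ```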

  16. EZLP: An Interactive Computer Program for Solving Linear Programming Problems. Final Report.

    ERIC Educational Resources Information Center

    Jarvis, John J.; And Others

    Designed for student use in solving linear programming problems, the interactive computer program described (EZLP) permits the student to input the linear programming model in exactly the same manner in which it would be written on paper. This report includes a brief review of the development of EZLP; narrative descriptions of program features,…

  17. VENVAL : a plywood mill cost accounting program

    Treesearch

    Henry Spelter

    1991-01-01

    This report documents a package of computer programs called VENVAL. These programs prepare plywood mill data for a linear programming (LP) model that, in turn, calculates the optimum mix of products to make, given a set of technologies and market prices. (The software to solve a linear program is not provided and must be obtained separately.) Linear programming finds...

  18. Ranking Forestry Investments With Parametric Linear Programming

    Treesearch

    Paul A. Murphy

    1976-01-01

    Parametric linear programming is introduced as a technique for ranking forestry investments under multiple constraints; it combines the advantages of simple ranking and linear programming as capital budgeting tools.

  19. Prediction of protein interaction hot spots using rough set-based multiple criteria linear programming.

    PubMed

    Chen, Ruoying; Zhang, Zhiwang; Wu, Di; Zhang, Peng; Zhang, Xinyang; Wang, Yong; Shi, Yong

    2011-01-21

    Protein-protein interactions are fundamentally important in many biological processes, and there is a pressing need to understand the principles of protein-protein interactions. Mutagenesis studies have found that only a small fraction of surface residues, known as hot spots, are responsible for the physical binding in protein complexes. However, revealing hot spots by mutagenesis experiments is usually time consuming and expensive. In order to complement the experimental efforts, we propose a new computational approach in this paper to predict hot spots. Our method, Rough Set-based Multiple Criteria Linear Programming (RS-MCLP), integrates rough set theory and multiple criteria linear programming to choose dominant features and computationally predict hot spots. Our approach is benchmarked by a dataset of 904 alanine-mutated residues and the results show that our RS-MCLP method performs better than other methods, e.g., MCLP, Decision Tree, Bayes Net, and the existing HotSprint database. In addition, we reveal several biological insights based on our analysis. We find that four features (the change of accessible surface area, percentage of the change of accessible surface area, size of a residue, and atomic contacts) are critical in predicting hot spots. Furthermore, we find that three residues (Tyr, Trp, and Phe) are abundant in hot spots through analyzing the distribution of amino acids. Copyright © 2010 Elsevier Ltd. All rights reserved.

  20. Multi-dimensional Rankings, Program Termination, and Complexity Bounds of Flowchart Programs

    NASA Astrophysics Data System (ADS)

    Alias, Christophe; Darte, Alain; Feautrier, Paul; Gonnord, Laure

    Proving the termination of a flowchart program can be done by exhibiting a ranking function, i.e., a function from the program states to a well-founded set, which strictly decreases at each program step. A standard method to automatically generate such a function is to compute invariants for each program point and to search for a ranking in a restricted class of functions that can be handled with linear programming techniques. Previous algorithms based on affine rankings either are applicable only to simple loops (i.e., single-node flowcharts) and rely on enumeration, or are not complete in the sense that they are not guaranteed to find a ranking in the class of functions they consider, if one exists. Our first contribution is to propose an efficient algorithm to compute ranking functions: It can handle flowcharts of arbitrary structure, the class of candidate rankings it explores is larger, and our method, although greedy, is provably complete. Our second contribution is to show how to use the ranking functions we generate to get upper bounds for the computational complexity (number of transitions) of the source program. This estimate is a polynomial, which means that we can handle programs with more than linear complexity. We applied the method on a collection of test cases from the literature. We also show the links and differences with previous techniques based on the insertion of counters.

  1. Waste management under multiple complexities: inexact piecewise-linearization-based fuzzy flexible programming.

    PubMed

    Sun, Wei; Huang, Guo H; Lv, Ying; Li, Gongchen

    2012-06-01

    To tackle nonlinear economies-of-scale (EOS) effects in interval-parameter constraints for a representative waste management problem, an inexact piecewise-linearization-based fuzzy flexible programming (IPFP) model is developed. In IPFP, interval parameters for waste amounts and transportation/operation costs can be quantified; aspiration levels for net system costs, as well as tolerance intervals for both capacities of waste treatment facilities and waste generation rates can be reflected; and the nonlinear EOS effects transformed from objective function to constraints can be approximated. An interactive algorithm is proposed for solving the IPFP model, which in nature is an interval-parameter mixed-integer quadratically constrained programming model. To demonstrate the IPFP's advantages, two alternative models are developed to compare their performances. One is a conventional linear-regression-based inexact fuzzy programming model (IPFP2) and the other is an IPFP model with all right-hand sides of fuzzy constraints being the corresponding interval numbers (IPFP3). The comparison results between IPFP and IPFP2 indicate that the optimized waste amounts would have similar patterns in both models. However, when dealing with EOS effects in constraints, the IPFP2 may underestimate the net system costs while the IPFP can estimate the costs more accurately. The comparison results between IPFP and IPFP3 indicate that their solutions would be significantly different. The decreased system uncertainties in IPFP's solutions demonstrate its effectiveness for providing more satisfactory interval solutions than IPFP3. Following its first application to waste management, the IPFP can be potentially applied to other environmental problems under multiple complexities. Copyright © 2012 Elsevier Ltd. All rights reserved.

  2. Investigating Integer Restrictions in Linear Programming

    ERIC Educational Resources Information Center

    Edwards, Thomas G.; Chelst, Kenneth R.; Principato, Angela M.; Wilhelm, Thad L.

    2015-01-01

    Linear programming (LP) is an application of graphing linear systems that appears in many Algebra 2 textbooks. Although not explicitly mentioned in the Common Core State Standards for Mathematics, linear programming blends seamlessly into modeling with mathematics, the fourth Standard for Mathematical Practice (CCSSI 2010, p. 7). In solving a…

  3. Landmark matching based retinal image alignment by enforcing sparsity in correspondence matrix.

    PubMed

    Zheng, Yuanjie; Daniel, Ebenezer; Hunter, Allan A; Xiao, Rui; Gao, Jianbin; Li, Hongsheng; Maguire, Maureen G; Brainard, David H; Gee, James C

    2014-08-01

    Retinal image alignment is fundamental to many applications in the diagnosis of eye diseases. In this paper, we address the problem of landmark matching based retinal image alignment. We propose a novel landmark matching formulation by enforcing sparsity in the correspondence matrix and offer solutions based on linear programming. The proposed formulation not only enables a joint estimation of the landmark correspondences and a predefined transformation model but also combines the benefits of the softassign strategy (Chui and Rangarajan, 2003) and the combinatorial optimization of linear programming. We also introduce a set of reinforced self-similarity descriptors which better characterize local photometric and geometric properties of the retinal image. Theoretical analysis and experimental results with both fundus color images and angiogram images show the superior performance of our algorithms relative to several state-of-the-art techniques. Copyright © 2013 Elsevier B.V. All rights reserved.
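
    A rough sketch of the linear-programming ingredient (not the paper's formulation, which adds a sparsity term and jointly estimates a transformation): a tiny landmark-correspondence problem written as the LP relaxation of an assignment problem, with a hypothetical cost matrix standing in for descriptor distances.

```python
# Sketch: landmark correspondence as the LP relaxation of an assignment
# problem.  The relaxation over doubly stochastic matrices has integral
# optima, so the matching can be read off directly.
import numpy as np
from scipy.optimize import linprog

cost = np.array([[0.1, 0.9, 0.8],   # hypothetical descriptor distances between
                 [0.7, 0.2, 0.9],   # landmarks of image A (rows) and
                 [0.8, 0.8, 0.3]])  # landmarks of image B (columns)
n = cost.shape[0]

A_eq, b_eq = [], []
for i in range(n):                  # each row of the matching matrix sums to 1
    row = np.zeros(n * n)
    row[i * n:(i + 1) * n] = 1.0
    A_eq.append(row)
    b_eq.append(1.0)
for j in range(n):                  # each column sums to 1
    col = np.zeros(n * n)
    col[j::n] = 1.0
    A_eq.append(col)
    b_eq.append(1.0)

res = linprog(cost.ravel(), A_eq=np.array(A_eq), b_eq=b_eq,
              bounds=[(0.0, 1.0)] * (n * n))
match = res.x.reshape(n, n).argmax(axis=1)
print("landmark i of image A matches landmark", match, "of image B")
```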

  4. User's manual for interactive LINEAR: A FORTRAN program to derive linear aircraft models

    NASA Technical Reports Server (NTRS)

    Antoniewicz, Robert F.; Duke, Eugene L.; Patterson, Brian P.

    1988-01-01

    An interactive FORTRAN program that provides the user with a powerful and flexible tool for the linearization of aircraft aerodynamic models is documented in this report. The program LINEAR numerically determines a linear system model using nonlinear equations of motion and a user-supplied linear or nonlinear aerodynamic model. The nonlinear equations of motion used are six-degree-of-freedom equations with stationary atmosphere and flat, nonrotating earth assumptions. The system model determined by LINEAR consists of matrices for both the state and observation equations. The program has been designed to allow easy selection and definition of the state, control, and observation variables to be used in a particular model.
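
    The core idea, numerically extracting a linear state-space model from nonlinear equations of motion, can be sketched in a few lines (this is an independent illustration with made-up dynamics, not the LINEAR code or its aircraft model):

```python
# Sketch: central-difference linearization of x_dot = f(x, u) about a trim
# point (x0, u0), yielding the A and B matrices of dx_dot = A*dx + B*du.
import numpy as np

def f(x, u):
    # Hypothetical short-period-like dynamics, for illustration only.
    alpha, q = x
    elevator, = u
    return np.array([-1.2 * alpha + q + 0.3 * elevator,
                     -4.0 * np.sin(alpha) - 0.8 * q - 2.5 * elevator])

def linearize(f, x0, u0, eps=1e-6):
    n, m = len(x0), len(u0)
    A, B = np.zeros((n, n)), np.zeros((n, m))
    for j in range(n):
        dx = np.zeros(n)
        dx[j] = eps
        A[:, j] = (f(x0 + dx, u0) - f(x0 - dx, u0)) / (2 * eps)
    for j in range(m):
        du = np.zeros(m)
        du[j] = eps
        B[:, j] = (f(x0, u0 + du) - f(x0, u0 - du)) / (2 * eps)
    return A, B

A, B = linearize(f, x0=np.zeros(2), u0=np.zeros(1))
print("A =\n", A, "\nB =\n", B)
```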

  5. An approach of traffic signal control based on NLRSQP algorithm

    NASA Astrophysics Data System (ADS)

    Zou, Yuan-Yang; Hu, Yu

    2017-11-01

    This paper presents a linear program model with linear complementarity constraints (LPLCC) for the traffic signal optimization problem. The objective function of the model is to minimize the total weighted queue length at the end of each cycle. A combination algorithm based on nonlinear least-squares regression and sequential quadratic programming (NLRSQP) is then proposed, by which a local optimal solution can be obtained. Furthermore, four numerical experiments are carried out to study how the initial solution of the algorithm should be set so that a better local optimal solution is obtained more quickly. In particular, the results of the numerical experiments show that the model is effective for different arrival rates and weight factors, and that the lower the initial solution is, the better the optimal solution that can be obtained.

  6. A generalized fuzzy linear programming approach for environmental management problem under uncertainty.

    PubMed

    Fan, Yurui; Huang, Guohe; Veawab, Amornvadee

    2012-01-01

    In this study, a generalized fuzzy linear programming (GFLP) method was developed to deal with uncertainties expressed as fuzzy sets that exist in the constraints and objective function. A stepwise interactive algorithm (SIA) was advanced to solve the GFLP model and generate solutions expressed as fuzzy sets. To demonstrate its application, the developed GFLP method was applied to a regional sulfur dioxide (SO2) control planning model to identify effective SO2 mitigation policies with a minimized system performance cost under uncertainty. The results represent the amounts of SO2 allocated to different control measures from different sources. Compared with the conventional interval-parameter linear programming (ILP) approach, the solutions obtained through GFLP were expressed as fuzzy sets, which can provide intervals for the decision variables and objective function, as well as the related possibilities. Therefore, decision makers can make a tradeoff between model stability and plausibility based on the solutions obtained through GFLP and then identify desired policies for SO2-emission control under uncertainty.

  7. A non-linear regression analysis program for describing electrophysiological data with multiple functions using Microsoft Excel.

    PubMed

    Brown, Angus M

    2006-04-01

    The objective of the present study was to demonstrate a method for fitting complex electrophysiological data with multiple functions using the SOLVER add-in of the ubiquitous spreadsheet Microsoft Excel. SOLVER minimizes the sum of the squared differences between the data to be fitted and the function(s) describing the data using an iterative generalized reduced gradient method. While it is straightforward to fit data with linear functions, and we have previously demonstrated a method of non-linear regression analysis of experimental data based upon a single function, it is more complex to fit data with multiple functions, usually requiring specialized, expensive computer software. In this paper we describe an easily understood program for fitting experimentally acquired data, in this case the stimulus-evoked compound action potential from the mouse optic nerve, with multiple Gaussian functions. The program is flexible and can be applied to describe data with a wide variety of user-input functions.
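
    The same least-squares idea can be reproduced outside Excel; the sketch below fits a sum of two Gaussians with SciPy's curve_fit on synthetic data (the data, starting guesses, and peak count are invented and unrelated to the optic-nerve recordings of the study).

```python
# Sketch: fit a sum of two Gaussians by nonlinear least squares, i.e. the
# same objective the SOLVER add-in minimises, using scipy.optimize.curve_fit.
import numpy as np
from scipy.optimize import curve_fit

def two_gaussians(t, a1, mu1, s1, a2, mu2, s2):
    return (a1 * np.exp(-((t - mu1) / s1) ** 2)
            + a2 * np.exp(-((t - mu2) / s2) ** 2))

t = np.linspace(0.0, 10.0, 200)
rng = np.random.default_rng(0)
y = two_gaussians(t, 1.0, 3.0, 0.6, 0.5, 6.0, 1.0)      # synthetic "signal"
y += 0.02 * rng.standard_normal(t.size)                  # plus noise

p0 = [0.8, 2.5, 0.5, 0.4, 6.5, 0.8]                      # initial guesses
popt, pcov = curve_fit(two_gaussians, t, y, p0=p0)
print("fitted parameters:", np.round(popt, 2))
```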

  8. Mathematical programming models for the economic design and assessment of wind energy conversion systems

    NASA Astrophysics Data System (ADS)

    Reinert, K. A.

    The use of linear decision rules (LDR) and chance-constrained programming (CCP) to optimize the performance of wind energy conversion clusters coupled to storage systems is described. Storage is modelled by LDR and output by CCP. The linear allocation rule and linear release rule prescribe the size of, and optimize, a storage facility with a bypass. Chance constraints are introduced to treat reliability explicitly in terms of an appropriate value from an inverse cumulative distribution function. Details of the deterministic programming structure and a sample problem involving a 500 kW and a 1.5 MW WECS are provided, considering an installed cost of $1/kW. Four demand patterns and three levels of reliability are analyzed to optimize the generator choice and the storage configuration for base-load and peak operating conditions. Deficiencies in the model's ability to predict reliability and to account for serial correlations are noted, but the model is concluded to be useful for narrowing WECS design options.

  9. A parallel solver for huge dense linear systems

    NASA Astrophysics Data System (ADS)

    Badia, J. M.; Movilla, J. L.; Climente, J. I.; Castillo, M.; Marqués, M.; Mayo, R.; Quintana-Ortí, E. S.; Planelles, J.

    2011-11-01

    HDSS (Huge Dense Linear System Solver) is a Fortran Application Programming Interface (API) that facilitates the parallel solution of very large dense systems for scientists and engineers. The API makes use of parallelism to yield an efficient solution of the systems on a wide range of parallel platforms, from clusters of processors to massively parallel multiprocessors. It exploits out-of-core strategies that leverage secondary memory in order to solve huge linear systems with on the order of 100 000 equations. The API is based on the parallel linear algebra library PLAPACK and on its Out-Of-Core (OOC) extension POOCLAPACK. Both PLAPACK and POOCLAPACK use the Message Passing Interface (MPI) as the communication layer and BLAS to perform the local matrix operations. The API provides a friendly interface to users, hiding almost all the technical aspects related to the parallel execution of the code and the use of secondary memory to solve the systems. In particular, the API can automatically select the best way to store and solve the systems, depending on the dimension of the system, the number of processes, and the main memory of the platform. Experimental results on several parallel platforms report high performance, reaching more than 1 TFLOP with 64 cores to solve a system with more than 200 000 equations and more than 10 000 right-hand side vectors.
    New version program summary
    Program title: Huge Dense System Solver (HDSS)
    Catalogue identifier: AEHU_v1_1
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEHU_v1_1.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 87 062
    No. of bytes in distributed program, including test data, etc.: 1 069 110
    Distribution format: tar.gz
    Programming language: Fortran 90, C
    Computer: Parallel architectures: multiprocessors, computer clusters
    Operating system: Linux/Unix
    Has the code been vectorized or parallelized?: Yes, includes MPI primitives
    RAM: Tested for up to 190 GB
    Classification: 6.5
    External routines: MPI (http://www.mpi-forum.org/), BLAS (http://www.netlib.org/blas/), PLAPACK (http://www.cs.utexas.edu/~plapack/), POOCLAPACK (ftp://ftp.cs.utexas.edu/pub/rvdg/PLAPACK/pooclapack.ps) (code for PLAPACK and POOCLAPACK is included in the distribution)
    Catalogue identifier of previous version: AEHU_v1_0
    Journal reference of previous version: Comput. Phys. Comm. 182 (2011) 533
    Does the new version supersede the previous version?: Yes
    Nature of problem: Huge scale dense systems of linear equations, Ax = B, beyond standard LAPACK capabilities.
    Solution method: The linear systems are solved by means of parallelized routines based on the LU factorization, using efficient secondary storage algorithms when the available main memory is insufficient.
    Reasons for new version: In many applications a high accuracy in the solution of very large linear systems must be guaranteed, which can be achieved by using double-precision arithmetic.
    Summary of revisions: Version 1.1 can be used to solve linear systems using double-precision arithmetic. New version of the initialization routine: the user can choose the kind of arithmetic and the values of several parameters of the environment.
    Running time: About 5 hours to solve a system with more than 200 000 equations and more than 10 000 right-hand side vectors using double-precision arithmetic on an eight-node commodity cluster with a total of 64 Intel cores.

  10. Image reconstruction and scan configurations enabled by optimization-based algorithms in multispectral CT

    NASA Astrophysics Data System (ADS)

    Chen, Buxin; Zhang, Zheng; Sidky, Emil Y.; Xia, Dan; Pan, Xiaochuan

    2017-11-01

    Optimization-based algorithms for image reconstruction in multispectral (or photon-counting) computed tomography (MCT) remain a topic of active research. The challenge of optimization-based image reconstruction in MCT stems from the inherently non-linear data model that can lead to a non-convex optimization program for which no mathematically exact solver seems to exist for achieving globally optimal solutions. In this work, based upon a non-linear data model, we design a non-convex optimization program, derive its first-order-optimality conditions, and propose an algorithm to solve the program for image reconstruction in MCT. In addition to consideration of image reconstruction for the standard scan configuration, the emphasis is on investigating the algorithm’s potential for enabling non-standard scan configurations with no or minimal hardware modification to existing CT systems, which has potential practical implications for lowered hardware cost, enhanced scanning flexibility, and reduced imaging dose/time in MCT. Numerical studies are carried out for verification of the algorithm and its implementation, and for a preliminary demonstration and characterization of the algorithm in reconstructing images and in enabling non-standard configurations with varying scanning angular range and/or x-ray illumination coverage in MCT.

  11. Development and validation of a general purpose linearization program for rigid aircraft models

    NASA Technical Reports Server (NTRS)

    Duke, E. L.; Antoniewicz, R. F.

    1985-01-01

    A FORTRAN program that provides the user with a powerful and flexible tool for the linearization of aircraft models is discussed. The program LINEAR numerically determines a linear system model using nonlinear equations of motion and a user-supplied, nonlinear aerodynamic model. The system model determined by LINEAR consists of matrices for both the state and observation equations. The program has been designed to allow easy selection and definition of the state, control, and observation variables to be used in a particular model. Also included in the report is a comparison of linear and nonlinear models for a high-performance aircraft.

  12. A neural network approach to job-shop scheduling.

    PubMed

    Zhou, D N; Cherkassky, V; Baldwin, T R; Olson, D E

    1991-01-01

    A novel analog computational network is presented for solving NP-complete constraint satisfaction problems, specifically job-shop scheduling. In contrast to most neural approaches to combinatorial optimization based on a quadratic energy cost function, the authors propose to use linear cost functions. As a result, the network complexity (the number of neurons and the number of resistive interconnections) grows only linearly with problem size, and large-scale implementations become possible. The proposed approach is related to the linear programming network described by D.W. Tank and J.J. Hopfield (1985), which also uses a linear cost function for a simple optimization problem. It is shown how to map a difficult constraint-satisfaction problem onto a simple neural net in which the number of neural processors equals the number of subjobs (operations) and the number of interconnections grows linearly with the total number of operations. Simulations show that the authors' approach produces better solutions than existing neural approaches to job-shop scheduling, namely the traveling salesman problem-type Hopfield approach and the integer linear programming approach of J.P.S. Foo and Y. Takefuji (1988), in terms of the quality of the solution and the network complexity.

  13. An Optimization Model for the Allocation of University Based Merit Aid

    ERIC Educational Resources Information Center

    Sugrue, Paul K.

    2010-01-01

    The allocation of merit-based financial aid during the college admissions process presents postsecondary institutions with complex and financially expensive decisions. This article describes the application of linear programming as a decision tool in merit-based financial aid decisions at a medium-sized private university. The objective defined for…

  14. Sequential computation of elementary modes and minimal cut sets in genome-scale metabolic networks using alternate integer linear programming.

    PubMed

    Song, Hyun-Seob; Goldberg, Noam; Mahajan, Ashutosh; Ramkrishna, Doraiswami

    2017-08-01

    Elementary (flux) modes (EMs) have served as a valuable tool for investigating structural and functional properties of metabolic networks. Identification of the full set of EMs in genome-scale networks remains challenging due to the combinatorial explosion of EMs in complex networks. Often, however, only a small subset of relevant EMs needs to be known, for which optimization-based sequential computation is a useful alternative. Most of the currently available methods along this line are based on the iterative use of mixed integer linear programming (MILP), the effectiveness of which significantly deteriorates as the number of iterations builds up. To alleviate the computational burden associated with the MILP implementation, we here present a novel optimization algorithm termed alternate integer linear programming (AILP). Our algorithm was designed to iteratively solve a pair of integer programming (IP) and linear programming (LP) problems to compute EMs in a sequential manner. In each step, the IP identifies a minimal subset of reactions, the deletion of which disables all previously identified EMs. Thus, a subsequent LP solution subject to this reaction deletion constraint becomes a distinct EM. In cases where no feasible LP solution is available, the IP-derived reaction deletion sets represent minimal cut sets (MCSs). Despite the additional computation of MCSs, AILP reduced the time needed to compute EMs by orders of magnitude. The proposed AILP algorithm not only offers a computational advantage in the EM analysis of genome-scale networks, but also improves the understanding of the linkage between EMs and MCSs. The software is implemented in Matlab and is provided as supplementary information. Contact: hyunseob.song@pnnl.gov. Supplementary data are available at Bioinformatics online. Published by Oxford University Press 2017. This work is written by US Government employees and is in the public domain in the US.

  15. An algorithm for control system design via parameter optimization. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Sinha, P. K.

    1972-01-01

    An algorithm for design via parameter optimization has been developed for linear-time-invariant control systems based on the model reference adaptive control concept. A cost functional is defined to evaluate the system response relative to nominal, which involves in general the error between the system and nominal response, its derivatives and the control signals. A program for the practical implementation of this algorithm has been developed, with the computational scheme for the evaluation of the performance index based on Lyapunov's theorem for stability of linear invariant systems.

  16. An efficient method for generalized linear multiplicative programming problem with multiplicative constraints.

    PubMed

    Zhao, Yingfeng; Liu, Sanyang

    2016-01-01

    We present a practical branch and bound algorithm for globally solving the generalized linear multiplicative programming problem with multiplicative constraints. To solve the problem, a relaxation problem, which is equivalent to a linear program, is proposed by utilizing a new two-phase relaxation technique. In the algorithm, lower and upper bounds are simultaneously obtained by solving some linear relaxation programming problems. Global convergence has been proved, and the results of some sample examples and a small random experiment show that the proposed algorithm is feasible and efficient.

  17. MAGDM linear-programming models with distinct uncertain preference structures.

    PubMed

    Xu, Zeshui S; Chen, Jian

    2008-10-01

    Group decision making with preference information on alternatives is an interesting and important research topic which has been receiving more and more attention in recent years. The purpose of this paper is to investigate multiple-attribute group decision-making (MAGDM) problems with distinct uncertain preference structures. We develop some linear-programming models for dealing with the MAGDM problems, where the information about attribute weights is incomplete, and the decision makers have their preferences on alternatives. The provided preference information can be represented in the following three distinct uncertain preference structures: 1) interval utility values; 2) interval fuzzy preference relations; and 3) interval multiplicative preference relations. We first establish some linear-programming models based on the decision matrix and each of the distinct uncertain preference structures and then develop some linear-programming models to integrate all three structures of subjective uncertain preference information provided by the decision makers and the objective information depicted in the decision matrix. Furthermore, we propose a simple and straightforward approach to ranking and selecting the given alternatives. It is worth pointing out that the developed models can also be used to deal with the situations where the three distinct uncertain preference structures are reduced to the traditional ones, i.e., utility values, fuzzy preference relations, and multiplicative preference relations. Finally, we use a practical example to illustrate in detail the calculation process of the developed approach.

  18. A recurrent neural network for solving bilevel linear programming problem.

    PubMed

    He, Xing; Li, Chuandong; Huang, Tingwen; Li, Chaojie; Huang, Junjian

    2014-04-01

    In this brief, based on the method of penalty functions, a recurrent neural network (NN) modeled by means of a differential inclusion is proposed for solving the bilevel linear programming problem (BLPP). Compared with the existing NNs for BLPP, the model has the least number of state variables and simple structure. Using nonsmooth analysis, the theory of differential inclusions, and Lyapunov-like method, the equilibrium point sequence of the proposed NNs can approximately converge to an optimal solution of BLPP under certain conditions. Finally, the numerical simulations of a supply chain distribution model have shown excellent performance of the proposed recurrent NNs.

  19. Very Low-Cost Nutritious Diet Plans Designed by Linear Programming.

    ERIC Educational Resources Information Center

    Foytik, Jerry

    1981-01-01

    Provides procedural details of the Linear Programming approach developed by the U.S. Department of Agriculture to devise a dietary guide for consumers that minimizes food costs without sacrificing nutritional quality. Compares Linear Programming with the Thrifty Food Plan, which has been a basis for allocating coupons under the Food Stamp Program. (CS)

  20. HADY-I, a FORTRAN program for the compressible stability analysis of three-dimensional boundary layers. [on swept and tapered wings

    NASA Technical Reports Server (NTRS)

    El-Hady, N. M.

    1981-01-01

    A computer program, HADY-I, for calculating the linear incompressible or compressible stability characteristics of the laminar boundary layer on swept and tapered wings is described. The eigenvalue problem and its adjoint, arising from the linearized disturbance equations with the appropriate boundary conditions, are solved numerically using a combination of a Newton-Raphson iterative scheme and a variable-step-size integrator based on the Runge-Kutta-Fehlberg fifth-order formulas. The integrator is used in conjunction with a modified Gram-Schmidt orthonormalization procedure. The computer program HADY-I calculates the growth rates of crossflow or streamwise Tollmien-Schlichting instabilities. It also calculates the group velocities of these disturbances. It is restricted to parallel stability calculations, in which the boundary layer (meanflow) is assumed to be parallel. The meanflow solution is an input to the program.

  1. Solving a class of generalized fractional programming problems using the feasibility of linear programs.

    PubMed

    Shen, Peiping; Zhang, Tongli; Wang, Chunfeng

    2017-01-01

    This article presents a new approximation algorithm for globally solving a class of generalized fractional programming problems (P) whose objective functions are defined as an appropriate composition of ratios of affine functions. To solve this problem, the algorithm solves an equivalent optimization problem (Q) via an exploration of a suitably defined nonuniform grid. The main work of the algorithm involves checking the feasibility of linear programs associated with the interesting grid points. Based on a computational complexity analysis, it is proved that the proposed algorithm is a fully polynomial time approximation scheme when the number of ratio terms in the objective function of problem (P) is fixed. In contrast to existing results in the literature, the algorithm does not require assumptions of quasi-concavity or low rank on the objective function of problem (P). Numerical results are given to illustrate the feasibility and effectiveness of the proposed algorithm.
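
    A related, much simpler use of LP feasibility checks, offered only as a sketch and not as the paper's nonuniform-grid scheme: maximizing a single ratio of affine functions over a box by bisecting on the achievable ratio value, where each candidate value requires only one feasibility LP. All coefficients below are invented, and the denominator is positive over the box so the reformulation is valid.

```python
# Sketch: maximise (c.x + c0) / (d.x + d0) over a box by bisection on the
# ratio value; each test value needs only an LP feasibility check.
from scipy.optimize import linprog

c, c0 = [2.0, 1.0], 1.0
d, d0 = [1.0, 3.0], 2.0                    # denominator > 0 on the box below
bounds = [(0.0, 5.0), (0.0, 5.0)]

def feasible(lam):
    # Is there an x in the box with c.x + c0 >= lam * (d.x + d0)?
    A_ub = [[lam * d[j] - c[j] for j in range(2)]]
    b_ub = [c0 - lam * d0]
    return linprog([0.0, 0.0], A_ub=A_ub, b_ub=b_ub, bounds=bounds).success

lo, hi = 0.0, 10.0
for _ in range(50):                        # bisection on the achievable ratio
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if feasible(mid) else (lo, mid)
print(f"maximum ratio is approximately {lo:.4f}")
```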

  2. Fuzzy bi-objective linear programming for portfolio selection problem with magnitude ranking function

    NASA Astrophysics Data System (ADS)

    Kusumawati, Rosita; Subekti, Retno

    2017-04-01

    A fuzzy bi-objective linear programming (FBOLP) model is a bi-objective linear programming model over fuzzy numbers, in which the coefficients of the equations are fuzzy numbers. This model is proposed to solve the portfolio selection problem, which generates an asset portfolio with the lowest risk and the highest expected return. The FBOLP model with normal fuzzy numbers for the risk and expected return of stocks is transformed into a linear programming (LP) model using a magnitude ranking function.

  3. E-Learning Technologies: Employing Matlab Web Server to Facilitate the Education of Mathematical Programming

    ERIC Educational Resources Information Center

    Karagiannis, P.; Markelis, I.; Paparrizos, K.; Samaras, N.; Sifaleras, A.

    2006-01-01

    This paper presents new web-based educational software (webNetPro) for "Linear Network Programming." It includes many algorithms for "Network Optimization" problems, such as shortest path problems, minimum spanning tree problems, maximum flow problems and other search algorithms. Therefore, webNetPro can assist the teaching process of courses such…

  4. Bi-Linear Shear Deformable ANCF Shell Element Using Continuum Mechanics Approach

    DTIC Science & Technology

    2014-08-01

    Yamashita, Hiroki; Valkeapää, Antti; Jayakumar, Paramsothy; Sugiyama, Hiroyuki (Lappeenranta University of Technology, Skinnarilankatu 34, 53850 Lappeenranta, Finland; US Army RDECOM TARDEC, 6501 E. 11 Mile ...). Related report reference: Valkeapää, A. I., Yamashita, H., Jayakumar, P., and Sugiyama, H., "Gradient Deficient Bi-Linear Plate Element Based on Absolute Nodal Coordinate ...

  5. Symbolic discrete event system specification

    NASA Technical Reports Server (NTRS)

    Zeigler, Bernard P.; Chi, Sungdo

    1992-01-01

    Extending discrete event modeling formalisms to facilitate greater symbol manipulation capabilities is important to further their use in intelligent control and design of high autonomy systems. An extension to the DEVS formalism that facilitates symbolic expression of event times by extending the time base from the real numbers to the field of linear polynomials over the reals is defined. A simulation algorithm is developed to generate the branching trajectories resulting from the underlying nondeterminism. To efficiently manage symbolic constraints, a consistency checking algorithm for linear polynomial constraints based on feasibility checking algorithms borrowed from linear programming has been developed. The extended formalism offers a convenient means to conduct multiple, simultaneous explorations of model behaviors. Examples of application are given with concentration on fault model analysis.
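
    The borrowed feasibility check is easy to picture in isolation (a minimal sketch with invented constraints, not the DEVS implementation): consistency of a set of linear constraints over symbolic event-time parameters can be decided by solving a linear program with a zero objective.

```python
# Sketch: decide whether linear constraints on symbolic event times are
# consistent by running an LP feasibility check (zero objective).
from scipy.optimize import linprog

# Constraints over (t1, t2):  t1 >= 0,  t2 >= t1 + 2,  t2 <= 10
A_ub = [[1.0, -1.0],        #  t1 - t2 <= -2
        [0.0,  1.0]]        #  t2      <= 10
b_ub = [-2.0, 10.0]

res = linprog([0.0, 0.0], A_ub=A_ub, b_ub=b_ub,
              bounds=[(0.0, None), (None, None)])
print("constraint set is", "consistent" if res.success else "inconsistent")
```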

  6. ASTROP2-LE: A Mistuned Aeroelastic Analysis System Based on a Two Dimensional Linearized Euler Solver

    NASA Technical Reports Server (NTRS)

    Reddy, T. S. R.; Srivastava, R.; Mehmed, Oral

    2002-01-01

    An aeroelastic analysis system for flutter and forced response analysis of turbomachines based on a two-dimensional linearized unsteady Euler solver has been developed. The ASTROP2 code, an aeroelastic stability analysis program for turbomachinery, was used as a basis for this development. The ASTROP2 code uses strip theory to couple a two-dimensional aerodynamic model with a three-dimensional structural model. The code was modified to include forced response capability. The formulation was also modified to include aeroelastic analysis with mistuning. A linearized unsteady Euler solver, LINFLX2D, is added to model the unsteady aerodynamics in ASTROP2. By calculating the unsteady aerodynamic loads using LINFLX2D, it is possible to include the effects of transonic flow on flutter and forced response in the analysis. The stability is inferred from an eigenvalue analysis. The revised code, ASTROP2-LE (ASTROP2 using Linearized Euler aerodynamics), is validated by comparing the predictions with those obtained using linear unsteady aerodynamic solutions.

  7. Automata-Based Verification of Temporal Properties on Running Programs

    NASA Technical Reports Server (NTRS)

    Giannakopoulou, Dimitra; Havelund, Klaus; Lan, Sonie (Technical Monitor)

    2001-01-01

    This paper presents an approach to checking a running program against its Linear Temporal Logic (LTL) specifications. LTL is a widely used logic for expressing properties of programs viewed as sets of executions. Our approach consists of translating LTL formulae to finite-state automata, which are used as observers of the program behavior. The translation algorithm we propose modifies standard LTL to Buchi automata conversion techniques to generate automata that check finite program traces. The algorithm has been implemented in a tool, which has been integrated with the generic JPaX framework for runtime analysis of Java programs.

  8. Linear Programming across the Curriculum

    ERIC Educational Resources Information Center

    Yoder, S. Elizabeth; Kurz, M. Elizabeth

    2015-01-01

    Linear programming (LP) is taught in different departments across college campuses with engineering and management curricula. Modeling an LP problem is taught in every linear programming class. As faculty teaching in Engineering and Management departments, the depth to which teachers should expect students to master this particular type of…

  9. Fundamental solution of the problem of linear programming and method of its determination

    NASA Technical Reports Server (NTRS)

    Petrunin, S. V.

    1978-01-01

    The idea of a fundamental solution to a problem in linear programming is introduced. A method of determining the fundamental solution and of applying this method to the solution of a problem in linear programming is proposed. Numerical examples are cited.

  10. A Sawmill Manager Adapts To Change With Linear Programming

    Treesearch

    George F. Dutrow; James E. Granskog

    1973-01-01

    Linear programming provides guidelines for increasing sawmill capacity and flexibility and for determining stumpage-purchasing strategy. The operator of a medium-sized sawmill implemented improvements suggested by linear programming analysis; results indicate a 45 percent increase in revenue and a 36 percent hike in volume processed.

  11. Randomized trial of two e-learning programs for oral health students on secondary prevention of eating disorders.

    PubMed

    DeBate, Rita D; Severson, Herbert H; Cragun, Deborah; Bleck, Jennifer; Gau, Jeff; Merrell, Laura; Cantwell, Carley; Christiansen, Steve; Koerber, Anne; Tomar, Scott L; Brown, Kelli McCormack; Tedesco, Lisa A; Hendricson, William; Taris, Mark

    2014-01-01

    The purpose of this study was to test whether an interactive, web-based training program is more effective than an existing, flat-text, e-learning program at improving oral health students' knowledge, motivation, and self-efficacy to address signs of disordered eating behaviors with patients. Eighteen oral health classes of dental and dental hygiene students were randomized to either the Intervention (interactive program; n=259) or Alternative (existing program; n=58) conditions. Hierarchical linear modeling assessed for posttest differences between groups while controlling for baseline measures. Improvement among Intervention participants was superior to those who completed the Alternative program for three of the six outcomes: benefits/barriers, self-efficacy, and skills-based knowledge (effect sizes ranging from 0.43 to 0.87). This study thus suggests that interactive training programs may be better than flat-text e-learning programs for improving the skills-based knowledge and self-efficacy necessary for behavior change.

  12. Spanish-language community-based mental health treatment programs, policy-required language-assistance programming, and mental health treatment access among Spanish-speaking clients.

    PubMed

    Snowden, Lonnie R; McClellan, Sean R

    2013-09-01

    We investigated the extent to which implementing language assistance programming through contracting with community-based organizations improved the accessibility of mental health care under Medi-Cal (California's Medicaid program) for Spanish-speaking persons with limited English proficiency, and whether it reduced language-based treatment access disparities. Using a time series nonequivalent control group design, we studied county-level penetration of language assistance programming over 10 years (1997-2006) for Spanish-speaking persons with limited English proficiency covered under Medi-Cal. We used linear regression with county fixed effects to control for ongoing trends and other influences. When county mental health plans contracted with community-based organizations, those implementing language assistance programming increased penetration rates of Spanish-language mental health services under Medi-Cal more than other plans (0.28 percentage points, a 25% increase on average; P < .05). However, the increase was insufficient to significantly reduce language-related disparities. Mental health treatment programs operated by community-based organizations may have moderately improved access after implementing required language assistance programming, but the programming did not reduce entrenched disparities in the accessibility of mental health services.

  13. Spanish-Language Community-Based Mental Health Treatment Programs, Policy-Required Language-Assistance Programming, and Mental Health Treatment Access Among Spanish-Speaking Clients

    PubMed Central

    McClellan, Sean R.

    2013-01-01

    Objectives. We investigated the extent to which implementing language assistance programming through contracting with community-based organizations improved the accessibility of mental health care under Medi-Cal (California’s Medicaid program) for Spanish-speaking persons with limited English proficiency, and whether it reduced language-based treatment access disparities. Methods. Using a time series nonequivalent control group design, we studied county-level penetration of language assistance programming over 10 years (1997–2006) for Spanish-speaking persons with limited English proficiency covered under Medi-Cal. We used linear regression with county fixed effects to control for ongoing trends and other influences. Results. When county mental health plans contracted with community-based organizations, those implementing language assistance programming increased penetration rates of Spanish-language mental health services under Medi-Cal more than other plans (0.28 percentage points, a 25% increase on average; P < .05). However, the increase was insufficient to significantly reduce language-related disparities. Conclusions. Mental health treatment programs operated by community-based organizations may have moderately improved access after implementing required language assistance programming, but the programming did not reduce entrenched disparities in the accessibility of mental health services. PMID:23865663

  14. Timetabling an Academic Department with Linear Programming.

    ERIC Educational Resources Information Center

    Bezeau, Lawrence M.

    This paper describes an approach to faculty timetabling and course scheduling that uses computerized linear programming. After reviewing the literature on linear programming, the paper discusses the process whereby a timetable was created for a department at the University of New Brunswick. Faculty were surveyed with respect to course offerings…

  15. A Comparison of Traditional Worksheet and Linear Programming Methods for Teaching Manure Application Planning.

    ERIC Educational Resources Information Center

    Schmitt, M. A.; And Others

    1994-01-01

    Compares traditional manure application planning techniques calculated to meet agronomic nutrient needs on a field-by-field basis with plans developed using computer-assisted linear programming optimization methods. Linear programming provided the most economical and environmentally sound manure application strategy. (Contains 15 references.) (MDH)

  16. Impact of a Web-based worksite health promotion program on absenteeism.

    PubMed

    Niessen, Maurice A J; Kraaijenhagen, Roderik A; Dijkgraaf, Marcel G W; Van Pelt, Danielle; Van Kalken, Coen K; Peek, Niels

    2012-04-01

    To evaluate the effect of participation in a comprehensive, Web-based worksite health promotion program on absenteeism. The study population consisted of Dutch workers employed at a large financial services company. Linear regression was used to assess the impact of program attendance on the difference between baseline and follow-up absenteeism rates, controlling for gender, age, job level, years of employment, and noncompletion of the program. Data from 20,797 individuals were analyzed; 3826 individuals enrolled in the program during the study period. A 20.3% reduction in absenteeism was shown among program attendees compared with nonparticipants during a median follow-up period of 23.3 months. Participation in the worksite health promotion program led to an immediate reduction in absenteeism. Improved psychological well-being, increased exercise, and weight reduction are possible pathways toward this reduction.

  17. A computer program for the simulation of folds of different sizes under the influence of gravity

    NASA Astrophysics Data System (ADS)

    Vacas Peña, José M.; Martínez Catalán, José R.

    2004-02-01

    Folding&g is a computer program, based on the finite element method, developed to simulate the process of natural folding from small to large scales in two dimensions. Written in Pascal code and compiled with Borland Delphi 3.0, the program has a friendly interactive user interface and can be used for research as well as educational purposes. Four main menu options allow the user to import or to build and to save a model data file, select the type of graphic output, introduce and modify several physical parameters and enter the calculation routines. The program employs isoparametric, initially rectangular elements with eight nodes, which can sustain large deformations. The mathematical procedure is based on the elasticity equations, but has been modified to simulate a viscous rheology, either linear or of power-law type. The parameters to be introduced include either the linear viscosity, or, when the viscosity is non-linear, the material constant, activation energy, temperature and power of the differential stress. All the parameters can be set by rows, which simulate layers. A toggle permits gravity to be introduced into the calculations. In this case, the density of the different rows must be specified, and the sizes of the finite elements and of the whole model become meaningful. Viscosity values can also be assigned to blocks of several rows and columns, which permits the modelling of heterogeneities such as rectangular areas of high strength, which can be used to simulate shearing components interfering with the buckling process. The program is applied to several cases of folding, including a single competent bed and multilayers, and its results compared with analytical and experimental results. The influence of gravity is illustrated by the modelling of diapiric structures and of a large recumbent fold.

  18. New techniques for the quantification and modeling of remotely sensed alteration and linear features in mineral resource assessment studies

    USGS Publications Warehouse

    Trautwein, C.M.; Rowan, L.C.

    1987-01-01

    Linear structural features and hydrothermally altered rocks that were interpreted from Landsat data have been used by the U.S. Geological Survey (USGS) in regional mineral resource appraisals for more than a decade. In the past, linear features and alterations have been incorporated into models for assessing mineral resources potential by manually overlaying these and other data sets. Recently, USGS research into computer-based geographic information systems (GIS) for mineral resources assessment programs has produced several new techniques for data analysis, quantification, and integration to meet assessment objectives.

  19. Model checking for linear temporal logic: An efficient implementation

    NASA Technical Reports Server (NTRS)

    Sherman, Rivi; Pnueli, Amir

    1990-01-01

    This report provides evidence to support the claim that model checking for linear temporal logic (LTL) is practically efficient. Two implementations of a linear temporal logic model checker are described. One is based on transforming the model checking problem into a satisfiability problem; the other checks an LTL formula for a finite model by computing the cross-product of the finite state transition graph of the program with a structure containing all possible models for the property. An experiment was done with a set of mutual exclusion algorithms, testing safety and liveness under fairness for these algorithms.

  20. A Flight Control System for Small Unmanned Aerial Vehicle

    NASA Astrophysics Data System (ADS)

    Tunik, A. A.; Nadsadnaya, O. I.

    2018-03-01

    The program adaptation of the controller for the flight control system (FCS) of an unmanned aerial vehicle (UAV) is considered. Linearized flight dynamic models depend mainly on the true airspeed of the UAV, which is measured by the onboard air data system. This enables its use for program adaptation of the FCS over the full range of altitudes and velocities that defines the flight operating range. An FCS with program adaptation based on static feedback (SF) is selected. The SF parameters for every sub-range of the true airspeed are determined using the linear matrix inequality approach for discrete systems to synthesize a suboptimal robust H∞ controller. The use of Lagrange interpolation between true airspeed sub-ranges provides continuous adaptation. The efficiency of the proposed approach is shown with an example of a heading stabilization system.

  1. Consideration in selecting crops for the human-rated life support system: a Linear Programming model

    NASA Technical Reports Server (NTRS)

    Wheeler, E. F.; Kossowski, J.; Goto, E.; Langhans, R. W.; White, G.; Albright, L. D.; Wilcox, D.; Henninger, D. L. (Principal Investigator)

    1996-01-01

    A Linear Programming model has been constructed which aids in selecting appropriate crops for CELSS (Controlled Environment Life Support System) food production. A team of Controlled Environment Agriculture (CEA) faculty, staff, graduate students and invited experts representing more than a dozen disciplines, provided a wide range of expertise in developing the model and the crop production program. The model incorporates nutritional content and controlled-environment based production yields of carefully chosen crops into a framework where a crop mix can be constructed to suit the astronauts' needs. The crew's nutritional requirements can be adequately satisfied with only a few crops (assuming vitamin mineral supplements are provided) but this will not be satisfactory from a culinary standpoint. This model is flexible enough that taste and variety driven food choices can be built into the model.
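
    The flavor of such a crop-selection model can be conveyed by a toy LP (the crop names, per-area yields, and nutrient targets below are invented and far simpler than the CELSS model): choose growing areas that meet nutritional targets with the least total area.

```python
# Toy crop-selection LP: minimise total growing area subject to meeting
# two nutrient requirements.  All numbers are illustrative only.
from scipy.optimize import linprog

crops = ["wheat", "soybean", "potato"]
# nutrients supplied per m^2 of each crop: [energy, protein]
supply = [[3.0, 0.3],
          [2.0, 0.9],
          [4.0, 0.2]]
need = [2800.0, 80.0]                      # hypothetical daily requirement

# minimise sum(area) s.t. sum_j area_j * supply[j][k] >= need[k]
A_ub = [[-supply[j][k] for j in range(len(crops))] for k in range(len(need))]
b_ub = [-n for n in need]
res = linprog([1.0] * len(crops), A_ub=A_ub, b_ub=b_ub,
              bounds=[(0.0, None)] * len(crops))
for name, area in zip(crops, res.x):
    print(f"{name}: {area:7.1f} m^2")
```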

  2. Consideration in selecting crops for the human-rated life support system: a linear programming model

    NASA Astrophysics Data System (ADS)

    Wheeler, E. F.; Kossowski, J.; Goto, E.; Langhans, R. W.; White, G.; Albright, L. D.; Wilcox, D.

    A Linear Programming model has been constructed which aids in selecting appropriate crops for CELSS (Controlled Environment Life Support System) food production. A team of Controlled Environment Agriculture (CEA) faculty, staff, graduate students and invited experts representing more than a dozen disciplines, provided a wide range of expertise in developing the model and the crop production program. The model incorporates nutritional content and controlled-environment based production yields of carefully chosen crops into a framework where a crop mix can be constructed to suit the astronauts' needs. The crew's nutritional requirements can be adequately satisfied with only a few crops (assuming vitamin mineral supplements are provided) but this will not be satisfactory from a culinary standpoint. This model is flexible enough that taste and variety driven food choices can be built into the model.

  3. Ada Linear-Algebra Program

    NASA Technical Reports Server (NTRS)

    Klumpp, A. R.; Lawson, C. L.

    1988-01-01

    Routines are provided for common scalar, vector, matrix, and quaternion operations. The computer program extends the Ada programming language to include linear-algebra capabilities similar to those of the HAL/S programming language. Designed for such avionics applications as software for the Space Station.

  4. High correlations between MRI brain volume measurements based on NeuroQuant® and FreeSurfer.

    PubMed

    Ross, David E; Ochs, Alfred L; Tate, David F; Tokac, Umit; Seabaugh, John; Abildskov, Tracy J; Bigler, Erin D

    2018-05-30

    NeuroQuant ® (NQ) and FreeSurfer (FS) are commonly used computer-automated programs for measuring MRI brain volume. Previously they were reported to have high intermethod reliabilities but often large intermethod effect size differences. We hypothesized that linear transformations could be used to reduce the large effect sizes. This study was an extension of our previously reported study. We performed NQ and FS brain volume measurements on 60 subjects (including normal controls, patients with traumatic brain injury, and patients with Alzheimer's disease). We used two statistical approaches in parallel to develop methods for transforming FS volumes into NQ volumes: traditional linear regression, and Bayesian linear regression. For both methods, we used regression analyses to develop linear transformations of the FS volumes to make them more similar to the NQ volumes. The FS-to-NQ transformations based on traditional linear regression resulted in effect sizes which were small to moderate. The transformations based on Bayesian linear regression resulted in all effect sizes being trivially small. To our knowledge, this is the first report describing a method for transforming FS to NQ data so as to achieve high reliability and low effect size differences. Machine learning methods like Bayesian regression may be more useful than traditional methods. Copyright © 2018 Elsevier B.V. All rights reserved.
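
    The transformation idea can be mimicked with standard tools (a hedged sketch on simulated volumes; no NeuroQuant or FreeSurfer data and none of the study's actual coefficients are used): fit a linear map from FS-style measurements to NQ-style measurements with ordinary and Bayesian linear regression and compare the fitted maps.

```python
# Sketch: ordinary vs. Bayesian linear regression of one volume measure on
# another, on simulated data standing in for FS and NQ hippocampal volumes.
import numpy as np
from sklearn.linear_model import LinearRegression, BayesianRidge

rng = np.random.default_rng(1)
fs = rng.normal(4.0, 0.5, size=(60, 1))                    # "FreeSurfer", mL
nq = 0.9 * fs[:, 0] + 0.3 + rng.normal(0.0, 0.05, 60)      # "NeuroQuant", mL

ols = LinearRegression().fit(fs, nq)
bayes = BayesianRidge().fit(fs, nq)

print(f"OLS:      nq ~ {ols.coef_[0]:.2f} * fs + {ols.intercept_:.2f}")
print(f"Bayesian: nq ~ {bayes.coef_[0]:.2f} * fs + {bayes.intercept_:.2f}")
```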

  5. Solving large-scale fixed cost integer linear programming models for grid-based location problems with heuristic techniques

    NASA Astrophysics Data System (ADS)

    Noor-E-Alam, Md.; Doucette, John

    2015-08-01

    Grid-based location problems (GBLPs) can be used to solve location problems in business, engineering, resource exploitation, and even in the field of medical sciences. To solve these decision problems, an integer linear programming (ILP) model is designed and developed to provide the optimal solution for GBLPs considering fixed cost criteria. Preliminary results show that the ILP model is efficient in solving small to moderate-sized problems. However, the ILP model becomes intractable for large-scale instances. Therefore, a decomposition heuristic is proposed to solve these large-scale GBLPs, which demonstrates a significant reduction in solution runtimes. To benchmark the proposed heuristic, results are compared with the exact solution via ILP. The experimental results show that the proposed method significantly outperforms the exact method in runtime with minimal (and in most cases, no) loss of optimality.
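
    A stripped-down fixed-cost location ILP of the kind described can be written in a few lines with a modelling library; the sketch below uses PuLP (our choice, not the paper's tool) with invented sites, customers, and costs, and omits the grid structure and the decomposition heuristic.

```python
# Sketch: uncapacitated fixed-cost facility location as an ILP in PuLP.
from pulp import LpProblem, LpVariable, LpMinimize, lpSum, LpBinary

sites = ["s1", "s2", "s3"]                 # candidate locations (grid cells)
customers = ["c1", "c2", "c3", "c4"]
fixed = {"s1": 100, "s2": 80, "s3": 120}   # fixed opening costs
ship = {("s1", "c1"): 4, ("s1", "c2"): 7, ("s1", "c3"): 3, ("s1", "c4"): 6,
        ("s2", "c1"): 6, ("s2", "c2"): 3, ("s2", "c3"): 5, ("s2", "c4"): 4,
        ("s3", "c1"): 5, ("s3", "c2"): 6, ("s3", "c3"): 4, ("s3", "c4"): 3}

prob = LpProblem("grid_location", LpMinimize)
opened = {i: LpVariable(f"open_{i}", cat=LpBinary) for i in sites}
serves = {(i, j): LpVariable(f"x_{i}_{j}", cat=LpBinary)
          for i in sites for j in customers}

prob += (lpSum(fixed[i] * opened[i] for i in sites)
         + lpSum(ship[i, j] * serves[i, j] for i in sites for j in customers))
for j in customers:                        # each customer is served exactly once
    prob += lpSum(serves[i, j] for i in sites) == 1
for i in sites:
    for j in customers:                    # only opened sites may serve
        prob += serves[i, j] <= opened[i]

prob.solve()
print("open sites:", [i for i in sites if opened[i].value() == 1])
```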

  6. A linear programming manual

    NASA Technical Reports Server (NTRS)

    Tuey, R. C.

    1972-01-01

    Computer solutions of linear programming problems are outlined. Information covers vector spaces, convex sets, and matrix algebra elements for solving simultaneous linear equations. Dual problems, reduced cost analysis, ranges, and error analysis are illustrated.
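
    A minimal worked example in the spirit of the manual (the numbers are arbitrary): a two-variable linear program solved with SciPy, with the dual values printed as well, since recent SciPy releases expose them through the HiGHS backend.

```python
# Sketch: maximise 3x + 2y subject to x + y <= 4, x + 3y <= 6, x, y >= 0.
# SciPy minimises, so the objective is negated.
from scipy.optimize import linprog

res = linprog(c=[-3.0, -2.0],
              A_ub=[[1.0, 1.0],
                    [1.0, 3.0]],
              b_ub=[4.0, 6.0],
              bounds=[(0.0, None), (0.0, None)])
print("x, y  =", res.x)                  # expected optimum at x = 4, y = 0
print("value =", -res.fun)               # expected objective value 12
# With the default HiGHS method, dual values (shadow prices) of the
# inequality constraints are reported as marginals.
print("duals =", res.ineqlin.marginals)
```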

  7. A holistic approach to movement education in sport and fitness: a systems based model.

    PubMed

    Polsgrove, Myles Jay

    2012-01-01

    The typical model used by movement professionals to enhance performance relies on the notion that a linear increase in load results in steady and progressive gains, whereby, the greater the effort, the greater the gains in performance. Traditional approaches to movement progression typically rely on the proper sequencing of extrinsically based activities to facilitate the individual in reaching performance objectives. However, physical rehabilitation or physical performance rarely progresses in such a linear fashion; instead they tend to evolve non-linearly and rather unpredictably. A dynamic system can be described as an entity that self-organizes into increasingly complex forms. Applying this view to the human body, practitioners could facilitate non-linear performance gains through a systems based programming approach. Utilizing a dynamic systems view, the Holistic Approach to Movement Education (HADME) is a model designed to optimize performance by accounting for non-linear and self-organizing traits associated with human movement. In this model, gains in performance occur through advancing individual perspectives and through optimizing sub-system performance. This inward shift of the focus of performance creates a sharper self-awareness and may lead to more optimal movements. Copyright © 2011 Elsevier Ltd. All rights reserved.

  8. A robust approach to measuring the detective quantum efficiency of radiographic detectors in a clinical setting

    NASA Astrophysics Data System (ADS)

    McDonald, Michael C.; Kim, H. K.; Henry, J. R.; Cunningham, I. A.

    2012-03-01

    The detective quantum efficiency (DQE) is widely accepted as a primary measure of x-ray detector performance in the scientific community. A standard method for measuring the DQE, based on IEC 62220-1, requires the system to have a linear response, meaning that the detector output signals are proportional to the incident x-ray exposure. However, many systems have a non-linear response due to characteristics of the detector, or post-processing of the detector signals, that cannot be disabled and may involve unknown algorithms considered proprietary by the manufacturer. For these reasons, the DQE has not been considered a practical candidate for routine quality assurance testing in a clinical setting. In this article we describe a method that can be used to measure the DQE of both linear and non-linear systems that employ only linear image processing algorithms. The method was validated on a cesium iodide based flat-panel system that simultaneously stores a raw (linear) and a processed (non-linear) image for each exposure. It was found that the resulting DQE was equivalent to a conventional standards-compliant DQE within measurement precision, and that the gray-scale inversion and linear edge enhancement did not affect the DQE result. While not IEC 62220-1 compliant, the method may be adequate for QA programs.

  9. Topology optimization of induction heating model using sequential linear programming based on move limit with adaptive relaxation

    NASA Astrophysics Data System (ADS)

    Masuda, Hiroshi; Kanda, Yutaro; Okamoto, Yoshifumi; Hirono, Kazuki; Hoshino, Reona; Wakao, Shinji; Tsuburaya, Tomonori

    2017-12-01

    It is very important, from the point of view of saving energy, to design electrical machinery with high efficiency. Therefore, topology optimization (TO) is occasionally used as a design method for improving the performance of electrical machinery under reasonable constraints. Because TO can achieve a design with a much higher degree of structural freedom, it has the possibility of deriving novel structures that would be quite different from conventional ones. In this paper, topology optimization using sequential linear programming with a move limit based on adaptive relaxation is applied to two models. A magnetic shielding problem, in which there are many local minima, is first employed as a benchmark for performance evaluation among several mathematical programming methods. Second, an induction heating model is defined in a 2-D axisymmetric field. In this model, the magnetic energy stored in the magnetic body is maximized under a constraint on the volume of the magnetic body. Furthermore, the influence of the location of the design domain on the solutions is investigated.
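
    The optimization scheme named above, sequential linear programming with a move limit adjusted adaptively, can be sketched on a toy smooth problem (the objective, box, and move-limit schedule below are invented and have nothing to do with the magnetics models of the paper):

```python
# Sketch: sequential linear programming (SLP) with an adaptively adjusted
# move limit, minimising a toy quadratic on the box [0, 1]^2.
import numpy as np
from scipy.optimize import linprog

def f(x):
    return (x[0] - 0.3) ** 2 + (x[1] - 0.7) ** 2

def grad(x):
    return np.array([2.0 * (x[0] - 0.3), 2.0 * (x[1] - 0.7)])

x = np.array([1.0, 0.0])
move = 0.2                                   # current move limit
for _ in range(30):
    g = grad(x)
    lo = np.maximum(x - move, 0.0)           # move limit intersected with box
    hi = np.minimum(x + move, 1.0)
    step = linprog(c=g, bounds=list(zip(lo, hi))).x   # linearised subproblem
    if f(step) < f(x):
        x = step                             # accept the step
    else:
        move *= 0.5                          # shrink the move limit on rejection
print("solution:", np.round(x, 3), " objective:", round(f(x), 6))
```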

  10. Evaluating diffraction-based overlay

    NASA Astrophysics Data System (ADS)

    Li, Jie; Tan, Asher; Jung, JinWoo; Goelzer, Gary; Smith, Nigel; Hu, Jiangtao; Ham, Boo-Hyun; Kwak, Min-Cheol; Kim, Cheol-Hong; Nam, Suk-Woo

    2012-03-01

    We evaluate diffraction-based overlay (DBO) metrology using two test wafers. The test wafers have different film stacks designed to test the quality of DBO data under a range of film conditions. We present DBO results using the traditional empirical approach (eDBO). eDBO relies on the linear response of the reflectance with respect to the overlay displacement within a small range. It requires specially designed targets that consist of multiple pads with programmed shifts. It offers the convenience of quick recipe setup since there is no need to establish a model. We measure five DBO targets designed with different pitches and programmed shifts. The correlations among the five eDBO targets and the correlation of eDBO with image-based overlay are excellent. The targets with 800 nm and 600 nm pitches have better dynamic precision than targets with 400 nm pitch, which agrees with simulated results on signal-to-noise ratio. A 3σ of less than 0.1 nm is achieved for both wafers using the best configured targets. We further investigate the linearity assumption of the eDBO algorithm. Simulation results indicate that as the pitch of DBO targets gets smaller, the nonlinearity error, i.e., the error in the overlay measurement results caused by deviation from the ideal linear response, becomes larger. We propose a nonlinearity correction (NLC) that includes higher order terms in the optical response. The new algorithm with NLC improves measurement consistency for DBO targets of the same pitch but different programmed shifts, due to improved accuracy. The results from targets with different pitches, however, are improved only marginally, indicating the presence of other error sources.

  11. Developing a HER3 Vaccine to Prevent Resistance to Endocrine Therapy

    DTIC Science & Technology

    2013-10-01

    Excerpts recoverable from the report: the vaccine construct was linearized by digestion with Pme I and recombined into a linearized (E1-, E2b-, E3-) serotype 5 pAd construct using the BJ 5183 bacterial recombination-based system. Clinical activity was assessed according to the Response Evaluation Criteria in Solid Tumors (RECIST 1.0) using computed tomography.

  12. An improved technique for determining reflection from semi-infinite atmospheres with linearly anisotropic phase functions. [radiative transfer

    NASA Technical Reports Server (NTRS)

    Fricke, C. L.

    1975-01-01

    A solution to the problem of reflection from a semi-infinite atmosphere is presented, based upon Chandrasekhar's H-function method for linearly anisotropic phase functions. A modification to the Gauss quadrature formula was developed which gives about the same accuracy with 10 points as the conventional Gauss quadrature does with 100 points. A computer program implementing this solution is described, and results are presented for several illustrative cases.
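
    For the isotropic-scattering special case, the H-function iteration with Gauss quadrature looks roughly as follows (a hedged sketch: textbook isotropic scattering with ordinary Gauss-Legendre nodes, not the linearly anisotropic phase function or the modified quadrature of the report):

```python
# Sketch: fixed-point iteration of Chandrasekhar's H-equation for isotropic
# scattering, H(u) = 1 + u*H(u)*(w0/2)*Integral_0^1 H(t)/(u + t) dt,
# using Gauss-Legendre quadrature mapped onto [0, 1].
import numpy as np

w0 = 0.9                                     # single-scattering albedo
n = 16
x, w = np.polynomial.legendre.leggauss(n)    # nodes and weights on [-1, 1]
t = 0.5 * (x + 1.0)                          # map nodes to [0, 1]
wt = 0.5 * w

H = np.ones(n)
for _ in range(200):                         # simple fixed-point iteration
    integral = np.array([np.sum(wt * H / (ti + t)) for ti in t])
    H = 1.0 / (1.0 - 0.5 * w0 * t * integral)
print(np.round(H, 4))
```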

  13. A Component-Centered Meta-Analysis of Family-Based Prevention Programs for Adolescent Substance Use

    PubMed Central

    Roseth, Cary J.; Fosco, Gregory M.; Lee, You-kyung; Chen, I-Chien

    2016-01-01

    Although research has documented the positive effects of family-based prevention programs, the field lacks specific information regarding why these programs are effective. The current study summarized the effects of family-based programs on adolescent substance use using a component-based approach to meta-analysis in which we decomposed programs into a set of key topics or components that were specifically addressed by program curricula (e.g., parental monitoring/behavior management, problem solving, positive family relations, etc.). Components were coded according to the amount of time spent on program services that targeted youth, parents, and the whole family; we also coded effect sizes across studies for each substance-related outcome. Given the nested nature of the data, we used hierarchical linear modeling to link program components (Level 2) with effect sizes (Level 1). The overall effect size across programs was .31, which did not differ by type of substance. Youth-focused components designed to encourage more positive family relationships and a positive orientation toward the future emerged as key factors predicting larger than average effect sizes. Our results suggest that, within the universe of family-based prevention, where components such as parental monitoring/behavior management are almost universal, adding or expanding certain youth-focused components may be able to enhance program efficacy. PMID:27064553

  14. Normal loads program for aerodynamic lifting surface theory. [evaluation of spanwise and chordwise loading distributions

    NASA Technical Reports Server (NTRS)

    Medan, R. T.; Ray, K. S.

    1974-01-01

    A description of, and user's manual for, a U.S.A. FORTRAN IV computer program are presented; the program evaluates spanwise and chordwise loading distributions, lift coefficient, pitching-moment coefficient, and other stability derivatives for thin wings in linearized, steady, subsonic flow. The program is based on a kernel-function lifting-surface theory and is applicable to a large class of planforms, including asymmetrical ones and ones with mixed straight and curved edges.

  15. The design and analysis of simple low speed flap systems with the aid of linearized theory computer programs

    NASA Technical Reports Server (NTRS)

    Carlson, Harry W.

    1985-01-01

    The purpose here is to show how two linearized theory computer programs in combination may be used for the design of low speed wing flap systems capable of high levels of aerodynamic efficiency. A fundamental premise of the study is that high levels of aerodynamic performance for flap systems can be achieved only if the flow about the wing remains predominantly attached. Based on this premise, a wing design program is used to provide idealized attached flow camber surfaces from which candidate flap systems may be derived, and, in a following step, a wing evaluation program is used to provide estimates of the aerodynamic performance of the candidate systems. Design strategies and techniques that may be employed are illustrated through a series of examples. Applicability of the numerical methods to the analysis of a representative flap system (although not a system designed by the process described here) is demonstrated in a comparison with experimental data.

  16. The Capability Portfolio Analysis Tool (CPAT): A Mixed Integer Linear Programming Formulation for Fleet Modernization Analysis (Version 2.0.2).

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Waddell, Lucas; Muldoon, Frank; Henry, Stephen Michael

    In order to effectively plan the management and modernization of their large and diverse fleets of vehicles, Program Executive Office Ground Combat Systems (PEO GCS) and Program Executive Office Combat Support and Combat Service Support (PEO CS&CSS) commissioned the development of a large-scale portfolio planning optimization tool. This software, the Capability Portfolio Analysis Tool (CPAT), creates a detailed schedule that optimally prioritizes the modernization or replacement of vehicles within the fleet - respecting numerous business rules associated with fleet structure, budgets, industrial base, research and testing, etc., while maximizing overall fleet performance through time. This paper contains a thorough documentation of the terminology, parameters, variables, and constraints that comprise the fleet management mixed integer linear programming (MILP) mathematical formulation. This paper, which is an update to the original CPAT formulation document published in 2015 (SAND2015-3487), covers the formulation of important new CPAT features.

  17. A computer program to obtain time-correlated gust loads for nonlinear aircraft using the matched-filter-based method

    NASA Technical Reports Server (NTRS)

    Scott, Robert C.; Pototzky, Anthony S.; Perry, Boyd, III

    1994-01-01

    NASA Langley Research Center has, for several years, conducted research in the area of time-correlated gust loads for linear and nonlinear aircraft. The results of this work led NASA to recommend that the Matched-Filter-Based One-Dimensional Search Method be used for gust load analyses of nonlinear aircraft. This manual describes this method, describes a FORTRAN code which performs this method, and presents example calculations for a sample nonlinear aircraft model. The name of the code is MFD1DS (Matched-Filter-Based One-Dimensional Search). The program source code, the example aircraft equations of motion, a sample input file, and a sample program output are all listed in the appendices.

  18. LINEAR - DERIVATION AND DEFINITION OF A LINEAR AIRCRAFT MODEL

    NASA Technical Reports Server (NTRS)

    Duke, E. L.

    1994-01-01

    The Derivation and Definition of a Linear Model program, LINEAR, provides the user with a powerful and flexible tool for the linearization of aircraft aerodynamic models. LINEAR was developed to provide a standard, documented, and verified tool to derive linear models for aircraft stability analysis and control law design. Linear system models define the aircraft system in the neighborhood of an analysis point and are determined by the linearization of the nonlinear equations defining vehicle dynamics and sensors. LINEAR numerically determines a linear system model using nonlinear equations of motion and a user-supplied linear or nonlinear aerodynamic model. The nonlinear equations of motion used are six-degree-of-freedom equations with stationary atmosphere and flat, nonrotating earth assumptions. LINEAR is capable of both extracting linearized engine effects, such as net thrust, torque, and gyroscopic effects, and including these effects in the linear system model. The point at which this linear model is defined is determined either by completely specifying the state and control variables, or by specifying an analysis point on a trajectory and directing the program to determine the control variables and the remaining state variables. The system model determined by LINEAR consists of matrices for both the state and observation equations. The program has been designed to provide easy selection of the state, control, and observation variables to be used in a particular model. Thus, the order of the system model is completely under user control. Further, the program provides the flexibility of allowing alternate formulations of both the state and observation equations. Data describing the aircraft and the test case are input to the program through a terminal or formatted data files. All data can be modified interactively from case to case. The aerodynamic model can be defined in two ways: as a set of nondimensional stability and control derivatives for the flight point of interest, or as a full nonlinear aerodynamic model as used in simulations. LINEAR is written in FORTRAN and has been implemented on a DEC VAX computer operating under VMS with a virtual memory requirement of approximately 296K of 8-bit bytes. Both interactive and batch versions are included. LINEAR was developed in 1988.

  19. User's manual: Subsonic/supersonic advanced panel pilot code

    NASA Technical Reports Server (NTRS)

    Moran, J.; Tinoco, E. N.; Johnson, F. T.

    1978-01-01

    Sufficient instructions for running the subsonic/supersonic advanced panel pilot code were developed. This software was developed as a vehicle for numerical experimentation, and it should not be construed to represent a finished production program. The pilot code is based on a higher-order panel method using linearly varying source and quadratically varying doublet distributions for computing both linearized supersonic and subsonic flow over arbitrary wings and bodies. This user's manual contains complete input and output descriptions. A brief description of the method is given, as well as practical instructions for proper configuration modeling. Computed results are also included to demonstrate some of the capabilities of the pilot code. The computer program is written in FORTRAN IV for the SCOPE 3.4.4 operating system of the Ames CDC 7600 computer. The program uses an overlay structure and thirteen disk files, and it requires approximately 132000 (octal) central memory words.

  20. Optimum use of air tankers in initial attack: selection, basing, and transfer rules

    Treesearch

    Francis E. Greulich; William G. O'Regan

    1982-01-01

    Fire managers face two interrelated problems in deciding the most efficient use of air tankers: where best to base them, and how best to reallocate them each day in anticipation of fire occurrence. A computerized model based on a mixed integer linear program can help in assigning air tankers throughout the fire season. The model was tested using information from...

  1. A circuit model for nonlinear simulation of radio-frequency filters using bulk acoustic wave resonators.

    PubMed

    Ueda, Masanori; Iwaki, Masafumi; Nishihara, Tokihiro; Satoh, Yoshio; Hashimoto, Ken-ya

    2008-04-01

    This paper describes a circuit model for the analysis of nonlinearity in filters based on radio-frequency (RF) bulk acoustic wave (BAW) resonators. The nonlinear output is expressed by a current source connected in parallel with the linear resonator. The amplitude of the nonlinear current source is programmed to be proportional to the product of the linear currents flowing in the resonator. Thus, the nonlinear analysis is performed by common linear analysis, even for complex device structures. The analysis is applied to a ladder-type RF BAW filter, and the frequency dependence of the nonlinear output is discussed. Furthermore, this analysis is verified through comparison with experiments.

  2. FSILP: fuzzy-stochastic-interval linear programming for supporting municipal solid waste management.

    PubMed

    Li, Pu; Chen, Bing

    2011-04-01

    Although many studies on municipal solid waste (MSW) management have been conducted under uncertain conditions in which fuzzy, stochastic, and interval information coexist, solving conventional linear programming problems that integrate the fuzzy method with the other two has been inefficient. In this study, a fuzzy-stochastic-interval linear programming (FSILP) method is developed by integrating Nguyen's method with conventional linear programming to support municipal solid waste management. Nguyen's method is used to convert the fuzzy and fuzzy-stochastic linear programming problems into conventional linear programs by measuring the attainment values of fuzzy numbers and/or fuzzy random variables, as well as the superiority and inferiority between triangular fuzzy numbers/triangular fuzzy-stochastic variables. The developed method can effectively tackle uncertainties described in terms of probability density functions, fuzzy membership functions, and discrete intervals. Moreover, the method improves upon the conventional interval fuzzy programming and two-stage stochastic programming approaches, with advantages that are achieved with fewer constraints and significantly reduced computation time. The developed model was applied to a case study of a municipal solid waste management system in a city. The results indicated that reasonable solutions had been generated. The solution can help quantify the relationship between changes in the system cost and the uncertainties, which could support further analysis of trade-offs between the waste management cost and the system failure risk. Copyright © 2010 Elsevier Ltd. All rights reserved.

  3. Optimization of land use of agricultural farms in Sumedang regency by using linear programming models

    NASA Astrophysics Data System (ADS)

    Zenis, F. M.; Supian, S.; Lesmana, E.

    2018-03-01

    Land is one of the most important assets for farmers in Sumedang Regency; therefore, agricultural land should be used optimally. This study aims to obtain the optimal land-use composition that yields maximum income. The optimization method used in this research is a linear programming model. Based on the results of the analysis, the optimal composition of land use is a rice area of 135.314 hectares, a corn area of 11.798 hectares, a soy area of 2.290 hectares, and a peanut area of 2.818 hectares, with a farmers' income of IDR 2.682.020.000.000,-/year. The results of this analysis can be used as a consideration in decision making about cropping patterns by farmers.
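
    A land-allocation problem of this kind reduces to a small linear program: maximize total income subject to land (and possibly labour or other resource) limits. The sketch below solves such a model with SciPy's linprog; the crop names are kept, but the profit coefficients and resource limits are invented for illustration and are not the coefficients used in the study.

```python
# Hedged sketch of a land-allocation LP; all coefficients are hypothetical.
import numpy as np
from scipy.optimize import linprog

crops = ["rice", "corn", "soy", "peanut"]
profit = np.array([30.0, 18.0, 15.0, 20.0])   # income per hectare (arbitrary units)

A_ub = np.array([
    [1.0, 1.0, 1.0, 1.0],    # land used per hectare planted
    [6.0, 4.0, 3.0, 5.0],    # labour required per hectare
])
b_ub = np.array([150.0, 700.0])               # land and labour limits (made up)

# linprog minimizes, so negate the profit vector to maximize income
res = linprog(-profit, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 4, method="highs")

for crop, ha in zip(crops, res.x):
    print(f"{crop}: {ha:.1f} ha")
print(f"maximum income: {-res.fun:.1f}")
```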

  4. A Hybrid Method to Estimate Specific Differential Phase and Rainfall With Linear Programming and Physics Constraints

    DOE PAGES

    Huang, Hao; Zhang, Guifu; Zhao, Kun; ...

    2016-10-20

    A hybrid method of combining linear programming (LP) and physical constraints is developed to estimate the specific differential phase (KDP) and to improve rain estimation. Moreover, the hybrid KDP estimator and the existing estimators based on LP, least-squares fitting, and a self-consistent relation of polarimetric radar variables are evaluated and compared using simulated data. Our simulation results indicate the new estimator's superiority, particularly in regions where the backscattering phase (δhv) dominates. Further, a quantitative comparison between auto-weather-station rain-gauge observations and KDP-based radar rain estimates for a Meiyu event also demonstrates the superiority of the hybrid KDP estimator over existing methods.

  5. Variance approach for multi-objective linear programming with fuzzy random of objective function coefficients

    NASA Astrophysics Data System (ADS)

    Indarsih, Indrati, Ch. Rini

    2016-02-01

    In this paper, we define the variance of fuzzy random variables through alpha levels. We present a theorem showing that the variance of a fuzzy random variable is a fuzzy number. We consider a multi-objective linear programming (MOLP) problem with fuzzy random objective function coefficients and solve it by a variance approach. The approach transforms the MOLP with fuzzy random objective function coefficients into an MOLP with fuzzy objective function coefficients. By the weighted method, we obtain a linear programming problem with fuzzy coefficients, which we solve by the simplex method for fuzzy linear programming.

  6. Genetic programming over context-free languages with linear constraints for the knapsack problem: first results.

    PubMed

    Bruhn, Peter; Geyer-Schulz, Andreas

    2002-01-01

    In this paper, we introduce genetic programming over context-free languages with linear constraints for combinatorial optimization, apply this method to several variants of the multidimensional knapsack problem, and discuss its performance relative to Michalewicz's genetic algorithm with penalty functions. With respect to Michalewicz's approach, we demonstrate that genetic programming over context-free languages with linear constraints improves convergence. A final result is that genetic programming over context-free languages with linear constraints is ideally suited to modeling complementarities between items in a knapsack problem: The more complementarities in the problem, the stronger the performance in comparison to its competitors.

  7. Single-Photon-Sensitive HgCdTe Avalanche Photodiode Detector

    NASA Technical Reports Server (NTRS)

    Huntington, Andrew

    2013-01-01

    The purpose of this program was to develop single-photon-sensitive short-wavelength infrared (SWIR) and mid-wavelength infrared (MWIR) avalanche photodiode (APD) receivers based on linear-mode HgCdTe APDs, for application by NASA in light detection and ranging (lidar) sensors. Linear-mode photon-counting APDs are desired for lidar because they have a shorter pixel dead time than Geiger APDs, and can detect sequential pulse returns from multiple objects that are closely spaced in range. Linear-mode APDs can also measure photon number, which Geiger APDs cannot, adding an extra dimension to lidar scene data for multi-photon returns. High-gain APDs with low multiplication noise are required for efficient linear-mode detection of single photons because of APD gain statistics -- a low-excess-noise APD will generate detectable current pulses from single-photon input at a much higher rate of occurrence than will a noisy APD operated at the same average gain. MWIR and LWIR electron-avalanche HgCdTe APDs have been shown to operate in linear mode at high average avalanche gain (M > 1000) without excess multiplication noise (F = 1), and are therefore very good candidates for linear-mode photon counting. However, detectors fashioned from these narrow-bandgap alloys require aggressive cooling to control thermal dark current. Wider-bandgap SWIR HgCdTe APDs were investigated in this program as a strategy to reduce detector cooling requirements.

  8. IESIP - AN IMPROVED EXPLORATORY SEARCH TECHNIQUE FOR PURE INTEGER LINEAR PROGRAMMING PROBLEMS

    NASA Technical Reports Server (NTRS)

    Fogle, F. R.

    1994-01-01

    IESIP, an Improved Exploratory Search Technique for Pure Integer Linear Programming Problems, addresses the problem of optimizing an objective function of one or more variables subject to a set of confining functions or constraints by a method called discrete optimization or integer programming. Integer programming is based on a specific form of the general linear programming problem in which all variables in the objective function and all variables in the constraints are integers. While more difficult, integer programming is required for accuracy when modeling systems with small numbers of components such as the distribution of goods, machine scheduling, and production scheduling. IESIP establishes a new methodology for solving pure integer programming problems by utilizing a modified version of the univariate exploratory move developed by Robert Hooke and T.A. Jeeves. IESIP also takes some of its technique from the greedy procedure and the idea of unit neighborhoods. A rounding scheme uses the continuous solution found by traditional methods (simplex or other suitable technique) and creates a feasible integer starting point. The Hooke and Jeeves exploratory search is modified to accommodate integers and constraints and is then employed to determine an optimal integer solution from the feasible starting solution. The user-friendly IESIP allows for rapid solution of problems up to 10 variables in size (limited by DOS allocation). Sample problems compare IESIP solutions with the traditional branch-and-bound approach. IESIP is written in Borland's TURBO Pascal for IBM PC series computers and compatibles running DOS. Source code and an executable are provided. The main memory requirement for execution is 25K. This program is available on a 5.25 inch 360K MS DOS format diskette. IESIP was developed in 1990. IBM is a trademark of International Business Machines. TURBO Pascal is registered by Borland International.
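
    The two-stage idea described above (round a continuous LP solution to a feasible integer starting point, then apply a Hooke-and-Jeeves-style exploratory search over unit integer moves) can be sketched as follows. This is not the IESIP code, which is written in TURBO Pascal; it is a small Python illustration of the general strategy on a made-up knapsack-like instance.

```python
# Hedged sketch: LP relaxation + rounding + unit-step integer exploratory search.
import itertools
import numpy as np
from scipy.optimize import linprog

c = np.array([-8.0, -11.0, -6.0])     # maximize 8x1 + 11x2 + 6x3  ->  minimize c@x
A_ub = np.array([[5.0, 7.0, 4.0]])    # one resource constraint (hypothetical)
b_ub = np.array([14.0])
bounds = [(0, 3)] * 3

def feasible(x):
    return np.all(A_ub @ x <= b_ub + 1e-9) and all(lo <= v <= hi for v, (lo, hi) in zip(x, bounds))

# 1) solve the continuous relaxation, then round down to a feasible integer start
relax = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
x = np.floor(relax.x).astype(int)

# 2) exploratory moves: accept any +/-1 step on a single variable that improves the objective
improved = True
while improved:
    improved = False
    for i, step in itertools.product(range(len(x)), (+1, -1)):
        trial = x.copy()
        trial[i] += step
        if feasible(trial) and c @ trial < c @ x:
            x, improved = trial, True

print("integer solution:", x, "objective value:", -(c @ x))
```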

  9. Statistical power analyses using G*Power 3.1: tests for correlation and regression analyses.

    PubMed

    Faul, Franz; Erdfelder, Edgar; Buchner, Axel; Lang, Albert-Georg

    2009-11-01

    G*Power is a free power analysis program for a variety of statistical tests. We present extensions and improvements of the version introduced by Faul, Erdfelder, Lang, and Buchner (2007) in the domain of correlation and regression analyses. In the new version, we have added procedures to analyze the power of tests based on (1) single-sample tetrachoric correlations, (2) comparisons of dependent correlations, (3) bivariate linear regression, (4) multiple linear regression based on the random predictor model, (5) logistic regression, and (6) Poisson regression. We describe these new features and provide a brief introduction to their scope and handling.

  10. Computed Tomography Inspection and Analysis for Additive Manufacturing Components

    NASA Technical Reports Server (NTRS)

    Beshears, Ronald D.

    2017-01-01

    Computed tomography (CT) inspection was performed on test articles additively manufactured from metallic materials. Metallic AM and machined wrought alloy test articles with programmed flaws and geometric features were inspected using a 2-megavolt linear accelerator based CT system. Performance of CT inspection on identically configured wrought and AM components and programmed flaws was assessed to determine the impact of additive manufacturing on inspectability of objects with complex geometries.

  11. Linear-Algebra Programs

    NASA Technical Reports Server (NTRS)

    Lawson, C. L.; Krogh, F. T.; Gold, S. S.; Kincaid, D. R.; Sullivan, J.; Williams, E.; Hanson, R. J.; Haskell, K.; Dongarra, J.; Moler, C. B.

    1982-01-01

    The Basic Linear Algebra Subprograms (BLAS) library is a collection of 38 FORTRAN-callable routines for performing basic operations of numerical linear algebra. The BLAS library is a portable and efficient source of basic operations for designers of programs involving linear algebraic computations. The BLAS library is supplied in portable FORTRAN and Assembler code versions for IBM 370, UNIVAC 1100, and CDC 6000 series computers.
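
    The flavour of the library is easy to show through the BLAS wrappers that ship with SciPy, which expose the same routines (AXPY, GEMM, and so on) from Python. The example below is only a modern illustration of the operations BLAS provides; it is not the FORTRAN/Assembler interface described in the abstract.

```python
# Hedged sketch: calling two classic BLAS routines through SciPy's wrappers.
import numpy as np
from scipy.linalg import blas

x = np.array([1.0, 2.0, 3.0])
y = np.array([4.0, 5.0, 6.0])
A = np.random.default_rng(0).random((3, 3))
B = np.random.default_rng(1).random((3, 3))

expected = 2.0 * x + y               # reference result computed before the call
y_new = blas.daxpy(x, y, a=2.0)      # AXPY: y <- a*x + y
C = blas.dgemm(alpha=1.0, a=A, b=B)  # GEMM: C <- alpha*A@B

print(np.allclose(y_new, expected), np.allclose(C, A @ B))
```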

  12. Solving large mixed linear models using preconditioned conjugate gradient iteration.

    PubMed

    Strandén, I; Lidauer, M

    1999-12-01

    Continuous evaluation of dairy cattle with a random regression test-day model requires a fast solving method and algorithm. A new computing technique, feasible in Jacobi and conjugate gradient based iterative methods using iteration on data, is presented. In the new computing technique, the calculations in the multiplication of a vector by a matrix were reordered into three steps instead of the commonly used two steps. The three-step method was implemented in a general mixed linear model program that used preconditioned conjugate gradient iteration. Performance of this program in comparison to other general solving programs was assessed via estimation of breeding values using univariate, multivariate, and random regression test-day models. Central processing unit time per iteration with the new three-step technique was, at best, one-third of that needed with the old technique. Performance was best with the test-day model, which was the largest and most complex model used. The new program did well in comparison to other general software. Programs keeping the mixed model equations in random access memory required at least 20% and 435% more time to solve the univariate and multivariate animal models, respectively. The second-best iteration-on-data program took approximately three and five times longer for the animal and test-day models, respectively, than did the new program. Good performance was due to fast computing time per iteration and quick convergence to the final solutions. The use of preconditioned conjugate gradient based methods in solving large breeding value problems is supported by our findings.
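
    The core numerical step is a preconditioned conjugate gradient solve of the sparse, symmetric positive definite mixed model equations. The sketch below shows a Jacobi (diagonal) preconditioned CG solve with SciPy on a small random SPD system that merely stands in for real mixed-model equations; it does not reproduce the iteration-on-data or three-step matrix-vector technique of the paper.

```python
# Hedged sketch of a Jacobi-preconditioned conjugate gradient solve.
import numpy as np
from scipy.sparse import random as sparse_random, identity
from scipy.sparse.linalg import cg, LinearOperator

n = 200
W = sparse_random(n, n, density=0.02, random_state=42)
A = (W @ W.T) + 10.0 * identity(n)     # made-up sparse SPD coefficient matrix
b = np.random.default_rng(42).standard_normal(n)

d = A.diagonal()
precond = LinearOperator((n, n), matvec=lambda v: v / d)   # Jacobi preconditioner

x, info = cg(A, b, M=precond)
print("converged" if info == 0 else f"cg info = {info}",
      "| residual norm:", np.linalg.norm(A @ x - b))
```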

  13. Linear units improve articulation between social and physical constructs: An example from caregiver parameterization for children supported by complex medical technologies

    NASA Astrophysics Data System (ADS)

    Bezruczko, N.; Stanley, T.; Battle, M.; Latty, C.

    2016-11-01

    Despite broad, sweeping pronouncements by international research organizations that the social sciences are being integrated into global research programs, little attention has been directed toward obstacles blocking productive collaborations. In particular, the social sciences routinely implement nonlinear, ordinal measures, which fundamentally inhibit integration with overarching scientific paradigms. The widely promoted general linear model in contemporary social science methods is largely based on untransformed scores and ratings, which are neither objective nor linear. This issue has historically separated the physical and social sciences, a separation which this report asserts is unnecessary. In this research, nonlinear, subjective caregiver ratings of confidence to care for children supported by complex medical technologies were transformed to an objective scale defined by logits (N=70). Transparent linear units from this transformation provided foundational insights into the measurement properties of a social-humanistic caregiving construct, which clarified physical and social caregiver implications. Parameterized items and ratings were also subjected to multivariate hierarchical analysis, then decomposed to demonstrate theoretical coherence (R² > .50), which provided further support for the convergence of mathematical parameterization, physical expectations, and a social-humanistic construct. These results present substantial support for improving the integration of the social sciences with contemporary scientific research programs by emphasizing the construction of common variables with objective, linear units.

  14. What linear programming contributes: world food programme experience with the "cost of the diet" tool.

    PubMed

    Frega, Romeo; Lanfranco, Jose Guerra; De Greve, Sam; Bernardini, Sara; Geniez, Perrine; Grede, Nils; Bloem, Martin; de Pee, Saskia

    2012-09-01

    Linear programming has been used for analyzing children's complementary feeding diets, for optimizing nutrient adequacy of dietary recommendations for a population, and for estimating the economic value of fortified foods. The objective of this study was to describe and apply a linear programming tool ("Cost of the Diet") with data from Mozambique to determine what could be cost-effective fortification strategies. Based on locally assessed average household dietary needs, seasonal market prices of available food products, and food composition data, the tool estimates the lowest-cost diet that meets almost all nutrient needs. The results were compared with expenditure data from Mozambique to establish the affordability of this diet by quintiles of the population. Three different applications were illustrated: identifying likely "limiting nutrients," comparing the cost effectiveness of different fortification interventions at the household level, and assessing economic access to nutritious foods. The analysis identified iron, vitamin B2, and pantothenic acid as "limiting nutrients." Under the Mozambique conditions, vegetable oil was estimated to be a more cost-efficient vehicle for vitamin A fortification than sugar; maize flour may also be an effective vehicle to provide other constraining micronutrients. Multiple micronutrient fortification of maize flour could reduce the cost of the "lowest-cost nutritious diet" by 18%, but even this diet can be afforded by only 20% of the Mozambican population. Within the context of fortification, linear programming can be a useful tool for identifying likely nutrient inadequacies, for comparing fortification options in terms of cost effectiveness, and for illustrating the potential benefit of fortification for improving household access to a nutritious diet.
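
    At its core, a "cost of the diet" calculation is the classical diet linear program: choose food quantities that meet nutrient requirements at minimum cost. The sketch below shows that structure with SciPy; the three foods, two nutrients, prices, and requirements are all invented and are not the Mozambique data used in the study.

```python
# Hedged sketch of a lowest-cost diet LP; every coefficient is hypothetical.
import numpy as np
from scipy.optimize import linprog

foods = ["maize flour", "beans", "vegetable oil"]
cost = np.array([0.30, 0.80, 1.20])          # price per 100 g (arbitrary currency)

# nutrient content per 100 g (rows: energy kcal, iron mg), made-up values
nutrients = np.array([[360.0, 340.0, 880.0],
                      [  2.5,   5.0,   0.0]])
requirement = np.array([2100.0, 18.0])       # daily requirements (made up)

# linprog uses A_ub @ x <= b_ub, so negate to express "at least the requirement"
res = linprog(cost, A_ub=-nutrients, b_ub=-requirement,
              bounds=[(0, None)] * len(foods), method="highs")

for name, amount in zip(foods, res.x):
    print(f"{name}: {100 * amount:.0f} g/day")
print(f"daily cost of the diet: {res.fun:.2f}")
```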

  15. Electro-Mechanical Actuator. DC Resonant Link Controller

    NASA Technical Reports Server (NTRS)

    Schreiner, Kenneth E.

    1996-01-01

    This report summarizes the work performed on the 68 HP electro-mechanical actuator (EMA) system developed under NASA contract for the Electrical Actuation (ELA) Technology Bridging Program. The system was designed to demonstrate the capability of large, high-power linear ELAs for applications such as Thrust Vector Control (TVC) on rocket engines. It consists of a motor controller, drive electronics, and a linear actuator capable of up to 32,000 lbs loading at 7.4 inches/second. The drive electronics are based on the Resonant DC link concept and operate at a nominal frequency of 55 kHz. The induction motor is a specially designed high-speed, low-inertia motor capable of a peak of 68 HP. The actuator was originally designed by MOOG Aerospace under an internal R&D program to meet Space Shuttle Main Engine (SSME) TVC requirements. The design was modified to meet this program's linear-rate specification of 7.4 inches/second. The motor and driver were tested on a dynamometer at the Martin Marietta Space Systems facility. System frequency response, step response, and force-velocity tests were conducted at the MOOG Aerospace facility. A complete description of the system and all test results can be found in the body of the report.

  16. Inter and intra-modal deformable registration: continuous deformations meet efficient optimal linear programming.

    PubMed

    Glocker, Ben; Paragios, Nikos; Komodakis, Nikos; Tziritas, Georgios; Navab, Nassir

    2007-01-01

    In this paper we propose a novel non-rigid volume registration based on discrete labeling and linear programming. The proposed framework reformulates registration as a minimal path extraction in a weighted graph. The space of solutions is represented using a set of labels which are assigned to predefined displacements. The graph topology corresponds to a regular grid superimposed onto the volume. Links between neighboring control points introduce smoothness, while links between the graph nodes and the labels (end-nodes) measure the cost induced in the objective function by the selection of a particular deformation for a given control point once projected to the entire volume domain. Higher-order polynomials are used to express the volume deformation from those of the control points. Efficient linear programming that can guarantee the optimal solution up to a (user-defined) bound is used to recover the optimal registration parameters. Therefore, the method is gradient free, can encode various similarity metrics (through simple changes in the graph construction), can guarantee a globally sub-optimal solution, and is computationally tractable. Experimental validation using simulated data with known deformation, as well as manually segmented data, demonstrates the strong potential of our approach.

  17. PAPR reduction in FBMC using an ACE-based linear programming optimization

    NASA Astrophysics Data System (ADS)

    van der Neut, Nuan; Maharaj, Bodhaswar TJ; de Lange, Frederick; González, Gustavo J.; Gregorio, Fernando; Cousseau, Juan

    2014-12-01

    This paper presents four novel techniques for peak-to-average power ratio (PAPR) reduction in filter bank multicarrier (FBMC) modulation systems. As the main contribution, the approach extends current PAPR-reduction active constellation extension (ACE) methods, as used in orthogonal frequency division multiplexing (OFDM), to an FBMC implementation. The four techniques introduced can be split into two groups: linear programming optimization ACE-based techniques and smart gradient-project (SGP) ACE techniques. The linear programming (LP)-based techniques compensate for the symbol overlaps by utilizing a frame-based approach and provide a theoretical upper bound on the achievable performance of the overlapping ACE techniques. The overlapping ACE techniques, on the other hand, can handle symbol-by-symbol processing. Furthermore, as a result of FBMC properties, the proposed techniques do not require side-information transmission. The PAPR performance of the techniques is shown to match, or in some cases improve on, current PAPR techniques for FBMC. Initial analysis of the computational complexity of the SGP techniques indicates that the complexity issues with PAPR reduction in FBMC implementations can be addressed. The out-of-band interference introduced by the techniques is investigated, and it is shown that the interference can be compensated for whilst still maintaining decent PAPR performance. Additional results are also provided by means of a study of the PAPR reduction of the proposed techniques at a fixed clipping probability. The bit error rate (BER) degradation is investigated to ensure that the trade-off in terms of BER degradation is not too severe. As illustrated by exhaustive simulations, the SGP ACE-based techniques proposed are ideal candidates for practical implementation in systems employing the low-complexity polyphase implementation of FBMC modulators. The methods are shown to offer significant PAPR reduction and to increase the feasibility of FBMC as a replacement modulation system for OFDM.

  18. Reduced-Size Integer Linear Programming Models for String Selection Problems: Application to the Farthest String Problem.

    PubMed

    Zörnig, Peter

    2015-08-01

    We present integer programming models for some variants of the farthest string problem. The number of variables and constraints is substantially less than that of the integer linear programming models known in the literature. Moreover, the solution of the linear programming-relaxation contains only a small proportion of noninteger values, which considerably simplifies the rounding process. Numerical tests have shown excellent results, especially when a small set of long sequences is given.

  19. Neural control of magnetic suspension systems

    NASA Technical Reports Server (NTRS)

    Gray, W. Steven

    1993-01-01

    The purpose of this research program is to design, build, and test (in cooperation with NASA personnel from the NASA Langley Research Center) neural controllers for two different small air-gap magnetic suspension systems. The general objective of the program is to study neural network architectures for the purpose of control in an experimental setting and to demonstrate the feasibility of the concept. The specific objectives of the research program are: (1) to demonstrate through simulation and experimentation the feasibility of using neural controllers to stabilize a nonlinear magnetic suspension system; (2) to investigate through simulation and experimentation the performance of neural controller designs under various types of parametric and nonparametric uncertainty; (3) to investigate through simulation and experimentation various types of neural architectures for real-time control with respect to performance and complexity; and (4) to benchmark in an experimental setting the performance of neural controllers against other types of existing linear and nonlinear compensator designs. To date, the first one-dimensional, small air-gap magnetic suspension system has been built, tested, and delivered to the NASA Langley Research Center. The device is currently being stabilized with a digital linear phase-lead controller. The neural controller hardware is under construction. Two different neural network paradigms are under consideration, one based on hidden-layer feedforward networks trained via backpropagation and one based on Gaussian radial basis functions trained by analytical methods related to stability conditions. Some advanced nonlinear control algorithms using feedback linearization and sliding mode control are in simulation studies.

  20. Serenity: A subsystem quantum chemistry program.

    PubMed

    Unsleber, Jan P; Dresselhaus, Thomas; Klahr, Kevin; Schnieders, David; Böckers, Michael; Barton, Dennis; Neugebauer, Johannes

    2018-05-15

    We present the new quantum chemistry program Serenity. It implements a wide variety of functionalities with a focus on subsystem methodology. The modular code structure in combination with publicly available external tools and particular design concepts ensures extensibility and robustness with a focus on the needs of a subsystem program. Several important features of the program are exemplified with sample calculations with subsystem density-functional theory, potential reconstruction techniques, a projection-based embedding approach and combinations thereof with geometry optimization, semi-numerical frequency calculations and linear-response time-dependent density-functional theory. © 2018 Wiley Periodicals, Inc.

  1. Development and applications of algorithms for calculating the transonic flow about harmonically oscillating wings

    NASA Technical Reports Server (NTRS)

    Ehlers, F. E.; Weatherill, W. H.; Yip, E. L.

    1984-01-01

    A finite difference method to solve the unsteady transonic flow about harmonically oscillating wings was investigated. The procedure is based on separating the velocity potential into steady and unsteady parts and linearizing the resulting unsteady differential equation for small disturbances. The differential equation for the unsteady velocity potential is linear with spatially varying coefficients and with the time variable eliminated by assuming harmonic motion. An alternating direction implicit procedure was investigated, and a pilot program was developed for both two- and three-dimensional wings. This program provides a relatively efficient relaxation solution without previously encountered solution instability problems. Pressure distributions for two rectangular wings are calculated. Conjugate gradient techniques were developed for the asymmetric, indefinite problem. The conjugate gradient procedure is evaluated for application to the unsteady transonic problem. Different equations for the alternating direction procedure are derived using a coordinate transformation for swept and tapered wing planforms. Pressure distributions for swept, untapered wings of vanishing thickness are correlated with linear results for sweep angles up to 45 degrees.

  2. Going virtual with quicktime VR: new methods and standardized tools for interactive dynamic visualization of anatomical structures.

    PubMed

    Trelease, R B; Nieder, G L; Dørup, J; Hansen, M S

    2000-04-15

    Continuing evolution of computer-based multimedia technologies has produced QuickTime, a multiplatform digital media standard that is supported by stand-alone commercial programs and World Wide Web browsers. While its core functions might be most commonly employed for production and delivery of conventional video programs (e.g., lecture videos), additional QuickTime VR "virtual reality" features can be used to produce photorealistic, interactive "non-linear movies" of anatomical structures ranging in size from microscopic through gross anatomic. But what is really included in QuickTime VR and how can it be easily used to produce novel and innovative visualizations for education and research? This tutorial introduces the QuickTime multimedia environment, its QuickTime VR extensions, basic linear and non-linear digital video technologies, image acquisition, and other specialized QuickTime VR production methods. Four separate practical applications are presented for light and electron microscopy, dissectable preserved specimens, and explorable functional anatomy in magnetic resonance cinegrams.

  3. A Block-LU Update for Large-Scale Linear Programming

    DTIC Science & Technology

    1990-01-01

    linear programming problems. Results are given from runs on the Cray Y-MP. 1. Introduction We wish to use the simplex method [Dan63] to solve the...standard linear program, minimize c^T x subject to Ax = b, l ≤ x ≤ u, where A is an m by n matrix and c, x, l, u, and b are of appropriate dimension. The simplex...the identity matrix. The basis is used to solve for the search direction y and the dual variables π in the following linear systems: B_k y = a_q (1.2) and
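
    The two basis solves quoted in the snippet, B_k y = a_q for the search direction and B_k^T π = c_B for the dual variables, are exactly the systems an LU factorization of the basis handles; the block-LU update machinery of the report maintains such a factorization cheaply as columns enter and leave. The sketch below shows only the two solves on a small made-up basis, using a plain dense LU factorization rather than any update scheme.

```python
# Hedged sketch of the simplex basis solves B y = a_q and B^T pi = c_B.
import numpy as np
from scipy.linalg import lu_factor, lu_solve

rng = np.random.default_rng(0)
m = 5
B = rng.random((m, m)) + m * np.eye(m)   # made-up nonsingular basis matrix
a_q = rng.random(m)                      # entering column
c_B = rng.random(m)                      # basic objective coefficients

lu, piv = lu_factor(B)
y = lu_solve((lu, piv), a_q)             # search direction:  B y = a_q
pi = lu_solve((lu, piv), c_B, trans=1)   # dual variables:    B^T pi = c_B

print(np.allclose(B @ y, a_q), np.allclose(B.T @ pi, c_B))
```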

  4. Learning Representation and Control in Markov Decision Processes

    DTIC Science & Technology

    2013-10-21

    π. Figure 3 shows that Drazin bases outperform the other bases on a two-room MDP. However, a drawback of Drazin bases is that they are...stochastic matrices. One drawback of diffusion wavelets is that it can generate a large number of overcomplete bases, which needs to be effectively...proposed in [52], overcoming some of the drawbacks of LARS-TD. An approximate linear programming for finding l1 regularized solutions of the Bellman

  5. DE and NLP Based QPLS Algorithm

    NASA Astrophysics Data System (ADS)

    Yu, Xiaodong; Huang, Dexian; Wang, Xiong; Liu, Bo

    As a novel evolutionary computing technique, Differential Evolution (DE) has been considered an effective optimization method for complex optimization problems and has achieved many successful applications in engineering. In this paper, a new algorithm for Quadratic Partial Least Squares (QPLS) based on Nonlinear Programming (NLP) is presented, and DE is used to solve the NLP so as to calculate the optimal input weights and the parameters of the inner relationship. The simulation results, based on the soft measurement of the diesel oil solidifying point on a real crude distillation unit, demonstrate the superiority of the proposed algorithm over linear PLS and over QPLS based on Sequential Quadratic Programming (SQP) in terms of fitting accuracy and computational cost.
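
    The role DE plays here, searching a parameter space for a best-fitting nonlinear inner relation, can be illustrated with SciPy's differential_evolution on a toy problem: fitting the coefficients of a quadratic relation to synthetic data. This stands in for, but is not, the QPLS inner-model estimation described above.

```python
# Hedged sketch: differential evolution fitting a quadratic inner relation.
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(3)
t = rng.uniform(-2.0, 2.0, size=80)                                    # latent input scores
u = 0.5 + 1.2 * t - 0.7 * t**2 + rng.normal(scale=0.1, size=t.size)    # synthetic response

def sse(params):
    c0, c1, c2 = params
    return np.sum((u - (c0 + c1 * t + c2 * t**2)) ** 2)

result = differential_evolution(sse, bounds=[(-5, 5)] * 3, seed=3)
print("estimated coefficients:", np.round(result.x, 2))   # roughly [0.5, 1.2, -0.7]
```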

  6. Yager’s ranking method for solving the trapezoidal fuzzy number linear programming

    NASA Astrophysics Data System (ADS)

    Karyati; Wutsqa, D. U.; Insani, N.

    2018-03-01

    In previous research, the authors studied the fuzzy simplex method for trapezoidal fuzzy number linear programming based on Maleki's ranking function, and found results on the optimality conditions of the fuzzy simplex method, the fuzzy Big-M method, the fuzzy two-phase method, and the sensitivity analysis. In this research, we study the fuzzy simplex method based on another ranking function, Yager's ranking function, and investigate the corresponding optimality conditions. The results show that Yager's ranking function does not behave like Maleki's: with Yager's function the simplex method does not work as well, because the difference of two equal fuzzy numbers is not equal to zero. As a consequence, optimality of a fuzzy simplex tableau cannot be detected, the iteration stalls, and the optimum solution is not reached.

  7. A new method for the prediction of combustion instability

    NASA Astrophysics Data System (ADS)

    Flanagan, Steven Meville

    This dissertation presents a new approach to the prediction of combustion instability in solid rocket motors. Previous attempts at developing computational tools to solve this problem have been largely unsuccessful, showing very poor agreement with experimental results and having little or no predictive capability. This is due primarily to deficiencies in the linear stability theory upon which these efforts have been based. Recent advances in linear instability theory by Flandro have demonstrated the importance of including unsteady rotational effects, previously considered negligible. Previous versions of the theory also neglected corrections to the unsteady flow field of first order in the mean flow Mach number. This research explores the stability implications of extending the solution to include these corrections. Also, the corrected linear stability theory, based upon a rotational unsteady flow field extended to first order in the mean flow Mach number, has been implemented in two computer programs developed for the Macintosh platform. A quasi-one-dimensional version of the program has been developed which is based upon an approximate solution to the cavity acoustics problem. The three-dimensional program applies Green's Function Discretization (GFD) to the solution for the acoustic mode shapes and frequency. GFD is a recently developed numerical method for finding fully three-dimensional solutions for this class of problems. The analysis of complex motor geometries, previously a tedious and time-consuming task, has also been greatly simplified through the development of a drawing package designed specifically to facilitate the specification of typical motor geometries. The combination of the drawing package, improved acoustic solutions, and new analysis results in a tool which is capable of producing more accurate and meaningful predictions than have been possible in the past.

  8. Linear System of Equations, Matrix Inversion, and Linear Programming Using MS Excel

    ERIC Educational Resources Information Center

    El-Gebeily, M.; Yushau, B.

    2008-01-01

    In this note, we demonstrate with illustrations two different ways that MS Excel can be used to solve Linear Systems of Equation, Linear Programming Problems, and Matrix Inversion Problems. The advantage of using MS Excel is its availability and transparency (the user is responsible for most of the details of how a problem is solved). Further, we…

  9. A Fiber Bragg Grating Sensor Interrogation System Based on a Linearly Wavelength-Swept Thermo-Optic Laser Chip

    PubMed Central

    Lee, Hyung-Seok; Lee, Hwi Don; Kim, Hyo Jin; Cho, Jae Du; Jeong, Myung Yung; Kim, Chang-Seok

    2014-01-01

    A linearized wavelength-swept thermo-optic laser chip was applied to demonstrate a fiber Bragg grating (FBG) sensor interrogation system. A broad tuning range of 11.8 nm was periodically obtained from the laser chip for a sweep rate of 16 Hz. To measure the linear time response of the reflection signal from the FBG sensor, a programmed driving signal was directly applied to the wavelength-swept laser chip. The linear wavelength response of the applied strain was clearly extracted with an R-squared value of 0.99994. To test the feasibility of the system for dynamic measurements, the dynamic strain was successfully interrogated with a repetition rate of 0.2 Hz by using this FBG sensor interrogation system. PMID:25177803

  10. General purpose computer programs for numerically analyzing linear ac electrical and electronic circuits for steady-state conditions

    NASA Technical Reports Server (NTRS)

    Egebrecht, R. A.; Thorbjornsen, A. R.

    1967-01-01

    Digital computer programs determine the steady-state performance characteristics of active and passive linear circuits. The ac analysis program computes the basic circuit parameters. The compiler program computes these circuit parameters and, in addition, provides a more versatile tool by allowing the user to perform mathematical and logical operations.
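
    Numerically, an ac steady-state analysis of a linear circuit amounts to assembling a complex node-admittance matrix at each frequency and solving Y V = I. The sketch below does this for a two-node RC circuit with arbitrary component values; it illustrates the kind of computation such programs perform, not the programs themselves.

```python
# Hedged sketch: ac nodal analysis of a small RC circuit at one frequency.
import numpy as np

f = 1e3                        # analysis frequency (Hz)
w = 2 * np.pi * f
R1, R2, C1 = 1e3, 2e3, 100e-9  # arbitrary component values
I_src = 1e-3                   # 1 mA current source injected into node 1

# node 1 -(R1)- node 2, node 2 to ground through R2 in parallel with C1
Y = np.array([[ 1/R1, -1/R1                  ],
              [-1/R1,  1/R1 + 1/R2 + 1j*w*C1 ]], dtype=complex)
I = np.array([I_src, 0.0], dtype=complex)

V = np.linalg.solve(Y, I)      # complex node voltages
for k, v in enumerate(V, start=1):
    print(f"V{k}: {abs(v):.4f} V at {np.degrees(np.angle(v)):.2f} deg")
```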

  11. A component-centered meta-analysis of family-based prevention programs for adolescent substance use.

    PubMed

    Van Ryzin, Mark J; Roseth, Cary J; Fosco, Gregory M; Lee, You-Kyung; Chen, I-Chien

    2016-04-01

    Although research has documented the positive effects of family-based prevention programs, the field lacks specific information regarding why these programs are effective. The current study summarized the effects of family-based programs on adolescent substance use using a component-based approach to meta-analysis in which we decomposed programs into a set of key topics or components that were specifically addressed by program curricula (e.g., parental monitoring/behavior management, problem solving, positive family relations, etc.). Components were coded according to the amount of time spent on program services that targeted youth, parents, and the whole family; we also coded effect sizes across studies for each substance-related outcome. Given the nested nature of the data, we used hierarchical linear modeling to link program components (Level 2) with effect sizes (Level 1). The overall effect size across programs was .31, which did not differ by type of substance. Youth-focused components designed to encourage more positive family relationships and a positive orientation toward the future emerged as key factors predicting larger than average effect sizes. Our results suggest that, within the universe of family-based prevention, where components such as parental monitoring/behavior management are almost universal, adding or expanding certain youth-focused components may be able to enhance program efficacy. Copyright © 2016 Elsevier Ltd. All rights reserved.

  12. NASA pyrotechnically actuated systems program

    NASA Technical Reports Server (NTRS)

    Schulze, Norman R.

    1993-01-01

    The Office of Safety and Mission Quality initiated a Pyrotechnically Actuated Systems (PAS) Program in FY-92 to address problems experienced with pyrotechnically actuated systems and devices used both on the ground and in flight. The PAS Program will provide the technical basis for NASA's projects to incorporate new technological developments in operational systems. The program will accomplish that objective by developing/testing current and new hardware designs for flight applications and by providing a pyrotechnic data base. This marks the first applied pyrotechnic technology program funded by NASA to address pyrotechnic issues. The PAS Program has been structured to address the results of a survey of pyrotechnic device and system problems with the goal of alleviating or minimizing their risks. Major program initiatives include the development of a Laser Initiated Ordnance System, a pyrotechnic systems data base, NASA Standard Initiator model, a NASA Standard Linear Separation System and a NASA Standard Gas Generator. The PAS Program sponsors annual aerospace pyrotechnic systems workshops.

  13. Linear Programming and Its Application to Pattern Recognition Problems

    NASA Technical Reports Server (NTRS)

    Omalley, M. J.

    1973-01-01

    Linear programming and linear-programming-like techniques as applied to pattern recognition problems are discussed. Three relatively recent research articles on such applications are summarized. The main results of each paper are described, indicating the theoretical tools needed to obtain them. A synopsis of the author's comments is presented with regard to the applicability or non-applicability of his methods to particular problems, including computational results wherever given.
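
    One classic example of the kind of application surveyed here is using a linear program to find a separating hyperplane for two labelled point sets, minimizing the total constraint violation. The sketch below sets that up with SciPy on made-up two-dimensional data; it is an illustration of the general technique, not of any specific paper summarized above.

```python
# Hedged sketch: LP-based linear separation of two point classes.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(loc=[2.0, 2.0], size=(20, 2)),
               rng.normal(loc=[-2.0, -2.0], size=(20, 2))])
y = np.hstack([np.ones(20), -np.ones(20)])
n, d = X.shape

# variables: [w (d), b, slack s (n)];  constraints: y_i*(w.x_i + b) >= 1 - s_i,  s >= 0
A_ub = np.hstack([-(y[:, None] * X), -y[:, None], -np.eye(n)])
b_ub = -np.ones(n)
c = np.hstack([np.zeros(d + 1), np.ones(n)])            # minimize total slack
bounds = [(None, None)] * (d + 1) + [(0, None)] * n

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
w, b = res.x[:d], res.x[d]
print("misclassified points:", int(np.sum(y * (X @ w + b) < 0)))
```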

  14. On 2- and 3-person games on polyhedral sets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Belenky, A.S.

    1994-12-31

    Special classes of 3-person games are considered where the sets of players' allowable strategies are polyhedral and the payoff functions are defined as maxima, on a polyhedral set, of a certain kind of sums of linear and bilinear functions. Necessary and sufficient conditions, which are easy to verify, for a Nash point in these games are established, and a finite method, based on these conditions, for calculating Nash points is proposed. It is shown that the game serves as a generalization of a model for a problem of waste products evacuation from a territory. The method makes it possible to reduce calculation of a Nash point to solving some linear and quadratic programming problems formulated on the basis of the original 3-person game. A class of 2-person games on connected polyhedral sets is considered, with the payoff function being a sum of two linear functions and one bilinear function. Necessary and sufficient conditions are established for the min-max, the max-min, and for a certain equilibrium. It is shown that the corresponding points can be calculated from auxiliary linear programming problems formulated on the basis of the master game.

  15. Discovery of Boolean metabolic networks: integer linear programming based approach.

    PubMed

    Qiu, Yushan; Jiang, Hao; Ching, Wai-Ki; Cheng, Xiaoqing

    2018-04-11

    Traditional drug discovery methods focused on the efficacy of drugs rather than their toxicity. However, toxicity and/or lack of efficacy are produced when unintended targets are affected in metabolic networks. Thus, the identification of biological targets which can be manipulated to produce the desired effect with minimum side-effects has become an important and challenging topic. Efficient computational methods are required to identify drug targets while incurring minimal side-effects. In this paper, we propose a graph-based computational damage model that summarizes the impact of enzymes on compounds in metabolic networks. An efficient method based on an Integer Linear Programming formalism is then developed to identify the optimal enzyme combination so as to minimize the side-effects. The identified target enzymes for known successful drugs are then verified by comparing the results with those in the existing literature. Side-effect reduction plays a crucial role in the study of drug development. A graph-based computational damage model is proposed, and the theoretical analysis shows that the captured problem is NP-complete. The proposed approaches can therefore contribute to the discovery of drug targets. Our developed software is available at "http://hkumath.hku.hk/~wkc/APBC2018-metabolic-network.zip".

  16. Optimal Recycling of Steel Scrap and Alloying Elements: Input-Output based Linear Programming Method with Its Application to End-of-Life Vehicles in Japan.

    PubMed

    Ohno, Hajime; Matsubae, Kazuyo; Nakajima, Kenichi; Kondo, Yasushi; Nakamura, Shinichiro; Fukushima, Yasuhiro; Nagasaka, Tetsuya

    2017-11-21

    Importance of end-of-life vehicles (ELVs) as an urban mine is expected to grow, as more people in developing countries are experiencing increased standards of living, while the automobiles are increasingly made using high-quality materials to meet stricter environmental and safety requirements. While most materials in ELVs, particularly steel, have been recycled at high rates, quality issues have not been adequately addressed due to the complex use of automobile materials, leading to considerable losses of valuable alloying elements. This study highlights the maximal potential of quality-oriented recycling of ELV steel, by exploring the utilization methods of scrap, sorted by parts, to produce electric-arc-furnace-based crude alloy steel with minimal losses of alloying elements. Using linear programming on the case of Japanese economy in 2005, we found that adoption of parts-based scrap sorting could result in the recovery of around 94-98% of the alloying elements occurring in parts scrap (manganese, chromium, nickel, and molybdenum), which may replace 10% of the virgin sources in electric arc furnace-based crude alloy steel production.

  17. SPIP: A computer program implementing the Interaction Picture method for simulation of light-wave propagation in optical fibre

    NASA Astrophysics Data System (ADS)

    Balac, Stéphane; Fernandez, Arnaud

    2016-02-01

    The computer program SPIP is aimed at solving the Generalized Non-Linear Schrödinger Equation (GNLSE), which arises in optics, e.g., in the modelling of light-wave propagation in an optical fibre, using the Interaction Picture method, a new and efficient alternative to the Symmetric Split-Step method. In the SPIP program, a dedicated low-cost adaptive step-size control based on a 4th-order embedded Runge-Kutta method is implemented in order to speed up the computation.

  18. Two Computer Programs for the Statistical Evaluation of a Weighted Linear Composite.

    ERIC Educational Resources Information Center

    Sands, William A.

    1978-01-01

    Two computer programs (one batch, one interactive) are designed to provide statistics for a weighted linear combination of several component variables. Both programs provide mean, variance, standard deviation, and a validity coefficient. (Author/JKS)
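
    The statistics such a program reports follow directly from the weights and the component covariance matrix: the composite mean is the weighted sum of component means, the composite variance is w'Sw, and the validity coefficient is the correlation of the composite with a criterion. The sketch below computes them on synthetic scores; the weights and data are invented.

```python
# Hedged sketch: descriptive statistics for a weighted linear composite.
import numpy as np

rng = np.random.default_rng(7)
scores = rng.normal(size=(100, 3))                       # 100 examinees, 3 components
criterion = scores @ np.array([0.5, 0.3, 0.2]) + rng.normal(scale=0.5, size=100)
w = np.array([2.0, 1.0, 1.0])                            # composite weights (made up)

composite = scores @ w
mean = composite.mean()
var = w @ np.cov(scores, rowvar=False) @ w               # w' S w (equals composite.var(ddof=1))
validity = np.corrcoef(composite, criterion)[0, 1]

print(f"mean={mean:.3f}  variance={var:.3f}  SD={np.sqrt(var):.3f}  validity={validity:.3f}")
```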

  19. Link state relationships under incident conditions : using a CTM-based linear programming dynamic traffic assignment model.

    DOT National Transportation Integrated Search

    2010-03-01

    Urban transportation networks, consisting of numerous links and nodes, experience traffic incidents such as accidents and road : maintenance work. A typical consequence of incidents is congestion which results in long queues and causes high travel ti...

  20. Link State Relationships Under Incident Conditions: Using a CTM-Based Linear Programming Dynamic Traffic Assignment Model

    DOT National Transportation Integrated Search

    2010-03-01

    Urban transportation networks, consisting of numerous links and nodes, experience traffic incidents such as accidents and road maintenance work. A typical consequence of incidents is congestion which results in long queues and causes high travel time...

  1. Relevance of Web Documents:Ghosts Consensus Method.

    ERIC Educational Resources Information Center

    Gorbunov, Andrey L.

    2002-01-01

    Discusses how to improve the quality of Internet search systems and introduces the Ghosts Consensus Method which is free from the drawbacks of digital democracy algorithms and is based on linear programming tasks. Highlights include vector space models; determining relevant documents; and enriching query terms. (LRW)

  2. The inverse problem: Ocean tides derived from earth tide observations

    NASA Technical Reports Server (NTRS)

    Kuo, J. T.

    1978-01-01

    Indirect mapping of ocean tides by means of land- and island-based tidal gravity measurements is presented. The inverse scheme of linear programming is used for the indirect mapping of ocean tides. Open-ocean tides were obtained by numerical integration of Laplace's tidal equations.

  3. Designing Templates for Interactive Tasks in CALL Tutorials.

    ERIC Educational Resources Information Center

    Ruhlmann, Felicitas

    The development of templates for computer-assisted language learning (CALL) is discussed, based on experiences with primarily linear multimedia tutorial programs. Design of templates for multiple-choice questions and interactive tasks in a prototype module is described. Possibilities of enhancing interactivity by introducing problem-oriented…

  4. A reverse engineering algorithm for neural networks, applied to the subthalamopallidal network of basal ganglia.

    PubMed

    Floares, Alexandru George

    2008-01-01

    Modeling neural networks with ordinary differential equations systems is a sensible approach, but also very difficult. This paper describes a new algorithm based on linear genetic programming which can be used to reverse engineer neural networks. The RODES algorithm automatically discovers the structure of the network, including neural connections, their signs and strengths, estimates its parameters, and can even be used to identify the biophysical mechanisms involved. The algorithm is tested on simulated time series data, generated using a realistic model of the subthalamopallidal network of basal ganglia. The resulting ODE system is highly accurate, and results are obtained in a matter of minutes. This is because the problem of reverse engineering a system of coupled differential equations is reduced to one of reverse engineering individual algebraic equations. The algorithm allows the incorporation of common domain knowledge to restrict the solution space. To our knowledge, this is the first time a realistic reverse engineering algorithm based on linear genetic programming has been applied to neural networks.

  5. Multivariate quadrature for representing cloud condensation nuclei activity of aerosol populations

    DOE PAGES

    Fierce, Laura; McGraw, Robert L.

    2017-07-26

    Sparse representations of atmospheric aerosols are needed for efficient regional- and global-scale chemical transport models. Here we introduce a new framework for representing aerosol distributions, based on the quadrature method of moments. Given a set of moment constraints, we show how linear programming, combined with an entropy-inspired cost function, can be used to construct optimized quadrature representations of aerosol distributions. The sparse representations derived from this approach accurately reproduce cloud condensation nuclei (CCN) activity for realistically complex distributions simulated by a particle-resolved model. Additionally, the linear programming techniques described in this study can be used to bound key aerosol properties, such as the number concentration of CCN. Unlike commonly used sparse representations, such as modal and sectional schemes, the maximum-entropy approach described here is not constrained to pre-determined size bins or assumed distribution shapes. This study is a first step toward a particle-based aerosol scheme that will track multivariate aerosol distributions with sufficient computational efficiency for large-scale simulations.
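
    A schematic Python illustration of constructing a sparse quadrature representation from moment constraints with linear programming (the size grid, reference distribution, and linear cost below are invented for the example and only imitate the paper's entropy-inspired cost):

        import numpy as np
        from scipy.optimize import linprog

        x = np.linspace(0.01, 1.0, 200)              # hypothetical particle-size grid (abscissas)
        ref = np.exp(-0.5 * ((np.log(x) - np.log(0.1)) / 0.5) ** 2)
        ref /= ref.sum()                             # discretized lognormal "truth" on the grid

        K = 5                                        # number of moment constraints
        A_eq = np.vstack([x ** k for k in range(K)]) # moment matrix
        b_eq = A_eq @ ref                            # moments to be matched exactly

        c = -np.log(ref + 1e-12)                     # entropy-inspired linear surrogate cost
        res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=(0, None), method="highs")
        w = res.x                                    # quadrature weights on the grid
        print("active quadrature points:", np.count_nonzero(w > 1e-10))  # typically <= K at a vertex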

  6. Multivariate quadrature for representing cloud condensation nuclei activity of aerosol populations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fierce, Laura; McGraw, Robert L.

    Sparse representations of atmospheric aerosols are needed for efficient regional- and global-scale chemical transport models. Here we introduce a new framework for representing aerosol distributions, based on the quadrature method of moments. Given a set of moment constraints, we show how linear programming, combined with an entropy-inspired cost function, can be used to construct optimized quadrature representations of aerosol distributions. The sparse representations derived from this approach accurately reproduce cloud condensation nuclei (CCN) activity for realistically complex distributions simulated by a particle-resolved model. Additionally, the linear programming techniques described in this study can be used to bound key aerosol properties, such as the number concentration of CCN. Unlike commonly used sparse representations, such as modal and sectional schemes, the maximum-entropy approach described here is not constrained to pre-determined size bins or assumed distribution shapes. This study is a first step toward a particle-based aerosol scheme that will track multivariate aerosol distributions with sufficient computational efficiency for large-scale simulations.

  7. Computed Tomography Inspection and Analysis for Additive Manufacturing Components

    NASA Technical Reports Server (NTRS)

    Beshears, Ronald D.

    2016-01-01

    Computed tomography (CT) inspection was performed on test articles additively manufactured from metallic materials. Metallic AM and machined wrought alloy test articles with programmed flaws were inspected using a 2 MeV linear-accelerator-based CT system. Performance of CT inspection on identically configured wrought and AM components and programmed flaws was assessed using standard image analysis techniques to determine the impact of additive manufacturing on the inspectability of objects with complex geometries.

  8. Ocean Surface Wave Optical Roughness: Analysis of Innovative Measurements

    DTIC Science & Technology

    2013-12-16

    relationship of MSS to wind speed, and at times has shown a reversal of the Cox-Munk linear relationship. Furthermore, we observe measurable changes in...1985]. The variable speed allocation method has the effect of aliasing (cb) to slower waves, thereby increasing the exponent –m. Our analysis based ...RaDyO) program. The primary research goals of the program are to (1) examine time-dependent oceanic radiance distribution in relation to dynamic

  9. The RANDOM computer program: A linear congruential random number generator

    NASA Technical Reports Server (NTRS)

    Miles, R. F., Jr.

    1986-01-01

    The RANDOM Computer Program is a FORTRAN program for generating random number sequences and testing linear congruential random number generators (LCGs). The linear congruential form of random number generator is discussed, and the selection of parameters of an LCG for a microcomputer is described. This document describes the following: (1) The RANDOM Computer Program; (2) RANDOM.MOD, the computer code needed to implement an LCG in a FORTRAN program; and (3) The RANCYCLE and the ARITH Computer Programs that provide computational assistance in the selection of parameters for an LCG. The RANDOM, RANCYCLE, and ARITH Computer Programs are written in Microsoft FORTRAN for the IBM PC microcomputer and its compatibles. With only minor modifications, the RANDOM Computer Program and its LCG can be run on most microcomputers or mainframe computers.
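
    As a quick illustration of the linear congruential form X(n+1) = (a*X(n) + c) mod m (in Python rather than the report's FORTRAN, and with commonly quoted example parameters rather than the ones selected in the report):

        class LCG:
            """Minimal linear congruential generator sketch."""
            def __init__(self, seed, a=1664525, c=1013904223, m=2 ** 32):
                self.a, self.c, self.m = a, c, m
                self.state = seed % m

            def next_int(self):
                self.state = (self.a * self.state + self.c) % self.m
                return self.state

            def next_float(self):                    # uniform value in [0, 1)
                return self.next_int() / self.m

        gen = LCG(seed=12345)
        print([round(gen.next_float(), 4) for _ in range(5)])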

  10. A simple approach to optimal control of invasive species.

    PubMed

    Hastings, Alan; Hall, Richard J; Taylor, Caz M

    2006-12-01

    The problem of invasive species and their control is one of the most pressing applied issues in ecology today. We developed simple approaches based on linear programming for determining the optimal removal strategies of different stage or age classes for control of invasive species that are still in a density-independent phase of growth. We illustrate the application of this method to the specific example of invasive Spartina alterniflora in Willapa Bay, WA. For all such systems, linear programming shows in general that the optimal strategy in any time step is to prioritize removal of a single age or stage class. The optimal strategy adjusts which class is the focus of control through time and can be much more cost effective than prioritizing removal of the same stage class each year.
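
    A toy Python sketch of the kind of linear program involved (the three-stage matrix model, costs, and budget are invented and are not the authors' Spartina model); it reproduces the qualitative result that a vertex solution concentrates removal effort on a single class:

        import numpy as np
        from scipy.optimize import linprog

        A = np.array([[0.0, 0.0, 5.0],               # hypothetical stage-structured projection matrix
                      [0.3, 0.4, 0.0],               # (seedling, juvenile, adult)
                      [0.0, 0.5, 0.9]])
        n = np.array([1000.0, 400.0, 150.0])         # current abundances
        cost = np.array([1.0, 2.0, 4.0])             # removal cost per individual
        budget = 600.0

        benefit = A.sum(axis=0)                      # next-year individuals produced per individual left
        res = linprog(-benefit,                      # minimize next-year population = maximize avoided growth
                      A_ub=[cost], b_ub=[budget],
                      bounds=list(zip([0.0] * 3, n)), method="highs")
        print(res.x)                                 # here all effort goes to the adult class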

  11. Accommodation of practical constraints by a linear programming jet select. [for Space Shuttle

    NASA Technical Reports Server (NTRS)

    Bergmann, E.; Weiler, P.

    1983-01-01

    An experimental spacecraft control system will be incorporated into the Space Shuttle flight software and exercised during a forthcoming mission to evaluate its performance and handling qualities. The control system incorporates a 'phase space' control law to generate rate change requests and a linear programming jet select to compute jet firings. Posed as a linear programming problem, jet selection must represent the rate change request as a linear combination of jet acceleration vectors where the coefficients are the jet firing times, while minimizing the fuel expended in satisfying that request. This problem is solved in real time using a revised Simplex algorithm. In order to implement the jet selection algorithm in the Shuttle flight control computer, it was modified to accommodate certain practical features of the Shuttle such as limited computer throughput, lengthy firing times, and a large number of control jets. To the authors' knowledge, this is the first such application of linear programming. It was made possible by careful consideration of the jet selection problem in terms of the properties of linear programming and the Simplex algorithm. These modifications to the jet select algorithm may be useful for the design of reaction controlled spacecraft.
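
    Schematically (with invented jet acceleration vectors, fuel rates, and rate-change request; the flight software solves this with a revised Simplex implementation tailored to the Shuttle), the jet-selection LP can be written as:

        import numpy as np
        from scipy.optimize import linprog

        # columns: angular acceleration produced by each jet while firing (rad/s^2), hypothetical
        A = np.array([[ 1.0, -1.0,  0.2,  0.0],
                      [ 0.0,  0.3, -1.0,  1.0],
                      [ 0.4,  0.0,  0.5, -0.8]])
        fuel_rate = np.array([1.0, 1.0, 1.2, 0.9])   # fuel consumption per second of firing, hypothetical
        dw = np.array([0.05, -0.02, 0.03])           # requested rate change (rad/s)

        # choose nonnegative firing times reproducing the request at minimum fuel
        res = linprog(fuel_rate, A_eq=A, b_eq=dw, bounds=(0, None), method="highs")
        firing_times = res.x                         # seconds each jet should fire
        print(firing_times, res.fun)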

  12. Simple linear and multivariate regression models.

    PubMed

    Rodríguez del Águila, M M; Benítez-Parejo, N

    2011-01-01

    In biomedical research it is common to find problems in which we wish to relate a response variable to one or more variables capable of describing the behaviour of the former variable by means of mathematical models. Regression techniques are used to this effect, in which an equation is determined relating the two variables. While such equations can have different forms, linear equations are the most widely used form and are easy to interpret. The present article describes simple and multiple linear regression models, how they are calculated, and how their applicability assumptions are checked. Illustrative examples are provided, based on the use of the freely accessible R program. Copyright © 2011 SEICAP. Published by Elsevier Espana. All rights reserved.

  13. An object-oriented computational model to study cardiopulmonary hemodynamic interactions in humans.

    PubMed

    Ngo, Chuong; Dahlmanns, Stephan; Vollmer, Thomas; Misgeld, Berno; Leonhardt, Steffen

    2018-06-01

    This work introduces an object-oriented computational model to study cardiopulmonary interactions in humans. Modeling was performed in the object-oriented programming language Matlab Simscape, where model components are connected with each other through physical connections. Constitutive and phenomenological equations of model elements are implemented based on their non-linear pressure-volume or pressure-flow relationship. The model includes more than 30 physiological compartments, which belong either to the cardiovascular or respiratory system. The model considers non-linear behaviors of veins, pulmonary capillaries, collapsible airways, alveoli, and the chest wall. Model parameters were derived based on literature values. Model validation was performed by comparing simulation results with clinical and animal data reported in literature. The model is able to provide quantitative values of alveolar, pleural, interstitial, aortic and ventricular pressures, as well as heart and lung volumes during spontaneous breathing and mechanical ventilation. Results of baseline simulation demonstrate the consistency of the assigned parameters. Simulation results during mechanical ventilation with PEEP trials can be directly compared with animal and clinical data given in literature. Object-oriented programming languages can be used to model interconnected systems including model non-linearities. The model provides a useful tool to investigate cardiopulmonary activity during spontaneous breathing and mechanical ventilation. Copyright © 2018 Elsevier B.V. All rights reserved.

  14. Timber management planning with timber ram and goal programming

    Treesearch

    Richard C. Field

    1978-01-01

    By using goal programming to enhance the linear programming of Timber RAM, multiple decision criteria were incorporated in the timber management planning of a National Forest in the southeastern United States. Combining linear and goal programming capitalizes on the advantages of the two techniques and produces operationally feasible solutions. This enhancement may...
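
    A miniature goal-programming sketch in Python (hypothetical prescriptions, goals, and costs rather than Timber RAM data), showing how deviation variables turn multiple goals into one linear program:

        import numpy as np
        from scipy.optimize import linprog

        # two harvest prescriptions; goals for total volume and total cost
        vol = np.array([30.0, 45.0])                 # volume per acre
        cost = np.array([200.0, 350.0])              # cost per acre
        vol_goal, cost_goal = 3000.0, 25000.0

        # variables: [acres_A, acres_B, vol_under, vol_over, cost_under, cost_over]
        c = np.array([0, 0, 1.0, 0, 0, 1.0])         # penalize volume shortfall and cost overrun
        A_eq = np.array([np.r_[vol,  [1.0, -1.0, 0.0, 0.0]],   # volume + under - over = goal
                         np.r_[cost, [0.0, 0.0, 1.0, -1.0]]])  # cost   + under - over = goal
        b_eq = [vol_goal, cost_goal]
        A_ub = np.array([[1.0, 1.0, 0, 0, 0, 0]])    # at most 100 acres available
        b_ub = [100.0]

        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                      bounds=(0, None), method="highs")
        acres, deviations = res.x[:2], res.x[2:]
        print(acres, deviations)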

  15. A comparison of Heuristic method and Llewellyn’s rules for identification of redundant constraints

    NASA Astrophysics Data System (ADS)

    Estiningsih, Y.; Farikhin; Tjahjana, R. H.

    2018-03-01

    Modelling and solving practical optimization problems are important techniques in linear programming. Redundant constraints are considered for their effects on general linear programming problems. Identifying and removing redundant constraints avoids the unnecessary calculations associated with solving the corresponding linear programming problems. Many approaches have been proposed for identifying redundant constraints. This paper presents a comparison of a heuristic method and Llewellyn's rules for the identification of redundant constraints.
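
    Neither method is spelled out in the abstract; the sketch below instead shows the standard LP-based test that such rules approximate: constraint i of Ax <= b is redundant if maximizing a_i·x subject to the remaining constraints cannot exceed b_i (hypothetical data):

        import numpy as np
        from scipy.optimize import linprog

        # x <= 2, y <= 2, x >= 0, y >= 0, and x + y <= 5 (the last row is redundant by construction)
        A = np.array([[ 1.0,  0.0],
                      [ 0.0,  1.0],
                      [-1.0,  0.0],
                      [ 0.0, -1.0],
                      [ 1.0,  1.0]])
        b = np.array([2.0, 2.0, 0.0, 0.0, 5.0])

        def is_redundant(i):
            keep = np.arange(len(b)) != i
            # maximize a_i . x over the other constraints (linprog minimizes, hence the sign flip)
            res = linprog(-A[i], A_ub=A[keep], b_ub=b[keep],
                          bounds=[(None, None)] * A.shape[1], method="highs")
            return res.status == 0 and -res.fun <= b[i] + 1e-9

        print([i for i in range(len(b)) if is_redundant(i)])   # expected: [4]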

  16. Gstat: a program for geostatistical modelling, prediction and simulation

    NASA Astrophysics Data System (ADS)

    Pebesma, Edzer J.; Wesseling, Cees G.

    1998-01-01

    Gstat is a computer program for variogram modelling, and geostatistical prediction and simulation. It provides a generic implementation of the multivariable linear model with trends modelled as a linear function of coordinate polynomials or of user-defined base functions, and independent or dependent, geostatistically modelled, residuals. Simulation in gstat comprises conditional or unconditional (multi-) Gaussian sequential simulation of point values or block averages, or (multi-) indicator sequential simulation. Besides many of the popular options found in other geostatistical software packages, gstat offers the unique combination of (i) an interactive user interface for modelling variograms and generalized covariances (residual variograms), that uses the device-independent plotting program gnuplot for graphical display, (ii) support for several ascii and binary data and map file formats for input and output, (iii) a concise, intuitive and flexible command language, (iv) user customization of program defaults, (v) no built-in limits, and (vi) free, portable ANSI-C source code. This paper describes the class of problems gstat can solve, and addresses aspects of efficiency and implementation, managing geostatistical projects, and relevant technical details.

  17. Linear feature extraction from radar imagery: SBIR (Small Business Innovative Research), phase 2, option 2

    NASA Astrophysics Data System (ADS)

    Milgram, David L.; Kahn, Philip; Conner, Gary D.; Lawton, Daryl T.

    1988-12-01

    The goal of this effort is to develop and demonstrate prototype processing capabilities for a knowledge-based system to automatically extract and analyze features from Synthetic Aperture Radar (SAR) imagery. This effort constitutes Phase 2 funding through the Defense Small Business Innovative Research (SBIR) Program. Previous work examined the feasibility of and technology issues involved in the development of an automated linear feature extraction system. This final report documents this examination and the technologies involved in automating this image understanding task. In particular, it reports on a major software delivery containing an image processing algorithmic base, a perceptual structures manipulation package, a preliminary hypothesis management framework and an enhanced user interface.

  18. Effect of a fall prevention program on balance maintenance using a quasi-experimental design in real-world settings.

    PubMed

    Robitaille, Yvonne; Fournier, Michel; Laforest, Sophie; Gauvin, Lise; Filiatrault, Johanne; Corriveau, Hélène

    2012-08-01

    To examine the effect of a fall prevention program offered under real-world conditions on balance maintenance several months after the program. To explore the program's impact on falls. A quasi-experimental study was conducted among community-dwelling seniors, with pre- and postintervention measures of balance performance and self-reported falls. Ten community-based organizations offered the intervention (98 participants) and 7 recruited participants to the study's control arm (102 participants). An earlier study examined balance immediately after the 12-week program. The present study focuses on the 12-month effect. Linear regression (balance) and negative binomial regression (falls) procedures were performed. During the 12-month study period, experimental participants improved and maintained their balance as reflected by their scores on three performance tests. There was no evidence of an effect on falls. Structured group exercise programs offered in community-based settings can maintain selected components of balance for several months after the program's end.

  19. Spatial Planning of School Districts

    ERIC Educational Resources Information Center

    Maxfield, Donald W.

    1972-01-01

    The development of several plans based on linear programming and geographic methodology will permit school administrators to make better decisions concerning the planning of school districts: where to locate boundaries, how to eliminate overcrowding, where to locate new classrooms, and how to overcome de facto segregation. The primal and dual…

  20. STARS: An Integrated, Multidisciplinary, Finite-Element, Structural, Fluids, Aeroelastic, and Aeroservoelastic Analysis Computer Program

    NASA Technical Reports Server (NTRS)

    Gupta, K. K.

    1997-01-01

    A multidisciplinary, finite element-based, highly graphics-oriented, linear and nonlinear analysis capability that includes such disciplines as structures, heat transfer, linear aerodynamics, computational fluid dynamics, and controls engineering has been achieved by integrating several new modules in the original STARS (STructural Analysis RoutineS) computer program. Each individual analysis module is general-purpose in nature and is effectively integrated to yield aeroelastic and aeroservoelastic solutions of complex engineering problems. Examples of advanced NASA Dryden Flight Research Center projects analyzed by the code in recent years include the X-29A, F-18 High Alpha Research Vehicle/Thrust Vectoring Control System, B-52/Pegasus Generic Hypersonics, National AeroSpace Plane (NASP), SR-71/Hypersonic Launch Vehicle, and High Speed Civil Transport (HSCT) projects. Extensive graphics capabilities exist for convenient model development and postprocessing of analysis results. The program is written in modular form in standard FORTRAN language to run on a variety of computers, such as the IBM RISC/6000, SGI, DEC, Cray, and personal computer; associated graphics codes use OpenGL and IBM/graPHIGS language for color depiction. This program is available from COSMIC, the NASA agency for distribution of computer programs.

  1. A linear programming approach for placement of applicants to academic programs.

    PubMed

    Kassa, Biniyam Asmare

    2013-01-01

    This paper reports a linear programming approach for the placement of applicants to study programs, developed and implemented at the College of Business & Economics, Bahir Dar University, Bahir Dar, Ethiopia. The approach is estimated to significantly streamline the placement decision process at the college by reducing the required man-hours as well as the time it takes to announce placement decisions. Compared to the previous manual system where only one or two placement criteria were considered, the new approach allows the college's management to easily incorporate additional placement criteria, if needed. Comparison of our approach against manually constructed placement decisions based on actual data for the 2012/13 academic year suggested that about 93 percent of the placements from our model concur with the actual placement decisions. For the remaining 7 percent of placements, however, the actual placements made by the manual system display inconsistencies of decisions judged against the very criteria intended to guide placement decisions by the college's program management office. Overall, the new approach proves to be a significant improvement over the manual system in terms of efficiency of the placement process and the quality of placement decisions.
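
    The college's actual criteria are not given, so the Python sketch below shows a generic placement LP of this type (hypothetical applicant scores and program capacities): maximize total applicant-program scores subject to capacity limits, with each applicant placed exactly once; the transportation structure makes the LP relaxation integral.

        import numpy as np
        from scipy.optimize import linprog

        score = np.array([[3.8, 3.1],                # hypothetical scores of 4 applicants for 2 programs
                          [2.9, 3.5],
                          [3.6, 3.6],
                          [2.5, 3.9]])
        cap = np.array([2, 2])                       # program capacities
        n_app, n_prog = score.shape

        c = -score.ravel()                           # maximize total score
        A_eq = np.zeros((n_app, n_app * n_prog))     # each applicant placed exactly once
        for i in range(n_app):
            A_eq[i, i * n_prog:(i + 1) * n_prog] = 1
        A_ub = np.zeros((n_prog, n_app * n_prog))    # program capacity constraints
        for j in range(n_prog):
            A_ub[j, j::n_prog] = 1

        res = linprog(c, A_ub=A_ub, b_ub=cap, A_eq=A_eq, b_eq=np.ones(n_app),
                      bounds=(0, 1), method="highs")
        placement = res.x.reshape(n_app, n_prog).round()
        print(placement)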

  2. A Linear Programming Model to Optimize Various Objective Functions of a Foundation Type State Support Program.

    ERIC Educational Resources Information Center

    Matzke, Orville R.

    The purpose of this study was to formulate a linear programming model to simulate a foundation type support program and to apply this model to a state support program for the public elementary and secondary school districts in the State of Iowa. The model was successful in producing optimal solutions to five objective functions proposed for…

  3. Efficient generation of sum-of-products representations of high-dimensional potential energy surfaces based on multimode expansions

    NASA Astrophysics Data System (ADS)

    Ziegler, Benjamin; Rauhut, Guntram

    2016-03-01

    The transformation of multi-dimensional potential energy surfaces (PESs) from a grid-based multimode representation to an analytical one is a standard procedure in quantum chemical programs. Within the framework of linear least squares fitting, a simple and highly efficient algorithm is presented, which relies on a direct product representation of the PES and a repeated use of Kronecker products. It shows the same scalings in computational cost and memory requirements as the potfit approach. In comparison to customary linear least squares fitting algorithms, this corresponds to a speed-up and memory saving by several orders of magnitude. Different fitting bases are tested, namely, polynomials, B-splines, and distributed Gaussians. Benchmark calculations are provided for the PESs of a set of small molecules.
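
    As a schematic illustration of fitting a direct-product (sum-of-products) basis to a gridded surface by linear least squares via a Kronecker product (a naive dense version for a small invented 2D surface; the paper's algorithm avoids forming the product explicitly and scales far better):

        import numpy as np

        q1 = np.linspace(-1, 1, 20)                  # grid along mode 1
        q2 = np.linspace(-1, 1, 25)                  # grid along mode 2
        V = np.add.outer(q1 ** 2, q2 ** 2) + 0.3 * np.outer(q1, q2 ** 3)  # toy surface

        B1 = np.vander(q1, 4, increasing=True)       # 1D polynomial bases (degree <= 3)
        B2 = np.vander(q2, 4, increasing=True)
        A = np.kron(B1, B2)                          # direct-product design matrix
        coef, *_ = np.linalg.lstsq(A, V.ravel(), rcond=None)
        V_fit = (A @ coef).reshape(V.shape)
        print(np.max(np.abs(V_fit - V)))             # tiny residual for this polynomial surface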

  4. Efficient generation of sum-of-products representations of high-dimensional potential energy surfaces based on multimode expansions.

    PubMed

    Ziegler, Benjamin; Rauhut, Guntram

    2016-03-21

    The transformation of multi-dimensional potential energy surfaces (PESs) from a grid-based multimode representation to an analytical one is a standard procedure in quantum chemical programs. Within the framework of linear least squares fitting, a simple and highly efficient algorithm is presented, which relies on a direct product representation of the PES and a repeated use of Kronecker products. It shows the same scalings in computational cost and memory requirements as the potfit approach. In comparison to customary linear least squares fitting algorithms, this corresponds to a speed-up and memory saving by several orders of magnitude. Different fitting bases are tested, namely, polynomials, B-splines, and distributed Gaussians. Benchmark calculations are provided for the PESs of a set of small molecules.

  5. Monthly reservoir inflow forecasting using a new hybrid SARIMA genetic programming approach

    NASA Astrophysics Data System (ADS)

    Moeeni, Hamid; Bonakdari, Hossein; Ebtehaj, Isa

    2017-03-01

    Forecasting reservoir inflow is one of the most important components of water resources and hydroelectric systems operation management. Seasonal autoregressive integrated moving average (SARIMA) models have been frequently used for predicting river flow. SARIMA models are linear and do not consider the random component of statistical data. To overcome this shortcoming, monthly inflow is predicted in this study based on a combination of seasonal autoregressive integrated moving average (SARIMA) and gene expression programming (GEP) models, which is a new hybrid method (SARIMA-GEP). To this end, a four-step process is employed. First, the monthly inflow datasets are pre-processed. Second, the datasets are modelled linearly with SARIMA and in the third stage, the non-linearity of the residual series caused by linear modelling is evaluated. After confirming the non-linearity, the residuals are modelled in the fourth step using a gene expression programming (GEP) method. The proposed hybrid model is employed to predict the monthly inflow to the Jamishan Dam in west Iran. Thirty years' worth of site measurements of monthly reservoir dam inflow with extreme seasonal variations are used. The results of this hybrid model (SARIMA-GEP) are compared with SARIMA, GEP, artificial neural network (ANN) and SARIMA-ANN models. The results indicate that the SARIMA-GEP model (R² = 78.8, VAF = 78.8, RMSE = 0.89, MAPE = 43.4, CRM = 0.053) outperforms SARIMA and GEP, and that SARIMA-ANN (R² = 68.3, VAF = 66.4, RMSE = 1.12, MAPE = 56.6, CRM = 0.032) displays better performance than the SARIMA and ANN models. A comparison of the two hybrid models indicates the superiority of SARIMA-GEP over the SARIMA-ANN model.

  6. Resource allocation in shared spectrum access communications for operators with diverse service requirements

    NASA Astrophysics Data System (ADS)

    Kibria, Mirza Golam; Villardi, Gabriel Porto; Ishizu, Kentaro; Kojima, Fumihide; Yano, Hiroyuki

    2016-12-01

    In this paper, we study inter-operator spectrum sharing and intra-operator resource allocation in shared spectrum access communication systems and propose efficient dynamic solutions to address both inter-operator and intra-operator resource allocation optimization problems. For inter-operator spectrum sharing, we present two competent approaches, namely subcarrier-gain-based sharing and fragmentation-based sharing, which carry out fair and flexible allocation of the available shareable spectrum among the operators subject to certain well-defined sharing rules, traffic demands, and channel propagation characteristics. The subcarrier-gain-based spectrum sharing scheme has been found to be more efficient in terms of achieved throughput. However, the fragmentation-based sharing is more attractive in terms of computational complexity. For intra-operator resource allocation, we consider a resource allocation problem with users' dissimilar service requirements, where the operator simultaneously supports users with delay-constrained and non-delay-constrained service requirements. This optimization problem is a mixed-integer non-linear programming problem and non-convex, which is computationally very expensive, and the complexity grows exponentially with the number of integer variables. We propose a less complex and efficient suboptimal solution based on exact linearization, linear approximation, and convexification techniques for the non-linear and/or non-convex objective functions and constraints. An extensive simulation-based performance analysis has been carried out to validate the efficiency of the proposed solution.

  7. Calculation of the distributed loads on the blades of individual multiblade propellers in axial flow using linear and nonlinear lifting surface theories

    NASA Technical Reports Server (NTRS)

    Pesetskaya, N. N.; Timofeev, I. YA.; Shipilov, S. D.

    1988-01-01

    In recent years much attention has been given to the development of methods and programs for the calculation of the aerodynamic characteristics of multiblade, saber-shaped air propellers. Most existing methods are based on the theory of lifting lines. Elsewhere, the theory of a lifting surface is used to calculate screw and lifting propellers. In this work, the method of discrete vortices is described for the calculation of the aerodynamic characteristics of propellers using the linear and nonlinear theories of lifting surfaces.

  8. On a program manifold's stability of one contour automatic control systems

    NASA Astrophysics Data System (ADS)

    Zumatov, S. S.

    2017-12-01

    A methodology for analyzing the stability of one-contour automatic control systems with feedback in the presence of non-linearities is expounded. The methodology is based on the use of the simplest mathematical models of nonlinear controllable systems. The stability of program manifolds of one-contour automatic control systems is investigated. Sufficient conditions for the absolute stability of a program manifold of one-contour automatic control systems are obtained. The Hurwitz angle of absolute stability is determined. Sufficient conditions for the absolute stability of a program manifold for control of an aircraft's course in autopilot mode are obtained by means of Lyapunov's second method.

  9. WE-FG-207B-02: Material Reconstruction for Spectral Computed Tomography with Detector Response Function

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, J; Gao, H

    2016-06-15

    Purpose: Different from conventional computed tomography (CT), spectral CT based on energy-resolved photon-counting detectors is able to provide unprecedented material composition information. However, an important missing piece for accurate spectral CT is to incorporate the detector response function (DRF), which is distorted by factors such as pulse pileup and charge-sharing. In this work, we propose material reconstruction methods for spectral CT with the DRF. Methods: The polyenergetic X-ray forward model takes the DRF into account for accurate material reconstruction. Two image reconstruction methods are proposed: a direct method based on the nonlinear data fidelity from the DRF-based forward model, and a linear-data-fidelity-based method that relies on spectral rebinning so that the corresponding DRF matrix is invertible. The image reconstruction problem is then regularized with an isotropic TV term and solved by the alternating direction method of multipliers. Results: The simulation results suggest that the proposed methods provided more accurate material compositions than the standard method without the DRF. Moreover, the proposed method with linear data fidelity had improved reconstruction quality compared with the proposed method with nonlinear data fidelity. Conclusion: We have proposed material reconstruction methods for spectral CT with the DRF, which provided more accurate material compositions than the standard methods without the DRF. Moreover, the proposed method with linear data fidelity had improved reconstruction quality compared with the proposed method with nonlinear data fidelity. Jiulong Liu and Hao Gao were partially supported by the NSFC (#11405105), the 973 Program (#2015CB856000), and the Shanghai Pujiang Talent Program (#14PJ1404500).

  10. Conic Sampling: An Efficient Method for Solving Linear and Quadratic Programming by Randomly Linking Constraints within the Interior

    PubMed Central

    Serang, Oliver

    2012-01-01

    Linear programming (LP) problems are commonly used in analysis and resource allocation, frequently surfacing as approximations to more difficult problems. Existing approaches to LP have been dominated by a small group of methods, and randomized algorithms have not enjoyed popularity in practice. This paper introduces a novel randomized method of solving LP problems by moving along the facets and within the interior of the polytope along rays randomly sampled from the polyhedral cones defined by the bounding constraints. This conic sampling method is then applied to randomly sampled LPs, and its runtime performance is shown to compare favorably to the simplex and primal affine-scaling algorithms, especially on polytopes with certain characteristics. The conic sampling method is then adapted and applied to solve a certain quadratic program, which computes a projection onto a polytope; the proposed method is shown to outperform the proprietary software Mathematica on large, sparse QP problems constructed from mass spectrometry-based proteomics. PMID:22952741

  11. Development and application of a program to calculate transonic flow around an oscillating three-dimensional wing using finite difference procedures

    NASA Technical Reports Server (NTRS)

    Weatherill, Warren H.; Ehlers, F. Edward

    1989-01-01

    A finite difference method for solving the unsteady transonic flow about harmonically oscillating wings is investigated. The procedure is based on separating the velocity potential into steady and unsteady parts and linearizing the resulting unsteady differential equation for small disturbances. The differential equation for the unsteady potential is linear with spatially varying coefficients and with the time variable eliminated by assuming harmonic motion. Difference equations are derived for harmonic transonic flow to include a coordinate transformation for swept and tapered planforms. A pilot program is developed for three-dimensional planar lifting surface configurations (including thickness) for the CRAY-XMP at Boeing Commercial Airplanes and for the CYBER VPS-32 at the NASA Langley Research Center. An investigation is made of the effect of the location of the outer boundaries on accuracy for very small reduced frequencies. Finally, the pilot program is applied to the flutter analysis of a rectangular wing.

  12. Interactive graphical system for small-angle scattering analysis of polydisperse systems

    NASA Astrophysics Data System (ADS)

    Konarev, P. V.; Volkov, V. V.; Svergun, D. I.

    2016-09-01

    A program suite for one-dimensional small-angle scattering analysis of polydisperse systems and multiple data sets is presented. The main program, POLYSAS, has a menu-driven graphical user interface calling computational modules from ATSAS package to perform data treatment and analysis. The graphical menu interface allows one to process multiple (time, concentration or temperature-dependent) data sets and interactively change the parameters for the data modelling using sliders. The graphical representation of the data is done via the Winteracter-based program SASPLOT. The package is designed for the analysis of polydisperse systems and mixtures, and permits one to obtain size distributions and evaluate the volume fractions of the components using linear and non-linear fitting algorithms as well as model-independent singular value decomposition. The use of the POLYSAS package is illustrated by the recent examples of its application to study concentration-dependent oligomeric states of proteins and time kinetics of polymer micelles for anticancer drug delivery.

  13. Drag reduction of a car model by linear genetic programming control

    NASA Astrophysics Data System (ADS)

    Li, Ruiying; Noack, Bernd R.; Cordier, Laurent; Borée, Jacques; Harambat, Fabien

    2017-08-01

    We investigate open- and closed-loop active control for aerodynamic drag reduction of a car model. Turbulent flow around a blunt-edged Ahmed body is examined at Re_H ≈ 3 × 10^5 based on body height. The actuation is performed with pulsed jets at all trailing edges (multiple inputs) combined with a Coanda deflection surface. The flow is monitored with 16 pressure sensors distributed at the rear side (multiple outputs). We apply a recently developed model-free control strategy building on genetic programming in Dracopoulos and Kent (Neural Comput Appl 6:214-228, 1997) and Gautier et al. (J Fluid Mech 770:424-441, 2015). The optimized control laws comprise periodic forcing, multi-frequency forcing and sensor-based feedback including also time-history information feedback and combinations thereof. A key enabler is linear genetic programming (LGP) as a powerful regression technique for optimizing the multiple-input multiple-output control laws. The proposed LGP control can select the best open- or closed-loop control in an unsupervised manner. Approximately 33% base pressure recovery associated with 22% drag reduction is achieved in all considered classes of control laws. Intriguingly, the feedback actuation emulates periodic high-frequency forcing. In addition, the control automatically identified the only sensor which listens to high-frequency flow components with a good signal-to-noise ratio. Our control strategy is, in principle, applicable to all experiments with multiple actuators and sensors.

  14. 40 HP Electro-Mechanical Actuator

    NASA Technical Reports Server (NTRS)

    Fulmer, Chris

    1996-01-01

    This report summarizes the work performed on the 40 HP electro-mechanical actuator (EMA) system developed on NASA contract NAS3-25799 for the NASA National Launch System and Electrical Actuation (ELA) Technology Bridging Programs. The system was designed to demonstrate the capability of large, high-power linear ELAs for applications such as Thrust Vector Control (TVC) on rocket engines. It consists of a motor controller, high frequency power source, drive electronics and a linear actuator. The power source is a 25 kVA, 20 kHz Mapham inverter. The drive electronics are based on the pulse population modulation concept and operate at a nominal frequency of 40 kHz. The induction motor is a specially designed high speed, low inertia motor capable of a peak of 68 HP. The actuator was originally designed by MOOG Aerospace under an internal R & D program to meet Space Shuttle Main Engine (SSME) TVC requirements. The design was modified to meet this program's linear rate specification of 7.4 inches/second. The motor and driver were tested on a dynamometer at the Martin Marietta Space Systems facility. System frequency response and step response tests were conducted at the Marshall Space Flight Center facility. A complete description of the system and all test results can be found in the body of the report.

  15. The Many Ways Data Must Flow.

    ERIC Educational Resources Information Center

    La Brecque, Mort

    1984-01-01

    To break the bottleneck inherent in today's linear computer architectures, parallel schemes (which allow computers to perform multiple tasks at one time) are being devised. Several of these schemes are described. Dataflow devices, parallel number-crunchers, programming languages, and a device based on a neurological model are among the areas…

  16. Accelerator Generation and Thermal Separation (AGATS) of Technetium-99m

    ScienceCinema

    Grover, Blaine

    2018-05-01

    Accelerator Generation and Thermal Separation (AGATS) of Technetium-99m is a linear electron accelerator-based technology for producing medical imaging radioisotopes from a separation process that heats, vaporizes and condenses the desired radioisotope. You can learn more about INL's education programs at http://www.facebook.com/idahonationallaboratory.

  17. Forecasting Pell Program Applications Using Structural Aggregate Models.

    ERIC Educational Resources Information Center

    Cavin, Edward S.

    1995-01-01

    Demand for Pell Grant financial aid has become difficult to predict when using the current microsimulation model. This paper proposes an alternative model that uses aggregate data (based on individuals' microlevel decisions and macrodata on family incomes, college costs, and opportunity wages) and avoids some limitations of simple linear models.…

  18. Estimating School Efficiency: A Comparison of Methods Using Simulated Data.

    ERIC Educational Resources Information Center

    Bifulco, Robert; Bretschneider, Stuart

    2001-01-01

    Uses simulated data to assess the adequacy of two econometric and linear-programming techniques (data-envelopment analysis and corrected ordinary least squares) for measuring performance-based school reform. In complex data sets (simulated to contain measurement error and endogeneity), these methods are inadequate efficiency measures. (Contains 40…

  19. A note on probabilistic models over strings: the linear algebra approach.

    PubMed

    Bouchard-Côté, Alexandre

    2013-12-01

    Probabilistic models over strings have played a key role in developing methods that take into consideration indels as phylogenetically informative events. There is an extensive literature on using automata and transducers on phylogenies to do inference on these probabilistic models, in which an important theoretical question is the complexity of computing the normalization of a class of string-valued graphical models. This question has been investigated using tools from combinatorics, dynamic programming, and graph theory, and has practical applications in Bayesian phylogenetics. In this work, we revisit this theoretical question from a different point of view, based on linear algebra. The main contribution is a set of results based on this linear algebra view that facilitate the analysis and design of inference algorithms on string-valued graphical models. As an illustration, we use this method to give a new elementary proof of a known result on the complexity of inference on the "TKF91" model, a well-known probabilistic model over strings. Compared to previous work, our proving method is easier to extend to other models, since it relies on a novel weak condition, triangular transducers, which is easy to establish in practice. The linear algebra view provides a concise way of describing transducer algorithms and their compositions, opens the possibility of transferring fast linear algebra libraries (for example, based on GPUs), as well as low rank matrix approximation methods, to string-valued inference problems.

  20. Portfolio optimization using fuzzy linear programming

    NASA Astrophysics Data System (ADS)

    Pandit, Purnima K.

    2013-09-01

    Portfolio Optimization (PO) is a problem in finance in which an investor tries to maximize return and minimize risk by carefully choosing different assets. Expected return and risk are the most important parameters with regard to optimal portfolios. In its simple form, PO can be modeled as a quadratic programming problem, which can be put into an equivalent linear form. PO problems with fuzzy parameters can be solved as multi-objective fuzzy linear programming problems. In this paper we give the solution to such problems with an illustrative example.
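
    For a crisp (non-fuzzy) illustration of putting a portfolio problem into linear form, the mean-absolute-deviation formulation below uses auxiliary deviation variables (synthetic returns and an arbitrary target; this is not the paper's fuzzy model):

        import numpy as np
        from scipy.optimize import linprog

        rng = np.random.default_rng(0)
        T, n = 60, 4
        R = 0.01 + 0.05 * rng.standard_normal((T, n))   # hypothetical monthly asset returns
        mu = R.mean(axis=0)
        target = mu.mean()                              # required expected portfolio return

        # variables: [w_1..w_n, d_1..d_T]; minimize the mean absolute deviation
        c = np.concatenate([np.zeros(n), np.ones(T) / T])
        Dev = R - mu                                    # deviations from expected returns
        A_ub = np.block([[ Dev, -np.eye(T)],            # d_t >=  (R_t - mu) . w
                         [-Dev, -np.eye(T)],            # d_t >= -(R_t - mu) . w
                         [np.concatenate([-mu, np.zeros(T)])[None, :]]])  # mu . w >= target
        b_ub = np.concatenate([np.zeros(2 * T), [-target]])
        A_eq = np.concatenate([np.ones(n), np.zeros(T)])[None, :]         # weights sum to 1

        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                      bounds=(0, None), method="highs")
        weights = res.x[:n]
        print(weights, res.fun)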

  1. OPR-PPR, a Computer Program for Assessing Data Importance to Model Predictions Using Linear Statistics

    USGS Publications Warehouse

    Tonkin, Matthew J.; Tiedeman, Claire; Ely, D. Matthew; Hill, Mary C.

    2007-01-01

    The OPR-PPR program calculates the Observation-Prediction (OPR) and Parameter-Prediction (PPR) statistics that can be used to evaluate the relative importance of various kinds of data to simulated predictions. The data considered fall into three categories: (1) existing observations, (2) potential observations, and (3) potential information about parameters. The first two are addressed by the OPR statistic; the third is addressed by the PPR statistic. The statistics are based on linear theory and measure the leverage of the data, which depends on the location, the type, and possibly the time of the data being considered. For example, in a ground-water system the type of data might be a head measurement at a particular location and time. As a measure of leverage, the statistics do not take into account the value of the measurement. As linear measures, the OPR and PPR statistics require minimal computational effort once sensitivities have been calculated. Sensitivities need to be calculated for only one set of parameter values; commonly these are the values estimated through model calibration. OPR-PPR can calculate the OPR and PPR statistics for any mathematical model that produces the necessary OPR-PPR input files. In this report, OPR-PPR capabilities are presented in the context of using the ground-water model MODFLOW-2000 and the universal inverse program UCODE_2005. The method used to calculate the OPR and PPR statistics is based on the linear equation for prediction standard deviation. Using sensitivities and other information, OPR-PPR calculates (a) the percent increase in the prediction standard deviation that results when one or more existing observations are omitted from the calibration data set; (b) the percent decrease in the prediction standard deviation that results when one or more potential observations are added to the calibration data set; or (c) the percent decrease in the prediction standard deviation that results when potential information on one or more parameters is added.

  2. Linear Programming for Vocational Education Planning. Interim Report.

    ERIC Educational Resources Information Center

    Young, Robert C.; And Others

    The purpose of the paper is to define for potential users of vocational education management information systems a quantitative analysis technique and its utilization to facilitate more effective planning of vocational education programs. Defining linear programming (LP) as a management technique used to solve complex resource allocation problems…

  3. Aerodynamic preliminary analysis system. Part 1: Theory. [linearized potential theory

    NASA Technical Reports Server (NTRS)

    Bonner, E.; Clever, W.; Dunn, K.

    1978-01-01

    A comprehensive aerodynamic analysis program based on linearized potential theory is described. The solution treats thickness and attitude problems at subsonic and supersonic speeds. Three dimensional configurations with or without jet flaps having multiple non-planar surfaces of arbitrary planform and open or closed slender bodies of non-circular contour may be analyzed. Longitudinal and lateral-directional static and rotary derivative solutions may be generated. The analysis was implemented on a time sharing system in conjunction with an input tablet digitizer and an interactive graphics input/output display and editing terminal to maximize its responsiveness to the preliminary analysis problem. Nominal case computation time of 45 CPU seconds on the CDC 175 for a 200 panel simulation indicates the program provides an efficient analysis for systematically performing various aerodynamic configuration tradeoff and evaluation studies.

  4. A Brownian dynamics program for the simulation of linear and circular DNA and other wormlike chain polyelectrolytes.

    PubMed Central

    Klenin, K; Merlitz, H; Langowski, J

    1998-01-01

    For the interpretation of solution structural and dynamic data of linear and circular DNA molecules in the kb range, and for the prediction of the effect of local structural changes on the global conformation of such DNAs, we have developed an efficient and easy way to set up a program based on a second-order explicit Brownian dynamics algorithm. The DNA is modeled by a chain of rigid segments interacting through harmonic spring potentials for bending, torsion, and stretching. The electrostatics are handled using precalculated energy tables for the interactions between DNA segments as a function of relative orientation and distance. Hydrodynamic interactions are treated using the Rotne-Prager tensor. While maintaining acceptable precision, the simulation can be accelerated by recalculating this tensor only once in a certain number of steps. PMID:9533691

  5. Can hydro-economic river basin models simulate water shadow prices under asymmetric access?

    PubMed

    Kuhn, A; Britz, W

    2012-01-01

    Hydro-economic river basin models (HERBM) based on mathematical programming are conventionally formulated as explicit 'aggregate optimization' problems with a single, aggregate objective function. Often unintended, this format implicitly assumes that decisions on water allocation are made via central planning or functioning markets such as to maximize social welfare. In the absence of perfect water markets, however, individually optimal decisions by water users will differ from the social optimum. Classical aggregate HERBMs cannot simulate that situation and thus might be unable to describe existing institutions governing access to water and might produce biased results for alternative ones. We propose a new solution format for HERBMs, based on the format of the mixed complementarity problem (MCP), where modified shadow price relations express spatial externalities resulting from asymmetric access to water use. This new problem format, as opposed to commonly used linear (LP) or non-linear programming (NLP) approaches, enables the simultaneous simulation of numerous 'independent optimization' decisions by multiple water users while maintaining physical interdependences based on water use and flow in the river basin. We show that the alternative problem format allows the formulation of HERBMs that yield more realistic results when comparing different water management institutions.

  6. PDB to AMPL Conversion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anna Johnston, SNL 9215

    2002-09-01

    PDB to AMPL Conversion was written to convert Protein Data Bank files to AMPL files. The protein data banks on the internet contain a wealth of information about the structure and makeup of proteins. Each file contains information derived from one or more experiments and contains information on how the experiment was performed, the amino acid building blocks of each chain, and often the three-dimensional structure of the protein extracted from the experiments. The way a protein folds determines much about its function. Thus, studying the three-dimensional structure of the protein is of great interest. Analysing the contact maps is one way to examine the structure. A contact map is a graph which has a linear backbone of amino acids for nodes (i.e., adjacent amino acids are always connected) and edges between non-adjacent nodes if they are close enough to be considered in contact. If the graphs are similar, then the folds of the proteins and their functions should also be similar. This software extracts the contact maps from a Protein Data Bank file and puts them into AMPL data format. This format is designed for use in AMPL, a modeling language for formulating linear and other mathematical programming problems.
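
    A minimal Python sketch of extracting a C-alpha contact map from PDB ATOM records (an illustration of the idea only, not the described conversion tool; the 8 Å cutoff and the file name are arbitrary, and the AMPL output step is omitted):

        import numpy as np

        def contact_map_from_pdb(path, cutoff=8.0):
            """Build a residue contact map from the C-alpha atoms of a PDB file."""
            coords = {}
            with open(path) as fh:
                for line in fh:
                    if line.startswith("ATOM") and line[12:16].strip() == "CA":
                        res = int(line[22:26])                      # residue sequence number
                        coords[res] = np.array([float(line[30:38]),
                                                float(line[38:46]),
                                                float(line[46:54])])
            residues = sorted(coords)
            xyz = np.array([coords[r] for r in residues])
            dist = np.linalg.norm(xyz[:, None, :] - xyz[None, :, :], axis=-1)
            return residues, dist < cutoff                          # boolean adjacency matrix

        # residues, cmap = contact_map_from_pdb("example.pdb")      # hypothetical file name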

  7. Quantifying circular RNA expression from RNA-seq data using model-based framework.

    PubMed

    Li, Musheng; Xie, Xueying; Zhou, Jing; Sheng, Mengying; Yin, Xiaofeng; Ko, Eun-A; Zhou, Tong; Gu, Wanjun

    2017-07-15

    Circular RNAs (circRNAs) are a class of non-coding RNAs that are widely expressed in various cell lines and tissues of many organisms. Although the exact function of many circRNAs is largely unknown, the cell type- and tissue-specific circRNA expression has implicated their crucial functions in many biological processes. Hence, the accurate quantification of circRNA expression from high-throughput RNA-seq data is becoming important. Although many model-based methods have been developed to quantify linear RNA expression from RNA-seq data, these methods are not applicable to circRNA quantification. Here, we proposed a novel strategy that transforms circular transcripts to pseudo-linear transcripts and estimates the expression values of both circular and linear transcripts using an existing model-based algorithm, Sailfish. The new strategy can accurately estimate transcript expression of both linear and circular transcripts from RNA-seq data. Several factors, such as gene length, amount of expression and the ratio of circular to linear transcripts, had impacts on the quantification performance for circular transcripts. In comparison to count-based tools, the new computational framework had superior performance in estimating the amount of circRNA expression from both simulated and real ribosomal-RNA-depleted (rRNA-depleted) RNA-seq datasets. On the other hand, the consideration of circular transcripts in expression quantification from rRNA-depleted RNA-seq data showed substantially increased accuracy of linear transcript expression. Our proposed strategy was implemented in a program named Sailfish-cir. Sailfish-cir is freely available at https://github.com/zerodel/Sailfish-cir . tongz@medicine.nevada.edu or wanjun.gu@gmail.com. Supplementary data are available at Bioinformatics online. © The Author (2017). Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com

  8. Guided Discovery, Visualization, and Technology Applied to the New Curriculum for Secondary Mathematics.

    ERIC Educational Resources Information Center

    Smith, Karan B.

    1996-01-01

    Presents activities which highlight major concepts of linear programming. Demonstrates how technology allows students to solve linear programming problems using exploration prior to learning algorithmic methods. (DDR)

  9. Outcome based state budget allocation for diabetes prevention programs using multi-criteria optimization with robust weights.

    PubMed

    Mehrotra, Sanjay; Kim, Kibaek

    2011-12-01

    We consider the problem of outcome-based budget allocation to chronic disease prevention programs across the United States (US) to achieve greater geographical healthcare equity. We use the Diabetes Prevention and Control Programs (DPCP) of the Centers for Disease Control and Prevention (CDC) as an example. We present a multi-criteria robust weighted-sum model for such multi-criteria decision making in a group decision setting. Principal component analysis and an inverse linear programming technique are presented and used to study the actual 2009 budget allocation by the CDC. Our results show that the CDC budget allocation process for the DPCPs is not likely model based. In our empirical study, the relative weights for different prevalence and comorbidity factors and the corresponding budgets obtained under different weight regions are discussed. Parametric analysis suggests that money should be allocated to states to promote diabetes education and to increase patient-healthcare provider interactions to reduce disparity across the US.

  10. One cutting plane algorithm using auxiliary functions

    NASA Astrophysics Data System (ADS)

    Zabotin, I. Ya; Kazaeva, K. E.

    2016-11-01

    We propose an algorithm from the class of cutting-plane methods for solving a convex programming problem. The algorithm is characterized by the construction of approximations using auxiliary functions instead of the objective function. Each auxiliary function is based on an exterior penalty function. In the proposed algorithm, the admissible set and the epigraph of each auxiliary function are embedded in polyhedral sets; consequently, the iteration points are found by solving linear programming problems. We discuss the implementation of the algorithm and prove its convergence.

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Xiaohu; Shi, Di; Wang, Zhiwei

    Shunt FACTS devices, such as the Static Var Compensator (SVC), are capable of providing local reactive power compensation. They are widely used in the network to reduce the real power loss and improve the voltage profile. This paper proposes a planning model based on mixed-integer conic programming (MICP) to optimally allocate SVCs in the transmission network considering load uncertainty. The load uncertainties are represented by a number of scenarios. Reformulation and linearization techniques are utilized to transform the original non-convex model into a convex second-order cone programming (SOCP) model. Numerical case studies based on the IEEE 30-bus system demonstrate the effectiveness of the proposed planning model.

  12. High-reliable linear cryocoolers and miniaturization developments at Thales Cryogenics

    NASA Astrophysics Data System (ADS)

    van der Weijden, H.; Benschop, A.; v. D. Groep, W.; Willems, D.; Mullie, J.

    2010-04-01

    Thales Cryogenics (TCBV) has an extensive background in delivering long-life cryogenic coolers for military, civil and space programs. This cooler range is based on two main compressor concepts: close tolerance contact seals (UP) and flexure bearing (LSF/LPT) coolers. The main difference between these products is the Mean Time To Failure (MTTF). In this paper, an overview of lifetime parameters is given, together with their impact on the different cooler types. Test results from both the installed base and the Thales Cryogenics test lab are also presented. New developments at Thales Cryogenics regarding compact long-lifetime coolers are outlined. In addition, new developments in miniature linear cooler drive electronics with high temperature stability and power density are described.

  13. Linear feature extraction from radar imagery: SBIR (Small Business Innovative Research) phase 2, option 1

    NASA Astrophysics Data System (ADS)

    Conner, Gary D.; Milgram, David L.; Lawton, Daryl T.; McConnell, Christopher C.

    1988-04-01

    The goal of this effort is to develop and demonstrate prototype processing capabilities for a knowledge-based system to automatically extract and analyze linear features from synthetic aperture radar (SAR) imagery. This effort constitutes Phase 2 funding through the Defense Small Business Innovative Research (SBIR) Program. Previous work examined the feasibility of and technology issues involved in the development of an automated linear feature extraction system. This Option 1 Final Report documents this examination and the technologies involved in automating this image understanding task. In particular, it reports on a major software delivery containing an image processing algorithmic base, a perceptual structures manipulation package, a preliminary hypothesis management framework and an enhanced user interface.

  14. DNA Sequencing Using capillary Electrophoresis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dr. Barry Karger

    2011-05-09

    The overall goal of this program was to develop capillary electrophoresis as the tool to be used to sequence the Human Genome for the first time. Our program was part of the Human Genome Project. In this work, we were highly successful, and the replaceable polymer we developed, linear polyacrylamide, was used by the DOE sequencing lab in California to sequence a significant portion of the human genome using the MegaBase multiple capillary array electrophoresis instrument. In this final report, we summarize our efforts and success. We began our work by separating double-stranded oligonucleotides by capillary electrophoresis using cross-linked polyacrylamide gels in fused silica capillaries. This work showed the potential of the methodology. However, preparation of such cross-linked gel capillaries was difficult, with poor reproducibility, and even more important, the columns were not very stable. We improved stability by using non-cross-linked linear polyacrylamide. Here, the entangled linear chains could move when osmotic pressure (e.g. sample injection) was imposed on the polymer matrix. This relaxation of the polymer dissipated the stress in the column. Our next advance was to use significantly lower concentrations of the linear polyacrylamide so that the polymer could be automatically blown out after each run and replaced with fresh linear polymer solution. In this way, a new column was available for each analytical run. Finally, while testing many linear polymers, we selected linear polyacrylamide as the best matrix, as it was the most hydrophilic polymer available. Under our DOE program, we demonstrated initially the success of linear polyacrylamide in separating double-stranded DNA. We note that the method is used even today to assay the purity of double-stranded DNA fragments. Our focus, of course, was on the separation of single-stranded DNA for sequencing purposes. In one paper, we demonstrated the success of our approach in sequencing up to 500 bases. Other application papers of sequencing up to this level were also published in the mid-1990s. A major interest of the sequencing community has always been read length. The longer the sequence read per run, the more efficient the process and the better the ability to read repeat sequences. We therefore devoted a great deal of time to studying the factors influencing read length in capillary electrophoresis, including polymer type and molecular weight, capillary column temperature, applied electric field, etc. In our initial optimization, we were able to demonstrate, for the first time, the sequencing of over 1000 bases with 90% accuracy. The run required 80 minutes for separation. Sequencing of 1000 bases per column was next demonstrated on a multiple capillary instrument. Our studies revealed that linear polyacrylamide produced the longest read lengths because the hydrophilic single-stranded DNA had minimal interaction with the very hydrophilic linear polyacrylamide. Any interaction of the DNA with the polymer would lead to broader peaks and lower read length. Another important parameter was the molecular weight of the linear chains. High molecular weight (> 1 MDa) was important to allow the long single-stranded DNA to reptate through the entangled polymer matrix. In an important paper, we showed an inverse emulsion method to prepare, reproducibly, linear polyacrylamide with an average molecular weight of 9 MDa. This approach was used to produce the polymer for sequencing the human genome.
Another critical factor in the successful use of capillary electrophoresis for sequencing was the sample preparation method. In the Sanger sequencing reaction, high concentrations of salts and dideoxynucleotides remained. Since the sample was introduced to the capillary column by electrokinetic injection, these salt ions would be preferentially injected into the column over the sequencing fragments, thus reducing the signal for longer fragments and hence reducing read length. In two papers, we examined the role of individual components from the sequencing reaction and then developed a protocol to reduce the deleterious salts. We demonstrated a robust method for achieving long read length DNA sequencing. Continuing our advances, we next demonstrated the achievement of over 1000 bases in less than one hour with a base-calling accuracy of between 98 and 99%. In this work, we implemented energy transfer dyes, which allowed for cleaner differentiation of the four dye-labeled terminal nucleotides. In addition, we developed improved base-calling software to help call bases when the separation between peaks was minimal, as occurs at long read lengths. Another critical parameter we studied was column temperature. We demonstrated that read lengths improved as the column temperature was increased from room temperature to 60 °C or 70 °C. The higher temperature relaxed the DNA chains under the influence of the high electric field.

  15. Global optimization algorithm for heat exchanger networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Quesada, I.; Grossmann, I.E.

    This paper deals with the global optimization of heat exchanger networks with fixed topology. It is shown that if linear area cost functions are assumed, as well as arithmetic mean driving force temperature differences in networks with isothermal mixing, the corresponding nonlinear programming (NLP) optimization problem involves linear constraints and a sum of linear fractional functions in the objective, which are nonconvex. A rigorous algorithm is proposed that is based on a convex NLP underestimator involving linear and nonlinear estimators for fractional and bilinear terms, which provide a tight lower bound to the global optimum. This NLP problem is used within a spatial branch and bound method for which branching rules are given. Basic properties of the proposed method are presented, and its application is illustrated with several example problems. The results show that the proposed method requires only a few nodes in the branch and bound search.
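
    Schematically, the nonconvex objective referred to above is a sum of linear fractional terms over a polyhedron; in general form (not the paper's specific area-cost model),

      \[
        \min_{x}\; \sum_{i=1}^{m} \frac{a_i^{\top} x + b_i}{c_i^{\top} x + d_i}
        \quad \text{s.t.} \quad A x \le b, \qquad c_i^{\top} x + d_i > 0 ,
      \]

    where each fractional term is nonconvex even though all constraints are linear; the branch and bound scheme described above bounds each such term (and the bilinear terms it induces) from below with convex estimators.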

  16. A Flexible CUDA LU-based Solver for Small, Batched Linear Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tumeo, Antonino; Gawande, Nitin A.; Villa, Oreste

    This chapter presents the implementation of a batched CUDA solver based on LU factorization for small linear systems. This solver may be used in applications such as reactive flow transport models, which apply the Newton-Raphson technique to linearize and iteratively solve the sets of nonlinear equations that represent the reactions for tens of thousands to millions of physical locations. The implementation exploits somewhat counterintuitive GPGPU programming techniques: it assigns the solution of a matrix (representing a system) to a single CUDA thread, does not exploit shared memory and employs dynamic memory allocation on the GPUs. These techniques enable our implementation to simultaneously solve sets of systems with over 100 equations and to employ LU decomposition with complete pivoting, providing the higher numerical accuracy required by certain applications. Other currently available solutions for batched linear solvers are limited by size and only support partial pivoting, although they may be faster under certain conditions. We discuss the code of our implementation and present a comparison with the other implementations, discussing the various tradeoffs in terms of performance and flexibility. This work will enable developers who need batched linear solvers to choose whichever implementation is more appropriate to the features and the requirements of their applications, and even to implement dynamic switching approaches that can choose the best implementation depending on the input data.

  17. Indirect synthesis of multi-degree of freedom transient systems. [linear programming for a kinematically linear system

    NASA Technical Reports Server (NTRS)

    Pilkey, W. D.; Chen, Y. H.

    1974-01-01

    An indirect synthesis method is used in the efficient optimal design of multi-degree of freedom, multi-design element, nonlinear, transient systems. A limiting performance analysis which requires linear programming for a kinematically linear system is presented. The system is selected using system identification methods such that the designed system responds as closely as possible to the limiting performance. The efficiency is a result of the method avoiding the repetitive systems analyses accompanying other numerical optimization methods.

  18. Impact of the project P.A.T.H.S. In the junior secondary school years: individual growth curve analyses.

    PubMed

    Shek, Daniel T L; Ma, Cecilia M S

    2011-02-03

    The Tier 1 Program of the Project P.A.T.H.S. (Positive Adolescent Training through Holistic Social Programs) is a positive youth development program implemented in school settings utilizing a curricular-based approach. In the third year of the Full Implementation Phase, 19 experimental schools (n = 3,006 students) and 24 control schools (n = 3,727 students) participated in a randomized group trial. Analyses based on linear mixed models via SPSS showed that participants in the experimental schools displayed better positive youth development than did participants in the control schools based on different indicators derived from the Chinese Positive Youth Development Scale, including positive self-identity, prosocial behavior, and general positive youth development attributes. Differences between experimental and control participants were also found when students who joined the Tier 1 Program and perceived the program to be beneficial were employed as participants of the experimental schools. The present findings strongly suggest that the Project P.A.T.H.S. is making an important positive impact for junior secondary school students in Hong Kong.

  19. Global strength assessment in oblique waves of a large gas carrier ship, based on a non-linear iterative method

    NASA Astrophysics Data System (ADS)

    Domnisoru, L.; Modiga, A.; Gasparotti, C.

    2016-08-01

    At the ship design stage, the first step of the hull structural assessment is the longitudinal strength analysis, with head-wave equivalent loads prescribed by the ships' classification societies' rules. This paper presents an enhancement of the longitudinal strength analysis, considering the general case of oblique quasi-static equivalent waves, based on our own non-linear iterative procedure and in-house program. The numerical approach is developed for mono-hull ships, without restrictions on 3D-hull offset lines non-linearities, and involves three interlinked iterative cycles on the floating, pitch and roll trim equilibrium conditions. Besides the ship-wave equilibrium parameters, the wave-induced loads on the ship's girder are obtained. As a numerical study case we have considered a large LPG liquefied petroleum gas carrier. The numerical results for the large LPG carrier are compared with the statistical design values from several ships' classification societies' rules. This study makes it possible to obtain the oblique wave conditions that induce the maximum loads in the large LPG ship's girder. The numerical results of this study point out that the non-linear iterative approach is necessary for the computation of the extreme loads induced by oblique waves, ensuring better accuracy of the large LPG ship's longitudinal strength assessment.

  20. Development of Regional Supply Functions and a Least-Cost Model for Allocating Water Resources in Utah: A Parametric Linear Programming Approach.

    DTIC Science & Technology

    SYSTEMS ANALYSIS, * WATER SUPPLIES, MATHEMATICAL MODELS, OPTIMIZATION, ECONOMICS, LINEAR PROGRAMMING, HYDROLOGY, REGIONS, ALLOCATIONS, RESTRAINT, RIVERS, EVAPORATION, LAKES, UTAH, SALVAGE, MINES(EXCAVATIONS).

  1. BIODEGRADATION PROBABILITY PROGRAM (BIODEG)

    EPA Science Inventory

    The Biodegradation Probability Program (BIODEG) calculates the probability that a chemical under aerobic conditions with mixed cultures of microorganisms will biodegrade rapidly or slowly. It uses fragment constants developed using multiple linear and non-linear regressions and d...

  2. The Use of Linear Programming for Prediction.

    ERIC Educational Resources Information Center

    Schnittjer, Carl J.

    The purpose of the study was to develop a linear programming model to be used for prediction, test the accuracy of the predictions, and compare the accuracy with that produced by curvilinear multiple regression analysis. (Author)

  3. Evidence-based treatment practices for drug-involved adults in the criminal justice system.

    PubMed

    Friedmann, Peter D; Taxman, Faye S; Henderson, Craig E

    2007-04-01

    The aim of this study was to estimate the extent and organizational correlates of evidence-based practices (EBPs) in correctional facilities and community-based substance abuse treatment programs that manage drug-involved adult offenders. Correctional administrators and treatment program directors affiliated with a national sample of 384 criminal justice and community-based programs providing substance abuse treatment to adult offenders in the United States were surveyed in 2004. Correctional administrators reported the availability of up to 13 specified EBPs, and treatment directors up to 15. The sum total of EBPs indicates their extent. Linear models regress the extent of EBPs on variables measuring structure and leadership, culture and climate, administrator attitudes, and network connectedness of the organization. Most programs offer fewer than 60% of the specified EBPs to drug-involved offenders. In multiple regression models, offender treatment programs that provided more EBPs were community based, accredited, and network connected, with a performance-oriented, nonpunitive culture, more training resources, and leadership with a background in human services, a high regard for the value of substance abuse treatment, and an understanding of EBPs. The use of EBPs among facility- and community-based programs that serve drug-involved adult offenders has room for improvement. Initiatives to disseminate EBPs might target these institutional and environmental domains, but further research is needed to determine whether such organization interventions can promote the uptake of EBPs.

  4. EVIDENCE-BASED TREATMENT PRACTICES FOR DRUG-INVOLVED ADULTS IN THE CRIMINAL JUSTICE SYSTEM

    PubMed Central

    Friedmann, Peter D.; Taxman, Faye S.; Henderson, Craig E.

    2007-01-01

    OBJECTIVE To estimate the extent and organizational correlates of evidence-based practices (EBPs) in correctional facilities and community-based substance abuse treatment programs that manage drug-involved adult offenders. METHODS Correctional administrators and treatment program directors affiliated with a national sample of 384 criminal justice and community-based programs providing substance abuse treatment to adult offenders in the United States were surveyed in 2004. Correctional administrators reported the availability of up to 13 specified EBPs and treatment directors up to 15. The sum total of EBPs indicates their extent. Linear models regress the extent of EBPs on variables measuring structure and leadership, culture and climate, administrator attitudes and network connectedness of the organization. RESULTS Most programs offer fewer than 60% of the specified EBPs to drug-involved offenders. In multiple regression models, offender treatment programs that provided more EBPs were community-based, accredited, and network-connected; with a performance-oriented, non-punitive culture, more training resources; and leadership with a background in human services, a high regard for the value of substance abuse treatment and an understanding of EBPs. CONCLUSIONS The use of EBPs among facility- and community-based programs that serve drug-involved adult offenders has room for improvement. Initiatives to disseminate EBPs might target these institutional and environmental domains, but further research is needed to determine whether such organization interventions can promote the uptake of EBPs. PMID:17383551

  5. The Association Between Health Program Participation and Employee Retention.

    PubMed

    Mitchell, Rebecca J; Ozminkowski, Ronald J; Hartley, Stephen K

    2016-09-01

    Using health plan membership as a proxy for employee retention, the objective of this study was to examine whether use of health promotion programs was associated with employee retention. Propensity score weighted generalized linear regression models were used to estimate the association between telephonic programs or health risk surveys and retention. Analyses were conducted with six study samples based on type of program participation. Retention rates were highest for employees with either telephonic program activity or health risk surveys and lowest for employees who did not participate in any interventions. Participants ranged from 71% more likely to 5% less likely to remain with their employers compared with nonparticipants, depending on the sample used in analyses. Using health promotion programs in combination with health risk surveys may lead to improvements in employee retention.

  6. ProbeZT: Simulation of transport coefficients of molecular electronic junctions under environmental effects using Büttiker's probes

    NASA Astrophysics Data System (ADS)

    Korol, Roman; Kilgour, Michael; Segal, Dvira

    2018-03-01

    We present our in-house quantum transport package, ProbeZT. This program provides linear response coefficients: electrical and electronic thermal conductances, as well as the thermopower of molecular junctions in which electrons interact with the surrounding thermal environment. Calculations are performed based on the Büttiker probe method, which introduces decoherence, energy exchange and dissipation effects phenomenologically using virtual electrode terminals called probes. The program can realize different types of probes, each introducing various environmental effects, including elastic and inelastic scattering of electrons. The molecular system is described by an arbitrary tight-binding Hamiltonian, allowing the study of different geometries beyond simple one-dimensional wires. Applications of the program to study the thermoelectric performance of molecular junctions are illustrated. The program also has a built-in functionality to simulate electron transport in double-stranded DNA molecules based on a tight-binding (ladder) description of the junction.

  7. METLIN-PC: An applications-program package for problems of mathematical programming

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pshenichnyi, B.N.; Sobolenko, L.A.; Sosnovskii, A.A.

    1994-05-01

    The METLIN-PC applications-program package (APP) was developed at the V.M. Glushkov Institute of Cybernetics of the Academy of Sciences of Ukraine on IBM PC XT and AT computers. The present version of the package was written in Turbo Pascal and Fortran-77. The METLIN-PC is chiefly designed for the solution of smooth problems of mathematical programming and is a further development of the METLIN prototype, which was created earlier on a BESM-6 computer. The principal property of the previous package is retained - the applications modules employ a single approach based on the linearization method of B.N. Pshenichnyi. Hence the name "METLIN."

  8. Lichen-based critical loads for atmospheric nitrogen deposition in Western Oregon and Washington forests, USA

    Treesearch

    Linda H. Geiser; Sarah E. Jovan; Doug A. Glavich; Matthew K. Porter

    2010-01-01

    Critical loads (CLs) define maximum atmospheric deposition levels apparently preventative of ecosystem harm. We present first nitrogen CLs for northwestern North America's maritime forests. Using multiple linear regression, we related epiphytic-macrolichen community composition to: 1) wet deposition from the National Atmospheric Deposition Program, 2) wet, dry,...

  9. Optimal Stratification of Item Pools in a-Stratified Computerized Adaptive Testing.

    ERIC Educational Resources Information Center

    Chang, Hua-Hua; van der Linden, Wim J.

    2003-01-01

    Developed a method based on 0-1 linear programming to stratify an item pool optimally for use in alpha-stratified adaptive testing. Applied the method to a previous item pool from the computerized adaptive test of the Graduate Record Examinations. Results show the new method performs well in practical situations. (SLD)

  10. Traversing Theory and Transgressing Academic Discourses: Arts-Based Research in Teacher Education

    ERIC Educational Resources Information Center

    Dixon, Mary; Senior, Kim

    2009-01-01

    Pre-service teacher education is marked by linear and sequential programming which offers a plethora of strategies and methods (Cochran-Smith & Zeichner, 2005; Darling Hammond & Bransford, 2005; Grant & Zeichner, 1997). This paper emerges from a three year study within a core education subject in pre-service teacher education in…

  11. ARS-Media: A spreadsheet tool for calculating media recipes based on ion-specific constraints

    USDA-ARS?s Scientific Manuscript database

    ARS-Media is an ion solution calculator that uses Microsoft Excel to generate recipes of salts for complex ion mixtures specified by the user. Generating salt combinations (recipes) that result in pre-specified target ion values is a linear programming problem. Thus, the recipes are generated using ...

  12. Synthesizing Dynamic Programming Algorithms from Linear Temporal Logic Formulae

    NASA Technical Reports Server (NTRS)

    Rosu, Grigore; Havelund, Klaus

    2001-01-01

    The problem of testing a linear temporal logic (LTL) formula on a finite execution trace of events, generated by an executing program, occurs naturally in runtime analysis of software. We present an algorithm which takes an LTL formula and generates an efficient dynamic programming algorithm. The generated algorithm tests whether the LTL formula is satisfied by a finite trace of events given as input. The generated algorithm runs in linear time, its constant depending on the size of the LTL formula. The memory needed is constant, also depending on the size of the formula.
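
    As a small illustration of the dynamic-programming idea (a hand-written sketch for one fixed formula; the paper generates such evaluators automatically from an arbitrary LTL formula), the following Python function checks "p until q" on a finite trace in a single backward pass, in linear time with constant extra memory. The trace encoding used here is a hypothetical placeholder.

      # Backward dynamic-programming check of the fixed LTL formula (p U q) on a finite trace.
      def holds_p_until_q(trace, p, q):
          """trace: list of events; p, q: predicates over events."""
          until_next = False                 # truth of (p U q) at position i+1; False past the end
          for event in reversed(trace):
              # (p U q) holds at i iff q holds at i, or p holds at i and (p U q) holds at i+1
              until_next = q(event) or (p(event) and until_next)
          return until_next                  # truth value at the first event of the trace

      trace = [{"p": True, "q": False}, {"p": True, "q": False}, {"p": False, "q": True}]
      print(holds_p_until_q(trace, lambda e: e["p"], lambda e: e["q"]))   # True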

  13. On the stability and instantaneous velocity of grasped frictionless objects

    NASA Technical Reports Server (NTRS)

    Trinkle, Jeffrey C.

    1992-01-01

    A quantitative test for form closure valid for any number of contact points is formulated as a linear program, the optimal objective value of which provides a measure of how far a grasp is from losing form closure. Another contribution of the study is the formulation of a linear program whose solution yields the same information as the classical approach. The benefit of the formulation is that explicit testing of all possible combinations of contact interactions can be avoided by the algorithm used to solve the linear program.
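
    A commonly used linear-programming test of this kind, stated schematically (the paper's exact formulation may differ), collects the frictionless contact wrenches as the columns of a matrix W and solves

      \[
        d^{*} \;=\; \max_{d,\,\lambda}\; d
        \quad \text{s.t.} \quad W \lambda = 0, \qquad \lambda_i \ge d \ \ (i = 1,\dots,n), \qquad \mathbf{1}^{\top} \lambda \le n ,
      \]

    so that form closure holds exactly when the optimal value satisfies d* > 0, and the size of d* serves as the measure of how far the grasp is from losing form closure.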

  14. Engineering calculations for communications satellite systems planning

    NASA Technical Reports Server (NTRS)

    Reilly, C. H.; Levis, C. A.; Mount-Campbell, C.; Gonsalvez, D. J.; Wang, C. W.; Yamamura, Y.

    1985-01-01

    Computer-based techniques for optimizing communications-satellite orbit and frequency assignments are discussed. A gradient-search code was tested against a BSS scenario derived from the RARC-83 data. Improvement was obtained, but each iteration requires about 50 minutes of IBM-3081 CPU time. Gradient-search experiments on a small FSS test problem, consisting of a single service area served by 8 satellites, showed quickest convergence when the satellites were all initially placed near the center of the available orbital arc with moderate spacing. A transformation technique is proposed for investigating the surface topography of the objective function used in the gradient-search method. A new synthesis approach is based on transforming single-entry interference constraints into corresponding constraints on satellite spacings. These constraints are used with linear objective functions to formulate the co-channel orbital assignment task as a linear-programming (LP) problem or mixed integer programming (MIP) problem. Globally optimal solutions are always found with the MIP problems, but not necessarily with the LP problems. The MIP solutions can be used to evaluate the quality of the LP solutions. The initial results are very encouraging.

  15. The Efficiency of Split Panel Designs in an Analysis of Variance Model

    PubMed Central

    Wang, Wei-Guo; Liu, Hai-Jun

    2016-01-01

    We consider split panel design efficiency in analysis of variance models, that is, the determination of the cross-sections series optimal proportion in all samples, to minimize parametric best linear unbiased estimators of linear combination variances. An orthogonal matrix is constructed to obtain manageable expression of variances. On this basis, we derive a theorem for analyzing split panel design efficiency irrespective of interest and budget parameters. Additionally, relative estimator efficiency based on the split panel to an estimator based on a pure panel or a pure cross-section is present. The analysis shows that the gains from split panel can be quite substantial. We further consider the efficiency of split panel design, given a budget, and transform it to a constrained nonlinear integer programming. Specifically, an efficient algorithm is designed to solve the constrained nonlinear integer programming. Moreover, we combine one at time designs and factorial designs to illustrate the algorithm’s efficiency with an empirical example concerning monthly consumer expenditure on food in 1985, in the Netherlands, and the efficient ranges of the algorithm parameters are given to ensure a good solution. PMID:27163447

  16. Adaptive dynamic programming for discrete-time linear quadratic regulation based on multirate generalised policy iteration

    NASA Astrophysics Data System (ADS)

    Chun, Tae Yoon; Lee, Jae Young; Park, Jin Bae; Choi, Yoon Ho

    2018-06-01

    In this paper, we propose two multirate generalised policy iteration (GPI) algorithms applied to discrete-time linear quadratic regulation problems. The proposed algorithms are extensions of the existing GPI algorithm that consists of the approximate policy evaluation and policy improvement steps. The two proposed schemes, named heuristic dynamic programming (HDP) and dual HDP (DHP), based on multirate GPI, use multi-step estimation (M-step Bellman equation) at the approximate policy evaluation step for estimating the value function and its gradient called costate, respectively. Then, we show that these two methods with the same update horizon can be considered equivalent in the iteration domain. Furthermore, monotonically increasing and decreasing convergences, so called value iteration (VI)-mode and policy iteration (PI)-mode convergences, are proved to hold for the proposed multirate GPIs. Further, general convergence properties in terms of eigenvalues are also studied. The data-driven online implementation methods for the proposed HDP and DHP are demonstrated and finally, we present the results of numerical simulations performed to verify the effectiveness of the proposed methods.
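
    For background (standard LQR theory, not specific to the multirate scheme), the value function these HDP/DHP iterations approximate is quadratic, V(x) = x^T P x, and exact policy evaluation/improvement for the cost with weights Q and R corresponds to the Riccati recursion

      \[
        P_{k+1} \;=\; Q + A^{\top} P_k A \;-\; A^{\top} P_k B \left( R + B^{\top} P_k B \right)^{-1} B^{\top} P_k A ,
        \qquad
        K_{k+1} \;=\; \left( R + B^{\top} P_k B \right)^{-1} B^{\top} P_k A ,
      \]

    with control u_k = -K x_k; HDP iterates on the value function (hence on P), while DHP iterates on its gradient, the costate ∇V(x) = 2 P x.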

  17. ARS-Media for Excel: A Spreadsheet Tool for Calculating Media Recipes Based on Ion-Specific Constraints

    PubMed Central

    Niedz, Randall P.

    2016-01-01

    ARS-Media for Excel is an ion solution calculator that uses “Microsoft Excel” to generate recipes of salts for complex ion mixtures specified by the user. Generating salt combinations (recipes) that result in pre-specified target ion values is a linear programming problem. Excel’s Solver add-on solves the linear programming equation to generate a recipe. Calculating a mixture of salts to generate exact solutions of complex ionic mixtures is required for at least 2 types of problems– 1) formulating relevant ecological/biological ionic solutions such as those from a specific lake, soil, cell, tissue, or organ and, 2) designing ion confounding-free experiments to determine ion-specific effects where ions are treated as statistical factors. Using ARS-Media for Excel to solve these two problems is illustrated by 1) exactly reconstructing a soil solution representative of a loamy agricultural soil and, 2) constructing an ion-based experiment to determine the effects of substituting Na+ for K+ on the growth of a Valencia sweet orange nonembryogenic cell line. PMID:27812202

  18. ARS-Media for Excel: A Spreadsheet Tool for Calculating Media Recipes Based on Ion-Specific Constraints.

    PubMed

    Niedz, Randall P

    2016-01-01

    ARS-Media for Excel is an ion solution calculator that uses "Microsoft Excel" to generate recipes of salts for complex ion mixtures specified by the user. Generating salt combinations (recipes) that result in pre-specified target ion values is a linear programming problem. Excel's Solver add-on solves the linear programming equation to generate a recipe. Calculating a mixture of salts to generate exact solutions of complex ionic mixtures is required for at least 2 types of problems- 1) formulating relevant ecological/biological ionic solutions such as those from a specific lake, soil, cell, tissue, or organ and, 2) designing ion confounding-free experiments to determine ion-specific effects where ions are treated as statistical factors. Using ARS-Media for Excel to solve these two problems is illustrated by 1) exactly reconstructing a soil solution representative of a loamy agricultural soil and, 2) constructing an ion-based experiment to determine the effects of substituting Na+ for K+ on the growth of a Valencia sweet orange nonembryogenic cell line.
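
    The record describes the recipe calculation as a linear program solved with Excel's Solver; the same kind of problem can be sketched with scipy as below. The salts, ion compositions and target concentrations here are hypothetical placeholders, not the ARS-Media data, and the chloride contributed by KCl is left unconstrained for brevity.

      # Minimal sketch of a salt-recipe linear program (hypothetical salts and targets).
      import numpy as np
      from scipy.optimize import linprog

      # rows: ions (K+, NO3-, Ca2+); columns: salts (KNO3, Ca(NO3)2, KCl) -- mmol ion per mmol salt
      A_eq = np.array([[1.0, 0.0, 1.0],    # K+
                       [1.0, 2.0, 0.0],    # NO3-
                       [0.0, 1.0, 0.0]])   # Ca2+
      b_eq = np.array([5.0, 7.0, 2.0])     # target mmol of each ion per litre
      cost = np.ones(3)                    # e.g. minimize total salt used (or use prices)

      res = linprog(cost, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * 3)
      print(res.x)                         # mmol of each salt per litre meeting the ion targets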

  19. Approximate labeling via graph cuts based on linear programming.

    PubMed

    Komodakis, Nikos; Tziritas, Georgios

    2007-08-01

    A new framework is presented for both understanding and developing graph-cut-based combinatorial algorithms suitable for the approximate optimization of a very wide class of Markov Random Fields (MRFs) that are frequently encountered in computer vision. The proposed framework utilizes tools from the duality theory of linear programming in order to provide an alternative and more general view of state-of-the-art techniques like the α-expansion algorithm, which is included merely as a special case. Moreover, contrary to α-expansion, the derived algorithms generate solutions with guaranteed optimality properties for a much wider class of problems, for example, even for MRFs with nonmetric potentials. In addition, they are capable of providing per-instance suboptimality bounds on all occasions, including discrete MRFs with an arbitrary potential function. These bounds prove to be very tight in practice (that is, very close to 1), which means that the resulting solutions are almost optimal. Our algorithms' effectiveness is demonstrated by presenting experimental results on a variety of low-level vision tasks, such as stereo matching, image restoration, image completion, and optical flow estimation, as well as on synthetic problems.

  20. Community coalitions as a system: effects of network change on adoption of evidence-based substance abuse prevention.

    PubMed

    Valente, Thomas W; Chou, Chich Ping; Pentz, Mary Ann

    2007-05-01

    We examined the effect of community coalition network structure on the effectiveness of an intervention designed to accelerate the adoption of evidence-based substance abuse prevention programs. At baseline, 24 cities were matched and randomly assigned to 3 conditions (control, satellite TV training, and training plus technical assistance). We surveyed 415 community leaders at baseline and 406 at 18-month follow-up about their attitudes and practices toward substance abuse prevention programs. Network structure was measured by asking leaders whom in their coalition they turned to for advice about prevention programs. The outcome was a scale with 4 subscales: coalition function, planning, achievement of benchmarks, and progress in prevention activities. We used multiple linear regression and path analysis to test hypotheses. Intervention had a significant effect on decreasing the density of coalition networks. The change in density subsequently increased adoption of evidence-based practices. Optimal community network structures for the adoption of public health programs are unknown, but it should not be assumed that increasing network density or centralization are appropriate goals. Lower-density networks may be more efficient for organizing evidence-based prevention programs in communities.

  1. Large-scale linear programs in planning and prediction.

    DOT National Transportation Integrated Search

    2017-06-01

    Large-scale linear programs are at the core of many traffic-related optimization problems in both planning and prediction. Moreover, many of these involve significant uncertainty, and hence are modeled using either chance constraints, or robust optim...

  2. Computer-aided linear-circuit design.

    NASA Technical Reports Server (NTRS)

    Penfield, P.

    1971-01-01

    Usually computer-aided design (CAD) refers to programs that analyze circuits conceived by the circuit designer. Among the services such programs should perform are direct network synthesis, analysis, optimization of network parameters, formatting, storage of miscellaneous data, and related calculations. The program should be embedded in a general-purpose conversational language such as BASIC, JOSS, or APL. Such a program is MARTHA, a general-purpose linear-circuit analyzer embedded in APL.

  3. Linear and nonlinear interpretation of the direct strike lightning response of the NASA F106B thunderstorm research aircraft

    NASA Technical Reports Server (NTRS)

    Rudolph, T. H.; Perala, R. A.

    1983-01-01

    The objective of the work reported here is to develop a methodology by which electromagnetic measurements of inflight lightning strike data can be understood and extended to other aircraft. A linear and time invariant approach based on a combination of Fourier transform and three dimensional finite difference techniques is demonstrated. This approach can obtain the lightning channel current in the absence of the aircraft for given channel characteristic impedance and resistive loading. The model is applied to several measurements from the NASA F106B lightning research program. A non-linear three dimensional finite difference code has also been developed to study the response of the F106B to a lightning leader attachment. This model includes three species air chemistry and fluid continuity equations and can incorporate an experimentally based streamer formulation. Calculated responses are presented for various attachment locations and leader parameters. The results are compared qualitatively with measured inflight data.

  4. An intuitionistic fuzzy multi-objective non-linear programming model for sustainable irrigation water allocation under the combination of dry and wet conditions

    NASA Astrophysics Data System (ADS)

    Li, Mo; Fu, Qiang; Singh, Vijay P.; Ma, Mingwei; Liu, Xiao

    2017-12-01

    Water scarcity causes conflicts among natural resources, society and economy and reinforces the need for optimal allocation of irrigation water resources in a sustainable way. Uncertainties caused by natural conditions and human activities make optimal allocation more complex. An intuitionistic fuzzy multi-objective non-linear programming (IFMONLP) model for irrigation water allocation under the combination of dry and wet conditions is developed to help decision makers mitigate water scarcity. The model is capable of quantitatively solving multiple problems including crop yield increase, blue water saving, and water supply cost reduction to obtain a balanced water allocation scheme using a multi-objective non-linear programming technique. Moreover, it can deal with uncertainty as well as hesitation based on the introduction of intuitionistic fuzzy numbers. Consideration of the combination of dry and wet conditions for water availability and precipitation makes it possible to gain insights into the various irrigation water allocations, and joint probabilities based on copula functions provide decision makers an average standard for irrigation. A case study on optimally allocating both surface water and groundwater to different growth periods of rice in different subareas in Heping irrigation area, Qing'an County, northeast China shows the potential and applicability of the developed model. Results show that the crop yield increase target especially in tillering and elongation stages is a prevailing concern when more water is available, and trading schemes can mitigate water supply cost and save water with an increased grain output. Results also reveal that the water allocation schemes are sensitive to the variation of water availability and precipitation with uncertain characteristics. The IFMONLP model is applicable for most irrigation areas with limited water supplies to determine irrigation water strategies under a fuzzy environment.

  5. A set-covering based heuristic algorithm for the periodic vehicle routing problem.

    PubMed

    Cacchiani, V; Hemmelmayr, V C; Tricoire, F

    2014-01-30

    We present a hybrid optimization algorithm for mixed-integer linear programming, embedding both heuristic and exact components. In order to validate it we use the periodic vehicle routing problem (PVRP) as a case study. This problem consists of determining a set of minimum cost routes for each day of a given planning horizon, with the constraints that each customer must be visited a required number of times (chosen among a set of valid day combinations), must receive every time the required quantity of product, and that the number of routes per day (each respecting the capacity of the vehicle) does not exceed the total number of available vehicles. This is a generalization of the well-known vehicle routing problem (VRP). Our algorithm is based on the linear programming (LP) relaxation of a set-covering-like integer linear programming formulation of the problem, with additional constraints. The LP-relaxation is solved by column generation, where columns are generated heuristically by an iterated local search algorithm. The whole solution method takes advantage of the LP-solution and applies techniques of fixing and releasing of the columns as a local search, making use of a tabu list to avoid cycling. We show the results of the proposed algorithm on benchmark instances from the literature and compare them to the state-of-the-art algorithms, showing the effectiveness of our approach in producing good quality solutions. In addition, we report the results on realistic instances of the PVRP introduced in Pacheco et al. (2011)  [24] and on benchmark instances of the periodic traveling salesman problem (PTSP), showing the efficacy of the proposed algorithm on these as well. Finally, we report the new best known solutions found for all the tested problems.
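
    Schematically (with r indexing the candidate routes/columns and i the required visits), the set-covering-like master problem whose LP relaxation is solved by column generation reads

      \[
        \min \; \sum_{r \in \Omega} c_r\, x_r
        \quad \text{s.t.} \quad \sum_{r \in \Omega} a_{ir}\, x_r \;\ge\; 1 \ \ \forall i, \qquad x_r \in \{0,1\},
      \]

    where a_{ir} = 1 if column r covers visit i; the LP relaxation replaces x_r ∈ {0,1} by x_r ≥ 0, new columns are priced out heuristically by the iterated local search, and the additional PVRP constraints mentioned above (day combinations, fleet size per day) are carried alongside the covering rows.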

  6. A set-covering based heuristic algorithm for the periodic vehicle routing problem

    PubMed Central

    Cacchiani, V.; Hemmelmayr, V.C.; Tricoire, F.

    2014-01-01

    We present a hybrid optimization algorithm for mixed-integer linear programming, embedding both heuristic and exact components. In order to validate it we use the periodic vehicle routing problem (PVRP) as a case study. This problem consists of determining a set of minimum cost routes for each day of a given planning horizon, with the constraints that each customer must be visited a required number of times (chosen among a set of valid day combinations), must receive every time the required quantity of product, and that the number of routes per day (each respecting the capacity of the vehicle) does not exceed the total number of available vehicles. This is a generalization of the well-known vehicle routing problem (VRP). Our algorithm is based on the linear programming (LP) relaxation of a set-covering-like integer linear programming formulation of the problem, with additional constraints. The LP-relaxation is solved by column generation, where columns are generated heuristically by an iterated local search algorithm. The whole solution method takes advantage of the LP-solution and applies techniques of fixing and releasing of the columns as a local search, making use of a tabu list to avoid cycling. We show the results of the proposed algorithm on benchmark instances from the literature and compare them to the state-of-the-art algorithms, showing the effectiveness of our approach in producing good quality solutions. In addition, we report the results on realistic instances of the PVRP introduced in Pacheco et al. (2011)  [24] and on benchmark instances of the periodic traveling salesman problem (PTSP), showing the efficacy of the proposed algorithm on these as well. Finally, we report the new best known solutions found for all the tested problems. PMID:24748696

  7. Planning Student Flow with Linear Programming: A Tunisian Case Study.

    ERIC Educational Resources Information Center

    Bezeau, Lawrence

    A student flow model in linear programming format, designed to plan the movement of students into secondary and university programs in Tunisia, is described. The purpose of the plan is to determine a sufficient number of graduating students that would flow back into the system as teachers or move into the labor market to meet fixed manpower…

  8. Assessment of resident operative performance using a real-time mobile Web system: preparing for the milestone age.

    PubMed

    Wagner, Justin P; Chen, David C; Donahue, Timothy R; Quach, Chi; Hines, O Joe; Hiatt, Jonathan R; Tillou, Areti

    2014-01-01

    To satisfy trainees' operative competency requirements while improving feedback validity and timeliness using a mobile Web-based platform. The Southern Illinois University Operative Performance Rating Scale (OPRS) was embedded into a website formatted for mobile devices. From March 2013 to February 2014, faculty members were instructed to complete the OPRS form while providing verbal feedback to the operating resident at the conclusion of each procedure. Submitted data were compiled automatically within a secure Web-based spreadsheet. Conventional end-of-rotation performance (CERP) evaluations filed 2006 to 2013 and OPRS performance scores were compared by year of training using serial and independent-samples t tests. The mean CERP scores and OPRS overall resident operative performance scores were directly compared using a linear regression model. OPRS mobile site analytics were reviewed using a Web-based reporting program. Large university-based general surgery residency program. General Surgery faculty used the mobile Web OPRS system to rate resident performance. Residents and the program director reviewed evaluations semiannually. Over the study period, 18 faculty members and 37 residents logged 176 operations using the mobile OPRS system. There were 334 total OPRS website visits. Median time to complete an evaluation was 45 minutes from the end of the operation, and faculty spent an average of 134 seconds on the site to enter 1 assessment. In the 38,506 CERP evaluations reviewed, mean performance scores showed a positive linear trend of 2% change per year of training (p = 0.001). OPRS overall resident operative performance scores showed a significant linear (p = 0.001), quadratic (p = 0.001), and cubic (p = 0.003) trend of change per year of clinical training, reflecting the resident operative experience in our training program. Differences between postgraduate year-1 and postgraduate year-5 overall performance scores were greater with the OPRS (mean = 0.96, CI: 0.55-1.38) than with CERP measures (mean = 0.37, CI: 0.34-0.41). Additionally, there were consistent increases in each of the OPRS subcategories. In contrast to CERPs, the OPRS fully satisfies the Accreditation Council for Graduate Medical Education and American Board of Surgery operative assessment requirements. The mobile Web platform provides a convenient interface, broad accessibility, automatic data compilation, and compatibility with common database and statistical software. Our mobile OPRS system encourages candid feedback dialog and generates a comprehensive review of individual and group-wide operative proficiency in real time. Copyright © 2014 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.

  9. Toward the prevention of childhood undernutrition: diet diversity strategies using locally produced food can overcome gaps in nutrient supply.

    PubMed

    Parlesak, Alexandr; Geelhoed, Diederike; Robertson, Aileen

    2014-06-01

    Chronic undernutrition is prevalent in Mozambique, where children suffer from stunting, vitamin A deficiency, anemia, and other nutrition-related disorders. Complete diet formulation products (CDFPs) are increasingly promoted to prevent chronic undernutrition. The objective was to use linear programming to investigate whether diet diversification using local foods should be prioritized in order to reduce the prevalence of chronic undernutrition. Market prices of local foods were collected in Tete City, Mozambique. Linear programming was applied to calculate the cheapest possible fully nutritious food baskets (FNFB) by stepwise addition of micronutrient-dense local foods. Only the top quintile of Mozambican households, using average expenditure data, could afford the FNFB designed using linear programming from a spectrum of local standard foods. The addition of beef heart or liver, dried fish, and fresh moringa leaves before applying linear programming decreased the price by a factor of up to 2.6. As a result, the top three quintiles could afford the FNFB optimized using both the diversification strategy and linear programming. CDFPs, when added to the baskets, were unable to overcome the micronutrient gaps without greatly exceeding recommended energy intakes, due to their high ratio of energy to micronutrient density. Dietary diversification strategies using local, low-cost, nutrient-dense foods can meet all micronutrient recommendations and overcome all micronutrient gaps. The success of linear programming in identifying a low-cost FNFB depends entirely on the investigators' ability to select appropriate micronutrient-dense foods. CDFPs added to food baskets are unable to overcome micronutrient gaps without greatly exceeding recommended energy intake.
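
    A least-cost food basket of this kind is a standard diet linear program; the sketch below shows the structure with scipy. The foods, prices, compositions and requirement levels are hypothetical placeholders, not the Tete City market data or the full set of micronutrient recommendations used in the study.

      # Sketch of a least-cost nutritious food basket as a linear program (hypothetical data).
      import numpy as np
      from scipy.optimize import linprog

      # columns: maize meal, dried fish, moringa leaves, beans (all per 100 g)
      price   = np.array([0.05, 0.40, 0.10, 0.15])      # currency units per 100 g
      energy  = np.array([360.0, 300.0, 60.0, 340.0])   # kcal per 100 g
      protein = np.array([9.0, 60.0, 9.0, 21.0])        # g per 100 g
      iron    = np.array([2.5, 5.0, 4.0, 5.0])          # mg per 100 g

      # daily requirements: energy >= 2100 kcal, protein >= 50 g, iron >= 15 mg
      A_ub = -np.vstack([energy, protein, iron])        # negate to express >= as A_ub @ x <= b_ub
      b_ub = -np.array([2100.0, 50.0, 15.0])

      res = linprog(price, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 10)] * 4)   # at most 1 kg of any food
      print("100-g portions per food:", res.x, "daily cost:", round(res.fun, 3))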

  10. Improved Equivalent Linearization Implementations Using Nonlinear Stiffness Evaluation

    NASA Technical Reports Server (NTRS)

    Rizzi, Stephen A.; Muravyov, Alexander A.

    2001-01-01

    This report documents two new implementations of equivalent linearization for solving geometrically nonlinear random vibration problems of complicated structures. The implementations are given the acronym ELSTEP, for "Equivalent Linearization using a STiffness Evaluation Procedure." Both implementations of ELSTEP are fundamentally the same in that they use a novel nonlinear stiffness evaluation procedure to numerically compute otherwise inaccessible nonlinear stiffness terms from commercial finite element programs. The commercial finite element program MSC/NASTRAN (NASTRAN) was chosen as the core of ELSTEP. The FORTRAN implementation calculates the nonlinear stiffness terms and performs the equivalent linearization analysis outside of NASTRAN. The Direct Matrix Abstraction Program (DMAP) implementation performs these operations within NASTRAN. Both provide nearly identical results. Within each implementation, two error minimization approaches for the equivalent linearization procedure are available - force and strain energy error minimization. Sample results for a simply supported rectangular plate are included to illustrate the analysis procedure.

  11. Linear combination reading program for capture gamma rays

    USGS Publications Warehouse

    Tanner, Allan B.

    1971-01-01

    This program computes a weighting function, Qj, which gives a scalar output value of unity when applied to the spectrum of a desired element and a minimum value (considering statistics) when applied to spectra of materials not containing the desired element. Intermediate values are obtained for materials containing the desired element, in proportion to the amount of the element they contain. The program is written in the BASIC language in a format specific to the Hewlett-Packard 2000A Time-Sharing System, and is an adaptation of an earlier program for linear combination reading for X-ray fluorescence analysis (Tanner and Brinkerhoff, 1971). Following the program is a sample run from a study of the application of the linear combination technique to capture-gamma-ray analysis for calcium (report in preparation).
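
    One common way to pose such a weighting-function problem (stated schematically; the report's exact construction may differ) is as a minimum-response filter constrained to give unit output on the desired element's spectrum s, with Σ describing the variability of the interfering spectra and counting statistics:

      \[
        \min_{Q}\; Q^{\top} \Sigma\, Q
        \quad \text{s.t.} \quad Q^{\top} s = 1
        \qquad \Longrightarrow \qquad
        Q \;=\; \frac{\Sigma^{-1} s}{\, s^{\top} \Sigma^{-1} s \,} ,
      \]

    so that the scalar output Q^T y is unity for the desired element's spectrum and, in the least-squares sense, as small as the statistics allow for spectra that do not contain the element.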

  12. A singular value decomposition linear programming (SVDLP) optimization technique for circular cone based robotic radiotherapy.

    PubMed

    Liang, Bin; Li, Yongbao; Wei, Ran; Guo, Bin; Xu, Xuang; Liu, Bo; Li, Jiafeng; Wu, Qiuwen; Zhou, Fugen

    2018-01-05

    With robot-controlled linac positioning, robotic radiotherapy systems such as CyberKnife significantly increase freedom of radiation beam placement, but also impose more challenges on treatment plan optimization. The resampling mechanism in the vendor-supplied treatment planning system (MultiPlan) cannot fully explore the increased beam direction search space. Besides, a sparse treatment plan (using fewer beams) is desired to improve treatment efficiency. This study proposes a singular value decomposition linear programming (SVDLP) optimization technique for circular collimator based robotic radiotherapy. The SVDLP approach initializes the input beams by simulating the process of covering the entire target volume with equivalent beam tapers. The requirements on dosimetry distribution are modeled as hard and soft constraints, and the sparsity of the treatment plan is achieved by compressive sensing. The proposed linear programming (LP) model optimizes beam weights by minimizing the deviation of soft constraints subject to hard constraints, with a constraint on the l1 norm of the beam weight. A singular value decomposition (SVD) based acceleration technique was developed for the LP model. Based on the degeneracy of the influence matrix, the model is first compressed into a lower dimension for optimization, and then back-projected to reconstruct the beam weight. After beam weight optimization, the number of beams is reduced by removing the beams with low weight, and optimizing the weights of the remaining beams using the same model. This beam reduction technique is further validated by a mixed integer programming (MIP) model. The SVDLP approach was tested on a lung case. The results demonstrate that the SVD acceleration technique speeds up the optimization by a factor of 4.8. Furthermore, the beam reduction achieves a similar plan quality to the globally optimal plan obtained by the MIP model, but is one to two orders of magnitude faster. In addition, the SVDLP approach is tested and compared with MultiPlan on three clinical cases of varying complexities. In general, the plans generated by the SVDLP achieve steeper dose gradient, better conformity and less damage to normal tissues. In conclusion, the SVDLP approach effectively improves the quality of the treatment plan due to the use of the complete beam search space. This challenging optimization problem with the complete beam search space is effectively handled by the proposed SVD acceleration.
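
    The full SVDLP model is not reproduced in this record; the schematic Python sketch below only illustrates the LP structure described above (minimize soft-constraint deviation subject to hard constraints, with an l1 budget on the nonnegative beam weights, whose l1 norm is then simply their sum). The dose-influence matrix, prescriptions and budget are hypothetical, and the SVD compression and beam-reduction steps are omitted.

      # Schematic l1-budgeted beam-weight LP (hypothetical data; not the SVDLP model).
      import numpy as np
      from scipy.optimize import linprog

      rng = np.random.default_rng(1)
      n_vox, n_beam, n_hard = 40, 12, 10
      D = rng.random((n_vox, n_beam))          # hypothetical dose-influence matrix
      d_soft = np.full(n_vox, 3.0)             # soft prescription per voxel
      d_hard = np.full(n_hard, 2.0)            # hard minimum dose on the first n_hard voxels
      tau = 15.0                               # l1 budget on the nonnegative weights

      # variables z = [w (n_beam), s (n_vox)]; minimize the total soft-constraint slack
      c = np.concatenate([np.zeros(n_beam), np.ones(n_vox)])
      A1 = np.hstack([-D, -np.eye(n_vox)])                               # D w + s >= d_soft
      A2 = np.hstack([-D[:n_hard], np.zeros((n_hard, n_vox))])           # D_hard w >= d_hard
      A3 = np.concatenate([np.ones(n_beam), np.zeros(n_vox)])[None, :]   # sum(w) <= tau
      A_ub = np.vstack([A1, A2, A3])
      b_ub = np.concatenate([-d_soft, -d_hard, [tau]])

      res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (n_beam + n_vox))
      w = res.x[:n_beam]
      print("beams with nonzero weight:", int(np.sum(w > 1e-6)), "objective:", round(res.fun, 3))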

  13. A singular value decomposition linear programming (SVDLP) optimization technique for circular cone based robotic radiotherapy

    NASA Astrophysics Data System (ADS)

    Liang, Bin; Li, Yongbao; Wei, Ran; Guo, Bin; Xu, Xuang; Liu, Bo; Li, Jiafeng; Wu, Qiuwen; Zhou, Fugen

    2018-01-01

    With robot-controlled linac positioning, robotic radiotherapy systems such as CyberKnife significantly increase freedom of radiation beam placement, but also impose more challenges on treatment plan optimization. The resampling mechanism in the vendor-supplied treatment planning system (MultiPlan) cannot fully explore the increased beam direction search space. Besides, a sparse treatment plan (using fewer beams) is desired to improve treatment efficiency. This study proposes a singular value decomposition linear programming (SVDLP) optimization technique for circular collimator based robotic radiotherapy. The SVDLP approach initializes the input beams by simulating the process of covering the entire target volume with equivalent beam tapers. The requirements on dosimetry distribution are modeled as hard and soft constraints, and the sparsity of the treatment plan is achieved by compressive sensing. The proposed linear programming (LP) model optimizes beam weights by minimizing the deviation of soft constraints subject to hard constraints, with a constraint on the l1 norm of the beam weight. A singular value decomposition (SVD) based acceleration technique was developed for the LP model. Based on the degeneracy of the influence matrix, the model is first compressed into a lower dimension for optimization, and then back-projected to reconstruct the beam weight. After beam weight optimization, the number of beams is reduced by removing the beams with low weight, and optimizing the weights of the remaining beams using the same model. This beam reduction technique is further validated by a mixed integer programming (MIP) model. The SVDLP approach was tested on a lung case. The results demonstrate that the SVD acceleration technique speeds up the optimization by a factor of 4.8. Furthermore, the beam reduction achieves a similar plan quality to the globally optimal plan obtained by the MIP model, but is one to two orders of magnitude faster. In addition, the SVDLP approach is tested and compared with MultiPlan on three clinical cases of varying complexities. In general, the plans generated by the SVDLP achieve steeper dose gradient, better conformity and less damage to normal tissues. In conclusion, the SVDLP approach effectively improves the quality of the treatment plan due to the use of the complete beam search space. This challenging optimization problem with the complete beam search space is effectively handled by the proposed SVD acceleration.

  14. Output-Feedback Control of Unknown Linear Discrete-Time Systems With Stochastic Measurement and Process Noise via Approximate Dynamic Programming.

    PubMed

    Wang, Jun-Sheng; Yang, Guang-Hong

    2017-07-25

    This paper studies the optimal output-feedback control problem for unknown linear discrete-time systems with stochastic measurement and process noise. A dithered Bellman equation with the innovation covariance matrix is constructed via the expectation operator given in the form of a finite summation. On this basis, an output-feedback-based approximate dynamic programming method is developed, where the terms depending on the innovation covariance matrix are available with the aid of the innovation covariance matrix identified beforehand. Therefore, by iterating the Bellman equation, the resulting value function can converge to the optimal one in the presence of the aforementioned noise, and the nearly optimal control laws are delivered. To show the effectiveness and the advantages of the proposed approach, a simulation example and a velocity control experiment on a dc machine are employed.

  15. Clustering of the human skeletal muscle fibers using linear programming and angular Hilbertian metrics.

    PubMed

    Neji, Radhouène; Besbes, Ahmed; Komodakis, Nikos; Deux, Jean-François; Maatouk, Mezri; Rahmouni, Alain; Bassez, Guillaume; Fleury, Gilles; Paragios, Nikos

    2009-01-01

    In this paper, we present a manifold clustering method for the classification of fibers obtained from diffusion tensor images (DTI) of the human skeletal muscle. Using a linear programming formulation of prototype-based clustering, we propose a novel fiber classification algorithm over manifolds that circumvents the necessity to embed the data in low dimensional spaces and determines automatically the number of clusters. Furthermore, we propose the use of angular Hilbertian metrics between multivariate normal distributions to define a family of distances between tensors that we generalize to fibers. These metrics are used to approximate the geodesic distances over the fiber manifold. We also discuss the case where only geodesic distances to a reduced set of landmark fibers are available. The experimental validation of the method is done using a manually annotated significant dataset of DTI of the calf muscle for healthy and diseased subjects.

  16. Dynamic Photonic Materials Based on Liquid Crystals (Postprint)

    DTIC Science & Technology

    2013-09-01

    Report AFRL-RX-WP-JA-2015-0059, by Luciano De Sio and Cesare Umeton; DOI: 10.1016/B978-0-444-62644-8.00001-7. Abstract excerpt: Liquid crystals, combining optical non-linearity and self-organizing properties with fluidity, and being...

  17. Evaluating forest management policies by parametric linear programing

    Treesearch

    Daniel I. Navon; Richard J. McConnen

    1967-01-01

    Parametric linear programing, an analytical and simulation technique, explores alternative conditions and devises an optimal management plan for each condition. Its application to solving policy-decision problems in the management of forest lands is illustrated with an example.

  18. Use of nonlinear programming to optimize performance response to energy density in broiler feed formulation.

    PubMed

    Guevara, V R

    2004-02-01

    A nonlinear programming optimization model was developed to maximize margin over feed cost in broiler feed formulation and is described in this paper. The model identifies the optimal feed mix that maximizes profit margin. The optimum metabolizable energy level and the corresponding performance were found by using Excel Solver nonlinear programming. Data from an energy density study with broilers were fitted to quadratic equations to express weight gain, feed consumption, and the objective function (income over feed cost) in terms of energy density. Nutrient:energy ratio constraints were transformed into equivalent linear constraints. National Research Council nutrient requirements and feeding program were used for examining changes in variables. The nonlinear programming feed formulation method was used to illustrate the effects of changes in different variables on the optimum energy density, performance, and profitability and was compared with conventional linear programming. To demonstrate the capabilities of the model, I determined the impact of variation in prices. Prices for broiler, corn, fish meal, and soybean meal were increased and decreased by 25%. Formulations were identical in all other respects. Energy density, margin, and diet cost changed compared with the conventional linear programming formulation. This study suggests that nonlinear programming can be more useful than conventional linear programming for optimizing the performance response to energy density in broiler feed formulation, because an energy level does not need to be set in advance.
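
    The sketch below uses invented quadratic response curves and prices (not the study's fitted coefficients or NRC data) and maximizes income over feed cost as a function of metabolizable energy density with a bounded scalar optimizer, illustrating why the optimum energy level emerges from the model rather than being fixed in advance.

    ```python
    # Illustrative sketch only: invented quadratic response curves and prices (not the
    # study's fitted coefficients or NRC data). Margin over feed cost is maximized as a
    # function of metabolizable energy (ME) density with a bounded scalar optimizer.
    import numpy as np
    from scipy.optimize import minimize_scalar

    def weight_gain(E):          # kg live-weight gain per bird as a quadratic in ME density (kcal/kg)
        return -2.0 + 2.2e-3 * E - 3.4e-7 * E**2

    def feed_intake(E):          # kg feed per bird, declining as the diet becomes denser
        return 9.0 - 2.4e-3 * E + 2.6e-7 * E**2

    def feed_cost_per_kg(E):     # denser diets cost more per kg
        return 0.04 + 1.0e-4 * E

    broiler_price = 1.1          # $ per kg live weight (hypothetical)

    def margin(E):               # income over feed cost per bird
        return broiler_price * weight_gain(E) - feed_cost_per_kg(E) * feed_intake(E)

    res = minimize_scalar(lambda E: -margin(E), bounds=(2800.0, 3400.0), method="bounded")
    print(f"optimum ME density: {res.x:.0f} kcal/kg, margin: ${-res.fun:.3f} per bird")
    ```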

  19. A study of the use of linear programming techniques to improve the performance in design optimization problems

    NASA Technical Reports Server (NTRS)

    Young, Katherine C.; Sobieszczanski-Sobieski, Jaroslaw

    1988-01-01

    This project has two objectives. The first is to determine whether linear programming techniques can improve performance when handling design optimization problems with a large number of design variables and constraints relative to the feasible directions algorithm. The second purpose is to determine whether using the Kreisselmeier-Steinhauser (KS) function to replace the constraints with one constraint will reduce the cost of total optimization. Comparisons are made using solutions obtained with linear and non-linear methods. The results indicate that there is no cost saving using the linear method or in using the KS function to replace constraints.
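
    The Kreisselmeier-Steinhauser (KS) function mentioned in this record aggregates many constraints g_i(x) <= 0 into one smooth, conservative constraint. A minimal sketch of the aggregation itself (independent of any particular design problem) is shown below; the constraint values are arbitrary.

    ```python
    # Sketch of the Kreisselmeier-Steinhauser (KS) aggregation: many constraints g_i(x) <= 0
    # are replaced by a single smooth, conservative envelope. The shifted form below avoids
    # overflow for large values of the aggregation parameter rho.
    import numpy as np

    def ks(g, rho=50.0):
        """KS envelope of constraint values g; KS(g) >= max(g) and -> max(g) as rho -> inf."""
        g = np.asarray(g, dtype=float)
        gmax = g.max()
        return gmax + np.log(np.sum(np.exp(rho * (g - gmax)))) / rho

    g = np.array([-0.30, -0.05, 0.02, -0.10])   # one slightly violated constraint among four
    print(ks(g, rho=50.0))                      # ~0.021: a single value standing in for all four
    ```

    Because KS(g) is always at least max_i g_i and approaches it as rho grows, keeping the single KS value non-positive (approximately) enforces all of the original constraints at once.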

  20. Interactive computer programs for the graphic analysis of nucleotide sequence data.

    PubMed Central

    Luckow, V A; Littlewood, R K; Rownd, R H

    1984-01-01

    A group of interactive computer programs have been developed which aid in the collection and graphical analysis of nucleotide and protein sequence data. The programs perform the following basic functions: a) enter, edit, list, and rearrange sequence data; b) permit automatic entry of nucleotide sequence data directly from an autoradiograph into the computer; c) search for restriction sites or other specified patterns and plot a linear or circular restriction map, or print their locations; d) plot base composition; e) analyze homology between sequences by plotting a two-dimensional graphic matrix; and f) aid in plotting predicted secondary structures of RNA molecules. PMID:6546437

  1. Optimal blood glucose level control using dynamic programming based on minimal Bergman model

    NASA Astrophysics Data System (ADS)

    Rettian Anggita Sari, Maria; Hartono

    2018-03-01

    The purpose of this article is to simulate the glucose dynamics and insulin kinetics of a diabetic patient. The model used in this research is the non-linear Bergman minimal model. Optimal control theory is then applied to formulate the problem of determining the optimal dose of insulin in the treatment of diabetes mellitus such that the glucose level remains in the normal range over a specific time range. The optimization problem is solved using dynamic programming. The results show that dynamic programming is quite reliable for representing the interaction between glucose and insulin levels in a diabetes mellitus patient.
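
    For context, the sketch below integrates the non-linear Bergman minimal model that such a dynamic-programming controller acts on; the parameter values, initial state, and simple insulin infusion profile are illustrative assumptions, not the paper's, and no optimization is performed here.

    ```python
    # Context sketch: the non-linear Bergman minimal model that such a controller acts on.
    # Parameter values, the initial state and the simple insulin infusion profile are
    # illustrative assumptions, not the paper's; no optimization is performed here.
    import numpy as np
    from scipy.integrate import solve_ivp

    p1, p2, p3 = 0.028, 0.025, 1.3e-5    # glucose effectiveness and insulin-action parameters (1/min)
    n, Gb, Ib = 0.09, 80.0, 15.0         # insulin clearance (1/min), basal glucose (mg/dL), basal insulin (mU/L)

    def insulin_infusion(t):             # u(t): exogenous insulin input, a 60-minute constant infusion
        return 10.0 if t < 60.0 else 0.0

    def bergman(t, y):
        G, X, I = y                      # plasma glucose, remote insulin action, plasma insulin
        dG = -p1 * (G - Gb) - X * G
        dX = -p2 * X + p3 * (I - Ib)
        dI = -n * (I - Ib) + insulin_infusion(t)
        return [dG, dX, dI]

    sol = solve_ivp(bergman, (0.0, 300.0), y0=[200.0, 0.0, 15.0], max_step=1.0)
    print(f"glucose after 5 h: {sol.y[0, -1]:.1f} mg/dL")
    ```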

  2. A non-linear programming approach to the computer-aided design of regulators using a linear-quadratic formulation

    NASA Technical Reports Server (NTRS)

    Fleming, P.

    1985-01-01

    A design technique is proposed for linear regulators in which a feedback controller of fixed structure is chosen to minimize an integral quadratic objective function subject to the satisfaction of integral quadratic constraint functions. Application of a non-linear programming algorithm to this mathematically tractable formulation results in an efficient and useful computer-aided design tool. Particular attention is paid to computational efficiency and various recommendations are made. Two design examples illustrate the flexibility of the approach and highlight the special insight afforded to the designer.

  3. Genetic programming based quantitative structure-retention relationships for the prediction of Kovats retention indices.

    PubMed

    Goel, Purva; Bapat, Sanket; Vyas, Renu; Tambe, Amruta; Tambe, Sanjeev S

    2015-11-13

    The development of quantitative structure-retention relationships (QSRR) aims at constructing an appropriate linear/nonlinear model for the prediction of the retention behavior (such as the Kovats retention index) of a solute on a chromatographic column. Commonly, multi-linear regression and artificial neural networks are used in QSRR development in gas chromatography (GC). In this study, an artificial intelligence based data-driven modeling formalism, namely genetic programming (GP), has been introduced for the development of quantitative structure-based models predicting Kovats retention indices (KRI). The novelty of the GP formalism is that, given an example dataset, it searches and optimizes both the form (structure) and the parameters of an appropriate linear/nonlinear data-fitting model. Thus, it is not necessary to pre-specify the form of the data-fitting model in GP-based modeling. These models are also less complex, simple to understand, and easy to deploy. The effectiveness of GP in constructing QSRRs has been demonstrated by developing models predicting KRIs of light hydrocarbons (case study I) and adamantane derivatives (case study II). In each case study, two-, three- and four-descriptor models have been developed using the KRI data available in the literature. The results of these studies clearly indicate that the GP-based models possess an excellent KRI prediction accuracy and generalization capability. Specifically, the best performing four-descriptor models in both case studies have yielded high (>0.9) values of the coefficient of determination (R2) and low values of root mean squared error (RMSE) and mean absolute percent error (MAPE) for training, test and validation set data. The characteristic feature of this study is that it introduces a practical and effective GP-based method for developing QSRRs in gas chromatography that can be gainfully utilized for developing other types of data-driven models in chromatography science. Copyright © 2015 Elsevier B.V. All rights reserved.

  4. A sequential linear optimization approach for controller design

    NASA Technical Reports Server (NTRS)

    Horta, L. G.; Juang, J.-N.; Junkins, J. L.

    1985-01-01

    A linear optimization approach with a simple real arithmetic algorithm is presented for reliable controller design and vibration suppression of flexible structures. Using first order sensitivity of the system eigenvalues with respect to the design parameters in conjunction with a continuation procedure, the method converts a nonlinear optimization problem into a maximization problem with linear inequality constraints. The method of linear programming is then applied to solve the converted linear optimization problem. The general efficiency of the linear programming approach allows the method to handle structural optimization problems with a large number of inequality constraints on the design vector. The method is demonstrated using a truss beam finite element model for the optimal sizing and placement of active/passive-structural members for damping augmentation. Results using both the sequential linear optimization approach and nonlinear optimization are presented and compared. The insensitivity to initial conditions of the linear optimization approach is also demonstrated.

  5. Comparison of some optimal control methods for the design of turbine blades

    NASA Technical Reports Server (NTRS)

    Desilva, B. M. E.; Grant, G. N. C.

    1977-01-01

    This paper attempts a comparative study of some numerical methods for the optimal control design of turbine blades whose vibration characteristics are approximated by Timoshenko beam idealizations with shear and incorporating simple boundary conditions. The blade was synthesized using the following methods: (1) conjugate gradient minimization of the system Hamiltonian in function space incorporating penalty function transformations, (2) projection operator methods in a function space which includes the frequencies of vibration and the control function, (3) epsilon-technique penalty function transformation resulting in a highly nonlinear programming problem, (4) finite difference discretization of the state equations, again resulting in a nonlinear program, (5) second variation methods with complex state differential equations to include damping effects, resulting in systems of inhomogeneous matrix Riccati equations, some of which are stiff, and (6) quasi-linear methods based on iterative linearization of the state and adjoint equations. The paper includes a discussion of some substantial computational difficulties encountered in the implementation of these techniques, together with a resume of work presently in progress using a differential dynamic programming approach.

  6. Split diversity in constrained conservation prioritization using integer linear programming.

    PubMed

    Chernomor, Olga; Minh, Bui Quang; Forest, Félix; Klaere, Steffen; Ingram, Travis; Henzinger, Monika; von Haeseler, Arndt

    2015-01-01

    Phylogenetic diversity (PD) is a measure of biodiversity based on the evolutionary history of species. Here, we discuss several optimization problems related to the use of PD, and the more general measure split diversity (SD), in conservation prioritization. Depending on the conservation goal and the information available about species, one can construct optimization routines that incorporate various conservation constraints. We demonstrate how this information can be used to select sets of species for conservation action. Specifically, we discuss the use of species' geographic distributions, the choice of candidates under economic pressure, and the use of predator-prey interactions between the species in a community to define viability constraints. Despite such optimization problems falling into the area of NP hard problems, it is possible to solve them in a reasonable amount of time using integer programming. We apply integer linear programming to a variety of models for conservation prioritization that incorporate the SD measure. We exemplarily show the results for two data sets: the Cape region of South Africa and a Caribbean coral reef community. Finally, we provide user-friendly software at http://www.cibiv.at/software/pda.
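
    A toy version of the kind of integer linear program described in this record (maximize covered branch length of a phylogeny subject to a conservation budget) can be written down directly; the sketch below uses scipy.optimize.milp with an invented four-taxon tree, costs, and budget, and is far simpler than the constraints handled by the PDA software.

    ```python
    # Toy sketch of budget-constrained phylogenetic-diversity maximization as an integer
    # linear program (the PDA software handles far richer constraints). The four-taxon
    # tree, branch lengths, costs and budget are invented.
    import numpy as np
    from scipy.optimize import milp, LinearConstraint, Bounds

    lengths = np.array([2.0, 1.0, 1.5, 0.5, 0.5, 1.0])    # branch lengths of a toy phylogeny
    inc = np.array([[1, 1, 0, 0],                          # inc[b, t] = 1 if taxon t lies below branch b
                    [0, 0, 1, 1],
                    [1, 0, 0, 0],
                    [0, 1, 0, 0],
                    [0, 0, 1, 0],
                    [0, 0, 0, 1]])
    cost = np.array([3.0, 1.0, 2.0, 2.0])                  # conservation cost of each taxon
    budget = 5.0

    n_branches, n_taxa = inc.shape
    # Decision vector z = [x (taxa selected), y (branches covered)], all binary.
    c = np.concatenate([np.zeros(n_taxa), -lengths])       # maximize total covered branch length

    A_cover = np.hstack([-inc, np.eye(n_branches)])        # y_b <= sum_t inc[b, t] * x_t
    A_budget = np.hstack([cost, np.zeros(n_branches)])[None, :]
    constraints = [LinearConstraint(A_cover, -np.inf, 0.0),
                   LinearConstraint(A_budget, -np.inf, budget)]

    res = milp(c=c, constraints=constraints,
               integrality=np.ones(n_taxa + n_branches), bounds=Bounds(0, 1))
    x = np.round(res.x[:n_taxa]).astype(int)
    print("selected taxa:", np.flatnonzero(x), "phylogenetic diversity:", -res.fun)
    ```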

  7. A New Pattern of Getting Nasty Number in Graphical Method

    NASA Astrophysics Data System (ADS)

    Sumathi, P.; Indhumathi, N.

    2018-04-01

    This paper proposes a new technique for obtaining nasty numbers using the graphical method for linear programming problems, and the technique has been demonstrated on various linear programming problems. Some characterisations of nasty numbers are also discussed.

  8. SPAR reference manual. [for stress analysis

    NASA Technical Reports Server (NTRS)

    Whetstone, W. D.

    1974-01-01

    SPAR is a system of related programs which may be operated either in batch or demand (teletype) mode. Information exchange between programs is automatically accomplished through one or more direct access libraries, known collectively as the data complex. Card input is command-oriented, in free-field form. Capabilities available in the first production release of the system are fully documented, and include linear stress analysis, linear bifurcation buckling analysis, and linear vibrational analysis.

  9. Resource Allocation Modelling in Vocational Rehabilitation: A Prototype Developed with the Michigan and Rhode Island VR Agencies.

    ERIC Educational Resources Information Center

    Leff, H. Stephen; Turner, Ralph R.

    This report focuses on the use of linear programming models to address the issues of how vocational rehabilitation (VR) resources should be allocated in order to maximize program efficiency within given resource constraints. A general introduction to linear programming models is first presented that describes the major types of models available,…

  10. Specification for Teaching Machines and Programmes (Interchangeability of Programmes). Part 1, Linear Machines and Programmes.

    ERIC Educational Resources Information Center

    British Standards Institution, London (England).

    To promote interchangeability of teaching machines and programs, so that the user is not so limited in his choice of programs, the British Standards Institute has offered a standard. Part I of the standard deals with linear teaching machines and programs that make use of the roll or sheet methods of presentation. Requirements cover: spools,…

  11. Use and Misuse of Stunting as a Measure of Child Health.

    PubMed

    Perumal, Nandita; Bassani, Diego G; Roth, Daniel E

    2018-03-01

    The term "stunting" has become pervasive in international nutrition and child health research, program, and policy circles. Although originally intended as a population-level statistical indicator of children's social and economic deprivation, the conventional anthropometric definition of stunting (height-for-age z scores <-2 SD) is now widely used to define chronic malnutrition. Epidemiologists often portray it as a disease, making inferences about the causes of growth faltering based on comparisons between stunted (i.e., undernourished) and nonstunted children. Stunting is commonly used to monitor public health and nutrition program effectiveness alongside calls for the "elimination of stunting." However, there is no biological basis for the -2 SD cutoff to define stunting, making it a poor individual-level classifier of malnutrition or disease. In fact, in many low- and middle-income countries, children above and below the threshold are similarly affected by growth-limiting exposures. We argue that the common use of stunting as an indicator of child linear growth has contributed to unsubstantiated assumptions about the biological mechanisms underlying linear growth impairment in low- and middle-income countries and has led to a systematic underestimation of the burden of linear growth deficits among children in low-resource settings. Moreover, because nutrition-specific short-term public health interventions may result in relatively minor changes in child height, the use of stunting prevalence to monitor health or nutrition program effectiveness may be inappropriate. A more nuanced approach to the application and interpretation of stunting as an indicator in child growth research and public health programming is warranted.

  12. Frequency assignments for HFDF receivers in a search and rescue network

    NASA Astrophysics Data System (ADS)

    Johnson, Krista E.

    1990-03-01

    This thesis applies a multiobjective linear programming approach to the problem of assigning frequencies to high frequency direction finding (HFDF) receivers in a search-and-rescue network in order to maximize the expected number of geolocations of vessels in distress. The problem is formulated as a multiobjective integer linear programming problem. The integrality of the solutions is guaranteed by the total unimodularity of the A-matrix. Two approaches are taken to solve the multiobjective linear programming problem: (1) the multiobjective simplex method as implemented in ADBASE; and (2) an iterative approach. In this approach, the individual objective functions are weighted and combined in a single additive objective function. The resulting single objective problem is expressed as a network programming problem and solved using SAS NETFLOW. The process is then repeated with different weightings for the objective functions. The solutions obtained from the multiobjective linear programs are evaluated using a FORTRAN program to determine which solution provides the greatest expected number of geolocations. This solution is then compared to the sample mean and standard deviation for the expected number of geolocations resulting from 10,000 random frequency assignments for the network.
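
    The iterative weighted-sum approach described above can be illustrated generically: combine the objectives with varying weights and solve each scalarized LP. The sketch below does this for a toy two-objective LP with scipy.optimize.linprog (the thesis itself used ADBASE and SAS NETFLOW on the network formulation); the feasible region and objectives are invented.

    ```python
    # Generic sketch of the iterative weighted-sum approach: two objectives of a toy LP are
    # combined with varying weights and each scalarized problem is solved with an LP solver
    # (the thesis used SAS NETFLOW on the network formulation; scipy is used here).
    import numpy as np
    from scipy.optimize import linprog

    A_ub = np.array([[1.0, 2.0], [3.0, 1.0]])   # toy feasible region: x1 + 2 x2 <= 8, 3 x1 + x2 <= 9, x >= 0
    b_ub = np.array([8.0, 9.0])

    c1 = np.array([-1.0, 0.0])                  # objective 1: maximize x1 (written as a minimization)
    c2 = np.array([0.0, -1.0])                  # objective 2: maximize x2

    for w in np.linspace(0.0, 1.0, 5):
        c = w * c1 + (1.0 - w) * c2             # weighted-sum scalarization
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
        print(f"weight on objective 1 = {w:.2f} -> efficient solution x = {np.round(res.x, 3)}")
    ```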

  13. On solving three-dimensional open-dimension rectangular packing problems

    NASA Astrophysics Data System (ADS)

    Junqueira, Leonardo; Morabito, Reinaldo

    2017-05-01

    In this article, a recently proposed three-dimensional open-dimension rectangular packing problem is considered, in which the objective is to find a minimal volume rectangular container that packs a set of rectangular boxes. The literature has tackled small-sized instances of this problem by means of optimization solvers, position-free mixed-integer programming (MIP) formulations and piecewise linearization approaches. In this study, the problem is alternatively addressed by means of grid-based position MIP formulations, whereas still considering optimization solvers and the same piecewise linearization techniques. A comparison of the computational performance of both models is then presented, when tested with benchmark problem instances and with new instances, and it is shown that the grid-based position MIP formulation can be competitive, depending on the characteristics of the instances. The grid-based position MIP formulation is also embedded with real-world practical constraints, such as cargo stability, and results are additionally presented.

  14. Fulfilling the Roosevelts’ Vision for American Naval Power (1923-2005)

    DTIC Science & Technology

    2006-06-30

    Excerpt: "...nuclear pressure vessels are based on the results of that program." The remainder of the indexed text consists of table-of-contents fragments (e.g., development of a nuclear submarine, identification friend-or-foe systems, first American airborne radar, monopulse radar, film-forming foam, nuclear reactor safety, the linear predictive coder, submarine habitability).

  15. MOFA Software for the COBRA Toolbox

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Griesemer, Marc; Navid, Ali

    MOFA-COBRA is a software code for Matlab that performs Multi-Objective Flux Analysis (MOFA), i.e., the solving of linear programming problems with multiple objectives. The leading software package for conducting different types of analyses using constraint-based models is the COBRA Toolbox for Matlab. MOFA-COBRA is an added tool for COBRA that solves multi-objective problems using a novel algorithm.

  16. Landscape scale mapping of forest inventory data by nearest neighbor classification

    Treesearch

    Andrew Lister

    2009-01-01

    One of the goals of the Forest Service, U.S. Department of Agriculture's Forest Inventory and Analysis (FIA) program is large-area mapping. FIA scientists have tried many methods in the past, including geostatistical methods, linear modeling, nonlinear modeling, and simple choropleth and dot maps. Mapping methods that require individual model-based maps to be...

  17. Recursion Removal as an Instructional Method to Enhance the Understanding of Recursion Tracing

    ERIC Educational Resources Information Center

    Velázquez-Iturbide, J. Ángel; Castellanos, M. Eugenia; Hijón-Neira, Raquel

    2016-01-01

    Recursion is one of the most difficult programming topics for students. In this paper, an instructional method is proposed to enhance students' understanding of recursion tracing. The proposal is based on the use of rules to translate linear recursion algorithms into equivalent, iterative ones. The paper has two main contributions: the…

  18. A Conceptual View of the Officer Procurement Model (TOPOPS). Technical Report No. 73-73.

    ERIC Educational Resources Information Center

    Akman, Allan; Nordhauser, Fred

    This report presents the conceptual design of a computer-based linear programing model of the Air Force officer procurement system called TOPOPS. The TOPOPS model is an aggregate model which simulates officer accession and training and is directed at optimizing officer procurement in terms of either minimizing cost or maximizing accession quality…

  19. Improved sonic-box computer program for calculating transonic aerodynamic loads on oscillating wings with thickness

    NASA Technical Reports Server (NTRS)

    Ruo, S. Y.

    1978-01-01

    A computer program was developed to account approximately for the effects of finite wing thickness in transonic potential flow over an oscillating wing of finite span. The program is based on the original sonic box computer program for planar wings, which was extended to account for the effect of wing thickness. Computational efficiency and accuracy were improved and swept trailing edges were accounted for. The nonuniform flow caused by finite thickness was accounted for by applying the local linearization concept with an appropriate coordinate transformation. A brief description of each computer routine and of the cubic spline and spline-surface data fitting techniques used in the program is given, and the method of input is shown in detail. Sample calculations as well as a complete listing of the computer program are presented.

  20. A binary linear programming formulation of the graph edit distance.

    PubMed

    Justice, Derek; Hero, Alfred

    2006-08-01

    A binary linear programming formulation of the graph edit distance for unweighted, undirected graphs with vertex attributes is derived and applied to a graph recognition problem. A general formulation for editing graphs is used to derive a graph edit distance that is proven to be a metric, provided the cost function for individual edit operations is a metric. Then, a binary linear program is developed for computing this graph edit distance, and polynomial time methods for determining upper and lower bounds on the solution of the binary program are derived by applying solution methods for standard linear programming and the assignment problem. A recognition problem of comparing a sample input graph to a database of known prototype graphs in the context of a chemical information system is presented as an application of the new method. The costs associated with various edit operations are chosen by using a minimum normalized variance criterion applied to pairwise distances between nearest neighbors in the database of prototypes. The new metric is shown to perform quite well in comparison to existing metrics when applied to a database of chemical graphs.
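
    The assignment-problem bound mentioned in this record can be illustrated with a vertex-level assignment that ignores edge costs; the sketch below is such a crude approximation (not the paper's binary LP), with illustrative substitution and insertion/deletion costs.

    ```python
    # Crude assignment-based approximation of graph edit distance (not the paper's binary LP):
    # vertices of the two graphs are matched optimally using substitution and insertion/
    # deletion costs, while edge costs are ignored. All costs here are illustrative.
    import numpy as np
    from scipy.optimize import linear_sum_assignment

    def vertex_assignment_cost(labels_g, labels_h, c_sub=1.0, c_indel=1.0):
        """Cost of an optimal vertex-level edit assignment between two labeled graphs."""
        n, m = len(labels_g), len(labels_h)
        BIG = 1e6                                    # forbids matching a vertex to the wrong dummy slot
        C = np.zeros((n + m, n + m))
        C[:n, :m] = [[0.0 if a == b else c_sub for b in labels_h] for a in labels_g]
        delete = np.full((n, n), BIG); np.fill_diagonal(delete, c_indel)
        insert = np.full((m, m), BIG); np.fill_diagonal(insert, c_indel)
        C[:n, m:] = delete                           # deleting a vertex of the first graph
        C[n:, :m] = insert                           # inserting a vertex of the second graph
        rows, cols = linear_sum_assignment(C)        # optimal assignment (Hungarian-type solver)
        return C[rows, cols].sum()

    # Two toy "molecules" described only by their atom labels.
    print(vertex_assignment_cost(["C", "C", "O", "N"], ["C", "O", "O"]))   # -> 2.0
    ```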

  1. Hybrid 3-D rocket trajectory program. Part 1: Formulation and analysis. Part 2: Computer programming and user's instruction. [computerized simulation using three dimensional motion analysis

    NASA Technical Reports Server (NTRS)

    Huang, L. C. P.; Cook, R. A.

    1973-01-01

    Models utilizing various sub-sets of the six degrees of freedom are used in trajectory simulation. A 3-D model with only linear degrees of freedom is especially attractive, since the coefficients for the angular degrees of freedom are the most difficult to determine and the angular equations are the most time consuming for the computer to evaluate. A computer program is developed that uses three separate subsections to predict trajectories. A launch rail subsection is used until the rocket has left its launcher. The program then switches to a special 3-D section which computes motions in two linear and one angular degrees of freedom. When the rocket trims out, the program switches to the standard, three linear degrees of freedom model.

  2. Solving deterministic non-linear programming problem using Hopfield artificial neural network and genetic programming techniques

    NASA Astrophysics Data System (ADS)

    Vasant, P.; Ganesan, T.; Elamvazuthi, I.

    2012-11-01

    Fairly reasonable results have been obtained in the past for non-linear engineering problems using optimization techniques such as neural networks, genetic algorithms, and fuzzy logic independently. Increasingly, hybrid techniques are being used to solve non-linear problems to obtain better output. This paper discusses the use of a neuro-genetic hybrid technique to optimize geological structure mapping, which is known as seismic surveying. It involves the minimization of an objective function subject to geophysical and operational constraints. In this work, the optimization was initially performed using genetic programming, followed by a hybrid neuro-genetic programming approach. Comparative studies and analysis were then carried out on the optimized results. The results indicate that the hybrid neuro-genetic technique produced better results than the stand-alone genetic programming method.

  3. Programming Models for Concurrency and Real-Time

    NASA Astrophysics Data System (ADS)

    Vitek, Jan

    Modern real-time applications are increasingly large, complex and concurrent systems which must meet stringent performance and predictability requirements. Programming those systems requires fundamental advances in programming languages and runtime systems. This talk presents our work on Flexotasks, a programming model for concurrent, real-time systems inspired by stream processing and concurrent active objects. Among the key innovations in Flexotasks is support for both real-time garbage collection and region-based memory with an ownership type system for static safety. Communication between tasks is performed by channels with a linear type discipline to avoid copying messages, and by a non-blocking transactional memory facility. We have evaluated our model empirically within two distinct implementations, one based on Purdue’s Ovm research virtual machine framework and the other on Websphere, IBM’s production real-time virtual machine. We have written a number of small programs, as well as a 30 KLOC avionics collision detector application. We show that Flexotasks are capable of executing periodic threads at 10 KHz with a standard deviation of 1.2 us and have performance competitive with hand-coded C programs.

  4. On the Feasibility of a Generalized Linear Program

    DTIC Science & Technology

    1989-03-01

    generalized linear program by applying the same algorithm to a "phase-one" problem without requiring that the initial basic feasible solution to the latter be non-degenerate.

  5. Development of a validation model for the defense meteorological satellite program's special sensor microwave imager

    NASA Technical Reports Server (NTRS)

    Swift, C. T.; Goodberlet, M. A.; Wilkerson, J. C.

    1990-01-01

    For the Defense Meteorological Satellite Program's (DMSP) Special Sensor Microwave/Imager (SSM/I), an operational wind speed algorithm was developed. The algorithm is based on the D-matrix approach, which seeks a linear relationship between measured SSM/I brightness temperatures and environmental parameters. D-matrix performance was validated by comparing algorithm-derived wind speeds with near-simultaneous and co-located measurements made by off-shore ocean buoys. Other topics include error budget modeling, alternate wind speed algorithms, and D-matrix performance with one or more inoperative SSM/I channels.

  6. Multiple object tracking using the shortest path faster association algorithm.

    PubMed

    Xi, Zhenghao; Liu, Heping; Liu, Huaping; Yang, Bin

    2014-01-01

    To solve the problem of persistent multiple object tracking in cluttered environments, this paper presents a novel tracking association approach based on the shortest path faster algorithm. First, the multiple object tracking is formulated as an integer programming problem on a flow network. Then we relax the integer program to a standard linear programming problem, so that the global optimum can be quickly obtained using the shortest path faster algorithm. The proposed method avoids the difficulties of integer programming, and it has a lower worst-case complexity than competing methods but better robustness and tracking accuracy in complex environments. Simulation results show that the proposed algorithm takes less time than other state-of-the-art methods and can operate in real time.
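
    The "relax the integer program to an LP" step can be shown on a tiny two-frame association problem: because the constraint matrix of the assignment/flow formulation is totally unimodular, the LP optimum is already integral. The sketch below uses a generic LP solver on invented costs (the paper instead solves the flow network with the shortest path faster algorithm).

    ```python
    # Tiny sketch of the "relax the integer program to an LP" step for a two-frame association
    # problem with invented costs (the paper instead solves the flow network with the shortest
    # path faster algorithm). The constraint matrix is totally unimodular, so the LP optimum
    # is integral even though integrality is never enforced.
    import numpy as np
    from scipy.optimize import linprog

    cost = np.array([[1.0, 4.0, 6.0],      # cost[i, j]: associate track i with detection j
                     [3.0, 1.5, 5.0],
                     [5.0, 4.0, 2.0]])
    n = cost.shape[0]

    c = cost.ravel()                       # x[i * n + j] = 1 if track i is matched to detection j
    A_eq = np.zeros((2 * n, n * n))
    for i in range(n):
        A_eq[i, i * n:(i + 1) * n] = 1.0   # each track is matched exactly once
    for j in range(n):
        A_eq[n + j, j::n] = 1.0            # each detection is matched exactly once
    b_eq = np.ones(2 * n)

    res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0.0, 1.0)] * (n * n))
    print(np.round(res.x.reshape(n, n)))   # an integral (0/1) assignment matrix
    ```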

  7. Multiple Object Tracking Using the Shortest Path Faster Association Algorithm

    PubMed Central

    Liu, Heping; Liu, Huaping; Yang, Bin

    2014-01-01

    To solve the problem of persistent multiple object tracking in cluttered environments, this paper presents a novel tracking association approach based on the shortest path faster algorithm. First, the multiple object tracking is formulated as an integer programming problem on a flow network. Then we relax the integer program to a standard linear programming problem, so that the global optimum can be quickly obtained using the shortest path faster algorithm. The proposed method avoids the difficulties of integer programming, and it has a lower worst-case complexity than competing methods but better robustness and tracking accuracy in complex environments. Simulation results show that the proposed algorithm takes less time than other state-of-the-art methods and can operate in real time. PMID:25215322

  8. Linear-constraint wavefront control for exoplanet coronagraphic imaging systems

    NASA Astrophysics Data System (ADS)

    Sun, He; Eldorado Riggs, A. J.; Kasdin, N. Jeremy; Vanderbei, Robert J.; Groff, Tyler Dean

    2017-01-01

    A coronagraph is a leading technology for achieving high-contrast imaging of exoplanets in a space telescope. It uses a system of several masks to modify the diffraction and achieve extremely high contrast in the image plane around target stars. However, coronagraphic imaging systems are very sensitive to optical aberrations, so wavefront correction using deformable mirrors (DMs) is necessary to avoid contrast degradation in the image plane. Electric field conjugation (EFC) and stroke minimization (SM) are the two primary high-contrast wavefront controllers explored in the past decade. EFC minimizes the average contrast in the search areas while regularizing the strength of the control inputs. Stroke minimization calculates the minimum DM commands under the constraint that a target average contrast is achieved. Recently in the High Contrast Imaging Lab at Princeton University (HCIL), a new linear-constraint wavefront controller based on stroke minimization was developed and demonstrated using numerical simulation. Instead of only constraining the average contrast over the entire search area, the new controller constrains the electric field of each single pixel using linear programming, which could lead to significant increases in the speed of the wavefront correction and also create more uniform dark holes. As a follow-up to this work, another linear-constraint controller modified from EFC is demonstrated theoretically and numerically, and the lab verification of the linear-constraint controllers is reported. Based on the simulation and lab results, the pros and cons of linear-constraint controllers are carefully compared with EFC and stroke minimization.

  9. Multi-target detection and positioning in crowds using multiple camera surveillance

    NASA Astrophysics Data System (ADS)

    Huang, Jiahu; Zhu, Qiuyu; Xing, Yufeng

    2018-04-01

    In this study, we propose a pixel correspondence algorithm for positioning in crowds based on constraints on the distance between lines of sight, grayscale differences, and height in a world coordinates system. First, a Gaussian mixture model is used to obtain the background and foreground from multi-camera videos. Second, the hair and skin regions are extracted as regions of interest. Finally, the correspondences between each pixel in the region of interest are found under multiple constraints and the targets are positioned by pixel clustering. The algorithm can provide appropriate redundancy information for each target, which decreases the risk of losing targets due to a large viewing angle and wide baseline. To address the correspondence problem for multiple pixels, we construct a pixel-based correspondence model based on a similar permutation matrix, which converts the correspondence problem into a linear programming problem where a similar permutation matrix is found by minimizing an objective function. The correct pixel correspondences can be obtained by determining the optimal solution of this linear programming problem and the three-dimensional position of the targets can also be obtained by pixel clustering. Finally, we verified the algorithm with multiple cameras in experiments, which showed that the algorithm has high accuracy and robustness.

  10. Dosimeter-Type NOx Sensing Properties of KMnO4 and Its Electrical Conductivity during Temperature Programmed Desorption

    PubMed Central

    Groβ, Andrea; Kremling, Michael; Marr, Isabella; Kubinski, David J.; Visser, Jacobus H.; Tuller, Harry L.; Moos, Ralf

    2013-01-01

    An impedimetric NOx dosimeter based on the NOx sorption material KMnO4 is proposed. In addition to its application as a low level NOx dosimeter, KMnO4 shows potential as a precious metal free lean NOx trap material (LNT) for NOx storage catalysts (NSC) enabling electrical in-situ diagnostics. With this dosimeter, low levels of NO and NO2 exposure can be detected electrically as instantaneous values at 380 °C by progressive NOx accumulation in the KMnO4 based sensitive layer. The linear NOx sensing characteristics are recovered periodically by heating to 650 °C or switching to rich atmospheres. Further insight into the NOx sorption-dependent conductivity of the KMnO4-based material is obtained by the novel eTPD method that combines electrical characterization with classical temperature programmed desorption (TPD). The NOx loading amount increases proportionally to the NOx exposure time at sorption temperature. The cumulated NOx exposure, as well as the corresponding NOx loading state, can be detected linearly by electrical means in two modes: (1) time-continuously during the sorption interval including NOx concentration information from the signal derivative or (2) during the short-term thermal NOx release. PMID:23549366

  11. Dynamic modelling and simulation of linear Fresnel solar field model based on molten salt heat transfer fluid

    NASA Astrophysics Data System (ADS)

    Hakkarainen, Elina; Tähtinen, Matti

    2016-05-01

    Demonstrations of direct steam generation (DSG) in linear Fresnel collectors (LFC) have given promising results related to higher steam parameters compared to the current state-of-the-art parabolic trough collector (PTC) technology using oil as heat transfer fluid (HTF). However, DSG technology lacks a feasible solution for a long-term thermal energy storage (TES) system, which is important for CSP technology in order to offer dispatchable power. Recently, molten salts have been proposed to be used as HTF and directly as storage medium in both line-focusing solar fields, offering storage capacities of several hours. This direct molten salt (DMS) storage concept has already gained operational experience in a solar tower power plant, and it is in the demonstration phase for both LFC and PTC systems. Dynamic simulation programs are a valuable tool for the design and optimization of solar power plants. In this work, the APROS dynamic simulation program is used to model a DMS linear Fresnel solar field with a two-tank TES system, and example simulation results are presented in order to verify the functionality of the model and the capability of APROS for CSP modelling and simulation.

  12. Thin Cloud Detection Method by Linear Combination Model of Cloud Image

    NASA Astrophysics Data System (ADS)

    Liu, L.; Li, J.; Wang, Y.; Xiao, Y.; Zhang, W.; Zhang, S.

    2018-04-01

    Existing cloud detection methods in photogrammetry often extract image features directly from remote sensing images and then use them to classify image regions as cloud or non-cloud. When clouds are thin and small, however, these methods become inaccurate. In this paper, a linear combination model of cloud images is proposed; with this model, the underlying surface information of remote sensing images can be removed, so the cloud detection result becomes more accurate. First, the automatic cloud detection program in this paper uses the linear combination model to separate the cloud information and surface information in transparent cloud images, and then uses different image features to recognize the cloud parts. For computational efficiency, an AdaBoost classifier was introduced to combine the different features into a cloud classifier. AdaBoost can select the most effective features from many ordinary features, so the calculation time is largely reduced. Finally, we selected a tree-structure-based cloud detection method and a multiple-feature detection method using an SVM classifier for comparison with the proposed method; the experimental data show that the proposed cloud detection program has high accuracy and fast calculation speed.

  13. A linear programming approach to reconstructing subcellular structures from confocal images for automated generation of representative 3D cellular models.

    PubMed

    Wood, Scott T; Dean, Brian C; Dean, Delphine

    2013-04-01

    This paper presents a novel computer vision algorithm to analyze 3D stacks of confocal images of fluorescently stained single cells. The goal of the algorithm is to create representative in silico model structures that can be imported into finite element analysis software for mechanical characterization. Segmentation of cell and nucleus boundaries is accomplished via standard thresholding methods. Using novel linear programming methods, a representative actin stress fiber network is generated by computing a linear superposition of fibers having minimum discrepancy compared with an experimental 3D confocal image. Qualitative validation is performed through analysis of seven 3D confocal image stacks of adherent vascular smooth muscle cells (VSMCs) grown in 2D culture. The presented method is able to automatically generate 3D geometries of the cell's boundary, nucleus, and representative F-actin network based on standard cell microscopy data. These geometries can be used for direct importation and implementation in structural finite element models for analysis of the mechanics of a single cell to potentially speed discoveries in the fields of regenerative medicine, mechanobiology, and drug discovery. Copyright © 2012 Elsevier B.V. All rights reserved.

  14. Measurement-Based Linear Optics

    NASA Astrophysics Data System (ADS)

    Alexander, Rafael N.; Gabay, Natasha C.; Rohde, Peter P.; Menicucci, Nicolas C.

    2017-03-01

    A major challenge in optical quantum processing is implementing large, stable interferometers. We offer a novel approach: virtual, measurement-based interferometers that are programed on the fly solely by the choice of homodyne measurement angles. The effects of finite squeezing are captured as uniform amplitude damping. We compare our proposal to existing (physical) interferometers and consider its performance for BosonSampling, which could demonstrate postclassical computational power in the near future. We prove its efficiency in time and squeezing (energy) in this setting.

  15. Method and system for knowledge discovery using non-linear statistical analysis and a 1st and 2nd tier computer program

    DOEpatents

    Hively, Lee M [Philadelphia, TN

    2011-07-12

    The invention relates to a method and apparatus for simultaneously processing different sources of test data into informational data and then processing different categories of informational data into knowledge-based data. The knowledge-based data can then be communicated between nodes in a system of multiple computers according to rules for a type of complex, hierarchical computer system modeled on a human brain.

  16. Probabilistic methods for rotordynamics analysis

    NASA Technical Reports Server (NTRS)

    Wu, Y.-T.; Torng, T. Y.; Millwater, H. R.; Fossum, A. F.; Rheinfurth, M. H.

    1991-01-01

    This paper summarizes the development of the methods and a computer program to compute the probability of instability of dynamic systems that can be represented by a system of second-order ordinary linear differential equations. Two instability criteria based upon the eigenvalues or Routh-Hurwitz test functions are investigated. Computational methods based on a fast probability integration concept and an efficient adaptive importance sampling method are proposed to perform efficient probabilistic analysis. A numerical example is provided to demonstrate the methods.

  17. A Flash X-Ray Facility for the Naval Postgraduate School

    DTIC Science & Technology

    1985-06-01

    ionizing radiation, NPS has had active programs with a Van de Graaff generator, a reactor, radioactive sources, X-ray machines and a linear electron ...interaction of radiation with matter and with coherent radiation. Currently the most active program is at the linear electron accelerator which over...twenty years has produced some 75 theses. The flash X-ray machine was obtained to expand and complement the capabilities of the linear electron

  18. Discrete Methods and their Applications

    DTIC Science & Technology

    1993-02-03

    problem of finding all near-optimal solutions to a linear program. In paper [18], we give a brief and elementary proof of a result of Hoffman [1952] about...relies only on linear programming duality; second, we obtain geometric and algebraic representations of the bounds that are determined explicitly in...same. We have studied the problem of finding the minimum n such that a given unit interval graph is an n-graph. A linear time algorithm to compute

  19. Summer Research Program (1992). Summer Faculty Research Program (SFRP) Reports. Volume 2. Armstrong Laboratory

    DTIC Science & Technology

    1992-12-01

    desirable. In this study, the proposed model consists of a thick-walled, highly deformable elastic tube in which the blood flow is described by linearized ...presented a mechanical model consisting of linearized Navier-Stokes and finite elasticity equations to predict blood pooling under acceleration stress... linear multielement model of the cardiovascular system which can calculate blood pressures and flows at any point in the cardio- vascular system. It

  20. Training Employers to Implement Health Promotion Programs: Results From the CDC Work@Health® Program.

    PubMed

    Cluff, Laurie A; Lang, Jason E; Rineer, Jennifer R; Jones-Jack, Nkenge H; Strazza, Karen M

    2018-05-01

    The Centers for Disease Control and Prevention (CDC) initiated the Work@Health Program to teach employers how to improve worker health using evidence-based strategies. Program goals included (1) determining the best way(s) to deliver employer training, (2) increasing employers' knowledge of workplace health promotion (WHP), and (3) increasing the number of evidence-based WHP interventions at employers' worksites. This study is one of the few to examine the effectiveness of a program designed to train employers how to implement WHP programs. Pre- and posttest design. Training via 1 of 3 formats: hands-on, online, or blended. Two hundred six individual participants from 173 employers of all sizes. Eight-module training curriculum to guide participants through building an evidence-based WHP program, followed by 6 to 10 months of technical assistance. The CDC Worksite Health ScoreCard and a knowledge, attitudes, and behavior survey. Descriptive statistics, paired t tests, and mixed linear models. Participants' posttraining mean knowledge scores were significantly greater than the pretraining scores (61.1 vs 53.2, P < .001). A year after training, employers had significantly increased the number of evidence-based interventions in place (47.7 vs 35.5, P < .001). Employers' improvements did not significantly differ among the 3 training delivery formats. The Work@Health Program provided employers with knowledge to implement WHP interventions. The training and technical assistance provided structure, practical guidance, and tools to assess needs and select, implement, and evaluate interventions.

  1. EXPERIMENTS IN THE USE OF PROGRAMED MATERIALS IN TEACHING AN INTRODUCTORY COURSE IN THE BIOLOGICAL SCIENCES AT THE COLLEGE LEVEL.

    ERIC Educational Resources Information Center

    KANTASEWI, NIPHON

    THE PURPOSE OF THE STUDY WAS TO COMPARE THE EFFECTIVENESS OF (1) LECTURE PRESENTATIONS, (2) LINEAR PROGRAM USE IN CLASS WITH AND WITHOUT DISCUSSION, AND (3) LINEAR PROGRAMS USED OUTSIDE OF CLASS WITH INCLASS PROBLEMS OR DISCUSSION. THE 126 COLLEGE STUDENTS ENROLLED IN A BACTERIOLOGY COURSE WERE RANDOMLY ASSIGNED TO THREE GROUPS. IN A SUCCEEDING…

  2. A Comprehensive Meta-Analysis of Triple P-Positive Parenting Program Using Hierarchical Linear Modeling: Effectiveness and Moderating Variables

    ERIC Educational Resources Information Center

    Nowak, Christoph; Heinrichs, Nina

    2008-01-01

    A meta-analysis encompassing all studies evaluating the impact of the Triple P-Positive Parenting Program on parent and child outcome measures was conducted in an effort to identify variables that moderate the program's effectiveness. Hierarchical linear models (HLM) with three levels of data were employed to analyze effect sizes. The results (N =…

  3. Waste management with recourse: an inexact dynamic programming model containing fuzzy boundary intervals in objectives and constraints.

    PubMed

    Tan, Q; Huang, G H; Cai, Y P

    2010-09-01

    The existing inexact optimization methods based on interval-parameter linear programming can hardly address problems where coefficients in objective functions are subject to dual uncertainties. In this study, a superiority-inferiority-based inexact fuzzy two-stage mixed-integer linear programming (SI-IFTMILP) model was developed for supporting municipal solid waste management under uncertainty. The developed SI-IFTMILP approach is capable of tackling dual uncertainties presented as fuzzy boundary intervals (FuBIs) in not only constraints, but also objective functions. Uncertainties expressed as a combination of intervals and random variables could also be explicitly reflected. An algorithm with high computational efficiency was provided to solve SI-IFTMILP. SI-IFTMILP was then applied to a long-term waste management case to demonstrate its applicability. Useful interval solutions were obtained. SI-IFTMILP could help generate dynamic facility-expansion and waste-allocation plans, as well as provide corrective actions when anticipated waste management plans are violated. It could also greatly reduce system-violation risk and enhance system robustness through examining two sets of penalties resulting from variations in fuzziness and randomness. Moreover, four possible alternative models were formulated to solve the same problem; solutions from them were then compared with those from SI-IFTMILP. The results indicate that SI-IFTMILP could provide more reliable solutions than the alternatives. 2010 Elsevier Ltd. All rights reserved.

  4. User's manual for interfacing a leading edge, vortex rollup program with two linear panel methods

    NASA Technical Reports Server (NTRS)

    Desilva, B. M. E.; Medan, R. T.

    1979-01-01

    Sufficient instructions are provided for interfacing the Mangler-Smith, leading edge vortex rollup program with a vortex lattice (POTFAN) method and an advanced higher order, singularity linear analysis for computing the vortex effects for simple canard wing combinations.

  5. ELAS: A general-purpose computer program for the equilibrium problems of linear structures. Volume 2: Documentation of the program. [subroutines and flow charts

    NASA Technical Reports Server (NTRS)

    Utku, S.

    1969-01-01

    A general purpose digital computer program for the in-core solution of linear equilibrium problems of structural mechanics is documented. The program requires minimum input for the description of the problem. The solution is obtained by means of the displacement method and the finite element technique. Almost any geometry and structure may be handled because of the availability of linear, triangular, quadrilateral, tetrahedral, hexahedral, conical, triangular torus, and quadrilateral torus elements. The assumption of piecewise linear deflection distribution insures monotonic convergence of the deflections from the stiffer side with decreasing mesh size. The stresses are provided by the best-fit strain tensors in the least squares at the mesh points where the deflections are given. The selection of local coordinate systems whenever necessary is automatic. The core memory is used by means of dynamic memory allocation, an optional mesh-point relabelling scheme and imposition of the boundary conditions during the assembly time.

  6. Applying linear programming to estimate fluxes in ecosystems or food webs: An example from the herpetological assemblage of the freshwater Everglades

    USGS Publications Warehouse

    Diffendorfer, James E.; Richards, Paul M.; Dalrymple, George H.; DeAngelis, Donald L.

    2001-01-01

    We present the application of Linear Programming for estimating biomass fluxes in ecosystem and food web models. We use the herpetological assemblage of the Everglades as an example. We developed food web structures for three common Everglades freshwater habitat types: marsh, prairie, and upland. We obtained a first estimate of the fluxes using field data, literature estimates, and professional judgment. Linear programming was used to obtain a consistent and better estimate of the set of fluxes, while maintaining mass balance and minimizing deviations from point estimates. The results support the view that the Everglades is a spatially heterogeneous system, with changing patterns of energy flux, species composition, and biomasses across the habitat types. We show that a food web/ecosystem perspective, combined with Linear Programming, is a robust method for describing food webs and ecosystems that requires minimal data, produces useful post-solution analyses, and generates hypotheses regarding the structure of energy flow in the system.
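
    The flux-balancing step described above can be sketched as a small LP: adjust the point estimates of the fluxes as little as possible (in total absolute deviation) while enforcing mass balance at every compartment. The two-compartment network, point estimates, and units below are invented for illustration, not the Everglades data.

    ```python
    # Sketch of the flux-balancing idea: adjust point estimates of the fluxes as little as
    # possible (in total absolute deviation) while enforcing mass balance at each compartment.
    # The two-compartment network, point estimates and units are invented for illustration.
    import numpy as np
    from scipy.optimize import linprog

    # Fluxes: f0 external -> A, f1 A -> B, f2 A -> export, f3 B -> export.
    # S[c, f] = +1 if flux f enters compartment c, -1 if it leaves it.
    S = np.array([[1.0, -1.0, -1.0, 0.0],     # compartment A
                  [0.0, 1.0, 0.0, -1.0]])     # compartment B
    f_hat = np.array([10.0, 6.0, 3.0, 5.0])   # point estimates (g m^-2 yr^-1), not mass-balanced

    n = len(f_hat)
    # Variables: [f (adjusted fluxes), d (absolute deviations |f - f_hat|)].
    c = np.concatenate([np.zeros(n), np.ones(n)])                 # minimize total deviation
    A_eq = np.hstack([S, np.zeros((S.shape[0], n))])              # mass balance: S f = 0
    b_eq = np.zeros(S.shape[0])
    A_ub = np.vstack([np.hstack([np.eye(n), -np.eye(n)]),         #   f - f_hat <= d
                      np.hstack([-np.eye(n), -np.eye(n)])])       # -(f - f_hat) <= d
    b_ub = np.concatenate([f_hat, -f_hat])

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * (2 * n))
    print("balanced fluxes:", np.round(res.x[:n], 2))
    ```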

  7. A Linear Programming Approach to Routing Control in Networks of Constrained Nonlinear Positive Systems with Concave Flow Rates

    NASA Technical Reports Server (NTRS)

    Arneson, Heather M.; Dousse, Nicholas; Langbort, Cedric

    2014-01-01

    We consider control design for positive compartmental systems in which each compartment's outflow rate is described by a concave function of the amount of material in the compartment. We address the problem of determining the routing of material between compartments to satisfy time-varying state constraints while ensuring that material reaches its intended destination over a finite time horizon. We give sufficient conditions for the existence of a time-varying state-dependent routing strategy which ensures that the closed-loop system satisfies basic network properties of positivity, conservation and interconnection while ensuring that capacity constraints are satisfied, when possible, or adjusted if a solution cannot be found. These conditions are formulated as a linear programming problem. Instances of this linear programming problem can be solved iteratively to generate a solution to the finite horizon routing problem. Results are given for the application of this control design method to an example problem. Key words: linear programming; control of networks; positive systems; controller constraints and structure.

  8. DOE CALiPER Program, Report 21.2: Linear (T8) LED Lamp Performance in Five Types of Recessed Troffers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miller, Naomi J.; Perrin, Tess E.; Royer, Michael P.

    2014-05-20

    Although lensed troffers are numerous, there are many other types of optical systems as well. This report looked at the performance of three linear (T8) LED lamps chosen primarily based on their luminous intensity distributions (narrow, medium, and wide beam angles) as well as a benchmark fluorescent lamp in five different troffer types. Also included are the results of a subjective evaluation. Results show that linear (T8) LED lamps can improve luminaire efficiency in K12-lensed and parabolic-louvered troffers, effect little change in volumetric and high-performance diffuse-lensed type luminaires, but reduce efficiency in recessed indirect troffers. These changes can be accompanied by visual appearance and visual comfort consequences, especially when LED lamps with clear lenses and narrow distributions are installed. Linear (T8) LED lamps with diffuse apertures exhibited wider beam angles, performed more similarly to fluorescent lamps, and received better ratings from observers. Guidance is provided on which luminaires are the best candidates for retrofitting with linear (T8) LED lamps.

  9. Weighted functional linear regression models for gene-based association analysis.

    PubMed

    Belonogova, Nadezhda M; Svishcheva, Gulnara R; Wilson, James F; Campbell, Harry; Axenovich, Tatiana I

    2018-01-01

    Functional linear regression models are effectively used in gene-based association analysis of complex traits. These models combine information about individual genetic variants, taking into account their positions and reducing the influence of noise and/or observation errors. To increase the power of methods, where several differently informative components are combined, weights are introduced to give the advantage to more informative components. Allele-specific weights have been introduced to collapsing and kernel-based approaches to gene-based association analysis. Here we have for the first time introduced weights to functional linear regression models adapted for both independent and family samples. Using data simulated on the basis of GAW17 genotypes and weights defined by allele frequencies via the beta distribution, we demonstrated that type I errors correspond to declared values and that increasing the weights of causal variants allows the power of functional linear models to be increased. We applied the new method to real data on blood pressure from the ORCADES sample. Five of the six known genes with P < 0.1 in at least one analysis had lower P values with weighted models. Moreover, we found an association between diastolic blood pressure and the VMP1 gene (P = 8.18×10^-6), when we used a weighted functional model. For this gene, the unweighted functional and weighted kernel-based models had P = 0.004 and 0.006, respectively. The new method has been implemented in the program package FREGAT, which is freely available at https://cran.r-project.org/web/packages/FREGAT/index.html.

  10. Computer Program For Linear Algebra

    NASA Technical Reports Server (NTRS)

    Krogh, F. T.; Hanson, R. J.

    1987-01-01

    Collection of routines provided for basic vector operations. The Basic Linear Algebra Subprogram (BLAS) library is a collection of FORTRAN-callable routines that employ standard techniques to perform the basic operations of numerical linear algebra.
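    For readers more familiar with Python than FORTRAN, the same level-1 operations (dot products, scaled vector additions, norms) are exposed through NumPy and through SciPy's thin BLAS wrappers. A small sketch:

```python
# BLAS-style level-1 vector operations. NumPy dispatches to an optimized BLAS
# internally; scipy.linalg.blas exposes the raw routines (ddot, dnrm2, daxpy).
import numpy as np
from scipy.linalg import blas

x = np.array([1.0, 2.0, 3.0])
y = np.array([4.0, 5.0, 6.0])

print(np.dot(x, y))             # DDOT:  x . y
print(np.linalg.norm(x))        # DNRM2: Euclidean norm of x
print(blas.daxpy(x, y, a=2.0))  # DAXPY: returns a*x + y
```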

  11. Multispectral Linear Array detector technology

    NASA Astrophysics Data System (ADS)

    Tower, J. R.; McCarthy, B. M.; Pellon, L. E.; Strong, R. T.; Elabd, H.

    1984-01-01

    The Multispectral Linear Array (MLA) program sponsored by NASA has the aim to extend space-based remote sensor capabilities. The technology development effort involves the realization of very large, all-solid-state, pushbroom focal planes. The pushbroom, staring focal planes will contain thousands of detectors with the objective to provide two orders of magnitude improvement in detector dwell time compared to present Landsat mechanically scanned systems. Attenton is given to visible and near-infrared sensor development, the shortwave infrared sensor, aspects of filter technology development, the packaging concept, and questions of system performance. First-sample, four-band interference filters have been fabricated successfully, and a hybrid packaging technology is being developed.

  12. Travel Demand Modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Southworth, Frank; Garrow, Dr. Laurie

    This chapter describes the principal types of both passenger and freight demand models in use today, providing a brief history of model development supported by references to a number of popular texts on the subject, and directing the reader to papers covering some of the more recent technical developments in the area. Over the past half century a variety of methods have been used to estimate and forecast travel demands, drawing concepts from economic/utility maximization theory, transportation system optimization and spatial interaction theory, using and often combining solution techniques as varied as Box-Jenkins methods, non-linear multivariate regression, non-linear mathematical programming,more » and agent-based microsimulation.« less

  13. Model-based optimal design of experiments - semidefinite and nonlinear programming formulations

    PubMed Central

    Duarte, Belmiro P.M.; Wong, Weng Kee; Oliveira, Nuno M.C.

    2015-01-01

    We use mathematical programming tools, such as Semidefinite Programming (SDP) and Nonlinear Programming (NLP)-based formulations to find optimal designs for models used in chemistry and chemical engineering. In particular, we employ local design-based setups in linear models and a Bayesian setup in nonlinear models to find optimal designs. In the latter case, Gaussian Quadrature Formulas (GQFs) are used to evaluate the optimality criterion averaged over the prior distribution for the model parameters. Mathematical programming techniques are then applied to solve the optimization problems. Because such methods require the design space be discretized, we also evaluate the impact of the discretization scheme on the generated design. We demonstrate the techniques for finding D–, A– and E–optimal designs using design problems in biochemical engineering and show the method can also be directly applied to tackle additional issues, such as heteroscedasticity in the model. Our results show that the NLP formulation produces highly efficient D–optimal designs but is computationally less efficient than the SDP formulation. The efficiencies of the generated designs from the two methods are generally very close and so we recommend the SDP formulation in practice. PMID:26949279
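    As a rough illustration of the maxdet/SDP-style formulation on a discretized design space (a generic convex-optimization sketch, not the authors' code), one can maximize the log-determinant of the information matrix over candidate-point weights with CVXPY; the regression model and candidate grid below are invented for the example and require a solver with exponential-cone support (e.g., SCS):

```python
# Approximate D-optimal design weights on a discretized design space
# (a generic sketch, not the formulation from the paper).
# Model assumed here: quadratic regression y = b0 + b1*x + b2*x^2 on [-1, 1].
import numpy as np
import cvxpy as cp

grid = np.linspace(-1.0, 1.0, 21)                          # candidate design points
F = np.column_stack([np.ones_like(grid), grid, grid**2])   # regression vectors

w = cp.Variable(len(grid), nonneg=True)    # design weights
M = F.T @ cp.diag(w) @ F                   # information matrix (affine in w)
problem = cp.Problem(cp.Maximize(cp.log_det(M)), [cp.sum(w) == 1])
problem.solve()

support = grid[np.asarray(w.value).ravel() > 1e-3]
print("support points:", support)          # classic answer: near {-1, 0, 1}
```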

  14. Model-based optimal design of experiments - semidefinite and nonlinear programming formulations.

    PubMed

    Duarte, Belmiro P M; Wong, Weng Kee; Oliveira, Nuno M C

    2016-02-15

    We use mathematical programming tools, such as Semidefinite Programming (SDP) and Nonlinear Programming (NLP)-based formulations to find optimal designs for models used in chemistry and chemical engineering. In particular, we employ local design-based setups in linear models and a Bayesian setup in nonlinear models to find optimal designs. In the latter case, Gaussian Quadrature Formulas (GQFs) are used to evaluate the optimality criterion averaged over the prior distribution for the model parameters. Mathematical programming techniques are then applied to solve the optimization problems. Because such methods require the design space be discretized, we also evaluate the impact of the discretization scheme on the generated design. We demonstrate the techniques for finding D-, A- and E-optimal designs using design problems in biochemical engineering and show the method can also be directly applied to tackle additional issues, such as heteroscedasticity in the model. Our results show that the NLP formulation produces highly efficient D-optimal designs but is computationally less efficient than the SDP formulation. The efficiencies of the generated designs from the two methods are generally very close and so we recommend the SDP formulation in practice.

  15. Program Flow Analyzer. Volume 3

    DTIC Science & Technology

    1984-08-01

    metrics are defined using these basic terms. Of interest is another measure for the size of the program, called the volume: V = N × log₂ n. The unit of ... correlated to actual data and most useful for test. The formula describing difficulty may be expressed as: D = (n1/2) × (N2/n2) = 1/L. Difficulty, then, is the ... linearly independent program paths through any program graph. A maximal set of these linearly independent paths, called a "basis set," can always be found

  16. VIBRA: An interactive computer program for steady-state vibration response analysis of linear damped structures

    NASA Technical Reports Server (NTRS)

    Bowman, L. M.

    1984-01-01

    An interactive steady-state frequency response computer program with graphics is documented. Single or multiple forces may be applied to the structure, using a modal superposition approach to calculate the response. The method can be applied to linear, proportionally damped structures in which the damping may be viscous or structural. The theoretical approach and program organization are described. Example problems, user instructions, and a sample interactive session are given to demonstrate the program's capability in solving a variety of problems.

  17. Ranking Surgical Residency Programs: Reputation Survey or Outcomes Measures?

    PubMed

    Wilson, Adam B; Torbeck, Laura J; Dunnington, Gary L

    2015-01-01

    The release of general surgery residency program rankings by Doximity and U.S. News & World Report accentuates the need to define and establish measurable standards of program quality. This study evaluated the extent to which program rankings based solely on peer nominations correlated with familiar program outcomes measures. Publicly available data were collected for all 254 general surgery residency programs. To generate a rudimentary outcomes-based program ranking, surgery programs were rank-ordered according to an average percentile rank that was calculated using board pass rates and the prevalence of alumni publications. A Kendall τ-b rank correlation computed the linear association between program rankings based on reputation alone and those derived from outcomes measures to validate whether reputation was a reasonable surrogate for globally judging program quality. For the 218 programs with complete data eligible for analysis, the mean board pass rate was 72% with a standard deviation of 14%. A total of 60 programs were placed in the 75th percentile or above for the number of publications authored by program alumni. The correlational analysis reported a significant correlation of 0.428, indicating only a moderate association between programs ranked by outcomes measures and those ranked according to reputation. Seventeen programs that were ranked in the top 30 according to reputation were also ranked in the top 30 based on outcomes measures. This study suggests that reputation alone does not fully capture a representative snapshot of a program's quality. Rather, the use of multiple quantifiable indicators and attributes unique to programs ought to be given more consideration when assigning ranks to denote program quality. It is advised that the interpretation and subsequent use of program rankings be met with caution until further studies can rigorously demonstrate best practices for awarding program standings. Copyright © 2015 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.
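    The rank comparison itself is straightforward to reproduce in principle; with SciPy, Kendall's τ-b between two rankings is a single call (the rank vectors below are hypothetical placeholders, not the study's data):

```python
# Kendall tau-b between a reputation-based ranking and an outcomes-based ranking.
# The two rank vectors are invented placeholders, not data from the study.
from scipy.stats import kendalltau

reputation_rank = [1, 2, 3, 4, 5, 6, 7, 8]
outcomes_rank   = [2, 1, 4, 3, 7, 5, 8, 6]

tau, p_value = kendalltau(reputation_rank, outcomes_rank)  # tau-b by default
print(f"tau-b = {tau:.3f}, p = {p_value:.3f}")
```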

  18. Generalised Assignment Matrix Methodology in Linear Programming

    ERIC Educational Resources Information Center

    Jerome, Lawrence

    2012-01-01

    Discrete Mathematics instructors and students have long been struggling with various labelling and scanning algorithms for solving many important problems. This paper shows how to solve a wide variety of Discrete Mathematics and OR problems using assignment matrices and linear programming, specifically using Excel Solvers although the same…

  19. A simplified computer program for the prediction of the linear stability behavior of liquid propellant combustors

    NASA Technical Reports Server (NTRS)

    Mitchell, C. E.; Eckert, K.

    1979-01-01

    A program for predicting the linear stability of liquid propellant rocket engines is presented. The underlying model assumptions and analytical steps necessary for understanding the program and its input and output are also given. The rocket engine is modeled as a right circular cylinder with finite mean flow, a nozzle, and an injector with a concentrated combustion zone whose response is characterized either by an acoustic admittance or by the sensitive time lag theory. The resulting partial differential equations are combined into two governing integral equations by the use of the Green's function method. These equations are solved using a successive approximation technique for the small amplitude (linear) case. The computational method used as well as the various user options available are discussed. Finally, a flow diagram, sample input and output for a typical application and a complete program listing for program MODULE are presented.

  20. The fastclime Package for Linear Programming and Large-Scale Precision Matrix Estimation in R.

    PubMed

    Pang, Haotian; Liu, Han; Vanderbei, Robert

    2014-02-01

    We develop an R package fastclime for solving a family of regularized linear programming (LP) problems. Our package efficiently implements the parametric simplex algorithm, which provides a scalable and sophisticated tool for solving large-scale linear programs. As an illustrative example, one use of our LP solver is to implement an important sparse precision matrix estimation method called CLIME (Constrained L1 Minimization Estimator). Compared with existing packages for this problem such as clime and flare, our package has three advantages: (1) it efficiently calculates the full piecewise-linear regularization path; (2) it provides an accurate dual certificate as stopping criterion; (3) it is completely coded in C and is highly portable. This package is designed to be useful to statisticians and machine learning researchers for solving a wide range of problems.

  1. A green vehicle routing problem with customer satisfaction criteria

    NASA Astrophysics Data System (ADS)

    Afshar-Bakeshloo, M.; Mehrabi, A.; Safari, H.; Maleki, M.; Jolai, F.

    2016-12-01

    This paper develops an MILP model, named the Satisfactory-Green Vehicle Routing Problem (S-GVRP). It consists of routing a heterogeneous fleet of vehicles in order to serve a set of customers within predefined time windows. In this model, in addition to the traditional objective of the VRP, both pollution and customers' satisfaction are taken into account. Meanwhile, the introduced model provides an effective dashboard for decision-makers that determines appropriate routes, the best mixed fleet, and the speed and idle time of vehicles. Additionally, some new factors evaluate the greening of each decision based on three criteria. The model applies piecewise linear functions (PLFs) to linearize a nonlinear fuzzy interval for incorporating customers' satisfaction into the other linear objectives. We present a mixed integer linear programming formulation for the S-GVRP. The model enriches managerial insights by providing trade-offs between customers' satisfaction, total costs and emission levels. Finally, a numerical study shows the applicability of the model.
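    Piecewise linear functions of this kind are easy to evaluate outside the MILP as well; a hypothetical satisfaction-versus-lateness curve (breakpoints invented for illustration, not taken from the paper) can be expressed with a simple interpolation:

```python
# Hypothetical piecewise-linear satisfaction function of delivery lateness.
# Breakpoints are invented for illustration; inside an MILP they would be encoded
# with auxiliary variables (e.g., a lambda/SOS2 formulation) to stay linear.
import numpy as np

lateness_breaks = np.array([0.0, 15.0, 30.0, 60.0])   # minutes late
satisfaction    = np.array([1.0, 0.9, 0.5, 0.0])      # score at each breakpoint

def satisfaction_plf(lateness_minutes):
    return np.interp(lateness_minutes, lateness_breaks, satisfaction)

print(satisfaction_plf(10.0))   # ~0.933 (linear between 1.0 and 0.9)
print(satisfaction_plf(45.0))   # 0.25   (linear between 0.5 and 0.0)
```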

  2. Linear Goal Programming as a Military Decision Aid.

    DTIC Science & Technology

    1988-04-01

    ... air warfare, advanced armour warfare, the potential for space warfare, and many other advances have expanded the breadth of weapons employed to the ... written by A. Charnes and W. W. Cooper, Management Models and Industrial Applications of Linear Programming, in 1961. (3:5) Since this time linear

  3. Probabilistic analysis for fatigue strength degradation of materials

    NASA Technical Reports Server (NTRS)

    Royce, Lola

    1989-01-01

    This report presents the results of the first year of a research program conducted for NASA-LeRC by the University of Texas at San Antonio. The research included development of methodology that provides a probabilistic treatment of lifetime prediction of structural components of aerospace propulsion systems subjected to fatigue. Material strength degradation models, based on primitive variables, include both a fatigue strength reduction model and a fatigue crack growth model. Linear elastic fracture mechanics is utilized in the latter model. Probabilistic analysis is based on simulation, and both maximum entropy and maximum penalized likelihood methods are used for the generation of probability density functions. The resulting constitutive relationships are included in several computer programs, RANDOM2, RANDOM3, and RANDOM4. These programs determine the random lifetime of an engine component, in mechanical load cycles, to reach a critical fatigue strength or crack size. The material considered was a cast nickel base superalloy, one typical of those used in the Space Shuttle Main Engine.

  4. Stochastic Dynamic Mixed-Integer Programming (SD-MIP)

    DTIC Science & Technology

    2015-05-05

    stochastic linear programming (SLP) problems. By using a combination of ideas from cutting plane theory of deterministic MIP (especially disjunctive ... developed to date. b) As part of this project, we have also developed tools for very large scale Stochastic Linear Programming (SLP). There are ... several reasons for this. First, SLP models continue to challenge many of the fastest computers to date, and many applications within the DoD (e.g

  5. Notes of the Design of Two Supercavitating Hydrofoils

    DTIC Science & Technology

    1975-07-01

    [Table-of-contents and nomenclature fragments] Foil section characteristics: definition; Tulin two-term; Levi-Civita; Larock and Street two-term three-parameter program and inputs; linearized two ... Nomenclature (symbol, description, dimensions): A1, A2: angle distribution multipliers in the Levi-Civita program (radians); AR: aspect ratio; CL: lift coefficient; ... angle of attack (radian); B: constant angle in the Levi-Civita program (radian); δ: linearized angle of attack, superposed (degrees); C: Wu's 1955 program parameter

  6. Should we feed animals to feed people? An optimization-based evaluation of environmental, economic, and health aspects of human diets in the United States

    USDA-ARS?s Scientific Manuscript database

    Given global pressure to improve food security, there is increasing interest in the question of whether we should feed animals to feed humans. Another question is whether there is sufficient domestically produced food to meet a population’s nutrient requirements. A linear programming approach to ide...

  7. Municipal solid waste management planning for Xiamen City, China: a stochastic fractional inventory-theory-based approach.

    PubMed

    Chen, Xiujuan; Huang, Guohe; Zhao, Shan; Cheng, Guanhui; Wu, Yinghui; Zhu, Hua

    2017-11-01

    In this study, a stochastic fractional inventory-theory-based waste management planning (SFIWP) model was developed and applied to support long-term planning of municipal solid waste (MSW) management in Xiamen City, the special economic zone of Fujian Province, China. In the SFIWP model, the techniques of inventory modeling, stochastic linear fractional programming, and mixed-integer linear programming were integrated in a single framework. Issues of waste inventory in the MSW management system were addressed, and the system efficiency was maximized by considering maximum net-diverted waste under various constraint-violation risks. Decision alternatives for waste allocation and capacity expansion were also provided for MSW management planning in Xiamen. The results showed that about 4.24 × 10⁶ t of waste would be diverted from landfills when pᵢ is 0.01, which accounted for 93% of the waste in Xiamen City, and the waste diversion per unit of cost would be 26.327 × 10³ t per $10⁶. The capacities of MSW management facilities, including incinerators, the composting facility, and landfills, would be expanded due to the increasing waste generation rate.
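    Linear fractional objectives of this "benefit per unit cost" type can be handled with the classical Charnes-Cooper transformation, which turns the fractional program into an ordinary LP. The sketch below applies it to a toy problem with invented coefficients; it is a generic illustration, not the SFIWP model itself.

```python
# Charnes-Cooper transformation of a linear-fractional program into an LP:
#   maximize (c.x) / (d.x + beta)  s.t.  A x <= b, x >= 0,  with d.x + beta > 0.
# Substitute y = t*x, t = 1/(d.x + beta):
#   maximize c.y  s.t.  A y - b t <= 0,  d.y + beta*t = 1,  y >= 0, t >= 0.
# All coefficients below are invented for illustration.
import numpy as np
from scipy.optimize import linprog

c = np.array([3.0, 2.0])                 # "diverted waste" coefficients
d = np.array([1.0, 2.0]); beta = 4.0     # "cost" coefficients
A = np.array([[1.0, 1.0]]); b = np.array([10.0])

# Variables: [y1, y2, t]
obj  = np.concatenate([-c, [0.0]])               # linprog minimizes
A_ub = np.hstack([A, -b.reshape(-1, 1)])         # A y - b t <= 0
b_ub = np.zeros(len(b))
A_eq = np.array([np.concatenate([d, [beta]])])   # d.y + beta*t = 1
b_eq = np.array([1.0])

res = linprog(obj, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, method="highs")
y, t = res.x[:-1], res.x[-1]
x = y / t
print("optimal x:", x, "objective:", c @ x / (d @ x + beta))
```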

  8. Mixed Integer Linear Programming based machine learning approach identifies regulators of telomerase in yeast.

    PubMed

    Poos, Alexandra M; Maicher, André; Dieckmann, Anna K; Oswald, Marcus; Eils, Roland; Kupiec, Martin; Luke, Brian; König, Rainer

    2016-06-02

    Understanding telomere length maintenance mechanisms is central in cancer biology as their dysregulation is one of the hallmarks for immortalization of cancer cells. Important for this well-balanced control is the transcriptional regulation of the telomerase genes. We integrated Mixed Integer Linear Programming models into a comparative machine learning based approach to identify regulatory interactions that best explain the discrepancy of telomerase transcript levels in yeast mutants with deleted regulators showing aberrant telomere length, when compared to mutants with normal telomere length. We uncover novel regulators of telomerase expression, several of which affect histone levels or modifications. In particular, our results point to the transcription factors Sum1, Hst1 and Srb2 as being important for the regulation of EST1 transcription, and we validated the effect of Sum1 experimentally. We compiled our machine learning method leading to a user friendly package for R which can straightforwardly be applied to similar problems integrating gene regulator binding information and expression profiles of samples of e.g. different phenotypes, diseases or treatments. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.

  9. The impact of a workplace-based weight loss program on work-related outcomes in overweight male shift workers.

    PubMed

    Morgan, Philip J; Collins, Clare E; Plotnikoff, Ronald C; Cook, Alyce T; Berthon, Bronwyn; Mitchell, Simon; Callister, Robin

    2012-02-01

    The aim of this study was to evaluate the impact of a workplace-based weight loss program (Workplace POWER [Preventing Obesity Without Eating like a Rabbit]) for male shift workers on a number of work-related outcomes. A total of 110 overweight/obese (body mass index = 25-40) (mean [SD] age = 44.3 [8.6] years; body mass index = 30.5 [3.6]) male employees at Tomago Aluminium (New South Wales, Australia) were randomized to either (i) Workplace POWER program (n = 65) or (ii) a 14-week wait-list control group (n = 45). Men were assessed at baseline and 14-week follow-up for weight, quality of life, sleepiness, productivity at work (presenteeism), absenteeism, and workplace injuries. Retention was 81%. Intention-to-treat analysis using linear mixed models revealed a significant intervention effect for weight, quality of life (mental), presenteeism, absenteeism, and injuries. The Workplace POWER weight loss program improved a number of important work-related outcomes in male shift workers.
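    Intention-to-treat analyses of this kind are commonly run as linear mixed models with repeated measures nested within participants. A minimal, generic sketch with statsmodels (the synthetic data, variable names, and model form are assumptions for illustration, not the study's dataset or specification):

```python
# Generic linear mixed model for a two-arm trial with baseline and follow-up
# measurements; the data and column names are synthetic placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 40
subject = np.repeat(np.arange(n), 2)
time = np.tile([0, 1], n)                       # 0 = baseline, 1 = follow-up
group = np.repeat(rng.integers(0, 2, n), 2)     # 0 = wait-list control, 1 = program
weight = (95 + 3 * rng.standard_normal(2 * n)
          - 4 * time * group                    # built-in "intervention effect"
          + np.repeat(rng.standard_normal(n), 2))
df = pd.DataFrame({"subject": subject, "time": time, "group": group, "weight": weight})

# Random intercept per subject; the time:group interaction is the intervention effect.
result = smf.mixedlm("weight ~ time * group", data=df, groups=df["subject"]).fit()
print(result.summary())
```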

  10. Does Targeting Higher Health Risk Employees or Increasing Intervention Intensity Yield Savings in a Workplace Wellness Program?

    PubMed

    Kapinos, Kandice A; Caloyeras, John P; Liu, Hangsheng; Mattke, Soeren

    2015-12-01

    This article aims to test whether a workplace wellness program reduces health care cost for higher risk employees or employees with greater participation. The program effect on costs was estimated using a generalized linear model with a log-link function using a difference-in-difference framework with a propensity score matched sample of employees using claims and program data from a large US firm from 2003 to 2011. The program targeting higher risk employees did not yield cost savings. Employees participating in five or more sessions aimed at encouraging more healthful living had about $20 lower per member per month costs relative to matched comparisons (P = 0.002). Our results add to the growing evidence base that workplace wellness programs aimed at primary prevention do not reduce health care cost, with the exception of those employees who choose to participate more actively.
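    A difference-in-differences analysis with a log-link generalized linear model can be sketched as follows; the Gamma family, variable names, and synthetic data are assumptions for illustration, not the study's specification.

```python
# Difference-in-differences with a log-link GLM on cost data.
# The Gamma family, variable names, and synthetic data are illustrative assumptions.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 2000
treated = rng.integers(0, 2, n)    # wellness-program participant vs matched comparison
post = rng.integers(0, 2, n)       # pre/post program period
cost = rng.gamma(shape=2.0, scale=150 * np.exp(-0.05 * treated * post), size=n)
df = pd.DataFrame({"cost": cost, "treated": treated, "post": post})

# The treated:post coefficient is the difference-in-differences effect on log cost.
model = smf.glm("cost ~ treated * post", data=df,
                family=sm.families.Gamma(link=sm.families.links.Log()))
print(model.fit().summary())
```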

  11. MM Algorithms for Geometric and Signomial Programming

    PubMed Central

    Lange, Kenneth; Zhou, Hua

    2013-01-01

    This paper derives new algorithms for signomial programming, a generalization of geometric programming. The algorithms are based on a generic principle for optimization called the MM algorithm. In this setting, one can apply the geometric-arithmetic mean inequality and a supporting hyperplane inequality to create a surrogate function with parameters separated. Thus, unconstrained signomial programming reduces to a sequence of one-dimensional minimization problems. Simple examples demonstrate that the MM algorithm derived can converge to a boundary point or to one point of a continuum of minimum points. Conditions under which the minimum point is unique or occurs in the interior of parameter space are proved for geometric programming. Convergence to an interior point occurs at a linear rate. Finally, the MM framework easily accommodates equality and inequality constraints of signomial type. For the most important special case, constrained quadratic programming, the MM algorithm involves very simple updates. PMID:24634545

  12. MM Algorithms for Geometric and Signomial Programming.

    PubMed

    Lange, Kenneth; Zhou, Hua

    2014-02-01

    This paper derives new algorithms for signomial programming, a generalization of geometric programming. The algorithms are based on a generic principle for optimization called the MM algorithm. In this setting, one can apply the geometric-arithmetic mean inequality and a supporting hyperplane inequality to create a surrogate function with parameters separated. Thus, unconstrained signomial programming reduces to a sequence of one-dimensional minimization problems. Simple examples demonstrate that the MM algorithm derived can converge to a boundary point or to one point of a continuum of minimum points. Conditions under which the minimum point is unique or occurs in the interior of parameter space are proved for geometric programming. Convergence to an interior point occurs at a linear rate. Finally, the MM framework easily accommodates equality and inequality constraints of signomial type. For the most important special case, constrained quadratic programming, the MM algorithm involves very simple updates.

  13. Comparative Effectiveness of After-School Programs to Increase Physical Activity

    PubMed Central

    Gesell, Sabina B.; Sommer, Evan C.; Lambert, E. Warren; Vides de Andrade, Ana Regina; Davis, Lauren; Beech, Bettina M.; Mitchell, Stephanie J.; Neloms, Stevon; Ryan, Colleen K.

    2013-01-01

    Background. We conducted a comparative effectiveness analysis to evaluate the difference in the amount of physical activity children engaged in when enrolled in a physical activity-enhanced after-school program based in a community recreation center versus a standard school-based after-school program. Methods. The study was a natural experiment with 54 elementary school children attending the community ASP and 37 attending the school-based ASP. Accelerometry was used to measure physical activity. Data were collected at baseline, 6 weeks, and 12 weeks, with 91% retention. Results. At baseline, 43% of the multiethnic sample was overweight/obese, and the mean age was 7.9 years (SD = 1.7). Linear latent growth models suggested that the average difference between the two groups of children at Week 12 was 14.7 percentage points in moderate-vigorous physical activity (P < .001). Cost analysis suggested that children attending traditional school-based ASPs—at an average cost of $17.67 per day—would need an additional daily investment of $1.59 per child for 12 weeks to increase their moderate-vigorous physical activity by a model-implied 14.7 percentage points. Conclusions. A low-cost, alternative after-school program featuring adult-led physical activities in a community recreation center was associated with increased physical activity compared to standard-of-care school-based after-school program. PMID:23984052

  14. The "Generacion Diez" after-school program and Latino parent involvement with schools.

    PubMed

    Riggs, Nathaniel R; Medina, Carmen

    2005-11-01

    The current study examines associations between participation in after-school programs and change in Latino parent involvement with schools. Hierarchical linear regression analyses demonstrated that parents of children who had higher after-school program attendance rates were significantly more likely to report increases in the quality of relationships with their children's teachers, frequency of parent-teacher contact, and engagement with their children's schooling over a two-year period. However, greater home educator contacts were related to decreases in quality and quantity of parent-school involvement. A primary implication is that attendance in school-based after-school programs may draw parents into children's regular-day school context. Editors' Strategic Implications The authors illustrate the promising practice of using after-school programs to promote parent involvement and to help integrate the often disparate family and school contexts for Latino children.

  15. The Next Linear Collider Program

    Science.gov Websites


  16. Microwave and Electron Beam Computer Programs

    DTIC Science & Technology

    1988-06-01

    Research (ONR). SCRIBE was adapted by MRC from the Stanford Linear Accelerator Center Beam Trajectory Program, EGUN. ... achieved with SCRIBE. It is a version of the Stanford Linear Accelerator (SLAC) code EGUN (Ref. 8), extensively modified by MRC for research on

  17. Interior-Point Methods for Linear Programming: A Review

    ERIC Educational Resources Information Center

    Singh, J. N.; Singh, D.

    2002-01-01

    The paper reviews some recent advances in interior-point methods for linear programming and indicates directions in which future progress can be made. Most of the interior-point methods belong to any of three categories: affine-scaling methods, potential reduction methods and central path methods. These methods are discussed together with…

  18. AN EVALUATION OF HEURISTICS FOR THRESHOLD-FUNCTION TEST-SYNTHESIS,

    DTIC Science & Technology

    Linear programming offers the most attractive procedure for testing and obtaining optimal threshold gate realizations for functions generated in...The design of the experiments may be of general interest to students of automatic problem solving; the results should be of interest in threshold logic and linear programming. (Author)

  19. ALPS - A LINEAR PROGRAM SOLVER

    NASA Technical Reports Server (NTRS)

    Viterna, L. A.

    1994-01-01

    Linear programming is a widely-used engineering and management tool. Scheduling, resource allocation, and production planning are all well-known applications of linear programs (LP's). Most LP's are too large to be solved by hand, so over the decades many computer codes for solving LP's have been developed. ALPS, A Linear Program Solver, is a full-featured LP analysis program. ALPS can solve plain linear programs as well as more complicated mixed integer and pure integer programs. ALPS also contains an efficient solution technique for pure binary (0-1 integer) programs. One of the many weaknesses of LP solvers is the lack of interaction with the user. ALPS is a menu-driven program with no special commands or keywords to learn. In addition, ALPS contains a full-screen editor to enter and maintain the LP formulation. These formulations can be written to and read from plain ASCII files for portability. For those less experienced in LP formulation, ALPS contains a problem "parser" which checks the formulation for errors. ALPS creates fully formatted, readable reports that can be sent to a printer or output file. ALPS is written entirely in IBM's APL2/PC product, Version 1.01. The APL2 workspace containing all the ALPS code can be run on any APL2/PC system (AT or 386). On a 32-bit system, this configuration can take advantage of all extended memory. The user can also examine and modify the ALPS code. The APL2 workspace has also been "packed" to be run on any DOS system (without APL2) as a stand-alone "EXE" file, but has limited memory capacity on a 640K system. A numeric coprocessor (80X87) is optional but recommended. The standard distribution medium for ALPS is a 5.25 inch 360K MS-DOS format diskette. IBM, IBM PC and IBM APL2 are registered trademarks of International Business Machines Corporation. MS-DOS is a registered trademark of Microsoft Corporation.

  20. The Langley Stability and Transition Analysis Code (LASTRAC) : LST, Linear and Nonlinear PSE for 2-D, Axisymmetric, and Infinite Swept Wing Boundary Layers

    NASA Technical Reports Server (NTRS)

    Chang, Chau-Lyan

    2003-01-01

    During the past two decades, our understanding of laminar-turbulent transition flow physics has advanced significantly owing in large part to NASA program support such as the National Aerospace Plane (NASP), High-speed Civil Transport (HSCT), and Advanced Subsonic Technology (AST) programs. Experimental, theoretical, and computational efforts on various issues, such as receptivity and the linear and nonlinear evolution of instability waves, have broadened our knowledge base for this intricate flow phenomenon. Despite all these advances, transition prediction remains a nontrivial task for engineers due to the lack of a widely available, robust, and efficient prediction tool. The design and development of the LASTRAC code is aimed at providing one such engineering tool that is easy to use and yet capable of dealing with a broad range of transition related issues. LASTRAC was written from scratch based on state-of-the-art numerical methods for stability analysis and modern software technologies. At low fidelity, it allows users to perform linear stability analysis and N-factor transition correlation for a broad range of flow regimes and configurations by using either the linear stability theory (LST) or the linear parabolized stability equations (LPSE) method. At high fidelity, users may use nonlinear PSE to track finite-amplitude disturbances until the skin-friction rise. Coupled with the built-in receptivity model that is currently under development, the nonlinear PSE method offers a synergistic approach to predict transition onset for a given disturbance environment based on first principles. This paper describes the governing equations, numerical methods, code development, and case studies for the current release of LASTRAC. Practical applications of LASTRAC are demonstrated for linear stability calculations, N-factor transition correlation, nonlinear breakdown simulations, and control of stationary crossflow instability in supersonic swept wing boundary layers.
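    At the lowest fidelity level described above, transition correlation reduces to integrating the local spatial growth rate of the most amplified disturbance into an N-factor, N(x) = integral of -alpha_i dx. A small generic sketch of that bookkeeping step (with a made-up growth-rate curve and threshold, not LASTRAC output):

```python
# e^N bookkeeping: integrate a spatial amplification rate (-alpha_i) along the
# surface coordinate to obtain the N-factor. The growth-rate curve is invented.
import numpy as np
from scipy.integrate import cumulative_trapezoid

x = np.linspace(0.0, 1.0, 200)                        # streamwise coordinate (arbitrary units)
neg_alpha_i = 30.0 * np.exp(-((x - 0.5) / 0.2) ** 2)  # hypothetical amplification rate

N = cumulative_trapezoid(neg_alpha_i, x, initial=0.0)
x_transition = x[np.argmax(N >= 9.0)]                 # e.g., an N = 9 correlation threshold
print(f"N reaches 9 near x = {x_transition:.2f}")
```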

  1. Polynomial Size Formulations for the Distance and Capacity Constrained Vehicle Routing Problem

    NASA Astrophysics Data System (ADS)

    Kara, Imdat; Derya, Tusan

    2011-09-01

    The Distance and Capacity Constrained Vehicle Routing Problem (DCVRP) is an extension of the well-known Traveling Salesman Problem (TSP). DCVRP arises in distribution and logistics problems. It would be beneficial to construct new formulations, which is the main motivation and contribution of this paper. We focus on two-index integer programming formulations for the DCVRP. One node-based and one arc (flow)-based formulation are presented. Both formulations have O(n²) binary variables and O(n²) constraints, i.e., the number of decision variables and constraints grows as a polynomial function of the number of nodes of the underlying graph. It is shown that the proposed arc-based formulation produces a better lower bound than the existing one (the Waters formulation referred to in the paper). Finally, various problems from the literature are solved with the node-based and arc-based formulations using CPLEX 8.0. Preliminary computational analysis shows that the arc-based formulation outperforms the node-based formulation in terms of the linear programming relaxation.

  2. Food pattern modeling shows that the 2010 Dietary Guidelines for sodium and potassium cannot be met simultaneously

    PubMed Central

    Maillot, Matthieu; Monsivais, Pablo; Drewnowski, Adam

    2013-01-01

    The 2010 US Dietary Guidelines recommended limiting intake of sodium to 1500 mg/d for people older than 50 years, African Americans, and those suffering from chronic disease. The guidelines recommended that all other people consume less than 2300 mg of sodium and at least 4700 mg of potassium per day. The theoretical feasibility of meeting the sodium and potassium guidelines while simultaneously maintaining nutritional adequacy of the diet was tested using food pattern modeling based on linear programming. Dietary data from the National Health and Nutrition Examination Survey 2001-2002 were used to create optimized food patterns for 6 age-sex groups. Linear programming models determined the boundary conditions for the potassium and sodium content of the modeled food patterns that would also be compatible with other nutrient goals. Linear programming models also sought the amounts of sodium and potassium that would both be consistent with a Na-to-K ratio of 0.49 and cause the least deviation from existing food habits. The 6 sets of food patterns were created before and after an across-the-board 10% reduction in the sodium content of all foods in the Food and Nutrition Database for Dietary Studies. Modeling analyses showed that the 2010 Dietary Guidelines for sodium were incompatible with potassium guidelines and with nutritionally adequate diets, even after reducing the sodium content of all US foods by 10%. Feasibility studies should precede or accompany the issuing of dietary guidelines to the public. PMID:23507224
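    The kind of constraint set being tested can be illustrated with a toy diet LP: minimize cost subject to nutrient bounds such as sodium ≤ 2300 mg and potassium ≥ 4700 mg. The food list and nutrient numbers below are invented placeholders, not values from FNDDS or the study.

```python
# Toy diet LP with sodium and potassium constraints (all numbers invented).
# Minimize cost subject to: sodium <= 2300 mg, potassium >= 4700 mg, energy >= 2000 kcal.
import numpy as np
from scipy.optimize import linprog

#            cost($) sodium(mg) potassium(mg) energy(kcal)   per serving
foods = {
    "grain": (0.50, 200, 100, 250),
    "dairy": (0.80, 120, 400, 150),
    "fruit": (0.60,   5, 420, 100),
    "soup":  (1.00, 800, 300, 120),
}
cost   = np.array([v[0] for v in foods.values()])
sodium = np.array([v[1] for v in foods.values()])
potas  = np.array([v[2] for v in foods.values()])
energy = np.array([v[3] for v in foods.values()])

# Inequalities in "A_ub x <= b_ub" form: sodium cap, potassium floor, energy floor.
A_ub = np.vstack([sodium, -potas, -energy])
b_ub = np.array([2300.0, -4700.0, -2000.0])

res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=(0, 10), method="highs")
print(res.status, dict(zip(foods, np.round(res.x, 2))))
```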

  3. Runtime Analysis of Linear Temporal Logic Specifications

    NASA Technical Reports Server (NTRS)

    Giannakopoulou, Dimitra; Havelund, Klaus

    2001-01-01

    This report presents an approach to checking a running program against its Linear Temporal Logic (LTL) specifications. LTL is a widely used logic for expressing properties of programs viewed as sets of executions. Our approach consists of translating LTL formulae to finite-state automata, which are used as observers of the program behavior. The translation algorithm we propose modifies standard LTL to Büchi automata conversion techniques to generate automata that check finite program traces. The algorithm has been implemented in a tool, which has been integrated with the generic JPaX framework for runtime analysis of Java programs.
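    The idea of observing a finite trace with an automaton can be illustrated for one simple property. The sketch below monitors G(p -> F q) ("every p is eventually followed by q") under a finite-trace interpretation, as a hand-written two-state observer; it is a toy, not the translation algorithm or the JPaX tooling.

```python
# Toy finite-trace monitor for the LTL property G(p -> F q):
# after every event where p holds, q must hold at the same or a later step.
# This is a hand-coded two-state observer, not a generated Buchi automaton.
class ResponseMonitor:
    def __init__(self):
        self.pending = False        # True while some p is awaiting its q

    def step(self, p: bool, q: bool) -> None:
        if p:
            self.pending = True
        if q:
            self.pending = False    # q discharges any pending obligation

    def verdict(self) -> bool:
        """At end of trace: False if some p was never followed by q."""
        return not self.pending


trace = [{"p": True, "q": False}, {"p": False, "q": True}, {"p": True, "q": False}]
monitor = ResponseMonitor()
for event in trace:
    monitor.step(event["p"], event["q"])
print("property satisfied on this trace:", monitor.verdict())   # False
```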

  4. Image Understanding Research and Its Application to Cartography and Computer-Based Analysis of Aerial Imagery

    DTIC Science & Technology

    1983-05-01

    Parallel Computation that Assign Canonical Object-Based Frames of Reference," Proc. 7th Int. Joint Conf. on Artificial Intelligence (IJCAI-81), Vol. 2 ... "Perception of Linear Structure in Imaged Data," TN 276, Artificial Intelligence Center, SRI International, Feb. 1983. [Fram75] J.P. Fram and E.S. ... May 1983. By: Martin A. Fischler, Program Director and Principal Investigator, (415) 859-5106, Artificial Intelligence Center

  5. The Next Linear Collider Program-News

    Science.gov Websites

    The Next Linear Collider at SLAC. The Next Linear Collider in the press: the Linear Collider is a high-priority goal of this plan. http://www.sc.doe.gov/Sub/Facilities_for_future/20 ... -term projects in conceptual stages (the Linear Collider is the highest priority project in this

  6. Water resources planning and management : A stochastic dual dynamic programming approach

    NASA Astrophysics Data System (ADS)

    Goor, Q.; Pinte, D.; Tilmant, A.

    2008-12-01

    Allocating water between different users and uses, including the environment, is one of the most challenging tasks facing water resources managers and has always been at the heart of Integrated Water Resources Management (IWRM). As water scarcity is expected to increase over time, allocation decisions among the different uses will have to be made taking into account the complex interactions between water and the economy. Hydro-economic optimization models can capture those interactions while prescribing efficient allocation policies. Many hydro-economic models found in the literature are formulated as large-scale nonlinear optimization problems (NLP), seeking to maximize net benefits from the system operation while meeting operational and/or institutional constraints, and describing the main hydrological processes. However, those models rarely incorporate the uncertainty inherent to the availability of water, essentially because of the computational difficulties associated with stochastic formulations. The purpose of this presentation is to present a stochastic programming model that can identify economically efficient allocation policies in large-scale multipurpose multireservoir systems. The model is based on stochastic dual dynamic programming (SDDP), an extension of traditional SDP that is not affected by the curse of dimensionality. SDDP identifies efficient allocation policies while considering the hydrologic uncertainty. The objective function includes the net benefits from the hydropower and irrigation sectors, as well as penalties for not meeting operational and/or institutional constraints. To be able to implement the efficient decomposition scheme that removes the computational burden, the one-stage SDDP problem has to be a linear program. Recent developments improve the representation of the nonlinear and mildly non-convex hydropower function through a convex hull approximation of the true hydropower function. This model is illustrated on a cascade of 14 reservoirs in the Nile river basin.

  7. A program for identification of linear systems

    NASA Technical Reports Server (NTRS)

    Buell, J.; Kalaba, R.; Ruspini, E.; Yakush, A.

    1971-01-01

    A program has been written for the identification of parameters in certain linear systems. These systems appear in biomedical problems, particularly in compartmental models of pharmacokinetics. The method presented here assumes that some of the state variables are regularly modified by jump conditions. This simulates administration of drugs following some prescribed drug regime. Parameters are identified by a least-square fit of the linear differential system to a set of experimental observations. The method is especially suited when the interval of observation of the system is very long.
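    A minimal version of this kind of fit, for a one-compartment model with repeated bolus doses (the "jump conditions"), can be written with SciPy's least-squares machinery; the dose schedule, parameters, and noise level below are invented for illustration, and the model is far simpler than the systems the program handles.

```python
# Least-squares identification of a one-compartment model with bolus doses.
# Concentration superposes exponentially decaying contributions of each dose;
# the doses, "true" parameters, and noise are invented for the illustration.
import numpy as np
from scipy.optimize import least_squares

dose_times = np.array([0.0, 8.0, 16.0])   # hours at which a bolus is given
dose = 100.0                              # mg per bolus

def concentration(t, k, V):
    t = np.atleast_1d(t)[:, None]
    contrib = (dose / V) * np.exp(-k * (t - dose_times)) * (t >= dose_times)
    return contrib.sum(axis=1)

# Synthetic noisy observations from "true" parameters k = 0.3 /h, V = 20 L.
rng = np.random.default_rng(2)
t_obs = np.linspace(0.5, 24.0, 30)
y_obs = concentration(t_obs, 0.3, 20.0) + 0.1 * rng.standard_normal(t_obs.size)

residual = lambda theta: concentration(t_obs, *theta) - y_obs
fit = least_squares(residual, x0=[0.1, 10.0], bounds=([1e-3, 1.0], [5.0, 100.0]))
print("estimated k, V:", fit.x)           # should be close to [0.3, 20.0]
```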

  8. Participation in fitness-related activities of an incentive-based health promotion program and hospital costs: a retrospective longitudinal study.

    PubMed

    Patel, Deepak; Lambert, Estelle V; da Silva, Roseanne; Greyling, Mike; Kolbe-Alexander, Tracy; Noach, Adam; Conradie, Jaco; Nossel, Craig; Borresen, Jill; Gaziano, Thomas

    2011-01-01

    A retrospective, longitudinal study examined changes in participation in fitness-related activities and hospital claims over 5 years amongst members of an incentivized health promotion program offered by a private health insurer. A 3-year retrospective observational analysis measuring gym visits and participation in documented fitness-related activities, probability of hospital admission, and associated costs of admission. A South African private health plan, Discovery Health and the Vitality health promotion program. 304,054 adult members of the Discovery medical plan, 192,467 of whom registered for the health promotion program and 111,587 members who were not on the program. Members were incentivised for fitness-related activities on the basis of the frequency of gym visits. Changes in electronically documented gym visits and registered participation in fitness-related activities over 3 years and measures of association between changes in participation (years 1-3) and subsequent probability and costs of hospital admission (years 4-5). Hospital admissions and associated costs are based on claims extracted from the health insurer database. The probability of a claim modeled by using linear logistic regression and costs of claims examined by using general linear models. Propensity scores were estimated and included age, gender, registration for chronic disease benefits, plan type, and the presence of a claim during the transition period, and these were used as covariates in the final model. There was a significant decrease in the prevalence of inactive members (76% to 68%) over 5 years. Members who remained highly active (years 1-3) had a lower probability (p < .05) of hospital admission in years 4 to 5 (20.7%) compared with those who remained inactive (22.2%). The odds of admission were 13% lower for two additional gym visits per week (odds ratio, .87; 95% confidence interval [CI], .801-.949). We observed an increase in fitness-related activities over time amongst members of this incentive-based health promotion program, which was associated with a lower probability of hospital admission and lower hospital costs in the subsequent 2 years. Copyright © 2011 by American Journal of Health Promotion, Inc.

  9. User's manual for UCAP: Unified Counter-Rotation Aero-Acoustics Program

    NASA Technical Reports Server (NTRS)

    Culver, E. M.; Mccolgan, C. J.

    1993-01-01

    This is the user's manual for the Unified Counter-rotation Aeroacoustics Program (UCAP), the counter-rotation derivative of the UAAP (Unified Aero-Acoustic Program). The purpose of this program is to predict steady and unsteady air loading on the blades and the noise produced by a counter-rotation Prop-Fan. The aerodynamic method is based on linear potential theory with corrections for nonlinearity associated with axial flux induction, vortex lift on the blades, and rotor-to-rotor interference. The theory for acoustics and the theory for individual blade loading and wakes are derived in Unified Aeroacoustics Analysis for High Speed Turboprop Aerodynamics and Noise, Volume 1 (NASA CR-4329). This user's manual also includes a brief explanation of the theory used for the modelling of counter-rotation.

  10. Pseudo-random number generator for the Sigma 5 computer

    NASA Technical Reports Server (NTRS)

    Carroll, S. N.

    1983-01-01

    A technique is presented for developing a pseudo-random number generator based on the linear congruential form. The two numbers used for the generator are a prime number and a corresponding primitive root, where the prime is the largest prime number that can be accurately represented on a particular computer. The primitive root is selected by applying Marsaglia's lattice test. The technique presented was applied to write a random number program for the Sigma 5 computer. The new program, named S:RANDOM1, is judged to be superior to the older program named S:RANDOM. For applications requiring several independent random number generators, a table is included showing several acceptable primitive roots. The technique and programs described can be applied to any computer having word length different from that of the Sigma 5.
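    The general recipe described here, a multiplicative linear congruential generator with a prime modulus and a primitive root as multiplier, is easy to reproduce. The classic "minimal standard" choice (modulus 2³¹ − 1, multiplier 7⁵ = 16807) is shown below as an example; it is not necessarily the pair selected for the Sigma 5, whose word length differs.

```python
# Multiplicative linear congruential generator x_{n+1} = a * x_n mod m with a
# prime modulus and a primitive-root multiplier. The (m, a) pair below is the
# classic MINSTD choice for 32-bit machines, shown only as an example; the
# Sigma 5 program would use constants matched to its own word length.
M = 2**31 - 1        # Mersenne prime 2147483647
A = 16807            # 7**5, a primitive root modulo M

def lcg(seed: int):
    x = seed
    while True:
        x = (A * x) % M
        yield x / M          # uniform in (0, 1)

gen = lcg(seed=12345)
print([round(next(gen), 6) for _ in range(5)])
```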

  11. User's manual for UCAP: Unified Counter-Rotation Aero-Acoustics Program

    NASA Astrophysics Data System (ADS)

    Culver, E. M.; McColgan, C. J.

    1993-04-01

    This is the user's manual for the Unified Counter-rotation Aeroacoustics Program (UCAP), the counter-rotation derivative of the UAAP (Unified Aero-Acoustic Program). The purpose of this program is to predict steady and unsteady air loading on the blades and the noise produced by a counter-rotation Prop-Fan. The aerodynamic method is based on linear potential theory with corrections for nonlinearity associated with axial flux induction, vortex lift on the blades, and rotor-to-rotor interference. The theory for acoustics and the theory for individual blade loading and wakes are derived in Unified Aeroacoustics Analysis for High Speed Turboprop Aerodynamics and Noise, Volume 1 (NASA CR-4329). This user's manual also includes a brief explanation of the theory used for the modelling of counter-rotation.

  12. Robust Neighboring Optimal Guidance for the Advanced Launch System

    NASA Technical Reports Server (NTRS)

    Hull, David G.

    1993-01-01

    In recent years, optimization has become an engineering tool through the availability of numerous successful nonlinear programming codes. Optimal control problems are converted into parameter optimization (nonlinear programming) problems by assuming the control to be piecewise linear, making the unknowns the nodes or junction points of the linear control segments. Once the optimal piecewise linear control (suboptimal) control is known, a guidance law for operating near the suboptimal path is the neighboring optimal piecewise linear control (neighboring suboptimal control). Research conducted under this grant has been directed toward the investigation of neighboring suboptimal control as a guidance scheme for an advanced launch system.

  13. Estimating linear temporal trends from aggregated environmental monitoring data

    USGS Publications Warehouse

    Erickson, Richard A.; Gray, Brian R.; Eager, Eric A.

    2017-01-01

    Trend estimates are often used as part of environmental monitoring programs. These trends inform managers (e.g., are desired species increasing or undesired species decreasing?). Data collected from environmental monitoring programs is often aggregated (i.e., averaged), which confounds sampling and process variation. State-space models allow sampling variation and process variations to be separated. We used simulated time-series to compare linear trend estimations from three state-space models, a simple linear regression model, and an auto-regressive model. We also compared the performance of these five models to estimate trends from a long term monitoring program. We specifically estimated trends for two species of fish and four species of aquatic vegetation from the Upper Mississippi River system. We found that the simple linear regression had the best performance of all the given models because it was best able to recover parameters and had consistent numerical convergence. Conversely, the simple linear regression did the worst job estimating populations in a given year. The state-space models did not estimate trends well, but estimated population sizes best when the models converged. We found that a simple linear regression performed better than more complex autoregression and state-space models when used to analyze aggregated environmental monitoring data.

  14. Off-Grid Direction of Arrival Estimation Based on Joint Spatial Sparsity for Distributed Sparse Linear Arrays

    PubMed Central

    Liang, Yujie; Ying, Rendong; Lu, Zhenqi; Liu, Peilin

    2014-01-01

    In the design phase of sensor arrays during array signal processing, the estimation performance and system cost are largely determined by array aperture size. In this article, we address the problem of joint direction-of-arrival (DOA) estimation with distributed sparse linear arrays (SLAs) and propose an off-grid synchronous approach based on distributed compressed sensing to obtain larger array aperture. We focus on the complex source distribution in the practical applications and classify the sources into common and innovation parts according to whether a signal of source can impinge on all the SLAs or a specific one. For each SLA, we construct a corresponding virtual uniform linear array (ULA) to create the relationship of random linear map between the signals respectively observed by these two arrays. The signal ensembles including the common/innovation sources for different SLAs are abstracted as a joint spatial sparsity model. And we use the minimization of concatenated atomic norm via semidefinite programming to solve the problem of joint DOA estimation. Joint calculation of the signals observed by all the SLAs exploits their redundancy caused by the common sources and decreases the requirement of array size. The numerical results illustrate the advantages of the proposed approach. PMID:25420150

  15. Fault detection and initial state verification by linear programming for a class of Petri nets

    NASA Technical Reports Server (NTRS)

    Rachell, Traxon; Meyer, David G.

    1992-01-01

    The authors present an algorithmic approach to determining when the marking of an LSMG (live safe marked graph) or an LSFC (live safe free choice) net is in the set of live safe markings M. Hence, once the marking of a net is determined to be in M, then if at some time thereafter the marking of this net is determined not to be in M, this indicates a fault. It is shown how linear programming can be used to determine if m is an element of M. The worst-case computational complexity of each algorithm is bounded by the number of linear programs that must be solved.

  16. STAR adaptation of QR algorithm. [program for solving over-determined systems of linear equations

    NASA Technical Reports Server (NTRS)

    Shah, S. N.

    1981-01-01

    The QR algorithm used on a serial computer and executed on the Control Data Corporation 6000 Computer was adapted to execute efficiently on the Control Data STAR-100 computer. How the scalar program was adapted for the STAR-100 and why these adaptations yielded an efficient STAR program is described. Program listings of the old scalar version and the vectorized SL/1 version are presented in the appendices. Execution times for the two versions, applied to the same system of linear equations, are compared.
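    The core computation, solving an over-determined linear system in the least-squares sense via a QR factorization, looks like this in a modern setting (a generic NumPy sketch with synthetic data, not the STAR-100 code):

```python
# Least-squares solution of an over-determined system A x ~= b via QR:
# factor A = Q R, then solve the triangular system R x = Q^T b.
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((100, 3))          # 100 equations, 3 unknowns
x_true = np.array([2.0, -1.0, 0.5])
b = A @ x_true + 0.01 * rng.standard_normal(100)

Q, R = np.linalg.qr(A)                     # reduced QR: Q is 100x3, R is 3x3
x = np.linalg.solve(R, Q.T @ b)            # solve the small triangular system
print(x)                                   # close to [2.0, -1.0, 0.5]
```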

  17. Optimal GENCO bidding strategy

    NASA Astrophysics Data System (ADS)

    Gao, Feng

    Electricity industries worldwide are undergoing a period of profound upheaval. The conventional vertically integrated mechanism is being replaced by a competitive market environment. Generation companies have incentives to apply novel technologies to lower production costs, for example: Combined Cycle units. Economic dispatch with Combined Cycle units becomes a non-convex optimization problem, which is difficult if not impossible to solve by conventional methods. Several techniques are proposed here: Mixed Integer Linear Programming, a hybrid method, as well as Evolutionary Algorithms. Evolutionary Algorithms share a common mechanism, stochastic searching per generation. The stochastic property makes evolutionary algorithms robust and adaptive enough to solve a non-convex optimization problem. This research implements GA, EP, and PS algorithms for economic dispatch with Combined Cycle units, and makes a comparison with classical Mixed Integer Linear Programming. The electricity market equilibrium model not only helps Independent System Operator/Regulator analyze market performance and market power, but also provides Market Participants the ability to build optimal bidding strategies based on Microeconomics analysis. Supply Function Equilibrium (SFE) is attractive compared to traditional models. This research identifies a proper SFE model, which can be applied to a multiple period situation. The equilibrium condition using discrete time optimal control is then developed for fuel resource constraints. Finally, the research discusses the issues of multiple equilibria and mixed strategies, which are caused by the transmission network. Additionally, an advantage of the proposed model for merchant transmission planning is discussed. A market simulator is a valuable training and evaluation tool to assist sellers, buyers, and regulators to understand market performance and make better decisions. A traditional optimization model may not be enough to consider the distributed, large-scale, and complex energy market. This research compares the performance and searching paths of different artificial life techniques such as Genetic Algorithm (GA), Evolutionary Programming (EP), and Particle Swarm (PS), and look for a proper method to emulate Generation Companies' (GENCOs) bidding strategies. After deregulation, GENCOs face risk and uncertainty associated with the fast-changing market environment. A profit-based bidding decision support system is critical for GENCOs to keep a competitive position in the new environment. Most past research do not pay special attention to the piecewise staircase characteristic of generator offer curves. This research proposes an optimal bidding strategy based on Parametric Linear Programming. The proposed algorithm is able to handle actual piecewise staircase energy offer curves. The proposed method is then extended to incorporate incomplete information based on Decision Analysis. Finally, the author develops an optimal bidding tool (GenBidding) and applies it to the RTS96 test system.

  18. Optimal Facility Location Tool for Logistics Battle Command (LBC)

    DTIC Science & Technology

    2015-08-01

    ... Appendix B. VBA Code ... Appendix C. Story ... should city planners have located emergency service facilities so that all households (the demand) had equal access to coverage?” The critical ... programming language called Visual Basic for Applications (VBA). CPLEX is a commercial solver for linear, integer, and mixed-integer linear programming problems

  19. Fitting program for linear regressions according to Mahon (1996)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Trappitsch, Reto G.

    2018-01-09

    This program takes the user's input data and fits a linear regression to it using the prescription presented by Mahon (1996). Compared to the commonly used York fit, this method has the correct prescription for measurement error propagation. This software should facilitate the proper fitting of measurements with a simple interface.
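    Mahon's prescription is a York-type iterative weighted fit with correct propagation of the measurement errors on both coordinates. The sketch below is not that algorithm; it only illustrates the general errors-in-both-variables setting using SciPy's orthogonal distance regression, with invented data and uncertainties.

```python
# Errors-in-both-variables straight-line fit via orthogonal distance regression.
# This is NOT the Mahon (1996) / York prescription, only a generic illustration
# of fitting with uncertainties on both x and y; data and errors are invented.
import numpy as np
from scipy import odr

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 7.8, 10.1])
sx = np.full_like(x, 0.1)          # 1-sigma uncertainties on x
sy = np.full_like(y, 0.2)          # 1-sigma uncertainties on y

line = odr.Model(lambda beta, x: beta[0] * x + beta[1])
data = odr.RealData(x, y, sx=sx, sy=sy)
output = odr.ODR(data, line, beta0=[1.0, 0.0]).run()
print("slope, intercept:", output.beta)
print("1-sigma errors:  ", output.sd_beta)
```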

  20. A comprehensive linear programming tool to optimize formulations of ready-to-use therapeutic foods: An application to Ethiopia

    USDA-ARS?s Scientific Manuscript database

    Ready-to-use therapeutic food (RUTF) is the standard of care for children suffering from noncomplicated severe acute malnutrition (SAM). The objective was to develop a comprehensive linear programming (LP) tool to create novel RUTF formulations for Ethiopia. A systematic approach that surveyed inter...

  1. Secret Message Decryption: Group Consulting Projects Using Matrices and Linear Programming

    ERIC Educational Resources Information Center

    Gurski, Katharine F.

    2009-01-01

    We describe two short group projects for finite mathematics students that incorporate matrices and linear programming into fictional consulting requests presented as a letter to the students. The students are required to use mathematics to decrypt secret messages in one project involving matrix multiplication and inversion. The second project…
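    The matrix part of such a project is essentially a Hill-style cipher: encode letters as numbers, multiply blocks by a key matrix modulo 26, and decrypt with the key's modular inverse. A small sketch with an invented 2×2 key (not the projects' actual materials):

```python
# Hill-cipher-style encryption/decryption with a 2x2 key matrix modulo 26.
# The key matrix and message are invented for illustration.
import numpy as np

KEY = np.array([[3, 3], [2, 5]])          # det = 9, which is invertible mod 26

def inverse_mod26(key):
    det = int(round(np.linalg.det(key))) % 26
    det_inv = pow(det, -1, 26)            # modular inverse of the determinant
    adj = np.array([[key[1, 1], -key[0, 1]], [-key[1, 0], key[0, 0]]])
    return (det_inv * adj) % 26

def apply_key(key, text):
    nums = [ord(c) - ord('A') for c in text]
    blocks = np.array(nums).reshape(-1, 2).T          # column vectors of length 2
    out = (key @ blocks) % 26
    return ''.join(chr(int(v) + ord('A')) for v in out.T.ravel())

cipher = apply_key(KEY, "HELP")                # encrypt (pad to even length first)
plain = apply_key(inverse_mod26(KEY), cipher)  # decrypt with the inverse key
print(cipher, plain)                           # plain == "HELP"
```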

  2. LCPT: a program for finding linear canonical transformations. [In MACSYMA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Char, B.W.; McNamara, B.

    This article describes a MACSYMA program that symbolically computes a canonical linear transformation between coordinate systems. The difficulties encountered in implementing this small, canonical physics problem are also discussed, along with the implications that may be drawn from such difficulties about widespread MACSYMA usage by the community of computational/theoretical physicists.

  3. A Test of a Linear Programming Model as an Optimal Solution to the Problem of Combining Methods of Reading Instruction

    ERIC Educational Resources Information Center

    Mills, James W.; And Others

    1973-01-01

    The study reported here tested an application of the Linear Programming Model at the Reading Clinic of Drew University. Results, while not conclusive, indicate that this approach yields greater gains in speed scores than a traditional approach for this population. (Author)

  4. Visual, Algebraic and Mixed Strategies in Visually Presented Linear Programming Problems.

    ERIC Educational Resources Information Center

    Shama, Gilli; Dreyfus, Tommy

    1994-01-01

    Identified and classified solution strategies of (n=49) 10th-grade students who were presented with linear programming problems in a predominantly visual setting in the form of a computerized game. Visual strategies were developed more frequently than either algebraic or mixed strategies. Appendix includes questionnaires. (Contains 11 references.)…

  5. Development of a computer technique for the prediction of transport aircraft flight profile sonic boom signatures. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Coen, Peter G.

    1991-01-01

    A new computer technique for the analysis of transport aircraft sonic boom signature characteristics was developed. This new technique, based on linear theory methods, combines the previously separate equivalent area and F function development with a signature propagation method using a single geometry description. The new technique was implemented in a stand-alone computer program and was incorporated into an aircraft performance analysis program. Through these implementations, both configuration designers and performance analysts are given new capabilities to rapidly analyze an aircraft's sonic boom characteristics throughout the flight envelope.

  6. PAN AIR: A computer program for predicting subsonic or supersonic linear potential flows about arbitrary configurations using a higher order panel method. Volume 4: Maintenance document (version 3.0)

    NASA Technical Reports Server (NTRS)

    Purdon, David J.; Baruah, Pranab K.; Bussoletti, John E.; Epton, Michael A.; Massena, William A.; Nelson, Franklin D.; Tsurusaki, Kiyoharu

    1990-01-01

    The Maintenance Document Version 3.0 is a guide to the PAN AIR software system, a system which computes the subsonic or supersonic linear potential flow about a body of nearly arbitrary shape, using a higher order panel method. The document describes the overall system and each program module of the system. Sufficient detail is given for program maintenance, updating, and modification. It is assumed that the reader is familiar with programming and CRAY computer systems. The PAN AIR system was written in FORTRAN 4 language except for a few CAL language subroutines which exist in the PAN AIR library. Structured programming techniques were used to provide code documentation and maintainability. The operating systems accommodated are COS 1.11, COS 1.12, COS 1.13, and COS 1.14 on the CRAY 1S, 1M, and X-MP computing systems. The system is comprised of a data base management system, a program library, an execution control module, and nine separate FORTRAN technical modules. Each module calculates part of the posed PAN AIR problem. The data base manager is used to communicate between modules and within modules. The technical modules must be run in a prescribed fashion for each PAN AIR problem. In order to ease the problem of supplying the many JCL cards required to execute the modules, a set of CRAY procedures (PAPROCS) was created to automatically supply most of the JCL cards. Most of this document has not changed for Version 3.0. It now, however, strictly applies only to PAN AIR version 3.0. The major changes are: (1) additional sections covering the new FDP module (which calculates streamlines and offbody points); (2) a complete rewrite of the section on the MAG module; and (3) strict applicability to CRAY computing systems.

  7. Integer Linear Programming for Constrained Multi-Aspect Committee Review Assignment

    PubMed Central

    Karimzadehgan, Maryam; Zhai, ChengXiang

    2011-01-01

    Automatic review assignment can significantly improve the productivity of many people such as conference organizers, journal editors and grant administrators. A general setup of the review assignment problem involves assigning a set of reviewers on a committee to a set of documents to be reviewed under the constraint of review quota so that the reviewers assigned to a document can collectively cover multiple topic aspects of the document. No previous work has addressed such a setup of committee review assignments while also considering matching multiple aspects of topics and expertise. In this paper, we tackle the problem of committee review assignment with multi-aspect expertise matching by casting it as an integer linear programming problem. The proposed algorithm can naturally accommodate any probabilistic or deterministic method for modeling multiple aspects to automate committee review assignments. Evaluation using a multi-aspect review assignment test set constructed using ACM SIGIR publications shows that the proposed algorithm is effective and efficient for committee review assignments based on multi-aspect expertise matching. PMID:22711970
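
    As an illustration of the general idea (not the authors' exact formulation), the sketch below casts a tiny committee review assignment as an integer linear program: binary variables x[r, p] select reviewer r for paper p, each paper needs a fixed number of reviewers, and each reviewer has a quota. The expertise scores and quotas are made-up numbers, and SciPy >= 1.9 is assumed for its milp solver.

      import numpy as np
      from scipy.optimize import milp, LinearConstraint, Bounds

      expertise = np.array([      # expertise[r, p]: match score of reviewer r for paper p
          [0.9, 0.2, 0.4],
          [0.1, 0.8, 0.7],
          [0.5, 0.6, 0.3],
      ])
      n_rev, n_pap = expertise.shape
      reviewers_per_paper = 2     # each paper receives exactly 2 reviewers
      quota_per_reviewer = 2      # each reviewer takes at most 2 papers

      c = -expertise.ravel()      # milp minimizes, so negate to maximize total match

      # Each paper p must receive exactly `reviewers_per_paper` reviewers.
      A_paper = np.zeros((n_pap, n_rev * n_pap))
      for p in range(n_pap):
          A_paper[p, p::n_pap] = 1
      # Each reviewer r may review at most `quota_per_reviewer` papers.
      A_rev = np.zeros((n_rev, n_rev * n_pap))
      for r in range(n_rev):
          A_rev[r, r * n_pap:(r + 1) * n_pap] = 1

      constraints = [
          LinearConstraint(A_paper, reviewers_per_paper, reviewers_per_paper),
          LinearConstraint(A_rev, 0, quota_per_reviewer),
      ]
      res = milp(c, constraints=constraints,
                 integrality=np.ones(n_rev * n_pap), bounds=Bounds(0, 1))
      assignment = res.x.reshape(n_rev, n_pap).round().astype(int)
      print(assignment)           # assignment[r, p] == 1 if reviewer r reads paper p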

  8. Autonomous Guidance of Agile Small-scale Rotorcraft

    NASA Technical Reports Server (NTRS)

    Mettler, Bernard; Feron, Eric

    2004-01-01

    This report describes a guidance system for agile vehicles based on a hybrid closed-loop model of the vehicle dynamics. The hybrid model represents the vehicle dynamics through a combination of linear time-invariant control modes and pre-programmed, finite-duration maneuvers. This particular hybrid structure can be realized through a control system that combines trim controllers and a maneuvering control logic. The former enable precise trajectory tracking, and the latter enables trajectories at the edge of the vehicle's capabilities. The closed-loop model is much simpler than the full vehicle equations of motion, yet it can capture a broad range of dynamic behaviors. It also supports a consistent link between the physical layer and the decision-making layer. The trajectory generation was formulated as an optimization problem using mixed-integer linear programming. The optimization is solved in a receding-horizon fashion. Several techniques to improve the computational tractability were investigated. Simulation experiments using NASA Ames' R-50 model show that this approach fully exploits the vehicle's agility.

  9. A robust multi-objective global supplier selection model under currency fluctuation and price discount

    NASA Astrophysics Data System (ADS)

    Zarindast, Atousa; Seyed Hosseini, Seyed Mohamad; Pishvaee, Mir Saman

    2017-06-01

    A robust supplier selection problem is proposed in a scenario-based approach, where demand and exchange rates are subject to uncertainty. First, a deterministic multi-objective mixed integer linear programming model is developed; then, the robust counterpart of the proposed model is presented using recent extensions in robust optimization theory. The decision variables are discussed, respectively, through a two-stage stochastic planning model, a robust stochastic optimization planning model that integrates the worst-case scenario into the modeling approach, and finally an equivalent deterministic planning model. An experimental study is carried out to compare the performance of the three models. The robust model resulted in remarkable cost savings, illustrating that to cope with such uncertainties we should account for them in advance in our planning. In the case study, different suppliers were selected because of these uncertainties, and since supplier selection is a strategic decision, it is crucial to consider them in the planning approach.

  10. Finding Bayesian Optimal Designs for Nonlinear Models: A Semidefinite Programming-Based Approach.

    PubMed

    Duarte, Belmiro P M; Wong, Weng Kee

    2015-08-01

    This paper uses semidefinite programming (SDP) to construct Bayesian optimal design for nonlinear regression models. The setup here extends the formulation of the optimal designs problem as an SDP problem from linear to nonlinear models. Gaussian quadrature formulas (GQF) are used to compute the expectation in the Bayesian design criterion, such as D-, A- or E-optimality. As an illustrative example, we demonstrate the approach using the power-logistic model and compare results in the literature. Additionally, we investigate how the optimal design is impacted by different discretising schemes for the design space, different amounts of uncertainty in the parameter values, different choices of GQF and different prior distributions for the vector of model parameters, including normal priors with and without correlated components. Further applications to find Bayesian D-optimal designs with two regressors for a logistic model and a two-variable generalised linear model with a gamma distributed response are discussed, and some limitations of our approach are noted.

  11. Finding Bayesian Optimal Designs for Nonlinear Models: A Semidefinite Programming-Based Approach

    PubMed Central

    Duarte, Belmiro P. M.; Wong, Weng Kee

    2014-01-01

    Summary This paper uses semidefinite programming (SDP) to construct Bayesian optimal design for nonlinear regression models. The setup here extends the formulation of the optimal designs problem as an SDP problem from linear to nonlinear models. Gaussian quadrature formulas (GQF) are used to compute the expectation in the Bayesian design criterion, such as D-, A- or E-optimality. As an illustrative example, we demonstrate the approach using the power-logistic model and compare results in the literature. Additionally, we investigate how the optimal design is impacted by different discretising schemes for the design space, different amounts of uncertainty in the parameter values, different choices of GQF and different prior distributions for the vector of model parameters, including normal priors with and without correlated components. Further applications to find Bayesian D-optimal designs with two regressors for a logistic model and a two-variable generalised linear model with a gamma distributed response are discussed, and some limitations of our approach are noted. PMID:26512159

  12. Users manual for flight control design programs

    NASA Technical Reports Server (NTRS)

    Nalbandian, J. Y.

    1975-01-01

    Computer programs for the design of analog and digital flight control systems are documented. The program DIGADAPT uses linear-quadratic-gaussian synthesis algorithms in the design of command response controllers and state estimators, and it applies covariance propagation analysis to the selection of sampling intervals for digital systems. Program SCHED executes correlation and regression analyses for the development of gain and trim schedules to be used in open-loop explicit-adaptive control laws. A linear-time-varying simulation of aircraft motions is provided by the program TVHIS, which includes guidance and control logic, as well as models for control actuator dynamics. The programs are coded in FORTRAN and are compiled and executed on both IBM and CDC computers.

  13. Chromosome structures: reduction of certain problems with unequal gene content and gene paralogs to integer linear programming.

    PubMed

    Lyubetsky, Vassily; Gershgorin, Roman; Gorbunov, Konstantin

    2017-12-06

    Chromosome structure is a very limited model of the genome including information about its chromosomes such as their linear or circular organization, the order of genes on them, and the DNA strand encoding a gene. Gene lengths, nucleotide composition, and intergenic regions are ignored. Although highly incomplete, such a structure can be used in many cases, e.g., to reconstruct phylogeny and evolutionary events, to identify gene synteny, regulatory elements and promoters (considering highly conserved elements), etc. Three problems are considered; all assume unequal gene content and the presence of gene paralogs. The distance problem is to determine the minimum number of operations required to transform one chromosome structure into another and the corresponding transformation itself, including the identification of paralogs in the two structures. We use the DCJ model, which is one of the most studied combinatorial rearrangement models. Double-, sesqui-, and single-operations as well as deletion and insertion of a chromosome region are considered in the model; the single ones comprise cut and join. In the reconstruction problem, a phylogenetic tree with chromosome structures in the leaves is given. It is necessary to assign structures to the inner nodes of the tree to minimize the sum of distances between the terminal structures of each edge and to identify the mutual paralogs in a fairly large set of structures. A linear algorithm is known for the distance problem without paralogs, while the presence of paralogs makes it NP-hard. If paralogs are allowed but the insertion and deletion operations are missing (and special constraints are imposed), a reduction of the distance problem to integer linear programming is known. Apparently, the reconstruction problem is NP-hard even in the absence of paralogs. The problem of contigs is to find the optimal arrangements for each given set of contigs, which also includes the mutual identification of paralogs. We proved that these problems can be reduced to integer linear programming formulations, which allows them to be solved by an algorithm that implements a very special case of the integer linear programming tool. The results were tested on synthetic and biological samples. Three well-known problems were thus reduced to a very special case of integer linear programming, which constitutes a new method for their solution. Integer linear programming is clearly among the main computational methods and, as generally accepted, is fast on average; in particular, computation systems specifically targeted at it are available. The challenges are to reduce the size of the corresponding integer linear programming formulations and to incorporate a more detailed biological concept in our model of the reconstruction.

  14. Synchronous Control Method and Realization of Automated Pharmacy Elevator

    NASA Astrophysics Data System (ADS)

    Liu, Xiang-Quan

    First, a control method for the elevator's synchronous motion is provided, and a synchronous control structure for dual servo motors based on PMAC is developed. Second, the synchronous control program of the elevator is implemented using the PMAC linear interpolation motion model and a position error compensation method. Finally, the PID parameters of the servo motors are tuned. Experiments prove that the control method has high stability and reliability.

  15. Uplink Packet-Data Scheduling in DS-CDMA Systems

    NASA Astrophysics Data System (ADS)

    Choi, Young Woo; Kim, Seong-Lyun

    In this letter, we consider uplink packet scheduling for non-real-time data users in a DS-CDMA system. In an effort to jointly optimize throughput and fairness, we formulate a time-span minimization problem incorporating the time-multiplexing of different simultaneous transmission schemes. Based on simple rules, we propose efficient scheduling algorithms and compare them with the optimal solution obtained by linear programming.

  16. A User’s Manual for Linear Control Programs on IBM/360.

    DTIC Science & Technology

    1979-12-01

    Excerpts: ...problems is presented in the following paragraphs. However, the theory on which the subprogram is based is not given. The user who wishes to learn more... Input specifications include the dimension of the system (N < 10), the dimension of the random input vector (L < 10), the number of measurements (M < 10), and an (N x N) matrix (one row ...

  17. The Distributed Air Wing

    DTIC Science & Technology

    2014-06-01

    Glossary excerpt: ...Cruise Missile; LCS, Littoral Combat Ship; LEO, Low Earth Orbit; LER, Loss-Exchange-Ratio; LHA, Landing Helicopter Assault; LIDAR, Laser Imaging Detection and Ranging; LOC, Lines of Communication; LP, Linear Programming; LRASM, Long Range Anti-Ship Missile; LT, Long Ton; MANA, Map-Aware Non-uniform Automata; ME... Excerpt: ...enemy's spy satellites. Based on open source information, China currently has 25 satellites operating in Low Earth Orbit (LEO), each operates at an ...

  18. Optimization of the nutrient content and protein quality of cereal-legume blends for use as complementary foods in Ghana.

    PubMed

    Suri, Devika J; Tano-Debrah, Kwaku; Ghosh, Shibani A

    2014-09-01

    Nutritionally adequate complementary foods made from locally available ingredients are of high priority in developing countries, including Ghana. The majority of complementary foods in these countries are cereal-based and are unable to meet the nutrient intakes recommended by the World Health Organization. To evaluate the nutrient content and protein quality of local cereal-legume blends for complementary foods against recommendations and to determine the quantities of additional ingredients required to meet needs by using linear programming. Nine cereal-legume combinations (maize, sorghum, or millet combined with cowpea, peanut, or soybean) and koko (a traditional Ghanaian maize-based complementary food) were evaluated based on the macronutrient targets for a daily ration of complementary food for the age group 12 to 24 months: 264 kcal, 6.5 g of protein, and 8.2 to 11.7 g of fat. Protein quality was assessed by the Protein Digestibility Corrected Amino Acid Score (PDCAAS). Linear programming was then used to determine the amounts of additional oil, sugar, and lysine needed to meet macronutrient requirements. No traditional cereal-legume food met all complementary food macronutrient requirements on its own. Cereal-legume blends made with peanut or cowpeas were low in quality protein, while those with soybean were low in fat. Lysine was the limiting amino acid (PDCAAS 0.50 to 0.82) in all blends. Adding lysine increased utilizable protein by 1% to 10% in soybean blends, 35% to 40% in peanut blends, and 14% to 24% in cowpea blends. Peanut-maize, peanut-millet, and all soybean-cereal blends were able to meet macronutrient targets; most micronutrients remained below recommended levels. Traditional cereal-legume blends made from locally available ingredients do not meet energy, quality protein, and fat recommendations for complementary foods; however, such complementary food blends may be optimized to meet nutrient requirements by using linear programming as a tool to determine the exact levels of fortificants to be added (including, but not limited to, added fat, amino acids, and micronutrients).

  19. Influence of electromagnetic interference on implanted cardiac arrhythmia devices in and around a magnetically levitated linear motor car.

    PubMed

    Fukuta, Motoyuki; Mizutani, Noboru; Waseda, Katsuhisa

    2005-01-01

    This study was designed to determine the susceptibility of implanted cardiac arrhythmia devices to electromagnetic interference in and around a magnetically levitated linear motor car [High-Speed Surface Transport (HSST)]. During the study, cardiac devices were connected to a phantom model that had similar characteristics to the human body. Three pacemakers from three manufacturers and one implantable cardioverter-defibrillator (ICD) were evaluated in and around the magnetically levitated vehicle. The system is based on a normal conductive system levitated by the attractive force of magnets and propelled by a linear induction motor without wheels. The magnetic field strength at 40 cm from the vehicle in the nonlevitating state was 0.12 mT and that during levitation was 0.20 mT. The magnetic and electric field strengths on a seat close to the variable voltage/variable frequency inverter while the vehicle was moving and at rest were 0.13 mT, 2.95 V/m and 0.04 mT, 0.36 V/m, respectively. Data recorded on a seat close to the reactor while the vehicle was moving and at rest were 0.09 mT, 2.45 V/m and 0.05 mT, 1.46 V/m, respectively. The measured magnetic and electric field strengths both inside and outside the linear motor car were too low to result in device inactivation. No sensing, pacing, or arrhythmic interactions were noted with any pacemaker or ICD programmed in either bipolar or unipolar configurations. In conclusion, our data suggest that a permanent programming change or a device failure is unlikely to occur and that the linear motor car system is probably safe for patients with one of the four implanted cardiac arrhythmia devices used in this study under the conditions tested.

  20. Comparison of simulated and measured nonlinear ultrasound fields

    NASA Astrophysics Data System (ADS)

    Du, Yigang; Jensen, Henrik; Jensen, Jørgen Arendt

    2011-03-01

    In this paper results from a non-linear AS (angular spectrum) based ultrasound simulation program are compared to water-tank measurements. A circular concave transducer with a diameter of 1 inch (25.4 mm) is used as the emitting source. The measured pulses are first compared with the linear simulation program Field II, which will be used to generate the source for the AS simulation. The generated non-linear ultrasound field is measured by a hydrophone in the focal plane. The second harmonic component from the measurement is compared with the AS simulation, which is used to calculate both fundamental and second harmonic fields. The focused piston transducer with a center frequency of 5 MHz is excited by a waveform generator emitting a 6-cycle sine wave. The hydrophone is mounted in the focal plane 118 mm from the transducer. The point spread functions at the focal depth from Field II and measurements are illustrated. The FWHM (full width at half maximum) values are 1.96 mm for the measurement and 1.84 mm for the Field II simulation. The fundamental and second harmonic components of the experimental results are plotted compared with the AS simulations. The RMS (root mean square) errors of the AS simulations are 7.19% and 10.3% compared with the fundamental and second harmonic components of the measurements.

  1. Linear programming: an alternative approach for developing formulations for emergency food products.

    PubMed

    Sheibani, Ershad; Dabbagh Moghaddam, Arasb; Sharifan, Anousheh; Afshari, Zahra

    2018-03-01

    To minimize mortality among individuals affected by disasters, providing high-quality food relief during the initial stages of an emergency is crucial. The goal of this study was to develop a formulation for a high-energy, nutrient-dense prototype using a linear programming (LP) model as a novel method for developing food product formulations. The model consisted of the objective function and the decision variables, which were the formulation costs and the weights of the selected commodities, respectively. The LP constraints were the Institute of Medicine and World Health Organization specifications for the nutrient content of the product. Other constraints related to the product's sensory properties were also introduced to the model. Nonlinear constraints for the energy ratios of nutrients were linearized to allow their use in the LP. Three focus group studies were conducted to evaluate the palatability and other aspects of the optimized formulation. New constraints were introduced to the LP model based on the focus group evaluations to improve the formulation. LP is an appropriate tool for designing food product formulations that meet a set of nutritional requirements, and it is an excellent alternative to the traditional 'trial and error' method of designing formulations. © 2017 Society of Chemical Industry.
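
    For illustration of the kind of least-cost formulation LP described above, the sketch below minimizes ingredient cost subject to nutrient minimums, using SciPy's linprog. The ingredient names, costs, nutrient contents, and targets are illustrative placeholders, not data from the study.

      import numpy as np
      from scipy.optimize import linprog

      ingredients = ["cereal", "legume", "oil", "sugar"]
      cost = np.array([0.8, 1.2, 2.0, 0.9])          # cost per 100 g (arbitrary units)

      # Nutrient content per 100 g of each ingredient: energy (kcal), protein (g), fat (g)
      nutrients = np.array([
          [360, 340, 880, 400],    # energy
          [  8,  22,   0,   0],    # protein
          [  3,   2, 100,   0],    # fat
      ])
      minimums = np.array([500, 13, 15])             # required amounts per ration

      # linprog minimizes cost @ x subject to A_ub @ x <= b_ub, so the nutrient
      # minimums become -nutrients @ x <= -minimums.
      res = linprog(cost, A_ub=-nutrients, b_ub=-minimums,
                    bounds=[(0, 3)] * len(ingredients), method="highs")
      print(dict(zip(ingredients, res.x.round(2))))  # 100 g units of each ingredient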

  2. The Next Linear Collider Program

    Science.gov Websites

    Listing of Linear Collider International Study Group (ISG) meetings from the NLC home page and technical pages: the Eleventh Linear Collider International Study Group meeting at KEK, December 16-19, 2003; the Tenth (X) Linear Collider International Study Group meeting at SLAC, June 2003; and the Ninth Linear Collider International Study Group meeting at KEK, December 10-13, ...

  3. The effect of project-based learning on students' statistical literacy levels for data representation

    NASA Astrophysics Data System (ADS)

    Koparan, Timur; Güven, Bülent

    2015-07-01

    The point of this study is to define the effect of the project-based learning approach on 8th-grade secondary-school students' statistical literacy levels for data representation. To achieve this goal, a test consisting of 12 open-ended questions was developed in accordance with the views of experts. Seventy 8th-grade secondary-school students, 35 in the experimental group and 35 in the control group, took this test twice, once before the application and once after the application. All raw scores were converted into linear points using the Winsteps 3.72 modelling program, which performs the Rasch analysis, and t-tests and an ANCOVA analysis were carried out with the linear points. Based on the findings, it was concluded that the project-based learning approach increases students' level of statistical literacy for data representation. Students' levels of statistical literacy before and after the application were shown through the obtained person-item maps.

  4. Computer Programs for Calculating and Plotting the Stability Characteristics of a Balloon Tethered in a Wind

    NASA Technical Reports Server (NTRS)

    Bennett, R. M.; Bland, S. R.; Redd, L. T.

    1973-01-01

    Computer programs for calculating the stability characteristics of a balloon tethered in a steady wind are presented. Equilibrium conditions, characteristic roots, and modal ratios are calculated for a range of discrete values of velocity for a fixed tether-line length. Separate programs are used: (1) to calculate longitudinal stability characteristics, (2) to calculate lateral stability characteristics, (3) to plot the characteristic roots versus velocity, (4) to plot the characteristic roots in root-locus form, (5) to plot the longitudinal modes of motion, and (6) to plot the lateral modes for motion. The basic equations, program listings, and the input and output data for sample cases are presented, with a brief discussion of the overall operation and limitations. The programs are based on a linearized, stability-derivative type of analysis, including balloon aerodynamics, apparent mass, buoyancy effects, and static forces which result from the tether line.
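
    For illustration, a minimal sketch of the underlying stability check is given below: the characteristic roots of a linearized, stability-derivative model are the eigenvalues of its state matrix, and a mode is stable when every root has a negative real part. The 2-state matrix is an assumed toy example, not the tethered-balloon model itself.

      import numpy as np

      A = np.array([[ 0.0,  1.0],
                    [-4.0, -0.8]])          # toy 2-state linearized model
      roots = np.linalg.eigvals(A)          # characteristic roots
      print("characteristic roots:", np.round(roots, 3))
      print("stable:", bool(np.all(roots.real < 0)))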

  5. Investigations into the Effect of Current Velocity on Amidoxime-Based Polymeric Uranium Adsorbent Performance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gill, Gary A.; Kuo, Li-Jung; Strivens, Jonathan E.

    2015-12-01

    The Fuel Resources Program at the U.S. Department of Energy's (DOE) Office of Nuclear Energy (DOE-NE) is developing adsorbent technology to extract uranium from seawater. This technology is being developed to provide a sustainable and economically viable supply of uranium fuel for nuclear reactors (DOE, 2010). Among the key environmental variables to understand for adsorbent deployment in the coastal ocean is the effect that flow rate, or linear velocity, has on uranium adsorption capacity. The goal is to find flow conditions that optimize uranium adsorption capacity in the shortest exposure time. Understanding these criteria will be critical in choosing a location for deployment of a marine adsorbent farm. The objective of this study was to identify the linear velocity at which the adsorption kinetics for uranium extraction start to drop off due to limitations in mass transport of uranium to the surface of the adsorbent fibers. Two independent laboratory-based experimental approaches, using flow-through columns and recirculating flumes for adsorbent exposure, were used to assess the effect of flow rate (linear velocity) on the kinetic uptake of uranium on amidoxime-based polymeric adsorbent material. Time-series observations over a 56-day period were conducted with flow-through columns over a 35-fold range in linear velocity, from 0.29 to 10.2 cm/s, while the flume study was conducted over a narrower 11-fold range, from 0.48 to 5.52 cm/s. These ranges were specifically chosen to focus on the lower end of oceanic currents and to extend above and below the linear velocity of ~2.5 cm/s adopted for marine testing of adsorbent material at PNNL.

  6. A primer for biomedical scientists on how to execute model II linear regression analysis.

    PubMed

    Ludbrook, John

    2012-04-01

    1. There are two very different ways of executing linear regression analysis. One is Model I, when the x-values are fixed by the experimenter. The other is Model II, in which the x-values are free to vary and are subject to error. 2. I have received numerous complaints from biomedical scientists that they have great difficulty in executing Model II linear regression analysis. This may explain the results of a Google Scholar search, which showed that the authors of articles in journals of physiology, pharmacology and biochemistry rarely use Model II regression analysis. 3. I repeat my previous arguments in favour of using least products linear regression analysis for Model II regressions. I review three methods for executing ordinary least products (OLP) and weighted least products (WLP) regression analysis: (i) scientific calculator and/or computer spreadsheet; (ii) specific purpose computer programs; and (iii) general purpose computer programs. 4. Using a scientific calculator and/or computer spreadsheet, it is easy to obtain correct values for OLP slope and intercept, but the corresponding 95% confidence intervals (CI) are inaccurate. 5. Using specific purpose computer programs, the freeware computer program smatr gives the correct OLP regression coefficients and obtains 95% CI by bootstrapping. In addition, smatr can be used to compare the slopes of OLP lines. 6. When using general purpose computer programs, I recommend the commercial programs systat and Statistica for those who regularly undertake linear regression analysis and I give step-by-step instructions in the Supplementary Information as to how to use loss functions. © 2011 The Author. Clinical and Experimental Pharmacology and Physiology. © 2011 Blackwell Publishing Asia Pty Ltd.
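
    For illustration, a minimal sketch of the ordinary least products (Model II) point estimates is given below, using the standard geometric-mean formulas (slope = sign(r) x SDy/SDx). It does not reproduce the bootstrap confidence intervals or the weighted variant discussed above, and the paired data are made up.

      import numpy as np

      def olp_regression(x, y):
          """Return (slope, intercept) of the ordinary least products line."""
          x, y = np.asarray(x, float), np.asarray(y, float)
          r = np.corrcoef(x, y)[0, 1]
          slope = np.sign(r) * y.std(ddof=1) / x.std(ddof=1)   # geometric-mean slope
          intercept = y.mean() - slope * x.mean()              # line passes through the means
          return slope, intercept

      # Example with made-up paired measurements from two instruments
      x = np.array([1.0, 2.1, 2.9, 4.2, 5.1])
      y = np.array([1.2, 2.0, 3.3, 4.0, 5.4])
      print(olp_regression(x, y))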

  7. Observer-Based Discrete-Time Nonnegative Edge Synchronization of Networked Systems.

    PubMed

    Su, Housheng; Wu, Han; Chen, Xia

    2017-10-01

    This paper studies the multi-input and multi-output discrete-time nonnegative edge synchronization of networked systems based on neighbors' output information. The communication relationship among the edges of the networked systems is modeled by the well-known line graph. Two observer-based edge synchronization algorithms are designed, for which some necessary and sufficient synchronization conditions are derived. Moreover, some computable sufficient synchronization conditions are obtained, in which the feedback matrix and the observer matrix are computed by solving linear programming problems. We finally design several simulation examples to demonstrate the validity of the given nonnegative edge synchronization algorithms.

  8. Linking linear programming and spatial simulation models to predict landscape effects of forest management alternatives

    Treesearch

    Eric J. Gustafson; L. Jay Roberts; Larry A. Leefers

    2006-01-01

    Forest management planners require analytical tools to assess the effects of alternative strategies on the sometimes disparate benefits from forests such as timber production and wildlife habitat. We assessed the spatial patterns of alternative management strategies by linking two models that were developed for different purposes. We used a linear programming model (...

  9. A Partitioning and Bounded Variable Algorithm for Linear Programming

    ERIC Educational Resources Information Center

    Sheskin, Theodore J.

    2006-01-01

    An interesting new partitioning and bounded variable algorithm (PBVA) is proposed for solving linear programming problems. The PBVA is a variant of the simplex algorithm which uses a modified form of the simplex method followed by the dual simplex method for bounded variables. In contrast to the two-phase method and the big M method, the PBVA does…

  10. Airborne Tactical Crossload Planner

    DTIC Science & Technology

    2017-12-01

    Excerpts: ...set out in the Airborne Standard Operating Procedure (ASOP). Subject terms: crossload, airborne, optimization, integer linear programming. ...they land to their respective sub-mission locations. In this thesis, we formulate and implement an integer linear program called the Tactical Crossload Planner (TCP) ...to meet any desired crossload objectives. We demonstrate TCP with two real-world tactical problems from recent airborne operations: one by the ...

  11. Radar Resource Management in a Dense Target Environment

    DTIC Science & Technology

    2014-03-01

    Excerpts: ...problem faced by networked MFRs. While relaxing our assumptions concerning information gain presents numerous challenges worth exploring, future research... Glossary: LP, linear programming; MFR, multifunction phased array radar; MILP, mixed integer linear programming; NATO, North Atlantic Treaty Organization; PDF, probability... From the introduction: Multifunction phased array radars (MFRs) are capable of performing various tasks in rapid succession. The performance of target search...

  12. Linear circuit analysis program for IBM 1620 Monitor 2, 1311/1443 data processing system /CIRCS/

    NASA Technical Reports Server (NTRS)

    Hatfield, J.

    1967-01-01

    CIRCS is a modification of the IBSNAP Circuit Analysis Program for use on smaller systems. It retains the basic dc analysis, transient analysis, and FORTRAN 2 formats. The program can be used on the IBM 1620/1311 Monitor I Mod 5 system and solves a linear network containing 15 nodes and 45 branches.

  13. LFSPMC: Linear feature selection program using the probability of misclassification

    NASA Technical Reports Server (NTRS)

    Guseman, L. F., Jr.; Marion, B. P.

    1975-01-01

    The computational procedure and associated computer program for a linear feature selection technique are presented. The technique assumes that: a finite number, m, of classes exists; each class is described by an n-dimensional multivariate normal density function of its measurement vectors; the mean vector and covariance matrix for each density function are known (or can be estimated); and the a priori probability for each class is known. The technique produces a single linear combination of the original measurements which minimizes the one-dimensional probability of misclassification defined by the transformed densities.
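
    For illustration only, the sketch below mimics the technique in two dimensions: a candidate direction w projects each class's normal density to one dimension, the Bayes probability of misclassification of the projected densities is evaluated numerically, and the direction is optimized. The class means, covariances, priors, and the use of SciPy's Nelder-Mead optimizer are assumptions for the example, not details of LFSPMC.

      import numpy as np
      from scipy.optimize import minimize
      from scipy.stats import norm

      # Illustrative two-class problem (the program itself handles m classes).
      means = [np.array([0.0, 0.0]), np.array([2.0, 1.0])]
      covs = [np.eye(2), np.array([[1.5, 0.3], [0.3, 0.8]])]
      priors = [0.5, 0.5]
      grid = np.linspace(-10.0, 10.0, 4001)
      dx = grid[1] - grid[0]

      def misclassification(w):
          """Bayes error of the class densities projected onto direction w."""
          w = w / np.linalg.norm(w)
          dens = np.array([p * norm.pdf(grid, w @ m, np.sqrt(w @ C @ w))
                           for p, m, C in zip(priors, means, covs)])
          # Probability mass not captured by the winning class at each grid point.
          return float(np.sum(dens.sum(axis=0) - dens.max(axis=0)) * dx)

      res = minimize(misclassification, x0=np.ones(2), method="Nelder-Mead")
      w_best = res.x / np.linalg.norm(res.x)
      print("projection direction:", w_best.round(3), "estimated error:", round(res.fun, 4))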

  14. SLFP: a stochastic linear fractional programming approach for sustainable waste management.

    PubMed

    Zhu, H; Huang, G H

    2011-12-01

    A stochastic linear fractional programming (SLFP) approach is developed for supporting sustainable municipal solid waste management under uncertainty. The SLFP method can solve ratio optimization problems associated with random information, where chance-constrained programming is integrated into a linear fractional programming framework. It has advantages in: (1) comparing objectives of two aspects, (2) reflecting system efficiency, (3) dealing with uncertainty expressed as probability distributions, and (4) providing optimal-ratio solutions under different system-reliability conditions. The method is applied to a case study of waste flow allocation within a municipal solid waste (MSW) management system. The obtained solutions are useful for identifying sustainable MSW management schemes with maximized system efficiency under various constraint-violation risks. The results indicate that SLFP can support in-depth analysis of the interrelationships among system efficiency, system cost and system-failure risk. Copyright © 2011 Elsevier Ltd. All rights reserved.
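
    For illustration of the fractional-programming core only (the chance constraints of SLFP are not modeled here), the sketch below applies the classical Charnes-Cooper substitution y = t*x with t = 1/(d@x + d0), turning a linear fractional objective into an ordinary LP solvable with SciPy's linprog. All coefficients are made-up numbers.

      import numpy as np
      from scipy.optimize import linprog

      # Maximize (c@x + c0) / (d@x + d0)  subject to  A@x <= b,  x >= 0.
      c, c0 = np.array([2.0, 1.0]), 0.0        # numerator coefficients
      d, d0 = np.array([1.0, 3.0]), 4.0        # denominator coefficients (kept > 0)
      A = np.array([[1.0, 1.0], [3.0, 1.0]])
      b = np.array([10.0, 15.0])

      # Variables of the transformed LP: z = [y1, y2, t]
      obj = -np.append(c, c0)                  # maximize c@y + c0*t -> minimize negative
      A_ub = np.hstack([A, -b[:, None]])       # A@y - b*t <= 0
      b_ub = np.zeros(len(b))
      A_eq = np.append(d, d0).reshape(1, -1)   # d@y + d0*t = 1 (normalization)
      b_eq = [1.0]

      res = linprog(obj, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                    bounds=[(0, None)] * 3, method="highs")
      y, t = res.x[:2], res.x[2]
      print("optimal x =", (y / t).round(3), "optimal ratio =", round(-res.fun, 3))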

  15. Probabilistic finite elements for transient analysis in nonlinear continua

    NASA Technical Reports Server (NTRS)

    Liu, W. K.; Belytschko, T.; Mani, A.

    1985-01-01

    The probabilistic finite element method (PFEM), which is a combination of finite element methods and second-moment analysis, is formulated for linear and nonlinear continua with inhomogeneous random fields. Analogous to the discretization of the displacement field in finite element methods, the random field is also discretized. The formulation is simplified by transforming the correlated variables to a set of uncorrelated variables through an eigenvalue orthogonalization. Furthermore, it is shown that a reduced set of the uncorrelated variables is sufficient for the second-moment analysis. Based on the linear formulation of the PFEM, the method is then extended to transient analysis in nonlinear continua. The accuracy and efficiency of the method is demonstrated by application to a one-dimensional, elastic/plastic wave propagation problem. The moments calculated compare favorably with those obtained by Monte Carlo simulation. Also, the procedure is amenable to implementation in deterministic FEM based computer programs.

  16. PMICALC: an R code-based software for estimating post-mortem interval (PMI) compatible with Windows, Mac and Linux operating systems.

    PubMed

    Muñoz-Barús, José I; Rodríguez-Calvo, María Sol; Suárez-Peñaranda, José M; Vieira, Duarte N; Cadarso-Suárez, Carmen; Febrero-Bande, Manuel

    2010-01-30

    In legal medicine the correct determination of the time of death is of utmost importance. Recent advances in estimating post-mortem interval (PMI) have made use of vitreous humour chemistry in conjunction with Linear Regression, but the results are questionable. In this paper we present PMICALC, an R code-based freeware package which estimates PMI in cadavers of recent death by measuring the concentrations of potassium ([K+]), hypoxanthine ([Hx]) and urea ([U]) in the vitreous humor using two different regression models: Additive Models (AM) and Support Vector Machine (SVM), which offer more flexibility than the previously used Linear Regression. The results from both models are better than those published to date and can give numerical expression of PMI with confidence intervals and graphic support within 20 min. The program also takes into account the cause of death. 2009 Elsevier Ireland Ltd. All rights reserved.

  17. Exploring the efficacy of replacing linear paper-based patient cases in problem-based learning with dynamic Web-based virtual patients: randomized controlled trial.

    PubMed

    Poulton, Terry; Ellaway, Rachel H; Round, Jonathan; Jivram, Trupti; Kavia, Sheetal; Hilton, Sean

    2014-11-05

    Problem-based learning (PBL) is well established in medical education and beyond, and continues to be developed and explored. Challenges include how to connect the somewhat abstract nature of classroom-based PBL with clinical practice and how to maintain learner engagement in the process of PBL over time. A study was conducted to investigate the efficacy of decision-PBL (D-PBL), a variant form of PBL that replaces linear PBL cases with virtual patients. These Web-based interactive cases provided learners with a series of patient management pathways. Learners were encouraged to consider and discuss courses of action, take their chosen management pathway, and experience the consequences of their decisions. A Web-based application was essential to allow scenarios to respond dynamically to learners' decisions, to deliver the scenarios to multiple PBL classrooms in the same timeframe, and to record centrally the paths taken by the PBL groups. A randomized controlled trial in crossover design was run involving all learners (N=81) in the second year of the graduate entry stream for the undergraduate medicine program at St George's University of London. Learners were randomized to study groups; half engaged in a D-PBL activity whereas the other half had a traditional linear PBL activity on the same subject material. Groups alternated D-PBL and linear PBL over the semester. The measure was mean cohort performance on specific face-to-face exam questions at the end of the semester. D-PBL groups performed better than linear PBL groups on questions related to D-PBL with the difference being statistically significant for all questions. Differences between the exam performances of the 2 groups were not statistically significant for the questions not related to D-PBL. The effect sizes for D-PBL-related questions were large and positive (>0.6) except for 1 question that showed a medium positive effect size. The effect sizes for questions not related to D-PBL were all small (≤0.3) with a mix of positive and negative values. The efficacy of D-PBL was indicated by improved exam performance for learners who had D-PBL compared to those who had linear PBL. This suggests that the use of D-PBL leads to better midterm learning outcomes than linear PBL, at least for learners with prior experience with linear PBL. On the basis of tutor and student feedback, St George's University of London and the University of Nicosia, Cyprus have replaced paper PBL cases for midstage undergraduate teaching with D-PBL virtual patients, and 6 more institutions in the ePBLnet partnership will be implementing D-PBL in Autumn 2015.

  18. Changes of Linearity in MF2 Index with R12 and Solar Activity Maximum

    NASA Astrophysics Data System (ADS)

    Villanueva, L.

    2013-05-01

    Critical frequency of F2 layer is related to the solar activity, and the sunspot number has been the standard index for ionospheric prediction programs. This layer, being considered the most important in HF radio communications due to its highest electron density, determines the maximum frequency coming back from ground base transmitter signals, and shows irregular variation in time and space. Nowadays the spatial variation, better understood due to the availability of TEC measurements, let Space Weather Centers have observations almost in real time. However, it is still the most difficult layer to predict in time. Short time variations are improved in IRI model, but long term predictions are only related to the well-known CCIR and URSI coefficients and Solar activity R12 predictions, (or ionospheric indexes in regional models). The concept of the "saturation" of the ionosphere is based on data observations around 3 solar cycles before 1970, (NBS, 1968). There is a linear relationship among MUF (0Km) and R12, for smooth Sunspot numbers R12 less than 100, but constant for higher R12, so, no rise of MUF is expected for R12 higher than 100. This recommendation has been used in most of the known Ionospheric prediction programs for HF Radio communication. In this work, observations of smoothed ionospheric index MF2 related to R12 are presented to find common features of the linear relationship, which is found to persist in different ranges of R12 depending on the specific maximum level of each solar cycle. In the analysis of individual solar cycles, the lapse of linearity is less than 100 for a low solar cycle and higher than 100 for a high solar cycle. To improve ionospheric predictions we can establish levels for solar cycle maximum sunspot numbers R12 around low 100, medium 150 and high 200 and specify the ranges of linearity of MUF(0Km) related to R12 which is not only 100 as assumed for all the solar cycles. For lower levels of solar cycle, discussions of present observations are presented.

  19. Using the nonlinear aquifer storage-discharge relationship to simulate the base flow of glacier- and snowmelt-dominated basins in northwest China

    NASA Astrophysics Data System (ADS)

    Gan, R.; Luo, Y.

    2013-09-01

    Base flow is an important component in hydrological modeling. This process is usually modeled using the linear aquifer storage-discharge relation approach, although the outflow from groundwater aquifers is nonlinear. To assess the accuracy of base flow estimates in rivers dominated by snowmelt and/or glacier melt in arid and cold northwestern China, a nonlinear storage-discharge relationship for use in SWAT (Soil and Water Assessment Tool) modeling was developed and applied to the Manas River basin in the Tian Shan Mountains. Linear reservoir models and a digital filter program were used for comparison. Meanwhile, numerical analysis of recession curves from 78 river gauge stations revealed variation in the parameters of the nonlinear relationship. It was found that the nonlinear reservoir model can improve the streamflow simulation, especially for the low-flow period. Higher Nash-Sutcliffe efficiency, logarithmic efficiency, and volumetric efficiency, and lower percent bias were obtained compared to the single linear reservoir approach. The parameter b of the aquifer storage-discharge function varied mostly between 0.0 and 0.1, which is much smaller than the suggested value of 0.5. The coefficient a of the function is related to catchment properties, primarily the basin and glacier areas.
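
    For illustration, a minimal sketch of a generic nonlinear reservoir is given below: storage S drains as Q = a*S**b and is integrated with an explicit Euler step. The functional form, parameter names, and values are assumptions for the example; they are not the SWAT implementation or the calibrated Manas River parameters.

      import numpy as np

      def nonlinear_reservoir_baseflow(s0, a, b, recharge, dt=1.0):
          """Simulate base flow Q (same units as recharge) from an aquifer store S."""
          s, flows = s0, []
          for r in recharge:                 # per-step recharge to the aquifer
              q = a * s ** b                 # nonlinear storage-discharge relation
              s = max(s + (r - q) * dt, 0.0) # water balance of the aquifer store
              flows.append(q)
          return np.array(flows)

      recharge = np.concatenate([np.full(10, 2.0), np.zeros(50)])   # wet spell, then dry
      q = nonlinear_reservoir_baseflow(s0=100.0, a=0.05, b=1.2, recharge=recharge)
      print(q[:5].round(3), q[-5:].round(3))   # the recession tails off nonlinearly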

  20. Optimal Load Shedding and Generation Rescheduling for Overload Suppression in Large Power Systems.

    NASA Astrophysics Data System (ADS)

    Moon, Young-Hyun

    Ever-increasing size, complexity, and operating costs in modern power systems have stimulated intensive study of optimal Load Shedding and Generator Rescheduling (LSGR) strategies in the sense of secure and economic system operation. The conventional approach to LSGR has been based on the application of LP (Linear Programming) with an approximately linearized model, and the LP algorithm is currently considered the most powerful tool for solving the LSGR problem. However, the LP algorithms presented in the literature essentially suffer from the following disadvantages: (i) the piecewise linearization involved in the LP algorithms requires the introduction of a number of new inequalities and slack variables, which creates a significant burden on the computing facilities, and (ii) the objective functions are not formulated in terms of the state variables of the adopted models, resulting in considerable numerical inefficiency in computing the optimal solution. A new approach is presented, based on the development of a new linearized model and on the application of QP (Quadratic Programming). The changes in line flows resulting from changes in bus injection power are taken into account in the proposed model by introducing sensitivity coefficients, which avoids the second disadvantage mentioned above. A precise method to calculate these sensitivity coefficients is given. A comprehensive review of optimization theory is included, in which the development of QP algorithms for LSGR based on Wolfe's method and Kuhn-Tucker theory is evaluated in detail. The validity of the proposed model and QP algorithms has been verified and tested on practical power systems, showing a significant reduction of both computation time and memory requirements, as well as the expected lower generation costs of the optimal solution compared with those obtained from computing the optimal solution with LP. Finally, it is noted that an efficient reactive power compensation algorithm is developed to suppress voltage disturbances due to load shedding, and that a new method for multiple contingency simulation is presented.
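
    For illustration of the QP machinery only (not the dissertation's LSGR formulation, which also enforces inequality limits on line flows, generation, and load shedding), the sketch below solves a small equality-constrained quadratic program through its KKT linear system, using a toy two-generator redispatch with made-up cost coefficients.

      import numpy as np

      def eq_qp(H, g, A, b):
          """Solve min 0.5 x'Hx + g'x  s.t.  Ax = b  via the KKT linear system."""
          n, m = H.shape[0], A.shape[0]
          kkt = np.block([[H, A.T],
                          [A, np.zeros((m, m))]])
          rhs = np.concatenate([-g, b])
          sol = np.linalg.solve(kkt, rhs)
          return sol[:n], sol[n:]            # primal solution, Lagrange multipliers

      # Toy redispatch: two generators with quadratic cost must jointly supply 100 MW.
      H = np.diag([0.2, 0.5])                # cost curvature of each generator
      g = np.array([10.0, 8.0])              # linear cost coefficients
      A = np.array([[1.0, 1.0]])             # power balance row
      b = np.array([100.0])
      x, lam = eq_qp(H, g, A, b)
      # With this sign convention the system marginal price is -lam.
      print("dispatch (MW):", x.round(2), "marginal price:", (-lam[0]).round(2))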

  1. Cognitive-behavioral based physical therapy for patients with chronic pain undergoing lumbar spine surgery: a randomized controlled trial

    PubMed Central

    Archer, Kristin R.; Devin, Clinton J.; Vanston, Susan W.; Koyama, Tatsuki; Phillips, Sharon; George, Steven Z.; McGirt, Matthew J.; Spengler, Dan M.; Aaronson, Oran S.; Cheng, Joseph S.; Wegener, Stephen T.

    2015-01-01

    The purpose of this study was to determine the efficacy of a cognitive-behavioral based physical therapy (CBPT) program for improving outcomes in patients following lumbar spine surgery. A randomized controlled trial was conducted in 86 adults undergoing a laminectomy with or without arthrodesis for a lumbar degenerative condition. Patients were screened preoperatively for high fear of movement using the Tampa Scale for Kinesiophobia. Randomization to either CBPT or an Education program occurred at 6 weeks after surgery. Assessments were completed pre-treatment, post-treatment and at 3 month follow-up. The primary outcomes were pain and disability measured by the Brief Pain Inventory and Oswestry Disability Index. Secondary outcomes included general health (SF-12) and performance-based tests (5-Chair Stand, Timed Up and Go, 10 Meter Walk). Multivariable linear regression analyses found that CBPT participants had significantly greater decreases in pain and disability and increases in general health and physical performance compared to the Education group at 3 month follow-up. Results suggest a targeted CBPT program may result in significant and clinically meaningful improvement in postoperative outcomes. CBPT has the potential to be an evidence-based program that clinicians can recommend for patients at-risk for poor recovery following spine surgery. PMID:26476267

  2. The gynecologic oncology fellowship interview process: Challenges and potential areas for improvement.

    PubMed

    Gressel, Gregory M; Van Arsdale, Anne; Dioun, Shayan M; Goldberg, Gary L; Nevadunsky, Nicole S

    2017-05-01

    The application and interview process for gynecologic oncology fellowship is highly competitive, time-consuming and expensive for applicants. We conducted a survey of successfully matched gynecologic oncology fellowship applicants to assess problems associated with the interview process and identify areas for improvement. All Society of Gynecologic Oncology (SGO) list-serve members who have participated in the match program for gynecologic oncology fellowship were asked to complete an online survey regarding the interview process. Linear regression modeling was used to examine association between year of match, number of programs applied to, cost incurred, and overall satisfaction. Two hundred and sixty-nine eligible participants reported applying to a mean of 20 programs [range 1-45] and were offered a mean of 14 interviews [range 1-43]. They spent an average of $6000 [$0-25,000], using personal savings (54%), credit cards (50%), family support (12%) or personal loans (3%). Seventy percent of respondents identified the match as fair, and 93% were satisfied. Interviewees spent a mean of 15 [0-45] days away from work and 37% reported difficulty arranging coverage. Linear regression showed an increase in number of programs applied to and cost per applicant over time (p < 0.001) between 1993 and 2016. Applicants who applied to all available programs spent more (p < 0.001) than those who applied to programs based on their location or quality. The current fellowship match was identified as fair and satisfying by most respondents despite being time consuming and expensive. Suggested alternative options included clustering interviews geographically or conducting preliminary interviews at the SGO Annual Meeting.

  3. Final Report---Optimization Under Nonconvexity and Uncertainty: Algorithms and Software

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jeff Linderoth

    2011-11-06

    The goal of this work was to develop new algorithmic techniques for solving large-scale numerical optimization problems, focusing on problem classes that have proven to be among the most challenging for practitioners: those involving uncertainty and those involving nonconvexity. This research advanced the state of the art in solving mixed integer linear programs containing symmetry, mixed integer nonlinear programs, and stochastic optimization problems. The focus of the work done in the continuation was on Mixed Integer Nonlinear Programs (MINLPs) and Mixed Integer Linear Programs (MILPs), especially those containing a great deal of symmetry.

  4. Designing optimal food intake patterns to achieve nutritional goals for Japanese adults through the use of linear programming optimization models.

    PubMed

    Okubo, Hitomi; Sasaki, Satoshi; Murakami, Kentaro; Yokoyama, Tetsuji; Hirota, Naoko; Notsu, Akiko; Fukui, Mitsuru; Date, Chigusa

    2015-06-06

    Simultaneous dietary achievement of a full set of nutritional recommendations is difficult. A diet optimization model using linear programming is a useful mathematical means of translating nutrient-based recommendations into realistic, nutritionally optimal food combinations incorporating local and culture-specific foods. We used this approach to explore optimal food intake patterns that meet the nutrient recommendations of the Dietary Reference Intakes (DRIs) while incorporating typical Japanese food selections. As observed intake values, we used the food and nutrient intake data of 92 women aged 31-69 years and 82 men aged 32-69 years living in three regions of Japan. Dietary data were collected with semi-weighed dietary records on four non-consecutive days in each season of the year (16 days in total). The linear programming models were constructed to minimize the differences between observed and optimized food intake patterns while also meeting the DRIs for a set of 28 nutrients, setting energy equal to estimated requirements, and not exceeding the typical quantities of each food consumed by each age (30-49 or 50-69 years) and gender group. We successfully developed mathematically optimized food intake patterns that met the DRIs for all 28 nutrients studied in each sex and age group. Achieving nutritional goals required minor modifications of existing diets in the older groups, particularly women, while major modifications were required to increase intake of fruit and vegetables in the younger groups of both sexes. Across all sex and age groups, optimized food intake patterns demanded greatly increased intake of whole grains and reduced-fat dairy products in place of refined grains and full-fat dairy products. Salt intake goals were the most difficult to achieve, requiring marked reduction of salt-containing seasonings (65-80%) in all sex and age groups. Using a linear programming model, we identified optimal food intake patterns providing practical food choices and meeting nutritional recommendations for Japanese populations. The dietary modifications from current eating habits required to fulfil nutritional goals differed by age: more marked increases in food volume were required in the younger groups.
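
    For illustration of the linear-programming device involved (staying close to an observed food intake pattern while meeting nutrient constraints), the sketch below linearizes the sum of absolute deviations with split variables and solves it with SciPy's linprog. The food groups, nutrient contents, and targets are placeholders, not DRI values or data from the study.

      import numpy as np
      from scipy.optimize import linprog

      obs = np.array([300.0, 150.0, 80.0])          # observed g/day of 3 food groups
      nutr = np.array([                             # nutrient content per g of each group
          [0.001, 0.010, 0.002],                    # e.g. iron (mg/g)
          [0.5,   1.2,   2.5],                      # e.g. energy (kcal/g)
      ])
      lo = np.array([2.0, 600.0])                   # minimum daily nutrient targets

      n = len(obs)
      # Variables: [x_1..x_n, dplus_1..dplus_n, dminus_1..dminus_n]
      c = np.concatenate([np.zeros(n), np.ones(2 * n)])       # minimize total deviation
      A_eq = np.hstack([np.eye(n), -np.eye(n), np.eye(n)])    # x - dplus + dminus = obs
      b_eq = obs
      A_ub = np.hstack([-nutr, np.zeros((len(lo), 2 * n))])   # nutrient minimums
      b_ub = -lo
      res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                    bounds=[(0, None)] * (3 * n), method="highs")
      print("optimized intakes (g/day):", res.x[:n].round(1))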

  5. Solution Methods for Stochastic Dynamic Linear Programs.

    DTIC Science & Technology

    1980-12-01

    16, No. 11, pp. 652-675, July 1970. [28] Glassey, C.R., "Dynamic linear programs for production scheduling", OR 19, pp. 45-56. 1971 . 129 Glassey, C.R...Huang, C.C., I. Vertinsky, W.T. Ziemba, ’Sharp bounds on the value of perfect information", OR 25, pp. 128-139, 1977. [37 Kall , P., ’Computational... 1971 . [701 Ziemba, W.T., *Computational algorithms for convex stochastic programs with simple recourse", OR 8, pp. 414-431, 1970. 131 UNCLASSI FIED

  6. Development of food-based complementary feeding recommendations for 9- to 11-month-old peri-urban Indonesian infants using linear programming.

    PubMed

    Santika, Otte; Fahmida, Umi; Ferguson, Elaine L

    2009-01-01

    Effective population-specific, food-based complementary feeding recommendations (CFR) are required to combat micronutrient deficiencies. To facilitate their formulation, a modeling approach was recently developed. However, it has not yet been used in practice. This study therefore aimed to use this approach to develop CFR for 9- to 11-mo-old Indonesian infants and to identify nutrients that will likely remain low in their diets. The CFR were developed using a 4-phase approach based on linear and goal programming. Model parameters were defined using dietary data collected in a cross-sectional survey of 9- to 11-mo-old infants (n = 100) living in the Bogor District, West-Java, Indonesia and a market survey of 3 local markets. Results showed theoretical iron requirements could not be achieved using local food sources (highest level achievable, 63% of recommendations) and adequate levels of iron, niacin, zinc, and calcium were difficult to achieve. Fortified foods, meatballs, chicken liver, eggs, tempe-tofu, banana, and spinach were the best local food sources to improve dietary quality. The final CFR were: breast-feed on demand, provide 3 meals/d, of which 1 is a fortified infant cereal; > or = 5 servings/wk of tempe/tofu; > or = 3 servings/wk of animal-source foods, of which 2 servings/wk are chicken liver; vegetables, daily; snacks, 2 times/d, including > or = 2 servings/wk of banana; and > or = 4 servings/wk of fortified-biscuits. Results showed that the approach can be used to objectively formulate population-specific CFR and identify key problem nutrients to strengthen nutrition program planning and policy decisions. Before recommending these CFR, their long-term acceptability, affordability, and effectiveness should be assessed.

  7. The Effect of a Workplace-Based Early Intervention Program on Work-Related Musculoskeletal Compensation Outcomes at a Poultry Meat Processing Plant.

    PubMed

    Donovan, Michael; Khan, Asaduzzaman; Johnston, Venerina

    2017-03-01

    Introduction The aim of this study is to determine whether a workplace-based early intervention injury prevention program reduces work-related musculoskeletal compensation outcomes in poultry meat processing workers. Methods A poultry meatworks in Queensland, Australia implemented an onsite early intervention which included immediate reporting and triage, reassurance, multidisciplinary participatory consultation, workplace modification and onsite physiotherapy. Secondary pre-post analyses of the meatworks' compensation data over 4 years were performed, with the intervention commencing 2 years into the study period. Outcome measures included rate of claims, costs per claim and work days absent at an individual claim level. Where possible, similar analyses were performed on data for Queensland's poultry meat processing industry (excluding the meatworks used in this study). Results At the intervention meatworks, in the post-intervention period an 18% reduction in claims per 1 million working hours (p = 0.017) was observed. Generalized linear modelling revealed a significant reduction in average costs per claim of $831 (OR 0.74; 95% CI 0.59-0.93; p = 0.009). Median days absent was reduced by 37% (p = 0.024). For the poultry meat processing industry over the same period, generalized linear modelling revealed no significant change in average costs per claim (OR 1.02; 95% CI 0.76-1.36; p = 0.91). Median days absent was unchanged (p = 0.93). Conclusion The introduction of an onsite, workplace-based early intervention injury prevention program demonstrated positive effects on compensation outcomes for work-related musculoskeletal disorders in poultry meat processing workers. Prospective studies are needed to confirm the findings of the present study.

  8. Effect of virtual memory on efficient solution of two model problems

    NASA Technical Reports Server (NTRS)

    Lambiotte, J. J., Jr.

    1977-01-01

    Computers with virtual memory architecture allow programs to be written as if they were small enough to be contained in memory. Two types of problems are investigated to show that this luxury can lead to quite inefficient performance if the programmer does not interact strongly with the characteristics of the operating system when developing the program. The two problems considered are the simultaneous solution of a large linear system of equations by Gaussian elimination and a model three-dimensional finite-difference problem. Runs on the Control Data STAR-100 computer are made to demonstrate the inefficiencies of programming the problems in the manner one would naturally use if the problems were, indeed, small enough to be contained in memory. Program redesigns are presented which achieve large improvements in performance through changes in the computational procedure and the data base arrangement.

  9. Association between proportion of US medical graduates and program characteristics in gastroenterology fellowships.

    PubMed

    Atsawarungruangkit, Amporn

    2017-01-01

    Gastroenterology is one of the most competitive internal medicine fellowships. However, the factors associated with program competitiveness have not been documented. The objective of this study was to evaluate associations between characteristics of gastroenterology fellowship programs and their competitiveness, measured through the proportion of US medical graduates for the academic year 2016/17. This study used a retrospective, cross-sectional design with data obtained from the American Medical Association. The proportion of US medical graduates in gastroenterology fellowships was used as an indicator of program competitiveness. Using both univariate and multivariate linear regression analyses, we analyzed the association between the proportion of US medical graduates in each program and 27 program characteristics based on a significance level of 0.05. In total, 153 out of 171 gastroenterology fellowship programs satisfied the inclusion criteria. A multivariate analysis revealed that a higher proportion of US medical graduates was significantly associated with five program characteristics: being a university-based program (p < 0.001), the ratio of full-time paid faculty to fellow positions (p < 0.001), the proportion of females in the program (p = 0.002), location in the Pacific region (p = 0.039), and a non-smoker hiring policy (p = 0.042). Among the five significant factors, being university based, located in the Pacific region, and having a non-smoker hiring policy are likely to remain unchanged over a long period. However, program directors and candidates should pay attention to the ratio of full-time paid faculty to fellowship positions and to the proportion of women in the program. The former indicates the level of supervision, while the latter has become increasingly important owing to the higher proportion of women in medicine.

  10. Computer program for calculating aerodynamic characteristics of upper-surface-blowing and over-wing-blowing configurations

    NASA Technical Reports Server (NTRS)

    Lan, C. E.; Fillman, G. L.; Fox, C. H., Jr.

    1977-01-01

    The program is based on the inviscid wing-jet interaction theory of Lan and Campbell, and the jet entrainment theory of Lan. In the interaction theory, the flow perturbations are computed both inside and outside the jet, separately, and then matched on the jet surface to satisfy the jet boundary conditions. The jet Mach number is allowed to be different from the free stream value (Mach number nonuniformity). These jet boundary conditions require that the static pressure be continuous across the jet surface, which must always remain a stream surface. These conditions, as well as the wing-surface tangency condition, are satisfied only in the linearized sense. The detailed formulation of these boundary conditions is based on the quasi-vortex-lattice method of Lan.

  11. Prediction on the inhibition ratio of pyrrolidine derivatives on matrix metalloproteinase based on gene expression programming.

    PubMed

    Li, Yuqin; You, Guirong; Jia, Baoxiu; Si, Hongzong; Yao, Xiaojun

    2014-01-01

    Quantitative structure-activity relationship (QSAR) models were developed to predict the inhibition ratio of pyrrolidine derivatives on matrix metalloproteinase via the heuristic method (HM) and gene expression programming (GEP). The descriptors of 33 pyrrolidine derivatives were calculated by the software CODESSA, which can calculate quantum chemical, topological, geometrical, constitutional, and electrostatic descriptors. HM was also used for the preselection of 5 appropriate molecular descriptors. Linear and nonlinear QSAR models were developed based on HM and GEP separately, and the two prediction models led to good correlation coefficients (R²) of 0.93 and 0.94. The two QSAR models are useful in predicting the inhibition ratio of pyrrolidine derivatives on matrix metalloproteinase during the discovery of new anticancer drugs and provide theoretical information for studying new drugs.

  12. The Programming Language Python In Earth System Simulations

    NASA Astrophysics Data System (ADS)

    Gross, L.; Imranullah, A.; Mora, P.; Saez, E.; Smillie, J.; Wang, C.

    2004-12-01

    Mathematical models in the earth sciences are based on the solution of systems of coupled, non-linear, time-dependent partial differential equations (PDEs). The spatial and time scales vary from a planetary scale and millions of years for convection problems to 100 km and 10 years for fault system simulations. Various techniques are in use to deal with the time dependency (e.g. Crank-Nicholson), with the non-linearity (e.g. Newton-Raphson) and with weakly coupled equations (e.g. non-linear Gauss-Seidel). Besides these high-level solution algorithms, discretization methods (e.g. the finite element method (FEM), the boundary element method (BEM)) are used to deal with spatial derivatives. Typically, large-scale, three-dimensional meshes are required to resolve geometrical complexity (e.g. in the case of fault systems) or features in the solution (e.g. in mantle convection simulations). The modelling environment escript allows the rapid implementation of new physics as required for the development of simulation codes in earth sciences. Its main objective is to provide a programming language in which the user can define new models and rapidly develop high-level solution algorithms. The current implementation is linked with the finite element package finley as a PDE solver. However, the design is open and other discretization technologies such as finite differences and boundary element methods could be included. escript is implemented as an extension of the interactive programming environment python (see www.python.org). Key concepts introduced are Data objects, which hold values on nodes or elements of the finite element mesh, and linearPDE objects, which define linear partial differential equations to be solved by the underlying discretization technology. In this paper we show the basic concepts of escript and how escript is used to implement a simulation code for interacting fault systems. We show some results of large-scale, parallel simulations on an SGI Altix system. Acknowledgements: Project work is supported by the Australian Commonwealth Government through the Australian Computational Earth Systems Simulator Major National Research Facility, the Queensland State Government Smart State Research Facility Fund, The University of Queensland and SGI.

  13. AQMAN; linear and quadratic programming matrix generator using two-dimensional ground-water flow simulation for aquifer management modeling

    USGS Publications Warehouse

    Lefkoff, L.J.; Gorelick, S.M.

    1987-01-01

    AQMAN is a FORTRAN-77 computer program that helps solve a variety of aquifer management problems involving the control of groundwater hydraulics. It is intended for use with any standard mathematical programming package that uses Mathematical Programming System input format. The computer program creates the input files to be used by the optimization program. These files contain all the hydrologic information and management objectives needed to solve the management problem. Used in conjunction with a mathematical programming code, the computer program identifies the pumping or recharge strategy that achieves a user's management objective while maintaining groundwater hydraulic conditions within desired limits. The objective may be linear or quadratic, and may involve the minimization of pumping and recharge rates or of variable pumping costs. The problem may contain constraints on groundwater heads, gradients, and velocities for a complex, transient hydrologic system. Linear superposition of solutions to the transient, two-dimensional groundwater flow equation is used by the computer program in conjunction with the response matrix optimization method. A unit stress is applied at each decision well and transient responses at all control locations are computed using a modified version of the U.S. Geological Survey two-dimensional aquifer simulation model. The program also computes discounted cost coefficients for the objective function and accounts for transient aquifer conditions. (Author's abstract)
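    A minimal sketch of the response-matrix idea described here, with made-up unit-response coefficients rather than output from a groundwater flow model: once drawdown at each control location is expressed as a linear superposition of unit responses to pumping at each decision well, a management objective such as maximizing total pumping becomes an ordinary LP (scipy.optimize.linprog is used below; AQMAN itself writes input files for an external mathematical programming package).

```python
# Minimal sketch of the response-matrix method (hypothetical numbers, not AQMAN itself):
# drawdown at each control location is a linear superposition of unit responses to
# pumping at each decision well, so hydraulic limits become linear constraints.
import numpy as np
from scipy.optimize import linprog

# response[i, j]: drawdown (m) at control point i per unit pumping (m^3/d) at well j
response = np.array([[0.0040, 0.0010, 0.0005],
                     [0.0010, 0.0030, 0.0010],
                     [0.0005, 0.0010, 0.0040]])
max_drawdown = np.array([2.0, 2.5, 1.5])              # allowable drawdown at control points
well_capacity = [(0, 800.0), (0, 600.0), (0, 500.0)]  # pumping limits per well (m^3/d)

# Maximize total pumping  <=>  minimize -(q1 + q2 + q3)
res = linprog(c=[-1.0, -1.0, -1.0],
              A_ub=response, b_ub=max_drawdown,
              bounds=well_capacity)
print("optimal pumping rates (m^3/d):", np.round(res.x, 1))
print("total pumping (m^3/d):", round(-res.fun, 1))
```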

  14. STICAP: A linear circuit analysis program with stiff systems capability. Volume 1: Theory manual. [network analysis

    NASA Technical Reports Server (NTRS)

    Cooke, C. H.

    1975-01-01

    STICAP (Stiff Circuit Analysis Program) is a FORTRAN 4 computer program written for the CDC-6400-6600 computer series and the SCOPE 3.0 operating system. It provides the circuit analyst with a tool for automatically computing the transient responses and frequency responses of large linear time-invariant networks, both stiff and nonstiff (algorithms and numerical integration techniques are described). The circuit description and user's program input language are engineer-oriented, simplifying the task of using the program. Engineering theories underlying STICAP are examined. A user's manual is included which explains user interaction with the program and gives results of typical circuit design applications. Also, the program structure from a systems programmer's viewpoint is depicted, and flow charts and other software documentation are given.

  15. Massively parallel sparse matrix function calculations with NTPoly

    NASA Astrophysics Data System (ADS)

    Dawson, William; Nakajima, Takahito

    2018-04-01

    We present NTPoly, a massively parallel library for computing the functions of sparse, symmetric matrices. The theory of matrix functions is a well-developed framework with a wide range of applications including differential equations, graph theory, and electronic structure calculations. One particularly important application area is diagonalization-free methods in quantum chemistry. When the input and output of the matrix function are sparse, methods based on polynomial expansions can be used to compute matrix functions in linear time. We present a library based on these methods that can compute a variety of matrix functions. Distributed memory parallelization is based on a communication-avoiding sparse matrix multiplication algorithm. OpenMP task parallelization is utilized to implement hybrid parallelization. We describe NTPoly's interface and show how it can be integrated with programs written in many different programming languages. We demonstrate the merits of NTPoly by performing large-scale calculations on the K computer.
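    The polynomial-expansion idea can be illustrated in a few lines of Python (a generic sketch with scipy.sparse, not NTPoly's interface): a truncated Taylor series for exp(A) built from sparse matrix products, with small entries thresholded after each step so that the intermediates, and hence the cost, stay sparse.

```python
# Minimal sketch (not NTPoly's API): approximate f(A) = exp(A) for a sparse symmetric
# matrix with a truncated Taylor series, using only sparse matrix products and
# thresholding small entries to keep intermediates sparse, as in linear-scaling methods.
import numpy as np
import scipy.sparse as sp

def sparse_expm_taylor(A, order=20, threshold=1e-10):
    n = A.shape[0]
    result = sp.identity(n, format="csr")
    term = sp.identity(n, format="csr")
    for k in range(1, order + 1):
        term = (term @ A) / k               # next Taylor term A^k / k!
        term.data[np.abs(term.data) < threshold] = 0.0
        term.eliminate_zeros()              # drop negligible entries -> stay sparse
        result = result + term
    return result

# Small symmetric banded test matrix
A = sp.diags([0.1, -0.3, 0.1], offsets=[-1, 0, 1], shape=(200, 200), format="csr")
F = sparse_expm_taylor(A)
print("nonzeros in exp(A) approximation:", F.nnz)
```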

  16. Gene selection for the reconstruction of stem cell differentiation trees: a linear programming approach.

    PubMed

    Ghadie, Mohamed A; Japkowicz, Nathalie; Perkins, Theodore J

    2015-08-15

    Stem cell differentiation is largely guided by master transcriptional regulators, but it also depends on the expression of other types of genes, such as cell cycle genes, signaling genes, metabolic genes, trafficking genes, etc. Traditional approaches to understanding gene expression patterns across multiple conditions, such as principal components analysis or K-means clustering, can group cell types based on gene expression, but they do so without knowledge of the differentiation hierarchy. Hierarchical clustering can organize cell types into a tree, but in general this tree is different from the differentiation hierarchy itself. Given the differentiation hierarchy and gene expression data at each node, we construct a weighted Euclidean distance metric such that the minimum spanning tree with respect to that metric is precisely the given differentiation hierarchy. We provide a set of linear constraints that are provably sufficient for the desired construction and a linear programming approach to identify sparse sets of weights, effectively identifying genes that are most relevant for discriminating different parts of the tree. We apply our method to microarray gene expression data describing 38 cell types in the hematopoiesis hierarchy, constructing a weighted Euclidean metric that uses just 175 genes. However, we find that there are many alternative sets of weights that satisfy the linear constraints. Thus, in the style of random-forest training, we also construct metrics based on random subsets of the genes and compare them to the metric of 175 genes. We then report on the selected genes and their biological functions. Our approach offers a new way to identify genes that may have important roles in stem cell differentiation. tperkins@ohri.ca Supplementary data are available at Bioinformatics online.
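    A small sketch of the flavor of this construction, using synthetic tree-structured expression data and a generic path-edge sufficient condition (not necessarily the exact constraint set of the paper): nonnegative gene weights are found by LP so that, under the weighted squared-Euclidean metric, every edge of the given tree is strictly shorter than any non-tree pair whose tree path contains it, which forces the tree to be the minimum spanning tree; minimizing the L1 norm of the weights drives many of them to zero.

```python
# Minimal sketch (hypothetical data, and a generic path-edge sufficient condition rather
# than the paper's exact constraint set): find sparse nonnegative gene weights w such
# that d_w(a,b) + margin <= d_w(i,j) for every non-tree pair (i,j) and every tree edge
# (a,b) on the i-j path, where d_w(x,y) = sum_g w_g (x_g - y_g)^2 is linear in w.
import itertools
import numpy as np
import networkx as nx
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n_cells, n_genes = 6, 30
T = nx.Graph([(0, 1), (1, 2), (1, 3), (3, 4), (3, 5)])   # given differentiation tree

# Synthetic expression profiles generated by a random walk along the tree
X = np.zeros((n_cells, n_genes))
X[0] = rng.normal(size=n_genes)
for parent, child in nx.bfs_edges(T, source=0):
    X[child] = X[parent] + rng.normal(scale=1.0, size=n_genes)

def sq_diff(i, j):
    """Per-gene squared differences; d_w(i, j) = sq_diff(i, j) @ w is linear in w."""
    return (X[i] - X[j]) ** 2

margin = 1.0
A_ub, b_ub = [], []
for i, j in itertools.combinations(range(n_cells), 2):
    if T.has_edge(i, j):
        continue
    path = nx.shortest_path(T, i, j)
    for a, b in zip(path[:-1], path[1:]):          # every tree edge on the i-j path
        A_ub.append(sq_diff(a, b) - sq_diff(i, j))  # d_w(a,b) - d_w(i,j) <= -margin
        b_ub.append(-margin)

res = linprog(c=np.ones(n_genes), A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              bounds=[(0, None)] * n_genes)
if res.success:
    print("genes with nonzero weight:", np.flatnonzero(res.x > 1e-8))
else:
    print("LP infeasible for this random instance:", res.message)
```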

  17. Optimization of space manufacturing systems

    NASA Technical Reports Server (NTRS)

    Akin, D. L.

    1979-01-01

    Four separate analyses are detailed: transportation to low earth orbit, orbit-to-orbit optimization, parametric analysis of SPS logistics based on earth and lunar source locations, and an overall program option optimization implemented with linear programming. It is found that smaller vehicles are favored for earth launch, with the current Space Shuttle being right at optimum payload size. Fully reusable launch vehicles represent a savings of 50% over the Space Shuttle; increased reliability with less maintenance could further double the savings. An optimization of orbit-to-orbit propulsion systems using lunar oxygen for propellants shows that ion propulsion is preferable by a 3:1 cost margin over a mass driver reaction engine (MDRE) at optimum values; however, ion engines cannot yet operate in the lower exhaust velocity range where the optimum lies, and total program costs between the two systems are ambiguous. Heavier payloads favor the use of an MDRE. A parametric model of a space manufacturing facility is proposed and used to analyze recurring costs, total costs, and net present value discounted cash flows. Parameters studied include productivity, effects of discounting, materials source tradeoffs, economic viability of closed-cycle habitats, and effects of varying degrees of nonterrestrial SPS materials needed from earth. Finally, candidate optimal scenarios are chosen and implemented in a linear program with external constraints in order to arrive at an optimum blend of SPS production strategies that maximizes returns.

  18. Comparison of Quasi-Conservative Pressure-Based and Fully-Conservative Formulations for the Simulation of Transcritical Flows

    NASA Astrophysics Data System (ADS)

    Lacaze, Guilhem; Oefelein, Joseph

    2016-11-01

    High-pressure flows are known to be challenging to simulate due to thermodynamic non-linearities occurring in the vicinity of the pseudo-boiling line. This study investigates the origin of this issue by analyzing the behavior of thermodynamic processes at elevated pressure and low temperature. We show that under transcritical conditions, non-linearities significantly amplify numerical errors associated with construction of fluxes. These errors affect the local density and energy balances, which in turn creates pressure oscillations. For that reason, solvers based on a conservative system of equations that transport density and total energy are subject to unphysical pressure variations in gradient regions. These perturbations hinder numerical stability and degrade the accuracy of predictions. To circumvent this problem, the governing system can be reformulated to a pressure-based treatment of energy. We present comparisons between the pressure-based and fully conservative formulations using a progressive set of canonical cases, including a cryogenic turbulent mixing layer at rocket engine conditions. Department of Energy, Office of Science, Basic Energy Sciences Program.

  19. Use of Linear Programming to Develop Cost-Minimized Nutritionally Adequate Health Promoting Food Baskets.

    PubMed

    Parlesak, Alexandr; Tetens, Inge; Dejgård Jensen, Jørgen; Smed, Sinne; Gabrijelčič Blenkuš, Mojca; Rayner, Mike; Darmon, Nicole; Robertson, Aileen

    2016-01-01

    Food-Based Dietary Guidelines (FBDGs) are developed to promote healthier eating patterns, but increasing food prices may make healthy eating less affordable. The aim of this study was to design a range of cost-minimized nutritionally adequate health-promoting food baskets (FBs) that help prevent both micronutrient inadequacy and diet-related non-communicable diseases at the lowest cost. Average prices for 312 foods were collected within the Greater Copenhagen area. The cost and nutrient content of five different cost-minimized FBs for a family of four were calculated per day using linear programming. The FBs were defined using five different constraints: cultural acceptability (CA), or dietary guidelines (DG), or nutrient recommendations (N), or cultural acceptability and nutrient recommendations (CAN), or dietary guidelines and nutrient recommendations (DGN). The variety and number of foods in each of the resulting five baskets were increased by limiting the relative share of individual foods. The one-day version of N contained only 12 foods at the minimum cost of DKK 27 (€ 3.6). The CA, DG, and DGN cost about twice this, and the CAN cost ~DKK 81 (€ 10.8). The baskets with the greater variety of foods contained from 70 (CAN) to 134 (DGN) foods and cost between DKK 60 (€ 8.1, N) and DKK 125 (€ 16.8, DGN). Ensuring that the food baskets cover both dietary guidelines and nutrient recommendations doubled the cost, while cultural acceptability (CAN) tripled it. Use of linear programming facilitates the generation of low-cost food baskets that are nutritionally adequate, health promoting, and culturally acceptable.
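    The underlying least-cost basket model is the classical diet LP. A minimal sketch with hypothetical prices, composition data and requirements (not the study's 312-food Copenhagen dataset) is shown below, including a simple share cap of the kind used to force more variety into the basket.

```python
# Minimal sketch of a cost-minimized, nutritionally constrained food basket with
# hypothetical prices and composition data, including a share cap for variety.
import numpy as np
from scipy.optimize import linprog

foods = ["oats", "beans", "carrots", "milk", "eggs"]
price = np.array([0.004, 0.006, 0.003, 0.002, 0.008])      # DKK per gram (hypothetical)
# nutrient content per gram: rows = energy (kcal), protein (g), vitamin A (RE)
comp = np.array([[3.8, 3.3, 0.4, 0.6, 1.5],
                 [0.13, 0.21, 0.01, 0.033, 0.13],
                 [0.0, 0.0, 8.0, 0.3, 1.4]])
daily_req = np.array([8000.0, 200.0, 2800.0])              # family-of-four requirements

n = len(foods)
share_cap = 0.40    # no single food may exceed 40% of total basket weight

# Constraints: comp @ x >= daily_req  and  x_i <= share_cap * sum(x)
A_ub = np.vstack([-comp,
                  np.eye(n) - share_cap * np.ones((n, n))])
b_ub = np.concatenate([-daily_req, np.zeros(n)])

res = linprog(price, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * n)
print(f"minimum cost: DKK {res.fun:.2f}/day")
for name, grams in zip(foods, res.x):
    print(f"  {name}: {grams:.0f} g")
```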

  20. Food choices to meet nutrient recommendations for the adult Brazilian population based on the linear programming approach.

    PubMed

    Dos Santos, Quenia; Sichieri, Rosely; Darmon, Nicole; Maillot, Matthieu; Verly-Junior, Eliseu

    2018-06-01

    To identify optimal food choices that meet nutritional recommendations to reduce the prevalence of inadequate nutrient intakes. Linear programming was used to obtain an optimized diet of sixty-eight foods with the least difference from the observed population mean dietary intake while meeting a set of nutritional goals that included reduction in the prevalence of inadequate nutrient intakes to ≤20 %. Brazil. Participants (men and women, n 25 324) aged 20 years or more from the first National Dietary Survey (NDS) 2008-2009. A feasible solution to the model was not found when all constraints were imposed; the infeasible nutrients were Ca, vitamins D and E, Mg, Zn, fibre, linolenic acid, monounsaturated fat and Na. A feasible solution was obtained after relaxing the nutritional constraints for these limiting nutrients by including a deviation variable in the model. The estimated prevalence of nutrient inadequacy was reduced by 60-70 % for most nutrients, and mean saturated and trans-fat intakes decreased in the optimized diet meeting the model constraints. The optimized diet was characterized by increases especially in fruits (+92 g), beans (+64 g), vegetables (+43 g), milk (+12 g), fish and seafood (+15 g) and whole cereals (+14 g), and reductions of sugar-sweetened beverages (-90 g), rice (-63 g), snacks (-14 g), red meat (-13 g) and processed meat (-9·7 g). Linear programming is a unique tool to identify which changes in the current diet can increase nutrient intake and place the population at lower risk of nutrient inadequacy. Reaching nutritional adequacy for all nutrients would require major dietary changes in the Brazilian diet.

  1. New method for calculating a mathematical expression for streamflow recession

    USGS Publications Warehouse

    Rutledge, Albert T.

    1991-01-01

    An empirical method has been devised to calculate the master recession curve, which is a mathematical expression for streamflow recession during times of negligible direct runoff. The method is based on the assumption that the storage-delay factor, which is the time per log cycle of streamflow recession, varies linearly with the logarithm of streamflow. The resulting master recession curve can be nonlinear. The method can be executed by a computer program that reads a data file of daily mean streamflow, then allows the user to select several near-linear segments of streamflow recession. The storage-delay factor for each segment is one of the coefficients of the equation that results from linear least-squares regression. Using results for each recession segment, a mathematical expression of the storage-delay factor as a function of the log of streamflow is determined by linear least-squares regression. The master recession curve, which is a second-order polynomial expression for time as a function of log of streamflow, is then derived using the coefficients of this function.
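    A short Python sketch of the two-stage least-squares procedure just described, using synthetic recession segments rather than real daily-streamflow records: each segment yields a storage-delay factor K (days per log cycle), K is regressed linearly on log10(Q), and integrating dt/d(logQ) = -K(logQ) gives the master recession curve as a second-order polynomial in log10(Q).

```python
# Minimal sketch (synthetic data, not the USGS program itself) of the two-stage
# least-squares procedure: per-segment storage-delay factors, then K as a linear
# function of log10(Q), then the master recession curve by integration.
import numpy as np

# Hypothetical recession segments: (days, daily mean streamflow in cfs)
segments = [
    (np.arange(0, 12), 80.0 * 10 ** (-np.arange(0, 12) / 45.0)),
    (np.arange(0, 15), 30.0 * 10 ** (-np.arange(0, 15) / 60.0)),
    (np.arange(0, 10), 10.0 * 10 ** (-np.arange(0, 10) / 75.0)),
]

K_values, mean_logq = [], []
for t, q in segments:
    slope, _ = np.polyfit(t, np.log10(q), 1)   # log-linear fit of one segment
    K_values.append(-1.0 / slope)              # storage-delay factor: days per log cycle
    mean_logq.append(np.log10(q).mean())

# Storage-delay factor as a linear function of log10(Q): K = a + b*logQ
b, a = np.polyfit(mean_logq, K_values, 1)

# Master recession curve: dt/d(logQ) = -K(logQ), so t(logQ) is a 2nd-order polynomial
logq = np.linspace(max(mean_logq), min(mean_logq), 50)
t_master = -(a * (logq - logq[0]) + 0.5 * b * (logq**2 - logq[0]**2))
print("K per segment (days/log cycle):", np.round(K_values, 1))
print("master curve total duration (days):", round(t_master[-1], 1))
```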

  2. Linear and exponential TAIL-PCR: a method for efficient and quick amplification of flanking sequences adjacent to Tn5 transposon insertion sites.

    PubMed

    Jia, Xianbo; Lin, Xinjian; Chen, Jichen

    2017-11-02

    Current genome walking methods are very time consuming, and many produce non-specific amplification products. To amplify the flanking sequences that are adjacent to Tn5 transposon insertion sites in Serratia marcescens FZSF02, we developed a genome walking method based on TAIL-PCR. This PCR method added a 20-cycle linear amplification step before the exponential amplification step to increase the concentration of the target sequences. Products of the linear amplification and the exponential amplification were diluted 100-fold to decrease the concentration of the templates that cause non-specific amplification. Fast DNA polymerase with a high extension speed was used in this method, and an amplification program was used to rapidly amplify long specific sequences. With this linear and exponential TAIL-PCR (LETAIL-PCR), we successfully obtained products larger than 2 kb from Tn5 transposon insertion mutant strains within 3 h. This method can be widely used in genome walking studies to amplify unknown sequences that are adjacent to known sequences.

  3. Comparison of l₁-Norm SVR and Sparse Coding Algorithms for Linear Regression.

    PubMed

    Zhang, Qingtian; Hu, Xiaolin; Zhang, Bo

    2015-08-01

    Support vector regression (SVR) is a popular function estimation technique based on Vapnik's concept of support vector machine. Among many variants, the l1-norm SVR is known to be good at selecting useful features when the features are redundant. Sparse coding (SC) is a technique widely used in many areas and a number of efficient algorithms are available. Both l1-norm SVR and SC can be used for linear regression. In this brief, the close connection between the l1-norm SVR and SC is revealed and some typical algorithms are compared for linear regression. The results show that the SC algorithms outperform the Newton linear programming algorithm, an efficient l1-norm SVR algorithm, in efficiency. The algorithms are then used to design the radial basis function (RBF) neural networks. Experiments on some benchmark data sets demonstrate the high efficiency of the SC algorithms. In particular, one of the SC algorithms, the orthogonal matching pursuit is two orders of magnitude faster than a well-known RBF network designing algorithm, the orthogonal least squares algorithm.
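    The kind of comparison described can be reproduced in a few lines with scikit-learn; in the sketch below, orthogonal matching pursuit (one of the sparse-coding algorithms discussed) is compared with an l1-penalized linear regressor (Lasso is used here as a convenient stand-in for the l1-norm SVR) on synthetic data with redundant features.

```python
# Minimal sketch: sparse linear regression on redundant features via orthogonal matching
# pursuit versus an l1-penalized regressor (Lasso as a stand-in for l1-norm SVR).
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit, Lasso

rng = np.random.default_rng(0)
n_samples, n_features, n_informative = 200, 100, 5

X = rng.normal(size=(n_samples, n_features))
true_coef = np.zeros(n_features)
true_coef[rng.choice(n_features, n_informative, replace=False)] = 3.0 * rng.normal(size=n_informative)
y = X @ true_coef + 0.1 * rng.normal(size=n_samples)

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_informative).fit(X, y)
lasso = Lasso(alpha=0.05).fit(X, y)

for name, model in [("OMP", omp), ("Lasso (l1)", lasso)]:
    nnz = np.sum(np.abs(model.coef_) > 1e-6)
    print(f"{name}: {nnz} nonzero coefficients, R^2 = {model.score(X, y):.3f}")
```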

  4. Isocenter verification for linac‐based stereotactic radiation therapy: review of principles and techniques

    PubMed Central

    Sabet, Mahsheed; O'Connor, Daryl J.; Greer, Peter B.

    2011-01-01

    There have been several manual, semi‐automatic and fully‐automatic methods proposed for verification of the position of mechanical isocenter as part of comprehensive quality assurance programs required for linear accelerator‐based stereotactic radiosurgery/radiotherapy (SRS/SRT) treatments. In this paper, a systematic review has been carried out to discuss the present methods for isocenter verification and compare their characteristics, to help physicists in making a decision on selection of their quality assurance routine. PACS numbers: 87.53.Ly, 87.56.Fc, 87.56.‐v PMID:22089022

  5. User document for computer programs for ring-stiffened shells of revolution

    NASA Technical Reports Server (NTRS)

    Cohen, G. A.

    1973-01-01

    A user manual and related program documentation are presented for six compatible computer programs for the structural analysis of axisymmetric shell structures. The programs apply to a common structural model but analyze different modes of structural response. In particular, they are: (1) Linear static response under asymmetric loads; (2) Buckling of linear states under asymmetric loads; (3) Nonlinear static response under axisymmetric loads; (4) Buckling of nonlinear states under axisymmetric loads; (5) Imperfection sensitivity of buckling modes under axisymmetric loads; and (6) Vibrations about nonlinear states under axisymmetric loads. These programs treat branched shells of revolution with an arbitrary arrangement of a large number of open branches but with at most one closed branch.

  6. Alternative mathematical programming formulations for FSS synthesis

    NASA Technical Reports Server (NTRS)

    Reilly, C. H.; Mount-Campbell, C. A.; Gonsalvez, D. J. A.; Levis, C. A.

    1986-01-01

    A variety of mathematical programming models and two solution strategies are suggested for the problem of allocating orbital positions to (synthesizing) satellites in the Fixed Satellite Service. Mixed integer programming and almost linear programming formulations are presented in detail for each of two objectives: (1) positioning satellites as closely as possible to specified desired locations, and (2) minimizing the total length of the geostationary arc allocated to the satellites whose positions are to be determined. Computational results for mixed integer and almost linear programming models, with the objective of positioning satellites as closely as possible to their desired locations, are reported for three six-administration test problems and a thirteen-administration test problem.

  7. Calibration Methods for a 3D Triangulation Based Camera

    NASA Astrophysics Data System (ADS)

    Schulz, Ulrike; Böhnke, Kay

    A camera sensor captures a gray-level image (1536 x 512 pixels) of the light reflected by a reference body, which is illuminated by a linear laser line. This gray-level image can be used for a 3D calibration. The following paper describes how a calibration program calculates the calibration factors, which serve to determine the size of an unknown reference body.

  8. Dynamic Distributed Cooperative Control of Multiple Heterogeneous Resources

    DTIC Science & Technology

    2012-10-01

    of the UAVs to maximize the total sensor footprint over the region of interest. The algorithm utilized to solve this problem was based on sampling a... and moving obstacles. Obstacle positions were assumed known a priori. Kingston and Beard [22] presented an algorithm to keep moving UAVs equally spaced... Planning Algorithms, Cambridge University Press, 2006. 11. Ma, C. S. and Miller, R. H., "Mixed integer linear programming trajectory generation for

  9. The Event Based Language and Its Multiple Processor Implementations.

    DTIC Science & Technology

    1980-01-01

    10 6.1 "Recursive" Linear Fibonacci ................................................ 105 6.2 The Readers Writers Problem...kinds. Examples of such systems are: C.mmp [Wu-72], Pluribus [He-73], Data Flow [ De -75], the boolean n-cube parallel machine [Su-77], and the MuNet [Wa...concurrency within programs; therefore, we hate concentrated on two types of systems which seem suitable: a processor network, and a data flow processor [ De -77

  10. Rotman Lens Sidewall Design and Optimization with Hybrid Hardware/Software Based Programming

    DTIC Science & Technology

    2015-01-09

    conventional MoM and stored in memory. The components of Zfar are computed as needed through a fast matrix vector multiplication (MVM), which... V vector. Iterative methods, e.g. BiCGSTAB, are employed for solving the linear equation. The matrix-vector multiplications (MVMs), which dominate... most of the computation in the solving phase, consists of calculating near and far MVMs. The far MVM comprises aggregation, translation, and

  11. A Maximin Model for Test Design with Practical Constraints. Project Psychometric Aspects of Item Banking No. 25. Research Report 87-10.

    ERIC Educational Resources Information Center

    van der Linden, Wim J.; Boekkooi-Timminga, Ellen

    A "maximin" model for item response theory based test design is proposed. In this model only the relative shape of the target test information function is specified. It serves as a constraint subject to which a linear programming algorithm maximizes the information in the test. In the practice of test construction there may be several…

  12. A Study on Linear Programming Applications for the Optimization of School Lunch Menus. Summation Report.

    ERIC Educational Resources Information Center

    Findorff, Irene K.

    This document summarizes the results of a project at Tulane University that was designed to adapt, test, and evaluate a computerized information and menu planning system utilizing linear programing techniques for use in school lunch food service operations. The objectives of the menu planning were to formulate menu items into a palatable,…

  13. Spline smoothing of histograms by linear programming

    NASA Technical Reports Server (NTRS)

    Bennett, J. O.

    1972-01-01

    An algorithm is presented for obtaining an approximating function to the frequency distribution from a sample of size n. To obtain the approximating function, a histogram is made from the data. Next, Euclidean space approximations to the graph of the histogram, using central B-splines as basis elements, are obtained by linear programming. The approximating function has area one and is nonnegative.

  14. FPL-PELPS : a price endogenous linear programming system for economic modeling, supplement to PELPS III, version 1.1.

    Treesearch

    Patricia K. Lebow; Henry Spelter; Peter J. Ince

    2003-01-01

    This report provides documentation and user information for FPL-PELPS, a personal computer price endogenous linear programming system for economic modeling. Originally developed to model the North American pulp and paper industry, FPL-PELPS follows its predecessors in allowing the modeling of any appropriate sector to predict consumption, production and capacity by...

  15. Web-based support as an adjunct to group-based smoking cessation for adolescents

    PubMed Central

    Mermelstein, Robin; Turner, Lindsey

    2008-01-01

    Although group-based programs remain the most common treatment approach for adolescent smoking cessation, success rates for these programs have been relatively modest, and their reach may be limited. Web-based adjuncts may be one way to boost the efficacy and reach of group-based approaches. The purpose of this study was to evaluate the effectiveness of enhancing the American Lung Association’s Not on Tobacco program (NOT) with a Web-based adjunct (NOT Plus). Twenty-nine high schools were randomly assigned to either the NOT program alone or to the NOT Plus condition, which included access to a specially designed Web site for teens, along with proactive phone calls from the group facilitator to the participant. Self-reported smoking behavior was obtained at end-of-program and at a 3-month follow-up. Using hierarchical linear modeling, accounting for the clustering of students in schools, and controlling for student gender, grade, race, and baseline smoking rate, there was a marginally significant (p = .06) condition effect at end-of-treatment and a significant effect at 3-month follow-up (p < .05) favoring the NOT Plus condition. Approximately 57% of adolescents reported visiting the Web site, and among the NOT Plus condition, use of the Web site was associated with cessation significantly at end-of-program (p < .05), but not at 3 months. Adolescents in urban schools were more likely to access the Web site than those in rural schools. Participants who visited the Web site rated it positively on several dimensions. Reasons for not using the Web site will be discussed, as well as its value as an adjunct. PMID:17491173

  16. Applying linear programming model to aggregate production planning of coated peanut products

    NASA Astrophysics Data System (ADS)

    Rohmah, W. G.; Purwaningsih, I.; Santoso, EF S. M.

    2018-03-01

    The aim of this study was to set the overall production level for each grade of coated peanut product to meet market demands at a minimum production cost. A linear programming model was applied in this study. The proposed model was used to minimize the total production cost subject to the demand constraints for coated peanuts. The demand values applied to the model had previously been forecasted using a time series method, and the aggregate production was planned for the next 6-month period within the available production capacity. The results indicated that production planning using the proposed model resulted in a pattern better fitted to customer demands than that of the company policy. The production of product families A, B, and C was relatively stable for the first 3 months of the planning period, then began to fluctuate over the next 3 months, while the production of product families D and E fluctuated over the 6-month planning period, with values in the ranges of 10,864-32,580 kg and 255-5,069 kg, respectively. The total production cost for all products was 27.06% lower than the production cost calculated using the company's policy-based method.
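    A stripped-down version of such an aggregate planning LP, with hypothetical demand, cost and capacity figures for a single product family (not the study's data): production and end-of-month inventory are chosen to minimize production plus holding cost while an inventory-balance constraint meets the forecast demand each month.

```python
# Minimal sketch of a single-product aggregate production planning LP over 6 months.
import numpy as np
from scipy.optimize import linprog

months = 6
demand = np.array([12000, 15000, 11000, 18000, 22000, 16000])   # kg per month (forecast)
prod_cost = 2.5          # cost per kg produced
hold_cost = 0.3          # cost per kg held in inventory at month end
capacity = 20000.0       # production capacity per month (kg)

# Decision vector z = [P_1..P_6, I_1..I_6] (production, end-of-month inventory)
c = np.concatenate([np.full(months, prod_cost), np.full(months, hold_cost)])

# Inventory balance: P_t - I_t + I_{t-1} = d_t  (with I_0 = 0)
A_eq = np.zeros((months, 2 * months))
for t in range(months):
    A_eq[t, t] = 1.0                   # P_t
    A_eq[t, months + t] = -1.0         # -I_t
    if t > 0:
        A_eq[t, months + t - 1] = 1.0  # +I_{t-1}
b_eq = demand

bounds = [(0, capacity)] * months + [(0, None)] * months
res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print("production plan (kg):", np.round(res.x[:months]))
print("total cost:", round(res.fun, 2))
```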

  17. Exploring an Ecologically Sustainable Scheme for Landscape Restoration of Abandoned Mine Land: Scenario-Based Simulation Integrated Linear Programming and CLUE-S Model

    PubMed Central

    Zhang, Liping; Zhang, Shiwen; Huang, Yajie; Cao, Meng; Huang, Yuanfang; Zhang, Hongyan

    2016-01-01

    Understanding abandoned mine land (AML) changes during land reclamation is crucial for reusing damaged land resources and formulating sound ecological restoration policies. This study combines the linear programming (LP) model and the CLUE-S model to simulate land-use dynamics in the Mentougou District (Beijing, China) from 2007 to 2020 under three reclamation scenarios, that is, the planning scenario based on the general land-use plan in the study area (scenario 1), maximal comprehensive benefits (scenario 2), and maximal ecosystem service value (scenario 3). Nine landscape-scale graph metrics were then selected to describe the landscape characteristics. The results show that the coupled model presented can simulate the dynamics of AML effectively and that the spatially explicit transformations of AML were different. New cultivated land dominates in scenario 1, while construction land and forest land account for major percentages in scenarios 2 and 3, respectively. Scenario 3 has an advantage in most of the selected indices as the patches combined most closely. To conclude, reclaiming AML by transformation into more forest can reduce the variability and maintain the stability of the landscape ecological system in the study area. These findings contribute to better mapping of AML dynamics and providing policy support for the management of AML. PMID:27023575

  18. Exploring an Ecologically Sustainable Scheme for Landscape Restoration of Abandoned Mine Land: Scenario-Based Simulation Integrated Linear Programming and CLUE-S Model.

    PubMed

    Zhang, Liping; Zhang, Shiwen; Huang, Yajie; Cao, Meng; Huang, Yuanfang; Zhang, Hongyan

    2016-03-24

    Understanding abandoned mine land (AML) changes during land reclamation is crucial for reusing damaged land resources and formulating sound ecological restoration policies. This study combines the linear programming (LP) model and the CLUE-S model to simulate land-use dynamics in the Mentougou District (Beijing, China) from 2007 to 2020 under three reclamation scenarios, that is, the planning scenario based on the general land-use plan in the study area (scenario 1), maximal comprehensive benefits (scenario 2), and maximal ecosystem service value (scenario 3). Nine landscape-scale graph metrics were then selected to describe the landscape characteristics. The results show that the coupled model presented can simulate the dynamics of AML effectively and that the spatially explicit transformations of AML were different. New cultivated land dominates in scenario 1, while construction land and forest land account for major percentages in scenarios 2 and 3, respectively. Scenario 3 has an advantage in most of the selected indices as the patches combined most closely. To conclude, reclaiming AML by transformation into more forest can reduce the variability and maintain the stability of the landscape ecological system in the study area. These findings contribute to better mapping of AML dynamics and providing policy support for the management of AML.

  19. A depth-first search algorithm to compute elementary flux modes by linear programming.

    PubMed

    Quek, Lake-Ee; Nielsen, Lars K

    2014-07-30

    The decomposition of complex metabolic networks into elementary flux modes (EFMs) provides a useful framework for exploring reaction interactions systematically. Generating a complete set of EFMs for large-scale models, however, is near impossible. Even for moderately-sized models (<400 reactions), existing approaches based on the Double Description method must iterate through a large number of combinatorial candidates, thus imposing an immense processor and memory demand. Based on an alternative elementarity test, we developed a depth-first search algorithm using linear programming (LP) to enumerate EFMs in an exhaustive fashion. Constraints can be introduced to directly generate a subset of EFMs satisfying the set of constraints. The depth-first search algorithm has a constant memory overhead. Using flux constraints, a large LP problem can be massively divided and parallelized into independent sub-jobs for deployment into computing clusters. Since the sub-jobs do not overlap, the approach scales to utilize all available computing nodes with minimal coordination overhead or memory limitations. The speed of the algorithm was comparable to efmtool, a mainstream Double Description method, when enumerating all EFMs; the attrition power gained from performing flux feasibility tests offsets the increased computational demand of running an LP solver. Unlike the Double Description method, the algorithm enables accelerated enumeration of all EFMs satisfying a set of constraints.
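    The LP feasibility test at the heart of this kind of enumeration can be sketched on a toy network (an illustration of the general idea, not the authors' code): given a candidate set of active reactions, an LP checks whether a nonnegative steady-state flux vector exists whose support lies within that set, with the total active flux normalized to exclude the zero vector.

```python
# Minimal sketch of the LP feasibility test used in constraint-based EFM enumeration
# (toy network with all reactions irreversible, not the authors' algorithm).
import numpy as np
from scipy.optimize import linprog

# Toy stoichiometric matrix (metabolites x reactions)
S = np.array([
    #  R1  R2  R3  R4  R5
    [  1, -1,  0, -1,  0],   # A:  R1 produces A; R2, R4 consume it
    [  0,  1, -1,  0,  0],   # B:  R2 -> B -> R3
    [  0,  0,  1,  1, -1],   # C:  R3, R4 produce C; R5 exports it
])
n_rxn = S.shape[1]

def support_is_feasible(active):
    """Does a flux v >= 0 exist with S v = 0, v_i = 0 outside `active`,
    and total active flux normalized to 1 (to exclude the trivial zero vector)?"""
    bounds = [(0, None) if i in active else (0, 0) for i in range(n_rxn)]
    A_eq = np.vstack([S, [1.0 if i in active else 0.0 for i in range(n_rxn)]])
    b_eq = np.concatenate([np.zeros(S.shape[0]), [1.0]])
    res = linprog(np.zeros(n_rxn), A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.success

print(support_is_feasible({0, 1, 2, 4}))   # R1->R2->R3->R5 balances: True
print(support_is_feasible({0, 1, 4}))      # misses R3, metabolite B imbalanced: False
```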

  20. Initial study of thermal energy storage in unconfined aquifers. [UCATES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Haitjema, H.M.; Strack, O.D.L.

    1986-04-01

    Convective heat transport in unconfined aquifers is modeled in a semi-analytic way. The transient groundwater flow is modeled by superposition of analytic functions, whereby changes in the aquifer storage are represented by a network of triangles, each with a linearly varying sink distribution. This analytic formulation incorporates the nonlinearity of the differential equation for unconfined flow and eliminates numerical dispersion in modeling heat convection. The thermal losses through the aquifer base and vadose zone are modeled rather crudely. Only vertical heat conduction is considered in these boundaries, whereby a linearly varying temperature is assumed at all times. The latter assumption appears reasonable for thin aquifer boundaries. However, assuming such thin aquifer boundaries may lead to an overestimation of the thermal losses when the aquifer base is regarded as infinitely thick in reality. The approach is implemented in the computer program UCATES, which serves as a first step toward the development of a comprehensive screening tool for ATES systems in unconfined aquifers. In its present form, the program is capable of predicting the relative effects of regional flow on the efficiency of ATES systems. However, only after a more realistic heatloss mechanism is incorporated in UCATES will reliable predictions of absolute ATES efficiencies be possible.

  1. ZIP2DL: An Elastic-Plastic, Large-Rotation Finite-Element Stress Analysis and Crack-Growth Simulation Program

    NASA Technical Reports Server (NTRS)

    Deng, Xiaomin; Newman, James C., Jr.

    1997-01-01

    ZIP2DL is a two-dimensional, elastic-plastic finite element program for stress analysis and crack growth simulations, developed for the NASA Langley Research Center. It has many of the salient features of the ZIP2D program. For example, ZIP2DL contains five material models (linearly elastic, elastic-perfectly plastic, power-law hardening, linear hardening, and multi-linear hardening models), and it can simulate mixed-mode crack growth for prescribed crack growth paths under plane stress, plane strain and mixed states of stress. Further, as an extension of ZIP2D, it also includes a number of new capabilities. The large-deformation kinematics in ZIP2DL allow it to handle elastic problems with large strains and large rotations, and elastic-plastic problems with small strains and large rotations. Loading conditions in terms of surface traction, concentrated load, and nodal displacement can be applied with a default linear time dependence, or they can be programmed according to a user-defined time dependence through a user subroutine. The restart capability of ZIP2DL makes it possible to stop the execution of the program at any time, analyze the results and/or modify execution options, and resume and continue the execution of the program. This report includes three sections: a theoretical manual section, a user manual section, and an example manual section. In the theoretical section, the mathematics behind the various aspects of the program are concisely outlined. In the user manual section, a line-by-line explanation of the input data is given. In the example manual section, three types of examples are presented to demonstrate the accuracy and illustrate the use of this program.

  2. JAERI R & D on accelerator-based transmutation under OMEGA program

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Takizuka, T.; Nishida, T.; Mizumoto, M.

    1995-10-01

    The overview of the Japanese long-term research and development program on nuclide partitioning and transmutation, called "OMEGA," is presented. Under this national program, major R&D activities are being carried out at JAERI, PNC, and CRIEPI. The accelerator-based transmutation study at JAERI is focused on a dedicated transmutor with an actinide-fueled subcritical core coupled with a spallation target driven by a high-intensity proton accelerator. Two types of system concept, a solid system and a molten-salt system, are discussed. The solid system consists of a sodium-cooled tungsten target and metallic actinide fuel. The molten-salt system is fueled with molten actinide chloride that also acts as the target material. The proposed plant transmutes about 250 kg of minor actinides per year and generates enough electricity to power its own accelerator. JAERI is proposing the development of an intense proton linear accelerator, ETA, with a 1.5 GeV-10 mA beam for engineering tests of accelerator-based transmutation. Recent achievements in the accelerator development are described.

  3. Effect of a rehabilitation-based chronic disease management program targeting severe COPD exacerbations on readmission patterns.

    PubMed

    Lalmolda, C; Coll-Fernández, R; Martínez, N; Baré, M; Teixidó Colet, M; Epelde, F; Monsó, E

    2017-01-01

    Pulmonary rehabilitation (PR) is recommended after a severe COPD exacerbation, but its short- and long-term effects on health care utilization have not been fully established. The aims of this study were to evaluate patient compliance with a chronic disease management (CDM) program incorporating home-based exercise training as the main component after a severe COPD exacerbation and to determine its effects on health care utilization in the following year. COPD patients with a severe exacerbation were included in a case-cohort study at admission. An intervention group participated in a nurse-supervised CDM program during the 2 months after discharge, comprising home-based PR with exercise components directly supervised by a physiotherapist, while the remaining patients followed usual care. Nineteen of the twenty-one participants (90.5%) were compliant with the CDM program and were compared with 29 usual-care patients. Compliance with the program was associated with statistically significant reductions in admissions due to respiratory disease in the following year (median [interquartile range]: 0 [0-1] vs 1 [0-2.5]; P = 0.022) and in days of admission (0 [0-7] vs 7 [0-12]; P = 0.034), and multiple linear regression analysis confirmed the protective effect of the CDM program (β coefficient -0.785, P = 0.014, R² = 0.219). A CDM program incorporating exercise training for COPD patients without limiting comorbidities after a severe exacerbation achieves high compliance and reduces admissions in the year following the intervention.

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bartolac, S; Letourneau, D; University of Toronto, Toronto, Ontario

    Purpose: Application of process control theory in quality assurance programs promises to allow earlier identification of problems and potentially better quality in delivery than traditional paradigms based primarily on tolerances and action levels. The purpose of this project was to characterize underlying seasonal variations in linear accelerator output that can be used to improve performance or trigger preemptive maintenance. Methods: Runtime plots of daily (6 MV) output data, acquired using in-house ion chamber based devices over three years for fifteen linear accelerators of varying make and model, were reviewed. Shifts in output due to known interventions with the machines were subtracted from the data to model an uncorrected scenario for each linear accelerator. Observable linear trends were also removed from the data prior to evaluation of periodic variations. Results: Runtime plots of output revealed sinusoidal, seasonal variations that were consistent across all units, irrespective of manufacturer, model or age of machine. The average amplitude of the variation was on the order of 1%. Peak and minimum variations were found to correspond to early April and September, respectively. Approximately 48% of output adjustments made over the period examined were potentially avoidable if baseline levels had corresponded to the mean output, rather than to points near a peak or valley. Linear trends were observed for three of the fifteen units, with annual increases in output ranging from 2–3%. Conclusion: Characterization of cyclical seasonal trends allows for better separation of potentially innate accelerator behaviour from other behaviours (e.g. linear trends) that may be better described as true out-of-control states (i.e. non-stochastic deviations from otherwise expected behavior) and could indicate service requirements. Results also pointed to an optimal setpoint for accelerators such that the output of machines is maintained within set tolerances and interventions are required less frequently.

  5. The tailored activity program (TAP) to address behavioral disturbances in frontotemporal dementia: a feasibility and pilot study.

    PubMed

    O'Connor, Claire M; Clemson, Lindy; Brodaty, Henry; Low, Lee-Fay; Jeon, Yun-Hee; Gitlin, Laura N; Piguet, Olivier; Mioshi, Eneida

    2017-10-15

    To explore the feasibility of implementing the Tailored Activity Program with a cohort of people with frontotemporal dementia and their carers (dyads). The Tailored Activity Program is an occupational therapy based intervention that involves working collaboratively with family carers and prescribes personalized activities for behavioral management in people with dementia. Twenty dyads randomized into the study (Tailored Activity Program: n = 9; Control: n = 11) were assessed at baseline and 4 months. Qualitative analyses evaluated the feasibility and acceptability of the program for the frontotemporal dementia cohort, and quantitative analyses (linear mixed model analyses, Spearman's rho correlations) measured the impact of the program on the dyads. The Tailored Activity Program was an acceptable intervention for the frontotemporal dementia dyads. Qualitative analyses identified five themes: "carer perceived benefits", "carer readiness to change", "strategies used by carer to engage person with dementia", "barriers to the Tailored Activity Program uptake/implementation", and "person with dementia engagement". Quantitative outcomes showed an overall reduction of behavioral symptoms (F(18.34) = 8.073, p = 0.011) and maintenance of functional performance in the person with dementia (F(18.03) = 0.375, p = 0.548). This study demonstrates the potential for using an activity-based intervention such as the Tailored Activity Program in frontotemporal dementia. Service providers should recognize that while people with frontotemporal dementia present with challenging issues, tailored therapies may support their function and reduce their behavioral symptoms. Implications for rehabilitation: The Tailored Activity Program is an occupational therapy based intervention that involves prescribing personalized activities for behavioral management in dementia. The Tailored Activity Program is an acceptable and feasible intervention approach to address some of the unique behavioral and functional impairments inherent in frontotemporal dementia.

  6. A FORTRAN technique for correlating a circular environmental variable with a linear physiological variable in the sugar maple.

    PubMed

    Pease, J M; Morselli, M F

    1987-01-01

    This paper deals with a computer program, adapted to a statistical method, for analyzing an unlimited quantity of binary recorded data on an independent circular variable (e.g. wind direction) and a linear variable (e.g. maple sap flow volume). Circular variables cannot be statistically analyzed with linear methods unless they have been transformed. The program calculates a critical quantity, the acrophase angle (PHI, φ0). The technique is adapted from original mathematics [1] and is written in Fortran 77 for easier conversion between computer networks. Correlation analysis can be performed following the program, or regression, which, because of the circular nature of the independent variable, becomes periodic regression. The technique was tested on a file of approximately 4050 data pairs.
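    The underlying periodic-regression computation is easy to restate in modern terms (a Python sketch with synthetic data is given below rather than the paper's FORTRAN 77): the linear variable is regressed on cos(θ) and sin(θ) of the circular variable by ordinary least squares, and the acrophase angle PHI is recovered from the two coefficients with atan2.

```python
# Minimal sketch of periodic regression of a linear variable on a circular variable.
import numpy as np

rng = np.random.default_rng(2)
n = 4050
theta = rng.uniform(0, 2 * np.pi, n)                  # circular variable (e.g., wind direction)
true_phi = np.deg2rad(40.0)
sap_flow = 5.0 + 2.0 * np.cos(theta - true_phi) + rng.normal(0, 0.5, n)  # linear variable

# Linear least squares on the design [1, cos(theta), sin(theta)]
Xd = np.column_stack([np.ones(n), np.cos(theta), np.sin(theta)])
(mesor, b1, b2), *_ = np.linalg.lstsq(Xd, sap_flow, rcond=None)

amplitude = np.hypot(b1, b2)
acrophase = np.degrees(np.arctan2(b2, b1))            # acrophase angle PHI (degrees)
print(f"mesor={mesor:.2f}, amplitude={amplitude:.2f}, acrophase={acrophase:.1f} deg")
```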

  7. Applicational possibilities of linear and non-linear (polynomial) regression and analysis of variance. III. Stability determination of pharmaceutical preparations: stability of diclofenac-sodium in Diclofen injections.

    PubMed

    Arambasić, M B; Jatić-Slavković, D

    2004-05-01

    This paper presents the application of a regression analysis program and a program for comparing linear regressions (a modified method for one-way analysis of variance), written in the BASIC programming language, to the determination of the content of Diclofenac-Sodium (the active ingredient in DIKLOFEN injections, ampoules à 75 mg/3 ml). Stability testing of Diclofenac-Sodium was done by the isothermal method of accelerated aging at 4 different temperatures (30, 40, 50 and 60 degrees C) as a function of time (4 different durations of treatment: 0-155, 0-145, 0-74 and 0-44 days). The decrease in stability, i.e. the decrease in the mean value of the Diclofenac-Sodium content (in %) at different temperatures as a function of time, can be described by a linear dependence. From the values of the regression equations, the times were assessed in which the content of Diclofenac-Sodium (in %) will decrease by 10% of the initial value. These times are as follows: at 30 degrees C, 761.02 days; at 40 degrees C, 397.26 days; at 50 degrees C, 201.96 days; and at 60 degrees C, 58.85 days. The estimated times (in days) in which the mean value of the Diclofenac-Sodium content (in %) will decrease by 10% of the initial value are, as a function of temperature, most suitably described by a 3rd-order parabola. Based on the parameter values which describe the 3rd-order parabola, the time was estimated in which the mean Diclofenac-Sodium content (in %) will fall by 10% of the initial value at average ambient temperatures of 20 degrees C and 25 degrees C. These times are 1409.47 days (20 degrees C) and 1042.39 days (25 degrees C). Based on the value of Fisher's coefficient (F), the comparison of the trends of Diclofenac-Sodium content (in %) under the influence of different temperatures as a function of time shows that, depending on the temperature, there is a statistically very significant difference (P < .05) at 50 degrees C and lower compared with 60 degrees C, a statistically probably significant difference (P > 0.01) at 40 degrees C and lower compared with 50 degrees C, and no statistically significant difference (P > 0.05) at 30 degrees C compared with 40 degrees C.

  8. MODY - calculation of ordered structures by symmetry-adapted functions

    NASA Astrophysics Data System (ADS)

    Białas, Franciszek; Pytlik, Lucjan; Sikora, Wiesława

    2016-01-01

    In this paper we focus on the new version of the computer program MODY for calculations of symmetry-adapted functions based on the theory of groups and representations. Choosing such a functional frame of coordinates for describing ordered structures leads to a minimal number of parameters needed to represent those structures and investigate their properties. The aim of this work is to find those parameters, which are coefficients of a linear combination of the calculated functions, leading to the construction of different types of structure ordering with a given symmetry. A spreadsheet script that simplifies this work has been created and attached to the program.

  9. Program for the solution of multipoint boundary value problems of quasilinear differential equations

    NASA Technical Reports Server (NTRS)

    1973-01-01

    Linear equations are solved by a method of superposition of solutions of a sequence of initial value problems. For nonlinear equations and/or boundary conditions, the solution is iterative, and in each iteration a problem like the linear case is solved. A simple Taylor series expansion is used for the linearization of both nonlinear equations and nonlinear boundary conditions. The perturbation method of solution is used in preference to quasilinearization because of programming ease and smaller storage requirements; experiments indicate that the desired convergence properties exist, although no proof of convergence is given.
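
    As a minimal illustration of the superposition idea for a single linear two-point boundary value problem (not the NASA program itself, which also handles multipoint conditions and the iterative nonlinear case), the Python sketch below solves one particular and one homogeneous initial value problem and combines them linearly so that the right-hand boundary condition is met. The example equation is hypothetical.

        import numpy as np
        from scipy.integrate import solve_ivp

        # Example problem: y'' = -y + x with y(0) = 0 and y(pi/2) = 1.
        def rhs_particular(x, s):      # s = [y, y']; inhomogeneous equation
            return [s[1], -s[0] + x]

        def rhs_homogeneous(x, s):     # associated homogeneous equation
            return [s[1], -s[0]]

        a, b, alpha, beta = 0.0, np.pi / 2, 0.0, 1.0
        xs = np.linspace(a, b, 101)
        yp = solve_ivp(rhs_particular, (a, b), [alpha, 0.0], t_eval=xs, rtol=1e-9).y[0]
        yh = solve_ivp(rhs_homogeneous, (a, b), [0.0, 1.0], t_eval=xs, rtol=1e-9).y[0]

        c = (beta - yp[-1]) / yh[-1]   # coefficient that makes the combination hit y(b) = beta
        y = yp + c * yh                # superposed solution satisfies both boundary conditions
        print(y[0], y[-1])             # approximately (0.0, 1.0)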

  10. Description of a computer program and numerical techniques for developing linear perturbation models from nonlinear systems simulations

    NASA Technical Reports Server (NTRS)

    Dieudonne, J. E.

    1978-01-01

    A numerical technique was developed which generates linear perturbation models from nonlinear aircraft vehicle simulations. The technique is very general and can be applied to simulations of any system that is described by nonlinear differential equations. The computer program used to generate these models is discussed, with emphasis placed on generation of the Jacobian matrices, calculation of the coefficients needed for solving the perturbation model, and generation of the solution of the linear differential equations. An example application of the technique to a nonlinear model of the NASA terminal configured vehicle is included.
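
    The report's program is not available here; the sketch below shows, under simplified assumptions, the central step it describes: numerically building the Jacobian matrices of a nonlinear state equation xdot = f(x, u) by finite differences, which yields the A and B matrices of a linear perturbation model. The pendulum model and all names are hypothetical.

        import numpy as np

        def numerical_jacobians(f, x0, u0, eps=1e-6):
            """Central-difference Jacobians A = df/dx and B = df/du of xdot = f(x, u)
            about the operating point (x0, u0), so that d(xdot) ~= A dx + B du."""
            x0, u0 = np.asarray(x0, float), np.asarray(u0, float)
            n, m = x0.size, u0.size
            A, B = np.zeros((n, n)), np.zeros((n, m))
            for j in range(n):
                dx = np.zeros(n); dx[j] = eps
                A[:, j] = (f(x0 + dx, u0) - f(x0 - dx, u0)) / (2.0 * eps)
            for j in range(m):
                du = np.zeros(m); du[j] = eps
                B[:, j] = (f(x0, u0 + du) - f(x0, u0 - du)) / (2.0 * eps)
            return A, B

        # hypothetical nonlinear model: damped pendulum with a torque input
        def f(x, u):
            return np.array([x[1], -9.81 * np.sin(x[0]) - 0.1 * x[1] + u[0]])

        A, B = numerical_jacobians(f, x0=[0.0, 0.0], u0=[0.0])
        print(A); print(B)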

  11. Galaxy Redshifts from Discrete Optimization of Correlation Functions

    NASA Astrophysics Data System (ADS)

    Lee, Benjamin C. G.; Budavári, Tamás; Basu, Amitabh; Rahman, Mubdi

    2016-12-01

    We propose a new method of constraining the redshifts of individual extragalactic sources based on celestial coordinates and their ensemble statistics. Techniques from integer linear programming (ILP) are utilized to optimize simultaneously for the angular two-point cross- and autocorrelation functions. The formalism introduced here not only transforms the otherwise hopelessly expensive brute-force combinatorial search into a linear system with integer constraints, but is also readily implementable in off-the-shelf solvers. We adopt Gurobi, a commercial optimization solver, and use Python to build the cost function dynamically. The preliminary results on simulated data show potential for future applications to sky surveys by complementing and enhancing photometric redshift estimators. Our approach is the first application of ILP to astronomical analysis.
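
    The paper's actual cost function, built from the correlation-function statistics, is not reproduced in this record. The sketch below only shows the kind of integer linear program the abstract describes, using the Gurobi Python interface (gurobipy, which requires a Gurobi license): binary variables assign each source to one redshift bin, and a linear objective is minimized. The cost matrix here is random and purely illustrative.

        import numpy as np
        import gurobipy as gp
        from gurobipy import GRB

        n, k = 50, 5                                   # sources and candidate redshift bins
        rng = np.random.default_rng(1)
        cost = rng.random((n, k))                      # hypothetical per-assignment cost

        m = gp.Model("redshift-assignment")
        x = m.addVars(n, k, vtype=GRB.BINARY, name="x")              # x[i, j] = 1: source i in bin j
        m.addConstrs((x.sum(i, "*") == 1 for i in range(n)),
                     name="one_bin_per_source")
        m.setObjective(gp.quicksum(cost[i, j] * x[i, j]
                                   for i in range(n) for j in range(k)), GRB.MINIMIZE)
        m.optimize()

        assignment = [max(range(k), key=lambda j: x[i, j].X) for i in range(n)]
        print(assignment[:10])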

  12. Implementing Linear Algebra Related Algorithms on the TI-92+ Calculator.

    ERIC Educational Resources Information Center

    Alexopoulos, John; Abraham, Paul

    2001-01-01

    Demonstrates a less utilized feature of the TI-92+: its natural and powerful programming language. Shows how to implement several linear algebra related algorithms including the Gram-Schmidt process, Least Squares Approximations, Wronskians, Cholesky Decompositions, and Generalized Linear Least Square Approximations with QR Decompositions.…
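
    The Gram-Schmidt process mentioned in the article takes only a few lines; the sketch below is in Python rather than the TI-92+ programming language the article actually uses, and the test matrix is arbitrary.

        import numpy as np

        def gram_schmidt(A, tol=1e-12):
            """Classical Gram-Schmidt: an orthonormal basis for the column space of A."""
            Q = []
            for a in np.asarray(A, float).T:
                v = a.copy()
                for q in Q:
                    v -= (q @ a) * q          # remove the component along each earlier basis vector
                norm = np.linalg.norm(v)
                if norm > tol:                # skip (numerically) dependent columns
                    Q.append(v / norm)
            return np.column_stack(Q)

        A = np.array([[1.0, 1.0, 0.0],
                      [1.0, 0.0, 1.0],
                      [0.0, 1.0, 1.0]])
        Q = gram_schmidt(A)
        print(np.round(Q.T @ Q, 10))          # approximately the identity matrix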

  13. AESOP: An interactive computer program for the design of linear quadratic regulators and Kalman filters

    NASA Technical Reports Server (NTRS)

    Lehtinen, B.; Geyser, L. C.

    1984-01-01

    AESOP is a computer program for use in designing feedback controls and state estimators for linear multivariable systems. AESOP is meant to be used in an interactive manner. Each design task that the program performs is assigned a "function" number. The user accesses these functions either (1) by inputting a list of desired function numbers or (2) by inputting a single function number. In the latter case the choice of the function will in general depend on the results obtained by the previously executed function. The most important of the AESOP functions are those that design linear quadratic regulators and Kalman filters. The user interacts with the program when using these design functions by inputting design weighting parameters and by viewing graphic displays of designed system responses. Supporting functions are provided that obtain system transient and frequency responses, transfer functions, and covariance matrices. The program can also compute open-loop system information such as stability (eigenvalues), eigenvectors, controllability, and observability. The program is written in ANSI-66 FORTRAN for use on an IBM 3033 using TSS 370. Descriptions of all subroutines and results of two test cases are included in the appendixes.
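
    AESOP itself is an interactive FORTRAN program and is not reproduced here; the sketch below illustrates only the core linear-quadratic-regulator computation that such a design function performs, using SciPy's continuous-time algebraic Riccati solver. The double-integrator plant and the weighting matrices are hypothetical design inputs.

        import numpy as np
        from scipy.linalg import solve_continuous_are

        def lqr(A, B, Q, R):
            """Continuous-time LQR: minimize the integral of x'Qx + u'Ru for xdot = Ax + Bu.
            Returns the state-feedback gain K (u = -Kx) and the Riccati solution P."""
            P = solve_continuous_are(A, B, Q, R)
            K = np.linalg.solve(R, B.T @ P)
            return K, P

        # hypothetical double-integrator plant with a scalar input
        A = np.array([[0.0, 1.0], [0.0, 0.0]])
        B = np.array([[0.0], [1.0]])
        Q = np.diag([10.0, 1.0])       # state weighting (a "design weighting parameter")
        R = np.array([[1.0]])          # control weighting
        K, P = lqr(A, B, Q, R)
        print("gain K =", K)
        print("closed-loop eigenvalues:", np.linalg.eigvals(A - B @ K))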

  14. A computer program for converting rectangular coordinates to latitude-longitude coordinates

    USGS Publications Warehouse

    Rutledge, A.T.

    1989-01-01

    A computer program was developed for converting the coordinates of any rectangular grid on a map to coordinates on a grid that is parallel to lines of equal latitude and longitude. Using this program in conjunction with groundwater flow models, the user can extract data and results from models with varying grid orientations and place these data into a grid structure that is oriented parallel to lines of equal latitude and longitude. All cells in the rectangular grid must have equal dimensions, and all cells in the latitude-longitude grid measure one minute by one minute. This program is applicable if the map used shows lines of equal latitude as arcs and lines of equal longitude as straight lines, and it assumes that the Earth's surface can be approximated as a sphere. The program user enters the row number, column number, and latitude and longitude of the midpoint of the cell for three test cells on the rectangular grid. The latitude and longitude of the boundaries of the rectangular grid are also entered. By solving sets of simultaneous linear equations, the program calculates coefficients that are used for making the conversion. As an option in the program, the user may build a groundwater model file based on a grid that is parallel to lines of equal latitude and longitude. The program reads a data file based on the rectangular coordinates and automatically forms the new data file. (USGS)
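
    The USGS program treats latitude lines as arcs on a spherical Earth; the sketch below is a simplified, planar illustration of the "sets of simultaneous linear equations" step only: three test cells determine the coefficients of an affine mapping from (row, column) to (latitude, longitude). The three test cells and their coordinates are hypothetical.

        import numpy as np

        # Three test cells: (row, column) of each cell midpoint and its latitude/longitude.
        rows_cols = np.array([[10.0,  5.0],
                              [10.0, 40.0],
                              [60.0,  5.0]])
        lat_lon   = np.array([[44.950, -72.100],
                              [44.948, -71.650],
                              [44.520, -72.120]])

        # Solving two 3x3 linear systems gives coefficients that convert any (row, column)
        # on the rotated rectangular grid to (latitude, longitude).
        A = np.column_stack([rows_cols, np.ones(3)])       # columns: row, column, 1
        coeff_lat = np.linalg.solve(A, lat_lon[:, 0])      # lat = a*row + b*col + c
        coeff_lon = np.linalg.solve(A, lat_lon[:, 1])      # lon = d*row + e*col + f

        def to_lat_lon(row, col):
            v = np.array([row, col, 1.0])
            return float(v @ coeff_lat), float(v @ coeff_lon)

        print(to_lat_lon(35.0, 20.0))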

  15. Longitudinal Behavioral Effects of a School-Based Fruit and Vegetable Promotion Program

    PubMed Central

    Franko, Debra L.; Thompson, Douglas R.; Power, Thomas J.; Stallings, Virginia A.

    2010-01-01

    Objective This study examined the longitudinal effects of a school-based program on kindergarten and first grade children's fruit and vegetable (F&V) consumption. Methods The program included lunchroom, classroom, school-wide, and family components. The primary dependent variable, F&V consumed at lunch, was assessed using weighed plate waste. Hierarchical linear models were used to analyze the differences between intervention and control groups and to account for repeated measurements. Results Children in the experimental group consumed more F&V (F = 29 g; V = 6 g; 0.43 portions/lunch; 0.28 servings/lunch) at the end of Year 1 compared with children in the control group. At the end of Year 2, children in the experimental group consumed more fruit (21 g; 0.23 portions/lunch; 0.15 servings/lunch), but not more vegetables compared with children in the control group. Conclusions The intervention resulted in increased F&V consumption, with more pronounced and enduring effects for fruits than vegetables. PMID:19439567
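
    The study's data are not available, so the sketch below only illustrates the kind of hierarchical (mixed-effects) linear model the abstract refers to, using simulated data and the statsmodels package: a random intercept per child accounts for the repeated lunch measurements, while fixed effects capture the group-by-time difference. All numbers are simulated and do not reflect the study's results.

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        # Simulated stand-in data: grams of fruit and vegetables eaten per lunch,
        # measured repeatedly for children in an intervention and a control group.
        rng = np.random.default_rng(2)
        n_children, n_visits = 60, 3
        df = pd.DataFrame({
            "child": np.repeat(np.arange(n_children), n_visits),
            "visit": np.tile(np.arange(n_visits), n_children),
            "group": np.repeat(rng.integers(0, 2, n_children), n_visits),  # 1 = intervention
        })
        child_effect = np.repeat(rng.normal(0.0, 10.0, n_children), n_visits)
        df["fv_grams"] = (80.0 + 15.0 * df["group"] * df["visit"]
                          + child_effect + rng.normal(0.0, 12.0, len(df)))

        # Random intercept per child accounts for the repeated measurements.
        model = smf.mixedlm("fv_grams ~ visit * group", df, groups=df["child"])
        print(model.fit().summary())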

  16. Linear systems analysis program, L224(QR). Volume 2: Supplemental system design and maintenance document

    NASA Technical Reports Server (NTRS)

    Heidergott, K. W.

    1979-01-01

    The computer program known as QR is described. Classical control systems analysis and synthesis (root locus, time response, and frequency response) can be performed using this program. Programming details of the QR program are presented.
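
    The QR program itself is not listed in this record; the sketch below shows, with SciPy, two of the classical analyses the abstract mentions (time response and frequency response) for an arbitrary second-order transfer function. Root locus is not included, and the plant is hypothetical.

        import numpy as np
        from scipy import signal

        # Hypothetical plant G(s) = 4 / (s^2 + 2s + 4)
        sys = signal.TransferFunction([4.0], [1.0, 2.0, 4.0])

        t, y = signal.step(sys)                 # unit-step time response
        w, mag, phase = signal.bode(sys)        # frequency response (dB and degrees)

        print(f"peak step response: {y.max():.3f}")
        print(f"gain near w = 2 rad/s: {np.interp(2.0, w, mag):.1f} dB")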

  17. An O(√nL) primal-dual affine scaling algorithm for linear programming

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Siming

    1994-12-31

    We present a new primal-dual affine scaling algorithm for linear programming. The search direction of the algorithm is a combination of the classical affine scaling direction of Dikin and a recent affine scaling direction of Jansen, Roos and Terlaky. The algorithm has an iteration complexity of O(√nL), compared with the O(nL) complexity of the Jansen, Roos and Terlaky algorithm.
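
    The combined primal-dual direction of the paper is not reproduced here; as background only, the sketch below implements the classical (Dikin) primal affine scaling iteration for a linear program in standard form, started from a strictly feasible interior point. The small test problem is arbitrary.

        import numpy as np

        def affine_scaling(A, b, c, x0, iters=50, gamma=0.95):
            """Primal (Dikin) affine scaling for min c'x  s.t.  Ax = b, x > 0.
            b enters only through the requirement that x0 is strictly feasible (A @ x0 == b)."""
            x = np.asarray(x0, float).copy()
            y = None
            for _ in range(iters):
                X2 = np.diag(x ** 2)
                y = np.linalg.solve(A @ X2 @ A.T, A @ X2 @ c)   # dual estimate
                s = c - A.T @ y                                  # reduced costs
                d = -X2 @ s                                      # scaled descent direction
                if np.all(d >= -1e-12):                          # no improving direction left
                    break
                alpha = gamma * np.min(-x[d < 0] / d[d < 0])     # ratio test keeps x > 0
                x = x + alpha * d
            return x, y

        # tiny LP: min -x1 - 2*x2  s.t.  x1 + x2 + x3 = 4,  x2 + x4 = 3,  x >= 0
        A = np.array([[1.0, 1.0, 1.0, 0.0],
                      [0.0, 1.0, 0.0, 1.0]])
        b = np.array([4.0, 3.0])
        c = np.array([-1.0, -2.0, 0.0, 0.0])
        x, y = affine_scaling(A, b, c, x0=np.array([1.0, 1.0, 2.0, 2.0]))
        print(np.round(x, 4), c @ x)   # optimum near x = (1, 3, 0, 0), objective -7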

  18. Using Set Covering with Item Sampling to Analyze the Infeasibility of Linear Programming Test Assembly Models

    ERIC Educational Resources Information Center

    Huitzing, Hiddo A.

    2004-01-01

    This article shows how set covering with item sampling (SCIS) methods can be used in the analysis and preanalysis of linear programming models for test assembly (LPTA). LPTA models can construct tests, fulfilling a set of constraints set by the test assembler. Sometimes, no solution to the LPTA model exists. The model is then said to be…

  19. A Revised Simplex Method for Test Construction Problems. Research Report 90-5.

    ERIC Educational Resources Information Center

    Adema, Jos J.

    Linear programming models with 0-1 variables are useful for the construction of tests from an item bank. Most solution strategies for these models start with solving the relaxed 0-1 linear programming model, allowing the 0-1 variables to take on values between 0 and 1. Then, a 0-1 solution is found by just rounding, optimal rounding, or a…
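
    As a rough illustration of the relaxation-and-rounding strategy this report discusses (not its revised simplex method), the sketch below uses SciPy's LP solver on a toy test-assembly model: the 0-1 selection variables are relaxed to the interval [0, 1], total item information is maximized subject to a test-length constraint and content constraints, and "just rounding" then keeps the items with the largest relaxed values. The item bank is simulated, and the rounded test should still be checked against the constraints, which is exactly where the report's more careful strategies come in.

        import numpy as np
        from scipy.optimize import linprog

        rng = np.random.default_rng(3)
        n_items, test_len = 30, 10
        info = rng.random(n_items)                  # item information at a target ability level
        content = rng.integers(0, 3, n_items)       # content category of each item (0, 1, 2)
        max_per_cat = 4

        # Relaxed model: maximize sum(info * x) with sum(x) = test_len,
        # at most max_per_cat items per category, and 0 <= x <= 1.
        A_eq = np.ones((1, n_items)); b_eq = [test_len]
        A_ub = np.vstack([(content == k).astype(float) for k in range(3)])
        b_ub = [max_per_cat] * 3
        res = linprog(-info, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                      bounds=[(0, 1)] * n_items, method="highs")

        # "Just rounding": keep the test_len items with the largest relaxed values.
        chosen = np.argsort(res.x)[-test_len:]
        print(sorted(chosen.tolist()), round(float(info[chosen].sum()), 3))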

  20. An Interactive Method to Solve Infeasibility in Linear Programming Test Assembling Models

    ERIC Educational Resources Information Center

    Huitzing, Hiddo A.

    2004-01-01

    In optimal assembly of tests from item banks, linear programming (LP) models have proved to be very useful. Assembly by hand has become nearly impossible, but these LP techniques are able to find the best solutions, given the demands and needs of the test to be assembled and the specifics of the item bank from which it is assembled. However,…
